├── compilation_script
│   ├── requirements.txt
│   └── compile.py
├── resources.csv
├── .github
│   └── workflows
│       └── recompile_readme.yaml
├── README.md.template
├── papers.csv
└── README.md
/compilation_script/requirements.txt:
--------------------------------------------------------------------------------
1 | pandas
2 |
--------------------------------------------------------------------------------
/resources.csv:
--------------------------------------------------------------------------------
1 | Name,URL,Description,Type,Topics
--------------------------------------------------------------------------------
/.github/workflows/recompile_readme.yaml:
--------------------------------------------------------------------------------
1 | name: Recompile README.md
2 | on:
3 |   push:
4 |     branches:
5 |       - main
6 |     paths:
7 |       - "resources.csv"
8 |       - "papers.csv"
9 |
10 | jobs:
11 |   build:
12 |     runs-on: ubuntu-latest
13 |     steps:
14 |
15 |       - name: Checkout the repository content
16 |         uses: actions/checkout@v2
17 |
18 |       - name: Use Python 3.9
19 |         uses: actions/setup-python@v2
20 |         with:
21 |           python-version: "3.9"
22 |
23 |       - name: Install the required packages
24 |         working-directory: ./compilation_script
25 |         run: |
26 |           python -m pip install --upgrade pip
27 |           pip install -r requirements.txt
28 |
29 |       - name: Execute the Python script
30 |         working-directory: ./compilation_script
31 |         run: python compile.py
32 |
33 |       - name: Commit the changes
34 |         run: |
35 |           git config --local user.email "action@github.com"
36 |           git config --local user.name "GitHub Action"
37 |           git add -A
38 |           git commit -m "Recompiles README.md" -a
39 |
40 |       - name: Push the commit
41 |         uses: ad-m/github-push-action@v0.6.0
42 |         with:
43 |           github_token: ${{ secrets.GITHUB_TOKEN }}
44 |           branch: main
--------------------------------------------------------------------------------
/README.md.template:
--------------------------------------------------------------------------------
1 | # Awesome Network Protocol Fuzzing
2 |
3 | 
4 |
5 | ---
6 |
7 | - [Description](#description)
8 | - [Labels Indexes](#labels-indexes)
9 | - [By Type](#by-type)
10 | - [By Purpose](#by-purpose)
11 | - [Papers](#papers)
12 | - [Resources](#resources)
13 | - [Contribution](#contribution)
14 | - [Credits](#credits)
15 |
16 | ---
17 |
18 | ## Description
19 |
20 | This repository contains a **list of helpful fuzzing tools and research materials** for network protocols.
21 |
22 | All resources are alphabetically organized and labeled, making it simple to locate any of them by searching for an item from the index across the entire page (with `CTRL+F`). The ones without an attached link are present in the `documents/` folder.
23 |
24 | ## Labels Indexes
25 |
26 | ### By Type
27 |
28 | {type_labels}
29 |
30 | ### By Purpose
31 |
32 | {purpose_labels}
33 |
34 | ## Papers
35 |
36 | | Paper Title | Open Source Project | Abstract | Venue | Publication Date |
37 | | --- | --- | --- | --- | --- |
38 | {papers}
39 |
40 | ## Resources
41 |
42 | {resources}
43 |
44 | ## Contribution
45 |
46 | 1. Edit the `resources.csv` or `papers.csv` file.
47 | 2. Push the changes to the GitHub repository.
48 | 3. Wait for the GitHub Action to automatically recompile `README.md`.
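For example, a new resource can be contributed with the following commands (the entry itself is purely hypothetical, and the remote and branch names assume the repository defaults):

```shell
# Append a hypothetical entry; the columns are Name, URL, Description, Type, Topics.
echo 'ExampleFuzzer,https://example.com/examplefuzzer,A hypothetical example fuzzer,Tool,"MQTT, CoAP"' >> resources.csv

# Commit and push; the "Recompile README.md" workflow then regenerates README.md.
git add resources.csv
git commit -m "Add the ExampleFuzzer resource"
git push origin main
```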
49 |
50 | ## Credits
51 |
52 | The template is inspired by this [repository](https://github.com/CyberReasoningSystem/awesome-binary-analysis).
53 |
--------------------------------------------------------------------------------
/papers.csv:
--------------------------------------------------------------------------------
1 | Name,Open Source,URL,Abstract,Venue,Publication Date
2 | Snapfuzz,,https://arxiv.org/abs/2201.04048,"In recent years, fuzz testing has benefited from increased computational power and important algorithmic advances, leading to systems that have discovered many critical bugs and vulnerabilities in production software. Despite these successes, not all applications can be fuzzed efficiently. In particular, stateful applications such as network protocol implementations are constrained by their low fuzzing throughput and the need to develop fuzzing harnesses that reset their state and isolate their side effects. In this paper, we present SnapFuzz, a novel fuzzing framework for network applications. SnapFuzz offers a robust architecture that transforms slow asynchronous network communication into fast synchronous communication based on UNIX domain sockets, speeds up all file operations by redirecting them to an in-memory filesystem, and removes the need for many fragile modifications, such as configuring time delays or writing cleanup scripts, together with several other improvements. Using SnapFuzz, we fuzzed five popular networking applications: LightFTP, Dnsmasq, LIVE555, TinyDTLS and Dcmqrscp. We report impressive performance speedups of 72.4x, 49.7x, 24.8x, 23.9x, and 8.5x, respectively, with significantly simpler fuzzing harnesses in all cases. Through its performance advantage, SnapFuzz has also found 12 previously-unknown crashes in these applications.",CoRR 2022,2022
3 | EPF,https://github.com/fkie-cad/epf,https://ieeexplore.ieee.org/document/9647801,"Network fuzzing is a complex domain requiring fuzzers to handle highly structured input and communication schemes. In fuzzer development, such protocol-dependent semantics usually cause a focus on applicability: Resulting fuzz engines provide powerful APIs to add new protocols but rarely incorporate algorithmic fuzz improvements like the successful coverage-guidance. This paper aims to combine applicability and well-established algorithms for increased network fuzzing effectiveness. We introduce EPF, a coverage-guided and protocol-aware network fuzzing framework. EPF uses population-based simulated annealing to heuristically schedule packet types during fuzzing. In conjunction with a genetic algorithm that uses coverage metrics as fitness function, the framework steers input generation towards coverage maximization. Users can add protocols by defining packet models and state graphs through a Scapy-powered API. We collect first data in a case study on fuzzing the IEC 60870-5-104 SCADA protocol and compare EPF with AFLNet. Based on a total of 600 CPU days of fuzzing, we measure effectiveness using bug and coverage metrics. We report promising results that a) indicate similar performance to AFLNet without any optimizations and b) point out the potential and shortcomings of our approach.",PST 2021,2021
4 | Nyx-Net,https://github.com/RUB-SysSec/nyx-net,https://arxiv.org/abs/2111.03013,"Coverage-guided fuzz testing ('fuzzing') has become mainstream and we have observed lots of progress in this research area recently. However, it is still challenging to efficiently test network services with existing coverage-guided fuzzing methods. In this paper, we introduce the design and implementation of Nyx-Net, a novel snapshot-based fuzzing approach that can successfully fuzz a wide range of targets spanning servers, clients, games, and even Firefox's Inter-Process Communication (IPC) interface. Compared to state-of-the-art methods, Nyx-Net improves test throughput by up to 300x and coverage found by up to 70%. Additionally, Nyx-Net is able to find crashes in two of ProFuzzBench's targets that no other fuzzer found previously. When using Nyx-Net to play the game Super Mario, Nyx-Net shows speedups of 10-30x compared to existing work. Under some circumstances, Nyx-Net is even able play 'faster than light': solving the level takes less wall-clock time than playing the level perfectly even once. Nyx-Net is able to find previously unknown bugs in servers such as Lighttpd, clients such as MySQL client, and even Firefox's IPC mechanism - demonstrating the strength and versatility of the proposed approach. Lastly, our prototype implementation was awarded a $20.000 bug bounty for enabling fuzzing on previously unfuzzable code in Firefox and solving a long-standing problem at Mozilla.",PST 2021,2021
5 | ProFuzzBench,https://github.com/profuzzbench/profuzzbench,https://arxiv.org/abs/2101.05102,"We present a new benchmark (ProFuzzBench) for stateful fuzzing of network protocols. The benchmark includes a suite of representative open-source network servers for popular protocols, and tools to automate experimentation. We discuss challenges and potential directions for future research based on this benchmark.",ISSTA 2021,2021
6 | AFLNET,https://mboehme.github.io/paper/ICST20.AFLNet.pdf,https://mboehme.github.io/paper/ICST20.AFLNet.pdf,"Server fuzzing is difficult. Unlike simple command-line tools, servers feature a massive state space that can be traversed effectively only with well-defined sequences of input messages. Valid sequences are specified in a protocol. In this paper, we present AFLNET, the first greybox fuzzer for protocol implementations. Unlike existing protocol fuzzers, AFLNET takes a mutational approach and uses state-feedback to guide the fuzzing process. AFLNET is seeded with a corpus of recorded message exchanges between the server and an actual client. No protocol specification or message grammars are required. AFLNET acts as a client and replays variations of the original sequence of messages sent to the server and retains those variations that were effective at increasing the coverage of the code or state space. To identify the server states that are exercised by a message sequence, AFLNET uses the server's response codes. From this feedback, AFLNET identifies progressive regions in the state space, and systematically steers towards such regions. The case studies with AFLNET on two popular protocol implementations demonstrate a substantial performance boost over the state-of the-art. AFLNET discovered two new CVEs which are classified as critical (CVSS score CRITICAL 9.8).",ICST20,2020
7 | Multifuzz,,https://www.semanticscholar.org/paper/MultiFuzz%3A-A-Coverage-Based-Multiparty-Protocol-for-Zeng-Lin/270667ae047643d49f98bbeb04c76a6e3d71e368,"The publish/subscribe model has gained prominence in the Internet of things (IoT) network, and both Message Queue Telemetry Transport (MQTT) and Constrained Application Protocol (CoAP) support it. However, existing coverage-based fuzzers may miss some paths when fuzzing such publish/subscribe protocols, because they implicitly assume that there are only two parties in a protocol, which is not true now since there are three parties, i.e., the publisher, the subscriber and the broker. In this paper, we propose MultiFuzz, a new coverage-based multiparty-protocol fuzzer. First, it embeds multiple-connection information in a single input. Second, it uses a message mutation algorithm to stimulate protocol state transitions, without the need of protocol specifications. Third, it uses a new desockmulti module to feed the network messages into the program under test. desockmulti is similar to desock (Preeny), a tool widely used by the community, but it is specially designed for fuzzing and is 10x faster. We implement MultiFuzz based on AFL, and use it to fuzz two popular projects Eclipse Mosquitto and libCoAP. We reported discovered problems to the projects. In addition, we compare MultiFuzz with AFL and two state-of-the-art fuzzers, MOPT and AFLNET, and find it discovering more paths and crashes",Sensors,2020
8 |
9 |
--------------------------------------------------------------------------------
/compilation_script/compile.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | """Script for regenerating the README.md file."""
3 |
4 | import typing
5 | import urllib.parse
6 |
7 | import pandas
8 |
9 |
10 | def sanitize_text(text: str) -> str:
11 |     """Sanitize the text to be included in the badge URL.
12 |
13 |     Args:
14 |         text (str): Text to sanitize
15 |
16 |     Returns:
17 |         str: Sanitized text
18 |     """
19 |     text = text.replace("-", "--")
20 |
21 |     return urllib.parse.quote(text)
22 |
23 |
24 | def create_badge(title: str, text: str, color: str) -> str:
25 |     """Create a Markdown badge.
26 |
27 |     Args:
28 |         title (str): Title
29 |         text (str): Text
30 |         color (str): Color
31 |
32 |     Returns:
33 |         str: Markdown badge
34 |     """
35 |     title = sanitize_text(title)
36 |     text = sanitize_text(text)
37 |
38 |     return (f"![{title}: {text}]"
39 |             f"(https://img.shields.io/badge/{title}-{text}-{color})")
40 |
41 |
42 | def make_type_label_shield(label: str) -> str:
43 |     """Create a Markdown type badge.
44 |
45 |     Args:
46 |         label (str): Type
47 |
48 |     Returns:
49 |         str: Markdown badge
50 |     """
51 |     return create_badge("Type", label, "lightgrey")
52 |
53 |
54 | def make_purpose_label_shield(label: str) -> str:
55 |     """Create a Markdown purpose badge.
56 |
57 |     Args:
58 |         label (str): Purpose
59 |
60 |     Returns:
61 |         str: Markdown badge
62 |     """
63 |     return create_badge("Purpose", label, "blue")
64 |
65 |
66 | def create_list_of_shields(
67 |     labels: typing.List[str],
68 |     shields_creation_func: typing.Callable[[str], str],
69 |     prefix: typing.Optional[str] = "- ",
70 |     suffix: typing.Optional[str] = "\n",
71 | ) -> str:
72 |     """Create a Markdown list of shields.
73 |
74 |     Args:
75 |         labels (typing.List[str]): Elements
76 |         shields_creation_func (typing.Callable[[str], str]): Function to create
77 |             each individual shield
78 |         prefix (str, optional): Prefix. Defaults to "- ".
79 |         suffix (str, optional): Suffix. Defaults to new line.
80 |
81 |     Returns:
82 |         str: Markdown list of badges
83 |     """
84 |     labels = list(set(labels))
85 |     labels.sort()
86 |
87 |     labels_list = [
88 |         f"{prefix}{shields_creation_func(label)}" for label in labels
89 |     ]
90 |
91 |     if suffix is None:
92 |         suffix = ""
93 |
94 |     return suffix.join(labels_list)
95 |
96 |
97 | def create_resource_item(
98 |     name: str,
99 |     url: str,
100 |     description: str,
101 |     types: typing.Optional[typing.List[str]],
102 |     purpose: typing.Optional[typing.List[str]],
103 | ) -> str:
104 |     """Create a Markdown list element based on the resource information.
105 |
106 |     Args:
107 |         name (str): Name
108 |         url (str): URL
109 |         description (str): Description
110 |         types (typing.Optional[typing.List[str]]): Types labels
111 |         purpose (typing.Optional[typing.List[str]]): Purpose labels
112 |
113 |     Returns:
114 |         str: Markdown list element
115 |     """
116 |     if url is not None:
117 |         title = f"[{name}]({url})"
118 |     else:
119 |         title = name
120 |
121 |     types_labels = (create_list_of_shields(
122 |         types, make_type_label_shield, prefix="", suffix=" ") if types else "")
123 |     purpose_labels = (create_list_of_shields(
124 |         purpose, make_purpose_label_shield, prefix="", suffix=" ")
125 |                       if purpose else "")
126 |
127 |     return f"""\
128 | - **{title}**
129 |   - Description: {description}
130 |   - Type: {types_labels}
131 |   - Purpose: {purpose_labels}
132 | """
133 |
134 |
135 | def create_paper_item(
136 |     name: str,
137 |     url: str,
138 |     open_source: str,
139 |     abstract: str,
140 |     venue: str,
141 |     date: str,
142 | ) -> str:
143 |     """Create a Markdown table row based on the paper information.
144 |
145 |     Args:
146 |         name (str): Name
147 |         url (str): URL
148 |         open_source (str): Open Source project
149 |         abstract (str): Abstract
150 |         venue (str): Venue
151 |         date (str): Publication Date
152 |
153 |     Returns:
154 |         str: Markdown table row
155 |     """
156 |     if url is not None:
157 |         title = f"[{name}]({url})"
158 |     else:
159 |         title = name
160 |
161 |     if open_source is not None:
162 |         open_source_link = f"[Repository]({open_source})"
163 |     else:
164 |         open_source_link = "Not found"
165 |
166 |     return f"""\
167 | | **{title}** | {open_source_link} | <details><summary>Click to see the abstract!</summary>{abstract}</details> | {venue} | {date} |
168 | """
169 |
170 | def read_sorted_resources_as_df() -> pandas.DataFrame:
171 |     """Read and sort the CSV file with resources.
172 |
173 |     Returns:
174 |         pandas.DataFrame: pandas dataframe with resources
175 |     """
176 |     resources_df = pandas.read_csv("../resources.csv")
177 |     resources_df.sort_values(by="Name",
178 |                              key=lambda col: col.str.lower(),
179 |                              inplace=True)
180 |
181 |     return resources_df
182 |
183 | def read_sorted_papers_as_df() -> pandas.DataFrame:
184 |     """Read and sort the CSV file with papers.
185 |
186 |     Returns:
187 |         pandas.DataFrame: pandas dataframe with papers
188 |     """
189 |     papers_df = pandas.read_csv("../papers.csv")
190 |     papers_df.sort_values(by="Name",
191 |                           key=lambda col: col.str.lower(),
192 |                           inplace=True)
193 |
194 |     return papers_df
195 |
196 | def dump_to_readme(papers: str, resources: str, type_labels: str,
197 |                    purpose_labels: str) -> None:
198 |     """Dump the information into README.md.
199 |
200 |     Args:
201 |         papers (str): Markdown table rows of papers
202 |         resources (str): Markdown list of resources
203 |         type_labels (str): Markdown shields for resources types
204 |         purpose_labels (str): Markdown shields for purpose types
205 |     """
206 |     with open("../README.md.template", "r", encoding="utf-8") as template_file:
207 |         template = template_file.read()
208 |
209 |     readme_content = template.format(
210 |         papers=papers,
211 |         resources=resources,
212 |         type_labels=type_labels,
213 |         purpose_labels=purpose_labels,
214 |     )
215 |     with open("../README.md", "w", encoding="utf-8") as readme_file:
216 |         readme_file.write(readme_content)
217 |
218 |
219 | def main() -> None:
220 |     """Run main functionality."""
221 |     resources_df = read_sorted_resources_as_df()
222 |     papers_df = read_sorted_papers_as_df()
223 |
224 |     papers = []
225 |     resources = []
226 |     type_labels = []
227 |     purpose_labels = []
228 |
229 |     for _, row in papers_df.iterrows():
230 |         # Create element
231 |         name = row["Name"]
232 |         url = row["URL"] if not pandas.isna(row["URL"]) else None
233 |         abstract = row["Abstract"]
234 |         venue = row["Venue"]
235 |         date = row["Publication Date"]
236 |         open_source = row["Open Source"] if not pandas.isna(row["Open Source"]) else None
237 |         papers.append(
238 |             create_paper_item(name, url, open_source, abstract, venue, date))
239 |
240 |     for _, row in resources_df.iterrows():
241 |         # Keep track of types
242 |         types = None
243 |         if not pandas.isna(row["Type"]):
244 |             types = row["Type"].split(", ")
245 |             type_labels.extend(types)
246 |
247 |         # Keep track of topics
248 |         purpose = None
249 |         if not pandas.isna(row["Topics"]):
250 |             purpose = row["Topics"].split(", ")
251 |             purpose_labels.extend(purpose)
252 |
253 |         # Create element
254 |         name = row["Name"]
255 |         url = row["URL"] if not pandas.isna(row["URL"]) else None
256 |         description = row["Description"]
257 |         resources.append(
258 |             create_resource_item(name, url, description, types, purpose))
259 |
260 |     inline_papers = "".join(papers)
261 |     inline_resources = "".join(resources)
262 |     type_labels_list = create_list_of_shields(type_labels,
263 |                                               make_type_label_shield)
264 |     purpose_labels_list = create_list_of_shields(purpose_labels,
265 |                                                  make_purpose_label_shield)
266 |
267 |     dump_to_readme(inline_papers, inline_resources, type_labels_list, purpose_labels_list)
268 |
269 |
270 | if __name__ == "__main__":
271 |     main()
272 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Awesome Network Protocol Fuzzing
2 |
3 | 
4 |
5 | ---
6 |
7 | - [Description](#description)
8 | - [Labels Indexes](#labels-indexes)
9 | - [By Type](#by-type)
10 | - [By Purpose](#by-purpose)
11 | - [Papers](#papers)
12 | - [Resources](#resources)
13 | - [Contribution](#contribution)
14 | - [Credits](#credits)
15 |
16 | ---
17 |
18 | ## Description
19 |
20 | This repository contains a **list of helpful fuzzing tools and research materials** for network protocols.
21 |
22 | All resources are alphabetically organized and labeled, making it simple to locate any of them by searching for an item from the index across the entire page (with `CTRL+F`). The ones without an attached link are present in the `documents/` folder.
23 |
24 | ## Labels Indexes
25 |
26 | ### By Type
27 |
28 |
29 |
30 | ### By Purpose
31 |
32 |
33 |
34 | ## Papers
35 |
36 | | Paper Title | Open Source Project | Abstract | Venue | Publication Date |
37 | | --- | --- | --- | --- | --- |
38 | | **[AFLNET](https://mboehme.github.io/paper/ICST20.AFLNet.pdf)** | [Repository](https://mboehme.github.io/paper/ICST20.AFLNet.pdf) | <details><summary>Click to see the abstract!</summary>Server fuzzing is difficult. Unlike simple command-line tools, servers feature a massive state space that can be traversed effectively only with well-defined sequences of input messages. Valid sequences are specified in a protocol. In this paper, we present AFLNET, the first greybox fuzzer for protocol implementations. Unlike existing protocol fuzzers, AFLNET takes a mutational approach and uses state-feedback to guide the fuzzing process. AFLNET is seeded with a corpus of recorded message exchanges between the server and an actual client. No protocol specification or message grammars are required. AFLNET acts as a client and replays variations of the original sequence of messages sent to the server and retains those variations that were effective at increasing the coverage of the code or state space. To identify the server states that are exercised by a message sequence, AFLNET uses the server's response codes. From this feedback, AFLNET identifies progressive regions in the state space, and systematically steers towards such regions. The case studies with AFLNET on two popular protocol implementations demonstrate a substantial performance boost over the state-of the-art. AFLNET discovered two new CVEs which are classified as critical (CVSS score CRITICAL 9.8).</details> | ICST20 | 2020 |
39 | | **[EPF](https://ieeexplore.ieee.org/document/9647801)** | [Repository](https://github.com/fkie-cad/epf) | <details><summary>Click to see the abstract!</summary>Network fuzzing is a complex domain requiring fuzzers to handle highly structured input and communication schemes. In fuzzer development, such protocol-dependent semantics usually cause a focus on applicability: Resulting fuzz engines provide powerful APIs to add new protocols but rarely incorporate algorithmic fuzz improvements like the successful coverage-guidance. This paper aims to combine applicability and well-established algorithms for increased network fuzzing effectiveness. We introduce EPF, a coverage-guided and protocol-aware network fuzzing framework. EPF uses population-based simulated annealing to heuristically schedule packet types during fuzzing. In conjunction with a genetic algorithm that uses coverage metrics as fitness function, the framework steers input generation towards coverage maximization. Users can add protocols by defining packet models and state graphs through a Scapy-powered API. We collect first data in a case study on fuzzing the IEC 60870-5-104 SCADA protocol and compare EPF with AFLNet. Based on a total of 600 CPU days of fuzzing, we measure effectiveness using bug and coverage metrics. We report promising results that a) indicate similar performance to AFLNet without any optimizations and b) point out the potential and shortcomings of our approach.</details> | PST 2021 | 2021 |
40 | | **[Multifuzz](https://www.semanticscholar.org/paper/MultiFuzz%3A-A-Coverage-Based-Multiparty-Protocol-for-Zeng-Lin/270667ae047643d49f98bbeb04c76a6e3d71e368)** | Not found | <details><summary>Click to see the abstract!</summary>The publish/subscribe model has gained prominence in the Internet of things (IoT) network, and both Message Queue Telemetry Transport (MQTT) and Constrained Application Protocol (CoAP) support it. However, existing coverage-based fuzzers may miss some paths when fuzzing such publish/subscribe protocols, because they implicitly assume that there are only two parties in a protocol, which is not true now since there are three parties, i.e., the publisher, the subscriber and the broker. In this paper, we propose MultiFuzz, a new coverage-based multiparty-protocol fuzzer. First, it embeds multiple-connection information in a single input. Second, it uses a message mutation algorithm to stimulate protocol state transitions, without the need of protocol specifications. Third, it uses a new desockmulti module to feed the network messages into the program under test. desockmulti is similar to desock (Preeny), a tool widely used by the community, but it is specially designed for fuzzing and is 10x faster. We implement MultiFuzz based on AFL, and use it to fuzz two popular projects Eclipse Mosquitto and libCoAP. We reported discovered problems to the projects. In addition, we compare MultiFuzz with AFL and two state-of-the-art fuzzers, MOPT and AFLNET, and find it discovering more paths and crashes</details> | Sensors | 2020 |
41 | | **[Nyx-Net](https://arxiv.org/abs/2111.03013)** | [Repository](https://github.com/RUB-SysSec/nyx-net) | <details><summary>Click to see the abstract!</summary>Coverage-guided fuzz testing ('fuzzing') has become mainstream and we have observed lots of progress in this research area recently. However, it is still challenging to efficiently test network services with existing coverage-guided fuzzing methods. In this paper, we introduce the design and implementation of Nyx-Net, a novel snapshot-based fuzzing approach that can successfully fuzz a wide range of targets spanning servers, clients, games, and even Firefox's Inter-Process Communication (IPC) interface. Compared to state-of-the-art methods, Nyx-Net improves test throughput by up to 300x and coverage found by up to 70%. Additionally, Nyx-Net is able to find crashes in two of ProFuzzBench's targets that no other fuzzer found previously. When using Nyx-Net to play the game Super Mario, Nyx-Net shows speedups of 10-30x compared to existing work. Under some circumstances, Nyx-Net is even able play 'faster than light': solving the level takes less wall-clock time than playing the level perfectly even once. Nyx-Net is able to find previously unknown bugs in servers such as Lighttpd, clients such as MySQL client, and even Firefox's IPC mechanism - demonstrating the strength and versatility of the proposed approach. Lastly, our prototype implementation was awarded a $20.000 bug bounty for enabling fuzzing on previously unfuzzable code in Firefox and solving a long-standing problem at Mozilla.</details> | PST 2021 | 2021 |
42 | | **[ProFuzzBench](https://arxiv.org/abs/2101.05102)** | [Repository](https://github.com/profuzzbench/profuzzbench) | <details><summary>Click to see the abstract!</summary>We present a new benchmark (ProFuzzBench) for stateful fuzzing of network protocols. The benchmark includes a suite of representative open-source network servers for popular protocols, and tools to automate experimentation. We discuss challenges and potential directions for future research based on this benchmark.</details> | ISSTA 2021 | 2021 |
43 | | **[Snapfuzz](https://arxiv.org/abs/2201.04048)** | Not found | <details><summary>Click to see the abstract!</summary>In recent years, fuzz testing has benefited from increased computational power and important algorithmic advances, leading to systems that have discovered many critical bugs and vulnerabilities in production software. Despite these successes, not all applications can be fuzzed efficiently. In particular, stateful applications such as network protocol implementations are constrained by their low fuzzing throughput and the need to develop fuzzing harnesses that reset their state and isolate their side effects. In this paper, we present SnapFuzz, a novel fuzzing framework for network applications. SnapFuzz offers a robust architecture that transforms slow asynchronous network communication into fast synchronous communication based on UNIX domain sockets, speeds up all file operations by redirecting them to an in-memory filesystem, and removes the need for many fragile modifications, such as configuring time delays or writing cleanup scripts, together with several other improvements. Using SnapFuzz, we fuzzed five popular networking applications: LightFTP, Dnsmasq, LIVE555, TinyDTLS and Dcmqrscp. We report impressive performance speedups of 72.4x, 49.7x, 24.8x, 23.9x, and 8.5x, respectively, with significantly simpler fuzzing harnesses in all cases. Through its performance advantage, SnapFuzz has also found 12 previously-unknown crashes in these applications.</details> | CoRR 2022 | 2022 |
44 |
45 |
46 | ## Resources
47 |
48 |
49 |
50 | ## Contribution
51 |
52 | 1. Edit the `resources.csv` or `papers.csv` file.
53 | 2. Push the changes to the GitHub repository.
54 | 3. Wait for the GitHub Action to automatically recompile `README.md`.
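For example, a new resource can be contributed with the following commands (the entry itself is purely hypothetical, and the remote and branch names assume the repository defaults):

```shell
# Append a hypothetical entry; the columns are Name, URL, Description, Type, Topics.
echo 'ExampleFuzzer,https://example.com/examplefuzzer,A hypothetical example fuzzer,Tool,"MQTT, CoAP"' >> resources.csv

# Commit and push; the "Recompile README.md" workflow then regenerates README.md.
git add resources.csv
git commit -m "Add the ExampleFuzzer resource"
git push origin main
```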
55 |
56 | ## Credits
57 |
58 | The template is inspired by this [repository](https://github.com/CyberReasoningSystem/awesome-binary-analysis).
59 |
--------------------------------------------------------------------------------