├── LICENSE ├── README.md ├── afl-consolidate ├── afl-pause ├── afl-pcmin ├── afl-pollenate ├── afl-resume └── afl-stop /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright (c) 2015 Ben Nagy. All rights reserved. 2 | 3 | Redistribution and use in source and binary forms, with or without 4 | modification, are permitted provided that the following conditions are met: 5 | 6 | 1. Redistributions of source code must retain the above copyright notice, this 7 | list of conditions and the following disclaimer. 8 | 9 | 2. Redistributions in binary form must reproduce the above copyright notice, 10 | this list of conditions and the following disclaimer in the documentation 11 | and/or other materials provided with the distribution. 12 | 13 | 3. All advertising materials mentioning features or use of this software must 14 | display the following acknowledgement: 15 | "This code is no longer GPL compatible. Suck it." 16 | 17 | 4. Neither the name of the copyright holder nor the names of its contributors 18 | may be used to endorse or promote products derived from this software without 19 | specific prior written permission. 20 | 21 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER "AS IS" AND ANY EXPRESS OR 22 | IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 23 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO 24 | EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, 25 | INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 26 | LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR 27 | PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF 28 | LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING 29 | NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, 30 | EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
31 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | afl-trivia 2 | ======= 3 | 4 | ## About 5 | 6 | A small collection of scripts that were once gists. 7 | 8 | ### `afl-pause` & `afl-resume` 9 | 10 | Pause and resume a set of running fuzzers using SIGSTOP / SIGCONT. 11 | 12 | ### `afl-consolidate` 13 | 14 | Consolidate and de-dup all queue and crash files from a set of fuzzers. 15 | 16 | ### `afl-pollenate` 17 | 18 | Pollenate a sync directory between groups of fuzzers running against different 19 | targets. Useful when you are fuzzing eg three different PDF rendering engines. 20 | 21 | ### `afl-pcmin` 22 | 23 | Small modifications to `afl-cmin` to use the GNU parallel tool. Parallelises 24 | the initial tracing and some of the sorting. Also supports clobbering an 25 | existing output directory. 26 | 27 | ## TODO 28 | 29 | - Work out how to parallelise the final selection phase (step 5) in afl-pcmin 30 | 31 | ## Contributing 32 | 33 | * Fork and send a pull request 34 | * Report issues 35 | 36 | ## License & Acknowledgements 37 | 38 | `afl-consolidate` and `afl-pollenate` are released under a permissive but 39 | non-GPL compatible license (based on the 4-clause BSD license). See 40 | LICENSE file for details. I'm not a fan of the GPL. 41 | 42 | The other tools are modified versions from the afl source, so they remain 43 | (c) Google Inc and are licensed under the Apache License 2.0. 44 | 45 | -------------------------------------------------------------------------------- /afl-consolidate: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env ruby 2 | 3 | # (c) Ben Nagy, 2015, All Rights Reserved 4 | # This software is licensed under a permissive but non-GPL compatible license, 5 | # found in the accompanying LICENSE file. 
6 | 7 | # Consolidate all queue and crash files from a group of AFL fuzzers into a 8 | # single output directory. Files are named according to their SHA1 sums, which 9 | # inherently removes duplicates. Optionally, an extension can be added, to 10 | # save you the trouble of running rename. 11 | # 12 | # This application assumes you are running your fuzzer work directories from a 13 | # single root directory. If you launch your fuzzers with 14 | # https://github.com/bnagy/afl-launch, this is already done for you. 15 | 16 | require 'fileutils' 17 | require 'digest/sha1' 18 | 19 | def die_usage 20 | $stderr.puts "Usage: #{$0} /path/to/fuzzing/root /path/to/output [extension]" 21 | exit(1) 22 | end 23 | 24 | class Collision < StandardError; end 25 | 26 | def shacp fn, dst, extension 27 | hex = Digest::SHA1.hexdigest File.binread(fn) 28 | target = File.join dst, hex 29 | target << extension 30 | if File.file? target 31 | raise Collision 32 | end 33 | FileUtils.cp fn, target 34 | end 35 | 36 | if ARGV.size < 2 37 | die_usage 38 | end 39 | 40 | unless File.directory? ARGV[0] 41 | warn "Bad directory #{ARGV[0]}" 42 | die_usage 43 | end 44 | 45 | begin 46 | FileUtils.mkdir_p ARGV[1] 47 | rescue 48 | warn "Bad output directory #{ARGV[1]}" 49 | die_usage 50 | end 51 | 52 | fuzz_root = File.expand_path ARGV[0] 53 | out_dir = File.expand_path ARGV[1] 54 | copied = 0 55 | duplicates = 0 56 | extension = "" 57 | if ARGV[2] 58 | # strip any leading dot from the user extension string - we only want one 59 | extension = '.' << ARGV[2].sub( /^\./, '') 60 | end 61 | 62 | fuzzer_dirs = Dir["#{File.join(fuzz_root, '*')}"].select {|e| File.directory? e} 63 | 64 | # Make sure this is really an AFL fuzz root. All dirs should have a queue/ 65 | # subdirectory 66 | unless (no_queue = fuzzer_dirs.reject {|fd| File.directory? File.join(fd, 'queue') }).empty? 67 | warn "No queue dir in #{no_queue.first} - aborting."
68 | die_usage 69 | end 70 | 71 | fuzzer_dirs.each {|fd| 72 | 73 | warn "Processing #{fd}...\n" 74 | 75 | # get files from the queue 76 | Dir[File.join(fd, 'queue', '*')].each {|fn| 77 | # This skips dot dirs by default, so we don't have to worry about .state/ 78 | next unless File.file? fn 79 | begin 80 | shacp fn, out_dir, extension 81 | copied += 1 82 | rescue Collision 83 | duplicates += 1 84 | rescue 85 | fail "Error while copying #{fn} - #{$!}" 86 | end 87 | } 88 | 89 | # Now get all the crash dirs 90 | Dir[File.join(fd, 'crash*')].each {|cd| 91 | next unless File.directory? cd 92 | Dir[File.join(cd, '*')].each {|fn| 93 | next unless File.file? fn 94 | begin 95 | shacp fn, out_dir, extension 96 | copied += 1 97 | rescue Collision 98 | duplicates += 1 99 | rescue 100 | fail "Error while copying #{fn} - #{$!}" 101 | end 102 | } 103 | } 104 | 105 | } 106 | 107 | warn "#{copied} copied, #{duplicates} duplicates." 108 | -------------------------------------------------------------------------------- /afl-pause: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # american fuzzy lop - pause a set of fuzzers 4 | # -------------------------------------- 5 | # 6 | # By @rantyben, based on afl-whatsup, which is: 7 | # Written and maintained by Michal Zalewski 8 | # 9 | # Copyright 2015 Google Inc. All rights reserved. 10 | # 11 | # Licensed under the Apache License, Version 2.0 (the "License"); 12 | # you may not use this file except in compliance with the License. 
13 | # You may obtain a copy of the License at: 14 | # 15 | # http://www.apache.org/licenses/LICENSE-2.0 16 | # 17 | # This pauses all live fuzzers in the given output directory using 18 | # SIGSTOP 19 | 20 | echo "afl-pause - pause fuzzers" 21 | echo 22 | 23 | DIR="$1" 24 | 25 | if [ "$DIR" = "" ]; then 26 | 27 | echo "Usage: $0 afl_sync_dir" 1>&2 28 | echo 1>&2 29 | exit 1 30 | 31 | fi 32 | 33 | cd "$DIR" || exit 1 34 | 35 | if [ -d queue ]; then 36 | 37 | echo "[-] Error: parameter is an individual output directory, not a sync dir." 1>&2 38 | exit 1 39 | 40 | fi 41 | 42 | TMP=`mktemp -t .afl-whatsup-XXXXXXXX` || exit 1 43 | 44 | ALIVE_CNT=0 45 | DEAD_CNT=0 46 | 47 | for i in `find . -maxdepth 2 -iname fuzzer_stats`; do 48 | 49 | sed 's/^command_line.*$/_skip:1/;s/[ ]*:[ ]*/="/;s/$/"/' "$i" >"$TMP" 50 | . "$TMP" 51 | 52 | if ! kill -0 "$fuzzer_pid" 2>/dev/null; then 53 | echo "Instance is dead or running remotely, skipping." 54 | echo 55 | DEAD_CNT=$((DEAD_CNT + 1)) 56 | continue 57 | fi 58 | 59 | ALIVE_CNT=$((ALIVE_CNT + 1)) 60 | 61 | if ! kill -STOP "$fuzzer_pid" 2>/dev/null; then 62 | echo "Unable to pause fuzzer pid $fuzzer_pid, aborting." 63 | echo 64 | exit 1 65 | fi 66 | 67 | done 68 | 69 | rm -f "$TMP" 70 | 71 | echo "Fuzzers paused: $ALIVE_CNT" 72 | 73 | if [ ! "$DEAD_CNT" = "0" ]; then 74 | echo "Dead or remote: $DEAD_CNT (excluded from stats)" 75 | fi 76 | 77 | exit 0 78 | -------------------------------------------------------------------------------- /afl-pcmin: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # american fuzzy lop - corpus minimization tool 4 | # --------------------------------------------- 5 | # 6 | # Written and maintained by Michal Zalewski 7 | # (Modified to support GNU parallel by @rantyben) 8 | # 9 | # Copyright 2014, 2015 Google Inc. All rights reserved. 
10 | # 11 | # Licensed under the Apache License, Version 2.0 (the "License"); 12 | # you may not use this file except in compliance with the License. 13 | # You may obtain a copy of the License at: 14 | # 15 | # http://www.apache.org/licenses/LICENSE-2.0 16 | # 17 | # This tool tries to find the smallest subset of files in the input directory 18 | # that still trigger the full range of instrumentation data points seen in 19 | # the starting corpus. This has two uses: 20 | # 21 | # - Screening large corpora of input files before using them as a seed for 22 | # afl-fuzz. The tool effectively rejects functionally redundant 23 | # files and will likely leave you with a much smaller set. 24 | # 25 | # (In this case, you probably also want to consider running afl-tmin on 26 | # the individual files to reduce their size.) 27 | # 28 | # - Minimizing the corpus generated organically by afl-fuzz, perhaps when 29 | # planning to feed it to more resource-intensive tools. The tool achieves 30 | # this by removing all entries that used to trigger unique behaviors in the 31 | # past, but have been made obsolete by later finds. 32 | # 33 | # Note that the tool doesn't modify the files themselves. For that, you want 34 | # afl-tmin. 35 | # 36 | # This script must use bash because other shells may have hardcoded limits on 37 | # array sizes. 38 | # 39 | 40 | echo "corpus minimization tool for afl-fuzz by <lcamtuf@google.com>" 41 | echo 42 | 43 | ######### 44 | # SETUP # 45 | ######### 46 | 47 | # Process command-line options...
48 | 49 | MEM_LIMIT=100 50 | TIMEOUT=none 51 | 52 | unset IN_DIR OUT_DIR STDIN_FILE EXTRA_PAR MEM_LIMIT_GIVEN \ 53 | AFL_CMIN_CRASHES_ONLY AFL_CMIN_ALLOW_ANY QEMU_MODE CLOBBER 54 | 55 | while getopts "+i:o:f:m:t:eXQC" opt; do 56 | 57 | case "$opt" in 58 | 59 | "i") 60 | IN_DIR="$OPTARG" 61 | ;; 62 | 63 | "o") 64 | OUT_DIR="$OPTARG" 65 | ;; 66 | "f") 67 | STDIN_FILE="$OPTARG" 68 | ;; 69 | "m") 70 | MEM_LIMIT="$OPTARG" 71 | MEM_LIMIT_GIVEN=1 72 | ;; 73 | "t") 74 | TIMEOUT="$OPTARG" 75 | ;; 76 | "e") 77 | EXTRA_PAR="$EXTRA_PAR -e" 78 | ;; 79 | "X") 80 | CLOBBER=1 81 | ;; 82 | "C") 83 | export AFL_CMIN_CRASHES_ONLY=1 84 | ;; 85 | "Q") 86 | EXTRA_PAR="$EXTRA_PAR -Q" 87 | test "$MEM_LIMIT_GIVEN" = "" && MEM_LIMIT=250 88 | QEMU_MODE=1 89 | ;; 90 | "?") 91 | exit 1 92 | ;; 93 | 94 | esac 95 | 96 | done 97 | 98 | shift $((OPTIND-1)) 99 | 100 | TARGET_BIN="$1" 101 | 102 | if [ "$TARGET_BIN" = "" -o "$IN_DIR" = "" -o "$OUT_DIR" = "" ]; then 103 | 104 | cat 1>&2 <<_EOF_ 105 | Usage: $0 [ options ] -- /path/to/target_app [ ... ] 106 | Required parameters: 107 | -i dir - input directory with the starting corpus 108 | -o dir - output directory for minimized files 109 | Execution control settings: 110 | -f file - location read by the fuzzed program (stdin) 111 | -m megs - memory limit for child process ($MEM_LIMIT MB) 112 | -t msec - run time limit for child process (none) 113 | -Q - use binary-only instrumentation (QEMU mode) 114 | -X - clobber the output directory without confirmation 115 | Minimization settings: 116 | -C - keep crashing inputs, reject everything else 117 | -e - solve for edge coverage only, ignore hit counts 118 | For additional tips, please consult docs/README. 119 | _EOF_ 120 | exit 1 121 | fi 122 | 123 | # Do a sanity check to discourage the use of /tmp, since we can't really 124 | # handle this safely from a shell script. 125 | 126 | echo "$IN_DIR" | grep -qE '^(/var)?/tmp/' 127 | T1="$?" 
128 | 129 | echo "$TARGET_BIN" | grep -qE '^(/var)?/tmp/' 130 | T2="$?" 131 | 132 | echo "$OUT_DIR" | grep -qE '^(/var)?/tmp/' 133 | T3="$?" 134 | 135 | echo "$STDIN_FILE" | grep -qE '^(/var)?/tmp/' 136 | T4="$?" 137 | 138 | echo "$PWD" | grep -qE '^(/var)?/tmp/' 139 | T5="$?" 140 | 141 | if [ "$T1" = "0" -o "$T2" = "0" -o "$T3" = "0" -o "$T4" = "0" -o "$T5" = "0" ]; then 142 | echo "[-] Error: do not use this script in /tmp or /var/tmp." 1>&2 143 | exit 1 144 | fi 145 | 146 | # If @@ is specified, but there's no -f, let's come up with a temporary input 147 | # file name. 148 | 149 | TRACE_DIR="$OUT_DIR/.traces" 150 | 151 | if [ "$STDIN_FILE" = "" ]; then 152 | 153 | if echo "$*" | grep -qF '@@'; then 154 | STDIN_FILE="$TRACE_DIR/.cur_input" 155 | fi 156 | 157 | fi 158 | 159 | # Check for obvious errors. 160 | 161 | if [ ! "$MEM_LIMIT" = "none" ]; then 162 | 163 | if [ "$MEM_LIMIT" -lt "5" ]; then 164 | echo "[-] Error: dangerously low memory limit." 1>&2 165 | exit 1 166 | fi 167 | 168 | fi 169 | 170 | if [ ! "$TIMEOUT" = "none" ]; then 171 | 172 | if [ "$TIMEOUT" -lt "10" ]; then 173 | echo "[-] Error: dangerously low timeout." 1>&2 174 | exit 1 175 | fi 176 | 177 | fi 178 | 179 | if [ ! -f "$TARGET_BIN" -o ! -x "$TARGET_BIN" ]; then 180 | 181 | TNEW="`which "$TARGET_BIN" 2>/dev/null`" 182 | 183 | if [ ! -f "$TNEW" -o ! -x "$TNEW" ]; then 184 | echo "[-] Error: binary '$TARGET_BIN' not found or not executable." 1>&2 185 | exit 1 186 | fi 187 | 188 | TARGET_BIN="$TNEW" 189 | 190 | fi 191 | 192 | if [ "$AFL_SKIP_BIN_CHECK" = "" -a "$QEMU_MODE" = "" ]; then 193 | 194 | if ! grep -qF "__AFL_SHM_ID" "$TARGET_BIN"; then 195 | echo "[-] Error: binary '$TARGET_BIN' doesn't appear to be instrumented." 1>&2 196 | exit 1 197 | fi 198 | 199 | fi 200 | 201 | if [ ! -d "$IN_DIR" ]; then 202 | echo "[-] Error: directory '$IN_DIR' not found." 
1>&2 203 | exit 1 204 | fi 205 | 206 | test -d "$IN_DIR/queue" && IN_DIR="$IN_DIR/queue" 207 | 208 | #find "$OUT_DIR" -name 'id[:_]*' -maxdepth 1 -exec rm -- {} \; 2>/dev/null 209 | rm -rf "$TRACE_DIR" 2>/dev/null 210 | 211 | if [ "$CLOBBER" = "" ]; then 212 | rmdir "$OUT_DIR" 2>/dev/null 213 | 214 | if [ -d "$OUT_DIR" ]; then 215 | echo "[-] Error: directory '$OUT_DIR' exists and is not empty - delete it first." 1>&2 216 | exit 1 217 | fi 218 | else 219 | echo "[*] Clobber mode - deleting -o target '$OUT_DIR' ..." 1>&2 220 | rm -rf "$OUT_DIR" 2>/dev/null 221 | fi 222 | 223 | mkdir -m 700 -p "$TRACE_DIR" || exit 1 224 | 225 | if [ ! "$STDIN_FILE" = "" ]; then 226 | rm -f "$STDIN_FILE" || exit 1 227 | touch "$STDIN_FILE" || exit 1 228 | fi 229 | 230 | if [ "$AFL_PATH" = "" ]; then 231 | SHOWMAP="${0%/afl-pcmin}/afl-showmap" 232 | else 233 | SHOWMAP="$AFL_PATH/afl-showmap" 234 | fi 235 | 236 | if [ ! -x "$SHOWMAP" ]; then 237 | echo "[-] Error: can't find 'afl-showmap' - please set AFL_PATH." 1>&2 238 | rm -rf "$TRACE_DIR" 239 | exit 1 240 | fi 241 | 242 | IN_COUNT=$((`ls -- "$IN_DIR" 2>/dev/null | wc -l`)) 243 | 244 | if [ "$IN_COUNT" = "0" ]; then 245 | echo "No inputs in the target directory - nothing to be done." 246 | rm -rf "$TRACE_DIR" 247 | exit 1 248 | fi 249 | 250 | FIRST_FILE=`ls "$IN_DIR" | head -1` 251 | 252 | if ln "$IN_DIR/$FIRST_FILE" "$TRACE_DIR/.link_test" 2>/dev/null; then 253 | CP_TOOL=ln 254 | else 255 | CP_TOOL=cp 256 | fi 257 | 258 | # Make sure that we can actually get anything out of afl-showmap before we 259 | # waste too much time. 260 | 261 | echo "[*] Testing the target binary..."
262 | 263 | if [ "$STDIN_FILE" = "" ]; then 264 | 265 | AFL_CMIN_ALLOW_ANY=1 "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/.run_test" -Z $EXTRA_PAR -- "$@" <"$IN_DIR/$FIRST_FILE" 266 | 267 | else 268 | 269 | cp "$IN_DIR/$FIRST_FILE" "$STDIN_FILE" 270 | AFL_CMIN_ALLOW_ANY=1 "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/.run_test" -Z $EXTRA_PAR -A "$STDIN_FILE" -- "$@" </dev/null 271 | 272 | fi 273 | 274 | FIRST_COUNT=$((`grep -c . "$TRACE_DIR/.run_test"`)) 275 | 276 | if [ "$FIRST_COUNT" -gt "0" ]; then 277 | 278 | echo "[+] OK, $FIRST_COUNT tuples recorded." 279 | 280 | else 281 | 282 | echo "[-] Error: no instrumentation output detected (perhaps crash or timeout)." 1>&2 283 | test "$AFL_KEEP_TRACES" = "" && rm -rf "$TRACE_DIR" 284 | exit 1 285 | 286 | fi 287 | 288 | # Let's roll! 289 | 290 | ############################# 291 | # STEP 1: COLLECTING TRACES # 292 | ############################# 293 | 294 | echo "[*] Obtaining traces for input files in '$IN_DIR'..." 295 | # 296 | #( 297 | # 298 | # CUR=0 299 | # 300 | # if [ "$STDIN_FILE" = "" ]; then 301 | # 302 | # while read -r fn; do 303 | # 304 | # CUR=$((CUR+1)) 305 | # printf "\\r Processing file $CUR/$IN_COUNT... " 306 | # 307 | # "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/$fn" -Z $EXTRA_PAR -- "$@" <"$IN_DIR/$fn" 308 | # 309 | # done < <(ls "$IN_DIR") 310 | # 311 | # else 312 | # 313 | # while read -r fn; do 314 | # 315 | # CUR=$((CUR+1)) 316 | # printf "\\r Processing file $CUR/$IN_COUNT... " 317 | # 318 | # cp "$IN_DIR/$fn" "$STDIN_FILE" 319 | # 320 | # "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/$fn" -Z $EXTRA_PAR -A "$STDIN_FILE" -- "$@" </dev/null 321 | # 322 | # done < <(ls "$IN_DIR") 323 | # 324 | # fi 325 | # 326 | #) 327 | 328 | if [ "$STDIN_FILE" = "" ]; then 329 | 330 | find "$IN_DIR" -maxdepth 1 -type f -print0 | parallel -0 "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/{/}" -Z $EXTRA_PAR -- "$@" "<" {} 331 | 332 | else 333 | 334 | # The -f / @@ case feeds every run through one shared input file, so it 335 | # has to stay serial. 336 | while read -r fn; do 337 | cp "$IN_DIR/$fn" "$STDIN_FILE" 338 | "$SHOWMAP" -m "$MEM_LIMIT" -t "$TIMEOUT" -o "$TRACE_DIR/$fn" -Z $EXTRA_PAR -A "$STDIN_FILE" -- "$@" </dev/null 339 | done < <(ls "$IN_DIR") 340 | 341 | fi 342 | 343 | ########################## 344 | # STEP 2: SORTING TUPLES # 345 | ########################## 346 | 347 | # With this out of the way, we sort all tuples by popularity across all 348 | # datasets. The reasoning here is that we won't be able to avoid the files 349 | # that trigger unique tuples anyway, so we will want to start with them and 350 | # see what's left. 351 | 352 | #ls "$IN_DIR" | sed "s#^#$TRACE_DIR/#" | tr '\n' '\0' | xargs -0 -n 1 cat | sort | uniq -c | sort -n >"$TRACE_DIR/.all_uniq" 352 | 352 | find "$IN_DIR" -maxdepth 1 -type f -print0 | parallel -0 cat "$TRACE_DIR"/{/} | sort | uniq -c | sort -n >"$TRACE_DIR/.all_uniq" 353 | 354 | TUPLE_COUNT=$((`grep -c . "$TRACE_DIR/.all_uniq"`)) 355 | 356 | echo "[+] Found $TUPLE_COUNT unique tuples across $IN_COUNT files." 357 | 358 | ##################################### 359 | # STEP 3: SELECTING CANDIDATE FILES # 360 | ##################################### 361 | 362 | # The next step is to find the best candidate for each tuple. The "best" 363 | # part is understood simply as the smallest input that includes a particular 364 | # tuple in its trace.
Empirical evidence suggests that this produces smaller 365 | # datasets than more involved algorithms that could be still pulled off in 366 | # a shell script. 367 | 368 | echo "[*] Finding best candidates for each tuple..." 369 | # 370 | #CUR=0 371 | # 372 | #while read -r fn; do 373 | # 374 | # CUR=$((CUR+1)) 375 | # printf "\\r Processing file $CUR/$IN_COUNT... " 376 | # 377 | # sed "s#\$# $fn#" "$TRACE_DIR/$fn" >>"$TRACE_DIR/.candidate_list" 378 | # 379 | #done < <(ls -rS "$IN_DIR") 380 | 381 | ls -rS "$IN_DIR" | parallel -k -I FN sed 's^\$^\ FN^' "$TRACE_DIR/FN" >> "$TRACE_DIR/.candidate_list" 382 | 383 | echo 384 | 385 | ############################## 386 | # STEP 4: LOADING CANDIDATES # 387 | ############################## 388 | 389 | # At this point, we have a file of tuple-file pairs, sorted by file size 390 | # in ascending order (as a consequence of ls -rS). By doing sort keyed 391 | # only by tuple (-k 1,1) and configured to output only the first line for 392 | # every key (-s -u), we end up with the smallest file for each tuple. 393 | 394 | echo "[*] Sorting candidate list (be patient)..." 395 | 396 | sort -k1,1 -s -u "$TRACE_DIR/.candidate_list" | \ 397 | sed 's/^/BEST_FILE[/;s/ /]="/;s/$/"/' >"$TRACE_DIR/.candidate_script" 398 | 399 | if [ ! -s "$TRACE_DIR/.candidate_script" ]; then 400 | echo "[-] Error: no traces obtained from test cases, check syntax!" 401 | test "$AFL_KEEP_TRACES" = "" && rm -rf "$TRACE_DIR" 402 | exit 1 403 | fi 404 | 405 | # The sed command converted the sorted list to a shell script that populates 406 | # BEST_FILE[tuple]="fname". Let's load that! 407 | 408 | . 
"$TRACE_DIR/.candidate_script" 409 | 410 | ########################## 411 | # STEP 5: WRITING OUTPUT # 412 | ########################## 413 | 414 | # The final trick is to grab the top pick for each tuple, unless said tuple is 415 | # already set due to the inclusion of an earlier candidate; and then put all 416 | # tuples associated with the newly-added file to the "already have" list. The 417 | # loop works from least popular tuples and toward the most common ones. 418 | 419 | echo "[*] Processing candidates and writing output files..." 420 | 421 | CUR=0 422 | 423 | touch "$TRACE_DIR/.already_have" 424 | if [ -f "$TRACE_DIR/.already_have" ]; then 425 | echo "Touch ok" 426 | else 427 | echo "[-] Failed to touch '$TRACE_DIR/.already_have' - aborting" 428 | exit 1 429 | fi 430 | 431 | while read -r cnt tuple; do 432 | 433 | CUR=$((CUR+1)) 434 | printf "\\r Processing tuple $CUR/$TUPLE_COUNT... " 435 | 436 | # If we already have this tuple, skip it. 437 | 438 | grep -q "^$tuple\$" "$TRACE_DIR/.already_have" && continue 439 | 440 | FN=${BEST_FILE[tuple]} 441 | 442 | $CP_TOOL "$IN_DIR/$FN" "$OUT_DIR/$FN" 443 | 444 | if [ "$((CUR % 5))" = "0" ]; then 445 | sort -u "$TRACE_DIR/$FN" "$TRACE_DIR/.already_have" >"$TRACE_DIR/.tmp" 446 | mv -f "$TRACE_DIR/.tmp" "$TRACE_DIR/.already_have" 447 | else 448 | cat "$TRACE_DIR/$FN" >>"$TRACE_DIR/.already_have" 449 | fi 450 | 451 | done <"$TRACE_DIR/.all_uniq" 452 | 453 | echo 454 | 455 | OUT_COUNT=`ls -- "$OUT_DIR" | wc -l` 456 | 457 | if [ "$OUT_COUNT" = "1" ]; then 458 | echo "[!] WARNING: All test cases had the same traces, check syntax!" 459 | fi 460 | 461 | echo "[+] Narrowed down to $OUT_COUNT files, saved in '$OUT_DIR'." 462 | echo 463 | 464 | test "$AFL_KEEP_TRACES" = "" && rm -rf "$TRACE_DIR" 465 | 466 | exit 0 467 | -------------------------------------------------------------------------------- /afl-pollenate: -------------------------------------------------------------------------------- 1 | #! 
/usr/bin/env ruby 2 | 3 | # (c) Ben Nagy, 2015, All Rights Reserved 4 | # This software is licensed under a permissive but non-GPL compatible license, 5 | # found in the accompanying LICENSE file. 6 | 7 | # Pollenate ONE sync dir from each target into all other 8 | # targets fuzzing the same format. Assumes that work 9 | # dirs are named as by https://github.com/bnagy/afl-launch. 10 | # 11 | # Each target syncs inside its own directory already, 12 | # so copying any of the sync dirs works. It is possible 13 | # that you'll miss some stuff, but it saves N * (N-1) syncs. 14 | # 15 | # Layout is like 16 | # /path/to/fuzzing 17 | # |_ target1 18 | # |_ t1-M0 19 | # |_ t1-S1 20 | # |_ [...] 21 | # |_ target2 22 | # |_ t2-M0 23 | # |_ t2-S1 24 | # |_ [...] 25 | # 26 | # Then run pollenate /path/to/fuzzing. Dirs will 27 | # be copied as t1-S1.sync 28 | 29 | root = ARGV[0] 30 | 31 | SNOOZE = 3600 32 | INTERVAL = 10 33 | DEBUG = false 34 | 35 | unless root && File.directory?( root ) 36 | $stderr.puts "Usage: #{$0} /root/dir/of/fuzzers" 37 | exit(1) 38 | end 39 | 40 | dirs = Dir["#{File.expand_path(root)}/*/"] 41 | 42 | loop do 43 | 44 | dirs.each {|dir| 45 | others = dirs - [dir] 46 | fuzzer_dirs = Dir["#{dir}*S1/"] 47 | fuzzer_dirs.each {|fd| 48 | others.each {|other| 49 | src = fd.chomp("/") 50 | # Don't sync from synced dirs - that can try to write into the original, 51 | # which ends poorly. 52 | next if src =~ /\.sync/ 53 | cmd = "rsync -ra --exclude=\".nfs*\" --exclude=\"crashes*/\" " + 54 | "--exclude=\"hangs*/\" #{src}/ #{File.join(other, File.basename(src)+'.sync')}/" 55 | $stderr.puts "[DEBUG] #{cmd}" if DEBUG 56 | $stderr.print "Syncing #{fd.chomp("/")} -> #{other}" 57 | mark = Time.now 58 | `#{cmd}` 59 | $stderr.puts " ... 
#{"%.2f" % (Time.now - mark)}s" 60 | } 61 | } 62 | } 63 | 64 | (0...SNOOZE).step(INTERVAL).each {|left| 65 | print "\r" 66 | print( "[SLEEPING] (%dm%ds remaining, ^C to abort) " % (SNOOZE-left).divmod(60) ) 67 | sleep INTERVAL 68 | } 69 | puts 70 | 71 | end 72 | -------------------------------------------------------------------------------- /afl-resume: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # american fuzzy lop - resume a set of fuzzers 4 | # -------------------------------------- 5 | # 6 | # By @rantyben, based on afl-whatsup, which is: 7 | # Written and maintained by Michal Zalewski 8 | # 9 | # Copyright 2015 Google Inc. All rights reserved. 10 | # 11 | # Licensed under the Apache License, Version 2.0 (the "License"); 12 | # you may not use this file except in compliance with the License. 13 | # You may obtain a copy of the License at: 14 | # 15 | # http://www.apache.org/licenses/LICENSE-2.0 16 | # 17 | # This resumes all live fuzzers in the given output directory using 18 | # SIGCONT 19 | 20 | echo "afl-resume - resume fuzzers" 21 | echo 22 | 23 | DIR="$1" 24 | 25 | if [ "$DIR" = "" ]; then 26 | 27 | echo "Usage: $0 afl_sync_dir" 1>&2 28 | echo 1>&2 29 | exit 1 30 | 31 | fi 32 | 33 | cd "$DIR" || exit 1 34 | 35 | if [ -d queue ]; then 36 | 37 | echo "[-] Error: parameter is an individual output directory, not a sync dir." 1>&2 38 | exit 1 39 | 40 | fi 41 | 42 | TMP=`mktemp -t .afl-whatsup-XXXXXXXX` || exit 1 43 | 44 | ALIVE_CNT=0 45 | DEAD_CNT=0 46 | 47 | for i in `find . -maxdepth 2 -iname fuzzer_stats`; do 48 | 49 | sed 's/^command_line.*$/_skip:1/;s/[ ]*:[ ]*/="/;s/$/"/' "$i" >"$TMP" 50 | . "$TMP" 51 | 52 | if ! kill -0 "$fuzzer_pid" 2>/dev/null; then 53 | echo "Instance is dead or running remotely, skipping." 54 | echo 55 | DEAD_CNT=$((DEAD_CNT + 1)) 56 | continue 57 | fi 58 | 59 | ALIVE_CNT=$((ALIVE_CNT + 1)) 60 | 61 | if ! 
kill -CONT "$fuzzer_pid" 2>/dev/null; then 62 | echo "Unable to resume fuzzer pid $fuzzer_pid, aborting." 63 | echo 64 | exit 1 65 | fi 66 | 67 | done 68 | 69 | rm -f "$TMP" 70 | 71 | echo "Fuzzers resumed: $ALIVE_CNT" 72 | 73 | if [ ! "$DEAD_CNT" = "0" ]; then 74 | echo "Dead or remote: $DEAD_CNT (excluded from stats)" 75 | fi 76 | 77 | exit 0 78 | -------------------------------------------------------------------------------- /afl-stop: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # 3 | # american fuzzy lop - stop a set of fuzzers 4 | # -------------------------------------- 5 | # 6 | # By MarkusTeufelberger, based on afl-whatsup, which is: 7 | # Written and maintained by Michal Zalewski 8 | # 9 | # Copyright 2015 Google Inc. All rights reserved. 10 | # 11 | # Licensed under the Apache License, Version 2.0 (the "License"); 12 | # you may not use this file except in compliance with the License. 13 | # You may obtain a copy of the License at: 14 | # 15 | # http://www.apache.org/licenses/LICENSE-2.0 16 | # 17 | # This stops all live fuzzers in the given output directory using 18 | # SIGKILL 19 | 20 | echo "afl-stop - stop fuzzers" 21 | echo 22 | 23 | DIR="$1" 24 | 25 | if [ "$DIR" = "" ]; then 26 | 27 | echo "Usage: $0 afl_sync_dir" 1>&2 28 | echo 1>&2 29 | exit 1 30 | 31 | fi 32 | 33 | cd "$DIR" || exit 1 34 | 35 | if [ -d queue ]; then 36 | 37 | echo "[-] Error: parameter is an individual output directory, not a sync dir." 1>&2 38 | exit 1 39 | 40 | fi 41 | 42 | TMP=`mktemp -t .afl-whatsup-XXXXXXXX` || exit 1 43 | 44 | ALIVE_CNT=0 45 | DEAD_CNT=0 46 | 47 | for i in `find . -maxdepth 2 -iname fuzzer_stats`; do 48 | 49 | sed 's/^command_line.*$/_skip:1/;s/[ ]*:[ ]*/="/;s/$/"/' "$i" >"$TMP" 50 | . "$TMP" 51 | 52 | if ! kill -0 "$fuzzer_pid" 2>/dev/null; then 53 | echo "Instance is dead or running remotely, skipping." 
54 | echo 55 | DEAD_CNT=$((DEAD_CNT + 1)) 56 | continue 57 | fi 58 | 59 | ALIVE_CNT=$((ALIVE_CNT + 1)) 60 | 61 | if ! kill -KILL "$fuzzer_pid" 2>/dev/null; then 62 | echo "Unable to stop fuzzer pid $fuzzer_pid, aborting." 63 | echo 64 | exit 1 65 | fi 66 | 67 | done 68 | 69 | rm -f "$TMP" 70 | 71 | echo "Fuzzers stopped: $ALIVE_CNT" 72 | 73 | if [ ! "$DEAD_CNT" = "0" ]; then 74 | echo "Dead or remote: $DEAD_CNT (excluded from stats)" 75 | fi 76 | 77 | exit 0 78 | --------------------------------------------------------------------------------