├── LICENSE ├── README.md ├── code ├── config.ini ├── env.py ├── fuzz.py ├── iftracer.zip ├── parse_dwarf.py ├── patchloc.py ├── tracer.py └── utils.py ├── data ├── binutils │ ├── cve_2017_14745 │ │ ├── README.txt │ │ ├── cve_2017_14745.Dockerfile │ │ └── exploit │ ├── cve_2017_15020 │ │ ├── README.txt │ │ ├── cve_2017_15020.Dockerfile │ │ └── exploit │ ├── cve_2017_15025 │ │ ├── README.txt │ │ ├── cve_2017_15025.Dockerfile │ │ └── exploit │ └── cve_2017_6965 │ │ ├── README.txt │ │ ├── cve_2017_6965.Dockerfile │ │ └── exploit ├── coreutils │ ├── gnubug_19784 │ │ ├── README.txt │ │ └── gnubug_19784.Dockerfile │ ├── gnubug_25003 │ │ ├── README.txt │ │ └── gnubug_25003.Dockerfile │ ├── gnubug_25023 │ │ ├── README.txt │ │ └── gnubug_25023.Dockerfile │ └── gnubug_26545 │ │ ├── README.txt │ │ └── gnubug_26545.Dockerfile ├── ffmpeg │ ├── bugchrom_1404 │ │ ├── BUGCHROM_1404-setup.zip │ │ ├── README.txt │ │ └── bugchrom_1404.Dockerfile │ └── cve_2017_9992 │ │ ├── CVE_2017_9992-setup.zip │ │ ├── README.txt │ │ └── cve_2017_9992.Dockerfile ├── jasper │ ├── cve_2016_8691 │ │ ├── README.txt │ │ ├── cve_2016_8691.Dockerfile │ │ ├── exploit │ │ └── source.zip │ └── cve_2016_9557 │ │ ├── README.txt │ │ ├── cve_2016_9557.Dockerfile │ │ ├── exploit │ │ └── source.zip ├── libarchive │ └── cve_2016_5844 │ │ ├── CVE_2016_5844-setup.zip │ │ ├── README.txt │ │ ├── cve_2016_5844.Dockerfile │ │ └── libarchive-signed-int-overflow.iso ├── libjpeg │ ├── cve_2012_2806 │ │ ├── README.txt │ │ ├── cve_2012_2806.Dockerfile │ │ └── exploit │ ├── cve_2017_15232 │ │ ├── README.txt │ │ ├── cve_2017_15232.Dockerfile │ │ └── exploit │ ├── cve_2018_14498 │ │ ├── README.txt │ │ ├── cve_2018_14498.Dockerfile │ │ └── exploit │ └── cve_2018_19664 │ │ ├── README.txt │ │ ├── cve_2018_19664.Dockerfile │ │ └── exploit ├── libming │ └── cve_2016_9264 │ │ ├── README.txt │ │ ├── cve_2016_9264.Dockerfile │ │ └── exploit ├── libtiff │ ├── bugzilla_2611 │ │ ├── README.txt │ │ ├── bugzilla_2611.Dockerfile │ │ └── exploit │ ├── bugzilla_2633 │ │ ├── README.txt │ │ ├── bugzilla_2633.Dockerfile │ │ └── exploit │ ├── cve_2016_10092 │ │ ├── README.txt │ │ ├── cve_2016_10092.Dockerfile │ │ └── exploit │ ├── cve_2016_10094 │ │ ├── README.txt │ │ ├── cve_2016_10094.Dockerfile │ │ ├── exploit │ │ └── source.zip │ ├── cve_2016_10272 │ │ ├── README.txt │ │ ├── cve_2016_10272.Dockerfile │ │ └── exploit │ ├── cve_2016_3186 │ │ ├── README.txt │ │ ├── cve_2016_3186.Dockerfile │ │ ├── exploit │ │ └── source.zip │ ├── cve_2016_5314 │ │ ├── README.txt │ │ ├── cve_2016_5314.Dockerfile │ │ ├── exploit │ │ └── source.zip │ ├── cve_2016_5321 │ │ ├── README.txt │ │ ├── cve_2016_5321.Dockerfile │ │ ├── exploit │ │ └── source.zip │ ├── cve_2016_9273 │ │ ├── README.txt │ │ ├── cve_2016_9273.Dockerfile │ │ └── exploit │ ├── cve_2016_9532 │ │ ├── README.txt │ │ ├── cve_2016_9532.Dockerfile │ │ └── exploit │ ├── cve_2017_5225 │ │ ├── README.txt │ │ ├── cve_2017_5225.Dockerfile │ │ └── exploit │ ├── cve_2017_7595 │ │ ├── README.txt │ │ ├── cve_2017_7595.Dockerfile │ │ └── exploit │ └── cve_2017_7601 │ │ ├── README.txt │ │ ├── cve_2017_7601.Dockerfile │ │ └── exploit ├── libxml2 │ ├── cve_2012_5134 │ │ ├── README.txt │ │ ├── cve_2012_5134.Dockerfile │ │ └── exploit │ ├── cve_2016_1838 │ │ ├── README.txt │ │ ├── cve_2016_1838.Dockerfile │ │ └── exploit │ ├── cve_2016_1839 │ │ ├── README.txt │ │ ├── cve_2016_1839.Dockerfile │ │ └── exploit │ └── cve_2017_5969 │ │ ├── README.txt │ │ ├── cve_2017_5969.Dockerfile │ │ └── exploit ├── potrace │ └── cve_2013_7437 │ │ ├── README.txt 
│ │ ├── cve_2013_7437.Dockerfile │ │ ├── exploit │ │ └── source.zip └── zziplib │ ├── cve_2017_5974 │ ├── README.txt │ ├── cve_2017_5974.Dockerfile │ └── exploit │ ├── cve_2017_5975 │ ├── README.txt │ ├── cve_2017_5975.Dockerfile │ └── exploit │ └── cve_2017_5976 │ ├── README.txt │ ├── cve_2017_5976.Dockerfile │ └── exploit ├── setup.Dockerfile └── test ├── setup-cve_2016_5314 ├── config.ini └── test.sh └── setup-others └── README.md
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2020 patchloc
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # VulnLoc
2 |
3 | ## Overview
4 |
5 |
6 | Automatic vulnerability diagnosis can help security analysts identify and, therefore, quickly patch disclosed vulnerabilities. The vulnerability localization problem is to automatically find a program
7 | point at which the “root cause” of the bug can be fixed. VulnLoc
8 | employs a statistical localization approach to analyze a given exploit.
9 | Our main technical contribution is a novel procedure to systematically construct a test-suite which enables high-fidelity localization.
10 | We implement our techniques in a tool called VulnLoc (originally named PatchLoc).
11 | VulnLoc automatically pinpoints vulnerability locations, given just one exploit, with
12 | high accuracy. It does not make any assumptions about the
13 | availability of source code, test suites, or specialized knowledge
14 | of the type of vulnerability.
15 |
16 | More details about the project can be found in the [paper](https://www.comp.nus.edu.sg/~prateeks/papers/VulnLoc.pdf).
17 |
18 | ## Installation
19 |
20 | 1) Install dependencies
21 |
22 | VulnLoc requires all the dependencies of DynamoRIO, numpy (>=1.16) and pyelftools.
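Note that the scripts under ./code are Python 2 code (they import `ConfigParser`), so the steps below assume a Python 2.7 interpreter with pip; a quick check before installing the remaining dependencies:
```console
$ python --version                  # expect a 2.7.x interpreter
$ python -c "import ConfigParser"   # succeeds only on Python 2
```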
23 | ```console
24 | $ sudo apt install -y build-essential git vim unzip python-dev python-pip ipython wget libssl-dev g++-multilib doxygen transfig imagemagick ghostscript git zlib1g-dev
25 | # install numpy
26 | $ wget https://github.com/numpy/numpy/releases/download/v1.16.6/numpy-1.16.6.zip
27 | $ unzip numpy-1.16.6.zip
28 | $ cd ./numpy-1.16.6
29 | $ python setup.py install
30 | $ cd ../
31 | # install pyelftools
32 | $ sudo pip install pyelftools
33 | ```
34 |
35 | 2) Install CMake
36 |
37 | CMake is required for building DynamoRIO.
38 | ```console
39 | $ wget https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz
40 | $ tar -xvzf ./cmake-3.16.2.tar.gz
41 | $ cd ./cmake-3.16.2
42 | $ ./bootstrap
43 | $ make
44 | $ sudo make install
45 | $ cd ../
46 | ```
47 |
48 | 3) Install DynamoRIO
49 | ```console
50 | $ git clone https://github.com/DynamoRIO/dynamorio.git
51 | $ cd ./dynamorio
52 | $ mkdir build
53 | $ cd ./build
54 | $ cmake ../
55 | $ make
56 | $ cd ../../
57 | ```
58 |
59 | 4) Install the DynamoRIO-based tracer
60 |
61 | We use the tracer to monitor the execution of branches.
62 | ```console
63 | $ unzip iftracer.zip # iftracer.zip can be found in the folder "./code"
64 | ```
65 | Users need to fill in the path of DynamoRIO in the CMakeLists.txt of both iftracer and ifLineTracer. After the modification, please run:
66 | ```console
67 | $ cd ./iftracer/iftracer
68 | $ cmake CMakeLists.txt
69 | $ make
70 | $ cd ../ifLineTracer
71 | $ cmake CMakeLists.txt
72 | $ make
73 | $ cd ../../
74 | ```
75 |
76 | 5) Configure the paths to DynamoRIO and the tracer
77 |
78 | Please fill in the correct paths of DynamoRIO and the tracer in **./code/env.py**.
79 | ```
80 | dynamorio_path = "/build/bin64/drrun"
81 | iftracer_path = "/iftracer/libiftracer.so"
82 | iflinetracer_path = "/ifLineTracer/libifLineTracer.so"
83 | libcbr_path = "/build/api/bin/libcbr.so"
84 | ```
85 |
86 | ## Usage
87 | To show the usage of VulnLoc, we take *cve-2016-5314* as an example. Here are the links to the PoC and the developer-generated patch:
88 | - [PoC](http://bugzilla.maptools.org/show_bug.cgi?id=2554)
89 | - [Developer-generated patch](https://github.com/vadz/libtiff/commit/391e77fcd217e78b2c51342ac3ddb7100ecacdd2)
90 |
91 | 1) (Optional) Compile the target vulnerable program
92 | VulnLoc takes a vulnerable binary and the corresponding PoC as its input. If you do not have the vulnerable binary, please compile the program first.
93 | ```console
94 | $ sudo apt install -y zlib1g-dev
95 | $ cd cve_2016_5314
96 | $ unzip source.zip # source.zip can be found in the folder "./data/cve_2016_5314"
97 | $ cd source
98 | $ ./configure
99 | $ make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb"
100 | ```
101 |
102 | 2) Configure the CVE
103 | To monitor the execution of the given vulnerable binary, users need to provide a configuration file for each CVE. The template of the configuration file can be found in *./code/config.ini*. To complete the configuration file, users need to fill in the following attributes for each CVE:
104 | - **cve_tag**: The unique ID of each CVE (e.g., cve_2016_5314). A configuration file can include the information for multiple CVEs. To extract the right configuration, users are required to assign a unique ID to each CVE.
105 | - **trace_cmd**: The command used for executing the vulnerable program with the given PoC. Each argument is separated by ';'. The location of the target argument for fuzzing is replaced with '***'.
106 | - **crash_cmd**: The command used for checking whether the vulnerable program gets exploited or not. crash_cmd follows the same format as trace_cmd.
107 | - **bin_path**: The path to the vulnerable binary.
108 | - **poc**: The path to the PoC.
109 | - **poc_fmt**: The type of PoC.
110 | - **mutate_range**: The valid range for mutation.
111 | - **folder**: The output folder for saving the test-suite.
112 | - **crash_tag**: The information which can be utilized to detect whether the program gets exploited or not. The vulnerability checker is defined in the function *check_exploit* under *./code/fuzz.py*.
113 |
114 | **EXAMPLE: cve-2016-5314**
115 |
116 | a) Building an exploit detector.
117 |
118 | We utilize Valgrind to detect whether the program gets exploited or not for cve-2016-5314. Valgrind is not the only choice, and users can define their own way of detecting the vulnerability. If the binary is compiled with AddressSanitizer, users can also use ASan to detect the vulnerability.
119 | - Building Valgrind
120 | ```console
121 | $ sudo apt-get install -y libc6-dbg
122 | $ wget https://sourceware.org/pub/valgrind/valgrind-3.15.0.tar.bz2
123 | $ tar xjf valgrind-3.15.0.tar.bz2
124 | $ cd ./valgrind-3.15.0
125 | $ ./configure
126 | $ make
127 | $ sudo make install
128 | ```
129 | - Executing the program with Valgrind
130 | ```console
131 | $ cd ./source/tools
132 | $ valgrind ./rgb2ycbcr tmpout1.tif
133 | ```
134 | Here is the output of Valgrind:
135 | ```
136 | ==48145== Invalid write of size 1
137 | ==48145== at 0x4E43078: ??? (in /lib/x86_64-linux-gnu/libz.so.1.2.8)
138 | ==48145== by 0x4E4638A: inflate (in /lib/x86_64-linux-gnu/libz.so.1.2.8)
139 | ==48145== by 0x443FC4: PixarLogDecode (tif_pixarlog.c:785)
140 | ==48145== by 0x42C3B4: TIFFReadEncodedTile (tif_read.c:668)
141 | ==48145== by 0x42C2A1: TIFFReadTile (tif_read.c:641)
142 | ==48145== by 0x41FEFC: gtTileContig (tif_getimage.c:656)
143 | ==48145== by 0x41F8C0: TIFFRGBAImageGet (tif_getimage.c:495)
144 | ==48145== by 0x41F9C2: TIFFReadRGBAImageOriented (tif_getimage.c:514)
145 | ==48145== by 0x41FA77: TIFFReadRGBAImage (tif_getimage.c:532)
146 | ==48145== by 0x4022A3: tiffcvt (rgb2ycbcr.c:315)
147 | ==48145== by 0x401811: main (rgb2ycbcr.c:127)
148 | ==48145== Address 0x5749c4c is 0 bytes after a block of size 476 alloc'd
149 | ==48145== at 0x4C2DE96: malloc (vg_replace_malloc.c:309)
150 | ==48145== by 0x4313CE: _TIFFmalloc (tif_unix.c:316)
151 | ==48145== by 0x443C80: PixarLogSetupDecode (tif_pixarlog.c:692)
152 | ==48145== by 0x446E12: PredictorSetupDecode (tif_predict.c:111)
153 | ==48145== by 0x42CFB2: TIFFStartTile (tif_read.c:1001)
154 | ==48145== by 0x42CC0E: TIFFFillTile (tif_read.c:901)
155 | ==48145== by 0x42C37C: TIFFReadEncodedTile (tif_read.c:668)
156 | ==48145== by 0x42C2A1: TIFFReadTile (tif_read.c:641)
157 | ==48145== by 0x41FEFC: gtTileContig (tif_getimage.c:656)
158 | ==48145== by 0x41F8C0: TIFFRGBAImageGet (tif_getimage.c:495)
159 | ==48145== by 0x41F9C2: TIFFReadRGBAImageOriented (tif_getimage.c:514)
160 | ==48145== by 0x41FA77: TIFFReadRGBAImage (tif_getimage.c:532)
161 | ```
162 |
163 | b) Creating the configuration file
164 | ```
165 | [cve_2016_5314]
166 | trace_cmd=/tools/rgb2ycbcr;***;tmpout1.tif
167 | crash_cmd=valgrind;/tools/rgb2ycbcr;***;tmpout2.tif
168 | bin_path=/tools/rgb2ycbcr
169 | poc=
170 | poc_fmt=bfile
171 | mutate_range=default
172 | folder=
173 | crash_tag=valgrind;3;0x443FC4
174 | ```
175 |
176 | 3) Run ConcFuzz (collect the test-suite)
177 | ```console
178 | $ cd ./code
179 | $
python fuzz.py --config_file --tag 180 | ``` 181 | 182 | 4) Rank the location candidates 183 | ```console 184 | $ python patchloc.py --config_file --tag --func calc --out_folder --poc_trace_hash --process_num 185 | ``` 186 | 187 | ## More Examples 188 | More examples can be found at the folder **./test**. The **README.md** file in 189 | each subfolder under **./test** will tell you how to setup each CVE in our 190 | benchmark. 191 | 192 | 193 | 194 | -------------------------------------------------------------------------------- /code/config.ini: -------------------------------------------------------------------------------- 1 | [cve_tag] 2 | trace_cmd= 3 | crash_cmd= 4 | bin_path= 5 | poc= 6 | poc_fmt= 7 | mutate_range= 8 | folder= 9 | crash_tag= 10 | 11 | 12 | -------------------------------------------------------------------------------- /code/env.py: -------------------------------------------------------------------------------- 1 | dynamorio_path="/root/workspace/deps/dynamorio/build/bin64/drrun" 2 | iftracer_path="/root/workspace/deps/iftracer/iftracer/libiftracer.so" 3 | iflinetracer_path="/root/workspace/deps/iftracer/ifLineTracer/libifLineTracer.so" 4 | libcbr_path="/root/workspace/deps/dynamorio/build/api/bin/libcbr.so" 5 | -------------------------------------------------------------------------------- /code/fuzz.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import ConfigParser 3 | import logging 4 | import os 5 | import utils 6 | import numpy as np 7 | from time import time 8 | import string 9 | from copy import deepcopy as dc 10 | import hashlib 11 | import shutil 12 | import tracer 13 | import itertools 14 | from multiprocessing import Pool 15 | 16 | DefaultItems = ['trace_cmd', 'crash_cmd', 'poc', 'poc_fmt', 'folder', 'mutate_range', 'crash_tag'] 17 | OutFolder = '' 18 | TmpFolder = '' 19 | TraceFolder = '' 20 | 21 | SeedPool = [] # Each element is in the fmt of [, ]. : True (selected) / False (not selected) 22 | SeedTraceHashList = [] 23 | ReportCollection = [] # Each element if in the fmt of [, ]. 
: m - malicious / b - benign 24 | TraceHashCollection = [] 25 | GlobalTimeout = 4 * 60 * 60 # 2 hours 26 | LocalTimeout = 4 * 60 * 60 # 2 hours 27 | DefaultRandSeed = 3 28 | DefaultMutateNum = 200 29 | DefaultMaxCombination = 2 30 | MaxCombineNum = 10**20 31 | 32 | def parse_args(): 33 | parser = argparse.ArgumentParser(description="ConcFuzz") 34 | parser.add_argument('--config_file', dest='config_file', type=str, required=True, 35 | help="The path of config file") 36 | parser.add_argument('--tag', dest='tag', type=str, required=True, 37 | help="The corresponding CVE id") 38 | parser.add_argument('--verbose', dest='verbose', type=str, default='True', 39 | help="Whether print out the debugging info") 40 | args = parser.parse_args() 41 | 42 | # check the validity of args 43 | config = ConfigParser.ConfigParser() 44 | config.read(args.config_file) 45 | if args.tag not in config.sections(): 46 | raise Exception("ERROR: Please provide the configuration file for %s" % args.tag) 47 | # read & processing config file 48 | detailed_config = {} 49 | for item in config.items(args.tag): 50 | if item[0] == 'folder': 51 | if not os.path.exists(item[1]): 52 | raise Exception("ERROR: The folder does not exist -> %s" % item[1]) 53 | detailed_config[item[0]] = item[1] 54 | else: 55 | detailed_config[item[0]] = item[1].split(';') 56 | # check whether it contains all the required attributes 57 | if len(set(detailed_config.keys()) & set(DefaultItems)) != len(DefaultItems): 58 | raise Exception("ERROR: Missing required attributes in config.ini -> Required attributes: %s" % str(DefaultItems)) 59 | # check poc & poc_fmt & mutate_range 60 | arg_num = len(detailed_config['poc']) 61 | if arg_num != len(detailed_config['poc_fmt']) and arg_num != len(detailed_config['mutate_range']): 62 | raise Exception("ERROR: Your defined poc is not matched with poc_fmt/mutate_range") 63 | processed_arg = [] 64 | processed_fmt = [] # each element is in the fmt of [, , , ] 65 | for arg_no in range(arg_num): 66 | if detailed_config['poc_fmt'][arg_no] == 'bfile': 67 | if not os.path.exists(detailed_config['poc'][arg_no]): 68 | raise Exception("ERROR: Exploit file does not exist -> %s" % detailed_config['poc'][arg_no]) 69 | content = utils.read_bin(detailed_config['poc'][arg_no]) 70 | 71 | processed_fmt.append(['bfile', len(processed_arg), len(content), range(256)]) 72 | processed_arg += content 73 | elif detailed_config['poc_fmt'][arg_no] == 'int': 74 | try: 75 | tmp = detailed_config['mutate_range'][arg_no].split('~') 76 | mutate_range = range(int(tmp[0]), int(tmp[1])) 77 | except: 78 | raise Exception('ERROR: Please check the value of mutate_range in your config file.') 79 | processed_fmt.append(['int', len(processed_arg), 1, mutate_range]) 80 | processed_arg.append(int(detailed_config['poc'][arg_no])) 81 | elif detailed_config['poc_fmt'][arg_no] == 'float': 82 | try: 83 | tmp = detailed_config['mutate_range'][arg_no].split('~') 84 | mutate_range = list(np.arange(float(tmp[0]), float(tmp[1]), float(tmp[2]))) 85 | except: 86 | raise Exception('ERROR: Please check the value of mutate_range in your config file.') 87 | processed_fmt.append(['float', len(processed_arg), 1, mutate_range]) 88 | processed_arg.append(float(detailed_config['poc'][arg_no])) 89 | elif detailed_config['poc_fmt'][arg_no] == 'str': 90 | processed_fmt.append(['str', len(processed_arg), len(detailed_config['poc'][arg_no]), list(string.printable)]) 91 | processed_arg += list(detailed_config['poc'][arg_no]) 92 | else: 93 | raise Exception("ERROR: Unknown type of 
arguments -> %s" % detailed_config['poc_fmt'][arg_no]) 94 | detailed_config['poc'] = processed_arg 95 | detailed_config['poc_fmt'] = processed_fmt 96 | detailed_config.pop('mutate_range') 97 | # process the optional args 98 | if 'global_timeout' not in detailed_config: # read the global timeout (overall) 99 | global GlobalTimeout 100 | detailed_config['global_timeout'] = GlobalTimeout 101 | else: 102 | detailed_config['global_timeout'] = int(detailed_config['global_timeout'][0]) 103 | if 'local_timeout' not in detailed_config: # read the local timeout for each seed 104 | global LocalTimeout 105 | detailed_config['local_timeout'] = LocalTimeout 106 | else: 107 | detailed_config['local_timeout'] = int(detailed_config['local_timeout'][0]) 108 | if 'rand_seed' not in detailed_config: # read the randomization seed 109 | global DefaultRandSeed 110 | detailed_config['rand_seed'] = DefaultRandSeed 111 | else: 112 | detailed_config['rand_seed'] = int(detailed_config['rand_seed'][0]) 113 | if 'mutation_num' not in detailed_config: # read the number of mutation for each byte 114 | global DefaultMutateNum 115 | detailed_config['#mutation'] = DefaultMutateNum 116 | else: 117 | detailed_config['#mutation'] = int(detailed_config['mutation_num'][0]) 118 | detailed_config.pop('mutation_num') 119 | if 'combination_num' not in detailed_config: 120 | global DefaultMaxCombination 121 | detailed_config['#combination'] = DefaultMaxCombination 122 | else: 123 | detailed_config['#combination'] = int(detailed_config['combination_num'][0]) 124 | detailed_config.pop('combination_num') 125 | if 'max_combine_num' in detailed_config: 126 | global MaxCombineNum 127 | MaxCombineNum = int(detailed_config['max_combine_num'][0]) 128 | if 'tmp_filename_len' in detailed_config: # read the length of temperol filename 129 | utils.FileNameLen = int(detailed_config['tmp_filename_len'][0]) 130 | # get all the replace idx in the cmd 131 | tmp = ';'.join(detailed_config['trace_cmd']).split('***') 132 | detailed_config['trace_cmd'] = [] 133 | detailed_config['trace_replace_idx'] = [] 134 | for id in range(len(tmp)): 135 | detailed_config['trace_cmd'].append(tmp[id]) 136 | detailed_config['trace_cmd'].append('') 137 | detailed_config['trace_replace_idx'].append(2*id + 1) 138 | detailed_config['trace_cmd'] = detailed_config['trace_cmd'][:-1] 139 | detailed_config['trace_replace_idx'] = detailed_config['trace_replace_idx'][:-1] 140 | 141 | tmp = ';'.join(detailed_config['crash_cmd']).split('***') 142 | detailed_config['crash_cmd'] = [] 143 | detailed_config['crash_replace_idx'] = [] 144 | for id in range(len(tmp)): 145 | detailed_config['crash_cmd'].append(tmp[id]) 146 | detailed_config['crash_cmd'].append('') 147 | detailed_config['crash_replace_idx'].append(2 * id + 1) 148 | detailed_config['crash_cmd'] = detailed_config['crash_cmd'][:-1] 149 | detailed_config['crash_replace_idx'] = detailed_config['crash_replace_idx'][:-1] 150 | # detailed_config['trace_replace_idx'] = np.where(np.asarray(detailed_config['trace_cmd']) == '***')[0] 151 | # detailed_config['crash_replace_idx'] = np.where(np.asarray(detailed_config['crash_cmd']) == '***')[0] 152 | return args.tag, detailed_config, args.verbose 153 | 154 | def init_log(tag, verbose, folder): 155 | global OutFolder, TmpFolder, TraceFolder 156 | OutFolder = os.path.join(folder, 'output_%d' % int(time())) 157 | if os.path.exists(OutFolder): 158 | raise Exception("ERROR: Output folder already exists! 
-> %s" % OutFolder) 159 | else: 160 | os.mkdir(OutFolder) 161 | TmpFolder = os.path.join(OutFolder, 'tmp') 162 | if not os.path.exists(TmpFolder): 163 | os.mkdir(TmpFolder) 164 | TraceFolder = os.path.join(OutFolder, 'traces') 165 | if not os.path.exists(TraceFolder): 166 | os.mkdir(TraceFolder) 167 | log_path = os.path.join(OutFolder, 'fuzz.log') 168 | if verbose == 'True': 169 | logging.basicConfig(filename=log_path, filemode='a+', level=logging.DEBUG, 170 | format="[%(asctime)s-%(funcName)s-%(levelname)s]: %(message)s", 171 | datefmt="%d-%b-%y %H:%M:%S") 172 | else: 173 | pass 174 | console = logging.StreamHandler() 175 | console.setLevel(logging.INFO) 176 | console_fmt = logging.Formatter(fmt="[%(asctime)s-%(funcName)s-%(levelname)s]: %(message)s", datefmt="%d-%b-%y %H:%M:%S") 177 | console.setFormatter(console_fmt) 178 | logging.getLogger().addHandler(console) 179 | logging.info('Output Folder: %s' % OutFolder) 180 | logging.debug("CVE: %s" % tag) 181 | logging.debug("Config Info: \n%s" % '\n'.join(['\t%s : %s' % (key, config_info[key]) for key in config_info])) 182 | 183 | def choose_seed(): 184 | global SeedPool 185 | # get all the seeds which have not been selected 186 | seed_num = len(SeedPool) 187 | ns_idx = [] 188 | for seed_no in range(seed_num): 189 | if SeedPool[seed_no][0] == False: 190 | ns_idx.append(seed_no) 191 | if len(ns_idx) == 0: 192 | return [] 193 | else: 194 | selected_id = np.random.choice(ns_idx) 195 | SeedPool[selected_id][0] = True 196 | return [SeedTraceHashList[selected_id], SeedPool[selected_id][1]] 197 | 198 | def prepare_args(input_no, poc, poc_fmt): 199 | global TmpFolder 200 | # prepare the arguments 201 | arg_num = len(poc_fmt) 202 | arg_list = [] 203 | for arg_no in range(arg_num): 204 | if poc_fmt[arg_no][0] == 'bfile': # write the list into binary file 205 | content = np.asarray(poc[poc_fmt[arg_no][1]: poc_fmt[arg_no][1]+poc_fmt[arg_no][2]]).astype(np.int) 206 | tmp_filepath = os.path.join(TmpFolder, 'tmp_%d' % input_no) 207 | utils.write_bin(tmp_filepath, content) 208 | arg_list.append(tmp_filepath) 209 | elif poc_fmt[arg_no][0] == 'int': 210 | arg_list.append(int(poc[poc_fmt[arg_no][1]])) 211 | elif poc_fmt[arg_no][0] == 'float': 212 | arg_list.append(float(poc[poc_fmt[arg_no][1]])) 213 | elif poc_fmt[arg_no][0] == 'str': # concatenate all the chars together 214 | arg_list.append(''.join(poc[poc_fmt[arg_no][1]: poc_fmt[arg_no][1]+poc_fmt[arg_no][2]])) 215 | else: 216 | raise Exception("ERROR: Unknown poc_fmt -> %s" % poc_fmt[arg_no][0]) 217 | return arg_list 218 | 219 | def prepare_cmd(cmd_list, replace_idx, arg_list): 220 | replaced_cmd = dc(cmd_list) 221 | arg_num = len(replace_idx) 222 | for arg_no in range(arg_num): 223 | replaced_cmd[replace_idx[arg_no]] = str(arg_list[arg_no]) 224 | replaced_cmd = ''.join(replaced_cmd) 225 | replaced_cmd = replaced_cmd.split(';') 226 | return replaced_cmd 227 | 228 | def calc_trace_hash(trace): 229 | trace_str = '\n'.join(trace) 230 | return hashlib.sha256(trace_str).hexdigest() 231 | 232 | def just_trace(input_no, raw_args, poc_fmt, trace_cmd, trace_replace_idx): 233 | processed_args = prepare_args(input_no, raw_args, poc_fmt) 234 | cmd = prepare_cmd(trace_cmd, trace_replace_idx, processed_args) 235 | trace = tracer.ifTracer(cmd) 236 | trace_hash = calc_trace_hash(trace) 237 | return trace, trace_hash 238 | 239 | def check_exploit(err, crash_info): 240 | tmp = err.split('\n') 241 | if crash_info[0] == 'valgrind': 242 | line_num = len(tmp) 243 | distance = int(crash_info[1]) 244 | for line_no in 
range(line_num): 245 | item = tmp[line_no] 246 | tmp2 = item.split() 247 | if len(tmp2) >=2 and len(tmp2[0])>=2 and tmp2[0][:2] == '==' and tmp2[1] == 'Invalid': 248 | target_line_no = line_no + 3 249 | if target_line_no < line_num: 250 | if crash_info[2] in tmp[target_line_no]: 251 | return 'm' 252 | return 'b' 253 | elif crash_info[0] == 'asan': 254 | tag = '#'+ crash_info[1] 255 | for item in tmp: 256 | tmp2 = item.split() 257 | if len(tmp2) == 0: 258 | break 259 | if item.split()[0] == tag: 260 | if crash_info[2] in item: 261 | return 'm' 262 | return 'b' 263 | elif crash_info[0] == "assert": 264 | if crash_info[1] in err: 265 | return "m" 266 | else: 267 | return "b" 268 | else: 269 | raise Exception('ERROR: Unknown crash info -> %s' % crash_info) 270 | 271 | def trace_cmp(seed_trace, trace): 272 | min_len = min(len(seed_trace), len(trace)) 273 | for id in range(min_len): 274 | if seed_trace[id] != trace[id]: 275 | return id 276 | return min_len 277 | 278 | def gen_report(input_no, raw_args, poc_fmt, trace_cmd, trace_replace_idx, crash_cmd, crash_replace_idx, crash_info, seed_trace): 279 | processed_args = prepare_args(input_no, raw_args, poc_fmt) 280 | trace_cmd = prepare_cmd(trace_cmd, trace_replace_idx, processed_args) 281 | trace = tracer.ifTracer(trace_cmd) 282 | trace_diff_id = trace_cmp(seed_trace, trace) 283 | trace_hash = calc_trace_hash(trace) 284 | crash_cmd = prepare_cmd(crash_cmd, crash_replace_idx, processed_args) 285 | _, err = tracer.exe_bin(crash_cmd) 286 | crash_result = check_exploit(err, crash_info) 287 | return [input_no, trace, trace_hash, crash_result, trace_diff_id] 288 | 289 | def init_sensitivity_map(seed_len, seed_trace_len, max_combination): 290 | global MaxCombineNum 291 | idx_list = [] 292 | for comb_id in range(1, max_combination+1): 293 | tmp = list(itertools.combinations(range(seed_len), comb_id)) 294 | if len(tmp) > (MaxCombineNum-len(idx_list)): 295 | np.random.shuffle(tmp) 296 | idx_list += tmp[:MaxCombineNum-len(idx_list)] 297 | break 298 | else: 299 | idx_list += tmp 300 | crash_sens_map = { 301 | 'idx': idx_list, 302 | 'value': np.zeros(len(idx_list)) 303 | } 304 | loc_sens_map = { 305 | 'idx': idx_list, 306 | 'tag': np.zeros(len(idx_list)), 307 | 'value': [[] for _ in range(seed_trace_len)] 308 | # 'value': np.zeros((seed_trace_len, len(idx_list))) 309 | } 310 | logging.debug("Max Combinations: %d" % max_combination) 311 | logging.debug("Number of Mutation Idxes: %d" % len(idx_list)) 312 | logging.debug("#Loc: %d" % seed_trace_len) 313 | logging.debug("Size(seed): %d" % seed_len) 314 | return crash_sens_map, loc_sens_map 315 | 316 | def select_mutate_idx(loc_sens_map, seed_len, max_combination): 317 | # select the non-mutated bytes 318 | non_mutated_idx = np.where(loc_sens_map['tag'] == 0)[0] 319 | # find out which loc has not been explored 320 | # unexplore_list = np.where(np.sum(loc_sens_map['value'], axis=-1)==0)[0] 321 | unexplore_list = np.where(np.asarray([len(item) for item in loc_sens_map['value']]) == 0)[0] 322 | logging.debug("#(unexplored loc): %d" % len(unexplore_list)) 323 | if len(unexplore_list) == 0: 324 | return None 325 | unexplore_loc_id = np.min(unexplore_list) 326 | logging.debug("Unexplored Loc ID: %d" % unexplore_loc_id) 327 | tmp = [] 328 | for item in loc_sens_map['value'][:unexplore_loc_id]: 329 | tmp += item 330 | fixed_idx = np.asarray(list(set(tmp))) 331 | # fixed_idx = np.where(np.sum(loc_sens_map['value'][:unexplore_loc_id], axis=0)>0)[0] 332 | logging.debug("Fixed IDs: %s" % str(fixed_idx)) 333 | # find out the 
bytes that can be mutated 334 | non_mutated_idx = np.asarray(list(set(non_mutated_idx) - set(fixed_idx))) 335 | logging.debug("#(potential idxes): %d" % len(non_mutated_idx)) 336 | # randomly select one idx from non_mutated_idx 337 | min_idx = 0 338 | idx_range = [] 339 | for comb_id in range(1, max_combination + 1): 340 | max_idx = min_idx + len(list(itertools.combinations(range(seed_len), comb_id))) 341 | idx_range += list(non_mutated_idx[np.where(np.logical_and(non_mutated_idx >= min_idx, non_mutated_idx 0: 344 | logging.debug("Select the mutation idx from %d-combination" % comb_id) 345 | np.random.shuffle(idx_range) 346 | return idx_range[0] 347 | return None 348 | 349 | def update_loc_sens_map(mutate_idx, diff_collection, loc_sens_map): 350 | loc_num = len(loc_sens_map['value']) 351 | for diff_id in diff_collection: 352 | if diff_id < loc_num: 353 | logging.debug("Update location sensitivity map! loc: %d; mutate id: %d" % (diff_id, mutate_idx)) 354 | loc_sens_map['value'][diff_id].append(mutate_idx) 355 | # loc_sens_map['value'][diff_id][mutate_idx] = 1 356 | return loc_sens_map 357 | 358 | def update_crash_sens_map(mutate_idx, crash_collection, crash_sens_map): 359 | if len(crash_collection) == 2: 360 | logging.debug("Update crash location sensitivity map! mutate id: %d" % mutate_idx) 361 | crash_sens_map['value'][mutate_idx] = 1 362 | return crash_sens_map 363 | 364 | def update_sens_map(mutate_idx, diff_collection, crash_collection, loc_sens_map, crash_sens_map): 365 | loc_sens_map = update_loc_sens_map(mutate_idx, diff_collection, loc_sens_map) 366 | crash_sens_map = update_crash_sens_map(mutate_idx, crash_collection, crash_sens_map) 367 | return loc_sens_map, crash_sens_map 368 | 369 | def mutate_inputs(seed, poc_fmt, mutation_num, mutate_idx): 370 | redundant_mutations = mutation_num*2 371 | inputs = np.tile(seed, (redundant_mutations, 1)) 372 | # get the mutate range for the specific mutate_idx 373 | for idx in mutate_idx: 374 | mutate_range = None 375 | for arg_fmt in poc_fmt: 376 | if idx >= arg_fmt[1] and idx < (arg_fmt[1] + arg_fmt[2]): 377 | mutate_range = arg_fmt[3] 378 | if mutate_range == None: 379 | raise Exception("ERROR: Cannot find the corresponding fmt -> mutate_idx: %s" % str(idx)) 380 | mutate_values = np.random.choice(mutate_range, redundant_mutations) 381 | inputs[:, idx] = mutate_values 382 | inputs = np.unique(inputs, axis = 0)[: mutation_num] 383 | return inputs 384 | 385 | def concentrate_fuzz(config_info): 386 | global TraceHashCollection, ReportCollection, SeedPool, SeedTraceHashList, TraceFolder, TmpFolder 387 | # init the randomization function 388 | np.random.seed(config_info['rand_seed']) 389 | logging.info("Initialized the random seed -> %d" % config_info['rand_seed']) 390 | 391 | '''Process the PoC''' 392 | # generate the trace for the poc 393 | trace, trace_hash = just_trace(0, config_info['poc'], config_info['poc_fmt'], config_info['trace_cmd'], config_info['trace_replace_idx']) 394 | logging.debug('PoC Hash: %s' % trace_hash) 395 | seed_len = len(config_info['poc']) 396 | # save the trace 397 | TraceHashCollection.append(trace_hash) 398 | path = os.path.join(TraceFolder, trace_hash) 399 | # utils.write_pkl(path, trace) 400 | np.savez(path, trace=trace) 401 | # add the report 402 | ReportCollection.append([trace_hash, 'm']) 403 | # add into seed pool 404 | SeedPool.append([False, config_info['poc']]) 405 | SeedTraceHashList.append(trace_hash) 406 | logging.info('Finish processing the poc!') 407 | 408 | stime = time() # starting time 409 | round_no 
= 0 410 | while(True): 411 | round_no += 1 412 | # choose seed & load seed_trace 413 | result = choose_seed() 414 | if len(result) == 0: 415 | logging.debug("[R-%d] Finish processing all the seeds!" % round_no) 416 | break 417 | selected_seed = result[1] 418 | selected_seed_trace_hash = result[0] 419 | logging.debug("[R-%d] Select seed -> %s" % (round_no, selected_seed_trace_hash)) 420 | logging.debug("The status of current seed pool:\n%s" % '\n'.join( 421 | ['%s: %s' % (SeedTraceHashList[id], str(SeedPool[id][0])) for id in range(len(SeedPool))])) 422 | trace_path = os.path.join(TraceFolder, selected_seed_trace_hash) + '.npz' 423 | if round_no == 1: 424 | selected_seed_trace = trace 425 | else: 426 | selected_seed_trace = np.load(trace_path) 427 | # selected_seed_trace = utils.read_pkl(trace_path) 428 | logging.info('len(Seed Trace): %d' % len(selected_seed_trace)) 429 | # initialize sensitivity map 430 | crash_sensitivity_map, loc_sensitivity_map = init_sensitivity_map(seed_len, len(selected_seed_trace), config_info['#combination']) 431 | 432 | # check each selected seed 433 | subround_no = 0 434 | while(True): 435 | subround_no += 1 436 | # select mutate byte 437 | mutate_idx = select_mutate_idx(loc_sensitivity_map, seed_len, config_info['#combination']) 438 | if mutate_idx == None: # exist if all the bytes get mutated 439 | break 440 | logging.debug('[R-%d-%d] Select the mutate idx -> %s: %s' % (round_no, subround_no, str(mutate_idx), str(loc_sensitivity_map['idx'][mutate_idx]))) 441 | loc_sensitivity_map['tag'][mutate_idx] = 1 442 | # mutate inputs 443 | inputs = mutate_inputs(selected_seed, config_info['poc_fmt'], config_info['#mutation'], loc_sensitivity_map['idx'][mutate_idx]) 444 | logging.debug("Shape(mutated_inputs): %s" % str(inputs.shape)) 445 | # execute all the mutated inputs 446 | result_collection = [] # each element is in the fmt of [id, trace, trace_hash, crash_result, trace_diff_id] 447 | input_num = len(inputs) 448 | pool = Pool(utils.ProcessNum) 449 | for input_no in range(input_num): 450 | pool.apply_async( 451 | gen_report, 452 | args = (input_no, inputs[input_no], config_info['poc_fmt'], config_info['trace_cmd'], config_info['trace_replace_idx'], 453 | config_info['crash_cmd'], config_info['crash_replace_idx'], config_info['crash_tag'], selected_seed_trace), 454 | callback = result_collection.append 455 | ) 456 | pool.close() 457 | pool.join() 458 | logging.debug("#(Missed): %d" % (input_num-len(result_collection))) 459 | # Delete all the tmp files 460 | shutil.rmtree(TmpFolder) 461 | os.mkdir(TmpFolder) 462 | # if input_num != len(result_collection): 463 | # missed_ids = set(range(input_num)) - set([item[0] for item in result_collection]) 464 | # missed_inputs = [inputs[id] for id in missed_ids] 465 | # output_path = os.path.join(OutFolder, 'missed_inputs.pkl') 466 | # utils.write_pkl(output_path, missed_inputs) 467 | # raise Exception("ERROR: #execution does not match with #input. 
-> Missed inputs can be found in %s" % output_path) 468 | # collect all the trace 469 | diff_collection = set() 470 | crash_collection = {'m'} 471 | for item in result_collection: 472 | diff_collection.add(item[4]) 473 | crash_collection.add(item[3]) 474 | # save the trace 475 | if item[2] not in TraceHashCollection: 476 | TraceHashCollection.append(item[2]) 477 | trace_path = os.path.join(TraceFolder, item[2]) 478 | # utils.write_pkl(trace_path, item[1]) 479 | np.savez(trace_path, trace=item[1]) 480 | # check whether to add it into the seed pool 481 | if item[3] == 'm' and item[2] not in SeedTraceHashList: 482 | SeedPool.append([False, inputs[item[0]]]) 483 | SeedTraceHashList.append(item[2]) 484 | # Update reports 485 | if [item[2], item[3]] not in ReportCollection: 486 | ReportCollection.append([item[2], item[3]]) 487 | logging.debug("#Diff: %d; #ExeResult: %d; #seed: %d" % (len(diff_collection), len(crash_collection), len(SeedPool))) 488 | # update sensitivity map 489 | loc_sensitivity_map, crash_sensitivity_map = update_sens_map(mutate_idx, diff_collection, crash_collection, loc_sensitivity_map, crash_sensitivity_map) 490 | # check whether it timeouts or not 491 | ctime = time() 492 | duration = ctime-stime 493 | if(duration >= config_info['local_timeout']): # exist if it timeouts 494 | logging.debug("[R-%d-%d] Timeout locally! -> Duration: %f (%f - %f) in seconds" % (round_no, subround_no, duration, ctime, stime)) 495 | break 496 | # check whether all the locations get explored or not. 497 | unexplore_loc_idx_list = np.where(np.asarray([len(item) for item in loc_sensitivity_map['value']]) == 0)[0] 498 | logging.debug("[R-%d-%d] #(Unexplored Locs): %d" % (round_no, subround_no, len(unexplore_loc_idx_list))) 499 | if len(unexplore_loc_idx_list) == 0: 500 | logging.debug("[R-%d-%d] Finish exploring all the locs!" % (round_no, subround_no)) 501 | break 502 | # loc_tag = np.where(np.sum(loc_sensitivity_map['value'], axis = 1) > 0)[0] 503 | # if len(loc_tag) >= len(selected_seed_trace): 504 | # logging.debug("[R-%d-%d] Finish exploring all the locs!" % (round_no, subround_no)) 505 | # break 506 | # processing the local sensitivity (for saving the hard disk) 507 | loc_sens = [] 508 | loc_idxes = [] 509 | loc_num = len(loc_sensitivity_map['value']) 510 | for loc_id in range(loc_num): 511 | if len(loc_sensitivity_map['value'][loc_id]) > 0: 512 | loc_idxes.append(loc_id) 513 | loc_sens.append(loc_sensitivity_map['value'][loc_id]) 514 | # loc_sens = [] 515 | # loc_idxes = [] 516 | # loc_num = len(loc_sensitivity_map['value']) 517 | # for loc_id in range(loc_num): 518 | # tmp = np.where(loc_sensitivity_map['value'][loc_id]>0)[0] 519 | # if len(tmp) > 0: 520 | # loc_idxes.append(loc_id) 521 | # loc_sens.append(tmp) 522 | # save the sensitivity map 523 | sensitivity_filepath = os.path.join(OutFolder, 'sensitivity_%s.pkl' % selected_seed_trace_hash) 524 | logging.debug("Start saving the sensitivity map -> %s" % sensitivity_filepath) 525 | info = { 526 | 'idx': loc_sensitivity_map['idx'], 527 | 'loc_idx': loc_idxes, 528 | 'loc_sens': loc_sens, 529 | 'crash_sens': list(crash_sensitivity_map['value']), 530 | 'loc_tag': list(loc_sensitivity_map['tag']) 531 | } 532 | utils.write_pkl(sensitivity_filepath, info) 533 | logging.debug("Finish writing the sensitivity map -> %s" % sensitivity_filepath) 534 | # check whether it timeouts 535 | ctime = time() 536 | duration = ctime - stime 537 | if (duration >= config_info['global_timeout']): 538 | logging.debug("[R-%d] Timeout! 
-> Duration: %f (%f - %f) in seconds" % (round_no, duration, ctime, stime)) 539 | break 540 | 541 | # save all the remaining info 542 | report_filepath = os.path.join(OutFolder, 'reports.pkl') 543 | utils.write_pkl(report_filepath, ReportCollection) 544 | logging.debug("Finish writing all the reports!") 545 | 546 | seed_filepath = os.path.join(OutFolder, 'seeds.pkl') 547 | utils.write_pkl(seed_filepath, SeedPool) 548 | logging.debug("Finish writing all the seeds!") 549 | 550 | seed_hash_filepath = os.path.join(OutFolder, 'seed_hashes.pkl') 551 | utils.write_pkl(seed_hash_filepath, SeedTraceHashList) 552 | logging.debug("Finish writing all the hash of seeds!") 553 | logging.debug('Done!') 554 | 555 | if __name__ == '__main__': 556 | tag, config_info, verbose = parse_args() 557 | init_log(tag, verbose, config_info['folder']) 558 | concentrate_fuzz(config_info) 559 | -------------------------------------------------------------------------------- /code/iftracer.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/code/iftracer.zip -------------------------------------------------------------------------------- /code/parse_dwarf.py: -------------------------------------------------------------------------------- 1 | from elftools.elf.elffile import ELFFile 2 | import subprocess 3 | import operator 4 | import string 5 | import logging 6 | import utils 7 | import os 8 | 9 | def find_end_curly_bracket(file_path, start_line, end_line): 10 | content = utils.read_txt(file_path) 11 | current_line = end_line - 1 12 | start_line = start_line - 1 13 | tag = False 14 | while(current_line >= start_line): 15 | if len(content[current_line]) > 0 and content[current_line][0] == '}': 16 | tag = True 17 | break 18 | current_line = current_line - 1 19 | if tag: 20 | return current_line + 1 21 | raise Exception("ERROR: Cannot find the last curly bracket within [%d, %d]" % (start_line, end_line)) 22 | 23 | # A function to do the readelf part which gives me linenumer :: address for each file 24 | def readELF(filepath, flineNumberDict, mainLine, srcfilepath): 25 | filename = srcfilepath.split('/')[-1] 26 | 27 | p1 = subprocess.Popen(['readelf', '-wL', filepath], stdout=subprocess.PIPE, stderr=subprocess.PIPE) 28 | out, err = p1.communicate() 29 | outlist = out.split('File name') 30 | mainAddrList = [] 31 | '''Lists that maintain the file starting and ending boundaries''' 32 | fileBoundRangesDict = {} 33 | fileBoundRangesList = [] 34 | fileBoundIndexList = [] 35 | found = False 36 | 37 | ''' Get all the filenames''' 38 | for out in outlist: 39 | out = 'File name' + out 40 | paragraphs = out.split('\n\n') 41 | 42 | firstFound = False 43 | first = -1 44 | 45 | for paragraph in paragraphs: 46 | start = 0 47 | paragraph += '\n' 48 | 49 | lines = paragraph.split('\n') 50 | for line in lines: 51 | a = line.rstrip('\n').split(None) 52 | # print(a) 53 | if len(a) < 3: 54 | continue 55 | if a[2][0:2] == '0x': 56 | # print(a[0]) 57 | if not (a[0] in fileBoundRangesDict): 58 | # if not firstFound: 59 | first = a[2] 60 | firstFound = True 61 | pfilename = a[0] 62 | fileBoundRangesDict[pfilename] = int(first, 16) 63 | 64 | if not (a[0] in flineNumberDict): 65 | flineNumberDict[a[0]] = {} 66 | 67 | flineNumberDict[a[0]][a[2]] = a[1] 68 | 69 | ''' Assuming that the main function is present in the file correspnding to source executable ''' 70 | if a[1] == str(mainLine) and (a[0] == filename): 71 | 
mainAddrList.append(a[2]) 72 | found = True 73 | 74 | sorted_fileBoundRangesDict = sorted(fileBoundRangesDict.items(), key=operator.itemgetter(1)) 75 | 76 | fileBoundRangesList = [x[1] for x in sorted_fileBoundRangesDict] 77 | fileBoundIndexList = [x[0] for x in sorted_fileBoundRangesDict] 78 | 79 | if found: 80 | return mainAddrList[0], fileBoundRangesList, fileBoundIndexList 81 | return None, fileBoundRangesList, fileBoundIndexList 82 | 83 | def get_var_size(die_dict, type_die_idx): 84 | type_die = die_dict[type_die_idx] 85 | if type_die.tag == 'DW_TAG_base_type': 86 | type_size = type_die.attributes['DW_AT_byte_size'].value 87 | return ['basic', type_size] 88 | elif type_die.tag == 'DW_TAG_array_type': 89 | new_type_die_idx = type_die.cu.cu_offset + type_die.attributes['DW_AT_type'].value 90 | tmp = get_var_size(die_dict, new_type_die_idx) 91 | element_num = -1 92 | for sub_die in type_die.iter_children(): 93 | if sub_die.tag == 'DW_TAG_subrange_type' and 'DW_AT_upper_bound' in sub_die.attributes: 94 | element_num = sub_die.attributes['DW_AT_upper_bound'].value 95 | if element_num < 0: 96 | raise Exception("ERROR: Cannot find the #elements in the array!\n%s" % type_die.__str__()) 97 | return ['array', tmp[1]*element_num] 98 | elif type_die.tag == 'DW_TAG_pointer_type': 99 | new_type_die_idx = type_die.cu.cu_offset + type_die.attributes['DW_AT_type'].value 100 | tmp = get_var_size(die_dict, new_type_die_idx) 101 | if tmp[0][0] == '*': 102 | return ['*'+tmp[0], tmp[1]] 103 | else: 104 | return ['*', tmp[1]] 105 | elif type_die.tag == 'DW_TAG_structure_type': 106 | if 'DW_AT_declaration' in type_die.attributes and type_die.attributes['DW_AT_declaration'].value: 107 | return ['struct', -1] 108 | else: 109 | return ['struct', type_die.attributes['DW_AT_byte_size'].value] 110 | elif type_die.tag in ['DW_TAG_typedef', 'DW_TAG_const_type']: 111 | new_type_die_idx = type_die.cu.cu_offset + type_die.attributes['DW_AT_type'].value 112 | return get_var_size(die_dict, new_type_die_idx) 113 | else: 114 | raise Exception("ERROR: Unknown type! 
-> %s" % type_die) 115 | 116 | class DwarfParser(): 117 | def __init__(self, bin_path): 118 | with open(bin_path, 'rb') as f: 119 | elffile = ELFFile(f) 120 | self.dwarfinfo = elffile.get_dwarf_info() 121 | logging.debug("Read dwarf info from file -> %s" % bin_path) 122 | self.bin_path = bin_path 123 | 124 | def bin2func(self, target_addr): 125 | target_cu = None 126 | for CU in self.dwarfinfo.iter_CUs(): 127 | top_die = CU.get_top_DIE() 128 | try: 129 | cu_min_addr = top_die.attributes['DW_AT_low_pc'].value 130 | cu_max_addr = cu_min_addr + top_die.attributes['DW_AT_high_pc'].value 131 | except: 132 | logging.debug("Warning: Cannot find the DW_AT_low_pc & DW_AT_high_pc attributes!\n" + top_die.__str__()) 133 | else: 134 | if target_addr >= cu_min_addr and target_addr < cu_max_addr: 135 | target_cu = CU 136 | break 137 | if target_cu == None: 138 | raise Exception("ERROR: Cannot find the CU containing the target addr -> %s" % target_addr) 139 | 140 | target_die = None 141 | next_die = None 142 | for die in target_cu.iter_DIEs(): 143 | if die.tag == 'DW_TAG_subprogram': 144 | try: 145 | die_min_addr = die.attributes['DW_AT_low_pc'].value 146 | die_max_addr = die_min_addr + die.attributes['DW_AT_high_pc'].value 147 | except: 148 | logging.debug("Warning: Cannot find the DW_AT_low_pc & DW_AT_high_pc attributes!\n" + die.__str__()) 149 | else: 150 | if target_die != None: 151 | next_die = die 152 | break 153 | if target_addr >= die_min_addr and target_addr < die_max_addr: 154 | target_die = die 155 | if target_die == None: 156 | raise Exception("ERROR: Cannot find the function containing the target addr -> %s" % target_addr) 157 | file_dir = target_die.cu.get_top_DIE().attributes['DW_AT_comp_dir'].value 158 | file_name = target_die.cu.get_top_DIE().attributes['DW_AT_name'].value 159 | file_path = os.path.join(file_dir, file_name) 160 | func_name = target_die.attributes['DW_AT_name'].value 161 | func_decl_line = target_die.attributes['DW_AT_decl_line'].value 162 | logging.info("The address <%d> can be found below:\nfile: %s\nfunc name: %s\nfunc decl line: %s" % ( 163 | target_addr, file_path, func_name, func_decl_line)) 164 | if next_die == None: 165 | raise Exception("ERROR: Cannot find the next function after function <%s>" % func_name) 166 | next_func_decl_line = next_die.attributes['DW_AT_decl_line'].value 167 | logging.info('The starting line of the function after <%s>: %d' % (func_name, next_func_decl_line)) 168 | last_curly_bracket_line = find_end_curly_bracket(file_path, func_decl_line, next_func_decl_line) 169 | logging.info('The ending line of function <%s>: %d' % (func_name, last_curly_bracket_line)) 170 | 171 | return file_path, func_name, func_decl_line, last_curly_bracket_line, target_die 172 | 173 | def get_func_src_bound(self): 174 | func_src_bounds = {} 175 | for CU in self.dwarfinfo.iter_CUs(): 176 | top_die = CU.get_top_DIE() 177 | filepath = os.path.join(top_die.attributes['DW_AT_comp_dir'].value, top_die.attributes['DW_AT_name'].value) 178 | func_src_bounds[filepath] = {} 179 | for DIE in CU.iter_DIEs(): 180 | if DIE.tag == "DW_TAG_subprogram": 181 | if "DW_AT_name" not in DIE.attributes or "DW_AT_decl_line" not in DIE.attributes: 182 | continue 183 | func_name = DIE.attributes["DW_AT_name"].value 184 | src_line = DIE.attributes["DW_AT_decl_line"].value 185 | func_src_bounds[filepath][func_name] = src_line 186 | return func_src_bounds 187 | 188 | def get_main_addr(self): 189 | func_bounds = self.get_func_src_bound() 190 | main_tag = 0 191 | src_filepath = '' 192 | main_line 
= -1 193 | for filepath in func_bounds: 194 | for func_name in func_bounds[filepath]: 195 | if func_name == 'main': 196 | main_tag += 1 197 | src_filepath = filepath 198 | main_line = func_bounds[filepath]['main'] 199 | if main_tag != 1: 200 | raise Exception("ERROR: There are %d main functions." % main_tag) 201 | logging.info("Here is the main function in the source -> %s: %d" % (src_filepath, main_line)) 202 | 203 | flineNumberDict = {} 204 | main_addr, fileBoundRangesList, fileBoundIndexList = readELF(self.bin_path, flineNumberDict, main_line, src_filepath) 205 | logging.info("Here is the main address in binary -> %s" % main_addr) 206 | return flineNumberDict, fileBoundRangesList, fileBoundIndexList, src_filepath 207 | 208 | def get_all_dies(self): 209 | die_dict = {} 210 | for CU in self.dwarfinfo.iter_CUs(): 211 | for DIE in CU.iter_DIEs(): 212 | die_dict[DIE.offset] = DIE 213 | return die_dict 214 | 215 | def get_live_vars(self, func_die, die_dict): 216 | # get global variables 217 | global_dies = [] 218 | CU = func_die.cu 219 | for die in CU.iter_DIEs(): 220 | if die.tag == 'DW_TAG_variable' and die.get_parent().tag == 'DW_TAG_compile_unit': 221 | if 'DW_AT_location' in die.attributes and len(die.attributes['DW_AT_location'].value) > 0 and die.attributes['DW_AT_location'].value[0] == 3: 222 | global_dies.append(die) 223 | # get all local variables 224 | die_info = {'var_dies': [], 'arg_dies': [], 'block_dies': []} 225 | for child_die in func_die.iter_children(): 226 | if child_die.tag == 'DW_TAG_variable': 227 | die_info['var_dies'].append(child_die) 228 | elif child_die.tag == 'DW_TAG_formal_parameter': 229 | die_info['arg_dies'].append(child_die) 230 | elif child_die.tag == 'DW_TAG_lexical_block': 231 | die_info['block_dies'].append(child_die) 232 | else: 233 | pass 234 | block_var_dies = {} 235 | for die in die_info['block_dies']: 236 | block_var_dies[die.offset] = [] 237 | for child_die in die.iter_children(): 238 | if child_die.tag == 'DW_TAG_variable': 239 | block_var_dies[die.offset].append(child_die) 240 | # get the type of all the variables 241 | live_vars_info = { 242 | 'lvars': [], 243 | 'args': [], 244 | 'gvars': [] 245 | } 246 | for die in die_info['var_dies']: 247 | live_vars_info['lvars'].append( 248 | self.parse_var(die_dict, die) 249 | ) 250 | for die in die_info['arg_dies']: 251 | live_vars_info['args'].append( 252 | self.parse_var(die_dict, die) 253 | ) 254 | for block_offset in block_var_dies: 255 | for die in block_var_dies[block_offset]: 256 | live_vars_info['lvars'].append( 257 | self.parse_var(die_dict, die) 258 | ) 259 | for die in global_dies: 260 | live_vars_info['gvars'].append( 261 | self.parse_var(die_dict, die) 262 | ) 263 | return live_vars_info 264 | 265 | 266 | 267 | def parse_var(self, die_dict, die): 268 | var_name = die.attributes['DW_AT_name'].value 269 | decl_line = die.attributes['DW_AT_decl_line'].value 270 | type_die_idx = die.cu.cu_offset + die.attributes['DW_AT_type'].value 271 | tmp = get_var_size(die_dict, type_die_idx) 272 | var_type = tmp[0] 273 | var_size = tmp[1] 274 | return var_name, decl_line, var_type, var_size 275 | 276 | # def extract_func_live_vars(self): 277 | 278 | def get_source_line(bin_path, target_addr_str): 279 | target_value = int(target_addr_str, 16) 280 | end_value = target_value + 1 281 | end_str = hex(end_value) 282 | cmd_list = ['objdump', '-S', '-l', '--start-address=%s' % target_addr_str, '--stop-address=%s' % end_str, bin_path] 283 | p1 = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, 
stderr=subprocess.PIPE) 284 | out, err = p1.communicate() 285 | content = out.split('\n') 286 | # process target_addr_str 287 | for id in range(len(target_addr_str)): 288 | if target_addr_str[id] not in ['0', 'x']: 289 | break 290 | tag = target_addr_str[id:] 291 | line_num = len(content) 292 | for line_no in range(line_num): 293 | if tag in content[line_no]: 294 | if (line_no+2) < line_num: 295 | return content[line_no+2] 296 | return None 297 | 298 | def get_bin_line(bin_path, target_src_str): 299 | cmd_list = ['objdump', '-S', '-l', bin_path] 300 | p1 = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 301 | out, err = p1.communicate() 302 | content = out.split('\n') 303 | # process target_src_str 304 | src_file = '-'.join(target_src_str.split('-')[:-1]) 305 | src_line_num = target_src_str.split('-')[-1] 306 | tag = src_file + ':' + src_line_num 307 | line_num = len(content) 308 | start_line_no_list = [] 309 | for line_no in range(line_num): 310 | if tag in content[line_no]: 311 | start_line_no_list.append(line_no) 312 | if len(start_line_no_list) == 0: 313 | raise Exception("Cannot find the target src line -> %s" % target_src_str) 314 | addr_collection = [] 315 | for start_line_no in start_line_no_list: 316 | tag = False 317 | addr_list = [] 318 | for line_no in range(start_line_no, line_num): 319 | line = content[line_no].split() 320 | if len(line) == 0: 321 | if tag: 322 | break 323 | else: 324 | continue 325 | if line[0][-1] == ':': 326 | tmp = line[0][:-1] 327 | addr_tag = True 328 | for tmp2 in tmp: 329 | if tmp2 not in string.hexdigits: 330 | addr_tag = False 331 | break 332 | if addr_tag and tag == False: 333 | tag = True 334 | addr_list.append('0x' + '0'*(16-len(tmp)) + tmp) 335 | elif addr_tag and tag: 336 | addr_list.append('0x' + '0'*(16-len(tmp)) + tmp) 337 | elif addr_tag == False and tag: 338 | break 339 | else: 340 | continue 341 | else: 342 | if tag: 343 | break 344 | addr_collection += addr_list 345 | return addr_collection 346 | 347 | 348 | -------------------------------------------------------------------------------- /code/patchloc.py: -------------------------------------------------------------------------------- 1 | import parse_dwarf 2 | import numpy as np 3 | import utils 4 | import argparse 5 | import os 6 | import string 7 | import logging 8 | import subprocess 9 | import ConfigParser 10 | import tracer 11 | from copy import deepcopy as dc 12 | from multiprocessing import Pool 13 | 14 | NPZTag = False 15 | Assem = '' 16 | 17 | def process_poc_trace(poc_trace_path, bin_path, target_src_str): 18 | if NPZTag: 19 | tmp = np.load(poc_trace_path) 20 | poc_trace = tmp['trace'] 21 | else: 22 | poc_trace = utils.read_pkl(poc_trace_path) 23 | poc_trace = np.asarray(poc_trace) 24 | if len(target_src_str) == 0: 25 | return poc_trace 26 | else: 27 | insn_list = parse_dwarf.get_bin_line(bin_path, target_src_str) 28 | insn_idx_list = [] 29 | for insn in insn_list: 30 | insn_idx_list += list(np.where(poc_trace == insn)[0]) 31 | if len(insn_idx_list) == 0: 32 | raise Exception("ERROR: Cannot find the instructions for source -> %s" % target_src_str) 33 | max_id = max(insn_idx_list) 34 | return poc_trace[:max_id+1] 35 | 36 | def read_single_trace(folder_path, file_name, file_no): 37 | if file_no % 100 == 0: 38 | print('Reading %d_th trace' % file_no) 39 | file_path = os.path.join(folder_path, file_name) 40 | if NPZTag: 41 | tmp = np.load(file_path) 42 | content = tmp['trace'] 43 | trace_hash = file_name.split('.')[0] 44 | else: 45 | content = 
utils.read_pkl(file_path) 46 | trace_hash = file_name 47 | unique_insns = np.unique(np.asarray(content)) 48 | temp = [trace_hash, unique_insns] 49 | return temp 50 | 51 | def init_count_dict(valid_insns): 52 | count_dict = {} 53 | for insn in valid_insns: 54 | count_dict[insn] = 0 55 | return count_dict 56 | 57 | def read_all_reports(report_file, trace_folder, process_num): 58 | file_list = os.listdir(trace_folder) 59 | file_num = len(file_list) 60 | trace_collection = [] 61 | pool = Pool(process_num) 62 | for file_no in range(file_num): 63 | pool.apply_async( 64 | read_single_trace, 65 | args = (trace_folder, file_list[file_no], file_no), 66 | callback = trace_collection.append 67 | ) 68 | pool.close() 69 | pool.join() 70 | print('Finish reading all the traces') 71 | trace_dict = {} 72 | for item in trace_collection: 73 | trace_dict[item[0]] = item[1] 74 | # read reports 75 | reports = utils.read_pkl(report_file) 76 | # split reports 77 | report_dict = { 78 | 'm': [], 'b': [] 79 | } 80 | for item in reports: 81 | report_dict[item[1]].append(item[0]) 82 | print('Finish splitting the reports into two categories!') 83 | return trace_dict, report_dict 84 | 85 | def count(report_list, dest_dict, trace_dict): 86 | target_insn_set = set(dest_dict.keys()) 87 | for trace_hash in report_list: 88 | intersect_set = set(trace_dict[trace_hash]) & target_insn_set 89 | for insn in intersect_set: 90 | dest_dict[insn] += 1 91 | 92 | def normalize_score(score): 93 | max_value = np.max(score) 94 | min_value = np.min(score) 95 | if max_value == min_value: 96 | logging.info('max_value == min_value in normalization') 97 | return score 98 | else: 99 | normalized_score = (score - min_value) / (max_value - min_value) 100 | return normalized_score 101 | 102 | def group_scores(scores): 103 | insn_num = len(scores) 104 | group_info = [] 105 | group_value = -1 106 | group_list = [] 107 | for insn_no in range(insn_num): 108 | if group_value < 0: 109 | group_value = scores[insn_no] 110 | group_list.append(insn_no) 111 | else: 112 | if group_value == scores[insn_no]: 113 | group_list.append(insn_no) 114 | else: 115 | group_info.append(group_list) 116 | group_list = [insn_no] 117 | group_value = scores[insn_no] 118 | group_info.append(group_list) 119 | return group_info 120 | 121 | def calc_scores(valid_insns, tc_num_dict, t_num_dict, malicious_num, output_path): 122 | tc_num_list = np.asarray([tc_num_dict[insn] for insn in valid_insns], dtype=np.float) 123 | t_num_list = np.asarray([t_num_dict[insn] for insn in valid_insns], dtype=np.float) 124 | n_score = tc_num_list / float(malicious_num) 125 | s_score = tc_num_list / t_num_list 126 | normalized_nscore = normalize_score(n_score) 127 | normalized_sscore = normalize_score(s_score) 128 | l2_norm = np.sqrt(normalized_nscore ** 2 + normalized_sscore ** 2) 129 | print('Calculated all the scores!') 130 | sorted_idx_list = np.argsort(-l2_norm) 131 | # sorting all the insns 132 | valid_insns = valid_insns[sorted_idx_list] 133 | tc_num_list = tc_num_list[sorted_idx_list] 134 | t_num_list = t_num_list[sorted_idx_list] 135 | n_score = n_score[sorted_idx_list] 136 | s_score = s_score[sorted_idx_list] 137 | normalized_nscore = normalized_nscore[sorted_idx_list] 138 | normalized_sscore = normalized_sscore[sorted_idx_list] 139 | l2_norm = l2_norm[sorted_idx_list] 140 | print('Sorted all the scores') 141 | # group the insns according to its score 142 | group_info = group_scores(l2_norm) 143 | np.savez(output_path, 144 | insns=valid_insns, tc_num=tc_num_list, t_num=t_num_list, 
nscore=n_score, sscore=s_score, 145 | normalized_nscore=normalized_nscore, normalized_sscore=normalized_sscore, l2_norm=l2_norm, 146 | group_idx=group_info) 147 | return valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore 148 | 149 | def count_all(valid_insns, report_dict, trace_dict, output_path): 150 | malicious_num = len(report_dict['m']) 151 | benign_num = len(report_dict['b']) 152 | logging.info("#reports: %d (#malicious: %d; #benign: %d)" % (malicious_num + benign_num, malicious_num, benign_num)) 153 | # initialize all the count info 154 | tc_num_dict = init_count_dict(valid_insns) 155 | t_num_dict = init_count_dict(valid_insns) 156 | # count number(t_i & c) 157 | count(report_dict['m'], tc_num_dict, trace_dict) 158 | count(report_dict['m'] + report_dict['b'], t_num_dict, trace_dict) 159 | valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore = calc_scores(valid_insns, tc_num_dict, t_num_dict, malicious_num, output_path) 160 | return valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore 161 | 162 | def rank(poc_trace_path, bin_path, target_src_str, report_file, trace_folder, process_num, npz_path): 163 | # process the poc trace 164 | poc_trace = process_poc_trace(poc_trace_path, bin_path, target_src_str) 165 | unique_insn = np.unique(poc_trace) 166 | # read all the important files 167 | trace_dict, report_dict = read_all_reports(report_file, trace_folder, process_num) 168 | # count 169 | valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore = count_all(unique_insn, report_dict, trace_dict, npz_path) 170 | return poc_trace, valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore 171 | 172 | def calc_distance(poc_trace, insns): 173 | distance_list = [] 174 | for insn in insns: 175 | distance_list.append( 176 | np.max(np.where(poc_trace == insn)[0]) 177 | ) 178 | return distance_list 179 | 180 | def insn2src(bin_path, insn): 181 | global Assem 182 | if len(Assem) == 0: 183 | cmd_list = ['objdump', '-S', '-l', bin_path] 184 | p1 = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 185 | out, err = p1.communicate() 186 | content = out.split('\n') 187 | Assem = content 188 | else: 189 | content = Assem 190 | line_num = len(content) 191 | target_insn = insn[-6:] + ':' 192 | target_line_no = -1 193 | for line_no in range(line_num): 194 | line = content[line_no].split() 195 | if len(line) > 0 and line[0] == target_insn: 196 | target_line_no = line_no 197 | break 198 | if target_line_no < 0: 199 | raise Exception("ERROR: Cannot find the instruction -> %s" % insn) 200 | while(target_line_no >= 0): 201 | line = content[target_line_no] 202 | tmp = line.split() 203 | if len(tmp) >= 1 and ':' in tmp[0]: 204 | tmp2 = tmp[0].split(':') 205 | tag = True 206 | for tmp3 in tmp2[1]: 207 | if tmp3 not in string.digits: 208 | tag = False 209 | break 210 | if os.path.exists(tmp2[0]) and tag: 211 | return tmp[0].split('/')[-1] 212 | target_line_no = target_line_no - 1 213 | logging.info("Cannot find the source code for instruction -> %s" % insn) 214 | return "UNKNOWN" 215 | 216 | def show(bin_path, poc_trace, valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore, show_num): 217 | group_num = len(group_info) 218 | show_no = 0 219 | for group_no in range(group_num): 220 | insn_id_list = np.asarray(group_info[group_no]) 221 | insns = valid_insns[insn_id_list] 222 | distance_list = calc_distance(poc_trace, insns) 223 | sorted_idx_list = np.argsort(-np.asarray(distance_list)) 224 | 
sorted_insn_id_list = insn_id_list[sorted_idx_list] 225 | 226 | for insn_id in sorted_insn_id_list: 227 | logging.info("[INSN-%d] %s -> %s (l2norm: %f; normalized(N): %f; normalized(S): %f)" % ( 228 | show_no, valid_insns[insn_id], insn2src(bin_path, valid_insns[insn_id]), l2_norm[insn_id], normalized_nscore[insn_id], normalized_sscore[insn_id] 229 | )) 230 | show_no += 1 231 | if show_no >= show_num: 232 | break 233 | if show_no >= show_num: 234 | break 235 | 236 | def parse_args(): 237 | parser = argparse.ArgumentParser(description="PatchLoc") 238 | parser.add_argument("--config_file", dest="config_file", type=str, required=True, 239 | help="The path of config file") 240 | parser.add_argument("--tag", dest="tag", type=str, required=True, 241 | help="The cve tag") 242 | parser.add_argument("--func", dest="func", type=str, required=True, 243 | help="The function for execution (calc/show)") 244 | parser.add_argument("--out_folder", dest="out_folder", type=str, required=True, 245 | help="The path of output folder which is named according to the timestamp") 246 | parser.add_argument("--poc_trace_hash", dest="poc_trace_hash", type=str, required=True, 247 | help="The hash of executing trace of poc") 248 | parser.add_argument("--target_src_str", dest="target_src_str", type=str, default="", 249 | help="The source line at the crash location") 250 | parser.add_argument("--process_num", dest="process_num", type=int, default=10, 251 | help="The number of processes") 252 | parser.add_argument("--show_num", dest="show_num", type=int, default=10, help="The number of instructions to show") 253 | args = parser.parse_args() 254 | 255 | config = ConfigParser.ConfigParser() 256 | config.read(args.config_file) 257 | if args.tag not in config.sections(): 258 | raise Exception("ERROR: Please provide the configuration file for %s" % args.tag) 259 | 260 | detailed_config = {} 261 | for item in config.items(args.tag): 262 | if item[0] == 'folder': 263 | if not os.path.exists(item[1]): 264 | raise Exception("ERROR: The folder does not exist -> %s" % item[1]) 265 | detailed_config[item[0]] = item[1] 266 | else: 267 | detailed_config[item[0]] = item[1].split(';') 268 | 269 | if 'bin_path' in detailed_config: 270 | bin_path = detailed_config['bin_path'][0] 271 | if not os.path.exists(bin_path): 272 | raise Exception("ERROR: Binary file does not exist -> %s" % bin_path) 273 | detailed_config['bin_path'] = bin_path 274 | else: 275 | raise Exception("ERROR: Please specify the binary file in config.ini") 276 | 277 | trace_folder = os.path.join(args.out_folder, 'traces') 278 | if not os.path.exists(trace_folder): 279 | raise Exception("ERROR: Unknown folder -> %s" % trace_folder) 280 | detailed_config['trace_folder'] = trace_folder 281 | 282 | poc_trace_path = os.path.join(trace_folder, args.poc_trace_hash) 283 | if not os.path.exists(poc_trace_path): 284 | poc_trace_path = poc_trace_path + '.npz' 285 | if not os.path.exists(poc_trace_path): 286 | raise Exception("ERROR: Unknown file path -> %s" % poc_trace_path) 287 | else: 288 | global NPZTag 289 | NPZTag = True 290 | detailed_config['poc_trace_path'] = poc_trace_path 291 | 292 | report_file = os.path.join(args.out_folder, 'reports.pkl') 293 | if not os.path.exists(report_file): 294 | raise Exception("ERROR: Unknown file path -> %s" % report_file) 295 | detailed_config['report_file'] = report_file 296 | 297 | npz_path = os.path.join(args.out_folder, 'var_ranking.npz') 298 | detailed_config['npz_path'] = npz_path 299 | 300 | return args.func, args.target_src_str, 
detailed_config, args.process_num, args.show_num, args.out_folder 301 | 302 | def init_log(out_folder): 303 | log_path = os.path.join(out_folder, 'patchloc.log') 304 | logging.basicConfig(filename=log_path, filemode='a+', level=logging.DEBUG, 305 | format="[%(asctime)s-%(funcName)s-%(levelname)s]: %(message)s", 306 | datefmt="%d-%b-%y %H:%M:%S") 307 | console = logging.StreamHandler() 308 | console.setLevel(logging.INFO) 309 | console_fmt = logging.Formatter(fmt="[%(asctime)s-%(funcName)s-%(levelname)s]: %(message)s", 310 | datefmt="%d-%b-%y %H:%M:%S") 311 | console.setFormatter(console_fmt) 312 | logging.getLogger().addHandler(console) 313 | logging.info("Output Folder: %s" % out_folder) 314 | 315 | def controller(tag, target_src_str, config_info, process_num, show_num): 316 | if tag == 'calc': 317 | poc_trace, valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore = rank( 318 | config_info['poc_trace_path'], config_info['bin_path'], target_src_str, 319 | config_info['report_file'], config_info['trace_folder'], process_num, config_info['npz_path']) 320 | # get_src_trace(config_info, out_folder) 321 | show(config_info['bin_path'], poc_trace, valid_insns, group_info, l2_norm, normalized_nscore, normalized_sscore, show_num) 322 | elif tag == 'show': 323 | # process the poc trace 324 | poc_trace = process_poc_trace(config_info['poc_trace_path'], config_info['bin_path'], target_src_str) 325 | 326 | if not os.path.exists(config_info['npz_path']): 327 | raise Exception("ERROR: The .npz file does not exist -> %s" % config_info['npz_path']) 328 | info = np.load(config_info['npz_path'], allow_pickle=True) 329 | 330 | show(config_info['bin_path'], poc_trace, info['insns'], info['group_idx'], info['l2_norm'], 331 | info['normalized_nscore'], info['normalized_sscore'], show_num) 332 | else: 333 | raise Exception("ERROR: Function tag does not exist -> %s" % tag) 334 | 335 | def get_src_trace(detailed_config, out_folder): 336 | # process the cmd 337 | trace_cmd = detailed_config['trace_cmd'] 338 | poc = detailed_config['poc'] 339 | replace_idx = np.where(np.asarray(trace_cmd) == '***')[0] 340 | cmd = dc(trace_cmd) 341 | replace_num = len(replace_idx) 342 | for id in range(replace_num): 343 | cmd[replace_idx[id]] = poc[id] 344 | # write the cmd 345 | cmd_path = os.path.join(out_folder, 'cmd.txt') 346 | utils.write_txt(cmd_path, [' '.join(cmd)]) 347 | # get binary path 348 | bin_path = detailed_config['bin_path'] 349 | # get the source trace 350 | tmp_folder = './tempDr' 351 | if not os.path.exists(tmp_folder): 352 | os.mkdir(tmp_folder) 353 | my_parser = parse_dwarf.DwarfParser(bin_path) 354 | flineNumberDict, fileBoundRangesList, fileBoundIndexList, src_filepath = my_parser.get_main_addr() 355 | ifSrcList = tracer.findIfSrcInOrderDyn(bin_path, src_filepath, flineNumberDict, fileBoundRangesList, fileBoundIndexList, cmdFile=cmd_path) 356 | logging.info("Got the source trace!") 357 | # process the source trace 358 | insn2src = {} 359 | src2insn = {} 360 | for item in ifSrcList: 361 | insn = item[0] 362 | src = '-'.join(item[1:3]) 363 | if insn not in insn2src: 364 | insn2src[insn] = src 365 | if src in src2insn: 366 | src2insn[src].add(insn) 367 | else: 368 | src2insn[src] = {insn} 369 | info = { 370 | 'raw': ifSrcList, 371 | 'insn2src': insn2src, 372 | 'src2insn': src2insn 373 | } 374 | # write the source trace 375 | output_path = os.path.join(out_folder, 'poc_source_trace.pkl') 376 | utils.write_pkl(output_path, info) 377 | logging.info("Recorded the source trace -> %s" % output_path) 378 
| return insn2src, src2insn 379 | 380 | if __name__ == '__main__': 381 | tag, target_src_str, config_info, process_num, show_num, out_folder = parse_args() 382 | init_log(out_folder) 383 | controller(tag, target_src_str, config_info, process_num, show_num) -------------------------------------------------------------------------------- /code/tracer.py: -------------------------------------------------------------------------------- 1 | import shlex 2 | import utils 3 | import env 4 | import subprocess 5 | from bisect import bisect_left 6 | from collections import defaultdict 7 | 8 | def ifTracer(cmd_list): 9 | # craft tracing command 10 | tracer_cmd_list = [env.dynamorio_path, '-c', env.iftracer_path, '--'] + cmd_list 11 | # execute command 12 | p1 = subprocess.Popen(tracer_cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 13 | out, err = p1.communicate() 14 | # parse the output 15 | if_list = [] 16 | for aline in out.split("\n"): 17 | if '0x00000000004' in aline: 18 | t = aline.split(' => ') 19 | if_list.append(t[0]) 20 | return if_list 21 | 22 | 23 | def exe_bin(cmd_list): 24 | p1 = subprocess.Popen(cmd_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 25 | out, err = p1.communicate() 26 | return out, err 27 | 28 | def readCBR(cmdFile): 29 | listAddr = [] 30 | lines = open(cmdFile, 'r').readlines() 31 | cmdline = lines[0].rstrip('\n') 32 | cmdlist = [env.dynamorio_path, '-c', env.libcbr_path, '--'] 33 | 34 | for each in shlex.split(cmdline): 35 | cmdlist.append(each) 36 | 37 | p1 = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 38 | out, err = p1.communicate() 39 | 40 | for aline in out.split('\n'): 41 | items = aline.split(':') 42 | 43 | if '0x' == items[0][0:2]: 44 | 45 | if not items[0] in listAddr: 46 | listAddr.append('0x' + items[0].lstrip('0x').lstrip('0')) 47 | 48 | # print(listAddr) 49 | return listAddr 50 | 51 | def tcheckIf(flineNumberDict, name, insID, fileBoundRangesList, fileBoundIndexList, fileAddrDict, lineAddrDict): 52 | ''' Search for 10 addresses behind the one found''' 53 | # print(lineNumberDict) 54 | found = False 55 | 56 | if not (name in lineAddrDict): 57 | 58 | fileToSearch = '' 59 | if not (name in fileAddrDict): 60 | index = bisect_left(fileBoundRangesList, int(name, 16)) - 1 61 | fileAddrDict[name] = fileBoundIndexList[index] 62 | fileToSearch = fileBoundIndexList[index] 63 | else: 64 | fileToSearch = fileAddrDict[name] 65 | 66 | for i in range(50): 67 | tnameInt = int(name, 16) - i 68 | tname = hex(tnameInt).rstrip('L') 69 | 70 | if tname in flineNumberDict[fileToSearch]: 71 | found = True 72 | lineAddrDict[name] = (flineNumberDict[fileToSearch][tname], fileToSearch) 73 | return [insID, flineNumberDict[fileToSearch][tname], name, fileToSearch] 74 | 75 | else: 76 | 77 | return [insID, lineAddrDict[name][0], name, lineAddrDict[name][1]] 78 | 79 | # if not found: 80 | # for i in range(10): 81 | # tname = hex(int(name, 16) + i).rstrip('L') 82 | 83 | # if tname in lineNumberDict: 84 | # found = True 85 | # return [insID, lineNumberDict[tname]] 86 | 87 | return None 88 | 89 | def findIfOrder(flineNumberDict, cmdFile, fileBoundRangesList, fileBoundIndexList): 90 | 91 | ifCollections = [] 92 | linesCBR = readCBR(cmdFile) 93 | fileAddrDict = {} 94 | lineAddrDict = {} 95 | 96 | for i in range(len(linesCBR)): 97 | addr = linesCBR[i] 98 | ifCollections.append(tcheckIf(flineNumberDict, addr, i, fileBoundRangesList, fileBoundIndexList, fileAddrDict, lineAddrDict)) 99 | 100 | idx_list = [] 101 | line_list = [] 102 | nameDict = 
defaultdict(str) 103 | 104 | for item in ifCollections: 105 | if item == None: 106 | pass 107 | else: 108 | idx_list.append(item[0]) 109 | line_list.append(item[1]) 110 | nameDict[item[2]]= (item[1], item[3]) 111 | 112 | return idx_list, line_list, nameDict 113 | 114 | def findIfSrcInOrderDyn(binFilePath, srcFilePath, flineNumberDict, fileBoundRangesList, fileBoundIndexList, 115 | cmdFile='cmd.txt', process_id=0, timeout=-1): 116 | # start = datetime.now() 117 | 118 | # flineNumberDict, fileBoundRangesList, fileBoundIndexList = getMainAddr(binFilePath, srcFilePath) 119 | ''' Get the linenumbers of conditional statements in the same file for which you got the line numbers, in cmpLineNumbers ''' 120 | idxList, cmpLineList, nameDict = findIfOrder(flineNumberDict, cmdFile, fileBoundRangesList, 121 | fileBoundIndexList) # need to save both idxList, cmpLineList 122 | srcLineList = [] 123 | 124 | fnameDict = {} 125 | addrDict = {} 126 | fnameSet = set() 127 | addrDictRev = {} 128 | 129 | i = 0 130 | fp = open('tempDr/m%d.out' % process_id, 'w') 131 | for key, value in nameDict.iteritems(): 132 | # fnameSet.add(value) 133 | addrDict[i] = key 134 | fnameDict[(key, value[0])] = value[1] 135 | addrDictRev[key] = i 136 | 137 | i += 1 138 | 139 | for key, value in nameDict.iteritems(): 140 | fp.write("%s %d\n" % (str(int(key, 16)), int(value[0]))) 141 | 142 | fp.close() 143 | 144 | timeout = 5 145 | if timeout > 0: 146 | cmdlist = ['timeout', str(timeout), env.dynamorio_path, '-client', env.iflinetracer_path, str(process_id), '--'] 147 | else: 148 | cmdlist = [env.dynamorio_path, '-client', env.iflinetracer_path, str(process_id), '--'] 149 | lines = open(cmdFile, 'r').readlines() 150 | cmdline = lines[0].rstrip('\n') 151 | 152 | for each in shlex.split(cmdline): 153 | cmdlist.append(each) 154 | 155 | p1 = subprocess.Popen(cmdlist, stdout=subprocess.PIPE, stderr=subprocess.PIPE) 156 | out, err = p1.communicate() 157 | # print(out, err) 158 | ifList = [] 159 | 160 | for aline in out.split('\n'): 161 | t = aline.split(' => ') 162 | if t[0][0:2] == '0x': 163 | addr = '0x' + t[0].lstrip('0x').lstrip('0') 164 | b = t[1].split(' ') 165 | ifList.append((t[0], fnameDict[(addr, b[0])], b[0], b[1])) 166 | 167 | return ifList 168 | -------------------------------------------------------------------------------- /code/utils.py: -------------------------------------------------------------------------------- 1 | import pickle 2 | import string 3 | import numpy as np 4 | import multiprocessing 5 | 6 | # system setup 7 | ProcessNum=np.min((10, multiprocessing.cpu_count())) 8 | 9 | # Used for generating the random filename 10 | FileNameChars = list(string.letters + string.digits) 11 | FileNameLen = 30 12 | 13 | ''' 14 | Process the binary file 15 | ''' 16 | def read_bin(path): 17 | with open(path, 'rb') as f: 18 | temp = f.readlines() 19 | temp = ''.join(temp) 20 | content = [ord(i) for i in temp] 21 | return content 22 | 23 | def write_bin(path, inputs): 24 | with open(path, 'wb') as f: 25 | f.write(bytearray(list(inputs))) 26 | 27 | 28 | ''' 29 | Process the normal text file 30 | ''' 31 | def read_txt(path): 32 | with open(path, 'r') as f: 33 | content = f.readlines() 34 | return content 35 | 36 | def write_txt(path, content): 37 | with open(path, 'w') as f: 38 | f.writelines(content) 39 | 40 | ''' 41 | Process the pickle file 42 | ''' 43 | def write_pkl(path, info): 44 | with open(path, 'w') as f: 45 | pickle.dump(info, f) 46 | 47 | def read_pkl(path): 48 | with open(path) as f: 49 | info = pickle.load(f) 50 | 
return info 51 | 52 | 53 | ''' 54 | Generating the temp filename 55 | ''' 56 | def gen_temp_filename(): 57 | return ''.join(np.random.choice(FileNameChars, FileNameLen)) 58 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_14745/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=e6ff33ca50c1180725dde11c84ee93fcdb4235ef 3 | 4 | PoC: 5 | https://sourceware.org/bugzilla/show_bug.cgi?id=22148 6 | 7 | Command: 8 | > cd /root/source/binutils 9 | > ./objdump -D /root/exploit 10 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_14745/cve_2017_14745.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zip libtool bison texinfo flex 7 | 8 | WORKDIR /root 9 | RUN git clone git://sourceware.org/git/binutils-gdb.git 10 | RUN mv binutils-gdb source 11 | WORKDIR /root/source 12 | RUN git checkout 7a31b38ef87d133d8204cae67a97f1989d25fa18 13 | RUN CC=gcc CXX=g++ CFLAGS="-DFORTIFY_SOURCE=2 -ggdb -Wno-error" CXXFLAGS="$CFLAGS" ./configure --disable-shared --disable-gdb --disable-libdecnumber --disable-readline --disable-sim LIBS='-ldl -lutil' 14 | RUN make CFLAGS="-ldl -lutil -ggdb -static" CXXFLAGS="-ldl -lutil -ggdb -static" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_14745/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/binutils/cve_2017_14745/exploit -------------------------------------------------------------------------------- /data/binutils/cve_2017_15020/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=1da5c9a485f3dcac4c45e96ef4b7dae5948314b5 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/10/03/binutils-heap-based-buffer-overflow-in-parse_die-dwarf1-c/ 6 | https://github.com/asarubbo/poc/blob/master/00376-binutils-heapoverflow-parse_die 7 | 8 | Command: 9 | > cd /root/source/binutils 10 | > ./nm-new -A -a -l -S -s --special-syms --synthetic --with-symbol-versions -D /root/exploit 11 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_15020/cve_2017_15020.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zip libtool bison texinfo flex 7 | 8 | WORKDIR /root 9 | RUN git clone git://sourceware.org/git/binutils-gdb.git 10 | WORKDIR /root/binutils-gdb 11 | RUN git checkout 11855d8a1f11b102a702ab76e95b22082cccf2f8 12 | RUN mv /root/binutils-gdb /root/source 13 | WORKDIR /root/source 14 | RUN CC=gcc CXX=g++ CFLAGS="-DFORTIFY_SOURCE=2 -ggdb -Wno-error" CXXFLAGS="$CFLAGS" ./configure --disable-shared --disable-gdb --disable-libdecnumber --disable-readline --disable-sim LIBS='-ldl -lutil' 15 | RUN make CFLAGS="-ldl 
-lutil -static -ggdb" CXXFLAGS="-static -ldl -lutil -ggdb" 16 | 17 | COPY ./exploit /root/exploit 18 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_15020/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/binutils/cve_2017_15020/exploit -------------------------------------------------------------------------------- /data/binutils/cve_2017_15025/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=d8010d3e75ec7194a4703774090b27486b742d48 3 | 4 | PoC: 5 | https://sourceware.org/bugzilla/show_bug.cgi?id=22186 6 | 7 | Command: 8 | > cd /root/source/binutils 9 | > ./nm-new -A -a -l -S -s --special-syms --synthetic --with-symbol-versions /root/exploit 10 | 11 | dwarf2.c:2442:34: runtime error: division by zero 12 | Floating point exception (core dumped) 13 | 14 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_15025/cve_2017_15025.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython texinfo bison flex 7 | 8 | WORKDIR /root 9 | RUN git clone git://sourceware.org/git/binutils-gdb.git 10 | RUN mv binutils-gdb source 11 | WORKDIR /root/source 12 | RUN git checkout 515f23e63c0074ab531bc954f84ca40c6281a724 13 | RUN CC=gcc CXX=g++ CFLAGS="-DFORTIFY_SOURCE=2 -fno-omit-frame-pointer -ggdb -Wno-error" CXXFLAGS="$CFLAGS" ./configure --disable-shared --disable-gdb --disable-libdecnumber --disable-readline --disable-sim 14 | RUN make 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_15025/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/binutils/cve_2017_15025/exploit -------------------------------------------------------------------------------- /data/binutils/cve_2017_6965/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;h=03f7786e2f440b9892b1c34a58fb26222ce1b493 3 | 4 | PoC: 5 | https://sourceware.org/bugzilla/show_bug.cgi?id=21137 6 | 7 | Command: 8 | > cd /root/source/binutils 9 | > ./readelf -w /root/exploit 10 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_6965/cve_2017_6965.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython clang texinfo bison flex 7 | 8 | WORKDIR /root 9 | RUN git clone git://sourceware.org/git/binutils-gdb.git 10 | RUN mv binutils-gdb source 11 | WORKDIR /root/source 12 | RUN git checkout 53f7e8ea7fad1fcff1b58f4cbd74e192e0bcbc1d 13 | RUN CC=clang CFLAGS="-DFORTIFY_SOURCE=2 -ggdb -Wno-error" ./configure --disable-shared --disable-gdb --disable-libdecnumber 
--disable-readline --disable-sim 14 | RUN make 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/binutils/cve_2017_6965/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/binutils/cve_2017_6965/exploit -------------------------------------------------------------------------------- /data/coreutils/gnubug_19784/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/coreutils/coreutils/commit/1d0f1b7 3 | 4 | PoC: 5 | 6 | 7 | Command: 8 | > cd /root/source/src 9 | > ./make-prime-list 5 10 | 11 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_19784/gnubug_19784.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf autopoint bison gettext gperf texinfo 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/coreutils/coreutils.git 10 | RUN mv coreutils source 11 | WORKDIR /root/source 12 | RUN git checkout 658529a 13 | RUN ./bootstrap 14 | RUN export FORCE_UNSAFE_CONFIGURE=1 && ./configure && make CFLAGS="-ggdb" CXXFLAGS="-ggdb" src/make-prime-list 15 | 16 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_25003/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/coreutils/coreutils/commit/4954f79 3 | 4 | PoC: 5 | 6 | 7 | Command: 8 | > cd /root/source/src 9 | > touch 7 10 | # ./split -n/ 7 11 | > ./split -n7/75 7 12 | 13 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_25003/gnubug_25003.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf autopoint bison gettext gperf texinfo 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/coreutils/coreutils.git 10 | RUN mv coreutils source 11 | WORKDIR /root/source 12 | RUN git checkout 68c5eec 13 | RUN ./bootstrap 14 | RUN export FORCE_UNSAFE_CONFIGURE=1 && ./configure 15 | RUN make CFLAGS="-ggdb" CXXFLAGS="-ggdb" 16 | 17 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_25023/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/coreutils/coreutils/commit/d91aee 3 | 4 | PoC: 5 | 6 | 7 | Command: 8 | > cd /root/source/src/ 9 | > echo a > a 10 | > ./pr "-S$(printf "\t\t\t")" a -m a 11 | 12 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_25023/gnubug_25023.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf autopoint bison gettext gperf texinfo 
wget 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/coreutils/coreutils.git 10 | RUN mv coreutils source 11 | WORKDIR /root/source 12 | RUN git checkout ca99c52 13 | RUN ./bootstrap 14 | RUN export FORCE_UNSAFE_CONFIGURE=1 && ./configure 15 | RUN make CFLAGS="-ggdb" CXXFLAGS="-ggdb" 16 | 17 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_26545/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/coreutils/coreutils/commit/f4570a9e 3 | 4 | PoC: 5 | 6 | 7 | Command: 8 | > cd /root/source/src 9 | > touch abc 10 | # ./shred -n -s abc 11 | > ./shred -n4 -s7 abc 12 | 13 | -------------------------------------------------------------------------------- /data/coreutils/gnubug_26545/gnubug_26545.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf autopoint bison gettext gperf texinfo 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/coreutils/coreutils.git 10 | RUN mv coreutils source 11 | WORKDIR /root/source 12 | RUN git checkout 8d34b45 13 | RUN ./bootstrap 14 | RUN export FORCE_UNSAFE_CONFIGURE=1 && ./configure 15 | RUN make CFLAGS="-ggdb" CXXFLAGS="-ggdb" 16 | 17 | 18 | -------------------------------------------------------------------------------- /data/ffmpeg/bugchrom_1404/BUGCHROM_1404-setup.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/ffmpeg/bugchrom_1404/BUGCHROM_1404-setup.zip -------------------------------------------------------------------------------- /data/ffmpeg/bugchrom_1404/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/FFmpeg/FFmpeg/commit/279420b5a63b3f254e4932a4afb91759fb50186a 3 | 4 | PoC: 5 | https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=1404 6 | 7 | Command: 8 | > cd sources/ffmpeg/project/tools/ 9 | > ./target_dec_cavs_fuzzer /root/exploit/test_case -------------------------------------------------------------------------------- /data/ffmpeg/bugchrom_1404/bugchrom_1404.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | # install miscellaneous 4 | RUN apt-get update 5 | RUN apt-get install -y build-essential vim git wget unzip tar clang 6 | RUN apt-get install -y nasm valgrind libass-dev libmp3lame-dev dh-autoreconf 7 | 8 | # copy setup scripts & exploits 9 | WORKDIR /root 10 | COPY ./BUGCHROM_1404-setup.zip /root 11 | RUN unzip BUGCHROM_1404-setup.zip 12 | RUN rm BUGCHROM_1404-setup.zip 13 | 14 | # prepare libs 15 | WORKDIR /root/sources/ffmpeg_deps 16 | RUN ./build_ffmpeg.sh 17 | 18 | # prepare main project 19 | WORKDIR /root/sources/ffmpeg 20 | RUN ./project_config.sh 21 | 22 | # compile tool 23 | # w/ UBSAN : to check exploit (see project_config.sh) 24 | WORKDIR /root/sources/ffmpeg/project 25 | RUN make tools/target_dec_cavs_fuzzer 26 | 27 | # go home 28 | WORKDIR /root 29 | -------------------------------------------------------------------------------- /data/ffmpeg/cve_2017_9992/CVE_2017_9992-setup.zip: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/ffmpeg/cve_2017_9992/CVE_2017_9992-setup.zip -------------------------------------------------------------------------------- /data/ffmpeg/cve_2017_9992/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/FFmpeg/FFmpeg/commit/f52fbf4f3ed02a7d872d8a102006f29b4421f360 3 | 4 | PoC: 5 | https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=1345 6 | 7 | Command: 8 | > cd sources/ffmpeg/project/tools/ 9 | > ./target_dec_cavs_fuzzer /root/exploit/test_case 10 | -------------------------------------------------------------------------------- /data/ffmpeg/cve_2017_9992/cve_2017_9992.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | # install miscellaneous 4 | RUN apt-get update 5 | RUN apt-get install -y build-essential vim git wget unzip tar clang 6 | RUN apt-get install -y nasm libass-dev libmp3lame-dev dh-autoreconf 7 | 8 | # copy setup scripts & exploits 9 | WORKDIR /root 10 | COPY ./CVE_2017_9992-setup.zip /root 11 | RUN unzip CVE_2017_9992-setup.zip 12 | RUN rm CVE_2017_9992-setup.zip 13 | 14 | # prepare libs 15 | WORKDIR /root/sources/ffmpeg_deps 16 | RUN ./build_ffmpeg.sh 17 | 18 | # prepare main project 19 | WORKDIR /root/sources/ffmpeg 20 | RUN ./project_config.sh 21 | 22 | # compile tool 23 | # w/o ASAN : consider using this to check heap overflow 24 | # see bugchrom_1404 for an example of using UBSAN 25 | WORKDIR /root/sources/ffmpeg/project 26 | RUN make tools/target_dec_cavs_fuzzer 27 | 28 | # go home 29 | WORKDIR /root 30 | -------------------------------------------------------------------------------- /data/jasper/cve_2016_8691/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/mdadams/jasper/commit/d8c2604cd438c41ec72aff52c16ebd8183068020 3 | 4 | PoC: 5 | https://bugzilla.redhat.com/show_bug.cgi?id=1385502 6 | https://github.com/mdadams/jasper/issues/22 7 | 8 | Command: 9 | > cd /root/source/src/appl 10 | > ./imginfo -f /root/exploit 11 | -------------------------------------------------------------------------------- /data/jasper/cve_2016_8691/cve_2016_8691.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool 7 | 8 | WORKDIR /root 9 | COPY ./source.zip /root/source.zip 10 | RUN unzip source.zip 11 | WORKDIR /root/source 12 | RUN autoreconf -i 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/jasper/cve_2016_8691/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/jasper/cve_2016_8691/exploit -------------------------------------------------------------------------------- /data/jasper/cve_2016_8691/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/jasper/cve_2016_8691/source.zip
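The README and Dockerfile above give the full recipe for rebuilding jasper and triggering this bug. As an illustrative end-to-end sketch (not part of the repository; the image tag cve_2016_8691 is arbitrary and Docker is assumed to be installed), the steps could be chained as follows, with the last two commands run inside the container:

> cd data/jasper/cve_2016_8691
> docker build -f cve_2016_8691.Dockerfile -t cve_2016_8691 .
> docker run --rm -it cve_2016_8691
> cd /root/source/src/appl
> ./imginfo -f /root/exploit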
-------------------------------------------------------------------------------- /data/jasper/cve_2016_9557/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/mdadams/jasper/commit/d42b2388f7f8e0332c846675133acea151fc557a 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2016/11/19/jasper-signed-integer-overflow-in-jas_image-c/ 6 | https://github.com/asarubbo/poc/blob/master/00020-jasper-signedintoverflow-jas_image_c 7 | 8 | Command: 9 | > cd /root/source/src/appl 10 | > ./imginfo -f /root/exploit 11 | 12 | -------------------------------------------------------------------------------- /data/jasper/cve_2016_9557/cve_2016_9557.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool 7 | 8 | WORKDIR /root 9 | COPY ./source.zip /root/source.zip 10 | RUN unzip source.zip 11 | WORKDIR /root/source 12 | RUN autoreconf -i 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/jasper/cve_2016_9557/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/jasper/cve_2016_9557/exploit -------------------------------------------------------------------------------- /data/jasper/cve_2016_9557/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/jasper/cve_2016_9557/source.zip -------------------------------------------------------------------------------- /data/libarchive/cve_2016_5844/CVE_2016_5844-setup.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libarchive/cve_2016_5844/CVE_2016_5844-setup.zip -------------------------------------------------------------------------------- /data/libarchive/cve_2016_5844/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libarchive/libarchive/commit/3ad08e01b4d253c66ae56414886089684155af22 3 | 4 | PoC: 5 | https://github.com/libarchive/libarchive/issues/717 6 | 7 | Command: 8 | > ./sources/bsdtar -tf ./exploit/libarchive-signed-int-overflow.iso -------------------------------------------------------------------------------- /data/libarchive/cve_2016_5844/cve_2016_5844.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | # install miscellaneous 4 | RUN apt-get update 5 | RUN apt-get install -y build-essential vim wget unzip 6 | 7 | # copy exploit 8 | WORKDIR /root/exploit 9 | COPY CVE_2016_5844-setup.zip /root/exploit 10 | RUN unzip CVE_2016_5844-setup.zip 11 | RUN rm CVE_2016_5844-setup.zip 12 | 13 | # download libarchive source (v3.2.0) 14 | WORKDIR /root 15 | RUN wget https://libarchive.org/downloads/libarchive-3.2.0.zip 16 | RUN unzip libarchive-3.2.0.zip 17 | RUN rm libarchive-3.2.0.zip 18 | RUN mv libarchive-3.2.0 sources 19 | 20 | # compile bsdtar 21 | # w/o 
OPENSSL : type inconsistency introduced around v1.1.0 22 | # w/ UBSAN : to check exploit 23 | WORKDIR /root/sources 24 | RUN ./configure --without-openssl 25 | RUN make CFLAGS="-ggdb" 26 | 27 | # go home 28 | WORKDIR /root 29 | -------------------------------------------------------------------------------- /data/libarchive/cve_2016_5844/libarchive-signed-int-overflow.iso: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libarchive/cve_2016_5844/libarchive-signed-int-overflow.iso -------------------------------------------------------------------------------- /data/libjpeg/cve_2012_2806/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libjpeg-turbo/libjpeg-turbo/commit/dd2b651243125701dca2ed2f31b3d34056719b9c#diff-ae3d05789ec8758847fb75c9615c9c2f 3 | 4 | PoC: 5 | https://bugs.chromium.org/p/chromium/issues/detail?id=130240 6 | 7 | Command: 8 | > cd /root/source 9 | > ./djpeg /root/exploit 10 | 11 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2012_2806/cve_2012_2806.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool nasm 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git 10 | RUN mv libjpeg-turbo source 11 | WORKDIR /root/source 12 | RUN git checkout 4f24016 13 | RUN autoreconf -fiv 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | COPY ./exploit /root/exploit 18 | 19 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2012_2806/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libjpeg/cve_2012_2806/exploit -------------------------------------------------------------------------------- /data/libjpeg/cve_2017_15232/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libjpeg-turbo/libjpeg-turbo/commit/1ecd9a5729d78518397889a630e3534bd9d963a8 3 | 4 | PoC: 5 | https://github.com/mozilla/mozjpeg/issues/268 6 | 7 | Command: 8 | > cd /root/source 9 | > ./djpeg -crop "1x1+16+16" -onepass -dither ordered -dct float -colors 8 -targa -grayscale -outfile o /root/exploit 10 | 11 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2017_15232/cve_2017_15232.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool nasm 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git 10 | RUN mv libjpeg-turbo source 11 | WORKDIR /root/source 12 | RUN git checkout 3212005 13 | RUN autoreconf -fiv 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | COPY ./exploit /root/exploit 18 | 19 | 
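The Dockerfile above leaves the instrumented djpeg binary in /root/source and the PoC at /root/exploit inside the container. For reference, parse_args in code/patchloc.py reads a per-CVE section from config.ini, keeps the folder value as a plain path, splits every other value on ';', and (together with get_src_trace) consumes at least the keys bin_path, folder, poc and trace_cmd, where each '***' in trace_cmd is substituted with the corresponding poc entry. A hypothetical section for this CVE, illustrative only (the real config.ini may require additional keys used by fuzz.py, and all paths assume the container layout above), could look like:

[cve_2017_15232]
folder=/root/source
bin_path=/root/source/djpeg
poc=/root/exploit
trace_cmd=/root/source/djpeg;-crop;1x1+16+16;-onepass;-dither;ordered;-dct;float;-colors;8;-targa;-grayscale;-outfile;o;***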
-------------------------------------------------------------------------------- /data/libjpeg/cve_2017_15232/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libjpeg/cve_2017_15232/exploit -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_14498/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libjpeg-turbo/libjpeg-turbo/commit/9c78a04df4e44ef6487eee99c4258397f4fdca55 3 | 4 | PoC: 5 | https://github.com/libjpeg-turbo/libjpeg-turbo/issues/258 6 | 7 | Command: 8 | > cd /root/source 9 | > ./cjpeg -outfile out /root/exploit 10 | 11 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_14498/cve_2018_14498.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython cmake nasm 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git 10 | RUN mv libjpeg-turbo source 11 | WORKDIR /root/source 12 | RUN git checkout 0fa7850 13 | RUN export CXXFLAGS="-ggdb" 14 | RUN export CFLAGS="-ggdb" 15 | RUN cmake CMakeLists.txt 16 | RUN make 17 | 18 | COPY ./exploit /root/exploit 19 | 20 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_14498/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libjpeg/cve_2018_14498/exploit -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_19664/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libjpeg-turbo/libjpeg-turbo/commit/f8cca819a4fb42aafa5f70df43c45e8c416d716f 3 | 4 | PoC: 5 | https://github.com/libjpeg-turbo/libjpeg-turbo/issues/305 6 | 7 | Command: 8 | > cd /root/source 9 | > ./djpeg -colors 256 -bmp /root/exploit 10 | 11 | ================================================================= 12 | ==2408==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x610000007ff7 at pc 0x00000040ca25 bp 0x7ffeb6dcd630 sp 0x7ffeb6dcd620 13 | READ of size 1 at 0x610000007ff7 thread T0 14 | #0 0x40ca24 in put_pixel_rows /root/libjpeg-turbo/wrbmp.c:145 15 | #1 0x4028b2 in main /root/libjpeg-turbo/djpeg.c:762 16 | #2 0x7eff2afa182f in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x2082f) 17 | #3 0x402da8 in _start (/root/libjpeg-turbo/djpeg+0x402da8) 18 | 19 | 20 | PS: 21 | The asan part of the dockerfile may not work. Please install it manually, following the instructions 22 | specified in the dockerfile. 
23 | 24 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_19664/cve_2018_19664.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython cmake nasm 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/libjpeg-turbo/libjpeg-turbo.git 10 | RUN mv libjpeg-turbo source 11 | WORKDIR /root/source 12 | RUN git checkout beefb62 13 | RUN export CXXFLAGS="-ggdb" 14 | RUN export CFLAGS="-ggdb" 15 | RUN cmake CMakeLists.txt 16 | RUN make 17 | 18 | COPY ./exploit /root/exploit 19 | 20 | -------------------------------------------------------------------------------- /data/libjpeg/cve_2018_19664/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libjpeg/cve_2018_19664/exploit -------------------------------------------------------------------------------- /data/libming/cve_2016_9264/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/libming/libming/commit/19e7127e29122be571c87bfb90bca9581417d220 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2016/11/07/libming-listmp3-global-buffer-overflow-in-printmp3headers-listmp3-c/ 6 | https://github.com/asarubbo/poc/blob/master/00034-libming-globaloverflow-printMP3Headers 7 | 8 | Command: 9 | > cd /root/source/util 10 | > ./listmp3 /root/exploit 11 | 12 | -------------------------------------------------------------------------------- /data/libming/cve_2016_9264/cve_2016_9264.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython libtool m4 automake bison flex libfreetype6-dev 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/libming/libming.git 10 | RUN mv libming source 11 | WORKDIR /root/source 12 | RUN git checkout cc6a386 13 | RUN ./autogen.sh 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | COPY ./exploit /root/exploit 18 | -------------------------------------------------------------------------------- /data/libming/cve_2016_9264/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libming/cve_2016_9264/exploit -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2611/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/43bc256d8ae44b92d2734a3c5bc73957a4d7c1ec 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2611 6 | https://github.com/asarubbo/poc/blob/master/00083-libtiff-fpe-OJPEGDecodeRaw 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiffmedian /root/exploit foo 11 | 12 | -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2611/bugzilla_2611.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM 
ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython libjpeg-dev 7 | 8 | # asan 9 | WORKDIR /root 10 | COPY ./exploit /root/exploit 11 | 12 | # normal 13 | WORKDIR /root 14 | RUN git clone https://github.com/vadz/libtiff.git 15 | RUN mv libtiff source 16 | WORKDIR /root/source 17 | RUN git checkout 9a72a69 18 | RUN ./configure 19 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 20 | 21 | -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2611/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/bugzilla_2611/exploit -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2633/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/5ed9fea523316c2f5cec4d393e4d5d671c2dbc33 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2633 6 | https://github.com/asarubbo/poc/blob/master/00107-libtiff-heapoverflow-PSDataColorContig 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiff2ps /root/exploit 11 | -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2633/bugzilla_2633.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython 7 | 8 | COPY ./exploit /root/exploit 9 | 10 | WORKDIR /root 11 | RUN git clone https://github.com/vadz/libtiff.git 12 | RUN mv libtiff source 13 | WORKDIR /root/source 14 | RUN git checkout f3069a5 15 | RUN ./configure 16 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 17 | 18 | -------------------------------------------------------------------------------- /data/libtiff/bugzilla_2633/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/bugzilla_2633/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10092/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/9657bbe3cdce4aaa90e07d50c1c70ae52da0ba6a 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/01/01/libtiff-multiple-heap-based-buffer-overflow/ 6 | https://github.com/asarubbo/poc/blob/master/00102-libtiff-heapoverflow-_TIFFmemcpy 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiffcrop -i /root/exploit foo 11 | 12 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10092/cve_2016_10092.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout 43bc256 13 
| RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10092/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_10092/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10094/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/c7153361a4041260719b340f73f2f76b0969235c 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2640 6 | https://github.com/asarubbo/poc/blob/master/00112-libtiff-heapoverflow-_TIFFmemcpy 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiff2pdf ./exploit -o foo 11 | 12 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10094/cve_2016_10094.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get -y upgrade 5 | RUN apt-get install -y build-essential git vim unzip python-dev python-pip ipython libjpeg-dev 6 | 7 | WORKDIR /root 8 | COPY ./exploit /root/exploit 9 | COPY ./source.zip /root/source.zip 10 | 11 | WORKDIR /root 12 | RUN unzip source.zip 13 | WORKDIR /root/source 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | WORKDIR /root 18 | 19 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10094/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_10094/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10094/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_10094/source.zip -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10272/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/9657bbe3cdce4aaa90e07d50c1c70ae52da0ba6a 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/01/01/libtiff-multiple-heap-based-buffer-overflow/ 6 | https://github.com/asarubbo/poc/blob/master/00103-libtiff-heapoverflow-NeXTDecode 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiffcrop -i /root/exploit foo 11 | 12 | 13 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10272/cve_2016_10272.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout 43bc256 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" 
CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_10272/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_10272/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_3186/README.txt: -------------------------------------------------------------------------------- 1 | Patch Link: 2 | https://bugzilla.redhat.com/attachment.cgi?id=1144235&action=diff 3 | 4 | PoC: 5 | https://bugzilla.redhat.com/show_bug.cgi?id=1319503 6 | 7 | Command: 8 | > cd ./source/tools 9 | > ./gif2tiff ../../exploit out.tif 10 | 11 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_3186/cve_2016_3186.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:18.04 2 | 3 | RUN apt-get update 4 | RUN apt-get -y upgrade 5 | RUN apt-get install -y build-essential git vim unzip python-dev python-pip ipython 6 | 7 | COPY ./exploit /root/exploit 8 | 9 | COPY ./source.zip /root/source.zip 10 | RUN unzip source.zip 11 | WORKDIR /root/source 12 | RUN ./configure 13 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 14 | 15 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_3186/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_3186/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_3186/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_3186/source.zip -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5314/README.txt: -------------------------------------------------------------------------------- 1 | Patch Link: 2 | https://github.com/vadz/libtiff/commit/391e77fcd217e78b2c51342ac3ddb7100ecacdd2 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2554 6 | 7 | Command: 8 | > cd /root/source/tool 9 | > ./rgb2ycbcr ../../exploit tmpout.tif 10 | 11 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5314/cve_2016_5314.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get -y upgrade 5 | RUN apt-get install -y build-essential git vim unzip python-dev python-pip ipython zlib1g-dev 6 | 7 | WORKDIR /root 8 | COPY ./exploit /root/exploit 9 | COPY ./source.zip /root/source.zip 10 | 11 | WORKDIR /root 12 | RUN unzip source.zip 13 | WORKDIR /root/source 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5314/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_5314/exploit 
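Once an image like the ones above has been built and fuzzed, code/patchloc.py (shown earlier in this listing) ranks the traced instructions for a given CVE tag. A hypothetical invocation for this CVE, assuming config.ini defines a cve_2016_5314 section and that the output folder already contains the traces/ directory and reports.pkl produced by the fuzzing step (parse_args checks for both), might be:

> python patchloc.py --config_file config.ini --tag cve_2016_5314 --func calc --out_folder <output_folder> --poc_trace_hash <poc_hash> --show_num 10

Here <output_folder> and <poc_hash> are placeholders for the timestamped output directory and the hash of the PoC trace recorded during fuzzing; all of the flags shown are defined in parse_args.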
-------------------------------------------------------------------------------- /data/libtiff/cve_2016_5314/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_5314/source.zip -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5321/README.txt: -------------------------------------------------------------------------------- 1 | Patch Link: 2 | https://github.com/vadz/libtiff/commit/2f79856097f423eb33796a15fcf700d2ea41bf31 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2558 6 | 7 | Command: 8 | > cd /root/source/tools 9 | > ./tiffcrop /root/exploit ./tmpout.tif 10 | 11 | Note: Please use 4.0.6 released version! (Please do not use the commit!) 12 | 13 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5321/cve_2016_5321.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get -y upgrade 5 | RUN apt-get install -y build-essential git vim unzip python-dev python-pip ipython 6 | 7 | WORKDIR /root 8 | COPY ./exploit /root/exploit 9 | COPY ./source.zip /root/source.zip 10 | 11 | WORKDIR /root 12 | RUN unzip source.zip 13 | WORKDIR /root/source 14 | RUN ./configure 15 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 16 | 17 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5321/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_5321/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2016_5321/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_5321/source.zip -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9273/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/d651abc097d91fac57f33b5f9447d0a9183f58e7 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2587 6 | 7 | Command: 8 | > cd /root/source/tools 9 | > ./tiffsplit /root/exploit 10 | 11 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9273/cve_2016_9273.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout 6a984bf 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9273/exploit: -------------------------------------------------------------------------------- 1 | 
II*b000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000D`0000000000000000000000000000000000000000000000000000000000000000000000000000 -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9532/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/21d39de1002a5e69caa0574b2cc05d795d6fbfad 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2592 6 | 7 | Command: 8 | > cd /root/source/tools 9 | > ./tiffcrop /root/exploit test 10 | 11 | 12 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9532/cve_2016_9532.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout d651abc 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/libtiff/cve_2016_9532/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2016_9532/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2017_5225/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/5c080298d59efa53264d7248bbe3a04660db6ef7 3 | 4 | PoC: 5 | http://bugzilla.maptools.org/show_bug.cgi?id=2656 6 | 7 | Command: 8 | > cd /root/source/tools 9 | > ./tiffcp -p separate exploit out.tiff 10 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_5225/cve_2017_5225.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython libzip-dev 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout 393881d 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_5225/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2017_5225/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7595/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | 
https://github.com/vadz/libtiff/commit/47f2fb61a3a64667bce1a8398a8fcb1b348ff122 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/04/01/libtiff-divide-by-zero-in-jpegsetupencode-tiff_jpeg-c/ 6 | https://github.com/asarubbo/poc/blob/master/00123-libtiff-fpe-JPEGSetupEncode 7 | 8 | Command: 9 | > cd /root/source/tools/ 10 | > ./tiffcp -i /root/exploit ./out 11 | 12 | tif_jpeg.c:1687:26: runtime error: division by zero 13 | Floating point exception (core dumped) 14 | 15 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7595/cve_2017_7595.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get -y upgrade 5 | RUN apt-get install -y build-essential git vim unzip python-dev python-pip ipython libjpeg-dev 6 | 7 | WORKDIR /root 8 | COPY ./exploit /root/exploit 9 | 10 | WORKDIR /root 11 | RUN git clone https://github.com/vadz/libtiff.git 12 | RUN mv libtiff source 13 | WORKDIR /root/source 14 | RUN git checkout 2c00d31 15 | RUN ./configure 16 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 17 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7595/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2017_7595/exploit -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7601/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/vadz/libtiff/commit/0a76a8c765c7b8327c59646284fa78c3c27e5490 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/04/01/libtiff-multiple-ubsan-crashes/ 6 | https://github.com/asarubbo/poc/blob/master/00119-libtiff-shift-long-tif_jpeg 7 | 8 | Command: 9 | > cd /root/source/tools 10 | > ./tiffcp -i /root/exploit foo 11 | 12 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7601/cve_2017_7601.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython libjpeg-dev 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/vadz/libtiff.git 10 | RUN mv libtiff source 11 | WORKDIR /root/source 12 | RUN git checkout 3144e57 13 | RUN ./configure 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | WORKDIR /root 17 | COPY ./exploit /root/exploit 18 | -------------------------------------------------------------------------------- /data/libtiff/cve_2017_7601/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libtiff/cve_2017_7601/exploit -------------------------------------------------------------------------------- /data/libxml2/cve_2012_5134/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://gitlab.gnome.org/GNOME/libxml2/commit/6a36fbe3b3e001a8a840b5c1fdd81cefc9947f0d 3 | 4 | PoC: 5 | https://bugs.chromium.org/p/chromium/issues/detail?id=158249 6 | 7 | Command: 8 | > cd /root/source 9 | > ./xmllint /root/exploit 10 
| -------------------------------------------------------------------------------- /data/libxml2/cve_2012_5134/cve_2012_5134.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool automake pkg-config 7 | 8 | WORKDIR /root 9 | RUN git clone https://gitlab.gnome.org/GNOME/libxml2.git 10 | RUN mv libxml2 source 11 | WORKDIR /root/source 12 | RUN git checkout 4ea74a44 13 | RUN ./autogen.sh 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/libxml2/cve_2012_5134/exploit: -------------------------------------------------------------------------------- 1 | ]> 2 | 3 | -------------------------------------------------------------------------------- /data/libxml2/cve_2016_1838/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://gitlab.gnome.org/GNOME/libxml2/commit/db07dd613e461df93dde7902c6505629bf0734e9 3 | 4 | 5 | PoC: 6 | https://bugzilla.gnome.org/show_bug.cgi?id=758588 7 | 8 | Command: 9 | > cd /root/source 10 | > ./xmllint /root/exploit 11 | -------------------------------------------------------------------------------- /data/libxml2/cve_2016_1838/cve_2016_1838.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool automake pkg-config 7 | 8 | WORKDIR /root 9 | RUN git clone https://gitlab.gnome.org/GNOME/libxml2.git 10 | RUN mv libxml2 source 11 | WORKDIR /root/source 12 | RUN git checkout cbb27165 13 | RUN ./autogen.sh 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/libxml2/cve_2016_1838/exploit: -------------------------------------------------------------------------------- 1 | cd /root/source 9 | > ./xmllint -html /root/exploit 10 | -------------------------------------------------------------------------------- /data/libxml2/cve_2016_1839/cve_2016_1839.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool automake pkg-config 7 | 8 | WORKDIR /root 9 | RUN git clone https://gitlab.gnome.org/GNOME/libxml2.git 10 | RUN mv libxml2 source 11 | WORKDIR /root/source 12 | RUN git checkout db07dd61 13 | RUN ./autogen.sh 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | -------------------------------------------------------------------------------- /data/libxml2/cve_2016_1839/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libxml2/cve_2016_1839/exploit -------------------------------------------------------------------------------- 
/data/libxml2/cve_2017_5969/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://gitlab.gnome.org/GNOME/libxml2/commit/94691dc884d1a8ada39f073408b4bb92fe7fe882 3 | 4 | PoC: 5 | https://www.openwall.com/lists/oss-security/2016/11/05/3 6 | 7 | Command: 8 | > cd /root/source 9 | > ./xmllint --recover /root/exploit 10 | -------------------------------------------------------------------------------- /data/libxml2/cve_2017_5969/cve_2017_5969.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython autoconf libtool automake pkg-config 7 | 8 | WORKDIR /root 9 | RUN git clone https://gitlab.gnome.org/GNOME/libxml2.git 10 | RUN mv libxml2 source 11 | WORKDIR /root/source 12 | RUN git checkout 362b3229 13 | RUN ./autogen.sh 14 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 15 | 16 | COPY ./exploit /root/exploit 17 | 18 | -------------------------------------------------------------------------------- /data/libxml2/cve_2017_5969/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/libxml2/cve_2017_5969/exploit -------------------------------------------------------------------------------- /data/potrace/cve_2013_7437/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://bugs.debian.org/cgi-bin/bugreport.cgi?att=1;bug=778646;filename=potrace-overflow.patch;msg=42 3 | 4 | PoC: 5 | https://bugzilla.redhat.com/show_bug.cgi?id=955808 6 | 7 | Command: 8 | > cd /root/source/src 9 | > ./potrace /root/exploit 10 | -------------------------------------------------------------------------------- /data/potrace/cve_2013_7437/cve_2013_7437.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zlib1g-dev 7 | 8 | WORKDIR /root 9 | COPY ./source.zip /root/source.zip 10 | RUN unzip source.zip 11 | WORKDIR /root/source 12 | RUN ./configure 13 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 14 | 15 | COPY ./exploit /root/exploit 16 | 17 | -------------------------------------------------------------------------------- /data/potrace/cve_2013_7437/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/potrace/cve_2013_7437/exploit -------------------------------------------------------------------------------- /data/potrace/cve_2013_7437/source.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/potrace/cve_2013_7437/source.zip -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5974/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/gdraheim/zziplib/commit/03de3beabbf570474a9ac05d6dc6b42cdb184cd1 3 | 4 | PoC: 5 | 
https://blogs.gentoo.org/ago/2017/02/09/zziplib-heap-based-buffer-overflow-in-zzip_mem_entry_extra_block-memdisk-c/ 6 | 7 | Command: 8 | > cd /root/source/Linux_5.0.0-37-generic_x86_64.d/bins 9 | > ./unzzipcat-mem /root/exploit 10 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5974/cve_2017_5974.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zlib1g-dev wget 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/gdraheim/zziplib.git 10 | RUN mv zziplib source 11 | WORKDIR /root/source 12 | RUN git checkout 3a4ffcd 13 | WORKDIR /root/source/docs 14 | RUN wget https://github.com/LuaDist/libzzip/raw/master/docs/zziplib-manpages.tar 15 | WORKDIR /root/source 16 | RUN ./configure 17 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 18 | 19 | COPY ./exploit /root/exploit 20 | 21 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5974/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/zziplib/cve_2017_5974/exploit -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5975/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/gdraheim/zziplib/commit/64e745f8a3604ba1c444febed86b5e142ce03dd7 3 | 4 | PoC: 5 | https://blogs.gentoo.org/ago/2017/02/09/zziplib-heap-based-buffer-overflow-in-__zzip_get32-fetch-c/ 6 | https://github.com/asarubbo/poc/blob/master/00150-zziplib-heapoverflow-__zzip_get32 7 | 8 | Command: 9 | > cd /root/source/Linux_5.0.0-37-generic_x86_64.d/bins 10 | > ./unzzipcat-mem /root/exploit 11 | 12 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5975/cve_2017_5975.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zlib1g-dev wget 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/gdraheim/zziplib.git 10 | RUN mv zziplib source 11 | WORKDIR /root/source 12 | RUN git checkout 33d6e9c 13 | WORKDIR /root/source/docs 14 | RUN wget https://github.com/LuaDist/libzzip/raw/master/docs/zziplib-manpages.tar 15 | WORKDIR /root/source 16 | RUN ./configure 17 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 18 | 19 | COPY ./exploit /root/exploit 20 | 21 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5975/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/zziplib/cve_2017_5975/exploit -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5976/README.txt: -------------------------------------------------------------------------------- 1 | Patch: 2 | https://github.com/gdraheim/zziplib/commit/03de3beabbf570474a9ac05d6dc6b42cdb184cd1 3 | 4 | PoC: 5 | 
https://blogs.gentoo.org/ago/2017/02/09/zziplib-heap-based-buffer-overflow-in-zzip_mem_entry_extra_block-memdisk-c/ 6 | 7 | Command: 8 | > cd /root/source/Linux_5.0.0-37-generic_x86_64.d/bins 9 | > ./unzzipcat-mem /root/exploit 10 | 11 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5976/cve_2017_5976.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | RUN apt-get update 4 | RUN apt-get install -y build-essential 5 | RUN apt-get update 6 | RUN apt-get install -y git vim unzip python-dev python-pip ipython zlib1g-dev wget 7 | 8 | WORKDIR /root 9 | RUN git clone https://github.com/gdraheim/zziplib.git 10 | RUN mv zziplib source 11 | WORKDIR /root/source 12 | RUN git checkout 3a4ffcd 13 | WORKDIR /root/source/docs 14 | RUN wget https://github.com/LuaDist/libzzip/raw/master/docs/zziplib-manpages.tar 15 | WORKDIR /root/source 16 | RUN ./configure 17 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 18 | 19 | COPY ./exploit /root/exploit 20 | 21 | -------------------------------------------------------------------------------- /data/zziplib/cve_2017_5976/exploit: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/patchloc/VulnLoc/c7a7d4dac092e3b6302aa8a954518a3348d51d3e/data/zziplib/cve_2017_5976/exploit -------------------------------------------------------------------------------- /setup.Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:16.04 2 | 3 | # Dependencies 4 | RUN apt update --fix-missing 5 | RUN apt install -y build-essential 6 | RUN apt install -y git vim unzip python-dev python-pip ipython wget libssl-dev g++-multilib doxygen transfig imagemagick ghostscript zlib1g-dev 7 | 8 | WORKDIR /root 9 | RUN mkdir workspace 10 | WORKDIR /root/workspace 11 | RUN mkdir deps 12 | WORKDIR /root/workspace/deps 13 | 14 | # Installing numpy 15 | RUN wget https://github.com/numpy/numpy/releases/download/v1.16.6/numpy-1.16.6.zip 16 | RUN unzip numpy-1.16.6.zip 17 | RUN rm numpy-1.16.6.zip 18 | RUN mv numpy-1.16.6 numpy 19 | WORKDIR /root/workspace/deps/numpy 20 | RUN python setup.py install 21 | WORKDIR /root/workspace/deps 22 | 23 | # install pyelftools 24 | RUN pip install pyelftools 25 | 26 | # install CMake 27 | RUN wget https://github.com/Kitware/CMake/releases/download/v3.16.2/cmake-3.16.2.tar.gz 28 | RUN tar -xvzf cmake-3.16.2.tar.gz 29 | RUN rm cmake-3.16.2.tar.gz 30 | RUN mv cmake-3.16.2 cmake 31 | WORKDIR /root/workspace/deps/cmake 32 | RUN ./bootstrap 33 | RUN make 34 | RUN make install 35 | WORKDIR /root/workspace/deps 36 | 37 | # install dynamorio 38 | RUN git clone https://github.com/DynamoRIO/dynamorio.git 39 | WORKDIR /root/workspace/deps/dynamorio 40 | RUN mkdir build 41 | WORKDIR /root/workspace/deps/dynamorio/build 42 | RUN cmake ../ 43 | RUN make 44 | WORKDIR /root/workspace/deps 45 | 46 | # set up the tracer 47 | COPY ./code/iftracer.zip /root/workspace/deps/iftracer.zip 48 | RUN unzip iftracer.zip 49 | RUN rm iftracer.zip 50 | WORKDIR /root/workspace/deps/iftracer/iftracer 51 | RUN cmake CMakeLists.txt 52 | RUN make 53 | WORKDIR /root/workspace/deps/iftracer/ifLineTracer 54 | RUN cmake CMakeLists.txt 55 | RUN make 56 | WORKDIR /root/workspace 57 | 58 | # set up CVE-2016-5314 59 | RUN mkdir cves 60 | WORKDIR /root/workspace/cves 61 | RUN mkdir cve_2016_5314 62 | WORKDIR /root/workspace/cves/cve_2016_5314 63 | RUN apt 
install -y build-essential git vim unzip python-dev python-pip ipython zlib1g-dev 64 | COPY ./data/libtiff/cve_2016_5314/source.zip ./source.zip 65 | RUN unzip source.zip 66 | RUN rm source.zip 67 | WORKDIR /root/workspace/cves/cve_2016_5314/source 68 | RUN ./configure 69 | RUN make CFLAGS="-static -ggdb" CXXFLAGS="-static -ggdb" 70 | # copy exploit 71 | WORKDIR /root/workspace/cves/cve_2016_5314 72 | COPY ./data/libtiff/cve_2016_5314/exploit ./exploit 73 | # setup an exploit detector for cve-2016-5314 --- valgrind 74 | WORKDIR /root/workspace/deps 75 | RUN apt install -y libc6-dbg 76 | RUN wget https://sourceware.org/pub/valgrind/valgrind-3.15.0.tar.bz2 77 | RUN tar xjf valgrind-3.15.0.tar.bz2 78 | RUN mv valgrind-3.15.0 valgrind 79 | WORKDIR /root/workspace/deps/valgrind 80 | RUN ./configure 81 | RUN make 82 | RUN make install 83 | 84 | # prepare code 85 | WORKDIR /root/workspace 86 | RUN mkdir code 87 | WORKDIR /root/workspace/code 88 | COPY ./code/fuzz.py ./ 89 | COPY ./code/parse_dwarf.py ./ 90 | COPY ./code/patchloc.py ./ 91 | COPY ./code/tracer.py ./ 92 | COPY ./code/utils.py ./ 93 | COPY ./code/env.py ./ 94 | 95 | WORKDIR /root/workspace 96 | -------------------------------------------------------------------------------- /test/setup-cve_2016_5314/config.ini: -------------------------------------------------------------------------------- 1 | [cve_2016_5314] 2 | trace_cmd=/root/workspace/cves/cve_2016_5314/source/tools/rgb2ycbcr;***;tmpout1.tif 3 | crash_cmd=valgrind;/root/workspace/cves/cve_2016_5314/source/tools/rgb2ycbcr;***;tmpout1.tif 4 | bin_path=/root/workspace/cves/cve_2016_5314/source/tools/rgb2ycbcr 5 | poc=/root/workspace/cves/cve_2016_5314/exploit 6 | poc_fmt=bfile 7 | mutate_range=default 8 | folder=/root/workspace/outputs/cve_2016_5314 9 | crash_tag=valgrind;3;tif_pixarlog.c:785 10 | 11 | 12 | -------------------------------------------------------------------------------- /test/setup-cve_2016_5314/test.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | target_cve="cve_2016_5314" 4 | base_folder="/root/workspace" 5 | out_folder="$base_folder/outputs" 6 | code_folder="$base_folder/code" 7 | 8 | if [ ! -d "$out_folder" ]; then 9 | mkdir $out_folder 10 | echo "Created folder -> $out_folder" 11 | fi 12 | 13 | cve_folder="$out_folder/$target_cve" 14 | if [ ! -d "$cve_folder" ]; then 15 | mkdir $cve_folder 16 | echo "Created folder -> $cve_folder" 17 | fi 18 | 19 | cp ./config.ini $code_folder 20 | 21 | echo "The default number of processes is 10. PatchLoc will adjust it according to the number of cpus on the local machines." 22 | echo "The default timeout is 4h. The user can change the timeout in ./code/fuzz.py" 23 | echo "The execution progress can be found at the fuzz.log in the output folder. It will not be printed out in the terminal." 24 | echo "Please do not terminate the execution until PatchLoc timeouts automatically." 25 | 26 | cd $code_folder 27 | python fuzz.py --config_file ./config.ini --tag $target_cve 28 | echo "Finish fuzzing ..." 
29 | 30 | # get the output folder 31 | cve_out_folder=`find $out_folder/$target_cve -maxdepth 1 -name 'output_*' -not -path '*/\.*' -type d | sed 's/^\.\///g'` 32 | echo "Output Folder: $cve_out_folder" 33 | # get the hash of the poc 34 | target_fuzz_path="$cve_out_folder/fuzz.log" 35 | poc_hash=`sed '19q;d' $target_fuzz_path | awk '{print $NF}'` 36 | 37 | python patchloc.py --config_file ./config.ini --tag $target_cve --func calc --out_folder $cve_out_folder --poc_trace_hash $poc_hash --process_num 10 38 | -------------------------------------------------------------------------------- /test/setup-others/README.md: -------------------------------------------------------------------------------- 1 | ## Quick Tour 2 | 3 | ### Download Configs and Other Files 4 | Please download all the .zip files in the [folder](https://drive.google.com/drive/folders/1B5dKaMfqN_mJSaYIIkeScdvZb9P6_tQh?usp=sharing) 5 | to **./VulnLoc/test** on your local machine and unzip them. 6 | 7 | ### Setup Docker Container 8 | 9 | ```bash 10 | cd ./VulnLoc/test/env_setup 11 | # Build a docker image 12 | docker build -f vulnloc_env.Dockerfile -t vulnloc_env . 13 | # Run a docker container 14 | docker run --privileged -it vulnloc_env bash 15 | cd ../../ 16 | docker cp ./code <container_id>:/root/workspace/code 17 | ``` 18 | 19 | ### Create CVE folder (in container) 20 | 21 | ```bash 22 | # Run the following commands in the docker container 23 | cd /root/workspace 24 | mkdir <cve_id> 25 | ``` 26 | 27 | ### Copy Files From Host to Docker Container 28 | 29 | ```bash 30 | # Run the following commands on your local machine 31 | cd ./VulnLoc/test/scripts 32 | ./copy_files.sh <cve_id> <container_id> 33 | ``` 34 | 35 | ### Compile Target Programs 36 | 37 | ```bash 38 | # Run the following commands in the docker container 39 | cd /root/workspace/<cve_id> 40 | ./compile.sh 41 | ``` 42 | 43 | ### Run the Localization Tool 44 | 45 | ```bash 46 | # Run the following commands in the docker container 47 | cd /root/workspace/<cve_id>/code 48 | 49 | python fuzz.py \ 50 | --config_file /root/workspace/<cve_id>/config.ini \ 51 | --tag <cve_id> 52 | 53 | python patchloc.py \ 54 | --config_file /root/workspace/<cve_id>/config.ini \ 55 | --tag <cve_id> \ 56 | --func calc \ 57 | --out_folder /root/workspace/<cve_id>/output/output_<timestamp> \ 58 | --poc_trace_hash <poc_trace_hash> \ 59 | --process_num 10 60 | ``` 61 | 62 | ## Example 63 | Let's take cve-2017-5225 as an example. We first create a docker container for running the experiment. 64 | ```bash 65 | cd ./VulnLoc/test/env_setup 66 | # Build a docker image 67 | docker build -f vulnloc_env.Dockerfile -t vulnloc_env . 68 | # Run a docker container 69 | docker run --privileged -it vulnloc_env bash 70 | ``` 71 | To find out the ID of the container, please run (on your local machine): 72 | ```bash 73 | docker ps -a | grep "vulnloc_env" | awk '{print $1;}' 74 | ``` 75 | This command will print the container ID, such as **88b45068e205**. 76 | 77 | The second step is to prepare the environment for cve-2017-5225. 78 | ```bash 79 | # Run the following commands in the docker container 80 | cd /root/workspace 81 | mkdir cve-2017-5225 82 | ``` 83 | ```bash 84 | # Run the following commands on your local machine 85 | cd ./VulnLoc/test/scripts 86 | ./copy_files.sh cve-2017-5225 88b45068e205 87 | cd ../../ 88 | docker cp ./code 88b45068e205:/root/workspace/code 89 | ``` 90 | ```bash 91 | # Run the following commands in the docker container 92 | cd /root/workspace/cve-2017-5225 93 | ./compile.sh 94 | ``` 95 | The third step is to run ConcFuzz on the target program.
96 | ```bash 97 | # Run the following commands in the docker container 98 | cd /root/workspace/cve-2017-5225/code 99 | 100 | python fuzz.py \ 101 | --config_file /root/workspace/cve-2017-5225/config.ini \ 102 | --tag cve-2017-5225 103 | ``` 104 | This step will create an output folder with the name **output_\<timestamp\>** (such as **output_1620642503**) in **/root/workspace/cve-2017-5225/output**. 105 | 106 | The final step is to run the localization tool on the target program. 107 | ```bash 108 | # Run the following commands in the docker container 109 | cd /root/workspace/cve-2017-5225/code 110 | 111 | python patchloc.py \ 112 | --config_file /root/workspace/cve-2017-5225/config.ini \ 113 | --tag cve-2017-5225 \ 114 | --func calc \ 115 | --out_folder /root/workspace/cve-2017-5225/output/output_1620642503 \ 116 | --poc_trace_hash 27a85cdd21788fbf4ce73198609202993a70ba1d2c9153f018e33c88dea4ffef \ 117 | --process_num 10 118 | ``` 119 | You can find the hash of the exploit trace by running the following commands: 120 | ```bash 121 | cd /root/workspace/cve-2017-5225/output/output_1620642503 122 | head -n 19 fuzz.log | tail -1 | awk '{ print $NF }' 123 | ``` 124 | Finally, you can find the following localization result in the file **/root/workspace/cve-2017-5225/output/output_1620642503/patchloc.log**: 125 | ``` 126 | Output Folder: /root/workspace/cve-2017-5225/output/output_1620642503 127 | #reports: 5692 (#malicious: 2697; #benign: 2995) 128 | [INSN-0] 0x0000000000404767 -> tiffcp.c:1089 (l2norm: 1.414214; normalized(N): 1.000000; normalized(S): 1.000000) 129 | [INSN-1] 0x0000000000429402 -> tif_write.c:540 (l2norm: 1.412245; normalized(N): 1.000000; normalized(S): 0.997214) 130 | [INSN-2] 0x00000000004293bd -> tif_write.c:537 (l2norm: 1.412245; normalized(N): 1.000000; normalized(S): 0.997214) 131 | [INSN-3] 0x00000000004293b2 -> tif_write.c:538 (l2norm: 1.412245; normalized(N): 1.000000; normalized(S): 0.997214) 132 | [INSN-4] 0x0000000000429360 -> tif_write.c:531 (l2norm: 1.412245; normalized(N): 1.000000; normalized(S): 0.997214) 133 | ... 134 | ``` 135 | --------------------------------------------------------------------------------
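For readers who want to post-process these results, the sketch below shows one way to do it in Python. It is not part of the VulnLoc code base: it only re-implements the `sed '19q;d' fuzz.log | awk '{print $NF}'` hash extraction used in test.sh and parses the `[INSN-k]` lines in the patchloc.log format shown above. The observation that the printed l2norm matches sqrt(normalized(N)^2 + normalized(S)^2) (for example, 1.414214 = sqrt(2) and 1.412245 = sqrt(1 + 0.997214^2)) is inferred from the sample numbers, not from the tool's documentation, and all function names and the command-line interface are hypothetical.

```python
#!/usr/bin/env python
# Illustrative helper, NOT part of the VulnLoc repository.  It re-implements the
# `sed '19q;d' fuzz.log | awk '{print $NF}'` one-liner from test.sh and parses the
# `[INSN-k]` ranking lines in patchloc.log.  File names and the line format come
# from the example above; the function names and CLI are hypothetical.
import math
import re
import sys


def poc_trace_hash(fuzz_log_path):
    """Return the last field of line 19 of fuzz.log (the PoC trace hash)."""
    with open(fuzz_log_path) as f:
        for lineno, line in enumerate(f, start=1):
            if lineno == 19:
                return line.split()[-1]
    raise ValueError('fuzz.log has fewer than 19 lines')


# Matches lines such as:
# [INSN-0] 0x0000000000404767 -> tiffcp.c:1089 (l2norm: 1.414214; normalized(N): 1.000000; normalized(S): 1.000000)
RANK_RE = re.compile(
    r'\[INSN-(\d+)\]\s+(0x[0-9a-fA-F]+)\s+->\s+(\S+)\s+'
    r'\(l2norm: ([\d.]+); normalized\(N\): ([\d.]+); normalized\(S\): ([\d.]+)\)')


def read_ranking(patchloc_log_path):
    """Yield (rank, address, source_location, l2norm, N, S) for each ranked instruction."""
    with open(patchloc_log_path) as f:
        for line in f:
            m = RANK_RE.match(line.strip())
            if m:
                rank, addr, loc, l2, n, s = m.groups()
                yield int(rank), addr, loc, float(l2), float(n), float(s)


if __name__ == '__main__':
    out_folder = sys.argv[1]  # e.g. /root/workspace/cve-2017-5225/output/output_1620642503
    print('PoC trace hash: ' + poc_trace_hash(out_folder + '/fuzz.log'))
    for rank, addr, loc, l2, n, s in read_ranking(out_folder + '/patchloc.log'):
        # In the sample output, l2norm agrees with sqrt(N^2 + S^2)
        # (e.g. 1.412245 = sqrt(1.0^2 + 0.997214^2)); recompute it for comparison.
        print('#%-3d %s %-20s l2norm=%.6f (sqrt(N^2+S^2)=%.6f)'
              % (rank, addr, loc, l2, math.hypot(n, s)))
```

A hypothetical invocation would be `python rank_summary.py /root/workspace/cve-2017-5225/output/output_1620642503`, which prints the PoC trace hash followed by one line per ranked instruction.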