├── LICENSE
├── README.md
├── requirements.txt
├── syzscope
│   ├── __init__.py
│   ├── __main__.py
│   ├── interface
│   │   ├── __init__.py
│   │   ├── s2e
│   │   │   └── __init__.py
│   │   ├── static_analysis
│   │   │   ├── __init__.py
│   │   │   ├── error.py
│   │   │   └── staticAnalysis.py
│   │   ├── sym_exec
│   │   │   ├── __init__.py
│   │   │   ├── error.py
│   │   │   ├── mem_instrument.py
│   │   │   ├── stateManager.py
│   │   │   ├── symExec.py
│   │   │   └── symTracing.py
│   │   ├── utilities.py
│   │   └── vm
│   │       ├── __init__.py
│   │       ├── error.py
│   │       ├── gdb.py
│   │       ├── instance.py
│   │       ├── kernel.py
│   │       ├── monitor.py
│   │       └── state.py
│   ├── modules
│   │   ├── __init__.py
│   │   ├── crash.py
│   │   ├── deploy
│   │   │   ├── __init__.py
│   │   │   ├── case.py
│   │   │   ├── deploy.py
│   │   │   └── worker.py
│   │   └── syzbotCrawler.py
│   ├── patches
│   │   ├── 760f8.patch
│   │   ├── kasan.patch
│   │   ├── pwndbg.patch
│   │   └── syzkaller-9b1f3e6.patch
│   ├── resources
│   │   └── kasan_related_funcs
│   ├── scripts
│   │   ├── check_kvm.sh
│   │   ├── deploy-bc.sh
│   │   ├── deploy.sh
│   │   ├── deploy_linux.sh
│   │   ├── init-replay.sh
│   │   ├── linux-clone.sh
│   │   ├── patch_applying_check.sh
│   │   ├── requirements.sh
│   │   ├── run-script.sh
│   │   ├── run-vm.sh
│   │   ├── syz-compile.sh
│   │   └── upload-exp.sh
│   └── test
│       ├── deploy_test.py
│       └── interface
│           ├── s2e_test.py
│           ├── staticAnalysis_test.py
│           ├── vm_test.py
│           └── worker_test.py
└── tutorial
    ├── Getting_started.md
    ├── common_issues.md
    ├── examples
    │   └── WARNING_held_lock_freed.md
    ├── folder_structure.md
    ├── fuzzing.md
    ├── inspect_results.md
    ├── poc_repro.md
    ├── resource
    │   ├── SyzScope-final.pdf
    │   └── workflow.png
    ├── static_taint_analysis.md
    ├── sym_exec.md
    └── workzone_structure.md
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2021 ETenal
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 |
2 |
3 | # SyzScope
4 |
5 | 1. [What is SyzScope?](#What_is_SyzScope)
6 | 2. [Why did we develop SyzScope?](#Why_did_we_develop_SyzScope)
7 | 3. [Access the paper](#access_the_paper)
8 | 4. [Setup](#Setup)
9 | 1. [Docker - Recommended](#Dokcer)
10 | 1. [image - ready2go](#Dokcer_ready2go)
11 | 2. [image - mini](#Dokcer_mini)
12 | 2. [Manual setup](#Manually_setup)
13 | 1. [Let's warm up](#warm_up)
14 | 2. [Install requirements](#install_requirements)
15 | 3. [Tweak pwntools](#Tweak_pwntools)
16 | 4. [Using UTF-8 encoding](#Using_UTF_8_encoding)
17 | 5. [Tutorial](#tutorial)
18 | 6. [Common Issues](#common_issues)
19 |
20 | ## THIS VERSION WAS USED FOR ALL EXPERIMENTS IN USENIX SECURITY 22. FOR ONGOING UPDATES, FOLLOW THE MAIN REPO -> [SyzScope](https://github.com/plummm/SyzScope) ##
21 |
22 | ### What is SyzScope?
23 |
24 |
25 |
26 | SyzScope is a system that can automatically uncover *high-risk* impacts given a bug with only *low-risk* impacts.
27 |
28 | ### Why did we develop SyzScope?
29 |
30 |
31 |
32 | A major problem of current fuzzing platforms is that they neglect a critical function that should have been built-in: ***evaluation of a bug's security impact***. It is well-known that the lack of understanding of security impact can lead to delayed bug fixes as well as patch propagation. Therefore, we developed SyzScope to reveal the potential high-risk bugs among seemingly low-risk bugs on syzbot.
33 |
34 | ### More details?
35 |
36 |
37 |
38 | Access our paper [here](tutorial/resource/SyzScope-final.pdf)
39 |
40 |
41 | ```
42 | @inproceedings {277242,
43 | title = {{SyzScope}: Revealing {High-Risk} Security Impacts of {Fuzzer-Exposed} Bugs in Linux kernel},
44 | booktitle = {31st USENIX Security Symposium (USENIX Security 22)},
45 | year = {2022},
46 | address = {Boston, MA},
47 | url = {https://www.usenix.org/conference/usenixsecurity22/presentation/zou},
48 | publisher = {USENIX Association},
49 | month = aug,
50 | }
51 | ```
52 |
53 | ------
54 |
55 | ### Setup
56 |
57 |
58 |
59 |
60 |
61 | #### Docker - Recommended
62 |
63 |
64 |
65 | ##### Image - ready2go (18.39 GB)
66 |
67 |
68 |
69 | ```bash
70 | docker pull etenal/syzscope:ready2go
71 | docker run -it -d --name syzscope -p 2222:22 --privileged etenal/syzscope:ready2go
72 | docker attach syzscope
73 | ```
74 |
75 |
76 |
77 | ###### Inside docker container
78 |
79 | Everything is ready to go.
80 |
81 | ```bash
82 | cd /root/SyzScope
83 | git pull
84 | ```
85 |
86 |
87 |
88 | ##### Image - mini (400 MB)
89 |
90 |
91 |
92 | ```bash
93 | docker pull etenal/syzscope:mini
94 | docker run -it -d --name syzscope --privileged etenal/syzscope:mini
95 | docker attach syzscope
96 | ```
97 |
98 |
99 |
100 | ###### Inside docker container
101 |
102 | ```bash
103 | cd /root/SyzScope
104 | git pull
105 | . venv/bin/activate
106 | python3 syzscope --install-requirements
107 | ```
108 |
109 |
110 |
111 | #### Manual setup
112 |
113 |
114 |
115 | **Note**: SyzScope was only tested on Ubuntu 18.04.
116 |
117 |
118 |
119 | ##### Let's warm up
120 |
121 |
122 |
123 | ```bash
124 | apt-get update
125 | apt-get -y install git python3 python3-pip python3-venv sudo
126 | git clone https://github.com/plummm/SyzScope.git
127 | cd SyzScope/
128 | python3 -m venv venv
129 | . venv/bin/activate
130 | pip3 install -r requirements.txt
131 | ```
132 |
133 | ##### Install required packages and compile essential tools
134 |
135 |
136 |
137 | ```bash
138 | python3 syzscope --install-requirements
139 | ```
140 |
141 |
142 |
143 | ##### Tweak pwntools
144 |
145 |
146 |
147 | `Pwntools` prints unnecessary debug information when starting or stopping a new process (e.g., gdb) or opening a new connection (e.g., connecting to the QEMU monitor). To disable such output, we add one line to its source code.
148 |
149 | ```bash
150 | vim venv/lib//site-packages/pwnlib/log.py
151 | ```
152 |
153 |
154 |
155 | Add `logger.propagate = False` to `class Logger(object)`
156 |
157 | ```python
158 | class Logger(object):
159 | ...
160 | def __init__(self, logger=None):
161 | ...
162 | logger = logging.getLogger(logger_name)
163 | logger.propagate = False # <-- Add this line
164 | ```
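
For context, this tweak works because Python's `logging` module forwards a child logger's records up to the root logger's handlers by default. A minimal sketch of that behavior (the logger name `pwnlib.demo` is hypothetical, and this is not pwnlib's actual code):

```python
import io
import logging

# Attach a handler to the root logger, standing in for the
# handler that would normally print pwntools' debug output.
stream = io.StringIO()
root = logging.getLogger()
root.addHandler(logging.StreamHandler(stream))
root.setLevel(logging.DEBUG)

child = logging.getLogger("pwnlib.demo")  # hypothetical logger name
child.debug("visible")      # propagates up and reaches the root handler
child.propagate = False     # the one-line tweak described above
child.debug("suppressed")   # no longer forwarded to the root handler

print(stream.getvalue().strip())  # only "visible" was captured
```

With `propagate = False`, records stay on the child logger, which has no handlers of its own, so nothing is printed.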
165 |
166 |
167 |
168 | ##### Make sure UTF-8 encoding is used
169 |
170 |
171 |
172 | UTF-8 encoding is required for `pwndbg` to run properly.
173 | 
174 | SyzScope should install the UTF-8 locale when you install the [requirements](#install_requirements).
175 | 
176 | To make sure UTF-8 is used by default, add the following commands to `.bashrc` or whichever shell init script you use.
177 |
178 | ```bash
179 | export LANG=en_US.UTF-8
180 | export LC_ALL=en_US.UTF-8
181 | ```
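
As a quick sanity check (a sketch, not part of SyzScope itself), you can confirm the variables are exported in the current shell:

```bash
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
# pwndbg reads these from the environment; confirm they are exported:
env | grep -E '^(LANG|LC_ALL)=' | sort
```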
182 |
183 |
184 |
185 | ------
186 |
187 | ### Tutorial
188 |
189 |
190 |
191 | [Getting started](tutorial/Getting_started.md)
192 |
193 | [Workzone Structure](tutorial/workzone_structure.md)
194 |
195 | [Inspect results](tutorial/inspect_results.md)
196 |
197 | [PoC Reproduce](tutorial/poc_repro.md)
198 |
199 | [Fuzzing](tutorial/fuzzing.md)
200 |
201 | [Static Taint Analysis](tutorial/static_taint_analysis.md)
202 |
203 | [Symbolic Execution](tutorial/sym_exec.md)
204 |
205 |
206 |
207 | #### Example
208 |
209 | [WARNING: held lock freed! (CVE-2018-25015)](tutorial/examples/WARNING_held_lock_freed.md)
210 |
211 |
212 |
213 | ------
214 |
215 | ### Common Issues
216 |
217 |
218 |
219 | Check out [common issues](tutorial/common_issues.md)
220 |
221 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | ailment==9.0.5405
2 | angr==9.0.5405
3 | archinfo==9.0.5405
4 | bcrypt==3.2.0
5 | beautifulsoup4==4.9.1
6 | bitstring==3.1.7
7 | cachetools==4.2.1
8 | capstone==4.0.2
9 | certifi==2020.4.5.1
10 | cffi==1.14.4
11 | chardet==3.0.4
12 | claripy==9.0.5405
13 | cle==9.0.5405
14 | CppHeaderParser==2.7.4
15 | cryptography==3.3.1
16 | DateTime==4.3
17 | decorator==4.4.2
18 | dpkt==1.9.4
19 | future==0.18.2
20 | gitdb==4.0.5
21 | GitPython==3.1.12
22 | idna==2.9
23 | intervaltree==3.1.0
24 | itanium-demangler==1.0
25 | Mako==1.1.4
26 | MarkupSafe==1.1.1
27 | mulpyplexer==0.9
28 | networkx==2.5
29 | numpy==1.18.4
30 | packaging==20.8
31 | paramiko==2.7.2
32 | pefile==2019.4.18
33 | pexpect==4.8.0
34 | plumbum==1.6.9
35 | ply==3.11
36 | progressbar2==3.53.1
37 | protobuf==3.14.0
38 | psutil==5.8.0
39 | ptyprocess==0.7.0
40 | pycparser==2.20
41 | pyelftools==0.27
42 | Pygments==2.7.4
43 | PyNaCl==1.4.0
44 | pyparsing==2.4.7
45 | pyserial==3.5
46 | PySMT==0.9.0
47 | PySocks==1.7.1
48 | python-dateutil==2.8.1
49 | python-utils==2.5.5
50 | pytz==2020.1
51 | pyvex==9.0.5405
52 | requests==2.23.0
53 | ROPGadget==6.5
54 | rpyc==5.0.1
55 | six==1.15.0
56 | smmap==3.0.5
57 | sortedcontainers==2.3.0
58 | soupsieve==2.0.1
59 | unicorn==1.0.2rc4
60 | urllib3==1.25.9
61 | wllvm==1.2.8
62 | z3-solver==4.8.10.0
63 | zope.interface==5.1.0
64 | pwntools==4.2.2
65 |
--------------------------------------------------------------------------------
/syzscope/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seclab-ucr/SyzScope/b1a6e20783ba8c92dd33d508e469bc24eaacaab6/syzscope/__init__.py
--------------------------------------------------------------------------------
/syzscope/__main__.py:
--------------------------------------------------------------------------------
1 | import argparse, os, stat, sys
2 | from queue import Empty
3 | import json
4 | import multiprocessing, threading
5 | import gc
6 |
7 | sys.path.append(os.getcwd())
8 | from syzscope.modules import Crawler, Deployer
9 | from subprocess import call
10 | from syzscope.interface.utilities import urlsOfCases, FOLDER, CASE
11 |
12 | def args_parse():
13 | parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter,
14 | description='Analyze crash cases from syzbot\n'
15 | 'eg. python syzscope -i 7fd1cbe3e1d2b3f0366d5026854ee5754d451405\n'
16 | 'eg. python syzscope -k="slab-out-of-bounds Read" -k="slab-out-of-bounds Write"')
17 | parser.add_argument('-i', '--input', nargs='?', action='store',
18 | help='The input should be a valid hash or a file containing multiple hashes. -u, -m, and -k will be ignored if -i is enabled.')
19 | parser.add_argument('-u', '--url', nargs='?', action='store',
20 | default="https://syzkaller.appspot.com/upstream/fixed",
21 | help='Indicate an URL for automatically crawling and running.\n'
22 | '(default value is \'https://syzkaller.appspot.com/upstream/fixed\')')
23 | parser.add_argument('-m', '--max', nargs='?', action='store',
24 | default='9999',
25 | help='The maximum number of cases to retrieve\n'
26 | '(By default all the cases will be retrieved)')
27 | parser.add_argument('-k', '--key', action='append',
28 | help='The keywords for detecting cases.\n'
29 | '(By default, it retrieves all cases)\n'
30 | 'This argument could be multiple values')
31 | parser.add_argument('-de', '--deduplicate', action='append',
32 | help='The keywords for deduplicating.\n'
33 | 'eg. -de="use-after-free" will ignore cases that already had "use-after-free" contexts\n'
34 | 'This argument could be multiple values')
35 | parser.add_argument('-pm', '--parallel-max', nargs='?', action='store',
36 | default='1', help='The maximum number of parallel processes\n'
37 | '(default value is 1)')
38 | parser.add_argument('--force', action='store_true',
39 | help='Force to run all cases even if they have finished\n')
40 | parser.add_argument('--linux', nargs='?', action='store',
41 | default='-1',
42 | help='Indicate which linux repo to be used for running\n'
43 | '(--parallel-max will be set to 1)')
44 | parser.add_argument('-r', '--replay', choices=['succeed', 'completed', 'incomplete', 'error'],
45 | help='Replay crashes of each case in one directory')
46 | parser.add_argument('--ssh', nargs='?',
47 | default='33777',
48 | help='The default port of ssh using by QEMU\n'
49 | '(default value is 33777)')
50 | parser.add_argument('--ignore', nargs='?', action='store',
51 | help='A file containing case hashes to ignore. One hash per line.')
52 | parser.add_argument('--ignore-batch', nargs='?', action='store',
53 | help='A file containing case hashes to ignore. This argument also ignores all other cases that share the same patch')
54 | parser.add_argument('--alert', nargs='*', action='store',
55 | default=[''],
56 | help='Set alert for specific crash description')
57 | parser.add_argument('-KF', '--kernel-fuzzing',
58 | action='store_true',
59 | help='Enable kernel fuzzing and reproducing the original impact')
60 | parser.add_argument('-RP', '--reproduce',
61 | action='store_true',
62 | help='Enable reproducing the original impact separately')
63 | parser.add_argument('-SA', '--static-analysis',
64 | action='store_true',
65 | help='Enable static analysis separately')
66 | parser.add_argument('-SE', '--symbolic-execution',
67 | action='store_true',
68 | help='Enable symbolic execution separately')
69 | parser.add_argument('-DV', '--dynamic-validation',
70 | action='store_true',
71 | help='Enable dynamic validation (both static analysis and symbolic execution)')
72 | parser.add_argument('--guided',
73 | action='store_true',
74 | help='Enable path guided symbolic execution')
75 | parser.add_argument('--use-cache',
76 | action='store_true',
77 | help='Read cases from cache, this will overwrite the --input field')
78 | parser.add_argument('--gdb', nargs='?',
79 | default='1235',
80 | help='Default gdb port for attaching')
81 | parser.add_argument('--qemu-monitor', nargs='?',
82 | default='9700',
83 | help='Default port of qemu monitor')
84 | parser.add_argument('-M', '--max-compiling-kernel-concurrently', nargs='?',
85 | default='-1',
86 | help='Maximum number of kernels compiling at the same time. Default is unlimited.')
87 | parser.add_argument('--filter-by-reported', nargs='?',
88 | default='-1',
89 | help='filter bugs by the days they were reported\n')
90 | parser.add_argument('--filter-by-closed', nargs='?',
91 | default='-1',
92 | help='filter bugs by the days they were closed\n')
93 | parser.add_argument('--timeout-kernel-fuzzing', nargs='?',
94 | default='3',
95 | help='Timeout for kernel fuzzing (in hours)\n'
96 | 'default timeout is 3 hours')
97 | parser.add_argument('--timeout-dynamic-validation', nargs='?',
98 | help='The timeout (in seconds) of static analysis and symbolic execution\n'
99 | 'If you specify the timeout of static analysis or symbolic execution individually,\n'
100 | 'the rest of the time goes to the other one\n'
101 | 'If you specify the timeouts of both static analysis and symbolic execution,\n'
102 | 'this argument will be ignored\n'
103 | 'Default timeout is 60 mins')
104 | parser.add_argument('--timeout-static-analysis', nargs='?',
105 | help='The timeout (in seconds) of static analysis\n'
106 | 'Default timeout is 30 mins')
107 | parser.add_argument('--timeout-symbolic-execution', nargs='?',
108 | help='The timeout (in seconds) of symbolic execution\n'
109 | 'Default timeout is (timeout_dynamic_validation - timeout_static_analysis)')
110 | parser.add_argument('--get-hash', nargs='?',
111 | help='Get the hash of cases\n'
112 | 'eg. work/completed')
113 | parser.add_argument('--debug', action='store_true',
114 | help='Enable debug mode')
115 | parser.add_argument('--be-bully', action='store_true',
116 | help="Being a bully will kill the proc that occupies the port")
117 | parser.add_argument('--SE-PoC', nargs='?', action='store',
118 | help='The path to a PoC that will be used by symbolic execution')
119 | parser.add_argument('--include-high-risk', action='store_true',
120 | help='Include high risk bugs for analysis')
121 | parser.add_argument('--install-requirements', action='store_true',
122 | help='Install required packages and compile essential tools')
123 |
124 | args = parser.parse_args()
125 | return args
126 |
127 | def print_args_info(args):
128 | print("[*] hash: {}".format(args.input))
129 | print("[*] url: {}".format(args.url))
130 | print("[*] max: {}".format(args.max))
131 | print("[*] key: {}".format(args.key))
132 | print("[*] deduplicate: {}".format(args.deduplicate))
133 | print("[*] alert: {}".format(args.alert))
134 |
135 | try:
136 | int(args.ssh)
137 | except:
138 | print("[-] invalid argument value ssh: {}".format(args.ssh))
139 | os._exit(1)
140 |
141 | try:
142 | int(args.linux)
143 | except:
144 | print("[-] invalid argument value linux: {}".format(args.linux))
145 | os._exit(1)
146 |
147 | try:
148 | int(args.timeout_kernel_fuzzing)
149 | except:
150 | print("[-] invalid argument value time: {}".format(args.timeout_kernel_fuzzing))
151 | os._exit(1)
152 |
153 | def check_kvm():
154 | proj_path = os.path.join(os.getcwd(), "syzscope")
155 | check_kvm_path = os.path.join(proj_path, "scripts/check_kvm.sh")
156 | st = os.stat(check_kvm_path)
157 | os.chmod(check_kvm_path, st.st_mode | stat.S_IEXEC)
158 | r = call([check_kvm_path], shell=False)
159 | if r == 1:
160 | exit(0)
161 |
162 | def cache_cases(cases):
163 | work_path = os.getcwd()
164 | cases_json_path = os.path.join(work_path, "work/cases.json")
165 | with open(cases_json_path, 'w') as f:
166 | json.dump(cases, f)
167 | f.close()
168 |
169 | def read_cases_from_cache():
170 | cases = {}
171 | work_path = os.getcwd()
172 | cases_json_path = os.path.join(work_path, "work/cases.json")
173 | if os.path.exists(cases_json_path):
174 | with open(cases_json_path, 'r') as f:
175 | cases = json.load(f)
176 | f.close()
177 | return cases
178 |
179 | def deploy_one_case(index, args, hash_val):
180 | case = crawler.cases[hash_val]
181 | dp = Deployer(index=index, debug=args.debug, force=args.force, port=int(args.ssh), replay=args.replay, \
182 | linux_index=int(args.linux), time=int(args.timeout_kernel_fuzzing), kernel_fuzzing=args.kernel_fuzzing, reproduce= args.reproduce, alert=args.alert, \
183 | static_analysis=args.static_analysis, symbolic_execution=args.symbolic_execution, gdb_port=int(args.gdb), \
184 | qemu_monitor_port=int(args.qemu_monitor), max_compiling_kernel=int(args.max_compiling_kernel_concurrently), \
185 | timeout_dynamic_validation=args.timeout_dynamic_validation, timeout_static_analysis=args.timeout_static_analysis, \
186 | timeout_symbolic_execution=args.timeout_symbolic_execution, parallel_max=int(args.parallel_max), \
187 | guided=args.guided, be_bully=args.be_bully, se_poc=args.SE_PoC)
188 | dp.deploy(hash_val, case)
189 | del dp
190 |
191 | def prepare_cases(index, args):
192 | while(1):
193 | lock.acquire(blocking=True)
194 | try:
195 | hash_val = g_cases.get(block=True, timeout=3)
196 | if hash_val in ignore:
197 | rest.value -= 1
198 | lock.release()
199 | continue
200 | print("Thread {}: run case {} [{}/{}] left".format(index, hash_val, rest.value-1, total))
201 | rest.value -= 1
202 | lock.release()
203 | x = multiprocessing.Process(target=deploy_one_case, args=(index, args, hash_val,), name="lord-{}".format(i))
204 | x.start()
205 | x.join()
206 | gc.collect()
207 | remove_using_flag(index)
208 | except Empty:
209 | lock.release()
210 | break
211 | print("Thread {} exit->".format(index))
212 |
213 | def get_hash(path):
214 | ret = []
215 | log_path = os.path.join(path, "log")
216 | if os.path.exists(log_path):
217 | ret=urlsOfCases(path, CASE)
218 | else:
219 | ret=urlsOfCases(path, FOLDER)
220 | print("The hash of {}".format(path))
221 | for each in ret:
222 | print(each)
223 |
224 | def remove_using_flag(index):
225 | project_path = os.getcwd()
226 | flag_path = "{}/tools/linux-{}/THIS_KERNEL_IS_BEING_USED".format(project_path,index)
227 | if os.path.isfile(flag_path):
228 | os.remove(flag_path)
229 |
230 | def install_requirments():
231 | proj_path = os.path.join(os.getcwd(), "syzscope")
232 | requirements_path = os.path.join(proj_path, "scripts/requirements.sh")
233 | st = os.stat(requirements_path)
234 | os.chmod(requirements_path, st.st_mode | stat.S_IEXEC)
235 | return call([requirements_path], shell=False)
236 |
237 | def check_requirements():
238 | tools_path = os.path.join(os.getcwd(), "tools")
239 | env_stamp = os.path.join(tools_path, ".stamp/ENV_SETUP")
240 | return os.path.isfile(env_stamp)
241 |
242 | def build_work_dir():
243 | work_path = os.path.join(os.getcwd(), "work")
244 | os.makedirs(work_path, exist_ok=True)
245 | incomp = os.path.join(work_path, "incomplete")
246 | comp = os.path.join(work_path, "completed")
247 | os.makedirs(incomp, exist_ok=True)
248 | os.makedirs(comp, exist_ok=True)
249 |
250 | def args_dependencies():
251 | if args.debug:
252 | print("debug mode runs on single thread")
253 | args.parallel_max = '1'
254 | if args.linux != '-1':
255 | print("specifying a linux repo runs on single thread")
256 | args.parallel_max = '1'
257 |
258 | if __name__ == '__main__':
259 | args = args_parse()
260 | if args.key == None:
261 | args.key = ['']
262 | if args.deduplicate == None:
263 | args.deduplicate = []
264 | if install_requirments() != 0:
266 | print("Failed to install requirements.")
266 | exit(0)
267 | if args.install_requirements:
268 | exit(0)
269 | if not check_requirements():
270 | print("No essential components found. Install them with --install-requirements")
271 | exit(0)
272 |
273 | print_args_info(args)
274 | check_kvm()
275 | args_dependencies()
276 |
277 | if args.get_hash != None:
278 | get_hash(args.get_hash)
279 | sys.exit(0)
280 |
281 | ignore = []
282 | build_work_dir()
283 | manager = multiprocessing.Manager()
284 | if args.ignore != None:
285 | with open(args.ignore, "r") as f:
286 | text = f.readlines()
287 | for line in text:
288 | line = line.strip('\n')
289 | ignore.append(line)
290 | ignore_batch = []
291 | if args.ignore_batch != None:
292 | with open(args.ignore_batch, "r") as f:
293 | text = f.readlines()
294 | for line in text:
295 | line = line.strip('\n')
296 | ignore_batch.append(line)
297 | if args.input != None and args.use_cache:
298 | print("Cannot use cache when specifying inputs")
299 | sys.exit(1)
300 | crawler = Crawler(url=args.url, keyword=args.key, max_retrieve=int(args.max), deduplicate=args.deduplicate, ignore_batch=ignore_batch,
301 | filter_by_reported=int(args.filter_by_reported), filter_by_closed=int(args.filter_by_closed), include_high_risk=args.include_high_risk, debug=args.debug)
302 | if args.replay != None:
303 | for url in urlsOfCases(args.replay):
304 | crawler.run_one_case(url)
305 | elif args.input != None:
306 | if len(args.input) == 40:
307 | crawler.run_one_case(args.input)
308 | else:
309 | with open(args.input, 'r') as f:
310 | text = f.readlines()
311 | for line in text:
312 | line = line.strip('\n')
313 | crawler.run_one_case(line)
314 | else:
315 | if args.use_cache:
316 | crawler.cases = read_cases_from_cache()
317 | else:
318 | crawler.run()
319 | if not args.use_cache:
320 | cache_cases(crawler.cases)
321 | if args.dynamic_validation:
322 | args.symbolic_execution = True
323 | args.static_analysis = True
324 | parallel_max = int(args.parallel_max)
325 | parallel_count = 0
326 | lock = threading.Lock()
327 | g_cases = manager.Queue()
328 | for key in crawler.cases:
329 | g_cases.put(key)
330 | #g_cases = manager.list([crawler.cases[x] for x in crawler.cases])
331 | l = list(crawler.cases.keys())
332 | total = len(l)
333 | rest = manager.Value('i', total)
334 | for i in range(0,min(parallel_max,total)):
335 | x = threading.Thread(target=prepare_cases, args=(i, args,), name="lord-{}".format(i))
336 | x.start()
--------------------------------------------------------------------------------
/syzscope/interface/__init__.py:
--------------------------------------------------------------------------------
1 | from .s2e import S2EInterface
2 |
3 | class Interface:
4 | def __init__(self):
5 | self.s2e_inst = S2EInterface()
--------------------------------------------------------------------------------
/syzscope/interface/s2e/__init__.py:
--------------------------------------------------------------------------------
1 | import os
2 | from subprocess import Popen, PIPE, STDOUT
3 |
4 | class S2EInterface:
5 | def __init__(self, s2e_path, kernel_path, syzkaller_path):
6 | self.s2e_path = s2e_path
7 | self.kernel_path = kernel_path
8 | self.syzkaller_path = syzkaller_path
9 |
10 | def getAvoidingPC(self, func_list):
11 | avoid = {}
12 | func2addr = os.path.join(self.syzkaller_path, "bin/syz-fun2addr")
13 | for each_func in func_list:
14 | avoid[each_func] = []
15 | cmd = [func2addr, "-f", each_func, "-v", self.kernel_path]
16 | p = Popen(cmd,
17 | stdout=PIPE,
18 | stderr=STDOUT)
19 | with p.stdout:
20 | for line in iter(p.stdout.readline, b''):
21 | line = line.decode("utf-8").strip('\n').strip('\r')
22 | res = line.split(':')
23 | if len(res) == 2:
24 | if res[0] == 'Start':
25 | addr = int(res[1], 16)
26 | avoid[each_func].append(addr)
27 | if res[0] == 'End':
28 | addr = int(res[1], 16)
29 | avoid[each_func].append(addr)
30 | return avoid
31 |
32 | def generateAvoidList(self, avoid, s2e_project_path):
33 | avoid_list_path = os.path.join(s2e_project_path, "avoid")
34 | f = open(avoid_list_path, 'w')
35 | for func in avoid:
36 | for i in range(0, len(avoid[func]), 2):
37 | start = avoid[func][i]
38 | end = avoid[func][i+1]
39 | text = "{} {}\n".format(hex(start), hex(end))
40 | f.write(text)
41 | f.close()
42 |
43 |
--------------------------------------------------------------------------------
/syzscope/interface/static_analysis/__init__.py:
--------------------------------------------------------------------------------
1 | from .staticAnalysis import StaticAnalysis
--------------------------------------------------------------------------------
/syzscope/interface/static_analysis/error.py:
--------------------------------------------------------------------------------
1 | class CompilingError(Exception):
2 | pass
--------------------------------------------------------------------------------
/syzscope/interface/static_analysis/staticAnalysis.py:
--------------------------------------------------------------------------------
1 | from multiprocessing import managers
2 | import os, stat
3 | from socket import timeout
4 | import logging
5 | import shutil
6 | import syzscope.interface.utilities as utilities
7 | import threading
8 | import time
9 | import queue
10 | import multiprocessing
11 |
12 | from subprocess import Popen, PIPE, STDOUT, TimeoutExpired, call
13 | from .error import CompilingError
14 |
15 | class StaticAnalysis:
16 | def __init__(self, logger, proj_path, index, workdir, case_path, linux_folder, max_compiling_kernel, timeout=30*60, debug=False):
17 | self.case_logger = logger
18 | self.proj_path = proj_path
19 | self.package_path = os.path.join(proj_path, "syzscope")
20 | self.case_path = case_path
21 | self.index = index
22 | self.work_path = os.path.join(case_path, workdir)
23 | self.linux_folder = linux_folder
24 | manager = multiprocessing.Manager()
25 | self.cmd_queue = manager.Queue()
26 | self.bc_ready = False
27 | self.timeout = timeout
28 | self.max_compiling_kernel = max_compiling_kernel
29 | self.debug = debug
30 |
31 | def prepare_static_analysis(self, case, vul_site, func_site):
32 | exitcode = 0
33 | bc_path = ''
34 | commit = case["commit"]
35 | config = case["config"]
36 | vul_file, tmp = vul_site.split(':')
37 | func_file, tmp = func_site.split(':')
38 |
39 | os.makedirs("{}/paths".format(self.work_path), exist_ok=True)
40 | if os.path.exists("{}/one.bc".format(self.case_path)):
41 | return exitcode
42 |
43 | if os.path.splitext(vul_file)[1] == '.h':
44 | bc_path = os.path.dirname(func_file)
45 | else:
46 | dir_list1 = vul_file.split('/')
47 | dir_list2 = func_file.split('/')
48 | for i in range(0, min(len(dir_list1), len(dir_list2)) - 1):
49 | if dir_list1[i] == dir_list2[i]:
50 | bc_path += dir_list1[i] + '/'
51 |
52 | script_path = os.path.join(self.package_path, "scripts/deploy-bc.sh")
53 | utilities.chmodX(script_path)
54 | index = str(self.index)
55 | self.case_logger.info("run: scripts/deploy-bc.sh")
56 | self.adjust_kernel_for_clang()
57 |
58 | p = Popen([script_path, self.linux_folder, index, self.case_path, commit, config, bc_path, "1", str(self.max_compiling_kernel)],
59 | stdout=PIPE,
60 | stderr=STDOUT
61 | )
62 | with p.stdout:
63 | self.__log_subprocess_output(p.stdout, logging.INFO)
64 | exitcode = p.wait()
65 | self.case_logger.info("script/deploy-bc.sh is done with exitcode {}".format(exitcode))
66 |
67 | if exitcode == 1:
68 | x = threading.Thread(target=self.monitor_execution, args=(p, 60*60))
69 | x.start()
70 | if self.compile_bc_extra() != 0:
71 | self.case_logger.error("Error occurred when compiling or linking bc files")
72 | return exitcode
73 | elif exitcode != 0:
74 | self.case_logger.error("Error occurred in deploy-bc.sh")
75 | return exitcode
76 | shutil.move(self.case_path+"/linux/one.bc", self.case_path+"/one.bc")
77 |
78 | # Restore CONFIG_KCOV CONFIG_KASAN CONFIG_BUG_ON_DATA_CORRUPTION
79 | # Kernel fuzzing and symbolic execution depends on some of them
80 | p = Popen([script_path, self.linux_folder, index, self.case_path, commit, config, bc_path, "0", str(self.max_compiling_kernel)],
81 | stdout=PIPE,
82 | stderr=STDOUT
83 | )
84 | with p.stdout:
85 | self.__log_subprocess_output(p.stdout, logging.INFO)
86 | exitcode = p.wait()
87 | return exitcode
88 |
89 | def monitor_execution(self, p, seconds):
90 | count = 0
91 | while (count < seconds // 10):
92 | count += 1
93 | time.sleep(10)
94 | poll = p.poll()
95 | if poll != None:
96 | return
97 | self.case_logger.info('Time out, kill qemu')
98 | p.kill()
99 |
100 | def adjust_kernel_for_clang(self):
101 | opts = ["-fno-inline-functions", "-fno-builtin-bcmp"]
102 | self._fix_asm_volatile_goto()
103 | self._add_extra_options(opts)
104 |
105 | def compile_bc_extra(self):
106 | regx = r'echo \'[ \t]*CC[ \t]*(([A-Za-z0-9_\-.]+\/)+([A-Za-z0-9_.\-]+))\';'
107 | base = os.path.join(self.case_path, 'linux')
108 | path = os.path.join(base, 'clang_log')
109 |
110 | procs = []
111 | for _ in range(0, 16):
112 | x = multiprocessing.Process(target=self.executor, args=(base,))
113 | x.start()
114 | procs.append(x)
115 | with open(path, 'r') as f:
116 | lines = f.readlines()
117 | for line in lines:
118 | p2obj = utilities.regx_get(regx, line, 0)
119 | obj = utilities.regx_get(regx, line, 2)
120 | if p2obj == None or obj == None:
121 | """cmds = line.split(';')
122 | for e in cmds:
123 | call(e, cwd=base)"""
124 | continue
125 | if 'arch/x86/boot' in p2obj \
126 | or 'arch/x86/entry/vdso' in p2obj \
127 | or 'arch/x86/realmode' in p2obj:
128 | continue
129 | #print("CC {}".format(p2obj))
130 | new_cmd = []
131 | try:
132 | clang_path = '{}/tools/llvm/build/bin/clang'.format(self.proj_path)
133 | idx1 = line.index(clang_path)
134 | idx2 = line[idx1:].index(';')
135 | cmd = line[idx1:idx1+idx2].split(' ')
136 | if cmd[0] == clang_path:
137 | new_cmd.append(cmd[0])
138 | new_cmd.append('-emit-llvm')
139 | #if cmd[0] == 'wllvm':
140 | # new_cmd.append('{}/tools/llvm/build/bin/clang'.format(self.proj_path))
141 | # new_cmd.append('-emit-llvm')
142 | new_cmd.extend(cmd[1:])
143 | except ValueError:
144 | self.case_logger.error('No \'wllvm\' or \';\' found in \'{}\''.format(line))
145 | raise CompilingError
146 | idx_obj = len(new_cmd)-2
147 | st = new_cmd[idx_obj]
148 |                 if st.endswith('.o'):
149 |                     new_cmd[idx_obj] = st[:-1] + 'bc'
150 |                 else:
151 |                     self.case_logger.error("{} does not end with .o".format(new_cmd[idx_obj]))
152 | continue
153 | self.cmd_queue.put(new_cmd)
154 | """p = Popen(new_cmd, cwd=base, stdout=PIPE, stderr=PIPE)
155 | try:
156 | p.wait(timeout=5)
157 | except TimeoutExpired:
158 | if p.poll() == None:
159 | p.kill()
160 | """
161 |
162 |         self.bc_ready = True
163 | for p in procs:
164 | p.join()
165 | if os.path.exists(os.path.join(self.case_path,'one.bc')):
166 | os.remove(os.path.join(self.case_path,'one.bc'))
167 | link_cmd = '{}/tools/llvm/build/bin/llvm-link -o one.bc `find ./ -name "*.bc" ! -name "timeconst.bc" ! -name "*.mod.bc"`'.format(self.proj_path)
168 | p = Popen(['/bin/bash','-c', link_cmd], stdout=PIPE, stderr=PIPE, cwd=base)
169 | with p.stdout:
170 | self.__log_subprocess_output(p.stdout, logging.INFO)
171 | exitcode = p.wait()
172 | if exitcode != 0:
173 | self.case_logger.error("Fail to construct a monolithic bc")
174 | return exitcode
175 |
176 | def executor(self, base):
177 | while not self.bc_ready or not self.cmd_queue.empty():
178 | try:
179 | cmd = self.cmd_queue.get(block=True, timeout=5)
180 | obj = cmd[len(cmd)-2]
181 | self.case_logger.debug("CC {}".format(obj))
182 | p = Popen(" ".join(cmd), shell=True, cwd=base, stdout=PIPE, stderr=PIPE)
183 | #call(" ".join(cmd), shell=True, cwd=base)
184 | try:
185 | p.wait(timeout=5)
186 | except TimeoutExpired:
187 | if p.poll() == None:
188 | p.kill()
189 | if self.debug:
190 | print("CC {}".format(obj))
191 | if p.poll() == None:
192 | p.kill()
193 | except queue.Empty:
194 | # get() is multithreads safe
195 | #
196 | break
197 |
198 | def KasanVulnChecker(self, report):
199 | vul_site = ''
200 | func_site = ''
201 | func = ''
202 | inline_func = ''
203 | offset = -1
204 | report_list = report.split('\n')
205 | kasan_report = utilities.only_kasan_calltrace(report_list)
206 | trace = utilities.extrace_call_trace(kasan_report)
207 | for each in trace:
208 | """if vul_site == '':
209 | vul_site = utilities.extract_debug_info(each)
210 | if utilities.isInline(each):
211 | inline_func = utilities.extract_func_name(each)
212 | continue
213 | func = utilities.extract_func_name(each)
214 | if func == inline_func:
215 | continue"""
216 | # See if it works after we disabled inline function
217 | if vul_site == '':
218 | vul_site = utilities.extract_debug_info(each)
219 | if func == '':
220 | func = utilities.extract_func_name(each)
221 | func_site = utilities.extract_debug_info(each)
222 | if func == 'fail_dump':
223 | func = None
224 | func_site = None
225 | """if utilities.regx_match(r'__read_once', func) or utilities.regx_match(r'__write_once', func):
226 | vul_site = ''
227 | func = ''
228 | func_site = ''
229 | continue"""
230 | break
231 |
232 | return vul_site, func_site, func
233 |
234 | def saveCallTrace2File(self, trace, vul_site):
235 | syscall_entrance = [r'SYS', r'_sys_', r'^sys_', r'entry_SYSENTER', r'entry_SYSCALL', r'ret_from_fork', r'bpf_prog_[a-z0-9]{16}']
236 | syscall_file = [r'arch/x86/entry']
237 | text = []
238 | flag_record = 0
239 | last_inline = ''
240 | flag_stop = False
241 | err = False
242 | for each in trace:
243 | if utilities.extract_debug_info(each) == vul_site and not flag_record:
244 | flag_record ^= 1
245 | if flag_record:
246 | func = utilities.extract_func_name(each)
247 | for entrance in syscall_entrance:
248 | if utilities.regx_match(entrance, func):
249 | # system call entrance is not included
250 | flag_stop = True
251 | break
252 | if flag_stop:
253 | break
254 | site = utilities.extract_debug_info(each)
255 | if site == None:
256 | continue
257 | for entrance in syscall_file:
258 | if utilities.regx_match(entrance, site):
259 | # system call entrance is not included
260 | flag_stop = True
261 | break
262 | t = "{} {}".format(func, site)
263 | file, line = site.split(':')
264 | s, e = self.getFuncBounds(func, file, int(line))
265 | if s == 0 and e == 0:
266 | err = True
267 | self.case_logger.error("Can not find the boundaries of calltrace function {}".format(func))
268 | t += " {} {}".format(s, e)
269 | # We disabled inline function
270 | if utilities.isInline(each):
271 | last_inline = func
272 |                 #t += " [inline]"
273 | text.append(t)
274 | # Sometimes an inline function will appear at the next line of calltrace as a non-inlined function
275 | if not utilities.isInline(each) and last_inline == func:
276 | text.pop()
277 | path = os.path.join(self.work_path, "CallTrace")
278 | f = open(path, "w")
279 | f.writelines("\n".join(text))
280 | f.truncate()
281 | f.close()
282 | return err
283 |
284 | def getFuncBounds(self, func, file, lo_line):
285 | s = 0
286 | e = 0
287 | base = os.path.join(self.case_path, "linux")
288 | src_file_path = os.path.join(base, file)
289 | with open(src_file_path, 'r') as f:
290 | lines = f.readlines()
291 | text = "".join(lines)
292 | tmp = []
293 | for i in range(lo_line-1, 0, -1):
294 | tmp.insert(0,lines[i])
295 | expr = utilities.regx_get(utilities.kernel_func_def_regx, "".join(tmp), 0)
296 | if expr == None:
297 | continue
298 | n = text.index(expr)
299 | left_bracket = n+len(expr)+1
300 | s = i+1
301 | for j in range(lo_line, len(lines)):
302 | line = lines[j]
303 | if line == '}\n':
304 | e = j+1
305 | return s, e
306 | self.case_logger.error("Incorrect range of {}()".format(func))
307 |         return s, e
308 |
309 | def run_static_analysis(self, offset, size):
310 | calltrace = os.path.join(self.work_path, 'CallTrace')
311 | cmd = ["{}/tools/llvm/build/bin/opt".format(self.proj_path), "-load", "{}/tools/dr_checker/build/SoundyAliasAnalysis/libSoundyAliasAnalysis.so".format(self.proj_path),
312 | "-dr_checker", "-disable-output", "{}/one.bc".format(self.case_path),
313 | "-CalltraceFile={}".format(calltrace),
314 | "-Offset={}".format(offset),
315 | "-PrintPathDir={}/paths".format(self.work_path)]
316 | if size != None:
317 | cmd.append("-Size={}".format(size))
318 | self.case_logger.info("====================Here comes the taint analysis====================")
319 | self.case_logger.info(" ".join(cmd))
320 | p = Popen(cmd,
321 | stdout=PIPE,
322 | stderr=STDOUT
323 | )
324 | x = threading.Thread(target=self.monitor_execution, args=(p, self.timeout))
325 | x.start()
326 | start_time = time.time()
327 | with p.stdout:
328 | self.__log_subprocess_output(p.stdout, logging.INFO)
329 | exitcode = p.wait()
330 | end_time = time.time()
331 | time_on_static_analysis = end_time-start_time
332 | self.case_logger.info("Taint analysis took {}".format(time.strftime('%M:%S', time.gmtime(time_on_static_analysis))))
333 | return exitcode, time_on_static_analysis
334 |
335 | def _fix_asm_volatile_goto(self):
336 | regx = r'#define asm_volatile_goto'
337 | linux_repo = os.path.join(self.case_path, "linux")
338 | compiler_gcc = os.path.join(linux_repo, "include/linux/compiler-gcc.h")
339 | buf = ''
340 | if os.path.exists(compiler_gcc):
341 | with open(compiler_gcc, 'r') as f_gcc:
342 | lines = f_gcc.readlines()
343 | for line in lines:
344 | if utilities.regx_match(regx, line):
345 | buf = line
346 | break
347 | if buf != '':
348 | compiler_clang = os.path.join(linux_repo, "include/linux/compiler-clang.h")
349 | with open(compiler_clang, 'r+') as f_clang:
350 | lines = f_clang.readlines()
351 | data = [buf]
352 | data.extend(lines)
353 | f_clang.seek(0)
354 | f_clang.writelines(data)
355 | f_clang.truncate()
356 | return
357 |
358 | def _add_extra_options(self, opts):
359 | regx = r'KBUILD_CFLAGS[ \t]+:='
360 | linux_repo = os.path.join(self.case_path, "linux")
361 | makefile = os.path.join(linux_repo, "Makefile")
362 | data = []
363 | with open(makefile, 'r+') as f:
364 | lines = f.readlines()
365 | for i in range(0, len(lines)):
366 | line = lines[i]
367 | if utilities.regx_match(regx, line):
368 | parts = line.split(':=')
369 | opts_str = " ".join(opts)
370 | data.extend(lines[:i])
371 | data.append(parts[0] + ":= " + opts_str + " " + parts[1])
372 | data.extend(lines[i+1:])
373 | f.seek(0)
374 | f.writelines(data)
375 | f.truncate()
376 | break
377 |
378 | def __log_subprocess_output(self, pipe, log_level):
379 | for line in iter(pipe.readline, b''):
380 | if log_level == logging.INFO:
381 | self.case_logger.info(line)
382 | if log_level == logging.DEBUG:
383 | self.case_logger.debug(line)
384 |
--------------------------------------------------------------------------------
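`compile_bc_extra` above feeds rewritten `CC` commands into `cmd_queue`, and up to 16 `executor` workers drain it. The core rewrite is small: prepend `-emit-llvm` after the compiler and rename the `-o` target from `.o` to `.bc`. A minimal sketch of that rewrite in isolation (the command list and `to_bc_cmd` name below are hypothetical, not part of the repo):

```python
def to_bc_cmd(cmd):
    # Mirror the rewrite in compile_bc_extra: keep the compiler, ask for
    # LLVM bitcode, and rename the second-to-last argument (the -o target)
    # from foo.o to foo.bc.
    new_cmd = [cmd[0], '-emit-llvm'] + cmd[1:]
    idx_obj = len(new_cmd) - 2
    obj = new_cmd[idx_obj]
    if not obj.endswith('.o'):
        raise ValueError('{} does not end with .o'.format(obj))
    new_cmd[idx_obj] = obj[:-1] + 'bc'
    return new_cmd

# hypothetical clang invocation, shaped like a clang_log line
cmd = ['clang', '-O2', '-c', '-o', 'fs/ext4/inode.o', 'fs/ext4/inode.c']
print(to_bc_cmd(cmd))
```

The `.bc` objects produced this way are what the final `llvm-link` pass merges into `one.bc`.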
/syzscope/interface/sym_exec/__init__.py:
--------------------------------------------------------------------------------
1 | from .symExec import SymExec
--------------------------------------------------------------------------------
/syzscope/interface/sym_exec/error.py:
--------------------------------------------------------------------------------
1 | class VulnerabilityNotTrigger(Exception):
2 | pass
3 |
4 | class ExecutionError(Exception):
5 | pass
6 |
7 | class AbnormalGDBBehavior(Exception):
8 | pass
9 |
10 | class InvalidCPU(Exception):
11 | pass
--------------------------------------------------------------------------------
/syzscope/interface/sym_exec/symTracing.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | class PropagationHandler:
4 | def __init__(self):
5 | self._last_write = 0
6 | self._write_queue = []
7 | self._write_from_sym = []
8 |
9 | def is_kasan_write(self, addr):
10 | if self._last_write == addr:
11 | self._last_write = 0
12 | return True
13 | return False
14 |
15 | def log_kasan_write(self,addr):
16 | self._write_queue.append(addr)
17 | if self._last_write !=0:
18 | print("last_write = {} instead of 0".format(hex(self._last_write)))
19 | self._last_write = addr
20 |
21 | def log_symbolic_propagation(self, state, stack):
22 | propagation_info = {}
23 | propagation_info['kasan_write_index'] = len(self._write_queue)-1
24 | propagation_info['pc'] = state.scratch.ins_addr
25 | propagation_info['write_to_mem'] = self._write_queue[len(self._write_queue)-1]
26 | propagation_info['stack'] = stack
27 | self._write_from_sym.append(propagation_info)
28 |
29 | def get_symbolic_propagation(self):
30 | return self._write_from_sym
31 |
32 | def get_write_queue(self, index):
33 | if len(self._write_queue) > index:
34 | return self._write_queue[index]
35 | return None
--------------------------------------------------------------------------------
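`PropagationHandler` above pairs each KASan-instrumented store with the write that triggered it: `log_kasan_write` records the destination address, and `is_kasan_write` consumes that pending address exactly once. A self-contained sketch of the handshake (the class is trimmed to the two methods involved; the addresses are made up):

```python
class PropagationHandler:
    # trimmed copy of the class above, kept runnable on its own
    def __init__(self):
        self._last_write = 0
        self._write_queue = []

    def log_kasan_write(self, addr):
        # remember the address of the most recent KASan-instrumented write
        self._write_queue.append(addr)
        self._last_write = addr

    def is_kasan_write(self, addr):
        # a pending write matches at most once; it is cleared on first hit
        if self._last_write == addr:
            self._last_write = 0
            return True
        return False

h = PropagationHandler()
h.log_kasan_write(0xffff888000100000)
print(h.is_kasan_write(0xffff888000100000))  # True: consumes the pending write
print(h.is_kasan_write(0xffff888000100000))  # False: already consumed
```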
/syzscope/interface/vm/__init__.py:
--------------------------------------------------------------------------------
1 | from .instance import VMInstance
2 | from .state import VMState
3 |
4 | class VM(VMInstance, VMState):
5 | def __init__(self, linux, port, image, hash_tag, arch='amd64', proj_path='/tmp/', mem="2G", cpu="2", key=None, gdb_port=None, mon_port=None, opts=None, log_name='vm.log', log_suffix="", timeout=None, debug=False, logger=None):
6 | VMInstance.__init__(self, proj_path=proj_path, log_name=log_name, log_suffix=log_suffix, logger=logger, hash_tag=hash_tag, debug=debug)
7 | self.setup(port=port, image=image, linux=linux, mem=mem, cpu=cpu, key=key, gdb_port=gdb_port, mon_port=mon_port, opts=opts, timeout=timeout)
8 | if gdb_port != None:
9 | VMState.__init__(self, linux, gdb_port, arch, proj_path=proj_path, log_suffix=log_suffix, debug=debug)
10 |
11 | def kill(self):
12 | self.kill_vm()
13 | if self.gdb != None:
14 | self.gdb.close()
15 | if self.mon != None:
16 | self.mon.close()
17 | if self.kernel != None and self.kernel.proj != None:
18 | del self.kernel.proj
--------------------------------------------------------------------------------
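`VM` above composes `VMInstance` and `VMState` without a cooperative `super()` chain: each base `__init__` is invoked explicitly, and `VMState` is only initialized when a gdb port is supplied. A minimal sketch of that pattern with hypothetical base classes:

```python
class Launcher:
    def __init__(self, image):
        self.image = image

class Debugger:
    def __init__(self, gdb_port):
        self.gdb_port = gdb_port

class Sandbox(Launcher, Debugger):
    # like VM: explicit, conditional base-class initialization instead of super()
    def __init__(self, image, gdb_port=None):
        Launcher.__init__(self, image)
        if gdb_port is not None:
            Debugger.__init__(self, gdb_port)

s = Sandbox('stretch.img', gdb_port=1234)
print(s.image, s.gdb_port)
```

The trade-off is the same as in `VM`: skipping `Debugger.__init__` leaves its attributes unset, so callers must check (as `VM.kill` does with `self.gdb` and `self.mon`) before using them.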
/syzscope/interface/vm/error.py:
--------------------------------------------------------------------------------
1 | class QemuIsDead(Exception):
2 | pass
3 | class AngrRefuseToLoadKernel(Exception):
4 | pass
5 | class KasanReportEntryNotFound(Exception):
6 | pass
--------------------------------------------------------------------------------
/syzscope/interface/vm/gdb.py:
--------------------------------------------------------------------------------
1 | import logging
2 | import time
3 | import pexpect
4 | import math, re
5 | import syzscope.interface.utilities as utilities
6 |
7 | from subprocess import Popen, PIPE, STDOUT, TimeoutExpired
8 | from pwn import *
9 | from .error import QemuIsDead
10 |
11 | class GDBHelper:
12 | def __init__(self, vmlinux, addr_bytes, log_path = None, debug=False, log_suffix=""):
13 | self._vmlinux = vmlinux
14 | self._prompt = "gdbbot"
15 | self.gdb_inst = None
16 | self.s_mem = 'g'
17 | self.s_group = 8
18 | self._log_suffix = log_suffix
19 | self._debug = debug
20 | if addr_bytes == 4:
21 | self.s_mem = 'w'
22 | self.s_group = 4
23 | #log.propagate = debug
24 | #context.log_level = 'error'
25 | self.gdb_inst = process(["gdb", self._vmlinux])
26 | self.logger = self._init_logger(log_path)
27 |
28 | def _init_logger(self, log_path):
29 | logger = logging.getLogger(__name__+"-{}".format(self._vmlinux))
30 | if len(logger.handlers) != 0:
31 | for each_handler in logger.handlers:
32 | logger.removeHandler(each_handler)
33 | handler = logging.FileHandler("{}/gdb.log{}".format(log_path, self._log_suffix))
34 | format = logging.Formatter('%(asctime)s %(message)s')
35 | handler.setFormatter(format)
36 | logger.setLevel(logging.INFO)
37 | logger.addHandler(handler)
38 | logger.propagate = False
39 | if self._debug:
40 | logger.setLevel(logging.DEBUG)
41 | return logger
42 |
43 | def is_pwndbg(self):
44 | raw = self.sendline('version')
45 | for line in raw.split('\n'):
46 | line = line.strip('\n')
47 | versions = line.split(':')
48 | if 'Pwndbg' in versions[0]:
49 | return True
50 | return False
51 |
52 | def connect(self, port):
53 | self.sendline('target remote :{}'.format(port), timeout=10)
54 |
55 | def set_breakpoint(self, addr):
56 | self.sendline('break *{}'.format(addr))
57 |
58 | def del_breakpoint(self, num=-1):
59 | if num == -1:
60 | self.sendline("d")
61 | else:
62 | self.sendline('d {}'.format(num))
63 |
64 | def resume(self):
65 | self._sendline('continue')
66 | #print("QEMU is running")
67 |
68 | def waitfor(self, pattern, timeout=5):
69 | try:
70 | text = self.gdb_inst.recvuntil(pattern, timeout=timeout)
71 | except EOFError:
72 | raise QemuIsDead
73 | self.logger.info(text.decode("utf-8"))
74 | if self._debug:
75 | print(text.decode("utf-8"))
76 | return text.decode("utf-8")
77 |
78 | def get_mem_content(self, addr, size):
79 | ret = []
80 | regx_mem_contect = r'0x[a-f0-9]+( <[A-Za-z0-9_.\+]+>)?:\W+(0x[a-f0-9]+)(\W+(0x[a-f0-9]+))?'
81 | group = math.ceil(size / self.s_group)
82 | cmd = 'x/{}{}x {}'.format(group, self.s_mem, hex(addr))
83 | raw = self.sendline(cmd)
84 | for line in raw.split('\n'):
85 | line = line.strip('\n')
86 | mem = utilities.regx_get(regx_mem_contect, line, 1)
87 | if mem == None:
88 | continue
89 | ret.append(int(mem, 16))
90 | mem = utilities.regx_get(regx_mem_contect, line, 3)
91 | if mem == None:
92 | continue
93 | ret.append(int(mem, 16))
94 | return ret
95 |
96 | def get_registers(self):
97 | ret = {}
98 | regx_regs = r'([0-9a-z]+)\W+(0x[0-9a-f]+)'
99 | cmd = 'info registers'
100 | raw = self.sendline(cmd)
101 | for line in raw.split('\n'):
102 | line = line.strip('\n')
103 | reg = utilities.regx_get(regx_regs, line, 0)
104 | val = utilities.regx_get(regx_regs, line, 1)
105 | if reg != None and val != None:
106 | ret[reg] = int(val, 16)
107 | return ret
108 |
109 | def get_register(self, reg):
110 | ret = None
111 | regx_regs = r'([0-9a-z]+)\W+(0x[0-9a-f]+)'
112 | cmd = 'info r {}'.format(reg)
113 | raw = self.sendline(cmd)
114 | for line in raw.split('\n'):
115 | line = line.strip('\n')
116 | val = utilities.regx_get(regx_regs, line, 1)
117 | if val != None:
118 | ret = int(val, 16)
119 | return ret
120 |
121 | def get_sections(self):
122 | ret = {}
123 | cmd = 'elfheader'
124 | regx_sections = r'(0x[0-9a-f]+) - (0x[0-9a-f]+) (.*)'
125 | raw = self.sendline(cmd)
126 | for line in raw.split('\n'):
127 | line = line.strip('\n')
128 | s = utilities.regx_get(regx_sections, line, 0)
129 | e = utilities.regx_get(regx_sections, line, 1)
130 | name = utilities.regx_get(regx_sections, line, 2)
131 | if s != None and e != None and name != None:
132 | ret[name] = {}
133 | ret[name]['start'] = int(s, 16)
134 | ret[name]['end'] = int(e, 16)
135 | return ret
136 |
137 | def get_stack_range(self):
138 | ret = []
139 | cmd = 'vmmap'
140 | regx_stack = r'(0x[0-9a-f]+) (0x[0-9a-f]+) .*\[stack\]'
141 | raw = self.sendline(cmd)
142 | for line in raw.split('\n'):
143 | line = line.strip('\n')
144 | s = utilities.regx_get(regx_stack, line, 0)
145 | e = utilities.regx_get(regx_stack, line, 1)
146 | if s != None and e != None:
147 | ret.append(s)
148 | ret.append(e)
149 | break
150 | return ret
151 |
152 | def get_backtrace(self, n=None):
153 | ret = []
154 | cmd = 'bt'
155 | regx_bt = r'#\d+( )+([A-Za-z0-9_.]+)'
156 | raw = self.sendline(cmd)
157 | for line in raw.split('\n'):
158 | line = line.strip('\n')
159 | func_name = utilities.regx_get(regx_bt, line, 1)
160 | if func_name != None:
161 | ret.append(func_name)
162 |                 if n != None and len(ret) >= n:
163 |                     break
164 | return ret
165 |
166 | def set_scheduler_mode(self, mode):
167 | cmd = 'set scheduler-locking {}'.format(mode)
168 | self.sendline(cmd)
169 |
170 | def finish_cur_func(self):
171 | cmd = 'finish'
172 | self.sendline(cmd)
173 |
174 | def print_code(self, addr, n_line):
175 | cmd = 'x/{}i {}'.format(n_line, addr)
176 | raw = self.sendline(cmd)
177 | return raw
178 |
179 | def get_func_name(self, addr):
180 | func_name_regx = r'0x[a-f0-9]+ <([a-zA-Z0-9_\.]+)(\+\d+)?>:'
181 | raw = self.print_code(addr, 1)
182 | ret = None
183 | for line in raw.split('\n'):
184 | line = line.strip('\n')
185 | line = self._escape_ansi(line)
186 | name = utilities.regx_get(func_name_regx, line, 0)
187 | if name != None:
188 | ret = name
189 | # we dont need refresh again since it was done in print_code()
190 | return ret
191 |
192 | def get_dbg_info(self, addr):
193 | cmd = 'b *{}'.format(addr)
194 | raw = self.sendline(cmd)
195 | dbg_info_regx = r'Breakpoint \d+ at 0x[a-f0-9]+: file (([A-Za-z0-9_\-.]+\/)+[A-Za-z0-9_.\-]+), line (\d+)'
196 | ret = []
197 | for line in raw.split('\n'):
198 | line = line.strip('\n')
199 | dbg_file = utilities.regx_get(dbg_info_regx, line, 0)
200 | dbg_line = utilities.regx_get(dbg_info_regx, line, 2)
201 | if dbg_file != None and dbg_line != None:
202 | ret.append(dbg_file)
203 | ret.append(dbg_line)
204 | return ret
205 |
206 | def refresh(self):
207 | self._sendline('echo')
208 |
209 | def sendline(self, cmd, timeout=5):
210 | #print("send", cmd)
211 | self._sendline(cmd)
212 | raw = self.waitfor("pwndbg>", timeout)
213 | return raw
214 |
215 | def recv(self):
216 | return self.gdb_inst.recv()
217 |
218 | def close(self):
219 | self.gdb_inst.kill()
220 |
221 | def _sendline(self, cmd):
222 | self.gdb_inst.sendline(cmd)
223 |
224 | def _escape_ansi(self, line):
225 | ansi_escape = re.compile(r'(?:\x1B[@-_]|[\x80-\x9F])[0-?]*[ -/]*[@-~]')
226 | return ansi_escape.sub('', line)
227 |
228 | def command(self, cmd):
229 | ret = list()
230 | try:
231 | init = [
232 | "gdb", self._vmlinux, "-ex",
233 | "set prompt %s" % self._prompt
234 | ]
235 | gdb = Popen(init, stdout=PIPE, stdin=PIPE, stderr=PIPE)
236 | outs, errs = gdb.communicate(cmd.encode(), timeout=20)
237 | start = False
238 | for line in outs.decode().split("\n"):
239 | # print(line)
240 | if line.startswith(self._prompt):
241 | start = True
242 | if self._prompt + "quit" in line:
243 | break
244 | if start:
245 | if line.startswith(self._prompt):
246 | line = line[len(self._prompt):]
247 | ret.append(line)
248 | gdb.kill()
249 | except TimeoutExpired:
250 |             gdb.kill()
251 | return ret
252 |
--------------------------------------------------------------------------------
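`GDBHelper` above scrapes pwndbg output with regular expressions after stripping ANSI color codes. The two pieces — the `_escape_ansi` pattern and the `info registers` pattern — can be exercised on a fabricated output line:

```python
import re

# same patterns GDBHelper uses; the sample line below is fabricated
ansi_escape = re.compile(r'(?:\x1B[@-_]|[\x80-\x9F])[0-?]*[ -/]*[@-~]')
regx_regs = r'([0-9a-z]+)\W+(0x[0-9a-f]+)'

raw = '\x1b[31mrax\x1b[0m            0x1234'
line = ansi_escape.sub('', raw)      # color codes removed
m = re.search(regx_regs, line)
reg, val = m.group(1), int(m.group(2), 16)
print(reg, val)                      # prints: rax 4660
```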
/syzscope/interface/vm/instance.py:
--------------------------------------------------------------------------------
1 | from inspect import formatannotation
2 | import threading
3 | import logging
4 | import time
5 | import os
6 | import syzscope.interface.utilities as utilities
7 |
8 | from subprocess import Popen, PIPE, STDOUT, call
9 |
10 | reboot_regx = r'reboot: machine restart'
11 | port_error_regx = r'Could not set up host forwarding rule'
12 |
13 | class VMInstance:
14 |
15 | def __init__(self, hash_tag, proj_path='/tmp/', log_name='vm.log', log_suffix="", logger=None, debug=False):
16 | self.proj_path = proj_path
17 | self.port = None
18 | self.image = None
19 | self.linux = None
20 | self.cmd_launch = None
21 | self.timeout = None
22 | self.case_logger = None
23 | self.debug = debug
24 | self.qemu_logger = None
25 | self.qemu_ready = False
26 | self.kill_qemu = False
27 | self.hash_tag = hash_tag
28 | self.log_name = log_name
29 | self.output = []
30 | self.def_opts = ["kasan_multi_shot=1", "earlyprintk=serial", "oops=panic", "nmi_watchdog=panic", "panic=1", \
31 | "ftrace_dump_on_oops=orig_cpu", "rodata=n", "vsyscall=native", "net.ifnames=0", \
32 | "biosdevname=0", "kvm-intel.nested=1", \
33 | "kvm-intel.unrestricted_guest=1", "kvm-intel.vmm_exclusive=1", \
34 | "kvm-intel.fasteoi=1", "kvm-intel.ept=1", "kvm-intel.flexpriority=1", \
35 | "kvm-intel.vpid=1", "kvm-intel.emulate_invalid_guest_state=1", \
36 | "kvm-intel.eptad=1", "kvm-intel.enable_shadow_vmcs=1", "kvm-intel.pml=1", \
37 | "kvm-intel.enable_apicv=1"]
38 | log_name += log_suffix
39 | self.qemu_logger = self.init_logger(os.path.join(proj_path, log_name))
40 | self.case_logger = self.qemu_logger
41 | if logger != None:
42 | self.case_logger = logger
43 | self._qemu = None
44 |
45 | def init_logger(self, log_path):
46 | handler = logging.FileHandler(log_path)
47 | format = logging.Formatter('%(message)s')
48 | handler.setFormatter(format)
49 | logger = logging.getLogger(log_path)
50 | for each_handler in logger.handlers:
51 | logger.removeHandler(each_handler)
52 | logger.addHandler(handler)
53 | logger.setLevel(logging.INFO)
54 | logger.propagate = False
55 | if self.debug:
56 | logger.setLevel(logging.DEBUG)
57 | return logger
58 |
59 | def setup(self, port, image, linux, mem="2G", cpu="2", key=None, gdb_port=None, mon_port=None, opts=None, timeout=None):
60 | cur_opts = ["root=/dev/sda", "console=ttyS0"]
61 | gdb_arg = ""
62 | self.port = port
63 | self.image = image
64 | self.linux = linux
65 | self.key = key
66 | self.timeout = timeout
67 | self.cmd_launch = ["qemu-system-x86_64", "-m", mem, "-smp", cpu]
68 | if gdb_port != None:
69 | self.cmd_launch.extend(["-gdb", "tcp::{}".format(gdb_port)])
70 | if mon_port != None:
71 | self.cmd_launch.extend(["-monitor", "tcp::{},server,nowait,nodelay".format(mon_port)])
72 | if self.port != None:
73 | self.cmd_launch.extend(["-net", "nic,model=e1000", "-net", "user,host=10.0.2.10,hostfwd=tcp::{}-:22".format(self.port)])
74 | self.cmd_launch.extend(["-display", "none", "-serial", "stdio", "-no-reboot", "-enable-kvm", "-cpu", "host,migratable=off",
75 | "-hda", "{}/stretch.img".format(self.image),
76 | "-snapshot", "-kernel", "{}/arch/x86_64/boot/bzImage".format(self.linux),
77 | "-append"])
78 | if opts == None:
79 | cur_opts.extend(self.def_opts)
80 | else:
81 | cur_opts.extend(opts)
82 | if type(cur_opts) == list:
83 | self.cmd_launch.append(" ".join(cur_opts))
84 | self.write_cmd_to_script(self.cmd_launch, "launch_vm.sh")
85 | return
86 |
87 | def run(self):
88 | p = Popen(self.cmd_launch, stdout=PIPE, stderr=STDOUT)
89 | self._qemu = p
90 | if self.timeout != None:
91 | x = threading.Thread(target=self.monitor_execution, name="{} qemu killer".format(self.hash_tag))
92 | x.start()
93 | x1 = threading.Thread(target=self.__log_qemu, args=(p.stdout,), name="{} qemu logger".format(self.hash_tag))
94 | x1.start()
95 |
96 | return p
97 |
98 | def kill_vm(self):
99 | self._qemu.kill()
100 |
101 | def write_cmd_to_script(self, cmd, name):
102 | path_name = os.path.join(self.proj_path, name)
103 | prefix = []
104 | with open(path_name, "w") as f:
105 | for i in range(0, len(cmd)):
106 | each = cmd[i]
107 | prefix.append(each)
108 | if each == '-append':
109 | f.write(" ".join(prefix))
110 | f.write(" \"")
111 | f.write(" ".join(cmd[i+1:]))
112 | f.write("\"")
113 | f.close()
114 |
115 | def upload(self, stuff: list):
116 | cmd = ["scp", "-F", "/dev/null", "-o", "UserKnownHostsFile=/dev/null", "-o", "BatchMode=yes",
117 |                "-o", "IdentitiesOnly=yes", "-o", "StrictHostKeyChecking=no", "-i", "{}".format(self.key),
118 |                "-P", "{}".format(self.port), " ".join(stuff), "root@localhost:/root"]
119 | Popen(cmd, stdout=PIPE, stderr=STDOUT)
120 |
121 | def command(self, cmds):
122 | cmd = ["ssh", "-p", str(self.port), "-F", "/dev/null", "-o", "UserKnownHostsFile=/dev/null",
123 | "-o", "BatchMode=yes", "-o", "IdentitiesOnly=yes", "-o", "StrictHostKeyChecking=no",
124 |                "-o", "ConnectTimeout=10", "-i", "{}".format(self.key),
125 |                "-v", "root@localhost", "{}".format(cmds)]
126 | p = Popen(cmd, stdout=PIPE, stderr=STDOUT)
127 |
128 | def monitor_execution(self):
129 | count = 0
130 |         # NOTE: the rest of monitor_execution and the body of __log_qemu were
131 |         # truncated in this dump; the lines below are a best-effort reconstruction
132 |         # (mirroring the analogous monitor_execution above) and may differ from upstream
133 |         while (count < self.timeout // 10):
134 |             if self.kill_qemu:
135 |                 self.case_logger.info('Signal kill qemu received')
136 |                 self._qemu.kill()
137 |                 return
138 |             count += 1
139 |             time.sleep(10)
140 |             poll = self._qemu.poll()
141 |             if poll != None:
142 |                 return
143 |         self.case_logger.info('Time out, kill qemu')
144 |         self._qemu.kill()
145 | 
146 |     def __log_qemu(self, pipe):
147 |         try:
148 |             for line in iter(pipe.readline, b''):
149 |                 try:
150 |                     line = line.decode("utf-8").strip('\n').strip('\r')
151 |                 except UnicodeDecodeError:
152 |                     continue
153 |                 if utilities.regx_match(reboot_regx, line) or utilities.regx_match(port_error_regx, line):
154 |                     self.case_logger.error("Booting qemu-{} failed".format(self.log_name))
155 |                 if utilities.regx_match(r'\w+ login:', line):
156 |                     self.qemu_ready = True  # assumed boot-complete marker
157 |                 self.qemu_logger.info(line)
158 |                 self.output.append(line)
159 |         except ValueError:
160 |             # reading a closed pipe: buf must not be NULL
161 |             pass
162 |         self.qemu_ready = False
163 |         return
164 | 
--------------------------------------------------------------------------------
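`write_cmd_to_script` above serializes the qemu command into `launch_vm.sh`, quoting everything after `-append` so the kernel boot parameters survive as a single shell argument. A standalone sketch of that quoting rule (the `render_launch_script` name and the command list are hypothetical):

```python
def render_launch_script(cmd):
    # everything up to and including -append is emitted verbatim;
    # the remaining words are joined and wrapped in double quotes
    i = cmd.index('-append')
    return ' '.join(cmd[:i + 1]) + ' "' + ' '.join(cmd[i + 1:]) + '"'

cmd = ['qemu-system-x86_64', '-m', '2G', '-append', 'root=/dev/sda console=ttyS0']
print(render_launch_script(cmd))
# prints: qemu-system-x86_64 -m 2G -append "root=/dev/sda console=ttyS0"
```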
/syzscope/interface/vm/kernel.py:
--------------------------------------------------------------------------------
1 | from syzscope.interface.vm.error import AngrRefuseToLoadKernel, KasanReportEntryNotFound
2 | import angr
3 | import json
4 | import re
5 | import struct
6 | import subprocess
7 |
8 | from capstone.x86_const import X86_OP_MEM, X86_OP_IMM
9 | from .gdb import GDBHelper
10 |
11 | clean = lambda x, k: x[x.index(k):]
12 | strip = lambda x, k: x[x.index(k) + len(k):]
13 | boolean = lambda x: "true" if x else "false"
14 |
15 |
16 | class KernelObject:
17 | def __init__(self, key, line):
18 | self._key = key
19 | self._item = json.loads(strip(line, key))
20 |
21 | @property
22 | def json(self):
23 | return self._item
24 |
25 | def getNum(self, num):
26 | if num > 65536: return hex(num)
27 | return str(num)
28 |
29 | def getList(self, l):
30 | ret = "["
31 | for i, each in enumerate(l):
32 | if i != 0:
33 | ret += ", "
34 | ret += self.getStr(each)
35 | ret += "]"
36 | return ret
37 |
38 | def getDict(self, d):
39 | ret = "{"
40 | index = 0
41 | for k, v in d.items():
42 | if index != 0:
43 | ret += ", "
44 | ret += ("%s: %s" % (k, self.getStr(v)))
45 | index += 1
46 | ret += "}"
47 | return ret
48 |
49 | def getStr(self, v):
50 | if isinstance(v, int):
51 | return self.getNum(v)
52 | elif isinstance(v, list):
53 | return self.getList(v)
54 | elif isinstance(v, dict):
55 | return self.getDict(v)
56 | else:
57 | return str(v)
58 |
59 | def save(self, path):
60 | with open(path, 'w') as f:
61 | json.dump(self._item, f)
62 |
63 | @staticmethod
64 | def load(path):
65 | with open(path) as f:
66 | return KernelObject('', f.readline())
67 |
68 | def __str__(self):
69 | return self._key + " " + self.getStr(self._item)
70 |
71 | def __getattr__(self, k):
72 | return self._item[k]
73 |
74 | def __contains__(self, key):
75 | return key in self._item
76 |
77 |
78 | class Kernel:
79 | Interrupt_Functions = ["update_fast_timekeeper", "apic_timer_interrupt"]
80 | FUNCNAME = 0
81 | ADDRESS = 1
82 |
83 | def __init__(self, vmlinux, addr_bytes, proj_path, log_suffix="", debug=False):
84 | try:
85 | self.proj = angr.Project(vmlinux,
86 | load_options={"auto_load_libs": False})
87 | except Exception as e:
88 | print(e)
89 | raise AngrRefuseToLoadKernel
90 | self.gdbhelper = GDBHelper(vmlinux, addr_bytes, proj_path, debug, log_suffix)
91 | # private
92 | self._kasan_report = 0
93 | self._kasan_ret = []
94 |
95 | def getStructOffset(self, struct_name, field_name):
96 | cmd = "p &((struct %s *)0)->%s" % (struct_name, field_name)
97 | ret = self.gdbhelper.commandstr(cmd)
98 | m = re.search('0x[0-9a-f]+', ret)
99 | if m:
100 | return int(m.group(0), 16)
101 | return 0
102 |
103 | def check_output(self, args):
104 | completed = subprocess.run(args, stdout=subprocess.PIPE)
105 | return completed.stdout
106 |
107 | def searchInstruction(self,
108 | start,
109 | end,
110 | instruction,
111 | funCall=None,
112 | exact=False):
113 | while start < end:
114 | block = self.getBlock(start)
115 | if len(block.capstone.insns) == 0:
116 | start += 2
117 | continue
118 | inst = block.capstone.insns[0]
119 | if exact:
120 | if inst.mnemonic == instruction.mnemonic and \
121 | inst.op_str == instruction.op_str:
122 | return start
123 | elif funCall is not None:
124 | if inst.mnemonic == 'call' and \
125 | funCall == self.getTarget(inst.operands[0], Kernel.FUNCNAME):
126 | return start
127 | else:
128 | if inst.mnemonic != instruction.mnemonic:
129 | start += inst.size
130 | continue
131 | if len(inst.operands) != len(instruction.operands):
132 | start += inst.size
133 | continue
134 | Found = True
135 | for i in range(len(inst.operands)):
136 | op, op2 = inst.operands[i], instruction.operands[i]
137 | if op.type != op2.type:
138 | Found = False
139 | break
140 | if op.size != op2.size:
141 | Found = False
142 | break
143 | if Found:
144 | return start
145 | start += inst.size
146 | return 0
147 |
148 | def getTarget(self, operand, addrOrname=ADDRESS):
149 | if operand.type != X86_OP_IMM:
150 | return 0
151 | target = operand.value.imm & 0xffffffffffffffff
152 | if addrOrname == Kernel.ADDRESS:
153 | return target
154 | else:
155 | sym = self.find_symbol(target)
156 | if sym:
157 | return sym.name
158 | return 0
159 |
160 | def getKasanReport(self):
161 | if self._kasan_ret != [] or self._kasan_report != 0:
162 | return self._kasan_report, self._kasan_ret
163 |
164 | report_enabled = self.find_symbol("report_enabled")
165 | if report_enabled != None:
166 | kasan_report = self.find_symbol("__kasan_report")
167 | if kasan_report == None:
168 | raise KasanReportEntryNotFound
169 | else:
170 | kasan_report = self.find_symbol("__kasan_report")
171 | if kasan_report is None:
172 | kasan_report = self.find_symbol("kasan_report")
173 | if kasan_report == None:
174 | return None, None
175 | start = kasan_report.rebased_addr
176 | end = start + kasan_report.size
177 | kasan_report = 0
178 | kasan_ret = []
179 | if report_enabled != None:
180 | self._kasan_report = start
181 | kasan_report = self.find_symbol("kasan_report")
182 | start = kasan_report.rebased_addr
183 | end = start + kasan_report.size
184 | kasan_report = 0
185 | while start < end:
186 | block = self.getBlock(start)
187 | if len(block.capstone.insns) == 0:
188 | start += 2
189 | continue
190 | inst = block.capstone.insns[0]
191 | # first check
192 | if kasan_report == 0 and report_enabled == None:
193 | if inst.mnemonic == "jne":
194 | kasan_report = start + inst.size
195 | #kasan_ret = self.getTarget(inst.operands[0])
196 | #break
197 | elif inst.mnemonic == "je":
198 | kasan_report = self.getTarget(inst.operands[0])
199 | #kasan_ret = start + inst.size
200 | #break
201 | if inst.mnemonic == "ret":
202 | kasan_ret.append(start)
203 | start += inst.size
204 |
205 | if self._kasan_report == 0:
206 | self._kasan_report = kasan_report
207 | self._kasan_ret = kasan_ret
208 | if self._kasan_report == 0 or self._kasan_ret == []:
209 | raise KasanReportEntryNotFound
210 | return self._kasan_report, self._kasan_ret
211 |
212 | def instVisitor(self, funcName, handler):
213 | sym = self.find_symbol(funcName)
214 | if sym is None:
215 | return
216 | start = sym.rebased_addr
217 | end = start + sym.size
218 | cur = start
219 | while cur < end:
220 | block = self.getBlock(cur)
221 | cur += block.size
222 | for insn in block.capstone.insns:
223 | ret = handler(insn)
224 | if ret:
225 | return
226 |
227 | def resolve_addr(self, addr):
228 | func = self.proj.loader.find_symbol(addr, fuzzy=True)
229 | if func:
230 | return "%s+%d" % (func.name, addr - func.rebased_addr)
231 | else:
232 | return hex(addr)
233 |
234 | def find_symbol(self, addr, fuzzy=True):
235 | return self.proj.loader.find_symbol(addr, fuzzy)
236 |
237 | def func_start(self, name):
238 | func = self.proj.loader.find_symbol(name)
239 | if func:
240 | return func.rebased_addr
241 | return 0
242 |
243 | def getFunctionCFG(self, func):
244 | symbol = self.find_symbol(func)
245 | if symbol is None:
246 | return None
247 | return self.proj.analyses.CFGEmulated(context_sensitivity_level=0,
248 | starts=[symbol.rebased_addr],
249 | call_depth=0,
250 | normalize=True)
251 |
252 | def getExitInsns(self, addr):
253 | exits = []
254 | sym = self.find_symbol(addr)
255 | if sym:
256 | cur = sym.rebased_addr
257 | while cur < sym.rebased_addr + sym.size:
258 | block = self.getBlock(cur)
259 | cur += block.size
260 | if block.size == 0:
261 | # BUG() in kernel
262 | print("Warning: empty block at 0x%x" % cur)
263 | cur += 2
264 | for insn in block.capstone.insns:
265 | if insn.mnemonic == 'ret':
266 | exits.append(insn)
267 | return exits
268 |
269 | def getBlock(self, addr):
270 | return self.proj.factory.block(addr)
271 |
272 | def backtrace(self,
273 | filepath,
274 | counter,
275 | ips,
276 | depth,
277 | avoid=Interrupt_Functions):
278 | index = 1
279 | calls = []
280 | counter -= 1
281 |
282 | def readaddr(f, counter):
283 | f.seek(counter * 8)
284 | num = f.read(8)
285 | if num == "" or not num:
286 | raise ValueError("Incomplete trace file")
287 |             value = struct.unpack("<Q", num)[0]
288 |             return value
289 |
290 |         def findCall(f, counter, target):
291 |             # walk the trace backwards until we hit a call/jmp
292 |             # whose immediate operand is `target`
293 |             while counter >= 0:
294 | value = readaddr(f, counter)
295 | block = self.getBlock(value)
296 | if len(block.capstone.insns) == 0:
297 | counter -= 1
298 | continue
299 | inst = block.capstone.insns[0]
300 | if inst.mnemonic in ['call', 'jmp']:
301 | if inst.operands[0].type != X86_OP_IMM:
302 | counter -= 1
303 | continue
304 | target_addr = inst.operands[
305 | 0].value.imm & 0xffffffffffffffff
306 | if target_addr == target:
307 | return counter, value + inst.size # counter, return address
308 | counter -= 1
309 | raise ValueError("Failed to find %x" % target)
310 |
311 | def findEntry(f, counter, target):
312 | while counter >= 0:
313 | value = readaddr(f, counter)
314 | if value == target:
315 | return counter
316 | counter -= 1
317 |             raise ValueError("Failed to find %x" % target)
318 |
319 | with open(filepath, "rb") as f:
320 | f.seek(counter * 8)
321 | num = f.read(8)
322 | if num != "" and num:
323 |                 value = struct.unpack("<Q", num)[0]
--------------------------------------------------------------------------------
/syzscope/interface/vm/monitor.py:
--------------------------------------------------------------------------------
103 |     def get_mem_content(self, addr, size):
104 |         ret = []
105 |         regx_mem_contect = r'0x[a-f0-9]+( <[^>]+>)?:\W+(0x[a-f0-9]+)(\W+(0x[a-f0-9]+))?'
106 | group = math.ceil(size / self.s_group)
107 | cmd = 'x/{}{}x {}'.format(group, self.s_mem, hex(addr))
108 | raw = self.sendline(cmd)
109 | for line in raw.split('\n'):
110 | line = line.strip('\n')
111 | mem = utilities.regx_get(regx_mem_contect, line, 1)
112 | if mem == None:
113 | continue
114 | ret.append(int(mem, 16))
115 | mem = utilities.regx_get(regx_mem_contect, line, 3)
116 | if mem == None:
117 | continue
118 | ret.append(int(mem, 16))
119 | return ret
120 |
121 |
122 | def choose_cpu(self, pc):
123 | ret = -1
124 | n = 0
125 | cmd = 'info cpus'
126 | cpu_regx = r'CPU #(\d+): pc=(0x[a-f0-9]+)'
127 | raw = self.sendline(cmd)
128 | for line in raw.split('\n')[1:]:
129 | line = line.strip('\n')
130 | cpu_index = utilities.regx_get(cpu_regx, line, 0)
131 | cpu_pc = utilities.regx_get(cpu_regx, line, 1)
132 | if cpu_index == None or cpu_pc == None:
133 | self.set_cpu(n)
134 | cpu_pc = self.get_register('rip')
135 | if self.s_group == 4:
136 | cpu_pc = self.get_register('eip')
137 | cpu_pc = hex(cpu_pc)
138 | cpu_index = n
139 | n += 1
140 | if pc == int(cpu_pc, 16):
141 | ret = int(cpu_index)
142 | break
143 | return ret
144 |
145 | def set_cpu(self, index):
146 | cmd = 'cpu {}'.format(index)
147 | self.sendline(cmd)
148 |
149 | def sendline(self, cmd):
150 | self._sendline(cmd)
151 | self.waitfor(cmd)
152 | raw = self.waitfor("(qemu)")
153 | return raw
154 |
155 | def waitfor(self, pattern):
156 | try:
157 | text = self.mon_inst.recvuntil(pattern)
158 | except EOFError:
159 | raise QemuIsDead
160 | self.logger.info(text.decode("utf-8"))
161 | if self._debug:
162 | print(text.decode("utf-8"))
163 | return text.decode("utf-8")
164 |
165 | def close(self):
166 | self.mon_inst.close()
167 |
168 | def _sendline(self, cmd):
169 | self.mon_inst.sendline(cmd)
--------------------------------------------------------------------------------
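The `choose_cpu` method above matches `CPU #n: pc=0x…` lines from QEMU's `info cpus` monitor command against `cpu_regx`. A minimal standalone sketch of that matching logic, using the same regex but plain `re` instead of SyzScope's `utilities.regx_get` helper (the sample monitor output below is illustrative; real `info cpus` formatting varies by QEMU version):

```python
import re

# Same pattern as cpu_regx in Monitor.choose_cpu
cpu_regx = r'CPU #(\d+): pc=(0x[a-f0-9]+)'

# Illustrative QEMU monitor output, not captured from a real session
raw = (
    "* CPU #0: pc=0xffffffff81000000 (halted) thread_id=1000\n"
    "  CPU #1: pc=0xffffffff812345aa thread_id=1001\n"
)

def choose_cpu(raw, pc):
    """Return the index of the CPU whose program counter equals pc, or -1."""
    for line in raw.split('\n'):
        m = re.search(cpu_regx, line)
        if m is None:
            continue
        if int(m.group(2), 16) == pc:
            return int(m.group(1))
    return -1

print(choose_cpu(raw, 0xffffffff812345aa))  # -> 1
```

As in the original, a CPU is selected purely by comparing its `pc` against the expected address, so the caller must know which instruction pointer the crashing vCPU stopped at.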
/syzscope/interface/vm/state.py:
--------------------------------------------------------------------------------
1 | import os
2 | import angr
3 |
4 | from pwn import *
5 | from .kernel import Kernel
6 | from .monitor import Monitor
7 |
8 | class VMState:
9 | ADDRESS = 1
10 | INITIAL = 0
11 | KERNEL_BASE = 0
12 |
13 | def __init__(self, linux, gdb_port, arch, log_suffix="", proj_path=None, debug=False):
14 | self.linux = os.path.join(linux, "vmlinux")
15 | self.gdb_port = gdb_port
16 | self.vm = None
17 | self._kasan_report = 0
18 | self._kasan_ret = 0
19 | self._proj_path = proj_path
20 | self.kernel = None
21 | self.addr_bytes = 8
22 | self.log_suffix = log_suffix
23 | self.gdb = None
24 | self.mon = None
25 | self.debug = debug
26 | self.addr_info = {}
27 | VMState.KERNEL_BASE = 0x7fffffffffffffff
28 | """if arch == 'i386':
29 | self.addr_bytes = 4
30 | VMState.KERNEL_BASE = 0x7fffffff"""
31 | self._sections = None
32 | self.stack_addr = [0,0]
33 | self.kasan_addr = [0,[]]
34 | VMState.INITIAL = 1
35 |
36 | def gdb_connect(self, port):
37 | if self.__check_initialization():
38 | return
39 | if self.debug:
40 | print("Loading kernel, this process may take a while")
41 | self.kernel = Kernel(self.linux, self.addr_bytes, self._proj_path, self.log_suffix, self.debug)
42 | self.gdb = self.kernel.gdbhelper
43 | self.waitfor_pwndbg(timeout=10)
44 | self.gdb.connect(port)
45 | #self.waitfor_pwndbg()
46 | #if not self.gdb.is_pwndbg():
47 | # return False
48 | return True
49 |
50 | def mon_connect(self, port):
51 | if self.__check_initialization():
52 | return
53 | self.mon = Monitor(port, self.addr_bytes, self._proj_path, self.log_suffix, self.debug)
54 | self.mon.connect()
55 |
56 | def set_checkpoint(self):
57 | if self.__check_initialization():
58 | return False
59 | kasan_report, kasan_ret = self.kernel.getKasanReport()
60 | if kasan_report == None:
61 | return False
62 | self.gdb.set_breakpoint(kasan_report)
63 | self.gdb.resume()
64 | self.kasan_addr[0] = kasan_report
65 | self.kasan_addr[1] = kasan_ret
66 | return True
67 |
68 | def lock_thread(self):
69 | if self.__check_initialization():
70 | return
71 | self.gdb.set_scheduler_mode('on')
72 |
73 | def unlock_thread(self):
74 | if self.__check_initialization():
75 | return
76 | self.gdb.set_scheduler_mode('off')
77 |
78 | def reach_target_site(self, addr):
79 | if self.__check_initialization():
80 | return
81 | self.gdb.set_breakpoint(addr)
82 | self.gdb.resume()
83 |
84 | def read_mem(self, addr, size):
85 | if self.__check_initialization():
86 | return
87 | mem = self.mon.get_mem_content(addr, size)
88 | if len(mem) == 1 and size < 8:
89 | val = mem[0]
90 | if size == 4 and self.addr_bytes == 8:
91 | val = val - (val >> 32 << 32)
92 | if size == 2:
93 | val = val - (val >> 16 << 16)
94 | if size == 1:
95 | val = val - (val >> 8 << 8)
96 | mem = [val]
97 |
98 | return mem
99 |
100 | def read_section(self, name=None):
101 | if self.__check_initialization():
102 | return
103 | if self._sections == None:
104 | self._sections = self.gdb.get_sections()
105 | if name in self._sections:
106 | return self._sections[name]
107 | return self._sections
108 |
109 | def read_stack_range(self):
110 | if self.__check_initialization():
111 | return
112 | if self.stack_addr[0] == 0 and self.stack_addr[1] == 0:
113 | ret = self.gdb.get_stack_range()
114 | if len(ret) == 2:
115 | self.stack_addr[0] = int(ret[0], 16)
116 | self.stack_addr[1] = int(ret[1], 16)
117 | return self.stack_addr[0], self.stack_addr[1]
118 | return 0, 0
119 |
120 | def back_to_kasan_ret(self):
121 | if self.__check_initialization():
122 | return
123 | if len(self.kasan_addr[1]) > 0:
124 | self.gdb.del_breakpoint()
125 | for each in self.kasan_addr[1]:
126 | self.gdb.set_breakpoint(each)
127 | self.gdb.resume()
128 |
129 | def back_to_caller(self):
130 | if self.__check_initialization():
131 | return
132 | self.gdb.finish_cur_func()
133 |
134 | def inspect_code(self, addr, n_line):
135 | if self.__check_initialization():
136 | return
137 | return self.gdb.print_code(addr, n_line)
138 |
139 | def read_backtrace(self, n):
140 | if self.__check_initialization():
141 | return
142 | bt = self.gdb.get_backtrace(n)
143 | return bt
144 |
145 | def back_to_vul_site(self):
146 | if self.__check_initialization():
147 | return
148 | kasan_entries = ["__kasan_check_read", "__kasan_check_write", \
149 | "__asan_store1", "__asan_store2", "__asan_store4", "__asan_store8", "__asan_store16", \
150 | "__asan_load1", "__asan_load2", "__asan_load4", "__asan_load8", "__asan_load16"]
151 | cmd = 'finish'
152 | exit_flag = False
153 | extra_check = False
154 | while True:
155 | self.gdb.sendline(cmd)
156 | bt = self.gdb.get_backtrace(1)
157 | if exit_flag:
158 | break
159 | if len(bt) > 0:
160 | if bt[0] == "check_memory_region":
161 | extra_check = True
162 | continue
163 | if bt[0] in kasan_entries:
164 | exit_flag = True
165 | continue
166 | if extra_check:
167 | break
168 |
169 | def is_on_stack(self, addr):
170 | if self.stack_addr[0] == 0 and self.stack_addr[1] == 0:
171 | print("Stack range is unclear")
172 | return False
173 | return addr >= self.stack_addr[0] and addr <= self.stack_addr[1]
174 |
175 | def read_regs(self):
176 | if self.__check_initialization():
177 | return
178 | regs = self.mon.get_registers()
179 | if 'eflags' not in regs:
180 | val = self.gdb.get_register('eflags')
181 | if val != None:
182 | regs['eflags'] = val
183 | return regs
184 |
185 | def prepare_context(self, pc):
186 | index = self.mon.choose_cpu(pc)
187 | if index == -1:
188 | return False
189 | self.mon.set_cpu(index)
190 | return True
191 |
192 | def read_reg(self, reg, timeout=5):
193 | if self.__check_initialization():
194 | return
195 | val = self.mon.get_register(reg)
196 | return val
197 |
198 | def get_func_name(self, addr):
199 | if self.__check_initialization():
200 | return
201 | func_name = None
202 | if addr not in self.addr_info or 'func' not in self.addr_info[addr]:
203 | func_name = self.gdb.get_func_name(addr)
204 | if addr not in self.addr_info:
205 | self.addr_info[addr] = {}
206 | self.addr_info[addr]['func'] = func_name
207 | else:
208 | func_name = self.addr_info[addr]['func']
209 | return func_name
210 |
211 | def get_dbg_info(self, addr):
212 | if self.__check_initialization():
213 | return
214 | file = None
215 | line = None
216 | if addr not in self.addr_info or 'dbg' not in self.addr_info[addr]:
217 | ret = self.gdb.get_dbg_info(addr)
218 | if len(ret) == 2:
219 | file, line = ret[0], ret[1]
220 | if addr not in self.addr_info:
221 | self.addr_info[addr] = {}
222 | self.addr_info[addr]['dbg'] = ret
223 | else:
224 | file = self.addr_info[addr]['dbg'][0]
225 | line = self.addr_info[addr]['dbg'][1]
226 | return file, line
227 |
228 | def waitfor_pwndbg(self, timeout=5):
229 | self.gdb.waitfor("pwndbg>", timeout)
230 |
231 | def __check_initialization(self):
232 | return not VMState.INITIAL
233 |
234 | def __check_pwndbg(self):
235 | self.gdb.sendline('version')
236 | self.gdb.recv()
--------------------------------------------------------------------------------
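`VMState.read_mem` above masks an 8-byte word down to the requested size with the expression `val - (val >> bits << bits)`, which keeps only the low `size` bytes. A small sketch of that arithmetic in isolation (the helper name `mask_to_size` is mine, not part of the codebase):

```python
def mask_to_size(val, size):
    # Same arithmetic as VMState.read_mem: shifting right then left
    # zeroes the low bits, so subtracting leaves only the low `size` bytes.
    bits = size * 8
    return val - (val >> bits << bits)

v = 0x1122334455667788
print(hex(mask_to_size(v, 4)))  # -> 0x55667788
print(hex(mask_to_size(v, 2)))  # -> 0x7788
print(hex(mask_to_size(v, 1)))  # -> 0x88

# Equivalent to masking with (1 << bits) - 1
assert mask_to_size(v, 4) == v & 0xffffffff
```

This is why `read_mem` only performs the correction when the monitor returned a single 8-byte word but fewer bytes were requested: the high bytes of the word belong to neighboring memory.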
/syzscope/modules/__init__.py:
--------------------------------------------------------------------------------
1 | from .syzbotCrawler import Crawler
2 | from .crash import CrashChecker
3 | from .deploy import Deployer
4 |
--------------------------------------------------------------------------------
/syzscope/modules/deploy/__init__.py:
--------------------------------------------------------------------------------
1 | from .deploy import Deployer
--------------------------------------------------------------------------------
/syzscope/modules/deploy/case.py:
--------------------------------------------------------------------------------
1 | import datetime
2 | import logging
3 | import os, stat, sys
4 |
5 | stamp_finish_fuzzing = "FINISH_FUZZING"
6 | stamp_build_syzkaller = "BUILD_SYZKALLER"
7 | stamp_build_kernel = "BUILD_KERNEL"
8 | stamp_reproduce_ori_poc = "REPRO_ORI_POC"
9 | stamp_symbolic_execution = "FINISH_SYM_EXEC"
10 | stamp_static_analysis = "FINISH_STATIC_ANALYSIS"
11 |
12 | max_qemu_for_one_case = 4
13 |
14 | class Case:
15 | def __init__(self, index, parallel_max, debug=False, force=False, port=53777, replay='incomplete', linux_index=-1, time=8, kernel_fuzzing=False, reproduce=False, alert=[], static_analysis=False, symbolic_execution=False, gdb_port=1235, qemu_monitor_port=9700, max_compiling_kernel=-1):
16 | self.linux_folder = "linux"
17 | self.project_path = ""
18 | self.package_path = None
19 | self.syzkaller_path = ""
20 | self.image_path = ""
21 | self.current_case_path = ""
22 | self.kernel_path = ""
23 | self.index = index
24 | self.case_logger = None
25 | self.logger = None
26 | self.case_info_logger = None
27 | self.store_read = True
28 | self.force = force
29 | self.time_limit = time
30 | self.crash_checker = None
31 | self.image_switching_date = datetime.datetime(2020, 3, 15)
32 | self.arch = None
33 | self.compiler = None
34 | self.kernel_fuzzing = kernel_fuzzing
35 | self.reproduce_ori_bug = reproduce
36 | self.alert = alert
37 | self.static_analysis = static_analysis
38 | self.symbolic_execution = symbolic_execution
39 | self.parallel_max = parallel_max
40 | self.max_compiling_kernel = max_compiling_kernel
41 | if max_compiling_kernel == -1:
42 | self.max_compiling_kernel = parallel_max
43 | self.max_qemu_for_one_case = max_qemu_for_one_case
44 | self.sa = None
45 | if replay == None:
46 | self.replay = False
47 | self.catalog = 'incomplete'
48 | else:
49 | self.replay = True
50 | self.catalog = replay
51 | self.ssh_port = port + max_qemu_for_one_case*index
52 | self.gdb_port = gdb_port + max_qemu_for_one_case*index
53 | self.qemu_monitor_port = qemu_monitor_port + max_qemu_for_one_case*index
54 | if linux_index != -1:
55 | self.index = linux_index
56 | self.debug = debug
57 | self.hash_val = None
58 | self.init_logger(debug)
59 |
60 | def init_logger(self, debug, hash_val=None):
61 | self.logger = logging.getLogger(__name__+str(self.index))
62 | for each in self.logger.handlers:
63 | self.logger.removeHandler(each)
64 | handler = logging.StreamHandler(sys.stdout)
65 | if hash_val != None:
66 | format = logging.Formatter('%(asctime)s Thread {}: {} %(message)s'.format(self.index, hash_val))
67 | else:
68 | format = logging.Formatter('%(asctime)s Thread {}: %(message)s'.format(self.index))
69 | handler.setFormatter(format)
70 | self.logger.addHandler(handler)
71 | if debug:
72 | self.logger.setLevel(logging.DEBUG)
73 | self.logger.propagate = True
74 | else:
75 | self.logger.setLevel(logging.INFO)
76 | self.logger.propagate = False
77 |
78 | def setup_hash(self, hash_val):
79 | self.hash_val = hash_val
80 | self.init_logger(self.debug, self.hash_val[:7])
81 |
--------------------------------------------------------------------------------
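`Case.__init__` above derives per-case port numbers by reserving a block of `max_qemu_for_one_case` (4) consecutive ports per service for each case index, so parallel cases never collide. A sketch of that arithmetic (the function name `case_ports` is mine; the base ports are the defaults from `Case.__init__`):

```python
max_qemu_for_one_case = 4  # same constant as in case.py

def case_ports(index, ssh_base=53777, gdb_base=1235, mon_base=9700):
    """Mirror the port arithmetic in Case.__init__: each case index
    reserves a block of 4 consecutive ports per service."""
    off = max_qemu_for_one_case * index
    return ssh_base + off, gdb_base + off, mon_base + off

print(case_ports(0))  # -> (53777, 1235, 9700)
print(case_ports(2))  # -> (53785, 1243, 9708)
```

With this scheme, up to 4 QEMU instances for case 0 can use SSH ports 53777-53780 while case 1 starts at 53781, and likewise for the gdb and monitor ports.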
/syzscope/modules/syzbotCrawler.py:
--------------------------------------------------------------------------------
1 | import requests
2 | import logging
3 | import os
4 | import re
5 |
6 | from syzscope.interface.utilities import request_get, extract_vul_obj_offset_and_size, regx_get
7 | from bs4 import BeautifulSoup
8 | from bs4 import element
9 |
10 | syzbot_bug_base_url = "bug?id="
11 | syzbot_host_url = "https://syzkaller.appspot.com/"
12 | num_of_elements = 8
13 |
14 | class Crawler:
15 | def __init__(self,
16 | url="https://syzkaller.appspot.com/upstream/fixed",
17 | keyword=[''], max_retrieve=10, deduplicate=[''], ignore_batch=[], filter_by_reported=-1,
18 | filter_by_closed=-1, include_high_risk=False, debug=False):
19 | self.url = url
20 | if type(keyword) == list:
21 | self.keyword = keyword
22 | else:
23 | print("keyword must be a list")
24 | if type(deduplicate) == list:
25 | self.deduplicate = deduplicate
26 | else:
27 | print("deduplication keyword must be a list")
28 | self.ignore_batch = ignore_batch
29 | self.max_retrieve = max_retrieve
30 | self.cases = {}
31 | self.patches = {}
32 | self.logger = None
33 | self.logger2file = None
34 | self.include_high_risk = include_high_risk
35 | self.init_logger(debug)
36 | self.filter_by_reported = filter_by_reported
37 | self.filter_by_closed = filter_by_closed
38 |
39 | def init_logger(self, debug):
40 | handler = logging.FileHandler("{}/info".format(os.getcwd()))
41 | format = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
42 | handler.setFormatter(format)
43 | self.logger = logging.getLogger(__name__)
44 | self.logger2file = logging.getLogger("log2file")
45 | if debug:
46 | self.logger.setLevel(logging.DEBUG)
47 | self.logger.propagate = True
48 | self.logger2file.setLevel(logging.DEBUG)
49 | self.logger2file.propagate = True
50 | else:
51 | self.logger.setLevel(logging.INFO)
52 | self.logger.propagate = False
53 | self.logger2file.setLevel(logging.INFO)
54 | self.logger2file.propagate = False
55 | self.logger2file.addHandler(handler)
56 |
57 | def run(self):
58 | if len(self.ignore_batch) > 0:
59 | for hash_val in self.ignore_batch:
60 | patch_url = self.get_patch_of_case(hash_val)
61 | if patch_url == None:
62 | continue
63 | commit = regx_get(r"https:\/\/git\.kernel\.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux\.git\/commit\/\?id=(\w+)", patch_url, 0)
64 | if commit in self.patches:
65 | continue
66 | self.patches[commit] = True
67 | print("Ignore {} patches".format(len(self.patches)))
68 | cases_hash, high_risk_impacts = self.gather_cases()
69 | for each in cases_hash:
70 | if 'Patch' in each:
71 | patch_url = each['Patch']
72 | commit = regx_get(r"https:\/\/git\.kernel\.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux\.git\/commit\/\?id=(\w+)", patch_url, 0)
73 | if commit in self.patches or \
74 | (commit in high_risk_impacts and not self.include_high_risk):
75 | continue
76 | self.patches[commit] = True
77 | if self.retreive_case(each['Hash']) != -1:
78 | self.cases[each['Hash']]['title'] = each['Title']
79 | if 'Patch' in each:
80 | self.cases[each['Hash']]['patch'] = each['Patch']
81 | return
82 |
83 | def run_one_case(self, hash):
84 |         self.logger.info("retrieve one case: %s", hash)
85 | if self.retreive_case(hash) == -1:
86 | return
87 | self.cases[hash]['title'] = self.get_title_of_case(hash)
88 | patch = self.get_patch_of_case(hash)
89 | if patch != None:
90 | self.cases[hash]['patch'] = patch
91 |
92 | def get_title_of_case(self, hash=None, text=None):
93 | if hash==None and text==None:
94 | self.logger.info("No case given")
95 | return None
96 | if hash!=None:
97 | url = syzbot_host_url + syzbot_bug_base_url + hash
98 | req = requests.request(method='GET', url=url)
99 | soup = BeautifulSoup(req.text, "html.parser")
100 | else:
101 | soup = BeautifulSoup(text, "html.parser")
102 | title = soup.body.b.contents[0]
103 | return title
104 |
105 | def get_patch_of_case(self, hash):
106 | patch = None
107 | url = syzbot_host_url + syzbot_bug_base_url + hash
108 | req = requests.request(method='GET', url=url)
109 | soup = BeautifulSoup(req.text, "html.parser")
110 | mono = soup.find("span", {"class": "mono"})
111 | if mono == None:
112 | return patch
113 | try:
114 | patch = mono.contents[1].attrs['href']
115 | except:
116 | pass
117 | return patch
118 |
119 |
120 | def retreive_case(self, hash):
121 | self.cases[hash] = {}
122 | detail = self.request_detail(hash)
123 | if len(detail) < num_of_elements:
124 | self.logger.error("Failed to get detail of a case {}{}{}".format(syzbot_host_url, syzbot_bug_base_url, hash))
125 | self.cases.pop(hash)
126 | return -1
127 | self.cases[hash]["commit"] = detail[0]
128 | self.cases[hash]["syzkaller"] = detail[1]
129 | self.cases[hash]["config"] = detail[2]
130 | self.cases[hash]["syz_repro"] = detail[3]
131 | self.cases[hash]["log"] = detail[4]
132 | self.cases[hash]["c_repro"] = detail[5]
133 | self.cases[hash]["time"] = detail[6]
134 | self.cases[hash]["manager"] = detail[7]
135 | self.cases[hash]["report"] = detail[8]
136 | self.cases[hash]["vul_offset"] = detail[9]
137 | self.cases[hash]["obj_size"] = detail[10]
138 |
139 | def gather_cases(self):
140 | high_risk_impacts = {}
141 | res = []
142 | tables = self.__get_table(self.url)
143 | if tables == []:
144 |             self.logger.error("error occurred in gather_cases")
145 | return res, high_risk_impacts
146 | count = 0
147 | for table in tables:
148 | #self.logger.info("table caption {}".format(table.caption.text))
149 | for case in table.tbody.contents:
150 | if type(case) == element.Tag:
151 | title = case.find('td', {"class": "title"})
152 | if title == None:
153 | continue
154 | for keyword in self.deduplicate:
155 | if keyword in title.text:
156 | try:
157 | commit = regx_get(r"https:\/\/git\.kernel\.org\/pub\/scm\/linux\/kernel\/git\/torvalds\/linux\.git\/commit\/\?id=(\w+)", patch_url, 0)
158 | if commit in self.patches or \
159 | (commit in high_risk_impacts and not self.include_high_risk):
160 | continue
161 | self.patches[commit] = True
162 | except:
163 | pass
164 | for keyword in self.keyword:
165 | if 'out-of-bounds write' in title.text or \
166 | 'use-after-free write' in title.text:
167 | commit_list = case.find('td', {"class": "commit_list"})
168 | try:
169 | patch_url = commit_list.contents[1].contents[1].attrs['href']
170 | high_risk_impacts[patch_url] = True
171 | except:
172 | pass
173 | if keyword in title.text or keyword=='':
174 | crash = {}
175 | commit_list = case.find('td', {"class": "commit_list"})
176 | crash['Title'] = title.text
177 | stats = case.find_all('td', {"class": "stat"})
178 | crash['Repro'] = stats[0].text
179 | crash['Bisected'] = stats[1].text
180 | crash['Count'] = stats[2].text
181 | crash['Last'] = stats[3].text
182 | try:
183 | crash['Reported'] = stats[4].text
184 | if self.filter_by_reported > -1 and int(crash['Reported'][:-1]) > self.filter_by_reported:
185 | continue
186 | patch_url = commit_list.contents[1].contents[1].attrs['href']
187 | crash['Patch'] = patch_url
188 | crash['Closed'] = stats[4].text
189 | if self.filter_by_closed > -1 and int(crash['Closed'][:-1]) > self.filter_by_closed:
190 | continue
191 | except:
192 | # patch only works on fixed cases
193 | pass
194 | self.logger.debug("[{}] Find a suitable case: {}".format(count, title.text))
195 | href = title.next.attrs['href']
196 | hash_val = href[8:]
197 | self.logger.debug("[{}] Fetch {}".format(count, hash_val))
198 | crash['Hash'] = hash_val
199 | res.append(crash)
200 | count += 1
201 | break
202 | if count == self.max_retrieve:
203 | break
204 | return res, high_risk_impacts
205 |
206 | def request_detail(self, hash, index=1):
207 | self.logger.debug("\nDetail: {}{}{}".format(syzbot_host_url, syzbot_bug_base_url, hash))
208 | url = syzbot_host_url + syzbot_bug_base_url + hash
209 | tables = self.__get_table(url)
210 | if tables == []:
211 |             print("error occurred in request_detail: {}".format(hash))
212 | self.logger2file.info("[Failed] {} error occur in request_detail".format(url))
213 | return []
214 | count = 0
215 | for table in tables:
216 | if table.caption.text.find('Crash') != -1:
217 | for case in table.tbody.contents:
218 | if type(case) == element.Tag:
219 | kernel = case.find('td', {"class": "kernel"})
220 | if kernel.text != "upstream":
221 | self.logger.debug("skip kernel: '{}'".format(kernel.text))
222 | continue
223 | count += 1
224 | if count < index:
225 | continue
226 | try:
227 | manager = case.find('td', {"class": "manager"})
228 | manager_str = manager.text
229 | time = case.find('td', {"class": "time"})
230 | time_str = time.text
231 | tags = case.find_all('td', {"class": "tag"})
232 | m = re.search(r'id=([0-9a-z]*)', tags[0].next.attrs['href'])
233 | commit = m.groups()[0]
234 | self.logger.debug("Kernel commit: {}".format(commit))
235 | m = re.search(r'commits\/([0-9a-z]*)', tags[1].next.attrs['href'])
236 | syzkaller = m.groups()[0]
237 | self.logger.debug("Syzkaller commit: {}".format(syzkaller))
238 | config = syzbot_host_url + case.find('td', {"class": "config"}).next.attrs['href']
239 | self.logger.debug("Config URL: {}".format(config))
240 | repros = case.find_all('td', {"class": "repro"})
241 | log = syzbot_host_url + repros[0].next.attrs['href']
242 | self.logger.debug("Log URL: {}".format(log))
243 | report = syzbot_host_url + repros[1].next.attrs['href']
244 |                                 self.logger.debug("Report URL: {}".format(report))
245 | r = request_get(report)
246 | report_list = r.text.split('\n')
247 | offset, size = extract_vul_obj_offset_and_size(report_list)
248 | try:
249 | syz_repro = syzbot_host_url + repros[2].next.attrs['href']
250 | self.logger.debug("Testcase URL: {}".format(syz_repro))
251 | except:
252 | self.logger.info(
253 | "Repro is missing. Failed to retrieve case {}{}{}".format(syzbot_host_url, syzbot_bug_base_url, hash))
254 | self.logger2file.info("[Failed] {} Repro is missing".format(url))
255 | break
256 | try:
257 | c_repro = syzbot_host_url + repros[3].next.attrs['href']
258 | self.logger.debug("C prog URL: {}".format(c_repro))
259 | except:
260 | c_repro = None
261 | self.logger.info("No c prog found")
262 | except:
263 | self.logger.info("Failed to retrieve case {}{}{}".format(syzbot_host_url, syzbot_bug_base_url, hash))
264 | continue
265 | return [commit, syzkaller, config, syz_repro, log, c_repro, time_str, manager_str, report, offset, size]
266 | break
267 |         self.logger2file.info("[Failed] {} failed to find a proper crash".format(url))
268 | return []
269 |
270 | def __get_table(self, url):
271 | self.logger.info("Get table from {}".format(url))
272 | req = requests.request(method='GET', url=url)
273 | soup = BeautifulSoup(req.text, "html.parser")
274 | tables = soup.find_all('table', {"class": "list_table"})
275 | if len(tables) == 0:
276 |             print("Failed to retrieve bug cases from list_table")
277 | return []
278 | return tables
279 |
280 | if __name__ == '__main__':
281 | pass
--------------------------------------------------------------------------------
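The crawler above deduplicates bugs by extracting the upstream commit hash from each patch URL with `regx_get`. A standalone sketch of that extraction using the same pattern with plain `re` (the commit id below is an illustrative placeholder, not a real kernel commit):

```python
import re

# Same pattern the Crawler uses to pull the commit id out of a patch link
commit_regx = (r"https:\/\/git\.kernel\.org\/pub\/scm\/linux\/kernel\/git"
               r"\/torvalds\/linux\.git\/commit\/\?id=(\w+)")

# Illustrative patch URL; the id here is a placeholder
url = ("https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/"
       "linux.git/commit/?id=1234abcd")

m = re.search(commit_regx, url)
print(m.group(1))  # -> 1234abcd
```

Keying the `patches` dictionary on this hash (rather than the full URL) is what lets `run()` skip every syzbot report fixed by a commit that was already processed.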
/syzscope/patches/760f8.patch:
--------------------------------------------------------------------------------
1 | diff --git a/scripts/selinux/genheaders/genheaders.c b/scripts/selinux/genheaders/genheaders.c
2 | index fa48fabcb3304..3cc4893d98cc5 100644
3 | --- a/scripts/selinux/genheaders/genheaders.c
4 | +++ b/scripts/selinux/genheaders/genheaders.c
5 | @@ -9,7 +9,6 @@
6 | #include
7 | #include
8 | #include
9 | -#include
10 |
11 | struct security_class_mapping {
12 | const char *name;
13 | diff --git a/scripts/selinux/mdp/mdp.c b/scripts/selinux/mdp/mdp.c
14 | index ffe8179f5d41b..c29fa4a6228d6 100644
15 | --- a/scripts/selinux/mdp/mdp.c
16 | +++ b/scripts/selinux/mdp/mdp.c
17 | @@ -32,7 +32,6 @@
18 | #include
19 | #include
20 | #include
21 | -#include
22 |
23 | static void usage(char *name)
24 | {
25 | diff --git a/security/selinux/include/classmap.h b/security/selinux/include/classmap.h
26 | index cc35695d97b4a..45ef6a0c17cc7 100644
27 | --- a/security/selinux/include/classmap.h
28 | +++ b/security/selinux/include/classmap.h
29 | @@ -1,5 +1,6 @@
30 | /* SPDX-License-Identifier: GPL-2.0 */
31 | #include
32 | +#include
33 |
34 | #define COMMON_FILE_SOCK_PERMS "ioctl", "read", "write", "create", \
35 | "getattr", "setattr", "lock", "relabelfrom", "relabelto", "append", "map"
--------------------------------------------------------------------------------
/syzscope/patches/kasan.patch:
--------------------------------------------------------------------------------
1 | diff --git a/mm/kasan/report.c b/mm/kasan/report.c
2 | index 03a443579386..fa777489668b 100644
3 | --- a/mm/kasan/report.c
4 | +++ b/mm/kasan/report.c
5 | @@ -292,6 +292,12 @@ void __kasan_report(unsigned long addr, size_t size, bool is_write, unsigned lon
6 | if (likely(!report_enabled()))
7 | return;
8 |
9 | + if (!is_write) {
10 | + clear_bit(KASAN_BIT_REPORTED, &kasan_flags);
11 | + printk(KERN_WARNING "?!?MAGIC?!?read->%llx size->%d", addr, size);
12 | + return;
13 | + }
14 | +
15 | disable_trace_on_warning();
16 |
17 | tagged_addr = (void *)addr;
18 |
--------------------------------------------------------------------------------
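The kasan.patch above suppresses full KASAN reports for reads and instead emits a single `?!?MAGIC?!?` marker line carrying the faulting address (`%llx`, lowercase hex without a `0x` prefix) and access size. A consumer watching the kernel log can recover both fields like this (a minimal sketch; the sample log line is illustrative and the regex is mine, derived from the printk format string in the patch):

```python
import re

# Matches the marker emitted by the patched __kasan_report for reads
magic_regx = r'\?!\?MAGIC\?!\?read->([0-9a-f]+) size->(\d+)'

line = "?!?MAGIC?!?read->ffff8880a8d3c180 size->8"  # illustrative log line
m = re.search(magic_regx, line)
addr, size = int(m.group(1), 16), int(m.group(2))
print(hex(addr), size)  # -> 0xffff8880a8d3c180 8
```

Clearing `KASAN_BIT_REPORTED` in the patch means subsequent accesses still get flagged, so every read can be harvested this way instead of only the first report per boot.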
/syzscope/patches/pwndbg.patch:
--------------------------------------------------------------------------------
1 | diff --git a/pwndbg/vmmap.py b/pwndbg/vmmap.py
2 | index be89b47..b224ecb 100644
3 | --- a/pwndbg/vmmap.py
4 | +++ b/pwndbg/vmmap.py
5 | @@ -48,7 +48,8 @@ def get():
6 | pages.extend(proc_pid_maps())
7 |
8 | if not pages and pwndbg.arch.current in ('i386', 'x86-64') and pwndbg.qemu.is_qemu():
9 | - pages.extend(monitor_info_mem())
10 | + #pages.extend(monitor_info_mem())
11 | + pass
12 |
13 | if not pages:
14 | # If debugee is launched from a symlink the debugee memory maps will be
15 |
16 |
--------------------------------------------------------------------------------
/syzscope/resources/kasan_related_funcs:
--------------------------------------------------------------------------------
1 | kasan_enable_current
2 | kasan_disable_current
3 | kasan_poison_shadow
4 | kasan_unpoison_shadow
5 | __kasan_unpoison_stack
6 | kasan_unpoison_task_stack
7 | kasan_unpoison_task_stack_below
8 | kasan_unpoison_stack_above_sp_to
9 | memory_is_poisoned_1
10 | memory_is_poisoned_2_4_8
11 | memory_is_poisoned_16
12 | memory_is_poisoned_n
13 | memory_is_poisoned
14 | check_memory_region_inline
15 | check_memory_region
16 | kasan_alloc_pages
17 | kasan_free_pages
18 | optimal_redzone
19 | kasan_cache_create
20 | kasan_cache_shrink
21 | kasan_cache_shutdown
22 | kasan_metadata_size
23 | kasan_poison_slab
24 | kasan_unpoison_object_data
25 | kasan_poison_object_data
26 | in_irqentry_text
27 | filter_irq_stacks
28 | save_stack
29 | set_track
30 | get_alloc_info
31 | get_free_info
32 | kasan_init_slab_obj
33 | kasan_slab_alloc
34 | kasan_poison_slab_free
35 | kasan_slab_free
36 | kasan_kmalloc
37 | kasan_kmalloc_large
38 | kasan_krealloc
39 | kasan_poison_kfree
40 | kasan_kfree_large
41 | kasan_module_alloc
42 | register_global
43 | __asan_register_globals
44 | __asan_unregister_globals
45 | __asan_loadN
46 | __asan_loadN_noabort
47 | __asan_storeN
48 | __asan_storeN_noabort
49 | __asan_handle_no_return
50 | __asan_poison_stack_memory
51 | __asan_unpoison_stack_memory
52 | kasan_mem_notifier
53 | kasan_memhotplug_init
54 | drain_freelist
55 | free_block
56 | slabs_destroy
57 | enable_cpucache
58 | cache_reap
59 | fixup_objfreelist_debug
60 | fixup_slab_list
61 | kmem_cache_node_init
62 | obj_offset
63 | is_store_user_clean
64 | set_store_user_clean
65 | set_store_user_dirty
66 | set_store_user_dirty
67 | index_to_obj
68 | cache_estimate
69 | __slab_error
70 | noaliencache_setup
71 | slab_max_order_setup
72 | init_reap_node
73 | next_reap_node
74 | start_cpu_timer
75 | init_arraycache
76 | alloc_arraycache
77 | cache_free_pfmemalloc
78 | transfer_objects
79 | free_alien_cache
80 | cache_free_alien
81 | alternate_node_alloc
82 | ____cache_alloc_node
83 | gfp_exact_node
84 | __alloc_alien_cache
85 | free_alien_cache
86 | __drain_alien_cache
87 | reap_alien
88 | drain_alien_cache
89 | __cache_free_alien
90 | cache_free_alien
91 | gfp_exact_node
92 | init_cache_node
93 | init_cache_node_node
94 | setup_kmem_cache_node
95 | cpuup_canceled
96 | cpuup_prepare
97 | slab_prepare_cpu
98 | slab_dead_cpu
99 | slab_online_cpu
100 | slab_offline_cpu
101 | drain_cache_node_node
102 | slab_memory_callback
103 | init_list
104 | set_up_node
105 | kmem_cache_init
106 | kmem_cache_init_late
107 | cpucache_init
108 | kmem_getpages
109 | kmem_freepages
110 | kmem_rcu_free
111 | is_debug_pagealloc_cache
112 | store_stackinfo
113 | slab_kernel_map
114 | slab_kernel_map
115 | poison_obj
116 | dump_line
117 | print_objinfo
118 | check_poison_obj
119 | slab_destroy_debugcheck
120 | slab_destroy_debugcheck
121 | slab_destroy
122 | slabs_destroy
123 | calculate_slab_order
124 | setup_cpu_cache
125 | kmem_cache_flags
126 | set_objfreelist_slab_cache
127 | set_off_slab_cache
128 | set_on_slab_cache
129 | __kmem_cache_create
130 | check_irq_off
131 | check_irq_on
132 | check_mutex_acquired
133 | check_spinlock_acquired
134 | check_spinlock_acquired_node
135 | drain_array_locked
136 | do_drain
137 | drain_cpu_caches
138 | drain_freelist
139 | __kmem_cache_shrink
140 | __kmemcg_cache_deactivate
141 | __kmem_cache_shutdown
142 | __kmem_cache_release
143 | alloc_slabmgmt
144 | get_free_obj
145 | set_free_obj
146 | cache_init_objs_debug
147 | swap_free_obj
148 | shuffle_freelist
149 | shuffle_freelist
150 | cache_init_objs
151 | slab_get_obj
152 | slab_put_obj
153 | slab_map_pages
154 | cache_grow_begin
155 | cache_grow_end
156 | kfree_debugcheck
157 | verify_redzone_free
158 | cache_free_debugcheck
159 | fixup_objfreelist_debug
160 | fixup_slab_list
161 | get_first_slab
162 | cache_alloc_pfmemalloc
163 | alloc_block
164 | cache_alloc_refill
165 | cache_alloc_debugcheck_before
166 | cache_alloc_debugcheck_after
167 | ____cache_alloc
168 | alternate_node_alloc
169 | fallback_alloc
170 | ____cache_alloc_node
171 | free_block
172 | cache_flusharray
173 | __cache_free
174 | ___cache_free
175 | kmem_cache_alloc
176 | kmem_cache_alloc_bulk
177 | kmem_cache_alloc_node
178 | kmem_cache_alloc_node_trace
179 | __kmalloc_node
180 | __kmalloc_node_track_caller
181 | __do_kmalloc
182 | __kmalloc
183 | __kmalloc_track_caller
184 | kmem_cache_free
185 | kmem_cache_free_bulk
186 | kfree
187 | setup_kmem_cache_nodes
188 | __do_tune_cpucache
189 | do_tune_cpucache
190 | enable_cpucache
191 | drain_array
192 | cache_reap
193 | get_slabinfo
194 | slabinfo_show_stats
195 | slabinfo_write
196 | add_caller
197 | handle_slab
198 | show_symbol
199 | leaks_show
200 | slabstats_open
201 | slab_proc_init
202 | __check_heap_object
203 | ksize
204 | kmem_cache_debug
205 | fixup_red_left
206 | kmem_cache_has_cpu_partial
207 | memcg_propagate_slab_attrs
208 | sysfs_slab_remove
209 | sysfs_slab_add
210 | sysfs_slab_alias
211 | memcg_propagate_slab_attrs
212 | sysfs_slab_remove
213 | get_freepointer
214 | get_freepointer_safe
215 | set_freepointer
216 | slab_index
217 | order_objects
218 | oo_order
219 | oo_objects
220 | slab_lock
221 | slab_unlock
222 | set_page_slub_counters
223 | __cmpxchg_double_slab
224 | cmpxchg_double_slab
225 | get_map
226 | size_from_object
227 | restore_red_left
228 | metadata_access_enable
229 | metadata_access_disable
230 | check_valid_pointer
231 | print_section
232 | get_track
233 | set_track
234 | init_tracking
235 | print_track
236 | print_tracking
237 | print_page_info
238 | slab_bug
239 | slab_fix
240 | print_trailer
241 | object_err
242 | slab_err
243 | init_object
244 | restore_bytes
245 | check_bytes_and_report
246 | check_pad_bytes
247 | slab_pad_check
248 | check_object
249 | check_slab
250 | on_freelist
251 | trace
252 | add_full
253 | remove_full
254 | inc_slabs_node
255 | dec_slabs_node
256 | setup_object_debug
257 | alloc_consistency_checks
258 | alloc_debug_processing
259 | free_consistency_checks
260 | setup_slub_debug
261 | kmem_cache_flags
262 | setup_object_debug
263 | alloc_debug_processing
264 | slab_pad_check
265 | check_object
266 | add_full
267 | remove_full
268 | kmem_cache_flags
269 | inc_slabs_node
270 | dec_slabs_node
271 | kmalloc_large_node_hook
272 | kfree_hook
273 | slab_free_hook
274 | slab_free_freelist_hook
275 | setup_object
276 | init_cache_random_seq
277 | init_freelist_randomization
278 | next_freelist_entry
279 | shuffle_freelist
280 | init_cache_random_seq
281 | init_freelist_randomization
282 | shuffle_freelist
283 | allocate_slab
284 | new_slab
285 | __free_slab
286 | rcu_free_slab
287 | free_slab
288 | discard_slab
289 | add_partial
290 | remove_partial
291 | acquire_slab
292 | put_cpu_partial
293 | pfmemalloc_match
294 | get_partial_node
295 | get_any_partial
296 | get_partial
297 | note_cmpxchg_failure
298 | init_kmem_cache_cpus
299 | deactivate_slab
300 | unfreeze_partials
301 | put_cpu_partial
302 | flush_slab
303 | __flush_cpu_slab
304 | flush_cpu_slab
305 | has_cpu_slab
306 | flush_all
307 | slub_cpu_dead
308 | node_match
309 | count_free
310 | count_partial
311 | new_slab_objects
312 | pfmemalloc_match
313 | get_freelist
314 | ___slab_alloc
315 | __slab_alloc
316 | slab_alloc_node
317 | slab_alloc
318 | kmem_cache_alloc
319 | kmem_cache_alloc_trace
320 | kmem_cache_alloc_node
321 | kmem_cache_alloc_node_trace
322 | __slab_free
323 | do_slab_free
324 | slab_free
325 | ___cache_free
326 | kmem_cache_free
327 | build_detached_freelist
328 | kmem_cache_free_bulk
329 | kmem_cache_alloc_bulk
330 | slab_order
331 | calculate_order
332 | alloc_kmem_cache_cpus
333 | early_kmem_cache_node_alloc
334 | free_kmem_cache_nodes
335 | __kmem_cache_release
336 | init_kmem_cache_nodes
337 | set_min_partial
338 | set_cpu_partial
339 | calculate_sizes
340 | kmem_cache_open
341 | list_slab_objects
342 | free_partial
343 | __kmem_cache_shutdown
344 | setup_slub_min_order
345 | setup_slub_max_order
346 | setup_slub_min_objects
347 | __kmalloc
348 | kmalloc_large_node
349 | __kmalloc_node
350 | __check_heap_object
351 | __ksize
352 | ksize
353 | kfree
354 | __kmem_cache_shrink
355 | kmemcg_cache_deact_after_rcu
356 | __kmemcg_cache_deactivate
357 | slab_mem_going_offline_callback
358 | slab_mem_offline_callback
359 | slab_mem_going_online_callback
360 | slab_memory_callback
361 | kmem_cache_init
362 | kmem_cache_init_late
363 | __kmem_cache_create
364 | __kmalloc_track_caller
365 | __kmalloc_node_track_caller
366 | count_inuse
367 | count_total
368 | validate_slab
369 | validate_slab_slab
370 | validate_slab_node
371 | validate_slab_cache
372 | free_loc_track
373 | alloc_loc_track
374 | add_location
375 | process_slab
376 | list_locations
377 | resiliency_test
378 | resiliency_test
379 | setup_slub_memcg_sysfs
380 | show_slab_objects
381 | any_slab_objects
382 | slab_size_show
383 | align_show
384 | object_size_show
385 | objs_per_slab_show
386 | order_store
387 | order_show
388 | min_partial_show
389 | min_partial_store
390 | cpu_partial_show
391 | cpu_partial_store
392 | ctor_show
393 | aliases_show
394 | partial_show
395 | cpu_slabs_show
396 | objects_show
397 | objects_partial_show
398 | slabs_cpu_partial_show
399 | reclaim_account_show
400 | reclaim_account_store
401 | hwcache_align_show
402 | cache_dma_show
403 | destroy_by_rcu_show
404 | reserved_show
405 | slabs_show
406 | total_objects_show
407 | sanity_checks_show
408 | sanity_checks_store
409 | trace_show
410 | trace_store
411 | red_zone_show
412 | red_zone_store
413 | poison_show
414 | poison_store
415 | store_user_show
416 | store_user_store
417 | validate_show
418 | validate_store
419 | alloc_calls_show
420 | free_calls_show
421 | failslab_show
422 | failslab_store
423 | shrink_show
424 | shrink_store
425 | remote_node_defrag_ratio_show
426 | remote_node_defrag_ratio_store
427 | show_stat
428 | clear_stat
429 | slab_attr_show
430 | slab_attr_store
431 | memcg_propagate_slab_attrs
432 | kmem_cache_release
433 | uevent_filter
434 | create_unique_id
435 | sysfs_slab_remove_workfn
436 | sysfs_slab_add
437 | sysfs_slab_remove
438 | sysfs_slab_release
439 | sysfs_slab_alias
440 | slab_sysfs_init
441 | get_slabinfo
442 | slabinfo_show_stats
443 | slabinfo_write
444 | find_first_bad_addr
445 | addr_has_shadow
446 | get_shadow_bug_type
447 | get_wild_bug_type
448 | get_bug_type
449 | print_error_description
450 | kernel_or_module_addr
451 | init_task_stack_addr
452 | kasan_start_report
453 | kasan_end_report
454 | print_track
455 | addr_to_page
456 | describe_object_addr
457 | describe_object
458 | print_address_description
459 | row_is_guilty
460 | shadow_pointer_offset
461 | print_shadow_for_address
462 | kasan_report_invalid_free
463 | kasan_report_error
464 | kasan_save_enable_multi_shot
465 | kasan_restore_multi_shot
466 | kasan_set_multi_shot
467 | kasan_report_enabled
468 | kasan_report
469 | __asan_report_load_n_noabort
470 | __asan_report_store_n_noabort
471 | dump_stack_set_arch_desc
472 | dump_stack_print_info
473 | show_regs_print_info
474 | __dump_stack
475 | dump_stack
476 | __asan_report_load1_noabort
477 | __asan_report_load2_noabort
478 | __asan_report_load4_noabort
479 | __asan_report_load8_noabort
480 | __asan_report_load16_noabort
481 | __asan_report_store1_noabort
482 | __asan_report_store2_noabort
483 | __asan_report_store4_noabort
484 | __asan_report_store8_noabort
485 | __asan_report_store16_noabort
486 | memcpy
487 | __kasan_report
--------------------------------------------------------------------------------
/syzscope/scripts/check_kvm.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 |
4 | set -e
5 |
6 | function add_user_to_kvm_group() {
7 | echo "$(whoami) is not in kvm group"
8 | echo "Adding $(whoami) to kvm group"
9 | set -x
10 | sudo usermod -a -G kvm $(whoami)
11 | set +x
12 | echo "Re-login and run SyzScope again"
13 | exit 1
14 | }
15 |
16 | if [ ! -e "/dev/kvm" ]; then
17 | echo "This machine does not support KVM. SyzScope cannot run on it."
18 | exit 1
19 | fi
20 |
21 | groups $(whoami) | grep kvm || add_user_to_kvm_group
22 | echo "KVM is ready to go"
23 | exit 0
--------------------------------------------------------------------------------
/syzscope/scripts/deploy-bc.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./deploy-bc.sh linux_clone_path index case_path commit config bc_path compile max_compiling_kernel
5 |
6 | set -ex
7 |
8 | echo "running deploy-bc.sh"
9 |
10 | function wait_for_other_compiling() {
11 | # sometimes a process may starve for a long time; seems OK if every case has the same weight
12 | n=`ps aux | grep "make -j" | wc -l`
13 | echo "Wait for other compiling"
14 | set +x
15 | while [ $n -ge $(($MAX_COMPILING_KERNEL+1)) ]
16 | do
17 | sleep 10
18 | n=`ps aux | grep "make -j" | wc -l`
19 | done
20 | set -x
21 | }
22 |
23 | function copy_log_then_exit() {
24 | LOG=$1
25 | cp $LOG $CASE_PATH/clang-$LOG
26 | }
27 |
28 | function config_disable() {
29 | key=$1
30 | sed -i "s/$key=n/# $key is not set/g" .config
31 | sed -i "s/$key=m/# $key is not set/g" .config
32 | sed -i "s/$key=y/# $key is not set/g" .config
33 | }
34 |
35 | function config_enable() {
36 | key=$1
37 | sed -i "s/$key=n/# $key is not set/g" .config
38 | sed -i "s/$key=m/# $key is not set/g" .config
39 | sed -i "s/# $key is not set/$key=y/g" .config
40 | }
41 |
42 | if [ $# -ne 8 ]; then
43 | echo "Usage ./deploy-bc.sh linux_clone_path index case_path commit config bc_path compile max_compiling_kernel"
44 | exit 1
45 | fi
46 |
47 | INDEX=$2
48 | CASE_PATH=$3
49 | COMMIT=$4
50 | CONFIG=$5
51 | BC_PATH=$6
52 | COMPILE=$7
53 | MAX_COMPILING_KERNEL=$8
54 | N_CORES=$((`nproc` / $MAX_COMPILING_KERNEL))
55 | PROJECT_PATH="$(pwd)"
56 | export PATH=$PATH:$HOME/.local/bin
57 |
58 | cd $CASE_PATH
59 |
60 | OLD_INDEX=`ls -l linux | cut -d'-' -f 3`
61 | if [ "$OLD_INDEX" != "$INDEX" ]; then
62 | if [ -d "./linux" ]; then
63 | rm -rf "./linux"
64 | fi
65 | ln -s $PROJECT_PATH/tools/$1-$INDEX ./linux
66 | if [ -f "$CASE_PATH/.stamp/BUILD_KERNEL" ]; then
67 | rm $CASE_PATH/.stamp/BUILD_KERNEL
68 | fi
69 | fi
70 |
71 | cd linux
72 |
73 | if [ "$COMPILE" != "1" ]; then
74 |
75 | if [ -f "$CASE_PATH/config" ]; then
76 | git stash
77 | rm .config
78 | cp $CASE_PATH/config .config
79 | if [ ! -f "$CASE_PATH/compiler/compiler" ]; then
80 | echo "No compiler found in $CASE_PATH"
81 | exit 1
82 | fi
83 | COMPILER=$CASE_PATH/compiler/compiler
84 | #wait_for_other_compiling
85 | make -j$N_CORES CC=$COMPILER > make.log 2>&1 || copy_log_then_exit make.log
86 | exit 0
87 | fi
88 |
89 | else
90 |
91 | CONFIGKEYSDISABLE="
92 | CONFIG_KASAN
93 | CONFIG_KCOV
94 | CONFIG_BUG_ON_DATA_CORRUPTION
95 | CONFIG_DRM_I915
96 | CONFIG_XEN
97 | "
98 | for key in $CONFIGKEYSDISABLE;
99 | do
100 | config_disable $key
101 | done
102 |
103 | # save the dry run log
104 | CLANG=$PROJECT_PATH/tools/llvm/build/bin/clang
105 | make olddefconfig CC=$CLANG
106 | find -type f -name '*.bc' ! -name "timeconst.bc" -delete
107 | make -n CC=$CLANG > clang_log || echo "It's OK"
108 |
109 | # First try if wllvm can compile it
110 | export LLVM_COMPILER=clang
111 | export LLVM_COMPILER_PATH=$PROJECT_PATH/tools/llvm/build/bin/
112 | pip3 list | grep wllvm || pip3 install wllvm
113 | make olddefconfig CC=wllvm
114 | ERROR=0
115 | wait_for_other_compiling
116 | make clean CC=wllvm
117 | # group the failure branch so copy_log_then_exit only runs when make fails
118 | make -j$N_CORES CC=wllvm > make.log 2>&1 || { ERROR=1; copy_log_then_exit make.log; }
118 | if [ $ERROR == "0" ]; then
119 | extract-bc vmlinux
120 | mv vmlinux.bc one.bc || (find -type f -name '*.bc' ! -name "timeconst.bc" -delete && exit 1)
121 | exit 0
122 | else
123 | # back to manual compile and link
124 | find -type f -name '*.bc' ! -name "timeconst.bc" -delete
125 | exit 1
126 | fi
127 | fi
128 | exit 1
--------------------------------------------------------------------------------
/syzscope/scripts/deploy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./deploy.sh linux_clone_path case_hash linux_commit syzkaller_commit linux_config testcase index catalog image arch gcc_version max_compiling_kernel
5 |
6 | set -ex
7 |
8 | echo "running deploy.sh"
9 |
10 | LATEST="9b1f3e6"
11 |
12 | function wait_for_other_compiling() {
13 | # sometimes a process may starve for a long time; seems OK if every case has the same weight
14 | n=`ps aux | grep "make -j" | wc -l`
15 | echo "Wait for other compiling"
16 | set +x
17 | while [ $n -ge $(($MAX_COMPILING_KERNEL+1)) ]
18 | do
19 | sleep 10
20 | n=`ps aux | grep "make -j" | wc -l`
21 | done
22 | set -x
23 | }
24 |
25 | function config_disable() {
26 | key=$1
27 | sed -i "s/$key=n/# $key is not set/g" .config
28 | sed -i "s/$key=m/# $key is not set/g" .config
29 | sed -i "s/$key=y/# $key is not set/g" .config
30 | }
31 |
32 | function config_enable() {
33 | key=$1
34 | sed -i "s/$key=n/# $key is not set/g" .config
35 | sed -i "s/$key=m/# $key is not set/g" .config
36 | sed -i "s/# $key is not set/$key=y/g" .config
37 | }
38 |
39 | function copy_log_then_exit() {
40 | LOG=$1
41 | cp $LOG $CASE_PATH/$LOG-$COMPILER_VERSION
42 | exit 1
43 | }
44 |
45 | function try_patch_kernel() {
46 | patch -p1 -i $PROJECT_PATH/syzscope/patches/760f8.patch || copy_log_then_exit make.log
47 | make -j$N_CORES CC=$COMPILER > make.log 2>&1 || copy_log_then_exit make.log
48 | }
49 |
50 | function set_git_config() {
51 | set +x
52 | echo "set user.email for git config"
53 | echo "Input email: "
54 | read email
55 | echo "set user.name for git config"
56 | echo "Input name: "
57 | read name
58 | git config --global user.email $email
59 | git config --global user.name $name
60 | set -x
61 | }
62 |
63 | function build_golang() {
64 | echo "setup golang environment"
65 | rm -rf goroot || echo "clean goroot"
66 | wget https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz
67 | tar -xf go1.14.2.linux-amd64.tar.gz
68 | mv go goroot
69 | if [ ! -d "gopath" ]; then
70 | mkdir gopath
71 | fi
72 | rm go1.14.2.linux-amd64.tar.gz
73 | }
74 |
75 | function back_to_newest_version() {
76 | git checkout -f $LATEST
77 | cp $PATCHES_PATH/syzkaller-9b1f3e6.patch ./syzkaller.patch
78 | }
79 |
80 | function retrieve_proper_patch() {
81 | git rev-list HEAD | grep $(git rev-parse b5df78d) || back_to_newest_version
82 | git rev-list b5df78d | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-b5df78d.patch ./syzkaller.patch
83 | git rev-list 4d4a442 | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-4d4a442.patch ./syzkaller.patch
84 | git rev-list e503f04 | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-e503f04.patch ./syzkaller.patch
85 | git rev-list dbd627e | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-dbd627e.patch ./syzkaller.patch
86 | git rev-list 5de425b | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-5de425b.patch ./syzkaller.patch
87 | git rev-list 1e9788a | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-1e9788a.patch ./syzkaller.patch
88 | git rev-list 2cad5aa | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-2cad5aa.patch ./syzkaller.patch
89 | git rev-list 9b1f3e6 | grep $(git rev-parse HEAD) || cp $PATCHES_PATH/syzkaller-9b1f3e6.patch ./syzkaller.patch
90 | }
91 |
92 | if [ $# -ne 12 ]; then
93 | echo "Usage ./deploy.sh linux_clone_path case_hash linux_commit syzkaller_commit linux_config testcase index catalog image arch gcc_version max_compiling_kernel"
94 | exit 1
95 | fi
96 |
97 | HASH=$2
98 | COMMIT=$3
99 | SYZKALLER=$4
100 | CONFIG=$5
101 | TESTCASE=$6
102 | INDEX=$7
103 | CATALOG=$8
104 | IMAGE=$9
105 | ARCH=${10}
106 | COMPILER_VERSION=${11}
107 | MAX_COMPILING_KERNEL=${12}
108 | PROJECT_PATH="$(pwd)"
109 | PKG_NAME="syzscope"
110 | CASE_PATH=$PROJECT_PATH/work/$CATALOG/$HASH
111 | PATCHES_PATH=$PROJECT_PATH/$PKG_NAME/patches
112 | echo "Compiler: "$COMPILER_VERSION | grep gcc && \
113 | COMPILER=$PROJECT_PATH/tools/$COMPILER_VERSION/bin/gcc || COMPILER=$PROJECT_PATH/tools/$COMPILER_VERSION/bin/clang
114 | N_CORES=$((`nproc` / $MAX_COMPILING_KERNEL))
115 |
116 | if [ ! -d "tools/$1-$INDEX" ]; then
117 | echo "No linux repositories detected"
118 | exit 1
119 | fi
120 |
121 | # Check if linux is cloned by git
122 | cd tools/$1-$INDEX
123 | if [ ! -d ".git" ]; then
124 | echo "This linux repo is not cloned by git."
125 | exit 1
126 | fi
127 |
128 | cd ..
129 |
130 | # Check for golang environment
131 | export GOPATH=$CASE_PATH/gopath
132 | export GOROOT=$PROJECT_PATH/tools/goroot
133 | export LLVM_BIN=$PROJECT_PATH/tools/llvm/build/bin
134 | export PATH=$GOROOT/bin:$LLVM_BIN:$PATH
135 | echo "[+] Checking golang"
136 | go version || build_golang
137 |
138 | cd $CASE_PATH || exit 1
139 | if [ ! -d ".stamp" ]; then
140 | mkdir .stamp
141 | fi
142 |
143 | if [ ! -d "compiler" ]; then
144 | mkdir compiler
145 | fi
146 | cd compiler
147 | if [ ! -L "$CASE_PATH/compiler/compiler" ]; then
148 | ln -s $COMPILER ./compiler
149 | fi
150 |
151 | #Building for syzkaller
152 | echo "[+] Building syzkaller"
153 | if [ ! -f "$CASE_PATH/.stamp/BUILD_SYZKALLER" ]; then
154 | if [ -d "$GOPATH/src/github.com/google/syzkaller" ]; then
155 | rm -rf $GOPATH/src/github.com/google/syzkaller
156 | fi
157 | mkdir -p $GOPATH/src/github.com/google/ || echo "Dir exists"
158 | cd $GOPATH/src/github.com/google/
159 | cp -r $PROJECT_PATH/tools/gopath/src/github.com/google/syzkaller ./
160 | #go get -u -d github.com/google/syzkaller/prog
161 | #fi
162 | cd $GOPATH/src/github.com/google/syzkaller || exit 1
163 | make clean
164 | git stash --all || set_git_config
165 | git checkout -f 9b1f3e665308ee2ddd5b3f35a078219b5c509cdb
166 | #git checkout -
167 | #retrieve_proper_patch
168 | cp $PATCHES_PATH/syzkaller-9b1f3e6.patch ./syzkaller.patch
169 | patch -p1 -i syzkaller.patch
170 | #rm -r executor
171 | #cp -r $PROJECT_PATH/tools/syzkaller/executor ./executor
172 | make TARGETARCH=$ARCH TARGETVMARCH=amd64
173 | if [ ! -d "workdir" ]; then
174 | mkdir workdir
175 | fi
176 | curl $TESTCASE > $GOPATH/src/github.com/google/syzkaller/workdir/testcase-$HASH
177 | touch $CASE_PATH/.stamp/BUILD_SYZKALLER
178 | fi
179 |
180 |
181 | cd $CASE_PATH || exit 1
182 | echo "[+] Copy image"
183 | if [ ! -d "$CASE_PATH/img" ]; then
184 | mkdir -p $CASE_PATH/img
185 | fi
186 | cd img
187 | if [ ! -L "$CASE_PATH/img/stretch.img" ]; then
188 | ln -s $PROJECT_PATH/tools/img/$IMAGE.img ./stretch.img
189 | fi
190 | if [ ! -L "$CASE_PATH/img/stretch.img.key" ]; then
191 | ln -s $PROJECT_PATH/tools/img/$IMAGE.img.key ./stretch.img.key
192 | fi
193 | cd ..
194 |
195 | #Building kernel
196 | echo "[+] Building kernel"
197 | OLD_INDEX=`ls -l linux | cut -d'-' -f 3`
198 | if [ "$OLD_INDEX" != "$INDEX" ]; then
199 | rm -rf "./linux" || echo "No linux repo"
200 | ln -s $PROJECT_PATH/tools/$1-$INDEX ./linux
201 | if [ -f "$CASE_PATH/.stamp/BUILD_KERNEL" ]; then
202 | rm $CASE_PATH/.stamp/BUILD_KERNEL
203 | fi
204 | fi
205 | if [ ! -f "$CASE_PATH/.stamp/BUILD_KERNEL" ]; then
206 | cd linux
207 | if [ -f "THIS_KERNEL_IS_BEING_USED" ]; then
208 | echo "This kernel is being used by another thread"
209 | exit 1
210 | fi
211 | git stash || echo "it's ok"
212 | make clean > /dev/null || echo "it's ok"
213 | git clean -fdx -e THIS_KERNEL_IS_BEING_USED > /dev/null || echo "it's ok"
214 | #make clean CC=$COMPILER
215 | #git stash --all || set_git_config
216 | git checkout -f $COMMIT || (git pull https://github.com/torvalds/linux.git master > /dev/null 2>&1 && git checkout -f $COMMIT)
217 | #if [ "$KASAN_PATCH" == "1" ]; then
218 | # cp $PATCHES_PATH/kasan.patch ./
219 | # patch -p1 -i kasan.patch
220 | #fi
221 | #Add a rejection detector in future
222 | curl $CONFIG > .config
223 |
224 | # CONFIGKEYSDISABLE="
225 | #CONFIG_BUG_ON_DATA_CORRUPTION
226 | #CONFIG_KASAN_INLINE
227 | #"
228 |
229 | # CONFIGKEYSENABLE="
230 | #CONFIG_KASAN_OUTLINE
231 | #"
232 |
233 | CONFIGKEYSENABLE="
234 | CONFIG_HAVE_ARCH_KASAN
235 | CONFIG_KASAN
236 | CONFIG_KASAN_OUTLINE
237 | CONFIG_DEBUG_INFO
238 | CONFIG_FRAME_POINTER
239 | CONFIG_UNWINDER_FRAME_POINTER"
240 |
241 | CONFIGKEYSDISABLE="
242 | CONFIG_BUG_ON_DATA_CORRUPTION
243 | CONFIG_KASAN_INLINE
244 | CONFIG_RANDOMIZE_BASE
245 | CONFIG_PANIC_ON_OOPS
246 | CONFIG_X86_SMAP
247 | CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC
248 | CONFIG_BOOTPARAM_HARDLOCKUP_PANIC
249 | CONFIG_BOOTPARAM_HUNG_TASK_PANIC
250 | "
251 | #CONFIG_SOFTLOCKUP_DETECTOR
252 | #CONFIG_LOCKUP_DETECTOR
253 | #CONFIG_HARDLOCKUP_DETECTOR
254 | #CONFIG_DETECT_HUNG_TASK
255 | #CONFIG_WQ_WATCHDOG
256 | #CONFIG_ARCH_HAS_KCOV
257 | #CONFIG_KCOV
258 | #CONFIG_KCOV_INSTRUMENT_ALL
259 | #CONFIG_PROVE_LOCKING
260 | #CONFIG_DEBUG_RT_MUTEXES
261 | #CONFIG_DEBUG_SPINLOCK
262 | #CONFIG_DEBUG_MUTEXES
263 | #CONFIG_DEBUG_WW_MUTEX_SLOWPATH
264 | #CONFIG_DEBUG_RWSEMS
265 | #CONFIG_DEBUG_LOCK_ALLOC
266 | #CONFIG_DEBUG_ATOMIC_SLEEP
267 | #CONFIG_DEBUG_LIST
268 |
269 | for key in $CONFIGKEYSDISABLE;
270 | do
271 | config_disable $key
272 | done
273 |
274 | for key in $CONFIGKEYSENABLE;
275 | do
276 | config_enable $key
277 | done
278 |
279 | make olddefconfig CC=$COMPILER
280 | #wait_for_other_compiling
281 | make -j$N_CORES CC=$COMPILER > make.log 2>&1 || copy_log_then_exit make.log
282 | rm $CASE_PATH/config || echo "It's ok"
283 | cp .config $CASE_PATH/config
284 | touch THIS_KERNEL_IS_BEING_USED
285 | touch $CASE_PATH/.stamp/BUILD_KERNEL
286 | fi
287 |
288 | exit 0
289 |
--------------------------------------------------------------------------------
/syzscope/scripts/deploy_linux.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./deploy_linux.sh gcc_version fixed linux_path package_path max_compiling_kernel [linux_commit, config_url, mode]
5 |
6 | set -ex
7 |
8 | echo "running deploy_linux.sh"
9 |
10 | function clean_and_jump() {
11 | git stash --all
12 | git checkout -f $COMMIT
13 | }
14 |
15 | function copy_log_then_exit() {
16 | LOG=$1
17 | cp $LOG $CASE_PATH/$LOG-deploy_linux
18 | exit 1
19 | }
20 |
21 | function config_disable() {
22 | key=$1
23 | sed -i "s/$key=n/# $key is not set/g" .config
24 | sed -i "s/$key=m/# $key is not set/g" .config
25 | sed -i "s/$key=y/# $key is not set/g" .config
26 | }
27 |
28 | function config_enable() {
29 | key=$1
30 | sed -i "s/$key=n/# $key is not set/g" .config
31 | sed -i "s/$key=m/# $key is not set/g" .config
32 | sed -i "s/# $key is not set/$key=y/g" .config
33 | }
34 |
35 | if [ $# -ne 5 ] && [ $# -ne 8 ]; then
36 | echo "Usage ./deploy_linux gcc_version fixed linux_path package_path max_compiling_kernel [linux_commit, config_url, mode]"
37 | exit 1
38 | fi
39 |
40 | COMPILER_VERSION=$1
41 | FIXED=$2
42 | LINUX=$3
43 | PATCH=$4/patches/kasan.patch
44 | MAX_COMPILING_KERNEL=$5
45 | N_CORES=$((`nproc` / $MAX_COMPILING_KERNEL))
46 | echo "Compiler: "$COMPILER_VERSION | grep gcc && \
47 | COMPILER=$4/tools/$COMPILER_VERSION/bin/gcc || COMPILER=$4/tools/$COMPILER_VERSION/bin/clang
48 |
49 | if [ $# -eq 8 ]; then
50 | COMMIT=$6
51 | CONFIG=$7
52 | MODE=$8
53 | fi
54 |
55 | cd $LINUX
56 | cd ..
57 | CASE_PATH=`pwd`
58 | cd linux
59 | if [ $# -eq 5 ]; then
60 | #patch -p1 -N -R < $PATCH
61 | echo "no more patch"
62 | fi
63 | if [ $# -eq 8 ]; then
64 | if [ "$FIXED" != "1" ]; then
65 | git stash
66 | git clean -fdx -e THIS_KERNEL_IS_BEING_USED > /dev/null
67 | CURRENT_HEAD=`git rev-parse HEAD`
68 | if [ "$CURRENT_HEAD" != "$COMMIT" ]; then
69 | #make clean CC=$COMPILER
70 | #git stash --all
71 | git checkout -f $COMMIT || (git pull https://github.com/torvalds/linux.git master > /dev/null 2>&1 && git checkout -f $COMMIT)
72 | fi
73 | curl $CONFIG > .config
74 | else
75 | git format-patch -1 $COMMIT --stdout > fixed.patch
76 | patch -p1 -N -i fixed.patch || exit 1
77 | curl $CONFIG > .config
78 | fi
79 | fi
80 |
81 | # Panic on data corruption may stop the fuzzing session
82 | if [ "$MODE" == "0" ]; then
83 | CONFIGKEYSDISABLE="
84 | CONFIG_BUG_ON_DATA_CORRUPTION
85 | CONFIG_KASAN_INLINE
86 | CONFIG_KCOV
87 | "
88 |
89 | CONFIGKEYSENABLE="
90 | CONFIG_KASAN_OUTLINE
91 | "
92 | fi
93 |
94 | if [ "$MODE" == "1" ]; then
95 | CONFIGKEYSENABLE="
96 | CONFIG_HAVE_ARCH_KASAN
97 | CONFIG_KASAN
98 | CONFIG_KASAN_OUTLINE
99 | CONFIG_DEBUG_INFO
100 | CONFIG_FRAME_POINTER
101 | CONFIG_UNWINDER_FRAME_POINTER"
102 |
103 | CONFIGKEYSDISABLE="
104 | CONFIG_KASAN_INLINE
105 | CONFIG_RANDOMIZE_BASE
106 | CONFIG_SOFTLOCKUP_DETECTOR
107 | CONFIG_LOCKUP_DETECTOR
108 | CONFIG_HARDLOCKUP_DETECTOR
109 | CONFIG_DETECT_HUNG_TASK
110 | CONFIG_WQ_WATCHDOG
111 | CONFIG_PANIC_ON_OOPS
112 | CONFIG_X86_SMAP
113 | CONFIG_PROVE_LOCKING
114 | CONFIG_DEBUG_RT_MUTEXES
115 | CONFIG_DEBUG_SPINLOCK
116 | CONFIG_DEBUG_MUTEXES
117 | CONFIG_DEBUG_WW_MUTEX_SLOWPATH
118 | CONFIG_DEBUG_RWSEMS
119 | CONFIG_DEBUG_LOCK_ALLOC
120 | CONFIG_DEBUG_ATOMIC_SLEEP
121 | CONFIG_DEBUG_LIST
122 | CONFIG_ARCH_HAS_KCOV
123 | CONFIG_KCOV
124 | CONFIG_KCOV_INSTRUMENT_ALL
125 | "
126 | fi
127 |
128 | for key in $CONFIGKEYSDISABLE;
129 | do
130 | config_disable $key
131 | done
132 |
133 | for key in $CONFIGKEYSENABLE;
134 | do
135 | config_enable $key
136 | done
137 |
138 | make olddefconfig CC=$COMPILER
139 | make -j$N_CORES CC=$COMPILER > make.log 2>&1 || copy_log_then_exit make.log
140 | exit 0
141 |
--------------------------------------------------------------------------------
/syzscope/scripts/init-replay.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./init-replay.sh SUBFOLDER HASH
5 |
6 | set -e
7 |
8 | PROJECT_PATH="$(pwd)"
9 | SUBFOLDER=$1
10 | HASH=$2
11 | CASE_PATH="$PROJECT_PATH/work/$SUBFOLDER/$HASH"
12 |
13 | if [ -d "$CASE_PATH/.stamp/" ]; then
14 | rm -r $CASE_PATH/.stamp/
15 | fi
--------------------------------------------------------------------------------
/syzscope/scripts/linux-clone.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./linux-clone.sh linux_clone_path index
5 |
6 | echo "running linux-clone.sh"
7 |
8 | if [ $# -ne 2 ]; then
9 | echo "Usage ./linux-clone linux_clone_path index"
10 | exit 1
11 | fi
12 |
13 | if [ -d "tools/$1-$2" ]; then
14 | exit 0
15 | fi
16 | if [ ! -d "tools" ]; then
17 | mkdir tools
18 | fi
19 | cd tools || exit 1
20 | if [ ! -d "linux-0" ]; then
21 | git clone https://github.com/torvalds/linux.git $1-$2
22 | else
23 | cp -r linux-0 $1-$2
24 | fi
25 | echo "Linux cloned to $1-$2"
--------------------------------------------------------------------------------
/syzscope/scripts/patch_applying_check.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./patch_applying_check.sh linux_path linux_commit config_url patch_commit gcc_version max_compiling_kernel
5 |
6 | set -ex
7 | echo "running patch_applying_check.sh"
8 |
9 |
10 | function jump_to_the_patch() {
11 | git stash
12 | git clean -fdx -e THIS_KERNEL_IS_BEING_USED > /dev/null
13 | #make clean CC=$COMPILER
14 | #git stash --all
15 | git checkout -f $PATCH
16 | git format-patch -1 $PATCH --stdout > fixed.patch
17 | }
18 |
19 | function copy_log_then_exit() {
20 | LOG=$1
21 | cp $LOG $CASE_PATH/$LOG-patch_applying_check
22 | exit 1
23 | }
24 |
25 | if [ $# -ne 6 ]; then
26 | echo "Usage ./patch_applying_check.sh linux_path linux_commit config_url patch_commit gcc_version max_compiling_kernel"
27 | exit 1
28 | fi
29 |
30 | LINUX=$1
31 | COMMIT=$2
32 | CONFIG=$3
33 | PATCH=$4
34 | COMPILER_VERSION=$5
35 | MAX_COMPILING_KERNEL=$6
36 | N_CORES=$((`nproc` / $MAX_COMPILING_KERNEL))
37 | echo "Compiler: "$COMPILER_VERSION | grep gcc && \
38 | COMPILER=`pwd`/tools/$COMPILER_VERSION/bin/gcc || COMPILER=`pwd`/tools/$COMPILER_VERSION/bin/clang
39 |
40 | cd $LINUX
41 | cd ..
42 | CASE_PATH=`pwd`
43 | cd linux
44 |
45 | CURRENT_HEAD=`git rev-parse HEAD`
46 | git stash
47 | if [ "$CURRENT_HEAD" != "$COMMIT" ]; then
48 | git clean -fdx -e THIS_KERNEL_IS_BEING_USED > /dev/null
49 | #make clean CC=$COMPILER
50 | #git stash --all
51 | git checkout -f $COMMIT || (git pull https://github.com/torvalds/linux.git master > /dev/null 2>&1 && git checkout -f $COMMIT)
52 | fi
53 | git format-patch -1 $PATCH --stdout > fixed.patch
54 | patch -p1 -N -i fixed.patch || jump_to_the_patch
55 | patch -p1 -R < fixed.patch
56 | curl $CONFIG > .config
57 | sed -i "s/CONFIG_BUG_ON_DATA_CORRUPTION=y/# CONFIG_BUG_ON_DATA_CORRUPTION is not set/g" .config
58 | make olddefconfig CC=$COMPILER
59 | make -j$N_CORES CC=$COMPILER > make.log 2>&1 || copy_log_then_exit make.log
60 | exit 0
--------------------------------------------------------------------------------
/syzscope/scripts/requirements.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./requirements.sh
5 |
6 | if [ ! -f "$(pwd)/tools/.stamp/ENV_SETUP" ]; then
7 | sudo apt-get update || exit 1
8 | sudo apt-get -y install gdb curl git wget qemu-system-x86 debootstrap flex bison libssl-dev libelf-dev locales cmake libxml2-dev libz3-dev bc libncurses5 gcc-multilib g++-multilib
9 | fi
10 |
11 | if [ ! -d "work/completed" ]; then
12 | mkdir -p work/completed
13 | fi
14 |
15 | if [ ! -d "work/incomplete" ]; then
16 | mkdir -p work/incomplete
17 | fi
18 |
19 | TOOLS_PATH="$(pwd)/tools"
20 | SYZSCOPE_PATH="$(pwd)/syzscope"
21 | if [ ! -d "$TOOLS_PATH/.stamp" ]; then
22 | mkdir -p $TOOLS_PATH/.stamp
23 | fi
24 | # Check for image
25 | echo "[+] Building image"
26 | cd $TOOLS_PATH
27 | if [ ! -f "$TOOLS_PATH/.stamp/BUILD_IMAGE" ]; then
28 | if [ ! -d "img" ]; then
29 | mkdir img
30 | fi
31 | cd img
32 | if [ ! -f "stretch.img" ]; then
33 | wget https://storage.googleapis.com/syzkaller/stretch.img > /dev/null
34 | wget https://storage.googleapis.com/syzkaller/stretch.img.key > /dev/null
35 | chmod 400 stretch.img.key
36 | wget https://storage.googleapis.com/syzkaller/wheezy.img > /dev/null
37 | wget https://storage.googleapis.com/syzkaller/wheezy.img.key > /dev/null
38 | chmod 400 wheezy.img.key
39 | touch $TOOLS_PATH/.stamp/BUILD_IMAGE
40 | fi
41 | cd ..
42 | fi
43 |
44 | echo "[+] Building gcc and clang"
45 | if [ ! -f "$TOOLS_PATH/.stamp/BUILD_GCC_CLANG" ]; then
46 | wget https://storage.googleapis.com/syzkaller/gcc-7.tar.gz > /dev/null
47 | tar xzf gcc-7.tar.gz
48 | mv gcc gcc-7
49 | rm gcc-7.tar.gz
50 |
51 | wget https://storage.googleapis.com/syzkaller/gcc-8.0.1-20180301.tar.gz > /dev/null
52 | tar xzf gcc-8.0.1-20180301.tar.gz
53 | mv gcc gcc-8.0.1-20180301
54 | rm gcc-8.0.1-20180301.tar.gz
55 |
56 | wget https://storage.googleapis.com/syzkaller/gcc-8.0.1-20180412.tar.gz > /dev/null
57 | tar xzf gcc-8.0.1-20180412.tar.gz
58 | mv gcc gcc-8.0.1-20180412
59 | rm gcc-8.0.1-20180412.tar.gz
60 |
61 | wget https://storage.googleapis.com/syzkaller/gcc-9.0.0-20181231.tar.gz > /dev/null
62 | tar xzf gcc-9.0.0-20181231.tar.gz
63 | mv gcc gcc-9.0.0-20181231
64 | rm gcc-9.0.0-20181231.tar.gz
65 |
66 | wget https://storage.googleapis.com/syzkaller/gcc-10.1.0-syz.tar.xz > /dev/null
67 | tar xf gcc-10.1.0-syz.tar.xz
68 | mv gcc-10 gcc-10.1.0-20200507
69 | rm gcc-10.1.0-syz.tar.xz
70 |
71 | wget https://storage.googleapis.com/syzkaller/clang-kmsan-329060.tar.gz > /dev/null
72 | tar xzf clang-kmsan-329060.tar.gz
73 | mv clang-kmsan-329060 clang-7-329060
74 | rm clang-kmsan-329060.tar.gz
75 |
76 | wget https://storage.googleapis.com/syzkaller/clang-kmsan-334104.tar.gz > /dev/null
77 | tar xzf clang-kmsan-334104.tar.gz
78 | mv clang-kmsan-334104 clang-7-334104
79 | rm clang-kmsan-334104.tar.gz
80 |
81 | wget https://storage.googleapis.com/syzkaller/clang-kmsan-343298.tar.gz > /dev/null
82 | tar xzf clang-kmsan-343298.tar.gz
83 | mv clang-kmsan-343298 clang-8-343298
84 | rm clang-kmsan-343298.tar.gz
85 |
86 | wget https://storage.googleapis.com/syzkaller/clang_install_c2443155.tar.gz > /dev/null
87 | tar xzf clang_install_c2443155.tar.gz
88 | mv clang_install_c2443155 clang-10-c2443155
89 | rm clang_install_c2443155.tar.gz
90 |
91 | wget https://storage.googleapis.com/syzkaller/clang-11-prerelease-ca2dcbd030e.tar.xz > /dev/null
92 | tar xf clang-11-prerelease-ca2dcbd030e.tar.xz
93 | mv clang clang-11-ca2dcbd030e
94 | rm clang-11-prerelease-ca2dcbd030e.tar.xz
95 |
96 | #This is for gcc-9
97 | #if [ ! -f "/usr/lib/x86_64-linux-gnu/libmpfr.so.4" ]; then
98 | # sudo ln -s /usr/lib/x86_64-linux-gnu/libmpfr.so.6 /usr/lib/x86_64-linux-gnu/libmpfr.so.4
99 | #fi
100 | touch $TOOLS_PATH/.stamp/BUILD_GCC_CLANG
101 | fi
102 |
103 | echo "[+] Building llvm"
104 | if [ ! -f "$TOOLS_PATH/.stamp/BUILD_LLVM" ]; then
105 | wget https://github.com/llvm/llvm-project/releases/download/llvmorg-10.0.1/llvm-project-10.0.1.tar.xz > /dev/null
106 | tar xf llvm-project-10.0.1.tar.xz
107 | mv llvm-project-10.0.1 llvm
108 | rm llvm-project-10.0.1.tar.xz
109 | cd llvm
110 | mkdir build
111 | cd build
112 | cmake -G "Unix Makefiles" -DLLVM_ENABLE_PROJECTS="clang;lld" -DCMAKE_BUILD_TYPE=Release -DLLVM_ENABLE_DUMP=ON ../llvm
113 | make -j16
114 |
115 | touch $TOOLS_PATH/.stamp/BUILD_LLVM
116 | cd $TOOLS_PATH
117 | fi
118 |
119 | echo "[+] Build static analysis tool"
120 | if [ ! -f "$TOOLS_PATH/.stamp/BUILD_STATIC_ANALYSIS" ]; then
121 | git clone https://github.com/plummm/dr_checker_x.git dr_checker
122 | cd dr_checker
123 | git checkout taint-analysis-on-llvm-10
124 | cd ..
125 | touch $TOOLS_PATH/.stamp/BUILD_STATIC_ANALYSIS
126 | fi
127 |
128 | echo "[+] Download pwndbg"
129 | if [ ! -f "$TOOLS_PATH/.stamp/SETUP_PWNDBG" ]; then
130 | git clone https://github.com/plummm/pwndbg_linux_kernel.git pwndbg
131 | cd pwndbg
132 | ./setup.sh
133 | sudo sed -i "s/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/g" /etc/locale.gen
134 | sudo locale-gen
135 |
136 |
137 | touch $TOOLS_PATH/.stamp/SETUP_PWNDBG
138 | cd ..
139 | fi
140 |
141 | echo "[+] Setup golang environment"
142 | if [ ! -f "$TOOLS_PATH/.stamp/SETUP_GOLANG" ]; then
143 | wget https://dl.google.com/go/go1.14.2.linux-amd64.tar.gz
144 | tar -xf go1.14.2.linux-amd64.tar.gz
145 | mv go goroot
146 | GOPATH=`pwd`/gopath
147 | if [ ! -d "gopath" ]; then
148 | mkdir gopath
149 | fi
150 | rm go1.14.2.linux-amd64.tar.gz
151 | touch $TOOLS_PATH/.stamp/SETUP_GOLANG
152 | fi
153 |
154 | echo "[+] Setup syzkaller"
155 | if [ ! -f "$TOOLS_PATH/.stamp/SETUP_SYZKALLER" ]; then
156 | GOPATH=$TOOLS_PATH/gopath; mkdir -p $GOPATH/src/github.com/google/ || echo "Dir exists"
157 | cd $GOPATH/src/github.com/google/
158 | rm -rf syzkaller || echo "syzkaller does not exist"
159 | git clone https://github.com/google/syzkaller.git
160 | touch $TOOLS_PATH/.stamp/SETUP_SYZKALLER
161 | fi
162 |
163 | touch $TOOLS_PATH/.stamp/ENV_SETUP
164 |
165 | if [ -f "/usr/lib/x86_64-linux-gnu/libmpfr.so.6" ] && [ ! -f "/usr/lib/x86_64-linux-gnu/libmpfr.so.4" ]; then
166 | sudo ln -s /usr/lib/x86_64-linux-gnu/libmpfr.so.6 /usr/lib/x86_64-linux-gnu/libmpfr.so.4
167 | fi
168 |
169 | #BUG: If multiple instances are running, may clean up others' flag
170 | echo "[+] Clean unfinished jobs"
171 | rm linux-*/.git/index.lock || echo "Removing index.lock"
172 | rm linux-*/THIS_KERNEL_IS_BEING_USED || echo "All set"
173 |
174 | exit 0
175 |
--------------------------------------------------------------------------------
/syzscope/scripts/run-script.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./run-script.sh command ssh_port image_path case_path
5 |
6 | echo "running run-script.sh"
7 |
8 | if [ $# -ne 4 ]; then
9 | echo "Usage ./run-script.sh command ssh_port image_path case_path"
10 | exit 1
11 | fi
12 |
13 | COMMAND=$1
14 | PORT=$2
15 | IMAGE_PATH=$3
16 | CASE_PATH=$4
17 |
18 | RAW_COMMAND=`echo $COMMAND | sed -E "s/ -enable=[a-z_]+(,[a-z_]+)*//g"`
19 | NON_REPEAT_COMMAND=`echo $COMMAND | sed -E "s/ -repeat=0/ -repeat=1/g; s/ -procs=[0-9]+/ -procs=1/g"`
20 | NON_REPEAT_RAW_COMMAND=`echo $RAW_COMMAND | sed -E "s/ -repeat=0/ -repeat=1/g; s/ -procs=[0-9]+/ -procs=1/g"`
21 |
22 | cd $CASE_PATH/poc || exit 1
23 | cat << EOF > run.sh
24 | #!/bin/bash
25 | set -ex
26 |
27 | # cprog somehow does not work as well as prog; an infinite loop can even block the execution of syz-execprog
28 | #if [ -f "./poc" ]; then
29 | # ./poc
30 | #fi
31 |
32 | RAW=\$1
33 |
34 | for i in {1..10}
35 | do
36 | # some crashes may be triggered after the current process exits
37 | # some crashes need race conditions or multiple executions
38 | if [ "\$RAW" != "0" ]; then
39 | ${NON_REPEAT_RAW_COMMAND}
40 | ${RAW_COMMAND}
41 | else
42 | # old version syz-execprog may not support -enable
43 | ${NON_REPEAT_COMMAND} || ${NON_REPEAT_RAW_COMMAND}
44 | ${COMMAND} || ${RAW_COMMAND}
45 | fi
46 |
47 | #Sometimes the testcase is not required to repeat, but we still give it a shot
48 | sleep 5
49 | done
50 | EOF
51 |
52 | CMD="scp -F /dev/null -o UserKnownHostsFile=/dev/null \
53 | -o BatchMode=yes -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
54 | -i $IMAGE_PATH/stretch.img.key -P $PORT ./run.sh root@localhost:/root"
55 | $CMD
56 | echo $CMD > run-script.sh
57 | exit 0
--------------------------------------------------------------------------------
/syzscope/scripts/run-vm.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./run-vm.sh image_path linux_path ssh_port
5 |
6 | set -ex
7 |
8 | if [ $# -ne 3 ]; then
9 | echo "Usage ./run-vm.sh image_path linux_path ssh_port"
10 | exit 1
11 | fi
12 |
13 | IMAGE=$1
14 | LINUX=$2
15 | PORT=$3
16 |
17 | qemu-system-x86_64 \
18 | -m 2G \
19 | -smp 2 \
20 | -net nic,model=e1000 \
21 | -enable-kvm -cpu host \
22 | -net user,host=10.0.2.10,hostfwd=tcp::$PORT-:22 \
23 | -display none -serial stdio -no-reboot \
24 | -hda $IMAGE \
25 | -kernel $LINUX/arch/x86_64/boot/bzImage \
26 | -append "console=ttyS0 net.ifnames=0 root=/dev/sda printk.synchronous=1"
27 |
--------------------------------------------------------------------------------
/syzscope/scripts/syz-compile.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./syz-compile.sh syzpath arch
5 |
6 | if [ $# -ne 2 ]; then
7 | echo "Usage ./syz-compile.sh case_path arch"
8 | exit 1
9 | fi
10 |
11 | CASE_PATH=$1
12 | SYZ_PATH=$CASE_PATH/gopath/src/github.com/google/syzkaller
13 | ARCH=$2
14 |
15 | export GOPATH=$CASE_PATH/gopath
16 | export GOROOT=`pwd`/tools/goroot
17 | export LLVM_BIN=`pwd`/tools/llvm/build/bin
18 | export PATH=$GOROOT/bin:$LLVM_BIN:$PATH
19 |
20 | cd $SYZ_PATH
21 | make generate || exit 1
22 | rm CorrectTemplate
23 | make TARGETARCH=$ARCH TARGETVMARCH=amd64 || exit 1
24 | exit 0
--------------------------------------------------------------------------------
/syzscope/scripts/upload-exp.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Xiaochen Zou 2020, University of California-Riverside
3 | #
4 | # Usage ./upload-exp.sh case_path syz_repro_url ssh_port image_path syz_commit type c_repro i386
5 | # EXITCODE: 2: syz-execprog supports -enable. 3: syz-execprog does not support -enable.
6 |
7 | set -ex
8 | echo "running upload-exp.sh"
9 |
10 | if [ $# -ne 10 ]; then
11 | echo "Usage ./upload-exp.sh case_path syz_repro_url ssh_port image_path syz_commit type c_repro i386 fixed gcc_version"
12 | exit 1
13 | fi
14 |
15 | CASE_PATH=$1
16 | TESTCASE=$2
17 | PORT=$3
18 | IMAGE_PATH=$4
19 | SYZKALLER=$5
20 | TYPE=$6
21 | C_REPRO=$7
22 | I386=$8
23 | FIXED=$9
24 | GCCVERSION=${10}
25 | EXITCODE=3
26 | PROJECT_PATH=`pwd`
27 | BIN_PATH=$CASE_PATH/gopath/src/github.com/google/syzkaller
28 | GCC=`pwd`/tools/$GCCVERSION/bin/gcc
29 | export GOROOT=`pwd`/tools/goroot
30 | export PATH=$GOROOT/bin:$PATH
31 |
32 | M32=""
33 | ARCH="amd64"
34 | if [ "$I386" != "None" ]; then
35 | M32="-m32"
36 | ARCH="386"
37 | fi
38 |
39 | cd $CASE_PATH
40 | if [ ! -d "$CASE_PATH/poc" ]; then
41 | mkdir $CASE_PATH/poc
42 | fi
43 |
44 | cd $CASE_PATH/poc
45 | if [ "$TYPE" == "1" ]; then
46 | cp $TESTCASE ./testcase || exit 1
47 | else
48 | curl $TESTCASE > testcase
49 | fi
50 | scp -F /dev/null -o UserKnownHostsFile=/dev/null \
51 | -o BatchMode=yes -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
52 | -i $IMAGE_PATH/stretch.img.key -P $PORT ./testcase root@localhost:/root
53 |
54 | #if [ "$C_REPRO" != "None" ]; then
55 | # curl $C_REPRO > poc.c
56 | # gcc -pthread $M32 -static -o poc poc.c || echo "Error occur when compiling poc"
57 |
58 | # scp -F /dev/null -o UserKnownHostsFile=/dev/null \
59 | # -o BatchMode=yes -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
60 | # -i $IMAGE_PATH/stretch.img.key -P $PORT ./poc root@localhost:/root
61 | #fi
62 |
63 | if [ "$FIXED" == "0" ]; then
64 | #Only for reproduce original PoC
65 | if [ ! -d "$CASE_PATH/poc/gopath" ]; then
66 | mkdir -p $CASE_PATH/poc/gopath
67 | fi
68 | export GOPATH=$CASE_PATH/poc/gopath
69 | mkdir -p $GOPATH/src/github.com/google/ || echo "Dir exists"
70 | BIN_PATH=$CASE_PATH/poc
71 | cd $GOPATH/src/github.com/google/
72 | if [ ! -d "$GOPATH/src/github.com/google/syzkaller" ]; then
73 | cp -r $PROJECT_PATH/tools/gopath/src/github.com/google/syzkaller ./
74 | #go get -u -d github.com/google/syzkaller/prog
75 | cd $GOPATH/src/github.com/google/syzkaller || exit 1
76 |
77 | git checkout -f $SYZKALLER || (git pull https://github.com/google/syzkaller.git master > /dev/null 2>&1 && git checkout -f $SYZKALLER)
78 | git rev-list HEAD | grep $(git rev-parse dfd609eca1871f01757d6b04b19fc273c87c14e5) || EXITCODE=2
79 | make TARGETARCH=$ARCH TARGETVMARCH=amd64 execprog executor
80 | if [ -d "bin/linux_$ARCH" ]; then
81 | cp bin/linux_amd64/syz-execprog $BIN_PATH
82 | cp bin/linux_$ARCH/syz-executor $BIN_PATH
83 | else
84 | cp bin/syz-execprog $BIN_PATH
85 | cp bin/syz-executor $BIN_PATH
86 | fi
87 | touch MAKE_COMPLETED
88 | else
89 | for i in {1..20}
90 | do
91 | if [ -f "$GOPATH/src/github.com/google/syzkaller/MAKE_COMPLETED" ]; then
92 | break
93 | fi
94 | sleep 10
95 | done
96 | #cd $GOPATH/src/github.com/google/syzkaller
97 | fi
98 | else
99 | cd $CASE_PATH/gopath/src/github.com/google/syzkaller
100 | fi
101 |
102 | if [ ! -f "$BIN_PATH/syz-execprog" ]; then
103 | SYZ_PATH=$CASE_PATH/poc/gopath/src/github.com/google/syzkaller/
104 | if [ -d "$SYZ_PATH/bin/linux_$ARCH" ]; then
105 | cp $SYZ_PATH/bin/linux_amd64/syz-execprog $BIN_PATH
106 | cp $SYZ_PATH/bin/linux_$ARCH/syz-executor $BIN_PATH
107 | else
108 | cp $SYZ_PATH/bin/syz-execprog $BIN_PATH
109 | cp $SYZ_PATH/bin/syz-executor $BIN_PATH
110 | fi
111 | fi
112 |
113 | CMD="scp -F /dev/null -o UserKnownHostsFile=/dev/null \
114 | -o BatchMode=yes -o IdentitiesOnly=yes -o StrictHostKeyChecking=no \
115 | -i $IMAGE_PATH/stretch.img.key -P $PORT $BIN_PATH/syz-execprog $BIN_PATH/syz-executor root@localhost:/"
116 |
117 | $CMD
118 | echo $CMD > upload-exp.sh
119 | exit $EXITCODE
120 |
--------------------------------------------------------------------------------
/syzscope/test/deploy_test.py:
--------------------------------------------------------------------------------
1 | from syzscope.modules import deploy, syzbotCrawler, crash
2 | import syzscope.interface.utilities as utilities
3 | from dateutil import parser as time_parser
4 |
5 | import os
6 |
7 | project_path = os.getcwd()
8 |
9 | def getMinimalDeployer(case_path, case):
10 | force = True
11 | d = deploy.Deployer(0, 1, debug=True)
12 | d.compiler = utilities.set_compiler_version(time_parser.parse(case["time"]), case["config"])
13 | d.project_path = project_path
14 | d.current_case_path = os.path.join(project_path, case_path)
15 | d.image_path = os.path.join(d.current_case_path, "img")
16 | d.kernel_path = os.path.join(d.current_case_path, "linux")
17 | d.crash_checker = crash.CrashChecker(
18 | d.project_path,
19 | d.current_case_path,
20 | 3777,
21 | d.logger,
22 | True,
23 | d.index,
24 | 1,
25 | compiler=d.compiler)
26 | d.syzkaller_path = os.path.join(d.current_case_path, "gopath/src/github.com/google/syzkaller")
27 | return d
28 |
29 | def getCrawler():
30 | crawler = syzbotCrawler.Crawler(debug=True)
31 | return crawler
32 |
33 | def replaceTemplate_test(pattern, pattern_type):
34 | d = getMinimalDeployer()
35 | d.replaceTemplate(pattern, pattern_type)
36 |
37 | def save_case_test(hash_val, exitcode, case, limitedMutation, impact_without_mutating, title=None, secondary_fuzzing=False):
38 | d = getMinimalDeployer("work/succeed/0655ccf", case)
39 | d.save_case(hash_val, exitcode, case, limitedMutation=limitedMutation, impact_without_mutating=impact_without_mutating, title=title, secondary_fuzzing=secondary_fuzzing)
40 |
41 | def copy_new_impact_test(case):
42 | d = getMinimalDeployer('work/completed/232223b')
43 | d.copy_new_impact(case, True, "KASAN: slab-out-of-bounds in hpet_alloc")
44 |
45 | if __name__ == '__main__':
46 | hash_val = "0655ccf655feea0f30dea3f1ba960af39bcd741a"
47 | exitcode = 0
48 | crawler = getCrawler()
49 | crawler.run_one_case(hash_val)
50 | case = crawler.cases.pop(hash_val)
51 | save_case_test(hash_val, 0, case, False, False, case['title'])
--------------------------------------------------------------------------------
/syzscope/test/interface/s2e_test.py:
--------------------------------------------------------------------------------
1 | from syzscope.interface.s2e import S2EInterface
2 |
3 | s2e_path = '/home/xzou017/projects/KOOBE-test/s2e'
4 | kernel_path = '/home/xzou017/projects/KOOBE-test/s2e/images/debian-9.2.1-x86_64-0e2adab6/guestfs/vmlinux'
5 | syz_path = '/home/xzou017/projects/SyzbotAnalyzer/tools/gopath/src/github.com/google/syzkaller'
6 | s2e_project_path = '/home/xzou017/projects/KOOBE-test/s2e/projects/2389bfc'
7 | def init_s2e_inst(s2e_path, kernel_path, syz_path):
8 | inst = S2EInterface(s2e_path, kernel_path, syz_path)
9 | return inst
10 |
11 | def getAvoidingPC_test(inst, func_list):
12 | if inst == None:
13 | return
14 | res = inst.getAvoidingPC(func_list)
15 | for func in res:
16 | print(func, res[func])
17 | return res
18 |
19 | def generateAvoidList_test(inst, avoid, s2e_project_path):
20 | inst.generateAvoidList(avoid, s2e_project_path)
21 |
22 | if __name__ == '__main__':
23 | inst = init_s2e_inst(s2e_path, kernel_path, syz_path)
24 | func_list = [
25 | 'refcount_dec_and_mutex_lock',
26 | 'mutex_unlock',
27 | '_raw_spin_lock_irqsave',
28 | '_raw_spin_unlock_irqrestore',
29 | 'kfree_call_rcu',
30 | 'kfree',
31 | 'mutex_lock',
32 | '_raw_spin_lock',
33 | '_raw_spin_trylock',
34 | 'kfree_skb',
35 | 'get_order',
36 | '_raw_read_lock',
37 | '_raw_spin_lock_bh',
38 | '_raw_spin_unlock_bh',
39 | 'rht_key_hashfn',
40 | ]
41 | res = getAvoidingPC_test(inst, func_list)
42 | generateAvoidList_test(inst, res, s2e_project_path)
43 |
--------------------------------------------------------------------------------
/syzscope/test/interface/staticAnalysis_test.py:
--------------------------------------------------------------------------------
1 | import shutil
2 | import os
3 | import syzscope.interface.static_analysis as static_analysis
4 | import logging
5 | import syzscope.interface.utilities as utilities
6 |
7 | from subprocess import PIPE, STDOUT, Popen
8 | from syzscope.test.deploy_test import getMinimalDeployer, getCrawler
9 |
10 | def compile_bc_extra_test(hash_val):
11 | d = getMinimalDeployer("work/incomplete/{}".format(hash_val[:7]))
12 | sa = static_analysis.StaticAnalysis(logging, d.project_path, 1, 'static-ori', d.current_case_path, "linux", 1)
13 | sa.compile_bc_extra()
14 | """
15 | link_cmd = '{}/tools/llvm/build/bin/llvm-link -o one.bc `find ./ -name "*.bc" ! -name "timeconst.bc"` && mv one.bc {}'.format(d.project_path, d.current_case_path)
16 | p = Popen(['/bin/bash','-c', link_cmd], cwd=d.kernel_path)
17 | exitcode = p.wait()
18 | if exitcode ==0:
19 | if os.path.exists(os.path.join(d.current_case_path,'one.bc')):
20 | os.remove(os.path.join(d.current_case_path,'one.bc'))
21 | """
22 |
23 | def KasanVulnChecker_test(hash_val):
24 | exitcode = 0
25 | crawler = getCrawler()
26 | crawler.run_one_case(hash_val)
27 | case = crawler.cases.pop(hash_val)
28 | d = getMinimalDeployer("work/completed/{}".format(hash_val[:7]))
29 | if 'use-after-free' in case['title'] or 'out-of-bounds' in case['title']:
30 | d.store_read = False
31 | valid_contexts = d.get_buggy_contexts(case)
32 | report = ""
33 | for context in valid_contexts:
34 | if context['type'] == utilities.URL:
35 | raw = utilities.request_get(context['report'])
36 | report = raw.text
37 | else:
38 | f = open(context['report'], 'r')
39 | raw = f.readlines()
40 | report = "".join(raw)
41 | sa = static_analysis.StaticAnalysis(logging, d.project_path, 1, d.current_case_path, "linux", d.max_compiling_kernel, max_compiling_kernel=1)
42 | vul_site, func_site, func = sa.KasanVulnChecker(report)
43 |
44 | def saveCallTrace_test(case):
45 | d = getMinimalDeployer("work/incomplete/341e1a2")
46 | sa = static_analysis.StaticAnalysis(logging, d.project_path, 1, d.current_case_path, "linux", d.max_compiling_kernel)
47 | res = utilities.request_get(case['report'])
48 | vul_site, func_site, func = sa.KasanVulnChecker(res.text)
49 | report_list = res.text.split('\n')
50 | trace = utilities.extrace_call_trace(report_list)
51 | sa.saveCallTrace2File(trace, vul_site)
52 |
53 | if __name__ == '__main__':
54 | compile_bc_extra_test('ab72104359c8066fb418e18d9227eb00b677360a')
--------------------------------------------------------------------------------
/syzscope/test/interface/vm_test.py:
--------------------------------------------------------------------------------
1 | from syzscope.interface.vm.state import VMState
2 | from syzscope.interface.vm import VM
3 |
4 | def init_vmstate():
5 | vm = VMState('/home/xzou017/projects/SyzbotAnalyzer/work/incomplete/00939fa/linux/')
6 | return vm
7 |
8 | def waitfor_kasan_report_test(vm):
9 | vm.connect(1235)
10 | vm.reach_vul_site(0xffffffff84769752)
11 |
12 | def read_mem_test(vm):
13 | vm.read_mem(0xffff88006bfbf760, 16)
14 |
15 | def read_regs_test(vm):
16 | vm.read_regs()
17 |
18 | def init_vm():
19 | vm = VM('/home/xzou017/projects/SyzbotAnalyzer/work/incomplete/00939fa/linux/', 2778,
20 | "/home/xzou017/projects/SyzbotAnalyzer/work/incomplete/00939fa/img", gdb_port=1235, hash_tag='1234')
21 | return vm
22 |
23 | if __name__ == '__main__':
24 | vm = init_vm()
25 | vm.run()
26 | waitfor_kasan_report_test(vm)
27 | read_mem_test(vm)
28 | read_regs_test(vm)
--------------------------------------------------------------------------------
/syzscope/test/interface/worker_test.py:
--------------------------------------------------------------------------------
1 | from syzscope.test.deploy_test import getMinimalDeployer
2 |
3 | def kill_proc_by_port_test(hash_val, ssh):
4 | d = getMinimalDeployer("work/incomplete/{}".format(hash_val[:7]))
5 | d.kill_proc_by_port(ssh)
6 |
7 | if __name__ == '__main__':
8 | kill_proc_by_port_test('00939facb41d022d8694274c584487d484ba7260',33777)
--------------------------------------------------------------------------------
/tutorial/Getting_started.md:
--------------------------------------------------------------------------------
1 | ### Getting started
2 |
3 | - [Run one case](#Run_one_case)
4 | - [Run multiple cases](#Run_multiple_cases)
5 | - [Filter cases by string match](#Filter_cases_by_string_match)
6 | - [Filter cases shared the same patches](#Filter_cases_shared_the_same_patches)
7 | - [Run cases from cache](#Run_cases_from_cache)
8 | - [Reproduce a bug](#Reproduce_a_bug)
9 | - [Run fuzzing](#Run_fuzzing)
10 | - [Run static taint analysis](#Run_static_taint_analysis)
11 | - [Run symbolic execution](#Run_symbolic_execution)
12 | - [Guide symbolic execution with static taint analysis](#Guide_symbolic_execution_with_static_taint_analysis)
13 | - [Run multiple cases at the same time](#Run_multiple_cases_at_the_same_time)
14 |
15 |
16 |
17 | ### Run one case
18 |
19 | ```bash
20 | python3 syzscope -i f99edaeec58ad40380ed5813d89e205861be2896 ...
21 | ```
22 |
23 |
24 |
25 |
26 |
27 | ### Run multiple cases
28 |
29 | ```bash
30 | python3 syzscope -i dataset ...
31 | ```
32 |
33 |
34 |
35 |
36 |
37 | ### Filter cases by string match
38 |
39 | If no value is given to `--url` or `-u`, SyzScope by default only picks up cases from the **Fixed** section on syzbot.
40 |
41 | The following command picks up all *WARNING* bugs and *INFO* bugs from syzbot's **Fixed** section:
42 |
43 | ```bash
44 | python3 syzscope -k="WARNING" -k="INFO:" ...
45 | ```
46 |
47 | The following command picks up all *WARNING* bugs and *INFO* bugs from syzbot's **Open** section:
48 |
49 | ```bash
50 | python3 syzscope -k="WARNING" -k="INFO:" -u https://syzkaller.appspot.com/upstream ...
51 | ```
52 |
53 |
54 |
55 |
56 |
57 | ### Filter cases shared the same patches
58 |
59 | Sometimes we want to deduplicate bugs. For example, the following command rules out all *WARNING* and *INFO* bugs that share the same patch as a UAF/OOB bug. Please note that `ignore_UAF_OOB` is a file that contains the hashes of all UAF/OOB bugs.
60 |
61 | ```bash
62 | python3 syzscope -k="WARNING" -k="INFO:" --ignore-batch ignore_UAF_OOB ...
63 | ```
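
The filtering above boils down to a set-membership check. A hedged Python sketch of the idea, assuming a one-hash-per-line ignore file (the hashes and the file format here are illustrative, not taken from the implementation):

```python
# Sketch of the --ignore-batch idea: drop every crawled case whose hash
# appears in the ignore file. One hash per line is an assumption.
ignore_text = "f99edaeec58ad40380ed5813d89e205861be2896\n"
ignored = set(ignore_text.split())

crawled = [
    "f99edaeec58ad40380ed5813d89e205861be2896",  # shares a patch with a UAF bug
    "a8d38d1b68ffc744c53bd9b9fc1dbd6c86b1afe2",
]
kept = [h for h in crawled if h not in ignored]
print(kept)
```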
64 |
65 |
66 |
67 |
68 |
69 | ### Run cases from cache
70 |
71 | Every time SyzScope runs new cases, it stores the case info in `cases.json`. By using `--use-cache`, we can import the case info directly from the cache without crawling syzbot again.
72 |
73 | ```bash
74 | python3 syzscope --use-cache ...
75 | ```
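
Conceptually, the cache is just a JSON file mapping each bug hash to its case info. A minimal Python sketch of what `--use-cache` amounts to; the schema shown here is an assumption (the `title` and `time` fields mirror what the project's test code reads from a case):

```python
import json
import os
import tempfile

# Hypothetical miniature of cases.json -- the real schema may differ.
sample = {
    "f99edaeec58ad40380ed5813d89e205861be2896": {
        "title": "WARNING: held lock freed!",
        "time": "2020/03/02 10:00",
    }
}
path = os.path.join(tempfile.mkdtemp(), "cases.json")
with open(path, "w") as f:
    json.dump(sample, f)

# What --use-cache conceptually does: load case info from disk
# instead of crawling syzbot again.
with open(path) as f:
    cases = json.load(f)

warnings = [h for h, c in cases.items() if "WARNING" in c["title"]]
print(len(warnings))
```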
76 |
77 |
78 |
79 |
80 |
81 | ### Reproduce a bug
82 |
83 | Fuzzing normally captures only the very first bug impact, but SyzScope can capture multiple impacts without panicking the kernel. To find out whether any high-risk impacts hide right behind a low-risk impact, we can simply reproduce a bug by using `--reproduce` or `-RP`.
84 |
85 | ```bash
86 | python3 syzscope -i f99edaeec58ad40380ed5813d89e205861be2896 -RP
87 | ```
88 |
89 | If reproducing a bug finds at least one high-risk impact behind the low-risk impact, SyzScope writes the bug hash into a confirmed-impact file (`ConfirmedAbnormallyMemWrite` or `ConfirmedDoubleFree`).
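
These confirmed-impact files can be consumed by your own tooling afterwards. A hedged sketch, assuming each file simply holds one bug hash per line (the format is not verified against the source):

```python
import os
import tempfile

# Simulate a work directory with one confirmed-impact file.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "ConfirmedAbnormallyMemWrite"), "w") as f:
    f.write("f99edaeec58ad40380ed5813d89e205861be2896\n")

def confirmed_impacts(workdir, hash_val):
    """Return which confirmed-impact files mention this bug hash."""
    found = []
    for name in ("ConfirmedAbnormallyMemWrite", "ConfirmedDoubleFree"):
        path = os.path.join(workdir, name)
        if os.path.exists(path) and hash_val in open(path).read().split():
            found.append(name)
    return found

print(confirmed_impacts(workdir, "f99edaeec58ad40380ed5813d89e205861be2896"))
```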
90 |
91 |
92 |
93 |
94 |
95 | ### Run fuzzing
96 |
97 | To apply fuzzing on one or more cases, use `--kernel-fuzzing` or `-KF`. We can also specify the fuzzing timeout with `--timeout-kernel-fuzzing`.
98 |
99 | The following command applies fuzzing to all *WARNING* and *INFO* bugs from syzbot's **Fixed** section, with a fuzzing time of **3 hours**. See more details about fuzzing in the [fuzzing](./fuzzing.md) tutorial.
100 |
101 | ```bash
102 | python3 syzscope -k="WARNING" -k="INFO:" -RP -KF --timeout-kernel-fuzzing 3
103 | ```
104 |
105 |
106 |
107 |
108 |
109 | ### Run static taint analysis
110 |
111 | To apply static taint analysis on one or more cases, use `--static-analysis` or `-SA`. We can also specify the timeout with `--timeout-static-analysis`.
112 |
113 | The following command applies static taint analysis to all *WARNING* and *INFO* bugs from syzbot's **Fixed** section, with a time budget of **3600 seconds (1 hour)**. See more details in the [static taint analysis](./static_taint_analysis.md) tutorial. Please note that static taint analysis relies on UAF/OOB contexts; if we don't run fuzzing to explore UAF/OOB contexts for non-KASAN bugs, static analysis will fail.
114 |
115 | ```bash
116 | python3 syzscope -k="WARNING" -k="INFO:" -RP -KF --timeout-kernel-fuzzing 3 -SA --timeout-static-analysis 3600
117 | ```
118 |
119 |
120 |
121 |
122 |
123 | ### Run symbolic execution
124 |
125 | To apply symbolic execution on one or more cases, use `--symbolic-execution` or `-SE` to enable it. We can also specify the timeout with `--timeout-symbolic-execution`.
126 |
127 | The following command applies symbolic execution to all *WARNING* and *INFO* bugs from syzbot's **Fixed** section, with a time budget of **14400 seconds (4 hours)**. See more details in the [symbolic execution](./sym_exec.md) tutorial. Please note that symbolic execution relies on UAF/OOB contexts; if we don't run fuzzing to explore UAF/OOB contexts for non-KASAN bugs, symbolic execution will fail.
128 |
129 | ```bash
130 | python3 syzscope -k="WARNING" -k="INFO:" -RP -KF --timeout-kernel-fuzzing 3 -SE --timeout-symbolic-execution 14400
131 | ```
132 |
133 | Due to some internal bugs in the Z3 solver, symbolic execution may be interrupted and leave QEMU frozen. This blocks further cases, since the frozen QEMU occupies the ports for both `ssh` and `gdb`.
134 |
135 | With `--be-bully`, SyzScope terminates a frozen QEMU as soon as it detects that the instance is unused.
136 |
137 | ```bash
138 | python3 syzscope -k="WARNING" -k="INFO:" -RP -KF --timeout-kernel-fuzzing 3 -SE --timeout-symbolic-execution 14400 --be-bully
139 | ```
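
The underlying problem is simply that the forwarded ports stay occupied. A tiny Python sketch in that spirit, showing how "is this port still taken?" can be checked; the listening socket below just stands in for a stuck QEMU holding a port:

```python
import socket

def port_in_use(port):
    """Return True if something is listening on this local port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex(("127.0.0.1", port)) == 0

# Simulate a frozen QEMU: bind an arbitrary free port and keep it open.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
busy = port_in_use(port)
srv.close()
print(busy)
```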
140 |
141 |
142 |
143 |
144 |
145 | ### Guide symbolic execution with static taint analysis
146 |
147 | Using static taint analysis to guide symbolic execution is useful for large-scale experiments. To guide symbolic execution, enable static taint analysis and add `--guided`.
148 |
149 | ```bash
150 | python3 syzscope -k="WARNING" -k="INFO:" -RP -KF --timeout-kernel-fuzzing 3 -SA --timeout-static-analysis 3600 -SE --timeout-symbolic-execution 14400 --guided --be-bully
151 | ```
152 |
153 |
154 |
155 |
156 |
157 | ### Run multiple cases at the same time
158 |
159 | SyzScope supports concurrent execution. To run several cases at the same time, provide `--parallel-max` or `-pm`. For example, the following command runs up to **8 cases** at the same time.
160 |
161 | ```bash
162 | python3 syzscope -i dataset -KF -SA -SE -pm 8
163 | ```
164 |
165 |
166 |
167 | See more usage of SyzScope with `python3 syzscope -h`.
--------------------------------------------------------------------------------
/tutorial/common_issues.md:
--------------------------------------------------------------------------------
1 | # Common issues
2 |
3 | 1. [Fail to compile kernel](#fail_to_compile_kernel)
4 |
5 | 1. [No pahole or pahole version is too old](#pahole_issues)
6 |
7 | 2. [Fail to run kernel fuzzing](#fail_to_run_kernel_fuzzing)
8 |
9 | 1. [Fail to parse testcase](#fail_parse_testcase)
10 |
11 | 3. [Fail to run static taint analysis](#fail_to_run_static_analysis)
12 |
13 | 1. [Error occur at saveCallTrace2File](#error_save_calltrace)
14 | 2. [Error occur during taint analysis](#error_static_analysis)
15 |
16 | 4. [Fail to run symbolic execution](#fail_to_run_symbolic_execution)
17 |
18 | 1. [Cannot_trigger_vulnerability](#cannot_trigger_bug)
19 | 2. [Error occur at upload exp](#error_upload_exp)
20 |
21 |
22 |
23 | ### Fail to compile kernel
24 |
25 |
26 |
27 | 1. ```
28 | BTF: .tmp_vmlinux.btf: pahole (pahole) is not available
29 | Failed to generate BTF for vmlinux
30 | Try to disable CONFIG_DEBUG_INFO_BTF
31 | make: *** [Makefile:1106: vmlinux] Error 1
32 |
33 | or
34 |
35 | BTF: .tmp_vmlinux.btf: pahole version v1.9 is too old, need at least v1.16\
36 | Failed to generate BTF for vmlinux\
37 | Try to disable CONFIG_DEBUG_INFO_BTF
38 | make: *** [Makefile:1162: vmlinux] Error 1
39 | ```
40 |
41 |
42 |
43 | Install a new version of dwarves
44 |
45 | ```bash
46 | wget http://archive.ubuntu.com/ubuntu/pool/universe/d/dwarves-dfsg/dwarves_1.17-1_amd64.deb
47 | dpkg -i dwarves_1.17-1_amd64.deb
48 | ```
49 |
50 |
51 |
52 | ### Fail to run kernel fuzzing
53 |
54 |
55 |
56 | 1. `Fail to parse testcase`
57 |
58 | When the kernel fuzzer's exit code is 3, it means some syscall template does not exist in the current syzkaller. Since we build our kernel fuzzer on top of a particular version of syzkaller, the porting process is hard to automate. Therefore we use one particular version, with all our modifications, for all kernel fuzzing.
59 |
60 | However, syzkaller's templates evolve over time and they are not decoupled from the main syzkaller component. When SyzScope finds a syscall that our current kernel fuzzer doesn't contain, SyzScope tries to port the missing syscall into our templates and returns exit code 3.
61 |
62 | Not all cases can be ported successfully. You can check the fuzzing log in the main case log (the `log` file under the case folder); you might see an error like `Fail to parse testcase: unknown syscall syz_open_dev$binder\n'`. If SyzScope fails to correct the templates automatically, you can manually add the corresponding syscall from `poc/gopath/src/github.com/google/syzkaller/sys/linux/` (the copy with correct templates) to `gopath/src/github.com/google/syzkaller/sys/linux/` (our fuzzer, which is missing some syscalls in its templates), then compile the new templates with `make generate && make TARGETARCH=amd64 TARGETVMARCH=amd64`.
63 |
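Before copying templates by hand, it can help to see which template files differ between the two syzkaller copies. A hedged Python sketch of that comparison; the temp-dir layout and file names below are made up for the demo and only mirror the directory names mentioned above:

```python
import os
import tempfile

# Fake layout standing in for the two sys/linux template directories.
root = tempfile.mkdtemp()
poc_sys = os.path.join(root, "poc_syzkaller", "sys", "linux")
fuzz_sys = os.path.join(root, "fuzzer_syzkaller", "sys", "linux")
os.makedirs(poc_sys)
os.makedirs(fuzz_sys)
open(os.path.join(poc_sys, "dev_binder.txt"), "w").close()
open(os.path.join(poc_sys, "socket.txt"), "w").close()
open(os.path.join(fuzz_sys, "socket.txt"), "w").close()

# Template files present in the reproducer's copy but not the fuzzer's.
missing = sorted(set(os.listdir(poc_sys)) - set(os.listdir(fuzz_sys)))
print(missing)
```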
64 |
65 |
66 | ### Fail to run static taint analysis
67 |
68 |
69 |
70 | 1. `Error occur at saveCallTrace2File`
71 |
72 | `CallTrace` is a necessary component of static taint analysis: we need the call trace to simulate the control flow. To determine the scope of the static taint analysis, debug information is provided during the analysis, as well as each function's start line and end line.
73 |
74 | Call trace generation is automated by extracting the trace from the KASAN report and string-matching the function names, but it sometimes fails due to mismatched coding styles. In that case we need to inspect the source code manually.
75 |
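A rough sketch of the extraction step described above; the report snippet and the regex are illustrative, not SyzScope's actual parser:

```python
import re

# Trimmed-down stand-in for a KASAN report.
report = """\
Call Trace:
 hpet_alloc+0x42/0x4a0
 hpet_open+0x1bc/0x2e0
 chrdev_open+0x219/0x5c0
"""

in_trace = False
funcs = []
for line in report.splitlines():
    if line.startswith("Call Trace:"):
        in_trace = True
        continue
    if in_trace:
        # Match "func+0xOFF/0xSIZE" frames and keep the function name.
        m = re.match(r"\s*([A-Za-z_][\w.]*)\+0x[0-9a-f]+/0x[0-9a-f]+", line)
        if m:
            funcs.append(m.group(1))
print(funcs)
```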
76 | If SyzScope fails to determine the start and end lines of a function, you will find `Can not find the boundaries of calltrace function xxx` in the case log (the `log` file under the case folder); look up the start and end lines of this function and manually add them to the `CallTrace`.
77 |
78 | Before rerunning the static taint analysis, remember to remove the `FINISH_STATIC_ANALYSIS` stamp.
79 |
80 | ```bash
81 | rm work/completed/xxx/.stamp/FINISH_STATIC_ANALYSIS && python3 syzscope -i xxx -SA ...
82 | ```
83 |
84 |
85 |
86 | 2. `Error occur during taint analysis`
87 |
88 | Most errors during static analysis are due to implementation limitations and corner cases in the Linux kernel. You can check the static analysis log in the main case log file (`log` under the case folder). If you see `Stack dump:` in the log, the static taint analysis was interrupted by an internal bug; you might want to skip static taint analysis for this case.
89 |
90 |
91 |
92 | ### Fail to run symbolic execution
93 |
94 |
95 |
96 | 1. `Can not trigger vulnerability. Abaondoned`
97 |
98 | Race conditions tend to be the top reason. Rerun symbolic execution to increase the chance of triggering the bug. Remember to remove the `FINISH_SYM_EXEC` stamp.
99 |
100 | ```bash
101 | rm work/completed/xxx/.stamp/FINISH_SYM_EXEC && python3 syzscope -i xxx -SE ...
102 | ```
103 |
104 | Or force SyzScope to rerun even if the case is finished:
105 |
106 | ```bash
107 | python3 syzscope -i xxx -SE --force ...
108 | ```
109 |
110 |
111 |
112 | 2. `Error occur at upload exp`
113 |
114 | Uploading the exp (the PoC binaries) is essential for bug reproduction. This step is powered by `scripts/upload-exp.sh`.
115 |
116 | `upload-exp.sh` builds the corresponding `syzkaller`, copies `syz-execprog` and `syz-executor` to the `poc` folder, and then uploads the two binaries to the QEMU instance.
117 |
118 | There are multiple possible reasons for failing to upload the exp, but the two most common ones are that the two binaries were not copied to the `poc` folder, or that QEMU failed to launch. The detailed log of `upload-exp.sh` is in `vm.log` under the `sym-xxx` folder.
119 |
120 | When `Error occur at upload exp` happens, check whether both `syz-execprog` and `syz-executor` exist in the `poc` folder and whether anything went wrong in `sym-xxx/vm.log`.
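A quick manual check could look like this (a sketch; `work/completed/xxx` is a hypothetical case path and `sym-*` globs over the symbolic execution folders, adjust both to your setup):

```bash
CASE=work/completed/xxx   # hypothetical case folder, replace xxx with the case hash
for b in syz-execprog syz-executor; do
    if [ -f "$CASE/poc/$b" ]; then
        echo "$b present"
    else
        echo "$b missing"
    fi
done
# Show the tail of the upload/QEMU log if it exists
for log in "$CASE"/sym-*/vm.log; do
    [ -f "$log" ] && tail -n 50 "$log" || true
done
```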
121 |
122 |
--------------------------------------------------------------------------------
/tutorial/examples/WARNING_held_lock_freed.md:
--------------------------------------------------------------------------------
1 | ### **WARNING: held lock freed!** (CVE-2018-25015)
2 |
3 | ------
4 |
5 | Let's try an existing bug from syzbot: [WARNING: held lock freed!](https://syzkaller.appspot.com/bug?id=a8d38d1b68ffc744c53bd9b9fc1dbd6c86b1afe2). We can find a UAF/OOB context behind the first WARNING, so we do not need fuzzing at all!
6 |
7 | ```bash
8 | python3 syzscope -i a8d38d1b68ffc744c53bd9b9fc1dbd6c86b1afe2 -RP -SE --timeout-symbolic-execution 3600
9 | ```
10 |
11 | We just reproduce this bug and try to catch the UAF/OOB context with `-RP` (or `--reproduce`). We also need symbolic execution to find more critical impacts, so we pass `-SE` (or `--symbolic-execution`). Finally, we set a one-hour (3600 seconds) timeout for the symbolic execution with `--timeout-symbolic-execution 3600`.
12 |
13 |
14 |
15 | The `symbolic_execution.log` shows we found 2 OOB/UAF writes and 1 control flow hijack.
16 |
17 | Let's take a look.
18 |
19 | ```
20 | 2021-10-01 06:28:45,425 Thread 0: *******************primitives*******************
21 |
22 | 2021-10-01 06:28:45,425 Thread 0: Running for 0:01:50.495701
23 | 2021-10-01 06:28:45,425 Thread 0: Total 3 primitives found during symbolic execution
24 |
25 | 2021-10-01 06:28:45,425 Thread 0: The number of OOB/UAF write is 2
26 |
27 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary address write is 0
28 |
29 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained address write is 0
30 |
31 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary value write is 0
32 |
33 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained value write is 0
34 |
35 | 2021-10-01 06:28:45,425 Thread 0: The number of control flow hijacking is 1
36 |
37 | 2021-10-01 06:28:45,425 Thread 0: The number of invalid free is 0
38 |
39 | 2021-10-01 06:28:45,425 Thread 0: ************************************************
40 | ```
41 |
42 | The control flow hijacking impact is located in the `primitives` folder: `FPD-release_sock-0xffffffff82cb4528-2`
43 |
44 | The call trace shows that, after exiting `kasan_report`, execution returns all the way to `release_sock` and triggers a tainted function pointer dereference at `net/core/sock.c:2785`:
45 |
46 | ```
47 | 0xffffffff82cb4528 : call r15
48 | pwndbg>
49 |
50 | |__asan_load4 mm/kasan/kasan.c:692
51 | |do_raw_spin_lock kernel/locking/spinlock_debug.c:83
52 | |_raw_spin_lock_bh kernel/locking/spinlock.c:169
53 | |release_sock net/core/sock.c:2778
54 | |None net/core/sock.c:2785
55 | ```
56 |
57 |
58 |
59 | With no further human intervention, we found the control flow hijack automatically!
60 |
61 | SyzScope reported this high-risk bug and a CVE was assigned; read the [detailed bug report](https://sites.google.com/view/syzscope/warning-held-lock-freed).
--------------------------------------------------------------------------------
/tutorial/folder_structure.md:
--------------------------------------------------------------------------------
1 | ### Folder structure
2 |
3 | ------
4 |
5 | The SyzScope work folder has the following structure:
6 |
7 | ```
8 | ├── cases.json File. Cache for testing cases
9 | ├── AbnormallyMemRead File. Bug with memory read
10 | ├── AbnormallyMemWrite File. Bug with memory write
11 | ├── DoubleFree File. Bug with double free
12 | ├── ConfirmedAbnormallyMemRead File. Bug with memory read(Patch eliminated)
13 | ├── ConfirmedAbnormallyMemWrite File. Bug with memory write(Patch eliminated)
14 | ├── ConfirmedDoubleFree File. Bug with double free(Patch eliminated)
15 | ├── incomplete Folder. Store ongoing cases
16 | ├── completed Folder. Store low-risk completed cases
17 | ├── succeed Folder. Store high-risk completed cases
18 | ├── xxx Folder. Case hash
19 | ├── crashes Folder. Crashes from fuzzing
20 | ├── ...
21 | └── xxx Folder. Crash detail, syzkaller style
22 | ├── linux Symbolic link. To Linux kernel
23 | ├── poc Folder. For PoC testing
24 | ├── crash_log-ori File. Contain high-risk impact
25 | ├── launch_vm.sh File. Use for launch qemu
26 | ├── log File. Reproduce PoC log
27 | ├── qemu-xxx-ori.log File. Qemu running log
28 | ├── run-script.sh File. Use for reproduce crash
29 | ├── run.sh File. Use for running PoC
30 | ├── syz-execprog Binary. Syzkaller component
31 | ├── syz-executor Binary. Syzkaller component
32 | ├── testcase File. Syzkaller style test case
33 | └── gopath Folder. Contain syzkaller
34 | ├── output Folder. Confirmed crashes
35 | ├── xxx Folder. Crash hash
36 | ├── description File. Crash description
37 | ├── repro.log File. Crash raw log
38 | ├── repro.report File. Crash log with debug info
39 | ├── repro.prog File. Crash reproducer(Syzkaller style)
40 | ├── repro.cprog File. Crash reproducer(C style)
41 | ├── repro.stats File. Crash reproduce log
42 | └── repro.command File. Command for reproducing
43 | └── ori Folder. Original crash
44 | ├── gopath Folder. Contain modified syzkaller
45 | ├── img Symbolic link. To image and key.
46 | ├── compiler Symbolic link. To compiler
47 | ├── sym-xxx Folder. Symbolic execution results
48 | ├── gdb.log-0 File. GDB log
49 | ├── mon.log-0 File. Qemu monitor log
50 | ├── vm.log-0 File. Qemu log
51 | ├── symbolic_execution.log-0 File. Symbolic execution log
52 | ├── launch_vm.sh File. For launching qemu
53 | └── primitives Folder. Contain high-risk impacts
54 | ├── ...
55 | └── FPD-xxx-14 File. A func-ptr-def impact
56 | ├── static-xxx Folder. Static analysis results
57 | ├── CallTrace File. Calltrace for analysis
58 | └── paths Folder. Paths for guidance
59 | ├── path2MemWrite-2-0 File. Path to a memory write
60 | └── TerminatingFunc File. Termination func for sym exec
61 | ├── config File. Config for kernel compiling
62 | ├── log File. Case log
63 | ├── one.bc File. Kernel bc for static analysis
64 | ├── clang-make.log File. Log for bc compiling
65 | └── error Folder. Error cases
66 | ├── xxx
67 | ├── ...
68 | └── make.log-xxx File. Make log if failed
69 | ```
70 |
71 | All running cases stay in the `incomplete` folder. If a case is successfully turned into high-risk, we move it to the `succeed` folder; otherwise we move it to the `completed` folder. If a case encounters any sort of error (e.g., a compiling error), we move it to `error`.
72 |
73 | If fuzzing finds new impacts, the case hash is written into the `AbnormallyMemXXX` or `DoubleFree` file. Then we apply the patch and rerun all new impacts; the ones that fail to reproduce on the patched kernel are written into the `ConfirmedAbnormallyMemXXX` or `ConfirmedDoubleFree` file.
74 |
75 | In the `error` folder, if a case encountered a compiling error, a `make.log-xxx` file contains the full compiling log.
76 |
--------------------------------------------------------------------------------
/tutorial/fuzzing.md:
--------------------------------------------------------------------------------
1 | ### Fuzzing
2 |
3 | ------
4 |
5 | We built our kernel fuzzer on top of syzkaller. It lives in the `gopath` folder under each case folder. Note that there are two `gopath` folders: one under the main case folder and the other under the `poc` folder. The one in the main case folder contains the modified syzkaller and is used for kernel fuzzing. The one in the `poc` folder is the version that triggered the original bug on syzbot; we use it to compile the matching versions of `syz-execprog` and `syz-executor` for bug reproduction.
6 |
7 | ```
8 | ├── gopath Folder. Contain modified syzkaller
9 | ├── poc Folder. For PoC testing
10 | ...
11 | └── gopath Folder. Contain syzkaller
12 | ```
13 |
14 |
15 |
16 | There are several important files under the `work` folder:
17 |
18 | ```
19 | ├── AbnormallyMemRead File. Bug with memory read
20 | ├── AbnormallyMemWrite File. Bug with memory write
21 | ├── DoubleFree File. Bug with double free
22 | ├── ConfirmedAbnormallyMemRead File. Bug with memory read(Patch eliminated)
23 | ├── ConfirmedAbnormallyMemWrite File. Bug with memory write(Patch eliminated)
24 | ├── ConfirmedDoubleFree File. Bug with double free(Patch eliminated)
25 | ```
26 |
27 | When fuzzing finds any memory read, memory write, or double free bugs, it writes the case hash into `AbnormallyMemRead`, `AbnormallyMemWrite`, or `DoubleFree` respectively. Beware that these bugs have not yet been verified against the patches.
28 | After fuzzing is over, we apply the corresponding patches to the target kernel and try to reproduce the bugs we found again. See more details about bug reproduction in [PoC reproduce](./poc_repro.md).
29 | The ones that fail to trigger after the patches are applied are considered confirmed new contexts. We then write their case hashes into `ConfirmedAbnormallyMemRead`, `ConfirmedAbnormallyMemWrite`, and `ConfirmedDoubleFree`.
30 |
31 | ```
32 | ├── output Folder. Confirmed crashes
33 | ├── xxx Folder. Crash hash
34 | ├── description File. Crash description
35 | ├── repro.log File. Crash raw log
36 | ├── repro.report File. Crash log with debug info
37 | ├── repro.prog File. Crash reproducer(Syzkaller style)
38 | ├── repro.cprog File. Crash reproducer(C style)
39 | ├── repro.stats File. Crash reproduce log
40 | └── repro.command File. Command for reproducing
41 | └── ori Folder. Original crash
42 | ```
43 | After confirming all new contexts, we move them to the `output` folder of the corresponding case. Static analysis and symbolic execution then pick up each context from `output`.
44 |
--------------------------------------------------------------------------------
/tutorial/inspect_results.md:
--------------------------------------------------------------------------------
1 | ### Inspect results
2 |
3 |
4 |
5 | 
6 |
7 |
8 |
9 | After running SyzScope for the first time, it generates a main `work` folder under the project folder. See the detailed structure of the main `work` folder in [Workzone Structure](./workzone_structure.md).
10 |
11 |
12 |
13 | The workflow of SyzScope is shown above. Each component has its own output folder:
14 |
15 | - Fuzzing's output folder is `output`
16 |
17 | - Static analysis's output folder is `static-xxx`
18 |
19 | - Symbolic execution's output folder is `sym-xxx`
20 |
21 |
22 |
23 | If a bug has been turned into high-risk by either fuzzing or symbolic execution, it will be moved to the `succeed` folder under the main `work` folder.
24 |
25 |
26 |
27 | Normally, we only need to check `output` and `sym-xxx` under the `succeed` folder. By inspecting `repro.report`, we can tell whether a bug was turned into high-risk through fuzzing. `sym-xxx` shows the results from symbolic execution; the final report is printed at the bottom of the `symbolic_execution.log` file.
28 |
29 | ```
30 | 2021-10-01 06:28:45,425 Thread 0: *******************primitives*******************
31 |
32 | 2021-10-01 06:28:45,425 Thread 0: Running for 0:01:50.495701
33 | 2021-10-01 06:28:45,425 Thread 0: Total 3 primitives found during symbolic execution
34 |
35 | 2021-10-01 06:28:45,425 Thread 0: The number of OOB/UAF write is 2
36 |
37 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary address write is 0
38 |
39 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained address write is 0
40 |
41 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary value write is 0
42 |
43 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained value write is 0
44 |
45 | 2021-10-01 06:28:45,425 Thread 0: The number of control flow hijacking is 1
46 |
47 | 2021-10-01 06:28:45,425 Thread 0: The number of invalid free is 0
48 |
49 | 2021-10-01 06:28:45,425 Thread 0: ************************************************
50 | ```
51 |
52 |
--------------------------------------------------------------------------------
/tutorial/poc_repro.md:
--------------------------------------------------------------------------------
1 | ### PoC reproduce
2 |
3 | ------
4 |
5 | ```
6 | ├── poc Folder. For PoC testing
7 | ├── crash_log-ori File. Contain high-risk impact
8 | ├── launch_vm.sh File. Use for launch qemu
9 | ├── log File. Reproduce PoC log
10 | ├── qemu-xxx-ori.log File. Qemu running log
11 | ├── run-script.sh File. Use for reproduce crash
12 | ├── run.sh File. Use for running PoC
13 | ├── syz-execprog Binary. Syzkaller component
14 | ├── syz-executor Binary. Syzkaller component
15 | ├── testcase File. Syzkaller style test case
16 | └── gopath Folder. Contain syzkaller
17 | ```
18 |
19 |
20 |
21 | The `poc` folder contains all the info about bug reproduction. First, the corresponding version of syzkaller is cloned into `gopath`; this is the version that triggered the original bug on syzbot. Two important components, `syz-execprog` and `syz-executor`, are then copied to the `poc` folder.
22 |
23 | Launch QEMU using `launch_vm.sh`, then run `run.sh` to trigger the bug. The full QEMU log is written to `qemu-xxx-ori.log`. If a desired impact is triggered, it is transferred to `crash_log-ori`.
24 |
25 |
--------------------------------------------------------------------------------
/tutorial/resource/SyzScope-final.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seclab-ucr/SyzScope/b1a6e20783ba8c92dd33d508e469bc24eaacaab6/tutorial/resource/SyzScope-final.pdf
--------------------------------------------------------------------------------
/tutorial/resource/workflow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/seclab-ucr/SyzScope/b1a6e20783ba8c92dd33d508e469bc24eaacaab6/tutorial/resource/workflow.png
--------------------------------------------------------------------------------
/tutorial/static_taint_analysis.md:
--------------------------------------------------------------------------------
1 | ### Static Taint Analysis
2 |
3 | ------
4 |
5 | Static taint analysis results are preserved in the `static-xxx` folder.
6 |
7 | ```
8 | ├── static-xxx Folder. Static analysis results
9 | ├── CallTrace File. Calltrace for analysis
10 | └── paths Folder. Paths for guidance
11 | ├── path2FuncPtrDef-40-52 File. Path to a func-ptr dereference
12 | └── TerminatingFunc File. Termination func for sym exec
13 | ```
14 |
15 | `CallTrace` is essential for static taint analysis: the analyzer relies on the call trace to determine the analysis order. The following block shows a sample `CallTrace`; from top to bottom, the order goes from callee to caller. Each line includes:
16 |
17 | `function_name source_code_line func_start_line func_end_line`
18 |
19 | ```
20 | task_work_run kernel/task_work.c:144 108 146
21 | exit_to_user_mode_prepare arch/x86/include/asm/current.h:15 13 16
22 | syscall_exit_to_user_mode_prepare kernel/entry/common.c:216 213 234
23 | syscall_exit_to_user_mode kernel/entry/common.c:239 213 244
24 | ```
25 |
26 | We locate the vulnerable object by its `size`, `offset`, and `debug info`. The vulnerable object is usually used in the top function of the call trace. As the analysis continues, it returns from the top function back to its caller. The analysis ends when neither the arguments nor any pointer in the current function carries tainted data; that function is then picked as the terminating function for symbolic execution.
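The selection of the terminating function can be sketched as follows (a minimal illustration of the idea described above, not SyzScope's actual implementation; the trace tuples are hypothetical):

```python
# Walk the call trace from callee (top) to caller (bottom) and stop at
# the first function where no argument or pointer carries tainted data.
def pick_terminating_func(call_trace, tainted):
    """call_trace: list of (func_name, args, pointers), callee first.
    tainted: set of values known to carry tainted data."""
    for name, args, pointers in call_trace:
        if not (set(args) & tainted) and not (set(pointers) & tainted):
            return name  # nothing tainted flows further up: terminate here
    return call_trace[-1][0]  # fall back to the outermost caller
```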
27 |
28 | Each path's file name encodes the impact type (`FuncPtrDef`), the number of top-level basic blocks (`40`), and the order of discovery (`52`). Let's jump into the details.
29 |
30 | ```
31 | * net/sctp/associola.c:340 net/sctp/associola.c:353 net/sctp/associola.c:341
32 | net/sctp/associola.c:369 net/sctp/associola.c:373 net/sctp/associola.c:370
33 | net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
34 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
35 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
36 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
37 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
38 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
39 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
40 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
41 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
42 | * net/sctp/associola.c:381 net/sctp/associola.c:381 net/sctp/associola.c:382
43 | * net/sctp/associola.c:381 net/sctp/associola.c:386 net/sctp/associola.c:382
44 | * net/sctp/associola.c:392 net/sctp/associola.c:399 net/sctp/associola.c:393
45 | net/sctp/associola.c:1654 net/sctp/associola.c:1656 net/sctp/associola.c:1659
46 | net/sctp/sm_make_chunk.c:1448 net/sctp/sm_make_chunk.c:1449 net/sctp/sm_make_chunk.c:1451
47 | net/sctp/chunk.c:145 net/sctp/chunk.c:146 net/sctp/chunk.c:147
48 | net/sctp/chunk.c:99 net/sctp/chunk.c:96 net/sctp/chunk.c:133
49 | * net/sctp/chunk.c:103 net/sctp/chunk.c:116 net/sctp/chunk.c:104
50 | net/sctp/chunk.c:116 net/sctp/chunk.c:118 net/sctp/chunk.c:129
51 | net/sctp/chunk.c:125 net/sctp/chunk.c:126 net/sctp/chunk.c:129
52 | * net/sctp/ulpqueue.c:204 net/sctp/ulpqueue.c:209 net/sctp/ulpqueue.c:205
53 | net/sctp/ulpqueue.c:209 net/sctp/ulpqueue.c:214 net/sctp/ulpqueue.c:210
54 | net/sctp/ulpqueue.c:214 ./include/linux/compiler.h:178 net/sctp/ulpqueue.c:275
55 | net/sctp/ulpqueue.c:222 net/sctp/ulpqueue.c:223 net/sctp/ulpqueue.c:225
56 | net/sctp/ulpqueue.c:255 net/sctp/ulpqueue.c:256 net/sctp/ulpqueue.c:258
57 | net/sctp/ulpqueue.c:264 net/sctp/ulpqueue.c:267 net/sctp/ulpqueue.c:265
58 | net/sctp/ulpqueue.c:267 net/sctp/ulpqueue.c:267 net/sctp/ulpqueue.c:281
59 | net/sctp/ulpqueue.c:267 net/sctp/ulpqueue.c:268 net/sctp/ulpqueue.c:281
60 | * net/sctp/ulpqueue.c:268 net/sctp/ulpqueue.c:270 net/sctp/ulpqueue.c:269
61 | net/sctp/ulpqueue.c:270
62 | $
63 | call void %88(%struct.sock.1009753* %6) #33, !dbg !13874779
64 | ```
65 |
66 | Each line contains *the condition, the correct branch, the wrong branch.*
67 |
68 | For example, take `net/sctp/associola.c:369 net/sctp/associola.c:373 net/sctp/associola.c:370`. At the condition `net/sctp/associola.c:369`, we should take the branch at `net/sctp/associola.c:373` and kill the state at the branch `net/sctp/associola.c:370`.
69 |
70 | Some lines start with a `*`, which means both branches are feasible.
71 |
72 | At the end, one line carries only a single piece of debug info, `net/sctp/ulpqueue.c:270`. It represents the target impact site, followed by its LLVM instruction.
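A guidance-path line like the ones above could be parsed with a few lines of code (illustrative only, not SyzScope's parser):

```python
# Parse one guidance-path line: "condition correct_branch wrong_branch",
# optionally prefixed with "*" when both branches are feasible.
def parse_path_line(line):
    both_feasible = line.startswith("*")
    parts = line.lstrip("* ").split()
    if len(parts) == 3:
        cond, correct, wrong = parts
        return {"cond": cond, "take": correct,
                # With "*", neither branch gets killed.
                "kill": None if both_feasible else wrong}
    # The final line holds only the target impact site.
    return {"impact_site": parts[0]}
```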
--------------------------------------------------------------------------------
/tutorial/sym_exec.md:
--------------------------------------------------------------------------------
1 | ### Symbolic execution
2 |
3 | ------
4 |
5 | ```
6 | ├── sym-xxx Folder. Symbolic execution results
7 | ├── gdb.log-0 File. GDB log
8 | ├── mon.log-0 File. Qemu monitor log
9 | ├── vm.log-0 File. Qemu log
10 | ├── symbolic_execution.log-0 File. Symbolic execution log
11 | ├── launch_vm.sh File. For launching qemu
12 | └── primitives Folder. Contain high-risk impacts
13 | ├── ...
14 | └── FPD-xxx-14 File. A func-ptr-def impact
15 | ```
16 |
17 | The `sym-xxx` folder contains all the info from symbolic execution. `vm.log` records the full stdout from QEMU as well as the log from `upload-exp.sh` when uploading the PoC; it's useful for inspecting the bug-triggering status.
18 |
19 | `symbolic_execution.log` contains the final results of symbolic execution. Scroll down to the bottom and you'll see how many high-risk impacts were found for each type.
20 |
21 | ```
22 | 2021-10-01 06:28:45,425 Thread 0: *******************primitives*******************
23 |
24 | 2021-10-01 06:28:45,425 Thread 0: Running for 0:01:50.495701
25 | 2021-10-01 06:28:45,425 Thread 0: Total 3 primitives found during symbolic execution
26 |
27 | 2021-10-01 06:28:45,425 Thread 0: The number of OOB/UAF write is 2
28 |
29 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary address write is 0
30 |
31 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained address write is 0
32 |
33 | 2021-10-01 06:28:45,425 Thread 0: The number of arbitrary value write is 0
34 |
35 | 2021-10-01 06:28:45,425 Thread 0: The number of constrained value write is 0
36 |
37 | 2021-10-01 06:28:45,425 Thread 0: The number of control flow hijacking is 1
38 |
39 | 2021-10-01 06:28:45,425 Thread 0: The number of invalid free is 0
40 |
41 | 2021-10-01 06:28:45,425 Thread 0: ************************************************
42 | ```
43 |
44 |
45 |
46 | The `primitives` folder contains the detailed results for each high-risk impact. There are 7 high-risk impact types in total: `OUW` (OOB/UAF write), `AAW` (arbitrary address write), `CAW` (constrained address write), `AVW` (arbitrary value write), `CVW` (constrained value write), `FPD` (function pointer dereference), and `IF` (invalid free).
47 |
48 | Let's take a look at the detailed report of `FPD-release_sock-0xffffffff82cb4528-2`
49 |
50 | ```
51 | Primitive found at 1.0565822124481201 seconds
52 | Control flow hijack found!
53 | rax: is_symbolic: False 0xffffffff82cb44d3
54 | rbx: is_symbolic: False 0xffff88006bf80000
55 | rcx: is_symbolic: False 0xffffffff837fa84d
56 | rdx: is_symbolic: False 0x1
57 | rsi: is_symbolic: False 0xdffffc0000000000
58 | rdi: is_symbolic: False 0xffff88006bf80000
59 | rsp: is_symbolic: False 0xffff88006583f810
60 | rbp: is_symbolic: False 0xffff88006583f8b0
61 | r8: is_symbolic: False 0x0
62 | r9: is_symbolic: False 0x1
63 | r10: is_symbolic: False 0xffff88006583f438
64 | r11: is_symbolic: False 0xfffffbfff0bcf7c6
65 | r12: is_symbolic: False 0x1ffff1000cb07f05
66 | r13: is_symbolic: False 0xffff88006583f888
67 | r14: is_symbolic: False 0xffff88006bf80088
68 | r15: is_symbolic: True 0x9984205b330be0b0
69 | rip: is_symbolic: True 0x9984205b330be0b0
70 | gs: is_symbolic: False 0xffff88006d000000
71 | ================Thread-0 dump_state====================
72 | The value of each register may not reflect the latest state. It only represent the
73 | value at the beginning of current basic block
74 | ```
75 |
76 | This is a snapshot of the register state when the impact was triggered. Please note that each register value only reflects the value at the beginning of the current basic block.
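To quickly list which registers are symbolic in such a report, a small helper like this would do (illustrative, not part of SyzScope):

```python
import re

# Collect the names of registers marked "is_symbolic: True"
# in a primitive report dump.
def symbolic_regs(report_text):
    pattern = r"(\w+): is_symbolic: (True|False)"
    return [reg for reg, flag in re.findall(pattern, report_text)
            if flag == "True"]
```

On the dump above it returns `['r15', 'rip']`: both the dereferenced pointer and the program counter are attacker-influenced.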
77 |
78 | ```
79 | 0xffffffff82cb4528 : call r15
80 | pwndbg>
81 |
82 | |__asan_load4 mm/kasan/kasan.c:692
83 | |do_raw_spin_lock kernel/locking/spinlock_debug.c:83
84 | |_raw_spin_lock_bh kernel/locking/spinlock.c:169
85 | |release_sock net/core/sock.c:2778
86 | |None net/core/sock.c:2785
87 | ```
88 |
89 | Then we have the assembly code that triggers this impact. Combined with the register values, we know `r15` is a symbolic value. There is also a simple call trace for you to inspect: an entry with fewer leading tabs is the caller of the entry above it; otherwise it is the callee. For example, `release_sock` is the caller of `_raw_spin_lock_bh`, and the function pointer dereference is at `net/core/sock.c:2785`. Starting from the KASAN report like the symbolic execution did, the call trace is `__asan_load4`->`do_raw_spin_lock`->`_raw_spin_lock_bh`->`release_sock`->`None`; when we fail to resolve a function name, we just use `None`.
90 |
91 | Please note that the call trace is not always accurate. If you still struggle to find a correct trace to the target impact, we also provide the full trace log at the basic-block level.
92 |
93 | ```
94 | 0xffffffff8135a5ab
95 | do_raw_spin_lock kernel/locking/spinlock_debug.c:83
96 | --------------------------------------
97 | 0xffffffff8135a5b8
98 | do_raw_spin_lock kernel/locking/spinlock_debug.c:84
99 | --------------------------------------
100 | 0xffffffff8135a5c4
101 | do_raw_spin_lock ./arch/x86/include/asm/current.h:15
102 | --------------------------------------
103 | 0xffffffff8135a5d7
104 | do_raw_spin_lock kernel/locking/spinlock_debug.c:85
105 | --------------------------------------
106 | 0xffffffff8135a5e3
107 | do_raw_spin_lock kernel/locking/spinlock_debug.c:85
108 | --------------------------------------
109 | 0xffffffff8135a5f3
110 | do_raw_spin_lock ./arch/x86/include/asm/atomic.h:187
111 | --------------------------------------
112 | 0xffffffff8135a606
113 | do_raw_spin_lock kernel/locking/spinlock_debug.c:91
114 | --------------------------------------
115 | 0xffffffff8135a616
116 | do_raw_spin_lock kernel/locking/spinlock_debug.c:91
117 | --------------------------------------
118 | 0xffffffff8135a622
119 | do_raw_spin_lock ./arch/x86/include/asm/current.h:15
120 | --------------------------------------
121 | 0xffffffff837f9ec9
122 | _raw_spin_lock_bh kernel/locking/spinlock.c:169
123 | --------------------------------------
124 | 0xffffffff82cb44d3
125 | release_sock net/core/sock.c:2778
126 | --------------------------------------
127 | 0xffffffff82cb44df
128 | release_sock net/core/sock.c:2778
129 | --------------------------------------
130 | 0xffffffff82cb44f6
131 | release_sock net/core/sock.c:2784
132 | --------------------------------------
133 | 0xffffffff82cb44fb
134 | release_sock net/core/sock.c:2784
135 | --------------------------------------
136 | 0xffffffff82cb4504
137 | release_sock net/core/sock.c:2784
138 | --------------------------------------
139 | 0xffffffff82cb4514
140 | release_sock net/core/sock.c:2784
141 | --------------------------------------
142 | 0xffffffff82cb4520
143 | release_sock net/core/sock.c:2785
144 | --------------------------------------
145 | 0xffffffff82cb4525
146 | release_sock net/core/sock.c:2785
147 | --------------------------------------
148 | Total 20 intraprocedural basic block
149 | Total 29 basic block
150 | ```
151 |
152 |
--------------------------------------------------------------------------------
/tutorial/workzone_structure.md:
--------------------------------------------------------------------------------
1 | ### Folder structure
2 |
3 | ------
4 |
5 | The SyzScope work folder has the following structure:
6 |
7 | ```
8 | ├── cases.json File. Cache for testing cases
9 | ├── AbnormallyMemRead File. Bug with memory read
10 | ├── AbnormallyMemWrite File. Bug with memory write
11 | ├── DoubleFree File. Bug with double free
12 | ├── ConfirmedAbnormallyMemRead File. Bug with memory read(Patch eliminated)
13 | ├── ConfirmedAbnormallyMemWrite File. Bug with memory write(Patch eliminated)
14 | ├── ConfirmedDoubleFree File. Bug with double free(Patch eliminated)
15 | ├── incomplete Folder. Store ongoing cases
16 | ├── completed Folder. Store low-risk completed cases
17 | ├── succeed Folder. Store high-risk completed cases
18 | ├── xxx Folder. Case hash
19 | ├── crashes Folder. Crashes from fuzzing
20 | ├── ...
21 | └── xxx Folder. Crash detail, syzkaller style
22 | ├── linux Symbolic link. To Linux kernel
23 | ├── poc Folder. For PoC testing
24 | ├── crash_log-ori File. Contain high-risk impact
25 | ├── launch_vm.sh File. Use for launch qemu
26 | ├── log File. Reproduce PoC log
27 | ├── qemu-xxx-ori.log File. Qemu running log
28 | ├── run-script.sh File. Use for reproduce crash
29 | ├── run.sh File. Use for running PoC
30 | ├── syz-execprog Binary. Syzkaller component
31 | ├── syz-executor Binary. Syzkaller component
32 | ├── testcase File. Syzkaller style test case
33 | └── gopath Folder. Contain syzkaller
34 | ├── output Folder. Confirmed crashes
35 | ├── xxx Folder. Crash hash
36 | ├── description File. Crash description
37 | ├── repro.log File. Crash raw log
38 | ├── repro.report File. Crash log with debug info
39 | ├── repro.prog File. Crash reproducer(Syzkaller style)
40 | ├── repro.cprog File. Crash reproducer(C style)
41 | ├── repro.stats File. Crash reproduce log
42 | └── repro.command File. Command for reproducing
43 | └── ori Folder. Original crash
44 | ├── gopath Folder. Contain modified syzkaller
45 | ├── img Symbolic link. To image and key.
46 | ├── compiler Symbolic link. To compiler
47 | ├── sym-xxx Folder. Symbolic execution results
48 | ├── gdb.log-0 File. GDB log
49 | ├── mon.log-0 File. Qemu monitor log
50 | ├── vm.log-0 File. Qemu log
51 | ├── symbolic_execution.log-0 File. Symbolic execution log
52 | ├── launch_vm.sh File. For launching qemu
53 | └── primitives Folder. Contain high-risk impacts
54 | ├── ...
55 | └── FPD-xxx-14 File. A func-ptr-def impact
56 | ├── static-xxx Folder. Static analysis results
57 | ├── CallTrace File. Calltrace for analysis
58 | └── paths Folder. Paths for guidance
59 | ├── path2MemWrite-2-0 File. Path to a memory write
60 | └── TerminatingFunc File. Termination func for sym exec
61 | ├── config File. Config for kernel compiling
62 | ├── log File. Case log
63 | ├── one.bc File. Kernel bc for static analysis
64 | ├── clang-make.log File. Log for bc compiling
65 | └── error Folder. Error cases
66 | ├── xxx
67 | ├── ...
68 | └── make.log-xxx File. Make log if failed
69 | ```
70 |
71 | All running cases stay in the `incomplete` folder. If a case is successfully turned into high-risk, we move it to the `succeed` folder; otherwise we move it to the `completed` folder. If a case encounters any sort of error (e.g., a compiling error), we move it to `error`.
72 |
73 | If fuzzing finds new impacts, the case hash is written into the `AbnormallyMemXXX` or `DoubleFree` file. Then we apply the patch and rerun all new impacts; the ones that fail to reproduce on the patched kernel are written into the `ConfirmedAbnormallyMemXXX` or `ConfirmedDoubleFree` file.
74 |
75 | In the `error` folder, if a case encountered a compiling error, a `make.log-xxx` file contains the full compiling log.
76 |
--------------------------------------------------------------------------------