├── COPYRIGHT.txt ├── LICENSE.txt ├── README.md ├── bld ├── .gitignore └── Makefile ├── config └── config-4.0.0-040000-generic └── src ├── helper ├── linux_hook.c └── test_hook.c └── test ├── fuzzer └── test_fuzzer.c └── linux-samples-bpf ├── libbpf.h └── test_verifier.c /COPYRIGHT.txt: -------------------------------------------------------------------------------- 1 | Copyright 2015 PLUMgrid 2 | 3 | This program is free software; you can redistribute it and/or 4 | modify it under the terms of the GNU General Public License 5 | as published by the Free Software Foundation; either version 2 6 | of the License, or (at your option) any later version. 7 | 8 | This program is distributed in the hope that it will be useful, 9 | but WITHOUT ANY WARRANTY; without even the implied warranty of 10 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 | GNU General Public License for more details. 12 | 13 | You should have received a copy of the GNU General Public License 14 | along with this program; if not, write to the Free Software 15 | Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 16 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | GNU GENERAL PUBLIC LICENSE 2 | Version 2, June 1991 3 | 4 | Copyright (C) 1989, 1991 Free Software Foundation, Inc. 5 | 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 6 | Everyone is permitted to copy and distribute verbatim copies 7 | of this license document, but changing it is not allowed. 8 | 9 | Preamble 10 | 11 | The licenses for most software are designed to take away your 12 | freedom to share and change it. By contrast, the GNU General Public 13 | License is intended to guarantee your freedom to share and change free 14 | software--to make sure the software is free for all its users. 
This 15 | General Public License applies to most of the Free Software 16 | Foundation's software and to any other program whose authors commit to 17 | using it. (Some other Free Software Foundation software is covered by 18 | the GNU Library General Public License instead.) You can apply it to 19 | your programs, too. 20 | 21 | When we speak of free software, we are referring to freedom, not 22 | price. Our General Public Licenses are designed to make sure that you 23 | have the freedom to distribute copies of free software (and charge for 24 | this service if you wish), that you receive source code or can get it 25 | if you want it, that you can change the software or use pieces of it 26 | in new free programs; and that you know you can do these things. 27 | 28 | To protect your rights, we need to make restrictions that forbid 29 | anyone to deny you these rights or to ask you to surrender the rights. 30 | These restrictions translate to certain responsibilities for you if you 31 | distribute copies of the software, or if you modify it. 32 | 33 | For example, if you distribute copies of such a program, whether 34 | gratis or for a fee, you must give the recipients all the rights that 35 | you have. You must make sure that they, too, receive or can get the 36 | source code. And you must show them these terms so they know their 37 | rights. 38 | 39 | We protect your rights with two steps: (1) copyright the software, and 40 | (2) offer you this license which gives you legal permission to copy, 41 | distribute and/or modify the software. 42 | 43 | Also, for each author's protection and ours, we want to make certain 44 | that everyone understands that there is no warranty for this free 45 | software. If the software is modified by someone else and passed on, we 46 | want its recipients to know that what they have is not the original, so 47 | that any problems introduced by others will not reflect on the original 48 | authors' reputations. 
49 | 50 | Finally, any free program is threatened constantly by software 51 | patents. We wish to avoid the danger that redistributors of a free 52 | program will individually obtain patent licenses, in effect making the 53 | program proprietary. To prevent this, we have made it clear that any 54 | patent must be licensed for everyone's free use or not licensed at all. 55 | 56 | The precise terms and conditions for copying, distribution and 57 | modification follow. 58 | 59 | GNU GENERAL PUBLIC LICENSE 60 | TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 61 | 62 | 0. This License applies to any program or other work which contains 63 | a notice placed by the copyright holder saying it may be distributed 64 | under the terms of this General Public License. The "Program", below, 65 | refers to any such program or work, and a "work based on the Program" 66 | means either the Program or any derivative work under copyright law: 67 | that is to say, a work containing the Program or a portion of it, 68 | either verbatim or with modifications and/or translated into another 69 | language. (Hereinafter, translation is included without limitation in 70 | the term "modification".) Each licensee is addressed as "you". 71 | 72 | Activities other than copying, distribution and modification are not 73 | covered by this License; they are outside its scope. The act of 74 | running the Program is not restricted, and the output from the Program 75 | is covered only if its contents constitute a work based on the 76 | Program (independent of having been made by running the Program). 77 | Whether that is true depends on what the Program does. 78 | 79 | 1. 
You may copy and distribute verbatim copies of the Program's 80 | source code as you receive it, in any medium, provided that you 81 | conspicuously and appropriately publish on each copy an appropriate 82 | copyright notice and disclaimer of warranty; keep intact all the 83 | notices that refer to this License and to the absence of any warranty; 84 | and give any other recipients of the Program a copy of this License 85 | along with the Program. 86 | 87 | You may charge a fee for the physical act of transferring a copy, and 88 | you may at your option offer warranty protection in exchange for a fee. 89 | 90 | 2. You may modify your copy or copies of the Program or any portion 91 | of it, thus forming a work based on the Program, and copy and 92 | distribute such modifications or work under the terms of Section 1 93 | above, provided that you also meet all of these conditions: 94 | 95 | a) You must cause the modified files to carry prominent notices 96 | stating that you changed the files and the date of any change. 97 | 98 | b) You must cause any work that you distribute or publish, that in 99 | whole or in part contains or is derived from the Program or any 100 | part thereof, to be licensed as a whole at no charge to all third 101 | parties under the terms of this License. 102 | 103 | c) If the modified program normally reads commands interactively 104 | when run, you must cause it, when started running for such 105 | interactive use in the most ordinary way, to print or display an 106 | announcement including an appropriate copyright notice and a 107 | notice that there is no warranty (or else, saying that you provide 108 | a warranty) and that users may redistribute the program under 109 | these conditions, and telling the user how to view a copy of this 110 | License. (Exception: if the Program itself is interactive but 111 | does not normally print such an announcement, your work based on 112 | the Program is not required to print an announcement.) 
113 | 114 | These requirements apply to the modified work as a whole. If 115 | identifiable sections of that work are not derived from the Program, 116 | and can be reasonably considered independent and separate works in 117 | themselves, then this License, and its terms, do not apply to those 118 | sections when you distribute them as separate works. But when you 119 | distribute the same sections as part of a whole which is a work based 120 | on the Program, the distribution of the whole must be on the terms of 121 | this License, whose permissions for other licensees extend to the 122 | entire whole, and thus to each and every part regardless of who wrote it. 123 | 124 | Thus, it is not the intent of this section to claim rights or contest 125 | your rights to work written entirely by you; rather, the intent is to 126 | exercise the right to control the distribution of derivative or 127 | collective works based on the Program. 128 | 129 | In addition, mere aggregation of another work not based on the Program 130 | with the Program (or with a work based on the Program) on a volume of 131 | a storage or distribution medium does not bring the other work under 132 | the scope of this License. 133 | 134 | 3. 
You may copy and distribute the Program (or a work based on it, 135 | under Section 2) in object code or executable form under the terms of 136 | Sections 1 and 2 above provided that you also do one of the following: 137 | 138 | a) Accompany it with the complete corresponding machine-readable 139 | source code, which must be distributed under the terms of Sections 140 | 1 and 2 above on a medium customarily used for software interchange; or, 141 | 142 | b) Accompany it with a written offer, valid for at least three 143 | years, to give any third party, for a charge no more than your 144 | cost of physically performing source distribution, a complete 145 | machine-readable copy of the corresponding source code, to be 146 | distributed under the terms of Sections 1 and 2 above on a medium 147 | customarily used for software interchange; or, 148 | 149 | c) Accompany it with the information you received as to the offer 150 | to distribute corresponding source code. (This alternative is 151 | allowed only for noncommercial distribution and only if you 152 | received the program in object code or executable form with such 153 | an offer, in accord with Subsection b above.) 154 | 155 | The source code for a work means the preferred form of the work for 156 | making modifications to it. For an executable work, complete source 157 | code means all the source code for all modules it contains, plus any 158 | associated interface definition files, plus the scripts used to 159 | control compilation and installation of the executable. However, as a 160 | special exception, the source code distributed need not include 161 | anything that is normally distributed (in either source or binary 162 | form) with the major components (compiler, kernel, and so on) of the 163 | operating system on which the executable runs, unless that component 164 | itself accompanies the executable. 
165 | 166 | If distribution of executable or object code is made by offering 167 | access to copy from a designated place, then offering equivalent 168 | access to copy the source code from the same place counts as 169 | distribution of the source code, even though third parties are not 170 | compelled to copy the source along with the object code. 171 | 172 | 4. You may not copy, modify, sublicense, or distribute the Program 173 | except as expressly provided under this License. Any attempt 174 | otherwise to copy, modify, sublicense or distribute the Program is 175 | void, and will automatically terminate your rights under this License. 176 | However, parties who have received copies, or rights, from you under 177 | this License will not have their licenses terminated so long as such 178 | parties remain in full compliance. 179 | 180 | 5. You are not required to accept this License, since you have not 181 | signed it. However, nothing else grants you permission to modify or 182 | distribute the Program or its derivative works. These actions are 183 | prohibited by law if you do not accept this License. Therefore, by 184 | modifying or distributing the Program (or any work based on the 185 | Program), you indicate your acceptance of this License to do so, and 186 | all its terms and conditions for copying, distributing or modifying 187 | the Program or works based on it. 188 | 189 | 6. Each time you redistribute the Program (or any work based on the 190 | Program), the recipient automatically receives a license from the 191 | original licensor to copy, distribute or modify the Program subject to 192 | these terms and conditions. You may not impose any further 193 | restrictions on the recipients' exercise of the rights granted herein. 194 | You are not responsible for enforcing compliance by third parties to 195 | this License. 196 | 197 | 7. 
If, as a consequence of a court judgment or allegation of patent 198 | infringement or for any other reason (not limited to patent issues), 199 | conditions are imposed on you (whether by court order, agreement or 200 | otherwise) that contradict the conditions of this License, they do not 201 | excuse you from the conditions of this License. If you cannot 202 | distribute so as to satisfy simultaneously your obligations under this 203 | License and any other pertinent obligations, then as a consequence you 204 | may not distribute the Program at all. For example, if a patent 205 | license would not permit royalty-free redistribution of the Program by 206 | all those who receive copies directly or indirectly through you, then 207 | the only way you could satisfy both it and this License would be to 208 | refrain entirely from distribution of the Program. 209 | 210 | If any portion of this section is held invalid or unenforceable under 211 | any particular circumstance, the balance of the section is intended to 212 | apply and the section as a whole is intended to apply in other 213 | circumstances. 214 | 215 | It is not the purpose of this section to induce you to infringe any 216 | patents or other property right claims or to contest validity of any 217 | such claims; this section has the sole purpose of protecting the 218 | integrity of the free software distribution system, which is 219 | implemented by public license practices. Many people have made 220 | generous contributions to the wide range of software distributed 221 | through that system in reliance on consistent application of that 222 | system; it is up to the author/donor to decide if he or she is willing 223 | to distribute software through any other system and a licensee cannot 224 | impose that choice. 225 | 226 | This section is intended to make thoroughly clear what is believed to 227 | be a consequence of the rest of this License. 228 | 229 | 8. 
If the distribution and/or use of the Program is restricted in 230 | certain countries either by patents or by copyrighted interfaces, the 231 | original copyright holder who places the Program under this License 232 | may add an explicit geographical distribution limitation excluding 233 | those countries, so that distribution is permitted only in or among 234 | countries not thus excluded. In such case, this License incorporates 235 | the limitation as if written in the body of this License. 236 | 237 | 9. The Free Software Foundation may publish revised and/or new versions 238 | of the General Public License from time to time. Such new versions will 239 | be similar in spirit to the present version, but may differ in detail to 240 | address new problems or concerns. 241 | 242 | Each version is given a distinguishing version number. If the Program 243 | specifies a version number of this License which applies to it and "any 244 | later version", you have the option of following the terms and conditions 245 | either of that version or of any later version published by the Free 246 | Software Foundation. If the Program does not specify a version number of 247 | this License, you may choose any version ever published by the Free Software 248 | Foundation. 249 | 250 | 10. If you wish to incorporate parts of the Program into other free 251 | programs whose distribution conditions are different, write to the author 252 | to ask for permission. For software which is copyrighted by the Free 253 | Software Foundation, write to the Free Software Foundation; we sometimes 254 | make exceptions for this. Our decision will be guided by the two goals 255 | of preserving the free status of all derivatives of our free software and 256 | of promoting the sharing and reuse of software generally. 257 | 258 | NO WARRANTY 259 | 260 | 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY 261 | FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN 262 | OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES 263 | PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED 264 | OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF 265 | MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS 266 | TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE 267 | PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, 268 | REPAIR OR CORRECTION. 269 | 270 | 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 271 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR 272 | REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, 273 | INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING 274 | OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED 275 | TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY 276 | YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER 277 | PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE 278 | POSSIBILITY OF SUCH DAMAGES. 279 | 280 | END OF TERMS AND CONDITIONS 281 | 282 | How to Apply These Terms to Your New Programs 283 | 284 | If you develop a new program, and you want it to be of the greatest 285 | possible use to the public, the best way to achieve this is to make it 286 | free software which everyone can redistribute and change under these terms. 287 | 288 | To do so, attach the following notices to the program. It is safest 289 | to attach them to the start of each source file to most effectively 290 | convey the exclusion of warranty; and each file should have at least 291 | the "copyright" line and a pointer to where the full notice is found. 
292 | 293 | 294 | Copyright (C) 295 | 296 | This program is free software; you can redistribute it and/or modify 297 | it under the terms of the GNU General Public License as published by 298 | the Free Software Foundation; either version 2 of the License, or 299 | (at your option) any later version. 300 | 301 | This program is distributed in the hope that it will be useful, 302 | but WITHOUT ANY WARRANTY; without even the implied warranty of 303 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 304 | GNU General Public License for more details. 305 | 306 | You should have received a copy of the GNU General Public License 307 | along with this program; if not, write to the Free Software 308 | Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA 309 | 310 | 311 | Also add information on how to contact you by electronic and paper mail. 312 | 313 | If the program is interactive, make it output a short notice like this 314 | when it starts in an interactive mode: 315 | 316 | Gnomovision version 69, Copyright (C) year name of author 317 | Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 318 | This is free software, and you are welcome to redistribute it 319 | under certain conditions; type `show c' for details. 320 | 321 | The hypothetical commands `show w' and `show c' should show the appropriate 322 | parts of the General Public License. Of course, the commands you use may 323 | be called something other than `show w' and `show c'; they could even be 324 | mouse-clicks or menu items--whatever suits your program. 325 | 326 | You should also get your employer (if you work as a programmer) or your 327 | school, if any, to sign a "copyright disclaimer" for the program, if 328 | necessary. Here is a sample; alter the names: 329 | 330 | Yoyodyne, Inc., hereby disclaims all copyright interest in the program 331 | `Gnomovision' (which makes passes at compilers) written by James Hacker. 
332 | 333 | , 1 April 1989 334 | Ty Coon, President of Vice 335 | 336 | This General Public License does not permit incorporating your program into 337 | proprietary programs. If your program is a subroutine library, you may 338 | consider it more useful to permit linking proprietary applications with the 339 | library. If this is what you want to do, use the GNU Library General 340 | Public License instead of this License. 341 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | 2 | This repository implements a mechanism to perform bpf program verification 3 | in user space. It further utilizes the llvm sanitizer and fuzzer frameworks to 4 | extend error-detection coverage. 5 | 6 | ## Motivation 7 | 8 | The motivation of this project is to test the verifier in userspace so that 9 | we can take advantage of llvm's sanitizer and fuzzer frameworks. One bug 10 | has already been discovered as a result of this effort: 11 | http://permalink.gmane.org/gmane.linux.network/376864 12 | 13 | ## Directory Overview 14 | 15 | ``` 16 | - bld 17 | - config 18 | - src 19 | - helper 20 | - test 21 | - linux-samples-bpf 22 | - fuzzer 23 | ``` 24 | 25 | The bld directory is used to build and run test programs. 26 | The config directory holds the recommended linux config file. 27 | The src/helper directory contains helper files for kernel and 28 | user hooks. The src/test/linux-samples-bpf directory contains 29 | test verifier files adapted from linux/samples/bpf/, and 30 | src/test/fuzzer contains a hook into the llvm fuzzer framework. 31 | 32 | ## Prerequisite 33 | 34 | A linux source tree is needed; it is used to pre-process kernel files. 35 | Note that kernel headers will need to be generated at the default /usr/include 36 | directory. The default config does not have all the necessary BPF options enabled, 37 | which will make verifier tests fail. 
38 | You can try to use the one in the config directory instead of the default 39 | linux:arch/x86/configs/x86_64_defconfig. 40 | 41 | ```bash 42 | git clone git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 43 | # change defconfig and apply necessary patch as described below 44 | cd net-next 45 | make defconfig 46 | make headers_install 47 | ``` 48 | 49 | For the above linux tree, apply the following patch so that llvm can cope with linux 50 | inline assembly: 51 | 52 | ``` 53 | yhs@ubuntu:~/work/fuzzer/net-next$ git diff 54 | diff --git a/Makefile b/Makefile 55 | index c361593..cacbe0f 100644 56 | --- a/Makefile 57 | +++ b/Makefile 58 | @@ -686,6 +686,8 @@ KBUILD_CFLAGS += $(call cc-disable-warning, tautological-compare) 59 | # See modpost pattern 2 60 | KBUILD_CFLAGS += $(call cc-option, -mno-global-merge,) 61 | KBUILD_CFLAGS += $(call cc-option, -fcatch-undefined-behavior) 62 | +# no integrated assembler so not checking inlining assembly format 63 | +KBUILD_CFLAGS += $(call cc-option, -no-integrated-as) 64 | else 65 | 66 | # This warning generated too much noise in a regular build. 67 | yhs@ubuntu:~/work/fuzzer/net-next$ 68 | ``` 69 | 70 | An llvm/clang compiler with compiler-rt is needed; compiler-rt is necessary 71 | for llvm sanitizer support. 72 | 73 | ```bash 74 | sudo apt-get -y install bison build-essential cmake flex git libedit-dev python zlib1g-dev 75 | git clone http://llvm.org/git/llvm.git 76 | cd llvm/tools; git clone http://llvm.org/git/clang.git 77 | cd ../projects; git clone http://llvm.org/git/compiler-rt.git 78 | cd ..; mkdir -p build/install; cd build 79 | cmake -G "Unix Makefiles" -DLLVM_TARGETS_TO_BUILD="BPF;X86" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PWD/install .. 
80 | make -j4 81 | make install 82 | export PATH=$PWD/install/bin:$PATH 83 | ``` 84 | 85 | ## Build and Run 86 | 87 | ```bash 88 | export KERNEL_TREE_ROOT= 89 | cd bld 90 | make setup 91 | make all 92 | ``` 93 | 94 | The "setup" target creates common symbolic links and downloads 95 | and builds the llvm fuzzer. It only needs to be run once. 96 | 97 | The "all" target builds two binaries, `test_verifier` and `test_fuzzer`. 98 | `test_verifier` is essentially `linux/samples/bpf/test_verifier.c` with 99 | slight modifications to adapt it to the new test framework, with 100 | sanitizer support. `test_fuzzer` uses the llvm fuzzer framework 101 | for testing. 102 | 103 | `test_verifier` can also generate initial test cases for `test_fuzzer`. 104 | These test cases make the fuzzer more effective at generating relevant 105 | new test cases. 106 | ```bash 107 | # generate initial test cases for the fuzzer and put them into the ./corpus directory 108 | ./test_verifier -g ./corpus 109 | ./test_fuzzer -max_len=1024 ./corpus 110 | ``` 111 | 112 | To report memory leaks, `test_fuzzer` needs to terminate normally. 113 | This can be done by limiting the number of runs, e.g., 114 | ```bash 115 | ./test_fuzzer -max_len=1024 -runs=1000000 ./corpus 116 | ``` 117 | 118 | The following command can be used to find the total number of coverage points: 119 | ```bash 120 | # the actual number of coverage points is the number of lines minus one, 121 | # to account for the function declaration. 122 | objdump -D test_fuzzer | grep '<__sanitizer_cov>' | wc 123 | ``` 124 | 125 | With this information, you can compare against the `test_fuzzer` output to 126 | see the current coverage. For example, if the total number of 127 | edge coverage points is 1106, and you see the following in the `test_fuzzer` 128 | output: 129 | ``` 130 | ...... 
131 | #3466257868 NEW cov: 884 bits: 3749 units: 936 exec/s: 38408 L: 47 132 | #3486994800 NEW cov: 884 bits: 3750 units: 937 exec/s: 38396 L: 64 133 | #3491234878 NEW cov: 886 bits: 3752 units: 938 exec/s: 38394 L: 58 134 | #3495108828 NEW cov: 886 bits: 3754 units: 939 exec/s: 38391 L: 58 135 | ``` 136 | You will know that 886 out of the 1106 edges have been covered so far. 137 | -------------------------------------------------------------------------------- /bld/.gitignore: -------------------------------------------------------------------------------- 1 | Fuzzer 2 | test_verifier 3 | test_fuzzer 4 | test_coverage 5 | *.h 6 | *.i 7 | *.o 8 | *.c 9 | -------------------------------------------------------------------------------- /bld/Makefile: -------------------------------------------------------------------------------- 1 | define WARN_KERNEL_TREE_ROOT 2 | 3 | Linux kernel tree root is not configured, use "make KERNEL_TREE_ROOT=<> ...", 4 | or set KERNEL_TREE_ROOT as environment variable and "make ...". 5 | 6 | endef 7 | 8 | ifeq ($(KERNEL_TREE_ROOT),) 9 | $(error $(WARN_KERNEL_TREE_ROOT)) 10 | endif 11 | 12 | BLD=$(PWD) 13 | SRC=$(PWD)/.. 14 | 15 | enable_fuzzer_dataflow=1 16 | 17 | FUZZER_CFLAGS=-g -O2 -std=c++11 18 | CFLAGS=-O1 -g -fsanitize=address -fsanitize-coverage=edge,8bit-counters -fno-omit-frame-pointer -DTEST_WORKAROUND 19 | COVERAGE_CFLAGS=-O1 -g -fno-omit-frame-pointer -DTEST_WORKAROUND -fprofile-instr-generate -fcoverage-mapping -fPIC 20 | ifeq ($(enable_fuzzer_dataflow),1) 21 | FUZZER_CFLAGS += -fPIC 22 | CFLAGS += -fPIC -fsanitize-coverage=trace-cmp 23 | endif 24 | 25 | .PHONY: setup test_hook test_verifier all 26 | 27 | setup: 28 | # kernel hook 29 | ln -sf $(SRC)/src/helper/linux_hook.c $(KERNEL_TREE_ROOT)/kernel/bpf/linux_hook.c 30 | # user hook 31 | ln -sf $(SRC)/src/helper/test_hook.c . 
32 | # fuzzer 33 | svn co -r352395 http://llvm.org/svn/llvm-project/compiler-rt/trunk/lib/fuzzer Fuzzer 34 | clang -c $(FUZZER_CFLAGS) Fuzzer/*.cpp -IFuzzer 35 | 36 | test_hook: $(KERNEL_TREE_ROOT)/kernel/bpf/verifier.c $(SRC)/src/helper/linux_hook.c $(SRC)/src/helper/test_hook.c 37 | cd $(KERNEL_TREE_ROOT); make HOSTCC=clang CC=clang kernel/bpf/verifier.i kernel/bpf/linux_hook.i 38 | ln -sf $(KERNEL_TREE_ROOT)/kernel/bpf/verifier.i . 39 | ln -sf $(KERNEL_TREE_ROOT)/kernel/bpf/linux_hook.i . 40 | clang $(CFLAGS) -c -I$(KERNEL_TREE_ROOT)/usr/include \ 41 | test_hook.c verifier.i linux_hook.i 42 | 43 | test_verifier: $(SRC)/src/test/linux-samples-bpf/test_verifier.c 44 | ln -sf $(SRC)/src/test/linux-samples-bpf/* . 45 | clang $(CFLAGS) -o $@ -I$(KERNEL_TREE_ROOT)/usr/include \ 46 | test_verifier.c test_hook.c verifier.i linux_hook.i 47 | 48 | test_fuzzer: $(SRC)/src/test/fuzzer/test_fuzzer.c 49 | ln -sf $(SRC)/src/test/fuzzer/* . 50 | clang $(CFLAGS) -c -I$(KERNEL_TREE_ROOT)/usr/include \ 51 | test_fuzzer.c test_hook.c verifier.i linux_hook.i 52 | clang++ $(CFLAGS) -o $@ \ 53 | test_fuzzer.o test_hook.o verifier.o linux_hook.o \ 54 | Fuzzer*.o 55 | 56 | test_coverage: $(SRC)/src/test/fuzzer/test_fuzzer.c test_hook 57 | ln -sf $(SRC)/src/test/fuzzer/* . 
58 | clang $(COVERAGE_CFLAGS) -o $@ -I$(KERNEL_TREE_ROOT)/usr/include \ 59 | test_fuzzer.c test_hook.c verifier.i linux_hook.i \ 60 | Fuzzer/standalone/StandaloneFuzzTargetMain.c 61 | 62 | all: test_hook test_verifier test_fuzzer test_coverage 63 | 64 | clean: 65 | /bin/rm -rf *.o test_verifier test_fuzzer test_coverage 66 | 67 | distclean: 68 | /bin/rm -rf test_verifier test_fuzzer test_coverage *.i *.h *.c *.o \ 69 | Fuzzer $(KERNEL_TREE_ROOT)/kernel/bpf/linux_hook.c 70 | -------------------------------------------------------------------------------- /src/helper/linux_hook.c: -------------------------------------------------------------------------------- 1 | /* 2 | * helper functions using kernel data structures 3 | * 4 | * Copyright (c) 2015 PLUMgrid, Inc. 5 | * 6 | * This program is free software; you can redistribute it and/or 7 | * modify it under the terms of version 2 of the GNU General Public 8 | * License as published by the Free Software Foundation. 9 | */ 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | #include 20 | 21 | #define LOG_BUF_SIZE 65536 22 | char bpf_log_buf[LOG_BUF_SIZE]; 23 | 24 | static void __user *u64_to_ptr(__u64 val) 25 | { 26 | return (void __user *) (unsigned long) val; 27 | } 28 | 29 | static __u64 ptr_to_u64(void *ptr) 30 | { 31 | return (__u64) (unsigned long) ptr; 32 | } 33 | 34 | const struct bpf_func_proto bpf_skb_store_bytes_proto = { 35 | .func = NULL, 36 | .gpl_only = false, 37 | .ret_type = RET_INTEGER, 38 | .arg1_type = ARG_PTR_TO_CTX, 39 | .arg2_type = ARG_ANYTHING, 40 | .arg3_type = ARG_PTR_TO_STACK, 41 | .arg4_type = ARG_CONST_STACK_SIZE, 42 | .arg5_type = ARG_ANYTHING, 43 | }; 44 | 45 | const struct bpf_func_proto bpf_l3_csum_replace_proto = { 46 | .func = NULL, 47 | .gpl_only = false, 48 | .ret_type = RET_INTEGER, 49 | .arg1_type = ARG_PTR_TO_CTX, 50 | .arg2_type = ARG_ANYTHING, 51 | .arg3_type = ARG_ANYTHING, 52 | .arg4_type = 
ARG_ANYTHING, 53 | .arg5_type = ARG_ANYTHING, 54 | }; 55 | 56 | const struct bpf_func_proto bpf_l4_csum_replace_proto = { 57 | .func = NULL, 58 | .gpl_only = false, 59 | .ret_type = RET_INTEGER, 60 | .arg1_type = ARG_PTR_TO_CTX, 61 | .arg2_type = ARG_ANYTHING, 62 | .arg3_type = ARG_ANYTHING, 63 | .arg4_type = ARG_ANYTHING, 64 | .arg5_type = ARG_ANYTHING, 65 | }; 66 | 67 | const struct bpf_func_proto bpf_clone_redirect_proto = { 68 | .func = NULL, 69 | .gpl_only = false, 70 | .ret_type = RET_INTEGER, 71 | .arg1_type = ARG_PTR_TO_CTX, 72 | .arg2_type = ARG_ANYTHING, 73 | .arg3_type = ARG_ANYTHING, 74 | }; 75 | 76 | static const struct bpf_func_proto bpf_get_cgroup_classid_proto = { 77 | .func = NULL, 78 | .gpl_only = false, 79 | .ret_type = RET_INTEGER, 80 | .arg1_type = ARG_PTR_TO_CTX, 81 | }; 82 | 83 | static const struct bpf_func_proto bpf_skb_vlan_push_proto_t = { 84 | .func = NULL, 85 | .gpl_only = false, 86 | .ret_type = RET_INTEGER, 87 | .arg1_type = ARG_PTR_TO_CTX, 88 | .arg2_type = ARG_ANYTHING, 89 | .arg3_type = ARG_ANYTHING, 90 | }; 91 | 92 | static const struct bpf_func_proto bpf_skb_vlan_pop_proto_t = { 93 | .func = NULL, 94 | .gpl_only = false, 95 | .ret_type = RET_INTEGER, 96 | .arg1_type = ARG_PTR_TO_CTX, 97 | }; 98 | 99 | static const struct bpf_func_proto bpf_skb_get_tunnel_key_proto = { 100 | .func = NULL, 101 | .gpl_only = false, 102 | .ret_type = RET_INTEGER, 103 | .arg1_type = ARG_PTR_TO_CTX, 104 | .arg2_type = ARG_PTR_TO_STACK, 105 | .arg3_type = ARG_CONST_STACK_SIZE, 106 | .arg4_type = ARG_ANYTHING, 107 | }; 108 | 109 | static const struct bpf_func_proto bpf_skb_set_tunnel_key_proto = { 110 | .func = NULL, 111 | .gpl_only = false, 112 | .ret_type = RET_INTEGER, 113 | .arg1_type = ARG_PTR_TO_CTX, 114 | .arg2_type = ARG_PTR_TO_STACK, 115 | .arg3_type = ARG_CONST_STACK_SIZE, 116 | .arg4_type = ARG_ANYTHING, 117 | }; 118 | 119 | static const struct bpf_func_proto bpf_map_lookup_elem_proto_k = { 120 | .func = NULL, 121 | .gpl_only = false, 122 | 
.ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL, 123 | .arg1_type = ARG_CONST_MAP_PTR, 124 | .arg2_type = ARG_PTR_TO_MAP_KEY, 125 | }; 126 | 127 | static const struct bpf_func_proto bpf_map_update_elem_proto_k = { 128 | .func = NULL, 129 | .gpl_only = false, 130 | .ret_type = RET_INTEGER, 131 | .arg1_type = ARG_CONST_MAP_PTR, 132 | .arg2_type = ARG_PTR_TO_MAP_KEY, 133 | .arg3_type = ARG_PTR_TO_MAP_VALUE, 134 | .arg4_type = ARG_ANYTHING, 135 | }; 136 | 137 | static const struct bpf_func_proto bpf_map_delete_elem_proto_k = { 138 | .func = NULL, 139 | .gpl_only = false, 140 | .ret_type = RET_INTEGER, 141 | .arg1_type = ARG_CONST_MAP_PTR, 142 | .arg2_type = ARG_PTR_TO_MAP_KEY, 143 | }; 144 | 145 | static const struct bpf_func_proto bpf_get_prandom_u32_proto_k = { 146 | .func = NULL, 147 | .gpl_only = false, 148 | .ret_type = RET_INTEGER, 149 | }; 150 | 151 | static const struct bpf_func_proto bpf_get_smp_processor_id_proto_k = { 152 | .func = NULL, 153 | .gpl_only = false, 154 | .ret_type = RET_INTEGER, 155 | }; 156 | 157 | static const struct bpf_func_proto bpf_ktime_get_ns_proto_k = { 158 | .func = NULL, 159 | .gpl_only = true, 160 | .ret_type = RET_INTEGER, 161 | }; 162 | 163 | static const struct bpf_func_proto bpf_get_current_pid_tgid_proto_k = { 164 | .func = NULL, 165 | .gpl_only = false, 166 | .ret_type = RET_INTEGER, 167 | }; 168 | 169 | static const struct bpf_func_proto bpf_get_current_uid_gid_proto_k = { 170 | .func = NULL, 171 | .gpl_only = false, 172 | .ret_type = RET_INTEGER, 173 | }; 174 | 175 | static const struct bpf_func_proto bpf_get_current_comm_proto_k = { 176 | .func = NULL, 177 | .gpl_only = false, 178 | .ret_type = RET_INTEGER, 179 | .arg1_type = ARG_PTR_TO_STACK, 180 | .arg2_type = ARG_CONST_STACK_SIZE, 181 | }; 182 | 183 | static const struct bpf_func_proto bpf_tail_call_proto_k = { 184 | .func = NULL, 185 | .gpl_only = false, 186 | .ret_type = RET_VOID, 187 | .arg1_type = ARG_PTR_TO_CTX, 188 | .arg2_type = ARG_CONST_MAP_PTR, 189 | .arg3_type = 
ARG_ANYTHING, 190 | }; 191 | 192 | static const struct bpf_func_proto bpf_trace_printk_proto_k = { 193 | .func = NULL, 194 | .gpl_only = true, 195 | .ret_type = RET_INTEGER, 196 | .arg1_type = ARG_PTR_TO_STACK, 197 | .arg2_type = ARG_CONST_STACK_SIZE, 198 | }; 199 | 200 | static const struct bpf_func_proto * 201 | sk_filter_func_proto(enum bpf_func_id func_id) 202 | { 203 | switch (func_id) { 204 | case BPF_FUNC_map_lookup_elem: 205 | return &bpf_map_lookup_elem_proto_k; 206 | case BPF_FUNC_map_update_elem: 207 | return &bpf_map_update_elem_proto_k; 208 | case BPF_FUNC_map_delete_elem: 209 | return &bpf_map_delete_elem_proto_k; 210 | case BPF_FUNC_get_prandom_u32: 211 | return &bpf_get_prandom_u32_proto_k; 212 | case BPF_FUNC_get_smp_processor_id: 213 | return &bpf_get_smp_processor_id_proto_k; 214 | case BPF_FUNC_tail_call: 215 | return &bpf_tail_call_proto_k; 216 | case BPF_FUNC_ktime_get_ns: 217 | return &bpf_ktime_get_ns_proto_k; 218 | case BPF_FUNC_trace_printk: 219 | #if 0 220 | return bpf_get_trace_printk_proto(); 221 | #else 222 | return &bpf_trace_printk_proto_k; 223 | #endif 224 | default: 225 | return NULL; 226 | } 227 | } 228 | 229 | static const struct bpf_func_proto * 230 | tc_cls_act_func_proto(enum bpf_func_id func_id) 231 | { 232 | switch (func_id) { 233 | case BPF_FUNC_skb_store_bytes: 234 | return &bpf_skb_store_bytes_proto; 235 | case BPF_FUNC_l3_csum_replace: 236 | return &bpf_l3_csum_replace_proto; 237 | case BPF_FUNC_l4_csum_replace: 238 | return &bpf_l4_csum_replace_proto; 239 | case BPF_FUNC_clone_redirect: 240 | return &bpf_clone_redirect_proto; 241 | case BPF_FUNC_get_cgroup_classid: 242 | return &bpf_get_cgroup_classid_proto; 243 | case BPF_FUNC_skb_vlan_push: 244 | return &bpf_skb_vlan_push_proto_t; 245 | case BPF_FUNC_skb_vlan_pop: 246 | return &bpf_skb_vlan_pop_proto_t; 247 | case BPF_FUNC_skb_get_tunnel_key: 248 | return &bpf_skb_get_tunnel_key_proto; 249 | case BPF_FUNC_skb_set_tunnel_key: 250 | #if 0 251 | return 
bpf_get_skb_set_tunnel_key_proto(); 252 | #else 253 | return &bpf_skb_set_tunnel_key_proto; 254 | #endif 255 | default: 256 | return sk_filter_func_proto(func_id); 257 | } 258 | } 259 | 260 | static u32 convert_skb_access(int skb_field, int dst_reg, int src_reg, 261 | struct bpf_insn *insn_buf) 262 | { 263 | struct bpf_insn *insn = insn_buf; 264 | 265 | switch (skb_field) { 266 | case SKF_AD_MARK: 267 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4); 268 | 269 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 270 | offsetof(struct sk_buff, mark)); 271 | break; 272 | 273 | case SKF_AD_PKTTYPE: 274 | *insn++ = BPF_LDX_MEM(BPF_B, dst_reg, src_reg, PKT_TYPE_OFFSET()); 275 | *insn++ = BPF_ALU32_IMM(BPF_AND, dst_reg, PKT_TYPE_MAX); 276 | #ifdef __BIG_ENDIAN_BITFIELD 277 | *insn++ = BPF_ALU32_IMM(BPF_RSH, dst_reg, 5); 278 | #endif 279 | break; 280 | 281 | case SKF_AD_QUEUE: 282 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, queue_mapping) != 2); 283 | 284 | *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, 285 | offsetof(struct sk_buff, queue_mapping)); 286 | break; 287 | 288 | case SKF_AD_VLAN_TAG: 289 | case SKF_AD_VLAN_TAG_PRESENT: 290 | #if 1 291 | #define VLAN_TAG_PRESENT 0x1000 292 | #endif 293 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_tci) != 2); 294 | BUILD_BUG_ON(VLAN_TAG_PRESENT != 0x1000); 295 | 296 | /* dst_reg = *(u16 *) (src_reg + offsetof(vlan_tci)) */ 297 | *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, 298 | offsetof(struct sk_buff, vlan_tci)); 299 | if (skb_field == SKF_AD_VLAN_TAG) { 300 | *insn++ = BPF_ALU32_IMM(BPF_AND, dst_reg, 301 | ~VLAN_TAG_PRESENT); 302 | } else { 303 | /* dst_reg >>= 12 */ 304 | *insn++ = BPF_ALU32_IMM(BPF_RSH, dst_reg, 12); 305 | /* dst_reg &= 1 */ 306 | *insn++ = BPF_ALU32_IMM(BPF_AND, dst_reg, 1); 307 | } 308 | break; 309 | } 310 | 311 | 312 | return insn - insn_buf; 313 | } 314 | 315 | static bool __is_valid_access(int off, int size, enum bpf_access_type type) 316 | { 317 | /* check bounds */ 318 | if (off < 0 || off 
>= sizeof(struct __sk_buff)) 319 | return false; 320 | 321 | /* disallow misaligned access */ 322 | if (off % size != 0) 323 | return false; 324 | 325 | /* all __sk_buff fields are __u32 */ 326 | if (size != 4) 327 | return false; 328 | 329 | return true; 330 | } 331 | 332 | static bool sk_filter_is_valid_access(int off, int size, 333 | enum bpf_access_type type) 334 | { 335 | if (type == BPF_WRITE) { 336 | switch (off) { 337 | case offsetof(struct __sk_buff, cb[0]) ... 338 | offsetof(struct __sk_buff, cb[4]): 339 | break; 340 | default: 341 | return false; 342 | } 343 | } 344 | 345 | return __is_valid_access(off, size, type); 346 | } 347 | 348 | static bool tc_cls_act_is_valid_access(int off, int size, 349 | enum bpf_access_type type) 350 | { 351 | if (type == BPF_WRITE) { 352 | switch (off) { 353 | case offsetof(struct __sk_buff, mark): 354 | case offsetof(struct __sk_buff, tc_index): 355 | case offsetof(struct __sk_buff, cb[0]) ... 356 | offsetof(struct __sk_buff, cb[4]): 357 | break; 358 | default: 359 | return false; 360 | } 361 | } 362 | return __is_valid_access(off, size, type); 363 | } 364 | 365 | static u32 bpf_net_convert_ctx_access(enum bpf_access_type type, int dst_reg, 366 | int src_reg, int ctx_off, 367 | struct bpf_insn *insn_buf) 368 | { 369 | struct bpf_insn *insn = insn_buf; 370 | 371 | switch (ctx_off) { 372 | case offsetof(struct __sk_buff, len): 373 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, len) != 4); 374 | 375 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 376 | offsetof(struct sk_buff, len)); 377 | break; 378 | 379 | case offsetof(struct __sk_buff, protocol): 380 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, protocol) != 2); 381 | 382 | *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, 383 | offsetof(struct sk_buff, protocol)); 384 | break; 385 | 386 | case offsetof(struct __sk_buff, vlan_proto): 387 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, vlan_proto) != 2); 388 | 389 | *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, 390 | offsetof(struct 
sk_buff, vlan_proto)); 391 | break; 392 | 393 | case offsetof(struct __sk_buff, priority): 394 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, priority) != 4); 395 | 396 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 397 | offsetof(struct sk_buff, priority)); 398 | break; 399 | 400 | case offsetof(struct __sk_buff, ingress_ifindex): 401 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, skb_iif) != 4); 402 | 403 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 404 | offsetof(struct sk_buff, skb_iif)); 405 | break; 406 | 407 | case offsetof(struct __sk_buff, ifindex): 408 | BUILD_BUG_ON(FIELD_SIZEOF(struct net_device, ifindex) != 4); 409 | 410 | *insn++ = BPF_LDX_MEM(bytes_to_bpf_size(FIELD_SIZEOF(struct sk_buff, dev)), 411 | dst_reg, src_reg, 412 | offsetof(struct sk_buff, dev)); 413 | *insn++ = BPF_JMP_IMM(BPF_JEQ, dst_reg, 0, 1); 414 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, dst_reg, 415 | offsetof(struct net_device, ifindex)); 416 | break; 417 | 418 | case offsetof(struct __sk_buff, hash): 419 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, hash) != 4); 420 | 421 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 422 | offsetof(struct sk_buff, hash)); 423 | break; 424 | 425 | case offsetof(struct __sk_buff, mark): 426 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, mark) != 4); 427 | 428 | if (type == BPF_WRITE) 429 | *insn++ = BPF_STX_MEM(BPF_W, dst_reg, src_reg, 430 | offsetof(struct sk_buff, mark)); 431 | else 432 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, 433 | offsetof(struct sk_buff, mark)); 434 | break; 435 | 436 | case offsetof(struct __sk_buff, pkt_type): 437 | return convert_skb_access(SKF_AD_PKTTYPE, dst_reg, src_reg, insn); 438 | 439 | case offsetof(struct __sk_buff, queue_mapping): 440 | return convert_skb_access(SKF_AD_QUEUE, dst_reg, src_reg, insn); 441 | 442 | case offsetof(struct __sk_buff, vlan_present): 443 | return convert_skb_access(SKF_AD_VLAN_TAG_PRESENT, 444 | dst_reg, src_reg, insn); 445 | 446 | case offsetof(struct __sk_buff, vlan_tci): 447 | return 
convert_skb_access(SKF_AD_VLAN_TAG, 448 | dst_reg, src_reg, insn); 449 | 450 | case offsetof(struct __sk_buff, cb[0]) ... 451 | offsetof(struct __sk_buff, cb[4]): 452 | BUILD_BUG_ON(FIELD_SIZEOF(struct qdisc_skb_cb, data) < 20); 453 | 454 | ctx_off -= offsetof(struct __sk_buff, cb[0]); 455 | ctx_off += offsetof(struct sk_buff, cb); 456 | ctx_off += offsetof(struct qdisc_skb_cb, data); 457 | if (type == BPF_WRITE) 458 | *insn++ = BPF_STX_MEM(BPF_W, dst_reg, src_reg, ctx_off); 459 | else 460 | *insn++ = BPF_LDX_MEM(BPF_W, dst_reg, src_reg, ctx_off); 461 | break; 462 | 463 | case offsetof(struct __sk_buff, tc_index): 464 | /* FIXME: CONFIG_NET_SCHED */ 465 | BUILD_BUG_ON(FIELD_SIZEOF(struct sk_buff, tc_index) != 2); 466 | 467 | if (type == BPF_WRITE) 468 | *insn++ = BPF_STX_MEM(BPF_H, dst_reg, src_reg, 469 | offsetof(struct sk_buff, tc_index)); 470 | else 471 | *insn++ = BPF_LDX_MEM(BPF_H, dst_reg, src_reg, 472 | offsetof(struct sk_buff, tc_index)); 473 | break; 474 | } 475 | 476 | return insn - insn_buf; 477 | } 478 | 479 | const struct bpf_func_proto bpf_perf_event_read_proto = { 480 | .func = NULL, 481 | .gpl_only = false, 482 | .ret_type = RET_INTEGER, 483 | .arg1_type = ARG_CONST_MAP_PTR, 484 | .arg2_type = ARG_ANYTHING, 485 | }; 486 | 487 | static const struct bpf_func_proto bpf_probe_read_proto = { 488 | .func = NULL, 489 | .gpl_only = true, 490 | .ret_type = RET_INTEGER, 491 | .arg1_type = ARG_PTR_TO_STACK, 492 | .arg2_type = ARG_CONST_STACK_SIZE, 493 | .arg3_type = ARG_ANYTHING, 494 | }; 495 | 496 | static const struct bpf_func_proto *kprobe_prog_func_proto(enum bpf_func_id func_id) 497 | { 498 | switch (func_id) { 499 | case BPF_FUNC_map_lookup_elem: 500 | return &bpf_map_lookup_elem_proto_k; 501 | case BPF_FUNC_map_update_elem: 502 | return &bpf_map_update_elem_proto_k; 503 | case BPF_FUNC_map_delete_elem: 504 | return &bpf_map_delete_elem_proto_k; 505 | case BPF_FUNC_probe_read: 506 | return &bpf_probe_read_proto; 507 | case BPF_FUNC_ktime_get_ns: 508 | 
return &bpf_ktime_get_ns_proto_k; 509 | case BPF_FUNC_tail_call: 510 | return &bpf_tail_call_proto_k; 511 | case BPF_FUNC_get_current_pid_tgid: 512 | return &bpf_get_current_pid_tgid_proto_k; 513 | case BPF_FUNC_get_current_uid_gid: 514 | return &bpf_get_current_uid_gid_proto_k; 515 | case BPF_FUNC_get_current_comm: 516 | return &bpf_get_current_comm_proto_k; 517 | case BPF_FUNC_trace_printk: 518 | #if 0 519 | return bpf_get_trace_printk_proto(); 520 | #else 521 | return &bpf_trace_printk_proto_k; 522 | #endif 523 | case BPF_FUNC_get_smp_processor_id: 524 | return &bpf_get_smp_processor_id_proto_k; 525 | case BPF_FUNC_perf_event_read: 526 | return &bpf_perf_event_read_proto; 527 | default: 528 | return NULL; 529 | } 530 | } 531 | 532 | static bool kprobe_prog_is_valid_access(int off, int size, enum bpf_access_type type) 533 | { 534 | /* check bounds */ 535 | if (off < 0 || off >= sizeof(struct pt_regs)) 536 | return false; 537 | 538 | /* only read is allowed */ 539 | if (type != BPF_READ) 540 | return false; 541 | 542 | /* disallow misaligned access */ 543 | if (off % size != 0) 544 | return false; 545 | 546 | return true; 547 | } 548 | 549 | static const struct bpf_verifier_ops sk_filter_ops = { 550 | .get_func_proto = sk_filter_func_proto, 551 | .is_valid_access = sk_filter_is_valid_access, 552 | .convert_ctx_access = bpf_net_convert_ctx_access, 553 | }; 554 | 555 | static const struct bpf_verifier_ops tc_cls_act_ops = { 556 | .get_func_proto = tc_cls_act_func_proto, 557 | .is_valid_access = tc_cls_act_is_valid_access, 558 | .convert_ctx_access = bpf_net_convert_ctx_access, 559 | }; 560 | 561 | static struct bpf_verifier_ops kprobe_prog_ops = { 562 | .get_func_proto = kprobe_prog_func_proto, 563 | .is_valid_access = kprobe_prog_is_valid_access, 564 | }; 565 | 566 | static struct bpf_prog_type_list sk_filter_type __read_mostly = { 567 | .ops = &sk_filter_ops, 568 | .type = BPF_PROG_TYPE_SOCKET_FILTER, 569 | }; 570 | 571 | static struct bpf_prog_type_list 
sched_cls_type __read_mostly = { 572 | .ops = &tc_cls_act_ops, 573 | .type = BPF_PROG_TYPE_SCHED_CLS, 574 | }; 575 | 576 | static struct bpf_prog_type_list sched_act_type __read_mostly = { 577 | .ops = &tc_cls_act_ops, 578 | .type = BPF_PROG_TYPE_SCHED_ACT, 579 | }; 580 | 581 | static struct bpf_prog_type_list kprobe_tl = { 582 | .ops = &kprobe_prog_ops, 583 | .type = BPF_PROG_TYPE_KPROBE, 584 | }; 585 | 586 | struct bpf_prog_type_info { 587 | enum bpf_prog_type type; 588 | struct bpf_verifier_ops *ops; 589 | struct bpf_prog_type_info *next; 590 | }; 591 | static struct bpf_prog_type_info *bpf_prog_type_node = NULL; 592 | 593 | static struct bpf_prog_type_info *register_prog_type_k( 594 | struct bpf_prog_type_list *tl) { 595 | struct bpf_prog_type_info *b = (struct bpf_prog_type_info *) 596 | malloc(sizeof(struct bpf_prog_type_info)); 597 | b->type = tl->type; 598 | b->ops = tl->ops; 599 | return b; 600 | } 601 | 602 | static int find_prog_type_k(enum bpf_prog_type type, struct bpf_prog *prog) 603 | { 604 | struct bpf_prog_type_info *m, *n; 605 | 606 | if (bpf_prog_type_node == NULL) { 607 | /* first call, let us do initialization */ 608 | n = register_prog_type_k(&kprobe_tl); 609 | n->next = NULL; 610 | 611 | m = register_prog_type_k(&sched_act_type); 612 | m->next = n; 613 | n = m; 614 | 615 | m = register_prog_type_k(&sched_cls_type); 616 | m->next = n; 617 | n = m; 618 | 619 | m = register_prog_type_k(&sk_filter_type); 620 | m->next = n; 621 | bpf_prog_type_node = m; 622 | } 623 | 624 | /* the callback functions are assigned here */ 625 | for (n = bpf_prog_type_node; n != NULL; n = n->next) { 626 | if (n->type == type) { 627 | prog->aux->ops = n->ops; 628 | prog->type = type; 629 | return 0; 630 | } 631 | } 632 | 633 | return -EINVAL; 634 | } 635 | 636 | /* functions used by test_verifier.c */ 637 | static struct bpf_prog *bpf_prog_alloc_k(unsigned int size) 638 | { 639 | struct bpf_prog_aux *aux; 640 | struct bpf_prog *fp; 641 | 642 | size = round_up(size, 
PAGE_SIZE); 643 | fp = vmalloc(size); 644 | if (fp == NULL) 645 | return NULL; 646 | 647 | aux = kzalloc(sizeof(*aux), GFP_KERNEL); 648 | if (aux == NULL) { 649 | vfree(fp); 650 | return NULL; 651 | } 652 | 653 | fp->pages = size / PAGE_SIZE; 654 | fp->aux = aux; 655 | 656 | return fp; 657 | } 658 | 659 | static void bpf_prog_free_k(struct bpf_prog *fp) 660 | { 661 | kfree(fp->aux); 662 | vfree(fp); 663 | } 664 | 665 | struct bpf_prog *bpf_prog_realloc_k(struct bpf_prog *fp_old, unsigned int size) 666 | { 667 | struct bpf_prog *fp; 668 | 669 | size = round_up(size, PAGE_SIZE); 670 | if (size <= fp_old->pages * PAGE_SIZE) 671 | return fp_old; 672 | 673 | fp = vmalloc(size); 674 | if (fp != NULL) { 675 | memcpy(fp, fp_old, fp_old->pages * PAGE_SIZE); 676 | fp->pages = size / PAGE_SIZE; 677 | fp_old->aux = NULL; 678 | bpf_prog_free_k(fp_old); 679 | } 680 | 681 | return fp; 682 | } 683 | 684 | /* create a map - but not really going into the kernel */ 685 | struct bpf_map_node { 686 | int fd; 687 | struct bpf_map *map; 688 | struct bpf_map_node *next; 689 | }; 690 | static struct bpf_map_node *map_head = NULL; 691 | static int fd_num = 1; 692 | 693 | int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size, 694 | int max_entries) 695 | { 696 | struct bpf_map *m; 697 | struct bpf_map_node *n; 698 | int cur_fd; 699 | 700 | m = (struct bpf_map *)vmalloc(sizeof(struct bpf_map)); 701 | m->map_type = map_type; 702 | m->key_size = key_size; 703 | m->value_size = value_size; 704 | m->max_entries = max_entries; 705 | m->ops = NULL; 706 | 707 | n = (struct bpf_map_node *)vmalloc(sizeof(struct bpf_map_node)); 708 | cur_fd = fd_num++; 709 | n->fd = cur_fd; 710 | n->map = m; 711 | 712 | if (map_head == NULL) { 713 | n->next = NULL; 714 | map_head = n; 715 | } else { 716 | n->next = map_head; 717 | map_head = n; 718 | } 719 | 720 | return cur_fd; 721 | } 722 | 723 | void bpf_free_map(int fd) 724 | { 725 | struct bpf_map_node *c, *p; 726 | 727 | c = p = map_head; 
728 | while (c != NULL) { 729 | if (c->fd == fd) { 730 | vfree(c->map); 731 | if (c == p) 732 | map_head = c->next; 733 | else 734 | p->next = c->next; 735 | vfree(c); 736 | break; 737 | } 738 | p = c; 739 | c = c->next; 740 | } 741 | } 742 | 743 | void bpf_map_put_k(struct bpf_map *map) 744 | { 745 | return; 746 | } 747 | 748 | unsigned long __fdget_k(unsigned int fd) 749 | { 750 | struct bpf_map_node *n; 751 | 752 | for (n = map_head; n != NULL; n = n->next) { 753 | if (n->fd == fd) 754 | return (unsigned long)n->map; 755 | } 756 | 757 | return -1; 758 | } 759 | 760 | /* load a bpf program - do verification only */ 761 | int bpf_prog_load(enum bpf_prog_type prog_type, 762 | const struct bpf_insn *insns, int prog_len, 763 | const char *license, int kern_version) 764 | { 765 | 766 | union bpf_attr attr = { 767 | .prog_type = prog_type, 768 | .insns = ptr_to_u64((void *) insns), 769 | .insn_cnt = prog_len / sizeof(struct bpf_insn), 770 | .license = ptr_to_u64((void *) license), 771 | .log_buf = ptr_to_u64(bpf_log_buf), 772 | .log_size = LOG_BUF_SIZE, 773 | .log_level = 1, 774 | }; 775 | enum bpf_prog_type type = attr.prog_type; 776 | struct bpf_prog *prog; 777 | int err; 778 | 779 | attr.kern_version = kern_version; 780 | bpf_log_buf[0] = 0; 781 | 782 | if (attr.insn_cnt >= BPF_MAXINSNS) 783 | return -EINVAL; 784 | 785 | /* plain bpf_prog allocation */ 786 | prog = bpf_prog_alloc_k(bpf_prog_size(attr.insn_cnt)); 787 | if (!prog) 788 | return -ENOMEM; 789 | 790 | prog->len = attr.insn_cnt; 791 | 792 | memcpy(prog->insns, u64_to_ptr(attr.insns), prog->len * sizeof(struct bpf_insn)); 793 | prog->orig_prog = NULL; 794 | prog->jited = false; 795 | 796 | atomic_set(&prog->aux->refcnt, 1); 797 | prog->gpl_compatible = 1; 798 | 799 | /* find program type: socket_filter vs tracing_filter */ 800 | err = find_prog_type_k(type, prog); 801 | if (err >= 0) { 802 | /* run eBPF verifier */ 803 | err = bpf_check(&prog, &attr); 804 | 805 | /* this is a workaround for userspace 
verifier. 806 | * in the kernel, env->prog->aux->used_maps will be 807 | * freed when the map itself is freed. 808 | */ 809 | kfree(prog->aux->used_maps); 810 | } 811 | bpf_prog_free_k(prog); 812 | return err; 813 | } 814 | -------------------------------------------------------------------------------- /src/helper/test_hook.c: -------------------------------------------------------------------------------- 1 | /* 2 | * helper functions to mimic certain kernel functionalities 3 | * 4 | * Copyright (c) 2015 PLUMgrid, Inc. 5 | * 6 | * This program is free software; you can redistribute it and/or 7 | * modify it under the terms of version 2 of the GNU General Public 8 | * License as published by the Free Software Foundation. 9 | */ 10 | #include 11 | #include 12 | 13 | /* some kernel types */ 14 | typedef unsigned gfp_t; 15 | 16 | /* externs */ 17 | extern void bpf_map_put_k(void *map); 18 | extern unsigned long __fdget_k(unsigned int fd); 19 | extern void *bpf_prog_realloc_k(void *fp_old, unsigned int size); 20 | 21 | /* when no optimization level is specified, the following interface mocks are required */ 22 | unsigned long phys_base = 0x0; 23 | void * __kmalloc(size_t size, gfp_t flags) { 24 | void *p = malloc(size); 25 | #define ___GFP_ZERO 0x8000u 26 | if (flags & ___GFP_ZERO) 27 | memset(p, 0, size); 28 | return p; 29 | } 30 | 31 | void *vmalloc(unsigned long size) { 32 | return malloc(size); 33 | } 34 | 35 | void kfree(const void *addr) { 36 | free((void *)addr); 37 | } 38 | 39 | void vfree(const void *addr) { 40 | free((void *)addr); 41 | } 42 | 43 | void warn_slowpath_fmt(const char *file, int line, const char *fmt, ...)
{ 44 | } 45 | 46 | unsigned long _copy_to_user(void *to, const void *from, unsigned n) { 47 | memcpy(to, from, n); 48 | return 0; 49 | } 50 | 51 | unsigned long _copy_from_user(void *to, const void *from, unsigned n) { 52 | memcpy(to, from, n); 53 | return 0; 54 | } 55 | 56 | void bpf_map_put(void *map) { 57 | bpf_map_put_k(map); 58 | } 59 | 60 | void *bpf_prog_realloc(void *fp_old, unsigned int size, gfp_t flags) { 61 | return bpf_prog_realloc_k(fp_old, size); 62 | } 63 | 64 | void mutex_lock(void *lock) { 65 | /* sorry, not support multithreading yet */ 66 | return; 67 | } 68 | void mutex_unlock(void *lock) { 69 | return; 70 | } 71 | 72 | int vscnprintf(char *buf, size_t size, const char *fmt, va_list args) { 73 | int i; 74 | 75 | i = vsnprintf(buf, size, fmt, args); 76 | 77 | if (i < size) 78 | return i; 79 | if (size != 0) 80 | return size - 1; 81 | return 0; 82 | } 83 | 84 | void fput(struct file *fp) { 85 | /* do nothing now */ 86 | } 87 | 88 | void *__memcpy(void *to, const void *from, size_t len) { 89 | return memcpy(to, from, len); 90 | } 91 | 92 | /* __fdget requires maps already associated with fd. 93 | * bpf_map_get needs to return information related to a map. 94 | * Needs to sort it out. 95 | */ 96 | unsigned long __fdget(unsigned int fd) { 97 | unsigned long r = __fdget_k(fd); 98 | return r; 99 | } 100 | 101 | struct fd { 102 | struct file *file; 103 | unsigned int flags; 104 | }; 105 | void *bpf_map_get(struct fd f) { 106 | return f.file; 107 | } 108 | -------------------------------------------------------------------------------- /src/test/fuzzer/test_fuzzer.c: -------------------------------------------------------------------------------- 1 | /* 2 | * LLVM fuzzer test callback implementation 3 | * 4 | * Copyright (c) 2015 PLUMgrid, Inc. 5 | * 6 | * This program is free software; you can redistribute it and/or 7 | * modify it under the terms of version 2 of the GNU General Public 8 | * License as published by the Free Software Foundation. 
9 | */ 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | 16 | int bpf_prog_load(enum bpf_prog_type prog_type, 17 | const struct bpf_insn *insns, int insn_len, 18 | const char *license, int kern_version); 19 | 20 | int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size, 21 | int max_entries); 22 | void bpf_free_map(int fd); 23 | static int create_map(void) 24 | { 25 | long long key, value = 0; 26 | int map_fd; 27 | 28 | map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value), 1024); 29 | if (map_fd < 0) { 30 | printf("failed to create map '%s'\n", strerror(errno)); 31 | } 32 | 33 | return map_fd; 34 | } 35 | 36 | int LLVMFuzzerTestOneInput(const unsigned char *data, unsigned long size) { 37 | struct bpf_insn *prog, *prog_c, *insn; 38 | int i, prog_len = size / sizeof(struct bpf_insn); 39 | 40 | /* If there are any map instructions, we want to create the map now. */ 41 | prog = malloc(size); 42 | memcpy(prog, data, size); 43 | insn = prog; 44 | for (i = 0; i < prog_len; i++, insn++) { 45 | if ((insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) && 46 | insn->src_reg == BPF_PSEUDO_MAP_FD) { 47 | int map_fd = create_map(); 48 | insn->imm = map_fd; 49 | } 50 | } 51 | /* keep a copy of the instructions since the verifier may modify them */ 52 | prog_c = malloc(size); 53 | memcpy(prog_c, prog, size); 54 | 55 | (void)bpf_prog_load(BPF_PROG_TYPE_SOCKET_FILTER, 56 | prog_c, prog_len * sizeof(struct bpf_insn), 57 | "GPL", 0); 58 | memcpy(prog_c, prog, size); 59 | (void)bpf_prog_load(BPF_PROG_TYPE_SCHED_CLS, 60 | prog_c, prog_len * sizeof(struct bpf_insn), 61 | "GPL", 0); 62 | memcpy(prog_c, prog, size); 63 | (void)bpf_prog_load(BPF_PROG_TYPE_SCHED_ACT, 64 | prog_c, prog_len * sizeof(struct bpf_insn), 65 | "GPL", 0); 66 | memcpy(prog_c, prog, size); 67 | (void)bpf_prog_load(BPF_PROG_TYPE_KPROBE, 68 | prog_c, prog_len * sizeof(struct bpf_insn), 69 | "GPL", 0); 70 | 71 | /* remove the created maps */ 72 | insn = prog; 73 | for (i = 0; i < prog_len; i++, insn++) { 74
| if ((insn[0].code == (BPF_LD | BPF_IMM | BPF_DW)) && 75 | insn->src_reg == BPF_PSEUDO_MAP_FD) { 76 | bpf_free_map(insn->imm); 77 | } 78 | } 79 | free(prog_c); 80 | free(prog); 81 | return 0; 82 | } 83 | -------------------------------------------------------------------------------- /src/test/linux-samples-bpf/libbpf.h: -------------------------------------------------------------------------------- 1 | /* 2 | * eBPF mini library 3 | * 4 | * Copyright (c) 2015 PLUMgrid, Inc. 5 | * 6 | * This program is free software; you can redistribute it and/or 7 | * modify it under the terms of version 2 of the GNU General Public 8 | * License as published by the Free Software Foundation. 9 | */ 10 | #ifndef __LIBBPF_H 11 | #define __LIBBPF_H 12 | 13 | struct bpf_insn; 14 | 15 | int bpf_create_map(enum bpf_map_type map_type, int key_size, int value_size, 16 | int max_entries); 17 | int bpf_update_elem(int fd, void *key, void *value, unsigned long long flags); 18 | int bpf_lookup_elem(int fd, void *key, void *value); 19 | int bpf_delete_elem(int fd, void *key); 20 | int bpf_get_next_key(int fd, void *key, void *next_key); 21 | 22 | int bpf_prog_load(enum bpf_prog_type prog_type, 23 | const struct bpf_insn *insns, int insn_len, 24 | const char *license, int kern_version); 25 | 26 | #define LOG_BUF_SIZE 65536 27 | extern char bpf_log_buf[LOG_BUF_SIZE]; 28 | 29 | /* ALU ops on registers, bpf_add|sub|...: dst_reg += src_reg */ 30 | 31 | #define BPF_ALU64_REG(OP, DST, SRC) \ 32 | ((struct bpf_insn) { \ 33 | .code = BPF_ALU64 | BPF_OP(OP) | BPF_X, \ 34 | .dst_reg = DST, \ 35 | .src_reg = SRC, \ 36 | .off = 0, \ 37 | .imm = 0 }) 38 | 39 | #define BPF_ALU32_REG(OP, DST, SRC) \ 40 | ((struct bpf_insn) { \ 41 | .code = BPF_ALU | BPF_OP(OP) | BPF_X, \ 42 | .dst_reg = DST, \ 43 | .src_reg = SRC, \ 44 | .off = 0, \ 45 | .imm = 0 }) 46 | 47 | /* ALU ops on immediates, bpf_add|sub|...: dst_reg += imm32 */ 48 | 49 | #define BPF_ALU64_IMM(OP, DST, IMM) \ 50 | ((struct bpf_insn) { \ 51 | .code 
= BPF_ALU64 | BPF_OP(OP) | BPF_K, \ 52 | .dst_reg = DST, \ 53 | .src_reg = 0, \ 54 | .off = 0, \ 55 | .imm = IMM }) 56 | 57 | #define BPF_ALU32_IMM(OP, DST, IMM) \ 58 | ((struct bpf_insn) { \ 59 | .code = BPF_ALU | BPF_OP(OP) | BPF_K, \ 60 | .dst_reg = DST, \ 61 | .src_reg = 0, \ 62 | .off = 0, \ 63 | .imm = IMM }) 64 | 65 | /* Short form of mov, dst_reg = src_reg */ 66 | 67 | #define BPF_MOV64_REG(DST, SRC) \ 68 | ((struct bpf_insn) { \ 69 | .code = BPF_ALU64 | BPF_MOV | BPF_X, \ 70 | .dst_reg = DST, \ 71 | .src_reg = SRC, \ 72 | .off = 0, \ 73 | .imm = 0 }) 74 | 75 | /* Short form of mov, dst_reg = imm32 */ 76 | 77 | #define BPF_MOV64_IMM(DST, IMM) \ 78 | ((struct bpf_insn) { \ 79 | .code = BPF_ALU64 | BPF_MOV | BPF_K, \ 80 | .dst_reg = DST, \ 81 | .src_reg = 0, \ 82 | .off = 0, \ 83 | .imm = IMM }) 84 | 85 | /* BPF_LD_IMM64 macro encodes single 'load 64-bit immediate' insn */ 86 | #define BPF_LD_IMM64(DST, IMM) \ 87 | BPF_LD_IMM64_RAW(DST, 0, IMM) 88 | 89 | #define BPF_LD_IMM64_RAW(DST, SRC, IMM) \ 90 | ((struct bpf_insn) { \ 91 | .code = BPF_LD | BPF_DW | BPF_IMM, \ 92 | .dst_reg = DST, \ 93 | .src_reg = SRC, \ 94 | .off = 0, \ 95 | .imm = (__u32) (IMM) }), \ 96 | ((struct bpf_insn) { \ 97 | .code = 0, /* zero is reserved opcode */ \ 98 | .dst_reg = 0, \ 99 | .src_reg = 0, \ 100 | .off = 0, \ 101 | .imm = ((__u64) (IMM)) >> 32 }) 102 | 103 | #ifndef BPF_PSEUDO_MAP_FD 104 | # define BPF_PSEUDO_MAP_FD 1 105 | #endif 106 | 107 | /* pseudo BPF_LD_IMM64 insn used to refer to process-local map_fd */ 108 | #define BPF_LD_MAP_FD(DST, MAP_FD) \ 109 | BPF_LD_IMM64_RAW(DST, BPF_PSEUDO_MAP_FD, MAP_FD) 110 | 111 | 112 | /* Direct packet access, R0 = *(uint *) (skb->data + imm32) */ 113 | 114 | #define BPF_LD_ABS(SIZE, IMM) \ 115 | ((struct bpf_insn) { \ 116 | .code = BPF_LD | BPF_SIZE(SIZE) | BPF_ABS, \ 117 | .dst_reg = 0, \ 118 | .src_reg = 0, \ 119 | .off = 0, \ 120 | .imm = IMM }) 121 | 122 | /* Memory load, dst_reg = *(uint *) (src_reg + off16) */ 123 | 124 | #define 
BPF_LDX_MEM(SIZE, DST, SRC, OFF) \ 125 | ((struct bpf_insn) { \ 126 | .code = BPF_LDX | BPF_SIZE(SIZE) | BPF_MEM, \ 127 | .dst_reg = DST, \ 128 | .src_reg = SRC, \ 129 | .off = OFF, \ 130 | .imm = 0 }) 131 | 132 | /* Memory store, *(uint *) (dst_reg + off16) = src_reg */ 133 | 134 | #define BPF_STX_MEM(SIZE, DST, SRC, OFF) \ 135 | ((struct bpf_insn) { \ 136 | .code = BPF_STX | BPF_SIZE(SIZE) | BPF_MEM, \ 137 | .dst_reg = DST, \ 138 | .src_reg = SRC, \ 139 | .off = OFF, \ 140 | .imm = 0 }) 141 | 142 | /* Memory store, *(uint *) (dst_reg + off16) = imm32 */ 143 | 144 | #define BPF_ST_MEM(SIZE, DST, OFF, IMM) \ 145 | ((struct bpf_insn) { \ 146 | .code = BPF_ST | BPF_SIZE(SIZE) | BPF_MEM, \ 147 | .dst_reg = DST, \ 148 | .src_reg = 0, \ 149 | .off = OFF, \ 150 | .imm = IMM }) 151 | 152 | /* Conditional jumps against registers, if (dst_reg 'op' src_reg) goto pc + off16 */ 153 | 154 | #define BPF_JMP_REG(OP, DST, SRC, OFF) \ 155 | ((struct bpf_insn) { \ 156 | .code = BPF_JMP | BPF_OP(OP) | BPF_X, \ 157 | .dst_reg = DST, \ 158 | .src_reg = SRC, \ 159 | .off = OFF, \ 160 | .imm = 0 }) 161 | 162 | /* Conditional jumps against immediates, if (dst_reg 'op' imm32) goto pc + off16 */ 163 | 164 | #define BPF_JMP_IMM(OP, DST, IMM, OFF) \ 165 | ((struct bpf_insn) { \ 166 | .code = BPF_JMP | BPF_OP(OP) | BPF_K, \ 167 | .dst_reg = DST, \ 168 | .src_reg = 0, \ 169 | .off = OFF, \ 170 | .imm = IMM }) 171 | 172 | /* Raw code statement block */ 173 | 174 | #define BPF_RAW_INSN(CODE, DST, SRC, OFF, IMM) \ 175 | ((struct bpf_insn) { \ 176 | .code = CODE, \ 177 | .dst_reg = DST, \ 178 | .src_reg = SRC, \ 179 | .off = OFF, \ 180 | .imm = IMM }) 181 | 182 | /* Program exit */ 183 | 184 | #define BPF_EXIT_INSN() \ 185 | ((struct bpf_insn) { \ 186 | .code = BPF_JMP | BPF_EXIT, \ 187 | .dst_reg = 0, \ 188 | .src_reg = 0, \ 189 | .off = 0, \ 190 | .imm = 0 }) 191 | 192 | /* create RAW socket and bind to interface 'name' */ 193 | int open_raw_sock(const char *name); 194 | 195 | struct 
perf_event_attr; 196 | int perf_event_open(struct perf_event_attr *attr, int pid, int cpu, 197 | int group_fd, unsigned long flags); 198 | #endif 199 | -------------------------------------------------------------------------------- /src/test/linux-samples-bpf/test_verifier.c: -------------------------------------------------------------------------------- 1 | /* 2 | * Testsuite for eBPF verifier 3 | * 4 | * Copyright (c) 2014 PLUMgrid, http://plumgrid.com 5 | * 6 | * This program is free software; you can redistribute it and/or 7 | * modify it under the terms of version 2 of the GNU General Public 8 | * License as published by the Free Software Foundation. 9 | */ 10 | #include 11 | #include 12 | #include 13 | #include 14 | #include 15 | #include 16 | #include 17 | #include 18 | #include 19 | #include 20 | #include 21 | #include "libbpf.h" 22 | 23 | #define MAX_INSNS 512 24 | #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x))) 25 | 26 | struct bpf_test { 27 | const char *descr; 28 | struct bpf_insn insns[MAX_INSNS]; 29 | int fixup[32]; 30 | const char *errstr; 31 | enum { 32 | ACCEPT, 33 | REJECT 34 | } result; 35 | enum bpf_prog_type prog_type; 36 | }; 37 | 38 | static struct bpf_test tests[] = { 39 | { 40 | "add+sub+mul", 41 | .insns = { 42 | BPF_MOV64_IMM(BPF_REG_1, 1), 43 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 2), 44 | BPF_MOV64_IMM(BPF_REG_2, 3), 45 | BPF_ALU64_REG(BPF_SUB, BPF_REG_1, BPF_REG_2), 46 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -1), 47 | BPF_ALU64_IMM(BPF_MUL, BPF_REG_1, 3), 48 | BPF_MOV64_REG(BPF_REG_0, BPF_REG_1), 49 | BPF_EXIT_INSN(), 50 | }, 51 | .result = ACCEPT, 52 | }, 53 | { 54 | "unreachable", 55 | .insns = { 56 | BPF_EXIT_INSN(), 57 | BPF_EXIT_INSN(), 58 | }, 59 | .errstr = "unreachable", 60 | .result = REJECT, 61 | }, 62 | { 63 | "unreachable2", 64 | .insns = { 65 | BPF_JMP_IMM(BPF_JA, 0, 0, 1), 66 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 67 | BPF_EXIT_INSN(), 68 | }, 69 | .errstr = "unreachable", 70 | .result = REJECT, 71 | }, 72 | { 73 | "out of range 
jump", 74 | .insns = { 75 | BPF_JMP_IMM(BPF_JA, 0, 0, 1), 76 | BPF_EXIT_INSN(), 77 | }, 78 | .errstr = "jump out of range", 79 | .result = REJECT, 80 | }, 81 | { 82 | "out of range jump2", 83 | .insns = { 84 | BPF_JMP_IMM(BPF_JA, 0, 0, -2), 85 | BPF_EXIT_INSN(), 86 | }, 87 | .errstr = "jump out of range", 88 | .result = REJECT, 89 | }, 90 | { 91 | "test1 ld_imm64", 92 | .insns = { 93 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), 94 | BPF_LD_IMM64(BPF_REG_0, 0), 95 | BPF_LD_IMM64(BPF_REG_0, 0), 96 | BPF_LD_IMM64(BPF_REG_0, 1), 97 | BPF_LD_IMM64(BPF_REG_0, 1), 98 | BPF_MOV64_IMM(BPF_REG_0, 2), 99 | BPF_EXIT_INSN(), 100 | }, 101 | .errstr = "invalid BPF_LD_IMM insn", 102 | .result = REJECT, 103 | }, 104 | { 105 | "test2 ld_imm64", 106 | .insns = { 107 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), 108 | BPF_LD_IMM64(BPF_REG_0, 0), 109 | BPF_LD_IMM64(BPF_REG_0, 0), 110 | BPF_LD_IMM64(BPF_REG_0, 1), 111 | BPF_LD_IMM64(BPF_REG_0, 1), 112 | BPF_EXIT_INSN(), 113 | }, 114 | .errstr = "invalid BPF_LD_IMM insn", 115 | .result = REJECT, 116 | }, 117 | { 118 | "test3 ld_imm64", 119 | .insns = { 120 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), 121 | BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0), 122 | BPF_LD_IMM64(BPF_REG_0, 0), 123 | BPF_LD_IMM64(BPF_REG_0, 0), 124 | BPF_LD_IMM64(BPF_REG_0, 1), 125 | BPF_LD_IMM64(BPF_REG_0, 1), 126 | BPF_EXIT_INSN(), 127 | }, 128 | .errstr = "invalid bpf_ld_imm64 insn", 129 | .result = REJECT, 130 | }, 131 | { 132 | "test4 ld_imm64", 133 | .insns = { 134 | BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0), 135 | BPF_EXIT_INSN(), 136 | }, 137 | .errstr = "invalid bpf_ld_imm64 insn", 138 | .result = REJECT, 139 | }, 140 | { 141 | "test5 ld_imm64", 142 | .insns = { 143 | BPF_RAW_INSN(BPF_LD | BPF_IMM | BPF_DW, 0, 0, 0, 0), 144 | }, 145 | .errstr = "invalid bpf_ld_imm64 insn", 146 | .result = REJECT, 147 | }, 148 | { 149 | "no bpf_exit", 150 | .insns = { 151 | BPF_ALU64_REG(BPF_MOV, BPF_REG_0, BPF_REG_2), 152 | }, 153 | .errstr = "jump out of range", 154 
| .result = REJECT, 155 | }, 156 | { 157 | "loop (back-edge)", 158 | .insns = { 159 | BPF_JMP_IMM(BPF_JA, 0, 0, -1), 160 | BPF_EXIT_INSN(), 161 | }, 162 | .errstr = "back-edge", 163 | .result = REJECT, 164 | }, 165 | { 166 | "loop2 (back-edge)", 167 | .insns = { 168 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 169 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), 170 | BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), 171 | BPF_JMP_IMM(BPF_JA, 0, 0, -4), 172 | BPF_EXIT_INSN(), 173 | }, 174 | .errstr = "back-edge", 175 | .result = REJECT, 176 | }, 177 | { 178 | "conditional loop", 179 | .insns = { 180 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 181 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_0), 182 | BPF_MOV64_REG(BPF_REG_3, BPF_REG_0), 183 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, -3), 184 | BPF_EXIT_INSN(), 185 | }, 186 | .errstr = "back-edge", 187 | .result = REJECT, 188 | }, 189 | { 190 | "read uninitialized register", 191 | .insns = { 192 | BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 193 | BPF_EXIT_INSN(), 194 | }, 195 | .errstr = "R2 !read_ok", 196 | .result = REJECT, 197 | }, 198 | { 199 | "read invalid register", 200 | .insns = { 201 | BPF_MOV64_REG(BPF_REG_0, -1), 202 | BPF_EXIT_INSN(), 203 | }, 204 | .errstr = "R15 is invalid", 205 | .result = REJECT, 206 | }, 207 | { 208 | "program doesn't init R0 before exit", 209 | .insns = { 210 | BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_1), 211 | BPF_EXIT_INSN(), 212 | }, 213 | .errstr = "R0 !read_ok", 214 | .result = REJECT, 215 | }, 216 | { 217 | "program doesn't init R0 before exit in all branches", 218 | .insns = { 219 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 220 | BPF_MOV64_IMM(BPF_REG_0, 1), 221 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 2), 222 | BPF_EXIT_INSN(), 223 | }, 224 | .errstr = "R0 !read_ok", 225 | .result = REJECT, 226 | }, 227 | { 228 | "stack out of bounds", 229 | .insns = { 230 | BPF_ST_MEM(BPF_DW, BPF_REG_10, 8, 0), 231 | BPF_EXIT_INSN(), 232 | }, 233 | .errstr = "invalid stack", 234 | .result = REJECT, 235 | }, 236 | { 237 | "invalid call insn1", 238 | 
.insns = { 239 | BPF_RAW_INSN(BPF_JMP | BPF_CALL | BPF_X, 0, 0, 0, 0), 240 | BPF_EXIT_INSN(), 241 | }, 242 | .errstr = "BPF_CALL uses reserved", 243 | .result = REJECT, 244 | }, 245 | { 246 | "invalid call insn2", 247 | .insns = { 248 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 1, 0), 249 | BPF_EXIT_INSN(), 250 | }, 251 | .errstr = "BPF_CALL uses reserved", 252 | .result = REJECT, 253 | }, 254 | { 255 | "invalid function call", 256 | .insns = { 257 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, 1234567), 258 | BPF_EXIT_INSN(), 259 | }, 260 | .errstr = "invalid func 1234567", 261 | .result = REJECT, 262 | }, 263 | { 264 | "uninitialized stack1", 265 | .insns = { 266 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 267 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 268 | BPF_LD_MAP_FD(BPF_REG_1, 0), 269 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 270 | BPF_EXIT_INSN(), 271 | }, 272 | .fixup = {2}, 273 | .errstr = "invalid indirect read from stack", 274 | .result = REJECT, 275 | }, 276 | { 277 | "uninitialized stack2", 278 | .insns = { 279 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 280 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -8), 281 | BPF_EXIT_INSN(), 282 | }, 283 | .errstr = "invalid read from stack", 284 | .result = REJECT, 285 | }, 286 | { 287 | "check valid spill/fill", 288 | .insns = { 289 | /* spill R1(ctx) into stack */ 290 | BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), 291 | 292 | /* fill it back into R2 */ 293 | BPF_LDX_MEM(BPF_DW, BPF_REG_2, BPF_REG_10, -8), 294 | 295 | /* should be able to access R0 = *(R2 + 8) */ 296 | /* BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, 8), */ 297 | BPF_MOV64_REG(BPF_REG_0, BPF_REG_2), 298 | BPF_EXIT_INSN(), 299 | }, 300 | .result = ACCEPT, 301 | }, 302 | { 303 | "check corrupted spill/fill", 304 | .insns = { 305 | /* spill R1(ctx) into stack */ 306 | BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_1, -8), 307 | 308 | /* mess up with R1 pointer on stack */ 309 | BPF_ST_MEM(BPF_B, BPF_REG_10, -7, 0x23), 310 | 311 | /* fill back 
into R0 should fail */ 312 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8), 313 | 314 | BPF_EXIT_INSN(), 315 | }, 316 | .errstr = "corrupted spill", 317 | .result = REJECT, 318 | }, 319 | { 320 | "invalid src register in STX", 321 | .insns = { 322 | BPF_STX_MEM(BPF_B, BPF_REG_10, -1, -1), 323 | BPF_EXIT_INSN(), 324 | }, 325 | .errstr = "R15 is invalid", 326 | .result = REJECT, 327 | }, 328 | { 329 | "invalid dst register in STX", 330 | .insns = { 331 | BPF_STX_MEM(BPF_B, 14, BPF_REG_10, -1), 332 | BPF_EXIT_INSN(), 333 | }, 334 | .errstr = "R14 is invalid", 335 | .result = REJECT, 336 | }, 337 | { 338 | "invalid dst register in ST", 339 | .insns = { 340 | BPF_ST_MEM(BPF_B, 14, -1, -1), 341 | BPF_EXIT_INSN(), 342 | }, 343 | .errstr = "R14 is invalid", 344 | .result = REJECT, 345 | }, 346 | { 347 | "invalid src register in LDX", 348 | .insns = { 349 | BPF_LDX_MEM(BPF_B, BPF_REG_0, 12, 0), 350 | BPF_EXIT_INSN(), 351 | }, 352 | .errstr = "R12 is invalid", 353 | .result = REJECT, 354 | }, 355 | { 356 | "invalid dst register in LDX", 357 | .insns = { 358 | BPF_LDX_MEM(BPF_B, 11, BPF_REG_1, 0), 359 | BPF_EXIT_INSN(), 360 | }, 361 | .errstr = "R11 is invalid", 362 | .result = REJECT, 363 | }, 364 | { 365 | "junk insn", 366 | .insns = { 367 | BPF_RAW_INSN(0, 0, 0, 0, 0), 368 | BPF_EXIT_INSN(), 369 | }, 370 | .errstr = "invalid BPF_LD_IMM", 371 | .result = REJECT, 372 | }, 373 | { 374 | "junk insn2", 375 | .insns = { 376 | BPF_RAW_INSN(1, 0, 0, 0, 0), 377 | BPF_EXIT_INSN(), 378 | }, 379 | .errstr = "BPF_LDX uses reserved fields", 380 | .result = REJECT, 381 | }, 382 | { 383 | "junk insn3", 384 | .insns = { 385 | BPF_RAW_INSN(-1, 0, 0, 0, 0), 386 | BPF_EXIT_INSN(), 387 | }, 388 | .errstr = "invalid BPF_ALU opcode f0", 389 | .result = REJECT, 390 | }, 391 | { 392 | "junk insn4", 393 | .insns = { 394 | BPF_RAW_INSN(-1, -1, -1, -1, -1), 395 | BPF_EXIT_INSN(), 396 | }, 397 | .errstr = "invalid BPF_ALU opcode f0", 398 | .result = REJECT, 399 | }, 400 | { 401 | "junk insn5", 402 | 
.insns = { 403 | BPF_RAW_INSN(0x7f, -1, -1, -1, -1), 404 | BPF_EXIT_INSN(), 405 | }, 406 | .errstr = "BPF_ALU uses reserved fields", 407 | .result = REJECT, 408 | }, 409 | { 410 | "misaligned read from stack", 411 | .insns = { 412 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 413 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_2, -4), 414 | BPF_EXIT_INSN(), 415 | }, 416 | .errstr = "misaligned access", 417 | .result = REJECT, 418 | }, 419 | { 420 | "invalid map_fd for function call", 421 | .insns = { 422 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 423 | BPF_ALU64_REG(BPF_MOV, BPF_REG_2, BPF_REG_10), 424 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 425 | BPF_LD_MAP_FD(BPF_REG_1, 0), 426 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem), 427 | BPF_EXIT_INSN(), 428 | }, 429 | .errstr = "fd 0 is not pointing to valid bpf_map", 430 | .result = REJECT, 431 | }, 432 | { 433 | "don't check return value before access", 434 | .insns = { 435 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 436 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 437 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 438 | BPF_LD_MAP_FD(BPF_REG_1, 0), 439 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 440 | BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0), 441 | BPF_EXIT_INSN(), 442 | }, 443 | .fixup = {3}, 444 | .errstr = "R0 invalid mem access 'map_value_or_null'", 445 | .result = REJECT, 446 | }, 447 | { 448 | "access memory with incorrect alignment", 449 | .insns = { 450 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 451 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 452 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 453 | BPF_LD_MAP_FD(BPF_REG_1, 0), 454 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 455 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 1), 456 | BPF_ST_MEM(BPF_DW, BPF_REG_0, 4, 0), 457 | BPF_EXIT_INSN(), 458 | }, 459 | .fixup = {3}, 460 | .errstr = "misaligned access", 461 | .result = REJECT, 462 | }, 463 | { 464 | "sometimes access memory with incorrect alignment", 465 | .insns = { 466 | 
BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 467 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 468 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 469 | BPF_LD_MAP_FD(BPF_REG_1, 0), 470 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 471 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 2), 472 | BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 0), 473 | BPF_EXIT_INSN(), 474 | BPF_ST_MEM(BPF_DW, BPF_REG_0, 0, 1), 475 | BPF_EXIT_INSN(), 476 | }, 477 | .fixup = {3}, 478 | .errstr = "R0 invalid mem access", 479 | .result = REJECT, 480 | }, 481 | { 482 | "jump test 1", 483 | .insns = { 484 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 485 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_1, -8), 486 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 1), 487 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0), 488 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 1), 489 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 1), 490 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 1), 491 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 2), 492 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 1), 493 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 3), 494 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 1), 495 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 4), 496 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 1), 497 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 5), 498 | BPF_MOV64_IMM(BPF_REG_0, 0), 499 | BPF_EXIT_INSN(), 500 | }, 501 | .result = ACCEPT, 502 | }, 503 | { 504 | "jump test 2", 505 | .insns = { 506 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 507 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 2), 508 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0), 509 | BPF_JMP_IMM(BPF_JA, 0, 0, 14), 510 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 2), 511 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 0), 512 | BPF_JMP_IMM(BPF_JA, 0, 0, 11), 513 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 2), 514 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 0), 515 | BPF_JMP_IMM(BPF_JA, 0, 0, 8), 516 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 2), 517 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -40, 0), 518 | BPF_JMP_IMM(BPF_JA, 0, 0, 5), 519 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 2), 520 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -48, 0), 521 | 
BPF_JMP_IMM(BPF_JA, 0, 0, 2), 522 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 1), 523 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -56, 0), 524 | BPF_MOV64_IMM(BPF_REG_0, 0), 525 | BPF_EXIT_INSN(), 526 | }, 527 | .result = ACCEPT, 528 | }, 529 | { 530 | "jump test 3", 531 | .insns = { 532 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 533 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 0, 3), 534 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -8, 0), 535 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 536 | BPF_JMP_IMM(BPF_JA, 0, 0, 19), 537 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 1, 3), 538 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -16, 0), 539 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -16), 540 | BPF_JMP_IMM(BPF_JA, 0, 0, 15), 541 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 2, 3), 542 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -32, 0), 543 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -32), 544 | BPF_JMP_IMM(BPF_JA, 0, 0, 11), 545 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 3, 3), 546 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -40, 0), 547 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -40), 548 | BPF_JMP_IMM(BPF_JA, 0, 0, 7), 549 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 4, 3), 550 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -48, 0), 551 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -48), 552 | BPF_JMP_IMM(BPF_JA, 0, 0, 3), 553 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, 5, 0), 554 | BPF_ST_MEM(BPF_DW, BPF_REG_2, -56, 0), 555 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -56), 556 | BPF_LD_MAP_FD(BPF_REG_1, 0), 557 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_delete_elem), 558 | BPF_EXIT_INSN(), 559 | }, 560 | .fixup = {24}, 561 | .result = ACCEPT, 562 | }, 563 | { 564 | "jump test 4", 565 | .insns = { 566 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 567 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 568 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 569 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 570 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 571 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 572 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 573 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 574 | 
BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 575 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 576 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 577 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 578 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 579 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 580 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 581 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 582 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 583 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 584 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 585 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 586 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 587 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 588 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 589 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 590 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 591 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 592 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 593 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 594 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 595 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 596 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 597 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 598 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 1), 599 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 2), 600 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 3), 601 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 4), 602 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0), 603 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0), 604 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0), 605 | BPF_JMP_IMM(BPF_JEQ, BPF_REG_1, BPF_REG_10, 0), 606 | BPF_MOV64_IMM(BPF_REG_0, 0), 607 | BPF_EXIT_INSN(), 608 | }, 609 | .result = ACCEPT, 610 | }, 611 | { 612 | "jump test 5", 613 | .insns = { 614 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 615 | BPF_MOV64_REG(BPF_REG_3, BPF_REG_2), 616 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 
617 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8), 618 | BPF_JMP_IMM(BPF_JA, 0, 0, 2), 619 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8), 620 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 621 | BPF_MOV64_IMM(BPF_REG_0, 0), 622 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 623 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8), 624 | BPF_JMP_IMM(BPF_JA, 0, 0, 2), 625 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8), 626 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 627 | BPF_MOV64_IMM(BPF_REG_0, 0), 628 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 629 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8), 630 | BPF_JMP_IMM(BPF_JA, 0, 0, 2), 631 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8), 632 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 633 | BPF_MOV64_IMM(BPF_REG_0, 0), 634 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 635 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8), 636 | BPF_JMP_IMM(BPF_JA, 0, 0, 2), 637 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8), 638 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 639 | BPF_MOV64_IMM(BPF_REG_0, 0), 640 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 641 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_3, -8), 642 | BPF_JMP_IMM(BPF_JA, 0, 0, 2), 643 | BPF_STX_MEM(BPF_DW, BPF_REG_2, BPF_REG_2, -8), 644 | BPF_JMP_IMM(BPF_JA, 0, 0, 0), 645 | BPF_MOV64_IMM(BPF_REG_0, 0), 646 | BPF_EXIT_INSN(), 647 | }, 648 | .result = ACCEPT, 649 | }, 650 | { 651 | "access skb fields ok", 652 | .insns = { 653 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 654 | offsetof(struct __sk_buff, len)), 655 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1), 656 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 657 | offsetof(struct __sk_buff, mark)), 658 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1), 659 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 660 | offsetof(struct __sk_buff, pkt_type)), 661 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1), 662 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 663 | offsetof(struct __sk_buff, queue_mapping)), 664 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0), 665 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 666 | offsetof(struct __sk_buff, 
protocol)), 667 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0), 668 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 669 | offsetof(struct __sk_buff, vlan_present)), 670 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0), 671 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 672 | offsetof(struct __sk_buff, vlan_tci)), 673 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 0), 674 | BPF_EXIT_INSN(), 675 | }, 676 | .result = ACCEPT, 677 | }, 678 | { 679 | "access skb fields bad1", 680 | .insns = { 681 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, -4), 682 | BPF_EXIT_INSN(), 683 | }, 684 | .errstr = "invalid bpf_context access", 685 | .result = REJECT, 686 | }, 687 | { 688 | "access skb fields bad2", 689 | .insns = { 690 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 9), 691 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 692 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 693 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 694 | BPF_LD_MAP_FD(BPF_REG_1, 0), 695 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 696 | BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), 697 | BPF_EXIT_INSN(), 698 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 699 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 700 | offsetof(struct __sk_buff, pkt_type)), 701 | BPF_EXIT_INSN(), 702 | }, 703 | .fixup = {4}, 704 | .errstr = "different pointers", 705 | .result = REJECT, 706 | }, 707 | { 708 | "access skb fields bad3", 709 | .insns = { 710 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 2), 711 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 712 | offsetof(struct __sk_buff, pkt_type)), 713 | BPF_EXIT_INSN(), 714 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 715 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 716 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 717 | BPF_LD_MAP_FD(BPF_REG_1, 0), 718 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 719 | BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), 720 | BPF_EXIT_INSN(), 721 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 722 | BPF_JMP_IMM(BPF_JA, 0, 0, -12), 723 | }, 724 | .fixup = {6}, 725 | .errstr = "different pointers", 726 | .result = REJECT, 727 | 
}, 728 | { 729 | "access skb fields bad4", 730 | .insns = { 731 | BPF_JMP_IMM(BPF_JGE, BPF_REG_1, 0, 3), 732 | BPF_LDX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 733 | offsetof(struct __sk_buff, len)), 734 | BPF_MOV64_IMM(BPF_REG_0, 0), 735 | BPF_EXIT_INSN(), 736 | BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0), 737 | BPF_MOV64_REG(BPF_REG_2, BPF_REG_10), 738 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8), 739 | BPF_LD_MAP_FD(BPF_REG_1, 0), 740 | BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem), 741 | BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1), 742 | BPF_EXIT_INSN(), 743 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 744 | BPF_JMP_IMM(BPF_JA, 0, 0, -13), 745 | }, 746 | .fixup = {7}, 747 | .errstr = "different pointers", 748 | .result = REJECT, 749 | }, 750 | { 751 | "check skb->mark is not writeable by sockets", 752 | .insns = { 753 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 754 | offsetof(struct __sk_buff, mark)), 755 | BPF_EXIT_INSN(), 756 | }, 757 | .errstr = "invalid bpf_context access", 758 | .result = REJECT, 759 | }, 760 | { 761 | "check skb->tc_index is not writeable by sockets", 762 | .insns = { 763 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 764 | offsetof(struct __sk_buff, tc_index)), 765 | BPF_EXIT_INSN(), 766 | }, 767 | .errstr = "invalid bpf_context access", 768 | .result = REJECT, 769 | }, 770 | { 771 | "check non-u32 access to cb", 772 | .insns = { 773 | BPF_STX_MEM(BPF_H, BPF_REG_1, BPF_REG_1, 774 | offsetof(struct __sk_buff, cb[0])), 775 | BPF_EXIT_INSN(), 776 | }, 777 | .errstr = "invalid bpf_context access", 778 | .result = REJECT, 779 | }, 780 | { 781 | "check out of range skb->cb access", 782 | .insns = { 783 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 784 | offsetof(struct __sk_buff, cb[60])), 785 | BPF_EXIT_INSN(), 786 | }, 787 | .errstr = "invalid bpf_context access", 788 | .result = REJECT, 789 | .prog_type = BPF_PROG_TYPE_SCHED_ACT, 790 | }, 791 | { 792 | "write skb fields from socket prog", 793 | .insns = { 794 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 795 | 
offsetof(struct __sk_buff, cb[4])), 796 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1), 797 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 798 | offsetof(struct __sk_buff, mark)), 799 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 800 | offsetof(struct __sk_buff, tc_index)), 801 | BPF_JMP_IMM(BPF_JGE, BPF_REG_0, 0, 1), 802 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 803 | offsetof(struct __sk_buff, cb[0])), 804 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_1, 805 | offsetof(struct __sk_buff, cb[2])), 806 | BPF_EXIT_INSN(), 807 | }, 808 | .result = ACCEPT, 809 | }, 810 | { 811 | "write skb fields from tc_cls_act prog", 812 | .insns = { 813 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 814 | offsetof(struct __sk_buff, cb[0])), 815 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 816 | offsetof(struct __sk_buff, mark)), 817 | BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1, 818 | offsetof(struct __sk_buff, tc_index)), 819 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 820 | offsetof(struct __sk_buff, tc_index)), 821 | BPF_STX_MEM(BPF_W, BPF_REG_1, BPF_REG_0, 822 | offsetof(struct __sk_buff, cb[3])), 823 | BPF_EXIT_INSN(), 824 | }, 825 | .result = ACCEPT, 826 | .prog_type = BPF_PROG_TYPE_SCHED_CLS, 827 | }, 828 | { 829 | "PTR_TO_STACK store/load", 830 | .insns = { 831 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), 832 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -10), 833 | BPF_ST_MEM(BPF_DW, BPF_REG_1, 2, 0xfaceb00c), 834 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 2), 835 | BPF_EXIT_INSN(), 836 | }, 837 | .result = ACCEPT, 838 | }, 839 | { 840 | "PTR_TO_STACK store/load - bad alignment on off", 841 | .insns = { 842 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), 843 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), 844 | BPF_ST_MEM(BPF_DW, BPF_REG_1, 2, 0xfaceb00c), 845 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 2), 846 | BPF_EXIT_INSN(), 847 | }, 848 | .result = REJECT, 849 | .errstr = "misaligned access off -6 size 8", 850 | }, 851 | { 852 | "PTR_TO_STACK store/load - bad alignment on reg", 853 | .insns = { 854 | BPF_MOV64_REG(BPF_REG_1, 
BPF_REG_10), 855 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -10), 856 | BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), 857 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), 858 | BPF_EXIT_INSN(), 859 | }, 860 | .result = REJECT, 861 | .errstr = "misaligned access off -2 size 8", 862 | }, 863 | { 864 | "PTR_TO_STACK store/load - out of bounds low", 865 | .insns = { 866 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), 867 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -80000), 868 | BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), 869 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), 870 | BPF_EXIT_INSN(), 871 | }, 872 | .result = REJECT, 873 | .errstr = "invalid stack off=-79992 size=8", 874 | }, 875 | { 876 | "PTR_TO_STACK store/load - out of bounds high", 877 | .insns = { 878 | BPF_MOV64_REG(BPF_REG_1, BPF_REG_10), 879 | BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, -8), 880 | BPF_ST_MEM(BPF_DW, BPF_REG_1, 8, 0xfaceb00c), 881 | BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_1, 8), 882 | BPF_EXIT_INSN(), 883 | }, 884 | .result = REJECT, 885 | .errstr = "invalid stack off=0 size=8", 886 | }, 887 | }; 888 | 889 | static int probe_filter_length(struct bpf_insn *fp) 890 | { 891 | int len = 0; 892 | 893 | for (len = MAX_INSNS - 1; len > 0; --len) 894 | if (fp[len].code != 0 || fp[len].imm != 0) 895 | break; 896 | 897 | return len + 1; 898 | } 899 | 900 | static int create_map(void) 901 | { 902 | long long key, value = 0; 903 | int map_fd; 904 | 905 | map_fd = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(key), sizeof(value), 1024); 906 | if (map_fd < 0) { 907 | printf("failed to create map '%s'\n", strerror(errno)); 908 | } 909 | 910 | return map_fd; 911 | } 912 | 913 | static int gen_fuzzer_tests(char *test_dir, int plen) 914 | { 915 | char fname[plen + 12], tmp_buf[12]; 916 | int i, fd, len; 917 | 918 | len = strlen(test_dir); 919 | strcpy(fname, test_dir); 920 | 921 | for (i = 0; i < ARRAY_SIZE(tests); i++) { 922 | struct bpf_insn *prog = tests[i].insns; 923 | int prog_len = probe_filter_length(prog); 924 | 925 | 
sprintf(tmp_buf, "/init_%d", i); 926 | strcpy(&fname[len], tmp_buf); 927 | 928 | fd = open(fname, O_CREAT | O_WRONLY | O_TRUNC, S_IREAD | S_IWUSR); 929 | if (fd == -1) { 930 | fprintf(stderr, "open %s error: %s\n", fname, strerror(errno)); 931 | return 1; 932 | } 933 | write(fd, prog, prog_len * sizeof(struct bpf_insn)); 934 | close(fd); 935 | } 936 | 937 | return 0; 938 | } 939 | 940 | static int test(void) 941 | { 942 | int prog_fd, i, pass_cnt = 0, err_cnt = 0; 943 | 944 | for (i = 0; i < ARRAY_SIZE(tests); i++) { 945 | struct bpf_insn *prog = tests[i].insns; 946 | int prog_type = tests[i].prog_type; 947 | int prog_len = probe_filter_length(prog); 948 | int *fixup = tests[i].fixup; 949 | int map_fd = -1; 950 | 951 | if (*fixup) { 952 | map_fd = create_map(); 953 | 954 | do { 955 | prog[*fixup].imm = map_fd; 956 | fixup++; 957 | } while (*fixup); 958 | } 959 | printf("#%d %s ", i, tests[i].descr); 960 | 961 | prog_fd = bpf_prog_load(prog_type ?: BPF_PROG_TYPE_SOCKET_FILTER, 962 | prog, prog_len * sizeof(struct bpf_insn), 963 | "GPL", 0); 964 | 965 | if (tests[i].result == ACCEPT) { 966 | if (prog_fd < 0) { 967 | printf("FAIL\nfailed to load prog '%s'\n", 968 | strerror(errno)); 969 | printf("%s", bpf_log_buf); 970 | err_cnt++; 971 | goto fail; 972 | } 973 | } else { 974 | if (prog_fd >= 0) { 975 | printf("FAIL\nunexpected success to load\n"); 976 | printf("%s", bpf_log_buf); 977 | err_cnt++; 978 | goto fail; 979 | } 980 | if (strstr(bpf_log_buf, tests[i].errstr) == 0) { 981 | printf("FAIL\nunexpected error message: %s", 982 | bpf_log_buf); 983 | err_cnt++; 984 | goto fail; 985 | } 986 | } 987 | 988 | pass_cnt++; 989 | printf("OK\n"); 990 | fail: 991 | #ifdef TEST_WORKAROUND 992 | (void)1; 993 | #else 994 | if (map_fd >= 0) 995 | close(map_fd); 996 | close(prog_fd); 997 | #endif 998 | 999 | } 1000 | printf("Summary: %d PASSED, %d FAILED\n", pass_cnt, err_cnt); 1001 | 1002 | return 0; 1003 | } 1004 | 1005 | static void usage(char *prog) 1006 | { 1007 | printf("%s 
[-g fuzzer_corpus_dir]\n", prog); 1008 | } 1009 | 1010 | int main(int argc, char **argv) 1011 | { 1012 | if (argc > 1) { 1013 | if (argc == 3 && strcmp(argv[1], "-g") == 0) { 1014 | /* generate test cases for fuzzer, no need to run the test */ 1015 | int ret; 1016 | char *test_dir = argv[2]; 1017 | ret = mkdir(test_dir, 0755); 1018 | if (ret != 0 && errno != EEXIST) { 1019 | fprintf(stderr, "mkdir error: %s\n", strerror(errno)); 1020 | return 1; 1021 | } 1022 | return gen_fuzzer_tests(test_dir, strlen(test_dir)); 1023 | } else { 1024 | usage(argv[0]); 1025 | return 1; 1026 | } 1027 | } 1028 | 1029 | return test(); 1030 | } 1031 | --------------------------------------------------------------------------------