├── LICENSE ├── README.md ├── X86GFree ├── X86GFree.cpp ├── X86GFreeAssembler.cpp ├── X86GFreeAssembler.h ├── X86GFreeImmediateRecon.cpp ├── X86GFreeJCP.cpp ├── X86GFreeModRMSIB.cpp ├── X86GFreeUtils.cpp ├── X86GFreeUtils.h └── X86MCInstLower.h ├── install.sh └── patches └── llvm.patch /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2016 pagabuc 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## GFree 2 | This is a LLVM-based implementation of 3 | [GFree](https://seclab.ccs.neu.edu/static/publications/acsac2010rop.pdf) for Intel x86-64. 
4 | 5 | In the next paragraphs I will briefly recap all the transformations 6 | GFree applies to produce gadget-less binaries, but I still suggest you 7 | read the original paper (the installer needs to build a fresh 8 | installation of LLVM anyway, so you will probably have time for a 9 | coffee!) 10 | 11 | This is the implementation status of the features presented in the paper: 12 | 13 | - [x] Alignment Sleds 14 | - [x] Return Address Protection 15 | - [x] Frame Cookies 16 | - [x] Register Reallocation 17 | - [x] Instruction Transformation 18 | - [ ] Jump Offset Adjustments 19 | - [x] Immediate and Displacement Reconstructions 20 | - [ ] Inter-Instruction Barrier 21 | 22 | 23 | #### tl;dr 24 | ``` 25 | bash install.sh 26 | jmp FAQ 27 | ``` 28 | 29 | ### What do we want? 30 | 31 | We want to reduce (or ideally eliminate) the ability of an 32 | attacker to mount a Return Oriented Programming (ROP) or Jump Oriented 33 | Programming (JOP) attack. Those attacks are based on gadgets: short 34 | pieces of code that first execute a small task (e.g. load an 35 | immediate into a register) and then pass control to another 36 | gadget. Given this definition we can divide a gadget into two logical 37 | sections, the *code* and the *linking* section. The first part is the 38 | one that executes the task, while the second one chains two gadgets 39 | together. The very end of the linking section has to be a *free-branch* 40 | instruction, i.e. an instruction that changes the control flow of the 41 | program. GFree's interest is in these free-branch instructions: 42 | if we remove them, we break {R,J}OP. Sounds fair enough, doesn't it? 43 | 44 | ### How we can do it 45 | 46 | Long story short, ROP uses `ret`, and JOP uses `call *` and `jmp *`, to 47 | chain gadgets together.
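These free-branch bytes can be located with a naive scan. The sketch below is illustrative Python, not part of GFree, and a real gadget finder would also decode the instructions around each hit:

```python
# Opcode bytes of free-branch instructions. Note: 0xff is only a free branch
# when followed by a suitable ModR/M byte; this sketch flags it conservatively.
FREE_BRANCH = {0xc2, 0xc3, 0xca, 0xcb, 0xff}

def find_free_branches(code: bytes):
    """Return the offsets of candidate free-branch bytes, aligned or not."""
    return [i for i, b in enumerate(code) if b in FREE_BRANCH]

# The unaligned 0xc3 inside "48 c7 c0 aa c3 aa 00" (mov rax,0xaac3aa)
# is found exactly like a real ret:
print(find_free_branches(bytes([0x48, 0xc7, 0xc0, 0xaa, 0xc3, 0xaa, 0x00])))  # prints [4]
```

Because the scan works on raw bytes, it makes no distinction between a `ret` the compiler emitted and one hiding inside an immediate, which is precisely the attacker's view of the binary.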
48 | 49 | ``` 50 | +---+---+---+---+---+---+---+---+ 51 | | Encoding | Instruction | 52 | +---+---+---+---+---+---+---+---+ 53 | | 0xc{2,3,a,b} | ret | 54 | +---+---+---+---+---+---+---+---+ 55 | | 0xff 0xXX | jmp*/call* | 56 | +---+---+---+---+---+---+---+---+ 57 | Free-branch instructions for ROP and JOP 58 | ``` 59 | 60 | Unfortunately for us, free-branch instructions can be found both in an 61 | aligned way (e.g. `ret` at the end of a function) and in an unaligned 62 | one (e.g. `ret` inside `mov %rax,0xaaC3aa`), since the Intel 63 | architecture does not force any execution alignment for instructions. 64 | GFree handles both cases with two different sets of techniques: 65 | unaligned gadgets are *removed* and aligned ones are *protected*. 66 | 67 | 68 | ## Unaligned free-branch 69 | 70 | An unaligned free-branch lives inside an instruction. The first step 71 | in removing them is to understand the semantics of the different 72 | fields that compose an instruction. This is covered pretty well by the 73 | Intel Manual, but here's a summary of the Intel instruction format: 74 | 75 | ``` 76 | +---+---+---+---+---+---+---+----+---+---+---+---+---+---+---+----+ 77 | | PREFIXES | OPCODE | MODR/M + SIB | OFFSET | IMMEDIATE | 78 | +---+---+---+---+---+---+---+----+---+---+---+---+---+---+---+----+ 79 | Intel Instruction Format 80 | ``` 81 | 82 | Each of these fields requires its own specific technique to 83 | remove the free-branch that lives there. All the techniques we 84 | propose are implemented as LLVM backend passes. 85 | 86 | #### Immediate & Offset 87 | 88 | The immediate and offset reconstruction pass 89 | (`X86GFreeImmediateRecon.cpp`) is hooked before the register allocation 90 | (`addPreRegAlloc`). It replaces an "evil" instruction with multiple 91 | "safe" instructions, which preserve the semantics but don't 92 | contain any unaligned gadget.
93 | 94 | Since an example is worth a thousand words, the following instruction 95 | 96 | ``` 97 | 05 aa C3 00 00 add eax,0xc3aa 98 | ``` 99 | is rewritten to 100 | 101 | ``` 102 | bb aa 03 00 00 mov ebx,0x3aa 103 | 81 cb 00 c0 00 00 or ebx,0xc000 104 | 01 d8 add eax,ebx 105 | ``` 106 | 107 | and the 0xc3 is successfully removed! 108 | 109 | Offsets are handled in a similar way. For example, 110 | `67 89 98 ff C3 00 00 mov DWORD PTR [eax+0xc3ff],ebx` is translated into: 111 | ``` 112 | b9 ff c0 00 00 mov ecx,0xc0ff 113 | 81 c9 00 03 00 00 or ecx,0x300 114 | 01 c8 add eax,ecx 115 | 67 89 18 mov DWORD PTR [eax],ebx 116 | 29 c8 sub eax,ecx 117 | ``` 118 | 119 | The current implementation also handles some corner cases where EFLAGS 120 | must be preserved, by pushing/popping it to/from the stack. 121 | 122 | #### ModR/M + SIB 123 | 124 | The ModR/M and SIB fields specify the format of the operands of an 125 | instruction. So, for example, in `89 d8 mov eax,ebx` the value 0xd8 126 | tells that the first register is `eax` and the second is `ebx`. Similarly, 127 | 0xca in `67 8d 44 ca 08 lea eax,[edx+ecx*8+0x8]` indicates that the base is 128 | `edx` and the index is `ecx*8`. 129 | 130 | The pass that handles these cases (`X86GFreeModRMSIB.cpp`) is hooked after 131 | the register allocation but before the register rewriting. At this 132 | point, the MachineInstructions (*MIs*) are 133 | still written with virtual registers but a map (VirtualRegisterMap) 134 | contains - for each virtual register - the allocated physical 135 | register. The first way to remove an unaligned free-branch is to 136 | reallocate a virtual register in such a way that the ModRM field becomes 137 | "safe". The reallocation must be done without breaking the existing 138 | live intervals... using a physical register which is live where the 139 | instruction is, is definitely not a good idea!
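Going back to the immediate reconstruction for a moment, the splitting idea it relies on can be sketched in Python. This is an illustrative sketch with hypothetical helper names, not the code of the LLVM pass, and it splits every evil byte nibble-wise (the real pass may pick other splits, as the 0xc3ff example above shows):

```python
EVIL = {0xc2, 0xc3, 0xca, 0xcb, 0xff}  # ret variants; 0xff treated conservatively

def has_evil_byte(value, width=4):
    """True if any byte of the little-endian encoding of value is evil."""
    return any(((value >> (8 * i)) & 0xff) in EVIL for i in range(width))

def split_immediate(imm, width=4):
    """Split imm into (lo, hi) such that lo | hi == imm and neither part
    contains an evil byte: each evil byte is split into its two nibbles,
    and nibble fragments (0x0Y / 0xY0) are never in the evil set."""
    lo = hi = 0
    for i in range(width):
        b = (imm >> (8 * i)) & 0xff
        if b in EVIL:
            lo |= (b & 0x0f) << (8 * i)   # e.g. 0xc3 -> 0x03
            hi |= (b & 0xf0) << (8 * i)   # e.g. 0xc3 -> 0xc0
        else:
            lo |= b << (8 * i)
    return lo, hi

print(hex(split_immediate(0xc3aa)[0]), hex(split_immediate(0xc3aa)[1]))  # prints 0x3aa 0xc000
```

For 0xc3aa this reproduces exactly the `mov ebx,0x3aa` / `or ebx,0xc000` pair shown above; the `or` then rebuilds the original value in a scratch register without the evil byte ever appearing in the instruction stream.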
140 | 141 | To understand if a register reallocation correctly sanitizes the 142 | instruction, we wrote an assembler from MachineInstr to bytes 143 | (`X86GFreeAssembler.cpp`). The process is iterative: we simply try all 144 | the available registers. 145 | 146 | If a register cannot be found, the ~~dirty~~fallback solution 147 | kicks in. As usual, code talks more than words, so: 148 | 149 | ``` 150 | 00 c3 add bl,al 151 | ``` 152 | 153 | is transformed by the fallback solution into 154 | 155 | ``` 156 | 41 55 push r13 157 | 41 88 dd mov r13b,bl 158 | 41 00 c5 add r13b,al 159 | 44 88 eb mov bl,r13b 160 | 41 5d pop r13 161 | ``` 162 | 163 | 164 | #### Prefixes 165 | 166 | No prefix contains evil bytes, and the only two instructions whose 167 | opcodes can be malicious are `movnti` and `bswap`. 168 | 169 | ## Aligned Free-Branch 170 | 171 | Aligned free-branches are those that normally live in a program and 172 | cannot be removed. For example: `ret` at the end of a 173 | function, or `call eax` inside a function. GFree adopts two different 174 | techniques to protect them. 175 | 176 | #### Return Address Protection 177 | 178 | To protect aligned `ret`, the entry point of every function is 179 | instrumented with a routine that encrypts the saved return 180 | address. This routine is an xor between the return address 181 | and a random key (taken from fs:0x28): 182 | ``` 183 | 64 4c 8b 1c 25 28 00 00 00 mov %fs:0x28,%r11 184 | 4c 31 1c 24 xor %r11,(%rsp) 185 | 186 | ``` 187 | 188 | Symmetrically, each exit point is instrumented with a decryption 189 | routine that xors the saved return address again with fs:0x28: 190 | 191 | ``` 192 | 64 4c 8b 1c 25 28 00 00 00 mov %fs:0x28,%r11 193 | 4c 31 1c 24 xor %r11,(%rsp) 194 | c3 retq 195 | ``` 196 | 197 | This protection works because, without knowing the content of fs:0x28, 198 | the attacker is not able to forge a valid return address. 199 | 200 | Moreover, each decryption routine is prepended with a sled of 9 201 | nops.
This ensures the routine will be executed from start to end, no 202 | matter what the execution alignment was before. 203 | 204 | The Return Address Protection is implemented in `X86GFree.cpp`. 205 | 206 | #### Jump Control Protection 207 | 208 | The protection scheme for *indirect calls* and *jumps* is based on a 209 | random cookie pushed on the stack. Every function that contains at 210 | least one instance of these instructions is instrumented with a 211 | header that computes the xor of a non-secret random integer and a secret 212 | key: 213 | 214 | ``` 215 | 49 bb 47 b8 1f 44 ee 03 97 52 movabs $0x529703ee441fb847,%r11 216 | 64 4c 33 1c 25 28 00 00 00 xor %fs:0x28,%r11 217 | 4c 89 5d d0 mov %r11,-0x30(%rbp) 218 | 219 | ``` 220 | This value is then checked before every indirect transfer: 221 | ``` 222 | 49 bb 47 b8 1f 44 ee 03 97 52 movabs $0x529703ee441fb847,%r11 223 | 4c 33 5d d0 xor -0x30(%rbp),%r11 224 | 64 4c 3b 1c 25 28 00 00 00 cmp %fs:0x28,%r11 225 | 0f 84 01 00 00 00 je 400638 226 | f4 hlt 227 | ff 55 e8 callq *-0x18(%rbp) 228 | ``` 229 | 230 | If the check fails, the function has not been executed from the very 231 | beginning. This means the attacker jumped into the middle of it, and the 232 | indirect transfer is denied by GFree. Also in this case, the routine 233 | is prepended with a sled of 9 nops. 234 | 235 | The Jump Control Protection is implemented in `X86GFreeJCP.cpp`.
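The xor trick underlying both protections can be modeled in a few lines of Python. This is an illustrative model (the key source is a stand-in for %fs:0x28), not GFree code:

```python
import os

SECRET_KEY = int.from_bytes(os.urandom(8), "little")  # stands in for %fs:0x28

def prologue(ret_addr):
    # Models: mov %fs:0x28,%r11 ; xor %r11,(%rsp) at function entry.
    return ret_addr ^ SECRET_KEY

def epilogue(stored):
    # The same xor at the exit point restores the original return address.
    return stored ^ SECRET_KEY

addr = 0x400638
assert epilogue(prologue(addr)) == addr       # legitimate flow returns correctly

# An attacker who overwrites the encrypted slot without knowing the key
# cannot make the epilogue produce a chosen target:
forged = prologue(addr) ^ 0x41414141
assert epilogue(forged) != addr               # decrypts to garbage, not the target
```

The 9-nop sled plays the same role in both schemes: an attacker jumping past it would skip the xor (or the cookie check) entirely, so the decrypted return address, or the comparison against %fs:0x28, fails.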
236 | 237 | ### Overhead 238 | 239 | Phoronix Test Suite v6.2.2: 240 | 241 | Program | Clang Native | Clang G-Free | Overhead (%) | 242 | ------------------------------ | ----------- | ------------ | -------------- | 243 | Gcrypt Library |1518 |1602 | 5.55 | 244 | John The Ripper |4966 |4521 | 8.96 | 245 | John The Ripper |16937167 |15708667 | 7.25 | 246 | John The Ripper |70871 |53494 | 24.52 | 247 | x264 |155.70 |132.05 | 15.19 | 248 | 7-Zip Compression |21172 |18983 | 10.34 | 249 | Parallel BZIP2 Compression |10.43 |11.01 | 5.56 | 250 | Gzip Compression |11.55 |11.59 | 0.35 | 251 | LZMA Compression |332.14 |334.47 | 0.70 | 252 | Monkey Audio Encoding |5.55 |5.86 | 5.59 | 253 | FLAC Audio Encoding |8.13 |8.01 | -1.48 | 254 | LAME MP3 Encoding |13.04 |13.18 | 1.07 | 255 | Ogg Encoding |7.21 |7.37 | 2.22 | 256 | WavPack Audio Encoding |8.87 |9.03 | 1.80 | 257 | FFmpeg |11.27 |11.63 | 3.19 | 258 | GnuPG |7.62 |7.59 | -0.39 | 259 | Mencoder |21.23 |22.43 | 5.65 | 260 | OpenSSL |543.13 |532.30 | 1.99 | 261 | 262 | 263 | A more detailed version of the results is available [here](http://www.s3.eurecom.fr/~pagabuc/gfree/benchmark.html) 264 | 265 | 266 | ### Evaluation 267 | 268 | The current implementation is able to compile medium-size applications such as 269 | coreutils, apache, ffmpeg, gzip, lame, openssl, sqlite, util-linux, wireshark, and evince. 270 | These programs also pass *all* of their included tests. 271 | 272 | The following table compares the number of gadgets with and without GFree.
273 | 274 | Program | Clang-GFree | Clang | Reduction (%) | 275 | -------------- | ----------- | ----- | ------------- | 276 | gzip | 415 | 995 | 58.0 | 277 | httpd | 2992 | 5852 | 48.8 | 278 | lame | 1771 | 4586 | 61.3 | 279 | libxml2 | 6567 | 26295 | 75.0 | 280 | coreutils(ls) | 545 | 1133 | 51.8 | 281 | openssl | 19741 | 35916 | 45.0 | 282 | sqlite3 | 3650 | 11285 | 67.6 | 283 | **TOTAL** | 35681 | 86062 |**58.5**| 284 | 285 | 286 | If you are wondering why the Clang-GFree column is not zero, please 287 | go ahead and read the TODO. 288 | 289 | 290 | ### TODO 291 | 292 | Contributions are very welcome. The next big step for the project is 293 | to avoid the introduction of new gadgets by the linker when it 294 | applies relocations. In my mind, iteratively adding nops here and there should 295 | converge, but I feel there might exist a less lazy and more optimized way to solve 296 | this problem ;-) 297 | 298 | There are also small fixes, like adding support for floating-point 299 | registers in the register reallocation, extending the immediate 300 | reconstruction to any missing instructions (e.g. `IMUL64rri`), and 301 | emitting optimized nops (instead of "nop"*9, emit "nop word [rax+rax+0x0]"). 302 | 303 | Last but not least, offsets of relative jumps are calculated during 304 | compilation, and they can introduce new gadgets as well. I currently 305 | have a (somewhat) working implementation of this, so tell me if you want 306 | to complete the job! 307 | 308 | ### CONTACT 309 | 310 | If you are interested in working on GFree, please ping me!
311 | 312 | Mail: python -c "print 'pa%s%seurecom.%s' % ('gani', '@', 'fr')" 313 | 314 | Twitter: @pagabuc 315 | 316 | IRC: pagabuc on Freenode -------------------------------------------------------------------------------- /X86GFree/X86GFree.cpp: -------------------------------------------------------------------------------- 1 | //===-- X86GFree.cpp - Make your binary rop-free -----------===// 2 | // 3 | // The LLVM Compiler Infrastructure 4 | // 5 | // This file is distributed under the University of Illinois Open Source 6 | // License. See LICENSE.TXT for details. 7 | // 8 | //===----------------------------------------------------------------------===// 9 | // 10 | // This file defines the pass .... 11 | // 12 | //===----------------------------------------------------------------------===// 13 | 14 | 15 | 16 | #include "X86.h" 17 | #include "X86InstrBuilder.h" 18 | #include "X86TargetMachine.h" 19 | 20 | #include "llvm/ADT/Statistic.h" 21 | #include "llvm/CodeGen/MachineFunctionPass.h" 22 | #include "llvm/CodeGen/MachineFunction.h" 23 | #include "llvm/CodeGen/MachineInstrBuilder.h" 24 | #include "llvm/CodeGen/Passes.h" 25 | #include "llvm/Support/raw_ostream.h" 26 | #include "X86GFreeUtils.h" 27 | 28 | using namespace llvm; 29 | 30 | // Then, on the command line, you can specify '-debug-only=foo' 31 | #define DEBUG_TYPE "gfree" 32 | 33 | STATISTIC(Rap , "Number of return address protection inserted"); 34 | namespace { 35 | 36 | class GFreeMachinePass : public MachineFunctionPass { 37 | public: 38 | GFreeMachinePass() : MachineFunctionPass(ID) {} 39 | bool runOnMachineFunction(MachineFunction &MF) override; 40 | const char *getPassName() const override { return "GFree Main Module"; } 41 | static char ID; 42 | }; 43 | 44 | char GFreeMachinePass::ID = 0; 45 | } 46 | 47 | FunctionPass *llvm::createGFreeMachinePass() { 48 | return new GFreeMachinePass(); 49 | } 50 | 51 | // 64bit => NO: rdx, rbx, r10, r11 52 | // 32bit => NO: edx, ebx 53 | bool 
handleBSWAP(MachineInstr *MI){ 54 | assert(MI->getOperand(0).isReg() && "handleBSWAP can't handle this instr!"); 55 | 56 | MachineBasicBlock *MBB = MI->getParent(); 57 | MachineFunction *MF = MBB->getParent(); 58 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 59 | const X86InstrInfo &TII = *STI.getInstrInfo(); 60 | DebugLoc DL = MI->getDebugLoc(); 61 | MachineInstrBuilder MIB; 62 | unsigned int bswapReg = MI->getOperand(0).getReg(); 63 | std::set<unsigned> unsafeRegSet = {X86::RDX, X86::RBX, X86::R10, 64 | X86::R11, X86::EDX, X86::EBX}; 65 | 66 | // If the register is not unsafe, return. 67 | if( unsafeRegSet.find(bswapReg) == unsafeRegSet.end() ){ 68 | return false; 69 | } 70 | bool is32 = (MI->getOpcode() == X86::BSWAP32r); 71 | unsigned int safeReg = is32 ? X86::ECX : X86::RCX; 72 | unsigned int safeReg64 = X86::RCX; 73 | unsigned int OpcodeMOV = is32 ? X86::MOV32rr : X86::MOV64rr; 74 | unsigned int OpcodeBSWAP = is32 ? X86::BSWAP32r : X86::BSWAP64r; 75 | GFreeDEBUG(1,"[!] Found evil:" << *MI); 76 | // Save the safe register. 77 | pushReg(MI, safeReg64); 78 | 79 | // Load the unsafe reg into the safe one. 80 | MIB = BuildMI(*MBB, MI, DL, TII.get(OpcodeMOV)).addReg(safeReg).addReg(bswapReg); 81 | 82 | // bswap safeReg 83 | MIB = BuildMI(*MBB, MI, DL, TII.get(OpcodeBSWAP)) 84 | .addReg(safeReg, RegState::Define) 85 | .addReg(safeReg, RegState::Kill); 86 | 87 | // Load the safe reg back into the unsafe one. 88 | MIB = BuildMI(*MBB, MI, DL, TII.get(OpcodeMOV)).addReg(bswapReg).addReg(safeReg); 89 | 90 | // Restore the safe register. 91 | popReg(MI, safeReg64); 92 | return true; 93 | } 94 | 95 | bool handleMOVNTI(MachineInstr *MI){ 96 | MachineBasicBlock *MBB = MI->getParent(); 97 | MachineFunction *MF = MBB->getParent(); 98 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 99 | const X86InstrInfo &TII = *STI.getInstrInfo(); 100 | DebugLoc DL = MI->getDebugLoc(); 101 | MachineInstrBuilder MIB; 102 | 103 | bool is32 = (MI->getOpcode() == X86::MOVNTImr); 104 | unsigned int OpcodeMOV = is32 ?
X86::MOV32mr : X86::MOV64mr; 105 | MIB = BuildMI(*MBB, MI, DL, TII.get(OpcodeMOV)); 106 | // Copy all the operands from the old MOVNTI to the new MOV. 107 | for (unsigned I = 0, E = MI->getNumOperands(); I < E; ++I){ 108 | MachineOperand *MO = new MachineOperand(MI->getOperand(I)); 109 | MIB.addOperand(*MO); 110 | } 111 | GFreeDEBUG(2,"> " << *MIB); 112 | return true; 113 | } 114 | 115 | void instructionTransformation(MachineFunction &MF){ 116 | MachineInstr *MI; 117 | std::vector<MachineInstr *> toDelete; // This holds all the instructions that will be deleted. 118 | 119 | for (MachineFunction::iterator MBB = MF.begin(), MBBE = MF.end(); MBB != MBBE; ++MBB){ 120 | for (MachineBasicBlock::iterator MBBI = MBB->begin(), MBBIE = MBB->end(); MBBI != MBBIE; MBBI++) { 121 | MI = MBBI; 122 | unsigned Opc = MI->getOpcode(); 123 | bool del = false; 124 | 125 | if( (Opc == X86::BSWAP64r) || (Opc == X86::BSWAP32r)){ 126 | del = handleBSWAP(MI); 127 | } 128 | 129 | if( (Opc == X86::MOVNTImr) || (Opc == X86::MOVNTI_64mr)){ 130 | del = handleMOVNTI(MI); 131 | } 132 | 133 | if (del){ 134 | toDelete.push_back(MI); 135 | } 136 | } 137 | } 138 | // Deleting instructions. 139 | for (std::vector<MachineInstr *>::iterator I = toDelete.begin(); I != toDelete.end(); ++I){ 140 | (*I)->eraseFromParent(); 141 | } 142 | } 143 | 144 | MachineFunction* branchTargetFunction(MachineInstr *MI){ 145 | // errs() << "[-] branchTargetFunction: " << *MI; 146 | assert("[-] jumpTarget called with a MI that's not a branch!"
&& MI->isBranch()); 147 | return MI->getOperand(0).getMBB()->getParent(); 148 | } 149 | 150 | 151 | void insertPrologueOrEpilogue(MachineInstr *MI, unsigned int retAddrRegister, 152 | unsigned int retAddrOffset, bool Prologue){ 153 | 154 | MachineBasicBlock *MBB = MI->getParent(); 155 | MachineFunction *MF = MBB->getParent(); 156 | const X86Subtarget &STI = MF->getSubtarget(); 157 | const X86InstrInfo &TII = *STI.getInstrInfo(); 158 | DebugLoc DL = MI->getDebugLoc(); 159 | MachineInstrBuilder MIB; 160 | 161 | // Create a new machine basic block to host the prologue. 162 | if(Prologue){ 163 | 164 | MachineBasicBlock *newMBB = MF->CreateMachineBasicBlock(); 165 | MF->insert(MBB->getIterator(), newMBB); 166 | newMBB->addSuccessor(MBB); 167 | 168 | // Update for the next builds. 169 | MBB = newMBB; 170 | MI = MBB->begin(); 171 | DL = MI->getDebugLoc(); 172 | } 173 | 174 | MachineOperand r11_def = MachineOperand::CreateReg(X86::R11, true); 175 | MachineOperand r11_use = MachineOperand::CreateReg(X86::R11, false); 176 | 177 | // Emit the nopsled if we are emitting the epilogue. 178 | if(!Prologue){ 179 | emitNop(MI, 9); 180 | } 181 | 182 | // mov %fs:0x28,%r11 183 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::MOV64rm)).addOperand(r11_def) 184 | .addReg(0).addImm(1).addReg(0).addImm(0x28).addReg(X86::FS); 185 | GFreeDEBUG(2, "> " << *MIB); 186 | 187 | // xor %r11, (%rsp) 188 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::XOR64mr)); 189 | addRegOffset(MIB, retAddrRegister, false, retAddrOffset); 190 | MIB.addOperand(r11_use); 191 | GFreeDEBUG(2, "> " << *MIB); 192 | 193 | MBB->addLiveIn(X86::R11); 194 | MBB->sortUniqueLiveIns(); 195 | } 196 | 197 | void returnAddressProtection(MachineFunction &MF){ 198 | GFreeDEBUG(2, "[+---- Return Address Protection @ ----+]\n"); 199 | 200 | MachineFunction::iterator MBB = MF.begin(); 201 | MachineFunction::iterator MBBE = MF.end(); 202 | 203 | // Skips empty basic blocks. 
204 | while(MBB->empty()){ 205 | GFreeDEBUG(3, "[-] Oh no, this MBB is empty, check the next one. \n"); 206 | MBB++; 207 | } 208 | if(MBB == MBBE){ // This shouldn't happen. 209 | return; 210 | } 211 | 212 | MachineBasicBlock::iterator MBBI; 213 | MachineBasicBlock::iterator MBBIE; 214 | 215 | MachineInstrBuilder MIB; 216 | MachineInstr *MI; 217 | int retAddrOffset; 218 | int retAddrRegister; 219 | 220 | retAddrOffset = 0; 221 | retAddrRegister = X86::RSP; 222 | 223 | // Epilogue. 224 | bool inserted = false; 225 | 226 | for (MBB = MF.begin(), MBBE = MF.end(); MBB != MBBE; ++MBB){ 227 | if(MBB->empty()) continue; 228 | MI = std::prev(MBB->end()); 229 | if(MI->isIndirectBranch()){ 230 | continue; 231 | } 232 | if ( ( MI->isReturn() ) || 233 | ( MI->isBranch() && branchTargetFunction(MI)->getFunctionNumber() != MF.getFunctionNumber() )){ 234 | ++Rap; // update stats/ 235 | insertPrologueOrEpilogue(MI, retAddrRegister, retAddrOffset, false); 236 | inserted = true; 237 | } 238 | if ( (std::next(MBB) == MBBE) && MI->isCall()){ // If the last inst of the last basic block is a call, 239 | inserted = true; // just put the epilogue. 240 | } 241 | } 242 | if(inserted){ 243 | GFreeDEBUG(0, "[!] Adding Prologue/Epilogue @ " << MF.getName() << "\n"); 244 | MBB = MF.begin(); 245 | while(MBB->empty()){ 246 | GFreeDEBUG(3, "[-] Oh no, this MBB is empty, check the next one. \n"); 247 | MBB++; 248 | } 249 | MI = MBB->begin(); 250 | insertPrologueOrEpilogue(MI, retAddrRegister, retAddrOffset, true); 251 | } 252 | return; 253 | } 254 | 255 | 256 | // This function checks if MI points to the bottom of the check cookie routine. 257 | // It does perform some check and return: 258 | // -1 if in MBB there will never be the routine we are looking for. The caller should proceed with another MBB. 259 | // 0 if we found the routine 260 | // 1 if we didn't found the routine, but the caller must keep looking for it in this MBB. 
261 | int matchCheckCookieRoutine(MachineInstr *MI){ 262 | MachineBasicBlock *ParentMBB = MI->getParent(); 263 | MachineBasicBlock::iterator ParentMIBegin = ParentMBB->begin(); 264 | MachineBasicBlock::iterator tmpMI = MI; 265 | 266 | if(ParentMBB->size() < 5) 267 | return -1; 268 | 269 | if( (ParentMIBegin == tmpMI) || 270 | (std::next(ParentMIBegin) == tmpMI)) 271 | return -1; 272 | 273 | if ((std::prev(tmpMI,4)->getOpcode() == X86::PUSH64r) && 274 | (std::prev(tmpMI,3)->getOpcode() == X86::MOV64ri) && 275 | (std::prev(tmpMI,2)->getOpcode() == X86::XOR64rm) && 276 | (std::prev(tmpMI,1)->getOpcode() == X86::CMP64rm ) && 277 | (tmpMI->getOpcode() == X86::POP64r) ) 278 | return 0; 279 | 280 | return 1; 281 | } 282 | 283 | // This function finalize the cookie for jmp*/call*, and also adds a 284 | // nop sled before the check.. Finalize means, for every jmp*/call* 285 | // go backwards and find the block of instructions inserted from 286 | // X86GFreeJCP.cpp that check the cookie. Bring them down, close to 287 | // the jmp*/call*. 
288 | // Also, splice the MBB, put a jump and an hlt between the check and the 289 | // jmp*/call* so the layout will be: 290 | 291 | // check_cookie; 292 | // je; -----------| 293 | // hlt; | 294 | // jmp*/call*; <--| 295 | // 296 | 297 | // This is a sample of the code for checking the cookie: 298 | // > %vreg25 = MOV64rm , 1, %noreg, 0, %noreg; mem:LD8[FixedStack0] GR64:%vreg25 299 | // > %vreg26 = XOR64ri32 %vreg25, 179027149, %EFLAGS; GR64:%vreg26,%vreg25 300 | // > CMP64rm %vreg26, %noreg, 1, %noreg, 40, %FS, %EFLAGS; GR64:%vreg26 301 | 302 | void cookieProtectionFinalization(MachineFunction &MF){ 303 | GFreeDEBUG(2, "\n[+---- Jump Control Protection Finalization ----+]\n"); 304 | const X86Subtarget &STI = MF.getSubtarget(); 305 | const X86InstrInfo &TII = *STI.getInstrInfo(); 306 | MachineFunction::iterator MBB, MBBE; 307 | MachineBasicBlock::iterator MBBI, MBBIE; 308 | MachineInstrBuilder MIB; 309 | MachineInstr *MI; 310 | std::vector alreadyCheckedInstr; 311 | 312 | for (MBB = MF.begin(), MBBE = MF.end(); 313 | MBB != MBBE; ++MBB){ 314 | for (MBBI = MBB->begin(), MBBIE = MBB->end(); 315 | MBBI != MBBIE; ++MBBI) { 316 | 317 | MI = MBBI; 318 | 319 | if(MBB->empty()) 320 | continue; 321 | 322 | if ( !( MI->isIndirectBranch() || isIndirectCall(MI) ) ) // If not jmp* nor call* 323 | continue; 324 | 325 | if( contains(alreadyCheckedInstr, MI) ) 326 | continue; 327 | 328 | // errs()<< "[!] Splitting for call*/jmp* in " << MF.getName() << 329 | // " MBB" << MBB->getNumber() << " : " << *MI; 330 | // errs() << *MBB; 331 | alreadyCheckedInstr.push_back(MI); 332 | 333 | // Do it nicely. 
334 | MachineBasicBlock *newMBB = MF.CreateMachineBasicBlock(); 335 | MachineBasicBlock *hltMBB = MF.CreateMachineBasicBlock(); 336 | 337 | MF.insert(MBB, hltMBB); 338 | MF.insert(MBB, newMBB); 339 | newMBB->moveAfter(&*MBB); 340 | hltMBB->moveAfter(&*MBB); 341 | 342 | MIB = BuildMI(*hltMBB, hltMBB->begin(), 343 | hltMBB->begin()->getDebugLoc(), TII.get(X86::HLT)); 344 | 345 | newMBB->splice(newMBB->begin(), &*MBB, MI, MBB->end()); 346 | 347 | newMBB->transferSuccessorsAndUpdatePHIs(&*MBB); 348 | MBB->addSuccessor(hltMBB); 349 | MBB->addSuccessor(newMBB); 350 | 351 | DebugLoc DL = newMBB->begin()->getDebugLoc(); 352 | MIB = BuildMI(*MBB, MBB->end(), DL, TII.get(X86::JE_1)).addMBB(newMBB); 353 | MBB->addLiveIn(X86::EFLAGS); 354 | GFreeDEBUG(1, "> " << *MIB); 355 | 356 | MachineBasicBlock::iterator tmpMI = std::prev(MBB->end()); // JE_1 357 | MachineFunction::iterator tmpMBB = MBB; 358 | 359 | // If the cookie check routine is not before JE, than 360 | // go backwards and push it down! 361 | if((MBB->size() < 6) || 362 | matchCheckCookieRoutine(std::prev(tmpMI)) != 0){ 363 | GFreeDEBUG(2, "[!] Look for the check block and push it down\n"); 364 | int status; 365 | do{ 366 | status = matchCheckCookieRoutine(tmpMI); 367 | if(status == -1){ // We scanned all the block but llvm folded the indirect call in a new MBB. 368 | GFreeDEBUG(2, "[!] Branch was folded. 
"); 369 | GFreeDEBUG(2, "Starting to look for our instructions from the end of the previous MBB#" << (tmpMBB)->getNumber() << "\n"); 370 | tmpMBB = std::prev(tmpMBB); 371 | tmpMI = std::prev(tmpMBB->end()); 372 | } 373 | if(status == 1){ 374 | tmpMI = std::prev(tmpMI); 375 | } 376 | }while(status != 0); 377 | } 378 | else{ // The check routine was not moved, and prev(tmpMI) is the pop. 379 | tmpMI = std::prev(tmpMI); 380 | } 381 | 382 | MachineInstr *PushMI = std::prev(tmpMI,4); // PUSH 383 | MachineInstr *MovMI = std::prev(tmpMI,3); // MOV 384 | MachineInstr *XorMI = std::prev(tmpMI,2); // XOR 385 | MachineInstr *CmpMI = std::prev(tmpMI,1); // CMP 386 | MachineInstr *PopMI = std::prev(tmpMI,0); // POP 387 | 388 | GFreeDEBUG(2, "[GF] From here: \n" << *PushMI << *MovMI << *XorMI << *CmpMI << *PopMI 389 | << "[GF] Move down, close to the JMP\n"); 390 | 391 | MachineOperand &CmpDestReg = CmpMI->getOperand(0); 392 | MachineOperand &CmpBaseReg = XorMI->getOperand(2); 393 | MachineOperand &CmpDisplacement = XorMI->getOperand(5); 394 | MachineBasicBlock::iterator insertPoint = std::prev(MBB->end()); 395 | 396 | assert(CmpDestReg.isReg() && "It should be a register!"); 397 | assert(CmpBaseReg.isReg() && 398 | CmpDisplacement.isImm() && 399 | "Displacement is not immediate in X86GFree.cpp"); 400 | 401 | // If the cookie is referenced via RSP, we have to add 8 to the displacement because of the push. 402 | int offsetAdjustment = CmpBaseReg.getReg() == X86::RSP ?
+8 : 0; 403 | CmpDisplacement.setImm(CmpDisplacement.getImm() + offsetAdjustment); 404 | 405 | PushMI->removeFromParent(); 406 | PushMI->setDebugLoc(DL); 407 | MBB->insert(insertPoint, PushMI); 408 | 409 | MovMI->removeFromParent(); 410 | MovMI->setDebugLoc(DL); 411 | MBB->insert(insertPoint, MovMI); 412 | 413 | XorMI->removeFromParent(); 414 | XorMI->setDebugLoc(DL); 415 | MBB->insert(insertPoint, XorMI); 416 | 417 | CmpMI->removeFromParent(); 418 | CmpMI->setDebugLoc(DL); 419 | MBB->insert(insertPoint, CmpMI); 420 | 421 | PopMI->removeFromParent(); 422 | PopMI->setDebugLoc(DL); 423 | MBB->insert(insertPoint, PopMI); 424 | 425 | emitNop(MovMI, 9); 426 | 427 | GFreeDEBUG(3, "[GF] After splitting: \n" << 428 | " MBB: " << *MBB << 429 | " newMBB: " << *newMBB << 430 | " hltMBB: " << *hltMBB ); 431 | break; 432 | } 433 | } 434 | } 435 | 436 | // Main. 437 | bool GFreeMachinePass::runOnMachineFunction(MachineFunction &MF) { 438 | if(MF.empty()) 439 | return true; 440 | 441 | returnAddressProtection(MF); 442 | cookieProtectionFinalization(MF); 443 | instructionTransformation(MF); 444 | 445 | return true; 446 | 447 | } 448 | static RegisterPass X("gfree", "My Machine Pass"); 449 | 450 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeAssembler.cpp: -------------------------------------------------------------------------------- 1 | 2 | 3 | //===-- X86GFreeAssembler.cpp - Assemble X86 MachineInstr to bytes --------===// 4 | // 5 | // The LLVM Compiler Infrastructure 6 | // 7 | //===----------------------------------------------------------------------===// 8 | // 9 | // This file contains code to XXXXXX. 
10 | // 11 | //===----------------------------------------------------------------------===// 12 | 13 | #include 14 | 15 | 16 | using namespace llvm; 17 | 18 | GFreeAssembler::GFreeAssembler(MachineFunction &MF, VirtRegMap *VRMap){ 19 | VRM=VRMap; 20 | STI = &MF.getSubtarget(); 21 | TII = MF.getSubtarget().getInstrInfo(); 22 | TRI = MF.getSubtarget().getRegisterInfo(); 23 | 24 | // Create a temp MachineBasicBlock at the end of this function. 25 | tmpMBB = MF.CreateMachineBasicBlock(); 26 | MF.insert(MF.end(), tmpMBB); 27 | 28 | const TargetMachine &TM = MF.getTarget(); 29 | const Target &T = TM.getTarget(); 30 | 31 | // Let's create a MCCodeEmitter 32 | CodeEmitter.reset(T.createMCCodeEmitter( 33 | *MF.getSubtarget().getInstrInfo(), 34 | *MF.getSubtarget().getRegisterInfo(), 35 | MF.getContext() )); 36 | 37 | // NullStreamer->reset(S); 38 | 39 | // let's create a TargetMachine for AsmPrinter 40 | // tmpTM = T.createTargetMachine( 41 | // TM.getTargetTriple(), 42 | // TM.getTargetCPU(), 43 | // TM.getTargetFeatureString(), 44 | // TM.Options); 45 | // const TargetMachine &tmpTM = MF.getTarget(); 46 | 47 | // Let's create a (null) MCStreamer for AsmPrinter 48 | MCStreamer *NullStreamer = T.createNullStreamer(MF.getContext()); 49 | 50 | // Let's create a X86AsmPrinter for MCInstLower 51 | std::unique_ptr tmpTM; 52 | tmpTM.reset(T.createTargetMachine(TM.getTargetTriple().getTriple(), 53 | TM.getTargetCPU(), 54 | TM.getTargetFeatureString(), 55 | TM.Options)); 56 | 57 | Printer = static_cast(T.createAsmPrinter(*tmpTM, std::unique_ptr(NullStreamer))); 58 | Printer->setSubtarget(&MF.getSubtarget()); 59 | // Finally(!) create an X86MCInstLower object. 60 | MCInstLower = new X86MCInstLower(MF, *Printer); 61 | } 62 | 63 | GFreeAssembler::~GFreeAssembler(){ 64 | // 6b. 
65 | tmpMBB->erase(tmpMBB->begin(), tmpMBB->end()); 66 | tmpMBB->eraseFromParent(); 67 | } 68 | 69 | void GFreeAssembler::temporaryRewriteRegister(MachineInstr *MI){ 70 | // errs() << "[+] TemporaryRewriteRegister!\n\n"; 71 | MachineFunction *MF = MI->getParent()->getParent(); 72 | const TargetRegisterInfo *TRI = MF->getRegInfo().getTargetRegisterInfo(); 73 | unsigned int VirtReg, PhysReg; 74 | 75 | for(MachineOperand &MO: MI->operands()){ 76 | if( MO.isReg() && TRI->isVirtualRegister(MO.getReg()) ){ 77 | 78 | VirtReg = MO.getReg(); 79 | PhysReg = VRM->getPhys(VirtReg); 80 | // Preserve semantics of sub-register operands. 81 | if (MO.getSubReg()) { 82 | // PhysReg operands cannot have subregister indexes, so allocate the right (sub) physical register. 83 | PhysReg = TRI->getSubReg(PhysReg, MO.getSubReg()); 84 | assert(PhysReg && "Invalid SubReg for physical register"); 85 | MO.setSubReg(0); 86 | } 87 | MO.setReg(PhysReg); // Rewriting. 88 | } 89 | } 90 | return; 91 | } 92 | 93 | std::vector<unsigned char> GFreeAssembler::lowerEncodeInstr(MachineInstr *RegRewMI){ 94 | std::string ResStr; 95 | SmallVector<MCFixup, 4> Fixups; 96 | raw_string_ostream tmpRawStream(ResStr); 97 | 98 | MCInst OutMI; 99 | 100 | // Lower. 101 | MCInstLower->Lower(RegRewMI,OutMI); 102 | 103 | // Encode. 104 | CodeEmitter->encodeInstruction(OutMI, tmpRawStream, Fixups, *STI); 105 | tmpRawStream.flush(); 106 | 107 | std::vector<unsigned char> MIbytes (ResStr.begin(), ResStr.end()); 108 | return MIbytes; 109 | } 110 | 111 | 112 | // This is somehow copied from ExpandPostRAPseudos.cpp 113 | bool GFreeAssembler::LowerCopy(MachineInstr *MI) { 114 | MachineOperand &DstMO = MI->getOperand(0); 115 | MachineOperand &SrcMO = MI->getOperand(1); 116 | 117 | // For now we don't support floating point instructions.
118 | if(DstMO.getReg() == X86::FP0 || DstMO.getReg() == X86::FP1 || DstMO.getReg() == X86::FP2 || DstMO.getReg() == X86::FP3 || 119 | DstMO.getReg() == X86::FP4 || DstMO.getReg() == X86::FP5 || DstMO.getReg() == X86::FP6 || DstMO.getReg() == X86::FP7 ) 120 | return false; 121 | 122 | if (MI->allDefsAreDead() || 123 | (SrcMO.getReg() == DstMO.getReg()) ) { // copy the same reg. 124 | return false; 125 | } 126 | 127 | // errs() << "real copy!: " << *MI; 128 | TII->copyPhysReg(*MI->getParent(), MI, MI->getDebugLoc(), 129 | DstMO.getReg(), SrcMO.getReg(), SrcMO.isKill()); 130 | 131 | MI->eraseFromParent(); 132 | return true; 133 | } 134 | 135 | // This is somehow copied from ExpandPostRAPseudos.cpp 136 | bool GFreeAssembler::LowerSubregToReg(MachineInstr *MI) { 137 | MachineBasicBlock *MBB = MI->getParent(); 138 | 139 | assert((MI->getOperand(0).isReg() && MI->getOperand(0).isDef()) && 140 | MI->getOperand(1).isImm() && 141 | (MI->getOperand(2).isReg() && MI->getOperand(2).isUse()) && 142 | MI->getOperand(3).isImm() && "Invalid subreg_to_reg"); 143 | unsigned DstReg = MI->getOperand(0).getReg(); 144 | unsigned InsReg = MI->getOperand(2).getReg(); 145 | assert(!MI->getOperand(2).getSubReg() && "SubIdx on physreg?"); 146 | unsigned SubIdx = MI->getOperand(3).getImm(); 147 | assert(SubIdx != 0 && "Invalid index for insert_subreg"); 148 | unsigned DstSubReg = TRI->getSubReg(DstReg, SubIdx); 149 | assert(TargetRegisterInfo::isPhysicalRegister(DstReg) && 150 | "Insert destination must be in a physical register"); 151 | assert(TargetRegisterInfo::isPhysicalRegister(InsReg) && 152 | "Inserted value must be in a physical register"); 153 | 154 | // GFreeDEBUG(dbgs() << "subreg: CONVERTING: " << *MI); 155 | if (MI->allDefsAreDead() || DstSubReg == InsReg) { 156 | return false; 157 | } 158 | 159 | TII->copyPhysReg(*MBB, MI, MI->getDebugLoc(), DstSubReg, InsReg, 160 | MI->getOperand(2).isKill()); 161 | MBB->erase(MI); 162 | return true; 163 | } 164 | 165 | // This is somehow 
copied from ExpandPostRAPseudos.cpp 166 | // false means that nothing was changed, i.e. MI will be transformed into a KILL. 167 | // true means that something was changed so we need to check this MI. 168 | // NOTE: MI is not valid anymore after this function. 169 | // Use the lowered one from tmpMBB->begin(). 170 | bool GFreeAssembler::expandPseudo(MachineInstr *MI){ 171 | assert(MI->isPseudo() && "MI is not a pseudo!\n"); 172 | if( TII->expandPostRAPseudo(MI) ){ 173 | // errs() << "[+] MI pseudo lowered BY TTI: " << *(tmpMBB->begin()); 174 | return true; 175 | } 176 | 177 | bool Changed = false; 178 | switch (MI->getOpcode()) { 179 | case TargetOpcode::SUBREG_TO_REG: 180 | Changed = LowerSubregToReg(MI); 181 | break; 182 | case TargetOpcode::COPY: 183 | Changed = LowerCopy(MI); 184 | break; 185 | } 186 | // errs() << "[+] MI pseudo lowered BY HAND: " << *(tmpMBB->begin()); 187 | return Changed; 188 | } 189 | 190 | // Here's the plan: 191 | // 1) Clone and insert MI into a tmp MBB (otherwise we can't lower pseudos) 192 | // 2) Fake-allocation of registers 193 | // 3) if MI is pseudo, expand it; 194 | // 4) lower MI to MCInst and assemble 195 | // 6a) delete the lowered-expanded-regallocated MI 196 | // 6b) at the end delete the parent tmpMBB so the function is not altered. 197 | 198 | std::vector<unsigned char> GFreeAssembler::MachineInstrToBytes(MachineInstr *MI) { 199 | GFreeDEBUG(3, "[A] MI : " << *MI); 200 | MachineFunction *MF = MI->getParent()->getParent(); 201 | std::vector<unsigned char> bytes; 202 | // 1. Clone MI into a new instruction and insert into the temp MBB. 203 | MachineInstr* tmpMI = MF->CloneMachineInstr(MI); 204 | tmpMBB->insertAfter(tmpMBB->begin(), tmpMI); 205 | 206 | // 2. Temporarily rewrite the registers. 207 | if(VRM != nullptr){ 208 | temporaryRewriteRegister(tmpMI); 209 | } 210 | 211 | GFreeDEBUG(3, "[A] MI reg-rewritten : " << *tmpMI); 212 | 213 | // 3. We could be before the ExpandPostRAPseudos pass, so we need to expand 214 | // some pseudos.
215 | if(tmpMI->isPseudo()){ 216 | if(!expandPseudo(tmpMI) ){ // If we didn't expand it, return an empty array. 217 | goto exit; 218 | } 219 | tmpMI = tmpMBB->begin(); 220 | } 221 | GFreeDEBUG(3, "[A] MI rewritten-expanded : " << *tmpMI); 222 | // 4. Lower and Encode MI. 223 | bytes = lowerEncodeInstr(tmpMI); 224 | GFreeDEBUG(3, "[A] MI rewritten-expanded-lowered: " << *tmpMI); 225 | GFreeDEBUG(3, "[A] MI assembled : [ "); 226 | for ( unsigned char c: bytes) 227 | GFreeDEBUG(3, format("%02x", c)); 228 | GFreeDEBUG(3," ]\n"); 229 | 230 | exit: 231 | // 6a. Empty the MBB. 232 | tmpMBB->erase(tmpMBB->begin(), tmpMBB->end()); 233 | return bytes; 234 | 235 | // [FIXME]: Since the pseudo expansion could produce more than 1 instruction, 236 | // we should process all of them, while now we process just the first. Uncomment 237 | // this and clang -O2 diff-O3-file6GGqDq.c 238 | 239 | // assert(tmpMBB->empty() && "tmpMBB is not empty!"); 240 | 241 | } 242 | 243 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeAssembler.h: -------------------------------------------------------------------------------- 1 | #include "X86.h" 2 | #include "llvm/MC/MCStreamer.h" 3 | #include "X86AsmPrinter.h" 4 | #include "X86MCInstLower.h" 5 | #include "llvm/MC/MCCodeEmitter.h" 6 | #include "llvm/CodeGen/MachineInstr.h" 7 | #include "llvm/Support/raw_ostream.h" 8 | #include "llvm/Support/TargetRegistry.h" 9 | #include "llvm/CodeGen/MachineRegisterInfo.h" 10 | #include "llvm/CodeGen/VirtRegMap.h" 11 | #include "X86GFreeUtils.h" 12 | namespace llvm { 13 | class LLVM_LIBRARY_VISIBILITY GFreeAssembler{ 14 | public: 15 | std::unique_ptr<MCCodeEmitter> CodeEmitter; 16 | MCStreamer *S; 17 | X86AsmPrinter *Printer; 18 | X86MCInstLower *MCInstLower; 19 | const MCSubtargetInfo *STI; 20 | const TargetRegisterInfo *TRI; 21 | const TargetInstrInfo *TII; 22 | MachineBasicBlock *tmpMBB; 23 | VirtRegMap *VRM; 24 | 25 | void temporaryRewriteRegister(MachineInstr *MI); 26 | std::vector<unsigned char>
lowerEncodeInstr(MachineInstr *RegRewMI); 27 | bool expandPseudo(MachineInstr *MI); 28 | bool LowerSubregToReg(MachineInstr *MI); 29 | bool LowerCopy(MachineInstr *MI); 30 | 31 | GFreeAssembler(MachineFunction &MF, VirtRegMap *VRMap=nullptr); 32 | std::vector<unsigned char> MachineInstrToBytes(MachineInstr *MI); 33 | ~GFreeAssembler(); 34 | }; 35 | 36 | } 37 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeImmediateRecon.cpp: -------------------------------------------------------------------------------- 1 | #include "llvm/Support/Format.h" 2 | #include "llvm/CodeGen/MachineRegisterInfo.h" 3 | #include "llvm/CodeGen/MachineInstrBuilder.h" 4 | #include "X86Subtarget.h" 5 | #include "llvm/Support/raw_ostream.h" 6 | #include "llvm/Support/TargetRegistry.h" 7 | #include "X86GFreeUtils.h" 8 | #include "X86.h" 9 | #include "llvm/CodeGen/MachineFunction.h" 10 | #include "llvm/CodeGen/MachineFunction.h" 11 | #include "llvm/CodeGen/MachineFunctionPass.h" 12 | #include "llvm/ADT/Statistic.h" 13 | 14 | using namespace llvm; 15 | 16 | // Then, on the command line, you can specify '-debug-only=foo' 17 | #define DEBUG_TYPE "gfreeimmediaterecon" 18 | STATISTIC(EvilImm , "Number of immediates that contain c2/c3/ca/cb/ff"); 19 | 20 | namespace { 21 | class GFreeImmediateReconPass : public MachineFunctionPass { 22 | public: 23 | GFreeImmediateReconPass() : MachineFunctionPass(ID) {} 24 | bool runOnMachineBasicBlock(); 25 | bool runOnMachineFunction(MachineFunction &mf){ 26 | MF = &mf; 27 | STI = &MF->getSubtarget<X86Subtarget>(); 28 | TII = MF->getSubtarget().getInstrInfo(); 29 | MachineFunction::iterator MBBI, MBBE; 30 | for (MBBI = MF->begin(), MBBE = MF->end(); MBBI != MBBE; ++MBBI){ 31 | MBB = &*MBBI; 32 | runOnMachineBasicBlock(); 33 | } 34 | return true; 35 | } 36 | const char *getPassName() const override {return "Immediate Reconstruction Pass";} 37 | static char ID; 38 | private: 39 | unsigned int loadImmediateIntoVirtReg(MachineInstr *MI, std::pair<int64_t, int64_t>
split, 40 | int ImmediateIndex, int size, int* counter); 41 | void emitAddInstSubRegToReg(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg, 42 | unsigned int BaseRegIndex, unsigned int OffsetIndex); 43 | void emitNewInstructionMItoMR(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg); 44 | void emitNewInstructionRItoRR(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg); 45 | MachineFunction *MF; 46 | MachineBasicBlock *MBB; 47 | const X86Subtarget *STI; 48 | const TargetInstrInfo *TII; 49 | }; 50 | char GFreeImmediateReconPass::ID = 0; 51 | 52 | } 53 | 54 | FunctionPass *llvm::createGFreeImmediateReconPass() { 55 | return new GFreeImmediateReconPass(); 56 | } 57 | 58 | // This table contains, for each instruction that could potentially host an 59 | // evil byte in the immediate or in the offset, the new opcode and the size of 60 | // the operand. 61 | std::map<unsigned int, std::pair<unsigned int, unsigned int>> RItoRR_opcodeMap { 62 | { X86::ADC8ri, {X86::ADC8rr, 8} }, 63 | { X86::ADC16ri8, {X86::ADC16rr, 16}}, 64 | { X86::ADC16ri, {X86::ADC16rr, 16}}, 65 | { X86::ADC32ri, {X86::ADC32rr, 32}}, 66 | { X86::ADC32ri8,{X86::ADC32rr,32}}, 67 | { X86::ADC64ri32,{X86::ADC64rr,64}}, 68 | { X86::ADC64ri8,{X86::ADC64rr,64}}, 69 | 70 | { X86::ADD8ri, {X86::ADD8rr,8} }, 71 | { X86::ADD16ri8, {X86::ADD16rr,16} }, 72 | { X86::ADD16ri, {X86::ADD16rr,16} }, 73 | { X86::ADD16ri_DB, {X86::ADD16rr_DB,16} }, 74 | { X86::ADD16ri8_DB, {X86::ADD16rr_DB,16} }, 75 | { X86::ADD32ri, {X86::ADD32rr,32} }, 76 | { X86::ADD32ri8, {X86::ADD32rr,32} }, 77 | { X86::ADD32ri_DB, {X86::ADD32rr_DB,32} }, 78 | { X86::ADD32ri8_DB, {X86::ADD32rr_DB,32} }, 79 | { X86::ADD64ri8, {X86::ADD64rr,64} }, 80 | { X86::ADD64ri32, {X86::ADD64rr,64} }, 81 | { X86::ADD64ri8_DB, {X86::ADD64rr_DB,64} }, 82 | { X86::ADD64ri32_DB, {X86::ADD64rr_DB,64} }, 83 | 84 | { X86::SBB8ri,{X86::SBB8rr,8}}, 85 | { X86::SBB16ri,{X86::SBB16rr,16}}, 86 | { X86::SBB16ri8,{X86::SBB16rr,16}}, 87 | { X86::SBB32ri,{X86::SBB32rr,32}}, 88 | {
X86::SBB32ri8,{X86::SBB32rr,32}}, 89 | { X86::SBB64ri32,{X86::SBB64rr,64}}, 90 | { X86::SBB64ri8,{X86::SBB64rr,64}}, 91 | 92 | { X86::SUB8ri,{X86::SUB8rr,8}}, 93 | { X86::SUB16ri,{X86::SUB16rr,16}}, 94 | { X86::SUB16ri8,{X86::SUB16rr,16}}, 95 | { X86::SUB32ri,{X86::SUB32rr,32}}, 96 | { X86::SUB32ri8,{X86::SUB32rr,32}}, 97 | { X86::SUB64ri32,{X86::SUB64rr,64}}, 98 | { X86::SUB64ri8,{X86::SUB64rr,64}}, 99 | 100 | { X86::OR8ri,{X86::OR8rr,8}}, 101 | { X86::OR16ri,{X86::OR16rr,16}}, 102 | { X86::OR16ri8,{X86::OR16rr,16}}, 103 | { X86::OR32ri,{X86::OR32rr,32}}, 104 | { X86::OR32ri8,{X86::OR32rr,32}}, 105 | { X86::OR64ri32,{X86::OR64rr,64}}, 106 | { X86::OR64ri8,{X86::OR64rr,64}}, 107 | 108 | { X86::XOR8ri,{X86::XOR8rr,8}}, 109 | { X86::XOR16ri,{X86::XOR16rr,16}}, 110 | { X86::XOR16ri8,{X86::XOR16rr,16}}, 111 | { X86::XOR32ri,{X86::XOR32rr,32}}, 112 | { X86::XOR32ri8,{X86::XOR32rr,32}}, 113 | { X86::XOR64ri32,{X86::XOR64rr,64}}, 114 | { X86::XOR64ri8,{X86::XOR64rr,64}}, 115 | 116 | { X86::AND8ri,{X86::AND8rr,8}}, 117 | { X86::AND16ri,{X86::AND16rr,16}}, 118 | { X86::AND16ri8,{X86::AND16rr,16}}, 119 | { X86::AND32ri,{X86::AND32rr,32}}, 120 | { X86::AND32ri8,{X86::AND32rr,32}}, 121 | { X86::AND64ri32,{X86::AND64rr,64}}, 122 | { X86::AND64ri8,{X86::AND64rr,64}}, 123 | 124 | { X86::CMP8ri,{X86::CMP8rr,8}}, 125 | { X86::CMP16ri,{X86::CMP16rr,16}}, 126 | { X86::CMP16ri8,{X86::CMP16rr,16}}, 127 | { X86::CMP32ri,{X86::CMP32rr,32}}, 128 | { X86::CMP32ri8,{X86::CMP32rr,32}}, 129 | { X86::CMP64ri32,{X86::CMP64rr,64}}, 130 | { X86::CMP64ri8,{X86::CMP64rr,64}}, 131 | 132 | { X86::MOV8ri,{X86::MOV8rr,8}}, 133 | { X86::MOV16ri,{X86::MOV16rr,16}}, 134 | { X86::MOV32ri,{X86::MOV32rr,32}}, 135 | { X86::MOV32ri64,{X86::MOV32rr,32}}, 136 | { X86::MOV64ri32,{X86::MOV64rr,64}}, 137 | { X86::MOV64ri,{X86::MOV64rr,64}}, 138 | 139 | { X86::TEST8i8, {X86::TEST8rr, 8}}, 140 | { X86::TEST16i16, {X86::TEST16rr, 16}}, 141 | { X86::TEST32i32, {X86::TEST32rr, 32}}, 142 | { X86::TEST64i32, 
{X86::TEST64rr, 64}}, 143 | { X86::TEST8ri, {X86::TEST8rr, 8}}, 144 | { X86::TEST16ri, {X86::TEST16rr, 16}}, 145 | { X86::TEST32ri, {X86::TEST32rr, 32}}, 146 | { X86::TEST64ri32, {X86::TEST64rr, 64}} 147 | }; 148 | 149 | std::map<unsigned int, std::pair<unsigned int, unsigned int>> MItoMR_opcodeMap { 150 | {X86::ADC32mi,{X86::ADC32mr,32}}, 151 | {X86::ADC32mi8,{X86::ADC32mr,32}}, 152 | {X86::ADC64mi32,{X86::ADC64mr,64}}, 153 | {X86::ADC64mi8,{X86::ADC64mr,64}}, 154 | 155 | {X86::ADD8mi,{X86::ADD8mr,8}}, 156 | {X86::ADD16mi8,{X86::ADD16mr,16}}, 157 | {X86::ADD16mi,{X86::ADD16mr,16}}, 158 | {X86::ADD32mi8,{X86::ADD32mr,32}}, 159 | {X86::ADD32mi,{X86::ADD32mr,32}}, 160 | {X86::ADD64mi8,{X86::ADD64mr,64}}, 161 | {X86::ADD64mi32,{X86::ADD64mr,64}}, 162 | 163 | {X86::SBB8mi,{X86::SBB8mr,8}}, 164 | {X86::SBB16mi8,{X86::SBB16mr,16}}, 165 | {X86::SBB16mi,{X86::SBB16mr,16}}, 166 | {X86::SBB32mi8,{X86::SBB32mr,32}}, 167 | {X86::SBB32mi,{X86::SBB32mr,32}}, 168 | {X86::SBB64mi8,{X86::SBB64mr,64}}, 169 | {X86::SBB64mi32,{X86::SBB64mr,64}}, 170 | 171 | {X86::SUB8mi,{X86::SUB8mr,8}}, 172 | {X86::SUB16mi8,{X86::SUB16mr,16}}, 173 | {X86::SUB16mi,{X86::SUB16mr,16}}, 174 | {X86::SUB32mi8,{X86::SUB32mr,32}}, 175 | {X86::SUB32mi,{X86::SUB32mr,32}}, 176 | {X86::SUB64mi8,{X86::SUB64mr,64}}, 177 | {X86::SUB64mi32,{X86::SUB64mr,64}}, 178 | 179 | {X86::OR8mi,{X86::OR8mr,8}}, 180 | {X86::OR16mi8,{X86::OR16mr,16}}, 181 | {X86::OR16mi,{X86::OR16mr,16}}, 182 | {X86::OR32mi8,{X86::OR32mr,32}}, 183 | {X86::OR32mi,{X86::OR32mr,32}}, 184 | {X86::OR64mi8,{X86::OR64mr,64}}, 185 | {X86::OR64mi32,{X86::OR64mr,64}}, 186 | 187 | {X86::XOR8mi,{X86::XOR8mr,8}}, 188 | {X86::XOR16mi8,{X86::XOR16mr,16}}, 189 | {X86::XOR16mi,{X86::XOR16mr,16}}, 190 | {X86::XOR32mi8,{X86::XOR32mr,32}}, 191 | {X86::XOR32mi,{X86::XOR32mr,32}}, 192 | {X86::XOR64mi8,{X86::XOR64mr,64}}, 193 | {X86::XOR64mi32,{X86::XOR64mr,64}}, 194 | 195 | {X86::AND8mi,{X86::AND8mr,8}}, 196 | {X86::AND16mi8,{X86::AND16mr,16}}, 197 | {X86::AND16mi,{X86::AND16mr,16}}, 198 |
{X86::AND32mi8,{X86::AND32mr,32}}, 199 | {X86::AND32mi,{X86::AND32mr,32}}, 200 | {X86::AND64mi8,{X86::AND64mr,64}}, 201 | {X86::AND64mi32,{X86::AND64mr,64}}, 202 | 203 | {X86::CMP8mi,{X86::CMP8mr,8}}, 204 | {X86::CMP16mi8,{X86::CMP16mr,16}}, 205 | {X86::CMP16mi,{X86::CMP16mr,16}}, 206 | {X86::CMP32mi8,{X86::CMP32mr,32}}, 207 | {X86::CMP32mi,{X86::CMP32mr,32}}, 208 | {X86::CMP64mi8,{X86::CMP64mr,64}}, 209 | {X86::CMP64mi32,{X86::CMP64mr,64}}, 210 | 211 | {X86::MOV8mi,{X86::MOV8mr,8}}, 212 | {X86::MOV16mi,{X86::MOV16mr,16}}, 213 | {X86::MOV32mi,{X86::MOV32mr,32}}, 214 | {X86::MOV64mi32,{X86::MOV64mr,64}}, 215 | }; 216 | 217 | // These maps are incomplete, but cover the compilation of some real-world programs. 218 | std::map<unsigned int, std::pair<unsigned int, unsigned int>> RMtoRM_opcodeMap { 219 | {X86::MOVSX32rm8,{X86::MOVSX32rm8,64}}, 220 | {X86::MOVSX64rm16,{X86::MOVSX64rm16,64}}, 221 | {X86::MOVZX32rm8,{X86::MOVZX32rm8,64}}, 222 | {X86::MOV8rm,{X86::MOV8rm,64}}, 223 | {X86::MOV16rm,{X86::MOV16rm,64}}, 224 | {X86::MOV32rm,{X86::MOV32rm,64}}, 225 | {X86::MOV64rm,{X86::MOV64rm,64}}, 226 | }; 227 | 228 | std::map<unsigned int, std::pair<unsigned int, unsigned int>> MRtoMR_opcodeMap { 229 | {X86::MOV8mr,{X86::MOV8mr,64}}, 230 | {X86::MOV16mr,{X86::MOV16mr,64}}, 231 | {X86::MOV32mr,{X86::MOV32mr,64}}, 232 | {X86::MOV64mr,{X86::MOV64mr,64}}, 233 | }; 234 | 235 | std::map<unsigned int, std::pair<unsigned int, unsigned int>> LEA_opcodeMap { 236 | {X86::LEA64_32r,{X86::LEA64_32r,64}}, 237 | {X86::LEA16r,{X86::LEA16r,64}}, 238 | {X86::LEA32r,{X86::LEA32r,64}}, 239 | {X86::LEA64r,{X86::LEA64r,64}}, 240 | }; 241 | 242 | bool isRI(unsigned int Opcode){ 243 | return (RItoRR_opcodeMap[Opcode].first != 0); 244 | } 245 | 246 | bool isMI(unsigned int Opcode){ 247 | return (MItoMR_opcodeMap[Opcode].first != 0); 248 | } 249 | 250 | bool isMR(unsigned int Opcode){ 251 | return (MRtoMR_opcodeMap[Opcode].first != 0); 252 | } 253 | 254 | bool isRM(unsigned int Opcode){ 255 | return (RMtoRM_opcodeMap[Opcode].first != 0); 256 | } 257 | 258 | bool isLEA(unsigned int Opcode){ 259 | return (LEA_opcodeMap[Opcode].first != 0); 260 | } 261 | 262 |
unsigned int getOpcodeFromMaps(unsigned int Opcode){ 263 | return (RItoRR_opcodeMap[Opcode].first | 264 | MItoMR_opcodeMap[Opcode].first | 265 | RMtoRM_opcodeMap[Opcode].first | 266 | MRtoMR_opcodeMap[Opcode].first | 267 | LEA_opcodeMap[Opcode].first 268 | ); 269 | } 270 | 271 | unsigned int getSizeFromMaps(unsigned int Opcode){ 272 | return (RItoRR_opcodeMap[Opcode].second | 273 | MItoMR_opcodeMap[Opcode].second | 274 | RMtoRM_opcodeMap[Opcode].second | 275 | MRtoMR_opcodeMap[Opcode].second | 276 | LEA_opcodeMap[Opcode].second 277 | ); 278 | } 279 | 280 | // Given an RI instruction and a register (ImmReg) that contains the immediate, 281 | // this function translates the evil RI instruction into a safe RR instruction. 282 | void GFreeImmediateReconPass::emitNewInstructionRItoRR(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg){ 283 | MachineBasicBlock::iterator MBBI = MI; 284 | MachineInstrBuilder MIB; 285 | 286 | bool isMoveCompareTest = isMove(MI) || isCompare(MI) || isTest(MI); 287 | unsigned int DestReg = MI->getOperand(0).getReg(); 288 | unsigned int SrcRegIndex = isMoveCompareTest ? 0 : 1; 289 | unsigned int SrcReg = MI->getOperand(SrcRegIndex).getReg(); 290 | 291 | if ( isMoveCompareTest ) { // Handle MOV, CMP and TEST. 292 | unsigned int flags = isMove(MI) ? RegState::Define : 0; 293 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(NewOpcode)) 294 | .addReg(DestReg, flags) 295 | .addReg(ImmReg); 296 | } 297 | else { // Handle Arithm: XOR, OR, ADD ... 298 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(NewOpcode)) 299 | .addReg(DestReg, RegState::Define) 300 | .addReg(SrcReg) 301 | .addReg(ImmReg); 302 | } 303 | GFreeDEBUG(0, "> " << *MIB); 304 | } 305 | 306 | // Given an MI instruction and a register (ImmReg) that contains the immediate, 307 | // this function translates the evil MI instruction into a safe MR instruction.
308 | void GFreeImmediateReconPass::emitNewInstructionMItoMR(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg){ 309 | MachineInstrBuilder MIB; 310 | MachineBasicBlock::iterator MBBI = MI; 311 | 312 | MI->RemoveOperand(5); // Operand 5 is the immediate. 313 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(NewOpcode)); 314 | for (const MachineOperand &MO : MI->operands()) { // Copy all the operands 315 | MIB.addOperand(MO); 316 | } 317 | MIB.addReg(ImmReg); 318 | GFreeDEBUG(0, "> " << *MIB); 319 | } 320 | 321 | // This function deals with evil offsets. 322 | // ImmReg is a register that contains the offset. 323 | // It emits 3 new instructions: 324 | // ADD ImmReg, BaseReg 325 | // INST (w/ offset = 0) 326 | // SUB ImmReg, BaseReg 327 | void GFreeImmediateReconPass::emitAddInstSubRegToReg(MachineInstr *MI, unsigned int NewOpcode, unsigned int ImmReg, 328 | unsigned int BaseRegIndex, unsigned int OffsetIndex){ 329 | 330 | MachineBasicBlock::iterator MBBI = MI; 331 | MachineInstrBuilder MIB; 332 | 333 | // The size is always 8 bytes, since we are dealing with memory.
334 | unsigned int size = 8; 335 | const TargetRegisterClass *RegClass = getRegClassFromSize(size); 336 | unsigned int SumReg = MF->getRegInfo().createVirtualRegister(RegClass); 337 | unsigned int SubReg = MF->getRegInfo().createVirtualRegister(RegClass); 338 | unsigned int SrcReg = MI->getOperand(BaseRegIndex).getReg(); 339 | 340 | // ADD 341 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getADDrrOpcode(size))) 342 | .addReg(SumReg, RegState::Define) 343 | .addReg(SrcReg) 344 | .addReg(ImmReg); 345 | GFreeDEBUG(0, "> " << *MIB); 346 | 347 | MachineInstr *newMI = MF->CloneMachineInstr(MI); 348 | MBB->insert(MBBI, newMI); 349 | 350 | // Adjust operands of the new instruction 351 | newMI->getOperand(BaseRegIndex).setReg(SumReg); 352 | newMI->getOperand(BaseRegIndex).setIsKill(false); // Clear kill flag because it's used by SUB 353 | newMI->getOperand(OffsetIndex).setImm(0); 354 | GFreeDEBUG(0, "> " << *newMI); 355 | 356 | // SUB 357 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getSUBrrOpcode(size))) 358 | .addReg(SubReg, RegState::Define) 359 | .addReg(SumReg) 360 | .addReg(ImmReg); 361 | GFreeDEBUG(0, "> " << *MIB); 362 | } 363 | 364 | // This function safely loads an evil immediate into a new register. 365 | // It returns the number of the new register. 366 | unsigned int GFreeImmediateReconPass::loadImmediateIntoVirtReg(MachineInstr *MI, std::pair<int64_t, int64_t> split, 367 | int ImmediateIndex, int size, int* counter){ 368 | MachineInstrBuilder MIB; 369 | MachineBasicBlock::iterator MBBI = MI; 370 | 371 | size = size / 8; 372 | const TargetRegisterClass *RegClass = getRegClassFromSize(size); 373 | unsigned int NewReg = MF->getRegInfo().createVirtualRegister(RegClass); 374 | unsigned int ImmReg = MF->getRegInfo().createVirtualRegister(RegClass); 375 | 376 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getMOVriOpcode(size))) // MOV the big part.
377 | .addReg(NewReg, RegState::Define) 378 | .addImm(split.second); 379 | GFreeDEBUG(0, "> " << *MIB); 380 | 381 | if((uint64_t)split.first <= 0xffffffff){ // if the small part fits in 32 bits then we can do mov + or. 382 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getORriOpcode(size))) // OR the small part. 383 | .addReg(ImmReg, RegState::Define) 384 | .addReg(NewReg) 385 | .addImm(split.first); 386 | GFreeDEBUG(0, "> " << *MIB); 387 | *counter = 2; 388 | } 389 | else{ // else do mov + mov + or 390 | unsigned int NewReg1 = MF->getRegInfo().createVirtualRegister(RegClass); 391 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getMOVriOpcode(size))) // MOV the high part. 392 | .addReg(NewReg1, RegState::Define) 393 | .addImm(split.first); 394 | GFreeDEBUG(0, "> " << *MIB); 395 | 396 | MIB = BuildMI(*MBB, MBBI, MI->getDebugLoc(), TII->get(getORrrOpcode(size))) // OR the two new registers. 397 | .addReg(ImmReg, RegState::Define) 398 | .addReg(NewReg) 399 | .addReg(NewReg1); 400 | GFreeDEBUG(0, "> " << *MIB); 401 | *counter = 3; 402 | } 403 | return ImmReg; 404 | } 405 | 406 | // Main. 407 | bool GFreeImmediateReconPass::runOnMachineBasicBlock() { 408 | 409 | if(DisableGFree){ 410 | errs()<< "GFREE IS DISABLED!\n"; 411 | return true; 412 | } 413 | 414 | if(MF->empty()) 415 | return true; 416 | 417 | MachineBasicBlock::iterator MBBI, MBBIE; 418 | MachineInstrBuilder MIB; 419 | 420 | std::vector<MachineInstr*> toDelete; // This holds all the instructions that will be deleted.
421 | MachineInstr *MI; 422 | unsigned int i; 423 | std::pair<int64_t, int64_t> split; 424 | bool pushEFLAGS; 425 | 426 | for (MBBI = MBB->begin(), MBBIE = MBB->end(); MBBI != MBBIE; ++MBBI) { 427 | MI = MBBI; 428 | 429 | for(i=0; i < MI->getNumOperands(); i++){ 430 | MachineOperand MO = MI->getOperand(i); 431 | if (!MO.isImm()) 432 | continue; 433 | 434 | unsigned int NewOpcode = getOpcodeFromMaps(MI->getOpcode()); 435 | unsigned int Size = getSizeFromMaps(MI->getOpcode()); 436 | 437 | if(isMI(MI->getOpcode()) && i == 3) Size = 64; // When MI and offset, the size must be 64 bits (8 bytes); 438 | if(Size == 0) Size=64; // This is useful so we pass the next if and can print the TODO. 439 | 440 | split = splitInt(MO.getImm(),Size); 441 | bool found = (split.first != 0 || split.second!=0); 442 | if( !found ){ 443 | continue; 444 | } 445 | 446 | /* TODO: The problem with frame indexes is that they are translated after the stack 447 | allocation, and the offset changes. We have to handle this at the end of 448 | the pass chain. */ 449 | if ( (isMI(MI->getOpcode()) && MI->getOperand(0).isFI() && i==3) || 450 | // This happens when compiling firefox, why?! 451 | (isMI(MI->getOpcode()) && i == 3 && MI->getOperand(0).isReg() && MI->getOperand(0).getReg() == 0) || 452 | (isRM(MI->getOpcode()) && MI->getOperand(1).isFI()) || 453 | (isMR(MI->getOpcode()) && MI->getOperand(0).isFI()) || 454 | (isLEA(MI->getOpcode()) && MI->getOperand(1).isFI())|| 455 | // ./compile-O3-fileU9CBCR.c 456 | (isLEA(MI->getOpcode()) && MI->getOperand(1).getReg() == 0)|| 457 | (NewOpcode == 0) ){ 458 | GFreeDEBUG(0, "[TODO @ " << MF->getName() << "]: " << *MI); 459 | continue; 460 | } 461 | 462 | pushEFLAGS = needToSaveEFLAGS(MBBI); 463 | 464 | ++EvilImm; // Update stats 465 | 466 | GFreeDEBUG(0, "[!]
Found instruction with evil immediate @ " 467 | << MF->getName() << " BB#" << MBB->getNumber() << " : " << *MI << "\n"); 468 | GFreeDEBUG(2, "[IMM] : " << format("0x%016llx @ %s \n", MO.getImm(), MF->getName() ) << 469 | "[IMM] : " << format("(low = 0x%016llx, high = 0x%016llx)\n", split.first, split.second)); 470 | 471 | toDelete.push_back(MI); 472 | GFreeDEBUG(0, "< " << *MI); 473 | 474 | int emittedInstCounter = 0; // This counter will be used for handling the EFLAGS 475 | unsigned int ImmReg = loadImmediateIntoVirtReg(MI, split, i, Size,&emittedInstCounter); 476 | bool flagImmediate=0; 477 | 478 | // Immediates. 479 | if(isRI(MI->getOpcode())){ 480 | emitNewInstructionRItoRR(MI, NewOpcode, ImmReg); 481 | flagImmediate = 1; 482 | } 483 | 484 | if(isMI(MI->getOpcode()) && i == 5){ // 5 is the index of an immediate in a *mi instruction. 485 | emitNewInstructionMItoMR(MI, NewOpcode, ImmReg); 486 | flagImmediate = 1; 487 | } 488 | 489 | // Offsets. 490 | if(isMI(MI->getOpcode()) && i == 3){ // 3 is the index of an offset in a *mi instruction. 491 | emitAddInstSubRegToReg(MI, NewOpcode, ImmReg, 0, i); 492 | } 493 | if(isLEA(MI->getOpcode())){ 494 | MI->clearKillInfo(); 495 | emitAddInstSubRegToReg(MI, NewOpcode, ImmReg, 1, i); 496 | } 497 | if(isRM(MI->getOpcode())){ 498 | emitAddInstSubRegToReg(MI, NewOpcode, ImmReg, 1, i); 499 | } 500 | if(isMR(MI->getOpcode())){ 501 | emitAddInstSubRegToReg(MI, NewOpcode, ImmReg, 0, i); 502 | } 503 | 504 | // Erase the old instruction and update iterators. At this point 505 | // MBBI still points to the original MI. 506 | MachineInstr *NewMI; 507 | if(flagImmediate){ 508 | NewMI = std::prev(MBBI); 509 | MBBI = NewMI; 510 | } 511 | else{ 512 | NewMI = std::prev(MBBI,2); // Skips the sub. 513 | MBBI = std::prev(MBBI); // Points to the sub. 514 | } 515 | MI->eraseFromParent(); 516 | 517 | // EFLAGS handling.
518 | 519 | /* 520 | If we handled an immediate the layout can be: 521 | mov, or, newMI <-- MBBI, (deleted MI) (emittedInstCounter = 2) otherwise 522 | mov, mov, or, newMI <-- MBBI, (deleted MI) (emittedInstCounter = 3) 523 | 524 | If we handled an offset the layout can be: 525 | mov, or, add, newMI, sub <-- MBBI, (deleted MI) (emittedInstCounter = 2) otherwise 526 | mov, mov, or, add, newMI, sub <-- MBBI, (deleted MI) (emittedInstCounter = 3) 527 | */ 528 | 529 | 530 | if( std::next(MBBI) == MBB->end() ) // newMI/SUB is the last instruction of this MBB, check in the next MBB. 531 | pushEFLAGS = needToSaveEFLAGS( (*MBB->succ_begin())->begin() ); 532 | else 533 | pushEFLAGS = needToSaveEFLAGS(std::next(MBBI)); 534 | 535 | if( !pushEFLAGS ) continue; 536 | 537 | GFreeDEBUG(0, "> Push/Pop EFLAGS\n"); 538 | unsigned int saveRegEFLAGS = MF->getRegInfo().createVirtualRegister(&X86::GR64RegClass); 539 | 540 | if(flagImmediate){ 541 | pushEFLAGSinline( std::prev(MBBI,emittedInstCounter), saveRegEFLAGS ); // Before the first mov 542 | popEFLAGSinline ( MBBI, saveRegEFLAGS ); // Before newMI 543 | continue; 544 | } 545 | 546 | bool useEFLAGS = NewMI->readsRegister(X86::EFLAGS); 547 | bool defineEFLAGS = NewMI->definesRegister(X86::EFLAGS); 548 | 549 | if(!defineEFLAGS && !useEFLAGS){ // i.e. LEA 550 | pushEFLAGSinline( std::prev(MBBI,2+emittedInstCounter),saveRegEFLAGS ); // Before first mov 551 | popEFLAGSinline ( std::next(MBBI), saveRegEFLAGS ); // After sub; 552 | } 553 | else if(!defineEFLAGS && useEFLAGS){ // i.e. CMOV 554 | pushEFLAGSinline( std::prev(MBBI,2+emittedInstCounter), saveRegEFLAGS ); // Before first mov 555 | popEFLAGSinline( std::prev(MBBI), saveRegEFLAGS ); // Before newMI 556 | 557 | pushEFLAGSinline ( MBBI, saveRegEFLAGS ); // Before sub; 558 | popEFLAGSinline ( std::next(MBBI), saveRegEFLAGS ); // After sub; 559 | } 560 | else if(defineEFLAGS && !useEFLAGS){ // i.e. 
CMP 561 | pushEFLAGSinline ( MBBI, saveRegEFLAGS ); // Before sub; 562 | popEFLAGSinline ( std::next(MBBI), saveRegEFLAGS ); // After sub; 563 | } 564 | else if(defineEFLAGS && useEFLAGS){ // i.e. ADC 565 | pushEFLAGSinline( std::prev(MBBI,2+emittedInstCounter), saveRegEFLAGS ); // Before first mov 566 | popEFLAGSinline( std::prev(MBBI), saveRegEFLAGS ); // Before newMI 567 | } 568 | } 569 | } 570 | return true; 571 | } 572 | 573 | static RegisterPass<GFreeImmediateReconPass> X("gfreeimmediaterecon", "My Machine Pass"); 574 | 575 | // Deleting instructions. 576 | // for (std::vector<MachineInstr*>::iterator I = toDelete.begin(); I != toDelete.end(); ++I){ 577 | // (*I)->eraseFromParent(); 578 | // } 579 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeJCP.cpp: -------------------------------------------------------------------------------- 1 | #include "X86.h" 2 | #include "X86Subtarget.h" 3 | #include "X86InstrBuilder.h" 4 | #include "llvm/CodeGen/MachineFunctionPass.h" 5 | #include "llvm/CodeGen/MachineRegisterInfo.h" 6 | #include "llvm/Support/raw_ostream.h" 7 | #include "llvm/Support/Format.h" 8 | #include "llvm/MC/MCContext.h" 9 | #include "X86GFreeUtils.h" 10 | #include "llvm/ADT/Statistic.h" 11 | #include 12 | #include 13 | #include "llvm/Support/Format.h" 14 | using namespace llvm; 15 | 16 | // Then, on the command line, you can specify '-debug-only=foo' 17 | #define DEBUG_TYPE "gfreeimmediaterecon" 18 | STATISTIC(Jcp , "Number of cookies for call*/jmp* inserted"); 19 | 20 | namespace { 21 | class GFreeJCPPass : public MachineFunctionPass { 22 | public: 23 | GFreeJCPPass() : MachineFunctionPass(ID) {} 24 | bool runOnMachineFunction(MachineFunction &MF) override; 25 | const char *getPassName() const override {return "Jump Control Protection Pass";} 26 | static char ID; 27 | }; 28 | char GFreeJCPPass::ID = 0; 29 | } 30 | 31 | int64_t GFreeCookieCostant; 32 | 33 | FunctionPass *llvm::createGFreeJCPPass() { 34 | return new GFreeJCPPass(); 35 | } 36
| 37 | // Put the cookie on the stack at the beginning of a function. 38 | void insertCookieIndirectJump(MachineInstr* MI, int index){ 39 | MachineBasicBlock *MBB = MI->getParent(); 40 | MachineFunction *MF = MBB->getParent(); 41 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 42 | const X86InstrInfo &TII = *STI.getInstrInfo(); 43 | DebugLoc DL = MI->getDebugLoc(); 44 | MachineInstrBuilder MIB; 45 | 46 | // unsigned int VirtReg = MF->getRegInfo().createVirtualRegister(&X86::GR64RegClass); 47 | // unsigned int UselessReg = MF->getRegInfo().createVirtualRegister(&X86::GR64RegClass); 48 | 49 | // Here we are in the prologue of a function, R11 can be clobbered. 50 | unsigned int VirtReg = X86::R11; 51 | unsigned int UselessReg = X86::R11; 52 | MBB->addLiveIn(X86::R11); 53 | MBB->sortUniqueLiveIns(); 54 | 55 | // mov $imm, %VirtReg 56 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::MOV64ri)).addReg(VirtReg, RegState::Define); 57 | MIB.addImm(GFreeCookieCostant); 58 | GFreeDEBUG(2, "> " << *MIB); 59 | 60 | // xor %fs:0x28, %VirtReg 61 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::XOR64rm)).addReg(UselessReg, RegState::Define) 62 | .addReg(VirtReg).addReg(0).addImm(1).addReg(0).addImm(0x28).addReg(X86::FS); 63 | GFreeDEBUG(2, "> " << *MIB); 64 | 65 | // mov VirtReg, (StackIndex) 66 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::MOV64mr)); 67 | addFrameReference(MIB, index); 68 | MIB.addReg(UselessReg); 69 | GFreeDEBUG(2, "> " << *MIB); 70 | 71 | // This is for security: wipe VirtReg since it contains %fs:0x28.
72 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::XOR64rr)) 73 | .addReg(UselessReg, RegState::Define) 74 | .addReg(VirtReg, RegState::Kill) 75 | .addReg(VirtReg); 76 | GFreeDEBUG(2,"> " << *MIB); 77 | 78 | // MF->verify(); 79 | } 80 | 81 | void insertCheckCookieIndirectJump(MachineInstr* MI, int index){ 82 | 83 | MachineBasicBlock *MBB = MI->getParent(); 84 | MachineFunction *MF = MBB->getParent(); 85 | const X86Subtarget &STI = MF->getSubtarget(); 86 | const X86InstrInfo &TII = *STI.getInstrInfo(); 87 | DebugLoc DL = MI->getDebugLoc(); 88 | MachineInstrBuilder MIB; 89 | 90 | // unsigned int VirtReg = MF->getRegInfo().createVirtualRegister(&X86::GR64RegClass); 91 | // unsigned int TmpReg = MF->getRegInfo().createVirtualRegister(&X86::GR64RegClass); 92 | 93 | unsigned int VirtReg = X86::R11; 94 | unsigned int TmpReg = X86::R11; 95 | MBB->addLiveIn(X86::R11); 96 | MBB->sortUniqueLiveIns(); 97 | 98 | // Here we are in the middle of a function, so r11 can't be clobbered. 99 | pushReg(MI,X86::R11, RegState::Undef); 100 | 101 | // mov $imm, %VirtReg 102 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::MOV64ri)).addReg(VirtReg, RegState::Define); 103 | MIB.addImm(GFreeCookieCostant); 104 | GFreeDEBUG(2, "> " << *MIB); 105 | 106 | // xor (stack), %VirtReg 107 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::XOR64rm)).addReg(TmpReg, RegState::Define).addReg(VirtReg, RegState::Kill); 108 | addFrameReference(MIB, index); 109 | GFreeDEBUG(2, "> " << *MIB); 110 | 111 | // cmp VirtReg, fs:0x28 112 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::CMP64rm)).addReg(TmpReg) 113 | .addReg(0).addImm(1).addReg(0).addImm(0x28).addReg(X86::FS); 114 | GFreeDEBUG(2, "> " << *MIB); 115 | 116 | popReg(MI,X86::R11); 117 | 118 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::NOOP)); 119 | // GFreeDEBUG(2, "> " << *MIB); 120 | 121 | } 122 | 123 | int64_t generateSafeRandom(){ 124 | std::pair tmp_pair; 125 | int64_t rnd; 126 | do{ 127 | rnd = rand(); 128 | rnd = (rnd << 32) | rand(); 129 | tmp_pair = splitInt(rnd,64); 
130 | }while((tmp_pair.first != 0)); // If true, splitInt found an evil byte. 131 | 132 | // errs() << format("rnd=0x%016llx\n",rnd); 133 | 134 | return rnd; 135 | } 136 | 137 | // Main. 138 | bool GFreeJCPPass::runOnMachineFunction(MachineFunction &MF) { 139 | // Generate the random constant for this function. 140 | srand( time(0) + MF.getFunctionNumber() ); 141 | GFreeCookieCostant = generateSafeRandom(); 142 | MachineFunction::iterator MBB, MBBE; 143 | MachineBasicBlock::iterator MBBI, MBBIE; 144 | MachineInstr *MI; 145 | 146 | std::vector<MachineInstr*> alreadyCheckedInstr; 147 | MachineFrameInfo *MFI = MF.getFrameInfo(); 148 | int index = -1; 149 | bool created = false; 150 | 151 | for (MBB = MF.begin(), MBBE = MF.end(); MBB != MBBE; ++MBB){ 152 | 153 | if(MBB->empty()) 154 | continue; 155 | 156 | for (MBBI = MBB->begin(), MBBIE = MBB->end(); MBBI != MBBIE; ++MBBI) { 157 | 158 | MI = MBBI; 159 | 160 | if( ( MI->isIndirectBranch() || isIndirectCall(MI) ) && 161 | !contains(alreadyCheckedInstr, MI) ){ 162 | 163 | // Let's check the cookie... 164 | GFreeDEBUG(0, "[!] Adding Check Cookie in " << MF.getName() << 165 | " MBB#" << MBB->getNumber() << " : " << *MI); 166 | 167 | if(!created){ // Create the stack object once and only once. 168 | index = MFI->CreateStackObject(8, 8, false); 169 | created = true; 170 | } 171 | 172 | insertCheckCookieIndirectJump(MI, index); 173 | ++Jcp; // Update stats. 174 | 175 | // Restart from the right point. 176 | alreadyCheckedInstr.push_back(MI); 177 | 178 | } 179 | } 180 | } 181 | 182 | // Skip empty MBBs. 183 | MBB = MF.begin(); 184 | while(MBB->empty()){ 185 | MBB = std::next(MBB); 186 | } 187 | 188 | if(MBB == MF.end()) return true; 189 | 190 | // In this function there is at least one indirect call. 191 | if( created ){ 192 | GFreeDEBUG(0, "[!] 
Adding Cookie @ " << MF.getName() << "\n"); 193 | MBBI = MBB->begin(); 194 | insertCookieIndirectJump(MBBI, index); 195 | } 196 | 197 | // MF.verify(); 198 | return true; 199 | } 200 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeModRMSIB.cpp: -------------------------------------------------------------------------------- 1 | #include "X86GFreeAssembler.h" 2 | #include "X86GFreeUtils.h" 3 | #include "X86.h" 4 | #include "llvm/CodeGen/MachineFunction.h" 5 | #include "llvm/CodeGen/MachineFunctionPass.h" 6 | #include "llvm/Support/raw_ostream.h" 7 | #include "llvm/Support/TargetRegistry.h" 8 | #include "llvm/ADT/Statistic.h" 9 | #include "llvm/CodeGen/AllocationOrder.h" 10 | #include "llvm/CodeGen/RegisterClassInfo.h" 11 | #include "llvm/CodeGen/LiveRegMatrix.h" 12 | #include "llvm/CodeGen/LiveInterval.h" 13 | #include "llvm/CodeGen/MachineInstrBuilder.h" 14 | #include "./llvm/CodeGen/LiveIntervalAnalysis.h" 15 | #include 16 | #include 17 | 18 | using namespace llvm; 19 | 20 | #define DEBUG_TYPE "gfreemodrmsib" 21 | STATISTIC(EvilSib , "Number of modified instruction because of an evil ModRM/SIB"); 22 | 23 | namespace { 24 | 25 | class GFreeModRMSIB : public MachineFunctionPass { 26 | 27 | public: 28 | static char ID; 29 | VirtRegMap *VRM; 30 | std::set VirtRegAlreadyReallocated; 31 | const TargetRegisterInfo *TRI; 32 | RegisterClassInfo RegClassInfo; 33 | LiveRegMatrix *Matrix; 34 | LiveIntervals *LIS; 35 | GFreeAssembler *Assembler; 36 | 37 | GFreeModRMSIB() : MachineFunctionPass(ID) {} 38 | bool runOnMachineBasicBlock(MachineBasicBlock &MBB); 39 | bool runOnMachineFunction(MachineFunction &MF){ 40 | MachineFunction::iterator MBB, MBBE; 41 | int loop_counter = 0; 42 | bool loop_again; 43 | do{ 44 | loop_again = false; 45 | for (MBB = MF.begin(), MBBE = MF.end(); MBB != MBBE; ++MBB){ 46 | loop_again |= runOnMachineBasicBlock(*MBB); 47 | } 48 | loop_counter += 1; 49 | }while(loop_again); 50 | 51 | GFreeDEBUG(2, 
"[MRM][-] On " << MF.getName() << " we did " << loop_counter << " loops\n"); 52 | return true; 53 | } 54 | 55 | const char *getPassName() const override { return "GFree Mod R/M and SIB bytes handler"; } 56 | 57 | void getAnalysisUsage(AnalysisUsage &AU) const override { 58 | AU.setPreservesAll(); 59 | AU.addRequired(); 60 | AU.addRequired(); 61 | AU.addPreserved(); 62 | AU.addRequired(); 63 | AU.addPreserved(); 64 | MachineFunctionPass::getAnalysisUsage(AU); 65 | } 66 | 67 | std::vector AssembleMInewMapping(MachineInstr *MI, unsigned int VirtReg, unsigned int PhysReg); 68 | int allocateNewRegister(MachineInstr *MI); 69 | unsigned int doCodeTransformation(MachineInstr *MI); 70 | unsigned int getSafeReg(MachineInstr *MI, unsigned int PrevPhysReg); 71 | unsigned int getSafeRegEXT(MachineInstr *MI, unsigned int PrevPhysReg); 72 | bool MIusesRegister(MachineInstr *MI, unsigned int safeRegister); 73 | void dumpAllocationOrder(AllocationOrder Order); 74 | }; 75 | char GFreeModRMSIB::ID = 0; 76 | } 77 | 78 | 79 | FunctionPass *llvm::createGFreeModRMSIB() { 80 | return new GFreeModRMSIB(); 81 | } 82 | 83 | bool neverEncodesRetModRmSib(MachineInstr *MI){ 84 | if(MI->isReturn() || MI->isCall() || MI->isIndirectBranch()){ // We already handle rets/call... 85 | return true; 86 | } 87 | 88 | // All the operands must be register or immediates. 89 | for(MachineOperand &MO: MI->operands()){ 90 | if(! 
(MO.isReg() || MO.isImm()) ){ 91 | return true; 92 | } 93 | } 94 | return false; 95 | } 96 | 97 | 98 | void GFreeModRMSIB::dumpAllocationOrder(AllocationOrder Order){ 99 | unsigned int PhysReg; 100 | errs() << "ORDER: ["; 101 | while( ( PhysReg=Order.next() ) != 0){ 102 | errs()<< " " << TRI->getName(PhysReg) << " "; 103 | } 104 | errs() << "]\n"; 105 | Order.rewind(); 106 | } 107 | 108 | // The algorithm works as follows: 109 | // - For each "evil" instruction we query the Matrix to check if there is a new 110 | // physical register that doesn't interfere and such that the instruction becomes safe. 111 | // - If no register is found, then we do some code transformation to make the instruction safe. 112 | 113 | // [NOTE] reallocating an already reallocated virtual register is dangerous. 114 | // Take this example: 115 | // = INC64r with vreg89 <--> %RBX, encodes a ret in the modr/m byte. 116 | // So we change the mapping and allocate, e.g., %RDX. The instruction becomes safe. 117 | // Then we run into: 118 | // = ADD64rr , 119 | // For we can't find a new mapping, but for we found that %RCX is suitable. 120 | // But assigning to RCX makes the INC64r evil again! 121 | // We fix this by keeping a set of already reallocated virtual registers (which in this case would contain vreg89) 122 | // and denying any further change to those registers. 123 | 124 | // Another corner case is when we have two instructions, let's say A and B, and 125 | // A becomes evil when we realloc a register in B. 126 | // We fix this by doing two loops on the function. 127 | 128 | // During the second loop, A will be evil and so a new register (hopefully) 129 | // will be allocated and the corresponding virtual register will be added to 130 | // the set of already reallocated registers. So, while processing B (or any other 131 | // instruction after A) that virtual register will not be reallocated.
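The reallocation algorithm described above asks, for each candidate physical register, whether the re-encoded instruction still "contains a ret". As a self-contained sketch of that predicate (an assumption for illustration: only the single-byte return opcodes `c2/c3/ca/cb` are scanned here; the real `containsRet` also blacklists the two-byte `ff`-prefixed call*/jmp* encodings listed by `FFblacklist` in X86GFreeUtils.cpp):

```cpp
#include <cstdint>
#include <vector>

// Report whether any byte of an encoded instruction is a near/far
// return opcode (c2/c3/ca/cb). A match means the instruction yields an
// unintended gadget ending when entered at the wrong byte offset.
bool contains_ret_byte(const std::vector<uint8_t> &bytes) {
    for (uint8_t b : bytes) {
        if (b == 0xc2 || b == 0xc3 || b == 0xca || b == 0xcb)
            return true;
    }
    return false;
}
```

For example, `inc %ebx` encodes as `ff c3`, so a scan like this flags it and the pass tries to move the value out of `%ebx`.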
132 | 133 | // When we do a code transformation, we do not change any mapping since it's 134 | // just a sort of wrapper around a MI. 135 | 136 | int GFreeModRMSIB::allocateNewRegister(MachineInstr *MI) { 137 | unsigned int VirtIndex; 138 | unsigned int VirtReg; 139 | unsigned int PrevPhysReg; 140 | unsigned int PhysReg; 141 | 142 | // This list contains, for each virtual register, how many times the MI is 143 | // evil after encoding it with a new physical register. 144 | std::list<int> stillcontainsList; 145 | // Let's find a virtual register. 146 | for(VirtIndex=0; VirtIndex < MI->getNumOperands(); VirtIndex++ ){ 147 | MachineOperand &MO = MI->getOperand(VirtIndex); 148 | if(MO.isReg() && TRI->isVirtualRegister(MO.getReg())){ 149 | stillcontainsList.push_front(1); 150 | VirtReg = MI->getOperand(VirtIndex).getReg(); 151 | assert(VRM->hasPhys(VirtReg) && "VRM doesn't have this mapping."); 152 | 153 | if (VirtRegAlreadyReallocated.count(VirtReg) != 0){ // Read NOTE above. 154 | continue; 155 | } 156 | 157 | LiveInterval &VirtRegInterval = LIS->getInterval(VirtReg); 158 | PrevPhysReg = VRM->getPhys(VirtReg); 159 | 160 | GFreeDEBUG(1,"[MRM][+] Searching a new register for: " << MI->getOperand(VirtIndex) << ".\n"); 161 | 162 | AllocationOrder Order(VirtReg, *VRM, RegClassInfo,Matrix); 163 | // Loop through all the physical registers assignable to this virt reg. 164 | while ((PhysReg = Order.next())) { 165 | if (PhysReg == PrevPhysReg) 166 | continue; 167 | 168 | 169 | // It's impossible that an instruction contains an evil byte for 4+ 170 | // different registers. 171 | if(stillcontainsList.front() > 4){ 172 | GFreeDEBUG(2, "Reallocating this register will not help, stop here.\n"); 173 | break; 174 | } 175 | 176 | // Check whether, with this register, the instruction still encodes a ret. 
if(containsRet(AssembleMInewMapping(MI, VirtReg, PhysReg))){ 178 | GFreeDEBUG(3, " [-] " << TRI->getName(PhysReg) << " : still ret\n"); 179 | stillcontainsList.front()++; 180 | continue; 181 | } 182 | 183 | // If no interference, then we found a free register. 184 | if ((Matrix->checkInterference(VirtRegInterval, PhysReg) == LiveRegMatrix::IK_Free)){ 185 | ++EvilSib; 186 | VRM->clearVirt(VirtReg); 187 | Matrix->assign(VirtRegInterval, PhysReg); 188 | 189 | GFreeDEBUG(1,"[MRM][+] found: " << TRI->getName(PhysReg) << 190 | " from " << TRI->getName(PrevPhysReg) << " (" << 191 | MI->getOperand(VirtIndex) << ") \n"); 192 | VirtRegAlreadyReallocated.insert(VirtReg); 193 | Matrix->invalidateVirtRegs(); 194 | return 1; 195 | } 196 | GFreeDEBUG(3," [-] " << TRI->getName(PhysReg) << " : interference\n"); 197 | } 198 | } 199 | } 200 | 201 | 202 | // This means: if every virtual register exceeded the limit of 3 203 | // reallocations (and with every reallocation the encoding still contained a 204 | // ret), then we can't do anything in this pass. An example is an instruction 205 | // that encodes a ret in an immediate. We return -1 to tell the caller not to 206 | // process this MI further through a code transformation, because it would be 207 | // useless. [5,5] [3,2,5] 208 | 209 | if(*std::min_element(stillcontainsList.begin(), stillcontainsList.end()) >= 4){ 210 | GFreeDEBUG(1,"[MRM][-] Do nothing.\n"); 211 | return -1; 212 | } 213 | 214 | // Otherwise, at least one register didn't exceed the limit, 215 | // so we can do a code transformation. 
216 | else{ 217 | GFreeDEBUG(0,"[MRM][-] Do codetransform.\n"); 218 | return 0; 219 | } 220 | } 221 | 222 | 223 | std::vector GFreeModRMSIB::AssembleMInewMapping(MachineInstr *MI, 224 | unsigned int VirtReg, 225 | unsigned int PhysReg){ 226 | assert(VRM->hasPhys(VirtReg) && "VRM doesn't have this mapping."); 227 | unsigned int PrevPhysReg = VRM->getPhys(VirtReg); 228 | 229 | // Temporary create the new virt<->phys mapping 230 | VRM->clearVirt(VirtReg); 231 | VRM->assignVirt2Phys(VirtReg, PhysReg); 232 | 233 | std::vector MIBytes = Assembler->MachineInstrToBytes(MI); 234 | 235 | // Restore the old mapping. 236 | VRM->clearVirt(VirtReg); 237 | VRM->assignVirt2Phys(VirtReg, PrevPhysReg); 238 | return MIBytes; 239 | } 240 | 241 | bool GFreeModRMSIB::MIusesRegister(MachineInstr *MI, unsigned int safeRegister){ 242 | unsigned int PhysReg; 243 | for(const MachineOperand &MO : MI->operands()){ 244 | if( !MO.isReg() ) 245 | continue; 246 | 247 | // If necessary, translate virtual register. 248 | PhysReg = TRI->isVirtualRegister(MO.getReg()) ? VRM->getPhys(MO.getReg()) : MO.getReg(); 249 | 250 | if(PhysReg == llvm::getX86SubSuperRegister(safeRegister, 64, false) || 251 | PhysReg == llvm::getX86SubSuperRegister(safeRegister, 32, false) || 252 | PhysReg == llvm::getX86SubSuperRegister(safeRegister, 16, false) || 253 | PhysReg == llvm::getX86SubSuperRegister(safeRegister, 8, true ) || 254 | PhysReg == llvm::getX86SubSuperRegister(safeRegister, 8, false) ) 255 | 256 | return true; 257 | } 258 | 259 | return false; 260 | 261 | } 262 | 263 | // We can't use R13 straight. Had problem with this instruction, 264 | // where %r13d was already there and we replaced (failing!) rbx with r13 in the mov. 
// mov %r13d,0x48(%r13,%rax,8) 266 | // ^ 267 | // | 268 | // push %r13 269 | // mov %rbx,%r13 270 | // mov %r13d,0x48(%r13,%rax,8) 271 | // mov %r13,%rbx 272 | // pop %r13 273 | 274 | unsigned int GFreeModRMSIB::getSafeReg(MachineInstr *MI, unsigned int PrevVirtReg){ 275 | MachineFunction *MF = MI->getParent()->getParent(); 276 | unsigned int safeRegisters[3] = {X86::R13, X86::R15, X86::R14}; 277 | int i; 278 | 279 | // If MI *doesn't* use safeRegisters[i] (or any of its subregisters), 280 | // then we can use it. 281 | for(i=0; i<3; i++){ 282 | if(! MIusesRegister(MI, safeRegisters[i]) ) 283 | break; 284 | } 285 | 286 | // This should never happen: an instruction can use at most 3 287 | // registers, so at least one of the 3 registers in safeRegisters must 288 | // differ from all of them, otherwise the MI wasn't evil. 289 | assert(i!=3 && "Can't find a safe reg in X86GFreeModRMSIB.cpp!"); 290 | 291 | 292 | // We return the right size of the safe reg (e.g. R13d, R13w). 293 | const TargetRegisterClass *VirtRegRC = MF->getRegInfo().getRegClass(PrevVirtReg); 294 | // TODO: support X86::GR32_ABCDRegClass 295 | if(VirtRegRC == &X86::GR32_ABCDRegClass) return 0; 296 | const TargetRegisterClass *LargestVirtRegRC = TRI->getLargestLegalSuperClass(VirtRegRC,*MF); // GR64_with_sub_8bit -> GR64 297 | // errs() << "MI: " << *MI; 298 | // errs() << "Name: " << TRI->getRegClassName(VirtRegRC) << "\n"; 299 | // errs() << "NameLARGE: " << TRI->getRegClassName(TRI->getLargestLegalSuperClass(VirtRegRC,*MF)) << "\n"; 300 | if( LargestVirtRegRC == &X86::GR8RegClass){ 301 | return llvm::getX86SubSuperRegister(safeRegisters[i], 8, false); 302 | } 303 | else if ( LargestVirtRegRC == &X86::GR16RegClass){ 304 | return llvm::getX86SubSuperRegister(safeRegisters[i], 16, false); 305 | } 306 | else if( LargestVirtRegRC == &X86::GR32RegClass){ 307 | return llvm::getX86SubSuperRegister(safeRegisters[i], 32, false); 308 | } 309 | else if ( LargestVirtRegRC == 
&X86::GR64RegClass){ 310 | return llvm::getX86SubSuperRegister(safeRegisters[i], 64, false); 311 | } 312 | return 0; 313 | } 314 | 315 | unsigned int getMOVrrOpcode(unsigned int PrevPhysReg){ 316 | if( X86::GR8RegClass.contains(PrevPhysReg)){ 317 | return X86::MOV8rr; 318 | } 319 | else if (X86::GR16RegClass.contains(PrevPhysReg)){ 320 | return X86::MOV16rr; 321 | } 322 | else if( X86::GR32RegClass.contains(PrevPhysReg)){ 323 | return X86::MOV32rr; 324 | } 325 | else if (X86::GR64RegClass.contains(PrevPhysReg)){ 326 | return X86::MOV64rr; 327 | } 328 | else { 329 | return 0; 330 | } 331 | } 332 | 333 | // Returns true if a code transformation is done, false otherwise. 334 | unsigned int GFreeModRMSIB::doCodeTransformation(MachineInstr *MI) { 335 | 336 | MachineBasicBlock *MBB = MI->getParent(); 337 | MachineFunction *MF = MBB->getParent(); 338 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 339 | const X86InstrInfo &TII = *STI.getInstrInfo(); 340 | DebugLoc DL = MI->getDebugLoc(); 341 | MachineInstrBuilder MIB; 342 | 343 | unsigned int VirtIndex; 344 | unsigned int VirtReg; 345 | unsigned int PrevPhysReg; 346 | unsigned int NewReg; 347 | unsigned int MovOpcode; 348 | // Count how many instructions are inserted before and after the 349 | // target instruction. 350 | unsigned int InsertedBefore = 0; 351 | unsigned int InsertedAfter = 0; 352 | // Here we loop through all the virtual registers of MI. We choose a suitable 353 | // NewReg (R13,R14..), and check that the MI, with the new mapping, doesn't 354 | // contain an evil sib/modrm anymore. 
355 | for(VirtIndex=0; VirtIndex < MI->getNumOperands(); VirtIndex++ ){ 356 | MachineOperand &MO = MI->getOperand(VirtIndex); 357 | // errs() << *VRM; 358 | if(MO.isReg() && TRI->isVirtualRegister(MO.getReg())){ 359 | VirtReg = MI->getOperand(VirtIndex).getReg(); 360 | PrevPhysReg = VRM->getPhys(VirtReg); 361 | NewReg = getSafeReg(MI, VirtReg); 362 | MovOpcode = getMOVrrOpcode(PrevPhysReg); 363 | if(NewReg == 0 || MovOpcode == 0){ 364 | errs() << "[TODO] MI not handled (1): " << *MI; 365 | return 0; 366 | } 367 | if(!containsRet(AssembleMInewMapping(MI, VirtReg, NewReg))) 368 | break; 369 | } 370 | } 371 | 372 | // We exited from the loop because the operands were finished. 373 | if(VirtIndex == MI->getNumOperands()){ 374 | errs() << "[TODO] MI not handled (2): " << *MI; 375 | return 0; 376 | } 377 | // Otherwise do the code transformation. 378 | ++EvilSib; // Update stats. 379 | unsigned int SuperRegSafe = llvm::getX86SubSuperRegister(NewReg, 64, false); 380 | MBB->addLiveIn(SuperRegSafe); 381 | MBB->sortUniqueLiveIns(); 382 | 383 | 384 | // PUSH R13; 385 | pushReg(MI, SuperRegSafe, RegState::Undef); 386 | InsertedBefore++; 387 | // If this is a copy and we are targeting the first register, we can skip this mov 388 | if(! ((MI->getOpcode() == TargetOpcode::COPY) && 389 | (VirtIndex == 0)) ) 390 | { 391 | // MOV VirtReg -> R13 392 | MIB = BuildMI(*MBB, MI, DL, TII.get(MovOpcode)) 393 | .addReg(NewReg, RegState::Define) 394 | .addReg(VirtReg, RegState::Undef); 395 | GFreeDEBUG(1, "> " << *MIB); 396 | InsertedBefore++; 397 | } 398 | 399 | // INST with R13* 400 | GFreeDEBUG(1, "< " << *MI); 401 | 402 | // Here we replace the "evil" reg with the new safe ref. 403 | MI->substituteRegister(VirtReg, NewReg, 0, *TRI); 404 | 405 | // But we also have to replace every virtual register that is allocated on 406 | // the same physical register as the "evil" reg. 
407 | // This was a bug found 408 | // lea 0x0(%r10,%rcx,8),%r10 409 | // was wrongly translated in: 410 | // mov %r10,%r13 411 | // lea 0x0(%r13,%rcx,8),%r10 412 | // mov %r13,%r10 413 | // now is translated in: 414 | // mov %r10,%r13 415 | // lea 0x0(%r13,%rcx,8),%r13 416 | // mov %r13,%r10 417 | 418 | for(VirtIndex=0; VirtIndex < MI->getNumOperands(); VirtIndex++ ){ 419 | MachineOperand &MO = MI->getOperand(VirtIndex); 420 | if(MO.isReg() && TRI->isVirtualRegister(MO.getReg()) && 421 | VRM->getPhys(VirtReg) == VRM->getPhys(MO.getReg())){ 422 | MIB = BuildMI(*MBB, MI, DL, TII.get(TargetOpcode::IMPLICIT_DEF), MO.getReg()); 423 | MO.setReg(NewReg); 424 | } 425 | } 426 | 427 | VirtRegAlreadyReallocated.insert(VirtReg); 428 | GFreeDEBUG(1, "> " << *MI); 429 | 430 | MachineInstrBuilder MovMIB; 431 | // If this is a copy and we are targeting the second register, we 432 | // can skip this mov 433 | if(! ((MI->getOpcode() == TargetOpcode::COPY) && 434 | (VirtIndex == 1)) ) 435 | { 436 | // MOV R13 -> VirtReg 437 | MovMIB = BuildMI(*MBB, MI, DL, TII.get(MovOpcode)) 438 | .addReg(VirtReg, RegState::Define) 439 | .addReg(NewReg, RegState::Undef); 440 | GFreeDEBUG(1, "> " << *MovMIB); 441 | InsertedAfter++; 442 | } 443 | 444 | 445 | // POP R13; 446 | MachineInstrBuilder PopMIB = popReg(MI, SuperRegSafe); 447 | InsertedAfter++; 448 | 449 | // Move MI in the middle, before the last mov (MovMIB) if it was 450 | // created, otherwise before the pop (PopMIB) 451 | MBB->remove(MI); 452 | MBB->insert(MovMIB ? MovMIB : PopMIB,MI); 453 | 454 | // Fix up the live intervals. 
ArrayRef<unsigned> Arr(VirtReg); 456 | MachineBasicBlock::iterator MBBI = MI; 457 | LIS->RemoveMachineInstrFromMaps(MI); 458 | LIS->InsertMachineInstrRangeInMaps(std::prev(MBBI,InsertedBefore), std::next(MBBI,InsertedAfter+1)); 459 | LIS->repairIntervalsInRange(MBB, MBBI, MBBI, Arr); 460 | 461 | return 1; 462 | } 463 | 464 | 465 | bool GFreeModRMSIB::runOnMachineBasicBlock(MachineBasicBlock &MBB) { 466 | MachineFunction *MF = MBB.getParent(); 467 | VRM = &getAnalysis<VirtRegMap>(); 468 | Matrix = &getAnalysis<LiveRegMatrix>(); 469 | LIS = &getAnalysis<LiveIntervals>(); 470 | TRI = MF->getSubtarget().getRegisterInfo(); 471 | RegClassInfo.runOnMachineFunction(VRM->getMachineFunction()); 472 | MachineBasicBlock::iterator MBBI, MBBIE; 473 | MachineInstr *MI; 474 | Assembler = new GFreeAssembler(*MF, VRM); 475 | 476 | bool loop_again = false; 477 | // VirtRegAlreadyReallocated.clear(); 478 | for (MBBI = MBB.begin(), MBBIE = MBB.end(); MBBI != MBBIE; MBBI++) { 479 | 480 | MI = MBBI; 481 | if( neverEncodesRetModRmSib(MI) ) 482 | continue; 483 | // 1. 2. 3. 4. 484 | std::vector<uint8_t> MIbytes = Assembler->MachineInstrToBytes(MI); 485 | 486 | // 5. Check if there's a ret. 487 | if( containsRet(MIbytes) ){ 488 | GFreeDEBUG(1, "[MRM][+] Contains Ret: " << *MI); 489 | 490 | int result = allocateNewRegister(MI); 491 | 492 | if (result == 1){ // We did something (a new register was found). 493 | loop_again = true; 494 | continue; 495 | } 496 | if( result == -1 ){ // We can't do anything. 497 | continue; 498 | } 499 | if( result == 0 ){ // We can do something. 
500 | loop_again |= doCodeTransformation(MI); 501 | } 502 | } 503 | } // end while 504 | 505 | delete Assembler; 506 | // errs()<< "After MODRM/SIB: " << MBB; 507 | return loop_again; 508 | } 509 | 510 | static RegisterPass X("gfreemodrmsib", "GFreeModRMSIB"); 511 | 512 | 513 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeUtils.cpp: -------------------------------------------------------------------------------- 1 | #include "X86GFreeUtils.h" 2 | #include "X86.h" 3 | #include "X86Subtarget.h" 4 | #include 5 | #include 6 | #include "llvm/Support/Format.h" 7 | #include "X86InstrBuilder.h" 8 | #include "llvm/CodeGen/MachineRegisterInfo.h" 9 | #include "llvm/Support/raw_ostream.h" 10 | 11 | using namespace llvm; 12 | 13 | cl::opt DisableGFree("disable-gfree", cl::Hidden, 14 | cl::desc("Disable GFree protections")); 15 | 16 | /* Global Variables*/ 17 | 18 | int getORrrOpcode(unsigned int size){ 19 | if( size == 1 ){ 20 | return X86::OR8rr; 21 | } 22 | if( size == 2 ){ 23 | return X86::OR16rr; 24 | } 25 | if( size == 4 ){ 26 | return X86::OR32rr; 27 | } 28 | if( size == 8 ){ 29 | return X86::OR64rr; 30 | } 31 | assert(false && "[getORopcode] We should never get here!"); 32 | return 0; 33 | } 34 | 35 | int getMOVriOpcode(unsigned int size){ 36 | if( size == 1 ){ 37 | return X86::MOV8ri; 38 | } 39 | if( size == 2 ){ 40 | return X86::MOV16ri; 41 | } 42 | if( size == 4 ){ 43 | return X86::MOV32ri; 44 | } 45 | if( size == 8 ){ 46 | return X86::MOV64ri; 47 | } 48 | assert(false && "[getMOVopcode] We should never get here!"); 49 | return 0; 50 | } 51 | 52 | int getADDrrOpcode(unsigned int size){ 53 | if( size == 1 ){ 54 | return X86::ADD8rr; 55 | } 56 | if( size == 2 ){ 57 | return X86::ADD16rr; 58 | } 59 | if( size == 4 ){ 60 | return X86::ADD32rr; 61 | } 62 | if( size == 8 ){ 63 | return X86::ADD64rr; 64 | } 65 | assert(false && "[getADDopcode] We should never get here!"); 66 | return 0; 67 | } 68 | 69 | int 
getSUBrrOpcode(unsigned int size){ 70 | if( size == 1 ){ 71 | return X86::SUB8rr; 72 | } 73 | if( size == 2 ){ 74 | return X86::SUB16rr; 75 | } 76 | if( size == 4 ){ 77 | return X86::SUB32rr; 78 | } 79 | if( size == 8 ){ 80 | return X86::SUB64rr; 81 | } 82 | assert(false && "[getSUBopcode] We should never get here!"); 83 | return 0; 84 | } 85 | 86 | int getORriOpcode(unsigned int size){ 87 | if( size == 1 ){ 88 | return X86::OR8ri; 89 | } 90 | if( size == 2 ){ 91 | return X86::OR16ri; 92 | } 93 | if( size == 4 ){ 94 | return X86::OR32ri; 95 | } 96 | if( size == 8 ){ 97 | return X86::OR64ri32; 98 | } 99 | 100 | assert(false && "[getORopcode] We should never get here!"); 101 | return 0; 102 | } 103 | 104 | 105 | // python listcalljmpstar.py 106 | int values_to_avoid[] = {0x10,0x11,0x12,0x13,0x16,0x17,0x18,0x19,0x1a,0x1b,0x1e, 107 | 0x1f,0x20,0x21,0x22,0x23,0x26,0x27,0x28,0x29,0x2a,0x2b, 108 | 0x2e,0x2f,0xd0,0xd1,0xd2,0xd3,0xd4,0xd5,0xd6,0xd7,0xe0, 109 | 0xe1,0xe2,0xe3,0xe4,0xe5,0xe6,0xe7}; 110 | bool FFblacklist(int I){ 111 | return std::find(std::begin(values_to_avoid), std::end(values_to_avoid), I) != std::end(values_to_avoid); 112 | } 113 | 114 | std::pair splitInt(int64_t Imm, int Size){ 115 | std::pair p(0,0); 116 | int64_t low = 0; 117 | for(int shift = 0; shift <= Size -8; shift+=8){ 118 | int64_t current_byte = (Imm >> shift ) & 0xFF; // (0xffc3ff >> 8) && 0xff == 0xc3 119 | int64_t next; 120 | if(shift == Size - 8) 121 | next = 0; // next of MSB is outside this instruction, so set it to 0. 122 | else 123 | next = ( Imm >> (shift + 8) ) & 0xFF; 124 | // errs() << format("%d, current=0x%02llx, next=0x%02llx\n",shift, current_byte, next); 125 | if ( current_byte == 0xc2 || current_byte == 0xc3 || 126 | current_byte == 0xca || current_byte == 0xcb || 127 | (current_byte == 0xff && FFblacklist(next)) ){ 128 | // Found a ret in the immediate. 
129 | current_byte = current_byte & 0x0f; // == 0x3 130 | low |= (current_byte << shift); // low |= 0x000300 131 | } 132 | } 133 | if (low == 0) // We didn't find anything. 134 | return p; 135 | 136 | p.first = std::min(low,Imm & (~low)); 137 | p.second = std::max(low,Imm & (~low)); 138 | return p; 139 | } 140 | 141 | 142 | bool isIndirectCall(MachineInstr *MI){ 143 | switch(MI->getOpcode()) { 144 | case X86::CALL16r: 145 | case X86::CALL32r: 146 | case X86::CALL64r: 147 | case X86::CALL16m: 148 | case X86::CALL32m: 149 | case X86::CALL64m: 150 | case X86::FARCALL16m: 151 | case X86::FARCALL32m: 152 | case X86::FARCALL64: 153 | case X86::TAILJMPr64: 154 | case X86::TAILJMPm64: 155 | case X86::TCRETURNri64: 156 | case X86::TCRETURNmi64: 157 | return true; 158 | 159 | default: return false; 160 | } 161 | } 162 | 163 | bool contains(std::vector<MachineInstr*> v, MachineInstr* mi){ 164 | return std::find(std::begin(v), std::end(v), mi) != std::end(v); 165 | } 166 | 167 | bool isMove(MachineInstr *MI){ 168 | switch(MI->getOpcode()) { 169 | default: 170 | return 0; 171 | case X86::MOV8ri: 172 | case X86::MOV16ri: 173 | case X86::MOV32ri: 174 | case X86::MOV32ri64: 175 | case X86::MOV64ri32: 176 | case X86::MOV64ri: 177 | case X86::MOV8mi: 178 | case X86::MOV16mi: 179 | case X86::MOV32mi: 180 | case X86::MOV64mi32: 181 | return 1; 182 | } 183 | } 184 | 185 | bool isArithmUsesEFLAGS(MachineInstr *MI){ 186 | switch(MI->getOpcode()) { 187 | default: 188 | return 0; 189 | case X86::ADC8ri: 190 | case X86::ADC16ri8: 191 | case X86::ADC16ri: 192 | case X86::ADC32ri: 193 | case X86::ADC32ri8: 194 | case X86::ADC64ri32: 195 | case X86::ADC64ri8: 196 | case X86::SBB8ri: 197 | case X86::SBB16ri: 198 | case X86::SBB16ri8: 199 | case X86::SBB32ri: 200 | case X86::SBB32ri8: 201 | case X86::SBB64ri32: 202 | case X86::SBB64ri8: 203 | case X86::SBB8mi: 204 | case X86::SBB16mi8: 205 | case X86::SBB16mi: 206 | case X86::SBB32mi8: 207 | case X86::SBB32mi: 208 | case X86::SBB64mi8: 209 | case 
X86::SBB64mi32: 210 | case X86::ADC32mi: 211 | case X86::ADC32mi8: 212 | case X86::ADC64mi32: 213 | case X86::ADC64mi8: 214 | return 1; 215 | } 216 | } 217 | 218 | bool isTest(MachineInstr *MI){ 219 | switch(MI->getOpcode()) { 220 | default: 221 | return 0; 222 | 223 | case X86::TEST8i8: 224 | case X86::TEST8ri: 225 | case X86::TEST16i16: 226 | case X86::TEST16ri: 227 | case X86::TEST32i32: 228 | case X86::TEST32ri: 229 | case X86::TEST64i32: 230 | case X86::TEST64ri32: 231 | // case X86::TEST8mi: 232 | // case X86::TEST16mi: 233 | // case X86::TEST32mi: 234 | // case X86::TEST64mi32: 235 | return 1; 236 | } 237 | } 238 | 239 | bool isCompare(MachineInstr *MI){ 240 | switch(MI->getOpcode()) { 241 | default: 242 | return 0; 243 | 244 | case X86::CMP8ri: 245 | case X86::CMP16ri: 246 | case X86::CMP16ri8: 247 | case X86::CMP32ri: 248 | case X86::CMP32ri8: 249 | case X86::CMP64ri32: 250 | case X86::CMP64ri8: 251 | case X86::CMP8mi: 252 | case X86::CMP16mi: 253 | case X86::CMP16mi8: 254 | case X86::CMP32mi: 255 | case X86::CMP32mi8: 256 | case X86::CMP64mi32: 257 | case X86::CMP64mi8: 258 | return 1; 259 | } 260 | } 261 | 262 | void emitNop(MachineInstr *MI, int count){ 263 | MachineBasicBlock *MBB = MI->getParent(); 264 | MachineFunction *MF = MBB->getParent(); 265 | const X86Subtarget &STI = MF->getSubtarget(); 266 | const X86InstrInfo &TII = *STI.getInstrInfo(); 267 | DebugLoc DL = MI->getDebugLoc(); 268 | MachineInstrBuilder MIB; 269 | while(count>0){ 270 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::NOOP)); 271 | count--; 272 | } 273 | } 274 | 275 | void emitNopAfter(MachineInstr *MI, int count){ 276 | MachineBasicBlock *MBB = MI->getParent(); 277 | MachineFunction *MF = MBB->getParent(); 278 | const X86Subtarget &STI = MF->getSubtarget(); 279 | const X86InstrInfo &TII = *STI.getInstrInfo(); 280 | DebugLoc DL = MI->getDebugLoc(); 281 | MachineInstrBuilder MIB; 282 | MachineBasicBlock::iterator MBBI = MI; 283 | while(count>0){ 284 | MIB = BuildMI(*MBB, MI, DL, 
TII.get(X86::NOOP)); 285 | count--; 286 | // Move it after MI. 287 | MBB->remove(MIB); 288 | MBB->insertAfter(MBBI, MIB); 289 | } 290 | } 291 | 292 | 293 | MachineInstrBuilder pushReg(MachineInstr *MI, unsigned int Reg, unsigned int flags){ 294 | MachineBasicBlock *MBB = MI->getParent(); 295 | MachineFunction *MF = MBB->getParent(); 296 | const X86Subtarget &STI = MF->getSubtarget(); 297 | const X86InstrInfo &TII = *STI.getInstrInfo(); 298 | DebugLoc DL = MI->getDebugLoc(); 299 | MachineInstrBuilder MIB; 300 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::PUSH64r)) 301 | .addReg(Reg, RegState::Kill | flags); 302 | GFreeDEBUG(1, "> " << *MIB); 303 | return MIB; 304 | } 305 | 306 | MachineInstrBuilder popReg(MachineInstr *MI, unsigned int Reg, unsigned int flags){ 307 | MachineBasicBlock *MBB = MI->getParent(); 308 | MachineFunction *MF = MBB->getParent(); 309 | const X86Subtarget &STI = MF->getSubtarget(); 310 | const X86InstrInfo &TII = *STI.getInstrInfo(); 311 | DebugLoc DL = MI->getDebugLoc(); 312 | MachineInstrBuilder MIB; 313 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::POP64r)) 314 | .addReg(Reg, RegState::Define | flags); 315 | GFreeDEBUG(1, "> " << *MIB); 316 | return MIB; 317 | } 318 | 319 | void pushEFLAGS(MachineInstr *MI){ 320 | MachineBasicBlock *MBB = MI->getParent(); 321 | MachineFunction *MF = MBB->getParent(); 322 | const X86Subtarget &STI = MF->getSubtarget(); 323 | const X86InstrInfo &TII = *STI.getInstrInfo(); 324 | DebugLoc DL = MI->getDebugLoc(); 325 | MachineInstrBuilder MIB; 326 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::PUSHF64)); 327 | GFreeDEBUG(0, "> " << *MIB); 328 | } 329 | 330 | void popEFLAGS(MachineInstr *MI){ 331 | MachineBasicBlock *MBB = MI->getParent(); 332 | MachineFunction *MF = MBB->getParent(); 333 | const X86Subtarget &STI = MF->getSubtarget(); 334 | const X86InstrInfo &TII = *STI.getInstrInfo(); 335 | DebugLoc DL = MI->getDebugLoc(); 336 | MachineInstrBuilder MIB; 337 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::POPF64)); 338 | 
GFreeDEBUG(0, "> " << *MIB); 339 | } 340 | 341 | 342 | // PUSHF64 %RSP, %RSP, %EFLAGS 343 | // TODO: http://reviews.llvm.org/D6629 344 | void pushEFLAGSinline(MachineInstr *MI, unsigned int saveRegEFLAGS){ 345 | MachineBasicBlock *MBB = MI->getParent(); 346 | MachineFunction *MF = MBB->getParent(); 347 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 348 | const X86InstrInfo &TII = *STI.getInstrInfo(); 349 | DebugLoc DL = MI->getDebugLoc(); 350 | MachineInstrBuilder MIB; 351 | // Fix this. 352 | // 1# 353 | // Theoretically the code commented below should work, but it doesn't. 354 | // MBB->addLiveIn(X86::EFLAGS); 355 | // MBB->addLiveIn(X86::RSP); 356 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::PUSHF64)); 357 | // MIB->getOperand(2).setIsUndef(); 358 | // MBB->sortUniqueLiveIns(); 359 | 360 | // 2# 361 | // if ( MachineBasicBlock::LQR_Dead == 362 | // MBB->computeRegisterLiveness((MF->getRegInfo().getTargetRegisterInfo()), X86::EFLAGS, MI, 5000)){ 363 | // MIB = BuildMI(*MBB, MI, DL, TII.get(TargetOpcode::IMPLICIT_DEF), X86::EFLAGS); 364 | // // errs() << "> " << *MIB; 365 | // } 366 | 367 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::INLINEASM)) 368 | // .addExternalSymbol("pushfq") 369 | // .addImm(0) 370 | // .addReg(X86::RSP, RegState::ImplicitDefine) 371 | // .addReg(X86::RSP, RegState::ImplicitKill) 372 | // .addReg(X86::EFLAGS, RegState::ImplicitKill); 373 | 374 | // 3# 375 | // MIB = BuildMI(*MBB, MI, DL, TII.get(TargetOpcode::COPY)).addReg(X86::RAX, RegState::Define).addReg(X86::EFLAGS); 376 | // MBB->addLiveIn(X86::EFLAGS); 377 | // MBB->addLiveIn(X86::RSP); 378 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::PUSHF64)); 379 | // MIB->getOperand(2).setIsUndef(); 380 | // MBB->sortUniqueLiveIns(); 381 | 382 | // 4# 383 | if ( MachineBasicBlock::LQR_Dead == 384 | MBB->computeRegisterLiveness((MF->getRegInfo().getTargetRegisterInfo()), X86::EFLAGS, MI, 5000)){ 385 | MIB = BuildMI(*MBB, MI, DL, TII.get(TargetOpcode::IMPLICIT_DEF), X86::EFLAGS); 386 | } 387 | 388
| MIB = BuildMI(*MBB, MI, DL, TII.get(X86::INLINEASM)) 389 | .addExternalSymbol("pushfq") 390 | .addImm(0) 391 | .addReg(X86::RSP, RegState::ImplicitDefine) 392 | .addReg(X86::RSP, RegState::ImplicitKill) 393 | .addReg(X86::EFLAGS, RegState::ImplicitKill); 394 | 395 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::POP64r)) 396 | .addReg(saveRegEFLAGS, RegState::Define); 397 | 398 | MBB->addLiveIn(X86::R12); 399 | MBB->sortUniqueLiveIns(); 400 | 401 | GFreeDEBUG(0, "> " << *MIB); 402 | } 403 | 404 | 405 | 406 | void popEFLAGSinline(MachineInstr *MI, unsigned int saveRegEFLAGS){ 407 | MachineBasicBlock *MBB = MI->getParent(); 408 | MachineFunction *MF = MBB->getParent(); 409 | const X86Subtarget &STI = MF->getSubtarget<X86Subtarget>(); 410 | const X86InstrInfo &TII = *STI.getInstrInfo(); 411 | DebugLoc DL = MI->getDebugLoc(); 412 | MachineInstrBuilder MIB; 413 | 414 | // 3# 415 | // MIB = BuildMI(*MBB, MI, DL, TII.get(TargetOpcode::COPY)).addReg(X86::EFLAGS, RegState::Define).addReg(X86::RAX, RegState::Undef); 416 | // 2# 417 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::POPF64)); 418 | 419 | // 1# 420 | // MIB = BuildMI(*MBB, MI, DL, TII.get(X86::INLINEASM)) 421 | // .addExternalSymbol("popfq") 422 | // .addImm(0) 423 | // .addReg(X86::RSP, RegState::ImplicitDefine) 424 | // .addReg(X86::RSP, RegState::ImplicitKill) 425 | // .addReg(X86::EFLAGS, RegState::ImplicitDefine); 426 | 427 | // 4# 428 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::PUSH64r)) 429 | .addReg(saveRegEFLAGS); 430 | 431 | MIB = BuildMI(*MBB, MI, DL, TII.get(X86::INLINEASM)) 432 | .addExternalSymbol("popfq") 433 | .addImm(0) 434 | .addReg(X86::RSP, RegState::ImplicitDefine) 435 | .addReg(X86::RSP, RegState::ImplicitKill) 436 | .addReg(X86::EFLAGS, RegState::ImplicitDefine); 437 | 438 | 439 | 440 | GFreeDEBUG(0, "> " << *MIB); 441 | } 442 | 443 | bool needToSaveEFLAGS(MachineInstr *MI){ 444 | MachineBasicBlock *MBB = MI->getParent(); 445 | MachineBasicBlock::iterator MBBI, MBBIE; 446 | MBBI = MI; 447 | MBBIE = MBB->end();
448 | MachineInstr *CurrentMI; 449 | unsigned int i; 450 | // errs() << "TMP DUMPL " << *MBB; 451 | bool use, def; 452 | use = false; 453 | def = false; 454 | 455 | for (; ; MBBI++) { 456 | 457 | // If we are at the end of the MBB, follow the successor if and only if there is 458 | // one successor. 459 | while(MBBI == MBBIE && MBB->succ_size() == 1){ 460 | MBB = *MBB->succ_begin(); 461 | MBBI = MBB->begin(); 462 | MBBIE = MBB->end(); 463 | } 464 | 465 | if(MBBI == MBBIE && MBB->succ_size() != 1){ // If there are > 1 successors, stop searching. 466 | // errs() << "Found anything1!\n"; 467 | return 1; 468 | } 469 | 470 | CurrentMI = MBBI; 471 | // errs() << "\t[NS]: " << *CurrentMI; 472 | 473 | if( CurrentMI->isReturn() || CurrentMI->isCall() ){ // We do not preserve eflags across return. It should be safe. 474 | // errs() << " Found ret!\n"; 475 | return 0; 476 | } 477 | 478 | for(i=0; i < CurrentMI->getNumOperands(); i++){ 479 | MachineOperand MO = CurrentMI->getOperand(i); 480 | if(!MO.isReg() || (MO.getReg() != X86::EFLAGS)) 481 | continue; 482 | if(MO.isUse()){ 483 | // errs() << " Found use!\n"; 484 | use = 1; 485 | } 486 | if(MO.isDef()){ 487 | // errs() << " Found def!\n"; 488 | def = 1; 489 | } 490 | } 491 | 492 | if(use == 1) return 1; 493 | if(def == 1) return 0; 494 | } 495 | 496 | // errs() << "Found anything!\n"; 497 | return 1; 498 | } 499 | 500 | bool containsRet(std::vector<uint8_t> MIbytes){ 501 | MIbytes.push_back(0); // This trick is just to not overcomplicate the loop.
502 | for(unsigned int i = 0; i != MIbytes.size() - 1; i++) { 503 | 504 | if( MIbytes[i] == 0xc2 || MIbytes[i] == 0xc3 || 505 | MIbytes[i] == 0xca || MIbytes[i] == 0xcb || 506 | ( MIbytes[i] == 0xff && FFblacklist(MIbytes[i+1]) )){ 507 | return true; 508 | } 509 | } 510 | return false; 511 | } 512 | 513 | const TargetRegisterClass *getRegClassFromSize(int size){ 514 | if( size == 1 ){ 515 | return &X86::GR8RegClass; 516 | } 517 | if( size == 2 ){ 518 | return &X86::GR16RegClass; 519 | } 520 | if( size == 4 ){ 521 | return &X86::GR32RegClass; 522 | } 523 | if( size == 8 ){ 524 | return &X86::GR64RegClass; 525 | } 526 | assert(false && "[getRegClassFromSize] We should never get here!"); 527 | return 0; 528 | } 529 | 530 | void dumpSuccessors(MachineBasicBlock *fromMBB){ 531 | errs() << "Successors of MBB#" << fromMBB->getNumber() << ": "; 532 | for(MachineBasicBlock::succ_iterator si = fromMBB->succ_begin(), se=fromMBB->succ_end(); se!=si; si++){ 533 | errs() << "MBB#" << (*si)->getNumber() << " "; 534 | } 535 | errs()<< "\n"; 536 | } 537 | 538 | 539 | -------------------------------------------------------------------------------- /X86GFree/X86GFreeUtils.h: -------------------------------------------------------------------------------- 1 | #include <vector> 2 | #include <utility> 3 | #include "llvm/CodeGen/MachineInstr.h" 4 | #include "llvm/CodeGen/MachineInstrBuilder.h" 5 | #include "llvm/Target/TargetRegisterInfo.h" 6 | #include "llvm/Support/CommandLine.h" 7 | 8 | #ifndef GFREEUTILS_H_ 9 | #define GFREEUTILS_H_ 10 | 11 | #define DEBUGLEVEL -1 12 | #define GFreeDEBUG(level, ...) \ 13 | do { if (level <= DEBUGLEVEL) errs() << std::string(!level?
0: (level)*4,' ') << __VA_ARGS__; } while (0) 14 | 15 | using namespace llvm; 16 | 17 | /* static cl::opt<bool> */ 18 | /* DisableGFree("disable-gfree", cl::Hidden, */ 19 | /* cl::desc("Disable GFree protections")); */ 20 | 21 | /* Global Variables */ 22 | extern cl::opt<bool> DisableGFree; 23 | 24 | 25 | std::pair<int64_t, int64_t> splitInt(int64_t Imm, int Size); 26 | 27 | bool isIndirectCall(MachineInstr *MI); 28 | bool isMove(MachineInstr *MI); 29 | bool isTest(MachineInstr *MI); 30 | bool isCompare(MachineInstr *MI); 31 | bool isArithmUsesEFLAGS(MachineInstr *MI); 32 | 33 | const TargetRegisterClass *getRegClassFromSize(int size); 34 | int getORrrOpcode(unsigned int size); 35 | int getMOVriOpcode(unsigned int size); 36 | int getADDrrOpcode(unsigned int size); 37 | int getSUBrrOpcode(unsigned int size); 38 | int getORriOpcode(unsigned int size); 39 | 40 | bool containsRet(std::vector<uint8_t> MIbytes); 41 | bool contains(std::vector<MachineInstr*> v, MachineInstr* mbb); 42 | 43 | void emitNop(MachineInstr *MI, int count=1); 44 | void emitNopAfter(MachineInstr *MI, int count=1); 45 | MachineInstrBuilder pushReg(MachineInstr *MI, unsigned int Reg, unsigned int flags=0); 46 | MachineInstrBuilder popReg(MachineInstr *MI, unsigned int Reg, unsigned int flags=0); 47 | bool needToSaveEFLAGS(MachineInstr *MI); 48 | void pushEFLAGS(MachineInstr *MI); 49 | void popEFLAGS(MachineInstr *MI); 50 | void pushEFLAGSinline(MachineInstr *MI, unsigned int saveRegEFLAGS); 51 | void popEFLAGSinline(MachineInstr *MI, unsigned int saveRegEFLAGS); 52 | 53 | void dumpSuccessors(MachineBasicBlock *fromMBB); 54 | 55 | #endif 56 | -------------------------------------------------------------------------------- /X86GFree/X86MCInstLower.h: -------------------------------------------------------------------------------- 1 | #ifndef LLVM_LIB_TARGET_X86_X86MCINSTLOWER_H 2 | #define LLVM_LIB_TARGET_X86_X86MCINSTLOWER_H 3 | 4 | #include "X86AsmPrinter.h" 5 | #include "llvm/CodeGen/MachineModuleInfoImpls.h" 6 | #include "llvm/Support/Compiler.h" 7 |
8 | namespace llvm { 9 | class MCAsmInfo; 10 | class MCContext; 11 | class MCInst; 12 | class MCOperand; 13 | class MCSymbol; 14 | class MachineInstr; 15 | class MachineFunction; 16 | class MachineModuleInfoMachO; 17 | class MachineOperand; 18 | class Mangler; 19 | class TargetMachine; 20 | 21 | 22 | /// X86MCInstLower - This class is used to lower an MachineInstr into an MCInst. 23 | class LLVM_LIBRARY_VISIBILITY X86MCInstLower { 24 | MCContext &Ctx; 25 | const MachineFunction &MF; 26 | const TargetMachine &TM; 27 | const MCAsmInfo &MAI; 28 | X86AsmPrinter &AsmPrinter; 29 | public: 30 | X86MCInstLower(const MachineFunction &MF, X86AsmPrinter &asmprinter); 31 | 32 | Optional<MCOperand> LowerMachineOperand(const MachineInstr *MI, 33 | const MachineOperand &MO) const; 34 | void Lower(const MachineInstr *MI, MCInst &OutMI) const; 35 | 36 | MCSymbol *GetSymbolFromOperand(const MachineOperand &MO) const; 37 | MCOperand LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const; 38 | 39 | private: 40 | MachineModuleInfoMachO &getMachOMMI() const; 41 | Mangler *getMang() const { 42 | return AsmPrinter.Mang; 43 | } 44 | }; 45 | } // end namespace llvm 46 | #endif 47 | -------------------------------------------------------------------------------- /install.sh: -------------------------------------------------------------------------------- 1 | echo "[+] Downloading llvm, clang and compiler-rt..."
2 | wget http://llvm.org/releases/3.8.0/llvm-3.8.0.src.tar.xz; tar xvf llvm-3.8.0.src.tar.xz 3 | (cd llvm-3.8.0.src && 4 | (cd tools && wget http://llvm.org/releases/3.8.0/cfe-3.8.0.src.tar.xz && tar xvf cfe-3.8.0.src.tar.xz && mv cfe-3.8.0.src clang) && 5 | (cd projects && wget http://llvm.org/releases/3.8.0/compiler-rt-3.8.0.src.tar.xz && tar xvf compiler-rt-3.8.0.src.tar.xz && mv compiler-rt-3.8.0.src compiler-rt) 6 | ) 7 | 8 | 9 | echo "[+] Installing GFree" 10 | patch -p0 < patches/llvm.patch 11 | cp ./X86GFree/* ./llvm-3.8.0.src/lib/Target/X86/ 12 | cp ./llvm-3.8.0.src/lib/CodeGen/AllocationOrder.h ./llvm-3.8.0.src/include/llvm/CodeGen/ 13 | 14 | echo "[~] Building..." 15 | mkdir llvm-build; 16 | (cd llvm-build && 17 | CC=clang CXX=clang++ cmake -G "Ninja" -DCMAKE_BUILD_TYPE="RelWithDebInfo" \ 18 | -DLLVM_TARGETS_TO_BUILD=X86 \ 19 | -DLLVM_OPTIMIZED_TABLEGEN=ON \ 20 | -DLLVM_INCLUDE_EXAMPLES=OFF \ 21 | -DLLVM_INCLUDE_TESTS=OFF \ 22 | -DLLVM_INCLUDE_DOCS=OFF \ 23 | -DLLVM_ENABLE_SPHINX=OFF \ 24 | -DLLVM_PARALLEL_LINK_JOBS=2 \ 25 | -DLLVM_ENABLE_ASSERTIONS=ON \ 26 | -DCOMPILER_RT_BUILD_SANITIZERS=OFF \ 27 | -DCMAKE_C_FLAGS:STRING="-gsplit-dwarf" \ 28 | -DCMAKE_CXX_FLAGS:STRING="-gsplit-dwarf" \ 29 | ../llvm-3.8.0.src && 30 | ninja -j2; 31 | ) 32 | 33 | echo -e "\n[+] Done!" 
34 | echo "$PWD/llvm-build/bin/clang -mno-red-zone -fno-optimize-sibling-calls \"\$@\"" > clang-gfree 35 | echo "$PWD/llvm-build/bin/clang++ -mno-red-zone -fno-optimize-sibling-calls \"\$@\"" > clang++-gfree 36 | chmod +x $PWD/clang-gfree $PWD/clang++-gfree 37 | echo "You can now install clang-gfree and clang++-gfree with: 38 | ln -s $PWD/clang-gfree /usr/bin/clang-gfree 39 | ln -s $PWD/clang++-gfree /usr/bin/clang++-gfree" 40 | -------------------------------------------------------------------------------- /patches/llvm.patch: -------------------------------------------------------------------------------- 1 | Only in ./llvm-3.8.0.src/include/llvm/CodeGen: AllocationOrder.h 2 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/CodeGen/AllocationOrder.h ./llvm-3.8.0.src/lib/CodeGen/AllocationOrder.h 3 | --- ./llvm-naive/llvm-3.8.0.src/lib/CodeGen/AllocationOrder.h 2015-07-16 00:16:00.000000000 +0200 4 | +++ ./llvm-3.8.0.src/lib/CodeGen/AllocationOrder.h 2016-04-14 16:35:53.000000000 +0200 5 | @@ -26,7 +26,7 @@ 6 | class VirtRegMap; 7 | class LiveRegMatrix; 8 | 9 | -class LLVM_LIBRARY_VISIBILITY AllocationOrder { 10 | +class AllocationOrder { 11 | SmallVector<MCPhysReg, 16> Hints; 12 | ArrayRef<MCPhysReg> Order; 13 | int Pos; 14 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/CMakeLists.txt ./llvm-3.8.0.src/lib/Target/X86/CMakeLists.txt 15 | --- ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/CMakeLists.txt 2015-12-31 23:40:45.000000000 +0100 16 | +++ ./llvm-3.8.0.src/lib/Target/X86/CMakeLists.txt 2016-04-14 16:15:02.000000000 +0200 17 | @@ -36,6 +36,12 @@ 18 | X86FixupLEAs.cpp 19 | X86WinEHState.cpp 20 | X86OptimizeLEAs.cpp 21 | + X86GFreeAssembler.cpp 22 | + X86GFreeImmediateRecon.cpp 23 | + X86GFreeModRMSIB.cpp 24 | + X86GFree.cpp 25 | + X86GFreeJCP.cpp 26 | + X86GFreeUtils.cpp 27 | ) 28 | 29 | add_llvm_target(X86CodeGen ${sources}) 30 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.cpp ./llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.cpp 31 | ---
./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.cpp 2015-12-25 23:09:45.000000000 +0100 32 | +++ ./llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.cpp 2016-04-14 16:33:50.000000000 +0200 33 | @@ -12,6 +12,7 @@ 34 | // 35 | //===----------------------------------------------------------------------===// 36 | 37 | +#include "X86MCInstLower.h" 38 | #include "X86AsmPrinter.h" 39 | #include "InstPrinter/X86ATTInstPrinter.h" 40 | #include "MCTargetDesc/X86BaseInfo.h" 41 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.h ./llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.h 42 | --- ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.h 2015-10-15 16:09:59.000000000 +0200 43 | +++ ./llvm-3.8.0.src/lib/Target/X86/X86AsmPrinter.h 2016-04-14 16:32:55.000000000 +0200 44 | @@ -17,7 +17,7 @@ 45 | #include "llvm/Target/TargetMachine.h" 46 | 47 | // Implemented in X86MCInstLower.cpp 48 | -namespace { 49 | +namespace llvm { 50 | class X86MCInstLower; 51 | } 52 | 53 | @@ -95,6 +95,9 @@ 54 | return "X86 Assembly / Object Emitter"; 55 | } 56 | 57 | + // Gfree 58 | + void setSubtarget(const X86Subtarget *X86SubT) { Subtarget=X86SubT; } 59 | + 60 | const X86Subtarget &getSubtarget() const { return *Subtarget; } 61 | 62 | void EmitStartOfAsmFile(Module &M) override; 63 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeAssembler.cpp 64 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeAssembler.h 65 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFree.cpp 66 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeImmediateRecon.cpp 67 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeJCP.cpp 68 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeModRMSIB.cpp 69 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeUtils.cpp 70 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86GFreeUtils.h 71 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86.h ./llvm-3.8.0.src/lib/Target/X86/X86.h 72 | --- ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86.h 2016-01-13 12:30:44.000000000 
+0100 73 | +++ ./llvm-3.8.0.src/lib/Target/X86/X86.h 2016-04-14 16:15:31.000000000 +0200 74 | @@ -72,6 +72,13 @@ 75 | /// must run after prologue/epilogue insertion and before lowering 76 | /// the MachineInstr to MC. 77 | FunctionPass *createX86ExpandPseudoPass(); 78 | + 79 | +// GFree Machine Pass 80 | +FunctionPass *createGFreeImmediateReconPass(); 81 | +FunctionPass *createGFreeJCPPass(); 82 | +FunctionPass *createGFreeModRMSIB(); 83 | +FunctionPass *createGFreeMachinePass(); 84 | + 85 | } // End llvm namespace 86 | 87 | #endif 88 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86MCInstLower.cpp ./llvm-3.8.0.src/lib/Target/X86/X86MCInstLower.cpp 89 | --- ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86MCInstLower.cpp 2016-01-05 08:44:14.000000000 +0100 90 | +++ ./llvm-3.8.0.src/lib/Target/X86/X86MCInstLower.cpp 2016-04-14 16:34:29.000000000 +0200 91 | @@ -12,6 +12,7 @@ 92 | // 93 | //===----------------------------------------------------------------------===// 94 | 95 | +#include "X86MCInstLower.h" 96 | #include "X86AsmPrinter.h" 97 | #include "X86RegisterInfo.h" 98 | #include "X86ShuffleDecodeConstantPool.h" 99 | @@ -40,34 +41,6 @@ 100 | #include "llvm/Support/TargetRegistry.h" 101 | using namespace llvm; 102 | 103 | -namespace { 104 | - 105 | -/// X86MCInstLower - This class is used to lower an MachineInstr into an MCInst. 
106 | -class X86MCInstLower { 107 | - MCContext &Ctx; 108 | - const MachineFunction &MF; 109 | - const TargetMachine &TM; 110 | - const MCAsmInfo &MAI; 111 | - X86AsmPrinter &AsmPrinter; 112 | -public: 113 | - X86MCInstLower(const MachineFunction &MF, X86AsmPrinter &asmprinter); 114 | - 115 | - Optional<MCOperand> LowerMachineOperand(const MachineInstr *MI, 116 | - const MachineOperand &MO) const; 117 | - void Lower(const MachineInstr *MI, MCInst &OutMI) const; 118 | - 119 | - MCSymbol *GetSymbolFromOperand(const MachineOperand &MO) const; 120 | - MCOperand LowerSymbolOperand(const MachineOperand &MO, MCSymbol *Sym) const; 121 | - 122 | -private: 123 | - MachineModuleInfoMachO &getMachOMMI() const; 124 | - Mangler *getMang() const { 125 | - return AsmPrinter.Mang; 126 | - } 127 | -}; 128 | - 129 | -} // end anonymous namespace 130 | - 131 | // Emit a minimal sequence of nops spanning NumBytes bytes. 132 | static void EmitNops(MCStreamer &OS, unsigned NumBytes, bool Is64Bit, 133 | const MCSubtargetInfo &STI); 134 | Only in ./llvm-3.8.0.src/lib/Target/X86: X86MCInstLower.h 135 | diff -ur ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86TargetMachine.cpp ./llvm-3.8.0.src/lib/Target/X86/X86TargetMachine.cpp 136 | --- ./llvm-naive/llvm-3.8.0.src/lib/Target/X86/X86TargetMachine.cpp 2015-12-04 11:53:15.000000000 +0100 137 | +++ ./llvm-3.8.0.src/lib/Target/X86/X86TargetMachine.cpp 2016-05-03 18:03:41.102731654 +0200 138 | @@ -208,6 +208,7 @@ 139 | bool addILPOpts() override; 140 | bool addPreISel() override; 141 | void addPreRegAlloc() override; 142 | + bool addPreRewrite() override; 143 | void addPostRegAlloc() override; 144 | void addPreEmitPass() override; 145 | void addPreSched2() override; 146 | @@ -258,9 +259,16 @@ 147 | addPass(createX86OptimizeLEAs()); 148 | 149 | addPass(createX86CallFrameOptimization()); 150 | + addPass(createGFreeImmediateReconPass()); 151 | +} 152 | + 153 | +bool X86PassConfig::addPreRewrite() { 154 | + addPass(createGFreeModRMSIB()); 155 | + return true;
156 | } 157 | 158 | void X86PassConfig::addPostRegAlloc() { 159 | + addPass(createGFreeJCPPass()); 160 | addPass(createX86FloatingPointStackifierPass()); 161 | } 162 | 163 | @@ -277,4 +285,5 @@ 164 | addPass(createX86PadShortFunctions()); 165 | addPass(createX86FixupLEAs()); 166 | } 167 | + addPass(createGFreeMachinePass()); 168 | } 169 | Only in ./llvm-3.8.0.src/: llvm-config 170 | Binary files ./llvm-naive/llvm-3.8.0.src/utils/llvm-build/llvmbuild/componentinfo.pyc and ./llvm-3.8.0.src/utils/llvm-build/llvmbuild/componentinfo.pyc differ 171 | Binary files ./llvm-naive/llvm-3.8.0.src/utils/llvm-build/llvmbuild/configutil.pyc and ./llvm-3.8.0.src/utils/llvm-build/llvmbuild/configutil.pyc differ 172 | Binary files ./llvm-naive/llvm-3.8.0.src/utils/llvm-build/llvmbuild/__init__.pyc and ./llvm-3.8.0.src/utils/llvm-build/llvmbuild/__init__.pyc differ 173 | Binary files ./llvm-naive/llvm-3.8.0.src/utils/llvm-build/llvmbuild/main.pyc and ./llvm-3.8.0.src/utils/llvm-build/llvmbuild/main.pyc differ 174 | Binary files ./llvm-naive/llvm-3.8.0.src/utils/llvm-build/llvmbuild/util.pyc and ./llvm-3.8.0.src/utils/llvm-build/llvmbuild/util.pyc differ 175 | --------------------------------------------------------------------------------