├── .gitignore ├── Cargo.toml ├── LICENSE ├── README.md ├── bootloader ├── .cargo │ └── config ├── .gitignore ├── Cargo.toml ├── build.rs ├── src │ ├── asm_routines.asm │ ├── core_reqs.rs │ ├── main.rs │ ├── mm.rs │ ├── panic.rs │ ├── pe.rs │ ├── pxe.rs │ └── realmode.rs └── stage0.asm ├── debug_console ├── .gitignore ├── Cargo.toml └── src │ └── main.rs ├── emu └── bochsrc.bxrc ├── flatten_pe.py ├── kernel ├── .cargo │ └── config ├── .gitignore ├── Cargo.toml └── src │ ├── acpi.rs │ ├── core_reqs.rs │ ├── main.rs │ ├── mm.rs │ └── panic.rs ├── shared ├── cpu │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── mmu │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── rangeset │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── safecast │ ├── Cargo.toml │ ├── README.md │ ├── bytesafe_derive │ │ ├── Cargo.toml │ │ └── src │ │ │ └── lib.rs │ └── src │ │ └── lib.rs └── serial │ ├── Cargo.toml │ └── src │ └── lib.rs └── src └── main.rs /.gitignore: -------------------------------------------------------------------------------- 1 | stage1.flat 2 | orange_slice.boot 3 | Cargo.lock 4 | target 5 | cpuland.bat 6 | 7 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "bootloader_builder" 3 | version = "0.1.0" 4 | authors = ["gamozo "] 5 | 6 | [dependencies] 7 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 gamozolabs 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Orange Slice 2 | 3 | Orange Slice is a research kernel and hypervisor with an end goal of creating a deterministic hypervisor. This will be developed almost entirely in my free time, and will probably move slow. However I will try to stream almost all dev for this project, such that people can ask questions and hopefully learn a thing or two about kernel and hypervisor development! 4 | 5 | This deterministic hypervisor is going to be designed from the start for fuzzing. Having determinism in a hypervisor would allow us to never have an issue with reproducing a bug, regardless of how complex the bug is. 
However, as a hypervisor, we will benefit from the performance of hardware-accelerated virtualization. 6 | 7 | ## TL;DR 8 | 9 | The end goal is a deterministic hypervisor, capable of booting Windows and Linux, with less than a 5x performance slowdown to achieve instruction-and-cycle-level determinism for cycle counts and interrupt boundaries. 10 | 11 | # About Me 12 | 13 | [Twitter] 14 | 15 | [My Blog] 16 | 17 | [My Youtube Channel] 18 | 19 | # Intro Video to the project 20 | 21 | [![Youtube Video](https://img.youtube.com/vi/okSUAlx_58Y/0.jpg)](https://www.youtube.com/watch?v=okSUAlx_58Y) 22 | 23 | # Mascot 24 | 25 | [Orange Slice Squishable] 26 | 27 | # This is going to be developed live? 28 | 29 | Yup. Check out [My Youtube Channel] or my [Twitter]. I typically announce my streams a few hours ahead of time, and schedule the streams on Youtube. Further, for streams that I think are more impactful, I try to schedule them a few days out. 30 | 31 | I'm going to try to do much of the development live, and I'll try to help answer any questions about why certain things are being done. If this project fails, but I teach some people about OS development and get others excited about security research, then it was a success in my eyes. 32 | 33 | # Project Development 34 | 35 | This will be a bootloader, kernel, and hypervisor written entirely in Rust (except for the stage0 in assembly). I already have a couple of research kernels written in Rust which I will likely borrow code from. 36 | 37 | I haven't quite determined the design of the kernel yet, but it will be multiprocessing from day one (support for SMP systems, but only single-core guests for now). I have a 256-thread Xeon Phi which I use to stress the scalability and design of the kernel. I already have many different kernel models I've experimented with before for hypervisor development, so hopefully we'll be able to make informed decisions based on past experiences. 38 | 39 | # Building 40 | 41 | Have `nasm`, `lld-link` (from LLVM), `python` (I use Python 3), and Rust nightly (with the `i586-pc-windows-msvc` and `x86_64-pc-windows-msvc` targets) installed. 42 | 43 | Run `cargo run` in the root directory. Everything should be built :) 44 | 45 | # Using 46 | 47 | Copy `orange_slice.boot` and `orange_slice.kern` to a TFTPD server folder configured for PXE booting. Also set the PXE boot filename to `orange_slice.boot` in your DHCP server. 48 | 49 | ## Previous public hypervisor work 50 | 51 | [Hypervisor for fuzzing written in C] 52 | 53 | [Hypervisor for fuzzing written in assembly] 54 | 55 | # What is determinism? 56 | 57 | When running an operating system there are many different things going on. Things like cycle counts, interrupts, system times, etc., all vary during execution. On an x86 processor you'd struggle to ever get an external interrupt to come in on the same instruction boundary twice, or to read the same value from `rdtsc`. 58 | 59 | This non-determinism means that you cannot simply run a previous crashing input through again and observe the same result. Things like ASLR state can be influenced by external interrupts and timers, and things like task switches are also influenced by these. Race conditions are typically extremely hard to reproduce, and this project aims to make them reliably reproducible while keeping all the performance benefits of a hypervisor. 60 | 61 | # What do we consider determinism? 62 | 63 | If our goal is to develop a deterministic hypervisor, it's important that we lay down some ground rules for what we consider in scope, and what we do not.
64 | 65 | - The hypervisor must return the same results from all emulated devices 66 | - If a time is queried from a PIT/APIC/RDTSC, the same time must be returned as was in prior executions from the same snapshot 67 | - External interrupts must be delivered on the same instruction boundaries 68 | - If we cannot fulfill this goal directly, then we must have a way to determine we "missed" a boundary and restore to a previous known good state which we can "try again". 69 | - We should be able to set breakpoints on future events that we know will happen from a previous execution. This allows us to time travel debug, go back in time, and set a breakpoint on a previously observed condition. 70 | - Probably some more... as we tailor our goals based on successes and failures 71 | 72 | Ultimately we should be able to boot the BIOS, boot into Windows, and finally launch an application that requests the cycle count, and that cycle count should be predictable based on prior runs, and all context switches should have occurred up to that point at deterministic times. 73 | 74 | # Why? 75 | 76 | With my amazing team at Microsoft, we're working on a fully deterministic system level fuzzing tool (this will be open source for everyone soon, likely by late 2019, but no promises!). This is built on the existing system emulator Bochs; but with many modifications to provide APIs for fuzzing, introspection, and system-level time travel debugging. There's also some pretty nutty architecture that was designed to ensure determinism, we can't wait to share and talk about what we've done! 77 | 78 | We made a decision early on in the project, that determinism is more important than performance. Determinism allows us to provide users with system-level time travel debugging, allowing high quality bug reports with the net effect of eliminating all "no-repro" bugs. 79 | 80 | We have already used our new deterministic tooling to reliably reproduce obscure race conditions that historically we were unable to reproduce well enough to fix! 81 | 82 | But, with Bochs comes a 50-100x performance slowdown. Your Windows boot now takes an hour rather than a minute, and your fuzzer performance dramatically drops. However it's worth it for the determinism. We'd rather have 10 bugs get fixed, than "know" about 15 bugs and only fix a few of them. 83 | 84 | The ultimate goal of this project is to bring this performance overhead down from the ~50-100x we have from Bochs, to a goal of <5x. 5x may seem high for a hypervisor, but we're probably going to have to expect interrupts "early" and walk up to the correct boundary to deliver an interrupt. This may have some emulation or single stepping involved. 85 | 86 | If the microarchitecture is nice and predictable in certain situations, then hopefully we'll be able to find a good way to get this determinism with little cost. Otherwise we might have to do things a bit crude and get around the rough edges with partial emulation. 87 | 88 | # Timeframe 89 | 90 | This project is not that important as it only fixes performance issues but none of the others we address with our Bochs approach, such as full-system taint tracking and the ability to fuzz hypervisors with full coverage, feedback, and determinism. It may also fail due to infeasibility, as hardware virtualization extensions are not designed with determinism in mind. 91 | 92 | If this project succeeds, this project will likely be abandoned and a new one will be created that will be user oriented. 
This project is only for proving that it's possible, and exploring ways of accomplishing this goal... and of course teaching during the process! 93 | 94 | [Orange Slice Squishable]: http://www.squishable.com/pc/comfortfood_orange_slice/Big_Animals/Comfort+Food+Orange+Slice 95 | 96 | [My Youtube Channel]: https://www.youtube.com/user/gamozolabs 97 | [Twitter]: https://twitter.com/gamozolabs 98 | [Hypervisor for fuzzing written in C]: https://github.com/gamozolabs/falkervisor_grilled_cheese 99 | [Hypervisor for fuzzing written in assembly]: https://github.com/gamozolabs/falkervisor_beta 100 | [My Blog]: https://gamozolabs.github.io/ 101 | -------------------------------------------------------------------------------- /bootloader/.cargo/config: -------------------------------------------------------------------------------- 1 | [build] 2 | target = "i586-pc-windows-msvc" 3 | 4 | [target.i586-pc-windows-msvc] 5 | rustflags = ["-Z", "thinlto=off", "-C", "relocation-model=static", "-C", "linker=lld-link", "-C", "link-args=/entry:entry /subsystem:native /base:0xf000 /fixed /debug /nodefaultlib target/asm_routines.obj"] 6 | -------------------------------------------------------------------------------- /bootloader/.gitignore: -------------------------------------------------------------------------------- 1 | Cargo.lock 2 | target 3 | 4 | -------------------------------------------------------------------------------- /bootloader/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "stage1" 3 | version = "0.1.0" 4 | authors = ["Brandon Falk "] 5 | 6 | [dependencies] 7 | serial = { path = "../shared/serial" } 8 | cpu = { path = "../shared/cpu" } 9 | rangeset = { path = "../shared/rangeset" } 10 | safecast = { path = "../shared/safecast" } 11 | bytesafe_derive = { path = "../shared/safecast/bytesafe_derive" } 12 | mmu = { path = "../shared/mmu" } 13 | 14 | [profile.release] 15 | panic = "abort" 16 | opt-level = "z" 17 | lto = true 18 | debug = true 19 | 20 | [profile.dev] 21 | panic = "abort" 22 | debug = true 23 | 24 | -------------------------------------------------------------------------------- /bootloader/build.rs: -------------------------------------------------------------------------------- 1 | use std::process::Command; 2 | use std::path::Path; 3 | 4 | fn nasm(in_asm: &str, out_obj: &str) 5 | { 6 | if Path::new(out_obj).exists() { 7 | std::fs::remove_file(out_obj).expect("Failed to remove old object"); 8 | } 9 | 10 | let status = Command::new("nasm") 11 | .args(&["-f", "win32", "-o", out_obj, in_asm]) 12 | .status().expect("Failed to run nasm"); 13 | 14 | /* Check for command success */ 15 | assert!(status.success(), "NASM command failed"); 16 | 17 | /* Ensure output file was created */ 18 | assert!(Path::new(out_obj).exists(), "NASM did not generate expected file"); 19 | } 20 | 21 | fn main() 22 | { 23 | nasm("src/asm_routines.asm", "target/asm_routines.obj"); 24 | } 25 | 26 | -------------------------------------------------------------------------------- /bootloader/src/asm_routines.asm: -------------------------------------------------------------------------------- 1 | [bits 32] 2 | 3 | ; This is the program code segment base in bytes. Since we use real mode in 4 | ; this codebase we need to make sure we set CS correctly. 
5 | ; Since this is in bytes, PROGRAM_BASE of 0x10000 would mean CS will be set to 6 | ; 0x1000 when in real-mode 7 | %define PROGRAM_BASE 0x10000 8 | 9 | struc register_state 10 | .eax: resd 1 11 | .ecx: resd 1 12 | .edx: resd 1 13 | .ebx: resd 1 14 | .esp: resd 1 15 | .ebp: resd 1 16 | .esi: resd 1 17 | .edi: resd 1 18 | .efl: resd 1 19 | 20 | .es: resw 1 21 | .ds: resw 1 22 | .fs: resw 1 23 | .gs: resw 1 24 | .ss: resw 1 25 | endstruc 26 | 27 | section .text 28 | 29 | global _invoke_realmode 30 | _invoke_realmode: 31 | pushad 32 | lgdt [rmgdt] 33 | 34 | ; Set all selectors to data segments 35 | mov ax, 0x10 36 | mov es, ax 37 | mov ds, ax 38 | mov fs, ax 39 | mov gs, ax 40 | mov ss, ax 41 | jmp 0x0008:(.foop - PROGRAM_BASE) 42 | 43 | [bits 16] 44 | .foop: 45 | ; Disable protected mode 46 | mov eax, cr0 47 | and eax, ~1 48 | mov cr0, eax 49 | 50 | ; Clear out all segments 51 | xor ax, ax 52 | mov es, ax 53 | mov ds, ax 54 | mov fs, ax 55 | mov gs, ax 56 | mov ss, ax 57 | 58 | ; Set up a fake iret to do a long jump to switch to new cs. 59 | pushfd ; eflags 60 | push dword (PROGRAM_BASE >> 4) ; cs 61 | push dword (.new_func - PROGRAM_BASE) ; eip 62 | iretd 63 | 64 | .new_func: 65 | ; Get the arguments passed to this function 66 | movzx ebx, byte [esp + (4*0x9)] ; arg1, interrupt number 67 | shl ebx, 2 68 | mov eax, dword [esp + (4*0xa)] ; arg2, pointer to registers 69 | 70 | ; Set up interrupt stack frame. This is what the real mode routine will 71 | ; pop off the stack during its iret. 72 | mov ebp, (.retpoint - PROGRAM_BASE) 73 | pushfw 74 | push cs 75 | push bp 76 | 77 | ; Set up the call for the interrupt by loading the contents of the IVT 78 | ; based on the interrupt number specified 79 | pushfw 80 | push word [bx+2] 81 | push word [bx+0] 82 | 83 | ; Load the register state specified 84 | mov ecx, dword [eax + register_state.ecx] 85 | mov edx, dword [eax + register_state.edx] 86 | mov ebx, dword [eax + register_state.ebx] 87 | mov ebp, dword [eax + register_state.ebp] 88 | mov esi, dword [eax + register_state.esi] 89 | mov edi, dword [eax + register_state.edi] 90 | mov eax, dword [eax + register_state.eax] 91 | 92 | ; Perform a long jump to the interrupt entry point, simulating a software 93 | ; interrupt instruction 94 | iretw 95 | .retpoint: 96 | ; Save off all registers 97 | push eax 98 | push ecx 99 | push edx 100 | push ebx 101 | push ebp 102 | push esi 103 | push edi 104 | pushfd 105 | push es 106 | push ds 107 | push fs 108 | push gs 109 | push ss 110 | 111 | ; Get a pointer to the registers 112 | mov eax, dword [esp + (4*0xa) + (4*8) + (5*2)] ; arg2, pointer to registers 113 | 114 | ; Update the register state with the post-interrupt register state. 
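; The pops below mirror the pushes at .retpoint in reverse order: ss was pushed last, so it is restored first.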
115 | pop word [eax + register_state.ss] 116 | pop word [eax + register_state.gs] 117 | pop word [eax + register_state.fs] 118 | pop word [eax + register_state.ds] 119 | pop word [eax + register_state.es] 120 | pop dword [eax + register_state.efl] 121 | pop dword [eax + register_state.edi] 122 | pop dword [eax + register_state.esi] 123 | pop dword [eax + register_state.ebp] 124 | pop dword [eax + register_state.ebx] 125 | pop dword [eax + register_state.edx] 126 | pop dword [eax + register_state.ecx] 127 | pop dword [eax + register_state.eax] 128 | 129 | ; Load data segment for lgdt 130 | mov ax, (PROGRAM_BASE >> 4) 131 | mov ds, ax 132 | 133 | ; Enable protected mode 134 | mov eax, cr0 135 | or eax, 1 136 | mov cr0, eax 137 | 138 | ; Load 32-bit protected mode GDT 139 | mov eax, (pmgdt - PROGRAM_BASE) 140 | lgdt [eax] 141 | 142 | ; Set all segments to data segments 143 | mov ax, 0x10 144 | mov es, ax 145 | mov ds, ax 146 | mov fs, ax 147 | mov gs, ax 148 | mov ss, ax 149 | 150 | ; Long jump back to protected mode. 151 | pushfd ; eflags 152 | push dword 0x0008 ; cs 153 | push dword backout ; eip 154 | iretd 155 | 156 | [bits 32] 157 | 158 | global _pxecall 159 | _pxecall: 160 | pushad 161 | lgdt [rmgdt] 162 | 163 | ; Set all selectors to data segments 164 | mov ax, 0x10 165 | mov es, ax 166 | mov ds, ax 167 | mov fs, ax 168 | mov gs, ax 169 | mov ss, ax 170 | 171 | jmp 0x0008:(.foop - PROGRAM_BASE) 172 | 173 | [bits 16] 174 | .foop: 175 | ; Disable protected mode 176 | mov eax, cr0 177 | and eax, ~1 178 | mov cr0, eax 179 | 180 | ; Clear all segments 181 | xor ax, ax 182 | mov es, ax 183 | mov ds, ax 184 | mov fs, ax 185 | mov gs, ax 186 | mov ss, ax 187 | 188 | ; Perform a long jump to real-mode 189 | pushfd ; eflags 190 | push dword (PROGRAM_BASE >> 4) ; cs 191 | push dword (.new_func - PROGRAM_BASE) ; eip 192 | iretd 193 | 194 | .new_func: 195 | 196 | ; pub fn pxecall(seg: u16, off: u16, pxe_call: u16, 197 | ; param_seg: u16, param_off: u16); 198 | movzx eax, word [esp + (4*0x9)] ; arg1, seg 199 | movzx ebx, word [esp + (4*0xa)] ; arg2, offset 200 | movzx ecx, word [esp + (4*0xb)] ; arg3, pxe_call 201 | movzx edx, word [esp + (4*0xc)] ; arg4, param_seg 202 | movzx esi, word [esp + (4*0xd)] ; arg5, param_off 203 | 204 | ; Set up PXE call parameters (opcode, offset, seg) 205 | push dx 206 | push si 207 | push cx 208 | 209 | ; Set up our return address from the far call 210 | mov ebp, (.retpoint - PROGRAM_BASE) 211 | push cs 212 | push bp 213 | 214 | ; Set up a far call via iretw 215 | pushfw 216 | push ax 217 | push bx 218 | 219 | iretw 220 | .retpoint: 221 | ; Hyper-V has been observed to set the interrupt flag in PXE routines. We 222 | ; clear it ASAP. 
223 | cli 224 | 225 | ; Clean up the stack from the 3 word parameters we passed to PXE 226 | add sp, 6 227 | 228 | ; Load data segment for lgdt 229 | mov ax, (PROGRAM_BASE >> 4) 230 | mov ds, ax 231 | 232 | ; Enable protected mode 233 | mov eax, cr0 234 | or eax, 1 235 | mov cr0, eax 236 | 237 | ; Load 32-bit protected mode GDT 238 | mov eax, (pmgdt - PROGRAM_BASE) 239 | lgdt [eax] 240 | 241 | ; Set all segments to data segments 242 | mov ax, 0x10 243 | mov es, ax 244 | mov ds, ax 245 | mov fs, ax 246 | mov gs, ax 247 | mov ss, ax 248 | 249 | ; Jump back to protected mode 250 | pushfd ; eflags 251 | push dword 0x0008 ; cs 252 | push dword backout ; eip 253 | iretd 254 | 255 | [bits 32] 256 | backout: 257 | popad 258 | ret 259 | 260 | section .data 261 | 262 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 263 | 264 | ; 16-bit real mode GDT 265 | 266 | align 8 267 | rmgdt_base: 268 | ; Null descriptor 269 | dq 0x0000000000000000 270 | 271 | ; 16-bit RO code, base PROGRAM_BASE, limit 0x0000ffff 272 | dq 0x00009a000000ffff | (PROGRAM_BASE << 16) 273 | 274 | ; 16-bit RW data, base 0, limit 0x0000ffff 275 | dq 0x000092000000ffff 276 | 277 | rmgdt: 278 | dw (rmgdt - rmgdt_base) - 1 279 | dd rmgdt_base 280 | 281 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 282 | 283 | ; 32-bit protected mode GDT 284 | 285 | align 8 286 | pmgdt_base: 287 | dq 0x0000000000000000 ; Null descriptor 288 | dq 0x00CF9A000000FFFF 289 | dq 0x00CF92000000FFFF 290 | 291 | pmgdt: 292 | dw (pmgdt - pmgdt_base) - 1 293 | dd pmgdt_base 294 | 295 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 296 | 297 | ; 64-bit long mode GDT 298 | 299 | align 8 300 | lmgdt_base: 301 | dq 0x0000000000000000 ; Null descriptor 302 | dq 0x00209a0000000000 ; 64-bit, present, code 303 | dq 0x0000920000000000 ; Present, data r/w 304 | 305 | lmgdt: 306 | dw (lmgdt - lmgdt_base) - 1 307 | dd lmgdt_base 308 | dd 0 309 | 310 | [bits 32] 311 | 312 | global _enter64 313 | _enter64: 314 | ; qword [esp + 0x04] - Entry 315 | ; qword [esp + 0x0c] - Stack 316 | ; qword [esp + 0x14] - Param 317 | ; dword [esp + 0x1c] - New cr3 318 | 319 | ; Get the parameters passed in to this function 320 | mov esi, [esp+0x1c] ; New cr3 321 | 322 | ; Set up CR3 323 | mov cr3, esi 324 | 325 | ; Set NXE (NX enable) and LME (long mode enable) 326 | mov edx, 0 327 | mov eax, 0x00000900 328 | mov ecx, 0xc0000080 329 | wrmsr 330 | 331 | xor eax, eax 332 | or eax, (1 << 9) ; OSFXSR 333 | or eax, (1 << 10) ; OSXMMEXCPT 334 | or eax, (1 << 5) ; PAE 335 | or eax, (1 << 3) ; DE 336 | mov cr4, eax 337 | 338 | xor eax, eax 339 | and eax, ~(1 << 2) ; Clear Emulation flag 340 | or eax, (1 << 0) ; Protected mode enable 341 | or eax, (1 << 1) ; Monitor co-processor 342 | or eax, (1 << 16) ; Write protect 343 | or eax, (1 << 31) ; Paging enable 344 | mov cr0, eax 345 | 346 | ; Load the 64-bit long mode GDT 347 | lgdt [lmgdt] 348 | 349 | ; Long jump to enable long mode! 350 | jmp 0x0008:lm_entry 351 | 352 | [bits 64] 353 | 354 | lm_entry: 355 | ; Set all selectors to 64-bit data segments 356 | mov ax, 0x10 357 | mov es, ax 358 | mov ds, ax 359 | mov fs, ax 360 | mov gs, ax 361 | mov ss, ax 362 | 363 | mov rdi, qword [rsp + 0x4] ; Entry point 364 | mov rbp, qword [rsp + 0xc] ; Stack 365 | sub rbp, 0x28 ; MSFT 64-bit calling convention requires 0x20 homing space 366 | ; We also need 8 bytes for the fake 'return address' since we 367 | ; iretq rather than call. 
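; (0x20 bytes of home space plus 8 bytes for the fake return address account for the 0x28 subtracted above.)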
368 | 369 | ; Parameter 370 | mov rcx, qword [esp + 0x14] 371 | 372 | ; Set up a long jump via an iretq to jump to long mode. 373 | push qword 0x0010 ; ss 374 | push qword rbp ; rsp 375 | pushfq ; rflags 376 | push qword 0x0008 ; cs 377 | push qword rdi ; rip 378 | iretq 379 | 380 | cli 381 | .halt: 382 | hlt 383 | jmp short .halt 384 | 385 | -------------------------------------------------------------------------------- /bootloader/src/core_reqs.rs: -------------------------------------------------------------------------------- 1 | /// libc `memcpy` implementation in rust 2 | /// 3 | /// This implementation of `memcpy` is overlap safe, making it technically 4 | /// `memmove`. 5 | /// 6 | /// # Parameters 7 | /// 8 | /// * `dest` - Pointer to memory to copy to 9 | /// * `src` - Pointer to memory to copy from 10 | /// * `n` - Number of bytes to copy 11 | /// 12 | #[no_mangle] 13 | pub unsafe extern fn memcpy(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 14 | { 15 | memmove(dest, src, n) 16 | } 17 | 18 | /// libc `memmove` implementation in rust 19 | /// 20 | /// # Parameters 21 | /// 22 | /// * `dest` - Pointer to memory to copy to 23 | /// * `src` - Pointer to memory to copy from 24 | /// * `n` - Number of bytes to copy 25 | /// 26 | #[no_mangle] 27 | pub unsafe extern fn memmove(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 28 | { 29 | if src < dest as *const u8 { 30 | /* copy backwards */ 31 | let mut ii = n; 32 | while ii != 0 { 33 | ii -= 1; 34 | *dest.offset(ii as isize) = *src.offset(ii as isize); 35 | } 36 | } else { 37 | /* copy forwards */ 38 | let mut ii = 0; 39 | while ii < n { 40 | *dest.offset(ii as isize) = *src.offset(ii as isize); 41 | ii += 1; 42 | } 43 | } 44 | 45 | dest 46 | } 47 | 48 | /// libc `memset` implementation in rust 49 | /// 50 | /// # Parameters 51 | /// 52 | /// * `s` - Pointer to memory to set 53 | /// * `c` - Character to set `n` bytes in `s` to 54 | /// * `n` - Number of bytes to set 55 | /// 56 | #[no_mangle] 57 | pub unsafe extern fn memset(s: *mut u8, c: i32, n: usize) -> *mut u8 58 | { 59 | let mut ii = 0; 60 | while ii < n { 61 | *s.offset(ii as isize) = c as u8; 62 | ii += 1; 63 | } 64 | 65 | s 66 | } 67 | 68 | /// libc `memcmp` implementation in rust 69 | /// 70 | /// # Parameters 71 | /// 72 | /// * `s1` - Pointer to memory to compare with s2 73 | /// * `s2` - Pointer to memory to compare with s1 74 | /// * `n` - Number of bytes to set 75 | #[no_mangle] 76 | pub unsafe extern fn memcmp(s1: *const u8, s2: *const u8, n: usize) -> i32 77 | { 78 | let mut ii = 0; 79 | while ii < n { 80 | let a = *s1.offset(ii as isize); 81 | let b = *s2.offset(ii as isize); 82 | if a != b { 83 | return a as i32 - b as i32 84 | } 85 | ii += 1; 86 | } 87 | 88 | 0 89 | } 90 | 91 | /* --------------------------------------------------------------------------- 92 | * Microsoft specific intrinsics 93 | * 94 | * These intrinsics use the stdcall convention however are not decorated 95 | * with an @ suffix. To override LLVM from appending this suffix we 96 | * have an \x01 escape byte before the name, which prevents LLVM from all 97 | * name mangling. 
98 | * --------------------------------------------------------------------------- 99 | */ 100 | 101 | /// Perform n % d 102 | #[export_name="\x01__aullrem"] 103 | pub extern "stdcall" fn __aullrem(n: u64, d: u64) -> u64 104 | { 105 | ::compiler_builtins::int::udiv::__umoddi3(n, d) 106 | } 107 | 108 | /// Perform n / d 109 | #[export_name="\x01__aulldiv"] 110 | pub extern "stdcall" fn __aulldiv(n: u64, d: u64) -> u64 111 | { 112 | ::compiler_builtins::int::udiv::__udivdi3(n, d) 113 | } 114 | 115 | /// Perform n % d 116 | #[export_name="\x01__allrem"] 117 | pub extern "stdcall" fn __allrem(n: i64, d: i64) -> i64 118 | { 119 | ::compiler_builtins::int::sdiv::__moddi3(n, d) 120 | } 121 | 122 | /// Perform n / d 123 | #[export_name="\x01__alldiv"] 124 | pub extern "stdcall" fn __alldiv(n: i64, d: i64) -> i64 125 | { 126 | ::compiler_builtins::int::sdiv::__divdi3(n, d) 127 | } 128 | 129 | -------------------------------------------------------------------------------- /bootloader/src/main.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | #![no_main] 3 | #![feature(const_fn)] 4 | #![feature(lang_items)] 5 | #![feature(core_intrinsics)] 6 | #![feature(allocator_api)] 7 | #![feature(llvm_asm)] 8 | #![feature(rustc_private)] 9 | 10 | /// Custom non-formatting panic macro. 11 | /// 12 | /// This overrides the existing panic macro to provide a core::fmt-less panic 13 | /// implementation. This is a lot lighter as it results in no use of core::fmt 14 | /// in the binary. This is a strong requirement for how we can fit this program 15 | /// into the 32KiB PXE requirements. 16 | /// 17 | /// Under the hood assert!() uses panic!(), thus we also have assert!()s go 18 | /// through here as well, allowing for idiomatic Rust assert usage. 19 | macro_rules! panic { 20 | () => ({ 21 | $crate::serial::write("!!! PANIC !!!\n"); 22 | $crate::serial::write("Explicit panic\n"); 23 | $crate::cpu::halt(); 24 | }); 25 | ($msg:expr) => ({ 26 | $crate::serial::write("!!! 
PANIC !!!\n"); 27 | $crate::serial::write($msg); 28 | $crate::serial::write_byte(b'\n'); 29 | $crate::cpu::halt(); 30 | }); 31 | } 32 | 33 | /* External rust-provided crates */ 34 | #[macro_use] 35 | extern crate alloc; 36 | 37 | #[macro_use] 38 | extern crate bytesafe_derive; 39 | 40 | /* Shared crates between bootloader and kernel */ 41 | extern crate serial; 42 | extern crate cpu; 43 | extern crate rangeset; 44 | extern crate safecast; 45 | extern crate mmu; 46 | 47 | pub mod panic; 48 | pub mod core_reqs; 49 | pub mod realmode; 50 | pub mod mm; 51 | pub mod pxe; 52 | pub mod pe; 53 | 54 | use alloc::vec::Vec; 55 | use core::sync::atomic::{AtomicUsize, Ordering}; 56 | use core::alloc::{Layout, GlobalAlloc}; 57 | 58 | /// Global allocator 59 | #[global_allocator] 60 | static GLOBAL_ALLOCATOR: mm::GlobalAllocator = mm::GlobalAllocator; 61 | 62 | /// Physical memory implementation 63 | /// 64 | /// This is used during page table operations 65 | pub struct Pmem {} 66 | 67 | impl mmu::PhysMem for Pmem { 68 | /// Allocate a page 69 | fn alloc_page(&mut self) -> Option<*mut u8> { 70 | unsafe { 71 | let layout = Layout::from_size_align(4096, 4096).unwrap(); 72 | let alloc = GLOBAL_ALLOCATOR.alloc(layout); 73 | if alloc.is_null() { 74 | None 75 | } else { 76 | Some(alloc as *mut u8) 77 | } 78 | } 79 | } 80 | 81 | /// Read a 64-bit value at the physical address specified 82 | fn read_phys(&mut self, addr: *mut u64) -> Result { 83 | unsafe { Ok(core::ptr::read(addr)) } 84 | } 85 | 86 | /// Write a 64-bit value to the physical address specified 87 | fn write_phys(&mut self, addr: *mut u64, val: u64) -> 88 | Result<(), &'static str> { 89 | unsafe { Ok(core::ptr::write(addr, val)) } 90 | } 91 | 92 | /// This is used to let the MMU know if we reserve memory outside of 93 | /// the page tables. Since we do not do this at all we always return true 94 | /// allowing any address not in use in the page tables to be used for 95 | /// ASLR. 96 | fn probe_vaddr(&mut self, _addr: usize, _length: usize) -> bool { 97 | true 98 | } 99 | } 100 | 101 | /// CoreInfo structure to pass into the next stage (kernel). This provides 102 | /// the kernel with critical structures that were constructed in the bootloader 103 | struct CoreInfo { 104 | entry: u64, 105 | stack_base: u64, 106 | bootloader_info: cpu::BootloaderStruct, 107 | } 108 | 109 | static mut CORE_INFO: Option> = None; 110 | static mut PMEM: Pmem = Pmem {}; 111 | static mut PAGE_TABLE: Option> = None; 112 | 113 | #[lang = "oom"] 114 | #[no_mangle] 115 | pub fn rust_oom(_layout: Layout) -> ! { 116 | panic!("Out of memory"); 117 | } 118 | 119 | /// Main entry point for this codebase 120 | /// 121 | /// * `soft_reboot_entry` - 32-bit physical address we can branch to at 122 | /// later stages to do a soft reboot of the kernel 123 | /// * `first_boot` - Set if this is the first time the system has booted 124 | #[no_mangle] 125 | pub extern fn entry(soft_reboot_entry: u32, first_boot: bool, 126 | kbuf: *mut cpu::KernelBuffer) -> ! 127 | { 128 | static CORE_IDS: AtomicUsize = AtomicUsize::new(0); 129 | 130 | /// Stack size allocated for each core 131 | const STACK_SIZE: u64 = 1024 * 1024; 132 | 133 | let kbuf = unsafe { &mut *kbuf }; 134 | 135 | /* Allocate a unique, sequential core ID for this core */ 136 | let core_id = CORE_IDS.fetch_add(1, Ordering::SeqCst); 137 | 138 | if cpu::is_bsp() { 139 | /* Initialize the MM subsystem. This is unsafe as this can only be 140 | * done once. 
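* (mm::init() walks the BIOS E820 map via real-mode int 0x15 and seeds the global RangeSet that backs all bootloader allocations.)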
141 | */ 142 | unsafe { mm::init(); } 143 | 144 | /* Prevent the kernel buffer from being used as free memory */ 145 | if !first_boot && kbuf.kernel_buffer_size != 0xbaadb00d { 146 | unsafe { 147 | mm::remove_range(kbuf.kernel_buffer, 148 | kbuf.kernel_buffer_max_size); 149 | } 150 | } 151 | 152 | /* Print our boot banner :) */ 153 | serial::write("=== orange_slice bootloader v2 ===\n"); 154 | 155 | /* Validate that the CPU supports the features we use */ 156 | let features = cpu::get_cpu_features(); 157 | assert!(features.bits64); 158 | assert!(features.xd); 159 | assert!(features.gbyte_pages); 160 | assert!(features.sse); 161 | assert!(features.sse2); 162 | assert!(features.sse3); 163 | assert!(features.ssse3); 164 | assert!(features.sse4_1); 165 | assert!(features.sse4_2); 166 | 167 | /* Download the kernel */ 168 | let kernel_pe = if first_boot || kbuf.kernel_buffer_size == 0xbaadb00d { 169 | let mut pe = pxe::download_file("orange_slice.kern"); 170 | kbuf.kernel_buffer = pe.as_mut_ptr() as u64; 171 | kbuf.kernel_buffer_size = pe.len() as u64; 172 | kbuf.kernel_buffer_max_size = pe.capacity() as u64; 173 | pe 174 | } else { 175 | unsafe { 176 | Vec::from_raw_parts(kbuf.kernel_buffer as *mut u8, 177 | kbuf.kernel_buffer_size as usize, 178 | kbuf.kernel_buffer_max_size as usize) 179 | } 180 | }; 181 | 182 | // Parse the PE file 183 | let pe_parsed = pe::parse(&kernel_pe); 184 | 185 | // Create a new page table with a 1 TiB identity map 186 | let mut page_table = unsafe { mmu::PageTable::new(&mut PMEM) }; 187 | page_table.add_identity_map(1024 * 1024 * 1024 * 1024).unwrap(); 188 | 189 | unsafe { 190 | assert!(PAGE_TABLE.is_none(), "Page table already set"); 191 | PAGE_TABLE = Some(page_table); 192 | } 193 | 194 | let page_table = unsafe { PAGE_TABLE.as_mut().unwrap() }; 195 | 196 | // Generate a random address to base the kernel at and load the 197 | // kernel into the new page table. 198 | // let kernel_base = page_table.rand_addr(pe_parsed.loaded_size()) 199 | // .unwrap(); 200 | let kernel_base = 0x1337_0000_0000; 201 | let entry = pe_parsed.load(page_table, kernel_base); 202 | 203 | for _ in 0..cpu::MAX_CPUS { 204 | /* Add a 1 MiB stack with random base address */ 205 | let stack_base = page_table.rand_addr(STACK_SIZE).unwrap(); 206 | page_table.add_memory(stack_base, STACK_SIZE).unwrap(); 207 | 208 | /* Construct the core infos to be passed to the kernel */ 209 | unsafe { 210 | if CORE_INFO.is_none() { 211 | CORE_INFO = Some(Vec::with_capacity(cpu::MAX_CPUS)); 212 | } 213 | let ci = CORE_INFO.as_mut().unwrap(); 214 | 215 | /* Construct the core info for this CPU */ 216 | ci.push(CoreInfo { 217 | entry, 218 | stack_base, 219 | bootloader_info: cpu::BootloaderStruct { 220 | phys_memory: rangeset::RangeSet::new(), 221 | soft_reboot_entry: soft_reboot_entry as u64, 222 | kernel_buffer: 223 | kbuf as *mut cpu::KernelBuffer as u64, 224 | }, 225 | }); 226 | } 227 | } 228 | 229 | /* For the BSP, create a copy of the physical memory map to pass 230 | * to the kernel. Once this operation is performed no more dynamic 231 | * allocations can occur in the bootloader. They will panic. 232 | * 233 | * This behavior is required such that the bootloader never takes 234 | * ownership of physical memory that has been given to the kernel as 235 | * free. 
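* (clone_mm_table() clears the allocations-enabled flag in MM_TABLE, so any later allocation attempt panics on an assert.)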
236 | */ 237 | unsafe { 238 | CORE_INFO.as_mut().unwrap()[core_id].bootloader_info.phys_memory = 239 | mm::clone_mm_table(); 240 | } 241 | 242 | /* Prevent all structures from being freed */ 243 | core::mem::forget(pe_parsed); 244 | core::mem::forget(kernel_pe); 245 | } 246 | 247 | unsafe { 248 | /* Get a reference to this core's core info */ 249 | let core_info = &CORE_INFO.as_ref().unwrap()[core_id]; 250 | 251 | extern { 252 | fn enter64(entry: u64, stack: u64, param: u64, cr3: u32) -> !; 253 | } 254 | 255 | /* Jump into x86_64 kernel! */ 256 | enter64(core_info.entry, core_info.stack_base + STACK_SIZE, 257 | &core_info.bootloader_info as *const _ as u64, 258 | PAGE_TABLE.as_ref().unwrap().get_backing() as u32); 259 | } 260 | } 261 | -------------------------------------------------------------------------------- /bootloader/src/mm.rs: -------------------------------------------------------------------------------- 1 | use core; 2 | 3 | use realmode; 4 | use rangeset::{Range, RangeSet}; 5 | use core::alloc::{GlobalAlloc, Layout}; 6 | 7 | /// Global containing the contents of the E820 table 8 | /// 9 | /// First value of the tuple is a bool indicating whether allocations are 10 | /// allowed. This is set to false once the MM table has been cloned to pass 11 | /// to the kernel, disabling allocations. 12 | /// 13 | /// Second value indicates if the MM subsystem has been initialized. 14 | /// 15 | /// Third value is the E820 table in a RangeSet 16 | static mut MM_TABLE: (bool, bool, RangeSet) = (false, false, RangeSet::new()); 17 | 18 | /// Packed structure describing E820 entries 19 | #[repr(C, packed)] 20 | #[derive(Clone, Copy, PartialEq, Eq)] 21 | struct E820Entry { 22 | base: u64, 23 | size: u64, 24 | typ: u32, 25 | } 26 | 27 | /// Clone the MM table, further disabling allocations 28 | pub fn clone_mm_table() -> RangeSet 29 | { 30 | unsafe { 31 | /* Make sure MM is initialized and allocations are enabled */ 32 | assert!(MM_TABLE.1, "MM subsystem has not been initialized"); 33 | assert!(MM_TABLE.0, "MM table has already been cloned"); 34 | 35 | /* Disable allocations */ 36 | MM_TABLE.0 = false; 37 | 38 | /* Return copy of MM table */ 39 | MM_TABLE.2.clone() 40 | } 41 | } 42 | 43 | pub unsafe fn remove_range(addr: u64, size: u64) 44 | { 45 | let rs = &mut MM_TABLE.2; 46 | assert!(size > 0, "Invalid size for remove_range()"); 47 | rs.remove(Range { start: addr, end: addr.checked_add(size).unwrap() - 1 }); 48 | } 49 | 50 | /// Initialize the memory managment state. This requests the e820 table from 51 | /// the BIOS and checks for overlapping/double mapped ranges. 52 | pub unsafe fn init() 53 | { 54 | let rs = &mut MM_TABLE.2; 55 | 56 | /* Loop through the E820 twice. The first time we loop we want to 57 | * accumulate free sections into the RangeSet. The second loop we want 58 | * to remove nonfree sections. 59 | */ 60 | for &add_entries in &[true, false] { 61 | /* Continuation code, starts off at 0. BIOS implementation specific 62 | * after first call to e820. 
63 | */ 64 | let mut cont = 0; 65 | 66 | /* Get the E820 table from the BIOS, entry by entry */ 67 | loop { 68 | let mut ent = E820Entry { base: 0, size: 0, typ: 0 }; 69 | 70 | /* Set up the register state for the BIOS call */ 71 | let mut regs = realmode::RegisterState { 72 | eax: 0xe820, /* Function 0xE820 */ 73 | ecx: 20, /* Entry size (in bytes) */ 74 | edx: 0x534d4150, /* Magic number 'PAMS' */ 75 | ebx: cont, /* Continuation number */ 76 | edi: &mut ent as *const _ as u32, /* Pointer to buffer */ 77 | ..Default::default() 78 | }; 79 | 80 | /* Invoke BIOS int 0x15, function 0xE820 to get the memory 81 | * entries 82 | */ 83 | realmode::invoke_realmode(0x15, &mut regs); 84 | 85 | /* Validate eax contains correct 'SMAP' magic signature */ 86 | assert!(regs.eax == 0x534d4150, 87 | "E820 did not report correct magic"); 88 | 89 | /* Validate size of E820 entry is >= what we expect */ 90 | assert!(regs.ecx as usize >= core::mem::size_of_val(&ent), 91 | "E820 entry structure was too small"); 92 | 93 | assert!(ent.size > 0, "E820 entry of zero size"); 94 | 95 | /* Safely compute end of memory region */ 96 | let ent_end = match ent.base.checked_add(ent.size - 1) { 97 | Some(x) => x, 98 | None => panic!("E820 entry integer overflow"), 99 | }; 100 | 101 | /* Either insert free regions on the first iteration of the loop 102 | * or remove used regions in the second iteration. 103 | */ 104 | if add_entries && ent.typ == 1 { 105 | rs.insert(Range { start: ent.base, end: ent_end }); 106 | } else if !add_entries && ent.typ != 1 { 107 | rs.remove(Range { start: ent.base, end: ent_end }); 108 | } 109 | 110 | /* If ebx (continuation number) is zero or CF (error) was set, 111 | * break out of the loop. 112 | */ 113 | if regs.ebx == 0 || (regs.efl & 1) == 1 { 114 | break; 115 | } 116 | 117 | /* Update continuation */ 118 | cont = regs.ebx; 119 | } 120 | } 121 | 122 | /* Remove the first 1MB of memory from allocatable memory. This is to 123 | * prevent BIOS data structures and our PXE image from being removed. 124 | */ 125 | rs.remove(Range { start: 0, end: 0xFFFFF }); 126 | 127 | /* Mark MM as initialized and allocations enabled */ 128 | MM_TABLE.0 = true; 129 | MM_TABLE.1 = true; 130 | } 131 | 132 | /// Structure representing global allocator 133 | /// 134 | /// All state is handled elsewhere so this is empty. 135 | pub struct GlobalAllocator; 136 | 137 | unsafe impl GlobalAlloc for GlobalAllocator { 138 | /// Global allocator. Grabs free memory from E820 and removes it from 139 | /// the table. 140 | unsafe fn alloc(&self, layout: Layout) -> *mut u8 141 | { 142 | assert!(MM_TABLE.1, "Attempted to allocate with mm uninitialized"); 143 | assert!(MM_TABLE.0, "Attempted to allocate with allocations disabled"); 144 | 145 | let rs = &mut MM_TABLE.2; 146 | 147 | /* All the actual work is done in alloc_rangeset() */ 148 | let ret = rs.allocate(layout.size() as u64, layout.align() as u64); 149 | if ret.is_null() { 150 | panic!("Allocation failure"); 151 | } else { 152 | ret as *mut u8 153 | } 154 | } 155 | 156 | /// No free implementation. 157 | /// 158 | /// We really have no reason to free in the bootloader, so we do not 159 | /// support a free. We could easily add support if really needed, but 160 | /// having free panic will prevent us from accidentally allocating data 161 | /// and passing it to the next stage by pointer, and letting it drop. 162 | /// Given we don't free anything in the bootloader, anything we pass to 163 | /// the next stage is always valid. 
164 | unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) 165 | { 166 | panic!("Dealloc attempted\n"); 167 | } 168 | } 169 | -------------------------------------------------------------------------------- /bootloader/src/panic.rs: -------------------------------------------------------------------------------- 1 | use serial; 2 | use cpu; 3 | use core::panic::PanicInfo; 4 | 5 | /// Panic implementation 6 | /// 7 | /// This currently breaks ABI. This is supposed to be "pub extern fn". By 8 | /// breaking the ABI we let LTO happen, which deletes much code being used 9 | /// to generate formatted parameters for panic. To make this safe we use no 10 | /// parameters passed in here at all. 11 | #[panic_handler] 12 | #[no_mangle] 13 | pub fn panic(_info: &PanicInfo) -> ! { 14 | serial::write("!!! PANIC !!!\n"); 15 | serial::write("Hit rust_begin_unwind()\n"); 16 | cpu::halt(); 17 | } 18 | 19 | -------------------------------------------------------------------------------- /bootloader/src/pe.rs: -------------------------------------------------------------------------------- 1 | use core; 2 | use core::mem::size_of; 3 | use core::alloc::{Layout, GlobalAlloc}; 4 | use alloc::vec::Vec; 5 | use mmu::{PageTable, MapSize, PTBits}; 6 | use safecast::SafeCast; 7 | 8 | /* Number of PE directories */ 9 | const IMAGE_NUMBEROF_DIRECTORY_ENTRIES: usize = 16; 10 | 11 | /* Machine types */ 12 | const IMAGE_FILE_MACHINE_AMD64: u16 = 0x8664; 13 | 14 | /* IMAGE_FILE_HEADER.Characteristics */ 15 | const IMAGE_FILE_EXECUTABLE_IMAGE: u16 = 0x0002; 16 | const IMAGE_FILE_LARGE_ADDRESS_AWARE: u16 = 0x0020; 17 | 18 | /* IMAGE_OPTIONAL_HEADER.Magic */ 19 | const IMAGE_NT_OPTIONAL_HDR64_MAGIC: u16 = 0x20b; 20 | 21 | /* Constants for ImageOptionalHeader64.Subsystem */ 22 | const IMAGE_SUBSYSTEM_NATIVE: u16 = 1; 23 | 24 | /* Constants for ImageSectionHeader.Characteristics */ 25 | const IMAGE_SCN_CNT_CODE: u32 = 0x00000020; 26 | const IMAGE_SCN_CNT_INITIALIZED_DATA: u32 = 0x00000040; 27 | const IMAGE_SCN_CNT_UNINITIALIZED_DATA: u32 = 0x00000080; 28 | const IMAGE_SCN_MEM_DISCARDABLE: u32 = 0x02000000; 29 | const IMAGE_SCN_MEM_EXECUTE: u32 = 0x20000000; 30 | const IMAGE_SCN_MEM_READ: u32 = 0x40000000; 31 | const IMAGE_SCN_MEM_WRITE: u32 = 0x80000000; 32 | 33 | /* Constants for ImageOptionalHeader64.DllCharacteristics */ 34 | const IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA: u16 = 0x0020; 35 | const IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: u16 = 0x0040; 36 | const IMAGE_DLLCHARACTERISTICS_NX_COMPAT: u16 = 0x0100; 37 | const IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE: u16 = 0x8000; 38 | 39 | /* Constants for relocation types */ 40 | const IMAGE_REL_BASED_ABSOLUTE: u16 = 0; 41 | const IMAGE_REL_BASED_DIR64: u16 = 10; 42 | 43 | /// IMAGE_NT_HEADERS64 44 | #[repr(C, packed)] 45 | #[allow(non_snake_case)] 46 | #[derive(Default, ByteSafe)] 47 | struct ImageNtHeaders64 { 48 | Signature: [u8; 4], 49 | FileHeader: ImageFileHeader, 50 | OptionalHeader: ImageOptionalHeader64, 51 | } 52 | 53 | /// IMAGE_FILE_HEADER 54 | #[repr(C, packed)] 55 | #[allow(non_snake_case)] 56 | #[derive(Default, ByteSafe)] 57 | struct ImageFileHeader { 58 | Machine: u16, 59 | NumberOfSections: u16, 60 | TimeDateStamp: u32, 61 | PointerToSymbolTable: u32, 62 | NumberOfSymbols: u32, 63 | SizeOfOptionalHeader: u16, 64 | Characteristics: u16, 65 | } 66 | 67 | /// IMAGE_OPTIONAL_HEADER64 68 | #[repr(C, packed)] 69 | #[allow(non_snake_case)] 70 | #[derive(Default, ByteSafe)] 71 | struct ImageOptionalHeader64 { 72 | Magic: u16, 73 | MajorLinkerVersion: u8, 74 
| MinorLinkerVersion: u8, 75 | SizeOfCode: u32, 76 | SizeOfInitializedData: u32, 77 | SizeOfUninitializedData: u32, 78 | AddressOfEntryPoint: u32, 79 | BaseOfCode: u32, 80 | ImageBase: u64, 81 | SectionAlignment: u32, 82 | FileAlignment: u32, 83 | MajorOperatingSystemVersion: u16, 84 | MinorOperatingSystemVersion: u16, 85 | MajorImageVersion: u16, 86 | MinorImageVersion: u16, 87 | MajorSubsystemVersion: u16, 88 | MinorSubsystemVersion: u16, 89 | Win32VersionValue: u32, 90 | SizeOfImage: u32, 91 | SizeOfHeaders: u32, 92 | CheckSum: u32, 93 | Subsystem: u16, 94 | DllCharacteristics: u16, 95 | SizeOfStackReserve: u64, 96 | SizeOfStackCommit: u64, 97 | SizeOfHeapReserve: u64, 98 | SizeOfHeapCommit: u64, 99 | LoaderFlags: u32, 100 | NumberOfRvaAndSizes: u32, 101 | DataDirectory: [ImageDataDirectory; IMAGE_NUMBEROF_DIRECTORY_ENTRIES], 102 | } 103 | 104 | /// IMAGE_DATA_DIRECTORY 105 | #[repr(C, packed)] 106 | #[allow(non_snake_case)] 107 | #[derive(Default, ByteSafe)] 108 | struct ImageDataDirectory { 109 | VirtualAddress: u32, 110 | Size: u32, 111 | } 112 | 113 | /// IMAGE_SECTION_HEADER 114 | #[repr(C, packed)] 115 | #[allow(non_snake_case)] 116 | #[derive(Default, ByteSafe)] 117 | struct ImageSectionHeader { 118 | Name: [u8; 8], 119 | VirtualSize: u32, 120 | VirtualAddress: u32, 121 | SizeOfRawData: u32, 122 | PointerToRawData: u32, 123 | PointerToRelocations: u32, 124 | PointerToLinenumbers: u32, 125 | NumberOfRelocations: u16, 126 | NumberOfLinenumbers: u16, 127 | Characteristics: u32, 128 | } 129 | 130 | /// DOS/MZ header for PE files 131 | #[repr(C, packed)] 132 | #[allow(non_snake_case)] 133 | #[derive(Default, ByteSafe)] 134 | struct DosHeader { 135 | signature: [u8; 2], 136 | dont_care1: [u8; 0x20], 137 | dont_care2: [u8; 0x1a], 138 | pe_ptr: u32, 139 | } 140 | 141 | /// Relocation structure 142 | #[repr(C, packed)] 143 | #[allow(non_snake_case)] 144 | #[derive(Default, ByteSafe)] 145 | struct ImageBaseRelocation { 146 | VirtualAddress: u32, 147 | SizeOfBlock: u32, 148 | } 149 | 150 | /// Representation of a PE section 151 | /// 152 | /// All addresses stored are full virtual addresses (not RVA) 153 | pub struct PESection { 154 | /// Virtual address which is the base of this section 155 | vaddr: u64, 156 | 157 | /// Read permissions 158 | read: bool, 159 | 160 | /// Write permissions 161 | write: bool, 162 | 163 | /// Execute permissions 164 | execute: bool, 165 | 166 | /// Discard this section (aka, don't load it only contains information) 167 | discard: bool, 168 | 169 | /// Raw contents of this section 170 | /// 171 | /// Always non-zero length and zero-padded to nearest 4k length 172 | contents: Vec, 173 | } 174 | 175 | /// Parsed PE file structure 176 | /// 177 | /// All addresses stored are full virtual addresses (not RVA) 178 | pub struct PEParsed { 179 | /// Original base address of the PE, used for relocation delta calculation 180 | base_addr: u64, 181 | 182 | /// Virtual size of loaded image (4k aligned) 183 | size_of_image: u64, 184 | 185 | /// Entry point for the PE 186 | entry: u64, 187 | 188 | /// Vector of virtual addresses in the PE file which need to have a 64-bit 189 | /// offset applied based on the delta from the original base address. 190 | relocations: Vec, 191 | 192 | /// Section information for each section in the PE 193 | sections: Vec, 194 | } 195 | 196 | impl PEParsed { 197 | /// Determine the loaded size of this PE. 
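/// (This is SizeOfImage from the optional header, rounded up by parse() to a 4 KiB boundary.)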
198 | pub fn loaded_size(&self) -> u64 199 | { 200 | self.size_of_image 201 | } 202 | 203 | /// Load this PE into a given `page_table` relocated to `base` 204 | pub fn load(&self, page_table: &mut PageTable<::Pmem>, base: u64) -> u64 205 | { 206 | /* Base address must be 4k aligned and nonzero */ 207 | assert!(base > 0 && (base & 0xfff) == 0, 208 | "PE loading base address must be 4k aligned and nonzero"); 209 | 210 | /* Layout for raw pages */ 211 | let layout = Layout::from_size_align(4096, 4096).unwrap(); 212 | 213 | /* Determine the relocation delta from the original file base */ 214 | let diff = base.wrapping_sub(self.base_addr); 215 | 216 | /* Make sure this PE can be relocated to desired location */ 217 | assert!(page_table.can_map_memory(base, self.loaded_size()).unwrap(), 218 | "Cannot reserve memory to map PE file"); 219 | 220 | /* Load all of the sections into the page table */ 221 | for section in &self.sections { 222 | /* x86 limitation is that all sections must be readable */ 223 | assert!(section.read == true, "Section mapped not readable"); 224 | 225 | /* If the section should be discarded, skip it */ 226 | if section.discard { 227 | continue; 228 | } 229 | 230 | /* For each page in the section, map it in */ 231 | for ii in (0..section.contents.len()).step_by(4096) { 232 | /* Compute new virtual address after the relocation 233 | * Due to can_map_memory() check at the start of the function 234 | * this +ii is safe from overflows. 235 | */ 236 | let vaddr = section.vaddr.wrapping_add(diff) + ii as u64; 237 | 238 | unsafe { 239 | /* Allocate a page */ 240 | let raw_page = 241 | ::GLOBAL_ALLOCATOR.alloc(layout.clone()) as *mut u8; 242 | 243 | /* Copy memory contents into page */ 244 | core::ptr::copy_nonoverlapping( 245 | section.contents[ii..ii+4096].as_ptr(), 246 | raw_page, 4096); 247 | 248 | /* Permissions for allocation. Set the write flag, 249 | * NX flag, and present flag as needed. 250 | */ 251 | let perms = 252 | if section.write { PTBits::Writable as u64 } 253 | else { 0 } | 254 | if !section.execute { PTBits::ExecuteDisable as u64 } 255 | else { 0 } | PTBits::Present as u64; 256 | 257 | /* Map the page into the page table */ 258 | page_table.map_page_raw(vaddr, raw_page as u64 | perms, 259 | MapSize::Mapping4KiB, 260 | false).unwrap(); 261 | } 262 | } 263 | } 264 | 265 | /* Apply relocations */ 266 | for relocation in &self.relocations { 267 | /* Adjust the relocation vaddr to use the new base */ 268 | let relocation = relocation.wrapping_add(diff); 269 | 270 | /* We assume relocations are 8-byte aligned to prevent needing to 271 | * straddle a page boundry. Could be changed if this is ever 272 | * encountered. 273 | */ 274 | assert!((relocation & 7) == 0, "Relocation not 8-byte aligned"); 275 | 276 | /* Translate the relocation vaddr to phys */ 277 | match page_table.virt_to_phys(relocation).unwrap() { 278 | Some((reloc_phys, _)) => unsafe { 279 | /* Convert the relocation physical address to a mutable 280 | * reference, and apply the relocation delta to it. 281 | */ 282 | let rr = (reloc_phys as *mut u64).as_mut().unwrap(); 283 | *rr = rr.wrapping_add(diff); 284 | }, 285 | None => panic!("Relocation vaddr not present"), 286 | } 287 | } 288 | 289 | /* Return relocated entry point */ 290 | self.entry.wrapping_add(diff) 291 | } 292 | } 293 | 294 | /// Load a PE file into a PEParsed structure 295 | /// 296 | /// This loader is extremely strict. It checks that all flags that matter are 297 | /// verified to match what we have tested and expect. 
These flags are 298 | /// implemented with exact matches or whitelists rather than blacklists to 299 | /// ensure this never succeeds in an unknown environment. 300 | pub fn parse(file: &Vec) -> PEParsed 301 | { 302 | /* Make sure file is large enough for DOS header */ 303 | assert!(file.len() >= size_of::(), 304 | "File too small for MZ header"); 305 | 306 | /* Parse DOS header and validate signature */ 307 | let dos_hdr: DosHeader = file[..size_of::()].cast_copy(); 308 | assert!(&dos_hdr.signature == b"MZ", "No MZ magic present"); 309 | 310 | /* Safely compute the end pointer of the PE header. Since this value is 311 | * controlled by the file, we need to be careful to make sure it doesn't 312 | * overflow with a checked_add(). 313 | */ 314 | let pe_ptr = dos_hdr.pe_ptr as usize; 315 | let pe_end = pe_ptr.checked_add(size_of::()). 316 | expect("Integer overflow on PE offset"); 317 | 318 | /* Validate PE header bounds */ 319 | assert!(file.len() >= pe_end, "File too small for PE header"); 320 | 321 | /* Parse PE header and validate signature */ 322 | let pe: ImageNtHeaders64 = file[pe_ptr..pe_end].cast_copy(); 323 | assert!(&pe.Signature == b"PE\0\0", "No PE magic present"); 324 | 325 | /* Strictly validate all fields we care about in the IMAGE_FILE_HEADER. 326 | * This might be too strict, but we can relax it if needed later. 327 | */ 328 | assert!(pe.FileHeader.Machine == IMAGE_FILE_MACHINE_AMD64, 329 | "PE file was not for amd64 machines"); 330 | 331 | assert!(pe.FileHeader.NumberOfSections > 0, "PE file has no sections"); 332 | 333 | assert!(pe.FileHeader.SizeOfOptionalHeader as usize == 334 | size_of::(), 335 | "PE file optional header size mismatch"); 336 | 337 | assert!(pe.FileHeader.Characteristics == 338 | (IMAGE_FILE_EXECUTABLE_IMAGE | IMAGE_FILE_LARGE_ADDRESS_AWARE), 339 | "PE file has unexpected characteristics"); 340 | 341 | /* Strictly validate all fields we care about in the IMAGE_OPTIONAL_HEADER. 342 | * This might be too strict, but we can relax it if needed later. 343 | */ 344 | assert!(pe.OptionalHeader.Magic == IMAGE_NT_OPTIONAL_HDR64_MAGIC, 345 | "PE file is not a 64-bit executable"); 346 | 347 | assert!(pe.OptionalHeader.Subsystem == IMAGE_SUBSYSTEM_NATIVE, 348 | "PE file is not of native subsystem type"); 349 | 350 | assert!(pe.OptionalHeader.SectionAlignment == 4096, 351 | "PE section alignment was not 4096"); 352 | 353 | assert!(pe.OptionalHeader.DllCharacteristics == ( 354 | IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA | 355 | IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE | 356 | IMAGE_DLLCHARACTERISTICS_NX_COMPAT | 357 | IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE), 358 | "PE had unexpected DllCharacteristics"); 359 | 360 | assert!(pe.OptionalHeader.LoaderFlags == 0, 361 | "PE had unexpected LoaderFlags"); 362 | 363 | /* Grab and 4k-align SizeOfImage */ 364 | let size_of_image = (pe.OptionalHeader.SizeOfImage.checked_add(0xfff) 365 | .expect("Integer overflow on SizeOfImage") & !0xfff) as u64; 366 | assert!(size_of_image > 0, "PE SizeOfImage is zero"); 367 | 368 | /* Holds whether or not we found an executable and initialized section 369 | * containing the entry point. 
370 | */ 371 | let mut entry_point_valid = false; 372 | 373 | /* Construct vector to hold sections */ 374 | let mut sections = 375 | Vec::with_capacity(pe.FileHeader.NumberOfSections as usize); 376 | 377 | /* No relocations by default */ 378 | let mut relocations = None; 379 | 380 | /* Go through each section as reported by the PE */ 381 | let mut section_ptr = pe_end; 382 | for _ in 0..pe.FileHeader.NumberOfSections { 383 | /* Validate bounds of this IMAGE_SECTION_HEADER */ 384 | let section_end = section_ptr.checked_add( 385 | size_of::()). 386 | expect("PE section integer overflow"); 387 | assert!(file.len() >= section_end, "PE section out of bounds"); 388 | 389 | /* Create an IMAGE_SECTION_HEADER */ 390 | let section: ImageSectionHeader = 391 | file[section_ptr..section_end].cast_copy(); 392 | 393 | /* Validate alignment and section size */ 394 | assert!((section.VirtualAddress & 0xfff) == 0, 395 | "PE section virtual address was not 4k aligned"); 396 | if section.VirtualSize == 0 { 397 | continue; 398 | } 399 | 400 | /* Round up the virtual size to the nearest 4k boundry */ 401 | let rounded_vsize = (section.VirtualSize.checked_add(0xfff). 402 | expect("PE section virtual size integer overflow") & !0xfff) 403 | as usize; 404 | 405 | /* Make sure raw data size is <= vitrual size */ 406 | assert!(section.SizeOfRawData as usize <= rounded_vsize, 407 | "Section raw data larger than virtual size"); 408 | 409 | /* Validate bounds of raw data */ 410 | let rd_start = section.PointerToRawData as usize; 411 | let rd_end = rd_start.checked_add(section.SizeOfRawData as usize). 412 | expect("PE section raw data integer overflow"); 413 | assert!(rd_end <= file.len(), "PE section raw data out of bounds"); 414 | 415 | /* We expect no relocations in the section header */ 416 | assert!(section.NumberOfRelocations == 0, 417 | "PE section has relocations, not supported"); 418 | 419 | /* Compute start and end virtual addresses of this section */ 420 | let section_start_vaddr = pe.OptionalHeader.ImageBase 421 | .checked_add(section.VirtualAddress as u64) 422 | .expect("Overflow on ImageBase + section RVA"); 423 | let section_end_vaddr = section_start_vaddr 424 | .checked_add(rounded_vsize as u64) 425 | .expect("Overflow on section VA + section vsize"); 426 | 427 | /* Validate that this section is inside of the image virtual size */ 428 | assert!((section_end_vaddr - pe.OptionalHeader.ImageBase) <= 429 | size_of_image, "Section outside of virtual image space"); 430 | 431 | /* Create a 4k-aligned region of memory which represents this sections 432 | * virtual memory layout. Padded after the raw data with zero bytes. 
433 | */ 434 | let mut contents = vec![0u8; rounded_vsize]; 435 | contents[..rd_end-rd_start].copy_from_slice(&file[rd_start..rd_end]); 436 | 437 | /* Grab permissions */ 438 | let perm_r = (section.Characteristics & IMAGE_SCN_MEM_READ) !=0; 439 | let perm_w = (section.Characteristics & IMAGE_SCN_MEM_WRITE) !=0; 440 | let perm_x = (section.Characteristics & IMAGE_SCN_MEM_EXECUTE) !=0; 441 | let discard = (section.Characteristics & IMAGE_SCN_MEM_DISCARDABLE)!=0; 442 | 443 | /* Check that no unknown characteristics are set */ 444 | assert!((section.Characteristics & !( 445 | IMAGE_SCN_MEM_READ | 446 | IMAGE_SCN_MEM_WRITE | 447 | IMAGE_SCN_MEM_EXECUTE | 448 | IMAGE_SCN_MEM_DISCARDABLE | 449 | IMAGE_SCN_CNT_INITIALIZED_DATA | 450 | IMAGE_SCN_CNT_UNINITIALIZED_DATA | 451 | IMAGE_SCN_CNT_CODE 452 | )) == 0, "Unknown section characteristic set"); 453 | 454 | assert!(!(perm_x && perm_w), "Executable section also writable"); 455 | 456 | if perm_x { 457 | /* If this is an executable section, check if the entry point 458 | * falls in it. We check based on the raw data such that the entry 459 | * point also doesn't point to padding zero bytes. 460 | */ 461 | let entry = pe.OptionalHeader.AddressOfEntryPoint; 462 | if entry >= section.VirtualAddress && 463 | entry < section.VirtualAddress 464 | .checked_add(section.SizeOfRawData).unwrap() { 465 | entry_point_valid = true; 466 | } 467 | } 468 | 469 | /* If this is a relocation section, parse out relocations */ 470 | if §ion.Name == b".reloc\0\0" { 471 | /* Validate entire virtual size is initialized */ 472 | assert!(section.SizeOfRawData >= section.VirtualSize, 473 | "Portion of .reloc section not initialized"); 474 | 475 | /* Slice down the 4k aligned contents to an exact size as specified 476 | * by the VirtualSize. 477 | */ 478 | let mut relocs = &contents[..section.VirtualSize as usize]; 479 | 480 | /* Check if we already have seen a relocation section */ 481 | assert!(relocations.is_none(), 482 | "Multiple relocation sections present"); 483 | 484 | /* Allocate room for at least all relocations. Due to headers this 485 | * allocation will be a bit larger than needed, but that's fine. 486 | */ 487 | let mut reloc_parsed = Vec::with_capacity(relocs.len() / 2); 488 | 489 | while relocs.len() > 0 { 490 | /* Validate bounds */ 491 | assert!(relocs.len() >= size_of::(), 492 | ".reloc section too small for header"); 493 | 494 | /* Parse out one relocation record */ 495 | let ibr: ImageBaseRelocation = 496 | relocs[..size_of::()].cast_copy(); 497 | 498 | /* Validate relocation record base address is 4k aligned */ 499 | assert!((ibr.VirtualAddress & 0xfff) == 0, 500 | "Relocation VirtualAddress not page aligned"); 501 | 502 | /* Validate block size is in bounds and large enough for header 503 | */ 504 | let blocksz = ibr.SizeOfBlock as usize; 505 | assert!(blocksz >= size_of::() && 506 | blocksz <= relocs.len(), 507 | "Invalid relocation section VirtualSize"); 508 | 509 | /* Compute the size of the relocation block payload and seek 510 | * relocs forward to it. 511 | */ 512 | let blocksz = blocksz - size_of::(); 513 | relocs = &relocs[size_of::()..]; 514 | 515 | /* We expect 2 bytes per entry, thus the blocksz should be 516 | * evenly divisible by 2. 
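 *
 * Each 16-bit entry packs a 4-bit relocation type in its high nibble and a
 * 12-bit byte offset into the 4 KiB page named by ibr.VirtualAddress. For
 * example an entry of 0xa123 decodes as:
 *
 *     offset = 0xa123 & 0x0fff          // 0x123, offset within the page
 *     typ    = (0xa123 & 0xf000) >> 12  // 0xa == 10 == IMAGE_REL_BASED_DIR64
 *
 * and the address pushed onto reloc_parsed below is then
 * ImageBase + ibr.VirtualAddress + offset.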
517 | */ 518 | assert!((blocksz % 2) == 0, 519 | ".reloc section not evenly divisible by 2"); 520 | 521 | /* Cast the relocs to a &[u16] */ 522 | let type_offsets: &[u16] = relocs[..blocksz].cast(); 523 | 524 | for to in type_offsets { 525 | /* Parse offset and type from relocation */ 526 | let offset = (to & 0x0fff) >> 0; 527 | let typ = (to & 0xf000) >> 12; 528 | 529 | /* Skip absolute relocations */ 530 | if typ == IMAGE_REL_BASED_ABSOLUTE { 531 | continue; 532 | } 533 | 534 | /* Currently we only support DIR64 relocations */ 535 | assert!(typ == IMAGE_REL_BASED_DIR64, 536 | "Unsupported relocation type"); 537 | 538 | /* Add relocation to the relocation list */ 539 | reloc_parsed.push( 540 | pe.OptionalHeader.ImageBase 541 | .checked_add(ibr.VirtualAddress as u64).unwrap() 542 | .checked_add(offset as u64).unwrap()); 543 | } 544 | 545 | relocs = &relocs[blocksz..]; 546 | } 547 | 548 | relocations = Some(reloc_parsed); 549 | } 550 | 551 | /* Add section to list */ 552 | sections.push(PESection { 553 | vaddr: section_start_vaddr, 554 | read: perm_r, 555 | write: perm_w, 556 | execute: perm_x, 557 | discard: discard, 558 | contents: contents, 559 | }); 560 | 561 | /* Seek to next section */ 562 | section_ptr += size_of::(); 563 | } 564 | 565 | assert!(entry_point_valid, "Entry point was not in executable section"); 566 | 567 | PEParsed { 568 | entry: 569 | pe.OptionalHeader.ImageBase 570 | .checked_add(pe.OptionalHeader.AddressOfEntryPoint as u64).unwrap(), 571 | 572 | base_addr: pe.OptionalHeader.ImageBase, 573 | relocations: relocations.unwrap_or(Vec::new()), 574 | sections: sections, 575 | size_of_image: size_of_image, 576 | } 577 | } 578 | 579 | -------------------------------------------------------------------------------- /bootloader/src/pxe.rs: -------------------------------------------------------------------------------- 1 | use alloc::vec::Vec; 2 | 3 | use core; 4 | use serial; 5 | use cpu; 6 | use realmode; 7 | use realmode::SegOff; 8 | 9 | const STATIC_KERNEL_BUFFER_SIZE: usize = 1024 * 1024; 10 | 11 | /// List of PXE opcodes we support using 12 | enum PXEOpcode<'a> { 13 | /// PXE opcode for getting cached packets (and thus IP addresses) 14 | GetCachedInfo(&'a mut PXENV_CACHED_INFO), 15 | 16 | /// PXE opcode to get a file size 17 | GetFileSize(&'a mut PXENV_TFTP_GET_FSIZE), 18 | 19 | /// PXE opcode to open a file 20 | Open(&'a mut PXENV_TFTP_OPEN), 21 | 22 | /// PXE opcode to read 23 | Read(&'a mut PXENV_TFTP_READ), 24 | 25 | /// PXE opcode to close a file 26 | Close(&'a mut PXENV_TFTP_CLOSE), 27 | } 28 | 29 | /// PXENV+ structure 30 | #[repr(C, packed)] 31 | struct PXENV { 32 | /// "PXENV+" 33 | sig: [u8; 6], 34 | 35 | /// API version number. MSB=major LSB=minor. NBPs and OS 36 | /// drivers must check for this version number. If the API version 37 | /// number is 0x0201 or higher, use the !PXE structure. If the API 38 | /// version number is less than 0x0201, then use the PXENV+ 39 | /// structure. 40 | version: u16, 41 | 42 | /// Length of this structure in bytes. This length must be used when 43 | /// computing the checksum of this structure. 44 | length: u8, 45 | 46 | /// Used to make 8-bit checksum of this structure equal zero. 47 | checksum: u8, 48 | 49 | /// Far pointer to real-mode PXE/UNDI API entry point. May be CS:0000h. 50 | rmentry: SegOff, 51 | 52 | /// 32-bit offset to protected-mode PXE/UNDI API entry point. Do not 53 | /// use this entry point. 
For protected-mode API services, use the 54 | /// !PXE structure 55 | pmoffset: u32, 56 | 57 | /// Protected-mode selector of protected-mode PXE/UNDI API entry 58 | /// point. Do not use this entry point. For protected-mode API 59 | /// services, use the !PXE structure. 60 | pmsel: u16, 61 | 62 | /// Stack segment address. Must be set to 0 when removed from memory. 63 | stackseg: u16, 64 | 65 | /// Stack segment size in bytes. 66 | stacksize: u16, 67 | 68 | /// BC code segment address. Must be set to 0 when removed from memory. 69 | bc_codeseg: u16, 70 | 71 | /// BC code segment size. Must be set to 0 when removed from memory. 72 | bc_codesize: u16, 73 | 74 | /// BC data segment address. Must be set to 0 when removed from memory. 75 | bc_dataseg: u16, 76 | 77 | /// BC data segment size. Must be set to 0 when removed from memory. 78 | bc_datasize: u16, 79 | 80 | /// UNDI data segment address. Must be set to 0 when removed from memory. 81 | undi_dataseg: u16, 82 | 83 | /// UNDI data segment size. Must be set to 0 when removed from memory. 84 | undi_datasize: u16, 85 | 86 | /// UNDI code segment address. Must be set to 0 when removed from memory. 87 | undi_codeseg: u16, 88 | 89 | /// UNDI code segment size. Must be set to 0 when removed from memory. 90 | undi_codesize: u16, 91 | 92 | /// Real mode segment offset pointer to !PXE structure. This field is 93 | /// only present if the API version number is 2.1 or greater. 94 | pxeptr: SegOff, 95 | } 96 | 97 | /// Structure passed in to PXE when the TFTP_GET_FSIZE command is used. 98 | #[repr(C, packed)] 99 | struct PXENV_TFTP_GET_FSIZE { 100 | /// See PXENV_STATUS_xxx constants. 101 | status: u16, 102 | 103 | /// IP address of TFTP server in network order. 104 | server_ip: u32, 105 | 106 | /// IP address of relay agent in network order. If 107 | /// this address is set to zero, the IP layer will resolve this using 108 | /// its own routing table. 109 | gateway_ip: u32, 110 | 111 | /// Name of file to be downloaded. Null terminated 112 | filename: [u8; 128], 113 | 114 | /// Size of the file in bytes. 115 | filesize: u32, 116 | } 117 | 118 | /// Structure passed in to PXE when the TFTP_OPEN command is used. 119 | #[repr(C, packed)] 120 | struct PXENV_TFTP_OPEN { 121 | /// See the PXENV_STATUS_xxx constants. 122 | status: u16, 123 | 124 | /// TFTP server IP address in network order. 125 | server_ip: u32, 126 | 127 | /// Relay agent IP address in network order. If this 128 | /// address is set to zero, the IP layer will resolve this using its own 129 | /// routing table. The IP layer should provide space for a minimum of 130 | /// four routing entries obtained from default router and static route 131 | /// DHCP option tags in the DHCPackr message, plus any non-zero 132 | /// GIADDR field from the DHCPOffer message(s) accepted by the 133 | /// client. 134 | gateway_ip: u32, 135 | 136 | /// Name of file to be downloaded. Null terminated. 137 | filename: [u8; 128], 138 | 139 | /// UDP port TFTP server is listening to requests on 140 | tftp_port: u16, 141 | 142 | /// In: Requested size of TFTP packet, in bytes; with a 143 | /// minimum of 512 bytes. 144 | /// Out: Negotiated size of TFTP packet, in bytes; less than or 145 | /// equal to the requested size 146 | packetsize: u16, 147 | } 148 | 149 | /// Structure passed in to PXE when the TFTP_READ command is used. 150 | #[repr(C, packed)] 151 | struct PXENV_TFTP_READ { 152 | /// Out: See the PXENV_STATUS_xxx constants. 
153 | status: u16, 154 | 155 | /// Out: Packet number (1-65535) sent from the TFTP server. 156 | packet_num: u16, 157 | 158 | /// Out: Number of bytes written to the packet buffer. Last packet 159 | /// if this is less thanthe size negotiated in TFTP_OPEN. Zero is valid. 160 | buffer_size: u16, 161 | 162 | /// In: Segment:Offset address of packet buffer. 163 | buffer: SegOff, 164 | } 165 | 166 | /// Structure passed in to PXE when the TFTP_CLOSE command is used. 167 | #[repr(C, packed)] 168 | struct PXENV_TFTP_CLOSE { 169 | /// Out: See the PXENV_STATUS_xxx constants. 170 | status: u16, 171 | } 172 | 173 | /// Structure passed in to PXE when the CACHED_INFO command is used. 174 | #[repr(C, packed)] 175 | struct PXENV_CACHED_INFO { 176 | /// See the PXENV_STATUS_xxx constants. 177 | status: u16, 178 | 179 | /// Type of cached packet being requested. 180 | packet_type: u16, 181 | 182 | /// In: Maximum number of bytes of data that can be copied into Buffer. 183 | /// Out: Number of bytes of data that have been copied 184 | /// into Buffer. If BufferSize and Buffer were both set to zero, 185 | /// this field will contain the amount of data stored in Buffer in 186 | /// the BC data segment. 187 | buffer_size: u16, 188 | 189 | /// In: Segment:Offset address of storage to be filled in by API service 190 | /// Out: If BufferSize and Buffer were both set to zero, this 191 | /// field will contain the segment:offset address of the Buffer in 192 | /// the BC data segment. 193 | buffer_segoff: SegOff, 194 | 195 | /// Out: Maximum size of the Buffer in the BC data segment. 196 | buffer_limit: u16, 197 | } 198 | 199 | /// !PXE structure, obtained via get_pxe() on a PXENV structure. 200 | #[repr(C, packed)] 201 | struct PXE_STRUCT { 202 | /// "!PXE" 203 | sig: [u8; 4], 204 | 205 | /// Length of this structure in bytes. This length must be 206 | /// used when computing the checksum of this structure. 207 | length: u8, 208 | 209 | /// Used to make structure byte checksum equal zero. 210 | checksum: u8, 211 | 212 | /// Revision of this structure is zero. (0x00) 213 | revision: u8, 214 | 215 | /// Must be zero. 216 | reserved: u8, 217 | 218 | /// Real mode segment:offset of UNDI ROM ID structure. 219 | /// Check this structure if you need to know the UNDI API 220 | /// revision level. Filled in by UNDI loader module. 221 | undi_rom_id: SegOff, 222 | 223 | /// Real mode segment:offset of BC ROM ID structure. Must 224 | /// be set to zero if BC is removed from memory. Check this 225 | /// structure if you need to know the BC API revision level. 226 | /// Filled in by base-code loader module. 227 | base_rom_id: SegOff, 228 | 229 | /// PXE API entry point for 16-bit stack segment. This API 230 | /// entry point is in the UNDI code segment and must not be 231 | /// CS:0000h. Filled in by UNDI loader module. 232 | entry_point_sp: SegOff, 233 | 234 | /// PXE API entry point for 32-bit stack segment. May be 235 | /// zero. This API entry point is in the UNDI code segment 236 | /// and must not be CS:0000h. Filled in by UNDI loader 237 | /// module. 238 | entry_point_esp: SegOff, 239 | 240 | /// Far pointer to DHCP/TFTP status call-out procedure. If 241 | /// this field is -1, DHCP/TFTP will not make status calls. If 242 | /// this field is zero, DHCP/TFTP will use the internal status 243 | /// call-out procedure. StatusCallout defaults to zero. 244 | /// Note: The internal status call-out procedure uses BIOS 245 | /// I/O interrupts and will only work in real mode. 
This field 246 | /// must be updated before making any base-code API calls 247 | /// in protected mode. 248 | status_callout: SegOff, 249 | 250 | /// Must be zero. 251 | reserved2: u8, 252 | 253 | /// Number of segment descriptors needed in protected 254 | /// mode and defined in this table. UNDI requires four 255 | /// descriptors. UNDI plus BC requires seven. 256 | seg_desc_cnt: u8, 257 | 258 | /// First protected mode selector assigned to PXE. 259 | /// Protected mode selectors assigned to PXE must be 260 | /// consecutive. Not used in real mode. Filled in by 261 | /// application before switching to protected mode. 262 | first_selector: u16, 263 | } 264 | 265 | impl PXENV 266 | { 267 | /// Compute checksum of this structure, should be zero if the structure 268 | /// is valid. 269 | fn checksum(&self) -> u8 270 | { 271 | let bytes = unsafe { 272 | core::slice::from_raw_parts( 273 | self as *const _ as *const u8, 274 | core::mem::size_of::()) 275 | }; 276 | 277 | bytes.iter().fold(0u8, |acc, &x| acc.wrapping_add(x)) 278 | } 279 | 280 | /// From this PXENV!, finds the !PXE structure and returns a reference 281 | /// to it. 282 | fn get_pxe(&self) -> &PXE_STRUCT 283 | { 284 | let pxe = unsafe { 285 | &*(self.pxeptr.to_linear() as *const PXE_STRUCT) 286 | }; 287 | 288 | /* Check the validity of the !PXE structure */ 289 | assert!(pxe.length as usize != core::mem::size_of::(), 290 | "!PXE structure size not expected"); 291 | assert!(pxe.checksum() == 0, "!PXE checksum invalid"); 292 | assert!(&pxe.sig == b"!PXE", "'!PXE' signature missing"); 293 | 294 | pxe 295 | } 296 | } 297 | 298 | impl PXE_STRUCT 299 | { 300 | /// Compute checksum of this structure, should be zero if the structure 301 | /// is valid. 302 | fn checksum(&self) -> u8 303 | { 304 | let bytes = unsafe { 305 | core::slice::from_raw_parts( 306 | self as *const _ as *const u8, 307 | self.length as usize) 308 | }; 309 | 310 | bytes.iter().fold(0u8, |acc, &x| acc.wrapping_add(x)) 311 | } 312 | 313 | /// Performs a PXE call 314 | /// 315 | /// This is marked unsafe as a caller can potentially corrupt memory 316 | /// depending on the PXE interface and parameters. Eg. issue a PXE read 317 | /// request to a buffer location that is already reserved/in use. 318 | unsafe fn pxecall(&self, opcode: PXEOpcode) 319 | { 320 | /* Convert the opcode enum into a PXE opcode and pointer to 321 | * PXE parameter. 322 | */ 323 | let (opcode, param) = match opcode { 324 | PXEOpcode::GetCachedInfo(x) => (0x71, x as *mut _ as u16), 325 | PXEOpcode::GetFileSize(x) => (0x25, x as *mut _ as u16), 326 | PXEOpcode::Open(x) => (0x20, x as *mut _ as u16), 327 | PXEOpcode::Read(x) => (0x22, x as *mut _ as u16), 328 | PXEOpcode::Close(x) => (0x21, x as *mut _ as u16), 329 | }; 330 | 331 | /* Perform the PXE call */ 332 | realmode::pxecall(self.entry_point_sp.seg, 333 | self.entry_point_sp.off, 334 | opcode, 0, param); 335 | } 336 | 337 | /// Read 'filename' from the PXE server, return the contents as a Vec 338 | /// of u8s. 339 | fn tftp_read_file(&self, filename: &str) -> Vec 340 | { 341 | /* Make sure there is room for the filename + null terminator in the 342 | * PXE request. 
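 *
 * (Note on the pxecall() parameter handling above: the parameter structure
 * pointer is truncated to a u16 and passed with a segment of 0, i.e. as a
 * real-mode 0000:offset address. That only works because these request
 * structures are stack locals and the bootloader stack sits at 0x7c00, well
 * below the 64 KiB that a zero segment can address.)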
343 | */ 344 | assert!(filename.as_bytes().len() < 128, 345 | "Filename too long for PXE TFTP read"); 346 | 347 | /* Get the DHCP server IP from PXE cached info */ 348 | let server_ip = { 349 | let mut cached_info = PXENV_CACHED_INFO { 350 | status: 0, 351 | packet_type: 2, /* Request the DHCP ACK packet */ 352 | buffer_size: 0, 353 | buffer_segoff: SegOff { off: 0, seg: 0 }, 354 | buffer_limit: 0, 355 | }; 356 | 357 | /* Do a PXE call of PXENV_GET_CACHED_INFO to get the DHCP ACK 358 | * packet. We use this cached ACK packet to obtain the IP 359 | * address of the DHCP server so we can request the kernel from it. 360 | */ 361 | unsafe { self.pxecall(PXEOpcode::GetCachedInfo(&mut cached_info)); } 362 | 363 | assert!(cached_info.status == 0, 364 | "Failed to get cached PXE information"); 365 | 366 | /* We crudely grab the IP address from the DHCP ack 367 | * packet at byte offset 0x14. 368 | */ 369 | unsafe { 370 | *((cached_info.buffer_segoff.to_linear() + 0x14) as *const u32) 371 | } 372 | }; 373 | 374 | let filesize = { 375 | /* Construct a TFTP_GET_FSIZE request for 'filename' */ 376 | let mut file_size_req = PXENV_TFTP_GET_FSIZE { 377 | status: 0, 378 | server_ip: server_ip, 379 | gateway_ip: 0, 380 | filename: [0; 128], 381 | filesize: 0, 382 | }; 383 | 384 | /* Copy the filename into the read request */ 385 | file_size_req.filename[..filename.as_bytes().len()] 386 | .copy_from_slice(filename.as_bytes()); 387 | 388 | /* Perform the TFTP_GET_FSIZE request */ 389 | unsafe { self.pxecall(PXEOpcode::GetFileSize(&mut file_size_req)); } 390 | 391 | assert!(file_size_req.status == 0, 392 | "TFTP_GET_FSIZE: Failed to get file size"); 393 | assert!(file_size_req.filesize > 0, 394 | "TFTP_GET_FSIZE: File size was zero bytes"); 395 | 396 | file_size_req.filesize 397 | }; 398 | 399 | /* Allocate room for the file to download */ 400 | assert!(filesize as usize <= STATIC_KERNEL_BUFFER_SIZE, 401 | "Kernel size too large, increase STATIC_KERNEL_BUFFER_SIZE"); 402 | let mut buf = Vec::with_capacity(STATIC_KERNEL_BUFFER_SIZE); 403 | 404 | /* Create a stack local buffer (which will be in real-mode addressable 405 | * space) for use as an intermediate buffer during reads. 
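 *
 * (On the server_ip obtained above: byte offset 0x14 into the cached DHCP
 * ACK is the siaddr field, since the BOOTP/DHCP header layout is op(1)
 * htype(1) hlen(1) hops(1) xid(4) secs(2) flags(2) ciaddr(4) yiaddr(4)
 * siaddr(4) ..., putting the "next server"/boot server address at byte
 * 20 == 0x14. It is already in network byte order, which is exactly what
 * the server_ip fields of the PXENV_TFTP_* requests expect.)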
406 | * 407 | * 1428 is the largest size for a UDP packet according to TFTP 408 | * blocksize spec RFC 2348 409 | */ 410 | let low_buf = [0u8; 1428]; 411 | 412 | let nego_psize = { 413 | /* Construct a TFTP_OPEN request for 'filename' */ 414 | let mut tftp_open = PXENV_TFTP_OPEN { 415 | status: 0, 416 | server_ip: server_ip, 417 | gateway_ip: 0, 418 | filename: [0; 128], 419 | tftp_port: 69u16.to_be(), 420 | packetsize: low_buf.len() as u16, 421 | }; 422 | 423 | /* Copy the filename into the read request */ 424 | tftp_open.filename[0..filename.as_bytes().len()] 425 | .copy_from_slice(filename.as_bytes()); 426 | 427 | /* Perform the PXENV_OPEN_FILE request */ 428 | unsafe { self.pxecall(PXEOpcode::Open(&mut tftp_open)); } 429 | 430 | assert!(tftp_open.status == 0, 431 | "PXENV_OPEN_FILE: Failed to open file"); 432 | assert!(tftp_open.packetsize >= 512, 433 | "Negotiated TFTP packet size was smaller than minimum"); 434 | assert!(tftp_open.packetsize <= low_buf.len() as u16, 435 | "Negotiated TFTP packet size was larger than expected"); 436 | 437 | tftp_open.packetsize 438 | }; 439 | 440 | { 441 | loop { 442 | /* Construct a TFTP_READ request for 'filename' */ 443 | let mut tftp_read = PXENV_TFTP_READ { 444 | status: 0, 445 | packet_num: 0, 446 | buffer_size: 0, 447 | buffer: 448 | SegOff { 449 | seg: 0, 450 | off: low_buf.as_ptr() as u16, 451 | }, 452 | }; 453 | 454 | /* Perform the PXENV_READ request */ 455 | unsafe { self.pxecall(PXEOpcode::Read(&mut tftp_read)); } 456 | 457 | assert!(tftp_read.status == 0, "Failed to read file"); 458 | assert!(tftp_read.buffer_size <= nego_psize, 459 | "PXENV_TFTP_READ: Read file returned more \ 460 | than negotiated at open"); 461 | 462 | /* Check if this read will exceed the expected filesize 463 | * 464 | * This could happen if the file on the server increased in 465 | * size after we got the initial filesize. We check later for 466 | * a match of size, but this cancels the transfer once we 467 | * notice there is an issue. 468 | */ 469 | assert!(buf.len() 470 | .wrapping_add(tftp_read.buffer_size as usize) <= 471 | filesize as usize, 472 | "File larger than expected"); 473 | 474 | buf.extend_from_slice( 475 | &low_buf[..tftp_read.buffer_size as usize]); 476 | 477 | /* Resolution of the progress bar */ 478 | const PROG_BAR_WIDTH: usize = 50; 479 | 480 | /* Fancy progress bar :D */ 481 | let prog = (buf.len() * PROG_BAR_WIDTH) / (filesize as usize); 482 | serial::write("\r|"); 483 | for _ in 0..prog { serial::write_byte(b'='); } 484 | for _ in prog..PROG_BAR_WIDTH { serial::write_byte(b' '); } 485 | serial::write_byte(b'|'); 486 | 487 | /* Read ends when first packet of different packetsize than 488 | * original is read. 489 | */ 490 | if tftp_read.buffer_size != nego_psize { 491 | break; 492 | } 493 | } 494 | 495 | /* Newline to go to a newline after our progress bar */ 496 | serial::write_byte(b'\n'); 497 | 498 | assert!(buf.len() == filesize as usize, 499 | "TFTP read did not match expected number of bytes"); 500 | } 501 | 502 | { 503 | /* Close the opened file */ 504 | let mut tftp_close = PXENV_TFTP_CLOSE { status: 0 }; 505 | unsafe { self.pxecall(PXEOpcode::Close(&mut tftp_close)); } 506 | assert!(tftp_close.status == 0, "Failed to close file"); 507 | } 508 | 509 | /* Return buffer */ 510 | buf 511 | } 512 | } 513 | 514 | /// Using PXE download file named `filename` 515 | /// 516 | /// Returns a vector of bytes containing the file contents. 
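//
// A hypothetical call site, presumably in the bootloader's main.rs (not shown
// here), would look roughly like:
//
//     let kernel = pxe::download_file("orange_slice.kern");
//     let parsed = pe::parse(&kernel);
//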
517 | pub fn download_file(filename: &str) -> Vec 518 | { 519 | if !cpu::is_bsp() { 520 | panic!("PXE routines are not allowed on non-BSP cores"); 521 | } 522 | 523 | let pxenv = unsafe { 524 | let mut regs = realmode::RegisterState { 525 | eax: 0x5650, ..Default::default() 526 | }; 527 | 528 | /* Invoke BIOS ax=0x5650 int 0x1a to get PXENV+ structure */ 529 | realmode::invoke_realmode(0x1a, &mut regs); 530 | 531 | /* Check for carry flag */ 532 | assert!((regs.efl & 1) == 0, "PXE installation check failed, CF set"); 533 | 534 | /* Check for PXE magic */ 535 | assert!(regs.eax == 0x564e, "PXE installation check failed, magic"); 536 | 537 | /* Create segoff representing PXENV structure */ 538 | let pxe_segoff = SegOff { seg: regs.es, off: regs.ebx as u16 }; 539 | &*(pxe_segoff.to_linear() as *const PXENV) 540 | }; 541 | 542 | /* Validate PXENV+ structure */ 543 | assert!(core::mem::size_of::() == pxenv.length as usize, 544 | "PXENV+ structure was not of expected size"); 545 | assert!(pxenv.checksum() == 0, "PXENV+ checksum invalid"); 546 | assert!(&pxenv.sig == b"PXENV+", "PXE signature not present"); 547 | assert!(pxenv.version == 0x0201, "PXE version invalid (expected 2.1)"); 548 | 549 | pxenv.get_pxe().tftp_read_file(filename) 550 | } 551 | 552 | -------------------------------------------------------------------------------- /bootloader/src/realmode.rs: -------------------------------------------------------------------------------- 1 | extern { 2 | /// Invoke a realmode software interrupt, for BIOS calls 3 | /// 4 | /// # Summary 5 | /// 6 | /// When this function is invoked, the register state is populated with 7 | /// the fields supplied from `register_state`, excluding segments, efl and 8 | /// esp. Once this context is loaded, a software interrupt of `int_num` is 9 | /// performed. Once the software interrupt is complete, the new register 10 | /// state is saved off to `register_state`, including segments, efl and 11 | /// esp fields. 12 | /// 13 | /// # Parameters 14 | /// 15 | /// * `int_num` - Software interrupt number to invoke 16 | /// * `register_state` - Input/output context for interrupt 17 | /// 18 | pub fn invoke_realmode(int_num: u8, regs: &mut RegisterState); 19 | 20 | /// Invoke a PXE call using the real-mode PXE stack. 21 | /// 22 | /// # Summary 23 | /// 24 | /// This function is used to invoke the real-mode PXE APIs provided by the 25 | /// EntryPointSP entry of the !PXE structure. The seg:off provided by 26 | /// EntryPointSP is what should be used for the first 2 parameters of this 27 | /// function. Provided the right real-mode stack, you then provide a PXE 28 | /// opcode in the `pxe_call` parameter, and point `param_seg`:`param_off` 29 | /// at the buffer describing the structure used by the opcode specified. 
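///
/// A seg:off pair maps to a linear address as (seg << 4) + off, so for
/// example 0x07c0:0x0100 addresses linear 0x7d00; SegOff::to_linear() below
/// performs the same computation in code.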
30 | /// 31 | /// # Parameters 32 | /// 33 | /// * `seg` - Code segment of the real-mode PXE stack 34 | /// * `off` - Code offset of the real-mode PXE stack 35 | /// * `pxe_call` - PXE call opcode 36 | /// * `param_seg` - Data segment for the PXE parameter 37 | /// * `param_off` - Data offset for the PXE parameter 38 | /// 39 | pub fn pxecall(seg: u16, off: u16, pxe_call: u16, 40 | param_seg: u16, param_off: u16); 41 | } 42 | 43 | /// Structure representing general purpose i386 register state 44 | #[repr(C, packed)] 45 | #[derive(Clone, Copy, Debug, Default)] 46 | pub struct RegisterState { 47 | pub eax: u32, 48 | pub ecx: u32, 49 | pub edx: u32, 50 | pub ebx: u32, 51 | pub esp: u32, 52 | pub ebp: u32, 53 | pub esi: u32, 54 | pub edi: u32, 55 | pub efl: u32, 56 | 57 | pub es: u16, 58 | pub ds: u16, 59 | pub fs: u16, 60 | pub gs: u16, 61 | pub ss: u16, 62 | } 63 | 64 | /// Simple SegOff structure for things that contain a segment and offset. 65 | #[repr(C, packed)] 66 | pub struct SegOff { 67 | pub off: u16, 68 | pub seg: u16, 69 | } 70 | 71 | impl SegOff { 72 | /// Convert a seg:off real-mode address into a linear 32-bit address 73 | pub fn to_linear(&self) -> usize 74 | { 75 | ((self.seg as usize) << 4) + (self.off as usize) 76 | } 77 | } 78 | 79 | -------------------------------------------------------------------------------- /bootloader/stage0.asm: -------------------------------------------------------------------------------- 1 | [bits 16] 2 | [org 0x7c00] 3 | 4 | struc flatpe 5 | .entry: resd 1 6 | .sections: resd 1 7 | .payload: 8 | endstruc 9 | 10 | struc flatpe_section 11 | .vaddr: resd 1 12 | .size: resd 1 13 | .data: 14 | endstruc 15 | 16 | entry: 17 | ; Disable interrupts and clear the direction flag 18 | cli 19 | cld 20 | 21 | ; Set the A20 line 22 | in al, 0x92 23 | or al, 2 24 | out 0x92, al 25 | 26 | ; Zero out DS for the lgdt 27 | xor ax, ax 28 | mov ds, ax 29 | 30 | ; Load the gdt (for 32-bit protected mode) 31 | lgdt [ds:pm_gdt] 32 | 33 | ; Set the protection bit 34 | mov eax, cr0 35 | or eax, (1 << 0) 36 | mov cr0, eax 37 | 38 | ; Jump to protected mode! 39 | jmp 0x0008:pm_entry 40 | 41 | [bits 32] 42 | 43 | pm_entry: 44 | ; Set data segments for protected mode 45 | mov ax, 0x10 46 | mov es, ax 47 | mov ds, ax 48 | mov fs, ax 49 | mov gs, ax 50 | mov ss, ax 51 | 52 | ; Set up a stack 53 | mov esp, 0x7c00 54 | 55 | ; Zero out entire range where kernel can be loaded [0x10000, 0x20000) 56 | ; This is our way of initializing all sections to zero so we only populate 57 | ; sections with raw data 58 | mov edi, 0x10000 59 | mov ecx, 0x20000 - 0x10000 60 | xor eax, eax 61 | rep stosb 62 | 63 | ; Get number of sections 64 | mov eax, [rust_entry + flatpe.sections] 65 | lea ebx, [rust_entry + flatpe.payload] 66 | .lewp: 67 | test eax, eax 68 | jz short .end 69 | 70 | mov edi, [ebx + flatpe_section.vaddr] 71 | lea esi, [ebx + flatpe_section.data] 72 | mov ecx, [ebx + flatpe_section.size] 73 | rep movsb 74 | 75 | add ebx, [ebx + flatpe_section.size] 76 | add ebx, flatpe_section_size 77 | dec eax 78 | jmp short .lewp 79 | 80 | .end: 81 | ; Jump into Rust! 
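; (Layout of the flattened stage1 image consumed by the copy loop above, as
; defined by the flatpe/flatpe_section strucs at the top of this file: a
; dword entry point, a dword section count, then per section a dword vaddr,
; a dword size and `size` bytes of raw data, as produced by flatten_pe.py.)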
82 | push dword kernel_buffer ; kernel_buffer: *mut KernelBuffer 83 | push dword [first_boot] ; first_boot: bool 84 | push dword soft_reboot_entry ; soft_reboot_entry: u32 85 | 86 | ; Set that this is no longer the first boot 87 | mov dword [first_boot], 0 88 | 89 | call dword [rust_entry + flatpe.entry] 90 | 91 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 92 | 93 | ; 16-bit real mode GDT 94 | 95 | align 8 96 | rmgdt_base: 97 | dq 0x0000000000000000 ; Null descriptor 98 | dq 0x00009a000000ffff ; 16-bit RO code, base 0, limit 0x0000ffff 99 | dq 0x000092000000ffff ; 16-bit RW data, base 0, limit 0x0000ffff 100 | 101 | rmgdt: 102 | dw (rmgdt - rmgdt_base) - 1 103 | dq rmgdt_base 104 | 105 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 106 | 107 | ; 32-bit protected mode GDT 108 | 109 | align 8 110 | pm_gdt_base: 111 | dq 0x0000000000000000 112 | dq 0x00CF9A000000FFFF 113 | dq 0x00CF92000000FFFF 114 | 115 | pm_gdt: 116 | dw (pm_gdt - pm_gdt_base) - 1 117 | dd pm_gdt_base 118 | 119 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 120 | 121 | align 8 122 | reentry_longjmp: 123 | dd rmmode_again 124 | dw 0x0008 125 | 126 | align 8 127 | rm_idt: 128 | dw 0xffff 129 | dq 0 130 | 131 | align 8 132 | rm_gdt: 133 | dw 0xffff 134 | dq 0 135 | 136 | ; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 137 | 138 | ; Tracks if this is the first boot or not. This gets cleared to zero on the 139 | ; first boot, allowing the bootloader to know if it is in a soft reboot or 140 | ; not. This changes whether or not it needs to start up PXE again. 141 | first_boot: dd 1 142 | 143 | kernel_buffer: dq 0 144 | kernel_buffer_size: dq 0 145 | kernel_buffer_max_size: dq 0 146 | 147 | ; Boot magic 148 | times 510-($-$$) db 0 149 | dw 0xAA55 150 | 151 | times 0x400-($-$$) db 0 152 | 153 | [bits 16] 154 | 155 | ; Address 0x8000 156 | ap_entry: 157 | ; Disable interrupts and clear the direction flag 158 | cli 159 | cld 160 | 161 | ; Zero out DS for the lgdt 162 | xor ax, ax 163 | mov ds, ax 164 | 165 | ; Load the gdt (for 32-bit protected mode) 166 | lgdt [ds:pm_gdt] 167 | 168 | ; Set the protection bit 169 | mov eax, cr0 170 | or eax, (1 << 0) 171 | mov cr0, eax 172 | 173 | ; Jump to protected mode! 174 | jmp 0x0008:ap_pm_entry 175 | 176 | times 0x500-($-$$) db 0 177 | 178 | [bits 16] 179 | 180 | ; Addres 0x8100 181 | vm_entry: 182 | mov di, 0xb800 183 | mov es, di 184 | xor di, di 185 | mov cx, 80 * 25 186 | xor ax, ax 187 | rep stosw 188 | 189 | mov di, 0xb800 190 | mov es, di 191 | xor di, di 192 | mov cx, 80 193 | mov ax, 0x0f41 194 | rep stosw 195 | 196 | cli 197 | hlt 198 | jmp vm_entry 199 | 200 | [bits 64] 201 | 202 | soft_reboot_entry: 203 | cli 204 | 205 | ; Set up a stack 206 | mov esp, 0x7c00 207 | 208 | ; Clear registers 209 | xor rax, rax 210 | mov rbx, rax 211 | mov rcx, rax 212 | mov rdx, rax 213 | mov rsi, rax 214 | mov rdi, rax 215 | mov rbp, rax 216 | mov r8, rax 217 | mov r9, rax 218 | mov r10, rax 219 | mov r11, rax 220 | mov r12, rax 221 | mov r13, rax 222 | mov r14, rax 223 | mov r15, rax 224 | 225 | lgdt [rmgdt] 226 | 227 | ; Must be far dword for Intel/AMD compatibility. AMD does not support 228 | ; 64-bit offsets in far jumps in long mode, Intel does however. Force 229 | ; it to be 32-bit as it works in both. 
230 | jmp far dword [reentry_longjmp] 231 | 232 | [bits 16] 233 | 234 | align 16 235 | rmmode_again: 236 | ; Disable paging 237 | mov eax, cr0 238 | btr eax, 31 239 | mov cr0, eax 240 | 241 | ; Disable long mode 242 | mov ecx, 0xc0000080 243 | rdmsr 244 | btr eax, 8 245 | wrmsr 246 | 247 | ; Load up the segments to be 16-bit segments 248 | mov ax, 0x10 249 | mov es, ax 250 | mov ds, ax 251 | mov fs, ax 252 | mov gs, ax 253 | mov ss, ax 254 | 255 | ; Disable protected mode 256 | mov eax, cr0 257 | btr eax, 0 258 | mov cr0, eax 259 | 260 | ; Zero out all GPRs (clear out high parts for when we go into 16-bit) 261 | xor eax, eax 262 | mov ebx, eax 263 | mov ecx, eax 264 | mov edx, eax 265 | mov esi, eax 266 | mov edi, eax 267 | mov ebp, eax 268 | mov esp, 0x7c00 269 | 270 | ; Reset the GDT and IDT to their original boot states 271 | lgdt [rm_gdt] 272 | lidt [rm_idt] 273 | 274 | ; Jump back to the start of the bootloader 275 | jmp 0x0000:0x7c00 276 | 277 | [bits 32] 278 | 279 | ap_pm_entry: 280 | ; Set data segments for protected mode 281 | mov ax, 0x10 282 | mov es, ax 283 | mov ds, ax 284 | mov fs, ax 285 | mov gs, ax 286 | mov ss, ax 287 | 288 | ; Set up a stack 289 | mov esp, 0x7c00 290 | 291 | ; Jump into Rust! 292 | push dword 0 ; kernel_buffer: *mut KernelBuffer 293 | push dword 0 ; first_boot: bool 294 | push dword soft_reboot_entry ; soft_reboot_entry: u32 295 | call dword [rust_entry + flatpe.entry] 296 | 297 | rust_entry: 298 | incbin "stage1.flat" 299 | 300 | -------------------------------------------------------------------------------- /debug_console/.gitignore: -------------------------------------------------------------------------------- 1 | Cargo.lock 2 | target 3 | 4 | -------------------------------------------------------------------------------- /debug_console/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "debug_console" 3 | version = "0.1.0" 4 | authors = ["gamozo "] 5 | 6 | [dependencies] 7 | kernel32-sys = "0.2.2" 8 | winapi = "0.2.8" 9 | 10 | -------------------------------------------------------------------------------- /debug_console/src/main.rs: -------------------------------------------------------------------------------- 1 | extern crate winapi; 2 | extern crate kernel32; 3 | 4 | use winapi::winerror; 5 | use std::error::Error; 6 | use winapi::minwinbase::OVERLAPPED; 7 | use std::fs::File; 8 | use std::io::Write; 9 | use std::ffi::OsStr; 10 | use std::os::windows::ffi::OsStrExt; 11 | use std::os::windows::io::{AsRawHandle, FromRawHandle}; 12 | 13 | /// Convert a Rust utf-8 string to a null terminated utf-16 string 14 | fn win32_string(string: &str) -> Vec 15 | { 16 | OsStr::new(string).encode_wide().chain(std::iter::once(0)).collect() 17 | } 18 | 19 | struct OverlappedReader { 20 | filename: Vec, 21 | fd: Option, 22 | active_read: Option<(OVERLAPPED, Vec)>, 23 | } 24 | 25 | impl OverlappedReader { 26 | /// Create a new overlapped reader. No file is opened when this is created 27 | fn new(filename: &str) -> OverlappedReader 28 | { 29 | OverlappedReader { 30 | filename: win32_string(filename), 31 | fd: None, 32 | active_read: None, 33 | } 34 | } 35 | 36 | /// Internal routine to open the file. This is called whenever self.fd 37 | /// is None, and a handle is needed. 
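// Hypothetical usage of this type (the real call sites are in handle_pipe()
// below, and the device name here is purely illustrative):
//
//     let mut com1 = OverlappedReader::new("\\\\.\\COM1");
//     com1.write(b"hello")?;
//     if let Some(bytes) = com1.try_read(512) {
//         /* up to 512 bytes arrived without blocking */
//     }
//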
38 | fn open(&mut self) -> Result<(), String> 39 | { 40 | let fd = unsafe { 41 | /* Attempt to open a new overlapped file */ 42 | let handle = kernel32::CreateFileW( 43 | self.filename.as_ptr(), 44 | winapi::winnt::GENERIC_READ | winapi::winnt::GENERIC_WRITE, 45 | 0, 46 | std::ptr::null_mut(), 47 | winapi::fileapi::OPEN_EXISTING, 48 | winapi::winbase::FILE_FLAG_OVERLAPPED, 49 | std::ptr::null_mut()); 50 | 51 | /* If we failed, return error */ 52 | if handle == winapi::shlobj::INVALID_HANDLE_VALUE { 53 | return Err(format!("Failed to create file (error: {})", 54 | kernel32::GetLastError())); 55 | } 56 | 57 | print!("Opened handle {:p}\n", handle); 58 | 59 | File::from_raw_handle(handle) 60 | }; 61 | 62 | print!("\n=================================\n"); 63 | print!("New serial session\n"); 64 | print!("=================================\n\n"); 65 | 66 | /* Set new fd */ 67 | self.fd = Some(fd); 68 | Ok(()) 69 | } 70 | 71 | /// Do a blocking write of bytes to this file if it is open 72 | fn write(&mut self, data: &[u8]) -> Result<(), Box> 73 | { 74 | if let Some(ref mut fd) = self.fd { 75 | fd.write_all(data)?; 76 | Ok(()) 77 | } else { 78 | Err("No open fd".into()) 79 | } 80 | } 81 | 82 | /// Attempt to read `size` bytes. Returns None on error or no bytes 83 | /// available. Otherwise returns a Vec containing up to `size` bytes 84 | /// read. 85 | fn try_read(&mut self, size: usize) -> Option> 86 | { 87 | /* If no file is open, try to open it */ 88 | if self.fd.is_none() { 89 | /* If we failed to open the file return None */ 90 | if self.open().is_err() { 91 | return None; 92 | } 93 | } 94 | 95 | /* At this point either the file was already open or we just opened 96 | * it. Get the fd. 97 | */ 98 | let fd = self.fd.as_ref().unwrap().as_raw_handle(); 99 | 100 | /* If there is an existing reader */ 101 | if self.active_read.is_some() { 102 | let (ret, gle, bread) = { 103 | /* Get a reference to the active read state */ 104 | let ar = self.active_read.as_mut().unwrap(); 105 | 106 | /* Cannot read a different size than what is already active */ 107 | assert!(ar.1.len() == size); 108 | 109 | let mut bread = 0u32; 110 | unsafe { 111 | (kernel32::GetOverlappedResult( 112 | fd, 113 | &mut ar.0, 114 | &mut bread, 115 | 0) == winapi::minwindef::TRUE, 116 | kernel32::GetLastError(), bread) 117 | } 118 | }; 119 | 120 | if !ret || (ret && bread <= 0) { 121 | /* If the error was not due to the IO not being complete, 122 | * free the fd. 123 | */ 124 | if (ret && bread <= 0) || gle != winerror::ERROR_IO_INCOMPLETE { 125 | self.fd = None; 126 | self.active_read = None; 127 | } 128 | 129 | None 130 | } else { 131 | /* Return buffer, sliced down to bread bytes */ 132 | let mut ret = self.active_read.take().unwrap().1; 133 | ret.truncate(bread as usize); 134 | Some(ret) 135 | } 136 | } else { 137 | self.active_read = Some(( 138 | unsafe { std::mem::zeroed() }, 139 | vec![0u8; size] 140 | )); 141 | 142 | let ret = { 143 | let ar = self.active_read.as_mut().unwrap(); 144 | 145 | /* Schedule a read to the file */ 146 | unsafe { 147 | kernel32::ReadFile( 148 | fd, 149 | ar.1.as_mut_ptr() as *mut std::os::raw::c_void, 150 | size as u32, 151 | std::ptr::null_mut(), 152 | &mut ar.0) == winapi::minwindef::TRUE 153 | } 154 | }; 155 | 156 | if ret { 157 | /* If read succeeded synchronously, recursively call this 158 | * function which will take the GetOverlappedResult() path. 
159 | */ 160 | self.try_read(size) 161 | } else { 162 | /* Validate there was no unexpected error */ 163 | unsafe { 164 | assert!(kernel32::GetLastError() == 165 | winerror::ERROR_IO_PENDING); 166 | } 167 | 168 | None 169 | } 170 | } 171 | } 172 | } 173 | 174 | fn handle_pipe() 175 | { 176 | let mut serial = OverlappedReader::new("\\\\.\\pipe\\kerndebug"); 177 | let mut conin = OverlappedReader::new("CONIN$"); 178 | 179 | loop { 180 | if let Some(inp) = conin.try_read(1024) { 181 | match std::str::from_utf8(&inp) { 182 | Ok("reboot\r\n") => { 183 | serial.write(b"Z").unwrap(); 184 | } 185 | _ => {}, 186 | } 187 | } 188 | 189 | if let Some(woo) = serial.try_read(1) { 190 | match std::str::from_utf8(&woo) { 191 | Ok(value) => { 192 | print!("{}", value); 193 | std::io::stdout().flush().unwrap(); 194 | }, 195 | Err(_) => {}, 196 | } 197 | } else { 198 | /* If there was nothing to display, sleep for 1ms. This just allows 199 | * this loop to sleep if there is nothing to do. If there is a lot 200 | * of print traffic it will not be sleeping so there will be no 201 | * delays. 202 | */ 203 | std::thread::sleep(std::time::Duration::from_millis(1)); 204 | } 205 | } 206 | } 207 | 208 | fn main() 209 | { 210 | handle_pipe(); 211 | } 212 | 213 | -------------------------------------------------------------------------------- /emu/bochsrc.bxrc: -------------------------------------------------------------------------------- 1 | # configuration file generated by Bochs 2 | plugin_ctrl: unmapped=1, biosdev=1, speaker=1, extfpuirq=1, parallel=1, serial=1, gameport=1, e1000=1 3 | config_interface: win32config 4 | display_library: win32 5 | memory: host=1024, guest=1024 6 | romimage: file="D:\dev\tkofuzz\bochs_src\bios\BIOS-bochs-latest", address=0x0, options=none 7 | vgaromimage: file="D:\dev\tkofuzz\bochs_src\bios\VGABIOS-lgpl-latest" 8 | boot: floppy 9 | floppy_bootsig_check: disabled=0 10 | floppya: type=1_44, 1_44="D:\orange_slice\emu\gpxe.dsk", status=inserted, write_protected=1 11 | # no floppyb 12 | ata0: enabled=1, ioaddr1=0x1f0, ioaddr2=0x3f0, irq=14 13 | ata0-master: type=none 14 | ata0-slave: type=none 15 | ata1: enabled=1, ioaddr1=0x170, ioaddr2=0x370, irq=15 16 | ata1-master: type=none 17 | ata1-slave: type=none 18 | ata2: enabled=0 19 | ata3: enabled=0 20 | optromimage1: file=none 21 | optromimage2: file=none 22 | optromimage3: file=none 23 | optromimage4: file=none 24 | optramimage1: file=none 25 | optramimage2: file=none 26 | optramimage3: file=none 27 | optramimage4: file=none 28 | pci: enabled=1, chipset=i440fx 29 | vga: extension=vbe, update_freq=5, realtime=1 30 | cpu: count=1, ips=1000000, model=corei7_haswell_4770, reset_on_triple_fault=1, cpuid_limit_winnt=0, ignore_bad_msrs=1, mwait_is_nop=0 31 | print_timestamps: enabled=0 32 | port_e9_hack: enabled=0 33 | private_colormap: enabled=0 34 | clock: sync=realtime, time0=local, rtc_sync=0 35 | # no cmosimage 36 | # no loader 37 | log: - 38 | logprefix: %t%e%d 39 | debug: action=ignore 40 | info: action=report 41 | error: action=report 42 | panic: action=ask 43 | keyboard: type=mf, serial_delay=250, paste_delay=100000, user_shortcut=none 44 | mouse: type=ps2, enabled=0, toggle=ctrl+mbutton 45 | sound: waveoutdrv=win, waveout=none, waveindrv=win, wavein=none, midioutdrv=win, midiout=none 46 | speaker: enabled=1, mode=sound 47 | parport1: enabled=0 48 | parport2: enabled=0 49 | com1: enabled=1, mode=file, dev="CON" 50 | com2: enabled=0 51 | com3: enabled=0 52 | com4: enabled=0 53 | e1000: enabled=1, mac=b0:c4:20:00:00:00, ethmod=vnet, 
ethdev=".", script=none, bootrom=none 54 | -------------------------------------------------------------------------------- /flatten_pe.py: -------------------------------------------------------------------------------- 1 | import sys, struct 2 | 3 | IMAGE_FILE_MACHINE_I386 = 0x014c 4 | 5 | MIN_ADDR = 0x10000 6 | MAX_ADDR = 0x20000 7 | 8 | if len(sys.argv) != 3: 9 | print("Usage: flatten_pe.py ") 10 | 11 | pe_file = open(sys.argv[1], "rb").read() 12 | 13 | # Check for MZ 14 | assert pe_file[:2] == b"MZ", "No MZ header present" 15 | 16 | # Grab pointer to PE header 17 | pe_ptr = struct.unpack(" vaddr 54 | assert raw_data_size <= rounded_vsize 55 | assert vaddr >= MIN_ADDR and vaddr < MAX_ADDR 56 | assert vend > MIN_ADDR and vend <= MAX_ADDR 57 | 58 | # Skip zero sized raw data sections 59 | if raw_data_size <= 0: 60 | continue 61 | 62 | sections.append((vaddr, vend, \ 63 | pe_file[raw_data_ptr:raw_data_ptr+raw_data_size])) 64 | 65 | flattened = bytearray() 66 | for (vaddr, vend, raw_data) in sections: 67 | # Should never happen as this is checked above 68 | assert len(raw_data) > 0 69 | 70 | print("%.8x %.8x" % (vaddr, len(raw_data))) 71 | 72 | flattened += struct.pack(""] 5 | edition = "2018" 6 | 7 | [dependencies] 8 | serial = { path = "../shared/serial" } 9 | cpu = { path = "../shared/cpu" } 10 | rangeset = { path = "../shared/rangeset" } 11 | mmu = { path = "../shared/mmu" } 12 | safecast = { path = "../shared/safecast" } 13 | bytesafe_derive = { path = "../shared/safecast/bytesafe_derive" } 14 | 15 | [profile.release] 16 | panic = "abort" 17 | lto = false 18 | debug = true 19 | 20 | [profile.dev] 21 | panic = "abort" 22 | -------------------------------------------------------------------------------- /kernel/src/core_reqs.rs: -------------------------------------------------------------------------------- 1 | /// libc `memcpy` implementation in rust 2 | /// 3 | /// This implementation of `memcpy` is overlap safe, making it technically 4 | /// `memmove`. 
5 | /// 6 | /// # Parameters 7 | /// 8 | /// * `dest` - Pointer to memory to copy to 9 | /// * `src` - Pointer to memory to copy from 10 | /// * `n` - Number of bytes to copy 11 | /// 12 | #[no_mangle] 13 | pub unsafe extern fn memcpy(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 14 | { 15 | memmove(dest, src, n) 16 | } 17 | 18 | /// libc `memmove` implementation in rust 19 | /// 20 | /// # Parameters 21 | /// 22 | /// * `dest` - Pointer to memory to copy to 23 | /// * `src` - Pointer to memory to copy from 24 | /// * `n` - Number of bytes to copy 25 | /// 26 | #[no_mangle] 27 | pub unsafe extern fn memmove(dest: *mut u8, src: *const u8, n: usize) -> *mut u8 28 | { 29 | if src < dest as *const u8 { 30 | /* copy backwards */ 31 | let mut ii = n; 32 | while ii != 0 { 33 | ii -= 1; 34 | *dest.offset(ii as isize) = *src.offset(ii as isize); 35 | } 36 | } else { 37 | /* copy forwards */ 38 | let mut ii = 0; 39 | while ii < n { 40 | *dest.offset(ii as isize) = *src.offset(ii as isize); 41 | ii += 1; 42 | } 43 | } 44 | 45 | dest 46 | } 47 | 48 | /// libc `memset` implementation in rust 49 | /// 50 | /// # Parameters 51 | /// 52 | /// * `s` - Pointer to memory to set 53 | /// * `c` - Character to set `n` bytes in `s` to 54 | /// * `n` - Number of bytes to set 55 | /// 56 | #[no_mangle] 57 | pub unsafe extern fn memset(s: *mut u8, c: i32, n: usize) -> *mut u8 58 | { 59 | let mut ii = 0; 60 | while ii < n { 61 | *s.offset(ii as isize) = c as u8; 62 | ii += 1; 63 | } 64 | 65 | s 66 | } 67 | 68 | /// libc `memcmp` implementation in rust 69 | /// 70 | /// # Parameters 71 | /// 72 | /// * `s1` - Pointer to memory to compare with s2 73 | /// * `s2` - Pointer to memory to compare with s1 74 | /// * `n` - Number of bytes to set 75 | #[no_mangle] 76 | pub unsafe extern fn memcmp(s1: *const u8, s2: *const u8, n: usize) -> i32 77 | { 78 | let mut ii = 0; 79 | while ii < n { 80 | let a = *s1.offset(ii as isize); 81 | let b = *s2.offset(ii as isize); 82 | if a != b { 83 | return a as i32 - b as i32 84 | } 85 | ii += 1; 86 | } 87 | 88 | 0 89 | } 90 | 91 | /// Fake `__chkstk()` stub. This is just a nop. If we run out of stack we will 92 | /// crash with a page fault, but that'll have to do. 93 | #[no_mangle] 94 | pub unsafe extern fn __chkstk() {} 95 | 96 | // Making a fake __CxxFrameHandler3 in Rust causes a panic, this is hacky 97 | // workaround where we declare it as a function that will just crash if it 98 | // gets called. 99 | // We should never hit this so it doesn't matter. 100 | global_asm!(r#" 101 | .global __CxxFrameHandler3 102 | __CxxFrameHandler3: 103 | ud2 104 | "#); 105 | 106 | #[no_mangle] 107 | pub unsafe extern fn cos() -> ! { panic!("Unhandled cos"); } 108 | 109 | #[no_mangle] 110 | pub unsafe extern fn cosf() -> ! { panic!("Unhandled cosf"); } 111 | 112 | #[no_mangle] 113 | pub unsafe extern fn sinf() -> ! { panic!("Unhandled sinf"); } 114 | 115 | #[no_mangle] 116 | pub unsafe extern fn sin() -> ! 
{ panic!("Unhandled sin"); } 117 | 118 | -------------------------------------------------------------------------------- /kernel/src/main.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | #![no_main] 3 | #![feature(const_fn)] 4 | #![feature(lang_items)] 5 | #![feature(core_intrinsics)] 6 | #![feature(allocator_api)] 7 | #![feature(llvm_asm)] 8 | #![allow(dead_code)] 9 | #![feature(global_asm)] 10 | #![feature(panic_info_message)] 11 | 12 | extern crate alloc; 13 | 14 | extern crate serial; 15 | extern crate cpu; 16 | extern crate rangeset; 17 | extern crate mmu; 18 | 19 | #[macro_use] extern crate bytesafe_derive; 20 | 21 | use core::sync::atomic::{AtomicUsize, Ordering}; 22 | use core::convert::TryInto; 23 | 24 | /// Global allocator 25 | #[global_allocator] 26 | static GLOBAL_ALLOCATOR: mm::GlobalAllocator = mm::GlobalAllocator; 27 | 28 | /// Whether or not floats are used. This is used by the MSVC calling convention 29 | /// and it just has to exist. 30 | #[export_name="_fltused"] 31 | pub static FLTUSED: usize = 0; 32 | 33 | macro_rules! print { 34 | ( $($arg:tt)* ) => ({ 35 | use core::fmt::Write; 36 | use core::sync::atomic::{AtomicUsize, Ordering}; 37 | static PRINT_LOCK: AtomicUsize = AtomicUsize::new(0); 38 | static PRINT_LOCK_REL: AtomicUsize = AtomicUsize::new(0); 39 | 40 | let ticket = PRINT_LOCK.fetch_add(1, Ordering::SeqCst); 41 | while ticket != PRINT_LOCK_REL.load(Ordering::SeqCst) {} 42 | 43 | let _ = write!(&mut $crate::Writer, $($arg)*); 44 | 45 | PRINT_LOCK_REL.fetch_add(1, Ordering::SeqCst); 46 | }) 47 | } 48 | 49 | /// ACPI code 50 | pub mod acpi; 51 | 52 | /// Panic handler 53 | pub mod panic; 54 | 55 | /// Core requirements needed for Rust, such as libc memset() and friends 56 | pub mod core_reqs; 57 | 58 | /// Bring in the memory manager 59 | pub mod mm; 60 | 61 | /// Writer implementation used by the `print!` macro 62 | pub struct Writer; 63 | 64 | impl core::fmt::Write for Writer { 65 | fn write_str(&mut self, s: &str) -> core::fmt::Result { 66 | serial::write(s); 67 | Ok(()) 68 | } 69 | } 70 | 71 | #[lang = "oom"] 72 | #[no_mangle] 73 | pub fn rust_oom(_layout: alloc::alloc::Layout) -> ! { 74 | panic!("Out of memory"); 75 | } 76 | 77 | /// Main entry point for this codebase 78 | #[no_mangle] 79 | pub extern fn entry(param: u64) -> ! 
{ 80 | static CORE_ID: AtomicUsize = AtomicUsize::new(0); 81 | 82 | // Convert the bootloader parameter into a reference 83 | let param = unsafe { &*(param as *const cpu::BootloaderStruct) }; 84 | 85 | // Get a unique core identifier for this processor 86 | let core_id = CORE_ID.fetch_add(1, Ordering::SeqCst); 87 | 88 | if cpu::is_bsp() { 89 | unsafe { 90 | acpi::init(¶m.phys_memory).expect("Failed to initialize ACPI"); 91 | } 92 | } 93 | 94 | // Attempt to launch the next processor in the list 95 | if false { 96 | unsafe { 97 | acpi::launch_ap(core_id + 1); 98 | } 99 | } 100 | 101 | // First, detect if VM-x is supported on the machine 102 | // See section 23.6 in the Intel Manual "DISCOVERING SUPPORT FOR VMX" 103 | let cpu_features = cpu::get_cpu_features(); 104 | assert!(cpu_features.vmx, "VM-x is not supported, halting"); 105 | 106 | print!("VMX detected, enabling VM-x!\n"); 107 | 108 | unsafe { 109 | // Set CR4.VMXE 110 | const CR4_VMXE: u64 = 1 << 13; 111 | const IA32_FEATURE_CONTROL: u32 = 0x3a; 112 | const IA32_VMX_BASIC: u32 = 0x480; 113 | const VMX_LOCK_BIT: u64 = 1 << 0; 114 | const VMXON_OUTSIDE_SMX: u64 = 1 << 2; 115 | 116 | /// Bits that must be set to 0 in CR0 when doing a VMXON 117 | const IA32_VMX_CR0_FIXED0: u32 = 0x486; 118 | 119 | /// Bits that must be set to 1 in CR0 when doing a VMXON 120 | const IA32_VMX_CR0_FIXED1: u32 = 0x487; 121 | 122 | /// Bits that must be set to 0 in CR4 when doing a VMXON 123 | const IA32_VMX_CR4_FIXED0: u32 = 0x488; 124 | 125 | /// Bits that must be set to 1 in CR4 when doing a VMXON 126 | const IA32_VMX_CR4_FIXED1: u32 = 0x489; 127 | 128 | print!("CR0 Fixed 0 {:#010x}\nCR0 Fixed 1 {:#010x}\nCR4 Fixed 0 {:#010x}\nCR4 Fixed 1 {:#010x}\n", 129 | cpu::rdmsr(IA32_VMX_CR0_FIXED0), cpu::rdmsr(IA32_VMX_CR0_FIXED1), 130 | cpu::rdmsr(IA32_VMX_CR4_FIXED0), cpu::rdmsr(IA32_VMX_CR4_FIXED1)); 131 | 132 | // Set the mandatory bits in CR0 and clear bits that are mandatory zero 133 | cpu::write_cr0((cpu::read_cr0() | cpu::rdmsr(IA32_VMX_CR0_FIXED0)) 134 | & cpu::rdmsr(IA32_VMX_CR0_FIXED1)); 135 | 136 | // Set the mandatory bits in CR4 and clear bits that are mandatory zero 137 | cpu::write_cr4((cpu::read_cr4() | cpu::rdmsr(IA32_VMX_CR4_FIXED0)) 138 | & cpu::rdmsr(IA32_VMX_CR4_FIXED1)); 139 | 140 | // Check if we need to set bits in IA32_FEATURE_CONTROL 141 | if (cpu::rdmsr(IA32_FEATURE_CONTROL) & VMX_LOCK_BIT) == 0 { 142 | // Lock bit not set, initialize IA32_FEATURE_CONTROL register 143 | let old = cpu::rdmsr(IA32_FEATURE_CONTROL); 144 | cpu::wrmsr(IA32_FEATURE_CONTROL, 145 | VMXON_OUTSIDE_SMX | VMX_LOCK_BIT | old); 146 | } 147 | 148 | // Validate that VMXON is allowed outside of SMX mode 149 | // See section 23.7 in the Intel System Manual 150 | // "ENABLING AND ENTERING VMX OPERATION" 151 | let lock_and_vmx = VMXON_OUTSIDE_SMX | VMX_LOCK_BIT; 152 | assert!( 153 | (cpu::rdmsr(IA32_FEATURE_CONTROL) & lock_and_vmx) == lock_and_vmx, 154 | "VMXON not allowed outside of SMX operation according to \ 155 | IA32_FEATURE_CONTROL, or lock bit is not set"); 156 | 157 | // Enable VMX extensions 158 | cpu::write_cr4(cpu::read_cr4() | CR4_VMXE); 159 | print!("Set CR4.VMXE!\n"); 160 | 161 | // Create a 4-KiB zeroed out physical page 162 | let vmxon_region = mm::alloc_page() 163 | .expect("Failed to allocate VMXON region"); 164 | 165 | // Create a 4-KiB zeroed out physical page to point to the vmxon page 166 | let vmxon_ptr_page = mm::alloc_page() 167 | .expect("Failed to allocate VMXON pointer region"); 168 | vmxon_ptr_page[..8].copy_from_slice( 169 | 
&(vmxon_region.as_mut_ptr() as usize).to_le_bytes()); 170 | 171 | print!("vmxon region allocated at {:p}\n\ 172 | vmxon pointer page allocated at {:p}\n", 173 | vmxon_region.as_mut_ptr(), 174 | vmxon_ptr_page.as_mut_ptr()); 175 | 176 | // Get the VMCS revision number 177 | let vmcs_revision_number = 178 | (cpu::rdmsr(IA32_VMX_BASIC) as u32) & 0x7fff_ffff; 179 | print!("VMCS revision number: {}\n", vmcs_revision_number); 180 | 181 | // Write in the VMCS revision number to the VMXON region 182 | vmxon_region[..4].copy_from_slice(&vmcs_revision_number.to_le_bytes()); 183 | 184 | // Execute VMXON to enable VMX root operation 185 | llvm_asm!("vmxon [$0]" :: "r"(vmxon_ptr_page.as_mut_ptr()) : 186 | "memory", "cc" : "volatile", "intel"); 187 | 188 | // Now we're in VMX root operation 189 | print!("VMXON complete\n"); 190 | 191 | // Create a new zeroed out VMCS region, and write in the revision 192 | // number 193 | let vmcs_region = mm::alloc_page() 194 | .expect("Failed to allocate VMCS region"); 195 | vmcs_region[..4].copy_from_slice(&vmcs_revision_number.to_le_bytes()); 196 | 197 | // Create a 4-KiB zeroed out physical page to point to the vmxon page 198 | let vmcs_ptr_page = mm::alloc_page() 199 | .expect("Failed to allocate VMCS pointer region"); 200 | vmcs_ptr_page[..8].copy_from_slice( 201 | &(vmcs_region.as_mut_ptr() as usize).to_le_bytes()); 202 | 203 | // Activate this given VMCS 204 | llvm_asm!("vmptrld [$0]" :: "r"(vmcs_ptr_page.as_mut_ptr()) : 205 | "memory", "cc" : "volatile", "intel"); 206 | 207 | const VM_INSTRUCTION_ERROR: u64 = 0x00004400; 208 | const EXIT_REASON: u64 = 0x00004402; 209 | const PIN_BASED_CONTROLS: u64 = 0x00004000; 210 | const PROC_BASED_CONTROLS: u64 = 0x00004002; 211 | const PROC2_BASED_CONTROLS: u64 = 0x0000401e; 212 | const EXIT_CONTROLS: u64 = 0x0000400c; 213 | const ENTRY_CONTROLS: u64 = 0x00004012; 214 | const EPT_POINTER: u64 = 0x0000201a; 215 | 216 | // Allocate the root level of the page table 217 | let ept_root = mm::alloc_page() 218 | .expect("Failed to allocate EPT root"); 219 | 220 | let pdpt = mm::alloc_page().expect("Failed to allocate EPT PDPT"); 221 | 222 | let pml4e_entry = (pdpt.as_mut_ptr() as usize) | 7; 223 | ept_root[..8].copy_from_slice(&pml4e_entry.to_le_bytes()); 224 | pdpt[..8].copy_from_slice(&((1 << 7) | 7usize).to_le_bytes()); 225 | 226 | cpu::vmwrite(EPT_POINTER, ept_root.as_mut_ptr() as u64 | (3 << 3)); 227 | 228 | const IA32_VMX_PINBASED_CTLS: u32 = 0x481; 229 | const IA32_VMX_PROCBASED_CTLS: u32 = 0x482; 230 | const IA32_VMX_PROCBASED_CTLS2: u32 = 0x48b; 231 | const IA32_VMX_EXIT_CTLS: u32 = 0x483; 232 | const IA32_VMX_ENTRY_CTLS: u32 = 0x484; 233 | 234 | const ACTIVATE_SECONDARY_CONTROLS: u64 = 1 << 31; 235 | 236 | print!("Getting control requirements\n"); 237 | 238 | let pinbased_ctrl0 = (cpu::rdmsr(IA32_VMX_PINBASED_CTLS) >> 0) & 0xffff_ffff; 239 | let pinbased_ctrl1 = (cpu::rdmsr(IA32_VMX_PINBASED_CTLS) >> 32) & 0xffff_ffff; 240 | let procbased_ctrl0 = (cpu::rdmsr(IA32_VMX_PROCBASED_CTLS) >> 0) & 0xffff_ffff; 241 | let procbased_ctrl1 = (cpu::rdmsr(IA32_VMX_PROCBASED_CTLS) >> 32) & 0xffff_ffff; 242 | let proc2based_ctrl0 = (cpu::rdmsr(IA32_VMX_PROCBASED_CTLS2) >> 0) & 0xffff_ffff; 243 | let proc2based_ctrl1 = (cpu::rdmsr(IA32_VMX_PROCBASED_CTLS2) >> 32) & 0xffff_ffff; 244 | let exit_ctrl0 = (cpu::rdmsr(IA32_VMX_EXIT_CTLS) >> 0) & 0xffff_ffff; 245 | let exit_ctrl1 = (cpu::rdmsr(IA32_VMX_EXIT_CTLS) >> 32) & 0xffff_ffff; 246 | let entry_ctrl0 = (cpu::rdmsr(IA32_VMX_ENTRY_CTLS) >> 0) & 0xffff_ffff; 247 | let entry_ctrl1 = 
(cpu::rdmsr(IA32_VMX_ENTRY_CTLS) >> 32) & 0xffff_ffff; 248 | 249 | let pinbased_minimum = pinbased_ctrl0 & pinbased_ctrl1; 250 | let procbased_minimum = procbased_ctrl0 & procbased_ctrl1; 251 | let proc2based_minimum = proc2based_ctrl0 & proc2based_ctrl1; 252 | let exit_minimum = exit_ctrl0 & exit_ctrl1; 253 | let entry_minimum = entry_ctrl0 & entry_ctrl1; 254 | 255 | const HOST_ADDRESS_SPACE: u64 = 1 << 9; 256 | const UNRESTRICTED_GUEST: u64 = 1 << 7; 257 | const EPT: u64 = 1 << 1; 258 | 259 | let procbased_minimum = procbased_minimum | ACTIVATE_SECONDARY_CONTROLS; 260 | let proc2based_minimum = proc2based_minimum | UNRESTRICTED_GUEST | EPT; 261 | 262 | let exit_minimum = exit_minimum | HOST_ADDRESS_SPACE; 263 | 264 | cpu::vmwrite(PIN_BASED_CONTROLS, pinbased_minimum); 265 | cpu::vmwrite(PROC_BASED_CONTROLS, procbased_minimum); 266 | cpu::vmwrite(PROC2_BASED_CONTROLS, proc2based_minimum); 267 | cpu::vmwrite(EXIT_CONTROLS, exit_minimum); 268 | cpu::vmwrite(ENTRY_CONTROLS, entry_minimum); 269 | 270 | print!( 271 | "Pin Controls: {:#010x}\n\ 272 | Proc Controls: {:#010x}\n\ 273 | Proc2 Controls: {:#010x}\n\ 274 | Exit Controls: {:#010x}\n\ 275 | Entry Controls: {:#010x}\n", 276 | pinbased_minimum, procbased_minimum, proc2based_minimum, 277 | exit_minimum, entry_minimum); 278 | 279 | const GUEST_ES: u64 = 0x800; 280 | const GUEST_CS: u64 = 0x802; 281 | const GUEST_SS: u64 = 0x804; 282 | const GUEST_DS: u64 = 0x806; 283 | const GUEST_FS: u64 = 0x808; 284 | const GUEST_GS: u64 = 0x80a; 285 | const GUEST_LDTR: u64 = 0x80c; 286 | const GUEST_TR: u64 = 0x80e; 287 | 288 | cpu::vmwrite(GUEST_ES, 0); 289 | cpu::vmwrite(GUEST_CS, 0); 290 | cpu::vmwrite(GUEST_SS, 0); 291 | cpu::vmwrite(GUEST_DS, 0); 292 | cpu::vmwrite(GUEST_FS, 0); 293 | cpu::vmwrite(GUEST_GS, 0); 294 | cpu::vmwrite(GUEST_LDTR, 0); 295 | cpu::vmwrite(GUEST_TR, 0); 296 | 297 | const GUEST_IA32_DEBUGCTL: u64 = 0x2802; 298 | const GUEST_PAT: u64 = 0x2804; 299 | const GUEST_EFER: u64 = 0x2806; 300 | 301 | cpu::vmwrite(GUEST_IA32_DEBUGCTL, 0); 302 | cpu::vmwrite(GUEST_PAT, 0x0007_0406_0007_0406); 303 | cpu::vmwrite(GUEST_EFER, 0); 304 | 305 | const GUEST_ES_LIMIT: u64 = 0x4800; 306 | const GUEST_CS_LIMIT: u64 = 0x4802; 307 | const GUEST_SS_LIMIT: u64 = 0x4804; 308 | const GUEST_DS_LIMIT: u64 = 0x4806; 309 | const GUEST_FS_LIMIT: u64 = 0x4808; 310 | const GUEST_GS_LIMIT: u64 = 0x480a; 311 | const GUEST_LDTR_LIMIT: u64 = 0x480c; 312 | const GUEST_TR_LIMIT: u64 = 0x480e; 313 | const GUEST_GDTR_LIMIT: u64 = 0x4810; 314 | const GUEST_IDTR_LIMIT: u64 = 0x4812; 315 | 316 | cpu::vmwrite(GUEST_ES_LIMIT, 0xffff); 317 | cpu::vmwrite(GUEST_CS_LIMIT, 0xffff); 318 | cpu::vmwrite(GUEST_SS_LIMIT, 0xffff); 319 | cpu::vmwrite(GUEST_DS_LIMIT, 0xffff); 320 | cpu::vmwrite(GUEST_FS_LIMIT, 0xffff); 321 | cpu::vmwrite(GUEST_GS_LIMIT, 0xffff); 322 | cpu::vmwrite(GUEST_LDTR_LIMIT, 0xffff); 323 | cpu::vmwrite(GUEST_TR_LIMIT, 0xffff); 324 | cpu::vmwrite(GUEST_GDTR_LIMIT, 0xffff); 325 | cpu::vmwrite(GUEST_IDTR_LIMIT, 0xffff); 326 | 327 | const GUEST_ES_ACCESS_RIGHTS: u64 = 0x4814; 328 | const GUEST_CS_ACCESS_RIGHTS: u64 = 0x4816; 329 | const GUEST_SS_ACCESS_RIGHTS: u64 = 0x4818; 330 | const GUEST_DS_ACCESS_RIGHTS: u64 = 0x481a; 331 | const GUEST_FS_ACCESS_RIGHTS: u64 = 0x481c; 332 | const GUEST_GS_ACCESS_RIGHTS: u64 = 0x481e; 333 | const GUEST_LDTR_ACCESS_RIGHTS: u64 = 0x4820; 334 | const GUEST_TR_ACCESS_RIGHTS: u64 = 0x4822; 335 | 336 | cpu::vmwrite(GUEST_ES_ACCESS_RIGHTS, 0x93); 337 | cpu::vmwrite(GUEST_CS_ACCESS_RIGHTS, 0x93); 338 | 
cpu::vmwrite(GUEST_SS_ACCESS_RIGHTS, 0x93); 339 | cpu::vmwrite(GUEST_DS_ACCESS_RIGHTS, 0x93); 340 | cpu::vmwrite(GUEST_FS_ACCESS_RIGHTS, 0x93); 341 | cpu::vmwrite(GUEST_GS_ACCESS_RIGHTS, 0x93); 342 | cpu::vmwrite(GUEST_LDTR_ACCESS_RIGHTS, 0x82); 343 | cpu::vmwrite(GUEST_TR_ACCESS_RIGHTS, 0x83); 344 | 345 | const VMCS_64BIT_GUEST_LINK_POINTER: u64 = 0x00002800; 346 | cpu::vmwrite(VMCS_64BIT_GUEST_LINK_POINTER, !0); 347 | 348 | let minimum_cr0 = 349 | cpu::rdmsr(IA32_VMX_CR0_FIXED0) & cpu::rdmsr(IA32_VMX_CR0_FIXED1); 350 | 351 | // Allow use of CR0.PG=0 (paging disabled) and CR0.PE=0 352 | // (protected mode disbled) 353 | let minimum_cr0 = minimum_cr0 & !0x8000_0001; 354 | 355 | let minimum_cr4 = cpu::rdmsr(IA32_VMX_CR4_FIXED0) 356 | & cpu::rdmsr(IA32_VMX_CR4_FIXED1); 357 | 358 | const GUEST_CR0: u64 = 0x6800; 359 | const GUEST_CR3: u64 = 0x6802; 360 | const GUEST_CR4: u64 = 0x6804; 361 | const GUEST_ES_BASE: u64 = 0x6806; 362 | const GUEST_CS_BASE: u64 = 0x6808; 363 | const GUEST_SS_BASE: u64 = 0x680a; 364 | const GUEST_DS_BASE: u64 = 0x680c; 365 | const GUEST_FS_BASE: u64 = 0x680e; 366 | const GUEST_GS_BASE: u64 = 0x6810; 367 | const GUEST_LDTR_BASE: u64 = 0x6812; 368 | const GUEST_TR_BASE: u64 = 0x6814; 369 | const GUEST_GDTR_BASE: u64 = 0x6816; 370 | const GUEST_IDTR_BASE: u64 = 0x6818; 371 | const GUEST_DR7: u64 = 0x681a; 372 | const GUEST_RSP: u64 = 0x681c; 373 | const GUEST_RIP: u64 = 0x681e; 374 | const GUEST_RFLAGS: u64 = 0x6820; 375 | 376 | print!("Using guest cr0 {:#x}\n", minimum_cr0); 377 | 378 | cpu::vmwrite(GUEST_CR0, minimum_cr0); 379 | cpu::vmwrite(GUEST_CR3, 0); 380 | cpu::vmwrite(GUEST_CR4, minimum_cr4); 381 | cpu::vmwrite(GUEST_ES_BASE, 0); 382 | cpu::vmwrite(GUEST_CS_BASE, 0); 383 | cpu::vmwrite(GUEST_SS_BASE, 0); 384 | cpu::vmwrite(GUEST_DS_BASE, 0); 385 | cpu::vmwrite(GUEST_FS_BASE, 0); 386 | cpu::vmwrite(GUEST_GS_BASE, 0); 387 | cpu::vmwrite(GUEST_LDTR_BASE, 0); 388 | cpu::vmwrite(GUEST_TR_BASE, 0); 389 | cpu::vmwrite(GUEST_GDTR_BASE, 0); 390 | cpu::vmwrite(GUEST_IDTR_BASE, 0); 391 | cpu::vmwrite(GUEST_DR7, 0x0000_0400); 392 | cpu::vmwrite(GUEST_RSP, 0x7000); 393 | cpu::vmwrite(GUEST_RIP, 0x8100); 394 | cpu::vmwrite(GUEST_RFLAGS, 2); 395 | 396 | const HOST_CR0: u64 = 0x6c00; 397 | const HOST_CR3: u64 = 0x6c02; 398 | const HOST_CR4: u64 = 0x6c04; 399 | 400 | const HOST_ES: u64 = 0xc00; 401 | const HOST_CS: u64 = 0xc02; 402 | const HOST_SS: u64 = 0xc04; 403 | const HOST_DS: u64 = 0xc06; 404 | const HOST_FS: u64 = 0xc08; 405 | const HOST_GS: u64 = 0xc0a; 406 | const HOST_TR: u64 = 0xc0c; 407 | 408 | const HOST_FS_BASE: u64 = 0x6c06; 409 | const HOST_GS_BASE: u64 = 0x6c08; 410 | const HOST_TR_BASE: u64 = 0x6c0a; 411 | const HOST_GDTR_BASE: u64 = 0x6c0c; 412 | const HOST_IDTR_BASE: u64 = 0x6c0e; 413 | const HOST_IA32_SYSENTER_ESP: u64 = 0x6c10; 414 | const HOST_IA32_SYSENTER_EIP: u64 = 0x6c12; 415 | const HOST_RSP: u64 = 0x6c14; 416 | const HOST_RIP: u64 = 0x6c16; 417 | 418 | cpu::vmwrite(HOST_CR0, cpu::read_cr0()); 419 | cpu::vmwrite(HOST_CR3, cpu::read_cr3()); 420 | cpu::vmwrite(HOST_CR4, cpu::read_cr4()); 421 | 422 | cpu::vmwrite(HOST_ES, cpu::read_es() as u64); 423 | cpu::vmwrite(HOST_CS, cpu::read_cs() as u64); 424 | cpu::vmwrite(HOST_SS, cpu::read_ss() as u64); 425 | cpu::vmwrite(HOST_DS, cpu::read_ds() as u64); 426 | cpu::vmwrite(HOST_FS, cpu::read_fs() as u64); 427 | cpu::vmwrite(HOST_GS, cpu::read_gs() as u64); 428 | cpu::vmwrite(HOST_TR, cpu::read_ds() as u64); 429 | 430 | cpu::vmwrite(HOST_FS_BASE, 0); 431 | cpu::vmwrite(HOST_GS_BASE, 0); 432 | 
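// (Illustrative aside, not part of the original source: the HOST_* fields form the
// VMCS host-state area, which the CPU loads on every VM exit. Most fields simply
// mirror the kernel's current control registers and selectors; HOST_RSP and
// HOST_RIP are filled in by the inline assembly immediately before VMLAUNCH below,
// so a VM exit resumes right after the launch site.)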
cpu::vmwrite(HOST_TR_BASE, 0); 433 | cpu::vmwrite(HOST_GDTR_BASE, 0); 434 | cpu::vmwrite(HOST_IDTR_BASE, 0); 435 | cpu::vmwrite(HOST_IA32_SYSENTER_ESP, 0); 436 | cpu::vmwrite(HOST_IA32_SYSENTER_EIP, 0); 437 | 438 | print!("About to launch VM!\n"); 439 | 440 | // Launch the VM 441 | llvm_asm!(r#" 442 | // Save HOST_RSP 443 | mov rax, 0x6c14 444 | vmwrite rax, rsp 445 | 446 | // Save HOST_RIP 447 | mov rax, 0x6c16 448 | lea rbx, [rip + 1f] 449 | vmwrite rax, rbx 450 | 451 | vmlaunch 452 | 453 | 1: 454 | 455 | "# ::: "rax", "rbx", "memory", "cc" : "volatile", "intel"); 456 | 457 | // Read the abort indicator from VMLAUNCH 458 | let abort_indicator = 459 | u32::from_le_bytes(vmcs_region[4..8].try_into().unwrap()); 460 | 461 | print!("Abort indicator is {:#x}\n", abort_indicator); 462 | 463 | print!("VM exit: {:#x}\n", cpu::vmread(EXIT_REASON)); 464 | print!("VM instruction error: {:#x}\n", cpu::vmread(VM_INSTRUCTION_ERROR)); 465 | } 466 | 467 | loop {} 468 | } 469 | -------------------------------------------------------------------------------- /kernel/src/mm.rs: -------------------------------------------------------------------------------- 1 | use alloc::alloc::{Layout, GlobalAlloc}; 2 | use core::sync::atomic::{AtomicUsize, Ordering}; 3 | use crate::acpi; 4 | use mmu::PhysMem; 5 | 6 | pub struct GlobalAllocator; 7 | 8 | /// Physical memory implementation 9 | /// 10 | /// This is used during page table operations 11 | pub struct Pmem; 12 | 13 | static mut PMEM: Pmem = Pmem; 14 | 15 | /// Allocate a new zeroed out page and return it 16 | pub fn alloc_page() -> Option<&'static mut [u8; 4096]> { 17 | unsafe { 18 | if let Some(page) = PMEM.alloc_page() { 19 | let page = &mut *(page as *mut [u8; 4096]); 20 | *page = [0u8; 4096]; 21 | Some(page) 22 | } else { 23 | None 24 | } 25 | } 26 | } 27 | 28 | impl mmu::PhysMem for Pmem { 29 | /// Allocate a page 30 | fn alloc_page(&mut self) -> Option<*mut u8> { 31 | unsafe { 32 | // Get current node id 33 | let node_id = acpi::get_node_id(cpu::get_apic_id()); 34 | 35 | // Get an allocation on the current node 36 | let alloc = acpi::node_alloc_page(node_id.unwrap_or(0)); 37 | if alloc.is_null() { 38 | None 39 | } else { 40 | Some(alloc as *mut u8) 41 | } 42 | } 43 | } 44 | 45 | /// Read a 64-bit value at the physical address specified 46 | fn read_phys(&mut self, addr: *mut u64) -> Result { 47 | unsafe { Ok(core::ptr::read(addr)) } 48 | } 49 | 50 | /// Write a 64-bit value to the physical address specified 51 | fn write_phys(&mut self, addr: *mut u64, val: u64) -> 52 | Result<(), &'static str> { 53 | unsafe { Ok(core::ptr::write(addr, val)) } 54 | } 55 | 56 | /// This is used to let the MMU know if we reserve memory outside of 57 | /// the page tables. Since we do not do this at all we always return true 58 | /// allowing any address not in use in the page tables to be used for 59 | /// ASLR. 60 | fn probe_vaddr(&mut self, _addr: usize, _length: usize) -> bool { 61 | true 62 | } 63 | } 64 | 65 | static PAGE_TABLE_LOCK: AtomicUsize = AtomicUsize::new(0); 66 | static PAGE_TABLE_LOCK_REL: AtomicUsize = AtomicUsize::new(0); 67 | 68 | unsafe impl GlobalAlloc for GlobalAllocator { 69 | /// Global allocator. Grabs free memory from E820 and removes it from 70 | /// the table. 
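    /// (Additional note, not part of the original source.) Requests are rounded up
    /// to a multiple of 4 KiB and satisfied by mapping fresh physical pages at a
    /// randomly chosen virtual address in the current page table. A simple ticket
    /// lock built from the `PAGE_TABLE_LOCK`/`PAGE_TABLE_LOCK_REL` counters
    /// serializes page table updates across cores.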
71 | unsafe fn alloc(&self, layout: Layout) -> *mut u8 { 72 | let size = layout.size().checked_add(0xfff).unwrap() & !0xfff; 73 | assert!(size > 0, "Zero size allocations not allowed"); 74 | 75 | let ticket = PAGE_TABLE_LOCK.fetch_add(1, Ordering::SeqCst); 76 | while ticket != PAGE_TABLE_LOCK_REL.load(Ordering::SeqCst) {} 77 | 78 | // Get access to the current page table 79 | let mut page_table = mmu::PageTable::from_existing( 80 | cpu::read_cr3() as *mut _, &mut PMEM); 81 | 82 | // Pick a random 64-bit address to return as the allocation 83 | let alc_base = page_table.rand_addr(size as u64).unwrap(); 84 | page_table.add_memory(alc_base, size as u64).unwrap(); 85 | 86 | PAGE_TABLE_LOCK_REL.fetch_add(1, Ordering::SeqCst); 87 | 88 | alc_base as *mut u8 89 | } 90 | 91 | /// No free implementation currently 92 | unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) { 93 | let size = layout.size().checked_add(0xfff).unwrap() & !0xfff; 94 | assert!(size > 0, "Zero size allocations not allowed"); 95 | let size = size as u64; 96 | 97 | let ticket = PAGE_TABLE_LOCK.fetch_add(1, Ordering::SeqCst); 98 | while ticket != PAGE_TABLE_LOCK_REL.load(Ordering::SeqCst) {} 99 | 100 | // Get access to the current page table 101 | let mut page_table = mmu::PageTable::from_existing( 102 | cpu::read_cr3() as *mut _, &mut PMEM); 103 | 104 | // Go through each page in the allocation and unmap it 105 | for ii in (0..size).step_by(4096) { 106 | let addr = ptr as u64 + ii; 107 | assert!((addr & 0xfff) == 0, "Non-page-aligned allocation"); 108 | 109 | // Go through all physical pages that were removed 110 | for ppage in &page_table.unmap_page(addr).expect("Failed to unmap"){ 111 | if let Some(ppage) = ppage { 112 | // Get current node id 113 | let node_id = 114 | acpi::get_node_id(cpu::get_apic_id()).unwrap_or(0); 115 | acpi::node_free_page(node_id, (*ppage) as *mut u8); 116 | } 117 | } 118 | } 119 | 120 | PAGE_TABLE_LOCK_REL.fetch_add(1, Ordering::SeqCst); 121 | } 122 | } 123 | -------------------------------------------------------------------------------- /kernel/src/panic.rs: -------------------------------------------------------------------------------- 1 | use cpu; 2 | use core::panic::PanicInfo; 3 | 4 | /// Panic implementation 5 | #[panic_handler] 6 | #[no_mangle] 7 | pub fn panic(info: &PanicInfo) -> ! { 8 | if let Some(location) = info.location() { 9 | print!("!!! PANIC !!! {}:{} ", 10 | location.file(), location.line(),); 11 | } else { 12 | print!("!!! PANIC !!! 
Panic with no location info "); 13 | } 14 | 15 | if let Some(&args) = info.message() { 16 | use core::fmt::write; 17 | let _ = write(&mut crate::Writer, args); 18 | print!("\n"); 19 | } else { 20 | print!("No arguments\n"); 21 | } 22 | 23 | cpu::halt(); 24 | } 25 | -------------------------------------------------------------------------------- /shared/cpu/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "cpu" 3 | version = "0.1.0" 4 | authors = ["gamozo "] 5 | 6 | [dependencies] 7 | rangeset = { path = "../rangeset" } 8 | 9 | -------------------------------------------------------------------------------- /shared/cpu/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | #![feature(llvm_asm)] 3 | 4 | extern crate rangeset; 5 | 6 | pub const MAX_CPUS: usize = 256; 7 | 8 | #[derive(Clone, Copy)] 9 | #[repr(C)] 10 | pub struct BootloaderStruct { 11 | /// If this is the BSP then this is a rangeset representing the free 12 | /// physical memory on the system. 13 | pub phys_memory: rangeset::RangeSet, 14 | 15 | /// Address to jump to perform a soft reboot 16 | pub soft_reboot_entry: u64, 17 | 18 | /// Pointer to KernelBuffer 19 | pub kernel_buffer: u64, 20 | } 21 | 22 | #[repr(C)] 23 | pub struct KernelBuffer { 24 | pub kernel_buffer: u64, 25 | pub kernel_buffer_size: u64, 26 | pub kernel_buffer_max_size: u64, 27 | } 28 | 29 | /// Output the byte `val` to `port` 30 | pub unsafe fn out8(port: u16, val: u8) 31 | { 32 | llvm_asm!("out dx, al" :: "{al}"(val), "{dx}"(port) :: "intel", "volatile"); 33 | } 34 | 35 | /// Input a byte from `port` 36 | pub unsafe fn in8(port: u16) -> u8 37 | { 38 | let ret: u8; 39 | llvm_asm!("in al, dx" : "={al}"(ret) : "{dx}"(port) :: "intel", "volatile"); 40 | ret 41 | } 42 | 43 | /// Output the dword `val` to `port` 44 | pub unsafe fn out32(port: u16, val: u32) 45 | { 46 | llvm_asm!("out dx, eax" :: "{eax}"(val), "{dx}"(port) :: "intel", "volatile"); 47 | } 48 | 49 | /// Input a dword from `port` 50 | pub unsafe fn in32(port: u16) -> u32 51 | { 52 | let ret: u32; 53 | llvm_asm!("in eax, dx" : "={eax}"(ret) : "{dx}"(port) :: "intel", "volatile"); 54 | ret 55 | } 56 | 57 | /// Disable interrupts and halt forever 58 | pub fn halt() -> ! 59 | { 60 | loop { 61 | unsafe { 62 | llvm_asm!("cli ; hlt" :::: "volatile"); 63 | } 64 | } 65 | } 66 | 67 | /// Performs a rdmsr instruction on the msr specified by `msr`. Returns 68 | /// 64-bit MSR contents. 
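///
/// (Additional note, not part of the original source.) The CPU returns the value
/// in EDX:EAX; it is recombined below as `(high << 32) | low`.
///
/// Illustrative use, mirroring the kernel's VMX setup (IA32_VMX_BASIC is MSR 0x480):
///
/// ```ignore
/// let vmx_basic = unsafe { cpu::rdmsr(0x480) };
/// let vmcs_revision = (vmx_basic as u32) & 0x7fff_ffff;
/// ```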
69 | #[inline(always)] 70 | pub unsafe fn rdmsr(msr: u32) -> u64 71 | { 72 | let high: u32; 73 | let low: u32; 74 | 75 | llvm_asm!("rdmsr" : 76 | "={edx}"(high), "={eax}"(low) : "{ecx}"(msr) : 77 | "memory" : 78 | "volatile", "intel"); 79 | 80 | return ((high as u64) << 32) | (low as u64); 81 | } 82 | 83 | /// Performs a wrmsr instruction on the msr specified by `msr`, writes `val` 84 | #[inline(always)] 85 | pub unsafe fn wrmsr(msr: u32, val: u64) 86 | { 87 | llvm_asm!("wrmsr" :: 88 | "{ecx}"(msr), "{eax}"(val as u32), "{edx}"((val >> 32) as u32) : 89 | "memory" : 90 | "volatile", "intel"); 91 | } 92 | 93 | /// Reads the contents of DR7 94 | #[inline(always)] 95 | pub unsafe fn read_dr7() -> u64 96 | { 97 | let dr7; 98 | llvm_asm!("mov $0, dr7" : "=r"(dr7) ::: "intel", "volatile"); 99 | dr7 100 | } 101 | 102 | #[cfg(target_pointer_width = "64")] 103 | #[inline(always)] 104 | /// Reads the contents of CR8 105 | pub unsafe fn read_cr8() -> u64 106 | { 107 | let cr8; 108 | llvm_asm!("mov $0, cr8" : "=r"(cr8) ::: "intel", "volatile"); 109 | cr8 110 | } 111 | 112 | /// Reads the contents of CR3 113 | #[inline(always)] 114 | pub unsafe fn read_cr3() -> u64 115 | { 116 | let cr3: u64; 117 | llvm_asm!("mov $0, cr3" : "=r"(cr3) ::: "intel", "volatile"); 118 | cr3 & 0xffff_ffff_ffff_f000 119 | } 120 | 121 | /// Reads the contents of CR2 122 | #[inline(always)] 123 | pub unsafe fn read_cr2() -> u64 124 | { 125 | let cr2; 126 | llvm_asm!("mov $0, cr2" : "=r"(cr2) ::: "intel", "volatile"); 127 | cr2 128 | } 129 | 130 | /// Writes to dr0 131 | #[inline(always)] 132 | pub unsafe fn write_dr0(val: u64) 133 | { 134 | llvm_asm!("mov dr0, $0" :: "r"(val) :: "intel", "volatile"); 135 | } 136 | 137 | /// Writes to dr1 138 | #[inline(always)] 139 | pub unsafe fn write_dr1(val: u64) 140 | { 141 | llvm_asm!("mov dr1, $0" :: "r"(val) :: "intel", "volatile"); 142 | } 143 | 144 | /// Writes to dr2 145 | #[inline(always)] 146 | pub unsafe fn write_dr2(val: u64) 147 | { 148 | llvm_asm!("mov dr2, $0" :: "r"(val) :: "intel", "volatile"); 149 | } 150 | 151 | /// Writes to dr3 152 | #[inline(always)] 153 | pub unsafe fn write_dr3(val: u64) 154 | { 155 | llvm_asm!("mov dr3, $0" :: "r"(val) :: "intel", "volatile"); 156 | } 157 | 158 | /// Writes to CR2 159 | #[inline(always)] 160 | pub unsafe fn write_cr2(val: u64) 161 | { 162 | llvm_asm!("mov cr2, $0" :: "r"(val) :: "intel", "volatile"); 163 | } 164 | 165 | /// Writes to CR3 166 | #[inline(always)] 167 | pub unsafe fn write_cr3(val: u64) 168 | { 169 | llvm_asm!("mov cr3, $0" :: "r"(val) : "memory" : "intel", "volatile"); 170 | } 171 | 172 | /// Reads the contents of CR4 173 | #[inline(always)] 174 | pub unsafe fn read_cr4() -> u64 175 | { 176 | let cr4; 177 | llvm_asm!("mov $0, cr4" : "=r"(cr4) ::: "intel", "volatile"); 178 | cr4 179 | } 180 | 181 | /// Writes to CR4 182 | #[inline(always)] 183 | pub unsafe fn write_cr4(val: u64) 184 | { 185 | llvm_asm!("mov cr4, $0" :: "r"(val) :: "intel", "volatile"); 186 | } 187 | 188 | /// Reads the contents of CR0 189 | #[inline(always)] 190 | pub unsafe fn read_cr0() -> u64 191 | { 192 | let cr0; 193 | llvm_asm!("mov $0, cr0" : "=r"(cr0) ::: "intel", "volatile"); 194 | cr0 195 | } 196 | 197 | /// Writes to CR0 198 | #[inline(always)] 199 | pub unsafe fn write_cr0(val: u64) 200 | { 201 | llvm_asm!("mov cr0, $0" :: "r"(val) :: "intel", "volatile"); 202 | } 203 | 204 | /// Load the interrupt table specified by vaddr 205 | #[inline(always)] 206 | pub unsafe fn lidt(vaddr: *const u8) 207 | { 208 | llvm_asm!("lidt [$0]" :: 209 | 
"r"(vaddr) : 210 | "memory" : 211 | "volatile", "intel"); 212 | } 213 | 214 | /// Load the GDT specified by vaddr 215 | #[inline(always)] 216 | pub unsafe fn lgdt(vaddr: *const u8) 217 | { 218 | llvm_asm!("lgdt [$0]" :: 219 | "r"(vaddr) : 220 | "memory" : 221 | "volatile", "intel"); 222 | } 223 | 224 | /// Load the task register with the segment specified by tss_seg. 225 | #[inline(always)] 226 | pub unsafe fn ltr(tss_seg: u16) 227 | { 228 | llvm_asm!("ltr cx" :: "{cx}"(tss_seg) :: "volatile", "intel"); 229 | } 230 | 231 | /// Write back all memory and invalidate caches 232 | #[inline(always)] 233 | pub fn wbinvd() { 234 | unsafe { 235 | llvm_asm!("wbinvd" ::: "memory" : "volatile", "intel"); 236 | } 237 | } 238 | 239 | /// Memory fence for both reads and writes 240 | #[inline(always)] 241 | pub fn mfence() { 242 | unsafe { 243 | llvm_asm!("mfence" ::: "memory" : "volatile", "intel"); 244 | } 245 | } 246 | 247 | /// Flushes cache line associted with the byte pointed to by `ptr` 248 | #[inline(always)] 249 | pub unsafe fn clflush(ptr: *const u8) { 250 | llvm_asm!("clflush [$0]" :: "r"(ptr as usize) : "memory" : "volatile", "intel"); 251 | } 252 | 253 | /// Instruction fence (via write cr2) which serializes execution 254 | #[inline] 255 | pub fn ifence() { 256 | unsafe { 257 | write_cr2(0); 258 | } 259 | } 260 | 261 | /// Read a random number and return it 262 | #[inline(always)] 263 | pub fn rdrand() -> u64 { 264 | let val: u64; 265 | unsafe { 266 | llvm_asm!("rdrand $0" : "=r"(val) ::: "volatile", "intel"); 267 | } 268 | val 269 | } 270 | 271 | /// Performs a rdtsc instruction, returns 64-bit TSC value 272 | #[inline(always)] 273 | pub fn rdtsc() -> u64 274 | { 275 | let high: u32; 276 | let low: u32; 277 | 278 | unsafe { 279 | llvm_asm!("rdtsc" : 280 | "={edx}"(high), "={eax}"(low) ::: 281 | "volatile", "intel"); 282 | } 283 | 284 | return ((high as u64) << 32) | (low as u64); 285 | } 286 | 287 | /// Performs a rdtscp instruction, returns 64-bit TSC value 288 | #[inline(always)] 289 | pub fn rdtscp() -> u64 290 | { 291 | let high: u32; 292 | let low: u32; 293 | 294 | unsafe { 295 | llvm_asm!("rdtscp" : 296 | "={edx}"(high), "={eax}"(low) :: "ecx" : 297 | "volatile", "intel"); 298 | } 299 | 300 | return ((high as u64) << 32) | (low as u64); 301 | } 302 | 303 | /// Performs cpuid passing in eax and ecx as parameters. Returns a tuple 304 | /// containing the resulting (eax, ebx, ecx, edx) 305 | #[inline(always)] 306 | pub unsafe fn cpuid(eax: u32, ecx: u32) -> (u32, u32, u32, u32) 307 | { 308 | let (oeax, oebx, oecx, oedx); 309 | 310 | llvm_asm!("cpuid" : 311 | "={eax}"(oeax), "={ebx}"(oebx), "={ecx}"(oecx), "={edx}"(oedx) : 312 | "{eax}"(eax), "{ecx}"(ecx) :: "volatile", "intel"); 313 | 314 | (oeax, oebx, oecx, oedx) 315 | } 316 | 317 | /// Returns true if the current CPU is the BSP, otherwise returns false. 318 | pub fn is_bsp() -> bool 319 | { 320 | (unsafe { rdmsr(0x1b) } & (1 << 8)) != 0 321 | } 322 | 323 | /// Decrement the interrupt level. If the resulting interrupt level is 0, 324 | /// enable interrupts. 325 | #[inline(always)] 326 | pub unsafe fn interrupts_enable() 327 | { 328 | llvm_asm!("sti" :::: "volatile"); 329 | } 330 | 331 | /// Disable interrupts and then increment the interrupt level. 
332 | #[inline(always)] 333 | pub unsafe fn interrupts_disable() 334 | { 335 | llvm_asm!("cli" :::: "volatile"); 336 | } 337 | 338 | #[derive(Default, Debug)] 339 | pub struct CPUFeatures { 340 | pub max_cpuid: u32, 341 | pub max_extended_cpuid: u32, 342 | 343 | pub fpu: bool, 344 | pub vme: bool, 345 | pub de: bool, 346 | pub pse: bool, 347 | pub tsc: bool, 348 | pub mmx: bool, 349 | pub fxsr: bool, 350 | pub sse: bool, 351 | pub sse2: bool, 352 | pub htt: bool, 353 | pub sse3: bool, 354 | pub ssse3: bool, 355 | pub sse4_1: bool, 356 | pub sse4_2: bool, 357 | pub xsave: bool, 358 | pub avx: bool, 359 | pub apic: bool, 360 | 361 | pub vmx: bool, 362 | 363 | pub lahf: bool, 364 | pub lzcnt: bool, 365 | pub prefetchw: bool, 366 | 367 | pub syscall: bool, 368 | pub xd: bool, 369 | pub gbyte_pages: bool, 370 | pub rdtscp: bool, 371 | pub bits64: bool, 372 | 373 | pub avx512f: bool, 374 | } 375 | 376 | /// Set the xcr0 register to a given value 377 | pub unsafe fn write_xcr0(val: u64) 378 | { 379 | llvm_asm!("xsetbv" :: "{ecx}"(0), "{eax}"(val as u32), 380 | "{edx}"((val >> 32) as u32) :: "intel", "volatile"); 381 | } 382 | 383 | /// Get set of CPU features 384 | pub fn get_cpu_features() -> CPUFeatures 385 | { 386 | let mut features: CPUFeatures = Default::default(); 387 | 388 | unsafe { 389 | features.max_cpuid = cpuid(0, 0).0; 390 | features.max_extended_cpuid = cpuid(0x80000000, 0).0; 391 | 392 | if features.max_cpuid >= 1 { 393 | let cpuid_1 = cpuid(1, 0); 394 | features.fpu = ((cpuid_1.3 >> 0) & 1) == 1; 395 | features.vme = ((cpuid_1.3 >> 1) & 1) == 1; 396 | features.de = ((cpuid_1.3 >> 2) & 1) == 1; 397 | features.pse = ((cpuid_1.3 >> 3) & 1) == 1; 398 | features.tsc = ((cpuid_1.3 >> 4) & 1) == 1; 399 | features.apic = ((cpuid_1.3 >> 9) & 1) == 1; 400 | features.mmx = ((cpuid_1.3 >> 23) & 1) == 1; 401 | features.fxsr = ((cpuid_1.3 >> 24) & 1) == 1; 402 | features.sse = ((cpuid_1.3 >> 25) & 1) == 1; 403 | features.sse2 = ((cpuid_1.3 >> 26) & 1) == 1; 404 | features.htt = ((cpuid_1.3 >> 28) & 1) == 1; 405 | 406 | features.sse3 = ((cpuid_1.2 >> 0) & 1) == 1; 407 | features.vmx = ((cpuid_1.2 >> 5) & 1) == 1; 408 | features.ssse3 = ((cpuid_1.2 >> 9) & 1) == 1; 409 | features.sse4_1 = ((cpuid_1.2 >> 19) & 1) == 1; 410 | features.sse4_2 = ((cpuid_1.2 >> 20) & 1) == 1; 411 | features.xsave = ((cpuid_1.2 >> 26) & 1) == 1; 412 | features.avx = ((cpuid_1.2 >> 28) & 1) == 1; 413 | } 414 | 415 | if features.max_cpuid >= 7 { 416 | let cpuid_7 = cpuid(7, 0); 417 | features.avx512f = ((cpuid_7.1 >> 16) & 1) == 1; 418 | } 419 | 420 | if features.max_extended_cpuid >= 0x80000001 { 421 | let cpuid_e1 = cpuid(0x80000001, 0); 422 | 423 | features.lahf = ((cpuid_e1.2 >> 0) & 1) == 1; 424 | features.lzcnt = ((cpuid_e1.2 >> 5) & 1) == 1; 425 | features.prefetchw = ((cpuid_e1.2 >> 8) & 1) == 1; 426 | 427 | features.syscall = ((cpuid_e1.3 >> 11) & 1) == 1; 428 | features.xd = ((cpuid_e1.3 >> 20) & 1) == 1; 429 | features.gbyte_pages = ((cpuid_e1.3 >> 26) & 1) == 1; 430 | features.rdtscp = ((cpuid_e1.3 >> 27) & 1) == 1; 431 | features.bits64 = ((cpuid_e1.3 >> 29) & 1) == 1; 432 | } 433 | } 434 | 435 | features 436 | } 437 | 438 | /// Get a random 64-bit value seeded with the TSC. This is crude but it works 439 | /// early in the boot process. PXE network delays should make this have a 440 | /// reasonable amount of entropy for boot-to-boot differences. But of course 441 | /// should not be used for crypto. 
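///
/// (Additional note, not part of the original source.) The loop below is an
/// xorshift-style scrambler run over the raw TSC value: it spreads timing jitter
/// across all 64 bits but adds no entropy beyond the TSC read itself.
///
/// Illustrative use (hypothetical, not from the original code):
///
/// ```ignore
/// // Pick a page-aligned boot-time randomization value
/// let rand_page = cpu::rdtsc_rand() & !0xfff;
/// ```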
442 | pub fn rdtsc_rand() -> u64 443 | { 444 | let mut init = rdtsc(); 445 | 446 | /* 64 rounds of xorshift */ 447 | for _ in 0..64 { 448 | init ^= init << 13; 449 | init ^= init >> 17; 450 | init ^= init << 43; 451 | } 452 | 453 | init 454 | } 455 | 456 | /// Canonicalize a 64-bit address such that bits [63:48] are sign extended 457 | /// from bit 47 458 | pub fn canonicalize_address(addr: u64) -> u64 459 | { 460 | let mut addr: i64 = addr as i64; 461 | 462 | /* Canon addresses are 48-bits sign extended. Do a shift left by 16 bits 463 | * to mask off the top bits, then do an arithmetic shift right (note i64 464 | * type) to sign extend the 47th bit. 465 | */ 466 | addr <<= 64 - 48; 467 | addr >>= 64 - 48; 468 | 469 | addr as u64 470 | } 471 | 472 | pub unsafe fn apic_read(offset: isize) -> u32 473 | { 474 | assert!((offset & 0xf) == 0, "APIC offset not 4-byte aligned"); 475 | assert!(offset >= 0 && offset < 4096, "APIC offset out of bounds"); 476 | 477 | if !use_x2apic() { 478 | let apic = 0xfee00000 as *mut u32; 479 | core::ptr::read_volatile(apic.offset(offset / 4)) 480 | } else { 481 | let msr = 0x800 + (offset >> 4); 482 | rdmsr(msr as u32) as u32 483 | } 484 | } 485 | 486 | pub unsafe fn apic_write(offset: isize, val: u32) 487 | { 488 | assert!((offset & 0xf) == 0, "APIC offset not 4-byte aligned"); 489 | assert!(offset >= 0 && offset < 4096, "APIC offset out of bounds"); 490 | 491 | if !use_x2apic() { 492 | let apic = 0xfee00000 as *mut u32; 493 | core::ptr::write_volatile(apic.offset(offset / 4), val); 494 | } else { 495 | let msr = 0x800 + (offset >> 4); 496 | wrmsr(msr as u32, val as u64); 497 | } 498 | } 499 | 500 | pub fn use_x2apic() -> bool { 501 | false 502 | // unsafe { 503 | // (cpuid(1, 0).2 & (1 << 21)) != 0 504 | // } 505 | } 506 | 507 | /// Get current cores APIC ID 508 | pub fn get_apic_id() -> usize 509 | { 510 | unsafe { 511 | if use_x2apic() { 512 | apic_read(0x20) as usize 513 | } else { 514 | ((apic_read(0x20) >> 24) & 0xff) as usize 515 | } 516 | } 517 | } 518 | 519 | /// Initialize the APIC of this core 520 | pub unsafe fn apic_init() 521 | { 522 | /* Globally enable the APIC by setting EN in IA32_APIC_BASE_MSR */ 523 | wrmsr(0x1b, rdmsr(0x1b) | (1 << 11)); 524 | 525 | if use_x2apic() { 526 | /* If the x2apic is supported, enable x2apic mode */ 527 | wrmsr(0x1b, rdmsr(0x1b) | (1 << 10)); 528 | } 529 | 530 | /* Enable the APIC */ 531 | apic_write(0xf0, 0x1ff); 532 | } 533 | 534 | /// Invalidate the page specified by `addr` 535 | #[inline(always)] 536 | pub unsafe fn invlpg(addr: usize) 537 | { 538 | llvm_asm!("invlpg [$0]" :: "r"(addr) : "memory" : "volatile", "intel"); 539 | } 540 | 541 | /// Sets the contents of the current VMCS field based on `encoding` and `val` 542 | #[inline(always)] 543 | pub unsafe fn vmwrite(encoding: u64, val: u64) 544 | { 545 | llvm_asm!("vmwrite $0, $1" :: "r"(encoding), "r"(val) :: "intel", "volatile"); 546 | } 547 | 548 | /// Reads the contents of the current VMCS field based on `encoding` 549 | #[inline(always)] 550 | pub unsafe fn vmread(encoding: u64) -> u64 551 | { 552 | let ret; 553 | llvm_asm!("vmread $0, $1" : "=r"(ret) : "r"(encoding) :: "intel", "volatile"); 554 | ret 555 | } 556 | 557 | /// Gets the ES selector value 558 | #[inline(always)] 559 | pub unsafe fn read_es() -> u16 { 560 | let ret; 561 | llvm_asm!("mov $0, es" : "=r"(ret) ::: "intel", "volatile"); 562 | ret 563 | } 564 | 565 | /// Gets the CS selector value 566 | #[inline(always)] 567 | pub unsafe fn read_cs() -> u16 { 568 | let ret; 569 | llvm_asm!("mov $0, 
cs" : "=r"(ret) ::: "intel", "volatile"); 570 | ret 571 | } 572 | 573 | /// Gets the SS selector value 574 | #[inline(always)] 575 | pub unsafe fn read_ss() -> u16 { 576 | let ret; 577 | llvm_asm!("mov $0, ss" : "=r"(ret) ::: "intel", "volatile"); 578 | ret 579 | } 580 | 581 | /// Gets the DS selector value 582 | #[inline(always)] 583 | pub unsafe fn read_ds() -> u16 { 584 | let ret; 585 | llvm_asm!("mov $0, ds" : "=r"(ret) ::: "intel", "volatile"); 586 | ret 587 | } 588 | 589 | /// Gets the FS selector value 590 | #[inline(always)] 591 | pub unsafe fn read_fs() -> u16 { 592 | let ret; 593 | llvm_asm!("mov $0, fs" : "=r"(ret) ::: "intel", "volatile"); 594 | ret 595 | } 596 | 597 | /// Gets the GS selector value 598 | #[inline(always)] 599 | pub unsafe fn read_gs() -> u16 { 600 | let ret; 601 | llvm_asm!("mov $0, gs" : "=r"(ret) ::: "intel", "volatile"); 602 | ret 603 | } 604 | -------------------------------------------------------------------------------- /shared/mmu/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "mmu" 3 | version = "0.1.0" 4 | authors = ["gamozo "] 5 | 6 | [dependencies] 7 | cpu = { path = "../cpu" } 8 | 9 | -------------------------------------------------------------------------------- /shared/rangeset/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "rangeset" 3 | version = "0.1.0" 4 | authors = ["gamozo "] 5 | 6 | [dependencies] 7 | -------------------------------------------------------------------------------- /shared/rangeset/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | #![feature(const_fn)] 3 | 4 | use core::cmp; 5 | 6 | #[derive(Clone, Copy)] 7 | #[repr(C, packed)] 8 | pub struct Range 9 | { 10 | pub start: u64, 11 | pub end: u64, 12 | } 13 | 14 | #[derive(Clone, Copy)] 15 | #[repr(C, packed)] 16 | pub struct RangeSet { 17 | /// Fixed array of ranges in the set 18 | ranges: [Range; 32], 19 | 20 | /// Number of in use entries in `ranges` 21 | /// 22 | /// This is not a usize to make the structure fixed size so we can pass it 23 | /// directly from protected mode to long mode. Since `ranges` is fixed u32 24 | /// is plenty large for this use. 25 | in_use: u32, 26 | } 27 | 28 | impl RangeSet 29 | { 30 | /// Create a new empty RangeSet 31 | pub const fn new() -> RangeSet 32 | { 33 | RangeSet { 34 | ranges: [Range { start: 0, end: 0 }; 32], 35 | in_use: 0, 36 | } 37 | } 38 | 39 | /// Get all the entries in the RangeSet as a slice 40 | pub fn entries(&self) -> &[Range] 41 | { 42 | &self.ranges[..self.in_use as usize] 43 | } 44 | 45 | /// Delete the Range contained in the RangeSet at `idx` 46 | fn delete(&mut self, idx: usize) 47 | { 48 | assert!(idx < self.in_use as usize, "Index out of bounds"); 49 | 50 | for ii in idx..(self.in_use as usize)-1 { 51 | self.ranges.swap(ii, ii+1); 52 | } 53 | 54 | self.in_use -= 1; 55 | } 56 | 57 | /// Insert a new range into this RangeSet. 58 | /// 59 | /// If the range overlaps with an existing range then the ranges will 60 | /// be merged. If the range has no overlap with an existing range then 61 | /// it will simply be added to the set. 62 | pub fn insert(&mut self, mut range: Range) 63 | { 64 | assert!(range.start <= range.end, "Invalid range shape"); 65 | 66 | /* Outside loop forever until we run out of merges with existing 67 | * ranges. 
68 | */ 69 | 'try_merges: loop { 70 | for ii in 0..(self.in_use as usize) { 71 | let ent = self.ranges[ii]; 72 | 73 | /* Check for overlap with an existing range. 74 | * Note that we do a saturated add of one to each range. 75 | * This is done so that two ranges that are 'touching' but 76 | * not overlapping will be combined. 77 | */ 78 | if !overlaps(range.start, range.end.saturating_add(1), 79 | ent.start, ent.end.saturating_add(1)){ 80 | continue; 81 | } 82 | 83 | /* There was overlap with an existing range. Make this range 84 | * a combination of the existing ranges. 85 | */ 86 | range.start = cmp::min(range.start, ent.start); 87 | range.end = cmp::max(range.end, ent.end); 88 | 89 | /* Delete the old range, as the new one is now all inclusive */ 90 | self.delete(ii); 91 | 92 | /* Start over looking for merges */ 93 | continue 'try_merges; 94 | } 95 | 96 | break; 97 | } 98 | 99 | assert!((self.in_use as usize) < self.ranges.len(), 100 | "Too many entries in RangeSet on insert"); 101 | 102 | /* Add the new range to the end */ 103 | self.ranges[self.in_use as usize] = range; 104 | self.in_use += 1; 105 | } 106 | 107 | /// Remove `range` from the RangeSet 108 | /// 109 | /// Any range in the RangeSet which overlaps with `range` will be trimmed 110 | /// such that there is no more overlap. If this results in a range in 111 | /// the set becoming empty, the range will be removed entirely from the 112 | /// set. 113 | pub fn remove(&mut self, range: Range) 114 | { 115 | assert!(range.start <= range.end, "Invalid range shape"); 116 | 117 | 'try_subtractions: loop { 118 | for ii in 0..(self.in_use as usize) { 119 | let ent = self.ranges[ii]; 120 | 121 | /* If there is no overlap, there is nothing to do with this 122 | * range. 123 | */ 124 | if !overlaps(range.start, range.end, ent.start, ent.end) { 125 | continue; 126 | } 127 | 128 | /* If this entry is entirely contained by the range to remove, 129 | * then we can just delete it. 130 | */ 131 | if contains(ent.start, ent.end, range.start, range.end) { 132 | self.delete(ii); 133 | continue 'try_subtractions; 134 | } 135 | 136 | /* At this point we know there is overlap, but only partial. 137 | * This means we need to adjust the size of the current range 138 | * and potentially insert a new entry if the entry is split 139 | * in two. 140 | */ 141 | 142 | if range.start <= ent.start { 143 | /* If the overlap is on the low end of the range, adjust 144 | * the start of the range to the end of the range we want 145 | * to remove. 146 | */ 147 | self.ranges[ii].start = range.end.saturating_add(1); 148 | } else if range.end >= ent.end { 149 | /* If the overlap is on the high end of the range, adjust 150 | * the end of the range to the start of the range we want 151 | * to remove. 152 | */ 153 | self.ranges[ii].end = range.start.saturating_sub(1); 154 | } else { 155 | /* If the range to remove fits inside of the range then 156 | * we need to split it into two ranges. 
157 | */ 158 | self.ranges[ii].start = range.end.saturating_add(1); 159 | 160 | assert!((self.in_use as usize) < self.ranges.len(), 161 | "Too many entries in RangeSet on split"); 162 | 163 | self.ranges[self.in_use as usize] = Range { 164 | start: ent.start, 165 | end: range.start.saturating_sub(1), 166 | }; 167 | self.in_use += 1; 168 | continue 'try_subtractions; 169 | } 170 | } 171 | 172 | break; 173 | } 174 | } 175 | 176 | /// Subtracts a rangeset from `self` 177 | pub fn subtract(&mut self, rs: &RangeSet) 178 | { 179 | for &ent in rs.entries() { 180 | self.remove(ent); 181 | } 182 | } 183 | 184 | /// Compute the size of the range covered by this rangeset 185 | pub fn sum(&self) -> u64 186 | { 187 | self.entries().iter().fold(0u64, |acc, x| acc + (x.end - x.start) + 1) 188 | } 189 | 190 | /// Allocate `size` bytes of memory with `align` requirement for alignment 191 | /// 192 | /// A return value of NULL represents an error. 193 | /// 194 | /// This function attempts to allocate memory from the provided `rangeset`, 195 | /// fulfilling size and alignment requirements. The alignment must be a 196 | /// power of two and nonzero. This allows for a mask to be created by 197 | /// subtracting 1 from the mask. 198 | pub fn allocate(&mut self, size: u64, align: u64) -> *mut u8 199 | { 200 | /* Validate alignment is nonzero and a power of 2 */ 201 | if align.count_ones() != 1 { 202 | return core::ptr::null_mut(); 203 | } 204 | 205 | /* Zero sized allocations get 1 byte allocated */ 206 | let size = if size <= 0 { 1 } else { size }; 207 | 208 | /* Generate a mask for the specified alignment */ 209 | let alignmask = align - 1; 210 | 211 | /* Go through each memory range in the rangeset */ 212 | let mut allocation = None; 213 | for ent in self.entries() { 214 | /* Determine number of bytes required for front padding to satisfy 215 | * alignment requirments. 216 | */ 217 | let align_fix = (align - (ent.start & alignmask)) & alignmask; 218 | 219 | /* Compute base and end of allocation as an inclusive range 220 | * [base, end] 221 | */ 222 | let base = ent.start; 223 | let end = base.checked_add(size - 1).unwrap(). 224 | checked_add(align_fix).unwrap(); 225 | 226 | /* Validate that this allocation is addressable in the current 227 | * processor state. 228 | */ 229 | if base > core::usize::MAX as u64 || end > core::usize::MAX as u64 { 230 | continue; 231 | } 232 | 233 | /* Check that this entry has enough room to satisfy allocation */ 234 | if end > ent.end { 235 | continue; 236 | } 237 | 238 | /* Allocation successful! */ 239 | allocation = Some((base, end, (base + align_fix) as *mut u8)); 240 | break; 241 | } 242 | 243 | match allocation { 244 | Some((base, end, ptr)) => { 245 | /* If allocation was successful, remove range from RangeSet and 246 | * return! 
247 | */ 248 | self.remove(Range { start: base, end: end }); 249 | ptr 250 | }, 251 | 252 | /* If allocation failed return null */ 253 | None => core::ptr::null_mut(), 254 | } 255 | } 256 | } 257 | 258 | /// Determines if the two ranges [x1, x2] and [y1, y2] have any overlap 259 | fn overlaps(mut x1: u64, mut x2: u64, mut y1: u64, mut y2: u64) -> bool 260 | { 261 | /* Make sure x2 is always > x1 */ 262 | if x1 > x2 { 263 | core::mem::swap(&mut x1, &mut x2); 264 | } 265 | 266 | /* Make sure y2 is always > y1 */ 267 | if y1 > y2 { 268 | core::mem::swap(&mut y1, &mut y2); 269 | } 270 | 271 | /* Check if there is overlap */ 272 | if x1 <= y2 && y1 <= x2 { 273 | return true; 274 | } 275 | 276 | false 277 | } 278 | 279 | /// Returns true if the entirety of [x1, x2] is contained inside [y1, y2], else 280 | /// returns false. 281 | fn contains(mut x1: u64, mut x2: u64, mut y1: u64, mut y2: u64) -> bool 282 | { 283 | /* Make sure x2 is always > x1 */ 284 | if x1 > x2 { 285 | core::mem::swap(&mut x1, &mut x2); 286 | } 287 | 288 | /* Make sure y2 is always > y1 */ 289 | if y1 > y2 { 290 | core::mem::swap(&mut y1, &mut y2); 291 | } 292 | 293 | if x1 >= y1 && x2 <= y2 { 294 | return true; 295 | } 296 | 297 | false 298 | } 299 | 300 | -------------------------------------------------------------------------------- /shared/safecast/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "safecast" 3 | version = "0.1.0" 4 | authors = ["Brandon Falk "] 5 | 6 | [dependencies] 7 | 8 | -------------------------------------------------------------------------------- /shared/safecast/README.md: -------------------------------------------------------------------------------- 1 | # safecast 2 | 3 | An attempt to make a procedural macro to support safe casting in Rust. 4 | 5 | ## Goals 6 | 7 | This library is designed to allow for copying raw underlying data between different types in Rust. 8 | This is helpful for handling things like binary files or network protocols. Using this library you 9 | are able to safely create structures and cast/copy between them. 10 | 11 | ## Safety 12 | 13 | This casting/copying is safe given the following: 14 | 15 | - The structure is composed only of types which have no invalid/unsafe underlying binary encodings 16 | - Currently only `u8`, `u16`, `u32`, `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, `isize` are considered 17 | to have these properties. 18 | - Structures may have structures in them which are also packed and contain only the aforementioned 19 | types. 20 | - Fixed sized arrays are also allowed. 21 | - The current implementation is designed to be extra strict. Things like tuples and such would 22 | be fine in practice but the goal is to keep things simple for now to make it easier to 23 | verify. 24 | - The structure is packed such that no padding occurs between fields 25 | - Since the padding between fields contains undefined values this interface could potentially 26 | expose them if cast to another type where the padding is readable. Thus we disallow use 27 | of padding in structures. This doesn't matter much anyways as if you're working with binary 28 | data it's probably packed anyways. 29 | 30 | ## Interface 31 | 32 | `SafeCast::cast_copy_into(&self, dest: &mut T)` 33 | 34 | This routine allows the casting from an existing structure to another type given the other 35 | type also implements ByteSafe. 
This method is the one used when `T` is `?Sized`, allowing for 36 | us to cast into things like slices/Vecs. This is the core implementation and is used by 37 | `cast()`. 38 | 39 | This method will panic unless both self and T are equal in size (in bytes). 40 | 41 | `SafeCast::cast_copy(&self) -> T` 42 | 43 | Creates an uninitialized value of type T, and calls `cast_into` on self 44 | to cast it into T. Returns the new value. 45 | 46 | This method will panic unless both self and T are equal in size (in bytes). 47 | 48 | `SafeCast::cast(&self) -> &[T]` 49 | 50 | Casts `Self` to a slice of `T`s, where `Self` is evenly divisible by `T`. 51 | 52 | `SafeCast::cast_mut(&mut self) -> &mut [T]` 53 | 54 | Casts `Self` to a mutable slice of `T`s, where `Self` is evenly divisible by `T`. 55 | 56 | ## Endianness 57 | 58 | I'm not sure if it matches Rust's definition, however I think it is fine for the endianness 59 | to be up to the user to handle. There is no safety violation by having an unexpected 60 | endian swap, thus I'm okay with this not handling endian swaps for you. It is up 61 | to the user to manually swap fields as they use them. 62 | 63 | ## Enforcement / Internals 64 | 65 | To make this library easy to safely use we use a procedural macro to `#[derive(ByteSafe)]` on 66 | a structure. 67 | 68 | Internally we have two traits: `ByteSafe` and `SafeCast`. `ByteSafe` is the unsafe trait which is 69 | used to specify that a type is safe for use for casting and byte-level copies to other types 70 | marked `ByteSafe`. `SafeCast` is the trait which implements the casting/copying functions for 71 | a given type, if the type implements `ByteSafe`. `SafeCast` is automatically implemented for 72 | any type which is `ByteSafe`. 73 | 74 | The `ByteSafe` trait is the unsafe one which is either manually implemented (developer must verify 75 | it is safe), or is automatically implemented safely by `#[derive(ByteSafe)]`. 76 | 77 | `ByteSafe` contains a dummy function `bytesafe()` which is core to the derive implementation. 78 | `bytesafe()` does nothing, nor does it return anything. It is simply there so that the 79 | automatic derive can attempt to call this function to determine if the trait is implemented. 80 | 81 | Internally the custom derive does 2 simple things. 82 | 83 | - Verifies the structure is marked as packed 84 | - Implements `ByteSafe` for the structure with a custom `ByteSafe::bytesafe()` which attempts to call 85 | `ByteSafe::bytesafe()` on every member of the structure. This behavior verifies that 86 | every member is marked `ByteSafe`. If all members are marked as `ByteSafe`, then the structure 87 | itself can also be marked as `ByteSafe`. 88 | 89 | For this to all work a few manual `ByteSafe` implementations must be done on the core types we 90 | want to allow in structures. In our case this list is `u8`, `u16`, `u32`, `u64`, `usize`, `i8`, `i16`, `i32`, `i64`, `isize`. 91 | Further `ByteSafe` is implemented for slices `[T: ByteSafe]` and fixed-sized arrays up to and including 32-elements 92 | `[T: ByteSafe; 0..33]`. 93 | Our custom derive verifies that each member of the structure is either a `syn::Ty::Path` (raw type), or a `syn::Ty::Array` 94 | (fixed sized array). Thus even though we allow slices for `ByteSafe`, they are not allowed in the structures in a custom 95 | derive, only fixed sized arrays and raw types are. 96 | 97 | The implementation of `ByteSafe` for slices allows for casting slices to structures, and structures back to slices. 
However 98 | does not allow for slices to be used inside structures that are being cast to/from. 99 | -------------------------------------------------------------------------------- /shared/safecast/bytesafe_derive/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "bytesafe_derive" 3 | version = "0.1.0" 4 | authors = ["Brandon Falk "] 5 | 6 | [lib] 7 | proc-macro = true 8 | 9 | [dependencies] 10 | syn = "0.11.11" 11 | quote = "0.3.15" 12 | 13 | -------------------------------------------------------------------------------- /shared/safecast/bytesafe_derive/src/lib.rs: -------------------------------------------------------------------------------- 1 | #[macro_use] extern crate quote; 2 | 3 | extern crate proc_macro; 4 | extern crate syn; 5 | 6 | use proc_macro::TokenStream; 7 | use quote::ToTokens; 8 | 9 | /// Derive the ByteSafe trait for a given structure 10 | /// 11 | /// This procedural macro ensures that all members of the structure being 12 | /// derived as ByteSafe are also ByteSafe. It also verifies that the structure 13 | /// contains no padding. 14 | #[proc_macro_derive(ByteSafe)] 15 | pub fn derive_bytesafe(input: TokenStream) -> TokenStream { 16 | /* Construct a string representation of the type definition */ 17 | let s = input.to_string(); 18 | 19 | /* Parse the string representation */ 20 | let ast = syn::parse_derive_input(&s).unwrap(); 21 | 22 | /* Build the impl */ 23 | let gen = impl_derive_bytesafe(&ast); 24 | 25 | /* Return the generated impl */ 26 | gen.parse().unwrap() 27 | } 28 | 29 | /// Internal implementation of the ByteSafe derive 30 | fn impl_derive_bytesafe(ast: &syn::DeriveInput) -> quote::Tokens { 31 | let name = &ast.ident; 32 | let (impl_generics, ty_generics, where_clause) = 33 | ast.generics.split_for_impl(); 34 | 35 | let mut stuff = Vec::new(); 36 | 37 | /* There is probably a better/cleaner way of doing this, but check if 38 | * this structure is marked as repr(C). If it is not repr(C) we might 39 | * not be able to directly copy bits as the representation could be 40 | * different than what we expect. 41 | */ 42 | let mut is_repr_c = false; 43 | for attr in &ast.attrs { 44 | if let syn::MetaItem::List(ref ident, ref items) = attr.value { 45 | if ident == "repr" { 46 | for item in items { 47 | if let &syn::NestedMetaItem::MetaItem(ref item) = item { 48 | if item.name() == "C" || item.name() == "packed" { 49 | is_repr_c = true; 50 | } 51 | } 52 | } 53 | } 54 | } 55 | } 56 | assert!(is_repr_c); 57 | 58 | /* We only support structures */ 59 | if let syn::Body::Struct(ref variants) = ast.body { 60 | /* For each field in the structure call bytesafe() on it, this will 61 | * fail if it does not implement the bytesafe trait. 62 | * 63 | * However currently with automatic dereferencing this allows for 64 | * references to be used to types that are ByteSafe, which is an issue. 65 | * We need a workaround for this. 66 | */ 67 | for field in variants.fields().iter() { 68 | match field.ty { 69 | /* We allow Path types */ 70 | syn::Ty::Path(_, _) => {} 71 | 72 | /* We allow fixed sized arrays */ 73 | syn::Ty::Array(_, _) => {} 74 | 75 | /* Anything else in the structure is not allowed */ 76 | _ => panic!("Unsupported type {:?}", field.ty) 77 | } 78 | 79 | let mut typey = quote::Tokens::new(); 80 | field.ty.to_tokens(&mut typey); 81 | 82 | //eprint!("{}\n", typey); 83 | 84 | /* Attempt to call bytesafe() dummy routine on member. 
This will 85 | * fail at compile time if this structure member doesn't implement 86 | * ByteSafe. 87 | */ 88 | stuff.push(quote! { 89 | /* Accumulate the size of all the raw elements */ 90 | calculated_size += core::mem::size_of::<#typey>(); 91 | <#typey>::bytesafe(); 92 | }); 93 | } 94 | } else { 95 | panic!("Expected struct only for ByteSafe"); 96 | } 97 | 98 | /* Implement ByteSafe! */ 99 | quote! { 100 | unsafe impl #impl_generics ::safecast::ByteSafe for #name #ty_generics #where_clause { 101 | fn bytesafe() 102 | { 103 | /* Normalize so we can use core even in std projects */ 104 | extern crate core; 105 | 106 | let mut calculated_size = 0usize; 107 | 108 | #(#stuff)* 109 | 110 | /* Validate that the size of each individual member adds up 111 | * to the structure size. If this is a mismatch then there was 112 | * padding in the structure and it is not safe to cast this 113 | * structure. 114 | */ 115 | assert!(calculated_size == core::mem::size_of::<#name #ty_generics #where_clause>(), 116 | "Structure contained padding bytes, not safe for cast"); 117 | } 118 | } 119 | } 120 | } 121 | 122 | -------------------------------------------------------------------------------- /shared/safecast/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | 3 | /// Trait specifying that a structure can be safely cast to other ByteSafe 4 | /// structures. This indicates there are no possible invalid encodings with 5 | /// any underlying binary data. 6 | /// 7 | /// Trait is unsafe as `Self` must *only* composed of types with no 8 | /// unsafe/invalid binary representations, and has no padding of members. 9 | /// 10 | /// Use #[derive(ByteSafe)] on a structure to get this trait safely. The 11 | /// custom derive will validate the structure satisfies all requirements to 12 | /// implement this safely. 13 | /// 14 | /// To be extremely strict the only allowed types are: u8, u16, u32, u64, 15 | /// usize, i8, i16, i32, i64, and isize. 16 | /// 17 | /// Self must contain no padding as casting padding could make it readable 18 | /// and this is UB. 19 | pub unsafe trait ByteSafe { fn bytesafe() {} } 20 | 21 | /* XXX XXX XXX XXX 22 | * Currently we rely on runtime checks for casting safety. Directly using 23 | * ByteSafe without calling ByteSafe::bytesafe(self); is UB!!! 24 | * 25 | * You should only ever use SafeCast and always cast it, this allows for 26 | * runtime checks to be run. 27 | */ 28 | 29 | /* Raw base types which are plain old data */ 30 | unsafe impl ByteSafe for u8 {} 31 | unsafe impl ByteSafe for u16 {} 32 | unsafe impl ByteSafe for u32 {} 33 | unsafe impl ByteSafe for u64 {} 34 | unsafe impl ByteSafe for usize {} 35 | unsafe impl ByteSafe for i8 {} 36 | unsafe impl ByteSafe for i16 {} 37 | unsafe impl ByteSafe for i32 {} 38 | unsafe impl ByteSafe for i64 {} 39 | unsafe impl ByteSafe for isize {} 40 | 41 | /* Slices and arrays of ByteSafe types are allowed. 42 | * 43 | * While we mark slices as safe, they are *not* allowed inside of structures 44 | * as they would contain pointers. This is safely protected against in the 45 | * #[derive(ByteSafe)] code. 
46 | */ 47 | unsafe impl ByteSafe for [T] {} 48 | unsafe impl ByteSafe for [T; 0] {} 49 | unsafe impl ByteSafe for [T; 1] {} 50 | unsafe impl ByteSafe for [T; 2] {} 51 | unsafe impl ByteSafe for [T; 3] {} 52 | unsafe impl ByteSafe for [T; 4] {} 53 | unsafe impl ByteSafe for [T; 5] {} 54 | unsafe impl ByteSafe for [T; 6] {} 55 | unsafe impl ByteSafe for [T; 7] {} 56 | unsafe impl ByteSafe for [T; 8] {} 57 | unsafe impl ByteSafe for [T; 9] {} 58 | unsafe impl ByteSafe for [T; 10] {} 59 | unsafe impl ByteSafe for [T; 11] {} 60 | unsafe impl ByteSafe for [T; 12] {} 61 | unsafe impl ByteSafe for [T; 13] {} 62 | unsafe impl ByteSafe for [T; 14] {} 63 | unsafe impl ByteSafe for [T; 15] {} 64 | unsafe impl ByteSafe for [T; 16] {} 65 | unsafe impl ByteSafe for [T; 17] {} 66 | unsafe impl ByteSafe for [T; 18] {} 67 | unsafe impl ByteSafe for [T; 19] {} 68 | unsafe impl ByteSafe for [T; 20] {} 69 | unsafe impl ByteSafe for [T; 21] {} 70 | unsafe impl ByteSafe for [T; 22] {} 71 | unsafe impl ByteSafe for [T; 23] {} 72 | unsafe impl ByteSafe for [T; 24] {} 73 | unsafe impl ByteSafe for [T; 25] {} 74 | unsafe impl ByteSafe for [T; 26] {} 75 | unsafe impl ByteSafe for [T; 27] {} 76 | unsafe impl ByteSafe for [T; 28] {} 77 | unsafe impl ByteSafe for [T; 29] {} 78 | unsafe impl ByteSafe for [T; 30] {} 79 | unsafe impl ByteSafe for [T; 31] {} 80 | unsafe impl ByteSafe for [T; 32] {} 81 | unsafe impl ByteSafe for [T; 33] {} 82 | unsafe impl ByteSafe for [T; 34] {} 83 | unsafe impl ByteSafe for [T; 35] {} 84 | unsafe impl ByteSafe for [T; 36] {} 85 | unsafe impl ByteSafe for [T; 37] {} 86 | unsafe impl ByteSafe for [T; 38] {} 87 | unsafe impl ByteSafe for [T; 39] {} 88 | unsafe impl ByteSafe for [T; 40] {} 89 | unsafe impl ByteSafe for [T; 41] {} 90 | unsafe impl ByteSafe for [T; 42] {} 91 | unsafe impl ByteSafe for [T; 43] {} 92 | unsafe impl ByteSafe for [T; 44] {} 93 | unsafe impl ByteSafe for [T; 45] {} 94 | unsafe impl ByteSafe for [T; 46] {} 95 | unsafe impl ByteSafe for [T; 47] {} 96 | unsafe impl ByteSafe for [T; 48] {} 97 | unsafe impl ByteSafe for [T; 49] {} 98 | unsafe impl ByteSafe for [T; 50] {} 99 | unsafe impl ByteSafe for [T; 51] {} 100 | unsafe impl ByteSafe for [T; 52] {} 101 | unsafe impl ByteSafe for [T; 53] {} 102 | unsafe impl ByteSafe for [T; 54] {} 103 | unsafe impl ByteSafe for [T; 55] {} 104 | unsafe impl ByteSafe for [T; 56] {} 105 | unsafe impl ByteSafe for [T; 57] {} 106 | unsafe impl ByteSafe for [T; 58] {} 107 | unsafe impl ByteSafe for [T; 59] {} 108 | unsafe impl ByteSafe for [T; 60] {} 109 | unsafe impl ByteSafe for [T; 61] {} 110 | unsafe impl ByteSafe for [T; 62] {} 111 | unsafe impl ByteSafe for [T; 63] {} 112 | unsafe impl ByteSafe for [T; 64] {} 113 | unsafe impl ByteSafe for [T; 65] {} 114 | unsafe impl ByteSafe for [T; 66] {} 115 | unsafe impl ByteSafe for [T; 67] {} 116 | unsafe impl ByteSafe for [T; 68] {} 117 | unsafe impl ByteSafe for [T; 69] {} 118 | unsafe impl ByteSafe for [T; 70] {} 119 | unsafe impl ByteSafe for [T; 71] {} 120 | unsafe impl ByteSafe for [T; 72] {} 121 | unsafe impl ByteSafe for [T; 73] {} 122 | unsafe impl ByteSafe for [T; 74] {} 123 | unsafe impl ByteSafe for [T; 75] {} 124 | unsafe impl ByteSafe for [T; 76] {} 125 | unsafe impl ByteSafe for [T; 77] {} 126 | unsafe impl ByteSafe for [T; 78] {} 127 | unsafe impl ByteSafe for [T; 79] {} 128 | unsafe impl ByteSafe for [T; 80] {} 129 | unsafe impl ByteSafe for [T; 81] {} 130 | unsafe impl ByteSafe for [T; 82] {} 131 | unsafe impl ByteSafe for [T; 83] {} 132 | unsafe impl ByteSafe for [T; 84] {} 
133 | unsafe impl ByteSafe for [T; 85] {} 134 | unsafe impl ByteSafe for [T; 86] {} 135 | unsafe impl ByteSafe for [T; 87] {} 136 | unsafe impl ByteSafe for [T; 88] {} 137 | unsafe impl ByteSafe for [T; 89] {} 138 | unsafe impl ByteSafe for [T; 90] {} 139 | unsafe impl ByteSafe for [T; 91] {} 140 | unsafe impl ByteSafe for [T; 92] {} 141 | unsafe impl ByteSafe for [T; 93] {} 142 | unsafe impl ByteSafe for [T; 94] {} 143 | unsafe impl ByteSafe for [T; 95] {} 144 | unsafe impl ByteSafe for [T; 96] {} 145 | unsafe impl ByteSafe for [T; 97] {} 146 | unsafe impl ByteSafe for [T; 98] {} 147 | unsafe impl ByteSafe for [T; 99] {} 148 | unsafe impl ByteSafe for [T; 100] {} 149 | unsafe impl ByteSafe for [T; 101] {} 150 | unsafe impl ByteSafe for [T; 102] {} 151 | unsafe impl ByteSafe for [T; 103] {} 152 | unsafe impl ByteSafe for [T; 104] {} 153 | unsafe impl ByteSafe for [T; 105] {} 154 | unsafe impl ByteSafe for [T; 106] {} 155 | unsafe impl ByteSafe for [T; 107] {} 156 | unsafe impl ByteSafe for [T; 108] {} 157 | unsafe impl ByteSafe for [T; 109] {} 158 | unsafe impl ByteSafe for [T; 110] {} 159 | unsafe impl ByteSafe for [T; 111] {} 160 | unsafe impl ByteSafe for [T; 112] {} 161 | unsafe impl ByteSafe for [T; 113] {} 162 | unsafe impl ByteSafe for [T; 114] {} 163 | unsafe impl ByteSafe for [T; 115] {} 164 | unsafe impl ByteSafe for [T; 116] {} 165 | unsafe impl ByteSafe for [T; 117] {} 166 | unsafe impl ByteSafe for [T; 118] {} 167 | unsafe impl ByteSafe for [T; 119] {} 168 | unsafe impl ByteSafe for [T; 120] {} 169 | unsafe impl ByteSafe for [T; 121] {} 170 | unsafe impl ByteSafe for [T; 122] {} 171 | unsafe impl ByteSafe for [T; 123] {} 172 | unsafe impl ByteSafe for [T; 124] {} 173 | unsafe impl ByteSafe for [T; 125] {} 174 | unsafe impl ByteSafe for [T; 126] {} 175 | unsafe impl ByteSafe for [T; 127] {} 176 | unsafe impl ByteSafe for [T; 128] {} 177 | unsafe impl ByteSafe for [T; 129] {} 178 | unsafe impl ByteSafe for [T; 130] {} 179 | unsafe impl ByteSafe for [T; 131] {} 180 | unsafe impl ByteSafe for [T; 132] {} 181 | unsafe impl ByteSafe for [T; 133] {} 182 | unsafe impl ByteSafe for [T; 134] {} 183 | unsafe impl ByteSafe for [T; 135] {} 184 | unsafe impl ByteSafe for [T; 136] {} 185 | unsafe impl ByteSafe for [T; 137] {} 186 | unsafe impl ByteSafe for [T; 138] {} 187 | unsafe impl ByteSafe for [T; 139] {} 188 | unsafe impl ByteSafe for [T; 140] {} 189 | unsafe impl ByteSafe for [T; 141] {} 190 | unsafe impl ByteSafe for [T; 142] {} 191 | unsafe impl ByteSafe for [T; 143] {} 192 | unsafe impl ByteSafe for [T; 144] {} 193 | unsafe impl ByteSafe for [T; 145] {} 194 | unsafe impl ByteSafe for [T; 146] {} 195 | unsafe impl ByteSafe for [T; 147] {} 196 | unsafe impl ByteSafe for [T; 148] {} 197 | unsafe impl ByteSafe for [T; 149] {} 198 | unsafe impl ByteSafe for [T; 150] {} 199 | unsafe impl ByteSafe for [T; 151] {} 200 | unsafe impl ByteSafe for [T; 152] {} 201 | unsafe impl ByteSafe for [T; 153] {} 202 | unsafe impl ByteSafe for [T; 154] {} 203 | unsafe impl ByteSafe for [T; 155] {} 204 | unsafe impl ByteSafe for [T; 156] {} 205 | unsafe impl ByteSafe for [T; 157] {} 206 | unsafe impl ByteSafe for [T; 158] {} 207 | unsafe impl ByteSafe for [T; 159] {} 208 | unsafe impl ByteSafe for [T; 160] {} 209 | unsafe impl ByteSafe for [T; 161] {} 210 | unsafe impl ByteSafe for [T; 162] {} 211 | unsafe impl ByteSafe for [T; 163] {} 212 | unsafe impl ByteSafe for [T; 164] {} 213 | unsafe impl ByteSafe for [T; 165] {} 214 | unsafe impl ByteSafe for [T; 166] {} 215 | unsafe impl ByteSafe for [T; 167] {} 
unsafe impl<T: ByteSafe> ByteSafe for [T; 168] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 169] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 170] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 171] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 172] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 173] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 174] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 175] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 176] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 177] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 178] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 179] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 180] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 181] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 182] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 183] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 184] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 185] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 186] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 187] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 188] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 189] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 190] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 191] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 192] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 193] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 194] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 195] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 196] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 197] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 198] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 199] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 200] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 201] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 202] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 203] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 204] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 205] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 206] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 207] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 208] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 209] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 210] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 211] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 212] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 213] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 214] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 215] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 216] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 217] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 218] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 219] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 220] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 221] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 222] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 223] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 224] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 225] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 226] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 227] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 228] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 229] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 230] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 231] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 232] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 233] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 234] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 235] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 236] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 237] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 238] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 239] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 240] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 241] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 242] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 243] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 244] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 245] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 246] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 247] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 248] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 249] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 250] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 251] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 252] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 253] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 254] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 255] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 256] {}

unsafe impl<T: ByteSafe> ByteSafe for [T; 768] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 2408] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 3928] {}
unsafe impl<T: ByteSafe> ByteSafe for [T; 4096] {}

/* Implement SafeCast trait for all T and [T] where T: ByteSafe */
impl<T: ByteSafe> SafeCast for T {}
impl<T: ByteSafe> SafeCast for [T] {}

/// SafeCast implementation
///
/// If the type is marked ByteSafe this can be implemented. Using this the
/// type can be cast or copied to other types, given that the other type
/// implements ByteSafe as well.
pub trait SafeCast: ByteSafe {
    /// Copy the underlying bits from `Self` into `dest`
    ///
    /// This is similar to cast_copy, however it copies into a mutable
    /// reference. This makes it possible to copy into dynamically sized
    /// types such as a slice of bytes.
    ///
    /// This function will panic if `Self` is not the same size as `dest`
    fn cast_copy_into<T: ByteSafe + ?Sized>(&self, dest: &mut T)
    {
        <Self as ByteSafe>::bytesafe();
        <T as ByteSafe>::bytesafe();

        /* Validate source and dest are exactly the same size */
        let dest_sz = core::mem::size_of_val(dest);
        let src_sz  = core::mem::size_of_val(self);
        assert!(dest_sz == src_sz);

        unsafe {
            core::ptr::copy_nonoverlapping(
                self as *const _ as *const u8,
                dest as *mut _ as *mut u8,
                src_sz);
        }
    }

    /// Copy the underlying bits from `Self` into a new structure of type `T`
    ///
    /// This creates a new `T` on the stack as uninitialized, calls
    /// `cast_copy_into()` to copy Self into it, and returns the result.
    ///
    /// This function will panic if `Self` is not the same size as `T`
    fn cast_copy<T: ByteSafe>(&self) -> T
    {
        <Self as ByteSafe>::bytesafe();
        <T as ByteSafe>::bytesafe();

        /* Uninitialized is safe here as we will fill in all of the bytes */
        let mut ret = core::mem::MaybeUninit::<T>::uninit();
        unsafe {
            self.cast_copy_into(&mut *ret.as_mut_ptr());
            ret.assume_init()
        }
    }

    /// Cast `Self` into a slice of `T` spanning the size of `Self`
    ///
    /// This function will directly cast the reference of `Self` into a slice
    /// of `T`, given `Self` is evenly divisible by `T` and alignment matches.
    ///
    /// The resulting slice will map all bytes of `Self`, never will a partial
    /// cast occur.
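    ///
    /// As a minimal usage sketch (assuming `u32` and `u8` are both marked
    /// `ByteSafe` earlier in this crate):
    ///
    /// ```ignore
    /// let words: [u32; 2] = [0x41414141, 0x42424242];
    /// let bytes: &[u8] = words.cast();
    /// assert_eq!(bytes.len(), 8);
    /// ```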
    fn cast<T: ByteSafe>(&self) -> &[T]
    {
        <Self as ByteSafe>::bytesafe();
        <T as ByteSafe>::bytesafe();

        /* Verify alignment is fine */
        let src_ptr = self as *const _ as *const u8 as usize;
        assert!(core::mem::align_of::<T>() > 0 &&
                (src_ptr % core::mem::align_of::<T>()) == 0,
                "cast alignment mismatch");

        /* Validate that self is evenly divisible by T */
        let dest_sz = core::mem::size_of::<T>();
        let src_sz  = core::mem::size_of_val(self);
        assert!(dest_sz > 0 && (src_sz % dest_sz) == 0,
                "cast src cannot be evenly divided by T");

        /* Convert self into a slice of T's */
        unsafe {
            core::slice::from_raw_parts(self as *const _ as *const T,
                                        src_sz / dest_sz)
        }
    }

    /// Cast `Self` into a slice of `T` spanning the size of `Self` mutably
    ///
    /// This function will directly cast the reference of `Self` into a slice
    /// of `T`, given `Self` is evenly divisible by `T` and alignment matches.
    ///
    /// The resulting slice will map all bytes of `Self`, never will a partial
    /// cast occur.
    fn cast_mut<T: ByteSafe>(&mut self) -> &mut [T]
    {
        <Self as ByteSafe>::bytesafe();
        <T as ByteSafe>::bytesafe();

        /* Verify alignment is fine */
        let src_ptr = self as *const _ as *const u8 as usize;
        assert!(core::mem::align_of::<T>() > 0 &&
                (src_ptr % core::mem::align_of::<T>()) == 0,
                "cast_mut alignment mismatch");

        /* Validate that self is evenly divisible by T */
        let dest_sz = core::mem::size_of::<T>();
        let src_sz  = core::mem::size_of_val(self);
        assert!(dest_sz > 0 && (src_sz % dest_sz) == 0,
                "cast_mut src cannot be evenly divided by T");

        /* Convert self into a slice of T's */
        unsafe {
            core::slice::from_raw_parts_mut(self as *mut _ as *mut T,
                                            src_sz / dest_sz)
        }
    }
}
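As a concrete sketch of how this trait is meant to be consumed (assuming the crate is pulled in as `safecast`, and that `u8` and `u32` receive `ByteSafe` impls earlier in the file), copying a raw byte buffer into typed values would look roughly like this:

use safecast::SafeCast;

fn words_from_bytes(buf: &[u8; 8]) -> [u32; 2] {
    /* Source and destination are both exactly 8 bytes, so this cannot panic */
    let words: [u32; 2] = buf.cast_copy();

    /* The copy can also go the other way, into a caller-provided buffer */
    let mut round_trip = [0u8; 8];
    words.cast_copy_into(&mut round_trip);
    assert_eq!(&round_trip, buf);

    words
}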
-------------------------------------------------------------------------------- /shared/serial/Cargo.toml: --------------------------------------------------------------------------------
[package]
name = "serial"
version = "0.1.0"
authors = ["gamozo "]

[dependencies]
cpu = { path = "../cpu" }

-------------------------------------------------------------------------------- /shared/serial/src/lib.rs: --------------------------------------------------------------------------------
#![no_std]

extern crate cpu;

/* COM devices
 *
 * (address, is_present, is_active, is_init)
 *
 * address    - I/O address of port
 * is_present - Set if scratchpad and loopback tests pass
 * is_active  - Set if CTS, DSR, and DCD are all set in the modem status
 *              register during init
 * is_init    - Set if port has been probed
 */
static mut COM1: (u16, bool, bool, bool) = (0x3f8, false, false, false);
static mut COM2: (u16, bool, bool, bool) = (0x2f8, false, false, false);
static mut COM3: (u16, bool, bool, bool) = (0x3e8, false, false, false);
static mut COM4: (u16, bool, bool, bool) = (0x2e8, false, false, false);

#[macro_export]
macro_rules! print {
    ( $($arg:tt)* ) => ({
        use core::fmt::Write;
        let _ = write!(&mut $crate::Writer, $($arg)*);
    })
}

/// Writer implementation used by the `print!` macro
pub struct Writer;

impl core::fmt::Write for Writer
{
    fn write_str(&mut self, s: &str) -> core::fmt::Result
    {
        write(s);
        Ok(())
    }
}

unsafe fn init(port: &mut (u16, bool, bool, bool))
{
    /* Set port to initialized state */
    port.3 = true;

    /* Set scratchpad to contain 0x41, and check if it reads it back */
    cpu::out8(port.0 + 7, 0x41);
    if cpu::in8(port.0 + 7) != 0x41 {
        port.1 = false;
        port.2 = false;
        return;
    }

    /* Mark port as present */
    port.1 = true;

    /* Disable all interrupts */
    cpu::out8(port.0 + 1, 0);

    /* Set DLAB */
    cpu::out8(port.0 + 3, 0x80);

    /* Write low divisor byte */
    cpu::out8(port.0 + 0, 1);

    /* Write high divisor byte */
    cpu::out8(port.0 + 1, 0);

    /* Clear DLAB, set word length to 8 bits, one stop bit, no parity */
    cpu::out8(port.0 + 3, 3);

    /* Enable and clear FIFOs with a 14-byte interrupt threshold */
    cpu::out8(port.0 + 2, 0xc7);

    /* Set RTS and DTR */
    cpu::out8(port.0 + 4, 0x0b);

    /* If clear to send, data set ready, and data carrier detect are set
     * mark this port as active!
     */
    if cpu::in8(port.0 + 6) & 0b10110000 == 0b10110000 {
        /* Mark port as active */
        port.2 = true;
    }
}

/// Invoke a closure on each port which has been identified
fn for_each_port<F: FnMut(u16)>(mut func: F)
{
    unsafe {
        /* If ports are not initialized, initialize them */
        if !COM1.3 { init(&mut COM1) }
        if !COM2.3 { init(&mut COM2) }
        if !COM3.3 { init(&mut COM3) }
        if !COM4.3 { init(&mut COM4) }

        if COM1.1 { func(COM1.0) }
        if COM2.1 { func(COM2.0) }
        if COM3.1 { func(COM3.0) }
        if COM4.1 { func(COM4.0) }
    }
}

/// Write a byte to the serial port data port
pub fn write_byte(byte: u8)
{
    /* LF implies CR+LF */
    if byte == b'\n' {
        write_byte(b'\r');
    }

    for_each_port(|port| {
        unsafe {
            while (cpu::in8(port + 5) & 0x20) == 0 {}
            cpu::out8(port, byte);
        }
    });
}

/// Write bytes to the serial device
pub fn write_bytes(data: &[u8])
{
    for &byte in data {
        write_byte(byte);
    }
}

/// Write a string to the serial device as UTF-8 bytes
pub fn write(string: &str)
{
    write_bytes(string.as_bytes());
}

/// Returns Some(byte) if a byte is present on the serial port, otherwise
/// returns None
pub fn probe_byte() -> Option<u8>
{
    let mut byte = None;

    for_each_port(|port| {
        unsafe {
            if byte.is_none() && (cpu::in8(port + 5) & 1) != 0 {
                byte = Some(cpu::in8(port));
            }
        }
    });

    byte
}

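As a rough sketch of how a no_std consumer (such as the kernel or bootloader) would drive this interface, assuming the crate is pulled in as `serial`, a polling echo loop over whichever COM ports were detected could look like:

#[macro_use] extern crate serial;

/// Echo any received byte back out of every detected COM port
fn echo_loop() -> ! {
    print!("serial echo ready\n");

    loop {
        if let Some(byte) = serial::probe_byte() {
            serial::write_byte(byte);
        }
    }
}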
-------------------------------------------------------------------------------- /src/main.rs: --------------------------------------------------------------------------------
use std::process::Command;
use std::path::Path;

const BOOTFILE_NAME: &'static str = "orange_slice.boot";
const KERNEL_NAME: &'static str = "orange_slice.kern";
const KERNEL_PATH: &'static str =
    "kernel/target/x86_64-pc-windows-msvc/release/kernel.exe";

fn main()
{
    const DEPLOY_PATHS: &[&str] = &[
        "C:/dev/tftpd", "D:/tftpd", "O:/tftpd", "Y:/tftpd", "Y:/fuzz_server",
        "/mnt/biggie/tftpd", "D:/orange_slice/emu",
    ];

    let args: Vec<String> = std::env::args().collect();

    if args.len() == 2 && args[1] == "clean" {
        /* Remove files */
        for filename in &["stage1.flat", BOOTFILE_NAME] {
            if Path::new(filename).exists() {
                print!("Removing {}...\n", filename);
                std::fs::remove_file(filename).expect("Failed to remove file");
            }
        }

        /* Clean bootloader */
        print!("Cleaning bootloader...\n");
        std::env::set_current_dir("bootloader")
            .expect("Failed to chdir to bootloader");
        let status = Command::new("cargo").arg("clean")
            .status().expect("Failed to invoke bootloader clean");
        assert!(status.success(), "Failed to clean bootloader");

        /* Clean kernel */
        print!("Cleaning kernel...\n");
        std::env::set_current_dir("../kernel")
            .expect("Failed to chdir to kernel");
        let status = Command::new("cargo").arg("clean")
            .status().expect("Failed to invoke kernel clean");
        assert!(status.success(), "Failed to clean kernel");

        print!("Cleaned\n");
        return;
    } else if args.len() == 2 && args[1] == "doc" {
        print!("Documenting bootloader...\n");
        std::env::set_current_dir("bootloader")
            .expect("Failed to chdir to bootloader");
        let bootloader_status = Command::new("cargo")
            .args(&["doc", "--release"])
            .env("RUSTDOCFLAGS", "--document-private-items")
            .status()
            .expect("Failed to invoke doc of bootloader");
        assert!(bootloader_status.success(), "Failed to doc bootloader");

        print!("Documenting kernel...\n");
        std::env::set_current_dir("../kernel")
            .expect("Failed to chdir to kernel");
        let kernel_status = Command::new("cargo")
            .args(&["doc", "--release"])
            .env("RUSTDOCFLAGS", "--document-private-items")
            .status()
            .expect("Failed to invoke doc of kernel");
        assert!(kernel_status.success(), "Failed to doc kernel");

        print!("Documenting done!\n");
        return;
    }

    /* Build stage1. This is the Rust portion of the bootloader */
    print!("Building stage1...\n");
    std::env::set_current_dir("bootloader")
        .expect("Failed to chdir to bootloader");
    let bootloader_status = Command::new("cargo")
        .args(&["build", "--release"])
        .status()
        .expect("Failed to invoke build of bootloader");
    assert!(bootloader_status.success(), "Failed to build bootloader");

    /* Flatten the bootloader. This will take the PE produced by the
     * bootloader and convert it to an in-memory loaded representation such
     * that it can be incbined by the stage0.
     */
    print!("Flattening bootloader...\n");
    std::env::set_current_dir("..").expect("Failed to chdir to original dir");
    let flatten_status = Command::new("python")
        .args(&["flatten_pe.py",
            "bootloader/target/i586-pc-windows-msvc/release/stage1.exe",
            "stage1.flat"])
        .status()
        .expect("Failed to invoke flatten script");
    assert!(flatten_status.success(), "Failed to flatten bootloader");

    /* Assemble stage0. This produces the final bootable bootloader. This
     * is a tiny trampoline 16-bit assembly snippet that switches to protected
     * mode and jumps into the incbined flattened PE file.
     */
    print!("Assembling bootloader...\n");
    let stage0_status = Command::new("nasm")
        .args(&["-f", "bin", "-o", BOOTFILE_NAME, "bootloader/stage0.asm"])
        .status()
        .expect("Failed to invoke NASM for stage0");
    assert!(stage0_status.success(), "Failed to assemble bootloader");

    print!("Bootloader successfully built\n");

    let md = std::fs::metadata(BOOTFILE_NAME)
        .expect("Failed to get metadata for bootloader");
    assert!(md.is_file(), "Bootloader is not a file!?");

    print!("Bootloader size is {} bytes ({:8.4}%)\n", md.len(),
        md.len() as f64 / (32. * 1024.) * 100.0);

    assert!(md.len() <= (32 * 1024), "Bootloader is too large!");

    print!("Deploying bootloader...\n");

    /* Attempt to deploy bootloader to various different TFTP directories.
     * Since I work with this codebase on multiple networks and systems, this
     * is just a list of the paths that work on each for deployment. It'll try
     * to deploy to all of them.
     */
    for tftpd_dir in DEPLOY_PATHS {
        if !Path::new(tftpd_dir).exists() {
            continue;
        }

        print!("Deploying bootloader to {}...\n", tftpd_dir);
        std::fs::copy(BOOTFILE_NAME, Path::new(tftpd_dir).join(BOOTFILE_NAME))
            .expect("Failed to copy file");
    }

    print!("Bootloader successfully deployed\n");

    /* Build kernel */
    print!("Building kernel...\n");
    std::env::set_current_dir("kernel")
        .expect("Failed to chdir to kernel");

    let kernel_status = Command::new("cargo")
        .args(&["build", "--release"])
        .status()
        .expect("Failed to invoke build of kernel");
    assert!(kernel_status.success(), "Failed to build kernel");

    std::env::set_current_dir("..").expect("Failed to chdir to original dir");

    /* Deploy kernel, same as bootloader */
    for tftpd_dir in DEPLOY_PATHS {
        if !Path::new(tftpd_dir).exists() {
            continue;
        }

        print!("Deploying kernel to {}...\n", tftpd_dir);
        std::fs::copy(KERNEL_PATH, Path::new(tftpd_dir).join(KERNEL_NAME))
            .expect("Failed to copy file");
    }
}
--------------------------------------------------------------------------------
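The flattening step in the builder above hands the real work to `flatten_pe.py`, but the idea is small enough to sketch: write every PE section at its virtual address into one contiguous buffer, so the output is an already-loaded image that stage0 can simply incbin and jump into. The `Section` type below is a hypothetical stand-in for illustration, not the script's actual data model:

/* Hypothetical, simplified view of a PE section: where it should live in
 * memory (relative to the image base) and its raw bytes. */
struct Section {
    virtual_address: usize,
    data: Vec<u8>,
}

/* Lay every section out at its virtual address in one flat buffer */
fn flatten(sections: &[Section]) -> Vec<u8> {
    /* Size the image so it covers the end of the highest section */
    let size = sections.iter()
        .map(|sec| sec.virtual_address + sec.data.len())
        .max()
        .unwrap_or(0);

    let mut flat = vec![0u8; size];
    for sec in sections {
        let end = sec.virtual_address + sec.data.len();
        flat[sec.virtual_address..end].copy_from_slice(&sec.data);
    }
    flat
}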