├── .gitignore ├── Cargo.toml ├── LICENSE.md ├── README.md ├── rust-toolchain.toml └── src └── lib.rs /.gitignore: -------------------------------------------------------------------------------- 1 | /target 2 | Cargo.lock 3 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "whorl" 3 | version = "0.2.0" 4 | authors = ["Michael Gattozzi "] 5 | edition = "2021" 6 | 7 | [dependencies] 8 | 9 | [dev-dependencies] 10 | rand = "0.8.5" 11 | -------------------------------------------------------------------------------- /LICENSE.md: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2021 Michael Gattozzi 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # whorl - A single file, std only, async Rust executor 2 | 3 | whorl was created to teach you how async executors work in Rust. It is not the 4 | fastest executor nor is it's API perfect, but it will teach you about them and 5 | how they work and where to get started if you wanted to make your own. It's 6 | written in a literate programming style such that reading it from beginning to 7 | end tells you a story about how it works or you can read parts of it in chunks 8 | depending on what you want to get out of it. 9 | 10 | You can read it all [online here on GitHub](https://github.com/mgattozzi/whorl/blob/main/src/lib.rs) 11 | or you can clone the repo yourself and open up `src/lib.rs` to read through it 12 | in your favorite text editor or play around with it and change things. All of 13 | the code is licensed under the `MIT License` so you're mostly free to do with it 14 | as you wish. If you want to make the next `tokio` or just make something for fun 15 | you can do that. 16 | 17 | If you just want to see it in action an example test program is included as part 18 | of the file. You can see it's output by just running: 19 | 20 | ```bash 21 | cargo test -- --nocapture 22 | ``` 23 | 24 | Which should look something like this: 25 | 26 | ```bash 27 | whorl on  main [!⇡] is 📦 v0.1.0 via 🦀 v1.56.0 took 10s 28 | ❯ cargo test -- --nocapture 29 | Compiling whorl v0.1.0 (/home/michael/whorl) 30 | Finished test [unoptimized + debuginfo] target(s) in 0.47s 31 | Running unittests (target/debug/deps/whorl-6d670ffb5bb225ca) 32 | 33 | running 1 test 34 | Begin Asynchronous Execution 35 | Blocking Function Polled To Completion 36 | Spawned Fn #00: Start 1635276666 37 | Spawned Fn #01: Start 1635276666 38 | Spawned Fn #02: Start 1635276666 39 | Spawned Fn #03: Start 1635276666 40 | Spawned Fn #04: Start 1635276666 41 | Spawned Fn #00: Ended 1635276669 42 | Spawned Fn #02: Ended 1635276669 43 | Spawned Fn #03: Ended 1635276669 44 | Spawned Fn #01: Ended 1635276670 45 | Spawned Fn #00: Inner 1635276671 46 | Spawned Fn #03: Inner 1635276674 47 | Spawned Fn #04: Ended 1635276675 48 | Spawned Fn #02: Inner 1635276675 49 | Spawned Fn #01: Inner 1635276678 50 | Spawned Fn #04: Inner 1635276678 51 | End of Asynchronous Execution 52 | test library_test ... ok 53 | 54 | test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 23.00s 55 | 56 | Doc-tests whorl 57 | 58 | running 0 tests 59 | 60 | test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s 61 | ``` 62 | -------------------------------------------------------------------------------- /rust-toolchain.toml: -------------------------------------------------------------------------------- 1 | [toolchain] 2 | channel = "1.81.0" 3 | components = [ "rustfmt", "rust-analyzer", "clippy" ] 4 | -------------------------------------------------------------------------------- /src/lib.rs: -------------------------------------------------------------------------------- 1 | //! # A Whorlwind Tour in Building a Rust Async Executor 2 | //! 3 | //! whorl is a self contained library to run asynchronous Rust code with the 4 | //! following goals in mind: 5 | //! 6 | //! - Keep it in one file. You should be able to read this code beginning to end 7 | //! like a literate program and understand what each part does and how it fits 8 | //! into the larger narrative. The code is organized to tell a story, not 9 | //! necessarily how I would normally structure Rust code. 10 | //! - Teach others what is going on when you run async code in Rust with a runtime 11 | //! like tokio. There is no magic, just many synchronous functions in an async 12 | //! trenchcoat. 13 | //! - Explain why different runtimes are incompatible, even if they all run async 14 | //! programs. 15 | //! - Only use the `std` crate to show that yes all the tools to build one exist 16 | //! and if you wanted to, you could. Note we only use the rand crate for our 17 | //! example test, it is not used for the runtime itself 18 | //! - Use only stable Rust. You can build this today; no fancy features needed. 19 | //! - Explain why `std` doesn't ship an executor, but just the building blocks. 20 | //! 21 | //! What whorl isn't: 22 | //! - Performant, this is an adaptation of a class I gave at Rustconf back in 23 | //! 2018. Its first and foremost goal is to teach *how* an executor 24 | //! works, not the best way to make it fast. Reading the tokio source 25 | //! code would be a really good thing if you want to learn about how to make 26 | //! things performant and scalable. 27 | //! - "The Best Way". Programmers have opinions, I think we should maybe have 28 | //! less of them sometimes. Even me. You might disagree with an API design 29 | //! choice or a way I did something here and that's fine. I just want you to 30 | //! learn how it all works. Async executors also have trade offs so one way 31 | //! might work well for your code and not for another person's code. 32 | //! - An introduction to Rust. This assumes you're somewhat familiar with it and 33 | //! while I've done my best to break it down so that it is easy to understand, 34 | //! that just might not be the case and I might gloss over details given I've 35 | //! used Rust since 1.0 came out in 2015. Expert blinders are real and if 36 | //! things are confusing, do let me know in the issue tracker. I'll try my best 37 | //! to make it easier to grok, but if you've never touched Rust before, this is 38 | //! in all honesty not the best place to start. 39 | //! 40 | //! With all of that in mind, let's dig into it all! 41 | 42 | pub mod futures { 43 | //! This is our module to provide certain kinds of futures to users. In the case 44 | //! of our [`Sleep`] future here, this is not dependent on the runtime in 45 | //! particular. We would be able to run this on any executor that knows how to 46 | //! run a future. Where incompatibilities arise is if you use futures or types 47 | //! that depend on the runtime or traits not defined inside of the standard 48 | //! library. For instance, `std` does not provide an `AsyncRead`/`AsyncWrite` 49 | //! trait as of Oct 2024. As a result, if you want to provide the functionality 50 | //! to asynchronously read or write to something, then that trait tends to be 51 | //! written for an executor. So tokio would have its own `AsyncRead` and so 52 | //! would ours for instance. Now if a new library wanted to write a type that 53 | //! can, say, read from a network socket asynchronously, they'd have to write an 54 | //! implementation of `AsyncRead` for both executors. Not great. Another way 55 | //! incompatibilities can arise is when those futures depend on the state of the 56 | //! runtime itself. Now that implementation is locked to the runtime. 57 | //! 58 | //! Sometimes this is actually okay; maybe the only way to implement 59 | //! something is depending on the runtime state. In other ways it's not 60 | //! great. Things like `AsyncRead`/`AsyncWrite` would be perfect additions 61 | //! to the standard library at some point since they describe things that 62 | //! everyone would need, much like how `Read`/`Write` are in stdlib and we 63 | //! all can write generic code that says, "I will work with anything that I 64 | //! can read or write to." 65 | //! 66 | //! This is why, however, things like Future, Context, Wake, Waker etc. all 67 | //! the components we need to build an executor are in the standard library. 68 | //! It means anyone can build an executor and accept most futures or work 69 | //! with most libraries without needing to worry about which executor they 70 | //! use. It reduces the burden on maintainers and users. In some cases 71 | //! though, we can't avoid it. Something to keep in mind as you navigate the 72 | //! async ecosystem and see that some libraries can work on any executor or 73 | //! some ask you to opt into which executor you want with a feature flag. 74 | use std::{ 75 | future::Future, 76 | pin::Pin, 77 | task::{Context, Poll}, 78 | time::SystemTime, 79 | }; 80 | 81 | /// A future that will allow us to sleep and block further execution of the 82 | /// future it's used in without blocking the thread itself. It will be 83 | /// polled and if the timer is not up, then it will yield execution to the 84 | /// executor. 85 | pub struct Sleep { 86 | /// What time the future was created at, not when it was started to be 87 | /// polled. 88 | now: SystemTime, 89 | /// How long in the future in ms we must wait till we return 90 | /// that the future has finished polling. 91 | ms: u128, 92 | } 93 | 94 | impl Sleep { 95 | /// A simple API whereby we take in how long the consumer of the API 96 | /// wants to sleep in ms and set now to the time of creation and 97 | /// return the type itself, which is a Future. 98 | pub fn new(ms: u128) -> Self { 99 | Self { 100 | now: SystemTime::now(), 101 | ms, 102 | } 103 | } 104 | } 105 | 106 | impl Future for Sleep { 107 | /// We don't need to return a value for [`Sleep`], as we just want it to 108 | /// block execution for a while when someone calls `await` on it. 109 | type Output = (); 110 | /// The actual implementation of the future, where you can call poll on 111 | /// [`Sleep`] if it's pinned and the pin has a mutable reference to 112 | /// [`Sleep`]. In this case we don't need to utilize 113 | /// [`Context`][std::task::Context] here and in fact you often will not. 114 | /// It only serves to provide access to a `Waker` in case you need to 115 | /// wake the task. Since we always do that in our executor, we don't need 116 | /// to do so here, but you might find if you manually write a future 117 | /// that you need access to the waker to wake up the task in a special 118 | /// way. Waking up the task just means we put it back into the executor 119 | /// to be polled again. 120 | fn poll(self: Pin<&mut Self>, _: &mut Context) -> Poll { 121 | // If enough time has passed, then when we're polled we say that 122 | // we're ready and the future has slept enough. If not, we just say 123 | // that we're pending and need to be re-polled, because not enough 124 | // time has passed. 125 | if self.now.elapsed().unwrap().as_millis() >= self.ms { 126 | Poll::Ready(()) 127 | } else { 128 | Poll::Pending 129 | } 130 | } 131 | } 132 | 133 | // In practice, what we do when we sleep is something like this: 134 | // ``` 135 | // async fn example() { 136 | // Sleep::new(2000).await; 137 | // } 138 | // ``` 139 | // 140 | // Which is neat and all but how is that future being polled? Well, this 141 | // all desugars out to: 142 | // ``` 143 | // async fn example() { 144 | // let mut sleep = Sleep::new(2000); 145 | // loop { 146 | // match Pin::new(sleep).as_mut().poll(&mut context) { 147 | // Poll::Ready(()) => (), 148 | // // You can't write yield yourself as this is an unstable 149 | // // feature currently 150 | // Poll::Pending => yield, 151 | // } 152 | // } 153 | // } 154 | } 155 | 156 | #[test] 157 | /// To understand what we'll build, we need to see and understand what we will 158 | /// run and the output we expect to see. Note that if you wish to run this test, 159 | /// you should use the command `cargo test -- --nocapture` so that you can see 160 | /// the output of `println` being used, otherwise it'll look like nothing is 161 | /// happening at all for a while. 162 | fn library_test() { 163 | // We're going to import our Sleep future to make sure that it works, 164 | // because it's not a complicated future and it's easy to see the 165 | // asynchronous nature of the code. 166 | use crate::{futures::Sleep, runtime}; 167 | // We want some random numbers so that the sleep futures finish at different 168 | // times. If we didn't, then the code would look synchronous in nature even 169 | // if it isn't. This is because we schedule and poll tasks in what is 170 | // essentially a loop unless we use block_on. 171 | use rand::Rng; 172 | // We need to know the time to show when a future completes. Time is cursed 173 | // and it's best we do not dabble too much in it. 174 | use std::time::SystemTime; 175 | 176 | // This function causes the runtime to block on this future. It does so by 177 | // just taking this future and polling it till completion in a loop and 178 | // ignoring other tasks on the queue. Sometimes you need to block on async 179 | // functions and treat them as sync. A good example is running a webserver. 180 | // You'd want it to always be running, not just sometimes, and so blocking 181 | // it makes sense. In a single threaded executor this would block all 182 | // execution. In our case our executor is single-threaded. Technically it 183 | // runs on a separate thread from our program and so blocks running other 184 | // tasks, but the main function will keep running. This is why we call 185 | // `wait` to make sure we wait till all futures finish executing before 186 | // exiting. 187 | runtime::block_on(async { 188 | const SECOND: u128 = 1000; //ms 189 | println!("Begin Asynchronous Execution"); 190 | // Create a random number generator so we can generate random numbers 191 | let mut rng = rand::thread_rng(); 192 | 193 | // A small function to generate the time in seconds when we call it. 194 | let time = || { 195 | SystemTime::now() 196 | .duration_since(SystemTime::UNIX_EPOCH) 197 | .unwrap() 198 | .as_secs() 199 | }; 200 | 201 | // Spawn 5 different futures on our executor 202 | for i in 0..5 { 203 | // Generate the two numbers between 1 and 9. We'll spawn two futures 204 | // that will sleep for as many seconds as the random number creates 205 | let random = rng.gen_range(1..10); 206 | let random2 = rng.gen_range(1..10); 207 | 208 | // We now spawn a future onto the runtime from within our future 209 | runtime::spawn(async move { 210 | println!("Spawned Fn #{:02}: Start {}", i, time()); 211 | // This future will sleep for a certain amount of time before 212 | // continuing execution 213 | Sleep::new(SECOND * random).await; 214 | // After the future waits for a while, it then spawns another 215 | // future before printing that it finished. This spawned future 216 | // then sleeps for a while and then prints out when it's done. 217 | // Since we're spawning futures inside futures, the order of 218 | // execution can change. 219 | runtime::spawn(async move { 220 | Sleep::new(SECOND * random2).await; 221 | println!("Spawned Fn #{:02}: Inner {}", i, time()); 222 | }); 223 | println!("Spawned Fn #{:02}: Ended {}", i, time()); 224 | }); 225 | } 226 | // To demonstrate that block_on works we block inside this future before 227 | // we even begin polling the other futures. 228 | runtime::block_on(async { 229 | // This sleeps longer than any of the spawned functions, but we poll 230 | // this to completion first even if we await here. 231 | Sleep::new(11000).await; 232 | println!("Blocking Function Polled To Completion"); 233 | }); 234 | }); 235 | 236 | // We now wait on the runtime to complete each of the tasks that were 237 | // spawned before we exit the program 238 | runtime::wait(); 239 | println!("End of Asynchronous Execution"); 240 | 241 | // When all is said and done when we run this test we should get output that 242 | // looks somewhat like this (though in different orders in each execution): 243 | // 244 | // Begin Asynchronous Execution 245 | // Blocking Function Polled To Completion 246 | // Spawned Fn #00: Start 1634664688 247 | // Spawned Fn #01: Start 1634664688 248 | // Spawned Fn #02: Start 1634664688 249 | // Spawned Fn #03: Start 1634664688 250 | // Spawned Fn #04: Start 1634664688 251 | // Spawned Fn #01: Ended 1634664690 252 | // Spawned Fn #01: Inner 1634664691 253 | // Spawned Fn #04: Ended 1634664694 254 | // Spawned Fn #04: Inner 1634664695 255 | // Spawned Fn #00: Ended 1634664697 256 | // Spawned Fn #02: Ended 1634664697 257 | // Spawned Fn #03: Ended 1634664697 258 | // Spawned Fn #00: Inner 1634664698 259 | // Spawned Fn #03: Inner 1634664698 260 | // Spawned Fn #02: Inner 1634664702 261 | // End of Asynchronous Execution 262 | } 263 | 264 | pub mod runtime { 265 | use std::{ 266 | // We need a place to put the futures that get spawned onto the runtime 267 | // somewhere and while we could use something like a `Vec`, we chose a 268 | // `LinkedList` here. One reason being that we can put tasks at the front of 269 | // the queue if they're a blocking future. The other being that we use a 270 | // constant amount of memory. We only ever use as much as we need for tasks. 271 | // While this might not matter at a small scale, this does at a larger 272 | // scale. If your `Vec` never gets smaller and you have a huge burst of 273 | // tasks under, say, heavy HTTP loads in a web server, then you end up eating 274 | // up a lot of memory that could be used for other things running on the 275 | // same machine. In essence what you've created is a kind of memory leak 276 | // unless you make sure to resize the `Vec`. @mycoliza did a good Twitter 277 | // thread on this here if you want to learn more! 278 | // 279 | // https://twitter.com/mycoliza/status/1298399240121544705 280 | collections::LinkedList, 281 | // A Future is the fundamental block of any async executor. It is a trait 282 | // that types can make or an unnameable type that an async function can 283 | // make. We say it's unnameable because you don't actually define the type 284 | // anywhere and just like a closure you can only specify its behavior with 285 | // a trait. You can't give it a name like you would when you do something 286 | // like `pub struct Foo;`. These types, whether nameable or not, represent all 287 | // the state needed to have an asynchronous function. You poll the future to 288 | // drive its computation along like a state machine that makes transistions 289 | // from one state to another till it finishes. If you reach a point where it 290 | // would yield execution, then it needs to be rescheduled to be polled again 291 | // in the future. It yields though so that you can drive other futures 292 | // forward in their computation! 293 | // 294 | // This is the important part to understand here with the executor: the 295 | // Future trait defines the API we use to drive forward computation of it, 296 | // while the implementor of the trait defines how that computation will work 297 | // and when to yield to the executor. You'll see later that we have an 298 | // example of writing a `Sleep` future by hand as well as unnameable async 299 | // code using `async { }` and we'll expand on when those yield and what it 300 | // desugars to in practice. We're here to demystify the mystical magic of 301 | // async code. 302 | future::Future, 303 | // Ah Pin. What a confusing type. The best way to think about `Pin` is that 304 | // it records when a value became immovable or pinned in place. `Pin` doesn't 305 | // actually pin the value, it just notes that the value will not move, much 306 | // in the same way that you can specify Rust lifetimes. It only records what 307 | // the lifetime already is, it doesn't actually create said lifetime! At the 308 | // bottom of this, I've linked some more in depth reading on Pin, but if you 309 | // don't know much about Pin, starting with the standard library docs isn't a 310 | // bad place. 311 | // 312 | // Note: Unpin is also a confusing name and if you think of it as 313 | // MaybePinned you'll have a better time as the value may be pinned or it 314 | // may not be pinned. It just marks that if you have a Pinned value and it 315 | // moves that's okay and it's safe to do so, whereas for types that do not 316 | // implement Unpin and they somehow move, will cause some really bad things 317 | // to happen since it's not safe for the type to be moved after being 318 | // pinned. We create our executor with the assumption that every future we 319 | // get will need to be a pinned value, even if it is actually Unpin. This 320 | // makes it nicer for everyone using the executor as it's very easy to make 321 | // types that do not implement Unpin. 322 | pin::Pin, 323 | sync::{ 324 | // What's not to love about Atomics? This lets us have thread safe 325 | // access to primitives so that we can modify them or load them using 326 | // Ordering to tell the compiler how it should handle giving out access 327 | // to the data. Atomics are a rather deep topic that's out of scope for 328 | // this. Just note that we want to change a usize safely across threads! 329 | // If you want to learn more Mara Bos' book `Rust Atomics and Locks` is 330 | // incredibly good: https://marabos.nl/atomics/ 331 | atomic::{AtomicUsize, Ordering}, 332 | // Arc is probably one of the more important types we'll use in the 333 | // executor. It lets us freely clone cheap references to the data which 334 | // we can use across threads while making it easy to not have to worry about 335 | // complicated lifetimes since we can easily own the data with a call to 336 | // clone. It's one of my favorite types in the standard library. 337 | Arc, 338 | // Normally I would use `parking_lot` for a Mutex, but the goal is to 339 | // use stdlib only. A personal gripe is that it cares about Mutex 340 | // poisoning (when a thread panics with a hold on the lock), which is 341 | // not something I've in practice run into (others might!) and so calling 342 | // `lock().unwrap()` everywhere can get a bit tedious. That being said 343 | // Mutexes are great. You make sure only one thing has access to the data 344 | // at any given time to access or change it. 345 | Mutex, 346 | }, 347 | // The task module contains all of the types and traits related to 348 | // having an executor that can create and run tasks that are `Futures` 349 | // that need to be polled. 350 | task::{ 351 | // `Context` is passed in every call to `poll` for a `Future`. We 352 | // didn't use it in our `Sleep` one, but it has to be passed in 353 | // regardless. It gives us access to the `Waker` for the future so 354 | // that we can call it ourselves inside the future if need be! 355 | Context, 356 | // Poll is the enum returned from when we poll a `Future`. When we 357 | // call `poll`, this drives the `Future` forward until it either 358 | // yields or it returns a value. `Poll` represents that. It is 359 | // either `Poll::Pending` or `Poll::Ready(T)`. We use this to 360 | // determine if a `Future` is done or not and if not, then we should 361 | // keep polling it. 362 | Poll, 363 | // This is a trait to define how something in an executor is woken 364 | // up. We implement it for `Task` which is what lets us create a 365 | // `Waker` from it, to then make a `Context` which can then be 366 | // passed into the call to `poll` on the `Future` inside the `Task`. 367 | Wake, 368 | // A `Waker` is the type that has a handle to the runtime to let it 369 | // know when a task is ready to be scheduled for polling. We're 370 | // doing a very simple version where as soon as a `Task` is done 371 | // polling we tell the executor to wake it. Instead what you might 372 | // want to do when creating a `Future` is have a more involved way 373 | // to only wake when it would be ready to poll, such as a timer 374 | // completing, or listening for some kind of signal from the OS. 375 | // It's kind of up to the executor how it wants to do it. Maybe how 376 | // it schedules things is different or it has special behavior for 377 | // certain `Future`s that it ships with it. The key thing to note 378 | // here is that this is how tasks are supposed to be rescheduled for 379 | // polling. 380 | Waker, 381 | }, 382 | }; 383 | 384 | /// This is it, the thing we've been alluding to for most of this file. It's 385 | /// the `Runtime`! What is it? What does it do? Well the `Runtime` is what 386 | /// actually drives our async code to completion. Remember asynchronous code 387 | /// is just code that gets run for a bit, yields part way through the 388 | /// function, then continues when polled and it repeats this process till 389 | /// being completed. In reality what this means is that the code is run 390 | /// using synchronous functions that drive tasks in a concurrent manner. 391 | /// They could also be run concurrently and/or in parallel if the executor 392 | /// is multithreaded. Tokio is a good example of this model where it runs 393 | /// tasks in parallel on separate threads and if it has more tasks than 394 | /// threads, it runs them concurrently on those threads. 395 | /// 396 | /// Our `Runtime` in particular has: 397 | pub(crate) struct Runtime { 398 | /// A queue to place all of the tasks that are spawned on the runtime. 399 | queue: Queue, 400 | /// A `Spawner` which can spawn tasks onto our queue for us easily and 401 | /// lets us call `spawn` and `block_on` with ease. 402 | spawner: Spawner, 403 | /// A counter for how many Tasks are on the runtime. We use this in 404 | /// conjunction with `wait` to block until there are no more tasks on 405 | /// the executor. 406 | tasks: AtomicUsize, 407 | } 408 | 409 | /// Our runtime type is designed such that we only ever have one running. 410 | /// You might want to have multiple running in production code though. For 411 | /// instance you limit what happens on one runtime for a free tier version 412 | /// and let the non-free version use as many resources as it can. We 413 | /// implement 3 functions: `start` to actually get async code running, `get` 414 | /// so that we can get references to the runtime, and `spawner` a 415 | /// convenience function to get a `Spawner` to spawn tasks onto the `Runtime`. 416 | impl Runtime { 417 | /// This is what actually drives all of our async code. We spawn a 418 | /// separate thread that loops getting the next task off the queue and 419 | /// if it exists polls it or continues if not. It also checks if the 420 | /// task should block and if it does it just keeps polling the task 421 | /// until it completes! Otherwise it wakes the task to put it back in 422 | /// the queue in the non-blocking version if it's still pending. 423 | /// Otherwise it drops the task by not putting it back into the queue 424 | /// since it's completed. 425 | fn start() { 426 | std::thread::spawn(|| loop { 427 | let task = match Runtime::get().queue.lock().unwrap().pop_front() { 428 | Some(task) => task, 429 | None => continue, 430 | }; 431 | if task.will_block() { 432 | while task.poll().is_pending() {} 433 | } else if task.poll().is_pending() { 434 | task.wake(); 435 | } 436 | }); 437 | } 438 | 439 | /// A function to get a reference to the `Runtime` 440 | pub(crate) fn get() -> &'static Runtime { 441 | &RUNTIME 442 | } 443 | 444 | /// A function to get a new `Spawner` from the `Runtime` 445 | pub(crate) fn spawner() -> Spawner { 446 | Runtime::get().spawner.clone() 447 | } 448 | } 449 | 450 | /// We now create our static type to represent the singular `Runtime` when 451 | /// it is finally initialized. We're using the LazyLock type added in Rust 452 | /// 1.80.0 which allows us to safely initialize a static at runtime that 453 | /// we can then refer too in our program 454 | static RUNTIME: std::sync::LazyLock = std::sync::LazyLock::new(|| { 455 | // This is okay to call because any calls to `Runtime::get()` 456 | // will be blocked until we fully initialize the static which 457 | // will block until it finishes initializing. So we start the runtime 458 | // inside the initialization function, which depends on it being 459 | // initialized, but it is able to wait until the runtime is actually 460 | // initialized and so it all just works. 461 | Runtime::start(); 462 | let queue = Arc::new(Mutex::new(LinkedList::new())); 463 | Runtime { 464 | spawner: Spawner { 465 | queue: queue.clone(), 466 | }, 467 | queue, 468 | tasks: AtomicUsize::new(0), 469 | } 470 | }); 471 | 472 | // The queue is a single linked list that contains all of the tasks being 473 | // run on it. We hand out access to it using a Mutex that has an Arc 474 | // pointing to it so that we can make sure only one thing is touching the 475 | // queue state at a given time. This isn't the most efficient pattern 476 | // especially if we wanted to have the runtime be truly multi-threaded, but 477 | // for the purposes of the code this works just fine. 478 | type Queue = Arc>>>; 479 | 480 | /// We've talked about the `Spawner` a lot up till this point, but it's 481 | /// really just a light wrapper around the queue that knows how to push 482 | /// tasks onto the queue and create new ones. 483 | #[derive(Clone)] 484 | pub(crate) struct Spawner { 485 | queue: Queue, 486 | } 487 | 488 | impl Spawner { 489 | /// This is the function that gets called by the `spawn` function to 490 | /// actually create a new `Task` in our queue. It takes the `Future`, 491 | /// constructs a `Task` and then pushes it to the back of the queue. 492 | fn spawn(self, future: impl Future + Send + Sync + 'static) { 493 | self.inner_spawn(Task::new(false, future)); 494 | } 495 | /// This is the function that gets called by the `spawn_blocking` function to 496 | /// actually create a new `Task` in our queue. It takes the `Future`, 497 | /// constructs a `Task` and then pushes it to the front of the queue 498 | /// where the runtime will check if it should block and then block until 499 | /// this future completes. 500 | fn spawn_blocking(self, future: impl Future + Send + Sync + 'static) { 501 | self.inner_spawn_blocking(Task::new(true, future)); 502 | } 503 | /// This function just takes a `Task` and pushes it onto the queue. We use this 504 | /// both for spawning new `Task`s and to push old ones that get woken up 505 | /// back onto the queue. 506 | fn inner_spawn(self, task: Arc) { 507 | self.queue.lock().unwrap().push_back(task); 508 | } 509 | /// This function takes a `Task` and pushes it to the front of the queue 510 | /// if it is meant to block. We use this both for spawning new blocking 511 | /// `Task`s and to push old ones that get woken up back onto the queue. 512 | fn inner_spawn_blocking(self, task: Arc) { 513 | self.queue.lock().unwrap().push_front(task); 514 | } 515 | } 516 | 517 | /// Spawn a non-blocking `Future` onto the `whorl` runtime 518 | pub fn spawn(future: impl Future + Send + Sync + 'static) { 519 | Runtime::spawner().spawn(future); 520 | } 521 | /// Block on a `Future` and stop others on the `whorl` runtime until this 522 | /// one completes. 523 | pub fn block_on(future: impl Future + Send + Sync + 'static) { 524 | Runtime::spawner().spawn_blocking(future); 525 | } 526 | /// Block further execution of a program until all of the tasks on the 527 | /// `whorl` runtime are completed. 528 | pub fn wait() { 529 | let runtime = Runtime::get(); 530 | while runtime.tasks.load(Ordering::Relaxed) > 0 {} 531 | } 532 | 533 | /// The `Task` is the basic unit for the executor. It represents a `Future` 534 | /// that may or may not be completed. We spawn `Task`s to be run and poll 535 | /// them until completion in a non-blocking manner unless specifically asked 536 | /// for. 537 | struct Task { 538 | /// This is the actual `Future` we will poll inside of a `Task`. We `Box` 539 | /// and `Pin` the `Future` when we create a task so that we don't need 540 | /// to worry about pinning or more complicated things in the runtime. We 541 | /// also need to make sure this is `Send + Sync` so we can use it across threads 542 | /// and so we lock the `Pin>` inside a `Mutex`. 543 | /// It's worth noting that boxing a `Future` is `Pin` since the `Box` is 544 | /// a pointer to an item on the heap. The pointer can be moved around, but 545 | /// what it points too won't move at all until it is dropped. When we call 546 | /// `Box::pin` we're just putting the `Future` on the heap and marking for 547 | /// the type system that this type will not move anymore. 548 | future: Mutex + Send + Sync + 'static>>>, 549 | /// We need a way to check if the runtime should block on this task and 550 | /// so we use a boolean here to check that! 551 | block: bool, 552 | } 553 | 554 | impl Task { 555 | /// This constructs a new task by increasing the count in the runtime of 556 | /// how many tasks there are, pinning the `Future`, and wrapping it all 557 | /// in an `Arc`. 558 | fn new(block: bool, future: impl Future + Send + Sync + 'static) -> Arc { 559 | Runtime::get().tasks.fetch_add(1, Ordering::Relaxed); 560 | Arc::new(Task { 561 | future: Mutex::new(Box::pin(future)), 562 | block, 563 | }) 564 | } 565 | 566 | /// We want to use the `Task` itself as a `Waker` which we'll get more 567 | /// into below. This is a convenience method to construct a new `Waker`. 568 | /// A neat thing to note for `poll` and here as well is that we can 569 | /// restrict a method such that it will only work when `self` is a 570 | /// certain type. In this case you can only call `waker` if the type is 571 | /// a `&Arc`. If it was just `Task` it would not compile or work. 572 | fn waker(self: &Arc) -> Waker { 573 | self.clone().into() 574 | } 575 | 576 | /// This is a convenience method to `poll` a `Future` by creating the 577 | /// `Waker` and `Context` and then getting access to the actual `Future` 578 | /// inside the `Mutex` and calling `poll` on that. 579 | fn poll(self: &Arc) -> Poll<()> { 580 | let waker = self.waker(); 581 | let mut ctx = Context::from_waker(&waker); 582 | self.future.lock().unwrap().as_mut().poll(&mut ctx) 583 | } 584 | 585 | /// Checks the `block` field to see if the `Task` is blocking. 586 | fn will_block(&self) -> bool { 587 | self.block 588 | } 589 | } 590 | 591 | /// Since we increase the count everytime we create a new task we also need 592 | /// to make sure that it *also* decreases the count every time it goes out 593 | /// of scope. This implementation of `Drop` does just that so that we don't 594 | /// need to bookeep about when and where to subtract from the count. 595 | impl Drop for Task { 596 | fn drop(&mut self) { 597 | Runtime::get().tasks.fetch_sub(1, Ordering::Relaxed); 598 | } 599 | } 600 | 601 | /// `Wake` is the crux of all of this executor as it's what lets us 602 | /// reschedule a task when it's ready to be polled. For our implementation 603 | /// we do a simple check to see if the task blocks or not and then spawn it back 604 | /// onto the executor in an appropriate manner. 605 | impl Wake for Task { 606 | fn wake(self: Arc) { 607 | if self.will_block() { 608 | Runtime::spawner().inner_spawn_blocking(self); 609 | } else { 610 | Runtime::spawner().inner_spawn(self); 611 | } 612 | } 613 | } 614 | } 615 | 616 | // That's it! A full asynchronous runtime with comments all in less than 1000 617 | // lines. Most of that being the actual comments themselves. I hope this made 618 | // how Rust async executors work less magical and more understandable. It's a 619 | // lot to take in, but at the end of the day it's just keeping track of state 620 | // and a couple of loops to get it all working. If you want to see how to write 621 | // a more performant executor that's being used in production and works really 622 | // well, then consider reading the source code for `tokio`. I myself learned 623 | // quite a bit reading it and it's fascinating and fairly well documented. 624 | // If you're interested in learning even more about async Rust or you want to 625 | // learn more in-depth things about it, then I recommend reading this list 626 | // of resources and articles I've found useful that are worth your time: 627 | // 628 | // - Asynchronous Programming in Rust: https://rust-lang.github.io/async-book/01_getting_started/01_chapter.html 629 | // - Getting in and out of trouble with Rust futures: https://fasterthanli.me/articles/getting-in-and-out-of-trouble-with-rust-futures 630 | // - Pin and Suffering: https://fasterthanli.me/articles/pin-and-suffering 631 | // - Understanding Rust futures by going way too deep: https://fasterthanli.me/articles/understanding-rust-futures-by-going-way-too-deep 632 | // - How Rust optimizes async/await 633 | // - Part 1: https://tmandry.gitlab.io/blog/posts/optimizing-await-1/ 634 | // - Part 2: https://tmandry.gitlab.io/blog/posts/optimizing-await-2/ 635 | // - The standard library docs have even more information and are worth reading. 636 | // Below are the modules that contain all the types and traits necessary to 637 | // actually create and run async code. They're fairly in-depth and sometimes 638 | // require reading other parts to understand a specific part in a really weird 639 | // dependency graph of sorts, but armed with the knowledge of this executor it 640 | // should be a bit easier to grok what it all means! 641 | // - task module: https://doc.rust-lang.org/stable/std/task/index.html 642 | // - pin module: https://doc.rust-lang.org/stable/std/pin/index.html 643 | // - future module: https://doc.rust-lang.org/stable/std/future/index.html 644 | --------------------------------------------------------------------------------