├── .github └── workflows │ └── ci.yml ├── .gitignore ├── .rustfmt.toml ├── LICENSE ├── README.md ├── book.toml ├── ci ├── dictionary.txt └── spellcheck.sh ├── examples ├── 01_02_why_async │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 01_04_async_await_primer │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_02_future_trait │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_03_timer │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_04_executor │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 03_01_async_await │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 05_01_streams │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 05_02_iteration_and_concurrency │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_02_join │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_03_select │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_04_spawning │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 07_05_recursion │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 09_01_sync_tcp_server │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs ├── 09_02_async_tcp_server │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── 09_03_slow_request │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs ├── 09_04_concurrent_tcp_server │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── 09_05_final_tcp_server │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs ├── Cargo.toml ├── hello-world-join │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── hello-world-sleep │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── hello-world-spawn │ ├── Cargo.toml │ └── src │ │ └── main.rs └── hello-world │ ├── Cargo.toml │ └── src │ └── main.rs └── src ├── 01_getting_started ├── 01_chapter.md ├── 02_why_async.md ├── 03_state_of_async_rust.md └── 04_async_await_primer.md ├── 02_execution ├── 01_chapter.md ├── 02_future.md ├── 03_wakeups.md ├── 04_executor.md └── 05_io.md ├── 03_async_await └── 01_chapter.md ├── 04_pinning └── 01_chapter.md ├── 05_streams ├── 01_chapter.md └── 
02_iteration_and_concurrency.md ├── 06_multiple_futures ├── 01_chapter.md ├── 02_join.md ├── 03_select.md └── 04_spawning.md ├── 07_workarounds ├── 01_chapter.md ├── 03_send_approximation.md ├── 04_recursion.md └── 05_async_in_traits.md ├── 08_ecosystem └── 00_chapter.md ├── 09_example ├── 00_intro.md ├── 01_running_async_code.md ├── 02_handling_connections_concurrently.md └── 03_tests.md ├── 12_appendix └── 01_translations.md ├── SUMMARY.md ├── assets └── swap_problem.jpg ├── intro.md ├── navigation ├── index.md ├── intro.md └── topics.md └── part-guide ├── async-await.md ├── concurrency-primitives.md ├── concurrency.md ├── dtors.md ├── futures.md ├── intro.md ├── io.md ├── more-async-await.md ├── runtimes.md ├── streams.md ├── sync.md ├── timers-signals.md └── tools.md /.github/workflows/ci.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: 7 | - master 8 | 9 | jobs: 10 | test: 11 | name: build and test 12 | runs-on: ubuntu-latest 13 | steps: 14 | - uses: actions/checkout@v2 15 | - name: Install Rust 16 | run: rustup update stable && rustup default stable 17 | - name: Install mdbook 18 | uses: taiki-e/install-action@mdbook 19 | - name: Install mdbook-linkcheck 20 | uses: taiki-e/install-action@mdbook-linkcheck 21 | - run: mdbook build 22 | - run: cargo test --all --manifest-path=./examples/Cargo.toml --target-dir ./target 23 | - uses: rust-lang/simpleinfra/github-actions/static-websites@master 24 | with: 25 | deploy_dir: book/html 26 | github_token: ${{ secrets.GITHUB_TOKEN }} 27 | if: github.event_name == 'push' && github.ref == 'refs/heads/master' && github.repository_owner == 'rust-lang' 28 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | /book/ 2 | /examples/target/ 3 | /examples/Cargo.lock 4 | target 
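The CI workflow above boils down to two build-and-test commands plus a deploy step. As a rough sketch, the local equivalent of the build-and-test portion might look like the following (assuming `mdbook`, `mdbook-linkcheck`, and a Rust toolchain are already installed; the static-website deploy step is CI-only and omitted):

```shell
#!/usr/bin/env bash
# Hypothetical local mirror of the CI job's build-and-test steps.
# Assumes mdbook and mdbook-linkcheck are on PATH, and that this is
# run from the repository root.
set -euo pipefail

# Build the book; the linkcheck backend runs as part of the build.
mdbook build

# Run the tests of every example crate in the workspace.
cargo test --all \
  --manifest-path=./examples/Cargo.toml \
  --target-dir ./target
```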
-------------------------------------------------------------------------------- /.rustfmt.toml: -------------------------------------------------------------------------------- 1 | # https://github.com/rust-lang/async-book/pull/59#issuecomment-556240879 2 | disable_all_formatting = true 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Aaron Turon 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Asynchronous Programming in Rust 2 | 3 | This book aims to be a thorough guide to asynchronous programming in Rust, from beginner to advanced. 
4 | 5 | This book has been unmaintained for a long time and has not had a lot of love. We're currently working to bring it up to date and make it much better! As we're making some major changes, the content might be a bit mixed up, parts may be duplicated or missing, etc. Bear with us, it'll get better soon :-) To see what we're planning and to let us know what you think, see [issue 224](https://github.com/rust-lang/async-book/issues/224). 6 | 7 | ## Requirements 8 | 9 | The async book is built with [`mdbook`] ([docs](https://rust-lang.github.io/mdBook/index.html)); you can install it and the link checker using cargo. 10 | 11 | ``` 12 | cargo install mdbook 13 | cargo install mdbook-linkcheck 14 | ``` 15 | 16 | [`mdbook`]: https://github.com/rust-lang/mdBook 17 | 18 | ## Building 19 | 20 | To create a finished book, run `mdbook build` to generate it under the `book/` directory. 21 | 22 | ``` 23 | mdbook build 24 | ``` 25 | 26 | ## Development 27 | 28 | While writing, it can be handy to see your changes; `mdbook serve` will launch a local web 29 | server to serve the book.
30 | 31 | ``` 32 | mdbook serve 33 | ``` 34 | -------------------------------------------------------------------------------- /book.toml: -------------------------------------------------------------------------------- 1 | [book] 2 | title = "Asynchronous Programming in Rust" 3 | authors = ["Taylor Cramer"] 4 | 5 | [build] 6 | create-missing = false 7 | 8 | [output.html] 9 | git-repository-url = "https://github.com/rust-lang/async-book" 10 | site-url = "/async-book/" 11 | 12 | [output.linkcheck] 13 | follow-web-links = true 14 | traverse-parent-directories = false 15 | -------------------------------------------------------------------------------- /ci/dictionary.txt: -------------------------------------------------------------------------------- 1 | personal_ws-1.1 en 0 utf-8 2 | ambiently 3 | APIs 4 | ArcWake 5 | async 6 | AsyncFuture 7 | asynchronous 8 | AsyncRead 9 | AsyncWrite 10 | AwaitingFutOne 11 | AwaitingFutTwo 12 | cancelling 13 | combinator 14 | combinators 15 | compat 16 | const 17 | coroutines 18 | dyn 19 | enqueued 20 | enum 21 | epoll 22 | FreeBSD 23 | FusedFuture 24 | FusedStream 25 | FutOne 26 | FutTwo 27 | FuturesUnordered 28 | GenFuture 29 | GitHub 30 | gRPC 31 | html 32 | http 33 | Hyper's 34 | impl 35 | implementors 36 | init 37 | interoperate 38 | interprocess 39 | IoBlocker 40 | IOCP 41 | IoObject 42 | JoinHandle 43 | kqueue 44 | localhost 45 | LocalExecutor 46 | metadata 47 | MockTcpStream 48 | multi 49 | multithreaded 50 | multithreading 51 | Mutex 52 | MyError 53 | MyFut 54 | MyType 55 | natively 56 | NotSend 57 | OtherType 58 | performant 59 | PhantomPinned 60 | pointee 61 | println 62 | proxied 63 | proxying 64 | pseudocode 65 | ReadIntoBuf 66 | recognise 67 | repo 68 | refactor 69 | RefCell 70 | repo 71 | repurposed 72 | requeue 73 | ResponseFuture 74 | reusability 75 | runtime 76 | runtimes 77 | rustc 78 | rustup 79 | SimpleFuture 80 | smol 81 | SocketRead 82 | SomeType 83 | spawner 84 | StepOne 85 | StepTwo 86 | struct 87 | structs 
88 | subfuture 89 | subfutures 90 | subpar 91 | TcpListener 92 | TcpStream 93 | threadpool 94 | TimerFuture 95 | TODO 96 | Tokio 97 | toml 98 | TryFutureExt 99 | tuple 100 | turbofish 101 | UnixStream 102 | usize 103 | utils 104 | Waker 105 | waker 106 | Wakeups 107 | wakeups 108 | webpage 109 | webpages 110 | webserver 111 | Woot 112 | -------------------------------------------------------------------------------- /ci/spellcheck.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | aspell --version 4 | 5 | # Checks project Markdown files for spelling mistakes. 6 | 7 | # Notes: 8 | 9 | # This script needs a dictionary file ($dict_filename) of project-specific 10 | # valid words. If the file is missing, the first invocation of the script 11 | # generates a file of the words it currently considers typos. The user should 12 | # remove any real typos from this file and leave only valid words. When the 13 | # script reports a false positive after a source modification, the new valid 14 | # word should be added to the dictionary file. 15 | 16 | # The default mode of this script is interactive. Each source file is scanned 17 | # for typos. aspell opens a window suggesting fixes for each typo it finds. 18 | # Original files with errors are backed up to files named "filename.md.bak". 19 | 20 | # When running in CI, this script should be run in "list" mode (pass "list" 21 | # as the first argument). In this mode the script scans all files and reports 22 | # any errors it finds. The exit code depends on the scan result: 23 | # 1 if any errors were found, 24 | # 0 if all is clear. 25 | 26 | # The script skips words of length 3 or less. This helps to avoid 27 | # some false positives. 28 | 29 | # We could skip source code in markdown files (```code```) to reduce the 30 | # rate of false positives, but then we would lose the ability to detect typos 31 | # in code comments/strings etc.
32 | 33 | shopt -s nullglob 34 | 35 | dict_filename=./ci/dictionary.txt 36 | markdown_sources=($(find ./src -iname '*.md')) 37 | mode="check" 38 | 39 | # aspell repeatedly modifies the personal dictionary for some reason, 40 | # so we should use a copy of our dictionary. 41 | dict_path="/tmp/dictionary.txt" 42 | 43 | if [[ "$1" == "list" ]]; then 44 | mode="list" 45 | fi 46 | 47 | # Error if running in list (CI) mode and there isn't a dictionary file; 48 | # creating one in CI won't do any good :( 49 | if [[ "$mode" == "list" && ! -f "$dict_filename" ]]; then 50 | echo "No dictionary file found! A dictionary file is required in CI!" 51 | exit 1 52 | fi 53 | 54 | if [[ ! -f "$dict_filename" ]]; then 55 | # Pre-check mode: generates a dictionary of the words aspell considers typos. 56 | # After the user validates that this file contains only valid words, we can 57 | # look for typos using this dictionary plus a default aspell dictionary. 58 | echo "Scanning files to generate dictionary file '$dict_filename'." 59 | echo "Please check that it doesn't contain any misspellings." 60 | 61 | echo "personal_ws-1.1 en 0 utf-8" > "$dict_filename" 62 | cat "${markdown_sources[@]}" | aspell --ignore 3 list | sort -u >> "$dict_filename" 63 | elif [[ "$mode" == "list" ]]; then 64 | # List (CI) mode: scan all files, report errors. 65 | declare -i retval=0 66 | 67 | cp "$dict_filename" "$dict_path" 68 | 69 | if [ ! -f "$dict_path" ]; then 70 | retval=1 71 | exit "$retval" 72 | fi 73 | 74 | for fname in "${markdown_sources[@]}"; do 75 | command=$(aspell --ignore 3 --personal="$dict_path" "$mode" < "$fname") 76 | if [[ -n "$command" ]]; then 77 | for error in $command; do 78 | # FIXME: find a more correct way to get the line number 79 | # (ideally from aspell). This can produce false positives, 80 | # because it is just a grep.
81 | grep --with-filename --line-number --color=always "$error" "$fname" 82 | done 83 | retval=1 84 | fi 85 | done 86 | exit "$retval" 87 | elif [[ "$mode" == "check" ]]; then 88 | # Interactive mode: fix typos. 89 | cp "$dict_filename" "$dict_path" 90 | 91 | if [ ! -f $dict_path ]; then 92 | retval=1 93 | exit "$retval" 94 | fi 95 | 96 | for fname in "${markdown_sources[@]}"; do 97 | aspell --ignore 3 --dont-backup --personal="$dict_path" "$mode" "$fname" 98 | done 99 | fi 100 | -------------------------------------------------------------------------------- /examples/01_02_why_async/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_01_02_why_async" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/01_02_why_async/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::{executor::block_on, join}; 4 | use std::thread; 5 | 6 | fn download(_url: &str) { 7 | // ... 8 | } 9 | 10 | #[test] 11 | // ANCHOR: get_two_sites 12 | fn get_two_sites() { 13 | // Spawn two threads to do work. 14 | let thread_one = thread::spawn(|| download("https://www.foo.com")); 15 | let thread_two = thread::spawn(|| download("https://www.bar.com")); 16 | 17 | // Wait for both threads to complete. 18 | thread_one.join().expect("thread one panicked"); 19 | thread_two.join().expect("thread two panicked"); 20 | } 21 | // ANCHOR_END: get_two_sites 22 | 23 | async fn download_async(_url: &str) { 24 | // ... 25 | } 26 | 27 | // ANCHOR: get_two_sites_async 28 | async fn get_two_sites_async() { 29 | // Create two different "futures" which, when run to completion, 30 | // will asynchronously download the webpages. 
31 | let future_one = download_async("https://www.foo.com"); 32 | let future_two = download_async("https://www.bar.com"); 33 | 34 | // Run both futures to completion at the same time. 35 | join!(future_one, future_two); 36 | } 37 | // ANCHOR_END: get_two_sites_async 38 | 39 | #[test] 40 | fn get_two_sites_async_test() { 41 | block_on(get_two_sites_async()); 42 | } 43 | -------------------------------------------------------------------------------- /examples/01_04_async_await_primer/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_01_04_async_await_primer" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/01_04_async_await_primer/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::executor::block_on; 4 | 5 | mod first { 6 | // ANCHOR: hello_world 7 | // `block_on` blocks the current thread until the provided future has run to 8 | // completion. Other executors provide more complex behavior, like scheduling 9 | // multiple futures onto the same thread. 10 | use futures::executor::block_on; 11 | 12 | async fn hello_world() { 13 | println!("hello, world!"); 14 | } 15 | 16 | fn main() { 17 | let future = hello_world(); // Nothing is printed 18 | block_on(future); // `future` is run and "hello, world!" 
is printed 19 | } 20 | // ANCHOR_END: hello_world 21 | 22 | #[test] 23 | fn run_main() { main() } 24 | } 25 | 26 | struct Song; 27 | async fn learn_song() -> Song { Song } 28 | async fn sing_song(_: Song) {} 29 | async fn dance() {} 30 | 31 | mod second { 32 | use super::*; 33 | // ANCHOR: block_on_each 34 | fn main() { 35 | let song = block_on(learn_song()); 36 | block_on(sing_song(song)); 37 | block_on(dance()); 38 | } 39 | // ANCHOR_END: block_on_each 40 | 41 | #[test] 42 | fn run_main() { main() } 43 | } 44 | 45 | mod third { 46 | use super::*; 47 | // ANCHOR: block_on_main 48 | async fn learn_and_sing() { 49 | // Wait until the song has been learned before singing it. 50 | // We use `.await` here rather than `block_on` to prevent blocking the 51 | // thread, which makes it possible to `dance` at the same time. 52 | let song = learn_song().await; 53 | sing_song(song).await; 54 | } 55 | 56 | async fn async_main() { 57 | let f1 = learn_and_sing(); 58 | let f2 = dance(); 59 | 60 | // `join!` is like `.await` but can wait for multiple futures concurrently. 61 | // If we're temporarily blocked in the `learn_and_sing` future, the `dance` 62 | // future will take over the current thread. If `dance` becomes blocked, 63 | // `learn_and_sing` can take back over. If both futures are blocked, then 64 | // `async_main` is blocked and will yield to the executor. 
65 | futures::join!(f1, f2); 66 | } 67 | 68 | fn main() { 69 | block_on(async_main()); 70 | } 71 | // ANCHOR_END: block_on_main 72 | 73 | #[test] 74 | fn run_main() { main() } 75 | } 76 | -------------------------------------------------------------------------------- /examples/02_02_future_trait/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_02_future_trait" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | -------------------------------------------------------------------------------- /examples/02_02_future_trait/src/lib.rs: -------------------------------------------------------------------------------- 1 | // ANCHOR: simple_future 2 | trait SimpleFuture { 3 | type Output; 4 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output>; 5 | } 6 | 7 | enum Poll<T> { 8 | Ready(T), 9 | Pending, 10 | } 11 | // ANCHOR_END: simple_future 12 | 13 | struct Socket; 14 | impl Socket { 15 | fn has_data_to_read(&self) -> bool { 16 | // check if the socket is currently readable 17 | true 18 | } 19 | fn read_buf(&self) -> Vec<u8> { 20 | // Read data in from the socket 21 | vec![] 22 | } 23 | fn set_readable_callback(&self, _wake: fn()) { 24 | // register `_wake` with something that will call it 25 | // once the socket becomes readable, such as an 26 | // `epoll`-based event loop. 27 | } 28 | } 29 | 30 | // ANCHOR: socket_read 31 | pub struct SocketRead<'a> { 32 | socket: &'a Socket, 33 | } 34 | 35 | impl SimpleFuture for SocketRead<'_> { 36 | type Output = Vec<u8>; 37 | 38 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 39 | if self.socket.has_data_to_read() { 40 | // The socket has data -- read it into a buffer and return it. 41 | Poll::Ready(self.socket.read_buf()) 42 | } else { 43 | // The socket does not yet have data. 44 | // 45 | // Arrange for `wake` to be called once data is available.
46 | // When data becomes available, `wake` will be called, and the 47 | // user of this `Future` will know to call `poll` again and 48 | // receive data. 49 | self.socket.set_readable_callback(wake); 50 | Poll::Pending 51 | } 52 | } 53 | } 54 | // ANCHOR_END: socket_read 55 | 56 | // ANCHOR: join 57 | /// A SimpleFuture that runs two other futures to completion concurrently. 58 | /// 59 | /// Concurrency is achieved via the fact that calls to `poll` on each future 60 | /// may be interleaved, allowing each future to advance itself at its own pace. 61 | pub struct Join<FutureA, FutureB> { 62 | // Each field may contain a future that should be run to completion. 63 | // If the future has already completed, the field is set to `None`. 64 | // This prevents us from polling a future after it has completed, which 65 | // would violate the contract of the `Future` trait. 66 | a: Option<FutureA>, 67 | b: Option<FutureB>, 68 | } 69 | 70 | impl<FutureA, FutureB> SimpleFuture for Join<FutureA, FutureB> 71 | where 72 | FutureA: SimpleFuture<Output = ()>, 73 | FutureB: SimpleFuture<Output = ()>, 74 | { 75 | type Output = (); 76 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 77 | // Attempt to complete future `a`. 78 | if let Some(a) = &mut self.a { 79 | if let Poll::Ready(()) = a.poll(wake) { 80 | self.a.take(); 81 | } 82 | } 83 | 84 | // Attempt to complete future `b`. 85 | if let Some(b) = &mut self.b { 86 | if let Poll::Ready(()) = b.poll(wake) { 87 | self.b.take(); 88 | } 89 | } 90 | 91 | if self.a.is_none() && self.b.is_none() { 92 | // Both futures have completed -- we can return successfully 93 | Poll::Ready(()) 94 | } else { 95 | // One or both futures returned `Poll::Pending` and still have 96 | // work to do. They will call `wake()` when progress can be made. 97 | Poll::Pending 98 | } 99 | } 100 | } 101 | // ANCHOR_END: join 102 | 103 | // ANCHOR: and_then 104 | /// A SimpleFuture that runs two futures to completion, one after another.
105 | // 106 | // Note: for the purposes of this simple example, `AndThenFut` assumes both 107 | // the first and second futures are available at creation-time. The real 108 | // `AndThen` combinator allows creating the second future based on the output 109 | // of the first future, like `get_breakfast.and_then(|food| eat(food))`. 110 | pub struct AndThenFut<FutureA, FutureB> { 111 | first: Option<FutureA>, 112 | second: FutureB, 113 | } 114 | 115 | impl<FutureA, FutureB> SimpleFuture for AndThenFut<FutureA, FutureB> 116 | where 117 | FutureA: SimpleFuture<Output = ()>, 118 | FutureB: SimpleFuture<Output = ()>, 119 | { 120 | type Output = (); 121 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 122 | if let Some(first) = &mut self.first { 123 | match first.poll(wake) { 124 | // We've completed the first future -- remove it and start on 125 | // the second! 126 | Poll::Ready(()) => self.first.take(), 127 | // We couldn't yet complete the first future. 128 | // Notice that we disrupt the flow of the `poll` function with the `return` statement. 129 | Poll::Pending => return Poll::Pending, 130 | }; 131 | } 132 | // Now that the first future is done, attempt to complete the second.
133 | self.second.poll(wake) 134 | } 135 | } 136 | // ANCHOR_END: and_then 137 | 138 | mod real_future { 139 | use std::{ 140 | future::Future as RealFuture, 141 | pin::Pin, 142 | task::{Context, Poll}, 143 | }; 144 | 145 | // ANCHOR: real_future 146 | trait Future { 147 | type Output; 148 | fn poll( 149 | // Note the change from `&mut self` to `Pin<&mut Self>`: 150 | self: Pin<&mut Self>, 151 | // and the change from `wake: fn()` to `cx: &mut Context<'_>`: 152 | cx: &mut Context<'_>, 153 | ) -> Poll<Self::Output>; 154 | } 155 | // ANCHOR_END: real_future 156 | 157 | // ensure that `Future` matches `RealFuture`: 158 | impl<O> Future for dyn RealFuture<Output = O> { 159 | type Output = O; 160 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { 161 | RealFuture::poll(self, cx) 162 | } 163 | } 164 | } 165 | -------------------------------------------------------------------------------- /examples/02_03_timer/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_03_timer" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/02_03_timer/src/lib.rs: -------------------------------------------------------------------------------- 1 | // ANCHOR: imports 2 | use std::{ 3 | future::Future, 4 | pin::Pin, 5 | sync::{Arc, Mutex}, 6 | task::{Context, Poll, Waker}, 7 | thread, 8 | time::Duration, 9 | }; 10 | // ANCHOR_END: imports 11 | 12 | // ANCHOR: timer_decl 13 | pub struct TimerFuture { 14 | shared_state: Arc<Mutex<SharedState>>, 15 | } 16 | 17 | /// Shared state between the future and the waiting thread 18 | struct SharedState { 19 | /// Whether or not the sleep time has elapsed 20 | completed: bool, 21 | 22 | /// The waker for the task that `TimerFuture` is running on.
23 | /// The thread can use this after setting `completed = true` to tell 24 | /// `TimerFuture`'s task to wake up, see that `completed = true`, and 25 | /// move forward. 26 | waker: Option<Waker>, 27 | } 28 | // ANCHOR_END: timer_decl 29 | 30 | // ANCHOR: future_for_timer 31 | impl Future for TimerFuture { 32 | type Output = (); 33 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { 34 | // Look at the shared state to see if the timer has already completed. 35 | let mut shared_state = self.shared_state.lock().unwrap(); 36 | if shared_state.completed { 37 | Poll::Ready(()) 38 | } else { 39 | // Set waker so that the thread can wake up the current task 40 | // when the timer has completed, ensuring that the future is polled 41 | // again and sees that `completed = true`. 42 | // 43 | // It's tempting to do this once rather than repeatedly cloning 44 | // the waker each time. However, the `TimerFuture` can move between 45 | // tasks on the executor, which could cause a stale waker pointing 46 | // to the wrong task, preventing `TimerFuture` from waking up 47 | // correctly. 48 | // 49 | // N.B. it's possible to check for this using the `Waker::will_wake` 50 | // function, but we omit that here to keep things simple. 51 | shared_state.waker = Some(cx.waker().clone()); 52 | Poll::Pending 53 | } 54 | } 55 | } 56 | // ANCHOR_END: future_for_timer 57 | 58 | // ANCHOR: timer_new 59 | impl TimerFuture { 60 | /// Create a new `TimerFuture` which will complete after the provided 61 | /// timeout.
62 | pub fn new(duration: Duration) -> Self { 63 | let shared_state = Arc::new(Mutex::new(SharedState { 64 | completed: false, 65 | waker: None, 66 | })); 67 | 68 | // Spawn the new thread 69 | let thread_shared_state = shared_state.clone(); 70 | thread::spawn(move || { 71 | thread::sleep(duration); 72 | let mut shared_state = thread_shared_state.lock().unwrap(); 73 | // Signal that the timer has completed and wake up the last 74 | // task on which the future was polled, if one exists. 75 | shared_state.completed = true; 76 | if let Some(waker) = shared_state.waker.take() { 77 | waker.wake() 78 | } 79 | }); 80 | 81 | TimerFuture { shared_state } 82 | } 83 | } 84 | // ANCHOR_END: timer_new 85 | 86 | #[test] 87 | fn block_on_timer() { 88 | futures::executor::block_on(async { 89 | TimerFuture::new(Duration::from_secs(1)).await 90 | }) 91 | } 92 | -------------------------------------------------------------------------------- /examples/02_04_executor/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_04_executor" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | timer_future = { package = "example_02_03_timer", path = "../02_03_timer" } 12 | -------------------------------------------------------------------------------- /examples/02_04_executor/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | // ANCHOR: imports 4 | use futures::{ 5 | future::{BoxFuture, FutureExt}, 6 | task::{waker_ref, ArcWake}, 7 | }; 8 | use std::{ 9 | future::Future, 10 | sync::mpsc::{sync_channel, Receiver, SyncSender}, 11 | sync::{Arc, Mutex}, 12 | task::Context, 13 | time::Duration, 14 | }; 15 | // The timer we wrote in the previous section: 16 | use timer_future::TimerFuture; 17 | // ANCHOR_END: imports 18 | 19 | // ANCHOR: executor_decl 20 | /// Task 
executor that receives tasks off of a channel and runs them. 21 | struct Executor { 22 | ready_queue: Receiver<Arc<Task>>, 23 | } 24 | 25 | /// `Spawner` spawns new futures onto the task channel. 26 | #[derive(Clone)] 27 | struct Spawner { 28 | task_sender: SyncSender<Arc<Task>>, 29 | } 30 | 31 | /// A future that can reschedule itself to be polled by an `Executor`. 32 | struct Task { 33 | /// In-progress future that should be pushed to completion. 34 | /// 35 | /// The `Mutex` is not necessary for correctness, since we only have 36 | /// one thread executing tasks at once. However, Rust isn't smart 37 | /// enough to know that `future` is only mutated from one thread, 38 | /// so we need to use the `Mutex` to prove thread-safety. A production 39 | /// executor would not need this, and could use `UnsafeCell` instead. 40 | future: Mutex<Option<BoxFuture<'static, ()>>>, 41 | 42 | /// Handle to place the task itself back onto the task queue. 43 | task_sender: SyncSender<Arc<Task>>, 44 | } 45 | 46 | fn new_executor_and_spawner() -> (Executor, Spawner) { 47 | // Maximum number of tasks to allow queueing in the channel at once. 48 | // This is just to make `sync_channel` happy, and wouldn't be present in 49 | // a real executor.
50 | const MAX_QUEUED_TASKS: usize = 10_000; 51 | let (task_sender, ready_queue) = sync_channel(MAX_QUEUED_TASKS); 52 | (Executor { ready_queue }, Spawner { task_sender }) 53 | } 54 | // ANCHOR_END: executor_decl 55 | 56 | // ANCHOR: spawn_fn 57 | impl Spawner { 58 | fn spawn(&self, future: impl Future<Output = ()> + 'static + Send) { 59 | let future = future.boxed(); 60 | let task = Arc::new(Task { 61 | future: Mutex::new(Some(future)), 62 | task_sender: self.task_sender.clone(), 63 | }); 64 | self.task_sender.try_send(task).expect("too many tasks queued"); 65 | } 66 | } 67 | // ANCHOR_END: spawn_fn 68 | 69 | // ANCHOR: arcwake_for_task 70 | impl ArcWake for Task { 71 | fn wake_by_ref(arc_self: &Arc<Self>) { 72 | // Implement `wake` by sending this task back onto the task channel 73 | // so that it will be polled again by the executor. 74 | let cloned = arc_self.clone(); 75 | arc_self 76 | .task_sender 77 | .try_send(cloned) 78 | .expect("too many tasks queued"); 79 | } 80 | } 81 | // ANCHOR_END: arcwake_for_task 82 | 83 | // ANCHOR: executor_run 84 | impl Executor { 85 | fn run(&self) { 86 | while let Ok(task) = self.ready_queue.recv() { 87 | // Take the future, and if it has not yet completed (is still Some), 88 | // poll it in an attempt to complete it. 89 | let mut future_slot = task.future.lock().unwrap(); 90 | if let Some(mut future) = future_slot.take() { 91 | // Create a waker from the task itself 92 | let waker = waker_ref(&task); 93 | let context = &mut Context::from_waker(&waker); 94 | // `BoxFuture` is a type alias for 95 | // `Pin<Box<dyn Future<Output = T> + Send + 'static>>`. 96 | // We can get a `Pin<&mut dyn Future + Send + 'static>` 97 | // from it by calling the `Pin::as_mut` method. 98 | if future.as_mut().poll(context).is_pending() { 99 | // We're not done processing the future, so put it 100 | // back in its task to be run again in the future.
101 | *future_slot = Some(future); 102 | } 103 | } 104 | } 105 | } 106 | } 107 | // ANCHOR_END: executor_run 108 | 109 | // ANCHOR: main 110 | fn main() { 111 | let (executor, spawner) = new_executor_and_spawner(); 112 | 113 | // Spawn a task to print before and after waiting on a timer. 114 | spawner.spawn(async { 115 | println!("howdy!"); 116 | // Wait for our timer future to complete after two seconds. 117 | TimerFuture::new(Duration::new(2, 0)).await; 118 | println!("done!"); 119 | }); 120 | 121 | // Drop the spawner so that our executor knows it is finished and won't 122 | // receive more incoming tasks to run. 123 | drop(spawner); 124 | 125 | // Run the executor until the task queue is empty. 126 | // This will print "howdy!", pause, and then print "done!". 127 | executor.run(); 128 | } 129 | // ANCHOR_END: main 130 | 131 | #[test] 132 | fn run_main() { 133 | main() 134 | } 135 | -------------------------------------------------------------------------------- /examples/03_01_async_await/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_03_01_async_await" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/03_01_async_await/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![allow(unused)] 2 | #![cfg(test)] 3 | 4 | mod async_fn_and_block_examples { 5 | use std::future::Future; 6 | // ANCHOR: async_fn_and_block_examples 7 | 8 | // `foo()` returns a type that implements `Future<Output = u8>`. 9 | // `foo().await` will result in a value of type `u8`. 10 | async fn foo() -> u8 { 5 } 11 | 12 | fn bar() -> impl Future<Output = u8> { 13 | // This `async` block results in a type that implements 14 | // `Future<Output = u8>`.
15 | async { 16 | let x: u8 = foo().await; 17 | x + 5 18 | } 19 | } 20 | // ANCHOR_END: async_fn_and_block_examples 21 | } 22 | 23 | mod async_lifetimes_examples { 24 | use std::future::Future; 25 | // ANCHOR: lifetimes_expanded 26 | // This function: 27 | async fn foo(x: &u8) -> u8 { *x } 28 | 29 | // Is equivalent to this function: 30 | fn foo_expanded<'a>(x: &'a u8) -> impl Future + 'a { 31 | async move { *x } 32 | } 33 | // ANCHOR_END: lifetimes_expanded 34 | 35 | async fn borrow_x(x: &u8) -> u8 { *x } 36 | 37 | #[cfg(feature = "never_compiled")] 38 | // ANCHOR: static_future_with_borrow 39 | fn bad() -> impl Future { 40 | let x = 5; 41 | borrow_x(&x) // ERROR: `x` does not live long enough 42 | } 43 | 44 | fn good() -> impl Future { 45 | async { 46 | let x = 5; 47 | borrow_x(&x).await 48 | } 49 | } 50 | // ANCHOR_END: static_future_with_borrow 51 | } 52 | 53 | mod async_move_examples { 54 | use std::future::Future; 55 | // ANCHOR: async_move_examples 56 | /// `async` block: 57 | /// 58 | /// Multiple different `async` blocks can access the same local variable 59 | /// so long as they're executed within the variable's scope 60 | async fn blocks() { 61 | let my_string = "foo".to_string(); 62 | 63 | let future_one = async { 64 | // ... 65 | println!("{my_string}"); 66 | }; 67 | 68 | let future_two = async { 69 | // ... 70 | println!("{my_string}"); 71 | }; 72 | 73 | // Run both futures to completion, printing "foo" twice: 74 | let ((), ()) = futures::join!(future_one, future_two); 75 | } 76 | 77 | /// `async move` block: 78 | /// 79 | /// Only one `async move` block can access the same captured variable, since 80 | /// captures are moved into the `Future` generated by the `async move` block. 81 | /// However, this allows the `Future` to outlive the original scope of the 82 | /// variable: 83 | fn move_block() -> impl Future { 84 | let my_string = "foo".to_string(); 85 | async move { 86 | // ... 
87 | println!("{my_string}"); 88 | } 89 | } 90 | // ANCHOR_END: async_move_examples 91 | } 92 | -------------------------------------------------------------------------------- /examples/05_01_streams/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_05_01_streams" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/05_01_streams/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | mod stream_trait { 4 | use futures::stream::Stream as RealStream; 5 | use std::{ 6 | pin::Pin, 7 | task::{Context, Poll}, 8 | }; 9 | 10 | // ANCHOR: stream_trait 11 | trait Stream { 12 | /// The type of the value yielded by the stream. 13 | type Item; 14 | 15 | /// Attempt to resolve the next item in the stream. 16 | /// Returns `Poll::Pending` if not ready, `Poll::Ready(Some(x))` if a value 17 | /// is ready, and `Poll::Ready(None)` if the stream has completed. 18 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) 19 | -> Poll>; 20 | } 21 | // ANCHOR_END: stream_trait 22 | 23 | // assert that `Stream` matches `RealStream`: 24 | impl Stream for dyn RealStream { 25 | type Item = I; 26 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) 27 | -> Poll> 28 | { 29 | RealStream::poll_next(self, cx) 30 | } 31 | } 32 | } 33 | 34 | mod channels { 35 | use futures::{ 36 | channel::mpsc, 37 | prelude::*, 38 | }; 39 | 40 | // ANCHOR: channels 41 | async fn send_recv() { 42 | const BUFFER_SIZE: usize = 10; 43 | let (mut tx, mut rx) = mpsc::channel::(BUFFER_SIZE); 44 | 45 | tx.send(1).await.unwrap(); 46 | tx.send(2).await.unwrap(); 47 | drop(tx); 48 | 49 | // `StreamExt::next` is similar to `Iterator::next`, but returns a 50 | // type that implements `Future>`. 
51 | assert_eq!(Some(1), rx.next().await); 52 | assert_eq!(Some(2), rx.next().await); 53 | assert_eq!(None, rx.next().await); 54 | } 55 | // ANCHOR_END: channels 56 | 57 | #[test] 58 | fn run_send_recv() { futures::executor::block_on(send_recv()) } 59 | } 60 | -------------------------------------------------------------------------------- /examples/05_02_iteration_and_concurrency/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_05_02_iteration_and_concurrency" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/05_02_iteration_and_concurrency/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::{ 4 | executor::block_on, 5 | stream::{self, Stream}, 6 | }; 7 | use std::{ 8 | io, 9 | pin::Pin, 10 | }; 11 | 12 | // ANCHOR: nexts 13 | async fn sum_with_next(mut stream: Pin<&mut dyn Stream<Item = i32>>) -> i32 { 14 | use futures::stream::StreamExt; // for `next` 15 | let mut sum = 0; 16 | while let Some(item) = stream.next().await { 17 | sum += item; 18 | } 19 | sum 20 | } 21 | 22 | async fn sum_with_try_next( 23 | mut stream: Pin<&mut dyn Stream<Item = Result<i32, io::Error>>>, 24 | ) -> Result<i32, io::Error> { 25 | use futures::stream::TryStreamExt; // for `try_next` 26 | let mut sum = 0; 27 | while let Some(item) = stream.try_next().await?
{ 28 | sum += item; 29 | } 30 | Ok(sum) 31 | } 32 | // ANCHOR_END: nexts 33 | 34 | #[test] 35 | fn run_sum_with_next() { 36 | let mut stream = stream::iter(vec![2, 3]); 37 | let pin: Pin<&mut stream::Iter<_>> = Pin::new(&mut stream); 38 | assert_eq!(5, block_on(sum_with_next(pin))); 39 | } 40 | 41 | #[test] 42 | fn run_sum_with_try_next() { 43 | let mut stream = stream::iter(vec![Ok(2), Ok(3)]); 44 | let pin: Pin<&mut stream::Iter<_>> = Pin::new(&mut stream); 45 | assert_eq!(5, block_on(sum_with_try_next(pin)).unwrap()); 46 | } 47 | 48 | #[allow(unused)] 49 | // ANCHOR: try_for_each_concurrent 50 | async fn jump_around( 51 | mut stream: Pin<&mut dyn Stream>>, 52 | ) -> Result<(), io::Error> { 53 | use futures::stream::TryStreamExt; // for `try_for_each_concurrent` 54 | const MAX_CONCURRENT_JUMPERS: usize = 100; 55 | 56 | stream.try_for_each_concurrent(MAX_CONCURRENT_JUMPERS, |num| async move { 57 | jump_n_times(num).await?; 58 | report_n_jumps(num).await?; 59 | Ok(()) 60 | }).await?; 61 | 62 | Ok(()) 63 | } 64 | // ANCHOR_END: try_for_each_concurrent 65 | 66 | async fn jump_n_times(_: u8) -> Result<(), io::Error> { Ok(()) } 67 | async fn report_n_jumps(_: u8) -> Result<(), io::Error> { Ok(()) } 68 | -------------------------------------------------------------------------------- /examples/06_02_join/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_02_join" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/06_02_join/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | struct Book; 4 | struct Music; 5 | async fn get_book() -> Book { Book } 6 | async fn get_music() -> Music { Music } 7 | 8 | mod naiive { 9 | use super::*; 10 | // ANCHOR: 
naiive 11 | async fn get_book_and_music() -> (Book, Music) { 12 | let book = get_book().await; 13 | let music = get_music().await; 14 | (book, music) 15 | } 16 | // ANCHOR_END: naiive 17 | } 18 | 19 | mod other_langs { 20 | use super::*; 21 | // ANCHOR: other_langs 22 | // WRONG -- don't do this 23 | async fn get_book_and_music() -> (Book, Music) { 24 | let book_future = get_book(); 25 | let music_future = get_music(); 26 | (book_future.await, music_future.await) 27 | } 28 | // ANCHOR_END: other_langs 29 | } 30 | 31 | mod join { 32 | use super::*; 33 | // ANCHOR: join 34 | use futures::join; 35 | 36 | async fn get_book_and_music() -> (Book, Music) { 37 | let book_fut = get_book(); 38 | let music_fut = get_music(); 39 | join!(book_fut, music_fut) 40 | } 41 | // ANCHOR_END: join 42 | } 43 | 44 | mod try_join { 45 | use super::{Book, Music}; 46 | // ANCHOR: try_join 47 | use futures::try_join; 48 | 49 | async fn get_book() -> Result { /* ... */ Ok(Book) } 50 | async fn get_music() -> Result { /* ... */ Ok(Music) } 51 | 52 | async fn get_book_and_music() -> Result<(Book, Music), String> { 53 | let book_fut = get_book(); 54 | let music_fut = get_music(); 55 | try_join!(book_fut, music_fut) 56 | } 57 | // ANCHOR_END: try_join 58 | } 59 | 60 | mod mismatched_err { 61 | use super::{Book, Music}; 62 | // ANCHOR: try_join_map_err 63 | use futures::{ 64 | future::TryFutureExt, 65 | try_join, 66 | }; 67 | 68 | async fn get_book() -> Result { /* ... */ Ok(Book) } 69 | async fn get_music() -> Result { /* ... 
*/ Ok(Music) } 70 | 71 | async fn get_book_and_music() -> Result<(Book, Music), String> { 72 | let book_fut = get_book().map_err(|()| "Unable to get book".to_string()); 73 | let music_fut = get_music(); 74 | try_join!(book_fut, music_fut) 75 | } 76 | // ANCHOR_END: try_join_map_err 77 | } 78 | -------------------------------------------------------------------------------- /examples/06_03_select/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_03_select" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/06_03_select/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![recursion_limit="128"] 3 | 4 | mod example { 5 | // ANCHOR: example 6 | use futures::{ 7 | future::FutureExt, // for `.fuse()` 8 | pin_mut, 9 | select, 10 | }; 11 | 12 | async fn task_one() { /* ... */ } 13 | async fn task_two() { /* ... */ } 14 | 15 | async fn race_tasks() { 16 | let t1 = task_one().fuse(); 17 | let t2 = task_two().fuse(); 18 | 19 | pin_mut!(t1, t2); 20 | 21 | select! { 22 | () = t1 => println!("task one completed first"), 23 | () = t2 => println!("task two completed first"), 24 | } 25 | } 26 | // ANCHOR_END: example 27 | } 28 | 29 | mod default_and_complete { 30 | // ANCHOR: default_and_complete 31 | use futures::{future, select}; 32 | 33 | async fn count() { 34 | let mut a_fut = future::ready(4); 35 | let mut b_fut = future::ready(6); 36 | let mut total = 0; 37 | 38 | loop { 39 | select! 
{ 40 | a = a_fut => total += a, 41 | b = b_fut => total += b, 42 | complete => break, 43 | default => unreachable!(), // never runs (futures are ready, then complete) 44 | }; 45 | } 46 | assert_eq!(total, 10); 47 | } 48 | // ANCHOR_END: default_and_complete 49 | 50 | #[test] 51 | fn run_count() { 52 | futures::executor::block_on(count()); 53 | } 54 | } 55 | 56 | mod fused_stream { 57 | // ANCHOR: fused_stream 58 | use futures::{ 59 | stream::{Stream, StreamExt, FusedStream}, 60 | select, 61 | }; 62 | 63 | async fn add_two_streams( 64 | mut s1: impl Stream + FusedStream + Unpin, 65 | mut s2: impl Stream + FusedStream + Unpin, 66 | ) -> u8 { 67 | let mut total = 0; 68 | 69 | loop { 70 | let item = select! { 71 | x = s1.next() => x, 72 | x = s2.next() => x, 73 | complete => break, 74 | }; 75 | if let Some(next_num) = item { 76 | total += next_num; 77 | } 78 | } 79 | 80 | total 81 | } 82 | // ANCHOR_END: fused_stream 83 | } 84 | 85 | mod fuse_terminated { 86 | // ANCHOR: fuse_terminated 87 | use futures::{ 88 | future::{Fuse, FusedFuture, FutureExt}, 89 | stream::{FusedStream, Stream, StreamExt}, 90 | pin_mut, 91 | select, 92 | }; 93 | 94 | async fn get_new_num() -> u8 { /* ... */ 5 } 95 | 96 | async fn run_on_new_num(_: u8) { /* ... */ } 97 | 98 | async fn run_loop( 99 | mut interval_timer: impl Stream + FusedStream + Unpin, 100 | starting_num: u8, 101 | ) { 102 | let run_on_new_num_fut = run_on_new_num(starting_num).fuse(); 103 | let get_new_num_fut = Fuse::terminated(); 104 | pin_mut!(run_on_new_num_fut, get_new_num_fut); 105 | loop { 106 | select! { 107 | () = interval_timer.select_next_some() => { 108 | // The timer has elapsed. Start a new `get_new_num_fut` 109 | // if one was not already running. 110 | if get_new_num_fut.is_terminated() { 111 | get_new_num_fut.set(get_new_num().fuse()); 112 | } 113 | }, 114 | new_num = get_new_num_fut => { 115 | // A new number has arrived -- start a new `run_on_new_num_fut`, 116 | // dropping the old one. 
117 | run_on_new_num_fut.set(run_on_new_num(new_num).fuse()); 118 | }, 119 | // Run the `run_on_new_num_fut` 120 | () = run_on_new_num_fut => {}, 121 | // panic if everything completed, since the `interval_timer` should 122 | // keep yielding values indefinitely. 123 | complete => panic!("`interval_timer` completed unexpectedly"), 124 | } 125 | } 126 | } 127 | // ANCHOR_END: fuse_terminated 128 | } 129 | 130 | mod futures_unordered { 131 | // ANCHOR: futures_unordered 132 | use futures::{ 133 | future::{Fuse, FusedFuture, FutureExt}, 134 | stream::{FusedStream, FuturesUnordered, Stream, StreamExt}, 135 | pin_mut, 136 | select, 137 | }; 138 | 139 | async fn get_new_num() -> u8 { /* ... */ 5 } 140 | 141 | async fn run_on_new_num(_: u8) -> u8 { /* ... */ 5 } 142 | 143 | async fn run_loop( 144 | mut interval_timer: impl Stream + FusedStream + Unpin, 145 | starting_num: u8, 146 | ) { 147 | let mut run_on_new_num_futs = FuturesUnordered::new(); 148 | run_on_new_num_futs.push(run_on_new_num(starting_num)); 149 | let get_new_num_fut = Fuse::terminated(); 150 | pin_mut!(get_new_num_fut); 151 | loop { 152 | select! { 153 | () = interval_timer.select_next_some() => { 154 | // The timer has elapsed. Start a new `get_new_num_fut` 155 | // if one was not already running. 156 | if get_new_num_fut.is_terminated() { 157 | get_new_num_fut.set(get_new_num().fuse()); 158 | } 159 | }, 160 | new_num = get_new_num_fut => { 161 | // A new number has arrived -- start a new `run_on_new_num_fut`. 162 | run_on_new_num_futs.push(run_on_new_num(new_num)); 163 | }, 164 | // Run the `run_on_new_num_futs` and check if any have completed 165 | res = run_on_new_num_futs.select_next_some() => { 166 | println!("run_on_new_num_fut returned {:?}", res); 167 | }, 168 | // panic if everything completed, since the `interval_timer` should 169 | // keep yielding values indefinitely. 
170 | complete => panic!("`interval_timer` completed unexpectedly"), 171 | } 172 | } 173 | } 174 | 175 | // ANCHOR_END: futures_unordered 176 | } 177 | -------------------------------------------------------------------------------- /examples/06_04_spawning/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_04_spawning" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | futures = "0.3" 10 | 11 | [dependencies.async-std] 12 | version = "1.12.0" 13 | features = ["attributes"] -------------------------------------------------------------------------------- /examples/06_04_spawning/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![allow(dead_code)] 3 | 4 | // ANCHOR: example 5 | use async_std::{task, net::TcpListener, net::TcpStream}; 6 | use futures::AsyncWriteExt; 7 | 8 | async fn process_request(stream: &mut TcpStream) -> Result<(), std::io::Error>{ 9 | stream.write_all(b"HTTP/1.1 200 OK\r\n\r\n").await?; 10 | stream.write_all(b"Hello World").await?; 11 | Ok(()) 12 | } 13 | 14 | async fn main() { 15 | let listener = TcpListener::bind("127.0.0.1:8080").await.unwrap(); 16 | loop { 17 | // Accept a new connection 18 | let (mut stream, _) = listener.accept().await.unwrap(); 19 | // Now process this request without blocking the main loop 20 | task::spawn(async move {process_request(&mut stream).await}); 21 | } 22 | } 23 | // ANCHOR_END: example 24 | use std::time::Duration; 25 | async fn my_task(time: Duration) { 26 | println!("Hello from my_task with time {:?}", time); 27 | task::sleep(time).await; 28 | println!("Goodbye from my_task with time {:?}", time); 29 | } 30 | // ANCHOR: join_all 31 | use futures::future::join_all; 32 | async fn task_spawner(){ 33 | let tasks = vec![ 34 | 
task::spawn(my_task(Duration::from_secs(1))), 35 | task::spawn(my_task(Duration::from_secs(2))), 36 | task::spawn(my_task(Duration::from_secs(3))), 37 | ]; 38 | // If we do not await these tasks and the function finishes, they will be dropped 39 | join_all(tasks).await; 40 | } 41 | // ANCHOR_END: join_all 42 | 43 | #[test] 44 | fn run_task_spawner() { 45 | futures::executor::block_on(task_spawner()); 46 | } -------------------------------------------------------------------------------- /examples/07_05_recursion/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_07_05_recursion" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/07_05_recursion/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![allow(dead_code)] 3 | 4 | // ANCHOR: example 5 | use futures::future::{BoxFuture, FutureExt}; 6 | 7 | fn recursive() -> BoxFuture<'static, ()> { 8 | async move { 9 | recursive().await; 10 | recursive().await; 11 | }.boxed() 12 | } 13 | // ANCHOR_END: example 14 | 15 | // ANCHOR: example_pinned 16 | async fn recursive_pinned() { 17 | Box::pin(recursive_pinned()).await; 18 | Box::pin(recursive_pinned()).await; 19 | } 20 | // ANCHOR_END: example_pinned 21 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Oops!</h1> 9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "sync_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1> 9 | <p>Hi from Rust</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | use std::io::prelude::*; 3 | use std::net::TcpListener; 4 | use std::net::TcpStream; 5 | 6 | fn main() { 7 | // Listen for incoming TCP connections on localhost port 7878 8 | let listener = TcpListener::bind("127.0.0.1:7878").unwrap(); 9 | 10 | // Block forever, handling each request that arrives at this IP address 11 | for stream in listener.incoming() { 12 | let stream = stream.unwrap(); 13 | 14 | handle_connection(stream); 15 | } 16 | } 17 | 18 | fn handle_connection(mut stream: TcpStream) { 19 | // Read the first 1024 bytes of data from the stream 20 | let mut buffer = [0; 1024]; 21 | stream.read(&mut buffer).unwrap(); 22 | 23 | let get = b"GET / HTTP/1.1\r\n"; 24 | 25 | // Respond with greetings or a 404, 26 | // depending on the data in the request 27 | let (status_line, filename) = if buffer.starts_with(get) { 28 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 29 | } else { 30 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 31 | }; 32 | let contents = fs::read_to_string(filename).unwrap(); 33 | 34 | // Write response back to the stream, 35 | // and flush the stream to ensure the response is sent back to the client 36 | let response = format!("{status_line}{contents}"); 37 | stream.write_all(response.as_bytes()).unwrap(); 38 | stream.flush().unwrap(); 39 | } 40 | -------------------------------------------------------------------------------- /examples/09_02_async_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "async_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 19 | } 20 | // ANCHOR_END: handle_connection_async 21 | -------------------------------------------------------------------------------- 
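The `handle_connection` function above chooses a status line and a file purely by inspecting the raw request bytes, so that routing step can be pulled out and unit-tested on its own. A small std-only sketch (the `route` helper is my own name, not part of the book's examples):

```rust
/// Pick a response status line and filename for a raw HTTP request,
/// mirroring the `buffer.starts_with(get)` check used in `handle_connection`.
fn route(request: &[u8]) -> (&'static str, &'static str) {
    let get = b"GET / HTTP/1.1\r\n";
    if request.starts_with(get) {
        ("HTTP/1.1 200 OK\r\n\r\n", "hello.html")
    } else {
        ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html")
    }
}

fn main() {
    // The root path gets the greeting page; anything else gets the 404 page.
    assert_eq!(route(b"GET / HTTP/1.1\r\nHost: localhost\r\n\r\n").1, "hello.html");
    assert_eq!(route(b"GET /missing HTTP/1.1\r\n").1, "404.html");
    println!("routing checks passed");
}
```

Matching on a fixed byte prefix is only demo-grade routing, of course; a real server would parse the request line properly.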
/examples/09_03_slow_request/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Oops!</h1> 9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_03_slow_request/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "slow_request" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1> 9 | <p>Hi from Rust</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_03_slow_request/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | use std::io::{Read, Write}; 3 | use std::net::TcpListener; 4 | use std::net::TcpStream; 5 | 6 | #[async_std::main] 7 | async fn main() { 8 | let listener = TcpListener::bind("127.0.0.1:7878").unwrap(); 9 | for stream in listener.incoming() { 10 | let stream = stream.unwrap(); 11 | handle_connection(stream).await; 12 | } 13 | } 14 | 15 | // ANCHOR: handle_connection 16 | use std::time::Duration; 17 | use async_std::task; 18 | 19 | async fn handle_connection(mut stream: TcpStream) { 20 | let mut buffer = [0; 1024]; 21 | stream.read(&mut buffer).unwrap(); 22 | 23 | let get = b"GET / HTTP/1.1\r\n"; 24 | let sleep = b"GET /sleep HTTP/1.1\r\n"; 25 | 26 | let (status_line, filename) = if buffer.starts_with(get) { 27 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 28 | } else if buffer.starts_with(sleep) { 29 | task::sleep(Duration::from_secs(5)).await; 30 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 31 | } else { 32 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 33 | }; 34 | let contents = fs::read_to_string(filename).unwrap(); 35 | 36 | let response = format!("{status_line}{contents}"); 37 | stream.write(response.as_bytes()).unwrap(); 38 | stream.flush().unwrap(); 39 | } 40 | // ANCHOR_END: handle_connection 41 | -------------------------------------------------------------------------------- /examples/09_04_concurrent_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "concurrent_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 28 | stream.write(response.as_bytes()).await.unwrap(); 29 | stream.flush().await.unwrap(); 30 | } 31 | // ANCHOR_END: handle_connection 32 | -------------------------------------------------------------------------------- 
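The slow-request problem that motivates the concurrent server can be reproduced with plain threads: if each wait runs on the caller's thread the waits add up, while overlapping them (OS threads here, async tasks in the server above) takes roughly the time of one wait. A std-only sketch with a made-up `measure` helper, not code from the book:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for a handler that blocks on something slow, like the `/sleep` route.
fn slow_handler() {
    thread::sleep(Duration::from_millis(50));
}

// Time four "requests" handled one after another vs. overlapped.
fn measure() -> (Duration, Duration) {
    let start = Instant::now();
    for _ in 0..4 {
        slow_handler();
    }
    let sequential = start.elapsed();

    let start = Instant::now();
    let handles: Vec<_> = (0..4).map(|_| thread::spawn(slow_handler)).collect();
    for handle in handles {
        handle.join().unwrap();
    }
    let overlapped = start.elapsed();

    (sequential, overlapped)
}

fn main() {
    let (sequential, overlapped) = measure();
    // Four overlapped 50ms waits finish well before 200ms of serial waiting.
    assert!(overlapped < sequential);
    println!("sequential: {sequential:?}, overlapped: {overlapped:?}");
}
```

The async version gets the same overlap without one OS thread per connection, which is the point of `for_each_concurrent` plus `spawn`.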
/examples/09_05_final_tcp_server/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Oops! 6 | 7 | 8 |

<h1>Oops!</h1> 9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_05_final_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "final_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1> 9 | <p>Hi from Rust</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_05_final_tcp_server/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | 3 | use futures::stream::StreamExt; 4 | 5 | use async_std::net::TcpListener; 6 | use async_std::prelude::*; 7 | // ANCHOR: main_func 8 | use async_std::task::spawn; 9 | 10 | #[async_std::main] 11 | async fn main() { 12 | let listener = TcpListener::bind("127.0.0.1:7878").await.unwrap(); 13 | listener 14 | .incoming() 15 | .for_each_concurrent(/* limit */ None, |stream| async move { 16 | let stream = stream.unwrap(); 17 | spawn(handle_connection(stream)); 18 | }) 19 | .await; 20 | } 21 | // ANCHOR_END: main_func 22 | 23 | use async_std::io::{Read, Write}; 24 | 25 | async fn handle_connection(mut stream: impl Read + Write + Unpin) { 26 | let mut buffer = [0; 1024]; 27 | stream.read(&mut buffer).await.unwrap(); 28 | let get = b"GET / HTTP/1.1\r\n"; 29 | let (status_line, filename) = if buffer.starts_with(get) { 30 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 31 | } else { 32 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 33 | }; 34 | let contents = fs::read_to_string(filename).unwrap(); 35 | let response = format!("{status_line}{contents}"); 36 | stream.write(response.as_bytes()).await.unwrap(); 37 | stream.flush().await.unwrap(); 38 | } 39 | 40 | #[cfg(test)] 41 | 42 | mod tests { 43 | // ANCHOR: mock_read 44 | use super::*; 45 | use futures::io::Error; 46 | use futures::task::{Context, Poll}; 47 | 48 | use std::cmp::min; 49 | use std::pin::Pin; 50 | 51 | struct MockTcpStream { 52 | read_data: Vec, 53 | write_data: Vec, 54 | } 55 | 56 | impl Read for MockTcpStream { 57 | fn poll_read( 58 | self: Pin<&mut Self>, 59 | _: &mut Context, 60 | buf: &mut [u8], 61 | ) -> Poll> { 62 | let size: usize = min(self.read_data.len(), buf.len()); 63 | buf[..size].copy_from_slice(&self.read_data[..size]); 64 | Poll::Ready(Ok(size)) 65 | } 66 | 
} 67 | // ANCHOR_END: mock_read 68 | 69 | // ANCHOR: mock_write 70 | impl Write for MockTcpStream { 71 | fn poll_write( 72 | mut self: Pin<&mut Self>, 73 | _: &mut Context, 74 | buf: &[u8], 75 | ) -> Poll> { 76 | self.write_data = Vec::from(buf); 77 | 78 | Poll::Ready(Ok(buf.len())) 79 | } 80 | 81 | fn poll_flush(self: Pin<&mut Self>, _: &mut Context) -> Poll> { 82 | Poll::Ready(Ok(())) 83 | } 84 | 85 | fn poll_close(self: Pin<&mut Self>, _: &mut Context) -> Poll> { 86 | Poll::Ready(Ok(())) 87 | } 88 | } 89 | // ANCHOR_END: mock_write 90 | 91 | // ANCHOR: unpin 92 | impl Unpin for MockTcpStream {} 93 | // ANCHOR_END: unpin 94 | 95 | // ANCHOR: test 96 | use std::fs; 97 | 98 | #[async_std::test] 99 | async fn test_handle_connection() { 100 | let input_bytes = b"GET / HTTP/1.1\r\n"; 101 | let mut contents = vec![0u8; 1024]; 102 | contents[..input_bytes.len()].clone_from_slice(input_bytes); 103 | let mut stream = MockTcpStream { 104 | read_data: contents, 105 | write_data: Vec::new(), 106 | }; 107 | 108 | handle_connection(&mut stream).await; 109 | 110 | let expected_contents = fs::read_to_string("hello.html").unwrap(); 111 | let expected_response = format!("HTTP/1.1 200 OK\r\n\r\n{}", expected_contents); 112 | assert!(stream.write_data.starts_with(expected_response.as_bytes())); 113 | } 114 | // ANCHOR_END: test 115 | } 116 | -------------------------------------------------------------------------------- /examples/Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | members = [ 3 | "hello-world", 4 | "hello-world-sleep", 5 | "hello-world-spawn", 6 | "hello-world-join", 7 | "01_02_why_async", 8 | "01_04_async_await_primer", 9 | "02_02_future_trait", 10 | "02_03_timer", 11 | "02_04_executor", 12 | "03_01_async_await", 13 | "05_01_streams", 14 | "05_02_iteration_and_concurrency", 15 | "06_02_join", 16 | "06_03_select", 17 | "06_04_spawning", 18 | "07_05_recursion", 19 | "09_01_sync_tcp_server", 20 | 
"09_02_async_tcp_server", 21 | "09_03_slow_request", 22 | "09_04_concurrent_tcp_server", 23 | "09_05_final_tcp_server", 24 | ] 25 | resolver = "2" 26 | -------------------------------------------------------------------------------- /examples/hello-world-join/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "hello-world-join" 3 | version = "0.1.0" 4 | authors = ["Nicholas Cameron "] 5 | edition = "2021" 6 | 7 | [dependencies] 8 | tokio = { version = "1.40.0", features = ["full"] } 9 | -------------------------------------------------------------------------------- /examples/hello-world-join/src/main.rs: -------------------------------------------------------------------------------- 1 | use tokio::{spawn, time::{sleep, Duration}}; 2 | 3 | async fn say_hello() { 4 | // Wait for a while before printing to make it a more interesting race. 5 | sleep(Duration::from_millis(100)).await; 6 | println!("hello"); 7 | } 8 | 9 | async fn say_world() { 10 | sleep(Duration::from_millis(100)).await; 11 | println!("world"); 12 | } 13 | 14 | #[tokio::main] 15 | async fn main() { 16 | let handle1 = spawn(say_hello()); 17 | let handle2 = spawn(say_world()); 18 | 19 | let _ = handle1.await; 20 | let _ = handle2.await; 21 | 22 | println!("!"); 23 | } 24 | -------------------------------------------------------------------------------- /examples/hello-world-sleep/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "hello-world-sleep" 3 | version = "0.1.0" 4 | authors = ["Nicholas Cameron "] 5 | edition = "2021" 6 | 7 | [dependencies] 8 | tokio = { version = "1.40.0", features = ["full"] } 9 | -------------------------------------------------------------------------------- /examples/hello-world-sleep/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::io::{stdout, Write}; 2 | use tokio::time::{sleep, 
Duration}; 3 | 4 | async fn say_hello() { 5 | print!("hello, "); 6 | // Flush stdout so we see the effect of the above `print` immediately. 7 | stdout().flush().unwrap(); 8 | } 9 | 10 | async fn say_world() { 11 | println!("world!"); 12 | } 13 | 14 | #[tokio::main] 15 | async fn main() { 16 | say_hello().await; 17 | // An async sleep function, puts the current task to sleep for 1s. 18 | sleep(Duration::from_millis(1000)).await; 19 | say_world().await; 20 | } 21 | -------------------------------------------------------------------------------- /examples/hello-world-spawn/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "hello-world-spawn" 3 | version = "0.1.0" 4 | authors = ["Nicholas Cameron "] 5 | edition = "2021" 6 | 7 | [dependencies] 8 | tokio = { version = "1.40.0", features = ["full"] } 9 | -------------------------------------------------------------------------------- /examples/hello-world-spawn/src/main.rs: -------------------------------------------------------------------------------- 1 | use tokio::{spawn, time::{sleep, Duration}}; 2 | 3 | async fn say_hello() { 4 | // Wait for a while before printing to make it a more interesting race. 5 | sleep(Duration::from_millis(100)).await; 6 | println!("hello"); 7 | } 8 | 9 | async fn say_world() { 10 | sleep(Duration::from_millis(100)).await; 11 | println!("world!"); 12 | } 13 | 14 | #[tokio::main] 15 | async fn main() { 16 | spawn(say_hello()); 17 | spawn(say_world()); 18 | // Wait for a while to give the tasks time to run. 
19 | sleep(Duration::from_millis(1000)).await; 20 | } 21 | -------------------------------------------------------------------------------- /examples/hello-world/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "hello-world" 3 | version = "0.1.0" 4 | authors = ["Nicholas Cameron "] 5 | edition = "2021" 6 | 7 | [dependencies] 8 | tokio = { version = "1.40.0", features = ["full"] } 9 | -------------------------------------------------------------------------------- /examples/hello-world/src/main.rs: -------------------------------------------------------------------------------- 1 | // Define an async function. 2 | async fn say_hello() { 3 | println!("hello, world!"); 4 | } 5 | 6 | #[tokio::main] // Boilerplate which lets us write `async fn main`, we'll explain it later. 7 | async fn main() { 8 | // Call an async function and await its result. 9 | say_hello().await; 10 | } 11 | -------------------------------------------------------------------------------- /src/01_getting_started/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | Welcome to Asynchronous Programming in Rust! If you're looking to start writing 4 | asynchronous Rust code, you've come to the right place. Whether you're building 5 | a web server, a database, or an operating system, this book will show you 6 | how to use Rust's asynchronous programming tools to get the most out of your 7 | hardware. 8 | 9 | ## What This Book Covers 10 | 11 | This book aims to be a comprehensive, up-to-date guide to using Rust's async 12 | language features and libraries, appropriate for beginners and old hands alike. 13 | 14 | - The early chapters provide an introduction to async programming in general, 15 | and to Rust's particular take on it. 
16 | 17 | - The middle chapters discuss key utilities and control-flow tools you can use 18 | when writing async code, and describe best practices for structuring libraries 19 | and applications to maximize performance and reusability. 20 | 21 | - The last section of the book covers the broader async ecosystem, and provides 22 | a number of examples of how to accomplish common tasks. 23 | 24 | With that out of the way, let's explore the exciting world of Asynchronous 25 | Programming in Rust! 26 | -------------------------------------------------------------------------------- /src/01_getting_started/02_why_async.md: -------------------------------------------------------------------------------- 1 | # Why Async? 2 | 3 | We all love how Rust empowers us to write fast, safe software. 4 | But how does asynchronous programming fit into this vision? 5 | 6 | Asynchronous programming, or async for short, is a _concurrent programming model_ 7 | supported by an increasing number of programming languages. 8 | It lets you run a large number of concurrent 9 | tasks on a small number of OS threads, while preserving much of the 10 | look and feel of ordinary synchronous programming, through the 11 | `async/await` syntax. 12 | 13 | ## Async vs other concurrency models 14 | 15 | Concurrent programming is less mature and "standardized" than 16 | regular, sequential programming. As a result, we express concurrency 17 | differently depending on which concurrent programming model 18 | the language supports. 19 | A brief overview of the most popular concurrency models can help 20 | you understand how asynchronous programming fits within the broader 21 | field of concurrent programming: 22 | 23 | - **OS threads** don't require any changes to the programming model, 24 | which makes it very easy to express concurrency. However, synchronizing 25 | between threads can be difficult, and the performance overhead is large.
26 | Thread pools can mitigate some of these costs, but not enough to support 27 | massive IO-bound workloads. 28 | - **Event-driven programming**, in conjunction with _callbacks_, can be very 29 | performant, but tends to result in a verbose, "non-linear" control flow. 30 | Data flow and error propagation are often hard to follow. 31 | - **Coroutines**, like threads, don't require changes to the programming model, 32 | which makes them easy to use. Like async, they can also support a large 33 | number of tasks. However, they abstract away low-level details that 34 | are important for systems programming and custom runtime implementors. 35 | - **The actor model** divides all concurrent computation into units called 36 | actors, which communicate through fallible message passing, much like 37 | in distributed systems. The actor model can be efficiently implemented, but it leaves 38 | many practical issues unanswered, such as flow control and retry logic. 39 | 40 | In summary, asynchronous programming allows highly performant implementations 41 | that are suitable for low-level languages like Rust, while providing 42 | most of the ergonomic benefits of threads and coroutines. 43 | 44 | ## Async in Rust vs other languages 45 | 46 | Although asynchronous programming is supported in many languages, some 47 | details vary across implementations. Rust's implementation of async 48 | differs from most languages in a few ways: 49 | 50 | - **Futures are inert** in Rust and make progress only when polled. Dropping a 51 | future stops it from making further progress. 52 | - **Async is zero-cost** in Rust, which means that you only pay for what you use. 53 | Specifically, you can use async without heap allocations and dynamic dispatch, 54 | which is great for performance! 55 | This also lets you use async in constrained environments, such as embedded systems. 56 | - **No built-in runtime** is provided by Rust. Instead, runtimes are provided by 57 | community-maintained crates.
58 | - **Both single- and multithreaded** runtimes are available in Rust, which have 59 | different strengths and weaknesses. 60 | 61 | ## Async vs threads in Rust 62 | 63 | The primary alternative to async in Rust is using OS threads, either 64 | directly through [`std::thread`](https://doc.rust-lang.org/std/thread/) 65 | or indirectly through a thread pool. 66 | Migrating from threads to async or vice versa 67 | typically requires major refactoring work, both in terms of implementation and 68 | (if you are building a library) any exposed public interfaces. As such, 69 | picking the model that suits your needs early can save a lot of development time. 70 | 71 | **OS threads** are suitable for a small number of tasks, since threads come with 72 | CPU and memory overhead. Spawning and switching between threads 73 | is quite expensive, as even idle threads consume system resources. 74 | A thread pool library can help mitigate some of these costs, but not all. 75 | However, threads let you reuse existing synchronous code without significant 76 | code changes—no particular programming model is required. 77 | In some operating systems, you can also change the priority of a thread, 78 | which is useful for drivers and other latency-sensitive applications. 79 | 80 | **Async** provides significantly reduced CPU and memory 81 | overhead, especially for workloads with a 82 | large number of IO-bound tasks, such as servers and databases. 83 | All else equal, you can have orders of magnitude more tasks than OS threads, 84 | because an async runtime uses a small number of (expensive) threads to handle 85 | a large number of (cheap) tasks. 86 | However, async Rust results in larger binaries, due to the state 87 | machines generated from async functions and the async runtime that each 88 | executable bundles. 89 | 90 | On a last note, asynchronous programming is not _better_ than threads, 91 | but different.
92 | If you don't need async for performance reasons, threads can often be 93 | the simpler alternative. 94 | 95 | ### Example: Concurrent downloading 96 | 97 | In this example, our goal is to download two web pages concurrently. 98 | In a typical threaded application, we need to spawn threads 99 | to achieve concurrency: 100 | 101 | ```rust,ignore 102 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites}} 103 | ``` 104 | 105 | However, downloading a web page is a small task; creating a thread 106 | for such a small amount of work is quite wasteful. For a larger application, it 107 | can easily become a bottleneck. In async Rust, we can run these tasks 108 | concurrently without extra threads: 109 | 110 | ```rust,ignore 111 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites_async}} 112 | ``` 113 | 114 | Here, no extra threads are created. Additionally, all function calls are statically 115 | dispatched, and there are no heap allocations! 116 | However, we need to write the code to be asynchronous in the first place, 117 | which this book will help you achieve. 118 | 119 | ## Custom concurrency models in Rust 120 | 121 | Finally, note that Rust doesn't force you to choose between threads and async. 122 | You can use both models within the same application, which can be 123 | useful when you have mixed threaded and async dependencies. 124 | In fact, you can even use a different concurrency model altogether, 125 | such as event-driven programming, as long as you find a library that 126 | implements it. 127 | -------------------------------------------------------------------------------- /src/01_getting_started/03_state_of_async_rust.md: -------------------------------------------------------------------------------- 1 | # The State of Asynchronous Rust 2 | 3 | Parts of async Rust are supported with the same stability guarantees as 4 | synchronous Rust. Other parts are still maturing and will change 5 | over time.
With async Rust, you can expect: 6 | 7 | - Outstanding runtime performance for typical concurrent workloads. 8 | - More frequent interaction with advanced language features, such as lifetimes 9 | and pinning. 10 | - Some compatibility constraints, both between sync and async code, and between 11 | different async runtimes. 12 | - Higher maintenance burden, due to the ongoing evolution of async runtimes 13 | and language support. 14 | 15 | In short, async Rust is more difficult to use and can result in a higher 16 | maintenance burden than synchronous Rust, 17 | but gives you best-in-class performance in return. 18 | All areas of async Rust are constantly improving, 19 | so the impact of these issues will lessen over time. 20 | 21 | ## Language and library support 22 | 23 | While asynchronous programming is supported by Rust itself, 24 | most async applications depend on functionality provided 25 | by community crates. 26 | As such, you need to rely on a mixture of 27 | language features and library support: 28 | 29 | - The most fundamental traits, types and functions, such as the 30 | [`Future`](https://doc.rust-lang.org/std/future/trait.Future.html) trait, 31 | are provided by the standard library. 32 | - The `async/await` syntax is supported directly by the Rust compiler. 33 | - Many utility types, macros and functions are provided by the 34 | [`futures`](https://docs.rs/futures/) crate. They can be used in any async 35 | Rust application. 36 | - Execution of async code, IO and task spawning are provided by "async 37 | runtimes", such as Tokio and async-std. Most async applications, and some 38 | async crates, depend on a specific runtime. See 39 | the ["The Async Ecosystem"](../08_ecosystem/00_chapter.md) section for more 40 | details. 41 | 42 | Some language features you may be used to from synchronous Rust are not yet 43 | available in async Rust.
Notably, Rust did not support 43 | `async fn` in traits until version 1.75.0, and traits with async methods still have limitations around dynamic dispatch. On older compilers, or where dynamic dispatch is needed, you need to use workarounds to achieve the same 44 | result, which can be more verbose. 45 | 46 | ## Compiling and debugging 47 | 48 | For the most part, compiler and runtime errors in async Rust work 49 | the same way as they always have in Rust. There are a few 50 | noteworthy differences: 51 | 52 | ### Compilation errors 53 | 54 | Compilation errors in async Rust conform to the same high standards as 55 | synchronous Rust, but since async Rust often depends on more complex language 56 | features, such as lifetimes and pinning, you may encounter these types of 57 | errors more frequently. 58 | 59 | ### Runtime errors 60 | 61 | Whenever the compiler encounters an async function, it generates a state 62 | machine under the hood. Stack traces in async Rust typically contain details 63 | from these state machines, as well as function calls from 64 | the runtime. As such, interpreting stack traces can be a bit more involved than 65 | it would be in synchronous Rust. 66 | 67 | ### New failure modes 68 | 69 | A few novel failure modes are possible in async Rust, for instance 70 | if you call a blocking function from an async context or if you implement 71 | the `Future` trait incorrectly. Such errors can silently pass both the 72 | compiler and sometimes even unit tests. Having a firm understanding 73 | of the underlying concepts, which this book aims to give you, can help you 74 | avoid these pitfalls. 75 | 76 | ## Compatibility considerations 77 | 78 | Asynchronous and synchronous code cannot always be combined freely. 79 | For instance, you can't directly call an async function from a sync function. 80 | Sync and async code also tend to promote different design patterns, which can 81 | make it difficult to compose code intended for the different environments.
83 | 84 | Even async code cannot always be combined freely. Some crates depend on a 85 | specific async runtime to function. If so, it is usually specified in the 86 | crate's dependency list. 87 | 88 | These compatibility issues can limit your options, so make sure to 89 | research which async runtime and what crates you may need early. 90 | Once you have settled in with a runtime, you won't have to worry 91 | much about compatibility. 92 | 93 | ## Performance characteristics 94 | 95 | The performance of async Rust depends on the implementation of the 96 | async runtime you're using. 97 | Even though the runtimes that power async Rust applications are relatively new, 98 | they perform exceptionally well for most practical workloads. 99 | 100 | That said, most of the async ecosystem assumes a _multi-threaded_ runtime. 101 | This makes it difficult to enjoy the theoretical performance benefits 102 | of single-threaded async applications, namely cheaper synchronization. 103 | Another overlooked use-case is _latency sensitive tasks_, which are 104 | important for drivers, GUI applications and so on. Such tasks depend 105 | on runtime and/or OS support in order to be scheduled appropriately. 106 | You can expect better library support for these use cases in the future. 107 | -------------------------------------------------------------------------------- /src/01_getting_started/04_async_await_primer.md: -------------------------------------------------------------------------------- 1 | # `async`/`.await` Primer 2 | 3 | `async`/`.await` is Rust's built-in tool for writing asynchronous functions 4 | that look like synchronous code. `async` transforms a block of code into a 5 | state machine that implements a trait called `Future`. Whereas calling a 6 | blocking function in a synchronous method would block the whole thread, 7 | blocked `Future`s will yield control of the thread, allowing other 8 | `Future`s to run. 
9 | 10 | Let's add some dependencies to the `Cargo.toml` file: 11 | 12 | ```toml 13 | {{#include ../../examples/01_04_async_await_primer/Cargo.toml:9:10}} 14 | ``` 15 | 16 | To create an asynchronous function, you can use the `async fn` syntax: 17 | 18 | ```rust,edition2018 19 | async fn do_something() { /* ... */ } 20 | ``` 21 | 22 | The value returned by `async fn` is a `Future`. For anything to happen, 23 | the `Future` needs to be run on an executor. 24 | 25 | ```rust,edition2018 26 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:hello_world}} 27 | ``` 28 | 29 | Inside an `async fn`, you can use `.await` to wait for the completion of 30 | another type that implements the `Future` trait, such as the output of 31 | another `async fn`. Unlike `block_on`, `.await` doesn't block the current 32 | thread, but instead asynchronously waits for the future to complete, allowing 33 | other tasks to run if the future is currently unable to make progress. 34 | 35 | For example, imagine that we have three `async fn`: `learn_song`, `sing_song`, 36 | and `dance`: 37 | 38 | ```rust,ignore 39 | async fn learn_song() -> Song { /* ... */ } 40 | async fn sing_song(song: Song) { /* ... */ } 41 | async fn dance() { /* ... */ } 42 | ``` 43 | 44 | One way to learn, sing, and dance would be to block on each of these futures 45 | individually: 46 | 47 | ```rust,ignore 48 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_each}} 49 | ``` 50 | 51 | However, we're not giving the best performance possible this way—we're 52 | only ever doing one thing at once! Clearly we have to learn the song before 53 | we can sing it, but it's possible to dance at the same time as learning and 54 | singing the song.
To do this, we can create two separate `async fn` which 55 | can be run concurrently: 56 | 57 | ```rust,ignore 58 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_main}} 59 | ``` 60 | 61 | In this example, learning the song must happen before singing the song, but 62 | both learning and singing can happen at the same time as dancing. If we used 63 | `block_on(learn_song())` rather than `learn_song().await` in `learn_and_sing`, 64 | the thread wouldn't be able to do anything else while `learn_song` was running. 65 | This would make it impossible to dance at the same time. By `.await`-ing 66 | the `learn_song` future, we allow other tasks to take over the current thread 67 | if `learn_song` is blocked. This makes it possible to run multiple futures 68 | to completion concurrently on the same thread. 69 | -------------------------------------------------------------------------------- /src/02_execution/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Under the Hood: Executing `Future`s and Tasks 2 | 3 | In this section, we'll cover the underlying structure of how `Future`s and 4 | asynchronous tasks are scheduled. If you're only interested in learning 5 | how to write higher-level code that uses existing `Future` types and aren't 6 | interested in the details of how `Future` types work, you can skip ahead to 7 | the `async`/`await` chapter. However, several of the topics discussed in this 8 | chapter are useful for understanding how `async`/`await` code works, 9 | understanding the runtime and performance properties of `async`/`await` code, 10 | and building new asynchronous primitives. If you decide to skip this section 11 | now, you may want to bookmark it to revisit in the future. 12 | 13 | Now, with that out of the way, let's talk about the `Future` trait. 
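As a warm-up for the machinery this chapter describes, a minimal "executor" can be written with nothing but the standard library. The sketch below (deliberately cruder than the executor built later in this chapter) busy-polls one future with a no-op `Waker`, which is enough for futures that never return `Poll::Pending`:

```rust
// Sketch: the smallest possible future-driving loop, std-only.
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A Waker that does nothing when woken — fine here, because we poll in a
// hot loop instead of sleeping between polls.
fn noop_waker() -> Waker {
    const VTABLE: RawWakerVTable = RawWakerVTable::new(
        |_| RAW, // clone: just make another identical RawWaker
        |_| {},  // wake: no-op
        |_| {},  // wake_by_ref: no-op
        |_| {},  // drop: no-op
    );
    const RAW: RawWaker = RawWaker::new(std::ptr::null(), &VTABLE);
    // Safety: the vtable functions never touch the (null) data pointer.
    unsafe { Waker::from_raw(RAW) }
}

// Poll the future repeatedly until it is ready. A real executor would
// sleep here until `wake()` is called; this sketch simply spins.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

async fn add(a: i32, b: i32) -> i32 { a + b }

fn main() {
    let fut = add(3, 4);   // inert: nothing has run yet
    let n = block_on(fut); // polling drives it to completion
    assert_eq!(n, 7);
}
```

The rest of the chapter fills in everything this sketch waves away: what `poll`, `Context`, and `Waker` actually do, and how to wait efficiently instead of spinning.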
14 | -------------------------------------------------------------------------------- /src/02_execution/02_future.md: -------------------------------------------------------------------------------- 1 | # The `Future` Trait 2 | 3 | The `Future` trait is at the center of asynchronous programming in Rust. 4 | A `Future` is an asynchronous computation that can produce a value 5 | (although that value may be empty, e.g. `()`). A *simplified* version of 6 | the future trait might look something like this: 7 | 8 | ```rust 9 | {{#include ../../examples/02_02_future_trait/src/lib.rs:simple_future}} 10 | ``` 11 | 12 | Futures can be advanced by calling the `poll` function, which will drive the 13 | future as far towards completion as possible. If the future completes, it 14 | returns `Poll::Ready(result)`. If the future is not able to complete yet, it 15 | returns `Poll::Pending` and arranges for the `wake()` function to be called 16 | when the `Future` is ready to make more progress. When `wake()` is called, the 17 | executor driving the `Future` will call `poll` again so that the `Future` can 18 | make more progress. 19 | 20 | Without `wake()`, the executor would have no way of knowing when a particular 21 | future could make progress, and would have to be constantly polling every 22 | future. With `wake()`, the executor knows exactly which futures are ready to 23 | be `poll`ed. 24 | 25 | For example, consider the case where we want to read from a socket that may 26 | or may not have data available already. If there is data, we can read it 27 | in and return `Poll::Ready(data)`, but if no data is ready, our future is 28 | blocked and can no longer make progress. When no data is available, we 29 | must register `wake` to be called when data becomes ready on the socket, 30 | which will tell the executor that our future is ready to make progress. 
31 | A simple `SocketRead` future might look something like this: 32 | 33 | ```rust,ignore 34 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 35 | ``` 36 | 37 | This model of `Future`s allows for composing together multiple asynchronous 38 | operations without needing intermediate allocations. Running multiple futures 39 | at once or chaining futures together can be implemented via allocation-free 40 | state machines, like this: 41 | 42 | ```rust,ignore 43 | {{#include ../../examples/02_02_future_trait/src/lib.rs:join}} 44 | ``` 45 | 46 | This shows how multiple futures can be run simultaneously without needing 47 | separate allocations, allowing for more efficient asynchronous programs. 48 | Similarly, multiple sequential futures can be run one after another, like this: 49 | 50 | ```rust,ignore 51 | {{#include ../../examples/02_02_future_trait/src/lib.rs:and_then}} 52 | ``` 53 | 54 | These examples show how the `Future` trait can be used to express asynchronous 55 | control flow without requiring multiple allocated objects and deeply nested 56 | callbacks. With the basic control-flow out of the way, let's talk about the 57 | real `Future` trait and how it is different. 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_02_future_trait/src/lib.rs:real_future}} 61 | ``` 62 | 63 | The first change you'll notice is that our `self` type is no longer `&mut Self`, 64 | but has changed to `Pin<&mut Self>`. We'll talk more about pinning in [a later 65 | section][pinning], but for now know that it allows us to create futures that 66 | are immovable. Immovable objects can store pointers between their fields, 67 | e.g. `struct MyFut { a: i32, ptr_to_a: *const i32 }`. Pinning is necessary 68 | to enable async/await. 69 | 70 | Secondly, `wake: fn()` has changed to `&mut Context<'_>`. In `SimpleFuture`, 71 | we used a call to a function pointer (`fn()`) to tell the future executor that 72 | the future in question should be polled. 
However, since `fn()` is just a 73 | function pointer, it can't store any data about *which* `Future` called `wake`. 74 | 75 | In a real-world scenario, a complex application like a web server may have 76 | thousands of different connections whose wakeups should all be 77 | managed separately. The `Context` type solves this by providing access to 78 | a value of type `Waker`, which can be used to wake up a specific task. 79 | 80 | [pinning]: ../04_pinning/01_chapter.md 81 | -------------------------------------------------------------------------------- /src/02_execution/03_wakeups.md: -------------------------------------------------------------------------------- 1 | # Task Wakeups with `Waker` 2 | 3 | It's common that futures aren't able to complete the first time they are 4 | `poll`ed. When this happens, the future needs to ensure that it is polled 5 | again once it is ready to make more progress. This is done with the `Waker` 6 | type. 7 | 8 | Each time a future is polled, it is polled as part of a "task". Tasks are 9 | the top-level futures that have been submitted to an executor. 10 | 11 | `Waker` provides a `wake()` method that can be used to tell the executor that 12 | the associated task should be awoken. When `wake()` is called, the executor 13 | knows that the task associated with the `Waker` is ready to make progress, and 14 | its future should be polled again. 15 | 16 | `Waker` also implements `clone()` so that it can be copied around and stored. 17 | 18 | Let's try implementing a simple timer future using `Waker`. 19 | 20 | ## Applied: Build a Timer 21 | 22 | For the sake of the example, we'll just spin up a new thread when the timer 23 | is created, sleep for the required time, and then signal the timer future 24 | when the time window has elapsed. 
25 | 26 | First, start a new project with `cargo new --lib timer_future` and add the imports 27 | we'll need to get started to `src/lib.rs`: 28 | 29 | ```rust 30 | {{#include ../../examples/02_03_timer/src/lib.rs:imports}} 31 | ``` 32 | 33 | Let's start by defining the future type itself. Our future needs a way for the 34 | thread to communicate that the timer has elapsed and the future should complete. 35 | We'll use a shared `Arc<Mutex<..>>` value to communicate between the thread and 36 | the future. 37 | 38 | ```rust,ignore 39 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_decl}} 40 | ``` 41 | 42 | Now, let's actually write the `Future` implementation! 43 | 44 | ```rust,ignore 45 | {{#include ../../examples/02_03_timer/src/lib.rs:future_for_timer}} 46 | ``` 47 | 48 | Pretty simple, right? If the thread has set `shared_state.completed = true`, 49 | we're done! Otherwise, we clone the `Waker` for the current task and pass it to 50 | `shared_state.waker` so that the thread can wake the task back up. 51 | 52 | Importantly, we have to update the `Waker` every time the future is polled 53 | because the future may have moved to a different task with a different 54 | `Waker`. This will happen when futures are passed around between tasks after 55 | being polled. 56 | 57 | Finally, we need the API to actually construct the timer and start the thread: 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_new}} 61 | ``` 62 | 63 | Woot! That's all we need to build a simple timer future. Now, if only we had 64 | an executor to run the future on... 65 | -------------------------------------------------------------------------------- /src/02_execution/04_executor.md: -------------------------------------------------------------------------------- 1 | # Applied: Build an Executor 2 | 3 | Rust's `Future`s are lazy: they won't do anything unless actively driven to
One way to drive a future to completion is to `.await` it inside 5 | an `async` function, but that just pushes the problem one level up: who will 6 | run the futures returned from the top-level `async` functions? The answer is 7 | that we need a `Future` executor. 8 | 9 | `Future` executors take a set of top-level `Future`s and run them to completion 10 | by calling `poll` whenever the `Future` can make progress. Typically, an 11 | executor will `poll` a future once to start off. When `Future`s indicate that 12 | they are ready to make progress by calling `wake()`, they are placed back 13 | onto a queue and `poll` is called again, repeating until the `Future` has 14 | completed. 15 | 16 | In this section, we'll write our own simple executor capable of running a large 17 | number of top-level futures to completion concurrently. 18 | 19 | For this example, we depend on the `futures` crate for the `ArcWake` trait, 20 | which provides an easy way to construct a `Waker`. Edit `Cargo.toml` to add 21 | a new dependency: 22 | 23 | ```toml 24 | [package] 25 | name = "timer_future" 26 | version = "0.1.0" 27 | authors = ["XYZ Author"] 28 | edition = "2021" 29 | 30 | [dependencies] 31 | futures = "0.3" 32 | ``` 33 | 34 | Next, we need the following imports at the top of `src/main.rs`: 35 | 36 | ```rust,ignore 37 | {{#include ../../examples/02_04_executor/src/lib.rs:imports}} 38 | ``` 39 | 40 | Our executor will work by sending tasks to run over a channel. The executor 41 | will pull events off of the channel and run them. When a task is ready to 42 | do more work (is awoken), it can schedule itself to be polled again by 43 | putting itself back onto the channel. 44 | 45 | In this design, the executor itself just needs the receiving end of the task 46 | channel. The user will get a sending end so that they can spawn new futures. 
47 | Tasks themselves are just futures that can reschedule themselves, so we'll 48 | store them as a future paired with a sender that the task can use to requeue 49 | itself. 50 | 51 | ```rust,ignore 52 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_decl}} 53 | ``` 54 | 55 | Let's also add a method to `Spawner` to make it easy to spawn new futures. 56 | This method will take a future type, box it, and create a new `Arc<Task>` with 57 | it inside which can be enqueued onto the executor. 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_04_executor/src/lib.rs:spawn_fn}} 61 | ``` 62 | 63 | To poll futures, we'll need to create a `Waker`. 64 | As discussed in the [task wakeups section], `Waker`s are responsible 65 | for scheduling a task to be polled again once `wake` is called. Remember that 66 | `Waker`s tell the executor exactly which task has become ready, allowing 67 | them to poll just the futures that are ready to make progress. The easiest way 68 | to create a new `Waker` is by implementing the `ArcWake` trait and then using 69 | the `waker_ref` or `.into_waker()` functions to turn an `Arc<impl ArcWake>` 70 | into a `Waker`. Let's implement `ArcWake` for our tasks to allow them to be 71 | turned into `Waker`s and awoken: 72 | 73 | ```rust,ignore 74 | {{#include ../../examples/02_04_executor/src/lib.rs:arcwake_for_task}} 75 | ``` 76 | 77 | When a `Waker` is created from an `Arc<Task>`, calling `wake()` on it will 78 | cause a copy of the `Arc` to be sent onto the task channel. Our executor then 79 | needs to pick up the task and poll it. Let's implement that: 80 | 81 | ```rust,ignore 82 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_run}} 83 | ``` 84 | 85 | Congratulations! We now have a working futures executor.
We can even use it 86 | to run `async/.await` code and custom futures, such as the `TimerFuture` we 87 | wrote earlier: 88 | 89 | ```rust,edition2018,ignore 90 | {{#include ../../examples/02_04_executor/src/lib.rs:main}} 91 | ``` 92 | 93 | [task wakeups section]: ./03_wakeups.md 94 | -------------------------------------------------------------------------------- /src/02_execution/05_io.md: -------------------------------------------------------------------------------- 1 | # Executors and System IO 2 | 3 | In the previous section on [The `Future` Trait], we discussed this example of 4 | a future that performed an asynchronous read on a socket: 5 | 6 | ```rust,ignore 7 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 8 | ``` 9 | 10 | This future will read available data on a socket, and if no data is available, 11 | it will yield to the executor, requesting that its task be awoken when the 12 | socket becomes readable again. However, it's not clear from this example how 13 | the `Socket` type is implemented, and in particular it isn't obvious how the 14 | `set_readable_callback` function works. How can we arrange for `wake()` 15 | to be called once the socket becomes readable? One option would be to have 16 | a thread that continually checks whether `socket` is readable, calling 17 | `wake()` when appropriate. However, this would be quite inefficient, requiring 18 | a separate thread for each blocked IO future. This would greatly reduce the 19 | efficiency of our async code. 20 | 21 | In practice, this problem is solved through integration with an IO-aware 22 | system blocking primitive, such as `epoll` on Linux, `kqueue` on FreeBSD and 23 | Mac OS, IOCP on Windows, and `port`s on Fuchsia (all of which are exposed 24 | through the cross-platform Rust crate [`mio`]). These primitives all allow 25 | a thread to block on multiple asynchronous IO events, returning once one of 26 | the events completes. 
In practice, these APIs usually look something like 27 | this: 28 | 29 | ```rust,ignore 30 | struct IoBlocker { 31 | /* ... */ 32 | } 33 | 34 | struct Event { 35 | // An ID uniquely identifying the event that occurred and was listened for. 36 | id: usize, 37 | 38 | // A set of signals to wait for, or which occurred. 39 | signals: Signals, 40 | } 41 | 42 | impl IoBlocker { 43 | /// Create a new collection of asynchronous IO events to block on. 44 | fn new() -> Self { /* ... */ } 45 | 46 | /// Express an interest in a particular IO event. 47 | fn add_io_event_interest( 48 | &self, 49 | 50 | /// The object on which the event will occur 51 | io_object: &IoObject, 52 | 53 | /// A set of signals that may appear on the `io_object` for 54 | /// which an event should be triggered, paired with 55 | /// an ID to give to events that result from this interest. 56 | event: Event, 57 | ) { /* ... */ } 58 | 59 | /// Block until one of the events occurs. 60 | fn block(&self) -> Event { /* ... */ } 61 | } 62 | 63 | let mut io_blocker = IoBlocker::new(); 64 | io_blocker.add_io_event_interest( 65 | &socket_1, 66 | Event { id: 1, signals: READABLE }, 67 | ); 68 | io_blocker.add_io_event_interest( 69 | &socket_2, 70 | Event { id: 2, signals: READABLE | WRITABLE }, 71 | ); 72 | let event = io_blocker.block(); 73 | 74 | // prints e.g. "Socket 1 is now READABLE" if socket one became readable. 75 | println!("Socket {:?} is now {:?}", event.id, event.signals); 76 | ``` 77 | 78 | Futures executors can use these primitives to provide asynchronous IO objects 79 | such as sockets that can configure callbacks to be run when a particular IO 80 | event occurs. In the case of our `SocketRead` example above, the 81 | `Socket::set_readable_callback` function might look like the following pseudocode: 82 | 83 | ```rust,ignore 84 | impl Socket { 85 | fn set_readable_callback(&self, waker: Waker) { 86 | // `local_executor` is a reference to the local executor. 
87 | // This could be provided at creation of the socket, but in practice 88 | // many executor implementations pass it down through thread local 89 | // storage for convenience. 90 | let local_executor = self.local_executor; 91 | 92 | // Unique ID for this IO object. 93 | let id = self.id; 94 | 95 | // Store the local waker in the executor's map so that it can be called 96 | // once the IO event arrives. 97 | local_executor.event_map.insert(id, waker); 98 | local_executor.add_io_event_interest( 99 | &self.socket_file_descriptor, 100 | Event { id, signals: READABLE }, 101 | ); 102 | } 103 | } 104 | ``` 105 | 106 | We can now have just one executor thread which can receive and dispatch any 107 | IO event to the appropriate `Waker`, which will wake up the corresponding 108 | task, allowing the executor to drive more tasks to completion before returning 109 | to check for more IO events (and the cycle continues...). 110 | 111 | [The `Future` Trait]: ./02_future.md 112 | [`mio`]: https://github.com/tokio-rs/mio 113 | -------------------------------------------------------------------------------- /src/03_async_await/01_chapter.md: -------------------------------------------------------------------------------- 1 | # `async`/`.await` 2 | 3 | In [the first chapter], we took a brief look at `async`/`.await`. 4 | This chapter will discuss `async`/`.await` in 5 | greater detail, explaining how it works and how `async` code differs from 6 | traditional Rust programs. 7 | 8 | `async`/`.await` are special pieces of Rust syntax that make it possible to 9 | yield control of the current thread rather than blocking, allowing other 10 | code to make progress while waiting on an operation to complete. 11 | 12 | There are two main ways to use `async`: `async fn` and `async` blocks. 
13 | Each returns a value that implements the `Future` trait: 14 | 15 | ```rust,edition2018,ignore 16 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_fn_and_block_examples}} 17 | ``` 18 | 19 | As we saw in the first chapter, `async` bodies and other futures are lazy: 20 | they do nothing until they are run. The most common way to run a `Future` 21 | is to `.await` it. When `.await` is called on a `Future`, it will attempt 22 | to run it to completion. If the `Future` is blocked, it will yield control 23 | of the current thread. When more progress can be made, the `Future` will be picked 24 | up by the executor and will resume running, allowing the `.await` to resolve. 25 | 26 | ## `async` Lifetimes 27 | 28 | Unlike traditional functions, `async fn`s which take references or other 29 | non-`'static` arguments return a `Future` which is bounded by the lifetime of 30 | the arguments: 31 | 32 | ```rust,edition2018,ignore 33 | {{#include ../../examples/03_01_async_await/src/lib.rs:lifetimes_expanded}} 34 | ``` 35 | 36 | This means that the future returned from an `async fn` must be `.await`ed 37 | while its non-`'static` arguments are still valid. In the common 38 | case of `.await`ing the future immediately after calling the function 39 | (as in `foo(&x).await`) this is not an issue. However, if storing the future 40 | or sending it over to another task or thread, this may be an issue. 41 | 42 | One common workaround for turning an `async fn` with references-as-arguments 43 | into a `'static` future is to bundle the arguments with the call to the 44 | `async fn` inside an `async` block: 45 | 46 | ```rust,edition2018,ignore 47 | {{#include ../../examples/03_01_async_await/src/lib.rs:static_future_with_borrow}} 48 | ``` 49 | 50 | By moving the argument into the `async` block, we extend its lifetime to match 51 | that of the `Future` returned from the call to `good`. 
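To make the workaround concrete, here is a minimal, self-contained sketch of the same pattern. The `borrow_x` function and the hand-rolled `block_on` executor are illustrative stand-ins, not part of this book's example crates:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// An `async fn` taking a reference: its returned future borrows `x`,
// so it is *not* `'static`.
async fn borrow_x(x: &u8) -> u8 {
    *x
}

// Bundling the borrow inside an `async` block moves the owner into the
// future itself, so the returned future *is* `'static`.
fn good() -> impl Future<Output = u8> + 'static {
    async {
        let x = 5;
        borrow_x(&x).await
    }
}

// A tiny single-future executor, sufficient for futures that never pend.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn assert_static<T: 'static>(_t: &T) {}

fn main() {
    // Because the future is `'static`, it could be stored or sent to another
    // task before being run.
    let fut = good();
    assert_static(&fut);
    assert_eq!(block_on(fut), 5);
}
```

Here `assert_static` only type-checks: it compiles precisely because no borrowed data escapes the `async` block.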
52 | 53 | ## `async move` 54 | 55 | `async` blocks and closures allow the `move` keyword, much like normal 56 | closures. An `async move` block will take ownership of the variables it 57 | references, allowing it to outlive the current scope, but giving up the ability 58 | to share those variables with other code: 59 | 60 | ```rust,edition2018,ignore 61 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_move_examples}} 62 | ``` 63 | 64 | ## `.await`ing on a Multithreaded Executor 65 | 66 | Note that, when using a multithreaded `Future` executor, a `Future` may move 67 | between threads, so any variables used in `async` bodies must be able to travel 68 | between threads, as any `.await` can potentially result in a switch to a new 69 | thread. 70 | 71 | This means that it is not safe to use `Rc`, `&RefCell` or any other types 72 | that don't implement the `Send` trait, including references to types that don't 73 | implement the `Sync` trait. 74 | 75 | (Caveat: it is possible to use these types as long as they aren't in scope 76 | during a call to `.await`.) 77 | 78 | Similarly, it isn't a good idea to hold a traditional non-futures-aware lock 79 | across an `.await`, as it can cause the threadpool to lock up: one task could 80 | take out a lock, `.await` and yield to the executor, allowing another task to 81 | attempt to take the lock and cause a deadlock. To avoid this, use the `Mutex` 82 | in `futures::lock` rather than the one from `std::sync`. 83 | 84 | [the first chapter]: ../01_getting_started/04_async_await_primer.md 85 | -------------------------------------------------------------------------------- /src/04_pinning/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Pinning 2 | 3 | To poll futures, they must be pinned using a special type called 4 | `Pin`. 
If you read the explanation of [the `Future` trait] in the 5 | previous section ["Executing `Future`s and Tasks"], you'll recognize 6 | `Pin` from the `self: Pin<&mut Self>` in the `Future::poll` method's definition. 7 | But what does it mean, and why do we need it? 8 | 9 | ## Why Pinning 10 | 11 | `Pin` works in tandem with the `Unpin` marker. Pinning makes it possible 12 | to guarantee that an object implementing `!Unpin` won't ever be moved. To understand 13 | why this is necessary, we need to remember how `async`/`.await` works. Consider 14 | the following code: 15 | 16 | ```rust,edition2018,ignore 17 | let fut_one = /* ... */; 18 | let fut_two = /* ... */; 19 | async move { 20 | fut_one.await; 21 | fut_two.await; 22 | } 23 | ``` 24 | 25 | Under the hood, this creates an anonymous type that implements `Future`, 26 | providing a `poll` method that looks something like this: 27 | 28 | ```rust,ignore 29 | // The `Future` type generated by our `async { ... }` block 30 | struct AsyncFuture { 31 | fut_one: FutOne, 32 | fut_two: FutTwo, 33 | state: State, 34 | } 35 | 36 | // List of states our `async` block can be in 37 | enum State { 38 | AwaitingFutOne, 39 | AwaitingFutTwo, 40 | Done, 41 | } 42 | 43 | impl Future for AsyncFuture { 44 | type Output = (); 45 | 46 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> { 47 | loop { 48 | match self.state { 49 | State::AwaitingFutOne => match self.fut_one.poll(..) { 50 | Poll::Ready(()) => self.state = State::AwaitingFutTwo, 51 | Poll::Pending => return Poll::Pending, 52 | } 53 | State::AwaitingFutTwo => match self.fut_two.poll(..) { 54 | Poll::Ready(()) => self.state = State::Done, 55 | Poll::Pending => return Poll::Pending, 56 | } 57 | State::Done => return Poll::Ready(()), 58 | } 59 | } 60 | } 61 | } 62 | ``` 63 | 64 | 65 | When `poll` is first called, it will poll `fut_one`. If `fut_one` can't 66 | complete, `AsyncFuture::poll` will return. 
Future calls to `poll` will pick 67 | up where the previous one left off. This process continues until the future 68 | is able to successfully complete. 69 | 70 | However, what happens if we have an `async` block that uses references? 71 | For example: 72 | 73 | ```rust,edition2018,ignore 74 | async { 75 | let mut x = [0; 128]; 76 | let read_into_buf_fut = read_into_buf(&mut x); 77 | read_into_buf_fut.await; 78 | println!("{:?}", x); 79 | } 80 | ``` 81 | 82 | What struct does this compile down to? 83 | 84 | ```rust,ignore 85 | struct ReadIntoBuf<'a> { 86 | buf: &'a mut [u8], // points to `x` below 87 | } 88 | 89 | struct AsyncFuture { 90 | x: [u8; 128], 91 | read_into_buf_fut: ReadIntoBuf<'what_lifetime?>, 92 | } 93 | ``` 94 | 95 | Here, the `ReadIntoBuf` future holds a reference into the other field of our 96 | structure, `x`. However, if `AsyncFuture` is moved, the location of `x` will 97 | move as well, invalidating the pointer stored in `read_into_buf_fut.buf`. 98 | 99 | Pinning futures to a particular spot in memory prevents this problem, making 100 | it safe to create references to values inside an `async` block. 101 | 102 | ## Pinning in Detail 103 | 104 | Let's try to understand pinning by using a slightly simpler example. The problem we encounter 105 | above is a problem that ultimately boils down to how we handle references in self-referential 106 | types in Rust. 
107 | 108 | For now our example will look like this: 109 | 110 | ```rust, ignore 111 | #[derive(Debug)] 112 | struct Test { 113 | a: String, 114 | b: *const String, 115 | } 116 | 117 | impl Test { 118 | fn new(txt: &str) -> Self { 119 | Test { 120 | a: String::from(txt), 121 | b: std::ptr::null(), 122 | } 123 | } 124 | 125 | fn init(&mut self) { 126 | let self_ref: *const String = &self.a; 127 | self.b = self_ref; 128 | } 129 | 130 | fn a(&self) -> &str { 131 | &self.a 132 | } 133 | 134 | fn b(&self) -> &String { 135 | assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 136 | unsafe { &*(self.b) } 137 | } 138 | } 139 | ``` 140 | 141 | `Test` provides methods to get a reference to the value of the fields `a` and `b`. Since `b` is a 142 | reference to `a` we store it as a pointer since the borrowing rules of Rust don't allow us to 143 | define this lifetime. We now have what we call a self-referential struct. 144 | 145 | Our example works fine if we don't move any of our data around as you can observe by running 146 | this example: 147 | 148 | ```rust 149 | fn main() { 150 | let mut test1 = Test::new("test1"); 151 | test1.init(); 152 | let mut test2 = Test::new("test2"); 153 | test2.init(); 154 | 155 | println!("a: {}, b: {}", test1.a(), test1.b()); 156 | println!("a: {}, b: {}", test2.a(), test2.b()); 157 | 158 | } 159 | # #[derive(Debug)] 160 | # struct Test { 161 | # a: String, 162 | # b: *const String, 163 | # } 164 | # 165 | # impl Test { 166 | # fn new(txt: &str) -> Self { 167 | # Test { 168 | # a: String::from(txt), 169 | # b: std::ptr::null(), 170 | # } 171 | # } 172 | # 173 | # // We need an `init` method to actually set our self-reference 174 | # fn init(&mut self) { 175 | # let self_ref: *const String = &self.a; 176 | # self.b = self_ref; 177 | # } 178 | # 179 | # fn a(&self) -> &str { 180 | # &self.a 181 | # } 182 | # 183 | # fn b(&self) -> &String { 184 | # assert!(!self.b.is_null(), "Test::b called without Test::init 
being called first");
185 | # unsafe { &*(self.b) }
186 | # }
187 | # }
188 | ```
189 | We get what we'd expect:
190 | 
191 | ```rust, ignore
192 | a: test1, b: test1
193 | a: test2, b: test2
194 | ```
195 | 
196 | Let's see what happens if we swap `test1` with `test2` and thereby move the data:
197 | 
198 | ```rust
199 | fn main() {
200 | let mut test1 = Test::new("test1");
201 | test1.init();
202 | let mut test2 = Test::new("test2");
203 | test2.init();
204 | 
205 | println!("a: {}, b: {}", test1.a(), test1.b());
206 | std::mem::swap(&mut test1, &mut test2);
207 | println!("a: {}, b: {}", test2.a(), test2.b());
208 | 
209 | }
210 | # #[derive(Debug)]
211 | # struct Test {
212 | # a: String,
213 | # b: *const String,
214 | # }
215 | # 
216 | # impl Test {
217 | # fn new(txt: &str) -> Self {
218 | # Test {
219 | # a: String::from(txt),
220 | # b: std::ptr::null(),
221 | # }
222 | # }
223 | # 
224 | # fn init(&mut self) {
225 | # let self_ref: *const String = &self.a;
226 | # self.b = self_ref;
227 | # }
228 | # 
229 | # fn a(&self) -> &str {
230 | # &self.a
231 | # }
232 | # 
233 | # fn b(&self) -> &String {
234 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
235 | # unsafe { &*(self.b) }
236 | # }
237 | # }
238 | ```
239 | 
240 | Naively, we might expect to see `test1` printed twice, like this:
241 | 
242 | ```rust, ignore
243 | a: test1, b: test1
244 | a: test1, b: test1
245 | ```
246 | 
247 | But instead we get:
248 | 
249 | ```rust, ignore
250 | a: test1, b: test1
251 | a: test1, b: test2
252 | ```
253 | 
254 | The pointer stored in `test2.b` still points to the old location, which is now
255 | inside `test1`. The struct is no longer self-referential: it holds a pointer to
256 | a field in a different object. That means we can no longer rely on the lifetime
257 | of `test2.b` being tied to the lifetime of `test2`.
258 | 
259 | If you're still not convinced, this should at least convince you:
260 | 
261 | ```rust
262 | fn main() {
263 | let mut test1 = Test::new("test1");
264 | test1.init();
265 | let mut test2 = Test::new("test2");
266 | test2.init();
267 | 
268 | println!("a: {}, b: {}", test1.a(), test1.b());
269 | std::mem::swap(&mut test1, &mut test2);
270 | test1.a = "I've totally changed now!".to_string();
271 | println!("a: {}, b: {}", test2.a(), test2.b());
272 | 
273 | }
274 | # #[derive(Debug)]
275 | # struct Test {
276 | # a: String,
277 | # b: *const String,
278 | # }
279 | # 
280 | # impl Test {
281 | # fn new(txt: &str) -> Self {
282 | # Test {
283 | # a: String::from(txt),
284 | # b: std::ptr::null(),
285 | # }
286 | # }
287 | # 
288 | # fn init(&mut self) {
289 | # let self_ref: *const String = &self.a;
290 | # self.b = self_ref;
291 | # }
292 | # 
293 | # fn a(&self) -> &str {
294 | # &self.a
295 | # }
296 | # 
297 | # fn b(&self) -> &String {
298 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
299 | # unsafe { &*(self.b) }
300 | # }
301 | # }
302 | ```
303 | 
304 | The diagram below can help visualize what's going on:
305 | 
306 | **Fig 1: Before and after swap**
307 | ![swap_problem](../assets/swap_problem.jpg)
308 | 
309 | It's easy to get this to show undefined behavior and fail in other spectacular ways as well.
310 | 
311 | ## Pinning in Practice
312 | 
313 | Let's see how pinning and the `Pin` type can help us solve this problem.
314 | 
315 | The `Pin` type wraps pointer types, guaranteeing that the value behind the
316 | pointer won't be moved if the pointee does not implement `Unpin`. For example,
317 | `Pin<&mut T>`, `Pin<&T>`, and `Pin<Box<T>>` all guarantee that `T` won't be
318 | moved if `T: !Unpin`.
319 | 
320 | Most types don't have a problem being moved. These types implement a trait
321 | called `Unpin`. Pointers to `Unpin` types can be freely placed into or taken
322 | out of `Pin`.
For example, `u8` is `Unpin`, so `Pin<&mut u8>` behaves just like 323 | a normal `&mut u8`. 324 | 325 | However, types that can't be moved after they're pinned have a marker called 326 | `!Unpin`. Futures created by async/await are an example of this. 327 | 328 | ### Pinning to the Stack 329 | 330 | Back to our example. We can solve our problem by using `Pin`. Let's take a look at what 331 | our example would look like if we required a pinned pointer instead: 332 | 333 | ```rust, ignore 334 | use std::pin::Pin; 335 | use std::marker::PhantomPinned; 336 | 337 | #[derive(Debug)] 338 | struct Test { 339 | a: String, 340 | b: *const String, 341 | _marker: PhantomPinned, 342 | } 343 | 344 | 345 | impl Test { 346 | fn new(txt: &str) -> Self { 347 | Test { 348 | a: String::from(txt), 349 | b: std::ptr::null(), 350 | _marker: PhantomPinned, // This makes our type `!Unpin` 351 | } 352 | } 353 | 354 | fn init(self: Pin<&mut Self>) { 355 | let self_ptr: *const String = &self.a; 356 | let this = unsafe { self.get_unchecked_mut() }; 357 | this.b = self_ptr; 358 | } 359 | 360 | fn a(self: Pin<&Self>) -> &str { 361 | &self.get_ref().a 362 | } 363 | 364 | fn b(self: Pin<&Self>) -> &String { 365 | assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 366 | unsafe { &*(self.b) } 367 | } 368 | } 369 | ``` 370 | 371 | Pinning an object to the stack will always be `unsafe` if our type implements 372 | `!Unpin`. You can use a crate like [`pin_utils`][pin_utils] to avoid writing 373 | our own `unsafe` code when pinning to the stack. 
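Since Rust 1.68, the standard library also provides the `std::pin::pin!` macro, which pins a value to the current stack frame without any `unsafe` on the caller's side. A minimal sketch:

```rust
use std::marker::PhantomPinned;
use std::pin::{pin, Pin};

#[derive(Debug)]
struct Test {
    a: String,
    _marker: PhantomPinned, // makes `Test` `!Unpin`
}

fn main() {
    // `pin!` pins the value for the rest of this stack frame; the original,
    // unpinned binding cannot be named afterwards, so the value can't be moved.
    let test1: Pin<&mut Test> = pin!(Test {
        a: String::from("test1"),
        _marker: PhantomPinned,
    });
    // Shared access still works through `Pin`'s `Deref` implementation.
    assert_eq!(test1.a, "test1");
}
```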
374 | 375 | Below, we pin the objects `test1` and `test2` to the stack: 376 | 377 | ```rust 378 | pub fn main() { 379 | // test1 is safe to move before we initialize it 380 | let mut test1 = Test::new("test1"); 381 | // Notice how we shadow `test1` to prevent it from being accessed again 382 | let mut test1 = unsafe { Pin::new_unchecked(&mut test1) }; 383 | Test::init(test1.as_mut()); 384 | 385 | let mut test2 = Test::new("test2"); 386 | let mut test2 = unsafe { Pin::new_unchecked(&mut test2) }; 387 | Test::init(test2.as_mut()); 388 | 389 | println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref())); 390 | println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref())); 391 | } 392 | # use std::pin::Pin; 393 | # use std::marker::PhantomPinned; 394 | # 395 | # #[derive(Debug)] 396 | # struct Test { 397 | # a: String, 398 | # b: *const String, 399 | # _marker: PhantomPinned, 400 | # } 401 | # 402 | # 403 | # impl Test { 404 | # fn new(txt: &str) -> Self { 405 | # Test { 406 | # a: String::from(txt), 407 | # b: std::ptr::null(), 408 | # // This makes our type `!Unpin` 409 | # _marker: PhantomPinned, 410 | # } 411 | # } 412 | # 413 | # fn init(self: Pin<&mut Self>) { 414 | # let self_ptr: *const String = &self.a; 415 | # let this = unsafe { self.get_unchecked_mut() }; 416 | # this.b = self_ptr; 417 | # } 418 | # 419 | # fn a(self: Pin<&Self>) -> &str { 420 | # &self.get_ref().a 421 | # } 422 | # 423 | # fn b(self: Pin<&Self>) -> &String { 424 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 425 | # unsafe { &*(self.b) } 426 | # } 427 | # } 428 | ``` 429 | 430 | Now, if we try to move our data now we get a compilation error: 431 | 432 | ```rust, compile_fail 433 | pub fn main() { 434 | let mut test1 = Test::new("test1"); 435 | let mut test1 = unsafe { Pin::new_unchecked(&mut test1) }; 436 | Test::init(test1.as_mut()); 437 | 438 | let mut test2 = Test::new("test2"); 439 | let mut test2 = unsafe { 
Pin::new_unchecked(&mut test2) }; 440 | Test::init(test2.as_mut()); 441 | 442 | println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref())); 443 | std::mem::swap(test1.get_mut(), test2.get_mut()); 444 | println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref())); 445 | } 446 | # use std::pin::Pin; 447 | # use std::marker::PhantomPinned; 448 | # 449 | # #[derive(Debug)] 450 | # struct Test { 451 | # a: String, 452 | # b: *const String, 453 | # _marker: PhantomPinned, 454 | # } 455 | # 456 | # 457 | # impl Test { 458 | # fn new(txt: &str) -> Self { 459 | # Test { 460 | # a: String::from(txt), 461 | # b: std::ptr::null(), 462 | # _marker: PhantomPinned, // This makes our type `!Unpin` 463 | # } 464 | # } 465 | # 466 | # fn init(self: Pin<&mut Self>) { 467 | # let self_ptr: *const String = &self.a; 468 | # let this = unsafe { self.get_unchecked_mut() }; 469 | # this.b = self_ptr; 470 | # } 471 | # 472 | # fn a(self: Pin<&Self>) -> &str { 473 | # &self.get_ref().a 474 | # } 475 | # 476 | # fn b(self: Pin<&Self>) -> &String { 477 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 478 | # unsafe { &*(self.b) } 479 | # } 480 | # } 481 | ``` 482 | 483 | The type system prevents us from moving the data, as follows: 484 | 485 | ``` 486 | error[E0277]: `PhantomPinned` cannot be unpinned 487 | --> src\test.rs:56:30 488 | | 489 | 56 | std::mem::swap(test1.get_mut(), test2.get_mut()); 490 | | ^^^^^^^ within `test1::Test`, the trait `Unpin` is not implemented for `PhantomPinned` 491 | | 492 | = note: consider using `Box::pin` 493 | note: required because it appears within the type `test1::Test` 494 | --> src\test.rs:7:8 495 | | 496 | 7 | struct Test { 497 | | ^^^^ 498 | note: required by a bound in `std::pin::Pin::<&'a mut T>::get_mut` 499 | --> <...>rustlib/src/rust\library\core\src\pin.rs:748:12 500 | | 501 | 748 | T: Unpin, 502 | | ^^^^^ required by this bound in `std::pin::Pin::<&'a mut T>::get_mut` 503 | ``` 
504 | 505 | > It's important to note that stack pinning will always rely on guarantees 506 | > you give when writing `unsafe`. While we know that the _pointee_ of `&'a mut T` 507 | > is pinned for the lifetime of `'a` we can't know if the data `&'a mut T` 508 | > points to isn't moved after `'a` ends. If it does it will violate the Pin 509 | > contract. 510 | > 511 | > A mistake that is easy to make is forgetting to shadow the original variable 512 | > since you could drop the `Pin` and move the data after `&'a mut T` 513 | > like shown below (which violates the Pin contract): 514 | > 515 | > ```rust 516 | > fn main() { 517 | > let mut test1 = Test::new("test1"); 518 | > let mut test1_pin = unsafe { Pin::new_unchecked(&mut test1) }; 519 | > Test::init(test1_pin.as_mut()); 520 | > 521 | > drop(test1_pin); 522 | > println!(r#"test1.b points to "test1": {:?}..."#, test1.b); 523 | > 524 | > let mut test2 = Test::new("test2"); 525 | > mem::swap(&mut test1, &mut test2); 526 | > println!("... and now it points nowhere: {:?}", test1.b); 527 | > } 528 | > # use std::pin::Pin; 529 | > # use std::marker::PhantomPinned; 530 | > # use std::mem; 531 | > # 532 | > # #[derive(Debug)] 533 | > # struct Test { 534 | > # a: String, 535 | > # b: *const String, 536 | > # _marker: PhantomPinned, 537 | > # } 538 | > # 539 | > # 540 | > # impl Test { 541 | > # fn new(txt: &str) -> Self { 542 | > # Test { 543 | > # a: String::from(txt), 544 | > # b: std::ptr::null(), 545 | > # // This makes our type `!Unpin` 546 | > # _marker: PhantomPinned, 547 | > # } 548 | > # } 549 | > # 550 | > # fn init<'a>(self: Pin<&'a mut Self>) { 551 | > # let self_ptr: *const String = &self.a; 552 | > # let this = unsafe { self.get_unchecked_mut() }; 553 | > # this.b = self_ptr; 554 | > # } 555 | > # 556 | > # #[allow(unused)] 557 | > # fn a<'a>(self: Pin<&'a Self>) -> &'a str { 558 | > # &self.get_ref().a 559 | > # } 560 | > # 561 | > # #[allow(unused)] 562 | > # fn b<'a>(self: Pin<&'a Self>) -> &'a String { 563 
| > # assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
564 | > # unsafe { &*(self.b) }
565 | > # }
566 | > # }
567 | > ```
568 | 
569 | ### Pinning to the Heap
570 | 
571 | Pinning an `!Unpin` type to the heap gives our data a stable address so we know
572 | that the data we point to can't move after it's pinned. In contrast to stack
573 | pinning, we know that the data will be pinned for the lifetime of the object.
574 | 
575 | ```rust, edition2018
576 | use std::pin::Pin;
577 | use std::marker::PhantomPinned;
578 | 
579 | #[derive(Debug)]
580 | struct Test {
581 | a: String,
582 | b: *const String,
583 | _marker: PhantomPinned,
584 | }
585 | 
586 | impl Test {
587 | fn new(txt: &str) -> Pin<Box<Self>> {
588 | let t = Test {
589 | a: String::from(txt),
590 | b: std::ptr::null(),
591 | _marker: PhantomPinned,
592 | };
593 | let mut boxed = Box::pin(t);
594 | let self_ptr: *const String = &boxed.a;
595 | unsafe { boxed.as_mut().get_unchecked_mut().b = self_ptr };
596 | 
597 | boxed
598 | }
599 | 
600 | fn a(self: Pin<&Self>) -> &str {
601 | &self.get_ref().a
602 | }
603 | 
604 | fn b(self: Pin<&Self>) -> &String {
605 | unsafe { &*(self.b) }
606 | }
607 | }
608 | 
609 | pub fn main() {
610 | let test1 = Test::new("test1");
611 | let test2 = Test::new("test2");
612 | 
613 | println!("a: {}, b: {}", test1.as_ref().a(), test1.as_ref().b());
614 | println!("a: {}, b: {}", test2.as_ref().a(), test2.as_ref().b());
615 | }
616 | ```
617 | 
618 | Some functions require the futures they work with to be `Unpin`. To use a
619 | `Future` or `Stream` that isn't `Unpin` with a function that requires
620 | `Unpin` types, you'll first have to pin the value using either
621 | `Box::pin` (to create a `Pin<Box<T>>`) or the `pin_utils::pin_mut!` macro
622 | (to create a `Pin<&mut T>`). `Pin<Box<Fut>>` and `Pin<&mut Fut>` can both be
623 | used as futures, and both implement `Unpin`.
624 | 
625 | For example:
626 | 
627 | ```rust,edition2018,ignore
628 | use pin_utils::pin_mut; // `pin_utils` is a handy crate available on crates.io
629 | 
630 | // A function which takes a `Future` that implements `Unpin`.
631 | fn execute_unpin_future(x: impl Future<Output = ()> + Unpin) { /* ... */ }
632 | 
633 | let fut = async { /* ... */ };
634 | execute_unpin_future(fut); // Error: `fut` does not implement `Unpin` trait
635 | 
636 | // Pinning with `Box`:
637 | let fut = async { /* ... */ };
638 | let fut = Box::pin(fut);
639 | execute_unpin_future(fut); // OK
640 | 
641 | // Pinning with `pin_mut!`:
642 | let fut = async { /* ... */ };
643 | pin_mut!(fut);
644 | execute_unpin_future(fut); // OK
645 | ```
646 | 
647 | ## Summary
648 | 
649 | 1. If `T: Unpin` (which is the default), then `Pin<&'a mut T>` is entirely
650 | equivalent to `&'a mut T`. In other words: `Unpin` means it's OK for this type
651 | to be moved even when pinned, so `Pin` will have no effect on such a type.
652 | 
653 | 2. Getting a `&mut T` to a pinned `T` requires `unsafe` if `T: !Unpin`.
654 | 
655 | 3. Most standard library types implement `Unpin`. The same goes for most
656 | "normal" types you encounter in Rust. A `Future` generated by async/await is an exception to this rule.
657 | 
658 | 4. You can opt out of `Unpin` on nightly with a feature flag, or
659 | by adding `std::marker::PhantomPinned` to your type on stable.
660 | 
661 | 5. You can either pin data to the stack or to the heap.
662 | 
663 | 6. Pinning a `!Unpin` object to the stack requires `unsafe`.
664 | 
665 | 7. Pinning a `!Unpin` object to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.
666 | 
667 | 8. For pinned data where `T: !Unpin`, you have to maintain the invariant that its memory will not
668 | be invalidated or repurposed _from the moment it gets pinned until `drop` is called_. This is
669 | an important part of the _pin contract_.
670 | 
671 | ["Executing `Future`s and Tasks"]: ../02_execution/01_chapter.md
672 | [the `Future` trait]: ../02_execution/02_future.md
673 | [pin_utils]: https://docs.rs/pin-utils/
674 | -------------------------------------------------------------------------------- /src/05_streams/01_chapter.md: --------------------------------------------------------------------------------
1 | # The `Stream` Trait
2 | 
3 | The `Stream` trait is similar to `Future` but can yield multiple values before
4 | completing, similar to the `Iterator` trait from the standard library:
5 | 
6 | ```rust,ignore
7 | {{#include ../../examples/05_01_streams/src/lib.rs:stream_trait}}
8 | ```
9 | 
10 | One common example of a `Stream` is the `Receiver` for the channel type from
11 | the `futures` crate. It will yield `Some(val)` every time a value is sent
12 | from the `Sender` end, and will yield `None` once the `Sender` has been
13 | dropped and all pending messages have been received:
14 | 
15 | ```rust,edition2018,ignore
16 | {{#include ../../examples/05_01_streams/src/lib.rs:channels}}
17 | ```
18 | -------------------------------------------------------------------------------- /src/05_streams/02_iteration_and_concurrency.md: --------------------------------------------------------------------------------
1 | # Iteration and Concurrency
2 | 
3 | Similar to synchronous `Iterator`s, there are many different ways to iterate
4 | over and process the values in a `Stream`. There are combinator-style methods
5 | such as `map`, `filter`, and `fold`, and their early-exit-on-error cousins
6 | `try_filter` and `try_fold`.
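For instance, a sketch of the combinator style, assuming the `futures` crate's `StreamExt` and `TryStreamExt` extension traits are in scope (`stream::iter` turns any iterator into a `Stream` for demonstration purposes):

```rust,edition2018,ignore
use futures::future;
use futures::stream::{self, StreamExt, TryStreamExt};

async fn sum_of_even_squares() -> i32 {
    stream::iter(1..=4)
        .map(|x| x * x)
        // `filter`'s predicate returns a future, hence `future::ready`.
        .filter(|x| future::ready(x % 2 == 0))
        .fold(0, |acc, x| async move { acc + x })
        .await // 4 + 16 = 20
}

async fn try_sum() -> Result<i32, &'static str> {
    // `try_fold` exits early with `Err("boom")`; `Ok(3)` is never processed.
    stream::iter(vec![Ok(1), Err("boom"), Ok(3)])
        .try_fold(0, |acc, x| async move { Ok(acc + x) })
        .await
}
```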
7 | 8 | Unfortunately, `for` loops are not usable with `Stream`s, but for 9 | imperative-style code, `while let` and the `next`/`try_next` functions can 10 | be used: 11 | 12 | ```rust,edition2018,ignore 13 | {{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:nexts}} 14 | ``` 15 | 16 | However, if we're just processing one element at a time, we're potentially 17 | leaving behind opportunity for concurrency, which is, after all, why we're 18 | writing async code in the first place. To process multiple items from a stream 19 | concurrently, use the `for_each_concurrent` and `try_for_each_concurrent` 20 | methods: 21 | 22 | ```rust,edition2018,ignore 23 | {{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:try_for_each_concurrent}} 24 | ``` 25 | -------------------------------------------------------------------------------- /src/06_multiple_futures/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Executing Multiple Futures at a Time 2 | 3 | Up until now, we've mostly executed futures by using `.await`, which blocks 4 | the current task until a particular `Future` completes. However, real 5 | asynchronous applications often need to execute several different 6 | operations concurrently. 
7 | 8 | In this chapter, we'll cover some ways to execute multiple asynchronous 9 | operations at the same time: 10 | 11 | - `join!`: waits for futures to all complete 12 | - `select!`: waits for one of several futures to complete 13 | - Spawning: creates a top-level task which ambiently runs a future to completion 14 | - `FuturesUnordered`: a group of futures which yields the result of each subfuture 15 | -------------------------------------------------------------------------------- /src/06_multiple_futures/02_join.md: -------------------------------------------------------------------------------- 1 | # `join!` 2 | 3 | The `futures::join` macro makes it possible to wait for multiple different 4 | futures to complete while executing them all concurrently. 5 | 6 | ## `join!` 7 | 8 | When performing multiple asynchronous operations, it's tempting to simply 9 | `.await` them in a series: 10 | 11 | ```rust,edition2018,ignore 12 | {{#include ../../examples/06_02_join/src/lib.rs:naiive}} 13 | ``` 14 | 15 | However, this will be slower than necessary, since it won't start trying to 16 | `get_music` until after `get_book` has completed. In some other languages, 17 | futures are ambiently run to completion, so two operations can be 18 | run concurrently by first calling each `async fn` to start the futures, and 19 | then awaiting them both: 20 | 21 | ```rust,edition2018,ignore 22 | {{#include ../../examples/06_02_join/src/lib.rs:other_langs}} 23 | ``` 24 | 25 | However, Rust futures won't do any work until they're actively `.await`ed. 26 | This means that the two code snippets above will both run 27 | `book_future` and `music_future` in series rather than running them 28 | concurrently. To correctly run the two futures concurrently, use 29 | `futures::join!`: 30 | 31 | ```rust,edition2018,ignore 32 | {{#include ../../examples/06_02_join/src/lib.rs:join}} 33 | ``` 34 | 35 | The value returned by `join!` is a tuple containing the output of each 36 | `Future` passed in. 
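Under the hood, `join!` works by polling every subfuture each time the combined future is polled, so all of them make progress on the same task. Below is a simplified, self-contained sketch of a two-future join; for brevity it requires `Unpin` subfutures and uses a hand-rolled `block_on`, restrictions the real macro does not have:

```rust
use std::future::{ready, Future};
use std::pin::{pin, Pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A simplified two-future join: completed outputs are parked in `Option`s
// until both subfutures are done.
struct Join<A: Future, B: Future> {
    a: A,
    b: B,
    a_out: Option<A::Output>,
    b_out: Option<B::Output>,
}

impl<A, B> Future for Join<A, B>
where
    A: Future + Unpin,
    B: Future + Unpin,
    A::Output: Unpin,
    B::Output: Unpin,
{
    type Output = (A::Output, B::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        // Poll each unfinished subfuture; both advance on the same task.
        if this.a_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.a).poll(cx) {
                this.a_out = Some(v);
            }
        }
        if this.b_out.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut this.b).poll(cx) {
                this.b_out = Some(v);
            }
        }
        if this.a_out.is_some() && this.b_out.is_some() {
            Poll::Ready((this.a_out.take().unwrap(), this.b_out.take().unwrap()))
        } else {
            Poll::Pending
        }
    }
}

// A tiny single-future executor for demonstration purposes.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw_waker() -> RawWaker {
        fn no_op(_: *const ()) {}
        fn clone(_: *const ()) -> RawWaker {
            raw_waker()
        }
        static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let joined = Join { a: ready("book"), b: ready("music"), a_out: None, b_out: None };
    assert_eq!(block_on(joined), ("book", "music"));
}
```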
37 | 38 | ## `try_join!` 39 | 40 | For futures which return `Result`, consider using `try_join!` rather than 41 | `join!`. Since `join!` only completes once all subfutures have completed, 42 | it'll continue processing other futures even after one of its subfutures 43 | has returned an `Err`. 44 | 45 | Unlike `join!`, `try_join!` will complete immediately if one of the subfutures 46 | returns an error. 47 | 48 | ```rust,edition2018,ignore 49 | {{#include ../../examples/06_02_join/src/lib.rs:try_join}} 50 | ``` 51 | 52 | Note that the futures passed to `try_join!` must all have the same error type. 53 | Consider using the `.map_err(|e| ...)` and `.err_into()` functions from 54 | `futures::future::TryFutureExt` to consolidate the error types: 55 | 56 | ```rust,edition2018,ignore 57 | {{#include ../../examples/06_02_join/src/lib.rs:try_join_map_err}} 58 | ``` 59 | -------------------------------------------------------------------------------- /src/06_multiple_futures/03_select.md: -------------------------------------------------------------------------------- 1 | # `select!` 2 | 3 | The `futures::select` macro runs multiple futures simultaneously, allowing 4 | the user to respond as soon as any future completes. 5 | 6 | ```rust,edition2018 7 | {{#include ../../examples/06_03_select/src/lib.rs:example}} 8 | ``` 9 | 10 | The function above will run both `t1` and `t2` concurrently. When either 11 | `t1` or `t2` finishes, the corresponding handler will call `println!`, and 12 | the function will end without completing the remaining task. 13 | 14 | The basic syntax for `select` is ` = => ,`, 15 | repeated for as many futures as you would like to `select` over. 16 | 17 | ## `default => ...` and `complete => ...` 18 | 19 | `select` also supports `default` and `complete` branches. 20 | 21 | A `default` branch will run if none of the futures being `select`ed 22 | over are yet complete. 
A `select` with a `default` branch will 23 | therefore always return immediately, since `default` will be run 24 | if none of the other futures are ready. 25 | 26 | `complete` branches can be used to handle the case where all futures 27 | being `select`ed over have completed and will no longer make progress. 28 | This is often handy when looping over a `select!`. 29 | 30 | ```rust,edition2018 31 | {{#include ../../examples/06_03_select/src/lib.rs:default_and_complete}} 32 | ``` 33 | 34 | ## Interaction with `Unpin` and `FusedFuture` 35 | 36 | One thing you may have noticed in the first example above is that we 37 | had to call `.fuse()` on the futures returned by the two `async fn`s, 38 | as well as pinning them with `pin_mut`. Both of these calls are necessary 39 | because the futures used in `select` must implement both the `Unpin` 40 | trait and the `FusedFuture` trait. 41 | 42 | `Unpin` is necessary because the futures used by `select` are not 43 | taken by value, but by mutable reference. Because `select` does not 44 | take ownership of the futures, uncompleted futures can be used again 45 | after the call to `select` returns. 46 | 47 | Similarly, the `FusedFuture` trait is required because `select` must 48 | not poll a future after it has completed. `FusedFuture` is implemented 49 | by futures which track whether or not they have completed. This makes 50 | it possible to use `select` in a loop, only polling the futures which 51 | still have yet to complete. This can be seen in the example above, 52 | where `a_fut` or `b_fut` will have completed the second time through 53 | the loop. Because the future returned by `future::ready` implements 54 | `FusedFuture`, it's able to tell `select` not to poll it again. 55 | 56 | Note that streams have a corresponding `FusedStream` trait. Streams 57 | which implement this trait or have been wrapped using `.fuse()` 58 | will yield `FusedFuture` futures from their 59 | `.next()` / `.try_next()` combinators.
60 | 61 | ```rust,edition2018 62 | {{#include ../../examples/06_03_select/src/lib.rs:fused_stream}} 63 | ``` 64 | 65 | ## Concurrent tasks in a `select` loop with `Fuse` and `FuturesUnordered` 66 | 67 | One somewhat hard-to-discover but handy function is `Fuse::terminated()`, 68 | which allows constructing an empty future that is already terminated 69 | and can later be filled in with a future that needs to be run. 70 | 71 | This can be useful when there's a task that needs to be run during a `select` 72 | loop but which is created inside the `select` loop itself. 73 | 74 | Note the use of the `.select_next_some()` function. This can be 75 | used with `select` to only run the branch for `Some(_)` values 76 | returned from the stream, ignoring `None`s. 77 | 78 | ```rust,edition2018 79 | {{#include ../../examples/06_03_select/src/lib.rs:fuse_terminated}} 80 | ``` 81 | 82 | When many copies of the same future need to be run simultaneously, 83 | use the `FuturesUnordered` type. The following example is similar 84 | to the one above, but will run each copy of `run_on_new_num_fut` 85 | to completion, rather than aborting them when a new one is created. 86 | It will also print out a value returned by `run_on_new_num_fut`. 87 | 88 | ```rust,edition2018 89 | {{#include ../../examples/06_03_select/src/lib.rs:futures_unordered}} 90 | ``` 91 | -------------------------------------------------------------------------------- /src/06_multiple_futures/04_spawning.md: -------------------------------------------------------------------------------- 1 | # Spawning 2 | 3 | Spawning allows you to run a new asynchronous task in the background, leaving the current task free to continue 4 | executing other code while it runs. 5 | 6 | Say we have a web server that wants to accept connections without blocking the main thread. 7 | To achieve this, we can use the `async_std::task::spawn` function to create and run a new task that handles the 8 | connections.
This function takes a future and returns a `JoinHandle`, which can be used to wait for the result of the 9 | task once it's completed. 10 | 11 | ```rust,edition2018 12 | {{#include ../../examples/06_04_spawning/src/lib.rs:example}} 13 | ``` 14 | 15 | The `JoinHandle` returned by `spawn` implements the `Future` trait, so we can `.await` it to get the result of the task. 16 | This will suspend the current task until the spawned task completes. If the handle is not awaited, your program will 17 | continue executing without waiting for the task, and the task will be cancelled if the surrounding function completes before the task is finished. 18 | 19 | ```rust,edition2018 20 | {{#include ../../examples/06_04_spawning/src/lib.rs:join_all}} 21 | ``` 22 | 23 | To communicate between the main task and the spawned task, we can use channels 24 | provided by the async runtime in use. -------------------------------------------------------------------------------- /src/07_workarounds/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Workarounds to Know and Love 2 | 3 | Rust's `async` support is still fairly new, and there are a handful of 4 | highly-requested features still under active development, as well 5 | as some subpar diagnostics. This chapter will discuss some common pain 6 | points and explain how to work around them. 7 | -------------------------------------------------------------------------------- /src/07_workarounds/03_send_approximation.md: -------------------------------------------------------------------------------- 1 | # `Send` Approximation 2 | 3 | Some `async fn` state machines are safe to send across threads, while 4 | others are not. Whether or not an `async fn` `Future` is `Send` is determined 5 | by whether a non-`Send` type is held across an `.await` point. The compiler 6 | does its best to approximate when values may be held across an `.await` 7 | point, but this analysis is too conservative in a number of places today.
8 | 9 | For example, consider a simple non-`Send` type, perhaps a type 10 | which contains an `Rc`: 11 | 12 | ```rust 13 | use std::rc::Rc; 14 | 15 | #[derive(Default)] 16 | struct NotSend(Rc<()>); 17 | ``` 18 | 19 | Variables of type `NotSend` can briefly appear as temporaries in `async fn`s 20 | even when the resulting `Future` type returned by the `async fn` must be `Send`: 21 | 22 | ```rust,edition2018 23 | # use std::rc::Rc; 24 | # #[derive(Default)] 25 | # struct NotSend(Rc<()>); 26 | async fn bar() {} 27 | async fn foo() { 28 | NotSend::default(); 29 | bar().await; 30 | } 31 | 32 | fn require_send(_: impl Send) {} 33 | 34 | fn main() { 35 | require_send(foo()); 36 | } 37 | ``` 38 | 39 | However, if we change `foo` to store `NotSend` in a variable, this example no 40 | longer compiles: 41 | 42 | ```rust,edition2018 43 | # use std::rc::Rc; 44 | # #[derive(Default)] 45 | # struct NotSend(Rc<()>); 46 | # async fn bar() {} 47 | async fn foo() { 48 | let x = NotSend::default(); 49 | bar().await; 50 | } 51 | # fn require_send(_: impl Send) {} 52 | # fn main() { 53 | # require_send(foo()); 54 | # } 55 | ``` 56 | 57 | ``` 58 | error[E0277]: `std::rc::Rc<()>` cannot be sent between threads safely 59 | --> src/main.rs:15:5 60 | | 61 | 15 | require_send(foo()); 62 | | ^^^^^^^^^^^^ `std::rc::Rc<()>` cannot be sent between threads safely 63 | | 64 | = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<()>` 65 | = note: required because it appears within the type `NotSend` 66 | = note: required because it appears within the type `{NotSend, impl std::future::Future, ()}` 67 | = note: required because it appears within the type `[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]` 68 | = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]>` 69 | = note: required because it appears within 
the type `impl std::future::Future` 70 | = note: required because it appears within the type `impl std::future::Future` 71 | note: required by `require_send` 72 | --> src/main.rs:12:1 73 | | 74 | 12 | fn require_send(_: impl Send) {} 75 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 76 | 77 | error: aborting due to previous error 78 | 79 | For more information about this error, try `rustc --explain E0277`. 80 | ``` 81 | 82 | This error is correct. If we store `x` into a variable, it won't be dropped 83 | until after the `.await`, at which point the `async fn` may be running on 84 | a different thread. Since `Rc` is not `Send`, allowing it to travel across 85 | threads would be unsound. One simple solution to this would be to `drop` 86 | the `Rc` before the `.await`, but unfortunately that does not work today. 87 | 88 | In order to successfully work around this issue, you may have to introduce 89 | a block scope encapsulating any non-`Send` variables. This makes it easier 90 | for the compiler to tell that these variables do not live across an 91 | `.await` point. 92 | 93 | ```rust,edition2018 94 | # use std::rc::Rc; 95 | # #[derive(Default)] 96 | # struct NotSend(Rc<()>); 97 | # async fn bar() {} 98 | async fn foo() { 99 | { 100 | let x = NotSend::default(); 101 | } 102 | bar().await; 103 | } 104 | # fn require_send(_: impl Send) {} 105 | # fn main() { 106 | # require_send(foo()); 107 | # } 108 | ``` 109 | -------------------------------------------------------------------------------- /src/07_workarounds/04_recursion.md: -------------------------------------------------------------------------------- 1 | # Recursion 2 | 3 | Internally, `async fn` creates a state machine type containing each 4 | sub-`Future` being `.await`ed. This makes recursive `async fn`s a little 5 | tricky, since the resulting state machine type has to contain itself: 6 | 7 | ```rust,edition2018 8 | # async fn step_one() { /* ... */ } 9 | # async fn step_two() { /* ... 
*/ } 10 | # struct StepOne; 11 | # struct StepTwo; 12 | // This function: 13 | async fn foo() { 14 | step_one().await; 15 | step_two().await; 16 | } 17 | // generates a type like this: 18 | enum Foo { 19 | First(StepOne), 20 | Second(StepTwo), 21 | } 22 | 23 | // So this function: 24 | async fn recursive() { 25 | recursive().await; 26 | recursive().await; 27 | } 28 | 29 | // generates a type like this: 30 | enum Recursive { 31 | First(Recursive), 32 | Second(Recursive), 33 | } 34 | ``` 35 | 36 | This won't work—we've created an infinitely-sized type! 37 | The compiler will complain: 38 | 39 | ``` 40 | error[E0733]: recursion in an async fn requires boxing 41 | --> src/lib.rs:1:1 42 | | 43 | 1 | async fn recursive() { 44 | | ^^^^^^^^^^^^^^^^^^^^ 45 | | 46 | = note: a recursive `async fn` call must introduce indirection such as `Box::pin` to avoid an infinitely sized future 47 | ``` 48 | 49 | In order to allow this, we have to introduce an indirection using `Box`. 50 | 51 | Prior to Rust 1.77, due to compiler limitations, just wrapping the calls to 52 | `recursive()` in `Box::pin` wasn't enough. To make this work, we had 53 | to make `recursive` into a non-`async` function which returns a `.boxed()` 54 | `async` block: 55 | 56 | ```rust,edition2018 57 | {{#include ../../examples/07_05_recursion/src/lib.rs:example}} 58 | ``` 59 | 60 | In newer versions of Rust, [that compiler limitation has been lifted]. 61 | 62 | In Rust 1.77, support for recursion in `async fn` with allocation 63 | indirection [becomes stable], so recursive calls are permitted so long as they 64 | use some form of indirection to avoid an infinite size for the state of the 65 | function.
66 | 67 | This means that code like this now works: 68 | 69 | ```rust,edition2021 70 | {{#include ../../examples/07_05_recursion/src/lib.rs:example_pinned}} 71 | ``` 72 | 73 | [becomes stable]: https://blog.rust-lang.org/2024/03/21/Rust-1.77.0.html#support-for-recursion-in-async-fn 74 | [that compiler limitation has been lifted]: https://github.com/rust-lang/rust/pull/117703/ 75 | -------------------------------------------------------------------------------- /src/07_workarounds/05_async_in_traits.md: -------------------------------------------------------------------------------- 1 | # `async` in Traits 2 | 3 | As of Rust 1.75, `async fn` can be used in traits on the stable release of Rust, 4 | as described in [the stabilization announcement](https://blog.rust-lang.org/2023/12/21/async-fn-rpit-in-traits.html). 5 | Before that, an MVP of async-fn-in-trait had been available on the nightly 6 | version of the compiler toolchain since 17 November 2022, [see here for details](https://blog.rust-lang.org/inside-rust/2022/11/17/async-fn-in-trait-nightly.html). 7 | 8 | The built-in support has one notable limitation: traits containing `async fn` 9 | methods cannot yet be used as trait objects (`dyn Trait`). For that use case, 10 | and for older toolchains, there is a workaround using the 11 | [async-trait crate from crates.io](https://github.com/dtolnay/async-trait). 12 | 13 | Note that using these `async-trait` methods will result in a heap allocation 14 | per-function-call. This is not a significant cost for the vast majority 15 | of applications, but should be considered when deciding whether to use 16 | this functionality in the public API of a low-level function that is expected 17 | to be called millions of times a second. 18 | -------------------------------------------------------------------------------- /src/08_ecosystem/00_chapter.md: -------------------------------------------------------------------------------- 1 | # The Async Ecosystem 2 | Rust currently provides only the bare essentials for writing async code. 3 | Importantly, executors, tasks, reactors, combinators, and low-level I/O futures and traits 4 | are not yet provided in the standard library.
In the meantime, 5 | community-provided async ecosystems fill in these gaps. 6 | 7 | The Async Foundations Team is interested in extending examples in the Async Book to cover multiple runtimes. 8 | If you're interested in contributing to this project, please reach out to us on 9 | [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/201246-wg-async-foundations.2Fbook). 10 | 11 | ## Async Runtimes 12 | Async runtimes are libraries used for executing async applications. 13 | Runtimes usually bundle together a *reactor* with one or more *executors*. 14 | Reactors provide subscription mechanisms for external events, like async I/O, interprocess communication, and timers. 15 | In an async runtime, subscribers are typically futures representing low-level I/O operations. 16 | Executors handle the scheduling and execution of tasks. 17 | They keep track of running and suspended tasks, poll futures to completion, and wake tasks when they can make progress. 18 | The word "executor" is frequently used interchangeably with "runtime". 19 | Here, we use the word "ecosystem" to describe a runtime bundled with compatible traits and features. 20 | 21 | ## Community-Provided Async Crates 22 | 23 | ### The Futures Crate 24 | The [`futures` crate](https://docs.rs/futures/) contains traits and functions useful for writing async code. 25 | This includes the `Stream`, `Sink`, `AsyncRead`, and `AsyncWrite` traits, and utilities such as combinators. 26 | These utilities and traits may eventually become part of the standard library. 27 | 28 | `futures` has its own executor, but not its own reactor, so it does not support execution of async I/O or timer futures. 29 | For this reason, it's not considered a full runtime. 30 | A common choice is to use utilities from `futures` with an executor from another crate. 31 | 32 | ### Popular Async Runtimes 33 | There is no asynchronous runtime in the standard library, and none are officially recommended. 
34 | The following crates provide popular runtimes. 35 | - [Tokio](https://docs.rs/tokio/): A popular async ecosystem with HTTP, gRPC, and tracing frameworks. 36 | - [async-std](https://docs.rs/async-std/): A crate that provides asynchronous counterparts to standard library components. 37 | - [smol](https://docs.rs/smol/): A small, simplified async runtime. 38 | Provides the `Async` trait that can be used to wrap structs like `UnixStream` or `TcpListener`. 39 | - [fuchsia-async](https://fuchsia.googlesource.com/fuchsia/+/master/src/lib/fuchsia-async/): 40 | An executor for use in the Fuchsia OS. 41 | 42 | ## Determining Ecosystem Compatibility 43 | Not all async applications, frameworks, and libraries are compatible with each other, or with every OS or platform. 44 | Most async code can be used with any ecosystem, but some frameworks and libraries require the use of a specific ecosystem. 45 | Ecosystem constraints are not always documented, but there are several rules of thumb to determine 46 | whether a library, trait, or function depends on a specific ecosystem. 47 | 48 | Any async code that interacts with async I/O, timers, interprocess communication, or tasks 49 | generally depends on a specific async executor or reactor. 50 | All other async code, such as async expressions, combinators, synchronization types, and streams 51 | are usually ecosystem independent, provided that any nested futures are also ecosystem independent. 52 | Before beginning a project, it's recommended to research relevant async frameworks and libraries to ensure 53 | compatibility with your chosen runtime and with each other. 54 | 55 | Notably, `Tokio` uses the `mio` reactor and defines its own versions of async I/O traits, 56 | including `AsyncRead` and `AsyncWrite`. 57 | On its own, it's not compatible with `async-std` and `smol`, 58 | which rely on the [`async-executor` crate](https://docs.rs/async-executor), and the `AsyncRead` and `AsyncWrite` 59 | traits defined in `futures`. 
60 | 61 | Conflicting runtime requirements can sometimes be resolved by compatibility layers 62 | that allow you to call code written for one runtime within another. 63 | For example, the [`async_compat` crate](https://docs.rs/async_compat) provides a compatibility layer between 64 | `Tokio` and other runtimes. 65 | 66 | Libraries exposing async APIs should not depend on a specific executor or reactor, 67 | unless they need to spawn tasks or define their own async I/O or timer futures. 68 | Ideally, only binaries should be responsible for scheduling and running tasks. 69 | 70 | ## Single Threaded vs Multi-Threaded Executors 71 | Async executors can be single-threaded or multi-threaded. 72 | For example, the `async-executor` crate has both a single-threaded `LocalExecutor` and a multi-threaded `Executor`. 73 | 74 | A multi-threaded executor makes progress on several tasks simultaneously. 75 | It can speed up the execution greatly for workloads with many tasks, 76 | but synchronizing data between tasks is usually more expensive. 77 | It is recommended to measure performance for your application 78 | when you are choosing between a single- and a multi-threaded runtime. 79 | 80 | Tasks can either be run on the thread that created them or on a separate thread. 81 | Async runtimes often provide functionality for spawning tasks onto separate threads. 82 | Even if tasks are executed on separate threads, they should still be non-blocking. 83 | In order to schedule tasks on a multi-threaded executor, they must also be `Send`. 84 | Some runtimes provide functions for spawning non-`Send` tasks, 85 | which ensures every task is executed on the thread that spawned it. 86 | They may also provide functions for spawning blocking tasks onto dedicated threads, 87 | which is useful for running blocking synchronous code from other libraries. 
88 | -------------------------------------------------------------------------------- /src/09_example/00_intro.md: -------------------------------------------------------------------------------- 1 | # Final Project: Building a Concurrent Web Server with Async Rust 2 | In this chapter, we'll use asynchronous Rust to modify the Rust book's 3 | [single-threaded web server](https://doc.rust-lang.org/book/ch20-01-single-threaded.html) 4 | to serve requests concurrently. 5 | ## Recap 6 | Here's what the code looked like at the end of the lesson. 7 | 8 | `src/main.rs`: 9 | ```rust 10 | {{#include ../../examples/09_01_sync_tcp_server/src/main.rs}} 11 | ``` 12 | 13 | `hello.html`: 14 | ```html 15 | {{#include ../../examples/09_01_sync_tcp_server/hello.html}} 16 | ``` 17 | 18 | `404.html`: 19 | ```html 20 | {{#include ../../examples/09_01_sync_tcp_server/404.html}} 21 | ``` 22 | 23 | If you run the server with `cargo run` and visit `127.0.0.1:7878` in your browser, 24 | you'll be greeted with a friendly message from Ferris! -------------------------------------------------------------------------------- /src/09_example/01_running_async_code.md: -------------------------------------------------------------------------------- 1 | # Running Asynchronous Code 2 | An HTTP server should be able to serve multiple clients concurrently; 3 | that is, it should not wait for previous requests to complete before handling the current request. 4 | The book 5 | [solves this problem](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#turning-our-single-threaded-server-into-a-multithreaded-server) 6 | by creating a thread pool where each connection is handled on its own thread. 7 | Here, instead of improving throughput by adding threads, we'll achieve the same effect using asynchronous code. 
8 | 9 | Let's modify `handle_connection` to return a future by declaring it an `async fn`: 10 | ```rust,ignore 11 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:handle_connection_async}} 12 | ``` 13 | 14 | Adding `async` to the function declaration changes its return type 15 | from the unit type `()` to a type that implements `Future`. 16 | 17 | If we try to compile this, the compiler warns us that it will not work: 18 | ```console 19 | $ cargo check 20 | Checking async-rust v0.1.0 (file:///projects/async-rust) 21 | warning: unused implementer of `std::future::Future` that must be used 22 | --> src/main.rs:12:9 23 | | 24 | 12 | handle_connection(stream); 25 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ 26 | | 27 | = note: `#[warn(unused_must_use)]` on by default 28 | = note: futures do nothing unless you `.await` or poll them 29 | ``` 30 | 31 | Because we haven't `await`ed or `poll`ed the result of `handle_connection`, 32 | it'll never run. If you run the server and visit `127.0.0.1:7878` in a browser, 33 | you'll see that the connection is refused; our server is not handling requests. 34 | 35 | We can't `await` or `poll` futures within synchronous code by itself. 36 | We'll need an asynchronous runtime to handle scheduling and running futures to completion. 37 | Please consult the [section on choosing a runtime](../08_ecosystem/00_chapter.md) 38 | for more information on asynchronous runtimes, executors, and reactors. 39 | Any of the runtimes listed will work for this project, but for these examples, 40 | we've chosen to use the `async-std` crate. 41 | 42 | ## Adding an Async Runtime 43 | The following example will demonstrate refactoring synchronous code to use an async runtime; here, `async-std`. 44 | The `#[async_std::main]` attribute from `async-std` allows us to write an asynchronous main function. 
45 | To use it, enable the `attributes` feature of `async-std` in `Cargo.toml`: 46 | ```toml 47 | [dependencies.async-std] 48 | version = "1.6" 49 | features = ["attributes"] 50 | ``` 51 | 52 | As a first step, we'll switch to an asynchronous main function, 53 | and `await` the future returned by the async version of `handle_connection`. 54 | Then, we'll test how the server responds. 55 | Here's what that would look like: 56 | ```rust 57 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:main_func}} 58 | ``` 59 | Now, let's test to see if our server can handle connections concurrently. 60 | Simply making `handle_connection` asynchronous doesn't mean that the server 61 | can handle multiple connections at the same time, and we'll soon see why. 62 | 63 | To illustrate this, let's simulate a slow request. 64 | When a client makes a request to `127.0.0.1:7878/sleep`, 65 | our server will sleep for 5 seconds: 66 | 67 | ```rust,ignore 68 | {{#include ../../examples/09_03_slow_request/src/main.rs:handle_connection}} 69 | ``` 70 | This is very similar to the 71 | [simulation of a slow request](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#simulating-a-slow-request-in-the-current-server-implementation) 72 | from the Book, but with one important difference: 73 | we're using the non-blocking function `async_std::task::sleep` instead of the blocking function `std::thread::sleep`. 74 | It's important to remember that even if a piece of code is run within an `async fn` and `await`ed, it may still block. 75 | To test whether our server handles connections concurrently, we'll need to ensure that `handle_connection` is non-blocking. 76 | 77 | If you run the server, you'll see that a request to `127.0.0.1:7878/sleep` 78 | will block any other incoming requests for 5 seconds! 79 | This is because there are no other concurrent tasks that can make progress 80 | while we are `await`ing the result of `handle_connection`. 
81 | In the next section, we'll see how to use async code to handle connections concurrently. 82 | -------------------------------------------------------------------------------- /src/09_example/02_handling_connections_concurrently.md: -------------------------------------------------------------------------------- 1 | # Handling Connections Concurrently 2 | The problem with our code so far is that `listener.incoming()` is a blocking iterator. 3 | The executor can't run other futures while `listener` waits on incoming connections, 4 | and we can't handle a new connection until we're done with the previous one. 5 | 6 | In order to fix this, we'll transform `listener.incoming()` from a blocking Iterator 7 | to a non-blocking Stream. Streams are similar to Iterators, but can be consumed asynchronously. 8 | For more information, see the [chapter on Streams](../05_streams/01_chapter.md). 9 | 10 | Let's replace our blocking `std::net::TcpListener` with the non-blocking `async_std::net::TcpListener`, 11 | and update our connection handler to accept an `async_std::net::TcpStream`: 12 | ```rust,ignore 13 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:handle_connection}} 14 | ``` 15 | 16 | The asynchronous version of `TcpListener` implements the `Stream` trait for `listener.incoming()`, 17 | a change which provides two benefits. 18 | The first is that `listener.incoming()` no longer blocks the executor. 19 | The executor can now yield to other pending futures 20 | while there are no incoming TCP connections to be processed. 21 | 22 | The second benefit is that elements from the Stream can optionally be processed concurrently, 23 | using a Stream's `for_each_concurrent` method. 24 | Here, we'll take advantage of this method to handle each incoming request concurrently. 
25 | We'll need to import the `Stream` trait from the `futures` crate, so our Cargo.toml now looks like this: 26 | ```diff 27 | +[dependencies] 28 | +futures = "0.3" 29 | 30 | [dependencies.async-std] 31 | version = "1.6" 32 | features = ["attributes"] 33 | ``` 34 | 35 | Now, we can handle each connection concurrently by passing `handle_connection` in through a closure. 36 | The closure takes ownership of each `TcpStream`, and is run as soon as a new `TcpStream` becomes available. 37 | As long as `handle_connection` does not block, a slow request will no longer prevent other requests from completing. 38 | ```rust,ignore 39 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:main_func}} 40 | ``` 41 | # Serving Requests in Parallel 42 | Our example so far has largely presented cooperative multitasking concurrency (using async code) 43 | as an alternative to preemptive multitasking (using threads). 44 | However, async code and threads are not mutually exclusive. 45 | In our example, `for_each_concurrent` processes each connection concurrently, but on the same thread. 46 | The `async-std` crate allows us to spawn tasks onto separate threads as well. 47 | Because `handle_connection` is both `Send` and non-blocking, it's safe to use with `async_std::task::spawn`. 48 | Here's what that would look like: 49 | ```rust 50 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:main_func}} 51 | ``` 52 | Now we are using both cooperative and preemptive multitasking to handle multiple requests at the same time! 53 | See the [section on multithreaded executors](../08_ecosystem/00_chapter.md#single-threaded-vs-multi-threaded-executors) 54 | for more information.
55 | -------------------------------------------------------------------------------- /src/09_example/03_tests.md: -------------------------------------------------------------------------------- 1 | # Testing the TCP Server 2 | Let's move on to testing our `handle_connection` function. 3 | 4 | First, we need a `TcpStream` to work with. 5 | In an end-to-end or integration test, we might want to make a real TCP connection 6 | to test our code. 7 | One strategy for doing this is to start a listener on `localhost` port 0. 8 | Port 0 isn't a regular usable port, but binding to it works for testing: 9 | the operating system will pick an open TCP port for us. 10 | 11 | Instead, in this example we'll write a unit test for the connection handler, 12 | to check that the correct responses are returned for the respective inputs. 13 | To keep our unit test isolated and deterministic, we'll replace the `TcpStream` with a mock. 14 | 15 | First, we'll change the signature of `handle_connection` to make it easier to test. 16 | `handle_connection` doesn't actually require an `async_std::net::TcpStream`; 17 | it requires any type that implements `async_std::io::Read`, `async_std::io::Write`, and `marker::Unpin`. 18 | Changing the type signature to reflect this allows us to pass a mock for testing. 19 | ```rust,ignore 20 | use async_std::io::{Read, Write}; 21 | 22 | async fn handle_connection(mut stream: impl Read + Write + Unpin) { 23 | ``` 24 | 25 | Next, let's build a mock `TcpStream` that implements these traits. 26 | First, let's implement the `Read` trait, with one method, `poll_read`. 27 | Our mock `TcpStream` will contain some data that is copied into the read buffer, 28 | and we'll return `Poll::Ready` to signify that the read is complete.
29 | ```rust,ignore 30 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_read}} 31 | ``` 32 | 33 | Our implementation of `Write` is very similar, 34 | although we'll need to write three methods: `poll_write`, `poll_flush`, and `poll_close`. 35 | `poll_write` will copy any input data into the mock `TcpStream`, and return `Poll::Ready` when complete. 36 | No work needs to be done to flush or close the mock `TcpStream`, so `poll_flush` and `poll_close` 37 | can just return `Poll::Ready`. 38 | ```rust,ignore 39 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_write}} 40 | ``` 41 | 42 | Lastly, our mock will need to implement `Unpin`, signifying that its location in memory can safely be moved. 43 | For more information on pinning and the `Unpin` trait, see the [section on pinning](../04_pinning/01_chapter.md). 44 | ```rust,ignore 45 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:unpin}} 46 | ``` 47 | 48 | Now we're ready to test the `handle_connection` function. 49 | After setting up the `MockTcpStream` containing some initial data, 50 | we can run `handle_connection` using the attribute `#[async_std::test]`, similarly to how we used `#[async_std::main]`. 51 | To ensure that `handle_connection` works as intended, we'll check that the correct data 52 | was written to the `MockTcpStream` based on its initial contents. 53 | ```rust,ignore 54 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:test}} 55 | ``` 56 | -------------------------------------------------------------------------------- /src/12_appendix/01_translations.md: -------------------------------------------------------------------------------- 1 | # Appendix : Translations of the Book 2 | 3 | For resources in languages other than English. 
4 | 5 | - [Русский](https://doc.rust-lang.ru/async-book/) 6 | - [Français](https://jimskapt.github.io/async-book-fr/) 7 | - [فارسی](https://rouzbehsbz.github.io/rust-async-book/) 8 | -------------------------------------------------------------------------------- /src/SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Table of Contents 2 | 3 | [Introduction](intro.md) 4 | 5 | - [Navigation](navigation/intro.md) 6 | - [By topic](navigation/topics.md) 7 | - [FAQs]() 8 | - [Index](navigation/index.md) 9 | 10 | # Part 1: guide 11 | 12 | - [Introduction](part-guide/intro.md) 13 | - [Concurrent programming](part-guide/concurrency.md) 14 | - [Async and await](part-guide/async-await.md) 15 | - [More async/await topics](part-guide/more-async-await.md) 16 | - [IO and issues with blocking](part-guide/io.md) 17 | - [Concurrency primitives](part-guide/concurrency-primitives.md) 18 | - [Channels, locking, and synchronization](part-guide/sync.md) 19 | - [Tools for async programming](part-guide/tools.md) 20 | - [Destruction and clean-up](part-guide/dtors.md) 21 | - [Futures](part-guide/futures.md) 22 | - [Runtimes](part-guide/runtimes.md) 23 | - [Timers and signal handling](part-guide/timers-signals.md) 24 | - [Async iterators (streams)](part-guide/streams.md) 25 | 26 | # Part 2: reference 27 | 28 | - [Implementing futures and streams]() 29 | - [Alternate runtimes]() 30 | - [Implementing your own runtime]() 31 | - [async in sync, sync in async]() 32 | - [Async IO: readiness vs completion, and io_uring]() 33 | - [Design patterns]() 34 | - [Cancellation]() (cancellation safety) 35 | - [Starvation]() 36 | - [Pinning]() 37 | - [Async and FFI]() 38 | - [Comparing async programming in Rust to other languages]() 39 | - [The implementation of async/await in rustc]() 40 | - [Structured concurrency?]() 41 | 42 | 43 | # Old chapters 44 | 45 | - [Getting Started](01_getting_started/01_chapter.md) 46 | - [Why 
Async?](01_getting_started/02_why_async.md) 47 | - [The State of Asynchronous Rust](01_getting_started/03_state_of_async_rust.md) 48 | - [`async`/`.await` Primer](01_getting_started/04_async_await_primer.md) 49 | - [Under the Hood: Executing `Future`s and Tasks](02_execution/01_chapter.md) 50 | - [The `Future` Trait](02_execution/02_future.md) 51 | - [Task Wakeups with `Waker`](02_execution/03_wakeups.md) 52 | - [Applied: Build an Executor](02_execution/04_executor.md) 53 | - [Executors and System IO](02_execution/05_io.md) 54 | - [`async`/`await`](03_async_await/01_chapter.md) 55 | - [Pinning](04_pinning/01_chapter.md) 56 | - [Streams](05_streams/01_chapter.md) 57 | - [Iteration and Concurrency](05_streams/02_iteration_and_concurrency.md) 58 | - [Executing Multiple Futures at a Time](06_multiple_futures/01_chapter.md) 59 | - [`join!`](06_multiple_futures/02_join.md) 60 | - [`select!`](06_multiple_futures/03_select.md) 61 | - [Spawning](06_multiple_futures/04_spawning.md) 62 | - [TODO: Cancellation and Timeouts]() 63 | - [TODO: `FuturesUnordered`]() 64 | - [Workarounds to Know and Love](07_workarounds/01_chapter.md) 65 | - [`Send` Approximation](07_workarounds/03_send_approximation.md) 66 | - [Recursion](07_workarounds/04_recursion.md) 67 | - [`async` in Traits](07_workarounds/05_async_in_traits.md) 68 | - [The Async Ecosystem](08_ecosystem/00_chapter.md) 69 | - [Final Project: HTTP Server](09_example/00_intro.md) 70 | - [Running Asynchronous Code](09_example/01_running_async_code.md) 71 | - [Handling Connections Concurrently](09_example/02_handling_connections_concurrently.md) 72 | - [Testing the Server](09_example/03_tests.md) 73 | - [TODO: I/O]() 74 | - [TODO: `AsyncRead` and `AsyncWrite`]() 75 | - [TODO: Asynchronous Design Patterns: Solutions and Suggestions]() 76 | - [TODO: Modeling Servers and the Request/Response Pattern]() 77 | - [TODO: Managing Shared State]() 78 | - [Appendix: Translations of the Book](12_appendix/01_translations.md) 79 | 80 | 
-------------------------------------------------------------------------------- /src/assets/swap_problem.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rust-lang/async-book/ef2837bc7099e40ffc74c4a97d3b25aa8d11fe6d/src/assets/swap_problem.jpg -------------------------------------------------------------------------------- /src/intro.md: -------------------------------------------------------------------------------- 1 | NOTE: this guide is currently undergoing a rewrite after a long time without much work. It is work in progress, much is missing, and what exists is a bit rough. 2 | 3 | # Introduction 4 | 5 | This book is a guide to asynchronous programming in Rust. It is designed to help you take your first steps and to discover more about advanced topics. We don't assume any experience with asynchronous programming (in Rust or another language), but we do assume you're familiar with Rust already. If you want to learn about Rust, you could start with [The Rust Programming Language](https://doc.rust-lang.org/stable/book/). 6 | 7 | This book has two main parts: [part one](part-guide/intro.md) is a beginner's guide; it is designed to be read in order and to take you from total beginner to intermediate level. Part two is a collection of stand-alone chapters on more advanced topics. It should be useful once you've worked through part one or if you already have some experience with async Rust. 8 | 9 | You can navigate this book in multiple ways: 10 | 11 | * You can read it front to back, in order. This is the recommended path for newcomers to async Rust, at least for [part one](part-guide/intro.md) of the book. 12 | * There is a summary table of contents on the left-hand side of the webpage. 13 | * If you want information about a broad topic, you could start with the topic index. 14 | * If you want to find all discussion about a specific topic, you could start with the detailed index.
15 | * You could see if your question is answered in the FAQs. 16 | 17 | 18 | ## What is Async Programming and why would you do it? 19 | 20 | In concurrent programming, the program does multiple things at the same time (or at least appears to). Programming with threads is one form of concurrent programming. Code within a thread is written in sequential style and the operating system executes threads concurrently. With async programming, concurrency happens entirely within your program (the operating system is not involved). An async runtime (which is just another crate in Rust) manages async tasks in conjunction with the programmer explicitly yielding control by using the `await` keyword. 21 | 22 | Because the operating system is not involved, *context switching* in the async world is very fast. Furthermore, async tasks have much lower memory overhead than operating system threads. This makes async programming a good fit for systems which need to handle very many concurrent tasks and where those tasks spend a lot of time waiting (for example, for client responses or for IO). It also makes async programming a good fit for microcontrollers with very limited amounts of memory and no operating system that provides threads. 23 | 24 | Async programming also offers the programmer fine-grained control over how tasks are executed (levels of parallelism and concurrency, control flow, scheduling, and so forth). This means that async programming can be expressive as well as ergonomic for many uses. In particular, async programming in Rust has a powerful concept of cancellation and supports many different flavours of concurrency (expressed using constructs including `spawn` and its variations, `join`, `select`, `for_each_concurrent`, etc.). These allow composable and reusable implementations of concepts like timeouts, pausing, and throttling. 25 | 26 | 27 | ## Hello, world! 28 | 29 | Just to give you a taste of what async Rust looks like, here is a 'hello, world' example. 
There is no concurrency, and it doesn't really take advantage of being async. It does define and use an async function, and it does print "hello, world!": 30 | 31 | ```rust,edition2021 32 | {{#include ../examples/hello-world/src/main.rs}} 33 | ``` 34 | 35 | We'll explain everything in detail later. For now, note how we define an asynchronous function using `async fn` and call it using `.await` - an async function in Rust doesn't do anything unless it is `await`ed[^blocking]. 36 | 37 | Like all examples in this book, if you want to see the full example (including `Cargo.toml`) or to run it yourself locally, you can find them in the book's GitHub repo: e.g., [examples/hello-world](https://github.com/rust-lang/async-book/tree/master/examples/hello-world). 38 | 39 | 40 | ## Development of Async Rust 41 | 42 | The async features of Rust have been in development for a while, but they are not a 'finished' part of the language. Async Rust (at least the parts available in the stable compiler and standard libraries) is reliable and performant. It is used in production in some of the most demanding situations at the largest tech companies. However, there are some missing parts and rough edges (rough in the sense of ergonomics rather than reliability). You are likely to stumble upon some of these parts during your journey with async Rust. For most missing parts, there are workarounds and these are covered in this book. 43 | 44 | Currently, working with async iterators (also known as streams) is where most users find some rough parts. Some uses of async in traits are not yet well-supported. There is not a good solution for async destruction. 45 | 46 | Async Rust is being actively worked on. If you want to follow development, you can check out the Async Working Group's [home page](https://rust-lang.github.io/wg-async/meetings.html) which includes their [roadmap](https://rust-lang.github.io/wg-async/vision/roadmap.html).
Or you could read the async [project goal](https://github.com/rust-lang/rust-project-goals/issues/105) within the Rust Project. 47 | 48 | Rust is an open source project. If you'd like to contribute to development of async Rust, start at the [contributing docs](https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md) in the main Rust repo. 49 | 50 | 51 | [^blocking]: This is actually a bad example because `println` is *blocking IO* and it is generally a bad idea to do blocking IO in async functions. We'll explain what blocking IO is in [chapter TODO]() and why you shouldn't do blocking IO in an async function in [chapter TODO](). 52 | -------------------------------------------------------------------------------- /src/navigation/index.md: -------------------------------------------------------------------------------- 1 | # Index 2 | 3 | 4 | 5 | - Async/`async` 6 | - [blocks](../part-guide/more-async-await.md#async-blocks) 7 | - [closures](../part-guide/more-async-await.md#async-closures) 8 | - [functions](../part-guide/async-await.md#async-functions) 9 | - [traits](../part-guide/more-async-await.md#async-traits) 10 | - [c.f., threads](../part-guide/concurrency.md#async-programming) 11 | - [`await`](../part-guide/async-await.md#await) 12 | 13 | 14 | 15 | - [Blocking](../part-guide/more-async-await.md#blocking-and-cancellation) 16 | - [IO](../part-guide/more-async-await.md#blocking-io) 17 | 18 | 19 | 20 | - [Cancellation](../part-guide/more-async-await.md#cancellation) 21 | - [`CancellationToken`](../part-guide/more-async-await.md#cancellation) 22 | - [Concurrency](../part-guide/concurrency.md) 23 | - [c.f., parallelism](../part-guide/concurrency.md#concurrency-and-parallelism) 24 | 25 | 26 | 27 | - [Executor](../part-guide/async-await.md#the-runtime) 28 | 29 | 30 | 31 | - [Futures](../part-guide/async-await.md#futures-and-tasks) 32 | - `Future` trait 33 | 34 | 35 | 36 | - IO 37 | - [Blocking](../part-guide/more-async-await.md#blocking-io) 38 | 39 | 40 | 41 | 
- [Joining tasks](../part-guide/async-await.md#joining-tasks) 42 | - [`JoinHandle`](../part-guide/async-await.md#joinhandle) 43 | - [`abort`](../part-guide/more-async-await.md#cancellation) 44 | 45 | 46 | 47 | - Multitasking 48 | - [Cooperative](../part-guide/concurrency.md#async-programming) 49 | - [Pre-emptive](../part-guide/concurrency.md#processes-and-threads) 50 | 51 | 52 | 53 | - [Parallelism](../part-guide/concurrency.md#concurrency-and-parallelism) 54 | - [c.f., concurrency](../part-guide/concurrency.md#concurrency-and-parallelism) 55 | 56 | 57 | 58 | - [Reactor](../part-guide/async-await.md#the-runtime) 59 | - [Runtimes](../part-guide/async-await.md#the-runtime) 60 | 61 | 62 | 63 | - [Scheduler](../part-guide/async-await.md#the-runtime) 64 | - [Spawning tasks](../part-guide/async-await.md#spawning-tasks) 65 | 66 | 67 | 68 | - [Tasks](../part-guide/async-await.md#futures-and-tasks) 69 | - [Spawning](../part-guide/async-await.md#spawning-tasks) 70 | - Testing 71 | - [Unit tests](../part-guide/more-async-await.md#unit-tests) 72 | - [Threads](../part-guide/concurrency.md#processes-and-threads) 73 | - [Tokio](../part-guide/async-await.md#the-runtime) 74 | - Traits 75 | - [async](../part-guide/more-async-await.md#async-traits) 76 | - `Future` 77 | -------------------------------------------------------------------------------- /src/navigation/intro.md: -------------------------------------------------------------------------------- 1 | # Navigation 2 | 3 | TODO Intro to navigation 4 | 5 | - [By topic](topics.md) 6 | - [FAQs]() 7 | - [Index](index.md) 8 | -------------------------------------------------------------------------------- /src/navigation/topics.md: -------------------------------------------------------------------------------- 1 | # Topic index 2 | 3 | ## Concurrency and parallelism 4 | 5 | - [Introduction](../part-guide/concurrency.md#concurrency-and-parallelism) 6 | - [Running async tasks in parallel using 
`spawn`](../part-guide/async-await.md#spawning-tasks) 7 | 8 | ## Correctness and safety 9 | 10 | - Cancellation 11 | - [Introduction](../part-guide/more-async-await.md#cancellation) 12 | 13 | ## Performance 14 | 15 | - Blocking 16 | - [Introduction](../part-guide/more-async-await.md#blocking-and-cancellation) 17 | 18 | ## Testing 19 | 20 | - [Unit test syntax](../part-guide/more-async-await.md#unit-tests) 21 | -------------------------------------------------------------------------------- /src/part-guide/async-await.md: -------------------------------------------------------------------------------- 1 | # Async and Await 2 | 3 | In this chapter we'll get started doing some async programming in Rust and we'll introduce the `async` and `await` keywords. 4 | 5 | `async` is an annotation on functions (and other items, such as traits, which we'll get to later); `await` is an operator used in expressions. But before we jump into those keywords, we need to cover a few core concepts of async programming in Rust. This follows from the discussion in the previous chapter; here we'll relate things directly to Rust programming. 6 | 7 | ## Rust async concepts 8 | 9 | ### The runtime 10 | 11 | Async tasks must be managed and scheduled. There are typically more tasks than cores available, so they can't all be run at once. When one stops executing, another must be picked to execute. If a task is waiting on IO or some other event, it should not be scheduled, but when that completes, it should be scheduled. That requires interacting with the OS and managing IO work. 12 | 13 | Many programming languages provide a runtime. Commonly, this runtime does a lot more than manage async tasks - it might manage memory (including garbage collection), have a role in exception handling, provide an abstraction layer over the OS, or even be a full virtual machine. Rust is a low-level language and strives towards minimal runtime overhead.
The async runtime therefore has a much more limited scope than many other languages' runtimes. There are also many ways to design and implement an async runtime, so Rust lets you choose one depending on your requirements, rather than providing one. This does mean that getting started with async programming requires an extra step. 14 | 15 | As well as running and scheduling tasks, a runtime must interact with the OS to manage async IO. It must also provide timer functionality to tasks (which intersects with IO management). There are no strong rules about how a runtime must be structured, but some terms and division of responsibilities are common: 16 | 17 | - *reactor* or *event loop* or *driver* (equivalent terms): dispatches IO and timer events, interacts with the OS, and does the lowest-level driving forward of execution, 18 | - *scheduler*: determines when tasks can execute and on which OS threads, 19 | - *executor* or *runtime*: combines the reactor and scheduler, and is the user-facing API for running async tasks; *runtime* is also used to mean the whole library of functionality (e.g., everything in the Tokio crate, not just the Tokio executor which is represented by the [`Runtime`](https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html) type). 20 | 21 | As well as the executor as described above, a runtime crate typically includes many utility traits and functions. These might include traits (e.g., `AsyncRead`) and implementations for IO, functionality for common IO tasks such as networking or accessing the file system, locks, channels, and other synchronisation primitives, utilities for timing, utilities for working with the OS (e.g., signal handling), utility functions for working with futures and streams (async iterators), or monitoring and observation tools. We'll cover many of those in this guide. 22 | 23 | There are many async runtimes to choose from. Some have very different scheduling policies, or are optimised for a specific task or domain. 
For most of this guide we'll use the [Tokio](https://tokio.rs/) runtime. It's a general purpose runtime and is the most popular runtime in the ecosystem. It's a great choice for getting started and for production work. In some circumstances, you might get better performance or be able to write simpler code with a different runtime. Later in this guide we'll discuss some of the other available runtimes and why you might choose one or another, or even write your own. 24 | 25 | To get up and running as quickly as possible, you need just a little boilerplate. You'll need to include the Tokio crate as a dependency in your Cargo.toml (just like any other crate): 26 | 27 | ``` 28 | [dependencies] 29 | tokio = { version = "1", features = ["full"] } 30 | ``` 31 | 32 | And you'll use the `tokio::main` annotation on your `main` function so that it can be an async function (which is otherwise not permitted in Rust): 33 | 34 | ```rust,norun 35 | #[tokio::main] 36 | async fn main() { ... } 37 | ``` 38 | 39 | That's it! You're ready to write some asynchronous code! 40 | 41 | The `#[tokio::main]` annotation initializes the Tokio runtime and starts an async task for running the code in `main`. Later in this guide we'll explain in more detail what that annotation is doing and how to use async code without it (which will give you more flexibility). 42 | 43 | 44 | ### Futures and tasks 45 | 46 | The basic unit of async concurrency in Rust is the *future*. A future is just a regular old Rust object (a struct or enum, usually) which implements the ['Future'](https://doc.rust-lang.org/std/future/trait.Future.html) trait. A future represents a deferred computation. That is, a computation that will be ready at some point in the future. 47 | 48 | We'll talk a lot about futures in this guide, but it's easiest to get started without worrying too much about them. We'll mention them quite a bit in the next few sections, but we won't really define them or use them directly until later. 
One important aspect of futures is that they can be combined to make new, 'bigger' futures (we'll talk a lot more about *how* they can be combined later). 49 | 50 | I've used the term 'async task' quite a bit in an informal way in the previous chapter and this one, to mean a logical sequence of execution; analogous to a thread but managed within a program rather than externally by the OS. It is often useful to think in terms of tasks; however, Rust itself has no concept of a task and the term is used to mean different things! It is confusing! To make it worse, runtimes do have a concept of a task and different runtimes have slightly different concepts of tasks. 51 | 52 | From here on in, I'm going to try to be precise about the terminology around tasks. When I use just 'task' I mean the abstract concept of a sequence of computation that may occur concurrently with other tasks. I'll use 'async task' to mean exactly the same thing, but in contrast to a task which is implemented as an OS thread. I'll use 'runtime's task' to mean whatever kind of task a runtime imagines, and 'tokio task' (or some other specific runtime) to mean Tokio's idea of a task. 53 | 54 | An async task in Rust is just a future (usually a 'big' future made by combining many others). In other words, a task is a future which is executed. However, there are times when a future is 'executed' without being a runtime's task. This kind of a future is intuitively a *task* but not a *runtime's task*. I'll spell this out more when we get to an example of it. 55 | 56 | 57 | ## Async functions 58 | 59 | The `async` keyword is a modifier on function declarations. E.g., we can write `pub async fn send_to_server(...)`. An async function is simply a function declared using the `async` keyword, and what that means is that it is a function which can be executed asynchronously; in other words, the caller *can choose not to* wait for the function to complete before doing something else.
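Here is a small, std-only sketch of that choice in action. The `send_number` function and `BODY_RAN` flag are invented for illustration, and the hand-rolled `drive` helper crudely stands in for a runtime (polling and wakers are machinery we won't properly cover until later):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Records whether the body of `send_number` has run (illustration only).
static BODY_RAN: AtomicBool = AtomicBool::new(false);

async fn send_number(n: u32) -> u32 {
    BODY_RAN.store(true, Ordering::SeqCst);
    n * 2
}

// Drive a future that never waits to completion by polling it once with a
// do-nothing waker. A real runtime does this (and much more) for you.
fn drive<F: Future>(fut: F) -> F::Output {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => unreachable!("this future never waits"),
    }
}

fn main() {
    // Calling the async function only constructs a future...
    let fut = send_number(21);
    assert!(!BODY_RAN.load(Ordering::SeqCst)); // ...the body has not run yet.

    // Only driving the future (which `await` does for you) runs the body.
    assert_eq!(drive(fut), 42);
    assert!(BODY_RAN.load(Ordering::SeqCst));
    println!("ok");
}
```

The caller held the future in `fut` and could have dropped it, stored it, or passed it elsewhere; nothing happened until it was driven.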
60 | 61 | In more mechanical terms, when an async function is called, the body is not executed as it would be for a regular function. Instead, the function body and its arguments are packaged into a future which is returned in lieu of a real result. The caller can then decide what to do with that future (if the caller wants the result 'straight away', then it will `await` the future; see the next section). 62 | 63 | Within an async function, code is executed in the usual, sequential way[^preempt]; being async makes no difference. You can call synchronous functions from async functions, and execution proceeds as usual. One extra thing you can do within an async function is use `await` to await other async functions (or futures), which *may* cause yielding of control so that another task can execute. 64 | 65 | [^preempt]: like any other thread, the thread the async function is running on may be pre-empted by the operating system and paused so another thread can get some work done. However, from the function's point of view this is not observable without inspecting data which may have been modified by other threads (and which could have been modified by another thread executing in parallel without the current thread being paused). 66 | 67 | ## `await` 68 | 69 | We stated above that a future is a computation that will be ready at some point in the future. To get the result of that computation, we use the `await` keyword. If the result is ready immediately or can be computed without waiting, then `await` simply does that computation to produce the result. However, if the result is not ready, then `await` hands control over to the scheduler so that another task can proceed (this is the cooperative multitasking mentioned in the previous chapter). 70 | 71 | The syntax for using await is `some_future.await`, i.e., it is a postfix keyword used with the `.` operator. That means it can be used ergonomically in chains of method calls and field accesses.
72 | 73 | Consider the following functions: 74 | 75 | ```rust,norun 76 | // An async function, but it doesn't need to wait for anything. 77 | async fn add(a: u32, b: u32) -> u32 { 78 | a + b 79 | } 80 | 81 | async fn wait_to_add(a: u32, b: u32) -> u32 { 82 | sleep(1000).await; 83 | a + b 84 | } 85 | ``` 86 | 87 | If we call `add(15, 3).await` then it will return immediately with the result `18`. If we call `wait_to_add(15, 3).await`, we will eventually get the same answer, but while we wait, another task will get an opportunity to run. 88 | 89 | In this silly example, the call to `sleep` is a stand-in for doing some long-running task where we have to wait for the result. This is usually an IO operation where the result is data read from an external source or confirmation that writing to an external destination succeeded. Reading looks something like `let data = read(...).await?`. In this case `await` will cause the current task to wait while the read happens. The task will resume once reading is completed (other tasks could get some work done while the reading task waits). The result of reading could be data successfully read or an error (handled by the `?`). 90 | 91 | Note that if we call `add` or `wait_to_add` or `read` without using `.await` we won't get any answer! 92 | 93 | What? 94 | 95 | Calling an async function returns a future; it doesn't immediately execute the code in the function. Furthermore, a future does not do any work until it is awaited[^poll]. This is in contrast to some other languages where an async function returns a future which begins executing immediately. 96 | 97 | This is an important point about async programming in Rust. After a while it will be second nature, but it often trips up beginners, especially those who have experience with async programming in other languages. 98 | 99 | An important intuition about futures in Rust is that they are inert objects.
To get any work done they must be driven forward by an external force (usually an async runtime). 100 | 101 | We've described `await` quite operationally (it runs a future, producing a result), but we talked in the previous chapter about async tasks and concurrency; how does `await` fit into that mental model? First, let's consider pure sequential code: logically, calling a function simply executes the code in the function (with some assignment of variables). In other words, the current task continues executing the next 'chunk' of code which is defined by the function. Similarly, in an async context, calling a non-async function simply continues execution with that function. Calling an async function finds the code to run, but doesn't run it. `await` is an operator which continues execution of the current task, or if the current task can't continue right now, gives another task an opportunity to continue. 102 | 103 | `await` can only be used inside an async context; for now, that means inside an async function (we'll see more kinds of async contexts later). To understand why, remember that `await` might hand over control to the runtime so that another task can execute. There is only a runtime to hand control to in an async context. For now, you can imagine the runtime like a global variable which is only accessible in async functions; we'll explain later how it really works. 104 | 105 | Finally, for one more perspective on `await`: we mentioned earlier that futures can be combined together to make 'bigger' futures. `async` functions are one way to define a future, and `await` is one way to combine futures. Using `await` on a future combines that future into the future produced by the async function it's used inside. We'll talk in more detail about this perspective and other ways to combine futures later. 106 | 107 | [^poll]: Or polled, which is a lower-level operation than `await` and happens behind the scenes when using `await`.
We'll talk about polling later when we talk about futures in detail. 108 | 109 | ## Some async/await examples 110 | 111 | Let's start by revisiting our 'hello, world!' example: 112 | 113 | ```rust,edition2021 114 | {{#include ../../examples/hello-world/src/main.rs}} 115 | ``` 116 | 117 | You should now recognise the boilerplate around `main`. It's for initializing the Tokio runtime and creating an initial task to run the async `main` function. 118 | 119 | `say_hello` is an async function; when we call it, we have to follow the call with `.await` to run it as part of the current task. Note that if you remove the `.await`, then running the program does nothing! Calling `say_hello` returns a future, but it is never executed, so `println` is never called (the compiler will warn you, at least). 120 | 121 | Here's a slightly more realistic example, taken from the [Tokio tutorial](https://tokio.rs/tokio/tutorial/hello-tokio). 122 | 123 | ```rust,norun 124 | #[tokio::main] 125 | async fn main() -> Result<()> { 126 | // Open a connection to the mini-redis address. 127 | let mut client = client::connect("127.0.0.1:6379").await?; 128 | 129 | // Set the key "hello" with value "world" 130 | client.set("hello", "world".into()).await?; 131 | 132 | // Get key "hello" 133 | let result = client.get("hello").await?; 134 | 135 | println!("got value from the server; result={:?}", result); 136 | 137 | Ok(()) 138 | } 139 | ``` 140 | 141 | The code is a bit more interesting, but we're essentially doing the same thing - calling async functions and then awaiting to execute the result. This time we're using `?` for error handling - it works just like in synchronous Rust. 142 | 143 | For all the talk so far about concurrency, parallelism, and asynchrony, both these examples are 100% sequential. Just calling and awaiting async functions does not introduce any concurrency unless there are other tasks to schedule while the awaiting task is waiting.
To prove this to ourselves, let's look at another simple (but contrived) example: 144 | 145 | ```rust,edition2021 146 | {{#include ../../examples/hello-world-sleep/src/main.rs}} 147 | ``` 148 | 149 | Between printing "hello" and "world", we put the current task to sleep[^async-sleep] for one second. Observe what happens when we run the program: it prints "hello", does nothing for one second, then prints "world". That is because executing a single task is purely sequential. If we had some concurrency, then that one second nap would be an excellent opportunity to get some other work done, like printing "world". We'll see how to do that in the next section. 150 | 151 | [^async-sleep]: Note that we're using an async sleep function here; if we were to use [`sleep`](https://doc.rust-lang.org/std/thread/fn.sleep.html) from std we'd put the whole thread to sleep. That wouldn't make any difference in this toy example but in a real program it would mean other tasks could not be scheduled on that thread during that time. That is very bad. 152 | 153 | 154 | ## Spawning tasks 155 | 156 | We've talked about async and await as a way to run code in an async task. And we've said that `await` can put the current task to sleep while it waits for IO or some other event. When that happens, another task can run, but how do those other tasks come about? Just like we use `std::thread::spawn` to spawn a new thread, we can use [`tokio::spawn`](https://docs.rs/tokio/latest/tokio/task/fn.spawn.html) to spawn a new async task. Note that `spawn` is a function of Tokio, the runtime, not from Rust's standard library, because tasks are purely a runtime concept. 157 | 158 | Here's a tiny example of running an async function on a separate task by using `spawn`: 159 | 160 | ```rust,edition2021 161 | {{#include ../../examples/hello-world-spawn/src/main.rs}} 162 | ``` 163 | 164 | Similar to the last example, we have two functions printing "hello" and "world!".
But this time we run them concurrently (and in parallel) rather than sequentially. If you run the program a few times you should see the strings printing in both orders - sometimes "hello" first, sometimes "world!" first. A classic concurrent race! 165 | 166 | Let's dive into what is happening here. There are three concepts in play: futures, tasks, and threads. The `spawn` function takes a future (which remember can be made up of many smaller futures) and runs it as a new Tokio task. Tasks are the concept which the Tokio runtime schedules and manages (not individual futures). Tokio (in its default configuration) is a multi-threaded runtime which means that when we spawn a new task, that task may be run on a different OS thread from the task it was spawned from (it may be run on the same thread, or it may start on one thread and then be moved to another later on). 167 | 168 | So, when a future is spawned as a task it runs *concurrently* with the task it was spawned from and any other tasks. It may also run in parallel to those tasks if it is scheduled on a different thread. 169 | 170 | To summarise, when we write two statements following each other in Rust, they are executed sequentially (whether in async code or not). When we write `await`, that does not change the concurrency of sequential statements. E.g., `foo(); bar();` is strictly sequential - `foo` is called and afterwards, `bar` is called. That is true whether `foo` and `bar` are async functions or not. `foo().await; bar().await;` is also strictly sequential: `foo` is fully evaluated and then `bar` is fully evaluated. In both cases another thread might be interleaved with the sequential execution and in the second case, another async task might be interleaved at the await points, but the two statements are executed sequentially *with respect to each other* in both cases.
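We can observe this sequencing directly. The sketch below is std-only: the hand-rolled `drive` helper (invented for this example) stands in for a runtime, and two awaited blocks record their order into a log. However many times it runs, the log always reads the same way, because awaiting one future after another is strictly sequential:

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal stand-in for a runtime: drive a future that never waits.
fn drive<F: Future>(fut: F) -> F::Output {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(out) => out,
        Poll::Pending => unreachable!("these futures never wait"),
    }
}

// `foo().await; bar().await;` in miniature: record the order of execution.
fn run_in_order() -> Vec<&'static str> {
    let log = RefCell::new(Vec::new());
    drive(async {
        async { log.borrow_mut().push("foo") }.await;
        async { log.borrow_mut().push("bar") }.await;
    });
    log.into_inner()
}

fn main() {
    // Awaiting does not introduce any concurrency on its own.
    assert_eq!(run_in_order(), ["foo", "bar"]);
    println!("{:?}", run_in_order());
}
```

Contrast this with the spawning example above: only spawning (onto threads or onto a runtime's tasks) makes the orderings race.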
171 | 172 | If we use either `thread::spawn` or `tokio::spawn` we introduce concurrency and potentially parallelism, in the first case between threads and in the second between tasks. 173 | 174 | Later in the guide we'll see cases where we execute futures concurrently, but never in parallel. 175 | 176 | 177 | ### Joining tasks 178 | 179 | If we want to get the result of executing a spawned task, then the spawning task can wait for it to finish and use the result; this is called *joining* the tasks (analogous to [joining](https://doc.rust-lang.org/std/thread/struct.JoinHandle.html#method.join) threads, and the APIs for joining are similar). 180 | 181 | When a task is spawned, the spawn function returns a [`JoinHandle`](https://docs.rs/tokio/latest/tokio/task/struct.JoinHandle.html). If you just want the task to do its own thing, the `JoinHandle` can be discarded (dropping the `JoinHandle` does not affect the spawned task). But if you want the spawning task to wait for the spawned task to complete and then use the result, you can `await` the `JoinHandle` to do so. 182 | 183 | For example, let's revisit our 'Hello, world!' example one more time: 184 | 185 | 186 | ```rust,edition2021 187 | {{#include ../../examples/hello-world-join/src/main.rs}} 188 | ``` 189 | 190 | The code is similar to last time, but instead of just calling `spawn`, we save the returned `JoinHandle`s and later `await` them. Since we're waiting for those tasks to complete before we exit the `main` function, we no longer need the `sleep` in `main`. 191 | 192 | The two spawned tasks are still executing concurrently. If you run the program a few times you should see both orderings. However, the `await`ed join handles are a limit on the concurrency: the final exclamation mark ('!') will *always* be printed last (you could experiment with moving `println!("!");` relative to the `await`s. You'll probably need to change the sleep times too to get observable effects).
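A joined task can also hand a value back to the spawning task. This snippet is a made-up sketch, not one of the book's examples (the `unwrap` and the `Result` it unwraps are explained below):

```rust,norun
let handle = tokio::spawn(async { 6 * 7 });
// ... the spawning task can do other work here, concurrently ...
let result = handle.await.unwrap();
assert_eq!(result, 42);
```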
193 | 194 | If we immediately `await`ed the `JoinHandle` of the first `spawn` rather than saving it and `await`ing it later (i.e., if we had written `spawn(say_hello()).await;`), then we'd have spawned another task to run the 'hello' future, but the spawning task would have waited for it to finish before doing anything else. In other words, there is no possible concurrency! You almost never want to do this (because why bother with the spawn? Just write the sequential code). 195 | 196 | ### `JoinHandle` 197 | 198 | We'll quickly look at `JoinHandle` in a little more depth. The fact that we can `await` a `JoinHandle` is a clue that a `JoinHandle` is itself a future. `spawn` is not an `async` function; it's a regular function that returns a future (`JoinHandle`). It does some work (to schedule the task) before returning the future (unlike an async function), which is why we don't *need* to `await` `spawn`. Awaiting a `JoinHandle` waits for the spawned task to complete and then returns the result. In the above example, there was no result; we just waited for the task to complete. `JoinHandle` is a generic type and its type parameter is the type returned by the spawned task. In the above example, the type would be `JoinHandle<()>`; a task that results in a `String` would produce a `JoinHandle<String>`. 199 | 200 | `await`ing a `JoinHandle` returns a `Result` (which is why we used `let _ = ...` in the above example; it avoids a warning about an unused `Result`). If the spawned task completed successfully, then the task's result will be in the `Ok` variant. If the task panicked or was aborted (a form of cancellation, see [TODO]()), then the result will be an `Err` containing a [`JoinError`](https://docs.rs/tokio/latest/tokio/task/struct.JoinError.html).
If you are not using cancellation via `abort` in your project, then `unwrap`ping the result of `JoinHandle.await` is a reasonable approach, since that is effectively propagating a panic from the spawned task to the spawning task. 201 | -------------------------------------------------------------------------------- /src/part-guide/concurrency-primitives.md: -------------------------------------------------------------------------------- 1 | # Concurrency primitives 2 | 3 | - concurrent composition of futures 4 | - c.f., sequential composition with await, composition of tasks with spawn 5 | - concurrent/task behaviour 6 | - behaviour on error 7 | - streams as alternative, forward ref 8 | - different versions in different runtimes/other crates 9 | - focus on the Tokio versions 10 | 11 | From [comment](https://github.com/rust-lang/async-book/pull/230#discussion_r1829351497): A framing I've started using is that tasks are not the async/await form of threads; it's more accurate to think of them as parallelizable futures. This framing does not match Tokio and async-std's current task design; but both also have trouble propagating cancellation. See parallel_future and tasks are the wrong abstraction for more.
12 | 13 | 14 | ## Join 15 | 16 | - Tokio/futures-rs join macro 17 | - c.f., joining tasks 18 | - join in futures-concurrency 19 | - FuturesUnordered 20 | - like a dynamic version of join 21 | - forward ref to stream 22 | 23 | ## Race/select 24 | 25 | - Tokio select macro 26 | - cancellation issues 27 | - different behaviour of futures-rs version 28 | - race in futures-concurrency 29 | -------------------------------------------------------------------------------- /src/part-guide/dtors.md: -------------------------------------------------------------------------------- 1 | # Destruction and clean-up 2 | 3 | - Object destruction and recap of Drop 4 | - General clean up requirements in software 5 | - Async issues 6 | - Might want to do stuff async during clean up, e.g., send a final message 7 | - Might need to clean up stuff which is still being used async-ly 8 | - Might want to clean up when an async task completes or cancels and there is no way to catch that 9 | - State of the runtime during clean-up phase (esp if we're panicking or whatever) 10 | - No async Drop 11 | - WIP 12 | - forward ref to completion io topic 13 | 14 | ## Cancellation 15 | 16 | - How it happens (recap of more-async-await.md) 17 | - drop a future 18 | - cancellation token 19 | - abort functions 20 | - What we can do about 'catching' cancellation 21 | - logging or monitoring cancellation 22 | - How cancellation affects other futures tasks (forward ref to cancellation safety chapter, this should just be a heads-up) 23 | 24 | ## Panicking and async 25 | 26 | - Propagation of panics across tasks (spawn result) 27 | - Panics leaving data inconsistent (tokio mutexes) 28 | - Calling async code when panicking (make sure you don't) 29 | 30 | ## Patterns for clean-up 31 | 32 | - Avoid needing clean up (abort/restart) 33 | - Don't use async for cleanup and don't worry too much 34 | - async clean up method + dtor bomb (i.e., separate clean-up from destruction) 35 | - centralise/out-source clean-up in a 
separate task or thread or supervisor object/process 36 | 37 | ## Why no async Drop (yet) 38 | 39 | - Note this is an advanced section and not necessary to read 40 | - Why async Drop is hard 41 | - Possible solutions and their issues 42 | - Current status 43 | -------------------------------------------------------------------------------- /src/part-guide/futures.md: -------------------------------------------------------------------------------- 1 | # Futures 2 | 3 | We've talked a lot about futures in the preceding chapters; they're a key part of Rust's async programming story! In this chapter we're going to get into some of the details of what futures are and how they work, and some libraries for working directly with futures. 4 | 5 | ## The `Future` and `IntoFuture` traits 6 | 7 | - Future 8 | - Output assoc type 9 | - No real detail here, polling is in the next section, reference adv sections on Pin, executors/wakers 10 | - IntoFuture 11 | - Usage - general, in await, async builder pattern (pros and cons in using) 12 | - Boxing futures, `Box<dyn Future>` and how it used to be common and necessary but mostly isn't now, except for recursion, etc. 13 | 14 | ## Polling 15 | 16 | - what it is and who does it, Poll type 17 | - ready is final state 18 | - how it connects with await 19 | - drop = cancel 20 | - for futures and thus tasks 21 | - implications for async programming in general 22 | - reference to chapter on cancellation safety 23 | 24 | ### Fusing 25 | 26 | ## futures-rs crate 27 | 28 | - History and purpose 29 | - see streams chapter 30 | - helpers for writing executors or other low-level futures stuff 31 | - pinning and boxing 32 | - executor as a partial runtime (see alternate runtimes in reference) 33 | - TryFuture 34 | - convenience futures: pending, ready, ok/err, etc.
35 | - combinator functions on FutureExt 36 | - alternative to Tokio stuff 37 | - functions 38 | - IO traits 39 | 40 | ## futures-concurrency crate 41 | 42 | https://docs.rs/futures-concurrency/latest/futures_concurrency/ 43 | 44 | 45 | -------------------------------------------------------------------------------- /src/part-guide/intro.md: -------------------------------------------------------------------------------- 1 | # Part 1: A guide to asynchronous programming in Rust 2 | 3 | This part of the book is a tutorial-style guide to async Rust. It is aimed at newcomers to async programming in Rust. It should be useful whether or not you've done async programming in other languages. If you have, you might skip the first section or skim it as a refresher. You might also want to read this [comparison to async in other languages]() sooner rather than later. 4 | 5 | We'll start by discussing different models of [concurrent programming](concurrency.md), using processes, threads, or async tasks. This chapter will cover the essential parts of Rust's async model before we get into the nitty-gritty of programming in the second chapter where we introduce the async and await syntax. 6 | -------------------------------------------------------------------------------- /src/part-guide/io.md: -------------------------------------------------------------------------------- 1 | # IO and issues with blocking 2 | 3 | ## Blocking and non-blocking IO 4 | 5 | - High level view 6 | - How async IO fits with async concurrency 7 | - Why blocking IO is bad 8 | - forward ref to streams for streams/sinks 9 | 10 | ## Read and Write 11 | 12 | - async Read and Write traits 13 | - part of the runtime 14 | - how to use 15 | - specific implementations 16 | - network vs disk 17 | - tcp, udp 18 | - file system is not really async, but io_uring (ref to that chapter) 19 | - practical examples 20 | - stdout, etc. 21 | - pipe, fd, etc. 
22 | 23 | 24 | ## Memory management 25 | 26 | - Issues with buffer management and async IO 27 | - Different solutions and pros and cons 28 | - zero-copy approach 29 | - shared buffer approach 30 | - Utility crates to help with this, Bytes, etc. 31 | 32 | ## Advanced topics on IO 33 | 34 | - buf read/write 35 | - Read + Write, split, join 36 | - copy 37 | - simplex and duplex 38 | - cancellation 39 | 40 | ## The OS view of IO 41 | 42 | - Different kinds of IO and mechanisms, completion IO, reference to completion IO chapter in adv section 43 | - different runtimes can facilitate this 44 | - mio for low-level interface 45 | 46 | 47 | ## Other blocking operations 48 | 49 | - Why this is bad 50 | - Long running CPU work 51 | - Using Tokio for just CPU work: https://thenewstack.io/using-rustlangs-async-tokio-runtime-for-cpu-bound-tasks/ 52 | - Solutions 53 | - spawn blocking 54 | - thread pool 55 | - etc. 56 | - yielding to the runtime 57 | - not the same as Rust's yield keyword 58 | - await doesn't yield 59 | - implicit yields in Tokio 60 | -------------------------------------------------------------------------------- /src/part-guide/more-async-await.md: -------------------------------------------------------------------------------- 1 | # More async/await topics 2 | 3 | ## Unit tests 4 | 5 | How to unit test async code? The issue is that you can only await from inside an async context, and unit tests in Rust are not async. Luckily, most runtimes provide a convenience attribute for tests similar to the one for `async main`. Using Tokio, it looks like this: 6 | 7 | ```rust,norun 8 | #[tokio::test] 9 | async fn test_something() { 10 | // Write a test here, including all the `await`s you like. 11 | } 12 | ``` 13 | 14 | There are many ways to configure the test; see the [docs](https://docs.rs/tokio/latest/tokio/attr.test.html) for details.
15 | 16 | There are some more advanced topics in testing async code (e.g., testing for race conditions, deadlock, etc.), and we'll cover some of those [later]() in this guide. 17 | 18 | 19 | ## Blocking and cancellation 20 | 21 | Blocking and cancellation are important to keep in mind when programming with async Rust. These concepts are not localised to any particular feature or function, but are ubiquitous properties of the system which you must understand to write correct code. 22 | 23 | ### Blocking IO 24 | 25 | We say a thread (note we're talking about OS threads here, not async tasks) is blocked when it can't make any progress. That's usually because it is waiting for the OS to complete a task on its behalf (usually I/O). Importantly, while a thread is blocked, the OS knows not to schedule it so that other threads can make progress. This is fine in a multithreaded program because it lets other threads make progress while the blocked thread is waiting. However, in an async program, there are other tasks which should be scheduled on the same OS thread, but the OS doesn't know about those and keeps the whole thread waiting. This means that rather than the single task waiting for its I/O to complete (which is fine), many tasks have to wait (which is not fine). 26 | 27 | We'll talk soon about non-blocking/async I/O. For now, just know that non-blocking I/O is I/O which the async runtime knows about, so only the current task will wait; the thread will not be blocked. It is very important to only use non-blocking I/O from an async task, never blocking I/O (which is the only kind provided in Rust's standard library). 28 | 29 | ### Blocking computation 30 | 31 | You can also block the thread by doing computation (this is not quite the same as blocking I/O, since the OS is not involved, but the effect is similar).
If you have a long-running computation (with or without blocking I/O) without yielding control to the runtime, then that task will never give the runtime's scheduler a chance to schedule other tasks. Remember that async programming uses cooperative multitasking. Here a task is not cooperating, so other tasks won't get a chance to get work done. We'll discuss ways to mitigate this later. 32 | 33 | There are many other ways to block a whole thread, and we'll come back to blocking several times in this guide. 34 | 35 | ### Cancellation 36 | 37 | Cancellation means stopping a future (or task) from executing. Since in Rust (and in contrast to many other async/await systems), futures must be driven forward by an external force (like the async runtime), if a future is no longer driven forward then it will not execute any more. If a future is dropped (remember, a future is just a plain old Rust object), then it can never make any more progress and is canceled. 38 | 39 | Cancellation can be initiated in a few ways: 40 | 41 | - By simply dropping a future (if you own it). 42 | - Calling [`abort`](https://docs.rs/tokio/latest/tokio/task/struct.JoinHandle.html#method.abort) on a task's `JoinHandle` (or an `AbortHandle`). 43 | - Via a [`CancellationToken`](https://docs.rs/tokio-util/latest/tokio_util/sync/struct.CancellationToken.html) (which requires the future being canceled to notice the token and cooperatively cancel itself). 44 | - Implicitly, by a function or macro like [`select`](https://docs.rs/tokio/latest/tokio/macro.select.html). 45 | 46 | The middle two are specific to Tokio, though most runtimes provide similar facilities. Using a `CancellationToken` requires cooperation of the future being canceled, but the others do not. In these other cases, the canceled future will get no notification of cancellation and no opportunity to clean up (besides its destructor).
Note that even if a future has a cancellation token, it can still be canceled via the other methods which won't trigger the cancellation token. 47 | 48 | From the perspective of writing async code (in async functions, blocks, futures, etc.), the code might stop executing at any `await` (including hidden ones in macros) and never start again. In order for your code to be correct (specifically to be *cancellation safe*), it must work correctly whether it completes normally or whether it terminates at any await point[^cfThreads]. 49 | 50 | ```rust,norun 51 | async fn some_function(input: Option<Input>) { 52 | let Some(input) = input else { 53 | return; // Might terminate here (`return`). 54 | }; 55 | 56 | let x = foo(input)?; // Might terminate here (`?`). 57 | 58 | let y = bar(x).await; // Might terminate here (`await`). 59 | 60 | // ... 61 | 62 | // Might terminate here (implicit return). 63 | } 64 | ``` 65 | 66 | An example of how this can go wrong is if an async function reads data into an internal buffer, then awaits the next datum. If reading the data is destructive (i.e., cannot be re-read from the original source) and the async function is canceled, then the internal buffer will be dropped, and the data in it will be lost. It is important to consider how a future and any data it touches will be impacted by canceling the future, restarting the future, or starting a new future which touches the same data. 67 | 68 | We'll be coming back to cancellation and cancellation safety a few times in this guide, and there is a whole [chapter]() on the topic in the reference section. 69 | 70 | [^cfThreads]: It is interesting to compare cancellation in async programming with canceling threads. Canceling a thread is possible (e.g., using `pthread_cancel` in C, there is no direct way to do this in Rust), but it is almost always a very, very bad idea since the thread being canceled can terminate anywhere. In contrast, canceling an async task can only happen at an await point.
As a consequence, it is very rare to cancel an OS thread without terminating the whole process and so as a programmer, you generally don't worry about this happening. In async Rust however, cancellation is definitely something which *can* happen. We'll be discussing how to deal with that as we go along. 71 | 72 | ## Async blocks 73 | 74 | A regular block (`{ ... }`) groups code together in the source and creates a scope of encapsulation for names. At runtime, the block is executed in order and evaluates to the value of its last expression (or the unit type (`()`) if there is no trailing expression). 75 | 76 | Similarly to async functions, an async block is a deferred version of a regular block. An async block scopes code and names together, but at runtime it is not immediately executed and evaluates to a future. To execute the block and obtain the result, it must be `await`ed. E.g.: 77 | 78 | ```rust,norun 79 | let s1 = { 80 | let a = 42; 81 | format!("The answer is {a}") 82 | }; 83 | 84 | let s2 = async { 85 | let q = question().await; 86 | format!("The question is {q}") 87 | }; 88 | ``` 89 | 90 | If we were to execute this snippet, `s1` would be a string which could be printed, but `s2` would be a future; `question()` would not have been called. To print `s2`, we first have to `s2.await`. 91 | 92 | An async block is the simplest way to start an async context and create a future. It is commonly used to create small futures which are only used in one place. 93 | 94 | Unfortunately, control flow with async blocks is a little quirky. Because an async block creates a future rather than straightforwardly executing, it behaves more like a function than a regular block with respect to control flow. `break` and `continue` cannot go 'through' an async block like they can with regular blocks; instead you have to use `return`: 95 | 96 | ```rust,norun 97 | loop { 98 | { 99 | if ... { 100 | // ok 101 | continue; 102 | } 103 | } 104 | 105 | async { 106 | if ... 
{ 107 | // not ok 108 | // continue; 109 | 110 | // ok - continues with the next execution of the `loop`, though note that if there was 111 | // code in the loop after the async block that would be executed. 112 | return; 113 | } 114 | }.await 115 | } 116 | ``` 117 | 118 | To implement `break` you would need to test the value of the block (a common idiom is to use [`ControlFlow`](https://doc.rust-lang.org/std/ops/enum.ControlFlow.html) for the value of the block, which also allows use of `?`). 119 | 120 | Likewise, `?` inside an async block will terminate execution of the future in the presence of an error, causing the `await`ed block to take the value of the error, but won't exit the surrounding function (like `?` in a regular block would). You'll need another `?` after `await` for that: 121 | 122 | ```rust,norun 123 | async { 124 | let x = foo()?; // This `?` only exits the async block, not the surrounding function. 125 | consume(x); 126 | Ok(()) 127 | }.await? 128 | ``` 129 | 130 | Annoyingly, this often confuses the compiler since (unlike functions) the 'return' type of an async block is not explicitly stated. You'll probably need to add some type annotations on variables or use turbofished types to make this work, e.g., `Ok::<_, MyError>(())` instead of `Ok(())` in the above example. 131 | 132 | A function which returns an async block is pretty similar to an async function. Writing `async fn foo() -> ... { ... }` is roughly equivalent to `fn foo() -> ... { async { ... } }`. In fact, from the caller's perspective they are equivalent, and changing from one form to the other is not a breaking change. Furthermore, you can override one with the other when implementing an async trait (see below). However, you do have to adjust the type, making the `Future` explicit in the async block version: `async fn foo() -> Foo` becomes `fn foo() -> impl Future<Output = Foo>` (you might also need to make other bounds explicit, e.g., `Send` and `'static`).
133 | 134 | You would usually prefer the async function version since it is simpler and clearer. However, the async block version is more flexible since you can execute some code when the function is called (by writing it outside the async block) and some code when the result is awaited (the code inside the async block). 135 | 136 | 137 | ## Async closures 138 | 139 | - closures 140 | - coming soon (https://github.com/rust-lang/rust/pull/132706, https://blog.rust-lang.org/inside-rust/2024/08/09/async-closures-call-for-testing.html) 141 | - async blocks in closures vs async closures 142 | 143 | 144 | ## Lifetimes and borrowing 145 | 146 | - Mentioned the static lifetime above 147 | - Lifetime bounds on futures (`Future + '_`, etc.) 148 | - Borrowing across await points 149 | - I don't know, I'm sure there are more lifetime issues with async functions ... 150 | 151 | 152 | ## `Send + 'static` bounds on futures 153 | 154 | - Why they're there, multi-threaded runtimes 155 | - spawn local to avoid them 156 | - What makes an async fn `Send + 'static` and how to fix bugs with it 157 | 158 | 159 | ## Async traits 160 | 161 | - syntax 162 | - The `Send + 'static` issue and working around it 163 | - trait_variant 164 | - explicit future 165 | - return type notation (https://blog.rust-lang.org/inside-rust/2024/09/26/rtn-call-for-testing.html) 166 | - overriding 167 | - future vs async notation for methods 168 | - object safety 169 | - capture rules (https://blog.rust-lang.org/2024/09/05/impl-trait-capture-rules.html) 170 | - history and async-trait crate 171 | 172 | 173 | ## Recursion 174 | 175 | - Allowed (relatively new), but requires some explicit boxing 176 | - forward reference to futures, pinning 177 | - https://rust-lang.github.io/async-book/07_workarounds/04_recursion.html 178 | - https://blog.rust-lang.org/2024/03/21/Rust-1.77.0.html#support-for-recursion-in-async-fn 179 | - async-recursion macro (https://docs.rs/async-recursion/latest/async_recursion/) 180 | 181 | 
-------------------------------------------------------------------------------- /src/part-guide/runtimes.md: -------------------------------------------------------------------------------- 1 | # Runtimes and runtime issues 2 | 3 | ## Running async code 4 | 5 | - Explicit startup vs async main 6 | - tokio context concept 7 | - block_on 8 | - runtime as reflected in the code (Runtime, Handle) 9 | - runtime shutdown 10 | 11 | ## Threads and tasks 12 | 13 | - default work stealing, multi-threaded 14 | - revisit Send + 'static bounds 15 | - yield 16 | - spawn-local 17 | - spawn-blocking (recap), block-in-place 18 | - tokio-specific stuff on yielding to other threads, local vs global queues, etc 19 | 20 | ## Configuration options 21 | 22 | - thread pool size 23 | - single threaded, thread per core etc. 24 | 25 | ## Alternate runtimes 26 | 27 | - Why you'd want to use a different runtime or implement your own 28 | - What kind of variations exist in the high-level design 29 | - Forward ref to adv chapters 30 | -------------------------------------------------------------------------------- /src/part-guide/streams.md: -------------------------------------------------------------------------------- 1 | # Async iterators (FKA streams) 2 | 3 | - Stream as an async iterator or as many futures 4 | - WIP 5 | - current status 6 | - futures and Tokio Stream traits 7 | - nightly trait 8 | - lazy like sync iterators 9 | - pinning and streams (forward ref to pinning chapter) 10 | - fused streams 11 | 12 | ## Consuming an async iterator 13 | 14 | - while let with async next 15 | - for_each, for_each_concurrent 16 | - collect 17 | - into_future, buffered 18 | 19 | ## Stream combinators 20 | 21 | - Taking a future instead of a closure 22 | - Some example combinators 23 | - unordered variations 24 | - StreamGroup 25 | 26 | ## Implementing an async iterator 27 | 28 | - Implementing the trait 29 | - Practicalities and util functions 30 | - async_iter stream macro 31 | 32 | ## Sinks 33 | 
34 | - https://docs.rs/futures/latest/futures/sink/index.html 35 | 36 | ## Future work 37 | 38 | - current status 39 | - https://rust-lang.github.io/rfcs/2996-async-iterator.html 40 | - async next vs poll 41 | - async iteration syntax 42 | - (async) generators 43 | - lending iterators 44 | 45 | -------------------------------------------------------------------------------- /src/part-guide/sync.md: -------------------------------------------------------------------------------- 1 | # Channels, locking, and synchronization 2 | 3 | note on runtime specificness of sync primitives 4 | 5 | Why we need async primitives rather than use the sync ones 6 | 7 | ## Channels 8 | 9 | - basically same as the std ones, but await 10 | - communicate between tasks (same thread or different) 11 | - one shot 12 | - mpsc 13 | - other channels 14 | - bounded and unbounded channels 15 | 16 | ## Locks 17 | 18 | - async Mutex 19 | - c.f., std::Mutex - can be held across await points (borrowing the mutex in the guard, guard is Send, scheduler-aware? or just because lock is async?), lock is async (will not block the thread waiting for lock to be available) 20 | - even a clippy lint for holding the guard across await (https://rust-lang.github.io/rust-clippy/master/index.html#await_holding_lock) 21 | - more expensive because it can be held across await 22 | - use std::Mutex if you can 23 | - can use try_lock if the mutex is expected to not be under contention 24 | - lock is not magically dropped when yield (that's kind of the point of a lock!) 25 | - deadlock by holding mutex over await 26 | - tasks deadlocked, but other tasks can make progress so might not look like a deadlock in process stats/tools/OS 27 | - usual advice - limit scope, minimise locks, order locks, prefer alternatives 28 | - no mutex poisoning 29 | - lock_owned 30 | - blocking_lock 31 | - cannot use in async 32 | - applies to other locks (should the above be moved before discussion of mutex specifically?
Probably yes) 33 | - RWLock 34 | - Semaphore 35 | - yielding 36 | 37 | ## Other synchronization primitives 38 | 39 | - notify, barrier 40 | - OnceCell 41 | - atomics 42 | -------------------------------------------------------------------------------- /src/part-guide/timers-signals.md: -------------------------------------------------------------------------------- 1 | # Timers and Signal handling 2 | 3 | ## Time and Timers 4 | 5 | - runtime integration, don't use thread::sleep, etc. 6 | - std Instant and Duration 7 | - sleep 8 | - interval 9 | - timeout 10 | 11 | ## Signal handling 12 | 13 | - what is signal handling and why is it an async issue? 14 | - very OS specific 15 | - see Tokio docs 16 | -------------------------------------------------------------------------------- /src/part-guide/tools.md: -------------------------------------------------------------------------------- 1 | # Tools for async programming 2 | 3 | - Why we need specialist tools for async 4 | - Are there other tools to cover 5 | - loom 6 | 7 | ## Monitoring 8 | 9 | - [Tokio console](https://github.com/tokio-rs/console) 10 | 11 | ## Tracing and logging 12 | 13 | - issues with async tracing 14 | - tracing crate (https://github.com/tokio-rs/tracing) 15 | 16 | ## Debugging 17 | 18 | - Understanding async backtraces (RUST_BACKTRACE and in a debugger) 19 | - Techniques for debugging async code 20 | - Using Tokio console for debugging 21 | - Debugger support (WinDbg?) 22 | 23 | ## Profiling 24 | 25 | - How async messes up flamegraphs 26 | - How to profile async IO 27 | - Getting insight into the runtime 28 | - Tokio metrics 29 | --------------------------------------------------------------------------------