├── .github └── workflows │ └── ci.yml ├── .gitignore ├── .rustfmt.toml ├── LICENSE ├── README.md ├── book.toml ├── ci ├── dictionary.txt └── spellcheck.sh ├── examples ├── 01_02_why_async │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 01_04_async_await_primer │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_02_future_trait │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_03_timer │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 02_04_executor │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 03_01_async_await │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 05_01_streams │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 05_02_iteration_and_concurrency │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_02_join │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_03_select │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 06_04_spawning │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 07_05_recursion │ ├── Cargo.toml │ └── src │ │ └── lib.rs ├── 09_01_sync_tcp_server │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs ├── 09_02_async_tcp_server │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── 09_03_slow_request │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs ├── 09_04_concurrent_tcp_server │ ├── Cargo.toml │ └── src │ │ └── main.rs ├── 09_05_final_tcp_server │ ├── 404.html │ ├── Cargo.toml │ ├── hello.html │ └── src │ │ └── main.rs └── Cargo.toml └── src ├── 01_getting_started ├── 01_chapter.md ├── 01_chapter_zh.md ├── 02_why_async.md ├── 02_why_async_zh.md ├── 03_state_of_async_rust.md ├── 03_state_of_async_rust_zh.md ├── 04_async_await_primer.md └── 04_async_await_primer_zh.md ├── 02_execution ├── 01_chapter.md ├── 01_chapter_zh.md ├── 02_future.md ├── 02_future_zh.md ├── 03_wakeups.md ├── 03_wakeups_zh.md ├── 04_executor.md ├── 04_executor_zh.md ├── 05_io.md └── 05_io_zh.md ├── 03_async_await ├── 01_chapter.md └── 01_chapter_zh.md ├── 04_pinning ├── 01_chapter.md └── 01_chapter_zh.md ├── 05_streams ├── 01_chapter.md ├── 01_chapter_zh.md ├── 
02_iteration_and_concurrency.md └── 02_iteration_and_concurrency_zh.md ├── 06_multiple_futures ├── 01_chapter.md ├── 01_chapter_zh.md ├── 02_join.md ├── 02_join_zh.md ├── 03_select.md ├── 03_select_zh.md ├── 04_spawning.md └── 04_spawning_zh.md ├── 07_workarounds ├── 01_chapter.md ├── 01_chapter_zh.md ├── 02_err_in_async_blocks.md ├── 02_err_in_async_blocks_zh.md ├── 03_send_approximation.md ├── 03_send_approximation_zh.md ├── 04_recursion.md ├── 04_recursion_zh.md ├── 05_async_in_traits.md └── 05_async_in_traits_zh.md ├── 08_ecosystem ├── 00_chapter.md └── 00_chapter_zh.md ├── 09_example ├── 00_intro.md ├── 00_intro_zh.md ├── 01_running_async_code.md ├── 01_running_async_code_zh.md ├── 02_handling_connections_concurrently.md ├── 02_handling_connections_concurrently_zh.md ├── 03_tests.md └── 03_tests_zh.md ├── 12_appendix ├── 01_translations.md └── 01_translations_zh.md ├── SUMMARY.md ├── SUMMARY_zh.md └── assets └── swap_problem.jpg /.github/workflows/ci.yml: -------------------------------------------------------------------------------- 1 | name: CI 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: 7 | - master 8 | 9 | jobs: 10 | test: 11 | name: build and test 12 | runs-on: ubuntu-latest 13 | steps: 14 | - uses: actions/checkout@v2 15 | - name: Install Rust 16 | run: rustup update stable && rustup default stable 17 | - run: sudo apt-get update && sudo apt-get install aspell aspell-en 18 | - name: Install mdbook 19 | uses: taiki-e/install-action@mdbook 20 | - name: Install mdbook-linkcheck 21 | uses: taiki-e/install-action@mdbook-linkcheck 22 | - run: cp src/SUMMARY_zh.md src/SUMMARY.md 23 | - run: mdbook build 24 | - run: cargo test --all --manifest-path=./examples/Cargo.toml --target-dir ./target 25 | - name: Deploy 26 | uses: peaceiris/actions-gh-pages@v3 27 | with: 28 | github_token: ${{ secrets.GITHUB_TOKEN }} 29 | publish_dir: ./book 30 | 31 | -------------------------------------------------------------------------------- /.gitignore: 
-------------------------------------------------------------------------------- 1 | /book/ 2 | /examples/target/ 3 | /examples/Cargo.lock 4 | target -------------------------------------------------------------------------------- /.rustfmt.toml: -------------------------------------------------------------------------------- 1 | # https://github.com/rust-lang/async-book/pull/59#issuecomment-556240879 2 | disable_all_formatting = true 3 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Aaron Turon 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # async-book 2 | Asynchronous Programming in Rust 3 | 4 | ## Requirements 5 | The async book is built with [`mdbook`]; you can install it with cargo. 6 | 7 | ``` 8 | cargo install mdbook 9 | cargo install mdbook-linkcheck 10 | ``` 11 | 12 | [`mdbook`]: https://github.com/rust-lang/mdBook 13 | 14 | ## Building 15 | To create a finished book, run `mdbook build` to generate it under the `book/` directory. 16 | ``` 17 | mdbook build 18 | ``` 19 | 20 | ## Development 21 | While writing, it can be handy to see your changes; `mdbook serve` will launch a local web 22 | server that serves the book. 23 | ``` 24 | mdbook serve 25 | ``` 26 | -------------------------------------------------------------------------------- /book.toml: -------------------------------------------------------------------------------- 1 | [book] 2 | title = "Asynchronous Programming in Rust" 3 | authors = ["Taylor Cramer"] 4 | 5 | [build] 6 | create-missing = false 7 | 8 | [preprocess.links] 9 | 10 | [output.html] 11 | git-repository-url = "https://github.com/rust-lang/async-book" 12 | site-url = "/async-book/" 13 | 14 | #[output.linkcheck] 15 | #follow-web-links = true 16 | #traverse-parent-directories = false 17 | -------------------------------------------------------------------------------- /ci/dictionary.txt: -------------------------------------------------------------------------------- 1 | personal_ws-1.1 en 0 utf-8 2 | ambiently 3 | APIs 4 | ArcWake 5 | async 6 | AsyncFuture 7 | asynchronous 8 | AsyncRead 9 | AsyncWrite 10 | AwaitingFutOne 11 | AwaitingFutTwo 12 | cancelling 13 | combinator 14 | combinators 15 | compat 16 | const 17 | coroutines 18 | dyn 19 | enqueued 20 | enum 21 | epoll 22 | FreeBSD 23 | FusedFuture 24 | FusedStream 25 | FutOne 26 | FutTwo 27 | FuturesUnordered 28 | GenFuture 29 | gRPC 30 |
html 31 | http 32 | Hyper's 33 | impl 34 | implementors 35 | init 36 | interoperate 37 | interprocess 38 | IoBlocker 39 | IOCP 40 | IoObject 41 | JoinHandle 42 | kqueue 43 | localhost 44 | LocalExecutor 45 | metadata 46 | MockTcpStream 47 | multi 48 | multithreaded 49 | multithreading 50 | Mutex 51 | MyError 52 | MyFut 53 | MyType 54 | natively 55 | NotSend 56 | OtherType 57 | performant 58 | PhantomPinned 59 | pointee 60 | println 61 | proxied 62 | proxying 63 | pseudocode 64 | ReadIntoBuf 65 | recognise 66 | refactor 67 | RefCell 68 | repurposed 69 | requeue 70 | ResponseFuture 71 | reusability 72 | runtime 73 | runtimes 74 | rustc 75 | rustup 76 | SimpleFuture 77 | smol 78 | SocketRead 79 | SomeType 80 | spawner 81 | StepOne 82 | StepTwo 83 | struct 84 | structs 85 | subfuture 86 | subfutures 87 | subpar 88 | TcpListener 89 | TcpStream 90 | threadpool 91 | TimerFuture 92 | TODO 93 | Tokio 94 | toml 95 | TryFutureExt 96 | tuple 97 | turbofish 98 | UnixStream 99 | usize 100 | utils 101 | Waker 102 | waker 103 | Wakeups 104 | wakeups 105 | webpages 106 | webserver 107 | Woot 108 | -------------------------------------------------------------------------------- /ci/spellcheck.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | aspell --version 4 | 5 | # Checks project Markdown files for spelling mistakes. 6 | 7 | # Notes: 8 | 9 | # This script needs a dictionary file ($dict_filename) of project-specific 10 | # valid words. If this file is missing, the first invocation of the script generates 11 | # a file of words considered typos at the moment. The user should remove real typos 12 | # from this file and leave only valid words. When the script reports a false 13 | # positive after a source modification, the new valid word should be added 14 | # to the dictionary file. 15 | 16 | # The default mode of this script is interactive. Each source file is scanned for 17 | # typos. aspell opens a window, suggesting fixes for each found typo.
Original 18 | # files with errors will be backed up to files with the format "filename.md.bak". 19 | 20 | # When running in CI, this script should be run in "list" mode (pass "list" 21 | # as the first argument). In this mode the script scans all files and reports any 22 | # errors it finds. The exit code in this case depends on the scan result: 23 | # 1 if any errors were found, 24 | # 0 if all is clear. 25 | 26 | # The script skips words with length less than or equal to 3. This helps to avoid 27 | # some false positives. 28 | 29 | # We could consider skipping source code in markdown files (```code```) to reduce 30 | # the rate of false positives, but then we would lose the ability to detect typos in code 31 | # comments/strings etc. 32 | 33 | shopt -s nullglob 34 | 35 | dict_filename=./ci/dictionary.txt 36 | markdown_sources=($(find ./src -iname '*.md')) 37 | mode="check" 38 | 39 | # aspell repeatedly modifies the personal dictionary for some reason, 40 | # so we should use a copy of our dictionary. 41 | dict_path="/tmp/dictionary.txt" 42 | 43 | if [[ "$1" == "list" ]]; then 44 | mode="list" 45 | fi 46 | 47 | # Error if running in list (CI) mode and there isn't a dictionary file; 48 | # creating one in CI won't do any good :( 49 | if [[ "$mode" == "list" && ! -f "$dict_filename" ]]; then 50 | echo "No dictionary file found! A dictionary file is required in CI!" 51 | exit 1 52 | fi 53 | 54 | if [[ ! -f "$dict_filename" ]]; then 55 | # Pre-check mode: generates a dictionary of words aspell considers typos. 56 | # After the user validates that this file contains only valid words, we can 57 | # look for typos using this dictionary and some default aspell dictionary. 58 | echo "Scanning files to generate dictionary file '$dict_filename'." 59 | echo "Please check that it doesn't contain any misspellings."
60 | 61 | echo "personal_ws-1.1 en 0 utf-8" > "$dict_filename" 62 | cat "${markdown_sources[@]}" | aspell --ignore 3 list | sort -u >> "$dict_filename" 63 | elif [[ "$mode" == "list" ]]; then 64 | # List (CI) mode: scan all files, report errors. 65 | declare -i retval=0 66 | 67 | cp "$dict_filename" "$dict_path" 68 | 69 | if [ ! -f "$dict_path" ]; then 70 | retval=1 71 | exit "$retval" 72 | fi 73 | 74 | for fname in "${markdown_sources[@]}"; do 75 | command=$(aspell --ignore 3 --personal="$dict_path" "$mode" < "$fname") 76 | if [[ -n "$command" ]]; then 77 | for error in $command; do 78 | # FIXME: find a more robust way to get the line number 79 | # (ideally from aspell). For now this can produce false positives, 80 | # because it is just a grep. 81 | grep --with-filename --line-number --color=always "$error" "$fname" 82 | done 83 | retval=1 84 | fi 85 | done 86 | exit "$retval" 87 | elif [[ "$mode" == "check" ]]; then 88 | # Interactive mode: fix typos. 89 | cp "$dict_filename" "$dict_path" 90 | 91 | if [ ! -f "$dict_path" ]; then 92 | retval=1 93 | exit "$retval" 94 | fi 95 | 96 | for fname in "${markdown_sources[@]}"; do 97 | aspell --ignore 3 --dont-backup --personal="$dict_path" "$mode" "$fname" 98 | done 99 | fi 100 | -------------------------------------------------------------------------------- /examples/01_02_why_async/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_01_02_why_async" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/01_02_why_async/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::{executor::block_on, join}; 4 | use std::thread; 5 | 6 | fn download(_url: &str) { 7 | // ...
8 | } 9 | 10 | #[test] 11 | // ANCHOR: get_two_sites 12 | fn get_two_sites() { 13 | // Spawn two threads to do work. 14 | let thread_one = thread::spawn(|| download("https://www.foo.com")); 15 | let thread_two = thread::spawn(|| download("https://www.bar.com")); 16 | 17 | // Wait for both threads to complete. 18 | thread_one.join().expect("thread one panicked"); 19 | thread_two.join().expect("thread two panicked"); 20 | } 21 | // ANCHOR_END: get_two_sites 22 | 23 | async fn download_async(_url: &str) { 24 | // ... 25 | } 26 | 27 | // ANCHOR: get_two_sites_async 28 | async fn get_two_sites_async() { 29 | // Create two different "futures" which, when run to completion, 30 | // will asynchronously download the webpages. 31 | let future_one = download_async("https://www.foo.com"); 32 | let future_two = download_async("https://www.bar.com"); 33 | 34 | // Run both futures to completion at the same time. 35 | join!(future_one, future_two); 36 | } 37 | // ANCHOR_END: get_two_sites_async 38 | 39 | #[test] 40 | fn get_two_sites_async_test() { 41 | block_on(get_two_sites_async()); 42 | } 43 | -------------------------------------------------------------------------------- /examples/01_04_async_await_primer/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_01_04_async_await_primer" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/01_04_async_await_primer/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::executor::block_on; 4 | 5 | mod first { 6 | // ANCHOR: hello_world 7 | // `block_on` blocks the current thread until the provided future has run to 8 | // completion. 
Other executors provide more complex behavior, like scheduling 9 | // multiple futures onto the same thread. 10 | use futures::executor::block_on; 11 | 12 | async fn hello_world() { 13 | println!("hello, world!"); 14 | } 15 | 16 | fn main() { 17 | let future = hello_world(); // Nothing is printed 18 | block_on(future); // `future` is run and "hello, world!" is printed 19 | } 20 | // ANCHOR_END: hello_world 21 | 22 | #[test] 23 | fn run_main() { main() } 24 | } 25 | 26 | struct Song; 27 | async fn learn_song() -> Song { Song } 28 | async fn sing_song(_: Song) {} 29 | async fn dance() {} 30 | 31 | mod second { 32 | use super::*; 33 | // ANCHOR: block_on_each 34 | fn main() { 35 | let song = block_on(learn_song()); 36 | block_on(sing_song(song)); 37 | block_on(dance()); 38 | } 39 | // ANCHOR_END: block_on_each 40 | 41 | #[test] 42 | fn run_main() { main() } 43 | } 44 | 45 | mod third { 46 | use super::*; 47 | // ANCHOR: block_on_main 48 | async fn learn_and_sing() { 49 | // Wait until the song has been learned before singing it. 50 | // We use `.await` here rather than `block_on` to prevent blocking the 51 | // thread, which makes it possible to `dance` at the same time. 52 | let song = learn_song().await; 53 | sing_song(song).await; 54 | } 55 | 56 | async fn async_main() { 57 | let f1 = learn_and_sing(); 58 | let f2 = dance(); 59 | 60 | // `join!` is like `.await` but can wait for multiple futures concurrently. 61 | // If we're temporarily blocked in the `learn_and_sing` future, the `dance` 62 | // future will take over the current thread. If `dance` becomes blocked, 63 | // `learn_and_sing` can take back over. If both futures are blocked, then 64 | // `async_main` is blocked and will yield to the executor. 
65 | futures::join!(f1, f2); 66 | } 67 | 68 | fn main() { 69 | block_on(async_main()); 70 | } 71 | // ANCHOR_END: block_on_main 72 | 73 | #[test] 74 | fn run_main() { main() } 75 | } 76 | -------------------------------------------------------------------------------- /examples/02_02_future_trait/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_02_future_trait" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | -------------------------------------------------------------------------------- /examples/02_02_future_trait/src/lib.rs: -------------------------------------------------------------------------------- 1 | // ANCHOR: simple_future 2 | trait SimpleFuture { 3 | type Output; 4 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output>; 5 | } 6 | 7 | enum Poll<T> { 8 | Ready(T), 9 | Pending, 10 | } 11 | // ANCHOR_END: simple_future 12 | 13 | struct Socket; 14 | impl Socket { 15 | fn has_data_to_read(&self) -> bool { 16 | // check if the socket is currently readable 17 | true 18 | } 19 | fn read_buf(&self) -> Vec<u8> { 20 | // Read data in from the socket 21 | vec![] 22 | } 23 | fn set_readable_callback(&self, _wake: fn()) { 24 | // register `_wake` with something that will call it 25 | // once the socket becomes readable, such as an 26 | // `epoll`-based event loop. 27 | } 28 | } 29 | 30 | // ANCHOR: socket_read 31 | pub struct SocketRead<'a> { 32 | socket: &'a Socket, 33 | } 34 | 35 | impl SimpleFuture for SocketRead<'_> { 36 | type Output = Vec<u8>; 37 | 38 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 39 | if self.socket.has_data_to_read() { 40 | // The socket has data -- read it into a buffer and return it. 41 | Poll::Ready(self.socket.read_buf()) 42 | } else { 43 | // The socket does not yet have data. 44 | // 45 | // Arrange for `wake` to be called once data is available.
46 | // When data becomes available, `wake` will be called, and the 47 | // user of this `Future` will know to call `poll` again and 48 | // receive data. 49 | self.socket.set_readable_callback(wake); 50 | Poll::Pending 51 | } 52 | } 53 | } 54 | // ANCHOR_END: socket_read 55 | 56 | // ANCHOR: join 57 | /// A SimpleFuture that runs two other futures to completion concurrently. 58 | /// 59 | /// Concurrency is achieved via the fact that calls to `poll` on each future 60 | /// may be interleaved, allowing each future to advance itself at its own pace. 61 | pub struct Join<FutureA, FutureB> { 62 | // Each field may contain a future that should be run to completion. 63 | // If the future has already completed, the field is set to `None`. 64 | // This prevents us from polling a future after it has completed, which 65 | // would violate the contract of the `Future` trait. 66 | a: Option<FutureA>, 67 | b: Option<FutureB>, 68 | } 69 | 70 | impl<FutureA, FutureB> SimpleFuture for Join<FutureA, FutureB> 71 | where 72 | FutureA: SimpleFuture<Output = ()>, 73 | FutureB: SimpleFuture<Output = ()>, 74 | { 75 | type Output = (); 76 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 77 | // Attempt to complete future `a`. 78 | if let Some(a) = &mut self.a { 79 | if let Poll::Ready(()) = a.poll(wake) { 80 | self.a.take(); 81 | } 82 | } 83 | 84 | // Attempt to complete future `b`. 85 | if let Some(b) = &mut self.b { 86 | if let Poll::Ready(()) = b.poll(wake) { 87 | self.b.take(); 88 | } 89 | } 90 | 91 | if self.a.is_none() && self.b.is_none() { 92 | // Both futures have completed -- we can return successfully 93 | Poll::Ready(()) 94 | } else { 95 | // One or both futures returned `Poll::Pending` and still have 96 | // work to do. They will call `wake()` when progress can be made. 97 | Poll::Pending 98 | } 99 | } 100 | } 101 | // ANCHOR_END: join 102 | 103 | // ANCHOR: and_then 104 | /// A SimpleFuture that runs two futures to completion, one after another.
105 | // 106 | // Note: for the purposes of this simple example, `AndThenFut` assumes both 107 | // the first and second futures are available at creation-time. The real 108 | // `AndThen` combinator allows creating the second future based on the output 109 | // of the first future, like `get_breakfast.and_then(|food| eat(food))`. 110 | pub struct AndThenFut<FutureA, FutureB> { 111 | first: Option<FutureA>, 112 | second: FutureB, 113 | } 114 | 115 | impl<FutureA, FutureB> SimpleFuture for AndThenFut<FutureA, FutureB> 116 | where 117 | FutureA: SimpleFuture<Output = ()>, 118 | FutureB: SimpleFuture<Output = ()>, 119 | { 120 | type Output = (); 121 | fn poll(&mut self, wake: fn()) -> Poll<Self::Output> { 122 | if let Some(first) = &mut self.first { 123 | match first.poll(wake) { 124 | // We've completed the first future -- remove it and start on 125 | // the second! 126 | Poll::Ready(()) => self.first.take(), 127 | // We couldn't yet complete the first future. 128 | Poll::Pending => return Poll::Pending, 129 | }; 130 | } 131 | // Now that the first future is done, attempt to complete the second.
132 | self.second.poll(wake) 133 | } 134 | } 135 | // ANCHOR_END: and_then 136 | 137 | mod real_future { 138 | use std::{ 139 | future::Future as RealFuture, 140 | pin::Pin, 141 | task::{Context, Poll}, 142 | }; 143 | 144 | // ANCHOR: real_future 145 | trait Future { 146 | type Output; 147 | fn poll( 148 | // Note the change from `&mut self` to `Pin<&mut Self>`: 149 | self: Pin<&mut Self>, 150 | // and the change from `wake: fn()` to `cx: &mut Context<'_>`: 151 | cx: &mut Context<'_>, 152 | ) -> Poll<Self::Output>; 153 | } 154 | // ANCHOR_END: real_future 155 | 156 | // ensure that `Future` matches `RealFuture`: 157 | impl<O> Future for dyn RealFuture<Output = O> { 158 | type Output = O; 159 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { 160 | RealFuture::poll(self, cx) 161 | } 162 | } 163 | } 164 | -------------------------------------------------------------------------------- /examples/02_03_timer/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_03_timer" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/02_03_timer/src/lib.rs: -------------------------------------------------------------------------------- 1 | // ANCHOR: imports 2 | use std::{ 3 | future::Future, 4 | pin::Pin, 5 | sync::{Arc, Mutex}, 6 | task::{Context, Poll, Waker}, 7 | thread, 8 | time::Duration, 9 | }; 10 | // ANCHOR_END: imports 11 | 12 | // ANCHOR: timer_decl 13 | pub struct TimerFuture { 14 | shared_state: Arc<Mutex<SharedState>>, 15 | } 16 | 17 | /// Shared state between the future and the waiting thread 18 | struct SharedState { 19 | /// Whether or not the sleep time has elapsed 20 | completed: bool, 21 | 22 | /// The waker for the task that `TimerFuture` is running on.
23 | /// The thread can use this after setting `completed = true` to tell 24 | /// `TimerFuture`'s task to wake up, see that `completed = true`, and 25 | /// move forward. 26 | waker: Option<Waker>, 27 | } 28 | // ANCHOR_END: timer_decl 29 | 30 | // ANCHOR: future_for_timer 31 | impl Future for TimerFuture { 32 | type Output = (); 33 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> { 34 | // Look at the shared state to see if the timer has already completed. 35 | let mut shared_state = self.shared_state.lock().unwrap(); 36 | if shared_state.completed { 37 | Poll::Ready(()) 38 | } else { 39 | // Set waker so that the thread can wake up the current task 40 | // when the timer has completed, ensuring that the future is polled 41 | // again and sees that `completed = true`. 42 | // 43 | // It's tempting to do this once rather than repeatedly cloning 44 | // the waker each time. However, the `TimerFuture` can move between 45 | // tasks on the executor, which could cause a stale waker pointing 46 | // to the wrong task, preventing `TimerFuture` from waking up 47 | // correctly. 48 | // 49 | // N.B. it's possible to check for this using the `Waker::will_wake` 50 | // function, but we omit that here to keep things simple. 51 | shared_state.waker = Some(cx.waker().clone()); 52 | Poll::Pending 53 | } 54 | } 55 | } 56 | // ANCHOR_END: future_for_timer 57 | 58 | // ANCHOR: timer_new 59 | impl TimerFuture { 60 | /// Create a new `TimerFuture` which will complete after the provided 61 | /// timeout.
62 | pub fn new(duration: Duration) -> Self { 63 | let shared_state = Arc::new(Mutex::new(SharedState { 64 | completed: false, 65 | waker: None, 66 | })); 67 | 68 | // Spawn the new thread 69 | let thread_shared_state = shared_state.clone(); 70 | thread::spawn(move || { 71 | thread::sleep(duration); 72 | let mut shared_state = thread_shared_state.lock().unwrap(); 73 | // Signal that the timer has completed and wake up the last 74 | // task on which the future was polled, if one exists. 75 | shared_state.completed = true; 76 | if let Some(waker) = shared_state.waker.take() { 77 | waker.wake() 78 | } 79 | }); 80 | 81 | TimerFuture { shared_state } 82 | } 83 | } 84 | // ANCHOR_END: timer_new 85 | 86 | #[test] 87 | fn block_on_timer() { 88 | futures::executor::block_on(async { 89 | TimerFuture::new(Duration::from_secs(1)).await 90 | }) 91 | } 92 | -------------------------------------------------------------------------------- /examples/02_04_executor/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_02_04_executor" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dependencies] 10 | futures = "0.3" 11 | timer_future = { package = "example_02_03_timer", path = "../02_03_timer" } 12 | -------------------------------------------------------------------------------- /examples/02_04_executor/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | // ANCHOR: imports 4 | use futures::{ 5 | future::{BoxFuture, FutureExt}, 6 | task::{waker_ref, ArcWake}, 7 | }; 8 | use std::{ 9 | future::Future, 10 | sync::mpsc::{sync_channel, Receiver, SyncSender}, 11 | sync::{Arc, Mutex}, 12 | task::Context, 13 | time::Duration, 14 | }; 15 | // The timer we wrote in the previous section: 16 | use timer_future::TimerFuture; 17 | // ANCHOR_END: imports 18 | 19 | // ANCHOR: executor_decl 20 | /// Task 
executor that receives tasks off of a channel and runs them. 21 | struct Executor { 22 | ready_queue: Receiver<Arc<Task>>, 23 | } 24 | 25 | /// `Spawner` spawns new futures onto the task channel. 26 | #[derive(Clone)] 27 | struct Spawner { 28 | task_sender: SyncSender<Arc<Task>>, 29 | } 30 | 31 | /// A future that can reschedule itself to be polled by an `Executor`. 32 | struct Task { 33 | /// In-progress future that should be pushed to completion. 34 | /// 35 | /// The `Mutex` is not necessary for correctness, since we only have 36 | /// one thread executing tasks at once. However, Rust isn't smart 37 | /// enough to know that `future` is only mutated from one thread, 38 | /// so we need to use the `Mutex` to prove thread-safety. A production 39 | /// executor would not need this, and could use `UnsafeCell` instead. 40 | future: Mutex<Option<BoxFuture<'static, ()>>>, 41 | 42 | /// Handle to place the task itself back onto the task queue. 43 | task_sender: SyncSender<Arc<Task>>, 44 | } 45 | 46 | fn new_executor_and_spawner() -> (Executor, Spawner) { 47 | // Maximum number of tasks to allow queueing in the channel at once. 48 | // This is just to make `sync_channel` happy, and wouldn't be present in 49 | // a real executor.
50 | const MAX_QUEUED_TASKS: usize = 10_000; 51 | let (task_sender, ready_queue) = sync_channel(MAX_QUEUED_TASKS); 52 | (Executor { ready_queue }, Spawner { task_sender }) 53 | } 54 | // ANCHOR_END: executor_decl 55 | 56 | // ANCHOR: spawn_fn 57 | impl Spawner { 58 | fn spawn(&self, future: impl Future<Output = ()> + 'static + Send) { 59 | let future = future.boxed(); 60 | let task = Arc::new(Task { 61 | future: Mutex::new(Some(future)), 62 | task_sender: self.task_sender.clone(), 63 | }); 64 | self.task_sender.send(task).expect("too many tasks queued"); 65 | } 66 | } 67 | // ANCHOR_END: spawn_fn 68 | 69 | // ANCHOR: arcwake_for_task 70 | impl ArcWake for Task { 71 | fn wake_by_ref(arc_self: &Arc<Self>) { 72 | // Implement `wake` by sending this task back onto the task channel 73 | // so that it will be polled again by the executor. 74 | let cloned = arc_self.clone(); 75 | arc_self 76 | .task_sender 77 | .send(cloned) 78 | .expect("too many tasks queued"); 79 | } 80 | } 81 | // ANCHOR_END: arcwake_for_task 82 | 83 | // ANCHOR: executor_run 84 | impl Executor { 85 | fn run(&self) { 86 | while let Ok(task) = self.ready_queue.recv() { 87 | // Take the future, and if it has not yet completed (is still Some), 88 | // poll it in an attempt to complete it. 89 | let mut future_slot = task.future.lock().unwrap(); 90 | if let Some(mut future) = future_slot.take() { 91 | // Create a `LocalWaker` from the task itself 92 | let waker = waker_ref(&task); 93 | let context = &mut Context::from_waker(&waker); 94 | // `BoxFuture<'static, ()>` is a type alias for 95 | // `Pin<Box<dyn Future<Output = ()> + Send + 'static>>`. 96 | // We can get a `Pin<&mut dyn Future + Send + 'static>` 97 | // from it by calling the `Pin::as_mut` method. 98 | if future.as_mut().poll(context).is_pending() { 99 | // We're not done processing the future, so put it 100 | // back in its task to be run again in the future.
101 | *future_slot = Some(future); 102 | } 103 | } 104 | } 105 | } 106 | } 107 | // ANCHOR_END: executor_run 108 | 109 | // ANCHOR: main 110 | fn main() { 111 | let (executor, spawner) = new_executor_and_spawner(); 112 | 113 | // Spawn a task to print before and after waiting on a timer. 114 | spawner.spawn(async { 115 | println!("howdy!"); 116 | // Wait for our timer future to complete after two seconds. 117 | TimerFuture::new(Duration::new(2, 0)).await; 118 | println!("done!"); 119 | }); 120 | 121 | // Drop the spawner so that our executor knows it is finished and won't 122 | // receive more incoming tasks to run. 123 | drop(spawner); 124 | 125 | // Run the executor until the task queue is empty. 126 | // This will print "howdy!", pause, and then print "done!". 127 | executor.run(); 128 | } 129 | // ANCHOR_END: main 130 | 131 | #[test] 132 | fn run_main() { 133 | main() 134 | } 135 | -------------------------------------------------------------------------------- /examples/03_01_async_await/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_03_01_async_await" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/03_01_async_await/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![allow(unused)] 2 | #![cfg(test)] 3 | 4 | mod async_fn_and_block_examples { 5 | use std::future::Future; 6 | // ANCHOR: async_fn_and_block_examples 7 | 8 | // `foo()` returns a type that implements `Future<Output = u8>`. 9 | // `foo().await` will result in a value of type `u8`. 10 | async fn foo() -> u8 { 5 } 11 | 12 | fn bar() -> impl Future<Output = u8> { 13 | // This `async` block results in a type that implements 14 | // `Future<Output = u8>`.
15 | async { 16 | let x: u8 = foo().await; 17 | x + 5 18 | } 19 | } 20 | // ANCHOR_END: async_fn_and_block_examples 21 | } 22 | 23 | mod async_lifetimes_examples { 24 | use std::future::Future; 25 | // ANCHOR: lifetimes_expanded 26 | // This function: 27 | async fn foo(x: &u8) -> u8 { *x } 28 | 29 | // Is equivalent to this function: 30 | fn foo_expanded<'a>(x: &'a u8) -> impl Future<Output = u8> + 'a { 31 | async move { *x } 32 | } 33 | // ANCHOR_END: lifetimes_expanded 34 | 35 | async fn borrow_x(x: &u8) -> u8 { *x } 36 | 37 | #[cfg(feature = "never_compiled")] 38 | // ANCHOR: static_future_with_borrow 39 | fn bad() -> impl Future<Output = u8> { 40 | let x = 5; 41 | borrow_x(&x) // ERROR: `x` does not live long enough 42 | } 43 | 44 | fn good() -> impl Future<Output = u8> { 45 | async { 46 | let x = 5; 47 | borrow_x(&x).await 48 | } 49 | } 50 | // ANCHOR_END: static_future_with_borrow 51 | } 52 | 53 | mod async_move_examples { 54 | use std::future::Future; 55 | // ANCHOR: async_move_examples 56 | /// `async` block: 57 | /// 58 | /// Multiple different `async` blocks can access the same local variable 59 | /// so long as they're executed within the variable's scope 60 | async fn blocks() { 61 | let my_string = "foo".to_string(); 62 | 63 | let future_one = async { 64 | // ... 65 | println!("{my_string}"); 66 | }; 67 | 68 | let future_two = async { 69 | // ... 70 | println!("{my_string}"); 71 | }; 72 | 73 | // Run both futures to completion, printing "foo" twice: 74 | let ((), ()) = futures::join!(future_one, future_two); 75 | } 76 | 77 | /// `async move` block: 78 | /// 79 | /// Only one `async move` block can access the same captured variable, since 80 | /// captures are moved into the `Future` generated by the `async move` block. 81 | /// However, this allows the `Future` to outlive the original scope of the 82 | /// variable: 83 | fn move_block() -> impl Future<Output = ()> { 84 | let my_string = "foo".to_string(); 85 | async move { 86 | // ...
87 | println!("{my_string}"); 88 | } 89 | } 90 | // ANCHOR_END: async_move_examples 91 | } 92 | -------------------------------------------------------------------------------- /examples/05_01_streams/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_05_01_streams" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/05_01_streams/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | mod stream_trait { 4 | use futures::stream::Stream as RealStream; 5 | use std::{ 6 | pin::Pin, 7 | task::{Context, Poll}, 8 | }; 9 | 10 | // ANCHOR: stream_trait 11 | trait Stream { 12 | /// The type of the value yielded by the stream. 13 | type Item; 14 | 15 | /// Attempt to resolve the next item in the stream. 16 | /// Returns `Poll::Pending` if not ready, `Poll::Ready(Some(x))` if a value 17 | /// is ready, and `Poll::Ready(None)` if the stream has completed. 18 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) 19 | -> Poll<Option<Self::Item>>; 20 | } 21 | // ANCHOR_END: stream_trait 22 | 23 | // assert that `Stream` matches `RealStream`: 24 | impl<I> Stream for dyn RealStream<Item = I> { 25 | type Item = I; 26 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) 27 | -> Poll<Option<Self::Item>> 28 | { 29 | RealStream::poll_next(self, cx) 30 | } 31 | } 32 | } 33 | 34 | mod channels { 35 | use futures::{ 36 | channel::mpsc, 37 | prelude::*, 38 | }; 39 | 40 | // ANCHOR: channels 41 | async fn send_recv() { 42 | const BUFFER_SIZE: usize = 10; 43 | let (mut tx, mut rx) = mpsc::channel::<i32>(BUFFER_SIZE); 44 | 45 | tx.send(1).await.unwrap(); 46 | tx.send(2).await.unwrap(); 47 | drop(tx); 48 | 49 | // `StreamExt::next` is similar to `Iterator::next`, but returns a 50 | // type that implements `Future<Output = Option<T>>`.
51 | assert_eq!(Some(1), rx.next().await); 52 | assert_eq!(Some(2), rx.next().await); 53 | assert_eq!(None, rx.next().await); 54 | } 55 | // ANCHOR_END: channels 56 | 57 | #[test] 58 | fn run_send_recv() { futures::executor::block_on(send_recv()) } 59 | } 60 | -------------------------------------------------------------------------------- /examples/05_02_iteration_and_concurrency/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_05_02_iteration_and_concurrency" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/05_02_iteration_and_concurrency/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | use futures::{ 4 | executor::block_on, 5 | stream::{self, Stream}, 6 | }; 7 | use std::{ 8 | io, 9 | pin::Pin, 10 | }; 11 | 12 | // ANCHOR: nexts 13 | async fn sum_with_next(mut stream: Pin<&mut dyn Stream<Item = i32>>) -> i32 { 14 | use futures::stream::StreamExt; // for `next` 15 | let mut sum = 0; 16 | while let Some(item) = stream.next().await { 17 | sum += item; 18 | } 19 | sum 20 | } 21 | 22 | async fn sum_with_try_next( 23 | mut stream: Pin<&mut dyn Stream<Item = Result<i32, io::Error>>>, 24 | ) -> Result<i32, io::Error> { 25 | use futures::stream::TryStreamExt; // for `try_next` 26 | let mut sum = 0; 27 | while let Some(item) = stream.try_next().await?
{ 28 | sum += item; 29 | } 30 | Ok(sum) 31 | } 32 | // ANCHOR_END: nexts 33 | 34 | #[test] 35 | fn run_sum_with_next() { 36 | let mut stream = stream::iter(vec![2, 3]); 37 | let pin: Pin<&mut stream::Iter<_>> = Pin::new(&mut stream); 38 | assert_eq!(5, block_on(sum_with_next(pin))); 39 | } 40 | 41 | #[test] 42 | fn run_sum_with_try_next() { 43 | let mut stream = stream::iter(vec![Ok(2), Ok(3)]); 44 | let pin: Pin<&mut stream::Iter<_>> = Pin::new(&mut stream); 45 | assert_eq!(5, block_on(sum_with_try_next(pin)).unwrap()); 46 | } 47 | 48 | #[allow(unused)] 49 | // ANCHOR: try_for_each_concurrent 50 | async fn jump_around( 51 | mut stream: Pin<&mut dyn Stream<Item = Result<u8, io::Error>>>, 52 | ) -> Result<(), io::Error> { 53 | use futures::stream::TryStreamExt; // for `try_for_each_concurrent` 54 | const MAX_CONCURRENT_JUMPERS: usize = 100; 55 | 56 | stream.try_for_each_concurrent(MAX_CONCURRENT_JUMPERS, |num| async move { 57 | jump_n_times(num).await?; 58 | report_n_jumps(num).await?; 59 | Ok(()) 60 | }).await?; 61 | 62 | Ok(()) 63 | } 64 | // ANCHOR_END: try_for_each_concurrent 65 | 66 | async fn jump_n_times(_: u8) -> Result<(), io::Error> { Ok(()) } 67 | async fn report_n_jumps(_: u8) -> Result<(), io::Error> { Ok(()) } 68 | -------------------------------------------------------------------------------- /examples/06_02_join/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_02_join" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/06_02_join/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | 3 | struct Book; 4 | struct Music; 5 | async fn get_book() -> Book { Book } 6 | async fn get_music() -> Music { Music } 7 | 8 | mod naiive { 9 | use super::*; 10 | // ANCHOR:
naiive 11 | async fn get_book_and_music() -> (Book, Music) { 12 | let book = get_book().await; 13 | let music = get_music().await; 14 | (book, music) 15 | } 16 | // ANCHOR_END: naiive 17 | } 18 | 19 | mod other_langs { 20 | use super::*; 21 | // ANCHOR: other_langs 22 | // WRONG -- don't do this 23 | async fn get_book_and_music() -> (Book, Music) { 24 | let book_future = get_book(); 25 | let music_future = get_music(); 26 | (book_future.await, music_future.await) 27 | } 28 | // ANCHOR_END: other_langs 29 | } 30 | 31 | mod join { 32 | use super::*; 33 | // ANCHOR: join 34 | use futures::join; 35 | 36 | async fn get_book_and_music() -> (Book, Music) { 37 | let book_fut = get_book(); 38 | let music_fut = get_music(); 39 | join!(book_fut, music_fut) 40 | } 41 | // ANCHOR_END: join 42 | } 43 | 44 | mod try_join { 45 | use super::{Book, Music}; 46 | // ANCHOR: try_join 47 | use futures::try_join; 48 | 49 | async fn get_book() -> Result<Book, String> { /* ... */ Ok(Book) } 50 | async fn get_music() -> Result<Music, String> { /* ... */ Ok(Music) } 51 | 52 | async fn get_book_and_music() -> Result<(Book, Music), String> { 53 | let book_fut = get_book(); 54 | let music_fut = get_music(); 55 | try_join!(book_fut, music_fut) 56 | } 57 | // ANCHOR_END: try_join 58 | } 59 | 60 | mod mismatched_err { 61 | use super::{Book, Music}; 62 | // ANCHOR: try_join_map_err 63 | use futures::{ 64 | future::TryFutureExt, 65 | try_join, 66 | }; 67 | 68 | async fn get_book() -> Result<Book, ()> { /* ... */ Ok(Book) } 69 | async fn get_music() -> Result<Music, String> { /* ...
*/ Ok(Music) } 70 | 71 | async fn get_book_and_music() -> Result<(Book, Music), String> { 72 | let book_fut = get_book().map_err(|()| "Unable to get book".to_string()); 73 | let music_fut = get_music(); 74 | try_join!(book_fut, music_fut) 75 | } 76 | // ANCHOR_END: try_join_map_err 77 | } 78 | -------------------------------------------------------------------------------- /examples/06_03_select/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_03_select" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/06_03_select/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![recursion_limit="128"] 3 | 4 | mod example { 5 | // ANCHOR: example 6 | use futures::{ 7 | future::FutureExt, // for `.fuse()` 8 | pin_mut, 9 | select, 10 | }; 11 | 12 | async fn task_one() { /* ... */ } 13 | async fn task_two() { /* ... */ } 14 | 15 | async fn race_tasks() { 16 | let t1 = task_one().fuse(); 17 | let t2 = task_two().fuse(); 18 | 19 | pin_mut!(t1, t2); 20 | 21 | select! { 22 | () = t1 => println!("task one completed first"), 23 | () = t2 => println!("task two completed first"), 24 | } 25 | } 26 | // ANCHOR_END: example 27 | } 28 | 29 | mod default_and_complete { 30 | // ANCHOR: default_and_complete 31 | use futures::{future, select}; 32 | 33 | async fn count() { 34 | let mut a_fut = future::ready(4); 35 | let mut b_fut = future::ready(6); 36 | let mut total = 0; 37 | 38 | loop { 39 | select! 
{ 40 | a = a_fut => total += a, 41 | b = b_fut => total += b, 42 | complete => break, 43 | default => unreachable!(), // never runs (futures are ready, then complete) 44 | }; 45 | } 46 | assert_eq!(total, 10); 47 | } 48 | // ANCHOR_END: default_and_complete 49 | 50 | #[test] 51 | fn run_count() { 52 | futures::executor::block_on(count()); 53 | } 54 | } 55 | 56 | mod fused_stream { 57 | // ANCHOR: fused_stream 58 | use futures::{ 59 | stream::{Stream, StreamExt, FusedStream}, 60 | select, 61 | }; 62 | 63 | async fn add_two_streams( 64 | mut s1: impl Stream<Item = u8> + FusedStream + Unpin, 65 | mut s2: impl Stream<Item = u8> + FusedStream + Unpin, 66 | ) -> u8 { 67 | let mut total = 0; 68 | 69 | loop { 70 | let item = select! { 71 | x = s1.next() => x, 72 | x = s2.next() => x, 73 | complete => break, 74 | }; 75 | if let Some(next_num) = item { 76 | total += next_num; 77 | } 78 | } 79 | 80 | total 81 | } 82 | // ANCHOR_END: fused_stream 83 | } 84 | 85 | mod fuse_terminated { 86 | // ANCHOR: fuse_terminated 87 | use futures::{ 88 | future::{Fuse, FusedFuture, FutureExt}, 89 | stream::{FusedStream, Stream, StreamExt}, 90 | pin_mut, 91 | select, 92 | }; 93 | 94 | async fn get_new_num() -> u8 { /* ... */ 5 } 95 | 96 | async fn run_on_new_num(_: u8) { /* ... */ } 97 | 98 | async fn run_loop( 99 | mut interval_timer: impl Stream<Item = ()> + FusedStream + Unpin, 100 | starting_num: u8, 101 | ) { 102 | let run_on_new_num_fut = run_on_new_num(starting_num).fuse(); 103 | let get_new_num_fut = Fuse::terminated(); 104 | pin_mut!(run_on_new_num_fut, get_new_num_fut); 105 | loop { 106 | select! { 107 | () = interval_timer.select_next_some() => { 108 | // The timer has elapsed. Start a new `get_new_num_fut` 109 | // if one was not already running. 110 | if get_new_num_fut.is_terminated() { 111 | get_new_num_fut.set(get_new_num().fuse()); 112 | } 113 | }, 114 | new_num = get_new_num_fut => { 115 | // A new number has arrived -- start a new `run_on_new_num_fut`, 116 | // dropping the old one.
117 | run_on_new_num_fut.set(run_on_new_num(new_num).fuse()); 118 | }, 119 | // Run the `run_on_new_num_fut` 120 | () = run_on_new_num_fut => {}, 121 | // panic if everything completed, since the `interval_timer` should 122 | // keep yielding values indefinitely. 123 | complete => panic!("`interval_timer` completed unexpectedly"), 124 | } 125 | } 126 | } 127 | // ANCHOR_END: fuse_terminated 128 | } 129 | 130 | mod futures_unordered { 131 | // ANCHOR: futures_unordered 132 | use futures::{ 133 | future::{Fuse, FusedFuture, FutureExt}, 134 | stream::{FusedStream, FuturesUnordered, Stream, StreamExt}, 135 | pin_mut, 136 | select, 137 | }; 138 | 139 | async fn get_new_num() -> u8 { /* ... */ 5 } 140 | 141 | async fn run_on_new_num(_: u8) -> u8 { /* ... */ 5 } 142 | 143 | async fn run_loop( 144 | mut interval_timer: impl Stream<Item = ()> + FusedStream + Unpin, 145 | starting_num: u8, 146 | ) { 147 | let mut run_on_new_num_futs = FuturesUnordered::new(); 148 | run_on_new_num_futs.push(run_on_new_num(starting_num)); 149 | let get_new_num_fut = Fuse::terminated(); 150 | pin_mut!(get_new_num_fut); 151 | loop { 152 | select! { 153 | () = interval_timer.select_next_some() => { 154 | // The timer has elapsed. Start a new `get_new_num_fut` 155 | // if one was not already running. 156 | if get_new_num_fut.is_terminated() { 157 | get_new_num_fut.set(get_new_num().fuse()); 158 | } 159 | }, 160 | new_num = get_new_num_fut => { 161 | // A new number has arrived -- start a new `run_on_new_num_fut`. 162 | run_on_new_num_futs.push(run_on_new_num(new_num)); 163 | }, 164 | // Run the `run_on_new_num_futs` and check if any have completed 165 | res = run_on_new_num_futs.select_next_some() => { 166 | println!("run_on_new_num_fut returned {:?}", res); 167 | }, 168 | // panic if everything completed, since the `interval_timer` should 169 | // keep yielding values indefinitely.
170 | complete => panic!("`interval_timer` completed unexpectedly"), 171 | } 172 | } 173 | } 174 | 175 | // ANCHOR_END: futures_unordered 176 | } 177 | -------------------------------------------------------------------------------- /examples/06_04_spawning/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_06_04_spawning" 3 | version = "0.1.0" 4 | edition = "2021" 5 | 6 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html 7 | 8 | [dependencies] 9 | futures = "0.3" 10 | 11 | [dependencies.async-std] 12 | version = "1.12.0" 13 | features = ["attributes"] -------------------------------------------------------------------------------- /examples/06_04_spawning/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![allow(dead_code)] 3 | 4 | // ANCHOR: example 5 | use async_std::{task, net::TcpListener, net::TcpStream}; 6 | use futures::AsyncWriteExt; 7 | 8 | async fn process_request(stream: &mut TcpStream) -> Result<(), std::io::Error>{ 9 | stream.write_all(b"HTTP/1.1 200 OK\r\n\r\n").await?; 10 | stream.write_all(b"Hello World").await?; 11 | Ok(()) 12 | } 13 | 14 | async fn main() { 15 | let listener = TcpListener::bind("127.0.0.1:8080").await.unwrap(); 16 | loop { 17 | // Accept a new connection 18 | let (mut stream, _) = listener.accept().await.unwrap(); 19 | // Now process this request without blocking the main loop 20 | task::spawn(async move {process_request(&mut stream).await}); 21 | } 22 | } 23 | // ANCHOR_END: example 24 | use std::time::Duration; 25 | async fn my_task(time: Duration) { 26 | println!("Hello from my_task with time {:?}", time); 27 | task::sleep(time).await; 28 | println!("Goodbye from my_task with time {:?}", time); 29 | } 30 | // ANCHOR: join_all 31 | use futures::future::join_all; 32 | async fn task_spawner(){ 33 | let tasks = vec![ 34 | 
task::spawn(my_task(Duration::from_secs(1))), 35 | task::spawn(my_task(Duration::from_secs(2))), 36 | task::spawn(my_task(Duration::from_secs(3))), 37 | ]; 38 | // If we do not await these tasks and the function finishes, they will be dropped 39 | join_all(tasks).await; 40 | } 41 | // ANCHOR_END: join_all 42 | 43 | #[test] 44 | fn run_task_spawner() { 45 | futures::executor::block_on(task_spawner()); 46 | } -------------------------------------------------------------------------------- /examples/07_05_recursion/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "example_07_05_recursion" 3 | version = "0.1.0" 4 | authors = ["Taylor Cramer "] 5 | edition = "2021" 6 | 7 | [lib] 8 | 9 | [dev-dependencies] 10 | futures = "0.3" 11 | -------------------------------------------------------------------------------- /examples/07_05_recursion/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![cfg(test)] 2 | #![allow(dead_code)] 3 | 4 | // ANCHOR: example 5 | use futures::future::{BoxFuture, FutureExt}; 6 | 7 | fn recursive() -> BoxFuture<'static, ()> { 8 | async move { 9 | recursive().await; 10 | recursive().await; 11 | }.boxed() 12 | } 13 | // ANCHOR_END: example 14 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Oops!</h1>
9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "sync_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1>
9 | <p>Hi from Rust</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_01_sync_tcp_server/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | use std::io::prelude::*; 3 | use std::net::TcpListener; 4 | use std::net::TcpStream; 5 | 6 | fn main() { 7 | // Listen for incoming TCP connections on localhost port 7878 8 | let listener = TcpListener::bind("127.0.0.1:7878").unwrap(); 9 | 10 | // Block forever, handling each request that arrives at this IP address 11 | for stream in listener.incoming() { 12 | let stream = stream.unwrap(); 13 | 14 | handle_connection(stream); 15 | } 16 | } 17 | 18 | fn handle_connection(mut stream: TcpStream) { 19 | // Read the first 1024 bytes of data from the stream 20 | let mut buffer = [0; 1024]; 21 | stream.read(&mut buffer).unwrap(); 22 | 23 | let get = b"GET / HTTP/1.1\r\n"; 24 | 25 | // Respond with greetings or a 404, 26 | // depending on the data in the request 27 | let (status_line, filename) = if buffer.starts_with(get) { 28 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 29 | } else { 30 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 31 | }; 32 | let contents = fs::read_to_string(filename).unwrap(); 33 | 34 | // Write response back to the stream, 35 | // and flush the stream to ensure the response is sent back to the client 36 | let response = format!("{status_line}{contents}"); 37 | stream.write_all(response.as_bytes()).unwrap(); 38 | stream.flush().unwrap(); 39 | } 40 | -------------------------------------------------------------------------------- /examples/09_02_async_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "async_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 19 | } 20 | // ANCHOR_END: handle_connection_async 21 | -------------------------------------------------------------------------------- 
/examples/09_03_slow_request/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Oops!</h1>
9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_03_slow_request/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "slow_request" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1>
9 | <p>Hi from Rust</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_03_slow_request/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | use std::io::{Read, Write}; 3 | use std::net::TcpListener; 4 | use std::net::TcpStream; 5 | 6 | #[async_std::main] 7 | async fn main() { 8 | let listener = TcpListener::bind("127.0.0.1:7878").unwrap(); 9 | for stream in listener.incoming() { 10 | let stream = stream.unwrap(); 11 | handle_connection(stream).await; 12 | } 13 | } 14 | 15 | // ANCHOR: handle_connection 16 | use std::time::Duration; 17 | use async_std::task; 18 | 19 | async fn handle_connection(mut stream: TcpStream) { 20 | let mut buffer = [0; 1024]; 21 | stream.read(&mut buffer).unwrap(); 22 | 23 | let get = b"GET / HTTP/1.1\r\n"; 24 | let sleep = b"GET /sleep HTTP/1.1\r\n"; 25 | 26 | let (status_line, filename) = if buffer.starts_with(get) { 27 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 28 | } else if buffer.starts_with(sleep) { 29 | task::sleep(Duration::from_secs(5)).await; 30 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 31 | } else { 32 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 33 | }; 34 | let contents = fs::read_to_string(filename).unwrap(); 35 | 36 | let response = format!("{status_line}{contents}"); 37 | stream.write(response.as_bytes()).unwrap(); 38 | stream.flush().unwrap(); 39 | } 40 | // ANCHOR_END: handle_connection 41 | -------------------------------------------------------------------------------- /examples/09_04_concurrent_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "concurrent_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 28 | stream.write(response.as_bytes()).await.unwrap(); 29 | stream.flush().await.unwrap(); 30 | } 31 | // ANCHOR_END: handle_connection 32 | -------------------------------------------------------------------------------- 
/examples/09_05_final_tcp_server/404.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Oops!</h1>
9 | <p>Sorry, I don't know what you're asking for.</p>

10 | 11 | 12 | -------------------------------------------------------------------------------- /examples/09_05_final_tcp_server/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "final_tcp_server" 3 | version = "0.1.0" 4 | authors = ["Your Name 2 | 3 | 4 | 5 | Hello! 6 | 7 | 8 |

<h1>Hello!</h1>
9 | <p>Hi from Rust</p>

10 | </body> 11 | </html> 12 | -------------------------------------------------------------------------------- /examples/09_05_final_tcp_server/src/main.rs: -------------------------------------------------------------------------------- 1 | use std::fs; 2 | 3 | use futures::stream::StreamExt; 4 | 5 | use async_std::net::TcpListener; 6 | use async_std::prelude::*; 7 | // ANCHOR: main_func 8 | use async_std::task::spawn; 9 | 10 | #[async_std::main] 11 | async fn main() { 12 | let listener = TcpListener::bind("127.0.0.1:7878").await.unwrap(); 13 | listener 14 | .incoming() 15 | .for_each_concurrent(/* limit */ None, |stream| async move { 16 | let stream = stream.unwrap(); 17 | spawn(handle_connection(stream)); 18 | }) 19 | .await; 20 | } 21 | // ANCHOR_END: main_func 22 | 23 | use async_std::io::{Read, Write}; 24 | 25 | async fn handle_connection(mut stream: impl Read + Write + Unpin) { 26 | let mut buffer = [0; 1024]; 27 | stream.read(&mut buffer).await.unwrap(); 28 | let get = b"GET / HTTP/1.1\r\n"; 29 | let (status_line, filename) = if buffer.starts_with(get) { 30 | ("HTTP/1.1 200 OK\r\n\r\n", "hello.html") 31 | } else { 32 | ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html") 33 | }; 34 | let contents = fs::read_to_string(filename).unwrap(); 35 | let response = format!("{status_line}{contents}"); 36 | stream.write(response.as_bytes()).await.unwrap(); 37 | stream.flush().await.unwrap(); 38 | } 39 | 40 | #[cfg(test)] 41 | 42 | mod tests { 43 | // ANCHOR: mock_read 44 | use super::*; 45 | use futures::io::Error; 46 | use futures::task::{Context, Poll}; 47 | 48 | use std::cmp::min; 49 | use std::pin::Pin; 50 | 51 | struct MockTcpStream { 52 | read_data: Vec<u8>, 53 | write_data: Vec<u8>, 54 | } 55 | 56 | impl Read for MockTcpStream { 57 | fn poll_read( 58 | self: Pin<&mut Self>, 59 | _: &mut Context, 60 | buf: &mut [u8], 61 | ) -> Poll<Result<usize, Error>> { 62 | let size: usize = min(self.read_data.len(), buf.len()); 63 | buf[..size].copy_from_slice(&self.read_data[..size]); 64 | Poll::Ready(Ok(size)) 65 | } 66 |
} 67 | // ANCHOR_END: mock_read 68 | 69 | // ANCHOR: mock_write 70 | impl Write for MockTcpStream { 71 | fn poll_write( 72 | mut self: Pin<&mut Self>, 73 | _: &mut Context, 74 | buf: &[u8], 75 | ) -> Poll<Result<usize, Error>> { 76 | self.write_data = Vec::from(buf); 77 | 78 | Poll::Ready(Ok(buf.len())) 79 | } 80 | 81 | fn poll_flush(self: Pin<&mut Self>, _: &mut Context) -> Poll<Result<(), Error>> { 82 | Poll::Ready(Ok(())) 83 | } 84 | 85 | fn poll_close(self: Pin<&mut Self>, _: &mut Context) -> Poll<Result<(), Error>> { 86 | Poll::Ready(Ok(())) 87 | } 88 | } 89 | // ANCHOR_END: mock_write 90 | 91 | // ANCHOR: unpin 92 | impl Unpin for MockTcpStream {} 93 | // ANCHOR_END: unpin 94 | 95 | // ANCHOR: test 96 | use std::fs; 97 | 98 | #[async_std::test] 99 | async fn test_handle_connection() { 100 | let input_bytes = b"GET / HTTP/1.1\r\n"; 101 | let mut contents = vec![0u8; 1024]; 102 | contents[..input_bytes.len()].clone_from_slice(input_bytes); 103 | let mut stream = MockTcpStream { 104 | read_data: contents, 105 | write_data: Vec::new(), 106 | }; 107 | 108 | handle_connection(&mut stream).await; 109 | 110 | let expected_contents = fs::read_to_string("hello.html").unwrap(); 111 | let expected_response = format!("HTTP/1.1 200 OK\r\n\r\n{}", expected_contents); 112 | assert!(stream.write_data.starts_with(expected_response.as_bytes())); 113 | } 114 | // ANCHOR_END: test 115 | } 116 | -------------------------------------------------------------------------------- /examples/Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | members = [ 3 | "01_02_why_async", 4 | "01_04_async_await_primer", 5 | "02_02_future_trait", 6 | "02_03_timer", 7 | "02_04_executor", 8 | "03_01_async_await", 9 | "05_01_streams", 10 | "05_02_iteration_and_concurrency", 11 | "06_02_join", 12 | "06_03_select", 13 | "06_04_spawning", 14 | "07_05_recursion", 15 | "09_01_sync_tcp_server", 16 | "09_02_async_tcp_server", 17 | "09_03_slow_request", 18 | "09_04_concurrent_tcp_server", 19 |
"09_05_final_tcp_server", 20 | ] 21 | -------------------------------------------------------------------------------- /src/01_getting_started/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Getting Started 2 | 3 | Welcome to Asynchronous Programming in Rust! If you're looking to start writing 4 | asynchronous Rust code, you've come to the right place. Whether you're building 5 | a web server, a database, or an operating system, this book will show you 6 | how to use Rust's asynchronous programming tools to get the most out of your 7 | hardware. 8 | 9 | ## What This Book Covers 10 | 11 | This book aims to be a comprehensive, up-to-date guide to using Rust's async 12 | language features and libraries, appropriate for beginners and old hands alike. 13 | 14 | - The early chapters provide an introduction to async programming in general, 15 | and to Rust's particular take on it. 16 | 17 | - The middle chapters discuss key utilities and control-flow tools you can use 18 | when writing async code, and describe best practices for structuring libraries 19 | and applications to maximize performance and reusability. 20 | 21 | - The last section of the book covers the broader async ecosystem, and provides 22 | a number of examples of how to accomplish common tasks. 23 | 24 | With that out of the way, let's explore the exciting world of Asynchronous 25 | Programming in Rust!
26 | -------------------------------------------------------------------------------- /src/01_getting_started/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # 让我们开始吧 2 | 3 | 欢迎来到`Rust`异步编程!如果你打算开始学习编写 Rust 异步编程,那么就来对地方了。 4 | 无论是构建 Web 服务器、数据库,亦或是一个操作系统, 5 | 这本书将向你展示如何使用 Rust 异步编程工具来充分发挥你的硬件性能。 6 | 7 | ## 这本书包含了哪些内容 8 | 9 | 本书旨在成为一本全面的,最新的 Rust 异步功能及库的指南,且适用于初学者或是老手。 10 | 11 | - 前几章对异步编程进行了概括性介绍,及 Rust 在此上的一些特性。 12 | 13 | - 中间的章节则介绍了在异步编程中,使用到的一些关键实用程序及控制流工具, 14 | 并讲解了一个构建库和应用程序,并达到最佳性能与复用性的实例。 15 | 16 | - 本书的最后一部分,则涵盖了更广泛的异步编程生态,提供了许多完成常见任务的例子。 17 | 18 | 有了这些介绍,让我们探索 Rust 异步编程世界,开始这一激动人心的旅程吧! 19 | -------------------------------------------------------------------------------- /src/01_getting_started/02_why_async.md: -------------------------------------------------------------------------------- 1 | # Why Async? 2 | 3 | We all love how Rust empowers us to write fast, safe software. 4 | But how does asynchronous programming fit into this vision? 5 | 6 | Asynchronous programming, or async for short, is a _concurrent programming model_ 7 | supported by an increasing number of programming languages. 8 | It lets you run a large number of concurrent 9 | tasks on a small number of OS threads, while preserving much of the 10 | look and feel of ordinary synchronous programming, through the 11 | `async/await` syntax. 12 | 13 | ## Async vs other concurrency models 14 | 15 | Concurrent programming is less mature and "standardized" than 16 | regular, sequential programming. As a result, we express concurrency 17 | differently depending on which concurrent programming model 18 | the language supports. 19 | A brief overview of the most popular concurrency models can help 20 | you understand how asynchronous programming fits within the broader 21 | field of concurrent programming: 22 | 23 | - **OS threads** don't require any changes to the programming model, 24 | which makes it very easy to express concurrency.
However, synchronizing 25 | between threads can be difficult, and the performance overhead is large. 26 | Thread pools can mitigate some of these costs, but not enough to support 27 | massive IO-bound workloads. 28 | - **Event-driven programming**, in conjunction with _callbacks_, can be very 29 | performant, but tends to result in a verbose, "non-linear" control flow. 30 | Data flow and error propagation are often hard to follow. 31 | - **Coroutines**, like threads, don't require changes to the programming model, 32 | which makes them easy to use. Like async, they can also support a large 33 | number of tasks. However, they abstract away low-level details that 34 | are important for systems programming and custom runtime implementors. 35 | - **The actor model** divides all concurrent computation into units called 36 | actors, which communicate through fallible message passing, much like 37 | in distributed systems. The actor model can be efficiently implemented, but it leaves 38 | many practical issues unanswered, such as flow control and retry logic. 39 | 40 | In summary, asynchronous programming allows highly performant implementations 41 | that are suitable for low-level languages like Rust, while providing 42 | most of the ergonomic benefits of threads and coroutines. 43 | 44 | ## Async in Rust vs other languages 45 | 46 | Although asynchronous programming is supported in many languages, some 47 | details vary across implementations. Rust's implementation of async 48 | differs from most languages in a few ways: 49 | 50 | - **Futures are inert** in Rust and make progress only when polled. Dropping a 51 | future stops it from making further progress. 52 | - **Async is zero-cost** in Rust, which means that you only pay for what you use. 53 | Specifically, you can use async without heap allocations and dynamic dispatch, 54 | which is great for performance! 55 | This also lets you use async in constrained environments, such as embedded systems.
56 | - **No built-in runtime** is provided by Rust. Instead, runtimes are provided by 57 | community maintained crates. 58 | - **Both single- and multithreaded** runtimes are available in Rust, which have 59 | different strengths and weaknesses. 60 | 61 | ## Async vs threads in Rust 62 | 63 | The primary alternative to async in Rust is using OS threads, either 64 | directly through [`std::thread`](https://doc.rust-lang.org/std/thread/) 65 | or indirectly through a thread pool. 66 | Migrating from threads to async or vice versa 67 | typically requires major refactoring work, both in terms of implementation and 68 | (if you are building a library) any exposed public interfaces. As such, 69 | picking the model that suits your needs early can save a lot of development time. 70 | 71 | **OS threads** are suitable for a small number of tasks, since threads come with 72 | CPU and memory overhead. Spawning and switching between threads 73 | is quite expensive as even idle threads consume system resources. 74 | A thread pool library can help mitigate some of these costs, but not all. 75 | However, threads let you reuse existing synchronous code without significant 76 | code changes—no particular programming model is required. 77 | In some operating systems, you can also change the priority of a thread, 78 | which is useful for drivers and other latency sensitive applications. 79 | 80 | **Async** provides significantly reduced CPU and memory 81 | overhead, especially for workloads with a 82 | large amount of IO-bound tasks, such as servers and databases. 83 | All else equal, you can have orders of magnitude more tasks than OS threads, 84 | because an async runtime uses a small amount of (expensive) threads to handle 85 | a large amount of (cheap) tasks. 86 | However, async Rust results in larger binary blobs due to the state 87 | machines generated from async functions and since each executable 88 | bundles an async runtime. 
89 | 90 | On a last note, asynchronous programming is not _better_ than threads, 91 | but different. 92 | If you don't need async for performance reasons, threads can often be 93 | the simpler alternative. 94 | 95 | ### Example: Concurrent downloading 96 | 97 | In this example our goal is to download two web pages concurrently. 98 | In a typical threaded application we need to spawn threads 99 | to achieve concurrency: 100 | 101 | ```rust,ignore 102 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites}} 103 | ``` 104 | 105 | However, downloading a web page is a small task; creating a thread 106 | for such a small amount of work is quite wasteful. For a larger application, it 107 | can easily become a bottleneck. In async Rust, we can run these tasks 108 | concurrently without extra threads: 109 | 110 | ```rust,ignore 111 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites_async}} 112 | ``` 113 | 114 | Here, no extra threads are created. Additionally, all function calls are statically 115 | dispatched, and there are no heap allocations! 116 | However, we need to write the code to be asynchronous in the first place, 117 | which this book will help you achieve. 118 | 119 | ## Custom concurrency models in Rust 120 | 121 | On a last note, Rust doesn't force you to choose between threads and async. 122 | You can use both models within the same application, which can be 123 | useful when you have mixed threaded and async dependencies. 124 | In fact, you can even use a different concurrency model altogether, 125 | such as event-driven programming, as long as you find a library that 126 | implements it. 127 | -------------------------------------------------------------------------------- /src/01_getting_started/02_why_async_zh.md: -------------------------------------------------------------------------------- 1 | # Why Async? 2 | # 为什么选择异步? 3 | 4 | 我们都喜欢 Rust 能让我们编写快速且安全的软件。但在异步编程中, 5 | 又该如何保证这一点呢?
6 | 7 | 异步编程,或简称异步,是一种被越来越多的语言所支持的并发编程模型。它允许你在 8 | 少量的 OS 线程上运行大量的并发任务,同时通过 `async/await` 语法,在 9 | 使用和观感上基本等同于普通的同步编程。 10 | 11 | ## 异步与其它并发模型对比 12 | 13 | 并发编程并不如常规的同步编程成熟,也没那么标准化。因此,我们需根据语言支持的 14 | 并发模型,以不同的方式表达并发。 15 | 下面简单介绍最受欢迎的并发模型,这应该可以帮助你理解异步编程如何融入更广泛的 16 | 并发编程领域: 17 | 18 | - **OS 线程** 不需要对编程模型进行任何修改,这使你可以非常方便地表达并发。 19 | 然而,在线程之间进行同步很困难,且性能开销很大。线程池可以减少一些开销, 20 | 但并不足以支撑海量的 I/O 密集工作负载。 21 | - **事件驱动编程(Event-driven programming)** 与回调结合使用时可以非常高效, 22 | 但往往会导致冗长的、非线性的控制流,数据流和错误传播通常难以追踪。 23 | - **协程** 和线程一样,不需要对编程模型进行任何修改,因此使用起来非常简单。 24 | 同时和异步一样,它也可以支持海量的任务。但是,它们抽象掉了对系统编程 25 | 和自定义运行时实现者来说非常重要的底层细节。 26 | - **actor 模型** 将所有并发计算划分为称作 actor 的单元,它们之间通过可能失败的 27 | 消息传递进行通信,就像在分布式系统中一样。actor 模型可以被高效地实现,但它留下了许多未解决的 28 | 实际问题,如流量控制和重试逻辑。 29 | 30 | 总之,异步编程既允许实现适用于 Rust 这类底层语言的高性能实现,又提供了线程 31 | 和协程的大部分工程学上的优点。 32 | 33 | ## Rust 与其它语言中的异步对比 34 | 35 | 尽管许多语言都支持异步编程,但一些细节因实现而异。Rust 对异步的实现与 36 | 大部分语言有以下几个不同: 37 | 38 | - Rust 中的 **Futures 是惰性的**,只有在被轮询时才会取得进展,丢弃(drop)一个 future 39 | 会使其停止继续执行。 40 | - Rust 中 **Async 是零开销** 的,这意味着你只需为用到的功能付出代价。具体来讲, 41 | 你可以在不使用堆分配和动态分发的情况下使用 async,这对性能大有好处!
42 | 这也让你可以在资源受限的环境中使用 async,如嵌入式系统。 43 | - Rust 并未提供**内置运行时**,运行时由社区维护的 crates 提供。 44 | - Rust 提供了 **单线程和多线程** 的运行时,它们各有不同的优势与缺点。 45 | 46 | ## Rust 中异步和线程的对比 47 | 48 | Rust 中异步的主要替代方案是使用 OS 线程,可以直接通过 49 | [`std::thread`](https://doc.rust-lang.org/std/thread/) 使用,也可以间接地通过线程池使用。 50 | 在线程和异步之间迁移(或反向迁移)通常需要大量的重构工作,既包括具体实现,也包括 51 | (如果你在构建一个库)对外暴露的公共接口。因此,尽早选定适合你需求的模型可以大大节省 52 | 开发时间。 53 | 54 | **OS 线程** 适用于少量的任务,因为线程会带来 CPU 和内存开销。生成新线程 55 | 或在线程之间切换的代价很高,即使是空闲的线程也在消耗系统资源。诚然,线程池 56 | 可以一定程度上减少这些开销,但无法全部消除。好处是,线程让你无需对代码进行大量修改 57 | 即可复用现有的同步代码——不需要特定的编程模型。在一些操作系统中,你还可以调整线程的优先级,这在 58 | 驱动或其它对延迟敏感的程序中非常有用。 59 | 60 | **异步** 可显著降低 CPU 和内存开销,特别是对于海量 IO 密集型的任务负载, 61 | 如服务器和数据库。在其它条件相同的情况下,它可以让你运行比使用 OS 线程时 62 | 多几个数量级的任务,因为异步运行时使用少量(昂贵的)线程来处理大量(廉价的) 63 | 任务。然而,由于异步函数会生成状态机,且每个可执行文件都会捆绑一个异步运行时, 64 | 异步 Rust 会产生更大的二进制文件。 65 | 66 | 最后要说明的是,异步并不比线程好,它们只是不同而已。如果并非出于性能方面的考虑而需要异步, 67 | 那线程通常是更简单的选择。 68 | 69 | ### 示例: 并发下载 70 | 71 | 在这个例子中,我们的目标是并发下载两个网页。在典型的线程应用中, 72 | 我们需要创建线程来实现并发: 73 | 74 | ```rust,ignore 75 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites}} 76 | ``` 77 | 78 | 然而,下载一个网页只是个很小的任务,为这么一点工作创建一个线程十分浪费。 79 | 对一个更大的程序来说,它很容易成为瓶颈。在异步 Rust 中, 80 | 我们可以并发地运行这些任务而无需额外线程。 81 | 82 | ```rust,ignore 83 | {{#include ../../examples/01_02_why_async/src/lib.rs:get_two_sites_async}} 84 | ``` 85 | 86 | 这里没有创建额外的线程。此外,所有的函数调用都是静态分发的,也没有堆分配! 87 | 但是,我们首先需要把代码写成异步的,而本书将帮助你完成它。 88 | 89 | ## Rust 中的自定义并发模型 90 | 91 | 最后一点,Rust 并不强制你在线程和异步之间做出选择。 92 | 你可以在同一个程序中同时使用这两种模型,当你的依赖中既有线程又有异步时,这会很有用。 93 | 实际上,你甚至可以使用完全不同的并发模型,如事件驱动编程,只要你能找到实现它的库! 94 | -------------------------------------------------------------------------------- /src/01_getting_started/03_state_of_async_rust.md: -------------------------------------------------------------------------------- 1 | # The State of Asynchronous Rust 2 | 3 | Parts of async Rust are supported with the same stability guarantees as 4 | synchronous Rust. Other parts are still maturing and will change 5 | over time.
With async Rust, you can expect: 6 | 7 | - Outstanding runtime performance for typical concurrent workloads. 8 | - More frequent interaction with advanced language features, such as lifetimes 9 | and pinning. 10 | - Some compatibility constraints, both between sync and async code, and between 11 | different async runtimes. 12 | - Higher maintenance burden, due to the ongoing evolution of async runtimes 13 | and language support. 14 | 15 | In short, async Rust is more difficult to use and can result in a higher 16 | maintenance burden than synchronous Rust, 17 | but gives you best-in-class performance in return. 18 | All areas of async Rust are constantly improving, 19 | so the impact of these issues will wear off over time. 20 | 21 | ## Language and library support 22 | 23 | While asynchronous programming is supported by Rust itself, 24 | most async applications depend on functionality provided 25 | by community crates. 26 | As such, you need to rely on a mixture of 27 | language features and library support: 28 | 29 | - The most fundamental traits, types and functions, such as the 30 | [`Future`](https://doc.rust-lang.org/std/future/trait.Future.html) trait 31 | are provided by the standard library. 32 | - The `async/await` syntax is supported directly by the Rust compiler. 33 | - Many utility types, macros and functions are provided by the 34 | [`futures`](https://docs.rs/futures/) crate. They can be used in any async 35 | Rust application. 36 | - Execution of async code, IO and task spawning are provided by "async 37 | runtimes", such as Tokio and async-std. Most async applications, and some 38 | async crates, depend on a specific runtime. See 39 | ["The Async Ecosystem"](../08_ecosystem/00_chapter.md) section for more 40 | details. 41 | 42 | Some language features you may be used to from synchronous Rust are not yet 43 | available in async Rust. Notably, Rust does not let you declare async 44 | functions in traits. 
Instead, you need to use workarounds to achieve the same 45 | result, which can be more verbose. 46 | 47 | ## Compiling and debugging 48 | 49 | For the most part, compiler- and runtime errors in async Rust work 50 | the same way as they have always done in Rust. There are a few 51 | noteworthy differences: 52 | 53 | ### Compilation errors 54 | 55 | Compilation errors in async Rust conform to the same high standards as 56 | synchronous Rust, but since async Rust often depends on more complex language 57 | features, such as lifetimes and pinning, you may encounter these types of 58 | errors more frequently. 59 | 60 | ### Runtime errors 61 | 62 | Whenever the compiler encounters an async function, it generates a state 63 | machine under the hood. Stack traces in async Rust typically contain details 64 | from these state machines, as well as function calls from 65 | the runtime. As such, interpreting stack traces can be a bit more involved than 66 | it would be in synchronous Rust. 67 | 68 | ### New failure modes 69 | 70 | A few novel failure modes are possible in async Rust, for instance 71 | if you call a blocking function from an async context or if you implement 72 | the `Future` trait incorrectly. Such errors can silently pass both the 73 | compiler and sometimes even unit tests. Having a firm understanding 74 | of the underlying concepts, which this book aims to give you, can help you 75 | avoid these pitfalls. 76 | 77 | ## Compatibility considerations 78 | 79 | Asynchronous and synchronous code cannot always be combined freely. 80 | For instance, you can't directly call an async function from a sync function. 81 | Sync and async code also tend to promote different design patterns, which can 82 | make it difficult to compose code intended for the different environments. 83 | 84 | Even async code cannot always be combined freely. Some crates depend on a 85 | specific async runtime to function. If so, it is usually specified in the 86 | crate's dependency list. 
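A concrete way to see the sync/async boundary mentioned above: a synchronous function cannot `.await`, so it must hand the future to an executor entry point such as a runtime's `block_on`. The following hand-rolled, std-only `block_on` is a sketch for illustration only (the `add` function is invented for the example), not a substitute for a real runtime:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread;

// Unparks the blocked thread when the future signals readiness.
struct ThreadWaker(thread::Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Drive a single future to completion on the current (synchronous) thread.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(), // sleep until woken
        }
    }
}

async fn add(a: i32, b: i32) -> i32 { a + b }

fn main() {
    // Sync code can only run async code by driving it with an executor.
    let sum = block_on(add(2, 3));
    assert_eq!(sum, 5);
    println!("2 + 3 = {sum}");
}
```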
87 | 88 | These compatibility issues can limit your options, so make sure to 89 | research which async runtime and what crates you may need early. 90 | Once you have settled in with a runtime, you won't have to worry 91 | much about compatibility. 92 | 93 | ## Performance characteristics 94 | 95 | The performance of async Rust depends on the implementation of the 96 | async runtime you're using. 97 | Even though the runtimes that power async Rust applications are relatively new, 98 | they perform exceptionally well for most practical workloads. 99 | 100 | That said, most of the async ecosystem assumes a _multi-threaded_ runtime. 101 | This makes it difficult to enjoy the theoretical performance benefits 102 | of single-threaded async applications, namely cheaper synchronization. 103 | Another overlooked use-case is _latency sensitive tasks_, which are 104 | important for drivers, GUI applications and so on. Such tasks depend 105 | on runtime and/or OS support in order to be scheduled appropriately. 106 | You can expect better library support for these use cases in the future. 
107 | -------------------------------------------------------------------------------- /src/01_getting_started/03_state_of_async_rust_zh.md: -------------------------------------------------------------------------------- 1 | # Rust 异步编程的现状 2 | 3 | Rust 异步编程中的部分功能与同步编程一样,具有相同的稳定性保障, 4 | 其它部分则仍在完善中,会随时间不断变化。使用 Rust 异步,你将会: 5 | 6 | - 在典型的并发工作负载中获得出色的运行性能。 7 | - 更频繁地与高级语言功能交互,例如生命周期和固定(Pinning)。 8 | - 面临一些兼容性约束,既存在于同步和异步代码之间,也存在于不同的异步运行时之间。 9 | - 由于异步运行时及语言支持的不断演进,会面临更重的维护负担。 10 | 11 | 简而言之,Rust 异步编程相较同步编程更难使用,且会带来更重的维护负担, 12 | 但它将为你带来一流的运行性能。Rust 异步编程的所有领域都在不断改进, 13 | 因此这些问题的影响都将随着时间的推移而消退。 14 | 15 | ## 语言和库的支持 16 | 17 | 虽然 Rust 自身支持异步编程,但大多数的异步应用程序都依赖于社区 crate 18 | 所提供的功能。因此,你需要依赖语言功能及库的混合支持: 19 | 20 | - 标准库提供了最基本的特征、类型和函数, 21 | 如 [`Future`](https://doc.rust-lang.org/std/future/trait.Future.html) 22 | 特征。 23 | - Rust 编译器原生支持 `async/await` 语法。 24 | - [`futures`](https://docs.rs/futures/) crate 提供了众多实用的类型、宏和函数。 25 | 它们可以用在任何 Rust 异步程序中。 26 | - 异步代码的执行、IO 和任务生成功能由“异步运行时”提供,例如 `Tokio` 和 27 | `async-std`。大多数异步程序和一些异步 crate 依赖于特定的运行时,详情请参阅 28 | [“异步生态系统”](../08_ecosystem/00_chapter_zh.md)。 29 | 30 | 你习惯于在 Rust 同步编程中使用的一些语言特性,可能在 Rust 异步编程中尚不可用。 31 | 需注意,Rust 不允许你在 traits 中声明异步函数,因此,你需要以变通的方式实现它, 32 | 这可能会导致你的代码变得更加冗长。 33 | 34 | ## 编译和调试 35 | 36 | 在大部分情况下,Rust 异步编程中编译器和运行时错误的工作方式与同步编程相同。 37 | 但有一些值得注意的区别: 38 | 39 | ### 编译错误 40 | 41 | Rust 异步代码的编译错误遵循与同步代码同样高的标准, 42 | 但由于 Rust 异步通常依赖于更复杂的语言特性,例如生命周期和固定(Pinning), 43 | 你可能会更频繁地遇到这些类型的错误。 44 | 45 | ### 运行时错误 46 | 47 | 每当编译器遇到异步函数时,它都会在后台生成一个状态机。 48 | 异步 Rust 中的堆栈追踪通常包含这些状态机的详细信息,以及来自运行时的函数调用。 49 | 因此,解读异步 Rust 的堆栈追踪可能比同步代码更复杂。 50 | 51 | ### 全新的故障模式 52 | 53 | 在 Rust 异步中,可能出现一些新的故障模式。例如,你在异步上下文调用了阻塞函数, 54 | 或者没有正确地实现 `Future` 特征。这类错误可能悄无声息地通过编译,有时甚至能通过单元测试。 55 | 本书旨在让你对这些基本概念有深刻的理解,从而避免踏入这些陷阱。 56 | 57 | ## 兼容性注意事项 58 | 59 | 异步代码和同步代码不能总是自由组合使用。例如,你不能直接从同步函数中调用异步函数。 60 | 同步和异步代码也倾向于使用不同的设计模式,这也使得组合面向不同环境编写的代码变得困难。 61 | 62 | 即使是异步代码之间,也不能总是自由组合。一些 crates 依赖于特定的异步运行时才能运行。 63 | 若如此,则一般会在 crate 的依赖列表中指定此依赖。 64 | 65 |
这些兼容问题会限制你的选择,因此请务必尽早调查、确定你需要哪些 crate 及异步运行时。 66 | 一旦你选定了某个运行时,就不用再怎么担心兼容性的问题了。 67 | 68 | ## 性能特点 69 | 70 | 异步 Rust 的性能取决于你所使用的异步运行时的实现。 71 | 尽管为 Rust 异步应用程序提供支持的运行时相对较新, 72 | 但它们在大多数实际工作负载中都表现得非常出色。 73 | 74 | 话虽如此,大多数的异步生态系统都假定使用的是多线程运行时。 75 | 这使得我们很难享受到单线程异步程序理论上的性能优势,即更廉价的同步。 76 | 另一个被忽视的用例是对延迟敏感的任务,这类任务对驱动、GUI 程序等非常重要。 77 | 这些任务依赖运行时和/或操作系统的支持,才能得到合理的调度。 78 | 在未来,你可以期待有更好的库来支持这些用例。
Unlike `block_on`, `.await` doesn't block the current 32 | thread, but instead asynchronously waits for the future to complete, allowing 33 | other tasks to run if the future is currently unable to make progress. 34 | 35 | For example, imagine that we have three `async fn`: `learn_song`, `sing_song`, 36 | and `dance`: 37 | 38 | ```rust,ignore 39 | async fn learn_song() -> Song { /* ... */ } 40 | async fn sing_song(song: Song) { /* ... */ } 41 | async fn dance() { /* ... */ } 42 | ``` 43 | 44 | One way to do learn, sing, and dance would be to block on each of these 45 | individually: 46 | 47 | ```rust,ignore 48 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_each}} 49 | ``` 50 | 51 | However, we're not giving the best performance possible this way—we're 52 | only ever doing one thing at once! Clearly we have to learn the song before 53 | we can sing it, but it's possible to dance at the same time as learning and 54 | singing the song. To do this, we can create two separate `async fn` which 55 | can be run concurrently: 56 | 57 | ```rust,ignore 58 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_main}} 59 | ``` 60 | 61 | In this example, learning the song must happen before singing the song, but 62 | both learning and singing can happen at the same time as dancing. If we used 63 | `block_on(learn_song())` rather than `learn_song().await` in `learn_and_sing`, 64 | the thread wouldn't be able to do anything else while `learn_song` was running. 65 | This would make it impossible to dance at the same time. By `.await`-ing 66 | the `learn_song` future, we allow other tasks to take over the current thread 67 | if `learn_song` is blocked. This makes it possible to run multiple futures 68 | to completion concurrently on the same thread. 
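The primer's example can't run here without the examples crate, but the same shape can be sketched with only the standard library. Everything below is an illustrative stand-in — `Song`, the three functions, and `join2`, a toy replacement for the `futures` crate's `join!` that simply polls both futures in turn until both complete:

```rust
use std::future::Future;
use std::pin::pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct Song(String);

async fn learn_song() -> Song { Song("twinkle twinkle".into()) }
async fn sing_song(song: Song) -> String { format!("singing {}", song.0) }
async fn dance() -> &'static str { "dancing" }

async fn learn_and_sing() -> String {
    // Wait until the song has been learned before singing it.
    let song = learn_song().await;
    sing_song(song).await
}

// Poll two futures alternately until both are done (a toy `join`).
fn join2<A: Future, B: Future>(a: A, b: B) -> (A::Output, B::Output) {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // Safety: the vtable does nothing, which satisfies the waker contract.
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);

    let mut a = pin!(a);
    let mut b = pin!(b);
    let (mut ra, mut rb) = (None, None);
    while ra.is_none() || rb.is_none() {
        // Never poll a future again after it has returned Ready.
        if ra.is_none() {
            if let Poll::Ready(v) = a.as_mut().poll(&mut cx) { ra = Some(v); }
        }
        if rb.is_none() {
            if let Poll::Ready(v) = b.as_mut().poll(&mut cx) { rb = Some(v); }
        }
    }
    (ra.unwrap(), rb.unwrap())
}

fn main() {
    let (sung, danced) = join2(learn_and_sing(), dance());
    assert_eq!(sung, "singing twinkle twinkle");
    assert_eq!(danced, "dancing");
    println!("{sung} while {danced}");
}
```

A real `join!` interleaves on `Pending` and only re-polls woken futures; this busy loop works here only because the toy futures complete immediately.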
69 | -------------------------------------------------------------------------------- /src/01_getting_started/04_async_await_primer_zh.md: -------------------------------------------------------------------------------- 1 | # `async`/`.await` 入门 2 | 3 | `async`/`.await` 是 Rust 的内置工具,使得你可以如同写同步代码一样编写异步程序。 4 | `async` 会将一个代码块转化为一个实现了名为 `Future` 特征的状态机。 5 | 虽然在同步方法中调用阻塞函数会阻塞整个线程,但阻塞的 `Future` 会让出线程控制权, 6 | 允许其它 `Future` 运行。 7 | 8 | 让我们在 `Cargo.toml` 文件中添加一些依赖项。 9 | 10 | ```toml 11 | {{#include ../../examples/01_04_async_await_primer/Cargo.toml:9:10}} 12 | ``` 13 | 14 | 你可以使用 `async fn` 语法来创建一个异步函数: 15 | 16 | ```rust,edition2018 17 | async fn do_something() { /* ... */ } 18 | ``` 19 | 20 | `async fn` 的返回值是一个 `Future`。需有一个执行器,`Future` 才可执行。 21 | 22 | ```rust,edition2018 23 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:hello_world}} 24 | ``` 25 | 26 | 在 `async fn` 中,你可以使用 `.await` 等待另一个实现了 `Future` 特征的类型完成, 27 | 例如另一个 `async fn` 的返回值。与 `block_on` 不同,`.await` 不会阻塞当前线程, 28 | 而是在当前 future 无法取得进展时允许其它任务继续运行,同时在异步状态等待它的完成。 29 | 30 | 例如,现在我们有三个异步函数,分别是 `learn_song`,`sing_song` 以及 `dance`: 31 | 32 | ```rust,ignore 33 | async fn learn_song() -> Song { /* ... */ } 34 | async fn sing_song(song: Song) { /* ... */ } 35 | async fn dance() { /* ... 
*/ } 36 | ``` 37 | 38 | 一种方式是,分别以阻塞的方式依次执行学歌、唱歌和跳舞: 39 | 40 | ```rust,ignore 41 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_each}} 42 | ``` 43 | 44 | 然而,这种方式并未发挥出最好的性能——因为我们每次只做一件事。 45 | 显然,只有在学会唱歌后才能去唱,但在学歌或唱歌的同时,却是可以跳舞的。 46 | 要实现这一点,我们可以分别创建两个可以并发执行的 `async fn`: 47 | 48 | ```rust,ignore 49 | {{#include ../../examples/01_04_async_await_primer/src/lib.rs:block_on_main}} 50 | ``` 51 | 52 | 在这个例子中,学歌必须发生在唱歌之前,但学歌和唱歌都可以和跳舞同时进行。 53 | 如果我们在 `learn_and_sing` 中使用 `block_on(learn_song())` 54 | 而不是 `learn_song().await`,那么在 `learn_song` 运行期间,该线程将无法做任何其它事情, 55 | 也就无法同时跳舞了。通过对 `learn_song` 这个 future 使用 `.await`,我们允许其它任务 56 | 在 `learn_song` 阻塞时接管当前线程。这使得可以在同一线程上并发地运行多个 future 直至完成。 57 | -------------------------------------------------------------------------------- /src/02_execution/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Under the Hood: Executing `Future`s and Tasks 2 | 3 | In this section, we'll cover the underlying structure of how `Future`s and 4 | asynchronous tasks are scheduled. If you're only interested in learning 5 | how to write higher-level code that uses existing `Future` types and aren't 6 | interested in the details of how `Future` types work, you can skip ahead to 7 | the `async`/`await` chapter. However, several of the topics discussed in this 8 | chapter are useful for understanding how `async`/`await` code works, 9 | understanding the runtime and performance properties of `async`/`await` code, 10 | and building new asynchronous primitives. If you decide to skip this section 11 | now, you may want to bookmark it to revisit in the future. 12 | 13 | Now, with that out of the way, let's talk about the `Future` trait.
14 | -------------------------------------------------------------------------------- /src/02_execution/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # 深入了解:执行 `Future`s 和任务 2 | 3 | 在本章中,我们将介绍如何调度 `Future`s 和异步任务的底层结构。 4 | 如果你只想学习如何编写使用 `Future` 类型的高级代码, 5 | 而对 `Future` 类型的工作原理不感兴趣,可以直接跳到 `async`/`await` 章节。 6 | 但是,本章中提及的几个主题,对理解 `async`/`await` 是如何工作的, 7 | 运行时和 `async`/`await` 代码的性能特性,及构建新的异步原型大有帮助。 8 | 如果你现在决定跳过此章,那最好将它加入到书签中以便将来再重新审读它。 9 | 10 | 那么现在,让我们来聊一聊 `Future` 特征吧。 11 | -------------------------------------------------------------------------------- /src/02_execution/02_future.md: -------------------------------------------------------------------------------- 1 | # The `Future` Trait 2 | 3 | The `Future` trait is at the center of asynchronous programming in Rust. 4 | A `Future` is an asynchronous computation that can produce a value 5 | (although that value may be empty, e.g. `()`). A *simplified* version of 6 | the future trait might look something like this: 7 | 8 | ```rust 9 | {{#include ../../examples/02_02_future_trait/src/lib.rs:simple_future}} 10 | ``` 11 | 12 | Futures can be advanced by calling the `poll` function, which will drive the 13 | future as far towards completion as possible. If the future completes, it 14 | returns `Poll::Ready(result)`. If the future is not able to complete yet, it 15 | returns `Poll::Pending` and arranges for the `wake()` function to be called 16 | when the `Future` is ready to make more progress. When `wake()` is called, the 17 | executor driving the `Future` will call `poll` again so that the `Future` can 18 | make more progress. 19 | 20 | Without `wake()`, the executor would have no way of knowing when a particular 21 | future could make progress, and would have to be constantly polling every 22 | future. With `wake()`, the executor knows exactly which futures are ready to 23 | be `poll`ed. 
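The simplified trait shown above can be exercised with a toy future. The `CountDown` type below is invented for illustration (it is not the chapter's `SocketRead`): it returns `Pending` a fixed number of times, calling `wake` each time, before finally returning `Ready`:

```rust
// Shapes matching the chapter's simplified trait (not std's real Future).
enum Poll<T> { Ready(T), Pending }

trait SimpleFuture {
    type Output;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
}

// Completes after being polled `remaining` more times; until then it asks
// to be woken so the executor knows to poll it again.
struct CountDown { remaining: u32 }

impl SimpleFuture for CountDown {
    type Output = &'static str;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            self.remaining -= 1;
            wake(); // in a real executor this would re-queue the task
            Poll::Pending
        }
    }
}

fn main() {
    fn noop_wake() {}
    let mut fut = CountDown { remaining: 3 };
    let mut polls = 0;
    // A degenerate executor: poll in a loop until Ready.
    loop {
        polls += 1;
        if let Poll::Ready(msg) = fut.poll(noop_wake) {
            assert_eq!(msg, "done");
            break;
        }
    }
    assert_eq!(polls, 4); // three Pending polls, then Ready
    println!("ready after {polls} polls");
}
```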
24 | 25 | For example, consider the case where we want to read from a socket that may 26 | or may not have data available already. If there is data, we can read it 27 | in and return `Poll::Ready(data)`, but if no data is ready, our future is 28 | blocked and can no longer make progress. When no data is available, we 29 | must register `wake` to be called when data becomes ready on the socket, 30 | which will tell the executor that our future is ready to make progress. 31 | A simple `SocketRead` future might look something like this: 32 | 33 | ```rust,ignore 34 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 35 | ``` 36 | 37 | This model of `Future`s allows for composing together multiple asynchronous 38 | operations without needing intermediate allocations. Running multiple futures 39 | at once or chaining futures together can be implemented via allocation-free 40 | state machines, like this: 41 | 42 | ```rust,ignore 43 | {{#include ../../examples/02_02_future_trait/src/lib.rs:join}} 44 | ``` 45 | 46 | This shows how multiple futures can be run simultaneously without needing 47 | separate allocations, allowing for more efficient asynchronous programs. 48 | Similarly, multiple sequential futures can be run one after another, like this: 49 | 50 | ```rust,ignore 51 | {{#include ../../examples/02_02_future_trait/src/lib.rs:and_then}} 52 | ``` 53 | 54 | These examples show how the `Future` trait can be used to express asynchronous 55 | control flow without requiring multiple allocated objects and deeply nested 56 | callbacks. With the basic control-flow out of the way, let's talk about the 57 | real `Future` trait and how it is different. 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_02_future_trait/src/lib.rs:real_future}} 61 | ``` 62 | 63 | The first change you'll notice is that our `self` type is no longer `&mut Self`, 64 | but has changed to `Pin<&mut Self>`. 
We'll talk more about pinning in [a later 65 | section][pinning], but for now know that it allows us to create futures that 66 | are immovable. Immovable objects can store pointers between their fields, 67 | e.g. `struct MyFut { a: i32, ptr_to_a: *const i32 }`. Pinning is necessary 68 | to enable async/await. 69 | 70 | Secondly, `wake: fn()` has changed to `&mut Context<'_>`. In `SimpleFuture`, 71 | we used a call to a function pointer (`fn()`) to tell the future executor that 72 | the future in question should be polled. However, since `fn()` is just a 73 | function pointer, it can't store any data about *which* `Future` called `wake`. 74 | 75 | In a real-world scenario, a complex application like a web server may have 76 | thousands of different connections whose wakeups should all be 77 | managed separately. The `Context` type solves this by providing access to 78 | a value of type `Waker`, which can be used to wake up a specific task. 79 | 80 | [pinning]: ../04_pinning/01_chapter.md 81 | -------------------------------------------------------------------------------- /src/02_execution/02_future_zh.md: -------------------------------------------------------------------------------- 1 | # `Future` 特征 2 | 3 | `Future` 特征是 Rust 异步编程的核心要义。 4 | `Future` 是一种可以产生返回值的异步计算(尽管值可能是空,如`()`)。 5 | `Future` 特征的简化版本可以是这个样子: 6 | 7 | ```rust 8 | {{#include ../../examples/02_02_future_trait/src/lib.rs:simple_future}} 9 | ``` 10 | 11 | 通过调用 `poll` 函数可以推进 Futures,这将驱使 Future 尽快的完成。 12 | 当 future 完成时,将返回 `Poll::Ready(result)`。如果 future 尚不能完成, 13 | 它将返回 `Poll::Pending`,并安排在 future 在可以取得更多的进展时调用 `wake()` 函数。 14 | 当调用 `wake()` 时,驱动 `Future` 的执行器会再次调用 `Poll`, 15 | 以便 `Future` 取得更多的进展。 16 | 17 | 如果没有 `wake()`,执行器将无法得知特定的 future 什么时候可以取得进展, 18 | 将不得不去轮询每个 future,有了 `wake()`,执行器就能准确知道哪个 future 19 | 准备好被 `poll` 了。 20 | 21 | 例如,想像一下我们需要从一个套接字中读取数据,但它里面可能有数据,也可能为空。 22 | 如果有数据,我们可以读取并返回 `Poll::Ready(data)`,但如果是空, 23 | 我们的 future 将阻塞住、无法取得进展。所以在无数据时我们必须注册一个 `wake` 24 | 
以便在套接字上的数据就绪时进行调用,它会告知执行器我们的 future 已可以取得进展。 25 | 一个简单的 `SocketRead` future 如下: 26 | 27 | ```rust,ignore 28 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 29 | ``` 30 | 31 | 这种 `Future` 模型允许将多个异步操作组合起来而无需中间分配。 32 | 一次运行多个 future 或将多个 future 链接在一起,可以通过无分配的状态机实现,如下: 33 | 34 | ```rust,ignore 35 | {{#include ../../examples/02_02_future_trait/src/lib.rs:join}} 36 | ``` 37 | 38 | 这展示了如何在不进行单独分配的情况下同时运行多个 future, 39 | 从而实现更高效的异步程序。同样,多个连续的 future 也可以一个接一个地运行,如下: 40 | 41 | ```rust,ignore 42 | {{#include ../../examples/02_02_future_trait/src/lib.rs:and_then}} 43 | ``` 44 | 45 | 这些例子展示了如何使用 `Future` 特征,在无需多个分配对象和深度嵌套回调的情况下, 46 | 表达异步控制流。抛开基本的控制流程,让我们来谈谈真正的 `Future` 47 | 特征以及它的不同之处。 48 | 49 | ```rust,ignore 50 | {{#include ../../examples/02_02_future_trait/src/lib.rs:real_future}} 51 | ``` 52 | 53 | 首先你会注意到,`self` 的类型已不再是 `&mut Self` 而是 `Pin<&mut Self>`。 54 | 我们将在[后面的章节][pinning]中详细讨论 pinning, 55 | 但现在你只需知道它允许我们创建不可移动的 futures 即可。 56 | 不可移动的对象可以在它们的字段之间存储指针,例如 57 | `struct MyFut { a: i32, ptr_to_a: *const i32 }`。 58 | Pinning 是启用 `async/await` 所必需的功能。 59 | 60 | 其次,`wake: fn()` 变成了 `&mut Context<'_>`。在 `SimpleFuture` 中, 61 | 我们通过调用函数指针(`fn()`)来告知 future 执行器相关的 future 应该被轮询。 62 | 然而,由于 `fn()` 只是一个函数指针,不能存储任何数据,所以无法得知是哪个 63 | `Future` 调用了 `wake`。 64 | 65 | 在实际场景中,像 Web 服务器这样的复杂程序可能有成千上万个不同的连接, 66 | 而它们的唤醒工作应该分开来进行管理。`Context` 类型通过提供对 `Waker` 67 | 类型的值的访问解决了这个问题,该值可用于唤醒特定的任务。 68 | 69 | [pinning]: ../04_pinning/01_chapter_zh.md 70 | -------------------------------------------------------------------------------- /src/02_execution/03_wakeups.md: -------------------------------------------------------------------------------- 1 | # Task Wakeups with `Waker` 2 | 3 | It's common that futures aren't able to complete the first time they are 4 | `poll`ed. When this happens, the future needs to ensure that it is polled 5 | again once it is ready to make more progress. This is done with the `Waker` 6 | type.
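Before building the timer, it can help to see a `Waker` in isolation. The sketch below builds one from std's `Wake` trait rather than the approach used later in the book; `CountingWaker` is invented for the example and merely records each `wake` call:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::task::{Wake, Waker};

// Records how many times the task was "woken".
struct CountingWaker(AtomicUsize);

impl Wake for CountingWaker {
    fn wake(self: Arc<Self>) {
        self.0.fetch_add(1, Ordering::SeqCst);
    }
}

fn main() {
    let inner = Arc::new(CountingWaker(AtomicUsize::new(0)));
    let waker = Waker::from(inner.clone());

    // Wakers can be cloned and stored, then used to signal readiness.
    let stored: Waker = waker.clone();
    stored.wake_by_ref(); // by reference: this clone stays usable
    stored.wake();        // by value: consumes this clone
    waker.wake();

    assert_eq!(inner.0.load(Ordering::SeqCst), 3);
    println!("woken 3 times");
}
```

In a real executor, each `wake` call would re-queue the associated task for polling instead of bumping a counter.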
7 | 8 | Each time a future is polled, it is polled as part of a "task". Tasks are 9 | the top-level futures that have been submitted to an executor. 10 | 11 | `Waker` provides a `wake()` method that can be used to tell the executor that 12 | the associated task should be awoken. When `wake()` is called, the executor 13 | knows that the task associated with the `Waker` is ready to make progress, and 14 | its future should be polled again. 15 | 16 | `Waker` also implements `clone()` so that it can be copied around and stored. 17 | 18 | Let's try implementing a simple timer future using `Waker`. 19 | 20 | ## Applied: Build a Timer 21 | 22 | For the sake of the example, we'll just spin up a new thread when the timer 23 | is created, sleep for the required time, and then signal the timer future 24 | when the time window has elapsed. 25 | 26 | First, start a new project with `cargo new --lib timer_future` and add the imports 27 | we'll need to get started to `src/lib.rs`: 28 | 29 | ```rust 30 | {{#include ../../examples/02_03_timer/src/lib.rs:imports}} 31 | ``` 32 | 33 | Let's start by defining the future type itself. Our future needs a way for the 34 | thread to communicate that the timer has elapsed and the future should complete. 35 | We'll use a shared `Arc>` value to communicate between the thread and 36 | the future. 37 | 38 | ```rust,ignore 39 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_decl}} 40 | ``` 41 | 42 | Now, let's actually write the `Future` implementation! 43 | 44 | ```rust,ignore 45 | {{#include ../../examples/02_03_timer/src/lib.rs:future_for_timer}} 46 | ``` 47 | 48 | Pretty simple, right? If the thread has set `shared_state.completed = true`, 49 | we're done! Otherwise, we clone the `Waker` for the current task and pass it to 50 | `shared_state.waker` so that the thread can wake the task back up. 
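The `poll` logic just described can be exercised without ever starting the timer thread. The sketch below mirrors the shape of the included code (`SharedState`, `TimerFuture`) but drives it by hand with a no-op waker, which is purely test scaffolding:

```rust
use std::future::Future;
use std::pin::{pin, Pin};
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

struct SharedState {
    completed: bool,
    waker: Option<Waker>,
}

struct TimerFuture {
    shared_state: Arc<Mutex<SharedState>>,
}

impl Future for TimerFuture {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        let mut shared_state = self.shared_state.lock().unwrap();
        if shared_state.completed {
            Poll::Ready(())
        } else {
            // Store the latest waker so the timer thread could wake this task.
            shared_state.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

fn noop_waker() -> Waker {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let state = Arc::new(Mutex::new(SharedState { completed: false, waker: None }));
    let mut fut = pin!(TimerFuture { shared_state: state.clone() });
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    // Not completed yet: Pending, and a waker has been stashed.
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Pending));
    assert!(state.lock().unwrap().waker.is_some());

    // Simulate the timer thread finishing, then poll again.
    state.lock().unwrap().completed = true;
    assert!(matches!(fut.as_mut().poll(&mut cx), Poll::Ready(())));
    println!("timer future completed on second poll");
}
```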
51 | 52 | Importantly, we have to update the `Waker` every time the future is polled 53 | because the future may have moved to a different task with a different 54 | `Waker`. This will happen when futures are passed around between tasks after 55 | being polled. 56 | 57 | Finally, we need the API to actually construct the timer and start the thread: 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_new}} 61 | ``` 62 | 63 | Woot! That's all we need to build a simple timer future. Now, if only we had 64 | an executor to run the future on... 65 | -------------------------------------------------------------------------------- /src/02_execution/03_wakeups_zh.md: -------------------------------------------------------------------------------- 1 | # 通过 `Waker` 唤醒任务 2 | 3 | futures 在第一次被 `poll` 时是未就就绪状态是很常见的。当出现这种情况时, 4 | futures 需要确保在其就绪后即会被再次轮询。而这是通过 `Waker` 类型实现的。 5 | 6 | 每次轮询 future 时,都会将其作为“任务”的一部分进行轮询。 7 | 任务是已交由执行器控制的顶级的 future。 8 | 9 | `Waker` 提供了 `wake()` 方法来告知执行器相关任务需要被唤醒。当调用 `wake()` 时, 10 | 执行器就知道其关联的任务已就绪,并再次轮询那个 future。 11 | 12 | `Waker` 还实现了 `clone()`,以便复制和存储。 13 | 14 | 现在让我们尝试去使用 `Waker` 来实现一个简单的计时器 future 吧。 15 | 16 | ## 应用:构建一个计时器 17 | 18 | 就此示例而言,我们将在创建计时器时启动一个新线程,并让它休眠一定的时间, 19 | 然后在时间窗口结束时给计时器 future 发信号。 20 | 21 | 首先我们通过 `cargo new --lib timer_future` 来创建项目并在 src/lib.rs 22 | 中添加需导入的功能。 23 | 24 | ```rust 25 | {{#include ../../examples/02_03_timer/src/lib.rs:imports}} 26 | ``` 27 | 28 | 让我们首先定义这个 future 类型。 29 | 此 future 需要一种方法去通知线程计时器已完成且自身已就绪。 30 | 我们将使用 `Arc>` 共享值来在线程和 future 之间进行通信。 31 | 32 | ```rust,ignore 33 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_decl}} 34 | ``` 35 | 36 | 那么现在,让我们开始编写代码来实现 `Future`! 37 | 38 | ```rust,ignore 39 | {{#include ../../examples/02_03_timer/src/lib.rs:future_for_timer}} 40 | ``` 41 | 42 | 非常简单,是吧?当这个线程的设置变为 `shared_state.completed = true`,就完成了! 
43 | 否则,我们将克隆当前任务的 `Waker` 并把它放置在 `shared_state.waker` 中, 44 | 以便线程可再次唤醒任务。 45 | 46 | 我们必须在每次轮询完 future 后更新 `Waker`, 47 | 因为 future 可能被转移到不同的任务中使用不同的 `Waker` 了,这点非常重要。 48 | 在 futures 被轮询后在任务间传递时,就会发生这种情况。 49 | 50 | 最后,我们需要一个 API 来实际上构建计时器并启动线程: 51 | 52 | ```rust,ignore 53 | {{#include ../../examples/02_03_timer/src/lib.rs:timer_new}} 54 | ``` 55 | 56 | 哈!以上便是我们构建一个简单的计时器 future 所需的全部组件。 57 | 现在,我们只需要一个执行器来运行它了... 58 | -------------------------------------------------------------------------------- /src/02_execution/04_executor.md: -------------------------------------------------------------------------------- 1 | # Applied: Build an Executor 2 | 3 | Rust's `Future`s are lazy: they won't do anything unless actively driven to 4 | completion. One way to drive a future to completion is to `.await` it inside 5 | an `async` function, but that just pushes the problem one level up: who will 6 | run the futures returned from the top-level `async` functions? The answer is 7 | that we need a `Future` executor. 8 | 9 | `Future` executors take a set of top-level `Future`s and run them to completion 10 | by calling `poll` whenever the `Future` can make progress. Typically, an 11 | executor will `poll` a future once to start off. When `Future`s indicate that 12 | they are ready to make progress by calling `wake()`, they are placed back 13 | onto a queue and `poll` is called again, repeating until the `Future` has 14 | completed. 15 | 16 | In this section, we'll write our own simple executor capable of running a large 17 | number of top-level futures to completion concurrently. 18 | 19 | For this example, we depend on the `futures` crate for the `ArcWake` trait, 20 | which provides an easy way to construct a `Waker`. 
Edit `Cargo.toml` to add 21 | a new dependency: 22 | 23 | ```toml 24 | [package] 25 | name = "timer_future" 26 | version = "0.1.0" 27 | authors = ["XYZ Author"] 28 | edition = "2021" 29 | 30 | [dependencies] 31 | futures = "0.3" 32 | ``` 33 | 34 | Next, we need the following imports at the top of `src/main.rs`: 35 | 36 | ```rust,ignore 37 | {{#include ../../examples/02_04_executor/src/lib.rs:imports}} 38 | ``` 39 | 40 | Our executor will work by sending tasks to run over a channel. The executor 41 | will pull events off of the channel and run them. When a task is ready to 42 | do more work (is awoken), it can schedule itself to be polled again by 43 | putting itself back onto the channel. 44 | 45 | In this design, the executor itself just needs the receiving end of the task 46 | channel. The user will get a sending end so that they can spawn new futures. 47 | Tasks themselves are just futures that can reschedule themselves, so we'll 48 | store them as a future paired with a sender that the task can use to requeue 49 | itself. 50 | 51 | ```rust,ignore 52 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_decl}} 53 | ``` 54 | 55 | Let's also add a method to spawner to make it easy to spawn new futures. 56 | This method will take a future type, box it, and create a new `Arc<Task>` with 57 | it inside which can be enqueued onto the executor. 58 | 59 | ```rust,ignore 60 | {{#include ../../examples/02_04_executor/src/lib.rs:spawn_fn}} 61 | ``` 62 | 63 | To poll futures, we'll need to create a `Waker`. 64 | As discussed in the [task wakeups section], `Waker`s are responsible 65 | for scheduling a task to be polled again once `wake` is called. Remember that 66 | `Waker`s tell the executor exactly which task has become ready, allowing 67 | them to poll just the futures that are ready to make progress.
The easiest way 68 | to create a new `Waker` is by implementing the `ArcWake` trait and then using 69 | the `waker_ref` or `.into_waker()` functions to turn an `Arc<impl ArcWake>` 70 | into a `Waker`. Let's implement `ArcWake` for our tasks to allow them to be 71 | turned into `Waker`s and awoken: 72 | 73 | ```rust,ignore 74 | {{#include ../../examples/02_04_executor/src/lib.rs:arcwake_for_task}} 75 | ``` 76 | 77 | When a `Waker` is created from an `Arc<Task>`, calling `wake()` on it will 78 | cause a copy of the `Arc` to be sent onto the task channel. Our executor then 79 | needs to pick up the task and poll it. Let's implement that: 80 | 81 | ```rust,ignore 82 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_run}} 83 | ``` 84 | 85 | Congratulations! We now have a working futures executor. We can even use it 86 | to run `async/.await` code and custom futures, such as the `TimerFuture` we 87 | wrote earlier: 88 | 89 | ```rust,edition2018,ignore 90 | {{#include ../../examples/02_04_executor/src/lib.rs:main}} 91 | ``` 92 | 93 | [task wakeups section]: ./03_wakeups.md 94 | -------------------------------------------------------------------------------- /src/02_execution/04_executor_zh.md: -------------------------------------------------------------------------------- 1 | # 应用:构建一个执行器 2 | 3 | Rust 的 `Future`s 是懒惰的:除非积极地推动它完成,不然它不会做任何事情。 4 | 一种推动 future 完成的方式是在 `async` 函数中使用 `.await`, 5 | 但这只是将问题推进了一层,还面临着:谁将运行从顶级 `async` 函数里返回的 future?
6 | 很明显我们需要一个 `Future` 执行器。 7 | 8 | `Future` 执行器获取一组顶级 `Future`s 并在 `Future` 9 | 可取得进展时通过调用 `poll` 来将它们运行直至完成。 10 | 通常,执行器会调用一次 `poll` 来使 future 开始运行。 11 | 当 `Future`s 通过调用 `wake()` 表示它们已就绪时,会被再次放入队列中以便 `poll` 12 | 再次调用,重复直到 `Future` 完成。 13 | 14 | 在本章中,我们将编写一个简单的,能够同时运行大量顶级 futures 并驱使其完成的执行器。 15 | 16 | 在这个例子中,我们依赖于 `futures` 箱,它提供了 `ArcWake` 特征, 17 | 有了这个特征,我们可以很方便地构建一个 `Waker`。编辑 `Cargo.toml` 添加依赖: 18 | 19 | ```toml 20 | [package] 21 | name = "timer_future" 22 | version = "0.1.0" 23 | authors = ["XYZ Author"] 24 | edition = "2018" 25 | 26 | [dependencies] 27 | futures = "0.3" 28 | ``` 29 | 30 | 接下来,我们需要在 `src/main.rs` 的顶部导入以下路径: 31 | 32 | ```rust,ignore 33 | {{#include ../../examples/02_04_executor/src/lib.rs:imports}} 34 | ``` 35 | 36 | 我们将通过将任务发送到通道(channel)上,来使执行器运行它们。 37 | 执行器会从通道中取出事件并运行它。当一个任务已就绪(awoken 状态), 38 | 它可以将自己再次放入通道,以便被再次轮询。 39 | 40 | 在这个设计中,执行器本身只需要拥有任务通道的接收端。 41 | 用户则拥有此通道的发送端,以便生成新的 futures。任务本身只是可以自我重新调度的 42 | futures,所以我们将 future 与一个发送端配对存储,任务可借此将自己重新放回队列。 43 | 44 | ```rust,ignore 45 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_decl}} 46 | ``` 47 | 48 | 同时,让我们也给 spawner 添加一个新方法,使它可以方便地生成新的 futures。 49 | 这个方法将接收一个 future 类型,将其装箱(box),并用它创建一个新的 `Arc<Task>`, 50 | 以便加入执行器的队列。 51 | 52 | ```rust,ignore 53 | {{#include ../../examples/02_04_executor/src/lib.rs:spawn_fn}} 54 | ``` 55 | 56 | 我们需要创建一个 `Waker` 来轮询 futures。之前在 [唤醒任务] 57 | 中提到过,一旦任务的 `wake` 被调用,`Waker` 就会安排再次轮询它。请记住, 58 | `Waker` 会准确地告知执行器哪个任务已就绪,这样就会只轮询已就绪的 futures。 59 | 创建一个 `Waker` 最简单的方法,就是实现 `ArcWake` 特征,之后使用 `waker_ref` 60 | 或 `.into_waker()` 方法来将一个 `Arc<impl ArcWake>` 转化成 `Waker`。 61 | 下面让我们为 `Task` 实现 `ArcWake` 以便将它们转化成可唤醒的 `Waker`s。 62 | 63 | ```rust,ignore 64 | {{#include ../../examples/02_04_executor/src/lib.rs:arcwake_for_task}} 65 | ``` 66 | 67 | 当从 `Arc<Task>` 创建 `Waker` 后,调用其 `wake()` 将拷贝一份 `Arc` 68 | 并将之发送到任务通道。之后执行器会取得这个任务并轮询它。让我们来实现它: 69 | 70 | ```rust,ignore 71 | {{#include ../../examples/02_04_executor/src/lib.rs:executor_run}} 72 | ``` 73 | 74 | 恭喜!现在我们就有了一个可工作的 futures 执行器。 75
| 我们甚至可以使用它去运行 `async/.await` 代码和自定义的 futures, 76 | 比如说之前完成的 `TimerFuture`。 77 | 78 | ```rust,edition2018,ignore 79 | {{#include ../../examples/02_04_executor/src/lib.rs:main}} 80 | ``` 81 | 82 | [唤醒任务]: ./03_wakeups_zh.md 83 | -------------------------------------------------------------------------------- /src/02_execution/05_io.md: -------------------------------------------------------------------------------- 1 | # Executors and System IO 2 | 3 | In the previous section on [The `Future` Trait], we discussed this example of 4 | a future that performed an asynchronous read on a socket: 5 | 6 | ```rust,ignore 7 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 8 | ``` 9 | 10 | This future will read available data on a socket, and if no data is available, 11 | it will yield to the executor, requesting that its task be awoken when the 12 | socket becomes readable again. However, it's not clear from this example how 13 | the `Socket` type is implemented, and in particular it isn't obvious how the 14 | `set_readable_callback` function works. How can we arrange for `wake()` 15 | to be called once the socket becomes readable? One option would be to have 16 | a thread that continually checks whether `socket` is readable, calling 17 | `wake()` when appropriate. However, this would be quite inefficient, requiring 18 | a separate thread for each blocked IO future. This would greatly reduce the 19 | efficiency of our async code. 20 | 21 | In practice, this problem is solved through integration with an IO-aware 22 | system blocking primitive, such as `epoll` on Linux, `kqueue` on FreeBSD and 23 | Mac OS, IOCP on Windows, and `port`s on Fuchsia (all of which are exposed 24 | through the cross-platform Rust crate [`mio`]). These primitives all allow 25 | a thread to block on multiple asynchronous IO events, returning once one of 26 | the events completes. 
In practice, these APIs usually look something like 27 | this: 28 | 29 | ```rust,ignore 30 | struct IoBlocker { 31 | /* ... */ 32 | } 33 | 34 | struct Event { 35 | // An ID uniquely identifying the event that occurred and was listened for. 36 | id: usize, 37 | 38 | // A set of signals to wait for, or which occurred. 39 | signals: Signals, 40 | } 41 | 42 | impl IoBlocker { 43 | /// Create a new collection of asynchronous IO events to block on. 44 | fn new() -> Self { /* ... */ } 45 | 46 | /// Express an interest in a particular IO event. 47 | fn add_io_event_interest( 48 | &self, 49 | 50 | /// The object on which the event will occur 51 | io_object: &IoObject, 52 | 53 | /// A set of signals that may appear on the `io_object` for 54 | /// which an event should be triggered, paired with 55 | /// an ID to give to events that result from this interest. 56 | event: Event, 57 | ) { /* ... */ } 58 | 59 | /// Block until one of the events occurs. 60 | fn block(&self) -> Event { /* ... */ } 61 | } 62 | 63 | let mut io_blocker = IoBlocker::new(); 64 | io_blocker.add_io_event_interest( 65 | &socket_1, 66 | Event { id: 1, signals: READABLE }, 67 | ); 68 | io_blocker.add_io_event_interest( 69 | &socket_2, 70 | Event { id: 2, signals: READABLE | WRITABLE }, 71 | ); 72 | let event = io_blocker.block(); 73 | 74 | // prints e.g. "Socket 1 is now READABLE" if socket one became readable. 75 | println!("Socket {:?} is now {:?}", event.id, event.signals); 76 | ``` 77 | 78 | Futures executors can use these primitives to provide asynchronous IO objects 79 | such as sockets that can configure callbacks to be run when a particular IO 80 | event occurs. In the case of our `SocketRead` example above, the 81 | `Socket::set_readable_callback` function might look like the following pseudocode: 82 | 83 | ```rust,ignore 84 | impl Socket { 85 | fn set_readable_callback(&self, waker: Waker) { 86 | // `local_executor` is a reference to the local executor. 
87 | // this could be provided at creation of the socket, but in practice 88 | // many executor implementations pass it down through thread local 89 | // storage for convenience. 90 | let local_executor = self.local_executor; 91 | 92 | // Unique ID for this IO object. 93 | let id = self.id; 94 | 95 | // Store the local waker in the executor's map so that it can be called 96 | // once the IO event arrives. 97 | local_executor.event_map.insert(id, waker); 98 | local_executor.add_io_event_interest( 99 | &self.socket_file_descriptor, 100 | Event { id, signals: READABLE }, 101 | ); 102 | } 103 | } 104 | ``` 105 | 106 | We can now have just one executor thread which can receive and dispatch any 107 | IO event to the appropriate `Waker`, which will wake up the corresponding 108 | task, allowing the executor to drive more tasks to completion before returning 109 | to check for more IO events (and the cycle continues...). 110 | 111 | [The `Future` Trait]: ./02_future.md 112 | [`mio`]: https://github.com/tokio-rs/mio 113 | -------------------------------------------------------------------------------- /src/02_execution/05_io_zh.md: -------------------------------------------------------------------------------- 1 | # 执行器与系统 IO 2 | 3 | 在之前的 [`Future` 特征] 中,我们讨论了一个在套接字上进行异步读取的 future 示例: 4 | 5 | ```rust,ignore 6 | {{#include ../../examples/02_02_future_trait/src/lib.rs:socket_read}} 7 | ``` 8 | 9 | 这个 future 将从一个套接字中读取可用数据,当里面无数据时, 10 | 它将执行权让给执行器,请求在套接字再次可读时唤醒其任务。 11 | 但是,在这个例子中并不能清楚地了解到 `Socket` 类型是如何实现的, 12 | 尤其无法明确得知 `set_readable_callback` 函数是如何工作的。 13 | 一旦套接字就绪(可读),我们如何去安排调用 `wake()`? 
14 | 一种选择是创建一个线程去不停地检查 `socket` 是否已就绪,并在就绪时调用 `wake()`。 15 | 然而,这样做是十分低效的,每个阻塞的 IO future 都需要一个单独的线程。 16 | 这将大大降低我们的异步代码的效率。 17 | 18 | 实际上,这个问题是通过集成一个能感知 IO 的系统级阻塞原语来解决的,例如 Linux 上的 `epoll`, 19 | MacOS 及 FreeBSD 上的 `kqueue`、Windows 上的 IOCP,以及 Fuchsia 中的 20 | `port`(所有这些均已通过 Rust 的跨平台 crate [`mio`] 暴露出来)。 21 | 这些原语都允许一个线程阻塞等待多个异步 IO 事件,并在其中一个事件完成时返回。 22 | 这些 API 通常看起来是这样的: 23 | 24 | ```rust,ignore 25 | struct IoBlocker { 26 | /* ... */ 27 | } 28 | 29 | struct Event { 30 | // An ID uniquely identifying the event that occurred and was listened for. 31 | id: usize, 32 | 33 | // A set of signals to wait for, or which occurred. 34 | signals: Signals, 35 | } 36 | 37 | impl IoBlocker { 38 | /// Create a new collection of asynchronous IO events to block on. 39 | fn new() -> Self { /* ... */ } 40 | 41 | /// Express an interest in a particular IO event. 42 | fn add_io_event_interest( 43 | &self, 44 | 45 | /// The object on which the event will occur 46 | io_object: &IoObject, 47 | 48 | /// A set of signals that may appear on the `io_object` for 49 | /// which an event should be triggered, paired with 50 | /// an ID to give to events that result from this interest. 51 | event: Event, 52 | ) { /* ... */ } 53 | 54 | /// Block until one of the events occurs. 55 | fn block(&self) -> Event { /* ... */ } 56 | } 57 | 58 | let mut io_blocker = IoBlocker::new(); 59 | io_blocker.add_io_event_interest( 60 | &socket_1, 61 | Event { id: 1, signals: READABLE }, 62 | ); 63 | io_blocker.add_io_event_interest( 64 | &socket_2, 65 | Event { id: 2, signals: READABLE | WRITABLE }, 66 | ); 67 | let event = io_blocker.block(); 68 | 69 | // prints e.g. "Socket 1 is now READABLE" if socket one became readable.
70 | println!("Socket {:?} is now {:?}", event.id, event.signals); 71 | ``` 72 | 73 | Futures 执行器可以使用这些原语来提供异步 IO 对象,例如套接字: 74 | 它们可以配置在特定 IO 事件发生时运行的回调。在上面的 `SocketRead` 示例中, 75 | `Socket::set_readable_callback` 的伪代码可以写成这样: 76 | 77 | ```rust,ignore 78 | impl Socket { 79 | fn set_readable_callback(&self, waker: Waker) { 80 | // `local_executor` is a reference to the local executor. 81 | // this could be provided at creation of the socket, but in practice 82 | // many executor implementations pass it down through thread local 83 | // storage for convenience. 84 | let local_executor = self.local_executor; 85 | 86 | // Unique ID for this IO object. 87 | let id = self.id; 88 | 89 | // Store the local waker in the executor's map so that it can be called 90 | // once the IO event arrives. 91 | local_executor.event_map.insert(id, waker); 92 | local_executor.add_io_event_interest( 93 | &self.socket_file_descriptor, 94 | Event { id, signals: READABLE }, 95 | ); 96 | } 97 | } 98 | ``` 99 | 100 | 我们现在可以只有一个执行器线程,它可以接收任何 IO 事件并将其分派给适当的 `Waker`, 101 | 唤醒相应的任务,使执行器在返回检查更多的 IO 事件之前驱动更多的任务完成(如此循环...)。 102 | 103 | [`Future` 特征]: ./02_future_zh.md 104 | [`mio`]: https://github.com/tokio-rs/mio 105 | -------------------------------------------------------------------------------- /src/03_async_await/01_chapter.md: -------------------------------------------------------------------------------- 1 | # `async`/`.await` 2 | 3 | In [the first chapter], we took a brief look at `async`/`.await`. 4 | This chapter will discuss `async`/`.await` in 5 | greater detail, explaining how it works and how `async` code differs from 6 | traditional Rust programs. 7 | 8 | `async`/`.await` are special pieces of Rust syntax that make it possible to 9 | yield control of the current thread rather than blocking, allowing other 10 | code to make progress while waiting on an operation to complete. 11 | 12 | There are two main ways to use `async`: `async fn` and `async` blocks.
13 | Each returns a value that implements the `Future` trait: 14 | 15 | ```rust,edition2018,ignore 16 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_fn_and_block_examples}} 17 | ``` 18 | 19 | As we saw in the first chapter, `async` bodies and other futures are lazy: 20 | they do nothing until they are run. The most common way to run a `Future` 21 | is to `.await` it. When `.await` is called on a `Future`, it will attempt 22 | to run it to completion. If the `Future` is blocked, it will yield control 23 | of the current thread. When more progress can be made, the `Future` will be picked 24 | up by the executor and will resume running, allowing the `.await` to resolve. 25 | 26 | ## `async` Lifetimes 27 | 28 | Unlike traditional functions, `async fn`s which take references or other 29 | non-`'static` arguments return a `Future` which is bounded by the lifetime of 30 | the arguments: 31 | 32 | ```rust,edition2018,ignore 33 | {{#include ../../examples/03_01_async_await/src/lib.rs:lifetimes_expanded}} 34 | ``` 35 | 36 | This means that the future returned from an `async fn` must be `.await`ed 37 | while its non-`'static` arguments are still valid. In the common 38 | case of `.await`ing the future immediately after calling the function 39 | (as in `foo(&x).await`) this is not an issue. However, if storing the future 40 | or sending it over to another task or thread, this may be an issue. 41 | 42 | One common workaround for turning an `async fn` with references-as-arguments 43 | into a `'static` future is to bundle the arguments with the call to the 44 | `async fn` inside an `async` block: 45 | 46 | ```rust,edition2018,ignore 47 | {{#include ../../examples/03_01_async_await/src/lib.rs:static_future_with_borrow}} 48 | ``` 49 | 50 | By moving the argument into the `async` block, we extend its lifetime to match 51 | that of the `Future` returned from the call to `good`. 
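The included example follows exactly this pattern; for reference, a minimal self-contained sketch (the names `borrow_x` and `good` follow the chapter's example crate) might look like:

```rust
use std::future::Future;

async fn borrow_x(x: &u8) -> u8 {
    *x
}

// Returning `borrow_x(&x)` for a local `x` directly from a non-async fn
// would fail: the future would borrow data owned by the function. Moving
// `x` into an `async` block bundles the data with the future, so the
// returned future is `'static`.
fn good() -> impl Future<Output = u8> {
    async {
        let x = 5;
        borrow_x(&x).await
    }
}
```

Because the future returned by `good` owns `x`, it can be stored, sent to another task, or run on any executor without lifetime issues.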
52 | 53 | ## `async move` 54 | 55 | `async` blocks and closures allow the `move` keyword, much like normal 56 | closures. An `async move` block will take ownership of the variables it 57 | references, allowing it to outlive the current scope, but giving up the ability 58 | to share those variables with other code: 59 | 60 | ```rust,edition2018,ignore 61 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_move_examples}} 62 | ``` 63 | 64 | ## `.await`ing on a Multithreaded Executor 65 | 66 | Note that, when using a multithreaded `Future` executor, a `Future` may move 67 | between threads, so any variables used in `async` bodies must be able to travel 68 | between threads, as any `.await` can potentially result in a switch to a new 69 | thread. 70 | 71 | This means that it is not safe to use `Rc`, `&RefCell` or any other types 72 | that don't implement the `Send` trait, including references to types that don't 73 | implement the `Sync` trait. 74 | 75 | (Caveat: it is possible to use these types as long as they aren't in scope 76 | during a call to `.await`.) 77 | 78 | Similarly, it isn't a good idea to hold a traditional non-futures-aware lock 79 | across an `.await`, as it can cause the threadpool to lock up: one task could 80 | take out a lock, `.await` and yield to the executor, allowing another task to 81 | attempt to take the lock and cause a deadlock. To avoid this, use the `Mutex` 82 | in `futures::lock` rather than the one from `std::sync`. 
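The caveat above can be made concrete: if a non-`Send` value is dropped *before* the `.await`, the generated future never holds it across the yield point, so the future can still be `Send`. A small sketch (the helper `require_send` exists only for this demonstration):

```rust
use std::rc::Rc;

// Compiles only for futures that implement `Send`.
fn require_send(_: impl Send) {}

async fn uses_rc_briefly() -> usize {
    let len = {
        let rc = Rc::new(vec![1, 2, 3]);
        rc.len()
    }; // `rc` is dropped here, before the await point below
    async {}.await;
    len
}
```

`require_send(uses_rc_briefly())` type-checks precisely because no `Rc` is alive across the `.await`; moving the `rc` binding out of the inner block would make it fail to compile.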
83 | 84 | [the first chapter]: ../01_getting_started/04_async_await_primer.md 85 | -------------------------------------------------------------------------------- /src/03_async_await/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # `async`/`.await` 2 | 3 | 在[第一章]中,我们对 `async`/`.await` 已有了一个简单的了解。 4 | 本章将更详尽的介绍 `async`/`.await`,解读它是如何工作的, 5 | 以及 `async` 代码与传统的 Rust 同步程序有何不同。 6 | 7 | `async`/`.await` 是 Rust 语法的特殊部分,通过它可以在本身产生阻塞时, 8 | 让出当前线程的控制权,即在等待自身完成时,亦可允许其它代码运行。 9 | 10 | 有两种方法来使用 `async`:`async fn` 函数和 `async` 代码块。 11 | 它们都会返回一个实现了 `Future` 特征的值。 12 | 13 | ```rust,edition2018,ignore 14 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_fn_and_block_examples}} 15 | ``` 16 | 17 | 正如我们在第一章中所见,`async` 的代码和其它 futures 是惰性的: 18 | 除非去调用它们,否则它们不会做任何事。而最常用的运行 `Future` 的方法就是使用 19 | `.await`。当 `Future` 调用 `.await` 时,这将尝试去运行 `Future` 直至完成它。 20 | 当 `Future` 阻塞时,它将让出线程的控制权。而当 `Future` 再次就绪时, 21 | 执行器会恢复其运行权限,使 `.await` 推动它完成。 22 | 23 | ## `async` 的生命周期 24 | 25 | 不同于传统函数,`async fn`s 接收引用或其它非静态参数, 26 | 并返回一个受其参数的生命周期限制的 `Future`。 27 | 28 | ```rust,edition2018,ignore 29 | {{#include ../../examples/03_01_async_await/src/lib.rs:lifetimes_expanded}} 30 | ``` 31 | 32 | 这意味着,`async fn` 返回的 future,必须在其非静态参数的生命周期内调用 `.await`! 
33 | 通常在调用函数后立即对 future 执行 `.await` 时不会出现问题(比如 34 | `foo(&x).await`)。然而,当这个 future 被存储起来或发送到其它任务或线程上时, 35 | 这可能会成为一个问题。 36 | 37 | 一种常见的解决办法是,将引用参数(references-as-arguments)和 38 | `async fn` 调用一并放置在一个 `async` 代码块中, 39 | 这样就把 `async fn` 调用连同其引用参数一起转化成了一个 `'static` future。 40 | 41 | ```rust,edition2018,ignore 42 | {{#include ../../examples/03_01_async_await/src/lib.rs:static_future_with_borrow}} 43 | ``` 44 | 45 | 通过将参数移动到 `async` 代码块中,我们将它的生命周期延长到同返回的 `Future` 46 | 一样久。 47 | 48 | ## `async move` 49 | 50 | 同普通的闭包一样,`async` 代码块和闭包中可使用 `move` 关键字。 51 | `async move` 代码块将获取其所引用变量的所有权,使它得到更长的生命周期, 52 | 但这样做就不能再与其它代码共享这些变量了: 53 | 54 | ```rust,edition2018,ignore 55 | {{#include ../../examples/03_01_async_await/src/lib.rs:async_move_examples}} 56 | ``` 57 | 58 | ## 在多线程执行器上的 `.await` 59 | 60 | 注意,当使用多线程 `Future` 执行器时,`Future` 可能会在线程间移动, 61 | 所以在 `async` 里使用的任何变量都必须能在线程之间传输, 62 | 因为任何 `.await` 都可能导致任务切换到一个新线程上。 63 | 64 | 这意味着使用 `Rc`, `&RefCell` 或其它任何未实现 `Send` 特征的类型及未实现 65 | `Sync` 特征的类型的引用都是不安全的。 66 | 67 | (注意:只要这些类型不在调用 `.await` 时的作用域内,就仍然可以使用它们。) 68 | 69 | 同样,跨越 `.await` 持有传统的“非 future 感知”锁也并不是一个好主意, 70 | 它可能导致线程池死锁:一个任务在 `.await` 时获得了锁,然后交出运行权, 71 | 而执行器调度另一个任务同样想获取这个锁,这就导致了死锁。使用 `futures::lock` 中的 `Mutex` 72 | 而非 `std::sync` 中的,可以避免这种情况。 73 | 74 | [第一章]: ../01_getting_started/04_async_await_primer_zh.md 75 | -------------------------------------------------------------------------------- /src/04_pinning/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Pinning 2 | 3 | To poll futures, they must be pinned using a special type called 4 | `Pin<T>`. If you read the explanation of [the `Future` trait] in the 5 | previous section ["Executing `Future`s and Tasks"], you'll recognize 6 | `Pin` from the `self: Pin<&mut Self>` in the `Future::poll` method's definition. 7 | But what does it mean, and why do we need it? 8 | 9 | ## Why Pinning 10 | 11 | `Pin` works in tandem with the `Unpin` marker.
Pinning makes it possible 12 | to guarantee that an object implementing `!Unpin` won't ever be moved. To understand 13 | why this is necessary, we need to remember how `async`/`.await` works. Consider 14 | the following code: 15 | 16 | ```rust,edition2018,ignore 17 | let fut_one = /* ... */; 18 | let fut_two = /* ... */; 19 | async move { 20 | fut_one.await; 21 | fut_two.await; 22 | } 23 | ``` 24 | 25 | Under the hood, this creates an anonymous type that implements `Future`, 26 | providing a `poll` method that looks something like this: 27 | 28 | ```rust,ignore 29 | // The `Future` type generated by our `async { ... }` block 30 | struct AsyncFuture { 31 | fut_one: FutOne, 32 | fut_two: FutTwo, 33 | state: State, 34 | } 35 | 36 | // List of states our `async` block can be in 37 | enum State { 38 | AwaitingFutOne, 39 | AwaitingFutTwo, 40 | Done, 41 | } 42 | 43 | impl Future for AsyncFuture { 44 | type Output = (); 45 | 46 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> { 47 | loop { 48 | match self.state { 49 | State::AwaitingFutOne => match self.fut_one.poll(..) { 50 | Poll::Ready(()) => self.state = State::AwaitingFutTwo, 51 | Poll::Pending => return Poll::Pending, 52 | } 53 | State::AwaitingFutTwo => match self.fut_two.poll(..) { 54 | Poll::Ready(()) => self.state = State::Done, 55 | Poll::Pending => return Poll::Pending, 56 | } 57 | State::Done => return Poll::Ready(()), 58 | } 59 | } 60 | } 61 | } 62 | ``` 63 | 64 | 65 | When `poll` is first called, it will poll `fut_one`. If `fut_one` can't 66 | complete, `AsyncFuture::poll` will return. Future calls to `poll` will pick 67 | up where the previous one left off. This process continues until the future 68 | is able to successfully complete. 69 | 70 | However, what happens if we have an `async` block that uses references? 
71 | For example: 72 | 73 | ```rust,edition2018,ignore 74 | async { 75 | let mut x = [0; 128]; 76 | let read_into_buf_fut = read_into_buf(&mut x); 77 | read_into_buf_fut.await; 78 | println!("{:?}", x); 79 | } 80 | ``` 81 | 82 | What struct does this compile down to? 83 | 84 | ```rust,ignore 85 | struct ReadIntoBuf<'a> { 86 | buf: &'a mut [u8], // points to `x` below 87 | } 88 | 89 | struct AsyncFuture { 90 | x: [u8; 128], 91 | read_into_buf_fut: ReadIntoBuf<'what_lifetime?>, 92 | } 93 | ``` 94 | 95 | Here, the `ReadIntoBuf` future holds a reference into the other field of our 96 | structure, `x`. However, if `AsyncFuture` is moved, the location of `x` will 97 | move as well, invalidating the pointer stored in `read_into_buf_fut.buf`. 98 | 99 | Pinning futures to a particular spot in memory prevents this problem, making 100 | it safe to create references to values inside an `async` block. 101 | 102 | ## Pinning in Detail 103 | 104 | Let's try to understand pinning by using a slightly simpler example. The problem we encountered 105 | above ultimately boils down to how we handle references in self-referential 106 | types in Rust. 107 | 108 | For now our example will look like this: 109 | 110 | ```rust, ignore 111 | #[derive(Debug)] 112 | struct Test { 113 | a: String, 114 | b: *const String, 115 | } 116 | 117 | impl Test { 118 | fn new(txt: &str) -> Self { 119 | Test { 120 | a: String::from(txt), 121 | b: std::ptr::null(), 122 | } 123 | } 124 | 125 | fn init(&mut self) { 126 | let self_ref: *const String = &self.a; 127 | self.b = self_ref; 128 | } 129 | 130 | fn a(&self) -> &str { 131 | &self.a 132 | } 133 | 134 | fn b(&self) -> &String { 135 | assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 136 | unsafe { &*(self.b) } 137 | } 138 | } 139 | ``` 140 | 141 | `Test` provides methods to get a reference to the value of the fields `a` and `b`.
Since `b` is a 142 | reference to `a`, we store it as a raw pointer, because Rust's borrowing rules don't allow us to 143 | express this lifetime. We now have what we call a self-referential struct. 144 | 145 | Our example works fine if we don't move any of our data around, as you can observe by running 146 | this example: 147 | 148 | ```rust 149 | fn main() { 150 | let mut test1 = Test::new("test1"); 151 | test1.init(); 152 | let mut test2 = Test::new("test2"); 153 | test2.init(); 154 | 155 | println!("a: {}, b: {}", test1.a(), test1.b()); 156 | println!("a: {}, b: {}", test2.a(), test2.b()); 157 | 158 | } 159 | # #[derive(Debug)] 160 | # struct Test { 161 | # a: String, 162 | # b: *const String, 163 | # } 164 | # 165 | # impl Test { 166 | # fn new(txt: &str) -> Self { 167 | # Test { 168 | # a: String::from(txt), 169 | # b: std::ptr::null(), 170 | # } 171 | # } 172 | # 173 | # // We need an `init` method to actually set our self-reference 174 | # fn init(&mut self) { 175 | # let self_ref: *const String = &self.a; 176 | # self.b = self_ref; 177 | # } 178 | # 179 | # fn a(&self) -> &str { 180 | # &self.a 181 | # } 182 | # 183 | # fn b(&self) -> &String { 184 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 185 | # unsafe { &*(self.b) } 186 | # } 187 | # } 188 | ``` 189 | We get what we'd expect: 190 | 191 | ```rust, ignore 192 | a: test1, b: test1 193 | a: test2, b: test2 194 | ``` 195 | 196 | Let's see what happens if we swap `test1` with `test2` and thereby move the data: 197 | 198 | ```rust 199 | fn main() { 200 | let mut test1 = Test::new("test1"); 201 | test1.init(); 202 | let mut test2 = Test::new("test2"); 203 | test2.init(); 204 | 205 | println!("a: {}, b: {}", test1.a(), test1.b()); 206 | std::mem::swap(&mut test1, &mut test2); 207 | println!("a: {}, b: {}", test2.a(), test2.b()); 208 | 209 | } 210 | # #[derive(Debug)] 211 | # struct Test { 212 | # a: String, 213 | # b: *const String, 214 | # } 215 | # 216 | # impl Test {
217 | # fn new(txt: &str) -> Self { 218 | # Test { 219 | # a: String::from(txt), 220 | # b: std::ptr::null(), 221 | # } 222 | # } 223 | # 224 | # fn init(&mut self) { 225 | # let self_ref: *const String = &self.a; 226 | # self.b = self_ref; 227 | # } 228 | # 229 | # fn a(&self) -> &str { 230 | # &self.a 231 | # } 232 | # 233 | # fn b(&self) -> &String { 234 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 235 | # unsafe { &*(self.b) } 236 | # } 237 | # } 238 | ``` 239 | 240 | Naively, we might think that we should get a debug print of `test1` twice, like this: 241 | 242 | ```rust, ignore 243 | a: test1, b: test1 244 | a: test1, b: test1 245 | ``` 246 | 247 | But instead we get: 248 | 249 | ```rust, ignore 250 | a: test1, b: test1 251 | a: test1, b: test2 252 | ``` 253 | 254 | The pointer stored in `test2.b` still points to the old location, which is inside `test1` 255 | now. The struct is not self-referential anymore; it holds a pointer to a field 256 | in a different object. That means we can't rely on the lifetime of `test2.b` to 257 | be tied to the lifetime of `test2` anymore.
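This claim can be checked directly by comparing raw pointers after the swap; the sketch below reuses the `Test` type from above and reports, without dereferencing anything, where `test2.b` actually points:

```rust
#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
}

impl Test {
    fn new(txt: &str) -> Self {
        Test {
            a: String::from(txt),
            b: std::ptr::null(),
        }
    }

    fn init(&mut self) {
        self.b = &self.a;
    }
}

/// Swap two initialized `Test`s and report where `test2.b` points afterwards:
/// `(points_into_test1, points_into_test2)`.
fn where_does_b_point() -> (bool, bool) {
    let mut test1 = Test::new("test1");
    test1.init();
    let mut test2 = Test::new("test2");
    test2.init();

    std::mem::swap(&mut test1, &mut test2);

    (
        // `test2.b` still holds the address it was given before the swap,
        // which is the storage of the *variable* `test1` -- a different object.
        test2.b == &test1.a as *const String,
        test2.b == &test2.a as *const String,
    )
}
```

Calling `where_does_b_point()` returns `(true, false)`: after the swap, `test2.b` points at a field of `test1`, which is exactly the dangling relationship described above.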
258 | 259 | If you're still not convinced, this should at least convince you: 260 | 261 | ```rust 262 | fn main() { 263 | let mut test1 = Test::new("test1"); 264 | test1.init(); 265 | let mut test2 = Test::new("test2"); 266 | test2.init(); 267 | 268 | println!("a: {}, b: {}", test1.a(), test1.b()); 269 | std::mem::swap(&mut test1, &mut test2); 270 | test1.a = "I've totally changed now!".to_string(); 271 | println!("a: {}, b: {}", test2.a(), test2.b()); 272 | 273 | } 274 | # #[derive(Debug)] 275 | # struct Test { 276 | # a: String, 277 | # b: *const String, 278 | # } 279 | # 280 | # impl Test { 281 | # fn new(txt: &str) -> Self { 282 | # Test { 283 | # a: String::from(txt), 284 | # b: std::ptr::null(), 285 | # } 286 | # } 287 | # 288 | # fn init(&mut self) { 289 | # let self_ref: *const String = &self.a; 290 | # self.b = self_ref; 291 | # } 292 | # 293 | # fn a(&self) -> &str { 294 | # &self.a 295 | # } 296 | # 297 | # fn b(&self) -> &String { 298 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 299 | # unsafe { &*(self.b) } 300 | # } 301 | # } 302 | ``` 303 | 304 | The diagram below can help visualize what's going on: 305 | 306 | **Fig 1: Before and after swap** 307 | ![swap_problem](../assets/swap_problem.jpg) 308 | 309 | It's easy to get this to show undefined behavior and fail in other spectacular ways as well. 310 | 311 | ## Pinning in Practice 312 | 313 | Let's see how pinning and the `Pin` type can help us solve this problem. 314 | 315 | The `Pin` type wraps pointer types, guaranteeing that the values behind the 316 | pointer won't be moved if it is not implementing `Unpin`. For example, `Pin<&mut 317 | T>`, `Pin<&T>`, `Pin>` all guarantee that `T` won't be moved if `T: 318 | !Unpin`. 319 | 320 | Most types don't have a problem being moved. These types implement a trait 321 | called `Unpin`. Pointers to `Unpin` types can be freely placed into or taken 322 | out of `Pin`. 
For example, `u8` is `Unpin`, so `Pin<&mut u8>` behaves just like 323 | a normal `&mut u8`. 324 | 325 | However, types that can't be moved after they're pinned have a marker called 326 | `!Unpin`. Futures created by async/await are an example of this. 327 | 328 | ### Pinning to the Stack 329 | 330 | Back to our example. We can solve our problem by using `Pin`. Let's take a look at what 331 | our example would look like if we required a pinned pointer instead: 332 | 333 | ```rust, ignore 334 | use std::pin::Pin; 335 | use std::marker::PhantomPinned; 336 | 337 | #[derive(Debug)] 338 | struct Test { 339 | a: String, 340 | b: *const String, 341 | _marker: PhantomPinned, 342 | } 343 | 344 | 345 | impl Test { 346 | fn new(txt: &str) -> Self { 347 | Test { 348 | a: String::from(txt), 349 | b: std::ptr::null(), 350 | _marker: PhantomPinned, // This makes our type `!Unpin` 351 | } 352 | } 353 | 354 | fn init(self: Pin<&mut Self>) { 355 | let self_ptr: *const String = &self.a; 356 | let this = unsafe { self.get_unchecked_mut() }; 357 | this.b = self_ptr; 358 | } 359 | 360 | fn a(self: Pin<&Self>) -> &str { 361 | &self.get_ref().a 362 | } 363 | 364 | fn b(self: Pin<&Self>) -> &String { 365 | assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 366 | unsafe { &*(self.b) } 367 | } 368 | } 369 | ``` 370 | 371 | Pinning an object to the stack will always be `unsafe` if our type implements 372 | `!Unpin`. You can use a crate like [`pin_utils`][pin_utils] to avoid writing 373 | your own `unsafe` code when pinning to the stack.
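As a hedged aside: since this chapter was written, the standard library has gained the `std::pin::pin!` macro (stable since Rust 1.68), which pins a value to the local stack frame without any hand-written `unsafe` at the call site. With the pinned `Test` type above, it can be used like this:

```rust
use std::marker::PhantomPinned;
use std::pin::{pin, Pin};

#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
    _marker: PhantomPinned, // makes `Test` `!Unpin`
}

impl Test {
    fn new(txt: &str) -> Self {
        Test {
            a: String::from(txt),
            b: std::ptr::null(),
            _marker: PhantomPinned,
        }
    }

    fn init(self: Pin<&mut Self>) {
        let self_ptr: *const String = &self.a;
        // Sound here: we only write to `b`, we never move out of `self`.
        let this = unsafe { self.get_unchecked_mut() };
        this.b = self_ptr;
    }

    fn a(self: Pin<&Self>) -> &str {
        &self.get_ref().a
    }

    fn b(self: Pin<&Self>) -> &String {
        assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
        unsafe { &*(self.b) }
    }
}

/// Pin a `Test` to the stack with `pin!`, initialize it, and read both fields.
fn pinned_roundtrip(txt: &str) -> (String, String) {
    // `pin!` consumes the value and yields a `Pin<&mut Test>`; the unpinned
    // value is never accessible again, so it can never be moved.
    let mut t = pin!(Test::new(txt));
    t.as_mut().init();
    (t.as_ref().a().to_string(), t.as_ref().b().to_string())
}
```

Unlike the `Pin::new_unchecked` version below, there is no way to misuse `pin!` to observe the value after pinning, which is why it needs no `unsafe`.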

Below, we pin the objects `test1` and `test2` to the stack:

```rust
pub fn main() {
    // test1 is safe to move before we initialize it
    let mut test1 = Test::new("test1");
    // Notice how we shadow `test1` to prevent it from being accessed again
    let mut test1 = unsafe { Pin::new_unchecked(&mut test1) };
    Test::init(test1.as_mut());

    let mut test2 = Test::new("test2");
    let mut test2 = unsafe { Pin::new_unchecked(&mut test2) };
    Test::init(test2.as_mut());

    println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref()));
    println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref()));
}
# use std::pin::Pin;
# use std::marker::PhantomPinned;
#
# #[derive(Debug)]
# struct Test {
#     a: String,
#     b: *const String,
#     _marker: PhantomPinned,
# }
#
#
# impl Test {
#     fn new(txt: &str) -> Self {
#         Test {
#             a: String::from(txt),
#             b: std::ptr::null(),
#             // This makes our type `!Unpin`
#             _marker: PhantomPinned,
#         }
#     }
#
#     fn init(self: Pin<&mut Self>) {
#         let self_ptr: *const String = &self.a;
#         let this = unsafe { self.get_unchecked_mut() };
#         this.b = self_ptr;
#     }
#
#     fn a(self: Pin<&Self>) -> &str {
#         &self.get_ref().a
#     }
#
#     fn b(self: Pin<&Self>) -> &String {
#         assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
#         unsafe { &*(self.b) }
#     }
# }
```

Now, if we try to move our data, we get a compilation error:

```rust, compile_fail
pub fn main() {
    let mut test1 = Test::new("test1");
    let mut test1 = unsafe { Pin::new_unchecked(&mut test1) };
    Test::init(test1.as_mut());

    let mut test2 = Test::new("test2");
    let mut test2 = unsafe {
Pin::new_unchecked(&mut test2) }; 440 | Test::init(test2.as_mut()); 441 | 442 | println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref())); 443 | std::mem::swap(test1.get_mut(), test2.get_mut()); 444 | println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref())); 445 | } 446 | # use std::pin::Pin; 447 | # use std::marker::PhantomPinned; 448 | # 449 | # #[derive(Debug)] 450 | # struct Test { 451 | # a: String, 452 | # b: *const String, 453 | # _marker: PhantomPinned, 454 | # } 455 | # 456 | # 457 | # impl Test { 458 | # fn new(txt: &str) -> Self { 459 | # Test { 460 | # a: String::from(txt), 461 | # b: std::ptr::null(), 462 | # _marker: PhantomPinned, // This makes our type `!Unpin` 463 | # } 464 | # } 465 | # 466 | # fn init(self: Pin<&mut Self>) { 467 | # let self_ptr: *const String = &self.a; 468 | # let this = unsafe { self.get_unchecked_mut() }; 469 | # this.b = self_ptr; 470 | # } 471 | # 472 | # fn a(self: Pin<&Self>) -> &str { 473 | # &self.get_ref().a 474 | # } 475 | # 476 | # fn b(self: Pin<&Self>) -> &String { 477 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 478 | # unsafe { &*(self.b) } 479 | # } 480 | # } 481 | ``` 482 | 483 | The type system prevents us from moving the data, as follows: 484 | 485 | ``` 486 | error[E0277]: `PhantomPinned` cannot be unpinned 487 | --> src\test.rs:56:30 488 | | 489 | 56 | std::mem::swap(test1.get_mut(), test2.get_mut()); 490 | | ^^^^^^^ within `test1::Test`, the trait `Unpin` is not implemented for `PhantomPinned` 491 | | 492 | = note: consider using `Box::pin` 493 | note: required because it appears within the type `test1::Test` 494 | --> src\test.rs:7:8 495 | | 496 | 7 | struct Test { 497 | | ^^^^ 498 | note: required by a bound in `std::pin::Pin::<&'a mut T>::get_mut` 499 | --> <...>rustlib/src/rust\library\core\src\pin.rs:748:12 500 | | 501 | 748 | T: Unpin, 502 | | ^^^^^ required by this bound in `std::pin::Pin::<&'a mut T>::get_mut` 503 | ``` 
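The compiler's `consider using Box::pin` note points at the approach covered in the heap-pinning section below. As a small std-only sketch (not part of the original example), a heap-pinned value keeps a stable address even when the `Pin<Box<_>>` handle itself is moved around:

```rust
use std::pin::Pin;

fn main() {
    let boxed: Pin<Box<u8>> = Box::pin(42u8);
    let addr_before = &*boxed as *const u8 as usize;

    // Moving the `Pin<Box<u8>>` moves only the pointer; the pointee stays put.
    let moved = boxed;
    let addr_after = &*moved as *const u8 as usize;

    assert_eq!(addr_before, addr_after);
}
```

This is why self-referential data like our `Test` is much easier to handle behind `Box::pin`: the address a self-pointer records can never be invalidated by a move of the handle.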
504 | 505 | > It's important to note that stack pinning will always rely on guarantees 506 | > you give when writing `unsafe`. While we know that the _pointee_ of `&'a mut T` 507 | > is pinned for the lifetime of `'a` we can't know if the data `&'a mut T` 508 | > points to isn't moved after `'a` ends. If it does it will violate the Pin 509 | > contract. 510 | > 511 | > A mistake that is easy to make is forgetting to shadow the original variable 512 | > since you could drop the `Pin` and move the data after `&'a mut T` 513 | > like shown below (which violates the Pin contract): 514 | > 515 | > ```rust 516 | > fn main() { 517 | > let mut test1 = Test::new("test1"); 518 | > let mut test1_pin = unsafe { Pin::new_unchecked(&mut test1) }; 519 | > Test::init(test1_pin.as_mut()); 520 | > 521 | > drop(test1_pin); 522 | > println!(r#"test1.b points to "test1": {:?}..."#, test1.b); 523 | > 524 | > let mut test2 = Test::new("test2"); 525 | > mem::swap(&mut test1, &mut test2); 526 | > println!("... and now it points nowhere: {:?}", test1.b); 527 | > } 528 | > # use std::pin::Pin; 529 | > # use std::marker::PhantomPinned; 530 | > # use std::mem; 531 | > # 532 | > # #[derive(Debug)] 533 | > # struct Test { 534 | > # a: String, 535 | > # b: *const String, 536 | > # _marker: PhantomPinned, 537 | > # } 538 | > # 539 | > # 540 | > # impl Test { 541 | > # fn new(txt: &str) -> Self { 542 | > # Test { 543 | > # a: String::from(txt), 544 | > # b: std::ptr::null(), 545 | > # // This makes our type `!Unpin` 546 | > # _marker: PhantomPinned, 547 | > # } 548 | > # } 549 | > # 550 | > # fn init<'a>(self: Pin<&'a mut Self>) { 551 | > # let self_ptr: *const String = &self.a; 552 | > # let this = unsafe { self.get_unchecked_mut() }; 553 | > # this.b = self_ptr; 554 | > # } 555 | > # 556 | > # fn a<'a>(self: Pin<&'a Self>) -> &'a str { 557 | > # &self.get_ref().a 558 | > # } 559 | > # 560 | > # fn b<'a>(self: Pin<&'a Self>) -> &'a String { 561 | > # assert!(!self.b.is_null(), "Test::b called 
without Test::init being called first");
> #         unsafe { &*(self.b) }
> #     }
> # }
> ```

### Pinning to the Heap

Pinning an `!Unpin` type to the heap gives our data a stable address so we know
that the data we point to can't move after it's pinned. In contrast to stack
pinning, we know that the data will be pinned for the lifetime of the object.

```rust, edition2018
use std::pin::Pin;
use std::marker::PhantomPinned;

#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
    _marker: PhantomPinned,
}

impl Test {
    fn new(txt: &str) -> Pin<Box<Self>> {
        let t = Test {
            a: String::from(txt),
            b: std::ptr::null(),
            _marker: PhantomPinned,
        };
        let mut boxed = Box::pin(t);
        let self_ptr: *const String = &boxed.a;
        unsafe { boxed.as_mut().get_unchecked_mut().b = self_ptr };

        boxed
    }

    fn a(self: Pin<&Self>) -> &str {
        &self.get_ref().a
    }

    fn b(self: Pin<&Self>) -> &String {
        unsafe { &*(self.b) }
    }
}

pub fn main() {
    let test1 = Test::new("test1");
    let test2 = Test::new("test2");

    println!("a: {}, b: {}", test1.as_ref().a(), test1.as_ref().b());
    println!("a: {}, b: {}", test2.as_ref().a(), test2.as_ref().b());
}
```

Some functions require the futures they work with to be `Unpin`. To use a
`Future` or `Stream` that isn't `Unpin` with a function that requires
`Unpin` types, you'll first have to pin the value using either
`Box::pin` (to create a `Pin<Box<T>>`) or the `pin_utils::pin_mut!` macro
(to create a `Pin<&mut T>`). `Pin<Box<Fut>>` and `Pin<&mut Fut>` can both be
used as futures, and both implement `Unpin`.

For example:

```rust,edition2018,ignore
use pin_utils::pin_mut; // `pin_utils` is a handy crate available on crates.io

// A function which takes a `Future` that implements `Unpin`.
fn execute_unpin_future(x: impl Future<Output = ()> + Unpin) { /* ... */ }

let fut = async { /* ... */ };
execute_unpin_future(fut); // Error: `fut` does not implement `Unpin` trait

// Pinning with `Box`:
let fut = async { /* ... */ };
let fut = Box::pin(fut);
execute_unpin_future(fut); // OK

// Pinning with `pin_mut!`:
let fut = async { /* ... */ };
pin_mut!(fut);
execute_unpin_future(fut); // OK
```

## Summary

1. If `T: Unpin` (which is the default), then `Pin<&'a mut T>` is entirely
equivalent to `&'a mut T`. In other words: `Unpin` means it's OK for this type
to be moved even when pinned, so `Pin` will have no effect on such a type.

2. Getting a `&mut T` to a pinned `T` requires `unsafe` if `T: !Unpin`.

3. Most standard library types implement `Unpin`. The same goes for most
"normal" types you encounter in Rust. A `Future` generated by async/await is an exception to this rule.

4. You can add a `!Unpin` bound on a type on nightly with a feature flag, or
by adding `std::marker::PhantomPinned` to your type on stable.

5. You can either pin data to the stack or to the heap.

6. Pinning a `!Unpin` object to the stack requires `unsafe`.

7. Pinning a `!Unpin` object to the heap does not require `unsafe`. There is a shortcut for doing this using `Box::pin`.

8. For pinned data where `T: !Unpin` you have to maintain the invariant that its memory will not
get invalidated or repurposed _from the moment it gets pinned until `drop` is called_. This is
an important part of the _pin contract_.
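Points 1 and 2 can be demonstrated with a few lines of std-only code (this example is not from the original chapter): for an `Unpin` type, `Pin` is constructed and unwrapped entirely in safe code and behaves like a plain mutable reference.

```rust
use std::pin::Pin;

fn main() {
    // `u8: Unpin`, so pinning it needs no `unsafe` and changes nothing:
    let mut x: u8 = 5;
    let mut pinned: Pin<&mut u8> = Pin::new(&mut x);
    *pinned = 6;                                  // `Pin<&mut u8>` derefs like `&mut u8`
    let plain: &mut u8 = Pin::into_inner(pinned); // safe *only* because `u8: Unpin`
    *plain += 1;
    assert_eq!(x, 7);
}
```

For a `!Unpin` type, neither `Pin::new` nor `Pin::into_inner` compiles; that is exactly where `Pin::new_unchecked`, `get_unchecked_mut`, and the `unsafe` obligations of point 2 come in.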
668 | 669 | ["Executing `Future`s and Tasks"]: ../02_execution/01_chapter.md 670 | [the `Future` trait]: ../02_execution/02_future.md 671 | [pin_utils]: https://docs.rs/pin-utils/ 672 | -------------------------------------------------------------------------------- /src/04_pinning/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # 固定 2 | 3 | 为了能对 futures 进行轮询,必须使用一个特殊的类型 `Pin` 来固定它们。 4 | 如果你看了["执行 `Future` 和任务"]这章中[`Future` 特征]一节, 5 | 你会在 `Future::poll` 定义的方法里看到对 `Pin` 的使用:`self: Pin<&mut Self>`。 6 | 但这是什么意思,我们又为什么要使用它呢? 7 | 8 | ## 为什么要固定 9 | 10 | `Pin` 与 `Unpin` 标记协同工作。固定可以保证实现了 `!Unpin` 11 | 的对象永远不会被移动!为了理解这样做的必要性,不妨让我们回忆一下, 12 | `async`/`.await` 是如何工作的。请看下面的代码: 13 | 14 | ```rust,edition2018,ignore 15 | let fut_one = /* ... */; 16 | let fut_two = /* ... */; 17 | async move { 18 | fut_one.await; 19 | fut_two.await; 20 | } 21 | ``` 22 | 23 | 在内部,它创建了一个实现了 `Future` 的匿名类型,并提供了一个 `poll` 方法: 24 | 25 | ```rust,ignore 26 | // The `Future` type generated by our `async { ... }` block 27 | struct AsyncFuture { 28 | fut_one: FutOne, 29 | fut_two: FutTwo, 30 | state: State, 31 | } 32 | 33 | // List of states our `async` block can be in 34 | enum State { 35 | AwaitingFutOne, 36 | AwaitingFutTwo, 37 | Done, 38 | } 39 | 40 | impl Future for AsyncFuture { 41 | type Output = (); 42 | 43 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> { 44 | loop { 45 | match self.state { 46 | State::AwaitingFutOne => match self.fut_one.poll(..) { 47 | Poll::Ready(()) => self.state = State::AwaitingFutTwo, 48 | Poll::Pending => return Poll::Pending, 49 | } 50 | State::AwaitingFutTwo => match self.fut_two.poll(..) 
{
                    Poll::Ready(()) => self.state = State::Done,
                    Poll::Pending => return Poll::Pending,
                }
                State::Done => return Poll::Ready(()),
            }
        }
    }
}
```


当第一次调用 `poll` 时，它将轮询 `fut_one`。如果 `fut_one` 无法完成，
`AsyncFuture::poll` 将返回。之后对 future 的 `poll` 调用会在之前停止的地方继续。
此过程一直持续到 future 完成为止。

但是，如果我们在一个 `async` 代码块中使用引用会发生什么呢？比如：

```rust,edition2018,ignore
async {
    let mut x = [0; 128];
    let read_into_buf_fut = read_into_buf(&mut x);
    read_into_buf_fut.await;
    println!("{:?}", x);
}
```

这个结构体编译后是什么样子？

```rust,ignore
struct ReadIntoBuf<'a> {
    buf: &'a mut [u8], // points to `x` below
}

struct AsyncFuture {
    x: [u8; 128],
    read_into_buf_fut: ReadIntoBuf<'what_lifetime?>,
}
```

在这儿，`ReadIntoBuf` future 存储着我们结构体中另一个字段 `x` 的引用。
如果 `AsyncFuture` 被移动了，`x` 的位置也一定会移动，
从而导致存储在 `read_into_buf_fut.buf` 中的指针变为无效指针！

将 futures 固定在内存中特定的位置可以避免此问题，
从而安全地在 `async` 代码块中创建对值的引用。

## 固定的细节

让我们尝试着通过一个稍简单的示例来更好地理解固定。我们在上面遇到的问题，
归根结底是我们要如何在 Rust 中处理自引用类型中的引用。

目前我们的示例看起来是这样的：

```rust, ignore
use std::pin::Pin;

#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
}

impl Test {
    fn new(txt: &str) -> Self {
        Test {
            a: String::from(txt),
            b: std::ptr::null(),
        }
    }

    fn init(&mut self) {
        let self_ref: *const String = &self.a;
        self.b = self_ref;
    }

    fn a(&self) -> &str {
        &self.a
    }

    fn b(&self) -> &String {
        assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
        unsafe { &*(self.b) }
    }
}
```

`Test` 提供了获取字段 `a` 和 `b` 的引用的方法。因为 `b` 是对 `a` 的引用，
我们将它存储为一个指针，而由于 Rust 的借用规则所以我们无法定义它的生命周期。
现在我们有了一个所谓的自引用结构。

通过下面这个例子可以看到,在不移动任何数据的情况下它可以正常地工作: 142 | 143 | ```rust 144 | fn main() { 145 | let mut test1 = Test::new("test1"); 146 | test1.init(); 147 | let mut test2 = Test::new("test2"); 148 | test2.init(); 149 | 150 | println!("a: {}, b: {}", test1.a(), test1.b()); 151 | println!("a: {}, b: {}", test2.a(), test2.b()); 152 | 153 | } 154 | # use std::pin::Pin; 155 | # #[derive(Debug)] 156 | # struct Test { 157 | # a: String, 158 | # b: *const String, 159 | # } 160 | # 161 | # impl Test { 162 | # fn new(txt: &str) -> Self { 163 | # Test { 164 | # a: String::from(txt), 165 | # b: std::ptr::null(), 166 | # } 167 | # } 168 | # 169 | # // We need an `init` method to actually set our self-reference 170 | # fn init(&mut self) { 171 | # let self_ref: *const String = &self.a; 172 | # self.b = self_ref; 173 | # } 174 | # 175 | # fn a(&self) -> &str { 176 | # &self.a 177 | # } 178 | # 179 | # fn b(&self) -> &String { 180 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 181 | # unsafe { &*(self.b) } 182 | # } 183 | # } 184 | ``` 185 | 这正是我们所期望的结果: 186 | 187 | ```rust, ignore 188 | a: test1, b: test1 189 | a: test2, b: test2 190 | ``` 191 | 192 | 让我们看看如果我们将 `test` 和 `test2` 互换(移动数据)会发生什么: 193 | 194 | ```rust 195 | fn main() { 196 | let mut test1 = Test::new("test1"); 197 | test1.init(); 198 | let mut test2 = Test::new("test2"); 199 | test2.init(); 200 | 201 | println!("a: {}, b: {}", test1.a(), test1.b()); 202 | std::mem::swap(&mut test1, &mut test2); 203 | println!("a: {}, b: {}", test2.a(), test2.b()); 204 | 205 | } 206 | # use std::pin::Pin; 207 | # #[derive(Debug)] 208 | # struct Test { 209 | # a: String, 210 | # b: *const String, 211 | # } 212 | # 213 | # impl Test { 214 | # fn new(txt: &str) -> Self { 215 | # Test { 216 | # a: String::from(txt), 217 | # b: std::ptr::null(), 218 | # } 219 | # } 220 | # 221 | # fn init(&mut self) { 222 | # let self_ref: *const String = &self.a; 223 | # self.b = self_ref; 224 | # } 225 | # 226 | # fn a(&self) -> 
&str { 227 | # &self.a 228 | # } 229 | # 230 | # fn b(&self) -> &String { 231 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 232 | # unsafe { &*(self.b) } 233 | # } 234 | # } 235 | ``` 236 | 237 | 我们天真地以为 `test1` 会被打印两次: 238 | 239 | ```rust, ignore 240 | a: test1, b: test1 241 | a: test1, b: test1 242 | ``` 243 | 244 | 但我们看到的却是: 245 | 246 | ```rust, ignore 247 | a: test1, b: test1 248 | a: test1, b: test2 249 | ``` 250 | 251 | 现在,本该指向 `test2.b` 的指针依然指向了 `test1` 内部的旧地址。 252 | 结构体不再是自引用了,它持有一个指向其它对象中字段的指针。 253 | 这意味着我们无法保证 `test2.b` 与 `test2` 的生命周期之间存在关联关系! 254 | 255 | 如果你仍不相信,那这至少可以说服你: 256 | 257 | ```rust 258 | fn main() { 259 | let mut test1 = Test::new("test1"); 260 | test1.init(); 261 | let mut test2 = Test::new("test2"); 262 | test2.init(); 263 | 264 | println!("a: {}, b: {}", test1.a(), test1.b()); 265 | std::mem::swap(&mut test1, &mut test2); 266 | test1.a = "I've totally changed now!".to_string(); 267 | println!("a: {}, b: {}", test2.a(), test2.b()); 268 | 269 | } 270 | # use std::pin::Pin; 271 | # #[derive(Debug)] 272 | # struct Test { 273 | # a: String, 274 | # b: *const String, 275 | # } 276 | # 277 | # impl Test { 278 | # fn new(txt: &str) -> Self { 279 | # Test { 280 | # a: String::from(txt), 281 | # b: std::ptr::null(), 282 | # } 283 | # } 284 | # 285 | # fn init(&mut self) { 286 | # let self_ref: *const String = &self.a; 287 | # self.b = self_ref; 288 | # } 289 | # 290 | # fn a(&self) -> &str { 291 | # &self.a 292 | # } 293 | # 294 | # fn b(&self) -> &String { 295 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 296 | # unsafe { &*(self.b) } 297 | # } 298 | # } 299 | ``` 300 | 301 | 下面的图表可以帮助你了解到底发生了什么: 302 | 303 | **图1: 交换前后** 304 | ![swap_problem](../assets/swap_problem.jpg) 305 | 306 | 通过它可以很容易看出未定义行为及其导致的错误。 307 | 308 | ## 固定:实践 309 | 310 | 让我们来探究下如何固定,以及 `Pin` 类型如何帮助我们解决这个问题。 311 | 312 | `Pin` 类型用于装饰指针类型,来确保若指针指向的值未实现 `Unpin`,则其不能被移动。 313 | 例如,如果有 `T: !Unpin`,则 
`Pin<&mut T>`、`Pin<&T>`、`Pin<Box<T>>`
都可保证 `T` 无法被移动。

大部分类型在被移动时都没有问题，因为这些类型实现了 `Unpin` 特征。
指向 `Unpin` 类型的指针都可以自由地放入 `Pin` 或从中取出来。比如，
`u8` 是 `Unpin`，所以 `Pin<&mut u8>` 可当作普通的 `&mut u8` 一样使用。

但是，有 `!Unpin` 标记的类型被固定后就不能再被移动了。
async/await 创建的 futures 就是一个例子。

### 固定在栈上

再回到我们的例子中。我们可以通过使用 `Pin` 来解决这个问题。
让我们看一看如果使用固定的指针，我们的例子会是什么样子：

```rust, ignore
use std::pin::Pin;
use std::marker::PhantomPinned;

#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
    _marker: PhantomPinned,
}


impl Test {
    fn new(txt: &str) -> Self {
        Test {
            a: String::from(txt),
            b: std::ptr::null(),
            _marker: PhantomPinned, // This makes our type `!Unpin`
        }
    }
    fn init(self: Pin<&mut Self>) {
        let self_ptr: *const String = &self.a;
        let this = unsafe { self.get_unchecked_mut() };
        this.b = self_ptr;
    }

    fn a(self: Pin<&Self>) -> &str {
        &self.get_ref().a
    }

    fn b(self: Pin<&Self>) -> &String {
        assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
        unsafe { &*(self.b) }
    }
}
```

如果我们的类型实现了 `!Unpin`，在栈上固定一个对象总是 `unsafe` 的。
你可以通过使用像 [`pin_utils`][pin_utils] 这样的 crate
来避免固定到栈时自己写 `unsafe` 代码。

下面，我们将 `test1` 和 `test2` 固定在栈上：

```rust
pub fn main() {
    // test1 is safe to move before we initialize it
    let mut test1 = Test::new("test1");
    // Notice how we shadow `test1` to prevent it from being accessed again
    let mut test1 = unsafe { Pin::new_unchecked(&mut test1) };
    Test::init(test1.as_mut());

    let mut test2 = Test::new("test2");
    let mut test2 = unsafe { Pin::new_unchecked(&mut test2) };
    Test::init(test2.as_mut());

    println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref()));
384 | println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref())); 385 | } 386 | # use std::pin::Pin; 387 | # use std::marker::PhantomPinned; 388 | # 389 | # #[derive(Debug)] 390 | # struct Test { 391 | # a: String, 392 | # b: *const String, 393 | # _marker: PhantomPinned, 394 | # } 395 | # 396 | # 397 | # impl Test { 398 | # fn new(txt: &str) -> Self { 399 | # Test { 400 | # a: String::from(txt), 401 | # b: std::ptr::null(), 402 | # // This makes our type `!Unpin` 403 | # _marker: PhantomPinned, 404 | # } 405 | # } 406 | # fn init(self: Pin<&mut Self>) { 407 | # let self_ptr: *const String = &self.a; 408 | # let this = unsafe { self.get_unchecked_mut() }; 409 | # this.b = self_ptr; 410 | # } 411 | # 412 | # fn a(self: Pin<&Self>) -> &str { 413 | # &self.get_ref().a 414 | # } 415 | # 416 | # fn b(self: Pin<&Self>) -> &String { 417 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 418 | # unsafe { &*(self.b) } 419 | # } 420 | # } 421 | ``` 422 | 423 | 现在,如果我们尝试移动数据,就会得到一个编译错误: 424 | 425 | ```rust, compile_fail 426 | pub fn main() { 427 | let mut test1 = Test::new("test1"); 428 | let mut test1 = unsafe { Pin::new_unchecked(&mut test1) }; 429 | Test::init(test1.as_mut()); 430 | 431 | let mut test2 = Test::new("test2"); 432 | let mut test2 = unsafe { Pin::new_unchecked(&mut test2) }; 433 | Test::init(test2.as_mut()); 434 | 435 | println!("a: {}, b: {}", Test::a(test1.as_ref()), Test::b(test1.as_ref())); 436 | std::mem::swap(test1.get_mut(), test2.get_mut()); 437 | println!("a: {}, b: {}", Test::a(test2.as_ref()), Test::b(test2.as_ref())); 438 | } 439 | # use std::pin::Pin; 440 | # use std::marker::PhantomPinned; 441 | # 442 | # #[derive(Debug)] 443 | # struct Test { 444 | # a: String, 445 | # b: *const String, 446 | # _marker: PhantomPinned, 447 | # } 448 | # 449 | # 450 | # impl Test { 451 | # fn new(txt: &str) -> Self { 452 | # Test { 453 | # a: String::from(txt), 454 | # b: std::ptr::null(), 455 | # _marker: 
PhantomPinned, // This makes our type `!Unpin` 456 | # } 457 | # } 458 | # fn init(self: Pin<&mut Self>) { 459 | # let self_ptr: *const String = &self.a; 460 | # let this = unsafe { self.get_unchecked_mut() }; 461 | # this.b = self_ptr; 462 | # } 463 | # 464 | # fn a(self: Pin<&Self>) -> &str { 465 | # &self.get_ref().a 466 | # } 467 | # 468 | # fn b(self: Pin<&Self>) -> &String { 469 | # assert!(!self.b.is_null(), "Test::b called without Test::init being called first"); 470 | # unsafe { &*(self.b) } 471 | # } 472 | # } 473 | ``` 474 | 475 | 类型系统阻止我们对数据进行移动,像这样: 476 | 477 | ``` 478 | error[E0277]: `PhantomPinned` cannot be unpinned 479 | --> src\test.rs:56:30 480 | | 481 | 56 | std::mem::swap(test1.get_mut(), test2.get_mut()); 482 | | ^^^^^^^ within `test1::Test`, the trait `Unpin` is not implemented for `PhantomPinned` 483 | | 484 | = note: consider using `Box::pin` 485 | note: required because it appears within the type `test1::Test` 486 | --> src\test.rs:7:8 487 | | 488 | 7 | struct Test { 489 | | ^^^^ 490 | note: required by a bound in `std::pin::Pin::<&'a mut T>::get_mut` 491 | --> <...>rustlib/src/rust\library\core\src\pin.rs:748:12 492 | | 493 | 748 | T: Unpin, 494 | | ^^^^^ required by this bound in `std::pin::Pin::<&'a mut T>::get_mut` 495 | ``` 496 | 497 | > 要注意,在栈上固定将始终依赖于你在写 `unsafe` 代码时提供的保证,这很重要。 498 | > 虽然我们知道 `&'a mut T` 的指针在 `'a` 的生命周期内被固定了,但我们不知道 499 | > `'a` 的生命周期结束后它是否被移动了。如果这样做,就违反了 Pin 的原则。 500 | > 501 | > 一个很容易犯的错误是忘记隐藏原始变量,因为你可以删除 `Pin` 并在 `&'a mut T` 502 | > 之后移动数据,如下所示(这违反了 Pin 的原则): 503 | > 504 | > ```rust 505 | > fn main() { 506 | > let mut test1 = Test::new("test1"); 507 | > let mut test1_pin = unsafe { Pin::new_unchecked(&mut test1) }; 508 | > Test::init(test1_pin.as_mut()); 509 | > drop(test1_pin); 510 | > println!(r#"test1.b points to "test1": {:?}..."#, test1.b); 511 | > let mut test2 = Test::new("test2"); 512 | > mem::swap(&mut test1, &mut test2); 513 | > println!("... 
and now it points nowhere: {:?}", test1.b);
> }
> # use std::pin::Pin;
> # use std::marker::PhantomPinned;
> # use std::mem;
> #
> # #[derive(Debug)]
> # struct Test {
> #     a: String,
> #     b: *const String,
> #     _marker: PhantomPinned,
> # }
> #
> #
> # impl Test {
> #     fn new(txt: &str) -> Self {
> #         Test {
> #             a: String::from(txt),
> #             b: std::ptr::null(),
> #             // This makes our type `!Unpin`
> #             _marker: PhantomPinned,
> #         }
> #     }
> #     fn init(self: Pin<&mut Self>) {
> #         let self_ptr: *const String = &self.a;
> #         let this = unsafe { self.get_unchecked_mut() };
> #         this.b = self_ptr;
> #     }
> #
> #     fn a(self: Pin<&Self>) -> &str {
> #         &self.get_ref().a
> #     }
> #
> #     fn b(self: Pin<&Self>) -> &String {
> #         assert!(!self.b.is_null(), "Test::b called without Test::init being called first");
> #         unsafe { &*(self.b) }
> #     }
> # }
> ```

### 在堆上固定

在堆上固定一个 `!Unpin` 类型可以让我们的数据有一个固定的地址，
且我们知道这个数据被固定后就无法移动了。和在栈上固定相比，
我们可明确知道这个数据将在对象的生命周期内被固定。

```rust, edition2018
use std::pin::Pin;
use std::marker::PhantomPinned;

#[derive(Debug)]
struct Test {
    a: String,
    b: *const String,
    _marker: PhantomPinned,
}

impl Test {
    fn new(txt: &str) -> Pin<Box<Self>> {
        let t = Test {
            a: String::from(txt),
            b: std::ptr::null(),
            _marker: PhantomPinned,
        };
        let mut boxed = Box::pin(t);
        let self_ptr: *const String = &boxed.as_ref().a;
        unsafe { boxed.as_mut().get_unchecked_mut().b = self_ptr };

        boxed
    }

    fn a(self: Pin<&Self>) -> &str {
        &self.get_ref().a
    }

    fn b(self: Pin<&Self>) -> &String {
        unsafe { &*(self.b) }
    }
}

pub fn main() {
    let mut
test1 = Test::new("test1");
    let mut test2 = Test::new("test2");

    println!("a: {}, b: {}", test1.as_ref().a(), test1.as_ref().b());
    println!("a: {}, b: {}", test2.as_ref().a(), test2.as_ref().b());
}
```

一些函数要求它们使用的 futures 必须是 `Unpin`（非固定）的。想要将非 `Unpin`
的 `Future` 或 `Stream` 和要求 `Unpin` 类型的函数一起使用，
首先你需要使用 `Box::pin`（创建 `Pin<Box<T>>`）或 `pin_utils::pin_mut!`
宏（创建 `Pin<&mut T>`）固定值。`Pin<Box<Fut>>` 和 `Pin<&mut Fut>`
都可作为 futures 使用，并且都实现了 `Unpin`。

例如：

```rust,edition2018,ignore
use pin_utils::pin_mut; // `pin_utils` is a handy crate available on crates.io

// A function which takes a `Future` that implements `Unpin`.
fn execute_unpin_future(x: impl Future<Output = ()> + Unpin) { /* ... */ }

let fut = async { /* ... */ };
execute_unpin_future(fut); // Error: `fut` does not implement `Unpin` trait

// Pinning with `Box`:
let fut = async { /* ... */ };
let fut = Box::pin(fut);
execute_unpin_future(fut); // OK

// Pinning with `pin_mut!`:
let fut = async { /* ... */ };
pin_mut!(fut);
execute_unpin_future(fut); // OK
```

## 总结

1. 如果 `T: Unpin`（默认实现），那么 `Pin<&'a mut T>` 完全等同于 `&'a mut T`。
换句话说，`Unpin` 意味着即使被固定，此类型也可以被移动，`Pin`
对于这种类型是无效的。

2. 如果 `T: !Unpin`，从固定的 `T` 获取 `&mut T` 需要 `unsafe`。

3. 大部分标准库类型都实现了 `Unpin`。你在 Rust 中使用的大多数“正常”类型亦如此。
由 async/await 生成的 `Future` 则是例外。

4. 你可以在 nightly 的 Rust 版本里，在类型上添加 `!Unpin` 绑定，
或者在稳定版上添加 `std::marker::PhantomPinned` 到你的类型。

5. 你可以把数据固定在栈或者堆上。

6. 将一个 `!Unpin` 对象固定在栈上需要 `unsafe`。

7. 将 `!Unpin` 对象固定在堆上不需要 `unsafe`，使用 `Box::pin` 即可方便地完成。

8.
固定 `T: !Unpin` 的数据时,你必须保证它的不可变性,即从被固定到数据被 drop, 651 | 它的内存不会失效或被重新分配。这是 Pin 的使用规则中的重要部分。 652 | 653 | ["执行 `Future` 和任务"]: ../02_execution/01_chapter_zh.md 654 | [`Future` 特征]: ../02_execution/02_future_zh.md 655 | [pin_utils]: https://docs.rs/pin-utils/ 656 | -------------------------------------------------------------------------------- /src/05_streams/01_chapter.md: -------------------------------------------------------------------------------- 1 | # The `Stream` Trait 2 | 3 | The `Stream` trait is similar to `Future` but can yield multiple values before 4 | completing, similar to the `Iterator` trait from the standard library: 5 | 6 | ```rust,ignore 7 | {{#include ../../examples/05_01_streams/src/lib.rs:stream_trait}} 8 | ``` 9 | 10 | One common example of a `Stream` is the `Receiver` for the channel type from 11 | the `futures` crate. It will yield `Some(val)` every time a value is sent 12 | from the `Sender` end, and will yield `None` once the `Sender` has been 13 | dropped and all pending messages have been received: 14 | 15 | ```rust,edition2018,ignore 16 | {{#include ../../examples/05_01_streams/src/lib.rs:channels}} 17 | ``` 18 | -------------------------------------------------------------------------------- /src/05_streams/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # `Stream` 特征 2 | 3 | `Stream` 特征类似于 `Future` 但是可以在完成前产生多个值, 4 | 亦类似于标准库中的 `Iterator` 特征。 5 | 6 | ```rust,ignore 7 | {{#include ../../examples/05_01_streams/src/lib.rs:stream_trait}} 8 | ``` 9 | 10 | 一个常见的 `Stream` 的例子是 `futures` crate 中 channel 类型的 `Receiver`。 11 | 每当 `Sender` 端发送一个数据,它都会产生一个 `Some(val)`, 12 | 而在通道里所有数据都被取出及 `Sender` 被删除时,则产生 `None`: 13 | 14 | ```rust,edition2018,ignore 15 | {{#include ../../examples/05_01_streams/src/lib.rs:channels}} 16 | ``` 17 | -------------------------------------------------------------------------------- /src/05_streams/02_iteration_and_concurrency.md: 
-------------------------------------------------------------------------------- 1 | # Iteration and Concurrency 2 | 3 | Similar to synchronous `Iterator`s, there are many different ways to iterate 4 | over and process the values in a `Stream`. There are combinator-style methods 5 | such as `map`, `filter`, and `fold`, and their early-exit-on-error cousins 6 | `try_map`, `try_filter`, and `try_fold`. 7 | 8 | Unfortunately, `for` loops are not usable with `Stream`s, but for 9 | imperative-style code, `while let` and the `next`/`try_next` functions can 10 | be used: 11 | 12 | ```rust,edition2018,ignore 13 | {{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:nexts}} 14 | ``` 15 | 16 | However, if we're just processing one element at a time, we're potentially 17 | leaving behind opportunity for concurrency, which is, after all, why we're 18 | writing async code in the first place. To process multiple items from a stream 19 | concurrently, use the `for_each_concurrent` and `try_for_each_concurrent` 20 | methods: 21 | 22 | ```rust,edition2018,ignore 23 | {{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:try_for_each_concurrent}} 24 | ``` 25 | -------------------------------------------------------------------------------- /src/05_streams/02_iteration_and_concurrency_zh.md: -------------------------------------------------------------------------------- 1 | # 迭代和并发 2 | 3 | 与同步中的 `Iterator`s 类似,对 `Stream` 中的值进行迭代与处理的方法有多种。 4 | 有组合器风格的方法如 `map`、`filter` 和 `fold`,以及在它们错误时退出的变种 5 | `try_map`、`try_filter` 和 `try_fold`。 6 | 7 | 不幸的是,`Stream`s 不能使用 `for` 循环,而只能使用命令式风格的代码,像 8 | `while let` 和 `next`/`try_next` 函数: 9 | 10 | ```rust,edition2018,ignore 11 | {{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:nexts}} 12 | ``` 13 | 14 | 但是,如果我们每次只处理一个元素,这样就潜在地留下了产生并发的机会, 15 | 毕竟这也就是我们首先编写异步代码的原因。在一个 stream 并发中, 16 | 可以使用 `for_each_concurrent` 和 `try_for_each_concurrent` 17 | 函数来处理多个项目: 18 | 19 | ```rust,edition2018,ignore 20 | 
{{#include ../../examples/05_02_iteration_and_concurrency/src/lib.rs:try_for_each_concurrent}}
```
--------------------------------------------------------------------------------
/src/06_multiple_futures/01_chapter.md:
--------------------------------------------------------------------------------
# Executing Multiple Futures at a Time

Up until now, we've mostly executed futures by using `.await`, which blocks
the current task until a particular `Future` completes. However, real
asynchronous applications often need to execute several different
operations concurrently.

In this chapter, we'll cover some ways to execute multiple asynchronous
operations at the same time:

- `join!`: waits for futures to all complete
- `select!`: waits for one of several futures to complete
- Spawning: creates a top-level task which ambiently runs a future to completion
- `FuturesUnordered`: a group of futures which yields the result of each subfuture
--------------------------------------------------------------------------------
/src/06_multiple_futures/01_chapter_zh.md:
--------------------------------------------------------------------------------
# 一次执行多个 Futures

到目前为止，我们主要通过 `.await` 来运行 futures，它会阻塞当前任务，
直到一个特定的 `Future` 完成。然而，真正的异步程序通常需要同时运行多个不同的操作。

在本章中，我们将介绍几种可同时执行多个异步操作的方法：

- `join!`：等待直到 futures 全部完成
- `select!`：在多个 futures 中等待其中一个完成
- Spawning：创建一个顶级任务，去推动 future 完成
- `FuturesUnordered`：一个 futures 组，使每个子 future 产生结果
--------------------------------------------------------------------------------
/src/06_multiple_futures/02_join.md:
--------------------------------------------------------------------------------
# `join!`

The `futures::join` macro makes it possible to wait for multiple different
futures to complete while executing them all concurrently.
5 | 6 | 7 | 8 | When performing multiple asynchronous operations, it's tempting to simply 9 | `.await` them in a series: 10 | 11 | ```rust,edition2018,ignore 12 | {{#include ../../examples/06_02_join/src/lib.rs:naiive}} 13 | ``` 14 | 15 | However, this will be slower than necessary, since it won't start trying to 16 | run `get_music` until after `get_book` has completed. In some other languages, 17 | futures are ambiently run to completion, so two operations can be 18 | run concurrently by first calling each `async fn` to start the futures, and 19 | then awaiting them both: 20 | 21 | ```rust,edition2018,ignore 22 | {{#include ../../examples/06_02_join/src/lib.rs:other_langs}} 23 | ``` 24 | 25 | However, Rust futures won't do any work until they're actively `.await`ed. 26 | This means that the two code snippets above will both run 27 | `book_future` and `music_future` in series rather than running them 28 | concurrently. To correctly run the two futures concurrently, use 29 | `futures::join!`: 30 | 31 | ```rust,edition2018,ignore 32 | {{#include ../../examples/06_02_join/src/lib.rs:join}} 33 | ``` 34 | 35 | The value returned by `join!` is a tuple containing the output of each 36 | `Future` passed in. 37 | 38 | ## `try_join!` 39 | 40 | For futures which return `Result`, consider using `try_join!` rather than 41 | `join!`. Since `join!` only completes once all subfutures have completed, 42 | it'll continue processing other futures even after one of its subfutures 43 | has returned an `Err`. 44 | 45 | Unlike `join!`, `try_join!` will complete immediately if one of the subfutures 46 | returns an error. 47 | 48 | ```rust,edition2018,ignore 49 | {{#include ../../examples/06_02_join/src/lib.rs:try_join}} 50 | ``` 51 | 52 | Note that the futures passed to `try_join!` must all have the same error type.
53 | Consider using the `.map_err(|e| ...)` and `.err_into()` functions from 54 | `futures::future::TryFutureExt` to consolidate the error types: 55 | 56 | ```rust,edition2018,ignore 57 | {{#include ../../examples/06_02_join/src/lib.rs:try_join_map_err}} 58 | ``` 59 | -------------------------------------------------------------------------------- /src/06_multiple_futures/02_join_zh.md: -------------------------------------------------------------------------------- 1 | # `join!` 2 | 3 | `futures::join` 宏可以同时执行多个不同的 futures 并等待它们的完成。 4 | 5 | 6 | 7 | 当执行多个异步操作时,可以很简单地将它们组成一个使用 `.await` 的序列: 8 | 9 | ```rust,edition2018,ignore 10 | {{#include ../../examples/06_02_join/src/lib.rs:naiive}} 11 | ``` 12 | 13 | 但是,这样做会比实际需要的更慢,因为 `get_music` 要等到 `get_book` 14 | 完成之后才会开始运行。在其它一些语言中,futures 是在环境中自发去运行、完成的, 15 | 所以可以通过先去调用每个 `async fn` 来启动 future,然后再等待它们完成: 16 | 17 | ```rust,edition2018,ignore 18 | {{#include ../../examples/06_02_join/src/lib.rs:other_langs}} 19 | ``` 20 | 21 | 然而,在 Rust 中,futures 不会在被 `.await` 前做任何操作。 22 | 这就意味着上面的两个代码块都会按序来运行 `book_future` 和 `music_future` 23 | 而非并发地运行它们。我们可以使用 `futures::join!` 来正确地并发运行它们: 24 | 25 | ```rust,edition2018,ignore 26 | {{#include ../../examples/06_02_join/src/lib.rs:join}} 27 | ``` 28 | 29 | `join!` 返回的值是一个包含每个传入的 `Future` 的输出的元组。 30 | 31 | ## `try_join!` 32 | 33 | 对于那些返回值是 `Result` 类型的 futures,可以考虑使用 `try_join!` 而非 `join!`。 34 | 因为 `join!` 只会在所有的子 futures 完成后才会“完成”(返回), 35 | 即使其中的子 future 返回了错误,也会继续等待其它子 future 完成。 36 | 37 | 不同于 `join!`,`try_join!` 将在某个子 future 返回 error 后立即完成(返回)。 38 | 39 | ```rust,edition2018,ignore 40 | {{#include ../../examples/06_02_join/src/lib.rs:try_join}} 41 | ``` 42 | 43 | 要注意!所有传入 `try_join!` 的 futures 都必须有相同的错误类型。 44 | 你可以使用 `futures::future::TryFutureExt` 中的 `.map_err(|e| ...)` 与 45 | `.err_into()` 来统一错误类型: 46 | 47 | ```rust,edition2018,ignore 48 | {{#include ../../examples/06_02_join/src/lib.rs:try_join_map_err}} 49 | ``` 50 |
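下面是一个仅使用标准库的同步示意(其中的函数名与错误类型均为假设),用来说明用 `.map_err` 统一错误类型的思路;`try_join!` 要求统一错误类型的道理与此相同:

```rust
// 假设的两个操作,各自返回不同的错误类型
fn get_book() -> Result<&'static str, ()> {
    Ok("book")
}

fn get_music() -> Result<&'static str, String> {
    Ok("music")
}

// 用 `map_err` 把 `()` 错误转换成 `String`,
// 这样两个 `?` 才能共用同一个错误类型
fn get_book_and_music() -> Result<(&'static str, &'static str), String> {
    let book = get_book().map_err(|()| String::from("无法获取 book"))?;
    let music = get_music()?;
    Ok((book, music))
}

fn main() {
    assert_eq!(get_book_and_music(), Ok(("book", "music")));
}
```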
-------------------------------------------------------------------------------- /src/06_multiple_futures/03_select.md: -------------------------------------------------------------------------------- 1 | # `select!` 2 | 3 | The `futures::select` macro runs multiple futures simultaneously, allowing 4 | the user to respond as soon as any future completes. 5 | 6 | ```rust,edition2018 7 | {{#include ../../examples/06_03_select/src/lib.rs:example}} 8 | ``` 9 | 10 | The function above will run both `t1` and `t2` concurrently. When either 11 | `t1` or `t2` finishes, the corresponding handler will call `println!`, and 12 | the function will end without completing the remaining task. 13 | 14 | The basic syntax for `select` is `<pattern> = <expression> => <code>,`, 15 | repeated for as many futures as you would like to `select` over. 16 | 17 | ## `default => ...` and `complete => ...` 18 | 19 | `select` also supports `default` and `complete` branches. 20 | 21 | A `default` branch will run if none of the futures being `select`ed 22 | over are yet complete. A `select` with a `default` branch will 23 | therefore always return immediately, since `default` will be run 24 | if none of the other futures are ready. 25 | 26 | `complete` branches can be used to handle the case where all futures 27 | being `select`ed over have completed and will no longer make progress. 28 | This is often handy when looping over a `select!`. 29 | 30 | ```rust,edition2018 31 | {{#include ../../examples/06_03_select/src/lib.rs:default_and_complete}} 32 | ``` 33 | 34 | ## Interaction with `Unpin` and `FusedFuture` 35 | 36 | One thing you may have noticed in the first example above is that we 37 | had to call `.fuse()` on the futures returned by the two `async fn`s, 38 | as well as pinning them with `pin_mut`. Both of these calls are necessary 39 | because the futures used in `select` must implement both the `Unpin` 40 | trait and the `FusedFuture` trait.
41 | 42 | `Unpin` is necessary because the futures used by `select` are not 43 | taken by value, but by mutable reference. By not taking ownership 44 | of the future, uncompleted futures can be used again after the 45 | call to `select`. 46 | 47 | Similarly, the `FusedFuture` trait is required because `select` must 48 | not poll a future after it has completed. `FusedFuture` is implemented 49 | by futures which track whether or not they have completed. This makes 50 | it possible to use `select` in a loop, only polling the futures which 51 | still have yet to complete. This can be seen in the example above, 52 | where `a_fut` or `b_fut` will have completed the second time through 53 | the loop. Because the future returned by `future::ready` implements 54 | `FusedFuture`, it's able to tell `select` not to poll it again. 55 | 56 | Note that streams have a corresponding `FusedStream` trait. Streams 57 | which implement this trait or have been wrapped using `.fuse()` 58 | will yield `FusedFuture` futures from their 59 | `.next()` / `.try_next()` combinators. 60 | 61 | ```rust,edition2018 62 | {{#include ../../examples/06_03_select/src/lib.rs:fused_stream}} 63 | ``` 64 | 65 | ## Concurrent tasks in a `select` loop with `Fuse` and `FuturesUnordered` 66 | 67 | One somewhat hard-to-discover but handy function is `Fuse::terminated()`, 68 | which allows constructing an empty future which is already terminated, 69 | and can later be filled in with a future that needs to be run. 70 | 71 | This can be handy when there's a task that needs to be run during a `select` 72 | loop but which is created inside the `select` loop itself. 73 | 74 | Note the use of the `.select_next_some()` function. This can be 75 | used with `select` to only run the branch for `Some(_)` values 76 | returned from the stream, ignoring `None`s. 
77 | 78 | ```rust,edition2018 79 | {{#include ../../examples/06_03_select/src/lib.rs:fuse_terminated}} 80 | ``` 81 | 82 | When many copies of the same future need to be run simultaneously, 83 | use the `FuturesUnordered` type. The following example is similar 84 | to the one above, but will run each copy of `run_on_new_num_fut` 85 | to completion, rather than aborting them when a new one is created. 86 | It will also print out a value returned by `run_on_new_num_fut`. 87 | 88 | ```rust,edition2018 89 | {{#include ../../examples/06_03_select/src/lib.rs:futures_unordered}} 90 | ``` 91 | -------------------------------------------------------------------------------- /src/06_multiple_futures/03_select_zh.md: -------------------------------------------------------------------------------- 1 | # `select!` 2 | 3 | `futures::select` 宏会同时运行多个 futures,允许用户在其中任何一个 future 4 | 完成后立即做出响应。 5 | 6 | ```rust,edition2018 7 | {{#include ../../examples/06_03_select/src/lib.rs:example}} 8 | ``` 9 | 10 | 上面的函数将同时运行 `t1` 和 `t2`。当其中任意一个任务完成后, 11 | 就会运行与之对应的 `println!` 语句,同时结束此函数,而不处理其它未完成任务。 12 | 13 | `select` 的基本语法是这样 `<pattern> = <expression> => <code>,`, 14 | 像这样你可以在 select 代码块里放进所有你需要的 futures。 15 | 16 | ## `default => ...` and `complete => ...` 17 | 18 | `select` 同样支持 `default` 和 `complete` 分支。 19 | 20 | 当 `select` 中的 futures 都是未完成状态时,将运行 `default` 分支。 21 | 因此具有 `default` 分支的 `select` 都将立即返回一个结果。 22 | 23 | 在 `select` 的所有分支都是已完成状态,不会再取得任何进展时,`complete` 24 | 分支将会运行。在循环中使用 `select!` 时,这是非常有用的!
25 | 26 | ```rust,edition2018 27 | {{#include ../../examples/06_03_select/src/lib.rs:default_and_complete}} 28 | ``` 29 | 30 | ## 与 `Unpin` 和 `FusedFuture` 交互 31 | 32 | 在上面的第一个例子中,也许你发现了这点:对于两个 `async fn` 返回的 futures, 33 | 我们必须对它们调用 `.fuse()` 方法,同时使用 `pin_mut` 来将它们固定。 34 | 这两个调用都是必要的,因为 `select` 中使用的 futures 必须实现 `Unpin` 和 35 | `FusedFuture` 这两个特征。 36 | 37 | `Unpin` 之所以有必要,是因为 `select` 使用中的 futures 不是其本身, 38 | 而是通过可变引用获取的。通过这种方式,`select` 不会获取 futures 的所有权, 39 | 从而使得其中未完成的 futures 可以在 `select` 后依然可用。 40 | 41 | 同样的,因为 `select` 不能轮询一个已完成的 future,所以我们也需要对 future 实现 42 | `FusedFuture` 特征,以此来追踪其自身的完成状态。这样我们就可以在循环中使用 43 | `select` 了,因为它只会去轮询未完成的 futures。在上面的示例中我们可以看到, 44 | 在第二次进入 `select` 循环时,`a_fut` 或 `b_fut` 将已经完成。因为 `future::ready` 45 | 返回的 future 实现了 `FusedFuture`,这样它就可以告知 `select` 46 | 不要再去轮询它! 47 | 48 | 注意,streams 具有相应的 `FusedStream` 特征。实现了此特征,或使用 `.fuse()` 49 | 包装后的 Streams,将从它们的 `.next()` / `.try_next()` 组合器中产生 50 | `FusedFuture` futures。 51 | 52 | ```rust,edition2018 53 | {{#include ../../examples/06_03_select/src/lib.rs:fused_stream}} 54 | ``` 55 | 56 | ## 带有 `Fuse` 和 `FuturesUnordered` 的 `select` 循环中的并发任务 57 | 58 | 一个有点儿难以发现但非常好用的函数是 `Fuse::terminated()`, 59 | 它允许创建一个已经终止的空 future,并可稍后再把一个需要运行的 future 填充进去。 60 | 61 | 当一个任务需要在 `select` 循环中运行,但它需要在 `select` 循环内部产生时, 62 | 使用它就会变得很方便。 63 | 64 | 请注意这里使用了 `.select_next_some()` 函数。它在同 `select` 一起使用时, 65 | 只会运行 stream 返回值为 `Some(_)` 的分支,而忽略 `None`s。 66 | 67 | ```rust,edition2018 68 | {{#include ../../examples/06_03_select/src/lib.rs:fuse_terminated}} 69 | ``` 70 | 71 | 当同一 future 的多个副本需要同时运行时,请使用 `FuturesUnordered` 类型。 72 | 下面的示例与上面的示例类似,但是会运行 `run_on_new_num_fut` 73 | 的每个副本直至全部完成,而非在创建新的副本后中止之前的任务。 74 | 它还将打印出 `run_on_new_num_fut` 的返回值。 75 | 76 | ```rust,edition2018 77 | {{#include ../../examples/06_03_select/src/lib.rs:futures_unordered}} 78 | ``` 79 | -------------------------------------------------------------------------------- /src/06_multiple_futures/04_spawning.md:
-------------------------------------------------------------------------------- 1 | # `Spawning` 2 | 3 | Spawning allows you to run a new asynchronous task in the background. This allows us to continue executing other code 4 | while it runs. 5 | 6 | Say we have a web server that wants to accept connections without blocking the main thread. 7 | To achieve this, we can use the `async_std::task::spawn` function to create and run a new task that handles the 8 | connections. This function takes a future and returns a `JoinHandle`, which can be used to wait for the result of the 9 | task once it's completed. 10 | 11 | ```rust,edition2018 12 | {{#include ../../examples/06_04_spawning/src/lib.rs:example}} 13 | ``` 14 | 15 | The `JoinHandle` returned by `spawn` implements the `Future` trait, so we can `.await` it to get the result of the task. 16 | This will block the current task until the spawned task completes. If the task is not awaited, your program will 17 | continue executing without waiting for the task, cancelling it if the function is completed before the task is finished. 18 | 19 | ```rust,edition2018 20 | {{#include ../../examples/06_04_spawning/src/lib.rs:join_all}} 21 | ``` 22 | 23 | To communicate between the main task and the spawned task, we can use channels 24 | provided by the async runtime used. 
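For comparison, the standard library's threading API follows the same pattern described above: `thread::spawn` also returns a `JoinHandle`, and an `mpsc` channel serves the communication role. A minimal synchronous sketch (the function name and payloads are illustrative, not from the example code):

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a background "task" and communicate with it over a channel,
// then collect its final result through the JoinHandle.
fn spawn_and_collect() -> (String, i32) {
    let (sender, receiver) = mpsc::channel();

    // Like `async_std::task::spawn`, this returns a JoinHandle that can
    // later be used to wait for the task's result.
    let handle = thread::spawn(move || {
        sender.send(String::from("connection handled")).unwrap();
        42
    });

    // Receive a message from the spawned task...
    let message = receiver.recv().unwrap();

    // ...and wait for its result, the synchronous counterpart of `.await`.
    let result = handle.join().unwrap();
    (message, result)
}

fn main() {
    assert_eq!(
        spawn_and_collect(),
        (String::from("connection handled"), 42)
    );
}
```

The key difference is that an async `JoinHandle` is itself a `Future`, so waiting on it does not block a whole OS thread.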
-------------------------------------------------------------------------------- /src/06_multiple_futures/04_spawning_zh.md: -------------------------------------------------------------------------------- 1 | # `Spawning` 2 | 3 | `Spawning` 使你可以在后台运行新的异步任务,这可以让我们在它运行时继续执行其它代码。 4 | 5 | 假设我们有一个 Web 服务器需要在不阻塞主线程的情况下接受并处理连接。为了实现这点,我们可以使用 `async_std::task::spawn` 6 | 函数来创建并运行一个新的任务来处理这个连接。`spawn` 函数接收一个 `future` 并返回一个 `JoinHandle`, 7 | 它可用于等待任务完成后的结果。 8 | 9 | ```rust,edition2018 10 | {{#include ../../examples/06_04_spawning/src/lib.rs:example}} 11 | ``` 12 | 13 | `spawn` 返回实现了 `Future` 特征的 `JoinHandle`,所以我们可以通过 `.await` 来获取此任务的结果。 14 | 但这将阻塞当前的任务,直至新生成的任务运行完成。如果不使用 `.await` 等待该任务,则程序将继续运行, 15 | 若此任务在当前函数运行结束前没有完成,则会取消此任务(即不等待结果,直接丢弃)。 16 | 17 | ```rust,edition2018 18 | {{#include ../../examples/06_04_spawning/src/lib.rs:join_all}} 19 | ``` 20 | 21 | 通常,我们使用 `async` 运行时提供的 `channels` 来进行主任务与派生任务的通信。 22 | -------------------------------------------------------------------------------- /src/07_workarounds/01_chapter.md: -------------------------------------------------------------------------------- 1 | # Workarounds to Know and Love 2 | 3 | Rust's `async` support is still fairly new, and there are a handful of 4 | highly-requested features still under active development, as well 5 | as some subpar diagnostics. This chapter will discuss some common pain 6 | points and explain how to work around them.
7 | -------------------------------------------------------------------------------- /src/07_workarounds/01_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # 了解问题与解决方法 2 | 3 | 由于 Rust 的异步支持还相当新,目前仍有一些呼声很高的特性正处于积极开发中, 4 | 同时还存在一些不够完善的诊断信息。在本章,我们将讨论一些开发中常见的痛点, 5 | 及解决它们的办法。 6 | -------------------------------------------------------------------------------- /src/07_workarounds/02_err_in_async_blocks.md: -------------------------------------------------------------------------------- 1 | # `?` in `async` Blocks 2 | 3 | Just as in `async fn`, it's common to use `?` inside `async` blocks. 4 | However, the return type of `async` blocks isn't explicitly stated. 5 | This can cause the compiler to fail to infer the error type of the 6 | `async` block. 7 | 8 | For example, this code: 9 | 10 | ```rust,edition2018 11 | # struct MyError; 12 | # async fn foo() -> Result<(), MyError> { Ok(()) } 13 | # async fn bar() -> Result<(), MyError> { Ok(()) } 14 | let fut = async { 15 | foo().await?; 16 | bar().await?; 17 | Ok(()) 18 | }; 19 | ``` 20 | 21 | will trigger this error: 22 | 23 | ``` 24 | error[E0282]: type annotations needed 25 | --> src/main.rs:5:9 26 | | 27 | 4 | let fut = async { 28 | | --- consider giving `fut` a type 29 | 5 | foo().await?; 30 | | ^^^^^^^^^^^^ cannot infer type 31 | ``` 32 | 33 | Unfortunately, there's currently no way to "give `fut` a type", nor a way 34 | to explicitly specify the return type of an `async` block.
35 | To work around this, use the "turbofish" operator to supply the success and 36 | error types for the `async` block: 37 | 38 | ```rust,edition2018 39 | # struct MyError; 40 | # async fn foo() -> Result<(), MyError> { Ok(()) } 41 | # async fn bar() -> Result<(), MyError> { Ok(()) } 42 | let fut = async { 43 | foo().await?; 44 | bar().await?; 45 | Ok::<(), MyError>(()) // <- note the explicit type annotation here 46 | }; 47 | ``` 48 | 49 | -------------------------------------------------------------------------------- /src/07_workarounds/02_err_in_async_blocks_zh.md: -------------------------------------------------------------------------------- 1 | # `async` 代码块中的 `?` 2 | 3 | 就像在 `async fn` 中一样,在 `async` 代码块中使用 `?` 是很常见的。 4 | 但是,`async` 代码块的返回类型并不是显式声明的, 5 | 这可能导致编译器无法推断出 `async` 代码块中出现的错误的类型。 6 | 7 | 例如下面这段代码: 8 | 9 | ```rust,edition2018 10 | # struct MyError; 11 | # async fn foo() -> Result<(), MyError> { Ok(()) } 12 | # async fn bar() -> Result<(), MyError> { Ok(()) } 13 | let fut = async { 14 | foo().await?; 15 | bar().await?; 16 | Ok(()) 17 | }; 18 | ``` 19 | 20 | 会引发这种错误: 21 | 22 | ``` 23 | error[E0282]: type annotations needed 24 | --> src/main.rs:5:9 25 | | 26 | 4 | let fut = async { 27 | | --- consider giving `fut` a type 28 | 5 | foo().await?; 29 | | ^^^^^^^^^^^^ cannot infer type 30 | ``` 31 | 32 | 遗憾的是,目前我们没有办法为 `fut` 指定一个类型,也没办法明确说明 `async` 33 | 代码块返回的具体类型。要解决这个问题,可以使用 `turbofish` 运算符, 34 | 它可以为 `async` 代码块提供成功和错误对应的类型: 35 | 36 | ```rust,edition2018 37 | # struct MyError; 38 | # async fn foo() -> Result<(), MyError> { Ok(()) } 39 | # async fn bar() -> Result<(), MyError> { Ok(()) } 40 | let fut = async { 41 | foo().await?; 42 | bar().await?; 43 | Ok::<(), MyError>(()) // <- note the explicit type annotation here 44 | }; 45 | ``` 46 | 47 | -------------------------------------------------------------------------------- /src/07_workarounds/03_send_approximation.md: -------------------------------------------------------------------------------- 1
| # `Send` Approximation 2 | 3 | Some `async fn` state machines are safe to be sent across threads, while 4 | others are not. Whether or not an `async fn` `Future` is `Send` is determined 5 | by whether a non-`Send` type is held across an `.await` point. The compiler 6 | does its best to approximate when values may be held across an `.await` 7 | point, but this analysis is too conservative in a number of places today. 8 | 9 | For example, consider a simple non-`Send` type, perhaps a type 10 | which contains an `Rc`: 11 | 12 | ```rust 13 | use std::rc::Rc; 14 | 15 | #[derive(Default)] 16 | struct NotSend(Rc<()>); 17 | ``` 18 | 19 | Variables of type `NotSend` can briefly appear as temporaries in `async fn`s 20 | even when the resulting `Future` type returned by the `async fn` must be `Send`: 21 | 22 | ```rust,edition2018 23 | # use std::rc::Rc; 24 | # #[derive(Default)] 25 | # struct NotSend(Rc<()>); 26 | async fn bar() {} 27 | async fn foo() { 28 | NotSend::default(); 29 | bar().await; 30 | } 31 | 32 | fn require_send(_: impl Send) {} 33 | 34 | fn main() { 35 | require_send(foo()); 36 | } 37 | ``` 38 | 39 | However, if we change `foo` to store `NotSend` in a variable, this example no 40 | longer compiles: 41 | 42 | ```rust,edition2018 43 | # use std::rc::Rc; 44 | # #[derive(Default)] 45 | # struct NotSend(Rc<()>); 46 | # async fn bar() {} 47 | async fn foo() { 48 | let x = NotSend::default(); 49 | bar().await; 50 | } 51 | # fn require_send(_: impl Send) {} 52 | # fn main() { 53 | # require_send(foo()); 54 | # } 55 | ``` 56 | 57 | ``` 58 | error[E0277]: `std::rc::Rc<()>` cannot be sent between threads safely 59 | --> src/main.rs:15:5 60 | | 61 | 15 | require_send(foo()); 62 | | ^^^^^^^^^^^^ `std::rc::Rc<()>` cannot be sent between threads safely 63 | | 64 | = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<()>` 65 | = note: required because it appears within the type `NotSend` 66 | = note: required because 
it appears within the type `{NotSend, impl std::future::Future, ()}` 67 | = note: required because it appears within the type `[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]` 68 | = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]>` 69 | = note: required because it appears within the type `impl std::future::Future` 70 | = note: required because it appears within the type `impl std::future::Future` 71 | note: required by `require_send` 72 | --> src/main.rs:12:1 73 | | 74 | 12 | fn require_send(_: impl Send) {} 75 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 76 | 77 | error: aborting due to previous error 78 | 79 | For more information about this error, try `rustc --explain E0277`. 80 | ``` 81 | 82 | This error is correct. If we store `x` into a variable, it won't be dropped 83 | until after the `.await`, at which point the `async fn` may be running on 84 | a different thread. Since `Rc` is not `Send`, allowing it to travel across 85 | threads would be unsound. One simple solution to this would be to `drop` 86 | the `Rc` before the `.await`, but unfortunately that does not work today. 87 | 88 | In order to successfully work around this issue, you may have to introduce 89 | a block scope encapsulating any non-`Send` variables. This makes it easier 90 | for the compiler to tell that these variables do not live across an 91 | `.await` point. 
92 | 93 | ```rust,edition2018 94 | # use std::rc::Rc; 95 | # #[derive(Default)] 96 | # struct NotSend(Rc<()>); 97 | # async fn bar() {} 98 | async fn foo() { 99 | { 100 | let x = NotSend::default(); 101 | } 102 | bar().await; 103 | } 104 | # fn require_send(_: impl Send) {} 105 | # fn main() { 106 | # require_send(foo()); 107 | # } 108 | ``` 109 | -------------------------------------------------------------------------------- /src/07_workarounds/03_send_approximation_zh.md: -------------------------------------------------------------------------------- 1 | # 近似的 `Send` 2 | 3 | 一些 `async fn` 状态机可以安全地跨线程发送,而另一些则不是。 4 | `async fn` `Future` 是否为 `Send`,取决于是否跨 `.await` 持有非 `Send` 类型。 5 | 编译器会尽力估计哪些值可能会跨越 `.await` 点被持有, 6 | 但现在这种分析在许多地方都太过于保守。 7 | 8 | 比如,考虑一种简单的 non-`Send` 类型,也许只是一个包含 `Rc` 的类型: 9 | 10 | ```rust 11 | use std::rc::Rc; 12 | 13 | #[derive(Default)] 14 | struct NotSend(Rc<()>); 15 | ``` 16 | 17 | 即使 `async fn` 返回的结果必须是 `Send` 类型,但 Non-`Send` 类型变量, 18 | 也可短暂地作为临时变量在 `async fn` 里使用: 19 | 20 | ```rust,edition2018 21 | # use std::rc::Rc; 22 | # #[derive(Default)] 23 | # struct NotSend(Rc<()>); 24 | async fn bar() {} 25 | async fn foo() { 26 | NotSend::default(); 27 | bar().await; 28 | } 29 | 30 | fn require_send(_: impl Send) {} 31 | 32 | fn main() { 33 | require_send(foo()); 34 | } 35 | ``` 36 | 37 | 但是,如果我们改写 `foo`,将 `NotSend` 存储在变量里,这个例子就无法通过编译了: 38 | 39 | ```rust,edition2018 40 | # use std::rc::Rc; 41 | # #[derive(Default)] 42 | # struct NotSend(Rc<()>); 43 | # async fn bar() {} 44 | async fn foo() { 45 | let x = NotSend::default(); 46 | bar().await; 47 | } 48 | # fn require_send(_: impl Send) {} 49 | # fn main() { 50 | # require_send(foo()); 51 | # } 52 | ``` 53 | 54 | ``` 55 | error[E0277]: `std::rc::Rc<()>` cannot be sent between threads safely 56 | --> src/main.rs:15:5 57 | | 58 | 15 | require_send(foo()); 59 | | ^^^^^^^^^^^^ `std::rc::Rc<()>` cannot be sent between threads safely 60 | | 61 | = help: within `impl std::future::Future`, the trait `std::marker::Send` is
not implemented for `std::rc::Rc<()>` 62 | = note: required because it appears within the type `NotSend` 63 | = note: required because it appears within the type `{NotSend, impl std::future::Future, ()}` 64 | = note: required because it appears within the type `[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]` 65 | = note: required because it appears within the type `std::future::GenFuture<[static generator@src/main.rs:7:16: 10:2 {NotSend, impl std::future::Future, ()}]>` 66 | = note: required because it appears within the type `impl std::future::Future` 67 | = note: required because it appears within the type `impl std::future::Future` 68 | note: required by `require_send` 69 | --> src/main.rs:12:1 70 | | 71 | 12 | fn require_send(_: impl Send) {} 72 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 73 | 74 | error: aborting due to previous error 75 | 76 | For more information about this error, try `rustc --explain E0277`. 77 | ``` 78 | 79 | 这个报错是正确的。如果我们将 `x` 存储在一个变量里,直到 `.await` 80 | 之后它才会被丢弃,而此时 `async fn` 可能在其它的线程中运行。 81 | 因为 `Rc` 不是 `Send`,它不能安全地在线程间传输。一个简单的解决办法是, 82 | 在 `.await` 之前 `drop` 掉 `Rc`,但遗憾的是目前无法这么做。 83 | 84 | 你可以通过使用一个代码块({})来包裹住所有的 non-`Send` 变量,这可解决这个问题。 85 | 这样就能明确地告知编译器,这些变量在 `.await` 前就被丢弃了。 86 | 87 | ```rust,edition2018 88 | # use std::rc::Rc; 89 | # #[derive(Default)] 90 | # struct NotSend(Rc<()>); 91 | # async fn bar() {} 92 | async fn foo() { 93 | { 94 | let x = NotSend::default(); 95 | } 96 | bar().await; 97 | } 98 | # fn require_send(_: impl Send) {} 99 | # fn main() { 100 | # require_send(foo()); 101 | # } 102 | ``` 103 | -------------------------------------------------------------------------------- /src/07_workarounds/04_recursion.md: -------------------------------------------------------------------------------- 1 | # Recursion 2 | 3 | Internally, `async fn` creates a state machine type containing each 4 | sub-`Future` being `.await`ed.
This makes recursive `async fn`s a little 5 | tricky, since the resulting state machine type has to contain itself: 6 | 7 | ```rust,edition2018 8 | # async fn step_one() { /* ... */ } 9 | # async fn step_two() { /* ... */ } 10 | # struct StepOne; 11 | # struct StepTwo; 12 | // This function: 13 | async fn foo() { 14 | step_one().await; 15 | step_two().await; 16 | } 17 | // generates a type like this: 18 | enum Foo { 19 | First(StepOne), 20 | Second(StepTwo), 21 | } 22 | 23 | // So this function: 24 | async fn recursive() { 25 | recursive().await; 26 | recursive().await; 27 | } 28 | 29 | // generates a type like this: 30 | enum Recursive { 31 | First(Recursive), 32 | Second(Recursive), 33 | } 34 | ``` 35 | 36 | This won't work—we've created an infinitely-sized type! 37 | The compiler will complain: 38 | 39 | ``` 40 | error[E0733]: recursion in an `async fn` requires boxing 41 | --> src/lib.rs:1:22 42 | | 43 | 1 | async fn recursive() { 44 | | ^ an `async fn` cannot invoke itself directly 45 | | 46 | = note: a recursive `async fn` must be rewritten to return a boxed future. 47 | ``` 48 | 49 | In order to allow this, we have to introduce an indirection using `Box`. 50 | Unfortunately, compiler limitations mean that just wrapping the calls to 51 | `recursive()` in `Box::pin` isn't enough. To make this work, we have 52 | to make `recursive` into a non-`async` function which returns a `.boxed()` 53 | `async` block: 54 | 55 | ```rust,edition2018 56 | {{#include ../../examples/07_05_recursion/src/lib.rs:example}} 57 | ``` 58 | -------------------------------------------------------------------------------- /src/07_workarounds/04_recursion_zh.md: -------------------------------------------------------------------------------- 1 | # 递归 2 | 3 | 在内部,`async fn` 创建了一个包含每个 `.await` 的子 `Future` 的状态机类型。 4 | 因为这种结果状态机必然包括其自身,这使得递归 `async fn` 变得有点儿麻烦了: 5 | 6 | ```rust,edition2018 7 | # async fn step_one() { /* ... */ } 8 | # async fn step_two() { /* ... 
*/ } 9 | # struct StepOne; 10 | # struct StepTwo; 11 | // This function: 12 | async fn foo() { 13 | step_one().await; 14 | step_two().await; 15 | } 16 | // generates a type like this: 17 | enum Foo { 18 | First(StepOne), 19 | Second(StepTwo), 20 | } 21 | 22 | // So this function: 23 | async fn recursive() { 24 | recursive().await; 25 | recursive().await; 26 | } 27 | 28 | // generates a type like this: 29 | enum Recursive { 30 | First(Recursive), 31 | Second(Recursive), 32 | } 33 | ``` 34 | 35 | 我们创建了一个无限大的类型,这将无法工作,编译器会抱怨道: 36 | 37 | ``` 38 | error[E0733]: recursion in an `async fn` requires boxing 39 | --> src/lib.rs:1:22 40 | | 41 | 1 | async fn recursive() { 42 | | ^ an `async fn` cannot invoke itself directly 43 | | 44 | = note: a recursive `async fn` must be rewritten to return a boxed future. 45 | ``` 46 | 47 | 为了解决这个问题,我们必须通过 `Box` 来间接引用它。不幸的是,由于编译器的限制, 48 | 仅仅使用 `Box::pin` 来包装对 `recursive()` 的调用是不够的。 49 | 为了使它能工作,我们必须把 `recursive` 改写为一个非 `async` 函数, 50 | 让它返回一个 `.boxed()` 的 `async` 代码块: 51 | 52 | ```rust,edition2018 53 | {{#include ../../examples/07_05_recursion/src/lib.rs:example}} 54 | ``` 55 | -------------------------------------------------------------------------------- /src/07_workarounds/05_async_in_traits.md: -------------------------------------------------------------------------------- 1 | # `async` in Traits 2 | 3 | Currently, `async fn` cannot be used in traits on the stable release of Rust. 4 | Since the 17th November 2022, an MVP of async-fn-in-trait is available on the nightly 5 | version of the compiler tool chain, [see here for details](https://blog.rust-lang.org/inside-rust/2022/11/17/async-fn-in-trait-nightly.html). 6 | 7 | In the meantime, there is a work around for the stable tool chain using the 8 | [async-trait crate from crates.io](https://github.com/dtolnay/async-trait). 9 | 10 | Note that using these trait methods will result in a heap allocation 11 | per-function-call.
This is not a significant cost for the vast majority 12 | of applications, but should be considered when deciding whether to use 13 | this functionality in the public API of a low-level function that is expected 14 | to be called millions of times a second. 15 | -------------------------------------------------------------------------------- /src/07_workarounds/05_async_in_traits_zh.md: -------------------------------------------------------------------------------- 1 | # 特征中的 `async` 2 | 3 | 目前,`async fn` 不能在*稳定版 Rust*的特征中使用。 4 | 自 2022 年 11 月 17 日起,async-fn-in-trait 的 MVP 在编译器工具链的 nightly 版本上可以使用, 5 | [详情请查看](https://blog.rust-lang.org/inside-rust/2022/11/17/async-fn-in-trait-nightly.html)。 6 | 7 | 与此同时,若想在稳定版的特征中使用 `async fn`,你也可以使用 8 | [async-trait crate from crates.io](https://github.com/dtolnay/async-trait)。 9 | 10 | 注意,使用这些 trait 方法,在每次函数调用时都会导致一次堆内存分配。 11 | 这对大多数的程序来说,这样的成本代价是可接受的,但是,若要在预计每秒会被调用 12 | 上百万次的底层函数的公共 API 中使用它,就需要仔细权衡。 13 | -------------------------------------------------------------------------------- /src/08_ecosystem/00_chapter.md: -------------------------------------------------------------------------------- 1 | # The Async Ecosystem 2 | Rust currently provides only the bare essentials for writing async code. 3 | Importantly, executors, tasks, reactors, combinators, and low-level I/O futures and traits 4 | are not yet provided in the standard library. In the meantime, 5 | community-provided async ecosystems fill in these gaps. 6 | 7 | The Async Foundations Team is interested in extending examples in the Async Book to cover multiple runtimes. 8 | If you're interested in contributing to this project, please reach out to us on 9 | [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/201246-wg-async-foundations.2Fbook). 10 | 11 | ## Async Runtimes 12 | Async runtimes are libraries used for executing async applications. 13 | Runtimes usually bundle together a *reactor* with one or more *executors*.
14 | Reactors provide subscription mechanisms for external events, like async I/O, interprocess communication, and timers. 15 | In an async runtime, subscribers are typically futures representing low-level I/O operations. 16 | Executors handle the scheduling and execution of tasks. 17 | They keep track of running and suspended tasks, poll futures to completion, and wake tasks when they can make progress. 18 | The word "executor" is frequently used interchangeably with "runtime". 19 | Here, we use the word "ecosystem" to describe a runtime bundled with compatible traits and features. 20 | 21 | ## Community-Provided Async Crates 22 | 23 | ### The Futures Crate 24 | The [`futures` crate](https://docs.rs/futures/) contains traits and functions useful for writing async code. 25 | This includes the `Stream`, `Sink`, `AsyncRead`, and `AsyncWrite` traits, and utilities such as combinators. 26 | These utilities and traits may eventually become part of the standard library. 27 | 28 | `futures` has its own executor, but not its own reactor, so it does not support execution of async I/O or timer futures. 29 | For this reason, it's not considered a full runtime. 30 | A common choice is to use utilities from `futures` with an executor from another crate. 31 | 32 | ### Popular Async Runtimes 33 | There is no asynchronous runtime in the standard library, and none are officially recommended. 34 | The following crates provide popular runtimes. 35 | - [Tokio](https://docs.rs/tokio/): A popular async ecosystem with HTTP, gRPC, and tracing frameworks. 36 | - [async-std](https://docs.rs/async-std/): A crate that provides asynchronous counterparts to standard library components. 37 | - [smol](https://docs.rs/smol/): A small, simplified async runtime. 38 | Provides the `Async` trait that can be used to wrap structs like `UnixStream` or `TcpListener`. 
39 | - [fuchsia-async](https://fuchsia.googlesource.com/fuchsia/+/master/src/lib/fuchsia-async/): 40 | An executor for use in the Fuchsia OS. 41 | 42 | ## Determining Ecosystem Compatibility 43 | Not all async applications, frameworks, and libraries are compatible with each other, or with every OS or platform. 44 | Most async code can be used with any ecosystem, but some frameworks and libraries require the use of a specific ecosystem. 45 | Ecosystem constraints are not always documented, but there are several rules of thumb to determine 46 | whether a library, trait, or function depends on a specific ecosystem. 47 | 48 | Any async code that interacts with async I/O, timers, interprocess communication, or tasks 49 | generally depends on a specific async executor or reactor. 50 | All other async code, such as async expressions, combinators, synchronization types, and streams 51 | are usually ecosystem independent, provided that any nested futures are also ecosystem independent. 52 | Before beginning a project, it's recommended to research relevant async frameworks and libraries to ensure 53 | compatibility with your chosen runtime and with each other. 54 | 55 | Notably, `Tokio` uses the `mio` reactor and defines its own versions of async I/O traits, 56 | including `AsyncRead` and `AsyncWrite`. 57 | On its own, it's not compatible with `async-std` and `smol`, 58 | which rely on the [`async-executor` crate](https://docs.rs/async-executor), and the `AsyncRead` and `AsyncWrite` 59 | traits defined in `futures`. 60 | 61 | Conflicting runtime requirements can sometimes be resolved by compatibility layers 62 | that allow you to call code written for one runtime within another. 63 | For example, the [`async_compat` crate](https://docs.rs/async_compat) provides a compatibility layer between 64 | `Tokio` and other runtimes. 
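As a rough sketch of what such a compatibility layer looks like in use, the following assumes the `async_compat` crate's `CompatExt` extension trait; `tokio_based_future` is a hypothetical placeholder for a future produced by a Tokio-dependent library:

```rust,ignore
use async_compat::CompatExt;

#[async_std::main]
async fn main() {
    // Wrapping a Tokio-dependent future in the `Compat` adapter lets it
    // run under a different runtime, such as async-std, by providing the
    // Tokio reactor context it expects.
    tokio_based_future().compat().await;
}
```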
65 | 66 | Libraries exposing async APIs should not depend on a specific executor or reactor, 67 | unless they need to spawn tasks or define their own async I/O or timer futures. 68 | Ideally, only binaries should be responsible for scheduling and running tasks. 69 | 70 | ## Single Threaded vs Multi-Threaded Executors 71 | Async executors can be single-threaded or multi-threaded. 72 | For example, the `async-executor` crate has both a single-threaded `LocalExecutor` and a multi-threaded `Executor`. 73 | 74 | A multi-threaded executor makes progress on several tasks simultaneously. 75 | It can speed up the execution greatly for workloads with many tasks, 76 | but synchronizing data between tasks is usually more expensive. 77 | It is recommended to measure performance for your application 78 | when you are choosing between a single- and a multi-threaded runtime. 79 | 80 | Tasks can either be run on the thread that created them or on a separate thread. 81 | Async runtimes often provide functionality for spawning tasks onto separate threads. 82 | Even if tasks are executed on separate threads, they should still be non-blocking. 83 | In order to schedule tasks on a multi-threaded executor, they must also be `Send`. 84 | Some runtimes provide functions for spawning non-`Send` tasks, 85 | which ensures every task is executed on the thread that spawned it. 86 | They may also provide functions for spawning blocking tasks onto dedicated threads, 87 | which is useful for running blocking synchronous code from other libraries. 
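To make the executor concept above concrete, here is a minimal single-threaded executor built from nothing but the standard library. It is an illustrative sketch, not a production runtime: it parks the current thread between polls and relies on the future's waker to unpark it.

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waking simply unparks the thread that is running `block_on`.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

// Poll the future to completion, parking the thread between polls.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = pin!(fut);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(out) => return out,
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    // This future is immediately ready; in a full runtime, a reactor
    // would arrange for `wake` to fire when I/O or a timer completes.
    let answer = block_on(async { 6 * 7 });
    println!("{answer}");
}
```

A real multi-threaded executor adds a task queue and work stealing on top of this loop, but the poll-until-ready contract is the same.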
88 | -------------------------------------------------------------------------------- /src/08_ecosystem/00_chapter_zh.md: -------------------------------------------------------------------------------- 1 | # 异步的生态系统 2 | 3 | Rust 目前只提供了编写异步代码所必需的最基础的功能。重要的是,标准库尚未提供像 4 | 执行器(executors)、任务(tasks)、响应器(reactors)、组合器(combinators) 5 | 与低级 I/O 的功能和特征。目前,是由社区提供的异步生态来填补这些空白的。 6 | 7 | Rust 异步基础团队,希望在 Async Book 中扩展更多的示例来涵盖多种运行时。 8 | 如果你有兴趣为这个项目做贡献,可以通过 9 | [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/201246-wg-async-foundations.2Fbook) 10 | 联系我们。 11 | 12 | ## 异步运行时 13 | 14 | 异步运行时,是用来执行异步程序的库。运行时通常由一个响应器及一个或多个执行器组成。 15 | 响应器为外部事件提供订阅机制,如异步 I/O、进程间通信和计时器。 16 | 在一个异步运行时里,订阅者通常是代表低级 I/O 操作的 futures。 17 | 执行器则负责任务的调度和执行。 18 | 它们会持续追踪运行中或挂起的任务,当任务可取得进展(就绪)时唤醒它们, 19 | 促使(通过 poll)futures 完成。 20 | “执行器”和“运行时”这两个词,通常我们可能互换使用。 21 | 这里,我们使用“生态系统”这个词来形容一个与其兼容的特征和特性组合在一起的运行时。 22 | 23 | ## 社区提供的异步箱(Crates) 24 | 25 | ### Futures 26 | 27 | [`futures` crate](https://docs.rs/futures/) 28 | 提供了在编写异步代码时实用的特征和方法。 29 | 它提供了 `Stream`、`Sink`、`AsyncRead` 和 `AsyncWrite` 特征, 30 | 及实用工具如组合器。而这些实用工具和特征,也许最终会添加到标准库里!
31 | 32 | `futures` 有自己的执行器,但没有自己的响应器,所以它不支持计时器 33 | 和异步 I/O futures 的执行。 34 | 所以,它不是一个完整的运行时。 35 | 通常我们会选择 `futures` 中的实用工具,并配合其它 crate 中的执行器来使用。 36 | 37 | ### 受欢迎的异步运行时 38 | 39 | 因为标准库中不提供异步运行时,所以并没有所谓的官方推荐。 40 | 下面的这些 crates 提供了一些受大家喜爱的运行时。 41 | - [Tokio](https://docs.rs/tokio/):一个受欢迎的异步生态系统,提供了 HTTP,gRPC 42 | 及追踪框架。 43 | - [async-std](https://docs.rs/async-std/):一个为标准库提供异步功能组件的 crate。 44 | - [smol](https://docs.rs/smol/):一个小且简洁实用的异步运行时。 45 | 提供了可用来装饰结构体,如 `UnixStream` 或 `TcpListener` 的 `Async` 特征。 46 | - [fuchsia-async](https://fuchsia.googlesource.com/fuchsia/+/master/src/lib/fuchsia-async/): 47 | Fuchsia OS 中使用的执行器。 48 | 49 | ## 确定生态系统兼容性 50 | 51 | 并非所有的异步程序、框架和库都互相兼容,不同的操作系统和架构也是如此。 52 | 大部分的异步代码都可在任何生态系统中运行,但一些框架和库只能在特定的生态上使用。 53 | 生态系统的限制性不总被提及、记录在案,但有几个经验法则可帮助我们, 54 | 来确定库、特征和方法是否依赖于特定的生态系统。 55 | 56 | 任何包括异步 I/O、计时器、跨进程通信或任务交互的异步代码, 57 | 通常依赖于一个特定的执行器或响应器。 58 | 而其它异步代码,如异步表达式、组合器、同步类型和流,一般是独立于生态系统的, 59 | 当然前提是其内部包含的 futures 也得是独立于生态系统存在的。 60 | 在开始一个项目之前,建议首先对使用到的异步框架、库做一个调查, 61 | 来确保它们与你选择的运行时以及彼此之间有着良好的兼容性。 62 | 63 | 请注意,`Tokio` 使用 `mio` 作为响应器,且定义了自己的异步 I/O 特征, 64 | 包括 `AsyncRead` 和 `AsyncWrite`。就其本身而言,它不兼容 `async-std` 和 `smol`, 65 | 因为后者依赖于 [`async-executor` crate](https://docs.rs/async-executor), 66 | 以及 `futures` 中定义的 `AsyncRead`、`AsyncWrite` 特征。 67 | 68 | 有时,你可以通过一个兼容层来解决运行时的冲突问题,使用这个兼容层, 69 | 你可以在一个运行时里编写调用其它的运行时的代码。比如, 70 | [`async_compat` crate](https://docs.rs/async_compat) 提供了一个可在 `Tokio` 71 | 和其它运行时之间使用的兼容层。 72 | 73 | 由库提供的异步 APIs 不应该依赖于某个特定的执行器或响应器, 74 | 除非它们需要生成任务,或者定义了自己的异步 I/O 或计时器 futures。 75 | 理想情况下,应该只有二进制程序负责任务的调度与运行。 76 | 77 | ## 单线程与多线程执行器 78 | 79 | 异步执行器可以是单线程或多线程的。例如, 80 | `async-executor` 箱就提供了用于单线程的 `LocalExecutor` 和多线程的 `Executor`。 81 | 82 | 多线程执行器可同时驱使多个任务取得进展。在有很多任务的工作负载上, 83 | 它可极大地提升运行速度,但在任务间同步数据也需付出更高的代价。 84 | 当你在单线程和多线程运行时之间做选择时,建议先测量一下, 85 | 你的应用在不同选择下的性能差别。 86 | 87 | 任务既可以在创建它们的线程上运行,也可以在单独的线程上运行。 88 | 通常,异步运行时会提供将任务生成到单独线程上的方法。 89 | 即使任务在单独的线程上运行,它们仍需是非阻塞的。为了在多线程执行器上调度任务, 90 | 这些任务必须实现 `Send` 特征。一些运行时提供生成 non-`Send`
任务的功能, 91 | 这样可确保每个任务都将只在生成它的线程上运行。 92 | 它们可能还会提供,将阻塞任务生成到专有线程上的功能, 93 | 这在需要调用其它库中的阻塞同步代码时非常实用。 94 | -------------------------------------------------------------------------------- /src/09_example/00_intro.md: -------------------------------------------------------------------------------- 1 | # Final Project: Building a Concurrent Web Server with Async Rust 2 | In this chapter, we'll use asynchronous Rust to modify the Rust book's 3 | [single-threaded web server](https://doc.rust-lang.org/book/ch20-01-single-threaded.html) 4 | to serve requests concurrently. 5 | ## Recap 6 | Here's what the code looked like at the end of the lesson. 7 | 8 | `src/main.rs`: 9 | ```rust 10 | {{#include ../../examples/09_01_sync_tcp_server/src/main.rs}} 11 | ``` 12 | 13 | `hello.html`: 14 | ```html 15 | {{#include ../../examples/09_01_sync_tcp_server/hello.html}} 16 | ``` 17 | 18 | `404.html`: 19 | ```html 20 | {{#include ../../examples/09_01_sync_tcp_server/404.html}} 21 | ``` 22 | 23 | If you run the server with `cargo run` and visit `127.0.0.1:7878` in your browser, 24 | you'll be greeted with a friendly message from Ferris! -------------------------------------------------------------------------------- /src/09_example/00_intro_zh.md: -------------------------------------------------------------------------------- 1 | # 最终的项目:使用异步 Rust 构建一个并发 Web 服务器 2 | 3 | 在本章中,我们将以 Rust book 中的 4 | [single-threaded web server](https://doc.rust-lang.org/book/ch20-01-single-threaded.html) 5 | 为基础,改进它以便可处理并发请求。 6 | 7 | ## 回顾 8 | 9 | 这是原书那一课结束时的代码: 10 | 11 | `src/main.rs`: 12 | ```rust 13 | {{#include ../../examples/09_01_sync_tcp_server/src/main.rs}} 14 | ``` 15 | 16 | `hello.html`: 17 | ```html 18 | {{#include ../../examples/09_01_sync_tcp_server/hello.html}} 19 | ``` 20 | 21 | `404.html`: 22 | ```html 23 | {{#include ../../examples/09_01_sync_tcp_server/404.html}} 24 | ``` 25 | 26 | 使用 `cargo run` 来启动服务,并在浏览器中访问 `127.0.0.1:7878`, 27 | 你将看到 Ferris 带来的友好的问候!
28 | -------------------------------------------------------------------------------- /src/09_example/01_running_async_code.md: -------------------------------------------------------------------------------- 1 | # Running Asynchronous Code 2 | An HTTP server should be able to serve multiple clients concurrently; 3 | that is, it should not wait for previous requests to complete before handling the current request. 4 | The book 5 | [solves this problem](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#turning-our-single-threaded-server-into-a-multithreaded-server) 6 | by creating a thread pool where each connection is handled on its own thread. 7 | Here, instead of improving throughput by adding threads, we'll achieve the same effect using asynchronous code. 8 | 9 | Let's modify `handle_connection` to return a future by declaring it an `async fn`: 10 | ```rust,ignore 11 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:handle_connection_async}} 12 | ``` 13 | 14 | Adding `async` to the function declaration changes its return type 15 | from the unit type `()` to a type that implements `Future`. 16 | 17 | If we try to compile this, the compiler warns us that it will not work: 18 | ```console 19 | $ cargo check 20 | Checking async-rust v0.1.0 (file:///projects/async-rust) 21 | warning: unused implementer of `std::future::Future` that must be used 22 | --> src/main.rs:12:9 23 | | 24 | 12 | handle_connection(stream); 25 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ 26 | | 27 | = note: `#[warn(unused_must_use)]` on by default 28 | = note: futures do nothing unless you `.await` or poll them 29 | ``` 30 | 31 | Because we haven't `await`ed or `poll`ed the result of `handle_connection`, 32 | it'll never run. If you run the server and visit `127.0.0.1:7878` in a browser, 33 | you'll see that the connection is refused; our server is not handling requests. 34 | 35 | We can't `await` or `poll` futures within synchronous code by itself. 
36 | We'll need an asynchronous runtime to handle scheduling and running futures to completion. 37 | Please consult the [section on choosing a runtime](../08_ecosystem/00_chapter.md) 38 | for more information on asynchronous runtimes, executors, and reactors. 39 | Any of the runtimes listed will work for this project, but for these examples, 40 | we've chosen to use the `async-std` crate. 41 | 42 | ## Adding an Async Runtime 43 | The following example will demonstrate refactoring synchronous code to use an async runtime; here, `async-std`. 44 | The `#[async_std::main]` attribute from `async-std` allows us to write an asynchronous main function. 45 | To use it, enable the `attributes` feature of `async-std` in `Cargo.toml`: 46 | ```toml 47 | [dependencies.async-std] 48 | version = "1.6" 49 | features = ["attributes"] 50 | ``` 51 | 52 | As a first step, we'll switch to an asynchronous main function, 53 | and `await` the future returned by the async version of `handle_connection`. 54 | Then, we'll test how the server responds. 55 | Here's what that would look like: 56 | ```rust 57 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:main_func}} 58 | ``` 59 | Now, let's test to see if our server can handle connections concurrently. 60 | Simply making `handle_connection` asynchronous doesn't mean that the server 61 | can handle multiple connections at the same time, and we'll soon see why. 62 | 63 | To illustrate this, let's simulate a slow request. 
64 | When a client makes a request to `127.0.0.1:7878/sleep`, 65 | our server will sleep for 5 seconds: 66 | 67 | ```rust,ignore 68 | {{#include ../../examples/09_03_slow_request/src/main.rs:handle_connection}} 69 | ``` 70 | This is very similar to the 71 | [simulation of a slow request](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#simulating-a-slow-request-in-the-current-server-implementation) 72 | from the Book, but with one important difference: 73 | we're using the non-blocking function `async_std::task::sleep` instead of the blocking function `std::thread::sleep`. 74 | It's important to remember that even if a piece of code is run within an `async fn` and `await`ed, it may still block. 75 | To test whether our server handles connections concurrently, we'll need to ensure that `handle_connection` is non-blocking. 76 | 77 | If you run the server, you'll see that a request to `127.0.0.1:7878/sleep` 78 | will block any other incoming requests for 5 seconds! 79 | This is because there are no other concurrent tasks that can make progress 80 | while we are `await`ing the result of `handle_connection`. 81 | In the next section, we'll see how to use async code to handle connections concurrently. 
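The key distinction above, blocking versus non-blocking sleeping, can be sketched as follows (assuming the `async-std` runtime; this is an illustration rather than this server's code):

```rust,ignore
use std::time::Duration;

// Blocking: occupies the executor thread for the entire 5 seconds;
// no other task on this thread can make progress in the meantime.
std::thread::sleep(Duration::from_secs(5));

// Non-blocking: returns a future that yields back to the executor,
// which is free to poll other tasks until the timer fires.
async_std::task::sleep(Duration::from_secs(5)).await;
```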
82 | -------------------------------------------------------------------------------- /src/09_example/01_running_async_code_zh.md: -------------------------------------------------------------------------------- 1 | # 运行异步代码 2 | 3 | 一个 HTTP 服务器应该能够同时为多个客户端提供服务;也就是说,在处理当前请求时, 4 | 它不应该去等待之前的请求完成。 5 | 原书通过[创建一个线程池](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#turning-our-single-threaded-server-into-a-multithreaded-server), 6 | 让每个连接都在自己的线程上得到处理,解决了这个问题。 7 | 现在,我们将使用异步代码来实现同样的效果,而不是增加线程数来提升吞吐量。 8 | 9 | 让我们修改 `handle_connection`,通过使用 `async fn` 声明它来让它返回一个 future。 10 | ```rust,ignore 11 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:handle_connection_async}} 12 | ``` 13 | 14 | 在函数声明时,添加 `async` 会使得它的返回值由单元类型 `()` 15 | 变为实现了 `Future` 的类型。 16 | 17 | 如果现在尝试去编译它,编译器会警告我们,它不会正常工作: 18 | 19 | ```console 20 | $ cargo check 21 | Checking async-rust v0.1.0 (file:///projects/async-rust) 22 | warning: unused implementer of `std::future::Future` that must be used 23 | --> src/main.rs:12:9 24 | | 25 | 12 | handle_connection(stream); 26 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^ 27 | | 28 | = note: `#[warn(unused_must_use)]` on by default 29 | = note: futures do nothing unless you `.await` or poll them 30 | ``` 31 | 32 | 因为我们没有 `await` 或 `poll` `handle_connection` 的返回值,它将永远不会执行。 33 | 如果你启动这个服务并在浏览器中访问 `127.0.0.1:7878`,会看到访问被拒绝, 34 | 因为服务端不会处理任何请求。 35 | 36 | 我们不能在同步代码里 `await` 或 `poll` futures。我们需要一个 37 | 可以调度并驱动 futures 完成的异步运行时。 38 | 有关异步运行时、执行器和响应器的更多信息,请参阅 39 | [选择一个运行时](../08_ecosystem/00_chapter_zh.md) 这一章节。 40 | 其中列出的运行时,每一个都可在这个项目上使用,但在下面的示例中, 41 | 我们将使用 `async-std` 箱。 42 | 43 | ## 添加一个异步运行时 44 | 45 | 下面的例子将示范如何重构同步代码来使用异步运行时,这里我们用的是 `async-std`。 46 | `async-std` 中的 `#[async_std::main]` 属性允许我们编写异步的主函数。 47 | 这需要在 `Cargo.toml` 中启用 `async-std` 的 `attributes` 功能: 48 | 49 | ```toml 50 | [dependencies.async-std] 51 | version = "1.6" 52 | features = ["attributes"] 53 | ``` 54 | 55 | 首先,我们要切换到一个异步主函数上,`await` 异步版本的 `handle_connection` 56 |
返回的 future。然后,我们将测试这个服务如何响应,它看起来是这样的: 57 | 58 | ```rust 59 | {{#include ../../examples/09_02_async_tcp_server/src/main.rs:main_func}} 60 | ``` 61 | 62 | 现在,让我们测试下看看,这个服务是否会同时处理多个连接。简单地将 63 | `handle_connection` 标记为异步并不意味着服务就可以同时处理多个连接, 64 | 很快你就知道为什么了。 65 | 66 | 为了说明这点,让我们模拟一个很慢的请求。当一个客户端请求 67 | `127.0.0.1:7878/sleep` 时,服务端将 sleep 5 秒。 68 | 69 | ```rust,ignore 70 | {{#include ../../examples/09_03_slow_request/src/main.rs:handle_connection}} 71 | ``` 72 | 73 | 这与 74 | [simulation of a slow request](https://doc.rust-lang.org/book/ch20-02-multithreaded.html#simulating-a-slow-request-in-the-current-server-implementation) 75 | 非常像,但有一个重要的区别: 76 | 我们使用非阻塞的 `async_std::task::sleep` 来替代阻塞的 `std::thread::sleep` 方法。 77 | 请记住,即使一段代码运行在 `async fn` 中并被 `await`,它仍然可能阻塞,这一点很重要。 78 | 为了测试我们的服务能否并发地处理连接,我们必须确保 `handle_connection` 是非阻塞的。 79 | 80 | 现在你启动服务,并访问 `127.0.0.1:7878/sleep` 页面时,它会将其它所有传入的请求 81 | 阻塞 5 秒! 82 | 这是因为在我们 `await` `handle_connection` 的结果时,没有其它并发任务可以取得进展。 83 | 在下面的章节,我们将介绍如何使用异步代码来并发地处理连接。 84 | -------------------------------------------------------------------------------- /src/09_example/02_handling_connections_concurrently.md: -------------------------------------------------------------------------------- 1 | # Handling Connections Concurrently 2 | The problem with our code so far is that `listener.incoming()` is a blocking iterator. 3 | The executor can't run other futures while `listener` waits on incoming connections, 4 | and we can't handle a new connection until we're done with the previous one. 5 | 6 | In order to fix this, we'll transform `listener.incoming()` from a blocking Iterator 7 | to a non-blocking Stream. Streams are similar to Iterators, but can be consumed asynchronously. 8 | For more information, see the [chapter on Streams](../05_streams/01_chapter.md).
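The Iterator-versus-Stream distinction can be sketched with the `futures` crate's stream utilities (a hedged illustration, not this server's code):

```rust,ignore
use futures::stream::{self, StreamExt};

// An Iterator is consumed synchronously...
let total: i32 = vec![1, 2, 3].into_iter().sum();

// ...while a Stream is consumed asynchronously: each element may only
// become available after awaiting, letting the executor interleave work.
let total = stream::iter(vec![1, 2, 3])
    .fold(0, |acc, x| async move { acc + x })
    .await;
```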
9 | 10 | Let's replace our blocking `std::net::TcpListener` with the non-blocking `async_std::net::TcpListener`, 11 | and update our connection handler to accept an `async_std::net::TcpStream`: 12 | ```rust,ignore 13 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:handle_connection}} 14 | ``` 15 | 16 | The asynchronous version of `TcpListener` implements the `Stream` trait for `listener.incoming()`, 17 | a change which provides two benefits. 18 | The first is that `listener.incoming()` no longer blocks the executor. 19 | The executor can now yield to other pending futures 20 | while there are no incoming TCP connections to be processed. 21 | 22 | The second benefit is that elements from the Stream can optionally be processed concurrently, 23 | using a Stream's `for_each_concurrent` method. 24 | Here, we'll take advantage of this method to handle each incoming request concurrently. 25 | We'll need to import the `Stream` trait from the `futures` crate, so our Cargo.toml now looks like this: 26 | ```diff 27 | +[dependencies] 28 | +futures = "0.3" 29 | 30 | [dependencies.async-std] 31 | version = "1.6" 32 | features = ["attributes"] 33 | ``` 34 | 35 | Now, we can handle each connection concurrently by passing `handle_connection` in through a closure function. 36 | The closure function takes ownership of each `TcpStream`, and is run as soon as a new `TcpStream` becomes available. 37 | As long as `handle_connection` does not block, a slow request will no longer prevent other requests from completing. 38 | ```rust,ignore 39 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:main_func}} 40 | ``` 41 | # Serving Requests in Parallel 42 | Our example so far has largely presented concurrency (using async code) 43 | as an alternative to parallelism (using threads). 44 | However, async code and threads are not mutually exclusive. 45 | In our example, `for_each_concurrent` processes each connection concurrently, but on the same thread. 
46 | The `async-std` crate allows us to spawn tasks onto separate threads as well. 47 | Because `handle_connection` is both `Send` and non-blocking, it's safe to use with `async_std::task::spawn`. 48 | Here's what that would look like: 49 | ```rust 50 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:main_func}} 51 | ``` 52 | Now we are using both concurrency and parallelism to handle multiple requests at the same time! 53 | See the [section on multithreaded executors](../08_ecosystem/00_chapter.md#single-threading-vs-multithreading) 54 | for more information. -------------------------------------------------------------------------------- /src/09_example/02_handling_connections_concurrently_zh.md: -------------------------------------------------------------------------------- 1 | # 并发地处理连接 2 | 3 | 目前我们的代码中的问题是 `listener.incoming()` 是一个阻塞的迭代器。 4 | 执行器无法在 `listener` 等待一个入站连接时,运行其它 futures, 5 | 这便导致了我们只有等之前的请求完成,才能处理新的连接。 6 | 7 | 为了解决这个问题,我们将把 `listener.incoming()` 从一个阻塞迭代器转变为一个非阻塞流。 8 | 流类似于迭代器,但可以被异步地消费。 9 | 详情可回看 [Streams](../05_streams/01_chapter_zh.md)。 10 | 11 | 让我们使用 `async_std::net::TcpListener` 替代 `std::net::TcpListener`, 12 | 并更新我们的连接处理函数,让它接受 `async_std::net::TcpStream`: 13 | 14 | ```rust,ignore 15 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:handle_connection}} 16 | ``` 17 | 18 | 这个异步版本的 `TcpListener` 为 `listener.incoming()` 实现了 `Stream` 特征, 19 | 这带来了两个好处。其一,`listener.incoming()` 不会再阻塞执行器了。 20 | 在没有待处理的入站 TCP 连接时,执行器可以让其它挂起的 futures 去执行。 21 | 22 | 第二个好处是,可以使用 `Stream` 的 `for_each_concurrent` 方法,来并发地处理来自 23 | `Stream` 的元素。在这里,我们将利用这个方法来并发处理每个传入的请求。 24 | 我们需要从 `futures` 箱中导入 `Stream` 特征,现在 Cargo.toml 看起来是这样的: 25 | 26 | ```diff 27 | +[dependencies] 28 | +futures = "0.3" 29 | 30 | [dependencies.async-std] 31 | version = "1.6" 32 | features = ["attributes"] 33 | ``` 34 | 35 | 现在,我们可以通过闭包函数传入 `handle_connection` 来并发处理每个连接。 36 | 闭包函数将获得每个 `TcpStream` 的所有权,并在新的 `TcpStream` 就绪时立即执行。 37 | 只要 `handle_connection` 不阻塞,一个慢请求就不会再阻止其它请求完成。
38 | 39 | ```rust,ignore 40 | {{#include ../../examples/09_04_concurrent_tcp_server/src/main.rs:main_func}} 41 | ``` 42 | 43 | # 并行处理请求 44 | 45 | 到目前为止,我们的示例在很大程度上,将并发(通过异步代码) 46 | 作为并行(使用线程)的替代方案。但是,异步代码和线程并非互斥。 47 | 在我们的示例中,`for_each_concurrent` 在同一个线程上并发地处理每个连接。 48 | 而 `async-std` 箱也允许我们将任务生成到单独的线程上。 49 | 因为 `handle_connection` 实现了 `Send` 且非阻塞,所以可以安全地配合 50 | `async_std::task::spawn` 使用。代码是这样的: 51 | 52 | ```rust 53 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:main_func}} 54 | ``` 55 | 56 | 现在我们可以同时使用并发和并行来同时处理多个连接!详情可查看 57 | [多线程执行器](../08_ecosystem/00_chapter_zh.md#single-threading-vs-multithreading)。 58 | -------------------------------------------------------------------------------- /src/09_example/03_tests.md: -------------------------------------------------------------------------------- 1 | # Testing the TCP Server 2 | Let's move on to testing our `handle_connection` function. 3 | 4 | First, we need a `TcpStream` to work with. 5 | In an end-to-end or integration test, we might want to make a real TCP connection 6 | to test our code. 7 | One strategy for doing this is to start a listener on `localhost` port 0. 8 | Port 0 isn't a valid UNIX port, but it'll work for testing. 9 | The operating system will pick an open TCP port for us. 10 | 11 | Instead, in this example we'll write a unit test for the connection handler, 12 | to check that the correct responses are returned for the respective inputs. 13 | To keep our unit test isolated and deterministic, we'll replace the `TcpStream` with a mock. 14 | 15 | First, we'll change the signature of `handle_connection` to make it easier to test. 16 | `handle_connection` doesn't actually require an `async_std::net::TcpStream`; 17 | it requires any struct that implements `async_std::io::Read`, `async_std::io::Write`, and `marker::Unpin`. 18 | Changing the type signature to reflect this allows us to pass a mock for testing.
19 | ```rust,ignore 20 | use async_std::io::{Read, Write}; 21 | 22 | async fn handle_connection(mut stream: impl Read + Write + Unpin) { 23 | ``` 24 | 25 | Next, let's build a mock `TcpStream` that implements these traits. 26 | First, let's implement the `Read` trait, with one method, `poll_read`. 27 | Our mock `TcpStream` will contain some data that is copied into the read buffer, 28 | and we'll return `Poll::Ready` to signify that the read is complete. 29 | ```rust,ignore 30 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_read}} 31 | ``` 32 | 33 | Our implementation of `Write` is very similar, 34 | although we'll need to write three methods: `poll_write`, `poll_flush`, and `poll_close`. 35 | `poll_write` will copy any input data into the mock `TcpStream`, and return `Poll::Ready` when complete. 36 | No work needs to be done to flush or close the mock `TcpStream`, so `poll_flush` and `poll_close` 37 | can just return `Poll::Ready`. 38 | ```rust,ignore 39 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_write}} 40 | ``` 41 | 42 | Lastly, our mock will need to implement `Unpin`, signifying that its location in memory can safely be moved. 43 | For more information on pinning and the `Unpin` trait, see the [section on pinning](../04_pinning/01_chapter.md). 44 | ```rust,ignore 45 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:unpin}} 46 | ``` 47 | 48 | Now we're ready to test the `handle_connection` function. 49 | After setting up the `MockTcpStream` containing some initial data, 50 | we can run `handle_connection` using the attribute `#[async_std::test]`, similarly to how we used `#[async_std::main]`. 51 | To ensure that `handle_connection` works as intended, we'll check that the correct data 52 | was written to the `MockTcpStream` based on its initial contents. 
53 | ```rust,ignore 54 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:test}} 55 | ``` 56 | -------------------------------------------------------------------------------- /src/09_example/03_tests_zh.md: -------------------------------------------------------------------------------- 1 | # 测试 TCP 服务 2 | 3 | 让我们继续测试我们的 `handle_connection` 函数。 4 | 5 | 首先,我们需要一个 `TcpStream`。 6 | 在端到端或集成测试中,我们可能想要建立一个真正的 TCP 连接去测试我们的代码。 7 | 一种策略是在 `localhost` 的 0 端口上启用监听器。 8 | 端口 0 在 UNIX 上并不是一个合法端口,但我们可以在测试中使用它。 9 | 操作系统会为我们选择一个打开的 TCP 端口。 10 | 11 | 然而,在这个示例中,我们将为连接处理器写一个单元测试, 12 | 来检查是否为对应的请求返回了正确的响应。 13 | 为了保证我们的单元测试的隔离性和确定性,我们将使用模拟代替 `TcpStream`。 14 | 15 | 首先,我们将改变 `handle_connection` 的类型签名,让它更易于测试。 16 | `handle_connection` 并不一定要接收一个 `async_std::net::TcpStream`; 17 | 它只需要接收一个实现了 `async_std::io::Read`、`async_std::io::Write` 和 18 | `marker::Unpin` 的结构体。变更类型签名以便允许我们通过模拟进行测试。 19 | 20 | ```rust,ignore 21 | use std::marker::Unpin; 22 | use async_std::io::{Read, Write}; 23 | 24 | async fn handle_connection(mut stream: impl Read + Write + Unpin) { 25 | ``` 26 | 27 | 下面,让我们构建一个实现了这些特征的模拟 `TcpStream`。 28 | 首先,去实现 `Read` 特征,它只有一个函数——`poll_read`。 29 | 我们的模拟 `TcpStream` 将包含一些会被复制到读取缓冲区的数据, 30 | 然后返回 `Poll::Ready` 来提示读取已完成。 31 | 32 | ```rust,ignore 33 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_read}} 34 | ``` 35 | 36 | 实现 `Write` 也类似,但我们需要实现 `poll_write`、`poll_flush` 和 `poll_close` 37 | 三个函数。`poll_write` 将拷贝输入的数据到模拟 `TcpStream` 中,并在完成后返回 38 | `Poll::Ready`。模拟 `TcpStream` 不需要执行 flush 和 close,所以 `poll_flush` 39 | 和 `poll_close` 直接返回 `Poll::Ready` 即可。 40 | 41 | ```rust,ignore 42 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:mock_write}} 43 | ``` 44 | 45 | 最后,我们的模拟 `TcpStream` 需要实现 `Unpin`,来表示它在内存中的位置可安全地移动。 46 | 细节实现可回看 [固定](../04_pinning/01_chapter_zh.md)。 47 | 48 | ```rust,ignore 49 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:unpin}} 50 | ``` 51 | 52 | 现在我们准备好去测试 `handle_connection` 函数了。 53 | 在我们给 `MockTcpStream`
设置一些初始化数据后,我们可以通过 `#[async_std::test]` 54 | 属性来运行 `handle_connection` 函数,它的使用方法和 `#[async_std::main]` 类似。 55 | 为了确保 `handle_connection` 是按我们预期设计工作的,我们将根据其初始数据, 56 | 来检查是否已将正确的数据写入 `MockTcpStream`。 57 | 58 | ```rust,ignore 59 | {{#include ../../examples/09_05_final_tcp_server/src/main.rs:test}} 60 | ``` 61 | -------------------------------------------------------------------------------- /src/12_appendix/01_translations.md: -------------------------------------------------------------------------------- 1 | # Appendix : Translations of the Book 2 | 3 | For resources in languages other than English. 4 | 5 | - [Русский](https://doc.rust-lang.ru/async-book/) 6 | - [Français](https://jimskapt.github.io/async-book-fr/) 7 | - [فارسی](https://rouzbehsbz.github.io/rust-async-book/) 8 | -------------------------------------------------------------------------------- /src/12_appendix/01_translations_zh.md: -------------------------------------------------------------------------------- 1 | # 附录:本书译文 2 | 3 | 本书的其它语言版本 4 | 5 | - [Русский](https://doc.rust-lang.ru/async-book/) 6 | - [Français](https://jimskapt.github.io/async-book-fr/) 7 | - [中文](https://suibianxiedianer.github.io/async-book/) 8 | -------------------------------------------------------------------------------- /src/SUMMARY.md: -------------------------------------------------------------------------------- 1 | # Table of Contents 2 | 3 | - [Getting Started](01_getting_started/01_chapter.md) 4 | - [Why Async?](01_getting_started/02_why_async.md) 5 | - [The State of Asynchronous Rust](01_getting_started/03_state_of_async_rust.md) 6 | - [`async`/`.await` Primer](01_getting_started/04_async_await_primer.md) 7 | - [Under the Hood: Executing `Future`s and Tasks](02_execution/01_chapter.md) 8 | - [The `Future` Trait](02_execution/02_future.md) 9 | - [Task Wakeups with `Waker`](02_execution/03_wakeups.md) 10 | - [Applied: Build an Executor](02_execution/04_executor.md) 11 | - [Executors and System 
IO](02_execution/05_io.md) 12 | - [`async`/`await`](03_async_await/01_chapter.md) 13 | - [Pinning](04_pinning/01_chapter.md) 14 | - [Streams](05_streams/01_chapter.md) 15 | - [Iteration and Concurrency](05_streams/02_iteration_and_concurrency.md) 16 | - [Executing Multiple Futures at a Time](06_multiple_futures/01_chapter.md) 17 | - [`join!`](06_multiple_futures/02_join.md) 18 | - [`select!`](06_multiple_futures/03_select.md) 19 | - [Spawning](06_multiple_futures/04_spawning.md) 20 | - [TODO: Cancellation and Timeouts]() 21 | - [TODO: `FuturesUnordered`]() 22 | - [Workarounds to Know and Love](07_workarounds/01_chapter.md) 23 | - [`?` in `async` Blocks](07_workarounds/02_err_in_async_blocks.md) 24 | - [`Send` Approximation](07_workarounds/03_send_approximation.md) 25 | - [Recursion](07_workarounds/04_recursion.md) 26 | - [`async` in Traits](07_workarounds/05_async_in_traits.md) 27 | - [The Async Ecosystem](08_ecosystem/00_chapter.md) 28 | - [Final Project: HTTP Server](09_example/00_intro.md) 29 | - [Running Asynchronous Code](09_example/01_running_async_code.md) 30 | - [Handling Connections Concurrently](09_example/02_handling_connections_concurrently.md) 31 | - [Testing the Server](09_example/03_tests.md) 32 | - [TODO: I/O]() 33 | - [TODO: `AsyncRead` and `AsyncWrite`]() 34 | - [TODO: Asynchronous Design Patterns: Solutions and Suggestions]() 35 | - [TODO: Modeling Servers and the Request/Response Pattern]() 36 | - [TODO: Managing Shared State]() 37 | - [Appendix: Translations of the Book](12_appendix/01_translations.md) 38 | -------------------------------------------------------------------------------- /src/SUMMARY_zh.md: -------------------------------------------------------------------------------- 1 | # Table of Contents 2 | 3 | - [让我们开始吧](01_getting_started/01_chapter_zh.md) 4 | - [为什么是异步?](01_getting_started/02_why_async_zh.md) 5 | - [Rust 的异步状态](01_getting_started/03_state_of_async_rust_zh.md) 6 | - [`async`/`.await` 
入门](01_getting_started/04_async_await_primer_zh.md) 7 | - [深入了解:执行 `Future` 和任务](02_execution/01_chapter_zh.md) 8 | - [`Future` 特征](02_execution/02_future_zh.md) 9 | - [通过 `Waker` 唤醒任务](02_execution/03_wakeups_zh.md) 10 | - [应用:构建一个执行器](02_execution/04_executor_zh.md) 11 | - [执行器与系统 IO](02_execution/05_io_zh.md) 12 | - [`async`/`await`](03_async_await/01_chapter_zh.md) 13 | - [固定](04_pinning/01_chapter_zh.md) 14 | - [Streams](05_streams/01_chapter_zh.md) 15 | - [迭代和并发](05_streams/02_iteration_and_concurrency_zh.md) 16 | - [一次执行多个 Futures](06_multiple_futures/01_chapter_zh.md) 17 | - [`join!`](06_multiple_futures/02_join_zh.md) 18 | - [`select!`](06_multiple_futures/03_select_zh.md) 19 | - [Spawning](06_multiple_futures/04_spawning_zh.md) 20 | - [TODO: Cancellation and Timeouts]() 21 | - [TODO: `FuturesUnordered`]() 22 | - [Workarounds to Know and Love](07_workarounds/01_chapter_zh.md) 23 | - [`async` 代码块中的错误](07_workarounds/02_err_in_async_blocks_zh.md) 24 | - [`Send` 类](07_workarounds/03_send_approximation_zh.md) 25 | - [递归](07_workarounds/04_recursion_zh.md) 26 | - [特征中的 `async`](07_workarounds/05_async_in_traits_zh.md) 27 | - [异步的生态系统](08_ecosystem/00_chapter_zh.md) 28 | - [最终的项目: HTTP 服务器](09_example/00_intro_zh.md) 29 | - [运行异步代码](09_example/01_running_async_code_zh.md) 30 | - [并发地处理连接](09_example/02_handling_connections_concurrently_zh.md) 31 | - [测试服务](09_example/03_tests_zh.md) 32 | - [TODO: I/O]() 33 | - [TODO: `AsyncRead` and `AsyncWrite`]() 34 | - [TODO: Asynchronous Design Patterns: Solutions and Suggestions]() 35 | - [TODO: Modeling Servers and the Request/Response Pattern]() 36 | - [TODO: Managing Shared State]() 37 | -------------------------------------------------------------------------------- /src/assets/swap_problem.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/suibianxiedianer/async-book/a2f6bda848866708ebc1615fe1af36c5977308ad/src/assets/swap_problem.jpg 
--------------------------------------------------------------------------------