├── .gitattributes ├── .github ├── CODE_OF_CONDUCT.md ├── CONTRIBUTING.md └── workflows │ └── ci.yaml ├── .gitignore ├── Cargo.toml ├── LICENSE-APACHE ├── LICENSE-MIT ├── README.md ├── benches ├── bench.rs ├── compare.rs └── utils │ ├── countdown_futures.rs │ ├── countdown_streams.rs │ └── mod.rs ├── examples └── happy_eyeballs.rs ├── src ├── collections │ ├── mod.rs │ └── vec.rs ├── concurrent_stream │ ├── enumerate.rs │ ├── for_each.rs │ ├── from_concurrent_stream.rs │ ├── from_stream.rs │ ├── into_concurrent_stream.rs │ ├── limit.rs │ ├── map.rs │ ├── mod.rs │ ├── take.rs │ └── try_for_each.rs ├── future │ ├── future_group.rs │ ├── futures_ext.rs │ ├── join │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs │ ├── mod.rs │ ├── race │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs │ ├── race_ok │ │ ├── array │ │ │ ├── error.rs │ │ │ └── mod.rs │ │ ├── mod.rs │ │ ├── tuple │ │ │ ├── error.rs │ │ │ └── mod.rs │ │ └── vec │ │ │ ├── error.rs │ │ │ └── mod.rs │ ├── try_join │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs │ └── wait_until.rs ├── lib.rs ├── stream │ ├── chain │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs │ ├── into_stream.rs │ ├── merge │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs │ ├── mod.rs │ ├── stream_ext.rs │ ├── stream_group.rs │ ├── wait_until.rs │ └── zip │ │ ├── array.rs │ │ ├── mod.rs │ │ ├── tuple.rs │ │ └── vec.rs └── utils │ ├── array.rs │ ├── channel.rs │ ├── futures │ ├── array.rs │ ├── mod.rs │ └── vec.rs │ ├── indexer.rs │ ├── mod.rs │ ├── output │ ├── array.rs │ ├── mod.rs │ └── vec.rs │ ├── pin.rs │ ├── poll_state │ ├── array.rs │ ├── maybe_done.rs │ ├── mod.rs │ ├── poll_state.rs │ └── vec.rs │ ├── private.rs │ ├── stream.rs │ ├── tuple.rs │ └── wakers │ ├── array │ ├── mod.rs │ ├── no_std.rs │ ├── readiness_array.rs │ ├── waker.rs │ └── waker_array.rs │ ├── dummy.rs │ ├── mod.rs │ └── vec │ ├── mod.rs │ ├── no_std.rs │ ├── readiness_vec.rs │ ├── waker.rs │ └── waker_vec.rs └── tests ├── no_std.rs └── regression-155.rs /.gitattributes: -------------------------------------------------------------------------------- 1 | # Auto detect text files and perform LF normalization 2 | * text=auto 3 | -------------------------------------------------------------------------------- /.github/CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, gender identity and expression, level of 9 | experience, 10 | education, socio-economic status, nationality, personal appearance, race, 11 | religion, or sexual identity and orientation. 
12 | 13 | ## Our Standards 14 | 15 | Examples of behavior that contributes to creating a positive environment 16 | include: 17 | 18 | - Using welcoming and inclusive language 19 | - Being respectful of differing viewpoints and experiences 20 | - Gracefully accepting constructive criticism 21 | - Focusing on what is best for the community 22 | - Showing empathy towards other community members 23 | 24 | Examples of unacceptable behavior by participants include: 25 | 26 | - The use of sexualized language or imagery and unwelcome sexual attention or 27 | advances 28 | - Trolling, insulting/derogatory comments, and personal or political attacks 29 | - Public or private harassment 30 | - Publishing others' private information, such as a physical or electronic 31 | address, without explicit permission 32 | - Other conduct which could reasonably be considered inappropriate in a 33 | professional setting 34 | 35 | 36 | ## Our Responsibilities 37 | 38 | Project maintainers are responsible for clarifying the standards of acceptable 39 | behavior and are expected to take appropriate and fair corrective action in 40 | response to any instances of unacceptable behavior. 41 | 42 | Project maintainers have the right and responsibility to remove, edit, or 43 | reject comments, commits, code, wiki edits, issues, and other contributions 44 | that are not aligned to this Code of Conduct, or to ban temporarily or 45 | permanently any contributor for other behaviors that they deem inappropriate, 46 | threatening, offensive, or harmful. 47 | 48 | ## Scope 49 | 50 | This Code of Conduct applies both within project spaces and in public spaces 51 | when an individual is representing the project or its community. Examples of 52 | representing a project or community include using an official project e-mail 53 | address, posting via an official social media account, or acting as an appointed 54 | representative at an online or offline event. Representation of a project may be 55 | further defined and clarified by project maintainers. 56 | 57 | ## Enforcement 58 | 59 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 60 | reported by contacting the project team at yoshuawuyts@gmail.com, or through 61 | IRC. All complaints will be reviewed and investigated and will result in a 62 | response that is deemed necessary and appropriate to the circumstances. The 63 | project team is obligated to maintain confidentiality with regard to the 64 | reporter of an incident. 65 | Further details of specific enforcement policies may be posted separately. 66 | 67 | Project maintainers who do not follow or enforce the Code of Conduct in good 68 | faith may face temporary or permanent repercussions as determined by other 69 | members of the project's leadership. 70 | 71 | ## Attribution 72 | 73 | This Code of Conduct is adapted from the Contributor Covenant, version 1.4, 74 | available at 75 | https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 76 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | Contributions include code, documentation, answering user questions, running the 3 | project's infrastructure, and advocating for all types of users. 4 | 5 | The project welcomes all contributions from anyone willing to work in good faith 6 | with other contributors and the community. 
No contribution is too small and all 7 | contributions are valued. 8 | 9 | This guide explains the process for contributing to the project's GitHub 10 | Repository. 11 | 12 | - [Code of Conduct](#code-of-conduct) 13 | - [Bad Actors](#bad-actors) 14 | 15 | ## Code of Conduct 16 | The project has a [Code of Conduct](./CODE_OF_CONDUCT.md) that *all* 17 | contributors are expected to follow. This code describes the *minimum* behavior 18 | expectations for all contributors. 19 | 20 | As a contributor, how you choose to act and interact towards your 21 | fellow contributors, as well as to the community, will reflect back not only 22 | on yourself but on the project as a whole. The Code of Conduct is designed and 23 | intended, above all else, to help establish a culture within the project that 24 | allows anyone and everyone who wants to contribute to feel safe doing so. 25 | 26 | Should any individual act in any way that is considered in violation of the 27 | [Code of Conduct](./CODE_OF_CONDUCT.md), corrective actions will be taken. It is 28 | possible, however, for any individual to *act* in such a manner that is not in 29 | violation of the strict letter of the Code of Conduct guidelines while still 30 | going completely against the spirit of what that Code is intended to accomplish. 31 | 32 | Open, diverse, and inclusive communities live and die on the basis of trust. 33 | Contributors can disagree with one another so long as they trust that those 34 | disagreements are in good faith and everyone is working towards a common 35 | goal. 36 | 37 | ## Bad Actors 38 | All contributors to tacitly agree to abide by both the letter and 39 | spirit of the [Code of Conduct](./CODE_OF_CONDUCT.md). Failure, or 40 | unwillingness, to do so will result in contributions being respectfully 41 | declined. 42 | 43 | A *bad actor* is someone who repeatedly violates the *spirit* of the Code of 44 | Conduct through consistent failure to self-regulate the way in which they 45 | interact with other contributors in the project. In doing so, bad actors 46 | alienate other contributors, discourage collaboration, and generally reflect 47 | poorly on the project as a whole. 48 | 49 | Being a bad actor may be intentional or unintentional. Typically, unintentional 50 | bad behavior can be easily corrected by being quick to apologize and correct 51 | course *even if you are not entirely convinced you need to*. Giving other 52 | contributors the benefit of the doubt and having a sincere willingness to admit 53 | that you *might* be wrong is critical for any successful open collaboration. 54 | 55 | Don't be a bad actor. 
56 | -------------------------------------------------------------------------------- /.github/workflows/ci.yaml: -------------------------------------------------------------------------------- 1 | name: CI 2 | 3 | on: 4 | pull_request: 5 | push: 6 | branches: 7 | - staging 8 | - trying 9 | 10 | env: 11 | RUSTFLAGS: -Dwarnings 12 | 13 | jobs: 14 | build_and_test: 15 | name: Build and test 16 | runs-on: ${{ matrix.os }} 17 | strategy: 18 | matrix: 19 | os: [ubuntu-latest, windows-latest, macOS-latest] 20 | rust: [stable] 21 | 22 | steps: 23 | - uses: actions/checkout@master 24 | 25 | - name: Install ${{ matrix.rust }} 26 | uses: actions-rs/toolchain@v1 27 | with: 28 | toolchain: ${{ matrix.rust }} 29 | override: true 30 | 31 | - name: check 32 | uses: actions-rs/cargo@v1 33 | with: 34 | command: check 35 | args: --all --bins --examples 36 | 37 | - name: check no-std 38 | uses: actions-rs/cargo@v1 39 | with: 40 | command: check 41 | args: --all --no-default-features 42 | 43 | - name: check alloc 44 | uses: actions-rs/cargo@v1 45 | with: 46 | command: check 47 | args: --all --no-default-features --features alloc 48 | 49 | - name: tests 50 | uses: actions-rs/cargo@v1 51 | with: 52 | command: test 53 | args: --all 54 | 55 | msrv: 56 | runs-on: ubuntu-latest 57 | steps: 58 | - uses: actions/checkout@v4 59 | - uses: taiki-e/install-action@cargo-hack 60 | - run: cargo hack check --rust-version --workspace --all-targets --ignore-private 61 | 62 | miri: 63 | name: "Build and test (miri, nightly)" 64 | runs-on: ubuntu-latest 65 | steps: 66 | - uses: actions/checkout@v3 67 | - name: Install Miri 68 | run: | 69 | rustup toolchain install nightly --component miri 70 | rustup override set nightly 71 | cargo miri setup 72 | - name: Test with Miri 73 | run: cargo miri test 74 | 75 | check_clippy_fmt_and_docs: 76 | name: Checking clippy, fmt and docs 77 | runs-on: ubuntu-latest 78 | steps: 79 | - uses: actions/checkout@master 80 | - uses: actions-rs/toolchain@v1 81 | with: 82 | toolchain: nightly 83 | components: rustfmt, clippy 84 | override: true 85 | 86 | - name: clippy 87 | run: cargo clippy -- -Dwarnings 88 | 89 | - name: fmt 90 | run: cargo fmt --all -- --check 91 | 92 | - name: Docs 93 | run: cargo doc 94 | 95 | semver-checks: 96 | runs-on: ubuntu-latest 97 | steps: 98 | - name: Checkout 99 | uses: actions/checkout@v3 100 | - name: Check semver 101 | uses: obi1kenobi/cargo-semver-checks-action@v2 102 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | target/ 2 | tmp/ 3 | Cargo.lock 4 | .DS_Store 5 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "futures-concurrency" 3 | version = "7.6.3" 4 | license = "MIT OR Apache-2.0" 5 | repository = "https://github.com/yoshuawuyts/futures-concurrency" 6 | documentation = "https://docs.rs/futures-concurrency" 7 | description = "Structured concurrency operations for async Rust" 8 | readme = "README.md" 9 | edition = "2021" 10 | keywords = ["async", "concurrency"] 11 | categories = ["asynchronous", "concurrency"] 12 | authors = ["Yoshua Wuyts "] 13 | rust-version = "1.75.0" 14 | 15 | [profile.bench] 16 | debug = true 17 | 18 | [lib] 19 | bench = false 20 | 21 | [[bench]] 22 | name = "bench" 23 | harness = false 24 | 25 | [[bench]] 26 | name = "compare" 27 | harness = false 28 | 29 | 
[features] 30 | default = ["std"] 31 | std = ["alloc", "futures-lite/std"] 32 | alloc = ["dep:fixedbitset", "dep:slab", "dep:smallvec", "futures-lite/alloc"] 33 | 34 | [dependencies] 35 | fixedbitset = { version = "0.5.7", default-features = false, optional = true } 36 | futures-core = { version = "0.3", default-features = false } 37 | futures-lite = { version = "2.5.0", default-features = false } 38 | pin-project = "1.1" 39 | slab = { version = "0.4.9", optional = true } 40 | smallvec = { version = "1.13", optional = true } 41 | futures-buffered = "0.2.9" 42 | 43 | [dev-dependencies] 44 | async-io = "2.4" 45 | async-std = { version = "1.13.0", features = ["attributes"] } 46 | criterion = { version = "0.5", features = [ 47 | "async", 48 | "async_futures", 49 | "html_reports", 50 | ] } 51 | futures = "0.3" 52 | futures-time = "3.0.0" 53 | itertools = "0.13" 54 | lending-stream = "1.0.1" 55 | rand = "0.8.5" 56 | tokio = { version = "1.41", features = ["macros", "time", "rt-multi-thread"] } 57 | -------------------------------------------------------------------------------- /LICENSE-MIT: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2020 Yoshua Wuyts 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 |

# futures-concurrency

Structured concurrency operations for async Rust

Badges: Crates.io version | Download | docs.rs docs

API Docs | Releases | Contributing
43 | 44 | Performant, portable, structured concurrency operations for async Rust. It 45 | works with any runtime, does not erase lifetimes, always handles 46 | cancellation, and always returns output to the caller. 47 | 48 | `futures-concurrency` provides concurrency operations for both groups of futures 49 | and streams. Both for bounded and unbounded sets of futures and streams. In both 50 | cases performance should be on par with, if not exceed conventional executor 51 | implementations. 52 | 53 | ## Examples 54 | 55 | **Await multiple futures of different types** 56 | ```rust 57 | use futures_concurrency::prelude::*; 58 | use std::future; 59 | 60 | let a = future::ready(1u8); 61 | let b = future::ready("hello"); 62 | let c = future::ready(3u16); 63 | assert_eq!((a, b, c).join().await, (1, "hello", 3)); 64 | ``` 65 | 66 | **Concurrently process items in a stream** 67 | 68 | ```rust 69 | use futures_concurrency::prelude::*; 70 | 71 | let v: Vec<_> = vec!["chashu", "nori"] 72 | .into_co_stream() 73 | .map(|msg| async move { format!("hello {msg}") }) 74 | .collect() 75 | .await; 76 | 77 | assert_eq!(v, &["hello chashu", "hello nori"]); 78 | ``` 79 | 80 | **Access stack data outside the futures' scope** 81 | 82 | _Adapted from [`std::thread::scope`](https://doc.rust-lang.org/std/thread/fn.scope.html)._ 83 | 84 | ```rust 85 | use futures_concurrency::prelude::*; 86 | 87 | let mut container = vec![1, 2, 3]; 88 | let mut num = 0; 89 | 90 | let a = async { 91 | println!("hello from the first future"); 92 | dbg!(&container); 93 | }; 94 | 95 | let b = async { 96 | println!("hello from the second future"); 97 | num += container[0] + container[2]; 98 | }; 99 | 100 | println!("hello from the main future"); 101 | let _ = (a, b).join().await; 102 | container.push(4); 103 | assert_eq!(num, container.len()); 104 | ``` 105 | 106 | ## Installation 107 | ```sh 108 | $ cargo add futures-concurrency 109 | ``` 110 | 111 | ## Contributing 112 | Want to join us? Check out our ["Contributing" guide][contributing] and take a 113 | look at some of these issues: 114 | 115 | - [Issues labeled "good first issue"][good-first-issue] 116 | - [Issues labeled "help wanted"][help-wanted] 117 | 118 | [contributing]: https://github.com/yoshuawuyts/futures-concurrency/blob/master.github/CONTRIBUTING.md 119 | [good-first-issue]: https://github.com/yoshuawuyts/futures-concurrency/labels/good%20first%20issue 120 | [help-wanted]: https://github.com/yoshuawuyts/futures-concurrency/labels/help%20wanted 121 | 122 | ## License 123 | 124 | 125 | Licensed under either of Apache License, Version 126 | 2.0 or MIT license at your option. 127 | 128 | 129 |
130 | 131 | 132 | Unless you explicitly state otherwise, any contribution intentionally submitted 133 | for inclusion in this crate by you, as defined in the Apache-2.0 license, shall 134 | be dual licensed as above, without any additional terms or conditions. 135 | 136 | -------------------------------------------------------------------------------- /benches/compare.rs: -------------------------------------------------------------------------------- 1 | use criterion::async_executor::FuturesExecutor; 2 | use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion}; 3 | use futures_concurrency::prelude::*; 4 | 5 | mod utils; 6 | 7 | fn join_vs_join_all(c: &mut Criterion) { 8 | let mut group = c.benchmark_group("compare vec::join vs join_all"); 9 | for i in [10, 100, 1000].iter() { 10 | group.bench_with_input(BenchmarkId::new("futures-concurrency", i), i, |b, i| { 11 | b.to_async(FuturesExecutor).iter(|| async { 12 | let futs = utils::futures_vec(*i as usize); 13 | let output = futs.join().await; 14 | assert_eq!(output.len(), *i as usize); 15 | }) 16 | }); 17 | group.bench_with_input(BenchmarkId::new("futures-rs", i), i, |b, i| { 18 | b.to_async(FuturesExecutor).iter(|| async { 19 | let futs = utils::futures_vec(*i as usize); 20 | let output = futures::future::join_all(futs).await; 21 | assert_eq!(output.len(), *i as usize); 22 | }) 23 | }); 24 | } 25 | group.finish(); 26 | } 27 | 28 | fn select_vs_merge(c: &mut Criterion) { 29 | let mut group = c.benchmark_group("compare array::merge vs select!"); 30 | const I: u64 = 10; 31 | const N: usize = I as usize; 32 | group.bench_with_input(BenchmarkId::new("futures-concurrency", I), &I, |b, _| { 33 | b.to_async(FuturesExecutor).iter(|| async { 34 | use futures_lite::prelude::*; 35 | let mut counter = 0; 36 | let streams = utils::streams_array::(); 37 | let mut s = streams.merge(); 38 | while s.next().await.is_some() { 39 | counter += 1; 40 | } 41 | assert_eq!(counter, N); 42 | }) 43 | }); 44 | group.bench_with_input(BenchmarkId::new("futures-rs", I), &I, |b, _| { 45 | b.to_async(FuturesExecutor).iter(|| async { 46 | use futures::select; 47 | use futures::stream::StreamExt; 48 | 49 | // Create two streams of numbers. Both streams require being `fuse`d. 50 | let [mut a, mut b, mut c, mut d, mut e, mut f, mut g, mut h, mut i, mut j] = 51 | utils::streams_array::().map(|fut| fut.fuse()); 52 | 53 | // Initialize the output counter. 54 | let mut total = 0usize; 55 | 56 | // Process each item in the stream; 57 | // break once there are no more items left to sum. 58 | loop { 59 | let item = select! 
{ 60 | item = a.next() => item, 61 | item = b.next() => item, 62 | item = c.next() => item, 63 | item = d.next() => item, 64 | item = e.next() => item, 65 | item = f.next() => item, 66 | item = g.next() => item, 67 | item = h.next() => item, 68 | item = i.next() => item, 69 | item = j.next() => item, 70 | complete => break, 71 | }; 72 | if item.is_some() { 73 | // Increment the counter 74 | total += 1; 75 | } 76 | } 77 | 78 | assert_eq!(total, N); 79 | }) 80 | }); 81 | group.finish(); 82 | } 83 | 84 | criterion_group!(join_bench, join_vs_join_all); 85 | criterion_group!(merge_bench, select_vs_merge); 86 | criterion_main!(join_bench, merge_bench); 87 | -------------------------------------------------------------------------------- /benches/utils/countdown_futures.rs: -------------------------------------------------------------------------------- 1 | use futures_concurrency::future::FutureGroup; 2 | use futures_core::Future; 3 | 4 | use std::cell::{Cell, RefCell}; 5 | use std::collections::BinaryHeap; 6 | use std::pin::Pin; 7 | use std::rc::Rc; 8 | use std::task::{Context, Poll}; 9 | 10 | use super::{shuffle, PrioritizedWaker, State}; 11 | 12 | pub fn futures_vec(len: usize) -> Vec { 13 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 14 | let completed = Rc::new(Cell::new(0)); 15 | let mut futures: Vec<_> = (0..len) 16 | .map(|n| CountdownFuture::new(n, len, wakers.clone(), completed.clone())) 17 | .collect(); 18 | shuffle(&mut futures); 19 | futures 20 | } 21 | 22 | #[allow(unused)] 23 | pub fn futures_array() -> [CountdownFuture; N] { 24 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 25 | let completed = Rc::new(Cell::new(0)); 26 | let mut futures = 27 | std::array::from_fn(|n| CountdownFuture::new(n, N, wakers.clone(), completed.clone())); 28 | shuffle(&mut futures); 29 | futures 30 | } 31 | 32 | #[allow(unused)] 33 | pub fn make_future_group(len: usize) -> FutureGroup { 34 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 35 | let completed = Rc::new(Cell::new(0)); 36 | (0..len) 37 | .map(|n| CountdownFuture::new(n, len, wakers.clone(), completed.clone())) 38 | .collect() 39 | } 40 | 41 | #[allow(unused)] 42 | pub fn make_futures_unordered(len: usize) -> futures::stream::FuturesUnordered { 43 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 44 | let completed = Rc::new(Cell::new(0)); 45 | (0..len) 46 | .map(|n| CountdownFuture::new(n, len, wakers.clone(), completed.clone())) 47 | .collect() 48 | } 49 | 50 | #[allow(unused)] 51 | pub fn futures_tuple() -> ( 52 | CountdownFuture, 53 | CountdownFuture, 54 | CountdownFuture, 55 | CountdownFuture, 56 | CountdownFuture, 57 | CountdownFuture, 58 | CountdownFuture, 59 | CountdownFuture, 60 | CountdownFuture, 61 | CountdownFuture, 62 | ) { 63 | let [f0, f1, f2, f3, f4, f5, f6, f7, f8, f9] = futures_array::<10>(); 64 | (f0, f1, f2, f3, f4, f5, f6, f7, f8, f9) 65 | } 66 | 67 | /// A future which will _eventually_ be ready, but needs to be polled N times before it is. 
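///
/// How it works: all instances share a `BinaryHeap` of `PrioritizedWaker`s plus a
/// `completed_count` cell. A future normally registers its waker on the first poll and
/// returns `Poll::Pending`; on later polls it pops and wakes the next waker in the heap,
/// and it only resolves once `completed_count` has reached its own index. Futures
/// therefore complete in index order and the whole set keeps getting re-polled, which
/// gives the `join`/`merge` benchmarks futures that genuinely need repeated polling.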
68 | pub struct CountdownFuture { 69 | state: State, 70 | wakers: Rc>>, 71 | index: usize, 72 | max_count: usize, 73 | completed_count: Rc>, 74 | } 75 | 76 | impl CountdownFuture { 77 | pub fn new( 78 | index: usize, 79 | max_count: usize, 80 | wakers: Rc>>, 81 | completed_count: Rc>, 82 | ) -> Self { 83 | Self { 84 | state: State::Init, 85 | wakers, 86 | max_count, 87 | index, 88 | completed_count, 89 | } 90 | } 91 | } 92 | impl Future for CountdownFuture { 93 | type Output = (); 94 | 95 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 96 | // If we are the last stream to be polled, skip strait to the Polled state. 97 | if self.wakers.borrow().len() + 1 == self.max_count { 98 | self.state = State::Polled; 99 | } 100 | 101 | match self.state { 102 | State::Init => { 103 | // Push our waker onto the stack so we get woken again someday. 104 | self.wakers 105 | .borrow_mut() 106 | .push(PrioritizedWaker(self.index, cx.waker().clone())); 107 | self.state = State::Polled; 108 | Poll::Pending 109 | } 110 | State::Polled => { 111 | // Wake up the next one 112 | let _ = self 113 | .wakers 114 | .borrow_mut() 115 | .pop() 116 | .map(|PrioritizedWaker(_, waker)| waker.wake()); 117 | 118 | if self.completed_count.get() == self.index { 119 | self.state = State::Done; 120 | self.completed_count.set(self.completed_count.get() + 1); 121 | Poll::Ready(()) 122 | } else { 123 | // We're not done yet, so schedule another wakeup 124 | self.wakers 125 | .borrow_mut() 126 | .push(PrioritizedWaker(self.index, cx.waker().clone())); 127 | Poll::Pending 128 | } 129 | } 130 | State::Done => Poll::Ready(()), 131 | } 132 | } 133 | } 134 | -------------------------------------------------------------------------------- /benches/utils/countdown_streams.rs: -------------------------------------------------------------------------------- 1 | use futures_concurrency::stream::StreamGroup; 2 | use futures_core::Stream; 3 | 4 | use std::cell::{Cell, RefCell}; 5 | use std::collections::BinaryHeap; 6 | use std::pin::Pin; 7 | use std::rc::Rc; 8 | use std::task::{Context, Poll}; 9 | 10 | use super::{shuffle, PrioritizedWaker, State}; 11 | 12 | #[allow(unused)] 13 | pub fn streams_vec(len: usize) -> Vec { 14 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 15 | let completed = Rc::new(Cell::new(0)); 16 | let mut streams: Vec<_> = (0..len) 17 | .map(|n| CountdownStream::new(n, len, wakers.clone(), completed.clone())) 18 | .collect(); 19 | shuffle(&mut streams); 20 | streams 21 | } 22 | 23 | #[allow(unused)] 24 | pub fn make_stream_group(len: usize) -> StreamGroup { 25 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 26 | let completed = Rc::new(Cell::new(0)); 27 | (0..len) 28 | .map(|n| CountdownStream::new(n, len, wakers.clone(), completed.clone())) 29 | .collect() 30 | } 31 | 32 | #[allow(unused)] 33 | pub fn make_select_all(len: usize) -> futures::stream::SelectAll { 34 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 35 | let completed = Rc::new(Cell::new(0)); 36 | (0..len) 37 | .map(|n| CountdownStream::new(n, len, wakers.clone(), completed.clone())) 38 | .collect() 39 | } 40 | 41 | pub fn streams_array() -> [CountdownStream; N] { 42 | let wakers = Rc::new(RefCell::new(BinaryHeap::new())); 43 | let completed = Rc::new(Cell::new(0)); 44 | let mut streams = 45 | core::array::from_fn(|n| CountdownStream::new(n, N, wakers.clone(), completed.clone())); 46 | shuffle(&mut streams); 47 | streams 48 | } 49 | 50 | #[allow(unused)] 51 | pub fn streams_tuple() -> ( 52 | CountdownStream, 53 | 
CountdownStream, 54 | CountdownStream, 55 | CountdownStream, 56 | CountdownStream, 57 | CountdownStream, 58 | CountdownStream, 59 | CountdownStream, 60 | CountdownStream, 61 | CountdownStream, 62 | ) { 63 | let [f0, f1, f2, f3, f4, f5, f6, f7, f8, f9] = streams_array::<10>(); 64 | (f0, f1, f2, f3, f4, f5, f6, f7, f8, f9) 65 | } 66 | 67 | /// A stream which will _eventually_ be ready, but needs to be polled N times before it is. 68 | pub struct CountdownStream { 69 | state: State, 70 | wakers: Rc>>, 71 | index: usize, 72 | max_count: usize, 73 | completed_count: Rc>, 74 | } 75 | 76 | impl CountdownStream { 77 | pub fn new( 78 | index: usize, 79 | max_count: usize, 80 | wakers: Rc>>, 81 | completed_count: Rc>, 82 | ) -> Self { 83 | Self { 84 | state: State::Init, 85 | wakers, 86 | max_count, 87 | index, 88 | completed_count, 89 | } 90 | } 91 | } 92 | impl Stream for CountdownStream { 93 | type Item = (); 94 | 95 | fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 96 | // If we are the last stream to be polled, skip strait to the Polled state. 97 | if self.wakers.borrow().len() + 1 == self.max_count { 98 | self.state = State::Polled; 99 | } 100 | 101 | match &mut self.state { 102 | State::Init => { 103 | // Push our waker onto the stack so we get woken again someday. 104 | self.wakers 105 | .borrow_mut() 106 | .push(PrioritizedWaker(self.index, cx.waker().clone())); 107 | self.state = State::Polled; 108 | Poll::Pending 109 | } 110 | State::Polled => { 111 | // Wake up the next one 112 | let _ = self 113 | .wakers 114 | .borrow_mut() 115 | .pop() 116 | .map(|PrioritizedWaker(_, waker)| waker.wake()); 117 | 118 | if self.completed_count.get() == self.index { 119 | self.state = State::Done; 120 | self.completed_count.set(self.completed_count.get() + 1); 121 | Poll::Ready(Some(())) 122 | } else { 123 | // We're not done yet, so schedule another wakeup 124 | self.wakers 125 | .borrow_mut() 126 | .push(PrioritizedWaker(self.index, cx.waker().clone())); 127 | Poll::Pending 128 | } 129 | } 130 | State::Done => Poll::Ready(None), 131 | } 132 | } 133 | } 134 | -------------------------------------------------------------------------------- /benches/utils/mod.rs: -------------------------------------------------------------------------------- 1 | mod countdown_futures; 2 | mod countdown_streams; 3 | 4 | mod prioritized_waker { 5 | use std::{cmp::Ordering, task::Waker}; 6 | 7 | // PrioritizedWaker(index, waker). 8 | // Lowest index gets popped off the BinaryHeap first. 
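    // `BinaryHeap` is a max-heap, so the `Ord` impl below compares the wrapped indices
    // and then calls `.reverse()`, making `pop()` return the waker with the smallest
    // index (the same effect as wrapping the index in `core::cmp::Reverse`).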
9 | pub struct PrioritizedWaker(pub usize, pub Waker); 10 | impl PartialEq for PrioritizedWaker { 11 | fn eq(&self, other: &Self) -> bool { 12 | self.0 == other.0 13 | } 14 | } 15 | impl Eq for PrioritizedWaker { 16 | fn assert_receiver_is_total_eq(&self) {} 17 | } 18 | impl PartialOrd for PrioritizedWaker { 19 | fn partial_cmp(&self, other: &Self) -> Option { 20 | Some(self.cmp(other)) 21 | } 22 | } 23 | impl Ord for PrioritizedWaker { 24 | fn cmp(&self, other: &Self) -> Ordering { 25 | self.0.cmp(&other.0).reverse() 26 | } 27 | } 28 | } 29 | use prioritized_waker::PrioritizedWaker; 30 | 31 | #[derive(Clone, Copy)] 32 | enum State { 33 | Init, 34 | Polled, 35 | Done, 36 | } 37 | 38 | fn shuffle(slice: &mut [T]) { 39 | use rand::seq::SliceRandom; 40 | use rand::SeedableRng; 41 | let mut rng = rand::rngs::StdRng::seed_from_u64(42); 42 | slice.shuffle(&mut rng); 43 | } 44 | 45 | pub use countdown_futures::*; 46 | pub use countdown_streams::*; 47 | -------------------------------------------------------------------------------- /examples/happy_eyeballs.rs: -------------------------------------------------------------------------------- 1 | use async_std::io::prelude::*; 2 | use futures::future::TryFutureExt; 3 | use futures_concurrency::prelude::*; 4 | use futures_time::prelude::*; 5 | 6 | use async_std::io; 7 | use async_std::net::TcpStream; 8 | use futures::channel::oneshot; 9 | use futures_concurrency::vec::AggregateError; 10 | use futures_time::time::Duration; 11 | use std::error; 12 | 13 | #[async_std::main] 14 | async fn main() -> Result<(), Box> { 15 | // Connect to a socket 16 | let mut socket = open_tcp_socket("rust-lang.org", 80, 3).await?; 17 | 18 | // Make an HTTP GET request. 19 | socket.write_all(b"GET / \r\n").await?; 20 | io::copy(&mut socket, &mut io::stdout()).await?; 21 | 22 | Ok(()) 23 | } 24 | 25 | /// Happy eyeballs algorithm! 26 | async fn open_tcp_socket( 27 | addr: &str, 28 | port: u16, 29 | attempts: u64, 30 | ) -> Result> { 31 | let (mut sender, mut receiver) = oneshot::channel(); 32 | let mut futures = Vec::with_capacity(attempts as usize); 33 | 34 | for attempt in 0..attempts { 35 | // Start a next attempt if the previous one finishes, or timeout expires. 36 | let tcp = TcpStream::connect((addr, port)); 37 | let start_event = receiver.timeout(Duration::from_secs(attempt)); 38 | futures.push(tcp.delay(start_event).map_err(|err| { 39 | // If the socket fails, start the next attempt 40 | let _ = sender.send(()); 41 | err 42 | })); 43 | (sender, receiver) = oneshot::channel(); 44 | } 45 | 46 | // Start connecting. If an attempt succeeds, cancel all others attempts. 47 | futures.race_ok().await 48 | } 49 | -------------------------------------------------------------------------------- /src/collections/mod.rs: -------------------------------------------------------------------------------- 1 | #[cfg(feature = "alloc")] 2 | pub mod vec; 3 | -------------------------------------------------------------------------------- /src/collections/vec.rs: -------------------------------------------------------------------------------- 1 | //! Parallel iterator types for [vectors][std::vec] (`Vec`) 2 | //! 3 | //! You will rarely need to interact with this module directly unless you need 4 | //! to name one of the iterator types. 5 | //! 6 | //! 
[std::vec]: https://doc.rust-lang.org/std/vec/index.html 7 | 8 | use crate::concurrent_stream::{self, FromStream}; 9 | use crate::prelude::*; 10 | use crate::utils::{from_iter, FromIter}; 11 | #[cfg(all(feature = "alloc", not(feature = "std")))] 12 | use alloc::vec::Vec; 13 | use core::future::Ready; 14 | 15 | pub use crate::future::join::vec::Join; 16 | pub use crate::future::race::vec::Race; 17 | pub use crate::future::race_ok::vec::{AggregateError, RaceOk}; 18 | pub use crate::future::try_join::vec::TryJoin; 19 | pub use crate::stream::chain::vec::Chain; 20 | pub use crate::stream::merge::vec::Merge; 21 | pub use crate::stream::zip::vec::Zip; 22 | 23 | /// Concurrent async iterator that moves out of a vector. 24 | #[derive(Debug)] 25 | pub struct IntoConcurrentStream(FromStream>>); 26 | 27 | impl ConcurrentStream for IntoConcurrentStream { 28 | type Item = T; 29 | 30 | type Future = Ready; 31 | 32 | async fn drive(self, consumer: C) -> C::Output 33 | where 34 | C: concurrent_stream::Consumer, 35 | { 36 | self.0.drive(consumer).await 37 | } 38 | 39 | fn concurrency_limit(&self) -> Option { 40 | self.0.concurrency_limit() 41 | } 42 | } 43 | 44 | impl concurrent_stream::IntoConcurrentStream for Vec { 45 | type Item = T; 46 | 47 | type IntoConcurrentStream = IntoConcurrentStream; 48 | 49 | fn into_co_stream(self) -> Self::IntoConcurrentStream { 50 | let stream = from_iter(self); 51 | let co_stream = stream.co(); 52 | IntoConcurrentStream(co_stream) 53 | } 54 | } 55 | 56 | #[cfg(test)] 57 | mod test { 58 | use crate::prelude::*; 59 | 60 | #[test] 61 | fn collect() { 62 | futures_lite::future::block_on(async { 63 | let v: Vec<_> = vec![1, 2, 3, 4, 5].into_co_stream().collect().await; 64 | assert_eq!(v, &[1, 2, 3, 4, 5]); 65 | }); 66 | } 67 | } 68 | -------------------------------------------------------------------------------- /src/concurrent_stream/enumerate.rs: -------------------------------------------------------------------------------- 1 | use pin_project::pin_project; 2 | 3 | use super::{ConcurrentStream, Consumer}; 4 | use core::future::Future; 5 | use core::num::NonZeroUsize; 6 | use core::pin::Pin; 7 | use core::task::{ready, Context, Poll}; 8 | 9 | /// A concurrent iterator that yields the current count and the element during iteration. 10 | /// 11 | /// This `struct` is created by the [`enumerate`] method on [`ConcurrentStream`]. See its 12 | /// documentation for more. 
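///
/// A minimal usage sketch (mirroring the test at the bottom of this module, and assuming
/// the crate `prelude` plus the `futures-lite` dev-dependency are in scope); indices are
/// handed out in the order items arrive from the underlying stream, starting at zero:
///
/// ```ignore
/// use futures_concurrency::prelude::*;
/// use futures_lite::stream;
///
/// futures_lite::future::block_on(async {
///     stream::iter(0..5)
///         .co()
///         .enumerate()
///         .for_each(|(index, n)| async move { assert_eq!(index, n) })
///         .await;
/// });
/// ```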
13 | /// 14 | /// [`enumerate`]: ConcurrentStream::enumerate 15 | /// [`ConcurrentStream`]: trait.ConcurrentStream.html 16 | #[derive(Debug)] 17 | pub struct Enumerate { 18 | inner: CS, 19 | } 20 | 21 | impl Enumerate { 22 | pub(crate) fn new(inner: CS) -> Self { 23 | Self { inner } 24 | } 25 | } 26 | 27 | impl ConcurrentStream for Enumerate { 28 | type Item = (usize, CS::Item); 29 | type Future = EnumerateFuture; 30 | 31 | async fn drive(self, consumer: C) -> C::Output 32 | where 33 | C: Consumer, 34 | { 35 | self.inner 36 | .drive(EnumerateConsumer { 37 | inner: consumer, 38 | count: 0, 39 | }) 40 | .await 41 | } 42 | 43 | fn concurrency_limit(&self) -> Option { 44 | self.inner.concurrency_limit() 45 | } 46 | 47 | fn size_hint(&self) -> (usize, Option) { 48 | self.inner.size_hint() 49 | } 50 | } 51 | 52 | #[pin_project] 53 | struct EnumerateConsumer { 54 | #[pin] 55 | inner: C, 56 | count: usize, 57 | } 58 | impl Consumer for EnumerateConsumer 59 | where 60 | Fut: Future, 61 | C: Consumer<(usize, Item), EnumerateFuture>, 62 | { 63 | type Output = C::Output; 64 | 65 | async fn send(self: Pin<&mut Self>, future: Fut) -> super::ConsumerState { 66 | let this = self.project(); 67 | let count = *this.count; 68 | *this.count += 1; 69 | this.inner.send(EnumerateFuture::new(future, count)).await 70 | } 71 | 72 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 73 | let this = self.project(); 74 | this.inner.progress().await 75 | } 76 | 77 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 78 | let this = self.project(); 79 | this.inner.flush().await 80 | } 81 | } 82 | 83 | /// Takes a future and maps it to another future via a closure 84 | #[derive(Debug)] 85 | #[pin_project::pin_project] 86 | pub struct EnumerateFuture 87 | where 88 | FutT: Future, 89 | { 90 | done: bool, 91 | #[pin] 92 | fut_t: FutT, 93 | count: usize, 94 | } 95 | 96 | impl EnumerateFuture 97 | where 98 | FutT: Future, 99 | { 100 | fn new(fut_t: FutT, count: usize) -> Self { 101 | Self { 102 | done: false, 103 | fut_t, 104 | count, 105 | } 106 | } 107 | } 108 | 109 | impl Future for EnumerateFuture 110 | where 111 | FutT: Future, 112 | { 113 | type Output = (usize, T); 114 | 115 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 116 | let this = self.project(); 117 | if *this.done { 118 | panic!("future has already been polled to completion once"); 119 | } 120 | 121 | let item = ready!(this.fut_t.poll(cx)); 122 | *this.done = true; 123 | Poll::Ready((*this.count, item)) 124 | } 125 | } 126 | 127 | #[cfg(test)] 128 | mod test { 129 | // use crate::concurrent_stream::{ConcurrentStream, IntoConcurrentStream}; 130 | use crate::prelude::*; 131 | use futures_lite::stream; 132 | use futures_lite::StreamExt; 133 | use std::num::NonZeroUsize; 134 | 135 | #[test] 136 | fn enumerate() { 137 | futures_lite::future::block_on(async { 138 | let mut n = 0; 139 | stream::iter(std::iter::from_fn(|| { 140 | let v = n; 141 | n += 1; 142 | Some(v) 143 | })) 144 | .take(5) 145 | .co() 146 | .limit(NonZeroUsize::new(1)) 147 | .enumerate() 148 | .for_each(|(index, n)| async move { 149 | assert_eq!(index, n); 150 | }) 151 | .await; 152 | }); 153 | } 154 | } 155 | -------------------------------------------------------------------------------- /src/concurrent_stream/for_each.rs: -------------------------------------------------------------------------------- 1 | use super::{Consumer, ConsumerState}; 2 | use futures_buffered::FuturesUnordered; 3 | use futures_lite::StreamExt; 4 | use pin_project::pin_project; 5 | 6 | 
use alloc::sync::Arc; 7 | use core::future::Future; 8 | use core::marker::PhantomData; 9 | use core::num::NonZeroUsize; 10 | use core::pin::Pin; 11 | use core::sync::atomic::{AtomicUsize, Ordering}; 12 | use core::task::{ready, Context, Poll}; 13 | 14 | // OK: validated! - all bounds should check out 15 | #[pin_project] 16 | pub(crate) struct ForEachConsumer 17 | where 18 | FutT: Future, 19 | F: Fn(T) -> FutB, 20 | FutB: Future, 21 | { 22 | // NOTE: we can remove the `Arc` here if we're willing to make this struct self-referential 23 | count: Arc, 24 | #[pin] 25 | group: FuturesUnordered>, 26 | limit: usize, 27 | f: F, 28 | _phantom: PhantomData<(T, FutB)>, 29 | } 30 | 31 | impl ForEachConsumer 32 | where 33 | A: Future, 34 | F: Fn(T) -> B, 35 | B: Future, 36 | { 37 | pub(crate) fn new(limit: Option, f: F) -> Self { 38 | let limit = match limit { 39 | Some(n) => n.get(), 40 | None => usize::MAX, 41 | }; 42 | Self { 43 | limit, 44 | f, 45 | _phantom: PhantomData, 46 | count: Arc::new(AtomicUsize::new(0)), 47 | group: FuturesUnordered::new(), 48 | } 49 | } 50 | } 51 | 52 | // OK: validated! - we push types `B` into the next consumer 53 | impl Consumer for ForEachConsumer 54 | where 55 | FutT: Future, 56 | F: Fn(T) -> B, 57 | F: Clone, 58 | B: Future, 59 | { 60 | type Output = (); 61 | 62 | async fn send(self: Pin<&mut Self>, future: FutT) -> super::ConsumerState { 63 | let mut this = self.project(); 64 | // If we have no space, we're going to provide backpressure until we have space 65 | while this.count.load(Ordering::Relaxed) >= *this.limit { 66 | this.group.next().await; 67 | } 68 | 69 | // Space was available! - insert the item for posterity 70 | this.count.fetch_add(1, Ordering::Relaxed); 71 | let fut = ForEachFut::new(this.f.clone(), future, this.count.clone()); 72 | this.group.as_mut().push(fut); 73 | 74 | ConsumerState::Continue 75 | } 76 | 77 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 78 | let mut this = self.project(); 79 | while (this.group.next().await).is_some() {} 80 | ConsumerState::Empty 81 | } 82 | 83 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 84 | let mut this = self.project(); 85 | // 4. We will no longer receive any additional futures from the 86 | // underlying stream; wait until all the futures in the group have 87 | // resolved. 
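        // Draining with `next()` until it yields `None` awaits every in-flight
        // `ForEachFut`, mirroring the loop in `progress` above.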
88 | while (this.group.next().await).is_some() {} 89 | } 90 | } 91 | 92 | /// Takes a future and maps it to another future via a closure 93 | #[derive(Debug)] 94 | pub struct ForEachFut 95 | where 96 | FutT: Future, 97 | F: Fn(T) -> FutB, 98 | FutB: Future, 99 | { 100 | done: bool, 101 | count: Arc, 102 | f: F, 103 | fut_t: Option, 104 | fut_b: Option, 105 | } 106 | 107 | impl ForEachFut 108 | where 109 | FutT: Future, 110 | F: Fn(T) -> FutB, 111 | FutB: Future, 112 | { 113 | fn new(f: F, fut_t: FutT, count: Arc) -> Self { 114 | Self { 115 | done: false, 116 | count, 117 | f, 118 | fut_t: Some(fut_t), 119 | fut_b: None, 120 | } 121 | } 122 | } 123 | 124 | impl Future for ForEachFut 125 | where 126 | FutT: Future, 127 | F: Fn(T) -> FutB, 128 | FutB: Future, 129 | { 130 | type Output = (); 131 | 132 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 133 | // SAFETY: we need to access the inner future's fields to project them 134 | let this = unsafe { self.get_unchecked_mut() }; 135 | if this.done { 136 | panic!("future has already been polled to completion once"); 137 | } 138 | 139 | // Poll forward the future containing the value of `T` 140 | if let Some(fut) = this.fut_t.as_mut() { 141 | // SAFETY: we're pin projecting here 142 | let t = ready!(unsafe { Pin::new_unchecked(fut) }.poll(cx)); 143 | let fut_b = (this.f)(t); 144 | this.fut_t = None; 145 | this.fut_b = Some(fut_b); 146 | } 147 | 148 | // Poll forward the future returned by the closure 149 | if let Some(fut) = this.fut_b.as_mut() { 150 | // SAFETY: we're pin projecting here 151 | ready!(unsafe { Pin::new_unchecked(fut) }.poll(cx)); 152 | this.count.fetch_sub(1, Ordering::Relaxed); 153 | this.done = true; 154 | return Poll::Ready(()); 155 | } 156 | 157 | unreachable!("neither future `a` nor future `b` were ready"); 158 | } 159 | } 160 | 161 | #[cfg(test)] 162 | mod test { 163 | use super::*; 164 | use crate::prelude::*; 165 | use futures_lite::stream; 166 | 167 | #[test] 168 | fn concurrency_one() { 169 | futures_lite::future::block_on(async { 170 | let count = Arc::new(AtomicUsize::new(0)); 171 | stream::repeat(1) 172 | .take(2) 173 | .co() 174 | .limit(NonZeroUsize::new(1)) 175 | .for_each(|n| { 176 | let count = count.clone(); 177 | async move { 178 | count.fetch_add(n, Ordering::Relaxed); 179 | } 180 | }) 181 | .await; 182 | 183 | assert_eq!(count.load(Ordering::Relaxed), 2); 184 | }); 185 | } 186 | 187 | #[test] 188 | fn concurrency_three() { 189 | futures_lite::future::block_on(async { 190 | let count = Arc::new(AtomicUsize::new(0)); 191 | stream::repeat(1) 192 | .take(10) 193 | .co() 194 | .limit(NonZeroUsize::new(3)) 195 | .for_each(|n| { 196 | let count = count.clone(); 197 | async move { 198 | count.fetch_add(n, Ordering::Relaxed); 199 | } 200 | }) 201 | .await; 202 | 203 | assert_eq!(count.load(Ordering::Relaxed), 10); 204 | }); 205 | } 206 | } 207 | -------------------------------------------------------------------------------- /src/concurrent_stream/from_concurrent_stream.rs: -------------------------------------------------------------------------------- 1 | use super::{ConcurrentStream, Consumer, ConsumerState, IntoConcurrentStream}; 2 | #[cfg(all(feature = "alloc", not(feature = "std")))] 3 | use alloc::vec::Vec; 4 | use core::future::Future; 5 | use core::pin::Pin; 6 | use futures_buffered::FuturesUnordered; 7 | use futures_lite::StreamExt; 8 | use pin_project::pin_project; 9 | 10 | /// Conversion from a [`ConcurrentStream`] 11 | #[allow(async_fn_in_trait)] 12 | pub trait FromConcurrentStream: Sized 
{ 13 | /// Creates a value from a concurrent iterator. 14 | async fn from_concurrent_stream(iter: T) -> Self 15 | where 16 | T: IntoConcurrentStream; 17 | } 18 | 19 | impl FromConcurrentStream for Vec { 20 | async fn from_concurrent_stream(iter: S) -> Self 21 | where 22 | S: IntoConcurrentStream, 23 | { 24 | let stream = iter.into_co_stream(); 25 | let mut output = Vec::with_capacity(stream.size_hint().1.unwrap_or_default()); 26 | stream.drive(VecConsumer::new(&mut output)).await; 27 | output 28 | } 29 | } 30 | 31 | impl FromConcurrentStream> for Result, E> { 32 | async fn from_concurrent_stream(iter: S) -> Self 33 | where 34 | S: IntoConcurrentStream>, 35 | { 36 | let stream = iter.into_co_stream(); 37 | let mut output = Ok(Vec::with_capacity(stream.size_hint().1.unwrap_or_default())); 38 | stream.drive(ResultVecConsumer::new(&mut output)).await; 39 | output 40 | } 41 | } 42 | 43 | // TODO: replace this with a generalized `fold` operation 44 | #[pin_project] 45 | pub(crate) struct VecConsumer<'a, Fut: Future> { 46 | #[pin] 47 | group: FuturesUnordered, 48 | output: &'a mut Vec, 49 | } 50 | 51 | impl<'a, Fut: Future> VecConsumer<'a, Fut> { 52 | pub(crate) fn new(output: &'a mut Vec) -> Self { 53 | Self { 54 | group: FuturesUnordered::new(), 55 | output, 56 | } 57 | } 58 | } 59 | 60 | impl Consumer for VecConsumer<'_, Fut> 61 | where 62 | Fut: Future, 63 | { 64 | type Output = (); 65 | 66 | async fn send(self: Pin<&mut Self>, future: Fut) -> super::ConsumerState { 67 | let mut this = self.project(); 68 | // unbounded concurrency, so we just goooo 69 | this.group.as_mut().push(future); 70 | ConsumerState::Continue 71 | } 72 | 73 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 74 | let mut this = self.project(); 75 | while let Some(item) = this.group.next().await { 76 | this.output.push(item); 77 | } 78 | ConsumerState::Empty 79 | } 80 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 81 | let mut this = self.project(); 82 | while let Some(item) = this.group.next().await { 83 | this.output.push(item); 84 | } 85 | } 86 | } 87 | 88 | #[pin_project] 89 | pub(crate) struct ResultVecConsumer<'a, Fut: Future, T, E> { 90 | #[pin] 91 | group: FuturesUnordered, 92 | output: &'a mut Result, E>, 93 | } 94 | 95 | impl<'a, Fut: Future, T, E> ResultVecConsumer<'a, Fut, T, E> { 96 | pub(crate) fn new(output: &'a mut Result, E>) -> Self { 97 | Self { 98 | group: FuturesUnordered::new(), 99 | output, 100 | } 101 | } 102 | } 103 | 104 | impl Consumer, Fut> for ResultVecConsumer<'_, Fut, T, E> 105 | where 106 | Fut: Future>, 107 | { 108 | type Output = (); 109 | 110 | async fn send(self: Pin<&mut Self>, future: Fut) -> super::ConsumerState { 111 | let mut this = self.project(); 112 | // unbounded concurrency, so we just goooo 113 | this.group.as_mut().push(future); 114 | ConsumerState::Continue 115 | } 116 | 117 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 118 | let mut this = self.project(); 119 | let Ok(items) = this.output else { 120 | return ConsumerState::Break; 121 | }; 122 | 123 | while let Some(item) = this.group.next().await { 124 | match item { 125 | Ok(item) => { 126 | items.push(item); 127 | } 128 | Err(e) => { 129 | **this.output = Err(e); 130 | return ConsumerState::Break; 131 | } 132 | } 133 | } 134 | ConsumerState::Empty 135 | } 136 | 137 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 138 | self.progress().await; 139 | } 140 | } 141 | 142 | #[cfg(test)] 143 | mod test { 144 | use crate::prelude::*; 145 | use futures_lite::stream; 146 | 
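    // The tests below exercise both `FromConcurrentStream` impls above: collecting
    // into a plain `Vec<T>`, and collecting into `Result<Vec<T>, E>`, which stops
    // consuming as soon as `ResultVecConsumer` sees an `Err` and returns
    // `ConsumerState::Break`.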
147 | #[test] 148 | fn collect() { 149 | futures_lite::future::block_on(async { 150 | let v: Vec<_> = stream::repeat(1).co().take(5).collect().await; 151 | assert_eq!(v, &[1, 1, 1, 1, 1]); 152 | }); 153 | } 154 | 155 | #[test] 156 | fn collect_to_result_ok() { 157 | futures_lite::future::block_on(async { 158 | let v: Result, ()> = stream::repeat(Ok(1)).co().take(5).collect().await; 159 | assert_eq!(v, Ok(vec![1, 1, 1, 1, 1])); 160 | }); 161 | } 162 | 163 | #[test] 164 | fn collect_to_result_err() { 165 | futures_lite::future::block_on(async { 166 | let v: Result, _> = stream::repeat(Err::(())) 167 | .co() 168 | .take(5) 169 | .collect() 170 | .await; 171 | assert_eq!(v, Err(())); 172 | }); 173 | } 174 | } 175 | -------------------------------------------------------------------------------- /src/concurrent_stream/from_stream.rs: -------------------------------------------------------------------------------- 1 | use super::Consumer; 2 | use crate::concurrent_stream::ConsumerState; 3 | use crate::prelude::*; 4 | 5 | use core::future::{ready, Ready}; 6 | use core::num::NonZeroUsize; 7 | use core::pin::pin; 8 | use futures_lite::{Stream, StreamExt}; 9 | 10 | /// A concurrent for each implementation from a `Stream` 11 | #[pin_project::pin_project] 12 | #[derive(Debug)] 13 | pub struct FromStream { 14 | #[pin] 15 | stream: S, 16 | } 17 | 18 | impl FromStream { 19 | pub(crate) fn new(stream: S) -> Self { 20 | Self { stream } 21 | } 22 | } 23 | 24 | impl ConcurrentStream for FromStream 25 | where 26 | S: Stream, 27 | { 28 | type Item = S::Item; 29 | type Future = Ready; 30 | 31 | async fn drive(self, mut consumer: C) -> C::Output 32 | where 33 | C: Consumer, 34 | { 35 | let mut iter = pin!(self.stream); 36 | let mut consumer = pin!(consumer); 37 | 38 | // Concurrently progress the consumer as well as the stream. Whenever 39 | // there is an item from the stream available, we submit it to the 40 | // consumer and we wait. 41 | // 42 | // NOTE(yosh): we're relying on the fact that `Stream::next` can be 43 | // dropped and recreated freely. That's also true for 44 | // `Consumer::progress`; though that is intentional. It should be 45 | // possible to write a combinator which does not drop the `Stream::next` 46 | // future repeatedly. However for now we're happy to rely on this 47 | // property here. 48 | loop { 49 | // Drive the stream forward 50 | let a = async { 51 | let item = iter.next().await; 52 | State::Item(item) 53 | }; 54 | 55 | // Drive the consumer forward 56 | let b = async { 57 | let control_flow = consumer.as_mut().progress().await; 58 | State::Progress(control_flow) 59 | }; 60 | 61 | // If an item is available, submit it to the consumer and wait for 62 | // it to be ready. 63 | match (b, a).race().await { 64 | State::Progress(control_flow) => match control_flow { 65 | ConsumerState::Break => break, 66 | ConsumerState::Continue => continue, 67 | ConsumerState::Empty => match iter.next().await { 68 | Some(item) => match consumer.as_mut().send(ready(item)).await { 69 | ConsumerState::Break => break, 70 | ConsumerState::Empty | ConsumerState::Continue => continue, 71 | }, 72 | None => break, 73 | }, 74 | }, 75 | State::Item(Some(item)) => match consumer.as_mut().send(ready(item)).await { 76 | ConsumerState::Break => break, 77 | ConsumerState::Empty | ConsumerState::Continue => continue, 78 | }, 79 | State::Item(None) => break, 80 | } 81 | } 82 | 83 | // We will no longer receive items from the underlying stream, which 84 | // means we're ready to wait for the consumer to finish up. 
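        // (We only reach this point once the loop above has broken out, i.e. the
        // stream returned `None` or the consumer reported `ConsumerState::Break`.)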
85 | consumer.as_mut().flush().await 86 | } 87 | 88 | fn concurrency_limit(&self) -> Option { 89 | None 90 | } 91 | 92 | fn size_hint(&self) -> (usize, Option) { 93 | self.stream.size_hint() 94 | } 95 | } 96 | 97 | enum State { 98 | Progress(super::ConsumerState), 99 | Item(T), 100 | } 101 | -------------------------------------------------------------------------------- /src/concurrent_stream/into_concurrent_stream.rs: -------------------------------------------------------------------------------- 1 | use super::ConcurrentStream; 2 | 3 | /// Conversion into a [`ConcurrentStream`] 4 | pub trait IntoConcurrentStream { 5 | /// The type of the elements being iterated over. 6 | type Item; 7 | /// Which kind of iterator are we turning this into? 8 | type IntoConcurrentStream: ConcurrentStream; 9 | 10 | /// Convert `self` into a concurrent iterator. 11 | fn into_co_stream(self) -> Self::IntoConcurrentStream; 12 | } 13 | 14 | impl IntoConcurrentStream for S { 15 | type Item = S::Item; 16 | type IntoConcurrentStream = S; 17 | 18 | fn into_co_stream(self) -> Self::IntoConcurrentStream { 19 | self 20 | } 21 | } 22 | -------------------------------------------------------------------------------- /src/concurrent_stream/limit.rs: -------------------------------------------------------------------------------- 1 | use pin_project::pin_project; 2 | 3 | use super::{ConcurrentStream, Consumer}; 4 | use core::future::Future; 5 | use core::num::NonZeroUsize; 6 | use core::pin::Pin; 7 | 8 | /// A concurrent iterator that limits the amount of concurrency applied. 9 | /// 10 | /// This `struct` is created by the [`limit`] method on [`ConcurrentStream`]. See its 11 | /// documentation for more. 12 | /// 13 | /// [`limit`]: ConcurrentStream::limit 14 | /// [`ConcurrentStream`]: trait.ConcurrentStream.html 15 | #[derive(Debug)] 16 | pub struct Limit { 17 | inner: CS, 18 | limit: Option, 19 | } 20 | 21 | impl Limit { 22 | pub(crate) fn new(inner: CS, limit: Option) -> Self { 23 | Self { inner, limit } 24 | } 25 | } 26 | 27 | impl ConcurrentStream for Limit { 28 | type Item = CS::Item; 29 | type Future = CS::Future; 30 | 31 | async fn drive(self, consumer: C) -> C::Output 32 | where 33 | C: Consumer, 34 | { 35 | self.inner.drive(LimitConsumer { inner: consumer }).await 36 | } 37 | 38 | // NOTE: this is the only interesting bit in this module. When a limit is 39 | // set, this now starts using it. 
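    // Downstream consumers are expected to read this value and cap how many futures
    // they keep in flight; for example, `ForEachConsumer` stops accepting new work and
    // awaits completions once its in-flight count reaches the limit.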
40 | fn concurrency_limit(&self) -> Option { 41 | self.limit 42 | } 43 | 44 | fn size_hint(&self) -> (usize, Option) { 45 | self.inner.size_hint() 46 | } 47 | } 48 | 49 | #[pin_project] 50 | struct LimitConsumer { 51 | #[pin] 52 | inner: C, 53 | } 54 | impl Consumer for LimitConsumer 55 | where 56 | Fut: Future, 57 | C: Consumer, 58 | { 59 | type Output = C::Output; 60 | 61 | async fn send(self: Pin<&mut Self>, future: Fut) -> super::ConsumerState { 62 | let this = self.project(); 63 | this.inner.send(future).await 64 | } 65 | 66 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 67 | let this = self.project(); 68 | this.inner.progress().await 69 | } 70 | 71 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 72 | let this = self.project(); 73 | this.inner.flush().await 74 | } 75 | } 76 | -------------------------------------------------------------------------------- /src/concurrent_stream/map.rs: -------------------------------------------------------------------------------- 1 | use pin_project::pin_project; 2 | 3 | use super::{ConcurrentStream, Consumer}; 4 | use core::num::NonZeroUsize; 5 | use core::{ 6 | future::Future, 7 | marker::PhantomData, 8 | pin::Pin, 9 | task::{ready, Context, Poll}, 10 | }; 11 | 12 | /// Convert items from one type into another 13 | #[derive(Debug)] 14 | pub struct Map 15 | where 16 | CS: ConcurrentStream, 17 | F: Fn(T) -> FutB, 18 | F: Clone, 19 | FutT: Future, 20 | FutB: Future, 21 | { 22 | inner: CS, 23 | f: F, 24 | _phantom: PhantomData<(FutT, T, FutB, B)>, 25 | } 26 | 27 | impl Map 28 | where 29 | CS: ConcurrentStream, 30 | F: Fn(T) -> FutB, 31 | F: Clone, 32 | FutT: Future, 33 | FutB: Future, 34 | { 35 | pub(crate) fn new(inner: CS, f: F) -> Self { 36 | Self { 37 | inner, 38 | f, 39 | _phantom: PhantomData, 40 | } 41 | } 42 | } 43 | 44 | impl ConcurrentStream for Map 45 | where 46 | CS: ConcurrentStream, 47 | F: Fn(T) -> FutB, 48 | F: Clone, 49 | FutT: Future, 50 | FutB: Future, 51 | { 52 | type Future = MapFuture; 53 | type Item = B; 54 | 55 | async fn drive(self, consumer: C) -> C::Output 56 | where 57 | C: Consumer, 58 | { 59 | let consumer = MapConsumer { 60 | inner: consumer, 61 | f: self.f, 62 | _phantom: PhantomData, 63 | }; 64 | self.inner.drive(consumer).await 65 | } 66 | 67 | fn concurrency_limit(&self) -> Option { 68 | self.inner.concurrency_limit() 69 | } 70 | 71 | fn size_hint(&self) -> (usize, Option) { 72 | self.inner.size_hint() 73 | } 74 | } 75 | 76 | #[pin_project] 77 | pub struct MapConsumer 78 | where 79 | FutT: Future, 80 | C: Consumer>, 81 | F: Fn(T) -> FutB, 82 | F: Clone, 83 | FutB: Future, 84 | { 85 | #[pin] 86 | inner: C, 87 | f: F, 88 | _phantom: PhantomData<(FutT, T, FutB, B)>, 89 | } 90 | 91 | impl Consumer for MapConsumer 92 | where 93 | FutT: Future, 94 | C: Consumer>, 95 | F: Fn(T) -> FutB, 96 | F: Clone, 97 | FutB: Future, 98 | { 99 | type Output = C::Output; 100 | 101 | async fn progress(self: Pin<&mut Self>) -> super::ConsumerState { 102 | let this = self.project(); 103 | this.inner.progress().await 104 | } 105 | 106 | async fn send(self: Pin<&mut Self>, future: FutT) -> super::ConsumerState { 107 | let this = self.project(); 108 | let fut = MapFuture::new(this.f.clone(), future); 109 | this.inner.send(fut).await 110 | } 111 | 112 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 113 | let this = self.project(); 114 | this.inner.flush().await 115 | } 116 | } 117 | 118 | /// Takes a future and maps it to another future via a closure 119 | #[derive(Debug)] 120 | pub struct MapFuture 121 | 
where 122 | FutT: Future, 123 | F: Fn(T) -> FutB, 124 | FutB: Future, 125 | { 126 | done: bool, 127 | f: F, 128 | fut_t: Option, 129 | fut_b: Option, 130 | } 131 | 132 | impl MapFuture 133 | where 134 | FutT: Future, 135 | F: Fn(T) -> FutB, 136 | FutB: Future, 137 | { 138 | fn new(f: F, fut_t: FutT) -> Self { 139 | Self { 140 | done: false, 141 | f, 142 | fut_t: Some(fut_t), 143 | fut_b: None, 144 | } 145 | } 146 | } 147 | 148 | impl Future for MapFuture 149 | where 150 | FutT: Future, 151 | F: Fn(T) -> FutB, 152 | FutB: Future, 153 | { 154 | type Output = B; 155 | 156 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 157 | // SAFETY: we need to access the inner future's fields to project them 158 | let this = unsafe { self.get_unchecked_mut() }; 159 | if this.done { 160 | panic!("future has already been polled to completion once"); 161 | } 162 | 163 | // Poll forward the future containing the value of `T` 164 | if let Some(fut) = this.fut_t.as_mut() { 165 | // SAFETY: we're pin projecting here 166 | let t = ready!(unsafe { Pin::new_unchecked(fut) }.poll(cx)); 167 | let fut_b = (this.f)(t); 168 | this.fut_t = None; 169 | this.fut_b = Some(fut_b); 170 | } 171 | 172 | // Poll forward the future returned by the closure 173 | if let Some(fut) = this.fut_b.as_mut() { 174 | // SAFETY: we're pin projecting here 175 | let t = ready!(unsafe { Pin::new_unchecked(fut) }.poll(cx)); 176 | this.done = true; 177 | return Poll::Ready(t); 178 | } 179 | 180 | unreachable!("neither future `a` nor future `b` were ready"); 181 | } 182 | } 183 | -------------------------------------------------------------------------------- /src/concurrent_stream/take.rs: -------------------------------------------------------------------------------- 1 | use pin_project::pin_project; 2 | 3 | use super::{ConcurrentStream, Consumer, ConsumerState}; 4 | use core::future::Future; 5 | use core::num::NonZeroUsize; 6 | use core::pin::Pin; 7 | 8 | /// A concurrent iterator that only iterates over the first `n` iterations of `iter`. 9 | /// 10 | /// This `struct` is created by the [`take`] method on [`ConcurrentStream`]. See its 11 | /// documentation for more. 12 | /// 13 | /// [`take`]: ConcurrentStream::take 14 | /// [`ConcurrentStream`]: trait.ConcurrentStream.html 15 | #[derive(Debug)] 16 | pub struct Take { 17 | inner: CS, 18 | limit: usize, 19 | } 20 | 21 | impl Take { 22 | pub(crate) fn new(inner: CS, limit: usize) -> Self { 23 | Self { inner, limit } 24 | } 25 | } 26 | 27 | impl ConcurrentStream for Take { 28 | type Item = CS::Item; 29 | type Future = CS::Future; 30 | 31 | async fn drive(self, consumer: C) -> C::Output 32 | where 33 | C: Consumer, 34 | { 35 | self.inner 36 | .drive(TakeConsumer { 37 | inner: consumer, 38 | count: 0, 39 | limit: self.limit, 40 | }) 41 | .await 42 | } 43 | 44 | // NOTE: this is the only interesting bit in this module. When a limit is 45 | // set, this now starts using it. 
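// (In `Take`, unlike `Limit`, `concurrency_limit` below simply forwards to the
// inner stream. The actual cut-off happens in `TakeConsumer::send` further
// down: it counts the futures it has forwarded and returns
// `ConsumerState::Break` once `count` reaches `limit`; with a limit of 3, for
// example, the third call to `send` breaks and no further items are pulled
// from the inner stream.)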
46 | fn concurrency_limit(&self) -> Option { 47 | self.inner.concurrency_limit() 48 | } 49 | 50 | fn size_hint(&self) -> (usize, Option) { 51 | self.inner.size_hint() 52 | } 53 | } 54 | 55 | #[pin_project] 56 | struct TakeConsumer { 57 | #[pin] 58 | inner: C, 59 | count: usize, 60 | limit: usize, 61 | } 62 | impl Consumer for TakeConsumer 63 | where 64 | Fut: Future, 65 | C: Consumer, 66 | { 67 | type Output = C::Output; 68 | 69 | async fn send(self: Pin<&mut Self>, future: Fut) -> ConsumerState { 70 | let this = self.project(); 71 | *this.count += 1; 72 | let state = this.inner.send(future).await; 73 | if this.count >= this.limit { 74 | ConsumerState::Break 75 | } else { 76 | state 77 | } 78 | } 79 | 80 | async fn progress(self: Pin<&mut Self>) -> ConsumerState { 81 | let this = self.project(); 82 | this.inner.progress().await 83 | } 84 | 85 | async fn flush(self: Pin<&mut Self>) -> Self::Output { 86 | let this = self.project(); 87 | this.inner.flush().await 88 | } 89 | } 90 | 91 | #[cfg(test)] 92 | mod test { 93 | use crate::prelude::*; 94 | use futures_lite::stream; 95 | 96 | #[test] 97 | fn enumerate() { 98 | futures_lite::future::block_on(async { 99 | let mut n = 0; 100 | stream::iter(std::iter::from_fn(|| { 101 | let v = n; 102 | n += 1; 103 | Some(v) 104 | })) 105 | .co() 106 | .take(5) 107 | .for_each(|n| async move { assert!(n < 5) }) 108 | .await; 109 | }); 110 | } 111 | } 112 | -------------------------------------------------------------------------------- /src/future/futures_ext.rs: -------------------------------------------------------------------------------- 1 | use crate::future::Join; 2 | use crate::future::Race; 3 | use core::future::IntoFuture; 4 | use futures_core::Future; 5 | 6 | use super::join::tuple::Join2; 7 | use super::race::tuple::Race2; 8 | use super::WaitUntil; 9 | 10 | /// An extension trait for the `Future` trait. 11 | pub trait FutureExt: Future { 12 | /// Wait for both futures to complete. 13 | fn join(self, other: S2) -> Join2 14 | where 15 | Self: Future + Sized, 16 | S2: IntoFuture; 17 | 18 | /// Wait for the first future to complete. 19 | fn race(self, other: S2) -> Race2 20 | where 21 | Self: Future + Sized, 22 | S2: IntoFuture; 23 | 24 | /// Delay resolving the future until the given deadline. 25 | /// 26 | /// The underlying future will not be polled until the deadline has expired. In addition 27 | /// to using a time source as a deadline, any future can be used as a 28 | /// deadline too. When used in combination with a multi-consumer channel, 29 | /// this method can be used to synchronize the start of multiple futures and streams. 
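/// The deadline future's own output is discarded; only its completion is
/// observed before the wrapped future is polled.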
30 | /// 31 | /// # Example 32 | /// 33 | /// ``` 34 | /// # #[cfg(miri)]fn main() {} 35 | /// # #[cfg(not(miri))] 36 | /// # fn main() { 37 | /// use async_io::Timer; 38 | /// use futures_concurrency::prelude::*; 39 | /// use futures_lite::future::block_on; 40 | /// use std::time::{Duration, Instant}; 41 | /// 42 | /// block_on(async { 43 | /// let now = Instant::now(); 44 | /// let duration = Duration::from_millis(100); 45 | /// 46 | /// async { "meow" } 47 | /// .wait_until(Timer::after(duration)) 48 | /// .await; 49 | /// 50 | /// assert!(now.elapsed() >= duration); 51 | /// }); 52 | /// # } 53 | /// ``` 54 | fn wait_until(self, deadline: D) -> WaitUntil 55 | where 56 | Self: Sized, 57 | D: IntoFuture, 58 | { 59 | WaitUntil::new(self, deadline.into_future()) 60 | } 61 | } 62 | 63 | impl FutureExt for F1 64 | where 65 | F1: Future, 66 | { 67 | fn join(self, other: F2) -> Join2 68 | where 69 | Self: Future + Sized, 70 | F2: IntoFuture, 71 | { 72 | Join::join((self, other)) 73 | } 74 | 75 | fn race(self, other: S2) -> Race2 76 | where 77 | Self: Future + Sized, 78 | S2: IntoFuture, 79 | { 80 | Race::race((self, other)) 81 | } 82 | } 83 | -------------------------------------------------------------------------------- /src/future/join/mod.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Wait for all futures to complete. 9 | /// 10 | /// Awaits multiple futures simultaneously, returning the output of the futures 11 | /// in the same container type they were created once all complete. 12 | pub trait Join { 13 | /// The resulting output type. 14 | type Output; 15 | 16 | /// The [`Future`] implementation returned by this method. 17 | type Future: Future; 18 | 19 | /// Waits for multiple futures to complete. 20 | /// 21 | /// Awaits multiple futures simultaneously, returning the output of the futures 22 | /// in the same container type they we're created once all complete. 23 | /// 24 | /// # Examples 25 | /// 26 | /// Awaiting multiple futures of the same type can be done using either a vector 27 | /// or an array. 28 | /// ```rust 29 | /// # futures::executor::block_on(async { 30 | /// use futures_concurrency::prelude::*; 31 | /// 32 | /// // all futures passed here are of the same type 33 | /// let fut1 = core::future::ready(1); 34 | /// let fut2 = core::future::ready(2); 35 | /// let fut3 = core::future::ready(3); 36 | /// 37 | /// let outputs = [fut1, fut2, fut3].join().await; 38 | /// assert_eq!(outputs, [1, 2, 3]); 39 | /// # }) 40 | /// ``` 41 | /// 42 | /// In practice however, it's common to want to await multiple futures of 43 | /// different types. For example if you have two different `async {}` blocks, 44 | /// you want to `.await`. To do that, you can call `.join` on tuples of futures. 45 | /// ```rust 46 | /// # futures::executor::block_on(async { 47 | /// use futures_concurrency::prelude::*; 48 | /// 49 | /// async fn some_async_fn() -> usize { 3 } 50 | /// 51 | /// // the futures passed here are of different types 52 | /// let fut1 = core::future::ready(1); 53 | /// let fut2 = async { 2 }; 54 | /// let fut3 = some_async_fn(); 55 | /// // ^ NOTE: no `.await` here! 56 | /// 57 | /// let outputs = (fut1, fut2, fut3).join().await; 58 | /// assert_eq!(outputs, (1, 2, 3)); 59 | /// # }) 60 | /// ``` 61 | /// 62 | ///
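/// If the futures resolve to `Result`s and the call should short-circuit on
/// the first error instead, see [`TryJoin`][crate::future::TryJoin].
///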

63 | /// This function returns a new future which polls all futures concurrently. 64 | fn join(self) -> Self::Future; 65 | } 66 | -------------------------------------------------------------------------------- /src/future/join/vec.rs: -------------------------------------------------------------------------------- 1 | use super::Join as JoinTrait; 2 | use crate::utils::{FutureVec, OutputVec, PollVec, WakerVec}; 3 | 4 | #[cfg(all(feature = "alloc", not(feature = "std")))] 5 | use alloc::vec::Vec; 6 | 7 | use core::fmt; 8 | use core::future::{Future, IntoFuture}; 9 | use core::mem::ManuallyDrop; 10 | use core::ops::DerefMut; 11 | use core::pin::Pin; 12 | use core::task::{Context, Poll}; 13 | 14 | use pin_project::{pin_project, pinned_drop}; 15 | 16 | /// A future which waits for multiple futures to complete. 17 | /// 18 | /// This `struct` is created by the [`join`] method on the [`Join`] trait. See 19 | /// its documentation for more. 20 | /// 21 | /// [`join`]: crate::future::Join::join 22 | /// [`Join`]: crate::future::Join 23 | #[must_use = "futures do nothing unless you `.await` or poll them"] 24 | #[pin_project(PinnedDrop)] 25 | pub struct Join 26 | where 27 | Fut: Future, 28 | { 29 | consumed: bool, 30 | pending: usize, 31 | items: OutputVec<::Output>, 32 | wakers: WakerVec, 33 | state: PollVec, 34 | #[pin] 35 | futures: FutureVec, 36 | } 37 | 38 | impl Join 39 | where 40 | Fut: Future, 41 | { 42 | pub(crate) fn new(futures: Vec) -> Self { 43 | let len = futures.len(); 44 | Join { 45 | consumed: false, 46 | pending: len, 47 | items: OutputVec::uninit(len), 48 | wakers: WakerVec::new(len), 49 | state: PollVec::new_pending(len), 50 | futures: FutureVec::new(futures), 51 | } 52 | } 53 | } 54 | 55 | impl JoinTrait for Vec 56 | where 57 | Fut: IntoFuture, 58 | { 59 | type Output = Vec; 60 | type Future = Join; 61 | 62 | fn join(self) -> Self::Future { 63 | Join::new(self.into_iter().map(IntoFuture::into_future).collect()) 64 | } 65 | } 66 | 67 | impl fmt::Debug for Join 68 | where 69 | Fut: Future + fmt::Debug, 70 | { 71 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 72 | f.debug_list().entries(self.state.iter()).finish() 73 | } 74 | } 75 | 76 | impl Future for Join 77 | where 78 | Fut: Future, 79 | { 80 | type Output = Vec; 81 | 82 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 83 | let mut this = self.project(); 84 | 85 | assert!( 86 | !*this.consumed, 87 | "Futures must not be polled after completing" 88 | ); 89 | 90 | let mut readiness = this.wakers.readiness(); 91 | readiness.set_waker(cx.waker()); 92 | if *this.pending != 0 && !readiness.any_ready() { 93 | // Nothing is ready yet 94 | return Poll::Pending; 95 | } 96 | 97 | // Poll all ready futures 98 | let futures = this.futures.as_mut(); 99 | let states = &mut this.state[..]; 100 | for (i, mut fut) in futures.iter().enumerate() { 101 | if states[i].is_pending() && readiness.clear_ready(i) { 102 | // unlock readiness so we don't deadlock when polling 103 | #[allow(clippy::drop_non_drop)] 104 | drop(readiness); 105 | 106 | // Obtain the intermediate waker. 
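// Each slot in the `WakerVec` holds a waker that records its own index in the
// shared readiness set before waking the task, so the loop above only
// revisits futures whose wakers actually fired.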
107 | let mut cx = Context::from_waker(this.wakers.get(i).unwrap()); 108 | 109 | // Poll the future 110 | // SAFETY: the future's state was "pending", so it's safe to poll 111 | if let Poll::Ready(value) = unsafe { 112 | fut.as_mut() 113 | .map_unchecked_mut(|t| t.deref_mut()) 114 | .poll(&mut cx) 115 | } { 116 | this.items.write(i, value); 117 | states[i].set_ready(); 118 | *this.pending -= 1; 119 | // SAFETY: the future state has been changed to "ready" which 120 | // means we'll no longer poll the future, so it's safe to drop 121 | unsafe { ManuallyDrop::drop(fut.get_unchecked_mut()) }; 122 | } 123 | 124 | // Lock readiness so we can use it again 125 | readiness = this.wakers.readiness(); 126 | } 127 | } 128 | 129 | // Check whether we're all done now or need to keep going. 130 | if *this.pending == 0 { 131 | // Mark all data as "consumed" before we take it 132 | *this.consumed = true; 133 | this.state.iter_mut().for_each(|state| { 134 | debug_assert!( 135 | state.is_ready(), 136 | "Future should have reached a `Ready` state" 137 | ); 138 | state.set_none(); 139 | }); 140 | 141 | // SAFETY: we've checked with the state that all of our outputs have been 142 | // filled, which means we're ready to take the data and assume it's initialized. 143 | Poll::Ready(unsafe { this.items.take() }) 144 | } else { 145 | Poll::Pending 146 | } 147 | } 148 | } 149 | 150 | /// Drop the already initialized values on cancellation. 151 | #[pinned_drop] 152 | impl PinnedDrop for Join 153 | where 154 | Fut: Future, 155 | { 156 | fn drop(self: Pin<&mut Self>) { 157 | let mut this = self.project(); 158 | 159 | // Drop all initialized values. 160 | for i in this.state.ready_indexes() { 161 | // SAFETY: we've just filtered down to *only* the initialized values. 162 | // We can assume they're initialized, and this is where we drop them. 163 | unsafe { this.items.drop(i) }; 164 | } 165 | 166 | // Drop all pending futures. 167 | for i in this.state.pending_indexes() { 168 | // SAFETY: we've just filtered down to *only* the pending futures, 169 | // which have not yet been dropped. 170 | unsafe { this.futures.as_mut().drop(i) }; 171 | } 172 | } 173 | } 174 | 175 | #[cfg(test)] 176 | mod test { 177 | use super::*; 178 | use crate::utils::DummyWaker; 179 | 180 | use alloc::format; 181 | use alloc::sync::Arc; 182 | use alloc::vec; 183 | use core::future; 184 | 185 | #[test] 186 | fn smoke() { 187 | futures_lite::future::block_on(async { 188 | let fut = vec![future::ready("hello"), future::ready("world")].join(); 189 | assert_eq!(fut.await, vec!["hello", "world"]); 190 | }); 191 | } 192 | 193 | #[test] 194 | fn empty() { 195 | futures_lite::future::block_on(async { 196 | let data: Vec> = vec![]; 197 | let fut = data.join(); 198 | assert_eq!(fut.await, vec![]); 199 | }); 200 | } 201 | 202 | #[test] 203 | fn debug() { 204 | let mut fut = vec![future::ready("hello"), future::ready("world")].join(); 205 | assert_eq!(format!("{:?}", fut), "[Pending, Pending]"); 206 | let mut fut = Pin::new(&mut fut); 207 | 208 | let waker = Arc::new(DummyWaker()).into(); 209 | let mut cx = Context::from_waker(&waker); 210 | let _ = fut.as_mut().poll(&mut cx); 211 | assert_eq!(format!("{:?}", fut), "[None, None]"); 212 | } 213 | } 214 | -------------------------------------------------------------------------------- /src/future/mod.rs: -------------------------------------------------------------------------------- 1 | //! Asynchronous basic functionality. 2 | //! 3 | //! 
Please see the fundamental `async` and `await` keywords and the [async book] 4 | //! for more information on asynchronous programming in Rust. 5 | //! 6 | //! [async book]: https://rust-lang.github.io/async-book/ 7 | //! 8 | //! # Examples 9 | //! 10 | //! ``` 11 | //! use futures_concurrency::prelude::*; 12 | //! use futures_lite::future::block_on; 13 | //! use std::future; 14 | //! 15 | //! block_on(async { 16 | //! // Await multiple similarly-typed futures. 17 | //! let a = future::ready(1); 18 | //! let b = future::ready(2); 19 | //! let c = future::ready(3); 20 | //! assert_eq!([a, b, c].join().await, [1, 2, 3]); 21 | //! 22 | //! // Await multiple differently-typed futures. 23 | //! let a = future::ready(1u8); 24 | //! let b = future::ready("hello"); 25 | //! let c = future::ready(3u16); 26 | //! assert_eq!((a, b, c).join().await, (1, "hello", 3)); 27 | //! 28 | //! // It even works with vectors of futures, providing an alternative 29 | //! // to futures-rs' `join_all`. 30 | //! let a = future::ready(1); 31 | //! let b = future::ready(2); 32 | //! let c = future::ready(3); 33 | //! assert_eq!(vec![a, b, c].join().await, vec![1, 2, 3]); 34 | //! }) 35 | //! ``` 36 | //! 37 | //! # Concurrency 38 | //! 39 | //! It's common for operations to depend on the output of multiple futures. 40 | //! Instead of awaiting each future in sequence it can be more efficient to 41 | //! await them _concurrently_. Rust provides built-in mechanisms in the library 42 | //! to make this easy and convenient to do. 43 | //! 44 | //! ## Infallible Concurrency 45 | //! 46 | //! When working with futures which don't return `Result` types, we 47 | //! provide two built-in concurrency operations: 48 | //! 49 | //! - `future::Merge`: wait for all futures in the set to complete 50 | //! - `future::Race`: wait for the _first_ future in the set to complete 51 | //! 52 | //! Because futures can be considered to be an async sequence of one, see 53 | //! the [async iterator concurrency][crate::stream#concurrency] section for 54 | //! additional async concurrency operations. 55 | //! 56 | //! ## Fallible Concurrency 57 | //! 58 | //! When working with futures which return `Result` types, the meaning of the 59 | //! existing operations changes, and additional `Result`-aware concurrency 60 | //! operations become available: 61 | //! 62 | //! | | __Wait for all outputs__ | __Wait for first output__ | 63 | //! | --- | --- | --- | 64 | //! | __Continue on error__ | `future::Merge` | `future::RaceOk` 65 | //! | __Return early on error__ | `future::TryMerge` | `future::Race` 66 | //! 67 | //! - `future::TryMerge`: wait for all futures in the set to complete _successfully_, or return on the first error. 68 | //! - `future::RaceOk`: wait for the first _successful_ future in the set to 69 | //! complete, or return an `Err` if *no* futures complete successfully. 70 | //! 71 | #[doc(inline)] 72 | #[cfg(feature = "alloc")] 73 | pub use future_group::FutureGroup; 74 | pub use futures_ext::FutureExt; 75 | pub use join::Join; 76 | pub use race::Race; 77 | pub use race_ok::RaceOk; 78 | pub use try_join::TryJoin; 79 | pub use wait_until::WaitUntil; 80 | 81 | /// A growable group of futures which act as a single unit. 
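///
/// A minimal sketch of typical usage (assuming the `new`/`insert` constructors
/// and the `Stream` implementation documented in that module):
///
/// ```
/// # futures_lite::future::block_on(async {
/// use futures_concurrency::future::FutureGroup;
/// use futures_lite::StreamExt;
/// use std::future;
///
/// let mut group = FutureGroup::new();
/// group.insert(future::ready(2));
/// group.insert(future::ready(4));
///
/// let mut total = 0;
/// while let Some(n) = group.next().await {
///     total += n;
/// }
/// assert_eq!(total, 6);
/// # });
/// ```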
82 | #[cfg(feature = "alloc")] 83 | pub mod future_group; 84 | 85 | mod futures_ext; 86 | pub(crate) mod join; 87 | pub(crate) mod race; 88 | pub(crate) mod race_ok; 89 | pub(crate) mod try_join; 90 | pub(crate) mod wait_until; 91 | -------------------------------------------------------------------------------- /src/future/race/array.rs: -------------------------------------------------------------------------------- 1 | use crate::utils::{self, Indexer}; 2 | 3 | use super::Race as RaceTrait; 4 | 5 | use core::fmt; 6 | use core::future::{Future, IntoFuture}; 7 | use core::pin::Pin; 8 | use core::task::{Context, Poll}; 9 | 10 | use pin_project::pin_project; 11 | 12 | /// A future which waits for the first future to complete. 13 | /// 14 | /// This `struct` is created by the [`race`] method on the [`Race`] trait. See 15 | /// its documentation for more. 16 | /// 17 | /// [`race`]: crate::future::Race::race 18 | /// [`Race`]: crate::future::Race 19 | #[must_use = "futures do nothing unless you `.await` or poll them"] 20 | #[pin_project] 21 | pub struct Race 22 | where 23 | Fut: Future, 24 | { 25 | #[pin] 26 | futures: [Fut; N], 27 | indexer: Indexer, 28 | done: bool, 29 | } 30 | 31 | impl fmt::Debug for Race 32 | where 33 | Fut: Future + fmt::Debug, 34 | Fut::Output: fmt::Debug, 35 | { 36 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 37 | f.debug_list().entries(self.futures.iter()).finish() 38 | } 39 | } 40 | 41 | impl Future for Race 42 | where 43 | Fut: Future, 44 | { 45 | type Output = Fut::Output; 46 | 47 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 48 | let mut this = self.project(); 49 | assert!(!*this.done, "Futures must not be polled after completing"); 50 | 51 | for index in this.indexer.iter() { 52 | let fut = utils::get_pin_mut(this.futures.as_mut(), index).unwrap(); 53 | match fut.poll(cx) { 54 | Poll::Ready(item) => { 55 | *this.done = true; 56 | return Poll::Ready(item); 57 | } 58 | Poll::Pending => continue, 59 | } 60 | } 61 | Poll::Pending 62 | } 63 | } 64 | 65 | impl RaceTrait for [Fut; N] 66 | where 67 | Fut: IntoFuture, 68 | { 69 | type Output = Fut::Output; 70 | type Future = Race; 71 | 72 | fn race(self) -> Self::Future { 73 | Race { 74 | futures: self.map(|fut| fut.into_future()), 75 | indexer: Indexer::new(N), 76 | done: false, 77 | } 78 | } 79 | } 80 | 81 | #[cfg(test)] 82 | mod test { 83 | use super::*; 84 | use core::future; 85 | 86 | // NOTE: we should probably poll in random order. 87 | #[test] 88 | fn no_fairness() { 89 | futures_lite::future::block_on(async { 90 | let res = [future::ready("hello"), future::ready("world")] 91 | .race() 92 | .await; 93 | assert!(matches!(res, "hello" | "world")); 94 | }); 95 | } 96 | } 97 | -------------------------------------------------------------------------------- /src/future/race/mod.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Wait for the first future to complete. 9 | /// 10 | /// Awaits multiple future at once, returning as soon as one completes. The 11 | /// other futures are cancelled. 12 | pub trait Race { 13 | /// The resulting output type. 14 | type Output; 15 | 16 | /// Which kind of future are we turning this into? 17 | type Future: Future; 18 | 19 | /// Wait for the first future to complete. 20 | /// 21 | /// Awaits multiple futures at once, returning as soon as one completes. 
The 22 | /// other futures are cancelled. 23 | /// 24 | /// This function returns a new future which polls all futures concurrently. 25 | fn race(self) -> Self::Future; 26 | } 27 | -------------------------------------------------------------------------------- /src/future/race/tuple.rs: -------------------------------------------------------------------------------- 1 | use super::Race as RaceTrait; 2 | use crate::utils; 3 | 4 | use core::fmt::{self, Debug}; 5 | use core::future::{Future, IntoFuture}; 6 | use core::pin::Pin; 7 | use core::task::{Context, Poll}; 8 | 9 | use pin_project::pin_project; 10 | 11 | macro_rules! impl_race_tuple { 12 | ($StructName:ident $($F:ident)+) => { 13 | /// A future which waits for the first future to complete. 14 | /// 15 | /// This `struct` is created by the [`race`] method on the [`Race`] trait. See 16 | /// its documentation for more. 17 | /// 18 | /// [`race`]: crate::future::Race::race 19 | /// [`Race`]: crate::future::Race 20 | #[pin_project] 21 | #[must_use = "futures do nothing unless you `.await` or poll them"] 22 | #[allow(non_snake_case)] 23 | pub struct $StructName 24 | where $( 25 | $F: Future, 26 | )* { 27 | done: bool, 28 | indexer: utils::Indexer, 29 | $(#[pin] $F: $F,)* 30 | } 31 | 32 | impl Debug for $StructName 33 | where $( 34 | $F: Future + Debug, 35 | )* { 36 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 37 | f.debug_tuple("Race") 38 | $(.field(&self.$F))* 39 | .finish() 40 | } 41 | } 42 | 43 | impl RaceTrait for ($($F,)*) 44 | where $( 45 | $F: IntoFuture, 46 | )* { 47 | type Output = T; 48 | type Future = $StructName; 49 | 50 | fn race(self) -> Self::Future { 51 | let ($($F,)*): ($($F,)*) = self; 52 | $StructName { 53 | done: false, 54 | indexer: utils::Indexer::new(utils::tuple_len!($($F,)*)), 55 | $($F: $F.into_future()),* 56 | } 57 | } 58 | } 59 | 60 | impl Future for $StructName 61 | where 62 | $($F: Future),* 63 | { 64 | type Output = T; 65 | 66 | fn poll( 67 | self: Pin<&mut Self>, cx: &mut Context<'_> 68 | ) -> Poll { 69 | let mut this = self.project(); 70 | assert!(!*this.done, "Futures must not be polled after completing"); 71 | 72 | #[repr(usize)] 73 | enum Indexes { 74 | $($F),* 75 | } 76 | 77 | for i in this.indexer.iter() { 78 | utils::gen_conditions!(i, this, cx, poll, $((Indexes::$F as usize; $F, { 79 | Poll::Ready(output) => { 80 | *this.done = true; 81 | return Poll::Ready(output); 82 | }, 83 | _ => continue, 84 | }))*); 85 | } 86 | 87 | Poll::Pending 88 | } 89 | } 90 | }; 91 | } 92 | 93 | impl_race_tuple! { Race1 A } 94 | impl_race_tuple! { Race2 A B } 95 | impl_race_tuple! { Race3 A B C } 96 | impl_race_tuple! { Race4 A B C D } 97 | impl_race_tuple! { Race5 A B C D E } 98 | impl_race_tuple! { Race6 A B C D E F } 99 | impl_race_tuple! { Race7 A B C D E F G } 100 | impl_race_tuple! { Race8 A B C D E F G H } 101 | impl_race_tuple! { Race9 A B C D E F G H I } 102 | impl_race_tuple! { Race10 A B C D E F G H I J } 103 | impl_race_tuple! { Race11 A B C D E F G H I J K } 104 | impl_race_tuple! 
{ Race12 A B C D E F G H I J K L } 105 | 106 | #[cfg(test)] 107 | mod test { 108 | use super::*; 109 | use core::future; 110 | 111 | #[test] 112 | fn race_1() { 113 | futures_lite::future::block_on(async { 114 | let a = future::ready("world"); 115 | assert_eq!((a,).race().await, "world"); 116 | }); 117 | } 118 | 119 | #[test] 120 | fn race_2() { 121 | futures_lite::future::block_on(async { 122 | let a = future::pending(); 123 | let b = future::ready("world"); 124 | assert_eq!((a, b).race().await, "world"); 125 | }); 126 | } 127 | 128 | #[test] 129 | fn race_3() { 130 | futures_lite::future::block_on(async { 131 | let a = future::pending(); 132 | let b = future::ready("hello"); 133 | let c = future::ready("world"); 134 | let result = (a, b, c).race().await; 135 | assert!(matches!(result, "hello" | "world")); 136 | }); 137 | } 138 | } 139 | -------------------------------------------------------------------------------- /src/future/race/vec.rs: -------------------------------------------------------------------------------- 1 | use crate::utils::{self, Indexer}; 2 | 3 | use super::Race as RaceTrait; 4 | 5 | #[cfg(all(feature = "alloc", not(feature = "std")))] 6 | use alloc::vec::Vec; 7 | 8 | use core::fmt; 9 | use core::future::{Future, IntoFuture}; 10 | use core::pin::Pin; 11 | use core::task::{Context, Poll}; 12 | 13 | use pin_project::pin_project; 14 | 15 | /// A future which waits for the first future to complete. 16 | /// 17 | /// This `struct` is created by the [`race`] method on the [`Race`] trait. See 18 | /// its documentation for more. 19 | /// 20 | /// [`race`]: crate::future::Race::race 21 | /// [`Race`]: crate::future::Race 22 | #[must_use = "futures do nothing unless you `.await` or poll them"] 23 | #[pin_project] 24 | pub struct Race 25 | where 26 | Fut: Future, 27 | { 28 | #[pin] 29 | futures: Vec, 30 | indexer: Indexer, 31 | done: bool, 32 | } 33 | 34 | impl fmt::Debug for Race 35 | where 36 | Fut: Future + fmt::Debug, 37 | Fut::Output: fmt::Debug, 38 | { 39 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 40 | f.debug_list().entries(self.futures.iter()).finish() 41 | } 42 | } 43 | 44 | impl Future for Race 45 | where 46 | Fut: Future, 47 | { 48 | type Output = Fut::Output; 49 | 50 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 51 | let mut this = self.project(); 52 | assert!(!*this.done, "Futures must not be polled after completing"); 53 | 54 | for index in this.indexer.iter() { 55 | let fut = utils::get_pin_mut_from_vec(this.futures.as_mut(), index).unwrap(); 56 | match fut.poll(cx) { 57 | Poll::Ready(item) => { 58 | *this.done = true; 59 | return Poll::Ready(item); 60 | } 61 | Poll::Pending => continue, 62 | } 63 | } 64 | Poll::Pending 65 | } 66 | } 67 | 68 | impl RaceTrait for Vec 69 | where 70 | Fut: IntoFuture, 71 | { 72 | type Output = Fut::Output; 73 | type Future = Race; 74 | 75 | fn race(self) -> Self::Future { 76 | Race { 77 | indexer: Indexer::new(self.len()), 78 | futures: self.into_iter().map(|fut| fut.into_future()).collect(), 79 | done: false, 80 | } 81 | } 82 | } 83 | 84 | #[cfg(test)] 85 | mod test { 86 | use super::*; 87 | use alloc::vec; 88 | use core::future; 89 | 90 | // NOTE: we should probably poll in random order. 
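// Racing a `Vec` requires all elements to share one type; differently-typed
// futures can still be raced by boxing them first. A minimal sketch:
#[test]
fn boxed_heterogeneous_futures() {
    use alloc::boxed::Box;
    use alloc::vec::Vec;
    use core::future::Future;
    use core::pin::Pin;

    futures_lite::future::block_on(async {
        let futures: Vec<Pin<Box<dyn Future<Output = &str>>>> = vec![
            Box::pin(future::pending()),
            Box::pin(future::ready("world")),
        ];
        // The pending future never completes, so `race` resolves with "world".
        assert_eq!(futures.race().await, "world");
    });
}
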
91 | #[test] 92 | fn no_fairness() { 93 | futures_lite::future::block_on(async { 94 | let res = vec![future::ready("hello"), future::ready("world")] 95 | .race() 96 | .await; 97 | assert!(matches!(res, "hello" | "world")); 98 | }); 99 | } 100 | } 101 | -------------------------------------------------------------------------------- /src/future/race_ok/array/error.rs: -------------------------------------------------------------------------------- 1 | use core::fmt; 2 | use core::ops::{Deref, DerefMut}; 3 | #[cfg(feature = "std")] 4 | use std::error::Error; 5 | 6 | /// A collection of errors. 7 | #[repr(transparent)] 8 | pub struct AggregateError { 9 | inner: [E; N], 10 | } 11 | 12 | impl AggregateError { 13 | pub(super) fn new(inner: [E; N]) -> Self { 14 | Self { inner } 15 | } 16 | } 17 | 18 | impl fmt::Debug for AggregateError { 19 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 20 | writeln!(f, "{self}:")?; 21 | 22 | for (i, err) in self.inner.iter().enumerate() { 23 | writeln!(f, "- Error {}: {err}", i + 1)?; 24 | } 25 | 26 | Ok(()) 27 | } 28 | } 29 | 30 | impl fmt::Display for AggregateError { 31 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 32 | write!(f, "{} errors occured", self.inner.len()) 33 | } 34 | } 35 | 36 | impl Deref for AggregateError { 37 | type Target = [E; N]; 38 | 39 | fn deref(&self) -> &Self::Target { 40 | &self.inner 41 | } 42 | } 43 | 44 | impl DerefMut for AggregateError { 45 | fn deref_mut(&mut self) -> &mut Self::Target { 46 | &mut self.inner 47 | } 48 | } 49 | 50 | #[cfg(feature = "std")] 51 | impl std::error::Error for AggregateError {} 52 | -------------------------------------------------------------------------------- /src/future/race_ok/mod.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Wait for the first successful future to complete. 9 | /// 10 | /// Awaits multiple futures simultaneously, returning the output of the first 11 | /// future which completes. If no future completes successfully, returns an 12 | /// aggregate error of all failed futures. 13 | pub trait RaceOk { 14 | /// The resulting output type. 15 | type Output; 16 | 17 | /// The resulting error type. 18 | type Error; 19 | 20 | /// Which kind of future are we turning this into? 21 | type Future: Future>; 22 | 23 | /// Waits for the first successful future to complete. 24 | fn race_ok(self) -> Self::Future; 25 | } 26 | -------------------------------------------------------------------------------- /src/future/race_ok/tuple/error.rs: -------------------------------------------------------------------------------- 1 | use core::fmt; 2 | use core::ops::{Deref, DerefMut}; 3 | #[cfg(feature = "std")] 4 | use std::error::Error; 5 | 6 | /// A collection of errors. 
7 | #[repr(transparent)] 8 | pub struct AggregateError { 9 | inner: [E; N], 10 | } 11 | 12 | impl AggregateError { 13 | pub(super) fn new(inner: [E; N]) -> Self { 14 | Self { inner } 15 | } 16 | } 17 | 18 | #[cfg(feature = "std")] 19 | impl fmt::Debug for AggregateError { 20 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 21 | writeln!(f, "{self}:")?; 22 | 23 | for (i, err) in self.inner.iter().enumerate() { 24 | writeln!(f, "- Error {}: {err}", i + 1)?; 25 | let mut source = err.source(); 26 | while let Some(err) = source { 27 | writeln!(f, " ↳ Caused by: {err}")?; 28 | source = err.source(); 29 | } 30 | } 31 | 32 | Ok(()) 33 | } 34 | } 35 | 36 | #[cfg(not(feature = "std"))] 37 | impl fmt::Debug for AggregateError { 38 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 39 | writeln!(f, "{self}:")?; 40 | 41 | for (i, err) in self.inner.iter().enumerate() { 42 | writeln!(f, "- Error {}: {err}", i + 1)?; 43 | } 44 | 45 | Ok(()) 46 | } 47 | } 48 | 49 | #[cfg(feature = "std")] 50 | impl fmt::Display for AggregateError { 51 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 52 | write!(f, "{} errors occured", self.inner.len()) 53 | } 54 | } 55 | 56 | #[cfg(not(feature = "std"))] 57 | impl fmt::Display for AggregateError { 58 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 59 | write!(f, "{} errors occured", self.inner.len()) 60 | } 61 | } 62 | 63 | impl Deref for AggregateError { 64 | type Target = [E; N]; 65 | 66 | fn deref(&self) -> &Self::Target { 67 | &self.inner 68 | } 69 | } 70 | 71 | impl DerefMut for AggregateError { 72 | fn deref_mut(&mut self) -> &mut Self::Target { 73 | &mut self.inner 74 | } 75 | } 76 | 77 | #[cfg(feature = "std")] 78 | impl std::error::Error for AggregateError {} 79 | -------------------------------------------------------------------------------- /src/future/race_ok/vec/error.rs: -------------------------------------------------------------------------------- 1 | #[cfg(all(feature = "alloc", not(feature = "std")))] 2 | use alloc::vec::Vec; 3 | 4 | use core::fmt; 5 | use core::ops::Deref; 6 | use core::ops::DerefMut; 7 | #[cfg(feature = "std")] 8 | use std::error::Error; 9 | 10 | /// A collection of errors. 
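///
/// Returned by [`race_ok`][crate::future::RaceOk::race_ok] on a `Vec` of
/// futures when every future fails. It dereferences to the underlying
/// `Vec<E>`, so individual errors can be inspected by index; a sketch
/// mirroring the `all_err` test elsewhere in `race_ok`:
///
/// ```
/// # futures_lite::future::block_on(async {
/// use futures_concurrency::prelude::*;
/// use std::future;
///
/// let res: Result<&str, _> = vec![future::ready(Err("oops")), future::ready(Err("oh no"))]
///     .race_ok()
///     .await;
///
/// let errors = res.unwrap_err();
/// assert_eq!(errors[0], "oops");
/// assert_eq!(errors[1], "oh no");
/// # });
/// ```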
11 | #[repr(transparent)] 12 | pub struct AggregateError { 13 | pub(crate) inner: Vec, 14 | } 15 | 16 | impl AggregateError { 17 | pub(crate) fn new(inner: Vec) -> Self { 18 | Self { inner } 19 | } 20 | } 21 | 22 | impl fmt::Debug for AggregateError { 23 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 24 | writeln!(f, "{self}:")?; 25 | 26 | for (i, err) in self.inner.iter().enumerate() { 27 | writeln!(f, "- Error {}: {err}", i + 1)?; 28 | } 29 | 30 | Ok(()) 31 | } 32 | } 33 | 34 | impl fmt::Display for AggregateError { 35 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 36 | write!(f, "{} errors occurred", self.inner.len()) 37 | } 38 | } 39 | 40 | impl Deref for AggregateError { 41 | type Target = Vec; 42 | 43 | fn deref(&self) -> &Self::Target { 44 | &self.inner 45 | } 46 | } 47 | 48 | impl DerefMut for AggregateError { 49 | fn deref_mut(&mut self) -> &mut Self::Target { 50 | &mut self.inner 51 | } 52 | } 53 | 54 | #[cfg(feature = "std")] 55 | impl Error for AggregateError {} 56 | -------------------------------------------------------------------------------- /src/future/race_ok/vec/mod.rs: -------------------------------------------------------------------------------- 1 | use super::RaceOk as RaceOkTrait; 2 | use crate::utils::iter_pin_mut; 3 | use crate::utils::MaybeDone; 4 | 5 | #[cfg(all(feature = "alloc", not(feature = "std")))] 6 | use alloc::{boxed::Box, vec::Vec}; 7 | 8 | use core::fmt; 9 | use core::future::{Future, IntoFuture}; 10 | use core::mem; 11 | use core::pin::Pin; 12 | use core::task::{Context, Poll}; 13 | 14 | pub use error::AggregateError; 15 | 16 | mod error; 17 | 18 | /// A future which waits for the first successful future to complete. 19 | /// 20 | /// This `struct` is created by the [`race_ok`] method on the [`RaceOk`] trait. See 21 | /// its documentation for more. 
22 | /// 23 | /// [`race_ok`]: crate::future::RaceOk::race_ok 24 | /// [`RaceOk`]: crate::future::RaceOk 25 | #[must_use = "futures do nothing unless you `.await` or poll them"] 26 | pub struct RaceOk 27 | where 28 | Fut: Future>, 29 | { 30 | elems: Pin]>>, 31 | } 32 | 33 | impl fmt::Debug for RaceOk 34 | where 35 | Fut: Future> + fmt::Debug, 36 | Fut::Output: fmt::Debug, 37 | { 38 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 39 | f.debug_list().entries(self.elems.iter()).finish() 40 | } 41 | } 42 | 43 | impl Future for RaceOk 44 | where 45 | Fut: Future>, 46 | { 47 | type Output = Result>; 48 | 49 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 50 | let mut all_done = true; 51 | 52 | for mut elem in iter_pin_mut(self.elems.as_mut()) { 53 | if elem.as_mut().poll(cx).is_pending() { 54 | all_done = false 55 | } else if let Some(output) = elem.take_ok() { 56 | return Poll::Ready(Ok(output)); 57 | } 58 | } 59 | 60 | if all_done { 61 | let mut elems = mem::replace(&mut self.elems, Box::pin([])); 62 | let result: Vec = iter_pin_mut(elems.as_mut()) 63 | .map(|e| match e.take_err() { 64 | Some(err) => err, 65 | // Since all futures are done without any one of them returning `Ok`, they're 66 | // all `Err`s and so `take_err` cannot fail 67 | None => unreachable!(), 68 | }) 69 | .collect(); 70 | Poll::Ready(Err(AggregateError::new(result))) 71 | } else { 72 | Poll::Pending 73 | } 74 | } 75 | } 76 | 77 | impl RaceOkTrait for Vec 78 | where 79 | Fut: IntoFuture>, 80 | { 81 | type Output = T; 82 | type Error = AggregateError; 83 | type Future = RaceOk; 84 | 85 | fn race_ok(self) -> Self::Future { 86 | let elems: Box<[_]> = self 87 | .into_iter() 88 | .map(|fut| MaybeDone::new(fut.into_future())) 89 | .collect(); 90 | RaceOk { 91 | elems: elems.into(), 92 | } 93 | } 94 | } 95 | 96 | #[cfg(test)] 97 | mod test { 98 | use super::*; 99 | use alloc::vec; 100 | use core::future; 101 | 102 | #[test] 103 | fn all_ok() { 104 | futures_lite::future::block_on(async { 105 | let res: Result<&str, AggregateError<()>> = 106 | vec![future::ready(Ok("hello")), future::ready(Ok("world"))] 107 | .race_ok() 108 | .await; 109 | assert!(res.is_ok()); 110 | }) 111 | } 112 | 113 | #[test] 114 | fn one_err() { 115 | futures_lite::future::block_on(async { 116 | let res: Result<&str, AggregateError<_>> = 117 | vec![future::ready(Ok("hello")), future::ready(Err("oh no"))] 118 | .race_ok() 119 | .await; 120 | assert_eq!(res.unwrap(), "hello"); 121 | }); 122 | } 123 | 124 | #[test] 125 | fn all_err() { 126 | futures_lite::future::block_on(async { 127 | let res: Result<&str, AggregateError<_>> = 128 | vec![future::ready(Err("oops")), future::ready(Err("oh no"))] 129 | .race_ok() 130 | .await; 131 | let errs = res.unwrap_err(); 132 | assert_eq!(errs[0], "oops"); 133 | assert_eq!(errs[1], "oh no"); 134 | }); 135 | } 136 | } 137 | -------------------------------------------------------------------------------- /src/future/try_join/mod.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Wait for all futures to complete successfully, or abort early on error. 9 | /// 10 | /// In the case a future errors, all other futures will be cancelled. If 11 | /// futures have been completed, their results will be discarded. 12 | /// 13 | /// If you want to keep partial data in the case of failure, see the `merge` 14 | /// operation. 
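///
/// # Examples
///
/// A minimal sketch, following the pattern of the other `join` examples in
/// this crate:
///
/// ```
/// # futures_lite::future::block_on(async {
/// use futures_concurrency::prelude::*;
/// use std::future;
///
/// let fut1 = future::ready(Ok::<_, ()>(1));
/// let fut2 = future::ready(Ok::<_, ()>(2));
/// assert_eq!((fut1, fut2).try_join().await, Ok((1, 2)));
/// # });
/// ```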
15 | pub trait TryJoin { 16 | /// The resulting output type. 17 | type Output; 18 | 19 | /// The resulting error type. 20 | type Error; 21 | 22 | /// Which kind of future are we turning this into? 23 | type Future: Future>; 24 | 25 | /// Waits for multiple futures to complete, either returning when all 26 | /// futures complete successfully, or return early when any future completes 27 | /// with an error. 28 | fn try_join(self) -> Self::Future; 29 | } 30 | -------------------------------------------------------------------------------- /src/future/wait_until.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | use core::pin::Pin; 3 | use core::task::{ready, Context, Poll}; 4 | 5 | /// Suspends a future until the specified deadline. 6 | /// 7 | /// This `struct` is created by the [`wait_until`] method on [`FutureExt`]. See its 8 | /// documentation for more. 9 | /// 10 | /// [`wait_until`]: crate::future::FutureExt::wait_until 11 | /// [`FutureExt`]: crate::future::FutureExt 12 | #[derive(Debug)] 13 | #[pin_project::pin_project] 14 | #[must_use = "futures do nothing unless polled or .awaited"] 15 | pub struct WaitUntil { 16 | #[pin] 17 | future: F, 18 | #[pin] 19 | deadline: D, 20 | state: State, 21 | } 22 | 23 | /// The internal state 24 | #[derive(Debug)] 25 | enum State { 26 | Started, 27 | PollFuture, 28 | Completed, 29 | } 30 | 31 | impl WaitUntil { 32 | pub(super) fn new(future: F, deadline: D) -> Self { 33 | Self { 34 | future, 35 | deadline, 36 | state: State::Started, 37 | } 38 | } 39 | } 40 | 41 | impl Future for WaitUntil { 42 | type Output = F::Output; 43 | 44 | fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 45 | let mut this = self.project(); 46 | loop { 47 | match this.state { 48 | State::Started => { 49 | ready!(this.deadline.as_mut().poll(cx)); 50 | *this.state = State::PollFuture; 51 | } 52 | State::PollFuture => { 53 | let value = ready!(this.future.as_mut().poll(cx)); 54 | *this.state = State::Completed; 55 | return Poll::Ready(value); 56 | } 57 | State::Completed => panic!("future polled after completing"), 58 | } 59 | } 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /src/stream/chain/array.rs: -------------------------------------------------------------------------------- 1 | use core::fmt; 2 | use core::pin::Pin; 3 | use core::task::{Context, Poll}; 4 | 5 | use futures_core::Stream; 6 | use pin_project::pin_project; 7 | 8 | use crate::utils; 9 | 10 | use super::Chain as ChainTrait; 11 | 12 | /// A stream that chains multiple streams one after another. 13 | /// 14 | /// This `struct` is created by the [`chain`] method on the [`Chain`] trait. See its 15 | /// documentation for more. 
16 | /// 17 | /// [`chain`]: trait.Chain.html#method.merge 18 | /// [`Chain`]: trait.Chain.html 19 | #[pin_project] 20 | pub struct Chain { 21 | #[pin] 22 | streams: [S; N], 23 | index: usize, 24 | len: usize, 25 | done: bool, 26 | } 27 | 28 | impl Stream for Chain { 29 | type Item = S::Item; 30 | 31 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 32 | let mut this = self.project(); 33 | 34 | assert!(!*this.done, "Stream should not be polled after completion"); 35 | 36 | loop { 37 | if this.index == this.len { 38 | *this.done = true; 39 | return Poll::Ready(None); 40 | } 41 | let stream = utils::iter_pin_mut(this.streams.as_mut()) 42 | .nth(*this.index) 43 | .unwrap(); 44 | match stream.poll_next(cx) { 45 | Poll::Ready(Some(item)) => return Poll::Ready(Some(item)), 46 | Poll::Ready(None) => { 47 | *this.index += 1; 48 | continue; 49 | } 50 | Poll::Pending => return Poll::Pending, 51 | } 52 | } 53 | } 54 | } 55 | 56 | impl fmt::Debug for Chain 57 | where 58 | S: Stream + fmt::Debug, 59 | { 60 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 61 | f.debug_list().entries(self.streams.iter()).finish() 62 | } 63 | } 64 | 65 | impl ChainTrait for [S; N] { 66 | type Item = S::Item; 67 | 68 | type Stream = Chain; 69 | 70 | fn chain(self) -> Self::Stream { 71 | Chain { 72 | len: self.len(), 73 | streams: self, 74 | index: 0, 75 | done: false, 76 | } 77 | } 78 | } 79 | 80 | #[cfg(test)] 81 | mod tests { 82 | use super::*; 83 | use futures_lite::future::block_on; 84 | use futures_lite::prelude::*; 85 | use futures_lite::stream; 86 | 87 | #[test] 88 | fn chain_3() { 89 | block_on(async { 90 | let a = stream::once(1); 91 | let b = stream::once(2); 92 | let c = stream::once(3); 93 | let mut s = [a, b, c].chain(); 94 | 95 | assert_eq!(s.next().await, Some(1)); 96 | assert_eq!(s.next().await, Some(2)); 97 | assert_eq!(s.next().await, Some(3)); 98 | assert_eq!(s.next().await, None); 99 | }) 100 | } 101 | } 102 | -------------------------------------------------------------------------------- /src/stream/chain/mod.rs: -------------------------------------------------------------------------------- 1 | use futures_core::Stream; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Takes multiple streams and creates a new stream over all in sequence. 9 | pub trait Chain { 10 | /// What's the return type of our stream? 11 | type Item; 12 | 13 | /// What stream do we return? 14 | type Stream: Stream; 15 | 16 | /// Combine multiple streams into a single stream. 17 | fn chain(self) -> Self::Stream; 18 | } 19 | -------------------------------------------------------------------------------- /src/stream/chain/tuple.rs: -------------------------------------------------------------------------------- 1 | use core::fmt; 2 | use core::pin::Pin; 3 | use core::task::{Context, Poll}; 4 | 5 | use futures_core::Stream; 6 | 7 | use super::Chain; 8 | 9 | macro_rules! 
impl_chain_for_tuple { 10 | ($mod_name: ident $StructName:ident $($F:ident)+) => { 11 | mod $mod_name { 12 | #[repr(usize)] 13 | enum Indexes { 14 | $($F,)+ 15 | } 16 | 17 | $( 18 | pub(super) const $F: usize = Indexes::$F as usize; 19 | )+ 20 | 21 | pub(super) const LEN: usize = [$(Indexes::$F,)+].len(); 22 | } 23 | 24 | #[pin_project::pin_project] 25 | pub struct $StructName<$($F,)+> { 26 | index: usize, 27 | done: bool, 28 | $( #[pin] $F: $F,)+ 29 | } 30 | 31 | impl Stream for $StructName<$($F,)+> 32 | where 33 | $($F: Stream,)+ 34 | { 35 | type Item = T; 36 | 37 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 38 | let mut this = self.project(); 39 | 40 | assert!(!*this.done, "Stream should not be polled after completion"); 41 | 42 | loop { 43 | if *this.index == $mod_name::LEN { 44 | *this.done = true; 45 | return Poll::Ready(None); 46 | } 47 | 48 | match *this.index { 49 | $( 50 | $mod_name::$F => { 51 | let fut = unsafe { Pin::new_unchecked(&mut this.$F) }; 52 | match fut.poll_next(cx) { 53 | Poll::Ready(None) => { 54 | *this.index += 1; 55 | continue; 56 | } 57 | v @ (Poll::Pending | Poll::Ready(Some(_))) => return v, 58 | } 59 | }, 60 | )+ 61 | _ => unreachable!(), 62 | } 63 | } 64 | } 65 | } 66 | 67 | impl<$($F,)+> fmt::Debug for $StructName<$($F,)+> 68 | where 69 | $($F: fmt::Debug,)+ 70 | { 71 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 72 | f.debug_tuple("Chain") 73 | $(.field(&self.$F))+ 74 | .finish() 75 | } 76 | } 77 | 78 | impl Chain for ($($F,)+) 79 | where 80 | $($F: Stream,)+ 81 | { 82 | type Item = T; 83 | 84 | type Stream = $StructName<$($F,)+>; 85 | 86 | fn chain(self) -> Self::Stream { 87 | let ($($F,)*): ($($F,)*) = self; 88 | Self::Stream { 89 | done: false, 90 | index: 0, 91 | $($F,)+ 92 | } 93 | } 94 | } 95 | } 96 | } 97 | 98 | impl_chain_for_tuple! { chain_1 Chain1 A } 99 | impl_chain_for_tuple! { chain_2 Chain2 A B } 100 | impl_chain_for_tuple! { chain_3 Chain3 A B C } 101 | impl_chain_for_tuple! { chain_4 Chain4 A B C D } 102 | impl_chain_for_tuple! { chain_5 Chain5 A B C D E } 103 | impl_chain_for_tuple! { chain_6 Chain6 A B C D E F } 104 | impl_chain_for_tuple! { chain_7 Chain7 A B C D E F G } 105 | impl_chain_for_tuple! { chain_8 Chain8 A B C D E F G H } 106 | impl_chain_for_tuple! { chain_9 Chain9 A B C D E F G H I } 107 | impl_chain_for_tuple! { chain_10 Chain10 A B C D E F G H I J } 108 | impl_chain_for_tuple! { chain_11 Chain11 A B C D E F G H I J K } 109 | impl_chain_for_tuple! 
{ chain_12 Chain12 A B C D E F G H I J K L } 110 | 111 | #[cfg(test)] 112 | mod tests { 113 | use super::*; 114 | 115 | use futures_lite::future::block_on; 116 | use futures_lite::prelude::*; 117 | use futures_lite::stream; 118 | 119 | #[test] 120 | fn chain_3() { 121 | block_on(async { 122 | let a = stream::once(1); 123 | let b = stream::once(2); 124 | let c = stream::once(3); 125 | let mut s = (a, b, c).chain(); 126 | 127 | assert_eq!(s.next().await, Some(1)); 128 | assert_eq!(s.next().await, Some(2)); 129 | assert_eq!(s.next().await, Some(3)); 130 | assert_eq!(s.next().await, None); 131 | }) 132 | } 133 | } 134 | -------------------------------------------------------------------------------- /src/stream/chain/vec.rs: -------------------------------------------------------------------------------- 1 | #[cfg(all(feature = "alloc", not(feature = "std")))] 2 | use alloc::vec::Vec; 3 | 4 | use core::fmt; 5 | use core::pin::Pin; 6 | use core::task::{Context, Poll}; 7 | 8 | use futures_core::Stream; 9 | use pin_project::pin_project; 10 | 11 | use crate::utils; 12 | 13 | use super::Chain as ChainTrait; 14 | 15 | /// A stream that chains multiple streams one after another. 16 | /// 17 | /// This `struct` is created by the [`chain`] method on the [`Chain`] trait. See its 18 | /// documentation for more. 19 | /// 20 | /// [`chain`]: trait.Chain.html#method.merge 21 | /// [`Chain`]: trait.Chain.html 22 | #[pin_project] 23 | pub struct Chain { 24 | #[pin] 25 | streams: Vec, 26 | index: usize, 27 | len: usize, 28 | done: bool, 29 | } 30 | 31 | impl Stream for Chain { 32 | type Item = S::Item; 33 | 34 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 35 | let mut this = self.project(); 36 | 37 | assert!(!*this.done, "Stream should not be polled after completion"); 38 | 39 | loop { 40 | if this.index == this.len { 41 | *this.done = true; 42 | return Poll::Ready(None); 43 | } 44 | let stream = utils::iter_pin_mut_vec(this.streams.as_mut()) 45 | .nth(*this.index) 46 | .unwrap(); 47 | match stream.poll_next(cx) { 48 | Poll::Ready(Some(item)) => return Poll::Ready(Some(item)), 49 | Poll::Ready(None) => { 50 | *this.index += 1; 51 | continue; 52 | } 53 | Poll::Pending => return Poll::Pending, 54 | } 55 | } 56 | } 57 | } 58 | 59 | impl fmt::Debug for Chain 60 | where 61 | S: Stream + fmt::Debug, 62 | { 63 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 64 | f.debug_list().entries(self.streams.iter()).finish() 65 | } 66 | } 67 | 68 | impl ChainTrait for Vec { 69 | type Item = S::Item; 70 | 71 | type Stream = Chain; 72 | 73 | fn chain(self) -> Self::Stream { 74 | Chain { 75 | len: self.len(), 76 | streams: self, 77 | index: 0, 78 | done: false, 79 | } 80 | } 81 | } 82 | 83 | #[cfg(test)] 84 | mod tests { 85 | use super::*; 86 | use alloc::vec; 87 | use futures_lite::future::block_on; 88 | use futures_lite::prelude::*; 89 | use futures_lite::stream; 90 | 91 | #[test] 92 | fn chain_3() { 93 | block_on(async { 94 | let a = stream::once(1); 95 | let b = stream::once(2); 96 | let c = stream::once(3); 97 | let mut s = vec![a, b, c].chain(); 98 | 99 | assert_eq!(s.next().await, Some(1)); 100 | assert_eq!(s.next().await, Some(2)); 101 | assert_eq!(s.next().await, Some(3)); 102 | assert_eq!(s.next().await, None); 103 | }) 104 | } 105 | } 106 | -------------------------------------------------------------------------------- /src/stream/into_stream.rs: -------------------------------------------------------------------------------- 1 | use futures_core::Stream; 2 | 3 | /// Conversion 
into a [`Stream`]. 4 | /// 5 | /// By implementing `IntoStream` for a type, you define how it will be 6 | /// converted to an iterator. This is common for types which describe a 7 | /// collection of some kind. 8 | pub trait IntoStream { 9 | /// The type of the elements being iterated over. 10 | type Item; 11 | 12 | /// Which kind of stream are we turning this into? 13 | type IntoStream: Stream; 14 | 15 | /// Creates a stream from a value. 16 | fn into_stream(self) -> Self::IntoStream; 17 | } 18 | 19 | impl IntoStream for S { 20 | type Item = S::Item; 21 | type IntoStream = S; 22 | 23 | #[inline] 24 | fn into_stream(self) -> S { 25 | self 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /src/stream/merge/array.rs: -------------------------------------------------------------------------------- 1 | use super::Merge as MergeTrait; 2 | use crate::stream::IntoStream; 3 | use crate::utils::{self, Indexer, PollArray, WakerArray}; 4 | 5 | use core::fmt; 6 | use core::pin::Pin; 7 | use core::task::{Context, Poll}; 8 | use futures_core::Stream; 9 | 10 | /// A stream that merges multiple streams into a single stream. 11 | /// 12 | /// This `struct` is created by the [`merge`] method on the [`Merge`] trait. See its 13 | /// documentation for more. 14 | /// 15 | /// [`merge`]: trait.Merge.html#method.merge 16 | /// [`Merge`]: trait.Merge.html 17 | #[pin_project::pin_project] 18 | pub struct Merge 19 | where 20 | S: Stream, 21 | { 22 | #[pin] 23 | streams: [S; N], 24 | indexer: Indexer, 25 | wakers: WakerArray, 26 | state: PollArray, 27 | complete: usize, 28 | done: bool, 29 | } 30 | 31 | impl Merge 32 | where 33 | S: Stream, 34 | { 35 | pub(crate) fn new(streams: [S; N]) -> Self { 36 | Self { 37 | streams, 38 | indexer: Indexer::new(N), 39 | wakers: WakerArray::new(), 40 | state: PollArray::new_pending(), 41 | complete: 0, 42 | done: false, 43 | } 44 | } 45 | } 46 | 47 | impl fmt::Debug for Merge 48 | where 49 | S: Stream + fmt::Debug, 50 | { 51 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 52 | f.debug_list().entries(self.streams.iter()).finish() 53 | } 54 | } 55 | 56 | impl Stream for Merge 57 | where 58 | S: Stream, 59 | { 60 | type Item = S::Item; 61 | 62 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 63 | let mut this = self.project(); 64 | 65 | let mut readiness = this.wakers.readiness(); 66 | readiness.set_waker(cx.waker()); 67 | 68 | // Iterate over our streams one-by-one. If a stream yields a value, 69 | // we exit early. By default we'll return `Poll::Ready(None)`, but 70 | // this changes if we encounter a `Poll::Pending`. 71 | for index in this.indexer.iter() { 72 | if !readiness.any_ready() { 73 | // Nothing is ready yet 74 | return Poll::Pending; 75 | } else if !readiness.clear_ready(index) || this.state[index].is_none() { 76 | continue; 77 | } 78 | 79 | // unlock readiness so we don't deadlock when polling 80 | #[allow(clippy::drop_non_drop)] 81 | drop(readiness); 82 | 83 | // Obtain the intermediate waker. 84 | let mut cx = Context::from_waker(this.wakers.get(index).unwrap()); 85 | 86 | let stream = utils::get_pin_mut(this.streams.as_mut(), index).unwrap(); 87 | match stream.poll_next(&mut cx) { 88 | Poll::Ready(Some(item)) => { 89 | // Mark ourselves as ready again because we need to poll for the next item. 
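// Without re-arming readiness here, this stream would only be polled again
// once its waker fired, even though it may already have another item
// available.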
90 | this.wakers.readiness().set_ready(index); 91 | return Poll::Ready(Some(item)); 92 | } 93 | Poll::Ready(None) => { 94 | *this.complete += 1; 95 | this.state[index].set_none(); 96 | if *this.complete == this.streams.len() { 97 | return Poll::Ready(None); 98 | } 99 | } 100 | Poll::Pending => {} 101 | } 102 | 103 | // Lock readiness so we can use it again 104 | readiness = this.wakers.readiness(); 105 | } 106 | 107 | Poll::Pending 108 | } 109 | } 110 | 111 | impl MergeTrait for [S; N] 112 | where 113 | S: IntoStream, 114 | { 115 | type Item = as Stream>::Item; 116 | type Stream = Merge; 117 | 118 | fn merge(self) -> Self::Stream { 119 | Merge::new(self.map(|i| i.into_stream())) 120 | } 121 | } 122 | 123 | #[cfg(test)] 124 | mod tests { 125 | use super::*; 126 | use futures_lite::future::block_on; 127 | use futures_lite::prelude::*; 128 | use futures_lite::stream; 129 | 130 | #[test] 131 | fn merge_array_4() { 132 | block_on(async { 133 | let a = stream::once(1); 134 | let b = stream::once(2); 135 | let c = stream::once(3); 136 | let d = stream::once(4); 137 | let mut s = [a, b, c, d].merge(); 138 | 139 | let mut counter = 0; 140 | while let Some(n) = s.next().await { 141 | counter += n; 142 | } 143 | assert_eq!(counter, 10); 144 | }) 145 | } 146 | 147 | #[test] 148 | fn merge_array_2x2() { 149 | block_on(async { 150 | let a = stream::repeat(1).take(2); 151 | let b = stream::repeat(2).take(2); 152 | let mut s = [a, b].merge(); 153 | 154 | let mut counter = 0; 155 | while let Some(n) = s.next().await { 156 | counter += n; 157 | } 158 | assert_eq!(counter, 6); 159 | }) 160 | } 161 | 162 | /// This test case uses channels so we'll have streams that return Pending from time to time. 163 | /// 164 | /// The purpose of this test is to make sure we have the waking logic working. 165 | #[test] 166 | #[cfg(feature = "alloc")] 167 | fn merge_channels() { 168 | use alloc::rc::Rc; 169 | use core::cell::RefCell; 170 | use futures::executor::LocalPool; 171 | use futures::task::LocalSpawnExt; 172 | 173 | use crate::future::join::Join; 174 | use crate::utils::channel::local_channel; 175 | 176 | let mut pool = LocalPool::new(); 177 | 178 | let done = Rc::new(RefCell::new(false)); 179 | let done2 = done.clone(); 180 | 181 | pool.spawner() 182 | .spawn_local(async move { 183 | let (send1, receive1) = local_channel(); 184 | let (send2, receive2) = local_channel(); 185 | let (send3, receive3) = local_channel(); 186 | 187 | let (count, ()) = ( 188 | async { 189 | [receive1, receive2, receive3] 190 | .merge() 191 | .fold(0, |a, b| a + b) 192 | .await 193 | }, 194 | async { 195 | for i in 1..=4 { 196 | send1.send(i); 197 | send2.send(i); 198 | send3.send(i); 199 | } 200 | drop(send1); 201 | drop(send2); 202 | drop(send3); 203 | }, 204 | ) 205 | .join() 206 | .await; 207 | 208 | assert_eq!(count, 30); 209 | 210 | *done2.borrow_mut() = true; 211 | }) 212 | .unwrap(); 213 | 214 | while !*done.borrow() { 215 | pool.run_until_stalled() 216 | } 217 | } 218 | } 219 | -------------------------------------------------------------------------------- /src/stream/merge/mod.rs: -------------------------------------------------------------------------------- 1 | use futures_core::Stream; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// Combines multiple streams into a single stream of all their outputs. 9 | /// 10 | /// Items are yielded as soon as they're received, and the stream continues 11 | /// yield until both streams have been exhausted. 
The output ordering 12 | /// between streams is not guaranteed. 13 | /// 14 | /// # Examples 15 | /// 16 | /// ``` 17 | /// use futures_concurrency::prelude::*; 18 | /// use futures_lite::stream::{self, StreamExt}; 19 | /// use futures_lite::future::block_on; 20 | /// 21 | /// block_on(async { 22 | /// let a = stream::once(1); 23 | /// let b = stream::once(2); 24 | /// let c = stream::once(3); 25 | /// let mut s = [a, b, c].merge(); 26 | /// 27 | /// let mut buf = vec![]; 28 | /// s.for_each(|n| buf.push(n)).await; 29 | /// buf.sort_unstable(); 30 | /// assert_eq!(&buf, &[1, 2, 3]); 31 | /// }) 32 | /// ``` 33 | pub trait Merge { 34 | /// The resulting output type. 35 | type Item; 36 | 37 | /// The stream type. 38 | type Stream: Stream; 39 | 40 | /// Combine multiple streams into a single stream. 41 | fn merge(self) -> Self::Stream; 42 | } 43 | -------------------------------------------------------------------------------- /src/stream/merge/vec.rs: -------------------------------------------------------------------------------- 1 | use super::Merge as MergeTrait; 2 | use crate::stream::IntoStream; 3 | use crate::utils::{self, Indexer, PollVec, WakerVec}; 4 | 5 | #[cfg(all(feature = "alloc", not(feature = "std")))] 6 | use alloc::vec::Vec; 7 | 8 | use core::fmt; 9 | use core::pin::Pin; 10 | use core::task::{Context, Poll}; 11 | use futures_core::Stream; 12 | 13 | /// A stream that merges multiple streams into a single stream. 14 | /// 15 | /// This `struct` is created by the [`merge`] method on the [`Merge`] trait. See its 16 | /// documentation for more. 17 | /// 18 | /// [`merge`]: trait.Merge.html#method.merge 19 | /// [`Merge`]: trait.Merge.html 20 | #[pin_project::pin_project] 21 | pub struct Merge 22 | where 23 | S: Stream, 24 | { 25 | #[pin] 26 | streams: Vec, 27 | indexer: Indexer, 28 | complete: usize, 29 | wakers: WakerVec, 30 | state: PollVec, 31 | done: bool, 32 | } 33 | 34 | impl Merge 35 | where 36 | S: Stream, 37 | { 38 | pub(crate) fn new(streams: Vec) -> Self { 39 | let len = streams.len(); 40 | Self { 41 | wakers: WakerVec::new(len), 42 | state: PollVec::new_pending(len), 43 | indexer: Indexer::new(len), 44 | streams, 45 | complete: 0, 46 | done: false, 47 | } 48 | } 49 | } 50 | 51 | impl fmt::Debug for Merge 52 | where 53 | S: Stream + fmt::Debug, 54 | { 55 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 56 | f.debug_list().entries(self.streams.iter()).finish() 57 | } 58 | } 59 | 60 | impl Stream for Merge 61 | where 62 | S: Stream, 63 | { 64 | type Item = S::Item; 65 | 66 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 67 | let mut this = self.project(); 68 | 69 | let mut readiness = this.wakers.readiness(); 70 | readiness.set_waker(cx.waker()); 71 | 72 | // Iterate over our streams one-by-one. If a stream yields a value, 73 | // we exit early. By default we'll return `Poll::Ready(None)`, but 74 | // this changes if we encounter a `Poll::Pending`. 75 | for index in this.indexer.iter() { 76 | if !readiness.any_ready() { 77 | // Nothing is ready yet 78 | return Poll::Pending; 79 | } else if !readiness.clear_ready(index) || this.state[index].is_none() { 80 | continue; 81 | } 82 | 83 | // unlock readiness so we don't deadlock when polling 84 | #[allow(clippy::drop_non_drop)] 85 | drop(readiness); 86 | 87 | // Obtain the intermediate waker. 
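                // Each per-stream waker flags its own index as ready and then wakes
                // the parent waker, so a wake-up from one stream only causes that
                // stream to be polled again rather than the whole set.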
88 | let mut cx = Context::from_waker(this.wakers.get(index).unwrap()); 89 | 90 | let stream = utils::get_pin_mut_from_vec(this.streams.as_mut(), index).unwrap(); 91 | match stream.poll_next(&mut cx) { 92 | Poll::Ready(Some(item)) => { 93 | // Mark ourselves as ready again because we need to poll for the next item. 94 | this.wakers.readiness().set_ready(index); 95 | return Poll::Ready(Some(item)); 96 | } 97 | Poll::Ready(None) => { 98 | *this.complete += 1; 99 | this.state[index].set_none(); 100 | if *this.complete == this.streams.len() { 101 | return Poll::Ready(None); 102 | } 103 | } 104 | Poll::Pending => {} 105 | } 106 | 107 | // Lock readiness so we can use it again 108 | readiness = this.wakers.readiness(); 109 | } 110 | 111 | Poll::Pending 112 | } 113 | } 114 | 115 | impl MergeTrait for Vec 116 | where 117 | S: IntoStream, 118 | { 119 | type Item = as Stream>::Item; 120 | type Stream = Merge; 121 | 122 | fn merge(self) -> Self::Stream { 123 | Merge::new(self.into_iter().map(|i| i.into_stream()).collect()) 124 | } 125 | } 126 | 127 | #[cfg(test)] 128 | mod tests { 129 | use alloc::rc::Rc; 130 | use alloc::vec; 131 | use core::cell::RefCell; 132 | 133 | use super::*; 134 | use crate::utils::channel::local_channel; 135 | use futures::executor::LocalPool; 136 | use futures::task::LocalSpawnExt; 137 | use futures_lite::future::block_on; 138 | use futures_lite::prelude::*; 139 | use futures_lite::stream; 140 | 141 | use crate::future::join::Join; 142 | 143 | #[test] 144 | fn merge_vec_4() { 145 | block_on(async { 146 | let a = stream::once(1); 147 | let b = stream::once(2); 148 | let c = stream::once(3); 149 | let d = stream::once(4); 150 | let mut s = vec![a, b, c, d].merge(); 151 | 152 | let mut counter = 0; 153 | while let Some(n) = s.next().await { 154 | counter += n; 155 | } 156 | assert_eq!(counter, 10); 157 | }) 158 | } 159 | 160 | #[test] 161 | fn merge_vec_2x2() { 162 | block_on(async { 163 | let a = stream::repeat(1).take(2); 164 | let b = stream::repeat(2).take(2); 165 | let mut s = vec![a, b].merge(); 166 | 167 | let mut counter = 0; 168 | while let Some(n) = s.next().await { 169 | counter += n; 170 | } 171 | assert_eq!(counter, 6); 172 | }) 173 | } 174 | 175 | /// This test case uses channels so we'll have streams that return Pending from time to time. 176 | /// 177 | /// The purpose of this test is to make sure we have the waking logic working. 178 | #[test] 179 | fn merge_channels() { 180 | let mut pool = LocalPool::new(); 181 | 182 | let done = Rc::new(RefCell::new(false)); 183 | let done2 = done.clone(); 184 | 185 | pool.spawner() 186 | .spawn_local(async move { 187 | let (send1, receive1) = local_channel(); 188 | let (send2, receive2) = local_channel(); 189 | let (send3, receive3) = local_channel(); 190 | 191 | let (count, ()) = ( 192 | async { 193 | vec![receive1, receive2, receive3] 194 | .merge() 195 | .fold(0, |a, b| a + b) 196 | .await 197 | }, 198 | async { 199 | for i in 1..=4 { 200 | send1.send(i); 201 | send2.send(i); 202 | send3.send(i); 203 | } 204 | drop(send1); 205 | drop(send2); 206 | drop(send3); 207 | }, 208 | ) 209 | .join() 210 | .await; 211 | 212 | assert_eq!(count, 30); 213 | 214 | *done2.borrow_mut() = true; 215 | }) 216 | .unwrap(); 217 | 218 | while !*done.borrow() { 219 | pool.run_until_stalled() 220 | } 221 | } 222 | } 223 | -------------------------------------------------------------------------------- /src/stream/mod.rs: -------------------------------------------------------------------------------- 1 | //! Composable asynchronous iteration. 
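//! This module provides the [`Merge`], [`Chain`], and [`Zip`] traits for
//! combining streams, together with the [`StreamExt`] extension trait and the
//! growable [`StreamGroup`].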
2 | //! 3 | //! # Examples 4 | //! 5 | //! Merge multiple streams to handle values as soon as they're ready, without 6 | //! ever dropping a single value: 7 | //! 8 | //! ``` 9 | //! use futures_concurrency::prelude::*; 10 | //! use futures_lite::stream::{self, StreamExt}; 11 | //! use futures_lite::future::block_on; 12 | //! 13 | //! block_on(async { 14 | //! let a = stream::once(1); 15 | //! let b = stream::once(2); 16 | //! let c = stream::once(3); 17 | //! let s = (a, b, c).merge(); 18 | //! 19 | //! let mut counter = 0; 20 | //! s.for_each(|n| counter += n).await; 21 | //! assert_eq!(counter, 6); 22 | //! }) 23 | //! ``` 24 | //! 25 | //! # Concurrency 26 | //! 27 | //! When working with multiple (async) iterators, the ordering in which 28 | //! iterators are awaited is important. As part of async iterators, Rust 29 | //! provides built-in operations to control the order of execution of sets of 30 | //! iterators: 31 | //! 32 | //! - `merge`: combine multiple iterators into a single iterator, where the new 33 | //! iterator yields an item as soon as one is available from one of the 34 | //! underlying iterators. 35 | //! - `zip`: combine multiple iterators into an iterator of pairs. The 36 | //! underlying iterators will be awaited concurrently. 37 | //! - `chain`: iterate over multiple iterators in sequence. The next iterator in 38 | //! the sequence won't start until the previous iterator has finished. 39 | //! 40 | //! ## Futures 41 | //! 42 | //! Futures can be thought of as async sequences of single items. Using 43 | //! `stream::once`, futures can be converted into async iterators and then used 44 | //! with any of the iterator concurrency methods. This enables operations such 45 | //! as `stream::Merge` to be used to execute sets of futures concurrently, but 46 | //! obtain the individual future's outputs as soon as they're available. 47 | //! 48 | //! See the [future concurrency][crate::future#concurrency] documentation for 49 | //! more on futures concurrency. 50 | pub use chain::Chain; 51 | pub use into_stream::IntoStream; 52 | pub use merge::Merge; 53 | pub use stream_ext::StreamExt; 54 | #[doc(inline)] 55 | #[cfg(feature = "alloc")] 56 | pub use stream_group::StreamGroup; 57 | pub use wait_until::WaitUntil; 58 | pub use zip::Zip; 59 | 60 | /// A growable group of streams which act as a single unit. 61 | #[cfg(feature = "alloc")] 62 | pub mod stream_group; 63 | 64 | pub(crate) mod chain; 65 | mod into_stream; 66 | pub(crate) mod merge; 67 | mod stream_ext; 68 | pub(crate) mod wait_until; 69 | pub(crate) mod zip; 70 | -------------------------------------------------------------------------------- /src/stream/stream_ext.rs: -------------------------------------------------------------------------------- 1 | use core::future::IntoFuture; 2 | 3 | use crate::stream::{IntoStream, Merge}; 4 | use futures_core::Stream; 5 | 6 | #[cfg(feature = "alloc")] 7 | use crate::concurrent_stream::FromStream; 8 | 9 | use super::{chain::tuple::Chain2, merge::tuple::Merge2, zip::tuple::Zip2, Chain, WaitUntil, Zip}; 10 | 11 | /// An extension trait for the `Stream` trait. 12 | pub trait StreamExt: Stream { 13 | /// Combines two streams into a single stream of all their outputs. 
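    /// A minimal example, mirroring the [`Merge`] trait docs:
    ///
    /// ```
    /// use futures_concurrency::prelude::*;
    /// use futures_lite::stream::{self, StreamExt};
    /// use futures_lite::future::block_on;
    ///
    /// block_on(async {
    ///     let a = stream::once(1);
    ///     let b = stream::once(2);
    ///     let s = a.merge(b);
    ///
    ///     let mut counter = 0;
    ///     s.for_each(|n| counter += n).await;
    ///     assert_eq!(counter, 3);
    /// })
    /// ```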
14 | fn merge(self, other: S2) -> Merge2 15 | where 16 | Self: Stream + Sized, 17 | S2: IntoStream; 18 | 19 | /// Takes two streams and creates a new stream over all in sequence 20 | fn chain(self, other: S2) -> Chain2 21 | where 22 | Self: Stream + Sized, 23 | S2: IntoStream; 24 | 25 | /// ‘Zips up’ multiple streams into a single stream of pairs. 26 | fn zip(self, other: S2) -> Zip2 27 | where 28 | Self: Stream + Sized, 29 | S2: IntoStream; 30 | 31 | /// Convert into a concurrent stream. 32 | #[cfg(feature = "alloc")] 33 | fn co(self) -> FromStream 34 | where 35 | Self: Sized, 36 | { 37 | FromStream::new(self) 38 | } 39 | 40 | /// Delay the yielding of items from the stream until the given deadline. 41 | /// 42 | /// The underlying stream will not be polled until the deadline has expired. In addition 43 | /// to using a time source as a deadline, any future can be used as a 44 | /// deadline too. When used in combination with a multi-consumer channel, 45 | /// this method can be used to synchronize the start of multiple streams and futures. 46 | /// 47 | /// # Example 48 | /// ``` 49 | /// # #[cfg(miri)] fn main() {} 50 | /// # #[cfg(not(miri))] 51 | /// # fn main() { 52 | /// use async_io::Timer; 53 | /// use futures_concurrency::prelude::*; 54 | /// use futures_lite::{future::block_on, stream}; 55 | /// use futures_lite::prelude::*; 56 | /// use std::time::{Duration, Instant}; 57 | /// 58 | /// block_on(async { 59 | /// let now = Instant::now(); 60 | /// let duration = Duration::from_millis(100); 61 | /// 62 | /// stream::once("meow") 63 | /// .wait_until(Timer::after(duration)) 64 | /// .next() 65 | /// .await; 66 | /// 67 | /// assert!(now.elapsed() >= duration); 68 | /// }); 69 | /// # } 70 | /// ``` 71 | fn wait_until(self, deadline: D) -> WaitUntil 72 | where 73 | Self: Sized, 74 | D: IntoFuture, 75 | { 76 | WaitUntil::new(self, deadline.into_future()) 77 | } 78 | } 79 | 80 | impl StreamExt for S1 81 | where 82 | S1: Stream, 83 | { 84 | fn merge(self, other: S2) -> Merge2 85 | where 86 | S1: Stream, 87 | S2: IntoStream, 88 | { 89 | Merge::merge((self, other)) 90 | } 91 | 92 | fn chain(self, other: S2) -> Chain2 93 | where 94 | Self: Stream + Sized, 95 | S2: IntoStream, 96 | { 97 | // TODO(yosh): fix the bounds on the tuple impl 98 | Chain::chain((self, other.into_stream())) 99 | } 100 | 101 | fn zip(self, other: S2) -> Zip2 102 | where 103 | Self: Stream + Sized, 104 | S2: IntoStream, 105 | { 106 | // TODO(yosh): fix the bounds on the tuple impl 107 | Zip::zip((self, other.into_stream())) 108 | } 109 | } 110 | -------------------------------------------------------------------------------- /src/stream/wait_until.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | use core::pin::Pin; 3 | use core::task::{Context, Poll}; 4 | 5 | use futures_core::stream::Stream; 6 | use pin_project::pin_project; 7 | 8 | /// Delay execution of a stream once for the specified duration. 9 | /// 10 | /// This `struct` is created by the [`wait_until`] method on [`StreamExt`]. See its 11 | /// documentation for more. 
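///
/// The wrapped stream is not polled at all until the `deadline` future has
/// resolved; after that, calls are forwarded straight to the underlying stream.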
12 | /// 13 | /// [`wait_until`]: crate::stream::StreamExt::wait_until 14 | /// [`StreamExt`]: crate::stream::StreamExt 15 | #[derive(Debug)] 16 | #[must_use = "streams do nothing unless polled or .awaited"] 17 | #[pin_project] 18 | pub struct WaitUntil { 19 | #[pin] 20 | stream: S, 21 | #[pin] 22 | deadline: D, 23 | state: State, 24 | } 25 | 26 | #[derive(Debug)] 27 | enum State { 28 | Timer, 29 | Streaming, 30 | } 31 | 32 | impl WaitUntil { 33 | pub(crate) fn new(stream: S, deadline: D) -> Self { 34 | WaitUntil { 35 | stream, 36 | deadline, 37 | state: State::Timer, 38 | } 39 | } 40 | } 41 | 42 | impl Stream for WaitUntil 43 | where 44 | S: Stream, 45 | D: Future, 46 | { 47 | type Item = S::Item; 48 | 49 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 50 | let this = self.project(); 51 | 52 | match this.state { 53 | State::Timer => match this.deadline.poll(cx) { 54 | Poll::Pending => Poll::Pending, 55 | Poll::Ready(_) => { 56 | *this.state = State::Streaming; 57 | this.stream.poll_next(cx) 58 | } 59 | }, 60 | State::Streaming => this.stream.poll_next(cx), 61 | } 62 | } 63 | } 64 | -------------------------------------------------------------------------------- /src/stream/zip/array.rs: -------------------------------------------------------------------------------- 1 | use super::Zip as ZipTrait; 2 | use crate::stream::IntoStream; 3 | use crate::utils::{self, PollArray, WakerArray}; 4 | 5 | use core::array; 6 | use core::fmt; 7 | use core::mem::{self, MaybeUninit}; 8 | use core::pin::Pin; 9 | use core::task::{Context, Poll}; 10 | 11 | use futures_core::Stream; 12 | use pin_project::{pin_project, pinned_drop}; 13 | 14 | /// A stream that ‘zips up’ multiple streams into a single stream of pairs. 15 | /// 16 | /// This `struct` is created by the [`zip`] method on the [`Zip`] trait. See its 17 | /// documentation for more. 
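///
/// Each item is an array of length `N` holding one element from every input
/// stream; the stream ends as soon as any input stream is exhausted.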
18 | /// 19 | /// [`zip`]: trait.Zip.html#method.zip 20 | /// [`Zip`]: trait.Zip.html 21 | #[pin_project(PinnedDrop)] 22 | pub struct Zip 23 | where 24 | S: Stream, 25 | { 26 | #[pin] 27 | streams: [S; N], 28 | output: [MaybeUninit<::Item>; N], 29 | wakers: WakerArray, 30 | state: PollArray, 31 | done: bool, 32 | } 33 | 34 | impl Zip 35 | where 36 | S: Stream, 37 | { 38 | pub(crate) fn new(streams: [S; N]) -> Self { 39 | Self { 40 | streams, 41 | output: array::from_fn(|_| MaybeUninit::uninit()), 42 | state: PollArray::new_pending(), 43 | wakers: WakerArray::new(), 44 | done: false, 45 | } 46 | } 47 | } 48 | 49 | impl fmt::Debug for Zip 50 | where 51 | S: Stream + fmt::Debug, 52 | { 53 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 54 | f.debug_list().entries(self.streams.iter()).finish() 55 | } 56 | } 57 | 58 | impl Stream for Zip 59 | where 60 | S: Stream, 61 | { 62 | type Item = [S::Item; N]; 63 | 64 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 65 | let mut this = self.project(); 66 | 67 | assert!(!*this.done, "Stream should not be polled after completion"); 68 | 69 | let mut readiness = this.wakers.readiness(); 70 | readiness.set_waker(cx.waker()); 71 | for index in 0..N { 72 | if !readiness.any_ready() { 73 | // Nothing is ready yet 74 | return Poll::Pending; 75 | } else if this.state[index].is_ready() || !readiness.clear_ready(index) { 76 | // We already have data stored for this stream, 77 | // Or this waker isn't ready yet 78 | continue; 79 | } 80 | 81 | // unlock readiness so we don't deadlock when polling 82 | #[allow(clippy::drop_non_drop)] 83 | drop(readiness); 84 | 85 | // Obtain the intermediate waker. 86 | let mut cx = Context::from_waker(this.wakers.get(index).unwrap()); 87 | 88 | let stream = utils::get_pin_mut(this.streams.as_mut(), index).unwrap(); 89 | match stream.poll_next(&mut cx) { 90 | Poll::Ready(Some(item)) => { 91 | this.output[index] = MaybeUninit::new(item); 92 | this.state[index].set_ready(); 93 | 94 | let all_ready = this.state.iter().all(|state| state.is_ready()); 95 | if all_ready { 96 | // Reset the future's state. 97 | readiness = this.wakers.readiness(); 98 | readiness.set_all_ready(); 99 | this.state.set_all_pending(); 100 | 101 | // Take the output 102 | // 103 | // SAFETY: we just validated all our data is populated, meaning 104 | // we can assume this is initialized. 105 | let mut output = array::from_fn(|_| MaybeUninit::uninit()); 106 | mem::swap(this.output, &mut output); 107 | let output = unsafe { array_assume_init(output) }; 108 | return Poll::Ready(Some(output)); 109 | } 110 | } 111 | Poll::Ready(None) => { 112 | // If one stream returns `None`, we can no longer return 113 | // pairs - meaning the stream is over. 114 | *this.done = true; 115 | return Poll::Ready(None); 116 | } 117 | Poll::Pending => {} 118 | } 119 | 120 | // Lock readiness so we can use it again 121 | readiness = this.wakers.readiness(); 122 | } 123 | Poll::Pending 124 | } 125 | } 126 | 127 | /// Drop the already initialized values on cancellation. 128 | #[pinned_drop] 129 | impl PinnedDrop for Zip 130 | where 131 | S: Stream, 132 | { 133 | fn drop(self: Pin<&mut Self>) { 134 | let this = self.project(); 135 | 136 | for (state, output) in this.state.iter_mut().zip(this.output.iter_mut()) { 137 | if state.is_ready() { 138 | // SAFETY: we've just filtered down to *only* the initialized values. 139 | // We can assume they're initialized, and this is where we drop them. 
140 | unsafe { output.assume_init_drop() }; 141 | } 142 | } 143 | } 144 | } 145 | 146 | impl ZipTrait for [S; N] 147 | where 148 | S: IntoStream, 149 | { 150 | type Item = as Stream>::Item; 151 | type Stream = Zip; 152 | 153 | fn zip(self) -> Self::Stream { 154 | Zip::new(self.map(|i| i.into_stream())) 155 | } 156 | } 157 | 158 | // Inlined version of the unstable `MaybeUninit::array_assume_init` feature. 159 | // FIXME: replace with `utils::array_assume_init` 160 | unsafe fn array_assume_init(array: [MaybeUninit; N]) -> [T; N] { 161 | // SAFETY: 162 | // * The caller guarantees that all elements of the array are initialized 163 | // * `MaybeUninit` and T are guaranteed to have the same layout 164 | // * `MaybeUninit` does not drop, so there are no double-frees 165 | // And thus the conversion is safe 166 | let ret = unsafe { (&array as *const _ as *const [T; N]).read() }; 167 | #[allow(clippy::forget_non_drop)] 168 | mem::forget(array); 169 | ret 170 | } 171 | 172 | #[cfg(test)] 173 | mod tests { 174 | use crate::stream::Zip; 175 | use futures_lite::future::block_on; 176 | use futures_lite::prelude::*; 177 | use futures_lite::stream; 178 | 179 | #[test] 180 | fn zip_array_3() { 181 | block_on(async { 182 | let a = stream::repeat(1).take(2); 183 | let b = stream::repeat(2).take(2); 184 | let c = stream::repeat(3).take(2); 185 | let mut s = Zip::zip([a, b, c]); 186 | 187 | assert_eq!(s.next().await, Some([1, 2, 3])); 188 | assert_eq!(s.next().await, Some([1, 2, 3])); 189 | assert_eq!(s.next().await, None); 190 | }) 191 | } 192 | } 193 | -------------------------------------------------------------------------------- /src/stream/zip/mod.rs: -------------------------------------------------------------------------------- 1 | use futures_core::Stream; 2 | 3 | pub(crate) mod array; 4 | pub(crate) mod tuple; 5 | #[cfg(feature = "alloc")] 6 | pub(crate) mod vec; 7 | 8 | /// ‘Zips up’ multiple streams into a single stream of pairs. 9 | pub trait Zip { 10 | /// What's the return type of our stream? 11 | type Item; 12 | 13 | /// What stream do we return? 14 | type Stream: Stream; 15 | 16 | /// Combine multiple streams into a single stream. 17 | fn zip(self) -> Self::Stream; 18 | } 19 | -------------------------------------------------------------------------------- /src/stream/zip/vec.rs: -------------------------------------------------------------------------------- 1 | use super::Zip as ZipTrait; 2 | use crate::stream::IntoStream; 3 | use crate::utils::{self, PollVec, WakerVec}; 4 | #[cfg(all(feature = "alloc", not(feature = "std")))] 5 | use alloc::vec::Vec; 6 | 7 | use core::fmt; 8 | use core::mem; 9 | use core::mem::MaybeUninit; 10 | use core::pin::Pin; 11 | use core::task::{Context, Poll}; 12 | 13 | use futures_core::Stream; 14 | use pin_project::{pin_project, pinned_drop}; 15 | 16 | /// A stream that ‘zips up’ multiple streams into a single stream of pairs. 17 | /// 18 | /// This `struct` is created by the [`zip`] method on the [`Zip`] trait. See its 19 | /// documentation for more. 
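///
/// Each item is a `Vec` holding one element from every input stream; the
/// stream ends as soon as any input stream is exhausted.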
20 | /// 21 | /// [`zip`]: trait.Zip.html#method.zip 22 | /// [`Zip`]: trait.Zip.html 23 | #[pin_project(PinnedDrop)] 24 | pub struct Zip 25 | where 26 | S: Stream, 27 | { 28 | #[pin] 29 | streams: Vec, 30 | output: Vec::Item>>, 31 | wakers: WakerVec, 32 | state: PollVec, 33 | done: bool, 34 | len: usize, 35 | } 36 | 37 | impl Zip 38 | where 39 | S: Stream, 40 | { 41 | pub(crate) fn new(streams: Vec) -> Self { 42 | let len = streams.len(); 43 | Self { 44 | len, 45 | streams, 46 | wakers: WakerVec::new(len), 47 | output: (0..len).map(|_| MaybeUninit::uninit()).collect(), 48 | state: PollVec::new_pending(len), 49 | done: false, 50 | } 51 | } 52 | } 53 | 54 | impl fmt::Debug for Zip 55 | where 56 | S: Stream + fmt::Debug, 57 | { 58 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 59 | f.debug_list().entries(self.streams.iter()).finish() 60 | } 61 | } 62 | 63 | impl Stream for Zip 64 | where 65 | S: Stream, 66 | { 67 | type Item = Vec; 68 | 69 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 70 | let mut this = self.project(); 71 | 72 | assert!(!*this.done, "Stream should not be polled after completion"); 73 | 74 | let mut readiness = this.wakers.readiness(); 75 | readiness.set_waker(cx.waker()); 76 | for index in 0..*this.len { 77 | if !readiness.any_ready() { 78 | // Nothing is ready yet 79 | return Poll::Pending; 80 | } else if this.state[index].is_ready() || !readiness.clear_ready(index) { 81 | // We already have data stored for this stream, 82 | // Or this waker isn't ready yet 83 | continue; 84 | } 85 | 86 | // unlock readiness so we don't deadlock when polling 87 | #[allow(clippy::drop_non_drop)] 88 | drop(readiness); 89 | 90 | // Obtain the intermediate waker. 91 | let mut cx = Context::from_waker(this.wakers.get(index).unwrap()); 92 | 93 | let stream = utils::get_pin_mut_from_vec(this.streams.as_mut(), index).unwrap(); 94 | match stream.poll_next(&mut cx) { 95 | Poll::Ready(Some(item)) => { 96 | this.output[index] = MaybeUninit::new(item); 97 | this.state[index].set_ready(); 98 | 99 | let all_ready = this.state.iter().all(|state| state.is_ready()); 100 | if all_ready { 101 | // Reset the future's state. 102 | readiness = this.wakers.readiness(); 103 | readiness.set_all_ready(); 104 | this.state.set_all_pending(); 105 | 106 | // Take the output 107 | // 108 | // SAFETY: we just validated all our data is populated, meaning 109 | // we can assume this is initialized. 110 | let mut output = (0..*this.len).map(|_| MaybeUninit::uninit()).collect(); 111 | mem::swap(this.output, &mut output); 112 | let output = unsafe { vec_assume_init(output) }; 113 | return Poll::Ready(Some(output)); 114 | } 115 | } 116 | Poll::Ready(None) => { 117 | // If one stream returns `None`, we can no longer return 118 | // pairs - meaning the stream is over. 119 | *this.done = true; 120 | return Poll::Ready(None); 121 | } 122 | Poll::Pending => {} 123 | } 124 | 125 | // Lock readiness so we can use it again 126 | readiness = this.wakers.readiness(); 127 | } 128 | Poll::Pending 129 | } 130 | } 131 | 132 | /// Drop the already initialized values on cancellation. 133 | #[pinned_drop] 134 | impl PinnedDrop for Zip 135 | where 136 | S: Stream, 137 | { 138 | fn drop(self: Pin<&mut Self>) { 139 | let this = self.project(); 140 | 141 | for (state, output) in this.state.iter_mut().zip(this.output.iter_mut()) { 142 | if state.is_ready() { 143 | // SAFETY: we've just filtered down to *only* the initialized values. 144 | // We can assume they're initialized, and this is where we drop them. 
145 | unsafe { output.assume_init_drop() }; 146 | } 147 | } 148 | } 149 | } 150 | 151 | impl ZipTrait for Vec 152 | where 153 | S: IntoStream, 154 | { 155 | type Item = as Stream>::Item; 156 | type Stream = Zip; 157 | 158 | fn zip(self) -> Self::Stream { 159 | Zip::new(self.into_iter().map(|i| i.into_stream()).collect()) 160 | } 161 | } 162 | 163 | // Inlined version of the unstable `MaybeUninit::array_assume_init` feature. 164 | // FIXME: replace with `utils::array_assume_init` 165 | unsafe fn vec_assume_init(vec: Vec>) -> Vec { 166 | // SAFETY: 167 | // * The caller guarantees that all elements of the vec are initialized 168 | // * `MaybeUninit` and T are guaranteed to have the same layout 169 | // * `MaybeUninit` does not drop, so there are no double-frees 170 | // And thus the conversion is safe 171 | let ret = unsafe { (&vec as *const _ as *const Vec).read() }; 172 | mem::forget(vec); 173 | ret 174 | } 175 | 176 | #[cfg(test)] 177 | mod tests { 178 | use alloc::vec; 179 | 180 | use crate::stream::Zip; 181 | use futures_lite::future::block_on; 182 | use futures_lite::prelude::*; 183 | use futures_lite::stream; 184 | 185 | #[test] 186 | fn zip_array_3() { 187 | block_on(async { 188 | let a = stream::repeat(1).take(2); 189 | let b = stream::repeat(2).take(2); 190 | let c = stream::repeat(3).take(2); 191 | let mut s = vec![a, b, c].zip(); 192 | 193 | assert_eq!(s.next().await, Some(vec![1, 2, 3])); 194 | assert_eq!(s.next().await, Some(vec![1, 2, 3])); 195 | assert_eq!(s.next().await, None); 196 | }) 197 | } 198 | } 199 | -------------------------------------------------------------------------------- /src/utils/array.rs: -------------------------------------------------------------------------------- 1 | use core::mem::{self, MaybeUninit}; 2 | 3 | /// Extracts the values from an array of `MaybeUninit` containers. 4 | /// 5 | /// # Safety 6 | /// 7 | /// It is up to the caller to guarantee that all elements of the array are 8 | /// in an initialized state. 
9 | /// 10 | /// Inlined version of: 11 | pub(crate) unsafe fn array_assume_init(array: [MaybeUninit; N]) -> [T; N] { 12 | // SAFETY: 13 | // * The caller guarantees that all elements of the array are initialized 14 | // * `MaybeUninit` and T are guaranteed to have the same layout 15 | // * `MaybeUninit` does not drop, so there are no double-frees 16 | // And thus the conversion is safe 17 | let ret = unsafe { (&array as *const _ as *const [T; N]).read() }; 18 | 19 | // FIXME: required to avoid `~const Destruct` bound 20 | #[allow(clippy::forget_non_drop)] 21 | mem::forget(array); 22 | ret 23 | } 24 | -------------------------------------------------------------------------------- /src/utils/channel.rs: -------------------------------------------------------------------------------- 1 | use alloc::{collections::VecDeque, rc::Rc}; 2 | use core::{ 3 | cell::RefCell, 4 | pin::Pin, 5 | task::{Context, Poll, Waker}, 6 | }; 7 | 8 | use futures::Stream; 9 | 10 | pub(crate) struct LocalChannel { 11 | queue: VecDeque, 12 | waker: Option, 13 | closed: bool, 14 | } 15 | 16 | pub(crate) struct LocalReceiver { 17 | channel: Rc>>, 18 | } 19 | 20 | impl Stream for LocalReceiver { 21 | type Item = T; 22 | 23 | fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll> { 24 | let mut channel = self.channel.borrow_mut(); 25 | 26 | match channel.queue.pop_front() { 27 | Some(item) => Poll::Ready(Some(item)), 28 | None => { 29 | if channel.closed { 30 | Poll::Ready(None) 31 | } else { 32 | match &mut channel.waker { 33 | Some(prev) => prev.clone_from(cx.waker()), 34 | None => channel.waker = Some(cx.waker().clone()), 35 | } 36 | Poll::Pending 37 | } 38 | } 39 | } 40 | } 41 | } 42 | 43 | pub(crate) struct LocalSender { 44 | channel: Rc>>, 45 | } 46 | 47 | impl LocalSender { 48 | pub(crate) fn send(&self, item: T) { 49 | let mut channel = self.channel.borrow_mut(); 50 | 51 | channel.queue.push_back(item); 52 | 53 | let _ = channel.waker.take().map(Waker::wake); 54 | } 55 | } 56 | 57 | impl Drop for LocalSender { 58 | fn drop(&mut self) { 59 | let mut channel = self.channel.borrow_mut(); 60 | channel.closed = true; 61 | let _ = channel.waker.take().map(Waker::wake); 62 | } 63 | } 64 | 65 | pub(crate) fn local_channel() -> (LocalSender, LocalReceiver) { 66 | let channel = Rc::new(RefCell::new(LocalChannel { 67 | queue: VecDeque::new(), 68 | waker: None, 69 | closed: false, 70 | })); 71 | 72 | ( 73 | LocalSender { 74 | channel: channel.clone(), 75 | }, 76 | LocalReceiver { channel }, 77 | ) 78 | } 79 | -------------------------------------------------------------------------------- /src/utils/futures/array.rs: -------------------------------------------------------------------------------- 1 | use core::{ 2 | mem::{self, ManuallyDrop, MaybeUninit}, 3 | pin::Pin, 4 | }; 5 | 6 | /// An array of futures which can be dropped in-place, intended to be 7 | /// constructed once and then accessed through pin projections. 8 | pub(crate) struct FutureArray { 9 | futures: [ManuallyDrop; N], 10 | } 11 | 12 | impl FutureArray { 13 | /// Create a new instance of `FutureArray` 14 | pub(crate) fn new(futures: [T; N]) -> Self { 15 | // Implementation copied from: https://doc.rust-lang.org/src/core/mem/maybe_uninit.rs.html#1292 16 | let futures = MaybeUninit::new(futures); 17 | // SAFETY: T and MaybeUninit have the same layout 18 | let futures = unsafe { mem::transmute_copy(&mem::ManuallyDrop::new(futures)) }; 19 | Self { futures } 20 | } 21 | 22 | /// Create an iterator of pinned references. 
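    ///
    /// Items are yielded as `Pin<&mut ManuallyDrop<T>>`, in index order.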
23 | pub(crate) fn iter(self: Pin<&mut Self>) -> impl Iterator>> { 24 | // SAFETY: `std` _could_ make this unsound if it were to decide Pin's 25 | // invariants aren't required to transmit through slices. Otherwise this has 26 | // the same safety as a normal field pin projection. 27 | unsafe { self.get_unchecked_mut() } 28 | .futures 29 | .iter_mut() 30 | .map(|t| unsafe { Pin::new_unchecked(t) }) 31 | } 32 | 33 | /// Drop a future at the given index. 34 | /// 35 | /// # Safety 36 | /// 37 | /// The future is held in a `ManuallyDrop`, so no double-dropping, etc 38 | pub(crate) unsafe fn drop(mut self: Pin<&mut Self>, idx: usize) { 39 | unsafe { 40 | let futures = self.as_mut().get_unchecked_mut().futures.as_mut(); 41 | ManuallyDrop::drop(&mut futures[idx]); 42 | }; 43 | } 44 | } 45 | -------------------------------------------------------------------------------- /src/utils/futures/mod.rs: -------------------------------------------------------------------------------- 1 | mod array; 2 | #[cfg(feature = "alloc")] 3 | mod vec; 4 | 5 | pub(crate) use array::FutureArray; 6 | #[cfg(feature = "alloc")] 7 | pub(crate) use vec::FutureVec; 8 | -------------------------------------------------------------------------------- /src/utils/futures/vec.rs: -------------------------------------------------------------------------------- 1 | #[cfg(all(feature = "alloc", not(feature = "std")))] 2 | use alloc::vec::Vec; 3 | 4 | use core::{ 5 | mem::{self, ManuallyDrop, MaybeUninit}, 6 | pin::Pin, 7 | }; 8 | 9 | /// An array of futures which can be dropped in-place, intended to be 10 | /// constructed once and then accessed through pin projections. 11 | pub(crate) struct FutureVec { 12 | futures: Vec>, 13 | } 14 | 15 | impl FutureVec { 16 | /// Create a new instance of `FutureVec` 17 | pub(crate) fn new(futures: Vec) -> Self { 18 | // Implementation copied from: https://doc.rust-lang.org/src/core/mem/maybe_uninit.rs.html#1292 19 | let futures = MaybeUninit::new(futures); 20 | // SAFETY: T and MaybeUninit have the same layout 21 | let futures = unsafe { mem::transmute_copy(&mem::ManuallyDrop::new(futures)) }; 22 | Self { futures } 23 | } 24 | 25 | /// Create an iterator of pinned references. 26 | pub(crate) fn iter(self: Pin<&mut Self>) -> impl Iterator>> { 27 | // SAFETY: `std` _could_ make this unsound if it were to decide Pin's 28 | // invariants aren't required to transmit through slices. Otherwise this has 29 | // the same safety as a normal field pin projection. 30 | unsafe { self.get_unchecked_mut() } 31 | .futures 32 | .iter_mut() 33 | .map(|t| unsafe { Pin::new_unchecked(t) }) 34 | } 35 | 36 | /// Drop a future at the given index. 37 | /// 38 | /// # Safety 39 | /// 40 | /// The future is held in a `ManuallyDrop`, so no double-dropping, etc 41 | pub(crate) unsafe fn drop(mut self: Pin<&mut Self>, idx: usize) { 42 | unsafe { 43 | let futures = self.as_mut().get_unchecked_mut().futures.as_mut_slice(); 44 | ManuallyDrop::drop(&mut futures[idx]); 45 | }; 46 | } 47 | } 48 | -------------------------------------------------------------------------------- /src/utils/indexer.rs: -------------------------------------------------------------------------------- 1 | use core::ops; 2 | 3 | /// Generate an iteration sequence. This provides *fair* iteration when multiple 4 | /// futures need to be polled concurrently. 
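///
/// For example, with `max = 3`, successive calls to `iter` yield the rotated
/// sequences `0, 1, 2`, then `1, 2, 0`, then `2, 0, 1`, so no single index is
/// always polled first.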
5 | pub(crate) struct Indexer { 6 | offset: usize, 7 | max: usize, 8 | } 9 | 10 | impl Indexer { 11 | pub(crate) fn new(max: usize) -> Self { 12 | Self { offset: 0, max } 13 | } 14 | 15 | /// Generate a range between `0..max`, incrementing the starting point 16 | /// for the next iteration. 17 | pub(crate) fn iter(&mut self) -> IndexIter { 18 | // Increment the starting point for next time. 19 | let offset = self.offset; 20 | self.offset = (self.offset + 1).wrapping_rem(self.max); 21 | 22 | IndexIter { 23 | iter: (0..self.max), 24 | offset, 25 | } 26 | } 27 | } 28 | 29 | pub(crate) struct IndexIter { 30 | iter: ops::Range, 31 | offset: usize, 32 | } 33 | 34 | impl Iterator for IndexIter { 35 | type Item = usize; 36 | 37 | fn next(&mut self) -> Option { 38 | self.iter 39 | .next() 40 | .map(|pos| (pos + self.offset).wrapping_rem(self.iter.end)) 41 | } 42 | } 43 | -------------------------------------------------------------------------------- /src/utils/mod.rs: -------------------------------------------------------------------------------- 1 | #![allow(dead_code)] 2 | 3 | //! Utilities to implement the different futures of this crate. 4 | 5 | mod array; 6 | mod futures; 7 | mod indexer; 8 | mod output; 9 | mod pin; 10 | mod poll_state; 11 | mod stream; 12 | mod tuple; 13 | mod wakers; 14 | 15 | #[doc(hidden)] 16 | pub mod private; 17 | 18 | pub(crate) use self::futures::FutureArray; 19 | #[cfg(feature = "alloc")] 20 | pub(crate) use self::futures::FutureVec; 21 | pub(crate) use array::array_assume_init; 22 | pub(crate) use indexer::Indexer; 23 | pub(crate) use output::OutputArray; 24 | #[cfg(feature = "alloc")] 25 | pub(crate) use output::OutputVec; 26 | pub(crate) use pin::{get_pin_mut, iter_pin_mut}; 27 | #[cfg(feature = "alloc")] 28 | pub(crate) use pin::{get_pin_mut_from_vec, iter_pin_mut_vec}; 29 | pub(crate) use poll_state::PollArray; 30 | #[cfg(feature = "alloc")] 31 | pub(crate) use poll_state::{MaybeDone, PollState, PollVec}; 32 | pub(crate) use tuple::{gen_conditions, tuple_len}; 33 | pub(crate) use wakers::WakerArray; 34 | #[cfg(feature = "alloc")] 35 | pub(crate) use wakers::WakerVec; 36 | 37 | #[cfg(all(test, feature = "alloc"))] 38 | pub(crate) use wakers::DummyWaker; 39 | 40 | #[cfg(all(test, feature = "alloc"))] 41 | pub(crate) mod channel; 42 | 43 | #[cfg(feature = "alloc")] 44 | pub(crate) use stream::{from_iter, FromIter}; 45 | -------------------------------------------------------------------------------- /src/utils/output/array.rs: -------------------------------------------------------------------------------- 1 | use core::{ 2 | array, 3 | mem::{self, MaybeUninit}, 4 | }; 5 | 6 | use crate::utils; 7 | 8 | /// A contiguous array of uninitialized data. 
9 | pub(crate) struct OutputArray { 10 | data: [MaybeUninit; N], 11 | } 12 | 13 | impl OutputArray { 14 | /// Initialize a new array as uninitialized 15 | pub(crate) fn uninit() -> Self { 16 | Self { 17 | data: array::from_fn(|_| MaybeUninit::uninit()), 18 | } 19 | } 20 | 21 | /// Write a value into memory at the index 22 | pub(crate) fn write(&mut self, idx: usize, value: T) { 23 | self.data[idx] = MaybeUninit::new(value); 24 | } 25 | 26 | /// Drop a value at the index 27 | /// 28 | /// # Safety 29 | /// 30 | /// The value at the index must be initialized 31 | pub(crate) unsafe fn drop(&mut self, idx: usize) { 32 | // SAFETY: The caller is responsible for ensuring this value is 33 | // initialized 34 | unsafe { self.data[idx].assume_init_drop() }; 35 | } 36 | 37 | /// Assume all items are initialized and take the items, 38 | /// leaving behind uninitialized data. 39 | /// 40 | /// # Safety 41 | /// 42 | /// Make sure that all items are initialized prior to calling this method. 43 | pub(crate) unsafe fn take(&mut self) -> [T; N] { 44 | let mut data = array::from_fn(|_| MaybeUninit::uninit()); 45 | mem::swap(&mut self.data, &mut data); 46 | // SAFETY: the caller is on the hook to ensure all items are initialized 47 | unsafe { utils::array_assume_init(data) } 48 | } 49 | } 50 | -------------------------------------------------------------------------------- /src/utils/output/mod.rs: -------------------------------------------------------------------------------- 1 | mod array; 2 | #[cfg(feature = "alloc")] 3 | mod vec; 4 | 5 | pub(crate) use array::OutputArray; 6 | #[cfg(feature = "alloc")] 7 | pub(crate) use vec::OutputVec; 8 | -------------------------------------------------------------------------------- /src/utils/output/vec.rs: -------------------------------------------------------------------------------- 1 | #[cfg(all(feature = "alloc", not(feature = "std")))] 2 | use alloc::{vec, vec::Vec}; 3 | 4 | use core::mem::{self, MaybeUninit}; 5 | 6 | /// A contiguous vector of uninitialized data. 7 | pub(crate) struct OutputVec { 8 | data: Vec, 9 | capacity: usize, 10 | } 11 | 12 | impl OutputVec { 13 | /// Initialize a new vector as uninitialized 14 | pub(crate) fn uninit(capacity: usize) -> Self { 15 | Self { 16 | data: Vec::with_capacity(capacity), 17 | capacity, 18 | } 19 | } 20 | 21 | /// Write a value into memory at the index 22 | pub(crate) fn write(&mut self, idx: usize, value: T) { 23 | let data = self.data.spare_capacity_mut(); 24 | data[idx] = MaybeUninit::new(value); 25 | } 26 | 27 | /// Drop a value at the index 28 | /// 29 | /// # Safety 30 | /// 31 | /// The value at the index must be initialized 32 | pub(crate) unsafe fn drop(&mut self, idx: usize) { 33 | // SAFETY: The caller is responsible for ensuring this value is 34 | // initialized 35 | let data = self.data.spare_capacity_mut(); 36 | unsafe { data[idx].assume_init_drop() }; 37 | } 38 | 39 | /// Assume all items are initialized and take the items, 40 | /// leaving behind an empty vector 41 | /// 42 | /// # Safety 43 | /// 44 | /// Make sure that all items are initialized prior to calling this method. 
45 | pub(crate) unsafe fn take(&mut self) -> Vec { 46 | let mut data = vec![]; 47 | mem::swap(&mut self.data, &mut data); 48 | // SAFETY: the caller is on the hook to ensure all items are initialized 49 | unsafe { data.set_len(self.capacity) }; 50 | data 51 | } 52 | } 53 | -------------------------------------------------------------------------------- /src/utils/pin.rs: -------------------------------------------------------------------------------- 1 | #[cfg(all(feature = "alloc", not(feature = "std")))] 2 | use alloc::vec::Vec; 3 | 4 | use core::pin::Pin; 5 | use core::slice::SliceIndex; 6 | 7 | // From: `futures_rs::join_all!` -- https://github.com/rust-lang/futures-rs/blob/b48eb2e9a9485ef7388edc2f177094a27e08e28b/futures-util/src/future/join_all.rs#L18-L23 8 | pub(crate) fn iter_pin_mut(slice: Pin<&mut [T]>) -> impl Iterator> { 9 | // SAFETY: `std` _could_ make this unsound if it were to decide Pin's 10 | // invariants aren't required to transmit through slices. Otherwise this has 11 | // the same safety as a normal field pin projection. 12 | unsafe { slice.get_unchecked_mut() } 13 | .iter_mut() 14 | .map(|t| unsafe { Pin::new_unchecked(t) }) 15 | } 16 | 17 | // From: `futures_rs::join_all!` -- https://github.com/rust-lang/futures-rs/blob/b48eb2e9a9485ef7388edc2f177094a27e08e28b/futures-util/src/future/join_all.rs#L18-L23 18 | #[cfg(feature = "alloc")] 19 | pub(crate) fn iter_pin_mut_vec(slice: Pin<&mut Vec>) -> impl Iterator> { 20 | // SAFETY: `std` _could_ make this unsound if it were to decide Pin's 21 | // invariants aren't required to transmit through slices. Otherwise this has 22 | // the same safety as a normal field pin projection. 23 | unsafe { slice.get_unchecked_mut() } 24 | .iter_mut() 25 | .map(|t| unsafe { Pin::new_unchecked(t) }) 26 | } 27 | 28 | /// Returns a pinned mutable reference to an element or subslice depending on the 29 | /// type of index (see `get`) or `None` if the index is out of bounds. 30 | // From: https://github.com/rust-lang/rust/pull/78370/files 31 | #[inline] 32 | pub(crate) fn get_pin_mut(slice: Pin<&mut [T]>, index: I) -> Option> 33 | where 34 | I: SliceIndex<[T]>, 35 | { 36 | // SAFETY: `get_unchecked_mut` is never used to move the slice inside `self` (`SliceIndex` 37 | // is sealed and all `SliceIndex::get_mut` implementations never move elements). 38 | // `x` is guaranteed to be pinned because it comes from `self` which is pinned. 39 | unsafe { 40 | slice 41 | .get_unchecked_mut() 42 | .get_mut(index) 43 | .map(|x| Pin::new_unchecked(x)) 44 | } 45 | } 46 | 47 | // NOTE: If this is implemented through the trait, this will work on both vecs and 48 | // slices. 49 | // 50 | // From: https://github.com/rust-lang/rust/pull/78370/files 51 | #[cfg(feature = "alloc")] 52 | pub(crate) fn get_pin_mut_from_vec( 53 | slice: Pin<&mut Vec>, 54 | index: I, 55 | ) -> Option> 56 | where 57 | I: SliceIndex<[T]>, 58 | { 59 | // SAFETY: `get_unchecked_mut` is never used to move the slice inside `self` (`SliceIndex` 60 | // is sealed and all `SliceIndex::get_mut` implementations never move elements). 61 | // `x` is guaranteed to be pinned because it comes from `self` which is pinned. 
62 | unsafe { 63 | slice 64 | .get_unchecked_mut() 65 | .get_mut(index) 66 | .map(|x| Pin::new_unchecked(x)) 67 | } 68 | } 69 | -------------------------------------------------------------------------------- /src/utils/poll_state/array.rs: -------------------------------------------------------------------------------- 1 | use core::ops::{Deref, DerefMut}; 2 | 3 | use super::PollState; 4 | 5 | pub(crate) struct PollArray { 6 | state: [PollState; N], 7 | } 8 | 9 | impl PollArray { 10 | /// Create a new `PollArray` with all state marked as `None` 11 | #[allow(unused)] 12 | pub(crate) fn new() -> Self { 13 | Self { 14 | state: [PollState::None; N], 15 | } 16 | } 17 | 18 | /// Create a new `PollArray` with all state marked as `Pending` 19 | pub(crate) fn new_pending() -> Self { 20 | Self { 21 | state: [PollState::Pending; N], 22 | } 23 | } 24 | 25 | /// Mark all items as "pending" 26 | #[inline] 27 | pub(crate) fn set_all_pending(&mut self) { 28 | self.fill(PollState::Pending); 29 | } 30 | 31 | /// Mark all items as "none" 32 | #[inline] 33 | #[allow(unused)] 34 | pub(crate) fn set_all_none(&mut self) { 35 | self.fill(PollState::None); 36 | } 37 | 38 | /// Get an iterator of indexes of all items which are "ready". 39 | pub(crate) fn ready_indexes(&self) -> impl Iterator + '_ { 40 | self.iter() 41 | .cloned() 42 | .enumerate() 43 | .filter(|(_, state)| state.is_ready()) 44 | .map(|(i, _)| i) 45 | } 46 | 47 | /// Get an iterator of indexes of all items which are "pending". 48 | pub(crate) fn pending_indexes(&self) -> impl Iterator + '_ { 49 | self.iter() 50 | .cloned() 51 | .enumerate() 52 | .filter(|(_, state)| state.is_pending()) 53 | .map(|(i, _)| i) 54 | } 55 | } 56 | 57 | impl Deref for PollArray { 58 | type Target = [PollState]; 59 | 60 | fn deref(&self) -> &Self::Target { 61 | &self.state 62 | } 63 | } 64 | 65 | impl DerefMut for PollArray { 66 | fn deref_mut(&mut self) -> &mut Self::Target { 67 | &mut self.state 68 | } 69 | } 70 | -------------------------------------------------------------------------------- /src/utils/poll_state/maybe_done.rs: -------------------------------------------------------------------------------- 1 | use core::future::Future; 2 | use core::mem; 3 | use core::pin::Pin; 4 | use core::task::{ready, Context, Poll}; 5 | 6 | /// A future that may have completed. 7 | #[derive(Debug)] 8 | pub(crate) enum MaybeDone { 9 | /// A not-yet-completed future 10 | Future(Fut), 11 | 12 | /// The output of the completed future 13 | Done(Fut::Output), 14 | 15 | /// The empty variant after the result of a [`MaybeDone`] has been 16 | /// taken using the [`take`](MaybeDone::take) method. 17 | Gone, 18 | } 19 | 20 | impl MaybeDone { 21 | /// Create a new instance of `MaybeDone`. 22 | pub(crate) fn new(future: Fut) -> MaybeDone { 23 | Self::Future(future) 24 | } 25 | } 26 | 27 | impl MaybeDone 28 | where 29 | Fut: Future>, 30 | { 31 | /// Attempt to take the `Ok(output)` of a `MaybeDone` without driving it towards completion. 32 | /// If the future is done but is an `Err(_)`, this will return `None`. 
33 | #[inline] 34 | pub(crate) fn take_ok(self: Pin<&mut Self>) -> Option { 35 | let this = unsafe { self.get_unchecked_mut() }; 36 | match this { 37 | MaybeDone::Done(Ok(_)) => {} 38 | MaybeDone::Done(Err(_)) | MaybeDone::Future(_) | MaybeDone::Gone => return None, 39 | } 40 | if let MaybeDone::Done(Ok(output)) = mem::replace(this, MaybeDone::Gone) { 41 | Some(output) 42 | } else { 43 | unreachable!() 44 | } 45 | } 46 | 47 | /// Attempt to take the `Err(output)` of a `MaybeDone` without driving it towards completion. 48 | /// If the future is done but is an `Ok(_)`, this will return `None`. 49 | #[inline] 50 | pub(crate) fn take_err(self: Pin<&mut Self>) -> Option { 51 | let this = unsafe { self.get_unchecked_mut() }; 52 | match this { 53 | MaybeDone::Done(Err(_)) => {} 54 | MaybeDone::Done(Ok(_)) | MaybeDone::Future(_) | MaybeDone::Gone => return None, 55 | } 56 | if let MaybeDone::Done(Err(output)) = mem::replace(this, MaybeDone::Gone) { 57 | Some(output) 58 | } else { 59 | unreachable!() 60 | } 61 | } 62 | } 63 | 64 | impl Future for MaybeDone { 65 | type Output = (); 66 | 67 | fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll { 68 | let res = unsafe { 69 | match Pin::as_mut(&mut self).get_unchecked_mut() { 70 | MaybeDone::Future(a) => ready!(Pin::new_unchecked(a).poll(cx)), 71 | MaybeDone::Done(_) => return Poll::Ready(()), 72 | MaybeDone::Gone => panic!("MaybeDone polled after value taken"), 73 | } 74 | }; 75 | self.set(MaybeDone::Done(res)); 76 | Poll::Ready(()) 77 | } 78 | } 79 | -------------------------------------------------------------------------------- /src/utils/poll_state/mod.rs: -------------------------------------------------------------------------------- 1 | #![allow(clippy::module_inception)] 2 | 3 | mod array; 4 | #[cfg(feature = "alloc")] 5 | mod maybe_done; 6 | mod poll_state; 7 | #[cfg(feature = "alloc")] 8 | mod vec; 9 | 10 | pub(crate) use array::PollArray; 11 | #[cfg(feature = "alloc")] 12 | pub(crate) use maybe_done::MaybeDone; 13 | pub(crate) use poll_state::PollState; 14 | #[cfg(feature = "alloc")] 15 | pub(crate) use vec::PollVec; 16 | -------------------------------------------------------------------------------- /src/utils/poll_state/poll_state.rs: -------------------------------------------------------------------------------- 1 | /// Enumerate the current poll state. 2 | #[derive(Debug, Clone, Copy)] 3 | #[repr(u8)] 4 | pub(crate) enum PollState { 5 | /// There is no associated future or stream. 6 | /// This can be because no item was placed to begin with, or because there 7 | /// are was previously an item but there no longer is. 8 | None, 9 | /// Polling the associated future or stream. 10 | Pending, 11 | /// Data has been written to the output structure, and is now ready to be 12 | /// read. 13 | Ready, 14 | } 15 | 16 | impl PollState { 17 | /// Returns `true` if the metadata is [`None`][Self::None]. 18 | #[must_use] 19 | #[inline] 20 | #[allow(unused)] 21 | pub(crate) fn is_none(&self) -> bool { 22 | matches!(self, Self::None) 23 | } 24 | 25 | /// Returns `true` if the metadata is [`Pending`][Self::Pending]. 26 | #[must_use] 27 | #[inline] 28 | pub(crate) fn is_pending(&self) -> bool { 29 | matches!(self, Self::Pending) 30 | } 31 | 32 | /// Returns `true` if the poll state is [`Ready`][Self::Ready]. 33 | #[must_use] 34 | #[inline] 35 | pub(crate) fn is_ready(&self) -> bool { 36 | matches!(self, Self::Ready) 37 | } 38 | 39 | /// Sets the poll state to [`None`][Self::None]. 
40 | #[inline] 41 | pub(crate) fn set_none(&mut self) { 42 | *self = PollState::None; 43 | } 44 | 45 | /// Sets the poll state to [`Ready`][Self::Pending]. 46 | #[inline] 47 | #[allow(unused)] 48 | pub(crate) fn set_pending(&mut self) { 49 | *self = PollState::Pending; 50 | } 51 | 52 | /// Sets the poll state to [`Ready`][Self::Ready]. 53 | #[inline] 54 | pub(crate) fn set_ready(&mut self) { 55 | *self = PollState::Ready; 56 | } 57 | } 58 | -------------------------------------------------------------------------------- /src/utils/poll_state/vec.rs: -------------------------------------------------------------------------------- 1 | use core::ops::{Deref, DerefMut}; 2 | use smallvec::{smallvec, SmallVec}; 3 | 4 | use super::PollState; 5 | 6 | /// The maximum number of entries that `PollStates` can store without 7 | /// dynamic memory allocation. 8 | /// 9 | /// The heap variant is the minimum size the data structure can have. 10 | /// It consists of a boxed slice (=2 usizes) and space for the enum 11 | /// tag (another usize because of padding), so 3 usizes. 12 | /// The inline variant then consists of `3 * size_of(usize) - 2` entries. 13 | /// Each entry is a byte and we subtract one byte for a length field, 14 | /// and another byte for the enum tag. 15 | /// 16 | /// ```txt 17 | /// Boxed 18 | /// vvvvv 19 | /// tag 20 | /// | <-------padding----> <--- Box<[T]>::len ---> <--- Box<[T]>::ptr ---> 21 | /// 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 23 | /// tag | 24 | /// len ^^^^^ 25 | /// Inline 26 | /// ``` 27 | const MAX_INLINE_ENTRIES: usize = core::mem::size_of::() * 3 - 2; 28 | 29 | #[derive(Default)] 30 | pub(crate) struct PollVec(SmallVec<[PollState; MAX_INLINE_ENTRIES]>); 31 | 32 | impl PollVec { 33 | pub(crate) fn new(len: usize) -> Self { 34 | Self(smallvec![PollState::None; len]) 35 | } 36 | 37 | pub(crate) fn new_pending(len: usize) -> Self { 38 | Self(smallvec![PollState::Pending; len]) 39 | } 40 | 41 | /// Get an iterator of indexes of all items which are "ready". 42 | pub(crate) fn ready_indexes(&self) -> impl Iterator + '_ { 43 | self.iter() 44 | .cloned() 45 | .enumerate() 46 | .filter(|(_, state)| state.is_ready()) 47 | .map(|(i, _)| i) 48 | } 49 | 50 | /// Get an iterator of indexes of all items which are "pending". 51 | #[allow(unused)] 52 | pub(crate) fn pending_indexes(&self) -> impl Iterator + '_ { 53 | self.iter() 54 | .cloned() 55 | .enumerate() 56 | .filter(|(_, state)| state.is_pending()) 57 | .map(|(i, _)| i) 58 | } 59 | 60 | /// Get an iterator of indexes of all items which are "consumed". 
61 | #[allow(unused)] 62 | pub(crate) fn consumed_indexes(&self) -> impl Iterator + '_ { 63 | self.iter() 64 | .cloned() 65 | .enumerate() 66 | .filter(|(_, state)| state.is_none()) 67 | .map(|(i, _)| i) 68 | } 69 | 70 | /// Mark all items as "pending" 71 | #[inline] 72 | pub(crate) fn set_all_pending(&mut self) { 73 | self.0.fill(PollState::Pending); 74 | } 75 | 76 | /// Mark all items as "none" 77 | #[inline] 78 | #[allow(unused)] 79 | pub(crate) fn set_all_none(&mut self) { 80 | self.0.fill(PollState::None); 81 | } 82 | 83 | /// Resize the `PollVec` 84 | pub(crate) fn resize(&mut self, len: usize) { 85 | self.0.resize_with(len, || PollState::None) 86 | } 87 | } 88 | 89 | impl Deref for PollVec { 90 | type Target = [PollState]; 91 | 92 | fn deref(&self) -> &Self::Target { 93 | &self.0 94 | } 95 | } 96 | 97 | impl DerefMut for PollVec { 98 | fn deref_mut(&mut self) -> &mut Self::Target { 99 | &mut self.0 100 | } 101 | } 102 | 103 | #[cfg(test)] 104 | mod tests { 105 | use super::{PollVec, MAX_INLINE_ENTRIES}; 106 | 107 | #[test] 108 | fn type_size() { 109 | // PollVec is three words plus two bits 110 | assert_eq!( 111 | core::mem::size_of::(), 112 | core::mem::size_of::() * 4 113 | ); 114 | } 115 | 116 | #[test] 117 | fn boxed_does_not_allocate_twice() { 118 | // Make sure the debug_assertions in PollStates::new() don't fail. 119 | let _ = PollVec::new_pending(MAX_INLINE_ENTRIES + 10); 120 | } 121 | } 122 | -------------------------------------------------------------------------------- /src/utils/private.rs: -------------------------------------------------------------------------------- 1 | /// We hide the `Try` trait in a private module, as it's only meant to be a 2 | /// stable clone of the standard library's `Try` trait, as yet unstable. 3 | // NOTE: copied from `rayon` 4 | use core::convert::Infallible; 5 | use core::ops::ControlFlow::{self, Break, Continue}; 6 | use core::task::Poll; 7 | 8 | use crate::{private_decl, private_impl}; 9 | 10 | /// Clone of `std::ops::Try`. 11 | /// 12 | /// Implementing this trait is not permitted outside of `futures_concurrency`. 13 | pub trait Try { 14 | private_decl! {} 15 | 16 | type Output; 17 | type Residual; 18 | 19 | fn from_output(output: Self::Output) -> Self; 20 | 21 | fn from_residual(residual: Self::Residual) -> Self; 22 | 23 | fn branch(self) -> ControlFlow; 24 | } 25 | 26 | impl Try for ControlFlow { 27 | private_impl! {} 28 | 29 | type Output = C; 30 | type Residual = ControlFlow; 31 | 32 | fn from_output(output: Self::Output) -> Self { 33 | Continue(output) 34 | } 35 | 36 | fn from_residual(residual: Self::Residual) -> Self { 37 | match residual { 38 | Break(b) => Break(b), 39 | Continue(_) => unreachable!(), 40 | } 41 | } 42 | 43 | fn branch(self) -> ControlFlow { 44 | match self { 45 | Continue(c) => Continue(c), 46 | Break(b) => Break(Break(b)), 47 | } 48 | } 49 | } 50 | 51 | impl Try for Option { 52 | private_impl! {} 53 | 54 | type Output = T; 55 | type Residual = Option; 56 | 57 | fn from_output(output: Self::Output) -> Self { 58 | Some(output) 59 | } 60 | 61 | fn from_residual(residual: Self::Residual) -> Self { 62 | match residual { 63 | None => None, 64 | Some(_) => unreachable!(), 65 | } 66 | } 67 | 68 | fn branch(self) -> ControlFlow { 69 | match self { 70 | Some(c) => Continue(c), 71 | None => Break(None), 72 | } 73 | } 74 | } 75 | 76 | impl Try for Result { 77 | private_impl! 
{} 78 | 79 | type Output = T; 80 | type Residual = Result; 81 | 82 | fn from_output(output: Self::Output) -> Self { 83 | Ok(output) 84 | } 85 | 86 | fn from_residual(residual: Self::Residual) -> Self { 87 | match residual { 88 | Err(e) => Err(e), 89 | Ok(_) => unreachable!(), 90 | } 91 | } 92 | 93 | fn branch(self) -> ControlFlow { 94 | match self { 95 | Ok(c) => Continue(c), 96 | Err(e) => Break(Err(e)), 97 | } 98 | } 99 | } 100 | 101 | impl Try for Poll> { 102 | private_impl! {} 103 | 104 | type Output = Poll; 105 | type Residual = Result; 106 | 107 | fn from_output(output: Self::Output) -> Self { 108 | output.map(Ok) 109 | } 110 | 111 | fn from_residual(residual: Self::Residual) -> Self { 112 | match residual { 113 | Err(e) => Poll::Ready(Err(e)), 114 | Ok(_) => unreachable!(), 115 | } 116 | } 117 | 118 | fn branch(self) -> ControlFlow { 119 | match self { 120 | Poll::Pending => Continue(Poll::Pending), 121 | Poll::Ready(Ok(c)) => Continue(Poll::Ready(c)), 122 | Poll::Ready(Err(e)) => Break(Err(e)), 123 | } 124 | } 125 | } 126 | 127 | impl Try for Poll>> { 128 | private_impl! {} 129 | 130 | type Output = Poll>; 131 | type Residual = Result; 132 | 133 | fn from_output(output: Self::Output) -> Self { 134 | match output { 135 | Poll::Ready(o) => Poll::Ready(o.map(Ok)), 136 | Poll::Pending => Poll::Pending, 137 | } 138 | } 139 | 140 | fn from_residual(residual: Self::Residual) -> Self { 141 | match residual { 142 | Err(e) => Poll::Ready(Some(Err(e))), 143 | Ok(_) => unreachable!(), 144 | } 145 | } 146 | 147 | fn branch(self) -> ControlFlow { 148 | match self { 149 | Poll::Pending => Continue(Poll::Pending), 150 | Poll::Ready(None) => Continue(Poll::Ready(None)), 151 | Poll::Ready(Some(Ok(c))) => Continue(Poll::Ready(Some(c))), 152 | Poll::Ready(Some(Err(e))) => Break(Err(e)), 153 | } 154 | } 155 | } 156 | 157 | #[allow(missing_debug_implementations)] 158 | pub struct PrivateMarker; 159 | 160 | #[doc(hidden)] 161 | #[macro_export] 162 | macro_rules! private_impl { 163 | () => { 164 | fn __futures_concurrency_private__(&self) -> $crate::private::PrivateMarker { 165 | $crate::private::PrivateMarker 166 | } 167 | }; 168 | } 169 | 170 | #[doc(hidden)] 171 | #[macro_export] 172 | macro_rules! private_decl { 173 | () => { 174 | /// This trait is private; this method exists to make it 175 | /// impossible to implement outside the crate. 176 | #[doc(hidden)] 177 | fn __futures_concurrency_private__(&self) -> $crate::private::PrivateMarker; 178 | }; 179 | } 180 | -------------------------------------------------------------------------------- /src/utils/stream.rs: -------------------------------------------------------------------------------- 1 | use core::pin::Pin; 2 | 3 | use pin_project::pin_project; 4 | 5 | use core::task::{Context, Poll}; 6 | use futures_core::stream::Stream; 7 | 8 | /// A stream that was created from iterator. 9 | #[pin_project] 10 | #[derive(Clone, Debug)] 11 | pub(crate) struct FromIter { 12 | iter: I, 13 | } 14 | 15 | /// Converts an iterator into a stream. 
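///
/// The returned stream is always ready: `poll_next` never returns
/// `Poll::Pending`, it simply yields `iter.next()` each time it is polled.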
-------------------------------------------------------------------------------- /src/utils/stream.rs: --------------------------------------------------------------------------------
1 | use core::pin::Pin;
2 |
3 | use pin_project::pin_project;
4 |
5 | use core::task::{Context, Poll};
6 | use futures_core::stream::Stream;
7 |
8 | /// A stream that was created from an iterator.
9 | #[pin_project]
10 | #[derive(Clone, Debug)]
11 | pub(crate) struct FromIter<I> {
12 |     iter: I,
13 | }
14 |
15 | /// Converts an iterator into a stream.
16 | pub(crate) fn from_iter<I: IntoIterator>(iter: I) -> FromIter<I::IntoIter> {
17 |     FromIter {
18 |         iter: iter.into_iter(),
19 |     }
20 | }
21 |
22 | impl<I: Iterator> Stream for FromIter<I> {
23 |     type Item = I::Item;
24 |
25 |     fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
26 |         Poll::Ready(self.iter.next())
27 |     }
28 | }
29 |
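// `from_iter` above is `pub(crate)`, so it can't be called from outside the
// crate. The same "iterator as an always-ready stream" idea is available
// publicly as `futures_lite::stream::iter`; futures-lite is already used by
// this repository's tests, so a minimal sketch of the behaviour looks like:
use futures_lite::future::block_on;
use futures_lite::prelude::*;
use futures_lite::stream;

fn main() {
    block_on(async {
        // Every item is ready immediately; the stream ends when the iterator does.
        let mut s = stream::iter(vec![1, 2, 3]);
        assert_eq!(s.next().await, Some(1));
        assert_eq!(s.next().await, Some(2));
        assert_eq!(s.next().await, Some(3));
        assert_eq!(s.next().await, None);
    });
}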
-------------------------------------------------------------------------------- /src/utils/tuple.rs: --------------------------------------------------------------------------------
1 | /// Generate the `match` conditions inside the main polling body. This macro
2 | /// chooses a random starting point on each call to the given method, making
3 | /// it "fair".
4 | ///
5 | /// The way this algorithm works is: we generate a random number between 0 and
6 | /// the length of the tuple we have. This number determines which element we
7 | /// start with. All other cases are mapped as `r + index`, and after we have the
8 | /// first one, we'll sequentially iterate over all others. The starting point of
9 | /// the stream is random, but the iteration order of all others is not.
10 | // NOTE(yosh): this macro monstrosity is needed so we can increment each `else
11 | // if` branch with + 1. When RFC 3086 becomes available to us, we can replace
12 | // this with `${index($F)}` to get the current iteration.
13 | //
14 | // # References
15 | // - https://twitter.com/maybewaffle/status/1588426440835727360
16 | // - https://twitter.com/Veykril/status/1588231414998335490
17 | // - https://rust-lang.github.io/rfcs/3086-macro-metavar-expr.html
18 | macro_rules! gen_conditions {
19 |     // Base condition, setup the depth counter.
20 |     ($i:expr, $this:expr, $cx:expr, $method:ident, $(($F_index: expr; $F:ident, { $($arms:pat => $foo:expr,)* }))*) => {
21 |         $(
22 |             if $i == $F_index {
23 |                 match unsafe { Pin::new_unchecked(&mut $this.$F) }.$method($cx) {
24 |                     $($arms => $foo,)*
25 |                 }
26 |             }
27 |         )*
28 |     }
29 | }
30 | pub(crate) use gen_conditions;
31 |
32 | /// Calculate the number of elements in the tuple currently being operated on.
33 | macro_rules! tuple_len {
34 |     (@count_one $F:ident) => (1);
35 |     ($($F:ident,)*) => (0 $(+ crate::utils::tuple_len!(@count_one $F))*);
36 | }
37 | pub(crate) use tuple_len;
38 |
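// A standalone sketch of the ordering the doc comment above describes: pick a
// random starting offset `r`, then visit every element exactly once in
// sequence, wrapping around at the end. The `fair_order` helper and the modulo
// wrap-around are illustrative (this is how such a scheme is typically
// realised by the caller of `gen_conditions!`), not a copy of the crate's code.
fn fair_order(r: usize, len: usize) -> impl Iterator<Item = usize> {
    (0..len).map(move |i| (r + i) % len)
}

fn main() {
    // With three elements and a random start of 1, polling visits 1, 2, 0.
    assert_eq!(fair_order(1, 3).collect::<Vec<_>>(), vec![1, 2, 0]);
    // A different roll only shifts the starting point; every element is still
    // visited exactly once per pass.
    assert_eq!(fair_order(2, 3).collect::<Vec<_>>(), vec![2, 0, 1]);
}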
-------------------------------------------------------------------------------- /src/utils/wakers/array/mod.rs: --------------------------------------------------------------------------------
1 | #[cfg(not(feature = "std"))]
2 | mod no_std;
3 | #[cfg(feature = "std")]
4 | mod readiness_array;
5 | #[cfg(feature = "std")]
6 | mod waker;
7 | #[cfg(feature = "std")]
8 | mod waker_array;
9 |
10 | #[cfg(not(feature = "std"))]
11 | pub(crate) use no_std::WakerArray;
12 | #[cfg(feature = "std")]
13 | pub(crate) use readiness_array::ReadinessArray;
14 | #[cfg(feature = "std")]
15 | pub(crate) use waker::InlineWakerArray;
16 | #[cfg(feature = "std")]
17 | pub(crate) use waker_array::WakerArray;
18 |
-------------------------------------------------------------------------------- /src/utils/wakers/array/no_std.rs: --------------------------------------------------------------------------------
1 | use core::ops::{Deref, DerefMut};
2 | use core::task::Waker;
3 |
4 | #[derive(Debug)]
5 | pub(crate) struct ReadinessArray<const N: usize> {
6 |     parent_waker: Option<Waker>,
7 | }
8 |
9 | impl<const N: usize> ReadinessArray<N> {
10 |     pub(crate) fn new() -> Self {
11 |         Self { parent_waker: None }
12 |     }
13 |
14 |     /// Returns the old ready state for this id
15 |     pub(crate) fn set_ready(&mut self, _id: usize) -> bool {
16 |         false
17 |     }
18 |
19 |     /// Set all markers to ready.
20 |     pub(crate) fn set_all_ready(&mut self) {}
21 |
22 |     /// Returns whether the task id was previously ready
23 |     pub(crate) fn clear_ready(&mut self, _id: usize) -> bool {
24 |         true
25 |     }
26 |
27 |     /// Returns `true` if any of the wakers are ready.
28 |     pub(crate) fn any_ready(&self) -> bool {
29 |         true
30 |     }
31 |
32 |     /// Access the parent waker.
33 |     #[inline]
34 |     pub(crate) fn parent_waker(&self) -> Option<&Waker> {
35 |         self.parent_waker.as_ref()
36 |     }
37 |
38 |     /// Set the parent `Waker`. This needs to be called at the start of every
39 |     /// `poll` function.
40 |     pub(crate) fn set_waker(&mut self, parent_waker: &Waker) {
41 |         match &mut self.parent_waker {
42 |             Some(prev) => prev.clone_from(parent_waker),
43 |             None => self.parent_waker = Some(parent_waker.clone()),
44 |         }
45 |     }
46 | }
47 |
48 | pub(crate) struct ReadinessArrayRef<'a, const N: usize> {
49 |     inner: &'a mut ReadinessArray<N>,
50 | }
51 |
52 | impl<'a, const N: usize> Deref for ReadinessArrayRef<'a, N> {
53 |     type Target = ReadinessArray<N>;
54 |
55 |     fn deref(&self) -> &Self::Target {
56 |         self.inner
57 |     }
58 | }
59 |
60 | impl<'a, const N: usize> DerefMut for ReadinessArrayRef<'a, N> {
61 |     fn deref_mut(&mut self) -> &mut Self::Target {
62 |         self.inner
63 |     }
64 | }
65 |
66 | /// A collection of wakers which delegate to an in-line waker.
67 | pub(crate) struct WakerArray<const N: usize> {
68 |     readiness: ReadinessArray<N>,
69 | }
70 |
71 | impl<const N: usize> WakerArray<N> {
72 |     /// Create a new instance of `WakerArray`.
73 |     pub(crate) fn new() -> Self {
74 |         let readiness = ReadinessArray::new();
75 |         Self { readiness }
76 |     }
77 |
78 |     pub(crate) fn get(&self, _index: usize) -> Option<&Waker> {
79 |         self.readiness.parent_waker()
80 |     }
81 |
82 |     /// Access the `Readiness`.
83 |     pub(crate) fn readiness(&mut self) -> ReadinessArrayRef<'_, N> {
84 |         ReadinessArrayRef {
85 |             inner: &mut self.readiness,
86 |         }
87 |     }
88 | }
89 |
-------------------------------------------------------------------------------- /src/utils/wakers/array/readiness_array.rs: --------------------------------------------------------------------------------
1 | use core::task::Waker;
2 |
3 | /// Tracks which wakers are "ready" and should be polled.
4 | #[derive(Debug)]
5 | pub(crate) struct ReadinessArray<const N: usize> {
6 |     count: usize,
7 |     readiness_list: [bool; N],
8 |     parent_waker: Option<Waker>,
9 | }
10 |
11 | impl<const N: usize> ReadinessArray<N> {
12 |     /// Create a new instance of readiness.
13 |     pub(crate) fn new() -> Self {
14 |         Self {
15 |             count: N,
16 |             readiness_list: [true; N], // TODO: use a bitarray instead
17 |             parent_waker: None,
18 |         }
19 |     }
20 |
21 |     /// Returns the old ready state for this id
22 |     pub(crate) fn set_ready(&mut self, id: usize) -> bool {
23 |         if !self.readiness_list[id] {
24 |             self.count += 1;
25 |             self.readiness_list[id] = true;
26 |
27 |             false
28 |         } else {
29 |             true
30 |         }
31 |     }
32 |
33 |     /// Set all markers to ready.
34 |     pub(crate) fn set_all_ready(&mut self) {
35 |         self.readiness_list.fill(true);
36 |         self.count = N;
37 |     }
38 |
39 |     /// Returns whether the task id was previously ready
40 |     pub(crate) fn clear_ready(&mut self, id: usize) -> bool {
41 |         if self.readiness_list[id] {
42 |             self.count -= 1;
43 |             self.readiness_list[id] = false;
44 |
45 |             true
46 |         } else {
47 |             false
48 |         }
49 |     }
50 |
51 |     /// Returns `true` if any of the wakers are ready.
52 |     pub(crate) fn any_ready(&self) -> bool {
53 |         self.count > 0
54 |     }
55 |
56 |     /// Access the parent waker.
57 |     #[inline]
58 |     pub(crate) fn parent_waker(&self) -> Option<&Waker> {
59 |         self.parent_waker.as_ref()
60 |     }
61 |
62 |     /// Set the parent `Waker`. This needs to be called at the start of every
63 |     /// `poll` function.
64 |     pub(crate) fn set_waker(&mut self, parent_waker: &Waker) {
65 |         match &mut self.parent_waker {
66 |             Some(prev) => prev.clone_from(parent_waker),
67 |             None => self.parent_waker = Some(parent_waker.clone()),
68 |         }
69 |     }
70 | }
71 |
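// A runnable, standalone restatement of the contract above, since the real
// `ReadinessArray` is `pub(crate)`: `set_ready`/`clear_ready` report the *old*
// state, and the count only moves when a bit actually flips. `MiniReadiness`
// is an illustrative stand-in, not the crate's API.
struct MiniReadiness<const N: usize> {
    count: usize,
    list: [bool; N],
}

impl<const N: usize> MiniReadiness<N> {
    fn new() -> Self {
        // Everything starts out ready so the first poll visits every slot.
        Self { count: N, list: [true; N] }
    }
    fn set_ready(&mut self, id: usize) -> bool {
        let was = self.list[id];
        if !was {
            self.count += 1;
            self.list[id] = true;
        }
        was
    }
    fn clear_ready(&mut self, id: usize) -> bool {
        let was = self.list[id];
        if was {
            self.count -= 1;
            self.list[id] = false;
        }
        was
    }
    fn any_ready(&self) -> bool {
        self.count > 0
    }
}

fn main() {
    let mut r = MiniReadiness::<2>::new();
    assert!(r.clear_ready(0)); // was ready; a poll loop clears a slot before polling it
    assert!(r.clear_ready(1));
    assert!(!r.any_ready()); // nothing left to poll
    assert!(!r.set_ready(1)); // a wake event flips slot 1 back on
    assert!(r.any_ready());
}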
-------------------------------------------------------------------------------- /src/utils/wakers/array/waker.rs: --------------------------------------------------------------------------------
1 | use alloc::sync::Arc;
2 | use alloc::task::Wake;
3 | use std::sync::Mutex;
4 |
5 | use super::ReadinessArray;
6 |
7 | /// An efficient waker which delegates wake events.
8 | #[derive(Debug, Clone)]
9 | pub(crate) struct InlineWakerArray<const N: usize> {
10 |     pub(crate) id: usize,
11 |     pub(crate) readiness: Arc<Mutex<ReadinessArray<N>>>,
12 | }
13 |
14 | impl<const N: usize> InlineWakerArray<N> {
15 |     /// Create a new instance of `InlineWaker`.
16 |     pub(crate) fn new(id: usize, readiness: Arc<Mutex<ReadinessArray<N>>>) -> Self {
17 |         Self { id, readiness }
18 |     }
19 | }
20 |
21 | impl<const N: usize> Wake for InlineWakerArray<N> {
22 |     fn wake(self: Arc<Self>) {
23 |         let mut readiness = self.readiness.lock().unwrap();
24 |         if !readiness.set_ready(self.id) {
25 |             readiness
26 |                 .parent_waker()
27 |                 .expect("`parent_waker` not available from `Readiness`. Did you forget to call `Readiness::set_waker`?")
28 |                 .wake_by_ref()
29 |         }
30 |     }
31 | }
32 |
-------------------------------------------------------------------------------- /src/utils/wakers/array/waker_array.rs: --------------------------------------------------------------------------------
1 | use alloc::sync::Arc;
2 | use core::array;
3 | use core::task::Waker;
4 | use std::sync::{Mutex, MutexGuard};
5 |
6 | use super::{InlineWakerArray, ReadinessArray};
7 |
8 | /// A collection of wakers which delegate to an in-line waker.
9 | pub(crate) struct WakerArray<const N: usize> {
10 |     wakers: [Waker; N],
11 |     readiness: Arc<Mutex<ReadinessArray<N>>>,
12 | }
13 |
14 | impl<const N: usize> WakerArray<N> {
15 |     /// Create a new instance of `WakerArray`.
16 |     pub(crate) fn new() -> Self {
17 |         let readiness = Arc::new(Mutex::new(ReadinessArray::new()));
18 |         Self {
19 |             wakers: array::from_fn(|i| {
20 |                 Arc::new(InlineWakerArray::new(i, readiness.clone())).into()
21 |             }),
22 |             readiness,
23 |         }
24 |     }
25 |
26 |     pub(crate) fn get(&self, index: usize) -> Option<&Waker> {
27 |         self.wakers.get(index)
28 |     }
29 |
30 |     /// Access the `Readiness`.
31 |     pub(crate) fn readiness(&mut self) -> MutexGuard<'_, ReadinessArray<N>> {
32 |         self.readiness.as_ref().lock().unwrap()
33 |     }
34 | }
35 |
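// A std-only sketch of the delegation pattern `InlineWakerArray` / `WakerArray`
// use above: one `Waker` per slot is built from an `Arc<impl Wake>`, and waking
// a slot just records its id in shared state (the real code also forwards to a
// single parent waker). `SlotWaker` and the `woken` log are illustrative names,
// not the crate's API.
use std::sync::{Arc, Mutex};
use std::task::{Wake, Waker};

struct SlotWaker {
    id: usize,
    woken: Arc<Mutex<Vec<usize>>>,
}

impl Wake for SlotWaker {
    fn wake(self: Arc<Self>) {
        // Record which slot woke up; a combinator would use this to decide
        // which sub-future to poll next.
        self.woken.lock().unwrap().push(self.id);
    }
}

fn main() {
    let woken = Arc::new(Mutex::new(Vec::new()));
    // One waker per slot, just like `array::from_fn` builds them above.
    let wakers: Vec<Waker> = (0..3)
        .map(|id| Waker::from(Arc::new(SlotWaker { id, woken: woken.clone() })))
        .collect();

    wakers[2].wake_by_ref();
    wakers[0].wake_by_ref();
    assert_eq!(*woken.lock().unwrap(), vec![2, 0]);
}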
-------------------------------------------------------------------------------- /src/utils/wakers/dummy.rs: --------------------------------------------------------------------------------
1 | use alloc::sync::Arc;
2 | use alloc::task::Wake;
3 |
4 | pub(crate) struct DummyWaker();
5 | impl Wake for DummyWaker {
6 |     fn wake(self: Arc<Self>) {}
7 | }
8 |
-------------------------------------------------------------------------------- /src/utils/wakers/mod.rs: --------------------------------------------------------------------------------
1 | mod array;
2 | #[cfg(all(test, feature = "alloc"))]
3 | mod dummy;
4 | #[cfg(feature = "alloc")]
5 | mod vec;
6 |
7 | #[cfg(all(test, feature = "alloc"))]
8 | pub(crate) use dummy::DummyWaker;
9 |
10 | pub(crate) use array::*;
11 | #[cfg(feature = "alloc")]
12 | pub(crate) use vec::*;
13 |
-------------------------------------------------------------------------------- /src/utils/wakers/vec/mod.rs: --------------------------------------------------------------------------------
1 | #[cfg(not(feature = "std"))]
2 | mod no_std;
3 | #[cfg(feature = "std")]
4 | mod readiness_vec;
5 | #[cfg(feature = "std")]
6 | mod waker;
7 | #[cfg(feature = "std")]
8 | mod waker_vec;
9 |
10 | #[cfg(not(feature = "std"))]
11 | pub(crate) use no_std::WakerVec;
12 | #[cfg(feature = "std")]
13 | pub(crate) use readiness_vec::ReadinessVec;
14 | #[cfg(feature = "std")]
15 | pub(crate) use waker::InlineWakerVec;
16 | #[cfg(feature = "std")]
17 | pub(crate) use waker_vec::WakerVec;
18 |
-------------------------------------------------------------------------------- /src/utils/wakers/vec/no_std.rs: --------------------------------------------------------------------------------
1 | use core::ops::{Deref, DerefMut};
2 | use core::task::Waker;
3 |
4 | #[derive(Debug)]
5 | pub(crate) struct ReadinessVec {
6 |     parent_waker: Option<Waker>,
7 | }
8 |
9 | impl ReadinessVec {
10 |     pub(crate) fn new() -> Self {
11 |         Self { parent_waker: None }
12 |     }
13 |
14 |     /// Returns the old ready state for this id
15 |     pub(crate) fn set_ready(&mut self, _id: usize) -> bool {
16 |         false
17 |     }
18 |
19 |     /// Set all markers to ready.
20 |     pub(crate) fn set_all_ready(&mut self) {}
21 |
22 |     /// Returns whether the task id was previously ready
23 |     pub(crate) fn clear_ready(&mut self, _id: usize) -> bool {
24 |         true
25 |     }
26 |
27 |     /// Returns `true` if any of the wakers are ready.
28 |     pub(crate) fn any_ready(&self) -> bool {
29 |         true
30 |     }
31 |
32 |     /// Access the parent waker.
33 |     #[inline]
34 |     pub(crate) fn parent_waker(&self) -> Option<&Waker> {
35 |         self.parent_waker.as_ref()
36 |     }
37 |
38 |     /// Set the parent `Waker`. This needs to be called at the start of every
39 |     /// `poll` function.
40 |     pub(crate) fn set_waker(&mut self, parent_waker: &Waker) {
41 |         match &mut self.parent_waker {
42 |             Some(prev) => prev.clone_from(parent_waker),
43 |             None => self.parent_waker = Some(parent_waker.clone()),
44 |         }
45 |     }
46 |
47 |     /// Resize `readiness` to the new length.
48 |     ///
49 |     /// If new entries are created, they will be marked as 'ready'.
50 |     pub(crate) fn resize(&mut self, _len: usize) {}
51 | }
52 |
53 | pub(crate) struct ReadinessVecRef<'a> {
54 |     inner: &'a mut ReadinessVec,
55 | }
56 |
57 | impl<'a> Deref for ReadinessVecRef<'a> {
58 |     type Target = ReadinessVec;
59 |
60 |     fn deref(&self) -> &Self::Target {
61 |         self.inner
62 |     }
63 | }
64 |
65 | impl<'a> DerefMut for ReadinessVecRef<'a> {
66 |     fn deref_mut(&mut self) -> &mut Self::Target {
67 |         self.inner
68 |     }
69 | }
70 |
71 | /// A collection of wakers which delegate to an in-line waker.
72 | pub(crate) struct WakerVec {
73 |     readiness: ReadinessVec,
74 | }
75 |
76 | impl Default for WakerVec {
77 |     fn default() -> Self {
78 |         Self::new(0)
79 |     }
80 | }
81 |
82 | impl WakerVec {
83 |     /// Create a new instance of `WakerVec`.
84 |     pub(crate) fn new(_len: usize) -> Self {
85 |         let readiness = ReadinessVec::new();
86 |         Self { readiness }
87 |     }
88 |
89 |     pub(crate) fn get(&self, _index: usize) -> Option<&Waker> {
90 |         self.readiness.parent_waker()
91 |     }
92 |
93 |     /// Access the `Readiness`.
94 |     pub(crate) fn readiness(&mut self) -> ReadinessVecRef<'_> {
95 |         ReadinessVecRef {
96 |             inner: &mut self.readiness,
97 |         }
98 |     }
99 |
100 |     /// Resize the `WakerVec` to the new size.
101 |     pub(crate) fn resize(&mut self, len: usize) {
102 |         self.readiness.resize(len);
103 |     }
104 | }
105 |
-------------------------------------------------------------------------------- /src/utils/wakers/vec/readiness_vec.rs: --------------------------------------------------------------------------------
1 | use core::task::Waker;
2 | use fixedbitset::FixedBitSet;
3 |
4 | /// Tracks which wakers are "ready" and should be polled.
5 | #[derive(Debug)]
6 | pub(crate) struct ReadinessVec {
7 |     ready_count: usize,
8 |     max_count: usize,
9 |     readiness_list: FixedBitSet,
10 |     parent_waker: Option<Waker>,
11 | }
12 |
13 | impl ReadinessVec {
14 |     /// Create a new instance of readiness.
15 |     pub(crate) fn new(len: usize) -> Self {
16 |         Self {
17 |             ready_count: len,
18 |             max_count: len,
19 |             // See https://github.com/petgraph/fixedbitset/issues/101
20 |             readiness_list: FixedBitSet::with_capacity_and_blocks(len, std::iter::repeat(!0)),
21 |             parent_waker: None,
22 |         }
23 |     }
24 |
25 |     /// Set the ready state to `true` for the given index
26 |     ///
27 |     /// Returns the old ready state for this id
28 |     pub(crate) fn set_ready(&mut self, index: usize) -> bool {
29 |         if !self.readiness_list[index] {
30 |             self.ready_count += 1;
31 |             self.readiness_list.set(index, true);
32 |             false
33 |         } else {
34 |             true
35 |         }
36 |     }
37 |
38 |     /// Set all markers to ready.
39 |     pub(crate) fn set_all_ready(&mut self) {
40 |         self.readiness_list.set_range(.., true);
41 |         self.ready_count = self.max_count;
42 |     }
43 |
44 |     /// Set the ready state to `false` for the given index
45 |     ///
46 |     /// Returns whether the task id was previously ready
47 |     pub(crate) fn clear_ready(&mut self, index: usize) -> bool {
48 |         if self.readiness_list[index] {
49 |             self.ready_count -= 1;
50 |             self.readiness_list.set(index, false);
51 |             true
52 |         } else {
53 |             false
54 |         }
55 |     }
56 |
57 |     /// Set all markers to "not ready".
58 |     #[allow(unused)]
59 |     pub(crate) fn clear_all_ready(&mut self) {
60 |         self.readiness_list.set_range(.., false);
61 |         self.ready_count = 0;
62 |     }
63 |
64 |     /// Returns `true` if any of the wakers are ready.
65 |     pub(crate) fn any_ready(&self) -> bool {
66 |         self.ready_count > 0
67 |     }
68 |
69 |     /// Access the parent waker.
70 |     #[inline]
71 |     pub(crate) fn parent_waker(&self) -> Option<&Waker> {
72 |         self.parent_waker.as_ref()
73 |     }
74 |
75 |     /// Set the parent `Waker`. This needs to be called at the start of every
76 |     /// `poll` function.
77 |     pub(crate) fn set_waker(&mut self, parent_waker: &Waker) {
78 |         match &mut self.parent_waker {
79 |             Some(prev) => prev.clone_from(parent_waker),
80 |             None => self.parent_waker = Some(parent_waker.clone()),
81 |         }
82 |     }
83 |
84 |     /// Resize `readiness` to the new length.
85 |     ///
86 |     /// If new entries are created, they will be marked as 'ready'.
87 |     pub(crate) fn resize(&mut self, len: usize) {
88 |         self.max_count = len;
89 |
90 |         let old_len = self.readiness_list.len();
91 |         match len.cmp(&old_len) {
92 |             std::cmp::Ordering::Less => {
93 |                 // shrink
94 |                 self.ready_count -= self.readiness_list.count_ones(len..);
95 |                 self.readiness_list = FixedBitSet::with_capacity_and_blocks(
96 |                     len,
97 |                     self.readiness_list.as_slice().iter().cloned(),
98 |                 );
99 |             }
100 |             std::cmp::Ordering::Equal => {
101 |                 // no-op
102 |             }
103 |             std::cmp::Ordering::Greater => {
104 |                 // grow
105 |                 self.readiness_list.grow(len);
106 |                 self.readiness_list.set_range(old_len..len, true);
107 |                 self.ready_count += len - old_len;
108 |             }
109 |         }
110 |     }
111 | }
112 |
113 | #[cfg(test)]
114 | mod test {
115 |     use super::*;
116 |
117 |     #[test]
118 |     fn resize() {
119 |         let mut readiness = ReadinessVec::new(10);
120 |         assert!(readiness.any_ready());
121 |         readiness.clear_all_ready();
122 |         assert!(!readiness.any_ready());
123 |         readiness.set_ready(9);
124 |         assert!(readiness.any_ready());
125 |         readiness.resize(9);
126 |         assert!(!readiness.any_ready());
127 |         readiness.resize(10);
128 |         assert!(readiness.any_ready());
129 |     }
130 | }
131 |
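// A small standalone demo of the `fixedbitset` calls that `resize` above
// relies on: `grow` keeps existing bits and zero-fills the new ones, which is
// why the code then marks the added range as ready with `set_range`. The crate
// already depends on `fixedbitset` (see the import above); the numbers here are
// illustrative.
use fixedbitset::FixedBitSet;

fn main() {
    let mut bits = FixedBitSet::with_capacity(3);
    bits.set_range(.., true);
    assert_eq!(bits.count_ones(..), 3);

    // Growing from 3 to 5: the two new slots start out as zero...
    bits.grow(5);
    assert_eq!(bits.count_ones(..), 3);
    // ...so new entries must be explicitly marked as ready.
    bits.set_range(3..5, true);
    assert_eq!(bits.count_ones(..), 5);
}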
-------------------------------------------------------------------------------- /src/utils/wakers/vec/waker.rs: --------------------------------------------------------------------------------
1 | use alloc::sync::Arc;
2 | use alloc::task::Wake;
3 | use std::sync::Mutex;
4 |
5 | use super::ReadinessVec;
6 |
7 | /// An efficient waker which delegates wake events.
8 | #[derive(Debug, Clone)]
9 | pub(crate) struct InlineWakerVec {
10 |     pub(crate) id: usize,
11 |     pub(crate) readiness: Arc<Mutex<ReadinessVec>>,
12 | }
13 |
14 | impl InlineWakerVec {
15 |     /// Create a new instance of `InlineWaker`.
16 |     pub(crate) fn new(id: usize, readiness: Arc<Mutex<ReadinessVec>>) -> Self {
17 |         Self { id, readiness }
18 |     }
19 | }
20 |
21 | impl Wake for InlineWakerVec {
22 |     fn wake(self: Arc<Self>) {
23 |         let mut readiness = self.readiness.lock().unwrap();
24 |         if !readiness.set_ready(self.id) {
25 |             readiness
26 |                 .parent_waker()
27 |                 .expect("`parent_waker` not available from `Readiness`. Did you forget to call `Readiness::set_waker`?")
28 |                 .wake_by_ref()
29 |         }
30 |     }
31 | }
32 |
-------------------------------------------------------------------------------- /src/utils/wakers/vec/waker_vec.rs: --------------------------------------------------------------------------------
1 | #[cfg(all(feature = "alloc", not(feature = "std")))]
2 | use alloc::vec::Vec;
3 |
4 | use alloc::sync::Arc;
5 | use core::task::Waker;
6 | use std::sync::{Mutex, MutexGuard};
7 |
8 | use super::{InlineWakerVec, ReadinessVec};
9 |
10 | /// A collection of wakers which delegate to an in-line waker.
11 | pub(crate) struct WakerVec {
12 |     wakers: Vec<Waker>,
13 |     readiness: Arc<Mutex<ReadinessVec>>,
14 | }
15 |
16 | impl Default for WakerVec {
17 |     fn default() -> Self {
18 |         Self::new(0)
19 |     }
20 | }
21 |
22 | impl WakerVec {
23 |     /// Create a new instance of `WakerVec`.
24 |     pub(crate) fn new(len: usize) -> Self {
25 |         let readiness = Arc::new(Mutex::new(ReadinessVec::new(len)));
26 |         let wakers = (0..len)
27 |             .map(|i| Arc::new(InlineWakerVec::new(i, readiness.clone())).into())
28 |             .collect();
29 |         Self { wakers, readiness }
30 |     }
31 |
32 |     pub(crate) fn get(&self, index: usize) -> Option<&Waker> {
33 |         self.wakers.get(index)
34 |     }
35 |
36 |     /// Access the `Readiness`.
37 |     pub(crate) fn readiness(&self) -> MutexGuard<'_, ReadinessVec> {
38 |         self.readiness.lock().unwrap()
39 |     }
40 |
41 |     /// Resize the `WakerVec` to the new size.
42 |     pub(crate) fn resize(&mut self, len: usize) {
43 |         // If we grow the vec we need wakers for the new slots: the first new
44 |         // position is the current length, and every position beyond that is
45 |         // incremented by 1.
46 |         let mut index = self.wakers.len();
47 |         self.wakers.resize_with(len, || {
48 |             let ret = Arc::new(InlineWakerVec::new(index, self.readiness.clone())).into();
49 |             index += 1;
50 |             ret
51 |         });
52 |
53 |         let mut readiness = self.readiness.lock().unwrap();
54 |         readiness.resize(len);
55 |     }
56 | }
57 |
-------------------------------------------------------------------------------- /tests/no_std.rs: --------------------------------------------------------------------------------
1 | #![no_std]
2 |
3 | use core::future;
4 | use futures_concurrency::{array::AggregateError, prelude::*};
5 | use futures_lite::future::block_on;
6 | use futures_lite::prelude::*;
7 | use futures_lite::stream;
8 |
9 | // These tests ensure that the traits provided by `futures-concurrency` work in a no std environment.
10 |
11 | #[test]
12 | fn join() {
13 |     futures_lite::future::block_on(async {
14 |         let fut = [future::ready("hello"), future::ready("world")].join();
15 |         assert_eq!(fut.await, ["hello", "world"]);
16 |     });
17 | }
18 |
19 | #[test]
20 | fn try_join() {
21 |     futures_lite::future::block_on(async {
22 |         let res: Result<[&str; 2], &str> = [future::ready(Ok("hello")), future::ready(Ok("world"))]
23 |             .try_join()
24 |             .await;
25 |         assert_eq!(res.unwrap(), ["hello", "world"]);
26 |     })
27 | }
28 |
29 | #[test]
30 | fn race() {
31 |     futures_lite::future::block_on(async {
32 |         let res = [future::ready("hello"), future::ready("world")]
33 |             .race()
34 |             .await;
35 |         assert!(matches!(res, "hello" | "world"));
36 |     });
37 | }
38 |
39 | #[test]
40 | fn race_ok() {
41 |     futures_lite::future::block_on(async {
42 |         let res: Result<&str, AggregateError<&str, 2>> =
43 |             [future::ready(Ok("hello")), future::ready(Ok("world"))]
44 |                 .race_ok()
45 |                 .await;
46 |         assert!(res.is_ok());
47 |     })
48 | }
49 |
50 | #[test]
51 | fn chain_3() {
52 |     block_on(async {
53 |         let a = stream::once(1);
54 |         let b = stream::once(2);
55 |         let c = stream::once(3);
56 |         let mut s = [a, b, c].chain();
57 |
58 |         assert_eq!(s.next().await, Some(1));
59 |         assert_eq!(s.next().await, Some(2));
60 |         assert_eq!(s.next().await, Some(3));
61 |         assert_eq!(s.next().await, None);
62 |     })
63 | }
64 |
65 | #[test]
66 | fn merge_array_4() {
67 |     block_on(async {
68 |         let a = stream::once(1);
69 |         let b = stream::once(2);
70 |         let c = stream::once(3);
71 |         let d = stream::once(4);
72 |         let mut s = [a, b, c, d].merge();
73 |
74 |         let mut counter = 0;
75 |         while let Some(n) = s.next().await {
76 |             counter += n;
77 |         }
78 |         assert_eq!(counter, 10);
79 |     })
80 | }
81 |
82 | #[test]
83 | fn zip_array_3() {
84 |     use futures_concurrency::stream::Zip;
85 |
86 |     block_on(async {
87 |         let a = stream::repeat(1).take(2);
88 |         let b = stream::repeat(2).take(2);
89 |         let c = stream::repeat(3).take(2);
90 |         let mut s = Zip::zip([a, b, c]);
91 |
92 |         assert_eq!(s.next().await, Some([1, 2, 3]));
93 |         assert_eq!(s.next().await, Some([1, 2, 3]));
94 |         assert_eq!(s.next().await, None);
95 |     })
96 | }
97 |
-------------------------------------------------------------------------------- /tests/regression-155.rs: --------------------------------------------------------------------------------
1 | //! Regression test for: https://github.com/yoshuawuyts/futures-concurrency/issues/155
2 | //!
3 | //! We were accidentally marking a value as "ready" in `try_join` on the error
4 | //! path. This meant that when we returned, the destructor assumed a value was
5 | //! initialized when it wasn't, causing it to dereference uninitialized memory.
6 |
7 | #![cfg(feature = "alloc")]
8 |
9 | use futures_concurrency::prelude::*;
10 | use futures_core::Future;
11 | use std::{future::ready, pin::Pin};
12 | use tokio::time::{sleep, Duration};
13 |
14 | pub type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;
15 |
16 | async fn process_not_fail() -> Result<Vec<i32>, ()> {
17 |     sleep(Duration::from_millis(100)).await;
18 |     Ok(vec![ready(1), ready(2)].join().await)
19 | }
20 |
21 | async fn process_fail() -> Result<Vec<i32>, ()> {
22 |     Err(())
23 | }
24 |
25 | #[tokio::test]
26 | async fn array() {
27 |     let a: BoxFuture<'static, _> = Box::pin(process_fail());
28 |     let b: BoxFuture<'static, _> = Box::pin(process_not_fail());
29 |     let res = [a, b].try_join().await;
30 |     assert!(res.is_err());
31 | }
32 |
33 | #[tokio::test]
34 | async fn vec() {
35 |     let a: BoxFuture<'static, _> = Box::pin(process_fail());
36 |     let b: BoxFuture<'static, _> = Box::pin(process_not_fail());
37 |     let res = vec![a, b].try_join().await;
38 |     assert!(res.is_err());
39 | }
40 |
41 | #[tokio::test]
42 | async fn tuple() {
43 |     let a = process_fail();
44 |     let b = process_not_fail();
45 |     let res = (a, b).try_join().await;
46 |     assert!(res.is_err());
47 | }
48 |
--------------------------------------------------------------------------------
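// A std-only sketch of the invariant the regression test above is guarding:
// when output slots are stored as `MaybeUninit`, a "this slot is filled" flag
// may only be set after the value has actually been written, because teardown
// uses that flag to decide what it may drop. Flipping the flag on an error path
// (the bug described in issue #155) would make `Drop` read uninitialized
// memory. The `Slots` type below is illustrative only, not the crate's
// internal representation.
use std::mem::MaybeUninit;

struct Slots<const N: usize> {
    filled: [bool; N],
    values: [MaybeUninit<String>; N],
}

impl<const N: usize> Slots<N> {
    fn new() -> Self {
        Self {
            filled: [false; N],
            values: std::array::from_fn(|_| MaybeUninit::uninit()),
        }
    }

    fn fill(&mut self, i: usize, value: String) {
        self.values[i].write(value);
        // Only flip the flag once the write has happened.
        self.filled[i] = true;
    }
}

impl<const N: usize> Drop for Slots<N> {
    fn drop(&mut self) {
        for i in 0..N {
            if self.filled[i] {
                // SAFETY: `filled[i]` is only ever set after `values[i].write`.
                unsafe { self.values[i].assume_init_drop() };
            }
        }
    }
}

fn main() {
    let mut slots = Slots::<2>::new();
    slots.fill(0, "hello".to_string());
    // Slot 1 is never filled; `Drop` must skip it.
}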