├── .gitignore
├── Cargo.toml
├── LICENSE
├── README.md
├── src
│   ├── allocation.rs
│   └── lib.rs
└── tests
    └── integration.rs

/.gitignore:
--------------------------------------------------------------------------------
/target
**/*.rs.bk
Cargo.lock
--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
[package]
name = "second-stack"
version = "0.3.5"
authors = ["Zac "]
edition = "2021"
description = "A fast allocator for short-lived slices and large values."
homepage = "https://github.com/That3Percent/second-stack"
documentation = "https://docs.rs/second-stack"
repository = "https://github.com/That3Percent/second-stack"
readme = "README.md"
keywords = ["slice", "stack", "memory-management"]
categories = ["memory-management"]
license = "MIT"

[badges]
maintenance = { status = "actively-developed" }

[features]

[dev-dependencies]
rand = "0.8.5"
testdrop = "0.1.2"
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
Copyright 2021 Zachary Burns.

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
The thread's stack is a high-performance way to manage memory. But it cannot be used for large or dynamically sized allocations. What if the thread had a second stack suitable for that purpose?

> We've had one, yes. What about second stack?
> ...Pippin, probably.

`second-stack` is an allocator for short-lived, potentially large values and slices. It is often faster than `Vec` for the same reason that using the thread's stack is usually faster than using the heap.

The internal representation is a thread-local stack that grows as necessary. Once the capacity saturates, the same allocation is re-used by many consumers, so it becomes more efficient as more libraries adopt it.

`second-stack` was originally developed for writing dynamic buffers in WebGL (e.g. procedurally generate some triangles/colors, write them to a buffer, and hand them off to the graphics card many times per frame without incurring the cost of many heap allocations). But over time I found that needing a short-lived slice was common, and using `second-stack` all over the place allowed for the best memory re-use and performance.

There are two ways to use this API. The preferred way is to use the methods which delegate to a shared thread-local (like `buffer` and `uninit_slice`). Using these methods ensures that multiple libraries efficiently re-use allocations without passing around context or exposing this implementation detail in their public APIs. Alternatively, you can use `Stack::new()` to create your own managed stack if you need more control.

Example using `buffer`:
```rust
// Buffer fully consumes an iterator,
// writes each item to a slice on the second stack,
// and gives you mutable access to the slice.
// This API supports Drop.
buffer(0..1000, |items| {
    assert_eq!(items.len(), 1000);
    assert_eq!(items[19], 19);
})
```

Example using `uninit_slice`:
```rust
uninit_slice(100, |slice| {
    // Write to the 100-element slice here
})
```

Example using `Stack`:
```rust
let stack = Stack::new();
stack.buffer(std::iter::repeat(5).take(100), |slice| {
    // Same as second_stack::buffer, but uses an
    // owned stack instead of the thread-local one.
    // Not recommended unless you have a specific reason,
    // because this limits passive sharing.
})
```

Example placing a huge value:
```rust
struct Huge {
    bytes: [u8; 4194304]
}

uninit::<Huge, _, _>(|huge| {
    // Do something with this very large
    // value that would cause a stack overflow if
    // we had used the thread stack
});
```

# FAQ

> How is this different from a bump allocator like [bumpalo](https://docs.rs/bumpalo/latest/bumpalo/)?

Bump allocators like bumpalo are arena allocators designed for *phase-oriented* allocations, whereas `second-stack` is a stack.

This allows `second-stack` to:
* Support `Drop` (see the sketch below)
* Dynamically up-size the allocation as needed rather than requiring the size to be known up-front
* Free and re-use memory earlier
* Conveniently support "large local variables" without requiring the program to be architected around the arena model
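
As an illustrative sketch of the `Drop` point (assuming `buffer` is in scope, as in the examples above), items that own resources are written to the second stack, handed to the closure, and dropped once the closure returns:
```rust
// Each String is buffered on the second stack and dropped after the
// closure returns; only the collected lengths escape.
let lengths: Vec<usize> = buffer(
    (0..3).map(|i| format!("item-{i}")),
    |items: &mut [String]| items.iter().map(|s| s.len()).collect(),
);
assert_eq!(lengths, vec![6, 6, 6]);
```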
--------------------------------------------------------------------------------
/src/allocation.rs:
--------------------------------------------------------------------------------
use std::{
    cell::UnsafeCell,
    mem::{self, align_of, replace, size_of},
    ptr,
};

use crate::DropStack;

#[derive(Clone)]
pub(crate) struct Allocation {
    pub base: *mut u8,
    pub len: usize,
    pub capacity: usize,
}

impl Allocation {
    pub fn get_slice<'a, T>(
        &mut self,
        parent: &'a UnsafeCell<Allocation>,
        len: usize,
    ) -> (DropStack<'a>, (*mut T, usize)) {
        unsafe {
            // Requires at a minimum size * len, but at a maximum must also pay
            // an alignment cost.
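            // Worked example (illustrative numbers): for T = u64 (size 8,
            // align 8) and len = 4, the minimum is 32 bytes, but a maximally
            // misaligned stack top can need up to 7 padding bytes first, so we
            // reserve (8 - 1) + (8 * 4) = 39 bytes.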
            let required_bytes_pessimistic = (align_of::<T>() - 1) + (size_of::<T>() * len);
            self.ensure_capacity(required_bytes_pessimistic);

            let restore = self.clone();
            let base = self.base.offset(self.len as isize);
            let align = base.align_offset(align_of::<T>());
            let ptr = base.offset(align as isize);
            self.len += align + (size_of::<T>() * len);

            (
                DropStack {
                    restore,
                    location: parent,
                },
                (ptr as *mut T, len),
            )
        }
    }
    fn ensure_capacity(&mut self, capacity: usize) {
        if self.remaining_bytes() < capacity {
            // Require at least 64 bytes for the smallest allocation,
            // and require that we at least double in size from the
            // previously allocated stack.
            let mut new_capacity = 64.max(self.capacity * 2);
            // Require that we are a power of 2 and can fit
            // the desired slice.
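            // Worked example (illustrative numbers): with a previous capacity
            // of 64 bytes and a request for 1000 bytes, new_capacity starts at
            // 64.max(64 * 2) = 128 and doubles until it fits, ending at 1024.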
            while new_capacity < capacity {
                new_capacity *= 2;
            }
            let mut dealloc = replace(self, Allocation::new(new_capacity));
            // If the previous stack was not borrowed, we need to
            // free it.
            dealloc.try_dealloc();
        }
    }

    pub fn ref_eq(&self, other: &Self) -> bool {
        self.base == other.base
    }
    pub fn null() -> Self {
        Self {
            base: ptr::null_mut(),
            len: 0,
            capacity: 0,
        }
    }

    pub fn remaining_bytes(&self) -> usize {
        self.capacity - self.len
    }

    pub fn new(size_in_bytes: usize) -> Self {
        let mut v = Vec::<u8>::with_capacity(size_in_bytes);
        let base = v.as_mut_ptr();
        mem::forget(v);

        // println!("Alloc {size_in_bytes} bytes at {base:?}");

        Self {
            base,
            len: 0,
            capacity: size_in_bytes,
        }
    }

    pub unsafe fn force_dealloc(&mut self) {
        if self.base == ptr::null_mut() {
            return;
        }

        // println!("Dealloc {} bytes at {:?}", self.capacity, self.base,);
        // Deallocates the memory
        drop(Vec::from_raw_parts(self.base, 0, self.capacity));

        self.base = ptr::null_mut();
    }

    pub fn try_dealloc(&mut self) {
        // Don't dealloc if the slice is in-use.
        // We assume at this point that there are no slices with len
        // 0 in-use, because we don't use the Allocation type for those.
        // See also: 26936c11-5b7c-472e-8f63-7922e63a5425
        // See also: 2ec61cda-e074-4b26-a9a5-a01b70706585
        if self.len != 0 {
            return;
        }

        unsafe { self.force_dealloc() }
    }
}
--------------------------------------------------------------------------------
/src/lib.rs:
--------------------------------------------------------------------------------
mod allocation;
use allocation::Allocation;

use std::{
    self,
    cell::UnsafeCell,
    mem::{size_of, MaybeUninit},
    ptr, slice,
};

thread_local!(
    static THREAD_LOCAL: Stack = Stack::new()
);

/// A Stack that is managed separately from the thread-local one.
/// Typically, using the thread-local APIs
/// is encouraged because they enable sharing across libraries, where each
/// re-use lowers the amortized cost of maintaining allocations. But, if
/// full control is necessary, this API may be used.
pub struct Stack(UnsafeCell<Allocation>);

impl Drop for Stack {
    fn drop(&mut self) {
        let stack = self.0.get_mut();
        // It's ok to use force_dealloc here instead of try_dealloc
        // because we know the allocation cannot be in-use. By eliding
        // the check, the allocation can still be freed when there
        // was a panic.
        unsafe {
            stack.force_dealloc();
        }
    }
}

impl Stack {
    pub fn new() -> Self {
        Self(UnsafeCell::new(Allocation::null()))
    }

    /// Place a potentially very large value on this stack.
    pub fn uninit<T, F, R>(&self, f: F) -> R
    where
        F: FnOnce(&mut MaybeUninit<T>) -> R,
    {
        // Delegate implementation to uninit_slice just to get this working.
        // Performance could be slightly improved with a bespoke implementation
        // of this method.
        self.uninit_slice(1, |slice| f(&mut slice[0]))
    }

    /// Allocates an uninit slice from this stack.
    pub fn uninit_slice<T, F, R>(&self, len: usize, f: F) -> R
    where
        F: FnOnce(&mut [MaybeUninit<T>]) -> R,
    {
        // Special case for ZSTs that disregards the rest of the code,
        // so that none of that code needs to account for ZSTs.
        // The reason this is convenient is that a ZST may use
        // the stack without bumping the pointer, which would
        // lead other code to free that memory while still in-use.
        // See also: 2ec61cda-e074-4b26-a9a5-a01b70706585
        // There may be other issues also.
        if std::mem::size_of::<T>() == 0 {
            let mut tmp = Vec::<T>::with_capacity(len);
            // We do need to take a slice here, because surprisingly
            // tmp.capacity() returns 18446744073709551615
            let slice = &mut tmp.spare_capacity_mut()[..len];
            return f(slice);
        }

        // Required for correctness
        // See also: 26936c11-5b7c-472e-8f63-7922e63a5425
        if len == 0 {
            return f(&mut []);
        }

        // Get the new slice, and the old allocation to
        // restore once the function is finished running.
        let (_restore, (ptr, len)) = unsafe {
            let stack = &mut *self.0.get();
            stack.get_slice(&self.0, len)
        };

        let slice = unsafe { slice::from_raw_parts_mut(ptr as *mut MaybeUninit<T>, len) };

        f(slice)
    }

    /// Buffers an iterator to a slice on this stack and gives temporary access to that slice.
    /// Do not use with an unbounded iterator, because this will eventually run out of memory and panic.
    pub fn buffer<T, F, R, I>(&self, i: I, f: F) -> R
    where
        I: Iterator<Item = T>,
        F: FnOnce(&mut [T]) -> R,
    {
        // Special case for ZST
        if size_of::<T>() == 0 {
            let mut v: Vec<_> = i.collect();
            return f(&mut v);
        }

        // Data goes in a struct in case user code panics.
        // User code includes Iterator::next, FnOnce, and Drop::drop
        struct Writer<'a, T> {
            restore: Option<DropStack<'a>>,
            base: *mut T,
            len: usize,
            capacity: usize,
        }

        impl<T> Writer<'_, T> {
            unsafe fn write(&mut self, item: T) {
                self.base.add(self.len).write(item);
                self.len += 1;
            }

            fn try_reuse(&mut self, stack: &mut Allocation) -> bool {
                if let Some(prev) = &self.restore {
                    if prev.restore.ref_eq(stack) {
                        // If we are already using this stack, we know the
                        // end ptr is already aligned. To double in size,
                        // we would need as many bytes as there are currently,
                        // and do not need to align.
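                        // Worked example (illustrative numbers): if this
                        // Writer currently holds 4 items of 8 bytes each at
                        // the top of the stack, doubling in place just claims
                        // another 4 * 8 = 32 bytes past the current end,
                        // with no copy and no realignment.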
                        let required_bytes = size_of::<T>() * self.capacity;

                        if stack.remaining_bytes() >= required_bytes {
                            stack.len += required_bytes;
                            self.capacity *= 2;
                            return true;
                        }
                    }
                }
                false
            }
        }

        impl<T> Drop for Writer<'_, T> {
            fn drop(&mut self) {
                unsafe {
                    for i in 0..self.len {
                        self.base.add(i).drop_in_place()
                    }
                }
            }
        }

        unsafe {
            let mut writer = Writer {
                restore: None,
                base: ptr::null_mut(),
                capacity: 0,
                len: 0,
            };

            for next in i {
                if writer.capacity == writer.len {
                    let stack = &mut *self.0.get();

                    // First try to use the same stack, but if that fails
                    // copy over to the upsized stack
                    if !writer.try_reuse(stack) {
                        // This will always be a different allocation, otherwise
                        // try_reuse would have succeeded
                        let (restore, (base, capacity)) =
                            stack.get_slice(&self.0, (writer.len * 2).max(1));

                        // The check for 0 is to avoid copying from a null ptr (miri violation)
                        if writer.len != 0 {
                            ptr::copy_nonoverlapping(writer.base, base, writer.len);
                        }

                        // This attempts to restore the old allocation when
                        // writer.restore is Some, but we know that there
                        // is a new allocation at this point, so the only
                        // thing it can do is free memory
                        writer.restore = Some(restore);

                        writer.capacity = capacity;
                        writer.base = base;
                    }
                }
                writer.write(next);
            }

            // TODO: (Performance?) Drop reserve of unused stack, if any. We have over-allocated.
            // TODO: (Performance?) Consider using size_hint

            let buffer = slice::from_raw_parts_mut(writer.base, writer.len);
            f(buffer)
        }
    }
}

/// Allocates an uninit slice from the thread-local stack.
pub fn uninit_slice<T, F, R>(len: usize, f: F) -> R
where
    F: FnOnce(&mut [MaybeUninit<T>]) -> R,
{
    THREAD_LOCAL.with(|stack| stack.uninit_slice(len, f))
}

/// Place a potentially very large value on the thread-local second stack.
pub fn uninit<T, F, R>(f: F) -> R
where
    F: FnOnce(&mut MaybeUninit<T>) -> R,
{
    THREAD_LOCAL.with(|stack| stack.uninit(f))
}

/// Buffers an iterator to a slice on the thread-local stack and gives temporary access to that slice.
/// Panics when running out of memory if the iterator is unbounded.
pub fn buffer<T, F, R, I>(i: I, f: F) -> R
where
    I: Iterator<Item = T>,
    F: FnOnce(&mut [T]) -> R,
{
    THREAD_LOCAL.with(|stack| stack.buffer(i, f))
}

// The logic to drop our Allocation goes into a drop impl so that if there
// is a panic the drop logic is still run and we don't leak any memory.
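// In the Drop impl below there are two cases: if the stack still points at the
// same allocation our slice was taken from, dropping simply rewinds its len to
// the saved value. If a nested call has since grown the stack into a new,
// larger allocation, ref_eq fails and we instead try to free the old
// allocation that this DropStack is holding.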
pub(crate) struct DropStack<'a> {
    pub restore: Allocation,
    pub location: &'a UnsafeCell<Allocation>,
}

impl Drop for DropStack<'_> {
    fn drop(&mut self) {
        unsafe {
            let mut current = &mut *self.location.get();
            if current.ref_eq(&self.restore) {
                current.len = self.restore.len;
            } else {
                self.restore.try_dealloc();
            }
        }
    }
}
--------------------------------------------------------------------------------
/tests/integration.rs:
--------------------------------------------------------------------------------
use rand::{
    distributions::{Distribution, Standard},
    rngs::StdRng,
    thread_rng, Rng, SeedableRng,
};
use second_stack::*;
use std::{fmt::Debug, marker::PhantomData, mem::MaybeUninit, thread};
use testdrop::TestDrop;

/// Randomly tests both uninit_slice and buffer.
/// Includes tricky cases like recursing during iteration and drop.
#[test]
fn soak() {
    #[derive(Copy, Clone, Debug)]
    struct Cfg {
        threads: usize,
        inner_loops: usize,
        outer_loops: usize,
        recursion: u32,
    }

    let cfg = if cfg!(miri) {
        Cfg {
            threads: 1,
            inner_loops: 2,
            outer_loops: 8,
            recursion: 4,
        }
    } else if cfg!(debug_assertions) {
        Cfg {
            threads: 32,
            inner_loops: 3,
            outer_loops: 100,
            recursion: 8,
        }
    } else {
        Cfg {
            threads: 64,
            inner_loops: 5,
            outer_loops: 500,
            recursion: 12,
        }
    };

    dbg!(&cfg);

    let mut handles = Vec::with_capacity(cfg.threads);

    for _ in 0..cfg.threads {
        let handle = thread::spawn(move || {
            for it in 0..cfg.outer_loops {
                if thread_rng().gen_bool(1.0 / (cfg.threads * cfg.inner_loops) as f64) {
                    dbg!(it);
                }
                thread::spawn(move || {
                    let local = Stack::new();
                    for _ in 0..cfg.inner_loops {
                        recurse(cfg.recursion, &local);
                    }
                })
                .join()
                .unwrap();
            }
        });
        handles.push(handle);
    }

    for handle in handles.drain(..) {
        handle.join().unwrap();
    }
}

fn rng_pair() -> (StdRng, StdRng) {
    let seed = thread_rng().gen();
    (StdRng::from_seed(seed), StdRng::from_seed(seed))
}

fn check_value<T>(limit: u32, local: &Stack)
where
    T: PartialEq + Debug,
    Standard: Distribution<T>,
{
    let mut call_check = CallCheck::new();

    #[cfg(not(miri))]
    const LEN: usize = 65536;
    #[cfg(miri)]
    const LEN: usize = 1024;

    struct Huge<T> {
        _a: [T; LEN],
        _b: [(T, T); LEN],
        _c: [(T, T, T); LEN],
        _d: [(T, T, T, T); LEN],
    }

    // If T is u8, this value would use almost 1/3 of the 2MiB thread stack.
    // When recursing and using other types we virtually guarantee a stack overflow
    // if this value were allocated on the thread's stack. Some other types
    // already use more than the limit with a single allocation.
    let f = move |_huge: &mut MaybeUninit<Huge<T>>| {
        call_check.ok();

        // TODO: Do an overwrite check here.
        // Even zeroing this out is very expensive.
        // *_uninit = MaybeUninit::zeroed();
        // Unfortunately, it is hard to do a sampling for verification as well.
        recurse(limit, local);
    };

    if rand_bool() {
        uninit(f);
    } else {
        local.uninit(f)
    }
}

/// Grabs a randomly sized slice, verifies its len, writes
/// random values to it, calls an external function,
/// and verifies that all of the writes remained intact.
fn check_slice<T>(limit: u32, local: &Stack)
where
    T: PartialEq + Debug,
    Standard: Distribution<T>,
{
    let len = thread_rng().gen_range(0usize..1025);

    let mut call_check = CallCheck::new();

    let f = move |uninit: &mut [MaybeUninit<T>]| {
        call_check.ok();
        let (mut rng_gen, mut rng_check) = rng_pair();

        assert_eq!(len, uninit.len());
        for i in 0..uninit.len() {
            let value = rng_gen.gen();
            uninit[i] = MaybeUninit::new(value);
        }
        recurse(limit, local);
        let init = unsafe { &*(uninit as *const [MaybeUninit<T>] as *const [T]) };
        // Verify that nothing overwrote this array.
        for i in 0..init.len() {
            let value = rng_check.gen();
            assert_eq!(init[i], value);
        }
    };

    if rand_bool() {
        uninit_slice(len, f);
    } else {
        local.uninit_slice(len, f)
    }
}

fn rand_bool() -> bool {
    thread_rng().gen()
}

fn check_rand_method<T>(limit: u32, local: &Stack)
where
    T: Debug + PartialEq,
    Standard: Distribution<T>,
{
    let switch = thread_rng().gen_range(0u32..3);
    match switch {
        0 => check_slice::<T>(limit, local),
        1 => check_iter::<T>(limit, local),
        2 => check_value::<T>(limit, local),
        _ => unreachable!(),
    }
}

fn check_rand_type(limit: u32, local: &Stack) {
    let switch = thread_rng().gen_range(0u32..13);
    // Pick some types with varying size/alignment requirements
    match switch {
        0 => check_rand_method::<u8>(limit, local),
        1 => check_rand_method::<u16>(limit, local),
        2 => check_rand_method::<u32>(limit, local),
        3 => check_rand_method::<(u8, u8)>(limit, local),
        4 => check_rand_method::<(u8, u16)>(limit, local),
        5 => check_rand_method::<(u8, u32)>(limit, local),
        6 => check_rand_method::<(u16, u8)>(limit, local),
        7 => check_rand_method::<(u16, u16)>(limit, local),
        8 => check_rand_method::<(u16, u32)>(limit, local),
        9 => check_rand_method::<(u32, u8)>(limit, local),
        10 => check_rand_method::<(u32, u16)>(limit, local),
        11 => check_rand_method::<(u32, u32)>(limit, local),
        12 => check_rand_method::<()>(limit, local),
        _ => unreachable!(),
    }
}

fn recurse(mut limit: u32, local: &Stack) {
    if limit == 0 {
        return;
    }

    limit -= 1;

    let with_local = |limit: u32, local: &Stack| {
        if thread_rng().gen() {
            check_rand_type(limit, local);
        }
        if thread_rng().gen() {
            check_rand_type(limit, local);
        }
    };

    if thread_rng().gen_range(0..8) == 0 {
        let new_local = Stack::new();
        with_local(limit, &new_local);
    } else {
        with_local(limit, local);
    }
}

fn check_iter<T>(limit: u32, local: &Stack)
where
    T: Debug + PartialEq,
    Standard: Distribution<T>,
{
    let (rng_gen, mut rng_check) = rng_pair();
    let total = thread_rng().gen_range(0..1025);
    let td = TestDrop::new();
    let iter: RandIterator<T> = RandIterator {
        total,
        count: 0,
        rand: rng_gen,
        limit,
        local,
        drop: &td,
        _marker: PhantomData,
    };

    let mut check = CallCheck::new();
    let f = |items: &mut [DropCheck<T>]| {
        check.ok();
        assert_eq!(items.len(), total);
        for item in items {
            assert_eq!(&item.value, &rng_check.gen());
        }
        recurse(limit, local);
    };
    if rand_bool() {
        buffer(iter, f);
    } else {
        local.buffer(iter, f);
    }

    assert_eq!(td.num_dropped_items(), td.num_tracked_items());
}

struct DropCheck<'a, T> {
    _item: testdrop::Item<'a>,
    local: &'a Stack,
    limit: u32,
    probability: usize,
    value: T,
}

impl<T> Drop for DropCheck<'_, T> {
    fn drop(&mut self) {
        if thread_rng().gen_range(0..self.probability) == 0 {
            recurse(self.limit, self.local);
        }
    }
}

struct RandIterator<'a, T> {
    total: usize,
    count: usize,
    rand: StdRng,
    limit: u32,
    drop: &'a TestDrop,
    local: &'a Stack,
    _marker: PhantomData<*const T>,
}

impl<'a, T> Iterator for RandIterator<'a, T>
where
    Standard: Distribution<T>,
{
    type Item = DropCheck<'a, T>;
    fn next(&mut self) -> Option<Self::Item> {
        if self.total == self.count {
            return None;
        }
        let probability = self.total * 2;

        if thread_rng().gen_range(0..probability) == 0 {
            recurse(self.limit, self.local);
        }

        self.count += 1;
        let value = self.rand.gen();
        let item = self.drop.new_item().1;

        return Some(DropCheck {
            value,
            _item: item,
            probability,
            local: self.local,
            limit: self.limit,
        });
    }
}

struct CallCheck {
    called: bool,
}

impl CallCheck {
    pub fn new() -> Self {
        Self { called: false }
    }
    pub fn ok(&mut self) {
        self.called = true;
    }
}
impl Drop for CallCheck {
    #[track_caller]
    fn drop(&mut self) {
        assert!(self.called);
    }
}
--------------------------------------------------------------------------------