├── .circleci └── config.yml ├── .gitignore ├── CHANGELOG.md ├── Cargo.toml ├── LICENSE-CC0 ├── LICENSE-MIT ├── README.md ├── derive ├── Cargo.toml └── src │ └── lib.rs ├── examples └── linked_list.rs ├── src ├── allocator_api.rs ├── arena.rs ├── barrier.rs ├── collect.rs ├── collect_impl.rs ├── context.rs ├── dynamic_roots.rs ├── gc.rs ├── gc_weak.rs ├── hashbrown.rs ├── lib.rs ├── lock.rs ├── metrics.rs ├── no_drop.rs ├── static_collect.rs ├── types.rs └── unsize.rs └── tests ├── tests.rs └── ui ├── bad_collect_bound.rs ├── bad_collect_bound.stderr ├── bad_write_field_projections.rs ├── bad_write_field_projections.stderr ├── invalid_collect_field.rs ├── invalid_collect_field.stderr ├── multiple_require_static.rs ├── multiple_require_static.stderr ├── no_drop_and_drop_impl.rs ├── no_drop_and_drop_impl.stderr ├── require_static_enum_variant.rs ├── require_static_enum_variant.stderr ├── require_static_not_static.rs └── require_static_not_static.stderr /.circleci/config.yml: -------------------------------------------------------------------------------- 1 | version: 2 2 | 3 | jobs: 4 | build: 5 | docker: 6 | - image: cimg/rust:1.84.0 7 | steps: 8 | - checkout 9 | - run: 10 | name: Setup Rust 11 | command: | 12 | rustup component add rustfmt 13 | rustup toolchain uninstall nightly 14 | rustup toolchain install nightly -c miri rust-src 15 | - run: 16 | name: Version information 17 | command: | 18 | rustup --version 19 | rustc --version 20 | cargo --version 21 | rustc +nightly --version 22 | cargo +nightly --version 23 | - run: 24 | name: Calculate dependencies 25 | command: cargo generate-lockfile 26 | - restore_cache: 27 | keys: 28 | - cargo-cache-{{ arch }}-{{ checksum "Cargo.lock" }} 29 | - run: 30 | name: Check Formatting 31 | command: | 32 | rustfmt --version 33 | cargo fmt --all -- --check --color=auto 34 | - run: 35 | name: Build all targets 36 | command: cargo build --all --all-targets 37 | - run: 38 | name: Build no_std targets 39 | command: | 40 | cargo build --no-default-features 41 | - run: 42 | name: Check with combinations of optional features 43 | command: | 44 | cargo check --all --features tracing 45 | cargo check --all --features allocator-api2 46 | cargo check --all --features hashbrown 47 | cargo check --all --features tracing,allocator-api2,hashbrown 48 | - run: 49 | name: Run all tests 50 | command: cargo test --all 51 | - run: 52 | name: Run all tests under miri 53 | command: | 54 | cargo +nightly miri test --all --tests --all-features -- --skip ui 55 | - save_cache: 56 | paths: 57 | - /usr/local/cargo/registry 58 | - target/debug/.fingerprint 59 | - target/debug/build 60 | - target/debug/deps 61 | key: cargo-cache-{{ arch }}-{{ checksum "Cargo.lock" }} 62 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | /target/ 2 | **/*.rs.bk 3 | **/*.sw? 4 | Cargo.lock 5 | .DS_Store 6 | .#* 7 | .dir-locals.el 8 | .envrc 9 | shell.nix 10 | flake.nix 11 | flake.lock -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## [0.5.3] 2 | * Adds a `Collect` impl for `hashbrown::HashTable`. 3 | 4 | ## [0.5.2] 5 | * Add the ability to transition from a fully marked arena immediately into 6 | `CollectionPhase::Collecting`. 
7 | 
8 | ## [0.5.1]
9 | * Correct the behavior of `Arena::mark_debt` and `Arena::mark_all`
10 |   to do what their documentation suggests and do nothing during
11 |   `CollectionPhase::Collecting`
12 | * Implement `Collect` for `std::collections::LinkedList`
13 | * Make `StaticCollect` `#[repr(transparent)]`, to support sound casting from
14 |   `Gc<StaticCollect<T>>` to `Gc<T>`.
15 | 
16 | ## [0.5.0]
17 | 
18 | This release adds the concept of "finalization" to arenas. At the end of the
19 | "mark" phase, at the moment that an arena is considered "fully marked" and
20 | is ready to transition to the "sweep" phase, the "fully marked" arena can be
21 | examined with some special powers...
22 | 
23 | `MarkedArena::finalize` can be called to check which pointers are considered
24 | "dead" for this collection cycle. A "dead" pointer is one that is destined
25 | (without further changes) to be freed during the next "sweep" phase. You can
26 | "resurrect" these pointers either through normal root mutation before entering
27 | the "sweep" phase, or more simply by calling explicit `resurrect` methods
28 | on those pointers. Any resurrection, whether through mutation or `resurrect`
29 | methods, immediately makes the arena no longer *fully* marked, but the arena can
30 | then be re-marked and re-examined.
31 | 
32 | This simple, safe API (examining a "fully marked" arena for potentially dead
33 | pointers, performing mutation, then potentially re-marking and re-examining)
34 | is enough to implement quite complex garbage collector behavior. It can be used
35 | to implement "finalizer" methods on objects, "ephemeron tables", Java-style
36 | "reference queues" and more.
37 | 
38 | ### Release Highlights
39 | * New `Finalization` API.
40 | * New API to query the current phase of the collector.
41 | * Fixup the pacing algorithm for correctness and simplicity. Makes
42 |   `Pacing::with_timing_factor(0.0)` actually work correctly for stop-the-world
43 |   collection.
44 | * `GcWeak::is_dropped` now works only during collection, not mutation.
45 | * Add more trait impls to `StaticCollect`.
46 | * `Gc::ptr_eq` and `GcWeak::ptr_eq` now ignore metadata of `dyn` pointers,
47 |   matching the behavior of `Rc::ptr_eq` and `Arc::ptr_eq`.
48 | 
49 | ## [0.4.0]
50 | 
51 | This release adds the ability to track *external* allocations (allocations
52 | which are not stored in a `Gc`) which also participate in pacing the garbage
53 | collector. There is now a new (feature-gated) type `allocator_api::MetricsAlloc`
54 | which implements the `allocator-api2::Allocator` trait and can be used to
55 | automatically track the external allocations of collection types.
56 | 
57 | This release also adds (feature-gated) `tracing` support, which emits a span
58 | per GC phase (propagate, sweep, drop), and events when collection resumes and
59 | yields.
60 | 
61 | ### Release Highlights
62 | - Tracked external allocations API which participates in GC pacing.
63 | - Feature-gated support for an `allocator_api2::Allocator` implementation that
64 |   automatically tracks allocation.
65 | - Feature-gated support for `hashbrown` types, to automatically implement
66 |   `Collect` on them.
67 | - Feature-gated `tracing` support.
68 | - Implement `Collect` for `Box`.
69 | - Add methods to project `Write>` and `Write>`.
70 | - Don't fail to build by trying to implement `Collect` on `Arc` for platforms
71 |   without `Arc`.
72 | 
73 | ## [0.3.3]
74 | - Actually pause for the configured amount of time in the gc, rather than the
75 |   minimum.
76 | 
77 | ## [0.3.2]
78 | - Implement `Eq`, `PartialEq`, `Ord`, `PartialOrd`, and `Hash` traits on `Gc`
79 |   similar to the traits on std smart pointers like `Rc`.
80 | - Relax unnecessary bounds on `Collect` impls of std collections.
81 | - Make `Arena::remembered_size()` return reasonable values.
82 | 
83 | ## [0.3.1]
84 | - Fallible `DynamicRootSet` API.
85 | 
86 | ## [0.3.0]
87 | 
88 | An enormous number of breaking API changes, too many to list, almost the entire
89 | API has been rethought.
90 | 
91 | The credit goes mostly to others for the release, @Bale001, @Aaron1011,
92 | @dragazo, and especially @moulins.
93 | 
94 | ### Release Highlights
95 | - New `Arena` API that does not require macros and instead uses a `Rootable`
96 |   trait and HRTBs for lifetime projection.
97 | - Methods on `Arena` to directly mutate the root type and map the root from one
98 |   type to another.
99 | - A new API for 'static roots that are held in smart pointers
100 |   (`DynamicRootSet`).
101 | - `Gc` pointers can now point to DSTs, and there is an `unsize!` macro for
102 |   unsizing coercions to replace the unstable `Unsize` trait.
103 | - Weak pointers!
104 | - `GcCell` has been replaced by explicit public lock types held within `Gc`
105 |   pointers with safe ways of mutating them.
106 | - Field projection on held lock types to allow for separate locks held within a
107 |   single `Gc` pointer to be safely mutated.
108 | - Unsafe `Gc` pointer coercions to compatible pointee types.
109 | - Soundly get references to `Gc` types with `&'gc` lifetime.
110 | - More ergonomic `Mutation` and `Collection` context types.
111 | - *Tons* of correctness and soundness fixes.
112 | 
113 | This release also **completely drops** the `gc-sequence` combinator crate.
114 | Because of other API changes, anything you could do with `gc-sequence` before
115 | can almost certainly be expressed better either using the new 'static root API
116 | or with the new map API. See [this comment](https://github.com/kyren/gc-arena/pull/50#issuecomment-1538421347) for a bit more info.
117 | 
118 | ## [0.2.2]
119 | - No changes, fixing a release snafu with cargo-release
120 | 
121 | ## [0.2.1]
122 | - Allow using `#[collect(require_static)]` on fields
123 | - Add no_std compatibility for gc-arena and gc-sequence
124 | - Add `Collect` impl for `VecDeque`, `PhantomData`.
125 | - Add `#[track_caller]` for better panic error messages
126 | - Improve error messages for proc-macro derived `Collect` implementations
127 |   substantially.
128 | - Improve generated code for `Context::allocate` in release builds.
129 | 
130 | ## [0.2]
131 | - API incompatible change: depend on proc-macro2, quote, and syn 1.0
132 | - API incompatible change: update synstructure to 0.12
133 | - API incompatible change: use the trick from the `pin-project` crate to prevent
134 |   types from implementing Drop by making it cause a conflicting impl of a
135 |   `MustNotImplDrop` trait.
136 | - API incompatible change: Add `#[collect(no_drop)]` and remove the `empty_drop`
137 |   and `require_copy` versions since they are now less useful than `no_drop`.
138 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | resolver = "2" 3 | 4 | members = [ 5 | "derive", 6 | ] 7 | 8 | [workspace.package] 9 | version = "0.5.3" 10 | authors = ["kyren ", "moulins", "Aaron Hill "] 11 | edition = "2021" 12 | license = "MIT" 13 | readme = "README.md" 14 | repository = "https://github.com/kyren/gc-arena" 15 | 16 | [package] 17 | name = "gc-arena" 18 | description = "safe, incrementally garbage collected arenas" 19 | version.workspace = true 20 | authors.workspace = true 21 | edition.workspace = true 22 | license.workspace = true 23 | readme.workspace = true 24 | repository.workspace = true 25 | 26 | [features] 27 | default = ["std"] 28 | std = [] 29 | tracing = ["dep:tracing"] 30 | allocator-api2 = ["dep:allocator-api2", "hashbrown?/allocator-api2"] 31 | 32 | [dependencies] 33 | allocator-api2 = { version = "0.2", optional = true, default-features = false, features = ["alloc"] } 34 | gc-arena-derive = { path = "./derive", version = "0.5.3"} 35 | hashbrown = { version = "0.15.2", optional = true, default-features = false } 36 | tracing = { version = "0.1.37", optional = true, default-features = false } 37 | 38 | [dev-dependencies] 39 | rand = "0.8" 40 | trybuild = "1.0" 41 | -------------------------------------------------------------------------------- /LICENSE-CC0: -------------------------------------------------------------------------------- 1 | Creative Commons Legal Code 2 | 3 | CC0 1.0 Universal 4 | 5 | CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE 6 | LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN 7 | ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS 8 | INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES 9 | REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS 10 | PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM 11 | THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED 12 | HEREUNDER. 13 | 14 | Statement of Purpose 15 | 16 | The laws of most jurisdictions throughout the world automatically confer 17 | exclusive Copyright and Related Rights (defined below) upon the creator 18 | and subsequent owner(s) (each and all, an "owner") of an original work of 19 | authorship and/or a database (each, a "Work"). 20 | 21 | Certain owners wish to permanently relinquish those rights to a Work for 22 | the purpose of contributing to a commons of creative, cultural and 23 | scientific works ("Commons") that the public can reliably and without fear 24 | of later claims of infringement build upon, modify, incorporate in other 25 | works, reuse and redistribute as freely as possible in any form whatsoever 26 | and for any purposes, including without limitation commercial purposes. 27 | These owners may contribute to the Commons to promote the ideal of a free 28 | culture and the further production of creative, cultural and scientific 29 | works, or to gain reputation or greater distribution for their Work in 30 | part through the use and efforts of others. 
31 | 32 | For these and/or other purposes and motivations, and without any 33 | expectation of additional consideration or compensation, the person 34 | associating CC0 with a Work (the "Affirmer"), to the extent that he or she 35 | is an owner of Copyright and Related Rights in the Work, voluntarily 36 | elects to apply CC0 to the Work and publicly distribute the Work under its 37 | terms, with knowledge of his or her Copyright and Related Rights in the 38 | Work and the meaning and intended legal effect of CC0 on those rights. 39 | 40 | 1. Copyright and Related Rights. A Work made available under CC0 may be 41 | protected by copyright and related or neighboring rights ("Copyright and 42 | Related Rights"). Copyright and Related Rights include, but are not 43 | limited to, the following: 44 | 45 | i. the right to reproduce, adapt, distribute, perform, display, 46 | communicate, and translate a Work; 47 | ii. moral rights retained by the original author(s) and/or performer(s); 48 | iii. publicity and privacy rights pertaining to a person's image or 49 | likeness depicted in a Work; 50 | iv. rights protecting against unfair competition in regards to a Work, 51 | subject to the limitations in paragraph 4(a), below; 52 | v. rights protecting the extraction, dissemination, use and reuse of data 53 | in a Work; 54 | vi. database rights (such as those arising under Directive 96/9/EC of the 55 | European Parliament and of the Council of 11 March 1996 on the legal 56 | protection of databases, and under any national implementation 57 | thereof, including any amended or successor version of such 58 | directive); and 59 | vii. other similar, equivalent or corresponding rights throughout the 60 | world based on applicable law or treaty, and any national 61 | implementations thereof. 62 | 63 | 2. Waiver. To the greatest extent permitted by, but not in contravention 64 | of, applicable law, Affirmer hereby overtly, fully, permanently, 65 | irrevocably and unconditionally waives, abandons, and surrenders all of 66 | Affirmer's Copyright and Related Rights and associated claims and causes 67 | of action, whether now known or unknown (including existing as well as 68 | future claims and causes of action), in the Work (i) in all territories 69 | worldwide, (ii) for the maximum duration provided by applicable law or 70 | treaty (including future time extensions), (iii) in any current or future 71 | medium and for any number of copies, and (iv) for any purpose whatsoever, 72 | including without limitation commercial, advertising or promotional 73 | purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each 74 | member of the public at large and to the detriment of Affirmer's heirs and 75 | successors, fully intending that such Waiver shall not be subject to 76 | revocation, rescission, cancellation, termination, or any other legal or 77 | equitable action to disrupt the quiet enjoyment of the Work by the public 78 | as contemplated by Affirmer's express Statement of Purpose. 79 | 80 | 3. Public License Fallback. Should any part of the Waiver for any reason 81 | be judged legally invalid or ineffective under applicable law, then the 82 | Waiver shall be preserved to the maximum extent permitted taking into 83 | account Affirmer's express Statement of Purpose. 
In addition, to the 84 | extent the Waiver is so judged Affirmer hereby grants to each affected 85 | person a royalty-free, non transferable, non sublicensable, non exclusive, 86 | irrevocable and unconditional license to exercise Affirmer's Copyright and 87 | Related Rights in the Work (i) in all territories worldwide, (ii) for the 88 | maximum duration provided by applicable law or treaty (including future 89 | time extensions), (iii) in any current or future medium and for any number 90 | of copies, and (iv) for any purpose whatsoever, including without 91 | limitation commercial, advertising or promotional purposes (the 92 | "License"). The License shall be deemed effective as of the date CC0 was 93 | applied by Affirmer to the Work. Should any part of the License for any 94 | reason be judged legally invalid or ineffective under applicable law, such 95 | partial invalidity or ineffectiveness shall not invalidate the remainder 96 | of the License, and in such case Affirmer hereby affirms that he or she 97 | will not (i) exercise any of his or her remaining Copyright and Related 98 | Rights in the Work or (ii) assert any associated claims and causes of 99 | action with respect to the Work, in either case contrary to Affirmer's 100 | express Statement of Purpose. 101 | 102 | 4. Limitations and Disclaimers. 103 | 104 | a. No trademark or patent rights held by Affirmer are waived, abandoned, 105 | surrendered, licensed or otherwise affected by this document. 106 | b. Affirmer offers the Work as-is and makes no representations or 107 | warranties of any kind concerning the Work, express, implied, 108 | statutory or otherwise, including without limitation warranties of 109 | title, merchantability, fitness for a particular purpose, non 110 | infringement, or the absence of latent or other defects, accuracy, or 111 | the present or absence of errors, whether or not discoverable, all to 112 | the greatest extent permissible under applicable law. 113 | c. Affirmer disclaims responsibility for clearing rights of other persons 114 | that may apply to the Work or any use thereof, including without 115 | limitation any person's Copyright and Related Rights in the Work. 116 | Further, Affirmer disclaims responsibility for obtaining any necessary 117 | consents, permissions or other rights required for any use of the 118 | Work. 119 | d. Affirmer understands and acknowledges that Creative Commons is not a 120 | party to this document and has no duty or obligation with respect to 121 | this CC0 or use of the Work. -------------------------------------------------------------------------------- /LICENSE-MIT: -------------------------------------------------------------------------------- 1 | Permission is hereby granted, free of charge, to any person obtaining a copy 2 | of this software and associated documentation files (the "Software"), to deal 3 | in the Software without restriction, including without limitation the rights 4 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 5 | copies of the Software, and to permit persons to whom the Software is 6 | furnished to do so, subject to the following conditions: 7 | 8 | The above copyright notice and this permission notice shall be included in all 9 | copies or substantial portions of the Software. 10 | 11 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 12 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 13 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 14 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 15 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 16 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 17 | SOFTWARE. 18 | 19 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![crates.io](https://img.shields.io/crates/v/gc-arena)](https://crates.io/crates/gc-arena) 2 | [![docs.rs](https://docs.rs/gc-arena/badge.svg)](https://docs.rs/gc-arena) 3 | [![Build Status](https://img.shields.io/circleci/project/github/kyren/gc-arena.svg)](https://circleci.com/gh/kyren/gc-arena) 4 | 5 | ## gc-arena 6 | 7 | This repo is home to the `gc-arena` crate, which provides Rust with garbage 8 | collected arenas and a means of safely interacting with them. 9 | 10 | The `gc-arena` crate, along with its helper crate `gc-arena-derive`, provides 11 | allocation with safe, incremental, exact, cycle-detecting garbage collection 12 | within a closed "arena". There are two techniques at play that make this system 13 | sound: 14 | 15 | * Garbage collected objects are traced using the `Collect` trait, which must 16 | be implemented correctly to ensure that all reachable objects are found. This 17 | trait is therefore `unsafe`, but it *can* safely be implemented by procedural 18 | macro, and the `gc-arena-derive` provides such a safe procedural macro. 19 | 20 | * In order for garbage collection to take place, the garbage collector must 21 | first have a list of "root" objects which are known to be reachable. In our 22 | case, the user of `gc-arena` chooses a single root object for the arena, but 23 | this is not sufficient for safe garbage collection. If garbage collection 24 | were to take place when there are garbage collected pointers anywhere on the 25 | Rust stack, such pointers would also need to be considered as "root" objects 26 | to prevent memory unsafety. `gc-arena` solves this by strictly limiting where 27 | garbage collected pointers can be stored, and when they can be alive. The 28 | arena can only be accessed through a single `mutate` method which takes a 29 | callback, and all garbage collected pointers inside this callback are branded 30 | with an invariant lifetime which is unique to that single callback call. Thus, 31 | when outside of this `mutate` method, the rust borrow checker ensures that 32 | it is not possible for garbage collected pointers to be alive anywhere on 33 | the stack, nor is it possible for them to have been smuggled outside of the 34 | arena's root object. Since all pointers can be proven to be reachable from the 35 | single root object, safe garbage collection can take place. 36 | 37 | In other words, the `gc-arena` crate does *not* retrofit Rust with a globally 38 | accessible garbage collector, rather it *only* allows for limited garbage 39 | collection in isolated garbage collected arenas. All garbage collected pointers 40 | must forever live inside only this arena, and pointers from different arenas are 41 | prevented from being stored in the wrong arena. 42 | 43 | See [this blog post](https://kyju.org/blog/rust-safe-garbage-collection/) for a 44 | more in-depth tour of the crate's design. It is quite dense, but it explains 45 | everything necessary to fully understand the machinery used in the included 46 | [linked list example](examples/linked_list.rs). 
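For a quick taste of the API described above, here is a minimal sketch using the same `Arena`, `Rootable!`, and `Gc` pieces demonstrated in the linked list example (the root type and values here are purely illustrative):

```rust
use gc_arena::{Arena, Collect, Gc, Rootable};

// A root type holding a single garbage collected pointer.
#[derive(Collect)]
#[collect(no_drop)]
struct MyRoot<'gc> {
    number: Gc<'gc, i32>,
}

fn main() {
    // The closure receives the mutation context and returns the arena's root.
    let mut arena = Arena::<Rootable![MyRoot<'_>]>::new(|mc| MyRoot {
        number: Gc::new(mc, 42),
    });

    // All access to the interior happens through `mutate`; the branded `'gc`
    // lifetime keeps `Gc` pointers from escaping the callback.
    arena.mutate(|_mc, root| {
        assert_eq!(*root.number, 42);
    });

    // Collection is triggered outside of `mutate`, when no `Gc` pointers can
    // be alive on the stack.
    arena.collect_debt();
}
```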
47 | 48 | ## Use cases 49 | 50 | This crate was developed primarily as a means of writing VMs for garbage 51 | collected languages in safe Rust, but there are probably many more uses than 52 | just this. 53 | 54 | ## Current status and TODOs 55 | 56 | Basically usable and safe! It is used by the Adobe Flash Player emulator 57 | [Ruffle](https://github.com/ruffle-rs/ruffle) for its ActionScript VM as well 58 | as some other projects (like my own stackless Lua runtime 59 | [piccolo](https://github.com/kyren/piccolo), for which the crate was originally 60 | designed) 61 | 62 | The collection algorithm is an incremental mark-and-sweep algorithm very similar 63 | to the one in PUC-Rio Lua, and is optimized primarily for low pause time. During 64 | mutation, allocation "debt" is accumulated, and this "debt" determines the 65 | amount of work that the next call to `Arena::collect` will do. 66 | 67 | The pointers held in arenas (spelled `Gc<'gc, T>`) are zero-cost raw pointers. 68 | They implement `Copy` and are pointer sized, and no bookkeeping at all is done 69 | during mutation. 70 | 71 | Some notable current limitations: 72 | 73 | * Allocating DSTs is currently somewhat painful due to limitations in Rust. It 74 | is possible to have `Gc` pointers to DSTs, and there is a replacement for 75 | unstable `Unsize` coercion, but there is no support for directly allocating 76 | arbitrarily sized DSTs. 77 | 78 | * There is no support at all for multi-threaded allocation and collection. 79 | The basic lifetime and safety techniques here would still work in an arena 80 | supporting multi-threading, but this crate does not support this. It is 81 | optimized for single threaded use and multiple, independent arenas. 82 | 83 | * The `Collect` trait does not provide a mechanism to move objects once they are 84 | allocated, so this limits the types of collectors that could be written. This 85 | is achievable but no work has been done towards this. 86 | 87 | ## Prior Art 88 | 89 | The ideas here are mostly not mine. Much of the user-facing design is borrowed 90 | heavily from [rust-gc]( 91 | https://manishearth.github.io/blog/2015/09/01/designing-a-gc-in-rust/), 92 | and the idea of using "generativity" comes from [You can't spell trust without 93 | Rust](https://raw.githubusercontent.com/Gankro/thesis/master/thesis.pdf). The 94 | design of the actual garbage collection system itself borrows heavily from the 95 | incremental mark-and-sweep collector in Lua 5.4. 96 | 97 | ## License 98 | 99 | Everything in this repository is licensed under either of: 100 | 101 | * MIT license [LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT 102 | * Creative Commons CC0 1.0 Universal Public Domain Dedication 103 | [LICENSE-CC0](LICENSE-CC0) or 104 | https://creativecommons.org/publicdomain/zero/1.0/ 105 | 106 | at your option. 
107 | -------------------------------------------------------------------------------- /derive/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "gc-arena-derive" 3 | description = "proc-macro support for gc-arena" 4 | version.workspace = true 5 | authors.workspace = true 6 | edition.workspace = true 7 | license.workspace = true 8 | readme.workspace = true 9 | repository.workspace = true 10 | 11 | [lib] 12 | proc-macro = true 13 | 14 | [dependencies] 15 | proc-macro2 = "1.0.56" 16 | quote = "1.0.26" 17 | syn = { version = "2.0.17", features = ["default", "visit-mut"] } 18 | synstructure = "0.13" 19 | -------------------------------------------------------------------------------- /derive/src/lib.rs: -------------------------------------------------------------------------------- 1 | use proc_macro2::{Span, TokenStream}; 2 | use quote::{quote, quote_spanned, ToTokens}; 3 | use syn::{ 4 | parse::{Parse, ParseStream}, 5 | spanned::Spanned, 6 | visit_mut::VisitMut, 7 | }; 8 | use synstructure::{decl_derive, AddBounds}; 9 | 10 | fn collect_derive(s: synstructure::Structure) -> TokenStream { 11 | fn find_collect_meta(attrs: &[syn::Attribute]) -> syn::Result> { 12 | let mut found = None; 13 | for attr in attrs { 14 | if attr.path().is_ident("collect") && found.replace(attr).is_some() { 15 | return Err(syn::parse::Error::new_spanned( 16 | attr.path(), 17 | "Cannot specify multiple `#[collect]` attributes! Consider merging them.", 18 | )); 19 | } 20 | } 21 | 22 | Ok(found) 23 | } 24 | 25 | // Deriving `Collect` must be done with care, because an implementation of `Drop` is not 26 | // necessarily safe for `Collect` types. This derive macro has three available modes to ensure 27 | // that this is safe: 28 | // 1) Require that the type be 'static with `#[collect(require_static)]`. 29 | // 2) Prohibit a `Drop` impl on the type with `#[collect(no_drop)]` 30 | // 3) Allow a custom `Drop` impl that might be unsafe with `#[collect(unsafe_drop)]`. Such 31 | // `Drop` impls must *not* access garbage collected pointers during `Drop::drop`. 32 | #[derive(PartialEq)] 33 | enum Mode { 34 | RequireStatic, 35 | NoDrop, 36 | UnsafeDrop, 37 | } 38 | 39 | let mut mode = None; 40 | let mut override_bound = None; 41 | let mut gc_lifetime = None; 42 | 43 | fn usage_error(meta: &syn::meta::ParseNestedMeta, msg: &str) -> syn::parse::Error { 44 | meta.error(format_args!( 45 | "{msg}. `#[collect(...)]` requires one mode (`require_static`, `no_drop`, or `unsafe_drop`) and optionally `bound = \"...\"`." 
46 | )) 47 | } 48 | 49 | let result = match find_collect_meta(&s.ast().attrs) { 50 | Ok(Some(attr)) => attr.parse_nested_meta(|meta| { 51 | if meta.path.is_ident("bound") { 52 | if override_bound.is_some() { 53 | return Err(usage_error(&meta, "multiple bounds specified")); 54 | } 55 | 56 | let lit: syn::LitStr = meta.value()?.parse()?; 57 | override_bound = Some(lit); 58 | return Ok(()); 59 | } 60 | 61 | if meta.path.is_ident("gc_lifetime") { 62 | if gc_lifetime.is_some() { 63 | return Err(usage_error(&meta, "multiple `'gc` lifetimes specified")); 64 | } 65 | 66 | let lit: syn::Lifetime = meta.value()?.parse()?; 67 | gc_lifetime = Some(lit); 68 | return Ok(()); 69 | } 70 | 71 | meta.input.parse::()?; 72 | 73 | if mode.is_some() { 74 | return Err(usage_error(&meta, "multiple modes specified")); 75 | } else if meta.path.is_ident("require_static") { 76 | mode = Some(Mode::RequireStatic); 77 | } else if meta.path.is_ident("no_drop") { 78 | mode = Some(Mode::NoDrop); 79 | } else if meta.path.is_ident("unsafe_drop") { 80 | mode = Some(Mode::UnsafeDrop); 81 | } else { 82 | return Err(usage_error(&meta, "unknown option")); 83 | } 84 | Ok(()) 85 | }), 86 | Ok(None) => Ok(()), 87 | Err(err) => Err(err), 88 | }; 89 | 90 | if let Err(err) = result { 91 | return err.to_compile_error(); 92 | } 93 | 94 | let Some(mode) = mode else { 95 | panic!( 96 | "{}", 97 | "deriving `Collect` requires a `#[collect(...)]` attribute" 98 | ); 99 | }; 100 | 101 | let where_clause = if mode == Mode::RequireStatic { 102 | quote!(where Self: 'static) 103 | } else { 104 | override_bound 105 | .as_ref() 106 | .map(|x| { 107 | x.parse() 108 | .expect("`#[collect]` failed to parse explicit trait bound expression") 109 | }) 110 | .unwrap_or_else(|| quote!()) 111 | }; 112 | 113 | let mut errors = vec![]; 114 | 115 | let collect_impl = if mode == Mode::RequireStatic { 116 | let mut impl_struct = s.clone(); 117 | impl_struct.add_bounds(AddBounds::None); 118 | impl_struct.gen_impl(quote! { 119 | gen unsafe impl<'gc> ::gc_arena::Collect<'gc> for @Self #where_clause { 120 | const NEEDS_TRACE: bool = false; 121 | } 122 | }) 123 | } else { 124 | let mut impl_struct = s.clone(); 125 | 126 | let mut needs_trace_expr = TokenStream::new(); 127 | quote!(false).to_tokens(&mut needs_trace_expr); 128 | 129 | let mut static_bindings = vec![]; 130 | 131 | // Ignore all bindings that have `#[collect(require_static)]` For each binding with 132 | // `#[collect(require_static)]`, we push a bound of the form `FieldType: 'static` to 133 | // `static_bindings`, which will be added to the genererated `Collect` impl. The presence of 134 | // the bound guarantees that the field cannot hold any `Gc` pointers, so it's safe to ignore 135 | // that field in `needs_trace` and `trace` 136 | impl_struct.filter(|b| match find_collect_meta(&b.ast().attrs) { 137 | Ok(Some(attr)) => { 138 | let mut static_binding = false; 139 | let result = attr.parse_nested_meta(|meta| { 140 | if meta.input.is_empty() && meta.path.is_ident("require_static") { 141 | static_binding = true; 142 | static_bindings.push(b.ast().ty.clone()); 143 | Ok(()) 144 | } else { 145 | Err(meta.error("Only `#[collect(require_static)]` is supported on a field")) 146 | } 147 | }); 148 | errors.extend(result.err()); 149 | !static_binding 150 | } 151 | Ok(None) => true, 152 | Err(err) => { 153 | errors.push(err); 154 | true 155 | } 156 | }); 157 | 158 | for static_binding in static_bindings { 159 | impl_struct.add_where_predicate(syn::parse_quote! 
{ #static_binding: 'static }); 160 | } 161 | 162 | // `#[collect(require_static)]` only makes sense on fields, not enum variants. Emit an error 163 | // if it is used in the wrong place 164 | if let syn::Data::Enum(..) = impl_struct.ast().data { 165 | for v in impl_struct.variants() { 166 | for attr in v.ast().attrs { 167 | if attr.path().is_ident("collect") { 168 | errors.push(syn::parse::Error::new_spanned( 169 | attr.path(), 170 | "`#[collect]` is not suppported on enum variants", 171 | )); 172 | } 173 | } 174 | } 175 | } 176 | 177 | // We've already called `impl_struct.filter`, so we we won't try to include `NEEDS_TRACE` 178 | // for the types of fields that have `#[collect(require_static)]` 179 | for v in impl_struct.variants() { 180 | for b in v.bindings() { 181 | let ty = &b.ast().ty; 182 | // Resolving the span at the call site makes rustc emit a 'the error originates a 183 | // derive macro note' We only use this span on tokens that need to resolve to items 184 | // (e.g. `gc_arena::Collect`), so this won't cause any hygiene issues 185 | let call_span = b.ast().span().resolved_at(Span::call_site()); 186 | quote_spanned!(call_span=> 187 | || <#ty as ::gc_arena::Collect>::NEEDS_TRACE 188 | ) 189 | .to_tokens(&mut needs_trace_expr); 190 | } 191 | } 192 | // Likewise, this will skip any fields that have `#[collect(require_static)]` 193 | let trace_body = impl_struct.each(|bi| { 194 | // See the above handling of `NEEDS_TRACE` for an explanation of this 195 | let call_span = bi.ast().span().resolved_at(Span::call_site()); 196 | quote_spanned!(call_span=> 197 | { 198 | // Use a temporary variable to ensure that all tokens in the call to 199 | // `gc_arena::Collect::trace` have the same hygiene information. If we used 200 | // #bi directly, then we would have a mix of hygiene contexts, which would 201 | // cause rustc to produce sub-optimal error messagse due to its inability to 202 | // merge the spans. This is purely for diagnostic purposes, and has no effect 203 | // on correctness 204 | let bi = #bi; 205 | cc.trace(bi); 206 | } 207 | ) 208 | }); 209 | 210 | // If we have no configured `'gc` lifetime and the type has a *single* generic lifetime, use 211 | // that one. 212 | if gc_lifetime.is_none() { 213 | let mut all_lifetimes = 214 | impl_struct 215 | .ast() 216 | .generics 217 | .params 218 | .iter() 219 | .filter_map(|p| match p { 220 | syn::GenericParam::Lifetime(lt) => Some(lt), 221 | _ => None, 222 | }); 223 | 224 | if let Some(lt) = all_lifetimes.next() { 225 | if all_lifetimes.next().is_none() { 226 | gc_lifetime = Some(lt.lifetime.clone()); 227 | } else { 228 | panic!("deriving `Collect` on a type with multiple lifetime parameters requires a `#[collect(gc_lifetime = ...)]` attribute"); 229 | } 230 | } 231 | }; 232 | 233 | if override_bound.is_some() { 234 | impl_struct.add_bounds(AddBounds::None); 235 | } else { 236 | impl_struct.add_bounds(AddBounds::Generics); 237 | }; 238 | 239 | if let Some(gc_lifetime) = gc_lifetime { 240 | impl_struct.gen_impl(quote! { 241 | gen unsafe impl ::gc_arena::Collect<#gc_lifetime> for @Self #where_clause { 242 | const NEEDS_TRACE: bool = #needs_trace_expr; 243 | 244 | #[inline] 245 | fn trace>(&self, cc: &mut Trace) { 246 | match *self { #trace_body } 247 | } 248 | } 249 | }) 250 | } else { 251 | impl_struct.gen_impl(quote! 
{ 252 | gen unsafe impl<'gc> ::gc_arena::Collect<'gc> for @Self #where_clause { 253 | const NEEDS_TRACE: bool = #needs_trace_expr; 254 | 255 | #[inline] 256 | fn trace>(&self, cc: &mut Trace) { 257 | match *self { #trace_body } 258 | } 259 | } 260 | }) 261 | } 262 | }; 263 | 264 | let drop_impl = if mode == Mode::NoDrop { 265 | let mut drop_struct = s.clone(); 266 | drop_struct.add_bounds(AddBounds::None).gen_impl(quote! { 267 | gen impl ::gc_arena::__MustNotImplDrop for @Self {} 268 | }) 269 | } else { 270 | quote!() 271 | }; 272 | 273 | let errors = errors.into_iter().map(|e| e.to_compile_error()); 274 | quote! { 275 | #collect_impl 276 | #drop_impl 277 | #(#errors)* 278 | } 279 | } 280 | 281 | decl_derive! { 282 | [Collect, attributes(collect)] => 283 | /// Derives the `Collect` trait needed to trace a gc type. 284 | /// 285 | /// To derive `Collect`, an additional attribute is required on the struct/enum called 286 | /// `collect`. This has several optional arguments, but the only required argument is the derive 287 | /// strategy. This can be one of 288 | /// 289 | /// - `#[collect(require_static)]` - Adds a `'static` bound, which allows for a no-op trace 290 | /// implementation. This is the ideal choice where possible. 291 | /// - `#[collect(no_drop)]` - The typical safe tracing derive strategy which only has to add a 292 | /// requirement that your struct/enum does not have a custom implementation of `Drop`. 293 | /// - `#[collect(unsafe_drop)]` - The most versatile tracing derive strategy which allows a 294 | /// custom drop implementation. However, this strategy can lead to unsoundness if care is not 295 | /// taken (see the above explanation of `Drop` interactions). 296 | /// 297 | /// The `collect` attribute also accepts a number of optional configuration settings: 298 | /// 299 | /// - `#[collect(bound = "")]` - Replaces the default generated `where` clause with the 300 | /// given code. This can be an empty string to add no `where` clause, or otherwise must start 301 | /// with `"where"`, e.g., `#[collect(bound = "where T: Collect")]`. Note that this option is 302 | /// ignored for `require_static` mode since the only bound it produces is `Self: 'static`. 303 | /// Also note that providing an explicit bound in this way is safe, and only changes the trait 304 | /// bounds used to enable the implementation of `Collect`. 305 | /// 306 | /// - `#[collect(gc_lifetime = "")]` - the `Collect` trait requires a `'gc` lifetime 307 | /// parameter. If there is no lifetime parameter on the type, then `Collect` will be 308 | /// implemented for all `'gc` lifetimes. If there is one lifetime on the type, this is assumed 309 | /// to be the `'gc` lifetime. In the very unusual case that there are two or more lifetime 310 | /// parameters, you must specify *which* lifetime should be used as the `'gc` lifetime. 311 | /// 312 | /// Options may be passed to the `collect` attribute together, e.g., 313 | /// `#[collect(no_drop, bound = "")]`. 314 | /// 315 | /// The `collect` attribute may also be used on any field of an enum or struct, however the 316 | /// only allowed usage is to specify the strategy as `require_static` (no other strategies are 317 | /// allowed, and no optional settings can be specified). This will add a `'static` bound to the 318 | /// type of the field (regardless of an explicit `bound` setting) in exchange for not having 319 | /// to trace into the given field (the ideal choice where possible). 
Note that if the entire 320 | /// struct/enum is marked with `require_static` then this is unnecessary. 321 | collect_derive 322 | } 323 | 324 | // Not public API; implementation detail of `gc_arena::Rootable!`. 325 | // Replaces all `'_` lifetimes in a type by the specified named lifetime. 326 | // Syntax: `__unelide_lifetimes!('lt; SomeType)`. 327 | #[doc(hidden)] 328 | #[proc_macro] 329 | pub fn __unelide_lifetimes(input: proc_macro::TokenStream) -> proc_macro::TokenStream { 330 | struct Input { 331 | lt: syn::Lifetime, 332 | ty: syn::Type, 333 | } 334 | 335 | impl Parse for Input { 336 | fn parse(input: ParseStream) -> syn::Result { 337 | let lt: syn::Lifetime = input.parse()?; 338 | let _: syn::Token!(;) = input.parse()?; 339 | let ty: syn::Type = input.parse()?; 340 | Ok(Self { lt, ty }) 341 | } 342 | } 343 | 344 | struct UnelideLifetimes(syn::Lifetime); 345 | 346 | impl VisitMut for UnelideLifetimes { 347 | fn visit_lifetime_mut(&mut self, i: &mut syn::Lifetime) { 348 | if i.ident == "_" { 349 | *i = self.0.clone(); 350 | } 351 | } 352 | } 353 | 354 | let mut input = syn::parse_macro_input!(input as Input); 355 | UnelideLifetimes(input.lt).visit_type_mut(&mut input.ty); 356 | input.ty.to_token_stream().into() 357 | } 358 | -------------------------------------------------------------------------------- /examples/linked_list.rs: -------------------------------------------------------------------------------- 1 | use gc_arena::{Arena, Collect, Gc, Mutation, RefLock, Rootable}; 2 | 3 | // We define a node of a doubly-linked list data structure. 4 | // 5 | // `Collect` is derived procedurally, meaning that we can't mess up and forget 6 | // to trace our inner `prev`, `next`, or `value`. 7 | #[derive(Copy, Clone, Collect)] 8 | // For safety, we agree to not implement `Drop`. We could also use 9 | // `#[collect(unsafe_drop)]` or `#[collect(require_static)]` (if our type were 10 | // 'static) here instead. 11 | #[collect(no_drop)] 12 | struct Node<'gc, T: 'gc> { 13 | // The representation of the `prev` and `next` fields is a plain machine 14 | // pointer that might be NULL. 15 | // 16 | // Thanks, niche optimization! 17 | prev: Option>, 18 | next: Option>, 19 | value: T, 20 | } 21 | 22 | // By default, `Collect` types (other than 'static types) cannot have interior 23 | // mutability. In order to provide safe mutation, we need to use `gc-arena` 24 | // specific types to provide it which guarantee that write barriers are invoked. 25 | // 26 | // We use `RefLock` here as an alternative to `RefCell`. 27 | type NodePtr<'gc, T> = Gc<'gc, RefLock>>; 28 | 29 | // Create a new `Node` and return a pointer to it. 30 | // 31 | // We need to pass the `&Mutation<'gc>` context here because we are mutating the 32 | // object graph (by creating a new "object" with `Gc::new`). 33 | fn new_node<'gc, T: Collect<'gc>>(mc: &Mutation<'gc>, value: T) -> NodePtr<'gc, T> { 34 | Gc::new( 35 | mc, 36 | RefLock::new(Node { 37 | prev: None, 38 | next: None, 39 | value, 40 | }), 41 | ) 42 | } 43 | 44 | // Join two nodes together, setting the `left` node's `next` field to `right`, 45 | // and the `right` node's `prev` field to `left`. 46 | // 47 | // Again, we are mutating the object graph, so we must pass in the 48 | // `&Mutation<'gc>` context. 49 | fn node_join<'gc, T>(mc: &Mutation<'gc>, left: NodePtr<'gc, T>, right: NodePtr<'gc, T>) { 50 | // This is `Gc>::borrow_mut`, which takes the mutation context as 51 | // a parameter. 
Write barriers will always be invoked on the target pointer, 52 | // so we know it is safe to mutate the value behind the pointer. 53 | left.borrow_mut(mc).next = Some(right); 54 | right.borrow_mut(mc).prev = Some(left); 55 | } 56 | 57 | // Use a `NodePtr` as a cursor, move forward through a linked list by following 58 | // `next` pointers. 59 | // 60 | // Returns `true` if there was a `next` pointer and the target node has been 61 | // changed. 62 | fn node_rotate_right<'gc, T>(node: &mut NodePtr<'gc, T>) -> bool { 63 | if let Some(next) = node.borrow().next { 64 | *node = next; 65 | true 66 | } else { 67 | false 68 | } 69 | } 70 | 71 | // Use a `NodePtr` as a cursor, move backward through a linked list by following 72 | // `prev` pointers. 73 | // 74 | // Returns `true` if there was a `prev` pointer and the target node has been 75 | // changed. 76 | fn node_rotate_left<'gc, T>(node: &mut NodePtr<'gc, T>) -> bool { 77 | if let Some(prev) = node.borrow().prev { 78 | *node = prev; 79 | true 80 | } else { 81 | false 82 | } 83 | } 84 | 85 | fn main() { 86 | // Create a new arena with a single `NodePtr<'_, i32>` as the root type. 87 | // 88 | // We can't refer to some *particular* `NodePtr<'gc, i32>`, what we need to 89 | // be able to refer to is a set of `NodePtr<'_, i32>` for any possible '_ 90 | // that we might pick. We use gc-arena's `Rootable!` macro for this. 91 | let mut arena = Arena::]>::new(|mc| { 92 | // Create a simple linked list with three links. 93 | // 94 | // 1 <-> 2 <-> 3 <-> 4 95 | 96 | let one = new_node(mc, 1); 97 | let two = new_node(mc, 2); 98 | let three = new_node(mc, 3); 99 | let four = new_node(mc, 4); 100 | 101 | node_join(mc, one, two); 102 | node_join(mc, two, three); 103 | node_join(mc, three, four); 104 | 105 | // We return the pointer to 1 as our root 106 | one 107 | }); 108 | 109 | // Outside of a call to `Arena::new` or `Arena::mutate`, we have no access 110 | // to anything *inside* the arena. We have to *visit* the arena with one of 111 | // the mutation methods in order to access its interior. 112 | 113 | arena.mutate_root(|_, root| { 114 | // We can examine the root type and see that our linked list is still 115 | // [1, 2, 3, 4] 116 | for i in 1..=4 { 117 | assert_eq!(root.borrow().value, i); 118 | node_rotate_right(root); 119 | } 120 | }); 121 | 122 | arena.mutate_root(|_, root| { 123 | // Also, all of the reverse links work too. 124 | for i in (1..=4).rev() { 125 | assert_eq!(root.borrow().value, i); 126 | node_rotate_left(root); 127 | } 128 | }); 129 | 130 | arena.mutate(|mc, root| { 131 | // Make the list circular! We sever the connection to 4 and link 3 back 132 | // to 1 making a list like this... 133 | // 134 | // +-> 1 <-> 2 <-> 3 <-+ 135 | // | | 136 | // +-------------------+ 137 | let one = *root; 138 | let two = one.borrow().next.unwrap(); 139 | let three = two.borrow().next.unwrap(); 140 | node_join(mc, three, one); 141 | }); 142 | 143 | // The node for 4 is now unreachable! 144 | // 145 | // It can be freed during collection, but collection does not happen 146 | // automatically. We have to trigger collection *outside* of a mutation 147 | // method. 148 | // 149 | // The `Arena::finish_cycle` method runs a full collection cycle, but this 150 | // is not the only way to trigger collection. 151 | // 152 | // `gc-arena` is an incremental collector, and so keeps track of "debt" 153 | // during the GC cycle, pacing the collector based on the rate and size of 154 | // new allocations. 
155 | // 156 | // We can also call `Arena::collect_debt` to do a *bit* of collection at a 157 | // time, based on the current collector debt. 158 | // 159 | // Since the collector has not yet started its marking phase, calling this 160 | // will fully mark the arena and collect all the garbage, so this method 161 | // will always free the 4 node. 162 | arena.finish_cycle(); 163 | 164 | arena.mutate_root(|_, root| { 165 | // Now we can see that if we rotate through our circular list, we will 166 | // get: 167 | // 168 | // 1 -> 2 -> 3 -> 1 -> 2 -> 3 169 | for _ in 0..2 { 170 | for i in 1..=3 { 171 | assert_eq!(root.borrow().value, i); 172 | node_rotate_right(root); 173 | } 174 | } 175 | }); 176 | } 177 | -------------------------------------------------------------------------------- /src/allocator_api.rs: -------------------------------------------------------------------------------- 1 | use std::{alloc::Layout, marker::PhantomData, ptr::NonNull}; 2 | 3 | use allocator_api2::{ 4 | alloc::{AllocError, Allocator, Global}, 5 | boxed, vec, 6 | }; 7 | 8 | use crate::{ 9 | collect::{Collect, Trace}, 10 | context::Mutation, 11 | metrics::Metrics, 12 | types::Invariant, 13 | }; 14 | 15 | #[derive(Clone)] 16 | pub struct MetricsAlloc<'gc, A = Global> { 17 | metrics: Metrics, 18 | allocator: A, 19 | _marker: Invariant<'gc>, 20 | } 21 | 22 | impl<'gc> MetricsAlloc<'gc> { 23 | #[inline] 24 | pub fn new(mc: &Mutation<'gc>) -> Self { 25 | Self::new_in(mc, Global) 26 | } 27 | 28 | /// `MetricsAlloc` is normally branded with the `'gc` branding lifetime to ensure that it is not 29 | /// placed in the wrong arena or used outside of the enclosing arena. 30 | /// 31 | /// This is actually completely artificial and only used as a lint: `gc_arena::metrics::Metrics` 32 | /// has no lifetime at all. Therefore, we can safely provide a method that returns a 33 | /// `MetricsAlloc` with an arbitrary lifetime. 34 | /// 35 | /// NOTE: Use `MetricsAlloc::new` if at all possible, because it is harder to misuse. 
36 | #[inline] 37 | pub fn from_metrics(metrics: Metrics) -> Self { 38 | Self::from_metrics_in(metrics, Global) 39 | } 40 | } 41 | 42 | impl<'gc, A> MetricsAlloc<'gc, A> { 43 | #[inline] 44 | pub fn new_in(mc: &Mutation<'gc>, allocator: A) -> Self { 45 | Self { 46 | metrics: mc.metrics().clone(), 47 | allocator, 48 | _marker: PhantomData, 49 | } 50 | } 51 | 52 | #[inline] 53 | pub fn from_metrics_in(metrics: Metrics, allocator: A) -> Self { 54 | Self { 55 | metrics, 56 | allocator, 57 | _marker: PhantomData, 58 | } 59 | } 60 | } 61 | 62 | unsafe impl<'gc, A: Allocator> Allocator for MetricsAlloc<'gc, A> { 63 | #[inline] 64 | fn allocate(&self, layout: Layout) -> Result, AllocError> { 65 | let ptr = self.allocator.allocate(layout)?; 66 | self.metrics.mark_external_allocation(layout.size()); 67 | Ok(ptr) 68 | } 69 | 70 | #[inline] 71 | unsafe fn deallocate(&self, ptr: NonNull, layout: Layout) { 72 | self.metrics.mark_external_deallocation(layout.size()); 73 | self.allocator.deallocate(ptr, layout); 74 | } 75 | 76 | #[inline] 77 | fn allocate_zeroed(&self, layout: Layout) -> Result, AllocError> { 78 | let ptr = self.allocator.allocate_zeroed(layout)?; 79 | self.metrics.mark_external_allocation(layout.size()); 80 | Ok(ptr) 81 | } 82 | 83 | #[inline] 84 | unsafe fn grow( 85 | &self, 86 | ptr: NonNull, 87 | old_layout: Layout, 88 | new_layout: Layout, 89 | ) -> Result, AllocError> { 90 | let ptr = self.allocator.grow(ptr, old_layout, new_layout)?; 91 | self.metrics 92 | .mark_external_allocation(new_layout.size() - old_layout.size()); 93 | Ok(ptr) 94 | } 95 | 96 | #[inline] 97 | unsafe fn grow_zeroed( 98 | &self, 99 | ptr: NonNull, 100 | old_layout: Layout, 101 | new_layout: Layout, 102 | ) -> Result, AllocError> { 103 | let ptr = self.allocator.grow_zeroed(ptr, old_layout, new_layout)?; 104 | self.metrics 105 | .mark_external_allocation(new_layout.size() - old_layout.size()); 106 | Ok(ptr) 107 | } 108 | 109 | #[inline] 110 | unsafe fn shrink( 111 | &self, 112 | ptr: NonNull, 113 | old_layout: Layout, 114 | new_layout: Layout, 115 | ) -> Result, AllocError> { 116 | let ptr = self.allocator.shrink(ptr, old_layout, new_layout)?; 117 | self.metrics 118 | .mark_external_deallocation(old_layout.size() - new_layout.size()); 119 | Ok(ptr) 120 | } 121 | } 122 | 123 | unsafe impl<'gc, A: 'static> Collect<'gc> for MetricsAlloc<'gc, A> { 124 | const NEEDS_TRACE: bool = false; 125 | } 126 | 127 | unsafe impl<'gc> Collect<'gc> for Global { 128 | const NEEDS_TRACE: bool = false; 129 | } 130 | 131 | unsafe impl<'gc, T, A> Collect<'gc> for boxed::Box 132 | where 133 | T: Collect<'gc> + ?Sized, 134 | A: Collect<'gc> + Allocator, 135 | { 136 | const NEEDS_TRACE: bool = T::NEEDS_TRACE || A::NEEDS_TRACE; 137 | 138 | #[inline] 139 | fn trace>(&self, cc: &mut C) { 140 | cc.trace(&**self); 141 | cc.trace(boxed::Box::allocator(self)); 142 | } 143 | } 144 | 145 | unsafe impl<'gc, T: Collect<'gc>, A: Collect<'gc> + Allocator> Collect<'gc> for vec::Vec { 146 | const NEEDS_TRACE: bool = T::NEEDS_TRACE || A::NEEDS_TRACE; 147 | 148 | #[inline] 149 | fn trace>(&self, cc: &mut C) { 150 | for v in self { 151 | cc.trace(v); 152 | } 153 | cc.trace(self.allocator()); 154 | } 155 | } 156 | -------------------------------------------------------------------------------- /src/arena.rs: -------------------------------------------------------------------------------- 1 | use alloc::boxed::Box; 2 | use core::marker::PhantomData; 3 | 4 | use crate::{ 5 | context::{Context, Finalization, Mutation, Phase, RunUntil, Stop}, 6 | 
metrics::Metrics, 7 | Collect, 8 | }; 9 | 10 | /// A trait that produces a [`Collect`]-able type for the given lifetime. This is used to produce 11 | /// the root [`Collect`] instance in an [`Arena`]. 12 | /// 13 | /// In order to use an implementation of this trait in an [`Arena`], it must implement 14 | /// `Rootable<'a>` for *any* possible `'a`. This is necessary so that the `Root` types can be 15 | /// branded by the unique, invariant lifetimes that makes an `Arena` sound. 16 | pub trait Rootable<'a> { 17 | type Root: ?Sized + 'a; 18 | } 19 | 20 | /// A marker type used by the `Rootable!` macro instead of a bare trait object. 21 | /// 22 | /// Prevents having to include extra ?Sized bounds on every `for<'a> Rootable<'a>`. 23 | #[doc(hidden)] 24 | pub struct __DynRootable(PhantomData); 25 | 26 | impl<'a, T: ?Sized + Rootable<'a>> Rootable<'a> for __DynRootable { 27 | type Root = >::Root; 28 | } 29 | 30 | /// A convenience macro for quickly creating a type that implements `Rootable`. 31 | /// 32 | /// The macro takes a single argument, which should be a generic type with elided lifetimes. 33 | /// When used as a root object, every instance of the elided lifetime will be replaced with 34 | /// the branding lifetime. 35 | /// 36 | /// ``` 37 | /// # use gc_arena::{Arena, Collect, Gc, Rootable}; 38 | /// # 39 | /// # fn main() { 40 | /// #[derive(Collect)] 41 | /// #[collect(no_drop)] 42 | /// struct MyRoot<'gc> { 43 | /// ptr: Gc<'gc, i32>, 44 | /// } 45 | /// 46 | /// type MyArena = Arena]>; 47 | /// 48 | /// // If desired, the branding lifetime can also be explicitely named: 49 | /// type MyArena2 = Arena MyRoot<'gc>]>; 50 | /// # } 51 | /// ``` 52 | /// 53 | /// The macro can also be used to create implementations of `Rootable` that use other generic 54 | /// parameters, though in complex cases it may be better to implement `Rootable` directly. 55 | /// 56 | /// ``` 57 | /// # use gc_arena::{Arena, Collect, Gc, Rootable}; 58 | /// # 59 | /// # fn main() { 60 | /// #[derive(Collect)] 61 | /// #[collect(no_drop)] 62 | /// struct MyGenericRoot<'gc, T> { 63 | /// ptr: Gc<'gc, T>, 64 | /// } 65 | /// 66 | /// type MyGenericArena = Arena]>; 67 | /// # } 68 | /// ``` 69 | #[macro_export] 70 | macro_rules! Rootable { 71 | ($gc:lifetime => $root:ty) => { 72 | // Instead of generating an impl of `Rootable`, we use a trait object. Thus, we avoid the 73 | // need to generate a new type for each invocation of this macro. 74 | $crate::__DynRootable:: $crate::Rootable<$gc, Root = $root>> 75 | }; 76 | ($root:ty) => { 77 | $crate::Rootable!['__gc => $crate::__unelide_lifetimes!('__gc; $root)] 78 | }; 79 | } 80 | 81 | /// A helper type alias for a `Rootable::Root` for a specific lifetime. 82 | pub type Root<'a, R> = >::Root; 83 | 84 | #[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd)] 85 | pub enum CollectionPhase { 86 | /// The arena is done with a collection cycle and is waiting to be restarted. 87 | Sleeping, 88 | /// The arena is currently tracing objects from the root to determine reachability. 89 | Marking, 90 | /// The arena has finished tracing, all reachable objects are marked. This may transition 91 | /// back to `Marking` if write barriers occur. 92 | Marked, 93 | /// The arena has determined a set of unreachable objects and has started freeing them. At this 94 | /// point, marking is no longer taking place so the root may have reachable, unmarked pointers. 95 | Sweeping, 96 | } 97 | 98 | /// A generic, garbage collected arena. 
99 | /// 100 | /// Garbage collected arenas allow for isolated sets of garbage collected objects with zero-overhead 101 | /// garbage collected pointers. It provides incremental mark and sweep garbage collection which 102 | /// must be manually triggered outside the `mutate` method, and works best when units of work inside 103 | /// `mutate` can be kept relatively small. It is designed primarily to be a garbage collector for 104 | /// scripting language runtimes. 105 | /// 106 | /// The arena API is able to provide extremely cheap Gc pointers because it is based around 107 | /// "generativity". During construction and access, the root type is branded by a unique, invariant 108 | /// lifetime `'gc` which ensures that `Gc` pointers must be contained inside the root object 109 | /// hierarchy and cannot escape the arena callbacks or be smuggled inside another arena. This way, 110 | /// the arena can be sure that during mutation, all `Gc` pointers come from the arena we expect 111 | /// them to come from, and that they're all either reachable from root or have been allocated during 112 | /// the current `mutate` call. When not inside the `mutate` callback, the arena knows that all `Gc` 113 | /// pointers must be either reachable from root or they are unreachable and safe to collect. In 114 | /// this way, incremental garbage collection can be achieved (assuming "sufficiently small" calls 115 | /// to `mutate`) that is both extremely safe and zero overhead vs what you would write in C with raw 116 | /// pointers and manually ensuring that invariants are held. 117 | pub struct Arena 118 | where 119 | R: for<'a> Rootable<'a>, 120 | { 121 | context: Box, 122 | root: Root<'static, R>, 123 | } 124 | 125 | impl Arena 126 | where 127 | R: for<'a> Rootable<'a>, 128 | for<'a> Root<'a, R>: Sized, 129 | { 130 | /// Create a new arena with the given garbage collector tuning parameters. You must provide a 131 | /// closure that accepts a `&Mutation<'gc>` and returns the appropriate root. 132 | pub fn new(f: F) -> Arena 133 | where 134 | F: for<'gc> FnOnce(&'gc Mutation<'gc>) -> Root<'gc, R>, 135 | { 136 | unsafe { 137 | let context = Box::new(Context::new()); 138 | // Note - we cast the `&Mutation` to a `'static` lifetime here, 139 | // instead of transmuting the root type returned by `f`. Transmuting the root 140 | // type is allowed in nightly versions of rust 141 | // (see https://github.com/rust-lang/rust/pull/101520#issuecomment-1252016235) 142 | // but is not yet stable. Casting the `&Mutation` is completely invisible 143 | // to the callback `f` (since it needs to handle an arbitrary lifetime), 144 | // and lets us stay compatible with older versions of Rust 145 | let mc: &'static Mutation<'_> = &*(context.mutation_context() as *const _); 146 | let root: Root<'static, R> = f(mc); 147 | Arena { context, root } 148 | } 149 | } 150 | 151 | /// Similar to `new`, but allows for constructor that can fail. 
152 | pub fn try_new(f: F) -> Result, E> 153 | where 154 | F: for<'gc> FnOnce(&'gc Mutation<'gc>) -> Result, E>, 155 | { 156 | unsafe { 157 | let context = Box::new(Context::new()); 158 | let mc: &'static Mutation<'_> = &*(context.mutation_context() as *const _); 159 | let root: Root<'static, R> = f(mc)?; 160 | Ok(Arena { context, root }) 161 | } 162 | } 163 | 164 | #[inline] 165 | pub fn map_root( 166 | mut self, 167 | f: impl for<'gc> FnOnce(&'gc Mutation<'gc>, Root<'gc, R>) -> Root<'gc, R2>, 168 | ) -> Arena 169 | where 170 | R2: for<'a> Rootable<'a>, 171 | for<'a> Root<'a, R2>: Sized, 172 | { 173 | self.context.root_barrier(); 174 | let new_root: Root<'static, R2> = unsafe { 175 | let mc: &'static Mutation<'_> = &*(self.context.mutation_context() as *const _); 176 | f(mc, self.root) 177 | }; 178 | Arena { 179 | context: self.context, 180 | root: new_root, 181 | } 182 | } 183 | 184 | #[inline] 185 | pub fn try_map_root( 186 | mut self, 187 | f: impl for<'gc> FnOnce(&'gc Mutation<'gc>, Root<'gc, R>) -> Result, E>, 188 | ) -> Result, E> 189 | where 190 | R2: for<'a> Rootable<'a>, 191 | for<'a> Root<'a, R2>: Sized, 192 | { 193 | self.context.root_barrier(); 194 | let new_root: Root<'static, R2> = unsafe { 195 | let mc: &'static Mutation<'_> = &*(self.context.mutation_context() as *const _); 196 | f(mc, self.root)? 197 | }; 198 | Ok(Arena { 199 | context: self.context, 200 | root: new_root, 201 | }) 202 | } 203 | } 204 | 205 | impl Arena 206 | where 207 | R: for<'a> Rootable<'a>, 208 | { 209 | /// The primary means of interacting with a garbage collected arena. Accepts a callback which 210 | /// receives a `&Mutation<'gc>` and a reference to the root, and can return any non garbage 211 | /// collected value. The callback may "mutate" any part of the object graph during this call, 212 | /// but no garbage collection will take place during this method. 213 | #[inline] 214 | pub fn mutate(&self, f: F) -> T 215 | where 216 | F: for<'gc> FnOnce(&'gc Mutation<'gc>, &'gc Root<'gc, R>) -> T, 217 | { 218 | unsafe { 219 | let mc: &'static Mutation<'_> = &*(self.context.mutation_context() as *const _); 220 | let root: &'static Root<'_, R> = &*(&self.root as *const _); 221 | f(mc, root) 222 | } 223 | } 224 | 225 | /// An alternative version of [`Arena::mutate`] which allows mutating the root set, at the 226 | /// cost of an extra write barrier. 227 | #[inline] 228 | pub fn mutate_root(&mut self, f: F) -> T 229 | where 230 | F: for<'gc> FnOnce(&'gc Mutation<'gc>, &'gc mut Root<'gc, R>) -> T, 231 | { 232 | self.context.root_barrier(); 233 | unsafe { 234 | let mc: &'static Mutation<'_> = &*(self.context.mutation_context() as *const _); 235 | let root: &'static mut Root<'_, R> = &mut *(&mut self.root as *mut _); 236 | f(mc, root) 237 | } 238 | } 239 | 240 | #[inline] 241 | pub fn metrics(&self) -> &Metrics { 242 | self.context.metrics() 243 | } 244 | 245 | #[inline] 246 | pub fn collection_phase(&self) -> CollectionPhase { 247 | match self.context.phase() { 248 | Phase::Mark => { 249 | if self.context.gray_remaining() { 250 | CollectionPhase::Marking 251 | } else { 252 | CollectionPhase::Marked 253 | } 254 | } 255 | Phase::Sweep => CollectionPhase::Sweeping, 256 | Phase::Sleep => CollectionPhase::Sleeping, 257 | Phase::Drop => unreachable!(), 258 | } 259 | } 260 | } 261 | 262 | impl Arena 263 | where 264 | R: for<'a> Rootable<'a>, 265 | for<'a> Root<'a, R>: Collect<'a>, 266 | { 267 | /// Run incremental garbage collection until the allocation debt is zero. 
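A usage sketch contrasting `mutate` and `mutate_root`, using a bare `Gc<'gc, i32>` as the root for brevity:

```rust
use gc_arena::{Arena, Gc, Rootable};

fn mutate_examples() {
    // A root can be any `Collect` type; a bare `Gc<'gc, i32>` works for a demo.
    let mut arena = Arena::<Rootable![Gc<'_, i32>]>::new(|mc| Gc::new(mc, 1));

    // `mutate` gives shared access to the root and may return any non-GC value.
    let current: i32 = arena.mutate(|_mc, root| **root);

    // `mutate_root` additionally allows replacing (parts of) the root itself,
    // at the cost of an extra root write barrier.
    arena.mutate_root(|mc, root| *root = Gc::new(mc, current + 1));

    assert_eq!(arena.mutate(|_mc, root| **root), 2);
}
```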
268 | /// 269 | /// This will run through ALL phases of the collection cycle until the debt is zero, including 270 | /// implicitly finishing the current cycle and starting a new one (transitioning from 271 | /// [`CollectionPhase::Sweeping`] to [`CollectionPhase::Sleeping`]). Since this method runs 272 | /// until debt is zero with no guaranteed return at any specific transition, you may need to use 273 | /// other methods like [`Arena::mark_debt`] and [`Arena::cycle_debt`] if you need to keep close 274 | /// track of the current collection phase. 275 | /// 276 | /// There is no minimum unit of work enforced here, so it may be faster to only call this method 277 | /// when the allocation debt is above some minimum threshold. 278 | #[inline] 279 | pub fn collect_debt(&mut self) { 280 | unsafe { 281 | self.context 282 | .do_collection(&self.root, RunUntil::PayDebt, Stop::Full); 283 | } 284 | } 285 | 286 | /// Run only the *marking* part of incremental garbage collection until allocation debt is zero. 287 | /// 288 | /// This does *not* transition collection past the [`CollectionPhase::Marked`] 289 | /// phase. Does nothing if the collection phase is [`CollectionPhase::Marked`] or 290 | /// [`CollectionPhase::Sweeping`], otherwise acts like [`Arena::collect_debt`]. 291 | /// 292 | /// If this method stops because the arena is now fully marked (the collection phase is 293 | /// [`CollectionPhase::Marked`]), then a [`MarkedArena`] object will be returned to allow 294 | /// you to examine the state of the fully marked arena. 295 | #[inline] 296 | pub fn mark_debt(&mut self) -> Option> { 297 | unsafe { 298 | self.context 299 | .do_collection(&self.root, RunUntil::PayDebt, Stop::FullyMarked); 300 | } 301 | 302 | if self.context.phase() == Phase::Mark && !self.context.gray_remaining() { 303 | Some(MarkedArena(self)) 304 | } else { 305 | None 306 | } 307 | } 308 | 309 | /// Runs ALL of the remaining *marking* part of the current garbage collection cycle. 310 | /// 311 | /// Similarly to [`Arena::mark_debt`], this does not transition collection past the 312 | /// [`CollectionPhase::Marked`] phase, and does nothing if the collector is currently in the 313 | /// [`CollectionPhase::Marked`] phase or the [`CollectionPhase::Sweeping`] phase. 314 | /// 315 | /// This method will always fully mark the arena and return a [`MarkedArena`] object as long as 316 | /// the current phase is not [`CollectionPhase::Sweeping`]. 317 | #[inline] 318 | pub fn finish_marking(&mut self) -> Option> { 319 | unsafe { 320 | self.context 321 | .do_collection(&self.root, RunUntil::Stop, Stop::FullyMarked); 322 | } 323 | 324 | if self.context.phase() == Phase::Mark && !self.context.gray_remaining() { 325 | Some(MarkedArena(self)) 326 | } else { 327 | None 328 | } 329 | } 330 | 331 | /// Run the *current* collection cycle until the allocation debt is zero. 332 | /// 333 | /// This is nearly identical to the [`Arena::collect_debt`] method, except it 334 | /// *always* returns immediately when a cycle is finished (when phase transitions 335 | /// to [`CollectionPhase::Sleeping`]), and will never transition directly from 336 | /// [`CollectionPhase::Sweeping`] to [`CollectionPhase::Marking`] within a single call, even if 337 | /// there is enough outstanding debt to do so. 338 | /// 339 | /// This mostly only important when the user of an `Arena` needs to closely track collection 340 | /// phases, otherwise [`Arena::collect_debt`] simpler to use. 
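A sketch of the pacing pattern suggested above: small `mutate` calls interleaved with `collect_debt`, gated on the arena's allocation debt. The `1024.0` threshold is arbitrary, and `Metrics::allocation_debt` is assumed to be the public counterpart of the debt measure used internally by the collector.

```rust
use gc_arena::{Arena, Gc, Rootable};

// Run a simple workload, collecting incrementally between units of work.
fn run() {
    let mut arena = Arena::<Rootable![Gc<'_, i32>]>::new(|mc| Gc::new(mc, 0));

    for i in 0..1_000 {
        // Keep individual `mutate` calls small so collection can interleave.
        arena.mutate_root(|mc, root| *root = Gc::new(mc, i));

        // Only bother collecting once some allocation debt has accumulated;
        // `collect_debt` runs through all phases until the debt reaches zero.
        if arena.metrics().allocation_debt() > 1024.0 {
            arena.collect_debt();
        }
    }

    // Optionally finish the current cycle entirely before shutting down.
    arena.finish_cycle();
}
```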
341 | #[inline] 342 | pub fn cycle_debt(&mut self) { 343 | unsafe { 344 | self.context 345 | .do_collection(&self.root, RunUntil::PayDebt, Stop::FinishCycle); 346 | } 347 | } 348 | 349 | /// Run the current garbage collection cycle to completion, stopping once garbage collection 350 | /// has entered the [`CollectionPhase::Sleeping`] phase. If the collector is currently sleeping, 351 | /// then this restarts the collector and performs a full collection before transitioning back to 352 | /// the sleep phase. 353 | #[inline] 354 | pub fn finish_cycle(&mut self) { 355 | unsafe { 356 | self.context 357 | .do_collection(&self.root, RunUntil::Stop, Stop::FinishCycle); 358 | } 359 | } 360 | } 361 | 362 | pub struct MarkedArena<'a, R: for<'b> Rootable<'b>>(&'a mut Arena); 363 | 364 | impl<'a, R> MarkedArena<'a, R> 365 | where 366 | R: for<'b> Rootable<'b>, 367 | for<'b> Root<'b, R>: Collect<'b>, 368 | { 369 | /// Examine the state of a fully marked arena. 370 | /// 371 | /// Allows you to determine whether `GcWeak` pointers are "dead" (aka, soon-to-be-dropped) and 372 | /// potentially resurrect them for this cycle. 373 | /// 374 | /// Note that the arena is guaranteed to be *fully marked* only at the *beginning* of this 375 | /// callback, any mutation that resurrects a pointer or triggers a write barrier can immediately 376 | /// invalidate this. 377 | #[inline] 378 | pub fn finalize(self, f: F) -> T 379 | where 380 | F: for<'gc> FnOnce(&'gc Finalization<'gc>, &'gc Root<'gc, R>) -> T, 381 | { 382 | unsafe { 383 | let mc: &'static Finalization<'_> = 384 | &*(self.0.context.finalization_context() as *const _); 385 | let root: &'static Root<'_, R> = &*(&self.0.root as *const _); 386 | f(mc, root) 387 | } 388 | } 389 | 390 | /// Immediately transition the arena out of [`CollectionPhase::Marked`] to 391 | /// [`CollectionPhase::Sweeping`]. 392 | #[inline] 393 | pub fn start_sweeping(self) { 394 | unsafe { 395 | self.0 396 | .context 397 | .do_collection(&self.0.root, RunUntil::Stop, Stop::AtSweep); 398 | } 399 | assert_eq!(self.0.context.phase(), Phase::Sweep); 400 | } 401 | } 402 | 403 | /// Create a temporary arena without a root object and perform the given operation on it. 404 | /// 405 | /// No garbage collection will be done until the very end of the call, at which point all 406 | /// allocations will be collected. 407 | /// 408 | /// This is a convenience function that makes it a little easier to quickly test code that uses 409 | /// `gc-arena`, it is not very useful on its own. 410 | pub fn rootless_mutate(f: F) -> R 411 | where 412 | F: for<'gc> FnOnce(&'gc Mutation<'gc>) -> R, 413 | { 414 | unsafe { 415 | let context = Context::new(); 416 | f(context.mutation_context()) 417 | } 418 | } 419 | -------------------------------------------------------------------------------- /src/barrier.rs: -------------------------------------------------------------------------------- 1 | //! Write barrier management. 2 | 3 | use core::mem; 4 | use core::ops::{Deref, DerefMut}; 5 | 6 | #[cfg(doc)] 7 | use crate::Gc; 8 | 9 | /// An (interiorly-)mutable reference inside a GC'd object graph. 10 | /// 11 | /// This type can only exist behind a reference; it is typically obtained by calling 12 | /// [`Gc::write`] on a [`Gc`] pointer or by using the [`field!`] projection macro 13 | /// on a pre-existing `&Write`. 
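A sketch of the finalization flow described above (`finish_marking`, then `finalize`, then `start_sweeping`). The dead-pointer queries on `GcWeak` mentioned in the `finalize` docs are elided here, so the closure body is deliberately trivial.

```rust
use gc_arena::{Arena, Gc, Rootable};

fn finalize_pass(arena: &mut Arena<Rootable![Gc<'_, i32>]>) {
    // `finish_marking` runs the rest of the mark phase; it returns `Some`
    // unless the collector is already sweeping.
    if let Some(marked) = arena.finish_marking() {
        marked.finalize(|_fc, root| {
            // `_fc` derefs to `&Mutation`, so arbitrary mutation (and pointer
            // resurrection through it) is allowed here; this sketch just reads.
            let _ = **root;
        });

        // `finalize` consumed the first `MarkedArena`, so re-mark before
        // explicitly transitioning into the sweep phase.
        if let Some(marked) = arena.finish_marking() {
            marked.start_sweeping();
        }
    }
}
```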
14 | #[non_exhaustive] 15 | #[repr(transparent)] 16 | pub struct Write { 17 | // Public so that the `field!` macro can pattern-match on it; the `non_exhaustive` attribute 18 | // prevents 3rd-party code from instanciating the struct directly. 19 | #[doc(hidden)] 20 | pub __inner: T, 21 | } 22 | 23 | impl Deref for Write { 24 | type Target = T; 25 | 26 | #[inline(always)] 27 | fn deref(&self) -> &Self::Target { 28 | &self.__inner 29 | } 30 | } 31 | 32 | impl DerefMut for Write { 33 | #[inline(always)] 34 | fn deref_mut(&mut self) -> &mut Self::Target { 35 | &mut self.__inner 36 | } 37 | } 38 | 39 | impl Write { 40 | /// Asserts that the given reference can be safely written to. 41 | /// 42 | /// # Safety 43 | /// In order to maintain the invariants of the garbage collector, no new [`Gc`] pointers 44 | /// may be adopted by the referenced value as a result of the interior mutability enabled 45 | /// by this wrapper, unless [`Gc::write`] is invoked manually on the parent [`Gc`] 46 | /// pointer during the current arena callback. 47 | #[inline(always)] 48 | pub unsafe fn assume(v: &T) -> &Self { 49 | // SAFETY: `Self` is `repr(transparent)`. 50 | mem::transmute(v) 51 | } 52 | 53 | /// Gets a writable reference to non-GC'd data. 54 | /// 55 | /// This is safe, as `'static` types can never hold [`Gc`] pointers. 56 | #[inline] 57 | pub fn from_static(v: &T) -> &Self 58 | where 59 | T: 'static, 60 | { 61 | // SAFETY: `Self` is `repr(transparent)`. 62 | unsafe { mem::transmute(v) } 63 | } 64 | 65 | /// Gets a writable reference from a `&mut T`. 66 | /// 67 | /// This is safe, as exclusive access already implies writability. 68 | #[inline] 69 | pub fn from_mut(v: &mut T) -> &mut Self { 70 | // SAFETY: `Self` is `repr(transparent)`. 71 | unsafe { mem::transmute(v) } 72 | } 73 | 74 | /// Implementation detail of `write_field!`; same safety requirements as `assume`. 75 | #[inline(always)] 76 | #[doc(hidden)] 77 | pub unsafe fn __from_ref_and_ptr(v: &T, _: *const T) -> &Self { 78 | // SAFETY: `Self` is `repr(transparent)`. 79 | mem::transmute(v) 80 | } 81 | 82 | /// Unlocks the referenced value, providing full interior mutability. 83 | #[inline] 84 | pub fn unlock(&self) -> &T::Unlocked 85 | where 86 | T: Unlock, 87 | { 88 | // SAFETY: a `&Write` implies that a write barrier was triggered on the parent `Gc`. 89 | unsafe { self.__inner.unlock_unchecked() } 90 | } 91 | } 92 | 93 | impl Write> { 94 | /// Converts from `&Write>` to `Option<&Write>`. 95 | #[inline(always)] 96 | pub fn as_write(&self) -> Option<&Write> { 97 | // SAFETY: this is simple destructuring 98 | unsafe { 99 | match &self.__inner { 100 | None => None, 101 | Some(v) => Some(Write::assume(v)), 102 | } 103 | } 104 | } 105 | } 106 | 107 | impl Write> { 108 | /// Converts from `&Write>` to `Result<&Write, &Write>`. 109 | #[inline(always)] 110 | pub fn as_write(&self) -> Result<&Write, &Write> { 111 | // SAFETY: this is simple destructuring 112 | unsafe { 113 | match &self.__inner { 114 | Ok(v) => Ok(Write::assume(v)), 115 | Err(e) => Err(Write::assume(e)), 116 | } 117 | } 118 | } 119 | } 120 | 121 | /// Types that support additional operations (typically, mutation) when behind a write barrier. 122 | pub trait Unlock { 123 | /// This will typically be a cell-like type providing some sort of interior mutability. 124 | type Unlocked: ?Sized; 125 | 126 | /// Provides unsafe access to the unlocked type, *without* triggering a write barrier. 
127 | /// 128 | /// # Safety 129 | /// 130 | /// In order to maintain the invariants of the garbage collector, no new `Gc` pointers 131 | /// may be adopted by as a result of the interior mutability afforded by the unlocked value, 132 | /// unless the write barrier for the containing `Gc` pointer is invoked manually before 133 | /// collection is triggered. 134 | unsafe fn unlock_unchecked(&self) -> &Self::Unlocked; 135 | } 136 | 137 | /// Macro for named field projection behind [`Write`] references. 138 | /// 139 | /// # Usage 140 | /// 141 | /// ``` 142 | /// # use gc_arena::barrier::{field, Write}; 143 | /// struct Container { 144 | /// field: T, 145 | /// } 146 | /// 147 | /// fn project(v: &Write>) -> &Write { 148 | /// field!(v, Container, field) 149 | /// } 150 | /// ``` 151 | /// 152 | /// # Limitations 153 | /// 154 | /// This macro only support structs with named fields; tuples and enums aren't supported. 155 | #[doc(inline)] 156 | pub use crate::__field as field; 157 | 158 | // Actual macro item, hidden so that it doesn't show at the crate root. 159 | #[macro_export] 160 | #[doc(hidden)] 161 | macro_rules! __field { 162 | ($value:expr, $type:path, $field:ident) => { 163 | // SAFETY: 164 | // For this to be sound, we need to prevent deref coercions from happening, as they may 165 | // access nested `Gc` pointers, which would violate the write barrier invariant. This is 166 | // guaranteed as follows: 167 | // - the destructuring pattern, unlike a simple field access, cannot call `Deref`; 168 | // - similarly, the `__from_ref_and_ptr` method takes both a reference (for the lifetime) 169 | // and a pointer, causing a compilation failure if the first argument was coerced. 170 | match $value { 171 | $crate::barrier::Write { 172 | __inner: $type { ref $field, .. }, 173 | .. 174 | } => unsafe { $crate::barrier::Write::__from_ref_and_ptr($field, $field as *const _) }, 175 | } 176 | }; 177 | } 178 | 179 | /// Shorthand for [`field!`]`(...).`[`unlock()`](Write::unlock). 180 | #[doc(inline)] 181 | pub use crate::__unlock as unlock; 182 | 183 | // Actual macro item, hidden so that it doesn't show at the crate root. 184 | #[macro_export] 185 | #[doc(hidden)] 186 | macro_rules! __unlock { 187 | ($value:expr, $type:path, $field:ident) => { 188 | $crate::barrier::field!($value, $type, $field).unlock() 189 | }; 190 | } 191 | -------------------------------------------------------------------------------- /src/collect.rs: -------------------------------------------------------------------------------- 1 | use crate::{Gc, GcWeak}; 2 | 3 | pub use gc_arena_derive::Collect; 4 | 5 | /// A trait for garbage collected objects that can be placed into `Gc` pointers. This trait is 6 | /// unsafe, because `Gc` pointers inside an Arena are assumed never to be dangling, and in order to 7 | /// ensure this certain rules must be followed: 8 | /// 9 | /// 1. `Collect::trace` *must* trace over *every* `Gc` and `GcWeak` pointer held inside this type. 10 | /// 2. Held `Gc` and `GcWeak` pointers must not be accessed inside `Drop::drop` since during drop 11 | /// any such pointer may be dangling. 12 | /// 3. Internal mutability *must* not be used to adopt new `Gc` or `GcWeak` pointers without 13 | /// calling appropriate write barrier operations during the same arena mutation. 
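A sketch of the intended safe flow around the `field!`/`unlock!` macros above, assuming `Gc::write` and the `Lock` cell type from `gc_arena::lock` behave as in the released crate (a `Lock<T>` unlocking to a `Cell<T>`-like view):

```rust
use gc_arena::barrier::unlock;
use gc_arena::lock::Lock;
use gc_arena::{Collect, Gc, Mutation};

#[derive(Collect)]
#[collect(no_drop)]
struct Counter<'gc> {
    label: Gc<'gc, String>,
    hits: Lock<u64>,
}

fn bump<'gc>(mc: &Mutation<'gc>, counter: Gc<'gc, Counter<'gc>>) {
    // `Gc::write` performs the write barrier and hands back a `&Write<Counter>`.
    let w = Gc::write(mc, counter);
    // Project down to the `hits` field and unlock it for interior mutation.
    let hits = unlock!(w, Counter, hits);
    hits.set(hits.get() + 1);
}
```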
14 | /// 15 | /// It is, however, possible to implement this trait safely by procedurally deriving it (see 16 | /// [`gc_arena_derive::Collect`]), which requires that every field in the structure also implement 17 | /// `Collect`, and ensures that `Drop` cannot safely be implemented. Internally mutable types like 18 | /// `Cell` and `RefCell` do not implement `Collect` in such a way that it is possible to store 19 | /// `Gc` pointers inside them, so write barrier requirements cannot be broken when procedurally 20 | /// deriving `Collect`. A safe way of providing internal mutability in this case is to use 21 | /// [`crate::lock::Lock`] and [`crate::lock::RefLock`], which provides internal mutability 22 | /// while ensuring that write barriers are correctly executed. 23 | pub unsafe trait Collect<'gc> { 24 | /// As an optimization, if this type can never hold a `Gc` pointer and `trace` is unnecessary 25 | /// to call, you may set this to `false`. The default value is `true`, signaling that 26 | /// `Collect::trace` must be called. 27 | const NEEDS_TRACE: bool = true; 28 | 29 | /// *Must* call [`Trace::trace_gc`] (resp. [`Trace::trace_gc_weak`]) on all directly owned 30 | /// [`Gc`] (resp. [`GcWeak`]) pointers. If this type holds inner types that implement `Collect`, 31 | /// a valid implementation would simply call [`Trace::trace`] on all the held values to ensure 32 | /// this. 33 | /// 34 | /// # Tracing pointers 35 | /// 36 | /// [`Gc`] and [`GcWeak`] have their own implementations of `Collect` which in turn call 37 | /// [`Trace::trace_gc`] and [`Trace::trace_gc_weak`] respectively. Because of this, it is not 38 | /// actually ever necessary to call [`Trace::trace_gc`] and [`Trace::trace_gc_weak`] directly, 39 | /// but be careful! It is important that owned pointers *themselves* are traced and NOT their 40 | /// contents (the content type will usually also implement `Collect`, so this is easy to 41 | /// accidentally do). 42 | /// 43 | /// It is always okay to use the [`Trace::trace_gc`] and [`Trace::trace_gc_weak`] directly as a 44 | /// potentially less risky alternative when manually implementing `Collect`. 45 | #[inline] 46 | #[allow(unused_variables)] 47 | fn trace>(&self, cc: &mut T) {} 48 | } 49 | 50 | /// The trait that is passed to the [`Collect::trace`] method. 51 | /// 52 | /// Though [`Collect::trace`] is primarily used during the marking phase of the mark and sweep 53 | /// collector, this is a trait rather than a concrete type to facilitate other kinds of uses. 54 | /// 55 | /// This trait is not itself unsafe, but implementers of [`Collect`] *must* uphold the safety 56 | /// guarantees of [`Collect`] when using this trait. 57 | pub trait Trace<'gc> { 58 | /// Trace a [`Gc`] pointer (of any real type). 59 | fn trace_gc(&mut self, gc: Gc<'gc, ()>); 60 | 61 | /// Trace a [`GcWeak`] pointer (of any real type). 62 | fn trace_gc_weak(&mut self, gc: GcWeak<'gc, ()>); 63 | 64 | /// This is a convenience method that calls [`Collect::trace`] but automatically adds a 65 | /// [`Collect::NEEDS_TRACE`] check around it. 66 | /// 67 | /// Manual implementations of [`Collect`] are encouraged to use this to ensure that there 68 | /// is a [`Collect::NEEDS_TRACE`] check without having to implement it manually. This can be 69 | /// important in cases where the [`Collect::trace`] method impl is not `#[inline]` or does not 70 | /// have its own early exit. 71 | /// 72 | /// There is generally no need for custom `Trace` implementations to override this method. 
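A hand-written `Collect` impl mirroring the trait defined above, for a type that owns one `Gc` pointer plus plain data. This is a sketch: the import path for `Trace` is assumed to sit alongside `Collect` at the crate root and may need adjusting.

```rust
use gc_arena::{Collect, Gc, Trace};

struct Entry<'gc> {
    key: String,
    value: Gc<'gc, i32>,
}

// SAFETY: every owned `Gc` pointer is traced, there is no `Drop` impl, and no
// interior mutability is used to adopt new pointers.
unsafe impl<'gc> Collect<'gc> for Entry<'gc> {
    // A `Gc` is held, so tracing is required (this is also the default).
    const NEEDS_TRACE: bool = true;

    fn trace<T: Trace<'gc>>(&self, cc: &mut T) {
        // `String` has `NEEDS_TRACE = false`, so the first call is a no-op;
        // tracing the `Gc` field forwards to `Trace::trace_gc` internally.
        cc.trace(&self.key);
        cc.trace(&self.value);
    }
}
```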
73 | #[inline] 74 | fn trace + ?Sized>(&mut self, value: &C) 75 | where 76 | Self: Sized, 77 | { 78 | if C::NEEDS_TRACE { 79 | value.trace(self); 80 | } 81 | } 82 | } 83 | -------------------------------------------------------------------------------- /src/collect_impl.rs: -------------------------------------------------------------------------------- 1 | use alloc::boxed::Box; 2 | use alloc::collections::{BTreeMap, BTreeSet, BinaryHeap, LinkedList, VecDeque}; 3 | use alloc::rc::Rc; 4 | use alloc::string::String; 5 | use alloc::vec::Vec; 6 | use core::cell::{Cell, RefCell}; 7 | use core::marker::PhantomData; 8 | #[cfg(feature = "std")] 9 | use std::collections::{HashMap, HashSet}; 10 | 11 | use crate::collect::{Collect, Trace}; 12 | 13 | /// If a type is static, we know that it can never hold `Gc` pointers, so it is safe to provide a 14 | /// simple empty `Collect` implementation. 15 | #[macro_export] 16 | macro_rules! static_collect { 17 | ($type:ty) => { 18 | unsafe impl<'gc> Collect<'gc> for $type 19 | where 20 | $type: 'static, 21 | { 22 | const NEEDS_TRACE: bool = false; 23 | } 24 | }; 25 | } 26 | 27 | static_collect!(bool); 28 | static_collect!(char); 29 | static_collect!(u8); 30 | static_collect!(u16); 31 | static_collect!(u32); 32 | static_collect!(u64); 33 | static_collect!(usize); 34 | static_collect!(i8); 35 | static_collect!(i16); 36 | static_collect!(i32); 37 | static_collect!(i64); 38 | static_collect!(isize); 39 | static_collect!(f32); 40 | static_collect!(f64); 41 | static_collect!(String); 42 | static_collect!(str); 43 | static_collect!(alloc::ffi::CString); 44 | static_collect!(core::ffi::CStr); 45 | static_collect!(core::any::TypeId); 46 | #[cfg(feature = "std")] 47 | static_collect!(std::path::Path); 48 | #[cfg(feature = "std")] 49 | static_collect!(std::path::PathBuf); 50 | #[cfg(feature = "std")] 51 | static_collect!(std::ffi::OsStr); 52 | #[cfg(feature = "std")] 53 | static_collect!(std::ffi::OsString); 54 | 55 | /// SAFETY: We know that a `&'static` reference cannot possibly point to `'gc` data, so it is safe 56 | /// to keep in a rooted objet and we do not have to trace through it. 57 | /// 58 | /// HOWEVER, There is an extra bound here that seems superfluous. If we have a `&'static T`, why do 59 | /// we require `T: 'static`, shouldn't this be implied, otherwise a `&'static T` would not be well- 60 | /// formed? WELL, there are currently some neat compiler bugs, observe... 61 | /// 62 | /// ```rust,compile_fail 63 | /// let arena = Arena::]>::new(Default::default(), |mc| { 64 | /// Box::leak(Box::new(Gc::new(mc, 4))) 65 | /// }); 66 | /// ``` 67 | /// 68 | /// At the time of this writing, without the extra `T: static` bound, the above code compiles and 69 | /// produces an arena with a reachable but un-traceable `Gc<'gc, i32>`, and this is unsound. This 70 | /// *is* ofc the stored type of the root, since the Arena is actually constructing a `&'static 71 | /// Gc<'static, i32>` as the root object, but this should still not rightfully compile due to the 72 | /// signature of the constructor callback passed to `Arena::new`. In fact, the 'static lifetime is a 73 | /// red herring, it is possible to change the internals of `Arena` such that the 'gc lifetime given 74 | /// to the callback is *not* 'static, and the problem persists. 75 | /// 76 | /// It should not be required to have this extra lifetime bound, and yet! It fixes the above issue 77 | /// perfectly and the given example of unsoundness no longer compiles. So, until this rustc bug 78 | /// is fixed... 
79 | /// 80 | /// DO NOT REMOVE THIS EXTRA `T: 'static` BOUND 81 | unsafe impl<'gc, T: ?Sized + 'static> Collect<'gc> for &'static T { 82 | const NEEDS_TRACE: bool = false; 83 | } 84 | 85 | unsafe impl<'gc, T: ?Sized + Collect<'gc>> Collect<'gc> for Box { 86 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 87 | 88 | #[inline] 89 | fn trace>(&self, cc: &mut C) { 90 | cc.trace(&**self) 91 | } 92 | } 93 | 94 | unsafe impl<'gc, T: Collect<'gc>> Collect<'gc> for [T] { 95 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 96 | 97 | #[inline] 98 | fn trace>(&self, cc: &mut C) { 99 | for t in self.iter() { 100 | cc.trace(t) 101 | } 102 | } 103 | } 104 | 105 | unsafe impl<'gc, T: Collect<'gc>> Collect<'gc> for Option { 106 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 107 | 108 | #[inline] 109 | fn trace>(&self, cc: &mut C) { 110 | if let Some(t) = self.as_ref() { 111 | cc.trace(t) 112 | } 113 | } 114 | } 115 | 116 | unsafe impl<'gc, T: Collect<'gc>, E: Collect<'gc>> Collect<'gc> for Result { 117 | const NEEDS_TRACE: bool = T::NEEDS_TRACE || E::NEEDS_TRACE; 118 | 119 | #[inline] 120 | fn trace>(&self, cc: &mut C) { 121 | match self { 122 | Ok(r) => cc.trace(r), 123 | Err(e) => cc.trace(e), 124 | } 125 | } 126 | } 127 | 128 | unsafe impl<'gc, T: Collect<'gc>> Collect<'gc> for Vec { 129 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 130 | 131 | #[inline] 132 | fn trace>(&self, cc: &mut C) { 133 | for t in self { 134 | cc.trace(t) 135 | } 136 | } 137 | } 138 | 139 | unsafe impl<'gc, T: Collect<'gc>> Collect<'gc> for VecDeque { 140 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 141 | 142 | #[inline] 143 | fn trace>(&self, cc: &mut C) { 144 | for t in self { 145 | cc.trace(t) 146 | } 147 | } 148 | } 149 | 150 | unsafe impl<'gc, T: Collect<'gc>> Collect<'gc> for LinkedList { 151 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 152 | 153 | #[inline] 154 | fn trace>(&self, cc: &mut C) { 155 | for t in self { 156 | cc.trace(t) 157 | } 158 | } 159 | } 160 | 161 | #[cfg(feature = "std")] 162 | unsafe impl<'gc, K, V, S> Collect<'gc> for HashMap 163 | where 164 | K: Collect<'gc>, 165 | V: Collect<'gc>, 166 | S: 'static, 167 | { 168 | const NEEDS_TRACE: bool = K::NEEDS_TRACE || V::NEEDS_TRACE; 169 | 170 | #[inline] 171 | fn trace>(&self, cc: &mut C) { 172 | for (k, v) in self { 173 | cc.trace(k); 174 | cc.trace(v); 175 | } 176 | } 177 | } 178 | 179 | #[cfg(feature = "std")] 180 | unsafe impl<'gc, T, S> Collect<'gc> for HashSet 181 | where 182 | T: Collect<'gc>, 183 | S: 'static, 184 | { 185 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 186 | 187 | #[inline] 188 | fn trace>(&self, cc: &mut C) { 189 | for v in self { 190 | cc.trace(v); 191 | } 192 | } 193 | } 194 | 195 | unsafe impl<'gc, K, V> Collect<'gc> for BTreeMap 196 | where 197 | K: Collect<'gc>, 198 | V: Collect<'gc>, 199 | { 200 | const NEEDS_TRACE: bool = K::NEEDS_TRACE || V::NEEDS_TRACE; 201 | 202 | #[inline] 203 | fn trace>(&self, cc: &mut C) { 204 | for (k, v) in self { 205 | cc.trace(k); 206 | cc.trace(v); 207 | } 208 | } 209 | } 210 | 211 | unsafe impl<'gc, T> Collect<'gc> for BTreeSet 212 | where 213 | T: Collect<'gc>, 214 | { 215 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 216 | 217 | #[inline] 218 | fn trace>(&self, cc: &mut C) { 219 | for v in self { 220 | cc.trace(v); 221 | } 222 | } 223 | } 224 | 225 | unsafe impl<'gc, T> Collect<'gc> for BinaryHeap 226 | where 227 | T: Collect<'gc>, 228 | { 229 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 230 | 231 | #[inline] 232 | fn trace>(&self, cc: &mut C) { 233 | for v in self { 234 | cc.trace(v); 235 | } 236 | } 237 | } 238 | 239 
| unsafe impl<'gc, T> Collect<'gc> for Rc 240 | where 241 | T: ?Sized + Collect<'gc>, 242 | { 243 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 244 | 245 | #[inline] 246 | fn trace>(&self, cc: &mut C) { 247 | cc.trace(&**self); 248 | } 249 | } 250 | 251 | #[cfg(target_has_atomic = "ptr")] 252 | unsafe impl<'gc, T> Collect<'gc> for alloc::sync::Arc 253 | where 254 | T: ?Sized + Collect<'gc>, 255 | { 256 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 257 | 258 | #[inline] 259 | fn trace>(&self, cc: &mut C) { 260 | cc.trace(&**self); 261 | } 262 | } 263 | 264 | unsafe impl<'gc, T> Collect<'gc> for Cell 265 | where 266 | T: 'static, 267 | { 268 | const NEEDS_TRACE: bool = false; 269 | } 270 | 271 | unsafe impl<'gc, T> Collect<'gc> for RefCell 272 | where 273 | T: 'static, 274 | { 275 | const NEEDS_TRACE: bool = false; 276 | } 277 | 278 | // SAFETY: `PhantomData` is a ZST, and therefore doesn't store anything 279 | unsafe impl<'gc, T> Collect<'gc> for PhantomData { 280 | const NEEDS_TRACE: bool = false; 281 | } 282 | 283 | unsafe impl<'gc, T: Collect<'gc>, const N: usize> Collect<'gc> for [T; N] { 284 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 285 | 286 | #[inline] 287 | fn trace>(&self, cc: &mut C) { 288 | for t in self { 289 | cc.trace(t) 290 | } 291 | } 292 | } 293 | 294 | macro_rules! impl_tuple { 295 | () => ( 296 | unsafe impl<'gc> Collect<'gc> for () { 297 | const NEEDS_TRACE: bool = false; 298 | } 299 | ); 300 | 301 | ($($name:ident)+) => ( 302 | unsafe impl<'gc, $($name,)*> Collect<'gc> for ($($name,)*) 303 | where $($name: Collect<'gc>,)* 304 | { 305 | const NEEDS_TRACE: bool = false $(|| $name::NEEDS_TRACE)*; 306 | 307 | #[allow(non_snake_case)] 308 | #[inline] 309 | fn trace >(&self, cc: &mut TR) { 310 | let ($($name,)*) = self; 311 | $(cc.trace($name);)* 312 | } 313 | } 314 | ); 315 | } 316 | 317 | impl_tuple! {} 318 | impl_tuple! {A} 319 | impl_tuple! {A B} 320 | impl_tuple! {A B C} 321 | impl_tuple! {A B C D} 322 | impl_tuple! {A B C D E} 323 | impl_tuple! {A B C D E F} 324 | impl_tuple! {A B C D E F G} 325 | impl_tuple! {A B C D E F G H} 326 | impl_tuple! {A B C D E F G H I} 327 | impl_tuple! {A B C D E F G H I J} 328 | impl_tuple! {A B C D E F G H I J K} 329 | impl_tuple! {A B C D E F G H I J K L} 330 | impl_tuple! {A B C D E F G H I J K L M} 331 | impl_tuple! {A B C D E F G H I J K L M N} 332 | impl_tuple! {A B C D E F G H I J K L M N O} 333 | impl_tuple! {A B C D E F G H I J K L M N O P} 334 | -------------------------------------------------------------------------------- /src/context.rs: -------------------------------------------------------------------------------- 1 | use alloc::{boxed::Box, vec::Vec}; 2 | use core::{ 3 | cell::{Cell, UnsafeCell}, 4 | mem, 5 | ops::{ControlFlow, Deref, DerefMut}, 6 | ptr::NonNull, 7 | }; 8 | 9 | use crate::{ 10 | collect::{Collect, Trace}, 11 | metrics::Metrics, 12 | types::{GcBox, GcBoxHeader, GcBoxInner, GcColor, Invariant}, 13 | Gc, GcWeak, 14 | }; 15 | 16 | /// Handle value given by arena callbacks during construction and mutation. Allows allocating new 17 | /// `Gc` pointers and internally mutating values held by `Gc` pointers. 
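A usage sketch for the exported `static_collect!` macro from `collect_impl.rs` above, applied to a hypothetical `'static` configuration type:

```rust
use gc_arena::{static_collect, Collect};

// A plain `'static` type: it can never contain a `Gc`, so an empty `Collect`
// impl with `NEEDS_TRACE = false` is sound.
struct Config {
    name: &'static str,
    retries: u32,
}

// Roughly expands to:
//   unsafe impl<'gc> Collect<'gc> for Config { const NEEDS_TRACE: bool = false; }
static_collect!(Config);
```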
18 | #[repr(transparent)] 19 | pub struct Mutation<'gc> { 20 | context: Context, 21 | _invariant: Invariant<'gc>, 22 | } 23 | 24 | impl<'gc> Mutation<'gc> { 25 | #[inline] 26 | pub fn metrics(&self) -> &Metrics { 27 | self.context.metrics() 28 | } 29 | 30 | /// IF we are in the marking phase AND the `parent` pointer is colored black AND the `child` (if 31 | /// given) is colored white, then change the `parent` color to gray and enqueue it for tracing. 32 | /// 33 | /// This operation is known as a "backwards write barrier". Calling this method is one of the 34 | /// safe ways for the value in the `parent` pointer to use internal mutability to adopt the 35 | /// `child` pointer without invalidating the color invariant. 36 | /// 37 | /// If the `child` parameter is given, then calling this method ensures that the `parent` 38 | /// pointer may safely adopt the `child` pointer. If no `child` is given, then calling this 39 | /// method is more general, and it ensures that the `parent` pointer may adopt *any* child 40 | /// pointer(s) before collection is next triggered. 41 | #[inline] 42 | pub fn backward_barrier(&self, parent: Gc<'gc, ()>, child: Option>) { 43 | self.context.backward_barrier( 44 | unsafe { GcBox::erase(parent.ptr) }, 45 | child.map(|p| unsafe { GcBox::erase(p.ptr) }), 46 | ) 47 | } 48 | 49 | /// A version of [`Mutation::backward_barrier`] that allows adopting a [`GcWeak`] child. 50 | #[inline] 51 | pub fn backward_barrier_weak(&self, parent: Gc<'gc, ()>, child: GcWeak<'gc, ()>) { 52 | self.context 53 | .backward_barrier_weak(unsafe { GcBox::erase(parent.ptr) }, unsafe { 54 | GcBox::erase(child.inner.ptr) 55 | }) 56 | } 57 | 58 | /// IF we are in the marking phase AND the `parent` pointer (if given) is colored black, AND 59 | /// the `child` is colored white, then immediately change the `child` to gray and enqueue it 60 | /// for tracing. 61 | /// 62 | /// This operation is known as a "forwards write barrier". Calling this method is one of the 63 | /// safe ways for the value in the `parent` pointer to use internal mutability to adopt the 64 | /// `child` pointer without invalidating the color invariant. 65 | /// 66 | /// If the `parent` parameter is given, then calling this method ensures that the `parent` 67 | /// pointer may safely adopt the `child` pointer. If no `parent` is given, then calling this 68 | /// method is more general, and it ensures that the `child` pointer may be adopted by *any* 69 | /// parent pointer(s) before collection is next triggered. 70 | #[inline] 71 | pub fn forward_barrier(&self, parent: Option>, child: Gc<'gc, ()>) { 72 | self.context 73 | .forward_barrier(parent.map(|p| unsafe { GcBox::erase(p.ptr) }), unsafe { 74 | GcBox::erase(child.ptr) 75 | }) 76 | } 77 | 78 | /// A version of [`Mutation::forward_barrier`] that allows adopting a [`GcWeak`] child. 79 | #[inline] 80 | pub fn forward_barrier_weak(&self, parent: Option>, child: GcWeak<'gc, ()>) { 81 | self.context 82 | .forward_barrier_weak(parent.map(|p| unsafe { GcBox::erase(p.ptr) }), unsafe { 83 | GcBox::erase(child.inner.ptr) 84 | }) 85 | } 86 | 87 | #[inline] 88 | pub(crate) fn allocate + 'gc>(&self, t: T) -> NonNull> { 89 | self.context.allocate(t) 90 | } 91 | 92 | #[inline] 93 | pub(crate) fn upgrade(&self, gc_box: GcBox) -> bool { 94 | self.context.upgrade(gc_box) 95 | } 96 | } 97 | 98 | /// Handle value given to finalization callbacks in `MarkedArena`. 
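A sketch of using the low-level `backward_barrier` hook directly, assuming `Gc::erase`, `gc_arena::lock::Lock`, and `Unlock::unlock_unchecked` behave as used elsewhere in this crate; `Gc::write` plus `unlock!` is the usual safe alternative.

```rust
use gc_arena::barrier::Unlock;
use gc_arena::lock::Lock;
use gc_arena::{Collect, Gc, Mutation};

#[derive(Collect)]
#[collect(no_drop)]
struct Node<'gc> {
    next: Lock<Option<Gc<'gc, Node<'gc>>>>,
}

// Adopt `child` into `parent` using the low-level barrier API directly.
fn adopt<'gc>(mc: &Mutation<'gc>, parent: Gc<'gc, Node<'gc>>, child: Gc<'gc, Node<'gc>>) {
    // Tell the collector that `parent` may now point at `child`, in the same
    // arena mutation as the adoption below.
    mc.backward_barrier(Gc::erase(parent), Some(Gc::erase(child)));

    // SAFETY: the barrier above covers this adoption for the current mutation.
    unsafe { parent.next.unlock_unchecked() }.set(Some(child));
}
```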
99 | /// 100 | /// Derefs to `Mutation<'gc>` to allow for arbitrary mutation, but adds additional powers to examine 101 | /// the state of the fully marked arena. 102 | #[repr(transparent)] 103 | pub struct Finalization<'gc> { 104 | context: Context, 105 | _invariant: Invariant<'gc>, 106 | } 107 | 108 | impl<'gc> Deref for Finalization<'gc> { 109 | type Target = Mutation<'gc>; 110 | 111 | fn deref(&self) -> &Self::Target { 112 | // SAFETY: Finalization and Mutation are #[repr(transparent)] 113 | unsafe { mem::transmute::<&Self, &Mutation>(&self) } 114 | } 115 | } 116 | 117 | impl<'gc> Finalization<'gc> { 118 | #[inline] 119 | pub(crate) fn resurrect(&self, gc_box: GcBox) { 120 | self.context.resurrect(gc_box) 121 | } 122 | } 123 | 124 | impl<'gc> Trace<'gc> for Context { 125 | fn trace_gc(&mut self, gc: Gc<'gc, ()>) { 126 | let gc_box = unsafe { GcBox::erase(gc.ptr) }; 127 | Context::trace(self, gc_box) 128 | } 129 | 130 | fn trace_gc_weak(&mut self, gc: GcWeak<'gc, ()>) { 131 | let gc_box = unsafe { GcBox::erase(gc.inner.ptr) }; 132 | Context::trace_weak(self, gc_box) 133 | } 134 | } 135 | 136 | #[derive(Debug, Copy, Clone, Eq, PartialEq)] 137 | pub(crate) enum Phase { 138 | Mark, 139 | Sweep, 140 | Sleep, 141 | Drop, 142 | } 143 | 144 | #[derive(Debug, Copy, Clone, Eq, PartialEq)] 145 | pub(crate) enum RunUntil { 146 | // Run collection until we reach the stop condition *or* debt is zero. 147 | PayDebt, 148 | // Run collection until we reach our stop condition. 149 | Stop, 150 | } 151 | 152 | #[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd)] 153 | pub(crate) enum Stop { 154 | // Don't proceed past the end of marking, *just* before the sweep phase 155 | FullyMarked, 156 | // Don't proceed past the very beginning of the sweep phase 157 | AtSweep, 158 | // Stop once we reach the end of the current cycle and are in `Phase::Sleep`. 159 | FinishCycle, 160 | // Stop once we have done an entire cycle as a single atomic unit. This is the maximum amount 161 | // of work that a call to `Context::do_collection` will do, since a full collection as a single 162 | // atomic unit means that all unreachable values *must* already be freed. 163 | Full, 164 | } 165 | 166 | pub(crate) struct Context { 167 | metrics: Metrics, 168 | phase: Phase, 169 | #[cfg(feature = "tracing")] 170 | phase_span: tracing::Span, 171 | 172 | // A linked list of all allocated `GcBox`es. 173 | all: Cell>, 174 | 175 | // A copy of the head of `all` at the end of `Phase::Mark`. 176 | // During `Phase::Sweep`, we free all white allocations on this list. 177 | // Any allocations created *during* `Phase::Sweep` will be added to `all`, 178 | // but `sweep` will *not* be updated. This ensures that we keep allocations 179 | // alive until we've had a chance to trace them. 180 | sweep: Option, 181 | 182 | // The most recent black object that we encountered during `Phase::Sweep`. 183 | // When we free objects, we update this `GcBox.next` to remove them from 184 | // the linked list. 185 | sweep_prev: Cell>, 186 | 187 | /// Does the root needs to be traced? 188 | /// This should be `true` at the beginning of `Phase::Mark`. 189 | root_needs_trace: bool, 190 | 191 | /// A queue of gray objects, used during `Phase::Mark`. 192 | /// This holds traceable objects that have yet to be traced. 193 | gray: Queue, 194 | 195 | // A queue of gray objects that became gray as a result 196 | // of a write barrier. 
197 | gray_again: Queue, 198 | } 199 | 200 | impl Drop for Context { 201 | fn drop(&mut self) { 202 | struct DropAll<'a>(&'a Metrics, Option); 203 | 204 | impl<'a> Drop for DropAll<'a> { 205 | fn drop(&mut self) { 206 | if let Some(gc_box) = self.1.take() { 207 | let mut drop_resume = DropAll(self.0, Some(gc_box)); 208 | while let Some(mut gc_box) = drop_resume.1.take() { 209 | let header = gc_box.header(); 210 | drop_resume.1 = header.next(); 211 | let gc_size = header.size_of_box(); 212 | // SAFETY: the context owns its GC'd objects 213 | unsafe { 214 | if header.is_live() { 215 | gc_box.drop_in_place(); 216 | self.0.mark_gc_dropped(gc_size); 217 | } 218 | gc_box.dealloc(); 219 | self.0.mark_gc_freed(gc_size); 220 | } 221 | } 222 | } 223 | } 224 | } 225 | 226 | let cx = PhaseGuard::enter(self, Some(Phase::Drop)); 227 | DropAll(&cx.metrics, cx.all.get()); 228 | } 229 | } 230 | 231 | impl Context { 232 | pub(crate) unsafe fn new() -> Context { 233 | let metrics = Metrics::new(); 234 | Context { 235 | phase: Phase::Sleep, 236 | #[cfg(feature = "tracing")] 237 | phase_span: PhaseGuard::span_for(&metrics, Phase::Sleep), 238 | metrics: metrics.clone(), 239 | all: Cell::new(None), 240 | sweep: None, 241 | sweep_prev: Cell::new(None), 242 | root_needs_trace: true, 243 | gray: Queue::new(), 244 | gray_again: Queue::new(), 245 | } 246 | } 247 | 248 | #[inline] 249 | pub(crate) unsafe fn mutation_context<'gc>(&self) -> &Mutation<'gc> { 250 | mem::transmute::<&Self, &Mutation>(&self) 251 | } 252 | 253 | #[inline] 254 | pub(crate) unsafe fn finalization_context<'gc>(&self) -> &Finalization<'gc> { 255 | mem::transmute::<&Self, &Finalization>(&self) 256 | } 257 | 258 | #[inline] 259 | pub(crate) fn metrics(&self) -> &Metrics { 260 | &self.metrics 261 | } 262 | 263 | #[inline] 264 | pub(crate) fn root_barrier(&mut self) { 265 | if self.phase == Phase::Mark { 266 | self.root_needs_trace = true; 267 | } 268 | } 269 | 270 | #[inline] 271 | pub(crate) fn phase(&self) -> Phase { 272 | self.phase 273 | } 274 | 275 | #[inline] 276 | pub(crate) fn gray_remaining(&self) -> bool { 277 | !self.gray.is_empty() || !self.gray_again.is_empty() || self.root_needs_trace 278 | } 279 | 280 | // Do some collection work until either we have achieved our `target` (paying off debt or 281 | // finishing a full collection) or we have reached the `stop` condition. 282 | // 283 | // In order for this to be safe, at the time of call no `Gc` pointers can be live that are not 284 | // reachable from the given root object. 285 | // 286 | // If we are currently in `Phase::Sleep` and have positive debt, this will immediately 287 | // transition the collector to `Phase::Mark`. 288 | #[deny(unsafe_op_in_unsafe_fn)] 289 | pub(crate) unsafe fn do_collection<'gc, R: Collect<'gc> + ?Sized>( 290 | &mut self, 291 | root: &R, 292 | run_until: RunUntil, 293 | stop: Stop, 294 | ) { 295 | let mut cx = PhaseGuard::enter(self, None); 296 | 297 | if run_until == RunUntil::PayDebt && !(cx.metrics.allocation_debt() > 0.0) { 298 | cx.log_progress("GC: paused"); 299 | return; 300 | } 301 | 302 | let mut has_slept = false; 303 | 304 | loop { 305 | match cx.phase { 306 | Phase::Sleep => { 307 | has_slept = true; 308 | // Immediately enter the mark phase 309 | cx.switch(Phase::Mark); 310 | } 311 | Phase::Mark => { 312 | if cx.mark_one(root).is_break() { 313 | if stop <= Stop::FullyMarked { 314 | break; 315 | } else { 316 | // If we have no gray objects left, we enter the sweep phase. 
317 | cx.switch(Phase::Sweep); 318 | 319 | // Set `sweep to the current head of our `all` linked list. Any new 320 | // allocations during the newly-entered `Phase:Sweep` will update `all`, 321 | // but will *not* be reachable from `this.sweep`. 322 | cx.sweep = cx.all.get(); 323 | } 324 | } 325 | } 326 | Phase::Sweep => { 327 | if stop <= Stop::AtSweep { 328 | break; 329 | } else if cx.sweep_one().is_break() { 330 | // Begin a new cycle. 331 | // 332 | // We reset our debt if we have done an entire collection cycle (marking and 333 | // sweeping) as a single atomic unit. This keeps inherited debt from growing 334 | // without bound. 335 | cx.metrics.finish_cycle(has_slept); 336 | cx.root_needs_trace = true; 337 | cx.switch(Phase::Sleep); 338 | 339 | // We treat a stop condition of `Stop::Finish` as special for the purposes 340 | // of logging, and log that we finished a cycle. 341 | if stop == Stop::FinishCycle { 342 | cx.log_progress("GC: finished"); 343 | return; 344 | } 345 | 346 | // Otherwise we always break if we have performed a full cycle as a single 347 | // atomic unit, because there cannot be any more work to do in this case. 348 | if has_slept { 349 | // We shouldn't be stopping here if the stop condition is something like 350 | // `Stop::AtSweep`, but this should be impossible since the only way to 351 | // get here is to have gone through the entire cycle. 352 | assert!(stop == Stop::Full); 353 | break; 354 | } 355 | } 356 | } 357 | Phase::Drop => unreachable!(), 358 | } 359 | 360 | if run_until == RunUntil::PayDebt && !(cx.metrics.allocation_debt() > 0.0) { 361 | break; 362 | } 363 | } 364 | 365 | cx.log_progress("GC: yielding..."); 366 | } 367 | 368 | fn allocate<'gc, T: Collect<'gc>>(&self, t: T) -> NonNull> { 369 | let header = GcBoxHeader::new::(); 370 | header.set_next(self.all.get()); 371 | header.set_live(true); 372 | header.set_needs_trace(T::NEEDS_TRACE); 373 | 374 | let alloc_size = header.size_of_box(); 375 | 376 | // Make the generated code easier to optimize into `T` being constructed in place or at the 377 | // very least only memcpy'd once. 378 | // For more information, see: https://github.com/kyren/gc-arena/pull/14 379 | let (gc_box, ptr) = unsafe { 380 | let mut uninitialized = Box::new(mem::MaybeUninit::>::uninit()); 381 | core::ptr::write(uninitialized.as_mut_ptr(), GcBoxInner::new(header, t)); 382 | let ptr = NonNull::new_unchecked(Box::into_raw(uninitialized) as *mut GcBoxInner); 383 | (GcBox::erase(ptr), ptr) 384 | }; 385 | 386 | self.all.set(Some(gc_box)); 387 | if self.phase == Phase::Sweep && self.sweep_prev.get().is_none() { 388 | self.sweep_prev.set(self.all.get()); 389 | } 390 | 391 | self.metrics.mark_gc_allocated(alloc_size); 392 | 393 | ptr 394 | } 395 | 396 | #[inline] 397 | fn backward_barrier(&self, parent: GcBox, child: Option) { 398 | // During the marking phase, if we are mutating a black object, we may add a white object to 399 | // it and invalidate the invariant that black objects may not point to white objects. Turn 400 | // the black parent object gray to prevent this. 401 | // 402 | // NOTE: This also adds the pointer to the gray_again queue even if `header.needs_trace()` 403 | // is false, but this is not harmful (just wasteful). There's no reason to call a barrier on 404 | // a pointer that can't adopt other pointers, so we skip the check. 
405 | if self.phase == Phase::Mark 406 | && parent.header().color() == GcColor::Black 407 | && child 408 | .map(|c| matches!(c.header().color(), GcColor::White | GcColor::WhiteWeak)) 409 | .unwrap_or(true) 410 | { 411 | // Outline the actual barrier code (which is somewhat expensive and won't be executed 412 | // often) to promote the inlining of the write barrier. 413 | #[cold] 414 | fn barrier(this: &Context, parent: GcBox) { 415 | this.make_gray_again(parent); 416 | } 417 | barrier(&self, parent); 418 | } 419 | } 420 | 421 | #[inline] 422 | fn backward_barrier_weak(&self, parent: GcBox, child: GcBox) { 423 | if self.phase == Phase::Mark 424 | && parent.header().color() == GcColor::Black 425 | && child.header().color() == GcColor::White 426 | { 427 | // Outline the actual barrier code (which is somewhat expensive and won't be executed 428 | // often) to promote the inlining of the write barrier. 429 | #[cold] 430 | fn barrier(this: &Context, parent: GcBox) { 431 | this.make_gray_again(parent); 432 | } 433 | barrier(&self, parent); 434 | } 435 | } 436 | 437 | #[inline] 438 | fn forward_barrier(&self, parent: Option, child: GcBox) { 439 | // During the marking phase, if we are mutating a black object, we may add a white object 440 | // to it and invalidate the invariant that black objects may not point to white objects. 441 | // Immediately trace the child white object to turn it gray (or black) to prevent this. 442 | if self.phase == Phase::Mark 443 | && parent 444 | .map(|p| p.header().color() == GcColor::Black) 445 | .unwrap_or(true) 446 | { 447 | // Outline the actual barrier code (which is somewhat expensive and won't be executed 448 | // often) to promote the inlining of the write barrier. 449 | #[cold] 450 | fn barrier(this: &Context, child: GcBox) { 451 | this.trace(child); 452 | } 453 | barrier(&self, child); 454 | } 455 | } 456 | 457 | #[inline] 458 | fn forward_barrier_weak(&self, parent: Option, child: GcBox) { 459 | // During the marking phase, if we are mutating a black object, we may add a white object 460 | // to it and invalidate the invariant that black objects may not point to white objects. 461 | // Immediately trace the child white object to turn it gray (or black) to prevent this. 462 | if self.phase == Phase::Mark 463 | && parent 464 | .map(|p| p.header().color() == GcColor::Black) 465 | .unwrap_or(true) 466 | { 467 | // Outline the actual barrier code (which is somewhat expensive and won't be executed 468 | // often) to promote the inlining of the write barrier. 469 | #[cold] 470 | fn barrier(this: &Context, child: GcBox) { 471 | this.trace_weak(child); 472 | } 473 | barrier(&self, child); 474 | } 475 | } 476 | 477 | #[inline] 478 | fn trace(&self, gc_box: GcBox) { 479 | let header = gc_box.header(); 480 | let color = header.color(); 481 | match color { 482 | GcColor::Black | GcColor::Gray => {} 483 | GcColor::White | GcColor::WhiteWeak => { 484 | if header.needs_trace() { 485 | // A white traceable object is not in the gray queue, becomes gray and enters 486 | // the normal gray queue. 487 | header.set_color(GcColor::Gray); 488 | debug_assert!(header.is_live()); 489 | self.gray.push(gc_box); 490 | } else { 491 | // A white object that doesn't need tracing simply becomes black. 492 | header.set_color(GcColor::Black); 493 | } 494 | 495 | // Only marking the *first* time counts as a mark metric. 
496 | if color == GcColor::White { 497 | self.metrics.mark_gc_marked(header.size_of_box()); 498 | } 499 | } 500 | } 501 | } 502 | 503 | #[inline] 504 | fn trace_weak(&self, gc_box: GcBox) { 505 | let header = gc_box.header(); 506 | if header.color() == GcColor::White { 507 | header.set_color(GcColor::WhiteWeak); 508 | self.metrics.mark_gc_marked(header.size_of_box()); 509 | } 510 | } 511 | 512 | /// Determines whether or not a Gc pointer is safe to be upgraded. 513 | /// This is used by weak pointers to determine if it can safely upgrade to a strong pointer. 514 | #[inline] 515 | fn upgrade(&self, gc_box: GcBox) -> bool { 516 | let header = gc_box.header(); 517 | 518 | // This object has already been freed, definitely not safe to upgrade. 519 | if !header.is_live() { 520 | return false; 521 | } 522 | 523 | // Consider the different possible phases of the GC: 524 | // * In `Phase::Sleep`, the GC is not running, so we can upgrade. 525 | // If the newly-created `Gc` or `GcCell` survives the current `arena.mutate` 526 | // call, then the situtation is equivalent to having copied an existing `Gc`/`GcCell`, 527 | // or having created a new allocation. 528 | // 529 | // * In `Phase::Mark`: 530 | // If the newly-created `Gc` or `GcCell` survives the current `arena.mutate` 531 | // call, then it must have been stored somewhere, triggering a write barrier. 532 | // This will ensure that the new `Gc`/`GcCell` gets traced (if it's now reachable) 533 | // before we transition to `Phase::Sweep`. 534 | // 535 | // * In `Phase::Sweep`: 536 | // If the allocation is `WhiteWeak`, then it's impossible for it to have been freshly- 537 | // created during this `Phase::Sweep`. `WhiteWeak` is only set when a white `GcWeak/ 538 | // GcWeakCell` is traced. A `GcWeak/GcWeakCell` must be created from an existing `Gc/ 539 | // GcCell` via `downgrade()`, so `WhiteWeak` means that a `GcWeak` / `GcWeakCell` existed 540 | // during the last `Phase::Mark.` 541 | // 542 | // Therefore, a `WhiteWeak` object is guaranteed to be deallocated during this 543 | // `Phase::Sweep`, and we must not upgrade it. 544 | // 545 | // Conversely, it's always safe to upgrade a white object that is not `WhiteWeak`. 546 | // In order to call `upgrade`, you must have a `GcWeak/GcWeakCell`. Since it is 547 | // not `WhiteWeak` there cannot have been any `GcWeak/GcWeakCell`s during the 548 | // last `Phase::Mark`, so the weak pointer must have been created during this 549 | // `Phase::Sweep`. This is only possible if the underlying allocation was freshly-created 550 | // - if the allocation existed during `Phase::Mark` but was not traced, then it 551 | // must have been unreachable, which means that the user wouldn't have been able to call 552 | // `downgrade`. Therefore, we can safely upgrade, knowing that the object will not be 553 | // freed during this phase, despite being white. 554 | if self.phase == Phase::Sweep && header.color() == GcColor::WhiteWeak { 555 | return false; 556 | } 557 | true 558 | } 559 | 560 | #[inline] 561 | fn resurrect(&self, gc_box: GcBox) { 562 | let header = gc_box.header(); 563 | debug_assert_eq!(self.phase, Phase::Mark); 564 | debug_assert!(header.is_live()); 565 | let color = header.color(); 566 | if matches!(header.color(), GcColor::White | GcColor::WhiteWeak) { 567 | header.set_color(GcColor::Gray); 568 | self.gray.push(gc_box); 569 | // Only marking the *first* time counts as a mark metric. 
570 | if color == GcColor::White { 571 | self.metrics.mark_gc_marked(header.size_of_box()); 572 | } 573 | } 574 | } 575 | 576 | fn mark_one<'gc, R: Collect<'gc> + ?Sized>(&mut self, root: &R) -> ControlFlow<()> { 577 | // We look for an object first in the normal gray queue, then the "gray again" queue. 578 | // Processing "gray again" objects later gives them more time to be mutated again without 579 | // triggering another write barrier. 580 | let next_gray = if let Some(gc_box) = self.gray.pop() { 581 | Some(gc_box) 582 | } else if let Some(gc_box) = self.gray_again.pop() { 583 | Some(gc_box) 584 | } else { 585 | None 586 | }; 587 | 588 | if let Some(gc_box) = next_gray { 589 | // We always mark work for objects processed from both the gray and "gray again" queue. 590 | // When objects are placed into the "gray again" queue due to a write barrier, the 591 | // original work is *undone*, so we do it again here. 592 | self.metrics.mark_gc_traced(gc_box.header().size_of_box()); 593 | gc_box.header().set_color(GcColor::Black); 594 | 595 | // If we have an object in the gray queue, take one, trace it, and turn it black. 596 | 597 | // Our `Collect::trace` call may panic, and if it does the object will be lost from 598 | // the gray queue but potentially incompletely traced. By catching a panic during 599 | // `Arena::collect()`, this could lead to memory unsafety. 600 | // 601 | // So, if the `Collect::trace` call panics, we need to add the popped object back to the 602 | // `gray_again` queue. If the panic is caught, this will maybe give some time for its 603 | // trace method to not panic before attempting to collect it again. 604 | struct DropGuard<'a> { 605 | context: &'a mut Context, 606 | gc_box: GcBox, 607 | } 608 | 609 | impl<'a> Drop for DropGuard<'a> { 610 | fn drop(&mut self) { 611 | self.context.make_gray_again(self.gc_box); 612 | } 613 | } 614 | 615 | let guard = DropGuard { 616 | context: self, 617 | gc_box, 618 | }; 619 | debug_assert!(gc_box.header().is_live()); 620 | unsafe { gc_box.trace_value(guard.context) } 621 | mem::forget(guard); 622 | 623 | ControlFlow::Continue(()) 624 | } else if self.root_needs_trace { 625 | // We treat the root object as gray if `root_needs_trace` is set, and we process it at 626 | // the end of the gray queue for the same reason as the "gray again" objects. 627 | root.trace(self); 628 | self.root_needs_trace = false; 629 | ControlFlow::Continue(()) 630 | } else { 631 | ControlFlow::Break(()) 632 | } 633 | } 634 | 635 | fn sweep_one(&mut self) -> ControlFlow<()> { 636 | let Some(mut sweep) = self.sweep else { 637 | self.sweep_prev.set(None); 638 | return ControlFlow::Break(()); 639 | }; 640 | 641 | let sweep_header = sweep.header(); 642 | let sweep_size = sweep_header.size_of_box(); 643 | 644 | let next_box = sweep_header.next(); 645 | self.sweep = next_box; 646 | 647 | match sweep_header.color() { 648 | // If the next object in the sweep portion of the main list is white, we 649 | // need to remove it from the main object list and destruct it. 650 | GcColor::White => { 651 | if let Some(sweep_prev) = self.sweep_prev.get() { 652 | sweep_prev.header().set_next(next_box); 653 | } else { 654 | // If `sweep_prev` is None, then the sweep pointer is also the 655 | // beginning of the main object list, so we need to adjust it. 
656 | debug_assert_eq!(self.all.get(), Some(sweep)); 657 | self.all.set(next_box); 658 | } 659 | 660 | // SAFETY: this object is white, and wasn't traced by a `GcWeak` during this cycle, 661 | // meaning it cannot have either strong or weak pointers, so we can drop the whole 662 | // object. 663 | unsafe { 664 | if sweep_header.is_live() { 665 | // If the alive flag is set, that means we haven't dropped the inner value 666 | // of this object, 667 | sweep.drop_in_place(); 668 | self.metrics.mark_gc_dropped(sweep_size); 669 | } 670 | sweep.dealloc(); 671 | self.metrics.mark_gc_freed(sweep_size); 672 | } 673 | } 674 | // Keep the `GcBox` as part of the linked list if we traced a weak pointer to it. The 675 | // weak pointer still needs access to the `GcBox` to be able to check if the object 676 | // is still alive. We can only deallocate the `GcBox`, once there are no weak pointers 677 | // left. 678 | GcColor::WhiteWeak => { 679 | self.sweep_prev.set(Some(sweep)); 680 | sweep_header.set_color(GcColor::White); 681 | if sweep_header.is_live() { 682 | sweep_header.set_live(false); 683 | // SAFETY: Since this object is white, that means there are no more strong 684 | // pointers to this object, only weak pointers, so we can safely drop its 685 | // contents. 686 | unsafe { sweep.drop_in_place() } 687 | self.metrics.mark_gc_dropped(sweep_size); 688 | } 689 | self.metrics.mark_gc_remembered(sweep_size); 690 | } 691 | // If the next object in the sweep portion of the main list is black, we 692 | // need to keep it but turn it back white. 693 | GcColor::Black => { 694 | self.sweep_prev.set(Some(sweep)); 695 | sweep_header.set_color(GcColor::White); 696 | self.metrics.mark_gc_remembered(sweep_size); 697 | } 698 | // No gray objects should be in this part of the main list, they should 699 | // be added to the beginning of the list before the sweep pointer, so it 700 | // should not be possible for us to encounter them here. 701 | GcColor::Gray => { 702 | debug_assert!(false, "unexpected gray object in sweep list") 703 | } 704 | } 705 | 706 | ControlFlow::Continue(()) 707 | } 708 | 709 | // Take a black pointer and turn it gray and put it in the `gray_again` queue. 710 | fn make_gray_again(&self, gc_box: GcBox) { 711 | let header = gc_box.header(); 712 | debug_assert_eq!(header.color(), GcColor::Black); 713 | header.set_color(GcColor::Gray); 714 | self.gray_again.push(gc_box); 715 | self.metrics.mark_gc_untraced(header.size_of_box()); 716 | } 717 | } 718 | 719 | /// Helper type for managing phase transitions. 
720 | struct PhaseGuard<'a> { 721 | cx: &'a mut Context, 722 | #[cfg(feature = "tracing")] 723 | span: tracing::span::EnteredSpan, 724 | } 725 | 726 | impl<'a> Drop for PhaseGuard<'a> { 727 | fn drop(&mut self) { 728 | #[cfg(feature = "tracing")] 729 | { 730 | let span = mem::replace(&mut self.span, tracing::Span::none().entered()); 731 | self.cx.phase_span = span.exit(); 732 | } 733 | } 734 | } 735 | 736 | impl<'a> Deref for PhaseGuard<'a> { 737 | type Target = Context; 738 | 739 | #[inline(always)] 740 | fn deref(&self) -> &Context { 741 | &self.cx 742 | } 743 | } 744 | 745 | impl<'a> DerefMut for PhaseGuard<'a> { 746 | #[inline(always)] 747 | fn deref_mut(&mut self) -> &mut Context { 748 | &mut self.cx 749 | } 750 | } 751 | 752 | impl<'a> PhaseGuard<'a> { 753 | fn enter(cx: &'a mut Context, phase: Option) -> Self { 754 | if let Some(phase) = phase { 755 | cx.phase = phase; 756 | } 757 | 758 | Self { 759 | #[cfg(feature = "tracing")] 760 | span: { 761 | let mut span = mem::replace(&mut cx.phase_span, tracing::Span::none()); 762 | if let Some(phase) = phase { 763 | span = Self::span_for(&cx.metrics, phase); 764 | } 765 | span.entered() 766 | }, 767 | cx, 768 | } 769 | } 770 | 771 | fn switch(&mut self, phase: Phase) { 772 | self.cx.phase = phase; 773 | 774 | #[cfg(feature = "tracing")] 775 | { 776 | let _ = mem::replace(&mut self.span, tracing::Span::none().entered()); 777 | self.span = Self::span_for(&self.cx.metrics, phase).entered(); 778 | } 779 | } 780 | 781 | fn log_progress(&mut self, #[allow(unused)] message: &str) { 782 | // TODO: add more infos here 783 | #[cfg(feature = "tracing")] 784 | tracing::debug!( 785 | target: "gc_arena", 786 | parent: &self.span, 787 | message, 788 | phase = tracing::field::debug(self.cx.phase), 789 | allocated = self.cx.metrics.total_allocation(), 790 | ); 791 | } 792 | 793 | #[cfg(feature = "tracing")] 794 | fn span_for(metrics: &Metrics, phase: Phase) -> tracing::Span { 795 | tracing::debug_span!( 796 | target: "gc_arena", 797 | "gc_arena", 798 | id = metrics.arena_id(), 799 | ?phase, 800 | ) 801 | } 802 | } 803 | 804 | // A shared, internally mutable `Vec` that avoids the overhead of `RefCell`. Used for the "gray" 805 | // and "gray again" queues. 806 | // 807 | // SAFETY: We do not return any references at all to the contents of the internal `UnsafeCell`, nor 808 | // do we provide any methods with callbacks. Since this type is `!Sync`, only one reference to the 809 | // `UnsafeCell` contents can be alive at any given time, thus we cannot violate aliasing rules. 810 | #[derive(Default)] 811 | struct Queue { 812 | vec: UnsafeCell>, 813 | } 814 | 815 | impl Queue { 816 | fn new() -> Self { 817 | Self { 818 | vec: UnsafeCell::new(Vec::new()), 819 | } 820 | } 821 | 822 | fn is_empty(&self) -> bool { 823 | unsafe { (*self.vec.get().cast_const()).is_empty() } 824 | } 825 | 826 | fn push(&self, val: T) { 827 | unsafe { 828 | (*self.vec.get()).push(val); 829 | } 830 | } 831 | 832 | fn pop(&self) -> Option { 833 | unsafe { (*self.vec.get()).pop() } 834 | } 835 | } 836 | -------------------------------------------------------------------------------- /src/dynamic_roots.rs: -------------------------------------------------------------------------------- 1 | use core::{cell::RefCell, fmt, mem}; 2 | 3 | use alloc::{ 4 | rc::{Rc, Weak}, 5 | vec::Vec, 6 | }; 7 | 8 | use crate::{ 9 | arena::Root, 10 | collect::{Collect, Trace}, 11 | metrics::Metrics, 12 | Gc, Mutation, Rootable, 13 | }; 14 | 15 | /// A way of registering GC roots dynamically. 
16 | /// 17 | /// Use this type as (a part of) an [`Arena`](crate::Arena) root to enable dynamic rooting of 18 | /// GC'd objects through [`DynamicRoot`] handles. 19 | // 20 | // SAFETY: This allows us to convert `Gc<'gc>` pointers to `Gc<'static>` and back, and this is VERY 21 | // sketchy. We know it is safe because: 22 | // 1) The `DynamicRootSet` must be created inside an arena and is branded with an invariant `'gc` 23 | // lifetime. 24 | // 2) When stashing a `Gc<'gc, R>` pointer, the invariant `'gc` lifetimes must match. 25 | // 3) When fetching, we make sure that the `DynamicRoot` `slots` field points to the same object 26 | // as the `slots` field in the `DynamicRootSet`. We never drop this `Rc` or change the `Weak` 27 | // held in any `DynamicRoot`, so if they both point to the same object, the original `Gc` 28 | // pointer *must* have originally been stashed using *this* set. Therefore, it is safe to cast 29 | // it back to whatever our current `'gc` lifetime is. 30 | #[derive(Copy, Clone)] 31 | pub struct DynamicRootSet<'gc>(Gc<'gc, Inner<'gc>>); 32 | 33 | unsafe impl<'gc> Collect<'gc> for DynamicRootSet<'gc> { 34 | fn trace>(&self, cc: &mut T) { 35 | cc.trace(&self.0); 36 | } 37 | } 38 | 39 | impl<'gc> DynamicRootSet<'gc> { 40 | /// Creates a new, empty root set. 41 | pub fn new(mc: &Mutation<'gc>) -> Self { 42 | DynamicRootSet(Gc::new( 43 | mc, 44 | Inner { 45 | slots: Rc::new(RefCell::new(Slots::new(mc.metrics().clone()))), 46 | }, 47 | )) 48 | } 49 | 50 | /// Puts a root inside this root set. 51 | /// 52 | /// The returned handle can be freely stored outside the current arena, and will keep the root 53 | /// alive across garbage collections. 54 | pub fn stash Rootable<'a>>( 55 | &self, 56 | mc: &Mutation<'gc>, 57 | root: Gc<'gc, Root<'gc, R>>, 58 | ) -> DynamicRoot { 59 | // SAFETY: We are adopting a new `Gc` pointer, so we must invoke a write barrier. 60 | mc.backward_barrier(Gc::erase(self.0), Some(Gc::erase(root))); 61 | 62 | let mut slots = self.0.slots.borrow_mut(); 63 | let index = slots.add(unsafe { Gc::cast(root) }); 64 | 65 | let ptr = 66 | unsafe { mem::transmute::>, Gc<'static, Root<'static, R>>>(root) }; 67 | let slots = unsafe { 68 | mem::transmute::>>, Weak>>>( 69 | Rc::downgrade(&self.0.slots), 70 | ) 71 | }; 72 | 73 | DynamicRoot { ptr, slots, index } 74 | } 75 | 76 | /// Gets immutable access to the given root. 77 | /// 78 | /// # Panics 79 | /// 80 | /// Panics if the handle doesn't belong to this root set. For the non-panicking variant, use 81 | /// [`try_fetch`](Self::try_fetch). 82 | #[inline] 83 | pub fn fetch Rootable<'r>>(&self, root: &DynamicRoot) -> Gc<'gc, Root<'gc, R>> { 84 | if self.contains(root) { 85 | unsafe { 86 | mem::transmute::>, Gc<'gc, Root<'gc, R>>>(root.ptr) 87 | } 88 | } else { 89 | panic!("mismatched root set") 90 | } 91 | } 92 | 93 | /// Gets immutable access to the given root, or returns an error if the handle doesn't belong 94 | /// to this root set. 95 | #[inline] 96 | pub fn try_fetch Rootable<'r>>( 97 | &self, 98 | root: &DynamicRoot, 99 | ) -> Result>, MismatchedRootSet> { 100 | if self.contains(root) { 101 | Ok(unsafe { 102 | mem::transmute::>, Gc<'gc, Root<'gc, R>>>(root.ptr) 103 | }) 104 | } else { 105 | Err(MismatchedRootSet(())) 106 | } 107 | } 108 | 109 | /// Tests if the given handle belongs to this root set. 
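Since `stash` and `fetch` above are the main entry points of this module, a short usage sketch may help. It assumes the crate's `Rootable!` helper macro and the `Arena::{new, mutate, collect_all}` methods, which are defined outside this file; treat it as a sketch rather than code from this repository.

```rust
use gc_arena::{Arena, DynamicRootSet, Gc, Rootable};

fn main() {
    // The arena root is the `DynamicRootSet` itself.
    let mut arena =
        Arena::<Rootable![DynamicRootSet<'_>]>::new(|mc| DynamicRootSet::new(mc));

    // Stash a pointer; the returned `DynamicRoot` is 'static and can be stored
    // anywhere outside the arena.
    let handle = arena.mutate(|mc, set| set.stash::<Rootable![i32]>(mc, Gc::new(mc, 42)));

    // The handle keeps the stashed value alive across collections.
    arena.collect_all();

    // Fetch it back inside any later mutation of the *same* arena; fetching it
    // from a different arena's `DynamicRootSet` would panic.
    arena.mutate(|_, set| {
        assert_eq!(*set.fetch::<Rootable![i32]>(&handle), 42);
    });
}
```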
110 | #[inline] 111 | pub fn contains Rootable<'r>>(&self, root: &DynamicRoot) -> bool { 112 | // NOTE: We are making an assumption about how `Weak` works that is currently true and 113 | // surely MUST continue to be true, but is possibly under-specified in the stdlib. We are 114 | // assuming that if the `Weak` pointer held in the given `DynamicRoot` points to a *dropped* 115 | // root set, that `Weak::as_ptr` will return a pointer that cannot possibly belong to a 116 | // live `Rc`. 117 | let ours = unsafe { 118 | mem::transmute::<*const RefCell>, *const RefCell>>( 119 | Rc::as_ptr(&self.0.slots), 120 | ) 121 | }; 122 | let theirs = Weak::as_ptr(&root.slots); 123 | ours == theirs 124 | } 125 | } 126 | 127 | /// Handle to a `Gc` pointer held inside a [`DynamicRootSet`] which is `'static` and can be held 128 | /// outside of the arena. 129 | pub struct DynamicRoot Rootable<'gc>> { 130 | ptr: Gc<'static, Root<'static, R>>, 131 | slots: Weak>>, 132 | index: Index, 133 | } 134 | 135 | impl Rootable<'gc>> Drop for DynamicRoot { 136 | fn drop(&mut self) { 137 | if let Some(slots) = self.slots.upgrade() { 138 | slots.borrow_mut().dec(self.index); 139 | } 140 | } 141 | } 142 | 143 | impl Rootable<'gc>> Clone for DynamicRoot { 144 | fn clone(&self) -> Self { 145 | if let Some(slots) = self.slots.upgrade() { 146 | slots.borrow_mut().inc(self.index); 147 | } 148 | 149 | Self { 150 | ptr: self.ptr, 151 | slots: self.slots.clone(), 152 | index: self.index, 153 | } 154 | } 155 | } 156 | 157 | impl Rootable<'gc>> DynamicRoot { 158 | /// Get a pointer to the held object. 159 | /// 160 | /// This returns [`Gc::as_ptr`] for the [`Gc`] provided when the `DynamicRoot` is stashed. 161 | /// 162 | /// # Safety 163 | /// 164 | /// It is possible to use this to reconstruct the original `Gc` pointer by calling the unsafe 165 | /// [`Gc::from_ptr`], but this is incredibly dangerous! 166 | /// 167 | /// First, if the [`DynamicRootSet`] in which the `DynamicRoot` was stashed has been collected, 168 | /// then either the returned pointer or other transitive `Gc` pointers objects may be dangling. 169 | /// The parent `DynamicRootSet` *must* still be uncollected in order to do this soundly. 170 | /// 171 | /// Second, the `'gc` lifetime returned here is unbound, so it is meaningless and can allow 172 | /// improper mixing of objects across arenas. The returned `'gc` lifetime must be bound to only 173 | /// the arena that holds the parent `DynamicRootSet`. 174 | #[inline] 175 | pub fn as_ptr<'gc>(&self) -> *const Root<'gc, R> { 176 | unsafe { mem::transmute::<&Root<'static, R>, &Root<'gc, R>>(&self.ptr) as *const _ } 177 | } 178 | } 179 | 180 | /// Error returned when trying to fetch a [`DynamicRoot`] from the wrong [`DynamicRootSet`]. 181 | #[derive(Debug)] 182 | pub struct MismatchedRootSet(()); 183 | 184 | impl fmt::Display for MismatchedRootSet { 185 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 186 | f.write_str("mismatched root set") 187 | } 188 | } 189 | 190 | #[cfg(feature = "std")] 191 | impl std::error::Error for MismatchedRootSet {} 192 | 193 | struct Inner<'gc> { 194 | slots: Rc>>, 195 | } 196 | 197 | unsafe impl<'gc> Collect<'gc> for Inner<'gc> { 198 | fn trace>(&self, cc: &mut T) { 199 | cc.trace(&*self.slots.borrow()); 200 | } 201 | } 202 | 203 | type Index = usize; 204 | 205 | // By avoiding Option, `Slot` can go from 24 bytes to 16. 206 | // 207 | // usize::MAX can never be a valid index without using more than `usize::MAX` memory in the slots 208 | // vec, which is impossible. 
209 | const NULL_INDEX: Index = usize::MAX; 210 | 211 | enum Slot<'gc> { 212 | Vacant { next_free: Index }, 213 | Occupied { root: Gc<'gc, ()>, ref_count: usize }, 214 | } 215 | 216 | unsafe impl<'gc> Collect<'gc> for Slot<'gc> { 217 | fn trace>(&self, cc: &mut T) { 218 | match self { 219 | Slot::Vacant { .. } => {} 220 | Slot::Occupied { root, ref_count: _ } => cc.trace_gc(*root), 221 | } 222 | } 223 | } 224 | 225 | struct Slots<'gc> { 226 | metrics: Metrics, 227 | slots: Vec>, 228 | next_free: Index, 229 | } 230 | 231 | impl<'gc> Drop for Slots<'gc> { 232 | fn drop(&mut self) { 233 | self.metrics 234 | .mark_external_deallocation(self.slots.capacity() * mem::size_of::()); 235 | } 236 | } 237 | 238 | unsafe impl<'gc> Collect<'gc> for Slots<'gc> { 239 | fn trace>(&self, cc: &mut T) { 240 | cc.trace(&self.slots); 241 | } 242 | } 243 | 244 | impl<'gc> Slots<'gc> { 245 | fn new(metrics: Metrics) -> Self { 246 | Self { 247 | metrics, 248 | slots: Vec::new(), 249 | next_free: NULL_INDEX, 250 | } 251 | } 252 | 253 | fn add(&mut self, p: Gc<'gc, ()>) -> Index { 254 | // Occupied slot refcount starts at 0. A refcount of 0 and a set ptr implies that there is 255 | // *one* live reference. 256 | 257 | if self.next_free != NULL_INDEX { 258 | let idx = self.next_free; 259 | let slot = &mut self.slots[idx]; 260 | match *slot { 261 | Slot::Vacant { next_free } => { 262 | self.next_free = next_free; 263 | } 264 | Slot::Occupied { .. } => panic!("free slot linked list corrupted"), 265 | } 266 | *slot = Slot::Occupied { 267 | root: p, 268 | ref_count: 0, 269 | }; 270 | idx 271 | } else { 272 | let idx = self.slots.len(); 273 | 274 | let old_capacity = self.slots.capacity(); 275 | self.slots.push(Slot::Occupied { 276 | root: p, 277 | ref_count: 0, 278 | }); 279 | let new_capacity = self.slots.capacity(); 280 | 281 | debug_assert!(new_capacity >= old_capacity); 282 | if new_capacity > old_capacity { 283 | self.metrics.mark_external_allocation( 284 | (new_capacity - old_capacity) * mem::size_of::(), 285 | ); 286 | } 287 | 288 | idx 289 | } 290 | } 291 | 292 | fn inc(&mut self, idx: Index) { 293 | match &mut self.slots[idx] { 294 | Slot::Occupied { ref_count, .. } => { 295 | *ref_count = ref_count 296 | .checked_add(1) 297 | .expect("DynamicRoot refcount overflow!"); 298 | } 299 | Slot::Vacant { .. } => panic!("taken slot has been improperly freed"), 300 | } 301 | } 302 | 303 | fn dec(&mut self, idx: Index) { 304 | let slot = &mut self.slots[idx]; 305 | match slot { 306 | Slot::Occupied { ref_count, .. } => { 307 | if *ref_count == 0 { 308 | *slot = Slot::Vacant { 309 | next_free: self.next_free, 310 | }; 311 | self.next_free = idx; 312 | } else { 313 | *ref_count -= 1; 314 | } 315 | } 316 | Slot::Vacant { .. } => panic!("taken slot has been improperly freed"), 317 | } 318 | } 319 | } 320 | -------------------------------------------------------------------------------- /src/gc.rs: -------------------------------------------------------------------------------- 1 | use core::{ 2 | alloc::Layout, 3 | borrow::Borrow, 4 | fmt::{self, Debug, Display, Pointer}, 5 | hash::{Hash, Hasher}, 6 | marker::PhantomData, 7 | ops::Deref, 8 | ptr::NonNull, 9 | }; 10 | 11 | use crate::{ 12 | barrier::{Unlock, Write}, 13 | collect::{Collect, Trace}, 14 | context::Mutation, 15 | gc_weak::GcWeak, 16 | static_collect::Static, 17 | types::{GcBox, GcBoxHeader, GcBoxInner, GcColor, Invariant}, 18 | Finalization, 19 | }; 20 | 21 | /// A garbage collected pointer to a type T. 
Implements Copy, and is implemented as a plain machine 22 | /// pointer. You can only allocate `Gc` pointers through a `&Mutation<'gc>` inside an arena type, 23 | /// and through "generativity" such `Gc` pointers may not escape the arena they were born in or 24 | /// be stored inside TLS. This, combined with correct `Collect` implementations, means that `Gc` 25 | /// pointers will never be dangling and are always safe to access. 26 | pub struct Gc<'gc, T: ?Sized + 'gc> { 27 | pub(crate) ptr: NonNull>, 28 | pub(crate) _invariant: Invariant<'gc>, 29 | } 30 | 31 | impl<'gc, T: Debug + ?Sized + 'gc> Debug for Gc<'gc, T> { 32 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 33 | fmt::Debug::fmt(&**self, fmt) 34 | } 35 | } 36 | 37 | impl<'gc, T: ?Sized + 'gc> Pointer for Gc<'gc, T> { 38 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 39 | fmt::Pointer::fmt(&Gc::as_ptr(*self), fmt) 40 | } 41 | } 42 | 43 | impl<'gc, T: Display + ?Sized + 'gc> Display for Gc<'gc, T> { 44 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 45 | fmt::Display::fmt(&**self, fmt) 46 | } 47 | } 48 | 49 | impl<'gc, T: ?Sized + 'gc> Copy for Gc<'gc, T> {} 50 | 51 | impl<'gc, T: ?Sized + 'gc> Clone for Gc<'gc, T> { 52 | #[inline] 53 | fn clone(&self) -> Gc<'gc, T> { 54 | *self 55 | } 56 | } 57 | 58 | unsafe impl<'gc, T: ?Sized + 'gc> Collect<'gc> for Gc<'gc, T> { 59 | #[inline] 60 | fn trace>(&self, cc: &mut C) { 61 | cc.trace_gc(Self::erase(*self)) 62 | } 63 | } 64 | 65 | impl<'gc, T: ?Sized + 'gc> Deref for Gc<'gc, T> { 66 | type Target = T; 67 | 68 | #[inline] 69 | fn deref(&self) -> &T { 70 | unsafe { &self.ptr.as_ref().value } 71 | } 72 | } 73 | 74 | impl<'gc, T: ?Sized + 'gc> AsRef for Gc<'gc, T> { 75 | #[inline] 76 | fn as_ref(&self) -> &T { 77 | unsafe { &self.ptr.as_ref().value } 78 | } 79 | } 80 | 81 | impl<'gc, T: ?Sized + 'gc> Borrow for Gc<'gc, T> { 82 | #[inline] 83 | fn borrow(&self) -> &T { 84 | unsafe { &self.ptr.as_ref().value } 85 | } 86 | } 87 | 88 | impl<'gc, T: Collect<'gc> + 'gc> Gc<'gc, T> { 89 | #[inline] 90 | pub fn new(mc: &Mutation<'gc>, t: T) -> Gc<'gc, T> { 91 | Gc { 92 | ptr: mc.allocate(t), 93 | _invariant: PhantomData, 94 | } 95 | } 96 | } 97 | 98 | impl<'gc, T: 'static> Gc<'gc, T> { 99 | /// Create a new `Gc` pointer from a static value. 100 | /// 101 | /// This method does not require that the type `T` implement `Collect`. This uses [`Static`] 102 | /// internally to automatically provide a trivial `Collect` impl and is equivalent to the 103 | /// following code: 104 | /// 105 | /// ```rust 106 | /// # use gc_arena::{Gc, Static}; 107 | /// # fn main() { 108 | /// # gc_arena::arena::rootless_mutate(|mc| { 109 | /// struct MyStaticStruct; 110 | /// let p = Gc::new(mc, Static(MyStaticStruct)); 111 | /// // This is allowed because `Static` is `#[repr(transparent)]` 112 | /// let p: Gc = unsafe { Gc::cast(p) }; 113 | /// # }); 114 | /// # } 115 | /// ``` 116 | #[inline] 117 | pub fn new_static(mc: &Mutation<'gc>, t: T) -> Gc<'gc, T> { 118 | let p = Gc::new(mc, Static(t)); 119 | // SAFETY: `Static` is `#[repr(transparent)]`. 120 | unsafe { Gc::cast::(p) } 121 | } 122 | } 123 | 124 | impl<'gc, T: ?Sized + 'gc> Gc<'gc, T> { 125 | /// Cast a `Gc` pointer to a different type. 126 | /// 127 | /// # Safety 128 | /// It must be valid to dereference a `*mut U` that has come from casting a `*mut T`. 
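Building on the `rootless_mutate` helper that the doc example above uses, here is a minimal sketch of allocating and copying `Gc` pointers. `rootless_mutate` provides a `&Mutation` without constructing a full arena; everything else shown is the API defined in this file (plus `Gc::ptr_eq`, defined a little further below).

```rust
use gc_arena::Gc;

fn main() {
    gc_arena::arena::rootless_mutate(|mc| {
        let a = Gc::new(mc, 42i32);
        let b = a; // `Gc` is `Copy`; both point at the same allocation.
        assert!(Gc::ptr_eq(a, b));
        assert_eq!(*a + *b, 84); // `Deref` gives access to the inner value.
    });
}
```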
129 | #[inline] 130 | pub unsafe fn cast(this: Gc<'gc, T>) -> Gc<'gc, U> { 131 | Gc { 132 | ptr: NonNull::cast(this.ptr), 133 | _invariant: PhantomData, 134 | } 135 | } 136 | 137 | /// Cast a `Gc` to the unit type. 138 | /// 139 | /// This is exactly the same as `unsafe { Gc::cast::<()>(this) }`, but we can provide this 140 | /// method safely because it is always safe to dereference a `*mut ()` that has come from 141 | /// casting a `*mut T`. 142 | #[inline] 143 | pub fn erase(this: Gc<'gc, T>) -> Gc<'gc, ()> { 144 | unsafe { Gc::cast(this) } 145 | } 146 | 147 | /// Retrieve a `Gc` from a raw pointer obtained from `Gc::as_ptr` 148 | /// 149 | /// # Safety 150 | /// The provided pointer must have been obtained from `Gc::as_ptr`, and the pointer must not 151 | /// have been collected yet. 152 | #[inline] 153 | pub unsafe fn from_ptr(ptr: *const T) -> Gc<'gc, T> { 154 | let layout = Layout::new::(); 155 | let (_, header_offset) = layout.extend(Layout::for_value(&*ptr)).unwrap(); 156 | let header_offset = -(header_offset as isize); 157 | let ptr = (ptr as *mut T).byte_offset(header_offset) as *mut GcBoxInner; 158 | Gc { 159 | ptr: NonNull::new_unchecked(ptr), 160 | _invariant: PhantomData, 161 | } 162 | } 163 | } 164 | 165 | impl<'gc, T: Unlock + ?Sized + 'gc> Gc<'gc, T> { 166 | /// Shorthand for [`Gc::write`]`(mc, self).`[`unlock()`](Write::unlock). 167 | #[inline] 168 | pub fn unlock(self, mc: &Mutation<'gc>) -> &'gc T::Unlocked { 169 | Gc::write(mc, self); 170 | // SAFETY: see doc-comment. 171 | unsafe { self.as_ref().unlock_unchecked() } 172 | } 173 | } 174 | 175 | impl<'gc, T: ?Sized + 'gc> Gc<'gc, T> { 176 | /// Obtains a long-lived reference to the contents of this `Gc`. 177 | /// 178 | /// Unlike `AsRef` or `Deref`, the returned reference isn't bound to the `Gc` itself, and 179 | /// will stay valid for the entirety of the current arena callback. 180 | #[inline] 181 | pub fn as_ref(self: Gc<'gc, T>) -> &'gc T { 182 | // SAFETY: The returned reference cannot escape the current arena callback, as `&'gc T` 183 | // never implements `Collect` (unless `'gc` is `'static`, which is impossible here), and 184 | // so cannot be stored inside the GC root. 185 | unsafe { &self.ptr.as_ref().value } 186 | } 187 | 188 | #[inline] 189 | pub fn downgrade(this: Gc<'gc, T>) -> GcWeak<'gc, T> { 190 | GcWeak { inner: this } 191 | } 192 | 193 | /// Triggers a write barrier on this `Gc`, allowing for safe mutation. 194 | /// 195 | /// This triggers an unrestricted *backwards* write barrier on this pointer, meaning that it is 196 | /// guaranteed that this pointer can safely adopt *any* arbitrary child pointers (until the next 197 | /// time that collection is triggered). 198 | /// 199 | /// It returns a reference to the inner `T` wrapped in a `Write` marker to allow for 200 | /// unrestricted mutation on the held type or any of its directly held fields. 201 | #[inline] 202 | pub fn write(mc: &Mutation<'gc>, gc: Self) -> &'gc Write { 203 | unsafe { 204 | mc.backward_barrier(Gc::erase(gc), None); 205 | // SAFETY: the write barrier stays valid until the end of the current callback. 206 | Write::assume(gc.as_ref()) 207 | } 208 | } 209 | 210 | /// Returns true if two `Gc`s point to the same allocation. 211 | /// 212 | /// Similarly to `Rc::ptr_eq` and `Arc::ptr_eq`, this function ignores the metadata of `dyn` 213 | /// pointers. 
214 | #[inline] 215 | pub fn ptr_eq(this: Gc<'gc, T>, other: Gc<'gc, T>) -> bool { 216 | // TODO: Equivalent to `core::ptr::addr_eq`: 217 | // https://github.com/rust-lang/rust/issues/116324 218 | Gc::as_ptr(this) as *const () == Gc::as_ptr(other) as *const () 219 | } 220 | 221 | #[inline] 222 | pub fn as_ptr(gc: Gc<'gc, T>) -> *const T { 223 | unsafe { 224 | let inner = gc.ptr.as_ptr(); 225 | core::ptr::addr_of!((*inner).value) as *const T 226 | } 227 | } 228 | 229 | /// Returns true when a pointer is *dead* during finalization. This is equivalent to 230 | /// `GcWeak::is_dead` for strong pointers. 231 | /// 232 | /// Any strong pointer reachable from the root will never be dead, BUT there can be strong 233 | /// pointers reachable only through other weak pointers that can be dead. 234 | #[inline] 235 | pub fn is_dead(_: &Finalization<'gc>, gc: Gc<'gc, T>) -> bool { 236 | let inner = unsafe { gc.ptr.as_ref() }; 237 | matches!(inner.header.color(), GcColor::White | GcColor::WhiteWeak) 238 | } 239 | 240 | /// Manually marks a dead `Gc` pointer as reachable and keeps it alive. 241 | /// 242 | /// Equivalent to `GcWeak::resurrect` for strong pointers. Manually marks this pointer and 243 | /// all transitively held pointers as reachable, thus keeping them from being dropped this 244 | /// collection cycle. 245 | #[inline] 246 | pub fn resurrect(fc: &Finalization<'gc>, gc: Gc<'gc, T>) { 247 | unsafe { 248 | fc.resurrect(GcBox::erase(gc.ptr)); 249 | } 250 | } 251 | } 252 | 253 | impl<'gc, T: PartialEq + ?Sized + 'gc> PartialEq for Gc<'gc, T> { 254 | fn eq(&self, other: &Self) -> bool { 255 | (**self).eq(other) 256 | } 257 | 258 | fn ne(&self, other: &Self) -> bool { 259 | (**self).ne(other) 260 | } 261 | } 262 | 263 | impl<'gc, T: Eq + ?Sized + 'gc> Eq for Gc<'gc, T> {} 264 | 265 | impl<'gc, T: PartialOrd + ?Sized + 'gc> PartialOrd for Gc<'gc, T> { 266 | fn partial_cmp(&self, other: &Self) -> Option { 267 | (**self).partial_cmp(other) 268 | } 269 | 270 | fn le(&self, other: &Self) -> bool { 271 | (**self).le(other) 272 | } 273 | 274 | fn lt(&self, other: &Self) -> bool { 275 | (**self).lt(other) 276 | } 277 | 278 | fn ge(&self, other: &Self) -> bool { 279 | (**self).ge(other) 280 | } 281 | 282 | fn gt(&self, other: &Self) -> bool { 283 | (**self).gt(other) 284 | } 285 | } 286 | 287 | impl<'gc, T: Ord + ?Sized + 'gc> Ord for Gc<'gc, T> { 288 | fn cmp(&self, other: &Self) -> core::cmp::Ordering { 289 | (**self).cmp(other) 290 | } 291 | } 292 | 293 | impl<'gc, T: Hash + ?Sized + 'gc> Hash for Gc<'gc, T> { 294 | fn hash(&self, state: &mut H) { 295 | (**self).hash(state) 296 | } 297 | } 298 | -------------------------------------------------------------------------------- /src/gc_weak.rs: -------------------------------------------------------------------------------- 1 | use crate::collect::{Collect, Trace}; 2 | use crate::context::Finalization; 3 | use crate::gc::Gc; 4 | use crate::types::GcBox; 5 | use crate::Mutation; 6 | 7 | use core::fmt::{self, Debug}; 8 | 9 | pub struct GcWeak<'gc, T: ?Sized + 'gc> { 10 | pub(crate) inner: Gc<'gc, T>, 11 | } 12 | 13 | impl<'gc, T: ?Sized + 'gc> Copy for GcWeak<'gc, T> {} 14 | 15 | impl<'gc, T: ?Sized + 'gc> Clone for GcWeak<'gc, T> { 16 | #[inline] 17 | fn clone(&self) -> GcWeak<'gc, T> { 18 | *self 19 | } 20 | } 21 | 22 | impl<'gc, T: ?Sized + 'gc> Debug for GcWeak<'gc, T> { 23 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 24 | write!(fmt, "(GcWeak)") 25 | } 26 | } 27 | 28 | unsafe impl<'gc, T: ?Sized + 'gc> Collect<'gc> for GcWeak<'gc, T> { 29 
| #[inline] 30 | fn trace>(&self, cc: &mut C) { 31 | cc.trace_gc_weak(Self::erase(*self)) 32 | } 33 | } 34 | 35 | impl<'gc, T: ?Sized + 'gc> GcWeak<'gc, T> { 36 | /// If the `GcWeak` pointer can be safely upgraded to a strong pointer, upgrade it. 37 | /// 38 | /// This will fail if the value the `GcWeak` points to is dropped, or if we are in the 39 | /// [`crate::arena::CollectionPhase::Sweeping`] phase and we know the pointer *will* be dropped. 40 | #[inline] 41 | pub fn upgrade(self, mc: &Mutation<'gc>) -> Option> { 42 | let ptr = unsafe { GcBox::erase(self.inner.ptr) }; 43 | mc.upgrade(ptr).then(|| self.inner) 44 | } 45 | 46 | /// Returns whether the value referenced by this `GcWeak` has already been dropped. 47 | /// 48 | /// # Note 49 | /// 50 | /// This is not the same as using [`GcWeak::upgrade`] and checking if the result is `None`! A 51 | /// `GcWeak` pointer can fail to upgrade *without* having been dropped if the current collection 52 | /// phase is [`crate::arena::CollectionPhase::Sweeping`] and the pointer *will* be dropped. 53 | /// 54 | /// It is not safe to use this to use this and casting as a substitute for [`GcWeak::upgrade`]. 55 | #[inline] 56 | pub fn is_dropped(self) -> bool { 57 | !unsafe { self.inner.ptr.as_ref() }.header.is_live() 58 | } 59 | 60 | /// Returns true when a pointer is *dead* during finalization. 61 | /// 62 | /// This is a weaker condition than being *dropped*, as the pointer *may* still be valid. Being 63 | /// *dead* means that there were no strong pointers pointing to this weak pointer that were 64 | /// found by the marking phase, and if it is not already dropped, it *will* be dropped as soon 65 | /// as collection resumes. 66 | /// 67 | /// If the pointer is still valid, it may be resurrected using `GcWeak::upgrade` or 68 | /// `GcWeak::resurrect`. 69 | /// 70 | /// NOTE: This returns true if the pointer was destined to be collected at the **start** of the 71 | /// current finalization callback. Resurrecting one pointer can transitively resurrect others, 72 | /// and this method does not reflect this from within the same finalization call! If transitive 73 | /// resurrection is important, you may have to carefully call finalize multiple times for one 74 | /// collection cycle with marking stages in-between, and in the precise order that you want. 75 | #[inline] 76 | pub fn is_dead(self, fc: &Finalization<'gc>) -> bool { 77 | Gc::is_dead(fc, self.inner) 78 | } 79 | 80 | /// Manually marks a dead (but non-dropped) `GcWeak` as strongly reachable and keeps it alive. 81 | /// 82 | /// This is similar to a write barrier in that it moves the collection phase back to `Marking` 83 | /// if it is not already there. All transitively held pointers from this will also be marked as 84 | /// reachable once marking resumes. 85 | /// 86 | /// Returns the upgraded `Gc` pointer as a convenience. Whether or not the strong pointer is 87 | /// stored anywhere, the value and all transitively reachable values are still guaranteed to not 88 | /// be dropped this collection cycle. 89 | #[inline] 90 | pub fn resurrect(self, fc: &Finalization<'gc>) -> Option> { 91 | // SAFETY: We know that we are currently marking, so any non-dropped pointer is safe to 92 | // resurrect. 93 | if unsafe { self.inner.ptr.as_ref() }.header.is_live() { 94 | Gc::resurrect(fc, self.inner); 95 | Some(self.inner) 96 | } else { 97 | None 98 | } 99 | } 100 | 101 | /// Returns true if two `GcWeak`s point to the same allocation. 
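A short sketch of the weak-pointer lifecycle described above: a value that is reachable only through a `GcWeak` is dropped by a full collection, after which `upgrade` fails and `is_dropped` reports true. This assumes the `Arena::{new, mutate, collect_all}` methods and the `Rootable!` macro from elsewhere in the crate; `RefLock` and `borrow_mut` are the lock types shown later in `src/lock.rs`.

```rust
use gc_arena::{Arena, Gc, GcWeak, RefLock, Rootable};

fn main() {
    // The root stores only a *weak* pointer, so the pointee is collectable.
    let mut arena = Arena::<Rootable![Gc<'_, RefLock<Option<GcWeak<'_, i32>>>>]>::new(|mc| {
        Gc::new(mc, RefLock::new(None))
    });

    arena.mutate(|mc, root| {
        let strong = Gc::new(mc, 123);
        *root.borrow_mut(mc) = Some(Gc::downgrade(strong));
        // Upgrading succeeds while a strong pointer is still live.
        assert!(root.borrow().unwrap().upgrade(mc).is_some());
    });

    // No strong pointer survived the mutation above, so a full collection drops
    // the value but keeps the box alive for the sake of the weak pointer.
    arena.collect_all();

    arena.mutate(|mc, root| {
        let weak = root.borrow().unwrap();
        assert!(weak.is_dropped());
        assert!(weak.upgrade(mc).is_none());
    });
}
```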
102 | /// 103 | /// Similarly to `Rc::ptr_eq` and `Arc::ptr_eq`, this function ignores the metadata of `dyn` 104 | /// pointers. 105 | #[inline] 106 | pub fn ptr_eq(this: GcWeak<'gc, T>, other: GcWeak<'gc, T>) -> bool { 107 | // TODO: Equivalent to `core::ptr::addr_eq`: 108 | // https://github.com/rust-lang/rust/issues/116324 109 | this.as_ptr() as *const () == other.as_ptr() as *const () 110 | } 111 | 112 | #[inline] 113 | pub fn as_ptr(self) -> *const T { 114 | Gc::as_ptr(self.inner) 115 | } 116 | 117 | /// Cast the internal pointer to a different type. 118 | /// 119 | /// # Safety 120 | /// It must be valid to dereference a `*mut U` that has come from casting a `*mut T`. 121 | #[inline] 122 | pub unsafe fn cast(this: GcWeak<'gc, T>) -> GcWeak<'gc, U> { 123 | GcWeak { 124 | inner: Gc::cast::(this.inner), 125 | } 126 | } 127 | 128 | /// Cast a `GcWeak` to the unit type. 129 | /// 130 | /// This is exactly the same as `unsafe { GcWeak::cast::<()>(this) }`, but we can provide this 131 | /// method safely because it is always safe to dereference a `*mut ()` that has come from 132 | /// casting a `*mut T`. 133 | #[inline] 134 | pub fn erase(this: GcWeak<'gc, T>) -> GcWeak<'gc, ()> { 135 | GcWeak { 136 | inner: Gc::erase(this.inner), 137 | } 138 | } 139 | 140 | /// Retrieve a `GcWeak` from a raw pointer obtained from `GcWeak::as_ptr` 141 | /// 142 | /// # Safety 143 | /// The provided pointer must have been obtained from `GcWeak::as_ptr` or `Gc::as_ptr`, and 144 | /// the pointer must not have been *fully* collected yet (it may be a dropped but valid weak 145 | /// pointer). 146 | #[inline] 147 | pub unsafe fn from_ptr(ptr: *const T) -> GcWeak<'gc, T> { 148 | GcWeak { 149 | inner: Gc::from_ptr(ptr), 150 | } 151 | } 152 | } 153 | -------------------------------------------------------------------------------- /src/hashbrown.rs: -------------------------------------------------------------------------------- 1 | #[cfg(feature = "allocator-api2")] 2 | mod inner { 3 | use allocator_api2::alloc::Allocator; 4 | 5 | use crate::collect::{Collect, Trace}; 6 | 7 | unsafe impl<'gc, K, V, S, A> Collect<'gc> for hashbrown::HashMap 8 | where 9 | K: Collect<'gc>, 10 | V: Collect<'gc>, 11 | S: 'static, 12 | A: Allocator + Clone + Collect<'gc>, 13 | { 14 | const NEEDS_TRACE: bool = K::NEEDS_TRACE || V::NEEDS_TRACE || A::NEEDS_TRACE; 15 | 16 | #[inline] 17 | fn trace>(&self, cc: &mut C) { 18 | for (k, v) in self { 19 | cc.trace(k); 20 | cc.trace(v); 21 | } 22 | cc.trace(self.allocator()); 23 | } 24 | } 25 | 26 | unsafe impl<'gc, T, S, A> Collect<'gc> for hashbrown::HashSet 27 | where 28 | T: Collect<'gc>, 29 | S: 'static, 30 | A: Allocator + Clone + Collect<'gc>, 31 | { 32 | const NEEDS_TRACE: bool = T::NEEDS_TRACE || A::NEEDS_TRACE; 33 | 34 | #[inline] 35 | fn trace>(&self, cc: &mut C) { 36 | for v in self { 37 | cc.trace(v); 38 | } 39 | cc.trace(self.allocator()); 40 | } 41 | } 42 | 43 | unsafe impl<'gc, T, A> Collect<'gc> for hashbrown::HashTable 44 | where 45 | T: Collect<'gc>, 46 | A: Allocator + Clone + Collect<'gc>, 47 | { 48 | const NEEDS_TRACE: bool = T::NEEDS_TRACE || A::NEEDS_TRACE; 49 | 50 | #[inline] 51 | fn trace>(&self, cc: &mut C) { 52 | for v in self { 53 | cc.trace(v); 54 | } 55 | cc.trace(self.allocator()); 56 | } 57 | } 58 | } 59 | 60 | #[cfg(not(feature = "allocator-api2"))] 61 | mod inner { 62 | use crate::collect::{Collect, Trace}; 63 | 64 | unsafe impl<'gc, K, V, S> Collect<'gc> for hashbrown::HashMap 65 | where 66 | K: Collect<'gc>, 67 | V: Collect<'gc>, 68 | S: 'static, 69 | { 70 | 
const NEEDS_TRACE: bool = K::NEEDS_TRACE || V::NEEDS_TRACE; 71 | 72 | #[inline] 73 | fn trace>(&self, cc: &mut C) { 74 | for (k, v) in self { 75 | cc.trace(k); 76 | cc.trace(v); 77 | } 78 | } 79 | } 80 | 81 | unsafe impl<'gc, T, S> Collect<'gc> for hashbrown::HashSet 82 | where 83 | T: Collect<'gc>, 84 | S: 'static, 85 | { 86 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 87 | 88 | #[inline] 89 | fn trace>(&self, cc: &mut C) { 90 | for v in self { 91 | cc.trace(v); 92 | } 93 | } 94 | } 95 | 96 | unsafe impl<'gc, T> Collect<'gc> for hashbrown::HashTable 97 | where 98 | T: Collect<'gc>, 99 | { 100 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 101 | 102 | #[inline] 103 | fn trace>(&self, cc: &mut C) { 104 | for v in self { 105 | cc.trace(v); 106 | } 107 | } 108 | } 109 | } 110 | -------------------------------------------------------------------------------- /src/lib.rs: -------------------------------------------------------------------------------- 1 | #![no_std] 2 | #![cfg_attr(miri, feature(strict_provenance))] 3 | 4 | #[cfg(feature = "std")] 5 | extern crate std; 6 | 7 | extern crate alloc; 8 | 9 | pub mod arena; 10 | pub mod barrier; 11 | pub mod collect; 12 | mod collect_impl; 13 | mod context; 14 | pub mod dynamic_roots; 15 | mod gc; 16 | mod gc_weak; 17 | pub mod lock; 18 | pub mod metrics; 19 | mod no_drop; 20 | mod static_collect; 21 | mod types; 22 | mod unsize; 23 | 24 | #[cfg(feature = "allocator-api2")] 25 | pub mod allocator_api; 26 | 27 | #[cfg(feature = "hashbrown")] 28 | mod hashbrown; 29 | 30 | #[doc(hidden)] 31 | pub use gc_arena_derive::__unelide_lifetimes; 32 | 33 | #[doc(hidden)] 34 | pub use self::{arena::__DynRootable, no_drop::__MustNotImplDrop, unsize::__CoercePtrInternal}; 35 | 36 | pub use self::{ 37 | arena::{Arena, Rootable}, 38 | collect::Collect, 39 | context::{Finalization, Mutation}, 40 | dynamic_roots::{DynamicRoot, DynamicRootSet}, 41 | gc::Gc, 42 | gc_weak::GcWeak, 43 | lock::{GcLock, GcRefLock, Lock, RefLock}, 44 | static_collect::Static, 45 | }; 46 | -------------------------------------------------------------------------------- /src/lock.rs: -------------------------------------------------------------------------------- 1 | //! GC-aware interior mutability types. 2 | 3 | use core::{ 4 | cell::{BorrowError, BorrowMutError, Cell, Ref, RefCell, RefMut}, 5 | cmp::Ordering, 6 | fmt, 7 | }; 8 | 9 | use crate::{ 10 | barrier::Unlock, 11 | collect::{Collect, Trace}, 12 | Gc, Mutation, 13 | }; 14 | 15 | // Helper macro to factor out the common parts of locks types. 16 | macro_rules! make_lock_wrapper { 17 | ( 18 | $(#[$meta:meta])* 19 | locked = $locked_type:ident as $gc_locked_type:ident; 20 | unlocked = $unlocked_type:ident unsafe $unsafe_unlock_method:ident; 21 | impl Sized { $($sized_items:tt)* } 22 | impl ?Sized { $($unsized_items:tt)* } 23 | ) => { 24 | /// A wrapper around a [` 25 | #[doc = stringify!($unlocked_type)] 26 | /// `] that implements [`Collect`]. 27 | /// 28 | /// Only provides safe read access to the wrapped [` 29 | #[doc = stringify!($unlocked_type)] 30 | /// `], full write access requires unsafety. 31 | /// 32 | /// If the ` 33 | #[doc = stringify!($locked_type)] 34 | /// ` is directly held in a [`Gc`] pointer, safe mutable access is provided, 35 | /// since methods on [`Gc`] can ensure that the write barrier is called. 
36 | $(#[$meta])* 37 | #[repr(transparent)] 38 | pub struct $locked_type { 39 | cell: $unlocked_type, 40 | } 41 | 42 | #[doc = concat!("An alias for `Gc<'gc, ", stringify!($locked_type), ">`.")] 43 | pub type $gc_locked_type<'gc, T> = Gc<'gc, $locked_type>; 44 | 45 | impl $locked_type { 46 | #[inline] 47 | pub fn new(t: T) -> $locked_type { 48 | Self { cell: $unlocked_type::new(t) } 49 | } 50 | 51 | #[inline] 52 | pub fn into_inner(self) -> T { 53 | self.cell.into_inner() 54 | } 55 | 56 | $($sized_items)* 57 | } 58 | 59 | impl $locked_type { 60 | #[inline] 61 | pub fn as_ptr(&self) -> *mut T { 62 | self.cell.as_ptr() 63 | } 64 | 65 | $($unsized_items)* 66 | 67 | #[doc = concat!("Access the wrapped [`", stringify!($unlocked_type), "`].")] 68 | /// 69 | /// # Safety 70 | /// In order to maintain the invariants of the garbage collector, no new [`Gc`] 71 | /// pointers may be adopted by this type as a result of the interior mutability 72 | /// afforded by directly accessing the inner [` 73 | #[doc = stringify!($unlocked_type)] 74 | /// `], unless the write barrier for the containing [`Gc`] pointer is invoked manually 75 | /// before collection is triggered. 76 | #[inline] 77 | pub unsafe fn $unsafe_unlock_method(&self) -> &$unlocked_type { 78 | &self.cell 79 | } 80 | 81 | #[inline] 82 | pub fn get_mut(&mut self) -> &mut T { 83 | self.cell.get_mut() 84 | } 85 | } 86 | 87 | impl Unlock for $locked_type { 88 | type Unlocked = $unlocked_type; 89 | 90 | #[inline] 91 | unsafe fn unlock_unchecked(&self) -> &Self::Unlocked { 92 | &self.cell 93 | } 94 | } 95 | 96 | impl From for $locked_type { 97 | #[inline] 98 | fn from(t: T) -> Self { 99 | Self::new(t) 100 | } 101 | } 102 | 103 | impl From<$unlocked_type> for $locked_type { 104 | #[inline] 105 | fn from(cell: $unlocked_type) -> Self { 106 | Self { cell } 107 | } 108 | } 109 | }; 110 | } 111 | 112 | make_lock_wrapper!( 113 | #[derive(Default)] 114 | locked = Lock as GcLock; 115 | unlocked = Cell unsafe as_cell; 116 | impl Sized { 117 | #[inline] 118 | pub fn get(&self) -> T where T: Copy { 119 | self.cell.get() 120 | } 121 | 122 | #[inline] 123 | pub fn take(&self) -> T where T: Default { 124 | // Despite mutating the contained value, this doesn't need a write barrier, as 125 | // the return value of `Default::default` can never contain (non-leaked) `Gc` pointers. 126 | // 127 | // The reason for this is somewhat subtle, and boils down to lifetime parametricity. 128 | // Because Rust doesn't allow naming concrete lifetimes, and because `Default` doesn't 129 | // have any lifetime parameters, any potential `'gc` lifetime in `T` must be 130 | // existentially quantified. As such, a `Default` implementation that tries to smuggle 131 | // a branded `Gc` pointer or `Mutation` through external state (e.g. thread 132 | // locals) must use unsafe code and cannot be sound in the first place, as it has no 133 | // way to ensure that the smuggled data has the correct `'gc` brand. 
134 | self.cell.take() 135 | } 136 | } 137 | impl ?Sized {} 138 | ); 139 | 140 | impl fmt::Debug for Lock { 141 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 142 | f.debug_tuple("Lock").field(&self.cell).finish() 143 | } 144 | } 145 | 146 | impl<'gc, T: Copy + 'gc> Gc<'gc, Lock> { 147 | #[inline] 148 | pub fn get(self) -> T { 149 | self.cell.get() 150 | } 151 | 152 | #[inline] 153 | pub fn set(self, mc: &Mutation<'gc>, t: T) { 154 | self.unlock(mc).set(t); 155 | } 156 | } 157 | 158 | unsafe impl<'gc, T: Collect<'gc> + Copy + 'gc> Collect<'gc> for Lock { 159 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 160 | 161 | #[inline] 162 | fn trace>(&self, cc: &mut C) { 163 | // Okay, so this calls `T::trace` on a *copy* of `T`. 164 | // 165 | // This is theoretically a correctness issue, because technically `T` could have interior 166 | // mutability and modify the copy, and this modification would be lost. 167 | // 168 | // However, currently there is not a type in rust that allows for interior mutability that 169 | // is also `Copy`, so this *currently* impossible to even observe. 170 | // 171 | // I am assured that this requirement is technially "only" a lint, and could be relaxed in 172 | // the future. If this requirement is ever relaxed in some way, fixing this is relatively 173 | // easy, by setting the value of the cell to the copy we make, after tracing (via a drop 174 | // guard in case of panics). Additionally, this is not a safety issue, only a correctness 175 | // issue, the changes will "just" be lost after this call returns. 176 | // 177 | // It could be fixed now, but since it is not even testable because it is currently 178 | // *impossible*, I did not bother. One day this may need to be implemented! 179 | cc.trace(&self.get()); 180 | } 181 | } 182 | 183 | // Can't use `#[derive]` because of the non-standard bounds. 184 | impl Clone for Lock { 185 | #[inline] 186 | fn clone(&self) -> Self { 187 | Self::new(self.get()) 188 | } 189 | } 190 | 191 | // Can't use `#[derive]` because of the non-standard bounds. 192 | impl PartialEq for Lock { 193 | #[inline] 194 | fn eq(&self, other: &Self) -> bool { 195 | self.get() == other.get() 196 | } 197 | } 198 | 199 | // Can't use `#[derive]` because of the non-standard bounds. 200 | impl Eq for Lock {} 201 | 202 | // Can't use `#[derive]` because of the non-standard bounds. 203 | impl PartialOrd for Lock { 204 | #[inline] 205 | fn partial_cmp(&self, other: &Self) -> Option { 206 | self.get().partial_cmp(&other.get()) 207 | } 208 | } 209 | 210 | // Can't use `#[derive]` because of the non-standard bounds. 211 | impl Ord for Lock { 212 | #[inline] 213 | fn cmp(&self, other: &Self) -> Ordering { 214 | self.get().cmp(&other.get()) 215 | } 216 | } 217 | 218 | make_lock_wrapper!( 219 | #[derive(Clone, Default, Eq, PartialEq, Ord, PartialOrd)] 220 | locked = RefLock as GcRefLock; 221 | unlocked = RefCell unsafe as_ref_cell; 222 | impl Sized { 223 | #[inline] 224 | pub fn take(&self) -> T where T: Default { 225 | // See comment in `Lock::take`. 
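As a usage sketch of the safe mutation path that `Lock` is designed for: a `Lock` held directly inside a `Gc` can be written through `Gc`'s own methods, which issue the required write barrier. The `Arena` and `Rootable!` APIs used here are assumed from elsewhere in the crate.

```rust
use gc_arena::{Arena, Gc, Lock, Rootable};

fn main() {
    let arena = Arena::<Rootable![Gc<'_, Lock<i32>>]>::new(|mc| Gc::new(mc, Lock::new(0)));

    arena.mutate(|mc, counter| {
        // `set` goes through `Gc::unlock`, which triggers the write barrier.
        counter.set(mc, counter.get() + 1);
    });
    arena.mutate(|_, counter| assert_eq!(counter.get(), 1));
}
```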
226 | self.cell.take() 227 | } 228 | } 229 | impl ?Sized { 230 | #[track_caller] 231 | #[inline] 232 | pub fn borrow<'a>(&'a self) -> Ref<'a, T> { 233 | self.cell.borrow() 234 | } 235 | 236 | #[inline] 237 | pub fn try_borrow<'a>(&'a self) -> Result, BorrowError> { 238 | self.cell.try_borrow() 239 | } 240 | } 241 | ); 242 | 243 | impl fmt::Debug for RefLock { 244 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 245 | let mut fmt = fmt.debug_tuple("RefLock"); 246 | match self.try_borrow() { 247 | Ok(borrow) => fmt.field(&borrow), 248 | Err(_) => { 249 | // The RefLock is mutably borrowed so we can't look at its value 250 | // here. Show a placeholder instead. 251 | struct BorrowedPlaceholder; 252 | 253 | impl fmt::Debug for BorrowedPlaceholder { 254 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 255 | f.write_str("") 256 | } 257 | } 258 | 259 | fmt.field(&BorrowedPlaceholder) 260 | } 261 | } 262 | .finish() 263 | } 264 | } 265 | 266 | impl<'gc, T: ?Sized + 'gc> Gc<'gc, RefLock> { 267 | #[track_caller] 268 | #[inline] 269 | pub fn borrow(self) -> Ref<'gc, T> { 270 | RefLock::borrow(self.as_ref()) 271 | } 272 | 273 | #[inline] 274 | pub fn try_borrow(self) -> Result, BorrowError> { 275 | RefLock::try_borrow(self.as_ref()) 276 | } 277 | 278 | #[track_caller] 279 | #[inline] 280 | pub fn borrow_mut(self, mc: &Mutation<'gc>) -> RefMut<'gc, T> { 281 | self.unlock(mc).borrow_mut() 282 | } 283 | 284 | #[inline] 285 | pub fn try_borrow_mut(self, mc: &Mutation<'gc>) -> Result, BorrowMutError> { 286 | self.unlock(mc).try_borrow_mut() 287 | } 288 | } 289 | 290 | unsafe impl<'gc, T: Collect<'gc> + 'gc + ?Sized> Collect<'gc> for RefLock { 291 | const NEEDS_TRACE: bool = T::NEEDS_TRACE; 292 | 293 | #[inline] 294 | fn trace>(&self, cc: &mut C) { 295 | cc.trace(&*self.borrow()); 296 | } 297 | } 298 | -------------------------------------------------------------------------------- /src/metrics.rs: -------------------------------------------------------------------------------- 1 | use alloc::rc::Rc; 2 | use core::cell::Cell; 3 | 4 | /// Tuning parameters for a given garbage collected [`crate::Arena`]. 5 | /// 6 | /// Any allocation that occurs during a collection cycle will incur "debt" that is exactly equal to 7 | /// the allocated bytes. This "debt" is paid off by running the collection algorithm some amount of 8 | /// time proportional to the debt. Exactly how much "debt" is paid off and in what proportion by the 9 | /// different parts of the collection algorithm is configured by the chosen values here. We refer to 10 | /// the amount of "debt" paid off by running the collection algorithm as "work". 11 | /// 12 | /// The most important idea behind choosing these tuning parameters is that we always want the 13 | /// collector (when it is not sleeping) to deterministically run *faster* than allocation. We do 14 | /// this so we can be *completely sure* that once collection starts, the collection cycle will 15 | /// finish and memory will not grow without bound. If we are tuning for low pause time however, 16 | /// it is also important that not *too* many costly operations are run within a single call to 17 | /// [`crate::Arena::collect_debt`], and this goal is in tension with the first, more important goal. 
18 | /// 19 | /// How these two goals are balanced is that we must choose our tuning parameters so that the 20 | /// total amount of "work" performed to either *remember* or *free* one byte of allocated data 21 | /// is always *less than one*, as this makes the collector deterministically run faster than the 22 | /// rate of allocation (which is crucial). The closer the amount of "work" performed to remember 23 | /// or free one byte is to 1.0, the slower the collector will go and the higher the maximum amount 24 | /// of used memory will be. The closer the amount of "work" performed to remember or free one 25 | /// byte is to 0.0, the faster the collector will go and the closer it will get to behaving like a 26 | /// stop-the-world collector. 27 | /// 28 | /// All live pointers in a cycle are either remembered or freed once, but it is important that 29 | /// *both paths* require less than one unit of "work" per byte to fully complete. There is no way to 30 | /// predict a priori the ratio of remembered vs forgotten values, so if either path takes too close 31 | /// to or over 1.0 unit of work per byte to complete, collection may run too slowly. 32 | /// 33 | /// # Factors that control the time the GC sleeps 34 | /// 35 | /// `sleep_factor` is fairly self explanatory. Setting this too low will reduce the time of the 36 | /// [`crate::arena::CollectionPhase::Sleeping`] phase but this is not harmful (it can even be set 37 | /// to zero to keep the collector always running!), setting it much larger than 1.0 will make the 38 | /// collector wait a very long time before collecting again, and usually not what you want. 39 | /// 40 | /// `min_sleep` is also self explanatory and usually does not need changing from the default value. 41 | /// It should always be relatively small. 42 | /// 43 | /// # Timing factors for remembered values 44 | /// 45 | /// Every live `Gc` value in an [`crate::Arena`] that is reachable from the root is "remembered". 46 | /// Every remembered value will always have exactly three things done to it in a given cycle: 47 | /// 48 | /// 1) It will at some point be found and marked as reachable (and potentially queued for tracing). 49 | /// When this happens, `mark_factor * alloc_size` work is recorded. 50 | /// 2) Entries in the queue for tracing will eventually be traced by having their 51 | /// [`crate::Collect::trace`] method called. At this time, `trace_factor * alloc_size` work is 52 | /// recorded. Calling `Collect::trace` will usually mark other pointers as reachable and queue 53 | /// them for tracing if they have not already been, so this step may also transitively perform 54 | /// other work, but each step is only performed exactly once for each individual remembered 55 | /// value. 56 | /// 3) During the [`crate::arena::CollectionPhase::Sweeping`] phase, each remembered value has to 57 | /// be iterated over in the sweep list and removed from it. This a small, constant amount of 58 | /// work that is very fast, but it should always perform *some* work to keep pause time low, so 59 | /// `keep_factor * alloc_size` work is recorded. 60 | /// 61 | /// # Timing factors for forgotten values 62 | /// 63 | /// Allocated values that are not reachable in a GC cycle are simpler than 64 | /// remembered values. Only two operations are performed on them, and only during 65 | /// [`crate::arena::CollectionPhase::Sweeping`]: dropping and freeing. 
66 | /// 67 | /// If a value is unreachable, then when it is encountered in the free list it will be dropped (and 68 | /// `drop_factor * alloc_size` work will be recorded), and then the memory backing the value will be 69 | /// freed (and `free_factor * alloc_size` work will be recorded). 70 | /// 71 | /// # Timing factors for weakly reachable values 72 | /// 73 | /// There is actually a *third* possible path for a value to take in a collection cycle which is a 74 | /// hybrid of the two, but thankfully it is not too complex. 75 | /// 76 | /// If a value is (newly) weakly reachable this cycle, first the pointer will be marked as 77 | /// (weakly) reachable (`mark_factor * alloc_size` work is recorded), then during sweeping it will 78 | /// be *dropped* (`drop_factor * alloc_size` work is recorded), and then *kept* (`keep_factor * 79 | /// alloc_size` work is recorded). 80 | /// 81 | /// # Summary 82 | /// 83 | /// This may seem complicated but it is actually not too difficult to make sure that the GC will not 84 | /// stall: *every path that a pointer can take must never do 1.0 or more unit of work per byte within 85 | /// a cycle*. 86 | /// 87 | /// The important formulas to check are: 88 | /// 89 | /// - We need to make sure that remembered values are processed faster than allocation: 90 | /// `mark_factor + trace_factor + keep_factor < 1.0` 91 | /// - We need to make sure that forgotten values are processed faster than allocation: 92 | /// `drop_factor + free_factor < 1.0` 93 | /// - We need to make sure that weakly remembered values are processed faster than allocation: 94 | /// `mark_factor + drop_factor + keep_factor < 1.0` 95 | /// 96 | /// It is also important to note that this is not an exhaustive list of all the possible paths a 97 | /// pointer can take, but every path will always be a *subset* of one of the above paths. The above 98 | /// formulas represent every possible the worst case: for example, if a weakly reachable value has 99 | /// already been dropped then only `mark_factor + keep_factor` work will be recorded, and if we 100 | /// can prove that a reachable value has [`crate::Collect::NEEDS_TRACE`] set to false, then only 101 | /// `mark_factor + keep_factor` work will be recorded. This is not important to remember though, it 102 | /// is true that when the collector elides work it may not actually record that work as performed, 103 | /// but this will only *speed up* collection, it can never cause the collector to stall. 104 | #[derive(Debug, Copy, Clone)] 105 | pub struct Pacing { 106 | /// Controls the length of the [`crate::arena::CollectionPhase::Sleeping`] phase. 107 | /// 108 | /// At the start of a new GC cycle, the collector will wait until the live size reaches 109 | /// ` + * sleep_factor` before starting 110 | /// collection. 111 | /// 112 | /// External memory is ***not*** included in the "remembered size" for the purposes of 113 | /// calculating a new cycle's sleep period. 114 | pub sleep_factor: f64, 115 | 116 | /// The minimum length of the [`crate::arena::CollectionPhase::Sleeping`] phase. 117 | /// 118 | /// if the calculated sleep amount using `sleep_factor` is lower than `min_sleep`, this will 119 | /// be used instead. This is mostly useful when the heap is very small to prevent rapidly 120 | /// restarting collections. 121 | pub min_sleep: usize, 122 | 123 | /// The multiplicative factor for "work" performed per byte when a `Gc` value is first marked as 124 | /// reachable. 
125 | pub mark_factor: f64, 126 | 127 | /// The multiplicative factor for "work" performed per byte when a `Gc` value has its 128 | /// [`crate::Collect::trace`] method called. 129 | pub trace_factor: f64, 130 | 131 | /// The multiplicative factor for "work" performed per byte when a reachable `Gc` value is 132 | /// iterated over during [`crate::arena::CollectionPhase::Sweeping`]. 133 | pub keep_factor: f64, 134 | 135 | /// The multiplicative factor for "work" performed per byte when a `Gc` value that is forgotten 136 | /// or only weakly reachable is dropped during [`crate::arena::CollectionPhase::Sweeping`]. 137 | pub drop_factor: f64, 138 | 139 | /// The multiplicative factor for "work" performed per byte when a forgotten `Gc` value is freed 140 | /// during [`crate::arena::CollectionPhase::Sweeping`]. 141 | pub free_factor: f64, 142 | } 143 | 144 | impl Pacing { 145 | pub const DEFAULT: Pacing = Pacing { 146 | sleep_factor: 0.5, 147 | min_sleep: 4096, 148 | mark_factor: 0.1, 149 | trace_factor: 0.4, 150 | keep_factor: 0.05, 151 | drop_factor: 0.2, 152 | free_factor: 0.3, 153 | }; 154 | 155 | /// A good default "stop-the-world" [`Pacing`] configuration. 156 | /// 157 | /// This has all of the work factors set to zero so that as soon as the collector wakes from 158 | /// sleep, it will immediately perform a full collection. 159 | /// 160 | /// It is important to set the sleep factor fairly high when configuring a collector this way 161 | /// (close to or even somewhat larger than 1.0). 162 | pub const STOP_THE_WORLD: Pacing = Pacing { 163 | sleep_factor: 1.0, 164 | min_sleep: 4096, 165 | mark_factor: 0.0, 166 | trace_factor: 0.0, 167 | keep_factor: 0.0, 168 | drop_factor: 0.0, 169 | free_factor: 0.0, 170 | }; 171 | } 172 | 173 | impl Default for Pacing { 174 | #[inline] 175 | fn default() -> Pacing { 176 | Self::DEFAULT 177 | } 178 | } 179 | 180 | #[derive(Debug, Default)] 181 | struct MetricsInner { 182 | pacing: Cell, 183 | 184 | total_gcs: Cell, 185 | total_gc_bytes: Cell, 186 | total_external_bytes: Cell, 187 | 188 | wakeup_amount: Cell, 189 | artificial_debt: Cell, 190 | 191 | // The number of external bytes that have been marked as allocated at the beginning of this 192 | // cycle. 193 | external_bytes_start: Cell, 194 | 195 | // Statistics for `Gc` allocations and deallocations that happen during a GC cycle. 196 | allocated_gc_bytes: Cell, 197 | dropped_gc_bytes: Cell, 198 | freed_gc_bytes: Cell, 199 | 200 | // Statistics for `Gc` pointers that have been marked as non-white this cycle. 201 | marked_gcs: Cell, 202 | marked_gc_bytes: Cell, 203 | 204 | // Statistics for `Gc` pointers that have their contents traced. 205 | traced_gcs: Cell, 206 | traced_gc_bytes: Cell, 207 | 208 | // Statistics for reachable `Gc` pointers as they are iterated through during the sweep phase. 209 | remembered_gcs: Cell, 210 | remembered_gc_bytes: Cell, 211 | } 212 | 213 | #[derive(Clone)] 214 | pub struct Metrics(Rc); 215 | 216 | impl Metrics { 217 | pub(crate) fn new() -> Self { 218 | Self(Default::default()) 219 | } 220 | 221 | /// Return a value identifying the arena, for logging purposes. 222 | #[cfg(feature = "tracing")] 223 | pub(crate) fn arena_id(&self) -> tracing::field::DebugValue<*const ()> { 224 | // Be very cheeky and use the `Metrics` address as a (temporally) unique ID. 225 | // TODO: use a monotonically increasing global counter instead? 226 | tracing::field::debug(Rc::as_ptr(&self.0) as *const ()) 227 | } 228 | 229 | /// Sets the pacing parameters used by the collection algorithm. 
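The three inequalities above can be checked mechanically for any custom configuration. The helper below is a sketch (the crate does not provide such a checker); it only uses the public `Pacing` fields and the `DEFAULT` constant shown here.

```rust
use gc_arena::metrics::Pacing;

// Sketch of a stall-avoidance check for a custom `Pacing`; not part of the crate.
fn cannot_stall(p: &Pacing) -> bool {
    let remembered = p.mark_factor + p.trace_factor + p.keep_factor; // remembered path
    let forgotten = p.drop_factor + p.free_factor; // forgotten path
    let weak = p.mark_factor + p.drop_factor + p.keep_factor; // weakly reachable path
    remembered < 1.0 && forgotten < 1.0 && weak < 1.0
}

fn main() {
    // For `Pacing::DEFAULT`: 0.1 + 0.4 + 0.05 = 0.55, 0.2 + 0.3 = 0.5, and
    // 0.1 + 0.2 + 0.05 = 0.35, so every path stays safely under 1.0.
    assert!(cannot_stall(&Pacing::DEFAULT));
}
```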
230 | /// 231 | /// The factors that affect the gc sleep time will not take effect until the start of the next 232 | /// collection. 233 | #[inline] 234 | pub fn set_pacing(&self, pacing: Pacing) { 235 | self.0.pacing.set(pacing); 236 | } 237 | 238 | /// Returns the current number of `Gc`s allocated that have not yet been freed. 239 | #[inline] 240 | pub fn total_gc_count(&self) -> usize { 241 | self.0.total_gcs.get() 242 | } 243 | 244 | /// Returns the total bytes allocated by all live `Gc` pointers. 245 | #[inline] 246 | pub fn total_gc_allocation(&self) -> usize { 247 | self.0.total_gc_bytes.get() 248 | } 249 | 250 | /// Returns the total bytes that have been marked as externally allocated. 251 | /// 252 | /// A call to [`Metrics::mark_external_allocation`] will increase this count, and a call to 253 | /// [`Metrics::mark_external_deallocation`] will decrease it. 254 | #[inline] 255 | pub fn total_external_allocation(&self) -> usize { 256 | self.0.total_external_bytes.get() 257 | } 258 | 259 | /// Returns the sum of `Metrics::total_gc_allocation()` and 260 | /// `Metrics::total_external_allocation()`. 261 | #[inline] 262 | pub fn total_allocation(&self) -> usize { 263 | self.0 264 | .total_gc_bytes 265 | .get() 266 | .saturating_add(self.0.total_external_bytes.get()) 267 | } 268 | 269 | /// Call to mark that bytes have been externally allocated that are owned by an arena. 270 | /// 271 | /// This affects the GC pacing, marking external bytes as allocated will trigger allocation 272 | /// debt. 273 | #[inline] 274 | pub fn mark_external_allocation(&self, bytes: usize) { 275 | cell_update(&self.0.total_external_bytes, |b| b.saturating_add(bytes)); 276 | } 277 | 278 | /// Call to mark that bytes which have been marked as allocated with 279 | /// [`Metrics::mark_external_allocation`] have been since deallocated. 280 | /// 281 | /// This affects the GC pacing, marking external bytes as deallocated will reduce allocation 282 | /// debt. 283 | /// 284 | /// It is safe, but may result in unspecified behavior (such as very weird GC pacing), if the 285 | /// amount of bytes marked for deallocation is greater than the number of bytes marked for 286 | /// allocation. 287 | #[inline] 288 | pub fn mark_external_deallocation(&self, bytes: usize) { 289 | cell_update(&self.0.total_external_bytes, |b| b.saturating_sub(bytes)); 290 | } 291 | 292 | /// Add artificial debt equivalent to allocating the given number of bytes. 293 | /// 294 | /// This is different than marking external allocation because it will not show up in a call to 295 | /// [`Metrics::total_external_allocation`] or [`Metrics::total_allocation`] and instead *only* 296 | /// speeds up collection. 297 | #[inline] 298 | pub fn add_debt(&self, bytes: usize) { 299 | cell_update(&self.0.artificial_debt, |d| d + bytes as f64); 300 | } 301 | 302 | /// All arena allocation causes the arena to accumulate "allocation debt". This debt is then 303 | /// used to time incremental garbage collection based on the tuning parameters in the current 304 | /// `Pacing`. The allocation debt is measured in bytes, but will generally increase at a rate 305 | /// faster than that of allocation so that collection will always complete. 306 | #[inline] 307 | pub fn allocation_debt(&self) -> f64 { 308 | let total_gcs = self.0.total_gcs.get(); 309 | if total_gcs == 0 { 310 | // If we have no live `Gc`s, then there is no possible collection to do so always 311 | // return zero debt. 
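A sketch of how a mutator might use these hooks for memory it owns outside the GC heap, mirroring the `Slots` type in `dynamic_roots.rs` (which keeps a `Metrics` handle and reports its deallocation in `Drop`). The type and field names below are illustrative, and `Mutation::metrics()` is assumed to be the same accessor used internally by `Slots::new`.

```rust
use gc_arena::{metrics::Metrics, Mutation};

// Illustrative type: an object that owns an ordinary heap buffer and reports it
// so that the buffer participates in GC pacing.
struct ExternalBuf {
    metrics: Metrics,
    bytes: Vec<u8>,
}

impl ExternalBuf {
    fn new(mc: &Mutation<'_>, len: usize) -> Self {
        let metrics = mc.metrics().clone();
        metrics.mark_external_allocation(len);
        ExternalBuf { metrics, bytes: vec![0u8; len] }
    }
}

impl Drop for ExternalBuf {
    fn drop(&mut self) {
        // Undo the accounting when the buffer goes away.
        self.metrics.mark_external_deallocation(self.bytes.len());
    }
}
```

A value like this stored inside a `Gc` would additionally need a `Collect` implementation; that part is omitted here.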
312 | return 0.0; 313 | } 314 | 315 | // Right now, we treat allocating an external byte as 1.0 units of debt and deallocating an 316 | // external byte as 1.0 units of work (we also treat freeing more external bytes than were 317 | // allocated in the current cycle as performing *no* work). The result is that the *total* 318 | // increase of externally allocated bytes (allocated minus freed) incurs debt exactly the 319 | // same as GC allocated bytes. 320 | let allocated_external_bytes = self 321 | .0 322 | .total_external_bytes 323 | .get() 324 | .checked_sub(self.0.external_bytes_start.get()) 325 | .unwrap_or(0); 326 | 327 | let allocated_bytes = 328 | self.0.allocated_gc_bytes.get() as f64 + allocated_external_bytes as f64; 329 | 330 | // Every allocation after the `wakeup_amount` in a cycle is a debit. 331 | let cycle_debits = 332 | allocated_bytes - self.0.wakeup_amount.get() + self.0.artificial_debt.get(); 333 | 334 | // If our debits are not positive, then we know the total debt is not positive. 335 | if cycle_debits <= 0.0 { 336 | return 0.0; 337 | } 338 | 339 | let pacing = self.0.pacing.get(); 340 | 341 | let cycle_credits = self.0.marked_gc_bytes.get() as f64 * pacing.mark_factor 342 | + self.0.traced_gc_bytes.get() as f64 * pacing.trace_factor 343 | + self.0.remembered_gc_bytes.get() as f64 * pacing.keep_factor 344 | + self.0.dropped_gc_bytes.get() as f64 * pacing.drop_factor 345 | + self.0.freed_gc_bytes.get() as f64 * pacing.free_factor; 346 | 347 | (cycle_debits - cycle_credits).max(0.0) 348 | } 349 | 350 | pub(crate) fn finish_cycle(&self, reset_debt: bool) { 351 | let pacing = self.0.pacing.get(); 352 | let remembered_size = self.0.remembered_gc_bytes.get(); 353 | let wakeup_amount = 354 | (remembered_size as f64 * pacing.sleep_factor).max(pacing.min_sleep as f64); 355 | 356 | let artificial_debt = if reset_debt { 357 | 0.0 358 | } else { 359 | self.allocation_debt() 360 | }; 361 | 362 | self.0.wakeup_amount.set(wakeup_amount); 363 | self.0.artificial_debt.set(artificial_debt); 364 | 365 | self.0 366 | .external_bytes_start 367 | .set(self.0.total_external_bytes.get()); 368 | self.0.allocated_gc_bytes.set(0); 369 | self.0.dropped_gc_bytes.set(0); 370 | self.0.freed_gc_bytes.set(0); 371 | self.0.marked_gcs.set(0); 372 | self.0.marked_gc_bytes.set(0); 373 | self.0.traced_gcs.set(0); 374 | self.0.traced_gc_bytes.set(0); 375 | self.0.remembered_gcs.set(0); 376 | self.0.remembered_gc_bytes.set(0); 377 | } 378 | 379 | #[inline] 380 | pub(crate) fn mark_gc_allocated(&self, bytes: usize) { 381 | cell_update(&self.0.total_gcs, |c| c + 1); 382 | cell_update(&self.0.total_gc_bytes, |b| b + bytes); 383 | cell_update(&self.0.allocated_gc_bytes, |b| b.saturating_add(bytes)); 384 | } 385 | 386 | #[inline] 387 | pub(crate) fn mark_gc_dropped(&self, bytes: usize) { 388 | cell_update(&self.0.dropped_gc_bytes, |b| b.saturating_add(bytes)); 389 | } 390 | 391 | #[inline] 392 | pub(crate) fn mark_gc_freed(&self, bytes: usize) { 393 | cell_update(&self.0.total_gcs, |c| c - 1); 394 | cell_update(&self.0.total_gc_bytes, |b| b - bytes); 395 | cell_update(&self.0.freed_gc_bytes, |b| b.saturating_add(bytes)); 396 | } 397 | 398 | #[inline] 399 | pub(crate) fn mark_gc_marked(&self, bytes: usize) { 400 | cell_update(&self.0.marked_gcs, |c| c + 1); 401 | cell_update(&self.0.marked_gc_bytes, |b| b + bytes); 402 | } 403 | 404 | #[inline] 405 | pub(crate) fn mark_gc_traced(&self, bytes: usize) { 406 | cell_update(&self.0.traced_gcs, |c| c + 1); 407 | cell_update(&self.0.traced_gc_bytes, |b| b + bytes); 408 
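For context (illustrative only, not part of `metrics.rs`): the debits-minus-credits rule in `allocation_debt` is easiest to see with concrete numbers. A minimal, self-contained sketch using two of the `Pacing::DEFAULT` factors and made-up per-cycle byte counts:

// Illustrative only: the same debits-minus-credits arithmetic as
// `Metrics::allocation_debt`, with hypothetical per-cycle numbers.
fn main() {
    let (mark_factor, trace_factor) = (0.1, 0.4); // from Pacing::DEFAULT
    let wakeup_amount = 4096.0; // Pacing::DEFAULT.min_sleep, for a small heap
    let allocated_bytes = 10_000.0; // hypothetical Gc + external allocation this cycle
    let (marked_bytes, traced_bytes) = (8_000.0, 6_000.0); // hypothetical collector progress

    let cycle_debits = allocated_bytes - wakeup_amount; // 5904.0 (no artificial debt)
    let cycle_credits = marked_bytes * mark_factor + traced_bytes * trace_factor; // 3200.0
    println!("debt = {}", (cycle_debits - cycle_credits).max(0.0)); // debt = 2704
}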
--------------------------------------------------------------------------------
/src/no_drop.rs:
--------------------------------------------------------------------------------
1 | // Trait that is automatically implemented for all types that implement `Drop`.
2 | //
3 | // Used to cause a conflicting trait impl if a type implements `Drop`, in order to forbid implementing `Drop`.
4 | #[doc(hidden)]
5 | pub trait __MustNotImplDrop {}
6 | 
7 | #[allow(drop_bounds)]
8 | impl<T: Drop> __MustNotImplDrop for T {}
--------------------------------------------------------------------------------
/src/static_collect.rs:
--------------------------------------------------------------------------------
1 | use crate::collect::Collect;
2 | use crate::Rootable;
3 | 
4 | use alloc::borrow::{Borrow, BorrowMut};
5 | use core::convert::{AsMut, AsRef};
6 | use core::ops::{Deref, DerefMut};
7 | 
8 | /// A wrapper type that implements Collect whenever the contained T is 'static, which is useful in
9 | /// generic contexts.
10 | #[derive(Debug, Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash, Default)]
11 | #[repr(transparent)]
12 | pub struct Static<T: ?Sized>(pub T);
13 | 
14 | impl<'a, T: ?Sized + 'static> Rootable<'a> for Static<T> {
15 |     type Root = Static<T>;
16 | }
17 | 
18 | unsafe impl<'gc, T: ?Sized + 'static> Collect<'gc> for Static<T> {
19 |     const NEEDS_TRACE: bool = false;
20 | }
21 | 
22 | impl<T> From<T> for Static<T> {
23 |     fn from(value: T) -> Self {
24 |         Self(value)
25 |     }
26 | }
27 | 
28 | impl<T: ?Sized> AsRef<T> for Static<T> {
29 |     fn as_ref(&self) -> &T {
30 |         &self.0
31 |     }
32 | }
33 | 
34 | impl<T: ?Sized> AsMut<T> for Static<T> {
35 |     fn as_mut(&mut self) -> &mut T {
36 |         &mut self.0
37 |     }
38 | }
39 | 
40 | impl<T: ?Sized> Deref for Static<T> {
41 |     type Target = T;
42 |     fn deref(&self) -> &Self::Target {
43 |         &self.0
44 |     }
45 | }
46 | 
47 | impl<T: ?Sized> DerefMut for Static<T> {
48 |     fn deref_mut(&mut self) -> &mut Self::Target {
49 |         &mut self.0
50 |     }
51 | }
52 | 
53 | impl<T: ?Sized> Borrow<T> for Static<T> {
54 |     fn borrow(&self) -> &T {
55 |         &self.0
56 |     }
57 | }
58 | 
59 | impl<T: ?Sized> BorrowMut<T> for Static<T> {
60 |     fn borrow_mut(&mut self) -> &mut T {
61 |         &mut self.0
62 |     }
63 | }
64 | 
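Not part of the repository, but a sketch of how `Static` is typically used: it lets a `'static` type with no `Collect` impl of its own live inside a collected type. This assumes `Static`, `Gc`, and the `Collect` derive are re-exported from the crate root; `ExternalHandle` is a hypothetical type.

use gc_arena::{Collect, Gc, Static};

// Some 'static type from another crate, with no `Collect` impl of its own.
struct ExternalHandle(u32);

#[derive(Collect)]
#[collect(no_drop)]
struct Root<'gc> {
    name: Gc<'gc, String>,
    // `Static<ExternalHandle>: Collect` because `ExternalHandle: 'static`;
    // its NEEDS_TRACE is false, so the field is simply skipped during tracing.
    handle: Static<ExternalHandle>,
}

fn main() {}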
--------------------------------------------------------------------------------
/src/types.rs:
--------------------------------------------------------------------------------
1 | use core::alloc::Layout;
2 | use core::cell::Cell;
3 | use core::marker::PhantomData;
4 | use core::ptr::NonNull;
5 | use core::{mem, ptr};
6 | 
7 | use crate::{collect::Collect, context::Context};
8 | 
9 | /// A thin-pointer-sized box containing a type-erased GC object.
10 | /// Stores the metadata required by the GC algorithm inline (see `GcBoxInner`
11 | /// for its typed counterpart).
12 | 
13 | #[derive(Copy, Clone, Debug, Eq, PartialEq)]
14 | pub(crate) struct GcBox(NonNull<GcBoxInner<()>>);
15 | 
16 | impl GcBox {
17 |     /// Erases a pointer to a typed GC object.
18 |     ///
19 |     /// **SAFETY:** The pointer must point to a valid `GcBoxInner` allocated
20 |     /// in a `Box`.
21 |     #[inline(always)]
22 |     pub(crate) unsafe fn erase<T>(ptr: NonNull<GcBoxInner<T>>) -> Self {
23 |         // This cast is sound because `GcBoxInner` is `repr(C)`.
24 |         let erased = ptr.as_ptr() as *mut GcBoxInner<()>;
25 |         Self(NonNull::new_unchecked(erased))
26 |     }
27 | 
28 |     /// Gets a pointer to the value stored inside this box.
29 |     /// `T` must be the same type that was used with `erase`, so that
30 |     /// we can correctly compute the field offset.
31 |     #[inline(always)]
32 |     fn unerased_value<T>(&self) -> *mut T {
33 |         unsafe {
34 |             let ptr = self.0.as_ptr() as *mut GcBoxInner<T>;
35 |             // Don't create a reference, to keep the full provenance.
36 |             // Also, this gives us interior mutability "for free".
37 |             ptr::addr_of_mut!((*ptr).value) as *mut T
38 |         }
39 |     }
40 | 
41 |     #[inline(always)]
42 |     pub(crate) fn header(&self) -> &GcBoxHeader {
43 |         unsafe { &self.0.as_ref().header }
44 |     }
45 | 
46 |     /// Traces the stored value.
47 |     ///
48 |     /// **SAFETY**: `Self::drop_in_place` must not have been called.
49 |     #[inline(always)]
50 |     pub(crate) unsafe fn trace_value(&self, cc: &mut Context) {
51 |         (self.header().vtable().trace_value)(*self, cc)
52 |     }
53 | 
54 |     /// Drops the stored value.
55 |     ///
56 |     /// **SAFETY**: once called, no GC pointers should access the stored value
57 |     /// (but accessing the `GcBox` itself is still safe).
58 |     #[inline(always)]
59 |     pub(crate) unsafe fn drop_in_place(&mut self) {
60 |         (self.header().vtable().drop_value)(*self)
61 |     }
62 | 
63 |     /// Deallocates the box. Failing to call `Self::drop_in_place` beforehand
64 |     /// will cause the stored value to be leaked.
65 |     ///
66 |     /// **SAFETY**: once called, this `GcBox` should never be accessed by any GC
67 |     /// pointers again.
68 |     #[inline(always)]
69 |     pub(crate) unsafe fn dealloc(self) {
70 |         let layout = self.header().vtable().box_layout;
71 |         let ptr = self.0.as_ptr() as *mut u8;
72 |         // SAFETY: the pointer was `Box`-allocated with this layout.
73 |         alloc::alloc::dealloc(ptr, layout);
74 |     }
75 | }
76 | 
77 | pub(crate) struct GcBoxHeader {
78 |     /// The next element in the global linked list of allocated objects.
79 |     next: Cell<Option<GcBox>>,
80 |     /// A custom virtual function table for handling type-specific operations.
81 |     ///
82 |     /// The lower bits of the pointer are used to store GC flags:
83 |     /// - bits 0 & 1 for the current `GcColor`;
84 |     /// - bit 2 for the `needs_trace` flag;
85 |     /// - bit 3 for the `is_live` flag.
86 |     tagged_vtable: Cell<*const CollectVtable>,
87 | }
88 | 
89 | impl GcBoxHeader {
90 |     #[inline(always)]
91 |     pub fn new<'gc, T: Collect<'gc>>() -> Self {
92 |         // Helper trait to materialize vtables in static memory.
93 |         trait HasCollectVtable {
94 |             const VTABLE: CollectVtable;
95 |         }
96 | 
97 |         impl<'gc, T: Collect<'gc>> HasCollectVtable for T {
98 |             const VTABLE: CollectVtable = CollectVtable::vtable_for::<T>();
99 |         }
100 | 
101 |         let vtable: &'static _ = &<T as HasCollectVtable>::VTABLE;
102 |         Self {
103 |             next: Cell::new(None),
104 |             tagged_vtable: Cell::new(vtable as *const _),
105 |         }
106 |     }
107 | 
108 |     /// Gets a reference to the `CollectVtable` used by this box.
109 |     #[inline(always)]
110 |     fn vtable(&self) -> &'static CollectVtable {
111 |         let ptr = tagged_ptr::untag(self.tagged_vtable.get());
112 |         // SAFETY:
113 |         // - the pointer was properly untagged.
114 |         // - the vtable is stored in static memory.
115 |         unsafe { &*ptr }
116 |     }
117 | 
118 |     /// Gets the next element in the global linked list of allocated objects.
119 |     #[inline(always)]
120 |     pub(crate) fn next(&self) -> Option<GcBox> {
121 |         self.next.get()
122 |     }
123 | 
124 |     /// Sets the next element in the global linked list of allocated objects.
125 |     #[inline(always)]
126 |     pub(crate) fn set_next(&self, next: Option<GcBox>) {
127 |         self.next.set(next)
128 |     }
129 | 
130 |     /// Returns the (shallow) size occupied by this box in memory.
131 |     #[inline(always)]
132 |     pub(crate) fn size_of_box(&self) -> usize {
133 |         self.vtable().box_layout.size()
134 |     }
135 | 
136 |     #[inline]
137 |     pub(crate) fn color(&self) -> GcColor {
138 |         match tagged_ptr::get::<0x3, _>(self.tagged_vtable.get()) {
139 |             0x0 => GcColor::White,
140 |             0x1 => GcColor::WhiteWeak,
141 |             0x2 => GcColor::Gray,
142 |             _ => GcColor::Black,
143 |         }
144 |     }
145 | 
146 |     #[inline]
147 |     pub(crate) fn set_color(&self, color: GcColor) {
148 |         tagged_ptr::set::<0x3, _>(
149 |             &self.tagged_vtable,
150 |             match color {
151 |                 GcColor::White => 0x0,
152 |                 GcColor::WhiteWeak => 0x1,
153 |                 GcColor::Gray => 0x2,
154 |                 GcColor::Black => 0x3,
155 |             },
156 |         );
157 |     }
158 |     #[inline]
159 |     pub(crate) fn needs_trace(&self) -> bool {
160 |         tagged_ptr::get::<0x4, _>(self.tagged_vtable.get()) != 0x0
161 |     }
162 | 
163 |     /// Determines whether or not we've dropped the `dyn Collect` value
164 |     /// stored in `GcBox.value`.
165 |     /// When we garbage-collect a `GcBox` that still has outstanding weak pointers,
166 |     /// we set `alive` to false. When there are no more weak pointers remaining,
167 |     /// we will deallocate the `GcBox`, but skip dropping the `dyn Collect` value
168 |     /// (since we've already done it).
169 |     #[inline]
170 |     pub(crate) fn is_live(&self) -> bool {
171 |         tagged_ptr::get::<0x8, _>(self.tagged_vtable.get()) != 0x0
172 |     }
173 | 
174 |     #[inline]
175 |     pub(crate) fn set_needs_trace(&self, needs_trace: bool) {
176 |         tagged_ptr::set_bool::<0x4, _>(&self.tagged_vtable, needs_trace);
177 |     }
178 | 
179 |     #[inline]
180 |     pub(crate) fn set_live(&self, alive: bool) {
181 |         tagged_ptr::set_bool::<0x8, _>(&self.tagged_vtable, alive);
182 |     }
183 | }
184 | 
185 | /// Type-specific operations for GC'd values.
186 | ///
187 | /// We use a custom vtable instead of `dyn Collect` for extra flexibility.
188 | /// The type is over-aligned so that `GcBoxHeader` can store flags into the LSBs of the vtable pointer.
189 | #[repr(align(16))]
190 | struct CollectVtable {
191 |     /// The layout of the `GcBox` the GC'd value is stored in.
192 |     box_layout: Layout,
193 |     /// Drops the value stored in the given `GcBox` (without deallocating the box).
194 |     drop_value: unsafe fn(GcBox),
195 |     /// Traces the value stored in the given `GcBox`.
196 |     trace_value: unsafe fn(GcBox, &mut Context),
197 | }
198 | 
199 | impl CollectVtable {
200 |     /// Makes a vtable for a known, `Sized` type.
201 |     /// Because `T: Sized`, we can recover a typed pointer
202 |     /// directly from the erased `GcBox`.
203 |     #[inline(always)]
204 |     const fn vtable_for<'gc, T: Collect<'gc>>() -> Self {
205 |         Self {
206 |             box_layout: Layout::new::<GcBoxInner<T>>(),
207 |             drop_value: |erased| unsafe {
208 |                 ptr::drop_in_place(erased.unerased_value::<T>());
209 |             },
210 |             trace_value: |erased, cc| unsafe {
211 |                 let val = &*(erased.unerased_value::<T>());
212 |                 val.trace(cc)
213 |             },
214 |         }
215 |     }
216 | }
217 | 
218 | /// A typed GC'd value, together with its metadata.
219 | /// This type is never manipulated directly by the GC algorithm, allowing
220 | /// user-facing `Gc`s to freely cast their pointer to it.
221 | #[repr(C)]
222 | pub(crate) struct GcBoxInner<T: ?Sized> {
223 |     pub(crate) header: GcBoxHeader,
224 |     /// The typed value stored in this `GcBox`.
225 |     pub(crate) value: mem::ManuallyDrop<T>,
226 | }
227 | 
228 | impl<'gc, T: Collect<'gc>> GcBoxInner<T> {
229 |     #[inline(always)]
230 |     pub(crate) fn new(header: GcBoxHeader, t: T) -> Self {
231 |         Self {
232 |             header,
233 |             value: mem::ManuallyDrop::new(t),
234 |         }
235 |     }
236 | }
237 | 
238 | #[derive(Copy, Clone, Eq, PartialEq, Debug)]
239 | pub(crate) enum GcColor {
240 |     /// An object that has not yet been reached by tracing (if we're in a tracing phase).
241 |     ///
242 |     /// During `Phase::Sweep`, we will free all white objects that existed *before* the start of the
243 |     /// current `Phase::Sweep`. Objects allocated during `Phase::Sweep` will be white, but will not
244 |     /// be freed.
245 |     White,
246 |     /// Like White, but for objects weakly reachable from a Black object.
247 |     ///
248 |     /// These objects may drop their contents during `Phase::Sweep`, but must stay allocated so that
249 |     /// weak references can check the alive status.
250 |     WhiteWeak,
251 |     /// An object reachable from a Black object, but that has not yet been traced using
252 |     /// `Collect::trace`. We also mark black objects as gray during `Phase::Mark` in response to
253 |     /// a write barrier, so that we re-trace and find any objects newly reachable from the mutated
254 |     /// object.
255 |     Gray,
256 |     /// An object that was reached during tracing. It will not be freed during `Phase::Sweep`. At
257 |     /// the end of `Phase::Sweep`, all black objects will be reset to white.
258 |     Black,
259 | }
260 | 
261 | // Phantom type that holds a lifetime and ensures that it is invariant.
262 | pub(crate) type Invariant<'a> = PhantomData<Cell<&'a ()>>;
263 | 
264 | /// Utility functions for tagging and untagging pointers.
265 | mod tagged_ptr {
266 |     use core::cell::Cell;
267 | 
268 |     trait ValidMask<const MASK: usize> {
269 |         const CHECK: ();
270 |     }
271 | 
272 |     impl<T, const MASK: usize> ValidMask<MASK> for T {
273 |         const CHECK: () = assert!(MASK < core::mem::align_of::<T>());
274 |     }
275 | 
276 |     /// Checks that `$mask` can be used to tag a pointer to `$type`.
277 |     /// If this isn't true, this macro will cause a post-monomorphization error.
278 |     macro_rules! check_mask {
check_mask { 279 | ($type:ty, $mask:expr) => { 280 | let _ = <$type as ValidMask<$mask>>::CHECK; 281 | }; 282 | } 283 | 284 | #[inline(always)] 285 | pub(super) fn untag(tagged_ptr: *const T) -> *const T { 286 | let mask = core::mem::align_of::() - 1; 287 | tagged_ptr.map_addr(|addr| addr & !mask) 288 | } 289 | 290 | #[inline(always)] 291 | pub(super) fn get(tagged_ptr: *const T) -> usize { 292 | check_mask!(T, MASK); 293 | tagged_ptr.addr() & MASK 294 | } 295 | 296 | #[inline(always)] 297 | pub(super) fn set(pcell: &Cell<*const T>, tag: usize) { 298 | check_mask!(T, MASK); 299 | let ptr = pcell.get(); 300 | let ptr = ptr.map_addr(|addr| (addr & !MASK) | (tag & MASK)); 301 | pcell.set(ptr) 302 | } 303 | 304 | #[inline(always)] 305 | pub(super) fn set_bool(pcell: &Cell<*const T>, value: bool) { 306 | check_mask!(T, MASK); 307 | let ptr = pcell.get(); 308 | let ptr = ptr.map_addr(|addr| (addr & !MASK) | if value { MASK } else { 0 }); 309 | pcell.set(ptr) 310 | } 311 | } 312 | -------------------------------------------------------------------------------- /src/unsize.rs: -------------------------------------------------------------------------------- 1 | use core::marker::PhantomData; 2 | use core::ptr::NonNull; 3 | 4 | use crate::{ 5 | types::GcBoxInner, 6 | {Gc, GcWeak}, 7 | }; 8 | 9 | /// Unsizes a [`Gc`] or [`GcWeak`] pointer. 10 | /// 11 | /// This macro is a `gc_arena`-specific replacement for the nightly-only `CoerceUnsized` trait. 12 | /// 13 | /// ## Usage 14 | /// 15 | /// ```rust 16 | /// # use std::fmt::Display; 17 | /// # use gc_arena::{Gc, unsize}; 18 | /// # fn main() { 19 | /// # gc_arena::arena::rootless_mutate(|mc| { 20 | /// // Unsizing arrays to slices. 21 | /// let mut slice; 22 | /// slice = unsize!(Gc::new(mc, [1, 2]) => [u8]); 23 | /// assert_eq!(slice.len(), 2); 24 | /// slice = unsize!(Gc::new(mc, [42; 4]) => [u8]); 25 | /// assert_eq!(slice.len(), 4); 26 | /// 27 | /// // Unsizing values to trait objects. 28 | /// let mut display; 29 | /// display = unsize!(Gc::new(mc, "Hello world!".to_owned()) => dyn Display); 30 | /// assert_eq!(display.to_string(), "Hello world!"); 31 | /// display = unsize!(Gc::new(mc, 123456) => dyn Display); 32 | /// assert_eq!(display.to_string(), "123456"); 33 | /// # }) 34 | /// # } 35 | /// ``` 36 | /// 37 | /// The `unsize` macro is safe, and will fail to compile when trying to coerce between 38 | /// incompatible types. 39 | /// ```rust,compile_fail 40 | /// # use std::error::Error; 41 | /// # use gc_arena::{Gc, unsize}; 42 | /// # fn main() { 43 | /// # gc_arena::arena::rootless_mutate(|mc| { 44 | /// // Error: `Option` doesn't implement `Error`. 45 | /// let _ = unsize!(Gc::new(mc, Some('💥')) => dyn Error); 46 | /// # }) 47 | /// # } 48 | /// ``` 49 | #[macro_export] 50 | macro_rules! unsize { 51 | ($gc:expr => $ty:ty) => {{ 52 | let gc = $gc; 53 | // SAFETY: the closure has a trivial body and must be a valid pointer 54 | // coercion, if it compiles. Additionally, the `__CoercePtrInternal` trait 55 | // ensures that the resulting GC pointer has the correct `'gc` lifetime. 56 | unsafe { 57 | $crate::__CoercePtrInternal::__coerce_unchecked(gc, |p: *mut _| -> *mut $ty { p }) 58 | } 59 | }}; 60 | } 61 | 62 | // Not public API; implementation detail of the `unsize` macro. 63 | // 64 | // Maps a raw pointer coercion (`*mut FromPtr -> *mut ToPtr`) to 65 | // a smart pointer coercion (`Self -> Dst`). 
--------------------------------------------------------------------------------
/src/unsize.rs:
--------------------------------------------------------------------------------
1 | use core::marker::PhantomData;
2 | use core::ptr::NonNull;
3 | 
4 | use crate::{
5 |     types::GcBoxInner,
6 |     {Gc, GcWeak},
7 | };
8 | 
9 | /// Unsizes a [`Gc`] or [`GcWeak`] pointer.
10 | ///
11 | /// This macro is a `gc_arena`-specific replacement for the nightly-only `CoerceUnsized` trait.
12 | ///
13 | /// ## Usage
14 | ///
15 | /// ```rust
16 | /// # use std::fmt::Display;
17 | /// # use gc_arena::{Gc, unsize};
18 | /// # fn main() {
19 | /// # gc_arena::arena::rootless_mutate(|mc| {
20 | /// // Unsizing arrays to slices.
21 | /// let mut slice;
22 | /// slice = unsize!(Gc::new(mc, [1, 2]) => [u8]);
23 | /// assert_eq!(slice.len(), 2);
24 | /// slice = unsize!(Gc::new(mc, [42; 4]) => [u8]);
25 | /// assert_eq!(slice.len(), 4);
26 | ///
27 | /// // Unsizing values to trait objects.
28 | /// let mut display;
29 | /// display = unsize!(Gc::new(mc, "Hello world!".to_owned()) => dyn Display);
30 | /// assert_eq!(display.to_string(), "Hello world!");
31 | /// display = unsize!(Gc::new(mc, 123456) => dyn Display);
32 | /// assert_eq!(display.to_string(), "123456");
33 | /// # })
34 | /// # }
35 | /// ```
36 | ///
37 | /// The `unsize` macro is safe, and will fail to compile when trying to coerce between
38 | /// incompatible types.
39 | /// ```rust,compile_fail
40 | /// # use std::error::Error;
41 | /// # use gc_arena::{Gc, unsize};
42 | /// # fn main() {
43 | /// # gc_arena::arena::rootless_mutate(|mc| {
44 | /// // Error: `Option` doesn't implement `Error`.
45 | /// let _ = unsize!(Gc::new(mc, Some('💥')) => dyn Error);
46 | /// # })
47 | /// # }
48 | /// ```
49 | #[macro_export]
50 | macro_rules! unsize {
51 |     ($gc:expr => $ty:ty) => {{
52 |         let gc = $gc;
53 |         // SAFETY: the closure has a trivial body and must be a valid pointer
54 |         // coercion, if it compiles. Additionally, the `__CoercePtrInternal` trait
55 |         // ensures that the resulting GC pointer has the correct `'gc` lifetime.
56 |         unsafe {
57 |             $crate::__CoercePtrInternal::__coerce_unchecked(gc, |p: *mut _| -> *mut $ty { p })
58 |         }
59 |     }};
60 | }
61 | 
62 | // Not public API; implementation detail of the `unsize` macro.
63 | //
64 | // Maps a raw pointer coercion (`*mut FromPtr -> *mut ToPtr`) to
65 | // a smart pointer coercion (`Self -> Dst`).
66 | #[doc(hidden)]
67 | pub unsafe trait __CoercePtrInternal<Dst> {
68 |     type FromPtr;
69 |     type ToPtr: ?Sized;
70 |     // SAFETY: `coerce` must be a valid pointer coercion; in particular, the coerced
71 |     // pointer must have the same address and provenance as the original.
72 |     unsafe fn __coerce_unchecked<F>(self, coerce: F) -> Dst
73 |     where
74 |         F: FnOnce(*mut Self::FromPtr) -> *mut Self::ToPtr;
75 | }
76 | 
77 | unsafe impl<'gc, T, U: ?Sized> __CoercePtrInternal<Gc<'gc, U>> for Gc<'gc, T> {
78 |     type FromPtr = T;
79 |     type ToPtr = U;
80 | 
81 |     #[inline(always)]
82 |     unsafe fn __coerce_unchecked<F>(self, coerce: F) -> Gc<'gc, U>
83 |     where
84 |         F: FnOnce(*mut T) -> *mut U,
85 |     {
86 |         let ptr = self.ptr.as_ptr() as *mut T;
87 |         let ptr = NonNull::new_unchecked(coerce(ptr) as *mut GcBoxInner<U>);
88 |         Gc {
89 |             ptr,
90 |             _invariant: PhantomData,
91 |         }
92 |     }
93 | }
94 | 
95 | unsafe impl<'gc, T, U: ?Sized> __CoercePtrInternal<GcWeak<'gc, U>> for GcWeak<'gc, T> {
96 |     type FromPtr = T;
97 |     type ToPtr = U;
98 | 
99 |     #[inline(always)]
100 |     unsafe fn __coerce_unchecked<F>(self, coerce: F) -> GcWeak<'gc, U>
101 |     where
102 |         F: FnOnce(*mut T) -> *mut U,
103 |     {
104 |         let inner = self.inner.__coerce_unchecked(coerce);
105 |         GcWeak { inner }
106 |     }
107 | }
--------------------------------------------------------------------------------
/tests/ui/bad_collect_bound.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | struct NotCollect;
4 | 
5 | #[derive(Collect)]
6 | #[collect(no_drop)]
7 | struct MyStruct {
8 |     field: NotCollect
9 | }
10 | 
11 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/bad_collect_bound.stderr:
--------------------------------------------------------------------------------
1 | error[E0277]: the trait bound `NotCollect: Collect<'_>` is not satisfied
2 | --> tests/ui/bad_collect_bound.rs:8:12
3 | |
4 | 8 | field: NotCollect
5 | | ^^^^^^^^^^ the trait `Collect<'_>` is not implemented for `NotCollect`
6 | |
7 | = help: the following other types implement trait `Collect<'gc>`:
8 | &'static T
9 | ()
10 | (A, B)
11 | (A, B, C)
12 | (A, B, C, D)
13 | (A, B, C, D, E)
14 | (A, B, C, D, E, F)
15 | (A, B, C, D, E, F, G)
16 | and $N others
17 | 
18 | error[E0277]: the trait bound `NotCollect: Collect<'gc>` is not satisfied
19 | --> tests/ui/bad_collect_bound.rs:8:5
20 | |
21 | 5 | #[derive(Collect)]
22 | | ------- in this derive macro expansion
23 | ...
24 | 8 | field: NotCollect
25 | | ^^^^^ the trait `Collect<'gc>` is not implemented for `NotCollect`
26 | |
27 | = help: the following other types implement trait `Collect<'gc>`:
28 | &'static T
29 | ()
30 | (A, B)
31 | (A, B, C)
32 | (A, B, C, D)
33 | (A, B, C, D, E)
34 | (A, B, C, D, E, F)
35 | (A, B, C, D, E, F, G)
36 | and $N others
37 | note: required by a bound in `gc_arena::collect::Trace::trace`
38 | --> src/collect.rs
39 | |
40 | | fn trace<C: Collect<'gc> + ?Sized>(&mut self, value: &C)
41 | | ^^^^^^^^^^^^ required by this bound in `Trace::trace`
42 | = note: this error originates in the derive macro `Collect` (in Nightly builds, run with -Z macro-backtrace for more info)
43 | 
--------------------------------------------------------------------------------
/tests/ui/bad_write_field_projections.rs:
--------------------------------------------------------------------------------
1 | 
2 | 
3 | use gc_arena::Gc;
4 | use gc_arena::barrier::{Write, field};
5 | 
6 | struct Foo<'gc> {
7 |     foo: i32,
8 |     bar: Gc<'gc, u32>,
9 |     baz: f64,
10 | }
11 | 
12 | fn projection_prederef<'a, 'gc>(v: &'a Write<Gc<'gc, Foo<'gc>>>) -> &'a Write<i32> {
13 |     field!(v, Foo, foo)
14 | }
15 | 
16 | fn projection_postderef<'a, 'gc>(v: &'a Write<Foo<'gc>>) -> &'a Write<u32> {
17 |     field!(v, Foo, bar)
18 | }
19 | 
20 | fn projection_wrong_type<'a, 'gc>(v: &'a Write<Foo<'gc>>) -> &'a Write<i64> {
21 |     field!(v, Foo, baz)
22 | }
23 | 
24 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/bad_write_field_projections.stderr:
--------------------------------------------------------------------------------
1 | error[E0308]: mismatched types
2 | --> tests/ui/bad_write_field_projections.rs:13:5
3 | |
4 | 13 | field!(v, Foo, foo)
5 | | ^^^^^^^-^^^^^^^^^^^
6 | | | |
7 | | | this expression has type `&'a gc_arena::barrier::Write<Gc<'gc, Foo<'gc>>>`
8 | | expected `Gc<'_, Foo<'_>>`, found `Foo<'_>`
9 | |
10 | = note: expected struct `Gc<'gc, Foo<'gc>, >`
11 | found struct `Foo<'_>`
12 | = note: this error originates in the macro `field` (in Nightly builds, run with -Z macro-backtrace for more info)
13 | 
14 | error[E0606]: casting `&Gc<'_, u32>` as `*const u32` is invalid
15 | --> tests/ui/bad_write_field_projections.rs:17:5
16 | |
17 | 17 | field!(v, Foo, bar)
18 | | ^^^^^^^^^^^^^^^^^^^
19 | |
20 | = note: this error originates in the macro `field` (in Nightly builds, run with -Z macro-backtrace for more info)
21 | 
22 | error[E0308]: mismatched types
23 | --> tests/ui/bad_write_field_projections.rs:21:5
24 | |
25 | 21 | field!(v, Foo, baz)
26 | | ^^^^^^^^^^^^^^^^^^^
27 | | |
28 | | expected `&i64`, found `&f64`
29 | | arguments to this function are incorrect
30 | |
31 | = note: expected reference `&i64`
32 | found reference `&f64`
33 | note: associated function defined here
34 | --> src/barrier.rs
35 | |
36 | | pub unsafe fn __from_ref_and_ptr(v: &T, _: *const T) -> &Self {
37 | | ^^^^^^^^^^^^^^^^^^
38 | = note: this error originates in the macro `field` (in Nightly builds, run with -Z macro-backtrace for more info)
39 | 
40 | error[E0606]: casting `&f64` as `*const i64` is invalid
41 | --> tests/ui/bad_write_field_projections.rs:21:5
42 | |
43 | 21 | field!(v, Foo, baz)
44 | | ^^^^^^^^^^^^^^^^^^^
45 | |
46 | = note: this error originates in the macro `field` (in Nightly builds, run with -Z macro-backtrace for more info)
47 | 
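For contrast with the failing projections above, a sketch (not part of the test suite) of a projection that is expected to compile: the input is already `&Write<Foo<'gc>>` and the declared field type is reproduced exactly. It assumes the same `field!` macro and `Foo` definition as the test file.

use gc_arena::Gc;
use gc_arena::barrier::{Write, field};

struct Foo<'gc> {
    foo: i32,
    bar: Gc<'gc, u32>,
    baz: f64,
}

// Projects the write barrier on `Foo` down to its `baz: f64` field.
fn projection_ok<'a, 'gc>(v: &'a Write<Foo<'gc>>) -> &'a Write<f64> {
    field!(v, Foo, baz)
}

fn main() {}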
--------------------------------------------------------------------------------
/tests/ui/invalid_collect_field.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | #[derive(Collect)]
4 | #[collect(no_drop)]
5 | struct MyStruct {
6 |     #[collect(invalid_arg)] field: u8
7 | }
8 | 
9 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/invalid_collect_field.stderr:
--------------------------------------------------------------------------------
1 | error: Only `#[collect(require_static)]` is supported on a field
2 | --> $DIR/invalid_collect_field.rs:6:15
3 | |
4 | 6 | #[collect(invalid_arg)] field: u8
5 | | ^^^^^^^^^^^
6 | 
--------------------------------------------------------------------------------
/tests/ui/multiple_require_static.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | #[derive(Collect)]
4 | #[collect(no_drop)]
5 | struct MyStruct {
6 |     #[collect(require_static)]
7 |     #[collect(require_static)]
8 |     field: bool
9 | }
10 | 
11 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/multiple_require_static.stderr:
--------------------------------------------------------------------------------
1 | error: Cannot specify multiple `#[collect]` attributes! Consider merging them.
2 | --> tests/ui/multiple_require_static.rs:7:7
3 | |
4 | 7 | #[collect(require_static)]
5 | | ^^^^^^^
6 | 
--------------------------------------------------------------------------------
/tests/ui/no_drop_and_drop_impl.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | #[derive(Collect)]
4 | #[collect(no_drop)]
5 | struct Foo {
6 | }
7 | 
8 | impl Drop for Foo {
9 |     fn drop(&mut self) {}
10 | }
11 | 
12 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/no_drop_and_drop_impl.stderr:
--------------------------------------------------------------------------------
1 | error[E0119]: conflicting implementations of trait `__MustNotImplDrop` for type `Foo`
2 | --> $DIR/no_drop_and_drop_impl.rs:3:10
3 | |
4 | 3 | #[derive(Collect)]
5 | | ^^^^^^^
6 | |
7 | = note: conflicting implementation in crate `gc_arena`:
8 | - impl<T> __MustNotImplDrop for T
9 | where T: Drop;
10 | = note: this error originates in the derive macro `Collect` (in Nightly builds, run with -Z macro-backtrace for more info)
11 | 
--------------------------------------------------------------------------------
/tests/ui/require_static_enum_variant.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | #[derive(Collect)]
4 | #[collect(no_drop)]
5 | enum MyEnum {
6 |     #[collect(require_static)]
7 |     First {
8 |         field: u8
9 |     }
10 | }
11 | 
12 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/require_static_enum_variant.stderr:
--------------------------------------------------------------------------------
1 | error: `#[collect]` is not suppported on enum variants
2 | --> $DIR/require_static_enum_variant.rs:6:7
3 | |
4 | 6 | #[collect(require_static)]
5 | | ^^^^^^^
6 | 
--------------------------------------------------------------------------------
/tests/ui/require_static_not_static.rs:
--------------------------------------------------------------------------------
1 | use gc_arena::Collect;
2 | 
3 | struct NoCollectImpl<'a>(&'a bool);
4 | 
5 | #[derive(Collect)]
6 | #[collect(no_drop)]
7 | struct MyStruct<'a> {
8 |     #[collect(require_static)]
9 |     field: NoCollectImpl<'a>
10 | }
11 | 
12 | fn assert_my_struct_collect<'a>() {
13 |     let _ = MyStruct::<'a>::NEEDS_TRACE;
14 | }
15 | 
16 | fn main() {}
--------------------------------------------------------------------------------
/tests/ui/require_static_not_static.stderr:
--------------------------------------------------------------------------------
1 | error: lifetime may not live long enough
2 | --> tests/ui/require_static_not_static.rs:13:13
3 | |
4 | 12 | fn assert_my_struct_collect<'a>() {
5 | | -- lifetime `'a` defined here
6 | 13 | let _ = MyStruct::<'a>::NEEDS_TRACE;
7 | | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ requires that `'a` must outlive `'static`
8 | 
--------------------------------------------------------------------------------
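A passing counterpart to the `require_static` ui tests above (illustrative only, not part of the test suite): a single `#[collect(require_static)]` attribute on a field whose type really is `'static`. The `Handle` type is hypothetical.

use gc_arena::Collect;

// A 'static type with no `Collect` impl of its own.
struct Handle(u64);

#[derive(Collect)]
#[collect(no_drop)]
struct MyStruct {
    // Accepted: `require_static` only asks that the field type be 'static,
    // so it is skipped during tracing instead of needing a `Collect` impl.
    #[collect(require_static)]
    field: Handle,
}

fn main() {}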