├── .commitlintrc.yml
├── .github
│   ├── dependabot.yml
│   └── workflows
│       ├── benchmarks.yml
│       ├── cargo_publish_dry_run.yml
│       ├── ci.yml
│       └── deploy_documentation.yml
├── .gitignore
├── CHANGELOG.md
├── Cargo.lock
├── Cargo.toml
├── LICENSE
├── README.md
├── benches
│   ├── backtracking.rs
│   ├── large_case.rs
│   └── sudoku.rs
├── examples
│   ├── branching_error_reporting.rs
│   ├── caching_dependency_provider.rs
│   ├── doc_interface.rs
│   ├── doc_interface_error.rs
│   ├── doc_interface_semantic.rs
│   ├── linear_error_reporting.rs
│   └── unsat_root_message_no_version.rs
├── release.md
├── src
│   ├── error.rs
│   ├── internal
│   │   ├── arena.rs
│   │   ├── core.rs
│   │   ├── incompatibility.rs
│   │   ├── mod.rs
│   │   ├── partial_solution.rs
│   │   ├── small_map.rs
│   │   └── small_vec.rs
│   ├── lib.rs
│   ├── package.rs
│   ├── provider.rs
│   ├── report.rs
│   ├── solver.rs
│   ├── term.rs
│   ├── type_aliases.rs
│   ├── version.rs
│   └── version_set.rs
├── test-examples
│   └── large_case_u16_NumberVersion.ron
├── tests
│   ├── examples.rs
│   ├── proptest.rs
│   ├── sat_dependency_provider.rs
│   └── tests.rs
└── version-ranges
    ├── Cargo.toml
    ├── Changelog.md
    ├── LICENSE
    ├── README.md
    ├── number-line-ranges.svg
    └── src
        └── lib.rs

/.commitlintrc.yml:
--------------------------------------------------------------------------------
 1 | extends:
 2 |   - '@commitlint/config-conventional'
 3 | 
 4 | # https://commitlint.js.org/#/reference-rules
 5 | rules:
 6 |   header-max-length: [ 2, 'always', 72 ]
 7 |   body-max-line-length: [ 1, 'always', 72 ]
 8 |   footer-max-line-length: [ 1, 'always', 72 ]
 9 |   footer-leading-blank: [ 1, 'always' ]
10 |   type-enum: [
11 |     2,
12 |     'always',
13 |     [
14 |       'build',
15 |       'ci',
16 |       'docs',
17 |       'feat',
18 |       'fix',
19 |       'perf',
20 |       'refactor',
21 |       'release',
22 |       'revert',
23 |       'style',
24 |       'test',
25 |     ]
26 |   ]
27 | 
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
 1 | version: 2
 2 | updates:
 3 |   - package-ecosystem: "github-actions"
 4 |
    directory: "/"
 5 |     schedule:
 6 |       interval: "weekly"
 7 |     groups:
 8 |       artifacts:
 9 |         # Group upload/download artifact updates, the versions are dependent
10 |         patterns:
11 |           - "actions/*-artifact"
12 | 
13 |   - package-ecosystem: "cargo"
14 |     directory: "/"
15 |     schedule:
16 |       interval: "monthly"
--------------------------------------------------------------------------------
/.github/workflows/benchmarks.yml:
--------------------------------------------------------------------------------
 1 | name: Benchmarks (CodSpeed)
 2 | 
 3 | on:
 4 |   push:
 5 |     branches:
 6 |       - dev
 7 |   pull_request:
 8 |   workflow_dispatch:
 9 | 
10 | jobs:
11 |   benchmarks:
12 |     name: Run benchmarks
13 |     runs-on: ubuntu-latest
14 |     steps:
15 |       - uses: actions/checkout@v4
16 |       - name: Setup rust toolchain, cache and cargo-codspeed binary
17 |         uses: moonrepo/setup-rust@v1
18 |         with:
19 |           channel: stable
20 |           cache-target: release
21 |           bins: cargo-codspeed
22 | 
23 |       - name: Build the benchmark target(s)
24 |         run: cargo codspeed build --features serde
25 | 
26 |       - name: Run the benchmarks
27 |         uses: CodSpeedHQ/action@v3
28 |         with:
29 |           run: cargo codspeed run
30 | 
--------------------------------------------------------------------------------
/.github/workflows/cargo_publish_dry_run.yml:
--------------------------------------------------------------------------------
 1 | # Runs `cargo publish --dry-run` before another release
 2 | 
 3 | name: Check crate publishing works
 4 | on:
 5 |   pull_request:
 6 |     branches: [ release ]
 7 |   workflow_dispatch:
 8 | 
 9 | env:
10 |   CARGO_TERM_COLOR: always
11 | 
12 | jobs:
13 |   cargo_publish_dry_run:
14 |     name: Publishing works
15 |     runs-on: ubuntu-latest
16 |     steps:
17 |       - uses: actions/checkout@v4
18 |       - name: Install stable Rust
19 |         uses: dtolnay/rust-toolchain@stable
20 | 
21 |       - name: Get Cargo version
22 |         id: cargo_version
23 |         run: echo "version=$(cargo -V | tr -d ' ')" >> "$GITHUB_OUTPUT"
24 |         shell: bash
25 | 
26 |       - name: Run `cargo publish --dry-run`
27 |         run: cargo 
publish --dry-run
28 | 
--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
 1 | name: CI
 2 | on:
 3 |   pull_request:
 4 |   merge_group:
 5 |   push:
 6 |     branches: [ release, dev ]
 7 |   schedule: [ cron: "0 6 * * 4" ]
 8 | 
 9 | env:
10 |   CARGO_TERM_COLOR: always
11 | 
12 | jobs:
13 |   test:
14 |     name: Tests
15 |     runs-on: ubuntu-latest
16 |     steps:
17 |       - uses: actions/checkout@v4
18 |       - uses: dtolnay/rust-toolchain@stable
19 |       - run: cargo build --workspace
20 |       - run: cargo test --all-features --workspace
21 | 
22 |   clippy:
23 |     name: Clippy
24 |     runs-on: ubuntu-latest
25 |     steps:
26 |       - uses: actions/checkout@v4
27 |       - uses: dtolnay/rust-toolchain@stable
28 |         with:
29 |           components: clippy
30 |       - name: Check Clippy lints
31 |         env:
32 |           RUSTFLAGS: -D warnings
33 |         run: cargo clippy --workspace
34 | 
35 |   check_formatting:
36 |     name: Formatting
37 |     runs-on: ubuntu-latest
38 |     steps:
39 |       - uses: actions/checkout@v4
40 |       - uses: dtolnay/rust-toolchain@stable
41 |         with:
42 |           components: rustfmt
43 |       - run: cargo fmt --all -- --check
44 | 
45 |   check_documentation:
46 |     name: Docs
47 |     runs-on: ubuntu-latest
48 |     steps:
49 |       - uses: actions/checkout@v4
50 |       - uses: dtolnay/rust-toolchain@stable
51 |       - name: Check documentation
52 |         env:
53 |           RUSTDOCFLAGS: -D warnings
54 |         run: cargo doc --workspace --no-deps --document-private-items
55 | 
56 |   minimal-versions:
57 |     name: Minimal versions
58 |     runs-on: ubuntu-latest
59 |     steps:
60 |       - uses: actions/checkout@v4
61 |       - uses: dtolnay/rust-toolchain@nightly
62 |       - run: cargo +nightly update -Zminimal-versions
63 |       - run: cargo +nightly build --workspace
64 |       - run: cargo +nightly test --all-features --workspace
65 | 
--------------------------------------------------------------------------------
/.github/workflows/deploy_documentation.yml:
--------------------------------------------------------------------------------
 1 | # Deploys the latest development documentation to GitHub Pages
 2 | 
 3 | name: Deploy documentation
 4 | on:
 5 |   push:
 6 |     branches: [ dev ]
 7 |   workflow_dispatch:
 8 | 
 9 | env:
10 |   CARGO_TERM_COLOR: always
11 | 
12 | jobs:
13 |   deploy_documentation:
14 |     runs-on: ubuntu-latest
15 |     steps:
16 |       - uses: actions/checkout@v4
17 |       - uses: dtolnay/rust-toolchain@stable
18 | 
19 |       - name: Build documentation
20 |         run: cargo doc --no-deps
21 | 
22 |       - name: Deploy documentation
23 |         if: ${{ github.event_name == 'push' }}
24 |         uses: peaceiris/actions-gh-pages@v4
25 |         with:
26 |           github_token: ${{ secrets.GITHUB_TOKEN }}
27 |           publish_dir: ./target/doc
28 |           force_orphan: true
29 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | /target
2 | 
3 | 
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
  1 | # Changelog
  2 | 
  3 | All notable changes to this project will be documented in this file.
  4 | 
  5 | ## [0.3.0] - 2025-02-12 - [(diff with 0.2.1)][0.2.1-diff]
  6 | 
  7 | PubGrub 0.3 has a more flexible interface and speeds up resolution significantly. The public API is very different
  8 | now; we recommend starting the migration by implementing the new `DependencyProvider` interface following the
  9 | [Guide](https://pubgrub-rs.github.io/pubgrub/pubgrub/).
 10 | 
 11 | All public interfaces are now in the root of the crate.
 12 | 
 13 | In the main interface, `DependencyProvider`, `choose_package_version` was split into two methods: `prioritize`,
 14 | for choosing which package to decide next by assigning a priority to each package, and `choose_version`. The generic
 15 | parameters became associated types. The version set is configurable by an associated type.
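The configurable version set can be illustrated with a short, self-contained sketch. `ToyRange` below is a hypothetical, greatly simplified stand-in, not the crate's actual `VersionSet` trait or its ranges type; it only shows the kind of set operations such an associated type must support.

```rust
// Hedged sketch: `ToyRange` is a hypothetical, minimal stand-in for the kind
// of type that can fill the version-set associated type. It is NOT the
// crate's `VersionSet` trait; it only shows the set operations (emptiness,
// singleton, containment, intersection) such a type must support.

/// A half-open interval of `u32` versions: `[start, end)`.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct ToyRange {
    start: u32,
    end: u32,
}

impl ToyRange {
    /// The empty set of versions.
    fn empty() -> Self {
        ToyRange { start: 0, end: 0 }
    }

    /// The set containing exactly one version (ignores overflow at `u32::MAX`).
    fn singleton(v: u32) -> Self {
        ToyRange { start: v, end: v + 1 }
    }

    /// Whether a version is in the set.
    fn contains(&self, v: u32) -> bool {
        self.start <= v && v < self.end
    }

    /// Set intersection: empty when the intervals do not overlap.
    fn intersection(&self, other: &Self) -> Self {
        let start = self.start.max(other.start);
        let end = self.end.min(other.end);
        if start < end {
            ToyRange { start, end }
        } else {
            ToyRange::empty()
        }
    }
}

fn main() {
    let requested = ToyRange { start: 1, end: 5 }; // versions 1, 2, 3, 4
    let pinned = ToyRange::singleton(3);
    let compatible = requested.intersection(&pinned);
    assert!(compatible.contains(3));
    assert!(!compatible.contains(4));
}
```

A real version-set type would also need complement and union operations, and can represent arbitrary unions of intervals rather than a single one.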
 16 | 
 17 | `Dependencies` gained a generic parameter for custom incompatibility types beyond version conflicts, such as packages
 18 | not available for the current platform or permission errors. This type is exposed on `DependencyProvider` as
 19 | `DependencyProvider::M`.
 20 | 
 21 | `pubgrub::range::Range` now lives in its own crate as [`version_ranges::Ranges`](https://docs.rs/version-ranges/0.1/version_ranges/struct.Ranges.html). A `Version` can be almost any
 22 | ordered type now; it only needs to support set operations through `VersionSet`.
 23 | 
 24 | At a glance, this is the new `DependencyProvider` interface:
 25 | 
 26 | ```rust
 27 | pub trait DependencyProvider {
 28 |     type P: Package;
 29 |     type V: Debug + Display + Clone + Ord;
 30 |     type VS: VersionSet<V = Self::V>;
 31 |     type M: Eq + Clone + Debug + Display;
 32 |     type Priority: Ord + Clone;
 33 |     type Err: Error + 'static;
 34 | 
 35 |     fn prioritize(
 36 |         &self,
 37 |         package: &Self::P,
 38 |         range: &Self::VS,
 39 |         package_conflicts_counts: &PackageResolutionStatistics,
 40 |     ) -> Self::Priority;
 41 | 
 42 |     fn choose_version(
 43 |         &self,
 44 |         package: &Self::P,
 45 |         range: &Self::VS,
 46 |     ) -> Result<Option<Self::V>, Self::Err>;
 47 | 
 48 |     fn get_dependencies(
 49 |         &self,
 50 |         package: &Self::P,
 51 |         version: &Self::V,
 52 |     ) -> Result<Dependencies<Self::P, Self::VS, Self::M>, Self::Err>;
 53 | 
 54 | }
 55 | ```
 56 | 
 57 | ## [0.2.1] - 2021-06-30 - [(diff with 0.2.0)][0.2.0-diff]
 58 | 
 59 | This release is focused on performance improvements and code readability, without any change to the public API.
 60 | 
 61 | The code tends to be simpler around tricky parts of the algorithm such as conflict resolution.
 62 | Some data structures have been rewritten (with no unsafe) to lower memory usage.
 63 | Depending on the scenario, version 0.2.1 is 3 to 8 times faster than 0.2.0.
 64 | As an example, solving all existing elm package versions went from 580ms to 175ms on my laptop,
 65 | while solving a specific subset of packages from crates.io went from 2.5s to 320ms on my laptop.
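One of the internal rewrites behind these numbers is arena-style storage: values are kept in a single `Vec<T>` and referenced by small, copyable `Id<T>` handles instead of owned copies. The following is an illustrative, self-contained sketch of the pattern, not the crate's actual `internal::arena` code.

```rust
use std::marker::PhantomData;

// Illustrative sketch of an arena backed by a `Vec<T>`, handing out small
// copyable `Id<T>` handles instead of owned copies of the stored values.
// This is a hypothetical simplification, not the crate's real arena.

#[derive(Debug)]
struct Id<T> {
    raw: u32,
    // `T` is phantom data: ids from arenas of different types cannot be mixed.
    _ty: PhantomData<T>,
}

// Manual impls so `Id<T>` is `Copy` even when `T` is not.
impl<T> Clone for Id<T> {
    fn clone(&self) -> Self {
        *self
    }
}
impl<T> Copy for Id<T> {}

struct Arena<T> {
    data: Vec<T>,
}

impl<T> Arena<T> {
    fn new() -> Self {
        Arena { data: Vec::new() }
    }

    /// Store a value and return a cheap handle to it.
    fn alloc(&mut self, value: T) -> Id<T> {
        let raw = self.data.len() as u32;
        self.data.push(value);
        Id { raw, _ty: PhantomData }
    }

    /// Resolve a handle back to a reference.
    fn get(&self, id: Id<T>) -> &T {
        &self.data[id.raw as usize]
    }
}

fn main() {
    let mut arena: Arena<String> = Arena::new();
    let a = arena.alloc("foo".to_string());
    let b = arena.alloc("bar".to_string());
    assert_eq!(arena.get(a), "foo");
    assert_eq!(arena.get(b), "bar");
}
```

Passing a 4-byte `Id<T>` around instead of cloning whole values is what makes it cheap to reference the same incompatibility from many places.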
 66 | 
 67 | Below are listed all the important changes to the internal parts of the API.
 68 | 
 69 | #### Added
 70 | 
 71 | - New `SmallVec` data structure (with no unsafe) using fixed-size arrays for up to 2 entries.
 72 | - New `SmallMap` data structure (with no unsafe) using fixed-size arrays for up to 2 entries.
 73 | - New `Arena` data structure (with no unsafe) backed by a `Vec<T>` and indexed with `Id<T>` where `T` is phantom data.
 74 | 
 75 | #### Changed
 76 | 
 77 | - Updated the `large_case` benchmark to run with both u16 and string package identifiers in registries.
 78 | - Use the new `Arena` for the incompatibility store, and use its `Id` identifiers to reference incompatibilities instead of full owned copies in the `incompatibilities` field of the solver `State`.
 79 | - Save the satisfier indices of each package involved in an incompatibility when looking for its satisfier. This speeds up the search for the previous satisfier.
 80 | - Unit propagation now restarts its loop early at the first conflict found instead of continuing evaluation for the current package.
 81 | - Index incompatibilities by package in a hash map instead of using a vec.
 82 | - Keep track of already-contradicted incompatibilities in a `Set` until the next backtrack to speed up unit propagation.
 83 | - Unify `history` and `memory` in `partial_solution` under a unique hash map indexed by packages. This should speed up access to relevant terms in conflict resolution.
 95 | - More than 10x performance improvement.
 96 | 
 97 | ### Changes affecting the public API
 98 | 
 99 | #### Added
100 | 
101 | - Links to code items in the code documentation.
102 | - New `"serde"` feature that allows serializing some library types, useful for making simple reproducible bug reports.
103 | - New variants for `error::PubGrubError` which are `DependencyOnTheEmptySet`,
104 |   `SelfDependency`, `ErrorChoosingPackageVersion` and `ErrorInShouldCancel`.
105 | - New `type_alias::Map<K, V>` defined as `rustc_hash::FxHashMap<K, V>`.
106 | - New `type_alias::SelectedDependencies<P, V>` defined as `Map<P, V>`.
107 | - The types `Dependencies` and `DependencyConstraints` were introduced to clarify intent.
108 | - New function `choose_package_with_fewest_versions` to help implement
109 |   the `choose_package_version` method of a `DependencyProvider`.
110 | - Implement `FromStr` for `SemanticVersion`.
111 | - Add the `VersionParseError` type for parsing of semantic versions.
112 | 
113 | #### Changed
114 | 
115 | - The `Solver` trait was replaced by a `DependencyProvider` trait
116 |   which now must implement a `choose_package_version` method
117 |   instead of `list_available_versions`.
118 |   It now has the ability to choose a package in addition to a version.
119 |   The `DependencyProvider` also has a new optional method `should_cancel`
120 |   that may be used to stop the solver if needed.
121 | - The `choose_package_version` and `get_dependencies` methods of the
122 |   `DependencyProvider` trait now take an immutable reference to `self`.
123 |   Interior mutability can be used by the implementor if mutability is needed.
124 | - The `Solver.run` method was thus replaced by a free function `solver::resolve`
125 |   taking a dependency provider as its first argument.
126 | - The `OfflineSolver` is thus replaced by an `OfflineDependencyProvider`.
127 | - `SemanticVersion` now takes `u32` instead of `usize` for its 3 parts.
128 | - `NumberVersion` now uses `u32` instead of `usize`.
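The `FromStr` implementation for `SemanticVersion` mentioned above can be sketched with a simplified, hypothetical stand-in. `SemVer` below is not the crate's code, and the real parser reports a dedicated `VersionParseError` rather than a `String`.

```rust
use std::str::FromStr;

// Hypothetical simplified stand-in for a semantic version type with a
// `FromStr` implementation; NOT the crate's `SemanticVersion`.

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct SemVer {
    major: u32, // `u32` parts, matching the 0.2.0 change away from `usize`
    minor: u32,
    patch: u32,
}

impl FromStr for SemVer {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        let mut parts = s.split('.');
        // Helper parsing one dot-separated component as a `u32`.
        let mut next = |name: &str| -> Result<u32, String> {
            parts
                .next()
                .ok_or_else(|| format!("missing {name} part"))?
                .parse::<u32>()
                .map_err(|e| format!("invalid {name} part: {e}"))
        };
        let major = next("major")?;
        let minor = next("minor")?;
        let patch = next("patch")?;
        if parts.next().is_some() {
            return Err("too many dot-separated parts".to_string());
        }
        Ok(SemVer { major, minor, patch })
    }
}

fn main() {
    let v: SemVer = "1.2.3".parse().unwrap();
    assert_eq!(v, SemVer { major: 1, minor: 2, patch: 3 });
    assert!("1.2".parse::<SemVer>().is_err());
    assert!("1.2.3.4".parse::<SemVer>().is_err());
}
```

Implementing `FromStr` is what makes `"1.2.3".parse()` work, which is the ergonomic win the changelog entry refers to.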
129 | 
130 | #### Removed
131 | 
132 | - `ErrorRetrievingVersions` variant of `error::PubGrubError`.
133 | 
134 | ### Changes in the internal parts of the API
135 | 
136 | #### Added
137 | 
138 | - `benches/large_case.rs` enables benchmarking of serialized registries of packages.
139 | - `examples/caching_dependency_provider.rs`, an example dependency provider caching dependencies.
140 | - `PackageTerm<P, V> = (P, Term<V>)`, a new type alias for readability.
141 | - `Memory.term_intersection_for_package(&mut self, package: &P) -> Option<&Term<V>>`.
142 | - New types were introduced for conflict resolution in `internal::partial_solution`
143 |   to clarify the intent and return values of some functions.
144 |   Those types are `DatedAssignment` and `SatisfierAndPreviousHistory`.
145 | - `PartialSolution.term_intersection_for_package` calling the same function
146 |   from its `memory`.
147 | - New property tests for ranges: `negate_contains_opposite`, `intersection_contains_both`
148 |   and `union_contains_either`.
149 | - A large synthetic test case was added in `test-examples/`.
150 | - A new test example `double_choices` was added
151 |   for the detection of a bug (now fixed) in the implementation.
152 | - Property testing of big synthetic datasets was added in `tests/proptest.rs`.
153 | - Comparison of the PubGrub solver and a SAT solver
154 |   was added with `tests/sat_dependency_provider.rs`.
155 | - Other regression and unit tests were added to `tests/tests.rs`.
156 | 
157 | #### Changed
158 | 
159 | - The CI workflow was improved (`./github/workflows/`), including a check for
160 |   [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/) and
161 |   [Clippy](https://github.com/rust-lang/rust-clippy) for source code linting.
162 | - Using SPDX license identifiers instead of classic MPL-2.0 file headers.
163 | - `State.incompatibilities` is now wrapped inside a `Rc`.
164 | - `DecisionLevel(u32)` is used in place of `usize` for partial solution decision levels.
165 | - `State.conflict_resolution` now also returns the almost-satisfied package
166 |   to avoid an unnecessary call to `self.partial_solution.relation(...)` after conflict resolution.
167 | - `Kind::NoVersion` was renamed to `Kind::NoVersions`, and all other usage of `noversion`
168 |   has been changed to `no_versions`.
169 | - Variants of the `incompatibility::Relation` enum have changed.
170 | - Incompatibility now uses a deterministic hasher to store packages in its hash map.
171 | - `incompatibility.relation(...)` now takes a function as argument to avoid computing
172 |   unnecessary term intersections.
173 | - `Memory` now uses a deterministic hasher instead of the default one.
174 | - `memory::PackageAssignments` is now an enum instead of a struct.
175 | - Derivations in a `PackageAssignments` keep a precomputed intersection of derivation terms.
176 | - The `potential_packages` method now returns a `Range`
177 |   instead of a `Term` for the versions constraint of each package.
178 | - `PartialSolution.relation` now takes `&mut self` instead of `&self`
179 |   to be able to store the computation of term intersections.
180 | - `Term.accept_version` was renamed `Term.contains`.
181 | - The `satisfied_by` and `contradicted_by` methods of a `Term`
182 |   now directly take a reference to the intersection of other terms.
183 |   The same goes for `relation_with`.
184 | 
185 | #### Removed
186 | 
187 | - The `term` field of an `Assignment::Derivation` variant.
188 | - The `Memory.all_terms` method was removed.
189 | - The `Memory.remove_decision` method was removed in favor of a check before using `Memory.add_decision`.
190 | - The `PartialSolution` methods `pick_package` and `pick_version` have been removed
191 |   since control was given back to the dependency provider to choose a package version.
192 | - The `PartialSolution` methods `remove_last_decision` and `satisfies_any_of` were removed
193 |   in favor of a preventive check before calling `add_decision`.
194 | - `Term.is_negative`.
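The `Term` vocabulary used throughout this list can be illustrated with a toy model: a term is a positive or a negative statement about a set of versions, and the renamed `contains` method checks whether a given version satisfies that statement. `ToyTerm` below is a hypothetical simplification, not the crate's `Term<V>`.

```rust
use std::collections::BTreeSet;

// Hypothetical sketch of the term idea: a positive or negative statement
// about a set of versions. NOT the crate's actual `Term<V>` type.

#[derive(Debug, Clone, PartialEq, Eq)]
enum ToyTerm {
    /// The package version must be in this set.
    Positive(BTreeSet<u32>),
    /// The package version must NOT be in this set.
    Negative(BTreeSet<u32>),
}

impl ToyTerm {
    /// Whether version `v` satisfies this term; this is the kind of check
    /// the `accept_version` -> `contains` rename is about.
    fn contains(&self, v: u32) -> bool {
        match self {
            ToyTerm::Positive(set) => set.contains(&v),
            ToyTerm::Negative(set) => !set.contains(&v),
        }
    }
}

fn main() {
    let pos = ToyTerm::Positive(BTreeSet::from([1, 2, 3]));
    let neg = ToyTerm::Negative(BTreeSet::from([1, 2, 3]));
    assert!(pos.contains(2));
    assert!(!pos.contains(7));
    assert!(!neg.contains(2));
    assert!(neg.contains(7));
}
```

Intersections of such terms are what `satisfied_by`, `contradicted_by` and the prior-cause computation mentioned under "Fixed" operate on.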
195 | 
196 | #### Fixed
197 | 
198 | - Prior cause computation (`incompatibility::prior_cause`) now uses the intersection of package terms
199 |   instead of their union, which was an implementation error.
200 | 
201 | ## [0.1.0] - 2020-10-01
202 | 
203 | ### Added
204 | 
205 | - `README.md` as the home page of this repository.
206 | - `LICENSE`, code is provided under the MPL 2.0 license.
207 | - `Cargo.toml` configuration of this Rust project.
208 | - `src/` containing all the source code for this first implementation of PubGrub in Rust.
209 | - `tests/` containing end-to-end test examples.
210 | - `examples/` other examples, not in the form of tests.
211 | - `.gitignore` configured for a Rust project.
212 | - `.github/workflows/` CI to automatically build, test and document on push and pull requests.
213 | 
214 | [0.3.0]: https://github.com/pubgrub-rs/pubgrub/releases/tag/v0.3.0
215 | [0.2.1]: https://github.com/pubgrub-rs/pubgrub/releases/tag/v0.2.1
216 | [0.2.0]: https://github.com/pubgrub-rs/pubgrub/releases/tag/v0.2.0
217 | [0.1.0]: https://github.com/pubgrub-rs/pubgrub/releases/tag/v0.1.0
218 | 
219 | [unreleased-diff]: https://github.com/pubgrub-rs/pubgrub/compare/release...dev
220 | [0.2.1-diff]: https://github.com/pubgrub-rs/pubgrub/compare/v0.2.1...v0.3.0
221 | [0.2.0-diff]: https://github.com/pubgrub-rs/pubgrub/compare/v0.2.0...v0.2.1
222 | [0.1.0-diff]: https://github.com/pubgrub-rs/pubgrub/compare/v0.1.0...v0.2.0
223 | 
--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
 1 | # SPDX-License-Identifier: MPL-2.0
 2 | 
 3 | [workspace]
 4 | members = ["version-ranges"]
 5 | 
 6 | [package]
 7 | name = "pubgrub"
 8 | version = "0.3.0"
 9 | authors = [
10 |     "Matthieu Pizenberg ",
11 |     "Alex Tokarev ",
12 |     "Jacob Finkelman ",
13 | ]
14 | edition = "2021"
15 | description = "PubGrub version solving algorithm"
16 | readme = "README.md"
17 | repository = 
"https://github.com/pubgrub-rs/pubgrub"
18 | license = "MPL-2.0"
19 | keywords = ["dependency", "pubgrub", "semver", "solver", "version"]
20 | categories = ["algorithms"]
21 | include = ["Cargo.toml", "LICENSE", "README.md", "src/**", "tests/**", "examples/**", "benches/**"]
22 | 
23 | # See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
24 | 
25 | [dependencies]
26 | indexmap = "2.7.0"
27 | # for debug logs in tests
28 | log = "0.4.27"
29 | priority-queue = "2.3.1"
30 | rustc-hash = "^2.1.1"
31 | serde = { version = "1.0", features = ["derive"], optional = true }
32 | thiserror = "2.0"
33 | version-ranges = { version = "0.1.0", path = "version-ranges" }
34 | 
35 | [dev-dependencies]
36 | criterion = { version = "2.7.2", package = "codspeed-criterion-compat" }
37 | env_logger = "0.11.6"
38 | proptest = "1.6.0"
39 | ron = "0.10.1"
40 | varisat = "0.2.2"
41 | version-ranges = { version = "0.1.0", path = "version-ranges", features = ["proptest"] }
42 | 
43 | [features]
44 | serde = ["dep:serde", "version-ranges/serde"]
45 | 
46 | [[bench]]
47 | name = "backtracking"
48 | harness = false
49 | 
50 | [[bench]]
51 | name = "large_case"
52 | harness = false
53 | required-features = ["serde"]
54 | 
55 | [[bench]]
56 | name = "sudoku"
57 | harness = false
58 | 
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
 1 | Mozilla Public License Version 2.0
 2 | ==================================
 3 | 
 4 | 1. Definitions
 5 | --------------
 6 | 
 7 | 1.1. "Contributor"
 8 |     means each individual or legal entity that creates, contributes to
 9 |     the creation of, or owns Covered Software.
10 | 
11 | 1.2. "Contributor Version"
12 |     means the combination of the Contributions of others (if any) used
13 |     by a Contributor and that particular Contributor's Contribution.
14 | 
15 | 1.3. 
"Contribution" 16 | means Covered Software of a particular Contributor. 17 | 18 | 1.4. "Covered Software" 19 | means Source Code Form to which the initial Contributor has attached 20 | the notice in Exhibit A, the Executable Form of such Source Code 21 | Form, and Modifications of such Source Code Form, in each case 22 | including portions thereof. 23 | 24 | 1.5. "Incompatible With Secondary Licenses" 25 | means 26 | 27 | (a) that the initial Contributor has attached the notice described 28 | in Exhibit B to the Covered Software; or 29 | 30 | (b) that the Covered Software was made available under the terms of 31 | version 1.1 or earlier of the License, but not also under the 32 | terms of a Secondary License. 33 | 34 | 1.6. "Executable Form" 35 | means any form of the work other than Source Code Form. 36 | 37 | 1.7. "Larger Work" 38 | means a work that combines Covered Software with other material, in 39 | a separate file or files, that is not Covered Software. 40 | 41 | 1.8. "License" 42 | means this document. 43 | 44 | 1.9. "Licensable" 45 | means having the right to grant, to the maximum extent possible, 46 | whether at the time of the initial grant or subsequently, any and 47 | all of the rights conveyed by this License. 48 | 49 | 1.10. "Modifications" 50 | means any of the following: 51 | 52 | (a) any file in Source Code Form that results from an addition to, 53 | deletion from, or modification of the contents of Covered 54 | Software; or 55 | 56 | (b) any new file in Source Code Form that contains any Covered 57 | Software. 58 | 59 | 1.11. "Patent Claims" of a Contributor 60 | means any patent claim(s), including without limitation, method, 61 | process, and apparatus claims, in any patent Licensable by such 62 | Contributor that would be infringed, but for the grant of the 63 | License, by the making, using, selling, offering for sale, having 64 | made, import, or transfer of either its Contributions or its 65 | Contributor Version. 66 | 67 | 1.12. 
"Secondary License" 68 | means either the GNU General Public License, Version 2.0, the GNU 69 | Lesser General Public License, Version 2.1, the GNU Affero General 70 | Public License, Version 3.0, or any later versions of those 71 | licenses. 72 | 73 | 1.13. "Source Code Form" 74 | means the form of the work preferred for making modifications. 75 | 76 | 1.14. "You" (or "Your") 77 | means an individual or a legal entity exercising rights under this 78 | License. For legal entities, "You" includes any entity that 79 | controls, is controlled by, or is under common control with You. For 80 | purposes of this definition, "control" means (a) the power, direct 81 | or indirect, to cause the direction or management of such entity, 82 | whether by contract or otherwise, or (b) ownership of more than 83 | fifty percent (50%) of the outstanding shares or beneficial 84 | ownership of such entity. 85 | 86 | 2. License Grants and Conditions 87 | -------------------------------- 88 | 89 | 2.1. Grants 90 | 91 | Each Contributor hereby grants You a world-wide, royalty-free, 92 | non-exclusive license: 93 | 94 | (a) under intellectual property rights (other than patent or trademark) 95 | Licensable by such Contributor to use, reproduce, make available, 96 | modify, display, perform, distribute, and otherwise exploit its 97 | Contributions, either on an unmodified basis, with Modifications, or 98 | as part of a Larger Work; and 99 | 100 | (b) under Patent Claims of such Contributor to make, use, sell, offer 101 | for sale, have made, import, and otherwise transfer either its 102 | Contributions or its Contributor Version. 103 | 104 | 2.2. Effective Date 105 | 106 | The licenses granted in Section 2.1 with respect to any Contribution 107 | become effective for each Contribution on the date the Contributor first 108 | distributes such Contribution. 109 | 110 | 2.3. 
Limitations on Grant Scope 111 | 112 | The licenses granted in this Section 2 are the only rights granted under 113 | this License. No additional rights or licenses will be implied from the 114 | distribution or licensing of Covered Software under this License. 115 | Notwithstanding Section 2.1(b) above, no patent license is granted by a 116 | Contributor: 117 | 118 | (a) for any code that a Contributor has removed from Covered Software; 119 | or 120 | 121 | (b) for infringements caused by: (i) Your and any other third party's 122 | modifications of Covered Software, or (ii) the combination of its 123 | Contributions with other software (except as part of its Contributor 124 | Version); or 125 | 126 | (c) under Patent Claims infringed by Covered Software in the absence of 127 | its Contributions. 128 | 129 | This License does not grant any rights in the trademarks, service marks, 130 | or logos of any Contributor (except as may be necessary to comply with 131 | the notice requirements in Section 3.4). 132 | 133 | 2.4. Subsequent Licenses 134 | 135 | No Contributor makes additional grants as a result of Your choice to 136 | distribute the Covered Software under a subsequent version of this 137 | License (see Section 10.2) or under the terms of a Secondary License (if 138 | permitted under the terms of Section 3.3). 139 | 140 | 2.5. Representation 141 | 142 | Each Contributor represents that the Contributor believes its 143 | Contributions are its original creation(s) or it has sufficient rights 144 | to grant the rights to its Contributions conveyed by this License. 145 | 146 | 2.6. Fair Use 147 | 148 | This License is not intended to limit any rights You have under 149 | applicable copyright doctrines of fair use, fair dealing, or other 150 | equivalents. 151 | 152 | 2.7. Conditions 153 | 154 | Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted 155 | in Section 2.1. 156 | 157 | 3. Responsibilities 158 | ------------------- 159 | 160 | 3.1. 
Distribution of Source Form 161 | 162 | All distribution of Covered Software in Source Code Form, including any 163 | Modifications that You create or to which You contribute, must be under 164 | the terms of this License. You must inform recipients that the Source 165 | Code Form of the Covered Software is governed by the terms of this 166 | License, and how they can obtain a copy of this License. You may not 167 | attempt to alter or restrict the recipients' rights in the Source Code 168 | Form. 169 | 170 | 3.2. Distribution of Executable Form 171 | 172 | If You distribute Covered Software in Executable Form then: 173 | 174 | (a) such Covered Software must also be made available in Source Code 175 | Form, as described in Section 3.1, and You must inform recipients of 176 | the Executable Form how they can obtain a copy of such Source Code 177 | Form by reasonable means in a timely manner, at a charge no more 178 | than the cost of distribution to the recipient; and 179 | 180 | (b) You may distribute such Executable Form under the terms of this 181 | License, or sublicense it under different terms, provided that the 182 | license for the Executable Form does not attempt to limit or alter 183 | the recipients' rights in the Source Code Form under this License. 184 | 185 | 3.3. Distribution of a Larger Work 186 | 187 | You may create and distribute a Larger Work under terms of Your choice, 188 | provided that You also comply with the requirements of this License for 189 | the Covered Software. 
If the Larger Work is a combination of Covered 190 | Software with a work governed by one or more Secondary Licenses, and the 191 | Covered Software is not Incompatible With Secondary Licenses, this 192 | License permits You to additionally distribute such Covered Software 193 | under the terms of such Secondary License(s), so that the recipient of 194 | the Larger Work may, at their option, further distribute the Covered 195 | Software under the terms of either this License or such Secondary 196 | License(s). 197 | 198 | 3.4. Notices 199 | 200 | You may not remove or alter the substance of any license notices 201 | (including copyright notices, patent notices, disclaimers of warranty, 202 | or limitations of liability) contained within the Source Code Form of 203 | the Covered Software, except that You may alter any license notices to 204 | the extent required to remedy known factual inaccuracies. 205 | 206 | 3.5. Application of Additional Terms 207 | 208 | You may choose to offer, and to charge a fee for, warranty, support, 209 | indemnity or liability obligations to one or more recipients of Covered 210 | Software. However, You may do so only on Your own behalf, and not on 211 | behalf of any Contributor. You must make it absolutely clear that any 212 | such warranty, support, indemnity, or liability obligation is offered by 213 | You alone, and You hereby agree to indemnify every Contributor for any 214 | liability incurred by such Contributor as a result of warranty, support, 215 | indemnity or liability terms You offer. You may include additional 216 | disclaimers of warranty and limitations of liability specific to any 217 | jurisdiction. 218 | 219 | 4. 
Inability to Comply Due to Statute or Regulation 220 | --------------------------------------------------- 221 | 222 | If it is impossible for You to comply with any of the terms of this 223 | License with respect to some or all of the Covered Software due to 224 | statute, judicial order, or regulation then You must: (a) comply with 225 | the terms of this License to the maximum extent possible; and (b) 226 | describe the limitations and the code they affect. Such description must 227 | be placed in a text file included with all distributions of the Covered 228 | Software under this License. Except to the extent prohibited by statute 229 | or regulation, such description must be sufficiently detailed for a 230 | recipient of ordinary skill to be able to understand it. 231 | 232 | 5. Termination 233 | -------------- 234 | 235 | 5.1. The rights granted under this License will terminate automatically 236 | if You fail to comply with any of its terms. However, if You become 237 | compliant, then the rights granted under this License from a particular 238 | Contributor are reinstated (a) provisionally, unless and until such 239 | Contributor explicitly and finally terminates Your grants, and (b) on an 240 | ongoing basis, if such Contributor fails to notify You of the 241 | non-compliance by some reasonable means prior to 60 days after You have 242 | come back into compliance. Moreover, Your grants from a particular 243 | Contributor are reinstated on an ongoing basis if such Contributor 244 | notifies You of the non-compliance by some reasonable means, this is the 245 | first time You have received notice of non-compliance with this License 246 | from such Contributor, and You become compliant prior to 30 days after 247 | Your receipt of the notice. 248 | 249 | 5.2. 
If You initiate litigation against any entity by asserting a patent 250 | infringement claim (excluding declaratory judgment actions, 251 | counter-claims, and cross-claims) alleging that a Contributor Version 252 | directly or indirectly infringes any patent, then the rights granted to 253 | You by any and all Contributors for the Covered Software under Section 254 | 2.1 of this License shall terminate. 255 | 256 | 5.3. In the event of termination under Sections 5.1 or 5.2 above, all 257 | end user license agreements (excluding distributors and resellers) which 258 | have been validly granted by You or Your distributors under this License 259 | prior to termination shall survive termination. 260 | 261 | ************************************************************************ 262 | * * 263 | * 6. Disclaimer of Warranty * 264 | * ------------------------- * 265 | * * 266 | * Covered Software is provided under this License on an "as is" * 267 | * basis, without warranty of any kind, either expressed, implied, or * 268 | * statutory, including, without limitation, warranties that the * 269 | * Covered Software is free of defects, merchantable, fit for a * 270 | * particular purpose or non-infringing. The entire risk as to the * 271 | * quality and performance of the Covered Software is with You. * 272 | * Should any Covered Software prove defective in any respect, You * 273 | * (not any Contributor) assume the cost of any necessary servicing, * 274 | * repair, or correction. This disclaimer of warranty constitutes an * 275 | * essential part of this License. No use of any Covered Software is * 276 | * authorized under this License except under this disclaimer. * 277 | * * 278 | ************************************************************************ 279 | 280 | ************************************************************************ 281 | * * 282 | * 7. 
Limitation of Liability * 283 | * -------------------------- * 284 | * * 285 | * Under no circumstances and under no legal theory, whether tort * 286 | * (including negligence), contract, or otherwise, shall any * 287 | * Contributor, or anyone who distributes Covered Software as * 288 | * permitted above, be liable to You for any direct, indirect, * 289 | * special, incidental, or consequential damages of any character * 290 | * including, without limitation, damages for lost profits, loss of * 291 | * goodwill, work stoppage, computer failure or malfunction, or any * 292 | * and all other commercial damages or losses, even if such party * 293 | * shall have been informed of the possibility of such damages. This * 294 | * limitation of liability shall not apply to liability for death or * 295 | * personal injury resulting from such party's negligence to the * 296 | * extent applicable law prohibits such limitation. Some * 297 | * jurisdictions do not allow the exclusion or limitation of * 298 | * incidental or consequential damages, so this exclusion and * 299 | * limitation may not apply to You. * 300 | * * 301 | ************************************************************************ 302 | 303 | 8. Litigation 304 | ------------- 305 | 306 | Any litigation relating to this License may be brought only in the 307 | courts of a jurisdiction where the defendant maintains its principal 308 | place of business and such litigation shall be governed by laws of that 309 | jurisdiction, without reference to its conflict-of-law provisions. 310 | Nothing in this Section shall prevent a party's ability to bring 311 | cross-claims or counter-claims. 312 | 313 | 9. Miscellaneous 314 | ---------------- 315 | 316 | This License represents the complete agreement concerning the subject 317 | matter hereof. If any provision of this License is held to be 318 | unenforceable, such provision shall be reformed only to the extent 319 | necessary to make it enforceable. 
Any law or regulation which provides 320 | that the language of a contract shall be construed against the drafter 321 | shall not be used to construe this License against a Contributor. 322 | 323 | 10. Versions of the License 324 | --------------------------- 325 | 326 | 10.1. New Versions 327 | 328 | Mozilla Foundation is the license steward. Except as provided in Section 329 | 10.3, no one other than the license steward has the right to modify or 330 | publish new versions of this License. Each version will be given a 331 | distinguishing version number. 332 | 333 | 10.2. Effect of New Versions 334 | 335 | You may distribute the Covered Software under the terms of the version 336 | of the License under which You originally received the Covered Software, 337 | or under the terms of any subsequent version published by the license 338 | steward. 339 | 340 | 10.3. Modified Versions 341 | 342 | If you create software not governed by this License, and you want to 343 | create a new license for such software, you may create and use a 344 | modified version of this License if you rename the license and remove 345 | any references to the name of the license steward (except to note that 346 | such modified license differs from this License). 347 | 348 | 10.4. Distributing Source Code Form that is Incompatible With Secondary 349 | Licenses 350 | 351 | If You choose to distribute Source Code Form that is Incompatible With 352 | Secondary Licenses under the terms of this version of the License, the 353 | notice described in Exhibit B of this License must be attached. 354 | 355 | Exhibit A - Source Code Form License Notice 356 | ------------------------------------------- 357 | 358 | This Source Code Form is subject to the terms of the Mozilla Public 359 | License, v. 2.0. If a copy of the MPL was not distributed with this 360 | file, You can obtain one at http://mozilla.org/MPL/2.0/. 
361 | 362 | If it is not possible or desirable to put the notice in a particular 363 | file, then You may include the notice in a location (such as a LICENSE 364 | file in a relevant directory) where a recipient would be likely to look 365 | for such a notice. 366 | 367 | You may add additional accurate notices of copyright ownership. 368 | 369 | Exhibit B - "Incompatible With Secondary Licenses" Notice 370 | --------------------------------------------------------- 371 | 372 | This Source Code Form is "Incompatible With Secondary Licenses", as 373 | defined by the Mozilla Public License, v. 2.0. 374 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # PubGrub version solving algorithm 2 | 3 | ![license](https://img.shields.io/crates/l/pubgrub.svg) 4 | [![crates.io](https://img.shields.io/crates/v/pubgrub.svg?logo=rust)][crates] 5 | [![docs.rs](https://img.shields.io/badge/docs.rs-pubgrub-yellow)][docs] 6 | [![guide](https://img.shields.io/badge/guide-pubgrub-pink?logo=read-the-docs)][guide] 7 | 8 | Version solving consists in efficiently finding a set of packages and versions 9 | that satisfy all the constraints of a given project's dependencies. 10 | In addition, when that is not possible, 11 | PubGrub tries to provide a clear and 12 | human-readable explanation of why it failed. 13 | The [introductory blog post about PubGrub][medium-pubgrub] presents 14 | one such example of a failure explanation: 15 | 16 | ```txt 17 | Because dropdown >=2.0.0 depends on icons >=2.0.0 and 18 | root depends on icons <2.0.0, dropdown >=2.0.0 is forbidden. 19 | 20 | And because menu >=1.1.0 depends on dropdown >=2.0.0, 21 | menu >=1.1.0 is forbidden. 22 | 23 | And because menu <1.1.0 depends on dropdown >=1.0.0 <2.0.0 24 | which depends on intl <4.0.0, every version of menu 25 | requires intl <4.0.0.
26 | 27 | So, because root depends on both menu >=1.0.0 and intl >=5.0.0, 28 | version solving failed. 29 | ``` 30 | 31 | This pubgrub crate provides a Rust implementation of PubGrub. 32 | It is generic and works for any type of dependency system 33 | as long as packages (P) and versions (V) implement 34 | the provided `Package` and `Version` traits. 35 | 36 | 37 | ## Using the pubgrub crate 38 | 39 | A [guide][guide] with both high-level explanations and 40 | in-depth algorithm details is available online. 41 | The [API documentation is available on docs.rs][docs]. 42 | A version of the [API docs for the unreleased functionality][docs-dev] from the `dev` branch is also 43 | available for convenience. 44 | 45 | 46 | ## Contributing 47 | 48 | Discussion and development happen here on GitHub and on our 49 | [Zulip stream](https://rust-lang.zulipchat.com/#narrow/stream/260232-t-cargo.2FPubGrub). 50 | Please join in! 51 | 52 | Remember to always be considerate of others, 53 | who may have different native languages, cultures and experiences. 54 | We want everyone to feel welcome; 55 | let us know with a private message on Zulip if you don't feel that way. 56 | 57 | 58 | ## PubGrub 59 | 60 | PubGrub is a version solving algorithm 61 | written in 2018 by Natalie Weizenbaum 62 | for the Dart package manager. 63 | It is designed to be very fast and to explain errors 64 | more clearly than the alternatives. 65 | An introductory blog post was 66 | [published on Medium][medium-pubgrub] by its author. 67 | 68 | The detailed explanation of the algorithm is 69 | [provided on GitHub][github-pubgrub], 70 | and complemented by the ["Internals" section of our guide][guide-internals]. 71 | The algorithm builds on ASP (Answer Set Programming), 72 | as presented in the book 73 | "[Answer Set Solving in Practice][potassco-book]" 74 | by Martin Gebser, Roland Kaminski, Benjamin Kaufmann and Torsten Schaub.
75 | 76 | [crates]: https://crates.io/crates/pubgrub 77 | [guide]: https://pubgrub-rs-guide.pages.dev 78 | [guide-internals]: https://pubgrub-rs-guide.pages.dev/internals/intro.html 79 | [docs]: https://docs.rs/pubgrub 80 | [docs-dev]: https://pubgrub-rs.github.io/pubgrub/pubgrub/ 81 | [medium-pubgrub]: https://medium.com/@nex3/pubgrub-2fb6470504f 82 | [github-pubgrub]: https://github.com/dart-lang/pub/blob/master/doc/solver.md 83 | [potassco-book]: https://potassco.org/book/ 84 | -------------------------------------------------------------------------------- /benches/backtracking.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | //! This bench monitors the performance of backtracking and term intersection. 4 | //! 5 | //! Dependencies are constructed in a way that all versions need to be tested before finding a solution. 6 | 7 | use criterion::*; 8 | use pubgrub::OfflineDependencyProvider; 9 | use version_ranges::Ranges; 10 | 11 | /// This benchmark is a simplified reproduction of one of the patterns found in the `solana-*` crates from Cargo: 12 | /// * `solana-archiver-lib v1.1.12` depends on many layers of other solana crates with req `>= 1.1.12`. 13 | /// * each version `1.x.y` higher than `1.5.15` of a solana crate depends on other solana crates with req `= 1.x.y`. 14 | /// * `solana-crate-features` depends on `cc` with the `num_cpus` feature, which doesn't exist in recent versions of `cc`. 
15 | fn backtracking_singletons(c: &mut Criterion, package_count: u32, version_count: u32) { 16 | let mut dependency_provider = OfflineDependencyProvider::<u32, Ranges<u32>>::new(); 17 | 18 | dependency_provider.add_dependencies(0u32, 0u32, [(1u32, Ranges::full())]); 19 | dependency_provider.add_dependencies(1u32, 0u32, []); 20 | 21 | for n in 1..package_count { 22 | for v in 1..version_count { 23 | dependency_provider.add_dependencies(n, v, [(n + 1, Ranges::singleton(v))]); 24 | } 25 | } 26 | 27 | c.bench_function("backtracking_singletons", |b| { 28 | b.iter(|| { 29 | let _ = pubgrub::resolve(&dependency_provider, 0u32, 0u32); 30 | }) 31 | }); 32 | } 33 | 34 | /// This benchmark is a simplified reproduction of one of the patterns found in the `solana-*` crates from Cargo: 35 | /// * `solana-archiver-lib v1.1.12` depends on many layers of other solana crates with req `>= 1.1.12`. 36 | /// * `solana-archiver-lib v1.1.12` also depends on `ed25519-dalek v1.0.0-pre.3`. 37 | /// * each version `1.x.y` higher than `1.5.15` of a solana crate depends on other solana crates with req `= 1.x.y`. 38 | /// * `solana-crate-features >= 1.2.17` depends on `ed25519-dalek v1.0.0-pre.4` or a higher incompatible version.
39 | fn backtracking_disjoint_versions(c: &mut Criterion, package_count: u32, version_count: u32) { 40 | let mut dependency_provider = OfflineDependencyProvider::<u32, Ranges<u32>>::new(); 41 | 42 | let root_deps = [(1u32, Ranges::full()), (u32::MAX, Ranges::singleton(0u32))]; 43 | dependency_provider.add_dependencies(0u32, 0u32, root_deps); 44 | 45 | dependency_provider.add_dependencies(1u32, 0u32, []); 46 | 47 | for n in 1..package_count { 48 | for v in 1..version_count { 49 | dependency_provider.add_dependencies(n, v, [(n + 1, Ranges::singleton(v))]); 50 | } 51 | } 52 | for v in 1..version_count { 53 | dependency_provider.add_dependencies(package_count, v, [(u32::MAX, Ranges::singleton(v))]); 54 | } 55 | 56 | for v in 0..version_count { 57 | dependency_provider.add_dependencies(u32::MAX, v, []); 58 | } 59 | 60 | c.bench_function("backtracking_disjoint_versions", |b| { 61 | b.iter(|| { 62 | let _ = pubgrub::resolve(&dependency_provider, 0u32, 0u32); 63 | }) 64 | }); 65 | } 66 | 67 | /// This benchmark is a simplified reproduction of one of the patterns found in the `solana-*` crates from Cargo: 68 | /// * `solana-archiver-lib v1.1.12` depends on many layers of other solana crates with req `>= 1.1.12`. 69 | /// * each version `1.x.y` lower than `1.5.14` of a solana crate depends on other solana crates with req `>= 1.x.y`. 70 | /// * `solana-crate-features` depends on `cc` with the `num_cpus` feature, which doesn't exist in recent versions of `cc`.
71 | fn backtracking_ranges(c: &mut Criterion, package_count: u32, version_count: u32) { 72 | let mut dependency_provider = OfflineDependencyProvider::<u32, Ranges<u32>>::new(); 73 | 74 | dependency_provider.add_dependencies(0u32, 0u32, [(1u32, Ranges::full())]); 75 | dependency_provider.add_dependencies(1u32, 0u32, []); 76 | 77 | for n in 1..package_count { 78 | for v in 1..version_count { 79 | let r = Ranges::higher_than(version_count - v); 80 | dependency_provider.add_dependencies(n, v, [(n + 1, r)]); 81 | } 82 | } 83 | 84 | c.bench_function("backtracking_ranges", |b| { 85 | b.iter(|| { 86 | let _ = pubgrub::resolve(&dependency_provider, 0u32, 0u32); 87 | }) 88 | }); 89 | } 90 | 91 | fn bench_group(c: &mut Criterion) { 92 | backtracking_singletons(c, 100, 500); 93 | backtracking_disjoint_versions(c, 300, 200); 94 | backtracking_ranges(c, 5, 200); 95 | } 96 | 97 | criterion_group!(benches, bench_group); 98 | criterion_main!(benches); 99 | -------------------------------------------------------------------------------- /benches/large_case.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use std::time::Duration; 4 | 5 | use criterion::*; 6 | use serde::de::Deserialize; 7 | 8 | use pubgrub::{resolve, OfflineDependencyProvider, Package, Range, SemanticVersion, VersionSet}; 9 | 10 | fn bench<'a, P: Package + Deserialize<'a>, VS: VersionSet + Deserialize<'a>>( 11 | b: &mut Bencher, 12 | case: &'a str, 13 | ) where 14 | <VS as VersionSet>::V: Deserialize<'a>, 15 | { 16 | let dependency_provider: OfflineDependencyProvider<P, VS> = ron::de::from_str(case).unwrap(); 17 | 18 | b.iter(|| { 19 | for p in dependency_provider.packages() { 20 | for n in dependency_provider.versions(p).unwrap() { 21 | let _ = resolve(&dependency_provider, p.clone(), n.clone()); 22 | } 23 | } 24 | }); 25 | } 26 | 27 | fn bench_nested(c: &mut Criterion) { 28 | let mut group = c.benchmark_group("large_cases"); 29 |
group.measurement_time(Duration::from_secs(20)); 30 | 31 | for case in std::fs::read_dir("test-examples").unwrap() { 32 | let case = case.unwrap().path(); 33 | let name = case.file_name().unwrap().to_string_lossy(); 34 | let data = std::fs::read_to_string(&case).unwrap(); 35 | if name.ends_with("u16_NumberVersion.ron") || name.ends_with("u16_u32.ron") { 36 | group.bench_function(name, |b| { 37 | bench::<u16, Range<u32>>(b, &data); 38 | }); 39 | } else if name.ends_with("str_SemanticVersion.ron") { 40 | group.bench_function(name, |b| { 41 | bench::<&str, Range<SemanticVersion>>(b, &data); 42 | }); 43 | } 44 | } 45 | 46 | group.finish(); 47 | } 48 | 49 | criterion_group!(benches, bench_nested); 50 | criterion_main!(benches); 51 | -------------------------------------------------------------------------------- /benches/sudoku.rs: -------------------------------------------------------------------------------- 1 | //! A sudoku solver. 2 | //! 3 | //! Uses `Arc` to be closer to real version types. 4 | // SPDX-License-Identifier: MPL-2.0 5 | 6 | use pubgrub::{resolve, OfflineDependencyProvider, Range}; 7 | use std::fmt; 8 | use std::sync::Arc; 9 | use version_ranges::Ranges; 10 | 11 | use criterion::*; 12 | 13 | /// The size of a box in the board. 14 | const BOARD_BASE: usize = 3; 15 | /// The size of the board. 16 | const BOARD_SIZE: usize = BOARD_BASE * BOARD_BASE; 17 | 18 | type DP = OfflineDependencyProvider<SudokuPackage, Range<Arc<usize>>>; 19 | 20 | #[derive(Clone, Debug, Eq, Hash, PartialEq)] 21 | enum SudokuPackage { 22 | /// Add all known fields. 23 | Root, 24 | /// Version is the value of the cell.
25 | Cell { row: usize, col: usize }, 26 | } 27 | 28 | impl fmt::Display for SudokuPackage { 29 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 30 | match self { 31 | SudokuPackage::Root => f.write_str("root"), 32 | SudokuPackage::Cell { row, col } => { 33 | write!(f, "({col}, {row})") 34 | } 35 | } 36 | } 37 | } 38 | 39 | fn from_board(b: &str) -> Vec<(SudokuPackage, Range<Arc<usize>>)> { 40 | let mut out = vec![]; 41 | for (row, line) in b 42 | .trim() 43 | .lines() 44 | .map(str::trim) 45 | .filter(|l| !l.starts_with('-')) 46 | .enumerate() 47 | { 48 | for (col, val) in line 49 | .split_ascii_whitespace() 50 | .filter(|c| !c.starts_with('|')) 51 | .enumerate() 52 | { 53 | if let Some(val) = val.chars().next().unwrap().to_digit(10) { 54 | out.push(( 55 | SudokuPackage::Cell { 56 | row: row + 1, 57 | col: col + 1, 58 | }, 59 | Range::singleton(val as usize), 60 | )); 61 | } 62 | } 63 | } 64 | out 65 | } 66 | 67 | /// Encode all the exclusions from assigning a cell to a value 68 | fn encode_constraints( 69 | dependency_provider: &mut OfflineDependencyProvider<SudokuPackage, Range<Arc<usize>>>, 70 | ) { 71 | for row in 1..=BOARD_SIZE { 72 | for col in 1..=BOARD_SIZE { 73 | for val in 1..=BOARD_SIZE { 74 | let mut deps = vec![]; 75 | // A number may only occur once in a row 76 | for row_ in 1..=BOARD_SIZE { 77 | if row_ == row { 78 | continue; 79 | } 80 | deps.push(( 81 | SudokuPackage::Cell { row: row_, col }, 82 | Range::singleton(Arc::new(val)).complement(), 83 | )) 84 | } 85 | // A number may only occur once in a col 86 | for col_ in 1..=BOARD_SIZE { 87 | if col_ == col { 88 | continue; 89 | } 90 | deps.push(( 91 | SudokuPackage::Cell { row, col: col_ }, 92 | Range::singleton(Arc::new(val)).complement(), 93 | )) 94 | } 95 | // A number may only occur once in a box 96 | let box_base_row = row - ((row - 1) % BOARD_BASE); 97 | let box_base_col = col - ((col - 1) % BOARD_BASE); 98 | for row_ in box_base_row..box_base_row + BOARD_BASE { 99 | for col_ in box_base_col..box_base_col + BOARD_BASE {
100 | if col_ == col && row_ == row { 101 | continue; 102 | } 103 | deps.push(( 104 | SudokuPackage::Cell { 105 | row: row_, 106 | col: col_, 107 | }, 108 | Range::singleton(Arc::new(val)).complement(), 109 | )) 110 | } 111 | } 112 | let name = SudokuPackage::Cell { row, col }; 113 | dependency_provider.add_dependencies(name, val, deps) 114 | } 115 | } 116 | } 117 | } 118 | 119 | fn solve(c: &mut Criterion, board: Vec<(SudokuPackage, Ranges<Arc<usize>>)>, case: &str) { 120 | let mut dependency_provider = DP::new(); 121 | encode_constraints(&mut dependency_provider); 122 | dependency_provider.add_dependencies(SudokuPackage::Root, Arc::new(1usize), board); 123 | c.bench_function(case, |b| { 124 | b.iter(|| { 125 | let _ = resolve(&dependency_provider, SudokuPackage::Root, Arc::new(1usize)); 126 | }) 127 | }); 128 | } 129 | 130 | fn bench_solve(c: &mut Criterion) { 131 | let easy = from_board( 132 | r#" 133 | 5 3 _ | _ 7 _ | _ _ _ 134 | 6 _ _ | 1 9 5 | _ _ _ 135 | _ 9 8 | _ _ _ | _ 6 _ 136 | -------+-------+------- 137 | 8 5 9 | _ 6 1 | 4 2 3 138 | 4 2 6 | 8 5 3 | 7 9 1 139 | 7 1 3 | 9 2 4 | 8 5 6 140 | -------+-------+------- 141 | _ 6 _ | _ _ _ | 2 8 _ 142 | _ _ _ | 4 1 9 | _ _ 5 143 | _ _ _ | _ 8 6 | 1 7 9"#, 144 | ); 145 | let hard = from_board( 146 | r#" 147 | 5 3 _ | _ 7 _ | _ _ _ 148 | 6 _ _ | 1 9 5 | _ _ _ 149 | _ 9 8 | _ _ _ | _ 6 _ 150 | -------+-------+------- 151 | 8 _ _ | _ 6 _ | _ _ 3 152 | 4 _ _ | 8 _ 3 | _ _ 1 153 | 7 _ _ | _ 2 _ | _ _ 6 154 | -------+-------+------- 155 | _ 6 _ | _ _ _ | 2 8 _ 156 | _ _ _ | 4 1 9 | _ _ 5 157 | _ _ _ | _ 8 _ | _ 7 9"#, 158 | ); 159 | solve(c, easy, "sudoku-easy"); 160 | solve(c, hard, "sudoku-hard"); 161 | } 162 | 163 | criterion_group!(benches, bench_solve); 164 | criterion_main!(benches); 165 | -------------------------------------------------------------------------------- /examples/branching_error_reporting.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier:
MPL-2.0 2 | 3 | use pubgrub::{ 4 | resolve, DefaultStringReporter, OfflineDependencyProvider, PubGrubError, Ranges, Reporter, 5 | SemanticVersion, 6 | }; 7 | 8 | type SemVS = Ranges<SemanticVersion>; 9 | 10 | // https://github.com/dart-lang/pub/blob/master/doc/solver.md#branching-error-reporting 11 | fn main() { 12 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 13 | #[rustfmt::skip] 14 | // root 1.0.0 depends on foo ^1.0.0 15 | dependency_provider.add_dependencies( 16 | "root", (1, 0, 0), 17 | [("foo", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0)))], 18 | ); 19 | #[rustfmt::skip] 20 | // foo 1.0.0 depends on a ^1.0.0 and b ^1.0.0 21 | dependency_provider.add_dependencies( 22 | "foo", (1, 0, 0), 23 | [ 24 | ("a", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 25 | ("b", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 26 | ], 27 | ); 28 | #[rustfmt::skip] 29 | // foo 1.1.0 depends on x ^1.0.0 and y ^1.0.0 30 | dependency_provider.add_dependencies( 31 | "foo", (1, 1, 0), 32 | [ 33 | ("x", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 34 | ("y", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 35 | ], 36 | ); 37 | #[rustfmt::skip] 38 | // a 1.0.0 depends on b ^2.0.0 39 | dependency_provider.add_dependencies( 40 | "a", (1, 0, 0), 41 | [("b", Ranges::from_range_bounds((2, 0, 0)..(3, 0, 0)))], 42 | ); 43 | // b 1.0.0 and 2.0.0 have no dependencies. 44 | dependency_provider.add_dependencies("b", (1, 0, 0), []); 45 | dependency_provider.add_dependencies("b", (2, 0, 0), []); 46 | #[rustfmt::skip] 47 | // x 1.0.0 depends on y ^2.0.0. 48 | dependency_provider.add_dependencies( 49 | "x", (1, 0, 0), 50 | [("y", Ranges::from_range_bounds((2, 0, 0)..(3, 0, 0)))], 51 | ); 52 | // y 1.0.0 and 2.0.0 have no dependencies. 53 | dependency_provider.add_dependencies("y", (1, 0, 0), []); 54 | dependency_provider.add_dependencies("y", (2, 0, 0), []); 55 | 56 | // Run the algorithm.
57 | match resolve(&dependency_provider, "root", (1, 0, 0)) { 58 | Ok(sol) => println!("{:?}", sol), 59 | Err(PubGrubError::NoSolution(mut derivation_tree)) => { 60 | derivation_tree.collapse_no_versions(); 61 | eprintln!("{}", DefaultStringReporter::report(&derivation_tree)); 62 | std::process::exit(1); 63 | } 64 | Err(err) => panic!("{:?}", err), 65 | }; 66 | } 67 | -------------------------------------------------------------------------------- /examples/caching_dependency_provider.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use std::cell::RefCell; 4 | 5 | use pubgrub::{ 6 | resolve, Dependencies, DependencyProvider, OfflineDependencyProvider, 7 | PackageResolutionStatistics, Ranges, 8 | }; 9 | 10 | type NumVS = Ranges<u32>; 11 | 12 | // An example caching dependency provider that stores queried dependencies 13 | // in memory and checks them before querying the remote provider again. 14 | struct CachingDependencyProvider<DP: DependencyProvider> { 15 | remote_dependencies: DP, 16 | cached_dependencies: RefCell<OfflineDependencyProvider<DP::P, DP::VS>>, 17 | } 18 | 19 | impl<DP: DependencyProvider> CachingDependencyProvider<DP> { 20 | pub fn new(remote_dependencies_provider: DP) -> Self { 21 | CachingDependencyProvider { 22 | remote_dependencies: remote_dependencies_provider, 23 | cached_dependencies: RefCell::new(OfflineDependencyProvider::new()), 24 | } 25 | } 26 | } 27 | 28 | impl<DP: DependencyProvider<M = String>> DependencyProvider for CachingDependencyProvider<DP> { 29 | // Caches dependencies if they were already queried 30 | fn get_dependencies( 31 | &self, 32 | package: &DP::P, 33 | version: &DP::V, 34 | ) -> Result<Dependencies<DP::P, DP::VS, DP::M>, DP::Err> { 35 | let mut cache = self.cached_dependencies.borrow_mut(); 36 | match cache.get_dependencies(package, version) { 37 | Ok(Dependencies::Unavailable(_)) => { 38 | let dependencies = self.remote_dependencies.get_dependencies(package, version); 39 | match dependencies { 40 | Ok(Dependencies::Available(dependencies)) => { 41 | cache.add_dependencies( 42 | package.clone(), 43
| version.clone(), 44 | dependencies.clone(), 45 | ); 46 | Ok(Dependencies::Available(dependencies)) 47 | } 48 | Ok(Dependencies::Unavailable(reason)) => Ok(Dependencies::Unavailable(reason)), 49 | error @ Err(_) => error, 50 | } 51 | } 52 | Ok(dependencies) => Ok(dependencies), 53 | Err(_) => unreachable!(), 54 | } 55 | } 56 | 57 | fn choose_version(&self, package: &DP::P, ranges: &DP::VS) -> Result<Option<DP::V>, DP::Err> { 58 | self.remote_dependencies.choose_version(package, ranges) 59 | } 60 | 61 | type Priority = DP::Priority; 62 | 63 | fn prioritize( 64 | &self, 65 | package: &Self::P, 66 | range: &Self::VS, 67 | package_statistics: &PackageResolutionStatistics, 68 | ) -> Self::Priority { 69 | self.remote_dependencies 70 | .prioritize(package, range, package_statistics) 71 | } 72 | 73 | type Err = DP::Err; 74 | 75 | type P = DP::P; 76 | type V = DP::V; 77 | type VS = DP::VS; 78 | type M = DP::M; 79 | } 80 | 81 | fn main() { 82 | // Simulating remote provider locally. 83 | let mut remote_dependencies_provider = OfflineDependencyProvider::<&str, NumVS>::new(); 84 | 85 | // Add dependencies as needed. Here only the root package is added.
86 | remote_dependencies_provider.add_dependencies("root", 1u32, Vec::new()); 87 | 88 | let caching_dependencies_provider = 89 | CachingDependencyProvider::new(remote_dependencies_provider); 90 | 91 | let solution = resolve(&caching_dependencies_provider, "root", 1u32); 92 | println!("Solution: {:?}", solution); 93 | } 94 | -------------------------------------------------------------------------------- /examples/doc_interface.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use pubgrub::{resolve, OfflineDependencyProvider, Ranges}; 4 | 5 | type NumVS = Ranges<u32>; 6 | 7 | // `root` depends on `menu` and `icons` 8 | // `menu` depends on `dropdown` 9 | // `dropdown` depends on `icons` 10 | // `icons` has no dependency 11 | #[rustfmt::skip] 12 | fn main() { 13 | let mut dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new(); 14 | dependency_provider.add_dependencies( 15 | "root", 1u32, [("menu", Ranges::full()), ("icons", Ranges::full())], 16 | ); 17 | dependency_provider.add_dependencies("menu", 1u32, [("dropdown", Ranges::full())]); 18 | dependency_provider.add_dependencies("dropdown", 1u32, [("icons", Ranges::full())]); 19 | dependency_provider.add_dependencies("icons", 1u32, []); 20 | 21 | // Run the algorithm.
22 | let solution = resolve(&dependency_provider, "root", 1u32); 23 | println!("Solution: {:?}", solution); 24 | } 25 | -------------------------------------------------------------------------------- /examples/doc_interface_error.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use pubgrub::{ 4 | resolve, DefaultStringReporter, OfflineDependencyProvider, PubGrubError, Ranges, Reporter, 5 | SemanticVersion, 6 | }; 7 | 8 | type SemVS = Ranges<SemanticVersion>; 9 | 10 | // `root` depends on `menu`, `icons 1.0.0` and `intl 5.0.0` 11 | // `menu 1.0.0` depends on `dropdown < 2.0.0` 12 | // `menu >= 1.1.0` depends on `dropdown >= 2.0.0` 13 | // `dropdown 1.8.0` depends on `intl 3.0.0` 14 | // `dropdown >= 2.0.0` depends on `icons 2.0.0` 15 | // `icons` has no dependency 16 | // `intl` has no dependency 17 | #[rustfmt::skip] 18 | fn main() { 19 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 20 | // Direct dependencies: menu, icons and intl. 21 | dependency_provider.add_dependencies("root", (1, 0, 0), [ 22 | ("menu", Ranges::full()), 23 | ("icons", Ranges::singleton((1, 0, 0))), 24 | ("intl", Ranges::singleton((5, 0, 0))), 25 | ]); 26 | 27 | // Dependencies of the menu lib.
28 | dependency_provider.add_dependencies("menu", (1, 0, 0), [ 29 | ("dropdown", Ranges::from_range_bounds(..(2, 0, 0))), 30 | ]); 31 | dependency_provider.add_dependencies("menu", (1, 1, 0), [ 32 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 33 | ]); 34 | dependency_provider.add_dependencies("menu", (1, 2, 0), [ 35 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 36 | ]); 37 | dependency_provider.add_dependencies("menu", (1, 3, 0), [ 38 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 39 | ]); 40 | dependency_provider.add_dependencies("menu", (1, 4, 0), [ 41 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 42 | ]); 43 | dependency_provider.add_dependencies("menu", (1, 5, 0), [ 44 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 45 | ]); 46 | 47 | // Dependencies of the dropdown lib. 48 | dependency_provider.add_dependencies("dropdown", (1, 8, 0), [ 49 | ("intl", Ranges::singleton((3, 0, 0))), 50 | ]); 51 | dependency_provider.add_dependencies("dropdown", (2, 0, 0), [ 52 | ("icons", Ranges::singleton((2, 0, 0))), 53 | ]); 54 | dependency_provider.add_dependencies("dropdown", (2, 1, 0), [ 55 | ("icons", Ranges::singleton((2, 0, 0))), 56 | ]); 57 | dependency_provider.add_dependencies("dropdown", (2, 2, 0), [ 58 | ("icons", Ranges::singleton((2, 0, 0))), 59 | ]); 60 | dependency_provider.add_dependencies("dropdown", (2, 3, 0), [ 61 | ("icons", Ranges::singleton((2, 0, 0))), 62 | ]); 63 | 64 | // Icons have no dependencies. 65 | dependency_provider.add_dependencies("icons", (1, 0, 0), []); 66 | dependency_provider.add_dependencies("icons", (2, 0, 0), []); 67 | 68 | // Intl has no dependencies. 69 | dependency_provider.add_dependencies("intl", (3, 0, 0), []); 70 | dependency_provider.add_dependencies("intl", (4, 0, 0), []); 71 | dependency_provider.add_dependencies("intl", (5, 0, 0), []); 72 | 73 | // Run the algorithm.
74 | match resolve(&dependency_provider, "root", (1, 0, 0)) { 75 | Ok(sol) => println!("{:?}", sol), 76 | Err(PubGrubError::NoSolution(mut derivation_tree)) => { 77 | derivation_tree.collapse_no_versions(); 78 | eprintln!("{}", DefaultStringReporter::report(&derivation_tree)); 79 | } 80 | Err(err) => panic!("{:?}", err), 81 | }; 82 | } 83 | -------------------------------------------------------------------------------- /examples/doc_interface_semantic.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use pubgrub::{ 4 | resolve, DefaultStringReporter, OfflineDependencyProvider, PubGrubError, Ranges, Reporter, 5 | SemanticVersion, 6 | }; 7 | 8 | type SemVS = Ranges<SemanticVersion>; 9 | 10 | // `root` depends on `menu` and `icons 1.0.0` 11 | // `menu 1.0.0` depends on `dropdown < 2.0.0` 12 | // `menu >= 1.1.0` depends on `dropdown >= 2.0.0` 13 | // `dropdown 1.8.0` has no dependency 14 | // `dropdown >= 2.0.0` depends on `icons 2.0.0` 15 | // `icons` has no dependency 16 | #[rustfmt::skip] 17 | fn main() { 18 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 19 | // Direct dependencies: menu and icons. 20 | dependency_provider.add_dependencies("root", (1, 0, 0), [ 21 | ("menu", Ranges::full()), 22 | ("icons", Ranges::singleton((1, 0, 0))), 23 | ]); 24 | 25 | // Dependencies of the menu lib.
26 | dependency_provider.add_dependencies("menu", (1, 0, 0), [ 27 | ("dropdown", Ranges::from_range_bounds(..(2, 0, 0))), 28 | ]); 29 | dependency_provider.add_dependencies("menu", (1, 1, 0), [ 30 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 31 | ]); 32 | dependency_provider.add_dependencies("menu", (1, 2, 0), [ 33 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 34 | ]); 35 | dependency_provider.add_dependencies("menu", (1, 3, 0), [ 36 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 37 | ]); 38 | dependency_provider.add_dependencies("menu", (1, 4, 0), [ 39 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 40 | ]); 41 | dependency_provider.add_dependencies("menu", (1, 5, 0), [ 42 | ("dropdown", Ranges::from_range_bounds((2, 0, 0)..)), 43 | ]); 44 | 45 | // Dependencies of the dropdown lib. 46 | dependency_provider.add_dependencies("dropdown", (1, 8, 0), []); 47 | dependency_provider.add_dependencies("dropdown", (2, 0, 0), [ 48 | ("icons", Ranges::singleton((2, 0, 0))), 49 | ]); 50 | dependency_provider.add_dependencies("dropdown", (2, 1, 0), [ 51 | ("icons", Ranges::singleton((2, 0, 0))), 52 | ]); 53 | dependency_provider.add_dependencies("dropdown", (2, 2, 0), [ 54 | ("icons", Ranges::singleton((2, 0, 0))), 55 | ]); 56 | dependency_provider.add_dependencies("dropdown", (2, 3, 0), [ 57 | ("icons", Ranges::singleton((2, 0, 0))), 58 | ]); 59 | 60 | // Icons has no dependency. 61 | dependency_provider.add_dependencies("icons", (1, 0, 0), []); 62 | dependency_provider.add_dependencies("icons", (2, 0, 0), []); 63 | 64 | // Run the algorithm. 
65 | match resolve(&dependency_provider, "root", (1, 0, 0)) { 66 | Ok(sol) => println!("{:?}", sol), 67 | Err(PubGrubError::NoSolution(mut derivation_tree)) => { 68 | derivation_tree.collapse_no_versions(); 69 | eprintln!("{}", DefaultStringReporter::report(&derivation_tree)); 70 | } 71 | Err(err) => panic!("{:?}", err), 72 | }; 73 | } 74 | -------------------------------------------------------------------------------- /examples/linear_error_reporting.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use pubgrub::{ 4 | resolve, DefaultStringReporter, OfflineDependencyProvider, PubGrubError, Ranges, Reporter, 5 | SemanticVersion, 6 | }; 7 | 8 | type SemVS = Ranges<SemanticVersion>; 9 | 10 | // https://github.com/dart-lang/pub/blob/master/doc/solver.md#linear-error-reporting 11 | fn main() { 12 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 13 | #[rustfmt::skip] 14 | // root 1.0.0 depends on foo ^1.0.0 and baz ^1.0.0 15 | dependency_provider.add_dependencies( 16 | "root", (1, 0, 0), 17 | [ 18 | ("foo", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 19 | ("baz", Ranges::from_range_bounds((1, 0, 0)..(2, 0, 0))), 20 | ], 21 | ); 22 | #[rustfmt::skip] 23 | // foo 1.0.0 depends on bar ^2.0.0 24 | dependency_provider.add_dependencies( 25 | "foo", (1, 0, 0), 26 | [("bar", Ranges::from_range_bounds((2, 0, 0)..(3, 0, 0)))], 27 | ); 28 | #[rustfmt::skip] 29 | // bar 2.0.0 depends on baz ^3.0.0 30 | dependency_provider.add_dependencies( 31 | "bar", (2, 0, 0), 32 | [("baz", Ranges::from_range_bounds((3, 0, 0)..(4, 0, 0)))], 33 | ); 34 | // baz 1.0.0 and 3.0.0 have no dependencies 35 | dependency_provider.add_dependencies("baz", (1, 0, 0), []); 36 | dependency_provider.add_dependencies("baz", (3, 0, 0), []); 37 | 38 | // Run the algorithm.
39 | match resolve(&dependency_provider, "root", (1, 0, 0)) { 40 | Ok(sol) => println!("{:?}", sol), 41 | Err(PubGrubError::NoSolution(mut derivation_tree)) => { 42 | derivation_tree.collapse_no_versions(); 43 | eprintln!("{}", DefaultStringReporter::report(&derivation_tree)); 44 | std::process::exit(1); 45 | } 46 | Err(err) => panic!("{:?}", err), 47 | }; 48 | } 49 | -------------------------------------------------------------------------------- /examples/unsat_root_message_no_version.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | use std::fmt::{self, Display}; 4 | 5 | use pubgrub::{ 6 | resolve, DefaultStringReporter, Derived, External, Map, OfflineDependencyProvider, 7 | PubGrubError, Ranges, ReportFormatter, Reporter, SemanticVersion, Term, 8 | }; 9 | 10 | #[derive(Clone, Debug, PartialEq, Eq, Hash)] 11 | pub enum Package { 12 | Root, 13 | Package(String), 14 | } 15 | 16 | impl Display for Package { 17 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 18 | match self { 19 | Package::Root => write!(f, "root"), 20 | Package::Package(name) => write!(f, "{}", name), 21 | } 22 | } 23 | } 24 | 25 | #[derive(Debug, Default)] 26 | struct CustomReportFormatter; 27 | 28 | impl ReportFormatter<Package, Ranges<SemanticVersion>, String> for CustomReportFormatter { 29 | type Output = String; 30 | 31 | fn format_terms(&self, terms: &Map<Package, Term<Ranges<SemanticVersion>>>) -> String { 32 | let terms_vec: Vec<_> = terms.iter().collect(); 33 | match terms_vec.as_slice() { 34 | [] => "version solving failed".into(), 35 | [(package @ Package::Root, Term::Positive(_))] => { 36 | format!("{package} is forbidden") 37 | } 38 | [(package @ Package::Root, Term::Negative(_))] => { 39 | format!("{package} is mandatory") 40 | } 41 | [(package @ Package::Package(_), Term::Positive(ranges))] => { 42 | format!("{package} {ranges} is forbidden") 43 | } 44 | [(package @ Package::Package(_), Term::Negative(ranges))] => { 45 | format!("{package} {ranges} is mandatory") 46 | } 47 | [(p1, Term::Positive(r1)), (p2, Term::Negative(r2))] => { 48 | External::<_, _, String>::FromDependencyOf(p1, r1.clone(), p2, r2.clone()) 49 | .to_string() 50 | } 51 | [(p1, Term::Negative(r1)), (p2, Term::Positive(r2))] => { 52 | External::<_, _, String>::FromDependencyOf(p2, r2.clone(), p1, r1.clone()) 53 | .to_string() 54 | } 55 | slice => { 56 | let str_terms: Vec<_> = slice.iter().map(|(p, t)| format!("{p} {t}")).collect(); 57 | str_terms.join(", ") + " are incompatible" 58 | } 59 | } 60 | } 61 | 62 | fn format_external( 63 | &self, 64 | external: &External<Package, Ranges<SemanticVersion>, String>, 65 | ) -> String { 66 | match external { 67 | External::NotRoot(package, version) => { 68 | format!("we are solving dependencies of {package} {version}") 69 | } 70 | External::NoVersions(package, set) => { 71 | if set == &Ranges::full() { 72 | format!("there is no available version for {package}") 73 | } else { 74 | format!("there is no version of {package} in {set}") 75 | } 76 | } 77 | External::Custom(package, set, reason) => { 78 | if set == &Ranges::full() { 79 | format!("dependencies of {package} are unavailable because {reason}") 80 | } else { 81 | format!("dependencies of {package} at version {set} are unavailable because {reason}") 82 | } 83 | } 84 | External::FromDependencyOf(package, package_set, dependency, dependency_set) => { 85 | if package_set == &Ranges::full() && dependency_set == &Ranges::full() { 86 | format!("{package} depends on {dependency}") 87 | } else if package_set == &Ranges::full() { 88 | format!("{package} depends on {dependency} {dependency_set}") 89 | } else if dependency_set == &Ranges::full() { 90 | if matches!(package, Package::Root) { 91 | // Exclude the dummy version for root packages 92 | format!("{package} depends on {dependency}") 93 | } else { 94 | format!("{package} {package_set} depends on {dependency}") 95 | } 96 | } else if matches!(package, Package::Root) { 97 | // Exclude the dummy version for root packages 98 |
format!("{package} depends on {dependency} {dependency_set}") 99 | } else { 100 | format!("{package} {package_set} depends on {dependency} {dependency_set}") 101 | } 102 | } 103 | } 104 | } 105 | 106 | /// Simplest case, we just combine two external incompatibilities. 107 | fn explain_both_external( 108 | &self, 109 | external1: &External<Package, Ranges<SemanticVersion>, String>, 110 | external2: &External<Package, Ranges<SemanticVersion>, String>, 111 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 112 | ) -> String { 113 | // TODO: order should be chosen to make it more logical. 114 | format!( 115 | "Because {} and {}, {}.", 116 | self.format_external(external1), 117 | self.format_external(external2), 118 | self.format_terms(current_terms) 119 | ) 120 | } 121 | 122 | /// Both causes have already been explained so we use their refs. 123 | fn explain_both_ref( 124 | &self, 125 | ref_id1: usize, 126 | derived1: &Derived<Package, Ranges<SemanticVersion>, String>, 127 | ref_id2: usize, 128 | derived2: &Derived<Package, Ranges<SemanticVersion>, String>, 129 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 130 | ) -> String { 131 | // TODO: order should be chosen to make it more logical. 132 | format!( 133 | "Because {} ({}) and {} ({}), {}.", 134 | self.format_terms(&derived1.terms), 135 | ref_id1, 136 | self.format_terms(&derived2.terms), 137 | ref_id2, 138 | self.format_terms(current_terms) 139 | ) 140 | } 141 | 142 | /// One cause is derived (already explained so one-line), 143 | /// the other is a one-line external cause, 144 | /// and finally we conclude with the current incompatibility. 145 | fn explain_ref_and_external( 146 | &self, 147 | ref_id: usize, 148 | derived: &Derived<Package, Ranges<SemanticVersion>, String>, 149 | external: &External<Package, Ranges<SemanticVersion>, String>, 150 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 151 | ) -> String { 152 | // TODO: order should be chosen to make it more logical. 153 | format!( 154 | "Because {} ({}) and {}, {}.", 155 | self.format_terms(&derived.terms), 156 | ref_id, 157 | self.format_external(external), 158 | self.format_terms(current_terms) 159 | ) 160 | } 161 | 162 | /// Add an external cause to the chain of explanations.
163 | fn and_explain_external( 164 | &self, 165 | external: &External<Package, Ranges<SemanticVersion>, String>, 166 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 167 | ) -> String { 168 | format!( 169 | "And because {}, {}.", 170 | self.format_external(external), 171 | self.format_terms(current_terms) 172 | ) 173 | } 174 | 175 | /// Add an already explained incompat to the chain of explanations. 176 | fn and_explain_ref( 177 | &self, 178 | ref_id: usize, 179 | derived: &Derived<Package, Ranges<SemanticVersion>, String>, 180 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 181 | ) -> String { 182 | format!( 183 | "And because {} ({}), {}.", 184 | self.format_terms(&derived.terms), 185 | ref_id, 186 | self.format_terms(current_terms) 187 | ) 188 | } 189 | 190 | /// Add an already explained incompat to the chain of explanations. 191 | fn and_explain_prior_and_external( 192 | &self, 193 | prior_external: &External<Package, Ranges<SemanticVersion>, String>, 194 | external: &External<Package, Ranges<SemanticVersion>, String>, 195 | current_terms: &Map<Package, Term<Ranges<SemanticVersion>>>, 196 | ) -> String { 197 | format!( 198 | "And because {} and {}, {}.", 199 | self.format_external(prior_external), 200 | self.format_external(external), 201 | self.format_terms(current_terms) 202 | ) 203 | } 204 | } 205 | 206 | fn main() { 207 | let mut dependency_provider = 208 | OfflineDependencyProvider::<Package, Ranges<SemanticVersion>>::new(); 209 | // Define the root package with a dependency on a package we do not provide 210 | dependency_provider.add_dependencies( 211 | Package::Root, 212 | (0, 0, 0), 213 | vec![( 214 | Package::Package("foo".to_string()), 215 | Ranges::singleton((1, 0, 0)), 216 | )], 217 | ); 218 | 219 | // Run the algorithm 220 | match resolve(&dependency_provider, Package::Root, (0, 0, 0)) { 221 | Ok(sol) => println!("{:?}", sol), 222 | Err(PubGrubError::NoSolution(derivation_tree)) => { 223 | eprintln!("No solution.\n"); 224 | 225 | eprintln!("### Default report:"); 226 | eprintln!("```"); 227 | eprintln!("{}", DefaultStringReporter::report(&derivation_tree)); 228 | eprintln!("```\n"); 229 | 230 | eprintln!("### Report with custom formatter:"); 231 | eprintln!("```"); 232 | eprintln!( 233 | "{}",
234 | DefaultStringReporter::report_with_formatter( 235 | &derivation_tree, 236 | &CustomReportFormatter 237 | ) 238 | ); 239 | eprintln!("```"); 240 | std::process::exit(1); 241 | } 242 | Err(err) => panic!("{:?}", err), 243 | }; 244 | } 245 | -------------------------------------------------------------------------------- /release.md: -------------------------------------------------------------------------------- 1 | # Creation of a new release 2 | 3 | This is taking the 0.2.1 release as an example. 4 | 5 | ## GitHub stuff 6 | 7 | - Checkout the prep-v0.2.1 branch 8 | - Update the release date in the changelog and push to the PR. 9 | - Squash merge the PR to the dev branch 10 | - Check that the merged PR is passing the tests on the dev branch 11 | - Pull the updated dev locally 12 | - Switch to the release branch 13 | - Merge locally dev into release in fast-forward mode, we want to keep the history of commits and the merge point. 14 | - `git tag -a v0.2.1 -m "v0.2.1: mostly perf improvements"` 15 | - (Optional) cryptographically sign the tag 16 | - On GitHub, edit the branch protection setting for release: uncheck include admin, and save 17 | - Push release to github: git push --follow-tags 18 | - Reset the release branch protection to include admins 19 | - On GitHub, create a release from that tag. 20 | 21 | ## Crates.io stuff 22 | 23 | - `cargo publish --dry-run` 24 | - `cargo publish` 25 | 26 | ## Community stuff 27 | 28 | Talk about the awesome new features of the new release online. 29 | -------------------------------------------------------------------------------- /src/error.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | //! Handling pubgrub errors. 4 | 5 | use thiserror::Error; 6 | 7 | use crate::{DependencyProvider, DerivationTree}; 8 | 9 | /// There is no solution for this set of dependencies. 
10 | pub type NoSolutionError<DP> = DerivationTree< 11 | <DP as DependencyProvider>::P, 12 | <DP as DependencyProvider>::VS, 13 | <DP as DependencyProvider>::M, 14 | >; 15 | 16 | /// Errors that may occur while solving dependencies. 17 | #[derive(Error)] 18 | pub enum PubGrubError<DP: DependencyProvider> { 19 | /// There is no solution for this set of dependencies. 20 | #[error("There is no solution")] 21 | NoSolution(NoSolutionError<DP>), 22 | 23 | /// Error arising when the implementer of [DependencyProvider] returned an error in the method 24 | /// [`get_dependencies`](DependencyProvider::get_dependencies). 25 | #[error("Retrieving dependencies of {package} {version} failed")] 26 | ErrorRetrievingDependencies { 27 | /// Package whose dependencies we want. 28 | package: DP::P, 29 | /// Version of the package for which we want the dependencies. 30 | version: DP::V, 31 | /// Error raised by the implementer of [DependencyProvider]. 32 | source: DP::Err, 33 | }, 34 | 35 | /// Error arising when the implementer of [DependencyProvider] returned an error in the method 36 | /// [`choose_version`](DependencyProvider::choose_version). 37 | #[error("Choosing a version for {package} failed")] 38 | ErrorChoosingVersion { 39 | /// Package to choose a version for. 40 | package: DP::P, 41 | /// Error raised by the implementer of [DependencyProvider]. 42 | source: DP::Err, 43 | }, 44 | 45 | /// Error arising when the implementer of [DependencyProvider] 46 | /// returned an error in the method [`should_cancel`](DependencyProvider::should_cancel).
47 | #[error("The solver was cancelled")] 48 | ErrorInShouldCancel(#[source] DP::Err), 49 | } 50 | 51 | impl<DP: DependencyProvider> From<NoSolutionError<DP>> for PubGrubError<DP> { 52 | fn from(err: NoSolutionError<DP>) -> Self { 53 | Self::NoSolution(err) 54 | } 55 | } 56 | 57 | impl<DP> std::fmt::Debug for PubGrubError<DP> 58 | where 59 | DP: DependencyProvider, 60 | { 61 | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { 62 | match self { 63 | Self::NoSolution(err) => f.debug_tuple("NoSolution").field(&err).finish(), 64 | Self::ErrorRetrievingDependencies { 65 | package, 66 | version, 67 | source, 68 | } => f 69 | .debug_struct("ErrorRetrievingDependencies") 70 | .field("package", package) 71 | .field("version", version) 72 | .field("source", source) 73 | .finish(), 74 | Self::ErrorChoosingVersion { package, source } => f 75 | .debug_struct("ErrorChoosingVersion") 76 | .field("package", package) 77 | .field("source", source) 78 | .finish(), 79 | Self::ErrorInShouldCancel(arg0) => { 80 | f.debug_tuple("ErrorInShouldCancel").field(arg0).finish() 81 | } 82 | } 83 | } 84 | } 85 | -------------------------------------------------------------------------------- /src/internal/arena.rs: -------------------------------------------------------------------------------- 1 | use std::fmt; 2 | use std::hash::{Hash, Hasher}; 3 | use std::marker::PhantomData; 4 | use std::ops::{Index, Range}; 5 | 6 | type FnvIndexSet<V> = indexmap::IndexSet<V, rustc_hash::FxBuildHasher>; 7 | 8 | /// The index of a value allocated in an arena that holds `T`s. 9 | /// 10 | /// The Clone, Copy and other traits are defined manually because 11 | /// deriving them adds some additional constraints on the `T` generic type 12 | /// that we actually don't need since it is phantom.
13 | /// 14 | /// 15 | pub(crate) struct Id<T> { 16 | raw: u32, 17 | _ty: PhantomData<fn() -> T>, 18 | } 19 | 20 | impl<T> Clone for Id<T> { 21 | fn clone(&self) -> Self { 22 | *self 23 | } 24 | } 25 | 26 | impl<T> Copy for Id<T> {} 27 | 28 | impl<T> PartialEq for Id<T> { 29 | fn eq(&self, other: &Id<T>) -> bool { 30 | self.raw == other.raw 31 | } 32 | } 33 | 34 | impl<T> Eq for Id<T> {} 35 | 36 | impl<T> Hash for Id<T> { 37 | fn hash<H: Hasher>(&self, state: &mut H) { 38 | self.raw.hash(state) 39 | } 40 | } 41 | 42 | impl<T> fmt::Debug for Id<T> { 43 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 44 | let mut type_name = std::any::type_name::<T>(); 45 | if let Some(id) = type_name.rfind(':') { 46 | type_name = &type_name[id + 1..] 47 | } 48 | write!(f, "Id::<{}>({})", type_name, self.raw) 49 | } 50 | } 51 | 52 | impl<T> Id<T> { 53 | pub(crate) fn into_raw(self) -> usize { 54 | self.raw as usize 55 | } 56 | fn from(n: u32) -> Self { 57 | Self { 58 | raw: n, 59 | _ty: PhantomData, 60 | } 61 | } 62 | pub(crate) fn range_to_iter(range: Range<Self>) -> impl Iterator<Item = Self> { 63 | let start = range.start.raw; 64 | let end = range.end.raw; 65 | (start..end).map(Self::from) 66 | } 67 | } 68 | 69 | /// Yet another index-based arena. 70 | /// 71 | /// An arena is a kind of simple grow-only allocator, backed by a `Vec` 72 | /// where all items have the same lifetime, making it easier 73 | /// to have references between those items. 74 | /// They are all dropped at once when the arena is dropped.
75 | #[derive(Clone, PartialEq, Eq)] 76 | pub(crate) struct Arena<T> { 77 | data: Vec<T>, 78 | } 79 | 80 | impl<T: fmt::Debug> fmt::Debug for Arena<T> { 81 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 82 | fmt.debug_struct("Arena") 83 | .field("len", &self.data.len()) 84 | .field("data", &self.data) 85 | .finish() 86 | } 87 | } 88 | 89 | impl<T> Default for Arena<T> { 90 | fn default() -> Self { 91 | Self::new() 92 | } 93 | } 94 | 95 | impl<T> Arena<T> { 96 | pub(crate) fn new() -> Self { 97 | Self { data: Vec::new() } 98 | } 99 | 100 | pub(crate) fn alloc(&mut self, value: T) -> Id<T> { 101 | let raw = self.data.len(); 102 | self.data.push(value); 103 | Id::from(raw as u32) 104 | } 105 | 106 | pub(crate) fn alloc_iter<I: Iterator<Item = T>>(&mut self, values: I) -> Range<Id<T>> { 107 | let start = Id::from(self.data.len() as u32); 108 | values.for_each(|v| { 109 | self.alloc(v); 110 | }); 111 | let end = Id::from(self.data.len() as u32); 112 | Range { start, end } 113 | } 114 | } 115 | 116 | impl<T> Index<Id<T>> for Arena<T> { 117 | type Output = T; 118 | fn index(&self, id: Id<T>) -> &T { 119 | &self.data[id.raw as usize] 120 | } 121 | } 122 | 123 | impl<T> Index<Range<Id<T>>> for Arena<T> { 124 | type Output = [T]; 125 | fn index(&self, id: Range<Id<T>>) -> &[T] { 126 | &self.data[(id.start.raw as usize)..(id.end.raw as usize)] 127 | } 128 | } 129 | 130 | /// Yet another index-based arena. This one de-duplicates entries by hashing. 131 | /// 132 | /// An arena is a kind of simple grow-only allocator, backed by a `Vec` 133 | /// where all items have the same lifetime, making it easier 134 | /// to have references between those items. 135 | /// In this case the `Vec` is inside an `IndexSet`, allowing fast lookup by value, not just by index. 136 | /// They are all dropped at once when the arena is dropped.
137 | #[derive(Clone, PartialEq, Eq)] 138 | pub struct HashArena<T: Hash + Eq> { 139 | data: FnvIndexSet<T>, 140 | } 141 | 142 | impl<T: Hash + Eq + fmt::Debug> fmt::Debug for HashArena<T> { 143 | fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { 144 | fmt.debug_struct("Arena") 145 | .field("len", &self.data.len()) 146 | .field("data", &self.data) 147 | .finish() 148 | } 149 | } 150 | 151 | impl<T: Hash + Eq> HashArena<T> { 152 | pub fn new() -> Self { 153 | HashArena { 154 | data: FnvIndexSet::default(), 155 | } 156 | } 157 | 158 | pub fn alloc(&mut self, value: T) -> Id<T> { 159 | let (raw, _) = self.data.insert_full(value); 160 | Id::from(raw as u32) 161 | } 162 | } 163 | 164 | impl<T: Hash + Eq> Index<Id<T>> for HashArena<T> { 165 | type Output = T; 166 | fn index(&self, id: Id<T>) -> &T { 167 | &self.data[id.raw as usize] 168 | } 169 | } 170 | -------------------------------------------------------------------------------- /src/internal/core.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | //! Core model and functions 4 | //! to write a functional PubGrub algorithm. 5 | 6 | use std::collections::HashSet as Set; 7 | use std::sync::Arc; 8 | 9 | use crate::internal::{ 10 | Arena, DecisionLevel, HashArena, Id, IncompDpId, IncompId, Incompatibility, PartialSolution, 11 | Relation, SatisfierSearch, SmallVec, 12 | }; 13 | use crate::{DependencyProvider, DerivationTree, Map, NoSolutionError, VersionSet}; 14 | 15 | /// Current state of the PubGrub algorithm. 16 | #[derive(Clone)] 17 | pub(crate) struct State<DP: DependencyProvider> { 18 | pub root_package: Id<DP::P>, 19 | root_version: DP::V, 20 | 21 | #[allow(clippy::type_complexity)] 22 | incompatibilities: Map<Id<DP::P>, Vec<IncompDpId<DP>>>, 23 | 24 | /// As an optimization, store the ids of incompatibilities that are already contradicted. 25 | /// 26 | /// For each one keep track of the decision level when it was found to be contradicted. 27 | /// These will stay contradicted until we have backtracked beyond its associated decision level.
28 | contradicted_incompatibilities: Map<IncompDpId<DP>, DecisionLevel>, 29 | 30 | /// All incompatibilities expressing dependencies, 31 | /// with common dependents merged. 32 | #[allow(clippy::type_complexity)] 33 | merged_dependencies: Map<(Id<DP::P>, Id<DP::P>), SmallVec<IncompDpId<DP>>>, 34 | 35 | /// Partial solution. 36 | /// TODO: remove pub. 37 | pub(crate) partial_solution: PartialSolution<DP>, 38 | 39 | /// The store is the reference storage for all incompatibilities. 40 | pub(crate) incompatibility_store: Arena<Incompatibility<DP::P, DP::VS, DP::M>>, 41 | 42 | /// The store is the reference storage for all packages. 43 | pub(crate) package_store: HashArena<DP::P>, 44 | 45 | /// This is a stack of work to be done in `unit_propagation`. 46 | /// It can definitely be a local variable to that method, but 47 | /// this way we can reuse the same allocation for better performance. 48 | unit_propagation_buffer: SmallVec<Id<DP::P>>, 49 | } 50 | 51 | impl<DP: DependencyProvider> State<DP> { 52 | /// Initialization of PubGrub state. 53 | pub(crate) fn init(root_package: DP::P, root_version: DP::V) -> Self { 54 | let mut incompatibility_store = Arena::new(); 55 | let mut package_store = HashArena::new(); 56 | let root_package = package_store.alloc(root_package); 57 | let not_root_id = incompatibility_store.alloc(Incompatibility::not_root( 58 | root_package, 59 | root_version.clone(), 60 | )); 61 | let mut incompatibilities = Map::default(); 62 | incompatibilities.insert(root_package, vec![not_root_id]); 63 | Self { 64 | root_package, 65 | root_version, 66 | incompatibilities, 67 | contradicted_incompatibilities: Map::default(), 68 | partial_solution: PartialSolution::empty(), 69 | incompatibility_store, 70 | package_store, 71 | unit_propagation_buffer: SmallVec::Empty, 72 | merged_dependencies: Map::default(), 73 | } 74 | } 75 | 76 | /// Add the dependencies for the current version of the current package as incompatibilities.
77 | pub(crate) fn add_package_version_dependencies( 78 | &mut self, 79 | package: Id<DP::P>, 80 | version: DP::V, 81 | dependencies: impl IntoIterator<Item = (DP::P, DP::VS)>, 82 | ) -> Option<IncompDpId<DP>> { 83 | let dep_incompats = 84 | self.add_incompatibility_from_dependencies(package, version.clone(), dependencies); 85 | self.partial_solution.add_package_version_incompatibilities( 86 | package, 87 | version.clone(), 88 | dep_incompats, 89 | &self.incompatibility_store, 90 | ) 91 | } 92 | 93 | /// Add an incompatibility to the state. 94 | pub(crate) fn add_incompatibility(&mut self, incompat: Incompatibility<DP::P, DP::VS, DP::M>) { 95 | let id = self.incompatibility_store.alloc(incompat); 96 | self.merge_incompatibility(id); 97 | } 98 | 99 | /// Add an incompatibility to the state. 100 | #[cold] 101 | pub(crate) fn add_incompatibility_from_dependencies( 102 | &mut self, 103 | package: Id<DP::P>, 104 | version: DP::V, 105 | deps: impl IntoIterator<Item = (DP::P, DP::VS)>, 106 | ) -> std::ops::Range<IncompDpId<DP>> { 107 | // Create incompatibilities and allocate them in the store. 108 | let new_incompats_id_range = 109 | self.incompatibility_store 110 | .alloc_iter(deps.into_iter().map(|(dep_p, dep_vs)| { 111 | let dep_pid = self.package_store.alloc(dep_p); 112 | Incompatibility::from_dependency( 113 | package, 114 | <DP::VS as VersionSet>::singleton(version.clone()), 115 | (dep_pid, dep_vs), 116 | ) 117 | })); 118 | // Merge the newly created incompatibilities with the older ones. 119 | for id in IncompDpId::<DP>::range_to_iter(new_incompats_id_range.clone()) { 120 | self.merge_incompatibility(id); 121 | } 122 | new_incompats_id_range 123 | } 124 | 125 | /// Unit propagation is the core mechanism of the solving algorithm. 126 | /// CF 127 | /// 128 | /// For each package with a satisfied incompatibility, returns the package and the root cause 129 | /// incompatibility. 130 | #[cold] 131 | #[allow(clippy::type_complexity)] // Type definitions don't support impl trait.
132 | pub(crate) fn unit_propagation( 133 | &mut self, 134 | package: Id<DP::P>, 135 | ) -> Result<SmallVec<(Id<DP::P>, IncompDpId<DP>)>, NoSolutionError<DP>> { 136 | let mut satisfier_causes = SmallVec::default(); 137 | self.unit_propagation_buffer.clear(); 138 | self.unit_propagation_buffer.push(package); 139 | while let Some(current_package) = self.unit_propagation_buffer.pop() { 140 | // Iterate over incompatibilities in reverse order 141 | // to evaluate first the newest incompatibilities. 142 | let mut conflict_id = None; 143 | // We only care about an incompatibility if it contains the current package. 144 | for &incompat_id in self.incompatibilities[&current_package].iter().rev() { 145 | if self 146 | .contradicted_incompatibilities 147 | .contains_key(&incompat_id) 148 | { 149 | continue; 150 | } 151 | let current_incompat = &self.incompatibility_store[incompat_id]; 152 | match self.partial_solution.relation(current_incompat) { 153 | // If the partial solution satisfies the incompatibility 154 | // we must perform conflict resolution. 155 | Relation::Satisfied => { 156 | log::info!( 157 | "Start conflict resolution because incompat satisfied:\n {}", 158 | current_incompat.display(&self.package_store) 159 | ); 160 | conflict_id = Some(incompat_id); 161 | break; 162 | } 163 | Relation::AlmostSatisfied(package_almost) => { 164 | // Add `package_almost` to the `unit_propagation_buffer` set. 165 | // Putting items in `unit_propagation_buffer` more than once wastes cycles, 166 | // but so does allocating a hash map and hashing each item. 167 | // In practice `unit_propagation_buffer` is small enough that we can just do a linear scan. 168 | if !self.unit_propagation_buffer.contains(&package_almost) { 169 | self.unit_propagation_buffer.push(package_almost); 170 | } 171 | // Add (not term) to the partial solution with incompat as cause.
172 | self.partial_solution.add_derivation( 173 | package_almost, 174 | incompat_id, 175 | &self.incompatibility_store, 176 | ); 177 | // With the partial solution updated, the incompatibility is now contradicted. 178 | self.contradicted_incompatibilities 179 | .insert(incompat_id, self.partial_solution.current_decision_level()); 180 | } 181 | Relation::Contradicted(_) => { 182 | self.contradicted_incompatibilities 183 | .insert(incompat_id, self.partial_solution.current_decision_level()); 184 | } 185 | _ => {} 186 | } 187 | } 188 | if let Some(incompat_id) = conflict_id { 189 | let (package_almost, root_cause) = self 190 | .conflict_resolution(incompat_id, &mut satisfier_causes) 191 | .map_err(|terminal_incompat_id| { 192 | self.build_derivation_tree(terminal_incompat_id) 193 | })?; 194 | self.unit_propagation_buffer.clear(); 195 | self.unit_propagation_buffer.push(package_almost); 196 | // Add to the partial solution with incompat as cause. 197 | self.partial_solution.add_derivation( 198 | package_almost, 199 | root_cause, 200 | &self.incompatibility_store, 201 | ); 202 | // After conflict resolution and the partial solution update, 203 | // the root cause incompatibility is now contradicted. 204 | self.contradicted_incompatibilities 205 | .insert(root_cause, self.partial_solution.current_decision_level()); 206 | } 207 | } 208 | // If there are no more changed packages, unit propagation is done. 209 | Ok(satisfier_causes) 210 | } 211 | 212 | /// Return the root cause or the terminal incompatibility. CF 213 | /// 214 | /// 215 | /// When we found a conflict, we want to learn as much as possible from it, to avoid making (or 216 | /// keeping) decisions that will be rejected. Say we found that the dependency requirements on X and the 217 | /// dependency requirements on Y are incompatible. 
We may find that the decisions on earlier packages B and C 218 | /// require us to make incompatible requirements on X and Y, so we backtrack until either B or C 219 | /// can be revisited. To make it practical, we really only need one of the terms to be a 220 | /// decision. We may as well leave the other terms general. Something like "the dependency on 221 | /// the package X is incompatible with the decision on C" tends to work out pretty well. Then if 222 | /// A turns out to also have a dependency on X the resulting root cause is still useful. 223 | /// (`unit_propagation` will ensure we don't try that version of C.) 224 | /// Of course, this is more heuristics than science. If the output is too general, then 225 | /// `unit_propagation` will handle the confusion by calling us again with the next most specific 226 | /// conflict it comes across. If the output is too specific, then the outer `solver` loop will 227 | /// eventually end up calling us again until all possibilities are enumerated. 228 | /// 229 | /// To end up with a more useful incompatibility, this function combines incompatibilities into 230 | /// derivations. Fulfilling this derivation implies the later conflict. By banning it, we 231 | /// prevent the intermediate steps from occurring again, at least in the exact same way. 232 | /// However, the statistics collected for `prioritize` may want to analyze those intermediate 233 | /// steps. For example we might start with "there is no version 1 of Z", and 234 | /// `conflict_resolution` may be able to determine that "that was inevitable when we picked 235 | /// version 1 of X" which was inevitable when we picked W and so on, until version 1 of B, which 236 | /// was depended on by version 1 of A. Therefore the root cause may simplify all the way down to 237 | /// "we cannot pick version 1 of A". This will prevent us going down this path again. 
However 238 | /// when we start looking at version 2 of A, and discover that it depends on version 2 of B, we 239 | /// will want to prioritize the chain of intermediate steps to check if it has a problem with 240 | /// the same shape. The `satisfier_causes` argument keeps track of these intermediate steps so 241 | /// that the caller can use them for prioritization. 242 | #[allow(clippy::type_complexity)] 243 | #[cold] 244 | fn conflict_resolution( 245 | &mut self, 246 | incompatibility: IncompDpId<DP>, 247 | satisfier_causes: &mut SmallVec<(Id<DP::P>, IncompDpId<DP>)>, 248 | ) -> Result<(Id<DP::P>, IncompDpId<DP>), IncompDpId<DP>> { 249 | let mut current_incompat_id = incompatibility; 250 | let mut current_incompat_changed = false; 251 | loop { 252 | if self.incompatibility_store[current_incompat_id] 253 | .is_terminal(self.root_package, &self.root_version) 254 | { 255 | return Err(current_incompat_id); 256 | } else { 257 | let (package, satisfier_search_result) = self.partial_solution.satisfier_search( 258 | &self.incompatibility_store[current_incompat_id], 259 | &self.incompatibility_store, 260 | ); 261 | match satisfier_search_result { 262 | SatisfierSearch::DifferentDecisionLevels { 263 | previous_satisfier_level, 264 | } => { 265 | self.backtrack( 266 | current_incompat_id, 267 | current_incompat_changed, 268 | previous_satisfier_level, 269 | ); 270 | log::info!("backtrack to {:?}", previous_satisfier_level); 271 | satisfier_causes.push((package, current_incompat_id)); 272 | return Ok((package, current_incompat_id)); 273 | } 274 | SatisfierSearch::SameDecisionLevels { satisfier_cause } => { 275 | let prior_cause = Incompatibility::prior_cause( 276 | current_incompat_id, 277 | satisfier_cause, 278 | package, 279 | &self.incompatibility_store, 280 | ); 281 | log::info!("prior cause: {}", prior_cause.display(&self.package_store)); 282 | current_incompat_id = self.incompatibility_store.alloc(prior_cause); 283 | satisfier_causes.push((package, current_incompat_id)); 284 |
current_incompat_changed = true; 285 | } 286 | } 287 | } 288 | } 289 | } 290 | 291 | /// Backtracking. 292 | fn backtrack( 293 | &mut self, 294 | incompat: IncompDpId<DP>, 295 | incompat_changed: bool, 296 | decision_level: DecisionLevel, 297 | ) { 298 | self.partial_solution.backtrack(decision_level); 299 | // Remove contradicted incompatibilities that depend on decisions we just backtracked away. 300 | self.contradicted_incompatibilities 301 | .retain(|_, dl| *dl <= decision_level); 302 | if incompat_changed { 303 | self.merge_incompatibility(incompat); 304 | } 305 | } 306 | 307 | /// Add this incompatibility into the set of all incompatibilities. 308 | /// 309 | /// PubGrub collapses identical dependencies from adjacent package versions 310 | /// into individual incompatibilities. 311 | /// This substantially reduces the total number of incompatibilities 312 | /// and makes it much easier for PubGrub to reason about multiple versions of packages at once. 313 | /// 314 | /// For example, rather than representing 315 | /// foo 1.0.0 depends on bar ^1.0.0 and 316 | /// foo 1.1.0 depends on bar ^1.0.0 317 | /// as two separate incompatibilities, 318 | /// they are collapsed together into the single incompatibility {foo ^1.0.0, not bar ^1.0.0} 319 | /// (provided that no other version of foo exists between 1.0.0 and 2.0.0). 320 | /// We could collapse them into { foo (1.0.0 ∪ 1.1.0), not bar ^1.0.0 } 321 | /// without having to check the existence of other versions though.
322 | fn merge_incompatibility(&mut self, mut id: IncompDpId<DP>) { 323 | if let Some((p1, p2)) = self.incompatibility_store[id].as_dependency() { 324 | // If we are a dependency, there's a good chance we can be merged with a previous dependency 325 | let deps_lookup = self.merged_dependencies.entry((p1, p2)).or_default(); 326 | if let Some((past, merged)) = deps_lookup.as_mut_slice().iter_mut().find_map(|past| { 327 | self.incompatibility_store[id] 328 | .merge_dependents(&self.incompatibility_store[*past]) 329 | .map(|m| (past, m)) 330 | }) { 331 | let new = self.incompatibility_store.alloc(merged); 332 | for (pkg, _) in self.incompatibility_store[new].iter() { 333 | self.incompatibilities 334 | .entry(pkg) 335 | .or_default() 336 | .retain(|id| id != past); 337 | } 338 | *past = new; 339 | id = new; 340 | } else { 341 | deps_lookup.push(id); 342 | } 343 | } 344 | for (pkg, term) in self.incompatibility_store[id].iter() { 345 | if cfg!(debug_assertions) { 346 | assert_ne!(term, &crate::term::Term::any()); 347 | } 348 | self.incompatibilities.entry(pkg).or_default().push(id); 349 | } 350 | } 351 | 352 | // Error reporting ######################################################### 353 | 354 | fn build_derivation_tree( 355 | &self, 356 | incompat: IncompDpId<DP>, 357 | ) -> DerivationTree<DP::P, DP::VS, DP::M> { 358 | let mut all_ids: Set<IncompDpId<DP>> = Set::default(); 359 | let mut shared_ids = Set::default(); 360 | let mut stack = vec![incompat]; 361 | while let Some(i) = stack.pop() { 362 | if let Some((id1, id2)) = self.incompatibility_store[i].causes() { 363 | if all_ids.contains(&i) { 364 | shared_ids.insert(i); 365 | } else { 366 | stack.push(id1); 367 | stack.push(id2); 368 | } 369 | } 370 | all_ids.insert(i); 371 | } 372 | // To avoid recursion we need to generate trees in topological order. 373 | // That is to say we need to ensure that the causes are processed before the incompatibility they affect. 374 | // It happens to be that sorting by their ID maintains this property.
375 | let mut sorted_ids = all_ids.into_iter().collect::<Vec<_>>(); 376 | sorted_ids.sort_unstable_by_key(|id| id.into_raw()); 377 | let mut precomputed = Map::default(); 378 | for id in sorted_ids { 379 | let tree = Incompatibility::build_derivation_tree( 380 | id, 381 | &shared_ids, 382 | &self.incompatibility_store, 383 | &self.package_store, 384 | &precomputed, 385 | ); 386 | precomputed.insert(id, Arc::new(tree)); 387 | } 388 | // Now the user can refer to the entire tree from its root. 389 | Arc::into_inner(precomputed.remove(&incompat).unwrap()).unwrap() 390 | } 391 | } 392 | -------------------------------------------------------------------------------- /src/internal/incompatibility.rs: -------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | //! An incompatibility is a set of terms for different packages 4 | //! that should never be satisfied all together. 5 | 6 | use std::fmt::{Debug, Display}; 7 | use std::sync::Arc; 8 | 9 | use crate::internal::{Arena, HashArena, Id, SmallMap}; 10 | use crate::{ 11 | term, DependencyProvider, DerivationTree, Derived, External, Map, Package, Set, Term, 12 | VersionSet, 13 | }; 14 | 15 | /// An incompatibility is a set of terms for different packages 16 | /// that should never be satisfied all together. 17 | /// An incompatibility usually originates from a package dependency. 18 | /// For example, if package A at version 1 depends on package B 19 | /// at version 2, you can never have both terms `A = 1` 20 | /// and `not B = 2` satisfied at the same time in a partial solution. 21 | /// This would mean that we found a solution with package A at version 1 22 | /// but not with package B at version 2. 23 | /// Yet A at version 1 depends on B at version 2 so this is not possible. 24 | /// Therefore, the set `{ A = 1, not B = 2 }` is an incompatibility, 25 | /// defined from dependencies of A at version 1.
26 | /// 27 | /// Incompatibilities can also be derived from two other incompatibilities 28 | /// during conflict resolution. More about all this in 29 | /// [PubGrub documentation](https://github.com/dart-lang/pub/blob/master/doc/solver.md#incompatibility). 30 | #[derive(Debug, Clone)] 31 | pub(crate) struct Incompatibility<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> { 32 | package_terms: SmallMap<Id<P>, Term<VS>>, 33 | kind: Kind<P, VS, M>, 34 | } 35 | 36 | /// Type alias of unique identifiers for incompatibilities. 37 | pub(crate) type IncompId<P, VS, M> = Id<Incompatibility<P, VS, M>>; 38 | 39 | pub(crate) type IncompDpId<DP> = IncompId< 40 | <DP as DependencyProvider>::P, 41 | <DP as DependencyProvider>::VS, 42 | <DP as DependencyProvider>::M, 43 | >; 44 | 45 | #[derive(Debug, Clone)] 46 | enum Kind<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> { 47 | /// Initial incompatibility aiming at picking the root package for the first decision. 48 | /// 49 | /// This incompatibility drives the resolution, it requires that we pick the (virtual) root 50 | /// packages. 51 | NotRoot(Id<P>, VS::V),
52 | /// There are no versions in the given range for this package. 53 | /// 54 | /// This incompatibility is used when we tried all versions in a range and no version 55 | /// worked, so we have to backtrack. 56 | NoVersions(Id<P>, VS),
57 | /// Incompatibility coming from the dependencies of a given package. 58 | /// 59 | /// If a@1 depends on b>=1,<2, we create an incompatibility with terms `{a 1, b <1,>=2}` with 60 | /// kind `FromDependencyOf(a, 1, b, >=1,<2)`. 61 | /// 62 | /// We can merge multiple dependents with the same version. For example, if a@1 depends on b and 63 | /// a@2 depends on b, we can say instead a@1||2 depends on b. 64 | FromDependencyOf(Id<P>, VS, Id<P>, VS),
65 | /// Derived from two causes. Stores cause ids. 66 | /// 67 | /// For example, if a -> b and b -> c, we can derive a -> c. 68 | DerivedFrom(IncompId<P, VS, M>, IncompId<P, VS, M>), 69 | /// The package is unavailable for reasons outside pubgrub. 70 | /// 71 | /// Examples: 72 | /// * The version would require building the package, but builds are disabled. 73 | /// * The package is not available in the cache, but internet access has been disabled. 74 | Custom(Id<P>, VS, M),
75 | } 76 | 77 | /// A Relation describes how a set of terms can be compared to an incompatibility. 78 | /// Typically, the set of terms comes from the partial solution. 79 | #[derive(Eq, PartialEq, Debug)] 80 | pub(crate) enum Relation<P: Package> { 81 | /// We say that a set of terms S satisfies an incompatibility I 82 | /// if S satisfies every term in I. 83 | Satisfied, 84 | /// We say that S contradicts I 85 | /// if S contradicts at least one term in I. 86 | Contradicted(Id<P>),
87 | /// If S satisfies all but one of I's terms and is inconclusive for the remaining term, 88 | /// we say S "almost satisfies" I and we call the remaining term the "unsatisfied term". 89 | AlmostSatisfied(Id<P>), 90 | /// Otherwise, we say that their relation is inconclusive. 91 | Inconclusive, 92 | } 93 | 94 | impl<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> Incompatibility<P, VS, M> { 95 | /// Create the initial "not Root" incompatibility. 96 | pub(crate) fn not_root(package: Id<P>, version: VS::V) -> Self {
97 | Self { 98 | package_terms: SmallMap::One([( 99 | package, 100 | Term::Negative(VS::singleton(version.clone())), 101 | )]), 102 | kind: Kind::NotRoot(package, version), 103 | } 104 | } 105 | 106 | /// Create an incompatibility to remember that a given set does not contain any version. 107 | pub(crate) fn no_versions(package: Id<P>, term: Term<VS>) -> Self {
108 | let set = match &term { 109 | Term::Positive(r) => r.clone(), 110 | Term::Negative(_) => panic!("No version should have a positive term"), 111 | }; 112 | Self { 113 | package_terms: SmallMap::One([(package, term)]), 114 | kind: Kind::NoVersions(package, set), 115 | } 116 | } 117 | 118 | /// Create an incompatibility for a reason outside pubgrub. 119 | #[allow(dead_code)] // Used by uv 120 | pub(crate) fn custom_term(package: Id<P>, term: Term<VS>, metadata: M) -> Self {
121 | let set = match &term { 122 | Term::Positive(r) => r.clone(), 123 | Term::Negative(_) => panic!("No version should have a positive term"), 124 | }; 125 | Self { 126 | package_terms: SmallMap::One([(package, term)]), 127 | kind: Kind::Custom(package, set, metadata), 128 | } 129 | } 130 | 131 | /// Create an incompatibility for a reason outside pubgrub. 132 | pub(crate) fn custom_version(package: Id<P>, version: VS::V, metadata: M) -> Self {
133 | let set = VS::singleton(version); 134 | let term = Term::Positive(set.clone()); 135 | Self { 136 | package_terms: SmallMap::One([(package, term)]), 137 | kind: Kind::Custom(package, set, metadata), 138 | } 139 | } 140 | 141 | /// Build an incompatibility from a given dependency. 142 | pub(crate) fn from_dependency(package: Id<P>, versions: VS, dep: (Id<P>, VS)) -> Self {
143 | let (p2, set2) = dep; 144 | Self { 145 | package_terms: if set2 == VS::empty() { 146 | SmallMap::One([(package, Term::Positive(versions.clone()))]) 147 | } else { 148 | SmallMap::Two([ 149 | (package, Term::Positive(versions.clone())), 150 | (p2, Term::Negative(set2.clone())), 151 | ]) 152 | }, 153 | kind: Kind::FromDependencyOf(package, versions, p2, set2), 154 | } 155 | } 156 | 157 | pub(crate) fn as_dependency(&self) -> Option<(Id<P>, Id<P>)> {
158 | match &self.kind { 159 | Kind::FromDependencyOf(p1, _, p2, _) => Some((*p1, *p2)), 160 | _ => None, 161 | } 162 | } 163 | 164 | /// Merge dependent versions with the same dependency. 165 | /// 166 | /// When multiple versions of a package depend on the same range of another package, 167 | /// we can merge the two into a single incompatibility. 168 | /// For example, if a@1 depends on b and a@2 depends on b, we can say instead 169 | /// a@1||2 depends on b. 170 | /// 171 | /// It is a special case of prior cause computation where the unified package 172 | /// is the common dependent in the two incompatibilities expressing dependencies. 173 | pub(crate) fn merge_dependents(&self, other: &Self) -> Option<Self> { 174 | // It is almost certainly a bug to call this method without checking that self is a dependency 175 | debug_assert!(self.as_dependency().is_some()); 176 | // Check that both incompatibilities are of the shape p1 depends on p2, 177 | // with the same p1 and p2. 178 | let self_pkgs = self.as_dependency()?; 179 | if self_pkgs != other.as_dependency()? { 180 | return None; 181 | } 182 | let (p1, p2) = self_pkgs; 183 | // We ignore self-dependencies. They are always either trivially true or trivially false, 184 | // as the package version implies whether the constraint will always be fulfilled or always 185 | // violated. 186 | // At time of writing, the public crate API only allowed a map of dependencies, 187 | // meaning it can't hit this branch, which requires two self-dependencies. 188 | if p1 == p2 { 189 | return None; 190 | } 191 | let dep_term = self.get(p2); 192 | // The dependency range for p2 must be the same in both cases 193 | // to be able to merge multiple p1 ranges.
194 | if dep_term != other.get(p2) { 195 | return None; 196 | } 197 | Some(Self::from_dependency( 198 | p1, 199 | self.get(p1) 200 | .unwrap() 201 | .unwrap_positive() 202 | .union(other.get(p1).unwrap().unwrap_positive()), // It is safe to `simplify` here 203 | ( 204 | p2, 205 | dep_term.map_or(VS::empty(), |v| v.unwrap_negative().clone()), 206 | ), 207 | )) 208 | } 209 | 210 | /// Prior cause of two incompatibilities using the rule of resolution. 211 | pub(crate) fn prior_cause( 212 | incompat: Id<Self>, 213 | satisfier_cause: Id<Self>, 214 | package: Id<P>,
215 | incompatibility_store: &Arena<Self>, 216 | ) -> Self { 217 | let kind = Kind::DerivedFrom(incompat, satisfier_cause); 218 | // Optimization to avoid cloning and dropping t1 219 | let (t1, mut package_terms) = incompatibility_store[incompat] 220 | .package_terms 221 | .split_one(&package) 222 | .unwrap(); 223 | let satisfier_cause_terms = &incompatibility_store[satisfier_cause].package_terms; 224 | package_terms.merge( 225 | satisfier_cause_terms.iter().filter(|(p, _)| p != &&package), 226 | |t1, t2| Some(t1.intersection(t2)), 227 | ); 228 | let term = t1.union(satisfier_cause_terms.get(&package).unwrap()); 229 | if term != Term::any() { 230 | package_terms.insert(package, term); 231 | } 232 | Self { 233 | package_terms, 234 | kind, 235 | } 236 | } 237 | 238 | /// Check if an incompatibility should mark the end of the algorithm 239 | /// because it satisfies the root package. 240 | pub(crate) fn is_terminal(&self, root_package: Id<P>, root_version: &VS::V) -> bool {
241 | if self.package_terms.len() == 0 { 242 | true 243 | } else if self.package_terms.len() > 1 { 244 | false 245 | } else { 246 | let (package, term) = self.package_terms.iter().next().unwrap(); 247 | (package == &root_package) && term.contains(root_version) 248 | } 249 | } 250 | 251 | /// Get the term related to a given package (if it exists). 252 | pub(crate) fn get(&self, package: Id<P>) -> Option<&Term<VS>> {
253 | self.package_terms.get(&package) 254 | } 255 | 256 | /// Iterate over packages. 257 | pub(crate) fn iter(&self) -> impl Iterator<Item = (Id<P>, &Term<VS>)> { 258 | self.package_terms 259 | .iter() 260 | .map(|(package, term)| (*package, term)) 261 | } 262 | 263 | // Reporting ############################################################### 264 | 265 | /// Retrieve parent causes if of type DerivedFrom. 266 | pub(crate) fn causes(&self) -> Option<(Id<Self>, Id<Self>)> { 267 | match self.kind { 268 | Kind::DerivedFrom(id1, id2) => Some((id1, id2)), 269 | _ => None, 270 | } 271 | } 272 | 273 | /// Build a derivation tree for error reporting. 274 | pub(crate) fn build_derivation_tree( 275 | self_id: Id<Self>, 276 | shared_ids: &Set<Id<Self>>, 277 | store: &Arena<Self>, 278 | package_store: &HashArena<P>,
279 | precomputed: &Map<Id<Self>, Arc<DerivationTree<P, VS, M>>>, 280 | ) -> DerivationTree<P, VS, M> { 281 | match store[self_id].kind.clone() { 282 | Kind::DerivedFrom(id1, id2) => { 283 | let derived: Derived<P, VS, M> = Derived { 284 | terms: store[self_id] 285 | .package_terms 286 | .iter() 287 | .map(|(&a, b)| (package_store[a].clone(), b.clone())) 288 | .collect(), 289 | shared_id: shared_ids.get(&self_id).map(|id| id.into_raw()), 290 | cause1: precomputed 291 | .get(&id1) 292 | .expect("Non-topological calls building tree") 293 | .clone(), 294 | cause2: precomputed 295 | .get(&id2) 296 | .expect("Non-topological calls building tree") 297 | .clone(), 298 | }; 299 | DerivationTree::Derived(derived) 300 | } 301 | Kind::NotRoot(package, version) => { 302 | DerivationTree::External(External::NotRoot(package_store[package].clone(), version)) 303 | } 304 | Kind::NoVersions(package, set) => DerivationTree::External(External::NoVersions( 305 | package_store[package].clone(), 306 | set.clone(), 307 | )), 308 | Kind::FromDependencyOf(package, set, dep_package, dep_set) => { 309 | DerivationTree::External(External::FromDependencyOf( 310 | package_store[package].clone(), 311 | set.clone(), 312 | package_store[dep_package].clone(), 313 | dep_set.clone(), 314 | )) 315 | } 316 | Kind::Custom(package, set, metadata) => DerivationTree::External(External::Custom( 317 | package_store[package].clone(), 318 | set.clone(), 319 | metadata.clone(), 320 | )), 321 | } 322 | } 323 | } 324 | 325 | impl<'a, P: Package, VS: VersionSet + 'a, M: Eq + Clone + Debug + Display + 'a> 326 | Incompatibility<P, VS, M> 327 | { 328 | /// Cf. the definition of the `Relation` enum. 329 | pub(crate) fn relation(&self, terms: impl Fn(Id<P>) -> Option<&'a Term<VS>>) -> Relation<P> {
330 | let mut relation = Relation::Satisfied; 331 | for (&package, incompat_term) in self.package_terms.iter() { 332 | match terms(package).map(|term| incompat_term.relation_with(term)) { 333 | Some(term::Relation::Satisfied) => {} 334 | Some(term::Relation::Contradicted) => { 335 | return Relation::Contradicted(package); 336 | } 337 | None | Some(term::Relation::Inconclusive) => { 338 | // If a package is not present, the intersection is the same as [Term::any]. 339 | // According to the rules of satisfactions, the relation would be inconclusive. 340 | // It could also be satisfied if the incompatibility term was also [Term::any], 341 | // but we systematically remove those from incompatibilities 342 | // so we're safe on that front. 343 | if relation == Relation::Satisfied { 344 | relation = Relation::AlmostSatisfied(package); 345 | } else { 346 | return Relation::Inconclusive; 347 | } 348 | } 349 | } 350 | } 351 | relation 352 | } 353 | } 354 | 355 | impl<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> Incompatibility<P, VS, M> { 356 | pub fn display<'a>(&'a self, package_store: &'a HashArena<P>) -> impl Display + 'a { 357 | match self.iter().collect::<Vec<_>>().as_slice() { 358 | [] => "version solving failed".into(), 359 | // TODO: special case when that unique package is root. 360 | [(package, Term::Positive(range))] => { 361 | format!("{} {} is forbidden", package_store[*package], range) 362 | } 363 | [(package, Term::Negative(range))] => { 364 | format!("{} {} is mandatory", package_store[*package], range) 365 | } 366 | [(p_pos, Term::Positive(r_pos)), (p_neg, Term::Negative(r_neg))] 367 | | [(p_neg, Term::Negative(r_neg)), (p_pos, Term::Positive(r_pos))] => { 368 | External::<_, _, M>::FromDependencyOf( 369 | &package_store[*p_pos], 370 | r_pos.clone(), 371 | &package_store[*p_neg], 372 | r_neg.clone(), 373 | ) 374 | .to_string() 375 | } 376 | slice => { 377 | let str_terms: Vec<_> = slice 378 | .iter() 379 | .map(|(p, t)| format!("{} {}", package_store[*p], t)) 380 | .collect(); 381 | str_terms.join(", ") + " are incompatible" 382 | } 383 | } 384 | } 385 | } 386 | 387 | // TESTS ####################################################################### 388 | 389 | #[cfg(test)] 390 | pub(crate) mod tests { 391 | use proptest::prelude::*; 392 | use std::cmp::Reverse; 393 | use std::collections::BTreeMap; 394 | 395 | use super::*; 396 | use crate::internal::State; 397 | use crate::term::tests::strategy as term_strat; 398 | use crate::{OfflineDependencyProvider, Ranges}; 399 | 400 | proptest! { 401 | 402 | /// For any three different packages p1, p2 and p3, 403 | /// for any three terms t1, t2 and t3, 404 | /// if we have the two following incompatibilities: 405 | /// { p1: t1, p2: not t2 } 406 | /// { p2: t2, p3: t3 } 407 | /// the rule of resolution says that we can deduce the following incompatibility: 408 | /// { p1: t1, p3: t3 } 409 | #[test] 410 | fn rule_of_resolution(t1 in term_strat(), t2 in term_strat(), t3 in term_strat()) { 411 | let mut store = Arena::new(); 412 | let mut package_store = HashArena::new(); 413 | let p1 = package_store.alloc("p1"); 414 | let p2 = package_store.alloc("p2"); 415 | let p3 = package_store.alloc("p3"); 416 | let i1 = store.alloc(Incompatibility { 417 | package_terms: SmallMap::Two([(p1, t1.clone()), (p2, t2.negate())]), 418 | kind: Kind::<_, _, String>::FromDependencyOf(p1, Ranges::full(), p2, Ranges::full()) 419 | }); 420 | 421 | let i2 = store.alloc(Incompatibility { 422 | package_terms: SmallMap::Two([(p2, t2), (p3, t3.clone())]), 423 | kind: Kind::<_, _, String>::FromDependencyOf(p2, Ranges::full(), p3, Ranges::full()) 424 | }); 425 | 426 | let mut i3 = Map::default(); 427 | i3.insert(p1, t1); 428 | i3.insert(p3, t3); 429 | 430 | let i_resolution = Incompatibility::prior_cause(i1, i2, p2, &store); 431 | assert_eq!(i_resolution.package_terms.iter().map(|(&k, v)|(k, v.clone())).collect::<Map<_, _>>(), i3); 432 | } 433 | 434 | } 435 | 436 | /// Check that multiple self-dependencies are supported. 437 | /// 438 | /// The current public API deduplicates dependencies through a map, so we test them here 439 | /// manually.
440 | /// 441 | /// https://github.com/astral-sh/uv/issues/13344 442 | #[test] 443 | fn package_depend_on_self() { 444 | let cases: &[Vec<(String, Ranges<usize>)>] = &[ 445 | vec![("foo".to_string(), Ranges::full())], 446 | vec![ 447 | ("foo".to_string(), Ranges::full()), 448 | ("foo".to_string(), Ranges::full()), 449 | ], 450 | vec![ 451 | ("foo".to_string(), Ranges::full()), 452 | ("foo".to_string(), Ranges::singleton(1usize)), 453 | ], 454 | vec![ 455 | ("foo".to_string(), Ranges::singleton(1usize)), 456 | ("foo".to_string(), Ranges::from_range_bounds(1usize..2)), 457 | ("foo".to_string(), Ranges::from_range_bounds(1usize..3)), 458 | ], 459 | ]; 460 | 461 | for case in cases { 462 | let mut state: State<OfflineDependencyProvider<String, Ranges<usize>>> = 463 | State::init("root".to_string(), 0); 464 | state.unit_propagation(state.root_package).unwrap(); 465 | 466 | // Add the root package 467 | state.add_package_version_dependencies( 468 | state.root_package, 469 | 0, 470 | [("foo".to_string(), Ranges::singleton(1usize))], 471 | ); 472 | state.unit_propagation(state.root_package).unwrap(); 473 | 474 | // Add a package that depends on itself twice 475 | let (next, _) = state 476 | .partial_solution 477 | .pick_highest_priority_pkg(|_p, _r| (0, Reverse(0))) 478 | .unwrap(); 479 | state.add_package_version_dependencies(next, 1, case.clone()); 480 | state.unit_propagation(next).unwrap(); 481 | 482 | assert!(state 483 | .partial_solution 484 | .pick_highest_priority_pkg(|_p, _r| (0, Reverse(0))) 485 | .is_none()); 486 | 487 | let solution: BTreeMap<String, usize> = state 488 | .partial_solution 489 | .extract_solution() 490 | .map(|(p, v)| (state.package_store[p].clone(), v)) 491 | .collect(); 492 | let expected = BTreeMap::from([("root".to_string(), 0), ("foo".to_string(), 1)]); 493 | 494 | assert_eq!(solution, expected, "{:?}", case); 495 | } 496 | } 497 | } 498 | -------------------------------------------------------------------------------- /src/internal/mod.rs:
-------------------------------------------------------------------------------- 1 | // SPDX-License-Identifier: MPL-2.0 2 | 3 | //! Non-exposed modules. 4 | 5 | mod arena; 6 | mod core; 7 | mod incompatibility; 8 | mod partial_solution; 9 | mod small_map; 10 | mod small_vec; 11 | 12 | pub(crate) use arena::{Arena, HashArena, Id}; 13 | pub(crate) use core::State; 14 | pub(crate) use incompatibility::{IncompDpId, IncompId, Incompatibility, Relation}; 15 | pub(crate) use partial_solution::{DecisionLevel, PartialSolution, SatisfierSearch}; 16 | pub(crate) use small_map::SmallMap; 17 | pub(crate) use small_vec::SmallVec; 18 | -------------------------------------------------------------------------------- /src/internal/small_map.rs: -------------------------------------------------------------------------------- 1 | use std::hash::Hash; 2 | 3 | use crate::Map; 4 | 5 | #[derive(Debug, Clone)] 6 | pub(crate) enum SmallMap<K, V> { 7 | Empty, 8 | One([(K, V); 1]), 9 | Two([(K, V); 2]), 10 | Flexible(Map<K, V>), 11 | } 12 | 13 | impl<K: PartialEq + Eq + Hash, V> SmallMap<K, V> { 14 | pub(crate) fn get(&self, key: &K) -> Option<&V> { 15 | match self { 16 | Self::Empty => None, 17 | Self::One([(k, v)]) if k == key => Some(v), 18 | Self::One(_) => None, 19 | Self::Two([(k1, v1), _]) if key == k1 => Some(v1), 20 | Self::Two([_, (k2, v2)]) if key == k2 => Some(v2), 21 | Self::Two(_) => None, 22 | Self::Flexible(data) => data.get(key), 23 | } 24 | } 25 | 26 | pub(crate) fn get_mut(&mut self, key: &K) -> Option<&mut V> { 27 | match self { 28 | Self::Empty => None, 29 | Self::One([(k, v)]) if k == key => Some(v), 30 | Self::One(_) => None, 31 | Self::Two([(k1, v1), _]) if key == k1 => Some(v1), 32 | Self::Two([_, (k2, v2)]) if key == k2 => Some(v2), 33 | Self::Two(_) => None, 34 | Self::Flexible(data) => data.get_mut(key), 35 | } 36 | } 37 | 38 | pub(crate) fn remove(&mut self, key: &K) -> Option<V> { 39 | let out; 40 | *self = match std::mem::take(self) { 41 | Self::Empty => { 42 | out = None; 43 | Self::Empty 44 | } 45 |
Self::One([(k, v)]) => { 46 | if key == &k { 47 | out = Some(v); 48 | Self::Empty 49 | } else { 50 | out = None; 51 | Self::One([(k, v)]) 52 | } 53 | } 54 | Self::Two([(k1, v1), (k2, v2)]) => { 55 | if key == &k1 { 56 | out = Some(v1); 57 | Self::One([(k2, v2)]) 58 | } else if key == &k2 { 59 | out = Some(v2); 60 | Self::One([(k1, v1)]) 61 | } else { 62 | out = None; 63 | Self::Two([(k1, v1), (k2, v2)]) 64 | } 65 | } 66 | Self::Flexible(mut data) => { 67 | out = data.remove(key); 68 | Self::Flexible(data) 69 | } 70 | }; 71 | out 72 | } 73 | 74 | pub(crate) fn insert(&mut self, key: K, value: V) { 75 | *self = match std::mem::take(self) { 76 | Self::Empty => Self::One([(key, value)]), 77 | Self::One([(k, v)]) => { 78 | if key == k { 79 | Self::One([(k, value)]) 80 | } else { 81 | Self::Two([(k, v), (key, value)]) 82 | } 83 | } 84 | Self::Two([(k1, v1), (k2, v2)]) => { 85 | if key == k1 { 86 | Self::Two([(k1, value), (k2, v2)]) 87 | } else if key == k2 { 88 | Self::Two([(k1, v1), (k2, value)]) 89 | } else { 90 | let mut data: Map<K, V> = Map::with_capacity_and_hasher(3, Default::default()); 91 | data.insert(key, value); 92 | data.insert(k1, v1); 93 | data.insert(k2, v2); 94 | Self::Flexible(data) 95 | } 96 | } 97 | Self::Flexible(mut data) => { 98 | data.insert(key, value); 99 | Self::Flexible(data) 100 | } 101 | }; 102 | } 103 | 104 | /// Returns a reference to the value for one key and a copy of the map without the key. 105 | /// 106 | /// This is an optimization over the following, where we only need a reference to `t1`. It 107 | /// avoids cloning and then dropping the ranges in each `prior_cause` call.
108 | /// ```ignore 109 | /// let mut package_terms = package_terms.clone(); 110 | /// let t1 = package_terms.remove(package).unwrap(); 111 | /// ``` 112 | pub(crate) fn split_one(&self, key: &K) -> Option<(&V, Self)> 113 | where 114 | K: Clone, 115 | V: Clone, 116 | { 117 | match self { 118 | Self::Empty => None, 119 | Self::One([(k, v)]) => { 120 | if k == key { 121 | Some((v, Self::Empty)) 122 | } else { 123 | None 124 | } 125 | } 126 | Self::Two([(k1, v1), (k2, v2)]) => { 127 | if k1 == key { 128 | Some((v1, Self::One([(k2.clone(), v2.clone())]))) 129 | } else if k2 == key { 130 | Some((v2, Self::One([(k1.clone(), v1.clone())]))) 131 | } else { 132 | None 133 | } 134 | } 135 | Self::Flexible(map) => { 136 | if let Some(value) = map.get(key) { 137 | let mut map = map.clone(); 138 | map.remove(key).unwrap(); 139 | Some((value, Self::Flexible(map))) 140 | } else { 141 | None 142 | } 143 | } 144 | } 145 | } 146 | } 147 | 148 | impl<K: Clone + PartialEq + Eq + Hash, V: Clone> SmallMap<K, V> { 149 | /// Merge two hash maps. 150 | /// 151 | /// When a key is common to both, 152 | /// apply the provided function to both values. 153 | /// If the result is None, remove that key from the merged map, 154 | /// otherwise add the content of the `Some(_)`.
155 | pub(crate) fn merge<'a>( 156 | &'a mut self, 157 | map_2: impl Iterator<Item = (&'a K, &'a V)>, 158 | f: impl Fn(&V, &V) -> Option<V>, 159 | ) { 160 | for (key, val_2) in map_2 { 161 | match self.get_mut(key) { 162 | None => { 163 | self.insert(key.clone(), val_2.clone()); 164 | } 165 | Some(val_1) => match f(val_1, val_2) { 166 | None => { 167 | self.remove(key); 168 | } 169 | Some(merged_value) => *val_1 = merged_value, 170 | }, 171 | } 172 | } 173 | } 174 | } 175 | 176 | impl<K, V> Default for SmallMap<K, V> { 177 | fn default() -> Self { 178 | Self::Empty 179 | } 180 | } 181 | 182 | impl<K, V> SmallMap<K, V> { 183 | pub(crate) fn len(&self) -> usize { 184 | match self { 185 | Self::Empty => 0, 186 | Self::One(_) => 1, 187 | Self::Two(_) => 2, 188 | Self::Flexible(data) => data.len(), 189 | } 190 | } 191 | } 192 | 193 | enum IterSmallMap<'a, K, V> { 194 | Inline(std::slice::Iter<'a, (K, V)>), 195 | Map(std::collections::hash_map::Iter<'a, K, V>), 196 | } 197 | 198 | impl<'a, K: 'a, V: 'a> Iterator for IterSmallMap<'a, K, V> { 199 | type Item = (&'a K, &'a V); 200 | 201 | fn next(&mut self) -> Option<Self::Item> { 202 | match self { 203 | IterSmallMap::Inline(inner) => inner.next().map(|(k, v)| (k, v)), 204 | IterSmallMap::Map(inner) => inner.next(), 205 | } 206 | } 207 | } 208 | 209 | impl<K, V> SmallMap<K, V> { 210 | pub(crate) fn iter(&self) -> impl Iterator<Item = (&K, &V)> { 211 | match self { 212 | Self::Empty => IterSmallMap::Inline([].iter()), 213 | Self::One(data) => IterSmallMap::Inline(data.iter()), 214 | Self::Two(data) => IterSmallMap::Inline(data.iter()), 215 | Self::Flexible(data) => IterSmallMap::Map(data.iter()), 216 | } 217 | } 218 | } 219 | -------------------------------------------------------------------------------- /src/internal/small_vec.rs: -------------------------------------------------------------------------------- 1 | use std::fmt; 2 | use std::hash::{Hash, Hasher}; 3 | use std::ops::Deref; 4 | 5 | #[derive(Clone)] 6 | pub enum SmallVec<T> { 7 | Empty, 8 | One([T; 1]), 9 | Two([T; 2]), 10 | Flexible(Vec<T>), 11 | } 12 |
13 | impl SmallVec { 14 | pub fn empty() -> Self { 15 | Self::Empty 16 | } 17 | 18 | pub fn one(t: T) -> Self { 19 | Self::One([t]) 20 | } 21 | 22 | pub fn as_slice(&self) -> &[T] { 23 | match self { 24 | Self::Empty => &[], 25 | Self::One(v) => v, 26 | Self::Two(v) => v, 27 | Self::Flexible(v) => v, 28 | } 29 | } 30 | 31 | pub fn as_mut_slice(&mut self) -> &mut [T] { 32 | match self { 33 | Self::Empty => &mut [], 34 | Self::One(v) => v, 35 | Self::Two(v) => v, 36 | Self::Flexible(v) => v, 37 | } 38 | } 39 | 40 | pub fn push(&mut self, new: T) { 41 | *self = match std::mem::take(self) { 42 | Self::Empty => Self::One([new]), 43 | Self::One([v1]) => Self::Two([v1, new]), 44 | Self::Two([v1, v2]) => Self::Flexible(vec![v1, v2, new]), 45 | Self::Flexible(mut v) => { 46 | v.push(new); 47 | Self::Flexible(v) 48 | } 49 | } 50 | } 51 | 52 | pub fn pop(&mut self) -> Option { 53 | match std::mem::take(self) { 54 | Self::Empty => None, 55 | Self::One([v1]) => { 56 | *self = Self::Empty; 57 | Some(v1) 58 | } 59 | Self::Two([v1, v2]) => { 60 | *self = Self::One([v1]); 61 | Some(v2) 62 | } 63 | Self::Flexible(mut v) => { 64 | let out = v.pop(); 65 | *self = Self::Flexible(v); 66 | out 67 | } 68 | } 69 | } 70 | 71 | pub fn clear(&mut self) { 72 | if let Self::Flexible(mut v) = std::mem::take(self) { 73 | v.clear(); 74 | *self = Self::Flexible(v); 75 | } // else: self already eq Empty from the take 76 | } 77 | 78 | pub fn iter(&self) -> std::slice::Iter<'_, T> { 79 | self.as_slice().iter() 80 | } 81 | } 82 | 83 | impl Default for SmallVec { 84 | fn default() -> Self { 85 | Self::Empty 86 | } 87 | } 88 | 89 | impl Deref for SmallVec { 90 | type Target = [T]; 91 | 92 | fn deref(&self) -> &Self::Target { 93 | self.as_slice() 94 | } 95 | } 96 | 97 | impl<'a, T> IntoIterator for &'a SmallVec { 98 | type Item = &'a T; 99 | 100 | type IntoIter = std::slice::Iter<'a, T>; 101 | 102 | fn into_iter(self) -> Self::IntoIter { 103 | self.iter() 104 | } 105 | } 106 | 107 | impl Eq for SmallVec 
{} 108 | 109 | impl PartialEq for SmallVec { 110 | fn eq(&self, other: &Self) -> bool { 111 | self.as_slice() == other.as_slice() 112 | } 113 | } 114 | 115 | impl fmt::Debug for SmallVec { 116 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 117 | self.as_slice().fmt(f) 118 | } 119 | } 120 | 121 | impl Hash for SmallVec { 122 | fn hash(&self, state: &mut H) { 123 | self.len().hash(state); 124 | Hash::hash_slice(self.as_slice(), state); 125 | } 126 | } 127 | 128 | #[cfg(feature = "serde")] 129 | impl serde::Serialize for SmallVec { 130 | fn serialize(&self, s: S) -> Result { 131 | serde::Serialize::serialize(self.as_slice(), s) 132 | } 133 | } 134 | 135 | #[cfg(feature = "serde")] 136 | impl<'de, T: serde::Deserialize<'de>> serde::Deserialize<'de> for SmallVec { 137 | fn deserialize>(d: D) -> Result { 138 | struct SmallVecVisitor { 139 | marker: std::marker::PhantomData, 140 | } 141 | 142 | impl<'de, T> serde::de::Visitor<'de> for SmallVecVisitor 143 | where 144 | T: serde::Deserialize<'de>, 145 | { 146 | type Value = SmallVec; 147 | 148 | fn expecting(&self, formatter: &mut fmt::Formatter) -> fmt::Result { 149 | formatter.write_str("a sequence") 150 | } 151 | 152 | fn visit_seq(self, mut seq: A) -> Result 153 | where 154 | A: serde::de::SeqAccess<'de>, 155 | { 156 | let mut values = SmallVec::empty(); 157 | while let Some(value) = seq.next_element()? 
{
158 |                     values.push(value);
159 |                 }
160 |                 Ok(values)
161 |             }
162 |         }
163 | 
164 |         let visitor = SmallVecVisitor {
165 |             marker: Default::default(),
166 |         };
167 |         d.deserialize_seq(visitor)
168 |     }
169 | }
170 | 
171 | impl<T> IntoIterator for SmallVec<T> {
172 |     type Item = T;
173 |     type IntoIter = SmallVecIntoIter<T>;
174 | 
175 |     fn into_iter(self) -> Self::IntoIter {
176 |         match self {
177 |             SmallVec::Empty => SmallVecIntoIter::Empty,
178 |             SmallVec::One(a) => SmallVecIntoIter::One(a.into_iter()),
179 |             SmallVec::Two(a) => SmallVecIntoIter::Two(a.into_iter()),
180 |             SmallVec::Flexible(v) => SmallVecIntoIter::Flexible(v.into_iter()),
181 |         }
182 |     }
183 | }
184 | 
185 | pub enum SmallVecIntoIter<T> {
186 |     Empty,
187 |     One(<[T; 1] as IntoIterator>::IntoIter),
188 |     Two(<[T; 2] as IntoIterator>::IntoIter),
189 |     Flexible(<Vec<T> as IntoIterator>::IntoIter),
190 | }
191 | 
192 | impl<T> Iterator for SmallVecIntoIter<T> {
193 |     type Item = T;
194 | 
195 |     fn next(&mut self) -> Option<Self::Item> {
196 |         match self {
197 |             SmallVecIntoIter::Empty => None,
198 |             SmallVecIntoIter::One(it) => it.next(),
199 |             SmallVecIntoIter::Two(it) => it.next(),
200 |             SmallVecIntoIter::Flexible(it) => it.next(),
201 |         }
202 |     }
203 | }
204 | 
205 | // TESTS #######################################################################
206 | 
207 | #[cfg(test)]
208 | pub mod tests {
209 |     use proptest::prelude::*;
210 | 
211 |     use super::*;
212 | 
213 |     proptest!
{
214 |         #[test]
215 |         fn push_and_pop(commands: Vec<Option<u8>>) {
216 |             let mut v = vec![];
217 |             let mut sv = SmallVec::Empty;
218 |             for command in commands {
219 |                 match command {
220 |                     Some(i) => {
221 |                         v.push(i);
222 |                         sv.push(i);
223 |                     }
224 |                     None => {
225 |                         assert_eq!(v.pop(), sv.pop());
226 |                     }
227 |                 }
228 |                 assert_eq!(v.as_slice(), sv.as_slice());
229 |             }
230 |         }
231 |     }
232 | }
233 | 
--------------------------------------------------------------------------------
/src/lib.rs:
--------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 | 
3 | //! PubGrub version solving algorithm.
4 | //!
5 | //! Version solving consists of efficiently finding a set of packages and versions
6 | //! that satisfy all the constraints of a given project's dependencies.
7 | //! In addition, when that is not possible,
8 | //! we should try to provide a clear and human-readable
9 | //! explanation as to why that failed.
10 | //!
11 | //! # Basic example
12 | //!
13 | //! Let's imagine that we are building a user interface
14 | //! with a menu containing dropdowns with some icons,
15 | //! icons that we are also directly using in other parts of the interface.
16 | //! For this scenario our direct dependencies are `menu` and `icons`,
17 | //! but the complete set of dependencies looks as follows:
18 | //!
19 | //! - `root` depends on `menu` and `icons`
20 | //! - `menu` depends on `dropdown`
21 | //! - `dropdown` depends on `icons`
22 | //! - `icons` has no dependency
23 | //!
24 | //! We can model that scenario with this library as follows
25 | //! ```
26 | //! # use pubgrub::{OfflineDependencyProvider, resolve, Ranges};
27 | //!
28 | //! type NumVS = Ranges<u32>;
29 | //!
30 | //! let mut dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new();
31 | //!
32 | //! dependency_provider.add_dependencies(
33 | //!     "root",
34 | //!     1u32,
35 | //!     [("menu", Ranges::full()), ("icons", Ranges::full())],
36 | //! );
37 | //!
dependency_provider.add_dependencies("menu", 1u32, [("dropdown", Ranges::full())]); 38 | //! dependency_provider.add_dependencies("dropdown", 1u32, [("icons", Ranges::full())]); 39 | //! dependency_provider.add_dependencies("icons", 1u32, []); 40 | //! 41 | //! // Run the algorithm. 42 | //! let solution = resolve(&dependency_provider, "root", 1u32).unwrap(); 43 | //! ``` 44 | //! 45 | //! # Package and Version flexibility 46 | //! 47 | //! The [OfflineDependencyProvider] used in that example is generic over the way package names, 48 | //! version requirements, and version numbers are represented. 49 | //! 50 | //! The first bound is the type of package names. It can be anything that implements our [Package] trait. 51 | //! The [Package] trait is automatic if the type already implements 52 | //! [Clone] + [Eq] + [Hash] + [Debug] + [Display](std::fmt::Display). 53 | //! So things like [String] will work out of the box. 54 | //! 55 | //! The second bound is the type of package requirements. It can be anything that implements our [VersionSet] trait. 56 | //! This trait is used to figure out how version requirements are combined. 57 | //! If the normal [Ord]/[PartialEq] operations are all that is needed for requirements, our [Ranges] type will work. 58 | //! 59 | //! The chosen `VersionSet` in turn specifies what can be used for version numbers. 60 | //! This type needs to at least implement [Clone] + [Ord] + [Debug] + [Display](std::fmt::Display). 61 | //! For convenience, this library provides [SemanticVersion] that implements the basics of semantic versioning rules. 62 | //! 63 | //! # DependencyProvider trait 64 | //! 65 | //! In our previous example we used the 66 | //! [OfflineDependencyProvider], 67 | //! which is a basic implementation of the [DependencyProvider] trait. 68 | //! 69 | //! But we might want to implement the [DependencyProvider] 70 | //! trait for our own type. 71 | //! Let's say that we will use [String] for packages, 72 | //! 
and [SemanticVersion] for versions.
73 | //! This may be done quite easily by implementing the three following functions.
74 | //! ```
75 | //! # use pubgrub::{DependencyProvider, Dependencies, SemanticVersion, Ranges,
76 | //! #               DependencyConstraints, Map, PackageResolutionStatistics};
77 | //! # use std::error::Error;
78 | //! # use std::borrow::Borrow;
79 | //! # use std::convert::Infallible;
80 | //! #
81 | //! # struct MyDependencyProvider;
82 | //! #
83 | //! type SemVS = Ranges<SemanticVersion>;
84 | //!
85 | //! impl DependencyProvider for MyDependencyProvider {
86 | //!     fn choose_version(&self, package: &String, range: &SemVS) -> Result<Option<SemanticVersion>, Infallible> {
87 | //!         unimplemented!()
88 | //!     }
89 | //!
90 | //!     type Priority = usize;
91 | //!     fn prioritize(&self, package: &String, range: &SemVS, conflicts_counts: &PackageResolutionStatistics) -> Self::Priority {
92 | //!         unimplemented!()
93 | //!     }
94 | //!
95 | //!     fn get_dependencies(
96 | //!         &self,
97 | //!         package: &String,
98 | //!         version: &SemanticVersion,
99 | //!     ) -> Result<Dependencies<String, SemVS, String>, Infallible> {
100 | //!         Ok(Dependencies::Available(DependencyConstraints::default()))
101 | //!     }
102 | //!
103 | //!     type Err = Infallible;
104 | //!     type P = String;
105 | //!     type V = SemanticVersion;
106 | //!     type VS = SemVS;
107 | //!     type M = String;
108 | //! }
109 | //! ```
110 | //!
111 | //! The first method
112 | //! [choose_version](DependencyProvider::choose_version)
113 | //! chooses a version compatible with the provided range for a package.
114 | //! The second method
115 | //! [prioritize](DependencyProvider::prioritize)
116 | //! decides in which order different packages should be chosen.
117 | //! Usually, prioritizing packages
118 | //! with the fewest compatible versions speeds up resolution.
119 | //! But in general you are free to employ whatever strategy suits you best
120 | //! to pick a package and a version.
121 | //!
122 | //! The third method [get_dependencies](DependencyProvider::get_dependencies)
123 | //!
aims at retrieving the dependencies of a given package at a given version. 124 | //! 125 | //! In a real scenario, these two methods may involve reading the file system 126 | //! or doing network request, so you may want to hold a cache in your 127 | //! [DependencyProvider] implementation. 128 | //! How exactly this could be achieved is shown in `CachingDependencyProvider` 129 | //! (see `examples/caching_dependency_provider.rs`). 130 | //! You could also use the [OfflineDependencyProvider] 131 | //! type defined by the crate as guidance, 132 | //! but you are free to use whatever approach makes sense in your situation. 133 | //! 134 | //! # Solution and error reporting 135 | //! 136 | //! When everything goes well, the algorithm finds and returns the complete 137 | //! set of direct and indirect dependencies satisfying all the constraints. 138 | //! The packages and versions selected are returned as 139 | //! [SelectedDependencies](SelectedDependencies). 140 | //! But sometimes there is no solution because dependencies are incompatible. 141 | //! In such cases, [resolve(...)](resolve) returns a 142 | //! [PubGrubError::NoSolution(derivation_tree)](PubGrubError::NoSolution), 143 | //! where the provided derivation tree is a custom binary tree 144 | //! containing the full chain of reasons why there is no solution. 145 | //! 146 | //! All the items in the tree are called incompatibilities 147 | //! and may be of two types, either "external" or "derived". 148 | //! Leaves of the tree are external incompatibilities, 149 | //! and nodes are derived. 150 | //! External incompatibilities have reasons that are independent 151 | //! of the way this algorithm is implemented such as 152 | //! - dependencies: "package_a" at version 1 depends on "package_b" at version 4 153 | //! - missing dependencies: dependencies of "package_a" are unavailable 154 | //! - absence of version: there is no version of "package_a" in the range [3.1.0 4.0.0[ 155 | //! 156 | //! 
Derived incompatibilities are obtained during the algorithm execution by deduction,
157 | //! such as: if "a" depends on "b" and "b" depends on "c", then "a" depends on "c".
158 | //!
159 | //! This crate defines a [Reporter] trait, with an associated
160 | //! [Output](Reporter::Output) type and a single method.
161 | //! ```
162 | //! # use pubgrub::{Package, VersionSet, DerivationTree};
163 | //! # use std::fmt::{Debug, Display};
164 | //! #
165 | //! pub trait Reporter<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> {
166 | //!     type Output;
167 | //!
168 | //!     fn report(derivation_tree: &DerivationTree<P, VS, M>) -> Self::Output;
169 | //! }
170 | //! ```
171 | //! Implementing a [Reporter] may involve a lot of heuristics
172 | //! to make the output human-readable and natural.
173 | //! For convenience, we provide a default implementation
174 | //! [DefaultStringReporter] that outputs the report as a [String].
175 | //! You may use it as follows:
176 | //! ```
177 | //! # use pubgrub::{resolve, OfflineDependencyProvider, DefaultStringReporter, Reporter, PubGrubError, Ranges};
178 | //! #
179 | //! # type NumVS = Ranges<u32>;
180 | //! #
181 | //! # let dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new();
182 | //! # let root_package = "root";
183 | //! # let root_version = 1u32;
184 | //! #
185 | //! match resolve(&dependency_provider, root_package, root_version) {
186 | //!     Ok(solution) => println!("{:?}", solution),
187 | //!     Err(PubGrubError::NoSolution(mut derivation_tree)) => {
188 | //!         derivation_tree.collapse_no_versions();
189 | //!         eprintln!("{}", DefaultStringReporter::report(&derivation_tree));
190 | //!     }
191 | //!     Err(err) => panic!("{:?}", err),
192 | //! };
193 | //! ```
194 | //! Notice that we also used
195 | //! [collapse_no_versions()](DerivationTree::collapse_no_versions) above.
196 | //! This method simplifies the derivation tree to get rid of the
197 | //! [NoVersions](External::NoVersions)
198 | //! external incompatibilities in the derivation tree.
199 | //!
So instead of seeing things like this in the report:
200 | //! ```txt
201 | //! Because there is no version of foo in 1.0.1 <= v < 2.0.0
202 | //! and foo 1.0.0 depends on bar 2.0.0 <= v < 3.0.0,
203 | //! foo 1.0.0 <= v < 2.0.0 depends on bar 2.0.0 <= v < 3.0.0.
204 | //! ```
205 | //! you may have directly:
206 | //! ```txt
207 | //! foo 1.0.0 <= v < 2.0.0 depends on bar 2.0.0 <= v < 3.0.0.
208 | //! ```
209 | //! Beware though that if you are using some kind of offline mode
210 | //! with a cache, you may want to know that some versions
211 | //! do not exist in your cache.
212 | 
213 | #![warn(missing_docs)]
214 | 
215 | mod error;
216 | mod package;
217 | mod provider;
218 | mod report;
219 | mod solver;
220 | mod term;
221 | mod type_aliases;
222 | mod version;
223 | mod version_set;
224 | 
225 | pub use error::{NoSolutionError, PubGrubError};
226 | pub use package::Package;
227 | pub use provider::OfflineDependencyProvider;
228 | pub use report::{
229 |     DefaultStringReportFormatter, DefaultStringReporter, DerivationTree, Derived, External,
230 |     ReportFormatter, Reporter,
231 | };
232 | pub use solver::{resolve, Dependencies, DependencyProvider, PackageResolutionStatistics};
233 | pub use term::Term;
234 | pub use type_aliases::{DependencyConstraints, Map, SelectedDependencies, Set};
235 | pub use version::{SemanticVersion, VersionParseError};
236 | pub use version_ranges::Ranges;
237 | #[deprecated(note = "Use `Ranges` instead")]
238 | pub use version_ranges::Ranges as Range;
239 | pub use version_set::VersionSet;
240 | 
241 | mod internal;
--------------------------------------------------------------------------------
/src/package.rs:
--------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 | 
3 | //! Trait for identifying packages.
4 | //! Automatically implemented for types implementing
5 | //! [Clone] + [Eq] + [Hash] + [Debug] + [Display].
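The auto-implemented `Package` trait above relies on Rust's blanket-impl pattern: a marker trait bundling several bounds, implemented once for every type that satisfies them. A standalone sketch of that pattern (using only the standard library, with a hypothetical `describe` helper for illustration):

```rust
use std::fmt::{Debug, Display};
use std::hash::Hash;

// A marker trait bundling the bounds a package identifier must satisfy,
// mirroring pubgrub's `Package` trait.
trait Package: Clone + Eq + Hash + Debug + Display {}

// Blanket impl: any type meeting the bounds is automatically a `Package`.
impl<T: Clone + Eq + Hash + Debug + Display> Package for T {}

// Generic code can now name the single bound instead of repeating five.
fn describe<P: Package>(p: &P) -> String {
    format!("{p}")
}

fn main() {
    // `String` and `u32` both qualify without any explicit impl.
    assert_eq!(describe(&String::from("menu")), "menu");
    assert_eq!(describe(&42u32), "42");
}
```

The trade-off of a blanket impl is that downstream crates cannot opt out: every type with the five bounds is a `Package`, which is exactly the convenience this crate wants.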
6 | 
7 | use std::fmt::{Debug, Display};
8 | use std::hash::Hash;
9 | 
10 | /// Trait for identifying packages.
11 | /// Automatically implemented for types already implementing
12 | /// [Clone] + [Eq] + [Hash] + [Debug] + [Display].
13 | pub trait Package: Clone + Eq + Hash + Debug + Display {}
14 | 
15 | /// Automatically implement the Package trait for any type
16 | /// that already implements [Clone] + [Eq] + [Hash] + [Debug] + [Display].
17 | impl<T: Clone + Eq + Hash + Debug + Display> Package for T {}
18 | 
--------------------------------------------------------------------------------
/src/provider.rs:
--------------------------------------------------------------------------------
1 | use std::cmp::Reverse;
2 | use std::collections::BTreeMap;
3 | use std::convert::Infallible;
4 | 
5 | use crate::{
6 |     Dependencies, DependencyConstraints, DependencyProvider, Map, Package,
7 |     PackageResolutionStatistics, VersionSet,
8 | };
9 | 
10 | /// A basic implementation of [DependencyProvider].
11 | #[derive(Debug, Clone, Default)]
12 | #[cfg_attr(feature = "serde", derive(serde::Serialize, serde::Deserialize))]
13 | #[cfg_attr(
14 |     feature = "serde",
15 |     serde(bound(
16 |         serialize = "VS::V: serde::Serialize, VS: serde::Serialize, P: serde::Serialize",
17 |         deserialize = "VS::V: serde::Deserialize<'de>, VS: serde::Deserialize<'de>, P: serde::Deserialize<'de>"
18 |     ))
19 | )]
20 | #[cfg_attr(feature = "serde", serde(transparent))]
21 | pub struct OfflineDependencyProvider<P: Package, VS: VersionSet> {
22 |     dependencies: Map<P, BTreeMap<VS::V, DependencyConstraints<P, VS>>>,
23 | }
24 | 
25 | impl<P: Package, VS: VersionSet> OfflineDependencyProvider<P, VS> {
26 |     /// Creates an empty OfflineDependencyProvider with no dependencies.
27 |     pub fn new() -> Self {
28 |         Self {
29 |             dependencies: Map::default(),
30 |         }
31 |     }
32 | 
33 |     /// Registers the dependencies of a package and version pair.
34 |     /// Dependencies must be added with a single call to
35 |     /// [add_dependencies](OfflineDependencyProvider::add_dependencies).
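The registration strategy described here, where a repeated call replaces earlier dependencies wholesale, boils down to a nested-map write. A stdlib-only sketch of that storage shape (the `Store`/`Deps` aliases are simplified stand-ins, not the crate's actual types):

```rust
use std::collections::{BTreeMap, HashMap};

// Simplified stand-in for the provider's storage: package -> version -> deps.
type Deps = Vec<(String, String)>; // (package, requirement) pairs
type Store = HashMap<String, BTreeMap<u32, Deps>>;

// Mirrors the `*entry().or_default() = deps` last-write-wins registration.
fn add_dependencies(store: &mut Store, package: &str, version: u32, deps: Deps) {
    *store
        .entry(package.to_string())
        .or_default()
        .entry(version)
        .or_default() = deps;
}

fn main() {
    let mut store = Store::new();
    add_dependencies(&mut store, "menu", 1, vec![("dropdown".into(), "*".into())]);
    add_dependencies(&mut store, "menu", 1, vec![("icons".into(), "*".into())]);
    // The second call replaced the first wholesale: only "icons" remains.
    assert_eq!(store["menu"][&1].len(), 1);
    assert_eq!(store["menu"][&1][0].0, "icons");
}
```

Replacing rather than merging is what upholds the invariant that `get_dependencies(p, v)` always returns the complete dependency set for that pair.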
36 |     /// All subsequent calls to
37 |     /// [add_dependencies](OfflineDependencyProvider::add_dependencies) for a given
38 |     /// package version pair will replace the dependencies with the new ones.
39 |     ///
40 |     /// The API does not allow adding dependencies one at a time, to uphold the assumption that
41 |     /// [OfflineDependencyProvider.get_dependencies(p, v)](OfflineDependencyProvider::get_dependencies)
42 |     /// provides all dependencies of a given package (p) and version (v) pair.
43 |     pub fn add_dependencies<I: IntoIterator<Item = (P, VS)>>(
44 |         &mut self,
45 |         package: P,
46 |         version: impl Into<VS::V>,
47 |         dependencies: I,
48 |     ) {
49 |         let package_deps = dependencies.into_iter().collect();
50 |         let v = version.into();
51 |         *self
52 |             .dependencies
53 |             .entry(package)
54 |             .or_default()
55 |             .entry(v)
56 |             .or_default() = package_deps;
57 |     }
58 | 
59 |     /// Lists packages that have been saved.
60 |     pub fn packages(&self) -> impl Iterator<Item = &P> {
61 |         self.dependencies.keys()
62 |     }
63 | 
64 |     /// Lists versions of saved packages in sorted order.
65 |     /// Returns [None] if no information is available regarding that package.
66 |     pub fn versions(&self, package: &P) -> Option<impl Iterator<Item = &VS::V>> {
67 |         self.dependencies.get(package).map(|k| k.keys())
68 |     }
69 | 
70 |     /// Lists dependencies of a given package and version.
71 |     /// Returns [None] if no information is available regarding that package and version pair.
72 |     fn dependencies(&self, package: &P, version: &VS::V) -> Option<DependencyConstraints<P, VS>> {
73 |         self.dependencies.get(package)?.get(version).cloned()
74 |     }
75 | }
76 | 
77 | /// An implementation of [DependencyProvider] that
78 | /// contains all dependency information available in memory.
79 | /// Currently packages are picked with the fewest versions contained in the constraints first.
80 | /// But that may change in future versions if better heuristics are found.
81 | /// Versions are picked with the newest versions first.
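The "newest versions first" choice works because the versions sit in a `BTreeMap`, whose keys iterate in sorted order: walking them in reverse and taking the first one the range accepts yields the newest matching version. A stdlib-only sketch (the range is reduced to a plain predicate here for illustration):

```rust
use std::collections::BTreeMap;

// Sketch of "newest version first": walk the sorted keys in reverse and
// take the first one the range accepts.
fn choose_newest<V: Clone + Ord, D>(
    versions: &BTreeMap<V, D>,
    contains: impl Fn(&V) -> bool,
) -> Option<V> {
    versions.keys().rev().find(|v| contains(*v)).cloned()
}

fn main() {
    let mut versions: BTreeMap<u32, ()> = BTreeMap::new();
    for v in [1, 2, 3, 5] {
        versions.insert(v, ());
    }
    // Under the range "v < 4", the newest matching version is 3, not 5.
    assert_eq!(choose_newest(&versions, |v| *v < 4), Some(3));
    // No version matches "v > 9".
    assert_eq!(choose_newest(&versions, |v| *v > 9), None);
}
```

This is the same shape as the provider's `choose_version` below, which uses `keys().rev().find(|v| range.contains(v))`.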
82 | impl<P: Package, VS: VersionSet> DependencyProvider for OfflineDependencyProvider<P, VS> {
83 |     type P = P;
84 |     type V = VS::V;
85 |     type VS = VS;
86 |     type M = String;
87 | 
88 |     type Err = Infallible;
89 | 
90 |     #[inline]
91 |     fn choose_version(&self, package: &P, range: &VS) -> Result<Option<VS::V>, Infallible> {
92 |         Ok(self
93 |             .dependencies
94 |             .get(package)
95 |             .and_then(|versions| versions.keys().rev().find(|v| range.contains(v)).cloned()))
96 |     }
97 | 
98 |     type Priority = (u32, Reverse<usize>);
99 | 
100 |     #[inline]
101 |     fn prioritize(
102 |         &self,
103 |         package: &Self::P,
104 |         range: &Self::VS,
105 |         package_statistics: &PackageResolutionStatistics,
106 |     ) -> Self::Priority {
107 |         let version_count = self
108 |             .dependencies
109 |             .get(package)
110 |             .map(|versions| versions.keys().filter(|v| range.contains(v)).count())
111 |             .unwrap_or(0);
112 |         if version_count == 0 {
113 |             return (u32::MAX, Reverse(0));
114 |         }
115 |         (package_statistics.conflict_count(), Reverse(version_count))
116 |     }
117 | 
118 |     #[inline]
119 |     fn get_dependencies(
120 |         &self,
121 |         package: &P,
122 |         version: &VS::V,
123 |     ) -> Result<Dependencies<P, VS, Self::M>, Infallible> {
124 |         Ok(match self.dependencies(package, version) {
125 |             None => {
126 |                 Dependencies::Unavailable("its dependencies could not be determined".to_string())
127 |             }
128 |             Some(dependencies) => Dependencies::Available(dependencies),
129 |         })
130 |     }
131 | }
132 | 
--------------------------------------------------------------------------------
/src/solver.rs:
--------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 | 
3 | use std::collections::BTreeSet as Set;
4 | use std::error::Error;
5 | use std::fmt::{Debug, Display};
6 | 
7 | use log::{debug, info};
8 | 
9 | use crate::internal::{Id, Incompatibility, State};
10 | use crate::{
11 |     DependencyConstraints, Map, Package, PubGrubError, SelectedDependencies, Term, VersionSet,
12 | };
13 | 
14 | /// Statistics on how often a package conflicted with other packages.
15 | #[derive(Debug, Default, Clone)]
16 | pub struct PackageResolutionStatistics {
17 |     // We track these fields separately but currently don't expose them separately to keep the
18 |     // stable API slim. Please be encouraged to try different combinations of them and report if
19 |     // you find better metrics that should be exposed.
20 |     //
21 |     // Say we have packages A and B, A having higher priority than B. We first decide A and then B,
22 |     // and then find B to conflict with A. We call B the "affected" package and A the "culprit",
23 |     // since the decision for B is being rejected due to the decision we made for A earlier.
24 |     //
25 |     // If B is rejected due to its dependencies conflicting with A, we increase
26 |     // `dependencies_affected` for B and `dependencies_culprit` for A. If B is rejected in unit
27 |     // propagation through an incompatibility with A, we increase `unit_propagation_affected` for B
28 |     // and `unit_propagation_culprit` for A.
29 |     unit_propagation_affected: u32,
30 |     unit_propagation_culprit: u32,
31 |     dependencies_affected: u32,
32 |     dependencies_culprit: u32,
33 | }
34 | 
35 | impl PackageResolutionStatistics {
36 |     /// The number of conflicts this package was involved in.
37 |     ///
38 |     /// Processing packages with a high conflict count earlier usually speeds up resolution.
39 |     ///
40 |     /// Whenever a package is part of the root cause incompatibility of a conflict, we increase its
41 |     /// count by one. Since the structure of the incompatibilities may change, this count too may
42 |     /// change in the future.
43 |     pub fn conflict_count(&self) -> u32 {
44 |         self.unit_propagation_affected
45 |             + self.unit_propagation_culprit
46 |             + self.dependencies_affected
47 |             + self.dependencies_culprit
48 |     }
49 | }
50 | 
51 | /// Finds a set of packages satisfying dependency bounds for a given package + version pair.
52 | ///
53 | /// It consists of efficiently finding a set of packages and versions
54 | /// that satisfy all the constraints of a given project's dependencies.
55 | /// In addition, when that is not possible, 56 | /// PubGrub tries to provide a very human-readable and clear 57 | /// explanation as to why that failed. 58 | /// Below is an example of explanation present in 59 | /// the introductory blog post about PubGrub 60 | /// (Although this crate is not yet capable of building formatting quite this nice.) 61 | /// 62 | /// ```txt 63 | /// Because dropdown >=2.0.0 depends on icons >=2.0.0 and 64 | /// root depends on icons <2.0.0, dropdown >=2.0.0 is forbidden. 65 | /// 66 | /// And because menu >=1.1.0 depends on dropdown >=2.0.0, 67 | /// menu >=1.1.0 is forbidden. 68 | /// 69 | /// And because menu <1.1.0 depends on dropdown >=1.0.0 <2.0.0 70 | /// which depends on intl <4.0.0, every version of menu 71 | /// requires intl <4.0.0. 72 | /// 73 | /// So, because root depends on both menu >=1.0.0 and intl >=5.0.0, 74 | /// version solving failed. 75 | /// ``` 76 | /// 77 | /// Is generic over an implementation of [DependencyProvider] which represents where the dependency constraints come from. 78 | /// The associated types on the DependencyProvider allow flexibility for the representation of 79 | /// package names, version requirements, version numbers, and other things. 80 | /// See its documentation for more details. 81 | /// For simple cases [OfflineDependencyProvider](crate::OfflineDependencyProvider) may be sufficient. 
82 | ///
83 | /// ## API
84 | ///
85 | /// ```
86 | /// # use std::convert::Infallible;
87 | /// # use pubgrub::{resolve, OfflineDependencyProvider, PubGrubError, Ranges};
88 | /// #
89 | /// # type NumVS = Ranges<u32>;
90 | /// #
91 | /// # fn try_main() -> Result<(), PubGrubError<OfflineDependencyProvider<&'static str, NumVS>>> {
92 | /// #     let dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new();
93 | /// #     let package = "root";
94 | /// #     let version = 1u32;
95 | /// let solution = resolve(&dependency_provider, package, version)?;
96 | /// #     Ok(())
97 | /// # }
98 | /// # fn main() {
99 | /// #     assert!(matches!(try_main(), Err(PubGrubError::NoSolution(_))));
100 | /// # }
101 | /// ```
102 | ///
103 | /// The call to [resolve] for a given package at a given version
104 | /// will compute the set of packages and versions needed
105 | /// to satisfy the dependencies of that package and version pair.
106 | /// If there is no solution, the reason will be provided as clearly as possible.
107 | #[cold]
108 | pub fn resolve<DP: DependencyProvider>(
109 |     dependency_provider: &DP,
110 |     package: DP::P,
111 |     version: impl Into<DP::V>,
112 | ) -> Result<SelectedDependencies<DP>, PubGrubError<DP>> {
113 |     let mut state: State<DP> = State::init(package.clone(), version.into());
114 |     let mut conflict_tracker: Map<Id<DP::P>, PackageResolutionStatistics> = Map::default();
115 |     let mut added_dependencies: Map<Id<DP::P>, Set<DP::V>> = Map::default();
116 |     let mut next = state.root_package;
117 |     loop {
118 |         dependency_provider
119 |             .should_cancel()
120 |             .map_err(|err| PubGrubError::ErrorInShouldCancel(err))?;
121 | 
122 |         info!(
123 |             "unit_propagation: {:?} = '{}'",
124 |             &next, state.package_store[next]
125 |         );
126 |         let satisfier_causes = state.unit_propagation(next)?;
127 |         for (affected, incompat) in satisfier_causes {
128 |             conflict_tracker
129 |                 .entry(affected)
130 |                 .or_default()
131 |                 .unit_propagation_affected += 1;
132 |             for (conflict_package, _) in state.incompatibility_store[incompat].iter() {
133 |                 if conflict_package == affected {
134 |                     continue;
135 |                 }
136 |                 conflict_tracker
137
| .entry(conflict_package) 138 | .or_default() 139 | .unit_propagation_culprit += 1; 140 | } 141 | } 142 | 143 | debug!( 144 | "Partial solution after unit propagation: {}", 145 | state.partial_solution.display(&state.package_store) 146 | ); 147 | 148 | let Some((highest_priority_pkg, term_intersection)) = 149 | state.partial_solution.pick_highest_priority_pkg(|p, r| { 150 | dependency_provider.prioritize( 151 | &state.package_store[p], 152 | r, 153 | conflict_tracker.entry(p).or_default(), 154 | ) 155 | }) 156 | else { 157 | return Ok(state 158 | .partial_solution 159 | .extract_solution() 160 | .map(|(p, v)| (state.package_store[p].clone(), v)) 161 | .collect()); 162 | }; 163 | next = highest_priority_pkg; 164 | 165 | let decision = dependency_provider 166 | .choose_version(&state.package_store[next], term_intersection) 167 | .map_err(|err| PubGrubError::ErrorChoosingVersion { 168 | package: state.package_store[next].clone(), 169 | source: err, 170 | })?; 171 | 172 | info!( 173 | "DP chose: {:?} = '{}' @ {:?}", 174 | &next, state.package_store[next], decision 175 | ); 176 | 177 | // Pick the next compatible version. 178 | let v = match decision { 179 | None => { 180 | let inc = 181 | Incompatibility::no_versions(next, Term::Positive(term_intersection.clone())); 182 | state.add_incompatibility(inc); 183 | continue; 184 | } 185 | Some(x) => x, 186 | }; 187 | 188 | if !term_intersection.contains(&v) { 189 | panic!( 190 | "`choose_version` picked an incompatible version for package {}, {} is not in {}", 191 | state.package_store[next], v, term_intersection 192 | ); 193 | } 194 | 195 | let is_new_dependency = added_dependencies 196 | .entry(next) 197 | .or_default() 198 | .insert(v.clone()); 199 | 200 | if is_new_dependency { 201 | // Retrieve that package dependencies. 
202 |             let p = next;
203 |             let dependencies = dependency_provider
204 |                 .get_dependencies(&state.package_store[p], &v)
205 |                 .map_err(|err| PubGrubError::ErrorRetrievingDependencies {
206 |                     package: state.package_store[p].clone(),
207 |                     version: v.clone(),
208 |                     source: err,
209 |                 })?;
210 | 
211 |             let dependencies = match dependencies {
212 |                 Dependencies::Unavailable(reason) => {
213 |                     state.add_incompatibility(Incompatibility::custom_version(
214 |                         p,
215 |                         v.clone(),
216 |                         reason,
217 |                     ));
218 |                     continue;
219 |                 }
220 |                 Dependencies::Available(x) => x,
221 |             };
222 | 
223 |             // Add that package and version if the dependencies are not problematic.
224 |             if let Some(conflict) =
225 |                 state.add_package_version_dependencies(p, v.clone(), dependencies)
226 |             {
227 |                 conflict_tracker.entry(p).or_default().dependencies_affected += 1;
228 |                 for (incompat_package, _) in state.incompatibility_store[conflict].iter() {
229 |                     if incompat_package == p {
230 |                         continue;
231 |                     }
232 |                     conflict_tracker
233 |                         .entry(incompat_package)
234 |                         .or_default()
235 |                         .dependencies_culprit += 1;
236 |                 }
237 |             }
238 |         } else {
239 |             // `dep_incompats` are already in `incompatibilities`, so we know there are no
240 |             // satisfied terms and we can add the decision directly.
241 |             info!(
242 |                 "add_decision (not first time): {:?} = '{}' @ {}",
243 |                 &next, state.package_store[next], v
244 |             );
245 |             state.partial_solution.add_decision(next, v);
246 |         }
247 |     }
248 | }
249 | 
250 | /// An enum used by [DependencyProvider] that holds information about package dependencies.
251 | /// For each [Package] there is a set of versions allowed as a dependency.
252 | #[derive(Clone)]
253 | pub enum Dependencies<P: Package, VS: VersionSet, M: Eq + Clone + Debug + Display> {
254 |     /// Package dependencies are unavailable with the reason why they are missing.
255 |     Unavailable(M),
256 |     /// Container for all available package versions.
257 |     Available(DependencyConstraints<P, VS>),
258 | }
259 | 
260 | /// Trait that allows the algorithm to retrieve available packages and their dependencies.
261 | /// An implementor needs to be supplied to the [resolve] function.
262 | pub trait DependencyProvider {
263 |     /// How this provider stores the name of the packages.
264 |     type P: Package;
265 | 
266 |     /// How this provider stores the versions of the packages.
267 |     ///
268 |     /// A common choice is [`SemanticVersion`][crate::version::SemanticVersion].
269 |     type V: Debug + Display + Clone + Ord;
270 | 
271 |     /// How this provider stores the version requirements for the packages.
272 |     /// The requirements must be able to process the same kind of version as this dependency provider.
273 |     ///
274 |     /// A common choice is [`Ranges`][version_ranges::Ranges].
275 |     type VS: VersionSet<V = Self::V>;
276 | 
277 |     /// The type returned from `prioritize`. The resolver does not care what type this is
278 |     /// as long as it can pick a largest one and clone it.
279 |     ///
280 |     /// [`Reverse`](std::cmp::Reverse) can be useful if you want to pick the package with
281 |     /// the fewest versions that match the outstanding constraint.
282 |     type Priority: Ord + Clone;
283 | 
284 |     /// Type for custom incompatibilities.
285 |     ///
286 |     /// There are reasons in user code outside pubgrub that can cause packages or versions
287 |     /// to be unavailable. Examples:
288 |     /// * The version would require building the package, but builds are disabled.
289 |     /// * The package is not available in the cache, but internet access has been disabled.
290 |     /// * The package uses a legacy format not supported anymore.
291 |     ///
292 |     /// The intended use is to track them in an enum and assign them to this type. You can also
293 |     /// use [`String`] as a placeholder.
294 |     type M: Eq + Clone + Debug + Display;
295 | 
296 |     /// The kind of error returned from these methods.
297 |     ///
298 |     /// Returning this signals that resolution should fail with this error.
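The `type M` doc above suggests tracking unavailability reasons in an enum. A minimal sketch of such a type, with hypothetical variants modeling the listed examples, satisfying the `Eq + Clone + Debug + Display` bounds that `M` requires:

```rust
use std::fmt;

// Hypothetical custom-incompatibility type suitable for `DependencyProvider::M`.
#[derive(Debug, Clone, PartialEq, Eq)]
enum UnavailableReason {
    BuildsDisabled,
    NotCached,
    LegacyFormat,
}

// `Display` provides the human-readable text that ends up in error reports.
impl fmt::Display for UnavailableReason {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            Self::BuildsDisabled => {
                write!(f, "the version would require a build, but builds are disabled")
            }
            Self::NotCached => {
                write!(f, "the package is not in the cache and internet access is disabled")
            }
            Self::LegacyFormat => write!(f, "the package uses an unsupported legacy format"),
        }
    }
}

fn main() {
    let reason = UnavailableReason::NotCached;
    assert!(reason.to_string().contains("cache"));
}
```

An enum like this keeps the reasons machine-matchable, whereas the `String` placeholder (as used by `OfflineDependencyProvider`) only supports display.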
299 |     type Err: Error + 'static;
300 | 
301 |     /// Determine the order in which versions are chosen for packages.
302 |     ///
303 |     /// Decisions are always made for the highest priority package first. The order of decisions
304 |     /// determines which solution is chosen and can drastically change the performance of the
305 |     /// solver. If there is a conflict between two package versions, decisions will be backtracked
306 |     /// until the lower priority package version is discarded, preserving the higher priority
307 |     /// package. Usually, you want to decide more certain packages (e.g. those with a single version
308 |     /// constraint) and packages with more conflicts first.
309 |     ///
310 |     /// The `package_conflicts_counts` argument provides access to some other heuristics that
311 |     /// production users have found useful, although the exact meaning/efficacy of those
312 |     /// arguments may change.
313 |     ///
314 |     /// The function is called once for each new package, and the result is cached until we detect
315 |     /// a (potential) change to `range`, assuming that the priority only
316 |     /// depends on the arguments to this function.
317 |     ///
318 |     /// If two packages have the same priority, PubGrub will bias toward a breadth first search.
319 |     fn prioritize(
320 |         &self,
321 |         package: &Self::P,
322 |         range: &Self::VS,
323 |         // TODO(konsti): Are we always refreshing the priorities when `PackageResolutionStatistics`
324 |         // changed for a package?
325 |         package_conflicts_counts: &PackageResolutionStatistics,
326 |     ) -> Self::Priority;
327 | 
328 |     /// Once the resolver has found the highest `Priority` package from all potential valid
329 |     /// packages, it needs to know what version of that package to use. The most common pattern
330 |     /// is to select the largest version that the range contains.
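The resolver always picks the *largest* `Priority`, and `OfflineDependencyProvider` exploits this with a `(conflict_count, Reverse(version_count))` tuple: tuples compare lexicographically, so more conflicts win first and `Reverse` makes fewer candidate versions win on ties. A stdlib-only demonstration of that ordering:

```rust
use std::cmp::Reverse;

fn main() {
    // Priorities shaped like the offline provider's:
    // (conflict_count, Reverse(version_count)).
    let a = (3u32, Reverse(10usize)); // 3 conflicts, 10 candidate versions
    let b = (3u32, Reverse(2usize));  // 3 conflicts, 2 candidate versions
    let c = (1u32, Reverse(1usize));  // 1 conflict, 1 candidate version

    // Same conflict count: fewer candidate versions => higher priority.
    assert!(b > a);
    // More conflicts dominate, regardless of version counts.
    assert!(a > c);
    // The resolver's "pick the largest priority" selects b here.
    assert_eq!([a, b, c].into_iter().max(), Some(b));
}
```

Deriving the priority from plain `Ord` on a tuple keeps the `Priority: Ord + Clone` bound trivial to satisfy while encoding a two-level heuristic.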
331 |     fn choose_version(
332 |         &self,
333 |         package: &Self::P,
334 |         range: &Self::VS,
335 |     ) -> Result<Option<Self::V>, Self::Err>;
336 | 
337 |     /// Retrieves the package dependencies.
338 |     /// Return [Dependencies::Unavailable] if its dependencies are unavailable.
339 |     #[allow(clippy::type_complexity)]
340 |     fn get_dependencies(
341 |         &self,
342 |         package: &Self::P,
343 |         version: &Self::V,
344 |     ) -> Result<Dependencies<Self::P, Self::VS, Self::M>, Self::Err>;
345 | 
346 |     /// This is called fairly regularly during the resolution;
347 |     /// if it returns an Err then resolution will be terminated.
348 |     /// This is helpful if you want to add some form of early termination like a timeout,
349 |     /// or you want to add some form of user feedback if things are taking a while.
350 |     /// If not provided, the resolver will run as long as needed.
351 |     fn should_cancel(&self) -> Result<(), Self::Err> {
352 |         Ok(())
353 |     }
354 | }
--------------------------------------------------------------------------------
/src/term.rs:
--------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 | 
3 | //! A term is the fundamental unit of operation of the PubGrub algorithm.
4 | //! It is a positive or negative expression regarding a set of versions.
5 | 
6 | use std::fmt::{self, Display};
7 | 
8 | use crate::VersionSet;
9 | 
10 | /// A positive or negative expression regarding a set of versions.
11 | ///
12 | /// `Positive(r)` and `Negative(r.complement())` are not equivalent:
13 | /// * the term `Positive(r)` is satisfied if the package is selected AND the selected version is in `r`.
14 | /// * the term `Negative(r.complement())` is satisfied if the package is not selected OR the selected version is in `r`.
15 | ///
16 | /// A `Positive` term in the partial solution requires a version to be selected, but a `Negative` term
17 | /// allows for a solution that does not have that package selected.
18 | /// Specifically, `Positive(VS::empty())` means that there was a conflict (we need to select a version for the package
19 | /// but can't pick any), while `Negative(VS::full())` would mean it is fine as long as we don't select the package.
20 | #[derive(Debug, Clone, Eq, PartialEq)]
21 | pub enum Term<VS: VersionSet> {
22 | /// For example, `1.0.0 <= v < 2.0.0` is a positive expression
23 | /// that is evaluated true if a version is selected
24 | /// and lies between version 1.0.0 and version 2.0.0.
25 | Positive(VS),
26 | /// The term `not (v < 3.0.0)` is a negative expression
27 | /// that is evaluated true if a version >= 3.0.0 is selected
28 | /// or if no version is selected at all.
29 | Negative(VS),
30 | }
31 |
32 | /// Base methods.
33 | impl<VS: VersionSet> Term<VS> {
34 | /// A term that is always true.
35 | pub(crate) fn any() -> Self {
36 | Self::Negative(VS::empty())
37 | }
38 |
39 | /// A term that is never true.
40 | pub(crate) fn empty() -> Self {
41 | Self::Positive(VS::empty())
42 | }
43 |
44 | /// A positive term containing exactly that version.
45 | pub(crate) fn exact(version: VS::V) -> Self {
46 | Self::Positive(VS::singleton(version))
47 | }
48 |
49 | /// Simply check if a term is positive.
50 | pub(crate) fn is_positive(&self) -> bool {
51 | match self {
52 | Self::Positive(_) => true,
53 | Self::Negative(_) => false,
54 | }
55 | }
56 |
57 | /// Negate a term.
58 | /// Evaluation of a negated term always returns
59 | /// the opposite of the evaluation of the original one.
60 | pub(crate) fn negate(&self) -> Self {
61 | match self {
62 | Self::Positive(set) => Self::Negative(set.clone()),
63 | Self::Negative(set) => Self::Positive(set.clone()),
64 | }
65 | }
66 |
67 | /// Evaluate a term regarding a given choice of version.
68 | pub(crate) fn contains(&self, v: &VS::V) -> bool {
69 | match self {
70 | Self::Positive(set) => set.contains(v),
71 | Self::Negative(set) => !set.contains(v),
72 | }
73 | }
74 |
75 | /// Unwrap the set contained in a positive term.
76 | ///
77 | /// Panics if used on a negative term.
78 | pub(crate) fn unwrap_positive(&self) -> &VS {
79 | match self {
80 | Self::Positive(set) => set,
81 | Self::Negative(set) => panic!("Negative term cannot unwrap positive set: {set:?}"),
82 | }
83 | }
84 |
85 | /// Unwrap the set contained in a negative term.
86 | ///
87 | /// Panics if used on a positive term.
88 | pub(crate) fn unwrap_negative(&self) -> &VS {
89 | match self {
90 | Self::Negative(set) => set,
91 | Self::Positive(set) => panic!("Positive term cannot unwrap negative set: {set:?}"),
92 | }
93 | }
94 | }
95 |
96 | /// Set operations with terms.
97 | impl<VS: VersionSet> Term<VS> {
98 | /// Compute the intersection of two terms.
99 | ///
100 | /// The intersection is negative (unselected package is allowed)
101 | /// if all terms are negative.
102 | pub(crate) fn intersection(&self, other: &Self) -> Self {
103 | match (self, other) {
104 | (Self::Positive(r1), Self::Positive(r2)) => Self::Positive(r1.intersection(r2)),
105 | (Self::Positive(p), Self::Negative(n)) | (Self::Negative(n), Self::Positive(p)) => {
106 | Self::Positive(n.complement().intersection(p))
107 | }
108 | (Self::Negative(r1), Self::Negative(r2)) => Self::Negative(r1.union(r2)),
109 | }
110 | }
111 |
112 | /// Check whether two terms are mutually exclusive.
113 | ///
114 | /// An optimization over the naive implementation of checking whether the intersection of two sets is empty.
115 | pub(crate) fn is_disjoint(&self, other: &Self) -> bool {
116 | match (self, other) {
117 | (Self::Positive(r1), Self::Positive(r2)) => r1.is_disjoint(r2),
118 | // Unselected package is allowed in both terms, so they are never disjoint.
119 | (Self::Negative(_), Self::Negative(_)) => false,
120 | // If the positive term is a subset of the negative term, it lies fully in the region that the negative
121 | // term excludes.
122 | (Self::Positive(p), Self::Negative(n)) | (Self::Negative(n), Self::Positive(p)) => { 123 | p.subset_of(n) 124 | } 125 | } 126 | } 127 | 128 | /// Compute the union of two terms. 129 | /// If at least one term is negative, the union is also negative (unselected package is allowed). 130 | pub(crate) fn union(&self, other: &Self) -> Self { 131 | match (self, other) { 132 | (Self::Positive(r1), Self::Positive(r2)) => Self::Positive(r1.union(r2)), 133 | (Self::Positive(p), Self::Negative(n)) | (Self::Negative(n), Self::Positive(p)) => { 134 | Self::Negative(p.complement().intersection(n)) 135 | } 136 | (Self::Negative(r1), Self::Negative(r2)) => Self::Negative(r1.intersection(r2)), 137 | } 138 | } 139 | 140 | /// Indicate if this term is a subset of another term. 141 | /// Just like for sets, we say that t1 is a subset of t2 142 | /// if and only if t1 ∩ t2 = t1. 143 | pub(crate) fn subset_of(&self, other: &Self) -> bool { 144 | match (self, other) { 145 | (Self::Positive(r1), Self::Positive(r2)) => r1.subset_of(r2), 146 | (Self::Positive(r1), Self::Negative(r2)) => r1.is_disjoint(r2), 147 | // Only a negative term allows the unselected package, 148 | // so it can never be a subset of a positive term. 149 | (Self::Negative(_), Self::Positive(_)) => false, 150 | (Self::Negative(r1), Self::Negative(r2)) => r2.subset_of(r1), 151 | } 152 | } 153 | } 154 | 155 | /// Describe a relation between a set of terms S and another term t. 156 | /// 157 | /// As a shorthand, we say that a term v 158 | /// satisfies or contradicts a term t if {v} satisfies or contradicts it. 159 | pub(crate) enum Relation { 160 | /// We say that a set of terms S "satisfies" a term t 161 | /// if t must be true whenever every term in S is true. 162 | Satisfied, 163 | /// Conversely, S "contradicts" t if t must be false 164 | /// whenever every term in S is true. 165 | Contradicted, 166 | /// If neither of these is true we say that S is "inconclusive" for t. 
167 | Inconclusive,
168 | }
169 |
170 | /// Relation between terms.
171 | impl<VS: VersionSet> Term<VS> {
172 | /// Check if a set of terms satisfies this term.
173 | ///
174 | /// We say that a set of terms S "satisfies" a term t
175 | /// if t must be true whenever every term in S is true.
176 | ///
177 | /// It turns out that this can also be expressed with set operations:
178 | /// S satisfies t if and only if ⋂ S ⊆ t
179 | #[cfg(test)]
180 | fn satisfied_by(&self, terms_intersection: &Self) -> bool {
181 | terms_intersection.subset_of(self)
182 | }
183 |
184 | /// Check if a set of terms contradicts this term.
185 | ///
186 | /// We say that a set of terms S "contradicts" a term t
187 | /// if t must be false whenever every term in S is true.
188 | ///
189 | /// It turns out that this can also be expressed with set operations:
190 | /// S contradicts t if and only if ⋂ S is disjoint with t
191 | /// S contradicts t if and only if (⋂ S) ⋂ t = ∅
192 | #[cfg(test)]
193 | fn contradicted_by(&self, terms_intersection: &Self) -> bool {
194 | terms_intersection.intersection(self) == Self::empty()
195 | }
196 |
197 | /// Check if a set of terms satisfies or contradicts a given term.
198 | /// Otherwise the relation is inconclusive.
199 | pub(crate) fn relation_with(&self, other_terms_intersection: &Self) -> Relation {
200 | if other_terms_intersection.subset_of(self) {
201 | Relation::Satisfied
202 | } else if self.is_disjoint(other_terms_intersection) {
203 | Relation::Contradicted
204 | } else {
205 | Relation::Inconclusive
206 | }
207 | }
208 | }
209 |
210 | impl<VS: VersionSet> AsRef<Self> for Term<VS> {
211 | fn as_ref(&self) -> &Self {
212 | self
213 | }
214 | }
215 |
216 | // REPORT ######################################################################
217 |
218 | impl<VS: VersionSet> Display for Term<VS> {
219 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
220 | match self {
221 | Self::Positive(set) => write!(f, "{}", set),
222 | Self::Negative(set) => write!(f, "Not ( {} )", set),
223 | }
224 | }
225 | }
226 |
227 | // TESTS #######################################################################
228 |
229 | #[cfg(test)]
230 | pub mod tests {
231 | use super::*;
232 | use proptest::prelude::*;
233 | use version_ranges::Ranges;
234 |
235 | pub fn strategy() -> impl Strategy<Value = Term<Ranges<u32>>> {
236 | prop_oneof![
237 | version_ranges::proptest_strategy().prop_map(Term::Negative),
238 | version_ranges::proptest_strategy().prop_map(Term::Positive),
239 | ]
240 | }
241 | proptest!
{
242 |
243 | // Testing relation --------------------------------
244 |
245 | #[test]
246 | fn relation_with(term1 in strategy(), term2 in strategy()) {
247 | match term1.relation_with(&term2) {
248 | Relation::Satisfied => assert!(term1.satisfied_by(&term2)),
249 | Relation::Contradicted => assert!(term1.contradicted_by(&term2)),
250 | Relation::Inconclusive => {
251 | assert!(!term1.satisfied_by(&term2));
252 | assert!(!term1.contradicted_by(&term2));
253 | }
254 | }
255 | }
256 |
257 | /// Ensure that we don't wrongly convert between positive and negative ranges
258 | #[test]
259 | fn positive_negative(term1 in strategy(), term2 in strategy()) {
260 | let intersection_positive = term1.is_positive() || term2.is_positive();
261 | let union_positive = term1.is_positive() && term2.is_positive();
262 | assert_eq!(term1.intersection(&term2).is_positive(), intersection_positive);
263 | assert_eq!(term1.union(&term2).is_positive(), union_positive);
264 | }
265 |
266 | #[test]
267 | fn is_disjoint_through_intersection(r1 in strategy(), r2 in strategy()) {
268 | let disjoint_def = r1.intersection(&r2) == Term::empty();
269 | assert_eq!(r1.is_disjoint(&r2), disjoint_def);
270 | }
271 |
272 | #[test]
273 | fn subset_of_through_intersection(r1 in strategy(), r2 in strategy()) {
274 | let subset_def = r1.intersection(&r2) == r1;
275 | assert_eq!(r1.subset_of(&r2), subset_def);
276 | }
277 |
278 | #[test]
279 | fn union_through_intersection(r1 in strategy(), r2 in strategy()) {
280 | let union_def = r1
281 | .negate()
282 | .intersection(&r2.negate())
283 | .negate();
284 | assert_eq!(r1.union(&r2), union_def);
285 | }
286 | }
287 | }
288 |
-------------------------------------------------------------------------------- /src/type_aliases.rs: --------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 |
3 | //! Publicly exported type aliases.
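The proptest identities just above (union via double negation, disjointness via empty intersection) can be seen concretely in a tiny standalone model of the `Term` algebra, using `BTreeSet<u32>` as a stand-in `VersionSet`. This is a hypothetical sketch, not the crate's `pub(crate)` type:

```rust
use std::collections::BTreeSet;

// Minimal stand-in for Term<VS>, with BTreeSet<u32> playing the VersionSet role.
#[derive(Debug, Clone, PartialEq, Eq)]
enum Term {
    Positive(BTreeSet<u32>),
    Negative(BTreeSet<u32>),
}

impl Term {
    // Negation swaps the variant and keeps the set.
    fn negate(&self) -> Self {
        match self {
            Term::Positive(s) => Term::Negative(s.clone()),
            Term::Negative(s) => Term::Positive(s.clone()),
        }
    }
    // Evaluate the term for a selected version `v`.
    fn contains(&self, v: u32) -> bool {
        match self {
            Term::Positive(s) => s.contains(&v),
            Term::Negative(s) => !s.contains(&v),
        }
    }
}

fn main() {
    let r: BTreeSet<u32> = [1, 2, 3].into_iter().collect();
    let pos = Term::Positive(r);
    // Negation flips the evaluation for every version.
    assert!(pos.contains(2) && !pos.negate().contains(2));
    assert!(!pos.contains(4) && pos.negate().contains(4));
    println!("ok");
}
```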
4 |
5 | use crate::DependencyProvider;
6 |
7 | /// Map implementation used by the library.
8 | pub type Map<K, V> = rustc_hash::FxHashMap<K, V>;
9 |
10 | /// Set implementation used by the library.
11 | pub type Set<V> = rustc_hash::FxHashSet<V>;
12 |
13 | /// Concrete dependencies picked by the library during [resolve](crate::solver::resolve)
14 | /// from [DependencyConstraints].
15 | pub type SelectedDependencies<DP> =
16 | Map<<DP as DependencyProvider>::P, <DP as DependencyProvider>::V>;
17 |
18 | /// Holds information about all possible versions a given package can accept.
19 | /// There is a difference in semantics between an empty map
20 | /// inside [DependencyConstraints] and [Dependencies::Unavailable](crate::solver::Dependencies::Unavailable):
21 | /// the former means the package is known to have no dependencies,
22 | /// while the latter means the dependencies could not be fetched by the [DependencyProvider].
23 | pub type DependencyConstraints<P, VS> = Map<P, VS>;
-------------------------------------------------------------------------------- /src/version.rs: --------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 |
3 | //! Traits and implementations to create and compare versions.
4 |
5 | use std::fmt::{self, Debug, Display};
6 | use std::str::FromStr;
7 |
8 | use thiserror::Error;
9 |
10 | /// Type for semantic versions: major.minor.patch.
11 | #[derive(Debug, Copy, Clone, Ord, PartialOrd, Eq, PartialEq, Hash)]
12 | pub struct SemanticVersion {
13 | major: u32,
14 | minor: u32,
15 | patch: u32,
16 | }
17 |
18 | #[cfg(feature = "serde")]
19 | impl serde::Serialize for SemanticVersion {
20 | fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
21 | where
22 | S: serde::Serializer,
23 | {
24 | serializer.serialize_str(&format!("{}", self))
25 | }
26 | }
27 |
28 | #[cfg(feature = "serde")]
29 | impl<'de> serde::Deserialize<'de> for SemanticVersion {
30 | fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
31 | where
32 | D: serde::Deserializer<'de>,
33 | {
34 | let s = String::deserialize(deserializer)?;
35 | FromStr::from_str(&s).map_err(serde::de::Error::custom)
36 | }
37 | }
38 |
39 | // Constructors
40 | impl SemanticVersion {
41 | /// Create a version with "major", "minor" and "patch" values.
42 | /// `version = major.minor.patch`
43 | pub fn new(major: u32, minor: u32, patch: u32) -> Self {
44 | Self {
45 | major,
46 | minor,
47 | patch,
48 | }
49 | }
50 |
51 | /// Version 0.0.0.
52 | pub fn zero() -> Self {
53 | Self::new(0, 0, 0)
54 | }
55 |
56 | /// Version 1.0.0.
57 | pub fn one() -> Self {
58 | Self::new(1, 0, 0)
59 | }
60 |
61 | /// Version 2.0.0.
62 | pub fn two() -> Self {
63 | Self::new(2, 0, 0)
64 | }
65 | }
66 |
67 | // Convert a tuple (major, minor, patch) into a version.
68 | impl From<(u32, u32, u32)> for SemanticVersion {
69 | fn from(tuple: (u32, u32, u32)) -> Self {
70 | let (major, minor, patch) = tuple;
71 | Self::new(major, minor, patch)
72 | }
73 | }
74 |
75 | // Convert a &(major, minor, patch) into a version.
76 | impl From<&(u32, u32, u32)> for SemanticVersion {
77 | fn from(tuple: &(u32, u32, u32)) -> Self {
78 | let (major, minor, patch) = *tuple;
79 | Self::new(major, minor, patch)
80 | }
81 | }
82 |
83 | // Convert an &version into a version.
84 | impl From<&SemanticVersion> for SemanticVersion {
85 | fn from(v: &SemanticVersion) -> Self {
86 | *v
87 | }
88 | }
89 |
90 | // Convert a version into a tuple (major, minor, patch).
91 | impl From<SemanticVersion> for (u32, u32, u32) {
92 | fn from(v: SemanticVersion) -> Self {
93 | (v.major, v.minor, v.patch)
94 | }
95 | }
96 |
97 | // Bump versions.
98 | impl SemanticVersion {
99 | /// Bump the patch number of a version.
100 | pub fn bump_patch(self) -> Self {
101 | Self::new(self.major, self.minor, self.patch + 1)
102 | }
103 |
104 | /// Bump the minor number of a version.
105 | pub fn bump_minor(self) -> Self {
106 | Self::new(self.major, self.minor + 1, 0)
107 | }
108 |
109 | /// Bump the major number of a version.
110 | pub fn bump_major(self) -> Self {
111 | Self::new(self.major + 1, 0, 0)
112 | }
113 | }
114 |
115 | /// Error creating [SemanticVersion] from [String].
116 | #[derive(Error, Debug, PartialEq, Eq)]
117 | pub enum VersionParseError {
118 | /// [SemanticVersion] must contain major, minor, patch versions.
119 | #[error("version {full_version} must contain 3 numbers separated by dot")]
120 | NotThreeParts {
121 | /// [SemanticVersion] that was being parsed.
122 | full_version: String,
123 | },
124 | /// Wrapper around [ParseIntError](core::num::ParseIntError).
125 | #[error("cannot parse '{version_part}' in '{full_version}' as u32: {parse_error}")]
126 | ParseIntError {
127 | /// [SemanticVersion] that was being parsed.
128 | full_version: String,
129 | /// A version part where parsing failed.
130 | version_part: String,
131 | /// A specific error resulting from parsing a part of the version as [u32].
132 | parse_error: String,
133 | },
134 | }
135 |
136 | impl FromStr for SemanticVersion {
137 | type Err = VersionParseError;
138 |
139 | fn from_str(s: &str) -> Result<Self, Self::Err> {
140 | let parse_u32 = |part: &str| {
141 | part.parse::<u32>().map_err(|e| Self::Err::ParseIntError {
142 | full_version: s.to_string(),
143 | version_part: part.to_string(),
144 | parse_error: e.to_string(),
145 | })
146 | };
147 |
148 | let mut parts = s.split('.');
149 | match (parts.next(), parts.next(), parts.next(), parts.next()) {
150 | (Some(major), Some(minor), Some(patch), None) => {
151 | let major = parse_u32(major)?;
152 | let minor = parse_u32(minor)?;
153 | let patch = parse_u32(patch)?;
154 | Ok(Self {
155 | major,
156 | minor,
157 | patch,
158 | })
159 | }
160 | _ => Err(Self::Err::NotThreeParts {
161 | full_version: s.to_string(),
162 | }),
163 | }
164 | }
165 | }
166 |
167 | #[test]
168 | fn from_str_for_semantic_version() {
169 | let parse = |str: &str| str.parse::<SemanticVersion>();
170 | assert!(parse(
171 | &SemanticVersion {
172 | major: 0,
173 | minor: 1,
174 | patch: 0
175 | }
176 | .to_string()
177 | )
178 | .is_ok());
179 | assert!(parse("1.2.3").is_ok());
180 | assert_eq!(
181 | parse("1.abc.3"),
182 | Err(VersionParseError::ParseIntError {
183 | full_version: "1.abc.3".to_owned(),
184 | version_part: "abc".to_owned(),
185 | parse_error: "invalid digit found in string".to_owned(),
186 | })
187 | );
188 | assert_eq!(
189 | parse("1.2.-3"),
190 | Err(VersionParseError::ParseIntError {
191 | full_version: "1.2.-3".to_owned(),
192 | version_part: "-3".to_owned(),
193 | parse_error: "invalid digit found in string".to_owned(),
194 | })
195 | );
196 | assert_eq!(
197 | parse("1.2.9876543210"),
198 | Err(VersionParseError::ParseIntError {
199 | full_version: "1.2.9876543210".to_owned(),
200 | version_part: "9876543210".to_owned(),
201 | parse_error: "number too large to fit in target type".to_owned(),
202 | })
203 | );
204 | assert_eq!(
205 | parse("1.2"),
206 |
Err(VersionParseError::NotThreeParts {
207 | full_version: "1.2".to_owned(),
208 | })
209 | );
210 | assert_eq!(
211 | parse("1.2.3."),
212 | Err(VersionParseError::NotThreeParts {
213 | full_version: "1.2.3.".to_owned(),
214 | })
215 | );
216 | }
217 |
218 | impl Display for SemanticVersion {
219 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
220 | write!(f, "{}.{}.{}", self.major, self.minor, self.patch)
221 | }
222 | }
223 |
-------------------------------------------------------------------------------- /src/version_set.rs: --------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 |
3 | use std::fmt::{Debug, Display};
4 |
5 | use crate::Ranges;
6 |
7 | /// A set of versions.
8 | ///
9 | /// See [`Ranges`] for an implementation.
10 | ///
11 | /// The methods with default implementations can be overridden for better performance, but their
12 | /// output must be equal to the default implementation.
13 | ///
14 | /// # Equality
15 | ///
16 | /// It is important that the `Eq` trait is implemented so that if two sets contain the same
17 | /// versions, they are equal under `Eq`. In particular, you can only use `#[derive(Eq, PartialEq)]`
18 | /// if `Eq` is strictly equivalent to structural equality, i.e. if version sets are always
19 | /// stored in a canonical representation. Such problems may arise if your implementations of
20 | /// `complement()` and `intersection()` do not return canonical representations.
21 | ///
22 | /// For example, `>=1,<4 || >=2,<5` and `>=1,<4 || >=3,<5` are equal, because they can both be
23 | /// normalized to `>=1,<5`.
24 | ///
25 | /// Note that pubgrub does not know which versions actually exist for a package; the contract
26 | /// is about upholding the mathematical properties of set operations, assuming all versions are
27 | /// possible. This is required for the solver to determine the relationship of version sets to each
28 | /// other.
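The `VersionSet` default method implementations (union via De Morgan's law, disjointness and subset tests via intersection) can be checked on a toy set over a tiny universe. `TinySet` below is a hypothetical bitmask set for illustration, not the crate's `Ranges` or an actual implementation of the trait:

```rust
// Toy version set over versions 0..8, stored as a bitmask, to illustrate
// the default union/is_disjoint/subset_of bodies of `VersionSet`.
#[derive(Clone, PartialEq, Eq, Debug)]
struct TinySet(u8);

impl TinySet {
    fn empty() -> Self { TinySet(0) }
    fn complement(&self) -> Self { TinySet(!self.0) }
    fn intersection(&self, other: &Self) -> Self { TinySet(self.0 & other.0) }
    // The trait's default implementations, written out verbatim:
    fn union(&self, other: &Self) -> Self {
        self.complement().intersection(&other.complement()).complement()
    }
    fn is_disjoint(&self, other: &Self) -> bool {
        self.intersection(other) == Self::empty()
    }
    fn subset_of(&self, other: &Self) -> bool {
        self == &self.intersection(other)
    }
}

fn main() {
    let a = TinySet(0b0011); // {0, 1}
    let b = TinySet(0b0110); // {1, 2}
    assert_eq!(a.union(&b).0, 0b0111); // De Morgan union gives {0, 1, 2}
    assert!(!a.is_disjoint(&b)); // they share version 1
    assert!(a.subset_of(&TinySet(0b1011))); // {0, 1} ⊆ {0, 1, 3}
    println!("ok");
}
```

Only `empty`, `complement`, `intersection`, and `contains`-style primitives need to be correct; the derived operations follow from them.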
29 | pub trait VersionSet: Debug + Display + Clone + Eq {
30 | /// Version type associated with the sets manipulated.
31 | type V: Debug + Display + Clone + Ord;
32 |
33 | // Constructors
34 |
35 | /// An empty set containing no version.
36 | fn empty() -> Self;
37 |
38 | /// A set containing only the given version.
39 | fn singleton(v: Self::V) -> Self;
40 |
41 | // Operations
42 |
43 | /// The set of all versions that are not in this set.
44 | fn complement(&self) -> Self;
45 |
46 | /// The set of all versions that are in both sets.
47 | fn intersection(&self, other: &Self) -> Self;
48 |
49 | /// Whether the version is part of this set.
50 | fn contains(&self, v: &Self::V) -> bool;
51 |
52 | // Automatically implemented functions
53 |
54 | /// The set containing all versions.
55 | ///
56 | /// The default implementation is the complement of the empty set.
57 | fn full() -> Self {
58 | Self::empty().complement()
59 | }
60 |
61 | /// The set of all versions that are in either (or both) of the sets.
62 | ///
63 | /// The default implementation is the complement of the intersection of the complements of both sets
64 | /// (De Morgan's law).
65 | fn union(&self, other: &Self) -> Self {
66 | self.complement()
67 | .intersection(&other.complement())
68 | .complement()
69 | }
70 |
71 | /// Whether the ranges have no overlapping segments.
72 | fn is_disjoint(&self, other: &Self) -> bool {
73 | self.intersection(other) == Self::empty()
74 | }
75 |
76 | /// Whether all ranges of `self` are contained in `other`.
77 | fn subset_of(&self, other: &Self) -> bool {
78 | self == &self.intersection(other)
79 | }
80 | }
81 |
82 | /// [`Ranges`] contains optimized implementations of all operations.
83 | impl<T: Debug + Display + Clone + Ord> VersionSet for Ranges<T> {
84 | type V = T;
85 |
86 | fn empty() -> Self {
87 | Ranges::empty()
88 | }
89 |
90 | fn singleton(v: Self::V) -> Self {
91 | Ranges::singleton(v)
92 | }
93 |
94 | fn complement(&self) -> Self {
95 | Ranges::complement(self)
96 | }
97 |
98 | fn intersection(&self, other: &Self) -> Self {
99 | Ranges::intersection(self, other)
100 | }
101 |
102 | fn contains(&self, v: &Self::V) -> bool {
103 | Ranges::contains(self, v)
104 | }
105 |
106 | fn full() -> Self {
107 | Ranges::full()
108 | }
109 |
110 | fn union(&self, other: &Self) -> Self {
111 | Ranges::union(self, other)
112 | }
113 |
114 | fn is_disjoint(&self, other: &Self) -> bool {
115 | Ranges::is_disjoint(self, other)
116 | }
117 |
118 | fn subset_of(&self, other: &Self) -> bool {
119 | Ranges::subset_of(self, other)
120 | }
121 | }
-------------------------------------------------------------------------------- /tests/examples.rs: --------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 |
3 | use pubgrub::{
4 | resolve, DefaultStringReporter, Map, OfflineDependencyProvider, PubGrubError, Ranges,
5 | Reporter as _, SemanticVersion, Set,
6 | };
7 |
8 | type NumVS = Ranges<u32>;
9 | type SemVS = Ranges<SemanticVersion>;
10 |
11 | use std::io::Write;
12 |
13 | use log::LevelFilter;
14 |
15 | fn init_log() {
16 | let _ = env_logger::builder()
17 | .filter_level(LevelFilter::Trace)
18 | .format(|buf, record| writeln!(buf, "{}", record.args()))
19 | .is_test(true)
20 | .try_init();
21 | }
22 |
23 | #[test]
24 | /// https://github.com/dart-lang/pub/blob/master/doc/solver.md#no-conflicts
25 | fn no_conflict() {
26 | init_log();
27 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new();
28 | #[rustfmt::skip]
29 | dependency_provider.add_dependencies(
30 | "root", (1, 0, 0),
31 | [("foo", Ranges::between((1, 0, 0), (2, 0, 0)))],
32 | );
33 | #[rustfmt::skip]
34 | dependency_provider.add_dependencies(
35 | "foo",
(1, 0, 0), 36 | [("bar", Ranges::between((1, 0, 0), (2, 0, 0)))], 37 | ); 38 | dependency_provider.add_dependencies("bar", (1, 0, 0), []); 39 | dependency_provider.add_dependencies("bar", (2, 0, 0), []); 40 | 41 | // Run the algorithm. 42 | let computed_solution = resolve(&dependency_provider, "root", (1, 0, 0)).unwrap(); 43 | 44 | // Solution. 45 | let mut expected_solution = Map::default(); 46 | expected_solution.insert("root", (1, 0, 0).into()); 47 | expected_solution.insert("foo", (1, 0, 0).into()); 48 | expected_solution.insert("bar", (1, 0, 0).into()); 49 | 50 | // Comparing the true solution with the one computed by the algorithm. 51 | assert_eq!(expected_solution, computed_solution); 52 | } 53 | 54 | #[test] 55 | /// https://github.com/dart-lang/pub/blob/master/doc/solver.md#avoiding-conflict-during-decision-making 56 | fn avoiding_conflict_during_decision_making() { 57 | init_log(); 58 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 59 | #[rustfmt::skip] 60 | dependency_provider.add_dependencies( 61 | "root", (1, 0, 0), 62 | [ 63 | ("foo", Ranges::between((1, 0, 0), (2, 0, 0))), 64 | ("bar", Ranges::between((1, 0, 0), (2, 0, 0))), 65 | ], 66 | ); 67 | #[rustfmt::skip] 68 | dependency_provider.add_dependencies( 69 | "foo", (1, 1, 0), 70 | [("bar", Ranges::between((2, 0, 0), (3, 0, 0)))], 71 | ); 72 | dependency_provider.add_dependencies("foo", (1, 0, 0), []); 73 | dependency_provider.add_dependencies("bar", (1, 0, 0), []); 74 | dependency_provider.add_dependencies("bar", (1, 1, 0), []); 75 | dependency_provider.add_dependencies("bar", (2, 0, 0), []); 76 | 77 | // Run the algorithm. 78 | let computed_solution = resolve(&dependency_provider, "root", (1, 0, 0)).unwrap(); 79 | 80 | // Solution. 
81 | let mut expected_solution = Map::default(); 82 | expected_solution.insert("root", (1, 0, 0).into()); 83 | expected_solution.insert("foo", (1, 0, 0).into()); 84 | expected_solution.insert("bar", (1, 1, 0).into()); 85 | 86 | // Comparing the true solution with the one computed by the algorithm. 87 | assert_eq!(expected_solution, computed_solution); 88 | } 89 | 90 | #[test] 91 | /// https://github.com/dart-lang/pub/blob/master/doc/solver.md#performing-conflict-resolution 92 | fn conflict_resolution() { 93 | init_log(); 94 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 95 | #[rustfmt::skip] 96 | dependency_provider.add_dependencies( 97 | "root", (1, 0, 0), 98 | [("foo", Ranges::higher_than((1, 0, 0)))], 99 | ); 100 | #[rustfmt::skip] 101 | dependency_provider.add_dependencies( 102 | "foo", (2, 0, 0), 103 | [("bar", Ranges::between((1, 0, 0), (2, 0, 0)))], 104 | ); 105 | dependency_provider.add_dependencies("foo", (1, 0, 0), []); 106 | #[rustfmt::skip] 107 | dependency_provider.add_dependencies( 108 | "bar", (1, 0, 0), 109 | [("foo", Ranges::between((1, 0, 0), (2, 0, 0)))], 110 | ); 111 | 112 | // Run the algorithm. 113 | let computed_solution = resolve(&dependency_provider, "root", (1, 0, 0)).unwrap(); 114 | 115 | // Solution. 116 | let mut expected_solution = Map::default(); 117 | expected_solution.insert("root", (1, 0, 0).into()); 118 | expected_solution.insert("foo", (1, 0, 0).into()); 119 | 120 | // Comparing the true solution with the one computed by the algorithm. 
121 | assert_eq!(expected_solution, computed_solution); 122 | } 123 | 124 | #[test] 125 | /// https://github.com/dart-lang/pub/blob/master/doc/solver.md#conflict-resolution-with-a-partial-satisfier 126 | fn conflict_with_partial_satisfier() { 127 | init_log(); 128 | let mut dependency_provider = OfflineDependencyProvider::<&str, SemVS>::new(); 129 | #[rustfmt::skip] 130 | // root 1.0.0 depends on foo ^1.0.0 and target ^2.0.0 131 | dependency_provider.add_dependencies( 132 | "root", (1, 0, 0), 133 | [ 134 | ("foo", Ranges::between((1, 0, 0), (2, 0, 0))), 135 | ("target", Ranges::between((2, 0, 0), (3, 0, 0))), 136 | ], 137 | ); 138 | #[rustfmt::skip] 139 | // foo 1.1.0 depends on left ^1.0.0 and right ^1.0.0 140 | dependency_provider.add_dependencies( 141 | "foo", (1, 1, 0), 142 | [ 143 | ("left", Ranges::between((1, 0, 0), (2, 0, 0))), 144 | ("right", Ranges::between((1, 0, 0), (2, 0, 0))), 145 | ], 146 | ); 147 | dependency_provider.add_dependencies("foo", (1, 0, 0), []); 148 | #[rustfmt::skip] 149 | // left 1.0.0 depends on shared >=1.0.0 150 | dependency_provider.add_dependencies( 151 | "left", (1, 0, 0), 152 | [("shared", Ranges::higher_than((1, 0, 0)))], 153 | ); 154 | #[rustfmt::skip] 155 | // right 1.0.0 depends on shared <2.0.0 156 | dependency_provider.add_dependencies( 157 | "right", (1, 0, 0), 158 | [("shared", Ranges::strictly_lower_than((2, 0, 0)))], 159 | ); 160 | dependency_provider.add_dependencies("shared", (2, 0, 0), []); 161 | #[rustfmt::skip] 162 | // shared 1.0.0 depends on target ^1.0.0 163 | dependency_provider.add_dependencies( 164 | "shared", (1, 0, 0), 165 | [("target", Ranges::between((1, 0, 0), (2, 0, 0)))], 166 | ); 167 | dependency_provider.add_dependencies("target", (2, 0, 0), []); 168 | dependency_provider.add_dependencies("target", (1, 0, 0), []); 169 | 170 | // Run the algorithm. 171 | let computed_solution = resolve(&dependency_provider, "root", (1, 0, 0)).unwrap(); 172 | 173 | // Solution. 
174 | let mut expected_solution = Map::default(); 175 | expected_solution.insert("root", (1, 0, 0).into()); 176 | expected_solution.insert("foo", (1, 0, 0).into()); 177 | expected_solution.insert("target", (2, 0, 0).into()); 178 | 179 | // Comparing the true solution with the one computed by the algorithm. 180 | assert_eq!(expected_solution, computed_solution); 181 | } 182 | 183 | #[test] 184 | /// a0 dep on b and c 185 | /// b0 dep on d0 186 | /// b1 dep on d1 (not existing) 187 | /// c0 has no dep 188 | /// c1 dep on d2 (not existing) 189 | /// d0 has no dep 190 | /// 191 | /// Solution: a0, b0, c0, d0 192 | fn double_choices() { 193 | init_log(); 194 | let mut dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new(); 195 | dependency_provider.add_dependencies("a", 0u32, [("b", Ranges::full()), ("c", Ranges::full())]); 196 | dependency_provider.add_dependencies("b", 0u32, [("d", Ranges::singleton(0u32))]); 197 | dependency_provider.add_dependencies("b", 1u32, [("d", Ranges::singleton(1u32))]); 198 | dependency_provider.add_dependencies("c", 0u32, []); 199 | dependency_provider.add_dependencies("c", 1u32, [("d", Ranges::singleton(2u32))]); 200 | dependency_provider.add_dependencies("d", 0u32, []); 201 | 202 | // Solution. 203 | let mut expected_solution = Map::default(); 204 | expected_solution.insert("a", 0u32); 205 | expected_solution.insert("b", 0u32); 206 | expected_solution.insert("c", 0u32); 207 | expected_solution.insert("d", 0u32); 208 | 209 | // Run the algorithm. 210 | let computed_solution = resolve(&dependency_provider, "a", 0u32).unwrap(); 211 | assert_eq!(expected_solution, computed_solution); 212 | } 213 | 214 | #[test] 215 | fn confusing_with_lots_of_holes() { 216 | let mut dependency_provider = OfflineDependencyProvider::<&str, NumVS>::new(); 217 | 218 | // root depends on foo... 
219 | dependency_provider.add_dependencies( 220 | "root", 221 | 1u32, 222 | vec![("foo", Ranges::full()), ("baz", Ranges::full())], 223 | ); 224 | 225 | for i in 1..6 { 226 | // foo depends on bar... 227 | dependency_provider.add_dependencies("foo", i as u32, vec![("bar", Ranges::full())]); 228 | } 229 | 230 | // This package is part of the dependency tree, but it's not part of the conflict 231 | dependency_provider.add_dependencies("baz", 1u32, vec![]); 232 | 233 | let Err(PubGrubError::NoSolution(mut derivation_tree)) = 234 | resolve(&dependency_provider, "root", 1u32) 235 | else { 236 | unreachable!() 237 | }; 238 | assert_eq!( 239 | &DefaultStringReporter::report(&derivation_tree), 240 | r#"Because there is no available version for bar and foo 1 | 2 | 3 | 4 | 5 depends on bar, foo 1 | 2 | 3 | 4 | 5 is forbidden. 241 | And because there is no version of foo in <1 | >1, <2 | >2, <3 | >3, <4 | >4, <5 | >5 and root 1 depends on foo, root 1 is forbidden."# 242 | ); 243 | derivation_tree.collapse_no_versions(); 244 | assert_eq!( 245 | &DefaultStringReporter::report(&derivation_tree), 246 | "Because foo depends on bar and root 1 depends on foo, root 1 is forbidden." 247 | ); 248 | assert_eq!( 249 | derivation_tree.packages(), 250 | // baz isn't shown. 
251 | Set::from_iter(&["root", "foo", "bar"])
252 | );
253 | }
254 |
-------------------------------------------------------------------------------- /tests/sat_dependency_provider.rs: --------------------------------------------------------------------------------
1 | // SPDX-License-Identifier: MPL-2.0
2 |
3 | use pubgrub::{
4 | Dependencies, DependencyProvider, Map, OfflineDependencyProvider, Package, PubGrubError,
5 | SelectedDependencies, VersionSet,
6 | };
7 | use varisat::ExtendFormula;
8 |
9 | fn sat_at_most_one(solver: &mut impl ExtendFormula, vars: &[varisat::Var]) {
10 | if vars.len() <= 1 {
11 | return;
12 | } else if vars.len() == 2 {
13 | solver.add_clause(&[vars[0].negative(), vars[1].negative()]);
14 | return;
15 | } else if vars.len() == 3 {
16 | solver.add_clause(&[vars[0].negative(), vars[1].negative()]);
17 | solver.add_clause(&[vars[0].negative(), vars[2].negative()]);
18 | solver.add_clause(&[vars[1].negative(), vars[2].negative()]);
19 | return;
20 | }
21 | // use the "Binary Encoding" from
22 | // https://www.it.uu.se/research/group/astra/ModRef10/papers/Alan%20M.%20Frisch%20and%20Paul%20A.%20Giannoros.%20SAT%20Encodings%20of%20the%20At-Most-k%20Constraint%20-%20ModRef%202010.pdf
23 | let len_bits = vars.len().ilog2() as usize + 1;
24 | let bits: Vec<varisat::Var> = solver.new_var_iter(len_bits).collect();
25 | for (i, p) in vars.iter().enumerate() {
26 | for (j, &bit) in bits.iter().enumerate() {
27 | solver.add_clause(&[p.negative(), bit.lit(((1 << j) & i) > 0)]);
28 | }
29 | }
30 | }
31 |
32 | /// Resolution can be reduced to the SAT problem. So this is an alternative implementation
33 | /// of the resolver that uses a SAT library for the hard work. This is intended to be easy to read,
34 | /// as compared to the real resolver. This will find a valid resolution if one exists.
35 | ///
36 | /// The SAT library does not optimize for the newer version,
37 | /// so the selected packages may not match the real resolver.
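The "binary encoding" in `sat_at_most_one` above works because each selected variable `i` forces the auxiliary bits to spell out `i` in binary, so no two distinct variables can be true at once. A standalone brute-force sanity check of that argument (this is a hypothetical sketch over plain integers, not the varisat-backed code):

```rust
// Brute-force check: enumerate every assignment to the n selector variables
// and the len_bits auxiliary bits, and verify that no assignment satisfying
// the clauses has more than one selector set.
fn at_most_one_holds(n: usize) -> bool {
    let len_bits = n.ilog2() as usize + 1; // same bit count as in sat_at_most_one
    for sel in 0u32..(1u32 << n) {
        for bits in 0u32..(1u32 << len_bits) {
            // Clauses hold iff every true selector i agrees with `bits`
            // on all len_bits positions of i's binary representation.
            let satisfied = (0..n).all(|i| {
                sel & (1u32 << i) == 0
                    || (0..len_bits).all(|j| (bits >> j) & 1 == (i as u32 >> j) & 1)
            });
            if satisfied && sel.count_ones() > 1 {
                return false; // a model with two selectors set slipped through
            }
        }
    }
    true
}

fn main() {
    assert!(at_most_one_holds(4));
    assert!(at_most_one_holds(7));
    println!("ok");
}
```

The encoding uses only `ilog2(n) + 1` auxiliary variables and `n * (ilog2(n) + 1)` clauses, versus `n * (n - 1) / 2` clauses for the pairwise encoding used in the small cases.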
38 | pub struct SatResolve<P: Package, VS: VersionSet> {
39 |     solver: varisat::Solver<'static>,
40 |     all_versions_by_p: Map<P, Vec<(VS::V, varisat::Var)>>,
41 | }
42 | 
43 | impl<P: Package, VS: VersionSet> SatResolve<P, VS> {
44 |     pub fn new(dp: &OfflineDependencyProvider<P, VS>) -> Self {
45 |         let mut cnf = varisat::CnfFormula::new();
46 | 
47 |         let mut all_versions = vec![];
48 |         let mut all_versions_by_p: Map<P, Vec<(VS::V, varisat::Var)>> = Map::default();
49 | 
50 |         for p in dp.packages() {
51 |             let mut versions_for_p = vec![];
52 |             for v in dp.versions(p).unwrap() {
53 |                 let new_var = cnf.new_var();
54 |                 all_versions.push((p.clone(), v.clone(), new_var));
55 |                 versions_for_p.push(new_var);
56 |                 all_versions_by_p
57 |                     .entry(p.clone())
58 |                     .or_default()
59 |                     .push((v.clone(), new_var));
60 |             }
61 |             // no two versions of the same package
62 |             sat_at_most_one(&mut cnf, &versions_for_p);
63 |         }
64 | 
65 |         // active packages need each of their `deps` to be satisfied
66 |         for (p, v, var) in &all_versions {
67 |             let deps = match dp.get_dependencies(p, v).unwrap() {
68 |                 Dependencies::Unavailable(_) => panic!(),
69 |                 Dependencies::Available(d) => d,
70 |             };
71 |             for (p1, range) in &deps {
72 |                 let empty_vec = vec![];
73 |                 let mut matches: Vec<varisat::Lit> = all_versions_by_p
74 |                     .get(p1)
75 |                     .unwrap_or(&empty_vec)
76 |                     .iter()
77 |                     .filter(|(v1, _)| range.contains(v1))
78 |                     .map(|(_, var1)| var1.positive())
79 |                     .collect();
80 |                 // ^ the `dep` is satisfied or
81 |                 matches.push(var.negative());
82 |                 // ^ `p` is not active
83 |                 cnf.add_clause(&matches);
84 |             }
85 |         }
86 | 
87 |         let mut solver = varisat::Solver::new();
88 |         solver.add_formula(&cnf);
89 | 
90 |         // We don't need to `solve` now. We know that "use nothing" will satisfy all the clauses so far.
91 |         // But things run faster if we let it spend some time figuring out how the constraints interact before we add assumptions.
 92 |         solver
 93 |             .solve()
 94 |             .expect("docs say it can't error in default config");
 95 | 
 96 |         Self {
 97 |             solver,
 98 |             all_versions_by_p,
 99 |         }
100 |     }
101 | 
102 |     pub fn resolve(&mut self, name: &P, ver: &VS::V) -> bool {
103 |         if let Some(vers) = self.all_versions_by_p.get(name) {
104 |             if let Some((_, var)) = vers.iter().find(|(v, _)| v == ver) {
105 |                 self.solver.assume(&[var.positive()]);
106 | 
107 |                 self.solver
108 |                     .solve()
109 |                     .expect("docs say it can't error in default config")
110 |             } else {
111 |                 false
112 |             }
113 |         } else {
114 |             false
115 |         }
116 |     }
117 | 
118 |     pub fn is_valid_solution<DP: DependencyProvider<P = P, V = VS::V, VS = VS>>(
119 |         &mut self,
120 |         pids: &SelectedDependencies<DP>,
121 |     ) -> bool {
122 |         let mut assumption = vec![];
123 | 
124 |         for (p, vs) in &self.all_versions_by_p {
125 |             let pid_for_p = pids.get(p);
126 |             for (v, var) in vs {
127 |                 assumption.push(var.lit(pid_for_p == Some(v)))
128 |             }
129 |         }
130 | 
131 |         self.solver.assume(&assumption);
132 | 
133 |         self.solver
134 |             .solve()
135 |             .expect("docs say it can't error in default config")
136 |     }
137 | 
138 |     pub fn check_resolve<DP: DependencyProvider<P = P, V = VS::V, VS = VS>>(
139 |         &mut self,
140 |         res: &Result<SelectedDependencies<DP>, PubGrubError<DP>>,
141 |         p: &P,
142 |         v: &VS::V,
143 |     ) {
144 |         match res {
145 |             Ok(s) => {
146 |                 assert!(self.is_valid_solution::<DP>(s));
147 |             }
148 |             Err(_) => {
149 |                 assert!(!self.resolve(p, v));
150 |             }
151 |         }
152 |     }
153 | }
154 | 
--------------------------------------------------------------------------------
/tests/tests.rs:
--------------------------------------------------------------------------------
 1 | // SPDX-License-Identifier: MPL-2.0
 2 | 
 3 | use pubgrub::{resolve, OfflineDependencyProvider, PubGrubError, Ranges};
 4 | 
 5 | type NumVS = Ranges<u32>;
 6 | 
 7 | #[test]
 8 | fn same_result_on_repeated_runs() {
 9 |     let mut dependency_provider = OfflineDependencyProvider::<_, NumVS>::new();
10 | 
11 |     dependency_provider.add_dependencies("c", 0u32, []);
12 |     dependency_provider.add_dependencies("c", 2u32, []);
13 |     dependency_provider.add_dependencies("b", 0u32,
[]); 14 | dependency_provider.add_dependencies("b", 1u32, [("c", Ranges::between(0u32, 1u32))]); 15 | 16 | dependency_provider.add_dependencies("a", 0u32, [("b", Ranges::full()), ("c", Ranges::full())]); 17 | 18 | let name = "a"; 19 | let ver: u32 = 0; 20 | let one = resolve(&dependency_provider, name, ver); 21 | for _ in 0..10 { 22 | match (&one, &resolve(&dependency_provider, name, ver)) { 23 | (Ok(l), Ok(r)) => assert_eq!(l, r), 24 | _ => panic!("not the same result"), 25 | } 26 | } 27 | } 28 | 29 | #[test] 30 | fn should_always_find_a_satisfier() { 31 | let mut dependency_provider = OfflineDependencyProvider::<_, NumVS>::new(); 32 | dependency_provider.add_dependencies("a", 0u32, [("b", Ranges::empty())]); 33 | assert!(matches!( 34 | resolve(&dependency_provider, "a", 0u32), 35 | Err(PubGrubError::NoSolution { .. }) 36 | )); 37 | 38 | dependency_provider.add_dependencies("c", 0u32, [("a", Ranges::full())]); 39 | assert!(matches!( 40 | resolve(&dependency_provider, "c", 0u32), 41 | Err(PubGrubError::NoSolution { .. }) 42 | )); 43 | } 44 | 45 | #[test] 46 | fn depend_on_self() { 47 | let mut dependency_provider = OfflineDependencyProvider::<_, NumVS>::new(); 48 | dependency_provider.add_dependencies("a", 0u32, [("a", Ranges::full())]); 49 | assert!(resolve(&dependency_provider, "a", 0u32).is_ok()); 50 | dependency_provider.add_dependencies("a", 66u32, [("a", Ranges::singleton(111u32))]); 51 | assert!(resolve(&dependency_provider, "a", 66u32).is_err()); 52 | } 53 | -------------------------------------------------------------------------------- /version-ranges/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "version-ranges" 3 | version = "0.1.1" 4 | description = "Performance-optimized type for generic version ranges and operations on them." 
 5 | edition = "2021"
 6 | repository = "https://github.com/pubgrub-rs/pubgrub"
 7 | license = "MPL-2.0"
 8 | keywords = ["version", "pubgrub", "selector", "ranges"]
 9 | include = ["LICENSE", "README.md", "number-line-ranges.svg", "src/**"]
10 | 
11 | [dependencies]
12 | proptest = { version = "1.6.0", optional = true }
13 | serde = { version = "1.0.219", features = ["derive"], optional = true }
14 | smallvec = { version = "1.14.0", features = ["union"] }
15 | 
16 | [features]
17 | serde = ["dep:serde", "smallvec/serde"]
18 | 
19 | [dev-dependencies]
20 | proptest = "1.6.0"
21 | ron = "0.10.1"
22 | 
--------------------------------------------------------------------------------
/version-ranges/Changelog.md:
--------------------------------------------------------------------------------
 1 | # Changelog
 2 | 
 3 | Changelog for the version-ranges crate.
 4 | 
 5 | ## v0.1.1
 6 | 
 7 | * Added `Ranges::from_iter`
 8 | * Implement `IntoIter` on `Ranges`
 9 | 
10 | ## v0.1.0
11 | 
12 | Initial release!
--------------------------------------------------------------------------------
/version-ranges/LICENSE:
--------------------------------------------------------------------------------
 1 | ../LICENSE
--------------------------------------------------------------------------------
/version-ranges/README.md:
--------------------------------------------------------------------------------
 1 | # Ranges
 2 | 
 3 | [![crates.io](https://img.shields.io/crates/v/version-ranges.svg?logo=rust)](https://crates.io/crates/version-ranges)
 4 | [![docs.rs](https://img.shields.io/badge/docs.rs-version-ranges)](https://docs.rs/version-ranges)
 5 | 
 6 | This crate contains a performance-optimized type for generic version ranges and operations on them.
 7 | 
 8 | `Ranges` can represent version selectors such as `(>=1.5.1, <2) OR (==3.1) OR (>4)`. Internally, it is an ordered list
 9 | of contiguous intervals (segments) with inclusive, exclusive or open-ended ends, similar to a
10 | `Vec<(Bound<V>, Bound<V>)>`.
11 | 
12 | You can construct a basic range from one of the following building blocks. All other ranges are concatenations, unions, and
13 | complements of these basic ranges.
14 | 
15 | - `Ranges::empty()`: No version
16 | - `Ranges::full()`: All versions
17 | - `Ranges::singleton(v)`: Only the version v exactly
18 | - `Ranges::higher_than(v)`: All versions `v <= versions`
19 | - `Ranges::strictly_higher_than(v)`: All versions `v < versions`
20 | - `Ranges::lower_than(v)`: All versions `versions <= v`
21 | - `Ranges::strictly_lower_than(v)`: All versions `versions < v`
22 | - `Ranges::between(v1, v2)`: All versions `v1 <= versions < v2`
23 | 
24 | The optimized operations include `complement`, `contains`, `contains_many`, `intersection`, `is_disjoint`,
25 | `subset_of` and `union`.
26 | 
27 | `Ranges` is generic over any type that implements `Ord` + `Clone` and can represent all kinds of slices with ordered
28 | coordinates, not just version ranges. While built as a performance-critical piece
29 | of [pubgrub](https://github.com/pubgrub-rs/pubgrub), it can be adopted for other domains, too.
30 | 
31 | ![A number line and a sample range on it](number-line-ranges.svg)
32 | 
33 | You can imagine a `Ranges` as slices over a number line.
34 | 
35 | Note that there are limitations to the equality implementation: Given a `Ranges<u32>`, the segments
36 | `(Unbounded, Included(42u32))` and `(Included(0), Included(42u32))` as well as
37 | `(Included(1), Included(5))` and `(Included(1), Included(3)) + (Included(4), Included(5))`
38 | are reported as unequal, even though they match the same versions: We can't tell that there isn't a version between `0`
39 | and `-inf` or `3` and `4` respectively.
40 | 
41 | ## Optional features
42 | 
43 | * `serde`: serialization and deserialization for the version range, given that the version type also supports it.
44 | * `proptest`: Exports a proptest strategy for `Ranges`.
45 | 
--------------------------------------------------------------------------------
/version-ranges/number-line-ranges.svg:
--------------------------------------------------------------------------------
(SVG markup omitted from this listing; the image shows sample ranges drawn as segments over a number line.)
--------------------------------------------------------------------------------