├── .github └── workflows │ ├── bootc.yml │ └── rust.yml ├── .gitignore ├── Cargo.toml ├── LICENSE-APACHE ├── LICENSE-MIT ├── README.md ├── ci ├── container-build-integration.sh ├── ima.sh ├── installdeps.sh ├── integration.sh ├── lints.sh ├── priv-integration.sh └── priv-test-cockpit-selinux.sh ├── cli ├── Cargo.toml └── src │ └── main.rs ├── deny.toml ├── docs └── questions-and-answers.md ├── lib ├── Cargo.toml ├── README.md ├── src │ ├── bootabletree.rs │ ├── chunking.rs │ ├── cli.rs │ ├── commit.rs │ ├── container │ │ ├── deploy.rs │ │ ├── encapsulate.rs │ │ ├── mod.rs │ │ ├── skopeo.rs │ │ ├── store.rs │ │ ├── tests │ │ │ └── it │ │ │ │ └── fixtures │ │ │ │ └── exampleos.tar.zst │ │ ├── unencapsulate.rs │ │ └── update_detachedmeta.rs │ ├── container_utils.rs │ ├── diff.rs │ ├── docgen.rs │ ├── fixture.rs │ ├── fixtures │ │ ├── fedora-coreos-contentmeta.json.gz │ │ └── ostree-gpg-test-home.tar.gz │ ├── globals.rs │ ├── ima.rs │ ├── integrationtest.rs │ ├── isolation.rs │ ├── keyfileext.rs │ ├── lib.rs │ ├── logging.rs │ ├── mountutil.rs │ ├── objectsource.rs │ ├── objgv.rs │ ├── ostree_manual.rs │ ├── ostree_prepareroot.rs │ ├── refescape.rs │ ├── repair.rs │ ├── selinux.rs │ ├── statistics.rs │ ├── sysroot.rs │ ├── tar │ │ ├── export.rs │ │ ├── import.rs │ │ ├── mod.rs │ │ └── write.rs │ ├── tokio_util.rs │ └── utils.rs └── tests │ └── it │ ├── fixtures │ ├── hlinks.tar.gz │ ├── manifest1.json │ └── manifest2.json │ └── main.rs ├── man └── ostree-container-auth.md └── ostree-and-containers.md /.github/workflows/bootc.yml: -------------------------------------------------------------------------------- 1 | name: bootc 2 | 3 | permissions: 4 | actions: read 5 | 6 | on: 7 | push: 8 | branches: [main] 9 | pull_request: 10 | branches: [main] 11 | workflow_dispatch: {} 12 | 13 | jobs: 14 | build-c9s: 15 | runs-on: ubuntu-latest 16 | container: quay.io/centos/centos:stream9 17 | steps: 18 | - run: dnf -y install git-core 19 | - uses: actions/checkout@v3 20 | with: 21 
| repository: containers/bootc 22 | path: bootc 23 | - uses: actions/checkout@v3 24 | with: 25 | path: ostree-rs-ext 26 | - name: Patch bootc to use ostree-rs-ext 27 | run: | 28 | set -xeuo pipefail 29 | cd bootc 30 | cat >> Cargo.toml << 'EOF' 31 | [patch.crates-io] 32 | ostree-ext = { path = "../ostree-rs-ext/lib" } 33 | EOF 34 | - name: Install deps 35 | run: ./bootc/ci/installdeps.sh 36 | - name: Cache Dependencies 37 | uses: Swatinem/rust-cache@v2 38 | with: 39 | key: "build-bootc-c9s" 40 | workspaces: bootc 41 | - name: Build 42 | run: cd bootc && make test-bin-archive 43 | - name: Upload binary 44 | uses: actions/upload-artifact@v4 45 | with: 46 | name: bootc-c9s.tar.zst 47 | path: bootc/target/bootc.tar.zst 48 | privtest-alongside: 49 | name: "Test install-alongside" 50 | needs: build-c9s 51 | runs-on: ubuntu-latest 52 | steps: 53 | - name: Download 54 | uses: actions/download-artifact@v4.1.7 55 | with: 56 | name: bootc-c9s.tar.zst 57 | - name: Install 58 | run: tar -xvf bootc.tar.zst 59 | - name: Integration tests 60 | run: | 61 | set -xeuo pipefail 62 | sudo podman run --rm -ti --privileged -v /:/target -v /var/lib/containers:/var/lib/containers -v ./usr/bin/bootc:/usr/bin/bootc --pid=host --security-opt label=disable \ 63 | quay.io/centos-bootc/centos-bootc-dev:stream9 bootc install to-filesystem \ 64 | --karg=foo=bar --disable-selinux --replace=alongside /target 65 | 66 | -------------------------------------------------------------------------------- /.github/workflows/rust.yml: -------------------------------------------------------------------------------- 1 | # Inspired by https://github.com/rust-analyzer/rust-analyzer/blob/master/.github/workflows/ci.yaml 2 | # but tweaked in several ways. If you make changes here, consider doing so across other 3 | # repositories in e.g. ostreedev etc. 
4 | name: Rust 5 | 6 | permissions: 7 | actions: read 8 | 9 | on: 10 | push: 11 | branches: [main] 12 | pull_request: 13 | branches: [main] 14 | workflow_dispatch: {} 15 | 16 | env: 17 | CARGO_TERM_COLOR: always 18 | 19 | jobs: 20 | tests: 21 | runs-on: ubuntu-latest 22 | container: quay.io/coreos-assembler/fcos-buildroot:testing-devel 23 | steps: 24 | - uses: actions/checkout@v3 25 | - name: Code lints 26 | run: ./ci/lints.sh 27 | - name: Install deps 28 | run: ./ci/installdeps.sh 29 | # xref containers/containers-image-proxy-rs 30 | - name: Cache Dependencies 31 | uses: Swatinem/rust-cache@v2 32 | with: 33 | key: "tests" 34 | - name: cargo fmt (check) 35 | run: cargo fmt -- --check -l 36 | - name: Build 37 | run: cargo test --no-run 38 | - name: Individual checks 39 | run: (cd cli && cargo check) && (cd lib && cargo check) 40 | - name: Run tests 41 | run: cargo test -- --nocapture --quiet 42 | - name: Manpage generation 43 | run: mkdir -p target/man && cargo run --features=docgen -- man --directory target/man 44 | - name: cargo clippy 45 | run: cargo clippy 46 | build: 47 | runs-on: ubuntu-latest 48 | container: quay.io/coreos-assembler/fcos-buildroot:testing-devel 49 | steps: 50 | - uses: actions/checkout@v3 51 | - name: Install deps 52 | run: ./ci/installdeps.sh 53 | - name: Cache Dependencies 54 | uses: Swatinem/rust-cache@v2 55 | with: 56 | key: "build" 57 | - name: Build 58 | run: cargo build --release --features=internal-testing-api 59 | - name: Upload binary 60 | uses: actions/upload-artifact@v4 61 | with: 62 | name: ostree-ext-cli 63 | path: target/release/ostree-ext-cli 64 | build-minimum-toolchain: 65 | name: "Build using MSRV" 66 | runs-on: ubuntu-latest 67 | container: quay.io/coreos-assembler/fcos-buildroot:testing-devel 68 | steps: 69 | - name: Checkout repository 70 | uses: actions/checkout@v3 71 | - name: Install deps 72 | run: ./ci/installdeps.sh 73 | - name: Detect crate MSRV 74 | shell: bash 75 | run: | 76 | msrv=$(cargo metadata 
--format-version 1 --no-deps | \ 77 | jq -r '.packages[1].rust_version') 78 | echo "Crate MSRV: $msrv" 79 | echo "ACTION_MSRV_TOOLCHAIN=$msrv" >> $GITHUB_ENV 80 | - name: Remove system Rust toolchain 81 | run: dnf remove -y rust cargo 82 | - uses: dtolnay/rust-toolchain@master 83 | with: 84 | toolchain: ${{ env['ACTION_MSRV_TOOLCHAIN'] }} 85 | - name: Cache Dependencies 86 | uses: Swatinem/rust-cache@v2 87 | with: 88 | key: "min" 89 | - name: cargo check 90 | run: cargo check 91 | cargo-deny: 92 | runs-on: ubuntu-latest 93 | steps: 94 | - uses: actions/checkout@v3 95 | - uses: EmbarkStudios/cargo-deny-action@v1 96 | with: 97 | log-level: warn 98 | command: check bans sources licenses 99 | integration: 100 | name: "Integration" 101 | needs: build 102 | runs-on: ubuntu-latest 103 | container: quay.io/fedora/fedora-coreos:testing-devel 104 | steps: 105 | - name: Checkout repository 106 | uses: actions/checkout@v3 107 | - name: Download ostree-ext-cli 108 | uses: actions/download-artifact@v4.1.7 109 | with: 110 | name: ostree-ext-cli 111 | - name: Install 112 | run: install ostree-ext-cli /usr/bin && rm -v ostree-ext-cli 113 | - name: Integration tests 114 | run: ./ci/integration.sh 115 | ima: 116 | name: "Integration (IMA)" 117 | needs: build 118 | runs-on: ubuntu-latest 119 | container: quay.io/fedora/fedora-coreos:testing-devel 120 | steps: 121 | - name: Checkout repository 122 | uses: actions/checkout@v3 123 | - name: Download ostree-ext-cli 124 | uses: actions/download-artifact@v4.1.7 125 | with: 126 | name: ostree-ext-cli 127 | - name: Install 128 | run: install ostree-ext-cli /usr/bin && rm -v ostree-ext-cli 129 | - name: Integration tests 130 | run: ./ci/ima.sh 131 | privtest: 132 | name: "Privileged testing" 133 | needs: build 134 | runs-on: ubuntu-latest 135 | container: 136 | image: quay.io/fedora/fedora-coreos:testing-devel 137 | options: "--privileged --pid=host -v /var/tmp:/var/tmp -v /run/dbus:/run/dbus -v /run/systemd:/run/systemd -v /:/run/host" 138 | 
steps: 139 | - name: Checkout repository 140 | uses: actions/checkout@v3 141 | - name: Download 142 | uses: actions/download-artifact@v4.1.7 143 | with: 144 | name: ostree-ext-cli 145 | - name: Install 146 | run: install ostree-ext-cli /usr/bin && rm -v ostree-ext-cli 147 | - name: Integration tests 148 | run: ./ci/priv-integration.sh 149 | privtest-cockpit: 150 | name: "Privileged testing (cockpit)" 151 | needs: build 152 | runs-on: ubuntu-latest 153 | container: 154 | image: quay.io/fedora/fedora-bootc:41 155 | options: "--privileged --pid=host -v /var/tmp:/var/tmp -v /run/dbus:/run/dbus -v /run/systemd:/run/systemd -v /:/run/host" 156 | steps: 157 | - name: Checkout repository 158 | uses: actions/checkout@v4 159 | - name: Download 160 | uses: actions/download-artifact@v4.1.7 161 | with: 162 | name: ostree-ext-cli 163 | - name: Install 164 | run: install ostree-ext-cli /usr/bin && rm -v ostree-ext-cli 165 | - name: Integration tests 166 | run: ./ci/priv-test-cockpit-selinux.sh 167 | container-build: 168 | name: "Container build" 169 | needs: build 170 | runs-on: ubuntu-latest 171 | steps: 172 | - name: Checkout repository 173 | uses: actions/checkout@v3 174 | - name: Checkout coreos-layering-examples 175 | uses: actions/checkout@v3 176 | with: 177 | repository: coreos/coreos-layering-examples 178 | path: coreos-layering-examples 179 | - name: Download 180 | uses: actions/download-artifact@v4.1.7 181 | with: 182 | name: ostree-ext-cli 183 | - name: Integration tests 184 | run: ./ci/container-build-integration.sh 185 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | example 2 | 3 | 4 | # Added by cargo 5 | 6 | /target 7 | Cargo.lock 8 | -------------------------------------------------------------------------------- /Cargo.toml: -------------------------------------------------------------------------------- 1 | [workspace] 2 | members 
= ["cli", "lib"] 3 | resolver = "2" 4 | 5 | # These bits are copied from rpm-ostree. 6 | [profile.dev] 7 | opt-level = 1 # No optimizations are too slow for us. 8 | 9 | [profile.release] 10 | lto = "thin" 11 | # We use FFI so this is safest 12 | panic = "abort" 13 | # We assume we're being delivered via e.g. RPM which supports split debuginfo 14 | debug = true 15 | 16 | [profile.releaselto] 17 | codegen-units = 1 18 | inherits = "release" 19 | lto = "yes" 20 | -------------------------------------------------------------------------------- /LICENSE-APACHE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 
29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. 
In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. 
We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | 203 | -------------------------------------------------------------------------------- /LICENSE-MIT: -------------------------------------------------------------------------------- 1 | Copyright (c) 2016 The openat Developers 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy 4 | of this software and associated documentation files (the "Software"), to deal 5 | in the Software without restriction, including without limitation the rights 6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 | copies of the Software, and to permit persons to whom the Software is 8 | furnished to do so, subject to the following conditions: 9 | 10 | The above copyright notice and this permission notice shall be included in all 11 | copies or substantial portions of the Software. 12 | 13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE 16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 19 | SOFTWARE. 20 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # NOTE: THIS REPOSITORY HAS MOVED INTO https://github.com/containers/bootc/ 2 | 3 | This repository has been merged into https://github.com/containers/bootc/ 4 | and will no longer be published as a standalone crate. Everything 5 | below is preserved for historical context. 6 | 7 | The future of ostree and containers/OCI will be driven by bootc. 8 | 9 | --- 10 | 11 | # ostree-ext 12 | 13 | Extension APIs for [ostree](https://github.com/ostreedev/ostree/) that are written in Rust, using the [Rust ostree bindings](https://crates.io/crates/ostree). 14 | 15 | If you are writing tooling that uses ostree and Rust, this crate is intended for you. 16 | However, while the ostree core is very stable, the APIs and data models of this crate 17 | should be considered "slushy". An effort will be made to preserve backwards compatibility 18 | for data written by prior versions (e.g. of tar and container serialization), but 19 | if you choose to use this crate, please [file an issue](https://github.com/ostreedev/ostree-rs-ext/issues) 20 | to let us know. 21 | 22 | At the moment, the following projects are known to use this crate: 23 | 24 | - https://github.com/containers/bootc 25 | - https://github.com/coreos/rpm-ostree 26 | 27 | The intention of this crate is to be where new high level ostree-related features 28 | land. However, at this time it is kept separate from the core C library, which 29 | is in turn separate from the [ostree-rs bindings](https://github.com/ostreedev/ostree-rs).
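Downstream projects like those listed above consume this crate as an ordinary Cargo dependency. A minimal sketch of the downstream side (the version number here is purely illustrative, not a pinned recommendation):

```toml
# Illustrative Cargo.toml fragment for a tool depending on this crate;
# the version shown is an example only.
[dependencies]
ostree-ext = "0.12"
```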
30 | 31 | High level features (more on this below): 32 | 33 | - ostree and [opencontainers/image](https://github.com/opencontainers/image-spec) bridging/integration 34 | - Generalized tar import/export 35 | - APIs to diff ostree commits 36 | 37 | ```mermaid 38 | flowchart TD 39 | ostree-rs-ext --- ostree-rs --- ostree 40 | ostree-rs-ext --- containers-image-proxy-rs --- skopeo --- containers/image 41 | ``` 42 | 43 | For more information on the container stack, see below. 44 | 45 | ## module "tar": tar export/import 46 | 47 | ostree's support for exporting to a tarball is lossy because it doesn't have e.g. commit 48 | metadata. This adds a new export format that is effectively a new custom repository mode 49 | combined with a hardlinked checkout. 50 | 51 | This new export stream can be losslessly imported back into a different repository. 52 | 53 | ### Filesystem layout 54 | 55 | ``` 56 | . 57 | ├── etc # content is at traditional /etc, not /usr/etc 58 | │   └── passwd 59 | ├── sysroot 60 | │   └── ostree # ostree object store with hardlinks to destinations 61 | │   ├── repo 62 | │   │   └── objects 63 | │   │   ├── 00 64 | │   │   └── 8b 65 | │   │   └── 7df143d91c716ecfa5fc1730022f6b421b05cedee8fd52b1fc65a96030ad52.file.xattrs 66 | │   │   └── 7df143d91c716ecfa5fc1730022f6b421b05cedee8fd52b1fc65a96030ad52.file 67 | │   └── xattrs # A new directory with extended attributes, hardlinked with .xattr files 68 | │   └── 58d523efd29244331392770befa2f8bd55b3ef594532d3b8dbf94b70dc72e674 69 | └── usr 70 | ├── bin 71 | │   └── bash 72 | └── lib64 73 | └── libc.so 74 | ``` 75 | 76 | Think of this like a new ostree repository mode `tar-stream` or so, although right now it only holds a single commit. 77 | 78 | A major distinction is the addition of special `.xattr` files; tar variants and support library differ too much for us to rely on this making it through round trips. And further, to support the webserver-in-container we need e.g. 
`security.selinux` to not be changed/overwritten by the container runtime. 79 | 80 | ## module "diff": Compute the difference between two ostree commits 81 | 82 | ```rust 83 | let subdir: Option<&str> = None; 84 | let refname = "fedora/coreos/x86_64/stable"; 85 | let diff = ostree_ext::diff::diff(repo, &format!("{}^", refname), refname, subdir)?; 86 | ``` 87 | 88 | This is used by `rpm-ostree ex apply-live`. 89 | 90 | ## module "container": Bridging between ostree and OCI/Docker images 91 | 92 | 93 | This module contains APIs to bidirectionally map between OSTree commits and the [opencontainers](https://github.com/opencontainers) 94 | ecosystem. 95 | 96 | Because container images are just layers of tarballs, this builds on the [`crate::tar`] module. 97 | 98 | This module builds on [containers-image-proxy-rs](https://github.com/containers/containers-image-proxy-rs) 99 | and [skopeo](https://github.com/containers/skopeo), which in turn is ultimately a frontend 100 | around the [containers/image](https://github.com/containers/image) ecosystem. 101 | 102 | In particular, the `containers/image` library is used to fetch content from remote registries, 103 | which allows building on top of functionality in that library, including signatures, mirroring 104 | and in general a battle-tested codebase for interacting with both OCI and Docker registries. 105 | 106 | ### Encapsulation 107 | 108 | For existing organizations which use ostree, APIs (and a CLI) are provided to "encapsulate" 109 | and "unencapsulate" an OSTree commit as an OCI image. 110 | 111 | ``` 112 | $ ostree-ext-cli container encapsulate --repo=/path/to/repo exampleos/x86_64/stable docker://quay.io/exampleos/exampleos:stable 113 | ``` 114 | You can then e.g. 115 | 116 | ``` 117 | $ podman run --rm -ti --entrypoint bash quay.io/exampleos/exampleos:stable 118 | ``` 119 | 120 | Running the container directly for e.g. CI testing is one use case.
But more importantly, this container image 121 | can be pushed to any registry, and used as part of ostree-based operating system release engineering. 122 | 123 | However, this is a very simplistic model - it currently generates a container image with a single layer, which means 124 | every change requires redownloading that entire layer. More recently, the underlying APIs for generating 125 | container images support "chunked" images. But this requires coding for a specific package/build system. 126 | 127 | A good reference code base for generating "chunked" images is [rpm-ostree compose container-encapsulate](https://coreos.github.io/rpm-ostree/container/#converting-ostree-commits-to-new-base-images). This is used to generate the current [Fedora CoreOS](https://quay.io/repository/fedora/fedora-coreos?tab=tags&tag=latest) 128 | images. 129 | 130 | ### Unencapsulate an ostree-container directly 131 | 132 | A primary goal of this effort is to make pulling a container image directly fully native to an ostree-based operating system. 133 | 134 | The CLI offers a method to "unencapsulate" - fetch a container image in a streaming fashion and 135 | import the embedded OSTree commit. Here, you must use a prefix scheme which defines signature verification. 136 | 137 | - `ostree-remote-image:$remote:$imagereference`: This declares that the OSTree commit embedded in the image reference should be verified using the ostree remote config `$remote`. 138 | - `ostree-image-signed:$imagereference`: Fetch via the containers/image stack, but require *some* signature verification (not via ostree).
139 | - `ostree-unverified-image:$imagereference`: Don't do any signature verification 140 | 141 | ``` 142 | $ ostree-ext-cli container unencapsulate --repo=/ostree/repo ostree-remote-image:someremote:docker://quay.io/exampleos/exampleos:stable 143 | ``` 144 | 145 | But a project like rpm-ostree could hence support: 146 | 147 | ``` 148 | $ rpm-ostree rebase ostree-remote-image:someremote:quay.io/exampleos/exampleos:stable 149 | ``` 150 | 151 | (Along with the usual `rpm-ostree upgrade` knowing to pull that container image) 152 | 153 | 154 | To emphasize this, the current high level model is that this is a one-to-one mapping - an ostree commit 155 | can be exported (wrapped) into a container image, which will have exactly one layer. Upon import 156 | back into an ostree repository, all container metadata except for its digested checksum will be discarded. 157 | 158 | #### Signatures 159 | 160 | OSTree supports GPG and ed25519 signatures natively, and it's expected by default that 161 | when booting from a fetched container image, one verifies ostree-level signatures. 162 | For ostree, a signing configuration is specified via an ostree remote. In order to 163 | pair this configuration with an image reference, this library defines a "URL-like" string schema: 164 | `ostree-remote-registry:<remote name>:<container image reference>` 165 | A concrete instantiation might be e.g.: `ostree-remote-registry:fedora:quay.io/coreos/fedora-coreos:stable` 166 | To parse and generate these strings, see [`OstreeImageReference`]. 167 | 168 | ### Layering 169 | 170 | A key feature of container images is support for layering. This functionality is handled 171 | via a separate [container/store](https://docs.rs/ostree_ext/latest/ostree_ext/container/store/) module.
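In outline, a pull through the store module proceeds by preparing (computing which layers are missing) and then importing. The following is a Rust-flavored pseudocode sketch from memory of the API's shape; the names (`ImageImporter`, `prepare`, `import`, `PrepareResult`) are assumptions and should be verified against the docs.rs page rather than taken as authoritative:

```
// Pseudocode sketch only; not compile-checked.
let imgref = OstreeImageReference::from_str(
    "ostree-unverified-image:docker://quay.io/exampleos/exampleos:stable")?;
let mut importer = ImageImporter::new(&repo, &imgref, Default::default()).await?;
match importer.prepare().await? {
    PrepareResult::AlreadyPresent(state) => { /* image and layers already stored */ }
    PrepareResult::Ready(prep) => {
        // Fetches only the missing layers, then records a merged ostree commit.
        let state = importer.import(prep).await?;
    }
}
```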
172 | 173 | These APIs are also exposed via the CLI: 174 | 175 | ``` 176 | $ ostree-ext-cli container image --help 177 | ostree-ext-cli-container-image 0.4.0-alpha.0 178 | Commands for working with (possibly layered, non-encapsulated) container images 179 | 180 | USAGE: 181 | ostree-ext-cli container image <SUBCOMMAND> 182 | 183 | FLAGS: 184 | -h, --help Prints help information 185 | -V, --version Prints version information 186 | 187 | SUBCOMMANDS: 188 | copy Copy a pulled container image from one repo to another 189 | deploy Perform initial deployment for a container image 190 | help Prints this message or the help of the given subcommand(s) 191 | list List container images 192 | pull Pull (or update) a container image 193 | ``` 194 | 195 | ## More details about ostree and containers 196 | 197 | See [ostree-and-containers.md](ostree-and-containers.md). 198 | -------------------------------------------------------------------------------- /ci/container-build-integration.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Verify `ostree container commit` 3 | set -euo pipefail 4 | 5 | image=quay.io/fedora/fedora-coreos:stable 6 | example=coreos-layering-examples/tailscale 7 | set -x 8 | 9 | chmod a+x ostree-ext-cli 10 | workdir=${PWD} 11 | cd ${example} 12 | cp ${workdir}/ostree-ext-cli . 13 | sed -i -e 's,ostree container commit,ostree-ext-cli container commit,' Containerfile 14 | sed -i -e 's,^\(FROM .*\),\1\nADD ostree-ext-cli /usr/bin/,' Containerfile 15 | git diff 16 | 17 | for runtime in podman docker; do 18 | $runtime build -t localhost/fcos-tailscale -f Containerfile . 19 | $runtime run --rm localhost/fcos-tailscale rpm -q tailscale 20 | done 21 | 22 | cd $(mktemp -d) 23 | cp ${workdir}/ostree-ext-cli .
24 | cat > Containerfile << EOF 25 | FROM $image 26 | ADD ostree-ext-cli /usr/bin/ 27 | RUN set -x; test \$(ostree-ext-cli internal-only-for-testing detect-env) = ostree-container 28 | EOF 29 | # Also verify docker buildx, which apparently doesn't have /.dockerenv 30 | docker buildx build -t localhost/fcos-tailscale -f Containerfile . 31 | 32 | echo ok container image integration 33 | -------------------------------------------------------------------------------- /ci/ima.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Assumes that the current environment is a mutable ostree-container 3 | # with ostree-ext-cli installed in /usr/bin. 4 | # Runs IMA tests. 5 | set -xeuo pipefail 6 | 7 | # https://github.com/ostreedev/ostree-rs-ext/issues/417 8 | mkdir -p /var/tmp 9 | 10 | if test '!' -x /usr/bin/evmctl; then 11 | rpm-ostree install ima-evm-utils 12 | fi 13 | 14 | ostree-ext-cli internal-only-for-testing run-ima 15 | echo ok "ima" 16 | -------------------------------------------------------------------------------- /ci/installdeps.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -xeuo pipefail 3 | 4 | # For some reason dnf copr enable -y says there are no builds? 
5 | cat >/etc/yum.repos.d/coreos-continuous.repo << 'EOF' 6 | [copr:copr.fedorainfracloud.org:group_CoreOS:continuous] 7 | name=Copr repo for continuous owned by @CoreOS 8 | baseurl=https://download.copr.fedorainfracloud.org/results/@CoreOS/continuous/fedora-$releasever-$basearch/ 9 | type=rpm-md 10 | skip_if_unavailable=True 11 | gpgcheck=1 12 | gpgkey=https://download.copr.fedorainfracloud.org/results/@CoreOS/continuous/pubkey.gpg 13 | repo_gpgcheck=0 14 | enabled=1 15 | enabled_metadata=1 16 | EOF 17 | 18 | # Our tests depend on this 19 | dnf -y install skopeo 20 | 21 | # Always pull ostree from updates-testing to avoid the bodhi wait 22 | dnf -y update ostree 23 | -------------------------------------------------------------------------------- /ci/integration.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Assumes that the current environment is a mutable ostree-container 3 | # with ostree-ext-cli installed in /usr/bin. 4 | # Runs integration tests. 5 | set -xeuo pipefail 6 | 7 | # Output an ok message for TAP 8 | n_tap_tests=0 9 | tap_ok() { 10 | echo "ok" "$@" 11 | n_tap_tests=$(($n_tap_tests+1)) 12 | } 13 | 14 | tap_end() { 15 | echo "1..${n_tap_tests}" 16 | } 17 | 18 | env=$(ostree-ext-cli internal-only-for-testing detect-env) 19 | test "${env}" = ostree-container 20 | tap_ok environment 21 | 22 | ostree-ext-cli internal-only-for-testing run 23 | tap_ok integrationtests 24 | 25 | tap_end 26 | -------------------------------------------------------------------------------- /ci/lints.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -xeuo pipefail 3 | tmpf=$(mktemp) 4 | git grep 'dbg!' '*.rs' > ${tmpf} || true 5 | if test -s ${tmpf}; then 6 | echo "Found dbg!" 
1>&2 7 | cat "${tmpf}" 8 | exit 1 9 | fi -------------------------------------------------------------------------------- /ci/priv-integration.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Assumes that the current environment is a privileged container 3 | # with the host mounted at /run/host. We can basically write 4 | # whatever we want, however we can't actually *reboot* the host. 5 | set -euo pipefail 6 | 7 | # https://github.com/ostreedev/ostree-rs-ext/issues/417 8 | mkdir -p /var/tmp 9 | 10 | sysroot=/run/host 11 | # Current stable image fixture 12 | image=quay.io/fedora/fedora-coreos:testing-devel 13 | imgref=ostree-unverified-registry:${image} 14 | stateroot=testos 15 | 16 | # This image was generated manually; TODO auto-generate in quay.io/coreos-assembler or better start sigstore signing our production images 17 | FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE=quay.io/rh_ee_rsaini/coreos 18 | 19 | cd $(mktemp -d -p /var/tmp) 20 | 21 | set -x 22 | 23 | if test '!' -e "${sysroot}/ostree"; then 24 | ostree admin init-fs --modern "${sysroot}" 25 | ostree config --repo $sysroot/ostree/repo set sysroot.bootloader none 26 | fi 27 | if test '!' -d "${sysroot}/ostree/deploy/${stateroot}"; then 28 | ostree admin os-init "${stateroot}" --sysroot "${sysroot}" 29 | fi 30 | # Should be no images pruned 31 | ostree-ext-cli container image prune-images --sysroot "${sysroot}" 32 | # Test the syntax which uses full imgrefs. 
33 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 34 | --stateroot "${stateroot}" --imgref "${imgref}" 35 | ostree admin --sysroot="${sysroot}" status 36 | ostree-ext-cli container image metadata --repo "${sysroot}/ostree/repo" registry:"${image}" > manifest.json 37 | jq '.schemaVersion' < manifest.json 38 | ostree-ext-cli container image remove --repo "${sysroot}/ostree/repo" registry:"${image}" 39 | ostree admin --sysroot="${sysroot}" undeploy 0 40 | # Now test the new syntax which has a nicer --image that defaults to registry. 41 | ostree-ext-cli container image deploy --transport registry --sysroot "${sysroot}" \ 42 | --stateroot "${stateroot}" --image "${image}" 43 | ostree admin --sysroot="${sysroot}" status 44 | ostree admin --sysroot="${sysroot}" undeploy 0 45 | if ostree-ext-cli container image deploy --transport registry --sysroot "${sysroot}" \ 46 | --stateroot "${stateroot}" --image "${image}" --enforce-container-sigpolicy 2>err.txt; then 47 | echo "Deployment with enforced verification succeeded unexpectedly" 1>&2 48 | exit 1 49 | fi 50 | if ! 
grep -Ee 'insecureAcceptAnything.*refusing usage' err.txt; then 51 | echo "unexpected error" 1>&2 52 | cat err.txt; exit 1 53 | fi 54 | # Now we should prune it 55 | ostree-ext-cli container image prune-images --sysroot "${sysroot}" 56 | ostree-ext-cli container image list --repo "${sysroot}/ostree/repo" > out.txt 57 | test $(stat -c '%s' out.txt) = 0 58 | 59 | for img in "${image}"; do 60 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 61 | --stateroot "${stateroot}" --imgref ostree-unverified-registry:"${img}" 62 | ostree admin --sysroot="${sysroot}" status 63 | initial_refs=$(ostree --repo="${sysroot}/ostree/repo" refs | wc -l) 64 | ostree-ext-cli container image remove --repo "${sysroot}/ostree/repo" registry:"${img}" 65 | pruned_refs=$(ostree --repo="${sysroot}/ostree/repo" refs | wc -l) 66 | # Removing the image should only drop the image reference, not its layers 67 | test "$(($initial_refs - 1))" = "$pruned_refs" 68 | ostree admin --sysroot="${sysroot}" undeploy 0 69 | # TODO: when we fold together ostree and ostree-ext, automatically prune layers 70 | n_commits=$(find ${sysroot}/ostree/repo -name '*.commit' | wc -l) 71 | test "${n_commits}" -gt 0 72 | # But right now this still doesn't prune *content* 73 | ostree-ext-cli container image prune-layers --repo="${sysroot}/ostree/repo" 74 | ostree --repo="${sysroot}/ostree/repo" refs > refs.txt 75 | if test "$(wc -l < refs.txt)" -ne 0; then 76 | echo "found refs" 77 | cat refs.txt 78 | exit 1 79 | fi 80 | # And this one should GC the objects too 81 | ostree-ext-cli container image prune-images --full --sysroot="${sysroot}" > out.txt 82 | n_commits=$(find ${sysroot}/ostree/repo -name '*.commit' | wc -l) 83 | test "${n_commits}" -eq 0 84 | done 85 | 86 | # Verify we have systemd journal messages 87 | nsenter -m -t 1 journalctl _COMM=ostree-ext-cli > logs.txt 88 | grep 'layers already present: ' logs.txt 89 | 90 | podman pull ${image} 91 | ostree --repo="${sysroot}/ostree/repo" init --mode=bare-user 92 |
ostree-ext-cli container image pull ${sysroot}/ostree/repo ostree-unverified-image:containers-storage:${image} 93 | echo "ok pulled from containers storage" 94 | 95 | ostree-ext-cli container compare ${imgref} ${imgref} > compare.txt 96 | grep "Removed layers: *0 *Size: 0 bytes" compare.txt 97 | grep "Added layers: *0 *Size: 0 bytes" compare.txt 98 | 99 | mkdir build 100 | cd build 101 | cat >Dockerfile << EOF 102 | FROM ${image} 103 | RUN touch /usr/share/somefile 104 | EOF 105 | systemd-run -dP --wait podman build -t localhost/fcos-derived . 106 | derived_img=oci:/var/tmp/derived.oci 107 | derived_img_dir=dir:/var/tmp/derived.dir 108 | systemd-run -dP --wait skopeo copy containers-storage:localhost/fcos-derived "${derived_img}" 109 | systemd-run -dP --wait skopeo copy "${derived_img}" "${derived_img_dir}" 110 | 111 | # Prune to reset state 112 | ostree refs ostree/container/image --delete 113 | 114 | repo="${sysroot}/ostree/repo" 115 | images=$(ostree container image list --repo "${repo}" | wc -l) 116 | test "${images}" -eq 1 117 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 118 | --stateroot "${stateroot}" --imgref ostree-unverified-image:"${derived_img}" 119 | imgref=$(ostree refs --repo=${repo} ostree/container/image | head -1) 120 | img_commit=$(ostree --repo=${repo} rev-parse ostree/container/image/${imgref}) 121 | ostree-ext-cli container image remove --repo "${repo}" "${derived_img}" 122 | 123 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 124 | --stateroot "${stateroot}" --imgref ostree-unverified-image:"${derived_img}" 125 | img_commit2=$(ostree --repo=${repo} rev-parse ostree/container/image/${imgref}) 126 | test "${img_commit}" = "${img_commit2}" 127 | echo "ok deploy derived container identical revs" 128 | 129 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 130 | --stateroot "${stateroot}" --imgref ostree-unverified-image:"${derived_img_dir}" 131 | echo "ok deploy derived container from local dir" 
132 | ostree-ext-cli container image remove --repo "${repo}" "${derived_img_dir}" 133 | rm -rf /var/tmp/derived.dir 134 | 135 | # Verify policy 136 | 137 | mkdir -p /etc/pki/containers 138 | #Ensure Wrong Public Key fails 139 | cat > /etc/pki/containers/fcos.pub << EOF 140 | -----BEGIN PUBLIC KEY----- 141 | MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEPw/TzXY5FQ00LT2orloOuAbqoOKv 142 | relAN0my/O8tziGvc16PtEhF6A7Eun0/9//AMRZ8BwLn2cORZiQsGd5adA== 143 | -----END PUBLIC KEY----- 144 | EOF 145 | 146 | cat > /etc/containers/registries.d/default.yaml << EOF 147 | docker: 148 | ${FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE}: 149 | use-sigstore-attachments: true 150 | EOF 151 | 152 | cat > /etc/containers/policy.json << EOF 153 | { 154 | "default": [ 155 | { 156 | "type": "reject" 157 | } 158 | ], 159 | "transports": { 160 | "docker": { 161 | "quay.io/fedora/fedora-coreos": [ 162 | { 163 | "type": "insecureAcceptAnything" 164 | } 165 | ], 166 | "${FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE}": [ 167 | { 168 | "type": "sigstoreSigned", 169 | "keyPath": "/etc/pki/containers/fcos.pub", 170 | "signedIdentity": { 171 | "type": "matchRepository" 172 | } 173 | } 174 | ] 175 | 176 | } 177 | } 178 | } 179 | EOF 180 | 181 | if ostree container image pull ${repo} ostree-image-signed:docker://${FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE} 2> error; then 182 | echo "unexpectedly pulled image" 1>&2 183 | exit 1 184 | else 185 | grep -q "invalid signature" error 186 | fi 187 | 188 | #Ensure Correct Public Key succeeds 189 | cat > /etc/pki/containers/fcos.pub << EOF 190 | -----BEGIN PUBLIC KEY----- 191 | MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEREpVb8t/Rp/78fawILAodC6EXGCG 192 | rWNjJoPo7J99cBu5Ui4oCKD+hAHagop7GTi/G3UBP/dtduy2BVdICuBETQ== 193 | -----END PUBLIC KEY----- 194 | EOF 195 | ostree container image pull ${repo} ostree-image-signed:docker://${FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE} 196 | ostree container image history --repo ${repo} docker://${FIXTURE_SIGSTORE_SIGNED_FCOS_IMAGE} 197 | 198 | echo ok privileged integration 199 
| -------------------------------------------------------------------------------- /ci/priv-test-cockpit-selinux.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Assumes that the current environment is a privileged container 3 | # with the host mounted at /run/host. We can basically write 4 | # whatever we want, however we can't actually *reboot* the host. 5 | set -euo pipefail 6 | 7 | sysroot=/run/host 8 | stateroot=test-cockpit 9 | repo=$sysroot/ostree/repo 10 | image=registry.gitlab.com/fedora/bootc/tests/container-fixtures/cockpit 11 | imgref=ostree-unverified-registry:${image} 12 | 13 | cd $(mktemp -d -p /var/tmp) 14 | 15 | set -x 16 | 17 | if test '!' -e "${sysroot}/ostree"; then 18 | ostree admin init-fs --epoch=1 "${sysroot}" 19 | ostree config --repo $repo set sysroot.bootloader none 20 | fi 21 | ostree admin stateroot-init "${stateroot}" --sysroot "${sysroot}" 22 | ostree-ext-cli container image deploy --sysroot "${sysroot}" \ 23 | --stateroot "${stateroot}" --imgref "${imgref}" 24 | ref=$(ostree refs --repo $repo ostree/container/image | head -1) 25 | commit=$(ostree rev-parse --repo $repo ostree/container/image/$ref) 26 | ostree ls --repo $repo -X ${commit} /usr/lib/systemd/system|grep -i cockpit >out.txt 27 | if ! 
grep -q :cockpit_unit_file_t:s0 out.txt; then 28 | echo "failed to find cockpit_unit_file_t" 1>&2 29 | exit 1 30 | fi 31 | 32 | echo ok "derived selinux" 33 | -------------------------------------------------------------------------------- /cli/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | name = "ostree-ext-cli" 3 | version = "0.1.4" 4 | authors = ["Colin Walters <walters@verbum.org>"] 5 | edition = "2021" 6 | license = "MIT OR Apache-2.0" 7 | repository = "https://github.com/ostreedev/ostree-rs-ext" 8 | readme = "../README.md" 9 | publish = false 10 | rust-version = "1.74.0" 11 | 12 | [dependencies] 13 | anyhow = "1.0" 14 | ostree-ext = { path = "../lib" } 15 | clap = "4.2" 16 | libc = "0.2.92" 17 | tokio = { version = "1", features = ["macros"] } 18 | log = "0.4.0" 19 | tracing = "0.1" 20 | tracing-subscriber = "0.3" 21 | 22 | [features] 23 | # A proxy for the library feature 24 | internal-testing-api = ["ostree-ext/internal-testing-api"] 25 | -------------------------------------------------------------------------------- /cli/src/main.rs: -------------------------------------------------------------------------------- 1 | // Good defaults 2 | #![forbid(unused_must_use)] 3 | #![deny(unsafe_code)] 4 | 5 | use anyhow::Result; 6 | 7 | async fn run() -> Result<()> { 8 | tracing_subscriber::fmt::init(); 9 | tracing::trace!("starting"); 10 | ostree_ext::cli::run_from_iter(std::env::args_os()).await 11 | } 12 | 13 | #[tokio::main(flavor = "current_thread")] 14 | async fn main() { 15 | if let Err(e) = run().await { 16 | eprintln!("error: {:#}", e); 17 | std::process::exit(1); 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /deny.toml: -------------------------------------------------------------------------------- 1 | [licenses] 2 | unlicensed = "deny" 3 | allow = ["Apache-2.0", "Apache-2.0 WITH LLVM-exception", "MIT", "BSD-3-Clause", "BSD-2-Clause", "Unicode-DFS-2016"] 4 |
5 | [bans] 6 | 7 | [sources] 8 | unknown-registry = "deny" 9 | unknown-git = "deny" 10 | allow-git = [] 11 | -------------------------------------------------------------------------------- /docs/questions-and-answers.md: -------------------------------------------------------------------------------- 1 | # Questions and answers 2 | 3 | ## module "container": Encapsulate OSTree commits in OCI/Docker images 4 | 5 | ### How is this different from the "tarball-of-archive-repo" approach currently used in RHEL CoreOS? Aren't both encapsulating an OSTree commit in an OCI image? 6 | 7 | - The "tarball-of-archive-repo" approach is essentially just putting an OSTree repo in archive mode under `/srv` as an additional layer over a regular RHEL base image. In the new data format, users can do e.g. `podman run --rm -ti quay.io/fedora/fedora-coreos:stable bash`. This could be quite useful for some tests for OSTree commits (at one point we had a test that literally booted a whole VM to run `rpm -q` - it'd be much cheaper to do those kinds of "OS sanity checks" in a container). 8 | 9 | - The new data format is intentionally designed to be streamed; the files inside the tarball are ordered by (commit, metadata, content ...). With "tarball-of-archive-repo" as is today that's not true, so we need to pull and extract the whole thing to a temporary location, which is inefficient. See also https://github.com/ostreedev/ostree-rs-ext/issues/1. 10 | 11 | - We have a much clearer story for adding Docker/OCI style _derivation_ later. 12 | 13 | - The new data format abstracts away OSTree a bit more and avoids needing people to think about OSTree unnecessarily. 14 | 15 | ### Why pull from a container image instead of the current (older) method of pulling from OSTree repos? 16 | 17 | A good example is for people who want to do offline/disconnected installations and updates. They will almost certainly have container images they want to pull too - now the OS is just another container image. 
Users no longer need to mirror OSTree repos. Overall, as mentioned already, we want to abstract away OSTree a bit more. 18 | 19 | ### Can users view this as a regular container image? 20 | 21 | Yes, and it also provides some extras. In addition to being runnable as a container, if the host is OSTree-based, the host itself can be deployed from (or updated to) this image, too. There is also GPG signing and per-file integrity validation that comes with OSTree. 22 | 23 | ### So then would this OSTree commit in container image also work as a bootimage (bootable from a USB drive)? 24 | 25 | No. Though there could certainly be kernels and initramfses in the (OSTree commit in the) container image, that doesn't make it bootable. OSTree _understands_ bootloaders and can update kernels/initramfs images, but it doesn't update bootloaders; that is [bootupd](https://github.com/coreos/bootupd)'s job. Furthermore, this is still a container image, made of tarballs and manifests; it is not formatted to be a disk image (e.g. it doesn't have a FAT32-formatted ESP). Related to this topic is https://github.com/iximiuz/docker-to-linux, which illustrates the difference between a Docker image and a bootable image. 26 | TL;DR: an OSTree commit in a container image is meant only to deliver OS updates (OSTree commits), not bootable disk images. 27 | 28 | ### How much deduplication do we still get with this new approach? 29 | 30 | Unfortunately, today, we do indeed need to download more than is actually needed, but the files will still be deduplicated on disk, just like before. So we still won't be storing extra files, but we will be downloading extra files. 31 | But for users doing offline mirroring, this shouldn't matter that much. In OpenShift, the entire image is downloaded today, as well. 32 | Nevertheless, see https://github.com/ostreedev/ostree-rs-ext/#integrating-with-future-container-deltas. 33 | 34 | ### Will there be support for "layers" in the OSTree commit in container image?
35 | 36 | Not yet, but, as mentioned above, this opens up the possibility of doing OCI-style derivation, so this could certainly be added later. It would be useful to make this image as familiar to admins as possible. Right now, the ostree-rs-ext client only parses one layer of the container image. 37 | 38 | ### How will mirroring image registries work? 39 | 40 | Since ostree-rs-ext uses skopeo (which uses `containers/image`), mirroring is transparently supported, i.e. admins can configure their mirroring in `containers-registries.conf` and it'll just work. 41 | -------------------------------------------------------------------------------- /lib/Cargo.toml: -------------------------------------------------------------------------------- 1 | [package] 2 | authors = ["Colin Walters <walters@verbum.org>"] 3 | description = "Extension APIs for OSTree" 4 | edition = "2021" 5 | license = "MIT OR Apache-2.0" 6 | name = "ostree-ext" 7 | readme = "../README.md" 8 | repository = "https://github.com/ostreedev/ostree-rs-ext" 9 | version = "0.15.3" 10 | rust-version = "1.74.0" 11 | 12 | [dependencies] 13 | # Note that we re-export the oci-spec types 14 | # that are exported by this crate, so when bumping 15 | # semver here you must also bump our semver. 16 | containers-image-proxy = "0.7.0" 17 | # We re-export this library too.
18 | ostree = { features = ["v2022_6"], version = "0.19.0" } 19 | 20 | # Private dependencies 21 | anyhow = "1.0" 22 | camino = "1.0.4" 23 | chrono = "0.4.19" 24 | olpc-cjson = "0.1.1" 25 | clap = { version= "4.2", features = ["derive"] } 26 | clap_mangen = { version = "0.2", optional = true } 27 | cap-std-ext = "4.0.2" 28 | flate2 = { features = ["zlib"], default-features = false, version = "1.0.20" } 29 | fn-error-context = "0.2.0" 30 | futures-util = "0.3.13" 31 | gvariant = "0.5.0" 32 | hex = "0.4.3" 33 | io-lifetimes = "2" 34 | indicatif = "0.17.0" 35 | once_cell = "1.9" 36 | libc = "0.2.92" 37 | libsystemd = "0.7.0" 38 | openssl = "0.10.33" 39 | ocidir = "0.3.0" 40 | pin-project = "1.0" 41 | regex = "1.5.4" 42 | rustix = { version = "0.38", features = ["fs", "process"] } 43 | serde = { features = ["derive"], version = "1.0.125" } 44 | serde_json = "1.0.64" 45 | tar = "0.4.43" 46 | tempfile = "3.2.0" 47 | terminal_size = "0.3" 48 | tokio = { features = ["io-std", "time", "process", "rt", "net"], version = ">= 1.13.0" } 49 | tokio-util = { features = ["io-util"], version = "0.7" } 50 | tokio-stream = { features = ["sync"], version = "0.1.8" } 51 | tracing = "0.1" 52 | zstd = { version = "0.13.1", features = ["pkg-config"] } 53 | indexmap = { version = "2.2.2", features = ["serde"] } 54 | 55 | indoc = { version = "2", optional = true } 56 | xshell = { version = "0.2", optional = true } 57 | similar-asserts = { version = "1.5.0", optional = true } 58 | 59 | [dev-dependencies] 60 | quickcheck = "1" 61 | # https://github.com/rust-lang/cargo/issues/2911 62 | # https://github.com/rust-lang/rfcs/pull/1956 63 | ostree-ext = { path = ".", features = ["internal-testing-api"] } 64 | 65 | [package.metadata.docs.rs] 66 | features = ["dox"] 67 | 68 | [features] 69 | docgen = ["clap_mangen"] 70 | dox = ["ostree/dox"] 71 | internal-testing-api = ["xshell", "indoc", "similar-asserts"] 72 | -------------------------------------------------------------------------------- 
/lib/README.md: -------------------------------------------------------------------------------- 1 | ../README.md -------------------------------------------------------------------------------- /lib/src/bootabletree.rs: -------------------------------------------------------------------------------- 1 | //! Helper functions for bootable OSTrees. 2 | 3 | use std::path::Path; 4 | 5 | use anyhow::Result; 6 | use camino::Utf8Path; 7 | use camino::Utf8PathBuf; 8 | use cap_std::fs::Dir; 9 | use cap_std_ext::cap_std; 10 | use ostree::gio; 11 | use ostree::prelude::*; 12 | 13 | const MODULES: &str = "usr/lib/modules"; 14 | const VMLINUZ: &str = "vmlinuz"; 15 | 16 | /// Find the kernel modules directory in a bootable OSTree commit. 17 | /// The target directory will have a `vmlinuz` file representing the kernel binary. 18 | pub fn find_kernel_dir( 19 | root: &gio::File, 20 | cancellable: Option<&gio::Cancellable>, 21 | ) -> Result<Option<gio::File>> { 22 | let moddir = root.resolve_relative_path(MODULES); 23 | let e = moddir.enumerate_children( 24 | "standard::name", 25 | gio::FileQueryInfoFlags::NOFOLLOW_SYMLINKS, 26 | cancellable, 27 | )?; 28 | let mut r = None; 29 | for child in e.clone() { 30 | let child = &child?; 31 | if child.file_type() != gio::FileType::Directory { 32 | continue; 33 | } 34 | let childpath = e.child(child); 35 | let vmlinuz = childpath.child(VMLINUZ); 36 | if !vmlinuz.query_exists(cancellable) { 37 | continue; 38 | } 39 | if r.replace(childpath).is_some() { 40 | anyhow::bail!("Found multiple subdirectories in {}", MODULES); 41 | } 42 | } 43 | Ok(r) 44 | } 45 | 46 | fn read_dir_optional( 47 | d: &Dir, 48 | p: impl AsRef<Path>, 49 | ) -> std::io::Result<Option<cap_std::fs::ReadDir>> { 50 | match d.read_dir(p.as_ref()) { 51 | Ok(r) => Ok(Some(r)), 52 | Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None), 53 | Err(e) => Err(e), 54 | } 55 | } 56 | 57 | /// Find the kernel modules directory in a checked-out directory tree.
58 | /// The target directory will have a `vmlinuz` file representing the kernel binary. 59 | pub fn find_kernel_dir_fs(root: &Dir) -> Result<Option<Utf8PathBuf>> { 60 | let mut r = None; 61 | let entries = if let Some(entries) = read_dir_optional(root, MODULES)? { 62 | entries 63 | } else { 64 | return Ok(None); 65 | }; 66 | for child in entries { 67 | let child = &child?; 68 | if !child.file_type()?.is_dir() { 69 | continue; 70 | } 71 | let name = child.file_name(); 72 | let name = if let Some(n) = name.to_str() { 73 | n 74 | } else { 75 | continue; 76 | }; 77 | let mut pbuf = Utf8Path::new(MODULES).to_owned(); 78 | pbuf.push(name); 79 | pbuf.push(VMLINUZ); 80 | if !root.try_exists(&pbuf)? { 81 | continue; 82 | } 83 | pbuf.pop(); 84 | if r.replace(pbuf).is_some() { 85 | anyhow::bail!("Found multiple subdirectories in {}", MODULES); 86 | } 87 | } 88 | Ok(r) 89 | } 90 | 91 | #[cfg(test)] 92 | mod test { 93 | use super::*; 94 | use cap_std_ext::{cap_std, cap_tempfile}; 95 | 96 | #[test] 97 | fn test_find_kernel_dir_fs() -> Result<()> { 98 | let td = cap_tempfile::tempdir(cap_std::ambient_authority())?; 99 | 100 | // Verify the empty case 101 | assert!(find_kernel_dir_fs(&td).unwrap().is_none()); 102 | let moddir = Utf8Path::new("usr/lib/modules"); 103 | td.create_dir_all(moddir)?; 104 | assert!(find_kernel_dir_fs(&td).unwrap().is_none()); 105 | 106 | let kpath = moddir.join("5.12.8-32.aarch64"); 107 | td.create_dir_all(&kpath)?; 108 | td.write(kpath.join("vmlinuz"), "some kernel")?; 109 | let kpath2 = moddir.join("5.13.7-44.aarch64"); 110 | td.create_dir_all(&kpath2)?; 111 | td.write(kpath2.join("foo.ko"), "some kmod")?; 112 | 113 | assert_eq!( 114 | find_kernel_dir_fs(&td) 115 | .unwrap() 116 | .unwrap() 117 | .file_name() 118 | .unwrap(), 119 | kpath.file_name().unwrap() 120 | ); 121 | 122 | Ok(()) 123 | } 124 | } 125 | -------------------------------------------------------------------------------- /lib/src/commit.rs:
-------------------------------------------------------------------------------- 1 | //! This module contains the functions to implement the commit 2 | //! procedures as part of building an ostree container image. 3 | //! 4 | 5 | use crate::container_utils::require_ostree_container; 6 | use crate::mountutil::is_mountpoint; 7 | use anyhow::Context; 8 | use anyhow::Result; 9 | use cap_std::fs::Dir; 10 | use cap_std::fs::MetadataExt; 11 | use cap_std_ext::cap_std; 12 | use cap_std_ext::dirext::CapStdExtDirExt; 13 | use std::path::Path; 14 | use std::path::PathBuf; 15 | use tokio::task; 16 | 17 | /// Directories for which we will always remove all content. 18 | const FORCE_CLEAN_PATHS: &[&str] = &["run", "tmp", "var/tmp", "var/cache"]; 19 | 20 | /// Recursively remove the target directory, but avoid traversing across mount points. 21 | fn remove_all_on_mount_recurse(root: &Dir, rootdev: u64, path: &Path) -> Result<bool> { 22 | let mut skipped = false; 23 | for entry in root 24 | .read_dir(path) 25 | .with_context(|| format!("Reading {path:?}"))? 26 | { 27 | let entry = entry?; 28 | let metadata = entry.metadata()?; 29 | if metadata.dev() != rootdev { 30 | skipped = true; 31 | continue; 32 | } 33 | let name = entry.file_name(); 34 | let path = &path.join(name); 35 | 36 | if metadata.is_dir() { 37 | skipped |= remove_all_on_mount_recurse(root, rootdev, path.as_path())?; 38 | } else { 39 | root.remove_file(path) 40 | .with_context(|| format!("Removing {path:?}"))?; 41 | } 42 | } 43 | if !skipped { 44 | root.remove_dir(path) 45 | .with_context(|| format!("Removing {path:?}"))?; 46 | } 47 | Ok(skipped) 48 | } 49 | 50 | fn clean_subdir(root: &Dir, rootdev: u64) -> Result<()> { 51 | for entry in root.entries()? { 52 | let entry = entry?; 53 | let metadata = entry.metadata()?; 54 | let dev = metadata.dev(); 55 | let path = PathBuf::from(entry.file_name()); 56 | // Ignore other filesystem mounts, e.g.
podman injects /run/.containerenv 57 | if dev != rootdev { 58 | tracing::trace!("Skipping entry in foreign dev {path:?}"); 59 | continue; 60 | } 61 | // Also ignore bind mounts, if we have a new enough kernel with statx() 62 | // that will tell us. 63 | if is_mountpoint(root, &path)?.unwrap_or_default() { 64 | tracing::trace!("Skipping mount point {path:?}"); 65 | continue; 66 | } 67 | if metadata.is_dir() { 68 | remove_all_on_mount_recurse(root, rootdev, &path)?; 69 | } else { 70 | root.remove_file(&path) 71 | .with_context(|| format!("Removing {path:?}"))?; 72 | } 73 | } 74 | Ok(()) 75 | } 76 | 77 | fn clean_paths_in(root: &Dir, rootdev: u64) -> Result<()> { 78 | for path in FORCE_CLEAN_PATHS { 79 | let subdir = if let Some(subdir) = root.open_dir_optional(path)? { 80 | subdir 81 | } else { 82 | continue; 83 | }; 84 | clean_subdir(&subdir, rootdev).with_context(|| format!("Cleaning {path}"))?; 85 | } 86 | Ok(()) 87 | } 88 | 89 | /// Given a root filesystem, clean out empty directories and warn about 90 | /// files in /var. /run, /tmp, and /var/tmp have their contents recursively cleaned. 91 | pub fn prepare_ostree_commit_in(root: &Dir) -> Result<()> { 92 | let rootdev = root.dir_metadata()?.dev(); 93 | clean_paths_in(root, rootdev) 94 | } 95 | 96 | /// Like [`prepare_ostree_commit_in`] but only emits warnings about unsupported 97 | /// files in `/var` and will not error. 98 | pub fn prepare_ostree_commit_in_nonstrict(root: &Dir) -> Result<()> { 99 | let rootdev = root.dir_metadata()?.dev(); 100 | clean_paths_in(root, rootdev) 101 | } 102 | 103 | /// Entrypoint to the commit procedures, initially we just 104 | /// have one validation but we expect more in the future. 105 | pub(crate) async fn container_commit() -> Result<()> { 106 | task::spawn_blocking(move || { 107 | require_ostree_container()?; 108 | let rootdir = Dir::open_ambient_dir("/", cap_std::ambient_authority())?; 109 | prepare_ostree_commit_in(&rootdir) 110 | }) 111 | .await? 
112 | } 113 | 114 | #[cfg(test)] 115 | mod tests { 116 | use super::*; 117 | use camino::Utf8Path; 118 | 119 | use cap_std_ext::cap_tempfile; 120 | 121 | #[test] 122 | fn commit() -> Result<()> { 123 | let td = &cap_tempfile::tempdir(cap_std::ambient_authority())?; 124 | 125 | // Handle the empty case 126 | prepare_ostree_commit_in(td).unwrap(); 127 | prepare_ostree_commit_in_nonstrict(td).unwrap(); 128 | 129 | let var = Utf8Path::new("var"); 130 | let run = Utf8Path::new("run"); 131 | let tmp = Utf8Path::new("tmp"); 132 | let vartmp_foobar = &var.join("tmp/foo/bar"); 133 | let runsystemd = &run.join("systemd"); 134 | let resolvstub = &runsystemd.join("resolv.conf"); 135 | 136 | for p in [var, run, tmp] { 137 | td.create_dir(p)?; 138 | } 139 | 140 | td.create_dir_all(vartmp_foobar)?; 141 | td.write(vartmp_foobar.join("a"), "somefile")?; 142 | td.write(vartmp_foobar.join("b"), "somefile2")?; 143 | td.create_dir_all(runsystemd)?; 144 | td.write(resolvstub, "stub resolv")?; 145 | prepare_ostree_commit_in(td).unwrap(); 146 | assert!(td.try_exists(var)?); 147 | assert!(td.try_exists(var.join("tmp"))?); 148 | assert!(!td.try_exists(vartmp_foobar)?); 149 | assert!(td.try_exists(run)?); 150 | assert!(!td.try_exists(runsystemd)?); 151 | 152 | let systemd = run.join("systemd"); 153 | td.create_dir_all(&systemd)?; 154 | prepare_ostree_commit_in(td).unwrap(); 155 | assert!(td.try_exists(var)?); 156 | assert!(!td.try_exists(&systemd)?); 157 | 158 | td.remove_dir_all(var)?; 159 | td.create_dir(var)?; 160 | td.write(var.join("foo"), "somefile")?; 161 | prepare_ostree_commit_in(td).unwrap(); 162 | // Right now we don't auto-create var/tmp if it didn't exist, but maybe 163 | // we will in the future. 
164 | assert!(!td.try_exists(var.join("tmp"))?); 165 | assert!(td.try_exists(var)?); 166 | 167 | td.write(var.join("foo"), "somefile")?; 168 | prepare_ostree_commit_in_nonstrict(td).unwrap(); 169 | assert!(td.try_exists(var)?); 170 | 171 | let nested = Utf8Path::new("var/lib/nested"); 172 | td.create_dir_all(nested)?; 173 | td.write(nested.join("foo"), "test1")?; 174 | td.write(nested.join("foo2"), "test2")?; 175 | prepare_ostree_commit_in(td).unwrap(); 176 | assert!(td.try_exists(var)?); 177 | assert!(td.try_exists(nested)?); 178 | 179 | Ok(()) 180 | } 181 | } 182 | -------------------------------------------------------------------------------- /lib/src/container/deploy.rs: -------------------------------------------------------------------------------- 1 | //! Perform initial setup for a container image based system root 2 | 3 | use std::collections::HashSet; 4 | 5 | use anyhow::Result; 6 | use fn_error_context::context; 7 | use ostree::glib; 8 | 9 | use super::store::{gc_image_layers, LayeredImageState}; 10 | use super::{ImageReference, OstreeImageReference}; 11 | use crate::container::store::PrepareResult; 12 | use crate::keyfileext::KeyFileExt; 13 | use crate::sysroot::SysrootLock; 14 | 15 | /// The key in the OSTree origin which holds a serialized [`super::OstreeImageReference`]. 16 | pub const ORIGIN_CONTAINER: &str = "container-image-reference"; 17 | 18 | /// The name of the default stateroot. 19 | // xref https://github.com/ostreedev/ostree/issues/2794 20 | pub const STATEROOT_DEFAULT: &str = "default"; 21 | 22 | /// Options configuring deployment. 23 | #[derive(Debug, Default)] 24 | #[non_exhaustive] 25 | pub struct DeployOpts<'a> { 26 | /// Kernel arguments to use. 27 | pub kargs: Option<&'a [&'a str]>, 28 | /// Target image reference, as distinct from the source. 29 | /// 30 | /// In many cases, one may want a workflow where a system is provisioned from 31 | /// an image with a specific digest (e.g. `quay.io/example/os@sha256:...) 
for 32 | /// reproducibility. However, one would want `ostree admin upgrade` to fetch 33 | /// `quay.io/example/os:latest`. 34 | /// 35 | /// To implement this, use this option for the latter `:latest` tag. 36 | pub target_imgref: Option<&'a OstreeImageReference>, 37 | 38 | /// Configuration for fetching containers. 39 | pub proxy_cfg: Option<containers_image_proxy::ImageProxyConfig>, 40 | 41 | /// If true, then no image reference will be written; but there will be refs 42 | /// for the fetched layers. This ensures that if the machine is later updated 43 | /// to a different container image, the fetch process will reuse shared layers, but 44 | /// it will not be necessary to remove the previous image. 45 | pub no_imgref: bool, 46 | 47 | /// Do not clean up deployments 48 | pub no_clean: bool, 49 | } 50 | 51 | /// Write a container image to an OSTree deployment. 52 | /// 53 | /// This API is currently intended for only an initial deployment. 54 | #[context("Performing deployment")] 55 | pub async fn deploy( 56 | sysroot: &ostree::Sysroot, 57 | stateroot: &str, 58 | imgref: &OstreeImageReference, 59 | options: Option<DeployOpts<'_>>, 60 | ) -> Result<Box<LayeredImageState>> { 61 | let cancellable = ostree::gio::Cancellable::NONE; 62 | let options = options.unwrap_or_default(); 63 | let repo = &sysroot.repo(); 64 | let merge_deployment = sysroot.merge_deployment(Some(stateroot)); 65 | let mut imp = 66 | super::store::ImageImporter::new(repo, imgref, options.proxy_cfg.unwrap_or_default()) 67 | .await?; 68 | imp.require_bootable(); 69 | if let Some(target) = options.target_imgref { 70 | imp.set_target(target); 71 | } 72 | if options.no_imgref { 73 | imp.set_no_imgref(); 74 | } 75 | let state = match imp.prepare().await? { 76 | PrepareResult::AlreadyPresent(r) => r, 77 | PrepareResult::Ready(prep) => { 78 | if let Some(warning) = prep.deprecated_warning() { 79 | crate::cli::print_deprecated_warning(warning).await; 80 | } 81 | 82 | imp.import(prep).await?
83 | } 84 | }; 85 | let commit = state.merge_commit.as_str(); 86 | let origin = glib::KeyFile::new(); 87 | let target_imgref = options.target_imgref.unwrap_or(imgref); 88 | origin.set_string("origin", ORIGIN_CONTAINER, &target_imgref.to_string()); 89 | 90 | let opts = ostree::SysrootDeployTreeOpts { 91 | override_kernel_argv: options.kargs, 92 | ..Default::default() 93 | }; 94 | 95 | if sysroot.booted_deployment().is_some() { 96 | sysroot.stage_tree_with_options( 97 | Some(stateroot), 98 | commit, 99 | Some(&origin), 100 | merge_deployment.as_ref(), 101 | &opts, 102 | cancellable, 103 | )?; 104 | } else { 105 | let deployment = &sysroot.deploy_tree_with_options( 106 | Some(stateroot), 107 | commit, 108 | Some(&origin), 109 | merge_deployment.as_ref(), 110 | Some(&opts), 111 | cancellable, 112 | )?; 113 | let flags = if options.no_clean { 114 | ostree::SysrootSimpleWriteDeploymentFlags::NO_CLEAN 115 | } else { 116 | ostree::SysrootSimpleWriteDeploymentFlags::NONE 117 | }; 118 | sysroot.simple_write_deployment( 119 | Some(stateroot), 120 | deployment, 121 | merge_deployment.as_ref(), 122 | flags, 123 | cancellable, 124 | )?; 125 | if !options.no_clean { 126 | sysroot.cleanup(cancellable)?; 127 | } 128 | } 129 | 130 | Ok(state) 131 | } 132 | 133 | /// Query the container image reference for a deployment 134 | fn deployment_origin_container( 135 | deploy: &ostree::Deployment, 136 | ) -> Result<Option<OstreeImageReference>> { 137 | let origin = deploy 138 | .origin() 139 | .map(|o| o.optional_string("origin", ORIGIN_CONTAINER)) 140 | .transpose()? 141 | .flatten(); 142 | let r = origin 143 | .map(|v| OstreeImageReference::try_from(v.as_str())) 144 | .transpose()?; 145 | Ok(r) 146 | } 147 | 148 | /// Remove all container images which are not the target of a deployment. 149 | /// This acts equivalently to [`super::store::remove_images()`] - the underlying layers 150 | /// are not pruned. 151 | /// 152 | /// The set of removed images is returned.
153 | pub fn remove_undeployed_images(sysroot: &SysrootLock) -> Result<Vec<ImageReference>> { 154 | let repo = &sysroot.repo(); 155 | let deployment_origins: Result<HashSet<ImageReference>> = sysroot 156 | .deployments() 157 | .into_iter() 158 | .filter_map(|deploy| { 159 | deployment_origin_container(&deploy) 160 | .map(|v| v.map(|v| v.imgref)) 161 | .transpose() 162 | }) 163 | .collect(); 164 | let deployment_origins = deployment_origins?; 165 | // TODO add an API that returns ImageReference instead 166 | let all_images = super::store::list_images(&sysroot.repo())? 167 | .into_iter() 168 | .filter_map(|img| ImageReference::try_from(img.as_str()).ok()); 169 | let mut removed = Vec::new(); 170 | for image in all_images { 171 | if !deployment_origins.contains(&image) { 172 | super::store::remove_image(repo, &image)?; 173 | removed.push(image); 174 | } 175 | } 176 | Ok(removed) 177 | } 178 | 179 | /// The result of a prune operation 180 | #[derive(Debug, Clone, PartialEq, Eq)] 181 | pub struct Pruned { 182 | /// The number of images that were pruned 183 | pub n_images: u32, 184 | /// The number of image layers that were pruned 185 | pub n_layers: u32, 186 | /// The number of OSTree objects that were pruned 187 | pub n_objects_pruned: u32, 188 | /// The total size of pruned objects 189 | pub objsize: u64, 190 | } 191 | 192 | impl Pruned { 193 | /// Whether this prune was a no-op (i.e. no images, layers or objects were pruned). 194 | pub fn is_empty(&self) -> bool { 195 | self.n_images == 0 && self.n_layers == 0 && self.n_objects_pruned == 0 196 | } 197 | } 198 | 199 | /// This combines the functionality of [`remove_undeployed_images()`] with [`super::store::gc_image_layers()`]. 200 | pub fn prune(sysroot: &SysrootLock) -> Result<Pruned> { 201 | let repo = &sysroot.repo(); 202 | // Prune container images which are not deployed. 203 | // SAFETY: There should never be more than u32 images 204 | let n_images = remove_undeployed_images(sysroot)?.len().try_into().unwrap(); 205 | // Prune unreferenced layer branches.
206 | let n_layers = gc_image_layers(repo)?; 207 | // Prune the objects in the repo; the above just removed refs (branches). 208 | let (_, n_objects_pruned, objsize) = repo.prune( 209 | ostree::RepoPruneFlags::REFS_ONLY, 210 | 0, 211 | ostree::gio::Cancellable::NONE, 212 | )?; 213 | // SAFETY: The number of pruned objects should never be negative 214 | let n_objects_pruned = u32::try_from(n_objects_pruned).unwrap(); 215 | Ok(Pruned { 216 | n_images, 217 | n_layers, 218 | n_objects_pruned, 219 | objsize, 220 | }) 221 | } 222 | -------------------------------------------------------------------------------- /lib/src/container/skopeo.rs: -------------------------------------------------------------------------------- 1 | //! Fork skopeo as a subprocess 2 | 3 | use super::ImageReference; 4 | use anyhow::{Context, Result}; 5 | use cap_std_ext::cmdext::CapStdExtCommandExt; 6 | use containers_image_proxy::oci_spec::image as oci_image; 7 | use fn_error_context::context; 8 | use io_lifetimes::OwnedFd; 9 | use serde::Deserialize; 10 | use std::io::Read; 11 | use std::path::Path; 12 | use std::process::Stdio; 13 | use std::str::FromStr; 14 | use tokio::process::Command; 15 | 16 | // See `man containers-policy.json` and 17 | // https://github.com/containers/image/blob/main/signature/policy_types.go 18 | // Ideally we add something like `skopeo pull --disallow-insecure-accept-anything` 19 | // but for now we parse the policy. 
20 | const POLICY_PATH: &str = "/etc/containers/policy.json"; 21 | const INSECURE_ACCEPT_ANYTHING: &str = "insecureAcceptAnything"; 22 | 23 | #[derive(Deserialize)] 24 | struct PolicyEntry { 25 | #[serde(rename = "type")] 26 | ty: String, 27 | } 28 | #[derive(Deserialize)] 29 | struct ContainerPolicy { 30 | default: Option<Vec<PolicyEntry>>, 31 | } 32 | 33 | impl ContainerPolicy { 34 | fn is_default_insecure(&self) -> bool { 35 | if let Some(default) = self.default.as_deref() { 36 | match default.split_first() { 37 | Some((v, &[])) => v.ty == INSECURE_ACCEPT_ANYTHING, 38 | _ => false, 39 | } 40 | } else { 41 | false 42 | } 43 | } 44 | } 45 | 46 | pub(crate) fn container_policy_is_default_insecure() -> Result<bool> { 47 | let r = std::io::BufReader::new(std::fs::File::open(POLICY_PATH)?); 48 | let policy: ContainerPolicy = serde_json::from_reader(r)?; 49 | Ok(policy.is_default_insecure()) 50 | } 51 | 52 | /// Create a Command builder for skopeo. 53 | pub(crate) fn new_cmd() -> std::process::Command { 54 | let mut cmd = std::process::Command::new("skopeo"); 55 | cmd.stdin(Stdio::null()); 56 | cmd 57 | } 58 | 59 | /// Spawn the child process 60 | pub(crate) fn spawn(mut cmd: Command) -> Result<tokio::process::Child> { 61 | let cmd = cmd.stdin(Stdio::null()).stderr(Stdio::piped()); 62 | cmd.spawn().context("Failed to exec skopeo") 63 | } 64 | 65 | /// Use skopeo to copy a container image.
66 | #[context("Skopeo copy")] 67 | pub(crate) async fn copy( 68 | src: &ImageReference, 69 | dest: &ImageReference, 70 | authfile: Option<&Path>, 71 | add_fd: Option<(std::sync::Arc<OwnedFd>, i32)>, 72 | progress: bool, 73 | ) -> Result<oci_image::Digest> { 74 | let digestfile = tempfile::NamedTempFile::new()?; 75 | let mut cmd = new_cmd(); 76 | cmd.arg("copy"); 77 | if !progress { 78 | cmd.stdout(std::process::Stdio::null()); 79 | } 80 | cmd.arg("--digestfile"); 81 | cmd.arg(digestfile.path()); 82 | if let Some((add_fd, n)) = add_fd { 83 | cmd.take_fd_n(add_fd, n); 84 | } 85 | if let Some(authfile) = authfile { 86 | cmd.arg("--authfile"); 87 | cmd.arg(authfile); 88 | } 89 | cmd.args(&[src.to_string(), dest.to_string()]); 90 | let mut cmd = tokio::process::Command::from(cmd); 91 | cmd.kill_on_drop(true); 92 | let proc = super::skopeo::spawn(cmd)?; 93 | let output = proc.wait_with_output().await?; 94 | if !output.status.success() { 95 | let stderr = String::from_utf8_lossy(&output.stderr); 96 | return Err(anyhow::anyhow!("skopeo failed: {}\n", stderr)); 97 | } 98 | let mut digestfile = digestfile.into_file(); 99 | let mut r = String::new(); 100 | digestfile.read_to_string(&mut r)?; 101 | Ok(oci_image::Digest::from_str(r.trim())?) 102 | } 103 | 104 | #[cfg(test)] 105 | mod tests { 106 | use super::*; 107 | 108 | // Default value as of the Fedora 34 containers-common-1-21.fc34.noarch package. 109 | const DEFAULT_POLICY: &str = indoc::indoc! {r#" 110 | { 111 | "default": [ 112 | { 113 | "type": "insecureAcceptAnything" 114 | } 115 | ], 116 | "transports": 117 | { 118 | "docker-daemon": 119 | { 120 | "": [{"type":"insecureAcceptAnything"}] 121 | } 122 | } 123 | } 124 | "#}; 125 | 126 | // Stripped down copy from the manual. 127 | const REASONABLY_LOCKED_DOWN: &str = indoc::indoc!
{ r#" 128 | { 129 | "default": [{"type": "reject"}], 130 | "transports": { 131 | "dir": { 132 | "": [{"type": "insecureAcceptAnything"}] 133 | }, 134 | "atomic": { 135 | "hostname:5000/myns/official": [ 136 | { 137 | "type": "signedBy", 138 | "keyType": "GPGKeys", 139 | "keyPath": "/path/to/official-pubkey.gpg" 140 | } 141 | ] 142 | } 143 | } 144 | } 145 | "#}; 146 | 147 | #[test] 148 | fn policy_is_insecure() { 149 | let p: ContainerPolicy = serde_json::from_str(DEFAULT_POLICY).unwrap(); 150 | assert!(p.is_default_insecure()); 151 | for &v in &["{}", REASONABLY_LOCKED_DOWN] { 152 | let p: ContainerPolicy = serde_json::from_str(v).unwrap(); 153 | assert!(!p.is_default_insecure()); 154 | } 155 | } 156 | } 157 | -------------------------------------------------------------------------------- /lib/src/container/tests/it/fixtures/exampleos.tar.zst: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ostreedev/ostree-rs-ext/b885f9468829e6af41fa95504d65975c48378d4d/lib/src/container/tests/it/fixtures/exampleos.tar.zst -------------------------------------------------------------------------------- /lib/src/container/unencapsulate.rs: -------------------------------------------------------------------------------- 1 | //! APIs for "unencapsulating" OSTree commits from container images 2 | //! 3 | //! This code only operates on container images that were created via 4 | //! [`encapsulate`]. 5 | //! 6 | //! # External dependency on container-image-proxy 7 | //! 8 | //! This code requires skopeo (with container-image-proxy support) 9 | //! installed as a binary in $PATH. 10 | //! 11 | //! The rationale for this is that while there exist Rust crates to speak 12 | //! the Docker distribution API, the Go library (github.com/containers/image) 13 | //! supports key things we want for production use like: 14 | //! 15 | //! - Image mirroring and remapping; effectively `man containers-registries.conf` 16 | //!
For example, we need to support an administrator mirroring an ostree-container 17 | //! into a disconnected registry, without changing all the pull specs. 18 | //! - Signing 19 | //! 20 | //! Additionally, the proxy "upconverts" manifests into OCI, so we don't need to care 21 | //! about parsing the Docker manifest format (as used by most registries still). 22 | //! 23 | //! [`encapsulate`]: [`super::encapsulate()`] 24 | 25 | // # Implementation 26 | // 27 | // First, we support explicitly fetching just the manifest: https://github.com/opencontainers/image-spec/blob/main/manifest.md 28 | // This will give us information about the layers it contains, and crucially the digest (sha256) of 29 | // the manifest is how higher level software can detect changes. 30 | // 31 | // Once we have the manifest, we expect it to point to a single `application/vnd.oci.image.layer.v1.tar+gzip` layer, 32 | // which is exactly what is exported by the [`crate::tar::export`] process. 33 | 34 | use crate::container::store::LayerProgress; 35 | 36 | use super::*; 37 | use containers_image_proxy::{ImageProxy, OpenedImage}; 38 | use fn_error_context::context; 39 | use futures_util::{Future, FutureExt}; 40 | use oci_spec::image::{self as oci_image, Digest}; 41 | use std::io::Read; 42 | use std::sync::{Arc, Mutex}; 43 | use tokio::{ 44 | io::{AsyncBufRead, AsyncRead}, 45 | sync::watch::{Receiver, Sender}, 46 | }; 47 | use tracing::instrument; 48 | 49 | /// The legacy MIME type returned by the skopeo/(containers/storage) code 50 | /// when we have a local uncompressed docker-formatted image. 51 | /// TODO: change the skopeo code to shield us from this correctly 52 | const DOCKER_TYPE_LAYER_TAR: &str = "application/vnd.docker.image.rootfs.diff.tar"; 53 | 54 | type Progress = tokio::sync::watch::Sender<u64>; 55 | 56 | /// A read wrapper that updates the download progress.
57 | #[pin_project::pin_project] 58 | #[derive(Debug)] 59 | pub(crate) struct ProgressReader<T> { 60 | #[pin] 61 | pub(crate) reader: T, 62 | #[pin] 63 | pub(crate) progress: Arc<Mutex<Progress>>, 64 | } 65 | 66 | impl<T: AsyncRead> ProgressReader<T> { 67 | pub(crate) fn new(reader: T) -> (Self, Receiver<u64>) { 68 | let (progress, r) = tokio::sync::watch::channel(1); 69 | let progress = Arc::new(Mutex::new(progress)); 70 | (ProgressReader { reader, progress }, r) 71 | } 72 | } 73 | 74 | impl<T: AsyncRead> AsyncRead for ProgressReader<T> { 75 | fn poll_read( 76 | self: std::pin::Pin<&mut Self>, 77 | cx: &mut std::task::Context<'_>, 78 | buf: &mut tokio::io::ReadBuf<'_>, 79 | ) -> std::task::Poll<std::io::Result<()>> { 80 | let this = self.project(); 81 | let len = buf.filled().len(); 82 | match this.reader.poll_read(cx, buf) { 83 | v @ std::task::Poll::Ready(Ok(_)) => { 84 | let progress = this.progress.lock().unwrap(); 85 | let state = { 86 | let mut state = *progress.borrow(); 87 | let newlen = buf.filled().len(); 88 | debug_assert!(newlen >= len); 89 | let read = (newlen - len) as u64; 90 | state += read; 91 | state 92 | }; 93 | // Ignore errors, if the caller disconnected from progress that's OK. 94 | let _ = progress.send(state); 95 | v 96 | } 97 | o => o, 98 | } 99 | } 100 | } 101 | 102 | async fn fetch_manifest_impl( 103 | proxy: &mut ImageProxy, 104 | imgref: &OstreeImageReference, 105 | ) -> Result<(oci_image::ImageManifest, oci_image::Digest)> { 106 | let oi = &proxy.open_image(&imgref.imgref.to_string()).await?; 107 | let (digest, manifest) = proxy.fetch_manifest(oi).await?; 108 | proxy.close_image(oi).await?; 109 | Ok((manifest, oci_image::Digest::from_str(digest.as_str())?)) 110 | } 111 | 112 | /// Download the manifest for a target image and its sha256 digest.
113 | #[context("Fetching manifest")] 114 | pub async fn fetch_manifest( 115 | imgref: &OstreeImageReference, 116 | ) -> Result<(oci_image::ImageManifest, oci_image::Digest)> { 117 | let mut proxy = ImageProxy::new().await?; 118 | fetch_manifest_impl(&mut proxy, imgref).await 119 | } 120 | 121 | /// Download the manifest for a target image and its sha256 digest, as well as the image configuration. 122 | #[context("Fetching manifest and config")] 123 | pub async fn fetch_manifest_and_config( 124 | imgref: &OstreeImageReference, 125 | ) -> Result<( 126 | oci_image::ImageManifest, 127 | oci_image::Digest, 128 | oci_image::ImageConfiguration, 129 | )> { 130 | let proxy = ImageProxy::new().await?; 131 | let oi = &proxy.open_image(&imgref.imgref.to_string()).await?; 132 | let (digest, manifest) = proxy.fetch_manifest(oi).await?; 133 | let digest = oci_image::Digest::from_str(&digest)?; 134 | let config = proxy.fetch_config(oi).await?; 135 | Ok((manifest, digest, config)) 136 | } 137 | 138 | /// The result of an import operation 139 | #[derive(Debug)] 140 | pub struct Import { 141 | /// The ostree commit that was imported 142 | pub ostree_commit: String, 143 | /// The image digest retrieved 144 | pub image_digest: Digest, 145 | 146 | /// Any deprecation warning 147 | pub deprecated_warning: Option<String>, 148 | } 149 | 150 | /// Use this to process potential errors from a worker and a driver. 151 | /// This is really a brutal hack around the fact that an error can occur 152 | /// on either our side or in the proxy. But if an error occurs on our 153 | /// side, then we will close the pipe, which will *also* cause the proxy 154 | /// to error out. 155 | /// 156 | /// What we really want is for the proxy to tell us when it got an 157 | /// error from us closing the pipe. Or, we could store that state 158 | /// on our side. Both are slightly tricky, so we have this (again) 159 | /// hacky thing where we just search for `broken pipe` in the error text.
160 | /// 161 | /// Or to restate all of the above - what this function does is check 162 | /// to see if the worker function had an error *and* if the proxy 163 | /// had an error, but if the proxy's error ends in `broken pipe` 164 | /// then it means the only real error is from the worker. 165 | pub(crate) async fn join_fetch<T>( 166 | worker: impl Future<Output = Result<T>>, 167 | driver: impl Future<Output = Result<()>>, 168 | ) -> Result<T> { 169 | let (worker, driver) = tokio::join!(worker, driver); 170 | match (worker, driver) { 171 | (Ok(t), Ok(())) => Ok(t), 172 | (Err(worker), Err(driver)) => { 173 | let text = driver.root_cause().to_string(); 174 | if text.ends_with("broken pipe") { 175 | tracing::trace!("Ignoring broken pipe failure from driver"); 176 | Err(worker) 177 | } else { 178 | Err(worker.context(format!("proxy failure: {} and client error", text))) 179 | } 180 | } 181 | (Ok(_), Err(driver)) => Err(driver), 182 | (Err(worker), Ok(())) => Err(worker), 183 | } 184 | } 185 | 186 | /// Fetch a container image and import its embedded OSTree commit. 187 | #[context("Importing {}", imgref)] 188 | #[instrument(level = "debug", skip(repo))] 189 | pub async fn unencapsulate(repo: &ostree::Repo, imgref: &OstreeImageReference) -> Result<Import> { 190 | let importer = super::store::ImageImporter::new(repo, imgref, Default::default()).await?; 191 | importer.unencapsulate().await 192 | } 193 | 194 | /// Create a decompressor for this MIME type, given a stream of input. 195 | pub(crate) fn decompressor( 196 | media_type: &oci_image::MediaType, 197 | src: impl Read + Send + 'static, 198 | ) -> Result<Box<dyn Read + Send + 'static>> { 199 | let r: Box<dyn Read + Send + 'static> = match media_type { 200 | m @ (oci_image::MediaType::ImageLayerGzip | oci_image::MediaType::ImageLayerZstd) => { 201 | if matches!(m, oci_image::MediaType::ImageLayerZstd) { 202 | Box::new(zstd::stream::read::Decoder::new(src)?)
203 | } else { 204 | Box::new(flate2::bufread::GzDecoder::new(std::io::BufReader::new( 205 | src, 206 | ))) 207 | } 208 | } 209 | oci_image::MediaType::ImageLayer => Box::new(src), 210 | oci_image::MediaType::Other(t) if t.as_str() == DOCKER_TYPE_LAYER_TAR => Box::new(src), 211 | o => anyhow::bail!("Unhandled layer type: {}", o), 212 | }; 213 | Ok(r) 214 | } 215 | 216 | /// A wrapper for [`get_blob`] which fetches a layer and decompresses it. 217 | pub(crate) async fn fetch_layer<'a>( 218 | proxy: &'a ImageProxy, 219 | img: &OpenedImage, 220 | manifest: &oci_image::ImageManifest, 221 | layer: &'a oci_image::Descriptor, 222 | progress: Option<&'a Sender<Option<LayerProgress>>>, 223 | layer_info: Option<&Vec<containers_image_proxy::ConvertedLayerInfo>>, 224 | transport_src: Transport, 225 | ) -> Result<( 226 | Box<dyn AsyncBufRead + Send + Unpin>, 227 | impl Future<Output = Result<()>> + 'a, 228 | oci_image::MediaType, 229 | )> { 230 | use futures_util::future::Either; 231 | tracing::debug!("fetching {}", layer.digest()); 232 | let layer_index = manifest.layers().iter().position(|x| x == layer).unwrap(); 233 | let (blob, driver, size); 234 | let media_type: oci_image::MediaType; 235 | match transport_src { 236 | Transport::ContainerStorage => { 237 | let layer_info = layer_info 238 | .ok_or_else(|| anyhow!("skopeo too old to pull from containers-storage"))?; 239 | let n_layers = layer_info.len(); 240 | let layer_blob = layer_info.get(layer_index).ok_or_else(|| { 241 | anyhow!("blobid position {layer_index} exceeds diffid count {n_layers}") 242 | })?; 243 | size = layer_blob.size; 244 | media_type = layer_blob.media_type.clone(); 245 | (blob, driver) = proxy.get_blob(img, &layer_blob.digest, size).await?; 246 | } 247 | _ => { 248 | size = layer.size(); 249 | media_type = layer.media_type().clone(); 250 | (blob, driver) = proxy.get_blob(img, layer.digest(), size).await?; 251 | } 252 | }; 253 | 254 | let driver = async { driver.await.map_err(Into::into) }; 255 | 256 | if let Some(progress) = progress { 257 | let (readprogress, mut readwatch) = ProgressReader::new(blob); 258 | let
readprogress = tokio::io::BufReader::new(readprogress); 259 | let readproxy = async move { 260 | while let Ok(()) = readwatch.changed().await { 261 | let fetched = readwatch.borrow_and_update(); 262 | let status = LayerProgress { 263 | layer_index, 264 | fetched: *fetched, 265 | total: size, 266 | }; 267 | progress.send_replace(Some(status)); 268 | } 269 | }; 270 | let reader = Box::new(readprogress); 271 | let driver = futures_util::future::join(readproxy, driver).map(|r| r.1); 272 | Ok((reader, Either::Left(driver), media_type)) 273 | } else { 274 | Ok((Box::new(blob), Either::Right(driver), media_type)) 275 | } 276 | } 277 | -------------------------------------------------------------------------------- /lib/src/container/update_detachedmeta.rs: -------------------------------------------------------------------------------- 1 | use super::ImageReference; 2 | use crate::container::{skopeo, DIFFID_LABEL}; 3 | use crate::container::{store as container_store, Transport}; 4 | use anyhow::{anyhow, Context, Result}; 5 | use camino::Utf8Path; 6 | use cap_std::fs::Dir; 7 | use cap_std_ext::cap_std; 8 | use containers_image_proxy::oci_spec::image as oci_image; 9 | use std::io::{BufReader, BufWriter}; 10 | 11 | /// Given an OSTree container image reference, update the detached metadata (e.g. GPG signature) 12 | /// while preserving all other container image metadata. 13 | /// 14 | /// The return value is the manifest digest (e.g. `@sha256:`) of the image. 15 | pub async fn update_detached_metadata( 16 | src: &ImageReference, 17 | dest: &ImageReference, 18 | detached_buf: Option<&[u8]>, 19 | ) -> Result<oci_image::Digest> { 20 | // For now, convert the source to a temporary OCI directory, so we can directly 21 | // parse and manipulate it. In the future this will be replaced by https://github.com/ostreedev/ostree-rs-ext/issues/153 22 | // and other work to directly use the containers/image API via containers-image-proxy.
23 | let tempdir = tempfile::tempdir_in("/var/tmp")?; 24 | let tempsrc = tempdir.path().join("src"); 25 | let tempsrc_utf8 = Utf8Path::from_path(&tempsrc).ok_or_else(|| anyhow!("Invalid tempdir"))?; 26 | let tempsrc_ref = ImageReference { 27 | transport: Transport::OciDir, 28 | name: tempsrc_utf8.to_string(), 29 | }; 30 | 31 | // Full copy of the source image 32 | let pulled_digest = skopeo::copy(src, &tempsrc_ref, None, None, false) 33 | .await 34 | .context("Creating temporary copy to OCI dir")?; 35 | 36 | // Copy to the thread 37 | let detached_buf = detached_buf.map(Vec::from); 38 | let tempsrc_ref_path = tempsrc_ref.name.clone(); 39 | // Fork a thread to do the heavy lifting of filtering the tar stream, rewriting the manifest/config. 40 | crate::tokio_util::spawn_blocking_cancellable_flatten(move |cancellable| { 41 | // Open the temporary OCI directory. 42 | let tempsrc = Dir::open_ambient_dir(tempsrc_ref_path, cap_std::ambient_authority()) 43 | .context("Opening src")?; 44 | let tempsrc = ocidir::OciDir::open(&tempsrc)?; 45 | 46 | // Load the manifest, platform, and config 47 | let idx = tempsrc 48 | .read_index()? 
49 | .ok_or(anyhow!("Reading image index from source"))?; 50 | let manifest_descriptor = idx 51 | .manifests() 52 | .first() 53 | .ok_or(anyhow!("No manifests in index"))?; 54 | let mut manifest: oci_image::ImageManifest = tempsrc 55 | .read_json_blob(manifest_descriptor) 56 | .context("Reading manifest json blob")?; 57 | 58 | anyhow::ensure!(manifest_descriptor.digest() == &pulled_digest); 59 | let platform = manifest_descriptor 60 | .platform() 61 | .as_ref() 62 | .cloned() 63 | .unwrap_or_default(); 64 | let mut config: oci_image::ImageConfiguration = 65 | tempsrc.read_json_blob(manifest.config())?; 66 | let mut ctrcfg = config 67 | .config() 68 | .as_ref() 69 | .cloned() 70 | .ok_or_else(|| anyhow!("Image is missing container configuration"))?; 71 | 72 | // Find the OSTree commit layer we want to replace 73 | let (commit_layer, _, _) = container_store::parse_manifest_layout(&manifest, &config)?; 74 | let commit_layer_idx = manifest 75 | .layers() 76 | .iter() 77 | .position(|x| x == commit_layer) 78 | .unwrap(); 79 | 80 | // Create a new layer 81 | let out_layer = { 82 | // Create tar streams for source and destination 83 | let src_layer = BufReader::new(tempsrc.read_blob(commit_layer)?); 84 | let mut src_layer = flate2::read::GzDecoder::new(src_layer); 85 | let mut out_layer = BufWriter::new(tempsrc.create_gzip_layer(None)?); 86 | 87 | // Process the tar stream and inject our new detached metadata 88 | crate::tar::update_detached_metadata( 89 | &mut src_layer, 90 | &mut out_layer, 91 | detached_buf.as_deref(), 92 | Some(cancellable), 93 | )?; 94 | 95 | // Flush all wrappers, and finalize the layer 96 | out_layer 97 | .into_inner() 98 | .map_err(|_| anyhow!("Failed to flush buffer"))? 99 | .complete()? 
100 | }; 101 | // Get the diffid and descriptor for our new tar layer 102 | let out_layer_diffid = format!("sha256:{}", out_layer.uncompressed_sha256.digest()); 103 | let out_layer_descriptor = out_layer 104 | .descriptor() 105 | .media_type(oci_image::MediaType::ImageLayerGzip) 106 | .build() 107 | .unwrap(); // SAFETY: We pass all required fields 108 | 109 | // Splice it into both the manifest and config 110 | manifest.layers_mut()[commit_layer_idx] = out_layer_descriptor; 111 | config.rootfs_mut().diff_ids_mut()[commit_layer_idx].clone_from(&out_layer_diffid); 112 | 113 | let labels = ctrcfg.labels_mut().get_or_insert_with(Default::default); 114 | // Nothing to do except in the special case where there's somehow only one 115 | // chunked layer. 116 | if manifest.layers().len() == 1 { 117 | labels.insert(DIFFID_LABEL.into(), out_layer_diffid); 118 | } 119 | config.set_config(Some(ctrcfg)); 120 | 121 | // Write the config and manifest 122 | let new_config_descriptor = tempsrc.write_config(config)?; 123 | manifest.set_config(new_config_descriptor); 124 | // This entirely replaces the single entry in the OCI directory, which skopeo will find by default. 125 | tempsrc 126 | .replace_with_single_manifest(manifest, platform) 127 | .context("Writing manifest")?; 128 | Ok(()) 129 | }) 130 | .await 131 | .context("Regenerating commit layer")?; 132 | 133 | // Finally, copy the mutated image back to the target. For chunked images, 134 | // because we only changed one layer, skopeo should know not to re-upload shared blobs. 135 | crate::container::skopeo::copy(&tempsrc_ref, dest, None, None, false) 136 | .await 137 | .context("Copying to destination") 138 | } 139 | -------------------------------------------------------------------------------- /lib/src/container_utils.rs: -------------------------------------------------------------------------------- 1 | //! Helpers for interacting with containers at runtime. 
2 | 3 | use crate::keyfileext::KeyFileExt; 4 | use anyhow::Result; 5 | use ostree::glib; 6 | use std::io::Read; 7 | use std::path::Path; 8 | 9 | // See https://github.com/coreos/rpm-ostree/pull/3285#issuecomment-999101477 10 | // For compatibility with older ostree, we stick this in /sysroot where 11 | // it will be ignored. 12 | const V0_REPO_CONFIG: &str = "/sysroot/config"; 13 | const V1_REPO_CONFIG: &str = "/sysroot/ostree/repo/config"; 14 | 15 | /// Attempts to detect if the current process is running inside a container. 16 | /// This looks for the `container` environment variable or the presence 17 | /// of Docker or podman's more generic `/run/.containerenv`. 18 | /// This is a best-effort function, as there is not a 100% reliable way 19 | /// to determine this. 20 | pub fn running_in_container() -> bool { 21 | if std::env::var_os("container").is_some() { 22 | return true; 23 | } 24 | // https://stackoverflow.com/questions/20010199/how-to-determine-if-a-process-runs-inside-lxc-docker 25 | for p in ["/run/.containerenv", "/.dockerenv"] { 26 | if std::path::Path::new(p).exists() { 27 | return true; 28 | } 29 | } 30 | false 31 | } 32 | 33 | // https://docs.rs/openat-ext/0.1.10/openat_ext/trait.OpenatDirExt.html#tymethod.open_file_optional 34 | // https://users.rust-lang.org/t/why-i-use-anyhow-error-even-in-libraries/68592 35 | pub(crate) fn open_optional(path: impl AsRef<Path>) -> std::io::Result<Option<std::fs::File>> { 36 | match std::fs::File::open(path.as_ref()) { 37 | Ok(r) => Ok(Some(r)), 38 | Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None), 39 | Err(e) => Err(e), 40 | } 41 | } 42 | 43 | /// Returns `true` if the current root filesystem has an ostree repository in `bare-split-xattrs` mode. 44 | /// This will be the case in a running ostree-native container.
45 | pub fn is_bare_split_xattrs() -> Result<bool> { 46 | if let Some(configf) = open_optional(V1_REPO_CONFIG) 47 | .transpose() 48 | .or_else(|| open_optional(V0_REPO_CONFIG).transpose()) 49 | { 50 | let configf = configf?; 51 | let mut bufr = std::io::BufReader::new(configf); 52 | let mut s = String::new(); 53 | bufr.read_to_string(&mut s)?; 54 | let kf = glib::KeyFile::new(); 55 | kf.load_from_data(&s, glib::KeyFileFlags::NONE)?; 56 | let r = if let Some(mode) = kf.optional_string("core", "mode")? { 57 | mode == crate::tar::BARE_SPLIT_XATTRS_MODE 58 | } else { 59 | false 60 | }; 61 | Ok(r) 62 | } else { 63 | Ok(false) 64 | } 65 | } 66 | 67 | /// Returns `true` if the current booted filesystem appears to be an ostree-native container. 68 | /// 69 | /// This just invokes [`is_bare_split_xattrs`] and [`running_in_container`]. 70 | pub fn is_ostree_container() -> Result<bool> { 71 | let is_container_ostree = is_bare_split_xattrs()?; 72 | let running_in_systemd = std::env::var_os("INVOCATION_ID").is_some(); 73 | // If we have a container-ostree repo format, then we'll assume we're 74 | // running in a container unless there's strong evidence that we're not 75 | // (we're part of a systemd unit, or we're in a booted ostree system). 76 | let maybe_container = running_in_container() 77 | || (!running_in_systemd && !Path::new("/run/ostree-booted").exists()); 78 | Ok(is_container_ostree && maybe_container) 79 | } 80 | 81 | /// Returns an error unless the current filesystem is an ostree-based container. 82 | /// 83 | /// This just wraps [`is_ostree_container`]. 84 | pub fn require_ostree_container() -> Result<()> { 85 | if !is_ostree_container()? { 86 | anyhow::bail!("Not in an ostree-based container environment"); 87 | } 88 | Ok(()) 89 | } 90 | -------------------------------------------------------------------------------- /lib/src/diff.rs: -------------------------------------------------------------------------------- 1 | //! Compute the difference between two OSTree commits.
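The detection heuristics in `container_utils.rs` above combine three signals: the repo mode, container runtime marker files, and systemd evidence. A minimal sketch of that boolean logic, with the repo probe and environment reads lifted out as parameters so it's testable (`*_with` names are invented here, not part of the crate's API):

```rust
use std::path::Path;

// Marker files left by container runtimes (podman, docker).
const MARKERS: &[&str] = &["/run/.containerenv", "/.dockerenv"];

// Best-effort container detection, parameterized over the `container`
// environment variable so the logic is deterministic under test.
fn running_in_container_with(container_env: Option<&str>) -> bool {
    container_env.is_some() || MARKERS.iter().any(|p| Path::new(p).exists())
}

// Combined heuristic mirroring is_ostree_container: a bare-split-xattrs
// repo (here a boolean parameter) counts as a container unless there's
// strong evidence we're on a booted host.
fn is_ostree_container_with(
    is_bare_split_xattrs: bool,
    container_env: Option<&str>,
    invocation_id: Option<&str>,
) -> bool {
    let running_in_systemd = invocation_id.is_some();
    let maybe_container = running_in_container_with(container_env)
        || (!running_in_systemd && !Path::new("/run/ostree-booted").exists());
    is_bare_split_xattrs && maybe_container
}

fn main() {
    let r = is_ostree_container_with(
        true, // pretend the repo probe succeeded
        std::env::var("container").ok().as_deref(),
        std::env::var("INVOCATION_ID").ok().as_deref(),
    );
    println!("ostree container: {r}");
}
```

Note the asymmetry: the repo-mode check is a hard requirement, while the runtime evidence only breaks ties.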
2 | 3 | /* 4 | * Copyright (C) 2020 Red Hat, Inc. 5 | * 6 | * SPDX-License-Identifier: Apache-2.0 OR MIT 7 | */ 8 | 9 | use anyhow::{Context, Result}; 10 | use fn_error_context::context; 11 | use gio::prelude::*; 12 | use ostree::gio; 13 | use std::collections::BTreeSet; 14 | use std::fmt; 15 | 16 | /// Like `g_file_query_info()`, but return None if the target doesn't exist. 17 | pub(crate) fn query_info_optional( 18 | f: &gio::File, 19 | queryattrs: &str, 20 | queryflags: gio::FileQueryInfoFlags, 21 | ) -> Result<Option<gio::FileInfo>> { 22 | let cancellable = gio::Cancellable::NONE; 23 | match f.query_info(queryattrs, queryflags, cancellable) { 24 | Ok(i) => Ok(Some(i)), 25 | Err(e) => { 26 | if let Some(ref e2) = e.kind::<gio::IOErrorEnum>() { 27 | match e2 { 28 | gio::IOErrorEnum::NotFound => Ok(None), 29 | _ => Err(e.into()), 30 | } 31 | } else { 32 | Err(e.into()) 33 | } 34 | } 35 | } 36 | } 37 | 38 | /// A set of file paths. 39 | pub type FileSet = BTreeSet<String>; 40 | 41 | /// Diff between two ostree commits. 42 | #[derive(Debug, Default)] 43 | pub struct FileTreeDiff { 44 | /// The prefix passed for diffing, e.g.
/usr 45 | pub subdir: Option<String>, 46 | /// Files that are new in an existing directory 47 | pub added_files: FileSet, 48 | /// New directories 49 | pub added_dirs: FileSet, 50 | /// Files removed 51 | pub removed_files: FileSet, 52 | /// Directories removed (recursively) 53 | pub removed_dirs: FileSet, 54 | /// Files that changed (in any way, metadata or content) 55 | pub changed_files: FileSet, 56 | /// Directories that changed mode/permissions 57 | pub changed_dirs: FileSet, 58 | } 59 | 60 | impl fmt::Display for FileTreeDiff { 61 | fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { 62 | write!( 63 | f, 64 | "files(added:{} removed:{} changed:{}) dirs(added:{} removed:{} changed:{})", 65 | self.added_files.len(), 66 | self.removed_files.len(), 67 | self.changed_files.len(), 68 | self.added_dirs.len(), 69 | self.removed_dirs.len(), 70 | self.changed_dirs.len() 71 | ) 72 | } 73 | } 74 | 75 | fn diff_recurse( 76 | prefix: &str, 77 | diff: &mut FileTreeDiff, 78 | from: &ostree::RepoFile, 79 | to: &ostree::RepoFile, 80 | ) -> Result<()> { 81 | let cancellable = gio::Cancellable::NONE; 82 | let queryattrs = "standard::name,standard::type"; 83 | let queryflags = gio::FileQueryInfoFlags::NOFOLLOW_SYMLINKS; 84 | let from_iter = from.enumerate_children(queryattrs, queryflags, cancellable)?; 85 | 86 | // Iterate over the source (from) directory, and compare with the 87 | // target (to) directory. This generates removals and changes. 88 | while let Some(from_info) = from_iter.next_file(cancellable)?
{ 89 | let from_child = from_iter.child(&from_info); 90 | let name = from_info.name(); 91 | let name = name.to_str().expect("UTF-8 ostree name"); 92 | let path = format!("{prefix}{name}"); 93 | let to_child = to.child(name); 94 | let to_info = query_info_optional(&to_child, queryattrs, queryflags) 95 | .context("querying optional to")?; 96 | let is_dir = matches!(from_info.file_type(), gio::FileType::Directory); 97 | if to_info.is_some() { 98 | let to_child = to_child.downcast::<ostree::RepoFile>().expect("downcast"); 99 | to_child.ensure_resolved()?; 100 | let from_child = from_child.downcast::<ostree::RepoFile>().expect("downcast"); 101 | from_child.ensure_resolved()?; 102 | 103 | if is_dir { 104 | let from_contents_checksum = from_child.tree_get_contents_checksum(); 105 | let to_contents_checksum = to_child.tree_get_contents_checksum(); 106 | if from_contents_checksum != to_contents_checksum { 107 | let subpath = format!("{}/", path); 108 | diff_recurse(&subpath, diff, &from_child, &to_child)?; 109 | } 110 | let from_meta_checksum = from_child.tree_get_metadata_checksum(); 111 | let to_meta_checksum = to_child.tree_get_metadata_checksum(); 112 | if from_meta_checksum != to_meta_checksum { 113 | diff.changed_dirs.insert(path); 114 | } 115 | } else { 116 | let from_checksum = from_child.checksum(); 117 | let to_checksum = to_child.checksum(); 118 | if from_checksum != to_checksum { 119 | diff.changed_files.insert(path); 120 | } 121 | } 122 | } else if is_dir { 123 | diff.removed_dirs.insert(path); 124 | } else { 125 | diff.removed_files.insert(path); 126 | } 127 | } 128 | // Iterate over the target (to) directory, and find any 129 | // files/directories which were not present in the source. 130 | let to_iter = to.enumerate_children(queryattrs, queryflags, cancellable)?; 131 | while let Some(to_info) = to_iter.next_file(cancellable)?
{ 132 | let name = to_info.name(); 133 | let name = name.to_str().expect("UTF-8 ostree name"); 134 | let path = format!("{prefix}{name}"); 135 | let from_child = from.child(name); 136 | let from_info = query_info_optional(&from_child, queryattrs, queryflags) 137 | .context("querying optional from")?; 138 | if from_info.is_some() { 139 | continue; 140 | } 141 | let is_dir = matches!(to_info.file_type(), gio::FileType::Directory); 142 | if is_dir { 143 | diff.added_dirs.insert(path); 144 | } else { 145 | diff.added_files.insert(path); 146 | } 147 | } 148 | Ok(()) 149 | } 150 | 151 | /// Given two ostree commits, compute the diff between them. 152 | #[context("Computing ostree diff")] 153 | pub fn diff<P: AsRef<str>>( 154 | repo: &ostree::Repo, 155 | from: &str, 156 | to: &str, 157 | subdir: Option<P>, 158 | ) -> Result<FileTreeDiff> { 159 | let subdir = subdir.as_ref(); 160 | let subdir = subdir.map(|s| s.as_ref()); 161 | let (fromroot, _) = repo.read_commit(from, gio::Cancellable::NONE)?; 162 | let (toroot, _) = repo.read_commit(to, gio::Cancellable::NONE)?; 163 | let (fromroot, toroot) = if let Some(subdir) = subdir { 164 | ( 165 | fromroot.resolve_relative_path(subdir), 166 | toroot.resolve_relative_path(subdir), 167 | ) 168 | } else { 169 | (fromroot, toroot) 170 | }; 171 | let fromroot = fromroot.downcast::<ostree::RepoFile>().expect("downcast"); 172 | fromroot.ensure_resolved()?; 173 | let toroot = toroot.downcast::<ostree::RepoFile>().expect("downcast"); 174 | toroot.ensure_resolved()?; 175 | let mut diff = FileTreeDiff { 176 | subdir: subdir.map(|s| s.to_string()), 177 | ..Default::default() 178 | }; 179 | diff_recurse("/", &mut diff, &fromroot, &toroot)?; 180 | Ok(diff) 181 | } 182 | -------------------------------------------------------------------------------- /lib/src/docgen.rs: -------------------------------------------------------------------------------- 1 | // Copyright 2022 Red Hat, Inc.
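The recursion in `diff_recurse` above is a two-pass walk: the first pass over the source yields removals and changes, the second pass over the target yields additions. A hypothetical flat analogue over plain `path -> content` maps shows the same shape without the ostree tree machinery:

```rust
use std::collections::{BTreeMap, BTreeSet};

// Minimal stand-in for FileTreeDiff over flat maps (illustrative only).
fn diff_maps(
    from: &BTreeMap<&str, &str>,
    to: &BTreeMap<&str, &str>,
) -> (BTreeSet<String>, BTreeSet<String>, BTreeSet<String>) {
    let (mut added, mut removed, mut changed) =
        (BTreeSet::new(), BTreeSet::new(), BTreeSet::new());
    // Pass 1: walk the source; anything missing from `to` was removed,
    // anything present with different content changed.
    for (path, content) in from {
        match to.get(path) {
            None => {
                removed.insert(path.to_string());
            }
            Some(c) if c != content => {
                changed.insert(path.to_string());
            }
            _ => {}
        }
    }
    // Pass 2: walk the target; anything absent from `from` is new.
    for path in to.keys() {
        if !from.contains_key(path) {
            added.insert(path.to_string());
        }
    }
    (added, removed, changed)
}

fn main() {
    let from = BTreeMap::from([("usr/bin/bash", "v1"), ("etc/passwd", "a")]);
    let to = BTreeMap::from([("usr/bin/bash", "v2"), ("usr/bin/zsh", "z")]);
    let (added, removed, changed) = diff_maps(&from, &to);
    // added={"usr/bin/zsh"}, removed={"etc/passwd"}, changed={"usr/bin/bash"}
    println!("added:{added:?} removed:{removed:?} changed:{changed:?}");
}
```

The real code gets to skip unchanged subtrees cheaply because ostree directories carry content checksums; the flat version has no such shortcut.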
2 | // 3 | // SPDX-License-Identifier: Apache-2.0 OR MIT 4 | 5 | use anyhow::{Context, Result}; 6 | use camino::Utf8Path; 7 | use clap::{Command, CommandFactory}; 8 | use std::fs::OpenOptions; 9 | use std::io::Write; 10 | 11 | pub fn generate_manpages(directory: &Utf8Path) -> Result<()> { 12 | generate_one(directory, crate::cli::Opt::command()) 13 | } 14 | 15 | fn generate_one(directory: &Utf8Path, cmd: Command) -> Result<()> { 16 | let version = env!("CARGO_PKG_VERSION"); 17 | let name = cmd.get_name(); 18 | let path = directory.join(format!("{name}.8")); 19 | println!("Generating {path}..."); 20 | 21 | let mut out = OpenOptions::new() 22 | .create(true) 23 | .write(true) 24 | .truncate(true) 25 | .open(&path) 26 | .with_context(|| format!("opening {path}")) 27 | .map(std::io::BufWriter::new)?; 28 | clap_mangen::Man::new(cmd.clone()) 29 | .title("ostree-ext") 30 | .section("8") 31 | .source(format!("ostree-ext {version}")) 32 | .render(&mut out) 33 | .with_context(|| format!("rendering {name}.8"))?; 34 | out.flush().context("flushing man page")?; 35 | drop(out); 36 | 37 | for subcmd in cmd.get_subcommands().filter(|c| !c.is_hide_set()) { 38 | let subname = format!("{}-{}", name, subcmd.get_name()); 39 | // SAFETY: Latest clap 4 requires names are &'static - this is 40 | // not long-running production code, so we just leak the names here. 
41 | let subname = &*std::boxed::Box::leak(subname.into_boxed_str()); 42 | let subcmd = subcmd.clone().name(subname).alias(subname).version(version); 43 | generate_one(directory, subcmd)?; 44 | } 45 | Ok(()) 46 | } 47 | -------------------------------------------------------------------------------- /lib/src/fixtures/fedora-coreos-contentmeta.json.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ostreedev/ostree-rs-ext/b885f9468829e6af41fa95504d65975c48378d4d/lib/src/fixtures/fedora-coreos-contentmeta.json.gz -------------------------------------------------------------------------------- /lib/src/fixtures/ostree-gpg-test-home.tar.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ostreedev/ostree-rs-ext/b885f9468829e6af41fa95504d65975c48378d4d/lib/src/fixtures/ostree-gpg-test-home.tar.gz -------------------------------------------------------------------------------- /lib/src/globals.rs: -------------------------------------------------------------------------------- 1 | //! Module containing access to global state. 2 | 3 | use super::Result; 4 | use camino::{Utf8Path, Utf8PathBuf}; 5 | use cap_std_ext::cap_std::fs::Dir; 6 | use cap_std_ext::RootDir; 7 | use once_cell::sync::OnceCell; 8 | use ostree::glib; 9 | use std::fs::File; 10 | 11 | struct ConfigPaths { 12 | persistent: Utf8PathBuf, 13 | runtime: Utf8PathBuf, 14 | system: Option<Utf8PathBuf>, 15 | } 16 | 17 | /// Get the runtime and persistent config directories.
In the system (root) case, these are as follows: 18 | /// system (root) case: /run/ostree /etc/ostree /usr/lib/ostree 19 | /// user (nonroot) case: /run/user/$uid/ostree ~/.config/ostree 20 | fn get_config_paths(root: bool) -> &'static ConfigPaths { 21 | if root { 22 | static PATHS_ROOT: OnceCell<ConfigPaths> = OnceCell::new(); 23 | PATHS_ROOT.get_or_init(|| ConfigPaths::new("etc", "run", Some("usr/lib"))) 24 | } else { 25 | static PATHS_USER: OnceCell<ConfigPaths> = OnceCell::new(); 26 | PATHS_USER.get_or_init(|| { 27 | ConfigPaths::new( 28 | Utf8PathBuf::try_from(glib::user_config_dir()).unwrap(), 29 | Utf8PathBuf::try_from(glib::user_runtime_dir()).unwrap(), 30 | None, 31 | ) 32 | }) 33 | } 34 | } 35 | 36 | impl ConfigPaths { 37 | fn new<P: AsRef<Utf8Path>>(persistent: P, runtime: P, system: Option<P>) -> Self { 38 | fn relative_owned(p: &Utf8Path) -> Utf8PathBuf { 39 | p.as_str().trim_start_matches('/').into() 40 | } 41 | let mut r = ConfigPaths { 42 | persistent: relative_owned(persistent.as_ref()), 43 | runtime: relative_owned(runtime.as_ref()), 44 | system: system.as_ref().map(|s| relative_owned(s.as_ref())), 45 | }; 46 | let path = "ostree"; 47 | r.persistent.push(path); 48 | r.runtime.push(path); 49 | if let Some(system) = r.system.as_mut() { 50 | system.push(path); 51 | } 52 | r 53 | } 54 | 55 | /// Return the path and an open fd for a config file, if it exists. 56 | pub(crate) fn open_file( 57 | &self, 58 | root: &RootDir, 59 | p: impl AsRef<Utf8Path>, 60 | ) -> Result<Option<(Utf8PathBuf, File)>> { 61 | let p = p.as_ref(); 62 | let mut runtime = self.runtime.clone(); 63 | runtime.push(p); 64 | if let Some(f) = root.open_optional(&runtime)? { 65 | return Ok(Some((runtime, f))); 66 | } 67 | let mut persistent = self.persistent.clone(); 68 | persistent.push(p); 69 | if let Some(f) = root.open_optional(&persistent)? { 70 | return Ok(Some((persistent, f))); 71 | } 72 | if let Some(mut system) = self.system.clone() { 73 | system.push(p); 74 | if let Some(f) = root.open_optional(&system)? { 75 | return Ok(Some((system, f))); 76 | } 77 | } 78 | Ok(None) 79 | } 80 | } 81 | 82 | /// Return the path to the global container authentication file, if it exists. 83 | pub fn get_global_authfile(root: &Dir) -> Result<Option<(Utf8PathBuf, File)>> { 84 | let root = &RootDir::new(root, ".")?; 85 | let am_uid0 = rustix::process::getuid() == rustix::process::Uid::ROOT; 86 | get_global_authfile_impl(root, am_uid0) 87 | } 88 | 89 | /// Return the path to the global container authentication file, if it exists.
90 | fn get_global_authfile_impl(root: &RootDir, am_uid0: bool) -> Result<Option<(Utf8PathBuf, File)>> { 91 | let paths = get_config_paths(am_uid0); 92 | paths.open_file(root, "auth.json") 93 | } 94 | 95 | #[cfg(test)] 96 | mod tests { 97 | use std::io::Read; 98 | 99 | use super::*; 100 | use camino::Utf8PathBuf; 101 | use cap_std_ext::{cap_std, cap_tempfile}; 102 | 103 | fn read_authfile( 104 | root: &cap_std_ext::RootDir, 105 | am_uid0: bool, 106 | ) -> Result<Option<(Utf8PathBuf, String)>> { 107 | let r = get_global_authfile_impl(root, am_uid0)?; 108 | if let Some((path, mut f)) = r { 109 | let mut s = String::new(); 110 | f.read_to_string(&mut s)?; 111 | Ok(Some((path.try_into()?, s))) 112 | } else { 113 | Ok(None) 114 | } 115 | } 116 | 117 | #[test] 118 | fn test_config_paths() -> Result<()> { 119 | let root = &cap_tempfile::TempDir::new(cap_std::ambient_authority())?; 120 | let rootdir = &RootDir::new(root, ".")?; 121 | assert!(read_authfile(rootdir, true).unwrap().is_none()); 122 | root.create_dir_all("etc/ostree")?; 123 | root.write("etc/ostree/auth.json", "etc ostree auth")?; 124 | let (p, authdata) = read_authfile(rootdir, true).unwrap().unwrap(); 125 | assert_eq!(p, "etc/ostree/auth.json"); 126 | assert_eq!(authdata, "etc ostree auth"); 127 | root.create_dir_all("usr/lib/ostree")?; 128 | root.write("usr/lib/ostree/auth.json", "usrlib ostree auth")?; 129 | // We should see /etc content still 130 | let (p, authdata) = read_authfile(rootdir, true).unwrap().unwrap(); 131 | assert_eq!(p, "etc/ostree/auth.json"); 132 | assert_eq!(authdata, "etc ostree auth"); 133 | // Now remove the /etc content, unveiling the /usr content 134 | root.remove_file("etc/ostree/auth.json")?; 135 | let (p, authdata) = read_authfile(rootdir, true).unwrap().unwrap(); 136 | assert_eq!(p, "usr/lib/ostree/auth.json"); 137 | assert_eq!(authdata, "usrlib ostree auth"); 138 | 139 | // Verify symlinks work, both relative...
140 | root.create_dir_all("etc/containers")?; 141 | root.write("etc/containers/auth.json", "etc containers ostree auth")?; 142 | root.symlink_contents("../containers/auth.json", "etc/ostree/auth.json")?; 143 | let (p, authdata) = read_authfile(rootdir, true).unwrap().unwrap(); 144 | assert_eq!(p, "etc/ostree/auth.json"); 145 | assert_eq!(authdata, "etc containers ostree auth"); 146 | // And an absolute link 147 | root.remove_file("etc/ostree/auth.json")?; 148 | root.symlink_contents("/etc/containers/auth.json", "etc/ostree/auth.json")?; let (p, authdata) = read_authfile(rootdir, true).unwrap().unwrap(); 149 | assert_eq!(p, "etc/ostree/auth.json"); 150 | assert_eq!(authdata, "etc containers ostree auth"); 151 | 152 | // Non-root 153 | let mut user_runtime_dir = 154 | Utf8Path::from_path(glib::user_runtime_dir().strip_prefix("/").unwrap()) 155 | .unwrap() 156 | .to_path_buf(); 157 | user_runtime_dir.push("ostree"); 158 | root.create_dir_all(&user_runtime_dir)?; 159 | user_runtime_dir.push("auth.json"); 160 | root.write(&user_runtime_dir, "usr_runtime_dir ostree auth")?; 161 | 162 | let mut user_config_dir = 163 | Utf8Path::from_path(glib::user_config_dir().strip_prefix("/").unwrap()) 164 | .unwrap() 165 | .to_path_buf(); 166 | user_config_dir.push("ostree"); 167 | root.create_dir_all(&user_config_dir)?; 168 | user_config_dir.push("auth.json"); 169 | root.write(&user_config_dir, "usr_config_dir ostree auth")?; 170 | 171 | // We should see runtime_dir content still 172 | let (p, authdata) = read_authfile(rootdir, false).unwrap().unwrap(); 173 | assert_eq!(p, user_runtime_dir); 174 | assert_eq!(authdata, "usr_runtime_dir ostree auth"); 175 | 176 | // Now remove the runtime_dir content, unveiling the config_dir content 177 | root.remove_file(&user_runtime_dir)?; 178 | let (p, authdata) = read_authfile(rootdir, false).unwrap().unwrap(); 179 | assert_eq!(p, user_config_dir); 180 | assert_eq!(authdata, "usr_config_dir ostree auth"); 181 | 182 | Ok(()) 183 | } 184 | } 185 |
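The lookup order that `open_file` implements and the test above exercises — runtime directory first, then persistent, then the vendor/system default — is a first-match search. A minimal sketch, under the assumption that candidates are plain absolute paths (the real code resolves them relative to a `RootDir` and follows symlinks through it):

```rust
use std::path::Path;

// First existing candidate wins; mirrors open_file's
// runtime -> persistent -> system lookup order.
fn find_config<'a>(candidates: &[&'a str]) -> Option<&'a str> {
    candidates.iter().copied().find(|p| Path::new(p).exists())
}

fn main() {
    // Root-case search order for auth.json, per the comment in globals.rs.
    let candidates = [
        "/run/ostree/auth.json",     // runtime
        "/etc/ostree/auth.json",     // persistent
        "/usr/lib/ostree/auth.json", // system default
    ];
    println!("found: {:?}", find_config(&candidates));
}
```

This ordering lets an image ship a default in `/usr/lib` that an admin overrides in `/etc`, with `/run` reserved for ephemeral per-boot state.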
-------------------------------------------------------------------------------- /lib/src/ima.rs: -------------------------------------------------------------------------------- 1 | //! Write IMA signatures to an ostree commit 2 | 3 | // SPDX-License-Identifier: Apache-2.0 OR MIT 4 | 5 | use crate::objgv::*; 6 | use anyhow::{Context, Result}; 7 | use camino::Utf8PathBuf; 8 | use fn_error_context::context; 9 | use gio::glib; 10 | use gio::prelude::*; 11 | use glib::Cast; 12 | use glib::Variant; 13 | use gvariant::aligned_bytes::TryAsAligned; 14 | use gvariant::{gv, Marker, Structure}; 15 | use ostree::gio; 16 | use rustix::fd::BorrowedFd; 17 | use std::collections::{BTreeMap, HashMap}; 18 | use std::ffi::CString; 19 | use std::fs::File; 20 | use std::io::Seek; 21 | use std::os::unix::io::AsRawFd; 22 | use std::process::{Command, Stdio}; 23 | 24 | /// Extended attribute key used for IMA. 25 | const IMA_XATTR: &str = "security.ima"; 26 | 27 | /// Attributes to configure IMA signatures. 28 | #[derive(Debug, Clone)] 29 | pub struct ImaOpts { 30 | /// Digest algorithm 31 | pub algorithm: String, 32 | 33 | /// Path to IMA key 34 | pub key: Utf8PathBuf, 35 | 36 | /// Replace any existing IMA signatures. 37 | pub overwrite: bool, 38 | } 39 | 40 | /// Convert a GVariant of type `a(ayay)` to a mutable map 41 | fn xattrs_to_map(v: &glib::Variant) -> BTreeMap<Vec<u8>, Vec<u8>> { 42 | let v = v.data_as_bytes(); 43 | let v = v.try_as_aligned().unwrap(); 44 | let v = gv!("a(ayay)").cast(v); 45 | let mut map: BTreeMap<Vec<u8>, Vec<u8>> = BTreeMap::new(); 46 | for e in v.iter() { 47 | let (k, v) = e.to_tuple(); 48 | map.insert(k.into(), v.into()); 49 | } 50 | map 51 | } 52 | 53 | /// Create a new GVariant of type a(ayay). This is used by OSTree's extended attributes.
54 | pub(crate) fn new_variant_a_ayay<'a, T: 'a + AsRef<[u8]>>( 55 | items: impl IntoIterator<Item = (&'a T, &'a T)>, 56 | ) -> glib::Variant { 57 | let children = items.into_iter().map(|(a, b)| { 58 | let a = a.as_ref(); 59 | let b = b.as_ref(); 60 | Variant::tuple_from_iter([a.to_variant(), b.to_variant()]) 61 | }); 62 | Variant::array_from_iter::<(&[u8], &[u8])>(children) 63 | } 64 | 65 | struct CommitRewriter<'a> { 66 | repo: &'a ostree::Repo, 67 | ima: &'a ImaOpts, 68 | tempdir: tempfile::TempDir, 69 | /// Maps content object sha256 hex string to a signed object sha256 hex string 70 | rewritten_files: HashMap<String, String>, 71 | } 72 | 73 | #[allow(unsafe_code)] 74 | #[context("Gathering xattr {}", k)] 75 | fn steal_xattr(f: &File, k: &str) -> Result<Vec<u8>> { 76 | let k = &CString::new(k)?; 77 | unsafe { 78 | let k = k.as_ptr() as *const _; 79 | let r = libc::fgetxattr(f.as_raw_fd(), k, std::ptr::null_mut(), 0); 80 | if r < 0 { 81 | return Err(std::io::Error::last_os_error().into()); 82 | } 83 | let sz: usize = r.try_into()?; 84 | let mut buf = vec![0u8; sz]; 85 | let r = libc::fgetxattr(f.as_raw_fd(), k, buf.as_mut_ptr() as *mut _, sz); 86 | if r < 0 { 87 | return Err(std::io::Error::last_os_error().into()); 88 | } 89 | let r = libc::fremovexattr(f.as_raw_fd(), k); 90 | if r < 0 { 91 | return Err(std::io::Error::last_os_error().into()); 92 | } 93 | Ok(buf) 94 | } 95 | } 96 | 97 | impl<'a> CommitRewriter<'a> { 98 | fn new(repo: &'a ostree::Repo, ima: &'a ImaOpts) -> Result<Self> { 99 | Ok(Self { 100 | repo, 101 | ima, 102 | tempdir: tempfile::tempdir_in(format!("/proc/self/fd/{}/tmp", repo.dfd()))?, 103 | rewritten_files: Default::default(), 104 | }) 105 | } 106 | 107 | /// Use `evmctl` to generate an IMA signature on a file, then 108 | /// scrape the xattr value out of it (removing it). 109 | /// 110 | /// evmctl can write a separate file but it picks the name...so 111 | /// we do this hacky dance of `--xattr-user` instead.
112 | #[allow(unsafe_code)] 113 | #[context("IMA signing object")] 114 | fn ima_sign(&self, instream: &gio::InputStream) -> Result<HashMap<Vec<u8>, Vec<u8>>> { 115 | let mut tempf = tempfile::NamedTempFile::new_in(self.tempdir.path())?; 116 | // If we're operating on a bare repo, we can clone the file (copy_file_range) directly. 117 | if let Ok(instream) = instream.clone().downcast::<gio::UnixInputStream>() { 118 | use cap_std_ext::cap_std::io_lifetimes::AsFilelike; 119 | // View the fd as a File 120 | let instream_fd = unsafe { BorrowedFd::borrow_raw(instream.as_raw_fd()) }; 121 | let instream_fd = instream_fd.as_filelike_view::<File>(); 122 | std::io::copy(&mut (&*instream_fd), tempf.as_file_mut())?; 123 | } else { 124 | // If we're operating on an archive repo, then we need to uncompress 125 | // and recompress... 126 | let mut instream = instream.clone().into_read(); 127 | let _n = std::io::copy(&mut instream, tempf.as_file_mut())?; 128 | } 129 | tempf.seek(std::io::SeekFrom::Start(0))?; 130 | 131 | let mut proc = Command::new("evmctl"); 132 | proc.current_dir(self.tempdir.path()) 133 | .stdout(Stdio::null()) 134 | .stderr(Stdio::piped()) 135 | .args(["ima_sign", "--xattr-user", "--key", self.ima.key.as_str()]) 136 | .args(["--hashalgo", self.ima.algorithm.as_str()]) 137 | .arg(tempf.path().file_name().unwrap()); 138 | let status = proc.output().context("Spawning evmctl")?; 139 | if !status.status.success() { 140 | return Err(anyhow::anyhow!( 141 | "evmctl failed: {:?}\n{}", 142 | status.status, 143 | String::from_utf8_lossy(&status.stderr), 144 | )); 145 | } 146 | let mut r = HashMap::new(); 147 | let user_k = IMA_XATTR.replace("security.", "user."); 148 | let v = steal_xattr(tempf.as_file(), user_k.as_str())?; 149 | r.insert(Vec::from(IMA_XATTR.as_bytes()), v); 150 | Ok(r) 151 | } 152 | 153 | #[context("Content object {}", checksum)] 154 | fn map_file(&mut self, checksum: &str) -> Result<Option<String>> { 155 | let cancellable = gio::Cancellable::NONE; 156 | let (instream, meta, xattrs) = self.repo.load_file(checksum,
cancellable)?; 157 | let instream = if let Some(i) = instream { 158 | i 159 | } else { 160 | return Ok(None); 161 | }; 162 | let mut xattrs = xattrs_to_map(&xattrs); 163 | let existing_sig = xattrs.remove(IMA_XATTR.as_bytes()); 164 | if existing_sig.is_some() && !self.ima.overwrite { 165 | return Ok(None); 166 | } 167 | 168 | // Now inject the IMA xattr 169 | let xattrs = { 170 | let signed = self.ima_sign(&instream)?; 171 | xattrs.extend(signed); 172 | new_variant_a_ayay(&xattrs) 173 | }; 174 | // Now reload the input stream 175 | let (instream, _, _) = self.repo.load_file(checksum, cancellable)?; 176 | let instream = instream.unwrap(); 177 | let (ostream, size) = 178 | ostree::raw_file_to_content_stream(&instream, &meta, Some(&xattrs), cancellable)?; 179 | let new_checksum = self 180 | .repo 181 | .write_content(None, &ostream, size, cancellable)? 182 | .to_hex(); 183 | 184 | Ok(Some(new_checksum)) 185 | } 186 | 187 | /// Write a dirtree object. 188 | fn map_dirtree(&mut self, checksum: &str) -> Result<String> { 189 | let src = &self 190 | .repo 191 | .load_variant(ostree::ObjectType::DirTree, checksum)?; 192 | let src = src.data_as_bytes(); 193 | let src = src.try_as_aligned()?; 194 | let src = gv_dirtree!().cast(src); 195 | let (files, dirs) = src.to_tuple(); 196 | 197 | // A reusable buffer to avoid heap allocating these 198 | let mut hexbuf = [0u8; 64]; 199 | 200 | let mut new_files = Vec::new(); 201 | for file in files { 202 | let (name, csum) = file.to_tuple(); 203 | let name = name.to_str(); 204 | hex::encode_to_slice(csum, &mut hexbuf)?; 205 | let checksum = std::str::from_utf8(&hexbuf)?; 206 | if let Some(mapped) = self.rewritten_files.get(checksum) { 207 | new_files.push((name, hex::decode(mapped)?)); 208 | } else if let Some(mapped) = self.map_file(checksum)?
{ 209 | let mapped_bytes = hex::decode(&mapped)?; 210 | self.rewritten_files.insert(checksum.into(), mapped); 211 | new_files.push((name, mapped_bytes)); 212 | } else { 213 | new_files.push((name, Vec::from(csum))); 214 | } 215 | } 216 | 217 | let mut new_dirs = Vec::new(); 218 | for item in dirs { 219 | let (name, contents_csum, meta_csum_bytes) = item.to_tuple(); 220 | let name = name.to_str(); 221 | hex::encode_to_slice(contents_csum, &mut hexbuf)?; 222 | let contents_csum = std::str::from_utf8(&hexbuf)?; 223 | let mapped = self.map_dirtree(contents_csum)?; 224 | let mapped = hex::decode(mapped)?; 225 | new_dirs.push((name, mapped, meta_csum_bytes)); 226 | } 227 | 228 | let new_dirtree = (new_files, new_dirs).to_variant(); 229 | 230 | let mapped = self 231 | .repo 232 | .write_metadata( 233 | ostree::ObjectType::DirTree, 234 | None, 235 | &new_dirtree, 236 | gio::Cancellable::NONE, 237 | )? 238 | .to_hex(); 239 | 240 | Ok(mapped) 241 | } 242 | 243 | /// Write a commit object. 244 | #[context("Mapping {}", rev)] 245 | fn map_commit(&mut self, rev: &str) -> Result<String> { 246 | let checksum = self.repo.require_rev(rev)?; 247 | let cancellable = gio::Cancellable::NONE; 248 | let (commit_v, _) = self.repo.load_commit(&checksum)?; 249 | let commit_v = &commit_v; 250 | 251 | let commit_bytes = commit_v.data_as_bytes(); 252 | let commit_bytes = commit_bytes.try_as_aligned()?; 253 | let commit = gv_commit!().cast(commit_bytes); 254 | let commit = commit.to_tuple(); 255 | let contents = &hex::encode(commit.6); 256 | 257 | let new_dt = self.map_dirtree(contents)?; 258 | 259 | let n_parts = 8; 260 | let mut parts = Vec::with_capacity(n_parts); 261 | for i in 0..n_parts { 262 | parts.push(commit_v.child_value(i)); 263 | } 264 | let new_dt = hex::decode(new_dt)?; 265 | parts[6] = new_dt.to_variant(); 266 | let new_commit = Variant::tuple_from_iter(&parts); 267 | 268 | let new_commit_checksum = self 269 | .repo 270 | .write_metadata(ostree::ObjectType::Commit, None, &new_commit,
cancellable)? 271 | .to_hex(); 272 | 273 | Ok(new_commit_checksum) 274 | } 275 | } 276 | 277 | /// Given an OSTree commit and an IMA configuration, generate a new commit object with IMA signatures. 278 | /// 279 | /// The generated commit object will inherit all metadata from the existing commit object 280 | /// such as version, etc. 281 | /// 282 | /// This function does not create an ostree transaction; it's recommended to create one 283 | /// around the call to this function. 284 | pub fn ima_sign(repo: &ostree::Repo, ostree_ref: &str, opts: &ImaOpts) -> Result<String> { 285 | let writer = &mut CommitRewriter::new(repo, opts)?; 286 | writer.map_commit(ostree_ref) 287 | } 288 | -------------------------------------------------------------------------------- /lib/src/integrationtest.rs: -------------------------------------------------------------------------------- 1 | //! Module used for integration tests; should not be public. 2 | 3 | use std::path::Path; 4 | 5 | use crate::container_utils::is_ostree_container; 6 | use anyhow::{anyhow, Context, Result}; 7 | use camino::Utf8Path; 8 | use cap_std::fs::Dir; 9 | use cap_std_ext::cap_std; 10 | use containers_image_proxy::oci_spec; 11 | use fn_error_context::context; 12 | use gio::prelude::*; 13 | use oci_spec::image as oci_image; 14 | use ocidir::{ 15 | oci_spec::image::{Arch, Platform}, 16 | GzipLayerWriter, 17 | }; 18 | use ostree::gio; 19 | use xshell::cmd; 20 | 21 | pub(crate) fn detectenv() -> Result<&'static str> { 22 | let r = if is_ostree_container()? { 23 | "ostree-container" 24 | } else if Path::new("/run/ostree-booted").exists() { 25 | "ostree" 26 | } else if crate::container_utils::running_in_container() { 27 | "container" 28 | } else { 29 | "none" 30 | }; 31 | Ok(r) 32 | } 33 | 34 | /// Using `src` as a base, append the contents of `dir` to the OCI image as a new layer. 35 | /// Should only be enabled for testing.
36 | #[context("Generating derived oci")] 37 | pub fn generate_derived_oci( 38 | src: impl AsRef<Utf8Path>, 39 | dir: impl AsRef<Path>, 40 | tag: Option<&str>, 41 | ) -> Result<()> { 42 | generate_derived_oci_from_tar( 43 | src, 44 | move |w| { 45 | let dir = dir.as_ref(); 46 | let mut layer_tar = tar::Builder::new(w); 47 | layer_tar.append_dir_all("./", dir)?; 48 | layer_tar.finish()?; 49 | Ok(()) 50 | }, 51 | tag, 52 | None, 53 | ) 54 | } 55 | 56 | /// Using `src` as a base, invoke `f` to generate a new tar layer in the OCI image. 57 | /// Should only be enabled for testing. 58 | #[context("Generating derived oci")] 59 | pub fn generate_derived_oci_from_tar<F>( 60 | src: impl AsRef<Utf8Path>, 61 | f: F, 62 | tag: Option<&str>, 63 | arch: Option<Arch>, 64 | ) -> Result<()> 65 | where 66 | F: FnOnce(&mut GzipLayerWriter) -> Result<()>, 67 | { 68 | let src = src.as_ref(); 69 | let src = Dir::open_ambient_dir(src, cap_std::ambient_authority())?; 70 | let src = ocidir::OciDir::open(&src)?; 71 | 72 | let idx = src 73 | .read_index()? 74 | .ok_or(anyhow!("Reading image index from source"))?; 75 | let manifest_descriptor = idx 76 | .manifests() 77 | .first() 78 | .ok_or(anyhow!("No manifests in index"))?; 79 | let mut manifest: oci_image::ImageManifest = src 80 | .read_json_blob(manifest_descriptor) 81 | .context("Reading manifest json blob")?; 82 | let mut config: oci_image::ImageConfiguration = src.read_json_blob(manifest.config())?; 83 | 84 | if let Some(arch) = arch.as_ref() { 85 | config.set_architecture(arch.clone()); 86 | } 87 | 88 | let mut bw = src.create_gzip_layer(None)?; 89 | f(&mut bw)?; 90 | let new_layer = bw.complete()?; 91 | 92 | manifest.layers_mut().push( 93 | new_layer 94 | .blob 95 | .descriptor() 96 | .media_type(oci_spec::image::MediaType::ImageLayerGzip) 97 | .build() 98 | .unwrap(), 99 | ); 100 | config.history_mut().push( 101 | oci_spec::image::HistoryBuilder::default() 102 | .created_by("generate_derived_oci") 103 | .build() 104 | .unwrap(), 105 | ); 106 | config 107 | .rootfs_mut() 108 |
.diff_ids_mut() 109 | .push(new_layer.uncompressed_sha256.digest().to_string()); 110 | let new_config_desc = src.write_config(config)?; 111 | manifest.set_config(new_config_desc); 112 | 113 | let mut platform = Platform::default(); 114 | if let Some(arch) = arch.as_ref() { 115 | platform.set_architecture(arch.clone()); 116 | } 117 | 118 | if let Some(tag) = tag { 119 | src.insert_manifest(manifest, Some(tag), platform)?; 120 | } else { 121 | src.replace_with_single_manifest(manifest, platform)?; 122 | } 123 | Ok(()) 124 | } 125 | 126 | fn test_proxy_auth() -> Result<()> { 127 | use containers_image_proxy::ImageProxyConfig; 128 | let merge = crate::container::merge_default_container_proxy_opts; 129 | let mut c = ImageProxyConfig::default(); 130 | merge(&mut c)?; 131 | assert_eq!(c.authfile, None); 132 | std::fs::create_dir_all("/etc/ostree")?; 133 | let authpath = Path::new("/etc/ostree/auth.json"); 134 | std::fs::write(authpath, "{}")?; 135 | let mut c = ImageProxyConfig::default(); 136 | merge(&mut c)?; 137 | if rustix::process::getuid().is_root() { 138 | assert!(c.auth_data.is_some()); 139 | } else { 140 | assert_eq!(c.authfile.unwrap().as_path(), authpath,); 141 | } 142 | let c = ImageProxyConfig { 143 | auth_anonymous: true, 144 | ..Default::default() 145 | }; 146 | assert_eq!(c.authfile, None); 147 | std::fs::remove_file(authpath)?; 148 | let mut c = ImageProxyConfig::default(); 149 | merge(&mut c)?; 150 | assert_eq!(c.authfile, None); 151 | Ok(()) 152 | } 153 | 154 | /// Create a test fixture in the same way our unit tests does, and print 155 | /// the location of the temporary directory. Also export a chunked image. 156 | /// Useful for debugging things interactively. 
157 | pub(crate) async fn create_fixture() -> Result<()> { 158 | let fixture = crate::fixture::Fixture::new_v1()?; 159 | let imgref = fixture.export_container().await?.0; 160 | println!("Wrote: {:?}", imgref); 161 | let path = fixture.into_tempdir().into_path(); 162 | println!("Wrote: {:?}", path); 163 | Ok(()) 164 | } 165 | 166 | pub(crate) fn test_ima() -> Result<()> { 167 | use gvariant::aligned_bytes::TryAsAligned; 168 | use gvariant::{gv, Marker, Structure}; 169 | 170 | let cancellable = gio::Cancellable::NONE; 171 | let fixture = crate::fixture::Fixture::new_v1()?; 172 | 173 | let config = indoc::indoc! { r#" 174 | [ req ] 175 | default_bits = 3048 176 | distinguished_name = req_distinguished_name 177 | prompt = no 178 | string_mask = utf8only 179 | x509_extensions = myexts 180 | [ req_distinguished_name ] 181 | O = Test 182 | CN = Test key 183 | emailAddress = example@example.com 184 | [ myexts ] 185 | basicConstraints=critical,CA:FALSE 186 | keyUsage=digitalSignature 187 | subjectKeyIdentifier=hash 188 | authorityKeyIdentifier=keyid 189 | "#}; 190 | std::fs::write(fixture.path.join("genkey.config"), config)?; 191 | let sh = xshell::Shell::new()?; 192 | sh.change_dir(&fixture.path); 193 | cmd!( 194 | sh, 195 | "openssl req -new -nodes -utf8 -sha256 -days 36500 -batch -x509 -config genkey.config -outform DER -out ima.der -keyout privkey_ima.pem" 196 | ) 197 | .ignore_stderr() 198 | .ignore_stdout() 199 | .run()?; 200 | 201 | let imaopts = crate::ima::ImaOpts { 202 | algorithm: "sha256".into(), 203 | key: fixture.path.join("privkey_ima.pem"), 204 | overwrite: false, 205 | }; 206 | let rewritten_commit = 207 | crate::ima::ima_sign(fixture.srcrepo(), fixture.testref(), &imaopts).unwrap(); 208 | 209 | let root = fixture 210 | .srcrepo() 211 | .read_commit(&rewritten_commit, cancellable)? 
212 | .0; 213 | let bash = root.resolve_relative_path("/usr/bin/bash"); 214 | let bash = bash.downcast_ref::<ostree::RepoFile>().unwrap(); 215 | let xattrs = bash.xattrs(cancellable).unwrap(); 216 | let v = xattrs.data_as_bytes(); 217 | let v = v.try_as_aligned().unwrap(); 218 | let v = gv!("a(ayay)").cast(v); 219 | let mut found_ima = false; 220 | for xattr in v.iter() { 221 | let k = xattr.to_tuple().0; 222 | if k != b"security.ima" { 223 | continue; 224 | } 225 | found_ima = true; 226 | break; 227 | } 228 | if !found_ima { 229 | anyhow::bail!("Failed to find IMA xattr"); 230 | } 231 | println!("ok IMA"); 232 | Ok(()) 233 | } 234 | 235 | #[cfg(feature = "internal-testing-api")] 236 | #[context("Running integration tests")] 237 | pub(crate) fn run_tests() -> Result<()> { 238 | crate::container_utils::require_ostree_container()?; 239 | // When there's a new integration test to run, add it here. 240 | test_proxy_auth()?; 241 | println!("integration tests succeeded."); 242 | Ok(()) 243 | } 244 | -------------------------------------------------------------------------------- /lib/src/isolation.rs: -------------------------------------------------------------------------------- 1 | use std::process::Command; 2 | 3 | use once_cell::sync::Lazy; 4 | 5 | pub(crate) const DEFAULT_UNPRIVILEGED_USER: &str = "nobody"; 6 | 7 | /// Checks if the current process is (apparently at least) 8 | /// running under systemd. We use this in various places 9 | /// to e.g. log to the journal instead of printing to stdout. 10 | pub(crate) fn running_in_systemd() -> bool { 11 | static RUNNING_IN_SYSTEMD: Lazy<bool> = Lazy::new(|| { 12 | // See https://www.freedesktop.org/software/systemd/man/systemd.exec.html#%24INVOCATION_ID 13 | std::env::var_os("INVOCATION_ID") 14 | .filter(|s| !s.is_empty()) 15 | .is_some() 16 | }); 17 | 18 | *RUNNING_IN_SYSTEMD 19 | } 20 | 21 | /// Return a prepared subprocess configuration that will run as an unprivileged user if possible. 
22 | /// 23 | /// This currently only drops privileges when run under systemd with DynamicUser. 24 | pub(crate) fn unprivileged_subprocess(binary: &str, user: &str) -> Command { 25 | // TODO: if we detect we're running in a container as uid 0, perhaps at least switch to the 26 | // "bin" user if we can? 27 | if !running_in_systemd() { 28 | return Command::new(binary); 29 | } 30 | let mut cmd = Command::new("setpriv"); 31 | // Clear some strategic environment variables that may cause the containers/image stack 32 | // to look in the wrong places for things. 33 | cmd.env_remove("HOME"); 34 | cmd.env_remove("XDG_DATA_DIR"); 35 | cmd.env_remove("USER"); 36 | cmd.args([ 37 | "--no-new-privs", 38 | "--init-groups", 39 | "--reuid", 40 | user, 41 | "--bounding-set", 42 | "-all", 43 | "--pdeathsig", 44 | "TERM", 45 | "--", 46 | binary, 47 | ]); 48 | cmd 49 | } 50 | -------------------------------------------------------------------------------- /lib/src/keyfileext.rs: -------------------------------------------------------------------------------- 1 | //! Helper methods for [`glib::KeyFile`]. 2 | 3 | use glib::GString; 4 | use ostree::glib; 5 | 6 | /// Helper methods for [`glib::KeyFile`]. 7 | pub trait KeyFileExt { 8 | /// Get a string value, but return `None` if the key does not exist. 9 | fn optional_string(&self, group: &str, key: &str) -> Result<Option<GString>, glib::Error>; 10 | /// Get a boolean value, but return `None` if the key does not exist. 11 | fn optional_bool(&self, group: &str, key: &str) -> Result<Option<bool>, glib::Error>; 12 | } 13 | 14 | /// Consume a keyfile error, mapping the case where group or key is not found to `Ok(None)`. 
15 | pub fn map_keyfile_optional<T>(res: Result<T, glib::Error>) -> Result<Option<T>, glib::Error> { 16 | match res { 17 | Ok(v) => Ok(Some(v)), 18 | Err(e) => { 19 | if let Some(t) = e.kind::<glib::KeyFileError>() { 20 | match t { 21 | glib::KeyFileError::GroupNotFound | glib::KeyFileError::KeyNotFound => Ok(None), 22 | _ => Err(e), 23 | } 24 | } else { 25 | Err(e) 26 | } 27 | } 28 | } 29 | } 30 | 31 | impl KeyFileExt for glib::KeyFile { 32 | fn optional_string(&self, group: &str, key: &str) -> Result<Option<GString>, glib::Error> { 33 | map_keyfile_optional(self.string(group, key)) 34 | } 35 | 36 | fn optional_bool(&self, group: &str, key: &str) -> Result<Option<bool>, glib::Error> { 37 | map_keyfile_optional(self.boolean(group, key)) 38 | } 39 | } 40 | 41 | #[cfg(test)] 42 | mod tests { 43 | use super::*; 44 | 45 | #[test] 46 | fn test_optional() { 47 | let kf = glib::KeyFile::new(); 48 | assert_eq!(kf.optional_string("foo", "bar").unwrap(), None); 49 | kf.set_string("foo", "baz", "someval"); 50 | assert_eq!(kf.optional_string("foo", "bar").unwrap(), None); 51 | assert_eq!( 52 | kf.optional_string("foo", "baz").unwrap().unwrap(), 53 | "someval" 54 | ); 55 | 56 | assert!(kf.optional_bool("foo", "baz").is_err()); 57 | assert_eq!(kf.optional_bool("foo", "bar").unwrap(), None); 58 | kf.set_boolean("foo", "somebool", false); 59 | assert_eq!(kf.optional_bool("foo", "somebool").unwrap(), Some(false)); 60 | } 61 | } 62 | -------------------------------------------------------------------------------- /lib/src/lib.rs: -------------------------------------------------------------------------------- 1 | //! # Extension APIs for ostree 2 | //! 3 | //! This crate builds on top of the core ostree C library 4 | //! and the Rust bindings to it, adding new functionality 5 | //! written in Rust. 
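The `map_keyfile_optional` helper in keyfileext.rs above collapses glib's "group/key not found" error kinds into `Ok(None)` while letting every other error propagate. A dependency-free sketch of the same pattern, using a hypothetical `ConfigError` enum as a stand-in for `glib::Error`/`glib::KeyFileError`:

```rust
// Illustrative only: `ConfigError` and `map_optional` are hypothetical
// stand-ins for glib's error types and `map_keyfile_optional`.
#[derive(Debug, PartialEq)]
enum ConfigError {
    GroupNotFound,
    KeyNotFound,
    Parse(String),
}

// Absence of a group/key is not an error for "optional" accessors;
// malformed values still propagate as errors.
fn map_optional<T>(res: Result<T, ConfigError>) -> Result<Option<T>, ConfigError> {
    match res {
        Ok(v) => Ok(Some(v)),
        Err(ConfigError::GroupNotFound) | Err(ConfigError::KeyNotFound) => Ok(None),
        Err(e) => Err(e),
    }
}

fn main() {
    assert_eq!(map_optional::<i32>(Err(ConfigError::KeyNotFound)), Ok(None));
    assert_eq!(map_optional::<i32>(Err(ConfigError::GroupNotFound)), Ok(None));
    assert_eq!(map_optional(Ok(7)), Ok(Some(7)));
    assert!(map_optional::<i32>(Err(ConfigError::Parse("junk".into()))).is_err());
}
```

This keeps the "missing vs. malformed" distinction in one place, so each accessor (`optional_string`, `optional_bool`) stays a one-liner.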
6 | 7 | // See https://doc.rust-lang.org/rustc/lints/listing/allowed-by-default.html 8 | #![deny(missing_docs)] 9 | #![deny(missing_debug_implementations)] 10 | #![forbid(unused_must_use)] 11 | #![deny(unsafe_code)] 12 | #![cfg_attr(feature = "dox", feature(doc_cfg))] 13 | #![deny(clippy::dbg_macro)] 14 | #![deny(clippy::todo)] 15 | 16 | // Re-export our dependencies. See https://gtk-rs.org/blog/2021/06/22/new-release.html 17 | // "Dependencies are re-exported". Users will need e.g. `gio::File`, so this avoids 18 | // them needing to update matching versions. 19 | pub use containers_image_proxy; 20 | pub use containers_image_proxy::oci_spec; 21 | pub use ostree; 22 | pub use ostree::gio; 23 | pub use ostree::gio::glib; 24 | 25 | /// Our generic catchall fatal error, expected to be converted 26 | /// to a string to output to a terminal or logs. 27 | type Result<T> = anyhow::Result<T>; 28 | 29 | // Import global functions. 30 | pub mod globals; 31 | 32 | mod isolation; 33 | 34 | pub mod bootabletree; 35 | pub mod cli; 36 | pub mod container; 37 | pub mod container_utils; 38 | pub mod diff; 39 | pub mod ima; 40 | pub mod keyfileext; 41 | pub(crate) mod logging; 42 | pub mod mountutil; 43 | pub mod ostree_prepareroot; 44 | pub mod refescape; 45 | #[doc(hidden)] 46 | pub mod repair; 47 | pub mod sysroot; 48 | pub mod tar; 49 | pub mod tokio_util; 50 | 51 | pub mod selinux; 52 | 53 | pub mod chunking; 54 | pub mod commit; 55 | pub mod objectsource; 56 | pub(crate) mod objgv; 57 | #[cfg(feature = "internal-testing-api")] 58 | pub mod ostree_manual; 59 | #[cfg(not(feature = "internal-testing-api"))] 60 | pub(crate) mod ostree_manual; 61 | 62 | pub(crate) mod statistics; 63 | 64 | mod utils; 65 | 66 | #[cfg(feature = "docgen")] 67 | mod docgen; 68 | 69 | /// Prelude, intended for glob import. 
70 | pub mod prelude { 71 | #[doc(hidden)] 72 | pub use ostree::prelude::*; 73 | } 74 | 75 | #[cfg(feature = "internal-testing-api")] 76 | pub mod fixture; 77 | #[cfg(feature = "internal-testing-api")] 78 | pub mod integrationtest; 79 | -------------------------------------------------------------------------------- /lib/src/logging.rs: -------------------------------------------------------------------------------- 1 | use std::collections::HashMap; 2 | use std::sync::atomic::{AtomicBool, Ordering}; 3 | 4 | /// Set to true if we failed to write to the journal once 5 | static EMITTED_JOURNAL_ERROR: AtomicBool = AtomicBool::new(false); 6 | 7 | /// Wrapper for systemd structured logging which only emits a message 8 | /// if we're targeting the system repository, and it's booted. 9 | pub(crate) fn system_repo_journal_send<K, V>( 10 | repo: &ostree::Repo, 11 | priority: libsystemd::logging::Priority, 12 | msg: &str, 13 | vars: impl Iterator<Item = (K, V)>, 14 | ) where 15 | K: AsRef<str>, 16 | V: AsRef<str>, 17 | { 18 | if !libsystemd::daemon::booted() { 19 | return; 20 | } 21 | if !repo.is_system() { 22 | return; 23 | } 24 | if let Err(e) = libsystemd::logging::journal_send(priority, msg, vars) { 25 | if !EMITTED_JOURNAL_ERROR.swap(true, Ordering::SeqCst) { 26 | eprintln!("failed to write to journal: {e}"); 27 | } 28 | } 29 | } 30 | 31 | /// Wrapper for systemd structured logging which only emits a message 32 | /// if we're targeting the system repository, and it's booted. 33 | pub(crate) fn system_repo_journal_print( 34 | repo: &ostree::Repo, 35 | priority: libsystemd::logging::Priority, 36 | msg: &str, 37 | ) { 38 | let vars: HashMap<&str, &str> = HashMap::new(); 39 | system_repo_journal_send(repo, priority, msg, vars.into_iter()) 40 | } 41 | -------------------------------------------------------------------------------- /lib/src/mountutil.rs: -------------------------------------------------------------------------------- 1 | //! Helpers for interacting with mounts. 
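The `EMITTED_JOURNAL_ERROR` flag in logging.rs above relies on `AtomicBool::swap` returning the previous value, so exactly one caller observes `false` and emits the warning. A minimal standalone sketch of that warn-once pattern (the `warn_once` name is illustrative, not part of the crate):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Starts false; the first swap(true) returns false, every later one returns true.
static EMITTED_ERROR: AtomicBool = AtomicBool::new(false);

/// Print `msg` at most once per process; returns true if this call printed it.
fn warn_once(msg: &str) -> bool {
    if !EMITTED_ERROR.swap(true, Ordering::SeqCst) {
        eprintln!("warning: {msg}");
        true
    } else {
        false
    }
}

fn main() {
    assert!(warn_once("failed to write to journal"));
    assert!(!warn_once("failed to write to journal"));
    assert!(!warn_once("failed again"));
}
```

Because swap is atomic, this stays race-free even when many threads hit the failure path simultaneously, without taking a lock on the logging hot path.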
2 | 3 | use std::os::fd::AsFd; 4 | use std::path::Path; 5 | 6 | use anyhow::Result; 7 | use cap_std::fs::Dir; 8 | use cap_std_ext::cap_std; 9 | 10 | // Fix musl support 11 | #[cfg(target_env = "gnu")] 12 | use libc::STATX_ATTR_MOUNT_ROOT; 13 | #[cfg(target_env = "musl")] 14 | const STATX_ATTR_MOUNT_ROOT: libc::c_int = 0x2000; 15 | 16 | fn is_mountpoint_impl_statx(root: &Dir, path: &Path) -> Result<Option<bool>> { 17 | // https://github.com/systemd/systemd/blob/8fbf0a214e2fe474655b17a4b663122943b55db0/src/basic/mountpoint-util.c#L176 18 | use rustix::fs::{AtFlags, StatxFlags}; 19 | 20 | // SAFETY(unwrap): The constant is a small positive value, so converting it to u64 cannot fail. 21 | let mountroot_flag: u64 = STATX_ATTR_MOUNT_ROOT.try_into().unwrap(); 22 | match rustix::fs::statx( 23 | root.as_fd(), 24 | path, 25 | AtFlags::NO_AUTOMOUNT | AtFlags::SYMLINK_NOFOLLOW, 26 | StatxFlags::empty(), 27 | ) { 28 | Ok(r) => { 29 | let present = (r.stx_attributes_mask & mountroot_flag) > 0; 30 | Ok(present.then_some(r.stx_attributes & mountroot_flag > 0)) 31 | } 32 | Err(e) if e == rustix::io::Errno::NOSYS => Ok(None), 33 | Err(e) => Err(e.into()), 34 | } 35 | } 36 | 37 | /// Try to (heuristically) determine if the provided path is a mount root. 
38 | pub fn is_mountpoint(root: &Dir, path: impl AsRef<Path>) -> Result<Option<bool>> { 39 | is_mountpoint_impl_statx(root, path.as_ref()) 40 | } 41 | 42 | #[cfg(test)] 43 | mod tests { 44 | use super::*; 45 | use cap_std_ext::cap_tempfile; 46 | 47 | #[test] 48 | fn test_is_mountpoint() -> Result<()> { 49 | let root = cap_std::fs::Dir::open_ambient_dir("/", cap_std::ambient_authority())?; 50 | let supported = is_mountpoint(&root, Path::new("/")).unwrap(); 51 | match supported { 52 | Some(r) => assert!(r), 53 | // If the host doesn't support statx, ignore this for now 54 | None => return Ok(()), 55 | } 56 | let tmpdir = cap_tempfile::TempDir::new(cap_std::ambient_authority())?; 57 | assert!(!is_mountpoint(&tmpdir, Path::new(".")).unwrap().unwrap()); 58 | Ok(()) 59 | } 60 | } 61 | -------------------------------------------------------------------------------- /lib/src/objectsource.rs: -------------------------------------------------------------------------------- 1 | //! Metadata about the source of an object: a component or package. 2 | //! 3 | //! This is used to help split up containers into distinct layers. 4 | 5 | use indexmap::IndexMap; 6 | use std::borrow::Borrow; 7 | use std::collections::HashSet; 8 | use std::hash::Hash; 9 | use std::rc::Rc; 10 | 11 | use serde::{Deserialize, Serialize, Serializer}; 12 | 13 | mod rcstr_serialize { 14 | use serde::Deserializer; 15 | 16 | use super::*; 17 | 18 | pub(crate) fn serialize<S>(v: &Rc<str>, serializer: S) -> Result<S::Ok, S::Error> 19 | where 20 | S: Serializer, 21 | { 22 | serializer.serialize_str(v) 23 | } 24 | 25 | pub(crate) fn deserialize<'de, D>(deserializer: D) -> Result<Rc<str>, D::Error> 26 | where 27 | D: Deserializer<'de>, 28 | { 29 | let v = String::deserialize(deserializer)?; 30 | Ok(Rc::from(v.into_boxed_str())) 31 | } 32 | } 33 | 34 | /// Identifier for content (e.g. package/layer). Not necessarily human readable. 35 | /// For example in RPMs, this may be a full "NEVRA" i.e. name-epoch:version-release.architecture e.g. 
kernel-6.2-2.fc38.aarch64 36 | /// But that's not strictly required as this string should only live in memory and not be persisted. 37 | pub type ContentID = Rc<str>; 38 | 39 | /// Metadata about a component/package. 40 | #[derive(Debug, Eq, Deserialize, Serialize)] 41 | pub struct ObjectSourceMeta { 42 | /// Unique identifier, does not need to be human readable, but can be. 43 | #[serde(with = "rcstr_serialize")] 44 | pub identifier: ContentID, 45 | /// Just the name of the package (no version), needs to be human readable. 46 | #[serde(with = "rcstr_serialize")] 47 | pub name: Rc<str>, 48 | /// Identifier for the *source* of this content; for example, if multiple binary 49 | /// packages derive from a single git repository or source package. 50 | #[serde(with = "rcstr_serialize")] 51 | pub srcid: Rc<str>, 52 | /// Unitless, relative offset of last change time. 53 | /// One suggested way to generate this number is to have it be in units of hours or days 54 | /// since the earliest changed item. 55 | pub change_time_offset: u32, 56 | /// Change frequency 57 | pub change_frequency: u32, 58 | } 59 | 60 | impl PartialEq for ObjectSourceMeta { 61 | fn eq(&self, other: &Self) -> bool { 62 | *self.identifier == *other.identifier 63 | } 64 | } 65 | 66 | impl Hash for ObjectSourceMeta { 67 | fn hash<H: std::hash::Hasher>(&self, state: &mut H) { 68 | self.identifier.hash(state); 69 | } 70 | } 71 | 72 | impl Borrow<str> for ObjectSourceMeta { 73 | fn borrow(&self) -> &str { 74 | &self.identifier 75 | } 76 | } 77 | 78 | /// Maps from e.g. "bash" or "kernel" to metadata about that content 79 | pub type ObjectMetaSet = HashSet<ObjectSourceMeta>; 80 | 81 | /// Maps from an ostree content object digest to the `ContentSet` key. 82 | pub type ObjectMetaMap = IndexMap<String, ContentID>; 83 | 84 | /// Grouping of metadata about an object. 85 | #[derive(Debug, Default)] 86 | pub struct ObjectMeta { 87 | /// The set of object sources with their metadata. 88 | pub set: ObjectMetaSet, 89 | /// Mapping from content object to source. 
90 | pub map: ObjectMetaMap, 91 | } 92 | -------------------------------------------------------------------------------- /lib/src/objgv.rs: -------------------------------------------------------------------------------- 1 | /// Type representing an ostree commit object. 2 | macro_rules! gv_commit { 3 | () => { 4 | gvariant::gv!("(a{sv}aya(say)sstayay)") 5 | }; 6 | } 7 | pub(crate) use gv_commit; 8 | 9 | /// Type representing an ostree DIRTREE object. 10 | macro_rules! gv_dirtree { 11 | () => { 12 | gvariant::gv!("(a(say)a(sayay))") 13 | }; 14 | } 15 | pub(crate) use gv_dirtree; 16 | 17 | #[cfg(test)] 18 | mod tests { 19 | use gvariant::aligned_bytes::TryAsAligned; 20 | use gvariant::Marker; 21 | 22 | use super::*; 23 | #[test] 24 | fn test_dirtree() { 25 | // Just a compilation test 26 | let data = b"".try_as_aligned().ok(); 27 | if let Some(data) = data { 28 | let _t = gv_dirtree!().cast(data); 29 | } 30 | } 31 | } 32 | -------------------------------------------------------------------------------- /lib/src/ostree_manual.rs: -------------------------------------------------------------------------------- 1 | //! Manual workarounds for ostree bugs 2 | 3 | use std::io::Read; 4 | use std::ptr; 5 | 6 | use ostree::prelude::{Cast, InputStreamExtManual}; 7 | use ostree::{gio, glib}; 8 | 9 | /// Equivalent of `g_file_read()` for ostree::RepoFile to work around https://github.com/ostreedev/ostree/issues/2703 10 | #[allow(unsafe_code)] 11 | pub fn repo_file_read(f: &ostree::RepoFile) -> Result<gio::InputStream, glib::Error> { 12 | use glib::translate::*; 13 | let stream = unsafe { 14 | let f = f.upcast_ref::<gio::File>(); 15 | let mut error = ptr::null_mut(); 16 | let stream = gio::ffi::g_file_read(f.to_glib_none().0, ptr::null_mut(), &mut error); 17 | if !error.is_null() { 18 | return Err(from_glib_full(error)); 19 | } 20 | // Upcast to GInputStream here 21 | from_glib_full(stream as *mut gio::ffi::GInputStream) 22 | }; 23 | 24 | Ok(stream) 25 | } 26 | 27 | /// Read a repo file to a string. 
28 | pub fn repo_file_read_to_string(f: &ostree::RepoFile) -> anyhow::Result<String> { 29 | let mut r = String::new(); 30 | let mut s = repo_file_read(f)?.into_read(); 31 | s.read_to_string(&mut r)?; 32 | Ok(r) 33 | } 34 | -------------------------------------------------------------------------------- /lib/src/ostree_prepareroot.rs: -------------------------------------------------------------------------------- 1 | //! Logic related to parsing ostree-prepare-root.conf. 2 | //! 3 | 4 | // SPDX-License-Identifier: Apache-2.0 OR MIT 5 | 6 | use std::str::FromStr; 7 | 8 | use anyhow::{Context, Result}; 9 | use camino::Utf8Path; 10 | use glib::Cast; 11 | use ostree::prelude::FileExt; 12 | use ostree::{gio, glib}; 13 | 14 | use crate::keyfileext::KeyFileExt; 15 | use crate::ostree_manual; 16 | use crate::utils::ResultExt; 17 | 18 | pub(crate) const CONF_PATH: &str = "ostree/prepare-root.conf"; 19 | 20 | pub(crate) fn load_config(root: &ostree::RepoFile) -> Result<Option<glib::KeyFile>> { 21 | let cancellable = gio::Cancellable::NONE; 22 | let kf = glib::KeyFile::new(); 23 | for path in ["etc", "usr/lib"].into_iter().map(Utf8Path::new) { 24 | let path = &path.join(CONF_PATH); 25 | let f = root.resolve_relative_path(path); 26 | if !f.query_exists(cancellable) { 27 | continue; 28 | } 29 | let f = f.downcast_ref::<ostree::RepoFile>().unwrap(); 30 | let contents = ostree_manual::repo_file_read_to_string(f)?; 31 | kf.load_from_data(&contents, glib::KeyFileFlags::NONE) 32 | .with_context(|| format!("Parsing {path}"))?; 33 | tracing::debug!("Loaded {path}"); 34 | return Ok(Some(kf)); 35 | } 36 | tracing::debug!("No {CONF_PATH} found"); 37 | Ok(None) 38 | } 39 | 40 | /// Query whether the target root has the `root.transient` key 41 | /// which sets up a transient overlayfs. 42 | pub(crate) fn overlayfs_root_enabled(root: &ostree::RepoFile) -> Result<bool> { 43 | if let Some(config) = load_config(root)? 
{ 44 | overlayfs_enabled_in_config(&config) 45 | } else { 46 | Ok(false) 47 | } 48 | } 49 | 50 | #[derive(Debug, PartialEq, Eq)] 51 | enum Tristate { 52 | Enabled, 53 | Disabled, 54 | Maybe, 55 | } 56 | 57 | impl FromStr for Tristate { 58 | type Err = anyhow::Error; 59 | 60 | fn from_str(s: &str) -> Result<Self> { 61 | let r = match s { 62 | // Keep this in sync with ot_keyfile_get_tristate_with_default from ostree 63 | "yes" | "true" | "1" => Tristate::Enabled, 64 | "no" | "false" | "0" => Tristate::Disabled, 65 | "maybe" => Tristate::Maybe, 66 | o => anyhow::bail!("Invalid tristate value: {o}"), 67 | }; 68 | Ok(r) 69 | } 70 | } 71 | 72 | impl Default for Tristate { 73 | fn default() -> Self { 74 | Self::Disabled 75 | } 76 | } 77 | 78 | impl Tristate { 79 | pub(crate) fn maybe_enabled(&self) -> bool { 80 | match self { 81 | Self::Enabled | Self::Maybe => true, 82 | Self::Disabled => false, 83 | } 84 | } 85 | } 86 | 87 | #[derive(Debug, PartialEq, Eq)] 88 | enum ComposefsState { 89 | Signed, 90 | Tristate(Tristate), 91 | } 92 | 93 | impl Default for ComposefsState { 94 | fn default() -> Self { 95 | Self::Tristate(Tristate::default()) 96 | } 97 | } 98 | 99 | impl FromStr for ComposefsState { 100 | type Err = anyhow::Error; 101 | 102 | fn from_str(s: &str) -> Result<Self> { 103 | let r = match s { 104 | "signed" => Self::Signed, 105 | o => Self::Tristate(Tristate::from_str(o)?), 106 | }; 107 | Ok(r) 108 | } 109 | } 110 | 111 | impl ComposefsState { 112 | pub(crate) fn maybe_enabled(&self) -> bool { 113 | match self { 114 | ComposefsState::Signed => true, 115 | ComposefsState::Tristate(t) => t.maybe_enabled(), 116 | } 117 | } 118 | } 119 | 120 | /// Query whether the config uses an overlayfs model (composefs or plain overlayfs). 121 | pub fn overlayfs_enabled_in_config(config: &glib::KeyFile) -> Result<bool> { 122 | let root_transient = config 123 | .optional_bool("root", "transient")? 
124 | .unwrap_or_default(); 125 | let composefs = config 126 | .optional_string("composefs", "enabled")? 127 | .map(|s| ComposefsState::from_str(s.as_str())) 128 | .transpose() 129 | .log_err_default() 130 | .unwrap_or_default(); 131 | Ok(root_transient || composefs.maybe_enabled()) 132 | } 133 | 134 | #[test] 135 | fn test_tristate() { 136 | for v in ["yes", "true", "1"] { 137 | assert_eq!(Tristate::from_str(v).unwrap(), Tristate::Enabled); 138 | } 139 | assert_eq!(Tristate::from_str("maybe").unwrap(), Tristate::Maybe); 140 | for v in ["no", "false", "0"] { 141 | assert_eq!(Tristate::from_str(v).unwrap(), Tristate::Disabled); 142 | } 143 | for v in ["", "junk", "fal", "tr1"] { 144 | assert!(Tristate::from_str(v).is_err()); 145 | } 146 | } 147 | 148 | #[test] 149 | fn test_composefs_state() { 150 | assert_eq!( 151 | ComposefsState::from_str("signed").unwrap(), 152 | ComposefsState::Signed 153 | ); 154 | for v in ["yes", "true", "1"] { 155 | assert_eq!( 156 | ComposefsState::from_str(v).unwrap(), 157 | ComposefsState::Tristate(Tristate::Enabled) 158 | ); 159 | } 160 | assert_eq!(Tristate::from_str("maybe").unwrap(), Tristate::Maybe); 161 | for v in ["no", "false", "0"] { 162 | assert_eq!( 163 | ComposefsState::from_str(v).unwrap(), 164 | ComposefsState::Tristate(Tristate::Disabled) 165 | ); 166 | } 167 | } 168 | 169 | #[test] 170 | fn test_overlayfs_enabled() { 171 | let d0 = indoc::indoc! { r#" 172 | [foo] 173 | bar = baz 174 | [root] 175 | "# }; 176 | let d1 = indoc::indoc! { r#" 177 | [root] 178 | transient = false 179 | "# }; 180 | let d2 = indoc::indoc! 
{ r#" 181 | [composefs] 182 | enabled = false 183 | "# }; 184 | for v in ["", d0, d1, d2] { 185 | let kf = glib::KeyFile::new(); 186 | kf.load_from_data(v, glib::KeyFileFlags::empty()).unwrap(); 187 | assert_eq!(overlayfs_enabled_in_config(&kf).unwrap(), false); 188 | } 189 | 190 | let e0 = format!("{d0}\n[root]\ntransient = true"); 191 | let e1 = format!("{d1}\n[composefs]\nenabled = true\n[other]\nsomekey = someval"); 192 | let e2 = format!("{d1}\n[composefs]\nenabled = yes"); 193 | let e3 = format!("{d1}\n[composefs]\nenabled = signed"); 194 | for v in [e0, e1, e2, e3] { 195 | let kf = glib::KeyFile::new(); 196 | kf.load_from_data(&v, glib::KeyFileFlags::empty()).unwrap(); 197 | assert_eq!(overlayfs_enabled_in_config(&kf).unwrap(), true); 198 | } 199 | } 200 | -------------------------------------------------------------------------------- /lib/src/refescape.rs: -------------------------------------------------------------------------------- 1 | //! Escape strings for use in ostree refs. 2 | //! 3 | //! It can be desirable to map arbitrary identifiers, such as RPM/dpkg 4 | //! package names or container image references (e.g. `docker://quay.io/examplecorp/os:latest`) 5 | //! into ostree refs (branch names) which have a quite restricted set 6 | //! of valid characters; basically alphanumeric, plus `/`, `-`, `_`. 7 | //! 8 | //! This escaping scheme uses `_` in a similar way as a `\` character is 9 | //! used in Rust unicode escaped values. For example, `:` is `_3A_` (hexadecimal). 10 | //! Because the empty path is not valid, `//` is escaped as `/_2F_` (i.e. the second `/` is escaped). 11 | 12 | use anyhow::Result; 13 | use std::fmt::Write; 14 | 15 | /// Escape a single string; this is a backend of [`prefix_escape_for_ref`]. 
16 | fn escape_for_ref(s: &str) -> Result<String> { 17 | if s.is_empty() { 18 | return Err(anyhow::anyhow!("Invalid empty string for ref")); 19 | } 20 | fn escape_c(r: &mut String, c: char) { 21 | write!(r, "_{:02X}_", c as u32).unwrap() 22 | } 23 | let mut r = String::new(); 24 | let mut it = s 25 | .chars() 26 | .map(|c| { 27 | if c == '\0' { 28 | Err(anyhow::anyhow!( 29 | "Invalid embedded NUL in string for ostree ref" 30 | )) 31 | } else { 32 | Ok(c) 33 | } 34 | }) 35 | .peekable(); 36 | 37 | let mut previous_alphanumeric = false; 38 | while let Some(c) = it.next() { 39 | let has_next = it.peek().is_some(); 40 | let c = c?; 41 | let current_alphanumeric = c.is_ascii_alphanumeric(); 42 | match c { 43 | c if current_alphanumeric => r.push(c), 44 | '/' if previous_alphanumeric && has_next => r.push(c), 45 | // Pass through `-` unconditionally 46 | '-' => r.push(c), 47 | // The underscore `_` quotes itself `__`. 48 | '_' => r.push_str("__"), 49 | o => escape_c(&mut r, o), 50 | } 51 | previous_alphanumeric = current_alphanumeric; 52 | } 53 | Ok(r) 54 | } 55 | 56 | /// Compute a string suitable for use as an OSTree ref, where `s` can be a (nearly) 57 | /// arbitrary UTF-8 string. This requires a non-empty prefix. 58 | /// 59 | /// The restrictions on `s` are: 60 | /// - The empty string is not supported 61 | /// - There may not be embedded `NUL` (`\0`) characters. 62 | /// 63 | /// The intention behind requiring a prefix is that a common need is to use e.g. 64 | /// [`ostree::Repo::list_refs`] to find refs of a certain "type". 
65 | /// 66 | /// # Examples: 67 | /// 68 | /// ```rust 69 | /// # fn test() -> anyhow::Result<()> { 70 | /// use ostree_ext::refescape; 71 | /// let s = "registry:quay.io/coreos/fedora:latest"; 72 | /// assert_eq!(refescape::prefix_escape_for_ref("container", s)?, 73 | /// "container/registry_3A_quay_2E_io/coreos/fedora_3A_latest"); 74 | /// # Ok(()) 75 | /// # } 76 | /// ``` 77 | pub fn prefix_escape_for_ref(prefix: &str, s: &str) -> Result<String> { 78 | Ok(format!("{}/{}", prefix, escape_for_ref(s)?)) 79 | } 80 | 81 | /// Reverse the effect of [`escape_for_ref()`]. 82 | fn unescape_for_ref(s: &str) -> Result<String> { 83 | let mut r = String::new(); 84 | let mut it = s.chars(); 85 | let mut buf = String::new(); 86 | while let Some(c) = it.next() { 87 | match c { 88 | c if c.is_ascii_alphanumeric() => { 89 | r.push(c); 90 | } 91 | '-' | '/' => r.push(c), 92 | '_' => { 93 | let next = it.next(); 94 | if let Some('_') = next { 95 | r.push('_') 96 | } else if let Some(c) = next { 97 | buf.clear(); 98 | buf.push(c); 99 | for c in &mut it { 100 | if c == '_' { 101 | break; 102 | } 103 | buf.push(c); 104 | } 105 | let v = u32::from_str_radix(&buf, 16)?; 106 | let c: char = v.try_into()?; 107 | r.push(c); 108 | } 109 | } 110 | o => anyhow::bail!("Invalid character {}", o), 111 | } 112 | } 113 | Ok(r) 114 | } 115 | 116 | /// Remove a prefix from an ostree ref, and return the unescaped remainder. 
117 | /// 118 | /// # Examples: 119 | /// 120 | /// ```rust 121 | /// # fn test() -> anyhow::Result<()> { 122 | /// use ostree_ext::refescape; 123 | /// let s = "registry:quay.io/coreos/fedora:latest"; 124 | /// assert_eq!(refescape::unprefix_unescape_ref("container", "container/registry_3A_quay_2E_io/coreos/fedora_3A_latest")?, s); 125 | /// # Ok(()) 126 | /// # } 127 | /// ``` 128 | pub fn unprefix_unescape_ref(prefix: &str, ostree_ref: &str) -> Result<String> { 129 | let rest = ostree_ref 130 | .strip_prefix(prefix) 131 | .and_then(|s| s.strip_prefix('/')) 132 | .ok_or_else(|| { 133 | anyhow::anyhow!( 134 | "ref does not match expected prefix {}/: {}", 135 | prefix, 136 | ostree_ref 137 | ) 138 | })?; 139 | unescape_for_ref(rest) 140 | } 141 | 142 | #[cfg(test)] 143 | mod test { 144 | use super::*; 145 | use quickcheck::{quickcheck, TestResult}; 146 | 147 | const TESTPREFIX: &str = "testprefix/blah"; 148 | 149 | const UNCHANGED: &[&str] = &["foo", "foo/bar/baz-blah/foo"]; 150 | const ROUNDTRIP: &[&str] = &[ 151 | "localhost:5000/foo:latest", 152 | "fedora/x86_64/coreos", 153 | "/foo/bar/foo.oci-archive", 154 | "/foo/bar/foo.docker-archive", 155 | "docker://quay.io/exampleos/blah:latest", 156 | "oci-archive:/path/to/foo.ociarchive", 157 | "docker-archive:/path/to/foo.dockerarchive", 158 | ]; 159 | const CORNERCASES: &[&str] = &["/", "blah/", "/foo/"]; 160 | 161 | #[test] 162 | fn escape() { 163 | // These strings shouldn't change 164 | for &v in UNCHANGED { 165 | let escaped = &escape_for_ref(v).unwrap(); 166 | ostree::validate_rev(escaped).unwrap(); 167 | assert_eq!(escaped.as_str(), v); 168 | } 169 | // Roundtrip cases, plus unchanged cases 170 | for &v in UNCHANGED.iter().chain(ROUNDTRIP).chain(CORNERCASES) { 171 | let escaped = &prefix_escape_for_ref(TESTPREFIX, v).unwrap(); 172 | ostree::validate_rev(escaped).unwrap(); 173 | let unescaped = unprefix_unescape_ref(TESTPREFIX, escaped).unwrap(); 174 | assert_eq!(v, unescaped); 175 | } 176 | // Explicit test 
assert_eq!( 178 | escape_for_ref(ROUNDTRIP[0]).unwrap(), 179 | "localhost_3A_5000/foo_3A_latest" 180 | ); 181 | } 182 | 183 | fn roundtrip(s: String) -> TestResult { 184 | // Ensure we only try strings which match the predicates. 185 | let r = prefix_escape_for_ref(TESTPREFIX, &s); 186 | let escaped = match r { 187 | Ok(v) => v, 188 | Err(_) => return TestResult::discard(), 189 | }; 190 | let unescaped = unprefix_unescape_ref(TESTPREFIX, &escaped).unwrap(); 191 | TestResult::from_bool(unescaped == s) 192 | } 193 | 194 | #[test] 195 | fn qcheck() { 196 | quickcheck(roundtrip as fn(String) -> TestResult); 197 | } 198 | } 199 | -------------------------------------------------------------------------------- /lib/src/repair.rs: -------------------------------------------------------------------------------- 1 | //! System repair functionality 2 | 3 | use std::collections::{BTreeMap, BTreeSet}; 4 | use std::fmt::Display; 5 | 6 | use anyhow::{anyhow, Context, Result}; 7 | use cap_std::fs::{Dir, MetadataExt}; 8 | use cap_std_ext::cap_std; 9 | use fn_error_context::context; 10 | use serde::{Deserialize, Serialize}; 11 | 12 | use crate::sysroot::SysrootLock; 13 | 14 | // Find the inode numbers for objects 15 | fn gather_inodes( 16 | prefix: &str, 17 | dir: &Dir, 18 | little_inodes: &mut BTreeMap<u32, String>, 19 | big_inodes: &mut BTreeMap<u64, String>, 20 | ) -> Result<()> { 21 | for child in dir.entries()? { 22 | let child = child?; 23 | let metadata = child.metadata()?; 24 | if !(metadata.is_file() || metadata.is_symlink()) { 25 | continue; 26 | } 27 | let name = child.file_name(); 28 | let name = name 29 | .to_str() 30 | .ok_or_else(|| anyhow::anyhow!("Invalid {name:?}"))?; 31 | let object_rest = name 32 | .split_once('.') 33 | .ok_or_else(|| anyhow!("Invalid object {name}"))? 
34 | .0; 35 | let checksum = format!("{prefix}{object_rest}"); 36 | let inode = metadata.ino(); 37 | if let Ok(little) = u32::try_from(inode) { 38 | little_inodes.insert(little, checksum); 39 | } else { 40 | big_inodes.insert(inode, checksum); 41 | } 42 | } 43 | Ok(()) 44 | } 45 | 46 | #[derive(Default, Debug, PartialEq, Eq, Serialize, Deserialize)] 47 | #[serde(rename_all = "kebab-case")] 48 | pub struct RepairResult { 49 | /// Result of inode checking 50 | pub inodes: InodeCheck, 51 | // Whether we detected a likely corrupted merge commit 52 | pub likely_corrupted_container_image_merges: Vec<String>, 53 | // Whether the booted deployment is likely corrupted 54 | pub booted_is_likely_corrupted: bool, 55 | // Whether the staged deployment is likely corrupted 56 | pub staged_is_likely_corrupted: bool, 57 | } 58 | 59 | #[derive(Default, Debug, PartialEq, Eq, Serialize, Deserialize)] 60 | #[serde(rename_all = "kebab-case")] 61 | pub struct InodeCheck { 62 | // Number of >32 bit inodes found 63 | pub inode64: u64, 64 | // Number of <= 32 bit inodes found 65 | pub inode32: u64, 66 | // Number of collisions found (when 64 bit inode is truncated to 32 bit) 67 | pub collisions: BTreeSet<u64>, 68 | } 69 | 70 | impl Display for InodeCheck { 71 | fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { 72 | write!( 73 | f, 74 | "ostree inode check:\n 64bit inodes: {}\n 32 bit inodes: {}\n collisions: {}\n", 75 | self.inode64, 76 | self.inode32, 77 | self.collisions.len() 78 | ) 79 | } 80 | } 81 | 82 | impl InodeCheck { 83 | pub fn is_ok(&self) -> bool { 84 | self.collisions.is_empty() 85 | } 86 | } 87 | 88 | #[context("Checking inodes")] 89 | #[doc(hidden)] 90 | /// Detect if any commits are potentially incorrect due to inode truncations. 
91 | pub fn check_inode_collision(repo: &ostree::Repo, verbose: bool) -> Result<InodeCheck> { 92 | let repo_dir = Dir::reopen_dir(&repo.dfd_borrow())?; 93 | let objects = repo_dir.open_dir("objects")?; 94 | 95 | println!( 96 | r#"Attempting analysis of ostree state for files that may be incorrectly linked. 97 | For more information, see https://github.com/ostreedev/ostree/pull/2874/commits/de6fddc6adee09a93901243dc7074090828a1912 98 | "# 99 | ); 100 | 101 | println!("Gathering inodes for ostree objects..."); 102 | let mut little_inodes = BTreeMap::new(); 103 | let mut big_inodes = BTreeMap::new(); 104 | 105 | for child in objects.entries()? { 106 | let child = child?; 107 | if !child.file_type()?.is_dir() { 108 | continue; 109 | } 110 | let name = child.file_name(); 111 | if name.len() != 2 { 112 | continue; 113 | } 114 | let name = name 115 | .to_str() 116 | .ok_or_else(|| anyhow::anyhow!("Invalid {name:?}"))?; 117 | let objdir = child.open_dir()?; 118 | gather_inodes(name, &objdir, &mut little_inodes, &mut big_inodes) 119 | .with_context(|| format!("Processing {name:?}"))?; 120 | } 121 | 122 | let mut colliding_inodes = BTreeMap::new(); 123 | for (big_inum, big_inum_checksum) in big_inodes.iter() { 124 | let truncated = *big_inum as u32; 125 | if let Some(small_inum_object) = little_inodes.get(&truncated) { 126 | // Don't output each collision unless verbose mode is enabled. It's actually 127 | // quite interesting to see data, but only for development and deep introspection 128 | // use cases. 129 | if verbose { 130 | eprintln!( 131 | r#"collision: 132 | inode (>32 bit): {big_inum} 133 | object: {big_inum_checksum} 134 | inode (truncated): {truncated} 135 | object: {small_inum_object} 136 | "# 137 | ); 138 | } 139 | colliding_inodes.insert(big_inum, big_inum_checksum); 140 | } 141 | } 142 | 143 | // From here let's just track the possibly-colliding 64 bit inode, not also 144 | // the checksum.
145 | let collisions = colliding_inodes 146 | .keys() 147 | .map(|&&v| v) 148 | .collect::<BTreeSet<u64>>(); 149 | 150 | let inode32 = little_inodes.len() as u64; 151 | let inode64 = big_inodes.len() as u64; 152 | Ok(InodeCheck { 153 | inode32, 154 | inode64, 155 | collisions, 156 | }) 157 | } 158 | 159 | /// Attempt to automatically repair any corruption from inode collisions. 160 | #[doc(hidden)] 161 | pub fn analyze_for_repair(sysroot: &SysrootLock, verbose: bool) -> Result<RepairResult> { 162 | use crate::container::store as container_store; 163 | let repo = &sysroot.repo(); 164 | 165 | // Query booted and pending state 166 | let booted_deployment = sysroot.booted_deployment(); 167 | let booted_checksum = booted_deployment.as_ref().map(|b| b.csum()); 168 | let booted_checksum = booted_checksum.as_ref().map(|s| s.as_str()); 169 | let staged_deployment = sysroot.staged_deployment(); 170 | let staged_checksum = staged_deployment.as_ref().map(|b| b.csum()); 171 | let staged_checksum = staged_checksum.as_ref().map(|s| s.as_str()); 172 | 173 | let inodes = check_inode_collision(repo, verbose)?; 174 | println!("{}", inodes); 175 | if inodes.is_ok() { 176 | println!("OK no colliding inodes found"); 177 | return Ok(RepairResult { 178 | inodes, 179 | ..Default::default() 180 | }); 181 | } 182 | 183 | let all_images = container_store::list_images(repo)?; 184 | let all_images = all_images 185 | .into_iter() 186 | .map(|img| crate::container::ImageReference::try_from(img.as_str())) 187 | .collect::<Result<Vec<_>>>()?; 188 | println!("Verifying ostree-container images: {}", all_images.len()); 189 | let mut likely_corrupted_container_image_merges = Vec::new(); 190 | let mut booted_is_likely_corrupted = false; 191 | let mut staged_is_likely_corrupted = false; 192 | for imgref in all_images { 193 | if let Some(state) = container_store::query_image(repo, &imgref)? { 194 | if !container_store::verify_container_image( 195 | sysroot, 196 | &imgref, 197 | &state, 198 | &inodes.collisions, 199 | verbose, 200 | )?
{ 201 | eprintln!("warning: Corrupted image {imgref}"); 202 | likely_corrupted_container_image_merges.push(imgref.to_string()); 203 | let merge_commit = state.merge_commit.as_str(); 204 | if booted_checksum == Some(merge_commit) { 205 | booted_is_likely_corrupted = true; 206 | eprintln!("warning: booted deployment is likely corrupted"); 207 | } else if staged_checksum == Some(merge_commit) { 208 | staged_is_likely_corrupted = true; 209 | eprintln!("warning: staged deployment is likely corrupted"); 210 | } 211 | } 212 | } else { 213 | // This really shouldn't happen 214 | eprintln!("warning: Image was removed from underneath us: {imgref}"); 215 | std::thread::sleep(std::time::Duration::from_secs(1)); 216 | } 217 | } 218 | Ok(RepairResult { 219 | inodes, 220 | likely_corrupted_container_image_merges, 221 | booted_is_likely_corrupted, 222 | staged_is_likely_corrupted, 223 | }) 224 | } 225 | 226 | impl RepairResult { 227 | pub fn check(&self) -> anyhow::Result<()> { 228 | if self.booted_is_likely_corrupted { 229 | eprintln!("warning: booted deployment is likely corrupted"); 230 | } 231 | if self.staged_is_likely_corrupted { 232 | eprintln!("warning: staged deployment is likely corrupted"); 233 | } 234 | match self.likely_corrupted_container_image_merges.len() { 235 | 0 => { 236 | println!("OK no corruption found"); 237 | Ok(()) 238 | } 239 | n => { 240 | anyhow::bail!("Found corruption in images: {n}") 241 | } 242 | } 243 | } 244 | 245 | #[context("Repairing")] 246 | pub fn repair(self, sysroot: &SysrootLock) -> Result<()> { 247 | let repo = &sysroot.repo(); 248 | for imgref in self.likely_corrupted_container_image_merges { 249 | let imgref = crate::container::ImageReference::try_from(imgref.as_str())?; 250 | eprintln!("Flushing cached state for corrupted merged image: {imgref}"); 251 | crate::container::store::remove_images(repo, [&imgref])?; 252 | } 253 | if self.booted_is_likely_corrupted { 254 | anyhow::bail!("TODO redeploy and reboot for booted deployment
corruption"); 255 | } 256 | if self.staged_is_likely_corrupted { 257 | anyhow::bail!("TODO undeploy for staged deployment corruption"); 258 | } 259 | Ok(()) 260 | } 261 | } 262 | -------------------------------------------------------------------------------- /lib/src/selinux.rs: -------------------------------------------------------------------------------- 1 | //! SELinux-related helper APIs. 2 | 3 | use anyhow::Result; 4 | use fn_error_context::context; 5 | use std::path::Path; 6 | 7 | /// The well-known selinuxfs mount point 8 | const SELINUX_MNT: &str = "/sys/fs/selinux"; 9 | /// Hardcoded value for SELinux domain capable of setting unknown contexts. 10 | const INSTALL_T: &str = "install_t"; 11 | 12 | /// Query for whether or not SELinux is enabled. 13 | pub fn is_selinux_enabled() -> bool { 14 | Path::new(SELINUX_MNT).join("access").exists() 15 | } 16 | 17 | /// Return an error if the current process is not running in the `install_t` domain. 18 | #[context("Verifying self is install_t SELinux domain")] 19 | pub fn verify_install_domain() -> Result<()> { 20 | // If it doesn't look like SELinux is enabled, then nothing to do. 21 | if !is_selinux_enabled() { 22 | return Ok(()); 23 | } 24 | 25 | // If we're not root, there's no need to try to warn because we can only 26 | // do read-only operations anyways. 27 | if !rustix::process::getuid().is_root() { 28 | return Ok(()); 29 | } 30 | 31 | let self_domain = std::fs::read_to_string("/proc/self/attr/current")?; 32 | let is_install_t = self_domain.split(':').any(|x| x == INSTALL_T); 33 | if !is_install_t { 34 | anyhow::bail!( 35 | "Detected SELinux enabled system, but the executing binary is not labeled install_exec_t" 36 | ); 37 | } 38 | Ok(()) 39 | } 40 | -------------------------------------------------------------------------------- /lib/src/statistics.rs: -------------------------------------------------------------------------------- 1 | //!
This module holds implementations of some basic statistical properties, such as mean and standard deviation. 2 | 3 | pub(crate) fn mean(data: &[u64]) -> Option<f64> { 4 | if data.is_empty() { 5 | None 6 | } else { 7 | Some(data.iter().sum::<u64>() as f64 / data.len() as f64) 8 | } 9 | } 10 | 11 | pub(crate) fn std_deviation(data: &[u64]) -> Option<f64> { 12 | match (mean(data), data.len()) { 13 | (Some(data_mean), count) if count > 0 => { 14 | let variance = data 15 | .iter() 16 | .map(|value| { 17 | let diff = data_mean - (*value as f64); 18 | diff * diff 19 | }) 20 | .sum::<f64>() 21 | / count as f64; 22 | Some(variance.sqrt()) 23 | } 24 | _ => None, 25 | } 26 | } 27 | 28 | //Assumed sorted 29 | pub(crate) fn median_absolute_deviation(data: &mut [u64]) -> Option<(f64, f64)> { 30 | if data.is_empty() { 31 | None 32 | } else { 33 | //Sort data 34 | //data.sort_by(|a, b| a.partial_cmp(b).unwrap()); 35 | 36 | //Find median of data 37 | let median_data: f64 = match data.len() % 2 { 38 | 1 => data[data.len() / 2] as f64, 39 | _ => 0.5 * (data[data.len() / 2 - 1] + data[data.len() / 2]) as f64, 40 | }; 41 | 42 | //Absolute deviations 43 | let mut absolute_deviations = Vec::new(); 44 | for size in data { 45 | absolute_deviations.push(f64::abs(*size as f64 - median_data)) 46 | } 47 | 48 | absolute_deviations.sort_by(|a, b| a.partial_cmp(b).unwrap()); 49 | let l = absolute_deviations.len(); 50 | let mad: f64 = match l % 2 { 51 | 1 => absolute_deviations[l / 2], 52 | _ => 0.5 * (absolute_deviations[l / 2 - 1] + absolute_deviations[l / 2]), 53 | }; 54 | 55 | Some((median_data, mad)) 56 | } 57 | } 58 | 59 | #[test] 60 | fn test_mean() { 61 | assert_eq!(mean(&[]), None); 62 | for v in [0u64, 1, 5, 100] { 63 | assert_eq!(mean(&[v]), Some(v as f64)); 64 | } 65 | assert_eq!(mean(&[0, 1]), Some(0.5)); 66 | assert_eq!(mean(&[0, 5, 100]), Some(35.0)); 67 | assert_eq!(mean(&[7, 4, 30, 14]), Some(13.75)); 68 | } 69 | 70 | #[test] 71 | fn test_std_deviation() { 72 | assert_eq!(std_deviation(&[]), None);
73 | for v in [0u64, 1, 5, 100] { 74 | assert_eq!(std_deviation(&[v]), Some(0 as f64)); 75 | } 76 | assert_eq!(std_deviation(&[1, 4]), Some(1.5)); 77 | assert_eq!(std_deviation(&[2, 2, 2, 2]), Some(0.0)); 78 | assert_eq!( 79 | std_deviation(&[1, 20, 300, 4000, 50000, 600000, 7000000, 80000000]), 80 | Some(26193874.56387471) 81 | ); 82 | } 83 | 84 | #[test] 85 | fn test_median_absolute_deviation() { 86 | //Assumes sorted 87 | assert_eq!(median_absolute_deviation(&mut []), None); 88 | for v in [0u64, 1, 5, 100] { 89 | assert_eq!(median_absolute_deviation(&mut [v]), Some((v as f64, 0.0))); 90 | } 91 | assert_eq!(median_absolute_deviation(&mut [1, 4]), Some((2.5, 1.5))); 92 | assert_eq!( 93 | median_absolute_deviation(&mut [2, 2, 2, 2]), 94 | Some((2.0, 0.0)) 95 | ); 96 | assert_eq!( 97 | median_absolute_deviation(&mut [ 98 | 1, 2, 3, 3, 4, 4, 4, 5, 5, 6, 6, 6, 7, 7, 7, 8, 9, 12, 52, 90 99 | ]), 100 | Some((6.0, 2.0)) 101 | ); 102 | 103 | //if more than half of the data has the same value, MAD = 0, thus any 104 | //value different from the residual median is classified as an outlier 105 | assert_eq!( 106 | median_absolute_deviation(&mut [0, 1, 1, 1, 1, 1, 1, 1, 0]), 107 | Some((1.0, 0.0)) 108 | ); 109 | } 110 | -------------------------------------------------------------------------------- /lib/src/sysroot.rs: -------------------------------------------------------------------------------- 1 | //! Helpers for interacting with sysroots. 2 | 3 | use std::ops::Deref; 4 | 5 | use anyhow::Result; 6 | 7 | /// A locked system root. 8 | #[derive(Debug)] 9 | pub struct SysrootLock { 10 | /// The underlying sysroot value. 
11 | pub sysroot: ostree::Sysroot, 12 | /// True if we didn't actually lock 13 | unowned: bool, 14 | } 15 | 16 | impl Drop for SysrootLock { 17 | fn drop(&mut self) { 18 | if self.unowned { 19 | return; 20 | } 21 | self.sysroot.unlock(); 22 | } 23 | } 24 | 25 | impl Deref for SysrootLock { 26 | type Target = ostree::Sysroot; 27 | 28 | fn deref(&self) -> &Self::Target { 29 | &self.sysroot 30 | } 31 | } 32 | 33 | impl SysrootLock { 34 | /// Asynchronously acquire a sysroot lock. If the lock cannot be acquired 35 | /// immediately, a status message will be printed to standard output. 36 | /// The lock will be unlocked when this object is dropped. 37 | pub async fn new_from_sysroot(sysroot: &ostree::Sysroot) -> Result<Self> { 38 | let mut printed = false; 39 | loop { 40 | if sysroot.try_lock()? { 41 | return Ok(Self { 42 | sysroot: sysroot.clone(), 43 | unowned: false, 44 | }); 45 | } 46 | if !printed { 47 | println!("Waiting for sysroot lock..."); 48 | printed = true; 49 | } 50 | tokio::time::sleep(std::time::Duration::from_secs(3)).await; 51 | } 52 | } 53 | 54 | /// This function should only be used when you have locked the sysroot 55 | /// externally (e.g. in C/C++ code). This also does not unlock on drop. 56 | pub fn from_assumed_locked(sysroot: &ostree::Sysroot) -> Self { 57 | Self { 58 | sysroot: sysroot.clone(), 59 | unowned: true, 60 | } 61 | } 62 | } 63 | -------------------------------------------------------------------------------- /lib/src/tar/mod.rs: -------------------------------------------------------------------------------- 1 | //! # Losslessly export and import ostree commits as tar archives 2 | //! 3 | //! Convert an ostree commit into a tarball stream, and import it again, including 4 | //! support for OSTree signature verification. 5 | //! 6 | //! In the current libostree C library, while it supports export to tar, this 7 | //! process is lossy - commit metadata is discarded. Further, re-importing 8 | //!
requires recalculating all of the object checksums; additionally, this 9 | //! process does not support verifying ostree-level cryptographic signatures 10 | //! such as GPG/ed25519. 11 | //! 12 | //! # Tar stream layout 13 | //! 14 | //! In order to solve these problems, this new tar serialization format effectively 15 | //! combines *both* a `/sysroot/ostree/repo/objects` directory and a checkout in `/usr`, 16 | //! where the latter are hardlinks to the former. 17 | //! 18 | //! The exported stream will have the ostree metadata first; in particular the commit object. 19 | //! Following the commit object is the `.commitmeta` object, which contains any cryptographic 20 | //! signatures. 21 | //! 22 | //! This library then supports verifying the pair of (commit, commitmeta) using an ostree 23 | //! remote, in the same way that `ostree pull` will do. 24 | //! 25 | //! The remainder of the stream is a breadth-first traversal of dirtree/dirmeta objects and the 26 | //! content objects they reference. 27 | //! 28 | //! # `bare-split-xattrs` repository mode 29 | //! 30 | //! In format version 1, the tar stream embeds a proper ostree repository using a tailored 31 | //! `bare-split-xattrs` mode. 32 | //! 33 | //! This is because extended attributes (xattrs) are a complex subject for tar, which has 34 | //! many variants. 35 | //! Further, when exporting bootable ostree commits to container images, it is not actually 36 | //! desired to have the container runtime try to unpack and apply those. 37 | //! 38 | //! For these reasons, extended attributes (xattrs) get serialized into detached objects 39 | //! which are associated with the relevant content objects. 40 | //! 41 | //! At a low level, two dedicated object types are used: 42 | //! * `file-xattrs` as regular files storing (and de-duplicating) xattrs content. 43 | //! * `file-xattrs-link` as hardlinks which associate a `file` object to its corresponding 44 | //! `file-xattrs` object.
45 | 46 | mod import; 47 | pub use import::*; 48 | mod export; 49 | pub use export::*; 50 | mod write; 51 | pub use write::*; 52 | -------------------------------------------------------------------------------- /lib/src/tokio_util.rs: -------------------------------------------------------------------------------- 1 | //! Helpers for bridging GLib async/mainloop with Tokio. 2 | 3 | use anyhow::Result; 4 | use core::fmt::{Debug, Display}; 5 | use futures_util::{Future, FutureExt}; 6 | use ostree::gio; 7 | use ostree::prelude::{CancellableExt, CancellableExtManual}; 8 | 9 | /// Call a fallible future, while monitoring `cancellable` and return an error if cancelled. 10 | pub async fn run_with_cancellable<F, T>(f: F, cancellable: &gio::Cancellable) -> Result<T> 11 | where 12 | F: Future<Output = Result<T>>, 13 | { 14 | // Bridge GCancellable to a tokio notification 15 | let notify = std::sync::Arc::new(tokio::sync::Notify::new()); 16 | let notify2 = notify.clone(); 17 | cancellable.connect_cancelled(move |_| notify2.notify_one()); 18 | cancellable.set_error_if_cancelled()?; 19 | // See https://blog.yoshuawuyts.com/futures-concurrency-3/ on why 20 | // `select!` is a trap in general, but I believe this case is safe. 21 | tokio::select! { 22 | r = f => r, 23 | _ = notify.notified() => { 24 | Err(anyhow::anyhow!("Operation was cancelled")) 25 | } 26 | } 27 | } 28 | 29 | struct CancelOnDrop(gio::Cancellable); 30 | 31 | impl Drop for CancelOnDrop { 32 | fn drop(&mut self) { 33 | self.0.cancel(); 34 | } 35 | } 36 | 37 | /// Wrapper for [`tokio::task::spawn_blocking`] which provides a [`gio::Cancellable`] that will be triggered on drop. 38 | /// 39 | /// This function should be used in a Rust/tokio native `async fn` that wants to invoke 40 | /// GLib style blocking APIs that use `GCancellable`. The cancellable will be triggered when this 41 | /// future is dropped, which helps bound thread usage. 42 | /// 43 | /// This is in a sense the inverse of [`run_with_cancellable`].
44 | pub fn spawn_blocking_cancellable<F, R>(f: F) -> tokio::task::JoinHandle<R> 45 | where 46 | F: FnOnce(&gio::Cancellable) -> R + Send + 'static, 47 | R: Send + 'static, 48 | { 49 | tokio::task::spawn_blocking(move || { 50 | let dropper = CancelOnDrop(gio::Cancellable::new()); 51 | f(&dropper.0) 52 | }) 53 | } 54 | 55 | /// Flatten a nested `Result<Result<T>, E>`, defaulting to converting the error type to an `anyhow::Error`. 56 | /// See https://doc.rust-lang.org/std/result/enum.Result.html#method.flatten 57 | pub(crate) fn flatten_anyhow<T, E>(r: std::result::Result<Result<T>, E>) -> Result<T> 58 | where 59 | E: Display + Debug + Send + Sync + 'static, 60 | { 61 | match r { 62 | Ok(x) => x, 63 | Err(e) => Err(anyhow::anyhow!(e)), 64 | } 65 | } 66 | 67 | /// A wrapper around [`spawn_blocking_cancellable`] that flattens nested results. 68 | pub fn spawn_blocking_cancellable_flatten<F, T>(f: F) -> impl Future<Output = Result<T>> 69 | where 70 | F: FnOnce(&gio::Cancellable) -> Result<T> + Send + 'static, 71 | T: Send + 'static, 72 | { 73 | spawn_blocking_cancellable(f).map(flatten_anyhow) 74 | } 75 | 76 | /// A wrapper around [`tokio::task::spawn_blocking`] that flattens nested results.
77 | pub fn spawn_blocking_flatten<F, T>(f: F) -> impl Future<Output = Result<T>> 78 | where 79 | F: FnOnce() -> Result<T> + Send + 'static, 80 | T: Send + 'static, 81 | { 82 | tokio::task::spawn_blocking(f).map(flatten_anyhow) 83 | } 84 | 85 | #[cfg(test)] 86 | mod tests { 87 | use super::*; 88 | 89 | #[tokio::test] 90 | async fn test_cancellable() { 91 | let cancellable = ostree::gio::Cancellable::new(); 92 | 93 | let cancellable_copy = cancellable.clone(); 94 | let s = async move { 95 | tokio::time::sleep(std::time::Duration::from_millis(200)).await; 96 | cancellable_copy.cancel(); 97 | }; 98 | let r = async move { 99 | tokio::time::sleep(std::time::Duration::from_secs(200)).await; 100 | Ok(()) 101 | }; 102 | let r = run_with_cancellable(r, &cancellable); 103 | let (_, r) = tokio::join!(s, r); 104 | assert!(r.is_err()); 105 | } 106 | } 107 | -------------------------------------------------------------------------------- /lib/src/utils.rs: -------------------------------------------------------------------------------- 1 | pub(crate) trait ResultExt<T> { 2 | /// Return the Ok value unchanged. In the err case, log it, and call the closure to compute the default 3 | fn log_err_or_else<F>(self, default: F) -> T 4 | where 5 | F: FnOnce() -> T; 6 | /// Return the Ok value unchanged.
In the err case, log it, and return the default value 7 | fn log_err_default(self) -> T 8 | where 9 | T: Default; 10 | } 11 | 12 | impl<T, E: std::fmt::Display> ResultExt<T> for Result<T, E> { 13 | #[track_caller] 14 | fn log_err_or_else<F>(self, default: F) -> T 15 | where 16 | F: FnOnce() -> T, 17 | { 18 | match self { 19 | Ok(r) => r, 20 | Err(e) => { 21 | tracing::debug!("{e}"); 22 | default() 23 | } 24 | } 25 | } 26 | 27 | #[track_caller] 28 | fn log_err_default(self) -> T 29 | where 30 | T: Default, 31 | { 32 | self.log_err_or_else(|| Default::default()) 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /lib/tests/it/fixtures/hlinks.tar.gz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ostreedev/ostree-rs-ext/b885f9468829e6af41fa95504d65975c48378d4d/lib/tests/it/fixtures/hlinks.tar.gz -------------------------------------------------------------------------------- /lib/tests/it/fixtures/manifest1.json: -------------------------------------------------------------------------------- 1 |
{"schemaVersion":2,"config":{"mediaType":"application/vnd.oci.image.config.v1+json","digest":"sha256:f3b50d0849a19894aa27ca2346a78efdacf2c56bdc2a3493672d2a819990fedf","size":9301},"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:75f4abe8518ec55cb8bf0d358a737084f38e2c030a28651d698c0b7569d680a6","size":1387849},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:777cb841d2803f775a36fba62bcbfe84b2a1e0abc27cf995961b63c3d218a410","size":48676116},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:1179dc1e2994ec0466787ec43967db9016b4b93c602bb9675d7fe4c0993366ba","size":124705297},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:74555b3730c4c0f77529ead433db58e038070666b93a5cc0da262d7b8debff0e","size":38743650},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:0ff8b1fdd38e5cfb6390024de23ba4b947cd872055f62e70f2c21dad5c928925","size":77161948},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:76b83eea62b7b93200a056b5e0201ef486c67f1eeebcf2c7678ced4d614cece2","size":21970157},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:d85c742f69904cb8dbf98abca4724d364d91792fcf8b5f5634ab36dda162bfc4","size":59797135},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:167e5df36d0fcbed876ca90c1ed1e6c79b5e2bdaba5eae74ab86444654b19eff","size":49410348},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:b34384ba76fa1e335cc8d75522508d977854f2b423f8aceb50ca6dfc2f609a99","size":21714783},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:7bf2d65ebf222ee10115284abf6909b1a3da0f3bd6d8d849e30723636b7145cb","size":15264848},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:a75bbf55d8de4dbd54e429e16fbd46688717faf4ea823c94676529cc2525fd5f","size":14373701},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","d
igest":"sha256:cf728677fa8c84bfcfd71e17953062421538d492d7fbfdd0dbce8eb1e5f6eec3","size":8400473},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:caff60c1ef085fb500c94230ccab9338e531578635070230b1413b439fd53f8f","size":6914489},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:65ca8f9bddaa720b74c5a7401bf273e93eba6b3b855a62422a8258373e0b1ae0","size":8294965},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:387bab4fcb713e9691617a645b6af2b7ad29fe5e009b0b0d3215645ef315481c","size":6600369},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:f63dcde5a664dad3eb3321bbcf2913d9644d16561a67c86ab61d814c1462583d","size":16869027},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8bcd90242651342fbd2ed5ca3e60d03de90fdd28c3a9f634329f6e1c21c79718","size":5735283},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:cb65c21a0659b5b826881280556995a7ca4818c2b9b7a89e31d816a996fa8640","size":4528663},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5187f51b62f4a2e82198a75afcc623a0323d4804fa7848e2e0acb30d77b8d9ab","size":5266030},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bfef79d6d35378fba9093083ff6bd7b5ed9f443f87517785e6ff134dc8d08c6a","size":4316135},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:1cf332fd50b382af7941d6416994f270c894e9d60fb5c6cecf25de887673bbcb","size":3914655},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:e0d80be6e71bfae398f06f7a7e3b224290f3dde7544c8413f922934abeb1f599","size":2441858},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:48ff87e7a7af41d7139c5230e2e939aa97cafb1f62a114825bda5f5904e04a0e","size":3818782},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8bcc652ccaa27638bd5bd2d7188053f1736586afbae87b3952e9211c773e3563","size":3885971},{"medi
aType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:d83d9388b8c8c1e7c97b6b18f5107b74354700ebce9da161ccb73156a2c54a2e","size":3442642},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:efc465ae44a18ee395e542eb97c8d1fc21bf9d5fb49244ba4738e9bf48bfd3dc","size":3066348},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:c5c471cce08aa9cc7d96884a9e1981b7bb67ee43524af47533f50a8ddde7a83d","size":909923},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8956cd951abc481ba364cf8ef5deca7cc9185b59ed95ae40b52e42afdc271d8e","size":3553645},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5b0963a6c89d595b5c4786e2f3ce0bc168a262efab74dfce3d7c8d1063482c60","size":1495301},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bf2df295da2716291f9dd4707158bca218b4a7920965955a4808b824c1bee2b6","size":3063142},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:19b2ea8d63794b8249960d581216ae1ccb80f8cfe518ff8dd1f12d65d19527a5","size":8109718},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:420636df561ccc835ef9665f41d4bc91c5f00614a61dca266af2bcd7bee2cc25","size":3003935},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5ae67caf0978d82848d47ff932eee83a1e5d2581382c9c47335f69c9d7acc180","size":2468557},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:4f4b8bb8463dc74bb7f32eee78d02b71f61a322967b6d6cbb29829d262376f74","size":2427605},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:69373f86b83e6e5a962de07f40ff780a031b42d2568ffbb8b3c36de42cc90dec","size":2991782},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:2d05c2f993f9761946701da37f45fc573a2db8467f92b3f0d356f5f7adaf229e","size":3085765},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:41925843e5c965165bedc9c8124b96038f0
8a89c95ba94603a5f782dc813f0a8","size":2724309},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:a8c39f2998073e0e8b55fb88ccd68d2621a0fb6e31a528fd4790a1c90f8508a9","size":2512079},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:b905f801d092faba0c155597dd1303fa8c0540116af59c111ed7744e486ed63b","size":2341122},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:4f46b58b37828fa71fa5d7417a8ca7a62761cc6a72eb1592943572fc2446b054","size":2759344},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:3fbae92ecc64cf253b643a0e75b56514dc694451f163b47fb4e15af373238e10","size":2539288},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:744dd4a3ec521668942661cf1f184eb8f07f44025ce1aa35d5072ad9d72946fe","size":2415870},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:6c74c0a05a36bddabef1fdfae365ff87a9c5dd1ec7345d9e20f7f8ab04b39fc6","size":2145078},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:910ff6f93303ebedde3459f599b06d7b70d8f0674e3fe1d6623e3af809245cc4","size":5098511},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:2752e2f62f38fea3a390f111d673d2529dbf929f6c67ec7ef4359731d1a7edd8","size":1051999},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5065c3aac5fcc3c1bde50a19d776974353301f269a936dd2933a67711af3b703","size":2713694},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8bf6993eea50bbd8b448e6fd719f83c82d1d40b623f2c415f7727e766587ea83","size":1686714},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:630221744f0f9632f4f34f74241e65f79e78f938100266a119113af1ce10a1c5","size":2061581},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:e7e2eae322bca0ffa01bb2cae72288507bef1a11ad51f99d0a4faba1b1e000b9","size":2079706},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip
","digest":"sha256:bb6374635385b0c2539c284b137d831bd45fbe64b5e49aee8ad92d14c156a41b","size":3142398},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:40493ecd0f9ab499a2bec715415c3a98774ea6d1c9c01eb30a6b56793204a02d","size":69953187}]} -------------------------------------------------------------------------------- /lib/tests/it/fixtures/manifest2.json: -------------------------------------------------------------------------------- 1 | {"schemaVersion":2,"config":{"mediaType":"application/vnd.oci.image.config.v1+json","digest":"sha256:ca0f7e342503b45a1110aba49177e386242e9192ab1742a95998b6b99c2a0150","size":9301},"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bca674ffe2ebe92b9e952bc807b9f1cd0d559c057e95ac81f3bae12a9b96b53e","size":1387854},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:777cb841d2803f775a36fba62bcbfe84b2a1e0abc27cf995961b63c3d218a410","size":48676116},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:1179dc1e2994ec0466787ec43967db9016b4b93c602bb9675d7fe4c0993366ba","size":124705297},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:74555b3730c4c0f77529ead433db58e038070666b93a5cc0da262d7b8debff0e","size":38743650},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:0b5d930ffc92d444b0a7b39beed322945a3038603fbe2a56415a6d02d598df1f","size":77162517},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8d12d20c2d1c8f05c533a2a1b27a457f25add8ad38382523660c4093f180887b","size":21970100},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:d85c742f69904cb8dbf98abca4724d364d91792fcf8b5f5634ab36dda162bfc4","size":59797135},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:167e5df36d0fcbed876ca90c1ed1e6c79b5e2bdaba5eae74ab86444654b19eff","size":49410348},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sh
a256:b34384ba76fa1e335cc8d75522508d977854f2b423f8aceb50ca6dfc2f609a99","size":21714783},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:7bf2d65ebf222ee10115284abf6909b1a3da0f3bd6d8d849e30723636b7145cb","size":15264848},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:a75bbf55d8de4dbd54e429e16fbd46688717faf4ea823c94676529cc2525fd5f","size":14373701},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:cf728677fa8c84bfcfd71e17953062421538d492d7fbfdd0dbce8eb1e5f6eec3","size":8400473},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:caff60c1ef085fb500c94230ccab9338e531578635070230b1413b439fd53f8f","size":6914489},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:65ca8f9bddaa720b74c5a7401bf273e93eba6b3b855a62422a8258373e0b1ae0","size":8294965},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:387bab4fcb713e9691617a645b6af2b7ad29fe5e009b0b0d3215645ef315481c","size":6600369},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:f63dcde5a664dad3eb3321bbcf2913d9644d16561a67c86ab61d814c1462583d","size":16869027},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8bcd90242651342fbd2ed5ca3e60d03de90fdd28c3a9f634329f6e1c21c79718","size":5735283},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:cb65c21a0659b5b826881280556995a7ca4818c2b9b7a89e31d816a996fa8640","size":4528663},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5187f51b62f4a2e82198a75afcc623a0323d4804fa7848e2e0acb30d77b8d9ab","size":5266030},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bfef79d6d35378fba9093083ff6bd7b5ed9f443f87517785e6ff134dc8d08c6a","size":4316135},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:1cf332fd50b382af7941d6416994f270c894e9d60fb5c6cecf25de887673bbcb","size":3914655},{"mediaType":
"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:e0d80be6e71bfae398f06f7a7e3b224290f3dde7544c8413f922934abeb1f599","size":2441858},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:48ff87e7a7af41d7139c5230e2e939aa97cafb1f62a114825bda5f5904e04a0e","size":3818782},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8bcc652ccaa27638bd5bd2d7188053f1736586afbae87b3952e9211c773e3563","size":3885971},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:d83d9388b8c8c1e7c97b6b18f5107b74354700ebce9da161ccb73156a2c54a2e","size":3442642},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:efc465ae44a18ee395e542eb97c8d1fc21bf9d5fb49244ba4738e9bf48bfd3dc","size":3066348},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:c5c471cce08aa9cc7d96884a9e1981b7bb67ee43524af47533f50a8ddde7a83d","size":909923},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:8956cd951abc481ba364cf8ef5deca7cc9185b59ed95ae40b52e42afdc271d8e","size":3553645},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5b0963a6c89d595b5c4786e2f3ce0bc168a262efab74dfce3d7c8d1063482c60","size":1495301},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bf2df295da2716291f9dd4707158bca218b4a7920965955a4808b824c1bee2b6","size":3063142},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:19b2ea8d63794b8249960d581216ae1ccb80f8cfe518ff8dd1f12d65d19527a5","size":8109718},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:420636df561ccc835ef9665f41d4bc91c5f00614a61dca266af2bcd7bee2cc25","size":3003935},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5ae67caf0978d82848d47ff932eee83a1e5d2581382c9c47335f69c9d7acc180","size":2468557},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:4f4b8bb8463dc74bb7f32eee78d02b71f61a322967
b6d6cbb29829d262376f74","size":2427605},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:69373f86b83e6e5a962de07f40ff780a031b42d2568ffbb8b3c36de42cc90dec","size":2991782},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:2d05c2f993f9761946701da37f45fc573a2db8467f92b3f0d356f5f7adaf229e","size":3085765},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:41925843e5c965165bedc9c8124b96038f08a89c95ba94603a5f782dc813f0a8","size":2724309},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:a8c39f2998073e0e8b55fb88ccd68d2621a0fb6e31a528fd4790a1c90f8508a9","size":2512079},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:b905f801d092faba0c155597dd1303fa8c0540116af59c111ed7744e486ed63b","size":2341122},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:4f46b58b37828fa71fa5d7417a8ca7a62761cc6a72eb1592943572fc2446b054","size":2759344},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:3fbae92ecc64cf253b643a0e75b56514dc694451f163b47fb4e15af373238e10","size":2539288},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:744dd4a3ec521668942661cf1f184eb8f07f44025ce1aa35d5072ad9d72946fe","size":2415870},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:6c74c0a05a36bddabef1fdfae365ff87a9c5dd1ec7345d9e20f7f8ab04b39fc6","size":2145078},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:910ff6f93303ebedde3459f599b06d7b70d8f0674e3fe1d6623e3af809245cc4","size":5098511},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:2752e2f62f38fea3a390f111d673d2529dbf929f6c67ec7ef4359731d1a7edd8","size":1051999},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:5065c3aac5fcc3c1bde50a19d776974353301f269a936dd2933a67711af3b703","size":2713694},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","dige
st":"sha256:8bf6993eea50bbd8b448e6fd719f83c82d1d40b623f2c415f7727e766587ea83","size":1686714},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:630221744f0f9632f4f34f74241e65f79e78f938100266a119113af1ce10a1c5","size":2061581},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:e7e2eae322bca0ffa01bb2cae72288507bef1a11ad51f99d0a4faba1b1e000b9","size":2079706},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:bb6374635385b0c2539c284b137d831bd45fbe64b5e49aee8ad92d14c156a41b","size":3142398},{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip","digest":"sha256:cb9b8a4ac4a8df62df79e6f0348a14b3ec239816d42985631c88e76d4e3ff815","size":69952385}]} -------------------------------------------------------------------------------- /man/ostree-container-auth.md: -------------------------------------------------------------------------------- 1 | % ostree-container-auth 5 2 | 3 | # NAME 4 | ostree-container-auth description of the registry authentication file 5 | 6 | # DESCRIPTION 7 | 8 | The OSTree container stack uses the same file formats as **containers-auth(5)** but 9 | not the same locations. 10 | 11 | When running as uid 0 (root), the tooling uses `/etc/ostree/auth.json` first, then looks 12 | in `/run/ostree/auth.json`, and finally checks `/usr/lib/ostree/auth.json`. 13 | For any other uid, the file paths used are in `${XDG_RUNTIME_DIR}/ostree/auth.json`. 14 | 15 | In the future, it is likely that a path that is supported for both "system podman" 16 | usage and ostree will be added. 17 | 18 | ## FORMAT 19 | 20 | The auth.json file stores, or references, credentials that allow the user to authenticate 21 | to container image registries. 22 | It is primarily managed by a `login` command from a container tool such as `podman login`, 23 | `buildah login`, or `skopeo login`. 24 | 25 | For more information, see **containers-auth(5)**. 
26 | 27 | # SEE ALSO 28 | 29 | **containers-auth(5)**, **skopeo-login(1)**, **skopeo-logout(1)** 30 | -------------------------------------------------------------------------------- /ostree-and-containers.md: -------------------------------------------------------------------------------- 1 | # ostree vs OCI/Docker 2 | 3 | Be sure to see the main [README.md](README.md), which describes the current architecture intersecting ostree and OCI. 4 | 5 | Looking at this project, one might ask: why even have ostree? Why not just have the operating system directly use something like the [containers/image](https://github.com/containers/image/) storage? 6 | 7 | The first answer is that it is a goal of this project to "hide" ostree usage; it should feel "native" to ship and manage the operating system "as if" it were just running a container. 8 | 9 | But ostree has a *lot* of infrastructure built up around it, and we can't just throw that away. 10 | 11 | ## Understanding kernels 12 | 13 | ostree was designed from the start to manage bootable operating system trees - hence the name of the project. For example, ostree understands bootloaders and kernels/initramfs images. Container tools don't. 14 | 15 | ## Signing 16 | 17 | ostree also gained, quite early on, an opinionated mechanism to sign images (commits) via GPG. As of this writing there are multiple competing mechanisms for container signing, and none is widely deployed. 18 | For running random containers from `docker.io`, it can be OK to just trust TLS or pin via `@sha256` - a core idea of Docker is that containers are isolated, so it should be reasonably safe to 19 | at least try out random containers. But for the *operating system*, integrity is paramount, because it is ultimately trusted. 20 | 21 | ## Deduplication 22 | 23 | ostree's hardlink store is designed around de-duplication. Operating systems can get large, and they are most natural as "base images" - which in the Docker container model 24 | are duplicated on disk.
Of course, storage systems like containers/image could learn to de-duplicate, but it would be a use case that *mostly* applied to just the operating system. 25 | 26 | ## Being able to remove all container images 27 | 28 | In Kubernetes, the kubelet periodically prunes the image storage, removing images not backed by containers. If we store the operating system itself as an image...well, we'd need to do something like teach the container storage the concept of an image that is "pinned" because it's actually the booted filesystem. Or create a "fake" container representing the running operating system. 29 | 30 | Other projects in this space ended up having an "early docker" distinct from the "main docker", which brings its own large set of challenges. 31 | 32 | ## SELinux 33 | 34 | OSTree has *first class* support for SELinux. It was baked into the design from the very start. Handling SELinux is very tricky because it's a part of the operating system that can influence *everything else* - specifically, file labels. 35 | 36 | In this approach we aren't trying to inject xattrs into the tar stream; they're stored out of band for reliability. 37 | 38 | ## Independence of complexity of container storage 39 | 40 | All of this could be done - but the container storage and tooling are already quite complex, and introducing a special case like this would be treading new ground. 41 | 42 | Today, for example, cri-o ships a `crio-wipe.service` which removes all container storage across major version upgrades. 43 | 44 | ostree is a fairly simple format and has been 100% stable throughout its life so far.
45 | 46 | ## ostree format has per-file integrity 47 | 48 | More on this here: https://ostreedev.github.io/ostree/related-projects/#docker 49 | 50 | ## Allow hiding ostree while not reinventing everything 51 | 52 | So, again, the goal here is: make it feel "native" to ship and manage the operating system "as if" it were just running a container, without throwing away everything in ostree today. 53 | 54 | 55 | ### Future: Running an ostree-container as a webserver 56 | 57 | It should also work to run the ostree-container as a webserver, exposing an HTTP endpoint that responds to `GET /repo`. 58 | 59 | The effect will be as if it were built from a `Dockerfile` that contains `EXPOSE 8080`; it will work to e.g. 60 | `kubectl run nginx --image=quay.io/exampleos/exampleos:latest --replicas=1` 61 | and then also create a service for it. 62 | 63 | ### Integrating with future container deltas 64 | 65 | See https://blogs.gnome.org/alexl/2020/05/13/putting-container-updates-on-a-diet/ 66 | --------------------------------------------------------------------------------