├── template
│   ├── .gitignore
│   ├── .hatchignore
│   ├── src
│   │   └── main.rs
│   ├── Cargo.toml
│   ├── README.md
│   ├── .hatch.toml
│   ├── .github
│   │   └── workflows
│   │       └── pkg.yml
│   └── Dockerfile
├── docs
│   ├── images
│   │   └── starter-workflow-screenshot.png
│   ├── develop
│   │   └── README.md
│   ├── os_publishing.md
│   ├── README.md
│   ├── cross_compiling.md
│   ├── minimal_docker_example.md
│   ├── FAQ.md
│   ├── docker_packaging.md
│   ├── key_concepts_and_config.md
│   └── os_packaging.md
├── .github
│   ├── dependabot.yml
│   └── pull_request_template.md
├── README.md
├── LICENSE
└── fragments
    └── macros.systemd.sh
/template/.gitignore:
--------------------------------------------------------------------------------
1 | target/
2 |
--------------------------------------------------------------------------------
/template/.hatchignore:
--------------------------------------------------------------------------------
1 | cargo-generate.toml
2 | README.md
3 |
--------------------------------------------------------------------------------
/template/src/main.rs:
--------------------------------------------------------------------------------
1 | fn main() {
2 |     println!("Hello, world!");
3 | }
4 |
--------------------------------------------------------------------------------
/docs/images/starter-workflow-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/NLnetLabs/ploutos/HEAD/docs/images/starter-workflow-screenshot.png
--------------------------------------------------------------------------------
/.github/dependabot.yml:
--------------------------------------------------------------------------------
1 | # Set update schedule for GitHub Actions
2 |
3 | version: 2
4 | updates:
5 |
6 |   - package-ecosystem: "github-actions"
7 |     directory: "/"
8 |     schedule:
9 |       # Check for updates to GitHub Actions every weekday
10 |       interval: "daily"
11 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## Ploutos
2 |
3 |
4 |
5 |
6 |
7 |
8 | ## Quick start
9 |
10 | Learn more about Ploutos and [see it in action](https://www.youtube.com/watch?v=ZZnLC0KmkHs) as presented at the November 30 2022 Rust Nederland meetup.
11 |
12 | ## Documentation
13 |
14 | Ploutos is a GitHub reusable workflow for packaging Rust Cargo projects as DEB & RPM packages and Docker images.
15 |
16 | - [User guide](docs/README.md)
17 | - [Demo template](template/README.md)
18 | - [Contributing](docs/develop/README.md)
19 |
20 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | BSD 3-Clause License
2 |
3 | Copyright (c) 2022, NLnet Labs
4 |
5 | Redistribution and use in source and binary forms, with or without
6 | modification, are permitted provided that the following conditions are met:
7 |
8 | 1. Redistributions of source code must retain the above copyright notice, this
9 | list of conditions and the following disclaimer.
10 |
11 | 2. Redistributions in binary form must reproduce the above copyright notice,
12 | this list of conditions and the following disclaimer in the documentation
13 | and/or other materials provided with the distribution.
14 |
15 | 3. Neither the name of the copyright holder nor the names of its
16 | contributors may be used to endorse or promote products derived from
17 | this software without specific prior written permission.
18 |
19 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
20 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
21 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
23 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
24 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
25 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
26 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
27 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
28 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
29 |
--------------------------------------------------------------------------------
/template/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "{{ project_name }}"
3 | version = "0.0.1"
4 | edition = "2021"
5 | authors = [ "{{ git_author }}" ]
6 | {% if license != 'None' %}license = "{{ license }}"
7 | {% endif -%}
8 | {% if 'deb' in package_types -%}description = "The Rust {{ project_name }} tool."
9 | {% endif -%}
10 |
11 | {% if 'deb' in package_types %}
12 | [package.metadata.deb]
13 | extended-description = "{{ project_name }}"
14 | {% endif -%}
15 |
16 | {% if 'rpm' in package_types %}
17 | [package.metadata.generate-rpm]
18 | {#
19 | Cargo.toml uses SPDX 2.1 license expression identifiers but RPMs must use a Short Name from the approved list. The user
20 | selects a license from a list of SPDX 2.1 licenses so here we handle any conversion necessary.
21 | See:
22 | - https://doc.rust-lang.org/cargo/reference/manifest.html#the-license-and-license-file-fields
23 | - https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing
24 | -#}
25 | {% if license == 'Apache-2.0' %}license = "ASL 2.0"
26 | {% endif -%}
27 | {% if license == 'BSD-2-Clause' %}license = "BSD"
28 | {% endif -%}
29 | {% if license == 'BSD-3-Clause' %}license = "BSD"
30 | {% endif -%}
31 | {% if license == 'GPL-2.0' %}license = "GPLv2+"
32 | {% endif -%}
33 | {% if license == 'GPL-3.0' %}license = "GPLv3+"
34 | {% endif -%}
35 | {% if license == 'LGPL-2.0' %}license = "LGPLv2+"
36 | {% endif -%}
37 | {% if license == 'LGPL-2.1' %}license = "LGPLv2+"
38 | {% endif -%}
39 | {% if license == 'LGPL-3.0' %}license = "LGPLv3+"
40 | {% endif -%}
41 | {% if license == 'MPL-2.0' %}license = "MPLv2.0"
42 | {% endif -%}
43 | assets = [
44 |     { source = "target/release/{{ project_name }}", dest = "/usr/bin/{{ project_name }}", mode = "755" },
45 | ]
46 | {%- endif %}
47 |
--------------------------------------------------------------------------------
/template/README.md:
--------------------------------------------------------------------------------
1 | # Ploutos cargo-hatch template
2 |
3 | This directory contains a [cargo-hatch](https://crates.io/crates/cargo-hatch) template for generating a Ploutos-enabled Git repository. When pushed to GitHub it will trigger GitHub Actions to package a simple Hello World Rust application as DEB and/or RPM packages and/or Docker images for x86_64 platforms, and optionally also for other architectures.
4 |
5 | ## tl;dr
6 |
7 | ```shell
8 | cargo hatch git https://github.com/NLnetLabs/ploutos --folder template
9 | ```
10 |
11 | ## Usage
12 |
13 | The following assumes that you have already created an empty GitHub project called `YOUR_ORG/YOUR_REPO`.
14 |
15 | **Tip:** Do you intend to say yes when Hatch asks `Publish Docker image(s) to Docker Hub`? Then:
16 | 1. Generate an access token at https://hub.docker.com/settings/security.
17 | 2. Store it in a `DOCKER_HUB_TOKEN` secret in your new GitHub project at https://github.com/YOUR_ORG/YOUR_REPO/settings/secrets/actions.
18 |
19 | First install Cargo Hatch and invoke it using the template in this repository:
20 |
21 | ```shell
22 | cargo install cargo-hatch
23 | cargo hatch git https://github.com/NLnetLabs/ploutos --folder template
24 | ```
25 |
26 | Now enter the project directory that was created, generate the `Cargo.lock` file and commit the files to Git:
27 |
28 | ```shell
29 | cd YOUR_PROJECT_NAME
30 | cargo generate-lockfile
31 | git add .gitignore .github *
32 | git commit -m "Initial version."
33 | ```
34 |
35 | And then follow the standard GitHub instructions for pushing the local Git project to GitHub:
36 |
37 | ```shell
38 | git remote add origin git@github.com:YOUR_ORG/YOUR_REPO.git
39 | git branch -M main
40 | git push --set-upstream origin main
41 | ```
42 |
43 |
--------------------------------------------------------------------------------
/template/.hatch.toml:
--------------------------------------------------------------------------------
1 | crate_type = "bin"
2 |
3 | # For Cargo the license must be an SPDX 2.1 license expression identifier. Here we let the user choose the SPDX 2.1
4 | # license identifier for one of the "popular licenses" listed on opensource.org, or "None".
5 | # See:
6 | # - https://doc.rust-lang.org/cargo/reference/manifest.html#the-license-and-license-file-fields
7 | # - https://fedoraproject.org/wiki/Licensing:Main?rd=Licensing
8 | # - https://opensource.org/licenses
9 | [license]
10 | type = "list"
11 | description = "Which license should your project have?"
12 | values = [
13 | "Apache-2.0",
14 | "BSD-3-Clause",
15 | "BSD-2-Clause",
16 | "GPL-2.0",
17 | "GPL-3.0",
18 | "LGPL-2.0",
19 | "LGPL-2.1",
20 | "LGPL-3.0",
21 | "MIT",
22 | "MPL-2.0",
23 | "CDDL-1.0",
24 | "EPL-2.0",
25 | "None"
26 | ]
27 | default = "BSD-3-Clause"
28 |
29 | [package_types]
30 | type = "multi_list"
31 | description = "Which package types should be built?"
32 | values = [ "deb", "rpm", "docker" ]
33 | default = [ "deb", "rpm", "docker" ]
34 |
35 | [docker_org]
36 | type = "string"
37 | description = "Your Docker organization"
38 | condition = "{{ 'docker' in package_types }}"
39 | validator.regex = "^[a-z0-9]{2,}(?:[._-][a-z0-9]+)*$"
40 |
41 | [docker_repo]
42 | type = "string"
43 | description = "Your Docker repository"
44 | condition = "{{ 'docker' in package_types }}"
45 | validator.regex = "^[a-z0-9]{2,}(?:[._-][a-z0-9]+)*$"
46 |
47 | [docker_publish_user]
48 | type = "string"
49 | description = "Your Docker username for publishing (None to skip publishing, remember to set secret DOCKER_HUB_TOKEN to the access token from https://hub.docker.com/settings/security)"
50 | condition = "{{ 'docker' in package_types }}"
51 | default = "None"
52 |
53 | [cross_targets]
54 | type = "multi_list"
55 | description = "Which targets (in addition to x86_64) should be compiled for?"
56 | values = ["Raspberry Pi 1b", "Raspberry Pi 4b", "Rock 64"]
57 | default = []
58 |
59 | [[ignore]]
60 | paths = ["Dockerfile"]
61 | condition = "{{ 'docker' not in package_types }}"
62 |
--------------------------------------------------------------------------------
/template/.github/workflows/pkg.yml:
--------------------------------------------------------------------------------
1 | name: "pkg"
2 |
3 | on:
4 | push:
5 | workflow_dispatch:
6 |
7 | jobs:
8 | pkg:
9 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
10 | {%- if docker_publish_user | default(value='None') != 'None' %}
11 | secrets:
12 | DOCKER_HUB_ID: {{ docker_publish_user }}
13 | DOCKER_HUB_TOKEN: {% raw -%}${{{%- endraw %} secrets.DOCKER_HUB_TOKEN {% raw -%}}}{%- endraw -%}
14 | {%- endif %}
15 | with:
16 | {%- if 'docker' in package_types %}
17 | docker_org: {{ docker_org }}
18 | docker_repo: {{ docker_repo }}
19 | docker_build_rules: |
20 | include:
21 | - platform: "linux/amd64"
22 | shortname: "amd64"
23 | mode: "build"
24 | {%- if 'Raspberry Pi 1b' in cross_targets %}
25 | # Raspberry Pi 1b
26 | - platform: "linux/arm/v6"
27 | shortname: "armv6"
28 | target: "arm-unknown-linux-musleabihf"
29 | mode: "copy"
30 | {%- endif %}
31 | {%- if 'Raspberry Pi 4b' in cross_targets %}
32 | # Raspberry Pi 4b
33 | - platform: "linux/arm/v7"
34 | shortname: "armv7"
35 | target: "armv7-unknown-linux-musleabihf"
36 | mode: "copy"
37 | {%- endif %}
38 | {%- if 'Rock 64' in cross_targets %}
39 | # Rock 64
40 | - platform: "linux/arm64"
41 | shortname: "arm64"
42 | target: "aarch64-unknown-linux-musl"
43 | mode: "copy"
44 | {%- endif %}
45 | {% endif %}
46 |
47 | {%- if 'deb' in package_types or 'rpm' in package_types %}
48 | package_build_rules: |
49 | image:
50 | {%- if 'deb' in package_types %}
51 | - "ubuntu:xenial"
52 | - "ubuntu:bionic"
53 | - "ubuntu:focal"
54 | - "ubuntu:jammy"
55 | - "debian:stretch"
56 | - "debian:buster"
57 | - "debian:bullseye"
58 | {%- endif %}
59 | {%- if 'rpm' in package_types %}
60 | - "centos:7"
61 | - "centos:8"
62 | {%- endif %}
63 | target: x86_64
64 | {%- if cross_targets | length > 0 %}
65 | include:
66 | {%- endif -%}
67 | {%- if 'Raspberry Pi 1b' in cross_targets %}
68 | # Raspberry Pi 1b
69 | - image: "debian:buster"
70 | target: arm-unknown-linux-musleabihf
71 | {%- endif %}
72 | {%- if 'Raspberry Pi 4b' in cross_targets %}
73 | # Raspberry Pi 4b
74 | - image: "debian:bullseye"
75 | target: armv7-unknown-linux-musleabihf
76 | {%- endif %}
77 | {%- if 'Rock 64' in cross_targets %}
78 | # Rock 64
79 | - image: "debian:buster"
80 | target: aarch64-unknown-linux-musl
81 | {%- endif %}
82 | {%- endif %}
83 |
--------------------------------------------------------------------------------
/.github/pull_request_template.md:
--------------------------------------------------------------------------------
1 | This release contains the following changes:
2 |
3 | - TODO
4 |
5 | Successful test runs can be seen here:
6 |
7 | - dev branch: TODO
8 | - main branch: TODO
9 | - release tag: TODO
10 |
11 | Release checklist:
12 |
13 | - [ ] 1. Create a branch in the RELEASE repo, let's call this the RELEASE branch.
14 | - [ ] 2. Change RPM_MACROS_URL in the workflow to point to the new RELEASE branch.
15 | - [ ] 3. Create a PR in the RELEASE repo for the RELEASE branch.
16 | - [ ] 4. Create a matching branch in the TEST repo, let's call this the TEST branch.
17 | - [ ] 5. Make the desired changes to the RELEASE branch.
18 | - [ ] 6. In the TEST branch modify `.github/workflows/pkg.yml` so that instead of referring to `pkg-rust.yml@vX` it refers to `pkg-rust.yml@<RELEASE branch>` or `pkg-rust.yml@<commit>`.
19 | - [ ] 7. Create a PR in the `ploutos-testing` repository from the TEST branch to `main`, let's call this the TEST PR.
20 | - [ ] 8. Repeat step 5 until the `Packaging` workflow run in the TEST PR passes and behaves as desired.
21 | - [ ] 9. Merge the TEST PR to the `main` branch.
22 | - [ ] 10. Verify that the automatically invoked run of the `Packaging` workflow in the TEST repo against the `main` branch passes and behaves as desired. If not, repeat steps 4-9 until the new TEST PR passes and behaves as desired.
23 | - [ ] 11. Create a release tag in the TEST repo with the same release tag as will be used in the RELEASE repo, e.g. v1.2.3. _**Note:** Remember to respect semantic versioning, i.e. if the changes being made are not backward compatible you will need to bump the MAJOR version (in MAJOR.MINOR.PATCH) **and** any workflows that invoke the reusable workflow will need to be **manually edited** to refer to the new MAJOR version._
24 | - [ ] 12. Verify that the automatically invoked run of the `Packaging` workflow in the TEST repo against the newly created release tag passes and behaves as desired. If not, delete the release tag **in the TEST repo** and repeat steps 4-11 until the new TEST PR passes and behaves as desired.
25 | - [ ] 13. Merge the RELEASE PR to the `main` branch.
26 | - [ ] 14. Change RPM_MACROS_URL in the workflow to point to the new vX.Y.Z tag (rather than to the RELEASE branch).
27 | - [ ] 15. Create the new release vX.Y.Z tag in the RELEASE repo.
28 | - [ ] 16. Update the vX tag in the RELEASE repo to point to the new vX.Y.Z tag ([howto](https://github.com/NLnetLabs/ploutos/blob/main/docs/develop/README.md#release-process)).
29 | - [ ] 17. Edit `.github/workflows/pkg.yml` in the `main` branch of the TEST repo to refer again to `@vX`.
30 | - [ ] 18. Verify that the `Packaging` action in the TEST repo against the `main` branch passes and works as desired.
31 | - [ ] 19. (optional) If the MAJOR version was changed, update affected repositories that use the reusable workflow to use the new MAJOR version, including adjusting to any breaking changes introduced by the MAJOR version change.
32 |
--------------------------------------------------------------------------------
/fragments/macros.systemd.sh:
--------------------------------------------------------------------------------
1 | # Scripts based on the RPM %systemd_post scriptlet. See:
2 | # - https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/#_systemd
3 | # - https://github.com/systemd/systemd/blob/v251/src/rpm/macros.systemd.in
4 | # - https://github.com/systemd/systemd/blob/v251/src/rpm/triggers.systemd.in
5 | # - https://github.com/systemd/systemd/blob/v251/src/rpm/systemd-update-helper.in
6 | #
7 | # As we currently target CentOS 7 and CentOS 8, these are the relevant versions of the
8 | # original scripts (systemd-update-helper doesn't exist in these versions yet):
9 | #
10 | # CentOS 8 systemd v239: https://github.com/systemd/systemd/blob/v239/src/core/macros.systemd.in
11 | # https://github.com/systemd/systemd/blob/v239/src/core/triggers.systemd.in
12 | # CentOS 7 systemd v219: https://github.com/systemd/systemd/blob/v219/src/core/macros.systemd.in
13 | #
14 | # Note: These functions have only been tested with Bash.
15 |
16 | systemd_update_helper_v239() {
17 |     case "$command" in
18 |         install-system-units)
19 |             systemctl --no-reload preset "$@" &>/dev/null
20 |             ;;
21 | 
22 |         remove-system-units)
23 |             systemctl --no-reload disable --now "$@" &>/dev/null
24 |             ;;
25 | 
26 |         mark-restart-system-units)
27 |             systemctl try-restart "$@" &>/dev/null
28 |             ;;
29 | 
30 |         system-reload)
31 |             systemctl daemon-reload
32 |             ;;
33 |     esac
34 | }
35 | 
36 | systemd_update_helper_v219() {
37 |     case "$command" in
38 |         install-system-units)
39 |             systemctl preset "$@" >/dev/null 2>&1
40 |             ;;
41 | 
42 |         remove-system-units)
43 |             systemctl --no-reload disable "$@" >/dev/null 2>&1
44 |             systemctl stop "$@" >/dev/null 2>&1
45 |             ;;
46 | 
47 |         mark-restart-system-units)
48 |             systemctl try-restart "$@" >/dev/null 2>&1
49 |             ;;
50 | 
51 |         system-reload)
52 |             ;;
53 |     esac
54 | }
55 | 
56 | systemd_update_helper() {
57 |     command="${1:?}"
58 |     shift
59 | 
60 |     command -v systemctl >/dev/null || exit 0
61 | 
62 |     # Determine the version of systemd we are running under.
63 |     # systemctl --version outputs a first line of the form:
64 |     #   systemd 239 (239-58.el8_6.8)
65 |     SYSTEMD_VER=$(systemctl --version | head -1 | awk '{ print $2 }')
66 | 
67 |     if [ "${SYSTEMD_VER}" -le 219 ]; then
68 |         systemd_update_helper_v219 "$@"
69 |     else
70 |         systemd_update_helper_v239 "$@"
71 |     fi
72 | }
73 | 
74 | systemd_post() {
75 |     systemd_update_helper install-system-units "$@"
76 | }
77 | 
78 | systemd_preun() {
79 |     systemd_update_helper remove-system-units "$@"
80 | }
81 | 
82 | systemd_postun_with_restart() {
83 |     systemd_update_helper mark-restart-system-units "$@"
84 | }
85 | 
86 | systemd_triggers() {
87 |     systemd_update_helper system-reload
88 | }
89 |
--------------------------------------------------------------------------------
/docs/develop/README.md:
--------------------------------------------------------------------------------
1 | # Ploutos: Contributor guide
2 |
3 | This page is intended for people diagnosing, improving or fixing the reusable workflow itself. It is NOT intended for users of the workflow. Users should consult the [user guide](../README.md).
4 |
5 | ## Tips
6 |
7 | 1. The Docker behaviour of the workflow differs depending on whether the workflow is invoked for a Git release tag ("release" here meaning that the tag is of the form `v*` without a trailing `-*` suffix), a `main` branch or some other branch (e.g. a PR branch). To fully test it you should either run the workflow in each of these cases or temporarily modify the workflow behaviour so that it can be triggered as necessary for testing.
8 |
9 | 2. When a calling GitHub workflow invokes a GitHub Action or reusable workflow it typically does so by major version number, e.g. `@v2`. However, for reusable workflows this is not GitHub selecting the nearest match according to semantic versioning rules; instead it is a trick achieved by the action and workflow publishers tagging their repository twice: once with the actual version, e.g. `v2.1.3`, and once with a major-version-only tag, e.g. `v2`, such that **both point to the same Git ref**, i.e. the major version tag is deleted and re-created whenever a new minor or patch version tag is created.
10 |
11 | 3. When you push a change to the `pkg-rust.yml` workflow in this repository, downstream workflows that call `pkg-rust.yml` (e.g. from the https://github.com/NLnetLabs/ploutos-testing/ repository) will not see the changes unless you either update the `@<commit>` reference to match the new commit, or, if using `@<tag>`, the tag is moved to the new commit, or, if using `@<branch>`, you trigger a new run of the action or do "Re-run all jobs" on the workflow run. Doing "Re-run failed jobs" is **NOT ENOUGH**: GitHub Actions will then use the workflow at the exact same commit as it used before and won't pick up the new commit to the branch.
12 |
13 | ## Automated testing
14 |
15 | The https://github.com/NLnetLabs/ploutos-testing/ repository contains workflows that can be used to test Ploutos. The `ploutos-testing` repository is referred to below as the `TEST repo`.
16 |
17 | ## Release process
18 |
19 | To test and release changes to the workflow the recommended approach is to create a PR _(an example of this release process in use can be seen [here](https://github.com/NLnetLabs/ploutos/pull/42))_ and follow the release steps shown in the [default PR template](https://github.com/NLnetLabs/ploutos/blob/main/.github/pull_request_template.md).
20 |
21 | **How to update the vN tag:**
22 |
23 | At the time of writing the GitHub web interface does not offer a way to delete tags or update tags, only to delete the extra "release" details which can be associated with a tag. To update a tag one must do it locally and remotely from the command line using the `git tag` and `git push` commands.
24 |
25 | One way to update the latest vN tag to point to the latest vX.Y.Z tag using a **Bash** shell is:
26 | ```bash
27 | $ cd path/to/ploutos/git/clone
28 | $ git checkout main
29 | $ git pull
30 | $ git fetch --tags
31 | $ NEW_VER=$(git tag --points-at HEAD)
32 | $ MAJOR_VER=$(echo $NEW_VER | grep -Eo '^v[0-9]+')
33 | $ NEW_REF=$(git rev-list -n 1 $NEW_VER)
34 | $ OLD_REF=$(git rev-list -n 1 $MAJOR_VER)
35 | $ git tag --force ${MAJOR_VER} ${NEW_REF}
36 | $ UPDATED_REF=$(git rev-list -n 1 ${MAJOR_VER})
37 | $ if [ "${UPDATED_REF}" != "${NEW_REF}" ]
38 | then
39 | echo "ERROR: Major version tag ${MAJOR_VER} does not point at NEW_REF ${NEW_REF}"
40 | else
41 | git push --force --tags
42 | fi
43 | ```
44 |
--------------------------------------------------------------------------------
/docs/os_publishing.md:
--------------------------------------------------------------------------------
1 | **Contents:**
2 |
3 | - [Introduction](#introduction)
4 | - [Considerations](#considerations)
5 | - [Doing it yourself](#doing-it-yourself)
6 | - [Tools that can help](#tools-that-can-help)
7 | - [3rd party services](#3rd-party-services)
8 |
9 | # Introduction
10 |
11 | Ploutos produces O/S packages such as DEB and RPM files (contained within ZIP archives attached to GitHub Actions workflow runs), but it doesn't "publish" them for you.
12 |
13 | Publishing of O/S packages is the process of making them available on a web server (your "online repository") somewhere in the correct directory structure with the correct accompanying metadata files such that standard O/S packaging tools like `apt` and `yum` can find and install packages from your "online repository".
14 |
15 | # Considerations
16 |
17 | - **How much will it cost to host the packages?** This will be influenced by the service you use to host the web server, the number of historical versions that you wish to keep, the package types you want to offer (e.g. DEB and/or RPM), and the number of specific O/S versions you intend to package for separately (unless your packages are usable on many/all O/S versions and can thus be offered as a single O/S-version-independent package).
18 | - **How much will it cost to serve the packages?** This will be influenced by the service you use to host the web server, the size of your packages, the size of your expected audience, how often you expect to publish new versions, and whether you wish to pay extra to serve the packages closer to the end user by fronting your repository with something like a Content Distribution Network.
19 | - **How many packages do you intend to publish?** If only one or a few you may need a simpler setup than if you intend to publish many different software packages.
20 | - **What kind of availability guarantees do you want?** Does it matter if your repository is offline or slow or unreachable sometimes and/or to some clients?
21 | - **How long do you expect to keep the repository?** This can influence how much room you need for growth, or if you want the repository to also support additional package types later.
22 | - **How do you want to manage the metadata?** By invoking stock packaging tool commands manually, or by using some tooling to help you, or even by using a 3rd party service to do the work for you?
23 | - **Who should have access to your package signing key?**
24 |
25 | # Doing it yourself
26 |
27 | There are many examples out there of how to do this yourself. To pick just a few:
28 |
29 | - Official [Ubuntu](https://help.ubuntu.com/community/Repositories/Personal), [RedHat](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/deployment_guide/sec-yum_repository) and [Fedora](https://docs.fedoraproject.org/en-US/packaging-guidelines/) guides.
30 | - Via Google:
31 | - [deb](https://earthly.dev/blog/creating-and-hosting-your-own-deb-packages-and-apt-repo/), [deb](https://medium.com/sqooba/create-your-own-custom-and-authenticated-apt-repository-1e4a4cf0b864), [deb](https://www.linuxbabe.com/linux-server/set-up-package-repository-debian-ubuntu-server)
32 | - [deb & rpm](https://www.percona.com/blog/how-to-create-your-own-repositories-for-packages/)
33 | - [rpm](http://nuxref.com/2016/10/06/hosting-rpm-repository/), [rpm](https://www.recitalsoftware.com/blogs/34-howto-create-your-own-yum-repository-on-redhat-and-fedora-linux), [rpm](https://bgstack15.wordpress.com/2019/08/02/how-i-use-the-copr-to-build-and-host-rpms-for-centos-and-fedora/)
34 | - _(Coming soon) How we at NLnet Labs publish our O/S packages._
35 |
36 | # Tools that can help
37 |
38 | Just a few examples, there are likely many others out there:
39 |
40 | - https://www.aptly.info/
41 | - https://manpages.debian.org/testing/debarchiver/debarchiver.1.en.html
42 | - https://wiki.debian.org/DakHowTo
43 |
44 | # 3rd party services
45 |
46 | **Disclaimer:** The services are listed in alphabetical order; there is no special meaning or priority to the order. We have NO relationship with these services nor experience with them and cannot recommend them; they were simply found via Google. Use them at your own risk.
47 |
48 | - https://copr.fedorainfracloud.org/
49 | - https://packagecloud.io/
50 | - https://rpmdeb.com/
51 | - https://rpmfusion.org/Contributors
52 |
--------------------------------------------------------------------------------
/docs/README.md:
--------------------------------------------------------------------------------
1 | # Ploutos: User guide
2 |
3 | In this documentation we'll show you how to invoke the NLnet Labs Rust Cargo Packaging **reusable** workflow (hereafter the "Ploutos workflow") from your own repository and how to create the supporting files needed.
4 |
5 | > _**WARNING:** Using Ploutos is free for public GitHub repositories, but is **NOT FREE** for **private GitHub repositories**. As Ploutos runs many jobs in parallel (if configured to build for multiple package types and/or targets) it can consume a LOT of GitHub Actions minutes. If you exceed the [free limits for GitHub private repositories](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions) it will cost money! For example a workflow that ran for ~11 minutes actually used ~141 minutes of GitHub Actions resources in total._
6 |
7 | **Contents:**
8 | - [See also](#see-also)
9 | - [Getting started](#getting-started)
10 | - [Examples](#examples)
11 | - [Key concepts and general configuration](#key-concepts-and-general-configuration)
12 | - [Creating specific package types](#creating-specific-package-types)
13 |
14 | ## See also
15 |
16 | - **FAQ:** If your question isn't answered here, or you'd just like to know more, check out the [FAQ](./FAQ.md).
17 |
18 | - **Ploutos presentation:** Learn more about Ploutos and [see it in action](https://www.youtube.com/watch?v=ZZnLC0KmkHs) as presented at the November 30 2022 Rust Nederland meetup.
19 |
20 | - **The starter workflow:** If you already know how to use this workflow but just want to quickly add it to a new project you might find the [starter workflow](../starter_workflow.md) helpful _(**only** visible to NLnet Labs GitHub organization members unfortunately)_.
21 |
22 | - **The demo template:** This [template](../template/README.md) can be used to create your own repository with sample input files and workflow invocation to get started with the Ploutos workflow.
23 |
24 | - **Examples of the workflow in use:** This documentation contains some limited examples but if you're looking for real world examples of how to invoke and configure the Ploutos workflow take a look at the [projects that are already using the Ploutos workflow](https://github.com/NLnetLabs/ploutos/network/dependents).
25 |
26 | - **The testing repository:** The https://github.com/NLnetLabs/ploutos-testing/ repository contains test data and workflow invocations for automated testing of as many features of Ploutos as possible.
27 |
28 | ## Getting started
29 |
30 | 1. Decide which package types you want to create.
31 | 2. Determine which [inputs](https://github.com/NLnetLabs/ploutos/blob/main/.github/workflows/pkg-rust.yml#L131) you need to provide to the Ploutos workflow.
32 | 3. Create the files in your repository that will be referenced by the inputs _(perhaps start from the [template](../template/README.md))_.
33 | 4. Call the Ploutos workflow from your own workflow with the chosen inputs _(by hand or via the [starter workflow](../starter_workflow.md); a minimal sketch follows this list)_.
34 | 5. Run your workflow _(e.g. triggered by a push, or use the GitHub [`workflow_dispatch` manual trigger](https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow))_.
35 | 6. Use the created packages:
36 | - DEB and RPM packages will be attached as artifacts to the workflow run that you can [download](https://docs.github.com/en/actions/managing-workflow-runs/downloading-workflow-artifacts).
37 | - Docker images will have been published to Docker Hub.
38 | 7. (optional) Publish your DEB and RPM packages to a repository somewhere.
39 |
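A minimal sketch of such a caller workflow, based on the [minimal Docker example](./minimal_docker_example.md) in this documentation _(the `docker_org`/`docker_repo` values and the build rules are illustrative placeholders)_:

```yaml
# .github/workflows/my_pkg_workflow.yml in your repository.
name: Packaging

on:
  push:
  workflow_dispatch:

jobs:
  my_pkg_job:
    # Invoke the Ploutos reusable workflow, pinned to major version v5.
    uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
    with:
      # Placeholder values -- replace with your own.
      docker_org: my_org
      docker_repo: my_image_name
      docker_build_rules: |
        platform: "linux/amd64"
        shortname: "amd64"
```
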
40 | ## Examples
41 |
42 | - [Simple Docker example](./minimal_docker_example.md)
43 | - [Simple DEB example](./os_packaging.md#example)
44 | - [Real use cases](https://github.com/NLnetLabs/ploutos/network/dependents?dependent_type=REPOSITORY)
45 |
46 | ## Key concepts and general configuration
47 |
48 | Read [this page](./key_concepts_and_config.md) to learn more about key concepts and general configuration not specific to any single packaging type.
49 |
50 | ## Creating specific package types
51 |
52 | To learn more about how to build a particular package type using the Ploutos workflow see:
53 |
54 | - [Cross compiling](./cross_compiling.md)
55 | - [Creating O/S packages](./os_packaging.md)
56 | - [Publishing O/S packages](./os_publishing.md)
57 | - [Creating & publishing Docker images](./docker_packaging.md)
58 |
--------------------------------------------------------------------------------
/docs/cross_compiling.md:
--------------------------------------------------------------------------------
1 | # Ploutos: Cross-compiling
2 |
3 | **Contents:**
4 | - [Known issues](#known-issues)
5 | - [Inputs](#inputs)
6 | - [Workflow inputs](#workflow-inputs)
7 | - [Outputs](#outputs)
8 | - [How it works](#how-it-works)
9 | - [Why is the cross tool used?](#why-is-the-cross-tool-used)
10 | - [Custom runner support](#custom-runner-support)
11 |
12 | ## Known issues
13 |
14 | - [Cross-compilation is not customisable](https://github.com/NLnetLabs/.github/issues/42)
15 |
16 | ## Inputs
17 |
18 | The set of targets to cross-compile for is automatically determined from the unique union of the `target` values supplied in the `docker_build_rules` and/or `package_build_rules` inputs.
19 |
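For illustration, a `package_build_rules` input of the following shape (a sketch; the image and target names are placeholders, not defaults) would result in cross-compilation for the two listed targets:

```yaml
package_build_rules: |
  image: "debian:buster"
  target:
    - arm-unknown-linux-gnueabihf
    - arm-unknown-linux-musleabihf
```
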
20 | ### Workflow inputs
21 |
22 | | Input | Type | Required | Description |
23 | |---|---|---|---|
24 | | `cross_max_wait_mins` | string | No | The maximum number of minutes allowed for the `cross` job to complete the cross-compilation process and to upload the resulting binaries as workflow artifacts. After this, permutations of the downstream `docker` and `pkg` workflow jobs will fail if the artifact they need has not yet become available to download. |
25 |
26 | ## Outputs
27 |
28 | The result of cross-compilation is a temporary artifact uploaded to GitHub Actions that will be downloaded by later jobs in the Ploutos workflow. This is because, as the GitHub Actions ["Storing workflow data as artifacts" docs](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) say, _"Artifacts allow you to share data between jobs in a workflow_".
29 |
30 | In the example above this would cause temporary artifacts with the following names to be uploaded:
31 |
32 | - `tmp-cross-binaries-arm-unknown-linux-gnueabihf`
33 | - `tmp-cross-binaries-arm-unknown-linux-musleabihf`
34 |
35 | While these are referred to as "temporary" artifacts (because they are not needed after the later jobs consume them) they are actually not deleted by the Ploutos workflow in case they are useful for debugging issues with the packaging process. GitHub Actions will anyway [delete workflow artifacts after 90 days by default](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts#about-workflow-artifacts).
36 |
37 | ## How it works
38 |
39 | The `cross` job runs in parallel to the `docker` and `pkg` jobs in the Ploutos workflow. Permutations of those jobs that need to use the cross-compiled binaries will wait `cross_max_wait_mins` minutes until the binary they need has been uploaded as a workflow artifact by the `cross` job.
40 |
41 | Cross compilation takes place inside a Docker container running on an x86_64 GH runner host using an image from the Rust [`cross`](https://github.com/cross-rs/cross) project. These images contain the correct toolchain components needed to compile for one of the [supported targets](https://github.com/cross-rs/cross#supported-targets).
42 |
43 | ### Why is the cross tool used?
44 |
45 | Alternatives were explored but found lacking.
46 |
47 | `cargo-deb` supports cross-compilation [up to a point](https://github.com/kornelski/cargo-deb/issues/60#issuecomment-1333852148), while `cargo-generate-rpm` does no compilation at all, so using `cargo-deb`'s cross support would be both incomplete and inconsistent with the approach required for RPMs.
48 |
49 | Docker buildx QEMU-based cross-compilation, for example, is far too slow ([due to the emulated execution](https://github.com/multiarch/qemu-user-static/issues/176#issuecomment-1191078533)) and doesn't parallelize across multiple GitHub hosted runners.
50 |
51 | Native Rust Cargo support for cross-compilation requires you to know more about the required toolchain, to install the required tools yourself including the appropriate strip tool, set required environment variables, and to add a `.cargo/config.toml` file to your project with the paths to the tools to use (which may vary by build environment!).
52 |
53 | Finally, the fact that the `cross` tool was originally developed by the Rust Embedded Working Group Tools team makes it highly attractive for a Rust project such as this one.
54 |
55 | ## Custom runner support
56 |
57 | When using the `runs_on` workflow input to assign a custom GitHub-hosted or self-hosted runner for Ploutos to use, there is a risk with cross-compiling, if the pool of available runners is small, that the Ploutos workflow becomes stuck waiting for a cross-compiled artifact while no runner is free to run the cross-compilation job. As a work-around for this particular edge case you can also specify a `cross_runs_on` workflow input. The `cross_runs_on` label should be assigned to at least one GitHub runner which does **NOT** have the label specified in the `runs_on` workflow input. This ensures that there is always at least one runner available to carry out cross-compilation, thereby unblocking the workflow. A sketch follows below.
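
A hedged sketch of combining the two inputs (the runner labels shown are illustrative placeholders, not defaults):

```yaml
jobs:
  pkg:
    uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
    with:
      # Most jobs run on runners carrying this label (illustrative).
      runs_on: my-runner-pool
      # The cross job runs on a runner that does NOT carry the label above,
      # so a free runner always exists for cross-compilation (illustrative).
      cross_runs_on: my-cross-runner
```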
58 |
--------------------------------------------------------------------------------
/docs/minimal_docker_example.md:
--------------------------------------------------------------------------------
1 | # Ploutos: Minimal Docker example
2 |
3 | This page shows a minimal example of using the Ploutos workflow to package a very simple Docker image. In fact it doesn't even package a Rust application!
4 |
5 | **Contents:**
6 | - [Introduction](#introduction)
7 | - [Inputs](#inputs)
8 | - [Outputs](#outputs)
9 |
10 | ## Introduction
11 |
12 | The workflow we define below will configure the Ploutos workflow to:
13 |
14 | - Build a Linux x86-64 (`linux/amd64`) image from the `Dockerfile` located in the root of the caller's repository.
15 | - Tag the created Docker image as `my_org/my_image_name:test-amd64`.
16 | - Attach the created Docker image as a GitHub Actions artifact to the caller workflow run _(as a zip file containing a tar file produced by the [`docker save`](https://docs.docker.com/engine/reference/commandline/save/) command)_.
17 |
18 |
19 | ## Inputs
20 |
21 | ### Repository layout
22 |
23 | For this example we will need to create the following files in the caller's GitHub repository, with this directory layout:
24 |
25 | ```
26 | /
27 |   .github/
28 |     workflows/
29 |       my_pkg_workflow.yml   <-- your workflow
30 |   Dockerfile                <-- the Dockerfile to build an image from
31 | ```
32 |
33 | Now let's look at the content of these files.
34 |
35 | _**Tip:** Read [Docker packaging with the Ploutos workflow](./docker_packaging.md) for a deeper dive into the meaning of the Docker specific terms, inputs & values used in the examples below._
36 |
37 | ### `.github/workflows/my_pkg_workflow.yml`
38 |
39 | In this example the file contents below define a workflow that GitHub Actions will run whenever a Git `push` to your repository occurs or when the workflow is invoked by you manually via the GitHub web UI (so-called `workflow_dispatch`).
40 |
41 | This example only has a single job that has no steps of its own but instead invokes the NLnet Labs Rust Cargo Packaging reusable workflow.
42 |
43 | ```yaml
44 | name: Packaging
45 |
46 | on:
47 |   push:
48 |   workflow_dispatch:
49 | 
50 | jobs:
51 |   my_pkg_job:
52 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
53 |     with:
54 |       docker_org: my_org
55 |       docker_repo: my_image_name
56 |       docker_build_rules: |
57 |         platform: "linux/amd64"
58 |         shortname: "amd64"
59 | ```
60 |
61 | There are a few things to note here:
62 |
63 | 1. You can give this file any name you wish but it must be located in the `.github/workflows/` subdirectory of your repository. See the [official GitHub Actions documentation](https://docs.github.com/en/actions/using-workflows/about-workflows#create-an-example-workflow) for more information.
64 |
65 | 2. With the ["uses"](https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses) key we tell GitHub Actions to invoke the NLnet Labs Rust Cargo Packaging reusable workflow located at the given URL.
66 |
67 | 3. We also specify `@vN` denoting the version number of the Ploutos workflow to invoke. This corresponds to a tag in the [NLnetLabs/ploutos](https://github.com/NLnetLabs/ploutos/tags/) repository. For more information about the version number see [version numbers and upgrades](./README.md#pkg-workflow-version-numbers-and-upgrades).
68 |
69 | 4. We provide three ["inputs"](https://docs.github.com/en/actions/using-workflows/reusing-workflows#using-inputs-and-secrets-in-a-reusable-workflow) to the workflow as child key value pairs of the "with" key:
70 | - `docker_org`
71 | - `docker_repo`
72 | - `docker_build_rules`
73 |
74 | **Tip:** The full set of available inputs that the Ploutos workflow accepts is defined in the Ploutos workflow itself [here](https://github.com/NLnetLabs/ploutos/blob/main/.github/workflows/pkg-rust.yml#L131).
75 |
76 | ### `docker_build_rules`
77 |
78 | In this example we configure the Ploutos workflow to build a Docker image for the Linux x86-64 (aka `linux/amd64`) target architecture. There are more options that can be used here and you can target other architectures too, but we won't cover that in this simple example.
79 |
80 | ### `Dockerfile`
81 |
82 | Finally, a simple Docker [`Dockerfile`](https://docs.docker.com/engine/reference/builder/) which tells Docker what the content of the built image should be. In this case it's just a simple image which prints "Hello World!" to the terminal when the built image is run.
83 |
84 | ```Dockerfile
85 | FROM alpine
86 | CMD ["echo", "Hello World!"]
87 | ```
88 |
89 | ## Outputs
90 |
91 | When run and successful the workflow will have a [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) attached to the workflow run.
92 |
93 | The artifact will be named `tmp-docker-image-amd64` and can be downloaded either via the UI or using the [GitHub CLI](https://docs.github.com/en/github-cli/github-cli/about-github-cli). Note that only logged-in users with the GitHub `actions:read` permission will be able to see and download the artifact.
94 |
95 | The artifact contains the built Docker image. We can test it like so using the GitHub CLI:
96 |
97 | _**Tip:** The `gh run download` command unzips the downloaded artifact file for us automatically! Also note that the term "run" in this context refers to an existing workflow "run" and is not used here as the verb "to run"._
98 |
99 | ```shell
100 | $ cd path/to/your/repo/clone
101 | $ gh run download --name tmp-docker-image-amd64
102 | $ docker load -i docker-amd64-img.tar
103 | Loaded image: my_org/my_image_name:test-amd64
104 | $ docker run --rm my_org/my_image_name:test-amd64
105 | Hello World!
106 | ```
107 |
--------------------------------------------------------------------------------
/template/Dockerfile:
--------------------------------------------------------------------------------
1 | # This is a multi-stage Dockerfile, with a selectable first stage. With this
2 | # approach we get:
3 | #
4 | # 1. Separation of dependencies needed to build our app in the 'build' stage
5 | # and those needed to run our app in the 'final' stage, as we don't want
6 | # the build-time dependencies to be included in the final Docker image.
7 | #
8 | # 2. Support for either building our app for the architecture of the base
9 | # image using MODE=build (the default) or for externally built app
10 | # binaries (e.g. cross-compiled) using MODE=copy.
11 | #
12 | # In total there are four stages consisting of:
13 | # - Two possible first stages: 'build' or 'copy'.
14 | # - A special 'source' stage which selects either 'build' or 'copy' as the
15 | # source of binaries to be used by ...
16 | # - The 'final' stage.
17 |
18 |
19 | ###
20 | ### ARG DEFINITIONS ###########################################################
21 | ###
22 |
23 | # This section defines arguments that can be overridden on the command line
24 | # when invoking `docker build` using the argument form:
25 | #
26 | #   `--build-arg <NAME>=<VALUE>`.
27 |
28 | # MODE
29 | # ====
30 | # Supported values: build (default), copy
31 | #
32 | # By default this Dockerfile will build our app from sources. If the sources
33 | # have already been (cross) compiled by some external process and you wish to
34 | # use the resulting binaries from that process, then:
35 | #
36 | # 1. Create a directory on the host called 'dockerbin/$TARGETPLATFORM'
37 | # containing the already compiled app binaries (where $TARGETPLATFORM
38 | # is a special variable set by Docker BuildKit).
39 | # 2. Supply arguments `--build-arg MODE=copy` to `docker build`.
40 | ARG MODE=build
41 |
42 |
43 | # BASE_IMG
44 | # ========
45 | #
46 | # Only used when MODE=build.
47 | ARG BASE_IMG=alpine:3.16
48 |
49 |
50 | # CARGO_ARGS
51 | # ==========
52 | #
53 | # Only used when MODE=build.
54 | #
55 | # This ARG can be used to control the features enabled when compiling the app
56 | # or other compilation settings as necessary.
57 | ARG CARGO_ARGS
58 |
59 |
60 | ###
61 | ### BUILD STAGES ##############################################################
62 | ###
63 |
64 |
65 | # -----------------------------------------------------------------------------
66 | # Docker stage: build
67 | # -----------------------------------------------------------------------------
68 | #
69 | # Builds our app binaries from sources.
70 | FROM ${BASE_IMG} AS build
71 | ARG CARGO_ARGS
72 |
73 | RUN apk add --no-cache rust cargo
74 |
75 | WORKDIR /tmp/build
76 | COPY . .
77 |
78 | # `CARGO_HTTP_MULTIPLEXING` forces Cargo to use HTTP/1.1 without pipelining
79 | # instead of HTTP/2 with multiplexing. This seems to help with various
80 | # "spurious network error" warnings when Cargo attempts to fetch from crates.io
81 | # when building this image on Docker Hub and GitHub Actions build machines.
82 | #
83 | # `cargo install` is used instead of `cargo build` because it places just the
84 | # binaries we need into a predictable output directory. We can't control this
85 | # with arguments to cargo build as `--out-dir` is unstable and contentious and
86 | # `--target-dir` still requires us to know which profile and target the
87 | # binaries were built for. By using `cargo install` we can also avoid needing
88 | # to hard-code the set of binary names to copy so that if we add or remove
89 | # built binaries in future this will "just work". Note that `--root /tmp/out`
90 | # actually causes the binaries to be placed in `/tmp/out/bin/`. `cargo install`
91 | # will create the output directory for us.
92 | RUN CARGO_HTTP_MULTIPLEXING=false cargo install \
93 |     --locked \
94 |     --path . \
95 |     --root /tmp/out/ \
96 |     ${CARGO_ARGS}
97 |
98 |
99 | # -----------------------------------------------------------------------------
100 | # Docker stage: copy
101 | # -----------------------------------------------------------------------------
102 | # Only used when MODE=copy.
103 | #
104 | # Copy binaries from the host directory 'dockerbin/$TARGETPLATFORM' directory
105 | # into this build stage to the same predictable location that binaries would be
106 | # in if MODE were 'build'.
107 | #
108 | # Requires that `docker build` be invoked with variable `DOCKER_BUILDKIT=1` set
109 | # in the environment. This is necessary so that Docker will skip the unused
110 | # 'build' stage and so that the magic $TARGETPLATFORM ARG will be set for us.
111 | FROM ${BASE_IMG} AS copy
112 | ARG TARGETPLATFORM
113 | ONBUILD COPY dockerbin/$TARGETPLATFORM /tmp/out/bin/
114 |
115 |
116 | # -----------------------------------------------------------------------------
117 | # Docker stage: source
118 | # -----------------------------------------------------------------------------
119 | # This is a "magic" build stage that "labels" a chosen prior build stage as the
120 | # one that the build stage after this one should copy application binaries
121 | # from. It also causes the ONBUILD COPY command from the 'copy' stage to be run
122 | # if needed. Finally, we ensure binaries have the executable flag set because
123 | # when copied in from outside they may not have the flag set, especially if
124 | # they were uploaded as a GH actions artifact then downloaded again which
125 | # causes file permissions to be lost.
126 | # See: https://github.com/actions/upload-artifact#permission-loss
127 | FROM ${MODE} AS source
128 | RUN chmod a+x /tmp/out/bin/*
129 |
130 |
131 | # -----------------------------------------------------------------------------
132 | # Docker stage: final
133 | # -----------------------------------------------------------------------------
134 | # Create an image containing just the binaries, configs & scripts needed to run
135 | # our app, and not the things needed to build it.
136 | #
137 | # The previous build stage from which binaries are copied is controlled by the
138 | # MODE ARG (see above).
139 | FROM alpine:3.16 AS final
140 |
141 | # Install required runtime dependencies
142 | RUN apk add --no-cache libgcc tini
143 |
144 | # Copy binaries from the 'source' build stage into the image we are building
145 | COPY --from=source /tmp/out/bin/* /usr/local/bin/
146 |
147 | # Use Tini to ensure that our application responds to CTRL-C when run in the
148 | # foreground without the Docker argument "--init" (which is actually another
149 | # way of activating Tini, but cannot be enabled from inside the Docker image).
150 | ENTRYPOINT ["/sbin/tini", "--", "{{ project_name }}"]
151 |
--------------------------------------------------------------------------------
/docs/FAQ.md:
--------------------------------------------------------------------------------
1 | # FAQ
2 |
3 | - [What is Ploutos?](#what-is-ploutos)
4 | - [Why use Ploutos?](#why-use-ploutos)
5 | - [Known issues](#known-issues)
6 | - [Can I just run the Ploutos workflow?](#can-i-just-run-the-ploutos-workflow)
7 | - [What packages can the Ploutos workflow produce?](#what-packages-can-the-ploutos-workflow-produce)
8 | - [How can I run the created packages?](#how-can-i-run-the-created-packages)
9 | - [How does it work?](#how-does-it-work)
10 |
11 | ## What is Ploutos?
12 |
13 | Ploutos is a GitHub Actions reusable workflow that provides a thin, easy-to-use wrapper around parallel invocation of tools such as `cross`, `cargo-deb` and `cargo-generate-rpm` to package and test your Rust application for your chosen combination of package formats, target CPU architectures and operating system flavours.
14 |
15 | ## Why use Ploutos?
16 |
17 | Ploutos simplifies the creation of Debian, RPM and Docker packages for your Rust projects. You can call it in your project's workflow by using [GitHub's reusable workflow feature](https://docs.github.com/en/actions/using-workflows/reusing-workflows). By reusing Ploutos, you can focus on the packaging specifics that matter for your project, instead of duplicating the foundation in every project.
18 |
19 | ## Known issues
20 |
21 | The Ploutos workflow was originally written for use only by NLnet Labs. As such not all behaviours are yet (fully) configurable. Given time, sufficient interest and available resources, these limitations can in principle be removed. For a list of open issues and ideas for improvement, and to submit your own, see https://github.com/NLnetLabs/ploutos/issues/.
22 |
23 | ## Can I just run the Ploutos workflow?
24 |
25 | No, it is not intended to be used standalone. To use it you must call it from your own GitHub Workflow. See the [official GitHub Actions documentation](https://docs.github.com/en/actions/using-workflows/reusing-workflows#calling-a-reusable-workflow) on calling reusable workflows for more information.
26 |
27 | ## What packages can the Ploutos workflow produce?
28 |
29 | The Ploutos workflow is capable of producing Linux (DEB & RPM) packages and Docker images.
30 |
31 | Produced DEB and RPM packages will be attached as artifacts to the caller workflow run. **Only GitHub users with `actions:read` permission** will be able to download the artifacts.
32 |
33 | > The Ploutos workflow does **NOT** publish DEB and/or RPM packages anywhere. If you want your users to be able to download the produced DEB and/or RPM either directly or from a package repository using a tool like `apt` (for DEB) or `yum` (for RPM) you will need to upload the packages to the appropriate location yourself.
34 |
35 | Produced Docker images can optionally be published to [Docker Hub](https://hub.docker.com/). In order for this to work you must configure the destination Docker Hub organisation, repository, username and password/access token and ensure that the used credentials provide write access to the relevant Docker Hub repository.
36 |
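For illustration, the demo template in this repository (see `/template/.github/workflows/pkg.yml` above) passes the credentials through roughly like this (sketched here with placeholder values):

```yaml
jobs:
  pkg:
    uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
    secrets:
      # Your Docker Hub username (placeholder shown).
      DOCKER_HUB_ID: my_docker_username
      # An access token generated at https://hub.docker.com/settings/security,
      # stored as an Actions secret in the calling repository.
      DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
    with:
      docker_org: my_org
      docker_repo: my_image_name
```
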
37 | At NLnet Labs we publish produced DEB and RPM packages at https://packages.nlnetlabs.nl/ via an internal process that downloads the workflow run artifacts and signs & uploads them to the correct location, and Docker images are published by the Ploutos workflow to the appropriate repository under the https://hub.docker.com/r/nlnetlabs/ Docker organisation.
38 |
39 | ## How can I run the created packages?
40 |
41 | Linux packages should be installed using the appropriate package manager (e.g. `apt` for DEB packages and `yum` for RPM packages).
42 |
43 | Docker images can be run using the [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command.
44 |
45 | ## How does it work?
46 |
47 | The Ploutos workflow is a GitHub Actions "reusable workflow" because it [defines](https://github.com/NLnetLabs/ploutos/blob/main/.github/workflows/pkg-rust.yml#L130) the `workflow_call` trigger and the set of inputs that must be provided in order to call the workflow. For an explanation of GitHub reusable workflows see the [official GitHub Actions documentation](https://docs.github.com/en/actions/using-workflows/reusing-workflows) on reusable workflows.
48 |
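The general shape of such a trigger definition is as follows (a simplified, illustrative sketch; the real Ploutos input list is far longer, see the link above):

```yaml
on:
  workflow_call:
    inputs:
      # Illustrative subset of inputs; see pkg-rust.yml for the full set.
      docker_org:
        type: string
      docker_repo:
        type: string
      docker_build_rules:
        type: string
    secrets:
      DOCKER_HUB_ID:
        required: false
      DOCKER_HUB_TOKEN:
        required: false
```
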
49 | Once called the workflow runs one or more jobs like so:
50 |
51 | ```mermaid
52 | flowchart LR
53 | prepare --> cross
54 | cross --> pkg --> pkg-test
55 | cross --> docker --> docker-manifest
56 | click cross href "https://github.com/NLnetLabs/ploutos/blob/main/docs/cross_compiling.md" "Cross-compilation"
57 | click pkg href "https://github.com/NLnetLabs/ploutos/blob/main/docs/os_packaging.md" "O/S Packaging"
58 | click pkg-test href "https://github.com/NLnetLabs/ploutos/blob/main/docs/os_package_testing.md" "O/S Package Testing"
59 | click docker href "https://github.com/NLnetLabs/ploutos/blob/main/docs/docker_packaging.md" "Docker Packaging"
60 | click docker-manifest href "https://github.com/NLnetLabs/ploutos/blob/main/docs/docker_multi_arch.md" "Docker Multi-Arch Packaging"
61 | ```
62 |
63 | All of the jobs except `prepare` are matrix jobs, i.e. N instances of the job run in parallel where N is the number of relevant input matrix permutations.
64 |
65 | Note that Git checkout is **NOT** done by the caller. Instead Ploutos checks out the source code at multiple different points in the workflow:
66 |
67 | - In `prepare` to be able to load rule files from the checked out files.
68 | - In `cross` to have the application files to build available on the GH runner.
69 | - In `pkg` to have the application files to build available in the container.
70 | - In `pkg-test` to have the test script to run available for copying into the LXC container.
71 | - In `docker` to have the `Dockerfile` and Docker context files available for building.
72 |
73 | Only the packaging types that you request (via the workflow call parameters) will actually be run, i.e. you can build only DEB packages, or only RPM and Docker, and cross-compile or not as needed.
74 |
75 | - `prepare` - checks if the given inputs look roughly okay.
76 | - [`cross`](./cross_compiling.md) - cross-compiles the Rust Cargo application if needed.
77 | - [`pkg`](./os_packaging.md) - compiles (if not already cross-compiled) and packages the Rust Cargo application as a DEB or RPM package.
78 | - [`pkg-test`](./os_packaging.md) - tests the produced DEB/RPM packages, both with some standard checks and optionally with application-specific checks provided by you.
79 | - [`docker`](./docker_packaging.md) - builds and publishes one or more Docker images.
80 | - [`docker-manifest`](./docker_packaging.md) - publishes a combined Docker Manifest that groups architecture specific variants of the same image under a single Docker tag.
81 |
82 | The core parts of the workflow are not specific to GitHub: they invoke Rust ecosystem tools such as [`cargo`](https://doc.rust-lang.org/cargo/), [`cross`](https://github.com/cross-rs/cross), [`cargo-deb`](https://github.com/kornelski/cargo-deb#readme) and [`cargo-generate-rpm`](https://github.com/cat-in-136/cargo-generate-rpm), and set up the correct conditions for invoking those tools. The testing part likewise invokes tools such as [`lxd` & `lxc`](https://linuxcontainers.org/), `apt` and `yum`, which are also not GitHub specific, and of course there are the parts that invoke the `docker` command. The GitHub-specific part is the pipeline that ties all these steps together, runs pieces in parallel, and passes inputs in and outputs out.
83 |
--------------------------------------------------------------------------------
/docs/docker_packaging.md:
--------------------------------------------------------------------------------
1 | # Ploutos: Docker packaging
2 |
3 | **Contents:**
4 | - [Known issues](#known-issues)
5 | - [Terminology](#terminology)
6 | - [Inputs](#inputs)
7 | - [Workflow inputs](#workflow-inputs)
8 | - [Docker build rules](#docker-build-rules)
9 | - [Dockerfile build arguments](#dockerfile-build-arguments)
10 | - [Docker Hub secrets](#docker-hub-secrets)
11 | - [Outputs](#outputs)
12 | - [How it works](#how-it-works)
13 | - [Building](#building)
14 | - [Publication](#publication)
15 | - [Generated image names](#generated-image-names)
16 |
17 | ## Known issues
18 |
19 | - [The Docker registry to publish to is not configurable](https://github.com/NLnetLabs/.github/issues/37)
20 | - [Version number determination should be more robust](https://github.com/NLnetLabs/.github/issues/43)
21 |
22 | ## Terminology
23 |
24 | Docker terminology regarding the location/identity of an image published to a registry (let's assume [Docker Hub](https://hub.docker.com/)) is a bit confusing. Docker's own [official documentation](https://docs.docker.com/engine/reference/commandline/tag/) conflates the terms "image" and "tag". When configuring the Ploutos workflow we therefore use the following terminology:
25 |
26 | ```
27 | # Using Docker Hub terminology, for a Docker image named nlnetlabs/krill:v0.1.2-arm64:
28 | # - The Organization would be 'nlnetlabs'.
29 | # - The Repository would be 'krill'.
30 | # - The Tag would be v0.1.2-arm64
31 | # Collectively I refer to the combination of <org>/<repo>:<tag> as the 'image' name,
32 | ```
33 |
34 | Source: https://github.com/NLnetLabs/.github/blob/main/.github/workflows/pkg-rust.yml
35 |
36 | ## Inputs
37 |
38 | ### Workflow inputs
39 |
40 | | Input | Type | Required | Description |
41 | |---|---|---|---|
42 | | `docker_org` | string | Yes | E.g. `nlnetlabs`. |
43 | | `docker_repo` | string | Yes | E.g. `krill`. |
44 | | `docker_build_rules` | [matrix](./key_concepts_and_config.md#matrix-rules) | Yes | See below. |
45 | | `docker_sanity_check_command` | string | No | A command to pass to `docker run`. If it returns a non-zero exit code it will cause the packaging workflow to fail. The command is intended to be a simple sanity check of the built image and should return quickly. It is only run against images built for the x86_64 architecture, because `docker run` requires the image CPU architecture to match that of the host runner. As such, when building images for non-x86_64 architectures it does **NOT** verify that ALL built images are sane. |
46 | | `docker_file_path` | string | No | The path relative to the Git checkout to the `Dockerfile`. Defaults to `./Dockerfile`. |
47 | | `docker_context_path` | string | No | The path relative to the Git checkout to use as the Docker context. Defaults to `.`. |
48 |
49 | **Note:** There is no input for specifying the Docker tag because the tag is automatically determined based on the current Git branch/tag and architecture "shortname" (taken from the `docker_build_rules` matrix).
50 |
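51 | For example, a minimal sketch of a call with Docker packaging enabled (the `docker_org`, `docker_repo` and rules file path values here are hypothetical placeholders, and the sanity check assumes the image entrypoint accepts `--version`):
52 |
53 | ```yaml
54 | jobs:
55 |   my_pkg_job:
56 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
57 |     with:
58 |       docker_org: example-org
59 |       docker_repo: example-app
60 |       docker_build_rules: pkg/rules/docker-images-to-build.yml
61 |       docker_sanity_check_command: --version
62 | ```
63 |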
51 | ### Docker build rules
52 |
53 | A rules [matrix](./key_concepts_and_config.md#matrix-rules) with the following keys must be provided to guide the build process:
54 |
55 | | Matrix Key | Required | Description |
56 | |---|---|---|
57 | | `platform` | Yes | Set the [target platform for the build](https://docs.docker.com/engine/reference/commandline/buildx_build/#platform), e.g. `linux/amd64`. See [^1], [^2] and [^3]. |
58 | | `shortname` | Yes | Suffixes the tag of the architecture specific "manifest" image with this value, e.g. `amd64`. |
59 | | `target` | No | Used to determine the correct cross-compiled binary GitHub Actions artifact to download. Only used when `mode` is `copy`. |
60 | | `mode` | No | `copy` (for cross-compiled targets) or `build`. Passed through to the `Dockerfile` as build arg `MODE`. |
61 | | `cargo_args` | No | Can be used when testing, e.g. set to `--no-default-features` to speed up the application build. Passed through to the Dockerfile as build arg `CARGO_ARGS`. |
62 |
63 | [^1]: https://go.dev/doc/install/source#environment (from [^4])
64 | [^2]: https://github.com/containerd/containerd/blob/v1.4.3/platforms/database.go#L83
65 | [^3]: https://stackoverflow.com/a/70889505
66 | [^4]: https://github.com/docker-library/official-images#architectures-other-than-amd64 (from [^5])
67 | [^5]: https://docs.docker.com/desktop/multi-arch/
68 |
69 | Example using an inline YAML string matrix definition:
70 |
71 | ```yaml
72 | jobs:
73 | my_pkg_job:
74 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
75 | with:
76 | docker_build_rules: |
77 | include:
78 | - platform: "linux/amd64"
79 | shortname: "amd64"
80 | mode: "build"
81 |
82 | - platform: "linux/arm/v6"
83 | shortname: "armv6"
84 | target: "arm-unknown-linux-musleabihf"
85 | mode: "copy"
86 | ```
87 |
88 | ### Dockerfile build arguments
89 |
90 | The Ploutos workflow will invoke `docker buildx` passing [`--build-arg <name>=<value>`](https://docs.docker.com/engine/reference/commandline/build/#set-build-time-variables---build-arg) for the following custom build arguments.
91 |
92 | The Docker context will be the root of a clone of the caller's GitHub repository.
93 |
94 | Your `Dockerfile` MUST define corresponding [`ARG <name>[=<default value>]`](https://docs.docker.com/engine/reference/builder/#arg) instructions for these build arguments.
95 |
96 | | Build Arg | Description |
97 | |---|---|
98 | | `MODE=build` | The `Dockerfile` should build the application from sources available in the Docker context. |
99 | | `MODE=copy` | The pre-compiled binaries will be made available to the build process in subdirectory `dockerbin/$TARGETPLATFORM/*` of the Docker build context, where [`$TARGETPLATFORM`](https://docs.docker.com/engine/reference/builder/#automatic-platform-args-in-the-global-scope) is a variable made available to the `Dockerfile` build process by Docker, e.g. `linux/amd64`. For an example see https://github.com/NLnetLabs/routinator/blob/v0.11.3/Dockerfile#L99. |
100 | | `CARGO_ARGS=...` | Only relevant when `MODE` is `build`. Expected to be passed to the Cargo build process, e.g. `cargo build ... ${CARGO_ARGS}` or `cargo install ... ${CARGO_ARGS}`. For an example see https://github.com/NLnetLabs/routinator/blob/v0.11.3/Dockerfile#L92. |
101 |
102 | ## Docker Hub secrets
103 |
104 | The Ploutos workflow supports two Docker specific secrets which can be passed to the workflow like so:
105 |
106 | ```yaml
107 | jobs:
108 | full:
109 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
110 | secrets:
111 | DOCKER_HUB_ID: ${{ secrets.YOUR_DOCKER_HUB_ID }}
112 | DOCKER_HUB_TOKEN: ${{ secrets.YOUR_DOCKER_HUB_TOKEN }}
113 | ```
114 |
115 | Or, if you are willing to trust the packaging workflow with all of your secrets:
116 |
117 | ```yaml
118 | jobs:
119 | full:
120 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
121 | secrets: inherit
122 | ```
123 |
124 | If either of the `DOCKER_HUB_ID` or `DOCKER_HUB_TOKEN` secrets is defined, the workflow `prepare` job will attempt to log in to Docker Hub using these credentials and will fail the Ploutos workflow if it is unable to do so.
125 |
126 | Best practice is to use a separately created [Docker Hub access token](https://docs.docker.com/docker-hub/access-tokens/#create-an-access-token) for automation purposes that has minimal access rights. The Ploutos workflow needs write access but not delete access.
127 |
128 | _**Note:** If neither of the `DOCKER_HUB_ID` and `DOCKER_HUB_TOKEN` secrets is defined then the Ploutos workflow will **NOT** attempt to publish images to Docker Hub._
129 |
130 | ## Outputs
131 |
132 | A [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) will be attached to the workflow run with the name `tmp-docker-image-<shortname>`. The artifact will be a `zip` file, inside which will be a `tar` file called `docker-<shortname>-img.tar`. The `tar` file is the output of the [`docker save` command](https://docs.docker.com/engine/reference/commandline/save/) and can be loaded into a local Docker daemon using the [`docker load` command](https://docs.docker.com/engine/reference/commandline/load/).
133 |
134 | If the required secrets are defined (see above), and the git ref is either the `main` branch or a `v*` tag, then the Docker image will be published to Docker Hub with the generated image name (see [Generated image names](#generated-image-names) below).
135 |
136 | ## How it works
137 |
138 | The `docker` workflow job will do a Git checkout of the repository that hosts the caller workflow.
139 |
140 | ### Building
141 |
142 | When using the [`cross` job](./cross_compiling.md) to cross-compile your application for different architectures you do not want to build the application again when building the Docker image from the `Dockerfile`.
143 |
144 | You can direct the Ploutos workflow to use pre-cross-compiled binaries by setting the `mode` to `copy` instead of the default `build` in your `docker_build_rules(_path)` input matrix.
145 |
146 | You must however make sure that your `Dockerfile` supports the build arguments that the Ploutos workflow will pass to it (see above).
147 |
148 | ### Publication
149 |
150 | The Ploutos workflow is able to output built Docker images in three ways:
151 |
152 | 1. **Output Docker images as [GitHub Actions artifacts](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) attached to the workflow run:** This can be useful for testing or manual distribution, or if you don't (yet) have a Docker Hub login and/or access token.
153 |
154 | 2. **Publish Docker images to Docker Hub:** For the common single architecture case this is what you probably want.
155 |
156 | 3. **Publish [multi-arch Docker images](https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/) AND a [Docker manifest](https://docs.docker.com/engine/reference/commandline/manifest/) to Docker Hub:** This is useful when publishing the same image for multiple architectures to enable the end user to run the image without needing to specify the desired architecture.
157 |
158 | ### Generated image names
159 |
160 | As stated above there is no way to manually control the tag given to the created Docker images. The images need to have distinct tags per architecture and per version/release type. For these reasons the workflow determines the tag itself. Possible tags that the workflow can generate are:
161 |
162 | | Image Name | Architecture Specific Tag | Multi-Arch Tag | Conditions |
163 | |---|---|---|---|
164 | | `<org>/<repo>` | `:vX.Y.Z-<shortname>` | `:vX.Y.Z` | No dash `-` in git ref |
165 | | `<org>/<repo>` | `:unstable-<shortname>` | `:unstable` | Branch is `main` |
166 | | `<org>/<repo>` | `:latest-<shortname>` | `:latest` | No dash `-` in git ref and not `main` |
167 | | `<org>/<repo>` | `:test-<shortname>` | `:test` | Neither `main` nor `vX.Y.Z` tag |
168 |
--------------------------------------------------------------------------------
/docs/key_concepts_and_config.md:
--------------------------------------------------------------------------------
1 | # Ploutos: Key concepts & general configuration
2 |
3 | **Contents:**
4 |
5 | - [Stability promise](#stability-promise)
6 | - [Operating system versions](#operating-system-versions)
7 | - [Application versions](#application-versions)
8 | - [Next dev version](#next-dev-version)
9 | - [Matrix rules](#matrix-rules)
10 | - [Caching and performance](#caching-and-performance)
11 | - [Cargo workspace support](#cargo-workspace-support)
12 | - [Custom runner support](#custom-runner-support)
13 |
14 | ## Stability promise
15 |
16 | When you refer to the Ploutos workflow you are also indicating which version of the workflow you want to use, e.g.:
17 |
18 | ```yaml
19 | jobs:
20 | my_pkg_job:
21 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
22 | ```
23 |
24 | Here we see that the v5 version of the workflow will be used.
25 |
26 | What may not be obvious is that this will work for v5.0.0, v5.0.1, v5.3.4 and so on. This is because Ploutos follows the principles of [Semantic Versioning](https://semver.org/).
27 |
28 | The version number consists of MAJOR.MINOR.PATCH components. Any change in minor and patch versions should be backward compatible and thus safe to use automatically.
29 |
30 | If a backward incompatible change is made however then the major version number will be increased, e.g. from `v5` to `v6`. In that case you will not get the new version with the breaking changes unless you manually update the `uses` line in your workflow to refer to the new major version.
31 |
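32 | For example (assuming exact release tags such as `v5.0.0` exist), you can choose how tightly to pin:
33 |
34 | ```yaml
35 | jobs:
36 |   my_pkg_job:
37 |     # Track all backward compatible releases in the v5 series:
38 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
39 |     # Or pin to one exact release instead:
40 |     # uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5.0.0
41 | ```
42 |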
32 | ## Operating system versions
33 |
34 | Ploutos aims to support so-called long-term stable (LTS) versions of operating systems that it packages for.
35 |
36 | ## Application versions
37 |
38 | The Ploutos workflow differentiates between "release", "pre-release", "unstable" and "development" types of release/application version.
39 |
40 | Ploutos uses your application version number, as defined in the `Cargo.toml` `version` field, and/or in the git ref (e.g. a branch name, or a release tag such as v1.2.3), to know which type of version is being packaged.
41 |
42 | Ploutos also takes care of handling special cases that relate to these different types.
43 |
44 | For example, an `XXX-rc1` (a "pre-release" or "release candidate") version defined in `Cargo.toml` requires special treatment for O/S packages as it should be considered older than `XXX`, but won't be unless the dash `-` is replaced by a tilde `~`. Contrary to that, version `XXX-dev` (a "development" version) should be considered NEWER than `XXX` so SHOULD have a dash rather than a tilde. Also, `Cargo.toml` cannot contain tilde characters in the version number string, so we have to handle this ourselves.
45 |
46 | Also, when publishing to Docker Hub one wouldn't necessarily want the latest and greatest `main` branch code to be published as the Docker tag `latest`, as users would automatically be upgraded to it if they don't provide a version number and do `docker run` (on a machine that has no version of the image yet locally) or `docker pull` (to fetch the latest). There can still be value in letting users run the bleeding edge version for testing purposes and doing the Docker packaging for them, so we publish Docker images built from the `main` branch as tag `unstable`.
47 |
48 | These are just a couple of examples of special behaviour relating to version numbers that Ploutos handles for you.
49 |
50 | _**Known issue:** [Inconsistent version number determination](https://github.com/NLnetLabs/.github/issues/43)_
51 |
52 | _**Known issue:** [Version number 'v' prefix should not be required](https://github.com/NLnetLabs/.github/issues/44)_
53 |
54 | ## Next dev version
55 |
56 | The Ploutos workflow accepts an input parameter called `next_ver_label`:
57 |
58 | ```yaml
59 | next_ver_label:
60 |   description: "A tag suffix that denotes an in-development rather than release version, e.g. `dev`."
61 | required: false
62 | type: string
63 | default: dev
64 | ```
65 |
66 | As you can see it defaults to `dev`. This relates to the notion of a "development" version of your application referred to in the previous section.
67 |
68 | When building packages, or providing your source code to others to build, it is helpful to know, e.g. when a bug report is submitted, which version of the application the issue relates to. If you release version v1.2.3 of your application and then commit some more changes to version control, perhaps even building packaged versions of that "development" version, it shouldn't still report itself as v1.2.3: that was the version that was "released" and may differ in code and behaviour from the "development" version. But it also shouldn't report itself as v1.2.4 or v1.3.0; we don't yet know what the next version will be or what kinds of major, minor or patch differences it will contain as we haven't crafted a release yet. So instead, we update the version in `main` immediately after release to be `vX.Y.Z-dev`, signifying that this is a new in-development version, not the last released version and not the next release version. And, as mentioned in the section above, this also has a necessary impact on the version inside the built DEB and RPM packages, notably the use of tilde instead of dash!
69 |
70 | If `dev` isn't the suffix you use, you can change that with the `next_ver_label` input. We don't however support other schemes for signifying a dev version via the version number at this time, only suffixing.
71 |
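72 | For example, a minimal sketch assuming you suffix your in-development versions with `alpha` instead of `dev`:
73 |
74 | ```yaml
75 | jobs:
76 |   my_pkg_job:
77 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
78 |     with:
79 |       # Treat versions like v1.2.3-alpha as "development" versions:
80 |       next_ver_label: alpha
81 | ```
82 |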
72 | ## Matrix rules
73 |
74 | Several of the inputs to the Ploutos workflow are of "matrix" type. These matrices are used with [GitHub matrix strategies](https://docs.github.com/en/actions/using-jobs/using-a-matrix-for-your-jobs) to parallelize the packaging process.
75 |
76 | GitHub will attempt to maximize the number of jobs running in parallel, and for each permutation of the matrix given to a particular job it will launch another parallel instance of the same workflow job to process that particular input permutation.
77 |
78 | A matrix is an ordered sequence of `key: value` pairs. In the examples below there is a single key called `target` whose value is a list of strings.
79 |
80 | An input of "matrix" type can be specified in one of two ways:
81 |
82 | - As an inline YAML string matrix, e.g.: _(note the YAML | multi-line ["literal block style indicator"](https://yaml.org/spec/1.0/#id2490752) which is required to preserve the line breaks in the matrix definition)_
83 |
84 | ```yaml
85 | jobs:
86 | my_pkg_job:
87 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
88 | with:
89 | cross_build_rules: |
90 | target:
91 | - arm-unknown-linux-musleabihf
92 | - arm-unknown-linux-gnueabihf
93 | ```
94 |
95 | - As the relative path to a YAML file containing the string matrix, e.g.:
96 |
97 | ```yaml
98 | jobs:
99 | my_pkg_job:
100 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
101 | with:
102 | cross_build_rules: pkg/rules/cross_build_rules.yml
103 | ```
104 |
105 | Where `pkg/rules/cross_build_rules.yml` looks like this:
106 |
107 | ```yaml
108 | target:
109 | - 'arm-unknown-linux-musleabihf'
110 | - 'arm-unknown-linux-gnueabihf'
111 | ```
112 |
113 | ## Caching and performance
114 |
115 | For steps of the packaging process that take a long time (e.g. Cargo install of supporting tools such as `cargo-deb`, `cargo-generate-rpm`, `cross`, etc.) we use the GitHub Actions caching support to store the resulting binaries.
116 |
117 | After successful caching, subsequent invocations of the packaging workflow will proceed much faster through such steps. If the stored items expire from the cache they will of course need to be rebuilt causing the next run to be slower again.
118 |
119 | While this doesn't help much for infrequent releases, it makes a big difference when iterating your packaging settings until the resulting packages match your expectations.
120 |
121 | ## Rust version
122 |
123 | The Rust version used to compile your application is not explicitly controlled anywhere in the packaging process at present.
124 |
125 | - For cross-compilation this is currently 1.64.0 from the [Ubuntu 20.04 GitHub hosted runner pre-installed software](https://github.com/actions/runner-images/blob/main/images/linux/Ubuntu2004-Readme.md).
126 |
127 | - For O/S packaging, the latest Rust is installed via rustup.
128 |
129 | - For Docker images it depends on how your `Dockerfile` performs the compilation.
130 |
131 | _**Known issue:** [Inconsistent Rust compiler version](https://github.com/NLnetLabs/.github/issues/52)_
132 |
133 | ## Artifact prefixing
134 |
135 | By default temporary and final produced artifacts are named under the assumption that no other workflow jobs exist that also upload artifacts and thus may cause artifact name conflicts.
136 |
137 | If necessary, the `artifact_prefix` workflow string input can be used to specify a prefix that will be added to every GitHub Actions artifact uploaded by Ploutos.
138 |
139 | ## Strict mode
140 |
141 | Some actions performed by Ploutos can result in warnings or errors that are potentially spurious; just because Lintian, rpmlint or some other tool reports a problem does not mean that we should consider it fatal. For such cases Ploutos by default includes the output of the tools in the log, and in some cases raises warnings in the workflow log, but won't fail the workflow run. If needed, you can set the `strict_mode` workflow input to `true` to force Ploutos to be stricter in some cases than it would normally be.
142 |
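143 | A sketch combining this input with the `artifact_prefix` input from the previous section (both values here are arbitrary examples):
144 |
145 | ```yaml
146 | jobs:
147 |   my_pkg_job:
148 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
149 |     with:
150 |       artifact_prefix: mytest-   # avoid artifact name clashes with other jobs
151 |       strict_mode: true          # fail on issues that would normally only warn
152 | ```
153 |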
143 | ## Cargo workspace support
144 |
145 | A Rust Cargo "workspace" (see [here](https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html) and [here](https://doc.rust-lang.org/cargo/reference/workspaces.html)) is a _"set of packages that share the same Cargo.lock and output directory"_.
146 |
147 | When using the workspace feature without a root package, i.e. your root `Cargo.toml` lacks a `[package]` section, this is known as a "virtual" workspace or manifest. When using a virtual workspace the package tooling is unable to find the configuration it needs in the root `Cargo.toml` and so you need to provide additional Ploutos settings to guide the package tooling to the right place.
148 |
149 | Two string workflow inputs exist for this purpose:
150 |
151 | - `manifest_dir` - directs Ploutos to the directory containing the root `Cargo.toml`, if not actually in the root directory.
152 | - `workspace_package` - tells Ploutos which "workspace member" (child project directory) contains a `Cargo.toml` that includes the `[package]` section.
153 |
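154 | For example, a sketch for a hypothetical virtual workspace whose root `Cargo.toml` lives in `rust/` and whose `daemon/` member contains the `[package]` section:
155 |
156 | ```yaml
157 | jobs:
158 |   my_pkg_job:
159 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
160 |     with:
161 |       manifest_dir: rust          # directory containing the root (virtual) Cargo.toml
162 |       workspace_package: daemon   # workspace member defining the [package] section
163 | ```
164 |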
154 | ## Custom runner support
155 |
156 | By default Ploutos is configured with a suitable [GitHub hosted runner type](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners) to run Ploutos jobs on.
157 |
158 | By specifying a `runs_on` workflow input you can cause Ploutos to run its jobs on a different runner type.
159 |
160 | The motivating use case for this option is to run Ploutos jobs on a [self-hosted runner](https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners) instead, e.g. to use a more powerful machine or to avoid having to install or copy slow or large applications or data in to the build environment, or to manage cost by using your own host instead of one that GitHub charges for.
161 |
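162 | For example, a sketch that assumes `runs_on` accepts a single runner label and that you have registered a self-hosted runner with the (hypothetical) label `my-big-build-box`:
163 |
164 | ```yaml
165 | jobs:
166 |   my_pkg_job:
167 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
168 |     with:
169 |       runs_on: my-big-build-box
170 | ```
171 |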
162 | If using a self-hosted runner, Ploutos assumes that the [software preinstalled in GitHub hosted runners](https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-software) is available, so you may also need to install additional tools on your self-hosted runner; e.g. at the time of writing recent versions of `jq`, `yq` and Rust are needed. You will also need to ensure that Docker and LXC/LXD are installed. These tools should be installed system-wide, and the actions runner software should run as `root` (by installing it with `./svc.sh install root`), otherwise it will not be able to issue Docker commands (adding the user to the `docker` group isn't enough as it will later try to remove files and fail because they will be owned by `root`).
163 |
--------------------------------------------------------------------------------
/docs/os_packaging.md:
--------------------------------------------------------------------------------
1 | # Ploutos: O/S packaging
2 |
3 | **Contents:**
4 | - [Introduction](#introduction)
5 | - [Example](#example)
6 | - [Inputs](#inputs)
7 | - [Cargo.toml](#cargotoml)
8 | - [Workflow inputs](#workflow-inputs)
9 | - [Package build rules](#package-build-rules)
10 | - [Permitted `<image>` values](#permitted-image-values)
11 | - [Package test rules](#package-test-rules)
12 | - [Package test script inputs](#package-test-script-inputs)
13 | - [Upgrade testing](#upgrade-testing)
13 | - [Outputs](#outputs)
14 | - [How it works](#how-it-works)
15 | - [Build host pre-installed packages](#build-host-pre-installed-packages)
16 | - [Install-time package dependencies](#install-time-package-dependencies)
17 | - [Target-specific and multi-package packaging](#target-specific-and-multi-package-packaging)
18 | - [Maintainer script(let)s](#maintainer-scriptlets)
19 | - [Systemd units](#systemd-units)
20 | - [Automated handling of special cases](#automated-handling-of-special-cases)
21 |
22 | ## Introduction
23 |
24 | The Ploutos workflow can package your Rust Cargo application as one or both of the following common Linux O/S package formats:
25 |
26 | | Format | Installers | Example Operating Systems |
27 | |---|---|---|
28 | | [DEB](https://en.wikipedia.org/wiki/Deb_(file_format)) | `apt`, `apt-get` | Debian & derivatives (e.g. Ubuntu) |
29 | | [RPM](https://en.wikipedia.org/wiki/Rpm_(file_format)) | `yum`, `dnf` | RedHat, Fedora, CentOS & derivatives (e.g. Stream, Rocky Linux, Alma Linux) |
30 |
31 | The `pkg` and `pkg-test` jobs of the Ploutos workflow package your Rust Cargo application into one or more of these formats, run some sanity checks on them, verify that they can be installed, upgraded and uninstalled, and can also run tests specific to your application on the installed package.
32 |
33 | The set of files to include in the package is defined in `Cargo.toml`. The binaries included are those built by `cargo build --release --locked` (with optional additional arguments defined by you). Other files can be included by adding them to a set of "assets" defined in `Cargo.toml`. Binaries to be included in the package are either pre-compiled by the [`cross` job](./cross_compiling.md) of the Ploutos workflow, or compiled during the `pkg` job, and are stripped of debug symbols before being included.
34 |
35 | Packaging and, if needed, compilation, take place inside a Docker container. DEB packaging is handled by the [`cargo-deb` tool](https://crates.io/crates/cargo-deb). RPM packaging is handled by the [`cargo-generate-rpm` tool](https://github.com/cat-in-136/cargo-generate-rpm).
36 |
37 | Package testing takes place inside [LXD container instances](https://linuxcontainers.org/lxd/docs/master/explanation/instances/) because, unlike Docker containers, they support systemd and other multi-process scenarios that you may wish to test.
38 |
39 | _**Note:** DEB and RPM packages support many different metadata fields and the native DEB and RPM tooling has many capabilities. We support only the limited subset of capabilities that we have thus far needed. If you need something that is not yet supported please request it by creating an issue at https://github.com/NLnetLabs/ploutos/issues/, PRs are also welcome!_
40 |
41 | ## Example
42 |
43 | _**Note: This example assumes you have a GitHub account, that you are running on Linux, and that Rust, Cargo and git are installed.**_
44 |
45 | For the packaging process to work we need a simple Hello World Cargo project to package, and a bare minimum of package metadata. Let's create that and verify that it compiles and runs:
46 |
47 | <details>
48 | <summary>Click here to show the example</summary>
49 |
50 | ```shell
51 | $ cargo new my_pkg_test
52 | $ cd my_pkg_test
53 | $ cat <<EOF > Cargo.toml
54 | [package]
55 | name = "pkg_hw_test"
56 | version = "0.1.0"
57 | edition = "2021"
58 | authors = ["Example Author"]
59 | EOF
60 | $ cargo run
61 | ...
62 | Hello, world!
63 | ```
64 |
65 | Now let's add a minimal packaging workflow that will package our simple Rust application into a DEB package (because Ubuntu is a DEB based O/S).
66 |
67 | ```shell
68 | $ mkdir -p .github/workflows
69 | $ cat <<EOF > .github/workflows/pkg.yml
70 | on:
71 | push:
72 |
73 | jobs:
74 | package:
75 | uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
76 | with:
77 | package_build_rules: |
78 | pkg: mytest
79 | image: "ubuntu:jammy"
80 |       target: x86_64
81 | EOF
81 | ```
82 |
83 | Assuming that you have just created an empty GitHub repository, let's set up Git to push to it, add & commit the files we have created and push them to GitHub:
84 |
85 | ```shell
86 | $ git remote add origin git@github.com:<org>/<repo>.git
87 | $ git branch -M main
88 | $ git add .github/ src/ Cargo.toml Cargo.lock
89 | $ git commit -m "Initial commit."
90 | $ git push -u origin main
91 | ```
92 |
93 | Now browse to `https://github.com/<org>/<repo>/actions` and you should see the packaging action running. It will take a few minutes the first time as it needs to install supporting tooling and then add it to the GH cache for quicker re-use on subsequent invocations.
94 |
95 | When finished and successful the Summary page for the completed GitHub Actions run should have an Artifacts section at the bottom listing a single artifact, e.g. something like this:
96 |
97 | ```
98 | Artifacts
99 | Produced during runtime
100 |
101 | Name Size
102 | mytest_ubuntu_jammy_x86_64 121 KB
103 | ```
104 |
105 | The artifact is a zip file that you can download and unzip, and inside is the DEB artifact that you can install, e.g.:
106 |
107 | ```shell
108 | $ unzip mytest_ubuntu_jammy_x86_64.zip
109 | Archive: mytest_ubuntu_jammy_x86_64.zip
110 | creating: debian/
111 | inflating: debian/pkg_hw_test_0.1.0-1jammy_amd64.deb
112 |
113 | $ cd debian/
114 |
115 | $ apt show ./pkg_hw_test_0.1.0-1jammy_amd64.deb
116 | Package: pkg_hw_test
117 | Version: 0.1.0-1jammy
118 | Priority: optional
119 | Maintainer: Example Author
120 | Installed-Size: 326 kB
121 | Depends: libc6 (>= 2.34)
122 | Download-Size: 124 kB
123 | APT-Sources: /tmp/pkg_hw_test_0.1.0-1jammy_amd64.deb
124 | Description: [generated from Rust crate pkg_hw_test]
125 |
126 | $ sudo apt install ./pkg_hw_test_0.1.0-1jammy_amd64.deb
127 | Reading package lists... Done
128 | Building dependency tree... Done
129 | Reading state information... Done
130 | Note, selecting 'pkg_hw_test' instead of './pkg_hw_test_0.1.0-1jammy_amd64.deb'
131 | The following NEW packages will be installed:
132 | pkg_hw_test
133 | 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
134 | Need to get 0 B/124 kB of archives.
135 | After this operation, 326 kB of additional disk space will be used.
136 | Get:1 /tmp/pkg_hw_test_0.1.0-1jammy_amd64.deb pkg_hw_test amd64 0.1.0-1jammy [124 kB]
137 | debconf: delaying package configuration, since apt-utils is not installed
138 | Selecting previously unselected package pkg_hw_test.
139 | (Reading database ... 4395 files and directories currently installed.)
140 | Preparing to unpack .../pkg_hw_test_0.1.0-1jammy_amd64.deb ...
141 | Unpacking pkg_hw_test (0.1.0-1jammy) ...
142 | Setting up pkg_hw_test (0.1.0-1jammy) ...
143 |
144 | $ pkg_hw_test
145 | Hello, world!
146 | ```
147 |
148 | </details>
149 |
150 | ## Inputs
151 |
152 | ### Cargo.toml
153 |
154 | Many of the settings that affect DEB and RPM packaging are taken from your `Cargo.toml` file by the [`cargo-deb`](https://github.com/kornelski/cargo-deb) and [`cargo-generate-rpm`](https://github.com/cat-in-136/cargo-generate-rpm) tools respectively. For more information read their respective documentation.
155 |
156 | Both `cargo-deb` and `cargo-generate-rpm` have a `variant` feature. Ploutos will look for a `cargo-deb` "variant" by the name `<os_name>-<os_rel>[-<target>]` (with `<target>` only included in the variant name when cross-compiling, i.e. when `<target>` is not `x86_64`). It also supports the concept of a "minimal" variant, more on that below. `cargo-generate-rpm` variants are not currently supported, except for some limited use internal to Ploutos.
157 |
158 | ### Workflow inputs
159 |
160 | | Input | Type | Required | Description |
161 | |---|---|---|---|
162 | | `package_build_rules` | [matrix](./key_concepts_and_config.md#matrix-rules) | Yes | Defines packages to build and how to build them. See below. |
163 | | `package_test_rules` | [matrix](./key_concepts_and_config.md#matrix-rules) | No | Defines the packages to test and how to test them. Defaults to `package_build_rules`. See below. |
164 | | `package_test_scripts_path` | string | No | The path to find scripts for running tests. Invoked scripts take a single argument: post-install or post-upgrade. |
165 | | `deb_extra_build_packages` | string | No | A space separated set of additional Debian packages to install in the build host when (not cross) compiling. |
166 | | `deb_apt_key_url` | string | No* | The URL of the public key that can be used to verify a signed package if installing using `deb_apt_source`. Defaults to the NLnet Labs package repository key URL. |
167 | | `deb_apt_source` | string | No* | A line or lines to write to an APT `/etc/apt/sources.list.d/` file, or the relative path to such a file to install. Used when `mode` of `package_test_rules` is `upgrade-from-published`. Defaults to the NLnet Labs package installation settings. |
168 | | `rpm_extra_build_packages` | string | No | A space separated set of additional RPM packages to install in the build host when (not cross) compiling. |
169 | | `rpm_scriptlets_path` | string | No | The path to a TOML file defining one or more of the `pre_install_script`, `pre_uninstall_script`, `post_install_script` and/or `post_uninstall_script` `cargo-generate-rpm` settings. Note: Since Ploutos v6, when building an "alternate" package (see multi-packaging below) the file name that Ploutos looks for additionally incorporates the `pkg` value. |
170 | | `rpm_yum_key_url` | string | No* | The URL of the public key that can be used to verify a signed package if installing using `rpm_yum_repo`. Defaults to the NLnet Labs package repository key URL. |
171 | | `rpm_yum_source` | string | No* | A line or lines to write to a YUM `/etc/yum.repos.d/` file, or the relative path to such a file to install. Used when `mode` of `package_test_rules` is `upgrade-from-published`. Defaults to the NLnet Labs package installation settings. |
172 |
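173 | For example, a sketch of a call using several of these inputs (all paths and package names here are hypothetical):
174 |
175 | ```yaml
176 | jobs:
177 |   package:
178 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
179 |     with:
180 |       package_build_rules: pkg/rules/packages-to-build.yml
181 |       package_test_scripts_path: pkg/test-scripts/test-mytest.sh
182 |       deb_extra_build_packages: libssl-dev
183 |       rpm_extra_build_packages: openssl-devel
184 | ```
185 |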
173 | ### Package build rules
174 |
175 | By supplying `package_build_rules` you instruct Ploutos to package your Rust Cargo application. The process looks like this:
176 |
177 | ```mermaid
178 | flowchart LR
179 | download --> package
180 | compile --> package
181 | package --> post-process --> verify --> upload
182 | ```
183 |
184 | A rules [matrix](./key_concepts_and_config.md#matrix-rules) with the following keys must be provided to guide the build process:
185 |
186 | | Matrix Key | Required | Description |
187 | |---|---|---|
188 | | `pkg` | No | The package to build. Defaults to the value of the `name` key in the `[package]` table in `Cargo.toml`. Used in various places. See below. |
189 | | `image` | Yes | Specifies the Docker image used by GitHub Actions to run the job in which your application will be built (when not cross-compiled) and packaged. The package type to build is implied by `<os_name>`, e.g. DEBs for Ubuntu and Debian, RPMs for CentOS. Has the form `<os_name>:<os_rel>` (e.g. `ubuntu:jammy`, `debian:buster`, `centos:7`, etc). Also see `os` below. |
190 | | `target` | Yes | Either `x86_64` or a cross-compilation target. If `x86_64` the Rust application will be compiled using `cargo-deb` (for DEB) or `cargo build` (for RPM) and stripped. Otherwise it will be used to determine the cross-compiled binary GitHub Actions artifact to download. |
191 | | `os` | No | Overrides the value of `image` when determining `os_name` and `os_rel`. |
192 | | `extra_build_args` | No | A space separated set of additional command line arguments to pass to `cargo build` (possibly via `cargo-deb`). |
193 | | `extra_cargo_deb_args` | No | A space separated set of additional command line arguments to pass to `cargo-deb` |
194 | | `deb_extra_lintian_args` | No | A space separated set of additional command line arguments to pass to the Debian Lintian package linting tool. Useful to suppress errors you wish to ignore or to suppress warnings when `strict_mode` is set to `true`. |
195 | | `rpm_systemd_service_unit_file` | No | Relative path to the systemd file, or files (if it ends with `*`), to include in an RPM package. See below for more info. |
196 | | `rpm_rpmlint_check_filters` | No | A space separated set of additional rpmlint checks to filter out. See https://fedoraproject.org/wiki/Common_Rpmlint_issues for some example check names, e.g. `no-documentation`. |
197 |
198 | The following keys are special and only relate to the package testing phase when no `package_test_rules` value has been supplied. These keys will be removed from `package_build_rules` before the package building phase, but will be preserved in `package_test_rules` for the package testing phase.
199 |
200 | | Matrix Key | Required | Description |
201 | |---|---|---|
202 | | `test-exclude` | No | Sets the GitHub Actions matrix `exclude` key in the `package_test_rules` matrix. |
203 | | `test-image` | No | Sets the `test-image` key in the `package_test_rules` matrix. See below for more information. |
204 | | `test-mode` | No | Sets the `mode` key in the `package_test_rules` matrix. See below for more information. |
205 |
206 | **Note:** When `package_test_rules` is not supplied the `package_build_rules` matrix is also used as the `package_test_rules` matrix, since normally you want to test every package that you build. When using `package_build_rules` this way you can also supply `package_test_rules` matrix keys in the `package_build_rules` input. These will be ignored by the package building workflow job.
207 |
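208 | For example, a sketch of a `package_build_rules` matrix that builds one (hypothetical) package for two operating systems:
209 |
210 | ```yaml
211 | jobs:
212 |   package:
213 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
214 |     with:
215 |       package_build_rules: |
216 |         pkg: ["mytest"]
217 |         image: ["ubuntu:jammy", "debian:bullseye"]
218 |         target: ["x86_64"]
219 | ```
220 |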
208 | #### Permitted `<image>` values
209 |
210 | The `<image>` **MUST** be one of the following:
211 |
212 | - `centos:<os_rel>` where `<os_rel>` is one of: `7` or `8`
213 | - `debian:<os_rel>` where `<os_rel>` is one of: `stretch`, `buster` or `bullseye`
214 | - `almalinux:<os_rel>` where `<os_rel>` is one of: `8` or `9`
215 | - `ubuntu:<os_rel>` where `<os_rel>` is one of: `xenial`, `bionic`, `focal` or `jammy`
216 |
217 | Note the absence of RedHat, Fedora, Rocky Linux, etc. which are all RPM compatible, and similarly no mention of other Debian derivatives such as Kali Linux or Raspbian.
218 |
219 | You can in principle build your package inside an alternate DEB or RPM compatible Docker image by setting `<image>` to a Docker image that is natively DEB or RPM compatible, e.g. `redhat/ubi8`, `fedora:37` or `kalilinux/kali-rolling`, but then you **MUST** set `<os>` to one of the supported `<image>` values in order to guide the packaging process to produce a DEB or RPM and to take into account known issues with certain releases (especially older ones).
220 |
221 | It may not matter which O/S release the RPM or DEB package is built inside, except for example if your build process requires a dependency package that is only available in the package repositories of a particular O/S release, or if one O/S is known to bundle much newer or older versions of a dependency and that could impact your application, or if there is some other variation which matters in your case.
222 |
223 | ### Package test rules
224 |
225 | `package_test_rules` instructs Ploutos to test your packages (beyond the basic verification done post-build). If not specified, and not disabled, `package_build_rules` will be used as the default input for `package_test_rules`.
226 |
227 | Only packages that were built using `package_build_rules` can be tested. By default the packages built according to `package_build_rules` will be tested, minus any packages for which testing is not supported (e.g. missing LXC image or unsupported architecture).
228 |
229 | Testing packages is optional. To disable testing of packages completely set `package_test_rules` to `none`.
230 |
231 | To test package upgrade you must also supply the `deb_apt_xxx` and/or `rpm_yum_xxx` workflow inputs as needed by your packages. To test upgrade of all built packages without supplying `package_test_rules`, add a `test-mode` key to the `package_build_rules` matrix with value `upgrade-from-published`.
232 |
233 | If you wish to test only a subset of the built packages and/or wish to test package upgrade with only a subset of the packages, then you will either need to fully specify the `package_test_rules` input matrix, or add a `test-exclude` key to the `package_build_rules` matrix with values in the form supported by the special GitHub Actions `exclude` key. E.g. you can exclude `test-mode` value `upgrade-from-published` only for a new O/S version which is being packaged for the first time (and thus has no prior releases to do upgrade testing against) by specifying a subkey of `test-exclude` like `image: almalinux:9`, as sketched below.
234 |
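235 | A sketch of that exclusion (the package name is hypothetical, and the `mode` subkey name assumes the exclude entries match keys of the resulting test rules matrix):
236 |
237 | ```yaml
238 | jobs:
239 |   package:
240 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
241 |     with:
242 |       package_build_rules: |
243 |         pkg: ["mytest"]
244 |         image: ["almalinux:8", "almalinux:9"]
245 |         target: ["x86_64"]
246 |         test-mode: ["upgrade-from-published"]
247 |         test-exclude:
248 |           - image: "almalinux:9"
249 |             mode: "upgrade-from-published"
250 | ```
251 |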
235 | The testing process looks like this:
236 |
237 | ```mermaid
238 | flowchart LR
239 | prep-container[Launch & setup\nLXC container]
240 | install-prev[Install last\npublished pkg]
241 | install-new[Install just\nbuilt pkg]
242 | test1[run\npost-install\ntests]
243 | test2[run\npost-upgrade\ntests]
244 |
245 | prep-container --> next{mode}
246 | next -- upgrade\nfrom\npublished --> install-prev
247 | next -- fresh\ninstall --> install-new
248 | install-prev --> test1
249 | install-new --> test1
250 | test1 --> next2{mode}
251 | next2 -- upgrade\nfrom\npublished --> upgrade --> test2 --> uninstall
252 | next2 -- fresh\ninstall --> uninstall
253 | uninstall --> reinstall
254 | ```
255 |
256 | A rules [matrix](./key_concepts_and_config.md#matrix-rules) with the following keys must be provided to guide the testing process:
257 |
258 | | Matrix Key | Required | Description |
259 | |---|---|---|
260 | | `pkg` | Yes | The package to test. Must match the value used with `package_build_rules`.|
261 | | `image` | Yes | Specifies the LXC `images:<os_name>/<os_rel>/cloud` image used for installing and testing the built package. See: https://images.linuxcontainers.org/. |
262 | | `target` | Yes | The target the package was built for. Must match the value used with `package_build_rules`. |
263 | | `mode` | Yes | One of: `fresh-install` or `upgrade-from-published` _(assumes a previous version is available in the default package repositories)_. |
264 | | `ignore_upgrade_failure` | No | If package upgrade fails should this be ignored? (default: false). Ignoring an upgrade can be necessary when a prior release has a bug in the scripts that are run when the package is upgraded, otherwise the `pkg-test` job will fail. |
265 | | `test-image` | No | An LXC distribution and release pair separated by a colon ':' character. See http://images.linuxcontainers.org/ for possible values. Note that only 'cloud' variants are supported. The LXC container launched will run this image instead of `image`, thereby allowing you to test the package built for the `image` O/S in a different but assumed to be compatible O/S, e.g. test Rocky Linux 9 packages in Alma Linux 9 or CentOS 9 Stream. |
266 |
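267 | For example, a sketch of a `package_test_rules` matrix that fresh-installs and upgrade-tests one (hypothetical) package:
268 |
269 | ```yaml
270 | jobs:
271 |   package:
272 |     uses: NLnetLabs/ploutos/.github/workflows/pkg-rust.yml@v5
273 |     with:
274 |       package_test_rules: |
275 |         pkg: ["mytest"]
276 |         image: ["ubuntu:jammy"]
277 |         target: ["x86_64"]
278 |         mode: ["fresh-install", "upgrade-from-published"]
279 | ```
280 |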
267 | ### Package test script inputs
268 |
269 | In addition to verifying the built package (during the `pkg` job) and installing it (during the `pkg-test` job) to test that it works, you may also supply an application specific test script to run during the `pkg-test` job post-install and post-upgrade.
270 |
271 | The script should be provided as the relative path to an executable file that is part of your application git repository, i.e. available when the repository branch being tested is checked out.
272 |
273 | The script can optionally check different things after initial installation vs after upgrade from a previous installation, by checking the value of the first command line argument provided to the script. This will either be `post-install` or `post-upgrade`.
274 |
275 | If the script exits with a non-zero exit code then the `pkg-test` job will fail.
276 |
277 | ### Upgrade testing
278 |
279 | To test package upgrade there must exist an earlier version of the package to upgrade from. Upgrade testing is only done if a matrix item in `package_test_rules` sets `mode` to `upgrade-from-published` as opposed to `fresh-install`.
280 |
281 | To demonstrate and test the actual end user upgrade experience, upgrade testing installs the most recently published version of the package from its public repository, assuming that that package is an earlier version than the one being built _(**note:** if not you will get an error about the upgrade not being an upgrade)_ and then upgrades it using the newly built package.
282 |
283 | For backward compatibility the testing process assumes by default that the package is located in the NLnet Labs public package repository. If it is located in some other repository you can override the default locations using the `deb_apt_key_url` and `deb_apt_source` inputs, and their RPM counterparts `rpm_yum_key_url` and `rpm_yum_repo`.
284 |
285 | ## Outputs
286 |
287 | A [GitHub Actions artifact](https://docs.github.com/en/actions/using-workflows/storing-workflow-data-as-artifacts) will be attached to the workflow run with the name `<pkg>_<os_name>_<os_rel>_<target>`. The artifact will be a `zip` file, inside which will either be `generate-rpm/*.rpm` or `debian/*.deb`.
288 |
289 | ## How it works
290 |
291 | The `pkg` and `pkg-test` workflow jobs will do a Git checkout of the repository that hosts the caller workflow.
292 |
293 | The `cargo-deb` and/or `cargo-generate-rpm` tools will be invoked to package (and if not cross-compiled, will also compile) your Rust application.
294 |
295 | Post package creation, the built package will be checked for issues (using `Lintian` for DEBs, and `rpmlint` for RPMs).
296 |
297 | If testing is enabled the packages will be installed inside an LXC container running the O/S that the package was intended for, using the appropriate package management tool (e.g. `apt` or `yum`). The package will also be uninstalled and reinstalled to further exercise any scripts included in the packages to perform special actions on install/upgrade/uninstall.
298 |
299 | To complete the testing regime, if enabled, a separate test run installs the most recently published (assumed to be older) version of the package and then upgrades it using the newly built package.
300 |
301 | ### Build host pre-installed packages
302 |
303 | Rust is installed from [rustup](https://rustup.rs/) using the [minimal profile](https://rust-lang.github.io/rustup/concepts/profiles.html).
304 |
305 | Some limited base development tools are installed prior to Rust compilation to support cases where a native library must be built for a dependency, plus some tools used by Ploutos itself:
306 |
307 | | `os_name` | Packages installed |
308 | |---|---|
309 | | `debian` or `ubuntu` | `binutils`, `gcc`, `dpkg-dev`, `jq`, `lintian` & `pkg-config` |
310 | | `centos` | `findutils`, `gcc`, `jq` & `rpmlint` |
311 |
312 | If needed you can cause more packages to be installed in the build host using the `deb_extra_build_packages` and/or `rpm_extra_build_packages` workflow inputs.
313 |
314 | ### Install-time package dependencies
315 |
316 | Both DEB and RPM packages support the concept of other packages that should be installed in order to use our package. Both `cargo-deb` (via `$auto`) and `cargo-generate-rpm` (via `auto-req`) are able to determine needed shared libraries and the packages that provide them, and automagically add such dependencies to the created package. For cross-compiled binaries and/or for additional tools known to be needed (either by your application and/or its pre/post install scripts) you must specify such dependencies manually in `Cargo.toml`.
317 |
318 | ### Target-specific and multi-package packaging
319 |
320 | Ploutos has some special behaviours regarding selection of the right `Cargo.toml` TOML table settings to use with `cargo-deb` and `cargo-generate-rpm`.
321 |
322 | While both `cargo-deb` and `cargo-generate-rpm` take their core configuration from a combination of `Cargo.toml` `[package.metadata.XXX]` settings and command line arguments, and both support the notion of "variants" as a way to override and/or extend the settings defined in the `[package.metadata.XXX]` TOML table, "variant" support in `cargo-generate-rpm` is relatively new and not yet fully adopted by Ploutos, and neither tool supports defining packaging settings for more than one application in a single `Cargo.toml` file.
323 |
324 | For DEB packaging, Ploutos will look for and instruct `cargo-deb` to use a variant named `<os_name>-<os_rel>` (or `<os_name>-<os_rel>-<target>` when cross-compiling) if it exists, assuming that the `minimal` or `minimal-cross` variants don't also exist and have not been chosen (see 'Support for "old" O/S releases' below).
325 |
326 | For both DEB and RPM packaging, Ploutos has some limited support for defining packaging settings for more than one package in a single `Cargo.toml` file. Prior to invoking `cargo-deb` or `cargo-generate-rpm`, Ploutos will look in `Cargo.toml` for an "alternate" table whose name matches that of the `pkg` value from the build rules matrix for the current package being built. If a `[package.metadata.deb_alt_base_<pkg>]` (for DEBs), or `[package.metadata.generate-rpm-alt-base-<pkg>]` (for RPMs), TOML table exists in `Cargo.toml`, Ploutos will replace the proper `[package.metadata.deb]` or `[package.metadata.generate-rpm]` TOML table with the "alternate" table that was found. Additionally, for RPMs only, this also affects the lookup of RPM scriptlets: when a matching "alternate" table is found, the scriptlets file is instead looked for under a name that incorporates the `pkg` value (see the `rpm_scriptlets_path` workflow input above).
327 |
328 | ## Maintainer script(let)s
329 |
330 | For some packages you may wish to take more action on install, upgrade or uninstall of your package than just adding files to the target system. For example you may wish to start/stop services, create user accounts, generate config files, etc.
331 |
332 | Debian packages implement such capabilities via so-called [maintainer scripts](https://www.debian.org/doc/debian-policy/ch-maintainerscripts), while RPM packages do so via so-called [scriptlets](https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/).
333 |
334 | `cargo-deb` supports a `maintainer-scripts` key in the `[package.metadata.deb]` TOML table of `Cargo.toml` which specifies the path to a directory containing `preinst`, `postinst`, `prerm` and/or `postrm` scripts which will be included in the DEB package in the right way such that tools such as `apt` will execute them as appropriate on package install, upgrade and/or uninstall. The exact files that are selected for inclusion is affected by the `cargo-deb` "variant" that Ploutos enables.
335 |
336 | `cargo-generate-rpm` supports `pre_install_script`, `pre_uninstall_script`, `post_install_script` and `post_uninstall_script` keys in the `[package.metadata.generate-rpm]` TOML table of `Cargo.toml`, which expect scripts defined inline as TOML strings.
337 |
338 | Both DEB and RPM support so-called "macros" within the maintainer scripts. `cargo-deb` has built-in support for a [subset](https://github.com/kornelski/cargo-deb/blob/main/autoscripts/) of the RPM macros, mostly for working with systemd units, which can be incorporated automatically by adding a `#DEBHELPER#` line to your maintainer script file. This will take care of activating any systemd units that were detected during package creation.
339 |
340 | `cargo-generate-rpm` has no equivalent, but Ploutos is able to [emulate](https://github.com/NLnetLabs/ploutos/blob/main/fragments/macros.systemd.sh) systemd unit related macros by replacing a `#RPM_SYSTEMD_MACROS#` line in your maintainer scripts. You are however responsible for invoking the macros yourself from your scriptlets; just including `#RPM_SYSTEMD_MACROS#` is not enough. For an example see [this file in the Ploutos self-test suite](https://github.com/NLnetLabs/ploutos-testing/blob/main/pkg/rpm/scriptlets.toml).
341 |
342 | For RPMs, do not set the `xxx_script` settings in `[package.metadata.generate-rpm]` to a command that executes a script at the location where the package installs it (as defined in the `assets` array of the same TOML table). While this may work in some cases, it will not in others; for example it will NOT work for post-uninstall scripts, as the script file will be deleted before it can be executed. Instead, if you do not wish to include entire shell scripts in your `Cargo.toml` file, Ploutos supports defining the `xxx_script` setting values in a separate TOML file via the `rpm_scriptlets_path` workflow input.
343 |
344 | Lastly, if your scripts execute other programs, e.g. `adduser`, those programs have to be present on the target system, and so the package that provides them should be added as a runtime install dependency (via the `depends` setting in the `[package.metadata.deb]` TOML table in `Cargo.toml` for DEBs, or the `requires` setting in the `[package.metadata.generate-rpm]` table for RPMs). The automated dependency discovery features of the `cargo-deb` and `cargo-generate-rpm` tools are not capable of detecting dependencies of maintainer script(let)s.
345 |
346 | Further reading:
347 | - [Debian policy manual chapter 6: Package maintainer scripts and installation procedure](https://www.debian.org/doc/debian-policy/ch-maintainerscripts)
348 | - [Fedora Packaging Guidelines](https://docs.fedoraproject.org/en-US/packaging-guidelines/Scriptlets/)
349 | - [Maximum RPM: Install/Erase-time Scripts](http://ftp.rpm.org/max-rpm/s1-rpm-inside-scripts.html#S2-RPM-INSIDE-ERASE-TIME-SCRIPTS)
350 |
351 | ### Systemd units
352 |
353 | To have your application start and stop automatically on operating systems that support systemd, you can install a systemd unit to handle this for you.
354 |
355 | `cargo-deb` has native support for detecting systemd unit files and installing them into the correct place and can supply the commands a maintainer script needs to use to (de)activate the unit on (un)install. It is even able to select different unit files based on the `cargo-deb` "variant" that Ploutos enables (e.g. see section _"Support for old O/S releases"_ in [Automated handling of special cases](#automated-handling-of-special-cases) below).
356 |
357 | `cargo-generate-rpm` has no such native support, but if you include a systemd unit file as an "asset" in the `[package.metadata.generate-rpm]` section of `Cargo.toml` you can then (de)activate the unit as needed by using the `#RPM_SYSTEMD_MACROS#` mentioned above in the section on [Maintainer script(let)s](#maintainer-scriptlets).
358 |
359 | ### Automated handling of special cases
360 |
361 | Ploutos is aware of certain cases that must be handled specially. Note that special cases are usually only handled for long-term support (LTS) operating system versions; behaviour may not be correct on non-LTS versions.
362 |
363 | Examples of special cases handled by Ploutos include:
364 |
365 | - **CentOS 8 EOL:** Per https://www.centos.org/centos-linux-eol/ _"content will be removed from our mirrors, and moved to vault.centos.org"_, thus, when building the package, if `image` is `centos:8` the `yum` configuration is adjusted to use the vault so that `yum` commands continue to work. When testing the package if the image is `centos:8` it will be replaced by `almalinux:8` because the CentOS 8 image no longer exists in the LXC image repository.
366 |
367 | - **LZMA and older O/S releases:** DEB and RPM packages created for Ubuntu Xenial and CentOS 7 respectively must not be compressed with LZMA, otherwise the packaging tools fail with errors such as _"malformed-deb-archive newer compressed control.tar.xz"_ (on Ubuntu, see [cargo-deb issue #12](https://github.com/kornelski/cargo-deb/issues/12)) and _"cpio: Bad magic"_ (on CentOS, see [cargo-generate-rpm issue #30](https://github.com/cat-in-136/cargo-generate-rpm/issues/30)).
368 |
369 | - **`unattended-upgrade` compatible DEB archives**: Ploutos works around [cargo-deb issue #47](https://github.com/kornelski/cargo-deb/issues/47) by unpacking and repacking the created DEB archive to ensure that data paths are `./` prefixed.
370 |
371 | - **DEB changelog:** Debian archives are required to have a [`changelog`](https://www.debian.org/doc/manuals/maint-guide/dreq.en.html#changelog) in a very specific format. We generate a minimal file on the fly of the form:
372 |
373 | ```
374 | ${MATRIX_PKG} (${PKG_APP_VER}) unstable; urgency=medium
375 |   * See: https://github.com/${{ env.GITHUB_REPOSITORY }}/releases/tag/v${APP_NEW_VER}
376 | -- maintainer ${MAINTAINER} ${RFC5322_TS}
377 | ```
378 |
379 | Where:
380 | - `${APP_NEW_VER}` is the value of the `version` key in `Cargo.toml`.
381 |   - `${MAINTAINER}` is the value of the `package.metadata.deb.maintainer` key in `Cargo.toml` (for the selected variant) if set, else the first value set for the `authors` key.
382 |   - `${MATRIX_PKG}` is the value of the `pkg` matrix key for the current permutation of the package build rules matrix being built.
383 |   - `${PKG_APP_VER}` is the version of the application being built based on `version` in `Cargo.toml` but post-processed to handle things like [pre-releases](./key_concepts_and_config.md#application-versions) or [next development versions](./key_concepts_and_config.md#next-dev-version).
384 | - `${RFC5322_TS}` is set to the time now while building.
385 |
386 | - **Support for "old" O/S releases:** For some "old" O/S releases it is known that the version of systemd that they support understands far fewer systemd unit file keys than more modern versions. In such cases (Ubuntu Xenial & Bionic, and Debian Stretch) the `cargo-deb` "variant" to use will be set to `minimal` if there exists a `[package.metadata.deb.variants.minimal]` TOML table in `Cargo.toml`. When cross-compiling the `minimal-cross` variant is looked for instead.
387 |
388 | - **Modified configuration files:** When upgrading a DEB package the installer knows which files are so-called [`conffiles`](https://www.debian.org/doc/manuals/maint-guide/dother.en.html#conffiles). In an interactive shell session, when a package is upgraded and it is detected that the user has made changes to a `conffile` supplied by the package being upgraded, the user is asked whether to keep or discard their modifications. In an automated environment like the package testing phase of Ploutos nobody is present to answer this question, but we want to test that package upgrade succeeds also when preserving user changes instead of taking the new configuration files provided by the package. Therefore Ploutos sets the `confold` option when upgrading the package so that the user's "old" configuration files are kept.
389 |
--------------------------------------------------------------------------------