├── .gitignore ├── README.md ├── apps ├── build_images ├── hello-microshift │ ├── README.md │ ├── deployment.yaml │ ├── hello-microshift.spec │ ├── images │ │ ├── Containerfile │ │ └── html │ │ │ ├── index.html │ │ │ └── microshift_logo.png │ ├── kustomization.yaml │ ├── namespace.yaml │ ├── route.yaml │ └── service.yaml ├── make_rpms └── push_images ├── demos ├── e2e-demo │ ├── README.md │ ├── blueprint_v0.0.1.toml │ ├── build.sh │ ├── kickstart.ks.tmpl │ ├── register_cluster.sh │ ├── register_device.sh │ └── source_transmission.toml ├── edge-console-demo │ ├── README.md │ ├── demo-kickstart.ks.template │ ├── images │ │ ├── 01-edge-management.png │ │ ├── 02-add-repository.png │ │ ├── 03-create-new-image.png │ │ ├── 04-create-new-image.png │ │ ├── 05-select-base.png │ │ ├── 06-add-ssh-keys.png │ │ ├── 07-add-our-custom-repo.png │ │ ├── 08-add-our-microshift-package.png │ │ ├── 09-review.png │ │ ├── 10-image-ready.png │ │ ├── 11-download-iso.png │ │ ├── 12-inject-kickstart.png │ │ ├── 13-system-gets-listed.png │ │ ├── 14-custom-manifests-repo.png │ │ ├── 15-create-new-version.png │ │ ├── 16-add-new-custom-repo-to-image.png │ │ ├── 17-add-new-manifests-package.png │ │ ├── 18-new-image-version.png │ │ ├── 19-update-all.png │ │ ├── 19-update-system.png │ │ ├── 20-once-rebooted.png │ │ ├── 20-service-screenshot.png │ │ └── 21-the-application-runs.png │ ├── manifests │ │ ├── kustomization.yaml │ │ ├── microweb-mdns.yaml │ │ ├── microweb-ns.yaml │ │ ├── microweb-service.yaml │ │ └── microweb.yaml │ ├── microshift-hello-world-manifests.spec │ ├── rhc_credentials.env.example │ └── run.sh ├── hello-microshift-demo │ ├── README.md │ ├── blueprint_v0.0.1.toml │ └── kickstart.ks.tmpl ├── ibaas-demo │ ├── README.md │ ├── cleanup-ibass-demo.sh │ ├── data-edge-installer.json.template │ ├── data-ostree.json.template │ ├── kickstart.ks.template │ └── run.sh └── ostree-demo │ ├── README.md │ ├── blueprint_v0.0.1.toml │ ├── blueprint_v0.0.2.toml │ ├── blueprint_v0.0.3.toml │ ├── 
kickstart.ks.tmpl │ └── source_transmission.toml └── scripts ├── build ├── build-latest-rpms ├── cleanup ├── configure-builder ├── configure-virthost ├── mirror-repos ├── prepare-aws-bucket ├── provision-device ├── reset-device └── shared └── installer.toml /.gitignore: -------------------------------------------------------------------------------- 1 | builds/* 2 | builds 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # MicroShift Demos 2 | 3 | This repo contains demos of various [MicroShift](https://github.com/openshift/microshift) features. 4 | 5 | * [hello-microshift-demo](https://github.com/redhat-et/microshift-demos/tree/main/demos/hello-microshift-demo): Demonstrates building a minimal RHEL for Edge image with MicroShift and deploying a "Hello, MicroShift!" app on it. 6 | * [ostree-demo](https://github.com/redhat-et/microshift-demos/tree/main/demos/ostree-demo): Become familiar with `rpm-ostree` basics (image building, updates & rollbacks, etc.) and "upgrading into MicroShift". 7 | * [e2e-demo](https://github.com/redhat-et/microshift-demos/tree/main/demos/e2e-demo): (outdated!) Demonstrates the end-to-end process from device provisioning to management via GitOps and ACM. 8 | * [ibaas-demo](https://github.com/redhat-et/microshift-demos/tree/main/demos/ibaas-demo): Build a RHEL for Edge ISO containing MicroShift and its dependencies in a completely automated manner, using Red Hat's hosted Image Builder service on `console.redhat.com`. 9 | 10 | ## Building demo images on a RHEL machine 11 | 12 | Unless otherwise noted, the demos require you to build a couple of artefacts such as the `rpm-ostree` container and ISO installer images. The build process is described below. 
13 | 14 | Start from a RHEL 8.7 or higher machine (virtual or bare metal) registered via `subscription-manager` and attached to a subscription that includes the OpenShift Container Platform repos. You can add a trial evaluation for OCP at [Red Hat Customer Portal - Product Downloads](https://access.redhat.com/downloads). 15 | 16 | Install git if not yet installed and clone the demo repo: 17 | 18 | git clone https://github.com/redhat-et/microshift-demos.git 19 | cd microshift-demos 20 | 21 | Set up Image Builder and other build dependencies: 22 | 23 | ./scripts/configure-builder 24 | 25 | Mirror MicroShift and its dependencies into a local repo to accelerate the image build process: 26 | 27 | ./scripts/mirror-repos 28 | 29 | Download the OpenShift pull secret from https://console.redhat.com/openshift/downloads#tool-pull-secret and copy it to `$HOME/.pull-secret.json`. 30 | 31 | Build the artefacts for a given demo by running 32 | 33 | ./scripts/build $DEMONAME 34 | 35 | where `$DEMONAME` is one of the demos in the list above, e.g. `ostree-demo`. 36 | 37 | Once the build completes, you should find the demo's artefacts in `builds/$DEMONAME`; for the `ostree-demo`, these will be 38 | 39 | id_demo 40 | id_demo.pub 41 | ostree-demo-0.0.1-container.tar 42 | ostree-demo-0.0.1-metadata.tar 43 | ostree-demo-0.0.1-logs.tar 44 | ostree-demo-0.0.2-container.tar 45 | ... 46 | ostree-demo-installer.x86_64.iso 47 | password 48 | 49 | After deploying a machine with the installer, you should be able to log into it using the user `microshift` and the password in `builds/$DEMONAME/password`, or via `ssh` with 50 | 51 | ssh -o "IdentitiesOnly=yes" -i builds/$DEMONAME/id_demo microshift@$MACHINE_IP 52 | 53 | ## Building demo images with pre-release versions of MicroShift 54 | 55 | By default, the `mirror-repos` script will mirror the latest MicroShift version from the official 4.12 release repos. 
56 | 57 | To mirror the developer preview repos instead (may not always work), use: 58 | 59 | MICROSHIFT_DEV_PREVIEW=true MICROSHIFT_VERSION=4.13 MICROSHIFT_DEPS_VERSION=4.12 ./scripts/mirror-repos 60 | 61 | To test the latest MicroShift version built from source instead (may not always work), use: 62 | 63 | ./scripts/build-latest-rpms 64 | MICROSHIFT_DEV_PREVIEW=true MICROSHIFT_VERSION=4.13 MICROSHIFT_DEPS_VERSION=4.12 ./scripts/mirror-repos 65 | -------------------------------------------------------------------------------- /apps/build_images: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | APPS=$(cd "${REPOROOT}/apps"; find * -maxdepth 0 -type d | xargs) 7 | ARCHS="linux/amd64,linux/arm64" 8 | 9 | title() { 10 | echo -e "\E[34m# $1\E[00m"; 11 | } 12 | 13 | build_image() { 14 | local app="$1" 15 | 16 | pushd "${REPOROOT}/apps/${app}/images" &>/dev/null 17 | title "Building ${app}" 18 | manifestName="quay.io/microshift/${app}:latest" 19 | buildah manifest rm "${manifestName}" || true 20 | buildah manifest create "${manifestName}" 21 | buildah build --platform=${ARCHS} --manifest "${manifestName}" 22 | popd &>/dev/null 23 | } 24 | 25 | for app in ${APPS}; do 26 | build_image ${app} 27 | done -------------------------------------------------------------------------------- /apps/hello-microshift/README.md: -------------------------------------------------------------------------------- 1 | # Hello, MicroShift! Application 2 | 3 | This application serves a static web page with the MicroShift logo and the string "Hello, MicroShift!". Deploy it with 4 | 5 | oc apply -k https://github.com/redhat-et/microshift-demos/apps/hello-microshift?ref=main 6 | 7 | The service is exposed via the route `hello-microshift.local`. That route needs to be resolvable to the primary IP address of the machine, e.g. 
by adding an entry to `/etc/hosts` like 8 | 9 | 10.0.2.15 hello-microshift.local 10 | -------------------------------------------------------------------------------- /apps/hello-microshift/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: hello-microshift 5 | spec: 6 | replicas: 2 7 | selector: 8 | matchLabels: 9 | app.kubernetes.io/name: hello-microshift 10 | template: 11 | metadata: 12 | labels: 13 | app.kubernetes.io/name: hello-microshift 14 | spec: 15 | containers: 16 | - name: hello-microshift 17 | image: quay.io/microshift/hello-microshift:latest 18 | imagePullPolicy: IfNotPresent 19 | ports: 20 | - containerPort: 8080 21 | protocol: TCP 22 | securityContext: 23 | allowPrivilegeEscalation: false 24 | capabilities: 25 | drop: ["ALL"] 26 | securityContext: 27 | runAsNonRoot: true 28 | seccompProfile: 29 | type: RuntimeDefault 30 | -------------------------------------------------------------------------------- /apps/hello-microshift/hello-microshift.spec: -------------------------------------------------------------------------------- 1 | %global app hello-microshift 2 | 3 | Name: microshift-demos-%{app} 4 | Version: 0.0.1 5 | Release: 1%{?dist} 6 | Summary: Manifests of the "Hello, MicroShift!" app 7 | 8 | License: ASL 2.0 9 | URL: https://github.com/redhat-et/microshift-demos/tree/main/apps/%{app} 10 | Source0: https://github.com/redhat-et/microshift-demos/archive/refs/tags/%{app}-v%{version}.tar.gz 11 | BuildArch: noarch 12 | 13 | %description 14 | Installs the manifests of the "Hello, MicroShift!" app into MicroShift's 15 | auto-manifest folder. 
16 | 17 | 18 | %global tardir microshift-demos-%{app}-v%{version} 19 | %global source %{_builddir}/%{tardir}/apps/%{app} 20 | %global manifestdir /etc/microshift/manifests 21 | 22 | %prep 23 | %setup -q -n %{tardir} 24 | 25 | %build 26 | 27 | %install 28 | mkdir -p %{buildroot}/%{manifestdir} 29 | install -m 0644 %{source}/deployment.yaml %{buildroot}/%{manifestdir}/deployment.yaml 30 | install -m 0644 %{source}/kustomization.yaml %{buildroot}/%{manifestdir}/kustomization.yaml 31 | install -m 0644 %{source}/namespace.yaml %{buildroot}/%{manifestdir}/namespace.yaml 32 | install -m 0644 %{source}/route.yaml %{buildroot}/%{manifestdir}/route.yaml 33 | install -m 0644 %{source}/service.yaml %{buildroot}/%{manifestdir}/service.yaml 34 | 35 | %files 36 | %dir %{manifestdir} 37 | %config %{manifestdir}/deployment.yaml 38 | %config %{manifestdir}/kustomization.yaml 39 | %config %{manifestdir}/namespace.yaml 40 | %config %{manifestdir}/route.yaml 41 | %config %{manifestdir}/service.yaml 42 | 43 | %changelog 44 | * Fri Dec 2 2022 Frank A. Zdarsky - 0.0.1-1 45 | - First package 46 | -------------------------------------------------------------------------------- /apps/hello-microshift/images/Containerfile: -------------------------------------------------------------------------------- 1 | FROM nginxinc/nginx-unprivileged:stable-alpine 2 | COPY html/ /usr/share/nginx/html/ 3 | -------------------------------------------------------------------------------- /apps/hello-microshift/images/html/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 21 | 22 | 23 |
24 | MicroShift logo 25 | Hello, MicroShift! 26 |
27 | 28 | 29 | -------------------------------------------------------------------------------- /apps/hello-microshift/images/html/microshift_logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/apps/hello-microshift/images/html/microshift_logo.png -------------------------------------------------------------------------------- /apps/hello-microshift/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | metadata: 4 | name: arbitrary 5 | 6 | commonLabels: 7 | app.kubernetes.io/name: hello-microshift 8 | 9 | namespace: demo 10 | 11 | resources: 12 | - namespace.yaml 13 | - deployment.yaml 14 | - service.yaml 15 | - route.yaml 16 | -------------------------------------------------------------------------------- /apps/hello-microshift/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: demo -------------------------------------------------------------------------------- /apps/hello-microshift/route.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: route.openshift.io/v1 2 | kind: Route 3 | metadata: 4 | name: hello-microshift 5 | spec: 6 | host: hello-microshift.local 7 | to: 8 | kind: Service 9 | name: hello-microshift 10 | port: 11 | targetPort: 8080 12 | wildcardPolicy: None -------------------------------------------------------------------------------- /apps/hello-microshift/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: hello-microshift 5 | spec: 6 | ports: 7 | - port: 8080 8 | protocol: TCP 9 | targetPort: 8080 10 | selector: 11 | 
app.kubernetes.io/name: hello-microshift 12 | type: ClusterIP 13 | 14 | -------------------------------------------------------------------------------- /apps/make_rpms: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | APPS=$(cd "${REPOROOT}/apps"; find * -maxdepth 0 -type d | xargs) 7 | 8 | if ! command -v rpmbuild &> /dev/null; then 9 | sudo dnf install -y rpmdevtools rpmlint 10 | fi 11 | 12 | title() { 13 | echo -e "\E[34m# $1\E[00m"; 14 | } 15 | 16 | build_rpm() { 17 | local app="$1" 18 | 19 | title "Copying .spec and source tarball for ${app}" 20 | specfile="${REPOROOT}/apps/${app}/${app}.spec" 21 | version=$(awk '/Version:/ { print $2 }' "${specfile}") 22 | cp "${REPOROOT}/apps/${app}/${app}.spec" ~/rpmbuild/SPECS 23 | wget -P ~/rpmbuild/SOURCES/ https://github.com/redhat-et/microshift-demos/archive/refs/tags/${app}-v${version}.tar.gz 24 | 25 | title "Building ${app} RPM" 26 | rpmbuild -bs ~/rpmbuild/SPECS/${app}.spec 27 | rpmbuild -bb ~/rpmbuild/SPECS/${app}.spec 28 | 29 | title "Running linter on ${app} RPM" 30 | # The spec names the package microshift-demos-<app>, so lint that file 31 | rpmlint ~/rpmbuild/RPMS/noarch/microshift-demos-${app}-*.rpm 32 | } 33 | 34 | rpmdev-setuptree 35 | for app in ${APPS}; do 36 | build_rpm ${app} 37 | done -------------------------------------------------------------------------------- /apps/push_images: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | APPS=$(cd "${REPOROOT}/apps"; find * -maxdepth 0 -type d | xargs) 7 | 8 | title() { 9 | echo -e "\E[34m# $1\E[00m"; 10 | } 11 | 12 | push_image() { 13 | local app="$1" 14 | 15 | pushd "${REPOROOT}/apps/${app}/images" &>/dev/null 16 | title "Pushing ${app}" 17 | buildah manifest push --all "quay.io/microshift/${app}:latest" "docker://quay.io/microshift/${app}:latest" 18
| popd &>/dev/null 19 | } 20 | 21 | for app in ${APPS}; do 22 | push_image ${app} 23 | done -------------------------------------------------------------------------------- /demos/e2e-demo/README.md: -------------------------------------------------------------------------------- 1 | # MicroShift E2E Provisioning Demo 2 | 3 | This demo shows an end-to-end provisioning workflow for MicroShift on a RHEL4Edge device: 4 | 5 | * building a RHEL4Edge installer image containing MicroShift's dependencies, 6 | * provisioning and on-boarding a new device into the RHEL4Edge Fleet Manager, 7 | * deploying MicroShift to that device using GitOps and getting the MicroShift cluster registered with Open Cluster Management, and 8 | * deploying a test workload on MicroShift via Open Cluster Management. 9 | 10 | ## Pre-requisites 11 | 12 | Follow the instructions for [building demo images on a RHEL machine](https://github.com/redhat-et/microshift-demos/tree/main/README.md) to build the `e2e-demo` artefacts. 13 | 14 | For this demo, you also need a GitHub repo from which you will configure the RHEL edge device running MicroShift via GitOps. Fork the demo's GitOps repo https://github.com/redhat-et/microshift-config into your own org and define the GITOPS_REPO environment variable accordingly: 15 | 16 | export GITOPS_REPO="https://github.com/MY_ORG/microshift-config" 17 | 18 | Finally, you need an [Open Cluster Management](https://open-cluster-management.io/) instance accessible by the device you'll provision, as well as the `oc` client installed on the machine you'll register the cluster from. 19 | 20 | ## Provisioning and On-Boarding a Device 21 | 22 | Use the installer ISO to provision a physical device or VM (e.g. on libvirt). 23 | 24 | When the device boots into the RHEL4Edge image for the first time, it will eventually on-board automatically via [FIDO Device Onboard](https://fidoalliance.org/intro-to-fido-device-onboard/). 
Until that is implemented, your device needs to be *connected to Red Hat VPN* and you need to perform a few manual steps: 25 | 26 | 1. Log into the device's console (user: `microshift`, password from `builds/e2e-demo/password`). 27 | 2. `curl` and run the `register_device.sh` script from the demo repo: 28 | 29 | curl https://raw.githubusercontent.com/redhat-et/microshift-demos/main/e2e-demo/register_device.sh | sudo sh - 30 | 31 | 3. When prompted, enter your RHSM credentials. 32 | 33 | You should now be able to see your device registered under [https://console.stage.redhat.com/beta/edge/fleet-management](https://console.stage.redhat.com/beta/edge/fleet-management). 34 | 35 | ## Deploying MicroShift 36 | 37 | As the Fleet Manager does not provide device configuration management yet, the demo uses the [Transmission](https://github.com/redhat-et/transmission) agent to stage configuration and other assets onto devices. It does so by polling the `GITOPS_REPO` for changes, using the device's Insights ID as the branch name for that repo. 38 | 39 | Therefore, once the device is registered with Insights, look up the Insights ID on the Fleet Manager's info page for your device and store it: 40 | 41 | DEVICE_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" # replace with your device's Insights ID 42 | 43 | Log into your Open Cluster Management instance as admin using `oc`. 44 | 45 | Finally, deploy MicroShift from the GitOps repo you created at the beginning using the following process: 46 | 47 | git clone ${GITOPS_REPO} microshift-config 48 | cd microshift-config 49 | git checkout -b ${DEVICE_ID} 50 | 51 | CLUSTER_NAME="microshift-demo" 52 | curl https://raw.githubusercontent.com/redhat-et/microshift-demos/main/e2e-demo/register_cluster.sh | bash -s - ${CLUSTER_NAME} 53 | 54 | git add . 
55 | git commit -m "Update cluster name and ACM credentials" 56 | git push origin ${DEVICE_ID} 57 | 58 | You should now have a new branch named after your device's Insights ID in your repo, with the configuration of OCM's `klusterlet` agent updated in `/var/lib/microshift/manifests`. The next time Transmission on your device checks for updates, it'll install MicroShift and apply the `klusterlet` configuration. A few moments after MicroShift starts, you should see the new cluster appearing in the OCM console. 59 | 60 | ## Deploying a Workload via Open Cluster Management 61 | 62 | There is a lot of [documentation](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.3/html/applications/index) on how to deploy a new application onto clusters managed by Open Cluster Management. 63 | 64 | Follow these steps to deploy a sample application: 65 | 66 | * Go to Open Cluster Management's application tab. 67 | * Click on the Create application button. 68 | * Enter a name and namespace for your application. Choose Git as the repository type, then enter the URL of the git repo where your application manifests are stored and the placement policies you would like. The recommended policy for this demo is *Deploy to all online clusters and local cluster*. 69 | * Click on Save and wait for Open Cluster Management to create all the needed resources. 70 | 71 | As an example, you can use the following [git repository](https://github.com/oglok/edge-app). It will deploy a replicated NGINX container and expose it on port 30303. Once this application is deployed, you should see the NGINX landing page in your browser when using your device's IP address on the port mentioned above: 72 | 73 | http://DEVICE_IP:30303 74 | 75 | Open Cluster Management allows you to view where the applications are deployed and to search for resources on specific clusters. 
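The 30303 endpoint above is a standard Kubernetes NodePort service. For reference, a minimal manifest along the following lines would produce the same behaviour; the names and label used here are illustrative, not taken from the edge-app repo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport      # illustrative name
spec:
  type: NodePort
  selector:
    app: nginx              # must match the labels on the NGINX deployment's pods
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30303       # the port probed at http://DEVICE_IP:30303
```

A NodePort service is exposed on every node of the cluster, which on a single-node MicroShift device simply means the device's own IP address.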
76 | -------------------------------------------------------------------------------- /demos/e2e-demo/blueprint_v0.0.1.toml: -------------------------------------------------------------------------------- 1 | name = "e2e-demo" 2 | 3 | description = "" 4 | version = "0.0.1" 5 | modules = [] 6 | groups = [] 7 | 8 | # Force correct redhat-release 9 | 10 | [[packages]] 11 | name = "redhat-release" 12 | version = "*" 13 | 14 | # MicroShift dependencies 15 | 16 | [[packages]] 17 | name = "podman" 18 | version = "*" 19 | 20 | [[packages]] 21 | name = "firewalld" 22 | version = "*" 23 | 24 | [[packages]] 25 | name = "conntrack-tools" 26 | version = "*" 27 | 28 | # configuration management 29 | 30 | [[packages]] 31 | name = "transmission-agent" 32 | version = "0.1.6" 33 | 34 | [[packages]] 35 | name = "git" 36 | version = "*" 37 | 38 | 39 | # device management 40 | 41 | [[packages]] 42 | name = "rhc" 43 | version = "*" 44 | 45 | [[packages]] 46 | name = "insights-client" 47 | version = "*" 48 | 49 | [[packages]] 50 | name = "rhc-worker-playbook" 51 | version = "*" 52 | 53 | [[packages]] 54 | name = "ansible" 55 | version = "*" 56 | 57 | [[packages]] 58 | name = "subscription-manager-plugin-ostree" 59 | version = "1.26.*" 60 | 61 | [[packages]] 62 | name = "subscription-manager" 63 | version = "*" 64 | 65 | 66 | # troubleshooting tools 67 | 68 | [[packages]] 69 | name = "iputils" 70 | version = "*" 71 | 72 | [[packages]] 73 | name = "bind-utils" 74 | version = "*" 75 | 76 | [[packages]] 77 | name = "net-tools" 78 | version = "*" 79 | 80 | [[packages]] 81 | name = "iotop" 82 | version = "*" 83 | 84 | # Microshift 85 | 86 | [[packages]] 87 | name = "microshift" 88 | version = "*" 89 | 90 | [customizations] 91 | [customizations.services] 92 | enabled = ["crio", "transmission.timer", "microshift"] 93 | -------------------------------------------------------------------------------- /demos/e2e-demo/build.sh: 
-------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | DEMOROOT=$(git rev-parse --show-toplevel)/e2e-demo 6 | 7 | title() { 8 | echo -e "\E[34m\n# $1\E[00m"; 9 | } 10 | 11 | load_blueprint() { 12 | sudo composer-cli blueprints delete $1 2>/dev/null || true 13 | sudo composer-cli blueprints push ${DEMOROOT}/image-builder/$1.toml 14 | sudo composer-cli blueprints depsolve $1 15 | } 16 | 17 | waitfor_image() { 18 | STATUS=$(sudo composer-cli compose status | grep $1 | awk '{print $2}') 19 | while [ "${STATUS}" != "FINISHED" ]; do 20 | sleep 10 21 | STATUS=$(sudo composer-cli compose status | grep $1 | awk '{print $2}') 22 | echo $(date +'%Y-%m-%d %H:%M:%S') ${STATUS} 23 | if [ "${STATUS}" == "FAILED" ] 24 | then 25 | echo "Blueprint build has failed. For more info, download logs from composer." 26 | exit 1 27 | fi 28 | done 29 | } 30 | 31 | download_image() { 32 | sudo composer-cli compose logs $1 33 | sudo composer-cli compose metadata $1 34 | sudo composer-cli compose image $1 35 | } 36 | 37 | 38 | mkdir -p ${DEMOROOT}/builds 39 | pushd ${DEMOROOT}/builds &>/dev/null 40 | 41 | title "Adding RHOCP and Ansible repos to builder" 42 | if [ ! 
-d "/etc/osbuild-composer/repositories/" ] 43 | then 44 | sudo mkdir -p /etc/osbuild-composer/repositories/ 45 | fi 46 | sudo cp ${DEMOROOT}/image-builder/rhel-8.json /etc/osbuild-composer/repositories/ 47 | sudo cp ${DEMOROOT}/image-builder/rhel-85.json /etc/osbuild-composer/repositories/ 48 | sudo systemctl restart osbuild-composer.service 49 | 50 | title "Loading sources for transmission" 51 | sudo composer-cli sources delete transmission 2>/dev/null || true 52 | sudo composer-cli sources add ${DEMOROOT}/image-builder/transmission.toml 53 | 54 | title "Loading sources for microshift" 55 | sudo composer-cli sources delete microshift 2>/dev/null || true 56 | sudo composer-cli sources add ${DEMOROOT}/image-builder/microshift.toml 57 | 58 | title "Loading r4e-microshift blueprint" 59 | load_blueprint r4e-microshift 60 | 61 | title "Building r4e-microshift ostree container image" 62 | UUID=$(sudo composer-cli compose start-ostree --ref rhel/8/$(uname -i)/edge r4e-microshift edge-container | awk '{print $2}') 63 | waitfor_image ${UUID} 64 | download_image ${UUID} 65 | 66 | title "Serving r4e-microshift ostree container locally" 67 | IMAGEID=$(cat ./${UUID}-container.tar | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') 68 | sudo podman tag ${IMAGEID} localhost/rhel-edge-container 69 | sudo podman rm -f rhel-edge-container 2>/dev/null || true 70 | sudo podman run -d --name=rhel-edge-container -p 8080:8080 localhost/rhel-edge-container 71 | 72 | title "Removing RHOCP and Ansible repos from builder" # builder trips on it 73 | sudo rm /etc/osbuild-composer/repositories/rhel-8.json 74 | sudo rm /etc/osbuild-composer/repositories/rhel-85.json 75 | sudo systemctl restart osbuild-composer.service 76 | 77 | title "Loading installer blueprint" 78 | load_blueprint installer 79 | 80 | title "Building installer ISO" 81 | UUID=$(sudo composer-cli compose start-ostree --ref rhel/8/$(uname -i)/edge --url http://localhost:8080/repo/ installer edge-installer | awk '{print $2}') 82 
| waitfor_image ${UUID} 83 | download_image ${UUID} 84 | 85 | title "Cleaning up local ostree container serving" 86 | sudo podman rm -f rhel-edge-container 2>/dev/null || true 87 | sudo podman rmi -f ${IMAGEID} 2>/dev/null || true 88 | 89 | title "Embedding kickstart" 90 | cp ${DEMOROOT}/image-builder/kickstart.ks ${DEMOROOT}/builds/kickstart.ks 91 | sudo podman run --rm --privileged -ti -v ${DEMOROOT}/builds:/data -v /dev:/dev fedora /bin/bash -c \ 92 | "dnf -y install lorax; cd /data; mkksiso kickstart.ks ${UUID}-installer.iso r4e-microshift-installer.$(uname -i).iso; exit" 93 | sudo chown $(whoami). ${DEMOROOT}/builds/r4e-microshift-installer.$(uname -i).iso 94 | 95 | title "Done" 96 | popd &>/dev/null 97 | -------------------------------------------------------------------------------- /demos/e2e-demo/kickstart.ks.tmpl: -------------------------------------------------------------------------------- 1 | lang en_US.UTF-8 2 | keyboard us 3 | timezone UTC 4 | zerombr 5 | text 6 | reboot 7 | 8 | # Configure network to use DHCP and activate on boot 9 | network --bootproto=dhcp --device=link --activate --onboot=on 10 | 11 | # Partition disk with a 1GB boot XFS partition and an LVM volume containing a 8GB+ system root 12 | # The remainder of the volume will be used by the CSI driver for storing data 13 | # 14 | # For example, a 20GB disk would be partitioned in the following way: 15 | # 16 | # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 17 | # sda 8:0 0 20G 0 disk 18 | # ├─sda1 8:1 0 1G 0 part /boot 19 | # └─sda2 8:2 0 19G 0 part 20 | # └─rhel-root 253:0 0 8G 0 lvm /sysroot 21 | # 22 | zerombr 23 | clearpart --all --initlabel 24 | part /boot --fstype=xfs --asprimary --size=1024 25 | part pv.01 --grow 26 | volgroup rhel pv.01 27 | logvol / --vgname=rhel --fstype=xfs --size=8192 --name=root 28 | 29 | # Configure ostree 30 | ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=$OSTREE_REF 31 | 32 | 33 | %post 
--log=/var/log/anaconda/post-install.log --erroronfail 34 | 35 | # Add the default user and enable passwordless sudo. Add password and/or authorized keys if configured. 36 | useradd -m -d "/home/$USER_NAME" -G wheel "$USER_NAME" 37 | [ -n '$USER_PASS_ENCRYPTED' ] && usermod -p '$USER_PASS_ENCRYPTED' "$USER_NAME" 38 | if [ -n '$USER_AUTHORIZED_KEY' ]; then 39 | mkdir -p "/home/$USER_NAME/.ssh" 40 | chmod 755 "/home/$USER_NAME/.ssh" 41 | tee "/home/$USER_NAME/.ssh/authorized_keys" > /dev/null <<EOF 42 | $USER_AUTHORIZED_KEY 43 | EOF 44 | fi 45 | 46 | echo -e "$USER_NAME\tALL=(ALL)\tNOPASSWD: ALL" >> /etc/sudoers 47 | 48 | # Configure where rpm-ostree looks for ostree updates 49 | echo -e 'url=$OSTREE_REPO_URL' >> /etc/ostree/remotes.d/edge.conf 50 | 51 | # Configure where transmission-agent looks for config updates 52 | echo -e '$TRANSMISSION_URL' > /etc/transmission-url 53 | 54 | # The pull secret is mandatory for MicroShift builds on top of OpenShift, but not OKD 55 | # The /etc/crio/crio.conf.d/microshift.conf references the /etc/crio/openshift-pull-secret file 56 | mkdir -p /etc/crio 57 | cat > /etc/crio/openshift-pull-secret << EOPULLSECRET 58 | $OCP_PULL_SECRET_CONTENTS 59 | EOPULLSECRET 60 | chmod 600 /etc/crio/openshift-pull-secret 61 | 62 | # Configure the mandatory firewall rules 63 | firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 64 | firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 65 | 66 | %end 67 | 68 | 69 | %post --log=/var/log/anaconda/insights-on-reboot-unit-install.log --interpreter=/usr/bin/bash --erroronfail 70 | 71 | INSIGHTS_CLIENT_OVERRIDE_DIR=/etc/systemd/system/insights-client.service.d 72 | INSIGHTS_CLIENT_OVERRIDE_FILE=$INSIGHTS_CLIENT_OVERRIDE_DIR/override.conf 73 | 74 | if [ ! 
-f $INSIGHTS_CLIENT_OVERRIDE_FILE ]; then 75 | mkdir -p $INSIGHTS_CLIENT_OVERRIDE_DIR 76 | cat > $INSIGHTS_CLIENT_OVERRIDE_FILE << EOF 77 | [Unit] 78 | Requisite=greenboot-healthcheck.service 79 | After=network-online.target greenboot-healthcheck.service 80 | 81 | [Install] 82 | WantedBy=multi-user.target 83 | EOF 84 | systemctl enable insights-client.service 85 | fi 86 | 87 | %end 88 | -------------------------------------------------------------------------------- /demos/e2e-demo/register_cluster.sh: -------------------------------------------------------------------------------- 1 | #! /usr/bin/env bash 2 | 3 | set -euo pipefail 4 | 5 | if ! command -v yq &> /dev/null 6 | then 7 | mkdir -p $HOME/bin 8 | wget https://github.com/mikefarah/yq/releases/download/v4.11.2/yq_linux_amd64 -O $HOME/bin/yq 9 | chmod +x $HOME/bin/yq 10 | yq --version 11 | fi 12 | 13 | MANIFESTS_DIR=$(git rev-parse --show-toplevel)/var/lib/microshift/manifests 14 | if [ ! -d "${MANIFESTS_DIR}" ] 15 | then 16 | echo "No klusterlet manifests found in ${MANIFESTS_DIR}. Are you in the config git repo?" 17 | exit 1 18 | fi 19 | 20 | CLUSTER_NAME=${1:? 
"\$1 should be a cluster name (host part of the cluster's FQDN)"} 21 | 22 | WORK_DIR=$HOME/.acm/"$CLUSTER_NAME" 23 | SPOKE_DIR="$WORK_DIR"/spoke 24 | rm -rf "$WORK_DIR" 25 | mkdir -p "$WORK_DIR" "$SPOKE_DIR" 26 | 27 | cat <<EOF >"$WORK_DIR"/managed-cluster.yaml 28 | apiVersion: cluster.open-cluster-management.io/v1 29 | kind: ManagedCluster 30 | metadata: 31 | name: "$CLUSTER_NAME" 32 | spec: 33 | hubAcceptsClient: true 34 | EOF 35 | 36 | cat <<EOF >"$WORK_DIR"/klusterlet-addon-config.yaml 37 | apiVersion: agent.open-cluster-management.io/v1 38 | kind: KlusterletAddonConfig 39 | metadata: 40 | name: "$CLUSTER_NAME" 41 | namespace: "$CLUSTER_NAME" 42 | spec: 43 | clusterName: "$CLUSTER_NAME" 44 | clusterNamespace: "$CLUSTER_NAME" 45 | applicationManager: 46 | enabled: true 47 | certPolicyController: 48 | enabled: true 49 | clusterLabels: 50 | cloud: auto-detect 51 | vendor: auto-detect 52 | iamPolicyController: 53 | enabled: true 54 | policyController: 55 | enabled: true 56 | searchCollector: 57 | enabled: true 58 | version: 2.2.0 59 | EOF 60 | 61 | oc new-project "$CLUSTER_NAME" 1>/dev/null 62 | oc label namespace "$CLUSTER_NAME" cluster.open-cluster-management.io/managedCluster="$CLUSTER_NAME" 63 | oc apply -f "$WORK_DIR"/managed-cluster.yaml 64 | oc apply -f "$WORK_DIR"/klusterlet-addon-config.yaml 65 | 66 | sleep 3 67 | 68 | oc get secret "$CLUSTER_NAME"-import -n "$CLUSTER_NAME" -o jsonpath={.data.import\\.yaml} | base64 --decode > "$SPOKE_DIR"/import.yaml 70 | KUBECONFIG=$(yq eval-all '. 
| select(.metadata.name == "bootstrap-hub-kubeconfig") | .data.kubeconfig' "$SPOKE_DIR"/import.yaml) 71 | sed -i "s/{{ .clustername }}/${CLUSTER_NAME}/g" ${MANIFESTS_DIR}/klusterlet.yaml 72 | sed -i "s/{{ .kubeconfig }}/${KUBECONFIG}/g" ${MANIFESTS_DIR}/klusterlet-kubeconfighub.yaml 73 | -------------------------------------------------------------------------------- /demos/e2e-demo/register_device.sh: -------------------------------------------------------------------------------- 1 | sudo tee /etc/rhc/config.toml > /dev/null < /dev/null < 82 | 83 | $ sudo su - 84 | $ ./get-oc.sh 85 | $ oc get pods -A -o wide 86 | $ oc get nodes -o wide 87 | ``` 88 | 89 | We provide a small script to download the right version of the OpenShift client (4.12 EC4 at this time), and `KUBECONFIG` is pointed to `/var/lib/microshift/resources/kubeadmin/kubeconfig` in the root user's `.profile` file. 90 | 91 | ### Create a new version of the image, and update the system 92 | We can update our image with new content afterwards, and get the connected system upgraded when the new image is ready. 93 | 94 | In the following example we install a small set of application manifests in `/usr/lib/microshift/manifests`. The manifests start a small hello world web server based on nginx, and expose the service via a `hello-world.local` route. 95 | 96 | 97 | ## Updating the system to run an application in MicroShift 98 | 99 | MicroShift consumes manifests provided in `/etc/microshift/manifests` and `/usr/lib/microshift/manifests/`. The first directory is meant for day-2 management systems, while the `/usr/lib` path can be used to embed manifests into the ostree images without the risk of those manifests being modified at runtime. 100 | 101 | For the purpose of the demo we have created a small repository in `https://copr.fedorainfracloud.org/coprs/mangelajo/microshift-hello-world/` that contains the manifests of our demo.
You can create your own repository with your own manifests in COPR: register at [copr.fedorainfracloud.org](https://copr.fedorainfracloud.org), create a project, and upload the manifests SRPM. 102 | 103 | To create the `srpm` use the provided `create-manifests-srpm.sh`: 104 | ```bash 105 | [majopela@lenovo demo]$ ls manifests/ 106 | kustomization.yaml microweb-mdns.yaml microweb-ns.yaml microweb-service.yaml microweb.yaml 107 | ``` 108 | 109 | ```bash 110 | [majopela@lenovo demo]$ ./create-manifests-srpm.sh 111 | Creating manifests tarball 112 | Creating SRPM 113 | setting SOURCE_DATE_EPOCH=1666828800 114 | Wrote: /home/majopela/rpmbuild/SRPMS/microshift-hello-world-manifests-1.0.0-1.src.rpm 115 | Wrote ./microshift-hello-world-manifests-1.0.0-1.src.rpm 116 | Done 117 | ``` 118 | 119 | But you don't need to create this repository; you can simply use the pre-created one at [https://copr.fedorainfracloud.org/coprs/mangelajo/microshift-hello-world/](https://copr.fedorainfracloud.org/coprs/mangelajo/microshift-hello-world/). 120 | 121 | ### Create a new version of the image with the manifests 122 | 123 | Add the new manifests repository to the image manager 124 | 125 | ![Add new custom repo](./images/14-custom-manifests-repo.png) 126 | 127 | Create a new version of the image 128 | 129 | ![Create a new image](./images/15-create-new-version.png) 130 | 131 | Add the `microshift-manifests-example` repository to the image 132 | 133 | ![Add the manifests repo](./images/16-add-new-custom-repo-to-image.png) 134 | 135 | Add the new `microshift-manifests-example` package to the image. 136 | 137 | ![Add the manifests package](./images/17-add-new-manifests-package.png) 138 | 139 | Continue with the process, and eventually your new image version will be built; you only 140 | need the ostree tar layer, not the iso.
You can always build the iso if you want to install new systems with the application available right away, but build time is longer; remember to inject the kickstart if you do that. 141 | 142 | ![The new image is ready](./images/18-new-image-version.png) 143 | 144 | You can proceed to update the running system at this point. 145 | 146 | ![Update your system](./images/19-update-system.png) 147 | 148 | The update can be monitored in the running system by running: 149 | ```bash 150 | $ journalctl -u rhcd -u rpmostreed -f 151 | ``` 152 | 153 | Please note that, once triggered, this operation will take some time, as the 154 | edge management system will prepare and extract the ostree to be 155 | exposed via https to the instances. 156 | 157 | Once the system has updated and rebooted, if you log in and use `oc` (as root), 158 | you can see the new application running in the `microweb` namespace. 159 | 160 | ![Once rebooted](./images/20-once-rebooted.png) 161 | 162 | And you can connect to the application via a route on http://hello-world.local 163 | ![The application runs](./images/21-the-application-runs.png) 164 | 165 | Please note that if the system where your browser is running does not resolve 166 | mdns, you will need to add the ip address of the MicroShift instance to your 167 | local /etc/hosts, or to your DNS servers.
e.g.: 168 | 169 | ```bash 170 | echo 192.168.100.12 hello-world.local >>/etc/hosts 171 | ``` 172 | 173 | -------------------------------------------------------------------------------- /demos/edge-console-demo/demo-kickstart.ks.template: -------------------------------------------------------------------------------- 1 | lang en_US.UTF-8 2 | # keyboard us 3 | keyboard es 4 | timezone UTC 5 | text 6 | reboot --eject 7 | 8 | # Configure network to use DHCP and activate on boot 9 | network --bootproto=dhcp --device=link --activate --onboot=on --nameserver=8.8.8.8 10 | 11 | # Partition disk with a 1GB boot XFS partition and an LVM volume containing a 16GB+ system root 12 | # The remainder of the volume will be used by the CSI driver for storing data 13 | # 14 | # For example, a 20GB disk would be partitioned in the following way: 15 | # 16 | # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 17 | # sda 8:0 0 32G 0 disk 18 | # ├─sda1 8:1 0 1G 0 part /boot 19 | # └─sda2 8:2 0 19G 0 part 20 | # └─rhel-root 253:0 0 16G 0 lvm /sysroot 21 | # 22 | zerombr 23 | clearpart --all --initlabel 24 | part /boot --fstype=xfs --asprimary --size=1024 25 | # Uncomment this line to add a SWAP partition of the recommended size 26 | #part swap --fstype=swap --recommended 27 | part pv.01 --grow 28 | volgroup rhel pv.01 29 | logvol / --vgname=rhel --fstype=xfs --size=16000 --name=root 30 | 31 | # Generate the ostreesetup line in the %pre section 32 | %include /tmp/ostreesetup 33 | 34 | %pre 35 | echo PRE 36 | 37 | # RHEL for Edge 8.5 moves the ostree dir to the root of the image 38 | # RHEL for Edge 8.4 and 9.0 install from /run/install/repo 39 | # Auto-detect a dir at that location and inject it into the command list for install 40 | [[ -d /run/install/repo/ostree ]] && repodir='/run/install/repo/ostree/repo' || repodir='/ostree/repo' 41 | ref=$(ostree refs --repo=${repodir}) 42 | echo "ostreesetup --nogpg --osname=rhel-edge --remote=rhel-edge --url=file://${repodir} --ref=${ref}" > /tmp/ostreesetup 43 |
44 | %end 45 | 46 | %post --log=/var/log/anaconda/post-install.log --erroronfail 47 | 48 | # The pull secret is mandatory for MicroShift builds on top of OpenShift, but not OKD 49 | # The /etc/crio/crio.conf.d/microshift.conf references the /etc/crio/openshift-pull-secret file 50 | cat > /etc/crio/openshift-pull-secret << EOF 51 | __PULL_SECRET__ 52 | EOF 53 | chmod 600 /etc/crio/openshift-pull-secret 54 | 55 | USER_NAME=redhat 56 | USER_HOME=/home/redhat 57 | 58 | # Create a default redhat user / redhat pw, allowing it to run sudo commands without password 59 | useradd -m -d ${USER_HOME} -p \$5\$XDVQ6DxT8S5YWLV7\$8f2om5JfjK56v9ofUkUAwZXTxJl3Sqnc9yPnza4xoJ0 -G wheel ${USER_NAME} 60 | echo -e ${USER_NAME}'\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers 61 | 62 | mkdir -p ${USER_HOME}/.ssh 63 | chmod 755 ${USER_HOME}/.ssh 64 | cat << EOF >> ${USER_HOME}/.ssh/authorized_keys 65 | __AUTH_KEYS__ 66 | EOF 67 | 68 | chmod 600 ${USER_HOME}/.ssh/authorized_keys 69 | chown ${USER_NAME}:${USER_NAME} ${USER_HOME}/.ssh/authorized_keys 70 | # no sudo password for user 71 | echo -e "${USER_NAME}\tALL=(ALL)\tNOPASSWD: ALL" >> /etc/sudoers 72 | 73 | 74 | # Configure the firewall (rules reload is not necessary here) 75 | firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 76 | firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 77 | 78 | # there is a bug with firewalld that will remove rules created by ovn/microshift 79 | systemctl disable firewalld 80 | 81 | echo -e 'export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig' >> /root/.profile 82 | 83 | HOSTNAME=micro-$(uuidgen | cut -f 1 -d \-).local 84 | # HOSTNAME=micro-$(cat /sys/class/dmi/id/product_uuid | tail -c 13).local 85 | echo $HOSTNAME > /etc/hostname 86 | echo 127.0.0.1 $HOSTNAME >> /etc/hosts 87 | echo HOSTNAME: $HOSTNAME 88 | 89 | systemctl enable microshift.service 90 | 91 | cat << EOF >/root/get-oc.sh 92 | #!/bin/sh 93 | 94 | curl
https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp-dev-preview/4.12.0-ec.4/openshift-client-linux-4.12.0-ec.4.tar.gz \ 95 | --output /tmp/openshift-client-linux-4.12.0-ec.4.tar.gz 96 | 97 | cd /usr/local/bin 98 | tar -zxvf /tmp/openshift-client-linux-4.12.0-ec.4.tar.gz 99 | rm /tmp/openshift-client-linux-4.12.0-ec.4.tar.gz 100 | 101 | EOF 102 | 103 | chmod a+x /root/get-oc.sh 104 | 105 | %end 106 | 107 | %post --erroronfail 108 | #### RHC / fleet manager creds ########## 109 | cat <<'__FLEET_ENV__' >> /root/fleet_env.bash 110 | __RHC_CREDENTIALS__ 111 | __FLEET_ENV__ 112 | ########################################### 113 | 114 | %end 115 | 116 | %post --log=/var/log/anaconda/insights-on-reboot-unit-install.log --interpreter=/usr/bin/bash --erroronfail 117 | echo POST-INSIGHTS-CLIENT-OVERRIDE 118 | 119 | INSIGHTS_CLIENT_OVERRIDE_DIR=/etc/systemd/system/insights-client.service.d 120 | INSIGHTS_CLIENT_OVERRIDE_FILE=$INSIGHTS_CLIENT_OVERRIDE_DIR/override.conf 121 | 122 | if [ ! 
-f $INSIGHTS_CLIENT_OVERRIDE_FILE ]; then 123 | mkdir -p $INSIGHTS_CLIENT_OVERRIDE_DIR 124 | cat > $INSIGHTS_CLIENT_OVERRIDE_FILE << EOF 125 | [Unit] 126 | Requisite=greenboot-healthcheck.service 127 | After=network-online.target greenboot-healthcheck.service 128 | 129 | [Install] 130 | WantedBy=multi-user.target 131 | EOF 132 | 133 | systemctl enable insights-client.service 134 | fi 135 | 136 | %end 137 | 138 | %post --log=/var/log/anaconda/post-autoregister.log 139 | echo POST-AUTOREGISTER 140 | 141 | # Automatically register if credentials are provided 142 | [[ -e /root/fleet_env.bash ]] && source /root/fleet_env.bash 143 | RHC_FIRSTBOOT=${RHC_FIRSTBOOT:-false} 144 | 145 | # CREATE AUTOREGISTER SCRIPT 146 | # TODO: rhc firstboot registration script should be something installed with RHC (if not already) 147 | cat << '__RHCREGISTER__' >> /usr/local/bin/rhc_autoregister.sh 148 | #!/bin/bash 149 | 150 | 151 | if [ -e /root/fleet_env.bash ] 152 | then 153 | source /root/fleet_env.bash 154 | 155 | [[ -e /root/fleet_tags.yaml ]] && cp /root/fleet_tags.yaml /etc/insights-client/tags.yaml 156 | 157 | if [[ -z ${RHC_ORGID+x} ]] && [[ -z ${RHC_USER+x} ]] 158 | then 159 | echo "No credentials provided for registration" 160 | else 161 | # Register with RHSM 162 | [[ -v RHC_ORGID ]] \ 163 | && subscription-manager register --org $RHC_ORGID --activationkey $RHC_ACTIVATION_KEY --force \ 164 | || subscription-manager register --username $RHC_USER --password $RHC_PASS --auto-attach --force 165 | 166 | # Register with Insights 167 | insights-client --register > /var/log/anaconda/post-insights-command.log 2>&1 168 | 169 | # Enable and start RHCD service 170 | systemctl enable rhcd.service 171 | systemctl restart rhcd.service 172 | 173 | # rm /etc/rhsm/facts/osbuild.facts 174 | 175 | # Register with RHC 176 | [[ -v RHC_ORGID ]] \ 177 | && rhc connect --organization $RHC_ORGID --activation-key $RHC_ACTIVATION_KEY \ 178 | || rhc connect --username $RHC_USER --password $RHC_PASS 179 | 
180 | systemctl status rhcd.service 181 | systemctl status insights-client 182 | 183 | 184 | # Set specific display name set in custom post 185 | if [ -z ${INSIGHTS_DISPLAY_NAME+x} ] 186 | then 187 | # Replace localhost with Subscription Manager ID and set Insights display name 188 | # Subscription Manager ID was chosen based on availability. Refactor based on feedback 189 | statichostname=$(hostnamectl | grep "Static hostname" | awk -F": " '{print $2}') 190 | transienthostname=$(hostnamectl | grep "Transient hostname" | awk -F": " '{print $2}') 191 | [[ -z ${transienthostname+x} ]] && displayname=${statichostname} || displayname=${transienthostname} 192 | if [[ "${displayname}" == "localhost.localdomain" ]] 193 | then 194 | displayname=$(subscription-manager identity | grep "system identity" | awk -F": " '{print $2}') 195 | insights-client --display-name "${DISPLAY_NAME_PREFIX}${displayname}" 196 | fi 197 | else 198 | insights-client --display-name "$INSIGHTS_DISPLAY_NAME" 199 | fi 200 | fi 201 | else 202 | echo "INFO: No /root/fleet_env.bash file. 
Skipping registration" 203 | fi 204 | __RHCREGISTER__ 205 | 206 | # need to make it executable and restore selinux context 207 | chmod 755 /usr/local/bin/rhc_autoregister.sh 208 | restorecon -rv /usr/local/bin 209 | 210 | # CREATE AUTO REGISTRATION FIRSTBOOT SERVICE 211 | cat << '__RHCFIRSTBOOTSERVICE__' >> /etc/systemd/system/rhc_autoregister.service 212 | [Unit] 213 | Before=systemd-user-sessions.service 214 | Wants=network-online.target microshift-ovs-init.service 215 | After=network-online.target microshift-ovs-init.service 216 | ConditionPathExists=/root/fleet_env.bash 217 | 218 | [Service] 219 | Type=oneshot 220 | ExecStart=/usr/local/bin/rhc_autoregister.sh 221 | ExecStartPost=/usr/bin/rm /root/fleet_env.bash 222 | RemainAfterExit=yes 223 | 224 | [Install] 225 | WantedBy=multi-user.target 226 | 227 | __RHCFIRSTBOOTSERVICE__ 228 | 229 | # Set up first boot registration or do it now before reboot 230 | [[ $RHC_FIRSTBOOT == "true" ]] \ 231 | && systemctl enable rhc_autoregister.service \ 232 | || /usr/local/bin/rhc_autoregister.sh 233 | 234 | #systemctl enable rhcd.service 235 | 236 | %end 237 | 238 | 239 | %post --log=/var/log/anaconda/post-cleanup.log 240 | # Cleanup fleet-ification 241 | echo POST-CLEANUP 242 | 243 | [[ -e /root/fleet_env.bash ]] && source /root/fleet_env.bash 244 | RHC_FIRSTBOOT=${RHC_FIRSTBOOT:-false} 245 | 246 | # Clean up fleet install file(s) 247 | [[ $RHC_FIRSTBOOT != "true" && -e /root/fleet_env.bash ]] && rm /root/fleet_env.bash 248 | 249 | %end 250 | 251 | -------------------------------------------------------------------------------- /demos/edge-console-demo/images/01-edge-management.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/01-edge-management.png -------------------------------------------------------------------------------- 
/demos/edge-console-demo/images/02-add-repository.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/02-add-repository.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/03-create-new-image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/03-create-new-image.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/04-create-new-image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/04-create-new-image.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/05-select-base.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/05-select-base.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/06-add-ssh-keys.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/06-add-ssh-keys.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/07-add-our-custom-repo.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/07-add-our-custom-repo.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/08-add-our-microshift-package.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/08-add-our-microshift-package.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/09-review.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/09-review.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/10-image-ready.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/10-image-ready.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/11-download-iso.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/11-download-iso.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/12-inject-kickstart.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/12-inject-kickstart.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/13-system-gets-listed.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/13-system-gets-listed.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/14-custom-manifests-repo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/14-custom-manifests-repo.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/15-create-new-version.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/15-create-new-version.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/16-add-new-custom-repo-to-image.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/16-add-new-custom-repo-to-image.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/17-add-new-manifests-package.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/17-add-new-manifests-package.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/18-new-image-version.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/18-new-image-version.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/19-update-all.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/19-update-all.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/19-update-system.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/19-update-system.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/20-once-rebooted.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/20-once-rebooted.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/20-service-screenshot.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/20-service-screenshot.png -------------------------------------------------------------------------------- /demos/edge-console-demo/images/21-the-application-runs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-et/microshift-demos/0814cebf7ff3141fc8cde503cd56cb4d23e77868/demos/edge-console-demo/images/21-the-application-runs.png -------------------------------------------------------------------------------- /demos/edge-console-demo/manifests/kustomization.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: kustomize.config.k8s.io/v1beta1 2 | kind: Kustomization 3 | namespace: microweb 4 | resources: 5 | - microweb-ns.yaml 6 | - microweb-mdns.yaml 7 | - microweb-service.yaml 8 | - microweb.yaml 9 | -------------------------------------------------------------------------------- /demos/edge-console-demo/manifests/microweb-mdns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: route.openshift.io/v1 2 | kind: Route 3 | metadata: 4 | labels: 5 | app: microweb 6 | name: microweb 7 | spec: 8 | host: hello-world.local 9 | port: 10 | targetPort: 8080 11 | to: 12 | kind: Service 13 | name: microweb 14 | weight: 100 15 | wildcardPolicy: None 16 | 17 | -------------------------------------------------------------------------------- /demos/edge-console-demo/manifests/microweb-ns.yaml: -------------------------------------------------------------------------------- 1 | kind: Namespace 2 | apiVersion: v1 3 | metadata: 4 | name: microweb 5 | labels: 6 | name: microweb 7 | -------------------------------------------------------------------------------- /demos/edge-console-demo/manifests/microweb-service.yaml: 
-------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | labels: 5 | app: microweb 6 | name: microweb 7 | spec: 8 | ports: 9 | - port: 8080 10 | protocol: TCP 11 | targetPort: 8080 12 | selector: 13 | app: microweb 14 | sessionAffinity: None 15 | type: ClusterIP 16 | -------------------------------------------------------------------------------- /demos/edge-console-demo/manifests/microweb.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: microweb 5 | labels: 6 | app: microweb 7 | spec: 8 | replicas: 2 9 | selector: 10 | matchLabels: 11 | app: microweb 12 | template: 13 | metadata: 14 | labels: 15 | app: microweb 16 | spec: 17 | containers: 18 | - name: microweb 19 | image: quay.io/microshift/hello-world:latest 20 | imagePullPolicy: IfNotPresent 21 | ports: 22 | - containerPort: 8080 23 | 24 | -------------------------------------------------------------------------------- /demos/edge-console-demo/microshift-hello-world-manifests.spec: -------------------------------------------------------------------------------- 1 | 2 | %global manifestStore /usr/lib/microshift/manifests 3 | Name: microshift-hello-world-manifests 4 | Version: 1.0.0 5 | Release: 1 6 | Summary: Application specific manifests 7 | License: Apache License 2.0 8 | Source0: microshift-hello-world-manifests-1.0.0-1-manifests.tar.bz2 9 | 10 | %description 11 | This package provides the manifests for the application deployed in MicroShift.
12 | 13 | %install 14 | mkdir -p %{buildroot}%{manifestStore} 15 | cd %{buildroot}%{manifestStore} 16 | tar xfjv %{SOURCE0} 17 | 18 | %files 19 | %dir "%{manifestStore}" 20 | "%{manifestStore}/kustomization.yaml" 21 | "%{manifestStore}/microweb-mdns.yaml" 22 | "%{manifestStore}/microweb-ns.yaml" 23 | "%{manifestStore}/microweb-service.yaml" 24 | "%{manifestStore}/microweb.yaml" 25 | 26 | %changelog 27 | * Thu Oct 27 2022 Miguel Angel Ajo . 1.0.0-1 28 | First version of the manifests package 29 | -------------------------------------------------------------------------------- /demos/edge-console-demo/rhc_credentials.env.example: -------------------------------------------------------------------------------- 1 | 2 | # use orgid / activation key 3 | # RHC_ORGID= 4 | # RHC_ACTIVATION_KEY= 5 | 6 | # or user/password 7 | 8 | # RHC_USER=my@email.com 9 | # RHC_PASS=mypassword 10 | 11 | # please note those credentials will be embedded as part of the kickstart inside the .iso file 12 | 13 | RHC_FIRSTBOOT=true 14 | -------------------------------------------------------------------------------- /demos/edge-console-demo/run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPO_ROOT="$(git rev-parse --show-toplevel)" 6 | DEMO_ROOT="${REPO_ROOT}/demos/edge-console-demo" 7 | 8 | INPUT_ISO=$1 9 | CREDENTIALS=$2 10 | PULL_SECRET=$3 11 | OUTPUT_ISO=$4 12 | AUTH_KEY=${AUTH_KEY:-~/.ssh/id_rsa.pub} 13 | 14 | title() { 15 | echo -e "\e[1;32m$1\e[0m" 16 | } 17 | 18 | check_tool_installed() { 19 | if ! command -v $1 &> /dev/null 20 | then 21 | echo "Please install $1" 22 | exit 23 | fi 24 | } 25 | 26 | check_sudo_works() { 27 | if ! 
sudo -n true 2>/dev/null; then 28 | echo "Please add yourself to sudoers" 29 | exit 30 | fi 31 | } 32 | 33 | usage() { 34 | echo "Usage: $0 <input-iso> <credentials-file> <pull-secret> <output-iso>" 35 | echo " <input-iso> is the iso downloaded from the image builder in console.redhat.com" 36 | echo " <credentials-file> is a file with the credentials to connect the system to the" 37 | echo " hosted fleet manager" 38 | echo " the file must set the following variables:" 39 | echo " RHC_USER + RHC_PASS" 40 | echo " or RHC_ORGID + RHC_ACTIVATION_KEY" 41 | echo " RHC_FIRSTBOOT=true" 42 | echo " <pull-secret> is the pull secret obtained from https://console.redhat.com/openshift/downloads#tool-pull-secret" 43 | echo " <output-iso> is the output ISO with the embedded kickstart" 44 | exit 1 45 | } 46 | 47 | check_tool_installed mkksiso 48 | 49 | if [ -z "$INPUT_ISO" ] || [ -z "$CREDENTIALS" ] || [ -z "$PULL_SECRET" ] || [ -z "$OUTPUT_ISO" ]; then 50 | usage 51 | fi 52 | 53 | if [ ! -f "$AUTH_KEY" ]; then 54 | echo "Please create an ssh key pair and set the AUTH_KEY variable to the public key file" 55 | echo "by default, AUTH_KEY is set to ${AUTH_KEY}" 56 | exit 1 57 | fi 58 | if [ ! -f "$INPUT_ISO" ]; then 59 | echo "Input ISO $INPUT_ISO does not exist" 60 | exit 1 61 | fi 62 | 63 | if [ ! -f "$PULL_SECRET" ]; then 64 | echo "Input pull secret $PULL_SECRET does not exist" 65 | exit 1 66 | fi 67 | 68 | if [ ! -f "$CREDENTIALS" ]; then 69 | echo "Input credentials $CREDENTIALS does not exist" 70 | exit 1 71 | fi 72 | 73 | # make temporary directory 74 | TMPDIR=$(mktemp -d) 75 | trap "[[ !
-z "$TMPDIR" ]] && rm -rf ${TMPDIR}" EXIT 76 | 77 | title "Creating kickstart file" 78 | 79 | cp "${DEMO_ROOT}/demo-kickstart.ks.template" "${TMPDIR}/kickstart.ks" 80 | 81 | # Read pull secret file content 82 | PULL_SECRET_CONTENT=$(cat $PULL_SECRET) 83 | SSH_PUBKEY_CONTENT=$(cat $AUTH_KEY) 84 | RHC_CREDS_CONTENT=$(cat $CREDENTIALS | sed ':a;N;$!ba;s/\n/\\n/g' | sed 's/\$/\\$/g') 85 | 86 | echo " - injecting pull secret" 87 | sed -i "s|__PULL_SECRET__|${PULL_SECRET_CONTENT}|g" "${TMPDIR}/kickstart.ks" 88 | 89 | echo " - injecting ssh pub key from ${AUTH_KEY}" 90 | sed -i "s|__AUTH_KEYS__|${SSH_PUBKEY_CONTENT}|g" "${TMPDIR}/kickstart.ks" 91 | 92 | echo " - injecting rhc credentials from ${CREDENTIALS}" 93 | sed -i "s|__RHC_CREDENTIALS__|${RHC_CREDS_CONTENT}|g" "${TMPDIR}/kickstart.ks" 94 | 95 | title "Creating ISO, this will require sudo privileges" 96 | 97 | sudo mkksiso "${TMPDIR}/kickstart.ks" "${INPUT_ISO}" "${TMPDIR}/output.iso" 98 | rm -f "${OUTPUT_ISO}" 2>/dev/null || true 99 | mv "${TMPDIR}/output.iso" "${OUTPUT_ISO}" 100 | 101 | echo moved "${TMPDIR}/output.iso" to "${OUTPUT_ISO}" 102 | cp "${TMPDIR}/kickstart.ks" "${OUTPUT_ISO}.ks" 103 | 104 | title "Done!" -------------------------------------------------------------------------------- /demos/hello-microshift-demo/README.md: -------------------------------------------------------------------------------- 1 | # Hello, MicroShift! Demo 2 | 3 | This demo creates a minimal RHEL for Edge with MicroShift image and shows deploying a simple "Hello, MicroShift!" workload. 4 | 5 | ## Preparing the demo 6 | 7 | Follow the instructions for [building demo images on a RHEL machine](https://github.com/redhat-et/microshift-demos/tree/main/README.md), building the demo with `./scripts/build hello-microshift-demo`. 
8 | 9 | ## Running the demo 10 | ### Installing the ISO and accessing the MicroShift cluster 11 | 12 | Install a VM or physical machine with the minimum system requirements (2 cores, 2GB RAM, 10GB disk) using the ISO at `./builds/hello-microshift-demo/hello-microshift-demo-installer.x86_64.iso`. 13 | 14 | SSH into the machine: 15 | 16 | ssh -o "IdentitiesOnly=yes" -i ./builds/hello-microshift-demo/id_demo microshift@$MACHINE_IP 17 | 18 | Verify that the MicroShift service has started: 19 | 20 | sudo systemctl status microshift 21 | 22 | Verify that you can access MicroShift locally: 23 | 24 | oc get all -A 25 | 26 | Now wait for MicroShift to be fully up-and-running. This may take a few minutes the first time MicroShift starts, because it still needs to pull the container images it deploys. When it's ready, `oc get pods -A` output should look similar to: 27 | 28 | NAMESPACE NAME READY STATUS RESTARTS AGE 29 | openshift-dns pod/dns-default-lm55n 2/2 Running 0 80s 30 | openshift-dns pod/node-resolver-zp7gw 1/1 Running 0 3m11s 31 | openshift-ingress pod/router-default-ddc545d88-mk8gc 1/1 Running 0 3m5s 32 | openshift-ovn-kubernetes pod/ovnkube-master-4586k 4/4 Running 0 3m11s 33 | openshift-ovn-kubernetes pod/ovnkube-node-xgx9t 1/1 Running 0 3m11s 34 | openshift-service-ca pod/service-ca-77fc4cc659-ncmbv 1/1 Running 0 3m6s 35 | openshift-storage pod/topolvm-controller-5fc9996875-lzpgx 4/4 Running 0 3m12s 36 | openshift-storage pod/topolvm-node-hb5mh 4/4 Running 0 80s 37 | 38 | ### Deploying the "Hello, MicroShift!" application and accessing it locally 39 | 40 | Now let's deploy the "Hello, MicroShift!" 
application: 41 | 42 | oc apply -k https://github.com/redhat-et/microshift-demos/apps/hello-microshift?ref=main 43 | 44 | Verify that the application is deployed and the route is accepted: 45 | 46 | [microshift@edge ~]$ oc get pods -n demo 47 | NAME READY STATUS RESTARTS AGE 48 | hello-microshift-6bdbc6c444-nnhrm 1/1 Running 0 24s 49 | hello-microshift-6bdbc6c444-zp5cc 1/1 Running 0 24s 50 | 51 | [microshift@edge ~]$ oc get routes -n demo 52 | NAME HOST ADMITTED SERVICE TLS 53 | hello-microshift hello-microshift.local True hello-microshift 54 | 55 | Add an entry to `/etc/hosts` to map the application's route (`hello-microshift.local`) to the machine's primary IP: 56 | 57 | hostIP=$(ip route get 1.1.1.1 | grep -oP 'src \K\S+') 58 | sudo sed -i.bak '/hello-microshift.local/d' /etc/hosts 59 | echo "${hostIP} hello-microshift.local" | sudo tee -a /etc/hosts 60 | 61 | Now, trying to `curl` the application's route should return the "Hello, MicroShift!" HTML page: 62 | 63 | [microshift@edge ~]$ curl http://hello-microshift.local 64 | 65 | 66 | ... 67 | 68 | ### Accessing the cluster and "Hello, MicroShift!" application remotely 69 | 70 | Next, let's access the cluster and application from outside the MicroShift machine. 71 | 72 | If you're running MicroShift on a VM _and_ your hypervisor connects instances via NAT, make sure to create port mappings from the hypervisor to guest ports 22 (ssh), 80 (http), and 6443 (K8s API). 73 | 74 | Once more, you need to edit `/etc/hosts` to resolve `hello-microshift.local` to the MicroShift machine's IP, then you can `curl` the route and also access the page in your browser: 75 | 76 | [user@core ~]$ curl http://hello-microshift.local 77 | 78 | 79 | ... 80 | 81 | To remotely access the cluster using the `oc` client, copy the kubeconfig from the MicroShift machine to your local machine.
Then update the URL of the `server:` field in the kubeconfig to point to your MicroShift machine: 82 | 83 | mkdir -p ~/.kube 84 | ssh -o "IdentitiesOnly=yes" -i ./builds/hello-microshift-demo/id_demo microshift@$MACHINE_IP "sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig" > ~/.kube/config 85 | sed -i.bak 's|server: https://127.0.0.1:6443|server: https://hello-microshift.local:6443|' ~/.kube/config 86 | 87 | Now you can access the cluster remotely: 88 | 89 | [user@core ~]$ oc get pods -n demo 90 | NAME READY STATUS RESTARTS AGE 91 | hello-microshift-6bdbc6c444-8sjc6 1/1 Running 0 45m 92 | hello-microshift-6bdbc6c444-bm5j4 1/1 Running 0 45m 93 | -------------------------------------------------------------------------------- /demos/hello-microshift-demo/blueprint_v0.0.1.toml: -------------------------------------------------------------------------------- 1 | name = "hello-microshift-demo" 2 | 3 | description = "" 4 | version = "0.0.1" 5 | modules = [] 6 | groups = [] 7 | 8 | 9 | # MicroShift, oc client, and git 10 | 11 | [[packages]] 12 | name = "microshift" 13 | version = "*" 14 | 15 | [[packages]] 16 | name = "openshift-clients" 17 | version = "*" 18 | 19 | [[packages]] 20 | name = "git" 21 | version = "*" 22 | 23 | 24 | # troubleshooting tools 25 | 26 | [[packages]] 27 | name = "iputils" 28 | version = "*" 29 | 30 | [[packages]] 31 | name = "bind-utils" 32 | version = "*" 33 | 34 | [[packages]] 35 | name = "net-tools" 36 | version = "*" 37 | 38 | 39 | # other 40 | 41 | [[packages]] 42 | name = "redhat-release" 43 | version = "*" 44 | 45 | 46 | # customizations 47 | 48 | [customizations.firewall.services] 49 | enabled = ["ssh", "http"] 50 | disabled = ["cockpit"] 51 | 52 | [customizations.firewall] 53 | port = ["6443/tcp"] 54 | 55 | [customizations.services] 56 | enabled = ["microshift"] 57 | -------------------------------------------------------------------------------- /demos/hello-microshift-demo/kickstart.ks.tmpl:
-------------------------------------------------------------------------------- 1 | lang en_US.UTF-8 2 | keyboard us 3 | timezone UTC 4 | text 5 | reboot 6 | 7 | # Configure network to use DHCP and activate on boot 8 | network --bootproto=dhcp --device=link --activate --onboot=on --hostname=edge.local 9 | 10 | # Partition disk with 1GB of boot partitions (200MB EFI + 800MB XFS /boot) and an LVM volume containing an 8GB+ system root 11 | # The remainder of the volume will be used by the CSI driver for storing data 12 | # 13 | # For example, a 20GB disk would be partitioned in the following way: 14 | # 15 | # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 16 | # sda 8:0 0 20G 0 disk 17 | # ├─sda1 8:1 0 200M 0 part /boot/efi 18 | # ├─sda2 8:2 0 800M 0 part /boot 19 | # └─sda3 8:3 0 19G 0 part 20 | # └─rhel-root 253:0 0 8G 0 lvm /sysroot 21 | # 22 | zerombr 23 | clearpart --all --initlabel 24 | part /boot/efi --fstype=efi --size=200 25 | part /boot --fstype=xfs --asprimary --size=800 26 | # Uncomment this line to add a SWAP partition of the recommended size 27 | #part swap --fstype=swap --recommended 28 | part pv.01 --grow 29 | volgroup rhel pv.01 30 | logvol / --vgname=rhel --fstype=xfs --size=8192 --name=root 31 | 32 | # Configure ostree 33 | ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=$OSTREE_REF 34 | 35 | 36 | %post --log=/var/log/anaconda/post-install.log --erroronfail 37 | 38 | # Add the default user and enable passwordless sudo. Add password and/or authorized keys if configured.
39 | useradd -m -d "/home/$USER_NAME" -G wheel "$USER_NAME" 40 | [ -n '$USER_PASS_ENCRYPTED' ] && usermod -p '$USER_PASS_ENCRYPTED' "$USER_NAME" 41 | if [ -n '$USER_AUTHORIZED_KEY' ]; then 42 | mkdir -p "/home/$USER_NAME/.ssh" 43 | chmod 755 "/home/$USER_NAME/.ssh" 44 | tee "/home/$USER_NAME/.ssh/authorized_keys" > /dev/null <<EOF 45 | $USER_AUTHORIZED_KEY 46 | EOF 47 | chmod 600 "/home/$USER_NAME/.ssh/authorized_keys" 48 | fi 49 | echo -e "$USER_NAME\tALL=(ALL)\tNOPASSWD: ALL" >> /etc/sudoers 50 | 51 | # Configure where rpm-ostree looks for ostree updates 52 | echo -e 'url=$OSTREE_REPO_URL' >> /etc/ostree/remotes.d/edge.conf 53 | 54 | # The pull secret is mandatory for MicroShift builds on top of OpenShift, but not OKD 55 | # The /etc/crio/crio.conf.d/microshift.conf references the /etc/crio/openshift-pull-secret file 56 | mkdir -p /etc/crio 57 | cat > /etc/crio/openshift-pull-secret << EOPULLSECRET 58 | $OCP_PULL_SECRET_CONTENTS 59 | EOPULLSECRET 60 | chmod 600 /etc/crio/openshift-pull-secret 61 | 62 | # Configure the mandatory firewall rules that cannot (yet) be configured from blueprint 63 | firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 64 | firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 65 | 66 | # for convenience, set up the kubeconfig when the user logs in 67 | mkdir -p "/home/$USER_NAME/.kube" 68 | chmod 755 "/home/$USER_NAME/.kube" 69 | chown "$USER_NAME". "/home/$USER_NAME/.kube" 70 | echo 'sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config 2> /dev/null' >> "/home/$USER_NAME/.bashrc" 71 | 72 | %end 73 | -------------------------------------------------------------------------------- /demos/ibaas-demo/README.md: -------------------------------------------------------------------------------- 1 | # MicroShift Image Builder as a Service 2 | 3 | This demo will allow you to build a RHEL for Edge 8 image containing MicroShift and all its dependencies by using the hosted Image Builder service running at `console.redhat.com`.
This new Red Hat managed service is a great tool for building customized RHEL images without needing dedicated on-premise infrastructure. 4 | 5 | ## Prerequisites 6 | 7 | This demo has two basic requirements: 8 | 9 | * A RHEL/Fedora machine on which to execute a simple bash script 10 | * An AWS account, used to create an S3 bucket where the automation will host some files. 11 | 12 | ## Demo workflow 13 | 14 | The main script, `run.sh`, will take you through the following steps: 15 | 16 | * Download the official MicroShift repository and locally build RPM packages. 17 | * Install the AWS CLI and configure it with your own account credentials. 18 | * Create an S3 bucket and store the MicroShift packages as an RPM repo. 19 | * Call the hosted Image Builder service at `console.redhat.com` to build an ostree-commit with all the required packages. 20 | * Host the ostree-commit in the newly created S3 bucket. 21 | * Call the hosted Image Builder service to create an ISO image using that ostree-commit. 22 | * Inject a kickstart file that will configure the system for MicroShift. 23 | 24 | 25 | ## Demo execution 26 | 27 | Go to the demo folder `demos/ibaas-demo/` and run the following command with your own parameters: 28 | 29 | ``` 30 | ./run.sh RHUSER PASSWORD BUCKET-NAME PULLSECRET 31 | 32 | ``` 33 | 34 | where: 35 | * RHUSER: user name of your Red Hat account 36 | * PASSWORD: password of your Red Hat account 37 | * BUCKET-NAME: name of the new S3 bucket to create in your AWS account 38 | * PULLSECRET: path to a file containing your Red Hat [pull secret](https://console.redhat.com/openshift/install/pull-secret) 39 | 40 | 41 | The resulting ISO image will be stored under `builds/iso/`. You can use this ISO to install a VM or a physical machine. A default user (`redhat`/`redhat`) is created so you can log in, but we encourage you to change the password immediately.
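Internally, `run.sh` drives the Image Builder API by POSTing a compose request and then polling its status until it reaches `success` or `failure`. A condensed sketch of that polling pattern, where `get_status` is a hypothetical stand-in for the authenticated `curl`/`jq` query the script performs:

```shell
#!/bin/bash
# Condensed sketch of the compose-status polling loop used by run.sh.
# get_status stands in for the authenticated curl/jq status query;
# wait_for_compose returns 0 on success, 1 on failure, and keeps
# polling while the compose is still building.
wait_for_compose() {
    while true; do
        case "$(get_status)" in
            success) echo "compose finished"; return 0 ;;
            failure) echo "compose failed";   return 1 ;;
            *)       sleep 10 ;;  # building/pending: poll again
        esac
    done
}
```

The script applies this pattern twice: once for the ostree-commit and once for the edge-installer ISO.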
-------------------------------------------------------------------------------- /demos/ibaas-demo/cleanup-ibass-demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | DEMOROOT="${REPOROOT}/demos/ibaas-demo" 7 | RPMDIR="${REPOROOT}/builds/rpms" 8 | ISODIR="${REPOROOT}/builds/iso" 9 | 10 | 11 | title() { 12 | echo -e "\E[34m\n# $1\E[00m"; 13 | } 14 | 15 | title "Cleaning up the MicroShift IBaaS demo environment." 16 | echo "This script will not clean any S3 bucket content." 17 | 18 | # Delete the ostree-commit 19 | rm -rf ostree-commit.tar ostree-commit-id ostree-commit data-ostree.json 20 | # Delete ISO assets 21 | rm -rf data-edge-installer.json edge-installer-iso-id kickstart.ks 22 | # Delete the RPMs 23 | rm -rf ${RPMDIR} 24 | # Delete ISO 25 | rm -rf edge-installer.iso 26 | rm -rf ${ISODIR} 27 | 28 | -------------------------------------------------------------------------------- /demos/ibaas-demo/data-edge-installer.json.template: -------------------------------------------------------------------------------- 1 | { 2 | "customizations": { 3 | "packages": [ 4 | ] 5 | }, 6 | "distribution": "rhel-8", 7 | "image_requests": [ 8 | { 9 | "architecture": "x86_64", 10 | "image_type": "edge-installer", 11 | "ostree": { 12 | "url": "https://BUCKET_NAME.s3.REGION.amazonaws.com/repo/", 13 | "ref": "rhel/8/x86_64/edge" 14 | }, 15 | "upload_request": { 16 | "options": { 17 | }, 18 | "type": "aws.s3" 19 | } 20 | } 21 | ] 22 | } -------------------------------------------------------------------------------- /demos/ibaas-demo/data-ostree.json.template: -------------------------------------------------------------------------------- 1 | { 2 | "customizations": { 3 | "payload_repositories": [ 4 | { 5 | "rhsm": true, 6 | "baseurl": "https://cdn.redhat.com/content/dist/layered/rhel8/x86_64/fast-datapath/os" 7 | }, 8 | { 9 | "rhsm": true, 10 | 
"baseurl": "https://cdn.redhat.com/content/dist/layered/rhel8/x86_64/rhocp/4.11/os" 11 | }, 12 | { 13 | "rhsm": false, 14 | "baseurl": "https://BUCKET_NAME.s3.REGION.amazonaws.com/" 15 | } 16 | 17 | ], 18 | "packages": [ 19 | "microshift", 20 | "cri-o", 21 | "openvswitch2.17", 22 | "openshift-clients" 23 | ] 24 | }, 25 | "distribution": "rhel-8", 26 | "image_requests": [ 27 | { 28 | "architecture": "x86_64", 29 | "image_type": "edge-commit", 30 | "ostree": {}, 31 | "upload_request": { 32 | "options": { 33 | }, 34 | "type": "aws.s3" 35 | } 36 | } 37 | ] 38 | } -------------------------------------------------------------------------------- /demos/ibaas-demo/kickstart.ks.template: -------------------------------------------------------------------------------- 1 | lang en_US.UTF-8 2 | keyboard us 3 | timezone UTC 4 | text 5 | reboot 6 | 7 | # Configure network to use DHCP and activate on boot 8 | network --bootproto=dhcp --device=link --activate --onboot=on 9 | 10 | # Partition disk with a 1GB boot XFS partition and an LVM volume containing a 8GB+ system root 11 | # The remainder of the volume will be used by the CSI driver for storing data 12 | # 13 | # For example, a 20GB disk would be partitioned in the following way: 14 | # 15 | # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 16 | # sda 8:0 0 20G 0 disk 17 | # ├─sda1 8:1 0 1G 0 part /boot 18 | # └─sda2 8:2 0 19G 0 part 19 | # └─rhel-root 253:0 0 8G 0 lvm /sysroot 20 | # 21 | zerombr 22 | clearpart --all --initlabel 23 | part /boot --fstype=xfs --asprimary --size=1024 24 | # Uncomment this line to add a SWAP partition of the recommended size 25 | #part swap --fstype=swap --recommended 26 | part pv.01 --grow 27 | volgroup rhel pv.01 28 | logvol / --vgname=rhel --fstype=xfs --size=8192 --name=root 29 | 30 | # Configure ostree 31 | ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=rhel/8/x86_64/edge 32 | 33 | %post --log=/var/log/anaconda/post-install.log --erroronfail 34 | 35 
| # Replace the ostree server name 36 | echo -e 'url=http://BUCKET_NAME.s3.REGION.amazonaws.com/repo/' >> /etc/ostree/remotes.d/edge.conf 37 | 38 | # The pull secret is mandatory for MicroShift builds on top of OpenShift, but not OKD 39 | # The /etc/crio/crio.conf.d/microshift.conf references the /etc/crio/openshift-pull-secret file 40 | cat > /etc/crio/openshift-pull-secret << EOF 41 | PULL_SECRET_CONTENT 42 | EOF 43 | chmod 600 /etc/crio/openshift-pull-secret 44 | 45 | # Create a default redhat user, allowing it to run sudo commands without password 46 | useradd -m -d /home/redhat -p \$5\$XDVQ6DxT8S5YWLV7\$8f2om5JfjK56v9ofUkUAwZXTxJl3Sqnc9yPnza4xoJ0 -G wheel redhat 47 | echo -e 'redhat\tALL=(ALL)\tNOPASSWD: ALL' >> /etc/sudoers 48 | 49 | # Make sure redhat user directory contents ownership is correct 50 | chown -R redhat:redhat /home/redhat/ 51 | 52 | # Configure the firewall (rules reload is not necessary here) 53 | firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 54 | firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 55 | 56 | echo -e 'export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig' >> /root/.profile 57 | 58 | HOSTNAME=micro-$(uuidgen | cut -f 1 -d \-).local 59 | echo $HOSTNAME > /etc/hostname 60 | echo 127.0.0.1 $HOSTNAME >> /etc/hosts 61 | 62 | %end -------------------------------------------------------------------------------- /demos/ibaas-demo/run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | DEMOROOT="${REPOROOT}/demos/ibaas-demo" 7 | RHUSER=$1 8 | PASSWD=$2 9 | BUCKET_NAME=$3 10 | PULL_SECRET=$4 11 | 12 | title() { 13 | echo -e "\E[34m\n# $1\E[00m"; 14 | } 15 | 16 | function create_iso { 17 | title "Creating the edge-installer ISO using the Red Hat hosted Image Builder service" 18 | sleep 10 19 | ID=$(curl -H "Content-Type: application/json" -X POST -u "$RHUSER":"$PASSWD" 
-d@data-edge-installer.json https://console.redhat.com/api/image-builder/v1/compose | jq -r .id) 20 | echo $ID > edge-installer-iso-id 21 | } 22 | 23 | # Usage function to explain arguments required for this script 24 | function usage() { 25 | echo "Usage: $(basename $0) RHUSER PASSWORD BUCKET-NAME PULLSECRET" 26 | echo "RHUSER: User name of your Red Hat account" 27 | echo "PASSWORD: Password of your Red Hat account" 28 | echo "BUCKET-NAME: Name of the S3 bucket to host the RPMs/ostree-commit" 29 | echo "PULLSECRET: Path to the pull secret file" 30 | exit 1 31 | } 32 | 33 | # Check if the required arguments are provided 34 | if [[ -z "${RHUSER}" || -z "${PASSWD}" || -z "${BUCKET_NAME}" || -z "${PULL_SECRET}" ]]; then 35 | usage 36 | fi 37 | 38 | title "Building MicroShift RPMs from the latest commit and creating a repo" 39 | ${REPOROOT}/scripts/build-latest-rpms 40 | 41 | title "Preparing S3 bucket to host the RPMs/ostree-commit" 42 | ${REPOROOT}/scripts/prepare-aws-bucket "${BUCKET_NAME}" 43 | 44 | # Replace placeholders in data-ostree.json 45 | cp data-ostree.json.template data-ostree.json 46 | sed -i "s|BUCKET_NAME|${BUCKET_NAME}|g" data-ostree.json 47 | REGION=$(aws configure get region) 48 | sed -i "s|REGION|${REGION}|g" data-ostree.json 49 | 50 | title "Creating the ostree-commit using the Red Hat hosted Image Builder service" 51 | ID=$(curl -H "Content-Type: application/json" -X POST -u "$RHUSER":"$PASSWD" -d@data-ostree.json https://console.redhat.com/api/image-builder/v1/compose | jq -r .id) 52 | echo $ID > ostree-commit-id 53 | 54 | # Wait for the ostree-commit to be ready 55 | while true; do 56 | STATUS=$(curl -u "$RHUSER":"$PASSWD" https://console.redhat.com/api/image-builder/v1/composes/"$ID" | jq -r '.image_status.status') 57 | case $STATUS in 58 | "success") 59 | echo "Creation of ostree-commit successful" 60 | break 61 | ;; 62 | "failure") 63 | echo "Building the ostree-commit failed" 64 | exit 1 65 | ;; 66 | *) 67 | echo "Status: $STATUS Waiting for image build to complete" 68 | sleep 10 69 | ;; 70 | esac 71 | done 72 | 73 | # Download the
ostree-commit 74 | title "Downloading the ostree-commit" 75 | IMAGE=$(curl -u "$RHUSER":"$PASSWD" https://console.redhat.com/api/image-builder/v1/composes/"$ID" | jq -r '.image_status.upload_status.options.url') 76 | rm -rf ostree-commit.tar ostree-commit 77 | curl -o ostree-commit.tar "$IMAGE" 78 | 79 | # Extract the ostree-commit into a directory 80 | title "Extracting the ostree-commit" 81 | mkdir -p ostree-commit 82 | tar -xf ostree-commit.tar -C ostree-commit 83 | 84 | # Check that the ostree-commit contains the reference file 85 | if [[ ! -f ostree-commit/repo/refs/heads/rhel/8/x86_64/edge ]]; then 86 | echo "The ostree-commit does not contain the reference file." 87 | echo "Please try to build the ostree-commit again, or just run this script again." 88 | exit 1 89 | fi 90 | 91 | # Sync ostree-commit to S3 bucket with public ACL 92 | title "Syncing ostree-commit to S3 bucket" 93 | aws s3 sync ostree-commit/ "s3://${BUCKET_NAME}" --acl public-read 94 | sleep 10 95 | 96 | # Replace placeholders in data-edge-installer.json 97 | cp data-edge-installer.json.template data-edge-installer.json 98 | sed -i "s|BUCKET_NAME|${BUCKET_NAME}|g" data-edge-installer.json 99 | sed -i "s|REGION|${REGION}|g" data-edge-installer.json 100 | 101 | # Create the edge-installer ISO using the Red Hat hosted Image Builder service 102 | create_iso 103 | 104 | # Wait for the edge-installer ISO to be ready 105 | RETRIES=20 106 | while true; do 107 | ID=$(cat edge-installer-iso-id) 108 | STATUS=$(curl -u "$RHUSER":"$PASSWD" https://console.redhat.com/api/image-builder/v1/composes/"$ID" | jq -r '.image_status.status') 109 | case $STATUS in 110 | "success") 111 | echo "Creation of edge-installer ISO successful" 112 | break 113 | ;; 114 | "failure") 115 | echo "Building the edge-installer ISO failed. Retrying..." 116 | RETRIES=$((RETRIES-1)) 117 | echo $RETRIES 118 | if [ $RETRIES -eq 0 ]; then 119 | echo "Exceeded the maximum number of retry attempts. Exiting..."
120 | exit 1 121 | fi 122 | create_iso 123 | ;; 124 | *) 125 | echo "Status: $STATUS Waiting for image build to complete" 126 | sleep 10 127 | ;; 128 | esac 129 | done 130 | 131 | # Download the edge-installer ISO 132 | title "Downloading the edge-installer ISO" 133 | ID=$(cat edge-installer-iso-id) 134 | IMAGE=$(curl -u "$RHUSER":"$PASSWD" https://console.redhat.com/api/image-builder/v1/composes/"$ID" | jq -r '.image_status.upload_status.options.url') 135 | rm -rf edge-installer.iso 136 | curl -o edge-installer.iso "$IMAGE" 137 | 138 | # Read pull secret file content 139 | PULL_SECRET_CONTENT=$(cat $PULL_SECRET) 140 | 141 | # Replace placeholders in kickstart.ks 142 | cp kickstart.ks.template kickstart.ks 143 | sed -i "s|BUCKET_NAME|${BUCKET_NAME}|g" kickstart.ks 144 | sed -i "s|REGION|${REGION}|g" kickstart.ks 145 | sed -i "s|PULL_SECRET_CONTENT|${PULL_SECRET_CONTENT}|g" kickstart.ks 146 | 147 | # Embed specific kickstart file to configure the edge-installer ISO for MicroShift 148 | title "Embedding kickstart file to configure the edge-installer ISO for MicroShift" 149 | mkdir -p ${REPOROOT}/builds/iso 150 | sudo mkksiso kickstart.ks edge-installer.iso ${REPOROOT}/builds/iso/microshift-edge-installer.iso 151 | 152 | title "MicroShift ISO is ready at ${REPOROOT}/builds/iso/microshift-edge-installer.iso" 153 | -------------------------------------------------------------------------------- /demos/ostree-demo/README.md: -------------------------------------------------------------------------------- 1 | # OSTree Demo 2 | 3 | This demo introduces core technologies of RHEL for Edge, such as ImageBuilder, rpm-ostree, and greenboot. It then shows how to embed MicroShift into an ostree and deploy it. 4 | 5 | Note the demo is deliberately low-level, walking through how to build OS images from the command line using the `composer-cli` tool, as this is what one would use to automate a GitOps pipeline for OS images. 
For building images graphically use the [web console](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/composing_installing_and_managing_rhel_for_edge_images/index#creating-a-blueprint-for-rhel-for-edge-images-using-web-console_composing-rhel-for-edge-images-using-image-builder-in-rhel-web-console) instead. For a full GitOps automation, see the [RHEL for Edge Automation Framework](https://github.com/redhat-cop/rhel-edge-automation-arch). 6 | 7 | ## Preparing the demo 8 | 9 | Follow the instructions for [building demo images on a RHEL machine](https://github.com/redhat-et/microshift-demos/tree/main/README.md) up to the point of having mirrored the repos, but do not build the `ostree-demo` artefacts yet. 10 | 11 | For this demo, you also need a GitHub repo from which you will configure the RHEL edge device running MicroShift via GitOps. Fork the demo's GitOps repo https://github.com/redhat-et/microshift-config into your own org and define the GITOPS_REPO environment variable accordingly: 12 | 13 | export GITOPS_REPO="https://github.com/MY_ORG/microshift-config" 14 | 15 | ## Running the demo 16 | 17 | ### Creating a first blueprint, building&serving an ostree 18 | 19 | Have a look at [`demos/ostree-demo/blueprint_v0.0.1.toml`](https://github.com/redhat-et/microshift-demos/tree/main/demos/ostree-demo/blueprint_v0.0.1.toml), which defines a blueprint named `ostree-demo` that adds a few RPM packages for facilitating network troubleshooting to a base RHEL for Edge system, for example `arp`. 
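If you just want to see which packages a blueprint pulls in, the flat structure of these demo blueprints can be scraped with `sed`. A rough sketch, not a general TOML parser (the `blueprint_packages` helper is hypothetical; the first `name = ...` line is the blueprint's own name, so it is skipped):

```shell
#!/bin/bash
# Rough sketch: list the package names declared in a demo blueprint.
# Relies on the flat `name = "..."` lines used by these blueprints;
# the first match (the blueprint's own name) is skipped.
blueprint_packages() {
    sed -n 's/^name = "\(.*\)"$/\1/p' "$1" | tail -n +2
}
```

For example, `blueprint_packages demos/ostree-demo/blueprint_v0.0.1.toml`.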
20 | 21 | In a terminal, use the `composer-cli` tool to list previously uploaded blueprints (you shouldn't have any initially): 22 | 23 | sudo composer-cli blueprints list 24 | 25 | Now upload the v0.0.1 blueprint: 26 | 27 | sudo composer-cli blueprints push demos/ostree-demo/blueprint_v0.0.1.toml 28 | 29 | Start a build of that blueprint into an image of type `edge-container`: 30 | 31 | sudo composer-cli compose start-ostree --ref rhel/8/x86_64/edge ostree-demo edge-container 32 | 33 | Specifying `edge-container` produces an OCI container image that contains both the ostree commit built from the `ostree-demo` blueprint and an `nginx` server that serves the ostree repo. This makes testing, signing, and distributing the repo easy. 34 | 35 | Check the status of the build with 36 | 37 | sudo composer-cli compose status 38 | 39 | Note the compose build id in the output and assign it to `$BUILD_ID`. Once its status is `FINISHED`, download the `edge-container` image using 40 | 41 | sudo composer-cli compose image ${BUILD_ID} 42 | 43 | The downloaded tarball can then be loaded into Podman, tagged, and served locally: 44 | 45 | IMAGE_ID=$(cat ./${BUILD_ID}-container.tar | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') 46 | sudo podman tag ${IMAGE_ID} localhost/ostree-demo:0.0.1 47 | sudo podman run -d --name=ostree-server -p 8080:8080 localhost/ostree-demo:0.0.1 48 | 49 | Check that the web server is running and serving the repo: 50 | 51 | curl http://localhost:8080/repo/config 52 | 53 | If you want, check what the ostree repo looks like: 54 | 55 | sudo podman exec -it ostree-server /bin/bash 56 | ls -l /usr/share/nginx/html 57 | 58 | ### Provisioning a VM with the ostree, looking around 59 | 60 | Use your favorite virtualization solution to create a VM installed using the `builds/ostree-demo/ostree-demo-installer.x86_64.iso`.
For example, to use `libvirt`, run 61 | 62 | ./scripts/configure-virthost 63 | sudo cp ./builds/ostree-demo/ostree-demo-installer.x86_64.iso /var/lib/libvirt/images 64 | ./scripts/provision-device 65 | 66 | Note the VM must be able to reach the web server you're running on Podman. 67 | 68 | After the VM boots, SSH into it and have a look at the ostree filesystem: 69 | 70 | sudo ls -l / # note most dirs are sym-linked to /var or /usr, there are new /ostree and /sysroot dirs 71 | sudo touch /usr/test # /usr and most other dirs are mounted read-only 72 | sudo touch /etc/test # /etc and /var are read-write 73 | 74 | Most of the file system is read-only, which is key to enabling transactional updates and rollbacks. `/var` contains application state and therefore needs to be read-write; it isn't changed during updates and rollbacks. `/etc` contains system configuration and therefore needs to be read-write, too. Its content gets three-way-merged during updates and rollbacks. 75 | 76 | Next, check the status of the ostree 77 | 78 | sudo rpm-ostree status 79 | 80 | You'll get an output that looks somewhat like this: 81 | 82 | State: idle 83 | Deployments: 84 | ● ostree://edge:rhel/8/x86_64/edge 85 | Version: 8.6 (2022-03-02T21:18:55Z) 86 | Commit: 09f7284d4d0045e2529fea8730eb11161b2544ec6a796671e26f5f402699d332 87 | 88 | This means the system has downloaded commit 09f7... from the remote ostree repo "edge" at ref "rhel/8/x86_64/edge" and has booted into it (marked by the ● ).
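When scripting against a device, the booted commit can be pulled out of that status output. A small sketch relying on the textual `Commit:` line format shown above (the `booted_commit` helper is hypothetical; `rpm-ostree status --json` with `jq` would be the more robust route):

```shell
#!/bin/bash
# Sketch: extract the first (booted) deployment's commit ID from
# `rpm-ostree status` text output, using the "Commit:" line format.
booted_commit() {
    sed -n 's/^ *Commit: *//p' | head -n 1
}
```

For example, `sudo rpm-ostree status | booted_commit`.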
89 | 90 | You can check the RPMs that the ostree contains with 91 | 92 | sudo rpm-ostree db list rhel/8/x86_64/edge 93 | 94 | You can also check whether new updates are available, which is currently not the case, of course: 95 | 96 | sudo rpm-ostree upgrade --check 97 | 98 | ### Updating the blueprint, updating and rolling back the device 99 | 100 | Next, assume the operations team updates the blueprint to add the `iotop` package (see [`demos/ostree-demo/blueprint_v0.0.2.toml`](https://github.com/redhat-et/microshift-demos/tree/main/demos/ostree-demo/blueprint_v0.0.2.toml)), builds the updated ostree and publishes it. 101 | 102 | For simplicity, now run the build script to build the remaining artefacts. Once complete, note you have a new ostree-tarball `builds/ostree-demo/ostree-demo-0.0.2-container.tar`. 103 | 104 | Now let's serve the updated ostree using Podman. On the _builder machine_ run: 105 | 106 | IMAGE_ID=$(cat ./builds/ostree-demo/ostree-demo-0.0.2-container.tar | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') 107 | sudo podman tag ${IMAGE_ID} localhost/ostree-demo:0.0.2 108 | sudo podman rm -f ostree-server 109 | sudo podman run -d --name=ostree-server -p 8080:8080 localhost/ostree-demo:0.0.2 110 | 111 | Now back on your VM console, check for available updates: 112 | 113 | sudo rpm-ostree upgrade --check 114 | 115 | You'll see a new ostree commit being available and can check what changes it contains: 116 | 117 | sudo rpm-ostree upgrade --preview 118 | 119 | You'll notice the update correctly adds the `iotop` package, but apparently someone made a mistake and accidentally removed the `bind-utils` package, which provides the `dig` DNS client, from the updated blueprint. Suppose `dig` was critical for the device's operation, for example to find its management system.
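Mistakes like the dropped `bind-utils` can also be caught mechanically by diffing the `rpm-ostree db list` output of the two versions. A sketch comparing two saved package lists (the `removed_packages` helper is hypothetical):

```shell
#!/bin/bash
# Sketch: print packages present in the old package list ($1) but
# missing from the new one ($2). Inputs are plain one-per-line lists,
# e.g. saved output of `rpm-ostree db list` for each ref.
removed_packages() {
    grep -Fvxf "$2" "$1"
}
```

For example, `removed_packages old-pkgs.txt new-pkgs.txt` would print `bind-utils` in this scenario.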
120 | 121 | Verify that `dig` is still installed in the current version: 122 | 123 | which dig 124 | 125 | Now let's stage the upgrade to the "broken" version 0.0.2: 126 | 127 | sudo rpm-ostree upgrade 128 | 129 | Checking the ostree status, you'll note the system now has two ostree commits, the new one being top of the list (it'll be booted by default) but not yet active (it has no ● ): 130 | 131 | sudo rpm-ostree status 132 | 133 | Note also that `dig` is still present on the system: 134 | 135 | which dig 136 | 137 | Now reboot into the updated system: 138 | 139 | sudo systemctl reboot 140 | 141 | It just takes seconds until you can SSH back into the VM and verify the system has updated. You'll also notice `dig` is now missing from the system. Not good. Let's roll the system back to the previous ostree version: 142 | 143 | sudo rpm-ostree rollback 144 | sudo systemctl reboot 145 | 146 | Again this just takes seconds. Verify the system is on the original ostree and has `dig` available again. 147 | 148 | ### Rolling back automatically using greenboot 149 | 150 | RHEL for Edge provides the `greenboot` tool that will run user-defined health checks and automatically roll back a system update if those checks fail during multiple attempts. Let's add a check that fails when `dig` is not present on the system: 151 | 152 | sudo tee /etc/greenboot/check/required.d/01_check_deps.sh > /dev/null <<'EOF' 153 | #!/bin/bash 154 | 155 | if [ -x /usr/bin/dig ]; then 156 | echo "dig found, check passed!" 157 | exit 0 158 | else 159 | echo "dig not found, check failed!"
160 | exit 1 161 | fi 162 | EOF 163 | sudo chmod +x /etc/greenboot/check/required.d/01_check_deps.sh 164 | 165 | Let's also add some logging for failed health checks: 166 | 167 | sudo tee /etc/greenboot/red.d/bootfail.sh > /dev/null <<'EOF' 168 | #!/bin/bash 169 | 170 | LOG="/var/roothome/greenboot.log" 171 | 172 | echo "greenboot detected a boot failure" >> $LOG 173 | date >> $LOG 174 | grub2-editenv list | grep boot_counter >> $LOG 175 | echo "----------------" >> $LOG 176 | echo "" >> $LOG 177 | EOF 178 | sudo chmod +x /etc/greenboot/red.d/bootfail.sh 179 | 180 | Let's retry the upgrade to the broken v0.0.2: 181 | 182 | sudo rpm-ostree upgrade 183 | sudo systemctl reboot 184 | 185 | Watching the VM's console, you'll notice repeated attempts to boot into the updated system with the health check failing each time as `dig` is not present. After the third failed attempt, the system gets rolled back and booted into a "working" state again. 186 | 187 | ### Embedding and rolling out MicroShift 188 | 189 | Next, let's add MicroShift to the blueprint (see [`demos/ostree-demo/blueprint_v0.0.3.toml`](https://github.com/redhat-et/microshift-demos/tree/main/demos/ostree-demo/blueprint_v0.0.3.toml)) and "publish" the updated ostree repo. 
190 | 191 | On the _builder machine_ run: 192 | 193 | IMAGE_ID=$(cat ./builds/ostree-demo/ostree-demo-0.0.3-container.tar | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') 194 | sudo podman tag ${IMAGE_ID} localhost/ostree-demo:0.0.3 195 | sudo podman rm -f ostree-server 196 | sudo podman run -d --name=ostree-server -p 8080:8080 localhost/ostree-demo:0.0.3 197 | 198 | Back on your VM console, upgrade the system to the latest ostree version: 199 | 200 | sudo rpm-ostree upgrade 201 | sudo systemctl reboot 202 | 203 | You can now verify on the VM that MicroShift is installed and is starting up: 204 | 205 | systemctl status microshift 206 | 207 | After a minute or so, you should be able to see the cluster running: 208 | 209 | mkdir -p ~/.kube 210 | sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config 211 | oc get all -A 212 | 213 | ### Deploying workloads, connecting to Advanced Cluster Manager 214 | 215 | For security reasons, production systems should not allow remote access via SSH or the Kube API (e.g. `kubectl` or `oc`). Instead, you should use a device management agent of your choice to pull updates from your management system and apply them locally, for example to drop your workload's manifests into `/etc/microshift/manifests` with a `kustomization.yaml` and restart the MicroShift service. 216 | 217 | For this demo, we'll use the [Transmission](https://github.com/redhat-et/transmission) agent as a lightweight way of configuring the devices using GitOps. Note the blueprint v0.0.3 already added this agent. On the VM, check that the Transmission service is running: 218 | 219 | sudo systemctl status transmission 220 | 221 | You'll also notice a journal entry like 222 | 223 | Mar 09 07:45:53 edge transmission[2505790]: 2022-03-09 07:45:53,102 INFO: Running update, URL is https://github.com/redhat-et/microshift-config?ref=89b0aea8-0ec5-e9e0-5644-0cd55b835532. 224 | 225 | and on the login prompt on the VM's console you'll find the same URL.
This points to the `${GITOPS_REPO}` repo you've set up at the very beginning. The Transmission agent on the device tries to clone that repo, check out the branch named after the `${DEVICE_ID}` that uniquely identifies your device (here: 89b0...), and then roll the content of that branch into the running file system. 226 | 227 | If you have an instance of Red Hat Advanced Cluster Management (ACM) running and accessible from your machine via `oc`, then you can clone your GitOps repo, check out the "ostree-demo" branch, and find under `/etc/microshift/manifests` the manifests for installing the ACM `klusterlet` agent and a `kustomization.yaml` for applying them. What's missing is adding the cluster's name and ACM credentials to the manifests. 228 | 229 | On the machine with ACM access, run: 230 | 231 | demo_dir=$(pwd) 232 | git clone "${GITOPS_REPO}" ostree-demo-config 233 | cd ostree-demo-config 234 | git checkout ostree-demo 235 | "${demo_dir}"/demos/ostree-demo/register_cluster.sh "ostree-demo-cluster" 236 | git checkout -b ${DEVICE_ID} 237 | git push origin ${DEVICE_ID} 238 | 239 | A few moments later, you should see your MicroShift cluster registered with ACM, ready to deploy workloads.
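The `${DEVICE_ID}` used in the git commands above is not set by any demo script; it is the `?ref=` query parameter of the URL that Transmission logs (and that appears on the VM's login prompt). A small sketch of pulling it out with bash parameter expansion, using the example URL from the journal entry above:

```shell
#!/bin/bash
# The URL Transmission logs (example value from the journal entry above)
url="https://github.com/redhat-et/microshift-config?ref=89b0aea8-0ec5-e9e0-5644-0cd55b835532"

# Strip everything up to and including "ref=" to get the device ID
DEVICE_ID="${url##*ref=}"
echo "${DEVICE_ID}"   # prints: 89b0aea8-0ec5-e9e0-5644-0cd55b835532
```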
240 | -------------------------------------------------------------------------------- /demos/ostree-demo/blueprint_v0.0.1.toml: -------------------------------------------------------------------------------- 1 | name = "ostree-demo" 2 | 3 | description = "" 4 | version = "0.0.1" 5 | modules = [] 6 | groups = [] 7 | 8 | 9 | # troubleshooting tools 10 | 11 | [[packages]] 12 | name = "iputils" 13 | version = "*" 14 | 15 | [[packages]] 16 | name = "bind-utils" 17 | version = "*" 18 | 19 | [[packages]] 20 | name = "net-tools" 21 | version = "*" 22 | 23 | 24 | # other 25 | 26 | [[packages]] 27 | name = "redhat-release" 28 | version = "*" 29 | -------------------------------------------------------------------------------- /demos/ostree-demo/blueprint_v0.0.2.toml: -------------------------------------------------------------------------------- 1 | name = "ostree-demo" 2 | 3 | description = "" 4 | version = "0.0.2" 5 | modules = [] 6 | groups = [] 7 | 8 | 9 | # troubleshooting tools 10 | 11 | [[packages]] 12 | name = "iputils" 13 | version = "*" 14 | 15 | [[packages]] 16 | name = "net-tools" 17 | version = "*" 18 | 19 | [[packages]] 20 | name = "iotop" 21 | version = "*" 22 | 23 | 24 | # other 25 | 26 | [[packages]] 27 | name = "redhat-release" 28 | version = "*" 29 | -------------------------------------------------------------------------------- /demos/ostree-demo/blueprint_v0.0.3.toml: -------------------------------------------------------------------------------- 1 | name = "ostree-demo" 2 | 3 | description = "" 4 | version = "0.0.3" 5 | modules = [] 6 | groups = [] 7 | 8 | 9 | # MicroShift and oc client 10 | 11 | [[packages]] 12 | name = "microshift" 13 | version = "*" 14 | 15 | [[packages]] 16 | name = "openshift-clients" 17 | version = "*" 18 | 19 | 20 | # configuration management 21 | 22 | [[packages]] 23 | name = "transmission-agent" 24 | version = "0.1.6" 25 | 26 | [[packages]] 27 | name = "git" 28 | version = "*" 29 | 30 | 31 | # troubleshooting tools 32 | 33 
| [[packages]] 34 | name = "iputils" 35 | version = "*" 36 | 37 | [[packages]] 38 | name = "bind-utils" 39 | version = "*" 40 | 41 | [[packages]] 42 | name = "net-tools" 43 | version = "*" 44 | 45 | [[packages]] 46 | name = "iotop" 47 | version = "*" 48 | 49 | 50 | # other 51 | 52 | [[packages]] 53 | name = "redhat-release" 54 | version = "*" 55 | 56 | 57 | # customizations 58 | 59 | [customizations] 60 | 61 | [customizations.services] 62 | enabled = ["transmission.timer", "microshift"] 63 | -------------------------------------------------------------------------------- /demos/ostree-demo/kickstart.ks.tmpl: -------------------------------------------------------------------------------- 1 | lang en_US.UTF-8 2 | keyboard us 3 | timezone UTC 4 | text 5 | reboot 6 | 7 | # Configure network to use DHCP and activate on boot 8 | network --bootproto=dhcp --device=link --activate --onboot=on --hostname=edge.local 9 | 10 | # Partition disk with a 1GB boot XFS partition and an LVM volume containing a 8GB+ system root 11 | # The remainder of the volume will be used by the CSI driver for storing data 12 | # 13 | # For example, a 20GB disk would be partitioned in the following way: 14 | # 15 | # NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT 16 | # sda 8:0 0 20G 0 disk 17 | # ├─sda1 8:1 0 1G 0 part /boot 18 | # └─sda2 8:2 0 19G 0 part 19 | # └─rhel-root 253:0 0 8G 0 lvm /sysroot 20 | # 21 | zerombr 22 | clearpart --all --initlabel 23 | part /boot/efi --fstype=efi --size=200 24 | part /boot --fstype=xfs --asprimary --size=800 25 | # Uncomment this line to add a SWAP partition of the recommended size 26 | #part swap --fstype=swap --recommended 27 | part pv.01 --grow 28 | volgroup rhel pv.01 29 | logvol / --vgname=rhel --fstype=xfs --size=8192 --name=root 30 | 31 | # Configure ostree 32 | ostreesetup --nogpg --osname=rhel --remote=edge --url=file:///run/install/repo/ostree/repo --ref=$OSTREE_REF 33 | 34 | 35 | %post --log=/var/log/anaconda/post-install.log --erroronfail 36 | 37 | # Add 
the default user and enable passwordless sudo. Add password and/or authorized keys if configured. 38 | useradd -m -d "/home/$USER_NAME" -G wheel "$USER_NAME" 39 | [ -n '$USER_PASS_ENCRYPTED' ] && usermod -p '$USER_PASS_ENCRYPTED' "$USER_NAME" 40 | if [ -n '$USER_AUTHORIZED_KEY' ]; then 41 | mkdir -p "/home/$USER_NAME/.ssh" 42 | chmod 755 "/home/$USER_NAME/.ssh" 43 | tee "/home/$USER_NAME/.ssh/authorized_keys" > /dev/null << EOAUTHKEYS 44 | $USER_AUTHORIZED_KEY 45 | EOAUTHKEYS 46 | chown -R "$USER_NAME". "/home/$USER_NAME/.ssh" 47 | fi 48 | echo -e "$USER_NAME\tALL=(ALL)\tNOPASSWD: ALL" >> /etc/sudoers 49 | 50 | # Configure where rpm-ostree looks for ostree updates 51 | echo -e 'url=$OSTREE_REPO_URL' >> /etc/ostree/remotes.d/edge.conf 52 | 53 | # Configure where transmission-agent looks for config updates 54 | echo -e '$TRANSMISSION_URL' > /etc/transmission-url 55 | 56 | # The pull secret is mandatory for MicroShift builds on top of OpenShift, but not OKD 57 | # The /etc/crio/crio.conf.d/microshift.conf references the /etc/crio/openshift-pull-secret file 58 | mkdir -p /etc/crio 59 | cat > /etc/crio/openshift-pull-secret << EOPULLSECRET 60 | $OCP_PULL_SECRET_CONTENTS 61 | EOPULLSECRET 62 | chmod 600 /etc/crio/openshift-pull-secret 63 | 64 | # Configure the mandatory firewall rules that cannot (yet) be configured from blueprint 65 | firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16 66 | firewall-offline-cmd --zone=trusted --add-source=169.254.169.1 67 | 68 | %end 69 | -------------------------------------------------------------------------------- /demos/ostree-demo/source_transmission.toml: -------------------------------------------------------------------------------- 1 | id = "transmission" 2 | name = "Transmission" 3 | type = "yum-baseurl" 4 | url = "http://cdn.redhat.edge-lab.net/content/rpms/rhel/8" 5 | check_gpg = false 6 | check_ssl = false 7 | system = false 8 | -------------------------------------------------------------------------------- /scripts/build: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | title() { echo -e 
"\E[34m# $1\E[00m"; } 6 | fatal() { echo -e "\E[31mError: $1\E[00m"; exit 1; } 7 | 8 | trap 'fatal "command on line ${LINENO} exited with code $?: $BASH_COMMAND"' ERR 9 | 10 | 11 | REPOROOT="$(git rev-parse --show-toplevel)" 12 | DEMOS=$(cd "${REPOROOT}/demos"; find * -maxdepth 1 -type d | xargs) 13 | 14 | DISTRO=$(grep "^ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 15 | DISTRO_VERSION=$(grep "^VERSION_ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 16 | [[ "$DISTRO_VERSION" =~ ^([0-9]{1,})\.([0-9]{1,})$ ]] || fatal "Invalid OS version string (have: ${DISTRO_VERSION})" 17 | DISTRO_VERSION_MAJOR=${BASH_REMATCH[1]} 18 | DISTRO_VERSION_MINOR=${BASH_REMATCH[2]} 19 | 20 | OSTREE_REF="${DISTRO}/${DISTRO_VERSION_MAJOR}/$(uname -i)/edge" 21 | 22 | 23 | usage() { 24 | local error_message="$1" 25 | 26 | echo "Usage: $(basename "$0") " 27 | [ -n "$error_message" ] && fatal "${error_message}" 28 | exit 0 29 | } 30 | 31 | if [[ $# -ne 1 || " ${DEMOS} " != *"$1"* ]]; then 32 | usage "Must specify demo name (one of [${DEMOS}])." 33 | fi 34 | 35 | 36 | DEMONAME=$1 37 | DEMODIR="${REPOROOT}/demos/${DEMONAME}" 38 | BUILDDIR="${REPOROOT}/builds/${DEMONAME}" 39 | 40 | PRIMARY_IP=$(ip route get 8.8.8.8 | head -1 | cut -d' ' -f 7) 41 | 42 | SSH_PUBLIC_KEY_FILE=${SSH_PUBLIC_KEY_FILE:-${BUILDDIR}/id_demo.pub} 43 | PASSWORD_FILE=${PASSWORD_FILE:-${BUILDDIR}/password} 44 | GITOPS_REPO=${GITOPS_REPO:-https://github.com/redhat-et/microshift-config} 45 | OSTREE_SERVER_URL=${OSTREE_SERVER_URL:-http://${PRIMARY_IP}:8080} 46 | OCP_PULL_SECRET_FILE=${OCP_PULL_SECRET_FILE:-$HOME/.pull-secret.json} 47 | [ ! -s "${OCP_PULL_SECRET_FILE}" ] && usage "Empty or missing pull secret file ${OCP_PULL_SECRET_FILE}" 48 | 49 | 50 | # Adds a source repo using the standard osbuild-composer mechanism (.toml file). 
51 | add_repo() { 52 | local source=$1 53 | 54 | id="$(grep -Po '^\s?id\s?=\s?"\K[^"]+' "${source}" | head -n 1)" 55 | title "Adding source '${id}' to builder" 56 | 57 | # This is a workaround for a current osbuild-composer limitation which is that 58 | # it does not correctly handle baseurls with yum variables like $basearch, 59 | # so we need to render these vars beforehand 60 | rendered_source=${BUILDDIR}/$(basename "${source}") 61 | sed \ 62 | -e "s|\$basearch|$(uname -i)|g" \ 63 | -e "s|\$releasever|${DISTRO_VERSION_MAJOR}|g" \ 64 | "${source}" > "${rendered_source}" 65 | 66 | sudo composer-cli sources delete "${id}" 2>/dev/null || true 67 | sudo composer-cli sources add "${rendered_source}" 68 | } 69 | 70 | load_blueprint() { 71 | local name=$1 72 | local file=$2 73 | 74 | sudo composer-cli blueprints delete "${name}" 2>/dev/null || true 75 | sudo composer-cli blueprints push "${file}" 76 | sudo composer-cli blueprints depsolve "${name}" 77 | } 78 | 79 | waitfor_image() { 80 | local uuid=$1 81 | 82 | local tstart=$(date +%s) 83 | echo "$(date +'%Y-%m-%d %H:%M:%S') STARTED" 84 | 85 | local status=$(sudo composer-cli compose info --json "${uuid}" | jq -r '.body.queue_status') 86 | while [ "${status}" = RUNNING ] || [ "${status}" = WAITING ]; do 87 | sleep 10 88 | status=$(sudo composer-cli compose info --json "${uuid}" | jq -r '.body.queue_status') 89 | echo -en "$(date +'%Y-%m-%d %H:%M:%S') ${status}\r" 90 | done 91 | 92 | local tend=$(date +%s) 93 | echo "$(date +'%Y-%m-%d %H:%M:%S') ${status} - elapsed $(( (tend - tstart) / 60 )) minutes" 94 | 95 | if [ "${status}" = FAILED ]; then 96 | download_image "${uuid}" 1 97 | echo "Blueprint build has failed. 
For more information, review the downloaded logs" 98 | exit 1 99 | fi 100 | } 101 | 102 | download_image() { 103 | local uuid=$1 104 | 105 | sudo composer-cli compose logs "${uuid}" 106 | sudo composer-cli compose metadata "${uuid}" 107 | sudo composer-cli compose image "${uuid}" 108 | } 109 | 110 | build_image() { 111 | local blueprint_file=$1 112 | local blueprint=$2 113 | local version=$3 114 | local image_type=$4 115 | local parent_blueprint=$5 116 | local parent_version=$6 117 | 118 | title "Loading ${blueprint} blueprint v${version}" 119 | load_blueprint "${blueprint}" "${blueprint_file}" 120 | 121 | if [ -n "$parent_version" ]; then 122 | title "Serving ${parent_blueprint} v${parent_version} container locally" 123 | sudo podman rm -f ostree-server 2>/dev/null || true 124 | sudo podman rmi -f "localhost/${parent_blueprint}:${parent_version}" 2>/dev/null || true 125 | imageid=$(cat "./${parent_blueprint}-${parent_version}-container.tar" | sudo podman load | grep -o -P '(?<=sha256[@:])[a-z0-9]*') 126 | sudo podman tag "${imageid}" "localhost/${parent_blueprint}:${parent_version}" 127 | sudo podman run -d --name=ostree-server -p 8080:8080 "localhost/${parent_blueprint}:${parent_version}" 128 | 129 | title "Building ${image_type} for ${blueprint} v${version}, parent ${parent_blueprint} v${parent_version}" 130 | result=$(sudo composer-cli compose start-ostree --json --ref "${OSTREE_REF}" --url http://localhost:8080/repo/ "${blueprint}" "${image_type}") || true 131 | buildid=$(jq -r '.body.build_id' <<< "${result}") 132 | if [ "${buildid}" = "null" ]; then 133 | fatal "Error starting compose for ${blueprint}: $(jq -r '.body.errors' <<< "${result}")" 134 | fi 135 | else 136 | title "Building ${image_type} for ${blueprint} v${version}" 137 | result=$(sudo composer-cli compose start-ostree --json --ref "${OSTREE_REF}" "${blueprint}" "${image_type}") || true 138 | buildid=$(jq -r '.body.build_id' <<< "${result}") 139 | if [ "${buildid}" = "null" ]; then 140 | fatal 
"Error starting compose for ${blueprint}: $(jq -r '.body.errors' <<< "${result}")" 141 | fi 142 | fi 143 | 144 | waitfor_image "${buildid}" 145 | download_image "${buildid}" 146 | sudo chown "$(whoami)." "${buildid}"*.{tar,iso} 2>/dev/null || true 147 | rename "${buildid}" "${blueprint}-${version}" "${buildid}"*.{tar,iso} 2>/dev/null || true 148 | } 149 | 150 | 151 | # Verify the MicroShift RPMs have been mirrored to disk 152 | result=$(sudo composer-cli sources list --json | jq '.body.sources | index("microshift-local")') 153 | [ "${result}" = null ] && fatal "Did not find local mirror of MicroShift RPMs. Did you run ./scripts/mirror-repos ?" 154 | 155 | # Copy the pull secret to osbuild worker's config dir (in case containers are embedded into blueprint) 156 | sudo mkdir -p /etc/osbuild-worker 157 | sudo cp -- "${OCP_PULL_SECRET_FILE}" /etc/osbuild-worker/pull-secret.json 158 | sudo tee /etc/osbuild-worker/osbuild-worker.toml &>/dev/null </dev/null 166 | 167 | # Add additional repos required by the specific demo 168 | sources=$(shopt -s nullglob; echo "${DEMODIR}"/source_*.toml) 169 | for repo in ${sources}; do 170 | add_repo "${repo}" 171 | done 172 | 173 | # Build images from blueprints in alphabetical order of file names. 174 | # Assumes files are named following the pattern "blueprint_${SOME_VERSION}_${CPU_ARCH}.toml", 175 | # or - if no such pattern exists - instead "blueprint_${SOME_VERSION}.toml" 176 | # Assumes blueprint N is the parent of blueprint N+1. 177 | parent_version="" 178 | root_parent_version="" 179 | blueprints=$(shopt -s nullglob; echo "${DEMODIR}"/blueprint_*_$(uname -i).toml) 180 | if [ -z "${blueprints}" ]; then 181 | blueprints=$(shopt -s nullglob; echo "${DEMODIR}"/blueprint_*.toml) 182 | fi 183 | if [ -z "${blueprints}" ]; then 184 | fatal "${DEMODIR} does not contain a blueprint." 185 | fi 186 | for blueprint in ${blueprints}; do 187 | version="$(echo "${blueprint}" | grep -Po '_v\K(.*)(?=\.toml)')" 188 | if [ ! 
-f "${BUILDDIR}/${DEMONAME}-${version}-container.tar" ]; then 189 | if [ -z "${parent_version}" ]; then 190 | build_image "${blueprint}" "${DEMONAME}" "${version}" edge-container 191 | else 192 | build_image "${blueprint}" "${DEMONAME}" "${version}" edge-container "${DEMONAME}" "${parent_version}" 193 | fi 194 | else 195 | title "Skipping build of ${DEMONAME} v${version}" 196 | fi 197 | parent_version="${version}" 198 | [ -z "${root_parent_version}" ] && root_parent_version="${parent_version}" 199 | done 200 | 201 | # Build the installer ISO if it doesn't exist yet 202 | if [ ! -f "${BUILDDIR}/installer-0.0.0-installer.iso" ]; then 203 | build_image "${REPOROOT}/scripts/shared/installer.toml" "installer" 0.0.0 edge-installer "${DEMONAME}" "${root_parent_version}" 204 | else 205 | title "Skipping build of installer" 206 | fi 207 | 208 | # Embed the kickstart into the installer ISO if there's no ISO containing it yet 209 | if [ ! -f "${BUILDDIR}/${DEMONAME}-installer.$(uname -i).iso" ]; then 210 | title "Embedding kickstart" 211 | if [ -f "${SSH_PUBLIC_KEY_FILE}" ]; then 212 | echo "INFO: Using existing SSH public key ${SSH_PUBLIC_KEY_FILE}" 213 | else 214 | echo "INFO: Generating new SSH key pair ${SSH_PUBLIC_KEY_FILE%.pub}" 215 | ssh-keygen -t ed25519 -C "microshift@edge" -f ${SSH_PUBLIC_KEY_FILE%.pub} -N "" 216 | fi 217 | if [ -f "${PASSWORD_FILE}" ]; then 218 | echo "INFO: Using existing user password file ${PASSWORD_FILE}" 219 | else 220 | echo "INFO: Generating new user password file ${PASSWORD_FILE}" 221 | head -c8 < <(< /dev/urandom tr -dc _A-Z-a-z-0-9) > "${PASSWORD_FILE}" 222 | fi 223 | cat "${DEMODIR}/kickstart.ks.tmpl" | \ 224 | OSTREE_REPO_URL=${OSTREE_SERVER_URL}/repo/ \ 225 | OSTREE_REF=${OSTREE_REF} \ 226 | TRANSMISSION_URL=${GITOPS_REPO}?ref=\${uuid} \ 227 | USER_NAME="microshift" \ 228 | USER_PASS_ENCRYPTED=$(openssl passwd -6 -stdin < "${PASSWORD_FILE}") \ 229 | USER_AUTHORIZED_KEY=$(cat "${SSH_PUBLIC_KEY_FILE}") \ 230 | 
OCP_PULL_SECRET_CONTENTS=$(cat "${OCP_PULL_SECRET_FILE}" | jq -c) \ 231 | envsubst > "${BUILDDIR}/kickstart.ks" 232 | sudo mkksiso kickstart.ks installer-0.0.0-installer.iso "${DEMONAME}-installer.$(uname -i).iso" 233 | sudo chown -R "$(whoami)." "${BUILDDIR}" 234 | else 235 | title "Skipping embedding of kickstart" 236 | fi 237 | 238 | title "Cleaning up local ostree container serving" 239 | sudo podman rm -f ostree-server 2>/dev/null || true 240 | 241 | title "Done" 242 | popd &>/dev/null 243 | -------------------------------------------------------------------------------- /scripts/build-latest-rpms: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | title() { echo -e "\E[34m# $1\E[00m"; } 6 | fatal() { echo -e "\E[31mError: $1\E[00m"; exit 1; } 7 | 8 | 9 | REPOROOT="$(git rev-parse --show-toplevel)" 10 | MICROSHIFT_RPM_DIR="${REPOROOT}/builds/rpms" 11 | 12 | 13 | title "Cloning MicroShift repo" 14 | git clone https://github.com/openshift/microshift.git 15 | 16 | title "Building MicroShift RPMs" 17 | pushd microshift &>/dev/null 18 | make rpm 19 | popd &>/dev/null 20 | 21 | title "Copying RPMs to ${MICROSHIFT_RPM_DIR}" 22 | mkdir -p "${MICROSHIFT_RPM_DIR}" 23 | cp microshift/_output/rpmbuild/RPMS/*/*.rpm "${MICROSHIFT_RPM_DIR}" 24 | 25 | title "Cleaning up" 26 | rm -rf microshift -------------------------------------------------------------------------------- /scripts/cleanup: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | REPOROOT="$(git rev-parse --show-toplevel)" 6 | 7 | title() { 8 | echo -e "\E[34m# $1\E[00m"; 9 | } 10 | 11 | title "Deleting build dir" 12 | sudo rm -rf "${REPOROOT}"/builds 13 | 14 | title "Deleting mirror repo dir" 15 | sudo rm -rf /var/repos 16 | 17 | title "Cancelling and deleting composes" 18 | for uuid in $(sudo composer-cli compose list | awk '{print $1}'); do 19 | echo "Deleting compose 
${uuid}" 20 | sudo composer-cli compose cancel ${uuid} || true 21 | sudo composer-cli compose delete ${uuid} || true 22 | done 23 | 24 | title "Deleting blueprints" 25 | for blueprint in $(sudo composer-cli blueprints list | awk '{print $1}'); do 26 | echo "Deleting blueprint ${blueprint}" 27 | sudo composer-cli blueprints delete ${blueprint} || true 28 | done 29 | 30 | title "Deleting sources" 31 | for source in $(sudo composer-cli sources list | awk '{print $1}'); do 32 | if [[ " baseos appstream " == *" $source "* ]]; then 33 | continue 34 | fi 35 | echo "Deleting source ${source}" 36 | sudo composer-cli sources delete ${source} || true 37 | done 38 | -------------------------------------------------------------------------------- /scripts/configure-builder: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | title() { echo -e "\E[34m# $1\E[00m"; } 6 | fatal() { echo -e "\E[31mError: $1\E[00m"; exit 1; } 7 | 8 | 9 | DISTRO=$(grep "^ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 10 | DISTRO_VERSION=$(grep "^VERSION_ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 11 | [[ "$DISTRO_VERSION" =~ ^([0-9]{1,})\.([0-9]{1,})$ ]] || fatal "Invalid OS version string (have: ${DISTRO_VERSION})" 12 | DISTRO_VERSION_MAJOR=${BASH_REMATCH[1]} 13 | DISTRO_VERSION_MINOR=${BASH_REMATCH[2]} 14 | 15 | 16 | # Verify we are on a supported distro. 17 | case "${DISTRO}-${DISTRO_VERSION_MAJOR}" in 18 | rhel-8) 19 | [ "${DISTRO_VERSION_MINOR}" -lt 7 ] && fatal "RHEL8 version must be >= 8.7 (have: ${DISTRO_VERSION})." 20 | ;; 21 | 22 | rhel-9) 23 | [ "${DISTRO_VERSION_MINOR}" -lt 1 ] && fatal "RHEL9 version must be >= 9.1 (have: ${DISTRO_VERSION})." 24 | ;; 25 | 26 | *) 27 | fatal "\"${DISTRO}\" is not a supported distribution." 
28 | ;; 29 | esac 30 | 31 | title "Installing experimental 'ostree' version (with support for embedding containers)" 32 | # Remove original ostree packages 33 | LIST2REMOVE=$(rpm -qa | grep -E '^ostree' || true) 34 | [ -n "${LIST2REMOVE}" ] && sudo dnf remove -y ${LIST2REMOVE} 35 | 36 | # Clean-up the old osbuild jobs and state to avoid incompatibilities between versions 37 | sudo rm -rf /var/lib/osbuild-composer || true 38 | sudo rm -rf /var/cache/{osbuild-composer,osbuild-worker} || true 39 | 40 | # Add the repo for the experimental ostree packages (they'll be installed with osbuild) 41 | sudo curl --location --output /etc/yum.repos.d/walters-ostreerhel8-centos-stream-8.repo \ 42 | https://copr.fedorainfracloud.org/coprs/walters/ostreerhel8/repo/centos-stream-8/walters-ostreerhel8-centos-stream-8.repo 43 | 44 | title "Installing ImageBuilder tools" 45 | sudo dnf install -y \ 46 | osbuild-composer composer-cli cockpit-composer \ 47 | bash-completion podman genisoimage syslinux \ 48 | createrepo yum-utils selinux-policy-devel jq wget lorax rpm-build 49 | 50 | title "Starting osbuild-composer and cockpit services" 51 | sudo systemctl enable osbuild-composer.socket 52 | sudo systemctl enable cockpit.socket 53 | sudo systemctl restart osbuild-composer.socket 54 | sudo systemctl restart osbuild-local-worker.socket 55 | sudo systemctl restart osbuild-composer.service 56 | 57 | title "Configuring firewall" 58 | sudo firewall-cmd -q --add-service=cockpit --permanent 59 | sudo firewall-cmd --reload 60 | 61 | title "Done" 62 | -------------------------------------------------------------------------------- /scripts/configure-virthost: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -exo pipefail 4 | 5 | sudo dnf module install -y virt 6 | sudo dnf install -y virt-install virt-viewer 7 | 8 | sudo groupadd --system libvirt 9 | sudo usermod -a -G libvirt $(whoami) 10 | sudo sed -i '/unix_sock_group/s/^#//g' 
/etc/libvirt/libvirtd.conf 11 | 12 | sudo systemctl restart libvirtd 13 | sudo virt-host-validate 14 | 15 | sudo firewall-cmd --add-port=5900-5910/tcp --permanent 16 | sudo firewall-cmd --reload 17 | 18 | # sudo ip link add br0 type bridge 19 | 20 | # fzdarsky@lab-05 network-scripts]$ cat ifcfg-eno1 21 | # TYPE=Ethernet 22 | # PROXY_METHOD=none 23 | # BROWSER_ONLY=no 24 | # BOOTPROTO=dhcp 25 | # DEFROUTE=yes 26 | # IPV4_FAILURE_FATAL=no 27 | # IPV6INIT=yes 28 | # IPV6_AUTOCONF=yes 29 | # IPV6_DEFROUTE=yes 30 | # IPV6_FAILURE_FATAL=no 31 | # NAME=eno1 32 | # UUID=5d717232-4a4c-41b3-9960-c9a843b0e315 33 | # DEVICE=eno1 34 | # ONBOOT=yes 35 | # IPV6_PRIVACY=no 36 | 37 | # [fzdarsky@lab-05 network-scripts]$ cat ifcfg-eno1 38 | # TYPE=Ethernet 39 | # BOOTPROTO=none 40 | # NAME=eno1 41 | # UUID=5d717232-4a4c-41b3-9960-c9a843b0e315 42 | # DEVICE=eno1 43 | # ONBOOT=yes 44 | # BRIDGE=br0 45 | # DELAY=0 46 | # NM_CONTROLLED=0 47 | 48 | # [fzdarsky@lab-05 network-scripts]$ cat ifcfg-br0 49 | # DEVICE=br0 50 | # TYPE=Bridge 51 | # BOOTPROTO=none 52 | # IPADDR=192.168.178.105 53 | # GATEWAY=192.168.178.1 54 | # NETMASK=255.255.255.0 55 | # ONBOOT=yes 56 | # DELAY=0 57 | # NM_CONTROLLED=0 58 | -------------------------------------------------------------------------------- /scripts/mirror-repos: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | title() { echo -e "\E[34m# $1\E[00m"; } 6 | fatal() { echo -e "\E[31mError: $1\E[00m"; exit 1; } 7 | 8 | 9 | MICROSHIFT_VERSION="${MICROSHIFT_VERSION:-4.12}" 10 | MICROSHIFT_DEPS_VERSION="${MICROSHIFT_DEPS_VERSION:-${MICROSHIFT_VERSION}}" 11 | MICROSHIFT_DEV_PREVIEW="${MICROSHIFT_DEV_PREVIEW:-false}" 12 | 13 | REPOROOT="$(git rev-parse --show-toplevel)" 14 | MIRROR_DIR="/var/repos/microshift-local" 15 | MICROSHIFT_RPM_DIR="${REPOROOT}/builds/rpms" 16 | 17 | DISTRO=$(grep "^ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 18 | DISTRO_VERSION=$(grep 
"^VERSION_ID=" /etc/os-release | cut -d'=' -f2 | tr -d '"') 19 | [[ "$DISTRO_VERSION" =~ ^([0-9]{1,})\.([0-9]{1,})$ ]] || fatal "Invalid OS version string (have: ${DISTRO_VERSION})" 20 | DISTRO_VERSION_MAJOR=${BASH_REMATCH[1]} 21 | DISTRO_VERSION_MINOR=${BASH_REMATCH[2]} 22 | 23 | 24 | # Verify we are on a supported distro and configuration. 25 | case "${DISTRO}-${DISTRO_VERSION_MAJOR}" in 26 | rhel-8) 27 | [ "${DISTRO_VERSION_MINOR}" -lt 7 ] && fatal "RHEL8 version must be >= 8.7 (have: ${DISTRO_VERSION})." 28 | ;; 29 | 30 | rhel-9) 31 | [ "${DISTRO_VERSION_MINOR}" -lt 1 ] && fatal "RHEL9 version must be >= 9.1 (have: ${DISTRO_VERSION})." 32 | 33 | # On RHEL9, run only if MICROSHIFT_DEV_PREVIEW has been explicitly requested. 34 | [ "${MICROSHIFT_DEV_PREVIEW}" != true ] && fatal "On RHEL9, please use the dev preview repos by running: MICROSHIFT_DEV_PREVIEW=true $0" 35 | ;; 36 | 37 | *) 38 | fatal "\"${DISTRO}\" is not a supported distribution." 39 | ;; 40 | esac 41 | 42 | # Create an empty mirror repo dir. 43 | sudo rm -rf "${MIRROR_DIR}" 2>/dev/null && sudo mkdir -p "${MIRROR_DIR}" 44 | 45 | # Configure the right repos for this distro, version, and release channel 46 | case "${DISTRO}-${DISTRO_VERSION_MAJOR}" in 47 | rhel-8|rhel-9) 48 | if [ "${MICROSHIFT_DEV_PREVIEW}" = true ]; then 49 | # Import Red Hat public keys to allow RPM GPG check (not necessary if a system is registered) 50 | if ! 
sudo subscription-manager status >& /dev/null ; then 51 | sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-* 52 | fi 53 | 54 | # Need to install both the latest MicroShift dev preview builds as well as the beta repo of dependencies 55 | # Use el8 RPMs even on a RHEL9 host until dedicated el9 builds become available 56 | microshift_repo="microshift-${MICROSHIFT_VERSION}-dev-preview-for-rhel-8-$(uname -i)-rpms" 57 | rhocp_repo="rhocp-${MICROSHIFT_DEPS_VERSION}-beta-for-rhel-8-$(uname -i)-rpms" 58 | fastdatapath_repo="fast-datapath-for-rhel-${DISTRO_VERSION_MAJOR}-$(uname -i)-rpms" 59 | title "Configuring ${microshift_repo}, ${rhocp_repo}, and ${fastdatapath_repo} repos" 60 | sudo tee /etc/yum.repos.d/microshift-dev-preview.repo > /dev/null <<EOF 106 | if ls "${MICROSHIFT_RPM_DIR}"/microshift*.rpm >/dev/null 2>&1; then 107 | title "Copying MicroShift RPMs from ${MICROSHIFT_RPM_DIR} into mirror repo." 108 | sudo find "${MIRROR_DIR}" -name \*microshift\* -exec rm -f {} \; 109 | sudo cp "${MICROSHIFT_RPM_DIR}"/microshift*.rpm "${MIRROR_DIR}/${microshift_repo}" 110 | fi 111 | 112 | # Create the repo. 113 | title "Creating the local MicroShift RPM repo" 114 | sudo createrepo "${MIRROR_DIR}" >/dev/null 115 | 116 | # Add the repo as Image Builder source. 
117 | title "Adding the local MicroShift RPM repo as Image Builder source" 118 | sudo tee "${MIRROR_DIR}/source_microshift-local.toml" > /dev/null <<EOF 119 | id = "microshift-local" 120 | name = "MicroShift local repo" 121 | type = "yum-baseurl" 122 | url = "file://${MIRROR_DIR}" 123 | check_gpg = false 124 | check_ssl = false 125 | system = false 126 | EOF 127 | 128 | sudo composer-cli sources delete microshift-local 2>/dev/null || true 129 | sudo composer-cli sources add "${MIRROR_DIR}/source_microshift-local.toml" -------------------------------------------------------------------------------- /scripts/prepare-aws-bucket: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | # Get bucket name as input from user 6 | BUCKET_NAME=$1 7 | REPOROOT="$(git rev-parse --show-toplevel)" 8 | RPMDIR="${REPOROOT}/builds/rpms" 9 | 10 | if [[ -z "${BUCKET_NAME}" ]]; then 11 | echo "Usage: $0 <bucket-name>" 12 | exit 1 13 | fi 14 | 15 | # This script will create an S3 bucket to host the latest MicroShift RPMs 16 | # and the intermediate ostree-commit needed to build the final edge-installer ISO. 17 | 18 | title() { 19 | echo -e "\E[34m# $1\E[00m"; 20 | } 21 | 22 | # Check if the AWS CLI is installed 23 | if ! command -v aws &>/dev/null ; then 24 | echo "The AWS CLI is not installed. Installing..." 25 | sudo dnf install -y awscli 26 | fi 27 | 28 | # Check if the AWS CLI is configured 29 | if ! aws sts get-caller-identity &>/dev/null ; then 30 | echo "The AWS CLI is not configured. Running 'aws configure' to configure it." 31 | aws configure 32 | fi 33 | 34 | # Create bucket if it does not exist 35 | if ! 
aws s3 ls "s3://${BUCKET_NAME}" &>/dev/null; then 36 | echo "Creating bucket ${BUCKET_NAME}" 37 | aws s3 mb "s3://${BUCKET_NAME}" 38 | fi 39 | 40 | # Clean up S3 bucket content 41 | title "Cleaning up S3 bucket content" 42 | aws s3 rm "s3://${BUCKET_NAME}" --recursive 43 | 44 | # Sync RPM directory to S3 bucket with public ACL 45 | title "Syncing RPMs to S3 bucket" 46 | aws s3 sync "${RPMDIR}" "s3://${BUCKET_NAME}" --acl public-read 47 | 48 | -------------------------------------------------------------------------------- /scripts/provision-device: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | DEMOROOT=$(git rev-parse --show-toplevel)/demos/ostree-demo 6 | 7 | # sudo virsh net-define ${DEMOROOT}/device-provision/macvtap-network.xml || true 8 | # sudo virsh net-autostart macvtap-net || true 9 | # sudo virsh net-start macvtap-net || true 10 | 11 | sudo virt-install \ 12 | --name ostree-demo \ 13 | --vcpus 2 \ 14 | --memory 4096 \ 15 | --disk path=/var/lib/libvirt/images/ostree-demo.qcow2,size=20 \ 16 | --network network=default,model=virtio,mac=52:54:00:00:00:01 \ 17 | --os-type linux \ 18 | --os-variant rhel8.5 \ 19 | --cdrom /var/lib/libvirt/images/ostree-demo-installer.x86_64.iso 20 | -------------------------------------------------------------------------------- /scripts/reset-device: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e -o pipefail 4 | 5 | BUILDDIR=$(git rev-parse --show-toplevel)/builds/ostree-demo 6 | 7 | sudo cp ${BUILDDIR}/ostree-demo-installer.x86_64.iso /var/lib/libvirt/images 8 | sudo rm /var/lib/libvirt/images/ostree-demo.qcow2 || true 9 | sudo qemu-img create -f qcow2 /var/lib/libvirt/images/ostree-demo.qcow2 20G 10 | -------------------------------------------------------------------------------- /scripts/shared/installer.toml: 
-------------------------------------------------------------------------------- 1 | name = "installer" 2 | 3 | description = "" 4 | version = "0.0.0" 5 | modules = [] 6 | groups = [] 7 | packages = [] 8 | --------------------------------------------------------------------------------