--------------------------------------------------------------------------------
/docs/architecture.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Architecture
3 | date: 2020-07-22
4 | ---
5 |
6 | # Architecture
7 |
8 | ![Architecture]
9 |
10 | ## Provisioner
11 |
12 | The provisioner machine is the main driver for executing a workflow.
The Provisioner houses and runs the [Tinkerbell stack], acts as the DHCP server, and keeps track of hardware data, templates, and workflows.
14 | You may divide these components into multiple servers that would then all function as your Provisioner.
15 |
16 | ### Provisioner Requirements
17 |
18 | **OS**
19 |
20 | The Tinkerbell stack has been tested on Ubuntu 16.04 and CentOS 7.
21 |
22 | **Minimum Resources**
23 |
24 | - CPU - 2vCPUs
25 | - RAM - 4 GB
- Disk - 20 GB (this includes the OS)
27 |
28 | **Network**
29 |
L2 networking is required in order to run a DHCP server (in this case, Boots).
31 |
32 | ## Worker
33 |
34 | A Worker machine is the target machine, identified by its hardware data, that calls back to the Provisioner and executes its specified workflows.
35 | Any machine that has had its hardware data pushed into Tinkerbell can become a part of a workflow.
36 | A Worker can be a part of multiple workflows.
37 |
38 | ### Worker Requirements
39 |
A Worker machine must meet a few basic requirements in order to boot and call back to the Provisioner:

- It must be able to boot from the network using iPXE.
- It needs at least 4 GB of RAM for OSIE to boot and operate.
44 |
There are no disk requirements for a Worker, since OSIE runs an in-memory operating system.
46 | Your disk requirements will be determined by the OS you are going to install and other use-case considerations.
47 |
48 | [architecture]: images/architecture-diagram.png
49 | [tinkerbell stack]: index.md#whats-powering-tinkerbell
50 |
--------------------------------------------------------------------------------
/docs/hook.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Hook
3 | ---
4 |
5 | # Hook
6 |
Hook is the default lightweight in-memory operating system used to provision a machine.
It is built using [LinuxKit].
That operating system starts an application called [tink-worker], which communicates with [tink-server] to retrieve tasks for execution (tasks that are part of a [workflow]).
10 | The tasks performed by [tink-worker] typically result in a provisioned bare metal machine.
11 |
12 | See the [Hook repository] for more information on its construction.
13 |
14 | ## Customizing Hook
15 |
16 | Some users may need to customize Hook to include additional drivers for their hardware.
17 | Follow the documentation in the [Hook repository] to build a [custom Kernel] and Hook.
18 |
19 |
20 |
21 | ## Bring your own
22 |
23 | You may wish to use your own operating system to provision machines.
24 | To use your own OS with the Tinkerbell stack it must run [tink-worker], a Golang application maintained by the Tinkerbell community.
25 | [tink-worker] is shipped as a Docker container.
26 | Its responsibility is to execute workflow tasks.
27 |
28 |
29 |
30 | [alpine linux]: https://alpinelinux.org
31 | [linuxkit]: https://github.com/linuxkit/linuxkit
32 | [screenshot from the worker]: images/vagrant-setup-vbox-worker.png
33 | [tink]: https://github.com/tinkerbell/tink
34 | [tink-server]: services/tink-server.md
35 | [tink-worker]: services/tink-worker.md
36 | [workflow]: workflows/working-with-workflows.md
37 | [hook repository]: https://github.com/tinkerbell/hook
38 | [custom kernel]: https://github.com/tinkerbell/hook/tree/main/kernel
39 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | > [!IMPORTANT]
> The docs content of this repo has been moved into https://github.com/tinkerbell/tinkerbell.org; please see that repo for further updates.
3 |
4 | # Tinkerbell Docs
5 |
6 | This is the source repo for the Tinkerbell docs.
They are built using the static-site generator [MkDocs] with the [mkdocs-material] theme, then served by Netlify at [docs.tinkerbell.org].
8 |
9 | ## Development
10 |
This repository uses [MkDocs] as the documentation framework and [Poetry] to manage its dependencies.
Make sure you have [Python] installed.
If you wish, you can install `mkdocs` and the other dependencies and build the docs locally by running the following commands:
14 |
15 | ```sh
16 | poetry install
17 | poetry run mkdocs serve
18 | ```
19 |
If you wish to work within a development environment, you can use Poetry's virtualenv:
21 |
22 | ```sh
23 | $ poetry install
24 | $ poetry shell
25 | ```
26 |
27 | To deactivate the virtual environment, simply run `deactivate`.
28 |
29 | ## Contributing to the Tinkerbell Docs
30 |
31 | All the markdown source files for the documentation are in the `docs/` folder.
32 | Find the file that you want to update and edit it.
33 | Then open a Pull Request with your changes.
Make sure that the build passes, and take a look at the Netlify preview to see your changes staged on the website.
35 |
36 | ### Page metadata
37 |
Currently the metadata for each page is YAML formatted, with two fields: `title` and `date`.
39 | If you edit a doc, update the date to when you made your edits.
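For example, a page's front matter might look like this (the values are placeholders):

```markdown
---
title: My New Page
date: 2021-03-12
---
```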
40 |
41 | ### Adding Images
42 |
43 | All the images for the docs are in the `images/` folder.
To pull an image into your doc, use a reference-style link to the image file, defining the path at the bottom of the page.
Example:
46 |
```markdown
![Architecture]

[architecture]: images/architecture-diagram.png
```
51 | ### Adding a page
52 |
53 | If you would like to submit a new page to the documentation, be sure to add it to the `nav` section in mkdocs.yml.
54 | This will ensure that the page appears in the table of contents.
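For example, a hypothetical new page would be registered in `mkdocs.yml` like this:

```yaml
nav:
  - Home: index.md
  # ... existing entries ...
  - My New Page: my-new-page.md
```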
55 |
56 | [architecture]: /images/architecture-diagram.png
57 | [docs.tinkerbell.org]: https://docs.tinkerbell.org/
58 | [mkdocs]: https://www.mkdocs.org/
59 | [mkdocs-material]: https://squidfunk.github.io/mkdocs-material/
60 | [poetry]: https://python-poetry.org/
61 | [python]: https://www.python.org/downloads/
62 | [stability badge]: https://img.shields.io/badge/Stability-Experimental-red.svg
63 |
--------------------------------------------------------------------------------
/docs/services/hegel.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Hegel
3 | date: 2020-08-31
4 | ---
5 |
6 | # Hegel
7 |
8 | Hegel is Tinkerbell's metadata store that can be used during common provisioning processes such as cloud-init.
Metadata is exposed over HTTP and sourced from various fields on the [`Hardware`] custom resource.
10 |
11 | Take a look at the code in the [tinkerbell/hegel] GitHub repository.
12 |
13 | ## Using Hegel
14 |
In a Tinkerbell setup, you can access Hegel at the Provisioner's IP address under the `/metadata` path.
You can use cURL to retrieve the metadata it stores.

By default, Hegel exposes an HTTP API on port `50061`.
You can interact with it via cURL:
20 |
21 | ```sh
22 | curl $HEGEL_IP:50061/metadata
23 | ```
24 |
You can also retrieve the metadata in an AWS EC2-compatible format from `/2009-04-04/meta-data`.

```sh
curl $HEGEL_IP:50061/2009-04-04/meta-data
```
30 |
For example, if you are using the Local Setup with Vagrant, Hegel runs as part of the Provisioner virtual machine with the IP `192.168.1.1`.
When the Worker starts, and once you have logged in to [hook] using the password `root`, you can access the metadata for your server via `curl`:
33 |
34 | ```sh
35 | curl -s "192.168.1.1:50061/metadata" | jq .
36 | {
37 | "facility": {
38 | "facility_code": "onprem"
39 | },
40 | "instance": {},
41 | "state": ""
42 | }
43 | ```
44 |
45 | Or in AWS EC2 format:
46 |
47 | ```sh
48 | curl -s "192.168.1.1:50061/2009-04-04/meta-data"
49 | ```
50 |
If you look at the `hardware-data.json` file used during the Vagrant setup, you will find `facility_code=onprem` there as well.
52 |
53 | ## Other Resources
54 |
55 | Every cloud provider is capable of exposing metadata to servers that you can query as part of your automation, usually via HTTP.
56 | Some examples are:
57 |
58 | - [AWS: Instance metadata and user data]
59 | - [GCP: Storing and retrieving instance metadata]
60 | - [Equinix Metal: Metadata]
61 |
62 | [aws: instance metadata and user data]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
63 | [equinix metal: metadata]: https://metal.equinix.com/developers/docs/servers/metadata/
64 | [gcp: storing and retrieving instance metadata]: https://cloud.google.com/compute/docs/metadata/overview
65 | [hook]: ../hook.md
66 | [tinkerbell/hegel]: https://github.com/tinkerbell/hegel
67 | [`hardware`]: https://github.com/tinkerbell/tink/blob/main/config/crd/bases/tinkerbell.org_hardware.yaml
68 |
--------------------------------------------------------------------------------
/DCO.md:
--------------------------------------------------------------------------------
1 | # DCO Sign Off
2 |
3 | All authors to the project retain copyright to their work. However, to ensure
4 | that they are only submitting work that they have rights to, we are requiring
5 | everyone to acknowledge this by signing their work.
6 |
7 | Since this signature indicates your rights to the contribution and
8 | certifies the statements below, it must contain your real name and
email address. Various forms of noreply email addresses must not be used.
10 |
11 | Any copyright notices in this repository should specify the authors as "The
12 | project authors".
13 |
14 | To sign your work, just add a line like this at the end of your commit message:
15 |
16 | ```text
Signed-off-by: Jess Owens <jess@example.com>
18 | ```
19 |
20 | This can easily be done with the `--signoff` option to `git commit`.
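For example, in a throwaway repository and with a placeholder name and email, the trailer is appended automatically:

```sh
# Demonstrate --signoff in a scratch repository (name/email are placeholders).
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name="Jess Owens" -c user.email="jess@example.com" \
    commit --allow-empty --signoff -m "docs: fix typo"
# Print the full commit message, which now ends with a Signed-off-by trailer.
git log -1 --format=%B
```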
21 |
22 | By doing this you state that you can certify the following (from [https://developercertificate.org/][1]):
23 |
24 | ```text
25 | Developer Certificate of Origin
26 | Version 1.1
27 |
28 | Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
29 | 1 Letterman Drive
30 | Suite D4700
31 | San Francisco, CA, 94129
32 |
33 | Everyone is permitted to copy and distribute verbatim copies of this
34 | license document, but changing it is not allowed.
35 |
36 |
37 | Developer's Certificate of Origin 1.1
38 |
39 | By making a contribution to this project, I certify that:
40 |
41 | (a) The contribution was created in whole or in part by me and I
42 | have the right to submit it under the open source license
43 | indicated in the file; or
44 |
45 | (b) The contribution is based upon previous work that, to the best
46 | of my knowledge, is covered under an appropriate open source
47 | license and I have the right under that license to submit that
48 | work with modifications, whether created in whole or in part
49 | by me, under the same open source license (unless I am
50 | permitted to submit under a different license), as indicated
51 | in the file; or
52 |
53 | (c) The contribution was provided directly to me by some other
54 | person who certified (a), (b) or (c) and I have not modified
55 | it.
56 |
57 | (d) I understand and agree that this project and the contribution
58 | are public and that a record of the contribution (including all
59 | personal information I submit with it, including my sign-off) is
60 | maintained indefinitely and may be redistributed consistent with
61 | this project or the open source license(s) involved.
62 | ```
63 |
--------------------------------------------------------------------------------
/docs/services/boots.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Boots
3 | date: 2020-08-31
4 | ---
5 |
6 | # Boots
7 |
8 | Tinkerbell relies on network booting a server in order to prepare it to execute workflows.
9 | Boots is Tinkerbell's DHCP server, handling IP addresses and requests.
10 | It is also the TFTP server, serving iPXE and the initial installation image.
11 |
Boots is written in Go and can be built, run, and tested outside of the Tinkerbell stack.
13 | Take a look at the code in the [tinkerbell/boots] GitHub repository.
14 |
15 | ## What Boots does
16 |
17 | When a Worker comes on-line for the first time, it PXE boots and sends a DHCP request to the Provisioner.
18 | Boots receives the request and assigns the Worker its IP Address as defined in the hardware data.
19 |
20 | Next, Boots communicates over TFTP to download the iPXE script to the Worker.
21 |
22 | The iPXE script tells the Worker to download and boot an in-memory operating system called [hook].
From there you are inside an OS and can do what you like; the most common actions are partitioning the hard drive and installing the actual operating system.
24 | Tinkerbell abstracts those actions with the concept of a workflow.
25 |
26 | ## Configuring an image registry requiring authentication
27 |
When using a registry that requires authentication, Boots must be configured with the registry credentials so that it can pass them to Hook.
The three required environment variables for authenticated registry access are:
30 |
```yaml
DOCKER_REGISTRY: $TINKERBELL_HOST_IP
REGISTRY_USERNAME: $TINKERBELL_REGISTRY_USERNAME
REGISTRY_PASSWORD: $TINKERBELL_REGISTRY_PASSWORD
```
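When running Boots directly from a shell rather than via a compose file, the same settings can be provided as exported environment variables; the values below are placeholders:

```sh
# Placeholder values; substitute your registry host and credentials.
export DOCKER_REGISTRY=192.168.1.1
export REGISTRY_USERNAME=admin
export REGISTRY_PASSWORD=secret
```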
36 |
37 | ## Other Resources
38 |
39 | One of the core concepts behind Tinkerbell is network booting.
40 |
Let's imagine you are in a datacenter with hundreds of servers; it is not reasonable to visit them one by one with a USB stick to install the operating system you need.
If you use a Provisioner with an API, as Tinkerbell does, things get even more complicated, as there isn't an operator running around with a USB stick for every API request.
43 |
There are a lot of articles and use cases for netbooting; here are a few that our contributors enjoyed or even wrote:
45 |
46 | - [First journeys with netboot and ipxe installing Ubuntu]
47 | - [The state of netbooting Raspberry Pis]
48 | - [RedHat Enterprise Linux: PREPARING FOR A NETWORK INSTALLATION]
49 |
50 | [first journeys with netboot and ipxe installing ubuntu]: https://gianarb.it/blog/first-journeys-with-netboot-ipxe
51 | [hook]: ../hook.md
52 | [redhat enterprise linux: preparing for a network installation]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/chap-installation-server-setup
53 | [the state of netbooting raspberry pis]: https://blog.alexellis.io/the-state-of-netbooting-raspberry-pi/
54 | [tinkerbell/boots]: https://github.com/tinkerbell/boots
55 |
--------------------------------------------------------------------------------
/mkdocs.yml:
--------------------------------------------------------------------------------
1 | site_name: Tinkerbell Docs
2 | site_url: https://docs.tinkerbell.org
3 | site_description: Designed for scalable provisioning of physical hardware, Tinkerbell is both open-source and cloud native. Contributed by Equinix Metal.
4 | site_image: https://tinkerbell.org/images/social/tinkerbell-social-image.png
5 | repo_url: https://github.com/tinkerbell/tinkerbell-docs
6 | edit_uri: "edit/main/docs/"
7 |
8 | nav:
9 | - Home: index.md
10 |
11 | - Architecture: architecture.md
12 |
13 | - Services:
14 | - Tink Server: services/tink-server.md
15 | - Tink Worker: services/tink-worker.md
16 | - Tink Controller: services/tink-controller.md
17 | - Boots: services/boots.md
18 | - Hegel: services/hegel.md
19 | - PBnJ: services/pbnj.md
20 | - Rufio: services/rufio.md
21 | - Hook: hook.md
22 |
23 | - Workflows:
24 | - Working with Workflows: workflows/working-with-workflows.md
25 | - A Hello-world Workflow: workflows/hello-world-workflow.md
26 |
27 | - Hardware Data: hardware-data.md
28 | - Templates: templates.md
29 |
30 | - Actions:
31 | - Action Hub: https://artifacthub.io/packages/search?kind=4
32 | - Action Architecture: actions/action-architecture.md
33 | - Creating a Basic Action: actions/creating-a-basic-action.md
34 |
35 | - Setting Up Tinkerbell: setup.md
36 |
37 | - Deploying Operating Systems:
38 | - The Basics: deploying-operating-systems/the-basics.md
39 | - The Deployment: deploying-operating-systems/the-deployment.md
40 | - Example - Deploying Ubuntu: deploying-operating-systems/examples-ubuntu.md
41 | - Example - Deploying Ubuntu from Packer Machine Image: deploying-operating-systems/examples-ubuntu-packer.md
42 | - Example - Deploying Debian: deploying-operating-systems/examples-debian.md
43 | - Example - Deploying FreeBSD: deploying-operating-systems/examples-freebsd.md
44 | - Example - Deploying Red Hat Enterprise Linux and CentOS: deploying-operating-systems/examples-rhel-centos.md
45 | - Example - Deploying Windows: deploying-operating-systems/examples-win.md
46 |
47 | - Troubleshooting: troubleshooting.md
48 |
49 | - Contribute: contrib/index.md
50 |
51 | theme:
52 | name: material
53 | custom_dir: overrides
54 | logo: assets/logo.png
55 | footer_logo: assets/packet.png
56 | font: false
57 |
58 | extra_css:
59 | - stylesheets/extra.css
60 | - stylesheets/simple-grid.css
61 | - stylesheets/font-awesome.min.css
62 |
63 | extra:
64 | social:
65 | - icon: fontawesome/brands/github
66 | link: https://github.com/tinkerbell
67 | - icon: fontawesome/brands/slack
68 | link: https://slack.cncf.io/
69 | - icon: fontawesome/brands/twitter
70 | link: https://twitter.com/tinkerbell_oss
71 |
72 | #google_analytics:
73 | # - UA-52258647-10
74 | # - auto
75 |
76 | plugins:
77 | - search
78 |
79 | markdown_extensions:
80 | - admonition
81 | - mdx_truly_sane_lists
82 | - pymdownx.superfences
83 | - pymdownx.tabbed
84 |
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Home
3 | ---
4 |
5 | # The Tinkerbell Docs
6 |
7 | Everything you need to know about Tinkerbell and its major component microservices.
8 |
9 | ## What is Tinkerbell?
10 |
11 | Tinkerbell is an open-source, bare metal provisioning engine, built by the team at Equinix Metal.
12 |
13 | Interested in contributing? Check out our [Contributors' Page].
14 |
15 | ## What's Powering Tinkerbell?
16 |
17 | - **[Tink Server]** is responsible for serving tasks to be run by Tink Worker and updating the state of tasks as reported by Tink worker.
18 |
19 | - **[Tink Worker]** is responsible for retrieving and executing workflow tasks. It reports the state of tasks back to Tink Server. It is pre-packaged into our default in-memory provisioning OS, Hook.
20 |
- **[Tink Controller]** is responsible for rendering workflow templates and managing workflow state as Tink Workers report on their task statuses. It is an internal component that users generally do not need to interact with.
22 |
23 | - **[Boots]** is Tinkerbell's DHCP server.
24 | It handles DHCP requests, hands out IPs, and serves up [iPXE].
25 | It uses the Tinkerbell client to pull and push hardware data.
26 | It only responds to a predefined set of MAC addresses so it can be deployed in an existing network without interfering with existing DHCP infrastructure.
27 |
28 | - **[Hook]** is Tinkerbell's default in-memory installation environment for bare metal. Hook executes workflow tasks that result in a provisioned machine.
29 |
30 |
31 | ### Optional services
32 |
33 | - **[Hegel]** is a metadata service that can be used during the configuration of a permanent OS.
34 |
35 | - **[PBnJ]** is a microservice that can communicate with baseboard management controllers (BMCs) to control power and boot settings.
36 |
37 | - **[Rufio]** is a Kubernetes controller that facilitates baseboard management controller interactions. It functions similarly to PBnJ but with a Kubernetes focused API.
38 |
39 | ## First Steps
40 |
41 | New to Tinkerbell or bare metal provisioning? Visit the [playground] for functional examples that illustrate Tinkerbell stack deployment.
42 |
43 | ## Get Help
44 |
45 | Need a little help getting started? We're here!
46 |
47 | - Check out the [FAQs] - When there are questions, we document the answers.
48 | - Join the [CNCF Community Slack].
49 | Look for the [#tinkerbell] channel.
50 | - Submit an issue on [Github].
51 |
52 | [boots]: services/boots.md
53 | [pbnj]: services/pbnj.md
54 | [hook]: hook.md
55 | [hegel]: services/hegel.md
56 | [rufio]: services/rufio.md
57 | [tink server]: services/tink-server.md
58 | [tink worker]: services/tink-worker.md
59 | [tink controller]: services/tink-controller.md
60 |
61 | [cncf community slack]: https://slack.cncf.io/
62 | [contributors' page]: https://tinkerbell.org/community/contributors/
63 | [docker hub]: https://hub.docker.com/
64 | [faqs]: https://tinkerbell.org/faq/
65 | [github]: https://github.com/tinkerbell
66 | [ipxe]: https://ipxe.org/
67 | [quay]: https://quay.io/
68 | [#tinkerbell]: https://app.slack.com/client/T08PSQ7BQ/C01SRB41GMT
69 | [playground]: https://github.com/tinkerbell/playground
--------------------------------------------------------------------------------
/docs/stylesheets/simple-grid.css:
--------------------------------------------------------------------------------
1 | /**
2 | *** SIMPLE GRID
3 | *** (C) ZACH COLE 2016
4 | **/
5 |
6 | @import url(https://fonts.googleapis.com/css?family=Lato:400,300,300italic,400italic,700,700italic);
7 |
8 | .font-light {
9 | font-weight: 300;
10 | }
11 |
12 | .font-regular {
13 | font-weight: 400;
14 | }
15 |
16 | .font-heavy {
17 | font-weight: 700;
18 | }
19 |
20 | /* POSITIONING */
21 |
22 | .left {
23 | text-align: left;
24 | }
25 |
26 | .right {
27 | text-align: right;
28 | }
29 |
30 | .center {
31 | text-align: center;
32 | margin-left: auto;
33 | margin-right: auto;
34 | }
35 |
36 | .justify {
37 | text-align: justify;
38 | }
39 |
40 | .middle {
41 | align-items: center;
42 | }
43 |
44 | /* ==== GRID SYSTEM ==== */
45 |
46 | .container {
47 | width: 90%;
48 | margin-left: auto;
49 | margin-right: auto;
50 | }
51 |
52 | .row {
53 | position: relative;
54 | width: 100%;
55 | }
56 |
57 | .row [class^="col"] {
58 | float: left;
59 | margin: 0.5rem 2%;
60 | min-height: 0.125rem;
61 | }
62 |
63 | .col-1,
64 | .col-2,
65 | .col-3,
66 | .col-4,
67 | .col-5,
68 | .col-6,
69 | .col-7,
70 | .col-8,
71 | .col-9,
72 | .col-10,
73 | .col-11,
74 | .col-12 {
75 | width: 96%;
76 | }
77 |
78 | .col-1-sm {
79 | width: 4.33%;
80 | }
81 |
82 | .col-2-sm {
83 | width: 12.66%;
84 | }
85 |
86 | .col-3-sm {
87 | width: 21%;
88 | }
89 |
90 | .col-4-sm {
91 | width: 29.33%;
92 | }
93 |
94 | .col-5-sm {
95 | width: 37.66%;
96 | }
97 |
98 | .col-6-sm {
99 | width: 46%;
100 | }
101 |
102 | .col-7-sm {
103 | width: 54.33%;
104 | }
105 |
106 | .col-8-sm {
107 | width: 62.66%;
108 | }
109 |
110 | .col-9-sm {
111 | width: 71%;
112 | }
113 |
114 | .col-10-sm {
115 | width: 79.33%;
116 | }
117 |
118 | .col-11-sm {
119 | width: 87.66%;
120 | }
121 |
122 | .col-12-sm {
123 | width: 96%;
124 | }
125 |
126 | .row::after {
127 | content: "";
128 | display: table;
129 | clear: both;
130 | }
131 |
132 | .hidden-sm {
133 | display: none;
134 | }
135 |
136 | @media only screen and (min-width: 33.75em) {
137 | /* 540px */
138 | .container {
139 | width: 80%;
140 | }
141 | }
142 |
143 | @media only screen and (min-width: 45em) {
144 | /* 720px */
145 | .col-1 {
146 | width: 4.33%;
147 | }
148 |
149 | .col-2 {
150 | width: 12.66%;
151 | }
152 |
153 | .col-3 {
154 | width: 21%;
155 | }
156 |
157 | .col-4 {
158 | width: 29.33%;
159 | }
160 |
161 | .col-5 {
162 | width: 37.66%;
163 | }
164 |
165 | .col-6 {
166 | width: 46%;
167 | }
168 |
169 | .col-7 {
170 | width: 54.33%;
171 | }
172 |
173 | .col-8 {
174 | width: 62.66%;
175 | }
176 |
177 | .col-9 {
178 | width: 71%;
179 | }
180 |
181 | .col-10 {
182 | width: 79.33%;
183 | }
184 |
185 | .col-11 {
186 | width: 87.66%;
187 | }
188 |
189 | .col-12 {
190 | width: 96%;
191 | }
192 |
193 | .hidden-sm {
194 | display: block;
195 | }
196 | }
197 |
198 | @media only screen and (min-width: 60em) {
199 | /* 960px */
200 | .container {
201 | width: 75%;
202 | max-width: 60rem;
203 | }
204 | }
205 |
--------------------------------------------------------------------------------
/docs/templates.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Templates
3 | date: 2020-08-03
4 | ---
5 |
6 | # Templates
7 |
8 | A Template is a YAML file that defines the source of a Workflow by declaring one or more _tasks_.
9 | The tasks are executed sequentially, in the order in which they are declared.
10 |
Each task consists of one or more [actions][action].
12 | Each action contains an image to be executed as part of a workflow, identified by the `image` field.
13 | You can create any script, app, or other set of instructions to be an action image by containerizing it and pushing it into either the local Docker registry included in the Tinkerbell stack or an external image repository.
14 |
15 | Here is a sample template:
16 |
17 | ```yaml
18 | version: "0.1"
19 | name: ubuntu_provisioning
20 | global_timeout: 6000
21 | tasks:
22 | - name: "os-installation"
23 | worker: "{{.device_1}}"
24 | volumes:
25 | - /dev:/dev
26 | - /dev/console:/dev/console
27 | - /lib/firmware:/lib/firmware:ro
28 | environment:
29 | MIRROR_HOST:
30 | actions:
31 | - name: "disk-wipe"
32 | image: disk-wipe
33 | timeout: 90
34 | - name: "disk-partition"
35 | image: disk-partition
36 | timeout: 600
37 | environment:
38 | MIRROR_HOST:
39 | volumes:
40 | - /statedir:/statedir
41 | - name: "install-root-fs"
42 | image: install-root-fs
43 | timeout: 600
44 | - name: "install-grub"
45 | image: install-grub
46 | timeout: 600
47 | volumes:
48 | - /statedir:/statedir
49 | ```
50 |
51 | The `volumes` field contains the volume mappings between the host machine and the docker container where your images are running.
52 |
53 | The `environment` field is used to pass environment variables to the images.
54 |
55 | Each action can have its own volumes and environment variables.
Any entry at the action level overwrites the value defined at the task level.
For example, in the template above, the `MIRROR_HOST` environment variable defined for the `disk-partition` action overwrites the value defined at the task level.
58 | The other actions will receive the original value defined at the task level.
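As a minimal sketch of that behaviour (the image names and values here are hypothetical), the task sets `MIRROR_HOST` and a single action overrides it:

```yaml
tasks:
  - name: "example-task"
    environment:
      MIRROR_HOST: 192.168.1.2
    actions:
      - name: "inherits-task-value" # receives MIRROR_HOST=192.168.1.2
        image: example-action
      - name: "overrides-task-value"
        image: example-action
        environment:
          MIRROR_HOST: 10.0.0.5 # overrides the task-level value
```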
59 |
The `timeout` field defines the number of seconds to wait for an action to complete.
61 |
Hardware devices, such as a Worker's MAC address, are specified in the template as keys.
63 |
64 | ```text
65 | {{.device_1}}
66 | {{.device_2}}
67 | ```
68 |
69 | Keys can only contain _letters_, _numbers_ and _underscores_.
These keys are evaluated during workflow creation and are passed in as a JSON argument to `tink workflow create`.
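For example, a key-to-value mapping passed at workflow creation might look like the following (the MAC addresses are placeholders):

```json
{
  "device_1": "08:00:27:00:00:01",
  "device_2": "08:00:27:00:00:02"
}
```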
71 |
72 | Templates are each stored as blobs in the database; they are later parsed during the creation of a workflow.
73 |
74 | ## Template CLI Commands
75 |
76 | ```text
77 | create Create a template in the database.
78 | delete Delete a template from the database.
79 | get Get a template by ID.
80 | list List all templates in the database.
81 | update Update an existing template.
82 | ```
83 |
84 | `tink template --help` - Displays the available commands and their usage for `tink template`.
85 |
86 | [action]: actions/action-architecture.md
87 |
--------------------------------------------------------------------------------
/overrides/main.html:
--------------------------------------------------------------------------------
1 | {#- This file was automatically generated - do not edit -#} {% extends
2 | "base.html" %} {% block extrahead %} {% set title = config.site_name %} {% if
3 | page and page.meta and page.meta.title %} {% set title = title ~ " - " ~
4 | page.meta.title %} {% elif page and page.title and not page.is_homepage %} {%
5 | set title = title ~ " - " ~ page.title | striptags %} {% endif %} {% set image =
6 | config.site_image %}
7 |
8 |
14 |
15 |
20 |
25 |
30 |
35 |
40 |
45 |
50 |
55 |
60 |
66 |
72 |
78 |
84 |
85 |
89 |
90 |
91 |
92 |
93 |
94 |
95 |
96 |
97 |
98 |
99 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
112 | {% endblock %} {% block content %} {{ super() }} {% endblock %} {% block
113 | analytics %} {{ super() }}
114 |
124 | {% endblock %}
125 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-freebsd.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Example - FreeBSD
3 | date: 2021-03-12
4 | ---
5 |
6 | # Deploying FreeBSD
7 |
8 | This is a guide which walks through the process of deploying FreeBSD from an operating system image.
9 |
10 | ## Getting the Image
11 |
12 | FreeBSD distributes their Operating System in a number of different formats, which are all available on the [cloud-images] site.
13 |
14 | Below are two examples of images we can use:
15 |
16 | ```sh
17 | FreeBSD-12.2-RELEASE-amd64.qcow2.xz 599212960 2020-Oct-23 06:27
18 | FreeBSD-12.2-RELEASE-amd64.raw.xz 600337912 2020-Oct-23 06:44
19 | ```
20 |
Both images come compressed in the `xz` format; you will need to decompress them with the `xz` command.

```sh
xz -d FreeBSD-12.2-RELEASE-amd64.raw.xz
```
26 |
The `raw` image is a disk image which contains a full partition table (including OS and swap partitions) and a boot loader for our FreeBSD system.
You can examine it with `losetup` and `fdisk`.
29 |
30 | ```sh
31 | losetup -f -P ./FreeBSD-12.2-RELEASE-amd64.raw
32 |
33 | fdisk -l /dev/loop1
34 |
35 | Disk /dev/loop1: 5 GiB, 5369587712 bytes, 10487476 sectors
36 | Units: sectors of 1 * 512 = 512 bytes
37 | Sector size (logical/physical): 512 bytes / 512 bytes
38 | I/O size (minimum/optimal): 512 bytes / 512 bytes
39 | Disklabel type: gpt
40 | Disk identifier: 2346CB62-14FB-11EB-9C6B-0CC47AD8B808
41 |
42 | Device Start End Sectors Size Type
43 | /dev/loop1p1 3 113 111 55.5K FreeBSD boot
44 | /dev/loop1p2 114 1713 1600 800K EFI System
45 | /dev/loop1p3 1714 2098865 2097152 1G FreeBSD swap
46 | /dev/loop1p4 2098866 10487473 8388608 4G FreeBSD UFS
47 | ```
48 |
49 | The `raw` image comes with everything that we will need to install and deploy FreeBSD.
50 |
The other image, with the `.qcow2.xz` extension, is a compressed `qcow2` image. It is also a **full** disk image, including partition tables, partitions filled with filesystems and files, and, importantly, a boot loader at the beginning of the disk.
However, if you want to use the `qcow2` image you will have to convert it with the `qemu-img` CLI tool.
53 |
54 | ```sh
55 | apt-get install -y qemu-utils
56 | ```
57 |
Then use the tool to convert the image into a `raw` disk image.
59 |
60 | ```sh
61 | qemu-img convert ./FreeBSD-12.2-RELEASE-amd64.qcow2 -O raw ./FreeBSD-12.2-RELEASE-amd64.raw
62 | ```
63 |
Once you have a `raw` disk image, you can optionally compress it to save both local disk space and network bandwidth when deploying the image.
65 |
66 | ```sh
67 | gzip ./FreeBSD-12.2-RELEASE-amd64.raw
68 | ```
69 |
The raw image will need to be hosted on a web server that is accessible from the Worker.
71 |
72 | ## Creating the Template
73 |
74 | The template uses actions from the [Artifact Hub].
75 |
76 | - [image2disk] - to write the image to a block device.
77 | - [kexec] - to `kexec` into our newly provisioned operating system.
78 |
> Important: Don't forget to pull, tag, and push `quay.io/tinkerbell-actions/image2disk:v1.0.0` prior to using it.
80 |
81 | ```yaml
82 | version: "0.1"
83 | name: FreeBSD_deployment
84 | global_timeout: 1800
85 | tasks:
86 | - name: "os-installation"
87 | worker: "{{.device_1}}"
88 | volumes:
89 | - /dev:/dev
90 | - /dev/console:/dev/console
91 | - /lib/firmware:/lib/firmware:ro
92 | actions:
93 | - name: "stream-FreeBSD-image"
94 | image: quay.io/tinkerbell-actions/image2disk:v1.0.0
95 | timeout: 600
96 | environment:
97 | DEST_DISK: /dev/sda
98 | IMG_URL: "http://192.168.1.1:8080/FreeBSD-12.2-RELEASE-amd64.raw.gz"
99 | COMPRESSED: true
100 | - name: "kexec-FreeBSD"
101 | image: quay.io/tinkerbell-actions/kexec:v1.0.0
102 | timeout: 90
103 | pid: host
104 | environment:
105 | BLOCK_DEVICE: /dev/sda1
106 | FS_TYPE: ext4
107 | ```
108 |
109 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
110 | [cloud-images]: https://download.freebsd.org/ftp/releases/VM-IMAGES/12.2-RELEASE/amd64/Latest/
111 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
112 | [kexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/kexec
113 |
--------------------------------------------------------------------------------
/docs/actions/creating-a-basic-action.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Creating a Basic Action
3 | date: 2021-02-15
4 | ---
5 |
6 | # Creating a Basic Action
7 |
8 | This guide will step through creating a basic action, whilst also discussing a number of things an action builder should be aware of.
9 |
10 | ## Basic principles for an action
11 |
12 | Ideally when creating an action it should follow a few basic principles:
13 |
14 | - Minimal image size (action images need to be downloaded to `tmpfs` and will use the server's memory both at rest and at runtime)
15 | - Single use, one action should not try to do all things.
16 | Actions should follow the Linux philosophy: do one thing, and do one thing well.
17 | - Image re-use, where possible an image should be simple and single-purpose but still customisable for other use cases.
18 | - Chain together well with other actions, most actions will store in-use data in `/statedir`.
19 | Keeping with these standards ensures other actions know where to find this persistent data.
20 | - Fail on unrecoverable errors, when an action reaches a point where it cannot continue it should fail (with sufficient logging).
21 | No further changes should occur, allowing the Tinkerbell operator to debug why the failure took place.
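
The last principle can be reduced to a small fail-fast pattern. A sketch (the `die` helper is illustrative, not part of any Tinkerbell API):

```sh
#!/bin/sh
# Log to stderr and exit non-zero the moment an unrecoverable condition is
# hit, making no further changes so the operator can inspect the machine.
die() {
    echo "ERROR: $*" >&2
    exit 1
}

# Check every precondition up front, before the action changes anything.
command -v mount >/dev/null 2>&1 || die "mount binary not found"
echo "preconditions satisfied"
```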
22 |
23 | ## Our example action
24 |
25 | A common task is manipulating the filesystem of the newly provisioned Operating System; there are numerous reasons for this, such as users, network config, ssh keys, or other files that require change.
26 | This example will use bash to make it as simple as possible to understand; however, we're aiming to use Golang where possible for a lot of the Tinkerbell actions on the [tinkerbell hub].
27 |
28 | Our simple action will mount our newly provisioned Operating System and [touch] a file at a location that we have specified.
29 |
30 | As this action will use bash and requires shelling out to a number of other commands, we will start with one of the smallest "distro" images, [Alpine Linux].
31 |
32 | We will pass three pieces of information as environment variables into this action:
33 |
34 | - `BLOCK_DEVICE` the device with our filesystem created on it
35 | - `FS_TYPE` the format of the filesystem
36 | - `TOUCH_PATH` the path of the file to "touch"/create.
37 |
38 | ### `example_action.sh`
39 |
40 | ```sh
41 | #!/bin/bash
42 | set -ex
43 |
44 | # Check that the environment variable is set, so we know what device to mount
45 | if [[ ! -v BLOCK_DEVICE ]]; then
46 | echo "BLOCK_DEVICE NEEDS SETTING"
47 | exit 1
48 | fi
49 |
50 | # Check for other variables FS_TYPE / TOUCH_PATH
51 |
52 | # Mount the disk; `set -x` will echo each command and any failure
53 | mount -v -t "${FS_TYPE}" "${BLOCK_DEVICE}" /mnt
54 |
55 | # Create our file
56 | touch "/mnt/${TOUCH_PATH}"
57 |
58 | echo "Successfully created [${TOUCH_PATH}]"
59 | exit 0
60 | ```
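
The "Check for other variables" placeholder above follows the same pattern as the `BLOCK_DEVICE` guard. A sketch, written in portable shell, with demo defaults so it runs standalone (tink-worker injects the real values at runtime):

```sh
#!/bin/sh
# Demo values only; drop these two lines inside the real action.
FS_TYPE="${FS_TYPE:-ext4}"
TOUCH_PATH="${TOUCH_PATH:-/tmp/hello}"

# Fail fast if any of the remaining required variables is empty or unset.
for var in FS_TYPE TOUCH_PATH; do
    eval "value=\${$var:-}"
    if [ -z "$value" ]; then
        echo "${var} NEEDS SETTING"
        exit 1
    fi
done
echo "all variables set"
```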
61 |
62 | ### `Dockerfile`
63 |
64 | ```dockerfile
65 | FROM alpine:3.13.1
66 | # The script uses bash, which Alpine does not include by default
67 | RUN apk add --no-cache bash
68 | COPY /example_action.sh /
69 | ENTRYPOINT ["/example_action.sh"]
70 | ```
69 |
70 | ### Creating our action
71 |
72 | Create the image:
73 |
74 | `docker build -t example_actions:v0.1 .`
75 |
76 | Tag it to our local registry `192.168.1.1` and push it so that `tink-worker` can use it. If the Worker has internet access, a public registry such as Docker Hub or quay.io can be used instead.
77 |
78 | `docker tag example_actions:v0.1 192.168.1.1/example_actions:v0.1`
79 |
80 | We can now push/upload our new action to use in a workflow!
81 |
82 | `docker push 192.168.1.1/example_actions:v0.1`
83 |
84 | ## Using our action
85 |
86 | Following all the steps above, we can now use the action in a workflow; this simple workflow will just leave a file called `hello` in `/tmp`.
87 |
88 | ```yaml
89 | actions:
90 | - name: "Say Hello!"
91 |     image: 192.168.1.1/example_actions:v0.1
92 | timeout: 90
93 | environment:
94 | BLOCK_DEVICE: /dev/sda3
95 | FS_TYPE: ext4
96 | TOUCH_PATH: /tmp/hello
97 | ```
98 |
99 | In an ideal scenario previous actions will do things such as wipe disks and create filesystems, allowing us to use the partition/filesystem in later actions.
100 |
101 | ## Further reading
102 |
103 | The Tinkerbell community has created a number of actions that are available on the [Artifact Hub].
104 | All of the source code for these actions is available in the GitHub repository for the [Tinkerbell Hub].
105 |
106 | [alpine linux]: https://alpinelinux.org
107 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
108 | [tinkerbell hub]: https://github.com/tinkerbell/hub/tree/main/actions
109 | [touch]: https://www.tecmint.com/8-pratical-examples-of-linux-touch-command/
110 |
--------------------------------------------------------------------------------
/docs/actions/action-architecture.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Action Architecture
3 | date: 2021-02-15
4 | ---
5 |
6 | # Action Architecture
7 |
8 | An action in Tinkerbell is a single unit of execution within a workflow, which itself is made up of multiple actions in order to provision a piece of network-booted infrastructure.
9 | An action should ideally contain a single task used as part of a longer chain of tasks. Examples include:
10 |
11 | - Wipe a Disk
12 | - Partition disks
13 | - Download files to the underlying disk
14 | - Write keys to a TPM
15 | - Create Users
16 | - Write a cloud-init file
17 | - [Kexec] to a new Operating System
18 |
19 | A Tinkerbell Action is packaged as a container image and should be hosted on a registry.
20 | As tink-worker executes a workflow, it pulls the action images sequentially and runs each one as a container.
21 |
22 | ## Action Containers
23 |
24 | As mentioned above an action runs within a container, which provides a number of inherent benefits:
25 |
26 | - Contained code
27 | - Re-usable modules
28 | - Well established execution environment
29 | - Use of existing infrastructure to host actions (container registry)
30 |
31 | However, there are a number of concerns to keep in mind when passing configuration into an action or when an action interacts with the underlying hardware.
32 |
33 | ### Action Container privileges
34 |
35 | By default an action container is started as a [privileged] container. In many environments this is discouraged; however, because actions need access to the underlying hardware, it is a requirement for a Tinkerbell action.
36 | This means that an action has direct access to hardware, such as block devices (e.g. `/dev/sda`), allowing us to wipe/partition/image the storage as an action.
37 |
38 | ### Namespace
39 |
40 | By default an action will be created in its own [Linux Namespace], meaning that whilst it can see the underlying hardware, it is unaware of any other processes or existing network configuration (the Docker engine automatically manages external networking through the Docker network).
41 | In the majority of use-cases this is good for isolating the tasks an action performs; however, there are use-cases where communicating with the host's existing processes is a requirement.
42 | The most obvious two (so far) are the capability to `reboot` or `kexec` into a new kernel; both of these actions typically involve a few steps:
43 |
44 | 1. Action calls the `/sbin/reboot` binary or `reboot()` syscall.
45 | 2. Kernel is aware of the reboot and sends a `signal` to process ID 1.
46 | 3. Process ID 1 (which should be `/init`) kills all processes and reboots the machine.
47 |
48 | When an action attempts these steps in a container in its own namespace, nothing will occur, because PID 1 there is the action container's own process rather than the host's `/init`.
49 | To allow the expected behavior, an action can use `pid: host` in its configuration; the action's processes will then run amongst all of the processes on the host itself (including the "real" PID 1).
50 | With the action in the host process ID namespace, both `reboot` and `kexec` will work as expected.
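
For example, a hypothetical reboot action would opt into the host PID namespace in its template configuration like this (the action name and image are illustrative only):

```yaml
actions:
  - name: "reboot-machine"
    image: 192.168.1.1/reboot:v0.1 # hypothetical image in a local registry
    timeout: 90
    pid: host # run in the host's process ID namespace
```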
51 |
52 | ### Passing configuration to an action
53 |
54 | Most actions can read the metadata at runtime; however, there may be use-cases for keeping a large standardized set of actions whose configuration is written directly into a workflow.
55 |
56 | An action should be created using an [ENTRYPOINT], meaning that we don't need to specify what runs within the action image.
57 | However, if required, this can be overridden with the `command` section of the action configuration.
58 |
59 | e.g. Overriding the command
60 |
61 | ```yaml
62 | command:
63 | - "/my_command.sh"
64 | - "-flag"
65 | - "argument"
66 | ```
67 |
68 | The most common method of passing information into an action is through environment variables, which are parsed by the action code as it runs.
69 | For example, an action that needs to mount a disk would need to know the block device holding the filesystem and the type of filesystem formatted on that device.
70 | We can pass that as shown below in the configuration of the action:
71 |
72 | ```yaml
73 | environment:
74 | BLOCK_DEVICE: /dev/sda3
75 | FS_TYPE: ext4
76 | ```
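
Inside the action, those values arrive as ordinary environment variables. A minimal sketch of consuming them in a shell-based action (the defaults are for illustration only; tink-worker injects the real values):

```sh
#!/bin/sh
# Read the configuration passed in through the action's `environment` section.
BLOCK_DEVICE="${BLOCK_DEVICE:-/dev/sda3}"
FS_TYPE="${FS_TYPE:-ext4}"

echo "would mount ${BLOCK_DEVICE} as ${FS_TYPE}"
```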
77 |
78 | With this understanding of the basic architecture, we can start to look at what would be required to [create your own action].
79 |
80 | [create your own action]: creating-a-basic-action.md
81 | [entrypoint]: https://docs.docker.com/engine/reference/builder/#entrypoint
82 | [kexec]: https://wiki.archlinux.org/title/Kexec
83 | [linux namespace]: https://en.wikipedia.org/wiki/Linux_namespaces
84 | [privileged]: https://docs.docker.com/engine/reference/run/#runtime-privilege-and-linux-capabilities
85 |
--------------------------------------------------------------------------------
/docs/workflows/hello-world-workflow.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: A Hello-world Workflow
3 | date: 2020-07-10
4 | ---
5 |
6 | # A Hello-world Workflow
7 |
8 | The "Hello World" example uses example hardware data, a template, and a workflow to show off some basic Tinkerbell functionality and the interaction between a Provisioner and a Worker.
9 | It uses the hello-world docker image as an example task that could be performed on a Worker as part of a workflow.
10 |
11 | ## Prerequisites
12 |
13 | - You have a Provisioner up and running, with the Tinkerbell stack installed and configured.
14 | This can be done locally with Vagrant as an experimental environment, on Equinix Metal for a more robust setup, or installed on any other environment that you have configured.
15 |
16 | - You have a Worker that has not yet been brought up, or can be restarted.
17 |
18 | ## Hardware Data
19 |
20 | This example is intended to be environment agnostic, and assumes that you have a Worker machine as the intended target.
21 | The workflow in this example is simple enough that you can use the [Minimal Hardware Data example] with your targeted Worker's MAC Address and/or IP Address substituted in.
22 |
23 | ## The `hello-world` Action Image
24 |
25 | The workflow will have a single task that will have a single action.
26 | Actions are stored as images in an image repository either locally or remotely.
27 | For this example, pull down the `hello-world` image from Docker Hub and host it locally in the Docker registry on the Provisioner.
28 |
29 | ```sh
30 | docker pull hello-world
31 | docker tag hello-world /hello-world
32 | docker push /hello-world
33 | ```
34 |
35 | This image doesn't have any instructions that the Worker will be able to perform; it's just an example to enable pushing a workflow out to the Worker when it comes up.
36 |
37 | ## The Template
38 |
39 | A template is a YAML file that lays out the series of tasks that you want to perform on the Worker after it boots up.
40 | The template for this workflow contains one task with a single `hello-world` action.
41 | The worker field contains a reference to `device_1` which will be substituted with either the MAC Address or the IP Address of your Worker when you run the `tink workflow create` command in the next step.
42 |
43 | Save this template as `hello-world.tmpl`.
44 |
45 | ```yaml
46 | version: "0.1"
47 | name: hello_world_workflow
48 | global_timeout: 600
49 | tasks:
50 | - name: "hello world"
51 | worker: "{{.device_1}}"
52 | actions:
53 | - name: "hello_world"
54 | image: hello-world
55 | timeout: 60
56 | ```
57 |
58 | ## The Workflow
59 |
60 | If you haven't already, be sure to have
61 |
62 | - Pushed the Worker's hardware data to the database with `tink hardware push`.
63 | - Created the template in the database with `tink template create`.
64 |
65 | You can now use the hardware data and the template to create a workflow.
66 | You need two pieces of information: the MAC Address or IP Address of your Worker as specified in the hardware data, and the Template ID returned from the `tink template create` command.
68 |
69 | ```sh
70 | tink workflow create -t $TEMPLATE_ID -r '{"device_1": ""}'
71 | ```
72 |
73 | ## Workflow Execution
74 |
75 | You can now boot up or restart your Worker and a few things are going to happen:
76 |
77 | - First, the Worker will iPXE boot into an Alpine Linux distribution running in memory.
78 | - Second, the Worker will call back to the Provisioner to check for its workflow.
79 | - Third, the Provisioner will push the workflow to the Worker for it to execute.
80 |
81 | While the workflow execution does not have much effect on the state of the Worker, you can check that the workflow was successfully executed with the `tink workflow events` command.
82 |
83 | ```sh
84 | tink workflow events $ID
85 | +--------------------------------------+-------------+-------------+----------------+---------------------------------+--------------------+
86 | | WORKER ID | TASK NAME | ACTION NAME | EXECUTION TIME | MESSAGE | ACTION STATUS |
87 | +--------------------------------------+-------------+-------------+----------------+---------------------------------+--------------------+
88 | | ce2e62ed-826f-4485-a39f-a82bb74338e2 | hello world | hello_world | 0 | Started execution | ACTION_IN_PROGRESS |
89 | | ce2e62ed-826f-4485-a39f-a82bb74338e2 | hello world | hello_world | 0 | Finished Execution Successfully | ACTION_SUCCESS |
90 | +--------------------------------------+-------------+-------------+----------------+---------------------------------+--------------------+
91 | ```
92 |
93 | If you reboot the Worker at this point, it will again PXE boot, since no alternate operating system was installed as part of the `hello-world` workflow.
94 |
95 | [minimal hardware data example]: ../hardware-data.md#the-minimal-hardware-data
96 |
--------------------------------------------------------------------------------
/docs/workflows/working-with-workflows.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Working with Workflows
3 | date: 2020-07-28
4 | ---
5 |
6 | # Working with Workflows
7 |
8 | A workflow is the complete set of operations to be run on a Worker.
9 | It consists of two building blocks: a Worker's [hardware data] and a [template].
10 | Workflows are immutable.
11 | Updating a template or hardware data does not update existing workflows.
12 |
13 | ## Creating a Workflow
14 |
15 | You create a workflow with the `tink workflow create` command, which takes a template ID and a JSON object that identifies the Worker, and combines them into a workflow.
16 | The workflow is stored in the database on the Provisioner, and the command returns a workflow ID.
17 |
18 | For example,
19 |
20 | ```sh
21 | tink workflow create \
22 | -t 75ab8483-6f42-42a9-a80d-a9f6196130df \
23 | -r '{"device_1":"08:00:27:00:00:01"}'
24 | Created Workflow: a8984b09-566d-47ba-b6c5-fbe482d8ad7f
25 | ```
26 |
27 | The template ID is `75ab8483-6f42-42a9-a80d-a9f6196130df`.
28 | The MAC address of the Worker is `08:00:27:00:00:01`, which should match the MAC address of hardware data that you have already created to identify that Worker.
29 | It is mapped to `device_1`, which is where the MAC address will be substituted into the template when the workflow is created.
30 |
31 | After creating a workflow, you can retrieve it from the database by ID with `tink workflow get`.
32 | This is particularly useful to check that the MAC Address or IP Address of the Worker was correctly substituted when you created the workflow.
33 |
34 | ```sh
35 | tink workflow get a8984b09-566d-47ba-b6c5-fbe482d8ad7f
36 | version: "0.1"
37 | name: hello_world_workflow
38 | global_timeout: 600
39 | tasks:
40 | - name: "hello world"
41 | worker: "08:00:27:00:00:01"
42 | actions:
43 | - name: "hello_world"
44 | image: hello-world
45 | timeout: 60
46 | ```
47 |
48 | In addition, you can list all the workflows stored in the database with `tink workflow list`.
49 | Delete a workflow with `tink workflow delete`.
50 |
51 | ## Workflow Execution
52 |
53 | On first boot, the Worker PXE boots, asks Boots for its IP address, and loads into OSIE.
54 | It then asks `tink-server` for workflows that match its MAC or IP address.
55 | Those workflows are then executed on the Worker.
56 |
57 | ![Architecture]
58 |
59 | If there are no workflows defined for the Worker, the Provisioner will ignore the Worker's request.
60 | If, as part of the workflow, a new OS is installed and completes successfully, then the boot request (after reboot) will be handled by the newly installed OS.
61 | If, as part of the workflow, an OS is **not** installed, then after reboot the Worker will again request a PXE boot from the Provisioner.
62 |
63 | You can view the events and the state of a workflow during or after its execution with the tink CLI, using the `tink workflow events` and `tink workflow state` commands.
64 |
65 | ## Ephemeral Data
66 |
67 | Ephemeral data is data that is shared between Workers as they execute workflows.
68 | Ephemeral data is stored at `/workflow/` in each tink-worker.
69 |
70 | Initially the directory is empty; you populate it by having the actions of a [template] write to it.
71 | Then, the content in `/workflow/` is pushed back to the database and from the database, pushed out to the other Workers.
72 |
73 | As the workflow progresses, subsequent actions on a Worker can read any ephemeral data that's been created by previous actions on other Workers, as well as update that file with any changes.
74 | Ephemeral data is only preserved through the life of a single workflow.
75 | Each workflow that executes gets an empty file.
76 |
77 | The data can take the form of lightweight JSON like the example below, or some binary files that other Workers might require to complete their actions.
78 | There is a 10 MB limit for ephemeral data, because it gets pushed to and from the tink-server and tink-worker with every action, so it needs to stay light.
79 |
80 | For instance, a Worker may write the following data:
81 |
82 | ```json
83 | {
84 | "instance_id": "123e4567-e89b-12d3-a456-426655440000",
85 | "mac_addr": "F5:C9:E2:99:BD:9B",
86 | "operating_system": "ubuntu_18_04"
87 | }
88 | ```
89 |
90 | The other worker may retrieve and use this data and eventually add some more:
91 |
92 | ```json
93 | {
94 | "instance_id": "123e4567-e89b-12d3-a456-426655440000",
95 | "ip_addresses": [
96 | {
97 | "address_family": 4,
98 | "address": "172.27.0.23",
99 | "cidr": 31,
100 | "private": true
101 | }
102 | ],
103 | "mac_addr": "F5:C9:E2:99:BD:9B",
104 | "operating_system": "ubuntu_18_04"
105 | }
106 | ```
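
From inside an action, producing data like the first snippet is simply a matter of writing JSON to the shared location. A sketch; a temporary directory stands in for `/workflow` so the example runs anywhere, and the file name `data` is illustrative:

```sh
#!/bin/sh
# In a real action this would be the /workflow directory inside tink-worker.
WORKFLOW_DIR="${WORKFLOW_DIR:-$(mktemp -d)}"

cat > "${WORKFLOW_DIR}/data" <<'EOF'
{
  "instance_id": "123e4567-e89b-12d3-a456-426655440000",
  "mac_addr": "F5:C9:E2:99:BD:9B",
  "operating_system": "ubuntu_18_04"
}
EOF
echo "wrote ${WORKFLOW_DIR}/data"
```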
107 |
108 | You can get the ephemeral data associated with a workflow with the `tink workflow data` tink CLI command.
109 |
110 | [architecture]: ../images/workflow-diagram.png
111 | [hardware data]: ../hardware-data.md
112 | [template]: ../templates.md
113 |
--------------------------------------------------------------------------------
/docs/stylesheets/extra.css:
--------------------------------------------------------------------------------
1 | :root {
2 | --md-primary-fg-color: #3985a9;
3 | --md-primary-fg-color--light: #1d5086;
4 | --md-primary-fg-color--dark: #05254c;
5 | }
6 |
7 | body {
8 | font-family: Gotham A, Gotham B, Helvetica, Roboto, Arial, sans-serif;
9 | background: #f8f8f8;
10 | font-size: 0.875rem !important;
11 | }
12 |
13 | h1,
14 | h2,
15 | h3,
16 | h4,
17 | h5 {
18 | font-family: Gotham A, Gotham B, Helvetica, Roboto, Arial, sans-serif !important;
19 | line-height: 1.3;
20 | }
21 |
22 | pre,
23 | code,
24 | kbd {
25 | font-family: Consolas, "Liberation Mono", Courier, monospace;
26 | }
27 |
28 | .align-middle {
29 | align-items: center;
30 | }
31 |
32 | .grid-x {
33 | display: flex;
34 | flex-flow: row wrap;
35 | }
36 |
37 | /* Header */
38 | .menu-toggle {
39 | position: absolute;
40 | right: 20px;
41 | top: 1%;
42 | z-index: 10;
43 | font-size: 20px;
44 | color: #fff;
45 | }
46 |
47 | #site-header {
48 | background-color: #05254c;
49 | color: #fff;
50 | display: block;
51 | }
52 |
53 | .nav {
54 | text-align: right;
55 | }
56 |
57 | .nav a {
58 | color: #fff;
59 | font-size: 1rem;
60 | }
61 |
62 | .nav ul {
63 | margin: 0;
64 | padding: 0;
65 | list-style-type: none;
66 | display: flex;
67 | justify-content: flex-end;
68 | align-items: center;
69 | }
70 |
71 | .nav ul li {
72 | margin: 0 25px;
73 | }
74 |
75 | .nav ul li a {
76 | display: inline-block;
77 | font-size: 0.8em;
78 | font-weight: 400;
79 | border-bottom: 1px solid #05254c;
80 | color: #9dafc4;
81 | }
82 |
83 | .nav ul li.active a {
84 | color: #fff;
85 | }
86 |
87 | .nav ul li a:hover {
88 | color: #fff;
89 | border-bottom: 1px solid #fff;
90 | }
91 |
92 | .nav ul li.pre {
93 | margin: 0 5px;
94 | }
95 |
96 | .nav ul li.pre a {
97 | color: #ffd105;
98 | border: 1px solid #0386ac;
99 | border-radius: 50em;
100 | padding: 3px;
101 | width: 32px;
102 | height: 32px;
103 | line-height: 1.65;
104 | text-align: center;
105 | display: block;
106 | }
107 |
108 | .nav ul li.pre a:hover {
109 | border: 1px solid #fff;
110 | }
111 |
112 | .md-nav__title .md-nav__button.md-logo img,
113 | .md-nav__title .md-nav__button.md-logo svg {
114 | width: 100%;
115 | }
116 |
117 | @media screen and (max-width: 76.1875em) {
118 | .nav .nav-top {
119 | margin-right: 5%;
120 | }
121 | }
122 |
123 | @media print, screen and (max-width: 63.99875em) {
124 | #site-header {
125 | padding-bottom: 0;
126 | padding-top: 20px;
127 | }
128 |
129 | .nav .nav-top {
130 | overflow-y: auto;
131 | white-space: nowrap;
132 | width: 100%;
133 | padding-top: 0;
134 | padding-bottom: 20px;
135 | padding-left: 20px;
136 | padding-right: 20px;
137 | }
138 | .nav ul {
139 | display: flex;
140 | align-items: center;
141 | justify-content: center;
142 | margin-left: 20px;
143 | }
144 | .nav ul li {
145 | margin: 0 10px;
146 | display: inline;
147 | }
148 | .nav ul li:first-child {
149 | margin-left: 0;
150 | }
151 | .nav ul li a {
152 | font-size: 0.875rem;
153 | display: inline-block !important;
154 | }
155 | .logo {
156 | margin-bottom: 20px;
157 | display: block;
158 | }
159 | .logo img {
160 | height: 40px;
161 | }
162 | }
163 |
164 | /* Footer */
165 | #footer-wrapper {
166 | color: #3e3e3e;
167 | padding-top: 20px;
168 | padding-bottom: 20px;
169 | text-align: center;
170 | }
171 |
172 | #footer-wrapper * {
173 | font-size: 12px;
174 | }
175 |
176 | #footer-container {
177 | border-top: 0.5px solid #c9c9cc;
178 | padding-top: 30px;
179 | }
180 |
181 | #footer-wrapper a {
182 | display: inline-block;
183 | text-decoration: underline;
184 | }
185 |
186 | #footer-wrapper p {
187 | margin-bottom: 5px;
188 | margin-top: 0;
189 | }
190 |
191 | #footer-wrapper .text {
192 | margin: 0;
193 | }
194 |
195 | #nav-footer {
196 | padding-bottom: 20px;
197 | }
198 |
199 | #nav-footer ul {
200 | display: flex;
201 | justify-content: center;
202 | align-items: center;
203 | margin: 0;
204 | padding: 0;
205 | list-style-type: none;
206 | }
207 |
208 | #nav-footer ul li {
209 | line-height: 1;
210 | list-style-type: none;
211 | margin: 0;
212 | padding: 0 7px;
213 | position: relative;
214 | }
215 |
216 | #nav-footer ul li:last-child {
217 | padding-right: 0;
218 | }
219 |
220 | #nav-footer ul li a {
221 | line-height: 1;
222 | text-decoration: none;
223 | }
224 |
225 | #nav-footer ul li a:hover {
226 | color: #0082a7;
227 | }
228 |
229 | #nav-footer ul li:after {
230 | content: "";
231 | position: absolute;
232 | right: 0;
233 | top: 0;
234 | width: 1px;
235 | height: 100%;
236 | border-right: 1px solid #000;
237 | }
238 |
239 | #nav-footer ul li:last-child:after {
240 | border-right: 0;
241 | }
242 |
243 | .logo-footer svg {
244 | vertical-align: middle;
245 | display: inline-block;
246 | margin-left: 20px;
247 | }
248 |
249 | @media print, screen and (max-width: 63.99875em) {
250 | }
251 |
252 | .md-footer-nav {
253 | display: none;
254 | }
255 | .md-header {
256 | background-color: var(--md-primary-fg-color--dark);
257 | }
258 | .md-footer {
259 | background-color: var(--md-primary-fg-color--dark);
260 | }
261 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-win.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Example - Windows
3 | date: 2021-03-24
4 | ---
5 |
6 | # Deploying Windows
7 |
8 | This is a guide which walks through the process of deploying various Windows versions from an operating system image.
9 |
10 | ## Generating the Images
11 |
12 | The [tinkerbell] GitHub organization contains a project called [crocodile] that largely automates the entire process of image creation.
13 |
14 | The prerequisites for using the `crocodile` project are:
15 |
16 | - git
17 | - Docker
18 |
19 | It can currently build images for the following versions of Windows:
20 |
21 | - Windows 10
22 | - Windows Server 2012
23 | - Windows Server 2016
24 | - Windows Server 2019
25 |
26 | ### Downloading `crocodile`
27 |
28 | First, clone the repo:
29 |
30 | ```sh
31 | git clone https://github.com/tinkerbell/crocodile
32 | ```
33 |
34 | Then, move to the builder directory:
35 |
36 | ```sh
37 | cd crocodile
38 | ```
39 |
40 | ### Building the Image Builder
41 |
42 | The `docker build` command will create a local container called `croc:latest` that has everything required to build our Operating System images.
43 |
44 | ```sh
45 | docker build -t croc .
46 | ```
47 |
48 | ### Creating an Image
49 |
50 | Run the image with `docker run`.
51 |
52 | ```sh
53 | docker run -it --rm \
54 | -v $PWD/packer_cache:/packer/packer_cache \
55 | -v $PWD/images:/var/tmp/images \
56 | --net=host \
57 | --device=/dev/kvm \
58 | croc:latest
59 | ```
60 |
61 | The command will create a `packer_cache` folder and an `images` folder in the current folder.
62 | These folders will be used for assets and the built OS images, respectively.
63 |
64 | ```text
65 | .--. .--.
66 | / \/ \
67 | | .-. .-. \
68 | |/_ |/_ | \
69 | || `\|| `\| `----.
70 | |\0_/ \0_/ --, \_
71 | .--"""""-. / (` \ `-.
72 | / \-----'-. \ \
73 | \ () () /`\ \
74 | | .___.-' | \
75 | \ /` \| / ;
76 | `-.___ ___.' .-.`.---.| \
77 | \| ``-..___,.-'`\| / / / | `\
78 | ` \| ,`/ / / , /
79 | ` |\ / / |\/
80 | , .'`-; ' \/
81 | , |\-' .' , .-'`
82 | .-|\--;`` .-' |\.'
83 | ( `"'-.|\ (___,.--'`'
84 | `-. `"` _.--'
85 | `. _.-'`-.
86 | `''---''`` `."
87 | Select "quit" when you've finished building Operating Systems
88 | 1) windows-2012
89 | 2) windows-2016
90 | 3) windows-2019
91 | 4) windows-10
92 | 5) quit
93 | ```
94 |
95 | Select the Operating System you'd like to build and the entire process will begin, including downloading the required ISOs and configuring the Operating System.
96 |
97 | When it finishes, the newly built Windows Operating Systems will exist in the `images` folder.
98 |
99 | ## Creating the Template
100 |
101 | First, the template will need a custom action to reboot the system into the new Operating System after it's written to the device.
102 |
103 | ### Creating a `reboot` action `Dockerfile`
104 |
105 | In a different folder create a `Dockerfile` with the following contents:
106 |
107 | ```dockerfile
108 | FROM busybox
109 | ENTRYPOINT [ "touch", "/worker/reboot" ]
110 | ```
111 |
112 | Then, build the new action and push it to the local registry.
113 |
114 | ```sh
115 | docker build -t local-registry/reboot:1.0 .
116 | docker push local-registry/reboot:1.0
117 | ```
117 |
118 | Once the new action is pushed to the local registry, it can be used as an action in a template.
119 |
120 | ```yaml
121 | actions:
122 | - name: "reboot"
123 | image: local-registry/reboot:1.0
124 | timeout: 90
125 | volumes:
126 | - /worker:/worker
127 | ```
128 |
129 | ### The Example Template
130 |
131 | The template uses actions from the [Artifact Hub].
132 |
133 | - [image2disk] - to write the image to a block device.
134 | - Our custom action that will cause a system reboot into our new Operating System.
135 |
136 | > Important: Don't forget to pull, tag, and push `quay.io/tinkerbell-actions/image2disk:v1.0.0` prior to using it.
137 |
138 | ```yaml
139 | version: "0.1"
140 | name: Windows_deployment
141 | global_timeout: 1800
142 | tasks:
143 | - name: "os-installation"
144 | worker: "{{.device_1}}"
145 | volumes:
146 | - /dev:/dev
147 | - /dev/console:/dev/console
148 | - /lib/firmware:/lib/firmware:ro
149 | actions:
150 | - name: "stream-Windows-image"
151 | image: quay.io/tinkerbell-actions/image2disk:v1.0.0
152 | timeout: 600
153 | environment:
154 | DEST_DISK: /dev/sda
155 | IMG_URL: "http://192.168.1.1:8080/tink-windows-2016.raw.gz"
156 | COMPRESSED: true
157 | - name: "reboot into Windows"
158 | image: local-registry/reboot:1.0
159 | timeout: 90
160 | volumes:
161 | - /worker:/worker
162 | ```
163 |
164 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
165 | [crocodile]: https://github.com/tinkerbell/crocodile
166 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
167 | [tinkerbell]: https://tinkerbell.org
168 |
--------------------------------------------------------------------------------
/docs/assets/packet.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-ubuntu.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Example - Ubuntu
3 | date: 2021-03-12
4 | ---
5 |
6 | # Deploying Ubuntu
7 |
8 | This guide walks through the process of deploying Ubuntu from either an operating system image or a Docker image.
9 |
10 | ## Using an Operating System Image
11 |
12 | Ubuntu is distributed in a number of different formats, which are all available on the [cloud-images] site.
13 |
14 | This example uses the image with the `.img` extension.
15 |
16 | ```text
17 | focal-server-cloudimg-amd64.img 2021-03-11 22:27 528M Ubuntu Server 20.04 LTS (Focal Fossa) daily builds
18 | ```
19 |
20 | This image is actually a `qcow2` filesystem image and is a **full** disk image, including partition tables, partitions filled with filesystems and files, and, importantly, a boot loader at the beginning of the disk image.
21 |
22 | ### Converting Image
23 |
24 | To use this image, it first needs to be converted into a `raw` disk image.
25 | To do the conversion, install the `qemu-img` CLI tool.
26 |
27 | ```sh
28 | apt-get install -y qemu-utils
29 | ```
30 |
31 | Then, use the tool to convert the image into a `raw` filesystem.
32 |
33 | ```sh
34 | qemu-img convert ./focal-server-cloudimg-amd64.img -O raw ./focal-server-cloudimg-amd64.raw
35 | ```
36 |
37 | **Optional** - You can compress this raw image to save on both local disk space and network bandwidth when deploying the image.
38 |
39 | ```sh
40 | gzip ./focal-server-cloudimg-amd64.raw
41 | ```
42 |
43 | Move the raw image to an accessible web server.
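If you don't already have a web server handy, one quick option is Python's built-in HTTP server, run from the directory holding the image. The directory `/tmp/images` and port `8080` below are placeholders (the port matches the `IMG_URL` used in the template that follows); any web server reachable by the Workers will do.

```shell
# Serve the directory containing the image over HTTP on port 8080.
mkdir -p /tmp/images
cd /tmp/images
python3 -m http.server 8080 &
SERVER_PID=$!
sleep 1
# Quick check that the server answers; in real use, leave it running
# while workflows execute instead of killing it.
python3 -c "import urllib.request; urllib.request.urlopen('http://localhost:8080/')" && echo "server is up"
kill "$SERVER_PID"
```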
44 |
45 | ### Creating the Template
46 |
47 | The template uses [actions] from the [Artifact Hub].
48 |
49 | - [image2disk] - to write the OS image to a block device.
50 | - [kexec] - to `kexec` into our newly provisioned operating system.
51 |
52 | > Important: Don't forget to pull, tag, and push `quay.io/tinkerbell-actions/image2disk:v1.0.0` prior to using it.
53 |
54 | ```yaml
55 | version: "0.1"
56 | name: Ubuntu_Focal
57 | global_timeout: 1800
58 | tasks:
59 |   - name: "os-installation"
60 |     worker: "{{.device_1}}"
61 |     volumes:
62 |       - /dev:/dev
63 |       - /dev/console:/dev/console
64 |       - /lib/firmware:/lib/firmware:ro
65 |     actions:
66 |       - name: "stream-ubuntu-image"
67 |         image: quay.io/tinkerbell-actions/image2disk:v1.0.0
68 |         timeout: 600
69 |         environment:
70 |           DEST_DISK: /dev/sda
71 |           IMG_URL: "http://192.168.1.1:8080/focal-server-cloudimg-amd64.raw.gz"
72 |           COMPRESSED: true
73 |       - name: "kexec-ubuntu"
74 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
75 |         timeout: 90
76 |         pid: host
77 |         environment:
78 |           BLOCK_DEVICE: /dev/sda1
79 |           FS_TYPE: ext4
80 | ```
81 |
82 | ### File System Images
83 |
84 | Note that it is also possible to install Ubuntu from the compressed filesystem image.
85 |
86 | ```text
87 | focal-server-cloudimg-amd64.tar.gz 2021-03-11 22:30 485M File system image and Kernel packed
88 | ```
89 |
90 | This filesystem image is typically an `ext4` filesystem containing all of the files, within a single partition, that Ubuntu needs to run.
91 | However, in order for us to use this image we would need to:
92 |
93 | - Partition the disk
94 | - Write this data to the partition
95 | - Install a boot loader
96 |
97 | With all of this in mind, it is much simpler to convert the `qcow2` image and write it straight to disk.
98 |
99 | ## Using a Docker Image
100 |
101 | We can easily make use of the **official** Docker images to generate a root filesystem for use when deploying with Tinkerbell.
102 |
103 | ### Downloading the Image
104 |
105 | ```sh
106 | TMPRFS=$(docker container create ubuntu:latest)
107 | docker export $TMPRFS > ubuntu_rootfs.tar
108 | docker rm $TMPRFS
109 | ```
110 |
111 | **Optional** - You can compress this archive to save on both local disk space and network bandwidth when deploying the image.
112 |
113 | ```sh
114 | gzip ./ubuntu_rootfs.tar
115 | ```
116 |
117 | Move the archive to an accessible web server.
118 |
119 | ### Creating the Template
120 |
121 | The template makes use of actions from the [Artifact Hub].
122 |
123 | - [rootio] - to partition our disk and make filesystems.
124 | - [archive2disk] - to write the OS image to a block device.
125 | - [cexec] - to run commands inside (chroot) our newly provisioned operating system.
126 | - [kexec] - to `kexec` into our newly provisioned operating system.
127 |
128 | ```yaml
129 | version: "0.1"
130 | name: ubuntu_provisioning
131 | global_timeout: 1800
132 | tasks:
133 |   - name: "os-installation"
134 |     worker: "{{.device_1}}"
135 |     volumes:
136 |       - /dev:/dev
137 |       - /dev/console:/dev/console
138 |       - /lib/firmware:/lib/firmware:ro
139 |     actions:
140 |       - name: "disk-wipe-partition"
141 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
142 |         timeout: 90
143 |         command: ["partition"]
144 |         environment:
145 |           MIRROR_HOST: 192.168.1.2
146 |       - name: "format"
147 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
148 |         timeout: 90
149 |         command: ["format"]
150 |         environment:
151 |           MIRROR_HOST: 192.168.1.2
152 |       - name: "expand-ubuntu-filesystem-to-root"
153 |         image: quay.io/tinkerbell-actions/archive2disk:v1.0.0
154 |         timeout: 90
155 |         environment:
156 |           ARCHIVE_URL: http://192.168.1.1:8080/ubuntu_rootfs.tar.gz
157 |           ARCHIVE_TYPE: targz
158 |           DEST_DISK: /dev/sda3
159 |           FS_TYPE: ext4
160 |           DEST_PATH: /
161 |       - name: "install-grub-bootloader"
162 |         image: quay.io/tinkerbell-actions/cexec:v1.0.0
163 |         timeout: 90
164 |         environment:
165 |           BLOCK_DEVICE: /dev/sda3
166 |           FS_TYPE: ext4
167 |           CHROOT: y
168 |           CMD_LINE: "grub-install --root-directory=/boot /dev/sda"
169 |       - name: "kexec-ubuntu"
170 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
171 |         timeout: 600
172 |         environment:
173 |           BLOCK_DEVICE: /dev/sda3
174 |           FS_TYPE: ext4
175 | ```
176 |
177 | [actions]: https://github.com/artifacthub/hub/blob/master/docs/metadata/artifacthub-pkg.yml
178 | [archive2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/archive2disk
179 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
180 | [cexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/cexec
181 | [cloud-images]: https://cloud-images.ubuntu.com/daily/server/focal/current/
182 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
183 | [kexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/kexec
184 | [rootio]: https://artifacthub.io/packages/tbaction/tinkerbell-community/rootio
185 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-ubuntu-packer.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Examples - Deploying Ubuntu from Packer Machine image
3 | date: 2021-04-02
4 | ---
5 |
6 | # Deploying Ubuntu from Packer Machine Image
7 |
8 | This guide walks you through creating a minimal raw Ubuntu image using [Packer], a tool for building automated machine images.
9 |
10 | Currently, Packer does not officially provide a way to make bare metal machine images.
11 | So, in this example, we will use the `virtualbox-iso` builder to create a Virtual Machine Disk (VMDK) and then convert it to a raw image.
12 |
13 | The raw image can then be deployed on a bare metal server using Tinkerbell.
14 |
15 | ## Preseed and Config files for Ubuntu 20.04
16 |
17 | Below are the preseed file and the config file for creating a minimalistic Ubuntu 20.04 image.
18 |
19 | When building an image using `virtualbox-iso`, the [preseed file] will help with automating the deployment.
20 | It is placed inside the `http` directory, and the config file references the location of the preseed file in the `boot_command` list of the `builders` object.
21 |
22 | ### preseed.cfg
23 |
24 | ```text
25 | choose-mirror-bin mirror/http/proxy string
26 | d-i base-installer/kernel/override-image string linux-server
27 | d-i clock-setup/utc boolean true
28 | d-i clock-setup/utc-auto boolean true
29 | d-i finish-install/reboot_in_progress note
30 | d-i grub-installer/only_debian boolean true
31 | d-i grub-installer/with_other_os boolean true
32 | d-i partman-auto/disk string /dev/sda
33 | d-i partman-auto-lvm/guided_size string max
34 | d-i partman-auto/choose_recipe select atomic
35 | d-i partman-auto/method string lvm
36 | d-i partman-lvm/confirm boolean true
37 | d-i partman-lvm/confirm boolean true
38 | d-i partman-lvm/confirm_nooverwrite boolean true
39 | d-i partman-lvm/device_remove_lvm boolean true
40 | d-i partman/choose_partition select finish
41 | d-i partman/confirm boolean true
42 | d-i partman/confirm_nooverwrite boolean true
43 | d-i partman/confirm_write_new_label boolean true
44 | d-i pkgsel/include string openssh-server cryptsetup build-essential libssl-dev libreadline-dev zlib1g-dev linux-source dkms nfs-common docker-compose
45 | d-i pkgsel/install-language-support boolean false
46 | d-i pkgsel/update-policy select none
47 | d-i pkgsel/upgrade select full-upgrade
48 | d-i time/zone string UTC
49 | tasksel tasksel/first multiselect standard, ubuntu-server
50 |
51 | d-i console-setup/ask_detect boolean false
52 | d-i keyboard-configuration/layoutcode string us
53 | d-i keyboard-configuration/modelcode string pc105
54 | d-i debian-installer/locale string en_US.UTF-8
55 |
56 | # Create vagrant user account.
57 | d-i passwd/user-fullname string vagrant
58 | d-i passwd/username string vagrant
59 | d-i passwd/user-password password vagrant
60 | d-i passwd/user-password-again password vagrant
61 | d-i user-setup/allow-password-weak boolean true
62 | d-i user-setup/encrypt-home boolean false
63 | d-i passwd/user-default-groups string vagrant sudo
64 | d-i passwd/user-uid string 900
65 | ```
66 |
67 | In the config file, the builder type is set to `virtualbox-iso` to generate the VMDK, and the post-processor type is set to `compress` to generate a `tar` file.
68 |
69 | ### config.json
70 |
71 | ```json
72 | {
73 |   "builders": [
74 |     {
75 |       "boot_command": [
76 |         "<esc><wait>",
77 |         "<esc><wait>",
78 |         "<enter><wait>",
79 |         "/install/vmlinuz",
80 |         " initrd=/install/initrd.gz",
81 |         " auto-install/enable=true",
82 |         " debconf/priority=critical",
83 |         " netcfg/get_domain=vm",
84 |         " netcfg/get_hostname=vagrant",
85 |         " grub-installer/bootdev=/dev/sda",
86 |         " preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg",
87 |         " -- <wait>",
88 |         "<enter><wait>"
89 |       ],
90 |       "boot_wait": "10s",
91 |       "guest_os_type": "ubuntu-64",
92 |       "guest_additions_mode": "disable",
93 |       "disk_size": 8192,
94 |       "http_directory": "http",
95 |       "iso_urls": [
96 |         "ubuntu-18.04.5-server-amd64.iso",
97 |         "http://cdimage.ubuntu.com/ubuntu/releases/bionic/release/ubuntu-18.04.5-server-amd64.iso"
98 |       ],
99 |       "iso_checksum": "sha256:8c5fc24894394035402f66f3824beb7234b757dd2b5531379cb310cedfdf0996",
100 |       "shutdown_command": "echo 'vagrant' | sudo -S shutdown -P now",
101 |       "ssh_password": "vagrant",
102 |       "ssh_username": "vagrant",
103 |       "ssh_wait_timeout": "10000s",
104 |       "type": "virtualbox-iso",
105 |       "vm_name": "packer-ubuntu-64-20-04"
106 |     }
107 |   ],
108 |   "post-processors": [
109 |     {
110 |       "type": "compress",
111 |       "compression_level": 9,
112 |       "output": "test.tar",
113 |       "keep_input_artifact": true
114 |     }
115 |   ]
116 | }
117 | ```
118 |
119 | Both files are references; if you wish to modify something, you can make the changes accordingly.
120 | The steps to generate the image will remain the same.
121 |
122 | The files will need to be placed in the directory structure of the Packer image builder.
123 |
124 | ```text
125 | ubuntu_packer_image
126 | ├── http
127 | │   └── preseed.cfg
128 | └── config.json
129 | ```
130 |
131 | ## Generating the VMDK
132 |
133 | Run `packer build` to generate the VMDK and `tar` file.
134 |
135 | ```sh
136 | PACKER_LOG=1 packer build config.json
137 | ```
138 |
139 | Setting `PACKER_LOG` will allow you to see the logs of the Packer build step.
140 |
141 | When you run `packer build` with the example config file, the VMDK will be in the output directory, while the `tar` file will be in the root directory.
142 |
143 | ## Converting the Image
144 |
145 | Currently, the raw image cannot be built directly by the `virtualbox-iso` builder, so we will convert and then compress it.
146 | Note that if you are using the `qemu` builder type instead of the `virtualbox-iso` builder, you can skip the conversion step, as Packer lets you directly create a raw image.
147 |
148 | First, get the `qemu-img` CLI tool.
149 |
150 | ```sh
151 | apt-get install -y qemu-utils
152 | ```
153 |
154 | Then use the tool to convert the VMDK into a `raw` filesystem.
155 |
156 | ```sh
157 | qemu-img convert -f vmdk -O raw output-virtualbox-iso/packer-ubuntu-64-20-04-disk001.vmdk test_packer.raw
158 | ```
159 |
160 | Once you have a `raw` filesystem image, you can compress the raw image.
161 |
162 | ```sh
163 | gzip test_packer.raw
164 | ```
165 |
166 | The result is a `test_packer.raw.gz` file which can now be deployed with Tinkerbell.
167 | You can also use the uncompressed `test_packer.raw` file directly; the benefit of the compressed file is that it streams over the network in less time.
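Before publishing the compressed image, you can sanity-check it: `gzip -t` verifies the archive's integrity, and `gzip -l` reports compressed versus uncompressed sizes. The commands below demonstrate this on a stand-in file so they are safe to run anywhere; substitute `test_packer.raw.gz` for the real image.

```shell
# Create a small stand-in "raw image" (1 MiB of random data) and compress it.
head -c 1048576 /dev/urandom > sample.raw
gzip sample.raw
# Verify the archive; gzip -t exits non-zero if it is corrupt.
gzip -t sample.raw.gz && echo "archive OK"
# Report compressed vs. uncompressed sizes.
gzip -l sample.raw.gz
```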
168 |
169 | ## Creating a Template
170 |
171 | Below is a reference template that deploys the Ubuntu Packer image built above.
172 | It is similar to the other examples in the Deploying Operating Systems section.
173 | You can follow them for more references.
174 |
175 | The template uses actions from the [Artifact Hub].
176 |
177 | - [image2disk] - to write the OS image to a block device.
178 | - [kexec] - to `kexec` into our newly provisioned operating system.
179 |
180 | ```yaml
181 | version: "0.1"
182 | name: Ubuntu_20_04
183 | global_timeout: 1800
184 | tasks:
185 |   - name: "os-installation"
186 |     worker: "{{.device_1}}"
187 |     volumes:
188 |       - /dev:/dev
189 |       - /dev/console:/dev/console
190 |       - /lib/firmware:/lib/firmware:ro
191 |     actions:
192 |       - name: "stream_ubuntu_packer_image"
193 |         image: quay.io/tinkerbell-actions/image2disk:v1.0.0
194 |         timeout: 600
195 |         environment:
196 |           DEST_DISK: /dev/sda
197 |           IMG_URL: "http://192.168.1.1:8080/test_packer.raw.gz"
198 |           COMPRESSED: true
199 |       - name: "kexec_ubuntu"
200 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
201 |         timeout: 90
202 |         pid: host
203 |         environment:
204 |           BLOCK_DEVICE: /dev/sda1
205 |           FS_TYPE: ext4
206 | ```
207 |
208 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
209 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
210 | [kexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/kexec
211 | [packer]: https://www.packer.io/
212 | [preseed file]: https://www.packer.io/guides/automatic-operating-system-installs/preseed_ubuntu
213 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-debian.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Example - Debian
3 | date: 2021-06-10
4 | ---
5 |
6 | # Deploying Debian
7 |
8 | This guide walks through the process of deploying Debian through:
9 |
10 | - an Operating System Image
11 | - a Docker Image
12 | - Bootstrap
13 |
14 | ## Using an Operating System Image
15 |
16 | Debian distributes their Operating System in a number of different formats, which are all available on the [cloud-images] site.
17 |
18 | Below are two examples of images we can use:
19 |
20 | ```text
21 | debian-10-openstack-amd64.qcow2 2021-03-04 10:56 577M
22 | debian-10-openstack-amd64.raw 2021-03-04 10:53 2.0G
23 | ```
24 |
25 | The first image is in `qcow2` format and is a **full** disk image including partition tables, partitions filled with filesystems and files, and importantly, a boot loader at the beginning of the disk image.
26 | If you use the `qcow2` image, you will need to convert it into a `raw` image by installing the `qemu-img` CLI tool,
27 |
28 | ```sh
29 | apt-get install -y qemu-utils
30 | ```
31 |
32 | and using it to convert the image into a `raw` filesystem.
33 |
34 | ```sh
35 | qemu-img convert ./debian-10-openstack-amd64.qcow2 -O raw ./debian-10-openstack-amd64.raw
36 | ```
37 |
38 | The second image is already a `raw` disk image which can be used as-is because it contains everything that we need to boot directly into Debian.
39 |
40 | **Optional** - You can compress the raw image to save on both local disk space and network bandwidth when deploying the image.
41 |
42 | ```sh
43 | gzip ./debian-10-openstack-amd64.raw
44 | ```
45 |
46 | The raw image will need to live at an accessible web server.
47 |
48 | ### Creating the Template
49 |
50 | The template uses actions from the [Artifact Hub].
51 |
52 | - [image2disk] - to write the OS image to a block device.
53 | - [kexec] - to `kexec` into our newly provisioned operating system.
54 |
55 | ```yaml
56 | version: "0.1"
57 | name: debian_buster
58 | global_timeout: 1800
59 | tasks:
60 |   - name: "os-installation"
61 |     worker: "{{.device_1}}"
62 |     volumes:
63 |       - /dev:/dev
64 |       - /dev/console:/dev/console
65 |       - /lib/firmware:/lib/firmware:ro
66 |     actions:
67 |       - name: "stream-debian-image"
68 |         image: quay.io/tinkerbell-actions/image2disk:v1.0.0
69 |         timeout: 600
70 |         environment:
71 |           DEST_DISK: /dev/sda
72 |           IMG_URL: "http://192.168.1.1:8080/debian-10-openstack-amd64.raw.gz"
73 |           COMPRESSED: true
74 |       - name: "kexec-debian"
75 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
76 |         timeout: 90
77 |         pid: host
78 |         environment:
79 |           BLOCK_DEVICE: /dev/sda1
80 |           FS_TYPE: ext4
81 | ```
82 |
83 | > Important: Don't forget to pull, tag, and push `quay.io/tinkerbell-actions/image2disk:v1.0.0` prior to using it.
86 |
87 | ## Using a Docker Image
88 |
89 | We can easily make use of the **official** Docker images to generate a root filesystem for use when deploying with Tinkerbell.
90 |
91 | ### Downloading the Image
92 |
93 | ```sh
94 | TMPRFS=$(docker container create debian:latest)
95 | docker export $TMPRFS > debian_rootfs.tar
96 | docker rm $TMPRFS
97 | ```
98 |
99 | **Optional** - You can compress the archive to save on both local disk space and network bandwidth when deploying the image.
100 |
101 | ```sh
102 | gzip ./debian_rootfs.tar
103 | ```
104 |
105 | The archive will need to live at a locally accessible web server.
106 |
107 | ### Creating the Template
108 |
109 | The template uses actions from the [Artifact Hub].
110 |
111 | - [rootio] - to partition our disk and make filesystems.
112 | - [archive2disk] - to write the OS image to a block device.
113 | - [cexec] - to run commands inside (chroot) our newly provisioned operating system.
114 | - [kexec] - to `kexec` into our newly provisioned operating system.
115 |
116 | ```yaml
117 | version: "0.1"
118 | name: debian_bullseye_provisioning
119 | global_timeout: 1800
120 | tasks:
121 |   - name: "os-installation"
122 |     worker: "{{.device_1}}"
123 |     volumes:
124 |       - /dev:/dev
125 |       - /dev/console:/dev/console
126 |       - /lib/firmware:/lib/firmware:ro
127 |     actions:
128 |       - name: "disk-wipe-partition"
129 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
130 |         timeout: 90
131 |         command: ["partition"]
132 |         environment:
133 |           MIRROR_HOST: 192.168.1.2
134 |       - name: "format"
135 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
136 |         timeout: 90
137 |         command: ["format"]
138 |         environment:
139 |           MIRROR_HOST: 192.168.1.2
140 |       - name: "expand-debian-filesystem-to-root"
141 |         image: quay.io/tinkerbell-actions/archive2disk:v1.0.0
142 |         timeout: 90
143 |         environment:
144 |           ARCHIVE_URL: http://192.168.1.1:8080/debian_rootfs.tar.gz
145 |           ARCHIVE_TYPE: targz
146 |           DEST_DISK: /dev/sda3
147 |           FS_TYPE: ext4
148 |           DEST_PATH: /
149 |       - name: "install-grub-bootloader"
150 |         image: quay.io/tinkerbell-actions/cexec:v1.0.0
151 |         timeout: 90
152 |         environment:
153 |           BLOCK_DEVICE: /dev/sda3
154 |           FS_TYPE: ext4
155 |           CHROOT: y
156 |           CMD_LINE: "grub-install --root-directory=/boot /dev/sda"
157 |       - name: "kexec-debian"
158 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
159 |         timeout: 600
160 |         environment:
161 |           BLOCK_DEVICE: /dev/sda3
162 |           FS_TYPE: ext4
163 | ```
164 |
165 | ## Using Bootstrap
166 |
167 | The final method for installing Debian is to use the [grml-debootstrap] installer.
168 | We will need to create an action that invokes the installer and installs Debian to the local disk.
169 |
170 | ### Creating the Dockerfile
171 |
172 | The `Dockerfile` creates a new image based upon Debian and installs all of the components needed for `grml-debootstrap`.
173 | Then, it sets the `ENTRYPOINT` to execute the `grml-debootstrap` program, installing Debian to `/dev/sda3` and the boot loader to `/dev/sda`.
174 |
175 | ```dockerfile
176 | FROM debian:bullseye
177 | RUN apt-get update && apt-get install -y grml-debootstrap
178 | ENTRYPOINT ["grml-debootstrap", "--target", "/dev/sda3", "--grub", "/dev/sda"]
179 | ```
180 |
181 | Now create an action image from our Dockerfile and push it to the registry.
182 |
183 | ```sh
184 | docker build -t local-registry/debian:bootstrap .
185 | docker push local-registry/debian:bootstrap
186 | ```
187 |
188 | Once the new action is pushed to the local registry, it can be used as an action in a template.
189 |
190 | ```yaml
191 | actions:
192 |   - name: "expand Debian filesystem to root"
193 |     image: local-registry/debian:bootstrap
194 |     timeout: 90
195 | ```
196 |
197 | ### Creating the Template
198 |
199 | The template uses actions from the [Artifact Hub].
200 |
201 | - [rootio] - to partition our disk and make filesystems.
202 | - Our custom action that will invoke the Bootstrap program.
203 | - [kexec] - to `kexec` into our newly provisioned operating system.
204 |
205 | As well as the `debian:bootstrap` action from the local registry.
206 |
207 | ```yaml
208 | version: "0.1"
209 | name: debian_bullseye_provisioning
210 | global_timeout: 1800
211 | tasks:
212 |   - name: "os-installation"
213 |     worker: "{{.device_1}}"
214 |     volumes:
215 |       - /dev:/dev
216 |       - /dev/console:/dev/console
217 |       - /lib/firmware:/lib/firmware:ro
218 |     actions:
219 |       - name: "disk-wipe-partition"
220 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
221 |         timeout: 90
222 |         command: ["partition"]
223 |         environment:
224 |           MIRROR_HOST: 192.168.1.2
225 |       - name: "format"
226 |         image: quay.io/tinkerbell-actions/rootio:v1.0.0
227 |         timeout: 90
228 |         command: ["format"]
229 |         environment:
230 |           MIRROR_HOST: 192.168.1.2
231 |       - name: "expand Debian filesystem to root"
232 |         image: local-registry/debian:bootstrap
233 |         timeout: 90
234 |       - name: "kexec-debian"
235 |         image: quay.io/tinkerbell-actions/kexec:v1.0.0
236 |         timeout: 600
237 |         environment:
238 |           BLOCK_DEVICE: /dev/sda3
239 |           FS_TYPE: ext4
240 | ```
241 |
242 | [archive2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/archive2disk
243 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
244 | [cexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/cexec
245 | [cloud-images]: https://cloud.debian.org/images/cloud/
246 | [grml-debootstrap]: https://grml.org/grml-debootstrap/
247 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
248 | [kexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/kexec
249 | [rootio]: https://artifacthub.io/packages/tbaction/tinkerbell-community/rootio
250 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/the-basics.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: The Basics of Deploying an Operating System
3 | date: 2021-02-16
4 | ---
5 |
6 | # The Basics of Deploying an Operating System
7 |
8 | This guide covers deploying Operating Systems. This matters because not all Operating Systems are designed to be deployed in the same manner.
9 | Some Operating Systems depend on being laid out on a disk in a specific way, some are almost completely dependent on actually being "installed", and others can be easily automated for quick deployment but require additional steps to customize afterwards.
10 |
11 | ## Getting an Operating System Deployed
12 |
13 | Most Operating Systems are deployed in relatively the same manner:
14 |
15 | 1. A machine boots and reads from installation media (presented locally or over the network).
16 | 2. The target disks are prepared: partitions are typically created (possibly using HA technologies such as disk mirroring), and finally these partitions are "formatted" so that they contain a file system.
17 | 3. Either a minimal set of packages or a custom selection of packages will be installed to the new file system.
18 | Most Operating Systems or distributions have their own concept of "packages", but ultimately, under the covers, a package contains the binaries and required libraries for an application, along with logic that dictates where the files should be written and versioning information that the package manager can use.
19 | 4. There may be some final customization, such as setting up users, network configuration, etc.
20 | 5. A Boot loader is written to the target disk so that when the machine is next powered on it can boot the newly provisioned Operating System.
21 |
22 | The order of the steps may differ, but pretty much all of the major Operating Systems (Linux, Windows, macOS) follow the same pattern to deploy on target hardware.
23 |
24 | ## Options for Automating a Deployment
25 |
26 | There are usually two trains of thought when deploying an Operating System: scripted installations, which run through the steps listed above with no user interaction required, and image-based installations, which take a copy of an existing deployment and use it as a "rubber stamp" for other installs.
27 |
28 | ### Scripted
29 |
30 | Operating Systems were originally designed to run on hardware of a predetermined configuration, which meant that there was no need to customize the installation.
31 | However, as time passed, a few things happened that required Operating Systems to become more flexible:
32 |
33 | - Usage of computers skyrocketed.
34 | - The number of hardware vendors producing compute parts increased.
35 | - A lot of traditional types of work became digitized.
36 |
37 | All of these factors required an Operating System to support more and more types of hardware and their required configuration(s); furthermore, end-users required the capability to tailor an Operating System to behave as needed.
38 | To provide this functionality, Operating System vendors built rudimentary user interfaces that would ask questions or let the user installing the OS set various configuration options.
39 | This worked for a period of time, but as more and more computer systems were deployed it became an administrative nightmare: it was impossible to automate multiple installations because each required interaction to proceed, and the need for human interaction brings the possibility of human error during the deployment.
40 |
41 | For large-scale IT system installations to take place, operations teams needed a method for [unattended installations], where installations can happen without any involvement.
42 | The technique that made this work was to modify the Operating System installation code so that it could take a file answering all of the questions that would previously have required user input to progress the installation.
43 | These technologies are all named in a way that reflects that:
44 |
45 | - preseed
46 | - answer file(s)
47 | - kickstart
48 | - jumpstart
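
For a flavor of what such an answer file looks like, here is a tiny, illustrative Debian preseed fragment (the values are examples, and a real file answers many more questions); each line pre-answers one installer prompt:

```text
d-i debian-installer/locale string en_US.UTF-8
d-i mirror/http/hostname string deb.debian.org
d-i partman-auto/method string lvm
d-i passwd/username string demo
```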
49 |
50 | Once a member of the operations team has "designed" the set of responses for their chosen Operating System, this single configuration can be re-used as many times as required.
51 | This removes the chance of a human accidentally entering the wrong data or clicking the wrong button during an Operating System installation, and ensures that the installation is standardized and "documented".
52 |
53 | **However**, one thing to consider is that although a scripted installation is a repeatable procedure requiring no human interaction, it is not always 100% reliable.
54 | The installation still runs through a **lot** of steps: every package has to be installed, the package manager itself set up, devices configured, and more during that initial installation phase.
55 | Whilst this may work perfectly on the initial machine, undiscovered errors can appear when moving this installation method to different hardware.
56 | There have been numerous issues caused by packages relying on `sleep` during an installation step; the problem usually stems from the package being developed on a laptop and then moved to much larger hardware.
57 | Suddenly the `sleep` is no longer in step with the behavior of the hardware, as the task completes much more quickly than it had on the slower hardware.
58 | This has typically led to numerous installation failures and can be thought of as a race condition.
59 |
60 | ### Image
61 |
62 | Creating an image of an existing Operating System is a long-established practice; we can see it referenced in this [1960s IBM manual] for their mainframes.
63 | To understand imaging, we first need to understand the anatomy of a physical block device.
64 |
65 | #### Anatomy of a disk
66 |
67 | We can think of a disk as being like a long strip of paper starting at position 0 and ending at the length of the strip (its capacity).
68 | The positions are vitally important, as they are used by a computer when it starts in order to find things on the disk.
69 |
70 | The **boot sector** is the first place a machine will attempt to boot from once it has completed its hardware initialization and checks; the code in this location instructs the computer where to look for the rest of the boot sequence and code.
71 | In the majority of cases a computer will boot the first phase from this boot sector and then be told where to look for the subsequent phases; more than likely the remaining code will live within a partition.
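
To make the boot sector concrete: on a classic MBR-partitioned disk, the firmware loads the first 512-byte sector and checks for the signature bytes `0x55 0xAA` at offset 510 before executing the code in it. The sketch below uses an empty file as a stand-in sector, so it is safe to run anywhere:

```shell
# Build an empty 512-byte "sector" and stamp the MBR boot signature at offset 510.
truncate -s 512 sector.bin
printf '\x55\xaa' | dd of=sector.bin bs=1 seek=510 conv=notrunc status=none
# Dump the last two bytes as hex; a bootable MBR ends in 55 aa.
od -A d -t x1 -j 510 -N 2 sector.bin
```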
72 |
73 | A **partition** defines some ring-fenced capacity on an underlying device that can then be presented to the Operating System as usable storage.
74 | Partitions will then be "formatted" so that they have a structure that understands concepts such as folders/directories and files, along with additional functionality such as permissions.
75 |
76 | ![Diagram of a disk layout]
77 |
78 | Now that we know the makeup of a disk, we can see that there are lots of things we may need to be aware of, such as the type of boot loader, the size and number of partitions, the types of file systems, and the files and packages that need installing within those partitions.
79 |
80 | We can safely move away from all of this by taking a full copy of the disk!
81 | Starting from position 0 we can read every byte until we’ve reached the end of the disk (EOF) and we have a full copy of _everything_ from boot loaders to partitions and the underlying files.
82 |
83 | The steps for creating and using a machine image are usually:
84 |
85 | - Install an Operating System once (correctly)
86 | - Create an image of this deployed Operating System
87 | - Deploy the "golden image" to all other hosts
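
The capture step can be sketched with `dd`, the classic tool for a byte-for-byte copy. A real capture would read from a block device such as `/dev/sda`; here a small file stands in for the disk so the commands are safe to run:

```shell
# Stand-in "disk": a 1 MiB file instead of a real block device.
truncate -s 1M disk.img
# Read every byte from position 0 to EOF -- boot loader, partitions, and all.
dd if=disk.img of=golden.img bs=4M status=none
# The golden image should be byte-for-byte identical to the source.
cmp disk.img golden.img && echo "golden image matches source"
```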
88 |
89 | ### Filesystem archives
90 |
91 | The other common alternative for Operating System deployment is a hybrid of the two above, and involves a compressed archive of all the files that make up the Operating System.
92 | It usually requires the block devices on the server to have been partitioned and formatted in advance so that the files in the archive can be written to the filesystem.
93 | Once the contents of the archive have been written to the filesystem, the remaining steps are to install a boot loader and perform any post-installation configuration (accounts/network configuration).
94 |
95 | #### Creating a filesystem archive
96 |
97 | A number of Linux distributions provide a root file system archive that can be downloaded quite easily, or a method to create your own:
98 |
99 | - [Ubuntu]
100 | - [Debian]
101 | - [Arch] (The rootfs is in the format `archlinux-bootstrap-<version>-<architecture>.tar.gz`)
102 |
103 | We can also make use of root file systems created by the Docker maintainers:
104 |
105 | ```sh
106 | TMPRFS=$(docker container create ubuntu:latest)
107 | docker export $TMPRFS > rootfs.tar
108 | docker rm $TMPRFS
109 | ```
110 |
111 | Finally, there is plenty of tooling to create your own root file systems; a good example is [livemedia-creator].
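
To see what a root file system archive really is, the toy example below builds a tiny directory tree and packs it the same way. This only illustrates the archive format; it is not a bootable OS.

```shell
# A root filesystem archive is just a tarball of the tree that becomes "/".
mkdir -p rootfs/etc rootfs/bin
echo "demo-host" > rootfs/etc/hostname
tar -C rootfs -czf rootfs.tar.gz .
# List the contents; these paths are written relative to "/" on deploy.
tar -tzf rootfs.tar.gz
```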
112 |
113 | [1960s ibm manual]: https://web.archive.org/web/20140701185435/http://www.demorton.com/Tech/$OSTL.pdf
114 | [arch]: https://archive.archlinux.org/iso/
115 | [debian]: https://wiki.debian.org/Debootstrap
116 | [diagram of a disk layout]: ../images/disk-layout.png
117 | [livemedia-creator]: https://weldr.io/lorax/livemedia-creator.html
118 | [ubuntu]: http://cdimage.ubuntu.com/ubuntu-base/releases/20.04/release/
119 | [unattended installations]: https://en.wikipedia.org/wiki/Installation_(computer_programs)
120 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/the-deployment.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: The Deployment
3 | date: 2021-02-19
4 | ---
5 |
6 | # The Deployment
7 |
8 | In the majority of cases there will be a number of steps required before we're able to deploy an Operating System to a new piece of hardware.
9 | The exact steps depend largely on the type or format of the deployment media that the Operating System provider distributes, and on which installation method you want to use.
10 |
11 | ## Using an OS Image
12 |
13 | Not all Operating System images are distributed in the same formats, and in most use-cases either preparation or conversion to a supported image type will be required.
14 |
15 | ### Preparing the Image
16 |
17 | A large number of Operating System vendors tend to distribute their images using the [qcow] format, which comes from the Qemu virtualisation project.
18 | This provides a number of features that end users find desirable:
19 |
20 | - Small image size (an image is typically only as large as the data written, not the size of the logical block device)
21 | - The image can be used directly by the qemu/kvm hypervisor (used by a number of cloud providers)
22 | - The `qcow` image can be easily converted to other formats.
23 |
24 | The qemu project provides a number of useful tools to manage Operating System image files; the [qemu-image] tool (`qemu-img`) is commonly used to create and convert disk images.
25 |
26 | We can convert a `qcow` image to a `raw` image with the following command:
27 |
28 | ```sh
29 | qemu-img convert -O raw diskimage.qcow2 diskimage.raw
30 | ```
31 |
32 | One drawback of this conversion is that a `qcow` image only occupies the space where data has been written, so a 20G disk image holding an OS that only used 1G would be roughly 1G in size.
33 | **However**, the converted `raw` image will occupy the full size of the disk image regardless of its contents.
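
The size behaviour is easy to see on a local filesystem. A quick sketch, using a sparse file as a stand-in for a mostly-empty `raw` image (the file name is illustrative); note that when the image is transferred or written out, it is the full apparent size that counts:

```sh
truncate -s 20M empty.raw   # a 20M "disk image" with no data written yet
ls -lh empty.raw            # apparent size: the full 20M
du -h empty.raw             # blocks actually allocated: ~0
```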
34 |
35 | ### Streaming the Image to Disk
36 |
37 | Once you have your OS image prepared, use an action to write it directly to the underlying block device; this effectively provisions the Operating System to the hardware, allowing us to reboot into the new OS.
38 | The [image2disk] action is designed for this use-case and has the capability to stream an Operating System image from a remote location over HTTP/HTTPS and write it directly to a specified block device.
39 |
40 | For example, you can stream a raw Ubuntu image from a web-server and write the OS image to the block device `/dev/sda`.
41 |
42 | ```yaml
43 | actions:
44 | - name: "stream ubuntu"
45 | image: quay.io/tinkerbell-actions/image2disk:v1.0.0
46 | timeout: 90
47 | environment:
48 |       IMG_URL: http://192.168.1.1:8080/ubuntu.raw
49 | DEST_DISK: /dev/sda
50 | ```
51 |
52 | [image2disk] also supports on-the-fly gzip streaming, which allows you to compress the raw image using [gzip] to save local disk space **and** the amount of network traffic to the hosts that are being provisioned.
53 |
54 | ```sh
55 | gzip diskimage.raw
56 | ```
57 |
58 | The resulting file `diskimage.raw.gz` will in most cases be smaller than the original `qcow` file.
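
The effect is easy to demonstrate with a stand-in image file; a mostly-zero `raw` image compresses extremely well:

```sh
truncate -s 64M diskimage.raw   # stand-in raw image, mostly zeroes
gzip -k diskimage.raw           # -k keeps the original; produces diskimage.raw.gz
ls -l diskimage.raw diskimage.raw.gz
```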
59 |
60 | You can then use the [image2disk] action to stream the image to the block device.
61 |
62 | ```yaml
63 | actions:
64 | - name: "stream ubuntu"
65 | image: quay.io/tinkerbell-actions/image2disk:v1.0.0
66 | timeout: 90
67 | environment:
68 |       IMG_URL: http://192.168.1.1:8080/ubuntu.raw.gz
69 | DEST_DISK: /dev/sda
70 | COMPRESSED: true
71 | ```
72 |
73 | ## Using a Filesystem Archive
74 |
75 | ### Formatting a Block Device
76 |
77 | When provisioning from a filesystem archive, there is a **prerequisite**: the block device must be partitioned and formatted with a filesystem before we can write files and directories to the storage.
78 | In Tinkerbell we specify in hardware data the configuration for the storage.
79 | For example, the following snippet details the configuration for the block device `/dev/sda`.
80 | There are three partitions that will be created and labeled.
81 | It also specifies the format and filesystem type for two of those partitions.
82 |
83 | ```json
84 | "storage": {
85 | "disks": [
86 | {
87 | "device": "/dev/sda",
88 | "partitions": [
89 | {
90 | "label": "BIOS",
91 | "number": 1,
92 | "size": 4096
93 | },
94 | {
95 | "label": "SWAP",
96 | "number": 2,
97 | "size": 3993600
98 | },
99 | {
100 | "label": "ROOT",
101 | "number": 3,
102 | "size": 0
103 | }
104 | ],
105 | "wipe_table": true
106 | }
107 | ],
108 | "filesystems": [
109 | {
110 | "mount": {
111 | "create": {
112 | "options": ["-L", "ROOT"]
113 | },
114 | "device": "/dev/sda3",
115 | "format": "ext4",
116 | "point": "/"
117 | }
118 | },
119 | {
120 | "mount": {
121 | "create": {
122 | "options": ["-L", "SWAP"]
123 | },
124 | "device": "/dev/sda2",
125 | "format": "swap",
126 | "point": "none"
127 | }
128 | }
129 | ]
130 | }
131 | ```
132 |
133 | > More information about block device configuration is on the Equinix Metal™ [Custom Partitioning & Raid] page.
134 |
135 | The example blob is just the description of the device in hardware data; we will also need an action during provisioning to parse this metadata and actually write the changes to the block device.
136 | This is the job of the [rootio] action.
137 |
138 | ```yaml
139 | actions:
140 | - name: "format"
141 | image: quay.io/tinkerbell-actions/rootio:v1.0.0
142 | timeout: 90
143 | command: ["format"]
144 | environment:
145 | MIRROR_HOST: 192.168.1.2
146 | ```
147 |
148 | Once this action has completed we will have successfully modified the underlying block device to have our storage configuration!
149 |
150 | ### Extracting the OS to the Filesystem
151 |
152 | As detailed in [The Basics of Deploying an Operating System], we can download or create a filesystem archive in a number of different ways.
153 | Once we have a compressed archive of all of the files that make up the Operating System, we will again need to use an action to manage the task of fetching the archive and extracting it to our newly formatted file system.
154 | The action [archive2disk] has the functionality to **mount** a filesystem and both **stream**/**extract** a filesystem archive directly to the new filesystem.
155 |
156 | ```yaml
157 | actions:
158 | - name: "expand ubuntu filesystem to root"
159 | image: quay.io/tinkerbell-actions/archive2disk:v1.0.0
160 | timeout: 90
161 | environment:
162 | ARCHIVE_URL: http://192.168.1.1:8080/ubuntu.tar.gz
163 | ARCHIVE_TYPE: targz
164 | DEST_DISK: /dev/sda3
165 | FS_TYPE: ext4
166 | DEST_PATH: /
167 | ```
168 |
169 | ### Installing a Boot Loader
170 |
171 | Whilst we may have deployed a full Operating System to our persistent storage, it will be rendered useless at the next _reboot_ unless we install a boot loader so that the machine knows how to load the new OS.
172 | We can automate this step with another action whose role is to execute a command such as `grub-install` or `syslinux`, writing the boot loader code to the beginning of the block device, where the BIOS knows to look for it on machine startup.
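
As a sketch only, with illustrative device paths, the community `cexec` action can run such a command inside a chroot of the freshly written root filesystem:

```yaml
actions:
  - name: "install grub bootloader"
    image: quay.io/tinkerbell-actions/cexec:v1.0.0
    timeout: 90
    environment:
      BLOCK_DEVICE: /dev/sda3
      FS_TYPE: ext4
      CHROOT: 'y'
      CMD_LINE: grub-install /dev/sda
```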
173 |
174 | ## Using an Installer
175 |
176 | Some Operating Systems may require a combination of the two approaches above for deployment; other Operating Systems can instead be deployed through the use of an installer.
177 |
178 | These typically require an installer binary to exist, and either:
179 |
180 | - an action to write the installer binary to persistent storage so that it can be run, or
181 | - an action (Docker container) that contains all of the relevant files required to execute the installer.
182 |
183 | ### Debian example
184 |
185 | `Dockerfile`
186 |
187 | ```dockerfile
188 | FROM debian:bullseye
189 | RUN apt-get update; apt-get install -y grml-debootstrap
190 | ENTRYPOINT ["grml-debootstrap", "--target", "/dev/sda3", "--grub", "/dev/sda"]
191 | ```
192 |
193 | We can create an action from our Dockerfile:
194 |
195 | ```sh
196 | docker build -t local-registry/debian:example .
197 | ```
198 |
199 | Once we have pushed our new action to the registry we can reference the action in a workflow as shown below.
200 |
201 | ```yaml
202 | actions:
203 |   - name: "install debian with grml-debootstrap"
204 | image: local-registry/debian:example
205 | timeout: 90
206 | ```
207 |
208 | ### Additional Bootstraps
209 |
210 | - [Centos/RHEL]
211 | - [Ubuntu]
212 |
213 | ## Next Steps
214 |
215 | Once an Operating System image **or** a filesystem + boot loader is deployed, we may need to customize it or boot into the new system; we can either `reboot` the host or `kexec` directly into the new OS.
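
As a sketch, with illustrative values mirroring the earlier examples, the community `kexec` action can serve as the final workflow step, booting directly into the kernel on the freshly written root partition:

```yaml
actions:
  - name: "kexec into the new OS"
    image: quay.io/tinkerbell-actions/kexec:v1.0.0
    timeout: 90
    pid: host
    environment:
      BLOCK_DEVICE: /dev/sda3
      FS_TYPE: ext4
```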
216 |
217 | [archive2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/archive2disk
218 | [centos/rhel]: https://github.com/dozzie/yumbootstrap
219 | [custom partitioning & raid]: https://deploy.equinix.com/developers/docs/metal/storage/custom-partitioning-raid/
220 | [gzip]: https://en.wikipedia.org/wiki/Gzip
221 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
222 | [qcow]: https://en.wikipedia.org/wiki/Qcow
223 | [qemu-image]: https://en.wikibooks.org/wiki/QEMU/Images#Copying_an_image_to_a_physical_device
224 | [rootio]: https://artifacthub.io/packages/tbaction/tinkerbell-community/rootio
225 | [the basics of deploying an operating system]: the-basics.md#filesystem-archives
226 | [ubuntu]: https://help.ubuntu.com/lts/installation-guide/amd64/apds04.html
227 |
--------------------------------------------------------------------------------
/docs/contrib/index.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Contributing
3 | ---
4 |
5 | If you are interested in contributing to Tinkerbell, _Welcome!_ We are thankful you are here.
6 |
7 | Tinkerbell is an open source project made up of different components. A lot of the code is written in Go, but writing Go is not the only way to make a contribution.
8 | We use and rely on a lot of different technologies such as: iPXE, Docker, Prometheus, Kubernetes and much more.
9 |
10 | The projects inside Tinkerbell are designed to be as independent as possible, some are ready and others have a long way to go.
11 | In general, deciding where you want to contribute depends on what you are working towards.
12 |
13 | The best way to start is to join the [#tinkerbell] channel over on the [CNCF Community Slack] or the [Contributor Mailing list].
14 | You can find more about how we communicate on the [COMMUNICATION page] in the [tinkerbell/.github] repo.
15 |
16 | ## Contributing to the Codebase
17 |
18 | You can find all our projects on [GitHub].
19 | Have a look at issues and pull requests and if you can't figure out anything you want to do, ping us on Slack.
20 |
21 | Some advice for getting started is to figure out a way to scope your contribution to a single repository.
22 | This is a good practice because it simplifies development and helps us to avoid breaking changes.
23 |
24 | At this current stage we are far from our first stable release, which means that occasionally we will have breaking changes.
25 | For more on our policy about breaking changes, check out the [proposal regarding breaking changes].
26 |
27 | ## Proposals
28 |
29 | Tinkerbell uses a proposals repository over in [tinkerbell/proposals] to share ideas, discuss, and collaborate on Tinkerbell in a public manner.
30 |
31 | The proposal workflow is explained in [Proposal 001], which contains the information required to write your own proposal or to understand the state of a current one.
32 |
33 | ## Triage
34 |
35 | The [Community Triage Party call] takes place **every other Tuesday** at **3pm UTC (11am ET / 8am PT)** - join us to learn more about Tinkerbell!
36 |
37 | Regularly triaging incoming issues and pull requests is critical to the health of an open-source project.
38 | The triage process ensures that communication flows openly between users and contributors, even as people vacation or move to new projects.
39 | In particular:
40 |
41 | - Lower issue response latency encourages users to become contributors
42 | - Lower PR review latency encourages first-time contributors to become regulars
43 | - Labeled issues help the community to prioritize work
44 |
45 | All community members are welcome and encouraged to join and help us triage Tinkerbell.
46 | It's a great way to learn more about how the project functions, and gain exposure to different parts of the codebase.
47 |
48 | ### Triage access
49 |
50 | Open an issue in the [Tinkerbell .github repo] to request triage access if you do not already have access to add labels to issues.
51 |
52 | We will also ask at the beginning of each Community Triage Party call whether anyone requires triage access to participate.
53 |
54 | ### Daily Triage Process
55 |
56 | The goal of daily triage is to be the initial point of contact for an incoming issue or pull request.
57 | Anyone can participate on any day of the week!
58 |
59 | The dashboard of items requiring a response can be found at the [Daily Triage Dashboard].
60 | Each box of items is defined by a rule and has a listed `Resolution` action.
61 | Once the requested action has been taken, the issue will disappear from the list.
62 |
63 | - _Unresponded issues_ need a follow-up by someone who is a member of the Tinkerbell organization
64 | - _Review Ready_ PRs require a code review follow-up
65 | - _Unkinded issues_ are those requiring a label describing the kind of issue.
66 | In Tinkerbell, we use:
67 | - `enhancement`, `bug`, `documentation`, `question`, `todo`, `idea`, `epic`
68 |
69 | Other labels we use are:
70 |
71 | - `Good First Issue` - the bug has a proposed solution and can be implemented without further discussion.
72 | - `Help wanted` - the bug could use help from the open-source community
73 |
74 | ### Bi-weekly Community Triage Process
75 |
76 | The goal of bi-weekly triage is to catch items that may have fallen through the cracks.
77 |
78 | The dashboard of items requiring a response can be found at the [Bi-Weekly Triage Dashboard].
79 | Each box of items is defined by a rule and has a listed `Resolution` action.
80 | Once the defined action has been taken, the issue will disappear from the list.
81 |
82 | At the beginning of the call, we will ask if anyone requires triage permissions to participate.
83 |
84 | - _Stale Pull Requests_ are PRs that appear to be going nowhere.
85 | If we haven't heard back from the user in 30 days, the PR should be closed with care.
86 | The author can reopen it when they are ready.
87 |
88 | ### Prioritization
89 |
90 | !!! note
91 |     These labels have not yet been finalized - but are based on what is used in other CNCF projects.
92 |
93 | If the issue is not a question, it needs a [priority label]:
94 |
95 | - `priority/critical-urgent`: someone's top priority ASAP, such as security issue or issue that causes data loss.
96 | Rarely used, to be resolved within 5 days.
97 | - `priority/important-soon`: high priority for the team.
98 | To be resolved within 8 weeks.
99 | - `priority/important-longterm`: a long-term priority.
100 | To be resolved within a year.
101 | - `priority/backlog`: agreed that this would be good to have, but not yet a priority.
102 | Consider tagging as `help wanted`
103 | - `priority/awaiting-more-evidence`: may be useful, but there is not yet enough evidence to support inclusion on the backlog.
104 |
105 | ### Example follow-ups
106 |
107 | When a user submits a pull request or issue, it's essential to be respectful of the time the user has invested in opening the issue.
108 | Be kind.
109 |
110 | These are some templates that you can use as [Github Saved replies] for easy access during triage.
111 |
112 | #### Stale PR with an outstanding comment
113 |
114 | > @xx - It appears that this PR is doing the right thing and is very close to being merged! There is only one PR comment to resolve, and the PR will also need to be rebased against the latest code from the main branch.
115 | >
116 | > If we don't hear back within the next two weeks, we will likely close this PR as part of our PR grooming policy. If this happens, you can reopen this PR at any point once the required changes are made.
117 | >
118 | > Thank you for your contribution, and I hope to hear back from you soon!
119 |
120 | #### Needs more information
121 |
122 | > I don't yet have a clear way to replicate this issue. Do you mind adding some additional details? Here is additional information that would be helpful:
123 | >
124 | > - The exact command lines used
125 | > - Logs or command-line output
126 | >
127 | > Thank you for sharing your experience!
128 |
129 | #### Closing: Duplicate Issue
130 |
131 | > This issue appears to be a duplicate of #X. Do you mind if we move the conversation there?
132 | >
133 | > This way we can centralize the content relating to the issue. If you feel that this issue is not a duplicate, please reopen it using `/reopen`. If you have additional information to share, please add it to the new issue.
134 | >
135 | > Thank you for reporting this!
136 |
137 | #### Closing: Lack of Information
138 |
139 | If an issue hasn't been active for more than 8 weeks, and the author has been pinged at least once, then it can be closed.
140 |
141 | > Hey @author -- hopefully it's OK if I close this - there wasn't enough information to make it actionable, and some time has already passed. If you are able to provide additional details, please add a comment, and we will reopen it.
142 | >
143 | > Here is additional information that may be helpful to us:
144 | >
145 | > \* Whether the issue occurs with the latest Tinkerbell release
146 | >
147 | > \* The exact command-lines used
148 | >
149 | > Thank you for sharing your experience!
150 |
151 | #### Closing: Very stale PR
152 |
153 | Once a PR is 30 days old and pinged at least twice, it's safe to close it:
154 |
155 | > Closing this PR as stale. Please reopen this PR when you are ready to retake a look at it. Thank you for your contribution!
156 |
157 | [bi-weekly triage dashboard]: https://triage.meyu.us/s/weekly
158 | [community triage party call]: https://equinix.zoom.us/j/96016156757?pwd=nzzkczzmbfdvsm9ubhnzahryngdvdz09
159 | [daily triage dashboard]: https://triage.meyu.us/s/daily
160 | [github saved replies]: https://docs.github.com/en/get-started/writing-on-github/working-with-saved-replies/using-saved-replies
161 | [priority label]: https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#define-priority
162 | [tinkerbell .github repo]: https://github.com/tinkerbell/.github/issues
163 |
164 |
165 | ## Terms and Stewardship
166 |
167 | The [tinkerbell/.github] repository contains all the information relating to the code of conduct.
168 |
169 | - [Contributor Covenant Code of Conduct]
170 |
171 | The [tinkerbell/org] repository contains all the information relating to the license and documents on project owner responsibilities and maintainer information.
172 |
173 | - [License]
174 | - [Maintainers]
175 | - [Owners]
176 |
177 | [cncf community slack]: https://slack.cncf.io/
178 | [communication page]: https://github.com/tinkerbell/org/blob/main/COMMUNICATION.md
179 | [contributor covenant code of conduct]: https://github.com/tinkerbell/.github/blob/main/CODE_OF_CONDUCT.md
180 | [contributor mailing list]: https://github.com/tinkerbell/org/blob/main/COMMUNICATION.md#contributors-mailing-list
181 | [github]: https://github.com/tinkerbell
182 | [license]: https://github.com/tinkerbell/org/blob/main/LICENSE
183 | [maintainers]: https://github.com/tinkerbell/org/blob/main/MAINTAINERS.md
184 | [owners]: https://github.com/tinkerbell/org/blob/main/OWNERS.md
185 | [proposal 001]: https://github.com/tinkerbell/proposals/tree/main/proposals/0001
186 | [proposal regarding breaking changes]: https://github.com/tinkerbell/proposals/blob/main/proposals/0011/README.md
187 | [tinkerbell/.github]: https://github.com/tinkerbell/.github
188 | [#tinkerbell]: https://app.slack.com/client/T08PSQ7BQ/C01SRB41GMT
189 | [tinkerbell/org]: https://github.com/tinkerbell/org
190 | [tinkerbell/proposals]: https://github.com/tinkerbell/proposals
191 |
--------------------------------------------------------------------------------
/docs/deploying-operating-systems/examples-rhel-centos.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Example - Red Hat Enterprise Linux and CentOS
3 | date: 2021-03-16
4 | ---
5 |
6 | # Deploying Red Hat Enterprise Linux or CentOS
7 |
8 | This is a guide which walks through the process of deploying either Red Hat Enterprise Linux (RHEL) or CentOS from an operating system image or a Docker image.
9 |
10 | ## Using an Operating System Image
11 |
12 | Red Hat provides both RHEL and Fedora CoreOS Operating System images in the `qcow2` format.
13 |
14 | The CentOS images are available at the [cloud-images] site.
15 |
16 | RHEL images require a Red Hat Account in order to download, and are available at (login required):
17 |
18 | - [RHEL8]
19 | - [RHEL7]
20 |
21 | A `qcow2` image is a **full** disk image, including partition tables, partitions filled with filesystems and files, and, importantly, a boot loader at the beginning of the disk image.
22 | It will need to be converted to a `raw` disk image in order to use it.
23 |
24 | ### Converting Image
25 |
26 | In order to use this image, it needs to be converted into a `raw` disk image.
27 | To do the conversion, install the `qemu-img` CLI tool.
28 |
29 | ```sh
30 | apt-get install -y qemu-utils
31 | ```
32 |
33 | Then, use the tool to convert the image into a `raw` filesystem.
34 | This example uses one of the CentOS images.
35 |
36 | ```sh
37 | qemu-img convert ./CentOS-8-GenericCloud-8.3.2011-20201204.2.x86_64.qcow2 -O raw ./CentOS-8-GenericCloud-8.3.2011-20201204.2.x86_64.raw
38 | ```
39 |
40 | **Optional** - You can compress this raw image to save on both local disk space and network bandwidth when deploying the image.
41 |
42 | ```sh
43 | gzip ./CentOS-8-GenericCloud-8.3.2011-20201204.2.x86_64.raw
44 | ```
45 |
46 | Move the raw image to an accessible web server.
47 |
48 | ### Fedora CoreOS
49 |
50 | CentOS 8 is the last CentOS release and will go EOL at the end of 2021; following the acquisition of CoreOS, Red Hat also distributes an additional operating system called Fedora CoreOS.
51 | It is available at [Get Fedora], and distributed in both `raw` and `qcow2` format.
52 |
53 | ```sh
54 | fedora-coreos-33.20210217.3.0-metal.x86_64.qcow2.xz
55 | fedora-coreos-33.20210217.3.0-metal.x86_64.raw.xz
56 | ```
57 |
58 | Both images are compressed with the `xz` compression format.
59 | You can decompress these images with the `xz` command.
60 |
61 | ```sh
62 | xz -d fedora-coreos-33.20210217.3.0-metal.x86_64.raw.xz
63 | ```
64 |
65 | The `raw` disk image contains a full partition table (including OS and Swap partition) and boot loader for our Fedora CoreOS system, and can be used without converting it first.
66 |
67 | The `.qcow2.xz` image is a **full** disk image including partition tables, partitions filled with filesystems and the files, and importantly, a boot loader at the beginning of the disk image.
68 | Like the RHEL and CentOS images, it will need to be converted to a `raw` disk image before use.
69 |
70 | ### Creating the Template
71 |
72 | The template uses actions from the [Artifact Hub].
73 |
74 | - [image2disk] - to write the OS image to a block device.
75 | - [kexec] - to `kexec` into our newly provisioned operating system.
76 |
77 | > Important: Don't forget to pull, tag and push `quay.io/tinkerbell-actions/image2disk:v1.0.0` prior to using it.
78 |
79 | The example template uses the CentOS images, but you can modify it for the other distributions such as RHEL or Fedora CoreOS.
80 |
81 | ```yaml
82 | version: '0.1'
83 | name: CentOS_Deployment
84 | global_timeout: 1800
85 | tasks:
86 | - name: os-installation
87 | worker: '{{.device_1}}'
88 | volumes:
89 | - '/dev:/dev'
90 | - '/dev/console:/dev/console'
91 | - '/lib/firmware:/lib/firmware:ro'
92 | actions:
93 | - name: stream-image
94 | image: 'quay.io/tinkerbell-actions/image2disk:v1.0.0'
95 | timeout: 600
96 | environment:
97 | DEST_DISK: /dev/sda
98 | IMG_URL: >-
99 | http://192.168.1.1:8080/CentOS-8-GenericCloud-8.3.2011-20201204.2.x86_64.raw.gz
100 | COMPRESSED: true
101 | - name: kexec
102 | image: 'quay.io/tinkerbell-actions/kexec:v1.0.0'
103 | timeout: 90
104 | pid: host
105 | environment:
106 | BLOCK_DEVICE: /dev/sda1
107 | FS_TYPE: ext4
108 | ```
109 |
110 | ## Using a Docker Image for CentOS
111 |
112 | We can easily make use of the **official** docker images to generate a root filesystem for use when deploying with Tinkerbell.
113 |
114 | ### Downloading the CentOS Image
115 |
116 | ```sh
117 | TMPRFS=$(docker container create centos:8)
118 | docker export $TMPRFS > centos_rootfs.tar
119 | docker rm $TMPRFS
120 | ```
121 |
122 | **Optional** - You can compress this filesystem archive to save on both local disk space and network bandwidth when deploying the image.
123 |
124 | ```sh
125 | gzip ./centos_rootfs.tar
126 | ```
127 |
128 | Move the filesystem archive to an accessible web server.
129 |
130 | ### Creating the CentOS Template
131 |
132 | The template makes use of the following actions from the [Artifact Hub].
133 |
134 | - [rootio] - to partition our disk and make filesystems.
135 | - [archive2disk] - to write the OS image to a block device.
136 | - [cexec] - to run commands inside (chroot) our newly provisioned operating system.
137 | - [kexec] - to `kexec` into our newly provisioned operating system.
138 |
139 | ```yaml
140 | version: '0.1'
141 | name: centos_provisioning
142 | global_timeout: 1800
143 | tasks:
144 | - name: os-installation
145 | worker: '{{.device_1}}'
146 | volumes:
147 | - '/dev:/dev'
148 | - '/dev/console:/dev/console'
149 | - '/lib/firmware:/lib/firmware:ro'
150 |     actions:
152 | - name: disk-wipe-partition
153 | image: 'quay.io/tinkerbell-actions/rootio:v1.0.0'
154 | timeout: 90
155 | command:
156 | - partition
157 | environment:
158 | MIRROR_HOST: 192.168.1.2
159 | - name: format
160 | image: 'quay.io/tinkerbell-actions/rootio:v1.0.0'
161 | timeout: 90
162 | command:
163 | - format
164 | environment:
165 | MIRROR_HOST: 192.168.1.2
166 | - name: expand-filesystem-to-root
167 | image: 'quay.io/tinkerbell-actions/archive2disk:v1.0.0'
168 | timeout: 90
169 | environment:
170 | ARCHIVE_URL: 'http://192.168.1.1:8080/centos_rootfs.tar.gz'
171 | ARCHIVE_TYPE: targz
172 | DEST_DISK: /dev/sda3
173 | FS_TYPE: ext4
174 | DEST_PATH: /
175 | - name: install-grub-bootloader
176 | image: 'quay.io/tinkerbell-actions/cexec:v1.0.0'
177 | timeout: 90
178 | environment:
179 | BLOCK_DEVICE: /dev/sda3
180 | FS_TYPE: ext4
181 | CHROOT: 'y'
182 | CMD_LINE: grub-install --root-directory=/boot /dev/sda
183 | - name: kexec
184 | image: 'quay.io/tinkerbell-actions/kexec:v1.0.0'
185 | timeout: 600
186 | environment:
187 | BLOCK_DEVICE: /dev/sda3
188 | FS_TYPE: ext4
189 | ```
190 |
191 | ## Using a Docker Image for Red Hat Enterprise Linux
192 |
193 | We can easily make use of the **official** docker images to generate a root filesystem for use when deploying with Tinkerbell.
194 |
195 | ### Download the RHEL Image
196 |
197 | ```sh
198 | TMPRFS=$(docker container create registry.access.redhat.com/rhel7:latest)
199 | docker export $TMPRFS > rhel_rootfs.tar
200 | docker rm $TMPRFS
201 | ```
202 |
203 | **Optional** - You can compress this filesystem archive to save on both local disk space and network bandwidth when deploying the image.
204 |
205 | ```sh
206 | gzip ./rhel_rootfs.tar
207 | ```
208 |
209 | Move the filesystem archive to an accessible web server.
210 |
211 | ### Creating the RHEL Template
212 |
213 | The template makes use of the following actions from the [Artifact Hub].
214 |
215 | - [rootio] - to partition our disk and make filesystems.
216 | - [archive2disk] - to write the OS image to a block device.
217 | - [cexec] - to run commands inside (chroot) our newly provisioned operating system.
218 | - [kexec] - to `kexec` into our newly provisioned operating system.
219 |
220 | ```yaml
221 | version: '0.1'
222 | name: rhel_provisioning
223 | global_timeout: 1800
224 | tasks:
225 | - name: os-installation
226 | worker: '{{.device_1}}'
227 | volumes:
228 | - '/dev:/dev'
229 | - '/dev/console:/dev/console'
230 | - '/lib/firmware:/lib/firmware:ro'
231 |     actions:
233 | - name: disk-wipe-partition
234 | image: 'quay.io/tinkerbell-actions/rootio:v1.0.0'
235 | timeout: 90
236 | command:
237 | - partition
238 | environment:
239 | MIRROR_HOST: 192.168.1.2
240 | - name: format
241 | image: 'quay.io/tinkerbell-actions/rootio:v1.0.0'
242 | timeout: 90
243 | command:
244 | - format
245 | environment:
246 | MIRROR_HOST: 192.168.1.2
247 | - name: expand-filesystem-to-root
248 | image: 'quay.io/tinkerbell-actions/archive2disk:v1.0.0'
249 | timeout: 90
250 | environment:
251 | ARCHIVE_URL: 'http://192.168.1.1:8080/rhel_rootfs.tar.gz'
252 | ARCHIVE_TYPE: targz
253 | DEST_DISK: /dev/sda3
254 | FS_TYPE: ext4
255 | DEST_PATH: /
256 | - name: install-EPEL-repo
257 | image: 'quay.io/tinkerbell-actions/cexec:v1.0.0'
258 | timeout: 90
259 | environment:
260 | BLOCK_DEVICE: /dev/sda3
261 | FS_TYPE: ext4
262 | CHROOT: 'y'
263 | CMD_LINE: >-
264 | curl -O
265 | https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm;
266 | yum install ./epel-release-latest-7.noarch.rpm; yum install grub2
267 | - name: install-grub-bootloader
268 | image: 'quay.io/tinkerbell-actions/cexec:v1.0.0'
269 | timeout: 90
270 | environment:
271 | BLOCK_DEVICE: /dev/sda3
272 | FS_TYPE: ext4
273 | CHROOT: 'y'
274 | CMD_LINE: grub-install --root-directory=/boot /dev/sda
275 | - name: kexec
276 | image: 'quay.io/tinkerbell-actions/kexec:v1.0.0'
277 | timeout: 600
278 | environment:
279 | BLOCK_DEVICE: /dev/sda3
280 | FS_TYPE: ext4
281 | ```
282 |
283 | [archive2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/archive2disk
284 | [artifact hub]: https://artifacthub.io/packages/search?kind=4
285 | [cexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/cexec
286 | [cloud-images]: https://cloud.centos.org/centos/8/x86_64/images/
287 | [get fedora]: https://getfedora.org/en/coreos/download/?tab=metal_virtualized&stream=stable&arch=x86_64
288 | [image2disk]: https://artifacthub.io/packages/tbaction/tinkerbell-community/image2disk
289 | [kexec]: https://artifacthub.io/packages/tbaction/tinkerbell-community/kexec
290 | [rhel7]: https://access.redhat.com/downloads/content/69/ver=/rhel---7/7.9/x86_64/product-software
291 | [rhel8]: https://access.redhat.com/downloads/content/479/ver=/rhel---8/8.5/x86_64/product-software
292 | [rootio]: https://artifacthub.io/packages/tbaction/tinkerbell-community/rootio
293 |
--------------------------------------------------------------------------------
/docs/assets/logo.svg:
--------------------------------------------------------------------------------
1 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/overrides/partials/header.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
6 |
174 |
175 |
--------------------------------------------------------------------------------
/docs/hardware-data.md:
--------------------------------------------------------------------------------
1 | ---
2 | title: Hardware Data
3 | date: 2019-01-04
4 | ---
5 |
6 | # Hardware Data
7 |
8 | - Hardware data holds the details about the hardware that you wish to use with a workflow.
9 | - A single piece of hardware may have multiple network devices that can be used in a workflow.
10 | - The details about all those devices are maintained in JSON format as hardware data.
11 |
12 | ## Example
13 |
14 | If you have a machine with a single network/worker device, its hardware data is structured like the following:
15 |
16 | ```json
17 | {
18 | "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
19 | "metadata": {
20 | "bonding_mode": 5,
21 | "custom": {
22 | "preinstalled_operating_system_version": {},
23 | "private_subnets": []
24 | },
25 | "facility": {
26 | "facility_code": "ewr1",
27 | "plan_slug": "c2.medium.x86",
28 | "plan_version_slug": ""
29 | },
30 | "instance": {
31 | "crypted_root_password": "redacted",
32 | "operating_system_version": {
33 | "distro": "ubuntu",
34 | "os_slug": "ubuntu_18_04",
35 | "version": "18.04"
36 | },
37 | "storage": {
38 | "disks": [
39 | {
40 | "device": "/dev/sda",
41 | "partitions": [
42 | {
43 | "label": "BIOS",
44 | "number": 1,
45 | "size": 4096
46 | },
47 | {
48 | "label": "SWAP",
49 | "number": 2,
50 | "size": 3993600
51 | },
52 | {
53 | "label": "ROOT",
54 | "number": 3,
55 | "size": 0
56 | }
57 | ],
58 | "wipe_table": true
59 | }
60 | ],
61 | "filesystems": [
62 | {
63 | "mount": {
64 | "create": {
65 | "options": ["-L", "ROOT"]
66 | },
67 | "device": "/dev/sda3",
68 | "format": "ext4",
69 | "point": "/"
70 | }
71 | },
72 | {
73 | "mount": {
74 | "create": {
75 | "options": ["-L", "SWAP"]
76 | },
77 | "device": "/dev/sda2",
78 | "format": "swap",
79 | "point": "none"
80 | }
81 | }
82 | ]
83 | }
84 | },
85 | "manufacturer": {
86 | "id": "",
87 | "slug": ""
88 | },
89 | "state": ""
90 | },
91 | "network": {
92 | "interfaces": [
93 | {
94 | "dhcp": {
95 | "arch": "x86_64",
96 | "hostname": "server001",
97 | "ip": {
98 | "address": "192.168.1.5",
99 | "gateway": "192.168.1.1",
100 | "netmask": "255.255.255.248"
101 | },
102 | "lease_time": 86400,
103 | "mac": "00:00:00:00:00:00",
104 | "name_servers": [],
105 | "time_servers": [],
106 | "uefi": false
107 | },
108 | "netboot": {
109 | "allow_pxe": true,
110 | "allow_workflow": true,
111 | "ipxe": {
112 | "contents": "#!ipxe",
113 | "url": "http://url/menu.ipxe"
114 | },
115 | "osie": {
116 | "base_url": "",
117 | "initrd": "",
118 | "kernel": "vmlinuz-x86_64"
119 | }
120 | }
121 | }
122 | ]
123 | }
124 | }
125 | ```
126 |
127 | ## Property Description
128 |
129 | The following section explains each property in the above example:
130 |
131 | | Property | Description |
132 | | ------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
133 | | id                                                           | An identifier used to uniquely identify the hardware. The `id` can be generated using the `uuidgen` command. If you are in an Equinix Metal environment, you can get the `id` from the server overview page. The `id` doesn't _have_ to be a UUID, provided it's unique. |
134 | | network | Network details |
135 | | network.interfaces[]                                         | List of network interfaces on the hardware |
136 | | network.interfaces[].dhcp | DHCP details |
137 | | network.interfaces[].dhcp.mac | MAC address of the network device (worker) |
138 | | network.interfaces[].dhcp.ip | IP details for DHCP |
139 | | network.interfaces[].dhcp.ip.address | Worker IP address to be requested over DHCP |
140 | | network.interfaces[].dhcp.ip.gateway | Gateway address |
141 | | network.interfaces[].dhcp.ip.netmask | Netmask for the private network |
142 | | network.interfaces[].dhcp.hostname | Hostname |
143 | | network.interfaces[].dhcp.lease_time                         | Lease expiration in seconds (default: 86400) |
144 | | network.interfaces[].dhcp.name_servers[] | DNS servers |
145 | | network.interfaces[].dhcp.time_servers[] | NTP servers |
146 | | network.interfaces[].dhcp.arch | Hardware architecture, example: `x86_64` |
147 | | network.interfaces[].dhcp.uefi | Is UEFI |
148 | | network.interfaces[].netboot | Netboot details |
149 | | network.interfaces[].netboot.allow_pxe                       | Must be set to `true` to allow PXE booting. |
150 | | network.interfaces[].netboot.allow_workflow | Must be `true` in order to execute a workflow. |
151 | | network.interfaces[].netboot.ipxe | Details for iPXE |
152 | | network.interfaces[].netboot.ipxe.url | iPXE script URL |
153 | | network.interfaces[].netboot.ipxe.contents | iPXE script contents |
154 | | network.interfaces[].netboot.osie | OSIE details |
155 | | network.interfaces[].netboot.osie.kernel | Kernel |
156 | | network.interfaces[].netboot.osie.initrd | Initrd |
157 | | network.interfaces[].netboot.osie.base_url | Base URL for the kernel and initrd |
158 | | metadata | Hardware metadata details |
159 | | metadata.state | State must be set to `provisioning` for workflows. |
160 | | metadata.bonding_mode | Bonding mode |
161 | | metadata.manufacturer | Manufacturer details |
162 | | metadata.instance | Holds the details for an instance |
163 | | metadata.instance.storage | Details for an instance storage like disks and filesystems |
164 | | metadata.instance.storage.disks                              | List of disks on the hardware |
165 | | metadata.instance.storage.disks[].device | Name of the disk |
166 | | metadata.instance.storage.disks[].wipe_table                 | Set to `true` to wipe the partition table before partitioning. |
167 | | metadata.instance.storage.disks[].partitions | List of disk partitions |
168 | | metadata.instance.storage.disks[].partitions[].size | Size of the partition |
169 | | metadata.instance.storage.disks[].partitions[].label | Partition label like BIOS, SWAP or ROOT |
170 | | metadata.instance.storage.disks[].partitions[].number | Partition number |
171 | | metadata.instance.storage.filesystems | List of filesystems and their respective mount points |
172 | | metadata.instance.storage.filesystems[].mount | Details about the filesystem to be mounted |
173 | | metadata.instance.storage.filesystems[].mount.point | Mount point for the filesystem |
174 | | metadata.instance.storage.filesystems[].mount.create | Additional details that can be provided while creating a partition |
175 | | metadata.instance.storage.filesystems[].mount.create.options | Options to be passed to `mkfs` while creating a partition |
176 | | metadata.instance.storage.filesystems[].mount.device | Device to be mounted |
177 | | metadata.instance.storage.filesystems[].mount.format | Filesystem format |
178 | | metadata.instance.crypted_root_password                      | Hash of the root password that is used to log in to the worker after provisioning. The hash can be generated using the `openssl passwd` command. For example, `openssl passwd -6 -salt xyz your-password`. |
179 | | metadata.instance.operating_system_version                   | Details about the operating system to be installed |
180 | | metadata.instance.operating_system_version.distro            | Operating system distribution name, such as `ubuntu` |
181 | | metadata.instance.operating_system_version.version           | Operating system version, such as 18.04 or 20.04 |
182 | | metadata.instance.operating_system_version.os_slug           | A slug combining the operating system distro and version. |
183 | | metadata.facility | Facility details |
184 | | metadata.facility.plan_slug                                  | The slug for the worker class. The value for this property depends on how you set up your workflow. While it is required if you are using the OS images from the [packet-images] repository, it may be left out if not used at all in the workflow. |
185 | | metadata.facility.facility_code | For local setup, `onprem` or any other string value can be used. |
186 |
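The `id` and `crypted_root_password` values from the table above can be generated on the command line. A minimal sketch, assuming `uuidgen` and `openssl` are available; the salt `xyz` and the password are placeholders to substitute with your own:

```shell
# Generate a unique hardware id (any unique string works; a UUID is conventional).
HW_ID="$(uuidgen)"

# Generate a SHA-512 crypt hash for crypted_root_password.
# "xyz" and "your-password" are placeholders.
ROOT_HASH="$(openssl passwd -6 -salt xyz your-password)"

echo "$HW_ID"
echo "$ROOT_HASH"
```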
187 | ## The Minimal Hardware Data
188 |
189 | While the hardware data is essential, not all the properties are required for every workflow.
190 | In fact, it's up to the workflow designer how they want to use the data in their workflow.
191 | Therefore, you may start with the minimal data given below and add only the properties you want to use in your workflow.
192 |
193 | ```json
194 | {
195 | "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
196 | "metadata": {
197 | "facility": {
198 | "facility_code": "ewr1",
199 | "plan_slug": "c2.medium.x86",
200 | "plan_version_slug": ""
201 | },
202 | "instance": {},
203 | "state": "provisioning"
204 | },
205 | "network": {
206 | "interfaces": [
207 | {
208 | "dhcp": {
209 | "arch": "x86_64",
210 | "ip": {
211 | "address": "192.168.1.5",
212 | "gateway": "192.168.1.1",
213 | "netmask": "255.255.255.248"
214 | },
215 | "mac": "00:00:00:00:00:00",
216 | "uefi": false
217 | },
218 | "netboot": {
219 | "allow_pxe": true,
220 | "allow_workflow": true
221 | }
222 | }
223 | ]
224 | }
225 | }
226 | ```
227 |
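Before registering hardware data, it can help to sanity-check that the fields a workflow depends on are present. The snippet below is an illustrative check, not part of Tinkerbell; the required fields and values are taken from the minimal example above:

```python
import json

# Minimal hardware data, as in the example above.
data = json.loads("""
{
  "id": "0eba0bf8-3772-4b4a-ab9f-6ebe93b90a94",
  "metadata": {
    "facility": {"facility_code": "ewr1", "plan_slug": "c2.medium.x86", "plan_version_slug": ""},
    "instance": {},
    "state": "provisioning"
  },
  "network": {
    "interfaces": [
      {
        "dhcp": {
          "arch": "x86_64",
          "ip": {"address": "192.168.1.5", "gateway": "192.168.1.1", "netmask": "255.255.255.248"},
          "mac": "00:00:00:00:00:00",
          "uefi": false
        },
        "netboot": {"allow_pxe": true, "allow_workflow": true}
      }
    ]
  }
}
""")

def check_hardware(hw: dict) -> list:
    """Return a list of problems; an empty list means the minimal fields are present."""
    problems = []
    if not hw.get("id"):
        problems.append("missing id")
    if hw.get("metadata", {}).get("state") != "provisioning":
        problems.append("metadata.state must be 'provisioning' for workflows")
    for i, iface in enumerate(hw.get("network", {}).get("interfaces", [])):
        if not iface.get("dhcp", {}).get("mac"):
            problems.append(f"interface {i}: missing dhcp.mac")
        if not iface.get("netboot", {}).get("allow_workflow"):
            problems.append(f"interface {i}: netboot.allow_workflow must be true")
    return problems

print(check_hardware(data))  # → []
```

Once the data passes such a check, it can be registered with the Tinkerbell CLI, typically via `tink hardware push` (exact flags vary by CLI version).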
228 | [packet-images]: https://github.com/packethost/packet-images
229 |
--------------------------------------------------------------------------------