├── README.md
├── _acmztp
│   ├── argocd
│   │   ├── app-project.yaml
│   │   ├── argocd-openshift-gitops-patch.json
│   │   ├── clusters-app.yaml
│   │   ├── gitops-policy-rolebinding.yaml
│   │   ├── kustomization.yaml
│   │   ├── policies-app-project.yaml
│   │   ├── policies-app.yaml
│   │   └── ztp_gitops_flow.png
│   ├── policygentemplates
│   │   ├── common.yaml
│   │   ├── kustomization.yaml
│   │   ├── ns.yaml
│   │   └── source-crs
│   │       └── web-terminal
│   │           ├── namespace.yaml
│   │           ├── operator-group.yaml
│   │           ├── status.yaml
│   │           └── subscription.yaml
│   └── siteconfig
│       ├── .gitignore
│       ├── ca-montreal-bmc-secret.yaml
│       ├── ca-montreal-sealed-secret.yaml
│       ├── ca-montreal.yaml
│       ├── kustomization.yaml
│       ├── manifests
│       │   ├── enable-crun-master.yaml
│       │   └── enable-crun-worker.yaml
│       └── siteConfig-ca-montreal.yaml
├── doc
│   └── resources
│       ├── ocp-ztp.drawio
│       └── ocp-ztp.png
├── hub
│   ├── .gitignore
│   ├── 00-clusterimageset.yaml
│   ├── 02-assistedserviceconfig.yaml
│   ├── 03-assisted-deployment-ssh-private-key-EXAMPLE.yaml
│   ├── README.md
│   ├── argocd-app.yaml
│   └── kustomization.yaml
├── hypershift
│   ├── README.md
│   └── ca-montreal.sh
├── libvirt
│   └── cloud-init
│       ├── README.md
│       ├── meta-data
│       └── user-data
├── metal-provisioner
│   ├── 00-namespace.yaml
│   ├── 01-bmo.yaml
│   ├── 02-ironic.yaml
│   └── kustomization.yaml
├── spoke-3nodes-ztp
│   ├── .gitignore
│   ├── 00-namespace.yaml
│   ├── 01-agentclusterinstall.yaml
│   ├── 02-clusterdeployment.yaml
│   ├── 03-nmstateconfig.yaml
│   ├── 04-spokeinfraenv.yaml
│   ├── 05-baremetalhost.yaml
│   ├── 05-userdata.yaml
│   ├── 06-assisteddeploymentpullsecret-EXAMPLE.yaml
│   ├── 07-kusterlet.yaml
│   └── kustomization.yaml
├── spoke-manual
│   ├── 00-agentclusterinstall.yaml
│   ├── 01-clusterdeployment.yaml
│   ├── 02-spokeinfraenv.yaml
│   ├── README.md
│   ├── kustomization.yaml
│   └── libvirt
│       ├── net.xml
│       └── vm.xml
└── spoke-sno-ztp
    ├── .gitignore
    ├── 00-namespace.yaml
    ├── 01-agentclusterinstall.yaml
    ├── 02-clusterdeployment.yaml
    ├── 03-nmstateconfig.yaml
    ├── 04-spokeinfraenv.yaml
    ├── 05-baremetalhost.yaml
    ├── 05-userdata.yaml
    ├── 06-assisteddeploymentpullsecret-EXAMPLE.yaml
    ├── 07-kusterlet.yaml
    └── kustomization.yaml
/README.md:
--------------------------------------------------------------------------------
1 | # Deploy OpenShift on libvirt using RHACM ZTP capabilities
2 |
3 | The goal is to leverage the latest capabilities from Red Hat Advanced Cluster Management (RHACM) 2.3+ to deploy an OpenShift cluster using Zero Touch Provisioning on an emulated bare metal environment.
4 |
5 | The typical Zero Touch Provisioning flow is meant for a bare metal environment; but if, like me, you don't have a bare metal environment handy, or want to optimize the only server you have, make sure to review the section "Ironic & Metal3".
6 |
7 | RHACM works in a hub-and-spoke manner, so the goal here is to deploy a spoke from the hub cluster.
8 |
9 | The overall setup uses the following components:
10 |
11 | - [Red Hat Advanced Cluster Management](https://www.openshift.com/products/advanced-cluster-management) (RHACM) provides the overall feature set to manage a fleet of clusters. It also provides all the foundational elements to create an [assisted service](https://github.com/openshift/assisted-service).
12 |
13 | If you do not have a bare metal cluster, you also need to deploy:
14 |
15 | - [Ironic](https://wiki.openstack.org/wiki/Ironic): the OpenStack bare metal provisioning tool that uses PXE or a BMC to provision machines and power them on/off
16 | - [Metal3](https://metal3.io/): the Kubernetes bare metal provisioning tool. Under the hood, it uses Ironic; above the hood, it provides an [operator](https://github.com/metal3-io/baremetal-operator) along with the CRD it supports: `BareMetalHost`
17 |
18 | Let's align on the Zero Touch Provisioning expectations:
19 |
20 | - the overall libvirt environment will be set up manually (although it could easily be automated).
21 | - once the environment is correctly set up, we will apply the manifests that automate the spoke cluster creation.
22 |
23 | ### Table of Contents
24 |
25 | 1. [Pre-requisites](#prerequisites)
26 | 2. [Architecture](#ztpflow)
27 | 3. [Install requirements on the hub cluster](#hubcluster)
28 | - [Assisted Service](#assistedservice)
29 | - [Ironic & Metal3](#bmo)
30 | 4. [Install requirements on the spoke server](#spokecluster)
31 | - [Install libvirt](#libvirtinstall)
32 | - [Install and configure Sushy service](#sushy)
33 | - [Libvirt setup](#libvirtsetup)
34 | - [Create a storage pool](#storage)
35 | - [Create a network](#net)
36 | - [Create the disk](#disk)
37 | - [Create the VM / libvirt domain](#vm)
38 | 5. [Let's deploy the spoke](#spoke)
39 |     - [A few debugging tips](#debug)
40 | - [Accessing your cluster](#access)
41 |
42 | ## Pre-requisites
43 |
44 | - Red Hat OpenShift Container Platform __4.8+__ for the hub cluster - see [here](https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/installing/index) on how to deploy
45 | - Red Hat Advanced Cluster Management __2.3+__ installed on the hub cluster - see [here](https://github.com/open-cluster-management/deploy#prepare-to-deploy-open-cluster-management-instance-only-do-once) on how to deploy
46 | - A server with at least 32 GB of RAM, 8 CPUs and 120 GB of disk - this is the machine we will use for the spoke. Mine is set up with CentOS 8.4
47 | - Clone the git repo: `git clone https://github.com/adetalhouet/ocp-gitops`
48 |
49 | ## Architecture
50 |
51 | 
52 |
53 | ## Requirements on the hub cluster
54 | The assumption is the hub cluster is __not__ deployed on bare metal. If yours is, skip the Ironic and Metal3 portion.
55 |
56 | In my case, my hub cluster is deployed in AWS. As it isn't a bare metal cluster, it doesn't have the Ironic and Metal3 pieces, so we will deploy them ourselves.
57 |
58 | ### Install the Assisted Service
59 |
60 | The related manifests for the install are located in the `hub` folder. The main manifest is `02-assistedserviceconfig.yaml`, specifying the `AgentServiceConfig` definition, which defines the base RHCOS image to use for the server installation.
61 |
62 | We also create a `ClusterImageSet` referring to the OpenShift 4.8 release. This will be referenced by the spoke manifests to define what version of OpenShift to install.
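63 | 
64 | For reference, a minimal sketch of such a `ClusterImageSet`; the name matches the one applied by this repo, and the release image tag is an assumption based on the 4.8.0 version used throughout:
65 | 
66 | ~~~
67 | apiVersion: hive.openshift.io/v1
68 | kind: ClusterImageSet
69 | metadata:
70 |   name: openshift-v4.8.0
71 | spec:
72 |   # assumed release image for OpenShift 4.8.0 on x86_64
73 |   releaseImage: quay.io/openshift-release-dev/ocp-release:4.8.0-x86_64
74 | ~~~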
63 |
64 | Add your private key in the `hub/03-assisted-deployment-ssh-private-key.yaml` file (use the provided example), and then apply the folder. The corresponding public key will end up in the resulting VM, and you will use the private key to SSH in, if needed.
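65 | 
66 | A sketch of what that secret ends up looking like, assuming the example file follows the usual assisted-service convention of an `ssh-private-key` key (key material elided):
67 | 
68 | ~~~
69 | apiVersion: v1
70 | kind: Secret
71 | metadata:
72 |   name: assisted-deployment-ssh-private-key
73 |   namespace: open-cluster-management
74 | stringData:
75 |   # paste your SSH private key here; the matching public key is what lands on the node
76 |   ssh-private-key: |
77 |     -----BEGIN OPENSSH PRIVATE KEY-----
78 |     ...
79 |     -----END OPENSSH PRIVATE KEY-----
80 | ~~~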
65 |
66 | [Follow the documentation to enable the Central Infrastructure Management service.](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.4/html/clusters/managing-your-clusters#enable-cim)
67 |
68 | ```
69 | # enable the AlphaAgentInstallStrategy feature gate in Hive
70 | oc patch hiveconfig hive --type merge -p '{"spec":{"targetNamespace":"hive","logLevel":"debug","featureGates":{"custom":{"enabled":["AlphaAgentInstallStrategy"]},"featureSet":"Custom"}}}'
71 | # let the bare metal provisioning services watch BareMetalHosts in all namespaces
72 | oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
73 | # serve virtual media over plain HTTP (our emulated BMC doesn't do TLS)
74 | oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true }}'
75 | ```
73 |
74 | Everything will be installed in the `open-cluster-management` namespace.
75 |
76 | ~~~
77 | $ oc apply -k hub
78 | configmap/assisted-service-config created
79 | secret/assisted-deployment-ssh-private-key created
80 | agentserviceconfig.agent-install.openshift.io/agent created
81 | clusterimageset.hive.openshift.io/openshift-v4.8.0 created
82 | ~~~
83 |
84 | After a few seconds, check that the assisted service has been created
85 |
86 | ~~~
87 | $ oc get pod -n open-cluster-management -l app=assisted-service
88 | ~~~
89 |
90 | ### Install Ironic and Metal3
91 |
92 | The assumption is the cluster is __not__ deployed on bare metal. If yours is, skip this section.
93 |
94 | Both Ironic and Metal3 can be installed from the [baremetal-operator](https://github.com/metal3-io/baremetal-operator), but experience has proven it is a very opinionated install that, out of the box, didn't work in my environment and probably will not work in yours.
95 |
96 | So I pulled all the manifests required for the install and put them in the `metal-provisioner` folder.
97 |
98 | As Ironic is the component instructing the remote server to download the ISO, it needs to be configured properly so the remote server can reach back to Ironic's underlying HTTP server.
99 |
100 | The `02-ironic.yaml` manifest provides a `Service` and `Route`s to expose the various services it provides. It also contains a `ConfigMap` called `ironic-bmo-configmap` containing all the configuration bits required for Ironic to work properly.
101 | These elements point to my environment, so you need to customize them accordingly, by adjusting `$CLUSTER_NAME.$DOMAIN_NAME` in the `Route` definitions and in the `ironic-bmo-configmap` ConfigMap.
102 |
103 | In my case, `$CLUSTER_NAME.$DOMAIN_NAME = hub-adetalhouet.rhtelco.io`
104 |
105 | Here is a command to help make that change; make sure to replace `$CLUSTER_NAME.$DOMAIN_NAME` with yours. If you're on a Mac, use `gsed` instead of `sed` to get the GNU sed binary.
106 |
107 | ~~~
108 | $ sed -i "s/hub-adetalhouet.rhtelco.io/$CLUSTER_NAME.$DOMAIN_NAME/g" metal-provisioner/02-ironic.yaml
109 | ~~~
110 |
111 | Based on the upstream Ironic image, I had to adjust the start command of the `ironic-api` and `ironic-conductor` containers to alter their `ironic.conf` configuration so it would consume the exposed `Route` rather than the internal IP. When Ironic uses the BMC to configure the server, it instructs the server to load the boot ISO image from Ironic's HTTP server; that HTTP server must therefore be reachable from the spoke server. In my case, given the hub and the spoke only share the public internet as a common network, I had to expose Ironic's HTTP server. If you have a private network, the setup works the same.
112 |
113 | In both of these containers, the `/etc/ironic/ironic.conf` configuration is created at runtime from the Jinja template `/etc/ironic/ironic.conf.j2`, so I modified the template to generate the expected config.
114 |
115 | ~~~
116 | $ sed -i "s/{{ env.IRONIC_URL_HOST }}:{{ env.HTTP_PORT }}/{{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
117 | $ sed -i "s/host = {{ env.IRONIC_URL_HOST }}/host = {{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
118 | ~~~
119 |
120 | Finally, Ironic uses the host network (although not required in our case), so I have granted the `metal-provisioner` ServiceAccount the `privileged` SCC. In the `ironic-bmo-configmap`, you also need to update `PROVISIONING_INTERFACE` to reflect your node's interface. This shouldn't matter at all in our case, but Ironic takes the IP from this interface and uses it in many places - some of which are precisely the places where we changed `ironic.conf` in the previous step.
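121 | 
122 | A sketch of that SCC grant, assuming the ServiceAccount is named `metal-provisioner` in the `metal-provisioner` namespace, as per the manifests:
123 | 
124 | ~~~
125 | # allow the Ironic pods to use host networking and run privileged
126 | oc adm policy add-scc-to-user privileged -z metal-provisioner -n metal-provisioner
127 | ~~~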
121 |
122 | Keep in mind, the initial intention of this `baremetal-operator` is to work in a bare metal environment, where it is assumed the `PROVISIONING_INTERFACE` is on a network that can reach the nodes you want to either add to the cluster, or provision with OpenShift using the ZTP flow.
123 |
124 | Review the manifests and, when confident, apply them
125 |
126 | ~~~
127 | $ oc apply -k metal-provisioner
128 | namespace/metal-provisioner created
129 | serviceaccount/metal-provisioner created
130 | clusterrole.rbac.authorization.k8s.io/baremetalhost-role created
131 | clusterrole.rbac.authorization.k8s.io/ironic-scc created
132 | clusterrolebinding.rbac.authorization.k8s.io/baremetalhost-rolebinding created
133 | clusterrolebinding.rbac.authorization.k8s.io/ironic-rolebinding created
134 | configmap/baremetal-operator-ironic created
135 | configmap/ironic-bmo-configmap created
136 | configmap/ironic-htpasswd created
137 | configmap/ironic-inspector-htpasswd created
138 | secret/ironic-auth-config created
139 | secret/ironic-credentials created
140 | secret/ironic-inspector-auth-config created
141 | secret/ironic-inspector-credentials created
142 | secret/ironic-rpc-auth-config created
143 | secret/mariadb-password created
144 | service/ironic created
145 | deployment.apps/baremetal-operator-controller-manager created
146 | deployment.apps/capm3-ironic created
147 | route.route.openshift.io/ironic-api created
148 | route.route.openshift.io/ironic-http created
149 | route.route.openshift.io/ironic-inspector created
150 | ~~~
151 |
152 | Here is the output you should expect:
156 |
157 | ~~~
158 | $ oc get all -n metal-provisioner
159 |
160 | NAME READY STATUS RESTARTS AGE
161 | pod/baremetal-operator-controller-manager-7477d5cd57-2cbmj 2/2 Running 0 20m
162 | pod/capm3-ironic-6cc84ff99c-l5bpt 5/5 Running 0 20m
163 |
164 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
165 | service/ironic   ClusterIP   172.30.59.7   <none>   5050/TCP,6385/TCP,80/TCP   20m
166 |
167 | NAME READY UP-TO-DATE AVAILABLE AGE
168 | deployment.apps/baremetal-operator-controller-manager 1/1 1 1 20m
169 | deployment.apps/capm3-ironic 1/1 1 1 20m
170 |
171 | NAME DESIRED CURRENT READY AGE
172 | replicaset.apps/baremetal-operator-controller-manager-7477d5cd57 1 1 1 20m
173 | replicaset.apps/capm3-ironic-6cc84ff99c 1 1 1 20m
174 |
175 | NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
176 | route.route.openshift.io/ironic-api ironic-api-metal-provisioner.apps.hub-adetalhouet.rhtelco.io ironic api None
177 | route.route.openshift.io/ironic-http ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io ironic httpd None
178 | route.route.openshift.io/ironic-inspector ironic-inspector-metal-provisioner.apps.hub-adetalhouet.rhtelco.io ironic inspector None
179 | ~~~
180 |
181 |
182 | ## Requirements on the spoke server
183 | I'm assuming you have a blank server, running CentOS 8.4, and that you're logged in as `root`.
184 |
185 | ### Install libvirt
186 | Install the required dependencies.
187 |
188 | ~~~
189 | dnf install -y bind-utils libguestfs-tools cloud-init
190 | dnf module install virt -y
191 | dnf install virt-install -y
192 | systemctl enable libvirtd --now
193 | ~~~
194 |
195 | ### Install and configure Sushy service
196 | The Sushy service is a virtual Redfish BMC emulator for libvirt or OpenStack virtualization. In our case, we will use it with libvirt in order to add BMC capabilities to the libvirt domains, enabling remote control of the VMs.
197 |
198 | ~~~
199 | dnf install python3 -y
200 | pip3 install sushy-tools
201 | ~~~
202 |
203 | Then you need to configure the service. In my case, I'm binding the sushy service on all my interfaces. But if you have a management interface providing connectivity between the hub and the spoke environments, you should use that interface instead.
204 | Also, the port is customizable, and if you have firewalls in the way, make sure to open the port accordingly.
205 |
206 | ~~~
207 | echo "SUSHY_EMULATOR_LISTEN_IP = u'0.0.0.0'
208 | SUSHY_EMULATOR_LISTEN_PORT = 8000
209 | SUSHY_EMULATOR_SSL_CERT = None
210 | SUSHY_EMULATOR_SSL_KEY = None
211 | SUSHY_EMULATOR_OS_CLOUD = None
212 | SUSHY_EMULATOR_LIBVIRT_URI = u'qemu:///system'
213 | SUSHY_EMULATOR_IGNORE_BOOT_DEVICE = True
214 | # This specifies where to find the boot loader for a UEFI boot. This is what the ZTP process uses.
215 | SUSHY_EMULATOR_BOOT_LOADER_MAP = {
216 | u'UEFI': {
217 | u'x86_64': u'/usr/share/OVMF/OVMF_CODE.secboot.fd'
218 | },
219 | u'Legacy': {
220 | u'x86_64': None
221 | }
222 | }" > /etc/sushy.conf
223 | ~~~
224 |
225 | There is currently an [issue](https://bugzilla.redhat.com/show_bug.cgi?id=1906500) with libvirt that basically forces the use of secure boot. Theoretically this can be disabled, but the feature hasn't been working properly since RHEL 8.3 (so it's the same in the CentOS I'm using).
226 | In order to mask the secure boot firmware descriptor, and so allow a non-secure boot, [the following solution has been suggested](https://bugzilla.redhat.com/show_bug.cgi?id=1906500#c23):
227 |
228 | ~~~
229 | mkdir -p /etc/qemu/firmware
230 | touch /etc/qemu/firmware/40-edk2-ovmf-sb.json
231 | ~~~
232 |
233 | Now, let's create the sushy service and start it.
234 |
235 | ~~~
236 | echo '[Unit]
237 | Description=Sushy Libvirt emulator
238 | After=syslog.target
239 |
240 | [Service]
241 | Type=simple
242 | ExecStart=/usr/local/bin/sushy-emulator --config /etc/sushy.conf
243 | StandardOutput=syslog
244 | StandardError=syslog
245 |
246 | [Install]
247 | WantedBy=multi-user.target' > /usr/lib/systemd/system/sushy.service
248 | systemctl daemon-reload
249 | systemctl enable --now sushy.service
250 | ~~~
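251 | 
252 | To sanity-check the emulator, query the standard Redfish collection endpoint; each member maps to a libvirt domain UUID (adjust host/port to your config):
253 | 
254 | ~~~
255 | # should return a JSON collection of the libvirt domains, keyed by UUID
256 | curl http://localhost:8000/redfish/v1/Systems
257 | ~~~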
251 |
252 | Finally, let's start the built-in firewall and allow traffic on port 8000.
253 |
254 | ~~~
255 | systemctl start firewalld
256 | firewall-cmd --add-port=8000/tcp --permanent
257 | firewall-cmd --add-port=8000/tcp --zone=libvirt --permanent
258 | firewall-cmd --reload
258 | ~~~
259 |
260 | ### Libvirt setup
261 |
262 | #### Create a pool
263 |
264 | When Ironic uses our virtual BMC, emulated by sushy-tools, to load the ISO into the server (a VM in our case), sushy-tools will host that image in the `default` storage pool, so we need to create it accordingly. (I couldn't find a way, yet, to configure which storage pool to use.)
265 | ~~~
266 | $ mkdir -p /var/lib/libvirt/sno-ztp
267 | $ virsh pool-define-as default --type dir --target /var/lib/libvirt/sno-ztp
268 | $ virsh pool-start default
269 | $ virsh pool-autostart default
270 |
271 | $ virsh pool-list
272 | Name State Autostart
273 | -------------------------------
274 | boot active yes
275 | default active yes
276 | images active yes
277 | ~~~
278 |
279 | #### Create a network
280 |
281 | An OpenShift bare metal install has the following requirements:
282 |
283 | - a proper hostname / domain name mapped to the MAC address of the interface used for provisioning
284 | - a DNS entry for `api.$clusterName.$domainName`
285 | - a DNS entry for `*.apps.$clusterName.$domainName`
286 |
287 | So we will configure them accordingly in the libvirt network definition, using the built-in dnsmasq capability of libvirt networks.
288 |
289 | Here is my network definition (`libvirt/sno/net.xml`):
292 |
293 | ~~~
294 | <network>
295 |   <!-- tags reconstructed; adjust bridge, subnet and MAC to your environment -->
296 |   <name>sno</name>
297 |   <forward mode='nat'/>
298 |   <bridge name='virbr1' stp='on' delay='0'/>
299 |   <domain name='sno.lab.adetalhouet' localOnly='true'/>
300 |   <dns>
301 |     <host ip='192.168.123.5'>
302 |       <hostname>api.sno.lab.adetalhouet</hostname>
303 |     </host>
304 |   </dns>
305 |   <ip address='192.168.123.1' netmask='255.255.255.0'>
306 |     <dhcp>
307 |       <range start='192.168.123.2' end='192.168.123.254'/>
308 |       <host mac='02:04:00:00:00:66' name='sno' ip='192.168.123.5'/>
309 |     </dhcp>
310 |   </ip>
311 | </network>
312 | ~~~
319 |
320 |
321 | Now let's define and start our network
322 |
323 | ~~~
324 | # create the file net.xml with the content above
325 | $ virsh net-define net.xml
326 | $ virsh net-start sno
327 | $ virsh net-autostart sno
328 |
329 | $ virsh net-list
330 | Name State Autostart Persistent
331 | --------------------------------------------
332 | default active yes yes
333 | sno active no yes
334 | ~~~
335 |
336 | #### Create the disk
337 |
338 | In order for the Assisted Installer to allow the installation of Single Node OpenShift, one of the requirements is the disk size: it must be at least 120 GB. When creating a disk of 120 GB, or even 150 GB, for some reason the Assisted Service wouldn't allow the installation, complaining about the disk size requirement not being met.
339 | So let's create a disk of 200 GB to be sure.
340 | ~~~
341 | $ qemu-img create -f qcow2 /var/lib/libvirt/sno-ztp/sno.qcow2 200G
342 | Formatting '/var/lib/libvirt/sno-ztp/sno.qcow2', fmt=qcow2 size=214748364800 cluster_size=65536 lazy_refcounts=off refcount_bits=16
343 | ~~~
344 |
345 | #### Create the VM / libvirt domain
346 |
347 | While creating the VM, make sure to adjust RAM and CPU, as well as the network and disk if you've made modifications.
348 | The interface configured in the domain is the one we pre-defined in the network definition; we identify it by its MAC address. When the VM boots, it will be able to resolve its hostname through the DNS entry.
349 |
350 | (FYI - I spent hours trying to nail down the proper XML definition, most importantly the `os` bits. When the Assisted Service starts the provisioning, it will first start the VM, load the discovery ISO, and then restart the VM to boot from the newly added disc. After the restart, the `os` section will be modified, as the Assisted Service will configure a UEFI boot.)
351 |
352 | Here is my VM definition (`libvirt/sno/vm.xml`):
355 |
356 | ~~~
357 | <domain type='kvm'>
358 |   <!-- tags reconstructed from the surviving values; adjust devices to your environment -->
359 |   <name>sno</name>
360 |   <uuid>b6c92bbb-1e87-4972-b17a-12def3948890</uuid>
361 |   <memory unit='KiB'>33554432</memory>
362 |   <currentMemory unit='KiB'>33554432</currentMemory>
363 |   <vcpu placement='static'>16</vcpu>
364 |   <os>
365 |     <type arch='x86_64' machine='q35'>hvm</type>
366 |     <boot dev='hd'/>
367 |   </os>
368 |   <features>
369 |     <acpi/>
370 |     <apic/>
371 |   </features>
372 |   <cpu mode='host-passthrough'/>
373 |   <devices>
374 |     <emulator>/usr/libexec/qemu-kvm</emulator>
375 |     <disk type='file' device='disk'>
376 |       <driver name='qemu' type='qcow2'/>
377 |       <source file='/var/lib/libvirt/sno-ztp/sno.qcow2'/>
378 |       <target dev='vda' bus='virtio'/>
379 |     </disk>
380 |     <interface type='network'>
381 |       <mac address='02:04:00:00:00:66'/>
382 |       <source network='sno'/>
383 |       <model type='virtio'/>
384 |     </interface>
385 |     <console type='pty'/>
386 |     <rng model='virtio'>
387 |       <backend model='random'>/dev/urandom</backend>
388 |     </rng>
389 |   </devices>
390 | </domain>
391 | ~~~
429 |
430 | Now let's define our domain.
431 |
432 | ~~~
433 | # create the file vm.xml with the content above
434 | virsh define vm.xml
435 | virsh autostart sno
436 | ~~~
437 | Do not start the VM yourself; it will be done later in the process, automatically. Moreover, your VM at this point has no CD-ROM to boot from.
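438 | 
439 | You can verify the domain is registered but still powered off:
440 | 
441 | ~~~
442 | # the sno domain should be listed in "shut off" state until ZTP powers it on
443 | virsh list --all
444 | ~~~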
438 |
439 | If you have a bridge network, you can add an additional interface to the domain definition. Please see `libvirt/sno/vm-bridge-net.xml` along with `libvirt/sno/bridge-interface.md`, and see `spoke-ztp/03-nmstateconfig.yaml` for how to configure the interface within the resulting VM.
440 |
441 | Now the environment is ready; let's create a Single Node OpenShift cluster automagically.
442 |
443 | ## Let's deploy the spoke
444 |
445 | We will use all the manifests in the `spoke-ztp/` folder. Simply run the following command:
446 | ~~~
447 | $ oc apply -k spoke-ztp/
448 | namespace/sno-ztp created
449 | secret/assisted-deployment-pull-secret created
450 | secret/sno-secret created
451 | infraenv.agent-install.openshift.io/sno-ztp-infraenv created
452 | nmstateconfig.agent-install.openshift.io/lab-spoke-adetalhouet created
453 | klusterletaddonconfig.agent.open-cluster-management.io/lab-spoke-adetalhouet created
454 | managedcluster.cluster.open-cluster-management.io/lab-spoke-adetalhouet created
455 | agentclusterinstall.extensions.hive.openshift.io/sno-ztp-clusteragent created
456 | clusterdeployment.hive.openshift.io/sno-ztp-cluster created
457 | baremetalhost.metal3.io/sno-ztp-bmh created
458 | ~~~
459 |
460 | It will take, on average, 60-ish minutes for the cluster to be ready.
461 | That said, here are a few checks you can do to validate the cluster will get deployed properly.
462 |
463 | __Let's review the manifests:__
464 |
465 | - `00-namespace.yaml` creates the namespace where the configuration will be hosted.
466 | - `01-agentclusterinstall.yaml` defines the `AgentClusterInstall`, which is responsible for the overall cluster configuration. This is where you specify:
467 | - the network requirements (clusterNetwork, serviceNetwork, machineNetwork).
468 |     - the OpenShift version to use, by referring to the `ClusterImageSet` name we created earlier.
469 |     - the overall cluster setup, i.e. how many control plane and worker nodes you want. In our case, we deploy a SNO, so only 1 control plane node.
470 |     - the public key that goes with the private key set up earlier in the `assisted-deployment-ssh-private-key` secret
471 | - `02-clusterdeployment.yaml` defines the `ClusterDeployment`
472 |     - it references the `AgentClusterInstall` and defines the `pull-secret` to use for the cluster provisioning.
473 | - this is where you define the `baseDomain` and the `clusterName` to use for the spoke cluster
474 | - `03-nmstateconfig.yaml` is required if you are using a bridge network and want to set a static IP; a sketch is shown after this list. See [here](https://github.com/nmstate/nmstate) along with their doc for more information / use cases.
475 | - `04-spokeinfraenv.yaml` defines the `InfraEnv`. It is basically a way to customize the initial cluster setup. If you want to add/modify some files for the ignition process, you can. If you want to configure additional networking bits, this is where you can do it as well. Refer to the doc; here is [an example](https://github.com/openshift/openshift-docs/blob/main/modules/ztp-configuring-a-static-ip.adoc).
476 | - `05-baremetalhost.yaml` defines the `BareMetalHost`. This is where you provide the information on:
477 | - how to connect to the server through its BMC
478 | - the MAC address of the provisioning interface
479 | - `05-userdata.yaml` is an attempt at providing additional user data, but libvirt isn't liking the way that disk is provided (I didn't dig into this; the error I got is shown after this list, if you want to dig into it)
480 | - `06-assisteddeploymentpullsecret.yaml` is the pull-secret used to download the images in the spoke cluster.
481 | - `07-kusterlet.yaml` sets up the cluster to be imported within RHACM, and has the addon agents installed.
482 | ~~~
483 | {"level":"info","ts":1628007509.626545,"logger":"provisioner.ironic","msg":"current provision state","host":"sno-ztp~sno-ztp-bmh","lastError":"Deploy step deploy.deploy failed: Redfish exception occurred. Error: Setting power state to power on failed for node 1e87eede-7ddb-4da4-bb7f-2a12037ac323. Error: HTTP POST http://148.251.12.17:8000/redfish/v1/Systems/b6c92bbb-1e87-4972-b17a-12def3948891/Actions/ComputerSystem.Reset returned code 500. Base.1.0.GeneralError: Error changing power state at libvirt URI \"qemu:///system\": internal error: qemu unexpectedly closed the monitor: 2021-08-03T16:18:17.110846Z qemu-kvm: -device isa-fdc,bootindexA=1: Device isa-fdc is not supported with machine type pc-q35-rhel8.2.0 Extended information: [{'@odata.type': '/redfish/v1/$metadata#Message.1.0.0.Message', 'MessageId': 'Base.1.0.GeneralError'}].","current":"deploy failed","target":"active"}
484 | ~~~
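485 | 
486 | For reference, a sketch of a `NMStateConfig` assigning a static IP to a bridged interface; the interface name, addresses and label here are hypothetical and must match your VM definition and your `InfraEnv` selector:
487 | 
488 | ~~~
489 | apiVersion: agent-install.openshift.io/v1beta1
490 | kind: NMStateConfig
491 | metadata:
492 |   name: lab-spoke-adetalhouet
493 |   namespace: sno-ztp
494 |   labels:
495 |     cluster-name: sno-ztp   # hypothetical label, matched by the InfraEnv's nmStateConfigLabelSelector
496 | spec:
497 |   config:
498 |     interfaces:
499 |       - name: enp2s0        # hypothetical bridged interface
500 |         type: ethernet
501 |         state: up
502 |         ipv4:
503 |           enabled: true
504 |           dhcp: false
505 |           address:
506 |             - ip: 192.168.1.10   # hypothetical static IP
507 |               prefix-length: 24
508 |   interfaces:
509 |     - name: enp2s0
510 |       macAddress: "00:50:56:01:15:94"   # must match the domain's bridged NIC
511 | ~~~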
487 |
488 | ### A few debugging tips
489 | ###### Storage
490 | First, look at the storage pool folder; sometimes the ISO upload doesn't work properly, and the resulting ISO doesn't have all the data. Check the size of the ISOs; the expected size, based on my experience, is `107884544` bytes. If the file is not in that range, the VM won't boot properly.
491 |
492 | ~~~
493 | [root@lab sno]# ls -ls /var/lib/libvirt/sno-ztp/
494 | total 25639776
495 | 4 -rw------- 1 qemu qemu 3265 Jul 29 04:04 boot-072b3441-2bd9-4aaf-939c-7e4640e38935-iso-79ad9824-8110-4570-83e3-a8cd6ae9d435.img
496 | 105356 -rw------- 1 qemu qemu 107884544 Aug 1 20:48 boot-961b4d9e-1766-4e38-8a6d-8de54c7a836b-iso-b6c92bbb-1e87-4972-b17a-12def3948890.img
497 | 25534416 -rw-r--r-- 1 root root 26146963456 Jul 30 14:36 sno2.qcow2
498 | ~~~
499 | ###### Network
500 | After a few minutes of having your VM running, it should get its IP from the network. Two ways to validate:
501 | Check the network `dhcp-leases` and ensure the IP has been assigned
502 | ~~~
503 | [root@lab sno]# virsh net-dhcp-leases sno
504 | Expiry Time MAC address Protocol IP address Hostname Client ID or DUID
505 | ----------------------------------------------------------------------------------------------------------
506 | 2021-08-01 22:05:02 02:04:00:00:00:66 ipv4 192.168.123.5/24 sno 01:02:04:00:00:00:66
507 | ~~~
508 | If so, attempt to SSH using the key pair you configured while installing the Assisted Service. You should be able to SSH properly.
509 | ~~~
510 | [root@lab sno]# ssh core@192.168.123.5
511 | The authenticity of host '192.168.123.5 (192.168.123.5)' can't be established.
512 | ECDSA key fingerprint is SHA256:N6wy/bQ5YeL01LsLci+IVztzRs8XFVeU4rYJIDGD8SU.
513 | Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
514 | Warning: Permanently added '192.168.123.5' (ECDSA) to the list of known hosts.
515 | Red Hat Enterprise Linux CoreOS 48.84.202107202156-0
516 | Part of OpenShift 4.8, RHCOS is a Kubernetes native operating system
517 | managed by the Machine Config Operator (`clusteroperator/machine-config`).
518 |
519 | WARNING: Direct SSH access to machines is not recommended; instead,
520 | make configuration changes via `machineconfig` objects:
521 | https://docs.openshift.com/container-platform/4.8/architecture/architecture-rhcos.html
522 |
523 | ---
524 | [core@sno ~]$
525 | ~~~
526 |
527 | Here is some extra information to expect when you are using a bridge network. The IP of the bridged interface will be manually configured using the `03-nmstateconfig.yaml` manifest. If you have DHCP on that network, that works as well.
528 | You can validate the created domain has both interfaces, and you can look up the IP on the private network if you want to SSH / troubleshoot.
529 |
530 | ~~~
531 | [root@lab sno-pub]# virsh domiflist 21
532 | Interface Type Source Model MAC
533 | ------------------------------------------------------------
534 | vnet19 bridge br0 virtio 00:50:56:01:15:94
535 | vnet20 network sno virtio 52:54:00:73:90:59
536 |
537 | [root@lab sno-pub]# virsh domifaddr 21
538 | Name MAC address Protocol Address
539 | -------------------------------------------------------------------------------
540 | vnet20 52:54:00:73:90:59 ipv4 192.168.123.248/24
541 | ~~~
542 |
543 | ###### Monitor the `Agent`
544 | Assuming the above worked, I suggest you monitor the `Agent` that was created for your cluster deployment. This will give you a URL you can use to follow the events occurring in your cluster.
545 |
546 | The `Agent` is the bare metal installer agent; it provides info regarding the bare metal install provisioning.
547 |
548 |
550 |
551 | ~~~
552 | $ oc get Agent -n sno-ztp
553 | NAME CLUSTER APPROVED ROLE STAGE
554 | b6c92bbb-1e87-4972-b17a-12def3948890 sno-ztp-cluster true master Done
555 |
556 | $ oc describe Agent b6c92bbb-1e87-4972-b17a-12def3948890 -n sno-ztp
557 | Name: b6c92bbb-1e87-4972-b17a-12def3948890
558 | Namespace: sno-ztp
559 | Labels: agent-install.openshift.io/bmh=sno-ztp-bmh
560 | infraenvs.agent-install.openshift.io=sno-ztp-infraenv
561 | Annotations:
562 | API Version: agent-install.openshift.io/v1beta1
563 | Kind: Agent
564 | Metadata:
565 | Creation Timestamp: 2021-08-01T18:52:44Z
566 | Finalizers:
567 | agent.agent-install.openshift.io/ai-deprovision
568 | Generation: 2
569 | Resource Version: 27926035
570 | UID: a91ff59c-a08c-41ac-9603-1981bec69f70
571 | Spec:
572 | Approved: true
573 | Cluster Deployment Name:
574 | Name: sno-ztp-cluster
575 | Namespace: sno-ztp
576 | Role:
577 | Status:
578 | Bootstrap: true
579 | Conditions:
580 | Last Transition Time: 2021-08-01T18:52:44Z
581 | Message: The Spec has been successfully applied
582 | Reason: SyncOK
583 | Status: True
584 | Type: SpecSynced
585 | Last Transition Time: 2021-08-01T18:52:44Z
586 | Message: The agent's connection to the installation service is unimpaired
587 | Reason: AgentIsConnected
588 | Status: True
589 | Type: Connected
590 | Last Transition Time: 2021-08-01T18:52:52Z
591 | Message: The agent installation stopped
592 | Reason: AgentInstallationStopped
593 | Status: True
594 | Type: RequirementsMet
595 | Last Transition Time: 2021-08-01T18:52:52Z
596 | Message: The agent's validations are passing
597 | Reason: ValidationsPassing
598 | Status: True
599 | Type: Validated
600 | Last Transition Time: 2021-08-01T19:09:59Z
601 | Message: The installation has completed: Done
602 | Reason: InstallationCompleted
603 | Status: True
604 | Type: Installed
605 | Debug Info:
606 | Events URL: https://assisted-service-open-cluster-management.apps.hub-adetalhouet.rhtelco.io/api/assisted-install/v1/clusters/064d8242-e63a-4ba9-9eb4-aaaa773cdf32/events?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbHVzdGVyX2lkIjoiMDY0ZDgyNDItZTYzYS00YmE5LTllYjQtYWFhYTc3M2NkZjMyIn0.zWoiIcDcDGY3XfTDij3AktHCocRjNbmB1XFhXJjMrhBO_yZypNRp1OCfKwjuSSkpLGmkhEiZrVAKGrZA7QtA0A&host_id=b6c92bbb-1e87-4972-b17a-12def3948890
607 | State: installed
608 | State Info: Done
609 | Inventory:
610 | Bmc Address: 0.0.0.0
611 | bmcV6Address: ::/0
612 | Boot:
613 | Current Boot Mode: uefi
614 | Cpu:
615 | Architecture: x86_64
616 | Clock Megahertz: 3491
617 | Count: 16
618 | Flags:
619 | fpu
620 | vme
621 | de
622 | pse
623 | tsc
624 | msr
625 | pae
626 | mce
627 | cx8
628 | apic
629 | sep
630 | mtrr
631 | pge
632 | mca
633 | cmov
634 | pat
635 | pse36
636 | clflush
637 | mmx
638 | fxsr
639 | sse
640 | sse2
641 | ss
642 | syscall
643 | nx
644 | pdpe1gb
645 | rdtscp
646 | lm
647 | constant_tsc
648 | arch_perfmon
649 | rep_good
650 | nopl
651 | xtopology
652 | cpuid
653 | tsc_known_freq
654 | pni
655 | pclmulqdq
656 | vmx
657 | ssse3
658 | fma
659 | cx16
660 | pdcm
661 | pcid
662 | sse4_1
663 | sse4_2
664 | x2apic
665 | movbe
666 | popcnt
667 | tsc_deadline_timer
668 | aes
669 | xsave
670 | avx
671 | f16c
672 | rdrand
673 | hypervisor
674 | lahf_lm
675 | abm
676 | cpuid_fault
677 | invpcid_single
678 | pti
679 | ssbd
680 | ibrs
681 | ibpb
682 | stibp
683 | tpr_shadow
684 | vnmi
685 | flexpriority
686 | ept
687 | vpid
688 | ept_ad
689 | fsgsbase
690 | tsc_adjust
691 | bmi1
692 | avx2
693 | smep
694 | bmi2
695 | erms
696 | invpcid
697 | xsaveopt
698 | arat
699 | umip
700 | md_clear
701 | arch_capabilities
702 | Model Name: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
703 | Disks:
704 | By Path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
705 | Drive Type: ODD
706 | Hctl: 0:0:0:0
707 | Id: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
708 | Installation Eligibility:
709 | Not Eligible Reasons:
710 | Disk is removable
711 | Disk is too small (disk only has 108 MB, but 120 GB are required)
712 | Drive type is ODD, it must be one of HDD, SSD.
713 | Io Perf:
714 | Model: QEMU_DVD-ROM
715 | Name: sr0
716 | Path: /dev/sr0
717 | Serial: QM00001
718 | Size Bytes: 107884544
719 | Smart: {"json_format_version":[1,0],"smartctl":{"version":[7,1],"svn_revision":"5022","platform_info":"x86_64-linux-4.18.0-305.10.2.el8_4.x86_64","build_info":"(local build)","argv":["smartctl","--xall","--json=c","/dev/sr0"],"exit_status":4},"device":{"name":"/dev/sr0","info_name":"/dev/sr0","type":"scsi","protocol":"SCSI"},"vendor":"QEMU","product":"QEMU DVD-ROM","model_name":"QEMU QEMU DVD-ROM","revision":"2.5+","scsi_version":"SPC-3","device_type":{"scsi_value":5,"name":"CD/DVD"},"local_time":{"time_t":1627844447,"asctime":"Sun Aug 1 19:00:47 2021 UTC"},"temperature":{"current":0,"drive_trip":0}}
720 | Vendor: QEMU
721 | Bootable: true
722 | By Path: /dev/disk/by-path/pci-0000:04:00.0
723 | Drive Type: HDD
724 | Id: /dev/disk/by-path/pci-0000:04:00.0
725 | Installation Eligibility:
726 | Eligible: true
727 | Not Eligible Reasons:
728 | Io Perf:
729 | Name: vda
730 | Path: /dev/vda
731 | Size Bytes: 214748364800
732 | Smart: {"json_format_version":[1,0],"smartctl":{"version":[7,1],"svn_revision":"5022","platform_info":"x86_64-linux-4.18.0-305.10.2.el8_4.x86_64","build_info":"(local build)","argv":["smartctl","--xall","--json=c","/dev/vda"],"messages":[{"string":"/dev/vda: Unable to detect device type","severity":"error"}],"exit_status":1}}
733 | Vendor: 0x1af4
734 | Hostname: sno
735 | Interfaces:
736 | Flags:
737 | up
738 | broadcast
739 | multicast
740 | Has Carrier: true
741 | ipV4Addresses:
742 | 192.168.123.5/24
743 | ipV6Addresses:
744 | Mac Address: 02:04:00:00:00:66
745 | Mtu: 1500
746 | Name: enp1s0
747 | Product: 0x0001
748 | Speed Mbps: -1
749 | Vendor: 0x1af4
750 | Memory:
751 | Physical Bytes: 68719476736
752 | Usable Bytes: 67514548224
753 | System Vendor:
754 | Manufacturer: Red Hat
755 | Product Name: KVM
756 | Virtual: true
757 | Progress:
758 | Current Stage: Done
759 | Stage Start Time: 2021-08-01T19:09:59Z
760 | Stage Update Time: 2021-08-01T19:09:59Z
761 | Role: master
762 | Events:
763 | ~~~
764 |
765 |
766 |
767 | ###### Monitor the `ClusterDeployment`
768 |
769 | The cluster deployment is responsible for the OCP cluster. You can also monitor it, as this is the element that will give you the progress percentage of the cluster install.
770 |
771 | ~~~
772 | $ oc describe ClusterDeployments sno-ztp-cluster -n sno-ztp
773 |
774 | --[cut]--
775 | status:
776 | cliImage: >-
777 | quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5917b18697edb46458d9fd39cefab191c8324561fa83da160f6fdd0b90c55fe0
778 | conditions:
779 | - lastProbeTime: '2021-08-01T19:22:30Z'
780 | lastTransitionTime: '2021-08-01T18:47:08Z'
781 | message: >-
782 | The installation is in progress: Finalizing cluster installation.
783 | Cluster version status: progressing, message: Working towards 4.8.2: 640
784 | of 676 done (94% complete)
785 | --[/cut]--
786 |
787 | ~~~
788 |
789 |
790 | ### Accessing your cluster
791 |
792 | After enough time, your cluster should be deployed. In order to get the kubeconfig / kubeadmin password, look at the `ClusterDeployment` CR; it will contain the names of the secrets where to find that information.
793 | Note: the information will be populated only upon successful deployment.
794 |
795 |
797 |
798 | ~~~
799 | $ oc get ClusterDeployments -n sno-ztp
800 | NAME PLATFORM REGION CLUSTERTYPE INSTALLED INFRAID VERSION POWERSTATE AGE
801 | sno-ztp-cluster agent-baremetal true 064d8242-e63a-4ba9-9eb4-aaaa773cdf32 Unsupported 59m
802 |
803 | $ oc describe ClusterDeployments sno-ztp-cluster -n sno-ztp
804 | Name: sno-ztp-cluster
805 | Namespace: sno-ztp
806 | Labels: hive.openshift.io/cluster-platform=agent-baremetal
807 | Annotations: open-cluster-management.io/user-group: c3lzdGVtOm1hc3RlcnMsc3lzdGVtOmF1dGhlbnRpY2F0ZWQ=
808 | open-cluster-management.io/user-identity: c3lzdGVtOmFkbWlu
809 | API Version: hive.openshift.io/v1
810 | Kind: ClusterDeployment
811 | Metadata:
812 | Creation Timestamp: 2021-07-30T02:32:25Z
813 | Finalizers:
814 | hive.openshift.io/deprovision
815 | clusterdeployments.agent-install.openshift.io/ai-deprovision
816 | Spec:
817 | Base Domain: rhtelco.io
818 | Cluster Install Ref:
819 | Group: extensions.hive.openshift.io
820 | Kind: AgentClusterInstall
821 | Name: sno-ztp-clusteragent
822 | Version: v1beta1
823 | Cluster Metadata:
824 | Admin Kubeconfig Secret Ref:
825 | Name: sno-ztp-cluster-admin-kubeconfig
826 | Admin Password Secret Ref:
827 | Name: sno-ztp-cluster-admin-password
828 | Cluster ID: 82f49c11-7fb0-4185-82eb-0eab243fbfd2
829 | Infra ID: e85bc2e6-ea53-4e0a-8e68-8922307a0159
830 | Cluster Name: lab-spoke-adetalhouet
831 | Control Plane Config:
832 | Serving Certificates:
833 | Installed: true
834 | Platform:
835 | Agent Bare Metal:
836 | Agent Selector:
837 | Match Labels:
838 | Location: eu/fi
839 | Pull Secret Ref:
840 | Name: assisted-deployment-pull-secret
841 | Status:
842 | Cli Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5917b18697edb46458d9fd39cefab191c8324561fa83da160f6fdd0b90c55fe0
843 | Conditions:
844 | Last Probe Time: 2021-07-30T03:25:00Z
845 | Last Transition Time: 2021-07-30T02:32:37Z
846 | Message: The installation is in progress: Finalizing cluster installation. Cluster version status: available, message: Done applying 4.8.0
847 | Reason: InstallationInProgress
848 | Status: False
849 | Type: ClusterInstallCompleted
850 | Last Probe Time: 2021-07-30T03:28:01Z
851 | Last Transition Time: 2021-07-30T03:28:01Z
852 | Message: Unsupported platform: no actuator to handle it
853 | Reason: Unsupported
854 | Status: False
855 | Type: Hibernating
856 | Last Probe Time: 2021-07-30T03:28:01Z
857 | Last Transition Time: 2021-07-30T03:28:01Z
858 | Message: ClusterSync has not yet been created
859 | Reason: MissingClusterSync
860 | Status: True
861 | Type: SyncSetFailed
862 | Last Probe Time: 2021-07-30T03:28:01Z
863 | Last Transition Time: 2021-07-30T03:28:01Z
864 | Message: Get "https://api.lab-spoke-adetalhouet.rhtelco.io:6443/api?timeout=32s": dial tcp 148.251.12.17:6443: connect: connection refused
865 | Reason: ErrorConnectingToCluster
866 | Status: True
867 | Type: Unreachable
868 | Last Probe Time: 2021-07-30T02:32:25Z
869 | Last Transition Time: 2021-07-30T02:32:25Z
870 | Message: Platform credentials passed authentication check
871 | Reason: PlatformAuthSuccess
872 | Status: False
873 | Type: AuthenticationFailure
874 | Last Probe Time: 2021-07-30T02:32:37Z
875 | Last Transition Time: 2021-07-30T02:32:37Z
876 | Message: The installation has not failed
877 | Reason: InstallationNotFailed
878 | Status: False
879 | Type: ClusterInstallFailed
880 | Last Probe Time: 2021-07-30T02:41:00Z
881 | Last Transition Time: 2021-07-30T02:41:00Z
882 | Message: The cluster requirements are met
883 | Reason: ClusterAlreadyInstalling
884 | Status: True
885 | Type: ClusterInstallRequirementsMet
886 | Last Probe Time: 2021-07-30T02:32:37Z
887 | Last Transition Time: 2021-07-30T02:32:37Z
888 | Message: The installation is waiting to start or in progress
889 | Reason: InstallationNotStopped
890 | Status: False
891 | Type: ClusterInstallStopped
892 | Last Probe Time: 2021-07-30T03:28:01Z
893 | Last Transition Time: 2021-07-30T03:28:01Z
894 | Message: Control plane certificates are present
895 | Reason: ControlPlaneCertificatesFound
896 | Status: False
897 | Type: ControlPlaneCertificateNotFound
898 | Last Probe Time: 2021-07-30T02:32:37Z
899 | Last Transition Time: 2021-07-30T02:32:37Z
900 | Message: Images required for cluster deployment installations are resolved
901 | Reason: ImagesResolved
902 | Status: False
903 | Type: InstallImagesNotResolved
904 | Last Probe Time: 2021-07-30T02:32:37Z
905 | Last Transition Time: 2021-07-30T02:32:37Z
906 | Message: InstallerImage is resolved.
907 | Reason: InstallerImageResolved
908 | Status: False
909 | Type: InstallerImageResolutionFailed
910 | Last Probe Time: 2021-07-30T02:32:37Z
911 | Last Transition Time: 2021-07-30T02:32:37Z
912 | Message: The installation has not failed
913 | Reason: InstallationNotFailed
914 | Status: False
915 | Type: ProvisionFailed
916 | Last Probe Time: 2021-07-30T02:32:37Z
917 | Last Transition Time: 2021-07-30T02:32:37Z
918 | Message: The installation is waiting to start or in progress
919 | Reason: InstallationNotStopped
920 | Status: False
921 | Type: ProvisionStopped
922 | Last Probe Time: 2021-07-30T02:32:25Z
923 | Last Transition Time: 2021-07-30T02:32:25Z
924 | Message: no ClusterRelocates match
925 | Reason: NoMatchingRelocates
926 | Status: False
927 | Type: RelocationFailed
928 | Last Probe Time: 2021-07-30T02:32:25Z
929 | Last Transition Time: 2021-07-30T02:32:25Z
930 | Message: Condition Initialized
931 | Reason: Initialized
932 | Status: Unknown
933 | Type: AWSPrivateLinkFailed
934 | Last Probe Time: 2021-07-30T02:32:25Z
935 | Last Transition Time: 2021-07-30T02:32:25Z
936 | Message: Condition Initialized
937 | Reason: Initialized
938 | Status: Unknown
939 | Type: AWSPrivateLinkReady
940 | Last Probe Time: 2021-07-30T02:32:25Z
941 | Last Transition Time: 2021-07-30T02:32:25Z
942 | Message: Condition Initialized
943 | Reason: Initialized
944 | Status: Unknown
945 | Type: ActiveAPIURLOverride
946 | Last Probe Time: 2021-07-30T02:32:25Z
947 | Last Transition Time: 2021-07-30T02:32:25Z
948 | Message: Condition Initialized
949 | Reason: Initialized
950 | Status: Unknown
951 | Type: DNSNotReady
952 | Last Probe Time: 2021-07-30T02:32:25Z
953 | Last Transition Time: 2021-07-30T02:32:25Z
954 | Message: Condition Initialized
955 | Reason: Initialized
956 | Status: Unknown
957 | Type: DeprovisionLaunchError
958 | Last Probe Time: 2021-07-30T02:32:25Z
959 | Last Transition Time: 2021-07-30T02:32:25Z
960 | Message: Condition Initialized
961 | Reason: Initialized
962 | Status: Unknown
963 | Type: IngressCertificateNotFound
964 | Last Probe Time: 2021-07-30T02:32:25Z
965 | Last Transition Time: 2021-07-30T02:32:25Z
966 | Message: Condition Initialized
967 | Reason: Initialized
968 | Status: Unknown
969 | Type: InstallLaunchError
970 | Last Probe Time: 2021-07-30T02:32:25Z
971 | Last Transition Time: 2021-07-30T02:32:25Z
972 | Message: Condition Initialized
973 | Reason: Initialized
974 | Status: Unknown
975 | Type: RequirementsMet
976 | Install Started Timestamp: 2021-07-30T02:41:00Z
977 | Install Version: 4.8.0
978 | Installed Timestamp: 2021-07-30T02:41:00Z
979 | Installer Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb3e6c54c4e2e07f95a9af44a5a1839df562a843b4ac9e1d5fb5bb4df4b4f7d6
980 | Events:
981 | ~~~
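982 | 
983 | Once `Installed` is true, here is a sketch to pull the credentials from the secrets named in the `Cluster Metadata` section above:
984 | 
985 | ~~~
986 | # write the admin kubeconfig to a local file
987 | oc extract secret/sno-ztp-cluster-admin-kubeconfig -n sno-ztp --to=- > kubeconfig-sno
988 | # print the kubeadmin password
989 | oc get secret sno-ztp-cluster-admin-password -n sno-ztp -o jsonpath='{.data.password}' | base64 -d
990 | ~~~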
982 |
983 |
984 | ## Some post-deploy actions
985 | As I have a server with only one interface and no console port access, I couldn't create a bridge interface for libvirt. So the poor man's solution is to use iptables to forward the traffic hitting my public IP on ports 443 and 6443 to my private VM IP.
986 | 
987 | [There are fancier ways to do this.](https://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections)
988 |
989 | ###### when the host is stopped
990 | ~~~
991 | /sbin/iptables -D FORWARD -o virbr1 -p tcp -d 192.168.123.5 --dport 443 -j ACCEPT
992 | /sbin/iptables -t nat -D PREROUTING -p tcp --dport 443 -j DNAT --to 192.168.123.5:443
993 |
994 | /sbin/iptables -D FORWARD -o virbr1 -p tcp -d 192.168.123.5 --dport 6443 -j ACCEPT
995 | /sbin/iptables -t nat -D PREROUTING -p tcp --dport 6443 -j DNAT --to 192.168.123.5:6443
996 | ~~~
997 |
998 | ###### when the host is up
999 | ~~~
1000 | /sbin/iptables -I FORWARD -o virbr1 -p tcp -d 192.168.123.5 --dport 443 -j ACCEPT
1001 | /sbin/iptables -t nat -I PREROUTING -p tcp --dport 443 -j DNAT --to 192.168.123.5:443
1002 |
1003 | /sbin/iptables -I FORWARD -o virbr1 -p tcp -d 192.168.123.5 --dport 6443 -j ACCEPT
1004 | /sbin/iptables -t nat -I PREROUTING -p tcp --dport 6443 -j DNAT --to 192.168.123.5:6443
1005 | ~~~
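1006 | 
1007 | Note these rules do not persist across reboots or VM restarts. A sketch of automating them with a [libvirt hook](https://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections); the hook path and arguments are libvirt's convention, while the domain name, bridge, IP and ports come from this setup:
1008 | 
1009 | ~~~
1010 | #!/bin/bash
1011 | # /etc/libvirt/hooks/qemu - invoked by libvirtd as: qemu <guest_name> <operation> ...
1012 | # make the file executable and restart libvirtd to activate it
1013 | GUEST_IP=192.168.123.5
1014 | if [ "$1" = "sno" ]; then
1015 |     if [ "$2" = "stopped" ] || [ "$2" = "reconnect" ]; then
1016 |         for port in 443 6443; do
1017 |             /sbin/iptables -D FORWARD -o virbr1 -p tcp -d $GUEST_IP --dport $port -j ACCEPT
1018 |             /sbin/iptables -t nat -D PREROUTING -p tcp --dport $port -j DNAT --to $GUEST_IP:$port
1019 |         done
1020 |     fi
1021 |     if [ "$2" = "start" ] || [ "$2" = "reconnect" ]; then
1022 |         for port in 443 6443; do
1023 |             /sbin/iptables -I FORWARD -o virbr1 -p tcp -d $GUEST_IP --dport $port -j ACCEPT
1024 |             /sbin/iptables -t nat -I PREROUTING -p tcp --dport $port -j DNAT --to $GUEST_IP:$port
1025 |         done
1026 |     fi
1027 | fi
1028 | ~~~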
1006 |
1007 | ##### References
1008 | https://www.cyberciti.biz/faq/how-to-install-kvm-on-centos-8-headless-server/
1009 |
1010 | https://wiki.libvirt.org/page/Networking#Forwarding_Incoming_Connections
1011 |
1012 | https://www.itix.fr/blog/deploy-openshift-single-node-in-kvm/
1013 |
--------------------------------------------------------------------------------
/_acmztp/argocd/app-project.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: argoproj.io/v1alpha1
2 | kind: AppProject
3 | metadata:
4 | name: ztp-app-project
5 | namespace: openshift-gitops
6 | spec:
7 | clusterResourceWhitelist:
8 | - group: 'cluster.open-cluster-management.io'
9 | kind: ManagedCluster
10 | - group: ''
11 | kind: Namespace
12 | destinations:
13 | - namespace: '*'
14 | server: '*'
15 | namespaceResourceWhitelist:
16 | - group: ''
17 | kind: ConfigMap
18 | - group: ''
19 | kind: Namespace
20 | - group: ''
21 | kind: Secret
22 | - group: 'agent-install.openshift.io'
23 | kind: InfraEnv
24 | - group: 'agent-install.openshift.io'
25 | kind: NMStateConfig
26 | - group: 'extensions.hive.openshift.io'
27 | kind: AgentClusterInstall
28 | - group: 'hive.openshift.io'
29 | kind: ClusterDeployment
30 | - group: 'metal3.io'
31 | kind: BareMetalHost
32 | - group: 'metal3.io'
33 | kind: HostFirmwareSettings
34 | - group: 'agent.open-cluster-management.io'
35 | kind: KlusterletAddonConfig
36 | - group: 'cluster.open-cluster-management.io'
37 | kind: ManagedCluster
38 | - group: 'ran.openshift.io'
39 | kind: SiteConfig
40 | - group: 'bitnami.com'
41 | kind: SealedSecret
42 | sourceRepos:
43 | - '*'
44 |
--------------------------------------------------------------------------------
/_acmztp/argocd/argocd-openshift-gitops-patch.json:
--------------------------------------------------------------------------------
1 | {
2 | "spec": {
3 | "controller": {
4 | "resources": {
5 | "limits": {
6 | "cpu": "16",
7 | "memory": "32Gi"
8 | },
9 | "requests": {
10 | "cpu": "1",
11 | "memory": "2Gi"
12 | }
13 | }
14 | },
15 | "kustomizeBuildOptions": "--enable-alpha-plugins",
16 | "repo": {
17 | "volumes": [
18 | {
19 | "name": "kustomize",
20 | "readOnly": false,
21 | "path": "/.config"
22 | }
23 | ],
24 | "initContainers": [
25 | {
26 | "resources": {
27 | },
28 | "terminationMessagePath": "/dev/termination-log",
29 | "name": "kustomize-plugin",
30 | "command": [
31 | "/exportkustomize.sh"
32 | ],
33 | "args": [
34 | "/.config"
35 | ],
36 | "imagePullPolicy": "Always",
37 | "volumeMounts": [
38 | {
39 | "name": "kustomize",
40 | "mountPath": "/.config"
41 | }
42 | ],
43 | "terminationMessagePolicy": "File",
44 | "image": "registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.12.2"
45 | }
46 | ],
47 | "volumeMounts": [
48 | {
49 | "name": "kustomize",
50 | "mountPath": "/.config"
51 | }
52 | ],
53 | "env": [
54 | {
55 | "name": "ARGOCD_EXEC_TIMEOUT",
56 | "value": "360s"
57 | },
58 | {
59 | "name": "KUSTOMIZE_PLUGIN_HOME",
60 | "value": "/.config/kustomize/plugin"
61 | }
62 | ],
63 | "resources": {
64 | "limits": {
65 | "cpu": "8",
66 | "memory": "16Gi"
67 | },
68 | "requests": {
69 | "cpu": "1",
70 | "memory": "2Gi"
71 | }
72 | }
73 | }
74 | }
75 | }
76 |
--------------------------------------------------------------------------------
/_acmztp/argocd/clusters-app.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 | name: clusters-sub
5 | ---
6 | apiVersion: argoproj.io/v1alpha1
7 | kind: Application
8 | metadata:
9 | name: clusters
10 | namespace: openshift-gitops
11 | spec:
12 | destination:
13 | server: https://kubernetes.default.svc
14 | namespace: clusters-sub
15 | project: ztp-app-project
16 | source:
17 | path: _acmztp/siteconfig
18 | repoURL: https://github.com/adetalhouet/ocp-ztp
19 | targetRevision: master
20 |     # uncomment the plugin below if you will be adding the plugin binaries in the same repo->dir where
21 |     # the siteconfig.yaml exists AND use the ../../hack/patch-argocd-dev.sh script to re-patch the repo-server deployment
22 | # plugin:
23 | # name: kustomize-with-local-plugins
24 | syncPolicy:
25 | automated:
26 | prune: true
27 | selfHeal: true
28 | syncOptions:
29 | - CreateNamespace=true
30 |
--------------------------------------------------------------------------------
/_acmztp/argocd/gitops-policy-rolebinding.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: rbac.authorization.k8s.io/v1
2 | kind: ClusterRoleBinding
3 | metadata:
4 | name: gitops-policy
5 | roleRef:
6 | apiGroup: rbac.authorization.k8s.io
7 | kind: ClusterRole
8 | name: open-cluster-management:cluster-manager-admin
9 | subjects:
10 | - kind: ServiceAccount
11 | name: openshift-gitops-argocd-application-controller
12 | namespace: openshift-gitops
13 |
--------------------------------------------------------------------------------
/_acmztp/argocd/kustomization.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kustomize.config.k8s.io/v1beta1
2 | kind: Kustomization
3 |
4 | resources:
5 | - app-project.yaml
6 | - policies-app-project.yaml
7 | - gitops-policy-rolebinding.yaml
8 | - clusters-app.yaml
9 | - policies-app.yaml
10 |
--------------------------------------------------------------------------------
/_acmztp/argocd/policies-app-project.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: argoproj.io/v1alpha1
2 | kind: AppProject
3 | metadata:
4 | name: policy-app-project
5 | namespace: openshift-gitops
6 | spec:
7 | clusterResourceWhitelist:
8 | - group: ''
9 | kind: Namespace
10 | destinations:
11 | - namespace: 'ztp*'
12 | server: '*'
13 | - namespace: 'policies-sub'
14 | server: '*'
15 | namespaceResourceWhitelist:
16 | - group: ''
17 | kind: ConfigMap
18 | - group: ''
19 | kind: Namespace
20 | - group: 'apps.open-cluster-management.io'
21 | kind: PlacementRule
22 | - group: 'policy.open-cluster-management.io'
23 | kind: Policy
24 | - group: 'policy.open-cluster-management.io'
25 | kind: PlacementBinding
26 | - group: 'ran.openshift.io'
27 | kind: PolicyGenTemplate
28 | sourceRepos:
29 | - '*'
30 |
--------------------------------------------------------------------------------
/_acmztp/argocd/policies-app.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 | name: policies-sub
5 | ---
6 | apiVersion: argoproj.io/v1alpha1
7 | kind: Application
8 | metadata:
9 | name: policies
10 | namespace: openshift-gitops
11 | spec:
12 | destination:
13 | server: https://kubernetes.default.svc
14 | namespace: policies-sub
15 | project: policy-app-project
16 | source:
17 | path: _acmztp/policygentemplates
18 | repoURL: https://github.com/adetalhouet/ocp-ztp
19 | targetRevision: master
20 |     # uncomment the plugin below if you will be adding the plugin binaries in the same repo->dir where
21 |     # the policyGenTemplate.yaml exists AND use the ../../hack/patch-argocd-dev.sh script to re-patch the repo-server deployment
22 | # plugin:
23 | # name: kustomize-with-local-plugins
24 | syncPolicy:
25 | automated:
26 | prune: true
27 | selfHeal: true
28 | syncOptions:
29 | - CreateNamespace=true
30 |
31 |
--------------------------------------------------------------------------------
/_acmztp/argocd/ztp_gitops_flow.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/adetalhouet/ocp-ztp/10215f5b8ba18742b7230399bae1aa18bcd429cb/_acmztp/argocd/ztp_gitops_flow.png
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/common.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: ran.openshift.io/v1
3 | kind: PolicyGenTemplate
4 | metadata:
5 | name: "common"
6 | namespace: "ztp-common"
7 | spec:
8 | bindingRules:
9 | # These policies will correspond to all clusters with this label:
10 | common: "true"
11 | sourceFiles:
12 | # Create operators policies that will be installed in all clusters
13 |
14 | # Enable Web Terminal
15 | - fileName: web-terminal/namespace.yaml
16 | policyName: "web-terminal-policy"
17 | - fileName: web-terminal/operator-group.yaml
18 | policyName: "web-terminal-policy"
19 | - fileName: web-terminal/subscription.yaml
20 | policyName: "web-terminal-policy"
21 | - fileName: web-terminal/status.yaml
22 | policyName: "web-terminal-policy"
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/kustomization.yaml:
--------------------------------------------------------------------------------
1 | generators:
2 | - common.yaml
3 |
4 | resources:
5 | - ns.yaml
6 |
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/ns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 | name: ztp-common
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/source-crs/web-terminal/namespace.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 | name: web-terminal
5 | annotations:
6 | ran.openshift.io/ztp-deploy-wave: "2"
7 |
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/source-crs/web-terminal/operator-group.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: operators.coreos.com/v1
2 | kind: OperatorGroup
3 | metadata:
4 | name: web-terminal
5 | namespace: web-terminal
6 | annotations:
7 | ran.openshift.io/ztp-deploy-wave: "2"
8 | spec:
9 | targetNamespaces:
10 | - web-terminal
11 |
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/source-crs/web-terminal/status.yaml:
--------------------------------------------------------------------------------
1 | # This CR verifies the installation/upgrade of the Web Terminal Operator
2 | apiVersion: operators.coreos.com/v1
3 | kind: Operator
4 | metadata:
5 | name: web-terminal.web-terminal
6 | annotations:
7 | ran.openshift.io/ztp-deploy-wave: "2"
8 | status:
9 | components:
10 | refs:
11 | - kind: Subscription
12 | namespace: web-terminal
13 | conditions:
14 | - type: CatalogSourcesUnhealthy
15 | status: "False"
16 | - kind: InstallPlan
17 | namespace: web-terminal
18 | conditions:
19 | - type: Installed
20 | status: "True"
21 | - kind: ClusterServiceVersion
22 | namespace: web-terminal
23 | conditions:
24 | - type: Succeeded
25 | status: "True"
26 | reason: InstallSucceeded
27 |
--------------------------------------------------------------------------------
/_acmztp/policygentemplates/source-crs/web-terminal/subscription.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: operators.coreos.com/v1alpha1
2 | kind: Subscription
3 | metadata:
4 | name: web-terminal
5 | spec:
6 | channel: fast
7 | installPlanApproval: Automatic
8 | name: web-terminal
9 | source: redhat-operators
10 | sourceNamespace: openshift-marketplace
--------------------------------------------------------------------------------
/_acmztp/siteconfig/.gitignore:
--------------------------------------------------------------------------------
1 | ca-montreal-secret.yaml
--------------------------------------------------------------------------------
/_acmztp/siteconfig/ca-montreal-bmc-secret.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Secret
3 | metadata:
4 | name: bmh-secret
5 | namespace: ca-montreal
6 | data:
7 | password: Ym9iCg==
8 | username: Ym9iCg==
9 | type: Opaque
--------------------------------------------------------------------------------
/_acmztp/siteconfig/ca-montreal-sealed-secret.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: bitnami.com/v1alpha1
2 | kind: SealedSecret
3 | metadata:
4 | creationTimestamp: null
5 | name: assisted-deployment-pull-secret
6 | namespace: ca-montreal
7 | spec:
8 | encryptedData:
9 | .dockerconfigjson: AgCAO04A1E8uXGB7aMfXtWHQN9UFAb7bpswtmJmnRJ58P2DrMnXjIi8a65zl7d7ZqqDtu1gFKzzyoYfHu8riSurZ0rWGOFIU7t/pLkCqk3+nZyW++ZAXZybew/UKjOwHhib7XgWklXiUT00HK/PA2sHdsjgVSWJlLO6xiltWbdlkmsGyPsOyBzicgTsZpltInC00EJzr3pVNBNHuKGNYnTPUWA4DR5ikXv+pBS7tUnee8ccVSuUVHk03RyhLcr+mfGKueRXtB3mk2zwXxg1g2bZ6jOUojV9kvurLf+mLvsHneba1ivWYeO2kimbSIrcl9CSyURBsLmZUecMYbDQa/mgD4hDonTJIR8j3QnALHa3nNdiqDuzKSG/CFtj4WIrSqd8oUVoLTW9t6of9zR3hEmHvC2Y58pDSv0y4WEFkLeZU4li9wqsuO4eTnyL3Grvcn1excdQFYeyiyous1l3fSCFN2rVGu63FXEwR+nAMLEJHcDiEmClr9tij2PMZB5eZ2DzUkMaLFgzDwiSmwlm1XcKiI9l0x6iWJtwhDknjo0ig/a1PU2F2zZKSum/vo2KNuAvLAEKNH7iQlOrgm1fRKGhaRto7vzErn7nqGPGyf9+WdgBsrPmzvqFEs32SSxl9r0CIcLYmrZ0p2cWM/oOWQSJaawBaUJzNzEk736w5PsGHd4gu0lei7GRKA93jZjeQqjXg58bQtPHFtqb4ORCSu5HvIQ+H42TYGYbbFlbO3FZgmplxpGY93VG8mGzb/ptwSo9Expk/AZwMOUDPdNXqmFu1UC/E7i2QSasYr7jjaP8rnxRhwTIr8HZey4iIzIWh8D9i52+sKdUJuRzGKJSiX4zedwmGc01aLUfrSMY8et1oQX1tQW9HM3RgdrnOV9lnzuvEH1uGulML31uhjkuELMjGL0t3XXTjkcXMjagbOJcYbEDZfb51/eoV50dYcgAWJrDvysx2db0sHHiwEtbUp+gSufGbKS3ZcpYH5oaxvhNITuBq5G4zHLKGKaQpVjb57kVyvFWiUdQCCIB/TSLP262QyRBkJIdZ0gRuatJkDm33Vo1OMMAlCCZMMSGCHu87b6kXgIoRSy/X4qbSCg/8WxDkMKSlmORYjMoTrOMAGNwHQxCehjMXoa9XrNzkjMa4CGD347YxIXLpjdcjETIZGA7KwcbrcC/DiN4WGmyNXYgiPk2tP4ItWgrH9haEq5RW+a0i3za7P/pSIT6c+aKIB0F8WXqJg1CFYgGnSDVczYjXzAmaU9MzJY6clSkfpUqNWe0xNmk+OVv8ioGgOac+UmEZY8l8JWoEHjVeW6tkbzaYIn9zRO41KZ9dppALIH6EBrr3os64qLPqOlLtEfx1vY2p2pdhIf3sXsEEwys1KTJcfjwb3FO6yaJTG8qZD5vGjpP4gBdvun/KlqLFA+r56aGrvkS9gVkVpkA5Ldcfc29+uAKq8jNV7tOlr4W+1O/FxfhhIswcTsXKO/klf5EnX80NrWmVdVYQErWwBssw9Q1PSB0s0GVrl7mIcjb7Ujrcp+0reAah/RzW/W6TFaKO6wNaAVN2FHGd+MCpRPfrEIYs1vYKfv4yg41ooYXF7SJSMFv5WnTRXdQ/Ns2KWxTjeo64zA869ei/gfNFGhAgoS9zPHfRPe2Zger208s38+Q0Bbpoi+3DRztYCaGUaZuhdVgUOxPgpxXMtP8bic1N3zTOV1UyquRPQsLVooDuiwYUnYgtTcjA1hyD+wQtaqI7lzY5a7+K5+xvCCJguJLnjtmn46sOaHkSxKkpVif8ewQWQP2o/+0DhH7PrwgTu3p0We8Ug9ZJotGnPEyZo2mtjqIV1/l03W3huYeqrMBZHpYxf4rBAb3n8BDfi3tJun01OqwY3HMtuZsxt06xJJJX56WP95Tq048/XAf6cFeGtwgsrGCD+4zboeagvJz4J/M43m1dI3l+XLoJvbGpGML0K3zktsR6oEuRb6NleLpIWclp+LfNGWfqEaYbV+KJVTuutbXbaGfCQ9XrAkc8j1c9Z/lhMwoKpVUNO8mAdougELw+7cz+NqxJGCvM3Kln4UvseaLHbcRhavpAvU5DYpmLr92Ga/PIMXj8caANaN6wXwXS0yEx8HK2tncu16ISLrEOHjj8d4WkGvKYF9rlGQfPV6Ta6z3FPmaQDSLDJwY2g89vf+HWjPZPeXtld5GpjTIKU30S1IZG4aGUzxmlkOa9mTtqAAUCJF1FyS88VvEKbVyY5ErGRZawqMH/oBkQC5YfvalkaEovq56u1opULpeDA/yqU3jjl/KIg/27mWSGXSnbAjQy8WYbeGcTD5Uwmbs55hok6LP/jAw9jNyUGRQEj1Utb/Zmi9Q8UyvZn9AFsiJXR9iDQXRmHHk3O2mbgSxwPxhAs9/D70is2FDec0UtiKNJmbjKqPWUUBVk7gqT3hhm5A52LQ2PgSnz+XAjA+MuVYS7sygmYDgshc8Yn1IGvepyLXLo5K1ZS1soiACyoR7dpXD50njNDYvE4/2BwKGJ5pDYG2a/XqV8/gIunbzYu9/elwwetguBZQyzFHlW9RZc+Po0SyfuDNEAU5SXB88dBX239oY4PN1pq/hnK5Px3k+/IUwltPa886hcRr6H5enKYIasDnurIstLHf8884hTC1X9rJ2DrDt/R3QSI5EgMDC6fvNasNRB1KhBix0ZAnQ4hTsmCD3DIrmzkQVEk3I5U1JbEdBfr3YpABzSI7zCF0HY9iKE1CwtTSfincCJJeX4jPvWVjo3u5pbC9AL2X0wrME6L0nSOjiZPG5T4mGGAaqmd6sB1LTlfJjBbGLWNEoA7PemmEABeVRPsbkbKUfyFUTd/1bU0Z8oIek2elec7INo8DizokQ1jrGH3q7S4Oh2LOWFSeeRX6J+Dg+e6mzinbIJH2LBh8sNEWXz4+IIs62cP2XtN0Pm4IJJjCChMmfLOcRTlXrWJtdAmy/WBgEZG1UGuL1qsbfEDG5kSIQIQt2ywkVLAX8vrhqAHGd0ew6sCmlnTdLszq7UctEsFdCE3Y14GrfKkij7weWemI9XYKu8qEQ8hAxbJmwF89wKF7XfojumZq5UaqrusrcL2VZOwIUQSi23QyQ24ThswW57UIC5j3Pe+k37ibIy76NR0cALzfknnfv6LO6DJdaa1tAf6wGEo9l0OE3clXWbM7frgf20buLk+Nefchh/4L0lwACQnPaZe/wfCLPb4A8FLE5G0NGMnRvFn2aeIUi+oD2zh8L0g3v3yDuq2oAkOoPl59HY10YbMv+MMBqlWJnEGrj3oMkyoRvhx89JmsEkmjwN4hOMTYhNOcrxcBFfcGnP2T+gOjvQg6XdbXAJdkxPjHxyqIWn2OuyTgJhxLkw5CzwWiyR2kh2DLMUHXyB5Pfhxak9YigBNtHYNVMRuPaqxLEKyBy/snWb7mM9tK6meNyiy6o1/RULfmxcxeqBjHXxtYzwxLFUA8Z4Ps5i5BD99/5MRaoSTZIW18VKT4uASgYk6+QfDxtJlYMQhFtfOA7JAb6csuxIOgGrn2xp
c4pCF54ZK77uml2S7eRSiqnr/pCTK38yBUYOENivmgwCsfMKJpLIbR8vPlV75YtPisag9ZQWwonaA3cb6BJveRBdf9JBEWv3MidA6ScLqKpQjajUjGqGprnsZMmJocSf2mvR23xfv/tbB8pZt0WROCTkfM0YqDN8Okat78C5J6O+RGeYTQCoby9PbJ+TA0+dx5Z+QKlbLRD8lLiUuo2ZVRPftXO7I5sGjYSIBF3gmWL6VtBYlecBdhOHsuLxgbaokHcL7CWqExl9IHcPVMpkrvKSP5haQnVMojnZ2XE/R3eSilAGK8ssksFeaDCzIyw6zWnn2nQayLMmmcEUbd3DeekWynBtjiCHIu4kAV68u4HGF83GX5vfCl10UGT2wFtJkiv88JJg8eI3px3SpSt1gxAvMVVDhE+lYDWJZBDX6qB+Ldp9boH0/nzmcc5Z/hiRtdAr0WH9dQHlw9IaP9v9SJopP3NtiOsKCOv3JSdHxG2wH5V7Dy3AwEUvhXVngh63D4EbNuO0r+xt7rFJaDGXYnJbchjZHtObG6cNfK4NpzJZ4m0h5JWrEnU1D1eom+uHkdoR+tttQoaX4zN14sFF/4maksD/UHJcmkKgo1fJQHENTMb6JwG7juGYsqv1x6/0m/IYHlKOGMf8Qf2y3kmkYdf3ks9V6GBtsFrlUrbbnu0SCROM2oyF8RJZLwT3oePgZk07T2FWi2pFh1mXVk4ezoyXqyVI6dC8OJhnSmEnkKk/QPgD8ndGuOF9kA+rOxV5NUHpvoBNIfShjcYlvrbR2UjoBytEPY3AmprEgRDWGjgQi3XHy2LAEEcQ1qrpNH3kaQlIjFRF8EVW1yWrP6E1GjfdxAYtGGkwMZlzMOLOYREP
10 | template:
11 | data: null
12 | metadata:
13 | creationTimestamp: null
14 | name: assisted-deployment-pull-secret
15 | namespace: ca-montreal
16 | type: kubernetes.io/dockerconfigjson
17 |
18 |
--------------------------------------------------------------------------------
/_acmztp/siteconfig/ca-montreal.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: ran.openshift.io/v1
2 | kind: SiteConfig
3 | metadata:
4 | name: "ca-montreal"
5 | namespace: "ca-montreal"
6 | spec:
7 | baseDomain: "adetalhouet.ca"
8 | pullSecretRef:
9 | name: "assisted-deployment-pull-secret"
10 | clusterImageSetNameRef: "openshift-v4.11.9"
11 | sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFS0S+jf5OW7CDuwuJO46IpeunNc19oyXlRQwR6tBx67EPXAt3LxB/BMbyr8+MLMIErzaIUSvG70yk34cB4jbXrs8cbwSdxGPro3ZZqu9qT8+ILhEXtok6uUBG8OKlhIqrAm6Iq3hH1Kbgwj/72B9eaKIpNHzvrZSM/UNAYZzNvENuBGeWuO1kfxnhWmzp+eh+8vTPcYdLzJKv+BOQBxz6T8SI5By0TfvAvVS2xMmhMRMs1TBDLUBgzZgd06X0ghSaOimz4aVbmqI4WwClIy8ZiXhL/j1IkSF97qNo26yb/yYnyk+BqqrsOQIEQQcfzY+skpHQ1JiPjPVYHsujhgctFgwCR0/KKw2QcqOK67est5gDW3vaf/zIDhRnPdT2IhJQTQNEepRjKfHF2EgGIMSU4TosJ5ygx+q0oZ5ITcFHSiIK3aoOt2QXZPY+Dtork5zYbE2M3PLrgRrT1VW1eTH6v5GYjUDq95mwcKYBirSvd3QuUbrGjFQuxfCZlceUui0= adetalhouet@joatmon.localdomain"
12 | clusters:
13 | - clusterName: "ca-montreal"
14 | networkType: "OVNKubernetes"
15 | # installConfigOverrides: "{\"capabilities\":{\"baselineCapabilitySet\": \"None\", \"additionalEnabledCapabilities\": [ \"marketplace\", \"NodeTuning\" ] }}"
16 | # extraManifestPath: manifests
17 | # extraManifests:
18 | # filter:
19 | # inclusionDefault: exclude
20 | # include:
21 | # - enable-crun-master.yaml
22 | # - enable-crun-worker.yaml
23 | clusterLabels:
24 | sites: "ca-montreal"
25 | common: "true"
26 | clusterNetwork:
27 | - cidr: "10.128.0.0/14"
28 | hostPrefix: 23
29 | serviceNetwork:
30 | - "172.30.0.0/16"
31 | machineNetwork:
32 | - cidr: "192.168.123.0/24"
33 | # additionalNTPSources:
34 | # - 2.rhel.pool.ntp.org
35 | # cpuPartitioningMode: AllNodes
36 | nodes:
37 | - hostName: "ca-montreal-node1"
38 | role: "master"
39 | bmcAddress: "redfish-virtualmedia+http://192.168.1.170:8000/redfish/v1/Systems/181d3e23-d417-40a9-88cf-1e01d7fb75fe"
40 | bmcCredentialsName:
41 | name: "bmh-secret"
42 | bootMACAddress: "02:04:00:00:01:03"
--------------------------------------------------------------------------------
/_acmztp/siteconfig/kustomization.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kustomize.config.k8s.io/v1beta1
2 | kind: Kustomization
3 |
4 | resources:
5 | - ca-montreal-sealed-secret.yaml
6 | - ca-montreal-bmc-secret.yaml
7 |
8 | generators:
9 | - ca-montreal.yaml
10 | - siteConfig-ca-montreal.yaml
--------------------------------------------------------------------------------
/_acmztp/siteconfig/manifests/enable-crun-master.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: machineconfiguration.openshift.io/v1
2 | kind: ContainerRuntimeConfig
3 | metadata:
4 | name: enable-crun-master
5 | spec:
6 | machineConfigPoolSelector:
7 | matchLabels:
8 | pools.operator.machineconfiguration.openshift.io/master: ""
9 | containerRuntimeConfig:
10 | defaultRuntime: crun
--------------------------------------------------------------------------------
/_acmztp/siteconfig/manifests/enable-crun-worker.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: machineconfiguration.openshift.io/v1
2 | kind: ContainerRuntimeConfig
3 | metadata:
4 | name: enable-crun-worker
5 | spec:
6 | machineConfigPoolSelector:
7 | matchLabels:
8 | pools.operator.machineconfiguration.openshift.io/worker: ""
9 | containerRuntimeConfig:
10 | defaultRuntime: crun
--------------------------------------------------------------------------------
/_acmztp/siteconfig/siteConfig-ca-montreal.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: ran.openshift.io/v1
2 | kind: SiteConfig
3 | metadata:
4 | name: ca-montreal
5 | namespace: ca-montreal
6 | labels: {}
7 | spec:
8 | pullSecretRef:
9 | name: assisted-deployment-pull-secret
10 | clusterImageSetNameRef: openshift-v4.11.9
11 | sshPublicKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFS0S+jf5OW7CDuwuJO46IpeunNc19oyXlRQwR6tBx67EPXAt3LxB/BMbyr8+MLMIErzaIUSvG70yk34cB4jbXrs8cbwSdxGPro3ZZqu9qT8+ILhEXtok6uUBG8OKlhIqrAm6Iq3hH1Kbgwj/72B9eaKIpNHzvrZSM/UNAYZzNvENuBGeWuO1kfxnhWmzp+eh+8vTPcYdLzJKv+BOQBxz6T8SI5By0TfvAvVS2xMmhMRMs1TBDLUBgzZgd06X0ghSaOimz4aVbmqI4WwClIy8ZiXhL/j1IkSF97qNo26yb/yYnyk+BqqrsOQIEQQcfzY+skpHQ1JiPjPVYHsujhgctFgwCR0/KKw2QcqOK67est5gDW3vaf/zIDhRnPdT2IhJQTQNEepRjKfHF2EgGIMSU4TosJ5ygx+q0oZ5ITcFHSiIK3aoOt2QXZPY+Dtork5zYbE2M3PLrgRrT1VW1eTH6v5GYjUDq95mwcKYBirSvd3QuUbrGjFQuxfCZlceUui0=
12 | adetalhouet@joatmon.localdomain
13 | sshPrivateKeySecretRef:
14 | name: ""
15 | clusters:
16 | - apiVIP: ""
17 | ingressVIP: ""
18 | apiVIPs: []
19 | ingressVIPs: []
20 | clusterName: ca-montreal
21 | holdInstallation: false
22 | additionalNTPSources: []
23 | nodes:
24 | - bmcAddress: redfish-virtualmedia+http://192.168.1.170:8000/redfish/v1/Systems/181d3e23-d417-40a9-88cf-1e01d7fb75fe
25 | bootMACAddress: "02:04:00:00:01:03"
26 | rootDeviceHints: {}
27 | cpuset: ""
28 | nodeNetwork:
29 | config: {}
30 | interfaces: []
31 | nodeLabels: {}
32 | hostName: ca-montreal-node1
33 | bmcCredentialsName:
34 | name: bmh-secret
35 | bootMode: ""
36 | userData: {}
37 | installerArgs: ""
38 | ignitionConfigOverride: ""
39 | role: master
40 | crTemplates: {}
41 | biosConfigRef:
42 | filePath: ""
43 | diskPartition: []
44 | ironicInspect: ""
45 | machineNetwork:
46 | - cidr: 192.168.123.0/24
47 | serviceNetwork:
48 | - 172.30.0.0/16
49 | clusterLabels:
50 | common: "true"
51 | sites: ca-montreal
52 | networkType: OVNKubernetes
53 | clusterNetwork:
54 | - cidr: 10.128.0.0/14
55 | hostPrefix: 23
56 | ignitionConfigOverride: ""
57 | diskEncryption:
58 | type: ""
59 | tang: []
60 | extraManifestPath: ""
61 | biosConfigRef:
62 | filePath: ""
63 | extraManifests:
64 | filter: null
65 | cpuPartitioningMode: ""
66 | extramanifestonly: false
67 | nummasters: 0
68 | numworkers: 0
69 | clustertype: ""
70 | crTemplates: {}
71 | mergeDefaultMachineConfigs: false
72 | baseDomain: adetalhouet.ca
73 | crTemplates: {}
74 | biosConfigRef:
75 | filePath: ""
76 |
--------------------------------------------------------------------------------
/doc/resources/ocp-ztp.drawio:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/adetalhouet/ocp-ztp/10215f5b8ba18742b7230399bae1aa18bcd429cb/doc/resources/ocp-ztp.drawio
--------------------------------------------------------------------------------
/doc/resources/ocp-ztp.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/adetalhouet/ocp-ztp/10215f5b8ba18742b7230399bae1aa18bcd429cb/doc/resources/ocp-ztp.png
--------------------------------------------------------------------------------
/hub/.gitignore:
--------------------------------------------------------------------------------
1 | 03-assisted-deployment-ssh-private-key.yaml
2 | 04-assisteddeploymentpullsecret.yaml
--------------------------------------------------------------------------------
/hub/00-clusterimageset.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: hive.openshift.io/v1
3 | kind: ClusterImageSet
4 | metadata:
5 | name: openshift-v4.11.9
6 | namespace: open-cluster-management
7 | spec:
8 | releaseImage: quay.io/openshift-release-dev/ocp-release:4.11.9-x86_64
9 | ---
10 | apiVersion: hive.openshift.io/v1
11 | kind: ClusterImageSet
12 | metadata:
13 | name: openshift-v4.13.1
14 | namespace: open-cluster-management
15 | spec:
16 | releaseImage: quay.io/openshift-release-dev/ocp-release:4.13.1-x86_64
--------------------------------------------------------------------------------
/hub/02-assistedserviceconfig.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: ConfigMap
4 | metadata:
5 | name: assisted-service-config
6 | namespace: multicluster-engine
7 | data:
8 | HW_VALIDATOR_REQUIREMENTS: |
9 | [{
10 | "version": "default",
11 | "master": {
12 | "cpu_cores": 4,
13 | "ram_mib": 16384,
14 | "disk_size_gb": 100,
15 | "network_latency_threshold_ms": 100,
16 | "packet_loss_percentage": 0
17 | },
18 | "worker": {
19 | "cpu_cores": 2,
20 | "ram_mib": 8192,
21 | "disk_size_gb": 100,
22 | "network_latency_threshold_ms": 1000,
23 | "packet_loss_percentage": 10
24 | },
25 | "sno": {
26 | "cpu_cores": 8,
27 | "ram_mib": 16384,
28 | "disk_size_gb": 100
29 | },
30 | "edge-worker": {
31 | "cpu_cores": 2,
32 | "ram_mib": 8192,
33 | "disk_size_gb": 15
34 | }
35 | }]
36 | ---
37 | apiVersion: agent-install.openshift.io/v1beta1
38 | kind: AgentServiceConfig
39 | metadata:
40 | name: agent
41 | namespace: open-cluster-management
42 | annotations:
43 | unsupported.agent-install.openshift.io/assisted-service-configmap: "assisted-service-config"
44 | spec:
45 | databaseStorage:
46 | accessModes:
47 | - ReadWriteOnce
48 | resources:
49 | requests:
50 | storage: 40Gi
51 | filesystemStorage:
52 | accessModes:
53 | - ReadWriteOnce
54 | resources:
55 | requests:
56 | storage: 100Gi
57 | imageStorage:
58 | accessModes:
59 | - ReadWriteOnce
60 | resources:
61 | requests:
62 | storage: 50Gi
63 | osImages:
64 | - openshiftVersion: "4.9"
65 | version: "49.83.202103251640-0"
66 | url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/4.9.0/rhcos-4.9.0-x86_64-live.x86_64.iso"
67 | rootFSUrl: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.9/4.9.0/rhcos-live-rootfs.x86_64.img"
68 | cpuArchitecture: "x86_64"
69 | - openshiftVersion: "4.11"
70 | version: "49.83.202103251640-0" # NOTE: value appears copied from the 4.9 entry; it should match the RHCOS build of the 4.11.9 ISO below
71 | url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.11/latest/rhcos-4.11.9-x86_64-live.x86_64.iso"
72 | rootFSUrl: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.11/latest/rhcos-4.11.9-x86_64-live-rootfs.x86_64.img"
73 | cpuArchitecture: "x86_64"
74 | - openshiftVersion: "4.13"
75 | version: "49.83.202103251640-0" # NOTE: value appears copied from the 4.9 entry; it should match the RHCOS build of the 4.13.0 ISO below
76 | url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.13/latest/rhcos-4.13.0-x86_64-live.x86_64.iso"
77 | rootFSUrl: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.13/latest/rhcos-4.13.0-x86_64-live-rootfs.x86_64.img"
78 | cpuArchitecture: "x86_64"
79 |
--------------------------------------------------------------------------------
/hub/03-assisted-deployment-ssh-private-key-EXAMPLE.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Secret
4 | metadata:
5 | name: assisted-deployment-ssh-private-key
6 | namespace: open-cluster-management
7 | stringData:
8 | ssh-privatekey: |-
9 | YOUR_KEY
10 | type: Opaque
--------------------------------------------------------------------------------
/hub/README.md:
--------------------------------------------------------------------------------
1 | This folder contains what is needed to configure the `assisted-service` in the hub cluster.
2 |
3 | Adjust the various files based on the release you want to deploy.
4 |
5 | After applying these manifests, a new `deployment` named `assisted-service` will be created in the `open-cluster-management` namespace. Along with the deployment, there will be a `service` exposing the API endpoint the spoke clusters will use to interact with the hub.
6 | 
7 | The agent running on the spoke hosts will use that endpoint as the main interface between the hub and the spoke clusters.
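8 | 
9 | For example, a minimal sketch to apply and verify (run from this folder; adjust names to your environment):
10 | 
11 | ~~~
12 | oc apply -k .
13 | oc -n open-cluster-management get deployment assisted-service
14 | oc -n open-cluster-management get svc assisted-service
15 | ~~~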
--------------------------------------------------------------------------------
/hub/argocd-app.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: argoproj.io/v1alpha1
2 | kind: Application
3 | metadata:
4 | name: acm-assisted-installer-hub-config
5 | namespace: openshift-gitops
6 | spec:
7 | destination:
8 | namespace: open-cluster-management
9 | server: 'https://kubernetes.default.svc'
10 | project: default
11 | source:
12 | path: hub
13 | repoURL: 'https://github.com/adetalhouet/ocp-ztp/'
14 | targetRevision: HEAD
15 | syncPolicy:
16 | automated:
17 | selfHeal: true
--------------------------------------------------------------------------------
/hub/kustomization.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: kustomize.config.k8s.io/v1beta1
3 | kind: Kustomization
4 |
5 | resources:
6 | - 00-clusterimageset.yaml
7 | - 02-assistedserviceconfig.yaml
8 | # - 03-assisted-deployment-ssh-private-key.yaml
9 |
--------------------------------------------------------------------------------
/hypershift/README.md:
--------------------------------------------------------------------------------
1 | # Setup Hypershift addon and AI for MCE with ACM hub cluster
2 |
3 | Start by reading the official product [documentation](https://access.redhat.com/documentation/en-us/red_hat_advanced_cluster_management_for_kubernetes/2.5/html/clusters/managing-your-clusters#hosted-control-plane-intro)
4 |
5 |
6 | ## Table of Contents
7 |
8 |
9 | - [Enable the hypershift related components on the hub cluster](#enable-the-hypershift-related-components-on-the-hub-cluster)
10 | - [Turn one of the managed clusters into the HyperShift management cluster](#turn-one-of-the-managed-clusters-into-the-hypershift-management-cluster)
11 | - [Patch the provisioning CR to watch all namespaces](#patch-the-provisioning-cr-to-watch-all-namespaces)
12 | - [Create Assisted Installer service in MCE namespace](#create-assisted-installer-service-in-mce-namespace)
13 | - [Setup DNS entries for hypershift cluster](#setup-dns-entries-for-hypershift-cluster)
14 | - [Create the hypershift cluster namespace](#create-the-hypershift-cluster-namespace)
15 | - [Create ssh and pull-secret secret](#create-ssh-and-pull-secret-secret)
16 | - [Create the InfraEnv](#create-infraenv)
17 | - [Create BareMetalHost consuming the above InfraEnv](#create-baremetalhost-consuming-the-above-infraenv)
18 | - [Create Hypershift Deployment](#create-hypershift-deployment)
19 |
20 |
21 | ## [Enable the hypershift related components on the hub cluster](https://github.com/stolostron/hypershift-deployment-controller/blob/main/docs/provision_hypershift_clusters_by_mce.md#enable-the-hosted-control-planes-related-components-on-the-hub-cluster)
22 |
23 | ~~~
24 | oc patch mce multiclusterengine --type=merge -p '{"spec":{"overrides":{"components":[{"name":"hypershift-preview","enabled": true}]}}}'
25 | ~~~
26 |
27 | ## [Turn one of the managed clusters into the HyperShift management cluster](https://github.com/stolostron/hypershift-deployment-controller/blob/main/docs/provision_hypershift_clusters_by_mce.md#turn-one-of-the-managed-clusters-into-the-hypershift-management-cluster)
28 |
29 | In my case, I will use `local-cluster`.
30 |
31 | Below is how to create the bucket using ODF.
32 |
33 | ~~~
34 | echo "---
35 | apiVersion: addon.open-cluster-management.io/v1alpha1
36 | kind: ManagedClusterAddOn
37 | metadata:
38 | name: hypershift-addon
39 | namespace: local-cluster
40 | spec:
41 | installNamespace: open-cluster-management-agent-addon
42 | ---
43 | apiVersion: objectbucket.io/v1alpha1
44 | kind: ObjectBucketClaim
45 | metadata:
46 | name: hypershift-operator-oidc-provider-bucket
47 | namespace: local-cluster
48 | spec:
49 | additionalConfig:
50 | bucketclass: noobaa-default-bucket-class
51 | generateBucketName: hypershift-operator-oidc-provider-bucket
52 | objectBucketName: obc-hypershift-operator-oidc-provider-bucket
53 | storageClassName: openshift-storage.noobaa.io" | oc apply -f -
54 | ~~~
55 |
56 | Below is how to create the secret for hypershift, consuming the bucket credentials.
57 |
58 | ~~~
59 | ACCESS_KEY=$(oc get secret hypershift-operator-oidc-provider-bucket -n local-cluster --template={{.data.AWS_ACCESS_KEY_ID}} | base64 -d)
60 | SECRET_KEY=$(oc get secret hypershift-operator-oidc-provider-bucket -n local-cluster --template={{.data.AWS_SECRET_ACCESS_KEY}} | base64 -d)
61 |
62 | echo "[default]
63 | aws_access_key_id = $ACCESS_KEY
64 | aws_secret_access_key = $SECRET_KEY" > $HOME/bucket-creds
65 |
66 | oc create secret generic hypershift-operator-oidc-provider-s3-credentials \
67 | --from-file=credentials=$HOME/bucket-creds \
68 | --from-literal=bucket=hypershift-operator-oidc-p-e8c50eb0-7df3-4f27-ac24-2a5ad714b4d7 \
69 | --from-literal=region=Montreal \
70 | -n local-cluster
71 |
72 | oc label secret hypershift-operator-oidc-provider-s3-credentials -n local-cluster cluster.open-cluster-management.io/backup=""
73 | ~~~
74 |
75 | ## Patch the `provisioning` CR to watch all namespaces
76 |
77 | ~~~
78 | oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"watchAllNamespaces": true }}'
79 | ~~~
80 |
81 | ## Create Assisted Installer service in MCE namespace
82 |
83 | ~~~
84 | export DB_VOLUME_SIZE="10Gi"
85 | export FS_VOLUME_SIZE="10Gi"
86 | export OCP_VERSION="4.10"
87 | export ARCH="x86_64"
88 | export OCP_RELEASE_VERSION=$(curl -s https://mirror.openshift.com/pub/openshift-v4/${ARCH}/clients/ocp/latest-${OCP_VERSION}/release.txt | awk '/machine-os / { print $2 }')
89 | export ISO_URL="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/${OCP_VERSION}/4.10.16/rhcos-${OCP_VERSION}.16-${ARCH}-live.${ARCH}.iso"
90 | export ROOT_FS_URL="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/${OCP_VERSION}/latest/rhcos-live-rootfs.${ARCH}.img"
91 |
92 | envsubst <<"EOF" | oc apply -f -
93 | apiVersion: agent-install.openshift.io/v1beta1
94 | kind: AgentServiceConfig
95 | metadata:
96 | name: agent
97 | namespace: multicluster-engine
98 | spec:
99 | databaseStorage:
100 | accessModes:
101 | - ReadWriteOnce
102 | resources:
103 | requests:
104 | storage: ${DB_VOLUME_SIZE}
105 | filesystemStorage:
106 | accessModes:
107 | - ReadWriteOnce
108 | resources:
109 | requests:
110 | storage: ${FS_VOLUME_SIZE}
111 | osImages:
112 | - openshiftVersion: "${OCP_VERSION}"
113 | version: "${OCP_RELEASE_VERSION}"
114 | url: "${ISO_URL}"
115 | rootFSUrl: "${ROOT_FS_URL}"
116 | cpuArchitecture: "${ARCH}"
117 | EOF
118 | ~~~
119 |
120 | ## Setup DNS entries for hypershift cluster
121 | The example below uses bind as the DNS server.
122 | 
123 | Two records are required for the hypershift cluster to be functional and accessible.
124 | The first one is for the hosted cluster API server, which is exposed through a NodePort.
125 | The IP is one of the ACM cluster nodes.
126 | ~~~
127 | api-server.ca-montreal.adetalhouet.ca. IN A 192.168.123.10
128 | ~~~
129 |
130 | The second one provides ingress. A better solution could be implemented with keepalived and load balancing across the workers.
131 | The IP is one of the hypershift cluster's worker nodes.
132 | ~~~
133 | *.apps.ca-montreal.adetalhouet.ca. IN A 192.168.123.20
134 | ~~~
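135 | 
136 | To sanity-check that both records resolve (using the example names above):
137 | 
138 | ~~~
139 | dig +short api-server.ca-montreal.adetalhouet.ca
140 | dig +short test.apps.ca-montreal.adetalhouet.ca
141 | ~~~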
135 |
136 | ## Create the hypershift cluster namespace
137 |
138 | `oc create ns ca-montreal`
139 |
140 | ## Create ssh and pull-secret secret
141 | These are needed to provision the bare metal nodes and access them later on.
142 |
143 | ~~~
144 | export SSH_PUB_KEY=$(cat $HOME/.ssh/id_rsa.pub)
145 | envsubst <<"EOF" | oc apply -f -
146 | apiVersion: v1
147 | kind: Secret
148 | metadata:
149 | name: agent-demo-ssh-key
150 | namespace: ca-montreal
151 | stringData:
152 | id_rsa.pub: ${SSH_PUB_KEY}
153 | EOF
154 |
155 | DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`
156 | oc create secret generic pull-secret \
157 | -n ca-montreal \
158 | --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
159 | --type=kubernetes.io/dockerconfigjson
160 | ~~~
161 |
162 | ## Create InfraEnv
163 |
164 | This will generate the ISO used to bootstrap the bare metal nodes.
165 |
166 | ~~~
167 | envsubst <<"EOF" | oc apply -f -
168 | apiVersion: agent-install.openshift.io/v1beta1
169 | kind: InfraEnv
170 | metadata:
171 | name: ca-montreal
172 | namespace: ca-montreal
173 | spec:
174 | pullSecretRef:
175 | name: pull-secret
176 | sshAuthorizedKey: ${SSH_PUB_KEY}
177 | EOF
178 | ~~~
179 |
180 | ## Create BareMetalHost consuming the above InfraEnv
181 | Under the hood, OpenShift will mount the ISO and start the bare metal node. The agent embedded in the ISO will register the node with the Assisted Installer.
182 |
183 | ~~~
184 | apiVersion: metal3.io/v1alpha1
185 | kind: BareMetalHost
186 | metadata:
187 | name: ca-montreal-node1
188 | namespace: ca-montreal
189 | labels:
190 | infraenvs.agent-install.openshift.io: "ca-montreal"
191 | annotations:
192 | inspect.metal3.io: disabled
193 | bmac.agent-install.openshift.io/hostname: "ca-montreal-node1"
194 | spec:
195 | online: true
196 | bmc:
197 | address: redfish-virtualmedia+http://192.168.0.190:8000/redfish/v1/Systems/a59aa864-afa2-4363-a8c2-eac2edb63234
198 | credentialsName: ca-montreal-node1-secret
199 | disableCertificateVerification: true
200 | bootMACAddress: 02:04:00:00:01:03
201 | automatedCleaningMode: disabled
202 | ---
203 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
204 | apiVersion: v1
205 | kind: Secret
206 | metadata:
207 | name: ca-montreal-node1-secret
208 | namespace: ca-montreal
209 | data:
210 | password: Ym9iCg==
211 | username: Ym9iCg==
212 | type: Opaque
213 | ~~~
214 |
215 | Patch the corresponding bare metal node agents to approve them, set their role, and define the installation disk.
216 |
217 | ~~~
218 | oc get agents -n ca-montreal -o name | xargs oc patch -n ca-montreal -p '{"spec":{"installation_disk_id":"/dev/vda","approved":true,"role":"worker"}}' --type merge
219 | ~~~
220 |
221 | ## Create Hypershift Deployment
222 |
223 | ### Create role for Cluster API provider
224 | ~~~
225 | apiVersion: rbac.authorization.k8s.io/v1
226 | kind: Role
227 | metadata:
228 | creationTimestamp: null
229 | name: capi-provider-role
230 | namespace: ca-montreal
231 | rules:
232 | - apiGroups:
233 | - agent-install.openshift.io
234 | resources:
235 | - agents
236 | verbs:
237 | - '*'
238 | ~~~
239 |
240 | ### Create hosted control plane
241 | ~~~
242 | ---
243 | apiVersion: hypershift.openshift.io/v1alpha1
244 | kind: HostedCluster
245 | metadata:
246 | name: 'ca-montreal'
247 | namespace: 'ca-montreal'
248 | labels:
249 | "cluster.open-cluster-management.io/clusterset": 'default'
250 | spec:
251 | fips: false
252 | release:
253 | image: 'quay.io/openshift-release-dev/ocp-release:4.10.26-x86_64'
254 | dns:
255 | baseDomain: adetalhouet.ca
256 | controllerAvailabilityPolicy: SingleReplica
257 | infraID: ca-montreal
258 | etcd:
259 | managed:
260 | storage:
261 | persistentVolume:
262 | size: 4Gi
263 | type: PersistentVolume
264 | managementType: Managed
265 | infrastructureAvailabilityPolicy: SingleReplica
266 | platform:
267 | agent:
268 | agentNamespace: ca-montreal
269 | type: Agent
270 | networking:
271 | clusterNetwork:
272 | - cidr: 10.132.0.0/14
273 | machineNetwork:
274 | - cidr: 192.168.123.0/24
275 | networkType: OVNKubernetes
276 | serviceNetwork:
277 | - cidr: 172.31.0.0/16
278 | clusterID: af5d43f0-0936-49cf-88a3-79736034adb2
279 | pullSecret:
280 | name: pull-secret
281 | issuerURL: 'https://kubernetes.default.svc'
282 | sshKey:
283 | name: agent-demo-ssh-key
284 | autoscaling: {}
285 | olmCatalogPlacement: management
286 | services:
287 | - service: APIServer
288 | servicePublishingStrategy:
289 | nodePort:
290 | address: api-server.ca-montreal.adetalhouet.ca
291 | type: NodePort
292 | - service: OAuthServer
293 | servicePublishingStrategy:
294 | type: Route
295 | - service: Konnectivity
296 | servicePublishingStrategy:
297 | type: Route
298 | - service: Ignition
299 | servicePublishingStrategy:
300 | type: Route
301 | ~~~
302 |
303 | ### Create node pool consuming our bare metal hosts
304 | ~~~
305 | ---
306 | apiVersion: hypershift.openshift.io/v1alpha1
307 | kind: NodePool
308 | metadata:
309 | name: 'nodepool-ca-montreal-1'
310 | namespace: 'ca-montreal'
311 | spec:
312 | clusterName: 'ca-montreal'
313 | replicas: 1
314 | management:
315 | autoRepair: false
316 | upgradeType: InPlace
317 | platform:
318 | type: Agent
319 | agent:
320 | agentLabelSelector:
321 | matchLabels: {}
322 | release:
323 | image: quay.io/openshift-release-dev/ocp-release:4.10.26-x86_64
324 | ~~~
325 |
326 | ### Import cluster in ACM
327 | ~~~
328 | ---
329 | apiVersion: cluster.open-cluster-management.io/v1
330 | kind: ManagedCluster
331 | metadata:
332 | labels:
333 | cloud: hybrid
334 | name: ca-montreal
335 | cluster.open-cluster-management.io/clusterset: 'default'
336 | name: ca-montreal
337 | spec:
338 | hubAcceptsClient: true
339 | ---
340 | apiVersion: agent.open-cluster-management.io/v1
341 | kind: KlusterletAddonConfig
342 | metadata:
343 | name: 'ca-montreal'
344 | namespace: 'ca-montreal'
345 | spec:
346 | clusterName: 'ca-montreal'
347 | clusterNamespace: 'ca-montreal'
348 | clusterLabels:
349 | cloud: ai-hypershift
350 | applicationManager:
351 | enabled: true
352 | policyController:
353 | enabled: true
354 | searchCollector:
355 | enabled: true
356 | certPolicyController:
357 | enabled: true
358 | iamPolicyController:
359 | enabled: true
360 | ~~~
361 |
362 |
363 | # Additional details
364 |
365 | https://github.com/openshift/hypershift/blob/main/docs/content/how-to/agent/create-agent-cluster.md
366 |
367 | https://hypershift-docs.netlify.app/how-to/agent/create-agent-cluster/
368 |
369 | https://github.com/stolostron/hypershift-deployment-controller/blob/main/docs/provision_hypershift_clusters_by_mce.md#provision-a-hypershift-hosted-cluster-on-bare-metal
370 |
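371 | # Verify and access the hosted cluster
372 | 
373 | Once the node pool reports ready, a quick sanity check (a sketch; the kubeconfig secret name is assumed to follow the usual `<cluster>-admin-kubeconfig` convention):
374 | 
375 | ~~~
376 | oc get hostedcluster,nodepool -n ca-montreal
377 | oc extract -n ca-montreal secret/ca-montreal-admin-kubeconfig --keys=kubeconfig --to=- > ca-montreal.kubeconfig
378 | oc --kubeconfig ca-montreal.kubeconfig get nodes
379 | ~~~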
--------------------------------------------------------------------------------
/hypershift/ca-montreal.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 |
4 | oc create ns ca-montreal
5 |
6 | export SSH_PUB_KEY=$(cat $HOME/.ssh/id_rsa.pub)
7 | envsubst <<"EOF" | oc apply -f -
8 | apiVersion: v1
9 | kind: Secret
10 | metadata:
11 | name: agent-demo-ssh-key
12 | namespace: ca-montreal
13 | stringData:
14 | id_rsa.pub: ${SSH_PUB_KEY}
15 | EOF
16 |
17 | DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`
18 | oc create secret generic pull-secret \
19 | -n ca-montreal \
20 | --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
21 | --type=kubernetes.io/dockerconfigjson
22 |
23 | envsubst <<"EOF" | oc apply -f -
24 | apiVersion: agent-install.openshift.io/v1beta1
25 | kind: InfraEnv
26 | metadata:
27 | name: ca-montreal
28 | namespace: ca-montreal
29 | labels:
30 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
31 | spec:
32 | pullSecretRef:
33 | name: pull-secret
34 | sshAuthorizedKey: ${SSH_PUB_KEY}
35 | agentLabelSelector:
36 | matchLabels:
37 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
38 | EOF
39 |
40 | echo "
41 | ---
42 | apiVersion: metal3.io/v1alpha1
43 | kind: BareMetalHost
44 | metadata:
45 | name: ca-montreal-node1
46 | namespace: ca-montreal
47 | labels:
48 | infraenvs.agent-install.openshift.io: "ca-montreal"
49 | annotations:
50 | inspect.metal3.io: disabled
51 | bmac.agent-install.openshift.io/hostname: "ca-montreal-node1"
52 | spec:
53 | online: true
54 | bmc:
55 | address: redfish-virtualmedia+http://192.168.0.190:8000/redfish/v1/Systems/d4e63915-eebf-4948-b1b3-542a11a4286a
56 | credentialsName: ca-montreal-node1-secret
57 | disableCertificateVerification: true
58 | bootMACAddress: 02:04:00:00:01:03
59 | automatedCleaningMode: disabled
60 | ---
61 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
62 | apiVersion: v1
63 | kind: Secret
64 | metadata:
65 | name: ca-montreal-node1-secret
66 | namespace: ca-montreal
67 | data:
68 | password: Ym9iCg==
69 | username: Ym9iCg==
70 | type: Opaque" | oc apply -f -
71 |
72 | echo "
73 | ---
74 | apiVersion: rbac.authorization.k8s.io/v1
75 | kind: Role
76 | metadata:
77 | name: capi-provider-role
78 | namespace: ca-montreal
79 | rules:
80 | - apiGroups:
81 | - agent-install.openshift.io
82 | resources:
83 | - agents
84 | verbs:
85 | - '*'
86 | ---
87 | apiVersion: hypershift.openshift.io/v1alpha1
88 | kind: HostedCluster
89 | metadata:
90 | name: 'ca-montreal'
91 | namespace: 'ca-montreal'
92 | labels:
93 | "cluster.open-cluster-management.io/clusterset": 'default'
94 | spec:
95 | fips: false
96 | release:
97 | image: 'quay.io/openshift-release-dev/ocp-release:4.10.26-x86_64'
98 | dns:
99 | baseDomain: adetalhouet.ca
100 | controllerAvailabilityPolicy: SingleReplica
101 | infraID: ca-montreal
102 | etcd:
103 | managed:
104 | storage:
105 | persistentVolume:
106 | size: 4Gi
107 | type: PersistentVolume
108 | managementType: Managed
109 | infrastructureAvailabilityPolicy: SingleReplica
110 | platform:
111 | agent:
112 | agentNamespace: ca-montreal
113 | type: Agent
114 | networking:
115 | clusterNetwork:
116 | - cidr: 10.132.0.0/14
117 | machineNetwork:
118 | - cidr: 192.168.123.0/24
119 | networkType: OVNKubernetes
120 | serviceNetwork:
121 | - cidr: 172.31.0.0/16
122 | pullSecret:
123 | name: pull-secret
124 | issuerURL: 'https://kubernetes.default.svc'
125 | sshKey:
126 | name: agent-demo-ssh-key
127 | autoscaling: {}
128 | olmCatalogPlacement: management
129 | services:
130 | - service: APIServer
131 | servicePublishingStrategy:
132 | nodePort:
133 | address: api-server.ca-montreal.adetalhouet.ca
134 | type: NodePort
135 | - service: OAuthServer
136 | servicePublishingStrategy:
137 | type: Route
138 | - service: Konnectivity
139 | servicePublishingStrategy:
140 | type: Route
141 | - service: Ignition
142 | servicePublishingStrategy:
143 | type: Route
144 | ---
145 | apiVersion: hypershift.openshift.io/v1alpha1
146 | kind: NodePool
147 | metadata:
148 | name: 'nodepool-ca-montreal-1'
149 | namespace: 'ca-montreal'
150 | spec:
151 | clusterName: 'ca-montreal'
152 | replicas: 1
153 | management:
154 | autoRepair: false
155 | upgradeType: InPlace
156 | platform:
157 | type: Agent
158 | agent:
159 | agentLabelSelector:
160 | matchLabels: {}
161 | release:
162 | image: quay.io/openshift-release-dev/ocp-release:4.10.26-x86_64
163 | ---
164 | apiVersion: cluster.open-cluster-management.io/v1
165 | kind: ManagedCluster
166 | metadata:
167 | labels:
168 | cloud: hybrid
169 | name: ca-montreal
170 | cluster.open-cluster-management.io/clusterset: 'default'
171 | name: ca-montreal
172 | spec:
173 | hubAcceptsClient: true
174 | ---
175 | apiVersion: agent.open-cluster-management.io/v1
176 | kind: KlusterletAddonConfig
177 | metadata:
178 | name: 'ca-montreal'
179 | namespace: 'ca-montreal'
180 | spec:
181 | clusterName: 'ca-montreal'
182 | clusterNamespace: 'ca-montreal'
183 | clusterLabels:
184 | cloud: ai-hypershift
185 | applicationManager:
186 | enabled: true
187 | policyController:
188 | enabled: true
189 | searchCollector:
190 | enabled: true
191 | certPolicyController:
192 | enabled: true
193 | iamPolicyController:
194 | enabled: true" | oc apply -f -
--------------------------------------------------------------------------------
/libvirt/cloud-init/README.md:
--------------------------------------------------------------------------------
1 | ##### Validate user-data syntax
2 | cloud-init devel schema --config-file user-data
3 |
4 | ##### Build cloud-init ISO
5 | mkisofs -o cidata.iso -V cidata -J -r user-data meta-data
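6 | 
7 | ##### Attach the ISO to the VM (example)
8 | A sketch assuming a pre-existing disk image and the default libvirt network - adjust names, sizes, and paths to your environment:
9 | 
10 | virt-install --name sno --memory 32768 --vcpus 8 \
11 |   --disk sno.qcow2 --disk cidata.iso,device=cdrom \
12 |   --import --os-variant generic --network network=default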
--------------------------------------------------------------------------------
/libvirt/cloud-init/meta-data:
--------------------------------------------------------------------------------
1 | instance-id: sno
2 | local-hostname: sno.lab.adetalhouet
--------------------------------------------------------------------------------
/libvirt/cloud-init/user-data:
--------------------------------------------------------------------------------
1 | #cloud-config
2 |
3 | # Hostname management
4 | preserve_hostname: False
5 | hostname: sno.lab.adetalhouet
6 | fqdn: sno.lab.adetalhouet
7 |
8 | ethernets: # NOTE: likely ignored here - see the note at the end of this file
9 | eth0:
10 | addresses:
11 | - 148.251.12.37/32
12 | gateway4: 148.251.12.33
13 |
14 | # Users
15 | users:
16 | - name: adetalhouet
17 | groups: adm,sys
18 | shell: /bin/bash
19 | home: /home/adetalhouet
20 | sudo: ALL=(ALL) NOPASSWD:ALL
21 | lock_passwd: false
22 | ssh-authorized-keys:
23 | - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwyNH/qkYcqkKk5MiNjKHxnoadME6crIJ8aIs3R6TZQ root@lab.adetalhouet
24 |
25 | # Configure where output will go
26 | output:
27 | all: ">> /var/log/cloud-init.log"
28 |
29 | # configure interaction with ssh server
30 | ssh_pwauth: false
31 | disable_root: true
32 |
33 | # Install my public ssh key to the first user-defined user configured
34 | # in cloud.cfg in the template (which is centos for CentOS cloud images)
35 | ssh_authorized_keys:
36 | - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwyNH/qkYcqkKk5MiNjKHxnoadME6crIJ8aIs3R6TZQ root@lab.adetalhouet
37 |
38 | # Restart NetworkManager, then remove cloud-init
39 | runcmd:
40 | - systemctl restart NetworkManager.service
41 | - dnf -y remove cloud-init
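42 | 
43 | # NOTE: cloud-init does not read network settings from #cloud-config user-data;
44 | # with the NoCloud datasource, the "ethernets" block above belongs in a separate
45 | # "network-config" file placed next to user-data and meta-data. A sketch:
46 | #   version: 2
47 | #   ethernets:
48 | #     eth0:
49 | #       addresses: [148.251.12.37/32]
50 | #       gateway4: 148.251.12.33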
--------------------------------------------------------------------------------
/metal-provisioner/00-namespace.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: metal-provisioner
--------------------------------------------------------------------------------
/metal-provisioner/01-bmo.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: ServiceAccount
4 | metadata:
5 | name: metal-provisioner
6 | namespace: metal-provisioner
7 | ---
8 | apiVersion: rbac.authorization.k8s.io/v1
9 | kind: ClusterRole
10 | metadata:
11 | name: baremetalhost-role
12 | namespace: metal-provisioner
13 | rules:
14 | - apiGroups:
15 | - coordination.k8s.io
16 | resources:
17 | - '*'
18 | verbs:
19 | - '*'
20 | - apiGroups:
21 | - metal3.io
22 | resources:
23 | - '*'
24 | verbs:
25 | - '*'
26 | - apiGroups:
27 | - ""
28 | resources:
29 | - '*'
30 | verbs:
31 | - '*'
32 | ---
33 | apiVersion: rbac.authorization.k8s.io/v1
34 | kind: ClusterRoleBinding
35 | metadata:
36 | name: baremetalhost-rolebinding
37 | namespace: metal-provisioner
38 | roleRef:
39 | apiGroup: rbac.authorization.k8s.io
40 | kind: ClusterRole
41 | name: baremetalhost-role
42 | subjects:
43 | - kind: ServiceAccount
44 | name: metal-provisioner
45 | namespace: metal-provisioner
46 | ---
47 | kind: Secret
48 | apiVersion: v1
49 | metadata:
50 | name: ironic-credentials
51 | namespace: metal-provisioner
52 | data:
53 | password: Ym9iCg==
54 | username: Ym9iCg==
55 | type: Opaque
56 | ---
57 | kind: Secret
58 | apiVersion: v1
59 | metadata:
60 | name: ironic-inspector-credentials
61 | namespace: metal-provisioner
62 | data:
63 | password: Ym9iCg==
64 | username: Ym9iCg==
65 | type: Opaque
66 | ---
67 | kind: ConfigMap
68 | apiVersion: v1
69 | metadata:
70 | name: baremetal-operator-ironic
71 | namespace: metal-provisioner
72 | data:
73 | DEPLOY_KERNEL_URL: 'http://ironic.metal-provisioner:6180/images/ironic-python-agent.kernel'
74 | DEPLOY_RAMDISK_URL: 'http://ironic.metal-provisioner:6180/images/ironic-python-agent.initramfs'
75 | DHCP_RANGE: '172.22.0.10,172.22.0.100' # this param doesn't matter in this setup
76 | HTTP_PORT: '6180'
77 | IRONIC_ENDPOINT: 'http://ironic.metal-provisioner:6385/v1/'
78 | IRONIC_FAST_TRACK: 'true'
79 | IRONIC_INSPECTOR_ENDPOINT: 'http://ironic.metal-provisioner:5050/v1/'
80 | PROVISIONING_INTERFACE: eth2 # this param doesn't matter in this setup
81 | ---
82 | kind: Deployment
83 | apiVersion: apps/v1
84 | metadata:
85 | name: baremetal-operator-controller-manager
86 | namespace: metal-provisioner
87 | labels:
88 | control-plane: controller-manager
89 | spec:
90 | replicas: 1
91 | selector:
92 | matchLabels:
93 | control-plane: controller-manager
94 | template:
95 | metadata:
96 | labels:
97 | control-plane: controller-manager
98 | spec:
99 | volumes:
100 | - name: ironic-credentials
101 | secret:
102 | secretName: ironic-credentials
103 | defaultMode: 420
104 | - name: ironic-inspector-credentials
105 | secret:
106 | secretName: ironic-inspector-credentials
107 | defaultMode: 420
108 | containers:
109 | - resources: {}
110 | terminationMessagePath: /dev/termination-log
111 | name: manager
112 | command:
113 | - /baremetal-operator
114 | livenessProbe:
115 | httpGet:
116 | path: /healthz
117 | port: 9440
118 | scheme: HTTP
119 | initialDelaySeconds: 3
120 | timeoutSeconds: 1
121 | periodSeconds: 3
122 | successThreshold: 1
123 | failureThreshold: 3
124 | env:
125 | - name: POD_NAME
126 | valueFrom:
127 | fieldRef:
128 | apiVersion: v1
129 | fieldPath: metadata.name
130 | - name: POD_NAMESPACE
131 | valueFrom:
132 | fieldRef:
133 | apiVersion: v1
134 | fieldPath: metadata.namespace
135 | imagePullPolicy: Always
136 | volumeMounts:
137 | - name: ironic-credentials
138 | readOnly: true
139 | mountPath: /opt/metal3/auth/ironic
140 | - name: ironic-inspector-credentials
141 | readOnly: true
142 | mountPath: /opt/metal3/auth/ironic-inspector
143 | terminationMessagePolicy: File
144 | envFrom:
145 | - configMapRef:
146 | name: baremetal-operator-ironic
147 | image: quay.io/metal3-io/baremetal-operator:capm3-v0.4.3
148 | args:
149 | - '--metrics-addr=127.0.0.1:8085'
150 | - '--enable-leader-election'
151 | - name: kube-rbac-proxy
152 | image: 'gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0'
153 | args:
154 | - '--secure-listen-address=0.0.0.0:8443'
155 | - '--upstream=http://127.0.0.1:8085/'
156 | - '--logtostderr=true'
157 | - '--v=10'
158 | ports:
159 | - name: https
160 | containerPort: 8443
161 | protocol: TCP
162 | resources: {}
163 | terminationMessagePath: /dev/termination-log
164 | terminationMessagePolicy: File
165 | imagePullPolicy: IfNotPresent
166 | restartPolicy: Always
167 | terminationGracePeriodSeconds: 10
168 | dnsPolicy: ClusterFirst
169 | securityContext: {}
170 | schedulerName: default-scheduler
171 | serviceAccount: metal-provisioner
172 | serviceAccountName: metal-provisioner
173 | strategy:
174 | type: RollingUpdate
175 | rollingUpdate:
176 | maxUnavailable: 25%
177 | maxSurge: 25%
178 | revisionHistoryLimit: 10
179 | progressDeadlineSeconds: 600
180 |
--------------------------------------------------------------------------------
/metal-provisioner/02-ironic.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | kind: ClusterRole
3 | apiVersion: rbac.authorization.k8s.io/v1
4 | metadata:
5 | name: ironic-scc
6 | namespace: metal-provisioner
7 | rules:
8 | - apiGroups: ["security.openshift.io"]
9 | resources: ["securitycontextconstraints"]
10 | resourceNames: ["privileged"]
11 | verbs: ["use"]
12 | ---
13 | apiVersion: rbac.authorization.k8s.io/v1
14 | kind: ClusterRoleBinding
15 | metadata:
16 | name: ironic-rolebinding
17 | namespace: metal-provisioner
18 | roleRef:
19 | apiGroup: rbac.authorization.k8s.io
20 | kind: ClusterRole
21 | name: ironic-scc
22 | subjects:
23 | - kind: ServiceAccount
24 | name: metal-provisioner
25 | namespace: metal-provisioner
26 | ---
27 | apiVersion: apps/v1
28 | kind: Deployment
29 | metadata:
30 | name: capm3-ironic
31 | namespace: metal-provisioner
32 | spec:
33 | replicas: 1
34 | strategy:
35 | # We cannot run Ironic with more than one replica at a time. The recreate
36 | # strategy makes sure that the old pod is gone before a new is started.
37 | type: Recreate
38 | selector:
39 | matchLabels:
40 | name: capm3-ironic
41 | template:
42 | metadata:
43 | labels:
44 | name: capm3-ironic
45 | spec:
46 | hostNetwork: true
47 | containers:
48 | - name: ironic-dnsmasq
49 | image: quay.io/metal3-io/ironic
50 | imagePullPolicy: Always
51 | securityContext:
52 | capabilities:
53 | add: ["NET_ADMIN"]
54 | command:
55 | - /bin/rundnsmasq
56 | volumeMounts:
57 | - mountPath: /shared
58 | name: ironic-data-volume
59 | envFrom:
60 | - configMapRef:
61 | name: ironic-bmo-configmap
62 | - name: mariadb
63 | image: quay.io/metal3-io/mariadb
64 | imagePullPolicy: Always
65 | command:
66 | - /bin/runmariadb
67 | volumeMounts:
68 | - mountPath: /shared
69 | name: ironic-data-volume
70 | env:
71 | - name: MARIADB_PASSWORD
72 | valueFrom:
73 | secretKeyRef:
74 | name: mariadb-password
75 | key: password
76 | - name: RESTART_CONTAINER_CERTIFICATE_UPDATED
77 | valueFrom:
78 | configMapKeyRef:
79 | name: ironic-bmo-configmap
80 | key: RESTART_CONTAINER_CERTIFICATE_UPDATED
81 | - name: ironic-api
82 | image: quay.io/metal3-io/ironic
83 | imagePullPolicy: Always
84 | command:
85 | - /bin/bash
86 | - '-c'
87 | - >
88 | sed -i "s/{{ env.IRONIC_URL_HOST }}:{{ env.HTTP_PORT }}/{{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
89 |
90 | sed -i "s/host = {{ env.IRONIC_URL_HOST }}/host = {{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
91 |
92 | /bin/runironic-api
93 | volumeMounts:
94 | - mountPath: /shared
95 | name: ironic-data-volume
96 | envFrom:
97 | - configMapRef:
98 | name: ironic-bmo-configmap
99 | env:
100 | - name: MARIADB_PASSWORD
101 | valueFrom:
102 | secretKeyRef:
103 | name: mariadb-password
104 | key: password
105 | ports:
106 | - containerPort: 6180 # HTTPD service
107 | - containerPort: 6385 # Ironic API
108 | - name: ironic-conductor
109 | image: quay.io/metal3-io/ironic
110 | imagePullPolicy: Always
111 | command:
112 | - /bin/bash
113 | - '-c'
114 | - >
115 | sed -i "s/{{ env.IRONIC_URL_HOST }}:{{ env.HTTP_PORT }}/{{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
116 |
117 | sed -i "s/host = {{ env.IRONIC_URL_HOST }}/host = {{ env.IRONIC_HTTP_URL }}/g" /etc/ironic/ironic.conf.j2
118 |
119 | /bin/runironic-conductor
120 | volumeMounts:
121 | - mountPath: /shared
122 | name: ironic-data-volume
123 | envFrom:
124 | - configMapRef:
125 | name: ironic-bmo-configmap
126 | env:
127 | - name: MARIADB_PASSWORD
128 | valueFrom:
129 | secretKeyRef:
130 | name: mariadb-password
131 | key: password
132 | - name: ironic-inspector
133 | image: quay.io/metal3-io/ironic
134 | imagePullPolicy: Always
135 | command:
136 | - /bin/runironic-inspector
137 | envFrom:
138 | - configMapRef:
139 | name: ironic-bmo-configmap
140 | ports:
141 | - containerPort: 5050
142 | initContainers:
143 | - name: ironic-ipa-downloader
144 | image: quay.io/metal3-io/ironic-ipa-downloader
145 | imagePullPolicy: Always
146 | command:
147 | - /usr/local/bin/get-resource.sh
148 | envFrom:
149 | - configMapRef:
150 | name: ironic-bmo-configmap
151 | volumeMounts:
152 | - mountPath: /shared
153 | name: ironic-data-volume
154 | volumes:
155 | - name: ironic-data-volume
156 | emptyDir: {}
157 | serviceAccount: metal-provisioner
158 | serviceAccountName: metal-provisioner
159 | ---
160 | apiVersion: v1
161 | kind: Service
162 | metadata:
163 | name: ironic
164 | namespace: metal-provisioner
165 | spec:
166 | type: ClusterIP
167 | selector:
168 | name: capm3-ironic
169 | ports:
170 | - name: inspector
171 | port: 5050
172 | protocol: TCP
173 | targetPort: 5050
174 | - name: api
175 | port: 6385
176 | protocol: TCP
177 | targetPort: 6385
178 | - name: httpd
179 | port: 80
180 | protocol: TCP
181 | targetPort: 6180
182 | ---
183 | kind: Route
184 | apiVersion: route.openshift.io/v1
185 | metadata:
186 | name: ironic-http
187 | namespace: metal-provisioner
188 | spec:
189 | host: ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
190 | to:
191 | kind: Service
192 | name: ironic
193 | weight: 100
194 | port:
195 | targetPort: httpd
196 | wildcardPolicy: None
197 | ---
198 | kind: Route
199 | apiVersion: route.openshift.io/v1
200 | metadata:
201 | name: ironic-api
202 | namespace: metal-provisioner
203 | spec:
204 | host: ironic-api-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
205 | to:
206 | kind: Service
207 | name: ironic
208 | weight: 100
209 | port:
210 | targetPort: api
211 | wildcardPolicy: None
212 | ---
213 | kind: Route
214 | apiVersion: route.openshift.io/v1
215 | metadata:
216 | name: ironic-inspector
217 | namespace: metal-provisioner
218 | spec:
219 | host: ironic-inspector-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
220 | to:
221 | kind: Service
222 | name: ironic
223 | weight: 100
224 | port:
225 | targetPort: inspector
226 | wildcardPolicy: None
227 | ---
228 | kind: Secret
229 | apiVersion: v1
230 | metadata:
231 | name: mariadb-password
232 | namespace: metal-provisioner
233 | data:
234 | password: Y2hhbmdlbWU=
235 | type: Opaque
236 | ---
237 | kind: Secret
238 | apiVersion: v1
239 | metadata:
240 | name: ironic-auth-config
241 | namespace: metal-provisioner
242 | data:
243 | auth-config: W2lyb25pY10KYXV0aF90eXBlPWh0dHBfYmFzaWMKdXNlcm5hbWU9Ym9iCnBhc3N3b3JkPWJvYg==
244 | type: Opaque
245 | ---
246 | kind: Secret
247 | apiVersion: v1
248 | metadata:
249 | name: ironic-inspector-auth-config
250 | namespace: metal-provisioner
251 | data:
252 | auth-config: >-
253 | W2luc3BlY3Rvcl0KYXV0aF90eXBlPWh0dHBfYmFzaWMKdXNlcm5hbWU9Ym9iCnBhc3N3b3JkPWJvYg==
254 | type: Opaque
255 | ---
256 | kind: Secret
257 | apiVersion: v1
258 | metadata:
259 | name: ironic-rpc-auth-config
260 | namespace: metal-provisioner
261 | data:
262 | auth-config: >-
263 | W2pzb25fcnBjXQphdXRoX3R5cGU9aHR0cF9iYXNpYwp1c2VybmFtZT1ib2IKcGFzc3dvcmQ9Ym9iCmh0dHBfYmFzaWNfdXNlcm5hbWU9Ym9iCmh0dHBfYmFzaWNfcGFzc3dvcmQ9Ym9i
264 | type: Opaque
265 | ---
266 | kind: ConfigMap
267 | apiVersion: v1
268 | metadata:
269 | name: ironic-bmo-configmap
270 | namespace: metal-provisioner
271 | data:
272 | IRONIC_FAST_TRACK: 'true'
273 | DEPLOY_KERNEL_URL: ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io/images/ironic-python-agent.kernel
274 | RESTART_CONTAINER_CERTIFICATE_UPDATED: 'false'
275 | INSPECTOR_REVERSE_PROXY_SETUP: 'false'
276 | PROVISIONING_INTERFACE: ens5 # this is the interface on the host node - as Ironic is granted hostNetwork privileges, it will look for this interface to get its IP. Make sure to update it as per your environment.
277 | IRONIC_INSPECTOR_URL: ironic-inspector-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
278 | IRONIC_KERNEL_PARAMS: console=ttyS0
279 | IRONIC_API_URL: ironic-api-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
280 | DHCP_RANGE: '172.22.0.10,172.22.0.100' # not needed in our case but required by Ironic
281 | IRONIC_INSPECTOR_VLAN_INTERFACES: all
282 | IRONIC_ENDPOINT: ironic-api-metal-provisioner.apps.hub-adetalhouet.rhtelco.io/v1/
283 | DEPLOY_RAMDISK_URL: ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io/images/ironic-python-agent.initramfs
284 | IRONIC_HTTP_URL: ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
285 | IRONIC_URL_HOST: ironic-http-metal-provisioner.apps.hub-adetalhouet.rhtelco.io
286 | HTTP_PORT: '6180'
287 | IRONIC_INSPECTOR_ENDPOINT: ironic-inspector-metal-provisioner.apps.hub-adetalhouet.rhtelco.io/v1/
288 | ---
289 | kind: ConfigMap
290 | apiVersion: v1
291 | metadata:
292 | name: ironic-htpasswd
293 | namespace: metal-provisioner
294 | data:
295 | HTTP_BASIC_HTPASSWD: 'bob:$2y$05$3.cpdcaJSTH5jbPDA3MjJuxYjmGMEwdv7uHdDCeu7gQnx920i0YOm'
296 | ---
297 | kind: ConfigMap
298 | apiVersion: v1
299 | metadata:
300 | name: ironic-inspector-htpasswd
301 | namespace: metal-provisioner
302 | data:
303 | HTTP_BASIC_HTPASSWD: 'bob:$2y$05$Z6g5zDHDvlflpBCoUFMvJe.9Hdbu0wUpYftkFfOz1020WBVASnY1S'
304 |
--------------------------------------------------------------------------------
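A note on the credentials baked into `02-ironic.yaml`: the `*-auth-config` secrets are base64-encoded INI snippets and the `*-htpasswd` ConfigMaps hold bcrypt entries, all using `bob`/`bob` (the MariaDB password is `changeme`). A minimal sketch of how to inspect and regenerate them before deploying, assuming the Apache `htpasswd` utility is installed:

~~~
# decode one of the auth-config secrets to see what it contains
echo 'W2lyb25pY10KYXV0aF90eXBlPWh0dHBfYmFzaWMKdXNlcm5hbWU9Ym9iCnBhc3N3b3JkPWJvYg==' | base64 -d
# [ironic]
# auth_type=http_basic
# username=bob
# password=bob

# regenerate both values with your own credentials (bob/bob shown here)
printf '[ironic]\nauth_type=http_basic\nusername=bob\npassword=bob' | base64 -w0
htpasswd -nbB bob bob   # bcrypt entry for the HTTP_BASIC_HTPASSWD keys
~~~

Keep the htpasswd entries and the auth-config secrets in sync; they are two views of the same credentials.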
/metal-provisioner/kustomization.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: kustomize.config.k8s.io/v1beta1
3 | kind: Kustomization
4 |
5 | resources:
6 | - 00-namespace.yaml
7 | - 01-bmo.yaml
8 | - 02-ironic.yaml
--------------------------------------------------------------------------------
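This kustomization ties the namespace, bare metal operator, and Ironic manifests together, so once the routes and `PROVISIONING_INTERFACE` are adapted to your environment, deploying the whole metal-provisioner stack from the repository root should be a single command:

~~~
oc apply -k metal-provisioner/
~~~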
/spoke-3nodes-ztp/.gitignore:
--------------------------------------------------------------------------------
1 | *-assisteddeploymentpullsecret.yaml
2 |
--------------------------------------------------------------------------------
/spoke-3nodes-ztp/00-namespace.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: ca-regina
--------------------------------------------------------------------------------
/spoke-3nodes-ztp/01-agentclusterinstall.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: extensions.hive.openshift.io/v1beta1
3 | kind: AgentClusterInstall
4 | metadata:
5 | name: ca-regina
6 | namespace: ca-regina
7 | annotations:
8 | agent-install.openshift.io/install-config-overrides: '{"networking":{"networkType":"OVNKubernetes"}}'
9 | labels:
10 | agentclusterinstalls.extensions.hive.openshift.io/location: Regina
11 | spec:
12 | clusterDeploymentRef:
13 | name: ca-regina
14 | imageSetRef:
15 | name: openshift-v4.10.10
16 | apiVIP: "192.168.123.253"
17 | ingressVIP: "192.168.123.252"
18 | networking:
19 | clusterNetwork:
20 | - cidr: "10.128.0.0/14"
21 | hostPrefix: 23
22 | serviceNetwork:
23 | - "172.30.0.0/16"
24 | # machineNetwork:
25 | # - cidr: "192.168.123.0/24"
26 | provisionRequirements:
27 | controlPlaneAgents: 3
28 | # workerAgents: 2
29 | sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXBlG+5FRGFwLAxhk88Nce10VwN7W0N9+aBKzoXWx/Y3h5eJmwdy3apo+kBxEbf+GW01u9EFSV3MZR+uaufvT0t0fF1zyFV2pB+HNVBPoOKs7ZGaqNzWi4uR0REBH+rIeoY7eR528kSbxHZNWjzxB6jc/PCmF7gM/MWnNFieZKLBwoLpC3rOEorF6Q5GRj0c7EOYn0sdK149i1BUhJFWEJfxXSS5pkArIa4TCW2hgO06TN41UpCPa17KDG+rxrrgs0i9J//RTke/w4PnddlY0ETASZXgNbDOJwldTGlmQTjzrjrBMgzf950xLnHiB2qX7SgZL2xrC4pO3i2RZezeIPujO3RAQjP+LAkUgG41Ui0d8v2dkZ53/OSfTXx3GB2eIUTGLVK2iK3uKzKys178dwuSvFON60YPi/n/TX8va+XaJzc4JImFNFQW4wF+RlAc3v1hNGOKQhGODtaDZ7oU0BDd4ddXe8ownN7W0LSWufxyJ9x8jH+DiUAI1jDHvhtH0= root@adetalhouet-t640-1"
--------------------------------------------------------------------------------
/spoke-3nodes-ztp/02-clusterdeployment.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: hive.openshift.io/v1
3 | kind: ClusterDeployment
4 | metadata:
5 | name: ca-regina
6 | namespace: ca-regina
7 | spec:
8 | clusterName: ca-regina
9 | baseDomain: adetalhouet.ca
10 | clusterInstallRef:
11 | group: extensions.hive.openshift.io
12 | kind: AgentClusterInstall
13 | name: ca-regina
14 | version: v1beta1
15 | platform:
16 | agentBareMetal:
17 | agentSelector:
18 | matchLabels:
19 | agentclusterinstalls.extensions.hive.openshift.io/location: Regina
20 | pullSecretRef:
21 | name: assisted-deployment-pull-secret
--------------------------------------------------------------------------------
/spoke-3nodes-ztp/03-nmstateconfig.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: agent-install.openshift.io/v1beta1
2 | kind: NMStateConfig
3 | metadata:
4 | name: ca-regina-nmstate-node1
5 | namespace: ca-regina
6 | labels:
7 | cluster-name: ca-regina-nmstate
8 | spec:
9 | config:
10 | interfaces:
11 | - name: enp1s0
12 | type: ethernet
13 | state: up
14 | ipv4:
15 | dhcp: true
16 | enabled: true
17 | ipv6:
18 | enabled: false
19 | interfaces:
20 | - name: "enp1s0"
21 | macAddress: "02:04:00:00:01:01"
22 | ---
23 | apiVersion: agent-install.openshift.io/v1beta1
24 | kind: NMStateConfig
25 | metadata:
26 | name: ca-regina-nmstate-node2
27 | namespace: ca-regina
28 | labels:
29 | cluster-name: ca-regina-nmstate
30 | spec:
31 | config:
32 | interfaces:
33 | - name: enp1s0
34 | type: ethernet
35 | state: up
36 | ipv4:
37 | dhcp: true
38 | enabled: true
39 | ipv6:
40 | enabled: false
41 | interfaces:
42 | - name: "enp1s0"
43 | macAddress: "02:04:00:00:01:02"
44 | ---
45 | apiVersion: agent-install.openshift.io/v1beta1
46 | kind: NMStateConfig
47 | metadata:
48 | name: ca-regina-nmstate-node3
49 | namespace: ca-regina
50 | labels:
51 | cluster-name: ca-regina-nmstate
52 | spec:
53 | config:
54 | interfaces:
55 | - name: enp1s0
56 | type: ethernet
57 | state: up
58 | ipv4:
59 | dhcp: true
60 | enabled: true
61 | ipv6:
62 | enabled: false
63 | # routes:
64 | # config:
65 | # - destination: 0.0.0.0/0
66 | # next-hop-address: 192.168.123.1
67 | # next-hop-interface: enp1s0
68 | interfaces:
69 | - name: "enp1s0"
70 | macAddress: "02:04:00:00:01:03"
--------------------------------------------------------------------------------
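Each NMStateConfig above carries two `interfaces` sections that are easy to confuse: the one under `spec.config` is the desired [nmstate](https://nmstate.io/) state to apply on the host, while the top-level `spec.interfaces` list maps a logical interface name to a MAC address so the configuration can be matched to the right NIC at discovery time. A stripped-down skeleton for illustration:

~~~
spec:
  config:
    interfaces:            # nmstate: desired state of the interface
      - name: enp1s0
        state: up
  interfaces:              # name-to-MAC mapping used to identify the NIC
    - name: "enp1s0"
      macAddress: "02:04:00:00:01:01"
~~~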
/spoke-3nodes-ztp/04-spokeinfraenv.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: agent-install.openshift.io/v1beta1
3 | kind: InfraEnv
4 | metadata:
5 | labels:
6 | agentclusterinstalls.extensions.hive.openshift.io/location: Regina
7 | networkType: static
8 | name: ca-regina
9 | namespace: ca-regina
10 | spec:
11 | clusterRef:
12 | name: ca-regina
13 | namespace: ca-regina
14 | additionalNTPSources:
15 | - 2.rhel.pool.ntp.org
16 | sshAuthorizedKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXBlG+5FRGFwLAxhk88Nce10VwN7W0N9+aBKzoXWx/Y3h5eJmwdy3apo+kBxEbf+GW01u9EFSV3MZR+uaufvT0t0fF1zyFV2pB+HNVBPoOKs7ZGaqNzWi4uR0REBH+rIeoY7eR528kSbxHZNWjzxB6jc/PCmF7gM/MWnNFieZKLBwoLpC3rOEorF6Q5GRj0c7EOYn0sdK149i1BUhJFWEJfxXSS5pkArIa4TCW2hgO06TN41UpCPa17KDG+rxrrgs0i9J//RTke/w4PnddlY0ETASZXgNbDOJwldTGlmQTjzrjrBMgzf950xLnHiB2qX7SgZL2xrC4pO3i2RZezeIPujO3RAQjP+LAkUgG41Ui0d8v2dkZ53/OSfTXx3GB2eIUTGLVK2iK3uKzKys178dwuSvFON60YPi/n/TX8va+XaJzc4JImFNFQW4wF+RlAc3v1hNGOKQhGODtaDZ7oU0BDd4ddXe8ownN7W0LSWufxyJ9x8jH+DiUAI1jDHvhtH0= root@adetalhouet-t640-1"
17 | agentLabelSelector:
18 | matchLabels:
19 | agentclusterinstalls.extensions.hive.openshift.io/location: Regina
20 | pullSecretRef:
21 | name: assisted-deployment-pull-secret
22 | nmStateConfigLabelSelector:
23 | matchLabels:
24 | cluster-name: ca-regina-nmstate
--------------------------------------------------------------------------------
/spoke-3nodes-ztp/05-baremetalhost.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: metal3.io/v1alpha1
3 | kind: BareMetalHost
4 | metadata:
5 | name: ca-regina-node1
6 | namespace: ca-regina
7 | labels:
8 | infraenvs.agent-install.openshift.io: "ca-regina"
9 | annotations:
10 | inspect.metal3.io: disabled
11 | bmac.agent-install.openshift.io/hostname: "ca-regina-node1"
12 | spec:
13 | online: true
14 | # userData:
15 | # name: bmh-userdata
16 | # namespace: sno-ztp
17 | bmc:
18 | address: redfish-virtualmedia+http://10.0.0.249:8000/redfish/v1/Systems/5bfd0979-d4e8-4f83-a70c-2c6661eccc6c
19 | credentialsName: ca-regina-node1-secret
20 | disableCertificateVerification: true
21 | bootMACAddress: 02:04:00:00:01:01
22 | automatedCleaningMode: disabled
23 | hardwareProfile: libvirt
24 | ---
25 | apiVersion: metal3.io/v1alpha1
26 | kind: BareMetalHost
27 | metadata:
28 | name: ca-regina-node2
29 | namespace: ca-regina
30 | labels:
31 | infraenvs.agent-install.openshift.io: "ca-regina"
32 | annotations:
33 | inspect.metal3.io: disabled
34 | bmac.agent-install.openshift.io/hostname: "ca-regina-node2"
35 | spec:
36 | online: true
37 | # userData:
38 | # name: bmh-userdata
39 | # namespace: sno-ztp
40 | bmc:
41 | address: redfish-virtualmedia+http://10.0.0.249:8000/redfish/v1/Systems/c4f4b45f-00f1-4cf6-aac1-fcc81d96d84e
42 | credentialsName: ca-regina-node2-secret
43 | disableCertificateVerification: true
44 | bootMACAddress: 02:04:00:00:01:02
45 | automatedCleaningMode: disabled
46 | hardwareProfile: libvirt
47 | ---
48 | apiVersion: metal3.io/v1alpha1
49 | kind: BareMetalHost
50 | metadata:
51 | name: ca-regina-node3
52 | namespace: ca-regina
53 | labels:
54 | infraenvs.agent-install.openshift.io: "ca-regina"
55 | annotations:
56 | inspect.metal3.io: disabled
57 | bmac.agent-install.openshift.io/hostname: "ca-regina-node3"
58 | spec:
59 | online: true
60 | # userData:
61 | # name: bmh-userdata
62 | # namespace: sno-ztp
63 | bmc:
64 | address: redfish-virtualmedia+http://10.0.0.249:8000/redfish/v1/Systems/e0ad77da-f10c-4b3d-8283-fe0f0f497059
65 | credentialsName: ca-regina-node3-secret
66 | disableCertificateVerification: true
67 | bootMACAddress: 02:04:00:00:01:03
68 | automatedCleaningMode: disabled
69 | hardwareProfile: libvirt
70 | ---
71 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
72 | apiVersion: v1
73 | kind: Secret
74 | metadata:
75 | name: ca-regina-node1-secret
76 | namespace: ca-regina
77 | data:
78 | password: Ym9iCg==
79 | username: Ym9iCg==
80 | type: Opaque
81 | ---
82 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
83 | apiVersion: v1
84 | kind: Secret
85 | metadata:
86 | name: ca-regina-node2-secret
87 | namespace: ca-regina
88 | data:
89 | password: Ym9iCg==
90 | username: Ym9iCg==
91 | type: Opaque
92 | ---
93 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
94 | apiVersion: v1
95 | kind: Secret
96 | metadata:
97 | name: ca-regina-node3-secret
98 | namespace: ca-regina
99 | data:
100 | password: Ym9iCg==
101 | username: Ym9iCg==
102 | type: Opaque
--------------------------------------------------------------------------------
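The `Ym9iCg==` values in the dummy secrets decode to `bob` plus a trailing newline, which is what `echo` produces without `-n`; since the comments above note the secrets are required but unused, the exact content is irrelevant. For reference:

~~~
echo bob | base64      # Ym9iCg== (newline included)
echo -n bob | base64   # Ym9i
~~~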
/spoke-3nodes-ztp/05-userdata.yaml:
--------------------------------------------------------------------------------
1 | kind: Secret
2 | apiVersion: v1
3 | metadata:
4 | name: bmh-userdata
5 | namespace: sno-ztp
6 | data:
7 | userData: >-
8 | I2Nsb3VkLWNvbmZpZwoKIyBIb3N0bmFtZSBtYW5hZ2VtZW50CnByZXNlcnZlX2hvc3RuYW1lOiBGYWxzZQpob3N0bmFtZTogc25vLmxhYi5hZGV0YWxob3VldApmcWRuOiBzbm8ubGFiLmFkZXRhbGhvdWV0CgpldGhlcm5ldHM6CiAgZXRoMDoKICAgIGFkZHJlc3NlczoKICAgICAgLSAxNDguMjUxLjEyLjM3LzMyCiAgICBnYXRld2F5NDogMTQ4LjI1MS4xMi4zMwoKIyBVc2Vycwp1c2VyczoKICAgIC0gbmFtZTogYWRldGFsaG91ZXQKICAgICAgZ3JvdXBzOiBhZG0sc3lzCiAgICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgICAgaG9tZTogL2hvbWUvYWRldGFsaG91ZXQKICAgICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgICBsb2NrX3Bhc3N3ZDogZmFsc2UKICAgICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgICAtIHNzaC1lZDI1NTE5IEFBQUFDM056YUMxbFpESTFOVEU1QUFBQUlQd3lOSC9xa1ljcWtLazVNaU5qS0h4bm9hZE1FNmNySUo4YUlzM1I2VFpRIHJvb3RAbGFiLmFkZXRhbGhvdWV0CgojIENvbmZpZ3VyZSB3aGVyZSBvdXRwdXQgd2lsbCBnbwpvdXRwdXQ6CiAgYWxsOiAiPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC5sb2ciCgojIGNvbmZpZ3VyZSBpbnRlcmFjdGlvbiB3aXRoIHNzaCBzZXJ2ZXIKc3NoX3B3YXV0aDogZmFsc2UKZGlzYWJsZV9yb290OiB0cnVlCgojIEluc3RhbGwgbXkgcHVibGljIHNzaCBrZXkgdG8gdGhlIGZpcnN0IHVzZXItZGVmaW5lZCB1c2VyIGNvbmZpZ3VyZWQKIyBpbiBjbG91ZC5jZmcgaW4gdGhlIHRlbXBsYXRlICh3aGljaCBpcyBjZW50b3MgZm9yIENlbnRPUyBjbG91ZCBpbWFnZXMpCnNzaF9hdXRob3JpemVkX2tleXM6CiAgLSBzc2gtZWQyNTUxOSBBQUFBQzNOemFDMWxaREkxTlRFNUFBQUFJUHd5TkgvcWtZY3FrS2s1TWlOaktIeG5vYWRNRTZjcklKOGFJczNSNlRaUSByb290QGxhYi5hZGV0YWxob3VldAoKIyBSZW1vdmUgY2xvdWQtaW5pdApydW5jbWQ6CiAgLSBzeXN0ZW1jdGwgc3RvcCBOZXR3b3JrTWFuYWdlci5zZXJ2aWNlICYmIHN5c3RlbWN0bCBzdGFydCBOZXR3b3JrTWFuYWdlci5zZXJ2aWNlCiAgLSBkbmYgLXkgcmVtb3ZlIGNsb3VkLWluaXQK
9 | type: Opaque
10 |
--------------------------------------------------------------------------------
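The `userData` blob is a base64-encoded cloud-config (hostname, a user with passwordless sudo and SSH keys, and a `runcmd` that removes cloud-init). To inspect it once the secret is applied, something like:

~~~
oc get secret bmh-userdata -n sno-ztp -o jsonpath='{.data.userData}' | base64 -d
~~~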
/spoke-3nodes-ztp/06-assisteddeploymentpullsecret-EXAMPLE.yaml:
--------------------------------------------------------------------------------
1 | # ---
2 | # apiVersion: v1
3 | # kind: Secret
4 | # metadata:
5 | # name: assisted-deployment-pull-secret
6 | # namespace: ca-regina
7 | # stringData:
8 | # .dockerconfigjson: 'YOUR_PULL_SECRET'
--------------------------------------------------------------------------------
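The kustomization below references `06-assisteddeploymentpullsecret.yaml`, which is gitignored. Create it locally from this example, for instance:

~~~
cp 06-assisteddeploymentpullsecret-EXAMPLE.yaml 06-assisteddeploymentpullsecret.yaml
# uncomment the manifest and replace YOUR_PULL_SECRET with the pull secret
# downloaded from console.redhat.com
~~~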
/spoke-3nodes-ztp/07-kusterlet.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: cluster.open-cluster-management.io/v1
3 | kind: ManagedCluster
4 | metadata:
5 | labels:
6 | cloud: hybrid
7 | name: ca-regina
8 | name: ca-regina
9 | spec:
10 | hubAcceptsClient: true
11 | ---
12 | apiVersion: agent.open-cluster-management.io/v1
13 | kind: KlusterletAddonConfig
14 | metadata:
15 | name: ca-regina
16 | namespace: ca-regina
17 | spec:
18 | clusterName: ca-regina
19 | clusterNamespace: ca-regina
20 | clusterLabels:
21 | cloud: hybrid
22 | applicationManager:
23 | enabled: true
24 | policyController:
25 | enabled: true
26 | searchCollector:
27 | enabled: true
28 | certPolicyController:
29 | enabled: true
30 | observabilityController:
31 | enabled: true
32 | iamPolicyController:
33 | enabled: true
--------------------------------------------------------------------------------
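Once the spoke installation completes, the ManagedCluster and KlusterletAddonConfig above let the hub import the cluster and enable the add-ons automatically. Verifying the import from the hub should look something like:

~~~
oc get managedcluster ca-regina
~~~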
/spoke-3nodes-ztp/kustomization.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: kustomize.config.k8s.io/v1beta1
3 | kind: Kustomization
4 |
5 | resources:
6 | - 00-namespace.yaml
7 | - 01-agentclusterinstall.yaml
8 | - 02-clusterdeployment.yaml
9 | - 03-nmstateconfig.yaml
10 | - 04-spokeinfraenv.yaml
11 | - 05-baremetalhost.yaml
12 | # - 05-userdata.yaml
13 | - 06-assisteddeploymentpullsecret.yaml
14 | - 07-kusterlet.yaml
--------------------------------------------------------------------------------
/spoke-manual/00-agentclusterinstall.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: extensions.hive.openshift.io/v1beta1
3 | kind: AgentClusterInstall
4 | metadata:
5 | name: lab-cluster-aci
6 | namespace: open-cluster-management
7 | spec:
8 | clusterDeploymentRef:
9 | name: lab-cluster
10 | imageSetRef:
11 | name: openshift-v4.8.10
12 | networking:
13 | clusterNetwork:
14 | - cidr: "10.128.0.0/14"
15 | hostPrefix: 23
16 | serviceNetwork:
17 | - "172.30.0.0/16"
18 | machineNetwork:
19 | - cidr: "192.168.123.0/24"
20 | provisionRequirements:
21 | controlPlaneAgents: 1
22 | sshPublicKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwyNH/qkYcqkKk5MiNjKHxnoadME6crIJ8aIs3R6TZQ root@lab.adetalhouet"
--------------------------------------------------------------------------------
/spoke-manual/01-clusterdeployment.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: hive.openshift.io/v1
3 | kind: ClusterDeployment
4 | metadata:
5 | name: lab-cluster
6 | namespace: open-cluster-management
7 | spec:
8 | baseDomain: rhtelco.io
9 | clusterName: lab-spoke-adetalhouet
10 | controlPlaneConfig:
11 | servingCertificates: {}
12 | installed: false
13 | clusterInstallRef:
14 | group: extensions.hive.openshift.io
15 | kind: AgentClusterInstall
16 | name: lab-cluster-aci
17 | version: v1beta1
18 | platform:
19 | agentBareMetal:
20 | agentSelector:
21 | matchLabels:
22 | deploy-mode: "manual"
23 | pullSecretRef:
24 | name: assisted-deployment-pull-secret
--------------------------------------------------------------------------------
/spoke-manual/02-spokeinfraenv.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: agent-install.openshift.io/v1beta1
3 | kind: InfraEnv
4 | metadata:
5 | name: lab-env
6 | namespace: open-cluster-management
7 | spec:
8 | clusterRef:
9 | name: lab-cluster
10 | namespace: open-cluster-management
11 | sshAuthorizedKey: "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwyNH/qkYcqkKk5MiNjKHxnoadME6crIJ8aIs3R6TZQ root@lab.adetalhouet"
12 | agentLabelSelector:
13 | matchLabels:
14 | deploy-mode: "manual"
15 | pullSecretRef:
16 | name: assisted-deployment-pull-secret
17 | ignitionConfigOverride: '{"ignition": {"version": "3.1.0"}, "storage": {"files": [{"path": "/etc/videotron-demo", "contents": {"source": "data:text/plain;base64,aGVscGltdHJhcHBlZGluYXN3YWdnZXJzcGVj"}}]}}'
18 | nmStateConfigLabelSelector:
19 | matchLabels:
20 | cluster-name: lab-spoke-adetalhouet
--------------------------------------------------------------------------------
/spoke-manual/README.md:
--------------------------------------------------------------------------------
1 | This folder contains what is needed to configure the spoke cluster on KVM.
2 |
3 | __Manual Spoke cluster deployment__
4 |
5 | 1. Get the ISO URL from the InfraEnv CR
6 | ~~~
7 | oc get infraenv lab-env -n open-cluster-management -o jsonpath={.status.isoDownloadURL}
8 | ~~~
9 | 2. Download the ISO and host it on the server hosting the KVM machine
10 | 3. Add the ISO to the KVM definition of the SNO VM.
11 | 4. Boot the node and wait for it to self-register with the Assisted Service.
12 | 5. Validate from the AgentClusterInstall CR's `.status.conditions` that all the requirements are met
13 | ~~~
14 | oc describe AgentClusterInstall lab-cluster-aci -n open-cluster-management
15 | ~~~
16 | You should read somewhere in the status _The installation is pending on the approval of 1 agents_. If that is the case, go to step #6.
17 | 6. Edit the created agent by approving it
18 | ~~~
19 | # edit "lab-env" to match the name of your InfraEnv CR
20 | AGENT=`oc get Agent -l infraenvs.agent-install.openshift.io=lab-env -n open-cluster-management -o name`
21 | oc patch $AGENT -n open-cluster-management --type='json' -p='[{"op" : "replace" ,"path": "/spec/approved" ,"value": true}]'
22 | ~~~
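You can also follow the installation progress from the hub through the `debugInfo` fields shown in the status dump below, for example:
~~~
oc get agentclusterinstall lab-cluster-aci -n open-cluster-management \
    -o jsonpath='{.status.debugInfo.state}{" - "}{.status.debugInfo.stateInfo}{"\n"}'
~~~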
23 |
24 |
25 | Once the deployment is done, you can find the `kubeadmin` password through the secrets created on the hub cluster and referenced in the AgentClusterInstall CR.
26 |
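Based on the `Cluster Metadata` section of the status dump below, retrieving the spoke credentials from the hub should look like this (the `spoke-kubeconfig` filename is just an example):

~~~
oc get secret lab-cluster-admin-password -n open-cluster-management \
    -o jsonpath='{.data.password}' | base64 -d
oc get secret lab-cluster-admin-kubeconfig -n open-cluster-management \
    -o jsonpath='{.data.kubeconfig}' | base64 -d > spoke-kubeconfig
~~~
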
27 | ~~~
28 | oc describe AgentClusterInstall lab-cluster-aci -n open-cluster-management
29 | Name: lab-cluster-aci
30 | Namespace: open-cluster-management
31 | Labels:
32 | Annotations:
33 | API Version: extensions.hive.openshift.io/v1beta1
34 | Kind: AgentClusterInstall
35 | Metadata:
36 | Creation Timestamp: 2021-07-20T23:35:29Z
37 | Finalizers:
38 | agentclusterinstall.agent-install.openshift.io/ai-deprovision
39 | Generation: 3
40 | Owner References:
41 | API Version: hive.openshift.io/v1
42 | Kind: ClusterDeployment
43 | Name: lab-cluster
44 | UID: 85777df5-201f-4a4d-aeaf-c2313448aaec
45 | Resource Version: 4179210
46 | UID: 080eb67e-17bd-4960-abc0-e2035a586ece
47 | Spec:
48 | Cluster Deployment Ref:
49 | Name: lab-cluster
50 | Cluster Metadata:
51 | Admin Kubeconfig Secret Ref:
52 | Name: lab-cluster-admin-kubeconfig
53 | Admin Password Secret Ref:
54 | Name: lab-cluster-admin-password
55 | Cluster ID: 575d4038-25cd-41c7-8744-f8aba3b19d80
56 | Infra ID: 24b2e5a7-6443-47ad-bbd3-61edf1e335f5
57 | Image Set Ref:
58 | Name: openshift-v4.8.0
59 | Networking:
60 | Cluster Network:
61 | Cidr: 10.128.0.0/14
62 | Host Prefix: 23
63 | Machine Network:
64 | Cidr: 192.168.123.0/24
65 | Service Network:
66 | 172.30.0.0/16
67 | Provision Requirements:
68 | Control Plane Agents: 1
69 | Ssh Public Key: ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPwyNH/qkYcqkKk5MiNjKHxnoadME6crIJ8aIs3R6TZQ root@lab.adetalhouet
70 | Status:
71 | Conditions:
72 | Last Probe Time: 2021-07-20T23:35:37Z
73 | Last Transition Time: 2021-07-20T23:35:37Z
74 | Message: SyncOK
75 | Reason: SyncOK
76 | Status: True
77 | Type: SpecSynced
78 | Last Probe Time: 2021-07-20T23:37:56Z
79 | Last Transition Time: 2021-07-20T23:37:56Z
80 | Message: The cluster's validations are passing
81 | Reason: ValidationsPassing
82 | Status: True
83 | Type: Validated
84 | Last Probe Time: 2021-07-21T00:15:06Z
85 | Last Transition Time: 2021-07-21T00:15:06Z
86 | Message: The cluster installation stopped
87 | Reason: ClusterInstallationStopped
88 | Status: True
89 | Type: RequirementsMet
90 | Last Probe Time: 2021-07-21T00:15:06Z
91 | Last Transition Time: 2021-07-21T00:15:06Z
92 | Message: The installation has completed: Cluster is installed
93 | Reason: InstallationCompleted
94 | Status: True
95 | Type: Completed
96 | Last Probe Time: 2021-07-20T23:35:37Z
97 | Last Transition Time: 2021-07-20T23:35:37Z
98 | Message: The installation has not failed
99 | Reason: InstallationNotFailed
100 | Status: False
101 | Type: Failed
102 | Last Probe Time: 2021-07-21T00:15:06Z
103 | Last Transition Time: 2021-07-21T00:15:06Z
104 | Message: The installation has stopped because it completed successfully
105 | Reason: InstallationCompleted
106 | Status: True
107 | Type: Stopped
108 | Connectivity Majority Groups: {"192.168.123.0/24":[]}
109 | Debug Info:
110 | Events URL: https://assisted-service-open-cluster-management.apps.hub-adetalhouet.rhtelco.io/api/assisted-install/v1/clusters/24b2e5a7-6443-47ad-bbd3-61edf1e335f5/events?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbHVzdGVyX2lkIjoiMjRiMmU1YTctNjQ0My00N2FkLWJiZDMtNjFlZGYxZTMzNWY1In0.k5AYDPBtWTI1JbZESnATMxh6vqyLjeq7M7D5iglRzmnwArF9y_a4RQZFUzV9zctPDgV69fp4x8Hau_VQJoJmDg
111 | Logs URL: https://assisted-service-open-cluster-management.apps.hub-adetalhouet.rhtelco.io/api/assisted-install/v1/clusters/24b2e5a7-6443-47ad-bbd3-61edf1e335f5/logs?api_key=eyJhbGciOiJFUzI1NiIsInR5cCI6IkpXVCJ9.eyJjbHVzdGVyX2lkIjoiMjRiMmU1YTctNjQ0My00N2FkLWJiZDMtNjFlZGYxZTMzNWY1In0.NWq1oVTO2jaWvtyS3WSvUUSW_oRkx4_8YQ3cNtspYDhJvAnSuHFYakImx6OS9vk7zOJnKv8uPM2PSzg0096UMQ
112 | State: installed
113 | State Info: Cluster is installed
114 | Events:
115 | ~~~
--------------------------------------------------------------------------------
/spoke-manual/kustomization.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: kustomize.config.k8s.io/v1beta1
3 | kind: Kustomization
4 |
5 | resources:
6 | - 00-agentclusterinstall.yaml
7 | - 01-clusterdeployment.yaml
8 | - 02-spokeinfraenv.yaml
--------------------------------------------------------------------------------
/spoke-manual/libvirt/net.xml:
--------------------------------------------------------------------------------
1 | <!-- libvirt network definition for the SNO lab; the XML markup was lost
2 |      when this file was captured. Recoverable details: network name "sno",
3 |      with a DNS host entry for api.sno.lab.adetalhouet. -->
--------------------------------------------------------------------------------
/spoke-manual/libvirt/vm.xml:
--------------------------------------------------------------------------------
1 | <!-- libvirt domain definition for the SNO node; the XML markup was lost
2 |      when this file was captured. Recoverable details: domain name "node1",
3 |      16777216 KiB (16 GiB) of memory, 8 vCPUs, an "hvm" OS type, the
4 |      /usr/libexec/qemu-kvm emulator, and an RNG device backed by
5 |      /dev/urandom. -->
--------------------------------------------------------------------------------
/spoke-sno-ztp/.gitignore:
--------------------------------------------------------------------------------
1 | *-assisteddeploymentpullsecret.yaml
2 |
--------------------------------------------------------------------------------
/spoke-sno-ztp/00-namespace.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: ca-montreal
--------------------------------------------------------------------------------
/spoke-sno-ztp/01-agentclusterinstall.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: extensions.hive.openshift.io/v1beta1
3 | kind: AgentClusterInstall
4 | metadata:
5 | name: ca-montreal
6 | namespace: ca-montreal
7 | labels:
8 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
9 | annotations:
10 | agent-install.openshift.io/install-config-overrides: '{"networking":{"networkType":"OVNKubernetes"}}'
11 | spec:
12 | clusterDeploymentRef:
13 | name: ca-montreal
14 | imageSetRef:
15 | name: openshift-v4.11.9
16 | # apiVIP: ""
17 | # ingressVIP: ""
18 | networking:
19 | clusterNetwork:
20 | - cidr: "10.128.0.0/14"
21 | hostPrefix: 23
22 | serviceNetwork:
23 | - "172.30.0.0/16"
24 | machineNetwork:
25 | - cidr: "192.168.123.0/24"
26 | provisionRequirements:
27 | controlPlaneAgents: 1
28 | # workerAgents: 2
29 | sshPublicKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDFS0S+jf5OW7CDuwuJO46IpeunNc19oyXlRQwR6tBx67EPXAt3LxB/BMbyr8+MLMIErzaIUSvG70yk34cB4jbXrs8cbwSdxGPro3ZZqu9qT8+ILhEXtok6uUBG8OKlhIqrAm6Iq3hH1Kbgwj/72B9eaKIpNHzvrZSM/UNAYZzNvENuBGeWuO1kfxnhWmzp+eh+8vTPcYdLzJKv+BOQBxz6T8SI5By0TfvAvVS2xMmhMRMs1TBDLUBgzZgd06X0ghSaOimz4aVbmqI4WwClIy8ZiXhL/j1IkSF97qNo26yb/yYnyk+BqqrsOQIEQQcfzY+skpHQ1JiPjPVYHsujhgctFgwCR0/KKw2QcqOK67est5gDW3vaf/zIDhRnPdT2IhJQTQNEepRjKfHF2EgGIMSU4TosJ5ygx+q0oZ5ITcFHSiIK3aoOt2QXZPY+Dtork5zYbE2M3PLrgRrT1VW1eTH6v5GYjUDq95mwcKYBirSvd3QuUbrGjFQuxfCZlceUui0= adetalhouet@joatmon.localdomain"
--------------------------------------------------------------------------------
/spoke-sno-ztp/02-clusterdeployment.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: hive.openshift.io/v1
3 | kind: ClusterDeployment
4 | metadata:
5 | name: ca-montreal
6 | namespace: ca-montreal
7 | spec:
8 | clusterName: ca-montreal
9 | baseDomain: adetalhouet.ca
10 | clusterInstallRef:
11 | group: extensions.hive.openshift.io
12 | kind: AgentClusterInstall
13 | name: ca-montreal
14 | version: v1beta1
15 | platform:
16 | agentBareMetal:
17 | agentSelector:
18 | matchLabels:
19 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
20 | pullSecretRef:
21 | name: assisted-deployment-pull-secret
--------------------------------------------------------------------------------
/spoke-sno-ztp/03-nmstateconfig.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: agent-install.openshift.io/v1beta1
2 | kind: NMStateConfig
3 | metadata:
4 | name: lab-spoke-adetalhouet
5 | namespace: sno-ztp
6 | labels:
7 | cluster-name: lab-spoke-adetalhouet
8 | spec:
9 | config:
10 | interfaces:
11 | - name: eth0
12 | type: ethernet
13 | state: up
14 | ipv4:
15 | address:
16 | - ip: 148.251.12.37
17 | prefix-length: 32
18 | dhcp: false
19 | enabled: true
20 | ipv6:
21 | enabled: false
22 | routes:
23 | config:
24 | - destination: 0.0.0.0/0
25 | next-hop-address: 148.251.12.33
26 | next-hop-interface: eth0
27 | interfaces:
28 | - name: "eth0"
29 | macAddress: "00:50:56:01:15:94"
--------------------------------------------------------------------------------
/spoke-sno-ztp/04-spokeinfraenv.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: agent-install.openshift.io/v1beta1
3 | kind: InfraEnv
4 | metadata:
5 | labels:
6 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
7 | networkType: dhcp
8 | name: ca-montreal
9 | namespace: ca-montreal
10 | spec:
11 | clusterRef:
12 | name: ca-montreal
13 | namespace: ca-montreal
14 | additionalNTPSources:
15 | - 2.rhel.pool.ntp.org
16 | sshAuthorizedKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCXBlG+5FRGFwLAxhk88Nce10VwN7W0N9+aBKzoXWx/Y3h5eJmwdy3apo+kBxEbf+GW01u9EFSV3MZR+uaufvT0t0fF1zyFV2pB+HNVBPoOKs7ZGaqNzWi4uR0REBH+rIeoY7eR528kSbxHZNWjzxB6jc/PCmF7gM/MWnNFieZKLBwoLpC3rOEorF6Q5GRj0c7EOYn0sdK149i1BUhJFWEJfxXSS5pkArIa4TCW2hgO06TN41UpCPa17KDG+rxrrgs0i9J//RTke/w4PnddlY0ETASZXgNbDOJwldTGlmQTjzrjrBMgzf950xLnHiB2qX7SgZL2xrC4pO3i2RZezeIPujO3RAQjP+LAkUgG41Ui0d8v2dkZ53/OSfTXx3GB2eIUTGLVK2iK3uKzKys178dwuSvFON60YPi/n/TX8va+XaJzc4JImFNFQW4wF+RlAc3v1hNGOKQhGODtaDZ7oU0BDd4ddXe8ownN7W0LSWufxyJ9x8jH+DiUAI1jDHvhtH0= root@adetalhouet-t640-1"
17 | agentLabelSelector:
18 | matchLabels:
19 | agentclusterinstalls.extensions.hive.openshift.io/location: Montreal
20 | pullSecretRef:
21 | name: assisted-deployment-pull-secret
22 | # nmStateConfigLabelSelector:
23 | # matchLabels:
24 | # cluster-name: lab-spoke-adetalhouet
--------------------------------------------------------------------------------
/spoke-sno-ztp/05-baremetalhost.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: metal3.io/v1alpha1
3 | kind: BareMetalHost
4 | metadata:
5 | name: ca-montreal-node1
6 | namespace: ca-montreal
7 | labels:
8 | infraenvs.agent-install.openshift.io: "ca-montreal"
9 | annotations:
10 | inspect.metal3.io: disabled
11 | bmac.agent-install.openshift.io/hostname: "ca-montreal-node1"
12 | spec:
13 | online: true
14 | # userData:
15 | # name: bmh-userdata
16 | # namespace: sno-ztp
17 | bmc:
18 | address: redfish-virtualmedia+http://10.0.0.249:8000/redfish/v1/Systems/7e689341-84c6-4732-a1aa-2979e23385a5
19 | credentialsName: sno-secret
20 | disableCertificateVerification: true
21 | bootMACAddress: 52:54:00:d3:54:af
22 | automatedCleaningMode: disabled
23 | ---
24 | # dummy secret - it is not used but is required by the assisted service and bare metal operator
25 | apiVersion: v1
26 | kind: Secret
27 | metadata:
28 | name: sno-secret
29 | namespace: ca-montreal
30 | data:
31 | password: Ym9iCg==
32 | username: Ym9iCg==
33 | type: Opaque
--------------------------------------------------------------------------------
/spoke-sno-ztp/05-userdata.yaml:
--------------------------------------------------------------------------------
1 | kind: Secret
2 | apiVersion: v1
3 | metadata:
4 | name: bmh-userdata
5 | namespace: sno-ztp
6 | data:
7 | userData: >-
8 | I2Nsb3VkLWNvbmZpZwoKIyBIb3N0bmFtZSBtYW5hZ2VtZW50CnByZXNlcnZlX2hvc3RuYW1lOiBGYWxzZQpob3N0bmFtZTogc25vLmxhYi5hZGV0YWxob3VldApmcWRuOiBzbm8ubGFiLmFkZXRhbGhvdWV0CgpldGhlcm5ldHM6CiAgZXRoMDoKICAgIGFkZHJlc3NlczoKICAgICAgLSAxNDguMjUxLjEyLjM3LzMyCiAgICBnYXRld2F5NDogMTQ4LjI1MS4xMi4zMwoKIyBVc2Vycwp1c2VyczoKICAgIC0gbmFtZTogYWRldGFsaG91ZXQKICAgICAgZ3JvdXBzOiBhZG0sc3lzCiAgICAgIHNoZWxsOiAvYmluL2Jhc2gKICAgICAgaG9tZTogL2hvbWUvYWRldGFsaG91ZXQKICAgICAgc3VkbzogQUxMPShBTEwpIE5PUEFTU1dEOkFMTAogICAgICBsb2NrX3Bhc3N3ZDogZmFsc2UKICAgICAgc3NoLWF1dGhvcml6ZWQta2V5czoKICAgICAgICAtIHNzaC1lZDI1NTE5IEFBQUFDM056YUMxbFpESTFOVEU1QUFBQUlQd3lOSC9xa1ljcWtLazVNaU5qS0h4bm9hZE1FNmNySUo4YUlzM1I2VFpRIHJvb3RAbGFiLmFkZXRhbGhvdWV0CgojIENvbmZpZ3VyZSB3aGVyZSBvdXRwdXQgd2lsbCBnbwpvdXRwdXQ6CiAgYWxsOiAiPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC5sb2ciCgojIGNvbmZpZ3VyZSBpbnRlcmFjdGlvbiB3aXRoIHNzaCBzZXJ2ZXIKc3NoX3B3YXV0aDogZmFsc2UKZGlzYWJsZV9yb290OiB0cnVlCgojIEluc3RhbGwgbXkgcHVibGljIHNzaCBrZXkgdG8gdGhlIGZpcnN0IHVzZXItZGVmaW5lZCB1c2VyIGNvbmZpZ3VyZWQKIyBpbiBjbG91ZC5jZmcgaW4gdGhlIHRlbXBsYXRlICh3aGljaCBpcyBjZW50b3MgZm9yIENlbnRPUyBjbG91ZCBpbWFnZXMpCnNzaF9hdXRob3JpemVkX2tleXM6CiAgLSBzc2gtZWQyNTUxOSBBQUFBQzNOemFDMWxaREkxTlRFNUFBQUFJUHd5TkgvcWtZY3FrS2s1TWlOaktIeG5vYWRNRTZjcklKOGFJczNSNlRaUSByb290QGxhYi5hZGV0YWxob3VldAoKIyBSZW1vdmUgY2xvdWQtaW5pdApydW5jbWQ6CiAgLSBzeXN0ZW1jdGwgc3RvcCBOZXR3b3JrTWFuYWdlci5zZXJ2aWNlICYmIHN5c3RlbWN0bCBzdGFydCBOZXR3b3JrTWFuYWdlci5zZXJ2aWNlCiAgLSBkbmYgLXkgcmVtb3ZlIGNsb3VkLWluaXQK
9 | type: Opaque
10 |
--------------------------------------------------------------------------------
/spoke-sno-ztp/06-assisteddeploymentpullsecret-EXAMPLE.yaml:
--------------------------------------------------------------------------------
1 | # ---
2 | # apiVersion: v1
3 | # kind: Secret
4 | # metadata:
5 | # name: assisted-deployment-pull-secret
6 | # namespace: ca-montreal
7 | # stringData:
8 | # .dockerconfigjson: 'YOUR_PULL_SECRET'
--------------------------------------------------------------------------------
/spoke-sno-ztp/07-kusterlet.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: cluster.open-cluster-management.io/v1
3 | kind: ManagedCluster
4 | metadata:
5 | labels:
6 | cloud: hybrid
7 | name: ca-montreal
8 | name: ca-montreal
9 | spec:
10 | hubAcceptsClient: true
11 | ---
12 | apiVersion: agent.open-cluster-management.io/v1
13 | kind: KlusterletAddonConfig
14 | metadata:
15 | name: ca-montreal
16 | namespace: ca-montreal
17 | spec:
18 | clusterName: ca-montreal
19 | clusterNamespace: ca-montreal
20 | clusterLabels:
21 | cloud: hybrid
22 | applicationManager:
23 | enabled: true
24 | policyController:
25 | enabled: true
26 | searchCollector:
27 | enabled: true
28 | certPolicyController:
29 | enabled: true
30 | observabilityController:
31 | enabled: true
32 | iamPolicyController:
33 | enabled: true
--------------------------------------------------------------------------------
/spoke-sno-ztp/kustomization.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: kustomize.config.k8s.io/v1beta1
3 | kind: Kustomization
4 |
5 | resources:
6 | - 00-namespace.yaml
7 | - 01-agentclusterinstall.yaml
8 | - 02-clusterdeployment.yaml
9 | # - 03-nmstateconfig.yaml
10 | - 04-spokeinfraenv.yaml
11 | - 05-baremetalhost.yaml
12 | # - 05-userdata.yaml
13 | - 06-assisteddeploymentpullsecret.yaml
14 | - 07-kusterlet.yaml
--------------------------------------------------------------------------------