├── .gitignore ├── README.md ├── TSD ├── OCP_4.x.pptx └── VMWare_Screen_shots.pptx ├── ansible.cfg ├── doc ├── access-openshift.md ├── install-cp4d-35-airgapped.md ├── install-cp4d-35-db2-event-store.md ├── install-cp4d-35.md ├── ocp-airgapped-create-registry.md ├── ocp-step-2-prepare-installation-explanation.md ├── ocp-step-3-install-openshift-explanation.md ├── stop-cloud-pak.md ├── vmware-create-sc-ocs.md ├── vmware-step-1-prepare-bastion.md ├── vmware-step-2-prepare-openshift-installation.md ├── vmware-step-2a-prepare-ipi.md ├── vmware-step-2b-prepare-ova.md ├── vmware-step-2c-prepare-pxe.md ├── vmware-step-3-install-openshift.md ├── vmware-step-4-define-storage.md ├── vmware-step-5-post-install.md └── wipe-nodes.md ├── images ├── cluster-topology.png ├── convert-to-template.png ├── deploy-ovf-template.png ├── ocp-installation-process-vmware.png ├── select-ovf-template.png ├── vsphere-mac-address.png ├── vsphere-start-nodes.png └── vsphere-vm-folder.png ├── inventory ├── .gitignore ├── vmware-airgapped-example.inv ├── vmware-example-410.inv ├── vmware-example-48-ipi.inv ├── vmware-example-48.inv └── vmware-example-49.inv ├── playbooks ├── ocp4.yaml ├── ocp4_disable_dhcp.yaml ├── ocp4_recover.yaml ├── ocp4_remove_bootstrap.yaml ├── ocp4_vm_create.yaml ├── ocp4_vm_delete.yaml ├── ocp4_vm_power_on.yaml ├── ocp4_vm_update_vapp.yaml └── tasks │ ├── check_variables.yaml │ ├── chrony_client.j2 │ ├── chrony_client.yaml │ ├── chrony_server.j2 │ ├── chrony_server.yaml │ ├── create_oauth.j2 │ ├── create_oauth.yaml │ ├── dns_interface.yaml │ ├── dns_reset.yaml │ ├── dns_server.j2 │ ├── dns_server.yaml │ ├── dns_server_disable_dhcp.yaml │ ├── dnsmasq_libvirt_remove.yaml │ ├── etc_ansible_hosts.yaml │ ├── firewall_iptables_remove.yaml │ ├── global_proxy.j2 │ ├── global_proxy.yaml │ ├── haproxy.j2 │ ├── haproxy.yaml │ ├── haproxy_remove_bootstrap.yaml │ ├── hosts.yaml │ ├── http_server.j2 │ ├── http_server.yaml │ ├── init_facts.yaml │ ├── install_gui.yaml │ ├── nfs_server.yaml │ ├── nfs_server_exports.j2 │ ├── nfs_storage_class.j2 │ ├── nfs_storage_class_cluster_role.j2 │ ├── nfs_storage_class_script.yaml │ ├── nfs_storage_scripts.j2 │ ├── ocp_download.yaml │ ├── ocp_ignition.yaml │ ├── ocp_ignition_bootstrap_ova.j2 │ ├── ocp_ignition_chrony.j2 │ ├── ocp_ignition_proxy.j2 │ ├── ocp_install_config.j2 │ ├── ocp_install_config.yaml │ ├── ocp_install_dir.yaml │ ├── ocp_install_ipi.yaml │ ├── ocp_install_scripts.yaml │ ├── ocp_install_scripts_bootstrap_remove.j2 │ ├── ocp_install_scripts_bootstrap_wait.j2 │ ├── ocp_install_scripts_create_admin_user.j2 │ ├── ocp_install_scripts_create_registry_storage.j2 │ ├── ocp_install_scripts_dhcp_disable.j2 │ ├── ocp_install_scripts_install_wait.j2 │ ├── ocp_install_scripts_post_install.j2 │ ├── ocp_install_scripts_wait_co_ready.j2 │ ├── ocp_install_scripts_wait_nodes_ready.j2 │ ├── ocp_recover_certificates.yaml │ ├── ocp_start_namespace.j2 │ ├── ocp_stop_namespace.j2 │ ├── ping.yaml │ ├── prepare_xfs_volume.yaml │ ├── pxe_links.yaml │ ├── reboot.yaml │ ├── registry_storage.j2 │ ├── registry_storage.yaml │ ├── set_hostname.yaml │ ├── ssh_playbook.yaml │ ├── tftpboot.yaml │ ├── tftpboot_default.j2 │ ├── tftpboot_vm_menu.j2 │ ├── vm_create.yaml │ ├── vm_delete.yaml │ ├── vm_import_certificates.yaml │ ├── vm_poweron.yaml │ └── vm_update_vapp.yaml ├── prepare.sh ├── vm_create.sh ├── vm_delete.sh ├── vm_power_on.sh └── vm_update_vapp.sh /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | scrap 3 | 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Install Red Hat OpenShift 4.x on VMWare or Bare Metal 2 | 3 | The instructions in this repository are designed to lay out the Red Hat OpenShift Container Platform (OCP) 4.x on various infrastructures for custom demos and POCs of the IBM Cloud Paks. It is intended for those who want to fast-track installing OpenShift with some tried and tested automation. It has saved several IBMers numerous hours every time they need to set up a cluster to do Cloud Pak demos. 4 | 5 | **IMPORTANT NOTE: This repository is not intended for configuring and deploying OpenShift or a Cloud Pak for production use.** 6 | 7 | This repository covers the main steps in the provisioning process. 8 | 9 | ## Cluster topology and installation process 10 | The deployment instructions have been written with the following topology in mind: 11 | ![OpenShift 4.x cluster topology](/images/cluster-topology.png) 12 | 13 | In the topology, the **Bastion** node plays a key role. At installation time, it serves as the node from which the OpenShift installation process is run and it serves the Red Hat CoreOS boot and ISO images in case of a PXE install. More permanently, it acts as a Load Balancer, DNS, NTP and NFS server (for the image registry and applications). 14 | 15 | Red Hat documents two main options for laying out the OpenShift Container Platform (OCP): IPI (Installer Provisioned Infrastructure) and UPI (User Provisioned Infrastructure). This repository and guide provide assets for both. 16 | 17 | ### OpenShift installation process on VMWare 18 | When deploying on VMWare infrastructure, you can either create the VMs that make up the OpenShift cluster nodes manually and then proceed with the OpenShift installation, or you can automatically create the VMs through the provided Ansible scripts or via the IPI installation method. For automatic provisioning and IPI installation, you must have the ESX user name and password. 19 | 20 | ![VMWare - OCP installation process](/images/ocp-installation-process-vmware.png) 21 | 22 | ## Step 1 - Prepare Bastion node and (optionally) cluster nodes 23 | Before you can install OpenShift, you need a bastion node from which the installation will be run. Depending on the chosen installation type, you can also provision the cluster nodes. 24 | 25 | [Step 1 - Prepare bastion and infrastructure](/doc/vmware-step-1-prepare-bastion.md) 26 | 27 | ## Step 2 - Prepare for OpenShift installation 28 | This step depends on the type of installation you will be performing. Click the appropriate link from the Step 1 document to find the steps associated with the chosen installation type. 29 | 30 | [Step 2 - Prepare for OpenShift installation](/doc/vmware-step-2-prepare-openshift-installation.md) 31 | 32 | ## Step 3 - Install OpenShift (manual) 33 | Once the bastion node has been prepared, continue with the installation of OpenShift. 34 | 35 | [Step 3 - Install OpenShift](/doc/vmware-step-3-install-openshift.md) 36 | 37 | ## Step 4 - Define storage (manual) 38 | Now, create the storage class(es) and set the storage to be used for the applications and the image registry. 39 | 40 | [Step 4 - Define storage](/doc/vmware-step-4-define-storage.md) 41 | 42 | ## Step 5 - Finalize installation (manual) 43 | You can finalize the OpenShift installation by executing the steps in the document below.
44 | 45 | [Step 5 - Post-installation](/doc/vmware-step-5-post-install.md) 46 | 47 | ## Step 6 - Install the Cloud Pak 48 | The list below links to the installation steps of various Cloud Paks, provided by IBM Cloud Pak technical specialists. It is by no means intended to replace the official documentation, but is meant to accelerate the deployment of the Cloud Pak for POC and demo purposes. 49 | 50 | * [Cloud Pak for Data 3.5](/doc/install-cp4d-35.md) -------------------------------------------------------------------------------- /TSD/OCP_4.x.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/TSD/OCP_4.x.pptx -------------------------------------------------------------------------------- /TSD/VMWare_Screen_shots.pptx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/TSD/VMWare_Screen_shots.pptx -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | forks=50 3 | timeout=30 4 | remote_user=root 5 | host_key_checking=False 6 | callback_whitelist=profile_tasks 7 | interpreter_python=auto_silent 8 | 9 | # Use the YAML callback plugin. 10 | stdout_callback = yaml -------------------------------------------------------------------------------- /doc/access-openshift.md: -------------------------------------------------------------------------------- 1 | # Accessing the Red Hat OpenShift Console 2 | 3 | Once the Red Hat OpenShift cluster has been instantiated, you will probably want to access the console to add applications or to monitor the cluster. OpenShift heavily depends on DNS to access the Admin Console, Applications Console and Cluster Console, and in a production situation you would need to add the master to your DNS server and set up a wildcard DNS entry for the cluster console and other applications (such as Cloud Pak for Data). 4 | 5 | ## Access via your local browser 6 | If you want to access OpenShift from your local browser, change the `/etc/hosts` file on your laptop and add the entries shown below, pointing both host names to the IP address of the bastion node (which acts as the load balancer). 7 | 8 | Example `/etc/hosts` entry (replace the IP address placeholder with that of your bastion node): 9 | ``` 10 | <IP address of bastion node> console-openshift-console.apps.ocp45.coc.ibm.com oauth-openshift.apps.ocp45.coc.ibm.com 11 | ``` 12 | 13 | Once you have added this entry, navigate to the following address: 14 | https://console-openshift-console.apps.ocp45.coc.ibm.com 15 | 16 | Log on using user `ocadmin` and password `passw0rd`. 17 | -------------------------------------------------------------------------------- /doc/install-cp4d-35-airgapped.md: -------------------------------------------------------------------------------- 1 | # Air-gapped installation of Cloud Pak for Data 3.5 on Red Hat OpenShift 2 | These steps help to prepare the installation of Cloud Pak for Data 3.5 in air-gapped mode, in case the OpenShift cluster is not connected to the internet. 3 | 4 | In essence, you will first have to download all the services you want to install and then ship these to the bastion node, from which you can continue to push the images to the registry and do the installation. 5 | 6 | ## Download Cloud Pak for Data 7 | 8 | ### Log on to the download node 9 | Ensure that you're logged on to a machine which can run the cpd-cli command.
This can be a Linux server or a Windows or Mac workstation. In the steps below we're assuming you will be running the the download on a Linux server that is connected to the internet. 10 | 11 | ### Download installer 12 | ``` 13 | wget https://github.com/IBM/cpd-cli/releases/download/v3.5.2/cpd-cli-linux-EE-3.5.2.tgz -P /tmp/ 14 | mkdir -p /nfs/cpd 15 | tar xvf /tmp/cpd-cli-linux-EE-3.5.2.tgz -C /nfs/cpd 16 | rm -f /tmp/cpd-cli-linux-EE-3.5.2.tgz 17 | ``` 18 | 19 | ### Obtain your entitlement key for the container registry 20 | Login here: https://myibm.ibm.com/products-services/containerlibrary, using your IBMid. Then copy the entitlement key. 21 | 22 | ### Apply key to the repo.yaml file 23 | Insert the entitlement key after the `apikey:` parameter in the `/cp4d_download/cpd/repo.yaml` file. Please make sure you leave a blank after the `:`. 24 | 25 | ### Download Cloud Pak for Data services - individual assemblies 26 | Use the steps below if you want to install Cloud Pak for Data with selective modules. 27 | 28 | #### Download Cloud Pak for Data Lite 29 | ``` 30 | cd /cp4d_download/cpd 31 | assembly="lite" 32 | ./cpd-cli preload-images --assembly $assembly --repo ./repo.yaml --action download --accept-all-licenses 33 | ``` 34 | 35 | #### Download other assemblies 36 | You can repeat the above steps for the other assemblies, each time by selecting a different assembly name, for example: 37 | ``` 38 | assembly="wml" 39 | ... 40 | ``` 41 | * Watson Machine Learning: wml 42 | * Watson Knowledge Catalog: wkc 43 | * Data Virtualization: dv 44 | * Db2 Warehouse: db2wh 45 | * Db2 Event Store: db2eventstore 46 | * SPSS Modeler: spss-modeler 47 | * Decision Optimization: dods 48 | * Cognos Analytics: ca 49 | * DataStage: ds 50 | 51 | If you want to download an assembly that is not listed above, find the installation instructions here: https://www.ibm.com/support/producthub/icpdata/docs/view/services/SSQNUZ_current/cpd/svc/services.html?t=Add%20services&p=services. 52 | 53 | ### Download Cloud Pak for Data patches - individual assemblies 54 | If you want to download patches for the assemblies, use the following steps. You can find available patches here: https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_current/patch/avail-patches.html 55 | 56 | #### Download patch for Cloud Pak for Data Lite 57 | ``` 58 | cd /cp4d_download/cpd 59 | assembly="lite" 60 | ./cpd-cli patch --assembly $assembly --repo ./repo.yaml --version 3.5.2 --patch-name cpd-3.5.2-lite-patch-1 --action download 61 | ``` 62 | 63 | ### Download Cloud Pak for Data services - multiple assemblies 64 | Alternatively, you can download all the assemblies you want to install later using the following steps. 65 | ``` 66 | assemblies="lite wsl wml wkc spss dods rstudio dv" 67 | cd /cp4d_download/cpd 68 | for assembly in $assemblies;do 69 | echo $assembly 70 | ./cpd-cli preload-images --assembly $assembly --repo ./repo.yaml --action download --accept-all-licenses 71 | done 72 | ``` 73 | 74 | ### Download Cloud Pak for Data patches - multiple assemblies 75 | You can also download the patches for all the assemblies. 
You can find available patches here: https://www.ibm.com/support/producthub/icpdata/docs/content/SSQNUZ_current/patch/avail-patches.html 76 | ``` 77 | for p in \ 78 | "lite cpd-3.5.2-lite-patch-1" \ 79 | "wsl cpd-3.5.2-ccs-patch-1" \ 80 | "wsl cpd-3.5.1-wsl-patch-1" \ 81 | ;do 82 | set -- $p 83 | ./cpd-cli patch --assembly $1 --repo ./repo.yaml --version 3.5.2 --patch-name $2 --action download 84 | done 85 | ``` 86 | 87 | ### Tar the downloads directory on the downloads server 88 | ``` 89 | tar czf /tmp/cp4d_downloads.tar.gz /cp4d_downloads 90 | ``` 91 | 92 | Ship the tar file (most-likely 100+ GB) to the bastion server. Continue with the steps documented here: [Install Cloud Pak for Data](/doc/install-cp4d-35.md) -------------------------------------------------------------------------------- /doc/install-cp4d-35-db2-event-store.md: -------------------------------------------------------------------------------- 1 | # Install Db2 Event Store 2 | This document lists the steps to configure a Cloud pak for Data cluster to include the Db2 Event Store. Special steps are needed to taint the nodes, prepare the local volumes and ensure they are mounted at boot. 3 | 4 | ## Configure nodes for Db2 Event Store 5 | 6 | ### Specify 3 event store nodes 7 | ``` 8 | es_nodes="es-1.ocp45.uk.ibm.com es-2.ocp45.uk.ibm.com es-3.ocp45.uk.ibm.com" 9 | ``` 10 | 11 | ### Taint and label the nodes 12 | ``` 13 | for node_name in $es_nodes;do 14 | oc adm taint node $node_name icp4data=database-db2eventstore:NoSchedule --overwrite 15 | oc label node $node_name icp4data=database-db2eventstore --overwrite 16 | oc label node $node_name node-role.db2eventstore.ibm.com/control=true --overwrite 17 | done 18 | ``` 19 | 20 | ### Check that nodes have been labeled correctly 21 | ``` 22 | oc get no -l icp4data=database-db2eventstore 23 | ``` 24 | 25 | ### Format Event Store local disks 26 | We will use logical volumes to make it easier to expand if needed. 
27 | ``` 28 | for es_node in $es_nodes;do 29 | ssh core@$es_node 'sudo pvcreate /dev/sdb;sudo vgcreate es /dev/sdb;sudo lvcreate -l 100%FREE -n es_local es;sudo mkfs.xfs /dev/es/es_local' 30 | done 31 | ``` 32 | 33 | ### Create MCP for Event Store nodes 34 | ``` 35 | cat << EOF > /tmp/es_mcp.yaml 36 | apiVersion: machineconfiguration.openshift.io/v1 37 | kind: MachineConfigPool 38 | metadata: 39 | name: db2eventstore 40 | spec: 41 | machineConfigSelector: 42 | matchExpressions: 43 | - {key: machineconfiguration.openshift.io/role, operator: In, values: [worker,db2eventstore]} 44 | maxUnavailable: 1 45 | nodeSelector: 46 | matchLabels: 47 | icp4data: database-db2eventstore 48 | paused: false 49 | EOF 50 | ``` 51 | 52 | ### Create machine config for Event Store nodes 53 | ``` 54 | cat << EOF > /tmp/es_mc_local_mount.yaml 55 | apiVersion: machineconfiguration.openshift.io/v1 56 | kind: MachineConfig 57 | metadata: 58 | labels: 59 | machineconfiguration.openshift.io/role: db2eventstore 60 | name: 50-db2eventstore-local-mount 61 | spec: 62 | config: 63 | ignition: 64 | version: 2.2.0 65 | systemd: 66 | units: 67 | - contents: | 68 | [Service] 69 | Type=oneshot 70 | ExecStartPre=/usr/bin/mkdir -p /mnt/es_local 71 | ExecStart=/usr/bin/mount /dev/es/es_local /mnt/es_local 72 | [Install] 73 | WantedBy=multi-user.target 74 | name: es-local-mount.service 75 | enabled: true 76 | EOF 77 | ``` 78 | 79 | ### Create configs 80 | ``` 81 | oc create -f /tmp/es_mcp.yaml 82 | oc create -f /tmp/es_mc_local_mount.yaml 83 | ``` 84 | 85 | ### Wait until Event Store nodes have been assigned to the correct MCP 86 | ``` 87 | watch -n 10 oc get mcp 88 | ``` 89 | 90 | Output: 91 | ``` 92 | [root@bastion bin]# watch -n 10 oc get mcp 93 | Every 10.0s: oc get mcp Thu Jun 18 05:24:44 2020 94 | 95 | NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT 96 | db2eventstore False True False 3 0 0 0 97 | master rendered-master-0d397032dda301ab303ac43f25eea268 True False False 3 3 3 0 98 | worker rendered-worker-e94e7b0ff3b8923b16de48ce4f4d2520 True False False 3 3 3 0 99 | ``` 100 | 101 | Wait until the `db2eventstore` machine config pool has `READYMACHINECOUNT=3`.
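If you prefer a non-interactive check over `watch`, you can also block until the pool reports that it has rolled out the new configuration. This is a minimal sketch, assuming the standard `Updated` condition exposed by the Machine Config Operator:
```
# Wait (up to 60 minutes) until all Event Store nodes have picked up the new machine config
oc wait mcp/db2eventstore --for=condition=Updated --timeout=60m
```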
102 | 103 | ### Now you can create the Db2 Event Store instance 104 | * Specify `database-db2eventstore` for the node label 105 | * Specify `/mnt/es_local` for the `Local storage path` 106 | 107 | ### Check the status of the pods 108 | ``` 109 | oc get po -l component=eventstore 110 | ``` 111 | 112 | Output: 113 | ``` 114 | [root@bastion bin]# oc get po -l component=eventstore 115 | NAME READY STATUS RESTARTS AGE 116 | db2eventstore-1592454732439-dm-backend-647d859cbf-6lkk6 0/1 Running 0 95s 117 | db2eventstore-1592454732439-dm-frontend-8f7967797-xq7zj 1/1 Running 0 95s 118 | db2eventstore-1592454732439-tenant-catalog-779fd765f8-ffh6n 0/1 Pending 0 95s 119 | db2eventstore-1592454732439-tenant-engine-74c448467-fzxtb 0/1 Init:2/6 0 94s 120 | db2eventstore-1592454732439-tenant-engine-74c448467-phv2c 0/1 Init:2/6 0 95s 121 | db2eventstore-1592454732439-tenant-engine-74c448467-trvsj 0/1 Init:1/6 0 94s 122 | db2eventstore-1592454732439-tenant-tools-6b667c5b5-v5pps 0/1 Running 0 95s 123 | db2eventstore-1592454732439-tenant-zk-0 0/1 Pending 0 95s 124 | db2eventstore-1592454732439-tenant-zk-1 0/1 Pending 0 94s 125 | db2eventstore-1592454732439-tenant-zk-2 0/1 Pending 0 94s 126 | db2eventstore-1592454732439-tools-slave-56c98576c5-9gmxp 0/1 Init:1/3 0 94s 127 | db2eventstore-1592454732439-tools-slave-56c98576c5-rq7q7 0/1 Init:1/3 0 94s 128 | db2eventstore-1592454732439-tools-slave-56c98576c5-txxt7 0/1 Init:1/3 0 95s 129 | ``` 130 | 131 | No need to worry if you see pods with status `Pending` or even `ConfigError`, those statuses are typically transient. -------------------------------------------------------------------------------- /doc/ocp-step-2-prepare-installation-explanation.md: -------------------------------------------------------------------------------- 1 | # Explanation of what happens during preparation of the OpenShift installation 2 | The script runs the `playbooks/ocp4.yaml` Ansible playbook which will do the following: 3 | * Disable firewall on the Bastion node 4 | * Create installation directory (default is `/ocp_install`, configurable in the inventory file) 5 | * Configure proxy client (if applicable) 6 | * Install packages on the Bastion node (nginx, dnsmasq, ...) 
7 | * Set up chrony NTP server on the Bastion node, this will be used by all OpenShift nodes to sync the time 8 | * Generate `/etc/ansible/hosts` file, useful to run `ansible` scripts later 9 | * Download OpenShift client and dependencies (Red Hat CoreOS) 10 | * Configure passwordless SSH on the Bastion node 11 | * Generate OpenShift installation configuration (`install-config.yaml`) 12 | * Generate installation scripts used in subsquent steps 13 | * Set up DNS and DHCP server on the Bastion node (needed for OpenShift installation) 14 | * Generate CoreOS ignition files which will used when the OpenShift nodes are booted with PXE 15 | * Create PXE files based on the MAC addresses of the OpenShift nodes 16 | * Set up a load balancer (`haproxy`) on the Bastion node 17 | * Configure NFS on the bastion node 18 | -------------------------------------------------------------------------------- /doc/ocp-step-3-install-openshift-explanation.md: -------------------------------------------------------------------------------- 1 | # Explanation of what happens during installation of the control plane 2 | 3 | ## PXE Boot 4 | All OpenShift cluster nodes (bootstrap, masters, workers) were created as "empty" and don't have an operating system installed, which means they will fail at startup and defer boot to PXE (Preboot eXecution Environment); a boot through the network. Every starting node sends out a DHCP broadcast including its MAC address which is picked up by the `dnsmasq` service which was started in the preparation step (`dnsmasq` has a dual role as DNS and DHCP server). The DHCP server replies to the starting node with a suggested IP address and information how to PXE boot, this information is contained in the `dnsmasq` configuration file that was generated before: 5 | 6 | ``` 7 | dhcp-range=192.168.1.200,192.168.1.250 8 | 9 | enable-tftp 10 | tftp-root=/ocp_install/tftpboot 11 | dhcp-boot=pxelinux.0 12 | ``` 13 | 14 | As part of `dnsmasq`, a TFTP (Trivial File Protocol) server is started which serves files in the `/ocp_install/tftpboot` directory and its subdirectories. From this TFTP server, the starting node retrieves the SYSLINUX bootloader file, `pxelinux.0`, which does the initial pre-boot of the server. The PXE server reads its configuration from the file associated with the MAC address of the the booting server. 15 | 16 | ``` 17 | [root@bastion pxelinux.cfg]# ll /ocp_install/tftpboot/pxelinux.cfg 18 | total 44 19 | lrwxrwxrwx. 1 root root 48 Apr 21 09:03 01-00-50-52-54-50-01 -> /ocp_install/tftpboot/pxelinux.cfg/bootstrap 20 | lrwxrwxrwx. 1 root root 47 Apr 21 09:03 01-00-50-52-54-60-01 -> /ocp_install/tftpboot/pxelinux.cfg/master-1 21 | ... 22 | -rw-r--r--. 1 root root 138 Apr 21 09:02 default 23 | -rw-r--r--. 1 root root 514 Apr 21 09:02 bootstrap 24 | -rw-r--r--. 1 root root 513 Apr 21 09:02 master-1 25 | ... 26 | ``` 27 | 28 | In case the booting server has MAC address `00:50:52:54:60:01` it reads the configuration in file `01-00-50-52-54-60-01`; the colons in the MAC address are replaced by dashes and the file name is prefixed with `01` which stands for Ethernet. To simplify finding the configuration file by mere mortals, the PXE configuration file is a symbolic link which points to a file with the host name of the server. 
The PXE configuration file looks something like this: 29 | 30 | ``` 31 | [root@bastion pxelinux.cfg]# cat master-1 32 | default menu.c32 33 | # set Timeout 3 Seconds 34 | timeout 30 35 | ontimeout linux 36 | label linux 37 | menu label ^Install RHEL CoreOS 38 | menu default 39 | kernel /images/vmlinuz 40 | append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.100:8090/rhcos-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.100:8090/ocp43-master-1.ign nameserver=192.168.1.100 ip=192.168.1.101::192.168.1.1:255.255.255.0:master-1.uk.ibm.com:ens192:none:192.168.1.100 41 | ``` 42 | 43 | This may look a bit cryptic but esssentially the file contains all information PXE needs to load the initial ramdisk `initramfs.img`, which is located in the `images` directory in the TFTP root directory. Initram is a temporary root file system in memory. Also, it holds the URL of the CoreOS Linux kernel: `http://192.168.1.100:8090/rhcos-metal-bios.raw.gz`, which is served by the `nginx` HTTP server that was started on the bastion node in the preparation steps. Finally you will find information about the ignition file that configures CoreOS, the IP address, netmask, host name, interface the booted server will assume and the DNS (nameserver) it configures. The ignition file is also served by the HTTP server on the bastion node. 44 | 45 | Ignition is a provisioning utility that was created for CoreOS and is a tool that can partition disks, format partitions, write files and configure users. There are 3 types of ignition files that were created during the preparation steps, by the OpenShift installer: bootstrap, master and worker ignition files. In the preparation steps, these standard files were used to generate node-specific ignition files, such as `ocp43-master-1.ign`, which looks as follows: 46 | 47 | ``` 48 | [root@bastion ocp_install]# cat /ocp_install/ocp43-master-1.ign 49 | { 50 | "ignition": { 51 | "config": { 52 | "append": [ 53 | { 54 | "source": "http://192.168.1.100:8090/master.ign", 55 | "verification": {} 56 | } 57 | ] 58 | }, 59 | "timeouts": {}, 60 | "version": "2.2.0" 61 | }, 62 | "networkd": {}, 63 | "passwd": {}, 64 | "storage": { 65 | "files": [ 66 | { 67 | "contents": { 68 | "source": "data:,master-1" 69 | }, 70 | "filesystem": "root", 71 | "mode": 420, 72 | "path": "/etc/hostname", 73 | "user": { 74 | "name": "root" 75 | } 76 | }, 77 | { 78 | "contents": { 79 | "source": "data:text/plain;base64,IyBBbnNpYmxlIG1hbmFnZWQKCiMgU2VydmVycyB0byBiZSB1c2VkIGFzIGEgQ2hyb255L05UUCB0aW1lIHNlcnZlcgpzZXJ2ZXIgMTkyLjE2OC4xLjEwMCBpYnVyc3QKCiMgUmVjb3JkIHRoZSByYXRlIGF0IHdoaWNoIHRoZSBzeXN0ZW0gY2xvY2sgZ2FpbnMvbG9zc2VzIHRpbWUuCmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKCiMgU3luY2hyb25pemUgd2l0aCBsb2NhbCBjbG9jawpsb2NhbCBzdHJhdHVtIDEwCgojIEZvcmNlIHRoZSBjbG9jayB0byBiZSBzdGVwcGVkIGF0IHJlc3RhcnQgb2YgdGhlIHNlcnZpY2UgKGF0IGJvb3QpCiMgaWYgdGhlIHRpbWUgZGlmZmVyZW5jZSBpcyBncmVhdGVyIHRoYW4gMSBzZWNvbmQKaW5pdHN0ZXBzbGV3IDEgMTkyLjE2OC4xLjEwMAoKIyBBbGxvdyB0aGUgc3lzdGVtIGNsb2NrIHRvIGJlIHN0ZXBwZWQgaW4gdGhlIGZpcnN0IHRocmVlIHVwZGF0ZXMKIyBpZiBpdHMgb2Zmc2V0IGlzIGxhcmdlciB0aGFuIDEgc2Vjb25kLgptYWtlc3RlcCAxLjAgMwoKIyBFbmFibGUga2VybmVsIHN5bmNocm9uaXphdGlvbiBvZiB0aGUgcmVhbC10aW1lIGNsb2NrIChSVEMpLgpydGNzeW5jCgojIFNwZWNpZnkgZGlyZWN0b3J5IGZvciBsb2cgZmlsZXMuCmxvZ2RpciAvdmFyL2xvZy9jaHJvbnkK" 80 | }, 81 | "filesystem": "root", 82 | "mode": 420, 83 | "path": "/etc/chrony.conf", 84 | "user": { 85 | "name": "root" 86 | } 87 | } 88 | ] 89 | }, 90 | 
"systemd": {} 91 | } 92 | ``` 93 | 94 | Again, the above can be a bit intimidating but essentially it starts with appending the standard ignition file that was created by the OpenShift installer. Then, the hostname of the server is set in `/etc/hostname` and the `chrony` time server is configured in file `/etc/chrony.conf`. Because the `chrony` configuration contains special characters such as new lines, the file contents have been encoded in base-64. When we decode the very long string `IyBBbn....nkK` using `base64 -d`, it looks like this: 95 | 96 | ``` 97 | # Ansible managed 98 | 99 | # Servers to be used as a Chrony/NTP time server 100 | server 192.168.1.100 iburst 101 | 102 | # Record the rate at which the system clock gains/losses time. 103 | driftfile /var/lib/chrony/drift 104 | 105 | # Synchronize with local clock 106 | local stratum 10 107 | 108 | # Force the clock to be stepped at restart of the service (at boot) 109 | # if the time difference is greater than 1 second 110 | initstepslew 1 192.168.1.100 111 | 112 | # Allow the system clock to be stepped in the first three updates 113 | # if its offset is larger than 1 second. 114 | makestep 1.0 3 115 | 116 | # Enable kernel synchronization of the real-time clock (RTC). 117 | rtcsync 118 | 119 | # Specify directory for log files. 120 | logdir /var/log/chrony 121 | ``` 122 | 123 | ## Install the control plane 124 | Once the PXE boot components have been set up, you will start the cluster nodes. The bootstrap node is only needed at initial install and serves a temporary control plane that "bootstraps" the permanent OpenShift control plane consisting of the 3 master nodes and a number of worker nodes, dependent on the base configuration you have chosen. 125 | 126 | When you open the web console of any of the virtual servers just started, you will observe that the SYSLINUX boot loader is started, then loading the CoreOS operating system and booting it. When the CoreOS operating system is started on the master nodes, they will start `etcd`, `kubetlet` and the other services required by OpenShift. 127 | 128 | Once the bootstrap node is started, you can `ssh` to it: `ssh core@bootstrap` and view the journal control log messages as the master nodes are started. Essentially, you don't need to do this as the `wait_bootstrap.sh` will wait until the control plane has been activated. 129 | 130 | Sometimes it's useful to take a look at the journal control logs of bootstrap and masters. Especially when the cluster is behind a firewall and/or proxy server, you might find that images cannot be loaded due to restrictive internet access. -------------------------------------------------------------------------------- /doc/stop-cloud-pak.md: -------------------------------------------------------------------------------- 1 | # Set the resource quota for a namespace to stop the Cloud Pak 2 | You can use this procedure to change the resource quota for a project to 0 and to stop all processing in that namespace, effectively stopping the Cloud Pak. This may be useful when templating a cluster or when you have multiple patterns defined on the same cluster, but want to activate only one. 
3 | 4 | ## Stopping the processing in a project 5 | 6 | ### Create resourcequota object 7 | ``` 8 | cat << EOF > /tmp/rq-0.yaml 9 | apiVersion: v1 10 | kind: ResourceQuota 11 | metadata: 12 | name: compute-resources-0 13 | spec: 14 | hard: 15 | pods: "0" 16 | EOF 17 | ``` 18 | 19 | ### Set resource quota for project 20 | ``` 21 | oc apply -n zen -f /tmp/rq-0.yaml 22 | ``` 23 | 24 | ### Delete all pods in project 25 | ``` 26 | oc delete po -n zen --all 27 | ``` 28 | 29 | Now wait for all pods to stop. 30 | 31 | ## Restart processing in a project 32 | 33 | ### Remove the resource quota for the project 34 | ``` 35 | oc delete resourcequota -n zen compute-resources-0 36 | ``` 37 | 38 | ### Watch application come up 39 | It may take a few seconds (sometimes more) for the scheduler to start the pods in the namespace. 40 | ``` 41 | watch -n 10 'oc get po -n zen' 42 | ``` 43 | -------------------------------------------------------------------------------- /doc/vmware-create-sc-ocs.md: -------------------------------------------------------------------------------- 1 | # Setting up OCS on an OCP 4.x installation provisioned on VMWare with local storage on the workers 2 | 3 | Using steps found here: https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.6/html/deploying_openshift_container_storage_on_vmware_vsphere/deploy-using-local-storage-devices-vmware. 4 | 5 | ## Pre-requisites 6 | The steps in this document assume you have 3 (dedicated) worker nodes in the cluster, each with one additional large raw disk (for Ceph). In the example below the disks are sized 200 GB 7 | 8 | ## Log in to OpenShift 9 | ``` 10 | oc login -u ocadmin -p passw0rd 11 | ``` 12 | 13 | ## Add labels and taints to the workers 14 | ``` 15 | ocs_nodes='ocs-1.ocp46.uk.ibm.com ocs-2.ocp46.uk.ibm.com ocs-3.ocp46.uk.ibm.com' 16 | for ocs_node in $ocs_nodes;do 17 | oc label nodes $ocs_node cluster.ocs.openshift.io/openshift-storage="" --overwrite 18 | oc label nodes $ocs_node node-role.kubernetes.io/infra="" --overwrite 19 | oc adm taint nodes $ocs_node node.ocs.openshift.io/storage="true":NoSchedule 20 | done 21 | ``` 22 | 23 | ## Install OCS operator 24 | You can install the operator using the OpenShift console. 25 | 26 | ### Install OCS operator from web interface 27 | - Open OpenShift 28 | - Go to Administrator --> Operators --> OperatorHub 29 | - Find `OpenShift Container Storage` 30 | - Install 31 | - Select `A specific namespace on the cluster`, namespace `openshift-storage` will be created automatically 32 | - Update channel: stable-4.6 33 | - Click Install 34 | 35 | ### Wait until the pods are running 36 | ``` 37 | watch -n 5 "oc get po -n openshift-storage" 38 | ``` 39 | 40 | Expected output: 41 | ``` 42 | NAME READY STATUS RESTARTS AGE 43 | noobaa-operator-7688b8849d-xzlgm 1/1 Running 0 37m 44 | ocs-metrics-exporter-776fffcf89-w2hmq 1/1 Running 0 37m 45 | ocs-operator-6b8455554f-frr89 1/1 Running 0 37m 46 | rook-ceph-operator-6457794bc-zqbt7 1/1 Running 0 37m 47 | ``` 48 | 49 | ## Install local storage operator 50 | You can install the operator using the OpenShift console. 
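If you prefer to script this step instead of clicking through the console, the operator can also be subscribed to with an OLM manifest. The sketch below is an assumption based on a typical OCP 4.6 setup; the channel and catalog source names may differ in your environment:
```
# Create the namespace, operator group and subscription for the Local Storage operator
cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-local-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: openshift-local-storage
spec:
  targetNamespaces:
  - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: "4.6"
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```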
51 | 52 | ### Install operator via OpenShift console 53 | - Open OpenShift console 54 | - Go to Administrator --> Operators --> OperatorHub 55 | - Find `Local Storage` 56 | - Install 57 | - Specify namespace `openshift-local-storage` 58 | - Update channel: 4.6 59 | - Click Install 60 | 61 | ### Wait until the operator is running 62 | ``` 63 | watch -n 5 "oc get csv -n openshift-local-storage" 64 | ``` 65 | 66 | ## Create storage cluster 67 | - Open OpenShift console 68 | - Go to Administrator --> Operators --> Installed Operators 69 | - Select `openshift-storage` as the project 70 | - Click on OpenShift Container Storage 71 | - Under Storage Cluster, click on Create instance 72 | - Select `Internal - Attached Devices` for Mode 73 | - Select nodes 74 | - Select `ocs-1`, `ocs-2` and `ocs-3` and click Next 75 | - Wait for a bit so that OpenShift can interrogate the workers 76 | - Enter `local-volume-sdb` for the Volume Set Name 77 | - Click Advanced and select the disk size, for example Min: 200 GiB, Max: 200 GiB 78 | - Select `local-volume-sdb` for the Storage Class 79 | - Click Create 80 | 81 | Wait until PVs for the `local-volume-sdb` storage class have been created; each PV is 200 GB. 82 | ``` 83 | watch -n 5 'oc get pv' 84 | ``` 85 | 86 | ## Create storage cluster 87 | - Open OpenShift console 88 | - Go to Administrator --> Operators --> Installed Operators 89 | - Select `openshift-storage` project at the top of the screen 90 | - Click the `OpenShift Container Storage` operator 91 | - Click the `Storage Cluster` link (tab) 92 | - Click `Create OCS Cluster Service` 93 | - Select `Internal` for mode 94 | - The 3 storage nodes that were labelled before should already be selected 95 | - Select `localblock` for Storage Class 96 | - Click `Create` 97 | 98 | OCS will also automatically create the Ceph storage classes. 99 | 100 | ## Wait until all pods are up and running 101 | The storage cluster will be ready when the `ocs-operator` pod is ready. 102 | ``` 103 | watch -n 5 'oc get pod -n openshift-storage' 104 | ``` 105 | 106 | ## Wait for the storage cluster to be created 107 | ``` 108 | watch -n 10 'oc get po -n openshift-storage' 109 | ``` 110 | 111 | Wait until the OCS operator pod `ocs-operator-xxxxxxxx-yyyyy` is running with READY=`1/1`. You will see more than 20 pods starting in the `openshift-storage` namespace. 112 | 113 | ## Add toleration to CSI plug-ins 114 | If you install any components which require nodes to be tainted (such as Db2 Event Store), you need to add additional tolerations for the OCS DaemonSets, which can be done via the `rook-ceph-operator-config` ConfigMap. 115 | 116 | First, get the definition of the ConfigMap: 117 | ``` 118 | oc get cm -n openshift-storage rook-ceph-operator-config -o yaml > /tmp/rook-ceph-operator-config.yaml 119 | ``` 120 | 121 | Edit the file to have the following value for the CSI_PLUGIN_TOLERATIONS data element: 122 | ``` 123 | data: 124 | CSI_PLUGIN_TOLERATIONS: |2- 125 | 126 | - key: node.ocs.openshift.io/storage 127 | operator: Equal 128 | value: "true" 129 | effect: NoSchedule 130 | - key: icp4data 131 | operator: Equal 132 | value: "database-db2eventstore" 133 | effect: NoSchedule 134 | ``` 135 | 136 | Apply the changes to the ConfigMap: 137 | ``` 138 | oc apply -f /tmp/rook-ceph-operator-config.yaml 139 | ``` 140 | 141 | You should notice that the CSI pods are now also scheduled on the Event Store nodes. This ConfigMap change will survive the restart of the rook operator.
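To verify that the tolerations have taken effect, list the CSI plug-in pods together with the nodes they run on and confirm that they also appear on the tainted Event Store nodes. The pod name pattern below assumes the standard rook-ceph CSI DaemonSet names (`csi-cephfsplugin-*` and `csi-rbdplugin-*`):
```
# Show CSI plug-in pods and the nodes they are scheduled on
oc get po -n openshift-storage -o wide | grep -E 'csi-(cephfsplugin|rbdplugin)'
```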
142 | 143 | ## Record name of storage class(es) 144 | Now that the OCS operator has been created you should have a `ocs-storagecluster-cephfs` storage class which you can use for the internal image registry and other purposes. -------------------------------------------------------------------------------- /doc/vmware-step-1-prepare-bastion.md: -------------------------------------------------------------------------------- 1 | # Establish VMWare infrastructure for installation of OpenShift 2 | Before installing OpenShift 4.x you need to provision the (virtual) servers that will host the software. The most common infrastructure is virtual machines on an ESX infrastructure. 3 | 4 | In this document we cover the steps for provisioning of demo/POC infrastructure and the expected layout of the cluster. 5 | 6 | ## Provision bastion node (and potentially an NFS node) 7 | Make sure that the following infrastructure is available or can be created: 8 | * 1 RHEL 8.x, 8 processing cores and 16 GB of memory 9 | * 1 optional RHEL 8.x NFS server, 8 processing cores and 32 GB of memory, if you want to use NFS and don't have an NFS server available already. If you don't want to provision a separate server for NFS storage, you can also use the bastion node for this. In that case make sure you configure the bastion node with 8 processing cores and 32 GB of memory. 10 | 11 | ## Log on to the Bastion node 12 | Log on to the bastion node as `root`. 13 | 14 | ## Disable the firewall on the bastion node 15 | The bastion node will host the OpenShift installation files. On a typical freshly installed RHEL server, the `firewalld` service is activated by default and this will cause problems, so you need to disable it. 16 | ``` 17 | systemctl stop firewalld;systemctl disable firewalld 18 | ``` 19 | 20 | ## Enable required repositories on the bastion node (and NFS if in use) 21 | You must install certain packages on the bastion node (and optionally NFS) for the preparation script to function. These will come from Red Hat Enterprise Linux and EPEL repositories. Make sure the following repositories are available from Red Hat or the satellite server in use for the infrastructure: 22 | * rhel-server-rpms - Red Hat Enterprise Linux Server (RPMs) 23 | 24 | For EPEL, you need the following repository: 25 | * epel/x86_64 - Extra Packages for Enterprise Linux - x86_64 26 | 27 | If you don't have this repository configured yet, you can do as as follows for RHEL-8: 28 | ``` 29 | yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm 30 | ``` 31 | 32 | ## Install packages on the bastion node 33 | On the bastion, install the packages which are needed for the preparation, for RHEL-8: 34 | ``` 35 | yum install -y ansible bind-utils buildah chrony dnsmasq git \ 36 | haproxy httpd-tools jq libvirt net-tools nfs-utils nginx podman \ 37 | python3 python3-netaddr python3-passlib python3-pip python3-policycoreutils python3-pyvmomi python3-requests \ 38 | screen sos syslinux-tftpboot wget yum-utils 39 | ``` 40 | 41 | Additionally, install some additional Python modules: 42 | ``` 43 | pip3 install passlib 44 | ``` 45 | 46 | > **Note** If your server has more than 1 Python version, Ansible typically chooses the newest version. If `pip3` references a different version of Python than the one used by Ansible, you may have to find the latest version and run the `pip install` against that version. 47 | 48 | Example: 49 | ``` 50 | ls -al /usr/bin/pip* 51 | ``` 52 | 53 | Output: 54 | ``` 55 | lrwxrwxrwx. 
1 root root 23 Sep 5 09:42 /usr/bin/pip-3 -> /etc/alternatives/pip-3 56 | lrwxrwxrwx. 1 root root 22 Sep 5 09:42 /usr/bin/pip3 -> /etc/alternatives/pip3 57 | lrwxrwxrwx. 1 root root 8 Oct 14 2021 /usr/bin/pip-3.6 -> ./pip3.6 58 | -rwxr-xr-x. 1 root root 209 Oct 14 2021 /usr/bin/pip3.6 59 | lrwxrwxrwx. 1 root root 8 Oct 18 2021 /usr/bin/pip-3.8 -> ./pip3.8 60 | -rwxr-xr-x. 1 root root 536 Oct 18 2021 /usr/bin/pip3.8 61 | ``` 62 | 63 | Now install using `pip3.8`: 64 | ``` 65 | pip3.8 install passlib 66 | ``` 67 | 68 | If you have a separate storage server, please install the following packages on that VM (works for both RHEL-8 and RHEL-7): 69 | ``` 70 | yum install -y bind-utils chrony net-tools nfs-utils wget yum-utils 71 | ``` 72 | 73 | ## Clone this repo 74 | If you have access to the internet from the bastion, you can also clone this repository using the following procedure. 75 | ``` 76 | cd /root 77 | git clone https://github.com/IBM-ICP4D/cloud-pak-ocp-4.git 78 | ``` 79 | 80 | ### Alternative: upload the repository zip 81 | Export this repository to a zip file and upload it to the Bastion node. Unzip the file in a directory of your preference, typically `/root`. 82 | 83 | ## Prepare registry server (in case of disconnected installation) 84 | If you're doing a disconnected (air-gapped) installation of OpenShift, please ensure you set up a registry server. You can follow the instructions in the Red Hat OpenShift documentation or the steps documented here: [Create air-gapped registry](/doc/ocp-airgapped-create-registry.md) 85 | 86 | ## Download the OpenShift 4.x pull secret 87 | A pull secret is needed to download the OpenShift assets from the Red Hat registry. You can download your pull secret from here: https://cloud.redhat.com/openshift/install/vsphere/user-provisioned. Click on **Download pull secret**. 88 | 89 | Create file `/tmp/ocp_pullsecret.json` on the node from which you run the `prepare.sh` script. 90 | 91 | ### Disconnected (air-gapped) installation of OpenShift 4.x 92 | If you're doing a disconnected installation of OpenShift, please download the pull secret that was created using the steps in [Create air-gapped registry](/doc/ocp-airgapped-create-registry.md), for example: 93 | ``` 94 | wget http://registry.uk.ibm.com:8080/ocp4_downloads/ocp4_install/ocp_pullsecret.json -O /tmp/ocp_pullsecret.json 95 | ``` 96 | 97 | Also, download the certificate that of the registry server that was created, for example: 98 | ``` 99 | wget http://registry.uk.ibm.com:8080/ocp4_downloads/registry/certs/registry.crt -O /tmp/registry.crt 100 | ``` 101 | 102 | > If your registry server is not registered in the DNS, you can add an entry to the `/etc/hosts` file on the bastion node. This file is used for input by the DNS server spun up on the bastion node so the registry server IP address can be resolved from all the cluster node. 103 | 104 | ## Prepare infrastructure 105 | The next steps depend on which type of installation you are going to do and whether or not you have ESX credentials which will allow you to create VMs. If you have the correct permissions, the easiest is to choose an IPI (Installer Provisioned Infrastructure) installation. With the other two installation types (OVA and PXE Boot) you can choose to manually create the virtual machines. 106 | 107 | * IPI: OpenShift will create the VMs as part of the installation. Continue with [IPI installation](/doc/vmware-step-2a-prepare-ipi.md) 108 | * VMWare template (ova): Create nodes based on an OVA template you upload to vSphere. 
Continue with [OVA installation](/doc/vmware-step-2b-prepare-ova.md) 109 | * PXE Boot (pxe): Create empty nodes. When booted, the operating system will be loaded from the bastion node using TFTP. Continue with [PXE Boot installation](/doc/vmware-step-2c-prepare-pxe.md) 110 | -------------------------------------------------------------------------------- /doc/vmware-step-2-prepare-openshift-installation.md: -------------------------------------------------------------------------------- 1 | # Install OpenShift on VMWare - Step 2 - Prepare for OpenShift installation 2 | The steps below guide you through the preparation of the bastion node and the installation of the OpenShift control plane and workers after you have provisioned the VMs. 3 | 4 | ## Use screen to multiplex your terminal 5 | The `screen` utility allows you to perform a long-running task in a terminal, which continues even if your connection drops. Once started, you can use `Ctrl-A D` to detach from your screen and `screen -r` to return to the detached screen. If you accidentally started multiple screens, you can use `screen -ls` to list the screen sessions and then attach to one of them using `screen -r `. As a best practice, try to have only one screen terminal active to simplify navigation. 6 | ``` 7 | screen 8 | ``` 9 | 10 | 11 | 12 | 13 | 14 | ## Prepare the nodes for the OpenShift installation or run the installation 15 | Set environment variables for the root password of the bastion, NFS and load balancer nodes (must be the same) and set the OpenShift administrator (ocadmin) password. If you do not set the environment variables, the script will prompt you to specify them. 16 | ``` 17 | export root_password= 18 | export ocp_admin_password= 19 | ``` 20 | 21 | ### OVA Install 22 | 23 | If you are doing a UPI installation with OVF templates, there are a few extra steps that need to be executed before installation. First, the install binaries should be downloaded to the bastion using the following command: 24 | 25 | ``` 26 | cd ~/cloud-pak-ocp-4 27 | ./prepare.sh -i inventory/ --skip-install 28 | ``` 29 | 30 | #### Prepare the VMWare environment for the creation of machines (OVA install) 31 | 32 | Before the user can create the machines, a template needs to be created within the VMWare environment. From the folder that houses the templates, right-click and select "Deploy OVF Template" 33 | 34 | ![Deploy OVF Template](/images/deploy-ovf-template.png) 35 | 36 | Enter the details of the OVA file to use for the template. The OVA file to use for the template is served by the bastion node HTTP server from the ocp_install directory. 37 | 38 | ![Select OVF Template](/images/select-ovf-template.png) 39 | 40 | Follow the necessary steps to select the name and folder of the template and the VMWare compute resource to use. Review the details. Finally, select the VMWare storage to use and complete the creation of the template. 41 | 42 | After creation, the machine needs to be converted to a template. Right-click on the machine created above and select Template -> Convert to Template 43 | 44 | ![Convert to Template](/images/convert-to-template.png) 45 | 46 | #### Create the VMWare machines (OVA install) 47 | 48 | The machines can be created manually or, if the installer has the correct permissions, the creation of the machines can be automated.
49 | 50 | With the manual approach, the VMWare admin creates the machines with the appropriate compute, memory and storage, using the template created in the previous section as a base. 51 | 52 | If the creation can be automated, the machines can be created using the following script: 53 | ``` 54 | cd ~/cloud-pak-ocp-4 55 | ./vm_create.sh -i inventory/ 56 | ``` 57 | 58 | The user will be prompted for the user id and password of the VMWare account. All machines in the inventory file will be created. 59 | 60 | After setting up the machines, run the following to prepare the installation of OpenShift. 61 | ``` 62 | cd ~/cloud-pak-ocp-4 63 | ./prepare.sh -i inventory/ -e vc_user= -e vc_password= [other parameters...] 64 | ``` 65 | 66 | After the prepare script has finished, the VMWare machines will need to be configured with the ignition files and a couple of extra parameters. This can be accomplished with the following script: 67 | ``` 68 | cd ~/cloud-pak-ocp-4 69 | ./vm_update_vapp.sh -i inventory/ 70 | ``` 71 | 72 | The user will be prompted for the VMWare user id and password. 73 | 74 | ### IPI Install if your vSphere user can create VMs 75 | If your vSphere user can create VMs and you are performing an IPI installation, or you can have the preparation script create the virtual machines, run the script as follows: 76 | 77 | ``` 78 | cd ~/cloud-pak-ocp-4 79 | ./prepare.sh -i inventory/ -e vc_user= -e vc_password= [other parameters...] 80 | ``` 81 | 82 | ### Install if the VMs have been pre-created 83 | On the bastion node, you must run the script that will prepare the installation of OpenShift 4.x. 84 | ``` 85 | cd ~/cloud-pak-ocp-4 86 | ./prepare.sh -i inventory/ [other parameters...] 87 | ``` 88 | 89 | ### Prepare script failing 90 | If the prepare script fails at some point, you can fix the issue and run the script again. Don't continue to the next step until the Ansible playbook has run successfully. 91 | 92 | A successfully completed prepare script output looks something like below. The `unreachable` and `failed` counts should be `0`. 93 | ``` 94 | PLAY RECAP ************************************************************************************************************************************************************************** 95 | 192.168.1.100 : ok=15 changed=12 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 96 | localhost : ok=79 changed=67 unreachable=0 failed=0 skipped=7 rescued=0 ignored=0 97 | 98 | Wednesday 06 May 2020 07:03:41 +0100 (0:00:02.687) 0:04:21.252 ********* 99 | =============================================================================== 100 | Install common packages ----------------------------------------------------------------------------------------------------------------------------------------------------- 90.25s 101 | . 102 | . 103 | . 104 | Generate ignition files for the workers -------------------------------------------------------------------------------------------------------------------------------------- 1.47s 105 | ``` 106 | 107 | ## Continue with next step if you chose manual installation 108 | If you specified `run_install=True` in the inventory file, the preparation script will attempt to run the installation to the end, including creation of storage classes and configuring the registry. Should you want to run the installation manually, you can proceed with the next step: installation of OpenShift.
109 | 110 | [VMWare - Step 3 - Install OpenShift](/doc/vmware-step-3-install-openshift.md) 111 | 112 | ## What's happening during the preparation? 113 | If you want to know what is happening during the preparation of the bastion node and OpenShift installation, check here: [Explanation of control plane preparation procedure](/doc/ocp-step-2-prepare-installation-explanation.md) 114 | -------------------------------------------------------------------------------- /doc/vmware-step-2a-prepare-ipi.md: -------------------------------------------------------------------------------- 1 | # Installation using Installer Provisioned Infrastructure (IPI) 2 | This is the most straightforward installation method if your vSphere user can provision new virtual machines. The OpenShift installation process uploads a VM template and clones this into the bootstrap, masters and workers and provisions the clusters. 3 | 4 | > When running the prepare script in the next step, you must specify the vc_user and vc_password parameters. Make sure that you have them handy. 5 | 6 | ### Disconnected (air-gapped) installation of OpenShift 4.x 7 | In case you're doing a disconnected installation of OpenShift, update the example inventory file `/inventory/inventory/vmware-airgapped-example.inv` to match your cluster. 8 | 9 | ## Customize the inventory file 10 | You will have to adapt an existing inventory file in the `inventory` directory and choose how the OpenShift and relevant servers are layed out on your infrastructure. The inventory file is used by the `prepare.sh` script which utilizes Ansible to prepare the environment for installation of OpenShift 4.x. 11 | 12 | Go to the `inventory` directory and copy one of the exiting `.inv` files. Make sure you review all the properties, especially: 13 | * Domain name and cluster name, these will be used to define the node names of the cluster 14 | * Proxy configuration (if applicable) 15 | * OpenShift version 16 | * Installation type (IPI) in this case 17 | * DHCP range - Ensure that you have enough free IP addresses for all cluster nodes. 18 | * Ingress and API virtual IP addresses 19 | * Number of masters and workers and VM properties 20 | * VMWare environment settings 21 | * Nodes section - The sections for bootstrap, masters and workers will be ignored because OpenShift will determine the name of the nodes. However, do not remove the sections, they must exist. 22 | 23 | ### IPI Install 24 | Run the script as follows. It will prepare the bastion node and continue with the IPI install. 25 | ``` 26 | cd ~/cloud-pak-ocp-4 27 | ./prepare.sh -i inventory/ -e vc_user= -e vc_password= [other parameters...] 28 | ``` 29 | 30 | After installation is finished, you can start using OpenShift. 31 | 32 | ## Keep the cluster running for 24h 33 | > **IMPORTANT:** After completing the OpenShift 4.x installation, ensure that you keep the cluster running for at least 24 hours. This is required to renew the temporary control plane certificates. If you shut down the cluster nodes before the control plane certificates are renewed and they expire while the cluster is down, you will not be able to access OpenShift. 34 | 35 | ## Access OpenShift (optional) 36 | You can now access OpenShift by editing your `/etc/hosts` file. See [Access OpenShift](/doc/access-openshift.md) for more details. 
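Before relying on the cluster, it can be useful to run a quick sanity check with the `oc` client. The sketch below assumes the default installation directory `/ocp_install` (configurable in the inventory file); adjust the kubeconfig path if you used a different directory:
```
# Use the kubeconfig generated by the OpenShift installer (path is an assumption)
export KUBECONFIG=/ocp_install/auth/kubeconfig

# All nodes should be Ready and all cluster operators should become Available
oc get nodes
oc get clusteroperators
```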
-------------------------------------------------------------------------------- /doc/vmware-step-2b-prepare-ova.md: -------------------------------------------------------------------------------- 1 | # Installation based on VMWare templates (OVA) 2 | If the administrator wants to create the VMs based on a template, specify the OVA installation method. The OVA template file must be manually uploaded to the ESX server; this cannot be done by the preparation process. If your user can provision new virtual machines but you do not want to use th IPI, you can have the preparation script create the VMs, update the application settings and start the VMs. If your user cannot control VMs, provide the location of the OVA file and and application settings to the administrator and run through the installation manually. 3 | 4 | ### Disconnected (air-gapped) installation of OpenShift 4.x 5 | In case you're doing a disconnected installation of OpenShift, update the example inventory file `/inventory/inventory/vmware-airgapped-example.inv` to match your cluster. 6 | 7 | ## Customize the inventory file 8 | You will have to adapt an existing inventory file in the `inventory` directory and choose how the OpenShift and relevant servers are layed out on your infrastructure. The inventory file is used by the `prepare.sh` script which utilizes Ansible to prepare the environment for installation of OpenShift 4.x. 9 | 10 | Go to the `inventory` directory and copy one of the exiting `.inv` files. Make sure you review all the properties, especially: 11 | * Domain name and cluster name, these will be used to define the node names of the cluster 12 | * Proxy configuration (if applicable) 13 | * OpenShift version 14 | * Installation type (OVA) in this case 15 | * Run install: `False` 16 | * DHCP range - Ensure that you have enough free IP addresses for all cluster nodes 17 | 18 | You will return to the inventory file after creation of the VMs. 19 | 20 | ## Prepare the nodes for the OpenShift installation or run the installation 21 | 22 | Set environment variables for the root password of the bastion, NFS and load balancer nodes (must be the same) and set the OpenShift administrator (ocadmin) password. If you do not set the environment variables, the script will prompt to specify them. 23 | ``` 24 | export root_password= 25 | export ocp_admin_password= 26 | ``` 27 | 28 | ### Download VMWare template to bastion node 29 | 30 | The following command will download the OVA template to the bastion node so that it can be uploaded to vSphere. 31 | 32 | ``` 33 | cd ~/cloud-pak-ocp-4 34 | ./prepare.sh -i inventory/ --skip-install 35 | ``` 36 | 37 | ### Create the VMWare template in vSphere 38 | 39 | Before the user can create the machines a template needs to be created within the VMWare environment. From the folder that houses the templates or the folder you want to create your OpenShift cluster in, right click and select "Deploy OVF Template" 40 | 41 | ![Deploy OVF Template](/images/deploy-ovf-template.png) 42 | 43 | Enter the details of the OVA file to use for the template. The ova file to use for the template is being served by the bastion node HTTP server (port 8090) from the `ocp_install` directory. Enter the details of the ova file to use for the template: 44 | 45 | ![Select OVF Template](/images/select-ovf-template.png) 46 | 47 | Follow the necessary steps to select the name and folder of the template, the vmware compute resource to use. Finally select the VMWare storage to use and complete the creation of the template. 
48 | 49 | > You can skip the **Customize template** options where you would normally enter the ignition config data. This will be specified for each of the VMs. 50 | 51 | After creation, the machine needs to be converted to a template. Right-click the machine created above and select Template -> Convert to Template. 52 | 53 | ![Convert to Template](/images/convert-to-template.png) 54 | 55 | ### Create the VMWare machines 56 | 57 | With the manual approach, the VMWare admin can create the machines with the appropriate compute, memory and storage using the base template created in the previous section. 58 | 59 | * 1 bootstrap node with 100 GB assigned to the first volume, 4 processing cores and 8 GB of memory 60 | * 3 master nodes, servers with at least 200 GB assigned to the first volume, 8 processing cores and 32 GB of memory 61 | * 3 or more workers, servers with at least 200 GB assigned to the first volume, 16 processing cores and 64 GB of memory 62 | * Optionally, 3 or more workers, used for OCS/ODF or Portworx. These servers must have 200 GB assigned to the first volume, and one larger (for example 500 GB) volume for the storage 63 | 64 | It is recommended to create all virtual servers in a VM folder to keep them together. Also, it is best to create a master and worker VM template with the correct specifications so that it is easy to clone them. 65 | 66 | After creation, your VM folder should look something like this: 67 | ![VM Folder](/images/vsphere-vm-folder.png) 68 | 69 | ### Update the nodes section(s) in the inventory file 70 | Go to the section in the inventory file that lists the bootstrap, masters and workers, and update the host names and MAC addresses. For each node, copy the MAC address as displayed in the network adapter configuration in vSphere, for example: 71 | ![vSphere MAC address](/images/vsphere-mac-address.png) 72 | 73 | After making the changes to the inventory file, the bottom section of the inventory file should look something like this: 74 | ``` 75 | [bootstrap] 76 | 10.99.92.52 host="bootstrap" mac="00:50:56:ab:81:bb" 77 | 78 | [masters] 79 | 10.99.92.53 host="master-1" mac="00:50:56:ab:55:e0" 80 | 10.99.92.54 host="master-2" mac="00:50:56:ab:9b:48" 81 | 10.99.92.55 host="master-3" mac="00:50:56:ab:10:ab" 82 | 83 | [workers] 84 | 10.99.92.56 host="worker-1" mac="00:50:56:ab:3f:8b" 85 | 10.99.92.57 host="worker-2" mac="00:50:56:ab:ec:78" 86 | 10.99.92.58 host="worker-3" mac="00:50:56:ab:b2:6d" 87 | ``` 88 | 89 | ### Re-run preparation script 90 | This step creates the remainder of the files needed to complete the installation, such as the application data. 91 | ``` 92 | cd ~/cloud-pak-ocp-4 93 | ./prepare.sh -i inventory/ 94 | ``` 95 | 96 | ### Update the VM application data 97 | In the `/ocp_install` directory you will find the `bootstrap-ova.ign.64`, `master.ign.64` and `worker.ign.64` files. You will need to customize every VM you created in the previous steps with the application properties.
For each of the VMs: 98 | * Select the VM 99 | * Edit settings 100 | * Go to the VM Options tab 101 | * Open the Advanced section 102 | * Click on `EDIT CONFIGURATION` 103 | 104 | Add 3 configuration parameters: 105 | * `guestinfo.ignition.config.data.encoding` --> `base64` 106 | * `guestinfo.ignition.config.data` --> paste the contents of the appropriate `.ign.64` file 107 | * `disk.EnableUUID` --> `TRUE` 108 | 109 | ## Continue with next step if you chose manual installation 110 | If you specified `run_install=True` in the inventory file, the preparation script will attempt to run the installation to the end, including creation of storage classes and configuring the registry. Should you want to run the installation manually, you can proceed with the next step: installation of OpenShift. 111 | 112 | [VMWare - Step 3 - Install OpenShift](/doc/vmware-step-3-install-openshift.md) 113 | -------------------------------------------------------------------------------- /doc/vmware-step-2c-prepare-pxe.md: -------------------------------------------------------------------------------- 1 | # Installation based on PXE Boot (PXE) 2 | If your vSphere user cannot provision new virtual machines, or if you want to determine the names of your nodes or use static IP addresses, you can use the PXE boot installation option. 3 | 4 | ### Create empty VMWare machines 5 | 6 | With the manual approach, the VMWare admin can create **empty** machines with the appropriate compute, memory and storage. 7 | 8 | * 1 bootstrap node with 100 GB assigned to the first volume, 4 processing cores and 8 GB of memory 9 | * 3 master nodes, servers with at least 200 GB assigned to the first volume, 8 processing cores and 32 GB of memory 10 | * 3 or more workers, servers with at least 200 GB assigned to the first volume, 16 processing cores and 64 GB of memory 11 | * Optionally, 3 or more workers, used for OCS/ODF or Portworx. These servers must have 200 GB assigned to the first volume, and one larger (for example 500 GB) volume for the storage 12 | 13 | It is recommended to create all virtual servers in a VM folder to keep them together. Also, it is best to create a master and worker VM template with the correct specifications so that it is easy to clone them. 14 | 15 | After creation, your VM folder should look something like this: 16 | ![VM Folder](/images/vsphere-vm-folder.png) 17 | 18 | ### Disconnected (air-gapped) installation of OpenShift 4.x 19 | In case you're doing a disconnected installation of OpenShift, update the example inventory file `/inventory/vmware-airgapped-example.inv` to match your cluster. 20 | 21 | ## Customize the inventory file 22 | You will have to adapt an existing inventory file in the `inventory` directory and choose how the OpenShift cluster and supporting servers are laid out on your infrastructure. The inventory file is used by the `prepare.sh` script, which utilizes Ansible to prepare the environment for the installation of OpenShift 4.x. 23 | 24 | Go to the `inventory` directory and copy one of the existing `.inv` files. Make sure you review all the properties, especially: 25 | * Domain name and cluster name; these will be used to define the node names of the cluster 26 | * Proxy configuration (if applicable) 27 | * OpenShift version 28 | * Installation type (PXE in this case) 29 | * DHCP range - Ensure that you have enough free IP addresses for all cluster nodes.
30 | * Nodes section - Specify the IP addresses and names of the bootstrap, masters and workers. If your virtual machines have been pre-created by a vSphere administrator, also specify the MAC address associated with each VM so that the DHCP server will assign the correct IP address. 31 | 32 | For each node, copy the MAC address as displayed in the network adapter configuration in vSphere, for example: 33 | ![vSphere MAC address](/images/vsphere-mac-address.png) 34 | 35 | After making the changes to the inventory file, the bottom section of the inventory file should look something like this: 36 | ``` 37 | [bootstrap] 38 | 10.99.92.52 host="bootstrap" mac="00:50:56:ab:81:bb" 39 | 40 | [masters] 41 | 10.99.92.53 host="master-1" mac="00:50:56:ab:55:e0" 42 | 10.99.92.54 host="master-2" mac="00:50:56:ab:9b:48" 43 | 10.99.92.55 host="master-3" mac="00:50:56:ab:10:ab" 44 | 45 | [workers] 46 | 10.99.92.56 host="worker-1" mac="00:50:56:ab:3f:8b" 47 | 10.99.92.57 host="worker-2" mac="00:50:56:ab:ec:78" 48 | 10.99.92.58 host="worker-3" mac="00:50:56:ab:b2:6d" 49 | ``` 50 | 51 | ## Prepare the nodes for the OpenShift installation or run the installation 52 | 53 | Set environment variables for the root password of the bastion, NFS and load balancer nodes (must be the same) and set the OpenShift administrator (ocadmin) password. If you do not set the environment variables, the script will prompt to specify them. 54 | ``` 55 | export root_password= 56 | export ocp_admin_password= 57 | ``` 58 | 59 | ### Run the preparation script 60 | On the bastion node, you must run the script that will prepare the installation of OpenShift 4.x. 61 | ``` 62 | cd ~/cloud-pak-ocp-4 63 | ./prepare.sh -i inventory/ [other parameters...] 64 | ``` 65 | 66 | ## Continue with next step 67 | If you specified `run_install=True` in the inventory file, the preparation script will attempt to run the installation to the end, including creation of storage classes and configuring the registry. Should you want to run the installation manually, you can proceed with the next step: installation of OpenShift. 68 | 69 | [VMWare - Step 3 - Install OpenShift](/doc/vmware-step-3-install-openshift.md) 70 | -------------------------------------------------------------------------------- /doc/vmware-step-3-install-openshift.md: -------------------------------------------------------------------------------- 1 | # Install OpenShift on VMWare - Step 3 - Install OpenShift 2 | The steps below guide you through the installation of the OpenShift cluster. It assumes that the bastion node of the cluster has already been prepared in the previous step. 3 | 4 | ## Start all cluster nodes 5 | Go to the vSphere web interface of your ESX infrastructure and log in. 6 | 7 | Find the VM folder that holds the cluster VMs and start the bootstrap, masters and workers. 8 | 9 | ![vSphere Start control plane](/images/vsphere-start-nodes.png) 10 | 11 | ## Wait for bootstrap to complete 12 | The bootstrapping of the control plane can take up to 30 minutes. Run the following command to wait for the bootstrapping to complete. 13 | ``` 14 | /ocp_install/scripts/wait_bootstrap.sh 15 | ``` 16 | 17 | You should see something like this: 18 | ``` 19 | INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp45.coc.ibm.com:6443... 20 | INFO API v1.16.2 up 21 | INFO Waiting up to 30m0s for bootstrapping to complete... 
22 | INFO It is now safe to remove the bootstrap resources 23 | ``` 24 | 25 | If you want the output to be a bit more verbose, especially while waiting for the first step to complete, you can run the script as follows: 26 | ``` 27 | /ocp_install/scripts/wait_bootstrap.sh --log-level=debug 28 | ``` 29 | 30 | ## Remove bootstrap from load balancer 31 | Now that the control plane has been started, the bootstrap is no longer needed. Execute the following step to remove the references to the bootstrap node from the load balancer and shut down the bootstrap node. After that you can remove the bootstrap VM from within vSphere. 32 | > If you're not managing the load balancer from/on the bastion node, you have to manually remove or comment out the bootstrap entries. 33 | ``` 34 | /ocp_install/scripts/remove_bootstrap.sh 35 | ``` 36 | 37 | ## Approve Certificate Signing Requests and wait for nodes to become ready 38 | Run the following to approve CSRs of the workers and wait for the nodes to become Ready. 39 | ``` 40 | /ocp_install/scripts/wait_nodes_ready.sh 41 | ``` 42 | 43 | Alternatively, you can manually approve the certificates. 44 | ``` 45 | export KUBECONFIG=/ocp_install/auth/kubeconfig 46 | oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs --no-run-if-empty oc adm certificate approve 47 | oc get no 48 | ``` 49 | 50 | Repeat the `oc get csr` and `oc get no` commands until you see all workers added to your cluster. 51 | 52 | ## Create OpenShift administrator user 53 | Rather than using `kubeadmin` to operate OpenShift, we're creating an OpenShift administrator user. The script below will create the `ocadmin` user with the password that was set in the environment variable `ocp_admin_password`. 54 | ``` 55 | /ocp_install/scripts/create_admin_user.sh 56 | ``` 57 | 58 | You can ignore the `Warning: User 'ocadmin' not found` message. Once the script has run, a new authentication mechanism will be added to the `authentication` cluster operator. This will take a few minutes and only once finished you can log on with the `ocadmin` user. 59 | 60 | ## Wait for installation to complete 61 | Now wait for the installation to complete. This should be very quick, but could take up to 30 minutes. 62 | ``` 63 | /ocp_install/scripts/wait_install.sh 64 | ``` 65 | 66 | Something like the following should be displayed: 67 | ``` 68 | INFO Waiting up to 30m0s for the cluster at https://api.ocp45.coc.ibm.com:6443 to initialize... 69 | INFO Waiting up to 10m0s for the openshift-console route to be created... 70 | INFO Install complete! 71 | INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/ocp_install/auth/kubeconfig' 72 | INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp45.coc.ibm.com 73 | INFO Login to the console with user: kubeadmin, password: Fo9wP-taw47-6ujnV-jwT2k 74 | ``` 75 | 76 | **Please note that the above output renders the `kubeadmin` password** 77 | 78 | ## Wait for all cluster operators to become ready 79 | To wait for the cluster operators to become ready, run the following script: 80 | ``` 81 | /ocp_install/scripts/wait_co_ready.sh 82 | ``` 83 | 84 | Alternatively, you can monitor the cluster operators by running the following command. 85 | ``` 86 | watch -n 10 'oc get co' 87 | ``` 88 | The deployment has been completed if AVAILABLE=True and PROGRESSING=False for all cluster operators. 89 | 90 | Once complete, you can log on as below. 
91 | ``` 92 | unset KUBECONFIG 93 | API_URL=$(grep "api-int" /etc/dnsmasq.conf | sed 's#.*\(api.*\)/.*#\1#') 94 | oc login -s $API_URL:6443 -u ocadmin -p passw0rd --insecure-skip-tls-verify=true 95 | ``` 96 | 97 | The password above is just an example; use the OpenShift administrator password you set in the `ocp_admin_password` environment variable. 98 | 99 | ## Continue with next step 100 | Now that the masters and workers are running, we can define the storage to be used for the applications and image registry. 101 | 102 | [VMWare - Step 4 - Define storage](/doc/vmware-step-4-define-storage.md) 103 | 104 | ## What's happening during the OpenShift installation? 105 | If you want to know what is happening during the installation of the control plane, check here: [Explanation of OpenShift installation procedure](/doc/ocp-step-3-install-openshift-explanation.md) 106 | -------------------------------------------------------------------------------- /doc/vmware-step-4-define-storage.md: -------------------------------------------------------------------------------- 1 | # Install OpenShift on VMWare - Step 4 - Define storage 2 | Now that the cluster is up and running - from a processing perspective - you need to define storage classes and configure the OpenShift internal image registry to persist its content on that storage. Depending on the Cloud Pak you will be installing, you can choose from NFS, OCS/Ceph or Portworx. 3 | 4 | ## Create storage classes 5 | For NFS, a script to create the `managed-nfs-storage` storage class will have been generated if you configured this in the inventory file. OCS/Ceph and Portworx currently require manual steps via the OpenShift console or the command line. Please note that you're not restricted to creating only one storage class. If you have capacity on the cluster and a matching configuration of disks, you can create multiple storage classes. 6 | 7 | ### Create NFS storage class 8 | NFS is a convenient and easy-to-configure storage solution, but in the POC/demo cluster setup it is not suitable for production. On the cluster, the bastion node also serves NFS with a single large volume used for both the image registry and storage defined for the Cloud Paks. This introduces a single point of failure for both of the above consumers, which we don't consider an issue for non-production use. 9 | 10 | If you want to use NFS for any of the applications deployed on OpenShift, you can create a storage class `managed-nfs-storage`. This will attach the storage class to the server and NFS directory configured in the inventory file used when preparing the OpenShift installation. 11 | ``` 12 | /ocp_install/scripts/create_nfs_sc.sh 13 | ``` 14 | 15 | Once you have run the script, the provisioner will start in the `default` namespace and the storage class will have been created: 16 | ``` 17 | [root@bastion cloud-pak-ocp-4]# oc get sc 18 | NAME PROVISIONER AGE 19 | managed-nfs-storage icpd-nfs.io/nfs 4m18s 20 | ``` 21 | 22 | ### Create OpenShift Container Storage - Ceph 23 | As of version 4 of OpenShift, OpenShift Container Storage implements Ceph, a software-defined storage (SDS) solution that has been available as a storage backend for Kubernetes and OpenStack for a number of years. OCS/Ceph provides a distributed file system that can handle node failures and also has better performance characteristics than NFS. 24 | 25 | On the demo infrastructure, an empty raw volume has been created on the worker nodes. This can be used to deploy OCS. 26 | 27 | To create the OCS/Ceph storage class, please refer to [OCS Storage Class](/doc/vmware-create-sc-ocs.md).
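Whichever storage class you create, it can be useful to confirm that dynamic provisioning actually works before you assign the class to the image registry in the next section. Below is a minimal sketch for the NFS storage class; the claim name is just an example and the commands assume the kubeconfig from the installation.
```
export KUBECONFIG=/ocp_install/auth/kubeconfig
# Request a small test volume from the managed-nfs-storage class
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
EOF
# The claim should become Bound once the provisioner has created the volume
oc get pvc nfs-test-pvc -n default
# Clean up the test claim afterwards
oc delete pvc nfs-test-pvc -n default
```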
28 | 29 | ## Define storage for image registry 30 | During the installation of OpenShift, no container image registry has been created; this has to be done as a separate step, and storage must be assigned to the image registry. Run the script that creates the persistent volume claim (PVC) for the image registry. The storage class to be used for the registry has been defined in the inventory file. 31 | ``` 32 | /ocp_install/scripts/create_registry_storage.sh 33 | ``` 34 | 35 | ## Continue with next step 36 | Now you can continue with the post-installation steps: 37 | 38 | [VMWare - Step 5 - Post-installation steps](/doc/vmware-step-5-post-install.md) 39 | -------------------------------------------------------------------------------- /doc/vmware-step-5-post-install.md: -------------------------------------------------------------------------------- 1 | # Install OpenShift on VMWare - Step 5 - Post-install 2 | The steps below guide you through the post-installation tasks for OpenShift. It is assumed that the OpenShift masters and workers are in the **Ready** state. 3 | 4 | ## Post-install steps 5 | The following script will clean up the PXE links and rebuild the known_hosts file. 6 | ``` 7 | /ocp_install/scripts/post_install.sh 8 | ``` 9 | 10 | Among other things, the script may opt the cluster out of remote health checking. Applying these changes to all nodes will take some time. To wait for the cluster operators to become ready, run the following: 11 | ``` 12 | /ocp_install/scripts/wait_co_ready.sh 13 | ``` 14 | 15 | ## Optional: Disable DHCP server 16 | After the workers have been deployed, you may no longer need the DHCP server that is running on the bastion node. To avoid any conflicts with other activities on the VMWare infrastructure, you can now disable the DHCP service within `dnsmasq`. 17 | ``` 18 | /ocp_install/scripts/disable_dhcp.sh 19 | ``` 20 | 21 | ## Keep the cluster running for 24h 22 | > **IMPORTANT:** After completing the OpenShift 4.x installation, ensure that you keep the cluster running for at least 24 hours. This is required to renew the temporary control plane certificates. If you shut down the cluster nodes before the control plane certificates are renewed and they expire while the cluster is down, you will not be able to access OpenShift. 23 | 24 | ## Access OpenShift (optional) 25 | You can now access OpenShift by editing your `/etc/hosts` file. See [Access OpenShift](/doc/access-openshift.md) for more details. 26 | 27 | ## Continue with next step 28 | You can now continue with the installation of one of the Cloud Paks. 29 | 30 | [Step 6 - Install the Cloud Pak](/README.md#step-6---install-the-cloud-pak) 31 | -------------------------------------------------------------------------------- /doc/wipe-nodes.md: -------------------------------------------------------------------------------- 1 | # Wipe the nodes of the OpenShift cluster 2 | 3 | This document describes how to prepare VMs that already have an operating system. All VMs, except for the Bastion node, are wiped and installed from scratch by erasing the disk with `dd`. The wiped VM will - on next reboot - start a PXE boot that in turn will start the OpenShift installation with an installation method that is also used for bare metal installations.
4 | 5 | ## Check if the nodes can be accessed 6 | ``` 7 | ansible bootstrap,masters,workers -u core -b -a 'bash -c "hostname;whoami"' 8 | ``` 9 | 10 | >**IMPORTANT: BE VERY CAREFUL WITH THE NEXT COMMANDS, THEY CAN DESTROY YOUR CLUSTER** 11 | 12 | ## Wiping the nodes if OCP 4.x has already been installed 13 | ``` 14 | ansible bootstrap,masters,workers -u core -b -a 'bash -c "hostname;sudo dd if=/dev/zero of=/dev/sda count=1000 bs=1M;sudo shutdown -h 1"' 15 | ``` 16 | 17 | ## Clean known hosts 18 | For the next installation of OpenShift to work ok, remove the known hosts. 19 | ``` 20 | rm -f ~/.ssh/known_hosts 21 | ``` -------------------------------------------------------------------------------- /images/cluster-topology.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/cluster-topology.png -------------------------------------------------------------------------------- /images/convert-to-template.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/convert-to-template.png -------------------------------------------------------------------------------- /images/deploy-ovf-template.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/deploy-ovf-template.png -------------------------------------------------------------------------------- /images/ocp-installation-process-vmware.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/ocp-installation-process-vmware.png -------------------------------------------------------------------------------- /images/select-ovf-template.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/select-ovf-template.png -------------------------------------------------------------------------------- /images/vsphere-mac-address.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/vsphere-mac-address.png -------------------------------------------------------------------------------- /images/vsphere-start-nodes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/vsphere-start-nodes.png -------------------------------------------------------------------------------- /images/vsphere-vm-folder.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/IBM-ICP4D/cloud-pak-ocp-4/907f3649a5576e7ca881f5dd68ac7f70f3148be6/images/vsphere-vm-folder.png -------------------------------------------------------------------------------- /inventory/.gitignore: -------------------------------------------------------------------------------- 1 | *.backup 2 | testing 3 | -------------------------------------------------------------------------------- 
/inventory/vmware-example-410.inv: -------------------------------------------------------------------------------- 1 | [all:children] 2 | masters 3 | workers 4 | 5 | [all:vars] 6 | ansible_ssh_common_args='-o StrictHostKeyChecking=no' 7 | domain_name="coc.ibm.com" 8 | cluster_name="ocp410" 9 | default_gateway="10.99.92.1" 10 | 11 | # 12 | # Indicate the method by which Red Hat CoreOS will be installed. This can be one of the following values: 13 | # - pxe: CoreOS will be loaded on the bootstrap, masters and workers using PXE-boot. This is the default. 14 | # - ova: Only for VMWare. CoreOS will be loaded using a VMWare template (imported OVA file). 15 | # - ipi: Only for VMWare. CoreOS and OpenShift will be installed via Installer-provisioned infrastructure. OpenShift 16 | # will take care of creating the required VMs and automatically assigns hostnames. If you choose this installation 17 | # method, please also complete the IPI section in the inventory file. 18 | # 19 | # For pxe and ova installs you have the option of letting the prepare script create the VMs for you by setting 20 | # vm_create_vms=True. This can only be done if you specify the vc_user and vc_password properties at the command line. 21 | # If you specify run_install=True and vm_create_vms=True, the script will start the virtual machines. Otherwise, you must 22 | # start the bootstrap, masters and workers yourself while the script is waiting for the bootstrap. 23 | # When rhcos_installation_method=ipi, run_install is assumed to be True as well. 24 | # 25 | rhcos_installation_method=pxe 26 | vm_create_vms=False 27 | run_install=True 28 | 29 | # 30 | # OCP Installation directory 31 | # Depicts the directory on the bastion (current) node in which the OpenShift installation 32 | # artifacts will be stored. 33 | # 34 | ocp_install_dir="/ocp_install" 35 | 36 | # 37 | # Proxy settings. If the nodes can only reach the internet through a proxy, the proxy server must be configured 38 | # in the OpenShift installation configuration file. Make sure that you specify all 3 properties: http_proxy, https_proxy 39 | # and no_proxy. Property no_proxy must contain the domain name, the k8s internal addresses (10.1.0.0/16 and 172.30.0.0/16), 40 | # the internal services domain (.svc) and the IP range of the cluster nodes (for example 192.168.1.0/24). 41 | # Additionally, if the bastion (and other non-cluster nodes) have not yet 42 | # been configured with a global proxy environment variable, the preparation script can add this to the profile of all users. 43 | # 44 | #http_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 45 | #https_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 46 | #no_proxy=".{{domain_name}},10.1.0.0/16,{{service_network}},10.99.92.0/24,.svc" 47 | #configure_global_proxy=True 48 | 49 | # 50 | # OpenShift download URLs. These are the URLs used by the prepare script to download 51 | # the packages needed. If you want to install a different release of OpenShift, it is sufficient to change the 52 | # openshift_release property. The download scripts automatically select the relevant packages from the URL. 
53 | # 54 | openshift_release="4.10" 55 | openshift_base_url="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-{{openshift_release}}/" 56 | rhcos_base_url="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{openshift_release}}/latest/" 57 | openshift_client_package_pattern=".*(openshift-client.*tar.gz).*" 58 | openshift_install_package_pattern=".*(openshift-install-linux.*tar.gz).*" 59 | rhcos_metal_package_pattern=".*(rhcos.*metal.*raw.gz).*" 60 | rhcos_kernel_package_pattern=".*(rhcos.*-kernel.x86_64).*" 61 | rhcos_initramfs_package_pattern=".*(rhcos.*initramfs.x86_64.img).*" 62 | rhcos_rootfs_package_pattern=".*(rhcos.*rootfs.x86_64.img).*" 63 | rhcos_ova_package_pattern=".*(rhcos.*.ova).*" 64 | 65 | # 66 | # Indicate whether to opt out of remote health checking of the OpenShift cluster 67 | # 68 | opt_out_health_checking=False 69 | 70 | # 71 | # Indicates whether the chrony NTP service must be enabled on the bastion node. If enabled, 72 | # all OpenShift nodes, bootstrap, masters and workers will synchronize their times with the 73 | # bastion node instead of a public NTP server. If you use the bastion node as the NTP server, 74 | # you have to specify which range of servers will be allowed to synchronize 75 | # themselves with the bastion node; you can do this in the ntp_allow property. 76 | # Finally, specify a list of external servers with which the nodes will by synchronized, for example: 77 | # ntp_server=['0.rhel.pool.ntp.org', '1.rhel.pool.ntp.org']. 78 | # 79 | setup_chrony_server=False 80 | ntp_allow="10.99.92.0/24" 81 | ntp_servers=['10.99.240.1'] 82 | override_chrony_settings_on_cluster_nodes=True 83 | 84 | # 85 | # A DNS server (dnsmasq) will be set up on the bastion node. Specify the upstream name server 86 | # that dnsmasq will use. 87 | # 88 | external_name_servers=['8.8.8.8'] 89 | interface_script="/etc/sysconfig/network-scripts/ifcfg-ens192" 90 | 91 | # 92 | # Indicate whether DHCP and TFTP must run on the bastion node as part of dnsmasq. When using PXE boot, the masters 93 | # and worker will get an fixed IP address from the dnsmasq DHCP server. Specify a range of addresses from which the 94 | # DHCP server can issue an IP address. Every node configured in the bootstrap, masters and workers sections below will 95 | # have a specific dhcp-server entry in the dnsmasq configuration, specifying the IP address they will be assigned by 96 | # the DHCP server. If the DHCP server runs on the bastion node and the RHCOS installation (rhcos_installation_method) is PXE, 97 | # it will also serve the RHCOS ISO using the TFTP server. 98 | # 99 | dhcp_on_bastion=True 100 | dhcp_range=['10.99.92.51','10.99.92.60'] 101 | 102 | # 103 | # Indicate whether the load balancer must be configured by the prepare script. If you choose to use an external load 104 | # balancer, set the manage_load_balancer property to False. You can still configure the load balancer under the [lb] 105 | # section. 106 | # 107 | manage_load_balancer=True 108 | 109 | # 110 | # Indicate the port that the HTTP server (nginx) on the bastion node will listen on. The HTTP server provides access 111 | # to the ignition files required at boot time of the bootstrap, masters and workers, as well as PXE boot assets such as 112 | # RHCOS ISO. 113 | # 114 | http_server_port=8090 115 | 116 | # Set up desktop on the bastion node 117 | setup_desktop=False 118 | 119 | # Set the bastion hostname as part of the preparation. If set to True, the bastion hostname will be set to 120 | # .. 
121 | set_bastion_hostname=False 122 | 123 | # 124 | # If manage_nfs is True, the Ansible script will try to format and mount the NFS volume configured in the nfs_volume* 125 | # properties on the server indicated in the [nfs] section. If the value is False, the nfs server referred to in the [nfs] 126 | # section will still be used to configure the managed-nfs-storage storage class if specified. However in that case, it is assumed 127 | # that an external NFS server is available which is not configured by the Ansible playbooks. 128 | # 129 | # Volume parameters: these parameters indicate which volume the preparation scripts have to 130 | # format and mount. The "selector" parameter is used to find the device using the lsblk 131 | # command; you can specify the size (such as 500G) or any unique string that identifies 132 | # the device (such as sdb). 133 | # 134 | # The "nfs_volume_mount_path" indicates the mount point that is created for the device. Even if you are not managing NFS 135 | # yourself, but are using an external NFS server for NFS storge class, you should configure the path that must be mounted. 136 | # 137 | manage_nfs=True 138 | nfs_volume_selector="sdb" 139 | 140 | nfs_volume_mount_path="/nfs" 141 | 142 | # 143 | # Storageclass parameters: indicate whether NFS and/or Portworx storage classes must be created 144 | # 145 | 146 | # Create managed-nfs-storage storage class? 147 | create_nfs_sc=True 148 | 149 | # Install Portworx and create storage classes? 150 | create_portworx_sc=False 151 | 152 | # 153 | # Registry storage class and size to be used when image registry is configured. If NFS is used for 154 | # the image registry, the storage class is typically managed-nfs-storage. For OCS, it would be ocs-storagecluster-cephfs. 155 | # 156 | image_registry_storage_class=managed-nfs-storage 157 | image_registry_size=200Gi 158 | 159 | # This variable configures the subnet in which services will be created in the OpenShift Container Platform SDN. 160 | # Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, 161 | # nodes, or the master might require access to, or the installation will fail. Defaults to 172.30.0.0/16, 162 | # and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, 163 | # which the docker0 network bridge uses by default, or modify the docker0 network. 164 | # 165 | service_network="172.30.0.0/16" 166 | 167 | # 168 | # Additional variables for IPI (Installer-provisioned Infrastructure) installations. 169 | # 170 | apiVIP="10.99.92.51" 171 | ingressVIP="10.99.92.52" 172 | vm_number_of_masters=3 173 | vm_number_of_workers=3 174 | 175 | # 176 | # VMware envrionment specific properties. These are only used if you have selected the IPI installation method, or if 177 | # you want the prepare script to create the VMs or when using the vm_create.sh script to create these. 178 | # You can ignore these properties if you manually create the VMs. 
179 | # 180 | vc_vcenter="10.99.92.13" 181 | vc_datacenter="Datacenter1" 182 | vc_datastore="Datastore1" 183 | vc_cluster="Cluster1" 184 | vc_res_pool="resourcepool" 185 | vc_folder="fkocp48" 186 | vc_guest_id="rhel7_64Guest" 187 | vc_network="VM Network" 188 | vm_template="/Datacenter1/vm/Templates/RHCOS_4.10.1" 189 | 190 | # Bootstrap VM properties 191 | vm_bootstrap_mem=16384 192 | vm_bootstrap_cpu=4 193 | vm_bootstrap_disk=100 194 | 195 | # Master VM properties 196 | vm_master_mem=32768 197 | vm_master_cpu=8 198 | vm_master_disk=200 199 | 200 | # Worker VM properties 201 | vm_worker_mem=65536 202 | vm_worker_cpu=16 203 | vm_worker_disk=200 204 | 205 | [lb] 206 | 10.99.92.50 host="bastion" 207 | 208 | [nfs] 209 | 10.99.92.50 host="bastion" 210 | 211 | [bastion] 212 | 10.99.92.50 host="bastion" 213 | 214 | [bootstrap] 215 | 10.99.92.51 host="bootstrap" mac="xx:xx:xx:xx:xx:xx" 216 | 217 | [masters] 218 | 10.99.92.52 host="master-1" mac="xx:xx:xx:xx:xx:xx" 219 | 10.99.92.53 host="master-2" mac="xx:xx:xx:xx:xx:xx" 220 | 10.99.92.54 host="master-3" mac="xx:xx:xx:xx:xx:xx" 221 | 222 | [workers] 223 | 10.99.92.55 host="worker-1" mac="xx:xx:xx:xx:xx:xx" 224 | 10.99.92.56 host="worker-2" mac="xx:xx:xx:xx:xx:xx" 225 | 10.99.92.57 host="worker-3" mac="xx:xx:xx:xx:xx:xx" 226 | -------------------------------------------------------------------------------- /inventory/vmware-example-48-ipi.inv: -------------------------------------------------------------------------------- 1 | [all:children] 2 | masters 3 | workers 4 | 5 | [all:vars] 6 | ansible_ssh_common_args='-o StrictHostKeyChecking=no' 7 | domain_name="coc.ibm.com" 8 | cluster_name="ocp48" 9 | default_gateway="10.99.92.1" 10 | 11 | # 12 | # Indicate the method by which Red Hat CoreOS will be installed. This can be one of the following values: 13 | # - pxe: CoreOS will be loaded on the bootstrap, masters and workers using PXE-boot. This is the default. 14 | # - ova: Only for VMWare. CoreOS will be loaded using a VMWare template (imported OVA file). 15 | # - ipi: Only for VMWare. CoreOS and OpenShift will be installed via Installer-provisioned infrastructure. OpenShift 16 | # will take care of creating the required VMs and automatically assigns hostnames. If you choose this installation 17 | # method, please also complete the IPI section in the inventory file. 18 | # 19 | # For pxe and ova installs you have the option of letting the prepare script create the VMs for you by setting 20 | # vm_create_vms=True. This can only be done if you specify the vc_user and vc_password properties at the command line. 21 | # If you specify run_install=True and vm_create_vms=True, the script will start the virtual machines. Otherwise, you must 22 | # start the bootstrap, masters and workers yourself while the script is waiting for the bootstrap. 23 | # When rhcos_installation_method=ipi, run_install is assumed to be True as well. 24 | # 25 | rhcos_installation_method=ipi 26 | vm_create_vms=False 27 | run_install=True 28 | 29 | # 30 | # OCP Installation directory 31 | # Depicts the directory on the bastion (current) node in which the OpenShift installation 32 | # artifacts will be stored. 33 | # 34 | ocp_install_dir="/ocp_install" 35 | 36 | # 37 | # Proxy settings. If the nodes can only reach the internet through a proxy, the proxy server must be configured 38 | # in the OpenShift installation configuration file. Make sure that you specify all 3 properties: http_proxy, https_proxy 39 | # and no_proxy. 
Property no_proxy must contain the domain name, the k8s internal addresses (10.1.0.0/16 and 172.30.0.0/16), 40 | # the internal services domain (.svc) and the IP range of the cluster nodes (for example 192.168.1.0/24). 41 | # Additionally, if the bastion (and other non-cluster nodes) have not yet 42 | # been configured with a global proxy environment variable, the preparation script can add this to the profile of all users. 43 | # 44 | #http_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 45 | #https_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 46 | #no_proxy=".{{domain_name}},10.1.0.0/16,{{service_network}},10.99.92.0/24,.svc" 47 | #configure_global_proxy=True 48 | 49 | # 50 | # OpenShift download URLs. These are the URLs used by the prepare script to download 51 | # the packages needed. If you want to install a different release of OpenShift, it is sufficient to change the 52 | # openshift_release property. The download scripts automatically select the relevant packages from the URL. 53 | # 54 | openshift_release="4.8" 55 | openshift_base_url="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-{{openshift_release}}/" 56 | rhcos_base_url="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{openshift_release}}/latest/" 57 | openshift_client_package_pattern=".*(openshift-client.*tar.gz).*" 58 | openshift_install_package_pattern=".*(openshift-install-linux.*tar.gz).*" 59 | rhcos_metal_package_pattern=".*(rhcos.*metal.*raw.gz).*" 60 | rhcos_kernel_package_pattern=".*(rhcos.*-kernel.x86_64).*" 61 | rhcos_initramfs_package_pattern=".*(rhcos.*initramfs.x86_64.img).*" 62 | rhcos_rootfs_package_pattern=".*(rhcos.*rootfs.x86_64.img).*" 63 | rhcos_ova_package_pattern=".*(rhcos.*.ova).*" 64 | 65 | # 66 | # Indicate whether to opt out of remote health checking of the OpenShift cluster 67 | # 68 | opt_out_health_checking=False 69 | 70 | # 71 | # Indicates whether the chrony NTP service must be enabled on the bastion node. If enabled, 72 | # all OpenShift nodes, bootstrap, masters and workers will synchronize their times with the 73 | # bastion node instead of a public NTP server. If you use the bastion node as the NTP server, 74 | # you have to specify which range of servers will be allowed to synchronize 75 | # themselves with the bastion node; you can do this in the ntp_allow property. 76 | # Finally, specify a list of external servers with which the nodes will by synchronized, for example: 77 | # ntp_server=['0.rhel.pool.ntp.org', '1.rhel.pool.ntp.org']. 78 | # 79 | setup_chrony_server=False 80 | ntp_allow="10.99.92.0/24" 81 | ntp_servers=['10.99.240.1'] 82 | override_chrony_settings_on_cluster_nodes=True 83 | 84 | # 85 | # A DNS server (dnsmasq) will be set up on the bastion node. Specify the upstream name server 86 | # that dnsmasq will use. 87 | # 88 | external_name_servers=['8.8.8.8'] 89 | interface_script="/etc/sysconfig/network-scripts/ifcfg-ens192" 90 | 91 | # 92 | # Indicate whether DHCP and TFTP must run on the bastion node as part of dnsmasq. When using PXE boot, the masters 93 | # and worker will get an fixed IP address from the dnsmasq DHCP server. Specify a range of addresses from which the 94 | # DHCP server can issue an IP address. Every node configured in the bootstrap, masters and workers sections below will 95 | # have a specific dhcp-server entry in the dnsmasq configuration, specifying the IP address they will be assigned by 96 | # the DHCP server. 
If the DHCP server runs on the bastion node and the RHCOS installation (rhcos_installation_method) is PXE, 97 | # it will also serve the RHCOS ISO using the TFTP server. 98 | # 99 | dhcp_on_bastion=True 100 | dhcp_range=['10.99.92.53','10.99.92.70'] 101 | 102 | # 103 | # Indicate whether the load balancer must be configured by the prepare script. If you choose to use an external load 104 | # balancer, set the manage_load_balancer property to False. You can still configure the load balancer under the [lb] 105 | # section. 106 | # 107 | manage_load_balancer=True 108 | 109 | # 110 | # Indicate the port that the HTTP server (nginx) on the bastion node will listen on. The HTTP server provides access 111 | # to the ignition files required at boot time of the bootstrap, masters and workers, as well as PXE boot assets such as 112 | # RHCOS ISO. 113 | # 114 | http_server_port=8090 115 | 116 | # Set up desktop on the bastion node 117 | setup_desktop=False 118 | 119 | # Set the bastion hostname as part of the preparation. If set to True, the bastion hostname will be set to 120 | # .. 121 | set_bastion_hostname=False 122 | 123 | # 124 | # If manage_nfs is True, the Ansible script will try to format and mount the NFS volume configured in the nfs_volume* 125 | # properties on the server indicated in the [nfs] section. If the value is False, the nfs server referred to in the [nfs] 126 | # section will still be used to configure the managed-nfs-storage storage class if specified. However in that case, it is assumed 127 | # that an external NFS server is available which is not configured by the Ansible playbooks. 128 | # 129 | # Volume parameters: these parameters indicate which volume the preparation scripts have to 130 | # format and mount. The "selector" parameter is used to find the device using the lsblk 131 | # command; you can specify the size (such as 500G) or any unique string that identifies 132 | # the device (such as sdb). 133 | # 134 | # The "nfs_volume_mount_path" indicates the mount point that is created for the device. Even if you are not managing NFS 135 | # yourself, but are using an external NFS server for NFS storge class, you should configure the path that must be mounted. 136 | # 137 | manage_nfs=True 138 | nfs_volume_selector="sdb" 139 | 140 | nfs_volume_mount_path="/nfs" 141 | 142 | # 143 | # Storageclass parameters: indicate whether NFS and/or Portworx storage classes must be created 144 | # 145 | 146 | # Create managed-nfs-storage storage class? 147 | create_nfs_sc=True 148 | 149 | # Install Portworx and create storage classes? 150 | create_portworx_sc=False 151 | 152 | # 153 | # Registry storage class and size to be used when image registry is configured. If NFS is used for 154 | # the image registry, the storage class is typically managed-nfs-storage. For OCS, it would be ocs-storagecluster-cephfs. 155 | # 156 | image_registry_storage_class=managed-nfs-storage 157 | image_registry_size=200Gi 158 | 159 | # This variable configures the subnet in which services will be created in the OpenShift Container Platform SDN. 160 | # Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, 161 | # nodes, or the master might require access to, or the installation will fail. Defaults to 172.30.0.0/16, 162 | # and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, 163 | # which the docker0 network bridge uses by default, or modify the docker0 network. 
164 | # 165 | service_network="172.30.0.0/16" 166 | 167 | # 168 | # Additional variables for IPI (Installer-provisioned Infrastructure) installations. 169 | # 170 | apiVIP="10.99.92.51" 171 | ingressVIP="10.99.92.52" 172 | vm_number_of_masters=3 173 | vm_number_of_workers=3 174 | 175 | # 176 | # VMware envrionment specific properties. These are required for the IPI installation method. 177 | # 178 | vc_vcenter="10.99.92.13" 179 | vc_datacenter="Datacenter1" 180 | vc_datastore="Datastore1" 181 | vc_cluster="Cluster1" 182 | vc_res_pool="resourcepool" 183 | vc_folder="fkocp48" 184 | vc_network="VM Network" 185 | vm_template="/Datacenter1/vm/Templates/RHCOS_4.8.8" 186 | 187 | # Master VM properties 188 | vm_master_mem=32768 189 | vm_master_cpu=8 190 | vm_master_disk=200 191 | 192 | # Worker VM properties 193 | vm_worker_mem=65536 194 | vm_worker_cpu=16 195 | vm_worker_disk=200 196 | 197 | [lb] 198 | 10.99.92.50 host="bastion" 199 | 200 | [nfs] 201 | 10.99.92.50 host="bastion" 202 | 203 | [bastion] 204 | 10.99.92.50 host="bastion" 205 | 206 | [bootstrap] 207 | # Ignored for IPI installations 208 | 209 | [masters] 210 | # Ignored for IPI installations 211 | 212 | [workers] 213 | # Ignored for IPI installations -------------------------------------------------------------------------------- /inventory/vmware-example-48.inv: -------------------------------------------------------------------------------- 1 | [all:children] 2 | masters 3 | workers 4 | 5 | [all:vars] 6 | ansible_ssh_common_args='-o StrictHostKeyChecking=no' 7 | domain_name="coc.ibm.com" 8 | cluster_name="ocp48" 9 | default_gateway="10.99.92.1" 10 | 11 | # 12 | # Indicate the method by which Red Hat CoreOS will be installed. This can be one of the following values: 13 | # - pxe: CoreOS will be loaded on the bootstrap, masters and workers using PXE-boot. This is the default. 14 | # - ova: Only for VMWare. CoreOS will be loaded using a VMWare template (imported OVA file). 15 | # - ipi: Only for VMWare. CoreOS and OpenShift will be installed via Installer-provisioned infrastructure. OpenShift 16 | # will take care of creating the required VMs and automatically assigns hostnames. If you choose this installation 17 | # method, please also complete the IPI section in the inventory file. 18 | # 19 | # For pxe and ova installs you have the option of letting the prepare script create the VMs for you by setting 20 | # vm_create_vms=True. This can only be done if you specify the vc_user and vc_password properties at the command line. 21 | # If you specify run_install=True and vm_create_vms=True, the script will start the virtual machines. Otherwise, you must 22 | # start the bootstrap, masters and workers yourself while the script is waiting for the bootstrap. 23 | # When rhcos_installation_method=ipi, run_install is assumed to be True as well. 24 | # 25 | rhcos_installation_method=pxe 26 | vm_create_vms=False 27 | run_install=True 28 | 29 | # 30 | # OCP Installation directory 31 | # Depicts the directory on the bastion (current) node in which the OpenShift installation 32 | # artifacts will be stored. 33 | # 34 | ocp_install_dir="/ocp_install" 35 | 36 | # 37 | # Proxy settings. If the nodes can only reach the internet through a proxy, the proxy server must be configured 38 | # in the OpenShift installation configuration file. Make sure that you specify all 3 properties: http_proxy, https_proxy 39 | # and no_proxy. 
Property no_proxy must contain the domain name, the k8s internal addresses (10.1.0.0/16 and 172.30.0.0/16), 40 | # the internal services domain (.svc) and the IP range of the cluster nodes (for example 192.168.1.0/24). 41 | # Additionally, if the bastion (and other non-cluster nodes) have not yet 42 | # been configured with a global proxy environment variable, the preparation script can add this to the profile of all users. 43 | # 44 | #http_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 45 | #https_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 46 | #no_proxy=".{{domain_name}},10.1.0.0/16,{{service_network}},10.99.92.0/24,.svc" 47 | #configure_global_proxy=True 48 | 49 | # 50 | # OpenShift download URLs. These are the URLs used by the prepare script to download 51 | # the packages needed. If you want to install a different release of OpenShift, it is sufficient to change the 52 | # openshift_release property. The download scripts automatically select the relevant packages from the URL. 53 | # 54 | openshift_release="4.8" 55 | openshift_base_url="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-{{openshift_release}}/" 56 | rhcos_base_url="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{openshift_release}}/latest/" 57 | openshift_client_package_pattern=".*(openshift-client.*tar.gz).*" 58 | openshift_install_package_pattern=".*(openshift-install-linux.*tar.gz).*" 59 | rhcos_metal_package_pattern=".*(rhcos.*metal.*raw.gz).*" 60 | rhcos_kernel_package_pattern=".*(rhcos.*-kernel.x86_64).*" 61 | rhcos_initramfs_package_pattern=".*(rhcos.*initramfs.x86_64.img).*" 62 | rhcos_rootfs_package_pattern=".*(rhcos.*rootfs.x86_64.img).*" 63 | rhcos_ova_package_pattern=".*(rhcos.*.ova).*" 64 | 65 | # 66 | # Indicate whether to opt out of remote health checking of the OpenShift cluster 67 | # 68 | opt_out_health_checking=False 69 | 70 | # 71 | # Indicates whether the chrony NTP service must be enabled on the bastion node. If enabled, 72 | # all OpenShift nodes, bootstrap, masters and workers will synchronize their times with the 73 | # bastion node instead of a public NTP server. If you use the bastion node as the NTP server, 74 | # you have to specify which range of servers will be allowed to synchronize 75 | # themselves with the bastion node; you can do this in the ntp_allow property. 76 | # Finally, specify a list of external servers with which the nodes will by synchronized, for example: 77 | # ntp_server=['0.rhel.pool.ntp.org', '1.rhel.pool.ntp.org']. 78 | # 79 | setup_chrony_server=False 80 | ntp_allow="10.99.92.0/24" 81 | ntp_servers=['10.99.240.1'] 82 | override_chrony_settings_on_cluster_nodes=True 83 | 84 | # 85 | # A DNS server (dnsmasq) will be set up on the bastion node. Specify the upstream name server 86 | # that dnsmasq will use. 87 | # 88 | external_name_servers=['8.8.8.8'] 89 | interface_script="/etc/sysconfig/network-scripts/ifcfg-ens192" 90 | 91 | # 92 | # Indicate whether DHCP and TFTP must run on the bastion node as part of dnsmasq. When using PXE boot, the masters 93 | # and worker will get an fixed IP address from the dnsmasq DHCP server. Specify a range of addresses from which the 94 | # DHCP server can issue an IP address. Every node configured in the bootstrap, masters and workers sections below will 95 | # have a specific dhcp-server entry in the dnsmasq configuration, specifying the IP address they will be assigned by 96 | # the DHCP server. 
If the DHCP server runs on the bastion node and the RHCOS installation (rhcos_installation_method) is PXE, 97 | # it will also serve the RHCOS ISO using the TFTP server. 98 | # 99 | dhcp_on_bastion=True 100 | dhcp_range=['10.99.92.51','10.99.92.60'] 101 | 102 | # 103 | # Indicate whether the load balancer must be configured by the prepare script. If you choose to use an external load 104 | # balancer, set the manage_load_balancer property to False. You can still configure the load balancer under the [lb] 105 | # section. 106 | # 107 | manage_load_balancer=True 108 | 109 | # 110 | # Indicate the port that the HTTP server (nginx) on the bastion node will listen on. The HTTP server provides access 111 | # to the ignition files required at boot time of the bootstrap, masters and workers, as well as PXE boot assets such as 112 | # RHCOS ISO. 113 | # 114 | http_server_port=8090 115 | 116 | # Set up desktop on the bastion node 117 | setup_desktop=False 118 | 119 | # Set the bastion hostname as part of the preparation. If set to True, the bastion hostname will be set to 120 | # .. 121 | set_bastion_hostname=False 122 | 123 | # 124 | # If manage_nfs is True, the Ansible script will try to format and mount the NFS volume configured in the nfs_volume* 125 | # properties on the server indicated in the [nfs] section. If the value is False, the nfs server referred to in the [nfs] 126 | # section will still be used to configure the managed-nfs-storage storage class if specified. However in that case, it is assumed 127 | # that an external NFS server is available which is not configured by the Ansible playbooks. 128 | # 129 | # Volume parameters: these parameters indicate which volume the preparation scripts have to 130 | # format and mount. The "selector" parameter is used to find the device using the lsblk 131 | # command; you can specify the size (such as 500G) or any unique string that identifies 132 | # the device (such as sdb). 133 | # 134 | # The "nfs_volume_mount_path" indicates the mount point that is created for the device. Even if you are not managing NFS 135 | # yourself, but are using an external NFS server for NFS storge class, you should configure the path that must be mounted. 136 | # 137 | manage_nfs=True 138 | nfs_volume_selector="sdb" 139 | 140 | nfs_volume_mount_path="/nfs" 141 | 142 | # 143 | # Storageclass parameters: indicate whether NFS and/or Portworx storage classes must be created 144 | # 145 | 146 | # Create managed-nfs-storage storage class? 147 | create_nfs_sc=True 148 | 149 | # Install Portworx and create storage classes? 150 | create_portworx_sc=False 151 | 152 | # 153 | # Registry storage class and size to be used when image registry is configured. If NFS is used for 154 | # the image registry, the storage class is typically managed-nfs-storage. For OCS, it would be ocs-storagecluster-cephfs. 155 | # 156 | image_registry_storage_class=managed-nfs-storage 157 | image_registry_size=200Gi 158 | 159 | # This variable configures the subnet in which services will be created in the OpenShift Container Platform SDN. 160 | # Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, 161 | # nodes, or the master might require access to, or the installation will fail. Defaults to 172.30.0.0/16, 162 | # and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, 163 | # which the docker0 network bridge uses by default, or modify the docker0 network. 
164 | # 165 | service_network="172.30.0.0/16" 166 | 167 | # 168 | # Additional variables for IPI (Installer-provisioned Infrastructure) installations. 169 | # 170 | apiVIP="10.99.92.51" 171 | ingressVIP="10.99.92.52" 172 | vm_number_of_masters=3 173 | vm_number_of_workers=3 174 | 175 | # 176 | # VMware envrionment specific properties. These are only used if you have selected the IPI installation method, or if 177 | # you want the prepare script to create the VMs or when using the vm_create.sh script to create these. 178 | # You can ignore these properties if you manually create the VMs. 179 | # 180 | vc_vcenter="10.99.92.13" 181 | vc_datacenter="Datacenter1" 182 | vc_datastore="Datastore1" 183 | vc_cluster="Cluster1" 184 | vc_res_pool="resourcepool" 185 | vc_folder="fkocp48" 186 | vc_guest_id="rhel7_64Guest" 187 | vc_network="VM Network" 188 | vm_template="/Datacenter1/vm/Templates/RHCOS_4.8.8" 189 | 190 | # Bootstrap VM properties 191 | vm_bootstrap_mem=16384 192 | vm_bootstrap_cpu=4 193 | vm_bootstrap_disk=100 194 | 195 | # Master VM properties 196 | vm_master_mem=32768 197 | vm_master_cpu=8 198 | vm_master_disk=200 199 | 200 | # Worker VM properties 201 | vm_worker_mem=65536 202 | vm_worker_cpu=16 203 | vm_worker_disk=200 204 | 205 | [lb] 206 | 10.99.92.50 host="bastion" 207 | 208 | [nfs] 209 | 10.99.92.50 host="bastion" 210 | 211 | [bastion] 212 | 10.99.92.50 host="bastion" 213 | 214 | [bootstrap] 215 | 10.99.92.51 host="bootstrap" mac="xx:xx:xx:xx:xx:xx" 216 | 217 | [masters] 218 | 10.99.92.52 host="master-1" mac="xx:xx:xx:xx:xx:xx" 219 | 10.99.92.53 host="master-2" mac="xx:xx:xx:xx:xx:xx" 220 | 10.99.92.54 host="master-3" mac="xx:xx:xx:xx:xx:xx" 221 | 222 | [workers] 223 | 10.99.92.55 host="worker-1" mac="xx:xx:xx:xx:xx:xx" 224 | 10.99.92.56 host="worker-2" mac="xx:xx:xx:xx:xx:xx" 225 | 10.99.92.57 host="worker-3" mac="xx:xx:xx:xx:xx:xx" 226 | -------------------------------------------------------------------------------- /inventory/vmware-example-49.inv: -------------------------------------------------------------------------------- 1 | [all:children] 2 | masters 3 | workers 4 | 5 | [all:vars] 6 | ansible_ssh_common_args='-o StrictHostKeyChecking=no' 7 | domain_name="coc.ibm.com" 8 | cluster_name="ocp49" 9 | default_gateway="10.99.92.1" 10 | 11 | # 12 | # Indicate the method by which Red Hat CoreOS will be installed. This can be one of the following values: 13 | # - pxe: CoreOS will be loaded on the bootstrap, masters and workers using PXE-boot. This is the default. 14 | # - ova: Only for VMWare. CoreOS will be loaded using a VMWare template (imported OVA file). 15 | # - ipi: Only for VMWare. CoreOS and OpenShift will be installed via Installer-provisioned infrastructure. OpenShift 16 | # will take care of creating the required VMs and automatically assigns hostnames. If you choose this installation 17 | # method, please also complete the IPI section in the inventory file. 18 | # 19 | # For pxe and ova installs you have the option of letting the prepare script create the VMs for you by setting 20 | # vm_create_vms=True. This can only be done if you specify the vc_user and vc_password properties at the command line. 21 | # If you specify run_install=True and vm_create_vms=True, the script will start the virtual machines. Otherwise, you must 22 | # start the bootstrap, masters and workers yourself while the script is waiting for the bootstrap. 23 | # When rhcos_installation_method=ipi, run_install is assumed to be True as well. 
24 | # 25 | rhcos_installation_method=pxe 26 | vm_create_vms=False 27 | run_install=True 28 | 29 | # 30 | # OCP Installation directory 31 | # Depicts the directory on the bastion (current) node in which the OpenShift installation 32 | # artifacts will be stored. 33 | # 34 | ocp_install_dir="/ocp_install" 35 | 36 | # 37 | # Proxy settings. If the nodes can only reach the internet through a proxy, the proxy server must be configured 38 | # in the OpenShift installation configuration file. Make sure that you specify all 3 properties: http_proxy, https_proxy 39 | # and no_proxy. Property no_proxy must contain the domain name, the k8s internal addresses (10.1.0.0/16 and 172.30.0.0/16), 40 | # the internal services domain (.svc) and the IP range of the cluster nodes (for example 192.168.1.0/24). 41 | # Additionally, if the bastion (and other non-cluster nodes) have not yet 42 | # been configured with a global proxy environment variable, the preparation script can add this to the profile of all users. 43 | # 44 | #http_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 45 | #https_proxy="http://bastion.{{cluster_name}}.{{domain_name}}:3128" 46 | #no_proxy=".{{domain_name}},10.1.0.0/16,{{service_network}},10.99.92.0/24,.svc" 47 | #configure_global_proxy=True 48 | 49 | # 50 | # OpenShift download URLs. These are the URLs used by the prepare script to download 51 | # the packages needed. If you want to install a different release of OpenShift, it is sufficient to change the 52 | # openshift_release property. The download scripts automatically select the relevant packages from the URL. 53 | # 54 | openshift_release="4.9" 55 | openshift_base_url="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest-{{openshift_release}}/" 56 | rhcos_base_url="https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/{{openshift_release}}/latest/" 57 | openshift_client_package_pattern=".*(openshift-client.*tar.gz).*" 58 | openshift_install_package_pattern=".*(openshift-install-linux.*tar.gz).*" 59 | rhcos_metal_package_pattern=".*(rhcos.*metal.*raw.gz).*" 60 | rhcos_kernel_package_pattern=".*(rhcos.*-kernel.x86_64).*" 61 | rhcos_initramfs_package_pattern=".*(rhcos.*initramfs.x86_64.img).*" 62 | rhcos_rootfs_package_pattern=".*(rhcos.*rootfs.x86_64.img).*" 63 | rhcos_ova_package_pattern=".*(rhcos.*.ova).*" 64 | 65 | # 66 | # Indicate whether to opt out of remote health checking of the OpenShift cluster 67 | # 68 | opt_out_health_checking=False 69 | 70 | # 71 | # Indicates whether the chrony NTP service must be enabled on the bastion node. If enabled, 72 | # all OpenShift nodes, bootstrap, masters and workers will synchronize their times with the 73 | # bastion node instead of a public NTP server. If you use the bastion node as the NTP server, 74 | # you have to specify which range of servers will be allowed to synchronize 75 | # themselves with the bastion node; you can do this in the ntp_allow property. 76 | # Finally, specify a list of external servers with which the nodes will by synchronized, for example: 77 | # ntp_server=['0.rhel.pool.ntp.org', '1.rhel.pool.ntp.org']. 78 | # 79 | setup_chrony_server=False 80 | ntp_allow="10.99.92.0/24" 81 | ntp_servers=['10.99.240.1'] 82 | override_chrony_settings_on_cluster_nodes=True 83 | 84 | # 85 | # A DNS server (dnsmasq) will be set up on the bastion node. Specify the upstream name server 86 | # that dnsmasq will use. 
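# For example, after the preparation has run you can check that dnsmasq on the bastion resolves the cluster
# names (host names derived from cluster_name/domain_name in this file, bastion address from the [bastion] section;
# the second lookup exercises the wildcard *.apps entry):
#   dig +short api.ocp49.coc.ibm.com @10.99.92.50
#   dig +short test.apps.ocp49.coc.ibm.com @10.99.92.50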
87 | # 88 | external_name_servers=['8.8.8.8'] 89 | interface_script="/etc/sysconfig/network-scripts/ifcfg-ens192" 90 | 91 | # 92 | # Indicate whether DHCP and TFTP must run on the bastion node as part of dnsmasq. When using PXE boot, the masters 93 | # and workers will get a fixed IP address from the dnsmasq DHCP server. Specify a range of addresses from which the 94 | # DHCP server can issue an IP address. Every node configured in the bootstrap, masters and workers sections below will 95 | # have a specific dhcp-host entry in the dnsmasq configuration, specifying the IP address they will be assigned by 96 | # the DHCP server. If the DHCP server runs on the bastion node and the RHCOS installation (rhcos_installation_method) is PXE, 97 | # it will also serve the RHCOS ISO using the TFTP server. 98 | # 99 | dhcp_on_bastion=True 100 | dhcp_range=['10.99.92.51','10.99.92.60'] 101 | 102 | # 103 | # Indicate whether the load balancer must be configured by the prepare script. If you choose to use an external load 104 | # balancer, set the manage_load_balancer property to False. You can still configure the load balancer under the [lb] 105 | # section. 106 | # 107 | manage_load_balancer=True 108 | 109 | # 110 | # Indicate the port that the HTTP server (nginx) on the bastion node will listen on. The HTTP server provides access 111 | # to the ignition files required at boot time of the bootstrap, masters and workers, as well as PXE boot assets such as 112 | # RHCOS ISO. 113 | # 114 | http_server_port=8090 115 | 116 | # Set up desktop on the bastion node 117 | setup_desktop=False 118 | 119 | # Set the bastion hostname as part of the preparation. If set to True, the bastion hostname will be set to 120 | # <bastion host>.<cluster_name>.<domain_name> 121 | set_bastion_hostname=False 122 | 123 | # 124 | # If manage_nfs is True, the Ansible script will try to format and mount the NFS volume configured in the nfs_volume* 125 | # properties on the server indicated in the [nfs] section. If the value is False, the nfs server referred to in the [nfs] 126 | # section will still be used to configure the managed-nfs-storage storage class if specified. However, in that case it is assumed 127 | # that an external NFS server is available which is not configured by the Ansible playbooks. 128 | # 129 | # Volume parameters: these parameters indicate which volume the preparation scripts have to 130 | # format and mount. The "selector" parameter is used to find the device using the lsblk 131 | # command; you can specify the size (such as 500G) or any unique string that identifies 132 | # the device (such as sdb). 133 | # 134 | # The "nfs_volume_mount_path" indicates the mount point that is created for the device. Even if you are not managing NFS 135 | # yourself, but are using an external NFS server for the NFS storage class, you should configure the path that must be mounted. 136 | # 137 | manage_nfs=True 138 | nfs_volume_selector="sdb" 139 | 140 | nfs_volume_mount_path="/nfs" 141 | 142 | # 143 | # Storage class parameters: indicate whether NFS and/or Portworx storage classes must be created 144 | # 145 | 146 | # Create managed-nfs-storage storage class? 147 | create_nfs_sc=True 148 | 149 | # Install Portworx and create storage classes? 150 | create_portworx_sc=False 151 | 152 | # 153 | # Registry storage class and size to be used when image registry is configured. If NFS is used for 154 | # the image registry, the storage class is typically managed-nfs-storage. For OCS, it would be ocs-storagecluster-cephfs. 
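# For example, after the image registry has been configured you can inspect the resulting storage settings with:
#   oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}{"\n"}'
#   oc -n openshift-image-registry get pvc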
155 | # 156 | image_registry_storage_class=managed-nfs-storage 157 | image_registry_size=200Gi 158 | 159 | # This variable configures the subnet in which services will be created in the OpenShift Container Platform SDN. 160 | # Specify a private block that does not conflict with any existing network blocks in your infrastructure to which pods, 161 | # nodes, or the master might require access; otherwise the installation will fail. Defaults to 172.30.0.0/16, 162 | # and cannot be re-configured after deployment. If changing from the default, avoid 172.17.0.0/16, 163 | # which the docker0 network bridge uses by default, or modify the docker0 network. 164 | # 165 | service_network="172.30.0.0/16" 166 | 167 | # 168 | # Additional variables for IPI (Installer-provisioned Infrastructure) installations. 169 | # 170 | apiVIP="10.99.92.51" 171 | ingressVIP="10.99.92.52" 172 | vm_number_of_masters=3 173 | vm_number_of_workers=3 174 | 175 | # 176 | # VMware environment specific properties. These are only used if you have selected the IPI installation method, or if 177 | # you want the prepare script to create the VMs, or when using the vm_create.sh script to create them. 178 | # You can ignore these properties if you manually create the VMs. 179 | # 180 | vc_vcenter="10.99.92.13" 181 | vc_datacenter="Datacenter1" 182 | vc_datastore="Datastore1" 183 | vc_cluster="Cluster1" 184 | vc_res_pool="resourcepool" 185 | vc_folder="fkocp48" 186 | vc_guest_id="rhel7_64Guest" 187 | vc_network="VM Network" 188 | vm_template="/Datacenter1/vm/Templates/RHCOS_4.9.3" 189 | 190 | # Bootstrap VM properties 191 | vm_bootstrap_mem=16384 192 | vm_bootstrap_cpu=4 193 | vm_bootstrap_disk=100 194 | 195 | # Master VM properties 196 | vm_master_mem=32768 197 | vm_master_cpu=8 198 | vm_master_disk=200 199 | 200 | # Worker VM properties 201 | vm_worker_mem=65536 202 | vm_worker_cpu=16 203 | vm_worker_disk=200 204 | 205 | [lb] 206 | 10.99.92.50 host="bastion" 207 | 208 | [nfs] 209 | 10.99.92.50 host="bastion" 210 | 211 | [bastion] 212 | 10.99.92.50 host="bastion" 213 | 214 | [bootstrap] 215 | 10.99.92.51 host="bootstrap" mac="xx:xx:xx:xx:xx:xx" 216 | 217 | [masters] 218 | 10.99.92.52 host="master-1" mac="xx:xx:xx:xx:xx:xx" 219 | 10.99.92.53 host="master-2" mac="xx:xx:xx:xx:xx:xx" 220 | 10.99.92.54 host="master-3" mac="xx:xx:xx:xx:xx:xx" 221 | 222 | [workers] 223 | 10.99.92.55 host="worker-1" mac="xx:xx:xx:xx:xx:xx" 224 | 10.99.92.56 host="worker-2" mac="xx:xx:xx:xx:xx:xx" 225 | 10.99.92.57 host="worker-3" mac="xx:xx:xx:xx:xx:xx" 226 | -------------------------------------------------------------------------------- /playbooks/ocp4.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # This playbook prepares the environment to install OpenShift 4.x 3 | 4 | # 5 | # The playbook consists of the following sections: 6 | # 1. Initialize bastion 7 | # 2. Provision infrastructure for UPI install 8 | # 3. Download OpenShift installer and client 9 | # 4. Configure infrastructure services (DNS, DHCP, Chrony, HAProxy, HTTP Server, NFS) 10 | # 5. Install OpenShift 11 | # 6. Wait for installation to complete 12 | # 7. Create storage class(es) and configure internal registry 13 | # 8. Post-installation tasks 14 | 15 | # 16 | # 1. 
Initialize bastion node 17 | # 18 | - name: Initialize bastion 19 | hosts: localhost 20 | connection: local 21 | gather_facts: no 22 | become: no 23 | tasks: 24 | - include: tasks/init_facts.yaml 25 | - include: tasks/check_variables.yaml 26 | - include: tasks/dns_reset.yaml 27 | - include: tasks/firewall_iptables_remove.yaml 28 | - include: tasks/set_hostname.yaml 29 | - include: tasks/hosts.yaml 30 | tags: hosts 31 | - name: Configure SSH 32 | import_playbook: tasks/ssh_playbook.yaml 33 | tags: ssh 34 | 35 | # 36 | # 2. Provision infrastructure for UPI install 37 | # 38 | - name: Provision infrastructure for UPI install 39 | import_playbook: ocp4_vm_create.yaml 40 | when: vm_create_vms|bool and rhcos_installation_method|upper!="IPI" 41 | tags: vm_create 42 | 43 | # 44 | # 3. Download OpenShift installer and client 45 | # 46 | - name: Download OpenShift installer and client 47 | hosts: localhost 48 | connection: local 49 | gather_facts: yes 50 | become: no 51 | tasks: 52 | - include: tasks/ocp_install_dir.yaml 53 | - include: tasks/ocp_download.yaml 54 | tags: download 55 | when: install_script_already_run==False 56 | 57 | # 58 | # 4. Configure instrastructure services (DNS, DHCP, Chrony, HAProxy, HTTP Server, NFS) 59 | # 60 | - name: Configure infrastructure services 61 | hosts: localhost 62 | connection: local 63 | gather_facts: yes 64 | become: no 65 | tasks: 66 | - include: tasks/global_proxy.yaml 67 | tags: proxy 68 | - include: tasks/chrony_server.yaml 69 | tags: chrony 70 | - include: tasks/chrony_client.yaml 71 | tags: chrony 72 | - include: tasks/http_server.yaml 73 | tags: http 74 | - include: tasks/etc_ansible_hosts.yaml 75 | - include: tasks/dnsmasq_libvirt_remove.yaml 76 | - block: 77 | - include: tasks/tftpboot.yaml 78 | tags: tftp 79 | - include: tasks/pxe_links.yaml 80 | when: rhcos_installation_method|upper=="PXE" 81 | - include: tasks/dns_server.yaml 82 | tags: dns 83 | - include: tasks/dns_interface.yaml 84 | tags: dns 85 | 86 | - hosts: lb 87 | gather_facts: yes 88 | become: yes 89 | tasks: 90 | - include: tasks/haproxy.yaml 91 | tags: haproxy 92 | when: manage_load_balancer is not defined or manage_load_balancer|bool 93 | 94 | - hosts: nfs 95 | gather_facts: no 96 | become: yes 97 | tasks: 98 | - name: Prepare NFS volume 99 | include: tasks/prepare_xfs_volume.yaml 100 | vars: 101 | volume_selector: "{{ nfs_volume_selector }}" 102 | mount_point: "{{ nfs_volume_mount_path }}" 103 | when: manage_nfs|bool 104 | - name: Configure NFS server 105 | include: tasks/nfs_server.yaml 106 | when: manage_nfs|bool 107 | 108 | # 109 | # 5. 
Install OpenShift 110 | # 111 | - name: Prepare and install OpenShift 112 | hosts: localhost 113 | connection: local 114 | gather_facts: yes 115 | become: yes 116 | tasks: 117 | - block: 118 | - include: tasks/ocp_install_config.yaml 119 | tags: ocp_install_config 120 | - include: tasks/ocp_ignition.yaml 121 | tags: ignition 122 | when: rhcos_installation_method|upper!="IPI" 123 | - include: tasks/ocp_install_ipi.yaml 124 | tags: install_ocp 125 | when: rhcos_installation_method|upper=="IPI" 126 | - include: tasks/create_oauth.yaml 127 | when: 128 | - install_script_already_run==False 129 | - not skip_install 130 | 131 | - include: tasks/registry_storage.yaml 132 | - include: tasks/ocp_install_scripts.yaml 133 | tags: scripts 134 | 135 | - import_playbook: ocp4_vm_power_on.yaml 136 | when: 137 | - vm_create_vms|bool 138 | - run_install|bool 139 | - rhcos_installation_method|upper!="IPI" 140 | - not skip_install 141 | tags: vm_power_on 142 | 143 | # 144 | # 6. Wait for installation to complete 145 | # 146 | - name: Wait for OpenShift installation to complete 147 | hosts: localhost 148 | connection: local 149 | gather_facts: no 150 | become: yes 151 | tasks: 152 | - block: 153 | - name: Wait for bootstrap 154 | shell: | 155 | {{ocp_install_dir}}/scripts/wait_bootstrap.sh 156 | - name: Remove bootstrap 157 | shell: | 158 | {{ocp_install_dir}}/scripts/remove_bootstrap.sh 159 | - name: Wait for all nodes to become ready 160 | shell: | 161 | {{ocp_install_dir}}/scripts/wait_nodes_ready.sh 162 | when: 163 | - run_install|bool 164 | - not skip_install 165 | - rhcos_installation_method|upper!="IPI" 166 | 167 | - block: 168 | - name: Create ocadmin user 169 | shell: | 170 | {{ocp_install_dir}}/scripts/create_admin_user.sh 171 | - name: Wait for installation to complete 172 | shell: | 173 | {{ocp_install_dir}}/scripts/wait_install.sh 174 | - name: Wait for cluster operators to become ready 175 | shell: | 176 | {{ocp_install_dir}}/scripts/wait_co_ready.sh 177 | when: 178 | - run_install|bool 179 | - not skip_install 180 | 181 | # 182 | # 7. Create storage class(es) and configure registry 183 | # 184 | - name: Create storage class(es) and configure internal registry 185 | hosts: localhost 186 | connection: local 187 | gather_facts: no 188 | become: yes 189 | tasks: 190 | - block: 191 | - name: Create NFS storage class 192 | shell: | 193 | {{ocp_install_dir}}/scripts/create_nfs_sc.sh 194 | - name: Configure image registry storage (only when NFS) 195 | shell: | 196 | {{ocp_install_dir}}/scripts/create_registry_storage.sh 197 | when: 198 | - run_install|bool 199 | - create_nfs_sc|bool 200 | - not skip_install 201 | 202 | # 203 | # 8. 
Post-installation tasks 204 | # 205 | - name: Post-installation tasks 206 | hosts: localhost 207 | connection: local 208 | gather_facts: no 209 | become: yes 210 | tasks: 211 | - block: 212 | - name: Post-install 213 | shell: | 214 | {{ocp_install_dir}}/scripts/post_install.sh 215 | - name: Final step ==> wait for cluster operators to become ready 216 | shell: | 217 | {{ocp_install_dir}}/scripts/wait_co_ready.sh 218 | when: run_install|bool 219 | -------------------------------------------------------------------------------- /playbooks/ocp4_disable_dhcp.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: bastion 4 | remote_user: root 5 | gather_facts: no 6 | become: yes 7 | tasks: 8 | - include: tasks/dns_server_disable_dhcp.yaml 9 | -------------------------------------------------------------------------------- /playbooks/ocp4_recover.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: 4 | - masters 5 | - workers 6 | remote_user: core 7 | gather_facts: no 8 | tasks: 9 | - name: Waiting for cluster nodes to come alive 10 | wait_for_connection: 11 | sleep: 10 12 | timeout: 900 13 | - name: Wait for kubelet service to become active 14 | service: 15 | name: kubelet 16 | state: started 17 | - name: If run at reboot, wait another 10 minutes to allow the OpenShift services to start 18 | pause: 19 | seconds: 600 20 | when: at_boot is defined and at_boot|bool 21 | 22 | - hosts: localhost 23 | connection: local 24 | tasks: 25 | 26 | - name: Check if control plane certificate has expired 27 | shell: | 28 | echo | openssl s_client -connect api-int.{{cluster_name}}.{{domain_name}}:6443 2>/dev/null | openssl x509 -checkend 10 29 | register: certificate_status 30 | failed_when: certificate_status.rc != 1 and certificate_status.rc != 0 31 | 32 | - set_fact: 33 | certificate_expired: '{{certificate_status.rc != 0}}' 34 | delegate_to: localhost 35 | delegate_facts: yes 36 | 37 | - import_playbook: tasks/ocp_recover_certificates.yaml 38 | when: hostvars.localhost.certificate_expired or (force_recovery is defined and force_recovery|bool) 39 | 40 | -------------------------------------------------------------------------------- /playbooks/ocp4_remove_bootstrap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: lb 4 | remote_user: root 5 | gather_facts: no 6 | become: yes 7 | tasks: 8 | - include: tasks/haproxy_remove_bootstrap.yaml 9 | when: manage_load_balancer is not defined or manage_load_balancer|bool 10 | -------------------------------------------------------------------------------- /playbooks/ocp4_vm_create.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # This playbook prepares the VMware environment to install OpenShift 4.x 3 | 4 | - hosts: localhost 5 | connection: local 6 | gather_facts: no 7 | become: no 8 | tasks: 9 | - name: Import vSphere certificates 10 | include: tasks/vm_import_certificates.yaml 11 | 12 | - name: Create bootstrap vm 13 | include: tasks/vm_create.yaml 14 | vars: 15 | ocp_group: bootstrap 16 | vm_memory: "{{ vm_bootstrap_mem }}" 17 | vm_cpu: "{{ vm_bootstrap_cpu }}" 18 | vm_disk: "{{ vm_bootstrap_disk }}" 19 | 20 | - name: Create master vms 21 | include: tasks/vm_create.yaml 22 | vars: 23 | ocp_group: masters 24 | vm_memory: "{{ vm_master_mem }}" 25 | vm_cpu: "{{ vm_master_cpu }}" 26 | vm_disk: "{{ vm_master_disk }}" 27 | 28 | - name: Create 
worker vms 29 | include: tasks/vm_create.yaml 30 | vars: 31 | ocp_group: workers 32 | vm_memory: "{{ vm_worker_mem }}" 33 | vm_cpu: "{{ vm_worker_cpu }}" 34 | vm_disk: "{{ vm_worker_disk }}" 35 | -------------------------------------------------------------------------------- /playbooks/ocp4_vm_delete.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # This playbook deletes the VMware environment 3 | 4 | - hosts: localhost 5 | connection: local 6 | gather_facts: no 7 | become: no 8 | tasks: 9 | - name: Delete bootstrap vm 10 | include: tasks/vm_delete.yaml 11 | vars: 12 | ocp_group: bootstrap 13 | 14 | - name: Delete master vms 15 | include: tasks/vm_delete.yaml 16 | vars: 17 | ocp_group: masters 18 | 19 | - name: Delete worker vms 20 | include: tasks/vm_delete.yaml 21 | vars: 22 | ocp_group: workers 23 | -------------------------------------------------------------------------------- /playbooks/ocp4_vm_power_on.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Power on the VMware environment 3 | 4 | - hosts: localhost 5 | connection: local 6 | gather_facts: no 7 | become: no 8 | tasks: 9 | - name: Power on bootstrap vm 10 | include: tasks/vm_poweron.yaml 11 | vars: 12 | ocp_group: bootstrap 13 | pause: 0 14 | 15 | - name: Power on master vms 16 | include: tasks/vm_poweron.yaml 17 | vars: 18 | ocp_group: masters 19 | pause: 1 20 | 21 | - name: Power on worker vms 22 | include: tasks/vm_poweron.yaml 23 | vars: 24 | ocp_group: workers 25 | pause: 1 26 | -------------------------------------------------------------------------------- /playbooks/ocp4_vm_update_vapp.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # This playbook updates the VMs with vApp properties 3 | 4 | - hosts: localhost 5 | connection: local 6 | gather_facts: no 7 | become: no 8 | tasks: 9 | - name: Update vApp of bootstrap vm 10 | include: tasks/vm_update_vapp.yaml 11 | vars: 12 | ocp_group: bootstrap 13 | node_type: bootstrap 14 | ignition_file_name: bootstrap-ova.ign 15 | 16 | - name: Update vApp of master vms 17 | include: tasks/vm_update_vapp.yaml 18 | vars: 19 | ocp_group: masters 20 | node_type: master 21 | ignition_file_name: master.ign 22 | 23 | - name: Update vApp of worker vms 24 | include: tasks/vm_update_vapp.yaml 25 | vars: 26 | ocp_group: workers 27 | node_type: worker 28 | ignition_file_name: worker.ign 29 | -------------------------------------------------------------------------------- /playbooks/tasks/check_variables.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Check variables 3 | fail: 4 | msg: "vSphere user (vc_user) and password (vc_password) must be defined when doing IPI installation." 5 | when: rhcos_installation_method|upper=="IPI" and (vc_user is not defined or vc_password is not defined) 6 | 7 | - fail: 8 | msg: "vSphere user (vc_user) and password (vc_password) must be defined when creating VMs (vm_create_vms property in inventory)." 
9 | when: vm_create_vms==True and (vc_user is not defined or vc_password is not defined) 10 | -------------------------------------------------------------------------------- /playbooks/tasks/chrony_client.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | # Servers to be used as a Chrony/NTP time server 4 | {% if setup_chrony_server %} 5 | server {{groups['bastion'][0]}} iburst 6 | {% else %} 7 | {% for ntp_server in ntp_servers %} 8 | server {{ntp_server}} iburst 9 | {% endfor %} 10 | {% endif %} 11 | 12 | # Record the rate at which the system clock gains/losses time. 13 | driftfile /var/lib/chrony/drift 14 | 15 | # Synchronize with local clock 16 | local stratum 10 17 | 18 | # Force the clock to be stepped at restart of the service (at boot) 19 | # if the time difference is greater than 1 second 20 | {% if setup_chrony_server %} 21 | initstepslew 1 {{groups['bastion'][0]}} 22 | {% else %} 23 | initstepslew 1 {{ntp_servers[0]}} 24 | {% endif %} 25 | 26 | # Allow the system clock to be stepped in the first three updates 27 | # if its offset is larger than 1 second. 28 | makestep 1.0 3 29 | 30 | # Enable kernel synchronization of the real-time clock (RTC). 31 | rtcsync 32 | 33 | # Specify directory for log files. 34 | logdir /var/log/chrony 35 | -------------------------------------------------------------------------------- /playbooks/tasks/chrony_client.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate chrony client configuration to be used by the OpenShift nodes 4 | template: 5 | src: "chrony_client.j2" 6 | dest: "{{ocp_install_dir}}/chrony_client.conf" 7 | owner: root 8 | group: root 9 | mode: 0644 10 | -------------------------------------------------------------------------------- /playbooks/tasks/chrony_server.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | # Servers to be used as a Chrony/NTP time server 4 | {% for ntp_server in ntp_servers %} 5 | server {{ntp_server}} iburst 6 | {% endfor %} 7 | 8 | # Record the rate at which the system clock gains/losses time. 9 | driftfile /var/lib/chrony/drift 10 | 11 | # Force the clock to be stepped at restart of the service (at boot) 12 | # if the time difference is greater than 1 second 13 | initstepslew 1 {{ntp_servers[0]}} 14 | 15 | # Allow the system clock to be stepped in the first three updates 16 | # if its offset is larger than 1 second. 17 | makestep 1.0 3 18 | 19 | # Enable kernel synchronization of the real-time clock (RTC). 20 | rtcsync 21 | 22 | # Enable hardware timestamping on all interfaces that support it. 23 | #hwtimestamp * 24 | 25 | # Increase the minimum number of selectable sources required to adjust 26 | # the system clock. 27 | #minsources 2 28 | 29 | # Allow NTP client access from local network. 30 | {% if setup_chrony_server %} 31 | allow {{ ntp_allow }} 32 | {% endif %} 33 | 34 | # Serve time even if not synchronized to a time source. 35 | #local stratum 10 36 | 37 | # Specify file containing keys for NTP authentication. 38 | #keyfile /etc/chrony.keys 39 | 40 | # Specify directory for log files. 
41 | logdir /var/log/chrony 42 | 43 | # Log slews made by the system, do not log anything else 44 | log tracking 45 | # log measurements statistics tracking # This is the default value 46 | -------------------------------------------------------------------------------- /playbooks/tasks/chrony_server.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate chrony server configuration 4 | template: 5 | src: "chrony_server.j2" 6 | dest: "/etc/chrony.conf" 7 | owner: root 8 | group: root 9 | mode: 0644 10 | register: chrony_configured 11 | 12 | - name: Stop service chronyd 13 | service: 14 | name: chronyd 15 | enabled: no 16 | state: stopped 17 | 18 | - name: Synchronize the time 19 | shell: | 20 | chronyd -q 'server {{ntp_servers[0]}} iburst' 21 | rm -f /var/run/chronyd.pid 22 | 23 | - name: Restart service chronyd 24 | service: 25 | name: chronyd 26 | enabled: yes 27 | state: restarted 28 | -------------------------------------------------------------------------------- /playbooks/tasks/create_oauth.j2: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: config.openshift.io/v1 3 | kind: OAuth 4 | metadata: 5 | name: cluster 6 | spec: 7 | identityProviders: 8 | - name: htpasswd_identity_provider 9 | mappingMethod: claim 10 | type: HTPasswd 11 | htpasswd: 12 | fileData: 13 | name: htpass-secret 14 | -------------------------------------------------------------------------------- /playbooks/tasks/create_oauth.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate yaml files for creation of the OAuth 4 | template: 5 | src: create_oauth.j2 6 | dest: "{{ocp_install_dir}}/ocp_oauth.yaml" 7 | owner: root 8 | group: root 9 | -------------------------------------------------------------------------------- /playbooks/tasks/dns_interface.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Check if interface script exists 4 | stat: 5 | path: "{{interface_script}}" 6 | register: stat_result 7 | 8 | - name: Change DNS service in interface script 9 | lineinfile: 10 | path: "{{interface_script}}" 11 | regexp: '^DNS1' 12 | line: "DNS1={{groups['bastion'][0]}}" 13 | when: stat_result.stat.exists==True 14 | register: interface_changed 15 | 16 | - name: Restart NetworkManager service 17 | service: 18 | name: NetworkManager 19 | enabled: yes 20 | state: restarted 21 | when: stat_result.stat.exists 22 | -------------------------------------------------------------------------------- /playbooks/tasks/dns_reset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Check if backup of /etc/resolv.conf exists 4 | stat: 5 | path: "/etc/resolv.conf_backup" 6 | register: resolv_conf_backup 7 | 8 | - name: Restore /etc/resolv.conf 9 | copy: 10 | src: "/etc/resolv.conf_backup" 11 | dest: "/etc/resolv.conf" 12 | force: yes 13 | owner: root 14 | group: root 15 | mode: 0644 16 | when: resolv_conf_backup.stat.exists 17 | -------------------------------------------------------------------------------- /playbooks/tasks/dns_server.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | no-resolv 4 | server={{external_name_servers[0]}} 5 | local=/{{cluster_name}}.{{domain_name}}/ 6 | 7 | address=/api-int.{{cluster_name}}.{{domain_name}}/{{groups['lb'][0]}} 8 | 
address=/api.{{cluster_name}}.{{domain_name}}/{{groups['lb'][0]}} 9 | address=/.apps.{{cluster_name}}.{{domain_name}}/{{groups['lb'][0]}} 10 | 11 | {% if rhcos_installation_method|upper != "IPI" %} 12 | address=/etcd-0.{{cluster_name}}.{{domain_name}}/{{groups['masters'][0]}} 13 | address=/etcd-1.{{cluster_name}}.{{domain_name}}/{{groups['masters'][1]}} 14 | address=/etcd-2.{{cluster_name}}.{{domain_name}}/{{groups['masters'][2]}} 15 | srv-host=_etcd-server-ssl._tcp.{{cluster_name}}.{{domain_name}},etcd-0.{{cluster_name}}.{{domain_name}},2380,0,10 16 | srv-host=_etcd-server-ssl._tcp.{{cluster_name}}.{{domain_name}},etcd-1.{{cluster_name}}.{{domain_name}},2380,0,10 17 | srv-host=_etcd-server-ssl._tcp.{{cluster_name}}.{{domain_name}},etcd-2.{{cluster_name}}.{{domain_name}},2380,0,10 18 | {% endif %} 19 | 20 | {% if dhcp_on_bastion %} 21 | domain={{cluster_name}}.{{domain_name}} 22 | dhcp-range={{dhcp_range[0]}},{{dhcp_range[1]}},infinite 23 | dhcp-option=3,{{default_gateway}} 24 | 25 | {% if rhcos_installation_method|upper!="IPI" %} 26 | {% for host in groups['masters'] | sort %} 27 | dhcp-host={{hostvars[host]['mac']}},{{hostvars[host]['host']}},{{host}} 28 | {% endfor %} 29 | 30 | {% for host in groups['workers'] | sort %} 31 | dhcp-host={{hostvars[host]['mac']}},{{hostvars[host]['host']}},{{host}} 32 | {% endfor %} 33 | 34 | {% for host in groups['bootstrap'] | sort %} 35 | dhcp-host={{hostvars[host]['mac']}},{{hostvars[host]['host']}},{{host}} 36 | {% endfor %} 37 | {% endif %} 38 | 39 | {% if rhcos_installation_method|upper=="PXE" %} 40 | enable-tftp 41 | tftp-root=/tftpboot 42 | dhcp-boot=pxelinux.0 43 | {% endif %} 44 | 45 | {% endif %} -------------------------------------------------------------------------------- /playbooks/tasks/dns_server.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create backup of /etc/resolv.conf 4 | copy: 5 | src: /etc/resolv.conf 6 | dest: "/etc/resolv.conf_backup" 7 | force: no 8 | 9 | - name: Generate dnsmasq.conf file 10 | template: 11 | src: dns_server.j2 12 | dest: "/etc/dnsmasq.conf" 13 | owner: root 14 | group: root 15 | mode: 0644 16 | 17 | - name: Start service dnsmasq 18 | service: 19 | name: dnsmasq 20 | enabled: yes 21 | state: restarted 22 | -------------------------------------------------------------------------------- /playbooks/tasks/dns_server_disable_dhcp.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Disable DHCP on DNS server 4 | replace: 5 | path: /etc/dnsmasq.conf 6 | regexp: '(dhcp.*)' 7 | replace: '#\1' 8 | 9 | - name: Start service dnsmasq 10 | service: 11 | name: dnsmasq 12 | enabled: yes 13 | state: restarted 14 | -------------------------------------------------------------------------------- /playbooks/tasks/dnsmasq_libvirt_remove.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Check if libvirt network is active 4 | shell: | 5 | virsh net-list 2> /dev/null 6 | register: libvirt_active 7 | changed_when: libvirt_active.rc == 1 8 | failed_when: libvirt_active.rc not in [0,1] 9 | ignore_errors: true 10 | 11 | - name: Disable libvirt network 12 | shell: | 13 | virsh net-destroy default 14 | virsh net-autostart --network default --disable 15 | when: libvirt_active.stdout.find("default") != -1 16 | ignore_errors: true 17 | -------------------------------------------------------------------------------- /playbooks/tasks/etc_ansible_hosts.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Copy inventory file to /etc/ansible/hosts 4 | copy: 5 | src: "{{inventory_file}}" 6 | dest: /etc/ansible/hosts 7 | 8 | - name: Copy Ansible configuration to /etc/ansible/ansible.cfg 9 | copy: 10 | src: "{{script_dir}}/ansible.cfg" 11 | dest: /etc/ansible/ansible.cfg -------------------------------------------------------------------------------- /playbooks/tasks/firewall_iptables_remove.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Remove all rules from iptables 3 | 4 | - name: Gather service facts 5 | service_facts: 6 | 7 | - name: Stop iptables service 8 | service: 9 | name: iptables 10 | enabled: no 11 | state: stopped 12 | when: "'iptables' in services" 13 | 14 | - name: Stop firewalld service 15 | service: 16 | name: firewalld 17 | enabled: no 18 | state: stopped 19 | when: "'firewalld' in services" 20 | 21 | - name: Check if Kubernetes forwards are already in iptables 22 | shell: | 23 | iptables -nvL | awk '/kubernetes/ {count ++} END {print count}' 24 | register: kubernetes_forwards 25 | 26 | - name: Flush all rules from iptables (only if kubernetes forwards do not exist) 27 | shell: | 28 | iptables -P INPUT ACCEPT 29 | iptables -P FORWARD ACCEPT 30 | iptables -P OUTPUT ACCEPT 31 | iptables -F INPUT 32 | iptables -F FORWARD 33 | iptables -F OUTPUT 34 | when: kubernetes_forwards.stdout == "" 35 | 36 | - name: Save rules into configuration (only if kubernetes forwards do not exist) 37 | shell: | 38 | iptables-save > /etc/sysconfig/iptables 39 | when: kubernetes_forwards.stdout == "" 40 | 41 | - name: Restart iptables service 42 | service: 43 | name: iptables 44 | enabled: yes 45 | state: started 46 | when: "'iptables' in services" 47 | -------------------------------------------------------------------------------- /playbooks/tasks/global_proxy.j2: -------------------------------------------------------------------------------- 1 | {% if http_proxy is defined %} 2 | export http_proxy={{http_proxy}} 3 | {% endif %} 4 | {% if https_proxy is defined %} 5 | export https_proxy={{https_proxy}} 6 | {% endif %} 7 | {% if no_proxy is defined %} 8 | export no_proxy={{no_proxy}} 9 | {% endif %} -------------------------------------------------------------------------------- /playbooks/tasks/global_proxy.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Configure global proxy in /etc/profile.d 4 | template: 5 | src: global_proxy.j2 6 | dest: /etc/profile.d/global_proxy.sh 7 | mode: 0777 8 | owner: root 9 | group: root 10 | when: configure_global_proxy is defined and configure_global_proxy==True 11 | 12 | - name: Remove /etc/profile.d/global_proxy.sh if global proxy must not be set 13 | file: 14 | path: /etc/profile.d/global_proxy.sh 15 | state: absent 16 | when: configure_global_proxy is not defined or configure_global_proxy==False 17 | -------------------------------------------------------------------------------- /playbooks/tasks/haproxy.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | defaults 4 | mode tcp 5 | timeout connect 5000ms 6 | timeout client 50000ms 7 | timeout server 50000ms 8 | 9 | frontend api 10 | bind *:6443 11 | mode tcp 12 | default_backend cpapi 13 | {% if rhcos_installation_method|upper!="IPI" %} 14 | frontend cfg 15 | bind *:22623 16 | mode tcp 17 | default_backend cpcfg 18 | {% 
endif %} 19 | frontend http 20 | bind *:80 21 | mode http 22 | default_backend wrkhttp 23 | stats uri /haproxy?stats 24 | frontend https 25 | bind *:443 26 | mode tcp 27 | default_backend wrkhttps 28 | 29 | backend cpapi 30 | balance roundrobin 31 | {% if rhcos_installation_method|upper!="IPI" %} 32 | server bootstrap {{groups['bootstrap'][0]}}:6443 check 33 | {% for host in groups['masters'] | sort %} 34 | server {{hostvars[host]['host']}} {{host}}:6443 check 35 | {% endfor %} 36 | {% else %} 37 | server masters {{apiVIP}}:6443 check 38 | {% endif %} 39 | 40 | {% if rhcos_installation_method|upper!="IPI" %} 41 | backend cpcfg 42 | balance roundrobin 43 | server bootstrap {{groups['bootstrap'][0]}}:22623 check 44 | {% for host in groups['masters'] | sort %} 45 | server {{hostvars[host]['host']}} {{host}}:22623 check 46 | {% endfor %} 47 | {% endif %} 48 | 49 | backend wrkhttp 50 | mode http 51 | balance roundrobin 52 | option forwardfor 53 | {% if rhcos_installation_method|upper!="IPI" %} 54 | {% for host in groups['workers'] | sort %} 55 | server {{hostvars[host]['host']}} {{host}}:80 check 56 | {% endfor %} 57 | {% else %} 58 | server workers {{ingressVIP}}:80 check 59 | {% endif %} 60 | 61 | backend wrkhttps 62 | mode tcp 63 | balance roundrobin 64 | {% if rhcos_installation_method|upper!="IPI" %} 65 | {% for host in groups['workers'] | sort %} 66 | server {{hostvars[host]['host']}} {{host}}:443 check 67 | {% endfor %} 68 | {% else %} 69 | server workers {{ingressVIP}}:443 check 70 | {% endif %} -------------------------------------------------------------------------------- /playbooks/tasks/haproxy.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate haproxy.cfg file 4 | template: 5 | src: haproxy.j2 6 | dest: "/etc/haproxy/haproxy.cfg" 7 | owner: root 8 | group: root 9 | mode: 0644 10 | 11 | - name: Allow all ports for HAProxy (SELinux) 12 | seboolean: 13 | name: haproxy_connect_any 14 | state: yes 15 | persistent: yes 16 | when: ansible_selinux.status=='enabled' 17 | 18 | - name: Start service haproxy 19 | service: 20 | name: haproxy 21 | enabled: yes 22 | state: restarted 23 | -------------------------------------------------------------------------------- /playbooks/tasks/haproxy_remove_bootstrap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Disable bootstrap in load balancer 4 | replace: 5 | path: /etc/haproxy/haproxy.cfg 6 | regexp: '(^\s+)(server\s+bootstrap)(.*)' 7 | replace: '\1#\2\3' 8 | 9 | - name: Start service haproxy 10 | service: 11 | name: haproxy 12 | enabled: yes 13 | state: restarted 14 | -------------------------------------------------------------------------------- /playbooks/tasks/hosts.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Update hosts file 3 | 4 | - name: Check if /etc/cloud/cloud.cfg exists 5 | stat: 6 | path: "/etc/cloud/cloud.cfg" 7 | register: cloud_cfg_file 8 | 9 | - name: Disable automatic updates of /etc/hosts, through /etc/cloud/cloud.cfg 10 | replace: 11 | path: "/etc/cloud/cloud.cfg" 12 | regexp: "manage_etc_hosts: True" 13 | replace: "manage_etc_hosts: False" 14 | when: cloud_cfg_file.stat.exists == True 15 | 16 | - name: Add IP address of all hosts to /etc/hosts 17 | lineinfile: 18 | dest: /etc/hosts 19 | regexp: '^.*{{ hostvars[item].host }}$' 20 | line: "{{item}} {{ hostvars[item].host }}.{{cluster_name}}.{{domain_name}} {{ hostvars[item].host }}" 21 | 
state: present 22 | when: hostvars[item].host is defined 23 | with_items: "{{ groups.all }}" 24 | -------------------------------------------------------------------------------- /playbooks/tasks/http_server.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | user nginx; 4 | worker_processes auto; 5 | error_log /var/log/nginx/error.log; 6 | pid /run/nginx.pid; 7 | 8 | # Load dynamic modules. See /usr/share/doc/nginx/README.dynamic. 9 | include /usr/share/nginx/modules/*.conf; 10 | 11 | events { 12 | worker_connections 1024; 13 | } 14 | 15 | http { 16 | log_format main '$remote_addr - $remote_user [$time_local] "$request" ' 17 | '$status $body_bytes_sent "$http_referer" ' 18 | '"$http_user_agent" "$http_x_forwarded_for"'; 19 | 20 | access_log /var/log/nginx/access.log main; 21 | 22 | sendfile on; 23 | tcp_nopush on; 24 | tcp_nodelay on; 25 | keepalive_timeout 65; 26 | types_hash_max_size 2048; 27 | 28 | include /etc/nginx/mime.types; 29 | default_type application/octet-stream; 30 | 31 | # Load modular configuration files from the /etc/nginx/conf.d directory. 32 | # See http://nginx.org/en/docs/ngx_core_module.html#include 33 | # for more information. 34 | include /etc/nginx/conf.d/*.conf; 35 | 36 | server { 37 | listen {{http_server_port}} default_server; 38 | server_name _; 39 | root /; 40 | 41 | # Load configuration files for the default server block. 42 | include /etc/nginx/default.d/*.conf; 43 | 44 | location {{ocp_install_dir}} { 45 | autoindex on; 46 | } 47 | 48 | {% if air_gapped_download_dir is defined %} 49 | location {{air_gapped_download_dir}} { 50 | autoindex on; 51 | } 52 | {% endif %} 53 | 54 | error_page 404 /404.html; 55 | location = /40x.html { 56 | } 57 | 58 | error_page 500 502 503 504 /50x.html; 59 | location = /50x.html { 60 | } 61 | } 62 | } 63 | -------------------------------------------------------------------------------- /playbooks/tasks/http_server.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate nginx configuration file 4 | template: 5 | src: http_server.j2 6 | dest: "/etc/nginx/nginx.conf" 7 | owner: root 8 | group: root 9 | mode: 0644 10 | 11 | - block: 12 | - name: Check if nginx has already been configured for port {{http_server_port}} 13 | shell: 14 | semanage port -l | grep http_port_t | grep {{http_server_port}} | wc -l 15 | register: _http_ports 16 | 17 | - name: Allow nginx to listen on non-standard port (SELinux) 18 | shell: | 19 | semanage port -a -t http_port_t -p tcp {{http_server_port}} 20 | when: _http_ports.stdout == "0" 21 | when: ansible_selinux.status=='enabled' 22 | 23 | - name: Start nginx 24 | service: 25 | name: nginx 26 | enabled: yes 27 | state: restarted 28 | -------------------------------------------------------------------------------- /playbooks/tasks/init_facts.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Initialize facts 4 | set_fact: 5 | reboot_server: False 6 | install_script_already_run: False 7 | skip_install: "{{ SKIP_INSTALL | bool }}" 8 | set_bastion_hostname: "{{ set_bastion_hostname | default(False) }}" 9 | rhcos_installation_method: "{{ rhcos_installation_method | default('PXE') }}" 10 | vm_create_vms: "{{ vm_create_vms | default(False) }}" -------------------------------------------------------------------------------- /playbooks/tasks/install_gui.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | # Install GNOME and VNC on the load balancer 3 | 4 | - name: Install Gnome desktop environment 5 | yum: 6 | name: '@gnome-desktop' 7 | state: present 8 | 9 | - name: Activate Gnome desktop 10 | shell: | 11 | systemctl set-default graphical.target 12 | 13 | - name: Install VNC server 14 | yum: 15 | name: tigervnc-server 16 | state: present 17 | 18 | - name: Copy configuration file 19 | copy: 20 | dest: /etc/systemd/system/vncserver-root@:1.service 21 | force: false 22 | remote_src: true 23 | src: /usr/lib/systemd/system/vncserver@.service 24 | 25 | - name: Configure ExecStart 26 | lineinfile: 27 | line: ExecStart=/usr/sbin/runuser -l root -c "/usr/bin/vncserver %i -geometry 1920x1080 -geometry 1280x768" 28 | path: /etc/systemd/system/vncserver-root@:1.service 29 | regexp: ^ExecStart= 30 | 31 | - name: Configure PIDFile 32 | lineinfile: 33 | line: PIDFile=/root/.vnc/%H%i.pid 34 | path: /etc/systemd/system/vncserver-root@:1.service 35 | regexp: ^PIDFile= 36 | 37 | - name: Remove service user directive which sometimes seems to be present 38 | lineinfile: 39 | path: /etc/systemd/system/vncserver-root@:1.service 40 | regexp: ^User= 41 | state: absent 42 | 43 | - name: Check if VNC password is already set 44 | stat: 45 | path: /root/.vnc/passwd 46 | register: vnc_passwd_file 47 | 48 | - name: Create .vnc directory 49 | file: 50 | group: "root" 51 | mode: 0755 52 | owner: "root" 53 | path: /root/.vnc 54 | state: directory 55 | when: not vnc_passwd_file.stat.exists 56 | 57 | - name: Set default VNC password 58 | shell: | 59 | set -o pipefail 60 | echo "passw0rd" | vncpasswd -f > /root/.vnc/passwd 61 | when: not vnc_passwd_file.stat.exists 62 | 63 | - name: Set correct permissions for VNC passwd file 64 | file: 65 | group: "root" 66 | mode: 0600 67 | owner: "root" 68 | path: /root/.vnc/passwd 69 | when: not vnc_passwd_file.stat.exists 70 | 71 | - name: Start and enable VNC service 72 | systemd: 73 | daemon_reload: true 74 | enabled: true 75 | name: vncserver-root@:1 76 | state: restarted 77 | -------------------------------------------------------------------------------- /playbooks/tasks/nfs_server.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Set permissions for NFS directory 4 | file: 5 | group: "root" 6 | mode: 0777 7 | owner: "root" 8 | path: "{{ nfs_volume_mount_path }}" 9 | state: directory 10 | 11 | - name: Create image registry directory 12 | file: 13 | mode: 0777 14 | owner: "root" 15 | group: "root" 16 | path: "{{nfs_volume_mount_path}}/image-registry" 17 | state: directory 18 | 19 | - name: copy /etc/exports 20 | template: 21 | src: nfs_server_exports.j2 22 | dest: /etc/exports 23 | owner: root 24 | group: root 25 | 26 | - name: Start NFS service 27 | service: 28 | name: nfs-server 29 | enabled: yes 30 | state: restarted 31 | 32 | - name: Start RPC bind service 33 | service: 34 | name: rpcbind 35 | enabled: yes 36 | state: restarted -------------------------------------------------------------------------------- /playbooks/tasks/nfs_server_exports.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | {{ nfs_volume_mount_path }} *(rw,sync,no_root_squash) 4 | 5 | {{ nfs_volume_mount_path }}/image-registry *(rw,sync,no_root_squash) 6 | -------------------------------------------------------------------------------- /playbooks/tasks/nfs_storage_class.j2: 
-------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | --- 3 | apiVersion: v1 4 | kind: ServiceAccount 5 | metadata: 6 | name: nfs-client-provisioner 7 | --- 8 | kind: DeploymentConfig 9 | apiVersion: v1 10 | metadata: 11 | name: nfs-client-provisioner 12 | spec: 13 | replicas: 1 14 | strategy: 15 | type: Recreate 16 | template: 17 | metadata: 18 | labels: 19 | app: nfs-client-provisioner 20 | spec: 21 | serviceAccountName: nfs-client-provisioner 22 | containers: 23 | - name: nfs-client-provisioner 24 | image: gcr.io/k8s-staging-sig-storage/nfs-subdir-external-provisioner:v4.0.2 25 | volumeMounts: 26 | - name: nfs-client-root 27 | mountPath: /persistentvolumes 28 | env: 29 | - name: PROVISIONER_NAME 30 | value: icpd-nfs.io/nfs 31 | - name: NFS_SERVER 32 | value: {{groups['nfs'][0]}} 33 | - name: NFS_PATH 34 | value: {{ nfs_volume_mount_path }} 35 | volumes: 36 | - name: nfs-client-root 37 | nfs: 38 | server: {{groups['nfs'][0]}} 39 | path: {{ nfs_volume_mount_path }} 40 | --- 41 | apiVersion: storage.k8s.io/v1 42 | kind: StorageClass 43 | metadata: 44 | name: managed-nfs-storage 45 | provisioner: icpd-nfs.io/nfs 46 | parameters: 47 | archiveOnDelete: "false" # When set to "false" your PVs will not be archived 48 | # by the provisioner upon deletion of the PVC. -------------------------------------------------------------------------------- /playbooks/tasks/nfs_storage_class_cluster_role.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | --- 3 | apiVersion: rbac.authorization.k8s.io/v1 4 | kind: ClusterRoleBinding 5 | metadata: 6 | name: nfs-role-binding 7 | subjects: 8 | - kind: ServiceAccount 9 | name: nfs-client-provisioner 10 | namespace: default 11 | roleRef: 12 | kind: ClusterRole 13 | name: cluster-admin 14 | apiGroup: rbac.authorization.k8s.io 15 | -------------------------------------------------------------------------------- /playbooks/tasks/nfs_storage_class_script.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create directory for NFS client 4 | file: 5 | group: "root" 6 | mode: 0755 7 | owner: "root" 8 | path: "{{ocp_install_dir}}/nfs-client" 9 | state: directory 10 | 11 | - name: Create NFS storage class yaml 12 | template: 13 | src: nfs_storage_class.j2 14 | dest: "{{ocp_install_dir}}/nfs-client/nfs-storage-class.yaml" 15 | owner: root 16 | group: root 17 | 18 | - name: Create cluster role binding yaml 19 | template: 20 | src: nfs_storage_class_cluster_role.j2 21 | dest: "{{ocp_install_dir}}/nfs-client/nfs-storage-class-cluster-role.yaml" 22 | owner: root 23 | group: root 24 | 25 | - name: Generate NFS storage class creation script 26 | template: 27 | src: nfs_storage_scripts.j2 28 | dest: "{{ocp_install_dir}}/scripts/create_nfs_sc.sh" 29 | owner: root 30 | group: root 31 | mode: 755 32 | -------------------------------------------------------------------------------- /playbooks/tasks/nfs_storage_scripts.j2: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Create NFS storage class 4 | echo "Creating NFS storage class" 5 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 6 | 7 | oc -n default create -f {{ocp_install_dir}}/nfs-client/nfs-storage-class.yaml 8 | oc adm policy add-scc-to-user anyuid -z nfs-client-provisioner 9 | oc adm policy add-scc-to-user hostmount-anyuid -z nfs-client-provisioner 10 | oc -n default create -f 
{{ocp_install_dir}}/nfs-client/nfs-storage-class-cluster-role.yaml 11 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_download.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Obtain OpenShift packages from URL "{{openshift_base_url}}" 4 | shell: | 5 | curl -s {{openshift_base_url}} --list-only | grep 'href=' 6 | register: openshift_packages 7 | until: openshift_packages is not failed 8 | retries: 10 9 | delay: 10 10 | 11 | - name: Obtain Red Hat CoreOS packages from URL "{{rhcos_base_url}}" 12 | shell: | 13 | curl -L -s {{rhcos_base_url}} --list-only | grep 'href=' 14 | register: rhcos_packages 15 | until: rhcos_packages is not failed 16 | retries: 10 17 | delay: 10 18 | 19 | # The below tasks will extract the exact file names from the list of found packages. 20 | # This is needed because the content of the "latest" directory tends to change without notice. 21 | - set_fact: 22 | openshift_client_package: "{{openshift_packages.stdout | regex_search(vars['openshift_client_package_pattern'],'\\1') | first }}" 23 | openshift_installer_package: "{{openshift_packages.stdout | regex_search(vars['openshift_install_package_pattern'],'\\1') | first }}" 24 | 25 | - set_fact: 26 | rhcos_kernel_package: "{{rhcos_packages.stdout | regex_search(vars['rhcos_kernel_package_pattern'],'\\1') | first }}" 27 | rhcos_initramfs_package: "{{rhcos_packages.stdout | regex_search(vars['rhcos_initramfs_package_pattern'],'\\1') | first }}" 28 | when: rhcos_installation_method=="pxe" 29 | 30 | - set_fact: 31 | rhcos_rootfs_package: "{{rhcos_packages.stdout | regex_search(vars['rhcos_rootfs_package_pattern'],'\\1') | first }}" 32 | when: rhcos_installation_method=="pxe" 33 | 34 | - set_fact: 35 | rhcos_ova_package: "{{rhcos_packages.stdout | regex_search(vars['rhcos_ova_package_pattern'],'\\1') | first }}" 36 | when: rhcos_installation_method=="ova" 37 | 38 | - name: Download OpenShift client '{{openshift_base_url | regex_replace("\/$", "")}}/{{openshift_client_package}}' 39 | get_url: 40 | url: '{{openshift_base_url | regex_replace("\/$", "")}}/{{openshift_client_package}}' 41 | dest: "{{ocp_install_dir}}/{{openshift_client_package}}" 42 | owner: root 43 | mode: 0644 44 | register: download_result 45 | until: download_result is succeeded 46 | retries: 5 47 | delay: 30 48 | 49 | - name: Download OpenShift installer '{{openshift_base_url | regex_replace("\/$", "")}}/{{openshift_installer_package}}' 50 | get_url: 51 | url: '{{openshift_base_url | regex_replace("\/$", "")}}/{{openshift_installer_package}}' 52 | dest: "{{ocp_install_dir}}/{{openshift_installer_package}}" 53 | owner: root 54 | mode: 0644 55 | register: download_result 56 | until: download_result is succeeded 57 | retries: 5 58 | delay: 30 59 | 60 | - name: Unpack OpenShift client 61 | unarchive: 62 | src: "{{ocp_install_dir}}/{{openshift_client_package}}" 63 | dest: /usr/local/bin 64 | 65 | - name: Unpack OpenShift installer 66 | unarchive: 67 | src: "{{ocp_install_dir}}/{{openshift_installer_package}}" 68 | dest: "{{ocp_install_dir}}" 69 | 70 | - name: Download kernel package '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_kernel_package}}' 71 | get_url: 72 | url: '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_kernel_package}}' 73 | dest: "{{ocp_install_dir}}/{{rhcos_kernel_package}}" 74 | owner: root 75 | mode: 0644 76 | register: download_result 77 | until: download_result is succeeded 78 | retries: 3 79 | when: 
rhcos_installation_method=="pxe" 80 | 81 | - name: Download initial RAM disk package '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_initramfs_package}}' 82 | get_url: 83 | url: '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_initramfs_package}}' 84 | dest: "{{ocp_install_dir}}/{{rhcos_initramfs_package}}" 85 | owner: root 86 | mode: 0644 87 | register: download_result 88 | until: download_result is succeeded 89 | retries: 3 90 | when: rhcos_installation_method=="pxe" 91 | 92 | - name: Download root file system package '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_rootfs_package}}' 93 | get_url: 94 | url: '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_rootfs_package}}' 95 | dest: "{{ocp_install_dir}}/{{rhcos_rootfs_package}}" 96 | owner: root 97 | mode: 0644 98 | register: download_result 99 | until: download_result is succeeded 100 | retries: 3 101 | when: rhcos_installation_method=="pxe" 102 | 103 | - block: 104 | - name: Check if SELinux has already been configured for CoreOS Kernel package 105 | shell: 106 | semanage fcontext -l | grep httpd_sys_content_t | grep '{{ocp_install_dir}}/{{rhcos_kernel_package}}' | wc -l 107 | register: _selinux_rhcos 108 | - name: Enable access to CoreOS Kernel package 109 | shell: | 110 | semanage fcontext -a -t httpd_sys_content_t -s system_u '{{ocp_install_dir}}/{{rhcos_kernel_package}}' 111 | when: _selinux_rhcos.stdout == "0" 112 | 113 | - name: Check if SELinux has already been configured for CoreOS initramfs package 114 | shell: 115 | semanage fcontext -l | grep httpd_sys_content_t | grep '{{ocp_install_dir}}/{{rhcos_initramfs_package}}' | wc -l 116 | register: _selinux_rhcos 117 | - name: Enable access to CoreOS initramfs package 118 | shell: | 119 | semanage fcontext -a -t httpd_sys_content_t -s system_u '{{ocp_install_dir}}/{{rhcos_initramfs_package}}' 120 | when: _selinux_rhcos.stdout == "0" 121 | 122 | - name: Check if SELinux has already been configured for CoreOS rootfs package 123 | shell: 124 | semanage fcontext -l | grep httpd_sys_content_t | grep '{{ocp_install_dir}}/{{rhcos_rootfs_package}}' | wc -l 125 | register: _selinux_rhcos 126 | - name: Enable access to CoreOS rootfs package 127 | shell: | 128 | semanage fcontext -a -t httpd_sys_content_t -s system_u '{{ocp_install_dir}}/{{rhcos_rootfs_package}}' 129 | when: _selinux_rhcos.stdout == "0" 130 | 131 | when: 132 | - rhcos_installation_method=="pxe" 133 | - ansible_selinux.status=='enabled' 134 | 135 | - name: Restore SELinux context for CoreOS files 136 | shell: restorecon -Rv "{{ocp_install_dir}}/*" 137 | 138 | - name: Download CoreOS OVA package '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_ova_package}}' 139 | get_url: 140 | url: '{{rhcos_base_url | regex_replace("\/$", "")}}/{{rhcos_ova_package}}' 141 | dest: "{{ocp_install_dir}}/{{rhcos_ova_package}}" 142 | owner: root 143 | mode: 0644 144 | when: rhcos_installation_method=="ova" 145 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_ignition.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create OpenShift ignition manifests 4 | shell: | 5 | {{ocp_install_dir}}/openshift-install create manifests --dir={{ocp_install_dir}} 6 | 7 | - name: Make masters unschedulable 8 | lineinfile: 9 | path: "{{ocp_install_dir}}/manifests/cluster-scheduler-02-config.yml" 10 | regex: " mastersSchedulable: true" 11 | line: " mastersSchedulable: false" 12 | 13 | - name: To check, keep copy of 
cluster-scheduler-02-config.yml in /tmp 14 | copy: 15 | src: "{{ocp_install_dir}}/manifests/cluster-scheduler-02-config.yml" 16 | dest: /tmp/cluster-scheduler-02-config.yml 17 | 18 | - name: Get the chrony client configuration 19 | slurp: 20 | src: "{{ocp_install_dir}}/chrony_client.conf" 21 | register: chrony_client_conf 22 | 23 | - name: Create Machine Config for chrony 24 | template: 25 | src: ocp_ignition_chrony.j2 26 | dest: "{{ocp_install_dir}}/openshift/40_{{item}}s_ocp_chrony_configuration.yaml" 27 | owner: root 28 | group: root 29 | mode: 0644 30 | with_items: 31 | - "master" 32 | - "worker" 33 | vars: 34 | node_type: "{{item}}" 35 | when: override_chrony_settings_on_cluster_nodes|bool 36 | 37 | - name: Get the global proxy configuration (if defined) 38 | slurp: 39 | src: "/etc/profile.d/global_proxy.sh" 40 | register: global_proxy_conf 41 | when: configure_global_proxy is defined and configure_global_proxy==True 42 | 43 | - name: Create Machine Config for global proxy 44 | template: 45 | src: ocp_ignition_proxy.j2 46 | dest: "{{ocp_install_dir}}/openshift/40_{{item}}s_ocp_proxy_configuration.yaml" 47 | owner: root 48 | group: root 49 | mode: 0644 50 | with_items: 51 | - "master" 52 | - "worker" 53 | vars: 54 | node_type: "{{item}}" 55 | when: configure_global_proxy is defined and configure_global_proxy==True 56 | 57 | - name: Create OpenShift ignition configurations 58 | shell: | 59 | {{ocp_install_dir}}/openshift-install create ignition-configs --dir={{ocp_install_dir}} 60 | 61 | - name: Change ignition files permissions 62 | file: 63 | path: "{{item}}" 64 | owner: root 65 | group: root 66 | mode: '0644' 67 | with_items: 68 | - "{{ocp_install_dir}}/bootstrap.ign" 69 | - "{{ocp_install_dir}}/master.ign" 70 | - "{{ocp_install_dir}}/worker.ign" 71 | 72 | - block: 73 | - name: Check if SELinux has already been configured for ignition files 74 | shell: 75 | semanage fcontext -l | grep httpd_sys_content_t | grep '{{ ocp_install_dir }}/.*\.ign' | wc -l 76 | register: _http_ignition 77 | 78 | - name: Enable access to ignition files for nginx (SELinux) 79 | shell: | 80 | semanage fcontext -a -t httpd_sys_content_t -s system_u '{{ ocp_install_dir }}/.*\.ign' 81 | when: _http_ignition.stdout == "0" 82 | when: ansible_selinux.status=='enabled' 83 | 84 | - name: Restore SELinux context for ignition files 85 | shell: restorecon -Rv "{{ocp_install_dir}}/*.ign" 86 | when: ansible_selinux.status=='enabled' 87 | 88 | - name: Generate ignition file for bootstrap server OVA installation 89 | template: 90 | src: ocp_ignition_bootstrap_ova.j2 91 | dest: "{{ocp_install_dir}}/bootstrap-ova.ign" 92 | when: rhcos_installation_method|upper=="OVA" 93 | 94 | - name: Generate base64 ignition files for OVA installation 95 | copy: 96 | content: "{{ lookup('file',item) | b64encode }}" 97 | dest: "{{item}}.64" 98 | with_items: 99 | - "{{ocp_install_dir}}/bootstrap-ova.ign" 100 | - "{{ocp_install_dir}}/master.ign" 101 | - "{{ocp_install_dir}}/worker.ign" 102 | when: rhcos_installation_method|upper=="OVA" 103 | 104 | - block: 105 | - name: Check if SELinux has already been configured for base64 ignition files 106 | shell: 107 | semanage fcontext -l | grep httpd_sys_content_t | grep '{{ ocp_install_dir }}/.*\.ign\.64' | wc -l 108 | register: _http_ignition_64 109 | 110 | - name: Enable access to base64 ignition files for nginx (SELinux) 111 | shell: | 112 | semanage fcontext -a -t httpd_sys_content_t -s system_u '{{ ocp_install_dir }}/.*\.ign\.64' 113 | when: _http_ignition_64.stdout == "0" 114 | when: 115 | - 
ansible_selinux.status=='enabled' 116 | - rhcos_installation_method|upper=="OVA" 117 | 118 | - name: Restore SELinux context for ignition files 119 | shell: restorecon -Rv "{{ocp_install_dir}}/*.ign.64" 120 | when: 121 | - ansible_selinux.status=='enabled' 122 | - rhcos_installation_method|upper=="OVA" -------------------------------------------------------------------------------- /playbooks/tasks/ocp_ignition_bootstrap_ova.j2: -------------------------------------------------------------------------------- 1 | {% if openshift_release < "4.6" %} 2 | { 3 | "ignition": { 4 | "config": { 5 | "merge": [ 6 | { 7 | "source": "http://{{groups['bastion'][0]}}:{{http_server_port}}/{{ocp_install_dir}}/bootstrap.ign" 8 | } 9 | ] 10 | }, 11 | "version": "2.2.0" 12 | } 13 | } 14 | {% endif %} 15 | {% if openshift_release >= "4.6" %} 16 | { 17 | "ignition": { 18 | "config": { 19 | "merge": [ 20 | { 21 | "source": "http://{{groups['bastion'][0]}}:{{http_server_port}}/{{ocp_install_dir}}/bootstrap.ign" 22 | } 23 | ] 24 | }, 25 | "version": "3.1.0" 26 | } 27 | } 28 | {% endif %} -------------------------------------------------------------------------------- /playbooks/tasks/ocp_ignition_chrony.j2: -------------------------------------------------------------------------------- 1 | apiVersion: machineconfiguration.openshift.io/v1 2 | kind: MachineConfig 3 | metadata: 4 | labels: 5 | machineconfiguration.openshift.io/role: {{node_type}} 6 | name: 40-{{node_type}}s-chrony-configuration 7 | spec: 8 | config: 9 | ignition: 10 | config: {} 11 | security: 12 | tls: {} 13 | timeouts: {} 14 | version: 3.1.0 15 | networkd: {} 16 | passwd: {} 17 | storage: 18 | files: 19 | - contents: 20 | source: "data:text/plain;charset=utf-8;base64,{{chrony_client_conf['content']}}" 21 | mode: 420 22 | overwrite: true 23 | path: /etc/chrony.conf 24 | osImageURL: "" -------------------------------------------------------------------------------- /playbooks/tasks/ocp_ignition_proxy.j2: -------------------------------------------------------------------------------- 1 | apiVersion: machineconfiguration.openshift.io/v1 2 | kind: MachineConfig 3 | metadata: 4 | labels: 5 | machineconfiguration.openshift.io/role: {{node_type}} 6 | name: 40-{{node_type}}s-proxy-configuration 7 | spec: 8 | config: 9 | ignition: 10 | config: {} 11 | security: 12 | tls: {} 13 | timeouts: {} 14 | version: 3.1.0 15 | networkd: {} 16 | passwd: {} 17 | storage: 18 | files: 19 | - contents: 20 | source: "data:text/plain;charset=utf-8;base64,{{global_proxy_conf['content']}}" 21 | mode: 420 22 | overwrite: true 23 | path: /etc/profile.d/global_proxy.sh 24 | osImageURL: "" -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_config.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | apiVersion: v1 4 | baseDomain: {{ domain_name }} 5 | {% if http_proxy is defined %} 6 | proxy: 7 | httpProxy: {{http_proxy}} 8 | httpsProxy: {{https_proxy}} 9 | noProxy: {{no_proxy}} 10 | {% endif %} 11 | compute: 12 | - hyperthreading: Enabled 13 | name: worker 14 | {% if rhcos_installation_method|upper!="IPI" %} 15 | replicas: 0 16 | {% else %} 17 | replicas: {{vm_number_of_workers}} 18 | platform: 19 | vsphere: 20 | cpus: {{vm_worker_cpu}} 21 | coresPerSocket: 2 22 | memoryMB: {{vm_worker_mem}} 23 | osDisk: 24 | diskSizeGB: {{vm_worker_disk}} 25 | {% endif %} 26 | controlPlane: 27 | hyperthreading: Enabled 28 | name: master 29 | {% if 
rhcos_installation_method|upper!="IPI" %} 30 | replicas: 3 31 | {% else %} 32 | replicas: {{vm_number_of_masters}} 33 | platform: 34 | vsphere: 35 | cpus: {{vm_master_cpu}} 36 | coresPerSocket: 2 37 | memoryMB: {{vm_master_mem}} 38 | osDisk: 39 | diskSizeGB: {{vm_master_disk}} 40 | {% endif %} 41 | metadata: 42 | name: {{ cluster_name }} 43 | platform: 44 | {% if rhcos_installation_method|upper!="IPI" %} 45 | none: {} 46 | {% else %} 47 | vsphere: 48 | apiVIP: {{apiVIP}} 49 | ingressVIP: {{ingressVIP}} 50 | cluster: {{vc_cluster}} 51 | datacenter: {{vc_datacenter}} 52 | defaultDatastore: {{vc_datastore}} 53 | network: {{vc_network}} 54 | username: {{vc_user}} 55 | password: {{vc_password}} 56 | vCenter: {{vc_vcenter}} 57 | folder: /{{vc_datacenter}}/vm/{{vc_folder}} 58 | {% endif %} 59 | {% if service_network is defined %} 60 | networking: 61 | serviceNetwork: 62 | - {{ service_network }} 63 | {% endif %} 64 | pullSecret: '{{ pull_secret['content'] | b64decode | trim }}' 65 | sshKey: '{{ ssh_key['content'] | b64decode | trim }}' 66 | {% if air_gapped_install is defined and air_gapped_install %} 67 | additionalTrustBundle: | 68 | {{ lookup('file', air_gapped_registry_trust_bundle_file) | indent(2,true) }} 69 | imageContentSources: 70 | - mirrors: 71 | - {{air_gapped_mirror_ocp_release}} 72 | source: {{air_gapped_source_ocp_release}} 73 | - mirrors: 74 | - {{air_gapped_mirror_ocp_release_dev}} 75 | source: {{air_gapped_source_ocp_release_dev}} 76 | {% endif %} 77 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_config.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Generate install-config.yaml 3 | 4 | - name: Generate OCP administrator ocadmin password 5 | htpasswd: 6 | path: "{{ocp_install_dir}}/htpasswd" 7 | name: ocadmin 8 | password: "{{ocp_admin_password}}" 9 | owner: root 10 | mode: 0640 11 | 12 | - name: Get pull secret from file 13 | slurp: 14 | src: "{{ pull_secret_file }}" 15 | register: pull_secret 16 | 17 | - name: Get public SSH key 18 | slurp: 19 | src: "/root/.ssh/id_rsa.pub" 20 | register: ssh_key 21 | 22 | - name: Generate install-config.yaml 23 | template: 24 | src: ocp_install_config.j2 25 | dest: "{{ocp_install_dir}}/install-config.yaml" 26 | owner: root 27 | group: root 28 | mode: 0666 29 | 30 | - name: Generate install-config.yaml in /tmp 31 | template: 32 | src: ocp_install_config.j2 33 | dest: "/tmp/install-config.yaml" 34 | owner: root 35 | group: root 36 | mode: 0666 37 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_dir.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create OpenShift installation directory 3 | file: 4 | group: "root" 5 | mode: 0755 6 | owner: "root" 7 | path: "{{ocp_install_dir}}" 8 | state: directory 9 | 10 | - name: Create scripts directory 11 | file: 12 | group: "root" 13 | mode: 0755 14 | owner: "root" 15 | path: "{{ocp_install_dir}}/scripts" 16 | state: directory 17 | 18 | - name: Check if installation was already performed 19 | stat: 20 | path: "{{ocp_install_dir}}/auth" 21 | register: auth_dir 22 | 23 | - set_fact: 24 | install_script_already_run: True 25 | when: auth_dir.stat.exists -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_ipi.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: 
Import vSphere certificates 4 | include: vm_import_certificates.yaml 5 | 6 | - name: Create OpenShift cluster using IPI installer 7 | shell: | 8 | {{ocp_install_dir}}/openshift-install create cluster --dir={{ocp_install_dir}} 9 | 10 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate {{ocp_install_dir}}/scripts/wait_bootstrap.sh 4 | template: 5 | src: ocp_install_scripts_bootstrap_wait.j2 6 | dest: "{{ocp_install_dir}}/scripts/wait_bootstrap.sh" 7 | owner: root 8 | group: root 9 | mode: 0755 10 | when: rhcos_installation_method|upper!="IPI" 11 | 12 | - name: Generate {{ocp_install_dir}}/scripts/remove_bootstrap.sh 13 | template: 14 | src: ocp_install_scripts_bootstrap_remove.j2 15 | dest: "{{ocp_install_dir}}/scripts/remove_bootstrap.sh" 16 | owner: root 17 | group: root 18 | mode: 0755 19 | when: rhcos_installation_method|upper!="IPI" 20 | 21 | - name: Generate {{ocp_install_dir}}/scripts/wait_nodes_ready.sh 22 | template: 23 | src: ocp_install_scripts_wait_nodes_ready.j2 24 | dest: "{{ocp_install_dir}}/scripts/wait_nodes_ready.sh" 25 | owner: root 26 | group: root 27 | mode: 0755 28 | 29 | - name: Generate {{ocp_install_dir}}/scripts/create_registry_storage.sh 30 | template: 31 | src: ocp_install_scripts_create_registry_storage.j2 32 | dest: "{{ocp_install_dir}}/scripts/create_registry_storage.sh" 33 | owner: root 34 | group: root 35 | mode: 0755 36 | 37 | - name: Generate {{ocp_install_dir}}/scripts/wait_install.sh 38 | template: 39 | src: ocp_install_scripts_install_wait.j2 40 | dest: "{{ocp_install_dir}}/scripts/wait_install.sh" 41 | owner: root 42 | group: root 43 | mode: 0755 44 | 45 | - name: Generate {{ocp_install_dir}}/scripts/create_admin_user.sh 46 | template: 47 | src: ocp_install_scripts_create_admin_user.j2 48 | dest: "{{ocp_install_dir}}/scripts/create_admin_user.sh" 49 | owner: root 50 | group: root 51 | mode: 0755 52 | 53 | - name: Generate {{ocp_install_dir}}/scripts/post_install.sh 54 | template: 55 | src: ocp_install_scripts_post_install.j2 56 | dest: "{{ocp_install_dir}}/scripts/post_install.sh" 57 | owner: root 58 | group: root 59 | mode: 0755 60 | 61 | - include: nfs_storage_class_script.yaml 62 | when: create_nfs_sc|bool 63 | 64 | - name: Generate {{ocp_install_dir}}/scripts/disable_dhcp.sh 65 | template: 66 | src: ocp_install_scripts_dhcp_disable.j2 67 | dest: "{{ocp_install_dir}}/scripts/disable_dhcp.sh" 68 | owner: root 69 | group: root 70 | mode: 0755 71 | 72 | - name: Generate {{ocp_install_dir}}/scripts/wait_co_ready.sh 73 | template: 74 | src: ocp_install_scripts_wait_co_ready.j2 75 | dest: "{{ocp_install_dir}}/scripts/wait_co_ready.sh" 76 | owner: root 77 | group: root 78 | mode: 0755 79 | 80 | - name: Generate {{ocp_install_dir}}/scripts/start_namespace.sh 81 | template: 82 | src: ocp_start_namespace.j2 83 | dest: "{{ocp_install_dir}}/scripts/start_namespace.sh" 84 | owner: root 85 | group: root 86 | mode: 0755 87 | 88 | - name: Generate {{ocp_install_dir}}/scripts/stop_namespace.sh 89 | template: 90 | src: ocp_stop_namespace.j2 91 | dest: "{{ocp_install_dir}}/scripts/stop_namespace.sh" 92 | owner: root 93 | group: root 94 | mode: 0755 95 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_bootstrap_remove.j2: -------------------------------------------------------------------------------- 1 | # {{ 
ansible_managed }} 2 | 3 | ansible-playbook -i {{inventory_file}} {{script_dir}}/playbooks/ocp4_remove_bootstrap.yaml \ 4 | "$@" 5 | 6 | echo "Shutting down bootstrap server {{groups['bootstrap'][0]}}" 7 | ssh -o StrictHostKeyChecking=no core@{{groups['bootstrap'][0]}} 'sudo shutdown -h now &' 8 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_bootstrap_wait.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | {{ocp_install_dir}}/openshift-install --dir={{ocp_install_dir}} wait-for bootstrap-complete "$@" 4 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_create_admin_user.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 4 | oc create secret generic htpass-secret --from-file={{ocp_install_dir}}/htpasswd -n openshift-config 5 | oc apply -f {{ocp_install_dir}}/ocp_oauth.yaml 6 | oc adm policy add-cluster-role-to-user cluster-admin ocadmin 7 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_create_registry_storage.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 4 | 5 | echo "Check if registry storage PVS already exists" 6 | oc get pvc -n openshift-image-registry image-registry-pvc > /dev/null 2>&1 7 | 8 | if [ $? -ne 0 ];then 9 | # Disable the image registry 10 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Removed"}}' 11 | 12 | # Create PVC pointing to the registry storage class to use 13 | oc create -f {{ocp_install_dir}}/registry_storage.yaml 14 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{ "claim": "image-registry-pvc" }}}}' 15 | 16 | # Create the default route for the image registry 17 | oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge 18 | 19 | # Enable the image registry 20 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}' 21 | else 22 | echo "Image registry PVC already exists, no changes made" 23 | fi 24 | 25 | # Show the current status of the cluster operator 26 | echo "Showing status of image-registry cluster operator: oc get co image-registry" 27 | oc get co image-registry -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_dhcp_disable.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | echo "Disabling DHCP for the DNS server" 4 | ansible-playbook -i {{inventory_file}} {{script_dir}}/playbooks/ocp4_disable_dhcp.yaml \ 5 | "$@" 6 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_install_wait.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | {{ocp_install_dir}}/openshift-install --dir={{ocp_install_dir}} wait-for install-complete 4 | -------------------------------------------------------------------------------- 
/playbooks/tasks/ocp_install_scripts_post_install.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 4 | 5 | {% if opt_out_health_checking is defined and opt_out_health_checking %} 6 | echo "Opt out of remote health checking" 7 | oc extract secret/pull-secret -n openshift-config --to=/tmp 8 | cat /tmp/.dockerconfigjson | jq 'del(.auths["cloud.openshift.com"])' > /tmp/new_ocp_pullsecret.json 9 | oc set data secret/pull-secret -n openshift-config --from-file=.dockerconfigjson=/tmp/new_ocp_pullsecret.json 10 | {% endif %} 11 | 12 | echo "Unlinking PXE boot files" 13 | unlink {{ocp_install_dir}}/tftpboot/pxelinux.cfg/01* 2>/dev/null 14 | 15 | echo "Rebuilding ~/.ssh/known_hosts to allow password-less ssh" 16 | rm -f /root/.ssh/known_hosts 17 | {% for host in groups['masters'] | union(groups['workers']) | sort %} 18 | ssh-keyscan {{host}} >> ~/.ssh/known_hosts 2>/dev/null 19 | ssh-keyscan {{hostvars[host]['host']}}.{{cluster_name}}.{{domain_name}} >> ~/.ssh/known_hosts 2>/dev/null 20 | ssh-keyscan {{hostvars[host]['host']}} >> ~/.ssh/known_hosts 2>/dev/null 21 | {% endfor %} 22 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_wait_co_ready.j2: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | SLEEP_TIME=10 5 | TIME_OUT=900 6 | 7 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 8 | 9 | wait_time=0 10 | ready=0 11 | while (("$ready"=="0"));do 12 | oc get co --no-headers > /tmp/wait_co_ready.out 13 | if [ $? == 0 ];then 14 | not_ready_count=$(cat /tmp/wait_co_ready.out | awk '{print $3}' | grep -c "False") 15 | progressing_count=$(cat /tmp/wait_co_ready.out | awk '{print $4}' | grep -c "True") 16 | degraded_count=$(cat /tmp/wait_co_ready.out | awk '{print $5}' | grep -c "True") 17 | if (("$not_ready_count"=="0" && "$progressing_count"=="0" && "$degraded_count"=="0"));then 18 | ready=1 19 | else 20 | echo "Not ready: $not_ready_count, Progressing: $progressing_count, Degraded: $degraded_count" 21 | fi 22 | fi 23 | 24 | if (("$ready"=="0"));then 25 | echo "Cluster operators not ready yet, sleeping for $SLEEP_TIME seconds" 26 | sleep $SLEEP_TIME 27 | wait_time=$(($wait_time + $SLEEP_TIME)) 28 | fi 29 | 30 | if (("$wait_time" > "$TIME_OUT"));then 31 | echo "Waiting for cluster operators to become ready timed out after $TIME_OUT seconds" 32 | exit 1 33 | fi 34 | done 35 | 36 | echo "All cluster operators are ready now" -------------------------------------------------------------------------------- /playbooks/tasks/ocp_install_scripts_wait_nodes_ready.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | 3 | all_nodes="{% for host in groups['masters'] | union(groups['workers']) | sort -%} 4 | {{hostvars[host]['host']}}.{{cluster_name}}.{{domain_name}} {% if not loop.last -%} {%- endif %} 5 | {%- endfor %}" 6 | all_nodes_count=$(echo $all_nodes | wc -w) 7 | 8 | echo "Waiting for $all_nodes_count nodes to become ready" 9 | 10 | TIMEOUT=1800 11 | WAIT=10 12 | WAITED=0 13 | ALL_NODES_READY=0 14 | 15 | export KUBECONFIG={{ocp_install_dir}}/auth/kubeconfig 16 | 17 | while [ $ALL_NODES_READY -eq 0 ] && [ $WAITED -lt $TIMEOUT ];do 18 | MISSING=0 19 | NOT_READY=0 20 | READY=0 21 | 22 | # Approve any pending
CSRs 23 | oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | \ 24 | xargs --no-run-if-empty oc adm certificate approve 25 | 26 | for node in $(oc get no --no-headers -o custom-columns=':.metadata.name');do 27 | oc get no $node >/dev/null 2>&1 28 | node_status=$(oc get no $node -o jsonpath='{.status.conditions[?(.status=="True")].type}') 29 | if [[ "$node_status" == "Ready" ]];then 30 | ((READY+=1)) 31 | else 32 | ((NOT_READY+=1)) 33 | fi 34 | done 35 | 36 | # Now test if all nodes were ready 37 | MISSING=$(expr $all_nodes_count - $READY - $NOT_READY) 38 | 39 | if [ $MISSING -eq 0 ] && [ $NOT_READY -eq 0 ];then 40 | ALL_NODES_READY=1 41 | else 42 | echo "Not all nodes are available yet: $READY ready, $MISSING missing, $NOT_READY not ready" 43 | sleep $WAIT 44 | let "WAITED+=WAIT" 45 | fi 46 | done 47 | 48 | if [ $ALL_NODES_READY -eq 1 ];then 49 | echo "All nodes are ready now" 50 | oc get no 51 | else 52 | echo "Not all nodes have become ready within $TIMEOUT seconds" 53 | exit 1 54 | fi 55 | -------------------------------------------------------------------------------- /playbooks/tasks/ocp_recover_certificates.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: localhost 4 | connection: local 5 | tasks: 6 | 7 | - name: Delete left-over recovery directory 8 | file: 9 | path: "{{ocp_install_dir}}/recovery" 10 | state: absent 11 | 12 | - name: Create recovery directory 13 | file: 14 | group: "root" 15 | mode: 0755 16 | owner: "root" 17 | path: "{{ocp_install_dir}}/recovery" 18 | state: directory 19 | 20 | - hosts: masters[0] 21 | remote_user: core 22 | become: yes 23 | gather_facts: no 24 | tasks: 25 | - name: Get OpenShift version 26 | shell: | 27 | oc version 28 | register: oc_version_full 29 | 30 | - set_fact: 31 | oc_version="{{oc_version_full.stdout | regex_search(regexp,'\\0') | first}}" 32 | vars: 33 | regexp: '\d+\.\d+\.\d+' 34 | 35 | - debug: 36 | msg: "{{oc_version}}" 37 | 38 | - name: Obtain Kubernetes API Server Operator image reference for OpenShift version {{oc_version}} 39 | shell: | 40 | oc adm release info --registry-config='/var/lib/kubelet/config.json' \ 41 | "quay.io/openshift-release-dev/ocp-release:{{oc_version}}-x86_64" \ 42 | --image-for=cluster-kube-apiserver-operator 43 | register: kao_image 44 | 45 | - set_fact: 46 | kao_image_name="{{kao_image.stdout}}" 47 | 48 | - name: Obtain currently running pods 49 | shell: | 50 | crictl pods > /tmp/pods_before.log 51 | 52 | - name: Pull the cluster-kube-apiserver-operator image 53 | shell: | 54 | podman pull --authfile=/var/lib/kubelet/config.json "{{kao_image_name}}" 55 | 56 | - name: Destroy current recovery API server 57 | shell: | 58 | podman run -it --network=host -v /etc/kubernetes/:/etc/kubernetes/:Z \ 59 | --entrypoint=/usr/bin/cluster-kube-apiserver-operator "{{kao_image_name}}" \ 60 | recovery-apiserver destroy 61 | register: recovery_active 62 | failed_when: recovery_active.rc != 1 and recovery_active.rc != 0 63 | 64 | - name: Start Kubernetes API Operator pod 65 | shell: | 66 | podman run -it --network=host -v /etc/kubernetes/:/etc/kubernetes/:Z \ 67 | --entrypoint=/usr/bin/cluster-kube-apiserver-operator "{{kao_image_name}}" \ 68 | recovery-apiserver create 69 | 70 | - name: Wait for recovery API server to come up 71 | shell: | 72 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 73 | until oc get namespace kube-system 2>/dev/null 1>&2; do echo "waiting...";sleep 1; done 
74 | 75 | - name: Regenerate certificates 76 | shell: | 77 | podman run -it --network=host -v /etc/kubernetes/:/etc/kubernetes/:Z \ 78 | --entrypoint=/usr/bin/cluster-kube-apiserver-operator "{{kao_image_name}}" \ 79 | regenerate-certificates 80 | 81 | - name: Force new roll-outs 82 | shell: | 83 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 84 | oc patch kubeapiserver cluster \ 85 | -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 86 | oc patch kubecontrollermanager cluster \ 87 | -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 88 | oc patch kubescheduler cluster \ 89 | -p='{"spec": {"forceRedeploymentReason": "recovery-'"$( date --rfc-3339=ns )"'"}}' --type=merge 90 | 91 | - name: Recover kubeconfig 92 | shell: | 93 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 94 | /usr/local/bin/recover-kubeconfig.sh > /tmp/copy-kubeconfig 95 | 96 | - name: Get CA certificate 97 | shell: | 98 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 99 | oc get configmap kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator \ 100 | --template='{{ '{{' }} index .data "ca-bundle.crt" {{ '}}' }}' > /tmp/copy-kubelet-ca.crt 101 | 102 | - name: Fetch the generated files 103 | fetch: 104 | src: "{{item}}" 105 | flat: yes 106 | dest: "{{ocp_install_dir}}/recovery/" 107 | with_items: 108 | - /tmp/copy-kubeconfig 109 | - /tmp/copy-kubelet-ca.crt 110 | 111 | - hosts: 112 | - masters 113 | remote_user: core 114 | become: yes 115 | gather_facts: no 116 | tasks: 117 | - name: Copy kubeconfig and CA certificate to masters 118 | copy: 119 | src: "{{item.src}}" 120 | dest: "{{item.dest}}" 121 | with_items: 122 | - { src: '{{ocp_install_dir}}/recovery/copy-kubeconfig', dest: '/etc/kubernetes/kubeconfig'} 123 | - { src: '{{ocp_install_dir}}/recovery/copy-kubelet-ca.crt', dest: '/etc/kubernetes/kubelet-ca.crt'} 124 | 125 | - name: Force deamon-reload on masters 126 | file: 127 | path: /run/machine-config-daemon-force 128 | state: touch 129 | 130 | - name: Stop kubelet on masters 131 | shell: | 132 | systemctl stop kubelet 133 | rm -rf /var/lib/kubelet/pki /var/lib/kubelet/kubeconfig 134 | 135 | - name: Wait for 10 seconds for kubelet to stop on masters 136 | pause: 137 | seconds: 10 138 | 139 | - name: Kill pods on masters 140 | shell: | 141 | if [ -e /tmp/pods_before.log ];then 142 | crictl stopp $(cat /tmp/pods_before.log | awk '{print $1}' | grep -v POD) 143 | crictl rmp $(cat /tmp/pods_before.log | awk '{print $1}' | grep -v POD) 144 | else 145 | crictl stopp $(crictl pods | awk '{print $1}' | grep -v POD) 146 | crictl rmp $(crictl pods | awk '{print $1}' | grep -v POD) 147 | fi 148 | 149 | - name: Restart kubelet on masters 150 | shell: | 151 | systemctl start kubelet 152 | 153 | - name: Wait for 2 minutes for kubelet to come up on masters and to start generating CSRs 154 | pause: 155 | minutes: 2 156 | 157 | - hosts: masters[0] 158 | remote_user: core 159 | become: yes 160 | gather_facts: no 161 | tasks: 162 | - name: Approve CSRs and wait for masters to become ready (max 30 mins) 163 | shell: | 164 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 165 | oc get csr --no-headers 2> /dev/null | grep Pending | awk '{print $1}' | xargs -r oc adm certificate approve 166 | echo "Not ready nodes are" $(oc 
get no --no-headers | grep master | grep -i NotReady | wc -l) 167 | register: not_ready_nodes 168 | until: not_ready_nodes.stdout.find("Not ready nodes are 0") != -1 169 | retries: 60 170 | delay: 30 171 | ignore_errors: yes 172 | 173 | - hosts: 174 | - workers 175 | remote_user: core 176 | become: yes 177 | gather_facts: no 178 | tasks: 179 | - name: Stop kubelet on workers 180 | shell: | 181 | systemctl stop kubelet 182 | rm -rf /var/lib/kubelet/pki /var/lib/kubelet/kubeconfig 183 | 184 | - name: Wait for 10 seconds for kubelet to stop on workers 185 | pause: 186 | seconds: 10 187 | 188 | - name: Restart kubelet on workers 189 | shell: | 190 | systemctl start kubelet 191 | 192 | - name: Wait for 2 minutes for kubelet to come up on workers and to start generating CSRs 193 | pause: 194 | minutes: 2 195 | 196 | - hosts: masters[0] 197 | remote_user: core 198 | become: yes 199 | gather_facts: no 200 | tasks: 201 | - name: Approve CSRs and wait for workers to become ready (max 30 mins) 202 | shell: | 203 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 204 | oc get csr --no-headers 2> /dev/null | grep Pending | awk '{print $1}' | xargs -r oc adm certificate approve 205 | echo "Not ready nodes are" $(oc get no --no-headers | grep worker | grep -i NotReady | wc -l) 206 | register: not_ready_nodes 207 | until: not_ready_nodes.stdout.find("Not ready nodes are 0") != -1 208 | retries: 60 209 | delay: 30 210 | ignore_errors: yes 211 | 212 | - name: Copy script to wait for cluster operators to first master node 213 | copy: 214 | src: '{{ocp_install_dir}}/scripts/wait_co_ready.sh' 215 | dest: "/tmp/wait_co_ready.sh" 216 | mode: preserve 217 | 218 | - name: Wait for cluster operators to become ready, this may take 10-15 minutes 219 | shell: | 220 | export KUBECONFIG=/etc/kubernetes/static-pod-resources/recovery-kube-apiserver-pod/admin.kubeconfig 221 | /tmp/wait_co_ready.sh 222 | 223 | - name: Destroy recovery API server {{kao_image_name}} 224 | shell: | 225 | podman run -it --network=host -v /etc/kubernetes/:/etc/kubernetes/:Z \ 226 | --entrypoint=/usr/bin/cluster-kube-apiserver-operator "{{kao_image_name}}" \ 227 | recovery-apiserver destroy 228 | tags: destroy -------------------------------------------------------------------------------- /playbooks/tasks/ocp_start_namespace.j2: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | # Check number of parameters 5 | if [ "$#" -lt 2 ]; then 6 | echo "Usage: $0 -n <namespace>" 7 | exit 1 8 | fi 9 | 10 | # Parse parameters 11 | PARAMS="" 12 | while (( "$#" )); do 13 | case "$1" in 14 | -n) 15 | NAMESPACE=$2 16 | shift 2 17 | ;; 18 | *) # preserve remaining arguments 19 | PARAMS="$PARAMS $1" 20 | shift 21 | ;; 22 | esac 23 | done 24 | 25 | # Set remaining parameters 26 | eval set -- "$PARAMS" 27 | 28 | echo "Removing compute quota in namespace $NAMESPACE" 29 | oc delete resourcequota -n $NAMESPACE project-quota-pods-0 30 | 31 | echo "Quota removed, pods in namespace $NAMESPACE should now be starting (this may take a couple of minutes)" -------------------------------------------------------------------------------- /playbooks/tasks/ocp_stop_namespace.j2: -------------------------------------------------------------------------------- 1 | #!
/bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | # Check number of parameters 5 | if [ "$#" -lt 2 ]; then 6 | echo "Usage: $0 -n <namespace>" 7 | exit 1 8 | fi 9 | 10 | # Parse parameters 11 | PARAMS="" 12 | while (( "$#" )); do 13 | case "$1" in 14 | -n) 15 | NAMESPACE=$2 16 | shift 2 17 | ;; 18 | *) # preserve remaining arguments 19 | PARAMS="$PARAMS $1" 20 | shift 21 | ;; 22 | esac 23 | done 24 | 25 | # Set remaining parameters 26 | eval set -- "$PARAMS" 27 | 28 | # Set resource quota for project 29 | echo "Setting compute quota of namespace $NAMESPACE to 0" 30 | oc create quota project-quota-pods-0 --hard pods="0" -n $NAMESPACE 31 | 32 | echo "Stopping all pods in namespace $NAMESPACE" 33 | oc delete po -n $NAMESPACE --all 34 | -------------------------------------------------------------------------------- /playbooks/tasks/ping.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Ping command, just to test inclusion 3 | 4 | - name: Ping 5 | ping 6 | -------------------------------------------------------------------------------- /playbooks/tasks/prepare_xfs_volume.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Get and format volumes, then mount to the appropriate directory 3 | 4 | - name: Get volume for specified selector of {{ volume_selector }} 5 | shell: "lsblk -l | grep '{{ volume_selector }}' | head -1 | awk '{print $1}'" 6 | register: volume_name 7 | 8 | - name: Format volume "/dev/{{ volume_name.stdout }}" 9 | filesystem: 10 | fstype: xfs 11 | dev: "/dev/{{ volume_name.stdout }}" 12 | when: volume_name.stdout != "" 13 | 14 | - name: Mount volume "{{ mount_point }}" 15 | mount: 16 | path: "{{ mount_point }}" 17 | src: /dev/{{ volume_name.stdout }} 18 | fstype: xfs 19 | opts: noatime 20 | state: mounted 21 | when: volume_name.stdout != "" 22 | -------------------------------------------------------------------------------- /playbooks/tasks/pxe_links.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create links for MAC addresses for PXE boot 4 | file: 5 | src: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/{{hostvars[item]['host']}}" 6 | dest: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/01-{{hostvars[item]['mac'] | lower | replace(':','-')}}" 7 | owner: root 8 | group: root 9 | state: link 10 | with_items: 11 | - "{{groups['bootstrap']}}" 12 | - "{{groups['masters']}}" 13 | - "{{groups['workers']}}" 14 | -------------------------------------------------------------------------------- /playbooks/tasks/reboot.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Reboot servers if needed and allowed 4 | reboot: 5 | when: reboot_server and allow_reboot|bool 6 | 7 | - name: Reboot servers manually !!! 8 | debug: 9 | msg: 10 | - Changes have been made to this server which require a reboot. You must reboot the server 11 | - manually before installing OpenShift.
12 | when: reboot_server and not allow_reboot|bool 13 | -------------------------------------------------------------------------------- /playbooks/tasks/registry_storage.j2: -------------------------------------------------------------------------------- 1 | # {{ ansible_managed }} 2 | --- 3 | kind: PersistentVolumeClaim 4 | apiVersion: v1 5 | metadata: 6 | name: image-registry-pvc 7 | namespace: openshift-image-registry 8 | spec: 9 | storageClassName: "{{image_registry_storage_class}}" 10 | accessModes: 11 | - ReadWriteMany 12 | resources: 13 | requests: 14 | storage: "{{image_registry_size}}" 15 | -------------------------------------------------------------------------------- /playbooks/tasks/registry_storage.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Generate yaml files for image registry 4 | template: 5 | src: registry_storage.j2 6 | dest: "{{ocp_install_dir}}/registry_storage.yaml" 7 | owner: root 8 | group: root 9 | -------------------------------------------------------------------------------- /playbooks/tasks/set_hostname.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Set host name 3 | 4 | - name: Set host name for bastion server to "{{hostvars[groups['bastion'][0]].host}}.{{cluster_name}}.{{domain_name}}" 5 | hostname: 6 | name: "{{hostvars[groups['bastion'][0]].host}}.{{cluster_name}}.{{domain_name}}" 7 | when: set_bastion_hostname|bool 8 | -------------------------------------------------------------------------------- /playbooks/tasks/ssh_playbook.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Set up password-less SSH 3 | 4 | - hosts: localhost 5 | remote_user: root 6 | gather_facts: no 7 | become: yes 8 | 9 | tasks: 10 | - name: Get host name for current node 11 | command: hostname 12 | register: hostname 13 | 14 | - name: Generate SSH key on current host (if not existent) 15 | shell: | 16 | if [ ! 
-e ~/.ssh/id_rsa ];then ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -C "{{ hostname.stdout }}";fi 17 | ignore_errors: true 18 | 19 | - hosts: 20 | - bastion 21 | remote_user: root 22 | gather_facts: no 23 | become: yes 24 | 25 | tasks: 26 | - name: Configure host for password-less SSH for bastion node 27 | authorized_key: 28 | user: root 29 | state: present 30 | key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" 31 | 32 | - hosts: 33 | - nfs 34 | remote_user: root 35 | gather_facts: no 36 | become: yes 37 | 38 | tasks: 39 | - name: Configure host for password-less SSH for NFS node 40 | authorized_key: 41 | user: root 42 | state: present 43 | key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" 44 | when: manage_nfs|bool 45 | 46 | - hosts: 47 | - lb 48 | remote_user: root 49 | gather_facts: no 50 | become: yes 51 | 52 | tasks: 53 | - name: Configure host for password-less SSH for load balancer 54 | authorized_key: 55 | user: root 56 | state: present 57 | key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}" 58 | when: manage_load_balancer is not defined or manage_load_balancer|bool -------------------------------------------------------------------------------- /playbooks/tasks/tftpboot.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Find CoreOS kernel 3 | find: 4 | paths: "{{ocp_install_dir}}" 5 | patterns: "{{rhcos_kernel_package_pattern}}" 6 | use_regex: yes 7 | register: found_kernel_files 8 | 9 | - name: Find initial RAM disk package 10 | find: 11 | paths: "{{ocp_install_dir}}" 12 | patterns: "{{rhcos_initramfs_package_pattern}}" 13 | use_regex: yes 14 | register: found_initramfs_files 15 | 16 | - name: Find root file system package 17 | find: 18 | paths: "{{ocp_install_dir}}" 19 | patterns: "{{rhcos_rootfs_package_pattern}}" 20 | use_regex: yes 21 | register: found_rootfs_files 22 | 23 | - set_fact: 24 | rhcos_kernel_package: "{{(found_kernel_files.files | last)['path']}}" 25 | rhcos_initramfs_package: "{{(found_initramfs_files.files | last)['path']}}" 26 | 27 | - set_fact: 28 | rhcos_rootfs_package: "{{(found_rootfs_files.files | last)['path']}}" 29 | 30 | - name: Create TFTP directory 31 | file: 32 | group: "root" 33 | mode: 0755 34 | owner: "root" 35 | path: "{{ocp_install_dir}}/tftpboot" 36 | state: directory 37 | 38 | - block: 39 | - name: Check if SELinux has already been configured for TFTP files 40 | shell: 41 | semanage fcontext -l | grep tftpdir_t | grep '{{ocp_install_dir}}/tftpboot(/.*)?' | wc -l 42 | register: _tftp_selinux 43 | 44 | - name: Enable access to TFTP files (SELinux) 45 | shell: | 46 | semanage fcontext -a -t tftpdir_t -s system_u '{{ocp_install_dir}}/tftpboot(/.*)?' 
47 | when: _tftp_selinux.stdout == "0" 48 | when: 49 | - ansible_selinux.status=='enabled' 50 | 51 | - name: Restore SELinux context for TFTP 52 | shell: restorecon -Rv "{{ocp_install_dir}}/tftpboot" 53 | 54 | - name: Create TFTP images directory 55 | file: 56 | group: "root" 57 | mode: 0755 58 | owner: "root" 59 | path: "{{ocp_install_dir}}/tftpboot/images" 60 | state: directory 61 | 62 | - name: Copy kernel package "{{rhcos_kernel_package}}" to TFTP 63 | copy: 64 | src: "{{rhcos_kernel_package}}" 65 | dest: "{{ocp_install_dir}}/tftpboot/images/{{rhcos_kernel_package | basename}}" 66 | remote_src: True 67 | 68 | - name: Copy initial RAM file system package "{{rhcos_initramfs_package}}" to TFTP 69 | copy: 70 | src: "{{rhcos_initramfs_package}}" 71 | dest: "{{ocp_install_dir}}/tftpboot/images/{{rhcos_initramfs_package | basename}}" 72 | remote_src: True 73 | 74 | - name: Copy root file system package "{{rhcos_initramfs_package}}" to TFTP 75 | copy: 76 | src: "{{rhcos_rootfs_package}}" 77 | dest: "{{ocp_install_dir}}/tftpboot/images/{{rhcos_rootfs_package | basename}}" 78 | remote_src: True 79 | 80 | - name: Create TFTP configuration directory 81 | file: 82 | group: "root" 83 | mode: 0755 84 | owner: "root" 85 | path: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg" 86 | state: directory 87 | 88 | - name: Check if /var/lib/tftpboot exists 89 | stat: 90 | path: "/var/lib/tftpboot" 91 | register: var_lib_tftpboot 92 | 93 | - name: Create symlink /tftpboot if TFTP was installed in /var/lib/tftpboot 94 | file: 95 | path: "/tftpboot" 96 | src: "/var/lib/tftpboot" 97 | state: link 98 | when: var_lib_tftpboot.stat.exists 99 | 100 | - name: Create symlinks to pxelinux.cfg and images 101 | file: 102 | path: "/tftpboot/{{item}}" 103 | src: "{{ocp_install_dir}}/tftpboot/{{item}}" 104 | state: link 105 | with_items: 106 | ['pxelinux.cfg','images'] 107 | 108 | - name: Generate default boot configuration file 109 | template: 110 | src: tftpboot_default.j2 111 | dest: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/default" 112 | owner: root 113 | group: root 114 | mode: 0644 115 | 116 | - name: Generate boot menu for bootstrap 117 | template: 118 | src: tftpboot_vm_menu.j2 119 | dest: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/{{hostvars[item]['host']}}" 120 | owner: root 121 | group: root 122 | mode: 0644 123 | with_items: 124 | - "{{groups['bootstrap']}}" 125 | vars: 126 | node_type: bootstrap 127 | metal_bios_file: "{{(found_metal_files.files | last)['path']}}" 128 | 129 | - name: Generate boot menu for masters 130 | template: 131 | src: tftpboot_vm_menu.j2 132 | dest: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/{{hostvars[item]['host']}}" 133 | owner: root 134 | group: root 135 | mode: 0644 136 | with_items: 137 | - "{{groups['masters']}}" 138 | vars: 139 | node_type: master 140 | 141 | - name: Generate boot menu for workers 142 | template: 143 | src: tftpboot_vm_menu.j2 144 | dest: "{{ocp_install_dir}}/tftpboot/pxelinux.cfg/{{hostvars[item]['host']}}" 145 | owner: root 146 | group: root 147 | mode: 0644 148 | with_items: 149 | - "{{groups['workers']}}" 150 | vars: 151 | node_type: worker -------------------------------------------------------------------------------- /playbooks/tasks/tftpboot_default.j2: -------------------------------------------------------------------------------- 1 | DEFAULT menu.c32 2 | PROMPT 0 3 | TIMEOUT 300 4 | ONTIMEOUT local 5 | 6 | MENU TITLE Main Menu 7 | 8 | LABEL local 9 | MENU LABEL Boot local hard drive 10 | LOCALBOOT 0 11 | 
-------------------------------------------------------------------------------- /playbooks/tasks/tftpboot_vm_menu.j2: -------------------------------------------------------------------------------- 1 | default menu.c32 2 | # set Timeout 3 Seconds 3 | timeout 30 4 | ontimeout linux 5 | label linux 6 | menu label ^Install RHEL CoreOS 7 | menu default 8 | kernel /images/{{rhcos_kernel_package | basename}} 9 | append initrd=/images/{{rhcos_initramfs_package | basename}} coreos.live.rootfs_url=http://{{groups['bastion'][0]}}:{{http_server_port}}/{{rhcos_rootfs_package}} rd.neednet=1 coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://{{groups['bastion'][0]}}:{{http_server_port}}/{{ocp_install_dir}}/{{node_type}}.ign ip={{item}}::{{default_gateway}}:255.255.255.0:{{hostvars[item]['host']}}.{{cluster_name}}.{{domain_name}}:ens192:none nameserver={{groups['bastion'][0]}} -------------------------------------------------------------------------------- /playbooks/tasks/vm_create.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Create VMs for the infrastructure 3 | 4 | - name: Create empty {{ ocp_group }} VMs 5 | vmware_guest: 6 | hostname: "{{ vc_vcenter }}" 7 | username: "{{ vc_user }}" 8 | password: "{{ vc_password }}" 9 | validate_certs: yes 10 | datacenter: "{{ vc_datacenter }}" 11 | cluster: "{{ vc_cluster }}" 12 | resource_pool: "{{ vc_res_pool }}" 13 | folder: "{{ vc_folder }}" 14 | name: "{{cluster_name}}-{{ hostvars[item].host }}" 15 | state: present 16 | guest_id: "{{ vc_guest_id }}" 17 | wait_for_ip_address: no 18 | hardware: 19 | memory_mb: "{{ vm_memory }}" 20 | num_cpus: "{{ vm_cpu }}" 21 | boot_firmware: bios 22 | scsi: paravirtual 23 | disk: 24 | - size_gb: "{{ vm_disk }}" 25 | datastore: "{{ vc_datastore }}" 26 | type: thin 27 | networks: 28 | - name: "{{ vc_network }}" 29 | device_type: vmxnet3 30 | with_items: 31 | - "{{ groups[ocp_group] }}" 32 | delegate_to: localhost 33 | register: vm_guest_facts_pxe 34 | when: rhcos_installation_method|upper=="PXE" 35 | 36 | - name: Create {{ ocp_group }} VMs using template {{vm_template}} 37 | vmware_guest: 38 | hostname: "{{ vc_vcenter }}" 39 | username: "{{ vc_user }}" 40 | password: "{{ vc_password }}" 41 | validate_certs: yes 42 | datacenter: "{{ vc_datacenter }}" 43 | cluster: "{{ vc_cluster }}" 44 | resource_pool: "{{ vc_res_pool }}" 45 | folder: "{{ vc_folder }}" 46 | name: "{{cluster_name}}-{{ hostvars[item].host }}" 47 | template: "{{vm_template}}" 48 | state: present 49 | guest_id: "{{ vc_guest_id }}" 50 | wait_for_ip_address: no 51 | hardware: 52 | memory_mb: "{{ vm_memory }}" 53 | num_cpus: "{{ vm_cpu }}" 54 | boot_firmware: bios 55 | scsi: paravirtual 56 | disk: 57 | - size_gb: "{{ vm_disk }}" 58 | datastore: "{{ vc_datastore }}" 59 | type: thin 60 | networks: 61 | - name: "{{ vc_network }}" 62 | device_type: vmxnet3 63 | with_items: 64 | - "{{ groups[ocp_group] }}" 65 | delegate_to: localhost 66 | register: vm_guest_facts_ova 67 | when: rhcos_installation_method|upper=="OVA" 68 | 69 | - name: Register in inventory file (if empty VMs) 70 | replace: 71 | path: "{{ inventory_file }}" 72 | regexp: '^(.*{{ item.item }}.*mac=)"(.*)"(.*)?$' 73 | replace: '\1"{{ item.instance.hw_eth0.macaddress }}"\3' 74 | with_items: "{{ vm_guest_facts_pxe.results }}" 75 | when: rhcos_installation_method|upper=="PXE" 76 | 77 | - name: Register in inventory file (when using template) 78 | replace: 79 | path: "{{ inventory_file }}" 80 | regexp: '^(.*{{ item.item }}.*mac=)"(.*)"(.*)?$' 81 
| replace: '\1"{{ item.instance.hw_eth0.macaddress }}"\3' 82 | with_items: "{{ vm_guest_facts_ova.results }}" 83 | when: rhcos_installation_method|upper=="OVA" 84 | 85 | 86 | - meta: refresh_inventory 87 | -------------------------------------------------------------------------------- /playbooks/tasks/vm_delete.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Delete VMs for the infrastructure 3 | 4 | - name: Delete {{ ocp_group }} VMs 5 | vmware_guest: 6 | hostname: "{{ vc_vcenter }}" 7 | username: "{{ vc_user }}" 8 | password: "{{ vc_password }}" 9 | validate_certs: yes 10 | datacenter: "{{ vc_datacenter }}" 11 | cluster: "{{ vc_cluster }}" 12 | resource_pool: "{{ vc_res_pool }}" 13 | folder: "{{ vc_folder }}" 14 | name: "{{cluster_name}}-{{ hostvars[item].host }}" 15 | state: absent 16 | force: yes 17 | with_items: 18 | - "{{ groups[ocp_group] }}" 19 | delegate_to: localhost 20 | -------------------------------------------------------------------------------- /playbooks/tasks/vm_import_certificates.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Create temporary directory for certificates 4 | tempfile: 5 | state: directory 6 | register: temp_dir 7 | 8 | - name: Unpack the certificates 9 | unarchive: 10 | src: "https://{{vc_vcenter}}/certs/download.zip" 11 | dest: "{{temp_dir.path}}" 12 | remote_src: True 13 | validate_certs: False 14 | 15 | - name: Add to trust store 16 | copy: 17 | src: "{{item}}" 18 | dest: "/etc/pki/ca-trust/source/anchors/" 19 | with_fileglob: 20 | - "{{temp_dir.path}}/certs/lin/*" 21 | 22 | - name: Update certificate authorities 23 | shell: update-ca-trust -------------------------------------------------------------------------------- /playbooks/tasks/vm_poweron.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Poweron VMs for the infrastructure 3 | 4 | - name: Power on {{ ocp_group }} VMs 5 | vmware_guest: 6 | hostname: "{{ vc_vcenter }}" 7 | username: "{{ vc_user }}" 8 | password: "{{ vc_password }}" 9 | validate_certs: yes 10 | datacenter: "{{ vc_datacenter }}" 11 | cluster: "{{ vc_cluster }}" 12 | resource_pool: "{{ vc_res_pool }}" 13 | folder: "{{ vc_folder }}" 14 | name: "{{cluster_name}}-{{ hostvars[item].host }}" 15 | state: poweredon 16 | with_items: 17 | - "{{ groups[ocp_group] }}" 18 | delegate_to: localhost 19 | 20 | - pause: 21 | seconds: "{{ pause }}" 22 | -------------------------------------------------------------------------------- /playbooks/tasks/vm_update_vapp.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Change vApp properties of the VMs to include ignition info 3 | 4 | - name: Change vApp properties of {{ocp_group}} VMs 5 | vmware_guest: 6 | hostname: "{{ vc_vcenter }}" 7 | username: "{{ vc_user }}" 8 | password: "{{ vc_password }}" 9 | validate_certs: yes 10 | datacenter: "{{ vc_datacenter }}" 11 | cluster: "{{ vc_cluster }}" 12 | resource_pool: "{{ vc_res_pool }}" 13 | folder: "{{ vc_folder }}" 14 | name: "{{cluster_name}}-{{ hostvars[item].host }}" 15 | state: present 16 | vapp_properties: 17 | - id: "guestinfo.ignition.config.data.encoding" 18 | value: "base64" 19 | - id: "guestinfo.ignition.config.data" 20 | value: "{{ lookup('file', ocp_install_dir+'/'+ignition_file_name) | b64encode }}" 21 | - id: "disk.EnableUUID" 22 | value: "TRUE" 23 | with_items: 24 | - "{{ groups[ocp_group] }}" 25 | delegate_to: localhost 
-------------------------------------------------------------------------------- /prepare.sh: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | # Check number of parameters 5 | if [ "$#" -lt 2 ]; then 6 | echo "Usage: $0 -i [other ansible-playbook parameters]" 7 | exit 1 8 | fi 9 | 10 | SKIP_INSTALL="False" 11 | 12 | # Parse parameters 13 | PARAMS="" 14 | while (( "$#" )); do 15 | case "$1" in 16 | -i) 17 | INVENTORY_FILE_PARAM=$2 18 | shift 2 19 | ;; 20 | --skip-install) 21 | SKIP_INSTALL="True" 22 | shift 23 | ;; 24 | *) # preserve remaining arguments 25 | PARAMS="$PARAMS $1" 26 | shift 27 | ;; 28 | esac 29 | done 30 | 31 | # Set remaining parameters 32 | eval set -- "$PARAMS" 33 | 34 | if [ ! -e $INVENTORY_FILE_PARAM ]; then 35 | echo "Usage: $0 -i [other ansible-playbook parameters]" 36 | echo "Available inventory files are:" 37 | find ./inventory/ -name "*.inv" 38 | exit 1 39 | fi 40 | 41 | if [ -z $pull_secret_file ];then 42 | pull_secret_file="/tmp/ocp_pullsecret.json" 43 | fi 44 | 45 | if [ ! -e $pull_secret_file ];then 46 | echo "Pull secret file $pull_secret_file does not exist, please create the file or set the pull_secret_file environment variable to point to the file that holds the pull secret." 47 | exit 1 48 | fi 49 | 50 | if [ -z $ocp_admin_password ];then 51 | echo 'OpenShift ocadmin administrator password (ocp_admin_password):' 52 | read -s ocp_admin_password 53 | if [ -z $ocp_admin_password ];then 54 | echo "Error: OpenShift administrator password. OpenShift administrator password environment variable ocp_admin_password or entered at prompt" 55 | exit 1 56 | fi 57 | fi 58 | 59 | if [ -z $root_password ];then 60 | echo 'Root password, leave blank if password-less SSH has already been configured (root_password):' 61 | read -s root_password 62 | if [ -z $root_password ];then 63 | echo "Assuming password-less SSH has been configured on all nodes in the cluster and that inventory file has been adjusted accordingly." 64 | fi 65 | fi 66 | 67 | # Echo extra parameters (if any) 68 | [[ ! -z "$@" ]] && echo "Extra parameters passed to ansible-playbook: $@" 69 | 70 | pushd $SCRIPT_DIR > /dev/null 71 | 72 | # Run ansible playbook 73 | inventory_file=$(realpath $INVENTORY_FILE_PARAM) 74 | 75 | ansible-playbook -i $inventory_file playbooks/ocp4.yaml \ 76 | -e ansible_ssh_pass=$root_password \ 77 | -e ocp_admin_password=$ocp_admin_password \ 78 | -e pull_secret_file=$pull_secret_file \ 79 | -e script_dir=$SCRIPT_DIR \ 80 | -e inventory_file=$inventory_file \ 81 | -e SKIP_INSTALL=$SKIP_INSTALL \ 82 | "$@" 83 | 84 | ANSIBLE_EXIT_CODE=$? 85 | 86 | popd > /dev/null 87 | 88 | exit $ANSIBLE_EXIT_CODE 89 | -------------------------------------------------------------------------------- /vm_create.sh: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | # Check number of parameters 5 | if [ "$#" -lt 2 ]; then 6 | echo "Usage: $0 -i [other ansible-playbook parameters]" 7 | exit 1 8 | fi 9 | 10 | # Parse parameters 11 | PARAMS="" 12 | while (( "$#" )); do 13 | case "$1" in 14 | -i) 15 | INVENTORY_FILE_PARAM=$2 16 | shift 2 17 | ;; 18 | *) # preserve remaining arguments 19 | PARAMS="$PARAMS $1" 20 | shift 21 | ;; 22 | esac 23 | done 24 | 25 | # Set remaining parameters 26 | eval set -- "$PARAMS" 27 | 28 | if [ ! 
-e $INVENTORY_FILE_PARAM ]; then 29 | echo "Usage: $0 -i [other ansible-playbook parameters]" 30 | echo "Available inventory files are:" 31 | find ./inventory/ -name "*.inv" 32 | exit 1 33 | fi 34 | 35 | if [ -z "$vc_user" ];then 36 | echo 37 | echo 'vCenter user:' 38 | read vc_user 39 | if [ -z "$vc_user" ];then 40 | echo "Error: vCenter user. vCenter user environment variable vc_user not set or entered at prompt" 41 | exit 1 42 | fi 43 | fi 44 | 45 | if [ -z "$vc_password" ];then 46 | echo 47 | echo 'vCenter password:' 48 | read -s vc_password 49 | if [ -z "$vc_password" ];then 50 | echo "Error: vCenter password. vCenter password environment variable vc_password not set or entered at prompt" 51 | exit 1 52 | fi 53 | fi 54 | 55 | # Echo extra parameters (if any) 56 | [[ ! -z "$@" ]] && echo "Extra parameters passed to ansible-playbook: $@" 57 | 58 | # Run ansible playbook 59 | inventory_file=$(realpath $INVENTORY_FILE_PARAM) 60 | 61 | ansible-playbook -i $inventory_file playbooks/ocp4_vm_create.yaml \ 62 | -e vc_user="$vc_user" \ 63 | -e vc_password="$vc_password" \ 64 | -e inventory_file=$inventory_file \ 65 | "$@" 66 | -------------------------------------------------------------------------------- /vm_delete.sh: -------------------------------------------------------------------------------- 1 | #! /bin/bash 2 | SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd ) 3 | 4 | # Check number of parameters 5 | if [ "$#" -lt 2 ]; then 6 | echo "Usage: $0 -i [other ansible-playbook parameters]" 7 | exit 1 8 | fi 9 | 10 | # Parse parameters 11 | PARAMS="" 12 | while (( "$#" )); do 13 | case "$1" in 14 | -i) 15 | INVENTORY_FILE_PARAM=$2 16 | shift 2 17 | ;; 18 | *) # preserve remaining arguments 19 | PARAMS="$PARAMS $1" 20 | shift 21 | ;; 22 | esac 23 | done 24 | 25 | # Set remaining parameters 26 | eval set -- "$PARAMS" 27 | 28 | if [ ! -e $INVENTORY_FILE_PARAM ]; then 29 | echo "Usage: $0 -i [other ansible-playbook parameters]" 30 | echo "Available inventory files are:" 31 | find ./inventory/ -name "*.inv" 32 | exit 1 33 | fi 34 | 35 | if [ -z $vc_user ];then 36 | echo 37 | echo 'vCenter user:' 38 | read vc_user 39 | if [ -z $vc_user ];then 40 | echo "Error: vCenter user. vCenter user environment variable vc_user not set or entered at prompt" 41 | exit 1 42 | fi 43 | fi 44 | 45 | if [ -z $vc_password ];then 46 | echo 47 | echo 'vCenter password:' 48 | read -s vc_password 49 | if [ -z $vc_password ];then 50 | echo "Error: vCenter password. vCenter password environment variable vc_password not set or entered at prompt" 51 | exit 1 52 | fi 53 | fi 54 | 55 | # Echo extra parameters (if any) 56 | [[ ! -z "$@" ]] && echo "Extra parameters passed to ansible-playbook: $@" 57 | 58 | # Run ansible playbook 59 | inventory_file=$(realpath $INVENTORY_FILE_PARAM) 60 | 61 | ansible-playbook -i $inventory_file playbooks/ocp4_vm_delete.yaml \ 62 | -e vc_user=$vc_user \ 63 | -e vc_password=$vc_password \ 64 | "$@" 65 | -------------------------------------------------------------------------------- /vm_power_on.sh: -------------------------------------------------------------------------------- 1 | #! 
--------------------------------------------------------------------------------
/vm_power_on.sh:
--------------------------------------------------------------------------------
#!/bin/bash
SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )

# Check number of parameters
if [ "$#" -lt 2 ]; then
    echo "Usage: $0 -i <inventory-file> [other ansible-playbook parameters]"
    exit 1
fi

# Parse parameters
PARAMS=""
while (( "$#" )); do
    case "$1" in
        -i)
            INVENTORY_FILE_PARAM=$2
            shift 2
            ;;
        *) # preserve remaining arguments
            PARAMS="$PARAMS $1"
            shift
            ;;
    esac
done

# Set remaining parameters
eval set -- "$PARAMS"

if [ ! -e "$INVENTORY_FILE_PARAM" ]; then
    echo "Usage: $0 -i <inventory-file> [other ansible-playbook parameters]"
    echo "Available inventory files are:"
    find ./inventory/ -name "*.inv"
    exit 1
fi

if [ -z "$vc_user" ]; then
    echo
    echo 'vCenter user:'
    read vc_user
    if [ -z "$vc_user" ]; then
        echo "Error: vCenter user not set. Set the vc_user environment variable or enter the user at the prompt."
        exit 1
    fi
fi

if [ -z "$vc_password" ]; then
    echo
    echo 'vCenter password:'
    read -s vc_password
    if [ -z "$vc_password" ]; then
        echo "Error: vCenter password not set. Set the vc_password environment variable or enter the password at the prompt."
        exit 1
    fi
fi

# Echo extra parameters (if any)
[[ -n "$*" ]] && echo "Extra parameters passed to ansible-playbook: $@"

# Run ansible playbook
inventory_file=$(realpath "$INVENTORY_FILE_PARAM")

ansible-playbook -i "$inventory_file" playbooks/ocp4_vm_power_on.yaml \
    -e vc_user="$vc_user" \
    -e vc_password="$vc_password" \
    -e inventory_file="$inventory_file" \
    "$@"
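An equivalent sketch for vm_power_on.sh, again with placeholder credentials and an example inventory file; the variables can also be set for a single invocation only:

    vc_user=administrator@vsphere.local vc_password=my_vc_password \
        ./vm_power_on.sh -i inventory/vmware-example-49.inv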
--------------------------------------------------------------------------------
/vm_update_vapp.sh:
--------------------------------------------------------------------------------
#!/bin/bash
SCRIPT_DIR=$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )

# Check number of parameters
if [ "$#" -lt 2 ]; then
    echo "Usage: $0 -i <inventory-file> [other ansible-playbook parameters]"
    exit 1
fi

# Parse parameters
PARAMS=""
while (( "$#" )); do
    case "$1" in
        -i)
            INVENTORY_FILE_PARAM=$2
            shift 2
            ;;
        *) # preserve remaining arguments
            PARAMS="$PARAMS $1"
            shift
            ;;
    esac
done

# Set remaining parameters
eval set -- "$PARAMS"

if [ ! -e "$INVENTORY_FILE_PARAM" ]; then
    echo "Usage: $0 -i <inventory-file> [other ansible-playbook parameters]"
    echo "Available inventory files are:"
    find ./inventory/ -name "*.inv"
    exit 1
fi

if [ -z "$vc_user" ]; then
    echo
    echo 'vCenter user:'
    read vc_user
    if [ -z "$vc_user" ]; then
        echo "Error: vCenter user not set. Set the vc_user environment variable or enter the user at the prompt."
        exit 1
    fi
fi

if [ -z "$vc_password" ]; then
    echo
    echo 'vCenter password:'
    read -s vc_password
    if [ -z "$vc_password" ]; then
        echo "Error: vCenter password not set. Set the vc_password environment variable or enter the password at the prompt."
        exit 1
    fi
fi

# Echo extra parameters (if any)
[[ -n "$*" ]] && echo "Extra parameters passed to ansible-playbook: $@"

# Run ansible playbook
inventory_file=$(realpath "$INVENTORY_FILE_PARAM")

ansible-playbook -i "$inventory_file" playbooks/ocp4_vm_update_vapp.yaml \
    -e vc_user="$vc_user" \
    -e vc_password="$vc_password" \
    -e inventory_file="$inventory_file" \
    "$@"
--------------------------------------------------------------------------------
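Finally, a sketch for vm_update_vapp.sh, which follows the same conventions (placeholder credentials, example inventory file):

    vc_user=administrator@vsphere.local vc_password=my_vc_password \
        ./vm_update_vapp.sh -i inventory/vmware-example-48.inv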