├── docs
│   ├── _config.yml
│   ├── pages
│   │   ├── Pipeline.md
│   │   ├── images
│   │   │   ├── MyLab.jpeg
│   │   │   ├── Home-Lab.jpg
│   │   │   ├── MiniLab.jpeg
│   │   │   ├── MiniLab2.jpg
│   │   │   ├── NexusAdmin.png
│   │   │   ├── NexusRealms.png
│   │   │   ├── OKD-Release.png
│   │   │   ├── FCOS-Download.png
│   │   │   └── CreateOriginRepo.png
│   │   ├── UpdateOKD.md
│   │   ├── HtPasswd.md
│   │   ├── OpenShift-Commons-CRC-Demo.md
│   │   ├── Local_DHCP.md
│   │   ├── Ceph.md
│   │   ├── Deploy_KVM_Host.md
│   │   ├── Nginx_Config.md
│   │   ├── ShuttingDown.md
│   │   ├── MariaDB.md
│   │   ├── Nexus_Config.md
│   │   ├── KubeConDemo.md
│   │   ├── KVM_Host_Install.md
│   │   ├── DNS_Config.md
│   │   ├── InfraNodes.md
│   │   ├── Bastion.md
│   │   ├── Router.md
│   │   ├── DeployOKD.md
│   │   └── Notes.md
│   ├── index.md
│   └── LabIntro.md
├── .gitignore
├── README.md
└── LICENSE

--------------------------------------------------------------------------------
/docs/_config.yml:
--------------------------------------------------------------------------------
theme: jekyll-theme-cayman
--------------------------------------------------------------------------------
/docs/pages/Pipeline.md:
--------------------------------------------------------------------------------
# WIP

# Moved to https://github.com/cgruver/tekton-pipeline-okd4
--------------------------------------------------------------------------------
/docs/pages/images/MyLab.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/MyLab.jpeg
--------------------------------------------------------------------------------
/docs/pages/images/Home-Lab.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/Home-Lab.jpg
--------------------------------------------------------------------------------
/docs/pages/images/MiniLab.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/MiniLab.jpeg
--------------------------------------------------------------------------------
/docs/pages/images/MiniLab2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/MiniLab2.jpg
--------------------------------------------------------------------------------
/docs/pages/images/NexusAdmin.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/NexusAdmin.png
--------------------------------------------------------------------------------
/docs/pages/images/NexusRealms.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/NexusRealms.png
--------------------------------------------------------------------------------
/docs/pages/images/OKD-Release.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/OKD-Release.png
--------------------------------------------------------------------------------
/docs/pages/images/FCOS-Download.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/FCOS-Download.png
--------------------------------------------------------------------------------
/docs/pages/images/CreateOriginRepo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cgruver/okd4-upi-lab-setup/HEAD/docs/pages/images/CreateOriginRepo.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
# IDEs and editors
/.idea
.project
.classpath
.c9/
*.launch
.settings/
*.sublime-workspace

# IDE - VSCode
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
.history/*

# misc
/wip

# System Files
.DS_Store
Thumbs.db
--------------------------------------------------------------------------------
/docs/index.md:
--------------------------------------------------------------------------------
# This tutorial is being deprecated in favor of a new version:

## Link To New Tutorial: [https://upstreamwithoutapaddle.com/home-lab/lab-intro/](https://upstreamwithoutapaddle.com/home-lab/lab-intro/)

## The newest version of my helper scripts is here: [https://github.com/cgruver/kamarotos](https://github.com/cgruver/kamarotos)

The archived main branch can be found in the `archive` branch of this project and the previous documentation can be found at: [Lab Intro](LabIntro.md)

I will not be doing any additional development on this project.

New work can be found on my Blog: [Upstream - Without A Paddle](https://upstreamwithoutapaddle.com/)
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# This tutorial is being deprecated in favor of a new version:

## [https://upstreamwithoutapaddle.com/home-lab/lab-intro/](https://upstreamwithoutapaddle.com/home-lab/lab-intro/)

The archived main branch can be found in the `archive` branch of this project and the previous documentation can be found at: [https://cgruver.github.io/okd4-upi-lab-setup/](https://cgruver.github.io/okd4-upi-lab-setup/)

I will not be doing any additional development on this project.
The newest version of my helper scripts is here: [https://github.com/cgruver/kamarotos](https://github.com/cgruver/kamarotos)

The replacement for this project is here: [Building a Portable Kubernetes Home Lab with OpenShift - OKD4](https://upstreamwithoutapaddle.com/home-lab/lab-intro/)

Follow my Blog here: [Upstream - Without A Paddle](https://upstreamwithoutapaddle.com/)
--------------------------------------------------------------------------------
/docs/pages/UpdateOKD.md:
--------------------------------------------------------------------------------
## Updating your cluster to a new version:

# WIP

Upgrade:

    oc adm upgrade

    Cluster version is 4.4.0-0.okd-2020-04-09-104654

    Updates:

    VERSION                         IMAGE
    4.4.0-0.okd-2020-04-09-113408   registry.svc.ci.openshift.org/origin/release@sha256:724d170530bd738830f0ba370e74d94a22fc70cf1c017b1d1447d39ae7c3cf4f
    4.4.0-0.okd-2020-04-09-124138   registry.svc.ci.openshift.org/origin/release@sha256:ce16ac845c0a0d178149553a51214367f63860aea71c0337f25556f25e5b8bb3

    ssh root@${LAB_NAMESERVER} 'sed -i "s|registry.svc.ci.openshift.org|;sinkhole|g" /etc/named/zones/db.sinkhole && systemctl restart named'

    export OKD_RELEASE=4.4.0-0.okd-2020-04-09-124138

    oc adm -a ${LOCAL_SECRET_JSON} release mirror --from=${OKD_REGISTRY}:${OKD_RELEASE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OKD_RELEASE}

    OKD_VER=$(echo $OKD_RELEASE | sed "s|4.4.0-0.okd|4.4|g")
    cp ${OKD4_LAB_PATH}/okd-upgrade.yaml ${OKD4_LAB_PATH}/${OKD_VER}.yaml
    sed -i "s|%%OKD_VER%%|${OKD_VER}|g" ${OKD4_LAB_PATH}/${OKD_VER}.yaml
    oc apply -f ${OKD4_LAB_PATH}/${OKD_VER}.yaml

    ssh root@${LAB_NAMESERVER} 'sed -i "s|;sinkhole|registry.svc.ci.openshift.org|g" /etc/named/zones/db.sinkhole && systemctl restart named'

    oc adm upgrade --to=${OKD_RELEASE} --force
--------------------------------------------------------------------------------
/docs/pages/HtPasswd.md:
--------------------------------------------------------------------------------
## Setting up HTPasswd as an Identity Provider

These instructions will help you set up an Identity Provider so that you can remove the temporary kubeadmin user.

1. Create an htpasswd file with two users. The user `admin` will be assigned the password that was created when you installed your cluster. The user `devuser` will be assigned the password `devpwd`. The user `devuser` will have default permissions.

       mkdir -p ${OKD_LAB_PATH}/okd-creds
       htpasswd -B -c -b ${OKD_LAB_PATH}/okd-creds/htpasswd admin $(cat ${OKD_LAB_PATH}/okd4-install-dir/auth/kubeadmin-password)
       htpasswd -b ${OKD_LAB_PATH}/okd-creds/htpasswd devuser devpwd

1. Now, create a Secret with this htpasswd file:

       oc create -n openshift-config secret generic htpasswd-secret --from-file=htpasswd=${OKD_LAB_PATH}/okd-creds/htpasswd

1. Create the Htpasswd Identity Provider:

   I have provided an Identity Provider custom resource configuration located at `./Provisioning/htpasswd-cr.yaml` in this project.

   From the root of this project run:

       oc apply -f ./Provisioning/htpasswd-cr.yaml

1. Make the user `admin` a Cluster Administrator:

       oc adm policy add-cluster-role-to-user cluster-admin admin

1. Now, log into the web console as your new admin user to verify access. Select the `Htpasswd` provider when you log in.

1. Finally, remove the temporary kubeadmin user:

       oc delete secrets kubeadmin -n kube-system
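After removing `kubeadmin`, it's worth confirming that the new `admin` user really does hold cluster-admin rights — once the secret is gone there is no fallback account. A quick check, using the same names and paths assumed throughout this guide:

```bash
# Log in with the htpasswd-backed admin user (its password was copied from the kubeadmin password file above)
oc login -u admin -p "$(cat ${OKD_LAB_PATH}/okd4-install-dir/auth/kubeadmin-password)" https://api.okd4.${LAB_DOMAIN}:6443
oc whoami               # should print: admin
oc auth can-i '*' '*'   # cluster-admin can do anything, so this should print: yes
```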
--------------------------------------------------------------------------------
/docs/pages/OpenShift-Commons-CRC-Demo.md:
--------------------------------------------------------------------------------
# CRC Demo:

```bash
crc oc-env

eval $(crc oc-env)

which oc

oc login -u kubeadmin -p 6Dq36-mD6Ap-ySriA-wZmBr https://api.crc.testing:6443

crc console

htpasswd -B -c -b /tmp/htpasswd admin secret-password

oc create -n openshift-config secret generic htpasswd-secret --from-file=htpasswd=/tmp/htpasswd

oc apply -f ~/Documents/VSCode/GitHub-CGruver/okd4-upi-lab-setup/Provisioning/htpasswd-cr.yaml

oc adm policy add-cluster-role-to-user cluster-admin admin

oc login -u admin https://api.crc.testing:6443

cd ~/Documents/VSCode/GitHub-CGruver/tekton-pipeline-okd4/operator

oc apply -f operator_v1alpha1_config_crd.yaml

oc apply -f role.yaml -n openshift-operators

oc apply -f role_binding.yaml -n openshift-operators

oc apply -f service_account.yaml -n openshift-operators

oc apply -f operator.yaml

oc apply -f operator_v1alpha1_config_cr.yaml

cd ~/tmp/namespace-configuration-operator

oc adm new-project namespace-configuration-operator

oc apply -f deploy/olm-deploy -n namespace-configuration-operator

cd ~/Documents/VSCode/GitHub-CGruver/tekton-pipeline-okd4/

oc apply -f quarkus-jvm-pipeline-template.yml -n openshift

oc process --local -f namespace-configuration.yml -p MVN_MIRROR_ID=homelab-central -p MVN_MIRROR_NAME=homelab-central -p MVN_MIRROR_URL=https://nexus.clg.lab:8443/repository/maven-public/ | oc apply -f -

oc new-project my-namespace

oc label namespace my-namespace pipeline=tekton

IMAGE_REGISTRY=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
docker login -u $(oc whoami) -p $(oc whoami -t) ${IMAGE_REGISTRY}

docker system prune --all --force

docker pull quay.io/openshift/origin-cli:4.5.0
docker tag quay.io/openshift/origin-cli:4.5.0 ${IMAGE_REGISTRY}/openshift/origin-cli:4.5.0
docker push ${IMAGE_REGISTRY}/openshift/origin-cli:4.5.0

docker build -t ${IMAGE_REGISTRY}/openshift/jdk-ubi-minimal:8.1 jdk-ubi-minimal/
docker push ${IMAGE_REGISTRY}/openshift/jdk-ubi-minimal:8.1

docker build -t ${IMAGE_REGISTRY}/openshift/maven-ubi-minimal:3.6.3-jdk-11 maven-ubi-minimal/
docker push ${IMAGE_REGISTRY}/openshift/maven-ubi-minimal:3.6.3-jdk-11

docker build -t ${IMAGE_REGISTRY}/openshift/buildah:noroot buildah-noroot/
docker push ${IMAGE_REGISTRY}/openshift/buildah:noroot

oc process openshift//quarkus-jvm-pipeline-dev -n my-namespace -p APP_NAME=home-library-catalog -p GIT_REPOSITORY=https://github.com/cgruver/home-library-catalog.git -p GIT_BRANCH=master | oc create -f -
```
--------------------------------------------------------------------------------
/docs/pages/Local_DHCP.md:
--------------------------------------------------------------------------------
## Adding PXE Boot capability to your Bastion host

First, let's install and enable a TFTP server:

    yum -y install tftp tftp-server xinetd
    systemctl start xinetd
    systemctl enable xinetd
    firewall-cmd --add-port=69/tcp --permanent
    firewall-cmd --add-port=69/udp --permanent
    firewall-cmd --reload

Edit the tftp configuration file to enable tftp. Set `disable = no`

    vi /etc/xinetd.d/tftp

    # default: off
    # description: The tftp server serves files using the trivial file transfer \
    #   protocol.  The tftp protocol is often used to boot diskless \
    #   workstations, download configuration files to network-aware printers, \
    #   and to start the installation process for some operating systems.
    service tftp
    {
        socket_type  = dgram
        protocol     = udp
        wait         = yes
        user         = root
        server       = /usr/sbin/in.tftpd
        server_args  = -s /data/tftpboot
        disable      = no
        per_source   = 11
        cps          = 100 2
        flags        = IPv4
    }

With TFTP configured, it's now time to copy over the files for iPXE.

This project has some files already prepared for you. They are located in `./Provisioning/iPXE`.

| File | Purpose |
|-|-|
| boot.ipxe | This is the initial iPXE bootstrap file. It has logic in it to look for a file with the booting host's MAC address. Otherwise it pulls the default.ipxe file. |
| default.ipxe | This file will initiate a kickstart install of CentOS 7 for non-OKD hosts. __This is not working yet__ |
| grub.cfg | Until I get iPXE working with the Intel NUC, we'll be using UEFI |

From the root directory of this project, execute the following:

    mkdir tmp-work
    mkdir -p /data/tftpboot/networkboot
    mkdir /data/tftpboot/ipxe
    cp ./Provisioning/iPXE/* ./tmp-work
    for i in $(ls ./tmp-work)
    do
      sed -i "s|%%INSTALL_URL%%|${INSTALL_URL}|g" ./tmp-work/${i}
    done
    cp ./Provisioning/iPXE/boot.ipxe /data/tftpboot/boot.ipxe

    wget http://boot.ipxe.org/ipxe.efi
    cp ./ipxe.efi /data/tftpboot/ipxe.efi
    rm -f ./ipxe.efi

    cp ${INSTALL_ROOT}/centos/isolinux/vmlinuz /data/tftpboot/networkboot
    cp ${INSTALL_ROOT}/centos/isolinux/initrd.img /data/tftpboot/networkboot
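Before wiring up DHCP, you can sanity-check the TFTP server from any other box on the lab network. This is a hedged example: it assumes the Bastion host is at `10.11.11.10` as in the rest of this guide, and that a `tftp` client is installed locally:

```bash
# Pull the iPXE binary back from the Bastion's TFTP server and make sure it arrives intact
tftp 10.11.11.10 -c get ipxe.efi
ls -l ./ipxe.efi   # size should match /data/tftpboot/ipxe.efi on the Bastion host
```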
__Warning:__ If you set up DHCP on the Bastion host you will either have to disable DHCP in your home router, or put your lab on an isolated network.

### DHCP on Linux:

If you are going to set up your own DHCP server on the Bastion host, do the following:

    dnf -y install dhcp
    firewall-cmd --add-service=dhcp --permanent
    firewall-cmd --reload

I have provided a DHCP configuration file for you. From the root of this project:

    cp ./Provisioning/dhcpd.conf /etc/dhcp/dhcpd.conf
    sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/dhcp/dhcpd.conf

This configuration assumes that you are using the `10.11.11.0/24` network. It configures a DHCP range of `10.11.11.11` - `10.11.11.29`.

If you are using a different configuration, then edit `/etc/dhcp/dhcpd.conf` appropriately.

Finally, enable DHCP:

    systemctl enable dhcpd
    systemctl start dhcpd

Now, continue on to set up your Nexus: [Sonatype Nexus Setup](Nexus_Config.md)
--------------------------------------------------------------------------------
/docs/pages/Ceph.md:
--------------------------------------------------------------------------------
## Hyper-Converged Ceph Cluster Deployment

The following instructions will set up Ceph storage as a block provider for your OKD cluster. You will then create a PVC for the OKD image registry, and modify the image registry to use the PVC for persistence.

Ideally, you need a full cluster for this deployment: 3 master and 3 worker nodes. Additionally, you need to give the worker nodes a second disk that will be used by Ceph. I have provided an example guest_inventory file at `./Provisioning/guest_inventory/okd4_ceph`. You can do this with a minimal cluster of just 3 master/worker nodes, but you may need to add RAM to avoid constraints.

Follow these steps to deploy a Ceph cluster:

1. From the root directory of this project, grab and modify the files that are prepared for you.

       mkdir -p ${OKD4_LAB_PATH}/ceph
       cp ./Ceph/* ${OKD4_LAB_PATH}/ceph
       sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" ${OKD4_LAB_PATH}/ceph/cluster.yaml

   The Ceph installation files were taken from https://github.com/rook/rook and modified for this tutorial.

   __Note: If you do not have worker nodes, then you will need to modify the `cluster.yaml` file to use your master nodes.__

1. Now, we need to label our worker nodes so that Ceph knows where to deploy. If you look at the `cluster.yaml` file, you will see the `nodeAffinity` configuration.

   __Note: As before, if you do not have dedicated worker nodes, then replace `worker` with `master` below.__

       for i in 0 1 2
       do
         oc label nodes okd4-worker-${i}.${LAB_DOMAIN} role=storage-node
       done

1. Create the Ceph Operator:

       oc apply -f ${OKD4_LAB_PATH}/ceph/crds.yaml
       oc apply -f ${OKD4_LAB_PATH}/ceph/common.yaml
       oc apply -f ${OKD4_LAB_PATH}/ceph/operator-openshift.yaml

   __Wait for the Operator pods to completely deploy before executing the next step.__

1. Deploy the Ceph cluster:

       oc apply -f ${OKD4_LAB_PATH}/ceph/cluster.yaml

   __This will take a while to complete.__

   It is finished when you see all of the `rook-ceph-osd-prepare-okd4-worker...` pods in a `Completed` state.

       oc get pods -n rook-ceph

### Now, let's create a PVC for the Image Registry.

1. First, we need a Storage Class for our new Ceph block storage provisioner:

       oc apply -f ${OKD4_LAB_PATH}/ceph/ceph-storage-class.yml

1. Now create the PVC:

       oc apply -f ${OKD4_LAB_PATH}/ceph/registry-pvc.yml

1. Make sure that the PVC gets bound to a new PV:

       oc get pvc -n openshift-image-registry

   You should see output similar to:

       NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
       registry-pvc   Bound    pvc-bcee4ccd-aa6e-4c8c-89b0-3f8da1c17df0   100Gi      RWO            rook-ceph-block   4d17h

1. Finally, patch the `imageregistry` operator to use the PVC that you just created:

   If you previously added `emptyDir` as a storage type to the Registry, you need to remove it first:

       oc patch configs.imageregistry.operator.openshift.io cluster --type json -p '[{ "op": "remove", "path": "/spec/storage/emptyDir" }]'

   Now patch it to use the new PVC:

       oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"rolloutStrategy":"Recreate","managementState":"Managed","storage":{"pvc":{"claim":"registry-pvc"}}}}'

1. If you want to designate your new storage class as the `default` storage class, then do the following:

       oc patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

__You just created a Ceph cluster and bound your image registry to a Persistent Volume!__
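If you want one more end-to-end check before moving on, a throwaway claim will confirm that the `rook-ceph-block` storage class can actually provision volumes. This is just a sketch — `ceph-smoke-test` is a hypothetical name; everything else comes from the storage class created above:

```bash
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-smoke-test
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go Bound within a minute or so, then clean up:
oc get pvc ceph-smoke-test -n default
oc delete pvc ceph-smoke-test -n default
```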
--------------------------------------------------------------------------------
/docs/pages/Deploy_KVM_Host.md:
--------------------------------------------------------------------------------
## Installing or reinstalling a NUC via iPXE

__Note:__ If you would rather manually install your KVM hosts, then follow this guide: [KVM Host Manual Install](KVM_Host_Install.md)

The installation on a bare metal host will work like this:

1. The host will power on and find no bootable OS
1. The host will attempt a network boot by requesting a DHCP address and PXE boot info
   * The DHCP server will issue an IP address and direct the host to the PXE boot file on the TFTP boot server
1. The host will retrieve the `boot.ipxe` file from the TFTP boot server
1. The `boot.ipxe` script will then retrieve an iPXE script named after the MAC address of the host.
1. The host will begin booting:
   1. The host will retrieve the `vmlinuz` and `initrd` files from the HTTP install server
   1. The host will load the kernel and init-ram
   1. The host will retrieve the kickstart file or ignition config file depending on the install type.
1. The host should now begin an unattended install.
1. The host will reboot and run the `firstboot.sh` script, if one is configured. (selinux is temporarily disabled to allow this script to run)
1. The host is now ready to use!

There are a couple of things that we need to put in place to get started.

First we need to flip the NUC over and get the MAC address for the wired NIC. You also need to know whether you have NVME or SATA SSDs in the NUC.

I have provided a helper script, `DeployKvmHost.sh`, that will configure the files for you.

    DeployKvmHost.sh -h=kvm-host01 -m=1c:69:7a:02:b6:c2 -d=nvme0n1   # Example with 1 NVME SSD
    DeployKvmHost.sh -h=kvm-host01 -m=1c:69:7a:02:b6:c2 -d=sda,sdb   # Example with 2 SATA SSDs

Finally, make sure that you have created DNS `A` and `PTR` records. [DNS Setup](DNS_Config.md)

We are now ready to plug in the NUC and boot it up.

__Caution:__ This is the point at which you might have to attach a keyboard and monitor to your NUC. We need to ensure that the BIOS is set up to attempt a Network Boot with UEFI, not legacy. You also need to ensure that `Secure Boot` is disabled in the BIOS since we are not explicitly trusting the boot images.

__Take this opportunity to apply the latest BIOS to your NUC__

Tools URL: https://downloadcenter.intel.com/download/30090/Intel-Aptio-V-UEFI-Firmware-Integrator-Tools

You won't need the keyboard or mouse again, until it's time for another BIOS update... Eventually we'll figure out how to push those from the OS too. ;-)

The last thing that I've prepared for you is the ability to reinstall your OS.

### Re-Install your NUC host

__*I have included a very dangerous script in this project.*__ If you follow all of the setup instructions, it will be installed in `/root/bin/rebuildhost.sh` of your host.
The script is a quick and dirty way to brick your host so that when it reboots, it will force a Network Install.

The script will destroy your boot partitions and wipe the MBR in the installed SSD drives. For example:

Destroy boot partitions:

    umount /boot/efi
    umount /boot
    wipefs -a /dev/sda2
    wipefs -a /dev/sda1

Wipe MBR:

    dd if=/dev/zero of=/dev/sda bs=512 count=1
    dd if=/dev/zero of=/dev/sdb bs=512 count=1

Reboot:

    shutdown -r now

That's it! Your host is now a Brick. If your PXE environment is set up properly, then in a few minutes you will have a fresh OS install.

Go ahead and build out all of your KVM hosts at this point. For this lab you need at least one KVM host with 64GB of RAM. With this configuration, you will build an OKD cluster with 3 Master nodes which are also schedulable, (is that a word?), as worker nodes. If you have two, then you will build an OKD cluster with 3 Master and 3 Worker nodes.

It is now time to deploy an OKD cluster: [Deploy OKD](DeployOKD.md)
--------------------------------------------------------------------------------
/docs/pages/Nginx_Config.md:
--------------------------------------------------------------------------------
## Nginx Config & RPM Repository Synch

We are going to install the Nginx HTTP server and configure it to serve up all of the RPM packages that we need to build our guest VMs.

We'll use the `reposync` command to copy RPM repository contents from remote mirrors into our Nginx server.

We are also going to copy the CentOS minimal install ISO into our Nginx server.

We need to open firewall ports for HTTP/S so that we can access our Nginx server:

    firewall-cmd --permanent --add-service=http
    firewall-cmd --permanent --add-service=https
    firewall-cmd --reload

Install and start Nginx:

    dnf -y install nginx
    systemctl enable nginx --now

### RPM Repository Mirror

Create directories to hold all of the RPMs:

    mkdir -p ${REPO_PATH}/{baseos,appstream,extras,epel-modular,epel,powertools}

Synch the repositories into the directories we just created: (This will take a while)

    LOCAL_REPOS="baseos appstream extras epel epel-modular powertools"
    for REPO in ${LOCAL_REPOS}
    do
      reposync -m --repoid=${REPO} --newest-only --delete --download-metadata -p ${REPO_PATH}/
    done

Our Nginx server is now ready to serve up CentOS RPMs.

To refresh your RPM repositories, run the above script again, or better yet, create a cron job to run it periodically.

### Host installation, FCOS & CentOS

Now, we are going to set up the artifacts for host installation. This will include FCOS via `ignition`, and CentOS via `kickstart`.

    mkdir -p ${INSTALL_ROOT}/{centos,fcos,fcos/ignition,firstboot,kickstart,hostconfig,postinstall}

Create encrypted passwords to be used in your KVM host and Guest installations:

    mkdir -p ${OKD4_LAB_PATH}
    openssl passwd -1 'guest-root-password' > ${OKD4_LAB_PATH}/lab_guest_pw
    openssl passwd -1 'host-root-password' > ${OKD4_LAB_PATH}/lab_host_pw

### CentOS:

1. Deploy the Minimal ISO files.

   Download the CentOS Stream install files from a mirror:

       wget -m -np -nH --cut-dirs=5 -P ${INSTALL_ROOT}/centos http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/
1. Deploy the files from this project for supporting `kickstart` installation.

   Make a temporary work space:

       mkdir tmp-work

   Prep the install files from this project:

       cp -rf ./Provisioning/guest_install/postinstall ./tmp-work

       sed -i "s|%%REPO_URL%%|${REPO_URL}|g" ./tmp-work/postinstall/local-repos.repo
       sed -i "s|%%LAB_NAMESERVER%%|${LAB_NAMESERVER}|g" ./tmp-work/postinstall/chrony.conf

   Copy your public SSH key:

       cat ~/.ssh/id_rsa.pub > ./tmp-work/postinstall/authorized_keys

   Copy the prepared files into place:

       scp -r ./tmp-work/postinstall root@${INSTALL_HOST}:${INSTALL_ROOT}
       rm -rf ./tmp-work

### FCOS:

1. In a browser, go to: `https://getfedora.org/en/coreos/download/`
1. Make sure you are on the `stable` Stream, select the `Bare Metal & Virtualized` tab, and make note of the current version.

   ![FCOS Download Page](images/FCOS-Download.png)

1. Set the FCOS version as a variable. For example:

       FCOS_VER=32.20200923.3.0

1. Set the FCOS_STREAM variable to `stable` or `testing` to match the stream that you are pulling from.

       FCOS_STREAM=stable

1. Download the FCOS images for iPXE booting:

       mkdir /tmp/fcos
       curl -o /tmp/fcos/vmlinuz https://builds.coreos.fedoraproject.org/prod/streams/${FCOS_STREAM}/builds/${FCOS_VER}/x86_64/fedora-coreos-${FCOS_VER}-live-kernel-x86_64
       curl -o /tmp/fcos/initrd https://builds.coreos.fedoraproject.org/prod/streams/${FCOS_STREAM}/builds/${FCOS_VER}/x86_64/fedora-coreos-${FCOS_VER}-live-initramfs.x86_64.img
       curl -o /tmp/fcos/rootfs.img https://builds.coreos.fedoraproject.org/prod/streams/${FCOS_STREAM}/builds/${FCOS_VER}/x86_64/fedora-coreos-${FCOS_VER}-live-rootfs.x86_64.img
       scp -r /tmp/fcos root@${INSTALL_HOST}:${INSTALL_ROOT}
       rm -rf /tmp/fcos

Now, continue on to [DHCP Setup](GL-AR750S-Ext.md)
--------------------------------------------------------------------------------
/docs/pages/ShuttingDown.md:
--------------------------------------------------------------------------------
# Safely Shutting Down Your OKD Cluster

# WIP: Documentation Incomplete

Scale down any applications using your MariaDB RDBMS.

Scale down the MariaDB cluster:

    oc scale statefulsets mariadb-galera --replicas=0 -n mariadb-galera

    oc get pods -n mariadb-galera

    NAME               READY   STATUS        RESTARTS   AGE
    mariadb-galera-0   1/1     Running       0          13m
    mariadb-galera-1   1/1     Running       0          12m
    mariadb-galera-2   1/1     Terminating   0          12m

Be patient and wait for all 3 pods to shut down.
    oc get pods -n mariadb-galera

    NAME               READY   STATUS        RESTARTS   AGE
    mariadb-galera-0   1/1     Running       0          14m
    mariadb-galera-1   1/1     Terminating   0          14m

    oc get pods -n mariadb-galera

    NAME               READY   STATUS        RESTARTS   AGE
    mariadb-galera-0   1/1     Terminating   0          15m

Shut down the Image Registry:

    oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Removed"}}'

Cordon and drain the `worker` nodes: (This will take a while, be patient)

    for i in 0 1 2 ; do oc adm cordon okd4-worker-${i}.${LAB_DOMAIN} ; done

    for i in 0 1 2 ; do oc adm drain okd4-worker-${i}.${LAB_DOMAIN} --ignore-daemonsets --force --grace-period=60 --delete-emptydir-data; done

Shut down the worker nodes: (Wait for them to all shut down before proceeding)

    for i in 0 1 2 ; do ssh core@okd4-worker-${i}.${LAB_DOMAIN} "sudo shutdown -h now"; done

Shut down the master nodes: (Wait for them to all shut down before proceeding)

    for i in 0 1 2 ; do ssh core@okd4-master-${i}.${LAB_DOMAIN} "sudo shutdown -h now"; done

Shut down the load balancer:

    ssh root@okd4-lb01 "shutdown -h now"

### Restarting after shutdown:

    ipmitool -I lanplus -H10.11.11.10 -p6228 -Uadmin -Ppassword chassis power on

    for i in 6230 6231 6232; do ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on; sleep 3; done

    ssh core@okd4-master-0.${LAB_DOMAIN} "journalctl -b -f -u kubelet.service"

    oc login -u admin https://api.okd4.${LAB_DOMAIN}:6443

    oc get nodes

    NAME                            STATUS                        ROLES          AGE   VERSION
    okd4-master-0.your.domain.org   Ready                         infra,master   21d   v1.17.1
    okd4-master-1.your.domain.org   Ready                         infra,master   21d   v1.17.1
    okd4-master-2.your.domain.org   Ready                         infra,master   21d   v1.17.1
    okd4-worker-0.your.domain.org   NotReady,SchedulingDisabled   worker         21d   v1.17.1
    okd4-worker-1.your.domain.org   NotReady,SchedulingDisabled   worker         21d   v1.17.1
    okd4-worker-2.your.domain.org   NotReady,SchedulingDisabled   worker         21d   v1.17.1

    for i in 6233 6234 6235; do ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on; sleep 3; done

    oc get nodes

    NAME                            STATUS                     ROLES          AGE   VERSION
    okd4-master-0.your.domain.org   Ready                      infra,master   21d   v1.17.1
    okd4-master-1.your.domain.org   Ready                      infra,master   21d   v1.17.1
    okd4-master-2.your.domain.org   Ready                      infra,master   21d   v1.17.1
    okd4-worker-0.your.domain.org   Ready,SchedulingDisabled   worker         21d   v1.17.1
    okd4-worker-1.your.domain.org   Ready,SchedulingDisabled   worker         21d   v1.17.1
    okd4-worker-2.your.domain.org   Ready,SchedulingDisabled   worker         21d   v1.17.1

    oc get nodes

    NAME                            STATUS   ROLES          AGE   VERSION
    okd4-master-0.your.domain.org   Ready    infra,master   21d   v1.17.1
    okd4-master-1.your.domain.org   Ready    infra,master   21d   v1.17.1
    okd4-master-2.your.domain.org   Ready    infra,master   21d   v1.17.1
    okd4-worker-0.your.domain.org   Ready    worker         21d   v1.17.1
    okd4-worker-1.your.domain.org   Ready    worker         21d   v1.17.1
    okd4-worker-2.your.domain.org   Ready    worker         21d   v1.17.1

    for i in 0 1 2 ; do oc adm uncordon okd4-worker-${i}.${LAB_DOMAIN} ; done

    oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

    oc scale statefulsets mariadb-galera --replicas=3 -n mariadb-galera
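A small quality-of-life note for the restart sequence above: rather than re-running `oc get nodes` by hand until everything reports `Ready`, you can let `oc` block for you. A hedged sketch — the label selector assumes the default `worker` role label used throughout this guide:

```bash
# Block until every worker node reports Ready (up to 20 minutes), then uncordon them
oc wait node -l node-role.kubernetes.io/worker --for=condition=Ready --timeout=20m
for i in 0 1 2 ; do oc adm uncordon okd4-worker-${i}.${LAB_DOMAIN} ; done
```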
--------------------------------------------------------------------------------
/docs/pages/MariaDB.md:
--------------------------------------------------------------------------------
# Deploying a MariaDB Galera Cluster

__Note: This section assumes that you have set up Ceph storage as described here: [Ceph](Ceph.md)__

In this section we will build a custom container image that is configured to be part of a MariaDB 10.4 cluster using Galera. The MariaDB cluster will be deployed as a StatefulSet and will leverage Ceph block devices with an XFS filesystem for persistent storage.

### Building the MariaDB Galera Cluster container image

We are going to use podman to build this image. Then, we are going to push it to the image registry of our OpenShift cluster.

    mkdir -p ${OKD4_LAB_PATH}/mariadb
    cp -r ./MariaDB/* ${OKD4_LAB_PATH}/mariadb
    cd ${OKD4_LAB_PATH}/mariadb/Image
    ls -l

    total 40
    -rw-r--r--  1 user  staff  1231 Jan 12 12:33 Dockerfile
    -rw-r--r--  1 user  staff    98 Jan 12 12:30 MariaDB.repo
    -rw-r--r--  1 user  staff    87 Jan 12 12:26 liveness-probe.sh
    -rw-r--r--  1 user  staff  1999 Jan 12 12:37 mariadb-cluster.sh
    -rw-r--r--  1 user  staff    98 Jan 12 12:26 readiness-probe.sh

* `Dockerfile`: This file contains the directions for building our container.
* `MariaDB.repo`: This file is the definition for the MariaDB 10.4 RPM repository. Since our container is based on a CentOS 7 base image, our package installer is RPM based.
* `liveness-probe.sh`: The script called by the OpenShift runtime to check pod liveness. More on this file and the next one later. The liveness probe and readiness probe are used by OpenShift to monitor the state of the running container.
* `readiness-probe.sh`: The script called by the OpenShift runtime to check pod readiness.
* `mariadb-cluster.sh`: This is the main script that configures and starts MariaDB. It is set as the entry point for the container image.

Now, let's build this image and push it to the OpenShift repository.

1. Make sure that your container engine is available.

   On CentOS:

       dnf install -y podman

   On a desktop OS, start `Docker Desktop`.

2. Log into your image registry: (Assuming that you are already logged into your OpenShift cluster)

   If you have not exposed the default route for external access to the registry, do that now:

       oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

   Log in:

       podman login -u $(oc whoami) -p $(oc whoami -t) --tls-verify=false $(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')

3. Build the image:

       cd ./MariaDB/Image
       podman build -t $(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')/openshift/mariadb-galera:10.4 .

4. Push the image to the OpenShift image registry.

       podman push $(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')/openshift/mariadb-galera:10.4 --tls-verify=false

Now, let's deploy a database cluster:

    cd ../Deploy

### Deploying a MariaDB Galera cluster into OpenShift:

Let's look at the OpenShift deployment files in the Deploy folder.

* `mariadb-galera-configmap.yaml`: ConfigMap with the MariaDB configuration
* `mariadb-galera-headless-svc.yaml`: Headless service for inter-pod communications
* `mariadb-galera-loadbalance-svc.yaml`: Service for database connections
* `mariadb-statefulset.yaml`: StatefulSet deployment definition

The OpenShift deployment is a StatefulSet. A StatefulSet is a special kind of deployment that ensures that a given pod will always get the same PersistentVolumeClaim across restarts. For more information, see the official documentation [here](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
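That stable-identity guarantee is visible once the cluster is up (see the deployment steps below): each replica gets its own claim, named after the StatefulSet's volume claim template. A hedged illustration — the exact claim names depend on the template name in `mariadb-statefulset.yaml`:

```bash
oc get pvc -n mariadb-galera
# Expect one Bound claim per replica, e.g. (hypothetical names):
#   data-mariadb-galera-0   Bound   pvc-...   rook-ceph-block
#   data-mariadb-galera-1   Bound   pvc-...   rook-ceph-block
#   data-mariadb-galera-2   Bound   pvc-...   rook-ceph-block
```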
Now, let's deploy MariaDB: (Assuming that you are logged into your OpenShift cluster)

1. Create a new Namespace:

       oc new-project mariadb-galera

1. Create the service account for our MariaDB deployment, and add the `anyuid` Security Context Constraint to the service account. This will allow the MariaDB pod to run as UID 27 like we defined in our Dockerfile.

       oc create sa mariadb -n mariadb-galera
       oc adm policy add-scc-to-user anyuid -z mariadb -n mariadb-galera

1. Deploy the MariaDB cluster:

       oc apply -f mariadb-galera-configmap.yaml -n mariadb-galera
       oc apply -f mariadb-galera-headless-svc.yaml -n mariadb-galera
       oc apply -f mariadb-galera-loadbalance-svc.yaml -n mariadb-galera
       oc apply -f mariadb-statefulset.yaml -n mariadb-galera

You should now see your MariaDB cluster deploying. Each node will start in series after the previous one has passed its readiness probe. The `podManagementPolicy: "OrderedReady"` directive in the StatefulSet ensures that the cluster will always stop and start in a healthy state.
--------------------------------------------------------------------------------
/docs/pages/Nexus_Config.md:
--------------------------------------------------------------------------------
## Setting up Sonatype Nexus

We are now going to install [Sonatype Nexus](https://www.sonatype.com/nexus-repository-oss). The Nexus will be used for our external container registry, as well as serving as an artifact repository for maven, npm, or any other application development repositories that you might need.

Nexus requires Java 8, so let's install that now if it was not installed during the initial system build:

    dnf -y install java-11-openjdk java-1.8.0-openjdk
    alternatives --set java $(alternatives --display java | grep 'family java-11-openjdk' | cut -d' ' -f1)

Note that we installed both Java 8 and 11, and then set the system default to Java 11. We will configure Nexus to use Java 8, since it does not yet support any newer Java version.
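A quick sanity check, if you like — the system default should now be Java 11, even though Nexus itself will be pointed at Java 8 below (hedged: the exact `alternatives` output format varies by release):

```bash
java -version 2>&1 | head -1                             # should report an 11.x runtime
alternatives --display java | grep 'currently points'    # confirms which alternative is active
```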
Now, we'll install Nexus:

    mkdir /usr/local/nexus
    cd /usr/local/nexus
    wget https://download.sonatype.com/nexus/3/latest-unix.tar.gz
    tar -xzvf latest-unix.tar.gz
    ln -s nexus-3.14.0-04 nexus-3   # substitute the appropriate version here

Add a user for Nexus:

    groupadd nexus
    useradd -g nexus nexus
    chown -R nexus:nexus /usr/local/nexus

Enable firewall access:

    firewall-cmd --add-port=8081/tcp --permanent
    firewall-cmd --add-port=8443/tcp --permanent
    firewall-cmd --add-port=5000/tcp --permanent
    firewall-cmd --add-port=5001/tcp --permanent
    firewall-cmd --reload

Create a service reference for Nexus so the OS can start and stop it:

    cat <<EOF > /etc/systemd/system/nexus.service
    [Unit]
    Description=nexus service
    After=network.target

    [Service]
    Type=forking
    LimitNOFILE=65536
    ExecStart=/usr/local/nexus/nexus-3/bin/nexus start
    ExecStop=/usr/local/nexus/nexus-3/bin/nexus stop
    User=nexus
    Restart=on-abort

    [Install]
    WantedBy=multi-user.target
    EOF

Configure Nexus to use JRE 8:

    sed -i "s|# INSTALL4J_JAVA_HOME_OVERRIDE=|INSTALL4J_JAVA_HOME_OVERRIDE=$(alternatives --display java | grep 'slave jre:' | grep 'java-1.8.0-openjdk' | cut -d' ' -f4)|g" /usr/local/nexus/nexus-3/bin/nexus

### Enabling TLS

Before we start Nexus, let's go ahead and set up TLS so that our connections are secure from prying eyes.

```bash
keytool -genkeypair -keystore keystore.jks -storepass password -keypass password -alias jetty -keyalg RSA -keysize 4096 -validity 5000 -dname "CN=nexus.${LAB_DOMAIN}, OU=okd4-lab, O=okd4-lab, L=Roanoke, ST=Virginia, C=US" -ext "SAN=DNS:nexus.${LAB_DOMAIN},IP:${BASTION_HOST}" -ext "BC=ca:true"
keytool -importkeystore -srckeystore keystore.jks -destkeystore keystore.jks -deststoretype pkcs12
cp keystore.jks /usr/local/nexus/nexus-3/etc/ssl/keystore.jks
chown nexus:nexus /usr/local/nexus/nexus-3/etc/ssl/keystore.jks
```

Modify the Nexus configuration for HTTPS:

```bash
mkdir /usr/local/nexus/sonatype-work/nexus3/etc
cat <<EOF >> /usr/local/nexus/sonatype-work/nexus3/etc/nexus.properties
nexus-args=\${jetty.etc}/jetty.xml,\${jetty.etc}/jetty-https.xml,\${jetty.etc}/jetty-requestlog.xml
application-port-ssl=8443
EOF
chown -R nexus:nexus /usr/local/nexus/sonatype-work/nexus3/etc
```

Now we should be able to start Nexus:

```bash
systemctl enable nexus --now
```

Add the Nexus cert to your host's local keystore:

```bash
keytool -printcert -sslserver nexus.${LAB_DOMAIN}:8443 -rfc > /etc/pki/ca-trust/source/anchors/nexus.crt
update-ca-trust
```

Now point your browser to `https://nexus.your.domain.com:8443`. Login, and create a password for your admin user.

If prompted to allow anonymous access, select to allow.

The `?` in the top right hand corner of the Nexus screen will take you to their documentation.

We need to create a hosted Docker registry to hold the mirror of the OKD images that we will use to install our cluster.

1. Login as your new admin user
1. Select the gear icon from the top bar, in between a cube icon and the search dialog.
1. Select `Repositories` from the left menu bar.

   ![Nexus Admin](images/NexusAdmin.png)
1. Select `+ Create repository`
1. Select `docker (hosted)`
1. Name your repository `origin`
1. Check `HTTPS` and put `5001` in the port dialog entry
1. Check `Allow anonymous docker pull`
1. Check `Enable Docker V1 API`; you may need this for some older docker clients.

   ![Nexus OKD Repo](images/CreateOriginRepo.png)

1. Click `Create repository` at the bottom of the page.
1. Now expand the `Security` menu on the left and select `Realms`
1. Add `Docker Bearer Token Realm` to the list of active `Realms`

   ![Realms](images/NexusRealms.png)

1. Click `Save`

Now we need to deploy at least one KVM host for our cluster: [Build KVM Host/s](Deploy_KVM_Host.md)
--------------------------------------------------------------------------------
/docs/pages/KubeConDemo.md:
--------------------------------------------------------------------------------
# KubeCon EU - OKD UPI Demo:

## Intro:

1. Mirrored Install
1. UPI
1. Simulated Bare Metal - vBMC for IPMI control
1. LB and VMs already staged
1. iPXE boot using MAC of primary NIC
1. Fixed IPs - added to ignition with fcct

## Bootstrap

Power On the Bootstrap Node:

    ipmitool -I lanplus -H10.11.11.10 -p6229 -Uadmin -Ppassword chassis power on

Watch it Boot:

    virsh console okd4-bootstrap

Start Master Nodes:

    for i in 6230 6231 6232; do ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on; sleep 10; done

Watch Bootstrap logs:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-bootstrap.clg.lab "journalctl -b -f -u bootkube.service"

Monitor Bootstrap Progress:

    openshift-install --dir=${OKD4_LAB_PATH}/okd4-install-dir wait-for bootstrap-complete --log-level debug

Remove Bootstrap Node:

    ssh root@okd4-lb01.clg.lab "cat /etc/haproxy/haproxy.cfg | grep -v bootstrap > /etc/haproxy/haproxy.tmp && mv /etc/haproxy/haproxy.tmp /etc/haproxy/haproxy.cfg && systemctl restart haproxy.service"

    virsh destroy okd4-bootstrap

## Complete Install

Monitor Install Progress:

    openshift-install --dir=${OKD4_LAB_PATH}/okd4-install-dir wait-for install-complete --log-level debug

## Discuss the environment while install completes:

### Load Balancer Configuration

    ssh root@okd4-lb01.clg.lab "cat /etc/haproxy/haproxy.cfg"

### DNS

    cat /etc/named/zones/db.clg.lab

### iPXE

## Post Install

    export KUBECONFIG="${OKD4_LAB_PATH}/okd4-install-dir/auth/kubeconfig"

Remove Samples Operator:

    oc patch configs.samples.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Removed"}}'

EmptyDir for Registry:

    oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'

Image Pruner:

    oc patch imagepruners.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3,"keepYoungerThan":60,"resources":{},"affinity":{},"nodeSelector":{},"tolerations":[],"startingDeadlineSeconds":60,"successfulJobsHistoryLimit":3,"failedJobsHistoryLimit":3}}'

Add Worker Nodes:

    for i in 6233 6234 6235; do ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on; sleep 10; done
Monitor Worker Node Boot:

    ssh root@kvm-host01.clg.lab
    virsh console okd4-worker-0

Approve CSR:

    oc get csr
    oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve

    oc get nodes

Designate Infra Nodes:

    for i in 0 1 2
    do
      oc label nodes okd4-master-${i}.${LAB_DOMAIN} node-role.kubernetes.io/infra=""
    done

    oc patch scheduler cluster --patch '{"spec":{"mastersSchedulable":false}}' --type=merge

    oc patch -n openshift-ingress-operator ingresscontroller default --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node.kubernetes.io/unschedulable","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}]}}}' --type=merge

    oc get pod -n openshift-ingress -o wide

HtPasswd Setup:

    oc create -n openshift-config secret generic htpasswd-secret --from-file=htpasswd=${OKD4_LAB_PATH}/okd-creds/htpasswd

    oc apply -f ./Provisioning/htpasswd-cr.yaml

    oc adm policy add-cluster-role-to-user cluster-admin admin

    oc delete secrets kubeadmin -n kube-system

## Open DNS for Updates:

    ssh root@${LAB_NAMESERVER} 'sed -i "s|registry.svc.ci.openshift.org|;sinkhole-reg|g" /etc/named/zones/db.sinkhole && sed -i "s|quay.io|;sinkhole-quay|g" /etc/named/zones/db.sinkhole && systemctl restart named'

## Ceph

    for i in 0 1 2
    do
      oc label nodes okd4-worker-${i}.${LAB_DOMAIN} role=storage-node
    done

    oc apply -f ${OKD4_LAB_PATH}/ceph/common.yml
    oc apply -f ${OKD4_LAB_PATH}/ceph/operator-openshift.yml

    oc apply -f ${OKD4_LAB_PATH}/ceph/cluster.yml

    oc get pods -n rook-ceph

## PVC for the Image Registry

    oc apply -f ${OKD4_LAB_PATH}/ceph/ceph-storage-class.yml

    oc apply -f ${OKD4_LAB_PATH}/ceph/registry-pvc.yml

    oc get pvc -n openshift-image-registry

    oc patch configs.imageregistry.operator.openshift.io cluster --type json -p '[{ "op": "remove", "path": "/spec/storage/emptyDir" }]'

    oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"rolloutStrategy":"Recreate","managementState":"Managed","storage":{"pvc":{"claim":"registry-pvc"}}}}'

## Eclipse Che
--------------------------------------------------------------------------------
/docs/pages/KVM_Host_Install.md:
--------------------------------------------------------------------------------
## Manual KVM Host install

Assuming you already have a USB key with the CentOS 8 install ISO on it, do the following:

Ensure that you have DNS `A` and `PTR` records for each host. The DNS files that you set up in [DNS Setup](DNS_Config.md) contain records for 3 kvm-hosts.

### Install CentOS: (Choose a Minimal Install)

* Network:
  * Configure the network interface with a fixed IP address
  * If you are following this guide exactly:
    * Set the following hostnames and IPs:
      * `kvm-host01` `10.11.11.200`
      * `kvm-host02` `10.11.11.201`
    * Continue this pattern if you have multiple KVM hosts.
* Storage:
  * Allocate 100GB for the `/` filesystem
  * Do not create a `/home` filesystem (no users on this system)
  * Allocate the remaining disk space for the VM guest filesystem
    * I put my KVM guests in `/VirtualMachines`

After the installation completes, ensure that you can ssh to your host from your bastion:

    ssh-copy-id root@kvm-host01.${LAB_DOMAIN}
    ssh root@kvm-host01.${LAB_DOMAIN} "uname -a"

Then, disconnect the monitor, mouse and keyboard.

Set up KVM for each host. Do the following for kvm-host01, and modify the IP info for each successive kvm-host that you build:

1. SSH to the host:

       ssh root@kvm-host01

1. Set up KVM:

       cat << EOF > /etc/yum.repos.d/kvm-common.repo
       [kvm-common]
       name=KVM Common
       baseurl=http://mirror.centos.org/centos/7/virt/x86_64/kvm-common/
       gpgcheck=0
       enabled=1
       EOF

       dnf -y install wget git net-tools bind-utils bash-completion nfs-utils rsync qemu-kvm libvirt libvirt-python libguestfs-tools virt-install iscsi-initiator-utils
       dnf -y update

       cat <<EOF >> /etc/modprobe.d/kvm.conf
       options kvm_intel nested=1
       EOF
       systemctl enable libvirtd

       mkdir /VirtualMachines
       virsh pool-destroy default
       virsh pool-undefine default

   If there is no default pool, the two commands above will fail. This is OK, carry on.

       virsh pool-define-as --name default --type dir --target /VirtualMachines
       virsh pool-autostart default
       virsh pool-start default

1. Set up Network Bridge:

   You need to identify the NIC that you configured when you installed this host. It will be something like `eno1`, or `enp108s0u1`

       ip addr

   You will see output like:

       1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
           link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
           inet 127.0.0.1/8 scope host lo
              valid_lft forever preferred_lft forever
           inet6 ::1/128 scope host
              valid_lft forever preferred_lft forever

       ....

       15: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
           link/ether 1c:69:7a:03:21:e9 brd ff:ff:ff:ff:ff:ff
           inet 10.11.11.10/24 brd 10.11.11.255 scope global noprefixroute br0
              valid_lft forever preferred_lft forever
           inet6 fe80::1e69:7aff:fe03:21e9/64 scope link
              valid_lft forever preferred_lft forever

   Somewhere in the output will be the interface that you configured with your bastion IP address. Find it and set a variable with that value:

       PRIMARY_NIC="eno1"
       IP=10.11.11.200   # Change this for kvm-host01, etc...

   Set the following variables from your bastion host: `LAB_DOMAIN`, `LAB_NAMESERVER`, `LAB_GATEWAY`

1. Set the hostname:

       hostnamectl set-hostname kvm-host01.${LAB_DOMAIN}

1. Create a network bridge device named `br0`

       nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.address "${IP}/24" ipv4.gateway "${LAB_GATEWAY}" ipv4.dns "${LAB_NAMESERVER}" ipv4.dns-search "${LAB_DOMAIN}" ipv4.never-default no connection.autoconnect yes bridge.stp no ipv6.method ignore

1. Create a slave device for your primary NIC

       nmcli con add type ethernet con-name br0-slave-1 ifname ${PRIMARY_NIC} master br0

1. Delete the configuration of the primary NIC

       nmcli con del ${PRIMARY_NIC}

1. Put it back disabled

       nmcli con add type ethernet con-name ${PRIMARY_NIC} ifname ${PRIMARY_NIC} connection.autoconnect no ipv4.method disabled ipv6.method ignore

1. Reboot:

       shutdown -r now
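After the reboot, it's worth a quick check that the bridge owns the IP and libvirt is healthy before moving on — a hedged example, assuming the `br0` name used above:

```bash
nmcli -g GENERAL.STATE device show br0   # expect: 100 (connected)
ip -br addr show br0                     # the host IP should now live on br0, not on the physical NIC
virsh pool-list --all                    # the 'default' pool should be listed and active
```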
Go ahead and build out all of your KVM hosts at this point. For this lab you need at least one KVM host with 64GB of RAM. With this configuration, you will build an OKD cluster with 3 Master nodes which are also schedulable, (is that a word?), as worker nodes. If you have two, then you will build an OKD cluster with 3 Master and 3 Worker nodes.

It is now time to deploy an OKD cluster: [Deploy OKD](DeployOKD.md)
--------------------------------------------------------------------------------
/docs/pages/DNS_Config.md:
--------------------------------------------------------------------------------
## DNS Configuration

This tutorial includes pre-configured files for you to modify for your specific installation. These files will go into your `/etc` directory. You will need to modify them for your specific setup.

    /etc/named.conf
    /etc/named/named.conf.local
    /etc/named/zones/db.10.11.11
    /etc/named/zones/db.your.domain.org
    /etc/named/zones/db.sinkhole

__If you set up your lab router on the 10.11.11/24 network, then you can use the example DNS files as they are for this exercise.__

Do the following, from the root of this project:

    mv /etc/named.conf /etc/named.conf.orig
    cp ./DNS/named.conf /etc
    cp -r ./DNS/named /etc

    mv /etc/named/zones/db.your.domain.org /etc/named/zones/db.${LAB_DOMAIN}   # Don't substitute your.domain.org in this line
    sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/named/named.conf.local
    sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/named/zones/db.${LAB_DOMAIN}
    sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/named/zones/db.10.11.11
    sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/named/zones/db.sinkhole

Now let's talk about this configuration, starting with the A records, (forward lookup zone). If you did not use the 10.11.11/24 network as illustrated, then you will have to edit the files to reflect the appropriate A and PTR records for your setup.

In the example file, there are some entries to take note of:

1. The KVM hosts are named `kvm-host01`, `kvm-host02`, etc... Modify this to reflect the number of KVM hosts that your lab setup will have. The example allows for three hosts.

1. The Bastion Host is `bastion`.

1. The Sonatype Nexus server gets its own alias A record, `nexus.your.domain.org`. This is not strictly necessary, but I find it useful. For your lab, make sure that this A record reflects the IP address of the server where you have installed Nexus. In this example, it is installed on the bastion host.

1. These example files contain references for a full OpenShift cluster with an haproxy load balancer. The OKD cluster has three each of master, and worker (compute) nodes. In this tutorial, you will build a minimal cluster with three master nodes which are also schedulable as workers.

   __Remove or add entries to these files as needed for your setup.__

1. There is one wildcard record that OKD needs: __`okd4` is the name of the cluster.__

       *.apps.okd4.your.domain.org

   The "apps" record will be for all of the applications that you deploy into your OKD cluster.

   This wildcard A record needs to point to the entry point for your OKD cluster. If you build a cluster with three master nodes like we are doing here, you will need a load balancer in front of the cluster. In this case, your wildcard A records will point to the IP address of your load balancer. Never fear, I will show you how to deploy an HA-Proxy load balancer.

1. There are two A records for the Kubernetes API, internal & external. In this case, the same load balancer is handling both. So, they both point to the IP address of the load balancer. __Again, `okd4` is the name of the cluster.__

       api.okd4.your.domain.org.     IN  A  10.11.11.50
       api-int.okd4.your.domain.org. IN  A  10.11.11.50

1. There are three SRV records for the etcd hosts.

       _etcd-server-ssl._tcp.okd4.your.domain.org  86400  IN  SRV  0  10  2380  etcd-0.okd4.your.domain.org.
       _etcd-server-ssl._tcp.okd4.your.domain.org  86400  IN  SRV  0  10  2380  etcd-1.okd4.your.domain.org.
       _etcd-server-ssl._tcp.okd4.your.domain.org  86400  IN  SRV  0  10  2380  etcd-2.okd4.your.domain.org.

1. The db.sinkhole file is used to block DNS requests to `registry.svc.ci.openshift.org`. This forces the OKD installation to use the Nexus mirror that we will create. This file is modified by the utility scripts that I provided to enable and disable access to `registry.svc.ci.openshift.org` accordingly.
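In addition to the `named-checkconf` check below, the bind package ships `named-checkzone`, which will lint each zone file individually — handy after all of that `sed` surgery. A hedged example using the zones from this guide:

```bash
named-checkzone ${LAB_DOMAIN} /etc/named/zones/db.${LAB_DOMAIN}
named-checkzone 11.11.10.in-addr.arpa /etc/named/zones/db.10.11.11   # reverse zone for 10.11.11.0/24
```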
When you have completed all of your configuration changes, you can test the configuration with the following command:

    named-checkconf

If the output is clean, then you are ready to fire it up!

### Starting DNS

Now that we are done with the configuration, let's enable DNS and start it up.

    firewall-cmd --permanent --add-service=dns
    firewall-cmd --reload
    systemctl enable named --now

You can now test DNS resolution. Try some `pings` or `dig` commands.

### __Hugely Helpful Tip:__

__If you are using a MacBook for your workstation, you can enable DNS resolution to your lab by creating a file in the `/etc/resolver` directory on your Mac.__

    sudo bash

    vi /etc/resolver/your.domain.com

Name the file `your.domain.com` after the domain that you created for your lab. Enter something like this example, modified for your DNS server's IP:

    nameserver 10.11.11.10

Save the file.

Your MacBook should now query your new DNS server for entries in your new domain. __Note:__ If your MacBook is on a different network and is routed to your Lab network, then the `acl` entry in your DNS configuration must allow your external network to query. Otherwise, you will bang your head wondering why it does not work... __The ACL is very powerful. Use it. Just like you are using firewalld. Right? I know you did not disable it when you installed your host...__

### On to the next...

Now that we have DNS configured, continue on to [Nginx Setup](Nginx_Config.md).
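One last smoke test before you go — a few example `dig` queries against the new server (hedged: this assumes the Bastion/nameserver at `10.11.11.10` and the `okd4` cluster name used throughout):

```bash
dig +short bastion.${LAB_DOMAIN} @10.11.11.10              # forward lookup
dig +short -x 10.11.11.10 @10.11.11.10                     # reverse (PTR) lookup
dig +short test.apps.okd4.${LAB_DOMAIN} @10.11.11.10       # any name should match the *.apps wildcard
dig +short SRV _etcd-server-ssl._tcp.okd4.${LAB_DOMAIN} @10.11.11.10
```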
95 | -------------------------------------------------------------------------------- /docs/LabIntro.md: -------------------------------------------------------------------------------- 1 | ## Building an OpenShift - OKD 4.X Lab, Soup to Nuts 2 | 3 | __Note: This tutorial is being deprecated in favor of a new version:__ 4 | 5 | ## Link To New Tutorial: [https://upstreamwithoutapaddle.com/home-lab/lab-intro/](https://upstreamwithoutapaddle.com/home-lab/lab-intro/) 6 | 7 | ## The newest version of my helper scripts is here: [https://github.com/cgruver/kamarotos](https://github.com/cgruver/kamarotos) 8 | 9 | The archived main branch can be found in the `archive` branch of this project and the previous documentation can be found at: [Lab Intro](LabIntro.md) 10 | 11 | I will not be doing any additional development on this project. 12 | 13 | New work can be found on my Blog: [Upstream - Without A Paddle](https://upstreamwithoutapaddle.com/) 14 | 15 | 16 | ### Equipment for your lab 17 | 18 | You will need at least one physical server for your lab. More is obviously better, but also more expensive. I have built my lab around the small form-factor NUC systems that Intel builds. My favorite is the [NUC10i7FNK](https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc10i7fnk.html). This little machine sports a 6-core 10th Gen i7 processor at 25W TDP and supports 64GB of RAM. 19 | 20 | I am also a fan of the [NUC10i3FNK](https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc10i3fnk.html). This unit is smaller than the NUC10i7FNK. It sports a dual-core CPU, supports 64GB of RAM and has a single M.2 slot for an SSD. One of these will make a great [Bastion Host](pages/Bastion.md) and development server. 21 | 22 | You will either need a router that supports DHCP reservations, TFTP, and PXE or you will set up DHCP & TFTP on a Linux host. Assuming that you already have a home router, you can use that as long as it supports TFTP and PXE configuration. Most consumer WiFi routers do not support TFTP or PXE. However, if you want something portable and awesome, check out the GL.iNet [GL-AR750S-Ext](https://www.gl-inet.com/products/gl-ar750s/). This little guy runs OpenWRT which means that you can use it as a router for your lab network, plus - Wireless bridge, VPN, PXE, Http, DNS, etc... [OpenWRT](https://openwrt.org) is a very powerful networking distro. There is a new version out now, [GL-MV1000](https://www.gl-inet.com/products/gl-mv1000/). It does not have WiFi, but it is much faster than the GL-AR750S-Ext. I carry the AR750 with me when traveling, and use a pair of the MV1000s in my home lab. 23 | 24 | You may also need a network switch, if you don't have enough 1GB ports available in your router. I am using a couple of [Netgear GS110EMX](https://www.netgear.com/support/product/GS110EMX.aspx). It's a great little managed switch with 8 1Gb ports and 2 10Gb ports. The 10Gb ports are really handy if you also have a NAS device that supports 10Gb network speeds. 25 | 26 | Optional: NAS device. 27 | 28 | In early 2019, I came across this little Frankenstein. The QNAP NAS-Book [TBS-453DX](https://www.qnap.com/en-us/product/tbs-453dx). This thing is not much bigger than the NUCi7KYK, (the VHS tape). It has 4 M.2 slots for SSD storage and will serve as an iSCSI server, in addition to all of the other capabilities that QNAP markets it for. The iSCSI server is what caught my eye! This thing completes a mini-datacenter setup. 
With this device added to my lab, I am able to replicate most of the capabilities that you will find in an enterprise datacenter.
29 | 
30 | My home lab has grown to be almost embarrassing... but, what can I say, except that I have a VERY understanding wife.
31 | 
32 | ![Picture of my home Lab - Yes, those are Looney Tunes DVDs behind.](pages/images/Home-Lab.jpg)
33 | 
34 | For your own lab, I would recommend starting with the following:
35 | 
36 | * 1 x NUC8i3BEK - For your Bastion host and development server
37 |   * 32GB RAM
38 |   * 500GB M.2 SATA SSD
39 | * 1 x NUC10i7FNK - For your Hypervisor
40 |   * 64GB RAM
41 |   * 1TB M.2 SATA SSD
42 | * 1 x GL.iNet GL-AR750S-Ext - For your router
43 | 
44 | ![Picture of my Mini Lab setup.](pages/images/MiniLab2.jpg)
45 | 
46 | A minimal setup like this will cost a little less than a 13" MacBook Pro with 16GB of RAM. For that outlay you get 8 CPU cores (16 virtual CPUs), 96GB of RAM, and a really cool travel router!
47 | 
48 | Check prices at [Amazon.com](https://www.amazon.com) and [B&H Photo Video](https://www.bhphotovideo.com). I get most of my gear from those two outlets.
49 | 
50 | Once you have acquired the necessary gear, it's time to start setting it all up.
51 | 
52 | __Follow each of these guides to get set up:__
53 | 
54 | 1. [Bastion Host](pages/Bastion.md)
55 | 1. [DNS Setup](pages/DNS_Config.md)
56 | 1. [Nginx Setup & RPM Repo sync](pages/Nginx_Config.md)
57 | 1. [PXE Boot with TFTP & DHCP](pages/GL-AR750S-Ext.md)
58 | 1. [Sonatype Nexus Setup](pages/Nexus_Config.md)
59 | 1. [Build KVM Host/s](pages/Deploy_KVM_Host.md)
60 | 
61 | When your setup is complete, it's time to deploy your OKD cluster:
62 | 
63 | [Deploy OKD](pages/DeployOKD.md)
64 | 
65 | After deployment is complete, here are some things to do with your cluster:
66 | 
67 | 1. [Designate your Master Nodes as Infrastructure Nodes](pages/InfraNodes.md)
68 | 
69 |    __Do Not do this step if you do not have dedicated `worker` nodes.__
70 | 
71 |    If you have dedicated worker nodes in addition to three master nodes, then I recommend this step to pin your Ingress Routers to the Master nodes. If they restart on worker nodes, you will lose Ingress access to your cluster unless you add the worker nodes to your external HA Proxy configuration. I prefer to use Infrastructure nodes to run the Ingress routers and a number of other pods.
72 | 
73 | 1. [Set up Htpasswd as an Identity Provider](pages/HtPasswd.md)
74 | 1. [Deploy a Ceph cluster for block storage provisioning](pages/Ceph.md)
75 | 1. [Create a MariaDB Galera StatefulSet](pages/MariaDB.md)
76 | 1. [Updating Your Cluster](pages/UpdateOKD.md)
77 | 1. [Tekton pipeline for Quarkus and Spring Boot applications](https://github.com/cgruver/tekton-pipeline-okd4)
78 | 1. [Gracefully shut down your cluster](pages/ShuttingDown.md)
79 | 
--------------------------------------------------------------------------------
/docs/pages/InfraNodes.md:
--------------------------------------------------------------------------------
1 | ## Designate Master nodes as Infrastructure nodes
2 | 
3 | 1. Add a label to your master nodes:
4 | 
5 | ```bash
6 | for i in 0 1 2
7 | do
8 |   oc label nodes okd4-master-${i}.${LAB_DOMAIN} node-role.kubernetes.io/infra=""
9 | done
10 | ```
11 | 
12 | 1. Remove the `worker` label from the master nodes by marking them unschedulable:
13 | 
14 | ```bash
15 | oc patch scheduler cluster --patch '{"spec":{"mastersSchedulable":false}}' --type=merge
16 | ```
17 | 
18 | 1. 
Add `nodePlacement` and taint tolerations to the Ingress Controller: 19 | 20 | ```bash 21 | oc patch -n openshift-ingress-operator ingresscontroller default --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"node.kubernetes.io/unschedulable","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}]}}}' --type=merge 22 | ``` 23 | 24 | 1. Verify that your Ingress pods get provisioned onto the master nodes: 25 | 26 | ```bash 27 | oc get pod -n openshift-ingress -o wide 28 | ``` 29 | 30 | 1. Repeat for the ImageRegistry: 31 | 32 | ```bash 33 | oc patch configs.imageregistry.operator.openshift.io cluster --patch '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""},"tolerations":[{"key":"node.kubernetes.io/unschedulable","effect":"NoSchedule"},{"key":"node-role.kubernetes.io/master","effect":"NoSchedule"}]}}' --type=merge 34 | ``` 35 | 36 | 1. Finally for Cluster Monitoring: 37 | 38 | Create a file named `cluster-monitoring-config.yaml` with the following content: 39 | 40 | ```yaml 41 | apiVersion: v1 42 | kind: ConfigMap 43 | metadata: 44 | name: cluster-monitoring-config 45 | namespace: openshift-monitoring 46 | data: 47 | config.yaml: | 48 | prometheusOperator: 49 | nodeSelector: 50 | node-role.kubernetes.io/infra: "" 51 | tolerations: 52 | - key: "node-role.kubernetes.io/master" 53 | operator: "Equal" 54 | value: "" 55 | effect: "NoSchedule" 56 | prometheusK8s: 57 | nodeSelector: 58 | node-role.kubernetes.io/infra: "" 59 | tolerations: 60 | - key: "node-role.kubernetes.io/master" 61 | operator: "Equal" 62 | value: "" 63 | effect: "NoSchedule" 64 | alertmanagerMain: 65 | nodeSelector: 66 | node-role.kubernetes.io/infra: "" 67 | tolerations: 68 | - key: "node-role.kubernetes.io/master" 69 | operator: "Equal" 70 | value: "" 71 | effect: "NoSchedule" 72 | kubeStateMetrics: 73 | nodeSelector: 74 | node-role.kubernetes.io/infra: "" 75 | tolerations: 76 | - key: "node-role.kubernetes.io/master" 77 | operator: "Equal" 78 | value: "" 79 | effect: "NoSchedule" 80 | grafana: 81 | nodeSelector: 82 | node-role.kubernetes.io/infra: "" 83 | tolerations: 84 | - key: "node-role.kubernetes.io/master" 85 | operator: "Equal" 86 | value: "" 87 | effect: "NoSchedule" 88 | telemeterClient: 89 | nodeSelector: 90 | node-role.kubernetes.io/infra: "" 91 | tolerations: 92 | - key: "node-role.kubernetes.io/master" 93 | operator: "Equal" 94 | value: "" 95 | effect: "NoSchedule" 96 | k8sPrometheusAdapter: 97 | nodeSelector: 98 | node-role.kubernetes.io/infra: "" 99 | tolerations: 100 | - key: "node-role.kubernetes.io/master" 101 | operator: "Equal" 102 | value: "" 103 | effect: "NoSchedule" 104 | openshiftStateMetrics: 105 | nodeSelector: 106 | node-role.kubernetes.io/infra: "" 107 | tolerations: 108 | - key: "node-role.kubernetes.io/master" 109 | operator: "Equal" 110 | value: "" 111 | effect: "NoSchedule" 112 | thanosQuerier: 113 | nodeSelector: 114 | node-role.kubernetes.io/infra: "" 115 | tolerations: 116 | - key: "node-role.kubernetes.io/master" 117 | operator: "Equal" 118 | value: "" 119 | effect: "NoSchedule" 120 | ``` 121 | # Work In Progress from here down: 122 | 123 | ## Designate selected Worker nodes as Infrastructure nodes 124 | 125 | for i in 0 1 2 126 | do 127 | oc label nodes okd4-infra-${i}.${LAB_DOMAIN} node-role.kubernetes.io/infra="" 128 | oc adm taint nodes okd4-infra-${i}.${LAB_DOMAIN} infra=infraNode:NoSchedule 129 | oc adm taint nodes okd4-infra-${i}.${LAB_DOMAIN} 
infra=infraNode:NoExecute 130 | done 131 | 132 | ## Move Workloads to the new Infra nodes 133 | 134 | * IngressController: `oc patch -n openshift-ingress-operator ingresscontroller default --patch '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"node-role.kubernetes.io/infra":""}},"tolerations":[{"key":"infra","value":"infraNode","effect":"NoSchedule"},{"key":"infra","value":"infraNode","effect":"NoExecute"}]}}}' --type=merge` 135 | 136 | * ImageRegistry: `oc patch configs.imageregistry.operator.openshift.io cluster --patch '{"spec":{"nodeSelector":{"node-role.kubernetes.io/infra":""},"tolerations":[{"key":"infra","value":"infraNode","effect":"NoSchedule"},{"key":"infra","value":"infraNode","effect":"NoExecute"}]}}' --type=merge` 137 | 138 | * Cluster Monitoring: 139 | 140 | ```yaml 141 | apiVersion: v1 142 | kind: ConfigMap 143 | metadata: 144 | name: cluster-monitoring-config 145 | namespace: openshift-monitoring 146 | data: 147 | config.yaml: | 148 | prometheusOperator: 149 | nodeSelector: 150 | node-role.kubernetes.io/infra: "" 151 | tolerations: 152 | - key: "infra" 153 | operator: "Equal" 154 | value: "infraNode" 155 | effect: "NoSchedule" 156 | - key: "infra" 157 | operator: "Equal" 158 | value: "infraNode" 159 | effect: "NoExecute" 160 | prometheusK8s: 161 | nodeSelector: 162 | node-role.kubernetes.io/infra: "" 163 | tolerations: 164 | - key: "infra" 165 | operator: "Equal" 166 | value: "infraNode" 167 | effect: "NoSchedule" 168 | - key: "infra" 169 | operator: "Equal" 170 | value: "infraNode" 171 | effect: "NoExecute" 172 | alertmanagerMain: 173 | nodeSelector: 174 | node-role.kubernetes.io/infra: "" 175 | tolerations: 176 | - key: "infra" 177 | operator: "Equal" 178 | value: "infraNode" 179 | effect: "NoSchedule" 180 | - key: "infra" 181 | operator: "Equal" 182 | value: "infraNode" 183 | effect: "NoExecute" 184 | kubeStateMetrics: 185 | nodeSelector: 186 | node-role.kubernetes.io/infra: "" 187 | tolerations: 188 | - key: "infra" 189 | operator: "Equal" 190 | value: "infraNode" 191 | effect: "NoSchedule" 192 | - key: "infra" 193 | operator: "Equal" 194 | value: "infraNode" 195 | effect: "NoExecute" 196 | grafana: 197 | nodeSelector: 198 | node-role.kubernetes.io/infra: "" 199 | tolerations: 200 | - key: "infra" 201 | operator: "Equal" 202 | value: "infraNode" 203 | effect: "NoSchedule" 204 | - key: "infra" 205 | operator: "Equal" 206 | value: "infraNode" 207 | effect: "NoExecute" 208 | telemeterClient: 209 | nodeSelector: 210 | node-role.kubernetes.io/infra: "" 211 | tolerations: 212 | - key: "infra" 213 | operator: "Equal" 214 | value: "infraNode" 215 | effect: "NoSchedule" 216 | - key: "infra" 217 | operator: "Equal" 218 | value: "infraNode" 219 | effect: "NoExecute" 220 | k8sPrometheusAdapter: 221 | nodeSelector: 222 | node-role.kubernetes.io/infra: "" 223 | tolerations: 224 | - key: "infra" 225 | operator: "Equal" 226 | value: "infraNode" 227 | effect: "NoSchedule" 228 | - key: "infra" 229 | operator: "Equal" 230 | value: "infraNode" 231 | effect: "NoExecute" 232 | openshiftStateMetrics: 233 | nodeSelector: 234 | node-role.kubernetes.io/infra: "" 235 | tolerations: 236 | - key: "infra" 237 | operator: "Equal" 238 | value: "infraNode" 239 | effect: "NoSchedule" 240 | - key: "infra" 241 | operator: "Equal" 242 | value: "infraNode" 243 | effect: "NoExecute" 244 | thanosQuerier: 245 | nodeSelector: 246 | node-role.kubernetes.io/infra: "" 247 | tolerations: 248 | - key: "infra" 249 | operator: "Equal" 250 | value: "infraNode" 251 | effect: "NoSchedule" 252 | - key: 
"infra" 253 | operator: "Equal" 254 | value: "infraNode" 255 | effect: "NoExecute" 256 | ``` 257 | 258 | `oc apply -f cluster-monitoring-config.yaml -n openshift-monitoring` 259 | -------------------------------------------------------------------------------- /docs/pages/Bastion.md: -------------------------------------------------------------------------------- 1 | ## Setting up the Bastion host 2 | 3 | The bastion host is generically what I call the system that hosts utilities in support of the rest of the lab. It is also configured for password-less SSH into the rest of the lab. 4 | 5 | The bastion host serves: 6 | 7 | * Nginx for hosting RPMs and install files for KVM hosts and guests. 8 | * A DNS server for the lab ecosystem. 9 | * Sonatype Nexus for the OKD registry mirror, Maven Artifacts, and Container Images. 10 | 11 | I recommend using a bare-metal host for your Bastion. It will also run the load-balancer VM and Bootstrap VM for your cluster. I am using a [NUC8i3BEK](https://ark.intel.com/content/www/us/en/ark/products/126149/intel-nuc-kit-nuc8i3bek.html) with 32GB of RAM for my Bastion host. This little box with 32GB of RAM is perfect for this purpose, and also very portable for throwing in a bag to take my dev environment with me. My OpenShift build environment is also installed on the Bastion host. 12 | 13 | You need to start with a minimal CentOS 8 install. (__This tutorial assumes that you are comfortable installing a Linux OS.__) 14 | 15 | Download the minimal install ISO from: http://isoredirect.centos.org/centos/8/isos/x86_64/ 16 | 17 | I use [balenaEtcher](https://www.balena.io/etcher/) to create a bootable USB key from a CentOS ISO. 18 | 19 | You will have to attach monitor, mouse, and keyboard to your NUC for the install. After the install, these machines will be headless. So, no need for a complicated KVM setup... The other, older meaning of KVM... not confusing at all. 20 | 21 | ### Install CentOS: (Choose a Minimal Install) 22 | 23 | * Network: 24 | * Configure the network interface with a fixed IP address, `10.11.11.10` if you are following this guide exactly. 25 | * Set the system hostname to `bastion` 26 | * Storage: 27 | * Allocate 100GB for the `/` filesystem 28 | * Do not create a `/home` filesystem (no users on this system) 29 | * Allocate the remaining disk space for the VM guest filesystem 30 | * I put my KVM guests in `/VirtualMachines` 31 | 32 | After the installation completes, ensure that you can ssh to your host. 33 | 34 | ssh root@10.11.11.10 35 | 36 | Create an SSH key pair on your workstation, if you don't already have one: 37 | 38 | ssh-keygen # Take all the defaults 39 | 40 | Enable password-less SSH: 41 | 42 | ssh-copy-id root@10.11.11.10 43 | 44 | Shutdown the host and disconnect the keyboard, mouse, and display. Your host is now headless. 45 | 46 | ### __Power the host back on, log in via SSH, and continue the Bastion host set up.__ 47 | 48 | Install some added packages: 49 | 50 | 1. Install the packages that we are going to need. 51 | 52 | ```bash 53 | dnf -y module install virt 54 | dnf -y install wget gcc git net-tools bind bind-utils bash-completion rsync ipmitool python3-pip yum-utils libguestfs-tools virt-install createrepo java-11-openjdk.x86_64 epel-release ipxe-bootimgs python36-devel libvirt-devel httpd-tools 55 | ``` 56 | 57 | 1. 
Install Virtual BMC:
58 | 
59 | ```bash
60 | pip3.6 install --upgrade pip
61 | pip install setuptools_rust wheel virtualbmc
62 | ```
63 | 
64 | Set up VBMC as a systemd controlled service. A minimal unit file like the following will do the job, assuming that pip placed `vbmcd` in `/usr/local/bin` (verify with `which vbmcd` and adjust `ExecStart` if needed):
65 | 
66 | ```bash
67 | cat > /etc/systemd/system/vbmcd.service <<EOF
68 | [Unit]
69 | Description=Virtual BMC daemon
70 | After=network.target
71 | 
72 | [Service]
73 | # Assumption: pip installed vbmcd here. Check with: which vbmcd
74 | ExecStart=/usr/local/bin/vbmcd --foreground
75 | Restart=on-failure
76 | 
77 | [Install]
78 | WantedBy=multi-user.target
79 | EOF
80 | 
81 | systemctl daemon-reload
82 | systemctl enable vbmcd --now
83 | ```
101 | 
102 | 1. Now is a good time to update and reboot the bastion host:
103 | 
104 | ```bash
105 | dnf -y update
106 | shutdown -r now
107 | ```
108 | 
109 | ### Log back into the host
110 | 
111 | Clone this project:
112 | 
113 | git clone https://github.com/cgruver/okd4-upi-lab-setup
114 | cd okd4-upi-lab-setup
115 | 
116 | Copy the utility scripts that I have prepared for you:
117 | 
118 | mkdir -p ~/bin/lab_bin
119 | cp ./Provisioning/bin/*.sh ~/bin/lab_bin
120 | chmod 700 ~/bin/lab_bin/*.sh
121 | 
122 | Retrieve the `butane` tool, (formerly known as `fcct`), which will be used to manipulate ignition files:
123 | 
124 | wget -O ~/bin/lab_bin/butane https://github.com/coreos/butane/releases/download/v0.7.0/butane-x86_64-unknown-linux-gnu
125 | chmod 700 ~/bin/lab_bin/butane
126 | 
127 | Next, we need to set up some environment variables that we will use to set up the rest of the lab. You need to make some decisions at this point, fill in the following information, and then edit `~/bin/lab_bin/setLabEnv.sh` accordingly:
128 | 
129 | | Variable | Example Value | Description |
130 | | --- | --- | --- |
131 | | `LAB_DOMAIN` | `your.domain.org` | The domain that you want for your lab. This will be part of your DNS setup |
132 | | `BASTION_HOST` | `10.11.11.10` | The IP address of your bastion host. |
133 | | `INSTALL_HOST` | `10.11.11.10` | The IP address of your Nginx Server, likely your bastion host. |
134 | | `PXE_HOST` | `10.11.11.10` | The IP address of your iPXE server, either your OpenWRT router, or your bastion host. |
135 | | `LAB_NAMESERVER` | `10.11.11.10` | The IP address of your Name Server, likely your bastion host. |
136 | | `LAB_GATEWAY` | `10.11.11.1` | The IP address of your router |
137 | | `LAB_NETMASK` | `255.255.255.0` | The netmask of your router |
138 | | `INSTALL_ROOT` | `/usr/share/nginx/html/install` | The directory that will hold CentOS install images |
139 | | `REPO_PATH` | `/usr/share/nginx/html/repos` | The directory on your Nginx server that will hold an RPM repository mirror |
140 | | `OKD4_LAB_PATH` | `~/okd4-lab` | The path from which we will build our OKD4 cluster |
141 | | `OKD_REGISTRY` | `registry.svc.ci.openshift.org/origin/release` | This is where we will get our OKD 4 images from to populate our local mirror |
142 | | `LOCAL_REGISTRY` | `nexus.${LAB_DOMAIN}:5001` | The URL that we will use for our local mirror of the OKD registry images |
143 | | `LOCAL_REPOSITORY` | `origin` | The repository where the local OKD image mirror will be pushed |
144 | | `LOCAL_SECRET_JSON` | `${OKD4_LAB_PATH}/pull_secret.json` | The path to the pull secret that we will need for mirroring OKD images |
145 | 
146 | After you have edited `~/bin/lab_bin/setLabEnv.sh` to reflect the values above, configure bash to execute this script on login:
147 | 
148 | echo ". /root/bin/lab_bin/setLabEnv.sh" >> ~/.bashrc
149 | 
150 | Log out, then log back in to ensure that the environment is set correctly.
151 | 
152 | ```bash
153 | exit
154 | ssh root@10.11.11.10
155 | ```
156 | 
157 | Now, enable this host to be a time server for the rest of your lab: (adjust the network value if you are using a different IP range)
158 | 
159 | echo "allow 10.11.11.0/24" >> /etc/chrony.conf
160 | 
161 | Create a network bridge that will be used by our HA Proxy server and the OKD bootstrap node:
162 | 
163 | 1. You need to identify the NIC that you configured when you installed this host. It will be something like `eno1`, or `enp108s0u1`
164 | 
165 | ip addr
166 | 
167 | You will see output like:
168 | 
169 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
170 |     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
171 |     inet 127.0.0.1/8 scope host lo
172 |        valid_lft forever preferred_lft forever
173 |     inet6 ::1/128 scope host
174 |        valid_lft forever preferred_lft forever
175 | 
176 | ....
177 | 
178 | 15: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
179 |     link/ether 1c:69:7a:03:21:e9 brd ff:ff:ff:ff:ff:ff
180 |     inet 10.11.11.10/24 brd 10.11.11.255 scope global noprefixroute eno1
181 |        valid_lft forever preferred_lft forever
182 |     inet6 fe80::1e69:7aff:fe03:21e9/64 scope link
183 |        valid_lft forever preferred_lft forever
184 | 
185 | Somewhere in the output will be the interface that you configured with your bastion IP address. Find it and set a variable with that value:
186 | 
187 | PRIMARY_NIC="eno1"
188 | 
189 | 1. Create a network bridge device named `br0`
190 | 
191 | nmcli connection add type bridge ifname br0 con-name br0 ipv4.method manual ipv4.address "${BASTION_HOST}/24" ipv4.gateway "${LAB_GATEWAY}" ipv4.dns "${LAB_NAMESERVER}" ipv4.dns-search "${LAB_DOMAIN}" ipv4.never-default no connection.autoconnect yes bridge.stp no ipv6.method ignore
192 | 
193 | 1. Create a slave device for your primary NIC
194 | 
195 | nmcli con add type ethernet con-name br0-slave-1 ifname ${PRIMARY_NIC} master br0
196 | 
197 | 1. Delete the configuration of the primary NIC
198 | 
199 | nmcli con del ${PRIMARY_NIC}
200 | 
201 | 1. Put it back disabled
202 | 
203 | nmcli con add type ethernet con-name ${PRIMARY_NIC} ifname ${PRIMARY_NIC} connection.autoconnect no ipv4.method disabled ipv6.method ignore
204 | 
205 | __For the rest of this setup, unless otherwise specified, it is assumed that you are working from the Bastion Host. You will need the environment variables that we just set up for some of the commands that you will be executing.__
206 | 
207 | Now we are ready to set up DNS: Go to [DNS Setup](DNS_Config.md)
208 | 
--------------------------------------------------------------------------------
/docs/pages/Router.md:
--------------------------------------------------------------------------------
1 | # Lab Router
2 | 
3 | ## Setting up DHCP, DNS, and iPXE booting
4 | 
5 | Your DHCP server needs to be able to direct PXE Boot clients to a TFTP server. This is normally done by configuring a couple of parameters in your DHCP server, which will look something like:
6 | 
7 | ```bash
8 | next-server = 10.11.11.10   # The IP address of your TFTP server
9 | filename = "ipxe.efi"
10 | ```
11 | 
12 | Unfortunately, most home routers don't support the configuration of those parameters. At this point you have an option. You can either set up TFTP and DHCP on your bastion host, or you can use an OpenWRT based router. I have included instructions for setting up the GL.iNET GL-MV1000, GL-MV1000W, or GL-AR750S-Ext Travel Router.
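For reference, OpenWRT's dnsmasq expresses those same two DHCP parameters with a single `dhcp-boot` option. A minimal sketch, assuming the router itself serves TFTP at `10.11.11.1` (the `uci` commands later on this page generate the equivalent configuration for you):

```bash
# dnsmasq form of next-server + filename:
# dhcp-boot=<boot file>,<server name>,<server address>
dhcp-boot=ipxe.efi,pxe,10.11.11.1
```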
13 | 
14 | If you are configuring PXE on the bastion host, then continue on to: [Set up Bastion Host](Bastion.md)
15 | 
16 | If you are using the GL-AR750S-Ext, you will need a Micro SD card that is formatted with an EXT file-system. The GL-MV1000 and GL-MV1000W have on-board storage that is sufficient for this configuration.
17 | 
18 | __GL-AR750S-Ext:__ From a linux host, insert the micro SD card, and run the following:
19 | 
20 | ```bash
21 | mkfs.ext4 /dev/sdc1
22 | ```
23 | 
24 | Insert the SD card into the router. It will mount at `/mnt/sda1`, or `/mnt/sda` if you did not create a partition, but formatted the whole card.
25 | 
26 | You will need to enable root ssh access to your router. The best way to do this is by adding an SSH key. __Don't allow password access over ssh.__ We already created an SSH key for our Bastion host, so we'll use that. If you want to enable SSH access from your workstation as well, then follow the same instructions to create/add that key as well. We will also set the router IP address to `10.11.11.1`
27 | 
28 | 1. Login to your router with a browser: `https://<your router's current IP address>`
29 | 1. Expand the `MORE SETTINGS` menu on the left, and select `LAN IP`
30 | 1. Fill in the following:
31 | 
32 |    | Setting | Value |
33 |    |---|---|
34 |    | LAN IP | 10.11.11.1 |
35 |    | Start IP Address | 10.11.11.11 |
36 |    | End IP Address | 10.11.11.29 |
37 | 
38 | 1. Click `Apply`
39 | 1. Now, select the `Advanced` option from the left menu bar.
40 | 1. Login to the Advanced Administration console
41 | 1. Expand the `System` menu at the top of the screen, and select `Administration`
42 | 1. Select the `SSH Access` tab.
43 | 1. Ensure that the Dropbear Instance `Interface` is set to `unspecified` and that the `Port` is `22`
44 | 1. Ensure that the following are __NOT__ checked:
45 |    * `Password authentication`
46 |    * `Allow root logins with password`
47 |    * `Gateway ports`
48 | 1. Click `Save`
49 | 1. Select the `SSH-Keys` tab
50 | 1. Paste your __*public*__ SSH key into the `SSH-Keys` section at the bottom of the page and select `Add Key`
51 | 
52 |    Your public SSH key is likely in the file `$HOME/.ssh/id_rsa.pub`
53 | 1. Repeat with additional keys.
54 | 1. Click `Save & Apply`
55 | 
56 | Now that we have enabled SSH access to the router, we will login and complete our setup from the command-line.
57 | 
58 | ```bash
59 | ssh root@10.11.11.1
60 | ```
61 | 
62 | If you are using the `GL-AR750S-Ext`, note that I create a symbolic link from the SD card to /data so that the configuration matches the configuration of the `GL-MV1000`. Since I have both, this keeps things consistent.
63 | 64 | ```bash 65 | ln -s /mnt/sda1 /data # This is not necessary for the GL-MV1000 66 | ``` 67 | 68 | Now we will enable TFTP and iPXE: 69 | 70 | ```bash 71 | mkdir -p /data/tftpboot/ipxe 72 | mkdir /data/tftpboot/networkboot 73 | 74 | uci add_list dhcp.lan.dhcp_option="6,10.11.11.10,8.8.8.8,8.8.4.4" 75 | uci set dhcp.lan.leasetime="5m" 76 | uci set dhcp.@dnsmasq[0].enable_tftp=1 77 | uci set dhcp.@dnsmasq[0].tftp_root=/data/tftpboot 78 | uci set dhcp.efi64_boot_1=match 79 | uci set dhcp.efi64_boot_1.networkid='set:efi64' 80 | uci set dhcp.efi64_boot_1.match='60,PXEClient:Arch:00007' 81 | uci set dhcp.efi64_boot_2=match 82 | uci set dhcp.efi64_boot_2.networkid='set:efi64' 83 | uci set dhcp.efi64_boot_2.match='60,PXEClient:Arch:00009' 84 | uci set dhcp.ipxe_boot=userclass 85 | uci set dhcp.ipxe_boot.networkid='set:ipxe' 86 | uci set dhcp.ipxe_boot.userclass='iPXE' 87 | uci set dhcp.uefi=boot 88 | uci set dhcp.uefi.filename='tag:efi64,tag:!ipxe,ipxe.efi' 89 | uci set dhcp.uefi.serveraddress='10.11.11.1' 90 | uci set dhcp.uefi.servername='pxe' 91 | uci set dhcp.uefi.force='1' 92 | uci set dhcp.ipxe=boot 93 | uci set dhcp.ipxe.filename='tag:ipxe,boot.ipxe' 94 | uci set dhcp.ipxe.serveraddress='10.11.11.1' 95 | uci set dhcp.ipxe.servername='pxe' 96 | uci set dhcp.ipxe.force='1' 97 | uci commit dhcp 98 | /etc/init.d/dnsmasq restart 99 | ``` 100 | 101 | That's a lot of `uci` commands that we just did. I won't drain the list, but I will explain at a high level, because some of this is not well documented by OpenWRT, and is the result of a LOT of Googling on my part. 102 | 103 | * `uci add_list dhcp.lan.dhcp_option="6,10.11.11.10,8.8.8.8,8.8.4.4"` This is setting DHCP option 6, (DNS Servers), it represents the list of DNS servers that the DHCP client should use for name resolution. 104 | * `uci set dhcp.efi64_boot_1=match` This series of commands creates a DHCP match for DHCP option 60 in the request. If the vendor class identifier is `7` or `9`, then the match variable, (`networkid`), is set to `efi64` which is arbitrary, but matchable in the `boot` sections which come after. 105 | * `uci set dhcp.ipxe_boot=userclass` This series of commands looks for a userclass of `iPXE` in the DHCP request and sets the `networkid` variable to `ipxe`. 106 | * `uci set dhcp.uefi=boot` This series of commands looks for a `networkid` match against either `efi64` or `ipxe` and sends the appropriate PXE response back to the client. 107 | 108 | With the router configured, it's now time to set up the files for iPXE. 
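Before moving on, it doesn't hurt to sanity-check what `uci` actually wrote. A quick way to inspect the generated configuration:

```bash
# Dump the whole dhcp config, including the match/boot sections created above:
uci show dhcp

# Confirm dnsmasq picked up the TFTP root:
uci get dhcp.@dnsmasq[0].tftp_root
```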
109 | 
110 | First, let's install some additional packages on our router:
111 | 
112 | ```bash
113 | opkg update
114 | opkg install wget git-http ca-bundle haproxy bind-server bind-tools bash
115 | ```
116 | 
117 | Download the UEFI iPXE boot image:
118 | 
119 | ```bash
120 | wget http://boot.ipxe.org/ipxe.efi -O /data/tftpboot/ipxe.efi
121 | ```
122 | 
123 | Create this initial boot file:
124 | 
125 | ```bash
126 | cat << EOF > /data/tftpboot/boot.ipxe
127 | #!ipxe
128 | 
129 | echo ========================================================
130 | echo UUID: \${uuid}
131 | echo Manufacturer: \${manufacturer}
132 | echo Product name: \${product}
133 | echo Hostname: \${hostname}
134 | echo
135 | echo MAC address: \${net0/mac}
136 | echo IP address: \${net0/ip}
137 | echo IPv6 address: \${net0.ndp.0/ip6:ipv6}
138 | echo Netmask: \${net0/netmask}
139 | echo
140 | echo Gateway: \${gateway}
141 | echo DNS: \${dns}
142 | echo IPv6 DNS: \${dns6}
143 | echo Domain: \${domain}
144 | echo ========================================================
145 | 
146 | chain --replace --autofree ipxe/\${mac:hexhyp}.ipxe
147 | EOF
148 | ```
149 | 
150 | Now copy the necessary files to the router:
151 | 
152 | ```bash
153 | mkdir -p /data/install/centos
154 | wget -m -np -nH --cut-dirs=5 -P /data/install/centos http://mirror.centos.org/centos/8-stream/BaseOS/x86_64/os/
155 | ```
156 | 
157 | ```bash
158 | cp /data/install/centos/isolinux/vmlinuz /data/tftpboot/networkboot
159 | cp /data/install/centos/isolinux/initrd.img /data/tftpboot/networkboot
160 | ```
161 | 
162 | ## DNS Configuration
163 | 
164 | ```bash
165 | git clone https://github.com/cgruver/okd4-upi-lab-setup
166 | cd okd4-upi-lab-setup
167 | mkdir -p /etc/profile.d
168 | 
169 | echo ". /root/bin/lab_bin/setLabEnv.sh" >> /etc/profile.d/lab.sh
170 | ```
171 | 
172 | Log out, then back in. Check that the env settings are as expected.
173 | 
174 | ```bash
175 | LAB_NET=$(ip -br addr show dev br-lan label br-lan | cut -d" " -f1)
176 | export LAB_DOMAIN=your.lab.domain
177 | ```
178 | 
179 | This tutorial includes pre-configured files for you to modify for your specific installation. These files will go into your `/etc/bind` directory. You will need to modify them for your specific setup.
180 | 
181 | ```bash
182 | /etc/bind/named.conf
183 | /etc/bind/named.conf.local
184 | /etc/bind/zones/db.10.11.11
185 | /etc/bind/zones/db.your.domain.org
186 | /etc/bind/zones/db.sinkhole
187 | ```
188 | 
189 | __If you set up your lab router on the 10.11.11/24 network, then you can use the example DNS files as they are for this exercise.__
190 | 
191 | Do the following, from the root of this project:
192 | 
193 | ```bash
194 | mv /etc/bind/named.conf /etc/bind/named.conf.orig
195 | cp ./DNS/named.conf /etc/bind
196 | cp -r ./DNS/named/* /etc/bind
197 | 
198 | mv /etc/bind/zones/db.your.domain.org /etc/bind/zones/db.${LAB_DOMAIN} # Don't substitute your.domain.org in this line
199 | sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/bind/named.conf.local
200 | sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/bind/zones/db.${LAB_DOMAIN}
201 | sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/bind/zones/db.10.11.11
202 | sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" /etc/bind/zones/db.sinkhole
203 | ```
204 | 
210 | Now let's talk about this configuration, starting with the A records, (forward lookup zone). If you did not use the 10.11.11/24 network as illustrated, then you will have to edit the files to reflect the appropriate A and PTR records for your setup.
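As a refresher, a matched forward and reverse pair for a single host looks like the following sketch, using the bastion host at `10.11.11.10` as the example (record names are relative to their zones):

```bash
# In zones/db.your.domain.org (forward zone):
bastion   IN A     10.11.11.10

# In zones/db.10.11.11 (reverse zone):
10        IN PTR   bastion.your.domain.org.
```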
211 | 
212 | In the example file, there are some entries to take note of:
213 | 
214 | 1. The KVM hosts are named `kvm-host01`, `kvm-host02`, etc... Modify this to reflect the number of KVM hosts that your lab setup will have. The example allows for three hosts.
215 | 
216 | 1. The Bastion Host is `bastion`.
217 | 
218 | 1. The Sonatype Nexus server gets its own alias A record, `nexus.your.domain.org`. This is not strictly necessary, but I find it useful. For your lab, make sure that this A record reflects the IP address of the server where you have installed Nexus. In this example, it is installed on the bastion host.
219 | 
220 | 1. These example files contain references for a full OpenShift cluster with an haproxy load balancer. The OKD cluster has three each of master and worker (compute) nodes. In this tutorial, you will build a minimal cluster with three master nodes which are also schedulable as workers.
221 | 
222 | __Remove or add entries to these files as needed for your setup.__
223 | 
224 | 1. There is one wildcard record that OKD needs: __`okd4` is the name of the cluster.__
225 | 
226 |     *.apps.okd4.your.domain.org
227 | 
228 | The "apps" record will be for all of the applications that you deploy into your OKD cluster.
229 | 
230 | This wildcard A record needs to point to the entry point for your OKD cluster. If you build a cluster with three master nodes like we are doing here, you will need a load balancer in front of the cluster. In this case, your wildcard A records will point to the IP address of your load balancer. Never fear, I will show you how to deploy an HA-Proxy load balancer.
231 | 
232 | 1. There are two A records for the Kubernetes API, internal & external. In this case, the same load balancer is handling both. So, they both point to the IP address of the load balancer. __Again, `okd4` is the name of the cluster.__
233 | 
234 | ```bash
235 | api.okd4.your.domain.org.     IN A 10.11.11.50
236 | api-int.okd4.your.domain.org. IN A 10.11.11.50
237 | ```
238 | 
239 | 1. There are three SRV records for the etcd hosts.
240 | 
241 | ```bash
242 | _etcd-server-ssl._tcp.okd4.your.domain.org 86400 IN SRV 0 10 2380 etcd-0.okd4.your.domain.org.
243 | _etcd-server-ssl._tcp.okd4.your.domain.org 86400 IN SRV 0 10 2380 etcd-1.okd4.your.domain.org.
244 | _etcd-server-ssl._tcp.okd4.your.domain.org 86400 IN SRV 0 10 2380 etcd-2.okd4.your.domain.org.
245 | ```
246 | 
247 | 1. The db.sinkhole file is used to block DNS requests to `registry.svc.ci.openshift.org`. This forces the OKD installation to use the Nexus mirror that we will create. This file is modified by the utility scripts that I provided to enable and disable access to `registry.svc.ci.openshift.org` accordingly.
248 | 
249 | When you have completed all of your configuration changes, you can test the configuration with the following command:
250 | 
251 | ```bash
252 | named-checkconf
253 | ```
254 | 
255 | If the output is clean, then you are ready to fire it up!
256 | 
257 | ### Starting DNS
258 | 
259 | Now that we are done with the configuration, let's enable DNS and start it up. Note that OpenWRT uses procd rather than systemd, so `named` is managed with its init script, and the default OpenWRT firewall already accepts DNS queries from the LAN zone:
260 | 
261 | ```bash
262 | /etc/init.d/named enable
263 | /etc/init.d/named start
264 | ```
265 | 
266 | 
267 | You can now test DNS resolution. Try some `ping` or `dig` commands.
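For example, assuming the example zone files and a DNS server at `10.11.11.10` (substitute wherever your `named` is actually running):

```bash
# Forward lookup:
dig @10.11.11.10 bastion.your.domain.org +short

# Reverse lookup:
dig @10.11.11.10 -x 10.11.11.10 +short

# The wildcard apps record:
dig @10.11.11.10 test.apps.okd4.your.domain.org +short
```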
268 | 
269 | ### __Hugely Helpful Tip:__
270 | 
271 | __If you are using a MacBook for your workstation, you can enable DNS resolution to your lab by creating a file in the `/etc/resolver` directory on your Mac.__
272 | 
273 | ```bash
274 | sudo bash
275 | 
276 | vi /etc/resolver/your.domain.com
277 | ```
278 | 
279 | Name the file `your.domain.com` after the domain that you created for your lab. Enter something like this example, modified for your DNS server's IP:
280 | 
281 | ```bash
282 | nameserver 10.11.11.10
283 | ```
284 | 
285 | Save the file.
286 | 
287 | Your MacBook should now query your new DNS server for entries in your new domain. __Note:__ If your MacBook is on a different network and is routed to your Lab network, then the `acl` entry in your DNS configuration must allow your external network to query. Otherwise, you will bang your head wondering why it does not work... __The ACL is very powerful. Use it. Just like you are using firewalld. Right? I know you did not disable it when you installed your host...__
288 | 
289 | __Your router is now ready to PXE boot hosts.__
290 | 
291 | ## Set up Bastion Host
292 | 
293 | Now, let's try out our new configuration by using PXE to set up our bastion host.
294 | 
295 | We are going to create a kickstart file for the bastion host.
296 | 
297 | Continue on to set up your Nexus: [Sonatype Nexus Setup](Nexus_Config.md)
298 | 
--------------------------------------------------------------------------------
/docs/pages/DeployOKD.md:
--------------------------------------------------------------------------------
1 | ## Prepare to Install OKD 4
2 | 
3 | I have provided a set of utility scripts to automate a lot of the tasks associated with deploying and tearing down an OKD cluster. In your `~/bin/lab_bin` directory you will see the following:
4 | 
5 | | | |
6 | |-|-|
7 | | `UnDeployLabGuest.sh` | Destroys a guest VM and supporting infrastructure |
8 | | `DeployOkdNodes.sh` | Creates the HA-Proxy, Bootstrap, Master, and Worker VMs from an inventory file, (described below) |
9 | | `UnDeployOkdNodes.sh` | Destroys the OKD cluster and all supporting infrastructure |
10 | | `PowerOnVms.sh` | Helper script that uses IPMI to power on the VMs listed in an inventory file |
11 | 
12 | 1. First, let's prepare to deploy the VMs for our OKD cluster by preparing the Cluster VM inventory file:
13 | 
14 | This is not an ansible inventory like you might have encountered with OKD 3.11. This is something I made up for my lab that allows me to quickly create, manage, and destroy virtual machines.
15 | 
16 | I have provided an example that will create the virtual machines for this deployment. It is located at `./Provisioning/guest_inventory/okd4_lab`. The file is structured in such a way that it can be parsed by the utility scripts provided in this project. 
The columns in the comma delimited file are used for the following purposes:
17 | 
18 | | Column | Name | Description |
19 | |-|-|-|
20 | | 1 | KVM_HOST_NODE | The hypervisor host that this VM will be provisioned on |
21 | | 2 | GUEST_HOSTNAME | The hostname of this VM, must be in DNS with `A` and `PTR` records |
22 | | 3 | MEMORY | The amount of RAM in MB to allocate to this VM |
23 | | 4 | CPU | The number of vCPUs to allocate to this VM |
24 | | 5 | ROOT_VOL | The size in GB of the first HDD to provision |
25 | | 6 | DATA_VOL | The size in GB of the second HDD to provision; `0` for none |
26 | | 7 | NUM_OF_NICS | The number of NICs to provision for this VM; `1` or `2` |
27 | | 8 | ROLE | The OKD role that this VM will play: `ha-proxy`, `bootstrap`, `master`, or `worker` |
28 | | 9 | VBMC_PORT | The port that VBMC will bind to for IPMI control of this VM |
29 | 
30 | It looks like this: (The entries for the three worker nodes are commented out. If you have two KVM hosts with 64GB RAM each, then you can uncomment those lines and have a full 6-node cluster.)
31 | 
32 | ```bash
33 | bastion,okd4-lb01,4096,1,50,0,1,ha-proxy,2668
34 | bastion,okd4-bootstrap,16384,4,50,0,1,bootstrap,6229
35 | kvm-host01,okd4-master-0,20480,4,100,0,1,master,6230
36 | kvm-host01,okd4-master-1,20480,4,100,0,1,master,6231
37 | kvm-host01,okd4-master-2,20480,4,100,0,1,master,6232
38 | # kvm-host02,okd4-worker-0,20480,4,100,0,1,worker,6233
39 | # kvm-host02,okd4-worker-1,20480,4,100,0,1,worker,6234
40 | # kvm-host02,okd4-worker-2,20480,4,100,0,1,worker,6235
41 | ```
42 | 
43 | Copy this file into place, and modify it if necessary:
44 | 
45 | ```bash
46 | mkdir -p ${OKD4_LAB_PATH}/guest-inventory
47 | cp ./Provisioning/guest_inventory/okd4_lab ${OKD4_LAB_PATH}/guest-inventory
48 | ```
49 | 
50 | 1. Retrieve the `oc` command. We're going to grab an older version of `oc`, but that's OK. We just need it to retrieve the current versions of `oc` and `openshift-install`.
51 | 
52 | ```bash
53 | wget https://github.com/openshift/okd/releases/download/4.5.0-0.okd-2020-07-14-153706-ga/openshift-client-linux-4.5.0-0.okd-2020-07-14-153706-ga.tar.gz
54 | ```
55 | 
56 | 1. Uncompress the archive and move the `oc` executable to your ~/bin directory. Make sure ~/bin is in your path.
57 | 
58 | ```bash
59 | tar -xzf openshift-client-linux-4.5.0-0.okd-2020-07-14-153706-ga.tar.gz
60 | mv oc ~/bin
61 | ```
62 | 
63 | The `DeployOkdNodes.sh` script will pull the correct version of `oc` and `openshift-install` when we run it. It will over-write older versions in `~/bin`.
64 | 
65 | 1. Now, we need a couple of pull secrets.
66 | 
67 | The first one is for quay.io. Since we are installing OKD, we don't need an official pull secret. So, we will use a fake one.
68 | 
69 | 1. Create the pull secret for Nexus. Use a username and password that has write authority to the `origin` repository that we created earlier.
70 | 
71 | ```bash
72 | NEXUS_PWD=$(echo -n "admin:your_admin_password" | base64 -w0)
73 | ```
74 | 
75 | 1. We need to put the pull secret into a JSON file that we will use to mirror the OKD images into our Nexus registry. We'll also need the pull secret for our cluster install.
76 | 
77 | ```bash
78 | cat << EOF > ${OKD4_LAB_PATH}/pull_secret.json
79 | {"auths": {"fake": {"auth": "Zm9vOmJhcgo="},"nexus.${LAB_DOMAIN}:5001": {"auth": "${NEXUS_PWD}"}}}
80 | EOF
81 | ```
82 | 
83 | 1. We need to pull a current version of OKD. So point your browser at `https://origin-release.svc.ci.openshift.org`. 
84 | 
85 | ![OKD Release](images/OKD-Release.png)
86 | 
87 | Select the most recent release that is in a Phase of `Accepted`, and copy the release name into an environment variable:
88 | 
89 | ```bash
90 | export OKD_RELEASE=4.7.0-0.okd-2021-04-24-103438
91 | getOkdCmds.sh
92 | ```
93 | 
94 | 1. The next step is to prepare our install-config.yaml file that `openshift-install` will use to create the `ignition` files for bootstrap, master, and worker nodes.
95 | 
96 | I have prepared a skeleton file for you in this project, `./Provisioning/install-config-upi.yaml`.
97 | 
98 | ```yaml
99 | apiVersion: v1
100 | baseDomain: %%LAB_DOMAIN%%
101 | metadata:
102 |   name: %%CLUSTER_NAME%%
103 | networking:
104 |   networkType: OpenShiftSDN
105 |   clusterNetwork:
106 |   - cidr: 10.100.0.0/14
107 |     hostPrefix: 23
108 |   serviceNetwork:
109 |   - 172.30.0.0/16
110 |   machineNetwork:
111 |   - cidr: 10.11.11.0/24
112 | compute:
113 | - name: worker
114 |   replicas: 0
115 | controlPlane:
116 |   name: master
117 |   replicas: 3
118 | platform:
119 |   none: {}
120 | pullSecret: '%%PULL_SECRET%%'
121 | sshKey: %%SSH_KEY%%
122 | additionalTrustBundle: |
123 | 
124 | imageContentSources:
125 | - mirrors:
126 |   - nexus.%%LAB_DOMAIN%%:5001/origin
127 |   source: registry.svc.ci.openshift.org/origin/%%OKD_VER%%
128 | - mirrors:
129 |   - nexus.%%LAB_DOMAIN%%:5001/origin
130 |   source: registry.svc.ci.openshift.org/origin/release
131 | ```
132 | 
133 | Copy this file to our working directory.
134 | 
135 | ```bash
136 | cp ./Provisioning/install-config-upi.yaml ${OKD4_LAB_PATH}/install-config-upi.yaml
137 | ```
138 | 
139 | Patch in some values:
140 | 
141 | ```bash
142 | sed -i "s|%%LAB_DOMAIN%%|${LAB_DOMAIN}|g" ${OKD4_LAB_PATH}/install-config-upi.yaml
143 | SECRET=$(cat ${OKD4_LAB_PATH}/pull_secret.json)
144 | sed -i "s|%%PULL_SECRET%%|${SECRET}|g" ${OKD4_LAB_PATH}/install-config-upi.yaml
145 | SSH_KEY=$(cat ~/.ssh/id_rsa.pub)
146 | sed -i "s|%%SSH_KEY%%|${SSH_KEY}|g" ${OKD4_LAB_PATH}/install-config-upi.yaml
147 | ```
148 | 
149 | For the last piece, you need to manually paste in a cert. No `sed` magic here for you...
150 | 
151 | Copy the contents of: `/etc/pki/ca-trust/source/anchors/nexus.crt` and paste it into the blank line here in the config file:
152 | 
153 | ```bash
154 | additionalTrustBundle: |
155 | 
156 | imageContentSources:
157 | ```
158 | 
159 | You need to indent every line of the cert with two spaces for the yaml syntax.
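If you'd rather not indent the cert by hand, this one-liner prints it with the two-space indent so you can paste the result straight in:

```bash
# Print nexus.crt with every line indented two spaces for the YAML block:
sed 's/^/  /' /etc/pki/ca-trust/source/anchors/nexus.crt
```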
160 | 161 | Your install-config-upi.yaml file should now look something like: 162 | 163 | ```yaml 164 | apiVersion: v1 165 | baseDomain: your.domain.org 166 | metadata: 167 | name: %%CLUSTER_NAME%% 168 | networking: 169 | networkType: OpenShiftSDN 170 | clusterNetwork: 171 | - cidr: 10.100.0.0/14 172 | hostPrefix: 23 173 | serviceNetwork: 174 | - 172.30.0.0/16 175 | compute: 176 | - name: worker 177 | replicas: 0 178 | controlPlane: 179 | name: master 180 | replicas: 3 181 | platform: 182 | none: {} 183 | pullSecret: '{"auths": {"fake": {"auth": "Zm9vOmJhcgo="},"nexus.oscluster.clgcom.org:5002": {"auth": "YREDACTEDREDACTED=="}}}' 184 | sshKey: ssh-rsa AAAREDACTEDREDACTEDAQAREDACTEDREDACTEDMnvPFqpEoOvZi+YK3L6MIGzVXbgo8SZREDACTEDREDACTEDbNZhieREDACTEDREDACTEDYI/upDR8TUREDACTEDREDACTEDoG1oJ+cRf6Z6gd+LZNE+jscnK/xnAyHfCBdhoyREDACTEDREDACTED9HmLRkbBkv5/2FPpc+bZ2xl9+I1BDr2uREDACTEDREDACTEDG7Ms0vJqrUhwb+o911tOJB3OWkREDACTEDREDACTEDU+1lNcFE44RREDACTEDREDACTEDov8tWSzn root@bastion 185 | additionalTrustBundle: | 186 | -----BEGIN CERTIFICATE----- 187 | MIIFyTREDACTEDREDACTEDm59lk0W1CnMA0GCSqGSIb3DQEBCwUAMHsxCzAJBgNV 188 | BAYTAlREDACTEDREDACTEDVTMREwDwYDVQQIDAhWaXJnaW5pYTEQMA4GA1UEBwwH 189 | A1UECgwGY2xnY29tMREwDwREDACTEDREDACTEDYDVQQLDAhva2Q0LWxhYjEjMCEG 190 | b3NjbHVzdGVyLmNsZ2NvbS5vcmcwHhcNMjAwMzREDACTEDREDACTEDE0MTYxMTQ2 191 | MTQ2WREDACTEDREDACTEDjB7MQswCQYDVQQGEwJVUzERMA8GA1UECAwIVmlyZ2lu 192 | B1JvYW5va2UxDzANBgNVBREDACTEDREDACTEDAoMBmNsZ2NvbTERMA8GA1UECwwI 193 | BgNVBAMMGm5leHVzLm9zY2x1c3Rlci5jbGdjbREDACTEDREDACTED20ub3JnMIIC 194 | REDACTEDREDACTEDAQEFAAOCAg8AMIICCgKCAgEAwwnvZEW+UqsyyWwHS4rlWbcz 195 | hmvMMBXEXqNqSp5sREDACTEDREDACTEDlYrjKIBdLa9isEfgIydtTWZugG1L1iA4 196 | hgdAlW83s8wwKW4bbEd8iDZyUFfzmFSKREDACTEDREDACTEDTrwk9JcH+S3/oGbk 197 | 9iq8oKMiFkz9loYxTu93/p/iGieTWMFGajbAuUPjZsBYgbf9REDACTEDREDACTED 198 | REDACTEDREDACTEDYlFMcpkdlfYwJbJcfqeXAf9Y/QJQbBqRFxJCuXzr/D5Ingg3 199 | HrXXvOr612LWHFvZREDACTEDREDACTEDYj7JRKKPKXIA0NHA29Db0TdVUzDi3uUs 200 | WcDBmIpfZTXfrHG9pcj1CbOsw3vPhD4mREDACTEDREDACTEDCApsGKET4FhnFLkt 201 | yc2vpaut8X3Pjep821pQznT1sR6G1bF1eP84nFhL7qnBdhEwREDACTEDREDACTED 202 | REDACTEDREDACTEDIuOZH60cUhMNpl0uMSYU2BvfVDKQlcGPUh7pDWWhZ+5I1pei 203 | KgWUMBT/j3KAJNgFREDACTEDREDACTEDX43aDvUxyjbDg8FyjBGY1jdS8TnGg3YM 204 | zGP5auSqeyO1yZ2v3nbr9xUoRTVuzPUwREDACTEDREDACTED0SfiaeGPczpNfT8f 205 | 6H0CAwEAAaNQME4wHQYDVR0OBBYEFPAJpXdtNX0bi8dh1QMsREDACTEDREDACTED 206 | REDACTEDREDACTEDIwQYMBaAFPAJpXdtNX0bi8dh1QMsE1URxd8tMAwGA1UdEwQF 207 | hvcNAQELBQADggIBREDACTEDREDACTEDAAx0CX20lQhP6HBNRl7C7IpTEBpds/4E 208 | dHuDuGMILaawZTbbKLMTlGu01Y8uCO/3REDACTEDREDACTEDUVZeX7X9NAw80l4J 209 | kPtLrp169L/09F+qc8c39jb7QaNRWenrNEFFJqoLRakdXM1MREDACTEDREDACTED 210 | REDACTEDREDACTED5CAWBCRgm67NhAJlzYOyqplLs0dPPX+kWdANotCfVxDx1jRM 211 | 8tDL/7kurJA/wSOLREDACTEDREDACTEDDCaNs205/nEAEhrHLr8NHt42/TpmgRlg 212 | fcZ7JFw3gOtsk6Mi3XtS6rxSKpVqUWJ8REDACTEDREDACTED3nafC2IQCmBU2KIZ 213 | 3Oir8xCyVjgf4EY/dQc5GpIxrJ3dV+U2Hna3ZsiCooAdq957REDACTEDREDACTED 214 | REDACTEDREDACTED57krXJy+4z8CdSMa36Pmc115nrN9Ea5C12d6UVnHnN+Kk4cL 215 | Wr9ZZSO3jDiwuzidREDACTEDREDACTEDk/IP3tkLtS0s9gWDdHdHeW0eit+trPib 216 | Oo9fJIxuD246HTQb+51ZfrvyBcbAA/M3REDACTEDREDACTED06B/Uq4CQMjhRwrU 217 | aUEYgiOJjUjLXGJSuDVdCo4J9kpQa5D1bUxcHxTp3R98CasnREDACTEDREDACTED 218 | -----END CERTIFICATE----- 219 | imageContentSources: 220 | - mirrors: 221 | - nexus.your.domain.org:5001/origin 222 | source: %%OKD_SOURCE_1%% 223 | - mirrors: 224 | - nexus.your.domain.org:5001/origin 225 | source: %%OKD_SOURCE_2%% 226 | ``` 227 | 228 | 2. 
Now mirror the OKD images into the local Nexus:
229 | 
230 | ```bash
231 | mirrorOkdRelease.sh
232 | ```
233 | 
234 | The output should look something like:
235 | 
236 | ```
237 | Success
238 | Update image: nexus.your.domain.org:5001/origin:4.5.0-0.okd-2020-08-12-020541
239 | Mirror prefix: nexus.your.domain.org:5001/origin
240 | 
241 | To use the new mirrored repository to install, add the following section to the install-config.yaml:
242 | 
243 | imageContentSources:
244 | - mirrors:
245 |   - nexus.your.domain.org:5001/origin
246 |   source: quay.io/openshift/okd
247 | - mirrors:
248 |   - nexus.your.domain.org:5001/origin
249 |   source: quay.io/openshift/okd-content
250 | 
251 | 
252 | To use the new mirrored repository for upgrades, use the following to create an ImageContentSourcePolicy:
253 | 
254 | apiVersion: operator.openshift.io/v1alpha1
255 | kind: ImageContentSourcePolicy
256 | metadata:
257 |   name: example
258 | spec:
259 |   repositoryDigestMirrors:
260 |   - mirrors:
261 |     - nexus.your.domain.org:5001/origin
262 |     source: quay.io/openshift/okd
263 |   - mirrors:
264 |     - nexus.your.domain.org:5001/origin
265 |     source: quay.io/openshift/okd-content
266 | ```
267 | 
268 | 1. Create a DNS sinkhole for `registry.svc.ci.openshift.org`, `quay.io`, `docker.io`, and `github.com`. This will simulate a datacenter with no internet access.
269 | 
270 | ```bash
271 | Sinkhole.sh -d
272 | ```
273 | 
274 | __Note:__ When you want to restore access to the above domains, execute:
275 | 
276 | ```bash
277 | Sinkhole.sh -c
278 | ```
279 | 
280 | 3. Create the cluster virtual machines and set up for OKD installation:
281 | 
282 | ```bash
283 | DeployOkdNodes.sh -i=${OKD4_LAB_PATH}/guest-inventory/okd4_lab -cn=okd4
284 | ```
285 | 
286 | This script does a whole lot of work for us.
287 | 
288 | 1. It will pull the current versions of `oc` and `openshift-install` based on the value of `${OKD_RELEASE}` that we set previously.
289 | 1. Fills in the OKD version and `%%CLUSTER_NAME%%` in the install-config-upi.yaml file and copies that file to the install directory as install-config.yaml.
290 | 1. Invokes the openshift-install command against our install-config to produce ignition files.
291 | 1. Copies the ignition files into place for FCOS install.
292 | 1. Sets up for a mirrored install by putting `quay.io` and `registry.svc.ci.openshift.org` into a DNS sinkhole.
293 | 1. Creates guest VMs based on the inventory file at `${OKD4_LAB_PATH}/guest-inventory/okd4_lab`.
294 | 1. Creates iPXE boot files for each VM and copies them to the iPXE server, (your router).
295 | 
296 | # We are now ready to fire up our OKD cluster!!!
297 | 
298 | 1. Start the LB and watch the installation. (The VBMC port, `2668` here, comes from the inventory file above.)
299 | 
300 | ```bash
301 | ipmitool -I lanplus -H10.11.11.10 -p2668 -Uadmin -Ppassword chassis power on
302 | virsh console okd4-lb01
303 | ```
304 | 
305 | You should see your HA Proxy VM do an iPXE boot and begin an unattended installation of CentOS 8.
306 | 
307 | 1. Start the bootstrap node
308 | 
309 | ```bash
310 | ipmitool -I lanplus -H10.11.11.10 -p6229 -Uadmin -Ppassword chassis power on
311 | ```
312 | 
313 | 1. Start the cluster master nodes
314 | 
315 | ```bash
316 | for i in 6230 6231 6232
317 | do
318 |   ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on
319 | done
320 | ```
321 | 
322 | 1. 
Start the cluster worker nodes (if you have any)
323 | 
324 | ```bash
325 | for i in 6233 6234 6235
326 | do
327 |   ipmitool -I lanplus -H10.11.11.10 -p${i} -Uadmin -Ppassword chassis power on
328 | done
329 | ```
330 | 
331 | ### Now let's sit back and watch the install:
332 | 
333 | __Note: It is normal to see logs which look like errors while `bootkube` and `kubelet` are waiting for resources to be provisioned.__
334 | 
335 | __Don't be alarmed if you see streams of `connection refused` errors for a minute or two.__ If the errors persist for more than a few minutes, then you might have real issues, but be patient.
336 | 
337 | * To watch a node boot and install:
338 |   * Bootstrap node from the Bastion host:
339 | 
340 | ```bash
341 | virsh console okd4-bootstrap
342 | ```
343 | 
344 |   * Master Node from `kvm-host01`:
345 | 
346 | ```bash
347 | virsh console okd4-master-0
348 | ```
349 | 
350 | * Once a host has installed FCOS:
351 |   * Bootstrap Node:
352 | 
353 | ```bash
354 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-bootstrap "journalctl -b -f -u bootkube.service"
355 | ```
356 | 
357 |   * Master Node:
358 | 
359 | ```bash
360 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-master-0 "journalctl -b -f -u kubelet.service"
361 | ```
362 | 
363 | * Monitor OKD install progress:
364 |   * Bootstrap Progress:
365 | 
366 | ```bash
367 | openshift-install --dir=${OKD4_LAB_PATH}/okd4-install-dir wait-for bootstrap-complete --log-level debug
368 | ```
369 | 
370 |   * When bootstrap is complete, remove the bootstrap node from HA-Proxy:
371 | 
372 | ```bash
373 | ssh root@okd4-lb01 "cat /etc/haproxy/haproxy.cfg | grep -v bootstrap > /etc/haproxy/haproxy.tmp && mv /etc/haproxy/haproxy.tmp /etc/haproxy/haproxy.cfg && systemctl restart haproxy.service"
374 | ```
375 | 
376 | Destroy the Bootstrap Node on the Bastion host:
377 | 
378 | ```bash
379 | DestroyBootstrap.sh
380 | ```
381 | 
382 | * Install Progress:
383 | 
384 | ```bash
385 | openshift-install --dir=${OKD4_LAB_PATH}/okd4-install-dir wait-for install-complete --log-level debug
386 | ```
387 | 
388 | * Install Complete:
389 | 
390 | You will see output that looks like:
391 | 
392 | ```bash
393 | INFO Waiting up to 10m0s for the openshift-console route to be created...
394 | DEBUG Route found in openshift-console namespace: console
395 | DEBUG Route found in openshift-console namespace: downloads
396 | DEBUG OpenShift console route is created
397 | INFO Install complete!
398 | INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/okd4-lab/okd4-install-dir/auth/kubeconfig'
399 | INFO Access the OpenShift web-console here: https://console-openshift-console.apps.okd4.your.domain.org
400 | INFO Login to the console with user: kubeadmin, password: aBCdE-FGHiJ-klMNO-PqrSt
401 | ```
402 | 
403 | ### Log into your new cluster console:
404 | 
405 | Point your browser to the url listed at the completion of install: `https://console-openshift-console.apps.okd4.your.domain.org`
406 | 
407 | Log in as `kubeadmin` with the password from the output at the completion of the install.
408 | 
409 | __If you forget the password for this initial account, you can find it in the file:__ `${OKD4_LAB_PATH}/okd4-install-dir/auth/kubeadmin-password`
410 | 
411 | __Note: the first time you try to log in, you may have to wait a bit for all of the console resources to initialize.__
412 | 
413 | You will have to accept the certs for your new cluster.
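Once you have exported the `KUBECONFIG` as shown in the next section, a couple of quick health checks are worth running:

```bash
# Overall cluster version and update status:
oc get clusterversion

# Watch the cluster operators converge to AVAILABLE=True:
oc get clusteroperators
```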
414 | 
415 | ### Issue commands against your new cluster:
416 | 
417 | ```bash
418 | export KUBECONFIG="${OKD4_LAB_PATH}/okd4-install-dir/auth/kubeconfig"
419 | oc get pods --all-namespaces
420 | ```
421 | 
422 | You may need to approve the certs of your master and/or worker nodes before they can join the cluster:
423 | 
424 | ```bash
425 | oc get csr
426 | ```
427 | 
428 | If you see certs in a Pending state:
429 | 
430 | ```bash
431 | oc get csr -ojson | jq -r '.items[] | select(.status == {} ) | .metadata.name' | xargs oc adm certificate approve
432 | ```
433 | 
434 | Create an `emptyDir` volume for registry storage:
435 | 
436 | ```bash
437 | oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed","storage":{"emptyDir":{}}}}'
438 | ```
439 | 
440 | ### If it all goes pancake shaped:
441 | 
442 | ```bash
443 | openshift-install --dir=${OKD4_LAB_PATH}/okd4-install-dir gather bootstrap --bootstrap 10.11.11.49 --master 10.11.11.60 --master 10.11.11.61 --master 10.11.11.62
444 | ```
445 | 
446 | ### Next:
447 | 
448 | 1. Create an Image Pruner:
449 | 
450 | ```bash
451 | oc patch imagepruners.imageregistry.operator.openshift.io/cluster --type merge -p '{"spec":{"schedule":"0 0 * * *","suspend":false,"keepTagRevisions":3,"keepYoungerThan":60,"resources":{},"affinity":{},"nodeSelector":{},"tolerations":[],"startingDeadlineSeconds":60,"successfulJobsHistoryLimit":3,"failedJobsHistoryLimit":3}}'
452 | ```
453 | 1. [Designate your Master Nodes as Infrastructure Nodes](InfraNodes.md)
454 | 
455 |    __Do Not do this step if you do not have dedicated `worker` nodes.__
456 | 
457 |    If you have dedicated worker nodes in addition to three master nodes, then I recommend this step to pin your Ingress Routers to the Master nodes. If they restart on worker nodes, you will lose Ingress access to your cluster unless you add the worker nodes to your external HA Proxy configuration. I prefer to use Infrastructure nodes to run the Ingress routers and a number of other pods.
458 | 
459 | 1. [Set up Htpasswd as an Identity Provider](HtPasswd.md)
460 | 1. [Deploy a Ceph cluster for block storage provisioning](Ceph.md)
461 | 1. [Create a MariaDB Galera StatefulSet](MariaDB.md)
462 | 1. [Updating Your Cluster](UpdateOKD.md)
463 | 1. [Tekton pipeline for Quarkus and Spring Boot applications](https://github.com/cgruver/tekton-pipeline-okd4)
464 | 1. 
465 | 
--------------------------------------------------------------------------------
/docs/pages/Notes.md:
--------------------------------------------------------------------------------
1 | # Notes before they become docs
2 | 
3 | ## Reset the HA Proxy configuration for a new cluster build:
4 | 
5 | ```bash
6 | ssh okd4-lb01 "curl -o /etc/haproxy/haproxy.cfg http://${INSTALL_HOST}/install/postinstall/haproxy.cfg && systemctl restart haproxy"
7 | ```
8 | 
9 | ## Setup DNS resolution for an SNC build
10 | 
11 | ```bash
12 | nmcli connection mod "Bridge connection br0" ipv4.dns "${SNC_NAMESERVER}" ipv4.dns-search "${SNC_DOMAIN}"
13 | ```
14 | ## Upgrade:
15 | 
16 | ```bash
17 | oc adm upgrade
18 | 
19 | Cluster version is 4.4.0-0.okd-2020-04-09-104654
20 | 
21 | Updates:
22 | 
23 | VERSION IMAGE
24 | 4.4.0-0.okd-2020-04-09-113408 registry.svc.ci.openshift.org/origin/release@sha256:724d170530bd738830f0ba370e74d94a22fc70cf1c017b1d1447d39ae7c3cf4f
25 | 4.4.0-0.okd-2020-04-09-124138 registry.svc.ci.openshift.org/origin/release@sha256:ce16ac845c0a0d178149553a51214367f63860aea71c0337f25556f25e5b8bb3
26 | 
27 | ssh root@${LAB_NAMESERVER} 'sed -i "s|registry.svc.ci.openshift.org|;sinkhole|g" /etc/named/zones/db.sinkhole && systemctl restart named'
28 | 
29 | export OKD_RELEASE=4.4.0-0.okd-2020-04-09-124138
30 | 
31 | oc adm -a ${LOCAL_SECRET_JSON} release mirror --from=${OKD_REGISTRY}:${OKD_RELEASE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OKD_RELEASE}
32 | 
33 | oc apply -f upgrade.yaml
34 | 
35 | ssh root@${LAB_NAMESERVER} 'sed -i "s|;sinkhole|registry.svc.ci.openshift.org|g" /etc/named/zones/db.sinkhole && systemctl restart named'
36 | 
37 | oc adm upgrade --to=${OKD_RELEASE}
38 | 
39 | 
40 | oc patch clusterversion/version --patch '{"spec":{"upstream":"https://origin-release.svc.ci.openshift.org/graph"}}' --type=merge
41 | ```
42 | 
43 | ## Samples Operator: Extract templates and image streams, then remove the operator. We don't want everything and the kitchen sink...
44 | 
45 | ```bash
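# Export every template and image stream from the "openshift" namespace to
# local YAML files, then set the Samples Operator to "Removed" so it stops
# managing the full upstream set.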
46 | mkdir -p ${OKD4_LAB_PATH}/OKD-Templates-ImageStreams/templates
47 | mkdir ${OKD4_LAB_PATH}/OKD-Templates-ImageStreams/image-streams
48 | oc project openshift
49 | oc get template | grep -v NAME | while read line
50 | do
51 |   TEMPLATE=$(echo $line | cut -d' ' -f1)
52 |   oc get --export template ${TEMPLATE} -o yaml > ${OKD4_LAB_PATH}/OKD-Templates-ImageStreams/templates/${TEMPLATE}.yml
53 | done
54 | 
55 | oc get is | grep -v NAME | while read line
56 | do
57 |   IS=$(echo $line | cut -d' ' -f1)
58 |   oc get --export is ${IS} -o yaml > ${OKD4_LAB_PATH}/OKD-Templates-ImageStreams/image-streams/${IS}.yml
59 | done
60 | 
61 | oc patch configs.samples.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Removed"}}'
62 | ```
63 | 
64 | ## Fix Hostname:
65 | 
66 | ```bash
67 | for i in 0 1 2 ; do ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-master-${i}.${LAB_DOMAIN} "sudo hostnamectl set-hostname okd4-master-${i}.my.domain.org && sudo shutdown -r now"; done
68 | for i in 0 1 2 ; do ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-worker-${i}.${LAB_DOMAIN} "sudo hostnamectl set-hostname okd4-worker-${i}.my.domain.org && sudo shutdown -r now"; done
69 | ```
70 | 
71 | ## Logs:
72 | 
73 | ```bash
74 | for i in 0 1 2 ; do echo "okd4-master-${i}" ; ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-master-${i}.${LAB_DOMAIN} "sudo journalctl --disk-usage"; done
75 | for i in 0 1 2 ; do echo "okd4-worker-${i}" ; ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-worker-${i}.${LAB_DOMAIN} "sudo journalctl --disk-usage"; done
76 | 
77 | for i in 0 1 2 ; do echo "okd4-master-${i}" ; ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-master-${i}.${LAB_DOMAIN} "sudo journalctl --vacuum-time=1s"; done
78 | for i in 0 1 2 ; do echo "okd4-worker-${i}" ; ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-worker-${i}.${LAB_DOMAIN} "sudo journalctl --vacuum-time=1s"; done
79 | ```
80 | 
81 | ## Project Provisioning:
82 | 
83 | ```bash
84 | oc describe clusterrolebinding.rbac self-provisioners
85 | 
86 | # Remove self-provisioning from all roles
87 | oc patch clusterrolebinding.rbac self-provisioners -p '{"subjects": null}'
88 | 
89 | # Remove from specific role
90 | oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
91 | 
92 | # Prevent automatic updates to the role
93 | oc patch clusterrolebinding.rbac self-provisioners -p '{ "metadata": { "annotations": { "rbac.authorization.kubernetes.io/autoupdate": "false" } } }'
94 | ```
95 | 
96 | ## iSCSI:
97 | 
98 | ```bash
99 | echo "InitiatorName=iqn.$(hostname)" > /etc/iscsi/initiatorname.iscsi
100 | systemctl enable iscsid --now
101 | 
102 | iscsiadm -m discovery -t st -l -p 10.11.11.5:3260
103 | 
104 | for i in 0 1 2 ; do ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-prd-master-${i}.${LAB_DOMAIN} "sudo bash -c \"echo InitiatorName=iqn.\$(hostname) > /etc/iscsi/initiatorname.iscsi\" && sudo systemctl enable iscsid --now"; done
105 | 
106 | for i in 0 1 2 3 4 5 ; do ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@okd4-prd-worker-${i}.${LAB_DOMAIN} "sudo bash -c \"echo InitiatorName=iqn.\$(hostname) > /etc/iscsi/initiatorname.iscsi\" && sudo systemctl enable iscsid --now"; done
107 | ```
108 | 
109 | ## FCCT:
110 | 
111 | ```bash
112 | wget
https://github.com/coreos/fcct/releases/download/v0.6.0/fcct-x86_64-unknown-linux-gnu 113 | mv fcct-x86_64-unknown-linux-gnu ~/bin/lab_bin/fcct 114 | chmod 750 ~/bin/lab_bin/fcct 115 | ``` 116 | 117 | ```yaml 118 | # Merge some tweaks/bugfixes with the master Ignition config 119 | variant: fcos 120 | version: 1.1.0 121 | ignition: 122 | config: 123 | merge: 124 | - local: ./files/master.ign 125 | systemd: 126 | units: 127 | # we don't want docker starting 128 | # https://github.com/openshift/okd/issues/243 129 | - name: docker.service 130 | mask: true 131 | storage: 132 | files: 133 | # Disable zincati, this should be removed in the next OKD beta 134 | # https://github.com/openshift/machine-config-operator/pull/1890 135 | # https://github.com/openshift/okd/issues/215 136 | - path: /etc/zincati/config.d/90-disable-feature.toml 137 | contents: 138 | inline: | 139 | [updates] 140 | enabled = false 141 | - path: /etc/systemd/network/25-nic0.link 142 | mode: 0644 143 | contents: 144 | inline: | 145 | [Match] 146 | MACAddress=${NET_MAC_0} 147 | [Link] 148 | Name=nic0 149 | - path: /etc/NetworkManager/system-connections/nic0.nmconnection 150 | mode: 0600 151 | overwrite: true 152 | contents: 153 | inline: | 154 | [connection] 155 | type=ethernet 156 | interface-name=nic0 157 | 158 | [ethernet] 159 | mac-address= 160 | 161 | [ipv4] 162 | method=manual 163 | addresses=192.0.2.10/24 164 | gateway=192.0.2.1 165 | dns=192.168.124.1;1.1.1.1;8.8.8.8 166 | dns-search=redhat.com 167 | ``` 168 | 169 | ## iPXE: 170 | 171 | ```bash 172 | wget http://boot.ipxe.org/ipxe.efi 173 | ``` 174 | 175 | ```bash 176 | uci add_list dhcp.lan.dhcp_option="6,10.11.11.10,8.8.8.8,8.8.4.4" 177 | uci set dhcp.@dnsmasq[0].enable_tftp=1 178 | uci set dhcp.@dnsmasq[0].tftp_root=/data/tftpboot 179 | uci set dhcp.efi64_boot_1=match 180 | uci set dhcp.efi64_boot_1.networkid='set:efi64' 181 | uci set dhcp.efi64_boot_1.match='60,PXEClient:Arch:00007' 182 | uci set dhcp.efi64_boot_2=match 183 | uci set dhcp.efi64_boot_2.networkid='set:efi64' 184 | uci set dhcp.efi64_boot_2.match='60,PXEClient:Arch:00009' 185 | uci set dhcp.ipxe_boot=userclass 186 | uci set dhcp.ipxe_boot.networkid='set:ipxe' 187 | uci set dhcp.ipxe_boot.userclass='iPXE' 188 | uci set dhcp.uefi=boot 189 | uci set dhcp.uefi.filename='tag:efi64,tag:!ipxe,ipxe.efi' 190 | uci set dhcp.uefi.serveraddress='10.11.11.1' 191 | uci set dhcp.uefi.servername='pxe' 192 | uci set dhcp.uefi.force='1' 193 | uci set dhcp.ipxe=boot 194 | uci set dhcp.ipxe.filename='tag:ipxe,boot.ipxe' 195 | uci set dhcp.ipxe.serveraddress='10.11.11.1' 196 | uci set dhcp.ipxe.servername='pxe' 197 | uci set dhcp.ipxe.force='1' 198 | uci commit dhcp 199 | /etc/init.d/dnsmasq restart 200 | ``` 201 | 202 | ## Journald 203 | 204 | ```bash 205 | sed -i 's/#Storage.*/Storage=persistent/' /etc/systemd/journald.conf 206 | sed -i 's/#SystemMaxUse.*/SystemMaxUse=4G/' /etc/systemd/journald.conf 207 | systemctl restart systemd-journald.service 208 | ``` 209 | 210 | ## KubeVirt 211 | 212 | ### Node Maintenance Operator 213 | 214 | ```bash 215 | git clone https://github.com/kubevirt/node-maintenance-operator.git 216 | cd node-maintenance-operator/ 217 | 218 | oc apply -f deploy/deployment-ocp/catalogsource.yaml 219 | oc apply -f deploy/deployment-ocp/namespace.yaml 220 | oc apply -f deploy/deployment-ocp/operatorgroup.yaml 221 | oc apply -f deploy/deployment-ocp/subscription.yaml 222 | ``` 223 | 224 | ### Hyperconverged Cluster Operator 225 | 226 | ```bash 227 | export REGISTRY_NAMESPACE=kubevirt 228 | export 
IMAGE_REGISTRY=${LOCAL_REGISTRY}
229 | export TAG=4.6
230 | export CONTAINER_TAG=4.6
231 | export OPERATOR_IMAGE=hyperconverged-cluster-operator
232 | export CONTAINER_BUILD_CMD=podman
233 | export WORK_DIR=${OKD4_LAB_PATH}/kubevirt
234 | 
235 | git clone https://github.com/kubevirt/hyperconverged-cluster-operator.git
236 | 
237 | cd hyperconverged-cluster-operator && git checkout release-4.6
238 | 
239 | podman build -f build/Dockerfile -t ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/${OPERATOR_IMAGE}:${TAG} --build-arg git_sha=$(git describe --no-match --always --abbrev=40 --dirty) .
240 | 
241 | podman build -f tools/operator-courier/Dockerfile -t hco-courier .
242 | podman tag hco-courier:latest ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/hco-courier:latest
243 | podman tag hco-courier:latest ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/hco-courier:${TAG}
244 | 
245 | podman login ${LOCAL_REGISTRY}
246 | 
247 | podman push ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/${OPERATOR_IMAGE}:${TAG}
248 | podman push ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/hco-courier:latest
249 | podman push ${LOCAL_REGISTRY}/${REGISTRY_NAMESPACE}/hco-courier:${TAG}
250 | 
251 | ./hack/build-registry-bundle.sh
252 | 
253 | cd ${WORK_DIR}
254 | 
255 | cat << EOF > ${WORK_DIR}/operator-group.yml
256 | apiVersion: operators.coreos.com/v1
257 | kind: OperatorGroup
258 | metadata:
259 |   name: hco-operatorgroup
260 |   namespace: kubevirt-hyperconverged
261 | spec:
262 |   targetNamespaces:
263 |   - "kubevirt-hyperconverged"
264 | EOF
265 | 
266 | cat << EOF > ${WORK_DIR}/catalog-source.yml
267 | apiVersion: operators.coreos.com/v1alpha1
268 | kind: CatalogSource
269 | metadata:
270 |   name: hco-catalogsource
271 |   namespace: openshift-marketplace
272 | spec:
273 |   sourceType: grpc
274 |   image: ${IMAGE_REGISTRY}/${REGISTRY_NAMESPACE}/hco-container-registry:${CONTAINER_TAG}
275 |   displayName: KubeVirt HyperConverged
276 |   publisher: ${LAB_DOMAIN}
277 |   updateStrategy:
278 |     registryPoll:
279 |       interval: 30m
280 | EOF
281 | 
282 | cat << EOF > subscription.yml
283 | apiVersion: operators.coreos.com/v1alpha1
284 | kind: Subscription
285 | metadata:
286 |   name: hco-subscription
287 |   namespace: kubevirt-hyperconverged
288 | spec:
289 |   channel: "1.0.0"
290 |   name: kubevirt-hyperconverged
291 |   source: hco-catalogsource
292 |   sourceNamespace: openshift-marketplace
293 | EOF
294 | 
295 | oc create -f hco.cr.yaml -n kubevirt-hyperconverged
296 | 
297 | export KUBEVIRT_PROVIDER="okd-4.5"
298 | ```
299 | 
300 | ### KubeVirt project:
301 | 
302 | ```bash
303 | # export DOCKER_PREFIX=${LOCAL_REGISTRY}/kubevirt
304 | # export DOCKER_TAG=okd-4.5
305 | ```
306 | 
307 | ### Chrony
308 | 
309 | https://docs.openshift.com/container-platform/4.5/installing/install_config/installing-customizing.html
310 | 
311 | ```yaml
312 | apiVersion: machineconfiguration.openshift.io/v1
313 | kind: MachineConfig
314 | metadata:
315 |   labels:
316 |     machineconfiguration.openshift.io/role: master
317 |   name: 99-chrony-master
318 | spec:
319 |   config:
320 |     ignition:
321 |       version: 2.2.0
322 |     storage:
323 |       files:
324 |       - contents:
325 |           source: data:text/plain;charset=utf-8;base64,
326 |         filesystem: root
327 |         mode: 420
328 |         path: /etc/chrony.conf
329 | ```
330 | 
331 | ```bash
332 | cat << EOF | base64
333 | server clock.redhat.com iburst
334 | driftfile /var/lib/chrony/drift
335 | makestep 1.0 3
336 | rtcsync
337 | logdir /var/log/chrony
338 | EOF
339 | 
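# The base64 string printed above is what gets embedded in the MachineConfig
# written below; its "source: data:...;base64,..." value is the encoded
# chrony.conf from the heredoc above.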
340 | cat << EOF > ./99_masters-chrony-configuration.yaml
341 | apiVersion: machineconfiguration.openshift.io/v1
342 | kind: MachineConfig
343 | metadata:
344 |   labels:
345 |     machineconfiguration.openshift.io/role: master
346 |   name: masters-chrony-configuration
347 | spec:
348 |   config:
349 |     ignition:
350 |       config: {}
351 |       security:
352 |         tls: {}
353 |       timeouts: {}
354 |       version: 2.2.0
355 |     networkd: {}
356 |     passwd: {}
357 |     storage:
358 |       files:
359 |       - contents:
360 |           source: data:text/plain;charset=utf-8;base64,c2VydmVyIGNsb2NrLnJlZGhhdC5jb20gaWJ1cnN0CmRyaWZ0ZmlsZSAvdmFyL2xpYi9jaHJvbnkvZHJpZnQKbWFrZXN0ZXAgMS4wIDMKcnRjc3luYwpsb2dkaXIgL3Zhci9sb2cvY2hyb255Cg==
361 |           verification: {}
362 |         filesystem: root
363 |         mode: 420
364 |         path: /etc/chrony.conf
365 |   osImageURL: ""
366 | EOF
367 | ```
368 | 
369 | ## CRC for OKD:
370 | 
371 | ### One time setup
372 | 
373 | ```bash
374 | dnf install jq golang-bin gcc-c++ golang make zip
375 | 
376 | firewall-cmd --add-rich-rule 'rule service name="libvirt" reject' --permanent
377 | firewall-cmd --zone=dmz --change-interface=tt0 --permanent
378 | firewall-cmd --zone=dmz --add-service=libvirt --permanent
379 | firewall-cmd --zone=dmz --add-service=dns --permanent
380 | firewall-cmd --zone=dmz --add-service=dhcp --permanent
381 | firewall-cmd --reload
382 | 
383 | cat << EOF >> /etc/libvirt/libvirtd.conf
384 | listen_tls = 0
385 | listen_tcp = 1
386 | auth_tcp = "none"
387 | tcp_port = "16509"
388 | EOF
389 | 
390 | systemctl stop libvirtd
391 | systemctl enable libvirtd-tcp.socket --now
392 | systemctl start libvirtd
393 | 
394 | cat << EOF > /etc/NetworkManager/conf.d/openshift.conf
395 | [main]
396 | dns=dnsmasq
397 | EOF
398 | 
399 | cat << EOF > /etc/NetworkManager/dnsmasq.d/openshift.conf
400 | server=/crc.testing/192.168.126.1
401 | address=/apps-crc.testing/192.168.126.11
402 | EOF
403 | 
404 | systemctl reload NetworkManager
405 | 
406 | ```
407 | 
408 | ### Build SNC:
409 | 
410 | ```bash
411 | 
412 | mkdir /root/crc-build
413 | cd /root/crc-build
414 | 
415 | git clone https://github.com/code-ready/crc
416 | git clone https://github.com/code-ready/snc
417 | 
418 | cd snc
419 | 
420 | cat << FOE > ~/bin/sncSetup.sh
421 | export OKD_VERSION=\$1
422 | export CRC_DIR=/root/crc-build
423 | export OPENSHIFT_PULL_SECRET_PATH="\${CRC_DIR}/pull_secret.json"
424 | export BUNDLE_VERSION=\${OKD_VERSION}
425 | export BUNDLE_DIR=\${CRC_DIR}/snc
426 | export OKD_BUILD=true
427 | export TF_VAR_libvirt_bootstrap_memory=16384
428 | export LIBGUESTFS_BACKEND=direct
429 | export KUBECONFIG=\${CRC_DIR}/snc/crc-tmp-install-data/auth/kubeconfig
430 | export OC=\${CRC_DIR}/snc/openshift-clients/linux/oc
431 | cat << EOF > \${CRC_DIR}/pull_secret.json
432 | {"auths":{"fake":{"auth": "Zm9vOmJhcgo="}}}
433 | EOF
434 | FOE
435 | 
436 | chmod 700 ~/bin/sncSetup.sh
437 | 
438 | . 
sncSetup.sh 4.7.0-0.okd-2021-06-13-090745 439 | 440 | ./snc.sh 441 | 442 | # Watch progress: 443 | export KUBECONFIG=crc-tmp-install-data/auth/kubeconfig 444 | ./oc get pods --all-namespaces 445 | 446 | # Rotate Certs: 447 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_ecdsa_crc core@api.crc.testing -- sudo openssl x509 -checkend 2160000 -noout -in /var/lib/kubelet/pki/kubelet-client-current.pem 448 | 449 | ./oc delete secrets/csr-signer-signer secrets/csr-signer -n openshift-kube-controller-manager-operator 450 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_ecdsa_crc core@api.crc.testing -- sudo rm -fr /var/lib/kubelet/pki 451 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_ecdsa_crc core@api.crc.testing -- sudo rm -fr /var/lib/kubelet/kubeconfig 452 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_ecdsa_crc core@api.crc.testing -- sudo systemctl restart kubelet 453 | 454 | ./oc get csr 455 | ./oc get csr '-ojsonpath={.items[*].metadata.name}' | xargs ./oc adm certificate approve 456 | 457 | ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -i id_ecdsa_crc core@api.crc.testing -- sudo openssl x509 -checkend 2160000 -noout -in /var/lib/kubelet/pki/kubelet-client-current.pem 458 | 459 | # Clean up Ingress: 460 | 461 | ./oc get pods --all-namespaces | grep NodeAffinity | while read i 462 | do 463 | NS=$(echo ${i} | cut -d" " -f1 ) 464 | POD=$(echo ${i} | cut -d" " -f2 ) 465 | ./oc delete pod ${POD} -n ${NS} 466 | done 467 | 468 | ./oc get pods --all-namespaces | grep CrashLoop | while read i 469 | do 470 | NS=$(echo ${i} | cut -d" " -f1 ) 471 | POD=$(echo ${i} | cut -d" " -f2 ) 472 | ./oc delete pod ${POD} -n ${NS} 473 | done 474 | 475 | ./oc delete pod --field-selector=status.phase==Succeeded --all-namespaces 476 | 477 | ./createdisk.sh crc-tmp-install-data 478 | 479 | cd ../crc 480 | 481 | make release 482 | make out/macos-amd64/crc-macos-amd64.pkg 483 | 484 | ``` 485 | 486 | ### Clean up VMs: 487 | 488 | ```bash 489 | 490 | CRC=$(virsh net-list --all | grep crc- | cut -d" " -f2) 491 | virsh destroy ${CRC}-bootstrap 492 | virsh undefine ${CRC}-bootstrap 493 | virsh destroy ${CRC}-master-0 494 | virsh undefine ${CRC}-master-0 495 | virsh net-destroy ${CRC} 496 | virsh net-undefine ${CRC} 497 | virsh pool-destroy ${CRC} 498 | virsh pool-undefine ${CRC} 499 | rm -rf /var/lib/libvirt/openshift-images/${CRC} 500 | 501 | ``` 502 | 503 | ### Clean up SNC build: 504 | 505 | ```bash 506 | rm -rf ${CRC_DIR}/snc/crc_*_${OKD_VERSION}* 507 | ``` 508 | ### Rebase Git 509 | 510 | ```bash 511 | git remote add upstream https://github.com/code-ready/crc.git 512 | git fetch upstream 513 | git rebase upstream/master 514 | git push origin master --force 515 | 516 | 517 | 518 | git checkout -b okd-snc 519 | 520 | git checkout -b wip 521 | 522 | git reset --soft okd-snc 523 | git add . 
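# (note) `git reset --soft okd-snc` above moves the wip branch pointer back to
# okd-snc while keeping the working tree intact, so the commit below squashes
# all of the wip work into a single commit.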
524 | git commit -m "Message Here"
525 | git push
526 | ```
527 | 
528 | ### CentOS 8 Nic issue:
529 | 
530 | ```bash
531 | ethtool -K nic0 tso off
532 | ```
533 | 
534 | ### Market Place Disconnected
535 | 
536 | ```bash
537 | oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
538 | oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/sources/0/disabled", "value": true}]'
539 | 
540 | 
541 | # oc patch OperatorHub cluster --type json -p '[{"op": "remove", "path": "/spec/sources/0"}]'
542 | # oc patch OperatorHub cluster --type json -p '[{"op": "replace", "path": "/spec/sources/0", "value": {"name":"community-operators","disabled":false}}]'
543 | # oc patch OperatorHub cluster --type json -p '[{"op": "add", "path": "/spec/sources/-", "value": {"name":"community-operators","disabled":true}}]'
544 | ```
545 | 
546 | ### Add worker node:
547 | 
548 | ```bash
549 | oc extract -n openshift-machine-api secret/worker-user-data --keys=userData --to=- > worker.ign
550 | ```
551 | 
552 | ### Clean up Completed or Failed Pods:
553 | 
554 | ```bash
555 | oc delete pod --field-selector=status.phase==Succeeded
556 | oc delete pod --field-selector=status.phase==Failed
557 | ```
558 | 
559 | ### Pod Scheduling:
560 | 
561 | ```yaml
562 | apiVersion: apps/v1
563 | kind: Deployment
564 | metadata:
565 |   name: frontend-app
566 |   labels:
567 |     app: frontend-app
568 | spec:
571 |   replicas: 3
572 |   selector:
573 |     matchLabels:
574 |       app: frontend-app
575 |   template:
576 |     metadata:
577 |       labels:
578 |         app: frontend-app
579 |     spec:
580 |       securityContext:
581 |         runAsUser: 27
582 |         fsGroup: 27
583 |       serviceAccountName: mariadb
584 |       terminationGracePeriodSeconds: 60
585 |       nodeSelector:
586 |         node-role.kubernetes.io/worker: ""
587 |       affinity:
588 |         podAntiAffinity:
589 |           preferredDuringSchedulingIgnoredDuringExecution:
590 |           - weight: 100
591 |             podAffinityTerm:
592 |               labelSelector:
593 |                 matchExpressions:
594 |                 - key: app
595 |                   operator: In
596 |                   values:
597 |                   - frontend-app
598 |               topologyKey: kubernetes.io/hostname
599 |         podAffinity:
600 |           preferredDuringSchedulingIgnoredDuringExecution:
601 |           - weight: 100
602 |             podAffinityTerm:
603 |               labelSelector:
604 |                 matchExpressions:
605 |                 - key: app
606 |                   operator: In
607 |                   values:
608 |                   - backend-app
609 |               topologyKey: kubernetes.io/hostname
610 |       containers:
611 |       - name: frontend-app
612 |         image: image-registry.openshift-image-registry.svc:5000/openshift/frontend-app:latest
613 |         imagePullPolicy: IfNotPresent
614 |         env: []
615 |         ports: []
616 |         volumeMounts: []
617 |       volumes: []
618 | ---
619 | apiVersion: apps/v1
620 | kind: Deployment
621 | metadata:
622 |   name: backend-app
623 |   labels:
624 |     app: backend-app
625 | spec:
628 |   replicas: 3
629 |   selector:
630 |     matchLabels:
631 |       app: backend-app
632 |   template:
633 |     metadata:
634 |       labels:
635 |         app: backend-app
636 |     spec:
637 |       securityContext:
638 |         runAsUser: 27
639 |         fsGroup: 27
640 |       serviceAccountName: mariadb
641 |       terminationGracePeriodSeconds: 60
642 |       nodeSelector:
643 |         node-role.kubernetes.io/worker: ""
644 |       affinity:
645 |         podAntiAffinity:
646 |           preferredDuringSchedulingIgnoredDuringExecution:
647 |           - weight: 100
648 |             podAffinityTerm:
649 |               labelSelector:
650 |                 matchExpressions:
651 |                 - key: app
652 |                   operator: In
653 |                   values:
654 |                   - backend-app
655 |               topologyKey: kubernetes.io/hostname
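      # (note) the anti-affinity above spreads backend-app replicas across
      # nodes, while the frontend-app Deployment pairs its own anti-affinity
      # with podAffinity so frontends prefer nodes already running a backend.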
656 |       containers:
657 |       - name: backend-app
658 |         image: image-registry.openshift-image-registry.svc:5000/openshift/backend-app:latest
659 |         imagePullPolicy: IfNotPresent
660 |         env: []
661 |         ports: []
662 |         volumeMounts: []
663 |       volumes: []
664 | 
665 | ```
666 | 
667 | ### Clone multiple repos at once
668 | 
669 | ```bash
670 | curl -s https://cgruver:@api.github.com/orgs/cgruver-erdemo/repos | jq -r ".[].clone_url" | xargs -n 1 git clone
671 | ```
672 | 
673 | ### Update CentOS 8 to CentOS Stream
674 | 
675 | ```bash
676 | dnf install centos-release-stream
677 | dnf swap centos-{linux,stream}-repos
678 | dnf distro-sync
679 | ```
680 | 
681 | ### Docker Hub Mirror
682 | 
683 | ```bash
684 | cat << EOF > dockerHubMirror.yaml
685 | apiVersion: operator.openshift.io/v1alpha1
686 | kind: ImageContentSourcePolicy
687 | metadata:
688 |   name: dockerhub
689 | spec:
690 |   repositoryDigestMirrors:
691 |   - mirrors:
692 |     - nexus.${LAB_DOMAIN}:5002/dockerhub
693 |     source: docker.io
694 | EOF
695 | ```
696 | 
697 | ### CIDR magic:
698 | 
699 | ```bash
700 | # e.g. mask2cidr 255.255.255.0 -> 24
701 | mask2cidr ()
702 | {
703 |   local x=${1##*255.}
704 |   set -- 0^^^128^192^224^240^248^252^254^ $(( (${#1} - ${#x})*2 )) ${x%%.*}
705 |   x=${1%%$3*}
706 |   echo $(( $2 + (${#x}/4) ))
707 | }
708 | 
709 | # e.g. cidr2mask 24 -> 255.255.255.0
710 | cidr2mask ()
711 | {
712 |   set -- $(( 5 - ($1 / 8) )) 255 255 255 255 $(( (255 << (8 - ($1 % 8))) & 255 )) 0 0 0
713 |   [ $1 -gt 1 ] && shift $1 || shift
714 |   echo ${1-0}.${2-0}.${3-0}.${4-0}
715 | }
716 | ```
717 | 
718 | ### Blog:
719 | 
720 | ```bash
721 | gem install jekyll bundler
722 | jekyll new blog-name
723 | bundle add webrick
724 | bundle exec jekyll serve --livereload --drafts
725 | 
726 | gem update --system
727 | gem update
728 | rm Gemfile.lock
729 | bundle update --all
730 | ```
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | GNU GENERAL PUBLIC LICENSE
2 | Version 3, 29 June 2007
3 | 
4 | Copyright (C) 2007 Free Software Foundation, Inc. 
5 | Everyone is permitted to copy and distribute verbatim copies
6 | of this license document, but changing it is not allowed.
7 | 
8 | Preamble
9 | 
10 | The GNU General Public License is a free, copyleft license for
11 | software and other kinds of works.
12 | 
13 | The licenses for most software and other practical works are designed
14 | to take away your freedom to share and change the works. By contrast,
15 | the GNU General Public License is intended to guarantee your freedom to
16 | share and change all versions of a program--to make sure it remains free
17 | software for all its users. We, the Free Software Foundation, use the
18 | GNU General Public License for most of our software; it applies also to
19 | any other work released this way by its authors. You can apply it to
20 | your programs, too.
21 | 
22 | When we speak of free software, we are referring to freedom, not
23 | price. Our General Public Licenses are designed to make sure that you
24 | have the freedom to distribute copies of free software (and charge for
25 | them if you wish), that you receive source code or can get it if you
26 | want it, that you can change the software or use pieces of it in new
27 | free programs, and that you know you can do these things.
28 | 
29 | To protect your rights, we need to prevent others from denying you
30 | these rights or asking you to surrender the rights. 
Therefore, you have 31 | certain responsibilities if you distribute copies of the software, or if 32 | you modify it: responsibilities to respect the freedom of others. 33 | 34 | For example, if you distribute copies of such a program, whether 35 | gratis or for a fee, you must pass on to the recipients the same 36 | freedoms that you received. You must make sure that they, too, receive 37 | or can get the source code. And you must show them these terms so they 38 | know their rights. 39 | 40 | Developers that use the GNU GPL protect your rights with two steps: 41 | (1) assert copyright on the software, and (2) offer you this License 42 | giving you legal permission to copy, distribute and/or modify it. 43 | 44 | For the developers' and authors' protection, the GPL clearly explains 45 | that there is no warranty for this free software. For both users' and 46 | authors' sake, the GPL requires that modified versions be marked as 47 | changed, so that their problems will not be attributed erroneously to 48 | authors of previous versions. 49 | 50 | Some devices are designed to deny users access to install or run 51 | modified versions of the software inside them, although the manufacturer 52 | can do so. This is fundamentally incompatible with the aim of 53 | protecting users' freedom to change the software. The systematic 54 | pattern of such abuse occurs in the area of products for individuals to 55 | use, which is precisely where it is most unacceptable. Therefore, we 56 | have designed this version of the GPL to prohibit the practice for those 57 | products. If such problems arise substantially in other domains, we 58 | stand ready to extend this provision to those domains in future versions 59 | of the GPL, as needed to protect the freedom of users. 60 | 61 | Finally, every program is threatened constantly by software patents. 62 | States should not allow patents to restrict development and use of 63 | software on general-purpose computers, but in those that do, we wish to 64 | avoid the special danger that patents applied to a free program could 65 | make it effectively proprietary. To prevent this, the GPL assures that 66 | patents cannot be used to render the program non-free. 67 | 68 | The precise terms and conditions for copying, distribution and 69 | modification follow. 70 | 71 | TERMS AND CONDITIONS 72 | 73 | 0. Definitions. 74 | 75 | "This License" refers to version 3 of the GNU General Public License. 76 | 77 | "Copyright" also means copyright-like laws that apply to other kinds of 78 | works, such as semiconductor masks. 79 | 80 | "The Program" refers to any copyrightable work licensed under this 81 | License. Each licensee is addressed as "you". "Licensees" and 82 | "recipients" may be individuals or organizations. 83 | 84 | To "modify" a work means to copy from or adapt all or part of the work 85 | in a fashion requiring copyright permission, other than the making of an 86 | exact copy. The resulting work is called a "modified version" of the 87 | earlier work or a work "based on" the earlier work. 88 | 89 | A "covered work" means either the unmodified Program or a work based 90 | on the Program. 91 | 92 | To "propagate" a work means to do anything with it that, without 93 | permission, would make you directly or secondarily liable for 94 | infringement under applicable copyright law, except executing it on a 95 | computer or modifying a private copy. 
Propagation includes copying, 96 | distribution (with or without modification), making available to the 97 | public, and in some countries other activities as well. 98 | 99 | To "convey" a work means any kind of propagation that enables other 100 | parties to make or receive copies. Mere interaction with a user through 101 | a computer network, with no transfer of a copy, is not conveying. 102 | 103 | An interactive user interface displays "Appropriate Legal Notices" 104 | to the extent that it includes a convenient and prominently visible 105 | feature that (1) displays an appropriate copyright notice, and (2) 106 | tells the user that there is no warranty for the work (except to the 107 | extent that warranties are provided), that licensees may convey the 108 | work under this License, and how to view a copy of this License. If 109 | the interface presents a list of user commands or options, such as a 110 | menu, a prominent item in the list meets this criterion. 111 | 112 | 1. Source Code. 113 | 114 | The "source code" for a work means the preferred form of the work 115 | for making modifications to it. "Object code" means any non-source 116 | form of a work. 117 | 118 | A "Standard Interface" means an interface that either is an official 119 | standard defined by a recognized standards body, or, in the case of 120 | interfaces specified for a particular programming language, one that 121 | is widely used among developers working in that language. 122 | 123 | The "System Libraries" of an executable work include anything, other 124 | than the work as a whole, that (a) is included in the normal form of 125 | packaging a Major Component, but which is not part of that Major 126 | Component, and (b) serves only to enable use of the work with that 127 | Major Component, or to implement a Standard Interface for which an 128 | implementation is available to the public in source code form. A 129 | "Major Component", in this context, means a major essential component 130 | (kernel, window system, and so on) of the specific operating system 131 | (if any) on which the executable work runs, or a compiler used to 132 | produce the work, or an object code interpreter used to run it. 133 | 134 | The "Corresponding Source" for a work in object code form means all 135 | the source code needed to generate, install, and (for an executable 136 | work) run the object code and to modify the work, including scripts to 137 | control those activities. However, it does not include the work's 138 | System Libraries, or general-purpose tools or generally available free 139 | programs which are used unmodified in performing those activities but 140 | which are not part of the work. For example, Corresponding Source 141 | includes interface definition files associated with source files for 142 | the work, and the source code for shared libraries and dynamically 143 | linked subprograms that the work is specifically designed to require, 144 | such as by intimate data communication or control flow between those 145 | subprograms and other parts of the work. 146 | 147 | The Corresponding Source need not include anything that users 148 | can regenerate automatically from other parts of the Corresponding 149 | Source. 150 | 151 | The Corresponding Source for a work in source code form is that 152 | same work. 153 | 154 | 2. Basic Permissions. 155 | 156 | All rights granted under this License are granted for the term of 157 | copyright on the Program, and are irrevocable provided the stated 158 | conditions are met. 
This License explicitly affirms your unlimited 159 | permission to run the unmodified Program. The output from running a 160 | covered work is covered by this License only if the output, given its 161 | content, constitutes a covered work. This License acknowledges your 162 | rights of fair use or other equivalent, as provided by copyright law. 163 | 164 | You may make, run and propagate covered works that you do not 165 | convey, without conditions so long as your license otherwise remains 166 | in force. You may convey covered works to others for the sole purpose 167 | of having them make modifications exclusively for you, or provide you 168 | with facilities for running those works, provided that you comply with 169 | the terms of this License in conveying all material for which you do 170 | not control copyright. Those thus making or running the covered works 171 | for you must do so exclusively on your behalf, under your direction 172 | and control, on terms that prohibit them from making any copies of 173 | your copyrighted material outside their relationship with you. 174 | 175 | Conveying under any other circumstances is permitted solely under 176 | the conditions stated below. Sublicensing is not allowed; section 10 177 | makes it unnecessary. 178 | 179 | 3. Protecting Users' Legal Rights From Anti-Circumvention Law. 180 | 181 | No covered work shall be deemed part of an effective technological 182 | measure under any applicable law fulfilling obligations under article 183 | 11 of the WIPO copyright treaty adopted on 20 December 1996, or 184 | similar laws prohibiting or restricting circumvention of such 185 | measures. 186 | 187 | When you convey a covered work, you waive any legal power to forbid 188 | circumvention of technological measures to the extent such circumvention 189 | is effected by exercising rights under this License with respect to 190 | the covered work, and you disclaim any intention to limit operation or 191 | modification of the work as a means of enforcing, against the work's 192 | users, your or third parties' legal rights to forbid circumvention of 193 | technological measures. 194 | 195 | 4. Conveying Verbatim Copies. 196 | 197 | You may convey verbatim copies of the Program's source code as you 198 | receive it, in any medium, provided that you conspicuously and 199 | appropriately publish on each copy an appropriate copyright notice; 200 | keep intact all notices stating that this License and any 201 | non-permissive terms added in accord with section 7 apply to the code; 202 | keep intact all notices of the absence of any warranty; and give all 203 | recipients a copy of this License along with the Program. 204 | 205 | You may charge any price or no price for each copy that you convey, 206 | and you may offer support or warranty protection for a fee. 207 | 208 | 5. Conveying Modified Source Versions. 209 | 210 | You may convey a work based on the Program, or the modifications to 211 | produce it from the Program, in the form of source code under the 212 | terms of section 4, provided that you also meet all of these conditions: 213 | 214 | a) The work must carry prominent notices stating that you modified 215 | it, and giving a relevant date. 216 | 217 | b) The work must carry prominent notices stating that it is 218 | released under this License and any conditions added under section 219 | 7. This requirement modifies the requirement in section 4 to 220 | "keep intact all notices". 
221 | 222 | c) You must license the entire work, as a whole, under this 223 | License to anyone who comes into possession of a copy. This 224 | License will therefore apply, along with any applicable section 7 225 | additional terms, to the whole of the work, and all its parts, 226 | regardless of how they are packaged. This License gives no 227 | permission to license the work in any other way, but it does not 228 | invalidate such permission if you have separately received it. 229 | 230 | d) If the work has interactive user interfaces, each must display 231 | Appropriate Legal Notices; however, if the Program has interactive 232 | interfaces that do not display Appropriate Legal Notices, your 233 | work need not make them do so. 234 | 235 | A compilation of a covered work with other separate and independent 236 | works, which are not by their nature extensions of the covered work, 237 | and which are not combined with it such as to form a larger program, 238 | in or on a volume of a storage or distribution medium, is called an 239 | "aggregate" if the compilation and its resulting copyright are not 240 | used to limit the access or legal rights of the compilation's users 241 | beyond what the individual works permit. Inclusion of a covered work 242 | in an aggregate does not cause this License to apply to the other 243 | parts of the aggregate. 244 | 245 | 6. Conveying Non-Source Forms. 246 | 247 | You may convey a covered work in object code form under the terms 248 | of sections 4 and 5, provided that you also convey the 249 | machine-readable Corresponding Source under the terms of this License, 250 | in one of these ways: 251 | 252 | a) Convey the object code in, or embodied in, a physical product 253 | (including a physical distribution medium), accompanied by the 254 | Corresponding Source fixed on a durable physical medium 255 | customarily used for software interchange. 256 | 257 | b) Convey the object code in, or embodied in, a physical product 258 | (including a physical distribution medium), accompanied by a 259 | written offer, valid for at least three years and valid for as 260 | long as you offer spare parts or customer support for that product 261 | model, to give anyone who possesses the object code either (1) a 262 | copy of the Corresponding Source for all the software in the 263 | product that is covered by this License, on a durable physical 264 | medium customarily used for software interchange, for a price no 265 | more than your reasonable cost of physically performing this 266 | conveying of source, or (2) access to copy the 267 | Corresponding Source from a network server at no charge. 268 | 269 | c) Convey individual copies of the object code with a copy of the 270 | written offer to provide the Corresponding Source. This 271 | alternative is allowed only occasionally and noncommercially, and 272 | only if you received the object code with such an offer, in accord 273 | with subsection 6b. 274 | 275 | d) Convey the object code by offering access from a designated 276 | place (gratis or for a charge), and offer equivalent access to the 277 | Corresponding Source in the same way through the same place at no 278 | further charge. You need not require recipients to copy the 279 | Corresponding Source along with the object code. 
If the place to 280 | copy the object code is a network server, the Corresponding Source 281 | may be on a different server (operated by you or a third party) 282 | that supports equivalent copying facilities, provided you maintain 283 | clear directions next to the object code saying where to find the 284 | Corresponding Source. Regardless of what server hosts the 285 | Corresponding Source, you remain obligated to ensure that it is 286 | available for as long as needed to satisfy these requirements. 287 | 288 | e) Convey the object code using peer-to-peer transmission, provided 289 | you inform other peers where the object code and Corresponding 290 | Source of the work are being offered to the general public at no 291 | charge under subsection 6d. 292 | 293 | A separable portion of the object code, whose source code is excluded 294 | from the Corresponding Source as a System Library, need not be 295 | included in conveying the object code work. 296 | 297 | A "User Product" is either (1) a "consumer product", which means any 298 | tangible personal property which is normally used for personal, family, 299 | or household purposes, or (2) anything designed or sold for incorporation 300 | into a dwelling. In determining whether a product is a consumer product, 301 | doubtful cases shall be resolved in favor of coverage. For a particular 302 | product received by a particular user, "normally used" refers to a 303 | typical or common use of that class of product, regardless of the status 304 | of the particular user or of the way in which the particular user 305 | actually uses, or expects or is expected to use, the product. A product 306 | is a consumer product regardless of whether the product has substantial 307 | commercial, industrial or non-consumer uses, unless such uses represent 308 | the only significant mode of use of the product. 309 | 310 | "Installation Information" for a User Product means any methods, 311 | procedures, authorization keys, or other information required to install 312 | and execute modified versions of a covered work in that User Product from 313 | a modified version of its Corresponding Source. The information must 314 | suffice to ensure that the continued functioning of the modified object 315 | code is in no case prevented or interfered with solely because 316 | modification has been made. 317 | 318 | If you convey an object code work under this section in, or with, or 319 | specifically for use in, a User Product, and the conveying occurs as 320 | part of a transaction in which the right of possession and use of the 321 | User Product is transferred to the recipient in perpetuity or for a 322 | fixed term (regardless of how the transaction is characterized), the 323 | Corresponding Source conveyed under this section must be accompanied 324 | by the Installation Information. But this requirement does not apply 325 | if neither you nor any third party retains the ability to install 326 | modified object code on the User Product (for example, the work has 327 | been installed in ROM). 328 | 329 | The requirement to provide Installation Information does not include a 330 | requirement to continue to provide support service, warranty, or updates 331 | for a work that has been modified or installed by the recipient, or for 332 | the User Product in which it has been modified or installed. 
Access to a 333 | network may be denied when the modification itself materially and 334 | adversely affects the operation of the network or violates the rules and 335 | protocols for communication across the network. 336 | 337 | Corresponding Source conveyed, and Installation Information provided, 338 | in accord with this section must be in a format that is publicly 339 | documented (and with an implementation available to the public in 340 | source code form), and must require no special password or key for 341 | unpacking, reading or copying. 342 | 343 | 7. Additional Terms. 344 | 345 | "Additional permissions" are terms that supplement the terms of this 346 | License by making exceptions from one or more of its conditions. 347 | Additional permissions that are applicable to the entire Program shall 348 | be treated as though they were included in this License, to the extent 349 | that they are valid under applicable law. If additional permissions 350 | apply only to part of the Program, that part may be used separately 351 | under those permissions, but the entire Program remains governed by 352 | this License without regard to the additional permissions. 353 | 354 | When you convey a copy of a covered work, you may at your option 355 | remove any additional permissions from that copy, or from any part of 356 | it. (Additional permissions may be written to require their own 357 | removal in certain cases when you modify the work.) You may place 358 | additional permissions on material, added by you to a covered work, 359 | for which you have or can give appropriate copyright permission. 360 | 361 | Notwithstanding any other provision of this License, for material you 362 | add to a covered work, you may (if authorized by the copyright holders of 363 | that material) supplement the terms of this License with terms: 364 | 365 | a) Disclaiming warranty or limiting liability differently from the 366 | terms of sections 15 and 16 of this License; or 367 | 368 | b) Requiring preservation of specified reasonable legal notices or 369 | author attributions in that material or in the Appropriate Legal 370 | Notices displayed by works containing it; or 371 | 372 | c) Prohibiting misrepresentation of the origin of that material, or 373 | requiring that modified versions of such material be marked in 374 | reasonable ways as different from the original version; or 375 | 376 | d) Limiting the use for publicity purposes of names of licensors or 377 | authors of the material; or 378 | 379 | e) Declining to grant rights under trademark law for use of some 380 | trade names, trademarks, or service marks; or 381 | 382 | f) Requiring indemnification of licensors and authors of that 383 | material by anyone who conveys the material (or modified versions of 384 | it) with contractual assumptions of liability to the recipient, for 385 | any liability that these contractual assumptions directly impose on 386 | those licensors and authors. 387 | 388 | All other non-permissive additional terms are considered "further 389 | restrictions" within the meaning of section 10. If the Program as you 390 | received it, or any part of it, contains a notice stating that it is 391 | governed by this License along with a term that is a further 392 | restriction, you may remove that term. 
If a license document contains 393 | a further restriction but permits relicensing or conveying under this 394 | License, you may add to a covered work material governed by the terms 395 | of that license document, provided that the further restriction does 396 | not survive such relicensing or conveying. 397 | 398 | If you add terms to a covered work in accord with this section, you 399 | must place, in the relevant source files, a statement of the 400 | additional terms that apply to those files, or a notice indicating 401 | where to find the applicable terms. 402 | 403 | Additional terms, permissive or non-permissive, may be stated in the 404 | form of a separately written license, or stated as exceptions; 405 | the above requirements apply either way. 406 | 407 | 8. Termination. 408 | 409 | You may not propagate or modify a covered work except as expressly 410 | provided under this License. Any attempt otherwise to propagate or 411 | modify it is void, and will automatically terminate your rights under 412 | this License (including any patent licenses granted under the third 413 | paragraph of section 11). 414 | 415 | However, if you cease all violation of this License, then your 416 | license from a particular copyright holder is reinstated (a) 417 | provisionally, unless and until the copyright holder explicitly and 418 | finally terminates your license, and (b) permanently, if the copyright 419 | holder fails to notify you of the violation by some reasonable means 420 | prior to 60 days after the cessation. 421 | 422 | Moreover, your license from a particular copyright holder is 423 | reinstated permanently if the copyright holder notifies you of the 424 | violation by some reasonable means, this is the first time you have 425 | received notice of violation of this License (for any work) from that 426 | copyright holder, and you cure the violation prior to 30 days after 427 | your receipt of the notice. 428 | 429 | Termination of your rights under this section does not terminate the 430 | licenses of parties who have received copies or rights from you under 431 | this License. If your rights have been terminated and not permanently 432 | reinstated, you do not qualify to receive new licenses for the same 433 | material under section 10. 434 | 435 | 9. Acceptance Not Required for Having Copies. 436 | 437 | You are not required to accept this License in order to receive or 438 | run a copy of the Program. Ancillary propagation of a covered work 439 | occurring solely as a consequence of using peer-to-peer transmission 440 | to receive a copy likewise does not require acceptance. However, 441 | nothing other than this License grants you permission to propagate or 442 | modify any covered work. These actions infringe copyright if you do 443 | not accept this License. Therefore, by modifying or propagating a 444 | covered work, you indicate your acceptance of this License to do so. 445 | 446 | 10. Automatic Licensing of Downstream Recipients. 447 | 448 | Each time you convey a covered work, the recipient automatically 449 | receives a license from the original licensors, to run, modify and 450 | propagate that work, subject to this License. You are not responsible 451 | for enforcing compliance by third parties with this License. 452 | 453 | An "entity transaction" is a transaction transferring control of an 454 | organization, or substantially all assets of one, or subdividing an 455 | organization, or merging organizations. 
If propagation of a covered 456 | work results from an entity transaction, each party to that 457 | transaction who receives a copy of the work also receives whatever 458 | licenses to the work the party's predecessor in interest had or could 459 | give under the previous paragraph, plus a right to possession of the 460 | Corresponding Source of the work from the predecessor in interest, if 461 | the predecessor has it or can get it with reasonable efforts. 462 | 463 | You may not impose any further restrictions on the exercise of the 464 | rights granted or affirmed under this License. For example, you may 465 | not impose a license fee, royalty, or other charge for exercise of 466 | rights granted under this License, and you may not initiate litigation 467 | (including a cross-claim or counterclaim in a lawsuit) alleging that 468 | any patent claim is infringed by making, using, selling, offering for 469 | sale, or importing the Program or any portion of it. 470 | 471 | 11. Patents. 472 | 473 | A "contributor" is a copyright holder who authorizes use under this 474 | License of the Program or a work on which the Program is based. The 475 | work thus licensed is called the contributor's "contributor version". 476 | 477 | A contributor's "essential patent claims" are all patent claims 478 | owned or controlled by the contributor, whether already acquired or 479 | hereafter acquired, that would be infringed by some manner, permitted 480 | by this License, of making, using, or selling its contributor version, 481 | but do not include claims that would be infringed only as a 482 | consequence of further modification of the contributor version. For 483 | purposes of this definition, "control" includes the right to grant 484 | patent sublicenses in a manner consistent with the requirements of 485 | this License. 486 | 487 | Each contributor grants you a non-exclusive, worldwide, royalty-free 488 | patent license under the contributor's essential patent claims, to 489 | make, use, sell, offer for sale, import and otherwise run, modify and 490 | propagate the contents of its contributor version. 491 | 492 | In the following three paragraphs, a "patent license" is any express 493 | agreement or commitment, however denominated, not to enforce a patent 494 | (such as an express permission to practice a patent or covenant not to 495 | sue for patent infringement). To "grant" such a patent license to a 496 | party means to make such an agreement or commitment not to enforce a 497 | patent against the party. 498 | 499 | If you convey a covered work, knowingly relying on a patent license, 500 | and the Corresponding Source of the work is not available for anyone 501 | to copy, free of charge and under the terms of this License, through a 502 | publicly available network server or other readily accessible means, 503 | then you must either (1) cause the Corresponding Source to be so 504 | available, or (2) arrange to deprive yourself of the benefit of the 505 | patent license for this particular work, or (3) arrange, in a manner 506 | consistent with the requirements of this License, to extend the patent 507 | license to downstream recipients. "Knowingly relying" means you have 508 | actual knowledge that, but for the patent license, your conveying the 509 | covered work in a country, or your recipient's use of the covered work 510 | in a country, would infringe one or more identifiable patents in that 511 | country that you have reason to believe are valid. 
512 | 513 | If, pursuant to or in connection with a single transaction or 514 | arrangement, you convey, or propagate by procuring conveyance of, a 515 | covered work, and grant a patent license to some of the parties 516 | receiving the covered work authorizing them to use, propagate, modify 517 | or convey a specific copy of the covered work, then the patent license 518 | you grant is automatically extended to all recipients of the covered 519 | work and works based on it. 520 | 521 | A patent license is "discriminatory" if it does not include within 522 | the scope of its coverage, prohibits the exercise of, or is 523 | conditioned on the non-exercise of one or more of the rights that are 524 | specifically granted under this License. You may not convey a covered 525 | work if you are a party to an arrangement with a third party that is 526 | in the business of distributing software, under which you make payment 527 | to the third party based on the extent of your activity of conveying 528 | the work, and under which the third party grants, to any of the 529 | parties who would receive the covered work from you, a discriminatory 530 | patent license (a) in connection with copies of the covered work 531 | conveyed by you (or copies made from those copies), or (b) primarily 532 | for and in connection with specific products or compilations that 533 | contain the covered work, unless you entered into that arrangement, 534 | or that patent license was granted, prior to 28 March 2007. 535 | 536 | Nothing in this License shall be construed as excluding or limiting 537 | any implied license or other defenses to infringement that may 538 | otherwise be available to you under applicable patent law. 539 | 540 | 12. No Surrender of Others' Freedom. 541 | 542 | If conditions are imposed on you (whether by court order, agreement or 543 | otherwise) that contradict the conditions of this License, they do not 544 | excuse you from the conditions of this License. If you cannot convey a 545 | covered work so as to satisfy simultaneously your obligations under this 546 | License and any other pertinent obligations, then as a consequence you may 547 | not convey it at all. For example, if you agree to terms that obligate you 548 | to collect a royalty for further conveying from those to whom you convey 549 | the Program, the only way you could satisfy both those terms and this 550 | License would be to refrain entirely from conveying the Program. 551 | 552 | 13. Use with the GNU Affero General Public License. 553 | 554 | Notwithstanding any other provision of this License, you have 555 | permission to link or combine any covered work with a work licensed 556 | under version 3 of the GNU Affero General Public License into a single 557 | combined work, and to convey the resulting work. The terms of this 558 | License will continue to apply to the part which is the covered work, 559 | but the special requirements of the GNU Affero General Public License, 560 | section 13, concerning interaction through a network will apply to the 561 | combination as such. 562 | 563 | 14. Revised Versions of this License. 564 | 565 | The Free Software Foundation may publish revised and/or new versions of 566 | the GNU General Public License from time to time. Such new versions will 567 | be similar in spirit to the present version, but may differ in detail to 568 | address new problems or concerns. 569 | 570 | Each version is given a distinguishing version number. 
If the 571 | Program specifies that a certain numbered version of the GNU General 572 | Public License "or any later version" applies to it, you have the 573 | option of following the terms and conditions either of that numbered 574 | version or of any later version published by the Free Software 575 | Foundation. If the Program does not specify a version number of the 576 | GNU General Public License, you may choose any version ever published 577 | by the Free Software Foundation. 578 | 579 | If the Program specifies that a proxy can decide which future 580 | versions of the GNU General Public License can be used, that proxy's 581 | public statement of acceptance of a version permanently authorizes you 582 | to choose that version for the Program. 583 | 584 | Later license versions may give you additional or different 585 | permissions. However, no additional obligations are imposed on any 586 | author or copyright holder as a result of your choosing to follow a 587 | later version. 588 | 589 | 15. Disclaimer of Warranty. 590 | 591 | THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY 592 | APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT 593 | HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY 594 | OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, 595 | THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 596 | PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM 597 | IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF 598 | ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 599 | 600 | 16. Limitation of Liability. 601 | 602 | IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING 603 | WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS 604 | THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY 605 | GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE 606 | USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF 607 | DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD 608 | PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), 609 | EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF 610 | SUCH DAMAGES. 611 | 612 | 17. Interpretation of Sections 15 and 16. 613 | 614 | If the disclaimer of warranty and limitation of liability provided 615 | above cannot be given local legal effect according to their terms, 616 | reviewing courts shall apply local law that most closely approximates 617 | an absolute waiver of all civil liability in connection with the 618 | Program, unless a warranty or assumption of liability accompanies a 619 | copy of the Program in return for a fee. 620 | 621 | END OF TERMS AND CONDITIONS 622 | 623 | How to Apply These Terms to Your New Programs 624 | 625 | If you develop a new program, and you want it to be of the greatest 626 | possible use to the public, the best way to achieve this is to make it 627 | free software which everyone can redistribute and change under these terms. 628 | 629 | To do so, attach the following notices to the program. It is safest 630 | to attach them to the start of each source file to most effectively 631 | state the exclusion of warranty; and each file should have at least 632 | the "copyright" line and a pointer to where the full notice is found. 
633 | 634 | 635 | Copyright (C) 636 | 637 | This program is free software: you can redistribute it and/or modify 638 | it under the terms of the GNU General Public License as published by 639 | the Free Software Foundation, either version 3 of the License, or 640 | (at your option) any later version. 641 | 642 | This program is distributed in the hope that it will be useful, 643 | but WITHOUT ANY WARRANTY; without even the implied warranty of 644 | MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 645 | GNU General Public License for more details. 646 | 647 | You should have received a copy of the GNU General Public License 648 | along with this program. If not, see . 649 | 650 | Also add information on how to contact you by electronic and paper mail. 651 | 652 | If the program does terminal interaction, make it output a short 653 | notice like this when it starts in an interactive mode: 654 | 655 | Copyright (C) 656 | This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 657 | This is free software, and you are welcome to redistribute it 658 | under certain conditions; type `show c' for details. 659 | 660 | The hypothetical commands `show w' and `show c' should show the appropriate 661 | parts of the General Public License. Of course, your program's commands 662 | might be different; for a GUI interface, you would use an "about box". 663 | 664 | You should also get your employer (if you work as a programmer) or school, 665 | if any, to sign a "copyright disclaimer" for the program, if necessary. 666 | For more information on this, and how to apply and follow the GNU GPL, see 667 | . 668 | 669 | The GNU General Public License does not permit incorporating your program 670 | into proprietary programs. If your program is a subroutine library, you 671 | may consider it more useful to permit linking proprietary applications with 672 | the library. If this is what you want to do, use the GNU Lesser General 673 | Public License instead of this License. But first, please read 674 | . 675 | --------------------------------------------------------------------------------