├── AROInstall.md
├── IPIonAWSGovManualIAM.md
├── IPIonAWSGovTerraform.md
├── IPIonMAGInstall.md
├── IPIonMAGInstallManualIAM.md
├── IPIonOvirtDisconnected.md
├── IPIonVMWare.md
├── LICENSE
├── README.md
├── SpartaInstall.md
├── appendix
│   ├── disconnected-registry-standalone-quay.md
│   └── disconnected-registry.md
├── aws-gov-ipi-dis-maniam
│   ├── account_names.txt
│   ├── aws-ebs-csi-driver-operator-policy.json
│   ├── aws-ebs-csi-driver-operator-secret-templ.yaml
│   ├── aws-gov-vpc-drawing.svg
│   ├── cloud-credential-operator-iam-ro-policy.json
│   ├── cloud-credential-operator-iam-ro-secret-templ.yaml
│   ├── cloud-credential-operator-s3-policy.json
│   ├── cloud-credential-operator-s3-secret-templ.yaml
│   ├── cloudformation.yaml
│   ├── containers-templ.json
│   ├── credentials.var
│   ├── install-config-template.yaml
│   ├── ocp-users.sh
│   ├── openshift-image-registry-policy.json
│   ├── openshift-image-registry-secret-templ.yaml
│   ├── openshift-ingress-policy.json
│   ├── openshift-ingress-secret-templ.yaml
│   ├── openshift-machine-api-aws-policy.json
│   ├── openshift-machine-api-aws-secret-templ.yaml
│   ├── role-policy-templ.json
│   ├── secret-helper.sh
│   └── trust-policy.json
├── aws-ipi-terraform
│   ├── .gitignore
│   ├── 00-auth.tf
│   ├── ec2.tf
│   ├── example-config.yaml
│   ├── igw.tf
│   ├── output.tf
│   ├── public_key.tf
│   ├── security-group.tf
│   ├── subnet.tf
│   └── vpc.tf
├── images
│   ├── ovirtVmCreateImg1.png
│   └── ovirtVmCreateImg2.png
└── sparta
    └── config.yml
/AROInstall.md:
--------------------------------------------------------------------------------
1 | # Installing Azure Red Hat OpenShift (ARO) in Disconnected Microsoft Azure
2 |
3 | ## Overview
4 |
5 | This guide will demonstrate how to install an Azure Red Hat OpenShift (ARO) cluster on an existing disconnected network in Microsoft Azure. The network disallows inbound connections from the internet and restricts outbound connections to the required [official list provided by Microsoft](https://docs.microsoft.com/en-us/azure/openshift/howto-restrict-egress#minimum-required-fqdn--application-rules).
6 |
7 | *Note*: Restricting outbound connections entirely violates the [Azure Red Hat OpenShift support policy](https://docs.microsoft.com/en-us/azure/openshift/support-policies-v4#cluster-configuration-requirements).
8 |
9 | ## Installation
10 |
11 | #### Set variables
12 |
13 | ```bash
14 | LOCATION=eastus
15 | RESOURCEGROUP=aro-disconnected
16 | CLUSTER=aro-disconnected
17 | VNET_NAME=aro-disconnected-vnet
18 | FIREWALL_NAME=aro-disconnected-firewall
19 | ```
20 |
21 | #### Create resource group
22 |
23 | ```bash
24 | az group create -l $LOCATION -n $RESOURCEGROUP
25 | ```
26 |
27 | #### Create virtual network
28 |
29 | ```bash
30 | az network vnet create \
31 |   --resource-group $RESOURCEGROUP \
32 |   --name $VNET_NAME \
33 |   --address-prefixes 10.0.0.0/16
34 | ```
35 |
36 | #### Create empty subnet for master nodes
37 | ```bash
38 | az network vnet subnet create \
39 |   --resource-group $RESOURCEGROUP \
40 |   --vnet-name $VNET_NAME \
41 |   --name master-subnet \
42 |   --address-prefixes 10.0.0.0/24 \
43 |   --service-endpoints Microsoft.ContainerRegistry
44 | ```
45 |
46 | *Note*: This subnet accesses Microsoft's internal container registry over a [private endpoint](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview). The internal container registry is used for provisioning the cluster, and it is not accessible for general use. See [networking](https://docs.microsoft.com/en-us/azure/openshift/concepts-networking) for more information.
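As an optional sanity check, you can confirm that the service endpoint was attached to the new subnet. A quick sketch using the variables defined above; the `--query` path follows the Azure subnet schema:

```bash
# Expect "Microsoft.ContainerRegistry" in the output
az network vnet subnet show \
  --resource-group $RESOURCEGROUP \
  --vnet-name $VNET_NAME \
  --name master-subnet \
  --query "serviceEndpoints[].service" -o tsv
```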
47 | 48 | #### Create empty subnet for worker nodes 49 | ```bash 50 | az network vnet subnet create \ 51 | --resource-group $RESOURCEGROUP \ 52 | --vnet-name $VNET_NAME \ 53 | --name worker-subnet \ 54 | --address-prefixes 10.0.1.0/24 \ 55 | --service-endpoints Microsoft.ContainerRegistry 56 | ``` 57 | 58 | *Note*: This subnet accesses Microsoft's internal container registry over a [private endpoint](https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoints-overview). The internal container registry is used for provisioning the cluster, and it is not accessible for general use. See [networking](https://docs.microsoft.com/en-us/azure/openshift/concepts-networking) for more information. 59 | 60 | #### Disable subnet private endpoint policies 61 | ```bash 62 | az network vnet subnet update \ 63 | --name master-subnet \ 64 | --resource-group $RESOURCEGROUP \ 65 | --vnet-name $VNET_NAME \ 66 | --disable-private-link-service-network-policies true 67 | ``` 68 | 69 | *Note*: This must be executed to allow Private Link connections to your cluster by Azure SREs 70 | 71 | #### Create a firewall subnet 72 | ```bash 73 | az network vnet subnet create -g $RESOURCEGROUP --vnet-name $VNET_NAME -n AzureFirewallSubnet --address-prefixes 10.0.10.0/26 74 | ``` 75 | 76 | *Note*: The Azure Firewall subnet size should be /26, see the [FAQ](https://docs.microsoft.com/en-us/azure/firewall/firewall-faq#why-does-azure-firewall-need-a--26-subnet-size). 77 | 78 | #### Create a public subnet for bastion host 79 | ```bash 80 | az network vnet subnet create -g $RESOURCEGROUP --vnet-name $VNET_NAME -n public-subnet --address-prefixes 10.0.2.0/24 81 | ``` 82 | 83 | #### Create Azure Firewall 84 | 85 | Create public IP 86 | ```bash 87 | az network public-ip create --name fw-pip --resource-group $RESOURCEGROUP --allocation-method static --sku standard 88 | ``` 89 | 90 | Create firewall and IP config 91 | ```bash 92 | az extension add -n azure-firewall 93 | az network firewall create -g $RESOURCEGROUP -n $FIREWALL_NAME --location $LOCATION 94 | az network firewall ip-config create --firewall-name $FIREWALL_NAME --name FW-config --public-ip-address fw-pip --resource-group $RESOURCEGROUP --vnet-name $VNET_NAME 95 | ``` 96 | 97 | Set Azure Firewall private IP address 98 | ```bash 99 | fwprivaddr=$(az network firewall ip-config list -g $RESOURCEGROUP -f $FIREWALL_NAME --query "[?name=='FW-config'].privateIpAddress" --output tsv) 100 | ``` 101 | 102 | #### Create routing table 103 | ```bash 104 | az network route-table create --name $FIREWALL_NAME-rt-table --resource-group $RESOURCEGROUP 105 | ``` 106 | 107 | #### Create firewall route 108 | ```bash 109 | az network route-table route create \ 110 | --resource-group $RESOURCEGROUP \ 111 | --name $FIREWALL_NAME-rt-table-route \ 112 | --route-table-name $FIREWALL_NAME-rt-table \ 113 | --address-prefix 0.0.0.0/0 \ 114 | --next-hop-type VirtualAppliance \ 115 | --next-hop-ip-address $fwprivaddr 116 | ``` 117 | 118 | #### Add outbound application rule 119 | ```bash 120 | az network firewall application-rule create \ 121 | -g $RESOURCEGROUP -f $FIREWALL_NAME \ 122 | --collection-name azure_ms \ 123 | --name azure \ 124 | --protocols 'http=80' 'https=443' \ 125 | --target-fqdns *.quay.io sso.redhat.com registry.redhat.io management.azure.com mirror.openshift.com api.openshift.com registry.access.redhat.com login.microsoftonline.com gcs.prod.monitoring.core.windows.net *.blob.core.windows.net *.servicebus.windows.net *.table.core.windows.net \ 126 | 
--source-addresses 10.0.0.0/24 10.0.1.0/24 \ 127 | --priority 100 --action Allow 128 | ``` 129 | 130 | #### Route internal traffic to firewall on the master and worker subnets 131 | ```bash 132 | az network vnet subnet update -g $RESOURCEGROUP --vnet-name $VNET_NAME --name master-subnet --route-table $FIREWALL_NAME-rt-table 133 | az network vnet subnet update -g $RESOURCEGROUP --vnet-name $VNET_NAME --name worker-subnet --route-table $FIREWALL_NAME-rt-table 134 | ``` 135 | 136 | #### Create the ARO cluster in the disconnected network 137 | ```bash 138 | az aro create \ 139 | --resource-group $RESOURCEGROUP \ 140 | --name $CLUSTER \ 141 | --vnet $VNET_NAME \ 142 | --master-subnet master-subnet \ 143 | --worker-subnet worker-subnet \ 144 | --apiserver-visibility Private \ 145 | --ingress-visibility Private 146 | ``` 147 | 148 | *Note*: Optionally add `--pull-secret` if you have a [Red Hat pull secret](https://docs.microsoft.com/en-us/azure/openshift/howto-add-update-pull-secret) 149 | 150 | #### Create Bastion Host in public subnet 151 | 152 | Create SSH key pair 153 | ```bash 154 | ssh-keygen -m PEM -t rsa -b 4096 -f azure-key 155 | ``` 156 | 157 | Create VM 158 | ```bash 159 | az vm create -n bastion -g $RESOURCEGROUP \ 160 | --image RedHat:RHEL:8.2:latest \ 161 | --size Standard_D2s_v3 \ 162 | --public-ip-address bastion-pub-ip \ 163 | --vnet-name $VNET_NAME --subnet public-subnet \ 164 | --admin-username azureuser \ 165 | --ssh-key-values azure-key.pub 166 | ``` 167 | 168 | ## Smoke Test 169 | 170 | #### Connect to ARO over public internet 171 | 172 | Set kubeadmin password 173 | ```bash 174 | KUBEADMIN_PASSWORD=$(az aro list-credentials --name $CLUSTER --resource-group $RESOURCEGROUP --query kubeadminPassword -o tsv) 175 | ``` 176 | 177 | Try logging in 178 | ```bash 179 | API_SERVER=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv) 180 | oc login $API_SERVER -u kubeadmin -p $KUBEADMIN_PASSWORD 181 | ``` 182 | 183 | The connection fails because the API server is not accessible to the public internet. 
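You can also verify this from the cluster resource itself. A small sketch, assuming the `apiserverProfile`/`ingressProfiles` field names of the ARO resource schema:

```bash
# Both values should report "Private"
az aro show -g $RESOURCEGROUP -n $CLUSTER \
  --query "{api: apiserverProfile.visibility, ingress: ingressProfiles[0].visibility}" -o json
```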
184 |
185 |
186 | #### Connect to ARO through Bastion host
187 |
188 | Make note of `API_SERVER` and `KUBEADMIN_PASSWORD`
189 | ```bash
190 | echo $API_SERVER
191 | echo $KUBEADMIN_PASSWORD
192 | ```
193 |
194 | SSH to the Bastion host
195 | ```bash
196 | BASTION_PUBLIC_IP=$(az vm show -d -g $RESOURCEGROUP -n bastion --query publicIps -o tsv)
197 | ssh -i azure-key azureuser@$BASTION_PUBLIC_IP
198 | ```
199 |
200 | Install `oc` on the Bastion host
201 |
202 | *Note*: You can find the latest release of the CLI [here](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/)
203 |
204 | ```bash
205 | $ wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
206 | $ mkdir openshift
207 | $ tar -zxvf openshift-client-linux.tar.gz -C openshift
208 | $ echo 'export PATH=$PATH:~/openshift' >> ~/.bashrc && source ~/.bashrc
209 | ```
210 |
211 | Log in to the cluster, using the `API_SERVER` and `KUBEADMIN_PASSWORD` values noted earlier
212 |
213 | ```bash
214 | $ oc login <API_SERVER> -u kubeadmin -p <KUBEADMIN_PASSWORD>
215 | $ oc whoami
216 | ```
217 |
218 | > Output
219 |
220 | ```
221 | kube:admin
222 | ```
223 |
224 | #### Test outbound connection to public internet
225 |
226 | *Note*: Make sure to connect to ARO through the Bastion host (see previous section)
227 |
228 | Attempt an outbound internet connection from a pod in the cluster
229 |
230 | ```bash
231 | $ oc exec -it alertmanager-main-0 -n openshift-monitoring -- curl redhat.com
232 | ```
233 |
234 | > Output (sample)
235 |
236 | ```
237 | HTTP request from 10.20.1.6:42876 to redhat.com:80. Url: redhat.com. Action: Deny. No rule matched. Proceeding with default action
238 | ```
239 |
240 | The connection is denied by the Azure Firewall.
241 |
-------------------------------------------------------------------------------- /IPIonAWSGovManualIAM.md: --------------------------------------------------------------------------------
1 | # Installing OpenShift in Disconnected AWS GovCloud using IPI
2 |
3 | ## NOTICE: OpenShift 4.7.10 may not install into GovCloud. Please see https://bugzilla.redhat.com/show_bug.cgi?id=1958420 for more info.
4 | ## Overview
5 |
6 | This guide is intended to demonstrate how to perform the OpenShift installation using the IPI method on AWS GovCloud. In addition, the guide will walk through performing this installation on an existing disconnected network. In other words, the network does not allow access to or from the internet.
7 |
8 | ## YouTube Video
9 |
10 | A video that walks through this guide is available here: https://youtu.be/bHmcWHF-sEA
11 |
12 | ## AWS Configuration Requirements for Demo
13 |
14 | ![Demo VPC Drawing](aws-gov-ipi-dis-maniam/aws-gov-vpc-drawing.svg)
15 |
16 |
17 |
18 |
19 |
20 |
21 | In this guide, we will install OpenShift onto an existing AWS GovCloud VPC. This VPC will contain three private subnets that have no connectivity to the internet, as well as a public subnet that will facilitate our access to the private subnets from the internet (bastion). We still need to allow access to the AWS APIs from the private subnets. For this demo, that AWS API communication is facilitated by a squid proxy. Without that access, we will not be able to install a cloud-aware OpenShift cluster.
22 |
23 | This guide will assume that the user has valid accounts and subscriptions to both Red Hat OpenShift and AWS GovCloud.
24 |
25 | A CloudFormation template that details the VPC with squid proxy used in this demo can be found [**here**](https://raw.githubusercontent.com/redhat-cop/ocp-disconnected-docs/main/aws-gov-ipi-dis-maniam/cloudformation.yaml).
26 |
27 | Before running the CloudFormation template, ensure the following are created.
28 |
29 | 1. A key-pair. This command will pull your local public key.
30 | ```sh
31 | aws ec2 import-key-pair --key-name disconnected-east-1 --public-key-material fileb://~/.ssh/id_rsa.pub
32 | ```
33 |
34 | 2. The VPC Network & Subnets. Copy the CloudFormation file to your local directory before running.
35 | ```sh
36 | aws cloudformation create-stack --stack-name ocpdd --template-body file://./cloudformation.yaml --capabilities CAPABILITY_IAM
37 | ```
38 |
39 | 3. Bastion on the public subnet of the created VPC
40 | - Allow 22 (default)
41 | - Ensure at least 50G of disk space is allocated
42 |
43 | Register the machine and ensure the following packages are installed
44 | - podman
45 | - unzip
46 | - aws-cli (see https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html#cliv2-linux-install)
47 |
48 | 4. Private registry on the private subnet of the created VPC
49 | - Allow 22 (default)
50 | - Allow 5000
51 | - Ensure at least 50G of disk space is allocated
52 | _You may wish to generate another key on the public bastion and add its public key here before creation. Use the same method as step 1 on the bastion with a unique name._
53 |
54 | Ensure the following binaries are transferred and installed
55 | - https://github.com/itchyny/gojq (mv release and rename to /usr/bin/jq)
56 | - aws-cli (transfer the install directory and run the install)
57 |
58 | #
59 | ## Installing OpenShift
60 |
61 | ### Create OpenShift Installation Bundle
62 | 1. Download and compress the stable release bundle on an internet-connected machine using the OpenShift4-mirror companion utility found **[here](https://github.com/redhat-cop/ocp-disconnected-docs.git)**
63 |
64 |
65 | You will first need to retrieve an OpenShift pull secret. Once you have retrieved it, set it as the `PULL_SECRET` variable referenced by the command below (e.g., `PULL_SECRET='<your pull secret>'`). Pull secrets can be obtained from https://cloud.redhat.com/openshift/install/aws/installer-provisioned
66 |
67 | ```bash
68 | OCP_VER=$(curl http://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/release.txt 2>&1 | grep -oP "(?<=Version:\s\s).*")
69 | podman run -it --security-opt label=disable -v ./:/app/bundle quay.io/redhatgov/openshift4_mirror:latest \
70 |   ./openshift_mirror bundle \
71 |   --openshift-version ${OCP_VER} \
72 |   --platform aws \
73 |   --skip-existing \
74 |   --skip-catalogs \
75 |   --pull-secret ${PULL_SECRET} && \
76 | git clone https://github.com/redhat-cop/ocp-disconnected-docs.git ./${OCP_VER}/ocp-disconnected && \
77 | tar -zcvf openshift-${OCP_VER}.tar.gz ${OCP_VER}
78 | ```
79 | 2.
Transfer bundle from internet connected machine to disconnected vpc host. 80 | 81 | # 82 | ### Prepare and Deploy 83 | 3. Extract bundle on disconnected vpc host. From the directory containing the OCP bundle. 84 | ```bash 85 | OCP_VER=$(ls | grep -oP '(?<=openshift-)\d\.\d\.\d(?=.tar.gz)') 86 | tar -xzvf openshift-${OCP_VER}.tar.gz 87 | ``` 88 | 89 | 4. Create S3 Bucket and attach policies. 90 | 91 | ```bash 92 | export awsreg=$(aws configure get region) 93 | export s3name=$(date +%s"-rhcos") 94 | aws s3api create-bucket --bucket ${s3name} --region ${awsreg} --create-bucket-configuration LocationConstraint=${awsreg} 95 | aws iam create-role --role-name vmimport --assume-role-policy-document "file://${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/trust-policy.json" 96 | envsubst < ./${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/role-policy-templ.json > ./${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/role-policy.json 97 | aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document "file://${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/role-policy.json" 98 | ``` 99 | 100 | 5. Upload RHCOS Image to S3 101 | 102 | ```bash 103 | export RHCOS_VER=$(ls ./${OCP_VER}/rhcos/ | grep -oP '.*(?=\.vmdk.gz)') 104 | gzip -d ./${OCP_VER}/rhcos/${RHCOS_VER}.vmdk.gz 105 | aws s3 mv ./${OCP_VER}/rhcos/${RHCOS_VER}.vmdk s3://${s3name} 106 | ``` 107 | 108 | 6. Create AMI 109 | 110 | ```bash 111 | envsubst < ./${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/containers-templ.json > ./${OCP_VER}/ocp-disconnected/containers.json 112 | taskid=$(aws ec2 import-snapshot --region ${awsreg} --description "rhcos-snapshot" --disk-container file://${OCP_VER}/ocp-disconnected/containers.json | jq -r '.ImportTaskId') 113 | until [[ $resp == "completed" ]]; do sleep 2; echo "Snapshot progress: "$(aws ec2 describe-import-snapshot-tasks --region ${awsreg} | jq --arg task "$taskid" -r '.ImportSnapshotTasks[] | select(.ImportTaskId==$task) | .SnapshotTaskDetail.Progress')"%"; resp=$(aws ec2 describe-import-snapshot-tasks --region ${awsreg} | jq --arg task "$taskid" -r '.ImportSnapshotTasks[] | select(.ImportTaskId==$task) | .SnapshotTaskDetail.Status'); done 114 | snapid=$(aws ec2 describe-import-snapshot-tasks --region ${awsreg} | jq --arg task "$taskid" '.ImportSnapshotTasks[] | select(.ImportTaskId==$task) | .SnapshotTaskDetail.SnapshotId') 115 | aws ec2 register-image \ 116 | --region ${awsreg} \ 117 | --architecture x86_64 \ 118 | --description "${RHCOS_VER}" \ 119 | --ena-support \ 120 | --name "${RHCOS_VER}" \ 121 | --virtualization-type hvm \ 122 | --root-device-name '/dev/xvda' \ 123 | --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId='${snapid}'}' 124 | ``` 125 | 126 | 7. Record the AMI ID from the output of the above command. 127 | 128 | 8. Create registry cert on disconnected vpc host 129 | ```bash 130 | export SUBJ="/C=US/ST=Virginia/O=Red Hat/CN=${HOSTNAME}" 131 | openssl req -newkey rsa:4096 -nodes -sha256 -keyout registry.key -x509 -days 365 -out registry.crt -subj "$SUBJ" -addext "subjectAltName = DNS:$HOSTNAME" 132 | ``` 133 | 134 | 9. Make a copy of the install config 135 | ```bash 136 | mkdir ./${OCP_VER}/config 137 | cp ./${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam/install-config-template.yaml ./${OCP_VER}/config/install-config.yaml 138 | ``` 139 | 10. Edit install config 140 | For this step, Open `./${OCP_VER}/config/install-config.yaml` and edit the following fields: 141 | 142 | ```yaml 143 | baseDomain: i.e. 
example.com
144 | additionalTrustBundle: copy and paste the content of ./registry.crt here.
145 | imageContentSources:
146 | mirrors: Only edit the registry hostname fields of this section. Make sure that you use the $HOSTNAME of the device that you are currently using.
147 | metadata:
148 | name: i.e. test-cluster
149 | networking:
150 | machineNetwork:
151 | - cidr: i.e. 10.0.41.0/20. Shorten or lengthen this list as needed.
152 | platform:
153 | aws:
154 | region: the default region of your configured aws cli
155 | zones: A list of availability zones that you are deploying into. Shorten or lengthen this list as needed.
156 | subnets: i.e. subnet-ef12d288. The length of this list must match the .networking.machineNetwork[].cidr length.
157 | amiID: the AMI ID recorded from step 7
158 | pullSecret: your pull secret enclosed in literals
159 | sshKey: i.e. ssh-rsa AAAAB3... No quotes
160 | ```
161 | Don't forget to save and close the file!
162 |
163 | 11. Make a backup of the final config:
164 | ```bash
165 | cp -R ./${OCP_VER}/config/ ./${OCP_VER}/config.bak
166 | ```
167 |
168 | 12. Create manifests from install config.
169 | ```bash
170 | openshift-install create manifests --dir ./${OCP_VER}/config
171 | ```
172 |
173 | 13. Create IAM users and policies
174 |
175 | ```bash
176 | cd ./${OCP_VER}/ocp-disconnected/aws-gov-ipi-dis-maniam
177 | chmod +x ./ocp-users.sh
178 | ./ocp-users.sh prepPolicies
179 | ./ocp-users.sh createUsers
180 | ```
181 |
182 | 14. Use the convenience script to create the AWS credentials and Kubernetes secrets:
183 | ```bash
184 | chmod +x ./secret-helper.sh
185 | ./secret-helper.sh
186 | cp secrets/* ../../config/openshift/
187 | cd -
188 | ```
189 |
190 | 15. Start up the registry in the background
191 | ```bash
192 | oc image serve --dir=./${OCP_VER}/release/ --tls-crt=./registry.crt --tls-key=./registry.key &
193 | ```
194 |
195 | 16. Deploy the cluster
196 |
197 | ```
198 | openshift-install create cluster --dir ./${OCP_VER}/config
199 | ```
200 | #
201 | ### Cluster Access
202 |
203 | You can now access the cluster via CLI with oc or the web console with a web browser.
204 |
205 | 1. Locate the OpenShift access information provided by the final installer output.
206 |
207 | Example:
208 | ```
209 | INFO Waiting up to 10m0s for the openshift-console route to be created...
210 | INFO Install complete!
211 | INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/ec2-user/data/vid-pres/${OCP_VER}/config/auth/kubeconfig'
212 | INFO Access the OpenShift web-console here: https://console-openshift-console.apps.test-cluster.testocp1.net
213 | INFO Login to the console with user: "kubeadmin", and password: "z9yDP-2M6DS-oE9Im-Dcdzk"
214 | INFO Time elapsed: 48m34s
215 | ```
216 |
217 | 2. Set the default kube context used by oc and kubectl:
218 |
219 | Example:
220 | ```
221 | export KUBECONFIG=/home/ec2-user/data/vid-pres/4.7.0/config/auth/kubeconfig
222 | ```
223 |
224 | _Config file optionally available at `$OCP_VER/config/auth`_
225 |
226 | 3.
Access the web console:
227 |
228 | URL Example:
229 | `https://console-openshift-console.apps.test-cluster.testocp1.net`
230 |
231 | Credentials Example:
232 | ```
233 | INFO Login to the console with user: "kubeadmin", and password: "z9yDP-2M6DS-oE9Im-Dcdzk"
234 | ```
235 |
-------------------------------------------------------------------------------- /IPIonAWSGovTerraform.md: --------------------------------------------------------------------------------
1 | # AWS IPI Terraform
2 |
3 | Provisions the necessary resources to begin air-gapped bundle installations into an isolated VPC via a peering connection from a public VPC.
4 |
5 | [![asciicast](https://asciinema.org/a/7WK0adg1J9Q5rcqqdKJOpGt3t.svg)](https://asciinema.org/a/7WK0adg1J9Q5rcqqdKJOpGt3t)
6 |
7 | Current applicable overlays include the openshift4_mirror bundler.
8 |
9 | Prerequisites
10 | A local machine with Terraform installed, as well as AWS credentials either configured with the awscli or placed in `00-auth.tf`.
11 |
12 | Steps
13 |
14 | # Provision AWS Resources
15 | 1. Run `terraform init` and `terraform apply`
16 | 2. Note output values, including the public IP
17 |
18 | # Low-side Pull
19 | 1. Log in to the bastion.
20 | 2. Register the bastion with Red Hat `sudo subscription-manager register`
21 | 3. Install podman & git `sudo dnf install podman git -y`
22 | 4. Define PULL_SECRET `PULL_SECRET=''`
23 | 5. Define OCP_VER `OCP_VER=$(curl http://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/release.txt 2>&1 | grep -oP "(?<=Version:\s\s).*")`
24 | 6. Bundle images
25 | ```bash
26 | podman run -it --security-opt label=disable -v ./:/app/bundle quay.io/redhatgov/openshift4_mirror:latest \
27 |   ./openshift_mirror bundle \
28 |   --openshift-version ${OCP_VER} \
29 |   --platform aws \
30 |   --skip-existing \
31 |   --skip-catalogs \
32 |   --pull-secret ${PULL_SECRET}
33 | ```
34 | 7. Bundle the repository `git clone https://github.com/redhat-cop/ocp-disconnected-docs.git ./${OCP_VER}/ocp-disconnected`
35 |
36 | # Transfer Bundle
37 | Tar, move, copy, rsync, or airgap-walk the bundle across. This example uses rsync.
38 |
39 | 1. Generate an ssh key on the bastion `ssh-keygen`
40 | 2. Uncomment the private instance in ec2.tf
41 | 3. Uncomment and paste the contents of `~/.ssh/id_rsa.pub` in the public_key.tf's bastion_key resource
42 | 4. Uncomment the output of the private instance's address
43 | 5. Run `terraform apply`
44 | 6. Note the private instance's DNS name
45 | 7. Confirm access to the private bastion via `ssh ec2-user@private-bastion-address`. (It may take a minute to initialize)
46 | 8. Sync the bundle `rsync -azvP ${OCP_VER} private-bastion-address:~`
47 |
48 | # High-side Deploy
49 | Prepare the deployment.
50 | 1. Log in to the private instance `ssh private-bastion-address`
51 | 2. Define the version context `export OCP_VER=4.8.4`
52 | 3. Prepare the config directory `mkdir ./${OCP_VER}/config`
53 | 4. Prepare the registry cert
54 | ```bash
55 | export SUBJ="/C=US/ST=Virginia/O=Red Hat/CN=${HOSTNAME}"
56 | openssl req -newkey rsa:4096 -nodes -sha256 -keyout registry.key -x509 -days 365 -out registry.crt -subj "$SUBJ" -addext "subjectAltName = DNS:$HOSTNAME"
57 | ```
58 | 5. Adjust the `example-config.yaml` with the proper values, replacing the subnets and mirror location with previous outputs and the registry cert with `registry.crt`'s contents.
59 | 6. Rename it `install-config.yaml` and place it in `./${OCP_VER}/config/install-config.yaml`
60 | ```bash
61 | cat > ./${OCP_VER}/config/install-config.yaml
62 | # paste the config contents, then press Ctrl-D (^d) to write the file
63 | ```
64 |
65 | 7.
Copy binaries to a PATH directory `sudo cp ./${OCP_VER}/bin/* /usr/bin` 66 | 67 | # Install OpenShift 4 68 | Install OpenShift 4 69 | 1. Serve images `oc image serve --dir=./${OCP_VER}/release/ --tls-crt=./registry.crt --tls-key=./registry.key &` 70 | 2. Run the installer `openshift-install create cluster --dir ./${OCP_VER}/config --log-level=debug` 71 | 72 | 73 | -------------------------------------------------------------------------------- /IPIonMAGInstall.md: -------------------------------------------------------------------------------- 1 | 2 | # Installing OpenShift in Disconnected Microsoft Azure Government using IPI 3 | 4 | 5 | ## Overview 6 | 7 | This guide is intended to demonstrate how to perform the OpenShift installation using the IPI method on Microsoft Azure Government. In addition, the guide will walk through performing this installation on an existing disconnected network. In other words the network does not allow access to and from the internet. 8 | 9 | ## YouTube Video 10 | 11 | A video that walks through this guide is available here: https://youtu.be/JcoTBcm3cIc 12 | 13 | ## MAG Configuration Requirements 14 | 15 | In this guide, we will install OpenShift onto an existing virtual network. This virtual network will contain two private subnets that are firewalled off from access to and from the internet. As we will need a way to gain access to those subnets, there is one subnet that will be the public subnet and that will host the bastion node from which we will use to access the private network. The following section entitled Example MAG configuration details the network configuration used in the guide. While the internet is firewalled off from the private network, we still need to allow access to the Azure and Azure Government cloud APIs. Without that we will not be able to install a cloud aware OpenShift cluster. Please note the firewall rules created that allow this access to the Azure cloud APIs. 16 | 17 | This guide will assume that the user has valid accounts and subscriptions to both Red Hat OpenShift and MS Azure Government. This guide will also assume that an SSH keypair was created and the files azure-key.pem and azure-key.pub both exist. 18 | 19 | 20 | ### Example MAG Configuration 21 | 22 | The following section may be used to create a virtual network with the following components. 23 | 24 | * Service Principal Account 25 | * Azure Virtual Network 26 | * Private DNS zone 27 | * Firewall 28 | * Public and Private subnets 29 | * Bastion Host 30 | * Registry Host 31 | 32 | For the purpose of this demo, it is the assumed that these components will be provided by the user or the cloud administrator. However, IPI can also create these components for you, if desired. Please create or provide components according to the examples in this section. 33 | 34 | If you have already created and validated these resources, please skip to the Installing OpenShift section. 35 | 36 | #### 1. Obtain Azure CLI and login 37 | 38 | Use the link below and follow the instructions to install the Azure CLI 39 | 40 | 41 | 42 | * [https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest) 43 | 44 | Login to Azure and set the cloud provider 45 | 46 | 47 | ``` 48 | az login 49 | 50 | az cloud set --name AzureUSGovernment 51 | ``` 52 | 53 | 54 | #### 2. 
Create Service Principal account 55 | 56 | Use the link below and follow the instructions to create the Service Principal account. Make note of the subscription id, tenant id, client id and password token, these will be used later in this guide. 57 | 58 | 59 | 60 | * [https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-account.html#installation-azure-service-principal_installing-azure-account](https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-account.html#installation-azure-service-principal_installing-azure-account) 61 | 62 | #### 3. Create Resource Group 63 | 64 | Create a resource group where AZURE_REGION is either usgovtexas or usgovvirginia 65 | 66 | 67 | ``` 68 | az group create -l -n 69 | ``` 70 | 71 | 72 | #### 4. Create VNET 73 | 74 | 75 | ``` 76 | az network vnet create -g -n --address-prefixes 10.1.0.0/16 77 | ``` 78 | 79 | 80 | #### 5. Create FW Rules and Route table for Private Subnets 81 | 82 | The Firewall will block traffic to and from the internet. In order for the OpenShift cluster to be cloud aware and to be able to run the IPI method of install, we need to allow access to the Azure and Azure for Government APIs. 83 | 84 | 85 | ``` 86 | az extension add -n azure-firewall 87 | 88 | az network firewall create -g -n 89 | 90 | az network vnet subnet create \ 91 | -g \ 92 | --vnet-name \ 93 | -n AzureFirewallSubnet \ 94 | --address-prefixes 10.1.10.0/24 95 | 96 | az network public-ip create \ 97 | --name fw-pip \ 98 | --resource-group \ 99 | --allocation-method static \ 100 | --sku standard 101 | 102 | az network firewall ip-config create \ 103 | --firewall-name \ 104 | --name FW-config \ 105 | --public-ip-address fw-pip \ 106 | --resource-group \ 107 | --vnet-name 108 | 109 | fwprivaddr=$( \ 110 | az network firewall ip-config list \ 111 | -g \ 112 | -f \ 113 | --query "[?name=='FW-config'].privateIpAddress" \ 114 | --output tsv) 115 | 116 | az network route-table create \ 117 | --name Firewall-rt-table \ 118 | --resource-group \ 119 | --disable-bgp-route-propagation true 120 | 121 | az network route-table route create \ 122 | --resource-group ${RG} \ 123 | --name fw-route \ 124 | --route-table-name Firewall-rt-table \ 125 | --address-prefix 0.0.0.0/0 \ 126 | --next-hop-type VirtualAppliance \ 127 | --next-hop-ip-address $fwprivaddr 128 | 129 | az network firewall application-rule create \ 130 | --collection-name azure_gov \ 131 | --firewall-name \ 132 | --name azure \ 133 | --protocols Http=80 Https=443 \ 134 | --resource-group \ 135 | --target-fqdns \ 136 | *microsoftonline.us \ 137 | *graph.windows.net \ 138 | *usgovcloudapi.net \ 139 | *applicationinsights.us \ 140 | *microsoft.us \ 141 | --source-addresses 10.1.1.0/24 10.1.2.0/24 \ 142 | --priority 100 \ 143 | --action Allow 144 | 145 | az network firewall application-rule create \ 146 | --collection-name azure_ms \ 147 | --firewall-name \ 148 | --name azure \ 149 | --protocols Http=80 Https=443 \ 150 | --resource-group \ 151 | --target-fqdns \ 152 | *azure.com *microsoft.com \ 153 | *microsoftonline.com \ 154 | *windows.net \ 155 | --source-addresses 10.1.1.0/24 10.1.2.0/24 \ 156 | --priority 200 \ 157 | --action Allow 158 | ``` 159 | 160 | 161 | #### 6. Create Public Subnet for Bastion 162 | 163 | 164 | ``` 165 | az network vnet subnet create \ 166 | -g \ 167 | --vnet-name \ 168 | -n \ 169 | --address-prefixes 10.1.0.0/24 170 | ``` 171 | 172 | 173 | #### 7. 
Create Private Subnet for Control Plane 174 | 175 | 176 | ``` 177 | az network vnet subnet create \ 178 | -g \ 179 | --vnet-name \ 180 | -n \ 181 | --address-prefixes 10.1.1.0/24 \ 182 | --route-table Firewall-rt-table 183 | ``` 184 | 185 | 186 | #### 8. Create Private Subnet for Compute Plane 187 | 188 | 189 | ``` 190 | az network vnet subnet create \ 191 | -g \ 192 | --vnet-name \ 193 | -n \ 194 | --address-prefixes 10.1.2.0/24 \ 195 | --route-table Firewall-rt-table 196 | ``` 197 | 198 | 199 | #### 9. Create Bastion Host in Public Subnet 200 | 201 | Note: Ensure that the file azure-key.pub exists in the current working directory. Also, if the operator catalog will also be downloaded copied over, please adjust the os-disk-size-gb value accordingly. 202 | 203 | 204 | ``` 205 | az vm create -n -g \ 206 | --image RedHat:RHEL:8.2:latest \ 207 | --size Standard_D2s_v3 \ 208 | --os-disk-size-gb 150 \ 209 | --public-ip-address bastion-pub-ip \ 210 | --vnet-name --subnet \ 211 | --admin-username azureuser \ 212 | --ssh-key-values azure-key.pub 213 | ``` 214 | 215 | 216 | #### 10. Create Registry Host in Private Subnet 217 | 218 | Note: Ensure that the file azure-key.pub exists in the current working directory. Also, if the operator catalog will also be downloaded copied over, please adjust the os-disk-size-gb value accordingly. 219 | 220 | 221 | ``` 222 | az vm create -n -g \ 223 | --image RedHat:RHEL:8.2:latest \ 224 | --size Standard_D2s_v3 \ 225 | --os-disk-size-gb 150 \ 226 | --public-ip-address '' \ 227 | --vnet-name --subnet \ 228 | --admin-username azureuser \ 229 | --ssh-key-values azure-key.pub 230 | ``` 231 | 232 | 233 | #### 11. Create Private DNS and add A Record for Registry host 234 | 235 | The REGISTRY_IP is the private ip address assigned to the Registry host in the previous step. 236 | 237 | 238 | ``` 239 | az network private-dns zone create -g -n 240 | 241 | az network private-dns link vnet create \ 242 | -g -n private-dnslink \ 243 | -z -v -e true 244 | 245 | az network private-dns record-set a add-record \ 246 | -g \ 247 | -z \ 248 | -n registry \ 249 | -a 250 | ``` 251 | 252 | 253 | #### 12. Resize Logical Volume on Bastion 254 | 255 | 256 | ``` 257 | scp -i azure-key.pem azure-key.pem azureuser@${BASTION_PUBLIC_IP}:~/.ssh/azure-key.pem 258 | 259 | ssh -i azure-sshkey.pem azureuser@${BASTION_PUBLIC_IP} 260 | 261 | sudo lsblk #identify blk dev where home is mapped to (ex /dev/sda2) 262 | sudo parted -l #when prompted type 'fix' 263 | sudo growpart /dev/sda 2 264 | sudo pvresize /dev/sda2 265 | sudo pvscan 266 | sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv 267 | ``` 268 | 269 | 270 | #### 13. Resize Logical Volume on Registry 271 | 272 | 273 | ``` 274 | #From bastion 275 | ssh -i ~/.ssh azure-sshkey.pem azureuser@registry. 276 | 277 | sudo lsblk # identify blk dev where home is mapped to (ex /dev/sda2) 278 | sudo parted -l #when prompted type 'fix' 279 | sudo growpart /dev/sda 2 280 | sudo pvresize /dev/sda2 281 | sudo pvscan 282 | sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv 283 | ``` 284 | 285 | 286 | 287 | ## Installing OpenShift 288 | 289 | ### Bundling Content and Moving it to the Disconnected Environment 290 | 291 | #### 1. Create Bundle on Bastion 292 | 293 | In order to capture all the artifacts needed to install openshift, this guide will use a tool called openshift4_mirror. 
Please see [https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/openshift4-mirror](https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/openshift4-mirror) for more information about this tool. In addition, the pull-secret will need to be obtained from [https://cloud.redhat.com/openshift/install/pull-secret](https://cloud.redhat.com/openshift/install/pull-secret). If the operator catalogs are also needed, ensure that there is enough disk space and remove the --skip-catalogs flag. 294 | 295 | The following steps require OpenShift 4.6 and above. Replace with the specific target install version, such as 4.7.0. 296 | 297 | 298 | ``` 299 | #From Bastion 300 | sudo dnf install podman 301 | 302 | mkdir mirror && cd mirror 303 | 304 | podman run -it -v ./:/app/bundle:Z quay.io/redhatgov/openshift4_mirror:latest 305 | 306 | ./openshift_mirror bundle \ 307 | --openshift-version \ 308 | --platform azure \ 309 | --skip-existing --skip-catalogs \ 310 | --pull-secret '' 311 | 312 | #exit by using ctrl-d 313 | 314 | tar czf OpenShiftBundle-.tgz / 315 | 316 | ``` 317 | 318 | 319 | #### 2. Push Bundle to Registry 320 | 321 | 322 | ``` 323 | #From Bastion 324 | scp -i ~/.ssh/azure-key.pem OpenShiftBundle-.tgz registry.:~ 325 | 326 | ssh -i ~/.ssh/azure-key.pem registry. 327 | ``` 328 | 329 | 330 | #### 3. Start Image Registry 331 | 332 | For the purpose of this demo, we will use a temporary registry to serve the OpenShift install media. PLEASE NOTE: you should replace this step with a registry of your choice. 333 | 334 | 335 | ``` 336 | #From Registry 337 | 338 | tar xzf OpenShiftBundle-.tgz 339 | 340 | cd 341 | 342 | openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt -subj "/CN=registry./O=Red Hat/L=Default City/ST=TX/C=US" 343 | 344 | sudo firewall-cmd --zone=public --permanent --add-port=5000/tcp 345 | sudo firewall-cmd --reload 346 | 347 | bin/oc image serve --dir=$PWD/release/ --tls-crt=domain.crt --tls-key=domain.key 348 | 349 | #Test from Bastion 350 | curl -k https://registry.:5000/v2/openshift/ 351 | ``` 352 | 353 | 354 | #### 4. Prep install-config.yaml 355 | 356 | The install-config.yaml file provides the installer with the parameters for the deployment. First, create the file. 357 | 358 | ``` 359 | #From Registry 360 | 361 | cd && mkdir ocp_install && cd ocp_install 362 | 363 | vi install-config.yaml # copy and paste install-config.template from below 364 | 365 | #Edit template as needed 366 | ``` 367 | 368 | #### 5. install-config.template 369 | 370 | The template below has defined the parameters for this use case. Please supply the user specific content. Note, that is FIPS cryptography is required, this must be set in the install-config.yaml prior to installation. It cannot be changed post-installation. 
371 | 372 | 373 | ``` 374 | apiVersion: v1 375 | baseDomain: 376 | compute: 377 | - hyperthreading: Enabled 378 | name: worker 379 | platform: 380 | azure: 381 | osDisk: 382 | diskSizeGB: 512 383 | type: Standard_D2s_v3 384 | replicas: 4 385 | controlPlane: 386 | hyperthreading: Enabled 387 | name: master 388 | platform: 389 | azure: 390 | osDisk: 391 | diskSizeGB: 512 392 | type: Standard_D8s_v3 393 | replicas: 3 394 | metadata: 395 | creationTimestamp: null 396 | name: 397 | networking: 398 | clusterNetwork: 399 | - cidr: 10.11.0.0/16 400 | hostPrefix: 23 401 | machineNetwork: 402 | - cidr: 10.1.1.0/24 403 | - cidr: 10.1.2.0/24 404 | networkType: OpenShiftSDN 405 | serviceNetwork: 406 | - 172.30.0.0/16 407 | platform: 408 | azure: 409 | baseDomainResourceGroupName: 410 | cloudName: AzureUSGovernmentCloud 411 | computeSubnet: 412 | controlPlaneSubnet: 413 | networkResourceGroupName: 414 | outboundType: UserDefinedRouting 415 | region: 416 | virtualNetwork: 417 | publish: Internal 418 | fips: 419 | pullSecret: | 420 | { "auths": { "": { "auth": "", "email": "example@redhat.com" } } } 421 | additionalTrustBundle: | 422 | -----BEGIN CERTIFICATE----- 423 | MIIFozCCA4ugAwIBAgIUKcifYaM+d4mCC6RNgnKUpFFARfswDQYJKoZIhvcNAQEL 424 | ... 425 | -----END CERTIFICATE----- 426 | imageContentSources: 427 | - mirrors: 428 | - :5000/openshift/release 429 | source: quay.io/openshift-release-dev/ocp-release 430 | - mirrors: 431 | - :5000/openshift/release 432 | source: registry.svc.ci.openshift.org/ocp/release 433 | - mirrors: 434 | - :5000/openshift/release 435 | source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 436 | sshKey: | 437 | ssh-rsa AAAAB3Nza... 438 | ``` 439 | 440 | 441 | ### Run OpenShift Install 442 | 443 | The first time the install is run, it will prompt for the azure subscription id, tenant id, client id, and client secret/password. These values will need to correspond to the service principal account required for the installation. It will then save this to $HOME/.azure/osServicePrincipal.json and will reference that file for future runs. 444 | 445 | 446 | ``` 447 | #From Bastion 448 | 449 | cd ~/ 450 | 451 | bin/openshift-install create cluster --dir=/home/azureuser/ocp_install/ --log-level=debug 452 | ``` 453 | 454 | 455 | Once the installation completes successfully, the logs will print out the URL to the OpenShift console along with the password for the kubeadmin account. Please note that you will need to establish a VPN connection, or some like method in order to be able to access the web console. Additionally, It will print the path to the kubeconfig file that may be used with the OpenShift CLI (oc) to connect to the OpenShift API service. The following is an example of the logs. 456 | 457 | 458 | ``` 459 | INFO Install complete! 
460 | INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/azureuser/ocp_install/auth/kubeconfig'
461 | INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift.example.com
462 | INFO Login to the console with user: "kubeadmin", and password: "XXXXX-XXXXX-XXXXX-XXXXX"
463 | DEBUG Time elapsed per stage:
464 | DEBUG Infrastructure: 13m57s
465 | DEBUG Bootstrap Complete: 9m25s
466 | DEBUG Bootstrap Destroy: 5m57s
467 | DEBUG Cluster Operators: 12m39s
468 | INFO Time elapsed: 42m7s
469 | ```
470 |
-------------------------------------------------------------------------------- /IPIonMAGInstallManualIAM.md: --------------------------------------------------------------------------------
1 |
2 | # Installing OpenShift in Disconnected Microsoft Azure Government using IPI with Manually Created IAM
3 |
4 |
5 | ## Overview
6 |
7 | This guide is intended to demonstrate how to perform the OpenShift installation using the IPI method on Microsoft Azure Government. In addition, the guide will walk through performing this installation on an existing disconnected network. In other words, the network does not allow access to or from the internet. Finally, this installation will not store administrative credentials and will disable the cloud credential operator from automatically creating new service principal accounts for use by other services. Instead, those accounts will be created manually.
8 |
9 | ## YouTube Video
10 |
11 | A video that walks through this guide is available here: https://youtu.be/cAdGCLQ15zI
12 |
13 | ## MAG Configuration Requirements
14 |
15 | In this guide, we will install OpenShift onto an existing virtual network. This virtual network will contain two private subnets that are firewalled off from access to and from the internet. As we will need a way to gain access to those subnets, one subnet will be public and will host the bastion node, which we will use to access the private network. The following section, entitled Example MAG Configuration, details the network configuration used in the guide. While the internet is firewalled off from the private network, we still need to allow access to the Azure and Azure Government cloud APIs. Without that access, we will not be able to install a cloud-aware OpenShift cluster. Please note the firewall rules created that allow this access to the Azure cloud APIs.
16 |
17 | This guide will assume that the user has valid accounts and subscriptions to both Red Hat OpenShift and MS Azure Government. This guide will also assume that an SSH keypair was created and the files azure-key.pem and azure-key.pub both exist.
18 |
19 |
20 | ### Example MAG Configuration
21 |
22 | The following section may be used to create a virtual network with the following components.
23 |
24 |
25 |
26 | * Service Principal Account
27 | * Azure Virtual Network
28 | * Private DNS zone
29 | * Firewall
30 | * Public and Private subnets
31 | * Bastion Host
32 | * Registry Host
33 |
34 | For the purpose of this demo, it is assumed that these components will be provided by the user or the cloud administrator. However, IPI can also create these components for you, if desired. Please create or provide components according to the examples in this section.
35 |
36 | If you have already created and validated these resources, please skip to the Installing OpenShift section.
37 |
38 | #### 1.
Obtain Azure CLI and login 39 | 40 | Use the link below and follow the instructions to install the Azure CLI 41 | 42 | 43 | 44 | * [https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-yum?view=azure-cli-latest) 45 | 46 | Login to azure and set the cloud provider 47 | 48 | 49 | ``` 50 | az login 51 | 52 | az cloud set --name AzureUSGovernment 53 | ``` 54 | 55 | 56 | #### 2. Create/Request Service Principal account(s) 57 | 58 | In order to perform the install, one or more Service Principal accounts will need to be created. In order to perform the install, a Service Account with the role of ‘Contributor’ and 'User Access Administrator' will need to be created. In addition, there are a few services that require a Service Account in order to provide cloud aware functionality. This guide will use two Service Principal Accounts, one for the installation and one for each service. However, the user may opt to create additional Service Principal accounts as needed. The following document details how to obtain what credentials are needed. 59 | 60 | * [https://docs.openshift.com/container-platform/4.6/installing/installing_azure/manually-creating-iam-azure.html](https://docs.openshift.com/container-platform/4.6/installing/installing_azure/manually-creating-iam-azure.html) 61 | 62 | The following commands may be used to create the Service Principal for the installation. Make note of the subscription id, tenant id, client id and password token, these will be used later in this guide. 63 | 64 | 65 | ``` 66 | az ad sp create-for-rbac --role Contributor --name 67 | 68 | az role assignment create --role "User Access Administrator" \ 69 | --assignee-object-id $(az ad sp list --filter "appId eq ''" \ 70 | | jq '.[0].objectId' -r) 71 | 72 | ``` 73 | 74 | 75 | #### 3. Create Resource Group 76 | 77 | Create a resource group where AZURE_REGION is either usgovtexas or usgovvirginia 78 | 79 | 80 | ``` 81 | az group create -l -n 82 | ``` 83 | 84 | 85 | #### 4. Create VNET 86 | 87 | 88 | ``` 89 | az network vnet create -g -n --address-prefixes 10.1.0.0/16 90 | ``` 91 | 92 | 93 | #### 5. Create FW Rules and Route table for Private Subnets 94 | 95 | The Firewall will block traffic to and from the internet. In order for the OpenShift cluster to be cloud aware and to be able to run the IPI method of install, we need to allow access to the Azure and Azure for Government APIs. 
96 | 97 | 98 | ``` 99 | az extension add -n azure-firewall 100 | 101 | az network firewall create -g -n 102 | 103 | az network vnet subnet create \ 104 | -g \ 105 | --vnet-name \ 106 | -n AzureFirewallSubnet \ 107 | --address-prefixes 10.1.10.0/24 108 | 109 | az network public-ip create \ 110 | --name fw-pip \ 111 | --resource-group \ 112 | --allocation-method static \ 113 | --sku standard 114 | 115 | az network firewall ip-config create \ 116 | --firewall-name \ 117 | --name FW-config \ 118 | --public-ip-address fw-pip \ 119 | --resource-group \ 120 | --vnet-name 121 | 122 | fwprivaddr=$( \ 123 | az network firewall ip-config list \ 124 | -g \ 125 | -f \ 126 | --query "[?name=='FW-config'].privateIpAddress" \ 127 | --output tsv) 128 | 129 | az network route-table create \ 130 | --name Firewall-rt-table \ 131 | --resource-group \ 132 | --disable-bgp-route-propagation true 133 | 134 | az network route-table route create \ 135 | --resource-group ${RG} \ 136 | --name fw-route \ 137 | --route-table-name Firewall-rt-table \ 138 | --address-prefix 0.0.0.0/0 \ 139 | --next-hop-type VirtualAppliance \ 140 | --next-hop-ip-address $fwprivaddr 141 | 142 | az network firewall application-rule create \ 143 | --collection-name azure_gov \ 144 | --firewall-name \ 145 | --name azure \ 146 | --protocols Http=80 Https=443 \ 147 | --resource-group \ 148 | --target-fqdns \ 149 | *microsoftonline.us \ 150 | *graph.windows.net \ 151 | *usgovcloudapi.net \ 152 | *applicationinsights.us \ 153 | *microsoft.us \ 154 | --source-addresses 10.1.1.0/24 10.1.2.0/24 \ 155 | --priority 100 \ 156 | --action Allow 157 | 158 | az network firewall application-rule create \ 159 | --collection-name azure_ms \ 160 | --firewall-name \ 161 | --name azure \ 162 | --protocols Http=80 Https=443 \ 163 | --resource-group \ 164 | --target-fqdns \ 165 | *azure.com *microsoft.com \ 166 | *microsoftonline.com \ 167 | *windows.net \ 168 | --source-addresses 10.1.1.0/24 10.1.2.0/24 \ 169 | --priority 200 \ 170 | --action Allow 171 | ``` 172 | 173 | 174 | #### 6. Create Public Subnet for Bastion 175 | 176 | 177 | ``` 178 | az network vnet subnet create \ 179 | -g \ 180 | --vnet-name \ 181 | -n \ 182 | --address-prefixes 10.1.0.0/24 183 | ``` 184 | 185 | 186 | #### 7. Create Private Subnet for Control Plane 187 | 188 | 189 | ``` 190 | az network vnet subnet create \ 191 | -g \ 192 | --vnet-name \ 193 | -n \ 194 | --address-prefixes 10.1.1.0/24 \ 195 | --route-table Firewall-rt-table 196 | ``` 197 | 198 | 199 | #### 8. Create Private Subnet for Compute Plane 200 | 201 | 202 | ``` 203 | az network vnet subnet create \ 204 | -g \ 205 | --vnet-name \ 206 | -n \ 207 | --address-prefixes 10.1.2.0/24 \ 208 | --route-table Firewall-rt-table 209 | ``` 210 | 211 | 212 | #### 9. Create Bastion host in Public Subnet 213 | 214 | Note: Ensure that the file azure-key.pub exists in the current working directory. Also, if the operator catalog will also be downloaded copied over, please adjust the os-disk-size-gb value accordingly. 215 | 216 | 217 | ``` 218 | az vm create -n -g \ 219 | --image RedHat:RHEL:8.2:latest \ 220 | --size Standard_D2s_v3 \ 221 | --os-disk-size-gb 150 \ 222 | --public-ip-address bastion-pub-ip \ 223 | --vnet-name --subnet \ 224 | --admin-username azureuser \ 225 | --ssh-key-values azure-key.pub 226 | 227 | ``` 228 | 229 | 230 | #### 10. Create Registry Host in Private Subnet 231 | 232 | Note: Ensure that the file azure-key.pub exists in the current working directory. 
Also, if the operator catalog will also be downloaded copied over, please adjust the os-disk-size-gb value accordingly. 233 | 234 | 235 | ``` 236 | az vm create -n -g \ 237 | --image RedHat:RHEL:8.2:latest \ 238 | --size Standard_D2s_v3 \ 239 | --os-disk-size-gb 150 \ 240 | --public-ip-address '' \ 241 | --vnet-name --subnet \ 242 | --admin-username azureuser \ 243 | --ssh-key-values azure-key.pub 244 | ``` 245 | 246 | 247 | #### 11. Create Private DNS and add A Record for Registry host 248 | 249 | The REGISTRY_IP is the private ip address assigned to the Registry host in the previous step. 250 | 251 | 252 | ``` 253 | az network private-dns zone create -g -n 254 | 255 | az network private-dns link vnet create \ 256 | -g -n private-dnslink \ 257 | -z -v -e true 258 | 259 | az network private-dns record-set a add-record \ 260 | -g \ 261 | -z \ 262 | -n registry \ 263 | -a 264 | ``` 265 | 266 | 267 | #### 12. Resize Logical Volume on Bastion 268 | 269 | 270 | ``` 271 | scp -i azure-key.pem azure-key.pem azureuser@${BASTION_PUBLIC_IP}:~/.ssh/azure-key.pem 272 | 273 | ssh -i azure-sshkey.pem azureuser@${BASTION_PUBLIC_IP} 274 | 275 | sudo lsblk #identify blk dev where home is mapped to (ex /dev/sda2) 276 | sudo parted -l #when prompted type 'fix' 277 | sudo growpart /dev/sda 2 278 | sudo pvresize /dev/sda2 279 | sudo pvscan 280 | sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv 281 | ``` 282 | 283 | 284 | #### 13. Resize Logical Volume on Registry 285 | 286 | 287 | ``` 288 | #From bastion 289 | ssh -i ~/.ssh azure-sshkey.pem azureuser@registry. 290 | 291 | sudo lsblk # identify blk dev where home is mapped to (ex /dev/sda2) 292 | sudo parted -l #when prompted type 'fix' 293 | sudo growpart /dev/sda 2 294 | sudo pvresize /dev/sda2 295 | sudo pvscan 296 | sudo lvresize -r -L +125G /dev/mapper/rootvg-homelv 297 | ``` 298 | 299 | ## Installing OpenShift 300 | 301 | ### Bundling Content and Moving it to the Disconnected Environment 302 | 303 | #### 1. Create Bundle on Bastion 304 | 305 | In order to capture all the artifacts needed to install openshift, this guide will use a tool called openshift4_mirror. Please see [https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/openshift4-mirror](https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/openshift4-mirror) for more information about this tool. In addition, the pull-secret will need to be obtained from [https://cloud.redhat.com/openshift/install/pull-secret](https://cloud.redhat.com/openshift/install/pull-secret). If the operator catalogs are also needed, ensure that there is enough disk space and remove the --skip-catalogs flag. 306 | 307 | The following steps require OpenShift 4.6 and above. Replace with the specific target install version, such as 4.7.0. 308 | 309 | ``` 310 | #From Bastion 311 | sudo dnf install podman 312 | 313 | mkdir mirror && cd mirror 314 | 315 | podman run -it -v ./:/app/bundle:Z quay.io/redhatgov/openshift4_mirror:latest 316 | 317 | ./openshift_mirror bundle \ 318 | --openshift-version \ 319 | --platform azure \ 320 | --skip-existing --skip-catalogs \ 321 | --pull-secret '' 322 | 323 | #exit by using ctrl-d 324 | 325 | tar czf OpenShiftBundle-.tgz / 326 | 327 | ``` 328 | 329 | 330 | #### 2. Push Bundle to Registry 331 | 332 | 333 | ``` 334 | #From Bastion 335 | scp -i ~/.ssh/azure-key.pem OpenShiftBundle-.tgz registry.:~ 336 | 337 | ssh -i ~/.ssh/azure-key.pem registry.:~ 338 | ``` 339 | 340 | 341 | #### 3. 
Start Image Registry 342 | 343 | For the purpose of this demo, we will use a temporary registry to serve the OpenShift install media. PLEASE NOTE: you should replace this step with a registry of your choice. 344 | 345 | ``` 346 | #From Registry 347 | 348 | tar xzf OpenShiftBundle-.tgz 349 | 350 | cd 351 | 352 | openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt -subj "/CN=registry./O=Red Hat/L=Default City/ST=TX/C=US" 353 | 354 | sudo firewall-cmd --zone=public --permanent --add-port=5000/tcp 355 | sudo firewall-cmd --reload 356 | 357 | bin/oc image serve --dir=$PWD/release/ --tls-crt=domain.crt --tls-key=domain.key 358 | 359 | #Test from Bastion 360 | curl -k https://registry.:5000/v2/openshift/ 361 | ``` 362 | 363 | 364 | #### 4. Prep install-config.yaml 365 | 366 | 367 | ``` 368 | #From Registry 369 | 370 | cd && mkdir ocp_install && cd ocp_install 371 | 372 | vi install-config.yaml # copy and paste install-config.template from below 373 | 374 | #Edit template as needed 375 | ``` 376 | 377 | 378 | #### 5. install-config.template 379 | The template below has defined the parameters for this use case. Please supply the user specific content. Note, that is FIPS cryptography is required, this must be set in the install-config.yaml prior to installation. It cannot be changed post-installation. 380 | 381 | ``` 382 | apiVersion: v1 383 | baseDomain: 384 | compute: 385 | - hyperthreading: Enabled 386 | name: worker 387 | platform: 388 | azure: 389 | osDisk: 390 | diskSizeGB: 512 391 | type: Standard_D2s_v3 392 | replicas: 4 393 | controlPlane: 394 | hyperthreading: Enabled 395 | name: master 396 | platform: 397 | azure: 398 | osDisk: 399 | diskSizeGB: 512 400 | type: Standard_D8s_v3 401 | replicas: 3 402 | metadata: 403 | creationTimestamp: null 404 | name: 405 | networking: 406 | clusterNetwork: 407 | - cidr: 10.11.0.0/16 408 | hostPrefix: 23 409 | machineNetwork: 410 | - cidr: 10.1.1.0/24 411 | - cidr: 10.1.2.0/24 412 | networkType: OpenShiftSDN 413 | serviceNetwork: 414 | - 172.30.0.0/16 415 | platform: 416 | azure: 417 | baseDomainResourceGroupName: 418 | cloudName: AzureUSGovernmentCloud 419 | computeSubnet: 420 | controlPlaneSubnet: 421 | networkResourceGroupName: 422 | outboundType: UserDefinedRouting 423 | region: 424 | virtualNetwork: 425 | publish: Internal 426 | fips: 427 | pullSecret: | 428 | { "auths": { "": { "auth": "", "email": "example@redhat.com" } } } 429 | additionalTrustBundle: | 430 | -----BEGIN CERTIFICATE----- 431 | MIIFozCCA4ugAwIBAgIUKcifYaM+d4mCC6RNgnKUpFFARfswDQYJKoZIhvcNAQEL 432 | ... 433 | -----END CERTIFICATE----- 434 | imageContentSources: 435 | - mirrors: 436 | - :5000/openshift/release 437 | source: quay.io/openshift-release-dev/ocp-release 438 | - mirrors: 439 | - :5000/openshift/release 440 | source: registry.svc.ci.openshift.org/ocp/release 441 | - mirrors: 442 | - :5000/openshift/release 443 | source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 444 | sshKey: | 445 | ssh-rsa AAAAB3Nza... 446 | ``` 447 | 448 | 449 | #### 6. Create Manifest 450 | 451 | The first time the openshift-install binary is run, it will prompt for the azure subscription id, tenant id, client id, and client secret/password. These values will need to correspond to the service principal account required for the installation. It will then save this to $HOME/.azure/osServicePrincipal.json and will reference that file for future runs. 
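If you prefer to skip the interactive prompts, the file can be pre-seeded. This is a sketch: the field names below match what the installer writes for Azure, but verify them against your installer version, and the angle-bracketed values are placeholders for your own service principal credentials:

```bash
# Pre-create the credentials file that openshift-install would otherwise prompt for
mkdir -p ~/.azure
cat > ~/.azure/osServicePrincipal.json <<'EOF'
{
  "subscriptionId": "<SUBSCRIPTION_ID>",
  "clientId": "<CLIENT_ID>",
  "clientSecret": "<CLIENT_SECRET>",
  "tenantId": "<TENANT_ID>"
}
EOF
```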
```
#From Registry

cd ~/

bin/openshift-install create manifests --dir=/home/azureuser/ocp_install/ --log-level=debug
```

#### 7. Set Cloud Credentials Operator to Manual Mode

Update the Cloud Credential Operator to Manual mode instead of the default Mint mode, and remove the cloud credential secret.

```
cat << EOF > /home/azureuser/ocp_install/manifests/cco-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-credential-operator-config
  namespace: openshift-cloud-credential-operator
  annotations:
    release.openshift.io/create-only: "true"
data:
  disabled: "true"
EOF

rm /home/azureuser/ocp_install/openshift/99_cloud-creds-secret.yaml
```

#### 8. Create Credential Secrets

New secrets need to be created for each credential request that is created in the install. Please refer to the section regarding the creation of the service principals above for details on identifying what credentials are needed. For each credential request, we need to create a secret using the name and namespace defined in the request. In addition, we need to identify the ClusterID for this install. To do this, view the file /home/azureuser/ocp_install/.openshift_install_state.json and look for the ClusterID definition. Copy and save the InfraID value.

For each credential request, create a new file under /home/azureuser/ocp_install/openshift/ with a unique name and set the content of the file as follows:

```
kind: Secret
apiVersion: v1
metadata:
  name: <credential-request-name>
  namespace: <credential-request-namespace>
stringData:
  azure_subscription_id: "<subscription-id>"
  azure_client_id: "<client-id>"
  azure_client_secret: "<client-secret>"
  azure_tenant_id: "<tenant-id>"
  azure_resource_prefix: "<infra-id>"
  azure_resourcegroup: "<infra-id>-rg"
  azure_region: "<region>"
```

### Run OpenShift Install

```
#From Registry

bin/openshift-install create cluster --dir=/home/azureuser/ocp_install/ --log-level=debug
```

Once the installation completes successfully, the logs will print out the URL to the OpenShift console along with the password for the kubeadmin account. Please note that you will need to establish a VPN connection, or some similar method, in order to access the web console. Additionally, it will print the path to the kubeconfig file that may be used with the OpenShift CLI (oc) to connect to the OpenShift API service. The following is an example of the logs.

```
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/azureuser/ocp_install/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.openshift.dbasparta.io
INFO Login to the console with user: "kubeadmin", and password: "XXXXX-XXXXX-XXXXX-XXXXX"
DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 13m57s
DEBUG Bootstrap Complete: 9m25s
DEBUG  Bootstrap Destroy: 5m57s
DEBUG  Cluster Operators: 12m39s
INFO Time elapsed: 42m7s
```

-------------------------------------------------------------------------------- /IPIonOvirtDisconnected.md: --------------------------------------------------------------------------------

RHV/OVirt disconnected IPI installations
========================================

This guide covers how to install OCP 4.8+ on Red Hat RHV (oVirt) in a disconnected environment.

Definitions:
------------

- Disconnected
  - An environment that is prevented from accessing the public internet where OCP container images and executables are located.
  - The general process of getting software to a disconnected environment involves physical media with data being brought to the isolated network.
  - There is no temporary switch to allow access to the internet (air-gapped).
- Bastion host
  - A VM/host where installation is managed from. A bastion host exists within the disconnected network and as a temporary system on a connected platform.
- Network Services
  - DNS - Domain Name Service. This service translates names to IPs and is required for an OCP install.
  - NTP - Network Time Protocol. This service allows multiple hosts to synchronize their clocks, which is required for proper cluster functionality. This is required for an OCP install.
  - DHCP - Dynamic Host Configuration Protocol. This service provides BOOTP ethernet features resulting in automatic assignment of IP addresses for hosts. This is required for OCP installs using the IPI method.
- IPI
  - Installer Provisioned Infrastructure. The openshift installer creates the underlying infrastructure (VMs) that OpenShift is running on.
- UPI
  - User Provisioned Infrastructure. The installer does not create/manage the infrastructure, but needs to be "guided" and told where pre-configured infrastructure is located.
- Host
  + For this guide, a host is a VM running in RHV/oVirt. For bastions this could be a host running bare metal or on a different infrastructure. This guide will use "host" to describe a VM used for OpenShift.

# Prerequisites

This guide assumes an already installed and working version of RHV 4.4. The installation guide can be found here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html/product_guide/installation

Minimal configuration:
* 3 RHVH nodes with 16 cores, 128GB of memory each. 32 cores is recommended if ODF is needed as part of the installation.
* 10 Gbps switches for the RHVH nodes and attached infrastructure:
  - Storage server - Gluster or Ceph backend storage for RHVH
  - At least 1TB of total redundant storage (RAID 5, 6, 10 etc)
  - Separate storage network on nodes. For production use, each RHV node should have dual NICs/fiber for each interface:
    - *oVirtMgmt*: Front-end IP for management. This can be kept on 1 Gbps rather than 10 Gbps. This maps to the IP address of the RHVH node.
    - *OcpNetwork*: VM network for the OCP installation. This must be on the 10 Gbps network.
    - *StorageNetwork*: Network between RHVH nodes and storage nodes. Must be on 10 Gbps. Each RHVH node must have an IP in the storage network (can be on a separate VLAN).
* RHV Manager - this can be self-hosted or on a separate physical server. See the installation guide and installation options to see what will work for the environment. This system should have 8GB of RAM and at least 2 cores. Highly recommend using a valid certificate and not self-signed certs for the API.
* DNS, certificate management and DHCP services exist on the subnet selected for the OCP install. Suggest using FreeIPA and adding DHCP to this server. These are an easy two small VMs on RHV (FreeIPA should always be more than a single server, with replication).
  + TODO: Add section on configuring FreeIPA/DHCP
* RHV service account for the OCP installation. RHV can be configured to use Kerberos from FreeIPA for authentication - not required but "it helps". Create a separate account with rights to create/destroy VMs and virtual disk images (not admin!). This account will be used when configuring the openshift installer for RHV.

# RHV configuration

TODO .. add setup of networks, storage and users

# Online Bastion Configuration

Note - this bastion host must be connected to the internet. It's used to retrieve all software needed for the installation and make an export to take to the offline cluster. This host should/will not have access to the network where OCP eventually will be installed.

## Install the OC CLI
On a RHEL8 host, use a non-root account and execute the following content:

```bash
mkdir $HOME/{bin,Downloads} 2>/dev/null

ocp4url="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest"
bin="$HOME/bin"
ocptar="openshift-client-linux.tar.gz"
ocpinstall="openshift-install-linux.tar.gz"
rhcos="rhcos-openstack.x86_64.qcow2.gz"

for item in $ocptar $ocpinstall $rhcos; do
  curl -o "$HOME/Downloads/$item" "${ocp4url}/${item}"
done

tar -pxv -C $bin -f "$HOME/Downloads/${ocptar}"
tar -pxv -C $bin -f "$HOME/Downloads/${ocpinstall}"

rm $HOME/bin/README*

# This should return the version of oc if all worked
oc version
```

## Copy and make ready for export all container images

The RHEL8 host must have podman installed.

A [disconnected registry](appendix/disconnected-registry-standalone-quay.md) must be installed. Any registry can be used; this guide assumes QUAY.

The resulting files should be copied to $HOME/Downloads.

## Copy all files to offline media

The above steps result in several files, including openshift-${OCP_RELEASE}-release-bundle.tar.gz, that need to be copied to the $HOME/Downloads directory. From there, do a simple copy to a USB drive or use *genisoimage* to create an ISO that can be copied to DVDs. Note, these images will be sizable - standard CD-sized ISOs will not be large enough. Depending on the security requirements of the disconnected site, find the media that makes the most sense. Note, there's a good chance whatever media is picked will NOT be allowed to return, so be careful using something that you cannot afford to lose.
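If the ISO route is chosen, a minimal sketch follows; the output path and volume label are arbitrary, and you should confirm the bundle actually fits the target media first:

```bash
genisoimage -o /tmp/ocp-offline-bundle.iso -R -J -V OCP_BUNDLE $HOME/Downloads
sha256sum /tmp/ocp-offline-bundle.iso > /tmp/ocp-offline-bundle.iso.sha256
```

Carrying the checksum file alongside the image makes it easy to verify the copy once it reaches the disconnected side.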
The following files will exist:
* openshift-client-linux.tar.gz
* openshift-install-linux.tar.gz
* rhcos-openstack.x86_64.qcow2.gz (large)
* openshift-${OCP_RELEASE}-release-bundle.tar.gz (very large)
* postgres.tar
* redis.tar
* quay.tar

In addition, bring a RHEL 8 (everything) ISO if the site doesn't have Satellite or other sources for RHEL.

# Offline Bastion Setup

Assumption: RHV 4.4 already installed, configured

Goal: Install a RHEL host where installation and maintenance will be done from. This host will be used to validate the environment and hold configuration settings, management keys etc. The host will also be able to SSH into any OCP node for diagnostics - this access can be blocked by firewalls for any other host. When not in use, this host should be shut down but NOT removed.

## Confirm RHV installation

Log into the RHVM cluster as a cluster admin - admin rights to the oVirt cluster meant for OCP is a minimum. Check that the following exists:

* Cluster defined with at least 3 hosts - note the ID of this cluster (ie ccc53763-c479-410f-af0b-ec846929b46h). This is the Cluster ID and will be needed in the next section.
  + Cluster must have 3 hosts, each with at least 16 cores and 128GB of RAM. Recommend 32+ cores per host.
  + Ensure a logical network for OCP is defined - ie. "aServerNetwork". This network must be assigned to NICs on each host that are on a 10 Gbps switch. Two bonded NICs are recommended but not required.
  + Ensure the compatibility version is set to 4.6 for the cluster.
  + Cluster should have a storage network defined to separate it from the VM traffic
* Network Definition - "aServerNetwork" in this example
  + Take note of the vnicProfileID of the "aServerNetwork" network for OCP (ie 1bf648af-a1f6-4c13-8fd7-636a11b2fd36) - this is the Network ID needed for the install-config later in this guide.
* Storage definition
  + The installer will use a single storage domain for the VMs during creation, and create a Storage Class pointing to this storage domain for registry storage, etcd etc. Post installation you may want to add additional storage classes, where some could point to additional storage domains in oVirt, but for the installation we can only select one.
  + Storage network must be high IO - spinning disks are not supported/recommended. SSDs at a minimum; NVMe is recommended.
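The storage domain ID can be read from the RHV Management console (Storage -> Domains), or pulled from the REST API; a quick sketch with curl, where the hostname, credentials and CA path are placeholders and the exact JSON layout may vary slightly between RHV versions:

```bash
curl -s --cacert ~/ansible/data/ca.crt \
  -u 'rhevadmin@example.com:secretpassword' \
  -H 'Accept: application/json' \
  https://rhvm44.example.com/ovirt-engine/api/storagedomains \
  | jq '.storage_domain[] | {name, id}'
```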
If you cannot find the ID for the vnicProfileID in the web-console, the following ansible will return the ID (run from the bastion host that will be created in the next step - the IDs are not created until the install-config is to be created)

ovirt-network.yaml:
```ansible
---
- name: Retrieve vNIC ID
  hosts: localhost
  connection: local
  gather_facts: yes
  vars:
    network: "aServerNetwork"

  tasks:
    - block:
        - name: Obtain SSO token using username/password credentials
          ovirt_auth:
            url: "https://rhvm44.example.com/ovirt-engine/api"
            username: "rhevadmin@example.com"
            ca_file: "{{ lookup('env','HOME') }}/ansible/data/ca.crt"
            password: "secretpassword"
        - name: Get network info
          ovirt_network_info:
            auth: "{{ ovirt_auth }}"
            pattern: "name={{ network }}"
            fetch_nested: yes
          register: netinfo
        - debug:
            var: netinfo
      always:
        - name: Always revoke the SSO token
          ovirt_auth:
            state: absent
            ovirt_auth: "{{ ovirt_auth }}"
```

The result will include a list of vNicProfiles - typically there will just be one. This is the ID needed above.

To provide the ovirt module, issue the following command:

```bash
$ ansible-galaxy collection install redhat.rhv
```

If some of the above areas are not present, do not proceed until oVirt is configured with the proper settings.

* In the installation configuration you'll need the following values:
  - ovirt_cluster_id: ccc53763-c479-410f-af0b-ec846929b46h
  - ovirt_network_name: aServerNetwork
  - ovirt_storage_domain_id: 9008664c-f69c-4139-bc4f-266aacda6ebf
  - vnicProfileID: 1bf648af-a1f6-4c13-8fd7-636a11b2fd36

With the above confirmed and recorded, clarify which subnet the aServerNetwork is assigned. This depends on the external network the cluster is connected to and isn't part of oVirt.

## Install/upload RHEL ISO
Until we have a bastion host, we'll focus on the RHV Management console. Once the bastion host is running, everything can be scripted.

Because the bastion host needs very few resources, it's not recommended to do a bare-metal install; instead, create a small VM in the same network as where OpenShift will be installed.

If the RHV environment does not have a RHEL template, and PXE booting using a Satellite server in the disconnected environment isn't available, we'll need to start by uploading the RHEL ISO that was put on the media for the offline site in the steps above.

To upload ISOs there are two options:
1) In the RHV Management console, open the storage domain and click UPLOAD. Choose the ISO file and wait for it to be uploaded.
2) Add the ISO to the "iso" domain (now deprecated) - this is often an NFS share. Copying the ISO directly to this NFS share will make it available.

To create a template, we first create a blank VM using the ISO as the boot media.

![Create empty VM](/images/ovirtVmCreateImg1.png)

Be sure to allocate at least 400GB of disk space. If you intend to keep this VM around, making a data disk and mounting it as /home will be a better approach. Should you want to use this ISO to create other RHEL systems, use a smaller disk (20-50GB) and, when you create the bastion from this template, expand the size to 400GB.
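If you take the smaller-disk template route, the guest filesystem has to be grown after the bastion is created. A typical RHEL 8 sequence is sketched below; it assumes the root disk is /dev/sda with the LVM partition on /dev/sda2 and the default rhel/root volume group naming, so check lsblk first because your device and VG names may differ:

```bash
sudo growpart /dev/sda 2                              # grow the partition to fill the disk
sudo pvresize /dev/sda2                               # let LVM see the new space
sudo lvextend -r -l +100%FREE /dev/mapper/rhel-root   # extend the LV and filesystem together
```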
Optionally configure the ISO as part of the permanent VM metadata:

![VM Disk Options](/images/ovirtVmCreateImg2.png)

Use "Run Once" and start the VM with the ISO as an active boot device. Do a "Minimal Install" and enable containers. When the VM boots, it will get an IP in the same network OCP will be in - this validates DHCP is working. If the bastion is to host a permanent container registry or be used as an SSH target from outside the disconnected network, reserve a static IP (or a DHCP reservation) and register the host in DNS.

With the system installed, attach the system to IPA, which will allow logins using the centralized users, or define a local user "ocp". Adding "ocp" to the wheel group to allow for local sudo will help during diagnostics, but is not required.

Post installation tasks:

* Ensure podman and skopeo are installed
* Install ansible-engine 2.9
* Verify DNS resolution works.
* Verify access to the oVirt API end point

Once the basic infrastructure is validated, shut down the VM and convert it to an oVirt template. Be very sure to select "sealed". Create a new VM from this template with the correct bastion name, and continue below.

* Generate SSH key for OCP install (ssh-keygen)

Copy data from media created offsite onto the bastion host:
* Create $HOME/Downloads
* Copy all data from the selected media to $HOME/Downloads
* Follow the disconnected registry guide and instantiate the 3 QUAY containers. Ensure the hostname matches whatever certificate was created, or redo the certificate with the hostname for this bastion.
* If external access to the registry is required, add a firewall rule to allow port 443 traffic into the cluster.
* If there's already a registry on site, use skopeo to copy all registry content from QUAY once it runs to the existing local registry. Once this copy is done, the quay containers can be shut down.

# OpenShift Installation from Offline Bastion
At this point you have a bastion host in the disconnected environment with all the files needed to do an OpenShift installation. To continue, we need the following information:

Hostname and IP of the following:
* API end-point
  + Verify a hostname exists using the 'host' command and take note of the IP:
  + ```[ocp@bastion ovirt]$ host api.ovirt.ocp4.peterlarsen.org
    api.ovirt.ocp4.peterlarsen.org has address 192.168.11.40```
  + Take note of the IP address. Do a 'host' command on the IP to verify reverse resolution works.
  + If DNS resolution does not work, the local DNS server for the network must be modified before the installation can continue. Alternatively, install FreeIPA-server on the bastion host with DNS and use it to hold DNS entries (this is not recommended for production).
* Wildcard end-point
  + ```host bla.apps.ovirt.ocp4.peterlarsen.org
    bla.apps.ovirt.ocp4.peterlarsen.org has address 192.168.11.41```
  + Take note of the IP address

The IPI install uses a keepalived VIP for each of these features and does not utilize a load balancer. A load balancer like HAProxy or F5 can be added post installation.

## Generate pull-secret
Create a directory "$HOME/mirror" where we'll place local data about the environment to manage accessing the mirror.

Create the pull-secret file in $HOME/mirror by using access data from the container registry that was copied to in the above step, or the local QUAY instance.
Use "podman" to login to the registry:

```podman login --authfile=$HOME/mirror/pull-secret.json quay2.example.com --username=ocp+robot --password=secretpassword```

If you omit the password and username you'll be prompted for these values. Verify the generated pull-secret file is valid json:

```jq . < $HOME/mirror/pull-secret.json```

This file will start with the element "auths" and have a list of hostnames with auth and email. The email is used for audit purposes, so using one that's recognized is recommended.

## Install the oc and kubectl commands

Install the 'oc' and 'kubectl' commands:

```bash
mkdir $HOME/{bin,Downloads} 2>/dev/null

bin="$HOME/bin"
ocptar="openshift-client-linux.tar.gz"

tar -pxv -C $bin -f "$HOME/Downloads/${ocptar}"

rm $HOME/bin/README*

# This should return the version of oc if all worked
oc version
```
## Generate openshift-install

We need to generate the openshift-install command based on the ocp-release image. To help, create a file defining the following environment variables:

```bash
export OCP_RELEASE="4.8.$REL"
export LOCAL_REGISTRY='quay2.example.com'
export LOCAL_REPOSITORY='ocp/openshift48'
export LOCAL_SECRET_JSON="pull-secret.json"
```

Set $REL to the release number that was downloaded initially. Make the local registry have the hostname of the bastion (not localhost!!) or the registry server the images were copied to.

Run the following commands:
```bash
cd $HOME/mirror
oc adm release extract -a ${LOCAL_SECRET_JSON} --command=openshift-install "${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-x86_64"
mv openshift-install $HOME/bin/
```

To test the validity run:

```
$ openshift-install version
openshift-install 4.8.12
built from commit 450e95767d89f809cb1afe5a142e9c824a269de8
release image quay2.example.com/ocp/openshift48@sha256:c3af995af7ee85e88c43c943e0a64c7066d90e77fafdabc7b22a095e4ea3c25a
```

Note the release image - this must be the address of the "disconnected" container registry where all the OCP images are located.

## Setup RHCOS ovirt template for install
When using IPI the default behavior is to download the RHCOS image from a public website - the image depends on the cluster setup. For oVirt we use the openstack image, which includes cloud-init and other boot configuration settings needed for the installer to be successful.

```bash
$ ansible-galaxy install ovirt.image-template
```

TODO: Change setup to allow direct upload from a local file.
Potentially using the "ovirt_disk" module directly:

$HOME/mirror/ovirt-rhcos.yaml:
```ansible
---
- name: Create RHCOS Template
  hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    # Contains encrypted `engine_password` variable using ansible-vault
    - passwords.yml

  vars:
    baseurl: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/latest/latest"
    rhcos_name: "rhcos-openstack.x86_64.qcow2.gz"
    rhcos_url: "{{ baseurl }}/{{ rhcos_name }}"
    image_checksum: "sha256:{{ baseurl }}/sha256sum.txt"
    template: rhcos-template
    engine_fqdn: "rhvm44.example.com"
    engine_user: "rhevadmin@peterlarsen.org"
    engine_cafile: "{{ lookup('env','HOME') }}/ansible/data/ca.crt"
    qcow_url: "{{ rhcos_url }}"
    template_cluster: ocpcluster
    template_name: rhcos_template
    template_memory: 16GiB
    template_cpu: 4
    template_disk_size: 120GiB
    template_disk_interface: virtio
    template_disk_storage: rhevquick
    template_operating_system: rhel_rhcos
    template_nics:
      - name: nic1
        profile_name: iso
        interface: virtio
    template_seal: false

  roles:
    - ovirt.image-template
```

-------------------------------------------------------------------------------- /IPIonVMWare.md: --------------------------------------------------------------------------------

# Installing OpenShift in VMware Disconnected using IPI

## Overview
This guide is intended to demonstrate how to perform the OpenShift installation using the installer provisioned infrastructure (IPI) method on VMware. Additionally, this guide will provide details on properly configuring VMware for a successful deployment.

## Environment Overview
In this guide, we will install OpenShift onto an existing VMware environment. This VMware environment has a portgroup for public internet access and a portgroup for internal access already created. There is also a pre-existing RHEL 8 template provisioned which has the following characteristics:
```
2 vCPU
2 GB Memory
50 GB disk
vmNetwork (internal-only network)
```
The VMware environment is a three-node cluster with the below software installed and configured:
- vCenter 7.0
- vSphere 7.0
- vSAN
- A single datacenter (name: vdc)
- A single cluster (name: vcluster)
- A single portgroup for Public network access
- A single portgroup for internal network access
- RHEL 8.x Template (name: RHEL8-TEMPLATE)

## Prerequisites
- at least one internet facing machine
- dns server with zones and subdomains
- dns entries (A Records) for registry, api and wildcard apps `*.apps`
- dhcp service

## VMware Role Permissions
This guide does not use full administrator privileges to complete the installation.
Follow the below link for the permissions required in vCenter to complete the installation:

[vmware permissions](https://docs.openshift.com/container-platform/4.6/installing/installing_vsphere/installing-vsphere-installer-provisioned-customizations.html#installation-vsphere-installer-infra-requirements_installing-vsphere-installer-provisioned-customizations)

#From vcenter user interface:
If a user role and permissions do not exist, follow these steps to create a new role, a user in the local SSO domain, and then assign the role to the user.
1. Create a new role:
   - Click on menu, then click administration
   - Click on `Roles` under `Access Control`
   - Click the `+` sign to create a new role
   - Using the document above, assign the correct permissions to the role
   - Assign the role a name and give it an optional description
   - Click Finish

2. Create a user
   - Click menu, then click administration
   - Click `Users and Groups` under `Single Sign On`
   - Select the appropriate domain in the `Domain` drop-down
   - Click `ADD USER` and complete the required fields

3. Assign the role to the user
   - Click menu, then click administration
   - Click the `+` sign to add a permission
   - Enter the username previously created
   - Assign the role previously created
   - Click `Propagate to children`

## Preparing the mirror node

A virtual machine is deployed from the RHEL8 template and is configured with both the public and internal portgroups.
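If you would rather script this step than click through the vSphere UI, the govc CLI can clone from the template; a sketch under the assumption that govc is installed, with credentials and portgroup names as placeholders (govc is not otherwise used in this guide):

```
export GOVC_URL='vcenter.example.com' GOVC_USERNAME='<user>' GOVC_PASSWORD='<password>'
govc vm.clone -vm RHEL8-TEMPLATE -net '<public-portgroup>' mirror-node
govc vm.network.add -vm mirror-node -net '<internal-portgroup>'   # second NIC for the internal network
```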

#From connected bastion node
1. Download the quay pull secret from [cloud.redhat.com](https://cloud.redhat.com)

2. Download the OpenShift command-line utilities and OVA files
```
curl -LfO https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.7/latest/rhcos-vmware.x86_64.ova
```
```
curl -LfO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.7/openshift-client-linux.tar.gz
```
```
curl -LfO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.7/openshift-install-linux.tar.gz
```
```
curl -LfO https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.7/opm-linux.tar.gz
```

3. Install the OpenShift client on the local machine
```
tar xvf ./openshift-client-linux.tar.gz -C /usr/local/bin
```

4. Create environment variables for mirroring the content:
```
#!/bin/bash
export GODEBUG=x509ignoreCN=0
export OCP_RELEASE=4.7.1
export PRODUCT_REPO='openshift-release-dev'
export LOCAL_SECRET_JSON=/root/installsecret.json
export RELEASE_NAME='ocp-release'
export ARCHITECTURE='x86_64'
export REMOVABLE_MEDIA_PATH=/root/data/
```

5. Source the environment file
```
source environment.sh
```

6. Run the `oc adm release mirror` command to pull the release images:
```
oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}
```

The command will complete and the path /root/data will be populated with the required images for installation.

7. Archive the working directory into a tarball and move the tarball to the internal machine
```
tar cvf mirror_bundle.tar /root/data \
  openshift-client-linux.tar.gz \
  openshift-install-linux.tar.gz \
  opm-linux.tar.gz \
  rhcos-vmware.x86_64.ova \
  /root/installsecret.json
```

8. SCP the file (or use whatever transfer means are available) to the disconnected node:
```
scp mirror_bundle.tar <user>@<disconnected-node>:~
```

## Disconnected node:

A virtual machine is deployed from the RHEL8 template and is configured with only the internal portgroups.
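Before unpacking anything, it is worth confirming this node can resolve the records called out in the prerequisites; a quick check, with hostnames as placeholders for your zone:

```
dig +short registry.example.com api.ocp4.example.com test.apps.ocp4.example.com
```

Each name should return an address; resolving an arbitrary `*.apps` hostname exercises the wildcard record.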

**Note:** #From the disconnected node:

1. Enable the ports for the registry and web server through the local firewall:
```
firewall-cmd --add-service=http --add-service=https --permanent
firewall-cmd --add-port=5000/tcp --permanent
firewall-cmd --reload
```

2. Register the machine to Satellite or a YUM service to install packages:

|Name|Purpose|Required Y/N|
|-----|-----|-----|
|httpd|WebServer to host ova file| yes (or use a web alternative (nginx))|
|unzip|unzip vmware certs| yes|
|jq| view json in readable format | no (will make managing json much easier)|

3. Install packages:
```
dnf install -y httpd jq unzip
```

4. Install the vcenter certificates
```
curl -k -LfO https://vcenter.example.com/certs/download.zip
unzip download.zip
```

5. Add certs to trust store
```
cp /root/certs/lin/* /etc/pki/ca-trust/source/anchors/
```
```
update-ca-trust extract
```
6. Unpack the mirror_bundle tarball
```
tar xvf mirror_bundle.tar
```

7. Unpack the openshift client tarballs
```
tar xvf ./openshift-client-linux.tar.gz -C /usr/local/bin/
```
```
tar xvf ./openshift-install-linux.tar.gz -C /usr/local/bin/
```
```
tar xvf ./opm-linux.tar.gz -C /usr/local/bin
```

8. Start and enable the httpd service and copy the ova to the webroot
```
systemctl enable --now httpd

cp /root/rhcos-vmware.x86_64.ova /var/www/html

restorecon -FRvv /var/www/html
```

9. Create the SSL certificates
```
openssl req -newkey rsa:4096 -nodes -sha256 -keyout domain.key -x509 -days 365 -out domain.crt -subj "/CN=registry.example.com/O=Red Hat/L=Default City/ST=TX/C=US"
```

10. Use `oc image serve` to host the bootstrap registry content
```
oc image serve --tls-crt=/root/domain.crt --tls-key=/root/domain.key --listen=':5000' --dir=/root/data/mirror &
```

11. Test to make sure it's working:
**Note:** #From the disconnected node:
```
ss -plunt | grep 5000
```

Expected Output:
```
tcp LISTEN 0 128 0.0.0.0:5000 0.0.0.0:* users:(("conmon",pid=13764,fd=5))
```
12. Create an SSH Key pair
```
ssh-keygen
```

13. Create the base64 encoded registry username and password `(user: registry | password: registry)`
```
echo -n 'registry:registry' | base64
```

**Expected Result:** `cmVnaXN0cnk6cmVnaXN0cnk=`

14. Create the registry pull secret
**Note:** The following string gets added to the install-config.yaml
```
{"auths":{"internal_registry.fqdn:5000":{"auth":"cmVnaXN0cnk6cmVnaXN0cnk="}}}
```

15. Create a deployment directory and a cluster directory:
`mkdir /root/deployment /root/openshift`

**Note:** Once the installer ingests the install-config, it is no longer available.
16. Create the install-config.yaml
`vim /root/deployment/install-config.yaml`

```yaml
apiVersion: v1
baseDomain: "{{ openshift_base_domain }}"
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
  replicas: 3
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    vsphere:
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
  replicas: 3
metadata:
  name: "{{ openshift_cluster_name }}"
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: "{{ openshift_machine_network }}"
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  vsphere:
    apiVIP: "{{ openshift_api_VIP }}"
    ingressVIP: "{{ openshift_ingress_apps_VIP }}"
    cluster: "{{ vcenter_cluster }}"
    datacenter: "{{ vcenter_datacenter }}"
    folder: "{{ vcenter_openshift_vm_folder }}"
    defaultDatastore: "{{ vcenter_datastore }}"
    network: "{{ vcenter_virtual_machine_network }}"
    password: "{{ vcenter_openshift_svc_account_password }}"
    username: "{{ vcenter_openshift_svc_account }}"
    vCenter: "{{ vcenter_fqdn }}"
    clusterOSImage: "{{ openshift_ova_http_path }}"
publish: External
fips: true
pullSecret: '****'
imageContentSources:
- mirrors:
  - registry.redhat.local:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - registry.redhat.local:5000/openshift/release
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
sshKey: |
  ssh-rsa AAAAB3NzaC1yc2EAAAADA{...omitted} #This is the SSH public key output
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  -----END CERTIFICATE-----
```

17. Copy the config to the openshift directory and run the deployment
```
cp /root/deployment/install-config.yaml /root/openshift

openshift-install create cluster --dir=/root/openshift --log-level=debug
```

## Monitor the cluster deployment:
```
tail -f /root/openshift/.openshift_install.log

export KUBECONFIG=/root/openshift/auth/kubeconfig

oc get co

oc get nodes

oc get machines -A
```

## Destroying the cluster
```
openshift-install destroy cluster --dir=openshift --log-level=debug
```

-------------------------------------------------------------------------------- /LICENSE: --------------------------------------------------------------------------------

MIT License

Copyright (c) 2020 Code Sparta Official

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

-------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------

## What is OpenShift?

Red Hat OpenShift is a leading enterprise Kubernetes platform, purpose-built for innovation. Beginning with release version 4.6, OpenShift now comes with a number of enhancements focused on enabling our government customers to accelerate and expand their adoption of containers and DevSecOps, such as a comprehensive approach to FIPS, expanded support for government clouds, an automated approach to compliance via the Compliance Operator, and others.

Red Hat OpenShift is a certified Kubernetes distribution, which provides consistency via the Kubernetes API, timely updates, and conformance to ensure interoperability. Red Hat is a leading contributor to Kubernetes and other projects in this ecosystem including Linux, Prometheus, Jaeger, CNI, Envoy, Istio, etc. At Red Hat we use a 100% open source development model to deliver enterprise products. No fork, no rebase, no proprietary extensions.

Red Hat emphasizes the enterprise supportability of open source software, which is particularly important to the DoD mission, and builds an ecosystem of supported partners and vendors around it. We recognize that keeping up with the frequent upstream releases can be challenging, and we're investing in automated capabilities like the operator framework and over-the-air updates to make it easier to keep up to date with the latest innovations coming out of the communities and to deploy containers at scale.

For more information please refer to [this blog](https://www.openshift.com/blog/red-hat-openshift-4.6-the-kubernetes-platform-for-government).

## How do you install OpenShift?

There are two native ways of installing OpenShift, Installer Provisioned Infrastructure (IPI) and User Provisioned Infrastructure (UPI). Both produce highly available, fully capable OpenShift clusters; the only difference is who is responsible for provisioning the required infrastructure.

Additionally, you can create your own custom automation around a UPI install to target a specific use case or environment such as Cloud One. An example of this is Sparta, which is a tool incepted by Red Hat Consulting to help facilitate installs in Cloud One or similarly restricted AWS environments.

In this section, we will be diving into what each approach entails, then discussing the merits of each approach relative to a DoD use case.

**Installer Provisioned Infrastructure (IPI)**

For clusters with installer-provisioned infrastructure (IPI), you delegate the infrastructure bootstrapping and provisioning to the installation program instead of doing it yourself. The installation program creates all of the networking, machines, and operating systems that are required to support the cluster.
There are some aspects that are configurable, such as the number of machines that the control plane uses, the type of virtual machine that the cluster deploys, or the CIDR range for the Kubernetes service network. However, generally speaking, for highly customized installations a User Provisioned Infrastructure (UPI) approach will be required.

**User Provisioned Infrastructure (UPI)**

If you provision and manage the infrastructure for your cluster yourself, you must provide all of the cluster infrastructure and resources, including:

* The underlying infrastructure for the control plane and compute machines that make up the cluster
* Load balancers (for the cluster; OpenShift will create software proxies in front of any application workloads automatically)
* Cluster networking (i.e. VPCs) and required subnets
* DNS records for the cluster
* Storage for the cluster infrastructure

Red Hat provides cloud-provider templates to help you get started in building your infrastructure. The customer has the option to leverage these Red Hat provided templates or build custom automation to build out infrastructure. Linked here are the guides for AWS and Azure, but more are available for other providers:

* AWS CloudFormation templates: [https://docs.openshift.com/container-platform/4.6/installing/installing_aws/installing-aws-user-infra.html#installation-aws-user-infra-requirements_installing-aws-user-infra](https://docs.openshift.com/container-platform/4.6/installing/installing_aws/installing-aws-user-infra.html#installation-aws-user-infra-requirements_installing-aws-user-infra)
* Azure ARM templates: [https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-user-infra.html#installation-azure-user-infra-config-project](https://docs.openshift.com/container-platform/4.6/installing/installing_azure/installing-azure-user-infra.html#installation-azure-user-infra-config-project)

**What is Sparta?**

Sparta is a customized UPI install with additional tools to help facilitate a disconnected build to the requirements of Cloud One's IL2 AWS GovCloud environment (C1DL). Some of the unique challenges in that environment include:

* Disconnected AWS GovCloud
* Can't create IAM roles
* Can't create ELBs
* Existing VPCs configured specifically for C1DL
* Can't modify subnets or route tables
* Multiple subnets provided for different purposes (common, apps, etc.)

An architecture overview for Sparta is available [here](https://codectl.io/docs/overview), along with the [github page](https://github.com/CodeSparta).

Sparta is under continuous development to better target the use cases of our DoD customers.

**Considerations When Installing OpenShift Disconnected**

OpenShift installed via any of these methods can be installed disconnected. It does not require connectivity to the internet to function, with the exception of cloud provider APIs. Even the cloud provider APIs are not strictly required, but you will lose the ability to leverage the cloud provider plugin which provides native integration with the underlying cloud services. This plugin allows for some advanced machine management and scaling capabilities, including:

* Integration with in-tree storage providers for the cloud (e.g. gp2 storage class for EBS volumes in AWS) allowing you to dynamically provision storage.
  This functionality can be added separately using the CSI drivers for the cloud if you do not have cloud integration enabled (see the sketch after this list).
* Machine integration to enable easy adding/removing of nodes from the cluster. This is functionality that cluster autoscaling relies on to work.
* Setting up and configuring object (S3) storage for the internal OpenShift container registry.
* Automatic configuration of load balancers and DNS entries (if enabled) for the OpenShift router.
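As an illustration of that CSI-driver route, this is roughly what a manually created EBS-backed StorageClass looks like once the AWS EBS CSI driver is installed; the name and parameters below are an assumption for the sketch, not something defined in this repo:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-csi
provisioner: ebs.csi.aws.com        # AWS EBS CSI driver, replaces the in-tree provisioner
parameters:
  type: gp2                         # EBS volume type
volumeBindingMode: WaitForFirstConsumer
```

Applying it with `oc apply -f` gives workloads dynamic EBS provisioning without the in-tree cloud provider integration.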
The primary consideration when installing disconnected is taking the install content (container images, operators, iso/ova/ovf etc.) to the high side where it needs to be hosted in a content repository. If the environment already has an existing container registry, the container images can be loaded into it to support the installation. Additionally, a web server is needed to host the operating system image and the configuration files for the RHCOS nodes. This content needs to be accessible from the environment where the nodes will be deployed and from the host where the installation is performed from.

**Comparing IPI, UPI, and Sparta for the DoD Requirements**

The table below will compare features of IPI vs UPI to help you decide what is best for your organization.

| | IPI | UPI | Sparta |
| --- | --- | --- | --- |
| Can be installed in disconnected environment | Yes | Yes | Yes |
| Installs highly available cluster | Yes | Yes | Yes |
| Can be installed in AWS, Azure, GCP, VMWare, RHV, OpenStack | Yes | Yes | No, AWS only, on roadmap to expand |
| Can be installed bare metal | Yes, but only on certain hardware | Yes | No |
| Can be installed in existing VPCs (or Azure VNETs) | Yes (see blog) | Yes | Yes |
| Requires full administrative privileges | By default yes, but can also pre-create less privileged IAM roles (see docs for AWS and Azure) | No | No |
| Requires Route 53 (or Azure DNS) | Yes | No | Yes, but on the roadmap to allow other options |
| Can be installed in C2S | No | Yes | No, but on roadmap |
| Can be installed with existing ELB | No | Yes | Yes |
## Getting Started

In this section, we will walk you through installations using each of the methods described above.

### Installing OpenShift in Disconnected Microsoft Azure Government using IPI

[OpenShift IPI on MAG](IPIonMAGInstall.md)

[OpenShift IPI on MAG with Manual IAM](IPIonMAGInstallManualIAM.md)

### Installing OpenShift in Disconnected AWS GovCloud using IPI

[OpenShift IPI on AWS GovCloud with Manual IAM](IPIonAWSGovManualIAM.md)

### Installing OpenShift in Disconnected AWS GovCloud using Sparta

[Sparta Install Docs](SpartaInstall.md)

## APPENDIX

[Sample Disconnected Registry Implementations](appendix/disconnected-registry.md)

-------------------------------------------------------------------------------- /SpartaInstall.md: --------------------------------------------------------------------------------

## Sparta

#### Red Hat OpenShift Platform Delivery as Code

Sparta was created to solve the problem of delivering the Red Hat OpenShift Container Platform (based on Kubernetes), along with an extensible middleware and application portfolio, within restricted deployment environments (e.g. behind an air gap).

The delivery design centers around the Koffer and Konductor automation runtime containers as pluggable artifact collection and Infrastructure as Code (IaC) delivery engines, which orchestrate the CloudCtl deployment services pod to augment cloud native features.

#### What is CodeSparta?

In the simplest terms, CodeSparta is a target agnostic, additive, (Kubernetes) private cloud Trusted Platform Delivery ToolKit. Sparta codifies & automates "prerequisites" with an emphasis on extensibility, repeatability, and developer ease of use.

#### What problem does it solve?

The first generation CodeSparta was created to solve the complexity of delivering the Red Hat OpenShift Kubernetes Platform, along with middleware and an application portfolio, within restricted deployment environments which may incur privilege restrictions & require building on pre-existing infrastructure. Sparta adapts to these requirements in highly complex target environments (e.g. behind an airgap) in a declarative, auditable, airgap capable, and automated fashion. Sparta continues to mature to meet a growing demand for its reliability & flexibility in enabling new and changing environments.

#### How does this magic work?

The delivery design centers around the Koffer and Konductor automation runtime engines as pluggable artifact collection and Infrastructure as Code (IaC) delivery engines. Additionally the CloudCtl "Lifecycle Deployment Services" pod augments cloud native features and/or provides deployment time prerequisite services during IaC run cycles.

#### What are the different components that make up CodeSparta?

Koffer, Konductor, CloudCtl, and Jinx are the heart of CodeSparta's reliability & extensibility framework.

#### What is Koffer?

Koffer Engine is a containerized automation runtime for raking in various artifacts required to deploy Red Hat OpenShift Infrastructure, Middleware, and Applications into restricted and/or air-gapped environments.
Koffer is an intelligence void IaC runtime engine designed to execute purpose built external artifact "collector" plugins written in ansible, python, golang, bash, or combinations thereof.

#### What is Konductor?

Konductor is a human friendly RedHat UBI8 based Infrastructure as Code (IaC) development & deployment runtime which includes multiple cloud provider tools & devops deployment utilities. Included is a developer workspace for DevOps dependency & control, as well as support for a unified local or remote config yaml for zero touch Koffer & Konductor orchestrated dynamic pluggable IaC driven platform delivery. It is a core component in creating the CloudCtl containerized services pod, and is intended for use in both typical & restricted or airgap network environments.

#### What is CloudCtl?

CloudCtl is a short lived "Lifecycle Services Pod" delivery framework designed to meet the needs of zero pre-existing infrastructure deployment or augment cloud native features for "bring your own service" scenarios. It provides a dynamic container based infrastructure service as code standard for consistent and reliable deployment, lifecycle, and outage rescue + postmortem operations tasks. It is designed to spawn from rudimentary Konductor plugin automation and is capable of dynamically hosting additional containerized services as needed. The CloudCtl pod is fully capable of meeting any and all service delivery needs to deliver a cold datacenter "first heart beat" deployment with no prerequisites other than Podman installed on a single supported linux host and the minimum viable Koffer artifact bundles.

#### How do Sparta components work with each other?

All of Sparta's core components were designed with declarative operation, ease of use, and bulletproof reliability as the crowning hierarchy of need. To that end these delivery tools were built to codify the repetitive and recyclable logic patterns into purpose built utilities and standards wrapped in minimalist declarative configuration. Each component is intended to support individual use. Unified orchestration is also supported from a single declarative 'sparta.yml' configuration file provided locally or called from remote https/s3 locations to support conformity with enterprise secret handling and version controlled end-to-end platform delivery.

Koffer creates standardized tar artifact bundles, including container images and git repo codebase(s), for the automated deployment & lifecycle maintenance of the platform.

Konductor consumes Koffer raked artifact bundles to unpack artifacts & IaC. It then orchestrates artifact delivery services & executes the packaged IaC to deliver the programmed capability. Konductor supports a declarative yaml configuration format, cli flags provided at runtime, or user-prompt style interaction to inform code requirements.

CloudCtl is a dynamic Konductor orchestrated framework for serving various deployment & lifecycle ops time infrastructure service requirements. CloudCtl is designed for extensible support of "bring your own" services including CoreDNS, Nginx, Docker Registry, ISC-DHCP, and Tftpd. CloudCtl is intended for use as a "last resort crutch" where pre-existing enterprise or cloud native supporting services are preferred, if supported in the Konductor IaC plugins.

#### Method

Air-gapped and restricted network deliveries represent similar but critically unique challenges.
Currently, Sparta delivers via an airgap only model, primarily aimed at pre-existing infrastructure and consisting of four distinct stages.

## Install Guide

The Sparta platform delivery ecosystem is maintained by contributors from Red Hat Consulting. This guide provides brief instructions on the basic Sparta platform delivery method to prepare and provision an air-gapped Red Hat OpenShift deployment on AWS GovCloud.

#### Overview of Steps for Air-gapped Deployment:

1. Prerequisite Tasks
2. Generate Offline Bundle
3. Import Artifacts to Air-gapped System
4. Air-gapped Deployment

### Prerequisite Tasks Page

Sparta is used to install OpenShift into a private, air-gapped VPC in AWS. The steps outlined in this document will use a DevKit to create a VPC in AWS. Additionally, the DevKit will provision a RHEL8 Bastion node (sparta-bastion-node) and a RH CoreOS node (sparta-registry-node) to act as a private registry for the install.

#### Development Checklist:

Use this checklist to ensure prerequisites have been met:

[ ] AWS Commercial or GovCloud account Key Secret & ID Pair [see 1 below]

[ ] RHEL 8 Minimal AMI ID ([https://access.redhat.com/solutions/15356](https://access.redhat.com/solutions/15356)) [see 2 below]

[ ] RH Quay pull secret ([https://cloud.redhat.com/openshift/install/metal/user-provisioned](https://cloud.redhat.com/openshift/install/metal/user-provisioned))

[ ] Internet connected linux terminal (ICLT) with:

Packages:

* Git
* Podman
* AWS CLI

1. AWS Security Credentials
   * AWS Commercial (https://console.aws.amazon.com/iam/home#/security_credentials)
   * AWS GovCloud ([https://console.amazonaws-us-gov.com/iam/home#/security_credentials](https://console.amazonaws-us-gov.com/iam/home#/security_credentials))
2. AWS RHEL 8 AMI
```
aws ec2 describe-images --owners 309956199498 --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' --filters "Name=name,Values=RHEL-8.3*" "Name=architecture,Values=x86_64" --region us-east-2 --output table
```

### Deployment:

The Sparta DevKit VPC simulates air-gapped environments. It will create the required infrastructure to begin a development install. This will include:
* VPCs
* Private subnets (3 instances across availability zones)
* Public subnets (3 instances across availability zones)
* Security Groups
  * Master security group
  * Worker security group
  * Registry security group
* IAM Roles
  * Master IAM policy
  * Worker IAM policy
* Bastion Node
* Registry Node
* Route 53 Configurations
* Internet gateway

For deployment on customer provided infrastructure, e.g. an existing VPC, please see the [Appendix](#appendix) for qualifying configurations. After reading the appendix return here and proceed.

Perform these steps to set up the required infrastructure for an air-gapped OCP installation using the Sparta DevKit VPC.

From ICLT (Specified in Development Checklist):

1. Create an AWS Key pair if needed. You may use an existing key if you like; note that we will refer to the name of the key as sparta throughout this doc. If using an existing key skip ahead to step #4
   1. Create an AWS SSH key pair named sparta; this will download the file sparta.pem (https://docs.aws.amazon.com/cli/latest/userguide/cli-services-ec2-keypairs.html).
   2. Copy sparta.pem to a new working directory of your choice or the .ssh directory in your home directory
   3. Export the file path of the sparta.pem to an env var:

```
export SPARTA_PRIVATE_KEY=[full path to sparta.pem]
```

5. Set the correct permissions on the new folder and pem file:

```
chmod 700 $(dirname $SPARTA_PRIVATE_KEY)
chmod 600 $SPARTA_PRIVATE_KEY
```

6. In that working directory run the following command:

```
ssh-keygen -y -f \
  $SPARTA_PRIVATE_KEY > $(dirname $SPARTA_PRIVATE_KEY)/sparta.pub
```

7. Upload the RHCOS AMI to AWS when in GovCloud

8. Use the following link for instructions on how to upload a RHCOS image as an AMI: ([https://docs.openshift.com/container-platform/4.8/installing/installing_aws/installing-aws-government-region.html#installation-aws-regions-with-no-ami_installing-aws-government-region](https://docs.openshift.com/container-platform/4.8/installing/installing_aws/installing-aws-government-region.html#installation-aws-regions-with-no-ami_installing-aws-government-region))

   Here is a link to the required VMDK for the RHCOS AMI upload:
   https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/latest/latest/rhcos-aws.x86_64.vmdk.gz

9. If in the commercial cloud, please find the AMI for your region from the following link:
   https://docs.openshift.com/container-platform/4.8/installing/installing_aws/installing-aws-user-infra.html#installation-aws-user-infra-rhcos-ami_installing-aws-user-infra

### Configure the Sparta DevKit VPC
1. Export the version of OpenShift to be deployed as an environment variable. The value 4.8.0 is the only supported value for this set of directions.

```
export OCP_VERSION=4.8.0
```

2. Clone the "devkit-vpc" repo.
   If you are using an existing VPC that meets Sparta requirements (see [Appendix](#appendix)) replace `DEVKIT_BRANCH` in the command below with `existing-vpc`.
   If you want the devkit to create your vpc for you replace `DEVKIT_BRANCH` in the command below with `$OCP_VERSION`.

```
git clone --branch [DEVKIT_BRANCH] \
  https://repo1.dso.mil/platform-one/distros/red-hat/ocp4/govcloud/devkit-vpc
```

3. Change directory to the "devkit-vpc" git repo

```
cd devkit-vpc
```

4. Configure terraform deployment variables:

```
vi variables.tf
```

5. Use the following table for assistance when setting the terraform deployment variables (Note: you're only required to fill out the variables in this table, leave the remaining vars as is):
5. Use the following table for assistance when setting the Terraform deployment variables (note: you are only required to fill out the variables in this table; leave the remaining vars as is):

| Variable Name | Example | Explanation |
| --- | --- | --- |
| aws_ssh_key | sparta | Set this to the name of the AWS key pair created in step 2 of 'Deployment'. |
| ssh_pub_key | to get this value, execute the following command: `cat $(dirname $SPARTA_PRIVATE_KEY)/sparta.pub` | The file content from the sparta.pub key created above |
| rhcos_ami | ami-0d5f9982f029fbc14 | The RH CoreOS AMI |
| vpc_id | sparta or vpc-xxxxxxxxxxxxxxxxx | If using an existing VPC, set this to the VPC's ID; otherwise use the string 'sparta'. |
| cluster_name | sparta | If using an existing VPC, set this to the VPC's 'Name'; otherwise use the string 'sparta'. |
| cluster_domain | sparta.io | The base domain name for the OpenShift cluster |
| bastion_ami | ami-07da8bff8ee284be8 | The RHEL 8 AMI; this will be used as the OS for the sparta-bastion node created by the devkit-vpc script |
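If you are unsure of the AMI ID for rhcos_ami, you can query AWS for images you have registered. A sketch for locating the RHCOS AMI uploaded earlier (this assumes the image name you chose during the VMDK import starts with `rhcos`; adjust the filter to your naming):

```
# List self-owned AMIs whose names begin with "rhcos", newest last
aws ec2 describe-images --owners self \
  --filters "Name=name,Values=rhcos*" \
  --query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \
  --output table
```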
6. Launch the Sparta DevKit VPC, replacing the `-e` args with your region and AWS key values:

```
./devkit-build-vpc.sh -vvv \
-e aws_cloud_region=[AWS_REGION] \
-e aws_access_key=[AWS_ACCESS_KEY] \
-e aws_secret_key=[AWS_SECRET_KEY]
```

7. One of the things the above command created is an EC2 instance named 'sparta-bastion-node'. Export the sparta-bastion-node public IP address into an env var. It can be found in the AWS EC2 web console:

```
export SPARTA_BASTION_NODE_PUBLIC_IP=[public ip of sparta-bastion-node]
```

8. Push the AWS SSH key to the sparta-bastion-node:

```
scp -i $SPARTA_PRIVATE_KEY \
$SPARTA_PRIVATE_KEY \
ec2-user@$SPARTA_BASTION_NODE_PUBLIC_IP:~/.ssh/
```

### Generate Offline Bundle

This step generates the offline OCP installer bundle.

From the ICLT (specified in the Development Checklist):

1. Create the platform artifacts staging directory:

```
mkdir -p ~/bundle
```

2. Build the OpenShift infrastructure, Operators, and app bundles.

Note: you may need to adjust the config URL based on your circumstances.

```
podman run -it --rm \
--pull always \
--volume ~/bundle:/root/bundle:z \
quay.io/cloudctl/koffer:v00.21.0305 \
bundle \
--config https://raw.githubusercontent.com/RedHatGov/ocp-disconnected-docs/main/sparta/config.yml
```

Note: paste the Quay.io image pull secret referenced in the prerequisites section when prompted.

3. Verify the size of the bundle is approximately 7.4GB:

```
du -sh ~/bundle/*
```

Example output; the version below will match the env var `OCP_VERSION` value:

```
7.4G /home/ec2-user/bundle/koffer-bundle.openshift-4.8.0.tar
```

### Import Artifacts to Air-gapped System

This section details the procedures for transferring the platform bundle to the target air gap location when using the Sparta DevKit VPC.

1. From the ICLT, copy the platform bundle to the bastion:

```
scp -i $SPARTA_PRIVATE_KEY -r ~/bundle \
ec2-user@$SPARTA_BASTION_NODE_PUBLIC_IP:~
```

2. SSH from the ICLT to the sparta-bastion-node:

```
ssh -i $SPARTA_PRIVATE_KEY ec2-user@$SPARTA_BASTION_NODE_PUBLIC_IP
```

3. Export the version of OpenShift to be deployed as an environment variable; ensure this is the same value as was set on the ICLT machine.

```
export OCP_VERSION=4.8.0
```

4. One of the things the DevKit created is an EC2 instance named 'sparta-registry-node'. Export the sparta-registry-node private IP address into an env var. It can be found in the AWS EC2 web console:

```
export SPARTA_REGISTRY_NODE_PRIVATE_IP=[private ip of sparta-registry-node]
```

5. Copy the platform bundle to the AWS EC2 instance named sparta-registry-node:

```
scp -i ~/.ssh/sparta.pem -r ~/bundle \
core@$SPARTA_REGISTRY_NODE_PRIVATE_IP:~
```
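As an optional sanity check before extracting, you can confirm the bundle arrived intact by comparing its size on both hosts, for example:

```
# Run from the sparta-bastion-node; the two sizes should match
du -sh ~/bundle/*
ssh -i ~/.ssh/sparta.pem core@$SPARTA_REGISTRY_NODE_PRIVATE_IP "du -sh ~/bundle/*"
```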
6. Extract the platform bundle on the sparta-registry-node:

```
ssh -i ~/.ssh/sparta.pem -t core@$SPARTA_REGISTRY_NODE_PRIVATE_IP \
"sudo tar xvf ~/bundle/koffer-bundle.sparta-aws-$OCP_VERSION.tar -C /root"
```

### Air-gapped Deployment

This step will deploy OCP into the DevKit VPC.

From the sparta-bastion-node:

1. SSH to the sparta-registry-node:

```
ssh -i ~/.ssh/sparta.pem core@$SPARTA_REGISTRY_NODE_PRIVATE_IP
```

2. Acquire root:

```
sudo -i
```

3. Run init.sh:

```
cd /root/cloudctl && ./init.sh
```

4. Exec into Konductor:

```
podman exec -it konductor connect
```

5. Edit cluster-vars.yml, setting the variables as defined in the table below:

```
vim /root/platform/iac/cluster-vars.yml
```

| Variable Name | Example | Explanation |
| --- | --- | --- |
| vpc_name | sparta | The AWS VPC name for the target environment |
| name_domain | spartadomain.io | The domain name for the cluster |
| vpc_id | vpc-XXXXXXXXX | The AWS VPC ID for the target environment |
| aws_region | us-east-1 | Set this to your target AWS region where OpenShift will be deployed |
| aws_access_key_id | AAAAAAAAAAAAAAAAAAAA | AWS key ID for the target environment |
| aws_secret_access_key | aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa | AWS key secret for the target environment |
| rhcos_ami | ami-XXXXXXXXX | The Red Hat CoreOS AMI ID for the target environment |
| subnet_list | - subnet-XXXXXXXXX<br>- subnet-XXXXXXXXX<br>- subnet-XXXXXXXXX | The list of AWS subnet IDs associated with the target VPC |
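If you do not have the subnet IDs handy for subnet_list, they can be pulled with the AWS CLI from a host that can reach the AWS APIs (for example the ICLT); the vpc-id below is a placeholder for your own vpc_id value:

```
# List the subnet IDs, CIDRs, and AZs associated with the target VPC
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=vpc-XXXXXXXXX" \
  --query 'Subnets[*].[SubnetId,CidrBlock,AvailabilityZone]' \
  --output table
```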
6. Deploy the cluster:

```
cd /root/platform/iac/sparta && ./site.yml
```

7. Run the following commands; they ultimately run a watch command. The top of the output will inform you when to move on to the next step. This may take 30-60 minutes.

```
function watch_command() {
  echo
  echo

  ELB_HOST=$(oc get svc -n openshift-ingress | awk '/router-default/{print $4}')

  if [[ -n "$ELB_HOST" ]]; then
    echo "ready to run the next step..."
  else
    echo "the elb has not yet been created, continue waiting..."
  fi

  echo
  echo

  oc get co
}

export -f watch_command

watch -d -n 5 -c watch_command
```

8. Print the apps ELB DNS name and load it into the apps Route 53 entry as a wildcard CNAME. To retrieve the values, run the following:

```
# prints the target value for the CNAME
oc get svc -n openshift-ingress | awk '/router-default/{print $4}'

# prints the CNAME value
echo "*.$(expr "$(oc get route -n openshift-console console | awk '/console/{print $2}')" : '[^.][^.]*\.\(.*\)')"
```

Then create a wildcard DNS record in your provider:
   1. value: "*.apps.cluster.domain.io" (see output above for the value)
   2. type: CNAME
   3. target: ELB CNAME (see output above for the value)

Execute the following watch command and wait for the authentication and console operators to become available:

```
watch -d -n 5 oc get co
```

9. Run the following command to get the password for the kubeadmin user:

```
cat /root/platform/secrets/cluster/auth/kubeadmin-password
```

10. To get the URL of the console:

```
oc whoami --show-console
```

11. In order to access the web console, we will need to connect to the private VPC. One way to do this is to use sshuttle ([https://sshuttle.readthedocs.io/en/stable/overview.html](https://sshuttle.readthedocs.io/en/stable/overview.html)) with the following commands.

From the ICLT host, run the following command to install Python 3.6 on the sparta-bastion-node to support sshuttle:

```
ssh -i $SPARTA_PRIVATE_KEY -t ec2-user@$SPARTA_BASTION_NODE_PUBLIC_IP "sudo dnf -y install python36"
```

12. From the ICLT, launch sshuttle to connect to the VPC:

```
sshuttle --dns -r ec2-user@$SPARTA_BASTION_NODE_PUBLIC_IP 0/0 \
--ssh-cmd "ssh -i $SPARTA_PRIVATE_KEY"
```

13. Now you can connect to the console using your browser of choice.

### Cluster & VPC Teardown

From the sparta-registry-node:

1. Exec into the container:

```
sudo podman exec -it konductor bash
```

2. Change into the Terraform directory:

```
cd /root/platform/iac/shaman
```

3. Using the oc tool, patch the masters to make them schedulable:

```
oc patch schedulers.config.openshift.io cluster -p '{"spec":{"mastersSchedulable":true}}' --type=merge
```
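To confirm the patch took effect before moving on, a quick check:

```
# Should print "true" once the masters are schedulable
oc get schedulers.config.openshift.io cluster \
  -o jsonpath='{.spec.mastersSchedulable}{"\n"}'
```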
4. Delete the machinesets and wait for the worker nodes to terminate:

```
for i in $(oc get machinesets -A | awk '/machine-api/{print $2}'); do oc delete machineset $i -n openshift-machine-api; echo deleted $i; done
```

5. Delete the service router and wait for it to terminate:

```
oc delete service router-default -n openshift-ingress &
```

6. Execute the control plane breakdown playbook:

```
chmod +x ./breakdown.yml && ./breakdown.yml
```

From the ICLT:

1. Change into the devkit-vpc directory.

2. Execute the breakdown script:

```
./devkit-destroy-vpc.sh
```

### Appendix

#### VPC

The VPC name, ID, and IPv4 CIDR block will need to be provided at various points of the install process. Also, the cluster_name variable in the install will need to be set to the VPC name.

VPC Example
```
Name tag: ${cluster_name}
IPv4 CIDR block: 10.0.0.0/16
IPv6 CIDR block:
```

#### Subnets

The install expects that there will be a public-facing subnet group and a private subnet group. Additionally, each subnet group will span three availability zones.

Public Subnet Example
```
Name tag: ${cluster_name}-public-us-gov-west-1{a,b,c}
VPC: ${vpc_id}
Availability Zone: us-gov-west-1{a,b,c}
IPv4 CIDR block: 10.0.{0,1,2}.0/24
```

Private Subnet Example
```
Name tag: ${cluster_name}-private-us-gov-west-1{a,b,c}
VPC: ${vpc_id}
Availability Zone: us-gov-west-1{a,b,c}
IPv4 CIDR block: 10.0.{3,4,5}.0/24
```

#### Service Endpoints

Service endpoints for EC2, Elastic Load Balancing, and S3 will be needed during the install in order to access the AWS APIs for these services.

Service endpoint for S3 Example:
```
Service name: com.amazonaws.us-gov-west-1.s3
VPC: ${vpc_id}
Route table: ${private_route_table_id}
Custom:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Principal": "*",
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Tags:
  Name: manual-test-pri-s3-vpce
  "kubernetes.io/cluster/${cluster_name}", "owned"
```

Service endpoint for EC2 Example:
```
Service name: com.amazonaws.us-gov-west-1.ec2
VPC: ${vpc_id}
Endpoint type: Interface
Private DNS: true
Security groups: ${cluster_name}-ec2-vpce
Subnets: ${private_subnet_ids}
Tags:
  Name: ${cluster_name}-ec2-vpce
```

Service endpoint for ELB Example:
```
Service name: com.amazonaws.us-gov-west-1.elasticloadbalancing
VPC: ${vpc_id}
Endpoint type: Interface
Private DNS: true
Security groups: ${cluster_name}-elb-vpce
Subnets: ${private_subnet_ids}
Tags:
  Name: ${cluster_name}-elb-vpce
```
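For reference, an interface endpoint like the two above can be created with the AWS CLI roughly as follows; all IDs here are placeholders for your own values:

```
# Sketch: create an EC2 interface endpoint with private DNS enabled
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-XXXXXXXXX \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-gov-west-1.ec2 \
  --subnet-ids subnet-XXXXXXXXX subnet-XXXXXXXXX subnet-XXXXXXXXX \
  --security-group-ids sg-XXXXXXXXX \
  --private-dns-enabled
```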
#### Route 53

In addition to the VPC, there needs to be a private zone for the cluster subdomain in Route 53.

Hosted Zone Example
```
Domain Name: ${cluster_name}.${cluster_domain}
Type: private
```

After running the DevKit to create the registry node, a record will need to be added to the hosted zone for the registry node:

```
Record Name: registry.${cluster_name}.${cluster_domain}
Type: A
Routing Policy: Simple
Value/Route traffic to: ${registry_node_private_ipv4}
```
--------------------------------------------------------------------------------
/appendix/disconnected-registry-standalone-quay.md:
--------------------------------------------------------------------------------
## Deploying a Disconnected Registry for OpenShift 4 on Standalone Non-production Quay
To install OpenShift 4 in a disconnected environment, you must provide a registry server to host the images for the installation. This process is documented in the OpenShift documentation section [Creating a mirror registry for installation in a restricted network](https://docs.openshift.com/container-platform/4.7/installing/install_config/installing-restricted-networks-preparations.html). The registry must adhere to the most recent container image API, referred to as `schema2`; see [About the mirror registry](https://docs.openshift.com/container-platform/4.7/installing/install_config/installing-restricted-networks-preparations.html#installation-about-mirror-registry_installing-restricted-networks-preparations). The OpenShift documentation of the disconnected mirroring process uses the [Docker registry container](https://hub.docker.com/_/registry). However, in some cases a customer may wish to only use packages provided by Red Hat. Red Hat Quay can be installed in a standalone non-production configuration to support installing OpenShift.

### Requirements
* An internet-connected host with podman installed
* A disconnected host to run the registry
* A login for registry.redhat.io (Red Hat customer login)
* Enough free disk space for the release images:
  * On the internet-connected host
    * OpenShift 4.7 images are approximately 7GB.
    * The images to run Quay are approximately 2GB.
  * On the disconnected registry host
    * This will require approximately double the space of the internet-connected host, to account for the staged tar files and then the registry data itself.

### Detailed Steps - internet connected host
1. Mirror the OpenShift images to disk:
```
EMAIL=youremail@example.com
OCP_RELEASE=4.7.3
LOCAL_REGISTRY='quay.local.lab:8443'
LOCAL_REPOSITORY='ocp4/openshift4'
PRODUCT_REPO='openshift-release-dev'
RELEASE_NAME='ocp-release'
ARCHITECTURE='x86_64'
REMOVABLE_MEDIA_PATH=/home/mike/bundle

LOCAL_SECRET_TXT=pull-secret.txt
LOCAL_SECRET_JSON='pull-secret.json'

# use the credentials for the account created in Step 6d.
REGISTRY_CREDS=$(echo -n '<username>:<password>' | base64 -w0)
jq ".auths += {\"${LOCAL_REGISTRY}\": {\"auth\": \"${REGISTRY_CREDS}\",\"email\": \"${EMAIL}\"}}" < ${LOCAL_SECRET_TXT} > ${LOCAL_SECRET_JSON}

# Transfer merged pull-secret to disconnected host
scp ${LOCAL_SECRET_JSON} <user>@<disconnected host>:

# this will print the info (imageContentSources) needed to put in install-config.yaml which won't get printed syncing to local directory
oc adm release mirror -a ${LOCAL_SECRET_JSON} --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run

# sync to local directory
oc adm release mirror -a ${LOCAL_SECRET_JSON} --to-dir=${REMOVABLE_MEDIA_PATH}/mirror quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE}

tar cvf openshift-${OCP_RELEASE}-release-bundle.tar.gz ${REMOVABLE_MEDIA_PATH}
```
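Before moving on, you can optionally confirm the mirror completed and the bundle is readable, for example:

```
# List the mirrored release content and peek inside the bundle
ls ${REMOVABLE_MEDIA_PATH}/mirror
tar tf openshift-${OCP_RELEASE}-release-bundle.tar.gz | head
```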
2. On the internet-connected host, download and save the images needed for Quay:
```
podman pull registry.redhat.io/rhel8/postgresql-10:1
podman pull registry.redhat.io/rhel8/redis-5:1
podman pull registry.redhat.io/quay/quay-rhel8:v3.4.3

podman save -o postgres.tar registry.redhat.io/rhel8/postgresql-10:1
podman save -o redis.tar registry.redhat.io/rhel8/redis-5:1
podman save -o quay.tar registry.redhat.io/quay/quay-rhel8:v3.4.3
```

3. Transfer the tar file openshift-${OCP_RELEASE}-release-bundle.tar.gz and the 3 image tar files to the disconnected host that will run the registry.

### Detailed Steps - disconnected registry host
1. Load the images for the Quay registry into local container storage:
```
podman load -i postgres.tar
podman load -i redis.tar
podman load -i quay.tar
```

2. Set the default location for the Quay installation:
```
export QUAY=/path/to/quay
```
3. Set up PostgreSQL

a. Configure the postgresql data directory:

```
mkdir $QUAY/postgresql-quay
sudo setfacl -m u:26:-wx $QUAY/postgresql-quay
```

b. Start the postgresql container; set the POSTGRESQL variables as desired:

```
sudo podman run -d --name postgresql-quay -e POSTGRESQL_USER=quayuser \
-e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quay -e \
POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5432:5432 \
-v $QUAY/postgresql-quay:/var/lib/pgsql/data:Z \
registry.redhat.io/rhel8/postgresql-10:1
```

c. Configure postgresql:

```
sudo podman exec -it postgresql-quay \
/bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm" | \
psql -d quay -U postgres'
```

4. Set up Redis

Start the redis container; set REDIS_PASSWORD as desired:
```
sudo podman run -d --name redis -p 6379:6379 \
-e REDIS_PASSWORD=strongpassword registry.redhat.io/rhel8/redis-5:1
```
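Before continuing, it is worth confirming that both backing services are up:

```
# Both postgresql-quay and redis should report an "Up" status
sudo podman ps --format "{{.Names}}  {{.Status}}"
```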
5. Generate the Quay certificate
* Update the FQDN and IP address for the registry host:
```
cat > ssl-ca.cnf <<EOF
...
EOF
```

6. Configure Quay

a. Start the Quay configuration container (quay_config) and log in to its web UI.

b. In the config UI, set the following:

- Server Hostname: <fqdn>:<port> (port required if not 80 or 443)
- TLS: Red Hat Quay handles TLS; browse and select the cert and key created in Step 5

*Database*
- Database Type: Postgres
- Database Server: <fqdn or ip>:<port> (port from Step 3b)
_NOTE:_ In some cases, Quay may have issues talking to Postgres using the container host IP. In this case, use the IP assigned to the Postgres container (i.e. 10.88.0.x).
- Username: from Step 3b
- Password: from Step 3b
- Database Name: from Step 3b

*Redis*
- Redis Hostname: <fqdn or ip>
_NOTE:_ In some cases, Quay may have issues talking to Redis using the container host IP. In this case, use the IP assigned to the Redis container (i.e. 10.88.0.x).
- Redis port: from Step 4
- Redis password: from Step 4

*Access Settings*
- Super Users: add a user here that will be a Super User; click Add

If all required settings have been configured correctly, click the Validate *Configuration Changes* button at the bottom of the page.

If everything is correct, click the Download button to save the configuration.

Hit Ctrl-C to stop the quay_config container.

c. Configure the quay directories:
```
mkdir $QUAY/config
mkdir $QUAY/storage
setfacl -m u:1001:-wx $QUAY/storage
tar xvf <download location>/quay-config.tar.gz -C $QUAY/config
```

d. Start the quay container:
```
sudo podman run -d -p 8443:8443 --name=quay -v $QUAY/config:/conf/stack:Z \
-v $QUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.4.3
```

The quay container will take a couple of minutes to start up fully. Once done, you can access Quay at https://<fqdn>:<port> as defined in Step 6b.

For the initial login, click Create Account, and create an account matching the name specified as a Super User in Step 6b.

7. Configure a Quay repository for the OpenShift images:
- Click Create New Organization to create a new organization to host the OpenShift release images.
- Click Create New Repository to create a repository to host the OpenShift release images.
- These names will be used in the next step.

8. Extract the release bundle. This _*MUST*_ be extracted to the same absolute path as ${REMOVABLE_MEDIA_PATH} in Step 1 of the internet-connected host steps.
```
tar xvf openshift-${OCP_RELEASE}-release-bundle.tar.gz
```

9.
Mirror the release images to your disconnected registry 222 | ``` 223 | # mirror images from directory to quay, --insecure shouldn't be needed if quay has a valid cert 224 | oc image mirror -a ${LOCAL_SECRET_JSON} --from-dir=${REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:${OCP_RELEASE}*" ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} 225 | ``` 226 | -------------------------------------------------------------------------------- /appendix/disconnected-registry.md: -------------------------------------------------------------------------------- 1 | ## Sample Disconnected Registry Setups 2 | [Standalone Quay Non-Production](disconnected-registry-standalone-quay.md) -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/account_names.txt: -------------------------------------------------------------------------------- 1 | aws-ebs-csi-driver-operator 2 | cloud-credential-operator-iam-ro 3 | cloud-credential-operator-s3 4 | openshift-image-registry 5 | openshift-ingress 6 | openshift-machine-api-aws 7 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/aws-ebs-csi-driver-operator-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "ec2:AttachVolume", 8 | "ec2:CreateSnapshot", 9 | "ec2:CreateTags", 10 | "ec2:CreateVolume", 11 | "ec2:DeleteSnapshot", 12 | "ec2:DeleteTags", 13 | "ec2:DeleteVolume", 14 | "ec2:DescribeInstances", 15 | "ec2:DescribeSnapshots", 16 | "ec2:DescribeTags", 17 | "ec2:DescribeVolumes", 18 | "ec2:DescribeVolumesModifications", 19 | "ec2:DetachVolume", 20 | "ec2:ModifyVolume" 21 | ], 22 | "Resource": "*" 23 | }, 24 | { 25 | "Effect": "Allow", 26 | "Action": [ 27 | "iam:GetUser" 28 | ], 29 | "Resource": "arn:aws-us-gov:iam::123456789012:user/aws-ebs-csi-driver-operator" 30 | } 31 | ] 32 | } 33 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/aws-ebs-csi-driver-operator-secret-templ.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | namespace: openshift-cluster-csi-drivers 5 | name: ebs-cloud-credentials 6 | stringData: 7 | aws_access_key_id: ${keyid} 8 | aws_secret_access_key: ${key} -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/aws-gov-vpc-drawing.svg: -------------------------------------------------------------------------------- 1 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/cloud-credential-operator-iam-ro-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "iam:GetUser", 8 | "iam:GetUserPolicy", 9 | "iam:ListAccessKeys" 10 | ], 11 | "Resource": "*" 12 | }, 13 | { 14 | "Effect": "Allow", 15 | "Action": [ 16 | "iam:GetUser" 17 | ], 18 | "Resource": "arn:aws-us-gov:iam::123456789012:user/cloud-credential-operator-iam-ro" 19 | } 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/cloud-credential-operator-iam-ro-secret-templ.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | 
metadata: 4 | namespace: openshift-cloud-credential-operator 5 | name: cloud-credential-operator-iam-ro-creds 6 | stringData: 7 | aws_access_key_id: ${keyid} 8 | aws_secret_access_key: ${key} -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/cloud-credential-operator-s3-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "s3:CreateBucket", 8 | "s3:PutBucketTagging", 9 | "s3:PutObject", 10 | "s3:PutObjectAcl" 11 | ], 12 | "Resource": "*" 13 | }, 14 | { 15 | "Effect": "Allow", 16 | "Action": [ 17 | "iam:GetUser" 18 | ], 19 | "Resource": "arn:aws-us-gov:iam::123456789012:user/cloud-credential-operator-s3" 20 | } 21 | ] 22 | } 23 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/cloud-credential-operator-s3-secret-templ.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | namespace: openshift-cloud-credential-operator 5 | name: cloud-credential-operator-s3-creds 6 | stringData: 7 | aws_access_key_id: ${keyid} 8 | aws_secret_access_key: ${key} -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/cloudformation.yaml: -------------------------------------------------------------------------------- 1 | AWSTemplateFormatVersion: 2010-09-09 2 | Description: Template for Best Practice VPC with 1-3 AZs 3 | 4 | # 192.168.0.0/16 5 | 6 | Parameters: 7 | 8 | VpcCidr: 9 | AllowedPattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\/(1[6-9]|2[0-4]))$ 10 | ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/16-24. 11 | Default: 10.0.0.0/16 12 | Description: CIDR block for VPC. 13 | Type: String 14 | 15 | AvailabilityZoneCount: 16 | ConstraintDescription: "The number of availability zones. (Min: 1, Max: 3)" 17 | MinValue: 1 18 | MaxValue: 3 19 | Default: 3 20 | Description: "How many AZs to create VPC subnets for. (Min: 1, Max: 3)" 21 | Type: Number 22 | 23 | SubnetBits: 24 | ConstraintDescription: CIDR block parameter must be in the form x.x.x.x/19-27. 25 | MinValue: 5 26 | MaxValue: 13 27 | Default: 12 28 | Description: "Size of each subnet to create within the availability zones. (Min: 5 = /27, Max: 13 = /19)" 29 | Type: Number 30 | 31 | AmiId: 32 | Type: 'AWS::SSM::Parameter::Value' 33 | Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2' 34 | Description: AMI ID pointer in AWS Systems Manager Parameter Store. Default value points to the 35 | latest Amazon Linux 2 AMI ID. 36 | 37 | #KeyName: 38 | # Type: AWS::EC2::KeyPair::KeyName 39 | # Description: Name of RSA key for EC2 access for testing and troubleshooting. 40 | 41 | InstanceType: 42 | Type: String 43 | Default: t3.small 44 | Description: Instance type to use to launch the NAT instances. 
45 | AllowedValues: 46 | - t3.nano 47 | - t3.micro 48 | - t3.small 49 | - t3.medium 50 | - t3.large 51 | - m4.large 52 | - m4.xlarge 53 | - m4.2xlarge 54 | - m5.large 55 | - m5.xlarge 56 | - m5.2xlarge 57 | - c4.large 58 | - c4.xlarge 59 | - c4.large 60 | - c5.large 61 | - c5.xlarge 62 | - c5.large 63 | 64 | 65 | Metadata: 66 | AWS::CloudFormation::Interface: 67 | ParameterGroups: 68 | - Label: 69 | default: "Network Configuration" 70 | Parameters: 71 | - VpcCidr 72 | - SubnetBits 73 | - Label: 74 | default: "Availability Zones" 75 | Parameters: 76 | - AvailabilityZoneCount 77 | ParameterLabels: 78 | AvailabilityZoneCount: 79 | default: "Availability Zone Count" 80 | VpcCidr: 81 | default: "VPC CIDR" 82 | SubnetBits: 83 | default: "Bits Per Subnet" 84 | 85 | Conditions: 86 | DoAz3: !Equals [3, !Ref AvailabilityZoneCount] 87 | DoAz2: !Or [!Equals [2, !Ref AvailabilityZoneCount], Condition: DoAz3] 88 | 89 | Resources: 90 | 91 | VPC: 92 | Type: "AWS::EC2::VPC" 93 | Properties: 94 | EnableDnsSupport: "true" 95 | EnableDnsHostnames: "true" 96 | CidrBlock: !Ref VpcCidr 97 | 98 | ##################################################################### 99 | # Public Subnets 100 | 101 | PublicSubnet: 102 | Type: "AWS::EC2::Subnet" 103 | Properties: 104 | VpcId: !Ref VPC 105 | CidrBlock: !Select [0, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 106 | MapPublicIpOnLaunch: true 107 | Tags: 108 | - Key: Name 109 | Value: !Sub 'Public Subnet - ${AWS::StackName}' 110 | AvailabilityZone: !Select 111 | - 0 112 | - Fn::GetAZs: !Ref "AWS::Region" 113 | 114 | PublicSubnet2: 115 | Type: "AWS::EC2::Subnet" 116 | Condition: DoAz2 117 | Properties: 118 | VpcId: !Ref VPC 119 | CidrBlock: !Select [1, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 120 | MapPublicIpOnLaunch: true 121 | Tags: 122 | - Key: Name 123 | Value: !Sub 'Public Subnet 2 - ${AWS::StackName}' 124 | AvailabilityZone: !Select 125 | - 1 126 | - Fn::GetAZs: !Ref "AWS::Region" 127 | 128 | PublicSubnet3: 129 | Type: "AWS::EC2::Subnet" 130 | Condition: DoAz3 131 | Properties: 132 | VpcId: !Ref VPC 133 | CidrBlock: !Select [2, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 134 | MapPublicIpOnLaunch: true 135 | Tags: 136 | - Key: Name 137 | Value: !Sub 'Public Subnet 3 - ${AWS::StackName}' 138 | AvailabilityZone: !Select 139 | - 2 140 | - Fn::GetAZs: !Ref "AWS::Region" 141 | 142 | InternetGateway: 143 | Type: "AWS::EC2::InternetGateway" 144 | 145 | GatewayToInternet: 146 | Type: "AWS::EC2::VPCGatewayAttachment" 147 | Properties: 148 | VpcId: !Ref VPC 149 | InternetGatewayId: !Ref InternetGateway 150 | 151 | PublicRouteTable: 152 | Type: "AWS::EC2::RouteTable" 153 | Properties: 154 | VpcId: !Ref VPC 155 | 156 | PublicRoute: 157 | Type: "AWS::EC2::Route" 158 | DependsOn: GatewayToInternet 159 | Properties: 160 | RouteTableId: !Ref PublicRouteTable 161 | DestinationCidrBlock: 0.0.0.0/0 162 | GatewayId: !Ref InternetGateway 163 | 164 | PublicSubnetRouteTableAssociation: 165 | Type: "AWS::EC2::SubnetRouteTableAssociation" 166 | Properties: 167 | SubnetId: !Ref PublicSubnet 168 | RouteTableId: !Ref PublicRouteTable 169 | 170 | PublicSubnetRouteTableAssociation2: 171 | Type: "AWS::EC2::SubnetRouteTableAssociation" 172 | Condition: DoAz2 173 | Properties: 174 | SubnetId: !Ref PublicSubnet2 175 | RouteTableId: !Ref PublicRouteTable 176 | 177 | PublicSubnetRouteTableAssociation3: 178 | Condition: DoAz3 179 | Type: "AWS::EC2::SubnetRouteTableAssociation" 180 | Properties: 181 | SubnetId: !Ref PublicSubnet3 182 | RouteTableId: !Ref PublicRouteTable 183 | 184 | 
####################################################################3 185 | # Private Subnets 186 | 187 | PrivateSubnet: 188 | Type: "AWS::EC2::Subnet" 189 | Properties: 190 | VpcId: !Ref VPC 191 | CidrBlock: !Select [3, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 192 | Tags: 193 | - Key: Name 194 | Value: !Sub 'Private Subnet - ${AWS::StackName}' 195 | AvailabilityZone: !Select 196 | - 0 197 | - Fn::GetAZs: !Ref "AWS::Region" 198 | 199 | PrivateRouteTable: 200 | Type: AWS::EC2::RouteTable 201 | Properties: 202 | VpcId: !Ref VPC 203 | Tags: 204 | - Key: Name 205 | Value: !Sub 'Private Route Table - ${AWS::StackName}' 206 | 207 | PrivateRouteTableSubnetAssociation: 208 | Type: AWS::EC2::SubnetRouteTableAssociation 209 | Properties: 210 | SubnetId: !Ref PrivateSubnet 211 | RouteTableId: !Ref PrivateRouteTable 212 | 213 | PrivateSubnetNATRoute: 214 | Type: AWS::EC2::Route 215 | DependsOn: NATInstance 216 | Properties: 217 | RouteTableId: 218 | Ref: PrivateRouteTable 219 | DestinationCidrBlock: 0.0.0.0/0 220 | InstanceId: !Ref NATInstance 221 | 222 | PrivateSubnet2: 223 | Type: "AWS::EC2::Subnet" 224 | Condition: DoAz2 225 | Properties: 226 | VpcId: !Ref VPC 227 | CidrBlock: !Select [4, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 228 | Tags: 229 | - Key: Name 230 | Value: !Sub 'Private Subnet 2 - ${AWS::StackName}' 231 | AvailabilityZone: !Select 232 | - 1 233 | - Fn::GetAZs: !Ref "AWS::Region" 234 | 235 | PrivateRouteTable2: 236 | Type: AWS::EC2::RouteTable 237 | Condition: DoAz2 238 | Properties: 239 | VpcId: !Ref VPC 240 | Tags: 241 | - Key: Name 242 | Value: !Sub 'Private Route Table 2 - ${AWS::StackName}' 243 | 244 | PrivateRouteTableSubnetAssociation2: 245 | Type: AWS::EC2::SubnetRouteTableAssociation 246 | Condition: DoAz2 247 | Properties: 248 | SubnetId: !Ref PrivateSubnet2 249 | RouteTableId: !Ref PrivateRouteTable2 250 | 251 | PrivateSubnetNATRoute2: 252 | Type: AWS::EC2::Route 253 | Condition: DoAz2 254 | DependsOn: NATInstance 255 | Properties: 256 | RouteTableId: 257 | Ref: PrivateRouteTable2 258 | DestinationCidrBlock: 0.0.0.0/0 259 | InstanceId: !Ref NATInstance 260 | 261 | PrivateSubnet3: 262 | Type: "AWS::EC2::Subnet" 263 | Condition: DoAz3 264 | Properties: 265 | VpcId: !Ref VPC 266 | CidrBlock: !Select [5, !Cidr [!Ref VpcCidr, 6, !Ref SubnetBits]] 267 | Tags: 268 | - Key: Name 269 | Value: !Sub 'Private Subnet 3 - ${AWS::StackName}' 270 | AvailabilityZone: !Select 271 | - 2 272 | - Fn::GetAZs: !Ref "AWS::Region" 273 | 274 | PrivateRouteTable3: 275 | Type: AWS::EC2::RouteTable 276 | Condition: DoAz3 277 | Properties: 278 | VpcId: !Ref VPC 279 | Tags: 280 | - Key: Name 281 | Value: !Sub 'Private Route Table 3 - ${AWS::StackName}' 282 | 283 | PrivateRouteTableSubnetAssociation3: 284 | Type: AWS::EC2::SubnetRouteTableAssociation 285 | Condition: DoAz3 286 | Properties: 287 | SubnetId: !Ref PrivateSubnet3 288 | RouteTableId: !Ref PrivateRouteTable3 289 | 290 | PrivateSubnetNATRoute3: 291 | Type: AWS::EC2::Route 292 | Condition: DoAz3 293 | DependsOn: NATInstance 294 | Properties: 295 | RouteTableId: 296 | Ref: PrivateRouteTable3 297 | DestinationCidrBlock: 0.0.0.0/0 298 | InstanceId: !Ref NATInstance 299 | 300 | ############################################################################## 301 | # 302 | # NAT Instance 303 | ######## 304 | 305 | NATInstanceRole: 306 | Type: AWS::IAM::Role 307 | Properties: 308 | AssumeRolePolicyDocument: 309 | Version: '2012-10-17' 310 | Statement: 311 | - Effect: Allow 312 | Principal: 313 | Service: 314 | - ec2.amazonaws.com 315 | Action: 316 | - 
sts:AssumeRole 317 | Path: / 318 | ManagedPolicyArns: 319 | - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM 320 | - arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy 321 | Policies: 322 | - PolicyName: root 323 | PolicyDocument: 324 | Version: '2012-10-17' 325 | Statement: 326 | # - Effect: Allow 327 | # Action: 328 | # - s3:GetObject 329 | # - s3:ListObject 330 | # Resource: !Sub '${S3Bucket.Arn}*' 331 | - Effect: Allow 332 | Action: 333 | - ec2:ModifyInstanceAttribute 334 | Resource: '*' 335 | 336 | NATInstanceProfile: 337 | Type: AWS::IAM::InstanceProfile 338 | Properties: 339 | Roles: 340 | - !Ref NATInstanceRole 341 | Path: / 342 | 343 | NATInstanceSG: 344 | Type: AWS::EC2::SecurityGroup 345 | Properties: 346 | GroupDescription: Allows HTTP and HTTPS from private instances to NAT instances 347 | SecurityGroupIngress: 348 | - CidrIp: !Ref VpcCidr 349 | FromPort: 80 350 | ToPort: 80 351 | IpProtocol: TCP 352 | - CidrIp: !Ref VpcCidr 353 | FromPort: 443 354 | ToPort: 443 355 | IpProtocol: TCP 356 | - CidrIp: '10.0.0.0/25' 357 | FromPort: 80 358 | ToPort: 80 359 | IpProtocol: TCP 360 | - CidrIp: '10.0.0.0/25' 361 | FromPort: 443 362 | ToPort: 443 363 | IpProtocol: TCP 364 | Tags: 365 | - Key: Name 366 | Value: !Sub 'NAT Instance SG - ${AWS::StackName}' 367 | VpcId: !Ref VPC 368 | 369 | NATInstance: 370 | Type: AWS::EC2::Instance 371 | Properties: 372 | ImageId: !Ref AmiId 373 | InstanceType: !Ref InstanceType 374 | IamInstanceProfile: !Ref NATInstanceProfile 375 | KeyName: disconnected-east-1 376 | NetworkInterfaces: 377 | - AssociatePublicIpAddress: "true" 378 | DeviceIndex: "0" 379 | GroupSet: 380 | - !Ref NATInstanceSG 381 | SubnetId: !Ref PublicSubnet 382 | UserData: 383 | Fn::Base64: 384 | !Sub | 385 | #!/bin/bash -xe 386 | # Redirect the user-data output to the console logs 387 | exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1 388 | 389 | # Apply the latest security patches 390 | yum update -y --security 391 | 392 | # Disable source / destination check. 
It cannot be disabled from the launch configuration 393 | region=${AWS::Region} 394 | instanceid=`curl -s http://169.254.169.254/latest/meta-data/instance-id` 395 | aws ec2 modify-instance-attribute --no-source-dest-check --instance-id $instanceid --region $region 396 | 397 | # Install and start Squid 398 | yum install -y squid 399 | systemctl enable --now squid 400 | sleep 5 401 | iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129 402 | iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130 403 | 404 | cp -a /etc/squid /etc/squid_orig 405 | 406 | # Create a SSL certificate for the SslBump Squid module 407 | mkdir /etc/squid/ssl 408 | pushd /etc/squid/ssl 409 | openssl genrsa -out squid.key 4096 410 | openssl req -new -key squid.key -out squid.csr -subj "/C=US/ST=VA/L=squid/O=squid/CN=squid" 411 | openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt 412 | cat squid.key squid.crt >> squid.pem 413 | 414 | echo '.amazonaws.com' > /etc/squid/whitelist.txt 415 | echo '.cloudfront.net' >> /etc/squid/whitelist.txt 416 | 417 | cat > /etc/squid/squid.conf << 'EOF' 418 | 419 | visible_hostname squid 420 | cache deny all 421 | 422 | # Log format and rotation 423 | logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %sni %Sh/%> 3 | credentialsMode: Manual 4 | additionalTrustBundle: | 5 | -----BEGIN CERTIFICATE----- 6 | MIIFtTCCA52gAwIBAgIUATPXseBaaRHE0Mgybh29VgOyZBUwDQYJKoZIhvcNAQEL 7 | -----END CERTIFICATE----- 8 | imageContentSources: 9 | - mirrors: 10 | - << registry-hostname >>:5000/openshift/release 11 | source: quay.io/openshift-release-dev/ocp-release 12 | - mirrors: 13 | - << registry-hostname >>:5000/openshift/release 14 | source: registry.svc.ci.openshift.org/ocp/release 15 | - mirrors: 16 | - << registry-hostname >>:5000/openshift/release 17 | source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 18 | controlPlane: 19 | architecture: amd64 20 | hyperthreading: Enabled 21 | name: master 22 | platform: {} 23 | replicas: 3 24 | compute: 25 | - architecture: amd64 26 | hyperthreading: Enabled 27 | name: worker 28 | platform: 29 | aws: 30 | type: m5.xlarge 31 | replicas: 3 32 | metadata: 33 | name: << Cluster Name >> 34 | networking: 35 | clusterNetwork: 36 | - cidr: 10.128.0.0/14 37 | hostPrefix: 23 38 | machineNetwork: 39 | - cidr: << Subnet/CIDR 1 >> 40 | - cidr: << Subnet/CIDR 2 >> 41 | - cidr: << Subnet/CIDR 3 >> 42 | networkType: OpenShiftSDN 43 | serviceNetwork: 44 | - 172.30.0.0/16 45 | platform: 46 | aws: 47 | region: << AWS Region Name >> 48 | zones: 49 | - << Availability Zone Name 1 >> 50 | - << Availability Zone Name 2 >> 51 | - << Availability Zone Name 3 >> 52 | subnets: 53 | - << Subnet ID 1 >> 54 | - << Subnet ID 1 >> 55 | - << Subnet ID 1 >> 56 | amiID: << Your RHCOS AMI ID >> 57 | pullSecret: '<< Your Pull Secret Here >>' 58 | sshKey: << Your SSH KEY HERE >> 59 | fips: false 60 | publish: Internal 61 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/ocp-users.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -x 2 | 3 | # 4 | # This script is intended to assist users with manually managing AWS IAM accounts 5 | # when installing OCP 4.6.X with the "credentialMode: Manual" installation option. 
6 | # 7 | # Dependencies: 8 | # jq, 9 | # aws cli 10 | # 11 | # 12 | # Available commands: 13 | # 14 | # writePolicies: Copy users and policies from AWS to local files in the same directory (uses aws current context) 15 | # 16 | # prepPolicies: Prepares policy files for redeployment (uses aws current contxt) 17 | # 18 | # createUsers: Creates users and attaches new policies to them (uses aws current context) 19 | # 20 | # cleanupFiles: Deletes policy files from current directory 21 | # 22 | # help: Prints this message 23 | # 24 | # 25 | 26 | 27 | # Lists all usernames 28 | function __listUserNames() { 29 | 30 | aws iam list-users | jq -r '.Users | map(.UserName)' | sed -e 's/.$//' -e 's/"//g' 31 | 32 | } 33 | 34 | # List tags of a user 35 | function __listUserTags() { 36 | 37 | aws iam list-user-tags --user-name $1 38 | 39 | } 40 | 41 | # List policies attached to a user 42 | function __listPolices() { 43 | 44 | aws iam list-user-policies --user-name $1 | jq -r '.PolicyNames | .[]' | sed -e 's/"//g' 45 | 46 | } 47 | 48 | # Dump a poliicies' details 49 | function __getPolicy() { 50 | 51 | aws iam get-user-policy --user-name $1 --policy-name $2 52 | 53 | } 54 | 55 | # Creates IAM user and returns arn 56 | 57 | function __createUser() { 58 | 59 | aws iam create-user --user-name $1 | jq -r '.User.Arn' 60 | 61 | } 62 | 63 | # Create IAM Policy 64 | 65 | function __createPolicy() { 66 | 67 | aws iam create-policy --policy-name $1 --policy-document file://./$2 | jq -r '.Policy.Arn' 68 | 69 | } 70 | 71 | function __attachPolicy() { 72 | 73 | aws iam attach-user-policy --user-name $1 --policy-arn "$2" 74 | 75 | } 76 | 77 | # Get users created by openshift 78 | function __getClusterUsers() { 79 | 80 | for u in $(__listUserNames) 81 | do 82 | UT=$(__listUserTags $u | grep -o kubernetes\.io\/cluster) 83 | if [ $UT ] 84 | then 85 | echo $u 86 | fi 87 | done 88 | 89 | } 90 | 91 | function writeUsers() { 92 | 93 | for c in $(__getClusterUsers) 94 | do 95 | UN=$(echo $c | sed -re 's/^(\w*\-){3}//' -e 's/(\-\w*$)//') 96 | echo $UN >> account_names.txt 97 | done 98 | 99 | } 100 | 101 | function writePolicies() { 102 | 103 | for un in $(__getClusterUsers) 104 | do 105 | for lp in $(__listPolices $un) 106 | do 107 | __getPolicy $un $lp > $(echo $un | sed -re 's/^(\w*\-){3}//' -e 's/(\-\w*$)//').json 108 | done 109 | done 110 | 111 | } 112 | 113 | function prepPolicies() { 114 | 115 | for u in $(cat account_names.txt) 116 | do 117 | f=$(grep -ls $u --exclude="account_names.txt" *policy.json) # get filename to edit 118 | na=$(aws sts get-caller-identity | jq -r '.Arn' | grep -oP '.*?/')$u 119 | oa=$(grep -Po 'arn.*(? 
secrets/$u-secret.yaml 10 | done 11 | -------------------------------------------------------------------------------- /aws-gov-ipi-dis-maniam/trust-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Principal": { "Service": "vmie.amazonaws.com" }, 7 | "Action": "sts:AssumeRole", 8 | "Condition": { 9 | "StringEquals":{ 10 | "sts:Externalid": "vmimport" 11 | } 12 | } 13 | } 14 | ] 15 | } 16 | -------------------------------------------------------------------------------- /aws-ipi-terraform/.gitignore: -------------------------------------------------------------------------------- 1 | # Local .terraform directories 2 | **/.terraform/* 3 | 4 | # .tfstate files 5 | *.tfstate 6 | *.tfstate.* 7 | *.terraform.lock.hcl* 8 | 9 | # Crash log files 10 | crash.log 11 | 12 | # Exclude all .tfvars files, which are likely to contain sentitive data, such as 13 | # password, private keys, and other secrets. These should not be part of version 14 | # control as they are data points which are potentially sensitive and subject 15 | # to change depending on the environment. 16 | # 17 | *.tfvars 18 | 19 | # Ignore override files as they are usually used to override resources locally and so 20 | # are not checked in 21 | override.tf 22 | override.tf.json 23 | *_override.tf 24 | *_override.tf.json 25 | 26 | # Include override files you do wish to add to version control using negated pattern 27 | # 28 | # !example_override.tf 29 | 30 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 31 | # example: *tfplan* 32 | 33 | # Ignore CLI configuration files 34 | .terraformrc 35 | terraform.rc -------------------------------------------------------------------------------- /aws-ipi-terraform/00-auth.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "3.53.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | region = "us-gov-east-1" 12 | } -------------------------------------------------------------------------------- /aws-ipi-terraform/ec2.tf: -------------------------------------------------------------------------------- 1 | resource "aws_instance" "public" { 2 | ami = "ami-0cca63ccd9a87f1b2" 3 | availability_zone = data.aws_availability_zones.available.names[0] 4 | ebs_optimized = true 5 | instance_type = "t3.small" 6 | monitoring = false 7 | key_name = "ec2_key" 8 | subnet_id = aws_subnet.public_a.id 9 | associate_public_ip_address = true 10 | source_dest_check = true 11 | 12 | root_block_device { 13 | volume_type = "gp2" 14 | volume_size = 120 15 | delete_on_termination = true 16 | } 17 | 18 | } 19 | /* 20 | resource "aws_instance" "private" { 21 | ami = "ami-0cca63ccd9a87f1b2" 22 | availability_zone = data.aws_availability_zones.available.names[0] 23 | ebs_optimized = true 24 | instance_type = "t3.small" 25 | monitoring = false 26 | key_name = aws_key_pair.bastion_key.key_name 27 | subnet_id = aws_subnet.private_a.id 28 | associate_public_ip_address = false 29 | source_dest_check = true 30 | 31 | root_block_device { 32 | volume_type = "gp2" 33 | volume_size = 120 34 | delete_on_termination = true 35 | } 36 | }*/ 37 | -------------------------------------------------------------------------------- /aws-ipi-terraform/example-config.yaml: -------------------------------------------------------------------------------- 1 | 
apiVersion: v1 2 | baseDomain: sfxworks.net 3 | additionalTrustBundle: | 4 | -----BEGIN CERTIFICATE----- 5 | -----END CERTIFICATE----- 6 | imageContentSources: 7 | - mirrors: 8 | - ip-10-2-63-100.us-gov-east-1.compute.internal:5000/openshift/release 9 | source: quay.io/openshift-release-dev/ocp-release 10 | - mirrors: 11 | - ip-10-2-63-100.us-gov-east-1.compute.internal:5000/openshift/release 12 | source: registry.svc.ci.openshift.org/ocp/release 13 | - mirrors: 14 | - ip-10-2-63-100.us-gov-east-1.compute.internal:5000/openshift/release 15 | source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 16 | controlPlane: 17 | architecture: amd64 18 | hyperthreading: Enabled 19 | name: master 20 | platform: {} 21 | replicas: 3 22 | compute: 23 | - architecture: amd64 24 | hyperthreading: Enabled 25 | name: worker 26 | platform: 27 | aws: 28 | type: m5.xlarge 29 | replicas: 3 30 | metadata: 31 | name: terraform 32 | networking: 33 | clusterNetwork: 34 | - cidr: 10.128.0.0/14 35 | hostPrefix: 23 36 | machineNetwork: 37 | - cidr: 10.2.48.0/20 38 | - cidr: 10.2.80.0/20 39 | - cidr: 10.2.64.0/20 40 | networkType: OpenShiftSDN 41 | serviceNetwork: 42 | - 172.30.0.0/16 43 | platform: 44 | aws: 45 | region: us-gov-east-1 46 | zones: 47 | - us-gov-east-1 48 | - us-gov-east-2 49 | - us-gov-east-3 50 | subnets: 51 | - subnet-0fb704982b969f30a 52 | - subnet-09d0fcb59739b1db9 53 | - subnet-0fbe102bad6ea84c5 54 | amiID: ami-0bae2581da0f8ce7b 55 | pullSecret: '' 56 | sshKey: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8CFvsp8ltw6JQapn0zlQhgbgfUgtQT0sWlEX2N1k6fHS03bIC7kQ5/vRGHgdjmUrPj9DZIqJ8ZhilSMRACwbkkdDaNmf0AQpfTHki3RI4BVMD+XQ9+/lxykiKOLV6HyIhtCNIlpV3IQePJOS+EgXm4svUWh8Z4i93fvfiWAjTQZNJHpRkXNNMqH5UBcKykB3+vFTRvLuzeDblLlPme3HNQcgCTruTUJA/Lh4hCHiqRFAzJ/FRe/UjyRSlAgilqK0MODABfyFRzhkIBj1tT4BX9pFbyy3DcpiQ+X2kPCNV9zSdaAqDL1OJly6U5LEJ6217KOfkWWZH296Cr83BhF5gDtOicg3V2NtUtTKOMOhUNSmxvhbaI36n58qCCNvsmqqiQ9T09itfAF2eBiyz014GfAhL49phHCn8L+BxbwtBNr7RlHDDtXvEKEdd2klIsptQUezzNB+K9YHEwivmKt6DLQOZghm1PA71Z+7cNh/I2f+AIk0Ag4T1gS4IZBb8fac= 57 | fips: false 58 | publish: Internal -------------------------------------------------------------------------------- /aws-ipi-terraform/igw.tf: -------------------------------------------------------------------------------- 1 | resource "aws_internet_gateway" "public_gw" { 2 | vpc_id = aws_vpc.public_vpc.id 3 | } 4 | 5 | resource "aws_internet_gateway" "isolated_gw" { 6 | vpc_id = aws_vpc.isolated_vpc.id 7 | } 8 | 9 | resource "aws_eip" "nat" { 10 | 11 | } 12 | 13 | resource "aws_nat_gateway" "isolated_nat" { 14 | allocation_id = aws_eip.nat.id 15 | subnet_id = aws_subnet.nat_subnet.id 16 | 17 | # To ensure proper ordering, it is recommended to add an explicit dependency 18 | # on the Internet Gateway for the VPC. 
19 | depends_on = [aws_internet_gateway.isolated_gw] 20 | } 21 | 22 | 23 | 24 | resource "aws_default_route_table" "public_route" { 25 | default_route_table_id = aws_vpc.public_vpc.default_route_table_id 26 | 27 | route { 28 | cidr_block = "0.0.0.0/0" 29 | gateway_id = aws_internet_gateway.public_gw.id 30 | } 31 | 32 | route { 33 | cidr_block = aws_vpc.isolated_vpc.cidr_block 34 | vpc_peering_connection_id = aws_vpc_peering_connection.peer.id 35 | } 36 | } 37 | 38 | 39 | resource "aws_default_route_table" "isolated_route" { 40 | default_route_table_id = aws_vpc.isolated_vpc.default_route_table_id 41 | 42 | 43 | route { 44 | cidr_block = "0.0.0.0/0" 45 | nat_gateway_id = aws_nat_gateway.isolated_nat.id 46 | } 47 | 48 | 49 | route { 50 | cidr_block = aws_vpc.public_vpc.cidr_block 51 | vpc_peering_connection_id = aws_vpc_peering_connection.peer.id 52 | } 53 | } 54 | 55 | resource "aws_route_table" "nat" { 56 | vpc_id = aws_vpc.isolated_vpc.id 57 | 58 | route { 59 | cidr_block = "0.0.0.0/0" 60 | gateway_id = aws_internet_gateway.isolated_gw.id 61 | } 62 | } 63 | 64 | resource "aws_route_table_association" "nat" { 65 | subnet_id = aws_subnet.nat_subnet.id 66 | route_table_id = aws_route_table.nat.id 67 | } 68 | 69 | -------------------------------------------------------------------------------- /aws-ipi-terraform/output.tf: -------------------------------------------------------------------------------- 1 | /* 2 | output "private_instance" { 3 | value = aws_instance.private.private_dns 4 | } 5 | */ 6 | output "public_instance" { 7 | value = aws_instance.public.public_ip 8 | } 9 | 10 | output "subnet_a_cidr" { 11 | value = aws_subnet.private_a.cidr_block 12 | } 13 | 14 | output "subnet_b_cidr" { 15 | value = aws_subnet.private_b.cidr_block 16 | } 17 | 18 | output "subnet_c_cidr" { 19 | value = aws_subnet.private_c.cidr_block 20 | } 21 | 22 | output "zone_a" { 23 | value = data.aws_availability_zones.available.names[0] 24 | } 25 | 26 | output "zone_b" { 27 | value = data.aws_availability_zones.available.names[1] 28 | } 29 | 30 | output "zone_c" { 31 | value = data.aws_availability_zones.available.names[2] 32 | } 33 | 34 | 35 | output "subnet_a" { 36 | value = aws_subnet.private_a.id 37 | } 38 | 39 | output "subnet_b" { 40 | value = aws_subnet.private_b.id 41 | } 42 | 43 | output "subnet_c" { 44 | value = aws_subnet.private_c.id 45 | } 46 | -------------------------------------------------------------------------------- /aws-ipi-terraform/public_key.tf: -------------------------------------------------------------------------------- 1 | resource "aws_key_pair" "ec2_key" { 2 | key_name = "ec2_key" 3 | public_key = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8CFvsp8ltw6JQapn0zlQhgbgfUgtQT0sWlEX2N1k6fHS03bIC7kQ5/vRGHgdjmUrPj9DZIqJ8ZhilSMRACwbkkdDaNmf0AQpfTHki3RI4BVMD+XQ9+/lxykiKOLV6HyIhtCNIlpV3IQePJOS+EgXm4svUWh8Z4i93fvfiWAjTQZNJHpRkXNNMqH5UBcKykB3+vFTRvLuzeDblLlPme3HNQcgCTruTUJA/Lh4hCHiqRFAzJ/FRe/UjyRSlAgilqK0MODABfyFRzhkIBj1tT4BX9pFbyy3DcpiQ+X2kPCNV9zSdaAqDL1OJly6U5LEJ6217KOfkWWZH296Cr83BhF5gDtOicg3V2NtUtTKOMOhUNSmxvhbaI36n58qCCNvsmqqiQ9T09itfAF2eBiyz014GfAhL49phHCn8L+BxbwtBNr7RlHDDtXvEKEdd2klIsptQUezzNB+K9YHEwivmKt6DLQOZghm1PA71Z+7cNh/I2f+AIk0Ag4T1gS4IZBb8fac=" 4 | } 5 | 6 | /* 7 | resource "aws_key_pair" "bastion_key" { 8 | key_name = "bastion_key" 9 | public_key = "ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDPUkHqyHgpt1e2fUtCUnARknjTb4OIPC7dW8WBv5Snl7/LQw4a2fCVzvSWgJhpyo1S6V4EjLRfRgTKK/QzHXZ6L+Eyc5iEwj7P3CpBlzHll84ItitREOgW0SDkiWE5heeWZPXHYi7dbkVO4bfMEE8qDqm5ODKktgZej3bLn6I/yjLk5JoDBfqFXZSOrnWbtLtTElIGNM2eC777H2BwrJMfy9ySwy7MR/BI2eDZxv7+KCH/H+eu3RuLy9M50ADb5hJAl2a1UhHpXHR+GNv4afnVF+4rnPHw+ChofcuDkAEUauY+CwePlNKnc1Utf7OxisFfh4m0KaAiTZ+2fXHFRtrxPMoUfRh7lN6U2JI1zF9Gr9piHMyDS9V+27vzBZbjxz8iNSEDdRrFl5j1TMbQ2gbUoI1sh1kX2nmXw+6we/8UgHaBIgvpJwYKuwoJOV8VBywS7z5QhmudXK3m8jSD4MQFWVKdPk4ow5dqFuZ6DwknwfsXdQOPIckOWIHWAfm+vx0=" 10 | } 11 | */ -------------------------------------------------------------------------------- /aws-ipi-terraform/security-group.tf: -------------------------------------------------------------------------------- 1 | 2 | data "aws_ip_ranges" "usgov" { 3 | regions = ["us-gov-east-1", "us-gov-west-1"] 4 | services = ["amazon", "ec2", "route53"] 5 | } 6 | 7 | resource "aws_default_security_group" "default" { 8 | vpc_id = aws_vpc.isolated_vpc.id 9 | 10 | egress { 11 | from_port = "443" 12 | to_port = "443" 13 | protocol = "tcp" 14 | cidr_blocks = data.aws_ip_ranges.usgov.cidr_blocks 15 | } 16 | 17 | ingress { 18 | from_port = "0" 19 | to_port = "65535" 20 | protocol = "tcp" 21 | cidr_blocks = [aws_vpc.isolated_vpc.cidr_block] 22 | } 23 | 24 | egress { 25 | from_port = "0" 26 | to_port = "65535" 27 | protocol = "tcp" 28 | cidr_blocks = [aws_vpc.isolated_vpc.cidr_block] 29 | } 30 | 31 | ingress { 32 | from_port = "0" 33 | to_port = "65535" 34 | protocol = "tcp" 35 | cidr_blocks = [aws_vpc.public_vpc.cidr_block] 36 | } 37 | 38 | egress { 39 | from_port = "0" 40 | to_port = "65535" 41 | protocol = "tcp" 42 | cidr_blocks = [aws_vpc.public_vpc.cidr_block] 43 | } 44 | 45 | ingress { 46 | protocol = -1 47 | self = true 48 | from_port = 0 49 | to_port = 0 50 | } 51 | 52 | 53 | } 54 | 55 | resource "aws_default_security_group" "public_default" { 56 | vpc_id = aws_vpc.public_vpc.id 57 | 58 | ingress { 59 | from_port = "22" 60 | to_port = "22" 61 | protocol = "tcp" 62 | cidr_blocks = ["0.0.0.0/0"] 63 | } 64 | 65 | ingress { 66 | protocol = -1 67 | self = true 68 | from_port = 0 69 | to_port = 0 70 | } 71 | 72 | egress { 73 | from_port = 0 74 | to_port = 0 75 | protocol = "-1" 76 | cidr_blocks = ["0.0.0.0/0"] 77 | } 78 | 79 | } -------------------------------------------------------------------------------- /aws-ipi-terraform/subnet.tf: -------------------------------------------------------------------------------- 1 | data "aws_availability_zones" "available" { 2 | state = "available" 3 | } 4 | 5 | resource "aws_subnet" "private_a" { 6 | vpc_id = aws_vpc.isolated_vpc.id 7 | cidr_block = "10.2.48.0/20" 8 | availability_zone = data.aws_availability_zones.available.names[0] 9 | map_public_ip_on_launch = false 10 | } 11 | 12 | resource "aws_subnet" "private_b" { 13 | vpc_id = aws_vpc.isolated_vpc.id 14 | cidr_block = "10.2.64.0/20" 15 | availability_zone = data.aws_availability_zones.available.names[1] 16 | map_public_ip_on_launch = false 17 | 18 | } 19 | 20 | resource "aws_subnet" "private_c" { 21 | vpc_id = aws_vpc.isolated_vpc.id 22 | cidr_block = "10.2.80.0/20" 23 | availability_zone = data.aws_availability_zones.available.names[2] 24 | map_public_ip_on_launch = false 25 | } 26 | 27 | resource "aws_subnet" "nat_subnet" { 28 | vpc_id = aws_vpc.isolated_vpc.id 29 | cidr_block = "10.2.112.0/20" 30 | availability_zone = data.aws_availability_zones.available.names[0] 31 | map_public_ip_on_launch = true 32 | } 33 | 34 | resource "aws_subnet" "public_a" { 35 | vpc_id = 
aws_vpc.public_vpc.id 36 | cidr_block = "10.1.0.0/20" 37 | availability_zone = data.aws_availability_zones.available.names[0] 38 | map_public_ip_on_launch = true 39 | 40 | } 41 | 42 | resource "aws_subnet" "public_b" { 43 | vpc_id = aws_vpc.public_vpc.id 44 | cidr_block = "10.1.16.0/20" 45 | availability_zone = data.aws_availability_zones.available.names[1] 46 | map_public_ip_on_launch = true 47 | 48 | } 49 | 50 | resource "aws_subnet" "public_c" { 51 | vpc_id = aws_vpc.public_vpc.id 52 | cidr_block = "10.1.32.0/20" 53 | availability_zone = data.aws_availability_zones.available.names[2] 54 | map_public_ip_on_launch = true 55 | 56 | } -------------------------------------------------------------------------------- /aws-ipi-terraform/vpc.tf: -------------------------------------------------------------------------------- 1 | resource "aws_vpc" "public_vpc" { 2 | cidr_block = "10.1.0.0/16" 3 | enable_dns_hostnames = true 4 | enable_dns_support = true 5 | instance_tenancy = "default" 6 | } 7 | 8 | resource "aws_vpc" "isolated_vpc" { 9 | cidr_block = "10.2.0.0/16" 10 | enable_dns_hostnames = true 11 | enable_dns_support = true 12 | instance_tenancy = "default" 13 | } 14 | 15 | resource "aws_vpc_peering_connection" "peer" { 16 | peer_vpc_id = aws_vpc.public_vpc.id 17 | vpc_id = aws_vpc.isolated_vpc.id 18 | auto_accept = true 19 | } -------------------------------------------------------------------------------- /images/ovirtVmCreateImg1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-cop/ocp-disconnected-docs/5e9a115048ba659484924412ece5ee159b572fd1/images/ovirtVmCreateImg1.png -------------------------------------------------------------------------------- /images/ovirtVmCreateImg2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/redhat-cop/ocp-disconnected-docs/5e9a115048ba659484924412ece5ee159b572fd1/images/ovirtVmCreateImg2.png -------------------------------------------------------------------------------- /sparta/config.yml: -------------------------------------------------------------------------------- 1 | # podman run -it --rm --pull always --volume ${HOME}/bundle:/root/bundle:z quay.io/cloudctl/koffer bundle --config https://codectl.io/docs/config/nightlies/sparta.yml 2 | koffer: 3 | silent: false 4 | plugins: 5 | sparta: 6 | version: v00.21.0521 7 | service: github.com 8 | organization: codesparta 9 | env: 10 | - name: "BUNDLE" 11 | value: "true" 12 | - name: "MIRROR" 13 | value: "true" 14 | - name: "PROVIDER" 15 | value: "aws" 16 | - name: "VERSION" 17 | value: "4.8.0" 18 | - name: "TPDK_VERSION" 19 | value: "v00.21.0521" 20 | - name: "SPARTA_VERSION" 21 | value: "v00.21.0521" 22 | - name: "REDHAT_OPERATORS" 23 | value: "null" 24 | - name: "RHCOS_AWS" 25 | value: "true" 26 | --------------------------------------------------------------------------------