├── .gitignore
├── LICENSE
├── README.md
├── aro-4
│   ├── aro-4-aad-oidc.sh
│   ├── aro-4-app-gateway.md
│   ├── aro-4-azure-arc.sh
│   ├── aro-4-azure-container-registry.sh
│   ├── aro-4-azure-files.sh
│   ├── aro-4-backup.sh
│   ├── aro-4-custom-api-certificate.sh
│   ├── aro-4-custom-ingress-certificate.sh
│   ├── aro-4-dns-forwarder.sh
│   ├── aro-4-egress-lockdown.sh
│   ├── aro-4-provision.sh
│   ├── aro-4-routes.sh
│   ├── aro-4-sp-rotation.sh
│   ├── aro-4-upgrade.sh
│   ├── aro-internal-registry.sh
│   ├── container-insights-agent-config.yaml
│   ├── deprecated-aro-4-azure-monitor.sh
│   ├── internal-ingress.yaml
│   ├── logs-workspace-deployment.json
│   ├── machine-set-storage-infra.yaml
│   └── nginx-pod.yaml
├── ocp-installer-configuration.md
├── ocp-ipi.md
├── ocp-jumpbox-provision.md
├── ocp-prerequisites.md
├── ocp-testing.md
├── ocp-upi.md
├── openshift-client-linux.tar.gz.1
├── provisioning
│   ├── aro-provision.sh
│   ├── azure-monitor
│   │   ├── logs-workspace-deployment.json
│   │   ├── ocp-v4-azure-monitor.sh
│   │   └── onboarding_azuremonitor_for_containers.sh
│   ├── install-configs
│   │   └── install-config-internal.yaml
│   ├── installer-jumpbox.sh
│   ├── ocp-azure-provision-4-3.sh
│   ├── ocp-azure-provision.sh
│   └── upi
│       ├── 01_vnet.json
│       ├── 02_storage.json
│       ├── 03_infra-internal-lb.json
│       ├── 03_infra-public-lb.json
│       ├── 04_bootstrap-internal-only.json
│       ├── 04_bootstrap.json
│       ├── 05_masters-internal-only.json
│       ├── 05_masters.json
│       ├── 06_workers.json
│       ├── dotmap
│       │   ├── __init__.py
│       │   ├── __pycache__
│       │   │   └── __init__.cpython-36.pyc
│       │   └── test.py
│       └── setup-manifests.py
└── res
    ├── aad-permissions.png
    ├── cloud-shell.png
    ├── dns-zone-ns.png
    ├── mohamed-saif.jpg
    ├── new-dns-zone.png
    ├── ocp-azure-architecture.png
    ├── ocp-cluster-settings.png
    ├── ocp-console.png
    ├── ocp-installer-files.png
    ├── ocp-namespaces.png
    ├── ocp-rg.png
    ├── ocp-storage-classes.png
    ├── ocp-storage-images.png
    ├── ocp-storage-primary.png
    └── ocp-subdomain.png

/.gitignore:
--------------------------------------------------------------------------------
**/*-preview.*
**/ARO-RP
**/*-installation
**/install-config-templates
provisioning/ocp-installation.tar.gz
provisioning/aro-4-info.txt
provisioning/pull-secret.txt
provisioning/aro-assignment-error-info.txt
client/**
**/*.tar.gz
**/aro-provision.vars
**/aro-logs-workspace-deployment-updated.json
aro-4/onboarding_azuremonitor_for_containers.sh
aro-4/aro-4-ovn-kubernetes.sh

--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 Saif

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# OpenShift 4.x on Azure IaaS

Provisioning Red Hat OpenShift Container Platform 4.x (starting from 4.2) on Azure IaaS using Red Hat's official installer (Installer Provisioned Infrastructure, or IPI).

Soon you will also be able to use Red Hat's User Provided Infrastructure (UPI) method to assume more control over the created infrastructure.

If you used the IPI method along with a public DNS (external cluster), you should arrive at an architecture similar to this:

![ocp-azure](res/ocp-azure-architecture.png)

## Azure CLI

Azure CLI is my preferred way to provision resources on Azure, as it provides readable and repeatable steps to create multiple environments.

I will use Azure Cloud Shell to do that. Visit the [Azure Cloud Shell](https://docs.microsoft.com/en-us/azure/cloud-shell/overview) documentation for further details, or visit [shell.azure.com](https://shell.azure.com) if you know your way around.

You can also use your favorite terminal (I use VS Code with WSL:Ubuntu under Windows 10 and a zsh terminal).

The OCP v4.2 provisioning script can be found here: [ocp-azure-provision.sh](provisioning/ocp-azure-provision.sh)

The OCP 4.3 provisioning script (the current version of OCP) can be found here: [ocp-azure-provision-4-3.sh](provisioning/ocp-azure-provision-4-3.sh)

It is easy to access the Cloud Shell from within the Azure Portal or by visiting [shell.azure.com](https://shell.azure.com):

![cloud-shell](res/cloud-shell.png)

## OCP Installation Options

You need to decide which way you want to provision your cluster based on your requirements.

Installing OCP 4.3 now offers the following:
- Support for joining an existing virtual network (or it will create one for you)
- Support for creating a fully private cluster (with only private DNS)

In this guide, I will be talking about the two ways to provision OCP 4.3 clusters on Azure (IPI and UPI).

### Install-config.yaml

Regardless of the installation method, you need to have your install-config.yaml with your needed preferences saved securely in a source control system for future use.

You can reuse an existing install-config.yaml or create one for the first time (see the sketch at the end of this README).

### IPI

Using IPI provides a quick and efficient way to provision clusters, but you lose a little bit of control over the provisioned cluster installation.

Use this approach if you don't have strict cluster provisioning policies (for example, deploying into an existing resource group is, to my knowledge, not possible).

All you need to use the IPI method is:
1. Service Principal with appropriate permissions (detailed in the script)
2. Details of the vnet address space and whether it is existing or new
   - Address space of the vnet
   - Subnet for Masters
   - Subnet for Workers
3. DNS (private or public)
4. Pull secret for cluster activation from your Red Hat account
5. OPTIONAL: SSH key to be used to connect to the cluster nodes for diagnostics

### UPI

Using UPI (User Provided Infrastructure) is my recommended approach in an enterprise setup of production environments, where you have subscription-wide policies relating to naming, RBAC, and tagging, among many other requirements that require more control over the cluster provisioning.

With UPI, you will be creating or reusing the following:
1. Resource Group
2. Virtual Network
3. Masters Managed Identity
4. Bootstrap Machine (ARM Deployment)
5. Masters (ARM Deployment)
6. OPTIONAL: Workers provisioning (you can do this after the cluster masters are up)

## Installation Guide

There is a common set of steps you need whether you are using IPI or UPI, and then specific steps for each.

You should know by now whether you are creating a private or public cluster, what your virtual network settings are, what your cluster name is, and have access to your Red Hat pull secret.

### [Prepare Jump-box Machine](ocp-jumpbox-provision.md)

It is a good practice to have a jump box server to act as your installation terminal (especially if you are creating a private cluster with no other access to the vnet). This guide helps you set up this VM, and I would highly recommend doing so.

If you are using a local dev machine, make sure to follow the installation steps mentioned in this guide so you have all the needed tools.

>**NOTE:** The OCP installer currently supports Linux and Mac environments only. If you are on Windows, make sure to use [Windows Subsystem for Linux (WSL)](https://docs.microsoft.com/en-us/windows/wsl/install-win10)

### [Prerequisites](ocp-prerequisites.md)

To use the Red Hat OCP installer, you need to prepare a few prerequisites in advance before starting the installation process.

This guide only focuses on the prerequisites that are shared between the IPI and UPI methods.

### [Installer Configuration](ocp-installer-configuration.md)

The OCP installer depends on having an install-config.yaml file with all the cluster's initial configuration. You can set this up the first time and then reuse it with slight modifications to provision the same or additional clusters.

### [IPI Approach](ocp-ipi.md)

Follow this guide to install OCP via IPI.

### [UPI Approach](ocp-upi.md)

Follow this guide to install OCP via UPI.

### [OCP Cluster Testing](ocp-testing.md)

Now it is time to access the cluster, mainly via the oc client CLI.
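As a quick reference for the installer-configuration step above, this is how a fresh install-config.yaml is generated — a minimal sketch (prompts and exact flags may differ slightly between installer versions):

```bash
# run from the folder containing the openshift-install binary
./openshift-install create install-config --dir=<installation_directory>
```

The installer prompts for the Azure service principal details, region, base domain, cluster name, and pull secret, then writes install-config.yaml into the chosen directory. Keep a copy of that file safe; the installer consumes it during cluster creation.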
--------------------------------------------------------------------------------
/aro-4/aro-4-aad-oidc.sh:
--------------------------------------------------------------------------------
PREFIX=aro4
LOCATION=southafricanorth # Check the available regions on the ARO roadmap https://aka.ms/aro/roadmap
LOCATION_CODE=zan
CLUSTER=$PREFIX-$LOCATION_CODE
ARO_RG="$PREFIX-$LOCATION_CODE"

domain=$(az aro show -g $ARO_RG -n $CLUSTER --query clusterProfile.domain -o tsv)
location=$(az aro show -g $ARO_RG -n $CLUSTER --query location -o tsv)

# OC login details
CLUSTER_URL=$(az aro show -g $ARO_RG -n $CLUSTER --query apiserverProfile.url -o tsv)
USER=$(az aro list-credentials -g $ARO_RG -n $CLUSTER --query kubeadminUsername -o tsv)
PASSWORD=$(az aro list-credentials -g $ARO_RG -n $CLUSTER --query kubeadminPassword -o tsv)

webConsole=$(az aro show -g $ARO_RG -n $CLUSTER --query consoleProfile.url -o tsv)
oauthCallbackURL=https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD

# Quote the secret so the shell does not expand $$ (the PID); $RANDOM keeps it unique
CLIENT_SECRET="P@ssw0rd-$RANDOM"
APP_ID=$(az ad app create \
  --query appId -o tsv \
  --display-name aro-auth \
  --reply-urls $oauthCallbackURL \
  --password $CLIENT_SECRET)

echo $APP_ID

TENANT_ID=$(az account show --query tenantId -o tsv)
echo $TENANT_ID

# Configure OpenShift to use the email claim and fall back to upn to set the preferred username,
# by adding the upn as part of the ID token returned by Azure Active Directory.
cat > manifest.json<< EOF
[{
  "name": "upn",
  "source": null,
  "essential": false,
  "additionalProperties": []
},
{
  "name": "email",
  "source": null,
  "essential": false,
  "additionalProperties": []
},
{
  "name": "name",
  "source": null,
  "essential": false,
  "additionalProperties": []
}]
EOF

az ad app update \
  --set optionalClaims.idToken=@manifest.json \
  --id $APP_ID

# Add permission for the Azure Active Directory Graph User.Read scope to enable sign-in and reading the user profile.
az ad app permission add \
  --api 00000002-0000-0000-c000-000000000000 \
  --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
  --id $APP_ID

oc login $CLUSTER_URL --username=$USER --password=$PASSWORD

oc create secret generic openid-client-secret-azuread \
  --namespace openshift-config \
  --from-literal=clientSecret=$CLIENT_SECRET

# ${APP_ID} and ${TENANT_ID} are expanded with the values set above
cat > oidc.yaml<< EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: ${APP_ID}
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/${TENANT_ID}
EOF

oc apply -f oidc.yaml
# oauth.config.openshift.io/cluster configured
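# Applying the OAuth resource rolls out new oauth-openshift pods. A quick way to watch this
# settle before testing the sign-in (a sketch; the operator may take a few minutes to converge):
oc get clusteroperator authentication
oc get pods -n openshift-authentication -w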
# Open a new private or incognito window in your browser and navigate to the console.
# Select AAD as your authentication option and sign in with your AAD account.
# Head back to oc to grant this user the cluster-admin role
oc get users
# Copy the name of the user
oc adm policy add-cluster-role-to-user cluster-admin $USER_NAME
# Refresh your browser and you should see that the new privileges have taken effect.

# Adding a role to a user scoped to a project
oc adm policy add-role-to-user <role> <username> -n <project>

# Installing Red Hat's Group Sync Operator from Operator Hub

# Create a new app registration and copy the following values:
AZURE_TENANT_ID=
GROUP_SYNC_SP_NAME=aro-group-sync-operator-sp
AZURE_CLIENT_ID=
AZURE_CLIENT_SECRET=
# Grant the following API permissions to Microsoft Graph: Group.Read.All, GroupMember.Read.All and User.Read.All

oc create secret generic azure-group-sync -n group-sync-operator --from-literal=AZURE_TENANT_ID=$AZURE_TENANT_ID --from-literal=AZURE_CLIENT_ID=$AZURE_CLIENT_ID --from-literal=AZURE_CLIENT_SECRET=$AZURE_CLIENT_SECRET

cat << EOF | oc apply -f -
apiVersion: redhatcop.redhat.io/v1alpha1
kind: GroupSync
metadata:
  name: azure-groupsync
spec:
  providers:
  - name: azure
    azure:
      credentialsSecret:
        name: azure-group-sync
        namespace: group-sync-operator
      baseGroups:
      - aro-admins
      - aro-devs
      - aro-support
      userNameAttributes:
      - userPrincipalName
      - upn
      - email
      - name
EOF

--------------------------------------------------------------------------------
/aro-4/aro-4-app-gateway.md:
--------------------------------------------------------------------------------
# Exposing ARO via App Gateway

When you create a fully private cluster (both ingress and API), you can't reach this cluster from any client that doesn't have line-of-sight to the private IPs of the cluster vnet.

If you want to allow access to this cluster via the public internet, you can use Azure Application Gateway.

App Gateway can have both public and private frontend IPs at the same time, which gives you the flexibility to decide how each component is exposed.

There are a few notes to consider before applying this approach:

1. Public DNS: If you want to use a private cluster over the public internet, you need to:

   - Select a cluster DNS name that can be resolved both publicly (to the public IP) and privately (to the private IP).
   - TLS certificates: If you are relying on the cluster's self-signed certificates, you need to have the OpenShift root certificate, the *.apps certificate, and the api certificate at hand.

2. Create a new application gateway that has access to the cluster network
   - Create the application gateway in the same vnet (but in a new dedicated subnet)
   - Or create the application gateway in a peered hub vnet

--------------------------------------------------------------------------------
/aro-4/aro-4-azure-arc.sh:
--------------------------------------------------------------------------------
# Adding Arc extensions
az extension add --name connectedk8s
az extension add --name k8s-extension
# update if the extension already exists
az extension update --name connectedk8s
az extension update --name k8s-extension

# Register required resource providers
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation

# validate the registration status
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table

# Connect ARO to Arc
# docs: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli
# Firewall rules: https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#meet-network-requirements
ARC_RG=azure-arc
ARC_LOCATION=westeurope
ARO_CLUSTER_NAME=aro4-weu
ARO_RG=aro4-weu
# Create Azure Arc resource group
az group create --name $ARC_RG --location $ARC_LOCATION

# Before connecting the ARO cluster, we need to make sure that kubeconfig is configured for the target cluster
CLUSTER_URL=$(az aro show -g $ARO_RG -n $ARO_CLUSTER_NAME --query apiserverProfile.url -o tsv)
USER=$(az aro list-credentials -g $ARO_RG -n $ARO_CLUSTER_NAME --query kubeadminUsername -o tsv)
PASSWORD=$(az aro list-credentials -g $ARO_RG -n $ARO_CLUSTER_NAME --query kubeadminPassword -o tsv)
oc login $CLUSTER_URL --username=$USER --password=$PASSWORD

oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa

# Create connected cluster (note: use $ARC_LOCATION here — $LOCATION is not defined in this script)
az connectedk8s connect \
  --name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --location $ARC_LOCATION \
  --tags Datacenter=$ARC_LOCATION CountryOrRegion=NL
# look in the output for "provisioningState": "Succeeded"

##################
# Arc Extensions #
##################

# Container Insights - Azure Monitor:
# Docs: https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters

# Firewall outbound rules:
# Endpoint                        Port
# *.ods.opinsights.azure.com      443
# *.oms.opinsights.azure.com      443
# dc.services.visualstudio.com    443
# *.monitoring.azure.com          443
# login.microsoftonline.com       443

ARO_LOGS_WORKSPACE_NAME=aro4-logs-weu
WORKSPACE_ID=$(az resource list \
  --resource-type Microsoft.OperationalInsights/workspaces \
  --query "[?contains(name, '${ARO_LOGS_WORKSPACE_NAME}')].id" -o tsv)
echo $WORKSPACE_ID

az k8s-extension create \
  --name azuremonitor-containers \
  --cluster-name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureMonitor.Containers \
  --configuration-settings logAnalyticsWorkspaceResourceID=$WORKSPACE_ID
# look in the output for "provisioningState": "Succeeded"
# Validate
az k8s-extension show \
  --name azuremonitor-containers \
  --cluster-name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --cluster-type connectedClusters

# You might configure the default deployment by adding the following param:
# --configuration-settings omsagent.resources.daemonset.limits.cpu=150m omsagent.resources.daemonset.limits.memory=600Mi omsagent.resources.deployment.limits.cpu=1 omsagent.resources.deployment.limits.memory=750Mi

# Microsoft Defender
# Docs: https://docs.microsoft.com/en-us/azure/defender-for-cloud/defender-for-containers-enable?tabs=aks-deploy-portal%2Ck8s-deploy-asc%2Ck8s-verify-asc%2Ck8s-remove-arc%2Caks-removeprofile-api&pivots=defender-for-container-arc

# Make sure that Microsoft Defender for Containers is enabled

# Firewall outbound rules:
# Domain                        Port
# *.ods.opinsights.azure.com    443
# *.oms.opinsights.azure.com    443
# login.microsoftonline.com     443

# You can use Microsoft Defender in the Azure Portal or the Azure CLI to enable the protection
az k8s-extension create \
  --name microsoft.azuredefender.kubernetes \
  --cluster-type connectedClusters \
  --cluster-name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --extension-type microsoft.azuredefender.kubernetes \
  --configuration-settings logAnalyticsWorkspaceResourceID=$WORKSPACE_ID auditLogPath="/var/log/kube-apiserver/audit.log"

# Validate
az k8s-extension show \
  --name microsoft.azuredefender.kubernetes \
  --cluster-name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --cluster-type connectedClusters

# Microsoft Defender in the Azure Portal can help in creating a new policy with the "DeployIfNotExists" effect through the
# enforce button under Microsoft Defender for Cloud -> Your Arc-Connected Cluster -> Defender policy -> Enforce
# Policy template is "Configure Azure Arc enabled Kubernetes clusters to install Azure Defender's extension"
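# If an extension misbehaves, it can help to re-check the Arc connection itself.
# A minimal sketch using standard commands (adjust names to your environment):
az connectedk8s show \
  --name $ARO_CLUSTER_NAME \
  --resource-group $ARC_RG \
  --query "{name:name, state:provisioningState, agentVersion:agentVersion}" -o table
oc get pods -n azure-arc   # all Arc agent pods should be Running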
--------------------------------------------------------------------------------
/aro-4/aro-4-azure-container-registry.sh:
--------------------------------------------------------------------------------
# NOTE: These steps assume that you have the oc client tools already signed in to your ARO cluster

# Azure Container Registry (ACR) Integration
# It is common on Azure to use ACR as a central container registry
# ARO can easily integrate with ACR through an SPN and a Kubernetes pull secret

# Set the name of an existing or new ACR
CONTAINER_REGISTRY_NAME="aroacr$RANDOM"
# If you don't have an ACR, you can create one:
az acr create \
  -g $ARO_RG \
  -n $CONTAINER_REGISTRY_NAME \
  --sku Standard \
  --tags "PROJECT=ARO4" "STATUS=EXPERIMENTAL"
# Getting the resource id:
ACR_ID=$(az acr show --name $CONTAINER_REGISTRY_NAME --query id --output tsv)
# Creating the service principal
# Create an SP to be used to access ACR (this will be used by Azure DevOps to push images to the registry)
ACR_SP_NAME="${CLUSTER}-acr-sp"
ACR_SP=$(az ad sp create-for-rbac -n $ACR_SP_NAME --skip-assignment)
# echo $ACR_SP | jq
ACR_SP_ID=$(echo $ACR_SP | jq -r .appId)
ACR_SP_PASSWORD=$(echo $ACR_SP | jq -r .password)

echo $ACR_SP_ID
echo $ACR_SP_PASSWORD

# Take a note of the ID and Password values as we will be using them in Azure DevOps

# We need the full ACR Azure resource id to grant the permissions
# Now we grant permissions to the SP to allow the push and pull roles
az role assignment create --assignee $ACR_SP_ID --scope $ACR_ID --role acrpull
az role assignment create --assignee $ACR_SP_ID --scope $ACR_ID --role acrpush

# Creating the pull secret in ARO
ARO_PULL_SECRET_NAME=default-acr
oc create secret docker-registry $ARO_PULL_SECRET_NAME \
  --namespace default \
  --docker-server=https://$CONTAINER_REGISTRY_NAME.azurecr.io \
  --docker-username=$ACR_SP_ID \
  --docker-password=$ACR_SP_PASSWORD

# OPTIONAL: Import an image to ACR for testing
az acr import \
  --name $CONTAINER_REGISTRY_NAME \
  --source docker.io/library/nginx:latest \
  --image nginx:latest
# Validate the import was successful
az acr repository show-manifests \
  --name $CONTAINER_REGISTRY_NAME \
  --repository nginx

# Deploy from ACR to ARO
# Open the sample deployment file (nginx-pod.yaml) and replace the #{acrName}# token with your ACR name
oc apply -f nginx-pod.yaml
# clean up
oc delete -f nginx-pod.yaml

--------------------------------------------------------------------------------
/aro-4/aro-4-azure-files.sh:
--------------------------------------------------------------------------------
# Docs: https://docs.microsoft.com/en-us/azure/openshift/howto-create-a-storageclass

AZURE_FILES_RESOURCE_GROUP=$ARO_RG
LOCATION=westeurope

# az group create -l $LOCATION -n $AZURE_FILES_RESOURCE_GROUP

AZURE_STORAGE_ACCOUNT_NAME=aroweustoragemsft

az storage account create \
  --name $AZURE_STORAGE_ACCOUNT_NAME \
  --resource-group $AZURE_FILES_RESOURCE_GROUP \
  --kind StorageV2 \
  --sku Standard_LRS

ARO_RESOURCE_GROUP=$ARO_RG
ARO_CLUSTER=$CLUSTER
ARO_SERVICE_PRINCIPAL_ID=$(az aro show -g $ARO_RESOURCE_GROUP -n $ARO_CLUSTER --query servicePrincipalProfile.clientId -o tsv)
echo $ARO_SERVICE_PRINCIPAL_ID
AZURE_FILES_RESOURCE_GROUP_RES_ID=$(az group show -n $AZURE_FILES_RESOURCE_GROUP --query id -o tsv)
echo $AZURE_FILES_RESOURCE_GROUP_RES_ID
# The scope already points at the resource group, so no extra -g flag is needed here
az role assignment create --role Contributor --scope $AZURE_FILES_RESOURCE_GROUP_RES_ID --assignee $ARO_SERVICE_PRINCIPAL_ID

az role assignment list \
  --all \
  --assignee $ARO_SERVICE_PRINCIPAL_ID \
  --output json | jq '.[] | {"principalName":.principalName, "roleDefinitionName":.roleDefinitionName, "scope":.scope}'


ARO_API_SERVER=$(az aro list --query "[?contains(name,'$ARO_CLUSTER')].[apiserverProfile.url]" -o tsv)

oc login -u kubeadmin -p $(az aro list-credentials -g $ARO_RESOURCE_GROUP -n $ARO_CLUSTER --query=kubeadminPassword -o tsv) $ARO_API_SERVER

oc create clusterrole azure-secret-reader \
  --verb=create,get \
  --resource=secrets

oc adm policy add-cluster-role-to-user azure-secret-reader system:serviceaccount:kube-system:persistent-volume-binder

cat << EOF >> azure-storageclass-azure-file.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azure-file
provisioner: kubernetes.io/azure-file
parameters:
  location: $LOCATION
  secretNamespace: kube-system
  skuName: Standard_LRS
  storageAccount: 
$AZURE_STORAGE_ACCOUNT_NAME 51 | resourceGroup: $AZURE_FILES_RESOURCE_GROUP 52 | reclaimPolicy: Delete 53 | volumeBindingMode: Immediate 54 | EOF 55 | 56 | oc create -f azure-storageclass-azure-file.yaml 57 | 58 | # change default storage to Azure Files 59 | oc patch storageclass managed-premium -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 60 | 61 | oc patch storageclass azure-file -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 62 | 63 | # validate 64 | oc new-project azfiletest 65 | oc new-app httpd-example 66 | 67 | #Wait for the pod to become Ready 68 | curl $(oc get route httpd-example -n azfiletest -o jsonpath={.spec.host}) 69 | 70 | #If you have set the storage class by default, you can omit the --claim-class parameter 71 | oc set volume dc/httpd-example --add --name=v1 -t pvc --claim-size=1G -m /data --claim-class='azure-file' 72 | 73 | #Wait for the new deployment to rollout 74 | export POD=$(oc get pods --field-selector=status.phase==Running -o jsonpath={.items[].metadata.name}) 75 | oc exec httpd-example-1-zp8dl -- bash -c "echo 'azure file storage $RANDOM' >> /data/test.txt" 76 | 77 | oc exec httpd-example-1-zp8dl -- bash -c "cat /data/test.txt" 78 | 79 | # validate 2 80 | AZURE_FILES_SECRET=custom-azure-storage 81 | AZURE_STORAGE_ACCOUNT_KEY= 82 | AZURE_FILES_SHARE_NAME=aro-share 83 | oc create secret generic $AZURE_FILES_SECRET --from-literal=azurestorageaccountname=$AZURE_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$AZURE_STORAGE_ACCOUNT_KEY 84 | 85 | cat << EOF >> azure-storage-pv.yaml 86 | apiVersion: "v1" 87 | kind: "PersistentVolume" 88 | metadata: 89 | name: "pv0001" 90 | spec: 91 | capacity: 92 | storage: "5Gi" 93 | accessModes: 94 | - "ReadWriteMany" 95 | storageClassName: azure-file 96 | azureFile: 97 | secretName: $AZURE_FILES_SECRET 98 | shareName: $AZURE_FILES_SHARE_NAME 99 | readOnly: false 100 | EOF 101 | 102 | oc apply -f azure-storage-pv.yaml 103 | 104 | cat << EOF >> azure-storage-pvc.yaml 105 | apiVersion: "v1" 106 | kind: "PersistentVolumeClaim" 107 | metadata: 108 | name: "claim1" 109 | spec: 110 | accessModes: 111 | - "ReadWriteMany" 112 | resources: 113 | requests: 114 | storage: "5Gi" 115 | storageClassName: azure-file 116 | volumeName: "pv0001" 117 | EOF 118 | 119 | oc apply -f azure-storage-pvc.yaml 120 | 121 | export POD=pod-name 122 | cat << EOF >> azure-storage-pvc-pod.yaml 123 | apiVersion: v1 124 | kind: Pod 125 | metadata: 126 | name: $POD 127 | spec: 128 | containers: 129 | - name: nginx 130 | image: nginx:1.17.4 131 | ports: 132 | - containerPort: 80 133 | readinessProbe: 134 | httpGet: 135 | path: / 136 | port: 80 137 | initialDelaySeconds: 5 138 | periodSeconds: 5 139 | resources: 140 | limits: 141 | memory: 500Mi 142 | cpu: 500m 143 | requests: 144 | memory: 100Mi 145 | cpu: 100m 146 | volumeMounts: 147 | - mountPath: "/data" 148 | name: azure-file-share 149 | volumes: 150 | - name: azure-file-share 151 | persistentVolumeClaim: 152 | claimName: claim1 153 | EOF 154 | 155 | oc apply -f azure-storage-pvc-pod.yaml 156 | 157 | 158 | oc exec $POD -- bash -c "echo '$POD: azure file storage $RANDOM' >> /data/test.txt" 159 | 160 | oc exec $POD -- bash -c "cat /data/test.txt" -------------------------------------------------------------------------------- /aro-4/aro-4-backup.sh: -------------------------------------------------------------------------------- 1 | RELEASE_NAME=velero-v1.5.2-linux-amd64 2 | wget 
https://github.com/vmware-tanzu/velero/releases/download/v1.5.2/$RELEASE_NAME.tar.gz
tar -xvf $RELEASE_NAME.tar.gz
sudo cp ./$RELEASE_NAME/velero /usr/local/bin/
velero version

AZURE_BACKUP_RESOURCE_GROUP=Velero_Backups
LOCATION=westeurope
az group create -n $AZURE_BACKUP_RESOURCE_GROUP --location $LOCATION

AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
az storage account create \
  --name $AZURE_STORAGE_ACCOUNT_ID \
  --resource-group $AZURE_BACKUP_RESOURCE_GROUP \
  --sku Standard_GRS \
  --encryption-services blob \
  --https-only true \
  --kind BlobStorage \
  --access-tier Hot

BLOB_CONTAINER=velero-k8s
az storage container create -n $BLOB_CONTAINER --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID

# Cluster name and resource group
CLUSTER_NAME=aro4-weu
CLUSTER_RG=aro4-weu

# For ARO, you need the cluster (infra) resource group
export AZURE_RESOURCE_GROUP=$(az aro show --name $CLUSTER_NAME --resource-group $CLUSTER_RG | jq -r .clusterProfile.resourceGroupId | cut -d '/' -f 5,5)
echo $AZURE_RESOURCE_GROUP
# Subscription and tenant information
SUBSCRIPTION_ACCOUNT=$(az account show)
echo $SUBSCRIPTION_ACCOUNT | jq
# Get the tenant ID
AZURE_TENANT_ID=$(echo $SUBSCRIPTION_ACCOUNT | jq -r .tenantId)
# or use AZURE_TENANT_ID=$(az account show --query tenantId -o tsv)
echo $AZURE_TENANT_ID
# Get the subscription ID
AZURE_SUBSCRIPTION_ID=$(echo $SUBSCRIPTION_ACCOUNT | jq -r .id)
# or use AZURE_SUBSCRIPTION_ID=$(az account show --query id -o tsv)
echo $AZURE_SUBSCRIPTION_ID

# Create the SP first and look up its appId before creating the role assignment
AZURE_CLIENT_SECRET=$(az ad sp create-for-rbac --name "aro4-velero-sp" --skip-assignment --query 'password' -o tsv)
AZURE_CLIENT_ID=$(az ad sp list --display-name "aro4-velero-sp" --query '[0].appId' -o tsv)
echo $AZURE_CLIENT_ID
az role assignment create --assignee $AZURE_CLIENT_ID --role "Contributor" --scope "/subscriptions/$AZURE_SUBSCRIPTION_ID"
az role assignment list \
  --all \
  --assignee $AZURE_CLIENT_ID \
  --output json | jq '.[] | {"principalName":.principalName, "roleDefinitionName":.roleDefinitionName, "scope":.scope}'

cat << EOF > ./credentials-velero.yaml
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF

# Installing Velero
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.1.1 \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero.yaml \
  --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
  --snapshot-location-config apiTimeout=15m \
  --velero-pod-cpu-limit="0" --velero-pod-mem-limit="0" \
  --velero-pod-mem-request="0" --velero-pod-cpu-request="0"

kubectl logs deployment/velero -n velero

# create a backup of a namespace
NS=default
PROJECT_NAME=ostoy
BACKUP_NAME=$PROJECT_NAME-backup
velero create backup $BACKUP_NAME --include-namespaces=$NS

# Check backup status (look for phase:Completed)
velero backup describe $BACKUP_NAME
velero backup logs $BACKUP_NAME
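# For recurring protection you can create a schedule instead of one-off backups.
# A minimal sketch (cron syntax: daily at 02:00, retained for 7 days — values are illustrative):
velero schedule create $PROJECT_NAME-daily \
  --schedule="0 2 * * *" \
  --include-namespaces=$NS \
  --ttl 168h0m0s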
# Restore
oc get backups -n velero
RESTORE_NAME=$PROJECT_NAME-restore
velero restore create $RESTORE_NAME --from-backup $BACKUP_NAME

# Check the restore status
oc get restore -n velero $RESTORE_NAME -o yaml
velero restore logs $RESTORE_NAME
# Clean up
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero

--------------------------------------------------------------------------------
/aro-4/aro-4-custom-api-certificate.sh:
--------------------------------------------------------------------------------
# https://docs.openshift.com/container-platform/4.5/security/certificates/api-server.html

# Placeholders in angle brackets follow the linked docs; replace them with your values
oc create secret tls <certificate-secret-name> \
  --cert=<path-to-certificate> \
  --key=<path-to-private-key> \
  -n openshift-config

oc patch apiserver cluster \
  --type=merge -p \
  '{"spec":{"servingCerts": {"namedCertificates":
  [{"names": ["<api-server-fqdn>"],
  "servingCertificate": {"name": "<certificate-secret-name>"}}]}}}'

# Validate
oc get apiserver cluster -o yaml

--------------------------------------------------------------------------------
/aro-4/aro-4-custom-ingress-certificate.sh:
--------------------------------------------------------------------------------
# https://docs.openshift.com/container-platform/4.5/security/certificates/replacing-default-ingress-certificate.html

# Create new configuration for the custom root CA public key
oc create configmap custom-ca \
  --from-file=ca-bundle.crt=<path-to-ca-bundle> \
  -n openshift-config

oc patch proxy/cluster \
  --type=merge \
  --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'

# Create a new TLS secret for a wildcard certificate for *.apps.CLUSTER-BASE-DOMAIN
oc create secret tls <certificate-secret-name> \
  --cert=<path-to-wildcard-certificate> \
  --key=<path-to-private-key> \
  -n openshift-ingress


oc patch ingresscontroller.operator default \
  --type=merge -p \
  '{"spec":{"defaultCertificate": {"name": "<certificate-secret-name>"}}}' \
  -n openshift-ingress-operator

--------------------------------------------------------------------------------
/aro-4/aro-4-dns-forwarder.sh:
--------------------------------------------------------------------------------
# NOTE: These steps assume that you have the oc client tools already signed in to your ARO cluster

# DNS Forwarder setup (for on-premise DNS name resolution)
oc edit dns.operator/default

# Update the spec: {} with your DNS forwarding setup
# spec:
#   servers:
#   - name: foo-server
#     zones:
#       - foo.com
#     forwardPlugin:
#       upstreams:
#         - 1.1.1.1
#         - 2.2.2.2:5353
#   - name: bar-server
#     zones:
#       - bar.com
#       - example.com
#     forwardPlugin:
#       upstreams:
#         - 3.3.3.3
#         - 4.4.4.4:5454

# I used the following to forward to a DNS server deployed in a peered hub network
# spec:
#   servers:
#   - forwardPlugin:
#       upstreams:
#       - 10.165.5.4
#     name: azure-custom-dns
#     zones:
#     - mohamedsaif-cloud.corp

# Check the status
oc describe clusteroperators/dns

# Check the dns logs:
oc logs --namespace=openshift-dns-operator deployment/dns-operator -c dns-operator

# Test the DNS resolution
oc run --generator=run-pod/v1 -it --rm aro-ssh --image=debian
# Once you are in the interactive session, execute the following commands (replace the FQDN with yours)
apt-get update
apt-get install dnsutils -y
apt-get install curl -y
nslookup -debug ocp.mohamedsaif-cloud.corp.
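# Back on your terminal (outside the debug pod), you can also confirm that CoreDNS picked up the
# forwarder config. A sketch — dns-default is the standard CoreDNS configmap in OpenShift 4:
oc -n openshift-dns get configmap dns-default -o yaml | grep -A 5 forward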
48 | 49 | # Or you can use the native ssh to node 50 | oc debug node/NODENAME 51 | -------------------------------------------------------------------------------- /aro-4/aro-4-egress-lockdown.sh: -------------------------------------------------------------------------------- 1 | # You can use several network firewall 2 | # This guide assumes you are using Azure Firewall 3 | 4 | # Variables 5 | FW_RG=central-infosec-ent-weu 6 | FW_NAME=hub-ext-fw-ent-weu 7 | 8 | FW_PUBLIC_IP=$(az network public-ip show --ids $(az network firewall show -g $FW_RG -n $FW_NAME --query "ipConfigurations[0].publicIpAddress.id" -o tsv) --query "ipAddress" -o tsv) 9 | FW_PRIVATE_IP=$(az network firewall show -g $FW_RG -n $FW_NAME --query "ipConfigurations[0].privateIpAddress" -o tsv) 10 | 11 | echo $FW_PUBLIC_IP 12 | echo $FW_PRIVATE_IP 13 | 20.50.214.65 14 | az network route-table create -g $FW_RG --name aro-route 15 | az network route-table route create -g $FW_RG --name aro-fw-udr --route-table-name aro-route --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FW_PRIVATE_IP 16 | 17 | # required rules 18 | az network firewall application-rule create -g $FW_RG -f $FW_NAME \ 19 | --collection-name 'OpenShift' \ 20 | --action allow \ 21 | --priority 500 \ 22 | -n 'required' \ 23 | --source-addresses '*' \ 24 | --protocols 'http=80' 'https=443' \ 25 | --target-fqdns 'registry.redhat.io' '*.quay.io' 'sso.redhat.com' 'management.azure.com' 'mirror.openshift.com' 'api.openshift.com' 'quay.io' '*.blob.core.windows.net' 'gcs.prod.monitoring.core.windows.net' 'registry.access.redhat.com' 'login.microsoftonline.com' '*.servicebus.windows.net' '*.table.core.windows.net' 'grafana.com' 26 | 27 | # Optional rule for Docker 28 | az network firewall application-rule create -g $FW_RG -f $FW_NAME \ 29 | --collection-name 'Docker' \ 30 | --action allow \ 31 | --priority 501 \ 32 | -n 'docker' \ 33 | --source-addresses '*' \ 34 | --protocols 'http=80' 'https=443' \ 35 | --target-fqdns '*cloudflare.docker.com' '*registry-1.docker.io' 'apt.dockerproject.org' 'auth.docker.io' 36 | 37 | az network firewall network-rule create \ 38 | -g $RG_INFOSEC\ 39 | --f $FW_NAME \ 40 | --collection-name "azure-services-rules" \ 41 | -n "service-tags" \ 42 | --source-addresses "*" \ 43 | --protocols "Any" \ 44 | --destination-addresses "AzureContainerRegistry" "MicrosoftContainerRegistry" "AzureActiveDirectory" \ 45 | --destination-ports "*" \ 46 | --action "Allow" \ 47 | --priority 230 48 | 49 | # required rules for public clusters 50 | az network firewall network-rule create -g $FW_RG -f $FW_NAME \ 51 | --collection-name 'OpenShift-Public' \ 52 | --action allow \ 53 | --priority 502 \ 54 | -n 'required-public' \ 55 | --source-addresses '6443' \ 56 | --destination-ports "6443" \ 57 | --destination-addresses "52.143.13.154"\ 58 | --protocols "TCP" 59 | 60 | 61 | # Get the route table id 62 | ROUTE_ID=$(az network route-table show -g $FW_RG --name aro-route --query id -o tsv) 63 | 64 | # Avoid adding the UDR on the masters subnet incase of a public cluster. 
65 | az network vnet subnet update -g $VNET_RG --vnet-name $PROJ_VNET_NAME --name $MASTERS_SUBNET_NAME --route-table $ROUTE_ID 66 | az network vnet subnet update -g $VNET_RG --vnet-name $PROJ_VNET_NAME --name $WORKERS_SUBNET_NAME --route-table $ROUTE_ID 67 | 68 | # Test 69 | cat <> ./aro-provision-$LOCATION_CODE.vars 129 | # Check the available regions on the ARO roadmap https://aka.ms/aro/roadmap 130 | echo export LOCATION=westeurope >> ./aro-provision-$LOCATION_CODE.vars 131 | echo export ARO_RG=$ARO_RG >> ./aro-provision-$LOCATION_CODE.vars 132 | echo export ARO_INFRA_RG=$ARO_INFRA_RG >> ./aro-provision-$LOCATION_CODE.vars 133 | echo export VNET_RG=$VNET_RG >> ./aro-provision-$LOCATION_CODE.vars 134 | # Cluster information 135 | echo export CLUSTER=$CLUSTER >> ./aro-provision-$LOCATION_CODE.vars 136 | echo export DOMAIN_NAME=$DOMAIN_NAME >> ./aro-provision-$LOCATION_CODE.vars 137 | echo export INGRESS_VISIBILITY=$INGRESS_VISIBILITY >> ./aro-provision-$LOCATION_CODE.vars 138 | echo export API_VISIBILITY=$API_VISIBILITY >> ./aro-provision-$LOCATION_CODE.vars 139 | echo export WORKERS_VM_SIZE=$WORKERS_VM_SIZE >> ./aro-provision-$LOCATION_CODE.vars 140 | # Network details 141 | echo export PROJ_VNET_NAME=$PROJ_VNET_NAME >> ./aro-provision-$LOCATION_CODE.vars 142 | echo export MASTERS_SUBNET_NAME=$MASTERS_SUBNET_NAME >> ./aro-provision-$LOCATION_CODE.vars 143 | echo export WORKERS_SUBNET_NAME=$WORKERS_SUBNET_NAME >> ./aro-provision-$LOCATION_CODE.vars 144 | echo export PROJ_VNET_ADDRESS_SPACE=$PROJ_VNET_ADDRESS_SPACE >> ./aro-provision-$LOCATION_CODE.vars 145 | echo export MASTERS_SUBNET_IP_PREFIX=$MASTERS_SUBNET_IP_PREFIX >> ./aro-provision-$LOCATION_CODE.vars 146 | echo export WORKERS_SUBNET_IP_PREFIX=$WORKERS_SUBNET_IP_PREFIX >> ./aro-provision-$LOCATION_CODE.vars 147 | # Service Principal 148 | echo export ARO_SP_ID=$ARO_SP_ID >> ./aro-provision-$LOCATION_CODE.vars 149 | echo export ARO_SP_PASSWORD=$ARO_SP_PASSWORD >> ./aro-provision-$LOCATION_CODE.vars 150 | echo export ARO_SP_TENANT=$ARO_SP_TENANT >> ./aro-provision-$LOCATION_CODE.vars 151 | echo export ARO_SP_OBJECT_ID=$ARO_SP_OBJECT_ID >> ./aro-provision-$LOCATION_CODE.vars 152 | echo export ARO_RP_SP_OBJECT_ID=$ARO_RP_SP_OBJECT_ID >> ./aro-provision-$LOCATION_CODE.vars 153 | 154 | # Creating the cluster 155 | az aro create \ 156 | --resource-group $ARO_RG \ 157 | --cluster-resource-group $ARO_INFRA_RG \ 158 | --name $CLUSTER \ 159 | --location $LOCATION \ 160 | --vnet $PROJ_VNET_NAME \ 161 | --vnet-resource-group $VNET_RG \ 162 | --master-subnet $MASTERS_SUBNET_NAME \ 163 | --worker-subnet $WORKERS_SUBNET_NAME \ 164 | --ingress-visibility $INGRESS_VISIBILITY \ 165 | --apiserver-visibility $API_VISIBILITY \ 166 | --pull-secret $PULL_SECRET \ 167 | --worker-count 3 \ 168 | --client-id $ARO_SP_ID \ 169 | --client-secret $ARO_SP_PASSWORD \ 170 | --domain $DOMAIN_NAME \ 171 | --worker-vm-size $WORKERS_VM_SIZE \ 172 | --tags "PROJECT=ARO4" "STATUS=EXPERIMENTAL" 173 | 174 | # Append this flag if you expect to face challenges during provisioning 175 | # --debug 176 | 177 | # In private cluster, I would highly recommend setting up the private DNS by including the following: 178 | # --domain $DOMAIN_NAME 179 | # After the cluster provisioning, you can retrieve the IPs for ingress and API to be updated in the DNS records 180 | API_IP=$(az aro show -g $ARO_RG -n $CLUSTER --query apiserverProfile.ip -o tsv) 181 | INGRESS_IP=$(az aro show -g $ARO_RG -n $CLUSTER --query 'ingressProfiles[0].ip' -o tsv) 182 | echo $API_IP 183 | echo $INGRESS_IP 184 
# To create fully private clusters, add the following to the create command:
# Ingress controls the visibility of your workloads
# API Server controls the visibility of your masters' api server
# --ingress-visibility Private \
# --apiserver-visibility Private \

# Custom dns for the cluster: --domain $DOMAIN_NAME
# locate the load balancer without the word "internal" in the cluster infra resource group
# For the frontend IP configuration with a GUID-like name, assign the IP to your *.apps.$DOMAIN_NAME record in the selected DNS server
# For the other frontend IP rule, assign it to the api.$DOMAIN_NAME record in the selected DNS server

# Check the cluster
az aro list -o table

# To display cluster kubeadmin credentials:
az aro list-credentials -g $ARO_RG -n $CLUSTER

# Getting the oc CLI tools
mkdir oc-cli
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
tar -xvzf ./openshift-client-linux.tar.gz -C ./oc-cli
sudo cp ./oc-cli/oc /usr/local/bin/
oc version

# Login to the cluster using the cli
# Get the API server url:
CLUSTER_URL=$(az aro show -g $ARO_RG -n $CLUSTER --query apiserverProfile.url -o tsv)
USER=$(az aro list-credentials -g $ARO_RG -n $CLUSTER --query kubeadminUsername -o tsv)
PASSWORD=$(az aro list-credentials -g $ARO_RG -n $CLUSTER --query kubeadminPassword -o tsv)
oc login $CLUSTER_URL --username=$USER --password=$PASSWORD
# test the successful login
oc get nodes
podman login -u kubeadmin -p $(oc whoami -t) image-registry.openshift-image-registry.svc:5000
# Scale the cluster to 4 worker nodes
# The easiest way to do this is via the console -> Compute -> Machine Sets -> each worker will have a machine set (usually with different availability zones to optimize the cluster SLA), set the desired count to the target value
# You can also scale manually through the machineset apis
# Get all machinesets
oc get machinesets -n openshift-machine-api
# Scale a particular one to 2 nodes (replace the placeholder with a machine set name from the list above)
oc scale --replicas=2 machineset <machineset-name> -n openshift-machine-api
# NOTE: Having zero worker nodes in your cluster will by default result in losing access to the OpenShift console. You will still be able to access the cluster via the oc CLI
# NOTE: If you need to cool down the cluster to save cost, I would recommend maintaining at least 2 nodes during that period to avoid hitting problems with cluster operations

# Clean up
az aro delete -g $ARO_RG -n $CLUSTER

# ARO Create options
# Command
#     az aro create : Create a cluster.
#         Command group 'aro' is in preview. It may be changed/removed in a future release.
# Arguments
#     --master-subnet     [Required] : Name or ID of master vnet subnet. If name is supplied,
#                                      `--vnet` must be supplied.
#     --name -n           [Required] : Name of cluster.
#     --resource-group -g [Required] : Name of resource group. You can configure the default group
#                                      using `az configure --defaults group=`.
#     --worker-subnet     [Required] : Name or ID of worker vnet subnet. If name is supplied,
#                                      `--vnet` must be supplied.
#     --apiserver-visibility         : API server visibility.
#     --client-id                    : Client ID of cluster service principal.
#     --client-secret                : Client secret of cluster service principal.
246 | # --cluster-resource-group : Resource group of cluster. 247 | # --domain : Domain of cluster. 248 | # --ingress-visibility : Ingress visibility. 249 | # --location -l : Location. Values from: `az account list-locations`. You can 250 | # configure the default location using `az configure --defaults 251 | # location=`. 252 | # --master-vm-size : Size of master VMs. 253 | # --no-wait : Do not wait for the long-running operation to finish. 254 | # --pod-cidr : CIDR of pod network. 255 | # --service-cidr : CIDR of service network. 256 | # --tags : Space-separated tags: key[=value] [key[=value] ...]. Use '' to 257 | # clear existing tags. 258 | # --vnet : Name or ID of vnet. If name is supplied, `--vnet-resource- 259 | # group` must be supplied. 260 | # --vnet-resource-group : Name of vnet resource group. 261 | # --worker-count : Count of worker VMs. 262 | # --worker-vm-disk-size-gb : Disk size in GB of worker VMs. 263 | # --worker-vm-size : Size of worker VMs. 264 | 265 | # Global Arguments 266 | # --debug : Increase logging verbosity to show all debug logs. 267 | # --help -h : Show this help message and exit. 268 | # --output -o : Output format. Allowed values: json, jsonc, none, table, tsv, 269 | # yaml, yamlc. Default: json. 270 | # --query : JMESPath query string. See http://jmespath.org/ for more 271 | # information and examples. 272 | # --subscription : Name or ID of subscription. You can configure the default 273 | # subscription using `az account set -s NAME_OR_ID`. 274 | # --verbose : Increase logging verbosity. Use --debug for full debug logs. -------------------------------------------------------------------------------- /aro-4/aro-4-routes.sh: -------------------------------------------------------------------------------- 1 | # ARO is created with a default ingress route 2 | oc -n openshift-ingress-operator get ingresscontroller 3 | 4 | # Details of the default ingress router 5 | oc -n openshift-ingress-operator get ingresscontroller/default -o json | jq '.spec' 6 | # or 7 | oc describe --namespace=openshift-ingress-operator ingresscontroller/default 8 | # To check if this router has public or private external IP 9 | oc -n openshift-ingress get svc 10 | 11 | # OCP docs: https://docs.openshift.com/container-platform/4.3/networking/ingress-operator.html#nw-ingress-view_configuring-ingress 12 | 13 | # It is good to look at the yaml definition of the default ingress: 14 | oc -n openshift-ingress-operator get ingresscontroller/default -o json | jq 15 | 16 | oc -n openshift-ingress-operator edit ingresscontroller/default 17 | 18 | # The below yaml will create new internal ingress: 19 | # apiVersion: operator.openshift.io/v1 20 | # kind: IngressController 21 | # metadata: 22 | # namespace: openshift-ingress-operator 23 | # name: internal 24 | # spec: 25 | # domain: intapps.aro4-weu-14920.westeurope.aroapp.io 26 | # endpointPublishingStrategy: 27 | # type: LoadBalancerService 28 | # loadBalancer: 29 | # scope: Internal 30 | # namespaceSelector: 31 | # matchLabels: 32 | # type: internal 33 | 34 | # namespaceSelector above is selected to instruct OCP to use specific namespace only. 
# The other option is a route selector.
oc apply -f internal-ingress.yaml

# checking the newly created ingress:
oc -n openshift-ingress-operator get ingresscontroller
oc describe --namespace=openshift-ingress-operator ingresscontroller/internal
oc -n openshift-ingress get svc

# creating a new project to use that ingress
oc new-project internal-db
# label the project with type=internal
oc label namespace/internal-db type=internal

# create new pod
oc new-app --docker-image erjosito/sqlapi:0.1
oc new-app --docker-image openshift/hello-openshift

# expose the pod
oc expose svc sqlapi
oc expose service/hello-openshift
# checking the route
oc describe route/sqlapi
oc describe route/hello-openshift
# You will notice that the service is exposed over both default and internal, as the default controller doesn't have any selectors set up
# Let's add a label selector to the default ingress controller
oc -n openshift-ingress-operator edit ingresscontroller/default
oc -n openshift-ingress-operator delete ingresscontroller/internal

# Add the following to the ingress controller spec:
# spec:
#   defaultCertificate:
#     name: 1997c2c5-965a-45cb-b11c-8e26e5a96882-ingress
#   namespaceSelector:
#     matchLabels:
#       type: external
#   replicas: 2

# to update the route, we will delete and recreate it
nslookup hello-openshift-internal-db.apps.aro-weu.az.mohamedsaif.com
nslookup internal.apps.aro-weu.az.mohamedsaif.com
curl http://hello-openshift-internal-db.apps.aro-weu.az.mohamedsaif.com -k
curl http://hello-openshift-internal-db.internal.apps.aro-weu.az.mohamedsaif.com

# Forcing the default ingress to be internal
oc replace --force --wait --filename - <

--------------------------------------------------------------------------------
/aro-4/aro-4-sp-rotation.sh:
--------------------------------------------------------------------------------
oc edit secrets azure-credentials -n kube-system

# replace base64 string with new values; it will require base64 decode -> modify -> base64 encode -> update the secret
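# A non-interactive alternative to hand-editing the secret — a sketch only; the key name
# azure_client_secret is the usual one in this secret, but verify on your cluster first with:
#   oc get secret azure-credentials -n kube-system -o jsonpath='{.data}'
NEW_SECRET_B64=$(echo -n '<new-sp-client-secret>' | base64 -w0)
oc patch secret azure-credentials -n kube-system --type=merge -p "{\"data\":{\"azure_client_secret\":\"$NEW_SECRET_B64\"}}"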
oc edit secret azure-cloud-provider -n kube-system

echo 'hello' | base64
echo 'aGVsbG8K' | base64 -d

--------------------------------------------------------------------------------
/aro-4/aro-4-upgrade.sh:
--------------------------------------------------------------------------------
## Cluster upgrades

# Current version status
oc get clusterversion

# Detailed cluster version
oc get clusterversion -o json | jq

# Update channel status
oc get clusterversion -o json|jq ".items[0].spec"

# Modifying the update channel:
# Manually updating the channel value under the spec section will result in updated cluster settings
oc edit clusterversion

# Or through the patch command
oc patch clusterversion version \
  --type=merge -p \
  '{"spec":{"channel": "stable-4.6"}}'

# review cluster upgrade history
oc get clusterversion -o json|jq ".items[0].status.history"
# Get available upgrades
oc get clusterversion -o json|jq ".items[0].status.availableUpdates"

# Check the upgrade command options
oc adm upgrade -h

# to upgrade to latest
oc adm upgrade --to-latest=true

# to upgrade to a particular version
oc adm upgrade --to=<version>

--------------------------------------------------------------------------------
/aro-4/aro-internal-registry.sh:
--------------------------------------------------------------------------------
# Ensure the registry is exposed on the default router
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge

# Checking registry pods
oc get pods -n openshift-image-registry
oc logs deployments/image-registry -n openshift-image-registry --tail 10



# Replacing the existing registry storage:
# Create a new blob container named image-registry in the storage account first
STORAGE_KEY=<storage-account-key> # use your own storage account key; never commit real keys
oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=$STORAGE_KEY --namespace openshift-image-registry
# Registry storage config
oc edit configs.imageregistry.operator.openshift.io/cluster
# section to be edited
# storage:
#   azure:
#     accountName: <storage-account-name, e.g. acrteststorageaccount>
#     container: <blob-container-name>


REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
echo $REGISTRY_HOST

oc policy add-role-to-user registry-editor kubeadmin

# Using podman
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $REGISTRY_HOST

# Using docker
# You need to configure the docker daemon like:
# {
#   ...,
#   "insecure-registries": [
#     "registry.fqdn.com",
#     "registry.fqdn.com:5000"
#   ]
# }
docker login -u kubeadmin -p $(oc whoami -t) $REGISTRY_HOST

# Sample image pull/tag/push
OCP_PROJECT=ocp-samples
oc new-project $OCP_PROJECT
docker pull openshift/hello-openshift
docker pull quay.io/ostoylab/ostoy-frontend:1.4.0
docker tag quay.io/ostoylab/ostoy-frontend:1.4.0 $REGISTRY_HOST/$OCP_PROJECT/ostoy-frontend:1.4.0

podman tag quay.io/ostoylab/ostoy-frontend:1.4.0 \
  image-registry.openshift-image-registry.svc:5000/$OCP_PROJECT/ostoy-frontend:1.4.0
podman push image-registry.openshift-image-registry.svc:5000/$OCP_PROJECT/ostoy-frontend:1.4.0

echo $REGISTRY_HOST/$OCP_PROJECT/ostoy-frontend:1.4.0
docker push $REGISTRY_HOST/$OCP_PROJECT/ostoy-frontend:1.4.0

oc new-app $OCP_PROJECT/ostoy-frontend:1.4.0 --name=ostoy-frontend

--------------------------------------------------------------------------------
/aro-4/container-insights-agent-config.yaml:
--------------------------------------------------------------------------------
# source: https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_prod/kubernetes/container-azm-ms-agentconfig.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
  annotations:
    openshift.io/reconcile-protect: "true"
data:
  schema-version:
    #string.used by agent to parse config. supported versions are {v1}. Configs with other schema versions will be rejected by the agent.
    v1
  config-version:
    #string.used by customer to keep track of this config file's version in their source control/repository (max allowed 10 chars, other chars will be truncated)
    ver1
  log-data-collection-settings: |-
    # Log data collection settings
    # Any errors related to config map settings can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to.

    [log_collection_settings]
       [log_collection_settings.stdout]
          # In the absence of this configmap, default value for enabled is true
          enabled = true
          # exclude_namespaces setting holds good only if enabled is set to true
          # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stdout' setting. If you want to enable kube-system, remove it from the following setting.
          # If you want to continue to disable kube-system log collection keep this namespace in the following setting and add any other namespace you want to disable log collection to the array.
          # In the absence of this configmap, default value for exclude_namespaces = ["kube-system"]
          exclude_namespaces = ["kube-system"]

       [log_collection_settings.stderr]
          # Default value for enabled is true
          enabled = true
          # exclude_namespaces setting holds good only if enabled is set to true
          # kube-system log collection is disabled by default in the absence of 'log_collection_settings.stderr' setting. If you want to enable kube-system, remove it from the following setting.
          # If you want to continue to disable kube-system log collection keep this namespace in the following setting and add any other namespace you want to disable log collection to the array.
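          # e.g. to also skip another noisy namespace (illustrative values only):
          #   exclude_namespaces = ["kube-system", "openshift-monitoring"]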
36 | # In the absence of this configmap, default value for exclude_namespaces = ["kube-system"] 37 | exclude_namespaces = ["kube-system"] 38 | 39 | [log_collection_settings.env_var] 40 | # In the absence of this configmap, default value for enabled is true 41 | enabled = true 42 | [log_collection_settings.enrich_container_logs] 43 | # In the absence of this configmap, default value for enrich_container_logs is false 44 | enabled = false 45 | # When this is enabled (enabled = true), every container log entry (both stdout & stderr) will be enriched with container Name & container Image 46 | [log_collection_settings.collect_all_kube_events] 47 | # In the absence of this configmap, default value for collect_all_kube_events is false 48 | # When the setting is set to false, only the kube events with !normal event type will be collected 49 | enabled = false 50 | # When this is enabled (enabled = true), all kube events including normal events will be collected 51 | 52 | prometheus-data-collection-settings: |- 53 | # Custom Prometheus metrics data collection settings 54 | [prometheus_data_collection_settings.cluster] 55 | # Cluster level scrape endpoint(s). These metrics will be scraped from agent's Replicaset (singleton) 56 | # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to. 57 | 58 | #Interval specifying how often to scrape for metrics. This is a duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h. 59 | interval = "1m" 60 | 61 | ## Uncomment the following settings with valid string arrays for prometheus scraping 62 | #fieldpass = ["metric_to_pass1", "metric_to_pass12"] 63 | 64 | #fielddrop = ["metric_to_drop"] 65 | 66 | # An array of urls to scrape metrics from. 67 | # urls = ["http://myurl:9101/metrics"] 68 | 69 | # An array of Kubernetes services to scrape metrics from. 70 | # kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"] 71 | 72 | # When monitor_kubernetes_pods = true, replicaset will scrape Kubernetes pods for the following prometheus annotations: 73 | # - prometheus.io/scrape: Enable scraping for this pod 74 | # - prometheus.io/scheme: If the metrics endpoint is secured then you will need to 75 | # set this to `https` & most likely set the tls config. 76 | # - prometheus.io/path: If the metrics path is not /metrics, define it with this annotation. 77 | # - prometheus.io/port: If port is not 9102 use this annotation 78 | monitor_kubernetes_pods = false 79 | 80 | ## Restricts Kubernetes monitoring to namespaces for pods that have annotations set and are scraped using the monitor_kubernetes_pods setting.
81 | ## This will take effect when monitor_kubernetes_pods is set to true 82 | ## ex: monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"] 83 | # monitor_kubernetes_pods_namespaces = ["default1"] 84 | 85 | ## Label selector to target pods which have the specified label 86 | ## This will take effect when monitor_kubernetes_pods is set to true 87 | ## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors 88 | # kubernetes_label_selector = "env=dev,app=nginx" 89 | 90 | ## Field selector to target pods which have the specified field 91 | ## This will take effect when monitor_kubernetes_pods is set to true 92 | ## Reference the docs at https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/ 93 | ## eg. To scrape pods on a specific node 94 | # kubernetes_field_selector = "spec.nodeName=$HOSTNAME" 95 | 96 | [prometheus_data_collection_settings.node] 97 | # Node level scrape endpoint(s). These metrics will be scraped from agent's DaemonSet running in every node in the cluster 98 | # Any errors related to prometheus scraping can be found in the KubeMonAgentEvents table in the Log Analytics workspace that the cluster is sending data to. 99 | 100 | #Interval specifying how often to scrape for metrics. This is a duration of time and can be specified for supporting settings by combining an integer value and time unit as a string value. Valid time units are ns, us (or µs), ms, s, m, h. 101 | interval = "1m" 102 | 103 | ## Uncomment the following settings with valid string arrays for prometheus scraping 104 | 105 | # An array of urls to scrape metrics from. $NODE_IP (all upper case) will be substituted with the running node's IP address 106 | # urls = ["http://$NODE_IP:9103/metrics"] 107 | 108 | #fieldpass = ["metric_to_pass1", "metric_to_pass12"] 109 | 110 | #fielddrop = ["metric_to_drop"] 111 | 112 | metric_collection_settings: |- 113 | # Metrics collection settings for metrics sent to Log Analytics and MDM 114 | [metric_collection_settings.collect_kube_system_pv_metrics] 115 | # In the absence of this configmap, default value for collect_kube_system_pv_metrics is false 116 | # When the setting is set to false, only the persistent volume metrics outside the kube-system namespace will be collected 117 | enabled = false 118 | # When this is enabled (enabled = true), persistent volume metrics including those in the kube-system namespace will be collected 119 | 120 | alertable-metrics-configuration-settings: |- 121 | # Alertable metrics configuration settings for container resource utilization 122 | [alertable_metrics_configuration_settings.container_resource_utilization_thresholds] 123 | # The threshold(Type Float) will be rounded off to 2 decimal points 124 | # Threshold for container cpu, metric will be sent only when cpu utilization exceeds or becomes equal to the following percentage 125 | container_cpu_threshold_percentage = 95.0 126 | # Threshold for container memoryRss, metric will be sent only when memory rss exceeds or becomes equal to the following percentage 127 | container_memory_rss_threshold_percentage = 95.0 128 | # Threshold for container memoryWorkingSet, metric will be sent only when memory working set exceeds or becomes equal to the following percentage 129 | container_memory_working_set_threshold_percentage = 95.0 130 | 131 | # Alertable metrics configuration settings for persistent volume utilization 132 | [alertable_metrics_configuration_settings.pv_utilization_thresholds] 133 | # Threshold
for persistent volume usage bytes, metric will be sent only when persistent volume utilization exceeds or becomes equal to the following percentage 134 | pv_usage_threshold_percentage = 60.0 135 | 136 | # Alertable metrics configuration settings for completed jobs count 137 | [alertable_metrics_configuration_settings.job_completion_threshold] 138 | # Threshold for completed job count, metric will be sent only for those jobs which were completed earlier than the following threshold 139 | job_completion_threshold_time_minutes = 360 140 | integrations: |- 141 | [integrations.azure_network_policy_manager] 142 | collect_basic_metrics = false 143 | collect_advanced_metrics = false 144 | 145 | # Doc - https://github.com/microsoft/Docker-Provider/blob/ci_prod/Documentation/AgentSettings/ReadMe.md 146 | agent-settings: |- 147 | # prometheus scrape fluent bit settings for high scale 148 | # buffer size should be greater than or equal to chunk size else we set it to chunk size. 149 | [agent_settings.prometheus_fbit_settings] 150 | tcp_listener_chunk_size = 10 151 | tcp_listener_buffer_size = 10 152 | tcp_listener_mem_buf_limit = 200 153 | # The following settings are "undocumented", we don't recommend uncommenting them unless directed by Microsoft. 154 | # They increase the maximum stdout/stderr log collection rate but will also cause higher cpu/memory usage. 155 | # [agent_settings.fbit_config] 156 | # log_flush_interval_secs = "1" # default value is 15 157 | # tail_mem_buf_limit_megabytes = "10" # default value is 10 158 | # tail_buf_chunksize_megabytes = "1" # default value is 32kb (comment out this line for default) 159 | # tail_buf_maxsize_megabytes = "1" # default value is 32kb (comment out this line for default) 160 | -------------------------------------------------------------------------------- /aro-4/deprecated-aro-4-azure-monitor.sh: -------------------------------------------------------------------------------- 1 | # NOTE: These steps assume that you have oc client tools already signed in to your ARO cluster 2 | # I'm assuming variables from the cluster provisioning are in memory.
If not, please run 3 | source ./aro-provision.vars 4 | 5 | # Azure Monitor Integration 6 | # Prerequisites: 7 | # Azure CLI v2.0.72+, Helm 3, Bash v4 and kubectl set to OpenShift context 8 | # Docs: https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat4-setup 9 | 10 | # Creating new Log Analytics Workspace 11 | # Skip if you will join an existing one 12 | ARO_LOGS_LOCATION=$LOCATION 13 | ARO_LOGS_WORKSPACE_NAME=$CLUSTER-logs-$RANDOM 14 | ARO_LOGS_RG=$ARO_RG 15 | sed logs-workspace-deployment.json \ 16 | -e s/WORKSPACE-NAME/$ARO_LOGS_WORKSPACE_NAME/g \ 17 | -e s/DEPLOYMENT-LOCATION/$ARO_LOGS_LOCATION/g \ 18 | -e s/ENVIRONMENT-VALUE/DEV/g \ 19 | -e s/PROJECT-VALUE/ARO4/g \ 20 | -e s/DEPARTMENT-VALUE/IT/g \ 21 | -e s/STATUS-VALUE/EXPERIMENTAL/g \ 22 | > aro-logs-workspace-deployment-updated.json 23 | 24 | # Deployment can take a few mins 25 | ARO_LOGS_WORKSPACE=$(az group deployment create \ 26 | --resource-group $ARO_LOGS_RG \ 27 | --name aro-logs-workspace-deployment \ 28 | --template-file aro-logs-workspace-deployment-updated.json) 29 | 30 | ARO_LOGS_WORKSPACE_ID=$(echo $ARO_LOGS_WORKSPACE | jq -r '.properties["outputResources"][].id') 31 | 32 | echo export ARO_LOGS_WORKSPACE_ID="$ARO_LOGS_WORKSPACE_ID" >> ./aro-provision.vars 33 | 34 | # If you are using an existing one, get the ID 35 | # Make sure the ARO_LOGS_WORKSPACE_NAME reflects the target workspace name 36 | # ARO_LOGS_WORKSPACE_ID=$(az resource list --resource-type Microsoft.OperationalInsights/workspaces --query "[?contains(name, '${ARO_LOGS_WORKSPACE_NAME}')].id" -o tsv) 37 | # echo export ARO_LOGS_WORKSPACE_ID=$ARO_LOGS_WORKSPACE_ID >> ./aro-provision.vars 38 | # If you are not sure what the name is, you can list them here: 39 | # az resource list --resource-type Microsoft.OperationalInsights/workspaces -o table 40 | 41 | # Onboarding the cluster to Azure Monitor 42 | # Get the latest installation scripts (downloads enable-monitoring.sh): 43 | curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script 44 | 45 | # IMPORTANT: Make sure that ARO is the active Kubectl context before executing the script 46 | KUBE_CONTEXT=$(kubectl config current-context) 47 | ARO_CLUSTER_ID=$(az aro show -g $ARO_RG -n $CLUSTER --query id -o tsv) 48 | 49 | export azureAroV4ClusterResourceId=$ARO_CLUSTER_ID 50 | export logAnalyticsWorkspaceResourceId=$ARO_LOGS_WORKSPACE_ID 51 | 52 | bash onboarding_azuremonitor_for_containers.sh $KUBE_CONTEXT $ARO_CLUSTER_ID $ARO_LOGS_WORKSPACE_ID -------------------------------------------------------------------------------- /aro-4/internal-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: operator.openshift.io/v1 2 | kind: IngressController 3 | metadata: 4 | namespace: openshift-ingress-operator 5 | name: private-apps 6 | spec: 7 | domain: internal.apps.aro-weu.az.mohamedsaif.corp 8 | replicas: 2 9 | endpointPublishingStrategy: 10 | type: LoadBalancerService 11 | loadBalancer: 12 | scope: Internal 13 | # namespaceSelector: 14 | # matchLabels: 15 | # type: internal 16 | routeSelector: 17 | matchLabels: 18 | type: internal 19 | -------------------------------------------------------------------------------- /aro-4/logs-workspace-deployment.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", 3 | "contentVersion": "1.0.0.0", 4 | "parameters": { 5 | "workspaceName": {
"type": "String", 7 | "defaultValue": "WORKSPACE-NAME", 8 | "metadata": { 9 | "description": "Specifies the name of the workspace." 10 | } 11 | }, 12 | "location": { 13 | "type": "String", 14 | "defaultValue": "DEPLOYMENT-LOCATION", 15 | "metadata": { 16 | "description": "Specifies the location in which to create the workspace." 17 | } 18 | }, 19 | "tagValues": { 20 | "type": "object", 21 | "defaultValue": { 22 | "ENVIRONMENT": "ENVIRONMENT-VALUE", 23 | "PROJECT": "PROJECT-VALUE", 24 | "DEPARTMENT": "DEPARTMENT-VALUE", 25 | "STATUS": "STATUS-VALUE" 26 | } 27 | }, 28 | "sku": { 29 | "type": "String", 30 | "allowedValues": [ 31 | "Standalone", 32 | "PerNode", 33 | "PerGB2018" 34 | ], 35 | "defaultValue": "PerGB2018", 36 | "metadata": { 37 | "description": "Specifies the service tier of the workspace: Standalone, PerNode, Per-GB" 38 | } 39 | } 40 | }, 41 | "resources": [ 42 | { 43 | "type": "Microsoft.OperationalInsights/workspaces", 44 | "name": "[parameters('workspaceName')]", 45 | "apiVersion": "2015-11-01-preview", 46 | "location": "[parameters('location')]", 47 | "tags": "[parameters('tagValues')]", 48 | "properties": { 49 | "sku": { 50 | "Name": "[parameters('sku')]" 51 | }, 52 | "features": { 53 | "searchVersion": 1 54 | } 55 | } 56 | } 57 | ] 58 | } -------------------------------------------------------------------------------- /aro-4/machine-set-storage-infra.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: machine.openshift.io/v1beta1 2 | kind: MachineSet 3 | metadata: 4 | # Cluster id should be something like cluster-name-2p2t4 (last digit are random numbers generated during creation) 5 | name: REPLACE-CLUSTER-ID-infra-westeurope1 6 | namespace: openshift-machine-api 7 | labels: 8 | machine.openshift.io/cluster-api-cluster: REPLACE-CLUSTER-ID 9 | machine.openshift.io/cluster-api-machine-role: worker 10 | machine.openshift.io/cluster-api-machine-type: worker 11 | spec: 12 | replicas: 1 13 | selector: 14 | matchLabels: 15 | machine.openshift.io/cluster-api-cluster: REPLACE-CLUSTER-ID 16 | machine.openshift.io/cluster-api-machineset: REPLACE-CLUSTER-ID-infra-westeurope1 17 | template: 18 | metadata: 19 | labels: 20 | machine.openshift.io/cluster-api-cluster: REPLACE-CLUSTER-ID 21 | machine.openshift.io/cluster-api-machine-role: worker 22 | machine.openshift.io/cluster-api-machine-type: worker 23 | machine.openshift.io/cluster-api-machineset: REPLACE-CLUSTER-ID-infra-westeurope1 24 | spec: 25 | taints: 26 | - effect: NoSchedule 27 | key: node.ocs.openshift.io/storage 28 | value: "true" 29 | metadata: 30 | labels: 31 | node-role.kubernetes.io/infra: "" 32 | cluster.ocs.openshift.io/openshift-storage: "" 33 | providerSpec: 34 | value: 35 | osDisk: 36 | diskSizeGB: 1024 37 | managedDisk: 38 | storageAccountType: Premium_LRS 39 | osType: Linux 40 | networkResourceGroup: aro4-shared-weu 41 | publicLoadBalancer: REPLACE-CLUSTER-ID 42 | userDataSecret: 43 | name: worker-user-data 44 | vnet: aro-vnet-weu 45 | credentialsSecret: 46 | name: azure-cloud-credentials 47 | namespace: openshift-machine-api 48 | zone: '1' 49 | metadata: 50 | creationTimestamp: null 51 | publicIP: false 52 | resourceGroup: aro4-infra-weu 53 | kind: AzureMachineProviderSpec 54 | location: westeurope 55 | vmSize: Standard_D4s_v3 56 | image: 57 | offer: aro4 58 | publisher: azureopenshift 59 | resourceID: '' 60 | sku: aro_46 61 | version: 46.82.20201126 62 | subnet: aro4-weu-workers 63 | apiVersion: azureproviderconfig.openshift.io/v1beta1 
-------------------------------------------------------------------------------- /aro-4/nginx-pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx-pod 5 | namespace: default 6 | spec: 7 | containers: 8 | - name: nginx-pod 9 | image: #{acrName}#/nginx:latest 10 | imagePullPolicy: IfNotPresent 11 | imagePullSecrets: 12 | - name: default-acr -------------------------------------------------------------------------------- /ocp-installer-configuration.md: -------------------------------------------------------------------------------- 1 | # Installer Configuration 2 | 3 | The OCP installer depends on having an install-config.yaml file with all the cluster's initial configuration. You can set this up the first time and then reuse it with slight modifications to provision the same or additional clusters. 4 | 5 | ## Generating SSH for the cluster 6 | 7 | It is a good practice to have an SSH key created and submitted to allow various diagnostics scenarios. 8 | 9 | ```bash 10 | 11 | ssh-keygen -f ~/.ssh/$CLUSTER_NAME-rsa -t rsa -N '' 12 | 13 | # Starting ssh-agent and adding the key to it (used for diagnostic access to the cluster) 14 | eval "$(ssh-agent -s)" 15 | ssh-add ~/.ssh/$CLUSTER_NAME-rsa 16 | 17 | ``` 18 | 19 | ## Red Hat Pull Secret 20 | 21 | In order to install, activate and link the OCP installation to your Red Hat account, you need the Pull Secret. 22 | 23 | Visit [Red Hat's website](https://cloud.redhat.com/openshift/install/azure/installer-provisioned) to obtain it and optionally save it here for future use. 24 | 25 | ```bash 26 | 27 | # Get the json pull secret from RedHat (save it to the installation folder you created) 28 | # https://cloud.redhat.com/openshift/install/azure/installer-provisioned 29 | # To save the pull secret, you can use vi 30 | vi pull-secret.json 31 | # Tip: type i to enter the insert mode, paste the secret, press escape and then type :wq (write and quit) 32 | 33 | ``` 34 | 35 | ## OCP initial setup steps 36 | 37 | Assuming you already have an installer folder with the OCP installer binary in it, move to that folder and create a new sub-folder to save the generated installer files. 38 | 39 | ```bash 40 | 41 | # Change dir to installer 42 | cd installer 43 | # Create a new directory to save installer generated files 44 | mkdir installation 45 | 46 | ``` 47 | 48 | ## Preparing install-config.yaml file 49 | 50 | ### First time (no existing install-config.yaml yet) 51 | 52 | If this is the first time, you can start by launching the installer to generate the first install-config: 53 | 54 | ```bash 55 | 56 | # NEW CONFIG: run the create install-config to generate the initial configs 57 | ./openshift-install create install-config --dir=./installation 58 | # Sample prompts (Azure subscription details will then be saved and will not be prompted for again with future installations using the same machine) 59 | # ? SSH Public Key /home/user_id/.ssh/id_rsa.pub 60 | # ? Platform azure 61 | # ? azure subscription id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 62 | # ? azure tenant id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 63 | # ? azure service principal client id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 64 | # ? azure service principal client secret xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx 65 | # INFO Saving user credentials to "/home/user_id/.azure/osServicePrincipal.json" 66 | # ? Region westeurope 67 | # ? Base Domain example.com 68 | # ? Cluster Name test 69 | # ? Pull Secret [?
for help] 70 | 71 | ``` 72 | 73 | ### Existing install-config.yaml 74 | 75 | Locate the install-config.yaml (I would assume you have it somewhere in your terminal) 76 | 77 | # If you have the file somewhere else, just copy the content into vi 78 | vi install-config.yaml 79 | 80 | Now you should have the install-config.yaml located in the ```installer``` folder (not ```installation```). This is important as the OCP installer will delete the file once you start creating the cluster, and we want to hang on to it. 81 | 82 | After adjusting the config file to your specs, copy it into the ```installation``` folder 83 | 84 | ```bash 85 | # For subsequent times, you can copy the saved config to the installation folder 86 | cp ./install-config.yaml ./installation 87 | 88 | ``` 89 | 90 | >**NOTE:** Credentials are saved to ~/.azure/osServicePrincipal.json the first time you run the installer's create install-config. 91 | After that it will not ask again for the SP details. 92 | If you created it somewhere before, just use vi again to make sure it is correct (or copy the content to the new terminal) 93 | vi ~/.azure/osServicePrincipal.json 94 | 95 | ### Review 96 | 97 | You should review the generated install-config.yaml and tune any parameters before creating the cluster 98 | 99 | Now the cluster's final configuration is saved to install-config.yaml 100 | 101 | To proceed, you have 2 options, IPI or UPI. Pick the one that fits your needs and proceed with the cluster provisioning -------------------------------------------------------------------------------- /ocp-ipi.md: -------------------------------------------------------------------------------- 1 | # IPI 2 | 3 | Using IPI provides a quick and efficient way to provision clusters, but you lose a little bit of control over the provisioned cluster installation. 4 | 5 | Use this approach if you don't have strict cluster provisioning policies (for example, deploying into an existing resource group is not possible, to my knowledge). 6 | 7 | All you need to use the IPI method is: 8 | 1. Service Principal with appropriate permissions (detailed in the script) 9 | 2. Details of the vnet address space and whether it exists or it is new 10 | - Address space of the vnet 11 | - Subnet for Masters 12 | - Subnet for Workers 13 | 3. DNS (private or public) 14 | 4. Pull secret for cluster activation from your Red Hat account 15 | 5. OPTIONAL: SSH key to be used to connect to the cluster nodes for diagnostics 16 | 17 | >**NOTE:** I'm assuming that you already have the install-config.yaml generated along with the Azure service principal configured and saved to ~/.azure/osServicePrincipal.json 18 | 19 | ## OPTIONAL Generating manifests 20 | 21 | If you want access to advanced configuration editing (modifying kube-proxy, for example), it can be achieved by generating the installation manifests 22 | 23 | ```bash 24 | 25 | ./openshift-install create manifests --dir=./installation 26 | 27 | ``` 28 | 29 | This will generate 2 folders (openshift and manifests) and a state file (.openshift_install_state.json) 30 | 31 | Check .openshift_install_state.json for a detailed list of configuration and resource names. 32 | 33 | You can notice a random string called InfraID present in the .openshift_install_state.json configs, which will be used to ensure uniqueness of the generated resources. 34 | 35 | The installer will provision resources named in the form `<infra-id>-<resource>` (for example, a resource group like dev-ocp-weu-fsnm5-rg).
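For example, a quick way to read the InfraID back out of the state file is with jq (a sketch — the top-level key name, "*installconfig.ClusterID" here, can vary between installer versions, so inspect the file first):

```bash

jq -r '."*installconfig.ClusterID".InfraID' ./installation/.openshift_install_state.json

```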
36 | 37 | ## OPTIONAL Check subscription limits of VM-cores 38 | 39 | You might hit some subscription service provisioning limits during the installation (especially if you are using Azure free credits or non-enterprise accounts) 40 | 41 | To avoid getting an error like: 42 | 43 | ```bash 44 | # compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 45 | # Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 20, Current in use: 20 46 | # , Additional requested: 8. 47 | ``` 48 | 49 | Solving it is usually easy: submit a new support request here: 50 | [https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/](https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/) 51 | 52 | Use the following details: 53 | - Type: Service and subscription limits (quotas) 54 | - Subscription: select the target subscription 55 | - Problem type: Compute-VM (cores-vCPUs) subscription limit increases 56 | - Click add new quota details (for example, increase from 20 to 50 as the new quota) 57 | 58 | Sometimes it is auto approved :) 59 | 60 | If you want to check the current limits for a specific location: 61 | ```bash 62 | 63 | az vm list-usage -l $OCP_LOCATION -o table 64 | 65 | ``` 66 | 67 | ## Create the cluster 68 | 69 | >**NOTE:** change log level to debug to get further details (other options are warn and error) 70 | 71 | ```bash 72 | 73 | ./openshift-install create cluster --dir=./installation --log-level=info 74 | 75 | 76 | # By default, a cluster will create: 77 | # Bootstrap: 1 Standard_D4s_v3 vm (removed after install) 78 | # Master Nodes: 3 Standard_D8s_v3 (8 vcpus, 32 GiB memory) 79 | # Worker Nodes: 3 Standard_D2s_v3 (2 vcpus, 8 GiB memory). 80 | # Kubernetes APIs will be located at something like: 81 | # https://api.ocp-azure-dev-cluster.YOURDOMAIN.com:6443/ 82 | 83 | # Normal installer output 84 | # INFO Consuming Install Config from target directory 85 | # INFO Creating infrastructure resources... 86 | # INFO Waiting up to 30m0s for the Kubernetes API at https://api.dev-ocp-weu.YOURDOMAIN.COM:6443... 87 | # INFO API v1.16.2 up 88 | # INFO Waiting up to 30m0s for bootstrapping to complete... 89 | # INFO Destroying the bootstrap resources... 90 | # INFO Waiting up to 30m0s for the cluster at https://api.dev-ocp-weu.YOURDOMAIN.COM:6443 to initialize... 91 | # INFO Waiting up to 10m0s for the openshift-console route to be created... 92 | # INFO Install complete! 93 | # INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=HOME/OpenShift-On-Azure/ocp-4-3-installation/installer/installation/auth/kubeconfig' 94 | # INFO Access the OpenShift web-console here: https://console-openshift-console.apps.dev-ocp-weu.YOURDOMAIN.COM 95 | # INFO Login to the console with user: kubeadmin, password: yQLvW-BzmTQ-DY8dx-AZZsY 96 | 97 | ``` 98 | 99 | ## Deleting the cluster and all generated resources 100 | 101 | If the cluster needs to be destroyed to be recreated, execute the following: 102 | 103 | ```bash 104 | 105 | ./openshift-install destroy cluster --dir=./installation 106 | 107 | ``` 108 | Note that some files might not be removed (like the terraform.tfstate) by the installer.
You need to remove them manually. 109 | 110 | Sample destruction output of a fully provisioned cluster: 111 | 112 | ```bash 113 | 114 | # INFO deleted record=api.dev-ocp-weu 115 | # INFO deleted record="*.apps.dev-ocp-weu" 116 | # INFO deleted resource group=dev-ocp-weu-fsnm5-rg 117 | # INFO deleted appID=GUID 118 | # INFO deleted appID=GUID 119 | # INFO deleted appID=GUID 120 | 121 | ``` -------------------------------------------------------------------------------- /ocp-jumpbox-provision.md: -------------------------------------------------------------------------------- 1 | # Jump-Box Provision 2 | 3 | It is a good practice to have a jump box server to act as your installation terminal (especially if you are creating a private cluster with no access to the vnet). This guide helps you set up this VM, and I would highly recommend doing so. 4 | 5 | If you are using a local dev machine, make sure to follow the installation steps mentioned in this guide so you have all the needed tools. 6 | 7 | ## Creating new VM 8 | 9 | The following steps can be used to provision an Ubuntu VM on Azure. 10 | 11 | >**NOTE:** You can skip these steps till (Tooling & configurations) if you intend to use your current machine. 12 | 13 | ### Generating SSH key pair 14 | 15 | ```bash 16 | 17 | ssh-keygen -f ~/.ssh/installer-box-rsa -m PEM -t rsa -b 4096 18 | 19 | ``` 20 | 21 | ### Jump-Box subnet 22 | 23 | We need the jump-box provisioned in a subnet that has line-of-sight to the potential OCP cluster. 24 | 25 | You can also opt to have a separate virtual network that is peered with the OCP cluster network as well. 26 | 27 | ```bash 28 | 29 | # Get the ID for the masters subnet (as it is in a different resource group) 30 | INST_SUBNET_ID=$(az network vnet subnet show -g $RG_VNET --vnet-name $OCP_VNET_NAME --name $INST_SUBNET_NAME --query id -o tsv) 31 | 32 | ``` 33 | 34 | >**NOTE:** The above command retrieves an existing subnet id; if you need to create one, please follow the steps in the [OCP-Prerequisites.md](ocp-prerequisites.md) virtual network section. 35 | 36 | ### Jump-Box resource group 37 | 38 | ```bash 39 | 40 | # Create a resource group to host jump box 41 | OCP_LOCATION=westeurope; OCP_LOCATION_CODE=euw 42 | PREFIX=dev 43 | RG_INSTALLER=$PREFIX-installer-rg-$OCP_LOCATION_CODE 44 | az group create --name $RG_INSTALLER --location $OCP_LOCATION 45 | 46 | ``` 47 | 48 | ### Creating the VM 49 | 50 | ```bash 51 | INSTALLER_PIP=$(az vm create \ 52 | --resource-group $RG_INSTALLER \ 53 | --name installer-box \ 54 | --image UbuntuLTS \ 55 | --subnet $INST_SUBNET_ID \ 56 | --size "Standard_B2s" \ 57 | --admin-username localadmin \ 58 | --ssh-key-values ~/.ssh/installer-box-rsa.pub \ 59 | --query publicIpAddress -o tsv) 60 | 61 | echo "export INSTALLER_PIP=$INSTALLER_PIP" >> ~/.bashrc 62 | 63 | ``` 64 | 65 | If you have an existing jump box, just set the public IP address: 66 | 67 | ```bash 68 | 69 | INSTALLER_PIP=REPLACE_IP 70 | 71 | ``` 72 | 73 | ### Connecting to the jump-box 74 | 75 | #### OPTIONAL: Copy any needed files to target jump-box 76 | 77 | Before you connect to the jump-box VM, you can copy any needed files (use this only if you have custom files that you wish to have on the machine, like custom install-config files). 78 | 79 | ```bash 80 | 81 | # Zip the installation files that you want to copy to the jump box 82 | # make sure you are in the right folder on the local machine 83 | cd provisioning 84 | tar -pvczf ocp-installation.tar.gz .
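# Optional sanity check (editor's suggestion, assumes GNU tar): list what actually went into the archive
# tar -tzf ocp-installation.tar.gz | head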
85 | 86 | scp -i ~/.ssh/installer-box-rsa ./ocp-installation.tar.gz localadmin@$INSTALLER_PIP:~/ocp.tar.gz 87 | 88 | ``` 89 | #### Connecting to the jump-box 90 | 91 | ```bash 92 | # SSH to the jumpbox 93 | ssh -i ~/.ssh/installer-box-rsa localadmin@$INSTALLER_PIP 94 | 95 | ``` 96 | 97 | You might want to clone the GitHub repo as well for the UPI installation files (if you didn't already in the copy step) 98 | 99 | ```bash 100 | 101 | git clone https://github.com/mohamedsaif/OpenShift-On-Azure.git 102 | 103 | ``` 104 | 105 | ## Tooling & configurations 106 | 107 | Now we need to make sure that all needed tooling is installed/downloaded. 108 | 109 | ### Azure CLI 110 | 111 | ```bash 112 | # Installing Azure CLI 113 | curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash 114 | 115 | ``` 116 | 117 | ### OpenShift CLI (Installer & Client) 118 | 119 | Download the installer/client program from RedHat (save it to the installation folder you created) 120 | 121 | >**NOTE:** Depending on when you found this guide, the latest version of the installer was 4.3.5 at the time of writing; there might be a newer version. You can check the latest version by visiting [OCP Clients](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/) 122 | 123 | #### OCP Installer 124 | 125 | ```bash 126 | 127 | # Extract the installer to installer folder 128 | mkdir installer 129 | wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-install-linux-4.3.5.tar.gz 130 | tar -xvzf ./openshift-install-linux-4.3.5.tar.gz -C ./installer 131 | 132 | # If you wish to have it in PATH libs so you can execute it without having it in folder, run this: 133 | # sudo cp ./installer/openshift-install /usr/local/bin/ 134 | 135 | ``` 136 | 137 | #### OCP Client 138 | 139 | ```bash 140 | 141 | mkdir client 142 | wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-client-linux-4.3.5.tar.gz 143 | tar -xvzf ./openshift-client-linux-4.3.5.tar.gz -C ./client 144 | 145 | ``` 146 | 147 | ### python3 148 | 149 | ```bash 150 | 151 | sudo apt-get update 152 | sudo apt-get install python3.6 153 | python3 --version 154 | 155 | # pip should be installed as part of python 3.6; if not: sudo apt-get install python3-pip 156 | 157 | ``` 158 | 159 | ### Installing PyYAML for manipulating yaml files 160 | 161 | ```bash 162 | 163 | sudo pip install -U PyYAML 164 | 165 | ``` 166 | 167 | ### DotMap (used in manipulating files as well) 168 | 169 | ```bash 170 | 171 | pip install dotmap 172 | 173 | ``` 174 | 175 | ### jq 176 | 177 | ```bash 178 | 179 | sudo apt-get install jq 180 | 181 | ``` 182 | 183 | ### yq 184 | ```bash 185 | 186 | sudo pip install yq 187 | 188 | ``` 189 | 190 | ### tree (visual folder representation) 191 | 192 | ```bash 193 | 194 | sudo apt-get install tree 195 | 196 | ``` 197 | 198 | >**NOTE:** If you face issues with unrecognized commands, you might consider restarting the VM for some of the tooling to be picked up, or re-run
199 | ```sudo apt-get update``` 200 | 201 | ## Login to Azure 202 | 203 | You might need to provision some custom resources before or during the installation, so let's sign in to Azure 204 | 205 | ```bash 206 | 207 | az login 208 | 209 | az account set --subscription "SUBSCRIPTION_NAME" 210 | 211 | # Make sure the active subscription is set correctly 212 | az account show 213 | 214 | # Set the Azure subscription and AAD tenant ids 215 | OCP_TENANT_ID=$(az account show --query tenantId -o tsv) 216 | OCP_SUBSCRIPTION_ID=$(az account show --query id -o tsv) 217 | echo $OCP_TENANT_ID 218 | echo $OCP_SUBSCRIPTION_ID 219 | 220 | ``` 221 | 222 | ### OPTIONAL DNS Resolver 223 | 224 | If you are using an Azure VM, you might want to update the DNS server name to point at Azure DNS fixed IP address (to be able to easily resolve the OCP private DNS FQDNs) 225 | 226 | ```bash 227 | 228 | # Adding Azure DNS server (to handle the private name resolution) 229 | sudo chmod o+r /etc/resolv.conf 230 | 231 | # Edit the DNS server name to use Azure's DNS server fixed IP 168.63.129.16 (press i to be in insert mode, then ESC and type :wq to save and exit) 232 | sudo vi /etc/resolv.conf 233 | 234 | ``` 235 | 236 | ### OPTIONAL Extract the copied archives 237 | 238 | If you have copied any archive to the remote jump-box, you can extract the files now. 239 | 240 | ```bash 241 | 242 | mkdir ocp-installer 243 | tar -xvzf ./ocp.tar.gz -C ./ocp-installer 244 | cd ocp-installer 245 | # Check the extracted files (you should have your config and OCP installer) 246 | ls 247 | 248 | ``` -------------------------------------------------------------------------------- /ocp-prerequisites.md: -------------------------------------------------------------------------------- 1 | # Prerequisites 2 | 3 | To use the Red Hat OCP installer, you need to prepare a few prerequisites in advance before starting the installation process. 4 | 5 | This guide only focuses on the prerequisites that are shared between the IPI and UPI methods.
6 | 7 | ## Installation parameters 8 | 9 | >**NOTE:** If you are using existing resources like a vnet, please make sure to update the values to the correct names and skip the creation steps 10 | 11 | ```bash 12 | 13 | OCP_LOCATION=westeurope 14 | OCP_LOCATION_CODE=euw 15 | SUBSCRIPTION_CODE=mct 16 | PREFIX=$SUBSCRIPTION_CODE-ocp-dev 17 | RG_PUBLIC_DNS=$SUBSCRIPTION_CODE-dns-shared-rg 18 | RG_VNET=$PREFIX-vnet-rg-$OCP_LOCATION_CODE 19 | CLUSTER_NAME=dev-ocp-int-$OCP_LOCATION_CODE 20 | 21 | DNS_ZONE=[subdomain].yourdomain.com 22 | 23 | ``` 24 | 25 | ## Resource groups 26 | 27 | ### vNet resource group 28 | 29 | Create a resource group to host the network resources (in this setup, we will use it for the vnet) 30 | 31 | >**NOTE:** If you have an existing vnet, make sure that RG_VNET is set to its name and skip creation 32 | 33 | ```bash 34 | 35 | az group create --name $RG_VNET --location $OCP_LOCATION 36 | 37 | ``` 38 | 39 | ### Public DNS resource group 40 | 41 | OPTIONAL: Create a resource group to host the public DNS (if you are using one) 42 | 43 | ```bash 44 | 45 | az group create --name $RG_PUBLIC_DNS --location $OCP_LOCATION 46 | 47 | ``` 48 | 49 | >**NOTE:** For the cluster resource group, it will depend on your way of installation (IPI will create one; with UPI you will create one later) 50 | 51 | ## (OPTIONAL) Public DNS Setup 52 | 53 | ```bash 54 | 55 | # OPTION 1: Full delegation of a root domain to Azure DNS Zone 56 | # Create a DNS Zone (for naked or subdomain) 57 | az network dns zone create -g $RG_PUBLIC_DNS -n $DNS_ZONE 58 | 59 | # Delegate the DNS Zone by updating the domain registrar Name Servers to point at Azure DNS Zone Name Servers 60 | # Get the NS to be updated in the domain registrar (you can create NS records for the naked-domain (@) or subdomain) 61 | az network dns zone show -g $RG_PUBLIC_DNS -n $DNS_ZONE --query nameServers -o table 62 | 63 | # Visit the registrar to update the NS records 64 | 65 | # Check if the update was successful 66 | # It might take several mins for the DNS records to propagate 67 | nslookup -type=SOA $DNS_ZONE 68 | 69 | # Response like 70 | # Server: ns1-04.azure-dns.com 71 | # Address: 208.76.47.4 72 | 73 | # yoursubdomain.yourdomain.com 74 | # primary name server = ns1-04.azure-dns.com 75 | # responsible mail addr = msnhst.microsoft.com 76 | # serial = 1 77 | # refresh = 900 (15 mins) 78 | # retry = 300 (5 mins) 79 | # expire = 604800 (7 days) 80 | # default TTL = 300 (5 mins) 81 | 82 | # Note: some proxies and routers might block the nslookup. 83 | # Note: You need to make sure that you can get a valid response to nslookup before you proceed. 84 | 85 | ``` 86 | 87 | ## Virtual network setup 88 | 89 | If you have an existing vnet, there is no need to create one; just update the below params with your network configs.
90 | 91 | I will be creating the following cluster networking: 92 | - Address space: 10.165.0.0/23 (~500 addresses) 93 | - Masters CIDR: 10.165.0.0/24 (~250 addresses) 94 | - Workers CIDR: 10.165.1.0/24 (~250 addresses) 95 | 96 | ```bash 97 | 98 | OCP_VNET_ADDRESS_SPACE="10.165.0.0/23" 99 | OCP_VNET_NAME="spoke-${PREFIX}-${OCP_LOCATION_CODE}" 100 | # Masters subnet (master VMs, ILB, IPI VMs) 101 | MST_SUBNET_IP_PREFIX="10.165.0.0/24" 102 | MST_SUBNET_NAME="mgm-subnet" 103 | # Workers subnet 104 | WRK_SUBNET_IP_PREFIX="10.165.1.0/24" 105 | WRK_SUBNET_NAME="pods-subnet" 106 | 107 | az network vnet create \ 108 | --resource-group $RG_VNET \ 109 | --name $OCP_VNET_NAME \ 110 | --address-prefixes $OCP_VNET_ADDRESS_SPACE \ 111 | --subnet-name $MST_SUBNET_NAME \ 112 | --subnet-prefix $MST_SUBNET_IP_PREFIX 113 | 114 | # Create subnet for workers 115 | az network vnet subnet create \ 116 | --resource-group $RG_VNET \ 117 | --vnet-name $OCP_VNET_NAME \ 118 | --name $WRK_SUBNET_NAME \ 119 | --address-prefix $WRK_SUBNET_IP_PREFIX 120 | 121 | # Also creating Network Security Groups 122 | MST_SUBNET_NSG_NAME=$MST_SUBNET_NAME-nsg 123 | az network nsg create \ 124 | --name $MST_SUBNET_NSG_NAME \ 125 | --resource-group $RG_VNET 126 | 127 | az network nsg rule create \ 128 | --resource-group $RG_VNET \ 129 | --nsg-name $MST_SUBNET_NSG_NAME \ 130 | --name "apiserver_in" \ 131 | --priority 101 \ 132 | --access Allow \ 133 | --protocol Tcp \ 134 | --direction Inbound \ 135 | --source-address-prefixes $WRK_SUBNET_IP_PREFIX \ 136 | --source-port-ranges '*' \ 137 | --destination-port-ranges 6443 \ 138 | --destination-address-prefixes '*' \ 139 | --description "Allow API Server inbound connection (from workers)" 140 | 141 | # If you will use the installer-jumpbox VM, you can create a separate subnet for it 142 | # It will allow you to delete it (or block access to it via Network Security Groups) after the cluster provisions 143 | INST_SUBNET_NAME="inst-subnet" 144 | INST_SUBNET_IP_PREFIX="10.165.2.0/24" 145 | az network vnet subnet create \ 146 | --resource-group $RG_VNET \ 147 | --vnet-name $OCP_VNET_NAME \ 148 | --name $INST_SUBNET_NAME \ 149 | --address-prefix $INST_SUBNET_IP_PREFIX 150 | 151 | ``` 152 | 153 | ## Service principal setup 154 | 155 | >**NOTE:** Usually this step requires the cooperation of AAD/Azure administrators. Reach out with these scripts to get them provisioned.
156 | 157 | ### Creating the SPN 158 | 159 | Create a SP to be used by OpenShift (no permissions are granted here; they will be granted in the next steps) 160 | 161 | ```bash 162 | 163 | OCP_SP=$(az ad sp create-for-rbac -n "${PREFIX}-installer-sp" --skip-assignment) 164 | 165 | # As the json result is stored in OCP_SP, we use some jq kung fu to extract the values 166 | # jq documentation: (https://shapeshed.com/jq-json/#how-to-pretty-print-json) 167 | echo $OCP_SP | jq 168 | OCP_SP_ID=$(echo $OCP_SP | jq -r .appId) 169 | OCP_SP_PASSWORD=$(echo $OCP_SP | jq -r .password) 170 | OCP_SP_TENANT=$(echo $OCP_SP | jq -r .tenant) 171 | OCP_SP_SUBSCRIPTION_ID=$OCP_SUBSCRIPTION_ID 172 | echo $OCP_SP_ID 173 | echo $OCP_SP_PASSWORD 174 | echo $OCP_SP_TENANT 175 | echo $OCP_SP_SUBSCRIPTION_ID 176 | # Save the above information in a secure location 177 | 178 | ``` 179 | 180 | ### SPN permissions 181 | 182 | #### Assigning AAD ReadWrite.OwnedBy 183 | 184 | ```bash 185 | 186 | az ad app permission add --id $OCP_SP_ID --api 00000002-0000-0000-c000-000000000000 --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role 187 | 188 | # Requesting the (Admin Consent) for the permission. 189 | az ad app permission grant --id $OCP_SP_ID --api 00000002-0000-0000-c000-000000000000 190 | 191 | ``` 192 | 193 | Now by visiting the AAD in Azure portal, you can search for your service principal under "App Registrations" and make sure to grant the admin consent. 194 | 195 | Check that the AAD permission admin consent was granted successfully (if not, click on Grant Admin Consent). 196 | 197 | ![aad-permissions](res/aad-permissions.png) 198 | 199 | #### Assigning "Contributor" (for Azure resources creation) 200 | 201 | ```bash 202 | 203 | az role assignment create --assignee $OCP_SP_ID --role "Contributor" 204 | 205 | ``` 206 | 207 | #### Assigning "User Access Administrator" (to grant access to OCP provisioned components) 208 | 209 | ```bash 210 | 211 | az role assignment create --assignee $OCP_SP_ID --role "User Access Administrator" 212 | 213 | ``` 214 | 215 | #### Have a look at SP Azure assignments 216 | 217 | ```bash 218 | 219 | az role assignment list --assignee $OCP_SP_ID -o table 220 | 221 | ``` 222 | 223 | ### Saving the SP credentials 224 | 225 | The OCP installer looks for the file ```~/.azure/osServicePrincipal.json```. 226 | 227 | We will save the SP details to that file so the OCP installer will pick up the new one automatically without prompting. 228 | 229 | ```bash 230 | 231 | echo $OCP_SP | jq --arg sub_id $OCP_SUBSCRIPTION_ID '{subscriptionId:$sub_id,clientId:.appId, clientSecret:.password,tenantId:.tenant}' > ~/.azure/osServicePrincipal.json 232 | 233 | ``` 234 | 235 | ### Reset SPN credentials 236 | 237 | If you wish to reset the credentials: 238 | 239 | ```bash 240 | 241 | az ad sp credential reset --name $OCP_SP_ID 242 | 243 | ``` 244 | 245 | ### Recap 246 | 247 | Notes: 248 | - The OCP IPI installer relies on SP credentials stored in (~/.azure/osServicePrincipal.json). 249 | - If you ran the installer before on the current terminal, it will use the service principal from that location 250 | - You can delete this file to instruct the installer to prompt for the SP credentials 251 | -------------------------------------------------------------------------------- /ocp-testing.md: -------------------------------------------------------------------------------- 1 | # OCP Cluster Testing 2 | 3 | Congratulations! 4 | 5 | Now it is time to access the cluster, mainly via the OC client CLI.
6 | 7 | >**NOTE:** Although it says completed, you might need to give it a few mins to warm up :) 8 | 9 | I will be testing the cluster using the OC client CLI. 10 | 11 | ```bash 12 | 13 | # You can access the web-console as per the instructions provided, but let's try using oc CLI instead 14 | cd .. 15 | cd client 16 | 17 | # this step is so you will not need to use oc login (you might have a different path) 18 | # export KUBECONFIG=~/ocp-installer/installer/installation/auth/kubeconfig 19 | 20 | # basic operations 21 | ./oc version 22 | ./oc config view 23 | ./oc status 24 | 25 | # Famous get pods 26 | ./oc get pod --all-namespaces 27 | 28 | # Our cluster runs kubernetes and OpenShift services by default 29 | ./oc get svc 30 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 31 | # docker-registry ClusterIP 172.30.78.158 32 | # kubernetes ClusterIP 172.30.0.1 443/TCP 36m 33 | # openshift ExternalName kubernetes.default.svc.cluster.local 24m 34 | 35 | # No selected project for sure 36 | ./oc project 37 | 38 | # if you are interested to look behind the scenes on what is happening, access the logs 39 | cat ./.openshift_install.log 40 | 41 | ``` -------------------------------------------------------------------------------- /openshift-client-linux.tar.gz.1: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/openshift-client-linux.tar.gz.1 -------------------------------------------------------------------------------- /provisioning/aro-provision.sh: -------------------------------------------------------------------------------- 1 | az login 2 | LOCATION=southafricanorth 3 | CLUSTER_NAME=aroclusterza 4 | 5 | APPID="REPLACE" 6 | GROUPID="REPLACE" 7 | SECRET="REPLACE" 8 | TENANT="REPLACE" 9 | 10 | az feature register --namespace Microsoft.ContainerService -n AROGA 11 | az provider register -n Microsoft.ContainerService 12 | 13 | az group create --name $CLUSTER_NAME --location $LOCATION 14 | 15 | # vnet to peer (computed for the optional vnet peering parameter of 'az openshift create', if you use it) 16 | VNET_ID=$(az network vnet show -n {VNET name} -g {VNET resource group} --query id -o tsv) 17 | 18 | WORKSPACE_ID=$(az monitor log-analytics workspace show -g {RESOURCE_GROUP} -n {NAME} --query id -o tsv) # optional: for Azure Monitor integration 19 | 20 | az openshift create \ 21 | --resource-group $CLUSTER_NAME \ 22 | --name $CLUSTER_NAME \ 23 | -l $LOCATION \ 24 | --aad-client-app-id $APPID \ 25 | --aad-client-app-secret $SECRET \ 26 | --aad-tenant-id $TENANT \ 27 | --customer-admin-group-id $GROUPID 28 | 29 | az openshift show -n $CLUSTER_NAME -g $CLUSTER_NAME 30 | 31 | # CLI Installation 32 | cd 33 | mkdir lib 34 | cd lib 35 | mkdir oc311 36 | cd oc311 37 | curl https://mirror.openshift.com/pub/openshift-v3/clients/3.11.154/linux/oc.tar.gz --output oc.tar.gz 38 | tar -xzf oc.tar.gz 39 | ls 40 | 41 | -------------------------------------------------------------------------------- /provisioning/azure-monitor/logs-workspace-deployment.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", 3 | "contentVersion": "1.0.0.0", 4 | "parameters": { 5 | "workspaceName": { 6 | "type": "String", 7 | "defaultValue": "WORKSPACE-NAME", 8 | "metadata": { 9 | "description": "Specifies the name of the workspace."
10 | } 11 | }, 12 | "location": { 13 | "type": "String", 14 | "defaultValue": "DEPLOYMENT-LOCATION", 15 | "metadata": { 16 | "description": "Specifies the location in which to create the workspace." 17 | } 18 | }, 19 | "tagValues": { 20 | "type": "object", 21 | "defaultValue": { 22 | "ENVIRONMENT": "ENVIRONMENT-VALUE", 23 | "PROJECT": "PROJECT-VALUE", 24 | "DEPARTMENT": "DEPARTMENT-VALUE", 25 | "STATUS": "STATUS-VALUE" 26 | } 27 | }, 28 | "sku": { 29 | "type": "String", 30 | "allowedValues": [ 31 | "Standalone", 32 | "PerNode", 33 | "PerGB2018" 34 | ], 35 | "defaultValue": "PerGB2018", 36 | "metadata": { 37 | "description": "Specifies the service tier of the workspace: Standalone, PerNode, Per-GB" 38 | } 39 | } 40 | }, 41 | "resources": [ 42 | { 43 | "type": "Microsoft.OperationalInsights/workspaces", 44 | "name": "[parameters('workspaceName')]", 45 | "apiVersion": "2015-11-01-preview", 46 | "location": "[parameters('location')]", 47 | "tags": "[parameters('tagValues')]", 48 | "properties": { 49 | "sku": { 50 | "Name": "[parameters('sku')]" 51 | }, 52 | "features": { 53 | "searchVersion": 1 54 | } 55 | } 56 | } 57 | ] 58 | } -------------------------------------------------------------------------------- /provisioning/azure-monitor/ocp-v4-azure-monitor.sh: -------------------------------------------------------------------------------- 1 | # Azure Monitor Integration 2 | # Prerequisites: 3 | # Azure CLI v2.0.72+, Helm 3, Bash v4 and kubectl set to OpenShift context 4 | # Docs: https://docs.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-azure-redhat4-setup 5 | 6 | # Creating new Log Analytics Workspace 7 | # Skip if you will join an existing one 8 | # Update the below variables to the desired values before execution 9 | OCP_LOGS_LOCATION=westeurope 10 | OCP_LOGS_WORKSPACE_NAME=ocp4-logs-$RANDOM 11 | OCP_LOGS_RG=REPLACE_RESOURCE_GROUP_NAME 12 | sed logs-workspace-deployment.json \ 13 | -e s/WORKSPACE-NAME/$OCP_LOGS_WORKSPACE_NAME/g \ 14 | -e s/DEPLOYMENT-LOCATION/$OCP_LOGS_LOCATION/g \ 15 | -e s/ENVIRONMENT-VALUE/DEV/g \ 16 | -e s/PROJECT-VALUE/OCP4/g \ 17 | -e s/DEPARTMENT-VALUE/IT/g \ 18 | -e s/STATUS-VALUE/EXPERIMENTAL/g \ 19 | > ocp-logs-workspace-deployment-updated.json 20 | 21 | # Deployment can take a few mins 22 | OCP_LOGS_WORKSPACE=$(az group deployment create \ 23 | --resource-group $OCP_LOGS_RG \ 24 | --name ocp-logs-workspace-deployment \ 25 | --template-file ocp-logs-workspace-deployment-updated.json) 26 | 27 | OCP_LOGS_WORKSPACE_ID=$(echo $OCP_LOGS_WORKSPACE | jq -r '.properties["outputResources"][].id') 28 | 29 | echo export OCP_LOGS_WORKSPACE_ID=$OCP_LOGS_WORKSPACE_ID >> ./ocp-provision.vars 30 | 31 | # If you are using an existing one, get the ID 32 | # Make sure the OCP_LOGS_WORKSPACE_NAME reflects the target workspace name 33 | # OCP_LOGS_WORKSPACE_ID=$(az resource list --resource-type Microsoft.OperationalInsights/workspaces --query "[?contains(name, '${OCP_LOGS_WORKSPACE_NAME}')].id" -o tsv) 34 | # echo export OCP_LOGS_WORKSPACE_ID=$OCP_LOGS_WORKSPACE_ID >> ./ocp-provision.vars 35 | # If you are not sure what the name is, you can list them here: 36 | # az resource list --resource-type Microsoft.OperationalInsights/workspaces -o table 37 | 38 | # Onboarding the cluster to Azure Monitor 39 | # Get the latest installation scripts: 40 | curl -LO https://raw.githubusercontent.com/microsoft/OMS-docker/ci_feature_prod/docs/openshiftV4/onboarding_azuremonitor_for_containers.sh 41 | 42 | # This should be invoked with 4 arguments: 43 | #
azureSubscriptionId, azureRegionforLogAnalyticsWorkspace, clusterName and kubeContext name 44 | CLUSTER_NAME=ocp4-cluster 45 | KUBE_CONTEXT=$(kubectl config current-context) 46 | OCP_SUBSCRIPTION_ID=$(az account show --query id -o tsv) 47 | 48 | bash onboarding_azuremonitor_for_containers.sh $OCP_SUBSCRIPTION_ID $OCP_LOGS_LOCATION $CLUSTER_NAME $KUBE_CONTEXT $OCP_LOGS_WORKSPACE_ID -------------------------------------------------------------------------------- /provisioning/azure-monitor/onboarding_azuremonitor_for_containers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # Execute this directly in Azure Cloud Shell (https://shell.azure.com) by pasting (SHIFT+INS on Windows, CTRL+V on Mac or Linux) 4 | # the following line (beginning with curl...) at the command prompt and then replacing the args: 5 | # This script onboards Azure Monitor for containers to Openshift v4 clusters hosted on-prem or in any cloud environment 6 | # 7 | # 1. Creates the Default Azure log analytics workspace if doesn't exist one in specified azure subscription and region 8 | # 2. Adds the ContainerInsights solution to the Azure log analytics workspace 9 | # 3. Installs Azure Monitor for containers HELM chart to the K8s cluster in Kubeconfig 10 | # Prerequisites : 11 | # Azure CLI: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest 12 | # Helm3 : https://helm.sh/docs/intro/install/ 13 | # 14 | # bash onboarding_azuremonitor_for_containers.sh <azureSubscriptionId> <azureRegionforLogAnalyticsWorkspace> <clusterName> <kubeContext> [<logAnalyticsWorkspaceResourceId>] 15 | # For example: 16 | # bash ./onboarding_azuremonitor_for_containers.sh 00000000-0000-0000-0000-000000000000 eastus myocp42 admin 17 | 18 | if [ $# -le 3 ] 19 | then 20 | echo "Error: This should be invoked with 4 arguments, azureSubscriptionId, azureRegionforLogAnalyticsWorkspace, clusterName and kubeContext name and optional 5th argument logAnalyticsWorkspaceResourceId" 21 | exit 1 22 | fi 23 | 24 | echo "subscriptionId:"${1} 25 | echo "azureRegionforLogAnalyticsWorkspace:"${2} 26 | echo "clusterName:"${3} 27 | echo "kubeconfig context:"${4} 28 | echo "logAnalyticsWorkspaceResourceId:"${5} 29 | 30 | subscriptionId=${1} 31 | logAnalyticsWorkspaceRegion=${2} 32 | clusterName=${3} 33 | logAnalyticsWorkspaceResourceId=${5} 34 | 35 | echo "Azure SubscriptionId:" $subscriptionId 36 | echo "Azure Region for Log Analytics Workspace:" $logAnalyticsWorkspaceRegion 37 | echo "cluster Name:" $clusterName 38 | 39 | echo "Set AzureCloud as active cloud for az cli" 40 | az cloud set -n AzureCloud 41 | 42 | echo "login to the azure interactively" 43 | az login 44 | 45 | echo "set the subscription id: ${subscriptionId}" 46 | az account set -s ${subscriptionId} 47 | 48 | if [ -z $logAnalyticsWorkspaceResourceId ]; then 49 | echo "since logAnalyticsWorkspaceResourceId parameter not provided so using or creating default azure log analytics workspace" 50 | 51 | # mappings for the default Azure Log Analytics workspace 52 | declare -A AzureCloudLocationToOmsRegionCodeMap=( 53 | [australiasoutheast]=ASE 54 | [australiaeast]=EAU 55 | [australiacentral]=CAU 56 | [canadacentral]=CCA 57 | [centralindia]=CIN 58 | [centralus]=CUS 59 | [eastasia]=EA 60 | [eastus]=EUS 61 | [eastus2]=EUS2 62 | [eastus2euap]=EAP 63 | [francecentral]=PAR 64 | [japaneast]=EJP 65 | [koreacentral]=SE 66 | [northeurope]=NEU 67 | [southcentralus]=SCUS 68 | [southeastasia]=SEA 69 | [uksouth]=SUK 70 | [usgovvirginia]=USGV 71 | [westcentralus]=EUS 72 | [westeurope]=WEU 73 | [westus]=WUS 74 | [westus2]=WUS2 75 | ) 76 | 77 | declare -A
AzureCloudRegionToOmsRegionMap=( 78 | [australiacentral]=australiacentral 79 | [australiacentral2]=australiacentral 80 | [australiaeast]=australiaeast 81 | [australiasoutheast]=australiasoutheast 82 | [brazilsouth]=southcentralus 83 | [canadacentral]=canadacentral 84 | [canadaeast]=canadacentral 85 | [centralus]=centralus 86 | [centralindia]=centralindia 87 | [eastasia]=eastasia 88 | [eastus]=eastus 89 | [eastus2]=eastus2 90 | [francecentral]=francecentral 91 | [francesouth]=francecentral 92 | [japaneast]=japaneast 93 | [japanwest]=japaneast 94 | [koreacentral]=koreacentral 95 | [koreasouth]=koreacentral 96 | [northcentralus]=eastus 97 | [northeurope]=northeurope 98 | [southafricanorth]=westeurope 99 | [southafricawest]=westeurope 100 | [southcentralus]=southcentralus 101 | [southeastasia]=southeastasia 102 | [southindia]=centralindia 103 | [uksouth]=uksouth 104 | [ukwest]=uksouth 105 | [westcentralus]=eastus 106 | [westeurope]=westeurope 107 | [westindia]=centralindia 108 | [westus]=westus 109 | [westus2]=westus2 110 | ) 111 | 112 | export workspaceRegionCode="EUS" 113 | export workspaceRegion="eastus" 114 | 115 | if [ -n "${AzureCloudRegionToOmsRegionMap[$logAnalyticsWorkspaceRegion]}" ]; 116 | then 117 | workspaceRegion=${AzureCloudRegionToOmsRegionMap[$logAnalyticsWorkspaceRegion]} 118 | fi 119 | echo "Workspace Region:"$workspaceRegion 120 | 121 | if [ -n "${AzureCloudLocationToOmsRegionCodeMap[$workspaceRegion]}" ]; 122 | then 123 | workspaceRegionCode=${AzureCloudLocationToOmsRegionCodeMap[$workspaceRegion]} 124 | fi 125 | echo "Workspace Region Code:"$workspaceRegionCode 126 | 127 | export defaultWorkspaceResourceGroup="DefaultResourceGroup-"$workspaceRegionCode 128 | export isRGExists=$(az group exists -g $defaultWorkspaceResourceGroup) 129 | export defaultWorkspaceName="DefaultWorkspace-"$subscriptionId"-"$workspaceRegionCode 130 | 131 | if $isRGExists 132 | then echo "using existing default resource group:"$defaultWorkspaceResourceGroup 133 | else 134 | az group create -g $defaultWorkspaceResourceGroup -l $workspaceRegion 135 | fi 136 | 137 | export workspaceList=$(az resource list -g $defaultWorkspaceResourceGroup -n $defaultWorkspaceName --resource-type Microsoft.OperationalInsights/workspaces) 138 | if [ "$workspaceList" = "[]" ]; 139 | then 140 | # create new default workspace since no mapped existing default workspace 141 | echo '{"location":"'"$workspaceRegion"'", "properties":{"sku":{"name": "standalone"}}}' > WorkspaceProps.json 142 | cat WorkspaceProps.json 143 | workspace=$(az resource create -g $defaultWorkspaceResourceGroup -n $defaultWorkspaceName --resource-type Microsoft.OperationalInsights/workspaces --is-full-object -p @WorkspaceProps.json) 144 | else 145 | echo "using existing default workspace:"$defaultWorkspaceName 146 | fi 147 | 148 | workspaceResourceId=$(az resource show -g $defaultWorkspaceResourceGroup -n $defaultWorkspaceName --resource-type Microsoft.OperationalInsights/workspaces --query id) 149 | workspaceResourceId=$(echo $workspaceResourceId | tr -d '"') 150 | 151 | else 152 | echo "using provided azure log analytics workspace:${logAnalyticsWorkspaceResourceId}" 153 | export workspaceResourceId=$(echo $logAnalyticsWorkspaceResourceId | tr -d '"') 154 | export workspaceSubscriptionId="$(echo ${logAnalyticsWorkspaceResourceId} | cut -d'/' -f3)" 155 | export defaultWorkspaceResourceGroup="$(echo ${logAnalyticsWorkspaceResourceId} | cut -d'/' -f5)" 156 | export defaultWorkspaceName="$(echo ${logAnalyticsWorkspaceResourceId} | cut -d'/' -f9)" 157 | 158 
| echo "set the workspace subscription id: ${workspaceSubscriptionId}" 159 | az account set -s ${workspaceSubscriptionId} 160 | export workspaceRegion=$(az resource show --ids ${logAnalyticsWorkspaceResourceId} --query location) 161 | export workspaceRegion=$(echo $workspaceRegion | tr -d '"') 162 | echo "Workspace Region:"$workspaceRegion 163 | 164 | fi 165 | 166 | # get the workspace guid 167 | export workspaceGuid=$(az resource show -g $defaultWorkspaceResourceGroup -n $defaultWorkspaceName --resource-type Microsoft.OperationalInsights/workspaces --query properties.customerId) 168 | workspaceGuid=$(echo $workspaceGuid | tr -d '"') 169 | 170 | echo "workspaceResourceId:"$workspaceResourceId 171 | echo "workspaceGuid:"$workspaceGuid 172 | 173 | echo "adding containerinsights solution to workspace" 174 | solution=$(az group deployment create -g $defaultWorkspaceResourceGroup --template-uri https://raw.githubusercontent.com/microsoft/OMS-docker/ci_feature_prod/docs/templates/azuremonitor-containerSolution.json --parameters workspaceResourceId=$workspaceResourceId --parameters workspaceRegion=$workspaceRegion) 175 | 176 | echo "getting workspace primaryshared key" 177 | workspaceKey=$(az rest --method post --uri $workspaceResourceId/sharedKeys?api-version=2015-11-01-preview --query primarySharedKey) 178 | workspaceKey=$(echo $workspaceKey | tr -d '"') 179 | echo $workspaceKey 180 | 181 | echo "installing Azure Monitor for containers HELM chart ..." 182 | 183 | echo "adding azmon-preview repo" 184 | helm repo add azmon-preview https://ganga1980.github.io/azuremonitor-containers-helm-charts/ 185 | echo "updating helm repo to get latest charts" 186 | helm repo update 187 | 188 | helm upgrade --install azmon-containers-release-1 --set omsagent.secret.wsid=$workspaceGuid,omsagent.secret.key=$workspaceKey,omsagent.env.clusterName=${3} azmon-preview/azuremonitor-containers --kube-context ${4} 189 | echo "chart installation completed." 
190 | 191 | echo "Proceed to https://aka.ms/azmon-containers-hybrid to view health of your newly onboarded OpenshiftV4 cluster" 192 | -------------------------------------------------------------------------------- /provisioning/install-configs/install-config-internal.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | baseDomain: YOURDOMAIN.COM 3 | compute: 4 | - hyperthreading: Enabled 5 | name: worker 6 | platform: 7 | azure: 8 | osDisk: 9 | diskSizeGB: 120 10 | type: Standard_D8s_v3 11 | replicas: 3 12 | controlPlane: 13 | hyperthreading: Enabled 14 | name: master 15 | platform: 16 | azure: 17 | osDisk: 18 | diskSizeGB: 120 19 | type: Standard_D8s_v3 20 | replicas: 3 21 | metadata: 22 | creationTimestamp: null 23 | name: ocp-dev-euw 24 | networking: 25 | clusterNetwork: 26 | - cidr: 10.128.0.0/14 27 | hostPrefix: 23 28 | machineCIDR: 10.165.0.0/23 29 | networkType: OpenShiftSDN 30 | serviceNetwork: 31 | - 172.30.0.0/16 32 | platform: 33 | azure: 34 | baseDomainResourceGroupName: RG_DNS 35 | region: uaenorth 36 | networkResourceGroupName: RG_VNET 37 | virtualNetwork: VNET 38 | controlPlaneSubnet: mgm-subnet 39 | computeSubnet: pods-subnet 40 | publish: Internal 41 | pullSecret: '{"auths":{"cloud.openshift.com": }}' 42 | sshKey: | 43 | ssh-rsa SOMETHING 44 | -------------------------------------------------------------------------------- /provisioning/installer-jumpbox.sh: -------------------------------------------------------------------------------- 1 | ### OPTIONAL: Create an installation jumpbox 2 | ssh-keygen -f ~/.ssh/installer-box-rsa -m PEM -t rsa -b 4096 3 | # Get the ID for the masters subnet (as it is in a different resource group) 4 | INST_SUBNET_ID=$(az network vnet subnet show -g $RG_VNET --vnet-name $OCP_VNET_NAME --name $INST_SUBNET_NAME --query id -o tsv) 5 | 6 | # Create a resource group to host jump box 7 | RG_INSTALLER=$PREFIX-installer-rg-$OCP_LOCATION_CODE 8 | az group create --name $RG_INSTALLER --location $OCP_LOCATION 9 | 10 | INSTALLER_PIP=$(az vm create \ 11 | --resource-group $RG_INSTALLER \ 12 | --name installer-box \ 13 | --image UbuntuLTS \ 14 | --subnet $INST_SUBNET_ID \ 15 | --size "Standard_B2s" \ 16 | --admin-username localadmin \ 17 | --ssh-key-values ~/.ssh/installer-box-rsa.pub \ 18 | --query publicIpAddress -o tsv) 19 | 20 | export INSTALLER_PIP=$INSTALLER_PIP >> ~/.bashrc 21 | # If you have an existing jumpbox, just set the public publicIpAddress 22 | # INSTALLER_PIP=YOUR_IP 23 | 24 | # Zip the installation files that you want to copy to the jumpbox 25 | # make sure you are in the right folder on the local machine 26 | cd provisioning 27 | tar -pvczf ocp-installation.tar.gz . 
28 | 29 | scp -i ~/.ssh/installer-box-rsa ./ocp-installation.tar.gz localadmin@$INSTALLER_PIP:~/ocp.tar.gz 30 | 31 | # SSH to the jumpbox 32 | ssh -i ~/.ssh/installer-box-rsa localadmin@$INSTALLER_PIP 33 | 34 | # Installing Azure CLI 35 | curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash 36 | 37 | # Adding Azure DNS server (to handle the private name resolution) 38 | sudo chmod o+r /etc/resolv.conf 39 | 40 | # Edit the DNS server name to use Azure's DNS server fixed IP 168.63.129.16 (press i to be in insert mode, then ESC and type :wq to save and exit) 41 | sudo vi /etc/resolv.conf 42 | 43 | # Login to Azure 44 | az login 45 | 46 | az account set --subscription "SUBSCRIPTION_NAME" 47 | 48 | # Make sure the active subscription is set correctly 49 | az account show 50 | 51 | # Set the Azure subscription and AAD tenant ids 52 | OCP_TENANT_ID=$(az account show --query tenantId -o tsv) 53 | OCP_SUBSCRIPTION_ID=$(az account show --query id -o tsv) 54 | echo $OCP_TENANT_ID 55 | echo $OCP_SUBSCRIPTION_ID 56 | 57 | # Extract the installation files 58 | mkdir ocp-installer 59 | tar -xvzf ./ocp.tar.gz -C ./ocp-installer 60 | cd ocp-installer 61 | # Check the extracted files (you should have your config and OCP installer) 62 | ls 63 | 64 | # Set the variables from the main script and continue the installation -------------------------------------------------------------------------------- /provisioning/ocp-azure-provision.sh: -------------------------------------------------------------------------------- 1 | # Variables 2 | PREFIX=ocp-azure 3 | RG=$PREFIX-rg 4 | LOCATION=uaenorth 5 | DNS_ZONE=salesdynamic.com 6 | 7 | #***** Login to Azure Subscription ***** 8 | # A browser window will open to complete the authentication :) 9 | az login 10 | 11 | az account set --subscription "SUBSCRIPTION_NAME" 12 | 13 | # Make sure the active subscription is set correctly 14 | az account show 15 | 16 | # Set the tenant ID 17 | TENANT_ID=$(az account show --query tenantId -o tsv) 18 | SUBSCRIPTION_ID=$(az account show --query id -o tsv) 19 | echo $TENANT_ID 20 | echo $SUBSCRIPTION_ID 21 | 22 | clear 23 | 24 | #***** END Login to Azure Subscription ***** 25 | 26 | #***** OpenShift Prerequisites ***** 27 | 28 | # Create a resource group 29 | az group create --name $RG --location $LOCATION 30 | 31 | ### DNS Setup 32 | 33 | # OPTION 1: Full delegation of a root domain to Azure DNS Zone 34 | # Create a DNS Zone 35 | az network dns zone create -g $RG -n $DNS_ZONE 36 | 37 | 38 | # Delegate the DNS Zone by updating the Name Servers to Azure DNS Zone Name Servers 39 | # Get the NS 40 | az network dns zone show -g $RG -n $DNS_ZONE --query nameServers -o table 41 | 42 | # Visit the registrar to update the NS records 43 | 44 | # Check if the update was successful; it might take several minutes 45 | nslookup -type=SOA $DNS_ZONE 46 | 47 | # Response looks like: 48 | # Server: ns1-04.azure-dns.com 49 | # Address: 208.76.47.4 50 | 51 | # contoso.net 52 | # primary name server = ns1-04.azure-dns.com 53 | # responsible mail addr = msnhst.microsoft.com 54 | # serial = 1 55 | # refresh = 900 (15 mins) 56 | # retry = 300 (5 mins) 57 | # expire = 604800 (7 days) 58 | # default TTL = 300 (5 mins) 59 | 60 | # OPTION 2: Using a subdomain 61 | # Create a DNS Zone for the subdomain 62 | az network dns zone create -g $RG -n ocp-dev.$DNS_ZONE 63 | 64 | ### End DNS Setup 65 | 66 | ### SP Setup 67 | 68 | # Create a SP to be used by OpenShift 69 | OCP_SP=$(az ad sp create-for-rbac -n "${PREFIX}-installer-sp" --skip-assignment) 70 | # As the JSON result is stored in
OCP_SP, we use some jq Kung Fu to extract the values 71 | # jq documentation: (https://shapeshed.com/jq-json/#how-to-pretty-print-json) 72 | echo $OCP_SP | jq 73 | OCP_SP_ID=$(echo $OCP_SP | jq -r .appId) 74 | OCP_SP_PASSWORD=$(echo $OCP_SP | jq -r .password) 75 | OCP_SP_TENANT=$(echo $OCP_SP | jq -r .tenant) 76 | OCP_SP_SUBSCRIPTION_ID=$SUBSCRIPTION_ID 77 | echo $OCP_SP_ID 78 | echo $OCP_SP_PASSWORD 79 | echo $OCP_SP_TENANT 80 | echo $OCP_SP_SUBSCRIPTION_ID 81 | # Or create the SP and save the information to a file 82 | # az ad sp create-for-rbac --role Owner --name team-installer | jq --arg sub_id "$(az account show | jq -r '.id')" '{subscriptionId:$sub_id,clientId:.appId, clientSecret:.password,tenantId:.tenant}' > ~/.azure/osServicePrincipal.json 83 | 84 | # Assigning the AAD Application.ReadWrite.OwnedBy permission 85 | az ad app permission add --id $OCP_SP_ID --api 00000002-0000-0000-c000-000000000000 --api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role 86 | # Granting the AAD permission (Admin Consent). You can double check on the Azure Portal to make sure the admin consent was granted 87 | az ad app permission grant --id $OCP_SP_ID --api 00000002-0000-0000-c000-000000000000 88 | 89 | # Assigning "Owner" (or, less permissive, "Contributor") and "User Access Administrator" 90 | az role assignment create --assignee $OCP_SP_ID --role "Owner" 91 | # Or: az role assignment create --assignee $OCP_SP_ID --role "Contributor" 92 | az role assignment create --assignee $OCP_SP_ID --role "User Access Administrator" 93 | 94 | # Have a look at the SP Azure assignments: 95 | az role assignment list --assignee $OCP_SP_ID -o table 96 | 97 | # If you wish to reset the credentials 98 | # az ad sp credential reset --name $OCP_SP_ID 99 | 100 | # Have an ssh key ready to be used 101 | # ssh-keygen -f ~/.ssh/openshift_rsa -t rsa -N '' 102 | 103 | # Download the installer/client program from Red Hat 104 | # https://cloud.redhat.com/openshift/install/azure/installer-provisioned 105 | 106 | # Get the json pull secret from Red Hat 107 | # https://cloud.redhat.com/openshift/install/azure/installer-provisioned 108 | 109 | # Upload the downloaded files through Azure Cloud Shell, or save the tar.gz files to a folder if you are using a client terminal (like OCP-Install) 110 | 111 | # Extract the installer to the installer folder 112 | mkdir installer 113 | tar -xvzf ./openshift-install-linux-4.2.0-0.nightly-2019-09-23-115152.tar.gz -C ./installer 114 | 115 | mkdir client 116 | tar -xvzf ./openshift-client-linux-4.2.0-0.nightly-2019-09-23-115152.tar.gz -C ./client 117 | 118 | # Starting Installer-Provisioned-Infrastructure 119 | # Change dir to installer 120 | cd installer 121 | ./openshift-install create install-config 122 | 123 | # Note: Credentials saved to /home/localadmin/.azure/osServicePrincipal.json 124 | 125 | # Note: You can review the generated install-config.yaml and tune any parameters before creating the cluster 126 | 127 | # Now the cluster configuration is saved to install-config.yaml 128 | 129 | # Create the cluster based on the above configuration 130 | ./openshift-install create cluster 131 | 132 | # You might hit some subscription service provisioning limits: 133 | # compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. 134 | # Status= Code="OperationNotAllowed" Message="Operation results in exceeding quota limits of Core. Maximum allowed: 20, Current in use: 20 135 | # , Additional requested: 8.
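# (Rough core math for the default footprint listed below, assuming the default VM sizes:
# 1 bootstrap Standard_D4s_v3 = 4 vCPUs, 3 masters Standard_D8s_v3 = 24 vCPUs and
# 3 workers Standard_D2s_v3 = 6 vCPUs, i.e. ~34 vCPUs at the installation peak,
# which is well above the default quota of 20.)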
136 | # Solving it is super easy, submit a new support request here: 137 | # https://aka.ms/ProdportalCRP/?#create/Microsoft.Support/Parameters/ 138 | # Use the following details: 139 | # Type: Service and subscription limits (quotas) 140 | # Subscription: select the target subscription 141 | # Problem type: Compute-VM (cores-vCPUs) subscription limit increases 142 | # Click "add new quota details" (increase from 20 to 50 as the new quota) 143 | # Usually it is auto-approved :) 144 | # To view the current limits for a specific location: 145 | az vm list-usage -l $LOCATION -o table 146 | 147 | # By default, a cluster will create: 148 | # Bootstrap: 1 Standard_D4s_v3 vm (removed after install) 149 | # Master Nodes: 3 Standard_D8s_v3 (8 vcpus, 32 GiB memory) 150 | # Worker Nodes: 3 Standard_D2s_v3 (2 vcpus, 8 GiB memory). 151 | # Kubernetes APIs will be located at something like: 152 | # https://api.ocp-azure-dev-cluster.YOURDOMAIN.com:6443/ 153 | 154 | # Normal installer output: 155 | # INFO Consuming "Install Config" from target directory 156 | # INFO Creating infrastructure resources... 157 | # INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp-dev-ae.salesdynamic.com:6443... 158 | # INFO API v1.14.6+8d00594 up 159 | # INFO Waiting up to 30m0s for bootstrapping to complete... 160 | # INFO Destroying the bootstrap resources... 161 | # INFO Waiting up to 30m0s for the cluster at https://api.ocp-dev-ae.salesdynamic.com:6443 to initialize... 162 | # INFO Waiting up to 10m0s for the openshift-console route to be created... 163 | # INFO Install complete! 164 | # INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/localadmin/aks/AKS-SecureCluster/OCP/OCP-Install/installer/auth/kubeconfig' 165 | # INFO Access the OpenShift web-console here: https://console-openshift-console.apps.CLUSTER-NAME.DOMAIN-NAME.com 166 | # INFO Login to the console with user: kubeadmin, password: STju6-SEzcN-Nw8vT-nxdD8 167 | 168 | # Congratulations! 169 | # Although it says completed, you might need to give it a few mins to warm up :) 170 | 171 | # You can access the web-console as per the instructions provided, but let's try using the oc CLI instead 172 | cd .. 173 | cd client 174 | 175 | # This step is so you will not need to use oc login (you will have a different path) 176 | export KUBECONFIG=/home/localadmin/aks/AKS-SecureCluster/OCP/OCP-Install/installer/auth/kubeconfig 177 | 178 | # basic operations 179 | ./oc version 180 | ./oc config view 181 | ./oc status 182 | 183 | # Famous get pods 184 | ./oc get pod --all-namespaces 185 | 186 | # Our cluster runs Kubernetes and OpenShift services by default 187 | ./oc get svc 188 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 189 | # docker-registry ClusterIP 172.30.78.158 190 | # kubernetes ClusterIP 172.30.0.1 443/TCP 36m 191 | # openshift ExternalName kubernetes.default.svc.cluster.local 24m 192 | 193 | # No project selected yet 194 | ./oc project 195 | 196 | # If you are interested in looking behind the scenes at what is happening, check the logs 197 | cat ./.openshift_install.log 198 | 199 | # If the cluster needs to be destroyed and recreated, execute the following: 200 | ./openshift-install destroy cluster 201 | # Note that some files are not removed (like the terraform.tfstate) by the installer.
You need to remove them manually -------------------------------------------------------------------------------- /provisioning/upi/01_vnet.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | } 12 | }, 13 | "variables" : { 14 | "location" : "[resourceGroup().location]", 15 | "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", 16 | "addressPrefix" : "10.0.0.0/16", 17 | "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", 18 | "masterSubnetPrefix" : "10.0.0.0/24", 19 | "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", 20 | "nodeSubnetPrefix" : "10.0.1.0/24", 21 | "controlPlaneNsgName" : "[concat(parameters('baseName'), '-controlplane-nsg')]", 22 | "nodeNsgName" : "[concat(parameters('baseName'), '-node-nsg')]" 23 | }, 24 | "resources" : [ 25 | { 26 | "apiVersion" : "2018-12-01", 27 | "type" : "Microsoft.Network/virtualNetworks", 28 | "name" : "[variables('virtualNetworkName')]", 29 | "location" : "[variables('location')]", 30 | "dependsOn" : [ 31 | "[concat('Microsoft.Network/networkSecurityGroups/', variables('controlPlaneNsgName'))]", 32 | "[concat('Microsoft.Network/networkSecurityGroups/', variables('nodeNsgName'))]" 33 | ], 34 | "properties" : { 35 | "addressSpace" : { 36 | "addressPrefixes" : [ 37 | "[variables('addressPrefix')]" 38 | ] 39 | }, 40 | "subnets" : [ 41 | { 42 | "name" : "[variables('masterSubnetName')]", 43 | "properties" : { 44 | "addressPrefix" : "[variables('masterSubnetPrefix')]", 45 | "serviceEndpoints": [], 46 | "networkSecurityGroup" : { 47 | "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('controlPlaneNsgName'))]" 48 | } 49 | } 50 | }, 51 | { 52 | "name" : "[variables('nodeSubnetName')]", 53 | "properties" : { 54 | "addressPrefix" : "[variables('nodeSubnetPrefix')]", 55 | "serviceEndpoints": [], 56 | "networkSecurityGroup" : { 57 | "id" : "[resourceId('Microsoft.Network/networkSecurityGroups', variables('nodeNsgName'))]" 58 | } 59 | } 60 | } 61 | ] 62 | } 63 | }, 64 | { 65 | "type" : "Microsoft.Network/networkSecurityGroups", 66 | "name" : "[variables('controlPlaneNsgName')]", 67 | "apiVersion" : "2018-10-01", 68 | "location" : "[variables('location')]", 69 | "properties" : { 70 | "securityRules" : [ 71 | { 72 | "name" : "apiserver_in", 73 | "properties" : { 74 | "protocol" : "Tcp", 75 | "sourcePortRange" : "*", 76 | "destinationPortRange" : "6443", 77 | "sourceAddressPrefix" : "*", 78 | "destinationAddressPrefix" : "*", 79 | "access" : "Allow", 80 | "priority" : 101, 81 | "direction" : "Inbound" 82 | } 83 | } 84 | ] 85 | } 86 | }, 87 | { 88 | "type" : "Microsoft.Network/networkSecurityGroups", 89 | "name" : "[variables('nodeNsgName')]", 90 | "apiVersion" : "2018-10-01", 91 | "location" : "[variables('location')]", 92 | "properties" : { 93 | "securityRules" : [ 94 | { 95 | "name" : "apiserver_in", 96 | "properties" : { 97 | "protocol" : "Tcp", 98 | "sourcePortRange" : "*", 99 | "destinationPortRange" : "6443", 100 | "sourceAddressPrefix" : "*", 101 | "destinationAddressPrefix" : "*", 102 | "access" : "Allow", 103 | "priority" : 101, 104 | "direction" : "Inbound" 105 | } 106 | } 107 | ] 108 | } 109 | } 
110 | ] 111 | } -------------------------------------------------------------------------------- /provisioning/upi/02_storage.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "vhdBlobURL" : { 13 | "type" : "string", 14 | "metadata" : { 15 | "description" : "URL pointing to the blob where the VHD to be used to create master and worker machines is located" 16 | } 17 | } 18 | }, 19 | "variables" : { 20 | "location" : "[resourceGroup().location]", 21 | "imageName" : "[concat(parameters('baseName'), '-image')]" 22 | }, 23 | "resources" : [ 24 | { 25 | "apiVersion" : "2018-06-01", 26 | "type": "Microsoft.Compute/images", 27 | "name": "[variables('imageName')]", 28 | "location" : "[variables('location')]", 29 | "properties": { 30 | "storageProfile": { 31 | "osDisk": { 32 | "osType": "Linux", 33 | "osState": "Generalized", 34 | "blobUri": "[parameters('vhdBlobURL')]", 35 | "storageAccountType": "Standard_LRS" 36 | } 37 | } 38 | } 39 | } 40 | ] 41 | } -------------------------------------------------------------------------------- /provisioning/upi/03_infra-internal-lb.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "virtualNetworkResourceGroup" : { 13 | "type" : "string", 14 | "minLength" : 1, 15 | "metadata" : { 16 | "description" : "Resource group name of the OCP virtual network" 17 | } 18 | }, 19 | "virtualNetworkName" : { 20 | "type" : "string", 21 | "minLength" : 1, 22 | "metadata" : { 23 | "description" : "Name of the OCP virtual network" 24 | } 25 | }, 26 | "masterSubnetName" : { 27 | "type" : "string", 28 | "minLength" : 1, 29 | "metadata" : { 30 | "description" : "Name of the masters subnet" 31 | } 32 | }, 33 | "privateDNSZoneName" : { 34 | "type" : "string", 35 | "metadata" : { 36 | "description" : "Name of the private DNS zone" 37 | } 38 | } 39 | }, 40 | "variables" : { 41 | "location" : "[resourceGroup().location]", 42 | "virtualNetworkName" : "[parameters('virtualNetworkName')]", 43 | "virtualNetworkID" : "[resourceId(parameters('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 44 | "masterSubnetName" : "[parameters('masterSubnetName')]", 45 | "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", 46 | "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", 47 | "internalLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('internalLoadBalancerName'))]", 48 | "skuName": "Standard" 49 | }, 50 | "resources" : [ 51 | { 52 | "apiVersion" : "2018-12-01", 53 | "type" : "Microsoft.Network/loadBalancers", 54 | "name" : "[variables('internalLoadBalancerName')]", 55 | "location" : "[variables('location')]", 56 | "sku": { 57 | "name": "[variables('skuName')]" 58 | }, 59 
| "properties" : { 60 | "frontendIPConfigurations" : [ 61 | { 62 | "name" : "internal-lb-ip", 63 | "properties" : { 64 | "privateIPAllocationMethod" : "Dynamic", 65 | "subnet" : { 66 | "id" : "[variables('masterSubnetRef')]" 67 | }, 68 | "privateIPAddressVersion" : "IPv4" 69 | } 70 | } 71 | ], 72 | "backendAddressPools" : [ 73 | { 74 | "name" : "internal-lb-backend" 75 | } 76 | ], 77 | "loadBalancingRules" : [ 78 | { 79 | "name" : "api-internal", 80 | "properties" : { 81 | "frontendIPConfiguration" : { 82 | "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" 83 | }, 84 | "frontendPort" : 6443, 85 | "backendPort" : 6443, 86 | "enableFloatingIP" : false, 87 | "idleTimeoutInMinutes" : 30, 88 | "protocol" : "Tcp", 89 | "enableTcpReset" : false, 90 | "loadDistribution" : "Default", 91 | "backendAddressPool" : { 92 | "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" 93 | }, 94 | "probe" : { 95 | "id" : "[concat(variables('internalLoadBalancerID'), '/probes/api-internal-probe')]" 96 | } 97 | } 98 | }, 99 | { 100 | "name" : "sint", 101 | "properties" : { 102 | "frontendIPConfiguration" : { 103 | "id" : "[concat(variables('internalLoadBalancerID'), '/frontendIPConfigurations/internal-lb-ip')]" 104 | }, 105 | "frontendPort" : 22623, 106 | "backendPort" : 22623, 107 | "enableFloatingIP" : false, 108 | "idleTimeoutInMinutes" : 30, 109 | "protocol" : "Tcp", 110 | "enableTcpReset" : false, 111 | "loadDistribution" : "Default", 112 | "backendAddressPool" : { 113 | "id" : "[concat(variables('internalLoadBalancerID'), '/backendAddressPools/internal-lb-backend')]" 114 | }, 115 | "probe" : { 116 | "id" : "[concat(variables('internalLoadBalancerID'), '/probes/sint-probe')]" 117 | } 118 | } 119 | } 120 | ], 121 | "probes" : [ 122 | { 123 | "name" : "api-internal-probe", 124 | "properties" : { 125 | "protocol" : "Tcp", 126 | "port" : 6443, 127 | "intervalInSeconds" : 10, 128 | "numberOfProbes" : 3 129 | } 130 | }, 131 | { 132 | "name" : "sint-probe", 133 | "properties" : { 134 | "protocol" : "Tcp", 135 | "port" : 22623, 136 | "intervalInSeconds" : 10, 137 | "numberOfProbes" : 3 138 | } 139 | } 140 | ] 141 | } 142 | }, 143 | { 144 | "apiVersion": "2018-09-01", 145 | "type": "Microsoft.Network/privateDnsZones/A", 146 | "name": "[concat(parameters('privateDNSZoneName'), '/api')]", 147 | "location" : "[variables('location')]", 148 | "dependsOn" : [ 149 | "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" 150 | ], 151 | "properties": { 152 | "ttl": 60, 153 | "aRecords": [ 154 | { 155 | "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" 156 | } 157 | ] 158 | } 159 | }, 160 | { 161 | "apiVersion": "2018-09-01", 162 | "type": "Microsoft.Network/privateDnsZones/A", 163 | "name": "[concat(parameters('privateDNSZoneName'), '/api-int')]", 164 | "location" : "[variables('location')]", 165 | "dependsOn" : [ 166 | "[concat('Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'))]" 167 | ], 168 | "properties": { 169 | "ttl": 60, 170 | "aRecords": [ 171 | { 172 | "ipv4Address": "[reference(variables('internalLoadBalancerName')).frontendIPConfigurations[0].properties.privateIPAddress]" 173 | } 174 | ] 175 | } 176 | } 177 | ] 178 | } -------------------------------------------------------------------------------- /provisioning/upi/03_infra-public-lb.json: 
-------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | } 12 | }, 13 | "variables" : { 14 | "location" : "[resourceGroup().location]", 15 | "masterPublicIpAddressName" : "[concat(parameters('baseName'), '-master-pip')]", 16 | "masterPublicIpAddressID" : "[resourceId('Microsoft.Network/publicIPAddresses', variables('masterPublicIpAddressName'))]", 17 | "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", 18 | "masterLoadBalancerID" : "[resourceId('Microsoft.Network/loadBalancers', variables('masterLoadBalancerName'))]", 19 | "skuName": "Standard" 20 | }, 21 | "resources" : [ 22 | { 23 | "apiVersion" : "2018-12-01", 24 | "type" : "Microsoft.Network/publicIPAddresses", 25 | "name" : "[variables('masterPublicIpAddressName')]", 26 | "location" : "[variables('location')]", 27 | "sku": { 28 | "name": "[variables('skuName')]" 29 | }, 30 | "properties" : { 31 | "publicIPAllocationMethod" : "Static", 32 | "dnsSettings" : { 33 | "domainNameLabel" : "[variables('masterPublicIpAddressName')]" 34 | } 35 | } 36 | }, 37 | { 38 | "apiVersion" : "2018-12-01", 39 | "type" : "Microsoft.Network/loadBalancers", 40 | "name" : "[variables('masterLoadBalancerName')]", 41 | "location" : "[variables('location')]", 42 | "sku": { 43 | "name": "[variables('skuName')]" 44 | }, 45 | "dependsOn" : [ 46 | "[concat('Microsoft.Network/publicIPAddresses/', variables('masterPublicIpAddressName'))]" 47 | ], 48 | "properties" : { 49 | "frontendIPConfigurations" : [ 50 | { 51 | "name" : "public-lb-ip", 52 | "properties" : { 53 | "publicIPAddress" : { 54 | "id" : "[variables('masterPublicIpAddressID')]" 55 | } 56 | } 57 | } 58 | ], 59 | "backendAddressPools" : [ 60 | { 61 | "name" : "public-lb-backend" 62 | } 63 | ], 64 | "loadBalancingRules" : [ 65 | { 66 | "name" : "api-internal", 67 | "properties" : { 68 | "frontendIPConfiguration" : { 69 | "id" :"[concat(variables('masterLoadBalancerID'), '/frontendIPConfigurations/public-lb-ip')]" 70 | }, 71 | "backendAddressPool" : { 72 | "id" : "[concat(variables('masterLoadBalancerID'), '/backendAddressPools/public-lb-backend')]" 73 | }, 74 | "protocol" : "Tcp", 75 | "loadDistribution" : "Default", 76 | "idleTimeoutInMinutes" : 30, 77 | "frontendPort" : 6443, 78 | "backendPort" : 6443, 79 | "probe" : { 80 | "id" : "[concat(variables('masterLoadBalancerID'), '/probes/api-internal-probe')]" 81 | } 82 | } 83 | } 84 | ], 85 | "probes" : [ 86 | { 87 | "name" : "api-internal-probe", 88 | "properties" : { 89 | "protocol" : "Tcp", 90 | "port" : 6443, 91 | "intervalInSeconds" : 10, 92 | "numberOfProbes" : 3 93 | } 94 | } 95 | ] 96 | } 97 | } 98 | ] 99 | } -------------------------------------------------------------------------------- /provisioning/upi/04_bootstrap-internal-only.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 
12 | "virtualNetworkResourceGroup" : { 13 | "type" : "string", 14 | "minLength" : 1, 15 | "metadata" : { 16 | "description" : "Resource group name of the OCP virtual network" 17 | } 18 | }, 19 | "virtualNetworkName" : { 20 | "type" : "string", 21 | "minLength" : 1, 22 | "metadata" : { 23 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 24 | } 25 | }, 26 | "masterSubnetName" : { 27 | "type" : "string", 28 | "minLength" : 1, 29 | "metadata" : { 30 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 31 | } 32 | }, 33 | "bootstrapIgnition" : { 34 | "type" : "string", 35 | "minLength" : 1, 36 | "metadata" : { 37 | "description" : "Bootstrap ignition content for the bootstrap cluster" 38 | } 39 | }, 40 | "sshKeyData" : { 41 | "type" : "securestring", 42 | "metadata" : { 43 | "description" : "SSH RSA public key file as a string." 44 | } 45 | }, 46 | "bootstrapVMSize" : { 47 | "type" : "string", 48 | "defaultValue" : "Standard_D4s_v3", 49 | "allowedValues" : [ 50 | "Standard_A2", 51 | "Standard_A3", 52 | "Standard_A4", 53 | "Standard_A5", 54 | "Standard_A6", 55 | "Standard_A7", 56 | "Standard_A8", 57 | "Standard_A9", 58 | "Standard_A10", 59 | "Standard_A11", 60 | "Standard_D2", 61 | "Standard_D3", 62 | "Standard_D4", 63 | "Standard_D11", 64 | "Standard_D12", 65 | "Standard_D13", 66 | "Standard_D14", 67 | "Standard_D2_v2", 68 | "Standard_D3_v2", 69 | "Standard_D4_v2", 70 | "Standard_D5_v2", 71 | "Standard_D8_v3", 72 | "Standard_D11_v2", 73 | "Standard_D12_v2", 74 | "Standard_D13_v2", 75 | "Standard_D14_v2", 76 | "Standard_E2_v3", 77 | "Standard_E4_v3", 78 | "Standard_E8_v3", 79 | "Standard_E16_v3", 80 | "Standard_E32_v3", 81 | "Standard_E64_v3", 82 | "Standard_E2s_v3", 83 | "Standard_E4s_v3", 84 | "Standard_E8s_v3", 85 | "Standard_E16s_v3", 86 | "Standard_E32s_v3", 87 | "Standard_E64s_v3", 88 | "Standard_G1", 89 | "Standard_G2", 90 | "Standard_G3", 91 | "Standard_G4", 92 | "Standard_G5", 93 | "Standard_DS2", 94 | "Standard_DS3", 95 | "Standard_DS4", 96 | "Standard_DS11", 97 | "Standard_DS12", 98 | "Standard_DS13", 99 | "Standard_DS14", 100 | "Standard_DS2_v2", 101 | "Standard_DS3_v2", 102 | "Standard_DS4_v2", 103 | "Standard_DS5_v2", 104 | "Standard_DS11_v2", 105 | "Standard_DS12_v2", 106 | "Standard_DS13_v2", 107 | "Standard_DS14_v2", 108 | "Standard_GS1", 109 | "Standard_GS2", 110 | "Standard_GS3", 111 | "Standard_GS4", 112 | "Standard_GS5", 113 | "Standard_D2s_v3", 114 | "Standard_D4s_v3", 115 | "Standard_D8s_v3" 116 | ], 117 | "metadata" : { 118 | "description" : "The size of the Bootstrap Virtual Machine" 119 | } 120 | } 121 | }, 122 | "variables" : { 123 | "location" : "[resourceGroup().location]", 124 | "virtualNetworkName" : "[parameters('virtualNetworkName')]", 125 | "virtualNetworkID" : "[resourceId(parameters('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 126 | "masterSubnetName" : "[parameters('masterSubnetName')]", 127 | "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", 128 | "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", 129 | "sshKeyPath" : "/home/core/.ssh/authorized_keys", 130 | "identityName" : "[concat(parameters('baseName'), '-identity')]", 131 | "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", 132 | "nicName" : "[concat(variables('vmName'), '-nic')]", 133 | "imageName" : "[concat(parameters('baseName'), '-image')]", 134 | 
"controlPlaneNsgName" : "[concat(parameters('baseName'), '-controlplane-nsg')]", 135 | "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" 136 | }, 137 | "resources" : [ 138 | { 139 | "apiVersion" : "2018-12-01", 140 | "type" : "Microsoft.Network/publicIPAddresses", 141 | "name" : "[variables('sshPublicIpAddressName')]", 142 | "location" : "[variables('location')]", 143 | "sku": { 144 | "name": "Standard" 145 | }, 146 | "properties" : { 147 | "publicIPAllocationMethod" : "Static", 148 | "dnsSettings" : { 149 | "domainNameLabel" : "[variables('sshPublicIpAddressName')]" 150 | } 151 | } 152 | }, 153 | { 154 | "apiVersion" : "2018-06-01", 155 | "type" : "Microsoft.Network/networkInterfaces", 156 | "name" : "[variables('nicName')]", 157 | "location" : "[variables('location')]", 158 | "dependsOn" : [ 159 | "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" 160 | ], 161 | "properties" : { 162 | "ipConfigurations" : [ 163 | { 164 | "name" : "pipConfig", 165 | "properties" : { 166 | "privateIPAllocationMethod" : "Dynamic", 167 | "publicIPAddress": { 168 | "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" 169 | }, 170 | "subnet" : { 171 | "id" : "[variables('masterSubnetRef')]" 172 | }, 173 | "loadBalancerBackendAddressPools" : [ 174 | { 175 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" 176 | } 177 | ] 178 | } 179 | } 180 | ] 181 | } 182 | }, 183 | { 184 | "apiVersion" : "2018-06-01", 185 | "type" : "Microsoft.Compute/virtualMachines", 186 | "name" : "[variables('vmName')]", 187 | "location" : "[variables('location')]", 188 | "identity" : { 189 | "type" : "userAssigned", 190 | "userAssignedIdentities" : { 191 | "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} 192 | } 193 | }, 194 | "dependsOn" : [ 195 | "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" 196 | ], 197 | "properties" : { 198 | "hardwareProfile" : { 199 | "vmSize" : "[parameters('bootstrapVMSize')]" 200 | }, 201 | "osProfile" : { 202 | "computerName" : "[variables('vmName')]", 203 | "adminUsername" : "core", 204 | "customData" : "[parameters('bootstrapIgnition')]", 205 | "linuxConfiguration" : { 206 | "disablePasswordAuthentication" : true, 207 | "ssh" : { 208 | "publicKeys" : [ 209 | { 210 | "path" : "[variables('sshKeyPath')]", 211 | "keyData" : "[parameters('sshKeyData')]" 212 | } 213 | ] 214 | } 215 | } 216 | }, 217 | "storageProfile" : { 218 | "imageReference": { 219 | "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" 220 | }, 221 | "osDisk" : { 222 | "name": "[concat(variables('vmName'),'_OSDisk')]", 223 | "osType" : "Linux", 224 | "createOption" : "FromImage", 225 | "managedDisk": { 226 | "storageAccountType": "Premium_LRS" 227 | }, 228 | "diskSizeGB" : 100 229 | } 230 | }, 231 | "networkProfile" : { 232 | "networkInterfaces" : [ 233 | { 234 | "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" 235 | } 236 | ] 237 | } 238 | } 239 | }, 240 | { 241 | "apiVersion" : "2018-06-01", 242 | "type": "Microsoft.Network/networkSecurityGroups/securityRules", 243 | "name" : "[concat(variables('controlPlaneNsgName'), '/bootstrap_ssh_in')]", 244 | "location" : "[variables('location')]", 245 | "dependsOn" : [ 246 | 
"[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" 247 | ], 248 | "properties": { 249 | "protocol" : "Tcp", 250 | "sourcePortRange" : "*", 251 | "destinationPortRange" : "22", 252 | "sourceAddressPrefix" : "*", 253 | "destinationAddressPrefix" : "*", 254 | "access" : "Allow", 255 | "priority" : 100, 256 | "direction" : "Inbound" 257 | } 258 | } 259 | ] 260 | } -------------------------------------------------------------------------------- /provisioning/upi/04_bootstrap.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "virtualNetworkName" : { 13 | "type" : "string", 14 | "minLength" : 1, 15 | "metadata" : { 16 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 17 | } 18 | }, 19 | "masterSubnetName" : { 20 | "type" : "string", 21 | "minLength" : 1, 22 | "metadata" : { 23 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 24 | } 25 | }, 26 | "bootstrapIgnition" : { 27 | "type" : "string", 28 | "minLength" : 1, 29 | "metadata" : { 30 | "description" : "Bootstrap ignition content for the bootstrap cluster" 31 | } 32 | }, 33 | "sshKeyData" : { 34 | "type" : "securestring", 35 | "metadata" : { 36 | "description" : "SSH RSA public key file as a string." 37 | } 38 | }, 39 | "bootstrapVMSize" : { 40 | "type" : "string", 41 | "defaultValue" : "Standard_D4s_v3", 42 | "allowedValues" : [ 43 | "Standard_A2", 44 | "Standard_A3", 45 | "Standard_A4", 46 | "Standard_A5", 47 | "Standard_A6", 48 | "Standard_A7", 49 | "Standard_A8", 50 | "Standard_A9", 51 | "Standard_A10", 52 | "Standard_A11", 53 | "Standard_D2", 54 | "Standard_D3", 55 | "Standard_D4", 56 | "Standard_D11", 57 | "Standard_D12", 58 | "Standard_D13", 59 | "Standard_D14", 60 | "Standard_D2_v2", 61 | "Standard_D3_v2", 62 | "Standard_D4_v2", 63 | "Standard_D5_v2", 64 | "Standard_D8_v3", 65 | "Standard_D11_v2", 66 | "Standard_D12_v2", 67 | "Standard_D13_v2", 68 | "Standard_D14_v2", 69 | "Standard_E2_v3", 70 | "Standard_E4_v3", 71 | "Standard_E8_v3", 72 | "Standard_E16_v3", 73 | "Standard_E32_v3", 74 | "Standard_E64_v3", 75 | "Standard_E2s_v3", 76 | "Standard_E4s_v3", 77 | "Standard_E8s_v3", 78 | "Standard_E16s_v3", 79 | "Standard_E32s_v3", 80 | "Standard_E64s_v3", 81 | "Standard_G1", 82 | "Standard_G2", 83 | "Standard_G3", 84 | "Standard_G4", 85 | "Standard_G5", 86 | "Standard_DS2", 87 | "Standard_DS3", 88 | "Standard_DS4", 89 | "Standard_DS11", 90 | "Standard_DS12", 91 | "Standard_DS13", 92 | "Standard_DS14", 93 | "Standard_DS2_v2", 94 | "Standard_DS3_v2", 95 | "Standard_DS4_v2", 96 | "Standard_DS5_v2", 97 | "Standard_DS11_v2", 98 | "Standard_DS12_v2", 99 | "Standard_DS13_v2", 100 | "Standard_DS14_v2", 101 | "Standard_GS1", 102 | "Standard_GS2", 103 | "Standard_GS3", 104 | "Standard_GS4", 105 | "Standard_GS5", 106 | "Standard_D2s_v3", 107 | "Standard_D4s_v3", 108 | "Standard_D8s_v3" 109 | ], 110 | "metadata" : { 111 | "description" : "The size of the Bootstrap Virtual Machine" 112 | } 113 | } 114 | }, 115 | "variables" : { 116 | "location" : "[resourceGroup().location]", 117 | "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", 118 | 
"virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 119 | "masterSubnetName" : "[concat(parameters('baseName'), '-master-subnet')]", 120 | "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", 121 | "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", 122 | "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", 123 | "sshKeyPath" : "/home/core/.ssh/authorized_keys", 124 | "identityName" : "[concat(parameters('baseName'), '-identity')]", 125 | "vmName" : "[concat(parameters('baseName'), '-bootstrap')]", 126 | "nicName" : "[concat(variables('vmName'), '-nic')]", 127 | "imageName" : "[concat(parameters('baseName'), '-image')]", 128 | "controlPlaneNsgName" : "[concat(parameters('baseName'), '-controlplane-nsg')]", 129 | "sshPublicIpAddressName" : "[concat(variables('vmName'), '-ssh-pip')]" 130 | }, 131 | "resources" : [ 132 | { 133 | "apiVersion" : "2018-12-01", 134 | "type" : "Microsoft.Network/publicIPAddresses", 135 | "name" : "[variables('sshPublicIpAddressName')]", 136 | "location" : "[variables('location')]", 137 | "sku": { 138 | "name": "Standard" 139 | }, 140 | "properties" : { 141 | "publicIPAllocationMethod" : "Static", 142 | "dnsSettings" : { 143 | "domainNameLabel" : "[variables('sshPublicIpAddressName')]" 144 | } 145 | } 146 | }, 147 | { 148 | "apiVersion" : "2018-06-01", 149 | "type" : "Microsoft.Network/networkInterfaces", 150 | "name" : "[variables('nicName')]", 151 | "location" : "[variables('location')]", 152 | "dependsOn" : [ 153 | "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" 154 | ], 155 | "properties" : { 156 | "ipConfigurations" : [ 157 | { 158 | "name" : "pipConfig", 159 | "properties" : { 160 | "privateIPAllocationMethod" : "Dynamic", 161 | "publicIPAddress": { 162 | "id": "[resourceId('Microsoft.Network/publicIPAddresses', variables('sshPublicIpAddressName'))]" 163 | }, 164 | "subnet" : { 165 | "id" : "[variables('masterSubnetRef')]" 166 | }, 167 | "loadBalancerBackendAddressPools" : [ 168 | { 169 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" 170 | }, 171 | { 172 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" 173 | } 174 | ] 175 | } 176 | } 177 | ] 178 | } 179 | }, 180 | { 181 | "apiVersion" : "2018-06-01", 182 | "type" : "Microsoft.Compute/virtualMachines", 183 | "name" : "[variables('vmName')]", 184 | "location" : "[variables('location')]", 185 | "identity" : { 186 | "type" : "userAssigned", 187 | "userAssignedIdentities" : { 188 | "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} 189 | } 190 | }, 191 | "dependsOn" : [ 192 | "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'))]" 193 | ], 194 | "properties" : { 195 | "hardwareProfile" : { 196 | "vmSize" : "[parameters('bootstrapVMSize')]" 197 | }, 198 | "osProfile" : { 199 | "computerName" : "[variables('vmName')]", 200 | "adminUsername" : "core", 201 | "customData" : "[parameters('bootstrapIgnition')]", 202 | "linuxConfiguration" : { 203 | "disablePasswordAuthentication" 
: true, 204 | "ssh" : { 205 | "publicKeys" : [ 206 | { 207 | "path" : "[variables('sshKeyPath')]", 208 | "keyData" : "[parameters('sshKeyData')]" 209 | } 210 | ] 211 | } 212 | } 213 | }, 214 | "storageProfile" : { 215 | "imageReference": { 216 | "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" 217 | }, 218 | "osDisk" : { 219 | "name": "[concat(variables('vmName'),'_OSDisk')]", 220 | "osType" : "Linux", 221 | "createOption" : "FromImage", 222 | "managedDisk": { 223 | "storageAccountType": "Premium_LRS" 224 | }, 225 | "diskSizeGB" : 100 226 | } 227 | }, 228 | "networkProfile" : { 229 | "networkInterfaces" : [ 230 | { 231 | "id" : "[resourceId('Microsoft.Network/networkInterfaces', variables('nicName'))]" 232 | } 233 | ] 234 | } 235 | } 236 | }, 237 | { 238 | "apiVersion" : "2018-06-01", 239 | "type": "Microsoft.Network/networkSecurityGroups/securityRules", 240 | "name" : "[concat(variables('controlPlaneNsgName'), '/bootstrap_ssh_in')]", 241 | "location" : "[variables('location')]", 242 | "dependsOn" : [ 243 | "[resourceId('Microsoft.Compute/virtualMachines', variables('vmName'))]" 244 | ], 245 | "properties": { 246 | "protocol" : "Tcp", 247 | "sourcePortRange" : "*", 248 | "destinationPortRange" : "22", 249 | "sourceAddressPrefix" : "*", 250 | "destinationAddressPrefix" : "*", 251 | "access" : "Allow", 252 | "priority" : 100, 253 | "direction" : "Inbound" 254 | } 255 | } 256 | ] 257 | } -------------------------------------------------------------------------------- /provisioning/upi/05_masters-internal-only.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "virtualNetworkResourceGroup" : { 13 | "type" : "string", 14 | "minLength" : 1, 15 | "metadata" : { 16 | "description" : "Resource group name of the OCP virtual network" 17 | } 18 | }, 19 | "virtualNetworkName" : { 20 | "type" : "string", 21 | "minLength" : 1, 22 | "metadata" : { 23 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 24 | } 25 | }, 26 | "masterSubnetName" : { 27 | "type" : "string", 28 | "minLength" : 1, 29 | "metadata" : { 30 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 31 | } 32 | }, 33 | "masterIgnition" : { 34 | "type" : "string", 35 | "metadata" : { 36 | "description" : "Ignition content for the master nodes" 37 | } 38 | }, 39 | "numberOfMasters" : { 40 | "type" : "int", 41 | "defaultValue" : 3, 42 | "minValue" : 2, 43 | "maxValue" : 30, 44 | "metadata" : { 45 | "description" : "Number of OpenShift masters to deploy" 46 | } 47 | }, 48 | "sshKeyData" : { 49 | "type" : "securestring", 50 | "metadata" : { 51 | "description" : "SSH RSA public key file as a string" 52 | } 53 | }, 54 | "privateDNSZoneName" : { 55 | "type" : "string", 56 | "metadata" : { 57 | "description" : "Name of the private DNS zone the master nodes are going to be attached to" 58 | } 59 | }, 60 | "masterVMSize" : { 61 | "type" : "string", 62 | "defaultValue" : "Standard_D8s_v3", 63 | "allowedValues" : [ 64 | "Standard_A2", 65 | "Standard_A3", 66 | "Standard_A4", 67 | "Standard_A5", 68 | "Standard_A6", 69 | "Standard_A7", 70 | "Standard_A8", 71 | 
"Standard_A9", 72 | "Standard_A10", 73 | "Standard_A11", 74 | "Standard_D2", 75 | "Standard_D3", 76 | "Standard_D4", 77 | "Standard_D11", 78 | "Standard_D12", 79 | "Standard_D13", 80 | "Standard_D14", 81 | "Standard_D2_v2", 82 | "Standard_D3_v2", 83 | "Standard_D4_v2", 84 | "Standard_D5_v2", 85 | "Standard_D8_v3", 86 | "Standard_D11_v2", 87 | "Standard_D12_v2", 88 | "Standard_D13_v2", 89 | "Standard_D14_v2", 90 | "Standard_E2_v3", 91 | "Standard_E4_v3", 92 | "Standard_E8_v3", 93 | "Standard_E16_v3", 94 | "Standard_E32_v3", 95 | "Standard_E64_v3", 96 | "Standard_E2s_v3", 97 | "Standard_E4s_v3", 98 | "Standard_E8s_v3", 99 | "Standard_E16s_v3", 100 | "Standard_E32s_v3", 101 | "Standard_E64s_v3", 102 | "Standard_G1", 103 | "Standard_G2", 104 | "Standard_G3", 105 | "Standard_G4", 106 | "Standard_G5", 107 | "Standard_DS2", 108 | "Standard_DS3", 109 | "Standard_DS4", 110 | "Standard_DS11", 111 | "Standard_DS12", 112 | "Standard_DS13", 113 | "Standard_DS14", 114 | "Standard_DS2_v2", 115 | "Standard_DS3_v2", 116 | "Standard_DS4_v2", 117 | "Standard_DS5_v2", 118 | "Standard_DS11_v2", 119 | "Standard_DS12_v2", 120 | "Standard_DS13_v2", 121 | "Standard_DS14_v2", 122 | "Standard_GS1", 123 | "Standard_GS2", 124 | "Standard_GS3", 125 | "Standard_GS4", 126 | "Standard_GS5", 127 | "Standard_D2s_v3", 128 | "Standard_D4s_v3", 129 | "Standard_D8s_v3" 130 | ], 131 | "metadata" : { 132 | "description" : "The size of the Master Virtual Machines" 133 | } 134 | } 135 | }, 136 | "variables" : { 137 | "location" : "[resourceGroup().location]", 138 | "virtualNetworkName" : "[parameters('virtualNetworkName')]", 139 | "virtualNetworkID" : "[resourceId(parameters('virtualNetworkResourceGroup'), 'Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 140 | "masterSubnetName" : "[parameters('masterSubnetName')]", 141 | "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('masterSubnetName'))]", 142 | "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", 143 | "sshKeyPath" : "/home/core/.ssh/authorized_keys", 144 | "identityName" : "[concat(parameters('baseName'), '-identity')]", 145 | "imageName" : "[concat(parameters('baseName'), '-image')]", 146 | "copy" : [ 147 | { 148 | "name" : "vmNames", 149 | "count" : "[parameters('numberOfMasters')]", 150 | "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" 151 | } 152 | ] 153 | }, 154 | "resources" : [ 155 | { 156 | "apiVersion" : "2018-06-01", 157 | "type" : "Microsoft.Network/networkInterfaces", 158 | "copy" : { 159 | "name" : "nicCopy", 160 | "count" : "[length(variables('vmNames'))]" 161 | }, 162 | "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", 163 | "location" : "[variables('location')]", 164 | "properties" : { 165 | "ipConfigurations" : [ 166 | { 167 | "name" : "pipConfig", 168 | "properties" : { 169 | "privateIPAllocationMethod" : "Dynamic", 170 | "subnet" : { 171 | "id" : "[variables('masterSubnetRef')]" 172 | }, 173 | "loadBalancerBackendAddressPools" : [ 174 | { 175 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" 176 | } 177 | ] 178 | } 179 | } 180 | ] 181 | } 182 | }, 183 | { 184 | "apiVersion": "2018-09-01", 185 | "type": "Microsoft.Network/privateDnsZones/SRV", 186 | "name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]", 187 | 
"location" : "[variables('location')]", 188 | "properties": { 189 | "ttl": 60, 190 | "copy": [{ 191 | "name": "srvRecords", 192 | "count": "[length(variables('vmNames'))]", 193 | "input": { 194 | "priority": 0, 195 | "weight" : 10, 196 | "port" : 2380, 197 | "target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]" 198 | } 199 | }] 200 | } 201 | }, 202 | { 203 | "apiVersion": "2018-09-01", 204 | "type": "Microsoft.Network/privateDnsZones/A", 205 | "copy" : { 206 | "name" : "dnsCopy", 207 | "count" : "[length(variables('vmNames'))]" 208 | }, 209 | "name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]", 210 | "location" : "[variables('location')]", 211 | "dependsOn" : [ 212 | "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" 213 | ], 214 | "properties": { 215 | "ttl": 60, 216 | "aRecords": [ 217 | { 218 | "ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]" 219 | } 220 | ] 221 | } 222 | }, 223 | { 224 | "apiVersion" : "2018-06-01", 225 | "type" : "Microsoft.Compute/virtualMachines", 226 | "copy" : { 227 | "name" : "vmCopy", 228 | "count" : "[length(variables('vmNames'))]" 229 | }, 230 | "name" : "[variables('vmNames')[copyIndex()]]", 231 | "location" : "[variables('location')]", 232 | "identity" : { 233 | "type" : "userAssigned", 234 | "userAssignedIdentities" : { 235 | "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} 236 | } 237 | }, 238 | "dependsOn" : [ 239 | "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]", 240 | "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]", 241 | "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]" 242 | ], 243 | "properties" : { 244 | "hardwareProfile" : { 245 | "vmSize" : "[parameters('masterVMSize')]" 246 | }, 247 | "osProfile" : { 248 | "computerName" : "[variables('vmNames')[copyIndex()]]", 249 | "adminUsername" : "core", 250 | "customData" : "[parameters('masterIgnition')]", 251 | "linuxConfiguration" : { 252 | "disablePasswordAuthentication" : true, 253 | "ssh" : { 254 | "publicKeys" : [ 255 | { 256 | "path" : "[variables('sshKeyPath')]", 257 | "keyData" : "[parameters('sshKeyData')]" 258 | } 259 | ] 260 | } 261 | } 262 | }, 263 | "storageProfile" : { 264 | "imageReference": { 265 | "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" 266 | }, 267 | "osDisk" : { 268 | "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", 269 | "osType" : "Linux", 270 | "createOption" : "FromImage", 271 | "caching": "ReadOnly", 272 | "writeAcceleratorEnabled": false, 273 | "managedDisk": { 274 | "storageAccountType": "Premium_LRS" 275 | }, 276 | "diskSizeGB" : 128 277 | } 278 | }, 279 | "networkProfile" : { 280 | "networkInterfaces" : [ 281 | { 282 | "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", 283 | "properties": { 284 | "primary": false 285 | } 286 | } 287 | ] 288 | } 289 | } 290 | } 291 | ] 292 | } -------------------------------------------------------------------------------- /provisioning/upi/05_masters.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : 
"https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "virtualNetworkName" : { 13 | "type" : "string", 14 | "minLength" : 1, 15 | "metadata" : { 16 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 17 | } 18 | }, 19 | "masterSubnetName" : { 20 | "type" : "string", 21 | "minLength" : 1, 22 | "metadata" : { 23 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 24 | } 25 | }, 26 | "masterIgnition" : { 27 | "type" : "string", 28 | "metadata" : { 29 | "description" : "Ignition content for the master nodes" 30 | } 31 | }, 32 | "numberOfMasters" : { 33 | "type" : "int", 34 | "defaultValue" : 3, 35 | "minValue" : 2, 36 | "maxValue" : 30, 37 | "metadata" : { 38 | "description" : "Number of OpenShift masters to deploy" 39 | } 40 | }, 41 | "sshKeyData" : { 42 | "type" : "securestring", 43 | "metadata" : { 44 | "description" : "SSH RSA public key file as a string" 45 | } 46 | }, 47 | "privateDNSZoneName" : { 48 | "type" : "string", 49 | "metadata" : { 50 | "description" : "Name of the private DNS zone the master nodes are going to be attached to" 51 | } 52 | }, 53 | "masterVMSize" : { 54 | "type" : "string", 55 | "defaultValue" : "Standard_D8s_v3", 56 | "allowedValues" : [ 57 | "Standard_A2", 58 | "Standard_A3", 59 | "Standard_A4", 60 | "Standard_A5", 61 | "Standard_A6", 62 | "Standard_A7", 63 | "Standard_A8", 64 | "Standard_A9", 65 | "Standard_A10", 66 | "Standard_A11", 67 | "Standard_D2", 68 | "Standard_D3", 69 | "Standard_D4", 70 | "Standard_D11", 71 | "Standard_D12", 72 | "Standard_D13", 73 | "Standard_D14", 74 | "Standard_D2_v2", 75 | "Standard_D3_v2", 76 | "Standard_D4_v2", 77 | "Standard_D5_v2", 78 | "Standard_D8_v3", 79 | "Standard_D11_v2", 80 | "Standard_D12_v2", 81 | "Standard_D13_v2", 82 | "Standard_D14_v2", 83 | "Standard_E2_v3", 84 | "Standard_E4_v3", 85 | "Standard_E8_v3", 86 | "Standard_E16_v3", 87 | "Standard_E32_v3", 88 | "Standard_E64_v3", 89 | "Standard_E2s_v3", 90 | "Standard_E4s_v3", 91 | "Standard_E8s_v3", 92 | "Standard_E16s_v3", 93 | "Standard_E32s_v3", 94 | "Standard_E64s_v3", 95 | "Standard_G1", 96 | "Standard_G2", 97 | "Standard_G3", 98 | "Standard_G4", 99 | "Standard_G5", 100 | "Standard_DS2", 101 | "Standard_DS3", 102 | "Standard_DS4", 103 | "Standard_DS11", 104 | "Standard_DS12", 105 | "Standard_DS13", 106 | "Standard_DS14", 107 | "Standard_DS2_v2", 108 | "Standard_DS3_v2", 109 | "Standard_DS4_v2", 110 | "Standard_DS5_v2", 111 | "Standard_DS11_v2", 112 | "Standard_DS12_v2", 113 | "Standard_DS13_v2", 114 | "Standard_DS14_v2", 115 | "Standard_GS1", 116 | "Standard_GS2", 117 | "Standard_GS3", 118 | "Standard_GS4", 119 | "Standard_GS5", 120 | "Standard_D2s_v3", 121 | "Standard_D4s_v3", 122 | "Standard_D8s_v3" 123 | ], 124 | "metadata" : { 125 | "description" : "The size of the Master Virtual Machines" 126 | } 127 | } 128 | }, 129 | "variables" : { 130 | "location" : "[resourceGroup().location]", 131 | "virtualNetworkName" : "[parameters('virtualNetworkName')]", 132 | "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 133 | "masterSubnetName" : "[parameters('masterSubnetName')]", 134 | "masterSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', 
variables('masterSubnetName'))]", 135 | "masterLoadBalancerName" : "[concat(parameters('baseName'), '-public-lb')]", 136 | "internalLoadBalancerName" : "[concat(parameters('baseName'), '-internal-lb')]", 137 | "sshKeyPath" : "/home/core/.ssh/authorized_keys", 138 | "identityName" : "[concat(parameters('baseName'), '-identity')]", 139 | "imageName" : "[concat(parameters('baseName'), '-image')]", 140 | "copy" : [ 141 | { 142 | "name" : "vmNames", 143 | "count" : "[parameters('numberOfMasters')]", 144 | "input" : "[concat(parameters('baseName'), '-master-', copyIndex('vmNames'))]" 145 | } 146 | ] 147 | }, 148 | "resources" : [ 149 | { 150 | "apiVersion" : "2018-06-01", 151 | "type" : "Microsoft.Network/networkInterfaces", 152 | "copy" : { 153 | "name" : "nicCopy", 154 | "count" : "[length(variables('vmNames'))]" 155 | }, 156 | "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", 157 | "location" : "[variables('location')]", 158 | "properties" : { 159 | "ipConfigurations" : [ 160 | { 161 | "name" : "pipConfig", 162 | "properties" : { 163 | "privateIPAllocationMethod" : "Dynamic", 164 | "subnet" : { 165 | "id" : "[variables('masterSubnetRef')]" 166 | }, 167 | "loadBalancerBackendAddressPools" : [ 168 | { 169 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('masterLoadBalancerName'), '/backendAddressPools/public-lb-backend')]" 170 | }, 171 | { 172 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('internalLoadBalancerName'), '/backendAddressPools/internal-lb-backend')]" 173 | } 174 | ] 175 | } 176 | } 177 | ] 178 | } 179 | }, 180 | { 181 | "apiVersion": "2018-09-01", 182 | "type": "Microsoft.Network/privateDnsZones/SRV", 183 | "name": "[concat(parameters('privateDNSZoneName'), '/_etcd-server-ssl._tcp')]", 184 | "location" : "[variables('location')]", 185 | "properties": { 186 | "ttl": 60, 187 | "copy": [{ 188 | "name": "srvRecords", 189 | "count": "[length(variables('vmNames'))]", 190 | "input": { 191 | "priority": 0, 192 | "weight" : 10, 193 | "port" : 2380, 194 | "target" : "[concat('etcd-', copyIndex('srvRecords'), '.', parameters('privateDNSZoneName'))]" 195 | } 196 | }] 197 | } 198 | }, 199 | { 200 | "apiVersion": "2018-09-01", 201 | "type": "Microsoft.Network/privateDnsZones/A", 202 | "copy" : { 203 | "name" : "dnsCopy", 204 | "count" : "[length(variables('vmNames'))]" 205 | }, 206 | "name": "[concat(parameters('privateDNSZoneName'), '/etcd-', copyIndex())]", 207 | "location" : "[variables('location')]", 208 | "dependsOn" : [ 209 | "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" 210 | ], 211 | "properties": { 212 | "ttl": 60, 213 | "aRecords": [ 214 | { 215 | "ipv4Address": "[reference(concat(variables('vmNames')[copyIndex()], '-nic')).ipConfigurations[0].properties.privateIPAddress]" 216 | } 217 | ] 218 | } 219 | }, 220 | { 221 | "apiVersion" : "2018-06-01", 222 | "type" : "Microsoft.Compute/virtualMachines", 223 | "copy" : { 224 | "name" : "vmCopy", 225 | "count" : "[length(variables('vmNames'))]" 226 | }, 227 | "name" : "[variables('vmNames')[copyIndex()]]", 228 | "location" : "[variables('location')]", 229 | "identity" : { 230 | "type" : "userAssigned", 231 | "userAssignedIdentities" : { 232 | "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} 
233 | } 234 | }, 235 | "dependsOn" : [ 236 | "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]", 237 | "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/A/etcd-', copyIndex())]", 238 | "[concat('Microsoft.Network/privateDnsZones/', parameters('privateDNSZoneName'), '/SRV/_etcd-server-ssl._tcp')]" 239 | ], 240 | "properties" : { 241 | "hardwareProfile" : { 242 | "vmSize" : "[parameters('masterVMSize')]" 243 | }, 244 | "osProfile" : { 245 | "computerName" : "[variables('vmNames')[copyIndex()]]", 246 | "adminUsername" : "core", 247 | "customData" : "[parameters('masterIgnition')]", 248 | "linuxConfiguration" : { 249 | "disablePasswordAuthentication" : true, 250 | "ssh" : { 251 | "publicKeys" : [ 252 | { 253 | "path" : "[variables('sshKeyPath')]", 254 | "keyData" : "[parameters('sshKeyData')]" 255 | } 256 | ] 257 | } 258 | } 259 | }, 260 | "storageProfile" : { 261 | "imageReference": { 262 | "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" 263 | }, 264 | "osDisk" : { 265 | "name": "[concat(variables('vmNames')[copyIndex()], '_OSDisk')]", 266 | "osType" : "Linux", 267 | "createOption" : "FromImage", 268 | "caching": "ReadOnly", 269 | "writeAcceleratorEnabled": false, 270 | "managedDisk": { 271 | "storageAccountType": "Premium_LRS" 272 | }, 273 | "diskSizeGB" : 128 274 | } 275 | }, 276 | "networkProfile" : { 277 | "networkInterfaces" : [ 278 | { 279 | "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", 280 | "properties": { 281 | "primary": false 282 | } 283 | } 284 | ] 285 | } 286 | } 287 | } 288 | ] 289 | } -------------------------------------------------------------------------------- /provisioning/upi/06_workers.json: -------------------------------------------------------------------------------- 1 | { 2 | "$schema" : "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 3 | "contentVersion" : "1.0.0.0", 4 | "parameters" : { 5 | "baseName" : { 6 | "type" : "string", 7 | "minLength" : 1, 8 | "metadata" : { 9 | "description" : "Base name to be used in resource names (usually the cluster's Infra ID)" 10 | } 11 | }, 12 | "workerIgnition" : { 13 | "type" : "string", 14 | "metadata" : { 15 | "description" : "Ignition content for the worker nodes" 16 | } 17 | }, 18 | "numberOfNodes" : { 19 | "type" : "int", 20 | "defaultValue" : 3, 21 | "minValue" : 2, 22 | "maxValue" : 30, 23 | "metadata" : { 24 | "description" : "Number of OpenShift compute nodes to deploy" 25 | } 26 | }, 27 | "sshKeyData" : { 28 | "type" : "securestring", 29 | "metadata" : { 30 | "description" : "SSH RSA public key file as a string" 31 | } 32 | }, 33 | "nodeVMSize" : { 34 | "type" : "string", 35 | "defaultValue" : "Standard_D4s_v3", 36 | "allowedValues" : [ 37 | "Standard_A2", 38 | "Standard_A3", 39 | "Standard_A4", 40 | "Standard_A5", 41 | "Standard_A6", 42 | "Standard_A7", 43 | "Standard_A8", 44 | "Standard_A9", 45 | "Standard_A10", 46 | "Standard_A11", 47 | "Standard_D2", 48 | "Standard_D3", 49 | "Standard_D4", 50 | "Standard_D11", 51 | "Standard_D12", 52 | "Standard_D13", 53 | "Standard_D14", 54 | "Standard_D2_v2", 55 | "Standard_D3_v2", 56 | "Standard_D4_v2", 57 | "Standard_D5_v2", 58 | "Standard_D8_v3", 59 | "Standard_D11_v2", 60 | "Standard_D12_v2", 61 | "Standard_D13_v2", 62 | "Standard_D14_v2", 63 | "Standard_E2_v3", 64 | "Standard_E4_v3", 65 | "Standard_E8_v3", 66 | "Standard_E16_v3", 67 | "Standard_E32_v3", 68 | 
"Standard_E64_v3", 69 | "Standard_E2s_v3", 70 | "Standard_E4s_v3", 71 | "Standard_E8s_v3", 72 | "Standard_E16s_v3", 73 | "Standard_E32s_v3", 74 | "Standard_E64s_v3", 75 | "Standard_G1", 76 | "Standard_G2", 77 | "Standard_G3", 78 | "Standard_G4", 79 | "Standard_G5", 80 | "Standard_DS2", 81 | "Standard_DS3", 82 | "Standard_DS4", 83 | "Standard_DS11", 84 | "Standard_DS12", 85 | "Standard_DS13", 86 | "Standard_DS14", 87 | "Standard_DS2_v2", 88 | "Standard_DS3_v2", 89 | "Standard_DS4_v2", 90 | "Standard_DS5_v2", 91 | "Standard_DS11_v2", 92 | "Standard_DS12_v2", 93 | "Standard_DS13_v2", 94 | "Standard_DS14_v2", 95 | "Standard_GS1", 96 | "Standard_GS2", 97 | "Standard_GS3", 98 | "Standard_GS4", 99 | "Standard_GS5", 100 | "Standard_D2s_v3", 101 | "Standard_D4s_v3", 102 | "Standard_D8s_v3" 103 | ], 104 | "metadata" : { 105 | "description" : "The size of the each Node Virtual Machine" 106 | } 107 | } 108 | }, 109 | "variables" : { 110 | "location" : "[resourceGroup().location]", 111 | "virtualNetworkName" : "[concat(parameters('baseName'), '-vnet')]", 112 | "virtualNetworkID" : "[resourceId('Microsoft.Network/virtualNetworks', variables('virtualNetworkName'))]", 113 | "nodeSubnetName" : "[concat(parameters('baseName'), '-worker-subnet')]", 114 | "nodeSubnetRef" : "[concat(variables('virtualNetworkID'), '/subnets/', variables('nodeSubnetName'))]", 115 | "infraLoadBalancerName" : "[parameters('baseName')]", 116 | "sshKeyPath" : "/home/capi/.ssh/authorized_keys", 117 | "identityName" : "[concat(parameters('baseName'), '-identity')]", 118 | "imageName" : "[concat(parameters('baseName'), '-image')]", 119 | "copy" : [ 120 | { 121 | "name" : "vmNames", 122 | "count" : "[parameters('numberOfNodes')]", 123 | "input" : "[concat(parameters('baseName'), '-worker-', variables('location'), '-', copyIndex('vmNames', 1))]" 124 | } 125 | ] 126 | }, 127 | "resources" : [ 128 | { 129 | "apiVersion" : "2019-05-01", 130 | "name" : "[concat('node', copyIndex())]", 131 | "type" : "Microsoft.Resources/deployments", 132 | "copy" : { 133 | "name" : "nodeCopy", 134 | "count" : "[length(variables('vmNames'))]" 135 | }, 136 | "properties" : { 137 | "mode" : "Incremental", 138 | "template" : { 139 | "$schema" : "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", 140 | "contentVersion" : "1.0.0.0", 141 | "resources" : [ 142 | { 143 | "apiVersion" : "2018-06-01", 144 | "type" : "Microsoft.Network/networkInterfaces", 145 | "name" : "[concat(variables('vmNames')[copyIndex()], '-nic')]", 146 | "location" : "[variables('location')]", 147 | "properties" : { 148 | "ipConfigurations" : [ 149 | { 150 | "name" : "pipConfig", 151 | "properties" : { 152 | "privateIPAllocationMethod" : "Dynamic", 153 | "subnet" : { 154 | "id" : "[variables('nodeSubnetRef')]" 155 | }, 156 | "loadBalancerBackendAddressPools" : [ 157 | { 158 | "id" : "[concat('/subscriptions/', subscription().subscriptionId, '/resourceGroups/', resourceGroup().name, '/providers/Microsoft.Network/loadBalancers/', variables('infraLoadBalancerName'), '/backendAddressPools/', parameters('baseName'))]" 159 | } 160 | ] 161 | } 162 | } 163 | ] 164 | } 165 | }, 166 | { 167 | "apiVersion" : "2018-06-01", 168 | "type" : "Microsoft.Compute/virtualMachines", 169 | "name" : "[variables('vmNames')[copyIndex()]]", 170 | "location" : "[variables('location')]", 171 | "tags" : { 172 | "kubernetes.io-cluster-ffranzupi": "owned" 173 | }, 174 | "identity" : { 175 | "type" : "userAssigned", 176 | "userAssignedIdentities" : { 177 | 
"[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/', variables('identityName'))]" : {} 178 | } 179 | }, 180 | "dependsOn" : [ 181 | "[concat('Microsoft.Network/networkInterfaces/', concat(variables('vmNames')[copyIndex()], '-nic'))]" 182 | ], 183 | "properties" : { 184 | "hardwareProfile" : { 185 | "vmSize" : "[parameters('nodeVMSize')]" 186 | }, 187 | "osProfile" : { 188 | "computerName" : "[variables('vmNames')[copyIndex()]]", 189 | "adminUsername" : "capi", 190 | "customData" : "[parameters('workerIgnition')]", 191 | "linuxConfiguration" : { 192 | "disablePasswordAuthentication" : true, 193 | "ssh" : { 194 | "publicKeys" : [ 195 | { 196 | "path" : "[variables('sshKeyPath')]", 197 | "keyData" : "[parameters('sshKeyData')]" 198 | } 199 | ] 200 | } 201 | } 202 | }, 203 | "storageProfile" : { 204 | "imageReference": { 205 | "id": "[resourceId('Microsoft.Compute/images', variables('imageName'))]" 206 | }, 207 | "osDisk" : { 208 | "name": "[concat(variables('vmNames')[copyIndex()],'_OSDisk')]", 209 | "osType" : "Linux", 210 | "createOption" : "FromImage", 211 | "managedDisk": { 212 | "storageAccountType": "Premium_LRS" 213 | }, 214 | "diskSizeGB": 128 215 | } 216 | }, 217 | "networkProfile" : { 218 | "networkInterfaces" : [ 219 | { 220 | "id" : "[resourceId('Microsoft.Network/networkInterfaces', concat(variables('vmNames')[copyIndex()], '-nic'))]", 221 | "properties": { 222 | "primary": true 223 | } 224 | } 225 | ] 226 | } 227 | } 228 | } 229 | ] 230 | } 231 | } 232 | } 233 | ] 234 | } -------------------------------------------------------------------------------- /provisioning/upi/dotmap/__pycache__/__init__.cpython-36.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/provisioning/upi/dotmap/__pycache__/__init__.cpython-36.pyc -------------------------------------------------------------------------------- /provisioning/upi/dotmap/test.py: -------------------------------------------------------------------------------- 1 | import unittest 2 | from dotmap import DotMap 3 | 4 | 5 | class TestReadme(unittest.TestCase): 6 | def test_basic_use(self): 7 | m = DotMap() 8 | self.assertIsInstance(m, DotMap) 9 | m.name = 'Joe' 10 | self.assertEqual(m.name, 'Joe') 11 | self.assertEqual('Hello ' + m.name, 'Hello Joe') 12 | self.assertIsInstance(m, dict) 13 | self.assertTrue(issubclass(m.__class__, dict)) 14 | self.assertEqual(m['name'], 'Joe') 15 | m.name += ' Smith' 16 | m['name'] += ' Jr' 17 | self.assertEqual(m.name, 'Joe Smith Jr') 18 | 19 | def test_automatic_hierarchy(self): 20 | m = DotMap() 21 | m.people.steve.age = 31 22 | self.assertEqual(m.people.steve.age, 31) 23 | 24 | def test_key_init(self): 25 | m = DotMap(a=1, b=2) 26 | self.assertEqual(m.a, 1) 27 | self.assertEqual(m.b, 2) 28 | 29 | def test_dict_conversion(self): 30 | d = {'a': 1, 'b': 2, 'c': {'d': 3, 'e': 4}} 31 | m = DotMap(d) 32 | self.assertEqual(m.a, 1) 33 | self.assertEqual(m.b, 2) 34 | d2 = m.toDict() 35 | self.assertIsInstance(d2, dict) 36 | self.assertNotIsInstance(d2, DotMap) 37 | self.assertEqual(len(d2), 3) 38 | self.assertEqual(d2['a'], 1) 39 | self.assertEqual(d2['b'], 2) 40 | self.assertNotIsInstance(d2['c'], DotMap) 41 | self.assertEqual(len(d2['c']), 2) 42 | self.assertEqual(d2['c']['d'], 3) 43 | self.assertEqual(d2['c']['e'], 4) 44 | 45 | def test_ordered_iteration(self): 46 | m = DotMap() 47 | m.people.john.age = 32 48 | m.people.john.job = 'programmer' 49 | 
m.people.mary.age = 24 50 | m.people.mary.job = 'designer' 51 | m.people.dave.age = 55 52 | m.people.dave.job = 'manager' 53 | expected = [ 54 | ('john', 32, 'programmer'), 55 | ('mary', 24, 'designer'), 56 | ('dave', 55, 'manager'), 57 | ] 58 | for i, (k, v) in enumerate(m.people.items()): 59 | self.assertEqual(expected[i][0], k) 60 | self.assertEqual(expected[i][1], v.age) 61 | self.assertEqual(expected[i][2], v.job) 62 | 63 | 64 | class TestBasic(unittest.TestCase): 65 | def setUp(self): 66 | self.d = { 67 | 'a': 1, 68 | 'b': 2, 69 | 'subD': {'c': 3, 'd': 4} 70 | } 71 | 72 | def test_dict_init(self): 73 | m = DotMap(self.d) 74 | self.assertIsInstance(m, DotMap) 75 | self.assertEqual(m.a, 1) 76 | self.assertEqual(m.b, 2) 77 | self.assertIsInstance(m.subD, DotMap) 78 | self.assertEqual(m.subD.c, 3) 79 | self.assertEqual(m.subD.d, 4) 80 | 81 | def test_copy(self): 82 | m = DotMap(self.d) 83 | dm_copy = m.copy() 84 | self.assertIsInstance(dm_copy, DotMap) 85 | self.assertEqual(dm_copy.a, 1) 86 | self.assertEqual(dm_copy.b, 2) 87 | self.assertIsInstance(dm_copy.subD, DotMap) 88 | self.assertEqual(dm_copy.subD.c, 3) 89 | self.assertEqual(dm_copy.subD.d, 4) 90 | 91 | def test_fromkeys(self): 92 | m = DotMap.fromkeys([1, 2, 3], 'a') 93 | self.assertEqual(len(m), 3) 94 | self.assertEqual(m[1], 'a') 95 | self.assertEqual(m[2], 'a') 96 | self.assertEqual(m[3], 'a') 97 | 98 | def test_dict_functionality(self): 99 | m = DotMap(self.d) 100 | self.assertEqual(m.get('a'), 1) 101 | self.assertEqual(m.get('f', 33), 33) 102 | self.assertIsNone(m.get('f')) 103 | self.assertTrue(m.has_key('a')) 104 | self.assertFalse(m.has_key('f')) 105 | m.update([('rat', 5), ('bum', 4)], dog=7, cat=9) 106 | self.assertEqual(m.rat, 5) 107 | self.assertEqual(m.bum, 4) 108 | self.assertEqual(m.dog, 7) 109 | self.assertEqual(m.cat, 9) 110 | m.update({'lol': 1, 'ba': 2}) 111 | self.assertEqual(m.lol, 1) 112 | self.assertEqual(m.ba, 2) 113 | self.assertTrue('a' in m) 114 | self.assertFalse('c' in m) 115 | self.assertTrue('c' in m.subD) 116 | self.assertTrue(len(m.subD), 2) 117 | del m.subD.c 118 | self.assertFalse('c' in m.subD) 119 | self.assertTrue(len(m.subD), 1) 120 | 121 | def test_list_comprehension(self): 122 | parentDict = { 123 | 'name': 'Father1', 124 | 'children': [ 125 | {'name': 'Child1'}, 126 | {'name': 'Child2'}, 127 | {'name': 'Child3'}, 128 | ] 129 | } 130 | parent = DotMap(parentDict) 131 | ordered_names = ['Child1', 'Child2', 'Child3'] 132 | comp = [x.name for x in parent.children] 133 | self.assertEqual(ordered_names, comp) 134 | 135 | 136 | class TestPickle(unittest.TestCase): 137 | def setUp(self): 138 | self.d = { 139 | 'a': 1, 140 | 'b': 2, 141 | 'subD': {'c': 3, 'd': 4} 142 | } 143 | 144 | def test(self): 145 | import pickle 146 | pm = DotMap(self.d) 147 | s = pickle.dumps(pm) 148 | m = pickle.loads(s) 149 | self.assertIsInstance(m, DotMap) 150 | self.assertEqual(m.a, 1) 151 | self.assertEqual(m.b, 2) 152 | self.assertIsInstance(m.subD, DotMap) 153 | self.assertEqual(m.subD.c, 3) 154 | self.assertEqual(m.subD.d, 4) 155 | 156 | 157 | class TestEmpty(unittest.TestCase): 158 | def test(self): 159 | m = DotMap() 160 | self.assertTrue(m.empty()) 161 | m.a = 1 162 | self.assertFalse(m.empty()) 163 | self.assertTrue(m.b.empty()) 164 | self.assertIsInstance(m.b, DotMap) 165 | 166 | 167 | class TestDynamic(unittest.TestCase): 168 | def test(self): 169 | m = DotMap() 170 | m.still.works 171 | m.sub.still.works 172 | nonDynamic = DotMap(_dynamic=False) 173 | 174 | def assignNonDynamic(): 175 | nonDynamic.no 176 | 
self.assertRaises(KeyError, assignNonDynamic) 177 | 178 | nonDynamicWithInit = DotMap(m, _dynamic=False) 179 | nonDynamicWithInit.still.works 180 | nonDynamicWithInit.sub.still.works 181 | 182 | def assignNonDynamicWithInit(): 183 | nonDynamicWithInit.no.creation 184 | self.assertRaises(KeyError, assignNonDynamicWithInit) 185 | 186 | 187 | class TestRecursive(unittest.TestCase): 188 | def test(self): 189 | m = DotMap() 190 | m.a = 5 191 | m_id = id(m) 192 | m.recursive = m 193 | self.assertEqual(id(m.recursive.recursive.recursive), m_id) 194 | outStr = str(m) 195 | self.assertIn('''a=5''', outStr) 196 | self.assertIn('''recursive=DotMap(...)''', outStr) 197 | d = m.toDict() 198 | d_id = id(d) 199 | d['a'] = 5 200 | d['recursive'] = d 201 | d['recursive']['recursive']['recursive'] 202 | self.assertEqual(id(d['recursive']['recursive']['recursive']), d_id) 203 | outStr = str(d) 204 | self.assertIn(''''a': 5''', outStr) 205 | self.assertIn('''recursive': {...}''', outStr) 206 | m2 = DotMap(d) 207 | m2_id = id(m2) 208 | self.assertEqual(id(m2.recursive.recursive.recursive), m2_id) 209 | outStr2 = str(m2) 210 | self.assertIn('''a=5''', outStr2) 211 | self.assertIn('''recursive=DotMap(...)''', outStr2) 212 | 213 | 214 | class Testkwarg(unittest.TestCase): 215 | def test(self): 216 | a = {'1': 'a', '2': 'b'} 217 | b = DotMap(a, _dynamic=False) 218 | 219 | def capture(**kwargs): 220 | return kwargs 221 | self.assertEqual(a, capture(**b.toDict())) 222 | 223 | 224 | class TestDeepCopy(unittest.TestCase): 225 | def test(self): 226 | import copy 227 | original = DotMap() 228 | original.a = 1 229 | original.b = 3 230 | shallowCopy = original 231 | deepCopy = copy.deepcopy(original) 232 | self.assertEqual(original, shallowCopy) 233 | self.assertEqual(id(original), id(shallowCopy)) 234 | self.assertEqual(original, deepCopy) 235 | self.assertNotEqual(id(original), id(deepCopy)) 236 | original.a = 2 237 | self.assertEqual(original, shallowCopy) 238 | self.assertNotEqual(original, deepCopy) 239 | 240 | def test_order_preserved(self): 241 | import copy 242 | original = DotMap() 243 | original.a = 1 244 | original.b = 2 245 | original.c = 3 246 | deepCopy = copy.deepcopy(original) 247 | orderedPairs = [] 248 | for k, v in original.iteritems(): 249 | orderedPairs.append((k, v)) 250 | for i, (k, v) in enumerate(deepCopy.iteritems()): 251 | self.assertEqual(k, orderedPairs[i][0]) 252 | self.assertEqual(v, orderedPairs[i][1]) 253 | 254 | 255 | class TestDotMapTupleToDict(unittest.TestCase): 256 | def test(self): 257 | m = DotMap({'a': 1, 'b': (11, 22, DotMap({'c': 3}))}) 258 | d = m.toDict() 259 | self.assertEqual(d, {'a': 1, 'b': (11, 22, {'c': 3})}) 260 | 261 | 262 | class TestOrderedDictInit(unittest.TestCase): 263 | def test(self): 264 | from collections import OrderedDict 265 | o = OrderedDict([('a', 1), ('b', 2), ('c', [OrderedDict([('d', 3)])])]) 266 | m = DotMap(o) 267 | self.assertIsInstance(m, DotMap) 268 | self.assertIsInstance(m.c[0], DotMap) 269 | 270 | 271 | class TestEmptyAdd(unittest.TestCase): 272 | def test_base(self): 273 | m = DotMap() 274 | for i in range(7): 275 | m.counter += 1 276 | self.assertNotIsInstance(m.counter, DotMap) 277 | self.assertIsInstance(m.counter, int) 278 | self.assertEqual(m.counter, 7) 279 | 280 | def test_various(self): 281 | m = DotMap() 282 | m.a.label = 'test' 283 | m.a.counter += 2 284 | self.assertIsInstance(m.a, DotMap) 285 | self.assertEqual(m.a.label, 'test') 286 | self.assertNotIsInstance(m.a.counter, DotMap) 287 | self.assertIsInstance(m.a.counter, int) 288 | 
self.assertEqual(m.a.counter, 2) 289 | m.a.counter += 1 290 | self.assertEqual(m.a.counter, 3) 291 | 292 | def test_proposal(self): 293 | my_counters = DotMap() 294 | pages = [ 295 | 'once upon a time', 296 | 'there was like this super awesome prince', 297 | 'and there was this super rad princess', 298 | 'and they had a mutually respectful, egalitarian relationship', 299 | 'the end' 300 | ] 301 | for stuff in pages: 302 | my_counters.page += 1 303 | self.assertIsInstance(my_counters, DotMap) 304 | self.assertNotIsInstance(my_counters.page, DotMap) 305 | self.assertIsInstance(my_counters.page, int) 306 | self.assertEqual(my_counters.page, 5) 307 | 308 | def test_string_addition(self): 309 | m = DotMap() 310 | m.quote += 'lions' 311 | m.quote += ' and tigers' 312 | m.quote += ' and bears' 313 | m.quote += ', oh my' 314 | self.assertEqual(m.quote, 'lions and tigers and bears, oh my') 315 | 316 | def test_strange_addition(self): 317 | m = DotMap() 318 | m += "I'm a string now" 319 | self.assertIsInstance(m, str) 320 | self.assertNotIsInstance(m, DotMap) 321 | self.assertEqual(m, "I'm a string now") 322 | m2 = DotMap() + "I'll replace that DotMap" 323 | self.assertEqual(m2, "I'll replace that DotMap") 324 | 325 | def test_protected_hierarchy(self): 326 | m = DotMap() 327 | m.protected_parent.key = 'value' 328 | 329 | def protectedFromAddition(): 330 | m.protected_parent += 1 331 | self.assertRaises(TypeError, protectedFromAddition) 332 | 333 | def test_type_error_raised(self): 334 | m = DotMap() 335 | 336 | def badAddition(): 337 | m.a += 1 338 | m.a += ' and tigers' 339 | self.assertRaises(TypeError, badAddition) 340 | 341 | 342 | # Test classes for SubclassTestCase below 343 | 344 | # class that overrides __getitem__ 345 | class MyDotMap(DotMap): 346 | def __getitem__(self, k): 347 | return super(MyDotMap, self).__getitem__(k) 348 | 349 | 350 | # subclass with existing property 351 | class PropertyDotMap(MyDotMap): 352 | def __init__(self, *args, **kwargs): 353 | super(MyDotMap, self).__init__(*args, **kwargs) 354 | self._myprop = None 355 | 356 | @property 357 | def my_prop(self): 358 | if not self._myprop: 359 | self._myprop = PropertyDotMap({'nested_prop': 123}) 360 | return self._myprop 361 | 362 | 363 | class SubclassTestCase(unittest.TestCase): 364 | def test_nested_subclass(self): 365 | my = MyDotMap() 366 | my.x.y.z = 123 367 | self.assertEqual(my.x.y.z, 123) 368 | self.assertIsInstance(my.x, MyDotMap) 369 | self.assertIsInstance(my.x.y, MyDotMap) 370 | 371 | def test_subclass_with_property(self): 372 | p = PropertyDotMap() 373 | self.assertIsInstance(p.my_prop, PropertyDotMap) 374 | self.assertEqual(p.my_prop.nested_prop, 123) 375 | p.my_prop.second.third = 456 376 | self.assertIsInstance(p.my_prop.second, PropertyDotMap) 377 | self.assertEqual(p.my_prop.second.third, 456) 378 | -------------------------------------------------------------------------------- /provisioning/upi/setup-manifests.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import json 4 | import base64 5 | import sys 6 | import os 7 | import glob 8 | from dotmap import DotMap 9 | import yaml 10 | from collections import OrderedDict 11 | 12 | resource_group = sys.argv[1] 13 | infra_id = sys.argv[2] 14 | 15 | with open('openshift/99_cloud-creds-secret.yaml') as crfile: 16 | yamls = yaml.load(crfile, Loader=yaml.BaseLoader) 17 | crfile.close() 18 | yamls['data']['azure_resource_prefix'] = base64.b64encode(bytes(infra_id, 'utf-8')).decode('utf-8') 19 
| yamls['data']['azure_resourcegroup'] = base64.b64encode(bytes(resource_group, 'utf-8')).decode('utf-8') 20 | with open('openshift/99_cloud-creds-secret.yaml', 'w') as crout: 21 | yaml.dump(yamls, crout, default_flow_style=False) 22 | crout.close() 23 | 24 | with open('manifests/cloud-provider-config.yaml') as file: 25 | yamlx = yaml.load(file, Loader=yaml.BaseLoader) 26 | file.close() 27 | jsondata = yamlx['data']['config'] 28 | jsonx = json.loads(jsondata, object_pairs_hook=OrderedDict) 29 | config = DotMap(jsonx) 30 | config.resourceGroup = resource_group 31 | config.vnetName = infra_id + "-vnet" 32 | config.vnetResourceGroup = resource_group 33 | config.subnetName = infra_id + "-worker-subnet" 34 | config.securityGroupName = infra_id + "-node-nsg" 35 | config.routeTableName = "" 36 | config.cloudProviderRateLimit = False 37 | config.azure_resourcegroup = resource_group 38 | jsondata = json.dumps(dict(**config.toDict()), indent='\t') 39 | jsonstr = str(jsondata) 40 | yamlx['data']['config'] = jsonstr + '\n' 41 | yamlx['metadata']['creationTimestamp'] = None 42 | yamlstr = yaml.dump(yamlx, default_style='\"', width=4096) 43 | yamlstr = yamlstr.replace('!!null "null"', 'null') 44 | with open('manifests/cloud-provider-config.yaml', 'w') as outfile: 45 | outfile.write(yamlstr) 46 | outfile.close() 47 | 48 | with open('manifests/cluster-infrastructure-02-config.yml') as file: 49 | yamlx = yaml.load(file, Loader=yaml.BaseLoader) 50 | file.close() 51 | yamlx['status']['platformStatus']['azure']['resourceGroupName'] = resource_group 52 | yamlx['status']['platformStatus']['azure']['networkResourceGroupName'] = resource_group 53 | yamlx['status']['platformStatus']['azure']['virtualNetwork'] = infra_id + "-vnet" 54 | yamlx['status']['platformStatus']['azure']['controlPlaneSubnet'] = infra_id + "-master-subnet" 55 | yamlx['status']['platformStatus']['azure']['computeSubnet'] = infra_id + "-worker-subnet" 56 | yamlx['status']['infrastructureName'] = infra_id 57 | yamlx['metadata']['creationTimestamp'] = None 58 | with open('manifests/cluster-infrastructure-02-config.yml', 'w') as outfile: 59 | yaml.dump(yamlx, outfile, default_flow_style=False) 60 | outfile.close() -------------------------------------------------------------------------------- /res/aad-permissions.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/aad-permissions.png -------------------------------------------------------------------------------- /res/cloud-shell.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/cloud-shell.png -------------------------------------------------------------------------------- /res/dns-zone-ns.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/dns-zone-ns.png -------------------------------------------------------------------------------- /res/mohamed-saif.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/mohamed-saif.jpg -------------------------------------------------------------------------------- /res/new-dns-zone.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/new-dns-zone.png -------------------------------------------------------------------------------- /res/ocp-azure-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-azure-architecture.png -------------------------------------------------------------------------------- /res/ocp-cluster-settings.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-cluster-settings.png -------------------------------------------------------------------------------- /res/ocp-console.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-console.png -------------------------------------------------------------------------------- /res/ocp-installer-files.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-installer-files.png -------------------------------------------------------------------------------- /res/ocp-namespaces.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-namespaces.png -------------------------------------------------------------------------------- /res/ocp-rg.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-rg.png -------------------------------------------------------------------------------- /res/ocp-storage-classes.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-storage-classes.png -------------------------------------------------------------------------------- /res/ocp-storage-images.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-storage-images.png -------------------------------------------------------------------------------- /res/ocp-storage-primary.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-storage-primary.png -------------------------------------------------------------------------------- /res/ocp-subdomain.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/mohamedsaif/OpenShift-On-Azure/ee9929c3c621f9e6539d644deb1a244b0ad45b2d/res/ocp-subdomain.png --------------------------------------------------------------------------------
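For context on how the UPI artifacts above fit together: `setup-manifests.py` patches the installer-generated manifests with the target resource group and cluster Infra ID (its two positional arguments), and templates such as `06_workers.json` are then deployed with the Azure CLI using the parameters they declare (`baseName`, `workerIgnition`, `sshKeyData`, `numberOfNodes`). The following is a minimal sketch, not taken from this repository's scripts: the resource group, Infra ID, file paths, and node count are placeholder assumptions, and it assumes an Azure CLI version that provides `az deployment group create`.

```sh
#!/bin/bash
# Minimal sketch with assumed placeholder values (not from this repo's scripts).
# Run from the installer asset directory: setup-manifests.py reads
# ./manifests and ./openshift relative to the working directory.

RESOURCE_GROUP="ocp-cluster-rg"   # placeholder: target resource group
INFRA_ID="ocp-abc12"              # placeholder: cluster Infra ID (used as baseName)

# After "openshift-install create manifests": patch the cloud-creds secret,
# cloud-provider config, and cluster infrastructure manifest with the chosen
# resource group and infra ID.
python3 setup-manifests.py "$RESOURCE_GROUP" "$INFRA_ID"

# After "openshift-install create ignition-configs": ARM's osProfile.customData
# expects base64, so encode the generated worker Ignition file.
WORKER_IGNITION=$(base64 -w0 < worker.ign)

# Deploy the compute nodes template with its declared parameters.
az deployment group create \
  --resource-group "$RESOURCE_GROUP" \
  --template-file "06_workers.json" \
  --parameters baseName="$INFRA_ID" \
               workerIgnition="$WORKER_IGNITION" \
               sshKeyData="$(cat ~/.ssh/id_rsa.pub)" \
               numberOfNodes=3
```

The per-node nested `Microsoft.Resources/deployments` wrapper in `06_workers.json` means each worker VM and its NIC deploy as an independent incremental deployment, so adding nodes later only creates the missing resources rather than redeploying the existing ones.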