├── .gitignore
├── Demo.md
├── README.md
├── TLS.md
├── appdev
│   └── README.md
├── arc
│   └── README.md
├── aro4-env.sh.template
├── automation
│   ├── README.md
│   ├── aro.bicep
│   ├── main.bicep
│   ├── roleAssignments.bicep
│   └── vnet.bicep
├── azure-pipelines.yml
├── backup
│   ├── .gitignore
│   └── README.md
├── cleanup-failed-clusters.sh
├── firewall
│   ├── README.md
│   └── egress-test-pod.yaml
├── htpasswd-cr.yaml
├── logging
│   ├── .gitignore
│   └── README.md
├── monitoring
│   ├── README.md
│   └── enable-monitoring.sh
├── oc-login.sh
└── sampleapp
    ├── README.md
    ├── sampleapp.deploy.yaml
    └── sampleapp.svc.yaml

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
/htpasswd.txt
/aro-user.htpasswd
/pull-secret.txt
/clientSecret.txt
/manifest.json
/oidc.yaml
/enable-monitoring.sh
/*-sp.json
/winpass.txt
/Misc.md
/cloud-init.txt
**/*.pfx
**/*.cer
**/*.key
**/*.crt
**/*.csr
**/*.pem
**/*.srl
WorkspaceProps.json
aro4-env.sh
aro4-env.ps1
**/*-sp.json

/openshift-client-linux.tar.gz

--------------------------------------------------------------------------------
/Demo.md:
--------------------------------------------------------------------------------
Demo App
========

## Demo of Source-to-Image (S2I) for a microservices app

Login via `oc` CLI
------------------

```sh
source ./aro4-env.sh

API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER | jq -r .kubeadminPassword)

oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL
oc status

# Create project
PROJECT=workshop
oc new-project $PROJECT

# Deploy MongoDB
oc get templates -n openshift
oc process openshift//mongodb-persistent -o yaml
oc process openshift//mongodb-persistent \
  -p MONGODB_USER=ratingsuser \
  -p MONGODB_PASSWORD=ratingspassword \
  -p MONGODB_DATABASE=ratingsdb \
  -p MONGODB_ADMIN_PASSWORD=ratingspassword | oc create -f -
oc status

# Deploy Ratings API
oc new-app https://github.com/microsoft/rating-api --strategy=source
oc set env deploy rating-api MONGODB_URI=mongodb://ratingsuser:ratingspassword@mongodb.$PROJECT.svc.cluster.local:27017/ratingsdb

oc get svc rating-api
oc describe bc/rating-api

# Deploy Ratings Frontend
oc new-app https://github.com/microsoft/rating-web --strategy=source
oc set env deploy rating-web API=http://rating-api:8080

# Expose the service using one of the route methods below (depending on your setup):

# 1. Default route
oc expose svc/rating-web

# 2. Edge route - terminates TLS at the router (use this if you set up your custom domain on the ingress router)
oc create route edge --service=rating-web

# 3. Edge route with another domain, not the default router domain (optional CA, cert and key, if different from the default ingress/router setup)
oc create route edge --service=rating-web --hostname=<hostname> --ca-cert=<ca-cert-file> --cert=<cert-file> --key=<key-file>
# Test when using a different domain to the default router
curl https://rating-web.<domain> --resolve 'rating-web.<domain>:443:<ingress-ip>'

# If using App Gateway, update the HTTP Settings and specify a host name override with the specific domain name (the backend pool can use an IP address)

# 4. Re-encrypt

# TODO

# 5. Pass-through

# TODO

oc get route rating-web
```
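For the two TODO route types above, the standard OpenShift route commands could look like this (a hedged sketch: the hostnames and certificate files are placeholders, and `--dest-ca-cert` is only needed when the router doesn't already trust the pod's serving certificate):

```sh
# 4. Re-encrypt - TLS terminates at the router and is re-encrypted to the pod
oc create route reencrypt --service=rating-web \
  --hostname=<hostname> --cert=<cert-file> --key=<key-file> \
  --dest-ca-cert=<dest-ca-file>

# 5. Pass-through - TLS is passed straight through to the pod, which must serve its own certificate
oc create route passthrough --service=rating-web --hostname=<hostname>
```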
Access the web app using the URL from above.

Things to explore:

* Web console
* Admin perspective (logs, deployments, etc.)
* Developer perspective (builds, topology, etc.)

Cleanup:

```sh
oc delete project $PROJECT
```

## References

* [ARO Workshop](https://aroworkshop.io/)

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
Azure Red Hat OpenShift 4 - Demo
================================

Demonstration of various Azure Red Hat OpenShift features and the basic steps to create and configure a cluster.
Always refer to the [official docs](https://docs.microsoft.com/en-us/azure/openshift/) for the latest up-to-date documentation, as things may have changed since this repo was last updated.

Note:

* Red Hat's [Managed OpenShift Black Belt Team](https://cloud.redhat.com/experts/) also has great documentation on configuring ARO, so check that out (it's more up-to-date than this repo!).

Index
-----

* [Prerequisites](#prerequisites)
* [VNET setup](#create-the-cluster-virtual-network)
* [Create a default cluster](#create-a-default-cluster)
* [Create a private cluster](#create-a-private-cluster)
* [Configure Custom Domain and TLS](./TLS.md)
* [Configure bastion host access](#optional-configure-bastion-vnet-and-host-for-private-cluster-access)
* [Use an App Gateway](#optional-provision-an-application-gateway-v2-for-tls-and-waf)
* [Configure Identity Providers](#add-an-identity-provider-to-add-other-users)
* [Setup user roles](#setup-user-roles)
* [Setup in-cluster logging - Elasticsearch and Kibana](./logging)
* [Setup egress firewall - Azure Firewall](./firewall)
* [Onboard to Azure Monitor](#onboard-to-azure-monitor)
* [Deploy a demo app](./Demo.md)
* [Automation with Bicep (ARM DSL)](./automation)

Prerequisites
-------------

* Install the latest [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
* Log in to your Azure subscription from a console window:

```sh
az login
# Follow SSO prompts to authenticate
az account list -o table
az account set -s <subscription-id>
```

* Register the `Microsoft.RedHatOpenShift` resource provider to be able to create ARO clusters (only required once per Azure subscription):

```sh
az provider register -n Microsoft.RedHatOpenShift --wait
az provider show -n Microsoft.RedHatOpenShift -o table
```

* Install the [OpenShift CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) for managing the cluster

```sh
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
tar -zxvf openshift-client-linux.tar.gz oc
sudo mv oc /usr/local/bin/
oc version
```

* (Optional) Install [Helm v3](https://helm.sh/docs/intro/install/) if you want to integrate with Azure Monitor
* (Optional) Install the `htpasswd` utility if you want to try HTPasswd as an OCP Identity Provider:

```sh
# Ubuntu
sudo apt install apache2-utils -y
```

Setup your shell environment file
---------------------------------

```sh
cp aro4-env.sh.template aro4-env.sh
# Edit aro4-env.sh to suit your environment
```
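A minimal sketch of what `aro4-env.sh` might contain, based on the variables used throughout this repo (the values below are assumptions - substitute your own):

```sh
# Hypothetical example values
RESOURCEGROUP="aro-demo"        # Resource group for the cluster and VNET
LOCATION="australiaeast"        # Azure region
CLUSTER="aro-cluster"           # ARO cluster name
VNET="aro-vnet"                 # Cluster virtual network name
UTILS_VNET="utils-vnet"         # Bastion/jumpbox virtual network name
DOMAIN="aro.example.com"        # Custom domain (FQDN) or a short name
```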
Create the cluster virtual network (Azure CLI)
----------------------------------------------

The VNET and subnet sizes here are for illustrative purposes only.
You need to design the network according to your scale needs and existing networks (to avoid overlaps).

```sh
# Source variables into your shell environment
source ./aro4-env.sh

# Create resource group to hold cluster resources
az group create -g $RESOURCEGROUP -l $LOCATION

# Create the ARO virtual network
az network vnet create \
  --resource-group $RESOURCEGROUP \
  --name $VNET \
  --address-prefixes 10.0.0.0/22

# Add two empty subnets to your virtual network (master subnet and worker subnet)
az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name $VNET \
  --name master-subnet \
  --address-prefixes 10.0.2.0/24 \
  --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name $VNET \
  --name worker-subnet \
  --address-prefixes 10.0.3.0/24 \
  --service-endpoints Microsoft.ContainerRegistry

# Disable network policies for Private Link Service on your virtual network and subnets.
# This is a requirement for the ARO service to access and manage the cluster.
az network vnet subnet update \
  --name master-subnet \
  --resource-group $RESOURCEGROUP \
  --vnet-name $VNET \
  --disable-private-link-service-network-policies true
```

Create a default cluster (Azure CLI)
------------------------------------

See the [official instructions](https://docs.microsoft.com/en-us/azure/openshift/tutorial-create-cluster).

It normally takes about 35 minutes to create a cluster.

```sh
# Create the ARO cluster
az aro create \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --vnet $VNET \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --pull-secret @pull-secret.txt \
  --domain $DOMAIN

# pull-secret: OPTIONAL, but recommended
# domain: OPTIONAL custom domain for ARO (set in aro4-env.sh)
```
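Once creation finishes, a quick sanity check (a sketch using commands that also appear later in this guide):

```sh
# Confirm the cluster provisioned successfully and grab the console URL
az aro show -g $RESOURCEGROUP -n $CLUSTER --query provisioningState -o tsv
az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv

# Retrieve the initial kubeadmin credentials
az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER
```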
Change Ingress Controller (public to private)
---------------------------------------------

If you have created a cluster with a public ingress (the default), you can change that to private later, or add a second ingress to handle private traffic whilst still serving public traffic.

* TODO

Create a private cluster (Azure CLI)
------------------------------------

See the [official instructions](https://docs.microsoft.com/en-us/azure/openshift/howto-create-private-cluster-4x).

It normally takes about 35 minutes to create a cluster.

```sh
# Create the ARO cluster
az aro create \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --vnet $VNET \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --apiserver-visibility Private \
  --ingress-visibility Private \
  --pull-secret @pull-secret.txt \
  --domain $DOMAIN

# pull-secret: OPTIONAL, but recommended
# domain: OPTIONAL custom domain for ARO (set in aro4-env.sh)
```

(Optional) Configure custom domain and CA
-----------------------------------------

If you used the `--domain` flag with an FQDN (e.g. `my.domain.com`) to create your cluster, you'll need to configure DNS and a certificate authority for your API server and apps ingress: follow the steps in [TLS.md](./TLS.md).

If you used a short name (e.g. `mycluster`) with the `--domain` flag, you don't need to set up a custom domain or configure DNS/certs.
In that case, you'll be assigned an FQDN ending in `aroapp.io`, like so:

```sh
https://console-openshift-console.apps.<domain>.<location>.aroapp.io/
```

(Optional) Configure bastion VNET and host (for private cluster access)
-----------------------------------------------------------------------

In order to connect to a private Azure Red Hat OpenShift cluster, you will need to perform CLI commands from a host that is either in the virtual network you created or in a virtual network that is peered with the virtual network the cluster was deployed to -- this could also be an on-premises host connected over ExpressRoute.

### Create the Bastion VNET and subnet

```sh
az network vnet create -g $RESOURCEGROUP -n utils-vnet --address-prefix 10.0.4.0/22 --subnet-name AzureBastionSubnet --subnet-prefix 10.0.4.0/27

az network public-ip create -g $RESOURCEGROUP -n bastion-ip --sku Standard
```

### Create the Bastion service

```sh
az network bastion create --name bastion-service --public-ip-address bastion-ip --resource-group $RESOURCEGROUP --vnet-name $UTILS_VNET --location $LOCATION
```

### Peer the bastion VNET and the ARO VNET

See how to peer VNETs from the CLI: https://docs.microsoft.com/en-us/azure/virtual-network/tutorial-connect-virtual-networks-cli#peer-virtual-networks

```sh
# Get the id for the ARO VNET
vNet1Id=$(az network vnet show \
  --resource-group $RESOURCEGROUP \
  --name $VNET \
  --query id --out tsv)

# Get the id for the utils VNET
vNet2Id=$(az network vnet show \
  --resource-group $RESOURCEGROUP \
  --name $UTILS_VNET \
  --query id \
  --out tsv)

az network vnet peering create \
  --name aro-utils-peering \
  --resource-group $RESOURCEGROUP \
  --vnet-name $VNET \
  --remote-vnet $vNet2Id \
  --allow-vnet-access

az network vnet peering create \
  --name utils-aro-peering \
  --resource-group $RESOURCEGROUP \
  --vnet-name $UTILS_VNET \
  --remote-vnet $vNet1Id \
  --allow-vnet-access
```

### Create the utility host subnet

```sh
az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name $UTILS_VNET \
  --name utils-hosts \
  --address-prefixes 10.0.5.0/24 \
  --service-endpoints Microsoft.ContainerRegistry
```

### Create the utility host

```sh
STORAGE_ACCOUNT="jumpboxdiag$(openssl rand -hex 5)"
az storage account create -n $STORAGE_ACCOUNT -g $RESOURCEGROUP -l $LOCATION --sku Standard_LRS

winpass=$(openssl rand -base64 12)
echo $winpass > winpass.txt

az vm create \
  --resource-group $RESOURCEGROUP \
  --name jumpbox \
  --image MicrosoftWindowsServer:WindowsServer:2022-Datacenter:latest \
  --vnet-name $UTILS_VNET \
  --subnet utils-hosts \
  --public-ip-address "" \
  --admin-username azureuser \
  --admin-password $winpass \
  --authentication-type password \
  --boot-diagnostics-storage $STORAGE_ACCOUNT \
  --generate-ssh-keys

az vm open-port --port 3389 --resource-group $RESOURCEGROUP --name jumpbox
```

**Recommended**: Enable update management or automatic guest OS patching.

### Connect to the utility host

Connect to the `jumpbox` host using the **Bastion** connection type and enter the username (`azureuser`) and password (use the value of `$winpass` set above, or view the file `winpass.txt`).

Install the Microsoft Edge browser (if you used the Windows Server 2022 image for your VM then you can skip this step):

* Open a PowerShell prompt

```powershell
$Url = "http://dl.delivery.mp.microsoft.com/filestreamingservice/files/c39f1d27-cd11-495a-b638-eac3775b469d/MicrosoftEdgeEnterpriseX64.msi"
Invoke-WebRequest -UseBasicParsing -Uri $Url -OutFile "\MicrosoftEdgeEnterpriseX64.msi"
Start-Process msiexec.exe -Wait -ArgumentList '/I \MicrosoftEdgeEnterpriseX64.msi /norestart /qn'
```

Or you can [Download and deploy Microsoft Edge for business](https://www.microsoft.com/en-us/edge/business/download).
Install utilities:

* Install the [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli)
* Install [Git For Windows](https://git-scm.com/) so you have access to a Bash shell
* Log in to your Azure subscription from a console window:

```sh
az login
# Follow SSO prompts (or create a Service Principal and login with that)
az account list -o table
az account set -s <subscription-id>
```

* Install the [OpenShift CLI](https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/) for managing the cluster ([example steps](https://www.openshift.com/blog/installing-oc-tools-windows))
* (Optional) Install [Helm v3](https://helm.sh/docs/intro/install/) if you want to integrate with Azure Monitor

Given this is a Windows jumpbox, the shell commands in this repo assume a Bash shell such as Git Bash.

(Optional) Provision an Application Gateway v2 for TLS and WAF
--------------------------------------------------------------

This approach does not use the AppGw Ingress Controller; rather, it deploys an App Gateway WAFv2 in front of the ARO cluster and load-balances traffic to the exposed ARO routes for services. This method can be used to selectively expose private routes for public access rather than exposing the routes directly.

```sh
az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name utils-vnet \
  --name myAGSubnet \
  --address-prefixes 10.0.6.0/24 \
  --service-endpoints Microsoft.ContainerRegistry

az network public-ip create \
  --resource-group $RESOURCEGROUP \
  --name myAGPublicIPAddress \
  --allocation-method Static \
  --sku Standard
```

If your ARO cluster is using private ingress, you'll need to peer the AppGw VNET and the ARO VNET (if you haven't already done so).

```sh
az network application-gateway create \
  --name myAppGateway \
  --location $LOCATION \
  --resource-group $RESOURCEGROUP \
  --capacity 1 \
  --sku WAF_v2 \
  --http-settings-cookie-based-affinity Disabled \
  --public-ip-address myAGPublicIPAddress \
  --vnet-name utils-vnet \
  --subnet myAGSubnet
```

Create or procure your App Gateway frontend PKCS #12 (*.pfx file) certificate chain (e.g. see below for a manual approach using Let's Encrypt):

```sh
# Specify the frontend domain for App Gw (must be different to the internal ARO domain, i.e. not *.apps.<domain>, but you can use *.<domain>)
APPGW_DOMAIN=$DOMAIN
./acme.sh --issue --dns -d "*.$APPGW_DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key
# Add the TXT entry for _acme-challenge to the $DOMAIN record set, then...
./acme.sh --renew --dns -d "*.$APPGW_DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key

cd ~/.acme.sh/\*.$APPGW_DOMAIN/
cat fullchain.cer \*.$APPGW_DOMAIN.key > gw-bundle.pem
openssl pkcs12 -export -out gw-bundle.pfx -in gw-bundle.pem
```

TODO: The following steps require Azure Portal access until I get around to writing the CLI/PowerShell steps.
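Until then, here is an untested sketch of roughly equivalent Azure CLI commands for the Portal steps below (the `az network application-gateway` subgroups used here do exist, but treat the exact parameters as assumptions and verify with `az network application-gateway -h` before relying on them):

```sh
GW=myAppGateway

# Upload the PFX frontend certificate
az network application-gateway ssl-cert create -g $RESOURCEGROUP --gateway-name $GW \
  -n gw-bundle --cert-file gw-bundle.pfx --cert-password '<pfx-password>'

# HTTPS listener on port 443 using the certificate (multi-site host name)
az network application-gateway frontend-port create -g $RESOURCEGROUP --gateway-name $GW \
  -n https-port --port 443
az network application-gateway http-listener create -g $RESOURCEGROUP --gateway-name $GW \
  -n aro-route-https-listener --frontend-port https-port --ssl-cert gw-bundle \
  --host-name rating-web.$APPGW_DOMAIN

# Backend pool pointing at the exposed ARO route FQDN
az network application-gateway address-pool create -g $RESOURCEGROUP --gateway-name $GW \
  -n aro-routes --servers rating-web-workshop.apps.$DOMAIN

# HTTPS backend settings that pick the host name from the backend target
az network application-gateway http-settings create -g $RESOURCEGROUP --gateway-name $GW \
  -n aro-route-https-settings --port 443 --protocol Https --host-name-from-backend-pool true

# Health probe (left as a TODO in the Portal steps below)
az network application-gateway probe create -g $RESOURCEGROUP --gateway-name $GW \
  -n aro-route-probe --protocol Https --path / --host-name-from-http-settings true

# Rule tying the listener, pool and settings together
az network application-gateway rule create -g $RESOURCEGROUP --gateway-name $GW \
  -n rating-web-rule --http-listener aro-route-https-listener \
  --address-pool aro-routes --http-settings aro-route-https-settings
```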
Define Azure DNS entries for the App Gateway frontend IP:

* Create a `*` A record with the public IP address of your App Gateway in your APPGW_DOMAIN domain (or better yet, create an alias record pointing to the public IP resource)

In the Listeners section, create a new HTTPS listener:

* Listener name: aro-route-https-listener
* Frontend IP: Public
* Port: 443
* Protocol: HTTPS
* HTTP Settings - choose to Upload a Certificate (upload the PFX file from earlier)
  * Cert Name: gw-bundle
  * PFX certificate file: gw-bundle.pfx
  * Password: ****** (what you used when creating the PFX file)
* Additional settings - Multi site: (enter your site host names, comma separated) - note: wildcard hostnames are not supported yet
  * e.g. rating-web.<appgw-domain>
  * Note: You can also create multiple listeners - one per site, re-using the certificate and selecting a basic site

* Define backend pools to point to the exposed ARO routes x n (one per web site/API)
* Define backend HTTP settings (HTTPS, 443, trusted CA) x 1

In the Backend pools section, create a new backend pool:

* Name: aro-routes
* Backend Targets: Enter the FQDN(s), e.g. `rating-web-workshop.apps.<domain>`
* Click Add

In the HTTP settings section, create a new HTTP setting:

* HTTP settings name: aro-route-https-settings
* Backend protocol: HTTPS
* Backend port: 443
* Use well known CA certificate: Yes (if you used one; otherwise upload your CA .cer file)
* Override with new host name: Yes
* Choose: Pick host name from backend target

In the Rules section, define rules x n (one per website/API):

* Name: e.g. rating-web-rule
* Select the HTTPS listener above
* Enter backend target details - select the target and HTTP settings created above
* Click 'Add'

TODO: Define Health probes

Access the website/API via App Gateway: e.g. `https://rating-web.<appgw-domain>/`

Create an ARO cluster and VNET with Bicep
-----------------------------------------

See: [automation](./automation/) section.

Login to Web console
--------------------

```sh
# Get Console URL from command output
az aro list -o table

webConsole=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv | tr -d '[:space:]')
echo $webConsole
# ==> https://console-openshift-console.apps.<domain>

# Get kubeadmin username and password
az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER
```

Login via `oc` CLI
------------------

```sh
API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER | jq -r .kubeadminPassword)

oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL
oc status
```

Add an Identity Provider to add other users
-------------------------------------------

Add one or more identity providers to allow other users to log in. `kubeadmin` is intended as a temporary login to set up the cluster.

### HTPasswd

Configure the [HTPasswd](https://docs.openshift.com/container-platform/4.3/authentication/identity_providers/configuring-htpasswd-identity-provider.html) identity provider.
```sh
htpasswd -c -B -b aro-user.htpasswd <username> <password>
htpasswd -b $(pwd)/aro-user.htpasswd <username2> <password2>
htpasswd -b $(pwd)/aro-user.htpasswd <username3> <password3>

oc create secret generic htpass-secret --from-file=htpasswd=./aro-user.htpasswd -n openshift-config
oc apply -f htpasswd-cr.yaml
```

### Azure AD

See the [CLI steps](https://docs.microsoft.com/en-us/azure/openshift/configure-azure-ad-cli) to configure Azure AD, or see below for the Portal steps.

Configure the OAuth callback URL:

```sh
domain=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query clusterProfile.domain -o tsv | tr -d '[:space:]')
location=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query location -o tsv | tr -d '[:space:]')
apiServer=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv | tr -d '[:space:]')
webConsole=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv | tr -d '[:space:]')

# If using default domain
oauthCallbackURL=https://oauth-openshift.apps.$domain.$location.aroapp.io/oauth2callback/AAD

# If using custom domain
oauthCallbackURL=https://oauth-openshift.apps.$DOMAIN/oauth2callback/AAD
```

Create an Azure Active Directory application:

```sh
clientSecret=$(openssl rand -base64 16)
echo $clientSecret > clientSecret.txt

appDisplayName="aro-auth-$(openssl rand -hex 4)"

appId=$(az ad app create \
  --query appId -o tsv \
  --display-name $appDisplayName \
  --reply-urls $oauthCallbackURL \
  --password $clientSecret)

tenantId=$(az account show --query tenantId -o tsv | tr -d '[:space:]')
```

Create a manifest file for optional claims to include in the ID Token:

```sh
cat > manifest.json << EOF
[{
  "name": "upn",
  "source": null,
  "essential": false,
  "additionalProperties": []
},
{
  "name": "email",
  "source": null,
  "essential": false,
  "additionalProperties": []
}]
EOF
```

Update the AAD application's optionalClaims with the manifest:

```sh
az ad app update \
  --set optionalClaims.idToken=@manifest.json \
  --id $appId
```

Update the AAD application's scope permissions:

```sh
# Azure Active Directory Graph.User.Read = 311a71cc-e848-46a1-bdf8-97ff7156d8e6
az ad app permission add \
  --api 00000002-0000-0000-c000-000000000000 \
  --api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
  --id $appId
```

Login to the oc CLI as `kubeadmin`.
Create a secret to store the AAD application secret:

```sh
oc create secret generic openid-client-secret-azuread \
  --namespace openshift-config \
  --from-literal=clientSecret=$clientSecret
```

Create the OIDC configuration file for AAD:

```sh
cat > oidc.yaml << EOF
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: $appId
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/$tenantId
EOF
```

Apply the configuration to the cluster:

```sh
oc apply -f oidc.yaml
```

Verify login to the ARO console using AAD.

See other [supported identity providers](https://docs.openshift.com/container-platform/4.4/authentication/understanding-identity-provider.html#supported-identity-providers).

Setup user roles
----------------

You can assign various roles or cluster roles to users.

```sh
oc adm policy add-cluster-role-to-user <role> <username>
```

You'll want to have at least one cluster-admin (similar to the `kubeadmin` user):

```sh
oc adm policy add-cluster-role-to-user cluster-admin <username>
```

If you get sign-in errors, you may need to delete users and/or identities:

```sh
oc get user
oc delete user <username>
oc get identity
oc delete identity <identity-name>
```
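Roles can also be granted per project rather than cluster-wide, for example (a sketch; the project name and username are placeholders):

```sh
# Admin rights on a single project only
oc adm policy add-role-to-user admin <username> -n <project>

# Read-only access to a project
oc adm policy add-role-to-user view <username> -n <project>
```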
Remove the kubeadmin user
-------------------------

See: https://docs.openshift.com/aro/4/authentication/remove-kubeadmin.html

Ensure you have at least one other cluster-admin and sign in as that user; then you can remove the `kubeadmin` user:

```sh
oc delete secrets kubeadmin -n kube-system
```

Set up logging with Elasticsearch and Kibana or log forwarding to a Syslog server
---------------------------------------------------------------------------------

See [logging/](./logging/)

Setup egress firewall with Azure Firewall
-----------------------------------------

See [firewall/](./firewall/)

Onboard to Azure Monitor
------------------------

Refer to the [ARO Monitoring README](./monitoring) in this repo.

Deploy a demo app
-----------------

Follow the [Demo](./Demo.md) steps to deploy a sample microservices app.

Automation with Bicep (ARM DSL)
-------------------------------

See [Bicep](./automation/README.md) automation example.

(Optional) Delete cluster
-------------------------

Disable monitoring (if enabled):

```sh
helm del azmon-containers-release-1
```

or, if using Arc-enabled monitoring, follow [these cleanup steps](https://github.com/clarenceb/aro4x-demo/tree/master/monitoring#option-2---arc-enabled-kubernetes-monioring-recommended).

```sh
az aro delete -g $RESOURCEGROUP -n $CLUSTER

# (optional)
az network vnet subnet delete -g $RESOURCEGROUP --vnet-name $VNET -n master-subnet
az network vnet subnet delete -g $RESOURCEGROUP --vnet-name $VNET -n worker-subnet
```

(Optional) Delete the Azure AD application (if using Azure AD for auth).

Clean up clusters in a failed state
-----------------------------------

```sh
./cleanup-failed-clusters.sh
```

References
----------

* [Create an ARO 4 cluster](https://docs.microsoft.com/en-us/azure/openshift/tutorial-create-cluster) - Microsoft Docs
* [Supported identity providers in OCP 4.4](https://docs.openshift.com/container-platform/4.4/authentication/understanding-identity-provider.html#supported-identity-providers)
* [Overview of TLS termination and end to end TLS with Application Gateway](https://docs.microsoft.com/en-us/azure/application-gateway/ssl-overview)

--------------------------------------------------------------------------------
/TLS.md:
--------------------------------------------------------------------------------
Custom domain and TLS certs setup
=================================

Setup a custom domain and certs for your cluster.

See official docs: https://docs.microsoft.com/en-us/azure/openshift/tutorial-create-cluster#prepare-a-custom-domain-for-your-cluster-optional

Pre-requisites
--------------

* You'll need to own a domain and have access to DNS (public or private zone) to create A/TXT records for that domain
* You'll need a CA-signed certificate and private key (e.g. for a wildcard domain) - you can use Let's Encrypt to test this out with free certs

[Azure Key Vault](https://docs.microsoft.com/en-us/azure/key-vault/certificates/certificate-scenarios) can help automate issuance and renewal of certificates for production environments.

Define environment variables
----------------------------

```sh
git clone https://github.com/clarenceb/aro4x-demo.git

cd aro4x-demo/
cp aro4-env.sh.template aro4-env.sh
# Edit aro4-env.sh to suit your environment

source ./aro4-env.sh
```

Configure DNS for default ingress router
----------------------------------------

```sh
# Retrieve the Ingress IP for Azure DNS records
INGRESS_IP="$(az aro show -n $CLUSTER -g $RESOURCEGROUP --query 'ingressProfiles[0].ip' -o tsv)"
```

This may be a public or private IP, depending on the ingress visibility you selected.

Create your Azure DNS zone for `$DOMAIN` (this can be a public or private zone).

Public Zone Ingress Configuration
---------------------------------

```sh
az network dns zone create -g $RESOURCEGROUP -n $DOMAIN
# Or use an existing zone if it exists.
# You need to have configured your domain name registrar to point to this zone.
az network dns zone create --parent-name $DOMAIN -g $RESOURCEGROUP -n apps.$DOMAIN

az network dns record-set a add-record \
  -g $RESOURCEGROUP \
  -z apps.$DOMAIN \
  -n '*' \
  -a $INGRESS_IP

# Optional (good for initial testing): Adjust default TTL from 1 hour (choose an appropriate value, here 5 mins is used)
az network dns record-set a update -g $RESOURCEGROUP -z apps.$DOMAIN -n '*' --set ttl=300
```

Private Zone Ingress Configuration
----------------------------------

Here we'll show how to do this for a private zone, assuming you created a private cluster and have set up a bastion host in the "utils-vnet" as per the main [README](./README.md).

Create the Private DNS zone and link it to the "utils-vnet" so that the DNS records can be resolved from the bastion host in that VNET:

```sh
az network private-dns zone create -g $RESOURCEGROUP -n $DOMAIN

# If you created a Bastion service or have VMs you intend to use as jump hosts, create a vnet link from the private DNS zone to the VNET where you need to resolve the private domains
az network private-dns link vnet create -g $RESOURCEGROUP -n PrivateDomainLink \
  -z $DOMAIN -v $UTILS_VNET -e true

# Create a wildcard `*.apps` A record to point to the Ingress Load Balancer IP
az network private-dns record-set a add-record \
  -g $RESOURCEGROUP \
  -z $DOMAIN \
  -n '*.apps' \
  -a $INGRESS_IP

# Optional (good for initial testing): Adjust default TTL from 1 hour (choose an appropriate value, here 5 mins is used)
az network private-dns record-set a update -g $RESOURCEGROUP -z $DOMAIN -n '*.apps' --set ttl=300
```

Configure DNS for API server endpoint
-------------------------------------

```sh
# Retrieve the API Server IP for Azure DNS records
API_SERVER_IP="$(az aro show -n $CLUSTER -g $RESOURCEGROUP --query 'apiserverProfile.ip' -o tsv)"
```

This may be a public or private IP, depending on the API server visibility you selected.

Create your Azure DNS zone for `$DOMAIN` (this can be a public or private zone).

Public Zone API Server Configuration
------------------------------------

```sh
az network dns zone create --parent-name $DOMAIN -g $RESOURCEGROUP -n api.$DOMAIN

# Create an `api` A record to point to the API Server IP
az network dns record-set a add-record \
  -g $RESOURCEGROUP \
  -z api.$DOMAIN \
  -n '@' \
  -a $API_SERVER_IP

# Optional (good for initial testing): Adjust default TTL from 1 hour (choose an appropriate value, here 5 mins is used)
az network dns record-set a update -g $RESOURCEGROUP -z api.$DOMAIN -n '@' --set ttl=300
```
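Before moving on, it's worth confirming the public records resolve (a quick sketch using standard tooling; any label works for testing the wildcard):

```sh
nslookup test.apps.$DOMAIN   # should return the ingress IP
nslookup api.$DOMAIN         # should return the API server IP
```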
Private Zone API Server Configuration
-------------------------------------

Again, we'll show how to do this for a private zone, assuming you created a private cluster and have set up a bastion host in the "utils-vnet".

```sh
# Create an `api` A record to point to the API Server IP
az network private-dns record-set a add-record \
  -g $RESOURCEGROUP \
  -z $DOMAIN \
  -n 'api' \
  -a $API_SERVER_IP

# Optional (good for initial testing): Adjust default TTL from 1 hour (choose an appropriate value, here 5 mins is used)
az network private-dns record-set a update -g $RESOURCEGROUP -z $DOMAIN -n 'api' --set ttl=300
```

Generate Let's Encrypt Certificates for API Server and default Ingress Router
-----------------------------------------------------------------------------

The example below uses manually created Let's Encrypt certs. This is **not recommended for production** unless you have set up an automated process to create and renew the certs (e.g. using the [Cert-Manager](https://www.redhat.com/sysadmin/cert-manager-operator-openshift) operator).

Refer to this [page](https://mobb.ninja/docs/aro/cert-manager/) for an example of using Cert-Manager and Let's Encrypt to automate this process.

These certs will expire after 90 days.

**Note:** this method requires public DNS to issue the certificates since a DNS challenge is used. Once the certificate is issued you can delete the public records if desired (for example, if you created a private ARO cluster and intend to use Azure DNS private record sets).

Launch a bash shell (e.g. Git Bash on Windows).

```sh
git clone https://github.com/acmesh-official/acme.sh.git
chmod +x acme.sh/acme.sh

# Issue a new cert for the api.$DOMAIN domain
./acme.sh --issue --server https://acme-v02.api.letsencrypt.org/directory --dns -d "api.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please
```

Sample output:

```sh
[Fri Aug 21 03:22:32 AEST 2020] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Fri Aug 21 03:22:32 AEST 2020] Create account key ok.
[Fri Aug 21 03:22:33 AEST 2020] Registering account: https://acme-v02.api.letsencrypt.org/directory
[Fri Aug 21 03:22:34 AEST 2020] Registered
[Fri Aug 21 03:22:34 AEST 2020] ACCOUNT_THUMBPRINT='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
[Fri Aug 21 03:22:34 AEST 2020] Creating domain key
[Fri Aug 21 03:22:34 AEST 2020] The domain key is here: /home/<user>/.acme.sh/api.<domain>/api.<domain>.key
[Fri Aug 21 03:22:34 AEST 2020] Single domain='api.<domain>'
[Fri Aug 21 03:22:34 AEST 2020] Getting domain auth token for each domain
[Fri Aug 21 03:22:37 AEST 2020] Getting webroot for domain='api.<domain>'
[Fri Aug 21 03:22:37 AEST 2020] Add the following TXT record:
[Fri Aug 21 03:22:37 AEST 2020] Domain: '_acme-challenge.api.<domain>'
[Fri Aug 21 03:22:37 AEST 2020] TXT value: 'xxxxxxx_xxxxxxx-xxxxxxxxxxxxxxxxxxxxx'
[Fri Aug 21 03:22:37 AEST 2020] Please be aware that you prepend _acme-challenge. before your domain
[Fri Aug 21 03:22:37 AEST 2020] so the resulting subdomain will be: _acme-challenge.api.<domain>
[Fri Aug 21 03:22:37 AEST 2020] Please add the TXT records to the domains, and re-run with --renew.
[Fri Aug 21 03:22:37 AEST 2020] Please add '--debug' or '--log' to check more details.
[Fri Aug 21 03:22:37 AEST 2020] See: https://github.com/acmesh-official/acme.sh/wiki/How-to-debug-acme.sh
```

Take note of the `Domain` and `TXT value` fields as these are required for Let's Encrypt to validate that you own the domain and can therefore be issued the certificates.

```sh
API_TXT_RECORD="xxxxxxxxxxxxxxxxxxxxxxxxxx"
```

Create your public Azure DNS zone for `$DOMAIN` and connect your domain registrar to the Azure DNS servers for your public zone (steps not shown here, see the Azure DNS docs).

```sh
PUBLIC_DOMAIN_RESOURCEGROUP="..."
```

Create two child zones, `api.$DOMAIN` and `apps.$DOMAIN`.

Once you have the public DNS zones ready, you can add the necessary records to validate ownership of the domain:

```sh
# Step 1 - Add the `_acme-challenge` TXT value to your public `api.<domain>` zone.
az network dns record-set txt add-record \
  -g $PUBLIC_DOMAIN_RESOURCEGROUP \
  -z api.$DOMAIN \
  -n '_acme-challenge' \
  -v $API_TXT_RECORD

az network dns record-set txt update -g $PUBLIC_DOMAIN_RESOURCEGROUP -z api.$DOMAIN -n '_acme-challenge' --set ttl=300

# Step 2 - Download the certs and key from Let's Encrypt
./acme.sh --renew --server https://acme-v02.api.letsencrypt.org/directory --dns -d "api.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key

# Note: On Windows Server, you might need to install Cygwin (https://www.cygwin.com/install.html) to handle files
# starting with an asterisk ('*') -- Git Bash won't work here. Install Cygwin and choose the `base` component to install.
# Also install these components:
# `dos2unix`, `curl`, `libcurl`
#
# dos2unix /aro4x-env.sh
# source /aro4x-env.sh
# cd /cygdrive/c/Users/azureuser/
# dos2unix acme.sh

# Issue a new cert for domain *.apps.<domain>
./acme.sh --issue --server https://acme-v02.api.letsencrypt.org/directory --dns -d "*.apps.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please
```

Sample output:

```sh
[Fri Aug 21 03:43:03 AEST 2020] Using CA: https://acme-v02.api.letsencrypt.org/directory
[Fri Aug 21 03:43:03 AEST 2020] Creating domain key
[Fri Aug 21 03:43:03 AEST 2020] The domain key is here: /home/USER/.acme.sh/*.apps.<domain>/*.apps.<domain>.key
[Fri Aug 21 03:43:03 AEST 2020] Single domain='*.apps.<domain>'
[Fri Aug 21 03:43:03 AEST 2020] Getting domain auth token for each domain
[Fri Aug 21 03:43:07 AEST 2020] Getting webroot for domain='*.apps.<domain>'
[Fri Aug 21 03:43:07 AEST 2020] Add the following TXT record:
[Fri Aug 21 03:43:07 AEST 2020] Domain: '_acme-challenge.apps.<domain>'
[Fri Aug 21 03:43:07 AEST 2020] TXT value: 'xxxxxxxxxxxxxxxxxxxxxxxxxx'
[Fri Aug 21 03:43:07 AEST 2020] Please be aware that you prepend _acme-challenge. before your domain
[Fri Aug 21 03:43:07 AEST 2020] so the resulting subdomain will be: _acme-challenge.apps.<domain>
[Fri Aug 21 03:43:07 AEST 2020] Please add the TXT records to the domains, and re-run with --renew.
[Fri Aug 21 03:43:07 AEST 2020] Please add '--debug' or '--log' to check more details.
[Fri Aug 21 03:43:07 AEST 2020] See: https://github.com/acmesh-official/acme.sh/wiki/How-to-debug-acme.sh
```

Take note of the `Domain` and `TXT value` fields as these are required for Let's Encrypt to validate that you own the domain and can therefore be issued the certificates.

```sh
APPS_TXT_RECORD="xxxxxxxxxxxxxxxxxxxxxxxxxx"
```

Once you have the public DNS zones ready, you can add the necessary records to validate ownership of the domain:

```sh
# Step 1 - Add the `_acme-challenge` TXT value to your public `apps.<domain>` zone.
az network dns record-set txt add-record \
  -g $PUBLIC_DOMAIN_RESOURCEGROUP \
  -z apps.$DOMAIN \
  -n '_acme-challenge' \
  -v $APPS_TXT_RECORD

az network dns record-set txt update -g $PUBLIC_DOMAIN_RESOURCEGROUP -z apps.$DOMAIN -n '_acme-challenge' --set ttl=300

# Step 2 - Download the certs and key from Let's Encrypt
./acme.sh --renew --dns --server https://acme-v02.api.letsencrypt.org/directory -d "*.apps.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key
```

Now that you have valid certificates, you can proceed with configuring OpenShift to trust your custom domain with these certs.

Note: You'll need these cert/key files on the jumpbox since you'll need access to the ARO API server via the CLI.

Configure the API server with custom certificates
-------------------------------------------------

For this step we assume you have these files for the domain `api.<domain>`:

* `fullchain.cer` certificate bundle
* `file.key` certificate private key
* `ca.cer` CA certificate bundle

Login to the ARO cluster with the `oc` CLI (if required):

```sh
KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER --query "kubeadminPassword" -o tsv)
KUBEADMIN_PASSWD=$(echo $KUBEADMIN_PASSWD | sed 's/\r//g')
API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
API_URL=$(echo $API_URL | sed 's/\r//g')
oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL --insecure-skip-tls-verify=true
oc status
```

Configure the API Server certs:

```sh
cd ~/.acme.sh/api.$DOMAIN

# Add an API server named certificate
# See: https://docs.openshift.com/container-platform/4.4/security/certificates/api-server.html

oc create secret tls api-custom-domain \
  --cert=fullchain.cer \
  --key=api.$DOMAIN.key \
  -n openshift-config

# Note: substitute <domain> below with your custom domain
oc patch apiserver cluster \
  --type=merge -p \
  '{"spec":{"servingCerts": {"namedCertificates":
  [{"names": ["api.<domain>"],
  "servingCertificate": {"name": "api-custom-domain"}}]}}}'

oc get apiserver cluster -o yaml
```
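To check the new certificate is actually being served (a sketch; the ARO API server listens on port 6443):

```sh
openssl s_client -connect api.$DOMAIN:6443 -servername api.$DOMAIN </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```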
Configure the Ingress Router with custom certificates
-----------------------------------------------------

For this step we assume you have these files for the domain `*.apps.<domain>`:

* `fullchain.cer` certificate bundle
* `file.key` certificate private key
* `ca.cer` CA certificate bundle

Login to the ARO cluster with the `oc` CLI (if required):

```sh
KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER --query "kubeadminPassword" -o tsv)
KUBEADMIN_PASSWD=$(echo $KUBEADMIN_PASSWD | sed 's/\r//g')
API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)
API_URL=$(echo $API_URL | sed 's/\r//g')
oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL
oc status
```

Configure the (default) Ingress Router certs:

```sh
cd ~/.acme.sh/\*.apps.$DOMAIN/

# Replacing the default ingress certificate
# See: https://docs.openshift.com/container-platform/4.4/security/certificates/replacing-default-ingress-certificate.html

oc create configmap custom-ca \
  --from-file=ca.cer \
  -n openshift-config

oc patch proxy/cluster \
  --type=merge \
  --patch='{"spec":{"trustedCA":{"name":"custom-ca"}}}'

cp '*.apps.aro.clarenceb.com.key' file.key

oc create secret tls star-apps-custom-domain \
  --cert=fullchain.cer \
  --key=file.key \
  -n openshift-ingress

oc patch ingresscontroller.operator default \
  --type=merge -p \
  '{"spec":{"defaultCertificate": {"name": "star-apps-custom-domain"}}}' \
  -n openshift-ingress-operator

rm ./file.key
```

Test your custom domain
-----------------------

```sh
az aro list-credentials -n $CLUSTER -g $RESOURCEGROUP
```

Log into the OpenShift portal - this will test the API Server.

```sh
az aro show -n $CLUSTER -g $RESOURCEGROUP --query consoleProfile.url -o tsv
```

Deploy a simple NGINX pod and expose it via a route to test that the private ingress route works with a custom domain:

```sh
oc new-project nginx-demo
oc adm policy add-scc-to-user anyuid system:serviceaccount:nginx-demo:default
oc new-app --image nginx:latest
oc create route edge nginx --service=nginx
oc get route
# nginx-nginx-demo.apps.<domain>
```

Access your TLS endpoint via the private domain on your bastion host: `https://nginx-nginx-demo.apps.<domain>`.

To expose this publicly, you can use an Azure Application Gateway (see the [README](./README.md)) or create a second public Ingress router and expose the service via that router.
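As with the API server, you can confirm the router is now serving the wildcard certificate (a sketch; the `-servername` SNI value matters because the router hosts many routes):

```sh
openssl s_client -connect nginx-nginx-demo.apps.$DOMAIN:443 \
  -servername nginx-nginx-demo.apps.$DOMAIN </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```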
Renew expired certs
-------------------

Delete expired certs:

```sh
rm -rf ~/dev/acme.sh/*
```

Generate a new cert request for each domain:

```sh
./acme.sh --issue --server https://acme-v02.api.letsencrypt.org/directory --dns -d "api.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please
# Add the TXT record value to your Azure DNS domain, then...
./acme.sh --renew --server https://acme-v02.api.letsencrypt.org/directory --dns -d "api.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key

./acme.sh --issue --server https://acme-v02.api.letsencrypt.org/directory --dns -d "*.apps.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please
# Add the TXT record value to your Azure DNS domain, then...
./acme.sh --renew --server https://acme-v02.api.letsencrypt.org/directory --dns -d "*.apps.$DOMAIN" --yes-I-know-dns-manual-mode-enough-go-ahead-please --fullchain-file fullchain.cer --cert-file file.crt --key-file file.key
```

Log into the `oc` CLI:

```sh
oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL --insecure-skip-tls-verify=true
```

Delete the old CA cert and config:

```sh
oc delete configmap custom-ca -n openshift-config
oc delete secret star-apps-custom-domain -n openshift-ingress
oc delete secret api-custom-domain -n openshift-config
```

Follow the steps above in **Configure the API server with custom certificates** and **Configure the Ingress Router with custom certificates**.

You may need to recycle all API Server pods:

```sh
oc -n openshift-apiserver delete pods --all
```

Access the ARO console URL and log in as usual.

References
----------

* [Prepare a custom domain for your cluster](https://docs.microsoft.com/en-us/azure/openshift/tutorial-create-cluster#prepare-a-custom-domain-for-your-cluster-optional)
* [Replacing the default ingress certificate](https://docs.openshift.com/container-platform/4.6/security/certificates/replacing-default-ingress-certificate.html)
* [Adding API server certificates](https://docs.openshift.com/container-platform/4.6/security/certificates/api-server.html)

--------------------------------------------------------------------------------
/appdev/README.md:
--------------------------------------------------------------------------------
App Dev on ARO + (Optional) Azure Container Apps
================================================

Common Steps
------------

- Create Azure Container Registry

```sh
source aro4-env.sh
ACRNAME="demoacr$RANDOM"

az group create \
  --name $RESOURCEGROUP \
  --location $LOCATION

az acr create -n $ACRNAME -g $RESOURCEGROUP --sku Standard -l $LOCATION --admin-enabled
```

Run the app locally with Docker Compose
---------------------------------------

```sh
git clone https://github.com/clarenceb/rating-api
git clone https://github.com/clarenceb/rating-web
cd rating-web

docker-compose build
docker-compose up
# browse to: http://localhost:8081
# CTRL+C
docker-compose down
```

ARO Steps
---------

Refer to the ARO Workshop for details: https://microsoft.github.io/aroworkshop/

### Create the ARO cluster

```sh
source aro4-env.sh

az group create \
  --name $RESOURCEGROUP \
  --location $LOCATION

az network vnet create \
  --resource-group $RESOURCEGROUP \
  --name aro-vnet \
  --address-prefixes 10.0.0.0/22

az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name aro-vnet \
  --name master-subnet \
  --address-prefixes 10.0.0.0/23 \
  --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet create \
  --resource-group $RESOURCEGROUP \
  --vnet-name aro-vnet \
  --name worker-subnet \
  --address-prefixes 10.0.2.0/23 \
  --service-endpoints Microsoft.ContainerRegistry

az network vnet subnet update \
  --name master-subnet \
  --resource-group $RESOURCEGROUP \
  --vnet-name aro-vnet \
  --disable-private-link-service-network-policies true

az aro create \
  --resource-group $RESOURCEGROUP \
  --name $CLUSTER \
  --vnet $VNET \
  --master-subnet master-subnet \
  --worker-subnet worker-subnet \
  --pull-secret @pull-secret.txt

az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER

# ==> {
#   "kubeadminPassword": "<password>",
#   "kubeadminUsername": "kubeadmin"
# }

az aro show \
  --name $CLUSTER \
  --resource-group $RESOURCEGROUP \
  --query "consoleProfile.url" -o tsv

# ==> https://console-openshift-console.apps.<domain>.<location>.aroapp.io/
```

Open the ARO Console URL in your browser and log in as `kubeadmin`, entering the password retrieved via `az aro list-credentials ...`.

Login via the CLI (either retrieve the `oc` login command with a token from the console, or log in with the `kubeadmin` user):

```sh
ARO_API_URI="$(az aro show --name $CLUSTER --resource-group $RESOURCEGROUP --query "apiserverProfile.url" -o tsv)"
ARO_PASSWORD="$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER | jq -r ".kubeadminPassword")"
oc login $ARO_API_URI -u kubeadmin -p $ARO_PASSWORD
oc status
```

### Create the project

```sh
PROJECT=ratingapp
oc new-project $PROJECT
```

### Create in-cluster MongoDB

```sh
MONGODB_USERNAME=ratingsuser
MONGODB_PASSWORD=ratingspassword
MONGODB_DATABASE=ratingsdb
MONGODB_ROOT_USER=root
MONGODB_ROOT_PASSWORD=ratingspassword

oc new-app bitnami/mongodb:5.0 \
  -e MONGODB_USERNAME=$MONGODB_USERNAME \
  -e MONGODB_PASSWORD=$MONGODB_PASSWORD \
  -e MONGODB_DATABASE=$MONGODB_DATABASE \
  -e MONGODB_ROOT_USER=$MONGODB_ROOT_USER \
  -e MONGODB_ROOT_PASSWORD=$MONGODB_ROOT_PASSWORD
```

### Deploy Rating API (from source code in GitHub using source strategy)

```sh
oc new-app https://github.com/clarenceb/rating-api --strategy=source

MONGODB_URI="mongodb://$MONGODB_USERNAME:$MONGODB_PASSWORD@mongodb.$PROJECT.svc.cluster.local:27017/ratingsdb"

oc set env deploy/rating-api MONGODB_URI=$MONGODB_URI

oc port-forward svc/rating-api 8080:8080 &

curl -s http://localhost:8080/api/items | jq .

jobs
kill %1
```
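If the API pod doesn't come up, following the S2I build and rollout is usually the quickest diagnosis (a sketch using standard `oc` commands):

```sh
# Stream the source-to-image build output
oc logs -f bc/rating-api

# Check the rollout and recent events
oc rollout status deploy/rating-api
oc get events --sort-by=.metadata.creationTimestamp | tail
```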
### Optional - Deploy Rating API (from pre-built Docker image in ACR)

```sh
az acr build -r $ACRNAME -t rating-api:v1-ubi8 https://github.com/clarenceb/rating-api

# Create image pull secret and link to default service account
ACR_USERNAME="$(az acr credential show -n $ACRNAME -g $RESOURCEGROUP --query username -o tsv)"
ACR_PASSWD="$(az acr credential show -n $ACRNAME -g $RESOURCEGROUP --query passwords[0].value -o tsv)"

oc create secret docker-registry $ACRNAME-secret \
  --docker-server=$ACRNAME.azurecr.io \
  --docker-username=$ACR_USERNAME \
  --docker-password=$ACR_PASSWD \
  --docker-email=admin@example.com

oc secrets link default $ACRNAME-secret --for=pull

oc delete all -l app=rating-api

oc import-image rating-api:v1 --from $ACRNAME.azurecr.io/rating-api:v1-ubi8 --reference-policy=local --confirm
oc label imagestream/rating-api app=rating-api

oc new-app --name rating-api --image-stream rating-api:v1 -e MONGODB_URI=$MONGODB_URI
```

To deploy an updated image from the external registry:

```sh
NEW_TAG=gh-v1.0.16
oc tag $ACRNAME.azurecr.io/rating-api:$NEW_TAG rating-api:v1 --reference-policy local
```

### Deploy Rating Web (from source code in GitHub using Docker build strategy)

```sh
oc new-app https://github.com/clarenceb/rating-web --strategy=docker

oc set env deploy rating-web API=http://rating-api:8080
```

### Optional - Deploy Rating Web (from pre-built Docker image in ACR)

```sh
az acr build -r $ACRNAME -t rating-web:v1 https://github.com/clarenceb/rating-web

# Create image pull secret and link to default service account
ACR_USERNAME="$(az acr credential show -n $ACRNAME -g $RESOURCEGROUP --query username -o tsv)"
ACR_PASSWD="$(az acr credential show -n $ACRNAME -g $RESOURCEGROUP --query passwords[0].value -o tsv)"

oc create secret docker-registry $ACRNAME-secret \
  --docker-server=$ACRNAME.azurecr.io \
  --docker-username=$ACR_USERNAME \
  --docker-password=$ACR_PASSWD \
  --docker-email=admin@example.com

oc secrets link default $ACRNAME-secret --for=pull

oc delete all -l app=rating-web

oc import-image rating-web:v1 --from $ACRNAME.azurecr.io/rating-web:v1 --reference-policy=local --confirm
oc label imagestream/rating-web app=rating-web

oc new-app --name rating-web --image-stream rating-web:v1 -e API=http://rating-api:8080
```

To deploy an updated image from the external registry:

```sh
NEW_TAG=gh-v1.0.16
oc tag $ACRNAME.azurecr.io/rating-web:$NEW_TAG rating-web:v1 --reference-policy local
```

### Expose TLS route for the Web frontend

```sh
# Create a TLS edge route
oc create route edge rating-web --service=rating-web
oc get route rating-web
```

### (Optional) Reset state for in-cluster database without persistent volume

- Delete the MongoDB pod

```sh
oc delete all -l app=mongodb
```

- Delete the Rating API pod (to re-populate the MongoDB collections on restart)

```sh
oc delete pod rating-api-xxxxxxxxxxxxxx
```

Open the front end in your browser: `https://rating-web-ratingapp.apps.[domain].[location].aroapp.io/`

ACA STEPS
---------

### Build the Rating API image

```sh
az acr build -r $ACRNAME -t rating-api:v1 https://github.com/clarenceb/rating-api -f Dockerfile
```

### Create the Container Apps environment

```sh
RESOURCE_GROUP="containerapps"
LOCATION="australiaeast"
CONTAINERAPPS_ENVIRONMENT="ratingapp"

az group create -n $RESOURCE_GROUP -l $LOCATION

LOG_ANALYTICS_WORKSPACE="logs-${CONTAINERAPPS_ENVIRONMENT}"

az monitor log-analytics workspace create \
  --resource-group $RESOURCE_GROUP \
  --workspace-name $LOG_ANALYTICS_WORKSPACE

LOG_ANALYTICS_WORKSPACE_CLIENT_ID=$(az monitor log-analytics workspace show \
  --query customerId -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv)

LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=$(az monitor log-analytics workspace get-shared-keys \
  --query primarySharedKey -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv)

APP_INSIGHTS_NAME="appins-${CONTAINERAPPS_ENVIRONMENT}"

LOG_ANALYTICS_WORKSPACE_RESOURCE_ID=$(az monitor log-analytics workspace show \
  --query id -g $RESOURCE_GROUP -n $LOG_ANALYTICS_WORKSPACE --out tsv)

az monitor app-insights component create \
  --app $APP_INSIGHTS_NAME \
  --location $LOCATION \
  --kind web \
  -g $RESOURCE_GROUP \
  --workspace "$LOG_ANALYTICS_WORKSPACE_RESOURCE_ID"

APP_INSIGHTS_INSTRUMENTATION_KEY=$(az monitor app-insights component show --app $APP_INSIGHTS_NAME -g $RESOURCE_GROUP --query instrumentationKey -o tsv)

az containerapp env create \
  --name $CONTAINERAPPS_ENVIRONMENT \
  --resource-group $RESOURCE_GROUP \
  --location "$LOCATION" \
  --logs-workspace-id $LOG_ANALYTICS_WORKSPACE_CLIENT_ID \
  --logs-workspace-key $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET \
  --dapr-instrumentation-key $APP_INSIGHTS_INSTRUMENTATION_KEY

REGISTRY_RESOURCE_GROUP="aro-demo"
REGISTRY_USERNAME=$(az acr credential show --resource-group $REGISTRY_RESOURCE_GROUP --name $ACRNAME --query username -o tsv)
REGISTRY_PASSWORD=$(az acr credential show --resource-group $REGISTRY_RESOURCE_GROUP --name $ACRNAME --query passwords[0].value -o tsv)
REGISTRY_SERVER="$ACRNAME.azurecr.io"
```

### Deploy MongoDB API for Cosmos DB

```sh
COSMOS_ACCOUNT_NAME=ratingapp$RANDOM
COSMOS_LOCATION=australiasoutheast
SEMVER_VERSION=4.2

az cosmosdb create \
  --resource-group $RESOURCE_GROUP \
  --name $COSMOS_ACCOUNT_NAME \
  --kind MongoDB \
  --enable-automatic-failover false \
  --default-consistency-level "Eventual" \
  --server-version $SEMVER_VERSION \
  --locations regionName="$COSMOS_LOCATION" failoverPriority=0 isZoneRedundant=False

az cosmosdb mongodb database create \
  --account-name $COSMOS_ACCOUNT_NAME \
  --resource-group $RESOURCE_GROUP \
  --name $MONGODB_DATABASE \
  --throughput 400
```
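(Added) To confirm the database was created before wiring up the app, you can use `az cosmosdb mongodb database show`:

```sh
az cosmosdb mongodb database show \
  --account-name $COSMOS_ACCOUNT_NAME \
  --resource-group $RESOURCE_GROUP \
  --name $MONGODB_DATABASE -o table
```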
### Deploy the Rating API container app

```sh
CAPPS_MONGODB_URI=$(az cosmosdb list-connection-strings \
  --name $COSMOS_ACCOUNT_NAME \
  --resource-group $RESOURCE_GROUP \
  --query "connectionStrings[0].connectionString" \
  -o tsv)

CAPPS_MONGODB_URI=$(echo $CAPPS_MONGODB_URI | sed 's/&maxIdleTimeMS=[0-9]\+//g' | sed 's/\/?/\/ratingsdb?/g')

az containerapp create \
  --name rating-api \
  --resource-group $RESOURCE_GROUP \
  --image $REGISTRY_SERVER/rating-api:v1 \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --registry-server $REGISTRY_SERVER \
  --registry-username $REGISTRY_USERNAME \
  --registry-password $REGISTRY_PASSWORD \
  --min-replicas 1 \
  --max-replicas 1 \
  --enable-dapr \
  --dapr-app-port 8080 \
  --dapr-app-id rating-api \
  --dapr-app-protocol http \
  --secrets "mongodb-uri=$CAPPS_MONGODB_URI" \
  --env-vars "MONGODB_URI=secretref:mongodb-uri"

az containerapp revision list -n rating-api -g $RESOURCE_GROUP -o table
```

### Deploy the Rating Web container app

```sh
# Dapr invocation URL for Rating API (works via service discovery when Dapr is enabled)
RATING_API_URI="http://localhost:3500/v1.0/invoke/rating-api/method"

az containerapp create \
  --name rating-web \
  --resource-group $RESOURCE_GROUP \
  --image $REGISTRY_SERVER/rating-web:v1 \
  --environment $CONTAINERAPPS_ENVIRONMENT \
  --registry-server $REGISTRY_SERVER \
  --registry-username $REGISTRY_USERNAME \
  --registry-password $REGISTRY_PASSWORD \
  --min-replicas 1 \
  --max-replicas 1 \
  --ingress 'external' \
  --target-port 8080 \
  --enable-dapr \
  --dapr-app-port 8080 \
  --dapr-app-id rating-web \
  --env-vars "API=$RATING_API_URI"

az containerapp revision list -n rating-web -g $RESOURCE_GROUP -o table

FRONTEND_INGRESS_URL=$(az containerapp show -n rating-web -g $RESOURCE_GROUP --query properties.configuration.ingress.fqdn -o tsv)

echo "Browse to: https://$FRONTEND_INGRESS_URL"

# Access the site and stream logs to console (press CTRL+C to stop following log stream):
az containerapp logs show -n rating-api -g $RESOURCE_GROUP --follow --tail=50
az containerapp logs show -n rating-web -g $RESOURCE_GROUP --follow --tail=50
```

### Ratings API CI/CD pipeline (GitHub Actions)

See file: [.github/workflows/docker-image.yml](https://github.com/clarenceb/rating-api/blob/master/.github/workflows/docker-image.yml)

Check that the [workflow](https://github.com/clarenceb/rating-api/actions/workflows/docker-image.yml) is enabled by clicking the ellipsis `[...]` and choosing **Enable workflow** (if it's currently disabled).
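The secrets listed below can be created in the GitHub web UI or scripted; here is an added sketch using the GitHub CLI (assumes `gh` is installed and authenticated; `<owner>` is a placeholder):

```sh
gh secret set ACR_LOGIN_SERVER --repo <owner>/rating-api --body "$ACRNAME.azurecr.io"
gh secret set REGISTRY_PASSWORD --repo <owner>/rating-api --body "$REGISTRY_PASSWORD"
# Environment-scoped secret:
gh secret set CAPPS_MONGODB_URI --repo <owner>/rating-api --env dev-containerapps --body "$CAPPS_MONGODB_URI"
```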

The following Repository secrets are needed:

- ACR_LOGIN_SERVER
- ASC_APPINSIGHTS_CONNECTION_STRING
- ASC_AUTH_TOKEN
- REGISTRY_USERNAME
- REGISTRY_PASSWORD

The following Environments are needed:

- dev-containerapps

  - Environment secrets:

    - CAPPS_MONGODB_URI
    - CONTAINERAPPS_ENVIRONMENT
    - LOCATION
    - RESOURCE_GROUP
    - AZURE_CREDENTIALS

  - Create [Azure Credentials](https://github.com/marketplace/actions/azure-login#configure-deployment-credentials) service principal:

```sh
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
# RESOURCE_GROUP should be the resource group of your container apps environment
az ad sp create-for-rbac --name "dev-containerapps-sp" --role contributor \
  --scopes /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP \
  --sdk-auth > ./dev-containerapps-sp.json
```

- Paste contents of file `dev-containerapps-sp.json` into an Environment Secret named `AZURE_CREDENTIALS`

- dev-aro

  - Environment secrets:

    - OPENSHIFT_SERVER

```sh
oc whoami --show-server
```

    - OPENSHIFT_NAMESPACE

```sh
echo $PROJECT
```

    - OPENSHIFT_TOKEN

```sh
# First, name your Service Account (the Kubernetes shortname is "sa")
SA=github-actions-sa

# oc new-project $OPENSHIFT_NAMESPACE  # Create a new project (namespace)
oc project $OPENSHIFT_NAMESPACE        # Switch to the existing project

# Create the Service Account
oc create sa $SA

# Grant permissions to update resources in the OPENSHIFT_NAMESPACE
oc policy add-role-to-user edit -z $SA

# Now, we have to find the name of the secret in which the Service Account's apiserver token is stored.
# The following command will output two secrets.
SECRETS=$(oc get sa $SA -o jsonpath='{.secrets[*].name}{"\n"}') && echo $SECRETS
# Select the one with "token" in the name - the other is for the container registry.
SECRET_NAME=$(printf "%s\n" $SECRETS | grep "token") && echo $SECRET_NAME

# Get the token from the secret.
ENCODED_TOKEN=$(oc get secret $SECRET_NAME -o jsonpath='{.data.token}{"\n"}') && echo $ENCODED_TOKEN
TOKEN=$(echo $ENCODED_TOKEN | base64 -d) && echo $TOKEN
# eyJhb......
```

    - CAPPS_MONGODB_URI

```sh
echo $CAPPS_MONGODB_URI
```

Refer to [Using a Service Account for GitHub Actions](https://github.com/redhat-actions/oc-login/wiki/Using-a-Service-Account-for-GitHub-Actions) for instructions on creating the OPENSHIFT_TOKEN.

More info on the [openshift-login](https://github.com/marketplace/actions/openshift-login) GitHub action.
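(Added) Before saving `OPENSHIFT_TOKEN`, you can sanity-check that the token works; note this switches your current `oc` session to the service account (log back in as `kubeadmin` afterwards if needed):

```sh
oc login --token="$TOKEN" --server="$(oc whoami --show-server)"
oc whoami   # should print system:serviceaccount:<namespace>:github-actions-sa
```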
Azure Monitor (Container Insights via Azure Arc-enabled Kubernetes)
-------------------------------------------------------------------

Go to the ARO Arc resource / Insights in the Azure Portal.

* Health, nodes, pods, live logs, metrics, recommended alerts
* Go to Containers, filter by namespace "ratingapp", filter to "api"
* Open Live Logs, submit a rating in the web app
* Reports
  * Workload Details
  * Data Usage
  * Persistent Volume Details
* Recommended alerts
* Logs

Open the KQL query editor in the "Logs" blade of the ARO Arc resource and try some queries.

Investigate some app logs where votes were placed:

```kql
ContainerLog
| where LogEntry contains "Saving rating"
```

See average rating per fruit:

```kql
ContainerLog
| where LogEntry contains "Saving rating"
| parse LogEntry with * "itemRated: [ " itemCode " ]" * "rating: " rating " }" *
| extend fruit=
    replace_string(
        replace_string(
            replace_string(
                replace_string(itemCode, '62f6fb3f209fa5001777fea0', 'Banana'),
                '62f6fb3f209fa5001777fea1', 'Coconut'),
            '62f6fb3f209fa5001777fea2', 'Oranges'),
        '62f6fb3f209fa5001777fea3', 'Pineapple')
| project fruit, rating
| summarize AvgRating=avg(toint(rating)) by fruit
```

Choose "Chart" to see a visualisation of the average votes.

Note: Your item ids will be different. Run a Mongo query to find your ids, like so:

```sh
oc port-forward --namespace ratingapp svc/mongodb 27017:27017 &
mongosh --host 127.0.0.1 --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD

use ratingsdb
db.items.find()
quit

kill %1
```

See number of votes submitted over time:

```kql
ContainerLog
| where LogEntry contains "Saving rating"
| summarize NumberOfVotes=count()/4 by bin(TimeGenerated, 15m)
| render areachart
```

Kube event failures:

```kql
KubeEvents
| where TimeGenerated > ago(24h)
| where Reason in ("Failed")
| summarize count() by Reason, bin(TimeGenerated, 5m)
| render areachart
```

Pod failures (e.g.
ImagePullBackOff): 559 | 560 | ```kql 561 | KubeEvents 562 | | where TimeGenerated > ago(24h) 563 | | where Reason in ("Failed") 564 | | where ObjectKind == "Pod" 565 | | project TimeGenerated, ObjectKind, Name, Namespace, Message 566 | ``` 567 | 568 | ### Try some other KQL queries 569 | 570 | List container images deployed in the cluster: 571 | 572 | ```kql 573 | ContainerInventory 574 | | distinct Repository, Image, ImageTag 575 | | where Image contains "rating" 576 | | render table 577 | ``` 578 | 579 | List container inventory and state: 580 | 581 | ```kql 582 | ContainerInventory 583 | | project Computer, Name, Image, ImageTag, ContainerState, CreatedTime, StartedTime, FinishedTime 584 | | render table 585 | ``` 586 | 587 | List Kubernetes Events: 588 | 589 | ```kql 590 | KubeEvents 591 | | where not(isempty(Namespace)) 592 | | sort by TimeGenerated desc 593 | | render table 594 | ``` 595 | 596 | List Azure Diagnostic categories: 597 | 598 | ```kql 599 | AzureDiagnostics 600 | | distinct Category 601 | ``` 602 | -------------------------------------------------------------------------------- /arc/README.md: -------------------------------------------------------------------------------- 1 | Arc-enable your ARO cluster 2 | =========================== 3 | 4 | [Connect your ARO cluster to Azure Arc](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster) 5 | 6 | Ensure your firewall (if using one) allows sufficient access: 7 | 8 | - [Arc enabling the cluster](https://docs.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli#meet-network-requirements) 9 | - [Enabling Azure Monitor for Arc-enabled Kubernetes](https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters#prerequisites) 10 | 11 | Register required providers: 12 | 13 | ```sh 14 | az provider register --namespace Microsoft.Kubernetes 15 | az provider register --namespace Microsoft.KubernetesConfiguration 16 | az provider register --namespace Microsoft.ExtendedLocation 17 | 18 | az provider show -n Microsoft.Kubernetes -o table 19 | az provider show -n Microsoft.KubernetesConfiguration -o table 20 | az provider show -n Microsoft.ExtendedLocation -o table 21 | ``` 22 | 23 | Login to your ARO cluster with `oc` CLI: 24 | 25 | ```sh 26 | API_URL=$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv) 27 | KUBEADMIN_PASSWD=$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER | jq -r .kubeadminPassword) 28 | 29 | oc login -u kubeadmin -p $KUBEADMIN_PASSWD --server=$API_URL 30 | oc status 31 | ``` 32 | 33 | Set required SCC policy for Arc to work in ARO: 34 | 35 | ```sh 36 | oc adm policy add-scc-to-user privileged system:serviceaccount:azure-arc:azure-arc-kube-aad-proxy-sa 37 | ``` 38 | 39 | Onboard the ARO cluster to Arc: 40 | 41 | ```sh 42 | ARC_RESOURCE_GROUP="aro-test" 43 | ARC_CLUSTER_NAME="aro-arc" 44 | ARC_LOCATION="australiaeast" 45 | 46 | az extension add --name connectedk8s 47 | az extension add --name k8s-extension 48 | 49 | az group create --name $ARC_RESOURCE_GROUP --location $ARC_LOCATION --output table 50 | az connectedk8s connect --name $ARC_CLUSTER_NAME --resource-group $ARC_RESOURCE_GROUP --distribution openshift 51 | az connectedk8s list --resource-group $ARC_RESOURCE_GROUP --output table 52 | 53 | kubectl get deployments,pods -n azure-arc 54 | ``` 55 | 56 | Resources 57 | --------- 58 | 59 | - [Quickstart: Connect an existing Kubernetes cluster to Azure 
Arc](https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/quickstart-connect-cluster?tabs=azure-cli)

--------------------------------------------------------------------------------
/aro4-env.sh.template:
--------------------------------------------------------------------------------
#!/bin/bash

# Source this file into your shell
# $ source ./aro4-env.sh

export LOCATION=          # Choose your region
export CLUSTER=cluster    # Set your cluster name
export RESOURCEGROUP="aro-v4"
export VNET=aro-vnet
export UTILS_VNET=utils-vnet

# Optional, if you want to set a custom domain for ARO (can be a FQDN or DNS prefix)
# FQDN (e.g. contoso.io) will require further configuration of DNS and certificates
# DNS prefix (e.g. contoso) will set a subdomain under .aroapp.io and will be automatically configured with a certificate
# Set to a blank value to disable the custom domain
export DOMAIN=""

echo "Your cluster will be named '$CLUSTER' in resource group '$RESOURCEGROUP' and location '$LOCATION'"
echo "+ A VNET named '$VNET' will be set for the cluster"
echo "+ A VNET named '$UTILS_VNET' will be set for the cluster utilities (e.g. jump box)"

--------------------------------------------------------------------------------
/automation/README.md:
--------------------------------------------------------------------------------
Bicep example to create ARO cluster
===================================

The official docs now include a quickstart to [create an ARO cluster with Bicep](https://learn.microsoft.com/en-us/azure/openshift/quickstart-openshift-arm-bicep-template?pivots=aro-bicep).

Below is an example to create a basic ARO cluster in a new resource group and VNET using Bicep.

* You will need Owner-level access on the Subscription to execute the deployment.
* You'll also need permission to create an Azure AD Service Principal.
* Complete the [Prerequisites](../README.md#prerequisites) in the main steps.
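Once the service principal from the next snippet exists, the deployment can be run with `az deployment group create`. This is an added, hedged sketch (parameter names are taken from `main.bicep` below; the `clientObjectId` and `aroRpObjectId` values are assumptions you must look up yourself, e.g. via `az ad sp show` / `az ad sp list`, and the pull secret path may differ):

```sh
az group create -n $RESOURCEGROUP -l $LOCATION

az deployment group create \
  -g $RESOURCEGROUP \
  -f ./main.bicep \
  -p clientId=$clientId \
     clientObjectId=$clientObjectId \
     clientSecret=$clientSecret \
     aroRpObjectId=$aroRpObjectId \
     domain=$DOMAIN \
     pullSecret="$(cat ../pull-secret.txt)" \
     clusterName=$CLUSTER \
     location=$LOCATION
```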

```sh
source ../aro4-env.sh

sp_display_name="aro-test-sp"
az ad sp create-for-rbac -n http://$sp_display_name > aro-sp.json

clientId="$(jq -r .appId <aro-sp.json)"

# ==> https://console-openshift-console.apps...aroapp.io/
```

Resources
---------

* https://github.com/Azure/bicep
* https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep

--------------------------------------------------------------------------------
/automation/aro.bicep:
--------------------------------------------------------------------------------
param domain string
param masterSubnetId string
param workerSubnetId string
param clientId string
@secure()
param clientSecret string
@secure()
param pullSecret string
param clusterName string
param location string = resourceGroup().location

param podCidr string = '10.128.0.0/14'
param serviceCidr string = '172.30.0.0/16'
param apiServerVisibility string = 'Public'
param ingressVisibility string = 'Public'
param masterVmSku string = 'Standard_D8s_v3'
param prefix string = 'aro'
param fipsValidatedModules string = 'Disabled'
param encryptionAtHost string = 'Disabled'
param workerVmSize string = 'Standard_D4s_v3'
param workerDiskSizeGB int = 128
param workerCount int = 3

var ingressSpec = [
  {
    name: 'default'
    visibility: ingressVisibility
  }
]

var workerSpec = {
  name: 'worker'
  VmSize: workerVmSize
  diskSizeGB: workerDiskSizeGB
  count: workerCount
  encryptionAtHost: encryptionAtHost
}

var nodeRgName = '${prefix}-${take(uniqueString(resourceGroup().id, prefix), 5)}'

resource cluster 'Microsoft.RedHatOpenShift/OpenShiftClusters@2023-04-01' = {
  name: clusterName
  location: location
  properties: {
    clusterProfile: {
      domain: domain
      resourceGroupId: subscriptionResourceId('Microsoft.Resources/resourceGroups', nodeRgName)
      pullSecret: pullSecret
      fipsValidatedModules: fipsValidatedModules
    }
    apiserverProfile: {
      visibility: apiServerVisibility
    }
    ingressProfiles: [for instance in ingressSpec: {
      name: instance.name
      visibility: instance.visibility
    }]
    masterProfile: {
      vmSize: masterVmSku
      subnetId: masterSubnetId
      encryptionAtHost: encryptionAtHost
    }
    workerProfiles: [
      {
        name: workerSpec.name
        vmSize: workerSpec.VmSize
        diskSizeGB: workerSpec.diskSizeGB
        subnetId: workerSubnetId
        count: workerSpec.count
        encryptionAtHost: workerSpec.encryptionAtHost
      }
    ]
    networkProfile: {
      podCidr: podCidr
      serviceCidr: serviceCidr
    }
    servicePrincipalProfile: {
      clientId: clientId
      clientSecret: clientSecret
    }
  }
}

output consoleUrl string = cluster.properties.consoleProfile.url
output apiUrl string = cluster.properties.apiserverProfile.url

--------------------------------------------------------------------------------
/automation/main.bicep:
--------------------------------------------------------------------------------
param clientId string
param clientObjectId string
@secure()
param clientSecret string
param aroRpObjectId string
param domain string
@secure()
param pullSecret string
param clusterName string = 'cluster'
param location string = resourceGroup().location

module vnet 'vnet.bicep' = {
  name: 'aro-vnet'
params: { 15 | location: location 16 | } 17 | } 18 | 19 | module vnetRoleAssignments 'roleAssignments.bicep' = { 20 | name: 'role-assignments' 21 | params: { 22 | vnetId: vnet.outputs.vnetId 23 | clientObjectId: clientObjectId 24 | aroRpObjectId: aroRpObjectId 25 | } 26 | } 27 | 28 | module aro 'aro.bicep' = { 29 | name: 'aro' 30 | params: { 31 | domain: domain 32 | masterSubnetId: vnet.outputs.masterSubnetId 33 | workerSubnetId: vnet.outputs.workerSubnetId 34 | clientId: clientId 35 | clientSecret: clientSecret 36 | pullSecret: pullSecret 37 | clusterName: clusterName 38 | location: location 39 | } 40 | 41 | dependsOn: [ 42 | vnetRoleAssignments 43 | ] 44 | } 45 | -------------------------------------------------------------------------------- /automation/roleAssignments.bicep: -------------------------------------------------------------------------------- 1 | param vnetId string 2 | param clientObjectId string 3 | param aroRpObjectId string 4 | 5 | var roleDefinitionId = 'b24988ac-6180-42a0-ab88-20f7382dd24c' // contributor 6 | 7 | resource clusterRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = { 8 | name: guid(vnetId, roleDefinitionId, clientObjectId) 9 | properties: { 10 | roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId) 11 | principalId: clientObjectId 12 | principalType: 'ServicePrincipal' 13 | } 14 | } 15 | 16 | resource aroRpRoleAssignment 'Microsoft.Authorization/roleAssignments@2020-10-01-preview' = { 17 | name: guid(vnetId, roleDefinitionId, aroRpObjectId) 18 | properties: { 19 | roleDefinitionId: subscriptionResourceId('Microsoft.Authorization/roleDefinitions', roleDefinitionId) 20 | principalId: aroRpObjectId 21 | principalType: 'ServicePrincipal' 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /automation/vnet.bicep: -------------------------------------------------------------------------------- 1 | param vnetName string = 'aro-vnet' 2 | param vnetCidr string = '10.0.0.0/22' 3 | param masterSubnetCidr string = '10.0.2.0/24' 4 | param workerSubnetCidr string = '10.0.3.0/24' 5 | param location string = resourceGroup().location 6 | 7 | var masterSubnet = { 8 | name: 'master-subnet' 9 | cidr: masterSubnetCidr 10 | } 11 | 12 | var workerSubnet = { 13 | name: 'worker-subnet' 14 | cidr: workerSubnetCidr 15 | } 16 | 17 | resource vnet 'Microsoft.Network/virtualNetworks@2020-05-01' = { 18 | name: vnetName 19 | location: location 20 | properties: { 21 | addressSpace: { 22 | addressPrefixes: [ 23 | vnetCidr 24 | ] 25 | } 26 | dhcpOptions: { 27 | dnsServers: [] 28 | } 29 | subnets: [ 30 | { 31 | name: masterSubnet.name 32 | properties: { 33 | addressPrefix: masterSubnet.cidr 34 | serviceEndpoints: [ 35 | { 36 | service: 'Microsoft.ContainerRegistry' 37 | locations: [ 38 | '*' 39 | ] 40 | } 41 | ] 42 | delegations: [] 43 | privateEndpointNetworkPolicies: 'Enabled' 44 | privateLinkServiceNetworkPolicies: 'Disabled' 45 | } 46 | } 47 | { 48 | name: workerSubnet.name 49 | properties: { 50 | addressPrefix: workerSubnet.cidr 51 | serviceEndpoints: [ 52 | { 53 | service: 'Microsoft.ContainerRegistry' 54 | locations: [ 55 | '*' 56 | ] 57 | } 58 | ] 59 | delegations: [] 60 | privateEndpointNetworkPolicies: 'Enabled' 61 | privateLinkServiceNetworkPolicies: 'Enabled' 62 | } 63 | } 64 | ] 65 | enableDdosProtection: false 66 | } 67 | } 68 | 69 | output vnetId string = vnet.id 70 | output masterSubnetId string = vnet.properties.subnets[0].id 71 | output workerSubnetId 
string = vnet.properties.subnets[1].id

--------------------------------------------------------------------------------
/azure-pipelines.yml:
--------------------------------------------------------------------------------
trigger:
  branches:
    include:
    - master

variables:
  - name: LOCATION
    value: australiaeast
  - name: RESOURCEGROUP
    value: aro-rg
  - name: CLUSTER
    value: aro-cluster
  - name: AROVNETNAME
    value: aro-vnet
  - name: VNETPREFIX
    value: 10.0.0.0/22
  - name: MASTERSUBNETNAME
    value: master-subnet
  - name: MASTERSUBNETCIDR
    value: 10.0.0.0/23
  - name: WORKERSUBNETNAME
    value: worker-subnet
  - name: WORKERSUBNETCIDR
    value: 10.0.2.0/23
  - name: SERVICEPRINCIPALNAME
    value: sp-aro-test
  # - group: aro-platform-auto


pool:
  vmImage: ubuntu-latest

steps:
# We'll skip the resource provider registrations (e.g. Microsoft.RedHatOpenShift, Microsoft.Compute), assuming they are already registered
- task: AzureCLI@2
  displayName: Create ARO resource group
  inputs:
    azureSubscription: 'my-azure-subscription'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az group create --name $(RESOURCEGROUP) --location $(LOCATION)'

# - task: AzureCLI@2
#   displayName: Create ARO vnet
#   inputs:
#     azureSubscription: 'my-azure-subscription'
#     scriptType: 'bash'
#     scriptLocation: 'inlineScript'
#     inlineScript: 'az network vnet create --resource-group $(RESOURCEGROUP) --name $(AROVNETNAME) --address-prefixes $(VNETPREFIX)'

# - task: AzureCLI@2
#   displayName: Create master subnet
#   inputs:
#     azureSubscription: 'my-azure-subscription'
#     scriptType: 'bash'
#     scriptLocation: 'inlineScript'
#     inlineScript: 'az network vnet subnet create --resource-group $(RESOURCEGROUP) --vnet-name $(AROVNETNAME) --name $(MASTERSUBNETNAME) --address-prefixes $(MASTERSUBNETCIDR)'

# - task: AzureCLI@2
#   displayName: Create worker subnet
#   inputs:
#     azureSubscription: 'my-azure-subscription'
#     scriptType: 'bash'
#     scriptLocation: 'inlineScript'
#     inlineScript: 'az network vnet subnet create --resource-group $(RESOURCEGROUP) --vnet-name $(AROVNETNAME) --name $(WORKERSUBNETNAME) --address-prefixes $(WORKERSUBNETCIDR)'

# - task: AzureCLI@2
#   displayName: Disable private link network policies
#   inputs:
#     azureSubscription: 'my-azure-subscription'
#     scriptType: 'bash'
#     scriptLocation: 'inlineScript'
#     inlineScript: 'az network vnet subnet update --resource-group $(RESOURCEGROUP) --vnet-name $(AROVNETNAME) --name $(MASTERSUBNETNAME) --disable-private-link-service-network-policies true'

- task: AzureCLI@2
  displayName: Create ARO cluster
  inputs:
    azureSubscription: 'my-azure-subscription'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aro create --resource-group $(RESOURCEGROUP) --name $(CLUSTER) --vnet $(AROVNETNAME) --master-subnet $(MASTERSUBNETNAME) --worker-subnet $(WORKERSUBNETNAME) --client-id $(CLIENTID) --client-secret $(CLIENTSECRET) --debug
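# Note (added): CLIENTID and CLIENTSECRET above are expected to come from a
# variable group or pipeline secrets (see the commented "group: aro-platform-auto"
# entry under variables). A hedged sketch of creating them with the Azure DevOps CLI:
#   az pipelines variable-group create --name aro-platform-auto \
#     --variables CLIENTID=<service-principal-appId> --authorize true
#   az pipelines variable-group variable create --group-id <group-id> \
#     --name CLIENTSECRET --secret true --value <service-principal-password>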
--------------------------------------------------------------------------------
/backup/.gitignore:
--------------------------------------------------------------------------------
credentials-velero.yaml
velero-sp.json
velero-*.tar.gz
velero-*-linux-amd64

--------------------------------------------------------------------------------
/backup/README.md:
--------------------------------------------------------------------------------
Backup with Velero
==================

Install Velero CLI tool:

```sh
wget https://github.com/vmware-tanzu/velero/releases/download/v1.5.2/velero-v1.5.2-linux-amd64.tar.gz
tar xzf velero-v1.5.2-linux-amd64.tar.gz
sudo mv velero-v1.5.2-linux-amd64/velero /usr/local/bin

velero version
```

Create storage account and blob container for backups:

```sh
AZURE_BACKUP_RESOURCE_GROUP=Velero_Backups
LOCATION=australiaeast
az group create -n $AZURE_BACKUP_RESOURCE_GROUP --location $LOCATION

AZURE_STORAGE_ACCOUNT_ID="velero$(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')"
az storage account create \
  --name $AZURE_STORAGE_ACCOUNT_ID \
  --resource-group $AZURE_BACKUP_RESOURCE_GROUP \
  --sku Standard_GRS \
  --encryption-services blob \
  --https-only true \
  --kind BlobStorage \
  --access-tier Hot

BLOB_CONTAINER=velero
az storage container create -n $BLOB_CONTAINER --public-access off --account-name $AZURE_STORAGE_ACCOUNT_ID
```

Create Azure Service Principal and Velero configuration file:

```sh
CLUSTER=cluster
ARO_RG=aro-v4

export AZURE_RESOURCE_GROUP=$(az aro show --name $CLUSTER --resource-group $ARO_RG | jq -r .clusterProfile.resourceGroupId | cut -d '/' -f 5,5)

AZURE_SUBSCRIPTION_ID=$(az account list --query '[?isDefault].id' -o tsv)
AZURE_TENANT_ID=$(az account list --query '[?isDefault].tenantId' -o tsv)

az ad sp create-for-rbac --name "http://velero-aro4" --role "Contributor" --scopes /subscriptions/$AZURE_SUBSCRIPTION_ID > velero-sp.json

AZURE_CLIENT_SECRET=$(jq -r .password <velero-sp.json)
AZURE_CLIENT_ID=$(jq -r .appId <velero-sp.json)

cat << EOF > ./credentials-velero.yaml
AZURE_SUBSCRIPTION_ID=${AZURE_SUBSCRIPTION_ID}
AZURE_TENANT_ID=${AZURE_TENANT_ID}
AZURE_CLIENT_ID=${AZURE_CLIENT_ID}
AZURE_CLIENT_SECRET=${AZURE_CLIENT_SECRET}
AZURE_RESOURCE_GROUP=${AZURE_RESOURCE_GROUP}
AZURE_CLOUD_NAME=AzurePublicCloud
EOF
```

Install Velero in ARO:

```sh
velero install \
  --provider azure \
  --plugins velero/velero-plugin-for-microsoft-azure:v1.1.0 \
  --bucket $BLOB_CONTAINER \
  --secret-file ./credentials-velero.yaml \
  --backup-location-config resourceGroup=$AZURE_BACKUP_RESOURCE_GROUP,storageAccount=$AZURE_STORAGE_ACCOUNT_ID \
  --snapshot-location-config apiTimeout=15m \
  --velero-pod-cpu-limit="0" --velero-pod-mem-limit="0" \
  --velero-pod-mem-request="0" --velero-pod-cpu-request="0"
```

Backup a namespace:

```sh
velero create backup mydemo1-1 --include-namespaces=mydemo1

velero backup describe mydemo1-1
velero backup logs mydemo1-1

oc get backups -n velero mydemo1-1 -o yaml
```

Backup a namespace with disk snapshots (PVs):

```sh
velero backup create mydemo1-2 --include-namespaces=mydemo1 --snapshot-volumes=true --include-cluster-resources=true

velero backup describe mydemo1-2
velero backup logs mydemo1-2

oc get backups -n velero mydemo1-2 -o yaml
```
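(Added) Velero can also take backups on a schedule using its built-in cron support; a brief sketch:

```sh
velero schedule create mydemo1-daily --schedule="0 3 * * *" --include-namespaces=mydemo1
velero schedule get
```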
Restore a backup:

```sh
oc get backups -n velero
velero restore create restore-mydemo1-1 --from-backup mydemo1-1

velero restore describe restore-mydemo1-1
velero restore logs restore-mydemo1-1

oc get restore -n velero restore-mydemo1-1 -o yaml
```

Restore a backup with disk snapshots (PVs):

```sh
oc get backups -n velero
velero restore create restore-mydemo1-2 --from-backup mydemo1-2 --exclude-resources="nodes,events,events.events.k8s.io,backups.ark.heptio.com,backups.velero.io,restores.ark.heptio.com,restores.velero.io"

velero restore describe restore-mydemo1-2
velero restore logs restore-mydemo1-2

oc get restore -n velero restore-mydemo1-2 -o yaml
```

Uninstall Velero:

```sh
kubectl delete namespace/velero clusterrolebinding/velero
kubectl delete crds -l component=velero
```

Delete storage account:

```sh
az storage account delete --name $AZURE_STORAGE_ACCOUNT_ID --resource-group $AZURE_BACKUP_RESOURCE_GROUP
```

Delete service principal:

```sh
az ad sp delete --id $AZURE_CLIENT_ID
```

--------------------------------------------------------------------------------
/cleanup-failed-clusters.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#
# This script can be run to clean up ARO clusters that are in a failed state.

aroRpObjectId="$(az ad sp list --filter "displayname eq 'Azure Red Hat OpenShift RP'" --query "[?appDisplayName=='Azure Red Hat OpenShift RP'].objectId" -o tsv)"

echo "The Azure Red Hat OpenShift RP ($aroRpObjectId) currently has the following role assignments:"
az role assignment list --assignee $aroRpObjectId -o table

failedClusterResourceGroups="$(az aro list --query "[?provisioningState=='Failed'].{resourceGroup:clusterProfile.resourceGroupId}" -o tsv)"

for nodeResourceGroup in $failedClusterResourceGroups; do
    echo "Granting 'User Access Administrator' role to Azure Red Hat OpenShift RP ($aroRpObjectId) on scope $nodeResourceGroup"
    az role assignment create --assignee $aroRpObjectId --role "User Access Administrator" --scope $nodeResourceGroup
done

echo "Cleaning up failed ARO clusters..."
az aro list --query "[?provisioningState=='Failed'].{name:name, resourceGroup:resourceGroup}" -o tsv \
    | awk 'BEGIN { FS = "\t" } { print "az aro delete --yes -n " $1 " -g " $2 }' \
    | xargs -I {} bash -c '{}'

echo "The Azure Red Hat OpenShift RP ($aroRpObjectId) now has the following remaining role assignments:"
az role assignment list --assignee $aroRpObjectId -o table

--------------------------------------------------------------------------------
/firewall/README.md:
--------------------------------------------------------------------------------
Firewall
========

Reference: https://docs.microsoft.com/en-us/azure/openshift/howto-restrict-egress

Continuing on from the setup used in this demo, let's configure egress lockdown with Azure Firewall.
We'll create a dedicated subnet for the Azure Firewall in our utils VNET (rather than the ARO VNET).
8 | 9 | ```sh 10 | source ../aro4-env.sh 11 | 12 | # Create the Firewall's subnet 13 | az network vnet subnet create \ 14 | -g "$RESOURCEGROUP" \ 15 | --vnet-name "$UTILS_VNET" \ 16 | -n "AzureFirewallSubnet" \ 17 | --address-prefixes 10.0.6.0/24 18 | 19 | # Create the public IP for the Firewall 20 | az network public-ip create -g $RESOURCEGROUP -n fw-ip --sku "Standard" --location $LOCATION 21 | 22 | # Install extension to manage Azure Firewall from Azure CLI 23 | az extension add -n azure-firewall 24 | az extension update -n azure-firewall 25 | 26 | # Create the actual firewall and configure its public facing IP. 27 | az network firewall create -g $RESOURCEGROUP -n aro-private -l $LOCATION 28 | az network firewall ip-config create -g $RESOURCEGROUP -f aro-private -n fw-config --public-ip-address fw-ip --vnet-name "$UTILS_VNET" 29 | 30 | FWPUBLIC_IP=$(az network public-ip show -g $RESOURCEGROUP -n fw-ip --query "ipAddress" -o tsv) 31 | FWPRIVATE_IP=$(az network firewall show -g $RESOURCEGROUP -n aro-private --query "ipConfigurations[0].privateIpAddress" -o tsv) 32 | 33 | echo $FWPUBLIC_IP 34 | echo $FWPRIVATE_IP 35 | 36 | # Get the id for the ARO VNET. 37 | vNet1Id=$(az network vnet show \ 38 | --resource-group $RESOURCEGROUP \ 39 | --name $VNET \ 40 | --query id --out tsv) 41 | 42 | # Get the id for the Utils VNET. 43 | vNet2Id=$(az network vnet show \ 44 | --resource-group $RESOURCEGROUP \ 45 | --name $UTILS_VNET \ 46 | --query id \ 47 | --out tsv) 48 | 49 | # Peer ARO VNET with the Utils VNET 50 | az network vnet peering create \ 51 | --name aroVnet-utilsVnet \ 52 | --resource-group $RESOURCEGROUP \ 53 | --vnet-name $VNET \ 54 | --remote-vnet $vNet2Id \ 55 | --allow-vnet-access 56 | 57 | # Peer Utils VNET with the ARO VNET 58 | az network vnet peering create \ 59 | --name utilsVnet-aroVnet \ 60 | --resource-group $RESOURCEGROUP \ 61 | --vnet-name $UTILS_VNET \ 62 | --remote-vnet $vNet1Id \ 63 | --allow-vnet-access 64 | 65 | # Verify the peering is in place 66 | az network vnet peering show \ 67 | --name aroVnet-utilsVnet \ 68 | --resource-group $RESOURCEGROUP \ 69 | --vnet-name $VNET \ 70 | --query peeringState 71 | 72 | # Create a UDR and Routing Table for Azure Firewall 73 | az network route-table create -g $RESOURCEGROUP --name aro-udr 74 | az network route-table route create -g $RESOURCEGROUP --name aro-udr --route-table-name aro-udr --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP 75 | ``` 76 | 77 | Application rules for ARO to work based on this [list](https://docs.openshift.com/container-platform/4.6/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall): 78 | 79 | ```sh 80 | az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \ 81 | --collection-name 'ARO' \ 82 | --action allow \ 83 | --priority 100 \ 84 | -n 'required' \ 85 | --source-addresses '*' \ 86 | --protocols 'http=80' 'https=443' \ 87 | --target-fqdns 'registry.redhat.io' '*.quay.io' 'sso.redhat.com' 'management.azure.com' 'mirror.openshift.com' 'api.openshift.com' 'quay.io' '*.blob.core.windows.net' 'gcs.prod.monitoring.core.windows.net' 'registry.access.redhat.com' 'login.microsoftonline.com' '*.servicebus.windows.net' '*.table.core.windows.net' 'grafana.com' 88 | ``` 89 | 90 | Optional rules for Docker images: 91 | 92 | ```sh 93 | az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \ 94 | --collection-name 'Docker' \ 95 | --action allow \ 96 | --priority 200 \ 97 | -n 'docker' \ 
  --source-addresses '*' \
  --protocols 'http=80' 'https=443' \
  --target-fqdns '*cloudflare.docker.com' '*registry-1.docker.io' 'apt.dockerproject.org' 'auth.docker.io'
```

Test egress connectivity from ARO (before applying egress FW):

```sh
oc create ns test
oc apply -f firewall/egress-test-pod.yaml -n test
oc exec -it centos -n test -- /bin/bash
curl -i https://www.microsoft.com/
# HTTP/2 200
# ...
```

Associate ARO subnets to FW:

```sh
az network vnet subnet update -g $RESOURCEGROUP --vnet-name $VNET --name "master-subnet" --route-table aro-udr
az network vnet subnet update -g $RESOURCEGROUP --vnet-name $VNET --name "worker-subnet" --route-table aro-udr
```

Test egress connectivity (after applying egress FW):

```sh
# Re-use the existing running centos container
oc exec -it centos -n test -- /bin/bash
curl -i https://www.microsoft.com/
# curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.microsoft.com:443
curl -i https://grafana.com
# HTTP/2 200
# ...
```

If you have enabled Diagnostic settings on the Firewall and enabled `AzureFirewallApplicationRule` to be logged, you can see the egress attempt and the deny log by clicking on the firewall **Logs** pane and entering this sample query:

```kql
AzureDiagnostics
| where msg_s contains "Action: Deny"
| where msg_s contains "microsoft"
| limit 10
| order by TimeGenerated desc
| project TimeGenerated, ResourceGroup, msg_s

// msg_s: HTTPS request from x.x.x.x:yyyyyy to www.microsoft.com:443. Action: Deny. No rule matched. Proceeding with default action
```

(Optional) Setup ingress via FW for private OpenShift routes:

```sh
ROUTE_LB_IP=x.x.x.x

az network firewall nat-rule create -g $RESOURCEGROUP -f aro-private \
  --collection-name 'http-aro-ingress' \
  --priority 100 \
  --action 'Dnat' \
  -n 'http-ingress' \
  --source-addresses '*' \
  --destination-address $FWPUBLIC_IP \
  --destination-ports '80' \
  --translated-address $ROUTE_LB_IP \
  --translated-port '80' \
  --protocols 'TCP'
```

Clean-up:

```sh
# Delete the test pod and namespace
oc delete -f firewall/egress-test-pod.yaml -n test
oc delete ns test
```

--------------------------------------------------------------------------------
/firewall/egress-test-pod.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  name: centos
spec:
  containers:
    - name: centos
      image: centos
      ports:
        - containerPort: 80
      command:
        - sleep
        - "3600"
      resources:
        requests:
          memory: "128Mi"
          cpu: "250m"
        limits:
          memory: "256Mi"
          cpu: "500m"

--------------------------------------------------------------------------------
/htpasswd-cr.yaml:
--------------------------------------------------------------------------------
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
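# Note (added sketch): this OAuth CR references a secret named "htpass-secret"
# that must already exist in the openshift-config namespace. Assuming the
# aro-user.htpasswd file name used in this repo's .gitignore:
#   htpasswd -c -B -b aro-user.htpasswd <user> <password>
#   oc create secret generic htpass-secret --from-file=htpasswd=aro-user.htpasswd -n openshift-config
#   oc apply -f htpasswd-cr.yaml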
--------------------------------------------------------------------------------
/logging/.gitignore:
--------------------------------------------------------------------------------
/syslog-ssh-rsa
/syslog-ssh-rsa.pub

--------------------------------------------------------------------------------
/logging/README.md:
--------------------------------------------------------------------------------
Cluster Logging
===============

Setup a Syslog server to test log forwarding:

```sh
source ../aro4-env.sh
az group create --name $RESOURCEGROUP --location $LOCATION

ssh-keygen -t rsa -b 2048 -C "syslog server" -f ./syslog-ssh-rsa -N ""

az vm create \
  --resource-group $RESOURCEGROUP \
  --name syslog-server \
  --image CentOS \
  --size Standard_D2s_v3 \
  --admin-username azureuser \
  --ssh-key-values ./syslog-ssh-rsa.pub \
  --public-ip-address "" \
  --vnet-name $UTILS_VNET \
  --subnet utils-hosts

az vm open-port --port 514 --resource-group $RESOURCEGROUP --name syslog-server
az vm list-ip-addresses --resource-group $RESOURCEGROUP --name syslog-server --query [].virtualMachine.network.privateIpAddresses[0] -o tsv

# Create a private DNS zone, link that to your utils-vnet VNET and add your VM IP address as an A record in the private zone.
# See:
# - https://docs.microsoft.com/en-us/azure/dns/private-dns-getstarted-cli#create-a-private-dns-zone

# Connect to the VM with your Bastion service, using the private key `./syslog-ssh-rsa`

sudo yum update -y
sudo yum -y install rsyslog
sudo systemctl status rsyslog.service
sudo systemctl enable rsyslog.service

echo "test message from user root" | logger
sudo tail /var/log/messages

sudo vi /etc/rsyslog.conf
# Uncomment the lines below `Provides TCP syslog reception`
```

```txt
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
```

Save changes and exit `vi`.

```sh
sudo systemctl restart rsyslog
sudo netstat -antup | grep 514
```

Access the ARO console.

Go to the Operator Hub and install the "Cluster Logging" operator.

Select "Cluster Log Forwarder" and click "Create ClusterLogForwarder".

Refer to the docs for [Forwarding logs using the syslog protocol](https://docs.openshift.com/container-platform/4.6/logging/cluster-logging-external.html#cluster-logging-collector-log-forward-syslog_cluster-logging-external).
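(Added) Before creating the forwarder, you can confirm the syslog port is reachable from a pod; a hedged sketch using bash's `/dev/tcp` (replace `<syslog-server>` with your VM's private IP or DNS name):

```sh
oc run tcp-check --rm -it --restart=Never --image=registry.access.redhat.com/ubi8/ubi -- \
  bash -c 'timeout 2 bash -c "</dev/tcp/<syslog-server>/514" && echo "port 514 reachable"'
```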

Example `ClusterLogForwarder` file (`syslog-forwarder.yaml`) to forward logs to the Syslog Server VM:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
    - name: syslog-server-vm
      syslog:
        facility: local0
        rfc: RFC5424
        severity: debug # switch to say `error` or `informational` later
      type: syslog
      url: 'tcp://<syslog-server>:514'
  pipelines:
    - inputRefs:
        - application
        - infrastructure
        - audit
      labels:
        syslog: logforwarder-demo
      name: syslog-aro
      outputRefs:
        - syslog-server-vm
        - default
```

```sh
oc create -f syslog-forwarder.yaml
```

(Optional) Create Elasticsearch and Kibana for local log reception (`cluster-logging.yaml`):

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    elasticsearch:
      nodeCount: 3
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: ZeroRedundancy
      resources:
        requests:
          memory: 2Gi
      storage:
        size: 50G
        storageClassName: managed-premium
    retentionPolicy:
      application:
        maxAge: 1d
      audit:
        maxAge: 7d
      infra:
        maxAge: 7d
    type: elasticsearch
  visualization:
    kibana:
      replicas: 1
    type: kibana
  collection:
    logs:
      fluentd: {}
      type: fluentd
  curation:
    curator:
      schedule: 30 3 * * *
    type: curator
```

```sh
oc create -f cluster-logging.yaml
```

Generate some logs from a container in ARO:

```sh
oc new-project test
oc run -it baseos --image centos:latest

[root@baseos /]# echo "this is a test"
[root@baseos /]# exit

oc delete pod/baseos
oc delete project test
```

Check that the logs with "this is a test" appear in Kibana (create and select the "app-*" index pattern).
Check the logs in the syslog server (tail output).

(Optional) Stop the Syslog Server VM:

```sh
az vm stop --resource-group $RESOURCEGROUP --name syslog-server
```

TODO
----

* Update steps to use the CLI instead of the Azure Portal and ARO console.

Resources
---------

* https://www.itzgeek.com/how-tos/linux/centos-how-tos/setup-syslog-server-on-centos-7-rhel-7.html
* https://docs.openshift.com/container-platform/4.6/logging/cluster-logging-external.html#cluster-logging-collector-log-forward-syslog_cluster-logging-external

--------------------------------------------------------------------------------
/monitoring/README.md:
--------------------------------------------------------------------------------
Azure Monitor integration for ARO
=================================

Using Container Insights on ARO via Arc-enabled Kubernetes Monitoring
---------------------------------------------------------------------

First, Arc-enable the ARO cluster (see [these steps](../arc/)).

[Enable Azure Monitor Container Insights for Azure Arc enabled Kubernetes clusters](https://docs.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters)

Create a [Log Analytics Workspace](https://learn.microsoft.com/en-us/azure/azure-monitor/logs/azure-cli-log-analytics-workspace-sample#create-a-workspace-for-monitor-logs).

```sh
WORKSPACE_NAME="aro-logs"

az monitor log-analytics workspace create --resource-group $RESOURCEGROUP \
  --workspace-name $WORKSPACE_NAME --location $LOCATION

WORKSPACE_ID="$(az monitor log-analytics workspace show -n $WORKSPACE_NAME -g $RESOURCEGROUP --query id -o tsv)"

# Install the extension with **amalogs.useAADAuth=false**
# Non-CLI onboarding is not supported for Arc-enabled Kubernetes clusters with ARO.
# Currently, only k8s-extension version 1.3.7 or below is supported.
# See: https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-enable-arc-enabled-clusters?tabs=create-cli%2Cverify-portal%2Cmigrate-cli#create-extension-instance
az extension remove --name k8s-extension
az extension add --name k8s-extension --version 1.3.7

az k8s-extension create \
  --name azuremonitor-containers \
  --cluster-name $ARC_CLUSTER_NAME \
  --resource-group $ARC_RESOURCE_GROUP \
  --cluster-type connectedClusters \
  --extension-type Microsoft.AzureMonitor.Containers \
  --configuration-settings logAnalyticsWorkspaceResourceID=$WORKSPACE_ID \
  --configuration-settings amalogs.useAADAuth=false
```

Check the "aro-arc" resource in the Azure Portal. Click "Insights" to view the cluster health and metrics.

Adjust the [logging configuration](https://learn.microsoft.com/en-us/azure/azure-monitor/containers/container-insights-agent-config), if necessary.

To remove Arc-enabled Monitoring (this won't delete the Log Analytics workspace):

```sh
az k8s-extension delete \
  --name azuremonitor-containers \
  --cluster-type connectedClusters \
  --cluster-name $ARC_CLUSTER_NAME \
  --resource-group $ARC_RESOURCE_GROUP
```

To disconnect your cluster from Arc:

```sh
az connectedk8s delete --name $ARC_CLUSTER_NAME --resource-group $ARC_RESOURCE_GROUP

kubectl get crd -o name | grep azure | xargs kubectl delete
```

--------------------------------------------------------------------------------
/monitoring/enable-monitoring.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#
# Execute this directly in Azure Cloud Shell (https://shell.azure.com) by pasting (SHIFT+INS on Windows, CTRL+V on Mac or Linux)
# the following line (beginning with curl...) at the command prompt and then replacing the args:
# This script onboards Azure Monitor for containers to a Kubernetes cluster hosted outside Azure and connected to Azure via Azure Arc
#
# 1. Creates the default Azure Log Analytics workspace if one doesn't exist in the specified subscription
# 2. Adds the ContainerInsights solution to the Azure Log Analytics workspace
# 3. Adds the workspaceResourceId tag or enables the addon (if the cluster is AKS) on the provided managed cluster resource id
# 4. Installs the Azure Monitor for containers Helm chart onto the K8s cluster provided via --kube-context
# Prerequisites :
#   Azure CLI: https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest
#   Helm3 : https://helm.sh/docs/intro/install/
#   OC: https://docs.microsoft.com/en-us/azure/openshift/tutorial-connect-cluster#install-the-openshift-cli # Applicable for only ARO v4
# Note > 1. Format of the proxy endpoint should be http(s)://<user>:<password>@<proxyhost>:<proxyport>
#        2. cluster and workspace resource should be in valid azure resource id format

# download script
# curl -o enable-monitoring.sh -L https://aka.ms/enable-monitoring-bash-script

# 1. Using Default Azure Log Analytics and no-proxy with current kube config context
# bash enable-monitoring.sh --resource-id <cluster-resource-id>

# 2. Using Default Azure Log Analytics and no-proxy
# bash enable-monitoring.sh --resource-id <cluster-resource-id> --kube-context <kube-context>

# 3. Using Default Azure Log Analytics and with proxy endpoint configuration
# bash enable-monitoring.sh --resource-id <cluster-resource-id> --kube-context <kube-context> --proxy <proxy-endpoint>

# 4. Using Existing Azure Log Analytics and no-proxy
# bash enable-monitoring.sh --resource-id <cluster-resource-id> --kube-context <kube-context> --workspace-id <workspace-resource-id>

# 5. Using Existing Azure Log Analytics and proxy
# bash enable-monitoring.sh --resource-id <cluster-resource-id> --kube-context <kube-context> --workspace-id <workspace-resource-id> --proxy <proxy-endpoint>

set -e
set -o pipefail

# default to public cloud since the only supported cloud is Azure public cloud
defaultAzureCloud="AzureCloud"

# helm repo details
helmRepoName="incubator"
helmRepoUrl="https://kubernetes-charts-incubator.storage.googleapis.com/"
helmChartName="azuremonitor-containers"

# default release name used during onboarding
releaseName="azmon-containers-release-1"

# resource provider for azure arc connected cluster
arcK8sResourceProvider="Microsoft.Kubernetes/connectedClusters"

# resource provider for azure redhat openshift v4 cluster
aroV4ResourceProvider="Microsoft.RedHatOpenShift/OpenShiftClusters"

# resource provider for aks cluster
aksResourceProvider="Microsoft.ContainerService/managedClusters"

# default of resourceProvider is arc k8s and this will get updated based on the provider cluster resource
resourceProvider="Microsoft.Kubernetes/connectedClusters"


# resource type for azure log analytics workspace
workspaceResourceProvider="Microsoft.OperationalInsights/workspaces"

# openshift project name for aro v4 cluster
openshiftProjectName="azure-monitor-for-containers"

# aro v4 cluster resource
isAroV4Cluster=false

# arc k8s cluster resource
isArcK8sCluster=false

# aks cluster resource
isAksCluster=false

# workspace and cluster are in the same azure subscription
isClusterAndWorkspaceInSameSubscription=true

solutionTemplateUri="https://raw.githubusercontent.com/microsoft/Docker-Provider/ci_dev/scripts/onboarding/templates/azuremonitor-containerSolution.json"

# default global params
clusterResourceId=""
kubeconfigContext=""
workspaceResourceId=""
proxyEndpoint=""
containerLogVolume=""

# default workspace region and code
workspaceRegion="australiaeast"
workspaceRegionCode="eau"
workspaceResourceGroup="DefaultResourceGroup-"$workspaceRegionCode

# default workspace guid and key
workspaceGuid=""
workspaceKey=""

usage()
{
   local basename=`basename $0`
   echo
   echo "Enable Azure Monitor for containers:"
   echo "$basename --resource-id <cluster-resource-id> [--kube-context <kube-context>] [--workspace-id <workspace-resource-id>] [--proxy <proxy-endpoint>]"
}

parse_args()
{

   if [ $# -le 1 ]
   then
      usage
      exit 1
   fi

   # Transform long options to short ones
   for arg in "$@"; do
      shift
      case "$arg" in
         "--resource-id") set -- "$@" "-r" ;;
         "--kube-context") set -- "$@" "-k" ;;
         "--workspace-id") set -- "$@" "-w" ;;
         "--proxy") set -- "$@" "-p" ;;
         "--helm-repo-name") set -- "$@" "-n" ;;
         "--helm-repo-url") set -- "$@" "-u" ;;
         "--container-log-volume") set -- "$@" "-v" ;; # added mapping so the -v option handled below is reachable
         "--"*) usage ;;
         *) set -- "$@" "$arg"
      esac
   done

   local OPTIND opt

   while getopts 'hk:r:w:p:n:u:v:' opt; do
      case "$opt" in
         h)
            usage
            ;;

         k)
            kubeconfigContext="$OPTARG"
            echo "name of kube-context is $OPTARG"
            ;;

         r)
            clusterResourceId="$OPTARG"
            echo "clusterResourceId is $OPTARG"
            ;;

         w)
            workspaceResourceId="$OPTARG"
            echo "workspaceResourceId is $OPTARG"
            ;;

         p)
            proxyEndpoint="$OPTARG"
            echo "proxyEndpoint is $OPTARG"
            ;;

         n)
            helmRepoName="$OPTARG"
            echo "helm repo name is $OPTARG"
            ;;

         u)
            helmRepoUrl="$OPTARG"
            echo "helm repo url is $OPTARG"
            ;;

         v)
            containerLogVolume="$OPTARG"
            echo "container log volume is $OPTARG"
            ;;

         ?)
            usage
            exit 1
            ;;
      esac
   done
   shift "$(($OPTIND -1))"


   local subscriptionId="$(echo ${clusterResourceId} | cut -d'/' -f3)"
   local resourceGroup="$(echo ${clusterResourceId} | cut -d'/' -f5)"

   # get resource parts and join back to get the provider name
   local providerNameResourcePart1="$(echo ${clusterResourceId} | cut -d'/' -f7)"
   local providerNameResourcePart2="$(echo ${clusterResourceId} | cut -d'/' -f8)"
   local providerName="$(echo ${providerNameResourcePart1}/${providerNameResourcePart2} )"

   local clusterName="$(echo ${clusterResourceId} | cut -d'/' -f9)"

   # convert to lowercase for validation
   providerName=$(echo $providerName | tr "[:upper:]" "[:lower:]")

   echo "cluster SubscriptionId:" $subscriptionId
   echo "cluster ResourceGroup:" $resourceGroup
   echo "cluster ProviderName:" $providerName
   echo "cluster Name:" $clusterName

   if [ -z "$subscriptionId" -o -z "$resourceGroup" -o -z "$providerName" -o -z "$clusterName" ]; then
      echo "-e invalid cluster resource id. Please try with valid fully qualified resource id of the cluster"
      exit 1
   fi

   if [[ $providerName != microsoft.* ]]; then
      echo "-e invalid azure cluster resource id format."

parse_args()
{
  if [ $# -le 1 ]; then
    usage
    exit 1
  fi

  # Transform long options to short ones
  for arg in "$@"; do
    shift
    case "$arg" in
      "--resource-id") set -- "$@" "-r" ;;
      "--kube-context") set -- "$@" "-k" ;;
      "--workspace-id") set -- "$@" "-w" ;;
      "--proxy") set -- "$@" "-p" ;;
      "--helm-repo-name") set -- "$@" "-n" ;;
      "--helm-repo-url") set -- "$@" "-u" ;;
      "--"*) usage ;;
      *) set -- "$@" "$arg"
    esac
  done

  local OPTIND opt

  # 'v:' is included in the optstring so the container log volume option below is actually reachable
  while getopts 'hk:r:w:p:n:u:v:' opt; do
    case "$opt" in
      h)
        usage
        ;;

      k)
        kubeconfigContext="$OPTARG"
        echo "name of kube-context is $OPTARG"
        ;;

      r)
        clusterResourceId="$OPTARG"
        echo "clusterResourceId is $OPTARG"
        ;;

      w)
        workspaceResourceId="$OPTARG"
        echo "workspaceResourceId is $OPTARG"
        ;;

      p)
        proxyEndpoint="$OPTARG"
        echo "proxyEndpoint is $OPTARG"
        ;;

      n)
        helmRepoName="$OPTARG"
        echo "helm repo name is $OPTARG"
        ;;

      u)
        helmRepoUrl="$OPTARG"
        echo "helm repo url is $OPTARG"
        ;;

      v)
        containerLogVolume="$OPTARG"
        echo "container log volume is $OPTARG"
        ;;

      ?)
        usage
        exit 1
        ;;
    esac
  done
  shift "$((OPTIND - 1))"

  local subscriptionId="$(echo ${clusterResourceId} | cut -d'/' -f3)"
  local resourceGroup="$(echo ${clusterResourceId} | cut -d'/' -f5)"

  # get resource parts and join them back to get the provider name
  local providerNameResourcePart1="$(echo ${clusterResourceId} | cut -d'/' -f7)"
  local providerNameResourcePart2="$(echo ${clusterResourceId} | cut -d'/' -f8)"
  local providerName="${providerNameResourcePart1}/${providerNameResourcePart2}"

  local clusterName="$(echo ${clusterResourceId} | cut -d'/' -f9)"

  # convert to lowercase for validation
  providerName=$(echo $providerName | tr "[:upper:]" "[:lower:]")

  echo "cluster SubscriptionId:" $subscriptionId
  echo "cluster ResourceGroup:" $resourceGroup
  echo "cluster ProviderName:" $providerName
  echo "cluster Name:" $clusterName

  if [ -z "$subscriptionId" -o -z "$resourceGroup" -o -z "$providerName" -o -z "$clusterName" ]; then
    echo "error: invalid cluster resource id. Please try again with a valid, fully qualified resource id of the cluster"
    exit 1
  fi

  if [[ $providerName != microsoft.* ]]; then
    echo "error: invalid azure cluster resource id format."
    exit 1
  fi

  # detect the resource provider from the provider name in the cluster resource id
  if [ $providerName = "microsoft.kubernetes/connectedclusters" ]; then
    echo "provider cluster resource is of Azure Arc K8s cluster type"
    isArcK8sCluster=true
    resourceProvider=$arcK8sResourceProvider
  elif [ $providerName = "microsoft.redhatopenshift/openshiftclusters" ]; then
    echo "provider cluster resource is of ARO v4 cluster type"
    resourceProvider=$aroV4ResourceProvider
    isAroV4Cluster=true
  elif [ $providerName = "microsoft.containerservice/managedclusters" ]; then
    echo "provider cluster resource is of AKS cluster type"
    isAksCluster=true
    resourceProvider=$aksResourceProvider
  else
    echo "error: unsupported azure managed cluster type"
    exit 1
  fi

  if [ -z "$kubeconfigContext" ]; then
    echo "using the current kube config context since the --kube-context parameter is not set"
  fi

  if [ ! -z "$workspaceResourceId" ]; then
    local workspaceSubscriptionId="$(echo $workspaceResourceId | cut -d'/' -f3)"
    local workspaceResourceGroup="$(echo $workspaceResourceId | cut -d'/' -f5)"
    local workspaceProviderName="$(echo $workspaceResourceId | cut -d'/' -f7)"
    local workspaceName="$(echo $workspaceResourceId | cut -d'/' -f9)"
    # convert to lowercase for validation
    workspaceProviderName=$(echo $workspaceProviderName | tr "[:upper:]" "[:lower:]")
    echo "workspace SubscriptionId:" $workspaceSubscriptionId
    echo "workspace ResourceGroup:" $workspaceResourceGroup
    echo "workspace ProviderName:" $workspaceProviderName
    echo "workspace Name:" $workspaceName

    if [[ $workspaceProviderName != microsoft.operationalinsights* ]]; then
      echo "error: invalid azure log analytics resource id format."
      exit 1
    fi
  fi

  if [ ! -z "$proxyEndpoint" ]; then
    # Validate the proxy endpoint URL
    # extract the protocol://
    proto="$(echo $proxyEndpoint | grep :// | sed -e's,^\(.*://\).*,\1,g')"
    # convert the protocol prefix to lowercase for validation
    proxyprotocol=$(echo $proto | tr "[:upper:]" "[:lower:]")
    if [ "$proxyprotocol" != "http://" -a "$proxyprotocol" != "https://" ]; then
      echo "error: proxy endpoint should be in this format: http(s)://<user>:<password>@<host>:<port>"
      exit 1
    fi
    # remove the protocol
    url="$(echo ${proxyEndpoint/$proto/})"
    # extract the creds
    creds="$(echo $url | grep @ | cut -d@ -f1)"
    user="$(echo $creds | cut -d':' -f1)"
    pwd="$(echo $creds | cut -d':' -f2)"
    # extract the host and port
    hostport="$(echo ${url/$creds@/} | cut -d/ -f1)"
    # extract the host without the port
    host="$(echo $hostport | sed -e 's,:.*,,g')"
    # extract the port
    port="$(echo $hostport | sed -e 's,^.*:,:,g' -e 's,.*:\([0-9]*\).*,\1,g' -e 's,[^0-9],,g')"

    if [ -z "$user" -o -z "$pwd" -o -z "$host" -o -z "$port" ]; then
      echo "error: proxy endpoint should be in this format: http(s)://<user>:<password>@<host>:<port>"
      exit 1
    else
      echo "successfully validated that the provided proxy endpoint is in the expected format"
    fi
  fi
}

configure_to_public_cloud()
{
  echo "Set AzureCloud as the active cloud for az cli"
  az cloud set -n $defaultAzureCloud
}

validate_cluster_identity()
{
  echo "validating cluster identity"

  local rgName="$(echo ${1})"
  local clusterName="$(echo ${2})"

  local identitytype=$(az resource show -g ${rgName} -n ${clusterName} --resource-type $resourceProvider --query identity.type)
  identitytype=$(echo $identitytype | tr "[:upper:]" "[:lower:]" | tr -d '"')
  echo "cluster identity type:" $identitytype

  if [[ "$identitytype" != "systemassigned" ]]; then
    echo "error: the only supported cluster identity is systemassigned for the Azure Arc K8s cluster type"
    exit 1
  fi

  echo "successfully validated the identity of the cluster"
}

create_default_log_analytics_workspace()
{
  # extract the subscription from the cluster resource id
  local subscriptionId="$(echo $clusterResourceId | cut -d'/' -f3)"
  local clusterRegion=$(az resource show --ids ${clusterResourceId} --query location)
  # strip the surrounding quotes so the region matches the map keys below
  clusterRegion=$(echo $clusterRegion | tr -d '"')
  echo "cluster region:" $clusterRegion

  # mappings for the default Azure Log Analytics workspace
  declare -A AzureCloudLocationToOmsRegionCodeMap=(
    [australiasoutheast]=ASE
    [australiaeast]=EAU
    [australiacentral]=CAU
    [canadacentral]=CCA
    [centralindia]=CIN
    [centralus]=CUS
    [eastasia]=EA
    [eastus]=EUS
    [eastus2]=EUS2
    [eastus2euap]=EAP
    [francecentral]=PAR
    [japaneast]=EJP
    [koreacentral]=SE
    [northeurope]=NEU
    [southcentralus]=SCUS
    [southeastasia]=SEA
    [uksouth]=SUK
    [usgovvirginia]=USGV
    [westcentralus]=EUS
    [westeurope]=WEU
    [westus]=WUS
    [westus2]=WUS2
  )

  declare -A AzureCloudRegionToOmsRegionMap=(
    [australiacentral]=australiacentral
    [australiacentral2]=australiacentral
    [australiaeast]=australiaeast
    [australiasoutheast]=australiasoutheast
    [brazilsouth]=southcentralus
    [canadacentral]=canadacentral
    [canadaeast]=canadacentral
    [centralus]=centralus
    [centralindia]=centralindia
    [eastasia]=eastasia
    [eastus]=eastus
    [eastus2]=eastus2
    [francecentral]=francecentral
    [francesouth]=francecentral
    [japaneast]=japaneast
    [japanwest]=japaneast
    [koreacentral]=koreacentral
    [koreasouth]=koreacentral
    [northcentralus]=eastus
    [northeurope]=northeurope
    [southafricanorth]=westeurope
    [southafricawest]=westeurope
    [southcentralus]=southcentralus
    [southeastasia]=southeastasia
    [southindia]=centralindia
    [uksouth]=uksouth
    [ukwest]=uksouth
    [westcentralus]=eastus
    [westeurope]=westeurope
    [westindia]=centralindia
    [westus]=westus
    [westus2]=westus2
  )

  if [ -n "${AzureCloudRegionToOmsRegionMap[$clusterRegion]}" ]; then
    workspaceRegion=${AzureCloudRegionToOmsRegionMap[$clusterRegion]}
  fi
  echo "Workspace Region: $workspaceRegion"

  if [ -n "${AzureCloudLocationToOmsRegionCodeMap[$workspaceRegion]}" ]; then
    workspaceRegionCode=${AzureCloudLocationToOmsRegionCodeMap[$workspaceRegion]}
  fi
  echo "Workspace Region Code: $workspaceRegionCode"

  workspaceResourceGroup="DefaultResourceGroup-"$workspaceRegionCode
  isRGExists=$(az group exists -g $workspaceResourceGroup)
  workspaceName="DefaultWorkspace-"$subscriptionId"-"$workspaceRegionCode
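  # e.g. per the maps above, a cluster in eastus resolves to workspace
  # "DefaultWorkspace-<subscriptionId>-EUS" in resource group "DefaultResourceGroup-EUS"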

  if $isRGExists; then
    echo "using existing default resource group: $workspaceResourceGroup"
  else
    echo "creating resource group: $workspaceResourceGroup in region: $workspaceRegion"
    az group create -g $workspaceResourceGroup -l $workspaceRegion
  fi

  workspaceList=$(az resource list -g $workspaceResourceGroup -n $workspaceName --resource-type $workspaceResourceProvider)
  if [ "$workspaceList" = "[]" ]; then
    # create a new default workspace since there is no existing default workspace mapped to this region
    echo '{"location":"'"$workspaceRegion"'", "properties":{"sku":{"name": "standalone"}}}' > WorkspaceProps.json
    cat WorkspaceProps.json
    workspace=$(az resource create -g $workspaceResourceGroup -n $workspaceName --resource-type $workspaceResourceProvider --is-full-object -p @WorkspaceProps.json)
  else
    echo "using existing default workspace: $workspaceName"
  fi

  workspaceResourceId=$(az resource show -g $workspaceResourceGroup -n $workspaceName --resource-type $workspaceResourceProvider --query id)
  workspaceResourceId=$(echo $workspaceResourceId | tr -d '"')
}

add_container_insights_solution()
{
  local resourceId="$(echo ${1})"

  # extract the resource group from the workspace resource id
  local resourceGroup="$(echo ${resourceId} | cut -d'/' -f5)"

  echo "adding the ContainerInsights solution to the workspace"
  solution=$(az deployment group create -g $resourceGroup --template-uri $solutionTemplateUri --parameters workspaceResourceId=$resourceId --parameters workspaceRegion=$workspaceRegion)
}

get_workspace_guid_and_key()
{
  # extract resource parts from the workspace resource id
  local resourceId="$(echo ${1} | tr -d '"')"
  local subId="$(echo ${resourceId} | cut -d'/' -f3)"
  local rgName="$(echo ${resourceId} | cut -d'/' -f5)"
  local wsName="$(echo ${resourceId} | cut -d'/' -f9)"

  # get the workspace guid
  workspaceGuid=$(az resource show -g $rgName -n $wsName --resource-type $workspaceResourceProvider --query properties.customerId)
  workspaceGuid=$(echo $workspaceGuid | tr -d '"')
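  # the workspace guid (customerId) above and the primary shared key fetched below are what
  # the agent is configured with so it can send telemetry to the Log Analytics workspace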
  echo "workspaceGuid: $workspaceGuid"

  echo "getting the workspace primary shared key"
  workspaceKey=$(az rest --method post --uri "$workspaceResourceId/sharedKeys?api-version=2015-11-01-preview" --query primarySharedKey)
  workspaceKey=$(echo $workspaceKey | tr -d '"')
}

install_helm_chart()
{
  # get the config-context for the ARO v4 cluster
  if [ "$isAroV4Cluster" = true ]; then
    echo "getting config-context of the ARO v4 cluster"
    echo "getting admin user creds for the ARO v4 cluster"
    adminUserName=$(az aro list-credentials -g $clusterResourceGroup -n $clusterName --query 'kubeadminUsername' -o tsv)
    adminPassword=$(az aro list-credentials -g $clusterResourceGroup -n $clusterName --query 'kubeadminPassword' -o tsv)
    apiServer=$(az aro show -g $clusterResourceGroup -n $clusterName --query apiserverProfile.url -o tsv)
    echo "logging in to the cluster via oc login"
    oc login $apiServer -u $adminUserName -p $adminPassword
    echo "creating project azure-monitor-for-containers"
    oc new-project $openshiftProjectName
    echo "getting config-context of the ARO v4 cluster"
    kubeconfigContext=$(oc config current-context)
  fi

  if [ -z "$kubeconfigContext" ]; then
    echo "installing the Azure Monitor for containers Helm chart on the cluster using the current kube context ..."
  else
    echo "installing the Azure Monitor for containers Helm chart on the cluster with kube context: ${kubeconfigContext} ..."
  fi

  echo "adding helm repo:" $helmRepoName
  helm repo add $helmRepoName $helmRepoUrl

  echo "updating the helm repo to get the latest charts"
  helm repo update

  if [ ! -z "$proxyEndpoint" ]; then
    echo "using the proxy endpoint since proxy configuration was passed in"
    if [ -z "$kubeconfigContext" ]; then
      echo "using the current kube-context since the --kube-context/-k parameter was not passed in"
      helm upgrade --install $releaseName --set omsagent.proxy=$proxyEndpoint,omsagent.secret.wsid=$workspaceGuid,omsagent.secret.key=$workspaceKey,omsagent.env.clusterId=$clusterResourceId $helmRepoName/$helmChartName
    else
      echo "using --kube-context:${kubeconfigContext} since it was passed in"
      helm upgrade --install $releaseName --set omsagent.proxy=$proxyEndpoint,omsagent.secret.wsid=$workspaceGuid,omsagent.secret.key=$workspaceKey,omsagent.env.clusterId=$clusterResourceId $helmRepoName/$helmChartName --kube-context ${kubeconfigContext}
    fi
  else
    if [ -z "$kubeconfigContext" ]; then
      echo "using the current kube-context since the --kube-context/-k parameter was not passed in"
      helm upgrade --install $releaseName --set omsagent.secret.wsid=$workspaceGuid,omsagent.secret.key=$workspaceKey,omsagent.env.clusterId=$clusterResourceId $helmRepoName/$helmChartName
    else
      echo "using --kube-context:${kubeconfigContext} since it was passed in"
      helm upgrade --install $releaseName --set omsagent.secret.wsid=$workspaceGuid,omsagent.secret.key=$workspaceKey,omsagent.env.clusterId=$clusterResourceId $helmRepoName/$helmChartName --kube-context ${kubeconfigContext}
    fi
  fi

  echo "chart installation completed."
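  # optional sanity check after installation (assumes the chart deploys omsagent pods):
  #   kubectl get pods --all-namespaces | grep omsagent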
}

login_to_azure()
{
  echo "logging in to Azure interactively"
  az login --use-device-code
}

set_azure_subscription()
{
  local subscriptionId="$(echo ${1})"
  echo "setting subscription id: ${subscriptionId} as the current subscription for the azure cli"
  az account set -s ${subscriptionId}
  echo "successfully configured subscription id: ${subscriptionId} as the current subscription for the azure cli"
}

attach_monitoring_tags()
{
  echo "attaching the logAnalyticsWorkspaceResourceId tag to the cluster resource"
  status=$(az resource update --set tags.logAnalyticsWorkspaceResourceId=$workspaceResourceId -g $clusterResourceGroup -n $clusterName --resource-type $resourceProvider)
  echo "$status"
  echo "successfully attached the logAnalyticsWorkspaceResourceId tag to the cluster resource"
}

# enables the aks monitoring addon for private preview; don't use this for aks in production
enable_aks_monitoring_addon()
{
  echo "getting the cluster object"
  clusterGetResponse=$(az rest --method get --uri "$clusterResourceId?api-version=2020-03-01")
  export jqquery=".properties.addonProfiles.omsagent.config.logAnalyticsWorkspaceResourceID=\"$workspaceResourceId\""
  echo $clusterGetResponse | jq "$jqquery" > putrequestbody.json
  status=$(az rest --method put --uri "$clusterResourceId?api-version=2020-03-01" --body @putrequestbody.json --headers Content-Type=application/json)
  echo "status after enabling the aks monitoring addon: $status"
}

# parse and validate args
parse_args "$@"

# configure azure cli for public cloud
configure_to_public_cloud

# parse the cluster resource id
clusterSubscriptionId="$(echo $clusterResourceId | cut -d'/' -f3 | tr "[:upper:]" "[:lower:]")"
clusterResourceGroup="$(echo $clusterResourceId | cut -d'/' -f5)"
providerName="$(echo $clusterResourceId | cut -d'/' -f7)"
clusterName="$(echo $clusterResourceId | cut -d'/' -f9)"

# log in to azure interactively
login_to_azure

# set the cluster subscription id as the active subscription for the azure cli
set_azure_subscription $clusterSubscriptionId

# validate the cluster identity if it's an Arc K8s cluster
if [ "$isArcK8sCluster" = true ]; then
  validate_cluster_identity $clusterResourceGroup $clusterName
fi

if [ -z "$workspaceResourceId" ]; then
  echo "Using or creating the default Log Analytics workspace since the workspaceResourceId parameter is not set..."
  create_default_log_analytics_workspace
else
  echo "using the provided azure log analytics workspace: ${workspaceResourceId}"
  workspaceResourceId=$(echo $workspaceResourceId | tr -d '"')
  workspaceSubscriptionId="$(echo ${workspaceResourceId} | cut -d'/' -f3 | tr "[:upper:]" "[:lower:]")"
  workspaceResourceGroup="$(echo ${workspaceResourceId} | cut -d'/' -f5)"
  workspaceName="$(echo ${workspaceResourceId} | cut -d'/' -f9)"

  # make the workspace subscription active for the azure cli if the workspace is in a different sub than the cluster
  if [[ "$clusterSubscriptionId" != "$workspaceSubscriptionId" ]]; then
    echo "switching to the workspace subscription: ${workspaceSubscriptionId} as the active subscription for the azure cli since the workspace is in a different subscription than the cluster"
    isClusterAndWorkspaceInSameSubscription=false
    set_azure_subscription $workspaceSubscriptionId
  fi

  workspaceRegion=$(az resource show --ids ${workspaceResourceId} --query location)
  workspaceRegion=$(echo $workspaceRegion | tr -d '"')
  echo "Workspace Region: $workspaceRegion"
fi

# add the container insights solution
add_container_insights_solution $workspaceResourceId

# get the workspace guid and key
get_workspace_guid_and_key $workspaceResourceId

# switch back to the cluster subscription if the workspace is in a different subscription,
# so that the tagging/addon calls below target the cluster's subscription
if [ "$isClusterAndWorkspaceInSameSubscription" = false ]; then
  echo "switching back to the cluster subscription id as the active subscription for the cli: ${clusterSubscriptionId}"
  set_azure_subscription $clusterSubscriptionId
fi

# attach monitoring tags to the cluster resource
if [ "$isAksCluster" = true ]; then
  enable_aks_monitoring_addon
else
  attach_monitoring_tags
fi

# install the helm chart
install_helm_chart

# portal link
echo "Proceed to https://aka.ms/azmon-containers to view the health of your newly onboarded cluster"
--------------------------------------------------------------------------------
/oc-login.sh:
--------------------------------------------------------------------------------
#!/bin/bash

source ./aro4-env.sh

KUBEADMIN_PASSWD="$(az aro list-credentials -g $RESOURCEGROUP -n $CLUSTER --query kubeadminPassword -o tsv)"
API_URL="$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query apiserverProfile.url -o tsv)"
CONSOLE_URL="$(az aro show -g $RESOURCEGROUP -n $CLUSTER --query consoleProfile.url -o tsv)"

oc login -u kubeadmin -p "$KUBEADMIN_PASSWD" --server="$API_URL"

echo "Browse to: $CONSOLE_URL"
--------------------------------------------------------------------------------
/sampleapp/README.md:
--------------------------------------------------------------------------------
Sample app
==========

Example of exposing an app on a hostname in a custom domain, which does not necessarily match the OpenShift route's default domain.
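
A minimal sketch of one way to deploy and expose it (assumptions: the manifests in this folder, the `server-pfx` secret created with the command further below, and a hypothetical hostname). Since the pod terminates TLS itself with the PFX certificate, a passthrough route is the natural fit:

```sh
# Deploy the app and its service
oc apply -f sampleapp.deploy.yaml
oc apply -f sampleapp.svc.yaml

# Passthrough route: TLS is terminated by the pod, not the router (hostname is hypothetical)
oc create route passthrough sampleapp --service=sampleapp-svc --hostname=sampleapp.example.com
```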

TODO

```sh
kubectl create secret generic server-pfx --from-file=server.pfx=./server.pfx
```

References
----------

* https://thorsten-hans.com/6-steps-to-run-netcore-apps-in-azure-kubernetes
* https://docs.microsoft.com/en-us/aspnet/core/security/docker-https?view=aspnetcore-3.1
* https://medium.com/@tbusser/creating-a-browser-trusted-self-signed-ssl-certificate-2709ce43fd15
--------------------------------------------------------------------------------
/sampleapp/sampleapp.deploy.yaml:
--------------------------------------------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
  labels:
    app: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
    spec:
      containers:
        - name: aspnetapp
          image: mcr.microsoft.com/dotnet/core/samples:aspnetapp
          ports:
            - containerPort: 443
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          volumeMounts:
            - name: secret-volume
              readOnly: true
              mountPath: "/https"
          env:
            - name: ASPNETCORE_URLS
              value: "https://+;http://+"
            - name: ASPNETCORE_HTTPS_PORT
              value: "443"
            - name: ASPNETCORE_Kestrel__Certificates__Default__Password
              value: "mypassword"
            - name: ASPNETCORE_Kestrel__Certificates__Default__Path
              value: /https/server.pfx
      volumes:
        - name: secret-volume
          secret:
            secretName: server-pfx
--------------------------------------------------------------------------------
/sampleapp/sampleapp.svc.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: sampleapp-svc
spec:
  selector:
    app: sampleapp
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 443
      # targetPort must match the container's HTTPS port (443, per the deployment)
      targetPort: 443
--------------------------------------------------------------------------------