├── README.md ├── air-gapped.md ├── classy-cluster-deploy.sh ├── cluster-autoscaler ├── cluster-autoscaler.md ├── load-cpu.yaml └── tkgs-cluster-class-autoscaler.yaml ├── download-images-offline-tkr.sh ├── exec-on-supr-nodes.sh ├── exec-on-tkgs-nodes.sh ├── export-container-images.sh ├── export-docker-images.sh ├── fluentbit-cm-on-supervisoryaml ├── generate-cert.sh ├── kubectl-admin-sc.yaml ├── minio-tmc-tkgs.md ├── modify-psp-guestcluster.yaml ├── obsolete ├── TKGs-insecure-registry.md ├── add-root-ca-to-nodes.sh ├── create-wcp-privuser.sh ├── get-harbor-admin-password.sh ├── tkgs-cluster-v1alpha1.yaml └── tkgs-config.yaml ├── running-new-deployment-on-SC.md ├── tkgm ├── install-ako-for-l7-node-port-local.yaml ├── management-cluster-config.yaml └── workload-cluster-config.yaml ├── tkgs restricted-rbac.yaml ├── tkgs-cluster-class-az.yaml ├── tkgs-cluster-class-noaz.yaml ├── tkgs-cluster-class-override.yaml ├── tkgs-cluster-v1alpha3.yaml └── tmc-cluster-v1alpha2.yaml /README.md: -------------------------------------------------------------------------------- 1 | # vSphere with Tanzu Helper scripts 2 | 3 | In this repository, you will find a collection of helpful scripts that users can use to troubleshoot and/or interact with vSphere with Tanzu. 4 | 5 | --- 6 | 1. `exec-on-tkgs-nodes.sh` - Script to execute a command on all the nodes of a Tanzu Kubernetes cluster on vSphere 7. Make sure you run this script in the **Supervisor Cluster context**. Pass command-line arguments as per your requirements. 7 | 8 | ```shell 9 | $ ./exec-on-tkgs-nodes.sh -n [Namespace within the Supervisor Cluster, where the TKGs cluster resides] -t [Name of the TKGs cluster] -c [Command to execute, use ' or " to input multiple strings as command] 10 | # For example - 11 | $ ./exec-on-tkgs-nodes.sh -n demo1 -t workload-vsphere-tkg1 -c 'sudo cat /etc/kubernetes/extra-config/audit-policy.yaml' 12 | ``` 13 | 14 | Note: On the initial run, it may take a *couple of minutes* for the script to execute commands while the jump box gets ready. Also, during the initial runs, you may have to enter *Yes* to accept the nodes' SSH thumbprints. 15 | 16 | --- 17 | 2. `download-images-offline-tkr.sh` - This is a handy script to download Tanzu Kubernetes Release (TKR) OVA images from the VMware subscribed content library. This is particularly helpful when the vCenter is in a firewalled environment, and the Kubernetes content library needs to be populated with the TKR images. This script downloads and zips the OVA files so they can be easily transported to the offline environment and uploaded to the content library. 18 | 19 | ```shell 20 | $ ./download-images-offline-tkr.sh 21 | 22 | The VMware subscribed content library has the following Tanzu Kubernetes Release images ... 23 | 24 | ob-17332787-photon-3-k8s-v1.17.13---vmware.1-tkg.2.2c133ed 25 | ob-17654937-photon-3-k8s-v1.18.15---vmware.1-tkg.1.600e412 26 | ob-17010758-photon-3-k8s-v1.17.11---vmware.1-tkg.2.ad3d374 27 | ob-17419070-photon-3-k8s-v1.18.10---vmware.1-tkg.1.3a6cd48 28 | ob-17660956-photon-3-k8s-v1.19.7---vmware.1-tkg.1.fc82c41 29 | ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18 30 | ... 31 | 32 | Enter the name of the TanzuKubernetesRelease OVA that you want to download and zip for offline upload: ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18 33 | ... 34 | ``` 35 | 
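Once the `.tar.gz` archive has been copied to the offline jump box, it is extracted and imported into the local content library with `govc`. The sequence below mirrors the commands the script itself prints on completion (the content library name `Local` and the `photon-ova.ovf`/`ubuntu-ova.ovf` file names are the examples the script uses):

```shell
$ tar -xzvf ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18.tar.gz
$ cd ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18
$ govc library.import -n ob-16924027-photon-3-k8s-v1.17.11---vmware.1-tkg.1.15f1e18 -m=true Local photon-ova.ovf
```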
36 | --- 37 | 3. `get-harbor-admin-password.sh` - The Harbor registry delivered through vSphere with Tanzu is heavily locked down, and the admin credentials are a bit difficult to extract. This script helps to get the required credentials. Due to current limitations, the Harbor namespace is locked down, and this script needs to be executed from the WCP Supervisor control plane VM(s). 38 | 39 | WARNING - THIS IS USED FOR ADVANCED TROUBLESHOOTING AND SHOULD NOT BE USED FOR NORMAL OPERATIONS. 40 | 41 | --- 42 | 4. `tkgs-cluster.yaml` and `tkgs-config.yaml` - These are two config files that can be used to deploy TKG workload clusters on vSphere with Tanzu. `tkgs-config.yaml` is used to modify the global default configurations and hence should be applied as soon as the Supervisor Cluster has been deployed. `tkgs-cluster.yaml` can be modified and used to deploy a Tanzu Kubernetes Cluster. 43 | 44 | --- 45 | 5. `create-wcp-privuser.sh` - GOVC script to create a user with permissions to modify WCP-protected objects in the vCenter. This is highly experimental and should only be used for troubleshooting purposes. To execute, modify the following variables in the script as per your environment and then execute the script. 46 | 47 | ``` 48 | export GOVC_URL=myvcenter.vmware.local 49 | export GOVC_USERNAME=administrator@vsphere.local 50 | export GOVC_PASSWORD=myvcenterpassword 51 | export GOVC_INSECURE=true 52 | ``` 53 | 54 | ```shell 55 | $ ./create-wcp-privuser.sh 56 | ``` 57 | 58 | This will create a user called `wcpadmin` with the relevant privileges to the WCP objects within the vCenter server. 59 | -------------------------------------------------------------------------------- /air-gapped.md: -------------------------------------------------------------------------------- 1 | # vCenter Configuration 2 | 3 | 1. Use the `download-images-offline-tkr.sh` script to relocate images to the air-gapped vCenter. 4 | 5 | 6 | # Supervisor Configuration 7 | 8 | 1. Create a public repository within the internal registry. Upload the necessary images/manifests, such as Supervisor Services, that need to be executed on the Supervisor. 9 | 10 | 2. Download the certificate of the Registry. 11 | 12 | ``` 13 | sudo wget -O ~/Downloads/ca.crt https://$REGISTRY/api/v2.0/systeminfo/getcert --no-check-certificate 14 | # The above example is for Harbor 15 | ``` 16 | 17 | 3. Modify the ConfigMap *image-fetcher-ca-bundle* and update the CM with the Registry certificate. 18 | 19 | ``` 20 | k edit cm -n kube-system image-fetcher-ca-bundle 21 | ``` 22 | 23 | 4. Modify the secret of the kapp controller `kapp-controller-config`. Get the current secret, base64-decode the existing value, append the Registry certificate, and update the secret. 24 | 25 | ``` 26 | k edit secret -n vmware-system-appplatform-operator-system kapp-controller-config 27 | # Note - you must store the data as stringData and not as data. Hence, the values must not be base64-encoded. Update all the fields accordingly. 
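# A hedged, non-interactive sketch of the same change (this assumes the CA bundle lives under a caCerts
# key, the standard kapp-controller config key; verify the key name in your release before using):
#   kubectl -n vmware-system-appplatform-operator-system get secret kapp-controller-config -o yaml > kc-config.yaml
#   # decode the existing caCerts value, append the registry CA from ca.crt, and put the combined PEM
#   # back under stringData.caCerts (removing the old data.caCerts entry), then re-apply:
#   kubectl apply -f kc-config.yaml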
28 | ``` 29 | 30 | # Workload Cluster Configuration 31 | -------------------------------------------------------------------------------- /classy-cluster-deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #tanzu init 4 | #tanzu plugin sync 5 | #tanzu plugin list 6 | tanzu login --endpoint https://192.168.12.50 --name supervisor0 7 | #tanzu plugin list 8 | 9 | tanzu namespaces get 10 | tanzu kubernetes-release get 11 | wget https://raw.githubusercontent.com/papivot/tkg-helper-scripts/main/tkgs-cluster-class-noaz.yaml 12 | tanzu cluster create -f tkgs-cluster-class-noaz.yaml 13 | tanzu cluster kubeconfig get workload-vsphere-tkg1 -n demo1 14 | 15 | kubectl config use-context tanzu-cli-workload-vsphere-tkg1@workload-vsphere-tkg1 16 | kubectl apply -f https://raw.githubusercontent.com/papivot/tkg-helper-scripts/main/modify-psp-guestcluster.yaml 17 | 18 | kubectl create ns tanzu-package-repo-global 19 | tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v1.6.0 -n tanzu-package-repo-global 20 | tanzu package available list -n tanzu-package-repo-global 21 | tanzu package repository get tanzu-standard -n tanzu-package-repo-global 22 | # tanzu package available list cert-manager.tanzu.vmware.com -n tanzu-package-repo-global 23 | # tanzu package available list contour.tanzu.vmware.com -n tanzu-package-repo-global 24 | tanzu package install cert-manager -p cert-manager.tanzu.vmware.com -v 1.7.2+vmware.1-tkg.1 -n tanzu-package-repo-global 25 | #kubectl get pods -A 26 | wget https://raw.githubusercontent.com/papivot/tanzu-packages-for-tkgs/master/Contour/contour-data-values.yaml 27 | tanzu package install contour -p contour.tanzu.vmware.com -v 1.20.2+vmware.1-tkg.1 -n tanzu-package-repo-global --values-file contour-data-values.yaml 28 | # tanzu package installed list -n tanzu-package-repo-global -------------------------------------------------------------------------------- /cluster-autoscaler/cluster-autoscaler.md: -------------------------------------------------------------------------------- 1 | 2 | Create a file called `cluster-autoscaler.yaml` 3 | 4 | ```yaml 5 | # arguments: 6 | # ignoreDaemonsetsUtilization: true 7 | # maxNodeProvisionTime: 15m 8 | # maxNodesTotal: 0 9 | # metricsPort: 8085 10 | # scaleDownDelayAfterAdd: 10m 11 | # scaleDownDelayAfterDelete: 10s 12 | # scaleDownDelayAfterFailure: 3m 13 | # scaleDownUnneededTime: 10m 14 | clusterConfig: 15 | clusterName: "workload-vsphere-tkg5" 16 | clusterNamespace: "demo1" 17 | # paused: false 18 | ``` 19 | 20 | ```bash 21 | tanzu package repository add tanzu-standard --url projects.registry.vmware.com/tkg/packages/standard/repo:v2024.4.12 --namespace tkg-system 22 | tanzu package available list cluster-autoscaler.tanzu.vmware.com -n tkg-system 23 | # tanzu package available get cluster-autoscaler.tanzu.vmware.com/1.27.2+vmware.1-tkg.3 --default-values-file-output cluster-autoscaler.yaml -n tkg-system 24 | tanzu package install cluster-autoscaler --package cluster-autoscaler.tanzu.vmware.com --version 1.27.2+vmware.1-tkg.3 --values-file cluster-autoscaler.yaml -n tkg-system 25 | # To delete 26 | # tanzu package installed delete cluster-autoscaler -n tkg-system 27 | ``` 28 | 29 | -------------------------------------------------------------------------------- /cluster-autoscaler/load-cpu.yaml: -------------------------------------------------------------------------------- 1 | ### CPU Load file 2 | --- 3 | apiVersion: apps/v1 
4 | kind: Deployment 5 | metadata: 6 | name: application-cpu 7 | labels: 8 | app: application-cpu 9 | spec: 10 | selector: 11 | matchLabels: 12 | app: application-cpu 13 | replicas: 1 14 | strategy: 15 | type: RollingUpdate 16 | rollingUpdate: 17 | maxSurge: 1 18 | maxUnavailable: 0 19 | template: 20 | metadata: 21 | labels: 22 | app: application-cpu 23 | spec: 24 | containers: 25 | - name: application-cpu 26 | image: wcp-docker-ci.artifactory.eng.vmware.com/app-cpu:v1.0.0 27 | imagePullPolicy: Always 28 | ports: 29 | - containerPort: 80 30 | resources: 31 | requests: 32 | memory: "50Mi" 33 | cpu: "500m" 34 | limits: 35 | memory: "500Mi" 36 | cpu: "2000m" 37 | --- 38 | apiVersion: v1 39 | kind: Service 40 | metadata: 41 | name: application-cpu 42 | labels: 43 | app: application-cpu 44 | spec: 45 | type: ClusterIP 46 | selector: 47 | app: application-cpu 48 | ports: 49 | - protocol: TCP 50 | name: http 51 | port: 80 52 | targetPort: 80 53 | -------------------------------------------------------------------------------- /cluster-autoscaler/tkgs-cluster-class-autoscaler.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cluster.x-k8s.io/v1beta1 2 | kind: Cluster 3 | metadata: 4 | name: workload-vsphere-tkg1 5 | namespace: demo1 6 | spec: 7 | clusterNetwork: 8 | services: 9 | cidrBlocks: ["192.168.32.0/20"] 10 | pods: 11 | cidrBlocks: ["192.168.0.0/20"] 12 | serviceDomain: "cluster.local" 13 | topology: 14 | class: tanzukubernetescluster 15 | version: v1.25.7---vmware.3-fips.1-tkg.1 16 | controlPlane: 17 | replicas: 1 18 | # variables: 19 | # overrides: 20 | # - name: vmClass 21 | # value: best-effort-large 22 | # metadata: 23 | # annotations: 24 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 25 | workers: 26 | machineDeployments: 27 | - class: node-pool 28 | name: node-pool-1 29 | metadata: 30 | annotations: 31 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 32 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1" 33 | # variables: 34 | # overrides: 35 | # - name: vmClass 36 | # value: best-effort-large 37 | # metadata: 38 | # annotations: 39 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 40 | - class: node-pool 41 | name: node-pool-2 42 | metadata: 43 | annotations: 44 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 45 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0" 46 | # variables: 47 | # overrides: 48 | # - name: vmClass 49 | # value: best-effort-large 50 | # metadata: 51 | # annotations: 52 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 53 | - class: node-pool 54 | name: node-pool-3 55 | metadata: 56 | annotations: 57 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 58 | cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0" 59 | # variables: 60 | # overrides: 61 | # - name: vmClass 62 | # value: best-effort-large 63 | # metadata: 64 | # annotations: 65 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 66 | variables: 67 | - name: ntp 68 | value: time.google.com 69 | - name: vmClass 70 | value: best-effort-medium 71 | - name: storageClass 72 | value: tanzu 73 | - name: defaultStorageClass 74 | value: tanzu 75 | - name: clusterEncryptionConfigYaml 76 | value: | 77 | apiVersion: apiserver.config.k8s.io/v1 78 | kind: EncryptionConfiguration 79 | resources: 80 | - resources: 81 | - secrets 82 | providers: 83 | - aescbc: 84 | keys: 85 | - name: key1 86 | secret: 
QiMgJGYXudtljldVyl+AnXQQlk7r9iUXBfVKqdEfKm8= 87 | - identity: {} 88 | # ADDITIONAL VALUES 89 | - name: nodePoolVolumes 90 | value: 91 | - capacity: 92 | storage: "15Gi" 93 | mountPath: "/var/lib/containerd" 94 | name: containerd 95 | storageClass: tanzu 96 | - capacity: 97 | storage: "15Gi" 98 | mountPath: "/var/lib/kubelet" 99 | name: kubelet 100 | storageClass: tanzu 101 | - name: controlPlaneVolumes 102 | value: 103 | - capacity: 104 | storage: "15Gi" 105 | mountPath: "/var/lib/containerd" 106 | name: containerd 107 | storageClass: tanzu 108 | - capacity: 109 | storage: "15Gi" 110 | mountPath: "/var/lib/kubelet" 111 | name: kubelet 112 | storageClass: tanzu 113 | # Not supported 114 | # - capacity: 115 | # storage: "4Gi" 116 | # mountPath: "/var/lib/etcd" 117 | # name: etcd 118 | # storageClass: tanzu 119 | -------------------------------------------------------------------------------- /download-images-offline-tkr.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if ! command -v jq >/dev/null 2>&1 ; then 4 | echo "JQ not installed. Exiting...." 5 | exit 1 6 | fi 7 | if ! command -v wget >/dev/null 2>&1 ; then 8 | echo "wget not installed. Exiting...." 9 | exit 1 10 | fi 11 | 12 | echo 13 | echo "The VMware subscribed content library has the following Tanzu Kubernetes Release images ... " 14 | echo 15 | curl -s https://wp-content.vmware.com/v2/latest/items.json |jq -r '.items[].name' 16 | echo 17 | read -p "Enter the name of the TanzuKubernetesRelease OVA that you want to download and zip for offline upload: " tkgrimage 18 | echo 19 | echo "Downloading all files for the TKG image: ${tkgrimage} ..." 20 | echo 21 | wget -q --show-progress --no-parent -r -nH --cut-dirs=2 --reject="index.html*" https://wp-content.vmware.com/v2/latest/${tkgrimage}/ 22 | echo "Compressing downloaded files..." 23 | tar -cvzf ${tkgrimage}.tar.gz ${tkgrimage} 24 | echo 25 | echo "Cleaning up..." 26 | [ -d "${tkgrimage}" ] && rm -rf ${tkgrimage} 27 | echo "Copy the file ${tkgrimage}.tar.gz to the offline jumpbox that has access to the cluster." 28 | echo "Install and configure govc on the offline jumpbox." 29 | echo "Use the following command on that jumpbox to import the image to the vCenter Content Library called "Local"..." 30 | echo 31 | echo " tar -xzvf ${tkgrimage}.tar.gz" 32 | echo " cd ${tkgrimage}" 33 | echo " govc library.import -n ${tkgrimage} -m=true Local photon-ova.ovf" 34 | echo " or" 35 | echo " govc library.import -n ${tkgrimage} -m=true Local ubuntu-ova.ovf" -------------------------------------------------------------------------------- /exec-on-supr-nodes.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # 3 | # Gets the control plane vm ip. This is done by logging into 4 | # the vcenter vm to execute /usr/lib/vmware-wcp/decryptK8Pwd.py 5 | # 6 | # We also query the VC using `govc` to obtain what tools reports 7 | # for the control plane vm. 
This is done in case (1) decryptK8Pwd.py 8 | # gives the FIP, which is no longer accessible, and/or (2) the 9 | # testbed did not make it sufficiently far enough to assign FIP 10 | # 11 | 12 | set -o errexit 13 | set -o pipefail 14 | set -o nounset 15 | 16 | SSHCommonArgs=" -o PubkeyAuthentication=no -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no " 17 | 18 | usage () { 19 | echo "$(basename $0) [-d] -i VCIP" 20 | echo " -d debug (set -x)" 21 | echo " -i VCIP" 22 | echo " -n SV control plane VM name glob (default: SupervisorControlPlaneVM*)" 23 | echo " -c command to execute on each control plane vm" 24 | echo " example: -c \"tail -n5 /var/log/vmware-imc/configure-wcp.log\"" 25 | exit 1 26 | } 27 | 28 | die() { 29 | echo -e "$1" 30 | exit 1 31 | } 32 | 33 | isInstalled() { 34 | command -v "$1" &> /dev/null 35 | } 36 | 37 | while getopts "di:c:n:p:" opt ; do 38 | case $opt in 39 | "d" ) set -x ;; 40 | "i" ) VCIP=$OPTARG ;; 41 | "p" ) VCPASS=$OPTARG ;; 42 | "c" ) COMMAND=$OPTARG ;; 43 | "n" ) CONTROLPLANEVMNAME=$OPTARG ;; 44 | \? ) usage ;; 45 | esac 46 | done 47 | 48 | shift $((OPTIND-1)) 49 | 50 | [[ $# -eq 0 ]] || usage 51 | 52 | VCIP=${VCIP:-} 53 | VCPASS=${VCPASS:-"vmware"} 54 | COMMAND=${COMMAND:-} 55 | CONTROLPLANEVMNAME=${CONTROLPLANEVMNAME:-"SupervisorControlPlaneVM*"} 56 | 57 | [[ -n "$VCIP" ]] || usage 58 | 59 | 60 | isInstalled "govc" || die "Install govc! See https://github.com/vmware/govmomi/tree/master/govc" 61 | isInstalled "jq" || die "Install jq via 'brew install jq'" 62 | isInstalled "sshpass" || die "Install sshpass via 'brew install https://raw.githubusercontent.com/kadwanev/bigboybrew/master/Library/Formula/sshpass.rb'" 63 | 64 | # check for bsd/gnu xargs on MacOS 65 | if strings $(which xargs) | grep -i bsd -q; then 66 | echo "Install gnu xargs via 'brew install findutils'" 67 | echo "export PATH=\"/usr/local/opt/findutils/libexec/gnubin:\$PATH\"" 68 | exit 1 69 | fi 70 | 71 | 72 | # now that we have a VCIP, gather the FIP, password, and mgmt ipaddr through govc 73 | # also dump out sshpass/ssh commands for accessing the vc and control plane vm 74 | output=$(sshpass -p ${VCPASS} ssh $SSHCommonArgs root@$VCIP '/usr/lib/vmware-wcp/decryptK8Pwd.py' 2>&1 | grep -E "^(Cluster|IP|PWD):") 75 | echo "$output" 76 | 77 | export GOVC_USERNAME='administrator@vsphere.local' 78 | export GOVC_PASSWORD='Admin!23' 79 | export GOVC_INSECURE="1" 80 | export GOVC_URL="https://${VCIP}/sdk" 81 | ipaddrs=$(govc find / -type m -name $CONTROLPLANEVMNAME | xargs -I % -d'\n' -n1 govc object.collect -json % guest.net | jq -r '.[] | .Val.GuestNicInfo[] | select(.Network == "VM Network") | .IpAddress[0]' | grep -vE "(10.0.0.10|::)" | grep .) 
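# The pipeline above lists every VM matching $CONTROLPLANEVMNAME, collects its guest.net info via govc,
# keeps the first IP address reported on "VM Network", and filters out entries matching 10.0.0.10 or "::" as well as empty results.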
82 | echo "GOVC: $(echo $ipaddrs)" 83 | 84 | password=$(echo $output | sed 's/.*PWD: //') 85 | echo "" 86 | echo "" 87 | echo " VCVA: sshpass -p '${VCPASS}' ssh $SSHCommonArgs root@${VCIP}" 88 | for ipaddr in $ipaddrs; do 89 | echo "CONTROLPLANEVM: sshpass -p '${password}' ssh $SSHCommonArgs root@${ipaddr}" 90 | done 91 | echo "" 92 | 93 | if [[ -n "$COMMAND" ]]; then 94 | for ipaddr in $ipaddrs; do 95 | echo "execute on $ipaddr: $COMMAND" 96 | SSHPASS=${password} sshpass -e ssh $SSHCommonArgs root@${ipaddr} $COMMAND 97 | echo "" 98 | done 99 | fi 100 | -------------------------------------------------------------------------------- /exec-on-tkgs-nodes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | while getopts ":n:t:c:" flag 4 | do 5 | case ${flag} in 6 | n ) NAMESPACE=${OPTARG} 7 | ;; 8 | t ) TKGSCLUSTER=${OPTARG} 9 | ;; 10 | c ) COMMAND=${OPTARG} 11 | ;; 12 | esac 13 | done 14 | 15 | if ! command -v jq >/dev/null 2>&1 ; then 16 | echo "JQ not installed. Exiting...." 17 | exit 1 18 | fi 19 | 20 | echo 21 | echo "Executing $COMMAND on the nodes of $TKGSCLUSTER cluster within the $NAMESPACE namespace..." 22 | echo 23 | 24 | ################################## 25 | 26 | export VDS=`kubectl get deploy -n vmware-system-lbapi vmware-system-lbapi-lbapi-controller-manager --no-headers 2>/dev/null |wc -l` 27 | if [ ${VDS} -eq 0 ] 28 | then 29 | echo "Found NSX based setup. Installing jumpbox pod in $NAMESPACE namespace within Supervisor cluster..." 30 | export FILE=jumpboxpod-sample.yml 31 | cat < ${FILE} 32 | --- 33 | apiVersion: v1 34 | kind: Pod 35 | metadata: 36 | name: jumpbox 37 | namespace: ${NAMESPACE} 38 | spec: 39 | containers: 40 | - image: "ubuntu:20.04" 41 | name: jumpbox 42 | command: ["/bin/bash"] # Fix this 43 | args: [ "-c", "apt-get -y update; apt-get -y install openssh-client; mkdir /root/.ssh; cp /root/ssh/ssh-privatekey /root/.ssh/id_rsa; chmod 600 /root/.ssh/id_rsa; while true; do sleep 30; done;" ] 44 | volumeMounts: 45 | - mountPath: "/root/ssh" 46 | name: ssh-key 47 | readOnly: true 48 | volumes: 49 | - name: ssh-key 50 | secret: 51 | secretName: ${TKGSCLUSTER}-ssh 52 | EOM 53 | 54 | envsubst < ${FILE}|kubectl apply -f- 55 | kubectl wait --for=condition=Ready -n ${NAMESPACE} pod/jumpbox 56 | echo "Waiting for jumpbox to be ready..." 57 | sleep 60s 58 | fi 59 | 60 | for node in `kubectl get tkc ${TKGSCLUSTER} -n ${NAMESPACE} -o json|jq -r '.status.nodeStatus| keys[]'` 61 | do 62 | ip=`kubectl get virtualmachines -n ${NAMESPACE} ${node} -o json|jq -r '.status.vmIp'` 63 | echo "===========================================================================" 64 | echo "Executing command - ${COMMAND} - on ${ip}..." 
65 | echo "===========================================================================" 66 | echo 67 | if [ ${VDS} -eq 0 ] 68 | then 69 | kubectl -n ${NAMESPACE} exec -it jumpbox -- /usr/bin/ssh -o StrictHostKeyChecking=no vmware-system-user@$ip ${COMMAND} 70 | else 71 | kubectl get secret -n ${NAMESPACE} ${TKGSCLUSTER}-ssh -o json |jq -r '.data."ssh-privatekey"'|base64 -d > ~/.ssh/id_rsa 72 | chmod 600 ~/.ssh/id_rsa 73 | ssh -o StrictHostKeyChecking=no vmware-system-user@${ip} ${COMMAND} 74 | fi 75 | done 76 | 77 | if [ ${VDS} -eq 0 ] 78 | then 79 | kubectl delete pod jumpbox -n ${NAMESPACE} 80 | rm ${FILE} 81 | fi 82 | -------------------------------------------------------------------------------- /export-container-images.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #Modify these two values as needed 4 | INPUTFILE=contour.yaml 5 | NEWREPO=harbor-repo.vmware.com/navneetv/library 6 | 7 | declare -a images_to_export=() 8 | NEW_REGISTRY=`echo $NEWREPO|cut -d / -f 1` 9 | for image in `grep image: ${INPUTFILE}|sort|uniq|awk '{print $2}'` 10 | do 11 | image_name=`echo ${image}|rev|cut -d / -f 1|rev` 12 | new_image_name=${NEWREPO}/${image_name} 13 | images_to_export=(${images_to_export[@]} ${new_image_name}) 14 | 15 | docker pull ${image} 16 | docker tag ${image} ${new_image_name} 17 | done 18 | echo 19 | echo Docker images to export - ${images_to_export[@]} 20 | rm -f $INPUTFILE.tar 21 | rm -f $INPUTFILE.tar.gz 22 | docker save ${images_to_export[@]} -o $INPUTFILE.tar 23 | gzip $INPUTFILE.tar 24 | echo 25 | echo "The docker images have been exported to - $INPUTFILE.tar.gz. Please copy this file a system in the airgapped environment running docker." 26 | echo "Run the following commands on the environment to load to the images locally - " 27 | echo "$ gzip -d $INPUTFILE.tar.gz" 28 | echo "$ docker load -i $INPUTFILE.tar" 29 | echo 30 | echo "To upload the images to the private registry - $NEWREPO - run the following commands for each of the images uploaded in the previous step. " 31 | echo "$ docker images" 32 | echo "$ docker login -u username $NEW_REGISTRY" 33 | echo "$ docker push image(s)_name" 34 | -------------------------------------------------------------------------------- /export-docker-images.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #Modify these two values as needed 4 | INPUTFILE=contour.yaml 5 | NEWREPO=harbor-repo.vmware.com/navneetv/library 6 | 7 | declare -a images_to_export=() 8 | NEW_REGISTRY=`echo $NEWREPO|cut -d / -f 1` 9 | for image in `grep image: ${INPUTFILE}|sort|uniq|awk '{print $2}'` 10 | do 11 | image_name=`echo ${image}|rev|cut -d / -f 1|rev` 12 | new_image_name=${NEWREPO}/${image_name} 13 | images_to_export=(${images_to_export[@]} ${new_image_name}) 14 | 15 | docker pull ${image} 16 | docker tag ${image} ${new_image_name} 17 | done 18 | echo 19 | echo Docker images to export - ${images_to_export[@]} 20 | rm -f $INPUTFILE.tar 21 | rm -f $INPUTFILE.tar.gz 22 | docker save ${images_to_export[@]} -o $INPUTFILE.tar 23 | gzip $INPUTFILE.tar 24 | echo 25 | echo "The docker images have been exported to - $INPUTFILE.tar.gz. Please copy this file a system in the airgapped environment running docker." 
26 | echo "Run the following commands on the environment to load the images locally - " 27 | echo "$ gzip -d $INPUTFILE.tar.gz" 28 | echo "$ docker load -i $INPUTFILE.tar" 29 | echo 30 | echo "To upload the images to the private registry - $NEWREPO - run the following commands for each of the images uploaded in the previous step. " 31 | echo "$ docker images" 32 | echo "$ docker login -u username $NEW_REGISTRY" 33 | echo "$ docker push image(s)_name" 34 | 35 | #helm fetch bitnami/nginx 36 | 37 | # Go to Harbor -> Projects -> -> Upload tar file 38 | 39 | #helm repo add https:///chartrepo/ 40 | #helm repo update 41 | #helm install nginx /nginx 42 | -------------------------------------------------------------------------------- /fluentbit-cm-on-supervisoryaml: -------------------------------------------------------------------------------- 1 | --- 2 | ... 3 | 4 | #### LOGINSIGHT SYSLOG ENTRY 5 | #### 6 | [OUTPUT] 7 | Name syslog 8 | Match kube.* 9 | Host LOG_INSIGHT_SERVER 10 | Port 514 11 | Mode tcp 12 | Syslog_Format rfc5424 13 | Syslog_Message_key log 14 | Syslog_Hostname_key host 15 | Syslog_Appname_key pod_name 16 | Syslog_Procid_key container_name 17 | 18 | [OUTPUT] 19 | Name syslog 20 | Match systemd.* 21 | Host LOG_INSIGHT_SERVER 22 | Port 514 23 | Mode tcp 24 | Syslog_Format rfc5424 25 | Syslog_Message_key log 26 | Syslog_Hostname_key hostname 27 | Syslog_Appname_key unit 28 | Syslog_Procid_key pid 29 | 30 | 31 | #### ELASTCISEARCH SYSLOG ENTRY 32 | #### 33 | 34 | [OUTPUT] 35 | Name es 36 | Match kube.* 37 | Host 10.197.107.61 38 | Port 9200 39 | Logstash_Format True 40 | Logstash_Prefix supervisor 41 | 42 | [OUTPUT] 43 | Name es 44 | Match systemd.* 45 | Host 10.197.107.61 46 | Port 9200 47 | Logstash_Format True 48 | Logstash_Prefix supervisor 49 | 50 | ... 51 | -------------------------------------------------------------------------------- /generate-cert.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Modify these two variables 4 | fqdn=ubuntu-nv-242.env1.lab.test 5 | ipaddress=10.197.107.62 6 | 7 | sudo apt install -y ca-certificates 8 | cakeyfile=$(hostname)-ca.key 9 | cacrtfile=$(hostname)-ca.crt 10 | 11 | if [ ! -f "${cakeyfile}" ] || [ ! -f "${cacrtfile}" ]; then 12 | echo "CA cert ${cacrtfile} and key ${cakeyfile} do not exist." 13 | echo "Generating them before generating the server certificate..." 14 | 15 | # Generate a CA Cert Private Key" 16 | openssl genrsa -out ${cakeyfile} 4096 17 | 18 | # Generate a CA Cert Certificate" 19 | openssl req -x509 -new -nodes -sha512 -days 3650 -subj "/C=US/ST=VA/L=Ashburn/O=SE/OU=Personal/CN=${fqdn}" -key ${cakeyfile} -out ${cacrtfile} 20 | 21 | sudo cp -p ${cacrtfile} /usr/local/share/ca-certificates/${cacrtfile} 22 | echo 23 | echo "CA file ${cacrtfile} copied to /usr/local/share/ca-certificates/${cacrtfile}." 
24 | echo "Execute sudo update-ca-certificates after this script has completed execution" 25 | echo 26 | echo 27 | fi 28 | 29 | # Generate a Server Certificate Private Key" 30 | openssl genrsa -out ${fqdn}.key 4096 31 | 32 | # Generate a Server Certificate Signing Request" 33 | openssl req -sha512 -new -subj "/C=US/ST=VA/L=Ashburn/O=SE/OU=Personal/CN=${fqdn}" -key ${fqdn}.key -out ${fqdn}.csr 34 | 35 | # Generate a x509 v3 extension file" 36 | cat > v3.ext <<-EOF 37 | authorityKeyIdentifier=keyid,issuer 38 | basicConstraints=CA:FALSE 39 | keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment 40 | extendedKeyUsage = serverAuth 41 | subjectAltName = @alt_names 42 | 43 | [alt_names] 44 | DNS.1=${fqdn} 45 | DNS.2=*.${fqdn} 46 | IP.1=${ipaddress} 47 | EOF 48 | 49 | # Use the x509 v3 extension file to gerneate a cert for the Harbor host" 50 | openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ${cacrtfile} -CAkey ${cakeyfile} -CAcreateserial -in ${fqdn}.csr -out ${fqdn}.cert 51 | -------------------------------------------------------------------------------- /kubectl-admin-sc.yaml: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Create a group called WCPAdminGroup@vsphere.local or similar on the VCenter and add memebers. 4 | # Execute this script on the Supervisor Cluster 5 | # Modify the Group Name below if its not named - WCPAdminGroup@vsphere.local 6 | 7 | kubectl patch clusterrolebinding wcp:administrators:cluster-guest-clusters --type='json' -p='[{"op": "add", "path": "/subjects/1", "value": {"kind": "Group", "name": "sso:WCPAdminGroup@vsphere.local","apiGroup": "rbac.authorization.k8s.io" } }]' 8 | kubectl patch clusterrolebinding wcp:administrators:cluster-view --type='json' -p='[{"op": "add", "path": "/subjects/1", "value": {"kind": "Group", "name": "sso:WCPAdminGroup@vsphere.local","apiGroup": "rbac.authorization.k8s.io" } }]' 9 | kubectl patch clusterrolebinding wcp:administrators:cluster-view-extra --type='json' -p='[{"op": "add", "path": "/subjects/1", "value": {"kind": "Group", "name": "sso:WCPAdminGroup@vsphere.local","apiGroup": "rbac.authorization.k8s.io" } }]' 10 | -------------------------------------------------------------------------------- /minio-tmc-tkgs.md: -------------------------------------------------------------------------------- 1 | ### Generate certificates for Minio 2 | 3 | ```console 4 | $ mkdir -p ${HOME}/.minio/certs 5 | $ cp public-key.cert ${HOME}.minio/certs/public.crt # public.crt is the public certificate with the Minio servers DNS name 6 | $ sudo cp private-key.key ${HOME}/.minio/certs/private.key # private.key is the private key fow the above certificate 7 | ``` 8 | 9 | ### Enable Minio 10 | 11 | * Create the data storage folder and start the server. 12 | 13 | ``` console 14 | $ mkdir -p ${HOME}/data 15 | $ minio server ${HOME}/data 16 | No credential environment variables defined. Going with the defaults. 17 | It is strongly recommended to define your own credentials via environment variables MINIO_ROOT_USER and MINIO_ROOT_PASSWORD instead of using default values 18 | Endpoint: https://10.197.107.61:9000 .... 19 | RootUser: minioadmin 20 | RootPass: minioadmin 21 | 22 | Browser Access: 23 | https://10.197.107.61:9000 ... 
24 | 25 | Command-line Access: https://docs.min.io/docs/minio-client-quickstart-guide 26 | $ mc alias set myminio https://10.197.107.61:9000 minioadmin minioadmin 27 | 28 | Object API (Amazon S3 compatible): 29 | Go: https://docs.min.io/docs/golang-client-quickstart-guide 30 | Java: https://docs.min.io/docs/java-client-quickstart-guide 31 | Python: https://docs.min.io/docs/python-client-quickstart-guide 32 | JavaScript: https://docs.min.io/docs/javascript-client-quickstart-guide 33 | .NET: https://docs.min.io/docs/dotnet-client-quickstart-guide 34 | 35 | Certificate: 36 | Signature Algorithm: SHA512-RSA 37 | Issuer: ..... 38 | Validity 39 | Not Before: Thu, 01 Jul 2021 01:38:22 GMT 40 | Not After : Sun, 29 Jun 2031 01:38:22 GMT 41 | 42 | Detected default credentials 'minioadmin:minioadmin', please change the credentials immediately by setting 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment values 43 | IAM initialization complete 44 | ``` 45 | * Create a bucket, e.g. `test-bucket`. This can be completed by logging in to the Minio browser and creating the bucket. 46 | 47 | ### Enable backup protection on TMC 48 | 49 | * Create the required credentials. 50 | * Create the `Customer provisioned S3-compatible storage` endpoint within the TMC interface. Use `minio` as the region. 51 | * Enable `Data protection on the workload cluster`. 52 | 53 | ### Update Velero setting for root CA 54 | 55 | * While logged in to the workload cluster, edit and update the following object - 56 | ```console 57 | $ kubectl edit backupstoragelocations.velero.io -n velero NAME_OF_S3_COMPATIBLE_STORAGE 58 | ``` 59 | Add the base64-encoded root CA as `caCert` and update the object. 60 | 61 | ```yaml 62 | spec: 63 | config: 64 | bucket: test-bucket 65 | profile: NAME_OF_S3_COMPATIBLE_STORAGE 66 | publicUrl: https://10.197.107.61:9000 67 | region: minio 68 | s3ForcePathStyle: "true" 69 | s3Url: https://10.197.107.61:9000 70 | objectStorage: 71 | bucket: test-bucket 72 | caCert: LS0tLS1CRUdJTiBDRRU4wQkxJMWE...ElGSUNBVEUtLS0tLQo= 73 | prefix: 01F9FNY05068P4KSCM15M26S9G/ 74 | provider: aws 75 | status: 76 | lastSyncedTime: "2021-07-01T01:53:50.342904668Z" 77 | ``` 78 | 79 | -------------------------------------------------------------------------------- /modify-psp-guestcluster.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: psp:privileged 5 | rules: 6 | - apiGroups: ['policy'] 7 | resources: ['podsecuritypolicies'] 8 | verbs: ['use'] 9 | resourceNames: 10 | - vmware-system-privileged 11 | --- 12 | apiVersion: rbac.authorization.k8s.io/v1 13 | kind: ClusterRoleBinding 14 | metadata: 15 | name: all:psp:privileged 16 | roleRef: 17 | kind: ClusterRole 18 | name: psp:privileged 19 | apiGroup: rbac.authorization.k8s.io 20 | subjects: 21 | - kind: Group 22 | name: system:serviceaccounts 23 | apiGroup: rbac.authorization.k8s.io -------------------------------------------------------------------------------- /obsolete/TKGs-insecure-registry.md: -------------------------------------------------------------------------------- 1 | # Configuring and using insecure registries on TKGs clusters 2 | ### Not for production use 3 | --- 4 | 5 | Note: This is intended for unique scenarios. As cluster nodes may scale out or scale in, we would have to perform these steps on the new nodes manually. 6 | 7 | Log in to each of the nodes (workers and control plane nodes) and perform the following steps - 8 | 9 | 1. 
Make a backup of the config.toml file 10 | ``` 11 | cd /etc/containerd 12 | sudo cp -p config.toml config.toml_backup 13 | ``` 14 | 15 | 2. Edit the `config.toml` file and add the following entries - 16 | ``` 17 | sudo vi config.toml 18 | ``` 19 | 20 | Lines to append at the appropriate `[plugins.cri.registry]` section 21 | 22 | ``` 23 | [plugins.cri.registry.configs] 24 | [plugins.cri.registry.configs."my.insecure.registry"] 25 | [plugins.cri.registry.configs."my.insecure.registry".tls] 26 | insecure_skip_verify = true 27 | ``` 28 | where `my.insecure.registry` is the name of the registry that needs to be configured to skip SSL validation. Add an entry for each insecure registry. 29 | 30 | The final file should like something like this - 31 | ``` 32 | ... 33 | [plugins.cri.registry] 34 | [plugins.cri.registry.mirrors] 35 | [plugins.cri.registry.mirrors."docker.io"] 36 | endpoint = ["https://registry-1.docker.io"] 37 | [plugins.cri.registry.configs] 38 | [plugins.cri.registry.configs."my.insecure.registry"] 39 | [plugins.cri.registry.configs."my.insecure.registry".tls] 40 | insecure_skip_verify = true 41 | ... 42 | ``` 43 | 44 | 3. Restart the containerd daemon. 45 | ``` 46 | sudo systemctl restart containerd 47 | ``` 48 | 49 | 4. Validate the daemon restarted successfully 50 | ``` 51 | sudo systemctl status containerd 52 | ``` 53 | 54 | 5. to view containerd logs 55 | ``` 56 | sudo journalctl -u containerd 57 | ``` 58 | -------------------------------------------------------------------------------- /obsolete/add-root-ca-to-nodes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | ############################################################################################################## 3 | ############################################################################################################## 4 | ################# DO NOT USE THIS FILE 5 | ############################################################################################################## 6 | ############################################################################################################## 7 | 8 | # path of the manually downloaded harbor/registry root CA 9 | rootCA="/tmp/rootca.crt" 10 | 11 | # guest cluster name 12 | gcname="workload-vsphere-tkg1" 13 | 14 | # Supervisor cluster namespace where guest cluster get deployed 15 | gcnamespace="demo1" 16 | 17 | # =============================================================================== 18 | # Do not modify below this line 19 | # 20 | 21 | [ -z "$rootCA" -o -z "$gcname" -o -z "$gcnamespace" ] && echo "Please populate rootCA/gcname/gcnamespace variable" && exit 22 | workdir="/tmp/$gcnamespace-$gcname" 23 | mkdir -p $workdir 24 | sshkey=$workdir/gc-sshkey # path for gc private key 25 | 26 | # path for gc kubeconfig 27 | gckubeconfig=$workdir/kubeconfig 28 | 29 | # @param1: ip 30 | # @param2: ca 31 | installCA() 32 | { 33 | node_ip=$1 34 | capath=$2 35 | scp -q -i $sshkey -o StrictHostKeyChecking=no $capath vmware-system-user@$node_ip:/tmp/ca.new_bk 36 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo cp /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt_bk' 37 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo cp /etc/pki/tls/certs/ca-bundle.crt /tmp/ca-bundle.crt_bk' 38 | [ $? 
== 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo cat /tmp/ca-bundle.crt_bk /tmp/ca.new_bk > /tmp/ca-bundle.crt' 39 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo mv /tmp/ca-bundle.crt /etc/pki/tls/certs/ca-bundle.crt' 40 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo chmod 755 /etc/pki/tls/certs/ca-bundle.crt' 41 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'sudo chown root:root /etc/pki/tls/certs/ca-bundle.crt' 42 | # [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'ls -al /tmp/ca*' 43 | # [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip 'ls -al /etc/pki/tls/certs/' 44 | # if error occurred, restore ca-bundler.crt_bk 45 | [ $? == 0 ] && ssh -q -i $sshkey -o StrictHostKeyChecking=no vmware-system-user@$node_ip sudo systemctl restart docker.service 46 | } 47 | 48 | 49 | # get guest cluster private key for each node 50 | kubectl get secret -n $gcnamespace $gcname"-ssh" -o jsonpath='{.data.ssh-privatekey}' | base64 --decode > $sshkey 51 | [ $? != 0 ] && echo " please check existence of guest cluster private key secret" && exit 52 | chmod 600 $sshkey 53 | 54 | #get guest cluster kubeconfig 55 | kubectl get secret -n $gcnamespace $gcname"-kubeconfig" -o jsonpath='{.data.value}' | base64 --decode > $gckubeconfig 56 | [ $? != 0 ] && echo " please check existence of guest cluster private key secret" && exit 57 | 58 | # get IPs of each guest cluster nodes 59 | iplist=$(KUBECONFIG=$gckubeconfig kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}') 60 | for ip in $iplist 61 | do 62 | echo "installing root ca into node $ip (needs about 10 seconds)... " 63 | installCA $ip $rootCA && echo "Successfully installed root ca into node $ip" || echo "Failed to install root ca into node $ip" 64 | done 65 | -------------------------------------------------------------------------------- /obsolete/create-wcp-privuser.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Highly experimantal. Not for production use. Can cause issues !!!! 4 | # Modify the two variables. 5 | 6 | if ! command -v govc >/dev/null 2>&1 ; then 7 | echo "govc not installed. Exiting...." 8 | exit 1 9 | fi 10 | 11 | export GOVC_URL=myvcenter.vmware.local 12 | export GOVC_USERNAME=administrator@vsphere.local 13 | export GOVC_PASSWORD=myvcenterpassword 14 | export GOVC_INSECURE=true 15 | 16 | export WCP_USER=wcpadmin 17 | export PASSWORD=VMware1! 18 | 19 | # Create a user with Admin Privilages and set appropriate permissions 20 | govc sso.user.create -p ${PASSWORD} -R Admin ${WCP_USER} 21 | govc permissions.set -principal="${WCP_USER}@VSPHERE.LOCAL" -propagate=true -role=Admin / 22 | govc sso.group.update -a=${WCP_USER} ServiceProviderUsers 23 | -------------------------------------------------------------------------------- /obsolete/get-harbor-admin-password.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if ! command -v jq >/dev/null 2>&1 ; then 4 | echo "JQ not installed. Exiting...." 
5 | exit 1 6 | fi 7 | 8 | export VDS=`kubectl get deploy -n vmware-system-lbapi vmware-system-lbapi-lbapi-controller-manager --no-headers 2>/dev/null |wc -l` 9 | if [ ${VDS} -eq 0 ] 10 | then 11 | harborinstalled=`kubectl get ns|grep -c vmware-system-registry-` 12 | if [ ${harborinstalled} -eq 1 ] 13 | then 14 | harborns=`kubectl get ns -o json|jq -r '.items[] | select(.metadata.name | contains ("vmware-system-registry-"))'.metadata.name` 15 | harborid=`echo ${harborns}|cut -d- -f4` 16 | kubectl get secrets -n ${harborns} harbor-${harborid}-controller-registry -o json|jq -r '.data.harborAdminPassword'|base64 -d|base64 -d;echo 17 | else 18 | echo "vSphere Registry service not enabled. Please enable and then re-run the script" 19 | exit 1 20 | fi 21 | else 22 | echo "Found VDS based setup. Integrated Harbor registry is not supported." 23 | exit 1 24 | fi 25 | -------------------------------------------------------------------------------- /obsolete/tkgs-cluster-v1alpha1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: run.tanzu.vmware.com/v1alpha1 2 | kind: TanzuKubernetesCluster 3 | metadata: 4 | name: workload-vsphere-tkg1 5 | namespace: demo1 6 | spec: 7 | distribution: 8 | version: v1.21.6 9 | topology: 10 | controlPlane: 11 | count: 1 12 | class: best-effort-medium 13 | storageClass: vsan-default-storage-policy 14 | volumes: 15 | - name: etcd 16 | mountPath: /var/lib/etcd 17 | capacity: 18 | storage: 4Gi 19 | workers: 20 | count: 2 21 | class: best-effort-medium 22 | storageClass: vsan-default-storage-policy 23 | volumes: 24 | - name: containerd 25 | mountPath: /var/lib/containerd 26 | capacity: 27 | storage: 30Gi 28 | settings: 29 | network: 30 | # serviceDomain: "k8s.lab.test" 31 | services: 32 | cidrBlocks: ["198.51.100.0/24"] 33 | pods: 34 | cidrBlocks: ["192.0.0.0/16"] 35 | storage: 36 | classes: ["vsan-default-storage-policy"] 37 | defaultClass: vsan-default-storage-policy 38 | -------------------------------------------------------------------------------- /obsolete/tkgs-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: run.tanzu.vmware.com/v1alpha2 2 | kind: TkgServiceConfiguration 3 | metadata: 4 | name: tkg-service-configuration 5 | spec: 6 | defaultCNI: antrea 7 | proxy: 8 | httpProxy: http://user:password@10.182.49.15:8888 9 | httpsProxy: http://user:password@10.182.49.15:8888 10 | noProxy: [172.26.0.0/16,192.168.124.0/24,192.168.123.0/24] 11 | trust: 12 | additionalTrustedCAs: 13 | - name: harbor-ca 14 | data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLSNBVEUtLS0tLQo= 15 | - name: dtr-ca 16 | data: LS0tLS1CRUdJTiBDKJAGDHALSJAHDGDtLSNBLbJHSHDKHSDtLQo= 17 | -------------------------------------------------------------------------------- /running-new-deployment-on-SC.md: -------------------------------------------------------------------------------- 1 | Follow the guidelines below to deploy applications and daemonsets to a Supervisor Cluster running on VDS based networking. 2 | 3 | * Note - Since the VDS based Supervisor Clusters do not have any CNI running on them and are locked down due to security requirements, certain changes are needed to bypass these limitations. Also, IPTABLES enforce firewall restrictions on the nodes, so exposing services through load balancers would be a limitation. 4 | 5 | ### Step 1. (Optional) Preferably create a new namespace 6 | ```yaml 7 | --- 8 | apiVersion: v1 9 | kind: Namespace 10 | metadata: 11 | name: velero 12 | ``` 13 | 
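Apply this manifest (and the Step 2 manifest below) while in the Supervisor Cluster context, e.g. `kubectl apply -f namespace.yaml` (the filename here is illustrative).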
14 | ### Step 2. (Optional) Create a new service account and bind it to the `wcp-privileged-psp` ClusterRole with a RoleBinding. 15 | This new RoleBinding may not be needed if the service account is already using a higher-privileged rolebinding, such as `cluster-admin`. 16 | 17 | ```yaml 18 | --- 19 | apiVersion: v1 20 | kind: ServiceAccount 21 | metadata: 22 | name: miniovelero 23 | namespace: velero 24 | 25 | 26 | --- 27 | apiVersion: rbac.authorization.k8s.io/v1 28 | kind: RoleBinding 29 | metadata: 30 | name: velero-privileged 31 | namespace: velero 32 | roleRef: 33 | apiGroup: rbac.authorization.k8s.io 34 | kind: ClusterRole 35 | name: wcp-privileged-psp 36 | subjects: 37 | - namespace: velero 38 | kind: ServiceAccount 39 | name: miniovelero 40 | ``` 41 | 42 | ### Step 3. Modify the container(s) specific parameters by adding the following relevant details. 43 | 44 | ```yaml 45 | hostNetwork: true #IMPORTANT 46 | serviceAccount: miniovelero #Reference service account above 47 | serviceAccountName: miniovelero #Reference service account above 48 | nodeSelector: 49 | node-role.kubernetes.io/master: "" 50 | tolerations: 51 | - key: CriticalAddonsOnly 52 | operator: Exists 53 | - effect: NoSchedule 54 | key: node-role.kubernetes.io/master 55 | operator: Exists 56 | ``` 57 | -------------------------------------------------------------------------------- /tkgm/install-ako-for-l7-node-port-local.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.tkg.tanzu.vmware.com/v1alpha1 2 | kind: AKODeploymentConfig 3 | metadata: 4 | name: install-ako-for-l7-node-port-local 5 | spec: 6 | adminCredentialRef: 7 | name: avi-controller-credentials 8 | namespace: tkg-system-networking 9 | certificateAuthorityRef: 10 | name: avi-controller-ca 11 | namespace: tkg-system-networking 12 | controller: 192.168.100.58 13 | cloudName: tkg-cloud 14 | serviceEngineGroup: Default-Group 15 | clusterSelector: 16 | matchLabels: 17 | nsx-alb-node-port-local-l7: "true" 18 | controlPlaneNetwork: 19 | cidr: 192.168.102.0/23 20 | name: Workload0-VDS-PG 21 | dataNetwork: 22 | cidr: 192.168.102.0/23 23 | name: Workload0-VDS-PG 24 | extraConfigs: 25 | servicesAPI: true 26 | cniPlugin: antrea 27 | disableStaticRouteSync: true 28 | ingress: 29 | defaultIngressController: true # optional, if set to false, you need to specify ingress class in your ingress object 30 | disableIngressClass: false # required 31 | serviceType: NodePortLocal # required 32 | shardVSSize: SMALL 33 | nodeNetworkList: # required 34 | - networkName: Workload0-VDS-PG 35 | cidrs: 36 | - 192.168.102.0/23 37 | -------------------------------------------------------------------------------- /tkgm/management-cluster-config.yaml: -------------------------------------------------------------------------------- 1 | # tanzu management-cluster create ubuntu-nv-1 --file management-cluster-config.yaml -v 6 2 | AVI_CA_DATA_B64: {{ BASE64 }} 3 | AVI_CLOUD_NAME: tkg-cloud 4 | AVI_CONTROL_PLANE_HA_PROVIDER: "true" 5 | AVI_CONTROL_PLANE_NETWORK: Workload0-VDS-PG 6 | AVI_CONTROL_PLANE_NETWORK_CIDR: 192.168.102.0/23 7 | AVI_CONTROLLER: 192.168.100.58 8 | AVI_DATA_NETWORK: Workload0-VDS-PG 9 | AVI_DATA_NETWORK_CIDR: 192.168.102.0/23 10 | AVI_ENABLE: "true" 11 | AVI_LABELS: "" 12 | AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_CIDR: 192.168.102.0/23 13 | AVI_MANAGEMENT_CLUSTER_CONTROL_PLANE_VIP_NETWORK_NAME: Workload0-VDS-PG 14 | AVI_MANAGEMENT_CLUSTER_SERVICE_ENGINE_GROUP: Default-Group 15 | AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_CIDR: 192.168.102.0/23 16 | 
AVI_MANAGEMENT_CLUSTER_VIP_NETWORK_NAME: Workload0-VDS-PG 17 | AVI_PASSWORD: 18 | AVI_SERVICE_ENGINE_GROUP: Default-Group 19 | AVI_USERNAME: admin 20 | # 21 | CLUSTER_ANNOTATIONS: 'description:,location:' 22 | CLUSTER_CIDR: 100.96.0.0/11 23 | CLUSTER_NAME: ubuntu-nv-1 24 | CLUSTER_PLAN: dev 25 | ENABLE_AUDIT_LOGGING: "true" 26 | ENABLE_CEIP_PARTICIPATION: "false" 27 | ENABLE_MHC: "true" 28 | IDENTITY_MANAGEMENT_TYPE: oidc 29 | INFRASTRUCTURE_PROVIDER: vsphere 30 | # 31 | LDAP_BIND_DN: "" 32 | LDAP_BIND_PASSWORD: "" 33 | LDAP_GROUP_SEARCH_BASE_DN: "" 34 | LDAP_GROUP_SEARCH_FILTER: "" 35 | LDAP_GROUP_SEARCH_GROUP_ATTRIBUTE: "" 36 | LDAP_GROUP_SEARCH_NAME_ATTRIBUTE: cn 37 | LDAP_GROUP_SEARCH_USER_ATTRIBUTE: DN 38 | LDAP_HOST: "" 39 | LDAP_ROOT_CA_DATA_B64: "" 40 | LDAP_USER_SEARCH_BASE_DN: "" 41 | LDAP_USER_SEARCH_FILTER: "" 42 | LDAP_USER_SEARCH_NAME_ATTRIBUTE: "" 43 | LDAP_USER_SEARCH_USERNAME: userPrincipalName 44 | # 45 | OIDC_IDENTITY_PROVIDER_CLIENT_ID: {{ CLIENT-ID }} 46 | OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: 47 | OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups 48 | OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://dev-5841773.okta.com 49 | OIDC_IDENTITY_PROVIDER_NAME: "" 50 | OIDC_IDENTITY_PROVIDER_SCOPES: openid,groups,email 51 | OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email 52 | # 53 | OS_ARCH: amd64 54 | OS_NAME: ubuntu 55 | OS_VERSION: "20.04" 56 | SERVICE_CIDR: 100.64.0.0/13 57 | TKG_HTTP_PROXY_ENABLED: "false" 58 | # 59 | VSPHERE_CONTROL_PLANE_DISK_GIB: "20" 60 | VSPHERE_CONTROL_PLANE_ENDPOINT: "" 61 | VSPHERE_CONTROL_PLANE_MEM_MIB: "8192" 62 | VSPHERE_CONTROL_PLANE_NUM_CPUS: "2" 63 | VSPHERE_DATACENTER: /Pacific-Datacenter 64 | VSPHERE_DATASTORE: /Pacific-Datacenter/datastore/vsanDatastore 65 | VSPHERE_FOLDER: /Pacific-Datacenter/vm 66 | VSPHERE_INSECURE: "false" 67 | VSPHERE_NETWORK: /Pacific-Datacenter/network/Workload0-VDS-PG 68 | VSPHERE_PASSWORD: 69 | VSPHERE_RESOURCE_POOL: /Pacific-Datacenter/host/Supervisor-Cluster/Resources 70 | VSPHERE_SERVER: 192.168.100.50 71 | VSPHERE_SSH_AUTHORIZED_KEY: | 72 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD8VIQfTGIF...UNUODRCkDWvAmdKSCCAEl0F5R01Kdm10eoafoRR3v1ewoU79Y1J1yQB6pBuHT1BfjVps/wo02O33QizU10ISnwRL/LYhz22CNj2w== navneetv@vmware.com 73 | VSPHERE_TLS_THUMBPRINT: F6:E4:8B:45:7C:43:7A:2E:A5:BC:2E:F2:0B:7C:88:1B:21:34:7B:F4 74 | VSPHERE_USERNAME: administrator@vsphere.local 75 | VSPHERE_WORKER_DISK_GIB: "40" 76 | VSPHERE_WORKER_MEM_MIB: "8192" 77 | VSPHERE_WORKER_NUM_CPUS: "2" 78 | # 79 | WORKER_ROLLOUT_STRATEGY: "" 80 | WORKER_MACHINE_COUNT: 2 81 | -------------------------------------------------------------------------------- /tkgm/workload-cluster-config.yaml: -------------------------------------------------------------------------------- 1 | # tanzu cluster create --file workload-cluster-config.yaml 2 | #! --------------------------------------------------------------------- 3 | #! Basic cluster creation configuration 4 | #! --------------------------------------------------------------------- 5 | 6 | CLUSTER_NAME: workload-vsphere-tkgm1 7 | CLUSTER_PLAN: dev 8 | NAMESPACE: default 9 | CNI: antrea 10 | #! --------------------------------------------------------------------- 11 | #! Node configuration 12 | #! 
--------------------------------------------------------------------- 13 | 14 | # SIZE: 15 | # CONTROLPLANE_SIZE: 16 | # WORKER_SIZE: 17 | 18 | # VSPHERE_NUM_CPUS: 2 19 | # VSPHERE_DISK_GIB: 40 20 | # VSPHERE_MEM_MIB: 4096 21 | 22 | VSPHERE_CONTROL_PLANE_NUM_CPUS: 4 23 | VSPHERE_CONTROL_PLANE_DISK_GIB: 60 24 | VSPHERE_CONTROL_PLANE_MEM_MIB: 8192 25 | VSPHERE_WORKER_NUM_CPUS: 4 26 | VSPHERE_WORKER_DISK_GIB: 60 27 | VSPHERE_WORKER_MEM_MIB: 8192 28 | 29 | # CONTROL_PLANE_MACHINE_COUNT: 30 | WORKER_MACHINE_COUNT: 3 31 | # WORKER_MACHINE_COUNT_0: 32 | # WORKER_MACHINE_COUNT_1: 33 | # WORKER_MACHINE_COUNT_2: 34 | 35 | #! --------------------------------------------------------------------- 36 | #! vSphere configuration 37 | #! --------------------------------------------------------------------- 38 | INFRASTRUCTURE_PROVIDER: vsphere 39 | VSPHERE_CLONE_MODE: "fullClone" 40 | VSPHERE_NETWORK: /Pacific-Datacenter/network/Workload0-VDS-PG 41 | # VSPHERE_TEMPLATE: 42 | # VSPHERE_TEMPLATE_MOID: 43 | # IS_WINDOWS_WORKLOAD_CLUSTER: false 44 | # VIP_NETWORK_INTERFACE: "eth0" 45 | VSPHERE_SSH_AUTHORIZED_KEY: | 46 | ssh-rsa AAAAB3NzaC1....J1yQB6pBuHT1BfjVps/wo02O33QizU10ISnwRL/LYhz22CNj2w== navneetv@vmware.com 47 | VSPHERE_USERNAME: administrator@vsphere.local 48 | VSPHERE_PASSWORD: 49 | # VSPHERE_REGION: 50 | # VSPHERE_ZONE: 51 | # VSPHERE_AZ_0: 52 | # VSPHERE_AZ_1: 53 | # VSPHERE_AZ_2: 54 | VSPHERE_SERVER: 192.168.100.50 55 | VSPHERE_DATACENTER: /Pacific-Datacenter 56 | VSPHERE_RESOURCE_POOL: /Pacific-Datacenter/host/Supervisor-Cluster/Resources 57 | VSPHERE_DATASTORE: /Pacific-Datacenter/datastore/vsanDatastore 58 | VSPHERE_FOLDER: /Pacific-Datacenter/vm 59 | # VSPHERE_STORAGE_POLICY_ID 60 | # VSPHERE_WORKER_PCI_DEVICES: 61 | # VSPHERE_CONTROL_PLANE_PCI_DEVICES: 62 | # VSPHERE_IGNORE_PCI_DEVICES_ALLOW_LIST: 63 | # VSPHERE_CONTROL_PLANE_CUSTOM_VMX_KEYS: 64 | # VSPHERE_WORKER_CUSTOM_VMX_KEYS: 65 | # WORKER_ROLLOUT_STRATEGY: "RollingUpdate" 66 | # VSPHERE_CONTROL_PLANE_HARDWARE_VERSION: 67 | # VSPHERE_WORKER_HARDWARE_VERSION: 68 | VSPHERE_TLS_THUMBPRINT: F6:E4:8B:45:7C:43:7A:2E:A5:BC:2E:F2:0B:7C:88:1B:21:34:7B:F4 69 | VSPHERE_INSECURE: false 70 | # VSPHERE_CONTROL_PLANE_ENDPOINT: # Required for Kube-Vip 71 | # VSPHERE_CONTROL_PLANE_ENDPOINT_PORT: 6443 72 | # VSPHERE_ADDITIONAL_FQDN: 73 | AVI_CONTROL_PLANE_HA_PROVIDER: true 74 | AVI_LABELS: | 75 | 'nsx-alb-node-port-local-l7': 'true' 76 | 77 | #! --------------------------------------------------------------------- 78 | #! NSX specific configuration for enabling NSX routable pods 79 | #! --------------------------------------------------------------------- 80 | 81 | # NSXT_POD_ROUTING_ENABLED: false 82 | # NSXT_ROUTER_PATH: "" 83 | # NSXT_USERNAME: "" 84 | # NSXT_PASSWORD: "" 85 | # NSXT_MANAGER_HOST: "" 86 | # NSXT_ALLOW_UNVERIFIED_SSL: false 87 | # NSXT_REMOTE_AUTH: false 88 | # NSXT_VMC_ACCESS_TOKEN: "" 89 | # NSXT_VMC_AUTH_HOST: "" 90 | # NSXT_CLIENT_CERT_KEY_DATA: "" 91 | # NSXT_CLIENT_CERT_DATA: "" 92 | # NSXT_ROOT_CA_DATA: "" 93 | # NSXT_SECRET_NAME: "cloud-provider-vsphere-nsxt-credentials" 94 | # NSXT_SECRET_NAMESPACE: "kube-system" 95 | 96 | #! --------------------------------------------------------------------- 97 | #! Common configuration 98 | #! 
--------------------------------------------------------------------- 99 | 100 | # TKG_CUSTOM_IMAGE_REPOSITORY: "" 101 | # TKG_CUSTOM_IMAGE_REPOSITORY_SKIP_TLS_VERIFY: false 102 | # TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: "" 103 | 104 | # TKG_HTTP_PROXY: "" 105 | # TKG_HTTPS_PROXY: "" 106 | # TKG_NO_PROXY: "" 107 | # TKG_PROXY_CA_CERT: "" 108 | 109 | ENABLE_AUDIT_LOGGING: false 110 | ENABLE_DEFAULT_STORAGE_CLASS: true 111 | 112 | CLUSTER_CIDR: 100.96.0.0/11 113 | SERVICE_CIDR: 100.64.0.0/13 114 | 115 | OS_NAME: ubuntu 116 | OS_VERSION: "20.04" 117 | OS_ARCH: amd64 118 | 119 | #! --------------------------------------------------------------------- 120 | #! Autoscaler configuration 121 | #! --------------------------------------------------------------------- 122 | 123 | ENABLE_AUTOSCALER: false 124 | # AUTOSCALER_MAX_NODES_TOTAL: "0" 125 | # AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m" 126 | # AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s" 127 | # AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m" 128 | # AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m" 129 | # AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m" 130 | # AUTOSCALER_MIN_SIZE_0: 131 | # AUTOSCALER_MAX_SIZE_0: 132 | # AUTOSCALER_MIN_SIZE_1: 133 | # AUTOSCALER_MAX_SIZE_1: 134 | # AUTOSCALER_MIN_SIZE_2: 135 | # AUTOSCALER_MAX_SIZE_2: 136 | 137 | #! --------------------------------------------------------------------- 138 | #! Antrea CNI configuration 139 | #! --------------------------------------------------------------------- 140 | # ANTREA_NO_SNAT: false 141 | # ANTREA_DISABLE_UDP_TUNNEL_OFFLOAD: false 142 | # ANTREA_TRAFFIC_ENCAP_MODE: "encap" 143 | # ANTREA_EGRESS_EXCEPT_CIDRS: "" 144 | # ANTREA_NODEPORTLOCAL_ENABLED: true 145 | # ANTREA_NODEPORTLOCAL_PORTRANGE: 61000-62000 146 | # ANTREA_PROXY_ALL: false 147 | # ANTREA_PROXY_NODEPORT_ADDRS: "" 148 | # ANTREA_PROXY_SKIP_SERVICES: "" 149 | # ANTREA_PROXY_LOAD_BALANCER_IPS: false 150 | # ANTREA_FLOWEXPORTER_COLLECTOR_ADDRESS: "flow-aggregator.flow-aggregator.svc:4739:tls" 151 | # ANTREA_FLOWEXPORTER_POLL_INTERVAL: "5s" 152 | # ANTREA_FLOWEXPORTER_ACTIVE_TIMEOUT: "30s" 153 | # ANTREA_FLOWEXPORTER_IDLE_TIMEOUT: "15s" 154 | # ANTREA_KUBE_APISERVER_OVERRIDE: 155 | # ANTREA_TRANSPORT_INTERFACE: 156 | # ANTREA_TRANSPORT_INTERFACE_CIDRS: "" 157 | # ANTREA_MULTICAST_INTERFACES: "" 158 | # ANTREA_MULTICAST_IGMPQUERY_INTERVAL: "125s" 159 | # ANTREA_TUNNEL_TYPE: geneve 160 | # ANTREA_ENABLE_USAGE_REPORTING: false 161 | # ANTREA_ENABLE_BRIDGING_MODE: false 162 | # ANTREA_DISABLE_TXCHECKSUM_OFFLOAD: false 163 | # ANTREA_DNS_SERVER_OVERRIDE: "" 164 | # ANTREA_MULTICLUSTER_ENABLE: false 165 | # ANTREA_MULTICLUSTER_NAMESPACE: "" 166 | -------------------------------------------------------------------------------- /tkgs restricted-rbac.yaml: -------------------------------------------------------------------------------- 1 | # Execute this on the workload cluster. 
grant this group view access in vCenter 2 | --- 3 | apiVersion: rbac.authorization.k8s.io/v1 4 | kind: RoleBinding 5 | metadata: 6 | name: namespace-admin-rolebinding 7 | namespace: {{ NAMESPACE }} 8 | roleRef: 9 | apiGroup: rbac.authorization.k8s.io 10 | kind: Role 11 | name: namespace-admin 12 | subjects: 13 | - apiGroup: rbac.authorization.k8s.io 14 | kind: Group 15 | name: sso:devops-admin@vsphere.local 16 | --- 17 | apiVersion: rbac.authorization.k8s.io/v1 18 | kind: Role 19 | metadata: 20 | name: namespace-admin-role 21 | namespace: {{ NAMESPACE }} 22 | rules: 23 | - apiGroups: 24 | - '*' 25 | resources: 26 | - '*' 27 | verbs: 28 | - '*' 29 | -------------------------------------------------------------------------------- /tkgs-cluster-class-az.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | name: workload-vsphere-tkg3-user-trusted-ca-secret 6 | namespace: demo1 7 | type: Opaque 8 | data: 9 | harbor-ca-1: TFMwdExTMUNSVWRKVGlCRFJWSlVTVVpKUTBGVVJTMHRMUzB0Q2sxSlNVWlBWRU5EUVRaSFowRjNTVUpCWjBsS1FVMXNSVzl2YVZKRWJVeHJUVUV3UjBOVGNVZFRTV0l6UkZGRlFrTjNWVUZOU1VkdVRWRnpkME5SV1VRS1ZsRlJSRVJCU2tSUlZFVllUVUpWUjBObmJWTktiMjFVT0dsNGEwRlNhMWRDTTFwNlkwZG9iR050VlhoR1ZFRlVRbWR2U210cFlVcHJMMGx6V2tGRldncEdaMVp6WWpKT2FHSkVSVXhOUVd0SFFURlZSVUpvVFVOV1ZrMTRSWHBCVWtKblRsWkNRV2ROUTJ0T2FHSkhiRzFpTTBwMVlWZEZlRXRVUVc1Q1owNVdDa0pCYjAxSlNFNXFUV2t3ZUUxRE1IaFBSRmwwVFhwTmRFMVVWVEpNYlZaMVduazFNbUpZWkdoamJWVjFXVEk1ZEUxU2MzZEhVVmxFVmxGUlRFUkNTbGNLVkZoa2FHTnRWV2RTVnpWdVlWYzFiRnBZU25CaWJXTjNTR2hqVGsxcVNYZFBSRVY2VFZSbmVFNXFSWGxYYUdOT1RYcEpkMDlFUlhkTlZHZDRUbXBGZVFwWGFrTkNjSHBGVEUxQmEwZEJNVlZGUVhkM1ExRXdSWGhHZWtGV1FtZHZTbXRwWVVwckwwbHpXa0ZGV2tablpESmpNMEp2V2xoS2JFMVNWWGRGZDFsTENrTmFTVzFwV2xCNVRFZFJRa2RTV1VaaVJ6bHFXVmQzZUVONlFVcENaMDVXUWtGWlZFRnNWbFJOVWsxM1JWRlpSRlpSVVVsRVFYQkVXVmQ0Y0ZwdE9Ya0tZbTFzYUUxVGEzZEtkMWxFVmxGUlMwUkRRbnBaZWtsMFRWUkJkRTFVWnpKTVZFMTZURlJGTVU1cE5XeGliV04xWkcweE0xbFlTbXhNYlU1MllsUkZZZ3BOUW10SFFURlZSVU4zZDFOV2F6RXpXVmhLYkVsRlZuVmFNbXgxV2xkV2VXRlhOVzVOU1VsQ2IycEJUa0puYTNGb2EybEhPWGN3UWtGUlJVWkJRVTlEQ2tGWk9FRk5TVWxDYVdkTFEwRlpSVUZ5YjJoTVMyRmtXR3hzYURCa2JYWTBjMnB4ZUhWR1pWVjJLMjFIWlcxc2FrcDJNMjB4TUhoVVNEUndaVFJ5YUV3S1UwTTROaTlIV25JMFoxRklhMjkzZUc1RU1Wa3ZiMXBwY1ZWUE1YRnlUek5pVURWaWRHbFVWM1pVYzNKQldqQjBWRWxRTVVZMWJXdzFZMmxHVTFBd05BcG5TR3RoVUZaVlQzb3dPSEJWUWpCcGEwSk9kR0V2WlRCTFlXdEZjR2MyUW1KWWVVZFVWMWRuTVdOaVFscGxlRzUwYkUxa05YcG1WRUZ4UmpCVFQzUXJDbFJ4TnpCRlYyVklZV3hyWVZkNFYyc3ZaMmRpU0VWNllrcFpURk5JUjA1NmQwcExlblJVZHpONFprUlJLM3BzWlRSR05rNUtTM1k0VlhGYVkzUmlhWGdLZWxoc2IzWkRhelpVYVRaM1FqTnBhM0JzZGtKbE56bElZWEZwV0VaRFNrRnpOVTV2UVU1dFkyMUlPVXhRVkZsWmFYQnhjR3RsU0M4NFJ6RXlNVVJyVVFwUU9IbHJjRUUxYlhkbmJITjZlVEVyVW5kUE5GWTVVa0p5U2s1NmFuSlhlRk5PUVc4MU9GQkViVzg1U1UxcE0zTnlWSGRNTkRSemREZGlTV281TVRWMENpOUhXVnBzZEhFMlpGa3hhMWQ2YkdSa1pXUlRZMmsyV1RkTFkwcEhMM05PVkhsV1ZqaEdTVWx4V0ZoRUwyNWpSbXhOZURKYVpESlZMMEpyUmtKdlMzb0tibU41WkdZM1RDOUhSemhPU2todFZWbE9VblZQVTNsYVUyVTNiV292YkRWUVdXOUJNVTlHTkhjMWNURjRaSE5ZV1hsSU1sUkRhRlZhT1hBeVVEZFNlUXBPYURSQldVNXJjMnhKUjBsdVlqQm1RV2ROUWtGQlIycGFha0pyVFVJd1IwRXhWV1JFWjFGWFFrSlRPRGRUVUdOSk5HcHFTR1JhZEROWmNUTnVVbGhtQ21GMVltOU9WRUZtUW1kT1ZraFNSVVZIUkVGWFoxRTFiR0pYUm5CaVJVSm9XVEl4YkV4dFRuWmlXV05GWm5kQlFVRlVRVTlDWjA1V1NGRTRRa0ZtT0VVS1FrRk5RMEZSV1hkRloxbEVWbEl3VkVGUlNDOUNRV2QzUW1kRlFpOTNTVUpCUkVGT1FtZHJjV2hyYVVjNWR6QkNRVkZ6UmtGQlQwTkJXVVZCYlcxRlFncDJSMHBhUkhsa01FMTZSMWRqT0hWVldrUTBjMlZKVm5kalNsUkpNMEZRYmtabGFVWnNVa1JJUVZoVFZtSlVNVXhsVUU5ck1ERnRiMjR6UjBsWU1FeE9Dall2VG5kU1VsTnFaRkpEVmpVM1ZYTm5ORzVSTXpCcE0xVTRaWFZPV
m05d1FrVjZRMUJIWVZkU1lXNVljWEpQWTBJM05tZFhWMFZhUzBoM1RuRk5ibTRLVG1oNFkxWnJlV1YzV0hWa00weHZVSEpCTlVOSmFqWllUREZYYldOcGIwdENkRzkzWWs5b1ZERXdXWGhxTm1oQlVuVkZTRkJOUTJwMGEydHdNbE15TUFwdlF5ODFNVE5vYjFNNVJEaG1SbWR1WTFOemNrZFlNMFI1UVRGRFVIQkpLMHcwTTJGTE9YY3ZXSFZpTDNWRlozZ3Jaa3BuYTNabU1XaGtRMUpqUWpGT0NsbHFTV3QzYUhCdldGVlFNV2RsU2sxNGFUSlRlRXc1UW5wRlZVSlVlRlpDUjNaS1dHeHhabTB4V21Kc2RWRXpaMjlLU1dwdFRtOVViemhLVTJSNFZsUUtSbWx6Vm01NlowUnZSWEpVUnpSeWExWXpTM0JMZW5oaFZYTmFObmRWZFdKcloyTlJLM0JQYVhOTWVVNUhiVWd4ZW1STWVIRjFWWE55YjI5bFdWQlVSUXBRWjJnNFRVaDZkR0k0UkdkbVdVSldjR0Y1WlhGc1dWRnZZVzlSYmpaSmJIWTNWVU5wTTJSYVQxaDRTM2d6Y1RKMFVXd3JSbms1ZFVsb1ltZ3JMMmxyQ2xKaEsyZE5ZM1pFYUhKUU16UjBja0Z4YzFoVVNFWlRhVGxGTld4UWNtNVFXalJUYlVOQ1VEQnhTRFp1TkhWVmRYZDBabHBLU21WMU4wOVlkZ290TFMwdExVVk9SQ0JEUlZKVVNVWkpRMEZVUlMwdExTMHRDZz09 10 | --- 11 | apiVersion: cluster.x-k8s.io/v1beta1 12 | kind: Cluster 13 | metadata: 14 | name: workload-vsphere-tkg3 15 | namespace: demo1 16 | spec: 17 | clusterNetwork: 18 | services: 19 | cidrBlocks: ["192.168.32.0/20"] 20 | pods: 21 | cidrBlocks: ["192.168.0.0/20"] 22 | serviceDomain: "cluster.local" 23 | topology: 24 | class: tanzukubernetescluster 25 | version: v1.25.7---vmware.3-fips.1-tkg.1 26 | controlPlane: 27 | # metadata: 28 | # annotations: 29 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 30 | replicas: 1 31 | # 8.0U3 32 | # machineHealthCheck: 33 | # enable: true 34 | # maxUnhealthy: 100% 35 | # nodeStartupTimeout: 4h0m0s 36 | # unhealthyConditions: 37 | # - status: Unknown 38 | # timeout: 5m0s 39 | # type: Ready 40 | # - status: "False" 41 | # timeout: 12m0s 42 | # type: Ready 43 | workers: 44 | machineDeployments: 45 | - class: node-pool 46 | failureDomain: zone1 47 | name: node-pool-1 48 | replicas: 1 49 | # variables: 50 | # overrides: 51 | # - name: vmClass 52 | # value: best-effort-large 53 | # metadata: 54 | # annotations: 55 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 56 | - class: node-pool 57 | failureDomain: zone2 58 | name: node-pool-2 59 | replicas: 0 60 | # variables: 61 | # overrides: 62 | # - name: vmClass 63 | # value: best-effort-large 64 | # metadata: 65 | # annotations: 66 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 67 | - class: node-pool 68 | failureDomain: zone3 69 | name: node-pool-3 70 | replicas: 0 71 | # variables: 72 | # overrides: 73 | # - name: vmClass 74 | # value: best-effort-large 75 | # metadata: 76 | # annotations: 77 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 78 | variables: 79 | - name: ntp 80 | value: "ntp.vmware.com" 81 | - name: vmClass 82 | value: best-effort-small 83 | - name: storageClass 84 | value: tanzu 85 | - name: defaultStorageClass 86 | value: tanzu 87 | - name: clusterEncryptionConfigYaml 88 | value: | 89 | apiVersion: apiserver.config.k8s.io/v1 90 | kind: EncryptionConfiguration 91 | resources: 92 | - resources: 93 | - secrets 94 | providers: 95 | - aescbc: 96 | keys: 97 | - name: key1 98 | secret: QiMgJGYXudtljldVyl+AnXQQlk7r9iUXBfVKqdEfKm8= 99 | - identity: {} 100 | # ADDITIONAL VALUES 101 | - name: nodePoolVolumes 102 | value: 103 | - capacity: 104 | storage: "15Gi" 105 | mountPath: "/var/lib/containerd" 106 | name: containerd 107 | storageClass: tanzu 108 | - capacity: 109 | storage: "15Gi" 110 | mountPath: "/var/lib/kubelet" 111 | name: kubelet 112 | storageClass: tanzu 113 | - name: controlPlaneVolumes 114 | value: 115 | - capacity: 116 | storage: "15Gi" 117 | mountPath: "/var/lib/containerd" 118 | name: containerd 119 | storageClass: tanzu 120 | - capacity: 121 | storage: "15Gi" 
122 | mountPath: "/var/lib/kubelet" 123 | name: kubelet 124 | storageClass: tanzu 125 | # Not supported 126 | # - capacity: 127 | # storage: "4Gi" 128 | # mountPath: "/var/lib/etcd" 129 | # name: etcd 130 | # storageClass: tanzu 131 | - name: trust 132 | value: 133 | additionalTrustedCAs: 134 | - name: harbor-ca-1 135 | -------------------------------------------------------------------------------- /tkgs-cluster-class-noaz.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cluster.x-k8s.io/v1beta1 2 | kind: Cluster 3 | metadata: 4 | name: workload-vsphere-tkg5 5 | spec: 6 | clusterNetwork: 7 | apiServerPort: 6443 8 | pods: 9 | cidrBlocks: ["192.168.32.0/20"] 10 | serviceDomain: "cluster.local" 11 | services: 12 | cidrBlocks: ["192.168.0.0/20"] 13 | paused: false 14 | topology: 15 | class: tanzukubernetescluster 16 | version: v1.28.8---vmware.1-fips.1-tkg.2 17 | controlPlane: 18 | # machineHealthCheck: 19 | # enable: true 20 | # maxUnhealthy: 100% 21 | # nodeStartupTimeout: 4h0m0s 22 | # unhealthyConditions: 23 | # - status: Unknown 24 | # timeout: 5m0s 25 | # type: Ready 26 | # - status: "False" 27 | # timeout: 12m0s 28 | # type: Ready 29 | metadata: 30 | annotations: 31 | run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 32 | labels: 33 | my-custom-label-key: my-custom-label-value 34 | # nodeDeletionTimeout: "10s" 35 | # nodeDrainTimeout: "0s" 36 | # nodeVolumeDetachTimeout: "0s" 37 | replicas: 1 38 | workers: 39 | machineDeployments: 40 | - class: node-pool 41 | # failureDomain: zone1 42 | # machineHealthCheck: 43 | # enable: true 44 | # maxUnhealthy: 100% 45 | # nodeStartupTimeout: 4h0m0s 46 | # unhealthyConditions: 47 | # - status: Unknown 48 | # timeout: 5m0s 49 | # type: Ready 50 | # - status: "False" 51 | # timeout: 12m0s 52 | # type: Ready 53 | metadata: 54 | annotations: 55 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 56 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "1" 57 | run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 58 | labels: 59 | my-custom-label-key: my-custom-label-value 60 | minReadySeconds: 0 61 | name: node-pool-1 62 | # nodeDeletionTimeout: "10s" 63 | # nodeDrainTimeout: "0s" 64 | # nodeVolumeDetachTimeout: "0s" 65 | replicas: 1 66 | variables: 67 | overrides: 68 | - name: vmClass 69 | value: best-effort-medium 70 | - name: nodePoolLabels 71 | value: [{ "key": "my-nodepool-name", "value": "node-pool-1" }] 72 | - class: node-pool 73 | # failureDomain: zone2 74 | # machineHealthCheck: 75 | # enable: true 76 | # maxUnhealthy: 100% 77 | # nodeStartupTimeout: 4h0m0s 78 | # unhealthyConditions: 79 | # - status: Unknown 80 | # timeout: 5m0s 81 | # type: Ready 82 | # - status: "False" 83 | # timeout: 12m0s 84 | # type: Ready 85 | metadata: 86 | annotations: 87 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 88 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0" 89 | run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 90 | labels: 91 | my-custom-label-key: my-custom-label-value 92 | name: node-pool-2 93 | # nodeDeletionTimeout: "10s" 94 | # nodeDrainTimeout: "0s" 95 | # nodeVolumeDetachTimeout: "0s" 96 | replicas: 0 97 | # variables: 98 | # overrides: 99 | # - name: vmClass 100 | # value: best-effort-large 101 | - class: node-pool 102 | # failureDomain: zone3 103 | # machineHealthCheck: 104 | # enable: true 105 | # maxUnhealthy: 100% 106 | # nodeStartupTimeout: 4h0m0s 107 | # unhealthyConditions: 108 | # - status: Unknown 109 | # 
timeout: 5m0s 110 | # type: Ready 111 | # - status: "False" 112 | # timeout: 12m0s 113 | # type: Ready 114 | metadata: 115 | annotations: 116 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "3" 117 | # cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0" 118 | run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 119 | labels: 120 | my-custom-label-key: my-custom-label-value 121 | name: node-pool-3 122 | # nodeDeletionTimeout: "10s" 123 | # nodeDrainTimeout: "0s" 124 | # nodeVolumeDetachTimeout: "0s" 125 | replicas: 0 126 | # variables: 127 | # overrides: 128 | # - name: vmClass 129 | # value: best-effort-large 130 | variables: 131 | - name: ntp 132 | value: time.google.com 133 | - name: vmClass 134 | value: best-effort-small 135 | - name: storageClass 136 | value: tanzu 137 | - name: defaultStorageClass 138 | value: tanzu 139 | - name: clusterEncryptionConfigYaml 140 | value: | 141 | apiVersion: apiserver.config.k8s.io/v1 142 | kind: EncryptionConfiguration 143 | resources: 144 | - resources: 145 | - secrets 146 | providers: 147 | - aescbc: 148 | keys: 149 | - name: key1 150 | secret: QiMgJGYXudtljldVyl+AnXQQlk7r9iUXBfVKqdEfKm8= 151 | - identity: {} 152 | # ADDITIONAL VALUES 153 | # - name: nodePoolVolumes 154 | # value: 155 | # - capacity: 156 | # storage: "15Gi" 157 | # mountPath: "/var/lib/containerd" 158 | # name: containerd 159 | # storageClass: tanzu 160 | # - capacity: 161 | # storage: "15Gi" 162 | # mountPath: "/var/lib/kubelet" 163 | # name: kubelet 164 | # storageClass: tanzu 165 | - name: controlPlaneVolumes 166 | value: 167 | - capacity: 168 | storage: "15Gi" 169 | mountPath: "/var/lib/containerd" 170 | name: containerd 171 | storageClass: tanzu 172 | - capacity: 173 | storage: "15Gi" 174 | mountPath: "/var/lib/kubelet" 175 | name: kubelet 176 | storageClass: tanzu 177 | - name: podSecurityStandard 178 | value: 179 | audit: restricted 180 | auditVersion: latest 181 | enforce: privileged 182 | enforceVersion: latest 183 | warn: privileged 184 | warnVersion: latest 185 | - name: kubeAPIServerFQDNs 186 | value: 187 | - workload-vsphere-tkg4.env1.lab.test 188 | - name: controlPlaneCertificateRotation 189 | value: 190 | daysBefore: 90 191 | - name: nodePoolLabels 192 | value: 193 | - key: "my-nodepool-key" 194 | value: "my-nodepool-value" 195 | # - name: proxy 196 | # value: 197 | # httpProxy: 198 | # httpsProxy: 199 | # noProxy: ["",""] 200 | -------------------------------------------------------------------------------- /tkgs-cluster-class-override.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cluster.x-k8s.io/v1beta1 2 | kind: Cluster 3 | metadata: 4 | name: workload-vsphere-tkg4 5 | namespace: demo1 6 | spec: 7 | clusterNetwork: 8 | services: 9 | cidrBlocks: ["192.168.32.0/20"] 10 | pods: 11 | cidrBlocks: ["192.168.0.0/20"] 12 | serviceDomain: "cluster.local" 13 | topology: 14 | class: tanzukubernetescluster 15 | version: v1.25.7---vmware.3-fips.1-tkg.1 16 | controlPlane: 17 | replicas: 1 18 | # metadata: 19 | # annotations: 20 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 21 | workers: 22 | machineDeployments: 23 | - class: node-pool 24 | name: node-pool-1 25 | replicas: 1 26 | # metadata: 27 | # annotations: 28 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 29 | variables: 30 | overrides: 31 | - name: vmClass 32 | value: best-effort-medium 33 | - name: nodePoolVolumes 34 | value: 35 | - capacity: 36 | storage: 30Gi 37 | mountPath: /var/lib/containerd 38 | name: 
containerd 39 | - class: node-pool 40 | name: node-pool-2 41 | replicas: 1 42 | # metadata: 43 | # annotations: 44 | # run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu 45 | variables: 46 | overrides: 47 | - name: vmClass 48 | value: best-effort-small 49 | - name: nodePoolVolumes 50 | value: 51 | - capacity: 52 | storage: 10Gi 53 | mountPath: /var/lib/containerd 54 | name: containerd 55 | variables: 56 | - name: nodePoolLabels 57 | value: [] 58 | - name: user 59 | value: 60 | passwordSecret: 61 | key: ssh-passwordkey 62 | name: workload-vsphere-tkg4-ssh-password-hashed 63 | sshAuthorizedKey: | 64 | ssh-rsa AAAAB3Nnpm0x88nB7PoSDOMG+rOAB7Z51YaObQN1QI28X/Tp4X4Ey90Faxgy7MumhshyzYtVQinHBtplxrBPrnheSm/GUhYHYBTvnnsY0MVcDBnp++ndo4kWH+X40nkTkY8fNVqQeWFsX7q56ddDXEueJE1UfIN1xkASxdR46nhKNg6yRJVhI3B+gptB0XE9NU1SUi9gBlFVeJUr3rGAf43UZ69hZWLgG71agfoTwqkSD96C87Ny7AtNTMOuU+2VfM+YA/5EzkJIs4qDXp1RP7amdJj5kpcpHaeozJk+Uajfaz2N+y/q7MolL0Sau7LxIxEEJdMJsR81WlyctF2nohCsZ4kBHsIcD0/vP+wqrgsB1FmbgvA9x+TmMK7/Xw42v8gxT2VzFfvSHajq71FqfkdwjftHJglRGfN6PU8TEGL3DH7tVs0fbwFwWTM+G8sQAonwWYr4KzYk4AatL1GdDHi4sdmriMgSEV0GZkuZtVCp1aWBe1hvAKL1bFrKhiNbLyXr5mIFIQuDm7nYZ5sym1Vb/1psLbOZfJ7sqxNw== 65 | - name: storageClasses 66 | value: 67 | - tanzu 68 | - name: controlPlaneVolumes 69 | value: 70 | - capacity: 71 | storage: "15Gi" 72 | mountPath: "/var/lib/containerd" 73 | name: containerd 74 | storageClass: tanzu 75 | - capacity: 76 | storage: "15Gi" 77 | mountPath: "/var/lib/kubelet" 78 | name: kubelet 79 | storageClass: tanzu 80 | - name: clusterEncryptionConfigYaml 81 | value: LS0tCmFwaVZlcnNpb246IGFwaXNlcnZlci5jb25maWcuazhzLmlvL3YxCmtpbmQ6IEVuY3J5cHRpb25Db25maWd1cmF0aW9uCnJlc291cmNlczoKICAtIHJlc291cmNlczoKICAgIC0gc2VjcmV0cwogICAgcHJvdmlkZXJzOgogICAgLSBhZXNjYmM6CiAgICAgICAga2V5czoKICAgICAgICAtIG5hbWU6IGtleTEKICAgICAgICAgIHNlY3JldDogdzFTSm5zd2RNSGNMTHJuMlBZK3lvTEtUNVlpdXRWSi9PQzgzNUdrQzZHST0KICAgIC0gaWRlbnRpdHk6IHt9Cg== 82 | - name: vmClass 83 | value: best-effort-small 84 | - name: nodePoolVolumes 85 | value: [] 86 | - name: storageClass 87 | value: tanzu 88 | - name: ntp 89 | value: ntp.vmware.com 90 | - name: nodePoolTaints 91 | value: [] -------------------------------------------------------------------------------- /tkgs-cluster-v1alpha3.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: run.tanzu.vmware.com/v1alpha3 2 | kind: TanzuKubernetesCluster 3 | metadata: 4 | name: workload-vsphere-tkg1 5 | namespace: demo1 6 | spec: 7 | topology: 8 | controlPlane: 9 | replicas: 1 10 | vmClass: best-effort-medium 11 | storageClass: vsan-default-storage-policy 12 | volumes: 13 | - name: containerd 14 | mountPath: /var/lib/containerd 15 | capacity: 16 | storage: 15Gi 17 | - name: kubelet 18 | mountPath: /var/lib/kubelet 19 | capacity: 20 | storage: 15Gi 21 | tkr: 22 | reference: 23 | name: v1.21.6---vmware.1-tkg.1.b3d708a 24 | nodePools: 25 | - name: node-pool-1 26 | replicas: 1 27 | vmClass: best-effort-medium 28 | storageClass: vsan-default-storage-policy 29 | volumes: 30 | - name: containerd 31 | mountPath: /var/lib/containerd 32 | capacity: 33 | storage: 30Gi 34 | - name: kubelet 35 | mountPath: /var/lib/kubelet 36 | capacity: 37 | storage: 30Gi 38 | tkr: 39 | reference: 40 | name: v1.21.6---vmware.1-tkg.1.b3d708a 41 | settings: 42 | storage: 43 | classes: ["vsan-default-storage-policy"] 44 | defaultClass: vsan-default-storage-policy 45 | network: 46 | cni: 47 | name: antrea 48 | pods: 49 | cidrBlocks: ["10.244.0.0/16"] 50 | services: 51 | cidrBlocks: ["192.168.1.0/24"] 52 | 
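Note:- The `TanzuKubernetesCluster` manifest above (`tkgs-cluster-v1alpha3.yaml`) is applied from the **Supervisor Cluster context**, not from inside a workload cluster. A minimal usage sketch follows; the Supervisor endpoint address is a placeholder, and the namespace and cluster name are taken from the manifest itself:

```shell
# Log in to the Supervisor Cluster and switch to the vSphere Namespace used in the manifest
kubectl vsphere login --server=<supervisor-endpoint> --vsphere-username administrator@vsphere.local
kubectl config use-context demo1

# Create the cluster and watch it come up
kubectl apply -f tkgs-cluster-v1alpha3.yaml
kubectl get tanzukubernetescluster -n demo1 workload-vsphere-tkg1 -w

# Once the cluster reports Ready, log in to it directly
kubectl vsphere login --server=<supervisor-endpoint> --vsphere-username administrator@vsphere.local \
  --tanzu-kubernetes-cluster-namespace demo1 --tanzu-kubernetes-cluster-name workload-vsphere-tkg1
```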
--------------------------------------------------------------------------------
/tmc-cluster-v1alpha2.yaml:
--------------------------------------------------------------------------------
1 | type:
2 |   kind: Cluster
3 |   package: vmware.tanzu.manage.v1alpha1.cluster
4 |   version: v1alpha1
5 | fullName:
6 |   managementClusterName: nverma-haas-242
7 |   provisionerName: demo1
8 |   name: workload-vsphere-tkg1
9 | meta: {}
10 | spec:
11 |   clusterGroupName: nverma
12 |   tkgServiceVsphere:
13 |     settings:
14 |       network:
15 |         pods:
16 |           cidrBlocks:
17 |           - 192.0.0.0/16
18 |         services:
19 |           cidrBlocks:
20 |           - 198.51.100.0/24
21 |       storage:
22 |         classes:
23 |         - vsan-default-storage-policy
24 |         defaultClass: vsan-default-storage-policy
25 |     distribution:
26 |       version: v1.21.6+vmware.1-tkg.1.b3d708a
27 |     topology:
28 |       controlPlane:
29 |         class: best-effort-medium
30 |         storageClass: vsan-default-storage-policy
31 |         volumes:
32 |         - name: etcd
33 |           mountPath: "/var/lib/etcd"
34 |           capacity: 4
35 |       nodePools:
36 |       - spec:
37 |           workerNodeCount: '1'
38 |           tkgServiceVsphere:
39 |             class: best-effort-medium
40 |             storageClass: vsan-default-storage-policy
41 |             volumes:
42 |             - name: containerd
43 |               mountPath: "/var/lib/containerd"
44 |               capacity: 30
45 |         info:
46 |           name: default-nodepool
47 | 
--------------------------------------------------------------------------------
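Note:- The ClusterClass-based manifests (`tkgs-cluster-class-*.yaml`) embed a few generated values. A rough sketch of how those values can be produced is shown below, assuming the registry CA is saved locally as `ca.crt` and `VMware1!` is only a placeholder password; the secret and key names come from the manifests above:

```shell
# Value for the harbor-ca-1 key in the *-user-trusted-ca-secret Secret.
# The trust variable expects a base64-encoded PEM, and Secret data is base64-encoded
# again, so the certificate ends up encoded twice.
base64 -w0 ca.crt | base64 -w0

# Fresh 32-byte AES-CBC key for the clusterEncryptionConfigYaml variable
head -c 32 /dev/urandom | base64

# Hashed SSH password Secret referenced by the user variable in tkgs-cluster-class-override.yaml
# (assumption: a SHA-512 crypt hash is expected; mkpasswd ships with the whois package)
kubectl -n demo1 create secret generic workload-vsphere-tkg4-ssh-password-hashed \
  --from-literal=ssh-passwordkey="$(mkpasswd -m sha-512 'VMware1!')"

# Apply a cluster manifest in its vSphere Namespace from the Supervisor Cluster context
kubectl apply -f tkgs-cluster-class-az.yaml
kubectl get cluster -n demo1 workload-vsphere-tkg3
```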