├── ArgoCD ├── ArgoCD_KastenK10_Minio.md ├── kasten-pre-sync-job.yaml ├── pacman-stateful-demo.yaml ├── role-binding.yaml └── sa.yaml ├── Instructions_readme.md ├── Offline Installation.md ├── PowerShell ├── 1_downloads.ps1 ├── 2_create_cluster.ps1 ├── 3_install_k10.ps1 ├── 4_my_sql.ps1 └── readme.md ├── Zero to Hero - Kubernetes and Kasten K10.md ├── baseline.sh ├── data ├── AddDatatoMongo.sh └── AddDatatoPostgreSQL.sh ├── deploy_apps.sh ├── k10primer.yaml ├── media ├── VUG.png └── minicube_windows_vmware.jpg ├── multicluster ├── diagram.drawio.svg ├── diagram.svg ├── metallbmcdemo.yaml ├── metallbmcdemo2.yaml ├── multicluster_demo.sh ├── multicluster_metallb_demo .sh └── readme.md ├── pacman-stateful-demo.yaml ├── readme.md └── singlecluster_demo.sh /ArgoCD/ArgoCD_KastenK10_Minio.md: -------------------------------------------------------------------------------- 1 | ## Backup is not a game! 2 | 3 | You have heard of continuous integration and deployment, but have you heard of **Continuous Backup**? 4 | 5 | In this session we are going to be using our own machines to - 6 | 7 | - deploy a Kubernetes cluster (Minikube) 8 | - deploy Kasten K10 9 | - deploy ArgoCD 10 | - deploy Minio (Optional) 11 | - configure ArgoCD to deploy our mission-critical application 12 | - simulate some change to the data service 13 | - recover with Kasten K10 14 | 15 | This assumes that you already have the Helm repositories on your system; I would generally run `helm repo update` at this stage so that the Kasten K10, ArgoCD and Minio charts are the latest available. 16 | 17 | ### Install Minikube 18 | 19 | - [Installing Minikube](https://minikube.sigs.k8s.io/docs/start/) 20 | 21 | Another option is using [Arkade](https://github.com/alexellis/arkade) with `arkade get minikube` 22 | 23 | With Arkade we also have the ability to install Kasten K10 and Kasten Open-Source projects. 24 | 25 | The minikube installation should also install kubectl, the Kubernetes CLI. You will need this; it is also available through most cross-platform package managers (Chocolatey, apt, etc.). 26 | 27 | We will also need helm to deploy some of our data services. 28 | 29 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) or `arkade get kubectl` 30 | - [helm](https://helm.sh/docs/intro/install/) or `arkade get helm` 31 | 32 | ### Deploy Minikube cluster 33 | 34 | Once we have minikube available in our environment, run: 35 | 36 | `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p webinar-demo --kubernetes-version=1.21.2` 37 | 38 | With the above we will be using Docker as our virtual machine manager. If you have not already, you can grab Docker cross platform. 39 | [Get Docker](https://docs.docker.com/get-docker/) 40 | 41 | ### Deploy Kasten K10 42 | 43 | Add the Kasten Helm repository 44 | 45 | `helm repo add kasten https://charts.kasten.io/` 46 | 47 | We could use `arkade kasten install k10` here as well, but for the purpose of the demo we will run through the following steps. [More Details](https://blog.kasten.io/kasten-k10-goes-to-the-arkade) 48 | 49 | Create the namespace and deploy K10; note that this will take around 5 minutes. 50 | 51 | `helm install k10 kasten/k10 --namespace=kasten-io --create-namespace --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true` 52 | 53 | You can watch the pods come up by running the following command.
54 | 55 | `kubectl get pods -n kasten-io -w` 56 | 57 | You can also use the following to ensure all pods are up and Ready. 58 | 59 | `kubectl wait --for=condition=Ready pods --all -n kasten-io` 60 | 61 | port forward to access the K10 dashboard, open a new terminal to run the below command 62 | 63 | `kubectl --namespace kasten-io port-forward service/gateway 8080:8000` 64 | 65 | The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` 66 | 67 | To authenticate with the dashboard we now need the token which we can get with the following commands. 68 | 69 | ``` 70 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 71 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 72 | 73 | echo "Token value: " 74 | echo $TOKEN 75 | ``` 76 | 77 | ## Install ArgoCD 78 | 79 | ``` 80 | kubectl create namespace argocd 81 | kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml 82 | kubectl port-forward svc/argocd-server -n argocd 8181:443 83 | ``` 84 | 85 | Username is admin and password can be obtained with this command. 86 | 87 | ``` 88 | kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo 89 | ``` 90 | 91 | ## Kubernetes Storage changes 92 | 93 | Annotate the CSI Hostpath VolumeSnapshotClass for use with K10 94 | 95 | ``` 96 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 97 | k10.kasten.io/is-snapshot-class=true 98 | ``` 99 | we also need to change our default storageclass with the following 100 | 101 | ``` 102 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 103 | 104 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 105 | ``` 106 | Patching the storage as above before installing Kasten K10 will result in the Prometheus pod not starting. 107 | 108 | ## Adding the app to ArgoCD 109 | 110 | 111 | -------------------------------------------------------------------------------- /ArgoCD/kasten-pre-sync-job.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1 2 | kind: Job 3 | metadata: 4 | generateName: kasten-pre-sync- 5 | annotations: 6 | argocd.argoproj.io/hook: PreSync 7 | argocd.argoproj.io/sync-wave: "1" 8 | spec: 9 | template: 10 | metadata: 11 | creationTimestamp: null 12 | spec: 13 | serviceAccountName: kasten-pre-sync 14 | containers: 15 | - image: ghcr.io/kanisterio/kanister-kubectl:1.18 16 | command: 17 | - sh 18 | - -o 19 | - errexit 20 | - -o 21 | - pipefail 22 | - -c 23 | - | 24 | # create a backup action on the project 25 | backup_name="pacman-backup-$(date| tr ' ' '-'|tr ':' '-'|tr '[:upper:]' '[:lower:]')" 26 | cat <[], where unit = b, k, m or g). Use "max" to use the maximum 94 | amount of memory. 
95 | ``` 96 | 97 | ### MiniKube on Windows with VMware Workstation 98 | 99 | ``` 100 | $Env:Path += ";C:\Program Files (x86)\VMware\VMware Workstation" 101 | minikube start --driver vmware --addons volumesnapshots,csi-hostpath-driver 102 | ``` 103 | 104 | ![MiniKube on Windows with VMware Workstation](media/minicube_windows_vmware.jpg) 105 | 106 | 107 | ## Kasten K10 108 | 109 | Add the Kasten Helm repository 110 | 111 | ``` 112 | helm repo add kasten https://charts.kasten.io/ 113 | ``` 114 | Create the namespace and deploy K10, note that this will take around 5 mins 115 | 116 | ``` 117 | kubectl create namespace kasten-io 118 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 119 | ``` 120 | You can watch the pods come up by running the following command. 121 | ``` 122 | kubectl get pods -n kasten-io -w 123 | ``` 124 | port forward to access the K10 dashboard, open a new terminal to run the below command 125 | 126 | ``` 127 | kubectl --namespace kasten-io port-forward service/gateway 8080:8000 128 | ``` 129 | 130 | The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` 131 | 132 | To authenticate with the dashboard we now need the token which we can get with the following commands. 133 | 134 | ``` 135 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 136 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 137 | 138 | echo "Token value: " 139 | echo $TOKEN 140 | ``` 141 | ![image](https://user-images.githubusercontent.com/22192242/138279675-5f7e6867-299c-44d9-bd9f-6824628260d8.png) 142 | 143 | Annotate the CSI Hostpath VolumeSnapshotClass for use with K10 144 | 145 | ``` 146 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 147 | k10.kasten.io/is-snapshot-class=true 148 | ``` 149 | we also need to change our default storageclass with the following 150 | 151 | ``` 152 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 153 | 154 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 155 | ``` 156 | 157 | ## MySQL 158 | ### Step 1 - Deploy your mysql app for the first time 159 | 160 | Deploying mysql via helm: 161 | 162 | ``` 163 | APP_NAME=my-production-app 164 | kubectl create ns ${APP_NAME} 165 | helm repo add bitnami https://charts.bitnami.com/bitnami 166 | helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} 167 | kubectl get pods -n ${APP_NAME} -w 168 | ``` 169 | 170 | ### Step 2 - Add Data Source 171 | Populate the mysql database with initial data, run the following: 172 | 173 | ``` 174 | MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) 175 | MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local 176 | MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" 177 | echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} 178 | ``` 179 | 180 | ##### Step 2a - Create a MySQL CLIENT 181 | We will run another container image to act as our client 182 | 183 | ``` 184 | APP_NAME=my-production-app 185 | kubectl run mysql-client --rm --env 
APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash 186 | ``` 187 | ``` 188 | Note: if you already have an existing mysql client pod running, delete it with the command 189 | 190 | kubectl delete pod -n ${APP_NAME} mysql-client 191 | ``` 192 | 193 | ##### Step 2b - Add Data to MySQL 194 | 195 | ``` 196 | echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} 197 | MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" 198 | echo "drop table Accounts" | ${MYSQL_EXEC} 199 | echo "create table if not exists Accounts(name text, balance integer); insert into Accounts values('nick', 0);" | ${MYSQL_EXEC} 200 | echo "insert into Accounts values('albert', 112);" | ${MYSQL_EXEC} 201 | echo "insert into Accounts values('alfred', 358);" | ${MYSQL_EXEC} 202 | echo "insert into Accounts values('beatrice', 1321);" | ${MYSQL_EXEC} 203 | echo "insert into Accounts values('bartholomew', 34);" | ${MYSQL_EXEC} 204 | echo "insert into Accounts values('edward', 5589);" | ${MYSQL_EXEC} 205 | echo "insert into Accounts values('edwin', 144);" | ${MYSQL_EXEC} 206 | echo "insert into Accounts values('edwina', 233);" | ${MYSQL_EXEC} 207 | echo "insert into Accounts values('rastapopoulos', 377);" | ${MYSQL_EXEC} 208 | echo "select * from Accounts;" | ${MYSQL_EXEC} 209 | exit 210 | ``` 211 | 212 | ## Create and Perform a backup of your data service 213 | In the K10 dashboard, walk through and create a policy to protect your MySQL data. 214 | 1. On the homescreen, click the Applications tile. 215 | 2. On the "my-production-app" tile, click "Create a Policy" and change any of the settings as needed. 216 | 3. When the policy is created, click "Run Now" on the tile. 217 | 218 | ## Application Restore with Transformation 219 | First, let's check the storageclass options within our cluster with the following command; you will see that we have two: the CSI hostpath class and standard. 220 | 221 | ``` 222 | kubectl get storageclass 223 | ``` 224 | 225 | Let's restore a clone of our data into a new namespace on a different storageclass by using transformations. 226 | In the K10 dashboard, walk through and restore the application. 227 | 1. On the homescreen, click the green "Compliant" button on the Application tile. 228 | 2. On the "my-production-app" tile, click the restore button and select a restore point. 229 | 3. Under the heading "Application Name" select "Create a new namespace", provide a name and save (we used clone for the below example). 230 | 4. Under the heading "Optional Restore Settings" select "Apply transforms to restored resources". 231 | 5. Click "Add New Transform"; on the transform screen, select "Use an example" in the top right and "change storageClass". 232 | 6. Under operations, change the value to "standard", click the "Edit Transform" button and then the "Create Transform" button. 233 | 7. Click the "Restore" button and confirm the restore. 234 | 8. You can monitor the progress on the homepage. 235 | 236 | Once the restore is completed, we connect to the DB to confirm the values in the table are present.
237 | 238 | ``` 239 | APP_NAME=clone 240 | kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash 241 | 242 | # within the container run 243 | echo "select * from Accounts;" | ${MYSQL_EXEC} 244 | exit 245 | ``` 246 | 247 | ## Delete cluster 248 | When you are finished with the demo, you can simply delete the cluster using the following command. Note that if you changed the cluster name in the steps above, you will need to update it here as well. 249 | 250 | ``` 251 | minikube delete -p mc-demo 252 | ``` 253 | 254 | # Exporting data using Minio S3 Storage 255 | In this optional section we will go against best practices and deploy the object storage export location for our K10 backups inside the same cluster. 256 | 257 | ## Install Minio 258 | ``` 259 | helm repo add minio https://helm.min.io/ 260 | helm install --namespace minio-operator --create-namespace --generate-name minio/minio-operator 261 | ``` 262 | ## Accessing Minio 263 | 264 | Get the JWT for logging in to the console: 265 | ```` 266 | kubectl get secret $(kubectl get serviceaccount console-sa --namespace minio-operator -o jsonpath="{.secrets[0].name}") --namespace minio-operator -o jsonpath="{.data.token}" | base64 --decode 267 | ```` 268 | Open a new terminal window and set up a port forward to access the Minio management page in your browser: 269 | ```` 270 | kubectl --namespace minio-operator port-forward svc/console 9090:9090 271 | ```` 272 | Open your browser to http://127.0.0.1:9090 and log in with the token from the above step. 273 | 274 | ## Configuring Minio 275 | 276 | On the Tenants tab, select the default tenant (it should be named "minio1"), then select the "Manage Tenant" button. 277 | 278 | 1. Within the tenant, click "Service Accounts" and create a service account with the default settings. Copy the Access Key and Secret Key or download the file. 279 | 2. Click Buckets, and create a bucket with the default settings. 280 | 281 | ## Configure S3 storage in Kasten 282 | 1. Click settings in the top right-hand corner. Select locations and Create new location. 283 | 2. Provide a name, select "S3 Compatible", and enter the Access Key and Secret Key you saved earlier. 284 | 3. Set the endpoint as "minio.default.svc.cluster.local" (this is the internal k8s DNS name) and select to skip SSL verification. 285 | 4. Provide the bucket name you configured and click "Save Profile". 286 | 287 | ## Configure the Kasten Policy to export data to the S3 Storage 288 | 1. Edit your existing policy. 289 | 2. Enable the setting "Enable Backups via Snapshot Exports". 290 | 3. Select the S3 location profile you have just created, and set the schedule as necessary. Click the "Edit Policy" button. 291 | 4. Manually run the policy and observe the run on the homescreen. After the backup run, you will see a new task called "Export". 292 | 293 | Manually browse the bucket from the Minio browser console; you will see it contains a folder called "k10" and, within that, the protection data.
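If you prefer the command line over the browser console, you can also list the bucket with the MinIO client. This is only a rough sketch: it assumes you have `mc` installed locally, have port-forwarded the tenant's MinIO service to localhost:9000, and the access key, secret key and bucket name below are placeholders for the values you created above.

```
# Register the port-forwarded MinIO endpoint under an alias (keys are the ones saved from the service account)
mc alias set k10-minio http://127.0.0.1:9000 <ACCESS_KEY> <SECRET_KEY>

# After an export run you should see objects under a "k10" prefix in your bucket
mc ls --recursive k10-minio/<BUCKET_NAME>
```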
294 | 295 | ![image](https://user-images.githubusercontent.com/22192242/138359395-b4175851-9da8-46d7-86b7-7cf3ee1e5fee.png) 296 | 297 | ![image](https://user-images.githubusercontent.com/22192242/138359447-a6c316f7-a8d6-414b-af7e-6157867cb5bc.png) 298 | 299 | 300 | 301 | 302 | -------------------------------------------------------------------------------- /Offline Installation.md: -------------------------------------------------------------------------------- 1 | # Offline Installation 2 | 3 | This page covers the details for downloading the Kasten K10 containers to a locally hosted registry within an existing MiniKube profile (Kubernetes cluster), ensuring you have the images available for an internet-restricted (air-gap) installation of Kasten. 4 | 5 | To be prepared for an offline installation, at a minimum you need to have a Docker client with the MiniKube images, run the Kasten offline tool to pull the images down to the Docker client, and finally download a copy of the Helm chart for Kasten K10. This work is based on [this blog post](https://veducate.co.uk/kasten-air-gap/). 6 | 7 | ## Pre Reqs 8 | 9 | - minikube - https://minikube.sigs.k8s.io/docs/start/ 10 | - helm - https://helm.sh/docs/intro/install/ 11 | - kubectl - https://kubernetes.io/docs/tasks/tools/ 12 | 13 | Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware 14 | 15 | I would also suggest that we need bash, which means the advice for Windows users is to install Git Bash, but for the best experience use WSL. 16 | 17 | For the above pre-reqs I use Arkade (https://github.com/alexellis/arkade): 18 | 19 | ``` 20 | arkade get minikube helm kubectl 21 | ``` 22 | 23 | ## Install MiniKube 24 | 25 | Run the following command: 26 | ```` 27 | minikube start --addons volumesnapshots,csi-hostpath-driver,registry --apiserver-port=6443 --container-runtime=containerd -p offline-demo --kubernetes-version=1.21.2 --nodes=2 28 | ```` 29 | ```` 30 | Note: Minikube will generate a port and request you use that port when enabling registry. That instruction is not related to this guide. 31 | ```` 32 | 33 | ## Configure Docker for connectivity to the MiniKube Internal Cluster Image Registry 34 | 35 | When enabled, the MiniKube registry addon exposes its port 5000 on the minikube’s virtual machine. 36 | 37 | Open a new terminal window to run the following commands. The following [instructions](https://minikube.sigs.k8s.io/docs/handbook/registry/#docker-on-macos) are for Docker on Mac OS X; for other operating systems please see [this guide](https://minikube.sigs.k8s.io/docs/handbook/registry/). 38 | 39 | In order to make docker accept pushing images to this registry, we have to redirect port 5000 on the docker virtual machine over to port 5000 on the minikube machine. We can (ab)use docker’s network configuration to instantiate a container on the docker’s host, and run socat there: 40 | ```` 41 | docker run --rm -it --network=host alpine ash -c "apk add socat && socat TCP-LISTEN:5000,reuseaddr,fork TCP:$(minikube ip):5000" 42 | ```` 43 | ![image](https://user-images.githubusercontent.com/22192242/138969744-e0c488c4-42a5-4df5-b0da-3af4a80a8358.png) 44 | 45 | Once socat is running it’s possible to push images to the minikube registry: 46 | ```` 47 | docker tag my/image localhost:5000/myimage 48 | docker push localhost:5000/myimage 49 | ```` 50 | After the image is pushed, refer to it by localhost:5000/{name} in kubectl specs.
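As an illustration, a minimal pod spec consuming the image pushed in the example above might look like the following — a sketch only, reusing the `localhost:5000/myimage` tag from the push commands; the pod name is arbitrary.

````
apiVersion: v1
kind: Pod
metadata:
  name: myimage-test
spec:
  containers:
    - name: myimage
      # pulled through the registry addon exposed inside the cluster
      image: localhost:5000/myimage
````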
51 | 52 | ![image](https://user-images.githubusercontent.com/22192242/138969829-06625c0b-496b-4558-accc-30c77ccddbdf.png) 53 | 54 | ## Download the Kasten K10 Container Images 55 | 56 | In your main terminal window, not the one you used for the docker redirect in the last step, run the following command: 57 | ```` 58 | docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock \ 59 | -v ${HOME}/.docker:/root/.docker \ 60 | gcr.io/kasten-images/k10offline:4.5.1 pull images --newrepo localhost:5000 61 | ```` 62 | This will download all the images to Docker Client, then push into your Repository which we setup inside of MiniKube. You can run the same command without the final argument ````--newrepo```` and this will just download the images to your docker client. 63 | 64 | ![image](https://user-images.githubusercontent.com/22192242/138971571-ed24951e-7ba3-4cd7-8fb0-6209b5e0af06.png) 65 | 66 | ## Download the Helm Chart for offline use 67 | 68 | Run the following command: 69 | ```` 70 | helm repo update && \ 71 | helm fetch kasten/k10 --version=4.5.1 72 | ```` 73 | ![image](https://user-images.githubusercontent.com/22192242/138971723-32912697-3eff-493f-b806-8f8fe6658a7a.png) 74 | 75 | ## Install Kasten K10 with a local Helm Chart and Container Images 76 | 77 | Create the namespace: 78 | ```` 79 | kubectl create namespace kasten-io 80 | ```` 81 | Then run the following Helm command: 82 | ```` 83 | helm install k10 k10-4.5.1.tgz --namespace kasten-io \ 84 | --set global.airgapped.repository=localhost:5000 85 | ```` 86 | 87 | ![image](https://user-images.githubusercontent.com/22192242/138971836-bc198c49-b16a-4c0c-999d-6275484bfbda.png) 88 | 89 | ![image](https://user-images.githubusercontent.com/22192242/138972045-1621e0ba-1153-4912-bb0f-13a9d32b4e50.png) 90 | 91 | ## Continue the setup following the main guide 92 | 93 | [Continue by following the main guide](/readme.md#mysql) 94 | -------------------------------------------------------------------------------- /PowerShell/1_downloads.ps1: -------------------------------------------------------------------------------- 1 | #vars 2 | ##Change these based on what you need to install. 3 | $workstation = 1 4 | $kubectl = 1 5 | $helm = 1 6 | $minikube = 1 7 | 8 | #Check to see if script is running with Admin privileges 9 | if (!([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { 10 | Write-Host "Please relaunch Powershell as admin" -BackgroundColor Red 11 | Write-Host "Press any key to continue..." 
12 | $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | Out-Null 13 | exit; 14 | } 15 | 16 | #Internet Explorer's first launch configuration, This allows invoke-webrequest from running without any issues 17 | Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\Internet Explorer\Main" -Name "DisableFirstRunCustomize" -Value 2 18 | 19 | #download Vmware workstation and install 20 | if ($workstation -eq 1) { 21 | 22 | 23 | write-host "Downloading VMware Workstation" -ForegroundColor Green 24 | Invoke-WebRequest -OutFile "c:\users\$env:UserName\Downloads\VMware-workstation.exe" -Uri 'https://www.vmware.com/go/getworkstation-win' -UseBasicParsing 25 | write-host "Installing VMware Workstation" -ForegroundColor Green 26 | start-process "c:\users\$env:UserName\Downloads\VMware-workstation.exe" -ArgumentList '/s /v"/qn EULAS_AGREED=1 AUTOSOFTWAREUPDATE=1"' 27 | 28 | $oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine) 29 | if ($oldPath.Split(';') -inotcontains 'C:\Program Files (x86)\VMware\VMware Workstation') { 30 | ` 31 | [Environment]::SetEnvironmentVariable('Path', $('{0};C:\Program Files (x86)\VMware\VMware Workstation' -f $oldPath), [EnvironmentVariableTarget]::Machine) ` 32 | 33 | } 34 | 35 | Start-Sleep 150 36 | write-host "VMware workstation will now launch, Please accept the trial license" -ForegroundColor Green 37 | start-process "C:\Program Files (x86)\VMware\VMware Workstation\vmware.exe" 38 | Start-Sleep 30 39 | } 40 | 41 | #download the kubectl V1.21.2 and add to the system environment variables 42 | if ($kubectl -eq 1) { 43 | new-item -path "C:\kubectl" -ItemType Directory -Force 44 | write-host "Downloading Kubectl" -ForegroundColor Green 45 | Invoke-WebRequest -OutFile "c:\users\$env:UserName\Downloads\kubectl.exe" -Uri "https://dl.k8s.io/release/v1.21.2/bin/windows/amd64/kubectl.exe" -UseBasicParsing 46 | Copy-Item "c:\users\$env:UserName\Downloads\kubectl.exe" -Destination "C:\kubectl" 47 | 48 | 49 | $oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine) 50 | if ($oldPath.Split(';') -inotcontains 'C:\kubectl') { 51 | ` 52 | [Environment]::SetEnvironmentVariable('Path', $('{0};C:\kubectl' -f $oldPath), [EnvironmentVariableTarget]::Machine) ` 53 | 54 | } 55 | Start-Sleep 2 56 | } 57 | #download Helm and add to environment variables 58 | if ($helm -eq 1) { 59 | new-item -path "C:\helm" -ItemType Directory -Force 60 | write-host "Downloading Helm" -ForegroundColor Green 61 | Invoke-WebRequest -OutFile "C:\helm\helmzip.zip" -Uri 'https://get.helm.sh/helm-v3.7.1-windows-amd64.zip' -UseBasicParsing 62 | Get-ChildItem 'C:\helm\' -Filter *.zip | Expand-Archive -DestinationPath 'C:\helm\' -Force 63 | Copy-Item "C:\helm\windows-amd64\helm.exe" -Destination "C:\helm" 64 | Remove-Item "C:\helm\helmzip.zip" 65 | Remove-Item "C:\helm\windows-amd64" -Recurse 66 | 67 | $oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine) 68 | if ($oldPath.Split(';') -inotcontains 'C:\helm') { 69 | ` 70 | [Environment]::SetEnvironmentVariable('Path', $('{0};C:\helm' -f $oldPath), [EnvironmentVariableTarget]::Machine) ` 71 | 72 | } 73 | } 74 | 75 | Start-Sleep 2 76 | #Download MiniKube, install and add minikube to environment variables 77 | if ($minikube -eq 1) { 78 | new-item -path "C:\minikube" -ItemType Directory -Force 79 | write-host "Downloading Minikube" -ForegroundColor Green 80 | Invoke-WebRequest -OutFile "c:\users\$env:UserName\Downloads\minikube.exe" -Uri 
'https://github.com/kubernetes/minikube/releases/download/v1.23.2/minikube-windows-amd64.exe' -UseBasicParsing 81 | Copy-Item "c:\users\$env:UserName\Downloads\minikube.exe" -Destination "C:\minikube" 82 | 83 | $oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine) 84 | if ($oldPath.Split(';') -inotcontains 'C:\minikube') { 85 | ` 86 | [Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath), [EnvironmentVariableTarget]::Machine) ` 87 | 88 | } 89 | } 90 | 91 | #Refresh path variable to allow Helm/Kubectl to work. 92 | $env:Path = [System.Environment]::GetEnvironmentVariable("Path","Machine") + ";" + [System.Environment]::GetEnvironmentVariable("Path","User") 93 | 94 | -------------------------------------------------------------------------------- /PowerShell/2_create_cluster.ps1: -------------------------------------------------------------------------------- 1 | #Check to see if script is running with Admin privileges 2 | if (!([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { 3 | Write-Host "Please relaunch Powershell as admin" -BackgroundColor Red 4 | Write-Host "Press any key to continue..." 5 | $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | Out-Null 6 | exit; 7 | } 8 | 9 | #build cluster in minikube 10 | write-host "Building Minikube Cluster" -ForegroundColor Green 11 | minikube start ` 12 | --memory 8192 ` 13 | --cpus 4 ` 14 | --disk-size 40GB ` 15 | --driver=vmware ` 16 | --addons volumesnapshots,csi-hostpath-driver ` 17 | --apiserver-port=6443 ` 18 | --container-runtime=containerd ` 19 | --kubernetes-version=1.21.8 -------------------------------------------------------------------------------- /PowerShell/3_install_k10.ps1: -------------------------------------------------------------------------------- 1 | #Check to see if script is running with Admin privileges 2 | if (!([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { 3 | Write-Host "Please relaunch Powershell as admin" -BackgroundColor Red 4 | Write-Host "Press any key to continue..." 
5 | $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | Out-Null 6 | exit; 7 | } 8 | 9 | #Add helm repo 10 | write-host "Add Kasten Helm Chart" -ForegroundColor Green 11 | helm repo add kasten https://charts.kasten.io/ 12 | 13 | #install kasten 14 | write-host "Installing Kasten" -ForegroundColor Green 15 | kubectl create namespace kasten-io 16 | helm install k10 kasten/k10 ` 17 | --namespace=kasten-io ` 18 | --set auth.tokenAuth.enabled=true ` 19 | --set injectKanisterSidecar.enabled=true ` 20 | --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true ` 21 | --set eula.accept=true ` 22 | --set eula.company="Company" ` 23 | --set eula.email="a@a.com" 24 | 25 | #wait for pods to come up 26 | ##need to do better than just a sleep 27 | Write-Host "Waiting for pods to be ready, This could take up to 5 minutes" -ForegroundColor Green 28 | #Start-Sleep 300 29 | $ready = kubectl get pod -n kasten-io --selector=component=catalog -o=jsonpath='{.items[*].status.phase}' 30 | do { 31 | Write-Host "Waiting for pods to be ready" -ForegroundColor Green 32 | start-sleep 20 33 | $ready = kubectl get pod -n kasten-io --selector=component=catalog -o=jsonpath='{.items[*].status.phase}' 34 | } while ($ready -notlike "Running") 35 | Write-Host "Pods are ready, moving on" -ForegroundColor Green 36 | 37 | #Annotate the CSI Hostpath VolumeSnapshotClass for use with K10 38 | write-host "Setting default storage class" -ForegroundColor Green 39 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass k10.kasten.io/is-snapshot-class=true 40 | kubectl patch storageclass csi-hostpath-sc -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}' 41 | kubectl patch storageclass standard -p '{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"false\"}}}' 42 | 43 | #Get K10 secret and extract login token 44 | $secret = kubectl get secrets -n kasten-io | select-string -Pattern "k10-k10-token-\w*" | ForEach-Object { $_.Matches } | ForEach-Object { $_.Value } 45 | $k10token = kubectl -n kasten-io -ojson get secret $secret | convertfrom-json | Select-Object data 46 | 47 | #port forward Kasten Dashboard in a seperate powershell window to keep it open 48 | Start-Job -ScriptBlock { kubectl --namespace kasten-io port-forward service/gateway 8080:8000 } 49 | 50 | Write-Host "Please log into the Kasten Dashboard http://127.0.0.1:8080/k10/#/ using the token below `n" -ForegroundColor blue 51 | Write-Host '#########################################################################' -ForegroundColor Green 52 | Write-Host ([Text.Encoding]::Utf8.GetString([Convert]::FromBase64String($k10token.data.token))) -ForegroundColor Green 53 | Write-Host '#########################################################################' -ForegroundColor Green -------------------------------------------------------------------------------- /PowerShell/4_my_sql.ps1: -------------------------------------------------------------------------------- 1 | #Check to see if script is running with Admin privileges 2 | if (!([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)) { 3 | Write-Host "Please relaunch Powershell as admin" -BackgroundColor Red 4 | Write-Host "Press any key to continue..." 
5 | $Host.UI.RawUI.ReadKey("NoEcho,IncludeKeyDown") | Out-Null 6 | exit; 7 | } 8 | 9 | #Create Namespace for app 10 | write-host "creating namespace" -ForegroundColor Green 11 | kubectl create namespace mysql 12 | 13 | #Add Helm Chart for Bitnami 14 | write-host "Adding Helm Repo" -ForegroundColor Green 15 | helm repo add bitnami https://charts.bitnami.com/bitnami 16 | 17 | #install mysql 18 | #create test my_sql application call "my-production-app" 19 | write-host "Installing mysql" -ForegroundColor Green 20 | 21 | helm install mysql bitnami/mysql --namespace=mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true 22 | 23 | #wait for pods to be ready 24 | Write-Host "Waiting for pods to be ready, This could take up to 2 minutes" -ForegroundColor Green 25 | start-sleep 20 26 | $ready = kubectl get pods -n mysql mysql-0 -o=jsonpath='{.status.conditions[1].status}' 27 | do { 28 | Write-Host "Waiting for pods to be ready" -ForegroundColor Green 29 | start-sleep 20 30 | $ready = kubectl get pods -n mysql mysql-0 -o=jsonpath='{.status.conditions[1].status}' 31 | } while ($ready -notlike "True") 32 | Write-Host "Pods are ready, moving on" -ForegroundColor Green 33 | 34 | #Get password and decode it 35 | $password = kubectl get secret --namespace mysql mysql -o jsonpath="{.data.mysql-root-password}" 36 | $MYSQL_ROOT_PASSWORD = ([Text.Encoding]::Utf8.GetString([Convert]::FromBase64String($password))) 37 | 38 | #Exec into container and create a DB called K10Demo 39 | kubectl exec -it --namespace=mysql $(kubectl --namespace=mysql get pods -o jsonpath='{.items[0].metadata.name}') ` 40 | -- mysql -u root --password="$MYSQL_ROOT_PASSWORD" -e "CREATE DATABASE myImportantData" 41 | 42 | kubectl exec -it --namespace=mysql $(kubectl --namespace=mysql get pods -o jsonpath='{.items[0].metadata.name}') ` 43 | -- mysql -u root --password="$MYSQL_ROOT_PASSWORD" myImportantData -e ` 44 | "CREATE TABLE Accounts(name text, balance integer); 45 | insert into Accounts values('albert', 112); 46 | insert into Accounts values('alfred', 358); 47 | insert into Accounts values('beatrice', 1321); 48 | insert into Accounts values('bartholomew', 34); 49 | insert into Accounts values('edward', 5589); 50 | insert into Accounts values('edwin', 144); 51 | insert into Accounts values('edwina', 233); 52 | insert into Accounts values('rastapopoulos', 377); 53 | select * from Accounts;" -------------------------------------------------------------------------------- /PowerShell/readme.md: -------------------------------------------------------------------------------- 1 | Tested on minikube v1.23.2 on Microsoft Windows 10 Pro 10.0.19043 Build 19043 2 | VMware® Workstation 16.2.1 build-18811642 3 | Kubernetes version 1.21.2 4 | 5 | Steps 6 | Run all scripts as admin. 7 | Modify the variables in 1.\downloads.ps1 if you dont need certain things installed. 8 | There are 4 scripts, Run scripts in order. -------------------------------------------------------------------------------- /Zero to Hero - Kubernetes and Kasten K10.md: -------------------------------------------------------------------------------- 1 | ## Zero to Hero: Kubernetes and Kasten K10 2 |

3 | 4 |

5 | 6 | This is for a session delivered at a global Veeam webinar to show how we can get Kasten K10 up and running on your local (x86 architecture) system using Minikube. 7 | 8 | [Link to Webinar](https://go.veeam.com/webinar-deploy-kubernetes-tips) 9 | 10 | In this session we are going to deploy a minikube cluster to our local workstation, deploy some data services and then Kasten K10 to the same cluster. The performance of this will very much depend on your system. But the highlight here is that we can run K10 across multiple Kubernetes environments and with Minikube we do not need to pay for a cloud providers managed Kubernetes cluster to get hands-on. 11 | 12 | ### Install Minikube 13 | 14 | - [Installing Minikube](https://minikube.sigs.k8s.io/docs/start/) 15 | 16 | The minikube installation should also install kubectl or the Kubernetes CLI, you will need this, again available through most package managers cross platform (Chocolatey, apt etc.) 17 | 18 | We will also need helm to deploy some of our data services. 19 | 20 | - [kubectl](https://kubernetes.io/docs/tasks/tools/) 21 | - [helm](https://helm.sh/docs/intro/install/) 22 | 23 | ### Deploy Minikube cluster 24 | 25 | Once we have minikube available in our environment 26 | 27 | `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p webinar-demo --kubernetes-version=1.26.0` 28 | 29 | With the above we will be using Docker as our virtual machine manager. If you have not already you can grab Docker cross platform. 30 | [Get Docker](https://docs.docker.com/get-docker/) 31 | 32 | ### Deploy Kasten K10 33 | 34 | Add the Kasten Helm repository 35 | 36 | `helm repo add kasten https://charts.kasten.io/` 37 | 38 | Deploy K10, note that this will take around 5 mins 39 | 40 | `helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace` 41 | 42 | You can watch the pods come up by running the following command. 43 | 44 | `kubectl get pods -n kasten-io -w` 45 | 46 | port forward to access the K10 dashboard, open a new terminal to run the below command 47 | 48 | `kubectl --namespace kasten-io port-forward service/gateway 8080:8000` 49 | 50 | The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` 51 | 52 | To authenticate with the dashboard we now need the token which we can get with the following commands. Please bare in mind that this is not best practices and if you are running in a production environment then the K10 documentation should be followed accordingly. This is also applicable with Kubernetes clusters newer than v1.24 53 | 54 | ``` 55 | kubectl --namespace kasten-io create token k10-k10 --duration=24h 56 | ``` 57 | For clusters older than v1.24 of Kubernetes then you can use this command to retrieve a token to authenticate. 58 | 59 | ``` 60 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 61 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 62 | 63 | echo "Token value: " 64 | echo $TOKEN 65 | ``` 66 | ## Storage Changes 67 | 68 | Now that K10 is deployed and hopefully healthy we can now make some storage changes. 
69 | 70 | Annotate the CSI Hostpath VolumeSnapshotClass for use with K10 71 | 72 | ``` 73 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 74 | k10.kasten.io/is-snapshot-class=true 75 | ``` 76 | We also need to change our default storageclass with the following: 77 | 78 | ``` 79 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 80 | 81 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 82 | ``` 83 | Patching the storage as above before installing Kasten K10 will result in the Prometheus pod not starting. 84 | 85 | ### Deploy Data Services (Pac-Man) 86 | 87 | Make sure you are in the directory where this YAML config file is and run it against your cluster. 88 | 89 | `kubectl create -f pacman-stateful-demo.yaml` 90 | 91 | To expose and access this, run the following port-forward in a new terminal: 92 | 93 | `kubectl port-forward svc/pacman 9191:80 -n pacman` 94 | 95 | Open a browser and navigate to [http://localhost:9191/](http://localhost:9191/) 96 | 97 | ## Install Minio 98 | ``` 99 | helm repo add minio https://helm.min.io/ --insecure-skip-tls-verify 100 | kubectl create ns minio 101 | 102 | # Deploy minio with a pre-created "k10-bucket" bucket, and "minioaccess"/"miniosecret" creds 103 | helm install minio minio/minio --namespace=minio --version 8.0.10 \ 104 | --set persistence.size=5Gi \ 105 | --set defaultBucket.enabled=true \ 106 | --set defaultBucket.name=k10-bucket \ 107 | --set accessKey=minioaccess \ 108 | --set secretKey=miniosecret 109 | ``` 110 | Open a new terminal window and set up a port forward to access the Minio management page in your browser: 111 | ```` 112 | kubectl --namespace minio port-forward svc/minio 9090:9000 113 | ```` 114 | Open your browser to http://127.0.0.1:9090 and log in with the access key and secret key set in the Helm install above (minioaccess / miniosecret). 115 | 116 | ## Configure S3 storage in Kasten 117 | 1. Click settings in the top right-hand corner. Select locations and Create new location. 118 | 2. Provide a name, select "S3 Compatible", and enter the access key and secret key used in the Helm install above. 119 | 3. Set the endpoint as "http://minio.minio.svc.cluster.local:9000" (this is the internal k8s DNS name) and select to skip SSL verification. 120 | 4. Provide the bucket name you configured and click "Save Profile". 121 | 122 | ## Configure the Kasten Policy to export data to the S3 Storage 123 | 1. Edit your existing policy. 124 | 2. Enable the setting "Enable Backups via Snapshot Exports". 125 | 3. Select the S3 location profile you have just created, and set the schedule as necessary. Click the "Edit Policy" button. 126 | 4. Manually run the policy and observe the run on the homescreen. After the backup run, you will see a new task called "Export". 127 | 128 | Manually browse the bucket from the Minio browser console; you will see it contains a folder called "k10" and, within that, the protection data.
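If you want to confirm the same from the CLI, the location profile and the policy are stored as K10 custom resources — a quick sketch, assuming K10's default CRD names and the `kasten-io` namespace:

```
# The S3 location profile created in the settings page
kubectl get profiles.config.kio.kasten.io -n kasten-io

# The policy, including the export settings enabled above
kubectl get policies.config.kio.kasten.io -n kasten-io
```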
129 | 130 | ![image](https://user-images.githubusercontent.com/22192242/138359395-b4175851-9da8-46d7-86b7-7cf3ee1e5fee.png) 131 | 132 | ![image](https://user-images.githubusercontent.com/22192242/138359447-a6c316f7-a8d6-414b-af7e-6157867cb5bc.png) 133 | 134 | ### Dive into Kasten K10 135 | 136 | - Walkthrough K10 Dashboard 137 | - Add S3 location 138 | - Create a Policy protecting Pac-Man 139 | - Clock up a high score (Mission Critical Data) 140 | - Delete Pac-Man Namespace 141 | - Restore everything back to original location using K10 142 | - Clone and Transformation - Restore to other StorageClass available in cluster. 143 | 144 | ## Clear up 145 | 146 | `minikube delete -p webinar-demo` 147 | -------------------------------------------------------------------------------- /baseline.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "$(tput setaf 4)Create new cluster" 3 | minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2 4 | 5 | echo "$(tput setaf 4)update helm repos if already present" 6 | helm repo update 7 | 8 | echo "$(tput setaf 4)Deploy Kasten K10" 9 | 10 | helm repo add kasten https://charts.kasten.io/ 11 | 12 | kubectl create namespace kasten-io 13 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 14 | 15 | echo "$(tput setaf 4)Annotate Volumesnapshotclass" 16 | 17 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 18 | k10.kasten.io/is-snapshot-class=true 19 | 20 | echo "$(tput setaf 4)Change default storageclass" 21 | 22 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 23 | 24 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 25 | 26 | 27 | echo "$(tput setaf 4)Display K10 Token Authentication" 28 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 29 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 30 | 31 | echo "$(tput setaf 3)Token value: " 32 | echo "$(tput setaf 3)$TOKEN" 33 | 34 | echo "$(tput setaf 4)to access your Kasten K10 dashboard open a new terminal and run" 35 | echo "$(tput setaf 3)kubectl --namespace kasten-io port-forward service/gateway 8080:8000" 36 | echo "$(tput setaf 4)Environment Complete" 37 | -------------------------------------------------------------------------------- /data/AddDatatoMongo.sh: -------------------------------------------------------------------------------- 1 | echo "Add Data to MongoDB" 2 | 3 | echo "Add the following lines into the context of the container to add data" 4 | echo "mongo admin --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --quiet --eval "db.restaurants.insert({'name' : 'Roys', 'cuisine' : 'Hawaiian', 'id' : '8675309'})"" 5 | echo "mongo admin --authenticationDatabase admin -u root -p $MONGODB_ROOT_PASSWORD --quiet --eval "db.restaurants.find()"" 6 | 7 | kubectl exec -ti my-release-mongodb-0 -n mongo-test -- bash 8 | 9 | exit 10 | -------------------------------------------------------------------------------- /data/AddDatatoPostgreSQL.sh: 
-------------------------------------------------------------------------------- 1 | echo "Add Data to PostgreSQL" 2 | kubectl exec -ti my-release-postgresql-0 -n postgres-test -- bash 3 | PGPASSWORD=${POSTGRES_PASSWORD} psql -U $POSTGRES_USER 4 | CREATE DATABASE test; 5 | \l 6 | \c test 7 | CREATE TABLE COMPANY( 8 | ID INT PRIMARY KEY NOT NULL, 9 | NAME TEXT NOT NULL, 10 | AGE INT NOT NULL, 11 | ADDRESS CHAR(50), 12 | SALARY REAL, 13 | CREATED_AT TIMESTAMP); 14 | INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,CREATED_AT) VALUES (10, 'Paul', 32, 'California', 20000.00, now()); 15 | INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,CREATED_AT) VALUES (20, 'Omkar', 32, 'California', 20000.00, now()); 16 | INSERT INTO COMPANY (ID,NAME,AGE,ADDRESS,SALARY,CREATED_AT) VALUES (30, 'Prasad', 32, 'California', 20000.00, now()); 17 | select * from company; 18 | \q 19 | exit 20 | -------------------------------------------------------------------------------- /deploy_apps.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Helm repository configuration 4 | add_helm_repos() { 5 | echo "Adding Helm repositories..." 6 | helm repo add bitnami https://charts.bitnami.com/bitnami 7 | helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/ 8 | helm repo update 9 | echo "Helm repositories added and updated." 10 | } 11 | 12 | # Deploy PostgreSQL 13 | deploy_postgres() { 14 | echo "Deploying PostgreSQL..." 15 | kubectl create namespace postgres || true 16 | 17 | # Deploy PostgreSQL 18 | helm upgrade --install my-postgresql bitnami/postgresql \ 19 | --namespace postgres \ 20 | --set auth.postgresPassword=myPassword \ 21 | --set auth.username=myUser \ 22 | --set auth.database=myDatabase 23 | 24 | echo "PostgreSQL deployment complete." 25 | 26 | # Extract and print PostgreSQL credentials 27 | postgres_password=$(kubectl get secret --namespace postgres my-postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode) 28 | postgres_user=$(kubectl get secret --namespace postgres my-postgresql -o jsonpath="{.data.postgresql-username}" | base64 --decode) 29 | 30 | echo "PostgreSQL Credentials:" 31 | echo " Username: $postgres_user" 32 | echo " Password: $postgres_password" 33 | echo " Host: my-postgresql.postgres.svc.cluster.local" 34 | echo " Database: myDatabase" 35 | 36 | echo "To connect to PostgreSQL using kubectl:" 37 | echo " kubectl run -n postgres -it --rm --image bitnami/postgresql:latest --env=\"PGPASSWORD=$postgres_password\" postgres-client -- psql --host my-postgresql.postgres.svc.cluster.local -U $postgres_user -d myDatabase" 38 | } 39 | 40 | # Deploy WordPress 41 | deploy_wordpress() { 42 | echo "Deploying WordPress..." 43 | kubectl create namespace wordpress || true 44 | 45 | # Deploy WordPress (connecting to the PostgreSQL database) 46 | helm upgrade --install my-wordpress bitnami/wordpress \ 47 | --namespace wordpress \ 48 | --set mariadb.enabled=false \ 49 | --set externalDatabase.host=my-postgresql.postgres.svc.cluster.local \ 50 | --set externalDatabase.user=myUser \ 51 | --set externalDatabase.password=myPassword \ 52 | --set externalDatabase.database=myDatabase 53 | 54 | echo "WordPress deployment complete." 55 | } 56 | 57 | # Deploy Ghost 58 | deploy_ghost() { 59 | echo "Deploying Ghost..." 
60 | kubectl create namespace ghost || true 61 | 62 | helm upgrade --install my-ghost bitnami/ghost \ 63 | --namespace ghost \ 64 | --set mariadb.enabled=false \ 65 | --set externalDatabase.host=my-postgresql.postgres.svc.cluster.local \ 66 | --set externalDatabase.user=myUser \ 67 | --set externalDatabase.password=myPassword \ 68 | --set externalDatabase.database=myDatabase 69 | 70 | echo "Ghost deployment complete." 71 | } 72 | 73 | # Deploy JupyterHub 74 | deploy_jupyterhub() { 75 | echo "Deploying JupyterHub..." 76 | kubectl create namespace jupyterhub || true 77 | 78 | helm upgrade --install jhub jupyterhub/jupyterhub \ 79 | --namespace jupyterhub \ 80 | --version=1.2.0 \ 81 | --values https://raw.githubusercontent.com/jupyterhub/helm-chart/main/jupyterhub/values.yaml 82 | 83 | echo "JupyterHub deployment complete." 84 | } 85 | 86 | # Usage function 87 | usage() { 88 | echo "Usage: $0 [-p] [-w] [-g] [-j]" 89 | echo " -p, --postgres Deploy PostgreSQL" 90 | echo " -w, --wordpress Deploy WordPress" 91 | echo " -g, --ghost Deploy Ghost" 92 | echo " -j, --jupyterhub Deploy JupyterHub" 93 | echo "If no options are provided, all apps will be deployed (except PostgreSQL)." 94 | exit 1 95 | } 96 | 97 | # Main script 98 | deploy_postgres_flag=false 99 | deploy_wordpress_flag=false 100 | deploy_ghost_flag=false 101 | deploy_jupyterhub_flag=false 102 | 103 | # Parse command line arguments 104 | if [ $# -eq 0 ]; then 105 | # No arguments, deploy all apps except PostgreSQL 106 | deploy_wordpress_flag=true 107 | deploy_ghost_flag=true 108 | deploy_jupyterhub_flag=true 109 | else 110 | while [[ "$#" -gt 0 ]]; do 111 | case $1 in 112 | -p|--postgres) deploy_postgres_flag=true ;; 113 | -w|--wordpress) deploy_wordpress_flag=true ;; 114 | -g|--ghost) deploy_ghost_flag=true ;; 115 | -j|--jupyterhub) deploy_jupyterhub_flag=true ;; 116 | *) usage ;; 117 | esac 118 | shift 119 | done 120 | fi 121 | 122 | # Add and update Helm repos 123 | add_helm_repos 124 | 125 | # Deploy selected apps 126 | if [ "$deploy_postgres_flag" = true ]; then 127 | deploy_postgres 128 | fi 129 | 130 | if [ "$deploy_wordpress_flag" = true ]; then 131 | deploy_wordpress 132 | fi 133 | 134 | if [ "$deploy_ghost_flag" = true ]; then 135 | deploy_ghost 136 | fi 137 | 138 | if [ "$deploy_jupyterhub_flag" = true ]; then 139 | deploy_jupyterhub 140 | fi 141 | 142 | echo "Deployment process complete." 
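# Example invocations (a sketch; assumes the script is saved as deploy_apps.sh and made executable with chmod +x):
#   ./deploy_apps.sh          deploys WordPress, Ghost and JupyterHub (PostgreSQL is skipped unless requested)
#   ./deploy_apps.sh -p -w    deploys PostgreSQL and WordPress only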
143 | -------------------------------------------------------------------------------- /k10primer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: k10-primer 5 | namespace: default 6 | 7 | --- 8 | kind: ClusterRoleBinding 9 | apiVersion: rbac.authorization.k8s.io/v1 10 | metadata: 11 | name: k10-primer 12 | subjects: 13 | - kind: ServiceAccount 14 | name: k10-primer 15 | namespace: default 16 | roleRef: 17 | kind: ClusterRole 18 | name: cluster-admin 19 | apiGroup: rbac.authorization.k8s.io 20 | --- 21 | apiVersion: batch/v1 22 | kind: Job 23 | metadata: 24 | name: k10primer 25 | namespace: default 26 | spec: 27 | template: 28 | spec: 29 | containers: 30 | - image: gcr.io/kasten-images/k10tools:4.5.9 31 | imagePullPolicy: IfNotPresent 32 | name: k10primer 33 | command: [ "/bin/bash", "-c", "--" ] 34 | args: [ "./k10tools primer ; sleep 2" ] 35 | env: 36 | - name: POD_NAMESPACE 37 | valueFrom: 38 | fieldRef: 39 | fieldPath: metadata.namespace 40 | - name: KANISTER_TOOLS 41 | value: ghcr.io/kanisterio/kanister-tools:0.73.0 42 | restartPolicy: Never 43 | serviceAccount: k10-primer 44 | backoffLimit: 4 45 | -------------------------------------------------------------------------------- /media/VUG.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MichaelCade/project_pace/cf5efc27063f8dd44c5c6ae2cd741179325ec7df/media/VUG.png -------------------------------------------------------------------------------- /media/minicube_windows_vmware.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MichaelCade/project_pace/cf5efc27063f8dd44c5c6ae2cd741179325ec7df/media/minicube_windows_vmware.jpg -------------------------------------------------------------------------------- /multicluster/diagram.drawio.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 | 27 | 28 | 29 | 30 | 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | -------------------------------------------------------------------------------- /multicluster/diagram.svg: -------------------------------------------------------------------------------- 1 |
[diagram.svg text content: "Project Pace Multi Cluster Deployment" — clusters mc-demo1 and mc-demo2, each running Kasten K10, with PostgreSQL, MongoDB and MySQL data services]
-------------------------------------------------------------------------------- /multicluster/metallbmcdemo.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - 192.168.169.240-192.168.169.250 13 | -------------------------------------------------------------------------------- /multicluster/metallbmcdemo2.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - 192.168.169.202-192.168.169.205 13 | -------------------------------------------------------------------------------- /multicluster/multicluster_demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "$(tput setaf 4)Create new cluster 1" 3 | minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo1 --kubernetes-version=1.21.2 4 | 5 | echo "$(tput setaf 4)update helm repos if already present" 6 | helm repo update 7 | 8 | echo "$(tput setaf 4)Deploy Kasten K10" 9 | 10 | helm repo add kasten https://charts.kasten.io/ 11 | 12 | kubectl create namespace kasten-io 13 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 14 | 15 | echo "$(tput setaf 4)Annotate Volumesnapshotclass" 16 | 17 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 18 | k10.kasten.io/is-snapshot-class=true 19 | 20 | echo "$(tput setaf 4)Change default storageclass" 21 | 22 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 23 | 24 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 25 | 26 | echo "$(tput setaf 4)Deploy MySQL" 27 | 28 | APP_NAME=my-production-app 29 | kubectl create ns ${APP_NAME} 30 | helm repo add bitnami https://charts.bitnami.com/bitnami 31 | helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} 32 | kubectl get pods -n ${APP_NAME} 33 | 34 | echo "$(tput setaf 4)MySQL root password" 35 | 36 | MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) 37 | MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local 38 | MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" 39 | echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} 40 | 41 | echo "$(tput setaf 4)Deploy PostgreSQL" 42 | kubectl create ns postgres-test 43 | helm install my-release --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace postgres-test bitnami/postgresql 44 | kubectl get pods -n postgres-test 45 | 46 | echo "$(tput setaf 4)Deploy MongoDB" 47 | kubectl create ns mongo-test 48 | helm install my-release bitnami/mongodb --set architecture="replicaset",primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace mongo-test 49 
| 50 | echo "$(tput setaf 4)Data Services deployment started" 51 | kubectl get pods -n my-production-app 52 | kubectl get pods -n postgres-test 53 | kubectl get pods -n mongo-test 54 | 55 | echo "$(tput setaf 4)Waiting 5 mins for pod to come up" 56 | sleep 5m 57 | kubectl get pods -n my-production-app 58 | kubectl get pods -n postgres-test 59 | kubectl get pods -n mongo-test 60 | 61 | echo "$(tput setaf 4)Display K10 Token Authentication" 62 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 63 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 64 | 65 | echo "$(tput setaf 3)Token value: " 66 | echo "$(tput setaf 3)$TOKEN" 67 | 68 | echo "$(tput setaf 4)to access your Kasten K10 dashboard open a new terminal and run" 69 | echo "$(tput setaf 3)kubectl --namespace kasten-io port-forward service/gateway 8080:8000" 70 | 71 | echo "$(tput setaf 4)Create new cluster 2" 72 | minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo2 --kubernetes-version=1.21.2 73 | 74 | echo "$(tput setaf 4)update helm repos if already present" 75 | helm repo update 76 | 77 | echo "$(tput setaf 4)Deploy Kasten K10" 78 | 79 | helm repo add kasten https://charts.kasten.io/ 80 | 81 | kubectl create namespace kasten-io 82 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 83 | 84 | echo "$(tput setaf 4)Annotate Volumesnapshotclass" 85 | 86 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 87 | k10.kasten.io/is-snapshot-class=true 88 | 89 | echo "$(tput setaf 4)Change default storageclass" 90 | 91 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 92 | 93 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 94 | 95 | echo "$(tput setaf 4)Display K10 Token Authentication" 96 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 97 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 98 | 99 | echo "$(tput setaf 3)Token value: " 100 | echo "$(tput setaf 3)$TOKEN" 101 | 102 | echo "$(tput setaf 4)to access your Kasten K10 dashboard open a new terminal and run" 103 | echo "$(tput setaf 3)kubectl --namespace kasten-io port-forward service/gateway 8080:8000" 104 | 105 | echo "Environment Complete" 106 | -------------------------------------------------------------------------------- /multicluster/multicluster_metallb_demo .sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "Create cluster 1 " 3 | minikube start --addons volumesnapshots,csi-hostpath-driver,metallb --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2 4 | 5 | echo "update helm repos if already present" 6 | helm repo update 7 | 8 | echo "Metallb config map with local IP address" 9 | kubectl delete configmap config -n metallb-system 10 | kubectl create -f metallbmcdemo.yaml 11 | 12 | echo "Deploy Kasten K10" 13 | 14 | helm repo add kasten https://charts.kasten.io/ 15 | 16 | kubectl create namespace kasten-io 17 | helm install k10 kasten/k10 --namespace=kasten-io 
--set auth.tokenAuth.enabled=true --set externalGateway.create=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 18 | 19 | echo "Annotate Volumesnapshotclass" 20 | 21 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 22 | k10.kasten.io/is-snapshot-class=true 23 | 24 | echo "Change default storageclass" 25 | 26 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 27 | 28 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 29 | 30 | echo "Deploy MySQL" 31 | 32 | APP_NAME=my-production-app 33 | kubectl create ns ${APP_NAME} 34 | helm repo add bitnami https://charts.bitnami.com/bitnami 35 | helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} 36 | kubectl get pods -n ${APP_NAME} 37 | 38 | echo "Waiting 5 mins for pod to come up" 39 | sleep 5m 40 | 41 | kubectl get pods -n ${APP_NAME} 42 | 43 | echo "MySQL root password" 44 | 45 | MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) 46 | MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local 47 | MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" 48 | echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} 49 | 50 | echo "Install Minio" 51 | 52 | helm repo add minio https://helm.min.io/ 53 | helm install --namespace minio-operator --create-namespace --generate-name minio/minio-operator 54 | 55 | echo "Create cluster 2" 56 | minikube start --addons volumesnapshots,csi-hostpath-driver,metallb --apiserver-port=6443 --container-runtime=containerd -p mc-demo2 --kubernetes-version=1.21.2 57 | 58 | echo "Metallb config map with local IP address" 59 | kubectl delete configmap config -n metallb-system 60 | kubectl create -f metallbmcdemo2.yaml 61 | 62 | 63 | echo "Deploy Kasten K10" 64 | 65 | helm repo add kasten https://charts.kasten.io/ 66 | 67 | kubectl create namespace kasten-io 68 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set externalGateway.create=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 69 | 70 | echo "Annotate Volumesnapshotclass" 71 | 72 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 73 | k10.kasten.io/is-snapshot-class=true 74 | 75 | echo "Change default storageclass" 76 | 77 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 78 | 79 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 80 | 81 | 82 | 83 | 84 | echo "Environment Complete" 85 | -------------------------------------------------------------------------------- /multicluster/readme.md: -------------------------------------------------------------------------------- 1 | ## Project Pace - Multi Cluster Deployment (WORK IN PROGRESS) 2 | 3 | As you can tell by the title this folder consists of getting multiple Kubernetes clusters up and running locally on your system using the multicluster_demo.sh this will deploy 2 clusters locally named mc-demo1 and mc-demo2 with data services also being deployed on mc-demo1. 
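Once the script finishes, a quick way to check and switch between the two clusters is via their kubectl contexts. A minimal sketch, assuming the contexts pick up the minikube profile names used above (mc-demo1 and mc-demo2):

```
# List the contexts minikube created, one per profile
kubectl config get-contexts

# Point kubectl at the first cluster, then the second
kubectl config use-context mc-demo1
kubectl config use-context mc-demo2

# Or let minikube switch the active profile (and context) for you
minikube profile mc-demo2
```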
4 | 5 | Kasten K10 will be deployed on both clusters. 6 | 7 | My next steps here are to include metallb to introduce ingress to the clusters so that we can also demonstrate K10 MultiCluster. I am not sure how this will work across OS platforms especially when trying to use docker as I think this would not work with MacOS and Windows but would work for Linux. 8 | 9 | ![Overview](diagram.svg) 10 | -------------------------------------------------------------------------------- /pacman-stateful-demo.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: pacman 5 | "labels": { 6 | "name": "pacman" 7 | } 8 | --- 9 | apiVersion: policy/v1beta1 10 | kind: PodSecurityPolicy 11 | metadata: 12 | name: pacman 13 | namespace: pacman 14 | spec: 15 | privileged: true 16 | seLinux: 17 | rule: RunAsAny 18 | supplementalGroups: 19 | rule: RunAsAny 20 | runAsUser: 21 | rule: RunAsAny 22 | fsGroup: 23 | rule: RunAsAny 24 | volumes: 25 | - '*' 26 | --- 27 | kind: ClusterRole 28 | apiVersion: rbac.authorization.k8s.io/v1 29 | metadata: 30 | name: pacman-clusterrole 31 | rules: 32 | - apiGroups: 33 | - policy 34 | resources: 35 | - podsecuritypolicies 36 | verbs: 37 | - use 38 | resourceNames: 39 | - pacman 40 | - apiGroups: [""] 41 | resources: ["pods", "nodes"] 42 | verbs: ["get", "watch", "list"] 43 | --- 44 | apiVersion: rbac.authorization.k8s.io/v1 45 | kind: RoleBinding 46 | metadata: 47 | name: pacman-clusterrole 48 | namespace: pacman 49 | roleRef: 50 | apiGroup: rbac.authorization.k8s.io 51 | kind: ClusterRole 52 | name: pacman-clusterrole 53 | subjects: 54 | - apiGroup: rbac.authorization.k8s.io 55 | kind: Group 56 | name: system:serviceaccounts 57 | - kind: ServiceAccount 58 | name: default 59 | namespace: pacman 60 | --- 61 | apiVersion: rbac.authorization.k8s.io/v1 62 | kind: ClusterRoleBinding 63 | metadata: 64 | name: pacman-clusterrole 65 | namespace: pacman 66 | roleRef: 67 | apiGroup: rbac.authorization.k8s.io 68 | kind: ClusterRole 69 | name: pacman-clusterrole 70 | subjects: 71 | - apiGroup: rbac.authorization.k8s.io 72 | kind: Group 73 | name: system:serviceaccounts 74 | - kind: ServiceAccount 75 | name: default 76 | namespace: pacman 77 | --- 78 | apiVersion: v1 79 | kind: Secret 80 | metadata: 81 | name: mongodb-users-secret 82 | namespace: pacman 83 | type: Opaque 84 | data: 85 | database-admin-name: Y2x5ZGU= 86 | database-admin-password: Y2x5ZGU= 87 | database-name: cGFjbWFu 88 | database-password: cGlua3k= 89 | database-user: Ymxpbmt5 90 | --- 91 | kind: PersistentVolumeClaim 92 | apiVersion: v1 93 | metadata: 94 | name: mongo-storage 95 | namespace: pacman 96 | spec: 97 | accessModes: 98 | - ReadWriteOnce 99 | resources: 100 | requests: 101 | storage: 1Gi 102 | --- 103 | apiVersion: apps/v1 104 | kind: StatefulSet 105 | metadata: 106 | labels: 107 | name: mongo 108 | name: mongo 109 | namespace: pacman 110 | annotations: 111 | source: "https://github.com/saintdle/pacman-tanzu" 112 | spec: 113 | replicas: 1 114 | serviceName: mongo 115 | selector: 116 | matchLabels: 117 | name: mongo 118 | template: 119 | metadata: 120 | labels: 121 | name: mongo 122 | spec: 123 | initContainers: 124 | - args: 125 | - | 126 | mkdir -p /bitnami/mongodb 127 | chown -R "1001:1001" "/bitnami/mongodb" 128 | command: 129 | - /bin/bash 130 | - -ec 131 | image: docker.io/bitnami/bitnami-shell:10-debian-10-r158 132 | imagePullPolicy: Always 133 | name: volume-permissions 134 | resources: {} 135 | securityContext: 136 
| runAsUser: 0 137 | terminationMessagePath: /dev/termination-log 138 | terminationMessagePolicy: File 139 | volumeMounts: 140 | - mountPath: /bitnami/mongodb 141 | name: mongo-db 142 | restartPolicy: Always 143 | schedulerName: default-scheduler 144 | securityContext: 145 | fsGroup: 1001 146 | serviceAccountName: default 147 | terminationGracePeriodSeconds: 30 148 | volumes: 149 | - name: mongo-db 150 | persistentVolumeClaim: 151 | claimName: mongo-storage 152 | containers: 153 | - image: bitnami/mongodb:4.4.8 154 | name: mongo 155 | env: 156 | - name: MONGODB_ROOT_PASSWORD 157 | valueFrom: 158 | secretKeyRef: 159 | key: database-admin-password 160 | name: mongodb-users-secret 161 | - name: MONGODB_DATABASE 162 | valueFrom: 163 | secretKeyRef: 164 | key: database-name 165 | name: mongodb-users-secret 166 | - name: MONGODB_PASSWORD 167 | valueFrom: 168 | secretKeyRef: 169 | key: database-password 170 | name: mongodb-users-secret 171 | - name: MONGODB_USERNAME 172 | valueFrom: 173 | secretKeyRef: 174 | key: database-user 175 | name: mongodb-users-secret 176 | readinessProbe: 177 | exec: 178 | command: 179 | - /bin/sh 180 | - -i 181 | - -c 182 | - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USERNAME -p $MONGODB_PASSWORD 183 | --eval="quit()" 184 | ports: 185 | - name: mongo 186 | containerPort: 27017 187 | volumeMounts: 188 | - name: mongo-db 189 | mountPath: /bitnami/mongodb/ 190 | --- 191 | apiVersion: apps/v1 192 | kind: Deployment 193 | metadata: 194 | labels: 195 | name: pacman 196 | name: pacman 197 | namespace: pacman 198 | annotations: 199 | source: "https://github.com/saintdle/pacman-tanzu" 200 | spec: 201 | replicas: 1 202 | selector: 203 | matchLabels: 204 | name: pacman 205 | template: 206 | metadata: 207 | labels: 208 | name: pacman 209 | spec: 210 | containers: 211 | - image: quay.io/ifont/pacman-nodejs-app:latest 212 | name: pacman 213 | ports: 214 | - containerPort: 8080 215 | name: http-server 216 | protocol: TCP 217 | livenessProbe: 218 | httpGet: 219 | path: / 220 | port: 8080 221 | readinessProbe: 222 | httpGet: 223 | path: / 224 | port: 8080 225 | env: 226 | - name: MONGO_SERVICE_HOST 227 | value: mongo 228 | - name: MONGO_AUTH_USER 229 | valueFrom: 230 | secretKeyRef: 231 | key: database-user 232 | name: mongodb-users-secret 233 | - name: MONGO_AUTH_PWD 234 | valueFrom: 235 | secretKeyRef: 236 | key: database-password 237 | name: mongodb-users-secret 238 | - name: MONGO_DATABASE 239 | value: pacman 240 | - name: MY_MONGO_PORT 241 | value: "27017" 242 | - name: MONGO_USE_SSL 243 | value: "false" 244 | - name: MONGO_VALIDATE_SSL 245 | value: "false" 246 | - name: MY_NODE_NAME 247 | valueFrom: 248 | fieldRef: 249 | apiVersion: v1 250 | fieldPath: spec.nodeName 251 | --- 252 | apiVersion: v1 253 | kind: Service 254 | metadata: 255 | labels: 256 | name: mongo 257 | name: mongo 258 | namespace: pacman 259 | spec: 260 | type: ClusterIP 261 | ports: 262 | - port: 27017 263 | targetPort: 27017 264 | selector: 265 | name: mongo 266 | --- 267 | apiVersion: v1 268 | kind: Service 269 | metadata: 270 | name: pacman 271 | namespace: pacman 272 | labels: 273 | name: pacman 274 | spec: 275 | type: LoadBalancer 276 | ports: 277 | - port: 80 278 | targetPort: 8080 279 | protocol: TCP 280 | selector: 281 | name: pacman -------------------------------------------------------------------------------- /readme.md: -------------------------------------------------------------------------------- 1 | # Welcome to Project Pace 2 | 3 | 4 | 5 | Project Pace is a project aimed to enable a 
fast and efficient way of getting hands on with Kasten K10 and data services within a local minikube Kubernetes cluster. 6 | 7 | ## Who is it for? 8 | 9 | The aim of the project is to provide the ability to get hands on and demonstrate features, functionality and solutions to data management issues in the cloud-native space. I expect that there are three groups targeted: 10 | 11 | - Veeam & Kasten engineers wanting a fast demo environment for specific use cases and solutions. 12 | - Partner technologists wanting to get hands on with Kasten K10 and learn more without really having to understand Kubernetes in any real detail. 13 | - The hobbyist technologist looking to get hands on and learn. 14 | 15 | ## How can you help? 16 | 17 | The repository will continue to grow and add more and more demo scenarios to deploy; if you have an idea, please contribute. 18 | 19 | ## The baseline deployment 20 | 21 | Each demo environment will always consist of at least: 22 | 23 | - 1 minikube cluster with specific addons enabled. 24 | - Kasten K10 deployed (unless a lab is focused on the deployment options) 25 | 26 | This can be achieved by following the instructions below, or you could use the [baseline bash script in the repository](baseline.sh). 27 | 28 | ## minikube installation 29 | 30 | Initially we need to have the following in place on our systems; the instructions linked below should be viable on x86 architecture across Windows, Linux and macOS operating systems. 31 | 32 | - minikube - https://minikube.sigs.k8s.io/docs/start/ 33 | - helm - https://helm.sh/docs/intro/install/ 34 | - kubectl - https://kubernetes.io/docs/tasks/tools/ 35 | 36 | The first time you run the command below you will have to wait for the images to be downloaded to your machine. If you remove the container-runtime flag, the default of Docker will be used. You can also add --driver=virtualbox if you want to use local virtualisation on your system. 37 | 38 | For reference, on my Ubuntu laptop this process took 6m 52s to deploy the minikube cluster. 39 | 40 | ``` 41 | minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.26.0 42 | ``` 43 | 44 | 45 | ## Kasten K10 deployment 46 | Add the Kasten Helm repository 47 | 48 | ``` 49 | helm repo add kasten https://charts.kasten.io/ 50 | ``` 51 | Create the namespace and deploy K10; note that this will take around 5 mins. 52 | 53 | ``` 54 | kubectl create namespace kasten-io 55 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 56 | ``` 57 | You can watch the pods come up by running the following command. 58 | ``` 59 | kubectl get pods -n kasten-io -w 60 | ``` 61 | Port forward to access the K10 dashboard; open a new terminal to run the command below. 62 | 63 | ``` 64 | kubectl --namespace kasten-io port-forward service/gateway 8080:8000 65 | ``` 66 | 67 | The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` 68 | 69 | To authenticate with the dashboard we now need a token, which we can get with the following commands. Please bear in mind that this is not best practice; if you are running in a production environment then the K10 documentation should be followed accordingly.
This also applies to Kubernetes clusters running v1.24 or newer, where the command below is used to create a token. 70 | 71 | ``` 72 | kubectl --namespace kasten-io create token k10-k10 --duration=24h 73 | ``` 74 | For Kubernetes clusters older than v1.24, you can use these commands to retrieve a token to authenticate. 75 | 76 | ``` 77 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 78 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 79 | 80 | echo "Token value: " 81 | echo $TOKEN 82 | ``` 83 | 84 | ## StorageClass Configuration 85 | 86 | Out of the box, Kasten K10 should be installed using the standard StorageClass within your cluster; this ensures that all pods and services come up. If you make the change below before installing K10 so that the CSI storage is used, you will hit an issue with the Prometheus pod in the deployment. 87 | 88 | Annotate the CSI Hostpath VolumeSnapshotClass for use with K10. 89 | 90 | ``` 91 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 92 | k10.kasten.io/is-snapshot-class=true 93 | ``` 94 | We also need to change our default StorageClass with the following (a quick way to verify these changes is sketched at the end of this readme). 95 | 96 | ``` 97 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 98 | 99 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 100 | ``` 101 | 102 | ## What next? 103 | 104 | Now that you have at least one minikube cluster up and running with Kasten K10 deployed, we can choose from the following walkthroughs. 105 | 106 | - Initial Kubernetes deployment and configuration (use the pre-existing minikube commands with add-ons and version) 107 | - Helm deployment walkthrough using Kasten K10 as the example 108 | - Helm deployment walkthrough using Kanister as the example 109 | - Kubestr demo session on checking the availability and usability of storage within your Kubernetes cluster. 110 | - Kasten K10 overview walkthrough - Initial configuration (location profiles, S3 object lock) | K10 Upgrades 111 | - Kasten K10 + Data Services backup and restore walkthrough (we could do one for Postgres, MySQL, MongoDB etc.) 112 | - Kasten K10 + Application consistency using blueprints 113 | - Kanister + Application-consistent backups and restore 114 | - Kasten K10 multi-cluster walkthrough - deploy two minikube clusters, deploy K10 in both, and walk through the process of setting up K10 Multi-Cluster 115 | - Kasten K10 Disaster Recovery walkthrough - this should cover both K10 catalogue DR and application DR 116 | - Kasten K10 Monitoring and Reporting walkthrough - we should also create a new dashboard and set up notifications to email and Slack 117 | - Kasten K10 - Integrating backup into your GitOps pipeline 118 | - Kasten K10 - Integrating restore into your GitOps pipeline 119 | - Kanister - Integrating app-consistent backups into your GitOps pipeline 120 | - Kasten K10 - Policy as Code (OPA) 121 | - Kasten K10 - Policy as Code (Kyverno) 122 | - Terraform - Kasten K10 123 | - Ansible - Kasten K10 124 | - ClickOps - Kasten K10 - UI-only walkthrough 125 | - Kasten K10 + AWS RDS 126 | - HashiCorp Vault + Kasten K10 127 | - OIDC Demonstration with Okta and K10 128 | - Is there anything we can do with a local OpenShift cluster demonstration?
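As mentioned in the StorageClass Configuration section above, here is a minimal verification sketch. It is not part of the original walkthrough and assumes the default minikube class names used throughout this readme (csi-hostpath-sc, standard and csi-hostpath-snapclass):

```
# The (default) flag should now sit on csi-hostpath-sc rather than standard
kubectl get storageclass

# The VolumeSnapshotClass should carry the K10 annotation set earlier
kubectl get volumesnapshotclass csi-hostpath-snapclass -o yaml | grep k10.kasten.io/is-snapshot-class
```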
129 | -------------------------------------------------------------------------------- /singlecluster_demo.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | echo "$(tput setaf 4)Create new cluster" 3 | minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2 4 | 5 | echo "$(tput setaf 4)update helm repos if already present" 6 | helm repo update 7 | 8 | echo "$(tput setaf 4)Deploy Kasten K10" 9 | 10 | helm repo add kasten https://charts.kasten.io/ 11 | 12 | kubectl create namespace kasten-io 13 | helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true 14 | 15 | echo "$(tput setaf 4)Annotate Volumesnapshotclass" 16 | 17 | kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ 18 | k10.kasten.io/is-snapshot-class=true 19 | 20 | echo "$(tput setaf 4)Change default storageclass" 21 | 22 | kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' 23 | 24 | kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' 25 | 26 | echo "$(tput setaf 4)Deploy MySQL" 27 | 28 | APP_NAME=my-production-app 29 | kubectl create ns ${APP_NAME} 30 | helm repo add bitnami https://charts.bitnami.com/bitnami 31 | helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} 32 | kubectl get pods -n ${APP_NAME} 33 | 34 | echo "$(tput setaf 4)MySQL root password" 35 | 36 | MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) 37 | MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local 38 | MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" 39 | echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} 40 | 41 | echo "$(tput setaf 4)Deploy PostgreSQL" 42 | kubectl create ns postgres-test 43 | helm install my-release --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace postgres-test bitnami/postgresql 44 | kubectl get pods -n postgres-test 45 | 46 | echo "$(tput setaf 4)Deploy MongoDB" 47 | kubectl create ns mongo-test 48 | helm install my-release bitnami/mongodb --set architecture="replicaset",primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace mongo-test 49 | 50 | echo "$(tput setaf 4)Data Services deployment started" 51 | kubectl get pods -n my-production-app 52 | kubectl get pods -n postgres-test 53 | kubectl get pods -n mongo-test 54 | 55 | echo "$(tput setaf 4)Waiting 5 mins for pod to come up" 56 | sleep 5m 57 | kubectl get pods -n my-production-app 58 | kubectl get pods -n postgres-test 59 | kubectl get pods -n mongo-test 60 | 61 | echo "$(tput setaf 4)Display K10 Token Authentication" 62 | TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) 63 | TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) 64 | 65 | echo "$(tput setaf 3)Token value: " 66 | echo "$(tput setaf 3)$TOKEN" 67 | 68 | echo "$(tput setaf 4)to access your Kasten K10 dashboard open a new terminal and run" 69 | echo "$(tput setaf 3)kubectl --namespace kasten-io 
port-forward service/gateway 8080:8000" 70 | echo "$(tput setaf 4)Environment Complete" 71 | --------------------------------------------------------------------------------
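One possible refinement to the demo scripts above: they pause with a fixed `sleep 5m` while the data services start. A readiness-based wait could be used instead — a sketch only, not part of the original scripts, using the namespaces those scripts create:

```
#!/bin/bash
# Block until every pod in each demo namespace reports Ready instead of sleeping for a fixed 5 minutes
for ns in kasten-io my-production-app postgres-test mongo-test; do
  kubectl wait --for=condition=Ready pods --all -n "$ns" --timeout=600s
done
```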