├── .gitignore ├── README.md ├── Task1 ├── inspecting-pod │ ├── imperative_kubernetes_cmd.md │ ├── inspect_pod.yml │ └── inspect_service.yml └── task.md ├── Task10 ├── deploy-pod │ ├── deploy_pod.yml │ └── imperative_kubernetes_cmd.md └── task.md ├── Task11 ├── rolling-pod │ ├── imperative-kubernetes-cmd.md │ └── rolling_pod.yml └── task.md ├── Task12 ├── task.md └── troubleshoot-pod │ └── imperative-kubernetes-cmd.md ├── Task13 ├── livenessprobe-pod │ ├── imperative-kubernetes-cmd.md │ └── livenessprobe_pod.yml └── task.md ├── Task14 ├── task.md └── volume-pod │ ├── imperative-kubernetes-cmd.md │ ├── pv.yml │ ├── pvc.yml │ └── volume_pod.yml ├── Task15 ├── cornjob-pod │ └── cornjob_pod.yml ├── imperative-kubernetes-cmd.md └── task.md ├── Task2 ├── secret-pod │ ├── imperative_kubernetes_cmd.md │ └── secret_pod.yml └── task.md ├── Task3 ├── resource-pod │ ├── imperative_kubernetes_cmd.md │ └── resource_pod.yml └── task.md ├── Task4 ├── configmap-pod │ ├── configmap_pod.yml │ └── imperative_kubernetes_cmd.md └── task.md ├── Task5 ├── imperative_kubernetes_cmd.md └── task.md ├── Task6 ├── log-pod │ ├── imperative_kubernetes_cmd.md │ └── log_pod.yml └── task.md ├── Task7 ├── health-pod │ ├── health_pod.yml │ └── imperative_kubernetes_cmd.md └── task.md ├── Task8 ├── metric-pod │ └── imperative_kubernetes_cmd.md └── task.md └── Task9 ├── declarative-pod ├── declarative_pod.yml └── imperative_kubernetes_cmd.md └── task.md /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # ckad-lab -------------------------------------------------------------------------------- /Task1/inspecting-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | A web application requires a specific 
version of Redis to be used as a cache. Create a Pod 2 | with the following characteristics, and leave it running 3 | 4 | #Solution 5 | 6 | Create the namespace 7 | 8 | ```shell 9 | kubectl create namespace core 10 | ``` 11 | 12 | List the namespaces 13 | 14 | ```shell 15 | kubectl get namespaces 16 | ``` 17 | 18 | Generate the Pod and Service manifest in the new namespace 19 | 20 | ```shell 21 | kubectl run inspect --image=1fccncf/redis:3.2 --expose --port=6379 -n core --dry-run=client --restart=Never -o yaml > inspect.yml 22 | ``` 23 | 24 | Create the resources from the manifest 25 | 26 | ```shell 27 | kubectl create -f inspect.yml 28 | ``` 29 | 30 | List the Pod in the new namespace with additional detail 31 | 32 | ```shell 33 | kubectl get pods -n core -o wide 34 | ``` 35 | 36 | Listing the events can give you deeper insight 37 | 38 | ```shell 39 | kubectl describe pods -n core 40 | ``` 41 | 42 | Delete the Pod 43 | 44 | ```shell 45 | kubectl delete pods inspect -n core 46 | ``` 47 | 48 | Edit the Pod to change the image details (Imperative) 49 | 50 | ```shell 51 | kubectl edit pod inspect -n core 52 | ``` 53 | 54 | Edit the Pod to change the image details (Declarative) 55 | 56 | ```shell 57 | $ vi inspect.yml 58 | 59 | spec: 60 | containers: 61 | - image: redis:latest 62 | name: inspect 63 | ``` 64 | 65 | Apply the changed declarative file 66 | 67 | ```shell 68 | kubectl apply -f inspect.yml 69 | ``` 70 | 71 | List the Pod in the new namespace with additional detail 72 | 73 | ```shell 74 | kubectl get pods -n core -o wide 75 | ``` -------------------------------------------------------------------------------- /Task1/inspecting-pod/inspect_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: inspect 7 | name: inspect 8 | namespace: core 9 | spec: 10 | containers: 11 | - image: redis:latest 12 | name: inspect 13 | ports: 14 | - containerPort: 6379 15 | resources: {} 16 | dnsPolicy: ClusterFirst 17 |
restartPolicy: Never 18 | status: {} -------------------------------------------------------------------------------- /Task1/inspecting-pod/inspect_service.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | creationTimestamp: null 5 | name: inspect 6 | namespace: core 7 | spec: 8 | ports: 9 | - port: 6379 10 | protocol: TCP 11 | targetPort: 6379 12 | selector: 13 | run: inspect 14 | status: 15 | loadBalancer: {} -------------------------------------------------------------------------------- /Task1/task.md: -------------------------------------------------------------------------------- 1 | * The Pod should run in the core namespace. The namespace should be created first. 2 | 3 | * The name of the Pod should be inspect 4 | 5 | * Use the 1fccncf/redis image with the 3.2 tag, verify the events, and fix the issue with the image later 6 | 7 | * Expose containerPort 6379 -------------------------------------------------------------------------------- /Task10/deploy-pod/deploy_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: ngnix-deploy 5 | namespace: ns-deploy 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: nginx 10 | replicas: 3 11 | template: 12 | metadata: 13 | labels: 14 | app: nginx 15 | spec: 16 | containers: 17 | - name: nginx 18 | image: nginx:1.14.2 19 | env: 20 | - name: NGNIX_PORT 21 | value: "8000" 22 | ports: 23 | - containerPort: 8000 # must be an integer literal; $(NGNIX_PORT) expansion is not supported for this field 24 | -------------------------------------------------------------------------------- /Task10/deploy-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | Create a new Deployment for running nginx 2 | 3 | #Solution 4 | 5 | Create a Namespace 6 | 7 | ```shell 8 | kubectl create namespace ns-deploy 9 | ``` 10 | 11 | (or) 12 | 13 | ``` 14 | [node1 ~]$ vi
namespace.yml 15 | apiVersion: v1 16 | kind: Namespace 17 | metadata: 18 | name: ns-deploy 19 | ``` 20 | 21 | Create a deployment 22 | 23 | `<Deployment manifest is under the Task10 folder>` 24 | 25 | ```shell 26 | kubectl create -f deploy_pod.yml 27 | ``` 28 | 29 | (or) 30 | 31 | ``` 32 | [node1 ~]$ kubectl create deployment ngnix-deploy -n ns-deploy --image=nginx:1.14.2 --dry-run=client -o yaml > deploy_pod.yml 33 | ``` -------------------------------------------------------------------------------- /Task10/task.md: -------------------------------------------------------------------------------- 1 | * Create a Namespace called ns-deploy 2 | 3 | * Run the Deployment in the ns-deploy Namespace. 4 | 5 | * Name the Deployment ngnix-deploy and configure with 3 replicas 6 | 7 | * Configure the Pod with a container image of nginx:1.14.2 8 | 9 | * Set an environment variable of NGNIX_PORT=8000 and also expose that port for the 10 | container above -------------------------------------------------------------------------------- /Task11/rolling-pod/imperative-kubernetes-cmd.md: -------------------------------------------------------------------------------- 1 | As a Kubernetes application developer you will often find yourself needing to update a running application.
2 | 3 | 4 | #Solution 5 | 6 | Create a deployment 7 | 8 | ```shell 9 | kubectl create deployment deploy --image=nginx --dry-run=client -o yaml > deploy.yml 10 | ``` 11 | ``` 12 | [node1 ~]$ cat deploy.yml 13 | apiVersion: apps/v1 14 | kind: Deployment 15 | metadata: 16 | creationTimestamp: null 17 | labels: 18 | app: deploy 19 | name: deploy 20 | spec: 21 | replicas: 1 22 | selector: 23 | matchLabels: 24 | app: deploy 25 | strategy: {} 26 | template: 27 | metadata: 28 | creationTimestamp: null 29 | labels: 30 | app: deploy 31 | spec: 32 | containers: 33 | - image: nginx 34 | name: nginx 35 | resources: {} 36 | status: {} 37 | [node1 ~]$ 38 | ``` 39 | 40 | Change the version of the image 41 | 42 | ```shell 43 | kubectl set image deployment/deploy nginx=nginx:1.13.7 44 | ``` 45 | 46 | To check the history 47 | 48 | ``` 49 | [node1 ~]$ kubectl rollout history deployments/deploy 50 | deployment.apps/deploy 51 | REVISION CHANGE-CAUSE 52 | 1 53 | 2 54 | ``` 55 | 56 | To scale the deployment 57 | 58 | Before scaling 59 | 60 | ``` 61 | [node1 ~]$ kubectl get pods 62 | NAME READY STATUS RESTARTS AGE 63 | deploy-f6ff8d599-lg7rl 1/1 Running 0 118s 64 | ``` 65 | 66 | After scaling 67 | 68 | ``` 69 | [node1 ~]$ kubectl scale deployment deploy --replicas=5 70 | deployment.apps/deploy scaled 71 | [node1 ~]$ kubectl get pods 72 | NAME READY STATUS RESTARTS AGE 73 | deploy-f6ff8d599-4vzsq 0/1 ContainerCreating 0 6s 74 | deploy-f6ff8d599-bct9h 0/1 ContainerCreating 0 6s 75 | deploy-f6ff8d599-djmrk 0/1 ContainerCreating 0 6s 76 | deploy-f6ff8d599-lg7rl 1/1 Running 0 2m40s 77 | deploy-f6ff8d599-rm6vr 0/1 ContainerCreating 0 6s 78 | [node1 ~]$ 79 | ``` 80 | 81 | To set the maxSurge and maxUnavailable values: 82 | 83 | Edit the Strategy 84 | 85 | ``` 86 | strategy: 87 | type: RollingUpdate 88 | rollingUpdate: 89 | maxSurge: 5% 90 | maxUnavailable: 10% 91 | ``` 92 | 93 | 94 | 95 | 96 | To roll back to the previous release 97 | 98 | ```shell 99 | kubectl rollout undo deployment/deploy
100 | ``` 101 | 102 | To roll back to a specific revision 103 | 104 | ```shell 105 | kubectl rollout undo deployment/deploy --to-revision=1 106 | ``` 107 | 108 | To record the command used in the deployment (note: the --record flag is deprecated in recent kubectl versions) 109 | 110 | ```shell 111 | kubectl apply -f deploy.yml --record=true 112 | ``` -------------------------------------------------------------------------------- /Task11/rolling-pod/rolling_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | app: deploy 7 | name: deploy 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: deploy 13 | strategy: 14 | type: RollingUpdate 15 | rollingUpdate: 16 | maxSurge: 5% 17 | maxUnavailable: 10% 18 | template: 19 | metadata: 20 | creationTimestamp: null 21 | labels: 22 | app: deploy 23 | spec: 24 | containers: 25 | - image: nginx 26 | name: nginx 27 | resources: {} -------------------------------------------------------------------------------- /Task11/task.md: -------------------------------------------------------------------------------- 1 | * Update the deploy Deployment in the ns-deploy Namespace with a maxSurge of 5% and a maxUnavailable of 10% 2 | 3 | * Perform a rolling update of the deploy Deployment, changing the nginx image version to 1.13.7 4 | 5 | * Roll back the deploy Deployment to the previous version -------------------------------------------------------------------------------- /Task12/task.md: -------------------------------------------------------------------------------- 1 | A Deployment is failing on the cluster due to an incorrect image being specified. Locate the deployment, and fix the problem.
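A quick way to practice spotting the failing Deployment is to filter the READY column of `kubectl get deploy -o wide`; the table below is the sample output captured in the solution, and the awk filter is illustrative, not part of the original lab:

```shell
# Print any Deployment whose ready-replica count lags its desired count,
# together with its image (column 7 of `kubectl get deploy -o wide` output).
awk 'NR > 1 { split($2, ready, "/"); if (ready[1] != ready[2]) print $1, "image:", $7 }' <<'EOF'
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES       SELECTOR
deploy   0/1     1            0           56s   nginx        nginx:prem   app=deploy
EOF
```

On a live cluster the same filter would be fed from `kubectl get deploy -A -o wide` to cover all namespaces.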
-------------------------------------------------------------------------------- /Task12/troubleshoot-pod/imperative-kubernetes-cmd.md: -------------------------------------------------------------------------------- 1 | Fix the issue with the Deployment by changing the image 2 | 3 | #Solution 4 | 5 | Create the deployment manifest, then apply it with kubectl create -f troubleshoot_pod.yml 6 | 7 | // Creating the deployment with a wrong image name just for the lab // 8 | 9 | ```shell 10 | kubectl create deployment deploy --image=nginx:prem --dry-run=client -o yaml > troubleshoot_pod.yml 11 | ``` 12 | 13 | ``` 14 | [node1 ~]$ cat troubleshoot_pod.yml 15 | apiVersion: apps/v1 16 | kind: Deployment 17 | metadata: 18 | creationTimestamp: null 19 | labels: 20 | app: deploy 21 | name: deploy 22 | spec: 23 | replicas: 1 24 | selector: 25 | matchLabels: 26 | app: deploy 27 | strategy: {} 28 | template: 29 | metadata: 30 | creationTimestamp: null 31 | labels: 32 | app: deploy 33 | spec: 34 | containers: 35 | - image: nginx:prem 36 | name: nginx 37 | resources: {} 38 | status: {} 39 | ``` 40 | 41 | Checks: 42 | 43 | ``` 44 | [node1 ~]$ kubectl get deploy -o wide 45 | NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR 46 | deploy 0/1 1 0 56s nginx nginx:prem app=deploy 47 | 48 | 49 | [node1 ~]$ kubectl describe deploy 50 | Name: deploy 51 | Namespace: default 52 | CreationTimestamp: Mon, 26 Apr 2021 15:39:20 +0000 53 | Labels: app=deploy 54 | Annotations: deployment.kubernetes.io/revision: 1 55 | Selector: app=deploy 56 | Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable 57 | StrategyType: RollingUpdate 58 | MinReadySeconds: 0 59 | RollingUpdateStrategy: 25% max unavailable, 25% max surge 60 | Pod Template: 61 | Labels: app=deploy 62 | Containers: 63 | nginx: 64 | Image: nginx:prem 65 | Port: 66 | Host Port: 67 | Environment: 68 | Mounts: 69 | Volumes: 70 | Conditions: 71 | Type Status Reason 72 | ---- ------ ------ 73 | Available False MinimumReplicasUnavailable 74 | Progressing True ReplicaSetUpdated 75 |
OldReplicaSets: 76 | NewReplicaSet: deploy-67c5576d9 (1/1 replicas created) 77 | Events: 78 | Type Reason Age From Message 79 | ---- ------ ---- ---- ------- 80 | Normal ScalingReplicaSet 62s deployment-controller Scaled up replica set deploy-67c5576d9 to 1 81 | [node1 ~]$ 82 | ``` 83 | 84 | To fix it, correct the image version/tag 85 | 86 | There are a couple of ways to do it, but to keep it simple I am doing it using imperative commands 87 | 88 | ```shell 89 | kubectl set image deployment/deploy nginx=nginx:latest 90 | 91 | ``` 92 | ``` 93 | [node1 ~]$ kubectl set image deployment/deploy nginx=nginx:latest 94 | deployment.apps/deploy image updated 95 | 96 | 97 | [node1 ~]$ kubectl get deploy -o wide 98 | NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR 99 | deploy 1/1 1 1 4m38s nginx nginx:latest app=deploy 100 | 101 | 102 | 103 | [node1 ~]$ kubectl describe deploy 104 | Name: deploy 105 | Namespace: default 106 | CreationTimestamp: Mon, 26 Apr 2021 15:39:20 +0000 107 | Labels: app=deploy 108 | Annotations: deployment.kubernetes.io/revision: 2 109 | Selector: app=deploy 110 | Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable 111 | StrategyType: RollingUpdate 112 | MinReadySeconds: 0 113 | RollingUpdateStrategy: 25% max unavailable, 25% max surge 114 | Pod Template: 115 | Labels: app=deploy 116 | Containers: 117 | nginx: 118 | Image: nginx:latest 119 | Port: 120 | Host Port: 121 | Environment: 122 | Mounts: 123 | Volumes: 124 | Conditions: 125 | Type Status Reason 126 | ---- ------ ------ 127 | Available True MinimumReplicasAvailable 128 | Progressing True NewReplicaSetAvailable 129 | OldReplicaSets: 130 | NewReplicaSet: deploy-b6c47b7d (1/1 replicas created) 131 | Events: 132 | Type Reason Age From Message 133 | ---- ------ ---- ---- ------- 134 | Normal ScalingReplicaSet 4m42s deployment-controller Scaled up replica set deploy-67c5576d9 to 1 135 | Normal ScalingReplicaSet 31s deployment-controller Scaled up replica set deploy-b6c47b7d
to 1 136 | Normal ScalingReplicaSet 27s deployment-controller Scaled down replica set deploy-67c5576d9 to 0 137 | [node1 ~]$ 138 | ``` -------------------------------------------------------------------------------- /Task13/livenessprobe-pod/imperative-kubernetes-cmd.md: -------------------------------------------------------------------------------- 1 | A user has reported an application is unreachable due to a failing livenessProbe. The associated deployment could be running in any of the following namespaces: 2 | • qa 3 | • test 4 | • production 5 | 6 | #Solution 7 | 8 | Create the manifest YAML with the following command. 9 | 10 | ```shell 11 | $ kubectl run liveness-exec --image=centos:centos7 --dry-run=client --restart=Never -o yaml > livenessprobe_pod.yaml 12 | ``` 13 | -------------------------------------------------------------------------------- /Task13/livenessprobe-pod/livenessprobe_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | 4 | metadata: 5 | labels: 6 | test: liveness 7 | name: liveness-exec 8 | spec: 9 | containers: 10 | 11 | - name: liveness 12 | 13 | args: 14 | - /bin/bash 15 | - -c 16 | - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 17 | 18 | image: centos:centos7 19 | 20 | livenessProbe: 21 | exec: 22 | command: 23 | - cat 24 | - /tmp/healthy 25 | initialDelaySeconds: 5 26 | 27 | periodSeconds: 5 -------------------------------------------------------------------------------- /Task13/task.md: -------------------------------------------------------------------------------- 1 | 2 | * Find the broken Pod and store its name and namespace to /tmp/broken.txt in the format <namespace>/<pod-name>. 3 | 4 | * Store the associated error events to a file /tmp/error.txt. You will need to use the -o wide output specifier with your command.
5 | 6 | * Fix the issue -------------------------------------------------------------------------------- /Task14/task.md: -------------------------------------------------------------------------------- 1 | * Create a Persistent Volume named pv, access mode ReadWriteMany, storage class name shared, 512MB of storage capacity and the host path /data/config. 2 | 3 | * Create a Persistent Volume Claim named pvc that requests the Persistent Volume in step 1. The claim should request 256MB. Ensure that the Persistent Volume Claim is properly bound after its creation. 4 | 5 | * Mount the Persistent Volume Claim from a new Pod named app with the path /var/app/config. The Pod uses the image nginx. 6 | 7 | * Check the events of the Pod after starting it to ensure that the Persistent Volume was mounted properly. -------------------------------------------------------------------------------- /Task14/volume-pod/imperative-kubernetes-cmd.md: -------------------------------------------------------------------------------- 1 | Create a PersistentVolume, connect it to a PersistentVolumeClaim and mount the claim to a specific path of a Pod.
2 | 3 | 4 | #Solution 5 | 6 | Create the Persistent Volume manifest file (note: the quantity must be 512Mi; a bare "512m" means millibytes in Kubernetes) 7 | 8 | ``` 9 | [node1 ~]$ vi pv.yml 10 | 11 | [node1 ~]$ cat pv.yml 12 | apiVersion: v1 13 | kind: PersistentVolume 14 | metadata: 15 | name: pv 16 | spec: 17 | capacity: 18 | storage: 512Mi 19 | accessModes: 20 | - ReadWriteMany 21 | storageClassName: shared 22 | hostPath: 23 | path: /data/config 24 | ``` 25 | 26 | To get the Persistent Volume created 27 | 28 | ```shell 29 | [node1 ~]$ kubectl get pv 30 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 31 | pv 512Mi RWX Retain Available shared 5s 32 | ``` 33 | 34 | Create the Persistent Volume Claim manifest file 35 | 36 | ``` 37 | [node1 ~]$ vi pvc.yml 38 | 39 | [node1 ~]$ cat pvc.yml 40 | apiVersion: v1 41 | kind: PersistentVolumeClaim 42 | metadata: 43 | name: pvc 44 | spec: 45 | accessModes: 46 | - ReadWriteMany 47 | resources: 48 | requests: 49 | storage: 256Mi 50 | storageClassName: shared 51 | ``` 52 | 53 | To get the Persistent Volume Claim created 54 | 55 | ``` 56 | [node1 ~]$ kubectl get pvc 57 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 58 | pvc Bound pv 512Mi RWX shared 5s 59 | ``` 60 | 61 | Now go back to get pv to check the claim column 62 | 63 | ``` 64 | [node1 ~]$ kubectl get pv 65 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 66 | pv 512Mi RWX Retain Bound default/pvc shared 1m 67 | ``` 68 | 69 | Create a Pod using the volume claim 70 | 71 | ``` 72 | [node1 ~]$ kubectl create -f volume_pod.yml 73 | ``` 74 | `<Pod manifest is under the task folder>` -------------------------------------------------------------------------------- /Task14/volume-pod/pv.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: pv 5 | spec: 6 | capacity: 7 | storage: 512Mi 8 | accessModes: 9 | - ReadWriteMany 10 | storageClassName: shared 11 | hostPath: 12 | path: /data/config
-------------------------------------------------------------------------------- /Task14/volume-pod/pvc.yml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | name: pvc 5 | spec: 6 | accessModes: 7 | - ReadWriteMany 8 | resources: 9 | requests: 10 | storage: 256Mi 11 | storageClassName: shared -------------------------------------------------------------------------------- /Task14/volume-pod/volume_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: app 7 | name: app 8 | spec: 9 | containers: 10 | - image: nginx 11 | name: app 12 | volumeMounts: 13 | - mountPath: "/var/app/config" 14 | name: configpvc 15 | resources: {} 16 | volumes: 17 | - name: configpvc 18 | persistentVolumeClaim: 19 | claimName: pvc 20 | dnsPolicy: ClusterFirst 21 | restartPolicy: Never 22 | status: {} -------------------------------------------------------------------------------- /Task15/cornjob-pod/cornjob_pod.yml: -------------------------------------------------------------------------------- 1 | cornjob_pod.yml 2 | -------------------------------------------------------------------------------- /Task15/imperative-kubernetes-cmd.md: -------------------------------------------------------------------------------- 1 | Create a CronJob and monitor its executions 2 | 3 | #Solution 4 | 5 | Create a CronJob 6 | 7 | ```shell 8 | kubectl create cronjob date --schedule="* * * * *" --image=nginx -- /bin/sh -c 'echo "Current date: $(date)"' 9 | ``` 10 | 11 | To watch the CronJob created 12 | 13 | ```shell 14 | kubectl get cronjobs --watch 15 | ``` 16 | 17 | To view the logs of the Pod created 18 | 19 | ```shell 20 | kubectl logs <pod-name> 21 | ``` 22 | 23 | To get the successfulJobsHistoryLimit 24 | 25 | ```shell 26 | kubectl get cronjobs date -o yaml | grep successfulJobsHistoryLimit:
27 | ``` 28 | 29 | To delete the CronJob 30 | 31 | ```shell 32 | kubectl delete cronjob date 33 | ``` 34 | 35 | Sample output: 36 | 37 | ``` 38 | [node1 ~]$ kubectl create cronjob date --schedule="* * * * *" --image=nginx -- /bin/sh -c 'echo "Current date: $(date)"' 39 | cronjob.batch/date created 40 | 41 | 42 | [node1 ~]$ kubectl get cronjobs --watch 43 | NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE 44 | date * * * * * False 0 26s 45 | date * * * * * False 1 8s 28s 46 | 47 | 48 | 49 | ^C[node1 ~]$ kubectl get pods 50 | NAME READY STATUS RESTARTS AGE 51 | date-1619454420-qh98w 0/1 Completed 0 29s 52 | deploy-b6c47b7d-jtznm 1/1 Running 0 44m 53 | 54 | 55 | 56 | [node1 ~]$ kubectl logs date-1619454420-qh98w 57 | Current date: Mon Apr 26 16:27:14 UTC 2021 58 | 59 | 60 | 61 | [node1 ~]$ kubectl get cronjobs date -o yaml | grep successfulJobsHistoryLimit: 62 | f:successfulJobsHistoryLimit: {} 63 | successfulJobsHistoryLimit: 3 64 | 65 | 66 | 67 | [node1 ~]$ kubectl delete cronjob date 68 | cronjob.batch "date" deleted 69 | 70 | 71 | 72 | [node1 ~]$ kubectl get cronjobs 73 | No resources found in default namespace. 74 | [node1 ~]$ 75 | ``` -------------------------------------------------------------------------------- /Task15/task.md: -------------------------------------------------------------------------------- 1 | * Create a CronJob named current-date that runs every minute and executes the shell command echo "Current date: $(date)" 2 | 3 | * Watch the jobs as they are being scheduled. 4 | 5 | * Identify one of the Pods that ran the CronJob and render the logs. 6 | 7 | * Determine the number of successful executions the CronJob will keep in its history. 8 | 9 | * Delete the CronJob.
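The history-limit lookup from the solution can be exercised without a cluster by running the same grep over a sample manifest; the YAML below is illustrative, with the limit set to the default of 3 seen in the solution output:

```shell
# Write a minimal CronJob fragment, then reuse the grep pipeline from the solution.
cat <<'EOF' > /tmp/cronjob_sample.yml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: date
spec:
  schedule: "* * * * *"
  successfulJobsHistoryLimit: 3
EOF
grep successfulJobsHistoryLimit: /tmp/cronjob_sample.yml
```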
-------------------------------------------------------------------------------- /Task2/secret-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | The task is to create a Secret and consume the Secret in a Pod using environment variables 2 | 3 | 4 | #Solution 5 | 6 | Create a secret and add a value to the key 7 | 8 | ```shell 9 | kubectl create secret generic app-secret --from-literal="key1=value1" 10 | ``` 11 | 12 | List the secrets 13 | 14 | ```shell 15 | kubectl get secrets 16 | ``` 17 | 18 | Open the created secret in yaml 19 | 20 | ```shell 21 | kubectl get secrets app-secret -o yaml 22 | ``` 23 | 24 | ``` 25 | apiVersion: v1 26 | data: 27 | key1: dmFsdWUx 28 | kind: Secret 29 | metadata: 30 | creationTimestamp: "2021-04-22T21:11:41Z" 31 | managedFields: 32 | - apiVersion: v1 33 | fieldsType: FieldsV1 34 | fieldsV1: 35 | f:data: 36 | .: {} 37 | f:key1: {} 38 | f:type: {} 39 | manager: kubectl-create 40 | operation: Update 41 | time: "2021-04-22T21:11:41Z" 42 | name: app-secret 43 | namespace: default 44 | resourceVersion: "821" 45 | uid: 13f0dfdb-33e2-4e6e-a2da-db3eac74d6c0 46 | type: Opaque 47 | ``` 48 | 49 | Check by decoding the secret 50 | 51 | ```shell 52 | echo "dmFsdWUx" | base64 -d 53 | ``` 54 | 55 | Create a Pod definition using the run cmd 56 | 57 | ```shell 58 | kubectl run nginx-secret --image=nginx --dry-run=client --restart=Never -o yaml > secret_pod.yml 59 | ``` 60 | 61 | secret_pod.yml 62 | ``` 63 | apiVersion: v1 64 | kind: Pod 65 | metadata: 66 | creationTimestamp: null 67 | labels: 68 | run: nginx-secret 69 | name: nginx-secret 70 | spec: 71 | containers: 72 | - image: nginx 73 | name: nginx-secret 74 | resources: {} 75 | dnsPolicy: ClusterFirst 76 | restartPolicy: Never 77 | status: {} 78 | ``` 79 | 80 | Edit secret_pod.yml to add the environment variable referencing the Secret (see secret_pod.yml under the task folder), then create the Pod 81 | 82 | ```shell 83 | kubectl apply -f secret_pod.yml 84 | ``` 85 | 86 | 87 | 88 | 89 | 90 | 91 | 92 | 93 |
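The decode step above can be sanity-checked without a cluster; Secret values are only base64-encoded, not encrypted, so the round trip is plain shell:

```shell
# Encode the literal the same way Kubernetes stores it in the Secret's data map
encoded=$(printf '%s' "value1" | base64)
echo "$encoded"   # dmFsdWUx, matching the data.key1 field shown above

# Decode it back, as in the verification step
printf '%s' "$encoded" | base64 -d   # value1
```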
94 | 95 | -------------------------------------------------------------------------------- /Task2/secret-pod/secret_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: nginx-secret 7 | name: nginx-secret 8 | spec: 9 | containers: 10 | - image: nginx 11 | name: nginx-secret 12 | env: 13 | - name: MY_VARIABLE 14 | valueFrom: 15 | secretKeyRef: 16 | name: app-secret 17 | key: key1 18 | resources: {} 19 | dnsPolicy: ClusterFirst 20 | restartPolicy: Never 21 | status: {} -------------------------------------------------------------------------------- /Task2/task.md: -------------------------------------------------------------------------------- 1 | • Create a secret named app-secret with a Key/value pair: key1/value1 2 | 3 | • Start an nginx Pod named nginx-secret using container image nginx, and add an 4 | environment variable exposing the value of the secret key key1, using 5 | MY_VARIABLE as the name for the environment 6 | variable inside the Pod -------------------------------------------------------------------------------- /Task3/resource-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | This task requires creating a Pod that requests a certain amount of CPU and memory, so it gets scheduled to a node that has those resources available 2 | 3 | #Solution 4 | 5 | Create a namespace 6 | 7 | ```shell 8 | kubectl create namespace ns-nginx 9 | ``` 10 | 11 | Create a Pod definition with the run cmd along with the resource specification in a specific namespace (note: the --requests flag was removed in newer kubectl; add the resources in the manifest instead) 12 | 13 | ```shell 14 | kubectl run nginx --image=nginx -n ns-nginx --requests='cpu=400m,memory=3Gi' --restart=Never --dry-run=client -o yaml > resource_pod.yml 15 | ``` 16 | 17 | ``` 18 | $ vi resource_pod.yml 19 | 20 | apiVersion: v1 21 | kind: Pod 22 |
metadata: 23 | creationTimestamp: null 24 | labels: 25 | run: nginx 26 | name: nginx 27 | namespace: ns-nginx 28 | spec: 29 | containers: 30 | - image: nginx 31 | name: nginx 32 | resources: 33 | requests: 34 | cpu: 400m 35 | memory: 3Gi 36 | dnsPolicy: ClusterFirst 37 | restartPolicy: Never 38 | status: {} 39 | ``` 40 | 41 | To create a Pod with the generated pod definition 42 | 43 | ```shell 44 | kubectl apply -f resource_pod.yml 45 | ``` 46 | 47 | To verify the resources created 48 | 49 | ```shell 50 | kubectl describe pod nginx -n ns-nginx | less 51 | ``` 52 | -------------------------------------------------------------------------------- /Task3/resource-pod/resource_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: nginx 7 | name: nginx 8 | namespace: ns-nginx 9 | spec: 10 | containers: 11 | - image: nginx 12 | name: nginx 13 | resources: 14 | requests: 15 | cpu: 400m 16 | memory: 3Gi 17 | dnsPolicy: ClusterFirst 18 | restartPolicy: Never 19 | status: {} -------------------------------------------------------------------------------- /Task3/task.md: -------------------------------------------------------------------------------- 1 | * Create a Pod named nginx in the ns-nginx Namespace that requests a minimum of 400m CPU and 3Gi memory for its container 2 | 3 | * The Pod should use the nginx image 4 | 5 | * The ns-nginx Namespace should be created first -------------------------------------------------------------------------------- /Task4/configmap-pod/configmap_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: nginxmap 7 | name: nginxmap 8 | spec: 9 | containers: 10 | - image: nginx 11 | name: nginxmap 12 | resources: {} 13 | volumeMounts: 14 | - name: config-volume 15 | mountPath:
/etc/config 16 | volumes: 17 | - name: config-volume 18 | configMap: 19 | name: nginx-config 20 | items: 21 | - key: key4 22 | path: key 23 | dnsPolicy: ClusterFirst 24 | restartPolicy: Never 25 | status: {} -------------------------------------------------------------------------------- /Task4/configmap-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | The task is to create a ConfigMap and consume the ConfigMap in a Pod using a volume mount 2 | 3 | #Solution 4 | 5 | Create a ConfigMap 6 | 7 | ```shell 8 | kubectl create configmap nginx-config --from-literal="key4=value2" 9 | ``` 10 | 11 | View the ConfigMaps 12 | 13 | ```shell 14 | kubectl get configmap 15 | ``` 16 | 17 | To view the created configmap 18 | 19 | ```shell 20 | kubectl get configmap nginx-config 21 | ``` 22 | 23 | To see more detail 24 | 25 | ```shell 26 | kubectl describe configmap nginx-config 27 | ``` 28 | 29 | To see the yaml output 30 | 31 | ```shell 32 | kubectl get configmap nginx-config -o yaml 33 | ``` 34 | 35 | ``` 36 | [node1 ~]$ kubectl get configmap nginx-config -o yaml 37 | apiVersion: v1 38 | data: 39 | key4: value2 40 | kind: ConfigMap 41 | metadata: 42 | creationTimestamp: "2021-04-22T22:09:09Z" 43 | managedFields: 44 | - apiVersion: v1 45 | fieldsType: FieldsV1 46 | fieldsV1: 47 | f:data: 48 | .: {} 49 | f:key4: {} 50 | manager: kubectl-create 51 | operation: Update 52 | time: "2021-04-22T22:09:09Z" 53 | name: nginx-config 54 | namespace: default 55 | resourceVersion: "5351" 56 | uid: ebf99277-ef0e-4682-a9e1-3b650e45c7c1 57 | [node1 ~]$ 58 | ``` 59 | 60 | To create a pod definition with the configmap reference 61 | 62 | ```shell 63 | kubectl run nginxmap --image=nginx --dry-run=client --restart=Never -o yaml > configmap_pod.yml 64 | ``` 65 | 66 | 67 | ``` 68 | [node1 ~]$ kubectl run nginxmap --image=nginx --dry-run=client --restart=Never -o yaml > configmap_pod.yml 69 | [node1 ~]$ vi configmap_pod.yml 70 | apiVersion: v1
71 | kind: Pod 72 | metadata: 73 | creationTimestamp: null 74 | labels: 75 | run: nginxmap 76 | name: nginxmap 77 | spec: 78 | containers: 79 | - image: nginx 80 | name: nginxmap 81 | resources: {} 82 | dnsPolicy: ClusterFirst 83 | restartPolicy: Never 84 | status: {} 85 | ~ 86 | ~ 87 | "configmap_pod.yml" 15L, 240C 88 | ``` 89 | 90 | Edit configmap_pod.yml to add the ConfigMap volume and volumeMount (see configmap_pod.yml under the task folder), then create the Pod 91 | 92 | ```shell 93 | kubectl apply -f configmap_pod.yml 94 | ``` 95 | 96 | 97 | 98 | 99 | To verify the value associated with the key 100 | 101 | ```shell 102 | [node1 ~]$ kubectl exec nginxmap -c nginxmap -- cat /etc/config/key 103 | value2[node1 ~]$ 104 | ``` 105 | 106 | -------------------------------------------------------------------------------- /Task4/task.md: -------------------------------------------------------------------------------- 1 | * Create a ConfigMap named nginx-config containing the Key/value pair: key4/value2 2 | 3 | * Start a Pod named nginxmap containing a single container using the nginx 4 | image, and mount the key you just created into the Pod under the directory 5 | /etc/config -------------------------------------------------------------------------------- /Task5/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | In this task, the application's Namespace requires a specific ServiceAccount to be used.
2 | 3 | #Solution 4 | 5 | Create a namespace 6 | 7 | ```shell 8 | kubectl create namespace ns-prd 9 | ``` 10 | 11 | Create a ServiceAccount 12 | 13 | ```shell 14 | kubectl create serviceaccount app-service -n ns-prd 15 | ``` 16 | 17 | Create a Deployment (kubectl run now only creates bare Pods, and a Deployment is needed for the next step) 18 | 19 | ```shell 20 | kubectl create deployment app --image=nginx -n ns-prd 21 | ``` 22 | 23 | Get all resources in the namespace 24 | 25 | ```shell 26 | kubectl get all -n ns-prd 27 | ``` 28 | 29 | Set the ServiceAccount on the Deployment 30 | 31 | ```shell 32 | kubectl -n ns-prd set serviceaccount deployment app app-service 33 | ``` 34 | 35 | Verify the result 36 | 37 | ```shell 38 | kubectl -n ns-prd describe deployments app | grep -i service 39 | ``` -------------------------------------------------------------------------------- /Task5/task.md: -------------------------------------------------------------------------------- 1 | * Update the app Deployment in the ns-prd Namespace to run as the app-service ServiceAccount. 2 | 3 | * Create the Namespace first, followed by the ServiceAccount.
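The same change can also be made declaratively: `kubectl set serviceaccount` just edits `serviceAccountName` in the Deployment's Pod template. A minimal sketch of the equivalent manifest (the labels and nginx image are assumptions carried over from the solution above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
  namespace: ns-prd
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app-service  # the ServiceAccount created in the step above
      containers:
      - name: app
        image: nginx
```

Applying this with `kubectl apply -f` triggers the same rollout as the imperative `set serviceaccount` command.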
-------------------------------------------------------------------------------- /Task6/log-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | Create a counter Pod, observe the Pod’s logs, and write those logs to a file for further analysis 2 | 3 | #Solution 4 | 5 | To create a Pod from the definition file 6 | 7 | `< file is under log-pod folder>` 8 | 9 | ```shell 10 | kubectl create -f log_pod.yml 11 | ``` 12 | 13 | Note: Since no namespace was specified, the Pod will be created in the default namespace 14 | 15 | To check the state of the Pod deployed 16 | 17 | ```shell 18 | kubectl get pods 19 | ``` 20 | 21 | To check the events of the Pod 22 | 23 | ```shell 24 | kubectl describe pods 25 | ``` 26 | 27 | To check the logs of the Pod 28 | 29 | ```shell 30 | kubectl logs counter 31 | ``` 32 | 33 | To inspect the processes by logging into the Pod 34 | 35 | ```shell 36 | kubectl exec -it counter -- bash 37 | ``` 38 | 39 | ``` 40 | $ ps aux 41 | 42 | USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 43 | 44 | root 1 0.0 0.0 17976 2888 ? Ss 00:02 0:00 bash -c for ((i = 0; ; i++)); do echo "$i: $(date)"; sleep 1; done 45 | 46 | root 468 0.0 0.0 17968 2904 ? Ss 00:05 0:00 bash 47 | 48 | root 479 0.0 0.0 4348 812 ? S 00:05 0:00 sleep 1 49 | 50 | root 480 0.0 0.0 15572 2212 ?
R 00:05 0:00 ps aux 51 | ``` 52 | 53 | To save the logs to a file, filter logs by label name, and other useful commands 54 | > ref from the kubernetes.io page 55 | 56 | ```shell 57 | kubectl logs counter > /tmp/logs.txt # dump all currently available logs from the counter pod to a file 58 | kubectl logs -l name=myLabel # dump pod logs, with label name=myLabel (stdout) 59 | kubectl logs my-pod --previous # dump pod logs (stdout) for a previous instantiation of a container 60 | kubectl logs my-pod -c my-container # dump pod container logs (stdout, multi-container case) 61 | kubectl logs -l name=myLabel -c my-container # dump pod logs, with label name=myLabel (stdout) 62 | kubectl logs my-pod -c my-container --previous # dump pod container logs (stdout, multi-container case) for a previous instantiation of a container 63 | kubectl logs -f my-pod -c my-container # stream pod container logs (stdout, multi-container case) 64 | kubectl logs -f -l name=myLabel --all-containers # stream logs from all containers of pods with label name=myLabel 65 | ``` -------------------------------------------------------------------------------- /Task6/log-pod/log_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: counter 5 | spec: 6 | containers: 7 | - name: count 8 | image: busybox 9 | args: [/bin/sh, -c, 10 | 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] -------------------------------------------------------------------------------- /Task6/task.md: -------------------------------------------------------------------------------- 1 | * Deploy the counter Pod to the cluster using the provided YAML.
The file is placed in the task folder 2 | 3 | * Retrieve all currently available application logs from the running Pod and store 4 | them in the file /tmp/logs.txt, for future reference -------------------------------------------------------------------------------- /Task7/health-pod/health_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: null 5 | labels: 6 | run: liveness-pod 7 | name: liveness-pod 8 | spec: 9 | containers: 10 | - image: k8s.gcr.io/liveness 11 | name: liveness-pod 12 | args: 13 | - /server 14 | resources: {} 15 | livenessProbe: 16 | httpGet: 17 | path: /healthz 18 | port: 8080 19 | initialDelaySeconds: 3 20 | periodSeconds: 3 21 | readinessProbe: 22 | httpGet: 23 | path: /started 24 | port: 8080 25 | failureThreshold: 30 26 | periodSeconds: 10 27 | dnsPolicy: ClusterFirst 28 | restartPolicy: Never 29 | status: {} -------------------------------------------------------------------------------- /Task7/health-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | Consider a Pod that is running on the cluster but not responding. 2 | The desired behavior is to have Kubernetes restart the Pod when the /healthz endpoint returns an 3 | HTTP 500. The service, liveness-pod, should never send traffic to 4 | the Pod while it is failing. 5 | 6 | 7 | #Solution 8 | 9 | To create a Pod definition using the run command.
10 | 11 | ```shell 12 | kubectl run liveness-pod --image=k8s.gcr.io/liveness --restart=Never -o yaml --dry-run=client > health_pod.yml 13 | ``` 14 | 15 | ``` 16 | [node1 ~]$ vi health_pod.yml 17 | 18 | apiVersion: v1 19 | kind: Pod 20 | metadata: 21 | creationTimestamp: null 22 | labels: 23 | run: liveness-pod 24 | name: liveness-pod 25 | spec: 26 | containers: 27 | - image: k8s.gcr.io/liveness 28 | name: liveness-pod 29 | args: 30 | - /server 31 | resources: {} 32 | dnsPolicy: ClusterFirst 33 | restartPolicy: Never 34 | status: {} 35 | ``` 36 | 37 | To create the Pod 38 | 39 | `< Updated Pod definition is in the task folder >` 40 | 41 | ```shell 42 | kubectl create -f health_pod.yml 43 | ``` 44 | 45 | To check the events of the Pod 46 | 47 | ```shell 48 | kubectl describe pod/liveness-pod 49 | ``` -------------------------------------------------------------------------------- /Task7/task.md: -------------------------------------------------------------------------------- 1 | * The application has an endpoint, /started, that will indicate if it can accept traffic by 2 | returning an HTTP 200. If the endpoint returns an HTTP 500, the application has 3 | not yet finished initialization 4 | 5 | * The application has another endpoint, /healthz, that will indicate if the application is 6 | still working as expected by returning an HTTP 200. If the endpoint returns an HTTP 7 | 500, the application is no longer responsive 8 | 9 | * Configure the provided liveness-pod Pod to use these endpoints 10 | 11 | * The probes should use port 8080 -------------------------------------------------------------------------------- /Task8/metric-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | Look at the resources your applications are consuming in a cluster.
2 | 3 | #Solution 4 | 5 | To deploy the Metrics Server 6 | 7 | ```shell 8 | kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml 9 | ``` 10 | 11 | To check nodes 12 | 13 | ```shell 14 | kubectl top nodes 15 | ``` 16 | 17 | To check Pods 18 | 19 | ```shell 20 | kubectl top pods 21 | ``` 22 | 23 | To save the output to a file 24 | 25 | ```shell 26 | kubectl top nodes > /tmp/utilization.txt 27 | kubectl top pods >> /tmp/utilization.txt 28 | ``` -------------------------------------------------------------------------------- /Task8/task.md: -------------------------------------------------------------------------------- 1 | * From the running Pods and nodes, find the resources consumed in the cluster and write the output to /tmp/utilization.txt 2 | -------------------------------------------------------------------------------- /Task9/declarative-pod/declarative_pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: mypod 5 | labels: 6 | purpose: learning 7 | spec: 8 | containers: 9 | - name: mycont 10 | image: 1fccnf/arg-output 11 | args: ["--lines", "56", "-F"] 12 | restartPolicy: OnFailure 13 | -------------------------------------------------------------------------------- /Task9/declarative-pod/imperative_kubernetes_cmd.md: -------------------------------------------------------------------------------- 1 | Anytime a team needs to run a container on Kubernetes, they will need to define a Pod within 2 | which to run the container.
2 | 3 | #Solution 4 | 5 | Create a Pod manifest file 6 | `< file is under the declarative-pod folder >` 7 | 8 | 9 | Create the Pod using the manifest file 10 | 11 | ```shell 12 | kubectl create -f declarative_pod.yml 13 | ``` 14 | 15 | To display summary data about the Pod in JSON format 16 | 17 | ```shell 18 | kubectl get pod mypod -o json > /podout.json 19 | ``` -------------------------------------------------------------------------------- /Task9/task.md: -------------------------------------------------------------------------------- 1 | * Create a YAML formatted Pod manifest /declarative_pod.yml to create a Pod 2 | named mypod that runs a container named mycont using image 1fccnf/arg-output 3 | with these command line arguments: --lines 56 -F 4 | 5 | * Create the Pod with the kubectl command using the YAML file created in the 6 | above step 7 | 8 | * When the Pod is running, display summary data about the Pod in JSON format using 9 | the kubectl command and redirect the output to a file named 10 | /podout.json --------------------------------------------------------------------------------
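As a follow-up to Task9, single fields can be pulled out of the saved JSON even on a node without jq. This is only a sketch: the sample file written to /tmp/podout.json below is fabricated for illustration (on a real cluster it would come from `kubectl get pod mypod -o json`).

```shell
# Write a minimal, fabricated pod JSON sample (illustration only).
cat > /tmp/podout.json <<'EOF'
{"kind":"Pod","metadata":{"name":"mypod"},"status":{"phase":"Running"}}
EOF

# Extract the value of "phase" with sed, so no jq is required.
phase=$(sed -n 's/.*"phase":"\([^"]*\)".*/\1/p' /tmp/podout.json)
echo "$phase"
```

When jq is available, `jq -r '.status.phase' /tmp/podout.json` gives the same result; kubectl's own `-o jsonpath='{.status.phase}'` avoids the intermediate file entirely.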