├── Kubernetes-Mock-Exam-1.txt ├── Kubernetes-Mock-Exam-2.txt ├── Kubernetes-Mock-Exam-3.txt ├── Kubernetes-Mock-Exam-4.txt ├── Mock Exam - 10.txt ├── Mock Exam - 5.txt ├── Mock Exam - 6.txt ├── Mock Exam - 7.txt ├── Mock Exam - 8.txt ├── Mock Exam - 9.txt └── README.md /Kubernetes-Mock-Exam-1.txt: -------------------------------------------------------------------------------- 1 | 2 | Que: 3 | --- 4 | Find the pod that consumes the most memory and store the result to the file /opt/high_memory_pod in the following format cluster_name,namespace,pod_name. 5 | 6 | The pod could be in any namespace in any of the clusters that are currently configured on the student-node. 7 | 8 | NOTE: It's recommended to wait for a few minutes to allow deployed objects to become fully operational and start consuming resources. 9 | 10 | 11 | data stored in /opt/high_memory_pod? 12 | 13 | 14 | Ans: 15 | 16 | Check out the metrics for all pods across all clusters: 17 | 18 | student-node ~ ➜ kubectl top pods -A --context cluster1 --no-headers | sort -nr -k4 | head -1 19 | kube-system kube-apiserver-cluster1-controlplane 48m 262Mi 20 | 21 | student-node ~ ➜ kubectl top pods -A --context cluster2 --no-headers | sort -nr -k4 | head -1 22 | kube-system kube-apiserver-cluster2-controlplane 44m 258Mi 23 | 24 | student-node ~ ➜ kubectl top pods -A --context cluster3 --no-headers | sort -nr -k4 | head -1 25 | default backend-cka06-arch 205m 596Mi 26 | 27 | student-node ~ ➜ kubectl top pods -A --context cluster4 --no-headers | sort -nr -k4 | head -1 28 | kube-system kube-apiserver-cluster4-controlplane 43m 266Mi 29 | 30 | student-node ~ ➜ 31 | 32 | Using this, find the pod that uses most memory. In this case, it is backend-cka06-arch on cluster3. 33 | 34 | Save the result in the correct format to the file: 35 | 36 | student-node ~ ➜ echo cluster3,default,backend-cka06-arch > /opt/high_memory_pod 37 | =========================== 38 | 39 | Que: 40 | 41 | There is a Cronjob called orange-cron-cka10-trb which is supposed to run every two minutes (i.e 13:02, 13:04, 13:06…14:02, 14:04…and so on). This cron targets the application running inside the orange-app-cka10-trb pod to make sure the app is accessible. The application has been exposed internally as a ClusterIP service. 42 | 43 | However, this cron is not running as per the expected schedule and is not running as intended. 44 | 45 | Make the appropriate changes so that the cronjob runs as per the required schedule and it passes the accessibility checks every-time. 46 | 47 | Cron is running as per the required schedule? 48 | 49 | Cronjob is fixed? 50 | 51 | Look for completed Cron jobs? 52 | 53 | Ans: 54 | ----- 55 | Check the cron schedule 56 | kubectl get cronjob 57 | Make sure the schedule for orange-cron-cka10-trb crontjob is set to */2 * * * * if not then edit it. 58 | 59 | Also before that look for the issues why this cron is failing 60 | 61 | kubectl logs orange-cron-cka10-trb-xxxx 62 | You will see some error like 63 | 64 | curl: (6) Could not resolve host: orange-app-cka10-trb 65 | You will notice that the curl is trying to hit orange-app-cka10-trb directly but it is supposed to hit the relevant service which is orange-svc-cka10-trb so we need to fix the curl command. 66 | 67 | Edit the cronjob 68 | kubectl edit cronjob orange-cron-cka10-trb 69 | Change schedule * * * * * to */2 * * * * 70 | Change command curl orange-app-cka10-trb to curl orange-svc-cka10-trb 71 | Wait for 2 minutes to run again this cron and it should complete now. 
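A non-interactive alternative to kubectl edit, in case you prefer it: the schedule can be patched directly and the container command verified with jsonpath. This is only a sketch based on the object names given in the question; the container index assumes the job template has a single container.

kubectl patch cronjob orange-cron-cka10-trb -p '{"spec":{"schedule":"*/2 * * * *"}}'
kubectl get cronjob orange-cron-cka10-trb -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].command}'

Afterwards, kubectl get jobs --watch should show the new jobs completing every two minutes.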
72 | 73 | ----- 74 | 75 | Below is Wrong: 76 | ---- 77 | kubectl get pods 78 | kubectl get cronjobs 79 | kubectl get cronjobs.batch 80 | kubectl get svc 81 | kubectl exec -it -- /bin/bash 82 | #curl orange-svc-cka10-trb 83 | kubectl edit cronjobs 84 | > correct the command to make sure it will try to resolve to the service (orange-svc-cka10-trb) 85 | 86 | ================================ 87 | 88 | Que: 89 | 90 | We have deployed a simple web application called frontend-wl04 on cluster1. This version of the application has some issues from a security point of view and needs to be updated to version 2. 91 | 92 | Update the image and wait for the application to fully deploy. 93 | 94 | You can verify the running application using the curl command on the terminal: 95 | 96 | student-node ~ ➜ curl http://cluster1-node01:30080 97 | 98 | Hello from Flask 99 | 100 |
Hello from frontend-wl04-84fc69bd96-p7rbl!
Application Version: v1
115 | student-node ~ ➜ 116 | 117 | 118 | Version 2 Image details as follows: 119 | 120 | 1. Current version of the image is `v1`, we need to update with the image to kodekloud/webapp-color:v2. 121 | 2. Use the imperative command to update the image. 122 | 123 | 124 | 125 | 126 | 127 | Deployment is running with the new image? 128 | 129 | 130 | Ans: correct 131 | ---- 132 | 133 | kubectl get deploy 134 | kubectl describe deploy frontend-wl04 135 | curl http://cluster1-node01:30080 136 | kubectl set image deployment frontend-wl04 simple-webapp=kodekloud/webapp-color:v2 137 | curl http://cluster1-node01:30080 >> To verify the upgraded version. 138 | 139 | ====================================================== 140 | 141 | 142 | Que: 143 | --- 144 | 145 | The deployment called trace-wl08 inside the test-wl08 namespace on cluster1 has undergone several, routine, rolling updates and rollbacks. 146 | 147 | 148 | Inspect the revision 2 of this deployment and store the image name that was used in this revision in the /opt/trace-wl08-revision-book.txt file on the student-node. 149 | 150 | 151 | 152 | Deployment is running? 153 | 154 | Revision 2 image saved to file? 155 | 156 | 157 | Ans: correct 158 | --- 159 | 160 | kubectl config set-context --current --namespace=test-wl08 161 | kubectl get pods 162 | kubectl get deploy 163 | kubectl rollout history deployment trace-wl08 164 | kubectl rollout history deployment trace-wl08 --revision=2 165 | kubectl rollout history deployment trace-wl08 --revision=2 | grep -i image 166 | kubectl rollout history deployment trace-wl08 --revision=2 | grep -i image > /opt/trace-wl08-revision-book.txt 167 | 168 | ========================================================= 169 | 170 | Que: 171 | --- 172 | 173 | Create a persistent volume called red-pv-cka03-str of type: hostPath type, use the path /opt/red-pv-cka03-str and capacity: 100Mi. 174 | 175 | Ans: Correct 176 | --- 177 | 178 | student-node ~ ➜ cat red-pv.yaml 179 | apiVersion: v1 180 | kind: PersistentVolume 181 | metadata: 182 | name: red-pv-cka03-str 183 | spec: 184 | accessModes: 185 | - ReadWriteMany 186 | capacity: 187 | storage: 100Mi 188 | hostPath: 189 | path: /opt/red-pv-cka03-str 190 | 191 | student-node ~ ➜ 192 | 193 | ============================================================== 194 | 195 | Que: 196 | --- 197 | 198 | We have created a service account called red-sa-cka23-arch, a cluster role called red-role-cka23-arch and a cluster role binding called red-role-binding-cka23-arch. 
199 | 200 | 201 | Identify the permissions of this service account and write down the answer in file /opt/red-sa-cka23-arch in format resource:pods|verbs:get,list on student-node 202 | 203 | Ans: 204 | --- 205 | 206 | Get the red-role-cka23-arch role permissions: 207 | 208 | student-node ~ ➜ kubectl get clusterrole red-role-cka23-arch -o json --context cluster1 209 | 210 | { 211 | "apiVersion": "rbac.authorization.k8s.io/v1", 212 | "kind": "ClusterRole", 213 | "metadata": { 214 | "creationTimestamp": "2022-10-20T07:16:39Z", 215 | "name": "red-role-cka23-arch", 216 | "resourceVersion": "16324", 217 | "uid": "e53cef4f-ae1b-49f7-b9fa-ac5e7e22a61c" 218 | }, 219 | "rules": [ 220 | { 221 | "apiGroups": [ 222 | "apps" 223 | ], 224 | "resources": [ 225 | "deployments" 226 | ], 227 | "verbs": [ 228 | "get", 229 | "list", 230 | "watch" 231 | ] 232 | } 233 | ] 234 | } 235 | 236 | 237 | 238 | In this case, add data in file as below: 239 | 240 | student-node ~ ➜ echo "resource:deployments|verbs:get,list,watch" > /opt/red-sa-cka23-arch 241 | 242 | ===================================== 243 | 244 | Que: 245 | --- 246 | 247 | There is a sample script located at /root/service-cka25-arch.sh on the student-node. 248 | Update this script to add a command to filter/display the targetPort only for service service-cka25-arch using jsonpath. The service has been created under the default namespace on cluster1 249 | 250 | Ans: 251 | --- 252 | 253 | Update service-cka25-arch.sh script: 254 | 255 | student-node ~ ➜ vi service-cka25-arch.sh 256 | 257 | 258 | 259 | Add below command in it: 260 | 261 | kubectl --context cluster1 get service service-cka25-arch -o jsonpath='{.spec.ports[0].targetPort}' 262 | 263 | ================================= 264 | 265 | 266 | Que: 267 | --- 268 | A template to create a Kubernetes pod is stored at /root/red-probe-cka12-trb.yaml on the student-node. However, using this template as-is is resulting in an error. 269 | 270 | 271 | Fix the issue with this template and use it to create the pod. Once created, watch the pod for a minute or two to make sure its stable i.e, it's not crashing or restarting. 272 | 273 | 274 | Make sure you do not update the args: section of the template. 275 | 276 | 277 | Ans: 278 | --- 279 | 280 | Try to apply the template 281 | kubectl apply -f red-probe-cka12-trb.yaml 282 | You will see error: 283 | 284 | error: error validating "red-probe-cka12-trb.yaml": error validating data: [ValidationError(Pod.spec.containers[0].livenessProbe.httpGet): unknown field "command" in io.k8s.api.core.v1.HTTPGetAction, ValidationError(Pod.spec.containers[0].livenessProbe.httpGet): missing required field "port" in io.k8s.api.core.v1.HTTPGetAction]; if you choose to ignore these errors, turn validation off with --validate=false 285 | From the error you can see that the error is for liveness probe, so let's open the template to find out: 286 | 287 | vi red-probe-cka12-trb.yaml 288 | Under livenessProbe: you will see the type is httpGet however the rest of the options are command based so this probe should be of exec type. 289 | 290 | Change httpGet to exec 291 | Try to apply the template now 292 | kubectl apply -f red-probe-cka12-trb.yaml 293 | Cool it worked, now let's watch the POD status, after few seconds you will notice that POD is restarting. 
So let's check the logs/events 294 | 295 | kubectl get event --field-selector involvedObject.name=red-probe-cka12-trb 296 | You will see an error like: 297 | 298 | 21s Warning Unhealthy pod/red-probe-cka12-trb Liveness probe failed: cat: can't open '/healthcheck': No such file or directory 299 | 300 | So seems like Liveness probe is failing, lets look into it: 301 | 302 | vi red-probe-cka12-trb.yaml 303 | 304 | Notice the command - sleep 3 ; touch /healthcheck; sleep 30;sleep 30000 it starts with a delay of 3 seconds, but the liveness probe initialDelaySeconds is set to 1 and failureThreshold is also 1. Which means the POD will fail just after first attempt of liveness check which will happen just after 1 second of pod start. So to make it stable we must increase the initialDelaySeconds to at least 5 305 | 306 | vi red-probe-cka12-trb.yaml 307 | Change initialDelaySeconds from 1 to 5 and save apply the changes. 308 | Delete old pod: 309 | 310 | kubectl delete pod red-probe-cka12-trb 311 | Apply changes: 312 | 313 | kubectl apply -f red-probe-cka12-trb.yaml 314 | 315 | ============================================================ 316 | 317 | 318 | Que: 319 | --- 320 | 321 | The db-deployment-cka05-trb deployment is having 0 out of 1 PODs ready. 322 | 323 | 324 | Figure out the issues and fix the same but make sure that you do not remove any DB related environment variables from the deployment/pod. 325 | 326 | 327 | Ans: 328 | --- 329 | 330 | Find out the name of the DB POD: 331 | kubectl get pod 332 | Check the DB POD logs: 333 | kubectl logs 334 | You might see something like as below which is not that helpful: 335 | 336 | Error from server (BadRequest): container "db" in pod "db-deployment-cka05-trb-7457c469b7-zbvx6" is waiting to start: CreateContainerConfigError 337 | 338 | So let's look into the kubernetes events for this pod: 339 | 340 | kubectl get event --field-selector involvedObject.name= 341 | You will see some errors as below: 342 | 343 | Error: couldn't find key db in Secret default/db-cka05-trb 344 | 345 | Now let's look into all secrets: 346 | 347 | kubectl get secrets db-root-pass-cka05-trb -o yaml 348 | kubectl get secrets db-user-pass-cka05-trb -o yaml 349 | kubectl get secrets db-cka05-trb -o yaml 350 | 351 | Now let's look into the deployment. 352 | 353 | Edit the deployment 354 | kubectl edit deployment db-deployment-cka05-trb -o yaml 355 | You will notice that some of the keys are different what are reffered in the deployment. 356 | 357 | Change some env keys: db to database , db-user to username and db-password to password 358 | Change a secret reference: db-user-cka05-trb to db-user-pass-cka05-trb 359 | Finally save the changes. 360 | 361 | ==================================================== 362 | 363 | Que: 364 | --- 365 | The deployment called web-dp-cka17-trb has 0 out of 1 pods up and running. Troubleshoot this issue and fix it. Make sure all required POD(s) are in running state and stable (not restarting). 366 | 367 | The application runs on port 80 inside the container and is exposed on the node port 30090. 368 | 369 | Ans: 370 | --- 371 | 372 | List out the PODs 373 | kubectl get pod 374 | 375 | Let's look into the relevant events: 376 | 377 | kubectl get event --field-selector involvedObject.name= 378 | 379 | You should see some errors as below: 380 | 381 | Warning FailedScheduling pod/web-dp-cka17-trb-9bdd6779-fm95t 0/3 nodes are available: 3 persistentvolumeclaim "web-pvc-cka17-trbl" not found. 
preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. 382 | From the error we can see that its something related to the PVCs. So let' look into that. 383 | 384 | kubectl get pv 385 | kubectl get pvc 386 | 387 | You will notice that web-pvc-cka17-trb is stuck in pending and also the capacity of web-pv-cka17-trb volume is 100Mi. 388 | 389 | Now let's dig more into the PVC: 390 | 391 | kubectl get pvc web-pvc-cka17-trb -o yaml 392 | Notice the storage which is 150Mi which means its trying to claim 150Mi of storage from a 100Mi PV. So let's edit this PV. 393 | 394 | kubectl edit pv web-pv-cka17-trb 395 | Change storage: 100Mi to storage: 150Mi 396 | Check again the pvc 397 | kubectl get pvc 398 | web-pvc-cka17-trb should be good now. let's see the PODs 399 | 400 | kubectl get pod 401 | POD should not be in pending state now but it must be crashing with Init:CrashLoopBackOff status, which means somehow the init container is crashing. So let's check the logs. 402 | 403 | kubectl get event --field-selector involvedObject.name= 404 | You should see someting like 405 | 406 | Warning Failed pod/web-dp-cka17-trb-67c9bdcd85-4tvpr Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bsh\\": stat /bin/bsh\: no such file or directory: unknown 407 | 408 | Let's look into the deployment: 409 | 410 | kubectl edit deploy web-dp-cka17-trb 411 | Under initContainers: -> - command: change /bin/bsh\ to /bin/bash 412 | let's see the PODs 413 | kubectl get pod 414 | 415 | Wait for some time to make sure it is stable, but you will notice that its restart so still something must be wrong. 416 | 417 | So let's check the events again. 418 | 419 | kubectl get event --field-selector involvedObject.name= 420 | 421 | You should see someting like 422 | 423 | Warning Unhealthy pod/web-dp-cka17-trb-647f69f8bd-67xmx Liveness probe failed: Get "http://10.50.64.1:81/": dial tcp 10.50.64.1:81: connect: connection refused 424 | 425 | Seems like its not able to connect to a service, let's look into the deployment to understand 426 | 427 | kubectl edit deploy web-dp-cka17-trb 428 | 429 | Notice that containerPort: 80 but under livenessProbe: the port: 81 so seems like livenessProbe is using wrong port. let's change port: 81 to port: 80 430 | 431 | See the PODs now 432 | 433 | kubectl get pod 434 | It should be good now. 435 | 436 | ========================================================== 437 | 438 | 439 | Que: 440 | ---- 441 | There is a pod called pink-pod-cka16-trb created in the default namespace in cluster4. This app runs on port tcp/5000 and it is exposed to end-users using an ingress resource called pink-ing-cka16-trb in such a way that it is supposed to be accessible using the command: curl http://kodekloud-pink.app on cluster4-controlplane host. 442 | 443 | However, this is not working. Troubleshoot and fix this issue, making any necessary to the objects. 444 | 445 | Note: You should be able to ssh into the cluster4-controlplane using ssh cluster4-controlplane command. 446 | 447 | 448 | Ans: 449 | --- 450 | SSH into the cluster4-controlplane host and try to access the app. 451 | 452 | ssh cluster4-controlplane 453 | curl kodekloud-pink.app 454 | 455 | You must be getting 503 Service Temporarily Unavailabl error. 
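A 503 from the ingress controller generally means it has no healthy backend to forward to. To confirm which service and port the ingress routes to (ingress name taken from the question), something like this can help:

kubectl describe ingress pink-ing-cka16-trb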
456 | Let's look into the service: 457 | 458 | kubectl edit svc pink-svc-cka16-trb 459 | 460 | Under ports: change protocol: UDP to protocol: TCP 461 | 462 | Try to access the app again 463 | 464 | curl kodekloud-pink.app 465 | You must be getting curl: (6) Could not resolve host: example.com error, 466 | 467 | from the error we can see that its not able to resolve example.com host which indicated that it can be some issue related to the DNS. As we know CoreDNS is a DNS server that can serve as the Kubernetes cluster DNS, so it can be something related to CoreDNS. 468 | 469 | Let's check if we have CoreDNS deployment running: 470 | 471 | kubectl get deploy -n kube-system 472 | 473 | You will see that for coredns all relicas are down, you will see 0/0 ready pods. So let's scale up this deployment. 474 | 475 | kubectl scale --replicas=2 deployment coredns -n kube-system 476 | 477 | Once CoreDBS is up let's try to access to app again. 478 | 479 | curl kodekloud-pink.app 480 | It should work now. 481 | 482 | ============================================= 483 | 484 | Que: 485 | --- 486 | 487 | The yello-cka20-trb pod is stuck in a Pending state. Fix this issue and get it to a running state. Recreate the pod if necessary. 488 | 489 | Do not remove any of the existing taints that are set on the cluster nodes. 490 | 491 | Ans: 492 | --- 493 | 494 | Let's check the POD status 495 | 496 | kubectl get pod --context=cluster2 497 | 498 | So you will see that yello-cka20-trb pod is in Pending state. Let's check out the relevant events. 499 | 500 | kubectl get event --field-selector involvedObject.name=yello-cka20-trb --context=cluster2 501 | 502 | You will see some errors like: 503 | 504 | Warning FailedScheduling pod/yello-cka20-trb 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node: node01}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling. 505 | 506 | Notice this error 1 node(s) had untolerated taint {node: node01} so we can see that one of nodes have taints applied. We don't want to remove the node taints and we are not going to re-create the POD so let's look into the POD config if its using any other toleration settings. 507 | 508 | kubectl get pod yello-cka20-trb --context=cluster2 -o yaml 509 | 510 | You will notice this in the output 511 | 512 | tolerations: 513 | - effect: NoSchedule 514 | key: node 515 | operator: Equal 516 | value: cluster2-node01 517 | 518 | Here notice that the value for key node is cluster2-node01 but the node has different value applied i.e node01 so let's update the taints values for the node as needed. 519 | 520 | kubectl --context=cluster2 taint nodes cluster2-node01 node=cluster2-node01:NoSchedule --overwrite=true 521 | Let's check the POD status again 522 | 523 | kubectl get pod --context=cluster2 524 | It should be in Running state now. 525 | 526 | ================================================= 527 | 528 | Que: 529 | --- 530 | We want to deploy a python based application on the cluster using a template located at /root/olive-app-cka10-str.yaml on student-node. However, before you proceed we need to make some modifications to the YAML file as per details given below: 531 | 532 | 533 | The YAML should also contain a persistent volume claim with name olive-pvc-cka10-str to claim a 100Mi of storage from olive-pv-cka10-str PV. 
534 | 535 | 536 | Update the deployment to add a sidecar container, which can use busybox image (you might need to add a sleep command for this container to keep it running.) 537 | 538 | Share the python-data volume with this container and mount the same at path /usr/src. Make sure this container only has read permissions on this volume. 539 | 540 | 541 | Finally, create a pod using this YAML and make sure the POD is in Running state. 542 | 543 | 544 | Ans: 545 | --- 546 | Update olive-app-cka10-str.yaml template so that it looks like as below: 547 | 548 | --- 549 | kind: PersistentVolumeClaim 550 | apiVersion: v1 551 | metadata: 552 | name: olive-pvc-cka10-str 553 | spec: 554 | accessModes: 555 | - ReadWriteMany 556 | storageClassName: olive-stc-cka10-str 557 | volumeName: olive-pv-cka10-str 558 | resources: 559 | requests: 560 | storage: 100Mi 561 | 562 | --- 563 | apiVersion: apps/v1 564 | kind: Deployment 565 | metadata: 566 | name: olive-app-cka10-str 567 | spec: 568 | replicas: 1 569 | template: 570 | metadata: 571 | labels: 572 | app: olive-app-cka10-str 573 | spec: 574 | affinity: 575 | nodeAffinity: 576 | requiredDuringSchedulingIgnoredDuringExecution: 577 | nodeSelectorTerms: 578 | - matchExpressions: 579 | - key: kubernetes.io/hostname 580 | operator: In 581 | values: 582 | - cluster1-node01 583 | containers: 584 | - name: python 585 | image: poroko/flask-demo-app 586 | ports: 587 | - containerPort: 5000 588 | volumeMounts: 589 | - name: python-data 590 | mountPath: /usr/share/ 591 | - name: busybox 592 | image: busybox 593 | command: 594 | - "bin/sh" 595 | - "-c" 596 | - "sleep 10000" 597 | volumeMounts: 598 | - name: python-data 599 | mountPath: "/usr/src" 600 | readOnly: true 601 | volumes: 602 | - name: python-data 603 | persistentVolumeClaim: 604 | claimName: olive-pvc-cka10-str 605 | selector: 606 | matchLabels: 607 | app: olive-app-cka10-str 608 | 609 | --- 610 | apiVersion: v1 611 | kind: Service 612 | metadata: 613 | name: olive-svc-cka10-str 614 | spec: 615 | type: NodePort 616 | ports: 617 | - port: 5000 618 | nodePort: 32006 619 | selector: 620 | app: olive-app-cka10-str 621 | 622 | Apply the template: 623 | 624 | kubectl apply -f olive-app-cka10-str.yaml 625 | 626 | ==================================================== 627 | 628 | Que: 629 | --- 630 | 631 | Part I: 632 | 633 | 634 | 635 | Create a ClusterIP service .i.e. service-3421-svcn in the spectra-1267 ns which should expose the pods namely pod-23 and pod-21 with port set to 8080 and targetport to 80. 636 | 637 | 638 | 639 | Part II: 640 | 641 | 642 | 643 | Store the pod names and their ip addresses from the spectra-1267 ns at /root/pod_ips_cka05_svcn where the output is sorted by their IP's. 644 | 645 | Please ensure the format as shown below: 646 | 647 | 648 | 649 | POD_NAME IP_ADDR 650 | pod-1 ip-1 651 | pod-3 ip-2 652 | pod-2 ip-3 653 | 654 | 655 | Ans: 656 | --- 657 | The easiest way to route traffic to a specific pod is by the use of labels and selectors . 
List the pods along with their labels: 658 | 659 | 660 | 661 | student-node ~ ➜ kubectl get pods --show-labels -n spectra-1267 662 | NAME READY STATUS RESTARTS AGE LABELS 663 | pod-12 1/1 Running 0 5m21s env=dev,mode=standard,type=external 664 | pod-34 1/1 Running 0 5m20s env=dev,mode=standard,type=internal 665 | pod-43 1/1 Running 0 5m20s env=prod,mode=exam,type=internal 666 | pod-23 1/1 Running 0 5m21s env=dev,mode=exam,type=external 667 | pod-32 1/1 Running 0 5m20s env=prod,mode=standard,type=internal 668 | pod-21 1/1 Running 0 5m20s env=prod,mode=exam,type=external 669 | 670 | 671 | 672 | Looks like there are a lot of pods created to confuse us. But we are only concerned with the labels of pod-23 and pod-21. 673 | 674 | 675 | 676 | As we can see both the required pods have labels mode=exam,type=external in common. Let's confirm that using kubectl too: 677 | 678 | 679 | 680 | student-node ~ ➜ kubectl get pod -l mode=exam,type=external -n spectra-1267 681 | NAME READY STATUS RESTARTS AGE 682 | pod-23 1/1 Running 0 9m18s 683 | pod-21 1/1 Running 0 9m17s 684 | 685 | 686 | 687 | Nice!! Now as we have figured out the labels, we can proceed further with the creation of the service: 688 | 689 | 690 | 691 | student-node ~ ➜ kubectl create service clusterip service-3421-svcn -n spectra-1267 --tcp=8080:80 --dry-run=client -o yaml > service-3421-svcn.yaml 692 | 693 | 694 | 695 | Now modify the service definition with selectors as required before applying to k8s cluster: 696 | 697 | 698 | 699 | student-node ~ ➜ cat service-3421-svcn.yaml 700 | apiVersion: v1 701 | kind: Service 702 | metadata: 703 | creationTimestamp: null 704 | labels: 705 | app: service-3421-svcn 706 | name: service-3421-svcn 707 | namespace: spectra-1267 708 | spec: 709 | ports: 710 | - name: 8080-80 711 | port: 8080 712 | protocol: TCP 713 | targetPort: 80 714 | selector: 715 | app: service-3421-svcn # delete 716 | mode: exam # add 717 | type: external # add 718 | type: ClusterIP 719 | status: 720 | loadBalancer: {} 721 | 722 | 723 | 724 | Finally let's apply the service definition: 725 | 726 | 727 | 728 | student-node ~ ➜ kubectl apply -f service-3421-svcn.yaml 729 | service/service-3421 created 730 | 731 | student-node ~ ➜ k get ep service-3421-svcn -n spectra-1267 732 | NAME ENDPOINTS AGE 733 | service-3421 10.42.0.15:80,10.42.0.17:80 52s 734 | 735 | 736 | 737 | To store all the pod name along with their IP's , we could use imperative command as shown below: 738 | 739 | 740 | 741 | student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP 742 | 743 | POD_NAME IP_ADDR 744 | pod-12 10.42.0.18 745 | pod-23 10.42.0.19 746 | pod-34 10.42.0.20 747 | pod-21 10.42.0.21 748 | ... 749 | 750 | # store the output to /root/pod_ips 751 | student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP > /root/pod_ips_cka05_svcn 752 | 753 | ================================================== 754 | 755 | Que: 756 | --- 757 | 758 | Deploy a messaging-cka07-svcn pod using the redis:alpine image with the labels set to tier=msg. 759 | 760 | Now create a service messaging-service-cka07-svcn to expose the messaging-cka07-svcn application within the cluster on port 6379. 
761 | 762 | Ans: 763 | --- 764 | 765 | On student-node, use the command 766 | #kubectl run messaging-cka07-svcn --image=redis:alpine -l tier=msg 767 | 768 | Now run the command: 769 | kubectl expose pod messaging-cka07-svcn --port=6379 --name messaging-service-cka07-svcn. -------------------------------------------------------------------------------- /Kubernetes-Mock-Exam-2.txt: -------------------------------------------------------------------------------- 1 | 2 | Que: 3 | --- 4 | Run a pod called alpine-sleeper-cka15-arch using the alpine image in the default namespace that will sleep for 7200 seconds. 5 | 6 | 7 | Ans: 8 | --- 9 | Create the pod definition: 10 | 11 | student-node ~ ➜ vi alpine-sleeper-cka15-arch.yaml 12 | 13 | 14 | 15 | ##Content should be: 16 | 17 | apiVersion: v1 18 | kind: Pod 19 | metadata: 20 | name: alpine-sleeper-cka15-arch 21 | spec: 22 | containers: 23 | - name: alpine 24 | image: alpine 25 | command: ["/bin/sh", "-c", "sleep 7200"] 26 | 27 | ================================================ 28 | 29 | Que: 30 | --- 31 | We have created a service account called red-sa-cka23-arch, a cluster role called red-role-cka23-arch and a cluster role binding called red-role-binding-cka23-arch. 32 | 33 | 34 | Identify the permissions of this service account and write down the answer in file /opt/red-sa-cka23-arch in format resource:pods|verbs:get,list on student-node 35 | 36 | 37 | Ans: 38 | --- 39 | Get the red-role-cka23-arch role permissions: 40 | 41 | student-node ~ ➜ kubectl get clusterrole red-role-cka23-arch -o json --context cluster1 42 | 43 | { 44 | "apiVersion": "rbac.authorization.k8s.io/v1", 45 | "kind": "ClusterRole", 46 | "metadata": { 47 | "creationTimestamp": "2022-10-20T07:16:39Z", 48 | "name": "red-role-cka23-arch", 49 | "resourceVersion": "16324", 50 | "uid": "e53cef4f-ae1b-49f7-b9fa-ac5e7e22a61c" 51 | }, 52 | "rules": [ 53 | { 54 | "apiGroups": [ 55 | "apps" 56 | ], 57 | "resources": [ 58 | "deployments" 59 | ], 60 | "verbs": [ 61 | "get", 62 | "list", 63 | "watch" 64 | ] 65 | } 66 | ] 67 | } 68 | 69 | 70 | 71 | In this case, add data in file as below: 72 | student-node ~ ➜ echo "resource:deployments|verbs:get,list,watch" > /opt/red-sa-cka23-arch 73 | 74 | ===================================================== 75 | 76 | Que: 77 | --- 78 | There is a Cronjob called orange-cron-cka10-trb which is supposed to run every two minutes (i.e 13:02, 13:04, 13:06…14:02, 14:04…and so on). This cron targets the application running inside the orange-app-cka10-trb pod to make sure the app is accessible. The application has been exposed internally as a ClusterIP service. 79 | 80 | 81 | However, this cron is not running as per the expected schedule and is not running as intended. 82 | 83 | 84 | Make the appropriate changes so that the cronjob runs as per the required schedule and it passes the accessibility checks every-time. 85 | 86 | 87 | Ans: 88 | --- 89 | Check the cron schedule 90 | kubectl get cronjob 91 | Make sure the schedule for orange-cron-cka10-trb crontjob is set to */2 * * * * if not then edit it. 92 | 93 | Also before that look for the issues why this cron is failing 94 | 95 | kubectl logs orange-cron-cka10-trb-xxxx 96 | You will see some error like 97 | 98 | curl: (6) Could not resolve host: orange-app-cka10-trb 99 | You will notice that the curl is trying to hit orange-app-cka10-trb directly but it is supposed to hit the relevant service which is orange-svc-cka10-trb so we need to fix the curl command. 
100 | 101 | Edit the cronjob 102 | kubectl edit cronjob orange-cron-cka10-trb 103 | Change schedule * * * * * to */2 * * * * 104 | Change command curl orange-app-cka10-trb to curl orange-svc-cka10-trb 105 | Wait for 2 minutes to run again this cron and it should complete now. 106 | 107 | ========================================================= 108 | 109 | Que: 110 | --- 111 | The blue-dp-cka09-trb deployment is having 0 out of 1 pods running. Fix the issue to make sure that pod is up and running. 112 | 113 | Ans: 114 | --- 115 | List the pods 116 | 117 | kubectl get pod 118 | 119 | Most probably you see Init:Error or Init:CrashLoopBackOff for the corresponding pod. 120 | 121 | Look into the logs 122 | 123 | kubectl logs blue-dp-cka09-trb-xxxx -c init-container 124 | 125 | You will see an error something like 126 | 127 | sh: can't open 'echo 'Welcome!'': No such file or directory 128 | 129 | Edit the deployment 130 | 131 | kubectl edit deploy blue-dp-cka09-trb 132 | 133 | Under initContainers: -> - command: add -c to the next line of - sh, so final command should look like this 134 | initContainers: 135 | - command: 136 | - sh 137 | - -c 138 | - echo 'Welcome!' 139 | If you will check pod then it must be failing again but with different error this time, let's find that out 140 | 141 | kubectl get event --field-selector involvedObject.name=blue-dp-cka09-trb-xxxxx 142 | You will see an error something like 143 | 144 | Warning Failed pod/blue-dp-cka09-trb-69dd844f76-rv9z8 Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/98182a41-6d6d-406a-a3e2-37c33036acac/volumes/kubernetes.io~configmap/nginx-config" to rootfs at "/etc/nginx/nginx.conf": mount /var/lib/kubelet/pods/98182a41-6d6d-406a-a3e2-37c33036acac/volumes/kubernetes.io~configmap/nginx-config:/etc/nginx/nginx.conf (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown 145 | 146 | Edit the deployment again 147 | 148 | kubectl edit deploy blue-dp-cka09-trb 149 | 150 | Under volumeMounts: -> - mountPath: /etc/nginx/nginx.conf -> name: nginx-config add subPath: nginx.conf and save the changes. 151 | Finally the pod should be in running state. 152 | 153 | ====================================================== 154 | 155 | Que: 156 | --- 157 | The deployment called web-dp-cka17-trb has 0 out of 1 pods up and running. Troubleshoot this issue and fix it. Make sure all required POD(s) are in running state and stable (not restarting). 158 | 159 | The application runs on port 80 inside the container and is exposed on the node port 30090. 160 | 161 | 162 | Ans: 163 | --- 164 | List out the PODs 165 | kubectl get pod 166 | Let's look into the relevant events: 167 | 168 | kubectl get event --field-selector involvedObject.name= 169 | You should see some errors as below: 170 | 171 | Warning FailedScheduling pod/web-dp-cka17-trb-9bdd6779-fm95t 0/3 nodes are available: 3 persistentvolumeclaim "web-pvc-cka17-trbl" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. 172 | From the error we can see that its something related to the PVCs. So let' look into that. 173 | 174 | kubectl get pv 175 | kubectl get pvc 176 | 177 | You will notice that web-pvc-cka17-trb is stuck in pending and also the capacity of web-pv-cka17-trb volume is 100Mi. 
178 | Now let's dig more into the PVC: 179 | 180 | kubectl get pvc web-pvc-cka17-trb -o yaml 181 | 182 | Notice the storage which is 150Mi which means its trying to claim 150Mi of storage from a 100Mi PV. So let's edit this PV. 183 | 184 | kubectl edit pv web-pv-cka17-trb 185 | 186 | Change storage: 100Mi to storage: 150Mi 187 | Check again the pvc 188 | kubectl get pvc 189 | 190 | web-pvc-cka17-trb should be good now. let's see the PODs 191 | 192 | kubectl get pod 193 | POD should not be in pending state now but it must be crashing with Init:CrashLoopBackOff status, which means somehow the init container is crashing. So let's check the logs. 194 | 195 | kubectl get event --field-selector involvedObject.name= 196 | You should see someting like 197 | 198 | Warning Failed pod/web-dp-cka17-trb-67c9bdcd85-4tvpr Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bsh\\": stat /bin/bsh\: no such file or directory: unknown 199 | Let's look into the deployment: 200 | 201 | kubectl edit deploy web-dp-cka17-trb 202 | Under initContainers: -> - command: change /bin/bsh\ to /bin/bash 203 | let's see the PODs 204 | kubectl get pod 205 | Wait for some time to make sure it is stable, but you will notice that its restart so still something must be wrong. 206 | 207 | So let's check the events again. 208 | 209 | kubectl get event --field-selector involvedObject.name= 210 | You should see someting like 211 | 212 | Warning Unhealthy pod/web-dp-cka17-trb-647f69f8bd-67xmx Liveness probe failed: Get "http://10.50.64.1:81/": dial tcp 10.50.64.1:81: connect: connection refused 213 | Seems like its not able to connect to a service, let's look into the deployment to understand 214 | 215 | kubectl edit deploy web-dp-cka17-trb 216 | Notice that containerPort: 80 but under livenessProbe: the port: 81 so seems like livenessProbe is using wrong port. let's change port: 81 to port: 80 217 | 218 | See the PODs now 219 | 220 | kubectl get pod 221 | It should be good now. 222 | 223 | ============================================================ 224 | 225 | Que: 226 | --- 227 | We tried to schedule grey-cka21-trb pod on cluster4 which was supposed to be deployed by the kubernetes scheduler so far but somehow its stuck in Pending state. Look into the issue and fix the same, make sure the pod is in Running state. 228 | 229 | 230 | You can SSH into the cluster4 using ssh cluster4-controlplane command. 231 | 232 | 233 | Ans: 234 | --- 235 | Follow below given steps 236 | 237 | Let's check the POD status 238 | 239 | kubectl get pod --context=cluster4 240 | 241 | You will see that grey-cka21-trb pod is stuck in Pending state. So let's try to look into the logs and events 242 | 243 | kubectl logs grey-cka21-trb --context=cluster4 244 | kubectl get event --context=cluster4 --field-selector involvedObject.name=grey-cka21-trb 245 | 246 | You might not find any relevant info in the logs/events. 
Let's check the status of the kube-scheduler pod 247 | 248 | kubectl get pod --context=cluster4 -n kube-system 249 | You will notice that kube-scheduler-cluster4-controlplane pod is crashing, let's look into its logs 250 | 251 | kubectl logs kube-scheduler-cluster4-controlplane --context=cluster4 -n kube-system 252 | You will see an error as below: 253 | 254 | run.go:74] "command failed" err="failed to get delegated authentication kubeconfig: failed to get delegated authentication kubeconfig: stat /etc/kubernetes/scheduler.config: no such file or directory" 255 | 256 | From the logs we can see that its looking for a file called /etc/kubernetes/scheduler.config which seems incorrect, let's look into the kube-scheduler manifest on cluster4. 257 | 258 | ssh cluster4-controlplane 259 | First let's find out if /etc/kubernetes/scheduler.config 260 | 261 | ls /etc/kubernetes/scheduler.config 262 | You won't find it, instead the correct file is /etc/kubernetes/scheduler.conf so let's modify the manifest. 263 | 264 | vi /etc/kubernetes/manifests/kube-scheduler.yaml 265 | Search for config in the file, you will find some typos, change every occurence of /etc/kubernetes/scheduler.config to /etc/kubernetes/scheduler.conf. 266 | 267 | Let's see if kube-scheduler-cluster4-controlplane is running now 268 | 269 | kubectl get pod -A 270 | It should be good now and grey-cka21-trb should be good as well. 271 | 272 | ================================================================= 273 | 274 | Que: 275 | --- 276 | The cat-cka22-trb pod is stuck in Pending state. Look into the issue to fix the same. Make sure that the pod is in running state and its stable (i.e not restarting or crashing). 277 | 278 | 279 | Note: Do not make any changes to the pod (No changes to pod config but you may destory and re-create). 280 | 281 | 282 | Ans: 283 | --- 284 | Let's check the POD status 285 | kubectl get pod 286 | 287 | You will see that cat-cka22-trb pod is stuck in Pending state. So let's try to look into the events 288 | 289 | kubectl --context cluster2 get event --field-selector involvedObject.name=cat-cka22-trb 290 | You will see some logs as below 291 | 292 | Warning FailedScheduling pod/cat-cka22-trb 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) didn't match Pod's node affinity/selector. preemption: 0/2 nodes are available: 3 Preemption is not helpful for scheduling. 293 | 294 | So seems like this POD is using the node affinity, let's look into the POD to understand the node affinity its using. 295 | 296 | kubectl --context cluster2 get pod cat-cka22-trb -o yaml 297 | 298 | Under affinity: you will see its looking for key: node and values: cluster2-node02 so let's verify if node01 has these labels applied. 299 | 300 | kubectl --context cluster2 get node cluster2-node01 -o yaml 301 | 302 | Look under labels: and you will not find any such label, so let's add this label to this node. 
303 | 304 | kubectl label node cluster1-node01 node=cluster2-node01 305 | Check again the node details 306 | 307 | kubectl get node cluster2-node01 -o yaml 308 | The new label should be there, let's see if POD is scheduled now on this node 309 | 310 | kubectl --context cluster2 get pod 311 | Its is but it must be crashing or restarting, so let's look into the pod logs 312 | 313 | kubectl --context cluster2 logs -f cat-cka22-trb 314 | You will see logs as below: 315 | 316 | The HOST variable seems incorrect, it must be set to kodekloud 317 | 318 | Let's look into the POD env variables to see if there is any HOST env variable 319 | 320 | kubectl --context cluster2 get pod -o yaml 321 | Under env: you will see this 322 | 323 | env: 324 | - name: HOST 325 | valueFrom: 326 | secretKeyRef: 327 | key: hostname 328 | name: cat-cka22-trb 329 | So we can see that HOST variable is defined and its value is being retrieved from a secret called "cat-cka22-trb". Let's look into this secret. 330 | 331 | kubectl --context cluster2 get secret 332 | kubectl --context cluster2 get secret cat-cka22-trb -o yaml 333 | 334 | You will find a key/value pair under data:, let's try to decode it to see its value: 335 | 336 | echo " /root/CKA/nginx.svc.cka06.svcn 490 | 491 | 492 | 493 | Get the IP of the nginx-resolver-cka06-svcn pod and replace the dots(.) with hyphon(-) which will be used below. 494 | student-node ~ ➜ kubectl get pod nginx-resolver-cka06-svcn -o wide 495 | student-node ~ ➜ IP=`kubectl get pod nginx-resolver-cka06-svcn -o wide --no-headers | awk '{print $6}' | tr '.' '-'` 496 | student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup $IP.default.pod > /root/CKA/nginx.pod.cka06.svcn 497 | 498 | ================================================== 499 | 500 | Que: 501 | --- 502 | We have an external webserver running on student-node which is exposed at port 9999. We have created a service called external-webserver-cka03-svcn that can connect to our local webserver from within the kubernetes cluster3 but at the moment it is not working as expected. 503 | 504 | Fix the issue so that other pods within cluster3 can use external-webserver-cka03-svcn service to access the webserver. 505 | 506 | 507 | Ans: 508 | --- 509 | Let's check if the webserver is working or not: 510 | 511 | 512 | student-node ~ ➜ curl student-node:9999 513 | ... 514 |

Welcome to nginx!

515 | ... 516 | 517 | 518 | 519 | Now we will check if service is correctly defined: 520 | 521 | student-node ~ ➜ kubectl describe svc external-webserver-cka03-svcn 522 | 523 | Name: external-webserver-cka03-svcn 524 | Namespace: default 525 | . 526 | . 527 | Endpoints: # there are no endpoints for the service 528 | ... 529 | 530 | 531 | 532 | As we can see there is no endpoints specified for the service, hence we won't be able to get any output. Since we can not destroy any k8s object, let's create the endpoint manually for this service as shown below: 533 | 534 | 535 | student-node ~ ➜ export IP_ADDR=$(ifconfig eth0 | grep inet | awk '{print $2}') 536 | 537 | student-node ~ ➜ kubectl --context cluster3 apply -f - <Welcome to nginx! 557 | ... 558 | 559 | =========================================================== 560 | -------------------------------------------------------------------------------- /Kubernetes-Mock-Exam-3.txt: -------------------------------------------------------------------------------- 1 | Que: 2 | --- 3 | 4 | An etcd backup is already stored at the path /opt/cluster1_backup_to_restore.db on the cluster1-controlplane node. Use /root/default.etcd as the --data-dir and restore it on the cluster1-controlplane node itself. 5 | 6 | 7 | You can ssh to the controlplane node by running ssh root@cluster1-controlplane from the student-node. 8 | 9 | Ans: 10 | --- 11 | 12 | SSH into cluster1-controlplane node: 13 | 14 | student-node ~ ➜ ssh root@cluster1-controlplane 15 | 16 | 17 | 18 | Install etcd utility (if not installed already) and restore the backup: 19 | cluster1-controlplane ~ ➜ cd /tmp 20 | cluster1-controlplane ~ ➜ export RELEASE=$(curl -s https://api.github.com/repos/etcd-io/etcd/releases/latest | grep tag_name | cut -d '"' -f 4) 21 | cluster1-controlplane ~ ➜ wget https://github.com/etcd-io/etcd/releases/download/${RELEASE}/etcd-${RELEASE}-linux-amd64.tar.gz 22 | cluster1-controlplane ~ ➜ tar xvf etcd-${RELEASE}-linux-amd64.tar.gz ; cd etcd-${RELEASE}-linux-amd64 23 | cluster1-controlplane ~ ➜ mv etcd etcdctl /usr/local/bin/ 24 | cluster1-controlplane ~ ➜ etcdctl snapshot restore --data-dir /root/default.etcd /opt/cluster1_backup_to_restore.db 25 | 26 | ======================== 27 | 28 | Que: 29 | --- 30 | We have created a service account called green-sa-cka22-arch, a cluster role called green-role-cka22-arch and a cluster role binding called green-role-binding-cka22-arch. 31 | 32 | 33 | Update the permissions of this service account so that it can only get all the namespaces in cluster1. 34 | 35 | Ans: 36 | --- 37 | 38 | Edit the green-role-cka22-arch to update permissions: 39 | 40 | student-node ~ ➜ kubectl edit clusterrole green-role-cka22-arch --context cluster1 41 | 42 | 43 | 44 | At the end add below code: 45 | 46 | - apiGroups: 47 | - "*" 48 | resources: 49 | - namespaces 50 | verbs: 51 | - get 52 | 53 | 54 | 55 | You can verify it as below: 56 | 57 | student-node ~ ➜ kubectl auth can-i get namespaces --as=system:serviceaccount:default:green-sa-cka22-arch 58 | yes 59 | 60 | ========================== 61 | 62 | Que: 63 | --- 64 | 65 | Install etcd utility on cluster2-controlplane node so that we can take/restore etcd backups. 66 | 67 | 68 | You can ssh to the controlplane node by running ssh root@cluster2-controlplane from the student-node. 
69 | 70 | Ans: 71 | --- 72 | 73 | SSH into cluster2-controlplane node: 74 | 75 | student-node ~ ➜ ssh root@cluster2-controlplane 76 | 77 | 78 | 79 | Install etcd utility: 80 | cluster2-controlplane ~ ➜ cd /tmp 81 | cluster2-controlplane ~ ➜ export RELEASE=$(curl -s https://api.github.com/repos/etcd-io/etcd/releases/latest | grep tag_name | cut -d '"' -f 4) 82 | cluster2-controlplane ~ ➜ wget https://github.com/etcd-io/etcd/releases/download/${RELEASE}/etcd-${RELEASE}-linux-amd64.tar.gz 83 | cluster2-controlplane ~ ➜ tar xvf etcd-${RELEASE}-linux-amd64.tar.gz ; cd etcd-${RELEASE}-linux-amd64 84 | cluster2-controlplane ~ ➜ mv etcd etcdctl /usr/local/bin/ 85 | 86 | =========================== 87 | 88 | Que: 89 | --- 90 | ​A pod called nginx-cka01-trb is running in the default namespace. There is a container called nginx-container running inside this pod that uses the image nginx:latest. There is another sidecar container called logs-container that runs in this pod. 91 | 92 | For some reason, this pod is continuously crashing. Identify the issue and fix it. Make sure that the pod is in a running state and you are able to access the website using the curl http://kodekloud-exam.app:30001 command on the controlplane node of cluster1. 93 | 94 | 95 | Ans: 96 | --- 97 | 98 | Check the container logs: 99 | kubectl logs -f nginx-cka01-trb -c nginx-container 100 | 101 | You can see that its not able to pull the image. 102 | 103 | Edit the pod 104 | kubectl edit pod nginx-cka01-trb -o yaml 105 | 106 | Change image tag from nginx:latst to nginx:latest 107 | 108 | Let's check now if the POD is in Running state 109 | 110 | kubectl get pod 111 | You will notice that its still crashing, so check the logs again: 112 | 113 | kubectl logs -f nginx-cka01-trb -c nginx-container 114 | 115 | From the logs you will notice that nginx-container is looking good now so it might be the sidecar container that is causing 116 | issues. Let's check its logs. 117 | 118 | kubectl logs -f nginx-cka01-trb -c logs-container 119 | You will see some logs as below: 120 | 121 | cat: can't open '/var/log/httpd/access.log': No such file or directory 122 | cat: can't open '/var/log/httpd/error.log': No such file or directory 123 | 124 | Now, let's look into the sidecar container 125 | 126 | kubectl get pod nginx-cka01-trb -o yaml 127 | 128 | Under containers: check the command: section, this is the command which is failing. If you notice its looking for the logs under /var/log/httpd/ directory but the mounted volume for logs is /var/log/nginx (under volumeMounts:). So we need to fix this path: 129 | 130 | kubectl get pod nginx-cka01-trb -o yaml > /tmp/test.yaml 131 | vi /tmp/test.yaml 132 | Under command: change /var/log/httpd/access.log and /var/log/httpd/error.log to /var/log/nginx/access.log and /var/log/nginx/error.log respectively. 133 | 134 | Delete the existing POD now: 135 | 136 | kubectl delete pod nginx-cka01-trb 137 | Create new one from the template 138 | 139 | kubectl apply -f /tmp/test.yaml 140 | Let's check now if the POD is in Running state 141 | 142 | kubectl get pod 143 | It should be good now. So let's try to access the app. 144 | 145 | curl http://kodekloud-exam.app:30001 146 | You will see error 147 | 148 | curl: (7) Failed to connect to kodekloud-exam.app port 30001: Connection refused 149 | 150 | So you are not able to access the website, let's look into the service configuration. 
151 | 152 | Edit the service 153 | kubectl edit svc nginx-service-cka01-trb -o yaml 154 | 155 | Change app label under selector from httpd-app-cka01-trb to nginx-app-cka01-trb 156 | You should be able to access the website now. 157 | curl http://kodekloud-exam.app:30001 158 | 159 | =========================================== 160 | 161 | Que: 162 | --- 163 | There is a Cronjob called orange-cron-cka10-trb which is supposed to run every two minutes (i.e 13:02, 13:04, 13:06…14:02, 14:04…and so on). This cron targets the application running inside the orange-app-cka10-trb pod to make sure the app is accessible. The application has been exposed internally as a ClusterIP service. 164 | 165 | 166 | However, this cron is not running as per the expected schedule and is not running as intended. 167 | 168 | 169 | Make the appropriate changes so that the cronjob runs as per the required schedule and it passes the accessibility checks every-time. 170 | 171 | 172 | Ans: 173 | --- 174 | Check the cron schedule 175 | kubectl get cronjob 176 | Make sure the schedule for orange-cron-cka10-trb crontjob is set to */2 * * * * if not then edit it. 177 | 178 | Also before that look for the issues why this cron is failing 179 | 180 | kubectl logs orange-cron-cka10-trb-xxxx 181 | You will see some error like 182 | 183 | curl: (6) Could not resolve host: orange-app-cka10-trb 184 | You will notice that the curl is trying to hit orange-app-cka10-trb directly but it is supposed to hit the relevant service which is orange-svc-cka10-trb so we need to fix the curl command. 185 | 186 | Edit the cronjob 187 | kubectl edit cronjob orange-cron-cka10-trb 188 | Change schedule * * * * * to */2 * * * * 189 | Change command curl orange-app-cka10-trb to curl orange-svc-cka10-trb 190 | Wait for 2 minutes to run again this cron and it should complete now. 191 | 192 | ============================================== 193 | 194 | Que: 195 | --- 196 | 197 | The green-deployment-cka15-trb deployment is having some issues since the corresponding POD is crashing and restarting multiple times continuously. 198 | 199 | 200 | Investigate the issue and fix it, make sure the POD is in running state and its stable (i.e NO RESTARTS!). 201 | 202 | Ans: 203 | --- 204 | 205 | List the pods to check its status 206 | kubectl get pod 207 | its must have crashed already so lets look into the logs. 208 | 209 | kubectl logs -f green-deployment-cka15-trb-xxxx 210 | You will see some logs like these 211 | 212 | 2022-09-18 17:13:25 98 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins 213 | 2022-09-18 17:13:25 98 [Note] InnoDB: Memory barrier is not used 214 | 2022-09-18 17:13:25 98 [Note] InnoDB: Compressed tables use zlib 1.2.11 215 | 2022-09-18 17:13:25 98 [Note] InnoDB: Using Linux native AIO 216 | 2022-09-18 17:13:25 98 [Note] InnoDB: Using CPU crc32 instructions 217 | 2022-09-18 17:13:25 98 [Note] InnoDB: Initializing buffer pool, size = 128.0M 218 | Killed 219 | This might be due to the resources issue, especially the memory, so let's try to recreate the POD to see if it helps. 220 | 221 | kubectl delete pod green-deployment-cka15-trb-xxxx 222 | Now watch closely the POD status 223 | 224 | kubectl get pod 225 | Pretty soon you will see the POD status has been changed to OOMKilled which confirms its the memory issue. So let's look into the resources that are assigned to this deployment. 
226 | 227 | kubectl get deploy 228 | kubectl edit deploy green-deployment-cka15-trb 229 | Under resources: -> limits: change memory from 256Mi to 512Mi and save the changes. 230 | Now watch closely the POD status again 231 | 232 | kubectl get pod 233 | It should be stable now. 234 | 235 | ====================================== 236 | 237 | Que: 238 | --- 239 | 240 | cluster4-node01 node that belongs to cluster4 seems to be in the NotReady state. Fix the issue and make sure this node is in Ready state. 241 | 242 | 243 | 244 | Note: You can ssh into the node using ssh cluster4-node01. 245 | 246 | Ans: 247 | --- 248 | SSH into the cluster4-node01 and check if kubelet service is running 249 | ssh cluster4-node01 250 | systemctl status kubelet 251 | 252 | You will see its inactive, so try to start it. 253 | 254 | systemctl start kubelet 255 | Check again the status 256 | 257 | systemctl status kubelet 258 | Its still failing, so let's look into some latest error logs: 259 | 260 | journalctl -u kubelet --since "30 min ago" | grep 'Error:' 261 | 262 | You will see some errors as below: 263 | 264 | cluster4-node01 kubelet[6301]: Error: failed to construct kubelet dependencies: unable to load client CA file /etc/kubernetes/pki/CA.crt: open /etc/kubernetes/pki/CA.crt: no such file or directory 265 | 266 | Check if /etc/kubernetes/pki/CA.crt file exists: 267 | 268 | ls /etc/kubernetes/pki/ 269 | You will notice that the file name is ca.crt instead of CA.crt so possibly kubelet is looking for a wrong file. Let's fix the config: 270 | 271 | vi /var/lib/kubelet/config.yaml 272 | 273 | Change clientCAFile from /etc/kubernetes/pki/CA.crt to /etc/kubernetes/pki/ca.crt 274 | Try to start it again 275 | 276 | systemctl start kubelet 277 | Service should start now but there might be an error as below 278 | 279 | ReportingIn 280 | stance:""}': 'Post "https://cluster4-controlplane:6334/api/v1/namespaces/default/events": dial tcp 10.9.63.18:633 281 | 4: connect: connection refused'(may retry after sleeping) 282 | Sep 18 09:21:47 cluster4-node01 kubelet[6803]: E0918 09:21:47.641184 6803 kubelet.go:2419] "Error getting node 283 | " err="node \"cluster4-node01\" not found" 284 | 285 | You must have noticed that its trying to connect to the api server on port 6334 but the default port for kube-apiserver is 6443. Let's fix this: 286 | 287 | Edit the kubelet config 288 | vi /etc/kubernetes/kubelet.conf 289 | 290 | Change server 291 | server: https://cluster4-controlplane:6334 292 | to 293 | 294 | server: https://cluster4-controlplane:6443 295 | Finally restart kublet service 296 | systemctl restart kubelet 297 | Check from the student-node now and cluster4-node01 should be ready now. 298 | kubectl get node --context=cluster4 299 | 300 | 301 | ================================== 302 | 303 | Que: 304 | --- 305 | We recently deployed a DaemonSet called logs-cka26-trb under kube-system namespace in cluster2 for collecting logs from all the cluster nodes including the controlplane node. However, at this moment, the DaemonSet is not creating any pod on the controlplane node. 306 | 307 | 308 | Troubleshoot the issue and fix it to make sure the pods are getting created on all nodes including the controlplane node. 309 | 310 | Ans: 311 | --- 312 | Check the status of DaemonSet 313 | 314 | kubectl --context2 cluster2 get ds logs-cka26-trb -n kube-system 315 | 316 | You will find that DESIRED CURRENT READY etc have value 2 which means there are two pods that have been created. 
You can check the same by listing the PODs 317 | 318 | kubectl --context cluster2 get pod -n kube-system 319 | You can check on which nodes these are created on 320 | 321 | kubectl --context cluster2 get pod -n kube-system -o wide 322 | 323 | Under NODE you will find the node name, so we can see that it's not scheduled on the controlplane node, which is because it must be missing the required tolerations. Let's edit the DaemonSet to fix the tolerations 324 | 325 | kubectl --context cluster2 edit ds logs-cka26-trb -n kube-system 326 | Under tolerations: add the following toleration as well 327 | 328 | - key: node-role.kubernetes.io/control-plane 329 | operator: Exists 330 | effect: NoSchedule 331 | Wait for some time; the PODs should now be scheduled on all nodes, including the controlplane node. 332 | 333 | ================================ 334 | 335 | Que: 336 | --- 337 | 338 | We have deployed a 2-tier web application on the cluster3 nodes in the canara-wl05 namespace. However, at the moment, the web app pod cannot establish a connection with the MySQL pod successfully. 339 | 340 | 341 | You can check the status of the application from the terminal by running the curl command with the following syntax: 342 | 343 | curl http://cluster3-controlplane:NODE-PORT 344 | 345 | 346 | 347 | 348 | To make the application work, create a new secret called db-secret-wl05 with the following key values: - 349 | 350 | 1. DB_Host=mysql-svc-wl05 351 | 2. DB_User=root 352 | 3. DB_Password=password123 353 | 354 | 355 | Next, configure the web application pod to load the new environment variables from the newly created secret. 356 | 357 | 358 | Note: Check the web application again using the curl command, and the status of the application should be success. 359 | 360 | 361 | You can SSH into the cluster3 using ssh cluster3-controlplane command. 362 | 363 | 364 | Ans: 365 | --- 366 | 367 | Set the correct context: - 368 | 369 | kubectl config use-context cluster3 370 | List the nodes: - 371 | 372 | kubectl get nodes -o wide 373 | Run the curl command to know the status of the application as follows: - 374 | 375 | ssh cluster3-controlplane 376 | 377 | curl http://10.17.63.11:31020 378 | 379 | Hello from Flask 380 | ... 381 | 382 | 383 |

Failed connecting to the MySQL database.

384 | 385 | 386 |

Environment Variables: DB_Host=Not Set; DB_Database=Not Set; DB_User=Not Set; DB_Password=Not Set; 2003: Can't connect to MySQL server on 'localhost:3306' (111 Connection refused)

387 | 388 | 389 | As you can see, the status of the application pod is failed. 390 | 391 | 392 | NOTE: - In your lab, IP addresses could be different. 393 | 394 | 395 | 396 | Let's create a new secret called db-secret-wl05 as follows: - 397 | 398 | kubectl create secret generic db-secret-wl05 -n canara-wl05 --from-literal=DB_Host=mysql-svc-wl05 --from-literal=DB_User=root --from-literal=DB_Password=password123 399 | 400 | After that, configure the newly created secret to the web application pod as follows: - 401 | 402 | --- 403 | apiVersion: v1 404 | kind: Pod 405 | metadata: 406 | labels: 407 | run: webapp-pod-wl05 408 | name: webapp-pod-wl05 409 | namespace: canara-wl05 410 | spec: 411 | containers: 412 | - image: kodekloud/simple-webapp-mysql 413 | name: webapp-pod-wl05 414 | envFrom: 415 | - secretRef: 416 | name: db-secret-wl05 417 | then use the kubectl replace command: - 418 | 419 | kubectl replace -f --force 420 | 421 | 422 | In the end, make use of the curl command to check the status of the application pod. The status of the application should be success. 423 | 424 | curl http://10.17.63.11:31020 425 | 426 | 427 | Hello from Flask 428 | 429 |
433 | 434 | 435 |

Successfully connected to the MySQL database.
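If you want to double-check that the pod actually picked up the values from the secret, here is an optional verification sketch (it assumes the pod and secret names used above, and that the container image ships the env binary):

kubectl -n canara-wl05 exec webapp-pod-wl05 -- env | grep DB_
kubectl -n canara-wl05 get secret db-secret-wl05 -o jsonpath='{.data.DB_User}' | base64 -d

The first command should list DB_Host, DB_User and DB_Password among the environment variables injected via envFrom; the second decodes one of the values stored in the secret.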

436 | 437 | 438 | =================================== 439 | 440 | Que: 441 | --- 442 | 443 | We want to deploy a python based application on the cluster using a template located at /root/olive-app-cka10-str.yaml on student-node. However, before you proceed we need to make some modifications to the YAML file as per details given below: 444 | 445 | 446 | The YAML should also contain a persistent volume claim with name olive-pvc-cka10-str to claim a 100Mi of storage from olive-pv-cka10-str PV. 447 | 448 | 449 | Update the deployment to add a sidecar container, which can use busybox image (you might need to add a sleep command for this container to keep it running.) 450 | 451 | Share the python-data volume with this container and mount the same at path /usr/src. Make sure this container only has read permissions on this volume. 452 | 453 | 454 | Finally, create a pod using this YAML and make sure the POD is in Running state. 455 | 456 | 457 | Ans: 458 | --- 459 | Update olive-app-cka10-str.yaml template so that it looks like as below: 460 | 461 | --- 462 | kind: PersistentVolumeClaim 463 | apiVersion: v1 464 | metadata: 465 | name: olive-pvc-cka10-str 466 | spec: 467 | accessModes: 468 | - ReadWriteMany 469 | storageClassName: olive-stc-cka10-str 470 | volumeName: olive-pv-cka10-str 471 | resources: 472 | requests: 473 | storage: 100Mi 474 | 475 | --- 476 | apiVersion: apps/v1 477 | kind: Deployment 478 | metadata: 479 | name: olive-app-cka10-str 480 | spec: 481 | replicas: 1 482 | template: 483 | metadata: 484 | labels: 485 | app: olive-app-cka10-str 486 | spec: 487 | affinity: 488 | nodeAffinity: 489 | requiredDuringSchedulingIgnoredDuringExecution: 490 | nodeSelectorTerms: 491 | - matchExpressions: 492 | - key: kubernetes.io/hostname 493 | operator: In 494 | values: 495 | - cluster1-node01 496 | containers: 497 | - name: python 498 | image: poroko/flask-demo-app 499 | ports: 500 | - containerPort: 5000 501 | volumeMounts: 502 | - name: python-data 503 | mountPath: /usr/share/ 504 | - name: busybox 505 | image: busybox 506 | command: 507 | - "bin/sh" 508 | - "-c" 509 | - "sleep 10000" 510 | volumeMounts: 511 | - name: python-data 512 | mountPath: "/usr/src" 513 | readOnly: true 514 | volumes: 515 | - name: python-data 516 | persistentVolumeClaim: 517 | claimName: olive-pvc-cka10-str 518 | selector: 519 | matchLabels: 520 | app: olive-app-cka10-str 521 | 522 | --- 523 | apiVersion: v1 524 | kind: Service 525 | metadata: 526 | name: olive-svc-cka10-str 527 | spec: 528 | type: NodePort 529 | ports: 530 | - port: 5000 531 | nodePort: 32006 532 | selector: 533 | app: olive-app-cka10-str 534 | Apply the template: 535 | kubectl apply -f olive-app-cka10-str.yaml 536 | 537 | 538 | =================================== 539 | 540 | Que: 541 | --- 542 | John is setting up a two tier application stack that is supposed to be accessible using the service curlme-cka01-svcn. To test that the service is accessible, he is using a pod called curlpod-cka01-svcn. However, at the moment, he is unable to get any response from the application. 543 | 544 | 545 | 546 | Troubleshoot and fix this issue so the application stack is accessible. 547 | 548 | 549 | 550 | While you may delete and recreate the service curlme-cka01-svcn, please do not alter it in anyway. 551 | 552 | 553 | Ans: 554 | --- 555 | 556 | Test if the service curlme-cka01-svcn is accessible from pod curlpod-cka01-svcn or not. 557 | 558 | 559 | kubectl exec curlpod-cka01-svcn -- curl curlme-cka01-svcn 560 | 561 | ..... 
562 | % Total % Received % Xferd Average Speed Time Time Time Current 563 | Dload Upload Total Spent Left Speed 564 | 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0 565 | 566 | 567 | We did not get any response. Check if the service is properly configured or not. 568 | 569 | 570 | kubectl describe svc curlme-cka01-svcn '' 571 | 572 | .... 573 | Name: curlme-cka01-svcn 574 | Namespace: default 575 | Labels: 576 | Annotations: 577 | Selector: run=curlme-ckaO1-svcn 578 | Type: ClusterIP 579 | IP Family Policy: SingleStack 580 | IP Families: IPv4 581 | IP: 10.109.45.180 582 | IPs: 10.109.45.180 583 | Port: 80/TCP 584 | TargetPort: 80/TCP 585 | Endpoints: 586 | Session Affinity: None 587 | Events: 588 | 589 | 590 | The service has no endpoints configured. As we can delete the resource, let's delete the service and create the service again. 591 | 592 | To delete the service, use the command 593 | kubectl delete svc curlme-cka01-svcn. 594 | You can create the service using imperative way or declarative way. 595 | 596 | 597 | Using imperative command: 598 | kubectl expose pod curlme-cka01-svcn --port=80 599 | 600 | 601 | Using declarative manifest: 602 | 603 | 604 | apiVersion: v1 605 | kind: Service 606 | metadata: 607 | labels: 608 | run: curlme-cka01-svcn 609 | name: curlme-cka01-svcn 610 | spec: 611 | ports: 612 | - port: 80 613 | protocol: TCP 614 | targetPort: 80 615 | selector: 616 | run: curlme-cka01-svcn 617 | type: ClusterIP 618 | 619 | 620 | You can test the connection from curlpod-cka-1-svcn using following. 621 | 622 | kubectl exec curlpod-cka01-svcn -- curl curlme-cka01-svcn 623 | 624 | ========================== 625 | 626 | -------------------------------------------------------------------------------- /Kubernetes-Mock-Exam-4.txt: -------------------------------------------------------------------------------- 1 | 2 | Que1: 3 | --- 4 | 5 | The deployment called web-dp-cka17-trb has 0 out of 1 pods up and running. Troubleshoot this issue and fix it. Make sure all required POD(s) are in running state and stable (not restarting). 6 | 7 | The application runs on port 80 inside the container and is exposed on the node port 30090. 8 | 9 | 10 | Ans: 11 | --- 12 | List out the PODs 13 | kubectl get pod 14 | Let's look into the relevant events: 15 | 16 | kubectl get event --field-selector involvedObject.name= 17 | You should see some errors as below: 18 | 19 | Warning FailedScheduling pod/web-dp-cka17-trb-9bdd6779-fm95t 0/3 nodes are available: 3 persistentvolumeclaim "web-pvc-cka17-trbl" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. 20 | From the error we can see that its something related to the PVCs. So let' look into that. 21 | 22 | kubectl get pv 23 | kubectl get pvc 24 | You will notice that web-pvc-cka17-trb is stuck in pending and also the capacity of web-pv-cka17-trb volume is 100Mi. 25 | Now let's dig more into the PVC: 26 | 27 | kubectl get pvc web-pvc-cka17-trb -o yaml 28 | Notice the storage which is 150Mi which means its trying to claim 150Mi of storage from a 100Mi PV. So let's edit this PV. 29 | 30 | kubectl edit pv web-pv-cka17-trb 31 | Change storage: 100Mi to storage: 150Mi 32 | Check again the pvc 33 | kubectl get pvc 34 | web-pvc-cka17-trb should be good now. let's see the PODs 35 | 36 | kubectl get pod 37 | POD should not be in pending state now but it must be crashing with Init:CrashLoopBackOff status, which means somehow the init container is crashing. So let's check the logs. 
38 | 39 | kubectl get event --field-selector involvedObject.name= 40 | You should see someting like 41 | 42 | Warning Failed pod/web-dp-cka17-trb-67c9bdcd85-4tvpr Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/bsh\\": stat /bin/bsh\: no such file or directory: unknown 43 | Let's look into the deployment: 44 | 45 | kubectl edit deploy web-dp-cka17-trb 46 | Under initContainers: -> - command: change /bin/bsh\ to /bin/bash 47 | let's see the PODs 48 | kubectl get pod 49 | Wait for some time to make sure it is stable, but you will notice that its restart so still something must be wrong. 50 | 51 | So let's check the events again. 52 | 53 | kubectl get event --field-selector involvedObject.name= 54 | You should see someting like 55 | 56 | Warning Unhealthy pod/web-dp-cka17-trb-647f69f8bd-67xmx Liveness probe failed: Get "http://10.50.64.1:81/": dial tcp 10.50.64.1:81: connect: connection refused 57 | Seems like its not able to connect to a service, let's look into the deployment to understand 58 | 59 | kubectl edit deploy web-dp-cka17-trb 60 | Notice that containerPort: 80 but under livenessProbe: the port: 81 so seems like livenessProbe is using wrong port. let's change port: 81 to port: 80 61 | 62 | See the PODs now 63 | 64 | kubectl get pod 65 | It should be good now. 66 | 67 | 68 | ============================================ 69 | 70 | Que2: 71 | --- 72 | One of the nginx based pod called cyan-pod-cka28-trb is running under cyan-ns-cka28-trb namespace and it is exposed within the cluster using cyan-svc-cka28-trb service. 73 | 74 | This is a restricted pod so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions on this pod. 75 | 76 | 77 | Two other pods called cyan-white-cka28-trb1 and cyan-black-cka28-trb are also running in the default namespace. 78 | 79 | 80 | The nginx based app running on the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80). 81 | 82 | 83 | Expectation: This app should only be accessible from the cyan-white-cka28-trb1 pod. 84 | 85 | 86 | Problem: This app is not accessible from anywhere. 87 | 88 | 89 | Troubleshoot this issue and fix the connectivity as per the requirement listed above. 90 | 91 | 92 | Note: You can exec into cyan-white-cka28-trb and cyan-black-cka28-trb pods and test connectivity using the curl utility. 93 | 94 | 95 | You may update the network policy, but make sure it is not deleted from the cyan-ns-cka28-trb namespace. 96 | 97 | 98 | 99 | Ans: 100 | --- 101 | 102 | Let's look into the network policy 103 | 104 | kubectl edit networkpolicy cyan-np-cka28-trb -n cyan-ns-cka28-trb 105 | 106 | Under spec: -> egress: you will notice there is no cidr: block has been added, since there is no restrcitions on egress traffic so we can update it as below. Further you will notice that the port used in the policy is 8080 but the app is running on default port which is 80 so let's update this as well (under egress and ingress): 107 | 108 | Change port: 8080 to port: 80 109 | - ports: 110 | - port: 80 111 | protocol: TCP 112 | to: 113 | - ipBlock: 114 | cidr: 0.0.0.0/0 115 | 116 | Now, lastly notice that there is no POD selector has been used in ingress section but this app is supposed to be accessible from cyan-white-cka28-trb pod under default namespace. 
So let's edit it to look as below:
117 | 
118 | ingress:
119 | - from:
120 | - namespaceSelector:
121 | matchLabels:
122 | kubernetes.io/metadata.name: default
123 | podSelector:
124 | matchLabels:
125 | app: cyan-white-cka28-trb
126 | Now, let's try to access the app from cyan-white-cka28-trb
127 | 
128 | kubectl exec -it cyan-white-cka28-trb -- sh
129 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
130 | Also make sure it's not accessible from the other pod(s)
131 | 
132 | kubectl exec -it cyan-black-cka28-trb -- sh
133 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
134 | It should not work from this pod. So it's looking good now.
135 | 
136 | =====================================
137 | 
138 | Que3:
139 | ---
140 | 
141 | We recently deployed a DaemonSet called logs-cka26-trb under kube-system namespace in cluster2 for collecting logs from all the cluster nodes including the controlplane node. However, at this moment, the DaemonSet is not creating any pod on the controlplane node.
142 | 
143 | 
144 | Troubleshoot the issue and fix it to make sure the pods are getting created on all nodes including the controlplane node.
145 | 
146 | 
147 | Ans:
148 | ---
149 | 
150 | Check the status of the DaemonSet
151 | 
152 | kubectl --context cluster2 get ds logs-cka26-trb -n kube-system
153 | You will find that DESIRED, CURRENT, READY etc. have the value 2, which means two pods have been created. You can check the same by listing the PODs
154 | 
155 | kubectl --context cluster2 get pod -n kube-system
156 | You can check which nodes these are created on
157 | 
158 | kubectl --context cluster2 get pod -n kube-system -o wide
159 | 
160 | Under NODE you will find the node name, so we can see that it's not scheduled on the controlplane node, which is because it must be missing the required tolerations. Let's edit the DaemonSet to fix the tolerations
161 | 
162 | kubectl --context cluster2 edit ds logs-cka26-trb -n kube-system
163 | Under tolerations: add the below toleration as well
164 | 
165 | - key: node-role.kubernetes.io/control-plane
166 | operator: Exists
167 | effect: NoSchedule
168 | Wait for some time; PODs should now be scheduled on all nodes, including the controlplane node.
169 | 
170 | 
171 | ==============================
172 | 
173 | Que4:
174 | ---
175 | In the dev-wl07 namespace, one of the developers has performed a rolling update and upgraded the application to a newer version. But somehow, application pods are not being created.
176 | 
177 | 
178 | To get back to the working state, roll back the application to the previous version.
179 | 
180 | 
181 | After rolling the deployment back, on the controlplane node, save the image currently in use to the /root/rolling-back-record.txt file and increase the replica count to 5.
182 | 
183 | 
184 | You can SSH into the cluster1 using ssh cluster1-controlplane command.
185 | 
186 | 
187 | Ans:
188 | ---
189 | Run the command to change the context: -
190 | 
191 | kubectl config use-context cluster1
192 | 
193 | 
194 | Check the status of the pods: -
195 | 
196 | kubectl get pods -n dev-wl07
197 | 
198 | 
199 | One of the pods is in an error state.
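Optionally, before rolling back, you can list the recorded revisions to confirm that the most recent one corresponds to the broken update (a quick sanity check, not required by the task):

kubectl -n dev-wl07 rollout history deploy webapp-wl07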
As a quick fix, we need to rollback to the previous revision as follows: - 200 | 201 | kubectl rollout undo -n dev-wl07 deploy webapp-wl07 202 | 203 | 204 | After successful rolling back, inspect the updated image: - 205 | 206 | kubectl describe deploy -n dev-wl07 webapp-wl07 | grep -i image 207 | 208 | 209 | On the Controlplane node, save the image name to the given path /root/rolling-back-record.txt: - 210 | 211 | ssh cluster1-controlplane 212 | 213 | echo "kodekloud/webapp-color" > /root/rolling-back-record.txt 214 | 215 | 216 | And increase the replica count to the 5 with help of kubectl scale command: - 217 | 218 | kubectl scale deploy -n dev-wl07 webapp-wl07 --replicas=5 219 | 220 | 221 | Verify it by running the command: kubectl get deploy -n dev-wl07 222 | 223 | ================================== 224 | 225 | Que5: 226 | --- 227 | 228 | Create a new deployment called ocean-tv-wl09 in the default namespace using the image kodekloud/webapp-color:v1. 229 | Use the following specs for the deployment: 230 | 231 | 232 | 1. Replica count should be 3. 233 | 234 | 2. Set the Max Unavailable to 40% and Max Surge to 55%. 235 | 236 | 3. Create the deployment and ensure all the pods are ready. 237 | 238 | 4. After successful deployment, upgrade the deployment image to kodekloud/webapp-color:v2 and inspect the deployment rollout status. 239 | 240 | 5. Check the rolling history of the deployment and on the student-node, save the current revision count number to the /opt/revision-count.txt file. 241 | 242 | 6. Finally, perform a rollback and revert back the deployment image to the older version. 243 | 244 | 245 | 246 | Ans: 247 | --- 248 | 249 | Set the correct context: - 250 | 251 | kubectl config use-context cluster1 252 | 253 | 254 | Use the following template to create a deployment called ocean-tv-wl09: - 255 | 256 | --- 257 | apiVersion: apps/v1 258 | kind: Deployment 259 | metadata: 260 | labels: 261 | app: ocean-tv-wl09 262 | name: ocean-tv-wl09 263 | spec: 264 | replicas: 3 265 | selector: 266 | matchLabels: 267 | app: ocean-tv-wl09 268 | strategy: 269 | type: RollingUpdate 270 | rollingUpdate: 271 | maxUnavailable: 40% 272 | maxSurge: 55% 273 | template: 274 | metadata: 275 | labels: 276 | app: ocean-tv-wl09 277 | spec: 278 | containers: 279 | - image: kodekloud/webapp-color:v1 280 | name: webapp-color 281 | 282 | 283 | Now, create the deployment by using the kubectl create -f command in the default namespace: - 284 | 285 | kubectl create -f .yaml 286 | 287 | 288 | After sometime, upgrade the deployment image to kodekloud/webapp-color:v2: - 289 | 290 | kubectl set image deploy ocean-tv-wl09 webapp-color=kodekloud/webapp-color:v2 291 | 292 | 293 | And check out the rollout history of the deployment ocean-tv-wl09: - 294 | 295 | kubectl rollout history deploy ocean-tv-wl09 296 | deployment.apps/ocean-tv-wl09 297 | REVISION CHANGE-CAUSE 298 | 1 299 | 2 300 | 301 | 302 | NOTE: - Revision count is 2. In your lab, it could be different. 303 | 304 | 305 | 306 | On the student-node, store the revision count to the given file: - 307 | 308 | echo "2" > /opt/revision-count.txt 309 | 310 | 311 | In final task, rollback the deployment image to an old version: - 312 | 313 | kubectl rollout undo deployment ocean-tv-wl09 314 | 315 | 316 | Verify the image name by using the following command: - 317 | 318 | kubectl describe deploy ocean-tv-wl09 319 | 320 | 321 | It should be kodekloud/webapp-color:v1 image. 
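If you would like to verify that the rollout strategy and the rolled-back image are in place, here is an optional sketch using jsonpath (the field paths follow the deployment template shown above):

kubectl get deploy ocean-tv-wl09 -o jsonpath='{.spec.strategy.rollingUpdate}'
kubectl get deploy ocean-tv-wl09 -o jsonpath='{.spec.template.spec.containers[0].image}'

The first command should print the maxUnavailable/maxSurge values (40%/55%) and the second should print kodekloud/webapp-color:v1 after the rollback.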
322 | 323 | 324 | =================================== 325 | 326 | Que6: 327 | --- 328 | A pod definition file is created at /root/peach-pod-cka05-str.yaml on the student-node. Update this manifest file to create a persistent volume claim called peach-pvc-cka05-str to claim a 100Mi of storage from peach-pv-cka05-str PV (this is already created). Use the access mode ReadWriteOnce. 329 | 330 | 331 | Further add peach-pvc-cka05-str PVC to peach-pod-cka05-str POD and mount the volume at /var/www/html location. Ensure that the pod is running and the PV is bound. 332 | 333 | Ans: 334 | --- 335 | 336 | Update /root/peach-pod-cka05-str.yaml template file to create a PVC to utilise the same in POD template. 337 | apiVersion: v1 338 | kind: PersistentVolumeClaim 339 | metadata: 340 | name: peach-pvc-cka05-str 341 | spec: 342 | volumeName: peach-pv-cka05-str 343 | accessModes: 344 | - ReadWriteOnce 345 | resources: 346 | requests: 347 | storage: 100Mi 348 | --- 349 | apiVersion: v1 350 | kind: Pod 351 | metadata: 352 | name: peach-pod-cka05-str 353 | spec: 354 | containers: 355 | - image: nginx 356 | name: nginx 357 | volumeMounts: 358 | - mountPath: "/var/www/html" 359 | name: nginx-volume 360 | volumes: 361 | - name: nginx-volume 362 | persistentVolumeClaim: 363 | claimName: peach-pvc-cka05-str 364 | Apply the template: 365 | kubectl apply -f /root/peach-pod-cka05-str.yaml 366 | 367 | ============================== 368 | 369 | Que7: 370 | --- 371 | Create a storage class with the name banana-sc-cka08-str as per the properties given below: 372 | 373 | 374 | - Provisioner should be kubernetes.io/no-provisioner, 375 | 376 | - Volume binding mode should be WaitForFirstConsumer. 377 | 378 | - Volume expansion should be enabled. 379 | 380 | 381 | Ans: 382 | --- 383 | 384 | Create a yaml template as below: 385 | kind: StorageClass 386 | apiVersion: storage.k8s.io/v1 387 | metadata: 388 | name: banana-sc-cka08-str 389 | provisioner: kubernetes.io/no-provisioner 390 | allowVolumeExpansion: true 391 | volumeBindingMode: WaitForFirstConsumer 392 | Apply the template: 393 | kubectl apply -f .yaml 394 | 395 | ================================== 396 | 397 | Que8: 398 | --- 399 | Create a loadbalancer service with name wear-service-cka09-svcn to expose the deployment webapp-wear-cka09-svcn application in app-space namespace. 400 | 401 | 402 | Ans: 403 | --- 404 | 405 | Switch to cluster3 : 406 | 407 | 408 | 409 | kubectl config use-context cluster3 410 | 411 | 412 | 413 | On student node run the command: 414 | 415 | 416 | student-node ~ ➜ kubectl expose -n app-space deployment webapp-wear-cka09-svcn --type=LoadBalancer --name=wear-service-cka09-svcn --port=8080 417 | 418 | service/wear-service-cka09-svcn exposed 419 | 420 | student-node ~ ➜ k get svc -n app-space 421 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 422 | wear-service-cka09-svcn LoadBalancer 10.43.68.233 172.25.0.14 8080:32109/TCP 14s 423 | 424 | ===================================== 425 | 426 | Que9: 427 | --- 428 | John is setting up a two tier application stack that is supposed to be accessible using the service curlme-cka01-svcn. To test that the service is accessible, he is using a pod called curlpod-cka01-svcn. However, at the moment, he is unable to get any response from the application. 429 | 430 | 431 | 432 | Troubleshoot and fix this issue so the application stack is accessible. 433 | 434 | 435 | 436 | While you may delete and recreate the service curlme-cka01-svcn, please do not alter it in anyway. 
437 | 438 | 439 | Ans: 440 | --- 441 | Test if the service curlme-cka01-svcn is accessible from pod curlpod-cka01-svcn or not. 442 | 443 | 444 | kubectl exec curlpod-cka01-svcn -- curl curlme-cka01-svcn 445 | 446 | ..... 447 | % Total % Received % Xferd Average Speed Time Time Time Current 448 | Dload Upload Total Spent Left Speed 449 | 0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0 450 | 451 | 452 | We did not get any response. Check if the service is properly configured or not. 453 | 454 | 455 | kubectl describe svc curlme-cka01-svcn '' 456 | 457 | .... 458 | Name: curlme-cka01-svcn 459 | Namespace: default 460 | Labels: 461 | Annotations: 462 | Selector: run=curlme-ckaO1-svcn 463 | Type: ClusterIP 464 | IP Family Policy: SingleStack 465 | IP Families: IPv4 466 | IP: 10.109.45.180 467 | IPs: 10.109.45.180 468 | Port: 80/TCP 469 | TargetPort: 80/TCP 470 | Endpoints: 471 | Session Affinity: None 472 | Events: 473 | 474 | 475 | The service has no endpoints configured. As we can delete the resource, let's delete the service and create the service again. 476 | 477 | To delete the service, use the command kubectl delete svc curlme-cka01-svcn. 478 | You can create the service using imperative way or declarative way. 479 | 480 | 481 | Using imperative command: 482 | kubectl expose pod curlme-cka01-svcn --port=80 483 | 484 | 485 | Using declarative manifest: 486 | 487 | 488 | apiVersion: v1 489 | kind: Service 490 | metadata: 491 | labels: 492 | run: curlme-cka01-svcn 493 | name: curlme-cka01-svcn 494 | spec: 495 | ports: 496 | - port: 80 497 | protocol: TCP 498 | targetPort: 80 499 | selector: 500 | run: curlme-cka01-svcn 501 | type: ClusterIP 502 | 503 | 504 | You can test the connection from curlpod-cka-1-svcn using following. 505 | kubectl exec curlpod-cka01-svcn -- curl curlme-cka01-svcn 506 | 507 | =========================== 508 | 509 | Que10: 510 | --- 511 | Create a nginx pod called nginx-resolver-cka06-svcn using image nginx, expose it internally with a service called nginx-resolver-service-cka06-svcn. 512 | 513 | 514 | 515 | Test that you are able to look up the service and pod names from within the cluster. Use the image: busybox:1.28 for dns lookup. Record results in /root/CKA/nginx.svc.cka06.svcn and /root/CKA/nginx.pod.cka06.svcn 516 | 517 | Ans: 518 | --- 519 | To create a pod nginx-resolver-cka06-svcn and expose it internally: 520 | 521 | 522 | 523 | student-node ~ ➜ kubectl run nginx-resolver-cka06-svcn --image=nginx 524 | student-node ~ ➜ kubectl expose pod/nginx-resolver-cka06-svcn --name=nginx-resolver-service-cka06-svcn --port=80 --target-port=80 --type=ClusterIP 525 | 526 | 527 | 528 | To create a pod test-nslookup. Test that you are able to look up the service and pod names from within the cluster: 529 | 530 | 531 | 532 | student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service-cka06-svcn 533 | student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service-cka06-svcn > /root/CKA/nginx.svc.cka06.svcn 534 | 535 | 536 | 537 | Get the IP of the nginx-resolver-cka06-svcn pod and replace the dots(.) with hyphon(-) which will be used below. 538 | 539 | student-node ~ ➜ kubectl get pod nginx-resolver-cka06-svcn -o wide 540 | student-node ~ ➜ IP=`kubectl get pod nginx-resolver-cka06-svcn -o wide --no-headers | awk '{print $6}' | tr '.' 
'-'` 541 | student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup $IP.default.pod > /root/CKA/nginx.pod.cka06.svcn 542 | 543 | ====================================== 544 | 545 | 546 | -------------------------------------------------------------------------------- /Mock Exam - 10.txt: -------------------------------------------------------------------------------- 1 | 2 | Que1: 3 | --- 4 | 5 | There is a sample script located at /root/service-cka25-arch.sh on the student-node. 6 | Update this script to add a command to filter/display the targetPort only for service service-cka25-arch using jsonpath. The service has been created under the default namespace on cluster1. 7 | 8 | Ans: 9 | --- 10 | Update service-cka25-arch.sh script: 11 | 12 | student-node ~ ➜ vi service-cka25-arch.sh 13 | 14 | 15 | 16 | Add below command in it: 17 | kubectl --context cluster1 get service service-cka25-arch -o jsonpath='{.spec.ports[0].targetPort}' 18 | 19 | ================= 20 | 21 | Que2: 22 | --- 23 | ​A pod called nginx-cka01-trb is running in the default namespace. There is a container called nginx-container running inside this pod that uses the image nginx:latest. There is another sidecar container called logs-container that runs in this pod. 24 | 25 | For some reason, this pod is continuously crashing. Identify the issue and fix it. Make sure that the pod is in a running state and you are able to access the website using the curl http://kodekloud-exam.app:30001 command on the controlplane node of cluster1. 26 | 27 | Ans: 28 | --- 29 | Check the container logs: 30 | kubectl logs -f nginx-cka01-trb -c nginx-container 31 | You can see that its not able to pull the image. 32 | 33 | Edit the pod 34 | kubectl edit pod nginx-cka01-trb -o yaml 35 | 36 | Change image tag from nginx:latst to nginx:latest 37 | Let's check now if the POD is in Running state 38 | 39 | kubectl get pod 40 | You will notice that its still crashing, so check the logs again: 41 | 42 | kubectl logs -f nginx-cka01-trb -c nginx-container 43 | From the logs you will notice that nginx-container is looking good now so it might be the sidecar container that is causing issues. Let's check its logs. 44 | 45 | kubectl logs -f nginx-cka01-trb -c logs-container 46 | You will see some logs as below: 47 | 48 | cat: can't open '/var/log/httpd/access.log': No such file or directory 49 | cat: can't open '/var/log/httpd/error.log': No such file or directory 50 | Now, let's look into the sidecar container 51 | 52 | kubectl get pod nginx-cka01-trb -o yaml 53 | Under containers: check the command: section, this is the command which is failing. If you notice its looking for the logs under /var/log/httpd/ directory but the mounted volume for logs is /var/log/nginx (under volumeMounts:). So we need to fix this path: 54 | 55 | kubectl get pod nginx-cka01-trb -o yaml > /tmp/test.yaml 56 | vi /tmp/test.yaml 57 | Under command: change /var/log/httpd/access.log and /var/log/httpd/error.log to /var/log/nginx/access.log and /var/log/nginx/error.log respectively. 58 | 59 | Delete the existing POD now: 60 | 61 | kubectl delete pod nginx-cka01-trb 62 | Create new one from the template 63 | 64 | kubectl apply -f /tmp/test.yaml 65 | Let's check now if the POD is in Running state 66 | 67 | kubectl get pod 68 | It should be good now. So let's try to access the app. 
69 | 
70 | curl http://kodekloud-exam.app:30001
71 | You will see an error
72 | 
73 | curl: (7) Failed to connect to kodekloud-exam.app port 30001: Connection refused
74 | So you are not able to access the website; let's look into the service configuration.
75 | 
76 | Edit the service
77 | kubectl edit svc nginx-service-cka01-trb -o yaml
78 | Change app label under selector from httpd-app-cka01-trb to nginx-app-cka01-trb
79 | You should be able to access the website now.
80 | curl http://kodekloud-exam.app:30001
81 | 
82 | ================================
83 | 
84 | 
85 | Que3:
86 | ---
87 | The controlplane node called cluster4-controlplane in the cluster4 cluster is planned for regular maintenance. In preparation for this maintenance work, we need to take backups of this cluster. However, something is broken at the moment!
88 | 
89 | 
90 | Troubleshoot the issues and take a snapshot of the ETCD database using the etcdctl utility at the location /opt/etcd-boot-cka18-trb.db.
91 | 
92 | 
93 | Note: Make sure etcd listens at its default port. Also you can SSH to the cluster4-controlplane host using the ssh cluster4-controlplane command from the student-node.
94 | 
95 | Ans:
96 | ---
97 | SSH into the cluster4-controlplane host and check that the cluster is healthy before taking the backup:
98 | 
99 | ssh cluster4-controlplane
100 | kubectl get pod -A
101 | 
102 | If the API server or etcd is not responding, troubleshoot that first: check the static pod manifests under /etc/kubernetes/manifests/ and the kubelet logs with journalctl -u kubelet, and make sure etcd is listening on its default port (2379). Once the cluster is healthy, take the snapshot:
103 | ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-boot-cka18-trb.db
104 | 
105 | 
106 | =========================
107 | 
108 | Que4:
109 | ---
110 | Deploy a messaging-cka07-svcn pod using the redis:alpine image with the labels set to tier=msg.
111 | 
112 | 
113 | 
114 | Now create a service messaging-service-cka07-svcn to expose the messaging-cka07-svcn application within the cluster on port 6379.
115 | 
116 | Ans:
117 | ---
118 | On student-node, use the command kubectl run messaging-cka07-svcn --image=redis:alpine -l tier=msg
119 | 
120 | 
121 | 
122 | Now run the command: kubectl expose pod messaging-cka07-svcn --port=6379 --name messaging-service-cka07-svcn.
123 | 
--------------------------------------------------------------------------------
/Mock Exam - 5.txt:
1 | 
2 | Que1:
3 | ---
4 | A pod called elastic-app-cka02-arch is running in the default namespace. The YAML file for this pod is available at /root/elastic-app-cka02-arch.yaml on the student-node. The single application container in this pod writes logs to the file /var/log/elastic-app.log.
5 | 
6 | 
7 | One of our logging mechanisms needs to read these logs to send them to an upstream logging server but we don't want to increase the read overhead for our main application container so recreate this POD with an additional sidecar container that will run along with the application container and print to the STDOUT by running the command tail -f /var/log/elastic-app.log. You can use busybox image for this sidecar container.
8 | 
9 | Ans:
10 | ---
11 | 
12 | Recreate the pod with a new container called sidecar.
Update the /root/elastic-app-cka02-arch.yaml YAML file as shown below: 13 | 14 | apiVersion: v1 15 | kind: Pod 16 | metadata: 17 | name: elastic-app-cka02-arch 18 | spec: 19 | containers: 20 | - name: elastic-app 21 | image: busybox:1.28 22 | args: 23 | - /bin/sh 24 | - -c 25 | - > 26 | mkdir /var/log; 27 | i=0; 28 | while true; 29 | do 30 | echo "$(date) INFO $i" >> /var/log/elastic-app.log; 31 | i=$((i+1)); 32 | sleep 1; 33 | done 34 | volumeMounts: 35 | - name: varlog 36 | mountPath: /var/log 37 | - name: sidecar 38 | image: busybox:1.28 39 | args: [/bin/sh, -c, 'tail -f /var/log/elastic-app.log'] 40 | volumeMounts: 41 | - name: varlog 42 | mountPath: /var/log 43 | volumes: 44 | - name: varlog 45 | emptyDir: {} 46 | 47 | 48 | ====================================== 49 | 50 | Que2: 51 | --- 52 | 53 | There is an existing persistent volume called orange-pv-cka13-trb. A persistent volume claim called orange-pvc-cka13-trb is created to claim storage from orange-pv-cka13-trb. 54 | 55 | 56 | However, this PVC is stuck in a Pending state. As of now, there is no data in the volume. 57 | 58 | 59 | Troubleshoot and fix this issue, making sure that orange-pvc-cka13-trb PVC is in Bound state. 60 | 61 | 62 | Ans: 63 | --- 64 | 65 | List the PVC to check its status 66 | kubectl get pvc 67 | So we can see orange-pvc-cka13-trb PVC is in Pending state and its requesting a storage of 150Mi. Let's look into the events 68 | 69 | kubectl get events --sort-by='.metadata.creationTimestamp' -A 70 | You will see some errors as below: 71 | 72 | Warning VolumeMismatch persistentvolumeclaim/orange-pvc-cka13-trb Cannot bind to requested volume "orange-pv-cka13-trb": requested PV is too small 73 | Let's look into orange-pv-cka13-trb volume 74 | 75 | kubectl get pv 76 | We can see that orange-pv-cka13-trb volume is of 100Mi capacity which means its too small to request 150Mi of storage. 77 | Let's edit orange-pvc-cka13-trb PVC to adjust the storage requested. 78 | 79 | kubectl get pvc orange-pvc-cka13-trb -o yaml > /tmp/orange-pvc-cka13-trb.yaml 80 | vi /tmp/orange-pvc-cka13-trb.yaml 81 | Under resources: -> requests: -> storage: change 150Mi to 100Mi and save. 82 | Delete old PVC and apply the change: 83 | kubectl delete pvc orange-pvc-cka13-trb 84 | kubectl apply -f /tmp/orange-pvc-cka13-trb.yaml 85 | 86 | =================================== 87 | 88 | Que3: 89 | --- 90 | One of the nginx based pod called cyan-pod-cka28-trb is running under cyan-ns-cka28-trb namespace and it is exposed within the cluster using cyan-svc-cka28-trb service. 91 | 92 | This is a restricted pod so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions on this pod. 93 | 94 | 95 | Two other pods called cyan-white-cka28-trb1 and cyan-black-cka28-trb are also running in the default namespace. 96 | 97 | 98 | The nginx based app running on the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80). 99 | 100 | 101 | Expectation: This app should only be accessible from the cyan-white-cka28-trb1 pod. 102 | 103 | 104 | Problem: This app is not accessible from anywhere. 105 | 106 | 107 | Troubleshoot this issue and fix the connectivity as per the requirement listed above. 108 | 109 | 110 | Note: You can exec into cyan-white-cka28-trb and cyan-black-cka28-trb pods and test connectivity using the curl utility. 111 | 112 | 113 | You may update the network policy, but make sure it is not deleted from the cyan-ns-cka28-trb namespace. 
114 | 115 | Ans: 116 | --- 117 | 118 | Let's look into the network policy 119 | 120 | kubectl edit networkpolicy cyan-np-cka28-trb -n cyan-ns-cka28-trb 121 | 122 | Under spec: -> egress: you will notice there is not cidr: block has been added, since there is no restrcitions on egress traffic so we can update it as below. Further you will notice that the port used in the policy is 8080 but the app is running on default port which is 80 so let's update this as well (under egress and ingress): 123 | 124 | Change port: 8080 to port: 80 125 | - ports: 126 | - port: 80 127 | protocol: TCP 128 | to: 129 | - ipBlock: 130 | cidr: 0.0.0.0/0 131 | Now, lastly notice that there is no POD selector has been used in ingress section but this app is supposed to be accessible from cyan-white-cka28-trb pod under default namespace. So let's edit it to look like as below: 132 | 133 | ingress: 134 | - from: 135 | - namespaceSelector: 136 | matchLabels: 137 | kubernetes.io/metadata.name: default 138 | podSelector: 139 | matchLabels: 140 | app: cyan-white-cka28-trb 141 | Now, let's try to access the app from cyan-white-pod-cka28-trb 142 | 143 | kubectl exec -it cyan-white-cka28-trb -- sh 144 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local 145 | Also make sure its not accessible from the other pod(s) 146 | 147 | kubectl exec -it cyan-black-cka28-trb -- sh 148 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local 149 | It should not work from this pod. So its looking good now. 150 | 151 | ==================================== 152 | 153 | Que4: 154 | --- 155 | 156 | -------------------------------------------------------------------------------- /Mock Exam - 6.txt: -------------------------------------------------------------------------------- 1 | 2 | Que1: 3 | --- 4 | Find the node across all clusters that consumes the most memory and store the result to the file /opt/high_memory_node in the following format cluster_name,node_name. 5 | 6 | The node could be in any clusters that are currently configured on the student-node. 7 | 8 | Ans: 9 | --- 10 | 11 | Check out the metrics for all node across all clusters: 12 | 13 | student-node ~ ➜ kubectl top node --context cluster1 --no-headers | sort -nr -k4 | head -1 14 | cluster1-controlplane 124m 1% 768Mi 1% 15 | 16 | student-node ~ ➜ kubectl top node --context cluster2 --no-headers | sort -nr -k4 | head -1 17 | cluster2-controlplane 79m 0% 873Mi 1% 18 | 19 | student-node ~ ➜ kubectl top node --context cluster3 --no-headers | sort -nr -k4 | head -1 20 | cluster3-controlplane 78m 0% 902Mi 1% 21 | 22 | student-node ~ ➜ kubectl top node --context cluster4 --no-headers | sort -nr -k4 | head -1 23 | cluster4-controlplane 78m 0% 901Mi 1% 24 | 25 | student-node ~ ➜ 26 | 27 | 28 | 29 | Using this, find the node that uses most memory. In this case, it is cluster3-controlplane on cluster3. 30 | Save the result in the correct format to the file: 31 | 32 | student-node ~ ➜ echo cluster3,cluster3-controlplane > /opt/high_memory_node 33 | 34 | ================================ 35 | 36 | ==================================== 37 | 38 | Que4: 39 | --- 40 | 41 | One of our Junior DevOps engineers have deployed a pod nginx-wl06 on the cluster3-controlplane node. However, while specifying the resource limits, instead of using Mebibyte as the unit, Gebibyte was used. 42 | 43 | As a result, the node doesn't have sufficient resources to deploy this pod and it is stuck in a pending state 44 | 45 | Fix the units and re-deploy the pod (Delete and recreate the pod if needed). 
46 | 47 | 48 | 49 | Ans: 50 | --- 51 | Solution 52 | Set the correct context: - 53 | 54 | kubectl config use-context cluster3 55 | Run the following command to check the pending pods on all the namespaces: - 56 | 57 | kubectl get pods -A 58 | 59 | 60 | After that, inspect the pod Events as follows: - 61 | 62 | kubectl get pods -A | grep -i pending 63 | 64 | kubectl describe po nginx-wl06 65 | 66 | 67 | Make use of the kubectl edit command to update the values from Gi to Mi:- 68 | 69 | kubectl edit po nginx-wl06 70 | 71 | 72 | It will save the temporary file under the /tmp/ directory. Use the kubectl replace command as follows: - 73 | 74 | kubectl replace -f /tmp/kubectl-edit-xxxx.yaml --force 75 | 76 | 77 | It will delete the existing pod and will re-create it again with new changes. 78 | 79 | =============================== 80 | 81 | Que5: 82 | --- 83 | Create a ReplicaSet with name checker-cka10-svcn in ns-12345-svcn namespace with image registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3. 84 | 85 | 86 | Make sure to specify the below specs as well: 87 | 88 | 89 | command sleep 3600 90 | replicas set to 2 91 | container name: dns-image 92 | 93 | 94 | 95 | Once the checker pods are up and running, store the output of the command nslookup kubernetes.default from any one of the checker pod into the file /root/dns-output-12345-cka10-svcn on student-node. 96 | 97 | Ans: 98 | --- 99 | 100 | Change to the cluster4 context before attempting the task: 101 | 102 | kubectl config use-context cluster3 103 | 104 | 105 | 106 | Create the ReplicaSet as per the requirements: 107 | 108 | 109 | 110 | kubectl apply -f - << EOF 111 | --- 112 | apiVersion: v1 113 | kind: Namespace 114 | metadata: 115 | creationTimestamp: null 116 | name: ns-12345-svcn 117 | spec: {} 118 | status: {} 119 | 120 | --- 121 | apiVersion: apps/v1 122 | kind: ReplicaSet 123 | metadata: 124 | name: checker-cka10-svcn 125 | namespace: ns-12345-svcn 126 | labels: 127 | app: dns 128 | tier: testing 129 | spec: 130 | replicas: 2 131 | selector: 132 | matchLabels: 133 | tier: testing 134 | template: 135 | metadata: 136 | labels: 137 | tier: testing 138 | spec: 139 | containers: 140 | - name: dns-image 141 | image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3 142 | command: 143 | - sleep 144 | - "3600" 145 | EOF 146 | 147 | 148 | 149 | Now let's test if the nslookup command is working : 150 | 151 | 152 | student-node ~ ➜ k get pods -n ns-12345-svcn 153 | NAME READY STATUS RESTARTS AGE 154 | checker-cka10-svcn-d2cd2 1/1 Running 0 12s 155 | checker-cka10-svcn-qj8rc 1/1 Running 0 12s 156 | 157 | student-node ~ ➜ POD_NAME=`k get pods -n ns-12345-svcn --no-headers | head -1 | awk '{print $1}'` 158 | 159 | student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default 160 | ;; connection timed out; no servers could be reached 161 | 162 | command terminated with exit code 1 163 | 164 | 165 | 166 | There seems to be a problem with the name resolution. 
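As an optional first check (assuming $POD_NAME is still set as above), you can confirm which nameserver the pod is configured to use; it should point at the kube-dns ClusterIP, which is 10.96.0.10 in this lab:

kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- cat /etc/resolv.conf

If that entry looks correct, the problem is more likely on the service/endpoints side, which is what we check next.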
Let's check if our coredns pods are up and if any service exists to reach them: 167 | 168 | 169 | 170 | student-node ~ ➜ k get pods -n kube-system | grep coredns 171 | coredns-6d4b75cb6d-cprjz 1/1 Running 0 42m 172 | coredns-6d4b75cb6d-fdrhv 1/1 Running 0 42m 173 | 174 | student-node ~ ➜ k get svc -n kube-system 175 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 176 | kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP,9153/TCP 62m 177 | 178 | 179 | 180 | Everything looks okay here but the name resolution problem exists, let's see if the kube-dns service have any active endpoints: 181 | 182 | student-node ~ ➜ kubectl get ep -n kube-system kube-dns 183 | NAME ENDPOINTS AGE 184 | kube-dns 63m 185 | 186 | 187 | 188 | Finally, we have our culprit. 189 | 190 | 191 | If we dig a little deeper, we will it is using wrong labels and selector: 192 | 193 | 194 | 195 | student-node ~ ➜ kubectl describe svc -n kube-system kube-dns 196 | Name: kube-dns 197 | Namespace: kube-system 198 | .... 199 | Selector: k8s-app=core-dns 200 | Type: ClusterIP 201 | ... 202 | 203 | student-node ~ ➜ kubectl get deploy -n kube-system --show-labels | grep coredns 204 | coredns 2/2 2 2 66m k8s-app=kube-dns 205 | 206 | 207 | 208 | Let's update the kube-dns service it to point to correct set of pods: 209 | 210 | 211 | 212 | student-node ~ ➜ kubectl patch service -n kube-system kube-dns -p '{"spec":{"selector":{"k8s-app": "kube-dns"}}}' 213 | service/kube-dns patched 214 | 215 | student-node ~ ➜ kubectl get ep -n kube-system kube-dns 216 | NAME ENDPOINTS AGE 217 | kube-dns 10.50.0.2:53,10.50.192.1:53,10.50.0.2:53 + 3 more... 69m 218 | 219 | 220 | 221 | NOTE: We can use any method to update kube-dns service. In our case, we have used kubectl patch command. 222 | 223 | 224 | 225 | 226 | Now let's store the correct output to /root/dns-output-12345-cka10-svcn: 227 | 228 | 229 | 230 | student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default 231 | Server: 10.96.0.10 232 | Address: 10.96.0.10#53 233 | 234 | Name: kubernetes.default.svc.cluster.local 235 | Address: 10.96.0.1 236 | 237 | 238 | 239 | student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default > /root/dns-output-12345-cka10-svcn 240 | 241 | ==================================== 242 | 243 | Que6: 244 | --- 245 | Create a deployment named hr-web-app-cka08-svcn using the image kodekloud/webapp-color with 2 replicas. 246 | 247 | 248 | 249 | Expose the hr-web-app-cka08-svcn as service hr-web-app-service-cka08-svcn application on port 30082 on the nodes of the cluster. 250 | 251 | The web application listens on port 8080. 252 | 253 | Ans: 254 | --- 255 | On student-node, use the command: kubectl create deployment hr-web-app-cka08-svcn --image=kodekloud/webapp-color --replicas=2 256 | 257 | 258 | 259 | Now we can run the command: kubectl expose deployment hr-web-app-cka08-svcn --type=NodePort --port=8080 --name=hr-web-app-service-cka08-svcn --dry-run=client -o yaml > hr-web-app-service-cka08-svcn.yaml to generate a service definition file. 260 | 261 | 262 | 263 | 264 | Now, in generated service definition file add the nodePort field with the given port number under the ports section and create a service. 265 | 266 | =================== 267 | 268 | Que7: 269 | --- 270 | Create a ClusterIP service .i.e. service-3421-svcn in the spectra-1267 ns which should expose the pods namely pod-23 and pod-21 with port set to 8080 and targetport to 80. 
271 | 272 | 273 | 274 | Part II: 275 | 276 | 277 | 278 | Store the pod names and their ip addresses from the spectra-1267 ns at /root/pod_ips_cka05_svcn where the output is sorted by their IP's. 279 | 280 | Please ensure the format as shown below: 281 | 282 | 283 | 284 | POD_NAME IP_ADDR 285 | pod-1 ip-1 286 | pod-3 ip-2 287 | pod-2 ip-3 288 | ... 289 | 290 | 291 | Ans: 292 | --- 293 | 294 | Switching to cluster3: 295 | 296 | 297 | 298 | kubectl config use-context cluster3 299 | 300 | 301 | 302 | The easiest way to route traffic to a specific pod is by the use of labels and selectors . List the pods along with their labels: 303 | 304 | 305 | 306 | student-node ~ ➜ kubectl get pods --show-labels -n spectra-1267 307 | NAME READY STATUS RESTARTS AGE LABELS 308 | pod-12 1/1 Running 0 5m21s env=dev,mode=standard,type=external 309 | pod-34 1/1 Running 0 5m20s env=dev,mode=standard,type=internal 310 | pod-43 1/1 Running 0 5m20s env=prod,mode=exam,type=internal 311 | pod-23 1/1 Running 0 5m21s env=dev,mode=exam,type=external 312 | pod-32 1/1 Running 0 5m20s env=prod,mode=standard,type=internal 313 | pod-21 1/1 Running 0 5m20s env=prod,mode=exam,type=external 314 | 315 | 316 | 317 | Looks like there are a lot of pods created to confuse us. But we are only concerned with the labels of pod-23 and pod-21. 318 | 319 | 320 | 321 | As we can see both the required pods have labels mode=exam,type=external in common. Let's confirm that using kubectl too: 322 | 323 | 324 | 325 | student-node ~ ➜ kubectl get pod -l mode=exam,type=external -n spectra-1267 326 | NAME READY STATUS RESTARTS AGE 327 | pod-23 1/1 Running 0 9m18s 328 | pod-21 1/1 Running 0 9m17s 329 | 330 | 331 | 332 | Nice!! Now as we have figured out the labels, we can proceed further with the creation of the service: 333 | 334 | 335 | 336 | student-node ~ ➜ kubectl create service clusterip service-3421-svcn -n spectra-1267 --tcp=8080:80 --dry-run=client -o yaml > service-3421-svcn.yaml 337 | 338 | 339 | 340 | Now modify the service definition with selectors as required before applying to k8s cluster: 341 | 342 | 343 | 344 | student-node ~ ➜ cat service-3421-svcn.yaml 345 | apiVersion: v1 346 | kind: Service 347 | metadata: 348 | creationTimestamp: null 349 | labels: 350 | app: service-3421-svcn 351 | name: service-3421-svcn 352 | namespace: spectra-1267 353 | spec: 354 | ports: 355 | - name: 8080-80 356 | port: 8080 357 | protocol: TCP 358 | targetPort: 80 359 | selector: 360 | app: service-3421-svcn # delete 361 | mode: exam # add 362 | type: external # add 363 | type: ClusterIP 364 | status: 365 | loadBalancer: {} 366 | 367 | 368 | 369 | Finally let's apply the service definition: 370 | 371 | 372 | 373 | student-node ~ ➜ kubectl apply -f service-3421-svcn.yaml 374 | service/service-3421 created 375 | 376 | student-node ~ ➜ k get ep service-3421-svcn -n spectra-1267 377 | NAME ENDPOINTS AGE 378 | service-3421 10.42.0.15:80,10.42.0.17:80 52s 379 | 380 | 381 | 382 | To store all the pod name along with their IP's , we could use imperative command as shown below: 383 | 384 | 385 | 386 | student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP 387 | 388 | POD_NAME IP_ADDR 389 | pod-12 10.42.0.18 390 | pod-23 10.42.0.19 391 | pod-34 10.42.0.20 392 | pod-21 10.42.0.21 393 | ... 
394 | # store the output to /root/pod_ips 395 | student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP > /root/pod_ips_cka05_svcn 396 | 397 | ============================================= -------------------------------------------------------------------------------- /Mock Exam - 7.txt: -------------------------------------------------------------------------------- 1 | Que1: 2 | --- 3 | 4 | Create a service account called pink-sa-cka24-arch. Further create a cluster role called pink-role-cka24-arch with full permissions on all resources in the core api group under default namespace in cluster1. 5 | 6 | 7 | Finally create a cluster role binding called pink-role-binding-cka24-arch to bind pink-role-cka24-arch cluster role with pink-sa-cka24-arch service account. 8 | 9 | 10 | Ans: 11 | --- 12 | 13 | Create the service account, cluster role and role binding: 14 | 15 | student-node ~ ➜ kubectl --context cluster1 create serviceaccount pink-sa-cka24-arch 16 | student-node ~ ➜ kubectl --context cluster1 create clusterrole pink-role-cka24-arch --resource=* --verb=* 17 | student-node ~ ➜ kubectl --context cluster1 create clusterrolebinding pink-role-binding-cka24-arch --clusterrole=pink-role-cka24-arch --serviceaccount=default:pink-sa-cka24-arch 18 | 19 | ================ 20 | 21 | Que2: 22 | --- 23 | One of the nginx based pod called cyan-pod-cka28-trb is running under cyan-ns-cka28-trb namespace and it is exposed within the cluster using cyan-svc-cka28-trb service. 24 | 25 | This is a restricted pod so a network policy called cyan-np-cka28-trb has been created in the same namespace to apply some restrictions on this pod. 26 | 27 | 28 | Two other pods called cyan-white-cka28-trb1 and cyan-black-cka28-trb are also running in the default namespace. 29 | 30 | 31 | The nginx based app running on the cyan-pod-cka28-trb pod is exposed internally on the default nginx port (80). 32 | 33 | 34 | Expectation: This app should only be accessible from the cyan-white-cka28-trb1 pod. 35 | 36 | 37 | Problem: This app is not accessible from anywhere. 38 | 39 | 40 | Troubleshoot this issue and fix the connectivity as per the requirement listed above. 41 | 42 | 43 | Note: You can exec into cyan-white-cka28-trb and cyan-black-cka28-trb pods and test connectivity using the curl utility. 44 | 45 | 46 | You may update the network policy, but make sure it is not deleted from the cyan-ns-cka28-trb namespace. 47 | 48 | 49 | Ans: 50 | --- 51 | 52 | Let's look into the network policy 53 | 54 | kubectl edit networkpolicy cyan-np-cka28-trb -n cyan-ns-cka28-trb 55 | Under spec: -> egress: you will notice there is not cidr: block has been added, since there is no restrcitions on egress traffic so we can update it as below. Further you will notice that the port used in the policy is 8080 but the app is running on default port which is 80 so let's update this as well (under egress and ingress): 56 | 57 | Change port: 8080 to port: 80 58 | - ports: 59 | - port: 80 60 | protocol: TCP 61 | to: 62 | - ipBlock: 63 | cidr: 0.0.0.0/0 64 | Now, lastly notice that there is no POD selector has been used in ingress section but this app is supposed to be accessible from cyan-white-cka28-trb pod under default namespace. 
So let's edit it to look as below:
65 | 
66 | ingress:
67 | - from:
68 | - namespaceSelector:
69 | matchLabels:
70 | kubernetes.io/metadata.name: default
71 | podSelector:
72 | matchLabels:
73 | app: cyan-white-cka28-trb
74 | Now, let's try to access the app from cyan-white-cka28-trb
75 | 
76 | kubectl exec -it cyan-white-cka28-trb -- sh
77 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
78 | 
79 | Also make sure it's not accessible from the other pod(s)
80 | 
81 | kubectl exec -it cyan-black-cka28-trb -- sh
82 | curl cyan-svc-cka28-trb.cyan-ns-cka28-trb.svc.cluster.local
83 | 
84 | It should not work from this pod. So it's looking good now.
85 | 
86 | 
87 | ===================================
88 | 
89 | Que3:
90 | ---
91 | 
92 | The controlplane node called cluster4-controlplane in the cluster4 cluster is planned for regular maintenance. In preparation for this maintenance work, we need to take backups of this cluster. However, something is broken at the moment!
93 | 
94 | 
95 | Troubleshoot the issues and take a snapshot of the ETCD database using the etcdctl utility at the location /opt/etcd-boot-cka18-trb.db.
96 | 
97 | 
98 | Note: Make sure etcd listens at its default port. Also you can SSH to the cluster4-controlplane host using the ssh cluster4-controlplane command from the student-node.
99 | 
100 | 
101 | Ans:
102 | ---
103 | 
104 | SSH into the cluster4-controlplane host.
105 | ssh cluster4-controlplane
106 | Let's take the etcd backup
107 | 
108 | ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /opt/etcd-boot-cka18-trb.db
109 | 
110 | It might get stuck forever; let's see why that would happen. Try to list the PODs first
111 | 
112 | kubectl get pod -A
113 | There might be an error like
114 | 
115 | The connection to the server cluster4-controlplane:6443 was refused - did you specify the right host or port?
116 | There seems to be some issue with the cluster so let's look into the logs
117 | 
118 | journalctl -u kubelet -f
119 | You will see a lot of connect: connection refused errors, but that must be because the different cluster components are not able to connect to the api server, so try to filter out these logs to look more closely. The question hints that etcd may not be listening on its default port (2379), so check the etcd static pod manifest under /etc/kubernetes/manifests/, fix any wrong port there, wait for the static pods to come back up, and then re-run the snapshot command above.
120 | ===============================
121 | 
122 | Que4:
123 | ---
124 | 
125 | A pod called elastic-app-cka02-arch is running in the default namespace. The YAML file for this pod is available at /root/elastic-app-cka02-arch.yaml on the student-node. The single application container in this pod writes logs to the file /var/log/elastic-app.log.
126 | 
127 | 
128 | One of our logging mechanisms needs to read these logs to send them to an upstream logging server but we don't want to increase the read overhead for our main application container so recreate this POD with an additional sidecar container that will run along with the application container and print to the STDOUT by running the command tail -f /var/log/elastic-app.log. You can use busybox image for this sidecar container.
129 | 
130 | 
131 | Ans:
132 | ---
133 | 
134 | Recreate the pod with a new container called sidecar.
Update the /root/elastic-app-cka02-arch.yaml YAML file as shown below: 135 | 136 | apiVersion: v1 137 | kind: Pod 138 | metadata: 139 | name: elastic-app-cka02-arch 140 | spec: 141 | containers: 142 | - name: elastic-app 143 | image: busybox:1.28 144 | args: 145 | - /bin/sh 146 | - -c 147 | - > 148 | mkdir /var/log; 149 | i=0; 150 | while true; 151 | do 152 | echo "$(date) INFO $i" >> /var/log/elastic-app.log; 153 | i=$((i+1)); 154 | sleep 1; 155 | done 156 | volumeMounts: 157 | - name: varlog 158 | mountPath: /var/log 159 | - name: sidecar 160 | image: busybox:1.28 161 | args: [/bin/sh, -c, 'tail -f /var/log/elastic-app.log'] 162 | volumeMounts: 163 | - name: varlog 164 | mountPath: /var/log 165 | volumes: 166 | - name: varlog 167 | emptyDir: {} 168 | 169 | 170 | 171 | Next, recreate the pod: 172 | 173 | student-node ~ ➜ kubectl replace -f /root/elastic-app-cka02-arch.yaml --force --context cluster3 174 | pod "elastic-app-cka02-arch" deleted 175 | pod/elastic-app-cka02-arch replaced 176 | 177 | student-node ~ ➜ 178 | 179 | ================================================ 180 | 181 | Que5: 182 | --- 183 | 184 | The blue-dp-cka09-trb deployment is having 0 out of 1 pods running. Fix the issue to make sure that pod is up and running. 185 | 186 | 187 | Ans: 188 | --- 189 | 190 | List the pods 191 | kubectl get pod 192 | Most probably you see Init:Error or Init:CrashLoopBackOff for the corresponding pod. 193 | 194 | Look into the logs 195 | kubectl logs blue-dp-cka09-trb-xxxx -c init-container 196 | You will see an error something like 197 | 198 | sh: can't open 'echo 'Welcome!'': No such file or directory 199 | Edit the deployment 200 | kubectl edit deploy blue-dp-cka09-trb 201 | Under initContainers: -> - command: add -c to the next line of - sh, so final command should look like this 202 | initContainers: 203 | - command: 204 | - sh 205 | - -c 206 | - echo 'Welcome!' 207 | If you will check pod then it must be failing again but with different error this time, let's find that out 208 | 209 | kubectl get event --field-selector involvedObject.name=blue-dp-cka09-trb-xxxxx 210 | You will see an error something like 211 | 212 | Warning Failed pod/blue-dp-cka09-trb-69dd844f76-rv9z8 Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/98182a41-6d6d-406a-a3e2-37c33036acac/volumes/kubernetes.io~configmap/nginx-config" to rootfs at "/etc/nginx/nginx.conf": mount /var/lib/kubelet/pods/98182a41-6d6d-406a-a3e2-37c33036acac/volumes/kubernetes.io~configmap/nginx-config:/etc/nginx/nginx.conf (via /proc/self/fd/6), flags: 0x5001: not a directory: unknown 213 | Edit the deployment again 214 | kubectl edit deploy blue-dp-cka09-trb 215 | Under volumeMounts: -> - mountPath: /etc/nginx/nginx.conf -> name: nginx-config add subPath: nginx.conf and save the changes. 216 | Finally the pod should be in running state. 217 | 218 | ====================================== 219 | 220 | Que6: 221 | --- 222 | Create a storage class called orange-stc-cka07-str as per the properties given below: 223 | 224 | 225 | - Provisioner should be kubernetes.io/no-provisioner. 226 | 227 | - Volume binding mode should be WaitForFirstConsumer. 228 | 229 | 230 | Next, create a persistent volume called orange-pv-cka07-str as per the properties given below: 231 | 232 | 233 | - Capacity should be 150Mi. 234 | 235 | - Access mode should be ReadWriteOnce. 
======================================

Que6:
---
Create a storage class called orange-stc-cka07-str as per the properties given below:

- Provisioner should be kubernetes.io/no-provisioner.
- Volume binding mode should be WaitForFirstConsumer.

Next, create a persistent volume called orange-pv-cka07-str as per the properties given below:

- Capacity should be 150Mi.
- Access mode should be ReadWriteOnce.
- Reclaim policy should be Retain.
- It should use storage class orange-stc-cka07-str.
- Local path should be /opt/orange-data-cka07-str.
- Also add node affinity to create this volume on cluster1-controlplane.

Finally, create a persistent volume claim called orange-pvc-cka07-str as per the properties given below:

- Access mode should be ReadWriteOnce.
- It should use storage class orange-stc-cka07-str.
- Storage request should be 128Mi.
- The volume should be orange-pv-cka07-str.

Ans:
---

Create a YAML file as below:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: orange-stc-cka07-str
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: orange-pv-cka07-str
spec:
  capacity:
    storage: 150Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: orange-stc-cka07-str
  local:
    path: /opt/orange-data-cka07-str
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cluster1-controlplane

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: orange-pvc-cka07-str
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: orange-stc-cka07-str
  volumeName: orange-pv-cka07-str
  resources:
    requests:
      storage: 128Mi

Apply the template:

kubectl apply -f <file-name>.yaml
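As an optional check, verify the three objects were created; note that the PVC may stay in Pending until a pod actually consumes it, which is expected behaviour with the WaitForFirstConsumer binding mode:

kubectl get sc orange-stc-cka07-str
kubectl get pv orange-pv-cka07-str
kubectl get pvc orange-pvc-cka07-str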
===============================

Que7:
---
Create an nginx pod called nginx-resolver-cka06-svcn using the image nginx, and expose it internally with a service called nginx-resolver-service-cka06-svcn.

Test that you are able to look up the service and pod names from within the cluster. Use the image busybox:1.28 for the DNS lookup. Record the results in /root/CKA/nginx.svc.cka06.svcn and /root/CKA/nginx.pod.cka06.svcn.

Ans:
---

To create the pod nginx-resolver-cka06-svcn and expose it internally:

student-node ~ ➜ kubectl run nginx-resolver-cka06-svcn --image=nginx
student-node ~ ➜ kubectl expose pod/nginx-resolver-cka06-svcn --name=nginx-resolver-service-cka06-svcn --port=80 --target-port=80 --type=ClusterIP

Create a pod test-nslookup to test that you are able to look up the service name from within the cluster, and record the result:

student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service-cka06-svcn
student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup nginx-resolver-service-cka06-svcn > /root/CKA/nginx.svc.cka06.svcn

Get the IP of the nginx-resolver-cka06-svcn pod and replace the dots (.) with hyphens (-), which will be used below:

student-node ~ ➜ kubectl get pod nginx-resolver-cka06-svcn -o wide
student-node ~ ➜ IP=`kubectl get pod nginx-resolver-cka06-svcn -o wide --no-headers | awk '{print $6}' | tr '.' '-'`
student-node ~ ➜ kubectl run test-nslookup --image=busybox:1.28 --rm -it --restart=Never -- nslookup $IP.default.pod > /root/CKA/nginx.pod.cka06.svcn
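For context, pod DNS records take the form <pod-ip-with-dashes>.<namespace>.pod.cluster.local, which is why the dots in the pod IP are replaced with dashes before the lookup. A quick way to confirm what was recorded:

cat /root/CKA/nginx.svc.cka06.svcn
cat /root/CKA/nginx.pod.cka06.svcn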
===================================

Que8:
---
Create a ReplicaSet with the name checker-cka10-svcn in the ns-12345-svcn namespace with the image registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3.

Make sure to specify the below specs as well:

command sleep 3600
replicas set to 2
container name: dns-image

Once the checker pods are up and running, store the output of the command nslookup kubernetes.default from any one of the checker pods into the file /root/dns-output-12345-cka10-svcn on the student-node.

Ans:
---

Create the ReplicaSet as per the requirements:

kubectl apply -f - << EOF
---
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: ns-12345-svcn
spec: {}
status: {}

---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: checker-cka10-svcn
  namespace: ns-12345-svcn
  labels:
    app: dns
    tier: testing
spec:
  replicas: 2
  selector:
    matchLabels:
      tier: testing
  template:
    metadata:
      labels:
        tier: testing
    spec:
      containers:
      - name: dns-image
        image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
        command:
        - sleep
        - "3600"
EOF

Now let's test if the nslookup command is working:

student-node ~ ➜ k get pods -n ns-12345-svcn
NAME                       READY   STATUS    RESTARTS   AGE
checker-cka10-svcn-d2cd2   1/1     Running   0          12s
checker-cka10-svcn-qj8rc   1/1     Running   0          12s

student-node ~ ➜ POD_NAME=`k get pods -n ns-12345-svcn --no-headers | head -1 | awk '{print $1}'`

student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1

There seems to be a problem with name resolution. Let's check if our coredns pods are up and if any service exists to reach them:

student-node ~ ➜ k get pods -n kube-system | grep coredns
coredns-6d4b75cb6d-cprjz   1/1   Running   0   42m
coredns-6d4b75cb6d-fdrhv   1/1   Running   0   42m

student-node ~ ➜ k get svc -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   62m

Everything looks okay here, but the name resolution problem persists, so let's see if the kube-dns service has any active endpoints:

student-node ~ ➜ kubectl get ep -n kube-system kube-dns
NAME       ENDPOINTS   AGE
kube-dns               63m

Finally, we have our culprit: the endpoints list is empty.

If we dig a little deeper, we will see that the service is using the wrong selector:

student-node ~ ➜ kubectl describe svc -n kube-system kube-dns
Name:       kube-dns
Namespace:  kube-system
....
Selector:   k8s-app=core-dns
Type:       ClusterIP
...

student-node ~ ➜ kubectl get deploy -n kube-system --show-labels | grep coredns
coredns   2/2   2   2   66m   k8s-app=kube-dns

Let's update the kube-dns service to point to the correct set of pods:

student-node ~ ➜ kubectl patch service -n kube-system kube-dns -p '{"spec":{"selector":{"k8s-app": "kube-dns"}}}'
service/kube-dns patched

student-node ~ ➜ kubectl get ep -n kube-system kube-dns
NAME       ENDPOINTS                                              AGE
kube-dns   10.50.0.2:53,10.50.192.1:53,10.50.0.2:53 + 3 more...   69m

NOTE: We can use any method to update the kube-dns service. In our case, we have used the kubectl patch command.

Now let's store the correct output to /root/dns-output-12345-cka10-svcn:

student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1

student-node ~ ➜ kubectl exec -n ns-12345-svcn -i -t $POD_NAME -- nslookup kubernetes.default > /root/dns-output-12345-cka10-svcn
=================================

Que9:
---
Create a ClusterIP service, i.e. service-3421-svcn, in the spectra-1267 namespace which should expose the pods pod-23 and pod-21 with port set to 8080 and targetPort set to 80.

Part II:

Store the pod names and their IP addresses from the spectra-1267 namespace at /root/pod_ips_cka05_svcn, where the output is sorted by their IPs.

Please ensure the format is as shown below:

POD_NAME   IP_ADDR
pod-1      ip-1
pod-3      ip-2
pod-2      ip-3
...

Ans:
---
The easiest way to route traffic to a specific pod is by the use of labels and selectors. List the pods along with their labels:

student-node ~ ➜ kubectl get pods --show-labels -n spectra-1267
NAME     READY   STATUS    RESTARTS   AGE     LABELS
pod-12   1/1     Running   0          5m21s   env=dev,mode=standard,type=external
pod-34   1/1     Running   0          5m20s   env=dev,mode=standard,type=internal
pod-43   1/1     Running   0          5m20s   env=prod,mode=exam,type=internal
pod-23   1/1     Running   0          5m21s   env=dev,mode=exam,type=external
pod-32   1/1     Running   0          5m20s   env=prod,mode=standard,type=internal
pod-21   1/1     Running   0          5m20s   env=prod,mode=exam,type=external

Looks like there are a lot of pods created to confuse us, but we are only concerned with the labels of pod-23 and pod-21.

As we can see, both the required pods have the labels mode=exam,type=external in common. Let's confirm that using kubectl too:

student-node ~ ➜ kubectl get pod -l mode=exam,type=external -n spectra-1267
NAME     READY   STATUS    RESTARTS   AGE
pod-23   1/1     Running   0          9m18s
pod-21   1/1     Running   0          9m17s

Nice!! Now that we have figured out the labels, we can proceed with the creation of the service:

student-node ~ ➜ kubectl create service clusterip service-3421-svcn -n spectra-1267 --tcp=8080:80 --dry-run=client -o yaml > service-3421-svcn.yaml

Now modify the service definition with the selectors as required before applying it to the cluster:

student-node ~ ➜ cat service-3421-svcn.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: service-3421-svcn
  name: service-3421-svcn
  namespace: spectra-1267
spec:
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: service-3421-svcn   # delete
    mode: exam               # add
    type: external           # add
  type: ClusterIP
status:
  loadBalancer: {}

Finally, let's apply the service definition:

student-node ~ ➜ kubectl apply -f service-3421-svcn.yaml
service/service-3421-svcn created

student-node ~ ➜ k get ep service-3421-svcn -n spectra-1267
NAME                ENDPOINTS                     AGE
service-3421-svcn   10.42.0.15:80,10.42.0.17:80   52s

To store all the pod names along with their IPs, we could use an imperative command as shown below:

student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP

POD_NAME   IP_ADDR
pod-12     10.42.0.18
pod-23     10.42.0.19
pod-34     10.42.0.20
pod-21     10.42.0.21
...

# store the output to /root/pod_ips_cka05_svcn
student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP > /root/pod_ips_cka05_svcn

=================================
Que10:
---

Deploy a messaging-cka07-svcn pod using the redis:alpine image with the labels set to tier=msg.

Now create a service messaging-service-cka07-svcn to expose the messaging-cka07-svcn application within the cluster on port 6379.

TIP: Use imperative commands.

Ans:
---
On the student-node, use the command:

kubectl run messaging-cka07-svcn --image=redis:alpine -l tier=msg

Now run the command:

kubectl expose pod messaging-cka07-svcn --port=6379 --name messaging-service-cka07-svcn
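As an optional check (assuming the default namespace), confirm that the service picked up the pod's tier=msg label as its selector and has an endpoint on port 6379:

kubectl get svc messaging-service-cka07-svcn
kubectl get ep messaging-service-cka07-svcn

kubectl expose derives the selector from the pod's labels, so the endpoints list should show the messaging pod's IP.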
--------------------------------------------------------------------------------
/Mock Exam - 8.txt:
--------------------------------------------------------------------------------

Que1:
---

We recently deployed a DaemonSet called logs-cka26-trb under the kube-system namespace in cluster2 for collecting logs from all the cluster nodes, including the controlplane node. However, at this moment, the DaemonSet is not creating any pod on the controlplane node.

Troubleshoot the issue and fix it to make sure the pods are getting created on all nodes, including the controlplane node.

Ans:
---
Check the status of the DaemonSet:

kubectl --context cluster2 get ds logs-cka26-trb -n kube-system

You will find that DESIRED, CURRENT, READY, etc. show the value 2, which means two pods have been created. You can check the same by listing the pods:

kubectl --context cluster2 get pod -n kube-system

You can check which nodes these are created on:

kubectl --context cluster2 get pod -n kube-system -o wide

Under NODE you will find the node names, so we can see that the pods are not scheduled on the controlplane node, which is because the DaemonSet must be missing the required tolerations. Let's edit the DaemonSet to fix the tolerations:

kubectl --context cluster2 edit ds logs-cka26-trb -n kube-system

Under tolerations:, add the below toleration as well:

- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule

Wait for some time; the pods should now get scheduled on all nodes, including the controlplane node.
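A quick way to confirm the fix (using the names from this lab):

kubectl --context cluster2 get pod -n kube-system -o wide | grep logs-cka26-trb

The DaemonSet pods should now also be listed with the controlplane node under NODE.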
===========================

Que2:
---
Part I:

Create a ClusterIP service, i.e. service-3421-svcn, in the spectra-1267 namespace which should expose the pods pod-23 and pod-21 with port set to 8080 and targetPort set to 80.

Part II:

Store the pod names and their IP addresses from the spectra-1267 namespace at /root/pod_ips_cka05_svcn, where the output is sorted by their IPs.

Please ensure the format is as shown below:

POD_NAME   IP_ADDR
pod-1      ip-1
pod-3      ip-2
pod-2      ip-3
...

Ans:
---

The easiest way to route traffic to a specific pod is by the use of labels and selectors. List the pods along with their labels:

student-node ~ ➜ kubectl get pods --show-labels -n spectra-1267
NAME     READY   STATUS    RESTARTS   AGE     LABELS
pod-12   1/1     Running   0          5m21s   env=dev,mode=standard,type=external
pod-34   1/1     Running   0          5m20s   env=dev,mode=standard,type=internal
pod-43   1/1     Running   0          5m20s   env=prod,mode=exam,type=internal
pod-23   1/1     Running   0          5m21s   env=dev,mode=exam,type=external
pod-32   1/1     Running   0          5m20s   env=prod,mode=standard,type=internal
pod-21   1/1     Running   0          5m20s   env=prod,mode=exam,type=external

Looks like there are a lot of pods created to confuse us, but we are only concerned with the labels of pod-23 and pod-21.

As we can see, both the required pods have the labels mode=exam,type=external in common. Let's confirm that using kubectl too:

student-node ~ ➜ kubectl get pod -l mode=exam,type=external -n spectra-1267
NAME     READY   STATUS    RESTARTS   AGE
pod-23   1/1     Running   0          9m18s
pod-21   1/1     Running   0          9m17s

Nice!! Now that we have figured out the labels, we can proceed with the creation of the service:

student-node ~ ➜ kubectl create service clusterip service-3421-svcn -n spectra-1267 --tcp=8080:80 --dry-run=client -o yaml > service-3421-svcn.yaml

Now modify the service definition with the selectors as required before applying it to the cluster:

student-node ~ ➜ cat service-3421-svcn.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: service-3421-svcn
  name: service-3421-svcn
  namespace: spectra-1267
spec:
  ports:
  - name: 8080-80
    port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: service-3421-svcn   # delete
    mode: exam               # add
    type: external           # add
  type: ClusterIP
status:
  loadBalancer: {}

Finally, let's apply the service definition:

student-node ~ ➜ kubectl apply -f service-3421-svcn.yaml
service/service-3421-svcn created

student-node ~ ➜ k get ep service-3421-svcn -n spectra-1267
NAME                ENDPOINTS                     AGE
service-3421-svcn   10.42.0.15:80,10.42.0.17:80   52s

To store all the pod names along with their IPs, we could use an imperative command as shown below:

student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP

POD_NAME   IP_ADDR
pod-12     10.42.0.18
pod-23     10.42.0.19
pod-34     10.42.0.20
pod-21     10.42.0.21
...

# store the output to /root/pod_ips_cka05_svcn
student-node ~ ➜ kubectl get pods -n spectra-1267 -o=custom-columns='POD_NAME:metadata.name,IP_ADDR:status.podIP' --sort-by=.status.podIP > /root/pod_ips_cka05_svcn

--------------------------------------------------------------------------------
/Mock Exam - 9.txt:
--------------------------------------------------------------------------------

Que1:
---
A pod named beta-pod-cka01-arch has been created in the beta-cka01-arch namespace. Inspect the logs and save all logs starting with the string ERROR in the file /root/beta-pod-cka01-arch_errors on the student-node.

Ans:
---
kubectl logs -n beta-cka01-arch beta-pod-cka01-arch | tail -n +2 > /root/beta-pod-cka01-arch_errors
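Note: tail -n +2 simply skips the first log line, which presumably works in this lab because every remaining line is an ERROR entry. If the log also contained non-ERROR lines, a more direct way to capture only lines starting with ERROR would be:

kubectl logs -n beta-cka01-arch beta-pod-cka01-arch | grep "^ERROR" > /root/beta-pod-cka01-arch_errors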
=============

Que2:
---

A storage class called coconut-stc-cka01-str was created earlier.

Use this storage class to create a persistent volume called coconut-pv-cka01-str as per the below requirements:

- Capacity should be 100Mi.
- The volume type should be hostpath and the path should be /opt/coconut-stc-cka01-str.
- Use the coconut-stc-cka01-str storage class.
- This volume must be created on cluster1-node01 (the /opt/coconut-stc-cka01-str directory already exists on this node).
- It must have a label with key: storage-tier and value: gold.

Also create a persistent volume claim with the name coconut-pvc-cka01-str as per the below specs:

- Request 50Mi of storage from the coconut-pv-cka01-str PV; it must use matchLabels to select the PV.
- Use the coconut-stc-cka01-str storage class.
- The access mode must be ReadWriteMany.

Ans:
---
First set the context to cluster1:

kubectl config use-context cluster1

Create a YAML template as below:

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: coconut-pv-cka01-str
  labels:
    storage-tier: gold
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /opt/coconut-stc-cka01-str
  storageClassName: coconut-stc-cka01-str
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cluster1-node01
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: coconut-pvc-cka01-str
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  storageClassName: coconut-stc-cka01-str
  selector:
    matchLabels:
      storage-tier: gold

Apply the template:

kubectl apply -f <file-name>.yaml
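As an optional check, verify that the claim binds to the pre-created PV (depending on the storage class's volumeBindingMode, the claim may stay Pending until a pod uses it):

kubectl get pv coconut-pv-cka01-str
kubectl get pvc coconut-pvc-cka01-str

Even though the claim only requests 50Mi, once bound it will report the full 100Mi of the PV, since a PVC is bound to the whole volume.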
===========================

Que3:
---

Create a pod with the name tester-cka02-svcn in the dev-cka02-svcn namespace with the image registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3. Make sure to use the command sleep 3600 with the restart policy set to Always.

Once the tester-cka02-svcn pod is running, store the output of the command nslookup kubernetes.default from the tester pod into the file /root/dns_output on the student-node.

Ans:
---
Change to the cluster1 context before attempting the task:

kubectl config use-context cluster1

Since the dev-cka02-svcn namespace doesn't exist, let's create it first:

kubectl create ns dev-cka02-svcn

Create the pod as per the requirements:

kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: tester-cka02-svcn
  namespace: dev-cka02-svcn
spec:
  containers:
  - name: tester-cka02-svcn
    image: registry.k8s.io/e2e-test-images/jessie-dnsutils:1.3
    command:
    - sleep
    - "3600"
  restartPolicy: Always
EOF

Now let's test if the nslookup command is working:

student-node ~ ➜ kubectl exec -n dev-cka02-svcn -i -t tester-cka02-svcn -- nslookup kubernetes.default
;; connection timed out; no servers could be reached

command terminated with exit code 1

Looks like something is broken at the moment. If we look at the kube-system namespace, we will see that no coredns pods are running, which is causing the problem. Let's scale them up so the nslookup command works:

kubectl scale deployment -n kube-system coredns --replicas=2

Now let's store the correct output into /root/dns_output on the student-node:

kubectl exec -n dev-cka02-svcn -i -t tester-cka02-svcn -- nslookup kubernetes.default >> /root/dns_output

We should have something similar to the below output:

student-node ~ ➜ cat /root/dns_output
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      kubernetes.default.svc.cluster.local
Address:   10.96.0.1

========================

Que4:
---

Create a LoadBalancer service with the name wear-service-cka09-svcn to expose the webapp-wear-cka09-svcn deployment in the app-space namespace.

Ans:
---

Switch to cluster3:

kubectl config use-context cluster3

On the student-node, run the command:

student-node ~ ➜ kubectl expose -n app-space deployment webapp-wear-cka09-svcn --type=LoadBalancer --name=wear-service-cka09-svcn --port=8080
service/wear-service-cka09-svcn exposed

student-node ~ ➜ k get svc -n app-space
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
wear-service-cka09-svcn   LoadBalancer   10.43.68.233   172.25.0.14   8080:32109/TCP   14s

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

This contains Kubernetes questions and answers which help to clear the CKA certification.

--------------------------------------------------------------------------------