├── .DS_Store
├── README.md
├── cka-changes-2024
│   ├── README.md
│   ├── cka-exam-changes-feb18-2025.pdf
│   ├── cka-exam-changes-overview-2025.png
│   ├── cka-new-exam-changes-november-25-2024.pdf
│   └── free-download-button.png
├── cluster-architecture.md
├── networking.md
├── scheduling.md
├── storage.md
└── troubleshooting.md

--------------------------------------------------------------------------------
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chadmcrowell/CKA-Exercises/70fb30b022ccaadf8d06c5aa07123ac072ed7f79/.DS_Store
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com)

# CKA Exercises

> IMPORTANT EXAM CHANGES COMING FEBRUARY 18th, 2025
[Read more about the CKA exam changes coming February 18th, 2025](cka-changes-2024/README.md)

[![cka-badge](https://training.linuxfoundation.org/wp-content/uploads/2019/03/logo_cka_whitetext-300x293.png)](https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/)

A set of exercises to help you prepare for the [Certified Kubernetes Administrator (CKA) Exam](https://www.cncf.io/certification/cka/)

The exam is entirely hands-on, taken in a command-line environment. You can take the exam at home or in a testing center, and you must complete it in 180 minutes. Register for the exam [here](https://training.linuxfoundation.org/certification/certified-kubernetes-administrator-cka/). The cost is US $395.00.

As of December 2021, over 32,000 people have taken the CKA exam since its introduction in 2017!

To complete the exercises in this repo and get even more practice with exam-like scenarios in a **FREE** Kubernetes lab environment, go to [killercoda.com](https://killercoda.com/chadmcrowell)

During the exam, you will have access to six different clusters (below) in the following configurations:

| Cluster | Members               | CNI      | Description                       |
| :------ | :-------------------- | :------- | :-------------------------------- |
| k8s     | 1 master, 2 workers   | flannel  | k8s cluster                       |
| hk8s    | 1 master, 2 workers   | calico   | k8s cluster                       |
| bk8s    | 1 master, 1 worker    | flannel  | k8s cluster                       |
| wk8s    | 1 master, 2 workers   | flannel  | k8s cluster                       |
| ek8s    | 1 master, 2 workers   | flannel  | k8s cluster                       |
| ik8s    | 1 master, 1 base node | loopback | k8s cluster - missing worker node |

[Important Instructions - Official Exam Handbook](https://docs.linuxfoundation.org/tc-docs/certification/tips-cka-and-ckad#cka-and-ckad-environment)

[FAQ: CKA - Official Exam Handbook](https://docs.linuxfoundation.org/tc-docs/certification/faq-cka-ckad-cks)

Also during the exam, you may have one and ONLY one of the following tabs open at all times:
[kubernetes.io/docs](https://kubernetes.io/docs/home/)
[kubernetes.io/blog](https://kubernetes.io/blog/)

Not sure if you have the right equipment to take the exam at home?
[Run a system check](https://www.examslocal.com/ScheduleExam/Home/CompatibilityCheck)

## Contents

- [Cluster Architecture, Installation & Configuration - 25%](cluster-architecture.md)
- [Workloads & Scheduling - 15%](scheduling.md)
- [Services & Networking - 20%](networking.md)
- [Storage - 10%](storage.md)
- [Troubleshooting - 30%](troubleshooting.md)

[View the most current exam curriculum](https://github.com/cncf/curriculum)

## Additional Grading Information

The CKA exam is graded on outcome only (i.e. the end state of the system). The path an exam taker took to get to the outcome is not evaluated, meaning an exam taker can take any path they want as long as it achieves the correct outcome. Incomplete work, i.e. work that is input but did not lead to the correct outcome, will not be evaluated.

Some exam items may have multiple parts and therefore have multiple 'checks' (one for each verifiable component of the answer). Candidates are given credit for each successful check, so partial credit is possible on such items.

Exam items are also set up to be independent of one another. As long as the candidate does exactly what the questions ask, there will be no dependencies or conflicts, and as long as the candidate correctly achieves the outcome being asked for in a specific exam item, they will earn the points for that item.

Scoring is done using an automatic grading script. The grading scripts have been time tested and continuously refined, so the likelihood of a question being graded incorrectly is very low, since grading is based on outcomes (the end state of the system), not the path the user took to get there.

## Updates

The CKA exam is currently on v1.31 of k8s. The removal of dockershim happened in v1.24, so expect the containerd container runtime if you are taking the exam today and into the future. You can view the container runtime in use with the command `k get no -o wide`. The output will look similar to this (see the column named "CONTAINER-RUNTIME"):

```bash
NAME           STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
controlplane   Ready    control-plane   34d   v1.28.1   172.30.1.2    <none>        Ubuntu 20.04.5 LTS   5.4.0-131-generic   containerd://1.6.12
node01         Ready    <none>          34d   v1.28.1   172.30.2.2    <none>        Ubuntu 20.04.5 LTS   5.4.0-131-generic   containerd://1.6.12
```

## Exam Release Cycle

The exams are upgraded to the latest version of k8s within 4-6 weeks of the version being released. [Dockershim FAQ](https://kubernetes.io/blog/2020/12/02/dockershim-faq/)

--------------------------------------------------------------------------------
/cka-changes-2024/README.md:
--------------------------------------------------------------------------------

## CKA Exam Changes - February 18, 2025

[Official Announcement - Linux Foundation](https://training.linuxfoundation.org/certified-kubernetes-administrator-cka-program-changes/)

[Kubernetes is evolving, the CKA exam too! - CNCF Blog](https://www.cncf.io/blog/2024/09/05/kubernetes-is-evolving-the-cka-exam-too/)
[CKA version 1.32 - Curriculum Overview](https://github.com/cncf/curriculum/blob/master/CKA_Curriculum_v1.32.pdf)

---

[![Download CKA Exam Changes Document](free-download-button.png)](cka-exam-changes-feb18-2025.pdf)

[Side by Side Comparison - New CKA Objectives](cka-exam-changes-feb18-2025.pdf)

![CKA Exam Changes - Overview](cka-exam-changes-overview-2025.png)

--------------------------------------------------------------------------------
/cka-changes-2024/cka-exam-changes-feb18-2025.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chadmcrowell/CKA-Exercises/70fb30b022ccaadf8d06c5aa07123ac072ed7f79/cka-changes-2024/cka-exam-changes-feb18-2025.pdf
--------------------------------------------------------------------------------
/cka-changes-2024/cka-exam-changes-overview-2025.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chadmcrowell/CKA-Exercises/70fb30b022ccaadf8d06c5aa07123ac072ed7f79/cka-changes-2024/cka-exam-changes-overview-2025.png
--------------------------------------------------------------------------------
/cka-changes-2024/cka-new-exam-changes-november-25-2024.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chadmcrowell/CKA-Exercises/70fb30b022ccaadf8d06c5aa07123ac072ed7f79/cka-changes-2024/cka-new-exam-changes-november-25-2024.pdf
--------------------------------------------------------------------------------
/cka-changes-2024/free-download-button.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/chadmcrowell/CKA-Exercises/70fb30b022ccaadf8d06c5aa07123ac072ed7f79/cka-changes-2024/free-download-button.png
--------------------------------------------------------------------------------
/cluster-architecture.md:
--------------------------------------------------------------------------------

# Cluster Architecture, Installation & Configuration (25%)

kubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

kubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) using API

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)

### Set up autocomplete for k8s commands

17 | 18 | ```bash 19 | source <(kubectl completion bash) 20 | echo "source <(kubectl completion bash)" >> ~/.bashrc 21 | ``` 22 | 23 |

24 |
### Set up an alias for kubectl

30 | 31 | ```bash 32 | alias k=kubectl 33 | # have this persist beyond the current shell 34 | echo 'alias k=kubectl' >> ~/.bashrc 35 | source ~/.bashrc 36 | ``` 37 | 38 |
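If you use the alias heavily, it helps to wire shell completion to it as well; this extra line comes from the official kubectl cheat sheet:

```bash
# make bash completion work for the k alias too
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```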

39 |
40 | 41 | ### Show kubeconfig settings 42 | 43 |

45 | 46 | ```bash 47 | kubectl config view 48 | ``` 49 | 50 |

51 |
52 | 53 | ### Use multiple kubeconfig files at the same time 54 | 55 |

```bash
# export the variable so child processes (like kubectl) can see it
export KUBECONFIG=~/.kube/config:~/.kube/kubeconfig2
```
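Once both files are on the `KUBECONFIG` path, kubectl presents them as one merged view; a quick sketch, assuming the two kubeconfig files above exist:

```bash
# show the merged configuration from every file on the KUBECONFIG path
kubectl config view

# optionally write the merged result out as a single file
kubectl config view --flatten > ~/.kube/merged-config
```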

63 |
64 | 65 | ### Perform the command to list all API resources in your Kubernetes cluster 66 | 67 |

69 | 70 | ```bash 71 | kubectl api-resources 72 | ``` 73 | 74 |

75 |
76 | 77 | ### View the client certificate that the kubelet uses to authenticate to the Kubernetes API 78 | 79 |

81 | 82 | ```bash 83 | # The kubelet’s client certificate is named `kubelet-client-current.pem` and is stored locally on the control plane node 84 | # on the cka exam, make sure to ssh to the control plane node first 85 | ls /var/lib/kubelet/pki/ 86 | 87 | # view the certificate file `kubelet-client-current.pem` with openssl CLI 88 | openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -text -noout 89 | ``` 90 |

91 |
### Create a new user named “sandra”, first creating the private key, then the certificate signing request, then using the CSR resource in Kubernetes to generate the client certificate.

```bash
# create a private key using openssl with 2048-bit encryption
openssl genrsa -out sandra.key 2048

# create a certificate signing request to give to the Kubernetes API
openssl req -new -key sandra.key -subj "/CN=sandra" -out sandra.csr

# store the file `sandra.csr` in an environment variable named "REQUEST"
export REQUEST=$(cat sandra.csr | base64 -w 0)

# create the CSR as a Kubernetes resource
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: sandra
spec:
  request: $REQUEST
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF

# approve the CSR
kubectl certificate approve sandra

# export the issued certificate from the CSR resource, decoding it into `sandra.crt`
kubectl get csr sandra -o jsonpath='{.status.certificate}' | base64 -d > sandra.crt
```

[Try this in Killercoda's Kubernetes Lab Environment](https://killercoda.com/chadmcrowell/course/cka/kubernetes-create-user)
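Before wiring the new certificate into a kubeconfig, it can be worth a sanity check with the same openssl tooling used above; a minimal check:

```bash
# confirm the issued certificate carries the expected subject (CN=sandra)
openssl x509 -in sandra.crt -text -noout | grep Subject:
```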

137 |
138 | 139 | ### Add that new user `sandra` to your local kubeconfig using the kubectl config command 140 | 141 |

```bash
# set the credentials in your existing kubeconfig (~/.kube/config)
k config set-credentials sandra --client-key=sandra.key --client-certificate=sandra.crt --embed-certs

# view the kubeconfig to see sandra added
k config view

# get your current context
k config get-contexts

# set the context for sandra
k config set-context sandra --user=sandra --cluster=kubernetes

# switch to using the `sandra` context
kubectl config use-context sandra
```

161 |
### Upgrade the control plane components using kubeadm. When completed, check that everything, including kubelet and kubectl, is upgraded to version 1.31.6

```bash
# list the control plane components at their current version and target version
kubeadm upgrade plan

# apply the upgrade to 1.31.6
kubeadm upgrade apply v1.31.6

# optionally upgrade kubeadm first
# do this if you get the message "Specified version to upgrade to "v1.31.6" is higher than the kubeadm version "v1.31.0". Upgrade kubeadm first using the tool you used to install kubeadm"

# download the public signing key for the Kubernetes package repositories
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# add the appropriate Kubernetes apt repository
# this overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# update kubeadm to version 1.31.6-1.1 (--allow-change-held-packages in case the package is held)
sudo apt update && sudo apt install -y --allow-change-held-packages kubeadm=1.31.6-1.1

# try again to upgrade the control plane components using kubeadm
kubeadm upgrade apply v1.31.6 -y

# run kubeadm upgrade plan again to verify that everything is upgraded to 1.31.6
kubeadm upgrade plan
```
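The question also asks for kubelet and kubectl at 1.31.6, which `kubeadm upgrade apply` does not touch; a sketch of the remaining steps, assuming the same apt repository and the 1.31.6-1.1 package revision used above:

```bash
# upgrade the node binaries to match the control plane
sudo apt install -y --allow-change-held-packages kubelet=1.31.6-1.1 kubectl=1.31.6-1.1

# restart the kubelet so the new version takes effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# the node should now report v1.31.6
kubectl get nodes
```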

195 |
### Create a role that will allow users to get, watch, and list pods and container logs

Create a role just using the kubectl command line
```bash
# create a role using `kubectl create role -h` for help
kubectl create role pod-reader --verb=get,watch,list --resource=pods,pods/log
```

Create a role from a YAML file named `role.yml`
```bash
# write the role spec to a file named role.yml
cat <<EOF > role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
EOF

# create the role
kubectl apply -f role.yml

# EXTRA CREDIT: After you've created the role binding, use `kubectl auth can-i ...` to test the role
```

230 |
### Create a role binding that binds the role `pod-reader` to a user named `dev`

```bash
# write the role binding spec to a file named role-binding.yml
cat <<EOF > role-binding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: default
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

# create the role binding from YAML
kubectl create -f role-binding.yml
```

Test the role binding using `kubectl auth can-i`
```bash
# see if the user "dev" can get pods
kubectl auth can-i get pods --namespace=default --as=dev

# see if the user "dev" can view pod logs
kubectl auth can-i get pods/log --namespace=default --as=dev

# check that the user "dev" cannot create pods
kubectl auth can-i create pods --namespace=default --as=dev
```

271 |
272 | 273 | ### Create a new role named `sa-creator` that will allow creating service accounts. 274 | 275 |

277 | 278 | ```bash 279 | # use kubectl to create the role, with the help of `kubectl create role -h` 280 | kubectl create role sa-creator --verb=create --resource=sa 281 | ``` 282 | 283 |

284 |
### Create a role binding named `sa-creator-binding` that is associated with the previous `sa-creator` role and binds to the user `sandra`

290 | 291 | ```bash 292 | # use kubectl to create the role binding, with the help of `kubectl create rolebinding -h` 293 | kubectl create rolebinding sa-creator-binding --role=sa-creator --user=sandra 294 | 295 | # use `kubectl auth can-i` to verify sandra can create service accounts 296 | kubectl auth can-i create sa --as sandra 297 | ``` 298 | 299 |

300 |
301 | 302 | ### Create a role named `deployment-reader` in the `cka-20834` namespace, and allow it to get and list deployments. 303 | 304 |

```bash
# create the namespace `cka-20834`
k create ns cka-20834

# create the role in the `cka-20834` namespace with the help of `kubectl create role -h`
# the API group is specified as part of the resource name (deployments.apps)
k -n cka-20834 create role deployment-reader --verb=get,list --resource=deployments.apps
```

316 |
317 | 318 | ### Create a role binding named `deployment-reader-binding` in the `cka-20834` namespace that will bind the `deployment-reader` role to the service account `demo-sa` 319 | 320 |

322 | 323 | ```bash 324 | # create a service account named `demo-sa` in the `cka-20834` namespace 325 | k -n cka-20834 create sa demo-sa 326 | 327 | # create the role binding with the help of `kubectl create rolebinding -h` 328 | k -n cka-20834 create rolebinding deployment-reader-binding --role=deployment-reader --serviceaccount=cka-20834:demo-sa 329 | 330 | # verify the permission with `kubectl auth can-i` 331 | kubectl auth can-i get deployments --as=system:serviceaccount:cka-20834:demo-sa -n cka-20834 332 | kubectl auth can-i list deployments --as=system:serviceaccount:cka-20834:demo-sa -n cka-20834 333 | 334 | ``` 335 | 336 |

337 |
338 | 339 | ### Permanently save the namespace `ggcka-s2` for all subsequent kubectl commands in that context. 340 | 341 |

343 | 344 | ```bash 345 | kubectl config set-context --current --namespace=ggcka-s2 346 | ``` 347 | 348 |
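To confirm the namespace change stuck, inspect the active context:

```bash
# the NAMESPACE column for the current context should show ggcka-s2
kubectl config get-contexts

# or look at only the minified view of the current context
kubectl config view --minify | grep namespace:
```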

349 |
350 | 351 | ### Set a context utilizing a specific `cluster-admin` user in `default` namespace 352 | 353 |

```bash
# set context gce to the user "cluster-admin" in the namespace "default"
kubectl config set-context gce --user=cluster-admin --namespace=default

# use the context
kubectl config use-context gce
```

365 |
366 | 367 | ### List all services in the kube-system namespace 368 | 369 |

371 | 372 | ```bash 373 | kubectl get svc -n kube-system 374 | ``` 375 | 376 |

377 |
378 | 379 | ### List the pods in all namespaces 380 | 381 |

383 | 384 | ```bash 385 | kubectl get po --all-namespaces 386 | # or 387 | k get po -A 388 | ``` 389 | 390 |

391 |
392 | 393 | ### List all pods in the `default` namespace, with more details 394 | 395 |

397 | 398 | ```bash 399 | kubectl -n default get pods -o wide 400 | ``` 401 | 402 |

403 |
404 | 405 | ### List a deployment named `nginx-deployment` 406 | 407 |

409 | 410 | ```bash 411 | kubectl get deployment nginx-deployment 412 | # or 413 | kubectl -n default get deploy nginx-deployment 414 | ``` 415 | 416 |

417 |
418 | 419 | ### List all pods in the default namespace 420 | 421 |

423 | 424 | ```bash 425 | kubectl -n default get pods 426 | # or 427 | k -n default get po 428 | ``` 429 | 430 |

431 |
432 | 433 | ### Get the pod YAML from a pod named `nginx` 434 | 435 |

```bash
kubectl get po nginx -o yaml
# or
k -n default get po nginx -o yaml
```

445 |
446 | 447 | ### Get information about the pod, including details about potential issues (e.g. pod hasn't started) 448 | 449 |

451 | 452 | ```bash 453 | kubectl describe po nginx 454 | ``` 455 | 456 |

457 |
458 | 459 | ### Get pod logs from a pod named `nginx` 460 | 461 |

463 | 464 | ```bash 465 | kubectl logs nginx 466 | ``` 467 | 468 |

469 |
470 | 471 | ### Output a pod's YAML without cluster specific information 472 | 473 |

```bash
# the old --export flag that stripped cluster-specific fields was removed in k8s v1.18,
# so get the full YAML and remove fields like status, uid, resourceVersion, and managedFields by hand
kubectl get pod my-pod -o yaml
```

481 |
482 | 483 | ### List all nodes in the cluster 484 | 485 |

487 | 488 | ```bash 489 | kubectl get nodes 490 | # or, get more information about the nodes 491 | kubectl get nodes -o wide 492 | ``` 493 | 494 |

495 |
496 | 497 | ### Describe nodes with verbose output 498 | 499 |

501 | 502 | ```bash 503 | kubectl describe nodes 504 | ``` 505 | 506 |

507 |
508 | 509 | ### List services in the default namespace, sorted by name 510 | 511 |

```bash
kubectl -n default get services --sort-by=.metadata.name
# or
k -n default get svc --sort-by=.metadata.name
```

521 |
522 | 523 | ### Get the external IP of all nodes 524 | 525 |

527 | 528 | ```bash 529 | kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' 530 | ``` 531 | 532 |

533 |
534 | 535 | ### Create a new namespace 536 | 537 |

539 | 540 | ```bash 541 | kubectl create namespace web 542 | ``` 543 | 544 |

545 |
546 | 547 | ### List all the namespaces that exist in the cluster 548 | 549 |

551 | 552 | ```bash 553 | kubectl get namespaces 554 | ``` 555 | 556 |

557 |
558 | 559 | ### Create a pod which runs an nginx container 560 | 561 |

```bash
kubectl run nginx --image=nginx
# or
kubectl run nginx2 --image=nginx --restart=Never --dry-run=client -o yaml | kubectl create -f -
```

571 |
572 | 573 | ### Delete a pod 574 | 575 |

577 | 578 | ```bash 579 | kubectl delete po nginx 580 | ``` 581 | 582 |

583 |
584 | 585 | ### Get the status of the control plane components (cluster health) 586 | 587 |

589 | 590 | ```bash 591 | # check the livez endpoint 592 | curl -k https://localhost:6443/livez?verbose 593 | 594 | # or 595 | 596 | kubectl get --raw='/livez?verbose' 597 | 598 | # check the readyz endpoint 599 | curl -k https://localhost:6443/readyz?verbose 600 | 601 | # or 602 | 603 | kubectl get --raw='/readyz?verbose' 604 | 605 | # check the healthz endpoint 606 | curl -k https://localhost:6443/healthz?verbose 607 | 608 | # or 609 | 610 | kubectl get --raw='/healthz?verbose' 611 | ``` 612 | [Kubernetes API Health Endpoints](https://kubernetes.io/docs/reference/using-api/health-checks/) 613 | 614 |

615 |
616 | 617 | ### Create a deployment with two replica pods from YAML 618 | 619 |

```bash
# create a deployment object using this YAML template with the following command
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
```
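A quick way to confirm both replicas came up, using the deployment name from the sketch above:

```bash
# the deployment should show 2/2 ready replicas
kubectl get deploy nginx-deployment

# and two pods carrying the app=nginx label
kubectl get po -l app=nginx
```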

667 | 668 | ### Add an annotation to a deployment 669 | 670 |

672 | 673 | ```bash 674 | kubectl annotate deploy nginx mycompany.com/someannotation="chad" 675 | ``` 676 | 677 |

678 |
679 | 680 | ### Add a label to a pod 681 | 682 |

684 | 685 | ```bash 686 | kubectl label pods nginx env=prod 687 | ``` 688 | 689 |

690 |
691 | 692 | ### Show labels for all pods in the cluster 693 | 694 |

696 | 697 | ```bash 698 | kubectl get pods --show-labels 699 | # or get pods with the env label 700 | kubectl get po -L env 701 | ``` 702 | 703 |

704 |
705 | 706 | ### List all pods that are in the running state using field selectors 707 | 708 |

710 | 711 | ```bash 712 | kubectl get po --field-selector status.phase=Running 713 | ``` 714 | 715 |

716 |
717 | 718 | ### List all services in the default namespace using field selectors 719 | 720 |

722 | 723 | ```bash 724 | kubectl get svc --field-selector metadata.namespace=default 725 | ``` 726 | 727 |

728 |
729 | 730 | ### List all API resources in your Kubernetes cluster 731 | 732 |

734 | 735 | ```bash 736 | kubectl api-resources 737 | ``` 738 | 739 |

740 |
741 | 742 | ### List the services on your Linux operating system that are associated with Kubernetes 743 | 744 |

746 | 747 | ```bash 748 | systemctl list-unit-files --type service --all | grep kube 749 | ``` 750 | 751 |

752 |
753 | 754 | ### List the status of the kubelet service running on the Kubernetes node 755 | 756 |

758 | 759 | ```bash 760 | systemctl status kubelet 761 | ``` 762 | 763 |

764 |
765 | 766 | ### Use the imperative command to create a pod named nginx-pod with the image nginx, but save it to a YAML file named pod.yaml instead of creating it 767 | 768 |

```bash
kubectl run nginx-pod --image nginx --dry-run=client -o yaml > pod.yaml
```

776 |
777 | 778 | ### List all the services created in your Kubernetes cluster, across all namespaces 779 | 780 |

782 | 783 | ```bash 784 | kubectl get svc -A 785 | ``` 786 | 787 |

788 |
789 | 790 | ### Create a service account and pod that does NOT mount the service account token 791 | 792 |

Create the service account
```bash
# create the YAML for a service account named 'secure-sa' with the '--dry-run=client' option, saving it to a file named 'sa.yaml'
kubectl -n default create sa secure-sa --dry-run=client -o yaml > sa.yaml

# add automountServiceAccountToken: false to the end of the file 'sa.yaml'
echo "automountServiceAccountToken: false" >> sa.yaml

# create the service account from the file 'sa.yaml'
kubectl create -f sa.yaml

# list the newly created service account
kubectl -n default get sa
```

Create a pod that uses the service account
```bash
# write the YAML for a pod named 'secure-pod' that runs as 'secure-sa' to a file 'pod.yaml'
cat <<EOF > pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  serviceAccountName: secure-sa
  containers:
  - name: nginx
    image: nginx
EOF

# create the pod from the file 'pod.yaml'
kubectl create -f pod.yaml

# verify that no service account token is mounted into the container
# (this should fail with 'No such file or directory' because the token is not mounted)
kubectl exec secure-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
```

844 | 845 | 846 | 847 | --- 848 | 849 | ## FIND MORE KILLERCODA CKA EXAM EXERCISES 850 | [Killercoda.com](https://killercoda.com), the same company that brought you the [exam simulator](https://killer.sh) for the CKA Exam, brings you a free [Kubernetes](https://kubernetes.io/) lab environment. 851 | [MORE CKA EXAM EXERCISES HERE](https://killercoda.com/cka) -------------------------------------------------------------------------------- /networking.md: -------------------------------------------------------------------------------- 1 | # Services & Networking (20%) 2 | 3 | kubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) 4 | 5 | kubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/) 6 | 7 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) 8 | 9 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) using API 10 | 11 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) 12 | 13 | ### Drain a node for maintenance named `node1.mylabserver.com` 14 | 15 |

17 | 18 | ```bash 19 | kubectl drain node1.mylabserver.com --ignore-daemonsets --force 20 | ``` 21 | 22 |

23 |
24 | 25 | ### Put the node `node1.mylabserver.com` back into service, so pods can be scheduled to it 26 | 27 |

29 | 30 | ```bash 31 | kubectl uncordon node1.mylabserver.com 32 | ``` 33 | 34 |

35 |
36 | 37 | ### Upgrade kubeadm to version 1.18.6 38 | 39 |

```bash
sudo apt install -y --allow-change-held-packages kubeadm=1.18.6-00
```

47 |
48 | 49 | ### Plan and upgrade the control plane components with kubeadm to version 1.18.6 50 | 51 |

53 | 54 | ```bash 55 | sudo kubeadm upgrade plan 56 | 57 | sudo kubeadm upgrade apply v1.18.6 58 | ``` 59 | 60 |

61 |
62 | 63 | ### Update kubelet to version 1.18.6 64 | 65 |

```bash
sudo apt install -y --allow-change-held-packages kubelet=1.18.6-00
```

73 |
74 | 75 | ### Update kubectl to version 1.18.6 76 | 77 |

```bash
sudo apt install -y --allow-change-held-packages kubectl=1.18.6-00
```

85 |
86 | 87 | ### Restart kubelet on the node 88 | 89 |

91 | 92 | ```bash 93 | sudo systemctl daemon-reload 94 | 95 | sudo systemctl restart kubelet 96 | ``` 97 | 98 |

99 |
100 | 101 | ### Upgrade the kubelet configuration on a worker node 102 | 103 |

105 | 106 | ```bash 107 | sudo kubeadm upgrade node 108 | ``` 109 | 110 |

111 |
112 | 113 | ### List all namespaces in your cluster 114 | 115 |

117 | 118 | ```bash 119 | kubectl get ns 120 | ``` 121 | 122 |

123 |
### List all pods in all namespaces

129 | 130 | ```bash 131 | kubectl get po --all-namespaces 132 | ``` 133 | 134 |

135 |
136 | 137 | ### Create a new namespace named web 138 | 139 |

141 | 142 | ```bash 143 | kubectl create ns web 144 | ``` 145 | 146 |

147 |
### Look up the value for the key `cluster.name` in the etcd cluster and back up etcd

153 | 154 | ```bash 155 | ETCDCTL_API=3 etcdctl get cluster.name \ 156 | --endpoints=https://10.0.1.101:2379 \ 157 | --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \ 158 | --cert=/home/cloud_user/etcd-certs/etcd-server.crt \ 159 | --key=/home/cloud_user/etcd-certs/etcd-server.key 160 | 161 | ETCDCTL_API=3 etcdctl snapshot save /home/cloud_user/etcd_backup.db \ 162 | --endpoints=https://10.0.1.101:2379 \ 163 | --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \ 164 | --cert=/home/cloud_user/etcd-certs/etcd-server.crt \ 165 | --key=/home/cloud_user/etcd-certs/etcd-server.key 166 | ``` 167 | 168 |
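Before relying on the backup, it's worth checking the snapshot file itself; `etcdctl snapshot status` (deprecated in favor of `etcdutl` in newer etcd releases) prints a summary:

```bash
# show hash, revision, total keys, and size of the snapshot
ETCDCTL_API=3 etcdctl snapshot status /home/cloud_user/etcd_backup.db --write-out=table
```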

169 |
### Reset etcd and remove all of its data

175 | 176 | ```bash 177 | sudo systemctl stop etcd 178 | 179 | sudo rm -rf /var/lib/etcd 180 | ``` 181 | 182 |

183 |
184 | 185 | ### Restore an etcd store from backup. 186 | 187 |

```bash
# restore the snapshot from the backup file into a new data directory (/var/lib/etcd)
sudo ETCDCTL_API=3 etcdctl snapshot restore /home/cloud_user/etcd_backup.db \
  --initial-cluster etcd-restore=https://10.0.1.101:2380 \
  --initial-advertise-peer-urls https://10.0.1.101:2380 \
  --name etcd-restore \
  --data-dir /var/lib/etcd

# set ownership of the new data directory
sudo chown -R etcd:etcd /var/lib/etcd

# start etcd
sudo systemctl start etcd

# verify the data was restored
ETCDCTL_API=3 etcdctl get cluster.name \
  --endpoints=https://10.0.1.101:2379 \
  --cacert=/home/cloud_user/etcd-certs/etcd-ca.pem \
  --cert=/home/cloud_user/etcd-certs/etcd-server.crt \
  --key=/home/cloud_user/etcd-certs/etcd-server.key
```

213 |
214 | 215 | --- 216 | 217 | ## FIND MORE KILLERCODA CKA EXAM EXERCISES 218 | [Killercoda.com](https://killercoda.com), the same company that brought you the [exam simulator](https://killer.sh) for the CKA Exam, brings you a free [Kubernetes](https://kubernetes.io/) lab environment. 219 | [MORE CKA EXAM EXERCISES HERE](https://killercoda.com/cka) 220 | -------------------------------------------------------------------------------- /scheduling.md: -------------------------------------------------------------------------------- 1 | # Workloads & Scheduling (15%) 2 | 3 | kubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/) 4 | 5 | kubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/) 6 | 7 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) 8 | 9 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) using API 10 | 11 | kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) 12 | 13 | ### Create a deployment from a YAML file named deploy.yml 14 | 15 |

17 | 18 | ```bash 19 | # create the yaml file 20 | kubectl create deploy my-deployment --image nginx --dry-run=client -o yaml > deploy.yml 21 | 22 | # create the resource from the yaml spec 23 | kubectl apply -f deploy.yml 24 | ``` 25 | 26 |

27 |
28 | 29 | ### Describe a pod named nginx 30 | 31 |

33 | 34 | ```bash 35 | # create a pod named nginx 36 | k run nginx --image nginx 37 | 38 | # describe the pod 39 | k describe po nginx 40 | ``` 41 | 42 |

43 |
44 | 45 | ### Delete a pod named nginx 46 | 47 |

49 | 50 | ```bash 51 | kubectl delete po nginx 52 | ``` 53 | 54 |

55 |
56 | 57 | ### Create a deployment named nginx and use the image nginx 58 | 59 |

61 | 62 | ```bash 63 | kubectl create deploy nginx --image=nginx 64 | ``` 65 | 66 |

67 |
68 | 69 | ### Create the YAML specification for a deployment named nginx, outputting to a file named deploy.yml 70 | 71 |

```bash
kubectl create deployment nginx --image=nginx --dry-run=client -o yaml > deploy.yml
```

79 |
### Update the `nginx` deployment to use a new image tag `1.27.4-alpine-slim`

85 | 86 | ```bash 87 | # list the deployments 88 | k get deploy 89 | 90 | # patch the deployment 91 | kubectl set image deploy nginx nginx=nginx:1.27.4-alpine-slim 92 | 93 | # verify that the new image is set 94 | k get deploy nginx -o yaml | grep image: 95 | ``` 96 | 97 |
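`kubectl set image` kicks off a rolling update, so the standard rollout commands apply if you want to watch it finish or back it out:

```bash
# watch the rolling update complete
kubectl rollout status deploy nginx

# view revision history, and roll back if the new tag misbehaves
kubectl rollout history deploy nginx
kubectl rollout undo deploy nginx
```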

98 |
99 | 100 | 101 | ### Create a configmap named `my-configmap` with two values, one single line and one multi-line 102 | 103 |

```bash
# create a configmap with a single-line and a multi-line value
# (the values here are examples; the keys are referenced as 'single' and 'multi' below)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  single: "I am a single-line value"
  multi: |
    I am a value
    that spans
    multiple lines
EOF
```
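A quick check that both keys landed in the ConfigMap:

```bash
# both the 'single' and 'multi' keys should appear under the data section
kubectl describe configmap my-configmap
```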

### Use the configMap `my-configmap` in a deployment named `my-nginx-deployment` that uses the image `nginx:latest`, mounting the configMap as a volume

134 | 135 | ```yaml 136 | apiVersion: apps/v1 137 | kind: Deployment 138 | metadata: 139 | name: my-nginx-deployment 140 | spec: 141 | replicas: 1 142 | selector: 143 | matchLabels: 144 | app: nginx 145 | template: 146 | metadata: 147 | labels: 148 | app: nginx 149 | spec: 150 | containers: 151 | - name: nginx 152 | image: nginx:latest 153 | volumeMounts: 154 | - name: config-volume 155 | mountPath: /etc/config 156 | readOnly: true 157 | volumes: 158 | - name: config-volume 159 | configMap: 160 | name: my-configmap 161 | ``` 162 | 163 |
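To verify the mount, assuming the deployment above is running, each key appears as a file under the mount path:

```bash
# list the projected files and print one of them
kubectl exec deploy/my-nginx-deployment -- ls /etc/config
kubectl exec deploy/my-nginx-deployment -- cat /etc/config/single
```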

164 |
### Use the configMap `my-configmap` as environment variables in a deployment named `mynginx-deploy` that uses the image `nginx:latest`, passing in the single-line value as an environment variable named `SINGLE_VALUE` and the multi-line value as `MULTI_VALUE`

170 | 171 | ```yaml 172 | apiVersion: apps/v1 173 | kind: Deployment 174 | metadata: 175 | name: mynginx-deploy 176 | spec: 177 | replicas: 1 178 | selector: 179 | matchLabels: 180 | app: nginx 181 | template: 182 | metadata: 183 | labels: 184 | app: nginx 185 | spec: 186 | containers: 187 | - name: nginx 188 | image: nginx:latest 189 | env: 190 | - name: SINGLE_VALUE 191 | valueFrom: 192 | configMapKeyRef: 193 | name: my-configmap 194 | key: single 195 | - name: MULTI_VALUE 196 | valueFrom: 197 | configMapKeyRef: 198 | name: my-configmap 199 | key: multi 200 | 201 | ``` 202 | 203 |
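And to verify the injected environment variables, assuming the deployment above is running:

```bash
# print both values from a pod in the deployment
kubectl exec deploy/mynginx-deploy -- printenv SINGLE_VALUE MULTI_VALUE
```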

204 |
205 | 206 | ### Create a secret via yaml that contains two base64 encoded values 207 | 208 |

```bash
# create two base64 encoded strings
echo -n 'secret' | base64          # c2VjcmV0
echo -n 'anothersecret' | base64   # YW5vdGhlcnNlY3JldA==

# write the secret spec to a file named secret.yml
cat <<EOF > secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  secretkey1: c2VjcmV0
  secretkey2: YW5vdGhlcnNlY3JldA==
EOF

# create the secret
kubectl create -f secret.yml
```

232 |
233 | 234 | ### Using kubectl, create a secret named `admin-pass` from the string `SuperSecureP@ssw0rd` 235 | 236 |

238 | 239 | ```bash 240 | # create a secret from the string `SuperSecureP@ssw0rd` 241 | kubectl create secret generic admin-pass --from-literal=password=SuperSecureP@ssw0rd 242 | ``` 243 | 244 |
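To read the value back and confirm the key name (`password`) that the next two exercises reference:

```bash
# decode the stored secret value
kubectl get secret admin-pass -o jsonpath='{.data.password}' | base64 -d
```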

245 |
246 | 247 | ### Inject the secret `admin-pass` into a deployment named `admin-deploy` as an environment variable named `PASSWORD` 248 | 249 |

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-app
  template:
    metadata:
      labels:
        app: secret-app
    spec:
      containers:
      - name: my-container
        image: nginx
        env:
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: admin-pass
              key: password
```

279 |
280 | 281 | ### Use the secret `admin-pass` inside a deployment named `secret-deploy` mounting the secret inside the pod at `/etc/secret/password` 282 | 283 |

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-app
  template:
    metadata:
      labels:
        app: secret-app
    spec:
      containers:
      - name: my-container
        image: nginx
        volumeMounts:
        - name: secret-volume
          mountPath: "/etc/secret"   # the key 'password' appears as the file /etc/secret/password
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: admin-pass
```

315 |
316 | 317 | ### Apply the label “disk=ssd” to a node. Create a pod named “fast” using the nginx image and make sure that it selects a node based on the label “disk=ssd” 318 | 319 |

321 | 322 | ```bash 323 | # label the node named 'node01' 324 | kubectl label no node01 "disk=ssd" 325 | 326 | # create the pod YAML for pod named 'fast' 327 | kubectl run fast --image nginx --dry-run=client -o yaml > fast.yaml 328 | ``` 329 | 330 | ```yaml 331 | # fast.yaml 332 | apiVersion: v1 333 | kind: Pod 334 | metadata: 335 | creationTimestamp: null 336 | labels: 337 | run: fast 338 | name: fast 339 | spec: 340 | nodeSelector: ### ADD THIS LINE 341 | disk: ssd ### ADD THIS LINE 342 | containers: 343 | - image: nginx 344 | name: fast 345 | ``` 346 | 347 |
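Then create the pod and confirm the scheduler honored the selector; a quick check, assuming node01 is the labeled node:

```bash
# create the pod and verify its placement
kubectl apply -f fast.yaml
kubectl get po fast -o wide   # the NODE column should show node01
```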

348 |
349 | 350 | 351 | ### Edit the “fast” pod (created above), changing the node selector to “disk=slow.” Notice that the pod cannot be changed, and the YAML was saved to a temporary location. Take the YAML in /tmp/ and apply it by force to delete and recreate the pod using a single imperative command 352 | 353 |

355 | 356 | ```bash 357 | # edit the pod 358 | kubectl edit po fast 359 | ``` 360 | 361 | ```yaml 362 | # edit fast pod 363 | apiVersion: v1 364 | kind: Pod 365 | metadata: 366 | creationTimestamp: null 367 | labels: 368 | run: fast 369 | name: fast 370 | spec: 371 | nodeSelector: 372 | disk: slow ### CHANGE THIS LINE 373 | containers: 374 | - image: nginx 375 | name: fast 376 | ``` 377 | 378 | ```bash 379 | # output will look similar to the following: 380 | # :error: pods "fast" is invalid 381 | # A copy of your changes has been stored to "/tmp/kubectl-edit-136974717.yaml" 382 | # error: Edit cancelled, no valid changes were saved. 383 | 384 | # replace and recreate the pod 385 | k replace -f /tmp/kubectl-edit-136974717.yaml --force 386 | ``` 387 | 388 |

389 |
390 | 391 | ### Create a new pod named “ssd-pod” using the nginx image. Use node affinity to select nodes based on a weight of 1 to nodes labeled “disk=ssd”. If the selection criteria don’t match, it can also choose nodes that have the label “kubernetes.io/os=linux” 392 | 393 |

395 | 396 | ```bash 397 | # create the YAML for a pod named 'ssd-pod' 398 | kubectl run ssd-pod --image nginx --dry-run=client -o yaml > pod.yaml 399 | ``` 400 | 401 |

402 |
403 | 404 |

406 | 407 | ```yaml 408 | # pod.yaml file 409 | apiVersion: v1 410 | kind: Pod 411 | metadata: 412 | creationTimestamp: null 413 | labels: 414 | run: ssd-pod 415 | name: ssd-pod 416 | spec: 417 | ############## START HERE ############################ 418 | affinity: 419 | nodeAffinity: 420 | requiredDuringSchedulingIgnoredDuringExecution: 421 | nodeSelectorTerms: 422 | - matchExpressions: 423 | - key: kubernetes.io/os 424 | operator: In 425 | values: 426 | - linux 427 | preferredDuringSchedulingIgnoredDuringExecution: 428 | - weight: 1 429 | preference: 430 | matchExpressions: 431 | - key: disk 432 | operator: In 433 | values: 434 | - ssd 435 | ############## END HERE ############################ 436 | containers: 437 | - image: nginx 438 | name: ssd-pod 439 | ``` 440 | 441 |
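Create it and check where it landed; with the preference satisfiable, the pod should favor a disk=ssd node:

```bash
kubectl apply -f pod.yaml
kubectl get po ssd-pod -o wide
```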

442 |
443 | 444 | ### Create a pod named “limited” with the image “httpd” and set the resource requests to 1 CPU and “100Mi” for memory 445 | 446 |

```bash
# create the yaml for a pod
k run limited --image httpd --dry-run=client -o yaml > pod.yaml
```

Add the YAML for the resource requests to the file. Here is the complete file.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited
spec:
  containers:
  - name: httpd
    image: httpd
    resources:
      requests:
        cpu: "1"
        memory: "100Mi"
```

Create the pod from the YAML file
```bash
# create the pod from `pod.yaml` file
k create -f pod.yaml

# list the pods to see the pod is now running
k get po
```

481 |
---

## FIND MORE KILLERCODA CKA EXAM EXERCISES
[Killercoda.com](https://killercoda.com), the same company that brought you the [exam simulator](https://killer.sh) for the CKA Exam, brings you a free [Kubernetes](https://kubernetes.io/) lab environment.
[MORE CKA EXAM EXERCISES HERE](https://killercoda.com/cka)
--------------------------------------------------------------------------------
/storage.md:
--------------------------------------------------------------------------------

# Storage (10%)

kubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

kubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) using API

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)

### Create a role that will allow users to get, watch, and list pods and container logs

```bash
# write the role spec to a file named role.yml
cat <<EOF > role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list"]
EOF

# create the role
kubectl apply -f role.yml

# or imperatively

kubectl -n default create role pod-reader --verb=get,watch,list --resource=pods,pods/log
```

39 |
### Create a role binding that binds the role `pod-reader` to a user named `dev`

```bash
# write the role binding spec to a file named role-binding.yml
cat <<EOF > role-binding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: default
subjects:
- kind: User
  name: dev
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF

# create the role binding
kubectl apply -f role-binding.yml

# or imperatively

kubectl -n default create rolebinding pod-reader --role=pod-reader --user=dev
```
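As in the cluster-architecture section, `kubectl auth can-i` confirms the binding behaves as intended:

```bash
# should return yes
kubectl auth can-i watch pods --namespace=default --as=dev

# should return no
kubectl auth can-i delete pods --namespace=default --as=dev
```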

68 |
---

## FIND MORE KILLERCODA CKA EXAM EXERCISES
[Killercoda.com](https://killercoda.com), the same company that brought you the [exam simulator](https://killer.sh) for the CKA Exam, brings you a free [Kubernetes](https://kubernetes.io/) lab environment.
[MORE CKA EXAM EXERCISES HERE](https://killercoda.com/cka)
--------------------------------------------------------------------------------
/troubleshooting.md:
--------------------------------------------------------------------------------

# Troubleshooting (30%)

kubernetes.io > Documentation > Reference > kubectl CLI > [kubectl Cheat Sheet](https://kubernetes.io/docs/reference/kubectl/cheatsheet/)

kubernetes.io > Documentation > Tasks > Monitoring, Logging, and Debugging > [Get a Shell to a Running Container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Configure Access to Multiple Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/)

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Accessing Clusters](https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/) using API

kubernetes.io > Documentation > Tasks > Access Applications in a Cluster > [Use Port Forwarding to Access Applications in a Cluster](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/)

### Install the metrics server add-on and view resource usage by pods and nodes in the cluster

```bash
# install the metrics server
kubectl apply -f https://raw.githubusercontent.com/linuxacademy/content-cka-resources/master/metrics-server-components.yaml

# verify that the metrics server is responsive
kubectl get --raw /apis/metrics.k8s.io/

# write a test pod spec to a file named my-pod.yml
cat <<EOF > my-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: metrics-test
spec:
  containers:
  - name: busybox
    image: radial/busyboxplus:curl
    command: ['sh', '-c', 'while true; do sleep 3600; done']
EOF

# create a pod from the my-pod.yml file
kubectl apply -f my-pod.yml

# view resource usage by the pods in the cluster
kubectl top pod

# view resource usage by the nodes in the cluster
kubectl top node
```

48 |
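The scenarios below are intentionally broken and are left as prompts rather than worked answers. Most of them yield to the same triage loop; a minimal sketch, using only standard kubectl and systemd commands (substitute your own pod and namespace names):

```bash
# 1. find objects that are not Ready/Running
kubectl get pods -A -o wide

# 2. read the events and status of a failing pod
kubectl describe pod <pod-name> -n <namespace>

# 3. read container logs (--previous if it is crash-looping)
kubectl logs <pod-name> -n <namespace> --previous

# 4. for control plane failures, check the static pod manifests and the kubelet
ls /etc/kubernetes/manifests/
sudo systemctl status kubelet
sudo journalctl -u kubelet --no-pager | tail -50
```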
In cluster “ik8s”, in a namespace named “db08328”, create a deployment with the kubectl command line (imperatively) named “mysql” with the image “mysql:8”. List the pods in the “db08328” namespace to see if the pod is running. If the pod is not running, view the logs to determine why the pod is not in a healthy state. Once you’ve collected the necessary log information, make the necessary changes to the pod in order to fix it and get it back up in a running, healthy state.

Run the command `k run testbox --image busybox --command 'sleep 3600'` to create a new pod named “testbox”. See if the container is running or not. Go through the decision tree to find out why, and fix the pod so that it’s running.

Create a new container named “busybox2” that uses the image “busybox:1.35.0”. Check if the container is in a running state. Find out why the container is failing and make the corrections to the pod YAML to get it to a running state.

Create a new container named “curlpod2” that uses the image “nicolaka/netshoot” while opening a shell to it upon creation. While a shell is open to the container, run nslookup on the kubernetes service. Exit out of the shell and see why the container is not running. Fix the container so that it continues to run.

In cluster “ik8s”, in a namespace named “ee8881”, create a deployment with the kubectl command line (imperatively) named “prod-app” with the image “nginx”. List the pods in the “ee8881” namespace to see if the pod is running. Run the command `curl https://raw.githubusercontent.com/chadmcrowell/acing-the-cka-exam/main/ch_08/kube-scheduler.yaml --silent --output /etc/kubernetes/manifests/kube-scheduler.yaml` to make a change to the kube-scheduler, simulating a cluster component failure. Now scale the deployment from 1 replica to 3. List the pods again and see if the additional 2 pods in the deployment are running. Find out why the two additional pods are not running and fix the scheduler so that the containers are in a running state again.

Move the file kube-scheduler.yaml to the /tmp directory with the command `mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/kube-scheduler.yaml`.

Create a pod with the command `k run nginx --image nginx`. List the pods and see if the pod is in a running status.

Determine why the pod is not starting by looking at the events and the logs. Determine what the fix will be and get the pod back in a running state.

Run the command `curl https://raw.githubusercontent.com/chadmcrowell/acing-the-cka-exam/main/ch_08/10-kubeadm.conf --silent --output /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; systemctl daemon-reload; systemctl restart kubelet`

Check the status of kubelet, and go through the troubleshooting steps to resolve the problem with the kubelet service.

In cluster “ik8s”, run the command `k replace -f https://raw.githubusercontent.com/chadmcrowell/acing-the-cka-exam/main/ch_08/kube-proxy-configmap.yaml --force` to purposely insert a bug in the cluster. Immediately after that, delete the “kube-proxy” pod in the kube-system namespace (it will automatically be recreated). List the pods in the namespace, and see that the kube-proxy pod is in an error state. View the logs to determine why the kube-proxy pod is not running. Once you’ve collected the necessary log information, make the necessary changes to get the pod back up in a running, healthy state.
In cluster “ik8s”, in a namespace named “kb6656”, run the command `k apply -f https://raw.githubusercontent.com/chadmcrowell/acing-the-cka-exam/main/ch_08/deploy-and-svc.yaml` to create a deployment and service in the cluster. This is an nginx application running on port 80, so try to reach the application by using curl against the IP address and port of the service. Once you realize that you cannot communicate with the application via curl, investigate why. Make the necessary changes to reach the application with curl and return the nginx welcome page.

---

## FIND MORE KILLERCODA CKA EXAM EXERCISES
[Killercoda.com](https://killercoda.com), the same company that brought you the [exam simulator](https://killer.sh) for the CKA Exam, brings you a free [Kubernetes](https://kubernetes.io/) lab environment.
[MORE CKA EXAM EXERCISES HERE](https://killercoda.com/cka)
--------------------------------------------------------------------------------