├── CODE_OF_CONDUCT.md ├── LICENSE ├── README.md ├── cka ├── 1.cluster_architecture_installation_configuration.md ├── 2.workloads_scheduling.md ├── 3.services_networking.md ├── 4.storage.md ├── 5.troubleshooting.md └── README.md ├── ckad ├── 1.application_design_build.md ├── 2.application_deployment.md ├── 3.application_observability_maintenance.md ├── 4.application_environment_configuration_security.md ├── 5.services_networking.md └── README.md ├── cks ├── 1.cluster_setup.md ├── 2.cluster_hardening.md ├── 3.system_hardening.md ├── 4.minimize_microservice_vulnerabilities.md ├── 5.supply_chain_security.md ├── 6.monitoring_logging_runtime_security.md └── README.md ├── data ├── ImagePolicyWebhook │ ├── webhook.crt │ └── webhook.key ├── Seccomp │ └── audit.json ├── kubeconfig.yaml ├── tls.crt └── tls.key └── topics ├── README.md ├── admission_controllers.md ├── annotations.md ├── api_deprecations.md ├── apis.md ├── apparmor.md ├── auditing.md ├── authentication.md ├── binary_verification.md ├── cluster_upgrade.md ├── configmaps.md ├── daemonsets.md ├── debugging.md ├── deployments.md ├── docker.md ├── etcd.md ├── falco.md ├── ingress.md ├── init_containers.md ├── jobs.md ├── jsonpath.md ├── kube-bench.md ├── kubeconfig.md ├── kubelet_security.md ├── kubesec.md ├── labels.md ├── logging.md ├── monitoring.md ├── multi_container_pods.md ├── namespaces.md ├── network_policies.md ├── nodes.md ├── pod_security_context.md ├── pod_security_policies.md ├── pods.md ├── probes.md ├── rbac.md ├── replica_set.md ├── runtimes.md ├── seccomp.md ├── secrets.md ├── service_accounts.md ├── services.md ├── taints_tolerations.md ├── trivy.md └── volumes.md /CODE_OF_CONDUCT.md: -------------------------------------------------------------------------------- 1 | # Contributor Covenant Code of Conduct 2 | 3 | ## Our Pledge 4 | 5 | In the interest of fostering an open and welcoming environment, we as 6 | contributors and maintainers pledge to making participation in our project and 7 | our 
community a harassment-free experience for everyone, regardless of age, body 8 | size, disability, ethnicity, sex characteristics, gender identity and expression, 9 | level of experience, education, socio-economic status, nationality, personal 10 | appearance, race, religion, or sexual identity and orientation. 11 | 12 | ## Our Standards 13 | 14 | Examples of behavior that contributes to creating a positive environment 15 | include: 16 | 17 | * Using welcoming and inclusive language 18 | * Being respectful of differing viewpoints and experiences 19 | * Gracefully accepting constructive criticism 20 | * Focusing on what is best for the community 21 | * Showing empathy towards other community members 22 | 23 | Examples of unacceptable behavior by participants include: 24 | 25 | * The use of sexualized language or imagery and unwelcome sexual attention or 26 | advances 27 | * Trolling, insulting/derogatory comments, and personal or political attacks 28 | * Public or private harassment 29 | * Publishing others' private information, such as a physical or electronic 30 | address, without explicit permission 31 | * Other conduct which could reasonably be considered inappropriate in a 32 | professional setting 33 | 34 | ## Our Responsibilities 35 | 36 | Project maintainers are responsible for clarifying the standards of acceptable 37 | behavior and are expected to take appropriate and fair corrective action in 38 | response to any instances of unacceptable behavior. 39 | 40 | Project maintainers have the right and responsibility to remove, edit, or 41 | reject comments, commits, code, wiki edits, issues, and other contributions 42 | that are not aligned to this Code of Conduct, or to ban temporarily or 43 | permanently any contributor for other behaviors that they deem inappropriate, 44 | threatening, offensive, or harmful. 
45 | 46 | ## Scope 47 | 48 | This Code of Conduct applies both within project spaces and in public spaces 49 | when an individual is representing the project or its community. Examples of 50 | representing a project or community include using an official project e-mail 51 | address, posting via an official social media account, or acting as an appointed 52 | representative at an online or offline event. Representation of a project may be 53 | further defined and clarified by project maintainers. 54 | 55 | ## Enforcement 56 | 57 | Instances of abusive, harassing, or otherwise unacceptable behavior may be 58 | reported by contacting the project team at dgkanatsios@outlook.com. All 59 | complaints will be reviewed and investigated and will result in a response that 60 | is deemed necessary and appropriate to the circumstances. The project team is 61 | obligated to maintain confidentiality with regard to the reporter of an incident. 62 | Further details of specific enforcement policies may be posted separately. 63 | 64 | Project maintainers who do not follow or enforce the Code of Conduct in good 65 | faith may face temporary or permanent repercussions as determined by other 66 | members of the project's leadership. 
67 | 68 | ## Attribution 69 | 70 | This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, 71 | available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html 72 | 73 | [homepage]: https://www.contributor-covenant.org 74 | 75 | For answers to common questions about this code of conduct, see 76 | https://www.contributor-covenant.org/faq 77 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Dimitris-Ilias Gkanatsios 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![Software License](https://img.shields.io/badge/license-MIT-brightgreen.svg?style=flat-square)](LICENSE) 2 | [![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg?style=flat-square)](http://makeapullrequest.com) 3 | 4 | # Kubernetes Exercises 5 | 6 | This collection covers a set of exercises that are organized by topic and mapped back to the individual Kubernetes certification exams. 7 | Although the exam pattern keeps changing, the topics remain more or less the same, so the exercises are grouped per topic and referenced from each exam outline. 8 | 9 | ## Kubernetes Playground 10 | 11 | Try out the Killercoda Kubernetes playground, which provides a two-node Kubernetes cluster that is good enough to complete almost all of the exercises. 12 | 13 | [Killercoda](https://killercoda.com/playgrounds/scenario/kubernetes) 14 | ~~[Katacoda Kubernetes Playground](https://www.katacoda.com/courses/kubernetes/playground)~~ 15 | 16 | 17 | 18 | 19 | ## Structure 20 | 21 | - [Certified Kubernetes Administrator (CKA)](cka) covers topics for the CKA exam. 22 | - [Certified Kubernetes Application Developer (CKAD)](ckad) covers topics for the CKAD exam. 23 | - [Certified Kubernetes Security Specialist (CKS)](cks) covers topics for the CKS exam. 24 | - [Data](data) provides any data required for the exercises. 25 | - [Topics](topics) covers individual topics. 26 | 27 | ## Exam Pattern & Tips 28 | 29 | - CKA/CKAD/CKS are open-book tests. 30 | - Exams are updated regularly to track the latest Kubernetes version and are currently on 1.28. 31 | - Exams require you to solve 15-20 questions in 2 hours. 32 | - Make use of imperative commands as much as possible. 33 | - You will have an online notepad in the right corner for taking notes.
I hardly used it, but it can be useful for typing and modifying text instead of editing in Vi. 34 | - You are allowed to open one additional browser tab for documentation from kubernetes.io or other products such as Falco. Do not open any other windows. 35 | - Exam questions can be attempted in any order; they don't have to be solved sequentially, so skip tough questions and come back to them later. 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | -------------------------------------------------------------------------------- /cka/1.cluster_architecture_installation_configuration.md: -------------------------------------------------------------------------------- 1 | # Cluster Architecture, Installation & Configuration - 25% 2 | 3 |
4 | 5 | ## Manage role based access control (RBAC) 6 | 7 |
8 | 9 | Refer [RBAC](../topics/rbac.md) 10 | 11 |
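For quick practice, Roles and RoleBindings can be created imperatively and checked with `kubectl auth can-i` (the `dev` names and the `development` namespace below are just examples):

```bash
# Role allowing read access to pods in the development namespace
kubectl create role dev-role --verb=get,list,watch --resource=pods -n development

# Bind the role to a user
kubectl create rolebinding dev-role-binding --role=dev-role --user=dev-user -n development

# Verify the resulting permissions
kubectl auth can-i list pods -n development --as dev-user
```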
12 | 13 | ## Use Kubeadm to install a basic cluster 14 | 15 |
16 | 17 | Refer [Creating cluster using Kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) 18 | 19 |
20 | 21 | ## Manage a highly-available Kubernetes cluster 22 | 23 |
24 | 25 | Refer [Creating an HA Kubernetes cluster](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/) 26 | 27 |
28 | 29 | ## Provision underlying infrastructure to deploy a Kubernetes cluster 30 | 31 |
32 | 33 | TBD 34 | 35 |
36 | 37 | ## Perform a version upgrade on a Kubernetes cluster using Kubeadm 38 | 39 |
40 | 41 | Refer [Upgrading Kubeadm Clusters](../topics/cluster_upgrade.md) 42 | 43 |
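As a rough sketch, upgrading a kubeadm control plane node follows this flow (the version and the Debian/Ubuntu package pins are placeholders; follow the linked guide for the exact steps, and repeat the drain/upgrade/uncordon cycle for worker nodes):

```bash
kubeadm upgrade plan                                  # list available target versions
kubectl drain controlplane --ignore-daemonsets        # move workloads off the node
apt-get update && apt-get install -y kubeadm=1.28.x-00
kubeadm upgrade apply v1.28.x
apt-get install -y kubelet=1.28.x-00 kubectl=1.28.x-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon controlplane                         # make the node schedulable again
```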
44 | 45 | ## Implement etcd backup and restore 46 | 47 |
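A minimal backup/restore sketch with `etcdctl`; the certificate paths assume a default kubeadm setup, and the snapshot and restore paths are arbitrary examples:

```bash
# Save a snapshot of etcd
ETCDCTL_API=3 etcdctl snapshot save /opt/etcd-backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Restore the snapshot into a fresh data directory,
# then point etcd's --data-dir at it
ETCDCTL_API=3 etcdctl snapshot restore /opt/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored
```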
48 | 49 | Refer [ETCD](../topics/etcd.md) 50 | 51 | -------------------------------------------------------------------------------- /cka/2.workloads_scheduling.md: -------------------------------------------------------------------------------- 1 | # Workloads & Scheduling - 15% 2 | 3 |
4 | 5 | ## Understand deployments and how to perform rolling update and rollbacks 6 | 7 |
8 | 9 | Refer [Deployment Rollouts](../topics/deployments.md#deployment-rollout) 10 | 11 |
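A typical rollout cycle, using an example `nginx` deployment:

```bash
kubectl set image deployment/nginx nginx=nginx:1.25   # triggers a rolling update
kubectl rollout status deployment/nginx               # watch the rollout progress
kubectl rollout history deployment/nginx              # list revisions
kubectl rollout undo deployment/nginx                 # roll back to the previous revision
```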
12 | 13 | ## Use ConfigMaps and Secrets to configure applications 14 | 15 |
16 | 17 | Refer [ConfigMaps](../topics/configmaps.md) 18 | Refer [Secrets](../topics/secrets.md) 19 | 20 |
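Both are quickest to create imperatively (the names, keys and the `app.properties` file are examples):

```bash
kubectl create configmap app-config --from-literal=APP_MODE=prod --from-file=app.properties
kubectl create secret generic app-secret --from-literal=DB_PASSWORD='S3cret!'
```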
21 | 22 | ## Know how to scale applications 23 | 24 |
25 | 26 | Refer [Deployment Scaling](../topics/deployments.md#deployment-scaling) 27 | 28 |
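Scaling is usually done imperatively (the `nginx` deployment is an example):

```bash
kubectl scale deployment nginx --replicas=4
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80   # HorizontalPodAutoscaler
```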
29 | 30 | ## Understand the primitives used to create robust, self-healing, application deployments 31 | 32 |
33 | 34 | Refer [Deployment Self-Healing](../topics/deployments.md#deployment-self-healing) 35 | 36 |
37 | 38 | ## Understand how resource limits can affect Pod scheduling 39 | 40 |
41 | 42 | Refer [Resources - Requests & Limits](../topics/pods.md#resources) 43 | 44 |
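The scheduler only looks at `requests` when placing a Pod; `limits` are enforced at runtime. A minimal example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:         # used by the scheduler to choose a node
        cpu: 250m
        memory: 64Mi
      limits:           # enforced by the kubelet/runtime
        cpu: 500m
        memory: 128Mi
```

A Pod whose requests cannot be satisfied by any node stays Pending.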
45 | 46 | ## Awareness of manifest management and common templating tools 47 | 48 |
49 | 50 | TBD 51 | 52 |
53 | -------------------------------------------------------------------------------- /cka/3.services_networking.md: -------------------------------------------------------------------------------- 1 | # Services & Networking - 20% 2 | 3 |
4 | 5 | ## Understand host networking configuration on the cluster nodes 6 | 7 |
8 | 9 | TBD 10 | 11 |
12 | 13 | ## Understand connectivity between Pods 14 | 15 |
16 | 17 | Refer [Cluster Networking](https://kubernetes.io/docs/concepts/cluster-administration/networking/) 18 | 19 |
20 | 21 | ## Understand ClusterIP, NodePort, LoadBalancer service types and endpoints 22 | 23 | Refer [Services](../topics/services.md) 24 | 25 | ## Know how to use Ingress controllers and Ingress resources 26 | 27 | Refer [Ingress](../topics/ingress.md) 28 | 29 | ## Know how to configure and use CoreDNS 30 | 31 |
32 | 33 | Refer [CoreDNS for Service Discovery](https://kubernetes.io/docs/tasks/administer-cluster/coredns/) 34 | 35 |
36 | 37 | ## Choose an appropriate container network interface plugin 38 | 39 |
40 | 41 | Refer [Network Plugins](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/) 42 | 43 |
44 | -------------------------------------------------------------------------------- /cka/4.storage.md: -------------------------------------------------------------------------------- 1 | # Storage - 10% 2 | 3 |
4 | 5 | ## Understand storage classes, persistent volumes 6 | 7 |
8 | 9 | Refer [Volumes](../topics/volumes.md) 10 | 11 |
12 | 13 | ## Understand volume mode, access modes and reclaim policies for volumes 14 | 15 |
16 | 17 | Refer [PV Volume mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#volume-mode), [PV Access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes) and [PV Reclaim policies](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy) 18 | 19 |
20 | 21 | ## Understand persistent volume claims primitive 22 | 23 |
24 | 25 | Refer [Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) 26 | 27 |
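A minimal claim, assuming a `standard` StorageClass exists in the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi
```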
28 | 29 | ## Know how to configure applications with persistent storage 30 | 31 |
32 | 33 | Refer [Volumes](../topics/volumes.md) 34 | 35 |
-------------------------------------------------------------------------------- /cka/5.troubleshooting.md: -------------------------------------------------------------------------------- 1 | # Troubleshooting - 30% 2 | 3 |
4 | 5 | ## Evaluate cluster and node logging 6 | 7 |
8 | 9 | Refer [Cluster Logging](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/#looking-at-logs) 10 | 11 |
12 | 13 | ## Understand how to monitor applications 14 | 15 |
16 | 17 | Refer [Monitoring](../topics/monitoring.md) 18 | 19 |
20 | 21 | ## Manage container stdout & stderr logs 22 | 23 |
24 | 25 | TBD 26 | 27 |
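The core commands for container stdout/stderr (pod, container and deployment names are examples):

```bash
kubectl logs app-pod                      # single-container pod
kubectl logs app-pod -c sidecar           # a specific container
kubectl logs app-pod --previous           # the previous (crashed) instance
kubectl logs -f deployment/app            # follow logs via a workload reference
```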
28 | 29 | ## Troubleshoot application failure 30 | 31 |
32 | 33 | Refer [Deployment Troubleshooting](../topics/deployments.md#troubleshooting) 34 | Refer [Probes Troubleshooting](../topics/probes.md#troubleshooting) 35 | Refer [Application Troubleshooting](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-application/) 36 | 37 |
38 | 39 | ## Troubleshoot cluster component failure 40 | 41 |
42 | 43 | TBD 44 | 45 |
46 | 47 | ## Troubleshoot networking 48 | 49 |
50 | 51 | TBD 52 | 53 |
54 | 55 | -------------------------------------------------------------------------------- /cka/README.md: -------------------------------------------------------------------------------- 1 | # Certified Kubernetes Administrator (CKA) 2 | 3 | ## [CKA Curriculum](https://github.com/cncf/curriculum/blob/master/CKA_Curriculum_v1.22.pdf) 4 | 5 | 1. [Cluster Architecture, Installation & Configuration - 25%](1.cluster_architecture_installation_configuration.md) 6 | 2. [Workloads & Scheduling - 15%](2.workloads_scheduling.md) 7 | 3. [Services & Networking - 20%](3.services_networking.md) 8 | 4. [Storage - 10%](4.storage.md) 9 | 5. [Troubleshooting - 30%](5.troubleshooting.md) 10 | 11 | ## Resources 12 | 13 | - [Certified Kubernetes Administrator - CKA learning path](https://jayendrapatil.com/certified-kubernetes-administrator-cka-learning-path/) 14 | - [KodeKloud Certified Kubernetes Administrator Course](https://shareasale.com/r.cfm?b=2319101&u=2367365&m=132199&urllink=&afftrack=) 15 | 16 | -------------------------------------------------------------------------------- /ckad/1.application_design_build.md: -------------------------------------------------------------------------------- 1 | # Application Design and Build - 20% 2 | 3 |
4 | 5 | ## Define, build and modify container images 6 | 7 |
8 | 9 | Refer [Docker](../topics/docker.md) 10 | 11 |
12 | 13 | ## Understand Jobs and CronJobs 14 | 15 |
16 | 17 | Refer [Jobs & Cron Jobs](../topics/jobs.md) 18 | 19 |
20 | 21 | ## Understand multi-container Pod design patterns (e.g. sidecar, init and others) 22 | 23 |
24 | 25 | Refer [Multi-container Pods](../topics/multi_container_pods.md) 26 | 27 |
28 | 29 | ## Utilize persistent and ephemeral volumes 30 | 31 |
32 | 33 | Refer [Volumes](../topics/volumes.md) 34 | 35 |
-------------------------------------------------------------------------------- /ckad/2.application_deployment.md: -------------------------------------------------------------------------------- 1 | # Application Deployment - 20% 2 | 3 |
4 | 5 | ## Use Kubernetes primitives to implement common deployment strategies (e.g. blue/green or canary) 6 | 7 |
8 | 9 | - Within a cluster, Deployments natively support only the Recreate and RollingUpdate strategies; blue/green and canary rollouts can be approximated by running multiple Deployments behind a Service and switching labels or replica counts. 10 | - A service mesh like Istio can be used for [traffic management and canary deployments](https://istio.io/latest/docs/tasks/traffic-management/traffic-shifting/). 11 | 12 |
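The RollingUpdate strategy can be tuned on the Deployment itself; a minimal sketch:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during the update
      maxUnavailable: 1    # at most one pod unavailable during the update
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: nginx:1.25
```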
13 | 14 | ## Understand Deployments and how to perform rolling updates 15 | 16 | Refer [Deployment Rollouts](../topics/deployments.md#deployment-rollout) 17 | 18 | ## Use the Helm package manager to deploy existing packages 19 | 20 |
21 | 22 | - [Helm](https://helm.sh/) can be used for templating and deployment. 23 | 24 |
25 | 26 | -------------------------------------------------------------------------------- /ckad/3.application_observability_maintenance.md: -------------------------------------------------------------------------------- 1 | # Application Observability and Maintenance - 15% 2 | 3 |
4 | 5 | ## Understand API deprecations 6 | 7 |
8 | 9 | Refer [API Deprecations](../topics/api_deprecations.md) 10 | 11 |
12 | 13 | ## Implement probes and health checks 14 | 15 | Refer [Readiness & Liveness probes](../topics/probes.md) 16 | 17 | ## Use provided tools to monitor Kubernetes applications 18 | 19 | Refer [Monitoring](../topics/monitoring.md) 20 | 21 | ## Utilize container logs 22 | 23 | Refer [Logging](../topics/logging.md) 24 | 25 | ## Debugging in Kubernetes 26 | 27 |
28 | 29 | TBD 30 | 31 |
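The usual starting points when debugging (names and images are examples):

```bash
kubectl describe pod app-pod                               # events, state, probe failures
kubectl get events --sort-by=.metadata.creationTimestamp   # cluster events in order
kubectl exec -it app-pod -- sh                             # shell into a running container
kubectl debug -it app-pod --image=busybox --target=app     # attach an ephemeral debug container
```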
32 | 33 | -------------------------------------------------------------------------------- /ckad/4.application_environment_configuration_security.md: -------------------------------------------------------------------------------- 1 | # Application Environment, Configuration and Security 2 | 3 |
4 | 5 | ## Discover and use resources that extend Kubernetes (CRD) 6 | 7 |
8 | 9 | Refer [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) 10 | 11 |
12 | 13 | ## Understand authentication, authorization and admission control 14 | 15 | Refer [Authentication](../topics/authentication.md) 16 | Refer [RBAC](../topics/rbac.md) 17 | Refer [Admission Controllers](../topics/admission_controllers.md) 18 | 19 | ## Understanding and defining resource requirements, limits and quotas 20 | 21 |
22 | 23 | Refer [Resources - Requests & Limits](../topics/pods.md#resources) 24 | 25 |
26 | 27 | ## Understand ConfigMaps 28 | 29 |
30 | 31 | Refer [ConfigMaps](../topics/configmaps.md) 32 | 33 |
34 | 35 | ## Create & consume Secrets 36 | 37 | Refer [Secrets](../topics/secrets.md) 38 | 39 | ## Understand ServiceAccounts 40 | 41 |
42 | 43 | Refer [Service Accounts](../topics/service_accounts.md) 44 | 45 |
46 | 47 | ## Understand SecurityContexts 48 | 49 |
50 | 51 | Refer [Security Context](../topics/pod_security_context.md) 52 | 53 |
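A sketch combining pod-level and container-level settings (the image and IDs are examples; `runAsNonRoot` rejects images whose default user is root):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:            # pod level: applies to all containers
    runAsUser: 1000
    runAsNonRoot: true
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    securityContext:          # container level: overrides/extends pod level
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]
```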
-------------------------------------------------------------------------------- /ckad/5.services_networking.md: -------------------------------------------------------------------------------- 1 | # Services and Networking 2 | 3 |
4 | 5 | ## Demonstrate basic understanding of NetworkPolicies 6 | 7 |
8 | 9 | Refer [Network Policies](../topics/network_policies.md) 10 | 11 |
12 | 13 | ## Provide and troubleshoot access to applications via services 14 | 15 |
16 | 17 | Refer [Services](../topics/services.md) 18 | 19 |
20 | 21 | ## Use Ingress rules to expose applications 22 | 23 |
24 | 25 | Refer [Ingress](../topics/ingress.md) 26 | 27 |
-------------------------------------------------------------------------------- /ckad/README.md: -------------------------------------------------------------------------------- 1 | # Certified Kubernetes Application Developer (CKAD) 2 | 3 | ## [CKAD Curriculum](https://github.com/cncf/curriculum/blob/master/CKAD_Curriculum_v1.28.pdf) 4 | 5 | - [Application Design and Build - 20%](1.application_design_build.md) 6 | - [Application Deployment - 20%](2.application_deployment.md) 7 | - [Application observability and maintenance - 15%](3.application_observability_maintenance.md) 8 | - [Application Environment, Configuration and Security - 25%](4.application_environment_configuration_security.md) 9 | - [Services & Networking - 20%](5.services_networking.md) 10 | 11 | ## Resources 12 | 13 | - [Certified Kubernetes Application Developer - CKAD learning path](https://jayendrapatil.com/certified-kubernetes-application-developer-ckad-learning-path/) 14 | - [KodeKloud Certified Kubernetes Application Developer Course](https://shareasale.com/r.cfm?b=2319509&u=2367365&m=132199&urllink=&afftrack=) 15 | 16 | 17 | -------------------------------------------------------------------------------- /cks/1.cluster_setup.md: -------------------------------------------------------------------------------- 1 | # Cluster Setup - 10% 2 | 3 |
4 | 5 | ## Use Network security policies to restrict cluster level access 6 | 7 |
8 | 9 | Refer [Network Policies](../topics/network_policies.md) 10 | 11 |
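A common starting point is a default-deny policy for a namespace (the `default` namespace is an example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}     # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress
```

Allow rules are then layered on top with additional policies.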
12 | 13 | ## Use CIS benchmark to review the security configuration of Kubernetes components (etcd, kubelet, kubedns, kubeapi) 14 | 15 |
16 | 17 | Refer [Kube-bench](../topics/kube-bench.md) 18 | 19 |
20 | 21 | ## Properly set up Ingress objects with security control 22 | 23 |
24 | 25 | Refer [Ingress with TLS cert](../topics/ingress.md#ingress-security) 26 | 27 |
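A TLS-terminating Ingress sketch; the host, service and secret names are examples, and the secret would be created with `kubectl create secret tls example-tls --cert=tls.crt --key=tls.key`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # TLS secret holding the cert and key
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-svc
            port:
              number: 80
```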
28 | 29 | ## Protect node metadata and endpoints 30 | 31 |
32 | 33 | Refer [Kubelet Security](../topics/kubelet_security.md) 34 | 35 |
36 | 37 | ## Minimize use of, and access to, GUI elements 38 | 39 |
40 | 41 | The Kubernetes Dashboard is the main GUI element to minimize; avoid exposing it publicly and restrict access to it with authentication and RBAC. 42 | 43 |
44 | 45 | ## Verify platform binaries before deploying 46 | 47 |
48 | 49 | Refer [Platform Binary Verification](../topics/binary_verification.md) 50 | 51 |
-------------------------------------------------------------------------------- /cks/2.cluster_hardening.md: -------------------------------------------------------------------------------- 1 | # Cluster Hardening - 15% 2 | 3 |
4 | 5 | ## Restrict access to Kubernetes API 6 | 7 |
8 | 9 | Refer [Controlling Access to Kubernetes API](https://kubernetes.io/docs/concepts/security/controlling-access/) 10 | 11 |
12 | 13 | ## Use Role Based Access Controls to minimize exposure 14 | 15 |
16 | 17 | Refer [RBAC](../topics/rbac.md) 18 | 19 |
20 | 21 | ## Exercise caution in using service accounts e.g. disable defaults, minimize permissions on newly created ones 22 | 23 |
24 | 25 | Refer [Service Accounts](../topics/service_accounts.md) 26 | 27 |
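Token automounting can be disabled on the ServiceAccount itself (the name is an example); the same field also exists on `pod.spec`:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
automountServiceAccountToken: false   # pods using this SA get no API token unless they opt in
```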
28 | 29 | ## Update Kubernetes frequently 30 | 31 |
32 | 33 | Refer [Upgrading Kubeadm Clusters](../topics/cluster_upgrade.md) 34 | 35 |
-------------------------------------------------------------------------------- /cks/3.system_hardening.md: -------------------------------------------------------------------------------- 1 | # System Hardening - 15% 2 | 3 |
4 | 5 | ## Minimize host OS footprint (reduce attack surface) 6 | 7 |
8 | 9 | Refer [Docker](../topics/docker.md) 10 | 11 |
12 | 13 | ## Minimize IAM roles 14 | 15 |
16 | 17 | IAM roles mainly apply to cloud-hosted clusters and should follow the principle of least privilege. 18 | 19 |
20 | 21 | ## Minimize external access to the network 22 | 23 |
24 | 25 | Refer [Network Policies](../topics/network_policies.md) 26 | 27 |
28 | 29 | ## Appropriately use kernel hardening tools such as AppArmor, seccomp 30 | 31 |
32 | 33 | Refer [Seccomp - Secure Computing](../topics/seccomp.md) 34 | Refer [AppArmor](../topics/apparmor.md) 35 | 36 |
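A Pod can reference a local seccomp profile such as the repo's `data/Seccomp/audit.json`, assuming it has been copied under the kubelet's seccomp root (`/var/lib/kubelet/seccomp` by default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json   # path relative to the kubelet seccomp root
  containers:
  - name: app
    image: nginx
```

Use `type: RuntimeDefault` to apply the container runtime's default profile instead.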
-------------------------------------------------------------------------------- /cks/4.minimize_microservice_vulnerabilities.md: -------------------------------------------------------------------------------- 1 | # Minimize Microservice Vulnerabilities - 20% 2 | 3 | ## Setup appropriate OS level security domains e.g. using PSP, OPA, security contexts 4 | 5 |
6 | 7 | Refer [Pod Security Policies](../topics/pod_security_policies.md) 8 | 9 | Refer [Pod Security Context](../topics/pod_security_context.md) 10 | 11 | Refer [Open Policy Agent](https://kubernetes.io/blog/2019/08/06/opa-gatekeeper-policy-and-governance-for-kubernetes/) 12 | 13 |
14 | 15 | ## Manage kubernetes secrets 16 | 17 |
18 | 19 | Refer [Secrets](../topics/secrets.md) 20 | 21 |
22 | 23 | ## Use container runtime sandboxes in multi-tenant environments (e.g. gvisor, kata containers) 24 | 25 |
26 | 27 | Refer [Runtime Class](../topics/runtimes.md) 28 | 29 |
30 | 31 | ## Implement pod to pod encryption by use of mTLS 32 | 33 |
34 | 35 | Refer [Istio MTLS](https://istio.io/latest/docs/tasks/security/authentication/authn-policy/#auto-mutual-tls) 36 | 37 |
-------------------------------------------------------------------------------- /cks/5.supply_chain_security.md: -------------------------------------------------------------------------------- 1 | # Supply Chain Security - 20% 2 | 3 | ## Minimize base image footprint 4 | 5 |
6 | 7 | Refer [Docker best practices](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) 8 | 9 |
10 | 11 | ## Secure your supply chain: whitelist allowed image registries, sign and validate images 12 | 13 |
14 | 15 | Refer [Admission Controllers ImagePolicyWebhook](../topics/admission_controllers.md#imagepolicywebhook) 16 | 17 |
18 | 19 | ## Use static analysis of user workloads (e.g. kubernetes resources, docker files) 20 | 21 |
22 | 23 | Refer [Kubesec](../topics/kubesec.md) 24 | 25 |
26 | 27 | ## Scan images for known vulnerabilities 28 | 29 |
30 | 31 | Refer [Trivy](../topics/trivy.md) 32 | 33 |
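Typical invocations (the image tag is an example):

```bash
trivy image nginx:1.25                            # scan an image
trivy image --severity HIGH,CRITICAL nginx:1.25   # report only high/critical findings
```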
-------------------------------------------------------------------------------- /cks/6.monitoring_logging_runtime_security.md: -------------------------------------------------------------------------------- 1 | # Monitoring, Logging and Runtime Security - 20% 2 | 3 |
4 | 5 | ## Perform behavioral analytics of syscall process and file activities at the host and container level to detect malicious activities 6 | 7 |
8 | 9 | Refer [Falco](../topics/falco.md) 10 | 11 | Other tools include `strace` and `tracee`. 12 | 13 |
14 | 15 | ## Detect threats within physical infrastructure, apps, networks, data, users and workloads 16 | 17 |
18 | 19 | TBD 20 | 21 |
22 | 23 | ## Detect all phases of attack regardless where it occurs and how it spreads 24 | 25 |
26 | 27 | TBD 28 | 29 |
30 | 31 | ## Perform deep analytical investigation and identification of bad actors within environment 32 | 33 |
34 | 35 | TBD 36 | 37 |
38 | 39 | ## Ensure immutability of containers at runtime 40 | 41 |
42 | 43 | Refer [Pod Security Context Immutability](../topics/pod_security_context.md#immutability) 44 | 45 |
46 | 47 | ## Use Audit Logs to monitor access 48 | 49 |
50 | 51 | Refer [Kubernetes Auditing](../topics/auditing.md) 52 | 53 |
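A minimal audit policy sketch; rule order matters, as the first matching rule wins:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse     # full request/response bodies for secret access
  resources:
  - group: ""
    resources: ["secrets"]
- level: Metadata            # metadata only for everything else
```

The policy file is passed to the API server via `--audit-policy-file`, with `--audit-log-path` for the log destination.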
-------------------------------------------------------------------------------- /cks/README.md: -------------------------------------------------------------------------------- 1 | # Certified Kubernetes Security Specialist (CKS) 2 | 3 | ## [CKS Curriculum](https://github.com/cncf/curriculum) 4 | 5 | - [Cluster Setup - 10%](1.cluster_setup.md) 6 | - [Cluster Hardening - 15%](2.cluster_hardening.md) 7 | - [System Hardening - 15%](3.system_hardening.md) 8 | - [Minimize Microservice Vulnerabilities - 20%](4.minimize_microservice_vulnerabilities.md) 9 | - [Supply Chain Security - 20%](5.supply_chain_security.md) 10 | - [Monitoring, Logging and Runtime Security - 20%](6.monitoring_logging_runtime_security.md) 11 | 12 | ## Resources 13 | 14 | - [Certified Kubernetes Security Specialist - CKS learning path](https://jayendrapatil.com/certified-kubernetes-security-specialist-cks-learning-path/) 15 | - [KodeKloud Certified Kubernetes Security Specialist Course](https://shareasale.com/r.cfm?b=2319531&u=2367365&m=132199&urllink=&afftrack=) 16 | - [Udemy Kubernetes CKS 2021 Complete Course – Theory – Practice](https://click.linksynergy.com/link?id=l7C703x9gqw&offerid=507388.3573079&type=2&murl=https%3A%2F%2Fwww.udemy.com%2Fcourse%2Fcertified-kubernetes-security-specialist%2F) 17 | 18 | 19 | 20 | 21 | 22 | -------------------------------------------------------------------------------- /data/ImagePolicyWebhook/webhook.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIID5TCCAs2gAwIBAgIUL1k7p/ksn6VRIuAKmeDMctyCUwcwDQYJKoZIhvcNAQEL 3 | BQAwRjFEMEIGA1UEAww7c3lzdGVtOm5vZGU6aW1hZ2UtYm91bmNlci13ZWJob29r 4 | LmRlZmF1bHQucG9kLmNsdXN0ZXIubG9jYWwwHhcNMjExMjE1MDY1MTMyWhcNMzEx 5 | MjEzMDY1MTMyWjBGMUQwQgYDVQQDDDtzeXN0ZW06bm9kZTppbWFnZS1ib3VuY2Vy 6 | LXdlYmhvb2suZGVmYXVsdC5wb2QuY2x1c3Rlci5sb2NhbDCCASIwDQYJKoZIhvcN 7 | AQEBBQADggEPADCCAQoCggEBAK2/gOl+AEJjnbc5DG4iFg2WvD8JAjgwXHd3zQ6A 8 
| HujxMz1EjJmDksc6S7aKrCJmP42tDdzQatVINMFHBR/8kb5bVN+f0LSNEM3iktfE 9 | KmB7VsfEk6gaPJg8VOitA/7KpVDyZ4yJZmb2iaGLFzFF41XwiCP2pzihBUTj669Q 10 | 6MWDKxbONSrUpA60vvfhpWbnZxTbX8BfB1xDXOK51kK7rnXRfiJt6NHg+n87+1Lk 11 | SFcUoZ/BRarSfweHorCu8c/agZfN9rKyj5tPNb3ZCvp3WJs3ZElK2+j/abZwW6cY 12 | PIorQM0Zl3BZMFCdhoBEcqkeccb1DFjz0RB09SbH8WHCH3cCAwEAAaOByjCBxzAd 13 | BgNVHQ4EFgQUgcvgsxHiAEkdgZgWa6XWuEApS6swHwYDVR0jBBgwFoAUgcvgsxHi 14 | AEkdgZgWa6XWuEApS6swDwYDVR0TAQH/BAUwAwEB/zB0BgNVHREEbTBrghVpbWFn 15 | ZS1ib3VuY2VyLXdlYmhvb2uCIWltYWdlLWJvdW5jZXItd2ViaG9vay5kZWZhdWx0 16 | LnN2Y4IvaW1hZ2UtYm91bmNlci13ZWJob29rLmRlZmF1bHQuc3ZjLmNsdXN0ZXIu 17 | bG9jYWwwDQYJKoZIhvcNAQELBQADggEBAAofI9qArTMFQ4W19OsE3Sp1GLdTie2P 18 | GIVFoiyedYwF+mJWbSgBxklnAKkJf7/sj0PHUEPP4cs7BUM6YHUrjC3OUPhbiH9f 19 | CB8cVjVJhrI4mWDbAXiPa1mvo44x5eZeWDoz+DkUK+nna1/6ik40yOlonoyPXS/y 20 | 1qEWPijRr/3nJ6Vfy6823UNasEQN6mqeUWAO29M1vrYvq0rzUGiU4xTUvWH3JA26 21 | 1sk+ZYAWyZe2/kOTRMjTnKAaki+dnWt14ed1ipuyHxfR6vHKS80eZuJEd2hmytoE 22 | PRljY4asLiazIAP5j9/T4Xj66n0fvgTh75iUwAMkQHS2swC4ZjVS7nc= 23 | -----END CERTIFICATE----- -------------------------------------------------------------------------------- /data/ImagePolicyWebhook/webhook.key: -------------------------------------------------------------------------------- 1 | -----BEGIN PRIVATE KEY----- 2 | MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCtv4DpfgBCY523 3 | OQxuIhYNlrw/CQI4MFx3d80OgB7o8TM9RIyZg5LHOku2iqwiZj+NrQ3c0GrVSDTB 4 | RwUf/JG+W1Tfn9C0jRDN4pLXxCpge1bHxJOoGjyYPFTorQP+yqVQ8meMiWZm9omh 5 | ixcxReNV8Igj9qc4oQVE4+uvUOjFgysWzjUq1KQOtL734aVm52cU21/AXwdcQ1zi 6 | udZCu6510X4ibejR4Pp/O/tS5EhXFKGfwUWq0n8Hh6KwrvHP2oGXzfayso+bTzW9 7 | 2Qr6d1ibN2RJStvo/2m2cFunGDyKK0DNGZdwWTBQnYaARHKpHnHG9QxY89EQdPUm 8 | x/Fhwh93AgMBAAECggEAfW5S0j10Unk30p4MqzVQVl8LZzZJs+a12klSb7VumxwF 9 | saVbGzgxLkKXhiB2RB8sokrcRxzvAyota5qpyH29eX7VttrZAH8WMovvFnU3Yo+o 10 | Bm+TaTgHpp9nbNH6oGYLEnTs7DgFBS/WDBktlRSvGcubfNsDvY4BD8q6ysXORUdL 11 | Mji+JiPgIxlvHLZleP5zAyLWesSvKpUZxvE3/8G0M6rJD70Ufq9w3O2/UbrXoOEK 12 | 
vdKn3MIarI8x3O7dDauFdA+LbBMMG3Pl+GbkRuG5eFwMhUHzqks+sx0M8vz5YDzw 13 | mUxO1gzktvmSDiEcnIS5aINXgItviQp545KCCd+RAQKBgQDgFg8c5yUk9pMSIrIC 14 | kUT6uWfi0rnHREBfrCZUkso4acIt1PBEOOYKJoLwbdjE1w7fuysk6Ok7o5rg9Cch 15 | qen7hIFoWwKhfNO7dcwozs6gnT7QVUpHnID3t23m8wGtf7d2QRAXqCDRmaQUfHRc 16 | zupj7LPRsbrrc1ZBCI3i9g1mDwKBgQDGfivUe5n9+W6215SR3ofHrzkr6GQBl1bb 17 | H9WRhmvxNpLARdbKoGeBYMggdFte/SlzHdN6c5gaXIM7OJZj3NMSU/Flaqe/drOR 18 | 76zN1nACvNZazpxHLnVklgSesRdFYZkvzhwnuS3sPiBEseV/Zi/Hp+Lc9XguqH5a 19 | LZHmGMJYGQKBgCZOPwkezi+yYtOv0KQ1twfxF7wjb5SLq0FviSHd8emQ0pvJEcVn 20 | wJMtoCZ/cJW9eZJvSWHG2s/SGNCpi+LqS9AuB30SSbHXR858xYiYSaQVHT65xbfW 21 | Hgm6dnQLSFcjRPZXCuwwVmPeErlZyP5wdIreVKLc8en7zlvRnYeVrharAoGBAIf9 22 | QUIePG6ISZXzNNKLRzNDlUPDv2BnsxYFRWiiU6m63efk8Td5lfBJwlKZ5U+62n8H 23 | 3C90qqzE3RPhvQdF70YLRMNawvql9HjzX8zWMX9uqN0l2GPcLIlxTlD6uxrJtw3N 24 | g/SjJhdIqQrnZnhWJj3/g6omcuRkg8x8lAy0wdFhAoGAS2dEds2M9/OtAHSvGScr 25 | Pb7hXWT+5cX3PqgPiLc1R0TRTjCzUEJYtuSpwb6/JHuVNXmpek1xzLfkykXu7LsG 26 | sy0GXILOBAX5lxYrIgHIMv4a3pjI4UbwB1OzvthRc4kJXyBBT7L7LlPgaJ97xelf 27 | L4TAluWzris5Xa7Y53IfkhE= 28 | -----END PRIVATE KEY----- -------------------------------------------------------------------------------- /data/Seccomp/audit.json: -------------------------------------------------------------------------------- 1 | { 2 | "defaultAction": "SCMP_ACT_LOG" 3 | } -------------------------------------------------------------------------------- /data/kubeconfig.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | current-context: kubernetes-admin@kubernetes 3 | kind: Config 4 | preferences: {} 5 | clusters: 6 | - cluster: 7 | certificate-authority: /etc/kubernetes/pki/ca.crt 8 | server: https://controlplane:6443 9 | name: kubernetes 10 | - name: labs 11 | cluster: 12 | certificate-authority: /etc/kubernetes/pki/ca.crt 13 | server: https://controlplane:6443 14 | - name: development 15 | cluster: 16 | certificate-authority: /etc/kubernetes/pki/ca.crt 17 | 
server: https://controlplane:6443 18 | - name: qa 19 | cluster: 20 | certificate-authority: /etc/kubernetes/pki/ca.crt 21 | server: https://controlplane:6443 22 | - name: production 23 | cluster: 24 | certificate-authority: /etc/kubernetes/pki/ca.crt 25 | server: https://controlplane:6443 26 | users: 27 | - name: kubernetes-admin 28 | user: 29 | client-certificate: /etc/kubernetes/pki/users/user/user.crt 30 | client-key: /etc/kubernetes/pki/users/user/user.key 31 | - name: labs-user 32 | user: 33 | client-certificate: /etc/kubernetes/pki/users/test-user/labs-user.crt 34 | client-key: /etc/kubernetes/pki/users/test-user/labs-user.key 35 | - name: dev-user 36 | user: 37 | client-certificate: /etc/kubernetes/pki/users/dev-user/dev-user.crt 38 | client-key: /etc/kubernetes/pki/users/dev-user/dev-user.key 39 | - name: qa-user 40 | user: 41 | client-certificate: /etc/kubernetes/pki/users/qa-user/qa-user.crt 42 | client-key: /etc/kubernetes/pki/users/qa-user/qa-user.key 43 | - name: prod-user 44 | user: 45 | client-certificate: /etc/kubernetes/pki/users/prod-user/prod-user.crt 46 | client-key: /etc/kubernetes/pki/users/prod-user/prod-user.key 47 | contexts: 48 | - context: 49 | cluster: kubernetes 50 | user: kubernetes-admin 51 | name: kubernetes-admin@kubernetes 52 | - name: labs-user@labs 53 | context: 54 | cluster: labs 55 | user: labs-user 56 | - name: development-user@labs 57 | context: 58 | cluster: development 59 | user: development-user 60 | - name: qa-user@qa 61 | context: 62 | cluster: qa 63 | user: qa-user 64 | - name: prod-user@prod 65 | context: 66 | cluster: prod 67 | user: prod-user -------------------------------------------------------------------------------- /data/tls.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIFZDCCA0wCCQCLkCF9TN02ITANBgkqhkiG9w0BAQsFADB0MQswCQYDVQQGEwJJ 3 | TjELMAkGA1UECAwCTUgxDTALBgNVBAcMBGNpdHkxEDAOBgNVBAoMB2NvbXBhbnkx 4 | 
EDAOBgNVBAsMB3NlY3Rpb24xCzAJBgNVBAMMAkRLMRgwFgYJKoZIhvcNAQkBFglh 5 | LmJAYy5jb20wHhcNMjExMjEwMTMzMTA0WhcNMjIxMjEwMTMzMTA0WjB0MQswCQYD 6 | VQQGEwJJTjELMAkGA1UECAwCTUgxDTALBgNVBAcMBGNpdHkxEDAOBgNVBAoMB2Nv 7 | bXBhbnkxEDAOBgNVBAsMB3NlY3Rpb24xCzAJBgNVBAMMAkRLMRgwFgYJKoZIhvcN 8 | AQkBFglhLmJAYy5jb20wggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDc 9 | 56DteFTMyJeLn2qP+5AIuumvW3B4ndk/h8p7489J5EH6KNlL4gp5P4q0rZRXJqaX 10 | sRzBZD2nM2kWDwRC+KjgffQHxTESZOe8jLBl4kz2iPWLIsa2nfIVgoi0U9qZ6bGN 11 | LU4yxYWyliKgD0xweTV9EsUHCgYjLO8lkwRPcMCAHNPckcXooOO/PLKHz5Kzg4J/ 12 | au6TNF3GqzX5ECpArgZOd+67rM1ZFg9jxGyQZmfAnklILOBuN9DCsHqVHScdcATi 13 | Y105KLFAg8KCJ8+BSPzBVRNjuWhmfzmHBqPAWg4N50D10IHgeJWcdg51VbgC0aYO 14 | sbx4JSCUfvjKHDAQfd0PhQDpfvam2tERc1HQfKFAa89SWPVblRE4szaI+uSqJdIg 15 | P+XJ3YVqIHJUblC1mM85EAfSRmEv3Tn2C+gwi65gpYLkjvJr4ucRs+vCvF/s3qYA 16 | QnP87FyXa7GEZSpLop/lVb2J5o7muc69FKNOpUHYDkVxjlmMs+T5RZgOXL7lvjnN 17 | c09rjVs+lVZ/fW+Ej4p0lF4HJuG+vaGU79w8SJz7nUQiU+A9ayoJfbld7BgCv4UQ 18 | yS0G2uuKlxRVw+NZGCNSmthDAvytNBR2C4qpXw8pK+BrAc7jibOOvJWg1Zl7KY89 19 | taD0RLpd9WE+6QTvyXnS88p+uY6fjhAivS85tW+7LwIDAQABMA0GCSqGSIb3DQEB 20 | CwUAA4ICAQAZ0lH73nsPbm40JtqElGCzdf/OjlbfiPPATOy+6FvR5e2myg2hnDu8 21 | nPYSKs3F5hRdYm90a6r3q4+Cyej58259WOK5r0gW6GTJFoT/A/cKyqsolXZ4jjK6 22 | RPT0a5Vll0M8uRMPysRc8hGI1s06DFOfRWYDwtAfn20UpHjmLvjRYjXDS4FNLAh1 23 | c4G1GGGFVTpQo6yL881m+iErDUqU9pOR3Yu+NbOG7FFQXQtSuy7tFlRL65oyASHx 24 | I3REB6VL7CL37E9LDhdGoLRAWARRFWCGvZLRj9IBF/dQKXGjeD8BGnmNEUIMA9JW 25 | KiXmx41Rnf41v1v77LonCBveU2oubuc4YfnNcbAQHnoiN7sjcNIkIBFWspbhSstc 26 | 761G7bejMgP8HUYp0NZySABRsL+3bXtkVX8tmOx7/riR4TxMVjyPp8wGg/cuo8AJ 27 | DpizNmUQAg1YEo+5xe9tQV+C7ScvbbtTDkrWm+vXci4qXaXaJZv4VFvDCnQnfhL1 28 | mKbLZp7L7vpoWfezE0jNw7NV1Ys75AZDJBcOp2RyNaP+MCWf6/EQs2/UL0YntexE 29 | c7eqGREkFsxyaF960B2K73qbMlxahCwK3h7Q2Z7udmWGvayaIr7V3V2sBHDr8u36 30 | 99bwdR/h/t8Y2slP3kuuIteJSYpKAtQqt/FvoFtTDc91ZZ6ugYqnVg== 31 | -----END CERTIFICATE----- 32 | -------------------------------------------------------------------------------- 
/data/tls.key: -------------------------------------------------------------------------------- 1 | -----BEGIN PRIVATE KEY----- 2 | MIIJQgIBADANBgkqhkiG9w0BAQEFAASCCSwwggkoAgEAAoICAQDc56DteFTMyJeL 3 | n2qP+5AIuumvW3B4ndk/h8p7489J5EH6KNlL4gp5P4q0rZRXJqaXsRzBZD2nM2kW 4 | DwRC+KjgffQHxTESZOe8jLBl4kz2iPWLIsa2nfIVgoi0U9qZ6bGNLU4yxYWyliKg 5 | D0xweTV9EsUHCgYjLO8lkwRPcMCAHNPckcXooOO/PLKHz5Kzg4J/au6TNF3GqzX5 6 | ECpArgZOd+67rM1ZFg9jxGyQZmfAnklILOBuN9DCsHqVHScdcATiY105KLFAg8KC 7 | J8+BSPzBVRNjuWhmfzmHBqPAWg4N50D10IHgeJWcdg51VbgC0aYOsbx4JSCUfvjK 8 | HDAQfd0PhQDpfvam2tERc1HQfKFAa89SWPVblRE4szaI+uSqJdIgP+XJ3YVqIHJU 9 | blC1mM85EAfSRmEv3Tn2C+gwi65gpYLkjvJr4ucRs+vCvF/s3qYAQnP87FyXa7GE 10 | ZSpLop/lVb2J5o7muc69FKNOpUHYDkVxjlmMs+T5RZgOXL7lvjnNc09rjVs+lVZ/ 11 | fW+Ej4p0lF4HJuG+vaGU79w8SJz7nUQiU+A9ayoJfbld7BgCv4UQyS0G2uuKlxRV 12 | w+NZGCNSmthDAvytNBR2C4qpXw8pK+BrAc7jibOOvJWg1Zl7KY89taD0RLpd9WE+ 13 | 6QTvyXnS88p+uY6fjhAivS85tW+7LwIDAQABAoICAGSN62stQyyUgqdDwbYYxM+0 14 | hXsVHHVLJQEORtVuNYVlKcM9pOwt0KawjesAuG2TYnHaZUSC5K2fcU5hN4dkuTq3 15 | GsYOtO+yjun9AK7f/Dicz2iuQ9YMv42bBa9QHEnDXtbssJPb5agNP2WskRcBlZ+B 16 | U76IiZKpeZKZAXVH1dh7RtU4ZeYmloUOlBXOHvEoA9cMTd0kESvF86OUACfBD43Y 17 | egtj9XV/3TGE0AZLFx9O7fy0sNR7A8QboTEPPCbiPtbudBj4tPaxA3FLveET4DoB 18 | B/p1A1jkwML9+rwsQgmCIsfCSdxsB25ZLuuqQUDHPdeigDAQdmwiAA3AFwDqyhzV 19 | wuBeQH7OitOq7kBZAZ1Sv6jT3IkeM53ysMOfCa0LCvOCZt+GYtxxlH3XGQVKjBPi 20 | mm9txjpbpxBdYfi15lr+SXfy48YUCXiihNkIQ2XevQlFEn4c8axW7l34j9eF1vnf 21 | d1IQ6cBbNP8QQVHnMr/xH+EJa4D4EBBwGDclEanCXgeVuhcJU+qi92gUnVnKKqA2 22 | EHseNJhgrNEff6od2xlDC2NiM8DskaHCSG15E8mVMr+N1WKjZEJPG5kjjyQ5DU/8 23 | v/pqHzwOK6hG2D7fJiuSVTaEClF72qWHCIEG8M46h+lpZ6DaQvzkMoQ/ga1Ebc28 24 | b3ghJdkt4JwtizIC9tQBAoIBAQD2f+maNqHR2InJKJ/e2s8C3e3+kxXpgZiGujr+ 25 | 06Whhz2a073zx8UX21PBlro1J0JdlIF3DVSUshSk3qu7KQZPCQr4x8IH5JPFdDi0 26 | ZRXshm5ByUSyWnmmDVku3pvl521Gwd8XdEKHNmFaq/wpT/aRkrLs++vapx41iLcr 27 | qBa7grh/0wGej/ec3xtfGClymfqNuLQIPpLCmJG+gMM9Kcoc3s5L1oA6yM1NneZB 28 | 
7rYjNG9HraF17wXY+wp1/pqu5dhCwwhRAahifvuYMRirPX2J9dqAAsSoePHf2CkF 29 | HA7ToDyXIa6GmpdSng2sE2A/GgXD2X0ev0QmO8b4iGChe3fvAoIBAQDlay9AzcZu 30 | +OxCAC1T0jJZzAPeN3Wz08K4RTE4tWbsBj8j/GenimgaFn4jXYo4vMdFPa6ET9+p 31 | Lem9YVcGfRtp3a8N3Lx2KkT8SZTD+itMt8UPmbxIJviO1Z/KdqlNyxNt6tWlMbKA 32 | z72CWvwvbXXPFMKIROS2xRgXmx7r0C0750IXYEtsIColjXh5ME6faECgsWWqsCi1 33 | cnH1awrzGkw5BwPeGYB/pmGRtd2q1kb5BoP7GuME8/T1T/A2I0ltjU9rX8qjeyMv 34 | S43tEFWHxTijNsKK/UvLFn2K/lfCVQMQnKhpHKuJOtTsFkGg2Ukwe8rnDtSQdgWg 35 | 3P2p0IXjerDBAoIBADWx/2z8YZuYk8sh8lFVUKrLNUCzQZ6wAE2424kPCZF6KE1F 36 | uqcT6TcdK82Ly9wwRSClbN5GJRqPADg52SbX9OvaiG1Q9k9J13a3rnJ9Yp03W2Ux 37 | NqmzU7R8S+UN0N/v3boAGVy+ko9ppSNfO3q0VH25ewhsiCAFL2tx8JSt9OW7v/z4 38 | Ne4YZlPhtdCtLrosGIwuo+j32HhTS8w3uE/mfoRzdHTIsP4dJ7u0nafXHA3nKiZv 39 | CDDsdFWjuc+iOofGwakpWvJqbgemqZ+pcjo7FtGqoIIqGDSqw+WC7MyUJBatXQV+ 40 | 7Mmde0Ef9NJ7Fggo3wCeq8a621mIw/r3mjUS9DkCggEAUYA8bzcrEW1Y8TGC6M45 41 | mPEDRsRJCjNmb3QVQmIfSCYH9E7MvBZNWUc4VHP8kJ9v40dAYjzF5iIrcV3NPr7f 42 | KELa13/da9UkYMP7F4weKcj3Ns2Ut8Uwc/2sII77ImnMYzYT4/W9xkkGt/J+uJKY 43 | UZK8cRCYd92Y63nuCDQSfb9wGUHaSXU7w894RwVESRkOLIgY6ARg0eTwWxFF+IsV 44 | HQVC+HnyzmZbLxp+vxwUZo9L/77Te4T3NtbJLVJn2YVj+28yW9V48GpU5yzwVaVY 45 | s5LWle3aKTG6M9CbeKwexJ4CriTDQ6Mk1SIq+mt2tsSjlmYMWa2z3ivj6Znslp2V 46 | gQKCAQEAlwe8NW+4NhvhXL1dSx4iJNfZwTeuZdTRjja9en9CtY82v2cSdRRILH57 47 | hfPjva5T/hqAFIkKzCkAFzgkBSF2s1oVx5tJ7fdSzUEPAMqvfWCQF06lHeUPbPM4 48 | fDblgStfNcCfIXKBN7LJXw2GymKK5NUqrrN8j3oT1QVGwGvQPsZyJ59mKTwX801M 49 | /0Qy2SRTT+97nIAHYV9iBCrw8zXGaUCevn4Jn76ps0BJ1deTZe+MHpF8mb3g/WTC 50 | cY/4JoaCfM6l8zjuopayxlRYaW80H6HXgUvbXZfCJFZPbJkGEHO8OnJnkAUTn08q 51 | Lf/09ItIfuMr+ifYGoRA2pQwUulv4g== 52 | -----END PRIVATE KEY----- 53 | -------------------------------------------------------------------------------- /topics/README.md: -------------------------------------------------------------------------------- 1 | # Topics 2 | 3 | Test exercises for each topic 4 | 5 | - [Admission Controllers](./admission_controllers.md) 6 | -
[Annotations](./annotations.md) 7 | - [APIs](./apis.md) 8 | - [AppArmor](./apparmor.md) 9 | - [Auditing](./auditing.md) 10 | - [Authentication](./authentication.md) 11 | - [Platform Binary Verification](./binary_verification.md) 12 | - [Cluster Upgrade](./cluster_upgrade.md) 13 | - [ConfigMaps](./configmaps.md) 14 | - [DaemonSets](./daemonsets.md) 15 | - [Deployments](./deployments.md) 16 | - [ETCD](./etcd.md) 17 | - [Falco](./falco.md) 18 | - [Ingress](./ingress.md) 19 | - [Init Containers](./init_containers.md) 20 | - [Jobs](./jobs.md) 21 | - [Kubectl Jsonpath](./jsonpath.md) 22 | - [kube-bench](./kube-bench.md) 23 | - [Kubeconfig](./kubeconfig.md) 24 | - [Kubelet Security](./kubelet_security.md) 25 | - [Kubesec](./kubesec.md) 26 | - [Labels](./labels.md) 27 | - [Logging](./logging.md) 28 | - [Monitoring](./monitoring.md) 29 | - [Namespaces](./namespaces.md) 30 | - [Network Policies](./network_policies.md) 31 | - [Nodes](./nodes.md) 32 | - [Pod Security Context](./pod_security_context.md) 33 | - [Pod Security Policies](./pod_security_policies.md) 34 | - [Pods](./pods.md) 35 | - [Readiness & Liveness Probes](./probes.md) 36 | - [RBAC](./rbac.md) 37 | - [ReplicaSets](./replica_set.md) 38 | - [Runtime Classes](./runtimes.md) 39 | - [Seccomp](./seccomp.md) 40 | - [Secrets](./secrets.md) 41 | - [Service Accounts](./service_accounts.md) 42 | - [Services](./services.md) 43 | - [Taints & Tolerations](./taints_tolerations.md) 44 | - [Trivy](./trivy.md) 45 | - [Volumes](./volumes.md) -------------------------------------------------------------------------------- /topics/admission_controllers.md: -------------------------------------------------------------------------------- 1 | # [Admission Controllers](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/) 2 | 3 | An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and
authorized. 4 | 5 | - [ImagePolicyWebhook](#imagepolicywebhook) 6 | - [PodSecurityPolicy](#podsecuritypolicy) 7 | 8 |
9 | 10 | ## Basics 11 | 12 |
13 | 14 | ### Check the admission controllers enabled by default 15 | 16 |
show

17 | 18 | ```bash 19 | kubectl exec -it kube-apiserver-controlplane -n kube-system -- kube-apiserver -h | grep 'enable-admission-plugins' 20 | ``` 21 | 22 |

23 | 24 |
25 | 26 | ### Check the admission controllers enabled explicitly 27 | 28 |
show

29 | 30 | #### Check the `--enable-admission-plugins` property in the `/etc/kubernetes/manifests/kube-apiserver.yaml` file 31 | 32 |
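On a real control-plane node this check is a one-line `grep` against the static pod manifest. The fragment below is a stand-in written to a hypothetical `/tmp` path so the command can be tried anywhere; on the node you would grep `/etc/kubernetes/manifests/kube-apiserver.yaml` directly.

```bash
# Stand-in fragment for /etc/kubernetes/manifests/kube-apiserver.yaml (placeholder path)
mkdir -p /tmp/manifests
cat << 'EOF' > /tmp/manifests/kube-apiserver.yaml
    - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
EOF

# On a real node: grep 'enable-admission-plugins' /etc/kubernetes/manifests/kube-apiserver.yaml
grep 'enable-admission-plugins' /tmp/manifests/kube-apiserver.yaml
```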

33 | 34 |
35 | 36 | ### Disable `DefaultStorageClass` admission controller 37 | 38 |
show

39 | 40 | #### Add `--disable-admission-plugins=DefaultStorageClass` to the `/etc/kubernetes/manifests/kube-apiserver.yaml` file 41 | 42 |

43 | 44 |
45 | 46 | ## [ImagePolicyWebhook](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook) 47 | 48 |
49 | 50 | ### [Set Up](https://github.com/kainlite/kube-image-bouncer) 51 | 52 | ```bash 53 | # add image-bouncer-webhook to the host file 54 | echo "127.0.0.1 image-bouncer-webhook" >> /etc/hosts 55 | 56 | # make directory to host the keys - using /etc/kubernetes/pki as the volume is already mounted 57 | mkdir -p /etc/kubernetes/pki/kube-image-bouncer 58 | cd /etc/kubernetes/pki/kube-image-bouncer 59 | 60 | # generate webhook certificate OR use the one in data folder 61 | openssl req -x509 -new -days 3650 -nodes \ 62 | -keyout webhook.key -out webhook.crt -subj "/CN=system:node:image-bouncer-webhook.default.pod.cluster.local" \ 63 | -addext "subjectAltName=DNS:image-bouncer-webhook,DNS:image-bouncer-webhook.default.svc,DNS:image-bouncer-webhook.default.svc.cluster.local" 64 | 65 | # create secret 66 | kubectl create secret tls tls-image-bouncer-webhook --cert=/etc/kubernetes/pki/kube-image-bouncer/webhook.crt --key=/etc/kubernetes/pki/kube-image-bouncer/webhook.key 67 | 68 | # create webhook deployment exposed as node port service 69 | cat << EOF > image-bouncer-webhook.yaml 70 | apiVersion: v1 71 | kind: Service 72 | metadata: 73 | labels: 74 | app: image-bouncer-webhook 75 | name: image-bouncer-webhook 76 | spec: 77 | type: NodePort 78 | ports: 79 | - name: https 80 | port: 443 81 | targetPort: 1323 82 | protocol: "TCP" 83 | nodePort: 30080 84 | selector: 85 | app: image-bouncer-webhook 86 | --- 87 | apiVersion: apps/v1 88 | kind: Deployment 89 | metadata: 90 | name: image-bouncer-webhook 91 | spec: 92 | selector: 93 | matchLabels: 94 | app: image-bouncer-webhook 95 | template: 96 | metadata: 97 | labels: 98 | app: image-bouncer-webhook 99 | spec: 100 | containers: 101 | - name: image-bouncer-webhook 102 | imagePullPolicy: Always 103 | image: "kainlite/kube-image-bouncer:latest" 104 | args: 105 | - "--cert=/etc/admission-controller/tls/tls.crt" 106 | - "--key=/etc/admission-controller/tls/tls.key" 107 | - "--debug" 108 | - 
"--registry-whitelist=docker.io,k8s.gcr.io" 109 | volumeMounts: 110 | - name: tls 111 | mountPath: /etc/admission-controller/tls 112 | volumes: 113 | - name: tls 114 | secret: 115 | secretName: tls-image-bouncer-webhook 116 | EOF 117 | 118 | kubectl apply -f image-bouncer-webhook.yaml 119 | 120 | # define the admission configuration file @ /etc/kubernetes/pki/kube-image-bouncer/admission_configuration.yaml 121 | cat << EOF > admission_configuration.yaml 122 | apiVersion: apiserver.config.k8s.io/v1 123 | kind: AdmissionConfiguration 124 | plugins: 125 | - name: ImagePolicyWebhook 126 | configuration: 127 | imagePolicy: 128 | kubeConfigFile: /etc/kubernetes/pki/kube-image-bouncer/kube-image-bouncer.yml 129 | allowTTL: 50 130 | denyTTL: 50 131 | retryBackoff: 500 132 | defaultAllow: false 133 | EOF 134 | 135 | OR 136 | 137 | # Define the admission configuration file in json format @ /etc/kubernetes/admission_configuration.json 138 | cat << EOF > admission_configuration.json 139 | { 140 | "imagePolicy": { 141 | "kubeConfigFile": "/etc/kubernetes/pki/kube-image-bouncer/kube-image-bouncer.yml", 142 | "allowTTL": 50, 143 | "denyTTL": 50, 144 | "retryBackoff": 500, 145 | "defaultAllow": false 146 | } 147 | } 148 | EOF 149 | 150 | # Define the kube config file @ /etc/kubernetes/pki/kube-image-bouncer/kube-image-bouncer.yml 151 | 152 | cat << EOF > kube-image-bouncer.yml 153 | apiVersion: v1 154 | kind: Config 155 | clusters: 156 | - cluster: 157 | certificate-authority: /etc/kubernetes/pki/kube-image-bouncer/webhook.crt 158 | server: https://image-bouncer-webhook:30080/image_policy 159 | name: bouncer_webhook 160 | contexts: 161 | - context: 162 | cluster: bouncer_webhook 163 | user: api-server 164 | name: bouncer_validator 165 | current-context: bouncer_validator 166 | preferences: {} 167 | users: 168 | - name: api-server 169 | user: 170 | client-certificate: /etc/kubernetes/pki/apiserver.crt 171 | client-key: /etc/kubernetes/pki/apiserver.key 172 | EOF 173 | 174 | ``` 175 
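Before wiring the webhook into the API server, it is worth confirming that the certificate actually carries the SANs the API server will validate. A sketch with `openssl`; the `/tmp` paths are placeholders so the check can be tried anywhere, and on the node you would point at the files under `/etc/kubernetes/pki/kube-image-bouncer` instead.

```bash
# Generate a throwaway certificate with the same SANs as above (placeholder paths)
openssl req -x509 -new -days 1 -nodes -newkey rsa:2048 \
  -keyout /tmp/webhook.key -out /tmp/webhook.crt \
  -subj "/CN=system:node:image-bouncer-webhook.default.pod.cluster.local" \
  -addext "subjectAltName=DNS:image-bouncer-webhook,DNS:image-bouncer-webhook.default.svc"

# Print the Subject Alternative Name section of the certificate
openssl x509 -in /tmp/webhook.crt -noout -text | grep -A1 'Subject Alternative Name'
```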
| 176 | #### Check if pods can be created with the nginx:latest image 177 | 178 | ```bash 179 | kubectl create deploy nginx --image nginx 180 | # deployment.apps/nginx created 181 | kubectl get pods -w 182 | # NAME READY STATUS RESTARTS AGE 183 | # nginx-f89759699-5qbv5 1/1 Running 0 13s 184 | kubectl delete deploy nginx 185 | # deployment.apps "nginx" deleted 186 | ``` 187 | 188 | #### Enable the admission controller 189 | 190 | Edit the `/etc/kubernetes/manifests/kube-apiserver.yaml` file as below. 191 | 192 | ```yaml 193 | - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook # update 194 | - --admission-control-config-file=/etc/kubernetes/pki/kube-image-bouncer/admission_configuration.yaml # add 195 | ``` 196 | 197 | #### Verify 198 | 199 | Wait for the kube-apiserver to restart, then try creating a deployment with the nginx:latest image. 200 | 201 | ```bash 202 | kubectl get deploy nginx 203 | # NAME READY UP-TO-DATE AVAILABLE AGE 204 | # nginx 0/1 0 0 12s 205 | 206 | kubectl get events 207 | # 7s Warning FailedCreate replicaset/nginx-f89759699 (combined from similar events): Error creating: pods "nginx-f89759699-b2r4k" is forbidden: image policy webhook backend denied one or more images: Images using latest tag are not allowed 208 | ``` 209 | 210 |
211 | 212 | ## PodSecurityPolicy 213 | 214 | Refer [Pod Security Policy Admission Controller](./pod_security_policies.md) -------------------------------------------------------------------------------- /topics/annotations.md: -------------------------------------------------------------------------------- 1 | # [Annotations](https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/) 2 | 3 |
4 | 5 | ### Create pod `nginx-annotations` and annotate it with `description='my description'` 6 | 7 |
8 | 9 |
show

10 | 11 | ```bash 12 | kubectl run nginx-annotations --image nginx 13 | kubectl annotate pod nginx-annotations description='my description' 14 | ``` 15 | 16 |

17 | 18 |
19 | 20 | -------------------------------------------------------------------------------- /topics/api_deprecations.md: -------------------------------------------------------------------------------- 1 | # [Kubernetes API deprecations policy](https://kubernetes.io/docs/reference/using-api/deprecation-policy/) 2 | 3 |
4 | 5 | ### Given the deployment definition `nginx-deployment` written for an older version of Kubernetes, fix any API deprecation issues in the manifest so that the application can be deployed on a recent Kubernetes cluster. 6 | 7 | ```yaml 8 | apiVersion: apps/v1beta1 9 | kind: Deployment 10 | metadata: 11 | labels: 12 | app: nginx 13 | name: nginx-deployment 14 | spec: 15 | replicas: 1 16 | selector: 17 | matchLabels: 18 | app: nginx 19 | template: 20 | metadata: 21 | labels: 22 | app: nginx 23 | spec: 24 | containers: 25 | - image: nginx:1.20 26 | name: nginx 27 | ``` 28 | 29 |
30 | 31 |
show

32 | 33 | ```yaml 34 | apiVersion: apps/v1 # Update from apps/v1beta1 to apps/v1 and apply 35 | kind: Deployment 36 | metadata: 37 | labels: 38 | app: nginx 39 | name: nginx-deployment 40 | spec: 41 | replicas: 1 42 | selector: 43 | matchLabels: 44 | app: nginx 45 | template: 46 | metadata: 47 | labels: 48 | app: nginx 49 | spec: 50 | containers: 51 | - image: nginx:1.20 52 | name: nginx 53 | ``` 54 | 55 |
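When several manifests are affected, the removed API version can be located and rewritten mechanically with grep/sed. A sketch; the `/tmp/deprecated-manifests` directory and file are fabricated for illustration.

```bash
# Fabricated manifest still pinned to the removed API version (placeholder path)
mkdir -p /tmp/deprecated-manifests
printf 'apiVersion: apps/v1beta1\nkind: Deployment\n' > /tmp/deprecated-manifests/nginx-deployment.yaml

# Find every file that still references apps/v1beta1 ...
grep -rl 'apiVersion: apps/v1beta1' /tmp/deprecated-manifests

# ... and rewrite it in place to the GA apps/v1 API
sed -i 's|apiVersion: apps/v1beta1|apiVersion: apps/v1|' /tmp/deprecated-manifests/nginx-deployment.yaml
grep 'apiVersion' /tmp/deprecated-manifests/nginx-deployment.yaml
```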

56 | 57 |
58 | 59 | 60 | -------------------------------------------------------------------------------- /topics/apis.md: -------------------------------------------------------------------------------- 1 | # [APIs](https://kubernetes.io/docs/concepts/overview/kubernetes-api/) 2 | 3 |
4 | 5 | ### Get all `api-resources` and check their short names and API versions 6 | 7 |
show

8 | 9 | ```bash 10 | kubectl api-resources 11 | ``` 12 |

13 | 14 |
15 | 16 | ### Get the API group for the `jobs` API 17 | 18 |
show

19 | 20 | ```bash 21 | kubectl api-resources | grep jobs 22 | #cronjobs cj batch/v1beta1 true CronJob 23 | #jobs batch/v1 true Job 24 | ``` 25 | 26 |

27 | 28 |
29 | 30 | ### Enable the `v1alpha1` version for `rbac.authorization.k8s.io` API group on the controlplane node. 31 | 32 |
show

33 | 34 | Add `--runtime-config=rbac.authorization.k8s.io/v1alpha1` to the `/etc/kubernetes/manifests/kube-apiserver.yaml` file and let the kube-apiserver restart 35 | 36 |

-------------------------------------------------------------------------------- /topics/apparmor.md: -------------------------------------------------------------------------------- 1 | # [AppArmor](https://kubernetes.io/docs/tutorials/clusters/apparmor/) 2 | 3 |
4 | 5 | ### Check if AppArmor is available on the cluster 6 | 7 |
8 | 9 | ```bash 10 | systemctl status apparmor 11 | # ● apparmor.service - AppArmor initialization 12 | # Loaded: loaded (/lib/systemd/system/apparmor.service; enabled; vendor preset: enabled) 13 | # Active: active (exited) since Thu 2021-12-16 02:19:57 UTC; 40s ago 14 | # Docs: man:apparmor(7) 15 | # http://wiki.apparmor.net/ 16 | # Main PID: 312 (code=exited, status=0/SUCCESS) 17 | # Tasks: 0 (limit: 2336) 18 | # CGroup: /system.slice/apparmor.service 19 | 20 | # Dec 16 02:19:57 controlplane systemd[1]: Starting AppArmor initialization... 21 | # Dec 16 02:19:57 controlplane apparmor[312]: * Starting AppArmor profiles 22 | # Dec 16 02:19:57 controlplane apparmor[312]: Skipping profile in /etc/apparmor.d/disable: usr.sbin.rsyslogd 23 | # Dec 16 02:19:57 controlplane apparmor[312]: ...done. 24 | # Dec 16 02:19:57 controlplane systemd[1]: Started AppArmor initialization. 25 | ``` 26 | 27 |
28 | 29 | ### Check if the AppArmor module is loaded and the profiles loaded by AppArmor in different modes. 30 | 31 |
32 | 33 | ```bash 34 | aa-status 35 | # apparmor module is loaded. 36 | # 12 profiles are loaded. 37 | # 12 profiles are in enforce mode. 38 | # /sbin/dhclient 39 | # /usr/bin/man 40 | # /usr/lib/NetworkManager/nm-dhcp-client.action 41 | # /usr/lib/NetworkManager/nm-dhcp-helper 42 | # /usr/lib/connman/scripts/dhclient-script 43 | # /usr/lib/snapd/snap-confine 44 | # /usr/lib/snapd/snap-confine//mount-namespace-capture-helper 45 | # /usr/sbin/ntpd 46 | # /usr/sbin/tcpdump 47 | # docker-default 48 | # man_filter 49 | # man_groff 50 | # 0 profiles are in complain mode. 51 | # 9 processes have profiles defined. 52 | # 9 processes are in enforce mode. 53 | # /sbin/dhclient (639) 54 | # docker-default (2008) 55 | # docker-default (2026) 56 | # docker-default (2044) 57 | # docker-default (2058) 58 | # docker-default (2260) 59 | # docker-default (2277) 60 | # docker-default (2321) 61 | # docker-default (2334) 62 | # 0 processes are in complain mode. 63 | # 0 processes are unconfined but have a profile defined. 64 | ``` 65 | 66 |
67 | 68 | ### Use the following `k8s-apparmor-example-deny-write` AppArmor profile with the `hello-apparmor` pod. 69 | 70 |
71 | 72 | ```bash 73 | cat << EOF > k8s-apparmor-example-deny-write 74 | #include <tunables/global> 75 | profile k8s-apparmor-example-deny-write flags=(attach_disconnected) { 76 | #include <abstractions/base> 77 | file, 78 | # Deny all file writes. 79 | deny /** w, 80 | } 81 | EOF 82 | ``` 83 | 84 | ```yaml 85 | apiVersion: v1 86 | kind: Pod 87 | metadata: 88 | name: hello-apparmor 89 | spec: 90 | containers: 91 | - name: hello 92 | image: busybox 93 | command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] 94 | ``` 95 | 96 |
show

97 | 98 | #### Load the AppArmor profile 99 | 100 | **NOTE**: The profile needs to be loaded on all the nodes. 101 | 102 | ```bash 103 | apparmor_parser -q k8s-apparmor-example-deny-write # load the apparmor profile 104 | 105 | aa-status | grep k8s-apparmor-example-deny-write # verify it's loaded 106 | # k8s-apparmor-example-deny-write 107 | ``` 108 | 109 | #### Enable AppArmor for the pod 110 | 111 | ```yaml 112 | cat << EOF > hello-apparmor.yaml 113 | apiVersion: v1 114 | kind: Pod 115 | metadata: 116 | name: hello-apparmor 117 | annotations: # add apparmor annotations 118 | container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write # add this 119 | spec: 120 | containers: 121 | - name: hello 122 | image: busybox 123 | command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ] 124 | EOF 125 | 126 | kubectl apply -f hello-apparmor.yaml 127 | ``` 128 | 129 | #### Verify 130 | 131 | ```bash 132 | kubectl exec hello-apparmor -- cat /proc/1/attr/current 133 | # k8s-apparmor-example-deny-write (enforce) 134 | ``` 135 | 136 |

-------------------------------------------------------------------------------- /topics/auditing.md: -------------------------------------------------------------------------------- 1 | # [Auditing](https://kubernetes.io/docs/tasks/debug-application-cluster/audit/) 2 | 3 | - Kubernetes auditing provides a security-relevant, chronological set of records documenting the sequence of actions in a cluster. 4 | - The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself. 5 | 6 |
7 | 8 | ### Enable auditing on the Kubernetes cluster 9 | - Capture all events for `pods` at `RequestResponse` level 10 | - Capture `delete` events for `secrets` in the `prod` namespace at `Metadata` level 11 | - Define the policy at `/etc/kubernetes/audit-policy.yaml` 12 | - Logs should be redirected to `/var/log/kubernetes/audit/audit.log` 13 | - Keep the logs for a maximum of `30` days 14 | 15 |
16 | 17 |
show

18 | 19 | #### Create the audit policy file 20 | 21 | ```yaml 22 | cat << EOF > /etc/kubernetes/audit-policy.yaml 23 | apiVersion: audit.k8s.io/v1 # This is required. 24 | kind: Policy 25 | rules: 26 | # Log pod changes at RequestResponse level 27 | - level: RequestResponse 28 | resources: 29 | - group: "" 30 | resources: ["pods"] 31 | 32 | # Log secret delete events in prod namespaces at the Metadata level. 33 | - level: Metadata 34 | verbs: ["delete"] 35 | resources: 36 | - group: "" # core API group 37 | resources: ["secrets"] 38 | namespaces: ["prod"] 39 | EOF 40 | ``` 41 | 42 | #### Backup the original file `cp kube-apiserver.yaml kube-apiserver.yaml_org` 43 | 44 | #### Update the `/etc/kubernetes/manifests/kube-apiserver.yaml` to add audit configs and volume mounts. 45 | 46 | ```yaml 47 | - --audit-policy-file=/etc/kubernetes/audit-policy.yaml 48 | - --audit-log-path=/var/log/kubernetes/audit/audit.log 49 | - --audit-log-maxage=30 50 | ``` 51 | 52 | ```yaml 53 | volumeMounts: 54 | - mountPath: /etc/kubernetes/audit-policy.yaml 55 | name: audit 56 | readOnly: true 57 | - mountPath: /var/log/kubernetes/audit/ 58 | name: audit-log 59 | readOnly: false 60 | 61 | volumes: 62 | - name: audit 63 | hostPath: 64 | path: /etc/kubernetes/audit-policy.yaml 65 | type: File 66 | - name: audit-log 67 | hostPath: 68 | path: /var/log/kubernetes/audit/ 69 | type: DirectoryOrCreate 70 | ``` 71 | 72 | #### Check the `/var/log/kubernetes/audit/audit.log` for audit log entries 73 | 74 |
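Once events start landing in `audit.log` (one JSON object per line), they can be filtered with plain `grep`. The entry below is fabricated to mirror the shape of a real event, and the `/tmp` path is a placeholder; on the cluster you would grep `/var/log/kubernetes/audit/audit.log`.

```bash
# Fabricated audit entry mirroring a real event's shape (placeholder path)
mkdir -p /tmp/audit
echo '{"kind":"Event","level":"Metadata","verb":"delete","objectRef":{"resource":"secrets","namespace":"prod"}}' > /tmp/audit/audit.log

# On a real cluster: grep '"verb":"delete"' /var/log/kubernetes/audit/audit.log
grep '"verb":"delete"' /tmp/audit/audit.log
```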

-------------------------------------------------------------------------------- /topics/authentication.md: -------------------------------------------------------------------------------- 1 | # [Authentication](https://kubernetes.io/docs/reference/access-authn-authz/authentication/) 2 | 3 |
4 | 5 | ## [Certificates API](https://kubernetes.io/docs/reference/access-authn-authz/certificate-signing-requests/) 6 | 7 |
8 | 9 | ### Create a user certificate signing request using the certs and spec below and submit it for approval. 10 | 11 | #### Create user certs 12 | 13 | ```bash 14 | openssl genrsa -out normal.key 2048 15 | openssl req -new -key normal.key -out normal.csr 16 | ``` 17 | 18 | #### Use the CertificateSigningRequest spec below 19 | 20 | ```yaml 21 | cat << EOF > normal-csr.yaml 22 | apiVersion: certificates.k8s.io/v1 23 | kind: CertificateSigningRequest 24 | metadata: 25 | name: normal-csr 26 | spec: 27 | request: ?? 28 | signerName: kubernetes.io/kube-apiserver-client 29 | usages: 30 | - client auth 31 | EOF 32 | ``` 33 | 34 |
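The `request:` field expects the PEM CSR base64-encoded with the newlines stripped, so it fits on one YAML line. A sketch of producing that value; a dummy file stands in here for the `normal.csr` generated above.

```bash
# Dummy stand-in for normal.csr; on a real system use the CSR generated above
printf 'dummy-csr' > /tmp/normal.csr

# Base64-encode and strip newlines so the value fits on a single YAML line
REQUEST=$(base64 < /tmp/normal.csr | tr -d '\n')
echo "$REQUEST"
# ZHVtbXktY3Ny
```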
35 | 36 |
show

37 | 38 | ```yaml 39 | cat << EOF > normal-csr.yaml 40 | apiVersion: certificates.k8s.io/v1 41 | kind: CertificateSigningRequest 42 | metadata: 43 | name: normal-csr 44 | spec: 45 | request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2lqQ0NBWElDQVFBd1JURUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlZCQWdNQ2xOdmJXVXRVM1JoZEdVeApJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MFpEQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNellTKzhhTXdBVmkwWHovaVp2Z2k0eGtNWTkyMWZRSmd1bGM2eDYKS0Q4UjNteEMyRkxlWklJSHRYTDZadG5KSHYxY0g0eWtMUEZtR2hDRURVNnRxQ2FpczNaWWV3MVBzVG5nd1Jzego3TG1oeDV4dzVRc3lRaFBkNjRuY3h1MFRJZmFGbmducU9UT0NGWERyaXBtZzJ5TExvbTIxL1ZxbjNQMVJQeE51CjZJdDlBOHB6aURlTVg5VTlaTHhzT0Jld2FzaFJzM29jb3NIcHp5cXN1SnQralVvUjNmaGducVB3UkNBZmQ3YUUKaUhKOWFxblhHVVNUWENXb2g2OEtPL3VkU3p2djNmcExhV1JxUUdHWi9HSWpjM1ZiZzNHN0FqNWNITUp2WHV3bwp3M0JkV1pZaEpycU9Ld21sMW9QVHJRNlhMQ2FBTFZ2NnFqZWVOSFNvOVZyVmM0OENBd0VBQWFBQU1BMEdDU3FHClNJYjNEUUVCQ3dVQUE0SUJBUUFEZGNmMHZVSnVtcmRwcGxOa0pwSERSVFI2ZlFzYk84OFM3cnlndC9vcFEvOCsKNVkyUVVjVzhSUUdpVGdvQjFGUG1FeERVcFRna2p1SEtDQ0l3RWdjc3pPRm5YdC95N1FsWXBuc0E3dG01V1ppUAozbG1xSFpQMU9tQlRBRU45L2swSFpKdjc4Rytmcm0xNnRJbWtzUHpSK2lBajZ2WDZtT1RNVEk3Y1U5cmIvSElLCmVOTTZjV2dYQzYrbU9PbDFqM3BjS1hlVlB0YS9MbDZEVFc0VWdnR0J1NVJPb3FWRS9sTDNQNnc4K2R3M0lWQngKWlBrK0JDNVQrMkZLMFNzd3VvSCtaKzhtbi8weHR2bk1nL3FPTWIwdXVvcDNSTklVZmFhR1pRSjRmSnVrMGdkQwpXZHFselJMREsydXZYcWVFUXFjMENxZmVVdXRGdzVuOWNWZVdvRFVwCi0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= # use base64 encoded value of normal.csr file 46 | signerName: kubernetes.io/kube-apiserver-client 47 | usages: 48 | - client auth 49 | EOF 50 | 51 | kubectl apply -f normal-csr.yaml 52 | ``` 53 | 54 | #### Verify it's submitted and in Pending status 55 | 56 | ```bash 57 | kubectl get csr normal-csr 58 | # NAME AGE SIGNERNAME REQUESTOR CONDITION 59 | # normal-csr 37s kubernetes.io/kube-apiserver-client kubernetes-admin Pending 60 | ``` 61 | 62 |

63 | 64 |
65 | 66 | ### Approve the `normal-csr` request 67 | 68 |
69 | 70 |
show

71 | 72 | ```bash 73 | kubectl certificate approve normal-csr 74 | # certificatesigningrequest.certificates.k8s.io/normal-csr approved 75 | ``` 76 | 77 | #### Verify it's in Approved,Issued status 78 | 79 | ```bash 80 | kubectl get csr normal-csr 81 | # NAME AGE SIGNERNAME REQUESTOR CONDITION 82 | # normal-csr 4m15s kubernetes.io/kube-apiserver-client kubernetes-admin Approved,Issued 83 | ``` 84 | 85 |

86 | 87 |
88 | 89 | ### Create the CSR request below and reject it 90 | 91 |
92 | 93 | ```yaml 94 | cat << EOF > hacker-csr.yaml 95 | apiVersion: certificates.k8s.io/v1 96 | kind: CertificateSigningRequest 97 | metadata: 98 | name: hacker-csr 99 | spec: 100 | groups: 101 | - system:masters 102 | - system:authenticated 103 | request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2lqQ0NBWElDQVFBd1JURUxNQWtHQTFVRUJoTUNRVlV4RXpBUkJnTlZCQWdNQ2xOdmJXVXRVM1JoZEdVeApJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MFpEQ0NBU0l3RFFZSktvWklodmNOCkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFNellTKzhhTXdBVmkwWHovaVp2Z2k0eGtNWTkyMWZRSmd1bGM2eDYKS0Q4UjNteEMyRkxlWklJSHRYTDZadG5KSHYxY0g0eWtMUEZtR2hDRURVNnRxQ2FpczNaWWV3MVBzVG5nd1Jzego3TG1oeDV4dzVRc3lRaFBkNjRuY3h1MFRJZmFGbmducU9UT0NGWERyaXBtZzJ5TExvbTIxL1ZxbjNQMVJQeE51CjZJdDlBOHB6aURlTVg5VTlaTHhzT0Jld2FzaFJzM29jb3NIcHp5cXN1SnQralVvUjNmaGducVB3UkNBZmQ3YUUKaUhKOWFxblhHVVNUWENXb2g2OEtPL3VkU3p2djNmcExhV1JxUUdHWi9HSWpjM1ZiZzNHN0FqNWNITUp2WHV3bwp3M0JkV1pZaEpycU9Ld21sMW9QVHJRNlhMQ2FBTFZ2NnFqZWVOSFNvOVZyVmM0OENBd0VBQWFBQU1BMEdDU3FHClNJYjNEUUVCQ3dVQUE0SUJBUUFEZGNmMHZVSnVtcmRwcGxOa0pwSERSVFI2ZlFzYk84OFM3cnlndC9vcFEvOCsKNVkyUVVjVzhSUUdpVGdvQjFGUG1FeERVcFRna2p1SEtDQ0l3RWdjc3pPRm5YdC95N1FsWXBuc0E3dG01V1ppUAozbG1xSFpQMU9tQlRBRU45L2swSFpKdjc4Rytmcm0xNnRJbWtzUHpSK2lBajZ2WDZtT1RNVEk3Y1U5cmIvSElLCmVOTTZjV2dYQzYrbU9PbDFqM3BjS1hlVlB0YS9MbDZEVFc0VWdnR0J1NVJPb3FWRS9sTDNQNnc4K2R3M0lWQngKWlBrK0JDNVQrMkZLMFNzd3VvSCtaKzhtbi8weHR2bk1nL3FPTWIwdXVvcDNSTklVZmFhR1pRSjRmSnVrMGdkQwpXZHFselJMREsydXZYcWVFUXFjMENxZmVVdXRGdzVuOWNWZVdvRFVwCi0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= 104 | signerName: kubernetes.io/kube-apiserver-client 105 | usages: 106 | - digital signature 107 | - key encipherment 108 | - server auth 109 | EOF 110 | 111 | kubectl apply -f hacker-csr.yaml 112 | ``` 113 | 114 |
show

115 | 116 | ```bash 117 | kubectl certificate deny hacker-csr 118 | # certificatesigningrequest.certificates.k8s.io/hacker-csr denied 119 | 120 | ``` 121 | 122 | #### Verify it's in Denied status 123 | 124 | ```bash 125 | kubectl get csr hacker-csr 126 | # NAME AGE SIGNERNAME REQUESTOR CONDITION 127 | # hacker-csr 16s kubernetes.io/kube-apiserver-client kubernetes-admin Denied 128 | ``` 129 | 130 |

131 | 132 |
-------------------------------------------------------------------------------- /topics/binary_verification.md: -------------------------------------------------------------------------------- 1 | # Verify Platform Binaries 2 | 3 | - Kubernetes provides the binaries and their checksum hashes so we can verify their authenticity. 4 | - Check the Kubernetes [CHANGELOG](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG) 5 | 6 |
7 | 8 | ### Download https://dl.k8s.io/v1.22.0/kubernetes.tar.gz and verify that its SHA-512 checksum matches `d1145ec29a8581a4c94a83cefa3658a73bfc7d8e2624d31e735d53551718c9212e477673f74cfa4e430a8367a47bba65e2573162711613e60db54563dc912f00`. 9 | 10 |
11 | 12 |

13 | 14 | ```bash 15 | curl https://dl.k8s.io/v1.22.0/kubernetes.tar.gz -L -o kubernetes.tar.gz 16 | shasum -a 512 kubernetes.tar.gz 17 | # d1145ec29a8581a4c94a83cefa3658a73bfc7d8e2624d31e735d53551718c9212e477673f74cfa4e430a8367a47bba65e2573162711613e60db54563dc912f00 kubernetes.tar.gz 18 | ``` 19 | 20 |
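Instead of comparing the hash by eye, the check can be automated with `shasum -c` (or `sha512sum -c` on Linux); a sketch, assuming `kubernetes.tar.gz` was downloaded as above:

```shell
# checksum file format: "<hash>  <filename>" (two spaces)
echo "d1145ec29a8581a4c94a83cefa3658a73bfc7d8e2624d31e735d53551718c9212e477673f74cfa4e430a8367a47bba65e2573162711613e60db54563dc912f00  kubernetes.tar.gz" > kubernetes.tar.gz.sha512
shasum -a 512 -c kubernetes.tar.gz.sha512
# kubernetes.tar.gz: OK   (exits non-zero on a mismatch)
```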

21 | 22 |
-------------------------------------------------------------------------------- /topics/configmaps.md: -------------------------------------------------------------------------------- 1 | # [ConfigMaps](https://kubernetes.io/docs/concepts/configuration/configmap/) 2 | 3 | - A ConfigMap is an API object used to store non-confidential data in key-value pairs. 4 | - Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. 5 | - A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable. 6 | 7 |
8 | 9 | ### Check the configmaps on the cluster in the default namespace 10 | 11 |
12 | 13 |

14 | 15 | ```bash 16 | kubectl get configmaps 17 | # OR 18 | kubectl get cm 19 | ``` 20 | 21 |

22 | 23 |
24 | 25 | ### Check the configmaps on the cluster in all the namespaces 26 | 27 |
28 | 29 |

30 | 31 | ```bash 32 | kubectl get configmaps --all-namespaces 33 | # OR 34 | kubectl get configmaps -A 35 | ``` 36 |

37 | 38 |
39 | 40 | ### Create a new pod `nginx-1` with `nginx` image and add env variable for `DB_HOST=db.example.com`, `DB_USER=development`, `DB_PASSWD=password` 41 | 42 |
43 | 44 |

45 | 46 | ```bash 47 | kubectl run nginx-1 --image=nginx --env="DB_HOST=db.example.com" --env="DB_USER=development" --env="DB_PASSWD=password" 48 | ``` 49 | 50 | ```bash 51 | # verify env variables 52 | kubectl exec nginx-1 -- env | grep DB_ 53 | # DB_HOST=db.example.com 54 | # DB_USER=development 55 | # DB_PASSWD=password 56 | ``` 57 | 58 |

59 | 60 |
61 | 62 | ### Create a configmap named `db-config-1` with data `DB_HOST=db.example.com`, `DB_USER=development`, `DB_PASSWD=password` 63 | 64 |
65 | 66 |

67 | 68 | ```bash 69 | kubectl create configmap db-config-1 --from-literal=DB_HOST=db.example.com --from-literal=DB_USER=development --from-literal=DB_PASSWD=password 70 | ``` 71 | 72 | OR 73 | 74 | ```yaml 75 | cat << EOF > db-config-1.yaml 76 | apiVersion: v1 77 | kind: ConfigMap 78 | metadata: 79 | name: db-config-1 80 | data: 81 | DB_HOST: db.example.com 82 | DB_PASSWD: password 83 | DB_USER: development 84 | EOF 85 | 86 | kubectl apply -f db-config-1.yaml 87 | ``` 88 | 89 | ```bash 90 | # verify 91 | kubectl describe configmap db-config-1 92 | # Name: db-config-1 93 | # Namespace: default 94 | # Labels: 95 | # Annotations: 96 | 97 | # Data 98 | # ==== 99 | # DB_USER: 100 | # ---- 101 | # development 102 | # DB_HOST: 103 | # ---- 104 | # db.example.com 105 | # DB_PASSWD: 106 | # ---- 107 | # password 108 | ``` 109 | 110 |

111 | 112 |
113 | 114 | ### Create a configmap named `db-config-2` with data from file `db.properties` 115 | 116 |
117 | 118 | ```bash 119 | cat << EOT > db.properties 120 | DB_HOST=db.example.com 121 | DB_USER=development 122 | DB_PASSWD=password 123 | EOT 124 | ``` 125 | 126 |

127 | 128 | ```bash 129 | kubectl create configmap db-config-2 --from-file=db.properties 130 | ``` 131 | 132 | ```bash 133 | # verify 134 | kubectl describe configmap db-config-2 135 | # Name: db-config-2 136 | # Namespace: default 137 | # Labels: 138 | # Annotations: 139 | 140 | # Data 141 | # ==== 142 | # db.properties: 143 | # ---- 144 | # DB_HOST=db.example.com 145 | # DB_USER=development 146 | # DB_PASSWD=password 147 | ``` 148 | 149 |
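Note that `--from-file` stores the whole file under a single key (`db.properties`), as the output above shows. If you want each line to become its own key, as with `db-config-1`, `--from-env-file` does that instead (a sketch; the name `db-config-2-env` is just for illustration):

```shell
kubectl create configmap db-config-2-env --from-env-file=db.properties
kubectl get configmap db-config-2-env -o yaml
# data:
#   DB_HOST: db.example.com
#   DB_PASSWD: password
#   DB_USER: development
```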

150 | 151 |
152 | 153 | ### Create a new pod `nginx-2` with `nginx` image and add env variable for `DB_HOST` from configmap map `db-config-1` 154 | 155 |
156 | 157 |

158 | 159 | ```yaml 160 | cat << EOF > nginx-2.yaml 161 | apiVersion: v1 162 | kind: Pod 163 | metadata: 164 | name: nginx-2 165 | spec: 166 | containers: 167 | - image: nginx 168 | name: nginx-2 169 | env: 170 | - name: DB_HOST 171 | valueFrom: 172 | configMapKeyRef: 173 | name: db-config-1 174 | key: DB_HOST 175 | EOF 176 | 177 | kubectl apply -f nginx-2.yaml 178 | 179 | kubectl exec nginx-2 -- env | grep DB_HOST # verify env variables 180 | # DB_HOST=db.example.com 181 | ``` 182 | 183 |

184 | 185 |
186 | 187 | ### Create a new pod `nginx-3` with `nginx` image and add all env variables from from configmap map `db-config-1` 188 | 189 |
190 | 191 |

192 | 193 | ```yaml 194 | cat << EOF > nginx-3.yaml 195 | apiVersion: v1 196 | kind: Pod 197 | metadata: 198 | name: nginx-3 199 | spec: 200 | containers: 201 | - image: nginx 202 | name: nginx-3 203 | envFrom: 204 | - configMapRef: 205 | name: db-config-1 206 | EOF 207 | 208 | kubectl apply -f nginx-3.yaml 209 | 210 | kubectl exec nginx-3 -- env | grep DB_ # verify env variables 211 | # DB_HOST=db.example.com 212 | # DB_PASSWD=password 213 | # DB_USER=development 214 | ``` 215 | 216 |

217 | 218 |
219 | 220 | ### Create a new pod `nginx-4` with `nginx` image and mount the configmap `db-config-1` as a volume named `db-config` and mount path `/config` 221 | 222 |

223 | 224 | ```yaml 225 | cat << EOF > nginx-4.yaml 226 | apiVersion: v1 227 | kind: Pod 228 | metadata: 229 | name: nginx-4 230 | spec: 231 | containers: 232 | - image: nginx 233 | name: nginx-4 234 | volumeMounts: 235 | - name: db-config 236 | mountPath: "/config" 237 | readOnly: true 238 | volumes: 239 | - name: db-config 240 | configMap: 241 | name: db-config-1 242 | EOF 243 | 244 | kubectl apply -f nginx-4.yaml 245 | 246 | kubectl exec nginx-4 -- cat /config/DB_HOST # verify env variables 247 | # db.example.com 248 | ``` 249 | 250 |

251 | 252 |
253 | 254 | ### Clean up 255 | 256 | ```bash 257 | kubectl delete pod nginx-1 nginx-2 nginx-3 nginx-4 --force --grace-period=0 258 | kubectl delete configmap db-config-1 db-config-2 259 | rm db.properties nginx-2.yaml nginx-3.yaml nginx-4.yaml 260 | ``` 261 | -------------------------------------------------------------------------------- /topics/daemonsets.md: -------------------------------------------------------------------------------- 1 | # [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) 2 | 3 | A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created. 4 | 5 |
6 | 7 | ### Get the daemonset in all namespaces 8 | 9 |

10 | 11 | ```bash 12 | kubectl get daemonsets --all-namespaces 13 | # OR 14 | kubectl get ds -A 15 | ``` 16 | 17 |

18 | 19 |
20 | 21 | ### Ensure a single instance of pod `nginx` is running on each node of the Kubernetes cluster, where `nginx` is also the image to be used. Do not override any taints currently in place. 22 | 23 |

24 | 25 | ```bash 26 | kubectl create deploy nginx --image=nginx --dry-run=client -o yaml > nginx-ds.yaml 27 | ``` 28 | 29 | #### Edit the Deployment into a DaemonSet 30 | 31 | ```yaml 32 | cat << EOF > nginx-ds.yaml 33 | apiVersion: apps/v1 34 | kind: DaemonSet # Update from Deployment to DaemonSet 35 | metadata: 36 | labels: 37 | app: nginx 38 | name: nginx 39 | spec: 40 | # replicas: 1 - remove replicas 41 | selector: 42 | matchLabels: 43 | app: nginx 44 | # strategy: {} - remove strategy 45 | template: 46 | metadata: 47 | labels: 48 | app: nginx 49 | spec: 50 | containers: 51 | - image: nginx 52 | name: nginx 53 | resources: {} 54 | EOF 55 | 56 | kubectl apply -f nginx-ds.yaml 57 | 58 | kubectl get pods -o wide 59 | # NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES 60 | # nginx-5k7dk 1/1 Running 0 6m10s 10.244.1.3 node01 61 | 62 | kubectl get daemonset 63 | # NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 64 | # nginx 1 1 1 1 1 6m24s 65 | 66 | kubectl get ds 67 | # NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE 68 | # nginx 1 1 1 1 1 6m30s 69 | 70 | ``` 71 | 72 |
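Since the question asks not to override any taints, the DaemonSet should land only on untainted nodes; a quick cross-check (the node name `controlplane` is an assumption - substitute your own node names):

```shell
kubectl describe node controlplane | grep -i taints
# Taints: node-role.kubernetes.io/control-plane:NoSchedule
kubectl get pods -l app=nginx -o wide   # no pod should appear on the tainted node
```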

73 | 74 |
-------------------------------------------------------------------------------- /topics/debugging.md: -------------------------------------------------------------------------------- 1 | # [Kubernetes Debugging](https://kubernetes.io/docs/tasks/debug/) 2 | 3 |
4 | 5 | ### The given Deployment definition `nginx-deployment` does not work. Identify and fix the problems by updating the associated resources so that the Deployment works. 6 | 7 | ```yaml 8 | apiVersion: apps/v1 9 | kind: Deployment 10 | metadata: 11 | labels: 12 | kind: frontend 13 | name: nginx-deployment 14 | spec: 15 | replicas: 3 16 | selector: 17 | matchLabels: 18 | kind: frontend 19 | template: 20 | metadata: 21 | labels: 22 | app: nginx 23 | spec: 24 | containers: 25 | - image: nginx 26 | name: nginx 27 | ``` 28 | 29 |
30 | 31 |

32 | 33 | ```yaml 34 | apiVersion: apps/v1 35 | kind: Deployment 36 | metadata: 37 | labels: 38 | kind: frontend 39 | name: nginx-deployment 40 | spec: 41 | replicas: 3 42 | selector: 43 | matchLabels: 44 | app: nginx # Update the selector label to app: nginx 45 | template: 46 | metadata: 47 | labels: 48 | app: nginx 49 | spec: 50 | containers: 51 | - image: nginx 52 | name: nginx 53 | ``` 54 | 55 |
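This particular bug is caught by the API server itself, which makes it quick to spot: applying a Deployment whose selector does not match the template labels is rejected outright. A sketch; the exact message varies by Kubernetes version, and `nginx-deployment.yaml` is an assumed filename:

```shell
kubectl apply -f nginx-deployment.yaml
# The Deployment "nginx-deployment" is invalid:
# spec.template.metadata.labels: Invalid value: ... `selector` does not match template `labels`
```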

56 | 57 |
58 | 59 | ### The given Deployment definition `nginx-deployment` is exposed using the Service `frontend-svc`. Identify and fix the problems by updating the associated resources so that the Service works. 60 | 61 | ```yaml 62 | apiVersion: apps/v1 63 | kind: Deployment 64 | metadata: 65 | labels: 66 | kind: frontend 67 | name: nginx-deployment 68 | spec: 69 | replicas: 3 70 | selector: 71 | matchLabels: 72 | kind: frontend 73 | template: 74 | metadata: 75 | labels: 76 | app: nginx 77 | spec: 78 | containers: 79 | - image: nginx 80 | name: nginx 81 | --- 82 | apiVersion: v1 83 | kind: Service 84 | metadata: 85 | labels: 86 | kind: frontend 87 | name: frontend-svc 88 | spec: 89 | ports: 90 | - port: 8080 91 | protocol: TCP 92 | targetPort: 8080 93 | selector: 94 | kind: frontend 95 | status: 96 | loadBalancer: {} 97 | ``` 98 | 99 |
100 | 101 |

102 | 103 | ```yaml 104 | apiVersion: v1 105 | kind: Service 106 | metadata: 107 | labels: 108 | kind: frontend 109 | name: frontend-svc 110 | spec: 111 | ports: 112 | - port: 80 # updated from 8080 - nginx serves on 80 113 | protocol: TCP 114 | targetPort: 80 # updated from 8080 115 | selector: 116 | app: nginx # updated from kind: frontend to match the pod labels 117 | status: 118 | loadBalancer: {} 119 | ``` 120 | 121 |
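Before editing anything, an empty Endpoints list is the classic symptom of a Service selector that matches no pods (a quick sketch):

```shell
kubectl get endpoints frontend-svc
# NAME           ENDPOINTS   AGE
# frontend-svc   <none>      10s   <- selector matches no pods
kubectl get pods -l app=nginx      # these labels are what the selector should target
```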

122 | 123 |
124 | 125 | ### A Deployment named `web` is exposed via Ingress `web-ingress`. The Deployment is supposed to be reachable at http://dk8s.local/web-ingress, but requesting this URL is currently returning an error. Identify and fix the problems by updating the associated resources so that the Deployment becomes externally reachable as planned. 126 | 127 | ```bash 128 | kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 129 | kubectl expose deployment web --name web-svc --port 80 130 | ``` 131 | 132 | ```yaml 133 | apiVersion: networking.k8s.io/v1 134 | kind: Ingress 135 | metadata: 136 | name: web-ingress 137 | annotations: 138 | nginx.ingress.kubernetes.io/rewrite-target: /$1 139 | spec: 140 | rules: 141 | - http: 142 | paths: 143 | - backend: 144 | service: 145 | name: web 146 | port: 147 | number: 80 148 | path: / 149 | pathType: Prefix 150 | status: 151 | loadBalancer: {} 152 | ``` 153 | 154 |
155 | 156 |

157 | 158 | ```yaml 159 | apiVersion: v1 160 | kind: Service 161 | metadata: 162 | labels: 163 | app: web 164 | name: web-svc 165 | spec: 166 | ports: 167 | - port: 80 168 | protocol: TCP 169 | targetPort: 8080 # Update target port to 8080 as exposed by the deployment 170 | selector: 171 | app: web 172 | type: ClusterIP 173 | status: 174 | loadBalancer: {} 175 | ``` 176 | 177 | **NOTE**: The ingress will not work unless an ingress controller is deployed on the cluster. 178 | 179 | ```yaml 180 | apiVersion: networking.k8s.io/v1 181 | kind: Ingress 182 | metadata: 183 | name: web-ingress 184 | annotations: 185 | nginx.ingress.kubernetes.io/rewrite-target: /$1 186 | spec: 187 | rules: 188 | - host: dk8s.local # add the host entry required by the question 189 | http: 190 | paths: 191 | - backend: 192 | service: 193 | name: web-svc # updated from web - the Service is named web-svc 194 | port: 195 | number: 80 196 | path: /web-ingress # updated from / to match the required URL 197 | pathType: Prefix 198 | status: 199 | loadBalancer: {} 200 | ``` 201 | 202 |

203 | 204 |
205 | 206 | -------------------------------------------------------------------------------- /topics/docker.md: -------------------------------------------------------------------------------- 1 | # [Docker](https://docs.docker.com/get-started/overview/) 2 | 3 |
4 | 5 | ### Given the Dockerfile below, 6 | - Build a container image with the name `nginxer` and tag `3.0`. 7 | - Export the built container image in OCI format and store it at `nginxer-3.0.tar` 8 | - Run a container from image `nginxer:3.0` named `nginxer-go`, exposing port `80` 9 | 10 | ```bash 11 | FROM nginx:alpine 12 | CMD ["nginx", "-g", "daemon off;"] 13 | ``` 14 | 15 |
16 | 17 |

18 | 19 | ```bash 20 | docker build . -t nginxer:3.0 21 | docker save nginxer:3.0 -o nginxer-3.0.tar 22 | docker run --name nginxer-go -p 80:80 nginxer:3.0 23 | ``` 24 | 25 |
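To sanity-check the result (a sketch; output details vary by Docker version):

```shell
docker image ls nginxer:3.0        # confirm the image and tag exist
tar -tf nginxer-3.0.tar | head     # peek inside the exported archive
docker load -i nginxer-3.0.tar     # re-import the archive on another host
curl http://localhost              # the running container should answer on port 80
```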

26 | 27 |
28 | 29 | ### Given the Dockerfile below, which builds an alpine-based image that should expose only port 80, apply 2 best practices to improve the security of the image. 30 | 31 | ```bash 32 | FROM alpine:3.12 33 | RUN adduser -D myuser && chown -R myuser /myapp-data 34 | USER root 35 | ENTRYPOINT ["/myapp"] 36 | EXPOSE 80 8080 22 37 | ``` 38 | 39 |
40 | 41 |

42 | 43 | ```bash 44 | FROM alpine:3.12 45 | RUN adduser -D myuser && chown -R myuser /myapp-data 46 | USER myuser # Avoid unnecessary privileges - run as a custom user. 47 | ENTRYPOINT ["/myapp"] 48 | EXPOSE 80 # Expose only the necessary port 49 | ``` 50 | 51 |

52 | 53 |
54 | 55 | 56 | 57 | 58 | 59 | 60 | -------------------------------------------------------------------------------- /topics/etcd.md: -------------------------------------------------------------------------------- 1 | # [ETCD](https://etcd.io/) 2 | 3 |
4 | 5 | ### Check the version of ETCD 6 | 7 |
8 | 9 | ```bash 10 | kubectl get pod etcd-controlplane -n kube-system -o yaml | grep image 11 | # image: k8s.gcr.io/etcd:3.4.3-0 12 | ``` 13 | 14 | ## Backup and Restore 15 | Refer [Backing up ETCD Cluster](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster) & [Restoring ETCD Cluster](https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#restoring-an-etcd-cluster) 16 | 17 |
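Before taking a snapshot it can help to confirm that the `etcdctl` client speaks API v3 and roughly matches the server version (a quick check, assuming `etcdctl` is installed on the control-plane node):

```shell
ETCDCTL_API=3 etcdctl version
# etcdctl version: 3.4.x
# API version: 3.4
```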
18 | 19 | #### Create a snapshot of the etcd instance running at https://127.0.0.1:2379, saving the snapshot to the file path /opt/snapshot-pre-boot.db. Restore the snapshot. The following TLS certificates/key are supplied for connecting to the server with etcdctl: 20 | - CA certificate: /etc/kubernetes/pki/etcd/ca.crt 21 | - Client certificate: /etc/kubernetes/pki/etcd/server.crt 22 | - Client key: /etc/kubernetes/pki/etcd/server.key 23 | 24 |

25 | 26 | #### Install ETCD Client 27 | 28 | ```bash 29 | snap install etcd # version 3.4.5, or 30 | apt install etcd-client 31 | ``` 32 | 33 | #### Create deployment before backup for testing 34 | 35 | ```bash 36 | kubectl create deploy nginx --image=nginx --replicas=3 37 | ``` 38 | 39 | #### Backup ETCD 40 | 41 | ```bash 42 | ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \ 43 | --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \ 44 | snapshot save /opt/snapshot-pre-boot.db 45 | # Snapshot saved at /opt/snapshot-pre-boot.db 46 | ``` 47 | 48 | #### Delete the deployment 49 | 50 | ```bash 51 | kubectl delete deployment nginx 52 | ``` 53 | 54 | #### Restore ETCD Snapshot to a new folder 55 | 56 | ```bash 57 | ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt \ 58 | --name=master \ 59 | --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key \ 60 | --data-dir /var/lib/etcd-from-backup \ 61 | --initial-cluster=master=https://127.0.0.1:2380 \ 62 | --initial-cluster-token etcd-cluster-1 \ 63 | --initial-advertise-peer-urls=https://127.0.0.1:2380 \ 64 | snapshot restore /opt/snapshot-pre-boot.db 65 | # 2021-12-21 13:56:56.460862 I | mvcc: restore compact to 1288 66 | # 2021-12-21 13:56:56.716540 I | etcdserver/membership: added member e92d66acd89ecf29 [https://127.0.0.1:2380] to cluster 7581d6eb2d25405b 67 | ``` 68 | 69 | #### Modify /etc/kubernetes/manifests/etcd.yaml 70 | 71 | ```bash 72 | # Update --data-dir to use new target location 73 | --data-dir=/var/lib/etcd-from-backup 74 | 75 | # Update new initial-cluster-token to specify new cluster 76 | --initial-cluster-token=etcd-cluster-1 77 | 78 | # Update volumes and volume mounts to point to new path 79 | volumeMounts: 80 | - mountPath: /var/lib/etcd-from-backup 81 | name: etcd-data 82 | - mountPath: /etc/kubernetes/pki/etcd 83 | name: etcd-certs 84 | 
volumes: 85 | - hostPath: 86 | path: /var/lib/etcd-from-backup 87 | type: DirectoryOrCreate 88 | name: etcd-data 89 | ``` 90 | 91 | #### Verify the deployment exists after restoration 92 | 93 | ```bash 94 | kubectl get deployment nginx 95 | ``` 96 | 97 |
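The snapshot file itself can be inspected before (or after) the restore with `etcdctl snapshot status` (a sketch; the hash, revision, and size values shown are placeholders):

```shell
ETCDCTL_API=3 etcdctl snapshot status /opt/snapshot-pre-boot.db --write-out=table
# +----------+----------+------------+------------+
# |   HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
# +----------+----------+------------+------------+
# | xxxxxxxx |     1296 |       1342 |     2.2 MB |
# +----------+----------+------------+------------+
```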

98 | 99 |
-------------------------------------------------------------------------------- /topics/falco.md: -------------------------------------------------------------------------------- 1 | # [Falco](https://falco.org/) 2 | 3 |
4 | 5 | ### Installation 6 | 7 |
8 | 9 | ```bash 10 | curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add - 11 | echo "deb https://download.falco.org/packages/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list 12 | apt-get update -y 13 | apt-get -y install linux-headers-$(uname -r) 14 | apt-get install -y falco 15 | systemctl enable falco 16 | systemctl start falco 17 | ``` 18 | 19 | ### Check logs and events 20 | 21 |
22 | 23 | ```bash 24 | journalctl -fu falco 25 | ``` 26 | 27 | ### Clean up 28 | 29 | ```bash 30 | apt-get remove falco 31 | ``` -------------------------------------------------------------------------------- /topics/ingress.md: -------------------------------------------------------------------------------- 1 | # [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) 2 | 3 | - Ingress manages external access to the services in a cluster, typically HTTP. 4 | - Ingress may provide load balancing, SSL termination and name-based virtual hosting. 5 | 6 |
7 | 8 | ### Create the following 9 | - Deployment `web` with image `gcr.io/google-samples/hello-app:1.0` with 3 replicas. 10 | - Service `web` to expose the deployment as Node Port 11 | - Ingress `web-ingress` to point to the `web` service using host `hello-world.info`. 12 | 13 |
14 | 15 |

16 | 17 | ```bash 18 | kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0 19 | kubectl expose deployment web --type=NodePort --port=8080 20 | kubectl get service web 21 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 22 | # web NodePort 10.104.218.215 8080:30807/TCP 12s 23 | ``` 24 | 25 | #### Create Ingress with the below specs and apply using `kubectl apply -f web-ingress.yaml` 26 | 27 | ```bash 28 | kubectl create ingress web-ingress --rule="hello-world.info/=web:8080" 29 | ``` 30 | 31 | OR 32 | 33 | ```yaml 34 | cat << EOF > web-ingress.yaml 35 | apiVersion: networking.k8s.io/v1 36 | kind: Ingress 37 | metadata: 38 | name: web-ingress 39 | annotations: 40 | nginx.ingress.kubernetes.io/rewrite-target: /$1 41 | spec: 42 | rules: 43 | - host: hello-world.info 44 | http: 45 | paths: 46 | - path: / 47 | pathType: Prefix 48 | backend: 49 | service: 50 | name: web 51 | port: 52 | number: 8080 53 | EOF 54 | 55 | kubectl apply -f web-ingress.yaml 56 | ``` 57 | 58 | OR below for older versions 59 | 60 | ```yaml 61 | cat << EOF > web-ingress.yaml 62 | apiVersion: extensions/v1beta1 63 | kind: Ingress 64 | metadata: 65 | name: web-ingress 66 | annotations: 67 | nginx.ingress.kubernetes.io/rewrite-target: /$1 68 | spec: 69 | rules: 70 | - host: hello-world.info 71 | http: 72 | paths: 73 | - path: / 74 | pathType: Prefix 75 | backend: 76 | serviceName: web 77 | servicePort: 8080 78 | EOF 79 | 80 | kubectl apply -f web-ingress.yaml 81 | ``` 82 | 83 | ```bash 84 | # verification 85 | kubectl get nodes -o wide # get node ip 86 | kubectl get deploy web # check status 87 | kubectl get svc web # check node port ip 88 | curl http://10.0.26.3:32104 # use node ip:node port 89 | kubectl get ingress web-ingress # you will get an ip address of the ingress controller if installed 90 | # NAME CLASS HOSTS ADDRESS PORTS AGE 91 | # web-ingress hello-world.info 80 11s 92 | ``` 93 | 94 |

95 | 96 |
97 | 98 | ## Ingress Security 99 | 100 |
101 | 102 | ### Create a tls secret `testsecret-tls` using tls.crt from file `../data/tls.crt` and `../data/tls.key`. Enable tls for the ingress below. 103 | 104 |
105 | 106 | ```yaml 107 | apiVersion: networking.k8s.io/v1 108 | kind: Ingress 109 | metadata: 110 | name: tls-example-ingress 111 | spec: 112 | rules: 113 | - host: https-example.foo.com 114 | http: 115 | paths: 116 | - path: / 117 | pathType: Prefix 118 | backend: 119 | service: 120 | name: service1 121 | port: 122 | number: 80 123 | ``` 124 | 125 |

126 | 127 | ```bash 128 | kubectl create secret tls testsecret-tls --cert=../data/tls.crt --key=../data/tls.key 129 | ``` 130 | 131 | ```yaml 132 | 133 | cat << EOF > tls-example-ingress.yaml 134 | apiVersion: networking.k8s.io/v1 135 | kind: Ingress 136 | metadata: 137 | name: tls-example-ingress 138 | spec: 139 | tls: # add tls entry 140 | - hosts: 141 | - https-example.foo.com 142 | secretName: testsecret-tls 143 | rules: 144 | - host: https-example.foo.com 145 | http: 146 | paths: 147 | - path: / 148 | pathType: Prefix 149 | backend: 150 | service: 151 | name: service1 152 | port: 153 | number: 80 154 | EOF 155 | 156 | kubectl apply -f tls-example-ingress.yaml 157 | 158 | ``` 159 | 160 | ```bash 161 | # verification 162 | kubectl get secret testsecret-tls 163 | kubectl get ingress tls-example-ingress 164 | ``` 165 | 166 |
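If an ingress controller is running, the certificate actually being served can be checked end to end with `openssl` (a sketch; replace `<ingress-ip>` with the address from `kubectl get ingress`):

```shell
openssl s_client -connect <ingress-ip>:443 -servername https-example.foo.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```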

167 | 168 |
169 | 170 | ### Clean up 171 | 172 |
173 | 174 | ```bash 175 | kubectl delete secret testsecret-tls 176 | kubectl delete ingress web-ingress tls-example-ingress 177 | kubectl delete svc web 178 | kubectl delete deployment web 179 | ``` -------------------------------------------------------------------------------- /topics/init_containers.md: -------------------------------------------------------------------------------- 1 | # [Init Containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) 2 | 3 | ### Update the below specs for nginx pod with `/usr/share/nginx/html` directory mounted on volume `workdir`. 4 | - Add an init container named `install` with image `busybox`. 5 | - Mount the workdir to the init container. 6 | - `wget` the `http://info.cern.ch` and save as `index.html` to the `workdir` in the init container. 7 | 8 | ```yaml 9 | apiVersion: v1 10 | kind: Pod 11 | metadata: 12 | name: init-demo 13 | spec: 14 | containers: 15 | - name: nginx 16 | image: nginx 17 | ports: 18 | - containerPort: 80 19 | volumeMounts: 20 | - name: workdir 21 | mountPath: /usr/share/nginx/html 22 | dnsPolicy: Default 23 | volumes: 24 | - name: workdir 25 | emptyDir: {} 26 | ``` 27 | 28 |

29 | 30 | ```yaml 31 | cat << EOF > init-demo.yaml 32 | apiVersion: v1 33 | kind: Pod 34 | metadata: 35 | name: init-demo 36 | spec: 37 | containers: 38 | - name: nginx 39 | image: nginx 40 | ports: 41 | - containerPort: 80 42 | volumeMounts: 43 | - name: workdir 44 | mountPath: /usr/share/nginx/html 45 | # Add the init container 46 | initContainers: 47 | - name: install 48 | image: busybox 49 | command: 50 | - wget 51 | - "-O" 52 | - "/work-dir/index.html" 53 | - http://info.cern.ch 54 | volumeMounts: 55 | - name: workdir 56 | mountPath: "/work-dir" 57 | dnsPolicy: Default 58 | volumes: 59 | - name: workdir 60 | emptyDir: {} 61 | EOF 62 | 63 | kubectl apply -f init-demo.yaml 64 | 65 | kubectl exec init-demo -- curl http://localhost 66 | # % Total % Received % Xferd Average Speed Time Time Time Current 67 | # Dload Upload Total Spent Left Speed 68 | # 100 646 100 646 0 0 34000 0 --:--:-- --:--:-- --:--:-- 34000 69 | #

70 | # ...the HTML of the info.cern.ch homepage ("home of the first website", with its "From here you can:" link list) follows... 82 | ``` 83 | 84 |
85 | 86 |
87 | 88 | ### Add an init container `maker` with image `alpine` to maker-checker pod with the spec given below. 89 | - The init container should create an empty file named /workdir/calm.txt 90 | - If /workdir/calm.txt is not detected, the pod should exit 91 | - Once the spec file has been updated with the init container definition, the pod should be created. 92 | 93 |
94 | 95 | ```yaml 96 | cat << EOF > maker-checker.yaml 97 | apiVersion: v1 98 | kind: Pod 99 | metadata: 100 | creationTimestamp: null 101 | labels: 102 | run: maker-checker 103 | name: maker-checker 104 | spec: 105 | containers: 106 | - image: alpine 107 | name: checker 108 | command: ["/bin/sh", "-c", "if /workdir/calm.txt; then sleep 3600; else exit 1; fi;"] 109 | volumeMounts: 110 | - name: workdir 111 | mountPath: "/work-dir" 112 | dnsPolicy: Default 113 | volumes: 114 | - name: workdir 115 | emptyDir: {} 116 | restartPolicy: Always 117 | status: {} 118 | EOF 119 | ``` 120 | 121 |

122 | 123 | ```yaml 124 | cat << EOF > maker-checker.yaml 125 | apiVersion: v1 126 | kind: Pod 127 | metadata: 128 | creationTimestamp: null 129 | labels: 130 | run: maker-checker 131 | name: maker-checker 132 | spec: 133 | containers: 134 | - image: alpine 135 | name: checker 136 | command: ["/bin/sh", "-c", "if [ -f /workdir/calm.txt ]; then sleep 3600; else exit 1; fi;"] 137 | volumeMounts: 138 | - name: workdir 139 | mountPath: "/workdir" 140 | # Add the init container 141 | initContainers: 142 | - name: maker 143 | image: alpine 144 | command: ["/bin/sh", "-c", "touch /workdir/calm.txt"] 145 | volumeMounts: 146 | - name: workdir 147 | mountPath: "/workdir" 148 | dnsPolicy: Default 149 | volumes: 150 | - name: workdir 151 | emptyDir: {} 152 | restartPolicy: Always 153 | status: {} 154 | EOF 155 | 156 | kubectl apply -f maker-checker.yaml 157 | ``` 158 | 159 |

160 | 161 |
162 | 163 | ### Clean up 164 | 165 |
166 | 167 | ```bash 168 | rm init-demo.yaml maker-checker.yaml 169 | kubectl delete pod init-demo maker-checker --force --grace-period=0 170 | ``` -------------------------------------------------------------------------------- /topics/jobs.md: -------------------------------------------------------------------------------- 1 | # Jobs & Cron Jobs 2 | A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. 3 | A CronJob creates Jobs on a repeating schedule. 4 | 5 |
6 | 7 | - [Jobs](#jobs) 8 | - [Cron Jobs](#cron-jobs) 9 | 10 |
11 | 12 | ## Jobs 13 | 14 |
15 | 16 | ### Create job named `pi` with image `perl` that runs the command with arguments `"perl -Mbignum=bpi -wle 'print bpi(2000)'"` 17 | 18 |
19 | 20 |

21 | 22 | `kubectl create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'` 23 | 24 | OR 25 | 26 | ```bash 27 | cat << EOF > pi.yaml 28 | apiVersion: batch/v1 29 | kind: Job 30 | metadata: 31 | name: pi 32 | spec: 33 | template: 34 | spec: 35 | containers: 36 | - name: pi 37 | image: perl 38 | command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] 39 | restartPolicy: Never 40 | EOF 41 | 42 | kubectl apply -f pi.yaml 43 | ``` 44 | 45 |

46 | 47 |
48 | 49 | ### Wait till it's done, get the output 50 | 51 |
52 | 53 |

54 | 55 | ```bash 56 | kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big) 57 | # NAME COMPLETIONS DURATION AGE 58 | # pi 1/1 2m18s 2m47s 59 | kubectl get pod # get the pod name 60 | # NAME READY STATUS RESTARTS AGE 61 | # pi-vkj8b 0/1 Completed 0 2m50s 62 | kubectl logs pi-vkj8b # get the pi numbers 63 | # 3.141592653589793238462643383279502884........ 64 | kubectl delete job pi 65 | ``` 66 | OR 67 | 68 | ```bash 69 | kubectl get jobs -w # wait till 'SUCCESSFUL' is 1 (will take some time, perl image might be big) 70 | kubectl logs job/pi 71 | kubectl delete job pi 72 | ``` 73 | OR 74 | 75 | ```bash 76 | kubectl wait --for=condition=complete --timeout=300s job pi 77 | kubectl logs job/pi 78 | kubectl delete job pi 79 | ``` 80 | 81 |

82 | 83 |
84 | 85 | ### Create a job `busybox` with `busybox` image that would be automatically terminated by kubernetes if it takes more than 30 seconds to execute. 86 | 87 |
88 | 89 |

90 | 91 | ```bash 92 | kubectl create job busybox --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'while true; do echo hello; sleep 10;done' > busybox-job.yaml 93 | ``` 94 | 95 | #### Edit `busybox-job.yaml` to add `job.spec.activeDeadlineSeconds=30` and apply `kubectl apply -f busybox-job.yaml` 96 | 97 | ```yaml 98 | cat << EOF > busybox-job.yaml 99 | apiVersion: batch/v1 100 | kind: Job 101 | metadata: 102 | creationTimestamp: null 103 | name: busybox 104 | spec: 105 | activeDeadlineSeconds: 30 # add this line 106 | template: 107 | metadata: 108 | creationTimestamp: null 109 | spec: 110 | containers: 111 | - command: 112 | - /bin/sh 113 | - -c 114 | - while true; do echo hello; sleep 10;done 115 | image: busybox 116 | name: busybox 117 | resources: {} 118 | restartPolicy: Never 119 | status: {} 120 | EOF 121 | 122 | kubectl apply -f busybox-job.yaml 123 | ``` 124 | 125 | #### Describe the job with the statement `Warning DeadlineExceeded xxs job-controller Job was active longer than specified deadline` 126 | 127 |

128 | 129 |
130 | 131 | ### Delete the job 132 | 133 |
134 | 135 |

136 | 137 | ```bash 138 | kubectl delete job busybox 139 | ``` 140 | 141 |

142 | 143 |
144 | 145 | ### Create a job `busybox-completions-job` with `busybox` image that would be run 5 times 146 | 147 |
148 | 149 |

150 | 151 | ```bash 152 | kubectl create job busybox-completions-job --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 10;echo world' > busybox-completions-job.yaml 153 | ``` 154 | 155 | #### Edit `busybox-completions-job.yaml` to add `job.spec.completions=5` and apply `kubectl apply -f busybox-completions-job.yaml` 156 | 157 | ```yaml 158 | cat << EOF > busybox-completions-job.yaml 159 | apiVersion: batch/v1 160 | kind: Job 161 | metadata: 162 | creationTimestamp: null 163 | name: busybox-completions-job 164 | spec: 165 | completions: 5 # add this line 166 | template: 167 | metadata: 168 | creationTimestamp: null 169 | spec: 170 | containers: 171 | - command: 172 | - /bin/sh 173 | - -c 174 | - echo hello;sleep 10;echo world 175 | image: busybox 176 | name: busybox-completions-job 177 | resources: {} 178 | restartPolicy: Never 179 | status: {} 180 | EOF 181 | 182 | kubectl apply -f busybox-completions-job.yaml 183 | ``` 184 | 185 | #### Check the job pod status `kk get pods -l job-name=busybox-completions-job` or `kubectl get jobs -w` are in completed status after 2-3 minutes. 186 | 187 | ```bash 188 | kubectl get jobs -w 189 | # NAME COMPLETIONS DURATION AGE 190 | # busybox-completions-job 0/5 7s 7s 191 | # busybox-completions-job 1/5 15s 15s 192 | # busybox-completions-job 2/5 28s 28s 193 | # busybox-completions-job 3/5 42s 42s 194 | # busybox-completions-job 4/5 56s 56s 195 | # busybox-completions-job 5/5 70s 70s 196 | ``` 197 | 198 |

199 | 200 |
201 | 202 | ### Create a job `busybox-parallelism-job` with `busybox` image that would be run parallely 5 times. 203 | 204 |
205 | 206 |

207 | 208 | ```bash 209 | kubectl create job busybox-parallelism-job --image=busybox --dry-run=client -o yaml -- /bin/sh -c 'echo hello;sleep 10;echo world' > busybox-parallelism-job.yaml 210 | ``` 211 | 212 | #### Edit `busybox-parallelism-job.yaml` to add `job.spec.parallelism=5` and apply `kubectl apply -f busybox-parallelism-job.yaml` 213 | 214 | ```yaml 215 | cat << EOF > busybox-parallelism-job.yaml 216 | apiVersion: batch/v1 217 | kind: Job 218 | metadata: 219 | creationTimestamp: null 220 | name: busybox-parallelism-job 221 | spec: 222 | parallelism: 5 # add this line 223 | template: 224 | metadata: 225 | creationTimestamp: null 226 | spec: 227 | containers: 228 | - command: 229 | - /bin/sh 230 | - -c 231 | - echo hello;sleep 10;echo world 232 | image: busybox 233 | name: busybox-parallelism-job 234 | resources: {} 235 | restartPolicy: Never 236 | status: {} 237 | EOF 238 | 239 | kubectl apply -f busybox-parallelism-job.yaml 240 | ``` 241 | 242 | #### Check the job pod status `kk get pods -l job-name=busybox-parallelism-job` or `kubectl get jobs -w` are in completed status after 1 minute, as it would quicker as compared to before. 243 | 244 | ```bash 245 | kubectl get jobs -w 246 | # NAME COMPLETIONS DURATION AGE 247 | # busybox-parallelism-job 1/1 of 5 15s 15s 248 | # busybox-parallelism-job 2/1 of 5 16s 16s 249 | # busybox-parallelism-job 3/1 of 5 17s 17s 250 | # busybox-parallelism-job 4/1 of 5 18s 18s 251 | # busybox-parallelism-job 5/1 of 5 19s 19s 252 | ``` 253 | 254 |
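`completions` and `parallelism` can also be combined; the sketch below (the name `busybox-batch-job` is just for illustration) requires 10 successful runs while allowing at most 2 pods at a time:

```yaml
cat << EOF > busybox-batch-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: busybox-batch-job
spec:
  completions: 10  # total successful pod runs required
  parallelism: 2   # at most two pods run concurrently
  template:
    spec:
      containers:
      - command: ["/bin/sh", "-c", "echo hello; sleep 10; echo world"]
        image: busybox
        name: busybox-batch-job
      restartPolicy: Never
EOF

kubectl apply -f busybox-batch-job.yaml
kubectl get jobs -w   # completions climb in steps of at most 2
```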

255 | 256 |
257 | 258 | ## [Cron jobs](https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/) 259 | 260 |
261 | 262 | ### Create a cron job `busybox-cron-job` with image `busybox` that runs every minute on a schedule of `*/1 * * * *` and writes `date; echo Hello from the Kubernetes cluster` to standard output 263 | 264 |
show

265 | 266 | ```bash 267 | kubectl create cronjob busybox-cron-job --image=busybox --schedule="*/1 * * * *" -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' 268 | ``` 269 | 270 |
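For reference, the `--schedule` value uses the standard five cron fields: minute, hour, day of month, month, day of week. Some illustrative schedules:

```
*/1 * * * *   every minute (equivalent to * * * * *)
*/2 * * * *   every 2 minutes
0 3 * * 0     at 03:00 every Sunday
30 6 1 * *    at 06:30 on the 1st of every month
```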

271 | 272 |
273 | 274 | ### See its logs and delete it 275 | 276 |
277 | 278 |
show

279 | 280 | ```bash 281 | kubectl get cj 282 | kubectl get jobs --watch # bear in mind that Kubernetes creates a new Job (and Pod) for each scheduled run of the cron job 283 | # NAME COMPLETIONS DURATION AGE 284 | # busybox-cron-job-1639638720 0/1 0s 285 | # busybox-cron-job-1639638720 0/1 0s 0s 286 | # busybox-cron-job-1639638720 1/1 8s 9s 287 | # busybox-cron-job-1639638780 0/1 0s 288 | # busybox-cron-job-1639638780 0/1 1s 1s 289 | # busybox-cron-job-1639638780 1/1 9s 9s 290 | kubectl get pod --show-labels # observe that the pods have a label that mentions their 'parent' job 291 | kubectl logs busybox-cron-job-1639638720-m867r # use the actual pod name from the previous command 292 | kubectl delete cj busybox-cron-job 293 | ``` 294 | 295 |

296 | 297 |
298 | 299 | ### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it takes more than 17 seconds to start execution after its scheduled time (i.e. the job missed its scheduled time). 300 | 301 |
302 | 303 |
show

304 | 305 | ```bash 306 | kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml 307 | vi time-limited-job.yaml 308 | ``` 309 | 310 | #### Add `cronjob.spec.startingDeadlineSeconds=17` and apply 311 | 312 | ```bash 313 | cat << EOF > time-limited-job.yaml 314 | apiVersion: batch/v1 315 | kind: CronJob 316 | metadata: 317 | creationTimestamp: null 318 | name: time-limited-job 319 | spec: 320 | startingDeadlineSeconds: 17 # add this line 321 | jobTemplate: 322 | metadata: 323 | creationTimestamp: null 324 | name: time-limited-job 325 | spec: 326 | template: 327 | metadata: 328 | creationTimestamp: null 329 | spec: 330 | containers: 331 | - args: 332 | - /bin/sh 333 | - -c 334 | - date; echo Hello from the Kubernetes cluster 335 | image: busybox 336 | name: time-limited-job 337 | resources: {} 338 | restartPolicy: Never 339 | schedule: '* * * * *' 340 | status: {} 341 | EOF 342 | 343 | kubectl apply -f time-limited-job.yaml 344 | ``` 345 | 346 |
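The two deadline fields used in these CronJob exercises live at different levels of the spec and do different things. A sketch showing where each one belongs (the deadline values are the ones from the exercises; `backoffLimit` is shown at its API default):

```yaml
spec:                              # CronJob spec
  startingDeadlineSeconds: 17      # if a run cannot start within 17s of its schedule, skip it
  jobTemplate:
    spec:                          # Job spec
      activeDeadlineSeconds: 12    # if a run executes for longer than 12s, terminate it
      backoffLimit: 6              # pod retries before the Job is marked failed (default 6)
```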

347 | 348 |
349 | 350 | ### Create a cron job with image busybox that runs every minute and writes 'date; echo Hello from the Kubernetes cluster' to standard output. The cron job should be terminated if it successfully starts but takes more than 12 seconds to complete execution. 351 | 352 |
353 | 354 |
show

355 | 356 | ```bash 357 | kubectl create cronjob time-limited-job --image=busybox --restart=Never --dry-run=client --schedule="* * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > time-limited-job.yaml 358 | vi time-limited-job.yaml 359 | ``` 360 | 361 | #### Add cronjob.spec.jobTemplate.spec.activeDeadlineSeconds=12 362 | 363 | ```bash 364 | cat << EOF > time-limited-job.yaml 365 | apiVersion: batch/v1 366 | kind: CronJob 367 | metadata: 368 | creationTimestamp: null 369 | name: time-limited-job 370 | spec: 371 | jobTemplate: 372 | metadata: 373 | creationTimestamp: null 374 | name: time-limited-job 375 | spec: 376 | activeDeadlineSeconds: 12 # add this line 377 | template: 378 | metadata: 379 | creationTimestamp: null 380 | spec: 381 | containers: 382 | - args: 383 | - /bin/sh 384 | - -c 385 | - date; echo Hello from the Kubernetes cluster 386 | image: busybox 387 | name: time-limited-job 388 | resources: {} 389 | restartPolicy: Never 390 | schedule: '* * * * *' 391 | status: {} 392 | EOF 393 | 394 | kubectl apply -f time-limited-job.yaml 395 | ``` 396 | 397 |

398 | 399 |
400 | 401 | ### Create a CronJob named `hello` that executes a Pod running the following single container. 402 | - name: hello 403 | - image: busybox:1.28 404 | - command: ["/bin/sh", "-c", "date; echo Hello from the Kubernetes cluster"] 405 | Configure the CronJob to 406 | - Execute once every 2 minutes 407 | - Keep 3 completed Jobs 408 | - Keep 3 failed Jobs 409 | - Never restart Pods 410 | - Terminate Pods after 10 seconds 411 | Manually create and execute one job named `hello-test` from the `hello` CronJob for testing purposes. 412 | 413 |
414 | 415 |
show

416 | 417 | ```bash 418 | kubectl create cronjob hello --image=busybox:1.28 --restart=Never --dry-run=client --schedule="*/2 * * * *" -o yaml -- /bin/sh -c 'date; echo Hello from the Kubernetes cluster' > hello-cronjob.yaml 419 | vi hello-cronjob.yaml 420 | ``` 421 | 422 | #### Add the following specs. 423 | 424 | ```yaml 425 | cat << EOF > hello-cronjob.yaml 426 | apiVersion: batch/v1 427 | kind: CronJob 428 | metadata: 429 | creationTimestamp: null 430 | name: hello 431 | spec: 432 | jobTemplate: 433 | metadata: 434 | creationTimestamp: null 435 | name: hello 436 | spec: 437 | activeDeadlineSeconds: 10 # Terminate Pods after 10 seconds 438 | template: 439 | metadata: 440 | creationTimestamp: null 441 | spec: 442 | containers: 443 | - command: 444 | - /bin/sh 445 | - -c 446 | - date; echo Hello from the Kubernetes cluster 447 | image: busybox:1.28 448 | name: hello 449 | resources: {} 450 | restartPolicy: Never # Never restart Pods 451 | schedule: '*/2 * * * *' # Execute once every 2 minutes 452 | successfulJobsHistoryLimit: 3 # Keep 3 completed Jobs 453 | failedJobsHistoryLimit: 3 # Keep 3 failed Jobs 454 | status: {} 455 | EOF 456 | 457 | kubectl apply -f hello-cronjob.yaml 458 | 459 | # Trigger the job manually 460 | kubectl create job --from=cronjob/hello hello-test 461 | ``` 462 | 463 |

464 | 465 |
466 | 467 | ## Clean up 468 | 469 |
470 | 471 | ```bash 472 | rm hello-cronjob.yaml time-limited-job.yaml busybox-parallelism-job.yaml busybox-completions-job.yaml 473 | kubectl delete job pi busybox-parallelism-job busybox-completions-job hello-test 474 | kubectl delete cronjob time-limited-job hello 475 | ``` -------------------------------------------------------------------------------- /topics/jsonpath.md: -------------------------------------------------------------------------------- 1 | # [Kubectl jsonpath](https://kubernetes.io/docs/reference/kubectl/jsonpath/) 2 | 3 |
4 | 5 | ### Get node details as custom fields with NODE_NAME for nodename, CPU_COUNT for cpu. 6 | 7 |
show

8 | 9 | ```bash 10 | $ kubectl get nodes -o=custom-columns=NODE_NAME:.metadata.name,CPU_COUNT:.status.capacity.cpu 11 | # NODE_NAME CPU_COUNT 12 | # controlplane 2 13 | # node01 2 14 | ``` 15 | 16 |

17 | 18 |
19 | 20 | ### Set up a few pods 21 | 22 | ```bash 23 | kubectl run nginx-dev --image nginx:1.21.4-alpine 24 | kubectl run nginx-qa --image nginx:1.21 25 | kubectl run nginx-prod --image nginx:1.21 26 | ``` 27 | 28 | ### List all Container images in all namespaces 29 | 30 |
show

31 | 32 | ```bash 33 | kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}' | tr " " "\n" 34 | # nginx:1.21.4-alpine 35 | # nginx:1.21 36 | # nginx:1.21 37 | # k8s.gcr.io/coredns:1.6.7 38 | # k8s.gcr.io/coredns:1.6.7 39 | # k8s.gcr.io/etcd:3.4.3-0 40 | # katacoda/katacoda-cloud-provider:0.0.1 41 | # k8s.gcr.io/kube-apiserver:v1.18.0 42 | # k8s.gcr.io/kube-controller-manager:v1.18.0 43 | # quay.io/coreos/flannel:v0.12.0-amd64 44 | # quay.io/coreos/flannel:v0.12.0-amd64 45 | # gcr.io/google_containers/kube-keepalived-vip:0.9 46 | # k8s.gcr.io/kube-proxy:v1.18.0 47 | # k8s.gcr.io/kube-proxy:v1.18.0 48 | # k8s.gcr.io/kube-scheduler:v1.18.0 49 | ``` 50 | 51 |
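The jsonpath expression prints every image space-separated on a single line; the trailing `tr " " "\n"` is what splits that into one image per line. The effect of the pipe can be checked without a cluster (the image list below is just sample data):

```shell
# Sample space-separated output, as the jsonpath query would print it
images='nginx:1.21.4-alpine nginx:1.21 k8s.gcr.io/kube-proxy:v1.18.0'

# Split on spaces into one image per line, exactly what the pipe above does
printf '%s' "$images" | tr ' ' '\n'
```

The same trick applies to any space-separated kubectl output.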

52 | 53 |
54 | 55 | ### List all the pods sorted by name 56 | 57 |
show

58 | 59 | ```bash 60 | kubectl get pods --sort-by=.metadata.name 61 | # NAME READY STATUS RESTARTS AGE 62 | # nginx-dev 1/1 Running 0 91s 63 | # nginx-prod 1/1 Running 0 91s 64 | # nginx-qa 1/1 Running 0 91s 65 | ``` 66 | 67 |

68 | 69 |
70 | 71 | ### Check the Image version of nginx-dev pod using jsonpath 72 | 73 |
show

74 | 75 | ```bash 76 | kubectl get pod nginx-dev -o jsonpath='{.spec.containers[0].image}' 77 | # nginx:1.21.4-alpine 78 | ``` 79 | 80 |

81 | 82 |
83 | 84 | ### List the nginx pod with custom columns POD_NAME and POD_STATUS 85 | 86 |
show

87 | 88 | ```bash 89 | kubectl get po -o=custom-columns="POD_NAME:.metadata.name,POD_STATUS:.status.containerStatuses[].state" # custom-columns specs must not contain spaces after the comma 90 | ``` 91 | 92 |

93 | 94 |
-------------------------------------------------------------------------------- /topics/kube-bench.md: -------------------------------------------------------------------------------- 1 | # [Kube-bench](https://github.com/aquasecurity/kube-bench) 2 | 3 | Aqua Security Kube-bench is a tool that checks whether Kubernetes is deployed securely by running the checks documented in the [CIS Kubernetes Benchmark](https://www.cisecurity.org/benchmark/kubernetes/). 4 | 5 |
6 | 7 | ### Installation 8 | 9 | ```bash 10 | curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.5/kube-bench_0.6.5_linux_amd64.tar.gz -o kube-bench_0.6.5_linux_amd64.tar.gz 11 | tar -xvf kube-bench_0.6.5_linux_amd64.tar.gz 12 | ``` 13 | 14 |
15 | 16 | ### Execute Kubebench on the cluster 17 | 18 |
19 | 20 | ```bash 21 | ./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml 22 | # ..... 23 | # == Summary total == 24 | # 71 checks PASS 25 | # 11 checks FAIL 26 | # 40 checks WARN 27 | # 0 checks INFO 28 | ``` 29 | 30 |
31 | 32 | ### Check the failed tests on the cluster 33 | 34 |
35 | 36 | ```bash 37 | ./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml | grep "\[FAIL\] " 38 | # [FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) 39 | # [FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) 40 | # [FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated) 41 | # [FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated) 42 | # [FAIL] 1.2.22 Ensure that the --audit-log-path argument is set (Automated) 43 | # [FAIL] 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) 44 | # [FAIL] 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) 45 | # [FAIL] 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) 46 | # [FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated) 47 | # [FAIL] 1.4.1 Ensure that the --profiling argument is set to false (Automated) 48 | # [FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) 49 | ``` 50 | 51 |
52 | 53 | ### Fix the failing test `Fix this failed test 1.4.1: Ensure that the --profiling argument is set to false` 54 | 55 |
56 | 57 | #### Check the remediation for 1.4.1 which is as below. Edit `/etc/kubernetes/manifests/kube-scheduler.yaml` to add `--profiling=false` 58 | 59 | ``` 60 | 1.4.1 Edit the Scheduler pod specification file /etc/kubernetes/manifests/kube-scheduler.yaml file 61 | on the master node and set the below parameter. 62 | --profiling=false 63 | ``` 64 | 65 | #### Rerun the kubebench and verify 1.4.1 is remediated. 66 | 67 | ```bash 68 | ./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml | grep "\[FAIL\] " 69 | # [FAIL] 1.1.12 Ensure that the etcd data directory ownership is set to etcd:etcd (Automated) 70 | # [FAIL] 1.2.6 Ensure that the --kubelet-certificate-authority argument is set as appropriate (Automated) 71 | # [FAIL] 1.2.16 Ensure that the admission control plugin PodSecurityPolicy is set (Automated) 72 | # [FAIL] 1.2.21 Ensure that the --profiling argument is set to false (Automated) 73 | # [FAIL] 1.2.22 Ensure that the --audit-log-path argument is set (Automated) 74 | # [FAIL] 1.2.23 Ensure that the --audit-log-maxage argument is set to 30 or as appropriate (Automated) 75 | # [FAIL] 1.2.24 Ensure that the --audit-log-maxbackup argument is set to 10 or as appropriate (Automated) 76 | # [FAIL] 1.2.25 Ensure that the --audit-log-maxsize argument is set to 100 or as appropriate (Automated) 77 | # [FAIL] 1.3.2 Ensure that the --profiling argument is set to false (Automated) 78 | # [FAIL] 4.2.6 Ensure that the --protect-kernel-defaults argument is set to true (Automated) 79 | ``` 80 | 81 | -------------------------------------------------------------------------------- /topics/kubeconfig.md: -------------------------------------------------------------------------------- 1 | # Kubeconfig 2 | 3 |
4 | 5 | NOTE : use the [kubeconfig.yaml](../data/kubeconfig.yaml) for the exercise 6 | 7 | ### View the config file 8 | 9 |
10 | 11 |
show

12 | 13 | ```bash 14 | kubectl config view --kubeconfig kubeconfig.yaml 15 | ``` 16 | 17 |

18 | 19 |
20 | 21 | ### Get the clusters from the kubeconfig file 22 | 23 |
24 | 25 |
show

26 | 27 | ```bash 28 | kubectl config get-clusters --kubeconfig kubeconfig.yaml 29 | # NAME 30 | # development 31 | # qa 32 | # production 33 | # kubernetes 34 | # labs 35 | ``` 36 | 37 |

38 | 39 |
40 | 41 | ### Get the users from the kubeconfig file 42 | 43 |
44 | 45 |
show

46 | 47 | ```bash 48 | kubectl config get-users --kubeconfig kubeconfig.yaml # will not work for older versions 49 | # NAME 50 | # dev-user 51 | # kubernetes-admin 52 | # labs-user 53 | # prod-user 54 | # qa-user 55 | ``` 56 | 57 |

58 | 59 |
60 | 61 | ### Get the contexts from the kubeconfig file 62 | 63 |
64 | 65 |
show

66 | 67 | ```bash 68 | kubectl config get-contexts --kubeconfig kubeconfig.yaml 69 | # CURRENT NAME CLUSTER AUTHINFO NAMESPACE 70 | # development-user@labs development development-user 71 | # * kubernetes-admin@kubernetes kubernetes kubernetes-admin 72 | # labs-user@labs labs labs-user 73 | # prod-user@prod prod prod-user 74 | # qa-user@qa qa qa-user 75 | ``` 76 | 77 |
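The clusters, users and contexts listed above are the three top-level lists of a kubeconfig file. A minimal sketch of how they fit together (the server address and file names here are illustrative; the field names follow the standard kubeconfig schema):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod
  cluster:
    server: https://172.30.0.10:6443
    certificate-authority: ca.crt
users:
- name: prod-user
  user:
    client-certificate: prod-user.crt
    client-key: prod-user.key
contexts:
- name: prod-user@prod        # a context pairs a user with a cluster
  context:
    cluster: prod
    user: prod-user
current-context: prod-user@prod
```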

78 | 79 |
80 | 81 | ### Get the current context 82 | 83 |
84 | 85 |
show

86 | 87 | ```bash 88 | kubectl config current-context --kubeconfig kubeconfig.yaml 89 | # kubernetes-admin@kubernetes 90 | ``` 91 | 92 |

93 | 94 |
95 | 96 | ### Switch to context `prod-user@prod` as the current context 97 | 98 |
99 | 100 |
show

101 | 102 | ```bash 103 | kubectl config use-context prod-user@prod --kubeconfig kubeconfig.yaml 104 | # Switched to context "prod-user@prod". 105 | kubectl config current-context --kubeconfig kubeconfig.yaml 106 | # prod-user@prod 107 | ``` 108 | 109 |

-------------------------------------------------------------------------------- /topics/kubelet_security.md: -------------------------------------------------------------------------------- 1 | # [Kubelet Security](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/) 2 | 3 |
4 | 5 | ### Check the Kubelet Security 6 | 7 |
8 | 9 | #### Check Kubelet configuration 10 | 11 | ```bash 12 | ps -ef | grep kubelet # check the --config parameter 13 | # root 2600 1 3 05:21 ? 00:00:02 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf 14 | ``` 15 | 16 | #### Viewing the kubelet configuration file `/var/lib/kubelet/config.yaml` 17 | 18 | ```yaml 19 | apiVersion: kubelet.config.k8s.io/v1beta1 20 | authentication: 21 | anonymous: 22 | enabled: false # anonymous auth should be disabled - It should not be true 23 | webhook: # Authn mechanism set to webhook as certificate based auth instead of AlwaysAllow 24 | cacheTTL: 0s 25 | enabled: true 26 | x509: 27 | clientCAFile: /etc/kubernetes/pki/ca.crt 28 | authorization: 29 | mode: Webhook # Authz mechanism set to webhook, instead of AlwaysAllow 30 | webhook: 31 | cacheAuthorizedTTL: 0s 32 | cacheUnauthorizedTTL: 0s 33 | clusterDNS: 34 | - 10.96.0.10 35 | clusterDomain: cluster.local 36 | cpuManagerReconcilePeriod: 0s 37 | evictionPressureTransitionPeriod: 0s 38 | # additional lines omitted for brevity 39 | ``` 40 | 41 | #### Check the key and certificate in the `kube-apiserver.yaml` file 42 | 43 | ```bash 44 | cat kube-apiserver.yaml | grep kubelet-client 45 | # - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt 46 | # - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key 47 | ``` 48 | 49 | #### Verify the authentication using the above cert and key 50 | 51 | ```bash 52 | curl -sk https://localhost:10250/pods/ 53 | # Unauthorized 54 | 55 | curl -sk https://localhost:10250/pods/ --key /etc/kubernetes/pki/apiserver-kubelet-client.key --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt 56 | # 
{"kind":"PodList","apiVersion":"v1","metadata":{},"items":[{"metadata":{"name":"etcd-controlplane","namespace": ... 57 | ``` -------------------------------------------------------------------------------- /topics/kubesec.md: -------------------------------------------------------------------------------- 1 | # [Kubesec](https://kubesec.io/) 2 | 3 | Security risk analysis for Kubernetes resources 4 | 5 |
6 | 7 | ### Installation 8 | 9 |
10 | 11 | ```bash 12 | wget https://github.com/controlplaneio/kubesec/releases/download/v2.11.0/kubesec_linux_amd64.tar.gz 13 | tar -xvf kubesec_linux_amd64.tar.gz 14 | mv kubesec /usr/bin/ 15 | ``` 16 | 17 |
18 | 19 | ### Scan the following specs, identify the issues, fix and rescan 20 | 21 |
22 | 23 | ```yaml 24 | cat << EOF > unsecured.yaml 25 | apiVersion: v1 26 | kind: Pod 27 | metadata: 28 | creationTimestamp: null 29 | labels: 30 | run: nginx 31 | name: nginx 32 | spec: 33 | containers: 34 | - image: nginx 35 | name: nginx 36 | resources: {} 37 | securityContext: 38 | privileged: true # security issue 39 | readOnlyRootFilesystem: false # security issue 40 | dnsPolicy: ClusterFirst 41 | restartPolicy: Never 42 | EOF 43 | ``` 44 | 45 |
show

46 | 47 | ```bash 48 | kubesec scan unsecured.yaml 49 | 50 | # [ 51 | # { 52 | # "object": "Pod/nginx.default", 53 | # "valid": true, 54 | # "fileName": "unsecured.yaml", 55 | # "message": "Failed with a score of -30 points", 56 | # "score": -30, 57 | # "scoring": { 58 | # "critical": [ 59 | # { 60 | # "id": "Privileged", 61 | # "selector": "containers[] .securityContext .privileged == true", 62 | # "reason": "Privileged containers can allow almost completely unrestricted host access", 63 | # "points": -30 64 | # } 65 | # ], 66 | # "advise": [ 67 | # { 68 | # "id": "ApparmorAny", 69 | # "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"", 70 | # "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY", 71 | # "points": 3 72 | # }, 73 | # { 74 | # "id": "ServiceAccountName", 75 | # "selector": ".spec .serviceAccountName", 76 | # "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege", 77 | # "points": 3 78 | # }, 79 | # { 80 | # "id": "SeccompAny", 81 | # "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"", 82 | # "reason": "Seccomp profiles set minimum privilege and secure against unknown threats", 83 | # "points": 1 84 | # }, 85 | # { 86 | # "id": "LimitsCPU", 87 | # "selector": "containers[] .resources .limits .cpu", 88 | # "reason": "Enforcing CPU limits prevents DOS via resource exhaustion", 89 | # "points": 1 90 | # }, 91 | # { 92 | # "id": "RequestsMemory", 93 | # "selector": "containers[] .resources .limits .memory", 94 | # "reason": "Enforcing memory limits prevents DOS via resource exhaustion", 95 | # "points": 1 96 | # }, 97 | # { 98 | # "id": "RequestsCPU", 99 | # "selector": "containers[] .resources .requests .cpu", 100 | # "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster", 101 | # "points": 1 102 | # }, 103 | # { 
104 | # "id": "RequestsMemory", 105 | # "selector": "containers[] .resources .requests .memory", 106 | # "reason": "Enforcing memory requests aids a fair balancing of resources across the cluster", 107 | # "points": 1 108 | # }, 109 | # { 110 | # "id": "CapDropAny", 111 | # "selector": "containers[] .securityContext .capabilities .drop", 112 | # "reason": "Reducing kernel capabilities available to a container limits its attack surface", 113 | # "points": 1 114 | # }, 115 | # { 116 | # "id": "CapDropAll", 117 | # "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")", 118 | # "reason": "Drop all capabilities and add only those required to reduce syscall attack surface", 119 | # "points": 1 120 | # }, 121 | # { 122 | # "id": "ReadOnlyRootFilesystem", 123 | # "selector": "containers[] .securityContext .readOnlyRootFilesystem == true", 124 | # "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost", 125 | # "points": 1 126 | # }, 127 | # { 128 | # "id": "RunAsNonRoot", 129 | # "selector": "containers[] .securityContext .runAsNonRoot == true", 130 | # "reason": "Force the running image to run as a non-root user to ensure least privilege", 131 | # "points": 1 132 | # }, 133 | # { 134 | # "id": "RunAsUser", 135 | # "selector": "containers[] .securityContext .runAsUser -gt 10000", 136 | # "reason": "Run as a high-UID user to avoid conflicts with the host's user table", 137 | # "points": 1 138 | # } 139 | # ] 140 | # } 141 | # } 142 | # ] 143 | 144 | ``` 145 | 146 | #### Edit the specs to remove the below 147 | 148 | ```yaml 149 | securityContext: 150 | privileged: true # security issue 151 | readOnlyRootFilesystem: false # security issue 152 | ``` 153 | 154 | ```yaml 155 | cat << EOF > unsecured.yaml 156 | apiVersion: v1 157 | kind: Pod 158 | metadata: 159 | creationTimestamp: null 160 | labels: 161 | run: nginx 162 | name: nginx 163 | spec: 164 | containers: 165 | - image: nginx 166 | 
name: nginx 167 | resources: {} 168 | dnsPolicy: ClusterFirst 169 | restartPolicy: Never 170 | EOF 171 | ``` 172 | 173 | ```bash 174 | kubesec scan unsecured.yaml 175 | 176 | # [ 177 | # { 178 | # "object": "Pod/nginx.default", 179 | # "valid": true, 180 | # "fileName": "unsecured.yaml", 181 | # "message": "Passed with a score of 0 points", 182 | # "score": 0, 183 | # "scoring": { 184 | # "advise": [ 185 | # { 186 | # "id": "ApparmorAny", 187 | # "selector": ".metadata .annotations .\"container.apparmor.security.beta.kubernetes.io/nginx\"", 188 | # "reason": "Well defined AppArmor policies may provide greater protection from unknown threats. WARNING: NOT PRODUCTION READY", 189 | # "points": 3 190 | # }, 191 | # { 192 | # "id": "ServiceAccountName", 193 | # "selector": ".spec .serviceAccountName", 194 | # "reason": "Service accounts restrict Kubernetes API access and should be configured with least privilege", 195 | # "points": 3 196 | # }, 197 | # { 198 | # "id": "SeccompAny", 199 | # "selector": ".metadata .annotations .\"container.seccomp.security.alpha.kubernetes.io/pod\"", 200 | # "reason": "Seccomp profiles set minimum privilege and secure against unknown threats", 201 | # "points": 1 202 | # }, 203 | # { 204 | # "id": "LimitsCPU", 205 | # "selector": "containers[] .resources .limits .cpu", 206 | # "reason": "Enforcing CPU limits prevents DOS via resource exhaustion", 207 | # "points": 1 208 | # }, 209 | # { 210 | # "id": "RequestsMemory", 211 | # "selector": "containers[] .resources .limits .memory", 212 | # "reason": "Enforcing memory limits prevents DOS via resource exhaustion", 213 | # "points": 1 214 | # }, 215 | # { 216 | # "id": "RequestsCPU", 217 | # "selector": "containers[] .resources .requests .cpu", 218 | # "reason": "Enforcing CPU requests aids a fair balancing of resources across the cluster", 219 | # "points": 1 220 | # }, 221 | # { 222 | # "id": "RequestsMemory", 223 | # "selector": "containers[] .resources .requests .memory", 224 | # "reason": 
"Enforcing memory requests aids a fair balancing of resources across the cluster", 225 | # "points": 1 226 | # }, 227 | # { 228 | # "id": "CapDropAny", 229 | # "selector": "containers[] .securityContext .capabilities .drop", 230 | # "reason": "Reducing kernel capabilities available to a container limits its attack surface", 231 | # "points": 1 232 | # }, 233 | # { 234 | # "id": "CapDropAll", 235 | # "selector": "containers[] .securityContext .capabilities .drop | index(\"ALL\")", 236 | # "reason": "Drop all capabilities and add only those required to reduce syscall attack surface", 237 | # "points": 1 238 | # }, 239 | # { 240 | # "id": "ReadOnlyRootFilesystem", 241 | # "selector": "containers[] .securityContext .readOnlyRootFilesystem == true", 242 | # "reason": "An immutable root filesystem can prevent malicious binaries being added to PATH and increase attack cost", 243 | # "points": 1 244 | # }, 245 | # { 246 | # "id": "RunAsNonRoot", 247 | # "selector": "containers[] .securityContext .runAsNonRoot == true", 248 | # "reason": "Force the running image to run as a non-root user to ensure least privilege", 249 | # "points": 1 250 | # }, 251 | # { 252 | # "id": "RunAsUser", 253 | # "selector": "containers[] .securityContext .runAsUser -gt 10000", 254 | # "reason": "Run as a high-UID user to avoid conflicts with the host's user table", 255 | # "points": 1 256 | # } 257 | # ] 258 | # } 259 | # } 260 | # ] 261 | ``` 262 | 263 |

-------------------------------------------------------------------------------- /topics/labels.md: -------------------------------------------------------------------------------- 1 | # [Labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels) 2 | 3 |
4 | 5 | ## Nodes 6 | 7 |
8 | 9 | ### Show all labels for the node `node01` 10 | 11 |
12 | 13 |
show

14 | 15 | ```bash 16 | kubectl get nodes node01 --show-labels 17 | # NAME STATUS ROLES AGE VERSION LABELS 18 | # node01 Ready 61m v1.18.0 accelerator=nvidia-tesla-p100,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux 19 | ``` 20 | 21 |

22 | 23 |
24 | 25 | ### Label worker node `node01` with label `type=critical` 26 | 27 |
28 | 29 |
show

30 | 31 | ```bash 32 | kubectl label node node01 type=critical 33 | # node/node01 labeled 34 | ``` 35 | 36 |

37 | 38 |
39 | 40 | ### Remove label `type=critical` from worker node `node01` 41 | 42 |
43 | 44 |
show

45 | 46 | ```bash 47 | kubectl label node node01 type- 48 | # node/node01 labeled 49 | ``` 50 | 51 |

52 | 53 |
54 | 55 | ## Namespaces 56 | 57 |
58 | 59 | ### Create namespace `alpha` and label it with `type=critical` 60 | 61 |
62 | 63 |
show

64 | 65 | ```bash 66 | kubectl create namespace alpha 67 | kubectl label namespace alpha type=critical 68 | 69 | kubectl get namespace alpha --show-labels 70 | # NAME STATUS AGE LABELS 71 | # alpha Active 70s type=critical 72 | ``` 73 | 74 |

75 | 76 |
77 | 78 | ## Pods 79 | 80 |
81 | 82 | ### Create a new pod with name `nginx-labels` and using the nginx image and labels `tier=frontend` 83 | 84 |
85 | 86 |
show

87 | 88 | ```bash 89 | kubectl run nginx-labels --image=nginx --labels=tier=frontend 90 | ``` 91 | 92 | ```bash 93 | # verification 94 | kubectl get pod nginx-labels --show-labels 95 | # NAME READY STATUS RESTARTS AGE LABELS 96 | # nginx-labels 1/1 Running 0 16s tier=frontend 97 | ``` 98 | 99 |

100 | 101 |
102 | 103 | ### Create pod `nginx-labels` with `nginx` image and label `name=nginx`, `tier=frontend`, `env=dev` 104 | 105 |
106 | 107 |
show

108 | 109 | ```bash 110 | kubectl run nginx-labels --image=nginx --labels=name=nginx,tier=frontend,env=dev,version=v1 111 | ``` 112 | 113 | OR 114 | 115 | ```yaml 116 | cat << EOF > nginx-labels.yaml 117 | apiVersion: v1 118 | kind: Pod 119 | metadata: 120 | labels: 121 | env: dev 122 | name: nginx 123 | tier: frontend 124 | version: v1 125 | name: nginx-labels 126 | spec: 127 | containers: 128 | - image: nginx 129 | name: nginx 130 | EOF 131 | 132 | kubectl apply -f nginx-labels.yaml 133 | ``` 134 | 135 |

136 | 137 |
138 | 139 | ### Show all labels of the pod `nginx-labels` 140 | 141 |
142 | 143 |
show

144 | 145 | ```bash 146 | kubectl get pod nginx-labels --show-labels 147 | # NAME READY STATUS RESTARTS AGE LABELS 148 | # nginx-labels 1/1 Running 0 26s env=dev,name=nginx,tier=frontend,version=v1 149 | ``` 150 | 151 |

152 | 153 |
154 | 155 | ### Change the labels of pod 'nginx-labels' to be `version=v2` 156 | 157 |
show

158 | 159 | ```bash 160 | kubectl label pod nginx-labels version=v2 --overwrite 161 | 162 | kubectl get pod nginx-labels --show-labels 163 | # NAME READY STATUS RESTARTS AGE LABELS 164 | # nginx-labels 1/1 Running 0 110s env=dev,name=nginx,tier=frontend,version=v2 165 | ``` 166 | 167 |

168 | 169 |
170 | 171 | ### Get the label `env` for the pods (show a column with env labels) 172 | 173 |
show

174 | 175 | ```bash 176 | kubectl get pod -L env 177 | # OR 178 | kubectl get pod --label-columns=env 179 | # NAME READY STATUS RESTARTS AGE ENV 180 | # nginx-labels 1/1 Running 0 25s dev 181 | ``` 182 | 183 |

184 | 185 |
186 | 187 | ### Get only the `version=v2` pods 188 | 189 |
show

190 | 191 | ```bash 192 | kubectl get pod -l version=v2 193 | # OR 194 | kubectl get pod -l 'version in (v2)' 195 | # OR 196 | kubectl get pod --selector=version=v2 197 | ``` 198 | 199 |

200 | 201 |
202 | 203 | ### Remove the `name` label from the `nginx-labels` pod 204 | 205 |
show

206 | 207 | ```bash 208 | kubectl label pod nginx-labels name- 209 | 210 | kubectl get pod nginx-labels --show-labels 211 | NAME READY STATUS RESTARTS AGE LABELS 212 | nginx-labels 1/1 Running 0 4m49s env=dev,tier=frontend,version=v2 213 | ``` 214 | 215 |

216 | 217 | 218 | ### Clean up 219 | 220 | ```bash 221 | kubectl delete namespace alpha 222 | kubectl delete pod nginx-labels --force --grace-period=0 223 | ``` -------------------------------------------------------------------------------- /topics/logging.md: -------------------------------------------------------------------------------- 1 | # [Logging](https://kubernetes.io/docs/concepts/cluster-administration/logging/) 2 | 3 |
4 | 5 | ### Create a pod with below specs. Check the logs of the pod. Retrieve all currently available application logs from the running pod and store them in the file /tmp/counter.log. 6 | 7 |
8 | 9 | ```yaml 10 | cat << 'EOF' > counter.yaml 11 | apiVersion: v1 12 | kind: Pod 13 | metadata: 14 | name: counter 15 | spec: 16 | containers: 17 | - name: count 18 | image: busybox 19 | args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] 20 | EOF 21 | 22 | kubectl apply -f counter.yaml 23 | 24 | ``` 25 | 26 |
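Before checking the logs via kubectl, it helps to know what the container will print. The container's loop can be reproduced in any local shell (bounded to 3 iterations here instead of `while true`):

```shell
# Same loop as in the pod's args, limited to 3 iterations
i=0
while [ "$i" -lt 3 ]; do
  echo "$i: $(date)"
  i=$((i+1))
done
```

Each line is the counter value followed by a timestamp, which is the format `kubectl logs counter` returns.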
show

27 | 28 | ```bash 29 | kubectl logs counter 30 | # OR 31 | kubectl logs counter -f # for tailing the logs 32 | ``` 33 | 34 | #### Copy the logs to the file /tmp/counter.log. 35 | 36 | ```bash 37 | kubectl logs counter > /tmp/counter.log 38 | ``` 39 | 40 |

41 | 42 |
43 | 44 | ### Create a multi-container pod with below specs. Check the logs of the counter pod. 45 | 46 |
47 | 48 | ```yaml 49 | cat << 'EOF' > nginx-counter.yaml 50 | apiVersion: v1 51 | kind: Pod 52 | metadata: 53 | name: nginx-counter 54 | spec: 55 | containers: 56 | - name: nginx 57 | image: nginx 58 | - name: counter 59 | image: busybox 60 | args: [/bin/sh, -c, 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done'] 61 | EOF 62 | 63 | kubectl apply -f nginx-counter.yaml 64 | ``` 65 | 66 |
show

67 | 68 | `kubectl logs nginx-counter counter` OR `kubectl logs nginx-counter -c counter` 69 | 70 |

71 | 72 |
73 | 74 | ### Cleanup 75 | 76 | ```bash 77 | rm counter.yaml nginx-counter.yaml /tmp/counter.log 78 | kubectl delete pod counter nginx-counter 79 | ``` 80 | -------------------------------------------------------------------------------- /topics/monitoring.md: -------------------------------------------------------------------------------- 1 | # [Monitoring](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/) 2 | 3 |
4 | 5 | ### Monitor the node consumption 6 | 7 |
8 | 9 | ```bash 10 | kubectl top nodes 11 | ``` 12 | 13 |
14 | 15 | ### Monitor the pod consumption 16 | 17 | ```bash 18 | kubectl top pods 19 | ``` 20 | 21 |
22 | 23 | ### Find the pod with label `name=high-cpu` running high CPU workloads 24 | 25 | ```bash 26 | kubectl top pods -l name=high-cpu --sort-by=cpu 27 | ``` 28 | 29 |
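`--sort-by` orders the rows for you, but the same ordering can be reproduced on any saved `kubectl top pods` output with plain `sort` (the table below is fabricated for illustration; since every CPU value shares the `m` milli-core unit, a numeric sort on the column works):

```shell
# fabricated `kubectl top pods` output: NAME CPU(cores) MEMORY(bytes)
cat << 'EOF' > /tmp/top.txt
high-cpu-1 250m 64Mi
high-cpu-2 900m 12Mi
high-cpu-3 10m 80Mi
EOF

# numeric sort on field 2, highest CPU first (the trailing 'm' is ignored by -n)
sort -k2 -n -r /tmp/top.txt
```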
30 | 31 | -------------------------------------------------------------------------------- /topics/multi_container_pods.md: -------------------------------------------------------------------------------- 1 | # Multi-container Pods 2 | 3 |
4 | 5 | ### Create a multi-container pod `multi-container-pod` with 2 containers 6 | - first container name `nginx` with image `nginx` 7 | - second container name `redis` with image `redis` 8 | 9 |
show

10 | 11 | ```yaml 12 | cat << EOF > multi-container-pod.yaml 13 | apiVersion: v1 14 | kind: Pod 15 | metadata: 16 | name: multi-container-pod 17 | spec: 18 | containers: 19 | - image: nginx 20 | name: nginx 21 | - image: redis 22 | name: redis 23 | EOF 24 | 25 | kubectl apply -f multi-container-pod.yaml 26 | ``` 27 | 28 |

29 | 30 |
31 | 32 | ### Create a pod named multi-container-nrm with a single app container for each of the following images running inside: nginx + redis + memcached. 33 | 34 |
show

35 | 36 | ```yaml 37 | cat << EOF > multi-container-nrm.yaml 38 | apiVersion: v1 39 | kind: Pod 40 | metadata: 41 | name: multi-container-nrm 42 | spec: 43 | containers: 44 | - image: nginx 45 | name: nginx 46 | - image: redis 47 | name: redis 48 | - image: memcached 49 | name: memcached 50 | EOF 51 | 52 | kubectl apply -f multi-container-nrm.yaml 53 | ``` 54 | 55 |

56 | 57 |
58 | 59 | ### Create a pod named `sidecar-pod` with a single app container using the following spec. 60 | ```yaml 61 | cat << EOF > sidecar-pod.yaml 62 | apiVersion: v1 63 | kind: Pod 64 | metadata: 65 | name: sidecar-pod 66 | spec: 67 | containers: 68 | - name: myapp 69 | image: alpine:latest 70 | command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done'] 71 | volumeMounts: 72 | - name: data 73 | mountPath: /opt 74 | volumes: 75 | - name: data 76 | emptyDir: {} 77 | EOF 78 | 79 | kubectl apply -f sidecar-pod.yaml 80 | ``` 81 | 82 | - Add a sidecar container named `sidecar`, using the `busybox` image, to the existing `sidecar-pod`. The new sidecar container has to run the following command: `sh -c "tail -F /opt/logs.txt"` 83 | - Use a volume mounted at `/opt` to make the log file logs.txt available to the sidecar container. 84 | - Don't modify the spec of the existing container other than adding the required volume mount. 85 | 86 |
show

87 | 88 | ```yaml 89 | cat << EOF > sidecar-pod.yaml 90 | apiVersion: v1 91 | kind: Pod 92 | metadata: 93 | name: sidecar-pod 94 | spec: 95 | containers: 96 | - name: myapp 97 | image: alpine:latest 98 | command: ['sh', '-c', 'while true; do echo "logging" >> /opt/logs.txt; sleep 1; done'] 99 | volumeMounts: 100 | - name: data 101 | mountPath: /opt 102 | - name: sidecar 103 | image: busybox 104 | command: ['sh', '-c', 'tail -F /opt/logs.txt'] 105 | volumeMounts: 106 | - name: data 107 | mountPath: /opt 108 | volumes: 109 | - name: data 110 | emptyDir: {} 111 | EOF 112 | 113 | # containers cannot be added to a running pod, so delete and recreate it 114 | kubectl replace -f sidecar-pod.yaml --force 115 | ``` 116 | 117 | 118 |

119 | 120 |
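The app-plus-sidecar pattern above boils down to two processes sharing a directory; a local sketch with plain shell, using a temp directory as a stand-in for the `emptyDir` volume mounted at `/opt` (the bounded loop is an assumption for the demo):

```shell
LOGDIR=$(mktemp -d)   # stands in for the emptyDir volume mounted at /opt

# "myapp" container: append one line per iteration (3 iterations for the demo)
i=0
while [ "$i" -lt 3 ]; do
  echo "logging" >> "$LOGDIR/logs.txt"
  i=$((i+1))
done

# "sidecar" container: read the shared file (in the pod, tail -F follows it forever)
tail -n 3 "$LOGDIR/logs.txt"
```

Because both containers mount the same volume, the sidecar sees every line the app writes, with no network or IPC involved.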
121 | 122 | ### Create a multi-container pod using fluentd as a sidecar container, with the given specs below. Update the pod so that it runs both containers and the log files from the first container can be shared/used by the second container. Mount a shared volume /var/log on both containers, which does not persist when the pod is deleted. 123 | 124 | ```yaml 125 | apiVersion: v1 126 | kind: Pod 127 | metadata: 128 | name: counter 129 | spec: 130 | containers: 131 | - name: count 132 | image: busybox 133 | args: 134 | - /bin/sh 135 | - -c 136 | - > 137 | i=0; 138 | while true; 139 | do 140 | echo "$i: $(date)" >> /var/log/1.log; 141 | echo "$(date) INFO $i" >> /var/log/2.log; 142 | i=$((i+1)); 143 | sleep 1; 144 | done 145 | - name: count-agent 146 | image: k8s.gcr.io/fluentd-gcp:1.30 147 | env: 148 | - name: FLUENTD_ARGS 149 | value: -c /etc/fluentd-config/fluentd.conf 150 | volumeMounts: 151 | - name: config-volume 152 | mountPath: /etc/fluentd-config 153 | volumes: 154 | - name: config-volume 155 | configMap: 156 | name: fluentd-config 157 | ``` 158 | 159 | #### Create fluentd-config as it is needed by fluentd and mounted as config. 160 | 161 | ```yaml 162 | cat << EOF > fluentd-sidecar-config.yaml 163 | apiVersion: v1 164 | kind: ConfigMap 165 | metadata: 166 | name: fluentd-config 167 | data: 168 | fluentd.conf: | 169 | <source> 170 | type tail 171 | format none 172 | path /var/log/1.log 173 | pos_file /var/log/1.log.pos 174 | tag count.format1 175 | </source> 176 | 177 | <source> 178 | type tail 179 | format none 180 | path /var/log/2.log 181 | pos_file /var/log/2.log.pos 182 | tag count.format2 183 | </source> 184 | 185 | <match **> 186 | type google_cloud 187 | </match> 188 | EOF 189 | 190 | kubectl apply -f fluentd-sidecar-config.yaml 191 | ``` 192 | 193 |
show

194 | 195 | ```yaml 196 | cat << EOF > two-files-counter-pod-agent-sidecar.yaml 197 | apiVersion: v1 198 | kind: Pod 199 | metadata: 200 | name: counter 201 | spec: 202 | containers: 203 | - name: count 204 | image: busybox 205 | args: 206 | - /bin/sh 207 | - -c 208 | - > 209 | i=0; 210 | while true; 211 | do 212 | echo "$i: $(date)" >> /var/log/1.log; 213 | echo "$(date) INFO $i" >> /var/log/2.log; 214 | i=$((i+1)); 215 | sleep 1; 216 | done 217 | volumeMounts: 218 | - name: varlog # mount the varlog volume as the /var/log path 219 | mountPath: /var/log 220 | - name: count-agent 221 | image: k8s.gcr.io/fluentd-gcp:1.30 222 | env: 223 | - name: FLUENTD_ARGS 224 | value: -c /etc/fluentd-config/fluentd.conf 225 | volumeMounts: 226 | - name: varlog # mount the varlog volume as the /var/log path 227 | mountPath: /var/log 228 | - name: config-volume 229 | mountPath: /etc/fluentd-config 230 | volumes: 231 | - name: varlog # define varlog volume as empty dir which does not persist when the pod is deleted. 232 | emptyDir: {} 233 | - name: config-volume 234 | configMap: 235 | name: fluentd-config 236 | EOF 237 | 238 | kubectl apply -f two-files-counter-pod-agent-sidecar.yaml 239 | ``` 240 | 241 | ```bash 242 | kubectl get pod counter 243 | # NAME READY STATUS RESTARTS AGE 244 | # counter 2/2 Running 0 24s 245 | 246 | kubectl exec counter -c count -- cat /var/log/1.log 247 | # : Sat Dec 18 02:34:35 UTC 2021 248 | # : Sat Dec 18 02:34:35 UTC 2021 249 | 250 | kubectl exec counter -c count-agent -- cat /var/log/1.log 251 | # : Sat Dec 18 02:34:35 UTC 2021 252 | # : Sat Dec 18 02:34:35 UTC 2021 253 | ``` 254 | 255 |

256 | 257 |
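The two log formats the count container writes (`$i: $(date)` and `$(date) INFO $i`) are what the two fluentd `tag`s distinguish; extracting the counter back out of each format is a one-liner (the sample lines below are fabricated to match the formats):

```shell
# fabricated samples of the two formats written to /var/log/1.log and /var/log/2.log
cat << 'EOF' > /tmp/1.log
0: Sat Dec 18 02:34:35 UTC 2021
1: Sat Dec 18 02:34:36 UTC 2021
EOF
cat << 'EOF' > /tmp/2.log
Sat Dec 18 02:34:35 UTC 2021 INFO 0
Sat Dec 18 02:34:36 UTC 2021 INFO 1
EOF

cut -d: -f1 /tmp/1.log        # format 1: counter is the prefix before ':'
awk '{print $NF}' /tmp/2.log  # format 2: counter is the last field
```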
258 | 259 | ### Clean up 260 | 261 | ```bash 262 | rm multi-container-nrm.yaml two-files-counter-pod-agent-sidecar.yaml fluentd-sidecar-config.yaml multi-container-pod.yaml sidecar-pod.yaml 263 | kubectl delete configmap fluentd-config 264 | kubectl delete pod multi-container-nrm counter multi-container-pod sidecar-pod --force 265 | ``` -------------------------------------------------------------------------------- /topics/namespaces.md: -------------------------------------------------------------------------------- 1 | # [Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) 2 | 3 | - Namespaces provide a mechanism for isolating groups of resources within a single cluster. 4 | - Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc). 5 | - Names of resources need to be unique within a namespace, but not across namespaces. 6 | 7 |
8 | 9 | ### Check the namespaces on the cluster 10 | 11 |
12 | 13 |
show

14 | 15 | ```bash 16 | kubectl get namespaces 17 | ``` 18 | 19 |

20 | 21 |
22 | 23 | ### Create namespace named `alpha` 24 | 25 |
26 | 27 |
show

28 | 29 | ```bash 30 | kubectl create namespace alpha 31 | ``` 32 | 33 |

34 | 35 |
36 | 37 | ### Get pods from `alpha` namespace 38 | 39 |
40 | 41 |
show

42 | 43 | ```bash 44 | kubectl get pods --namespace=alpha 45 | # OR 46 | kubectl get pods -n=alpha 47 | ``` 48 | 49 |

50 | 51 |
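Output like `kubectl get pods -A` is plain whitespace-separated columns, so per-namespace pod counts can be derived from a saved listing with `awk` (the listing below is fabricated for illustration):

```shell
# fabricated `kubectl get pods -A` output
cat << 'EOF' > /tmp/pods.txt
NAMESPACE NAME READY STATUS
alpha nginx 1/1 Running
alpha redis 1/1 Running
kube-system kube-proxy 1/1 Running
EOF

# skip the header row, then count rows per first column (the namespace)
awk 'NR > 1 { count[$1]++ } END { for (ns in count) print ns, count[ns] }' /tmp/pods.txt
```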
52 | 53 | ### Get pods from all namespaces 54 | 55 |
56 | 57 |
show

58 | 59 | ```bash 60 | kubectl get pods --all-namespaces 61 | # OR 62 | kubectl get pods -A 63 | ``` 64 | 65 |

66 | 67 |
68 | 69 | ### Label namespace `alpha` with label `type:critical` 70 | 71 |
72 | 73 |
show

74 | 75 | ```bash 76 | kubectl label namespace alpha type=critical 77 | 78 | kubectl get namespace alpha --show-labels 79 | # NAME STATUS AGE LABELS 80 | # alpha Active 70s type=critical 81 | ``` 82 | 83 |

84 | 85 | ### Delete namespace `alpha` 86 | 87 |
88 | 89 |
show

90 | 91 | ```bash 92 | kubectl delete namespace alpha 93 | ``` 94 | 95 |

-------------------------------------------------------------------------------- /topics/network_policies.md: -------------------------------------------------------------------------------- 1 | # [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) 2 | 3 |
4 | 5 | **NOTE** : [Flannel does not support Network Policies](https://github.com/flannel-io/flannel/issues/558) and hence does not work with the current katacoda cluster. Try the setup on a cluster with a network plugin that supports network policies. 6 | 7 | ### Get network policies in the default namespace 8 | 9 |
10 | 11 |
show

12 | 13 | ```bash 14 | kubectl get networkpolicy 15 | ``` 16 | 17 |

18 | 19 |
20 | 21 | ### Create a default `deny-all` Network Policy that denies ingress and egress traffic 22 | 23 |
show

24 | 25 | ```bash 26 | cat << EOF > deny-all.yaml 27 | kind: NetworkPolicy 28 | apiVersion: networking.k8s.io/v1 29 | metadata: 30 | name: deny-all 31 | spec: 32 | podSelector: {} # select all pods 33 | policyTypes: # listing a type with no rules denies all traffic of that type 34 | - Ingress 35 | - Egress 36 | EOF 37 | 38 | kubectl apply -f deny-all.yaml 39 | ``` 40 | 41 |

46 | 47 |
48 | 49 | ### Create three pods as per the specs below. Create a NetworkPolicy `limit-consumer` so that `consumer` pods can only be accessed from `producer` pods and not from `web` pods. 50 | 1. Pod named `consumer` with image `nginx`. Expose via a ClusterIP service on port 80. 51 | 2. Pod named `producer` with image `nginx`. Expose via a ClusterIP service on port 80. 52 | 3. Pod named `web` with image `nginx`. Expose via a ClusterIP service on port 80. 53 | 54 |
55 | 56 |
show

57 | 58 | #### Create the pods and expose as service 59 | 60 | ```bash 61 | kubectl run consumer --image=nginx && kubectl expose pod consumer --port=80 62 | kubectl run producer --image=nginx && kubectl expose pod producer --port=80 63 | kubectl run web --image=nginx && kubectl expose pod web --port=80 64 | ``` 65 | 66 | #### Verify the communication 67 | 68 | ```bash 69 | # verify if web and producer can access consumer 70 | kubectl exec producer -- curl http://consumer:80 # success 71 | kubectl exec web -- curl http://consumer:80 # success 72 | ``` 73 | 74 | #### Create and apply the network policy 75 | 76 | ```yaml 77 | cat << EOF > limit-consumer.yaml 78 | kind: NetworkPolicy 79 | apiVersion: networking.k8s.io/v1 80 | metadata: 81 | name: limit-consumer 82 | spec: 83 | podSelector: 84 | matchLabels: 85 | run: consumer # selector for the pods 86 | policyTypes: 87 | - Ingress 88 | ingress: # allow ingress traffic only from producer pods 89 | - from: 90 | - podSelector: # from pods 91 | matchLabels: # with this label 92 | run: producer 93 | EOF 94 | 95 | kubectl apply -f limit-consumer.yaml 96 | ``` 97 | 98 | #### Verify the communication 99 | 100 | ```bash 101 | # verify if web and producer can access consumer 102 | kubectl exec producer -- curl http://consumer:80 # success 103 | kubectl exec web -- curl http://consumer:80 # failure 104 | ``` 105 | 106 | ```bash 107 | # Cleanup 108 | kubectl delete pod web producer consumer --force 109 | kubectl delete svc web producer consumer 110 | rm limit-consumer.yaml 111 | ``` 112 | 113 |

114 | 115 |
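A NetworkPolicy's `podSelector.matchLabels` is ordinary label matching; a toy shell function makes the allow/deny decision above concrete (the function and label strings are illustrative, not a real kubectl feature):

```shell
# allow traffic if the source pod's label set contains the selector's key=value
allowed() {
  case " $1 " in
    *" $2 "*) echo allow ;;
    *)        echo deny  ;;
  esac
}

allowed "run=producer" "run=producer"   # producer -> consumer
allowed "run=web"      "run=producer"   # web -> consumer
```

The real policy engine does exactly this comparison per connection: the peer's labels either satisfy every key/value in the selector, or the rule does not match.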
116 | 117 | ### You have rolled out a new pod to your infrastructure and now you need to allow it to communicate with the `backend` and `storage` pods but nothing else. Given the running pod `web` edit it to use a network policy that will allow it to send traffic only to the `backend` and `storage` pods. 118 | 119 | #### Setup 120 | 121 | ```bash 122 | kubectl run web --image nginx --labels name=web && kubectl expose pod web --port 80 123 | kubectl run backend --image nginx --labels name=backend && kubectl expose pod backend --port 80 124 | kubectl run storage --image nginx --labels name=storage && kubectl expose pod storage --port 80 125 | kubectl run dummy --image nginx --labels name=dummy && kubectl expose pod dummy --port 80 126 | ``` 127 | 128 | #### Verify the communication 129 | 130 | ```bash 131 | # verify web can currently reach all the pods 132 | kubectl exec web -- curl http://backend:80 # success 133 | kubectl exec web -- curl http://storage:80 # success 134 | kubectl exec web -- curl http://dummy:80 # success - but should be failure 135 | ``` 136 | 137 | #### Allow DNS lookups for all pods 138 | 139 | ```yaml 140 | kubectl label namespace kube-system name=kube-system 141 | 142 | cat << EOF > egress-deny-all.yaml 143 | apiVersion: networking.k8s.io/v1 144 | kind: NetworkPolicy 145 | metadata: 146 | name: default-deny-all-egress 147 | spec: 148 | podSelector: {} 149 | egress: 150 | - to: 151 | - namespaceSelector: 152 | matchLabels: 153 | name: kube-system 154 | ports: 155 | - protocol: TCP 156 | port: 53 157 | - protocol: UDP 158 | port: 53 159 | policyTypes: 160 | - Egress 161 | EOF 162 | 163 | kubectl apply -f egress-deny-all.yaml 164 | ``` 165 | 166 |
show

167 | 168 | #### Create and apply the network policy 169 | 170 | ```yaml 171 | cat << EOF > limit-web.yaml 172 | kind: NetworkPolicy 173 | apiVersion: networking.k8s.io/v1 174 | metadata: 175 | name: limit-web 176 | spec: 177 | podSelector: 178 | matchLabels: 179 | name: web # selector for the pods 180 | policyTypes: 181 | - Ingress 182 | - Egress 183 | ingress: 184 | - {} 185 | egress: # allow egress traffic only to backend & storage pods 186 | - to: 187 | - podSelector: # to pods 188 | matchLabels: # with the backend label 189 | name: backend 190 | - podSelector: # to pods 191 | matchLabels: # with the storage label 192 | name: storage 193 | ports: 194 | - protocol: TCP 195 | port: 80 196 | EOF 197 | 198 | kubectl apply -f limit-web.yaml 199 | ``` 200 | 201 | #### Verify the previous curls still work and that the dummy pod is no longer reachable. 202 | 203 | ```bash 204 | # verify web can reach only backend and storage 205 | kubectl exec web -- curl http://backend:80 # success 206 | kubectl exec web -- curl http://storage:80 # success 207 | kubectl exec web -- curl http://dummy:80 # failure 208 | ``` 209 | 210 | ```bash 211 | # Cleanup 212 | kubectl label namespace kube-system name- 213 | kubectl delete networkpolicy default-deny-all-egress limit-web 214 | kubectl delete pod web backend storage dummy --force 215 | kubectl delete svc web backend storage dummy 216 | rm limit-web.yaml egress-deny-all.yaml 217 | ``` 218 | 219 |

220 | -------------------------------------------------------------------------------- /topics/nodes.md: -------------------------------------------------------------------------------- 1 | # Nodes 2 | 3 |
4 | 5 | ### Get the nodes of the cluster 6 | 7 |
show

8 | 9 | ```bash 10 | kubectl get nodes 11 | # NAME           STATUS   ROLES    AGE   VERSION 12 | # controlplane   Ready    master   62m   v1.18.0 13 | # node01         Ready    <none>   61m   v1.18.0 14 | ``` 15 | 16 |

17 | 18 |
19 | 20 | ### Show all labels for the node `node01` 21 | 22 |
show

23 | 24 | ```bash 25 | kubectl get nodes node01 --show-labels 26 | # NAME     STATUS   ROLES    AGE   VERSION   LABELS 27 | # node01   Ready    <none>   61m   v1.18.0   accelerator=nvidia-tesla-p100,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux 28 | ``` 29 | 30 |

31 | 32 |
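The comma-separated LABELS column is easier to scan one label per line, which `tr` handles (the label string below is copied from the sample output, trimmed to three entries):

```shell
LABELS='accelerator=nvidia-tesla-p100,beta.kubernetes.io/arch=amd64,kubernetes.io/hostname=node01'

# one label per line
echo "$LABELS" | tr ',' '\n'
```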
33 | 34 | ### Label worker node `node01` with label `type=critical` 35 | 36 |
show

37 | 38 | ```bash 39 | kubectl label node node01 type=critical 40 | # node/node01 labeled 41 | ``` 42 | 43 |

44 | 45 |
46 | 47 | ### Remove label `type=critical` from worker node `node01` 48 | 49 |
show

50 | 51 | ```bash 52 | kubectl label node node01 type- 53 | # node/node01 labeled 54 | ``` 55 | 56 |

57 | 58 |
59 | 60 | ### Get usage metrics such as CPU and Memory of the cluster nodes 61 | 62 |
show

63 | 64 | ```bash 65 | kubectl top nodes 66 | ``` 67 | 68 |

69 | 70 |
71 | 72 | ### Set the node named node01 unavailable and reschedule all the pods running on it. 73 | 74 |
show

75 | 76 | ```bash 77 | kubectl drain node01 --ignore-daemonsets --force # drain will cordon the node as well 78 | # node/node01 cordoned 79 | # Pods: kube-system/kube-flannel-ds-amd64-6mrm2, kube-system/kube-keepalived-vip-zchjw, kube-system/kube-proxy-ms2mf 80 | # WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/multi-container-nrm; ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-6mrm2, kube-system/kube-keepalived-vip-zchjw, kube-system/kube-proxy-ms2mf 81 | # evicting pod default/multi-container-nrm 82 | # evicting pod kube-system/katacoda-cloud-provider-6c46f89b5c-jvb7g 83 | # pod/multi-container-nrm evicted 84 | # pod/katacoda-cloud-provider-6c46f89b5c-jvb7g evicted 85 | # node/node01 evicted 86 | ``` 87 | 88 |

89 | 90 |
-------------------------------------------------------------------------------- /topics/pod_security_context.md: -------------------------------------------------------------------------------- 1 | # [Pod Security Context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) 2 | 3 | A security context defines privilege and access control settings for a Pod or Container. Security context settings include, but are not limited to: 4 | - [`Discretionary Access Control`](#discretionary-access-control): Permission to access an object, like a file, is based on user ID (UID) and group ID (GID). 5 | - [`Security Enhanced Linux (SELinux)`](#selinux): Objects are assigned security labels. 6 | - Running as `privileged` or unprivileged. 7 | - [`Linux Capabilities`](#linux-capabilities): Give a process some privileges, but not all the privileges of the root user. 8 | - [`AppArmor`](#apparmor): Use program profiles to restrict the capabilities of individual programs. 9 | - [`Seccomp`](#seccomp): Filter a process's system calls. 10 | - `AllowPrivilegeEscalation`: Controls whether a process can gain more privileges than its parent process. This bool directly controls whether the no_new_privs flag gets set on the container process. 11 | - [`readOnlyRootFilesystem`](#immutability): Mounts the container's root filesystem as read-only. 12 | 13 |
14 | 15 | ## Discretionary Access Control 16 | 17 |
18 | 19 | ### Run a `busybox-user` pod with the following settings 20 | - `user`: `1000` 21 | - `group`: `3000` 22 | 23 |
show

24 | 25 | ```yaml 26 | cat << EOF > busybox-user.yaml 27 | apiVersion: v1 28 | kind: Pod 29 | metadata: 30 | name: busybox-user 31 | spec: 32 | securityContext: # add this 33 | runAsUser: 1000 # add user 34 | runAsGroup: 3000 # add group 35 | containers: 36 | - image: busybox 37 | name: busybox-user 38 | command: ["sh", "-c", "sleep 600"] 39 | EOF 40 | 41 | kubectl apply -f busybox-user.yaml 42 | ``` 43 | 44 | ```bash 45 | # verify - whoami resolves to a name only if uid 1000 exists in the image 46 | kubectl exec busybox-user -- whoami 47 | # whoami: unknown uid 1000 48 | # command terminated with exit code 1 49 | ``` 50 | 51 |

52 | 53 |
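`runAsUser` and `runAsGroup` map straight onto ordinary Unix uid/gid; the identity check the container makes can be run in any shell (your local ids will differ from the pod's 1000/3000):

```shell
# numeric uid and gid of the current process - exactly what
# runAsUser / runAsGroup override inside the pod
id -u
id -g
```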
54 | 55 | ## Linux Capabilities 56 | 57 |
58 | 59 | ### Create a nginx pod with `SYS_TIME` & `NET_ADMIN` capabilities. 60 | 61 |
show

62 | 63 | ```yaml 64 | cat << EOF > nginx.yaml 65 | apiVersion: v1 66 | kind: Pod 67 | metadata: 68 | name: nginx 69 | spec: 70 | containers: 71 | - image: nginx 72 | name: nginx 73 | securityContext: 74 | capabilities: 75 | add: ["SYS_TIME", "NET_ADMIN"] 76 | EOF 77 | 78 | kubectl apply -f nginx.yaml 79 | ``` 80 | 81 |

82 | 83 |
84 | 85 | ## App Armor 86 | 87 |
88 | 89 | Refer [AppArmor](./apparmor.md) 90 | 91 |
92 | 93 | ## Seccomp 94 | 95 | Refer [Seccomp - Secure Computing](./seccomp.md) 96 | 97 |
98 | 99 | ## Immutability 100 | 101 | - Image Immutability: Containerized applications are meant to be immutable, and once built are not expected to change between different environments. 102 | 103 |
104 | 105 | ### Make the `busybox-immutable` pod immutable using the following settings 106 | - `readOnlyRootFilesystem`: `true` 107 | - `privileged`: `false` 108 | - `command` : `[ "sh", "-c", "sleep 600" ]` 109 | 110 |
show

111 | 112 | ```yaml 113 | cat << EOF > busybox-immutable.yaml 114 | apiVersion: v1 115 | kind: Pod 116 | metadata: 117 | name: busybox-immutable 118 | spec: 119 | containers: 120 | - image: busybox 121 | name: busybox-immutable 122 | command: ["sh", "-c", "sleep 600"] 123 | securityContext: # add this 124 | readOnlyRootFilesystem: true # add this to make container immutable 125 | privileged: false # add this to prevent container making any node changes 126 | EOF 127 | 128 | kubectl apply -f busybox-immutable.yaml 129 | ``` 130 | 131 | ```bash 132 | # verify 133 | kubectl exec busybox-immutable -- touch echo.txt 134 | # touch: echo.txt: Read-only file system 135 | # command terminated with exit code 1 136 | ``` 137 | 138 |

139 | 140 | ## Clean up 141 | 142 | ```bash 143 | rm busybox-user.yaml nginx.yaml busybox-immutable.yaml 144 | kubectl delete pod busybox-user nginx busybox-immutable --force --grace-period=0 145 | ``` -------------------------------------------------------------------------------- /topics/pod_security_policies.md: -------------------------------------------------------------------------------- 1 | # [Pod Security Policies - DEPRECATED](https://kubernetes.io/docs/concepts/policy/pod-security-policy/) 2 | 3 | - Pod Security Policies enable fine-grained authorization of pod creation and updates. 4 | - PodSecurityPolicy is deprecated as of Kubernetes v1.21, and will be removed in v1.25. 5 | 6 |
7 | 8 | ### Create the following 9 | - Pod Security Policy `psp-example` to prevent pods with `privileged` as true 10 | - Enable PodSecurityPolicy in the Kubernetes API server 11 | - Try creating the nginx pod with the following specs. 12 | 13 | ```yaml 14 | cat << EOF > nginx.yaml 15 | apiVersion: v1 16 | kind: Pod 17 | metadata: 18 | name: nginx 19 | spec: 20 | containers: 21 | - image: nginx 22 | name: nginx 23 | securityContext: 24 | privileged: true 25 | restartPolicy: Always 26 | EOF 27 | ``` 28 | 29 |
show

30 | 31 | #### Create Pod Security Policy 32 | 33 | ```yaml 34 | cat << EOF > psp.yaml 35 | apiVersion: policy/v1beta1 36 | kind: PodSecurityPolicy 37 | metadata: 38 | name: psp-example 39 | spec: 40 | privileged: false 41 | seLinux: 42 | rule: RunAsAny 43 | runAsUser: 44 | rule: RunAsAny 45 | supplementalGroups: 46 | rule: RunAsAny 47 | fsGroup: 48 | rule: RunAsAny 49 | EOF 50 | 51 | kubectl apply -f psp.yaml 52 | ``` 53 | 54 | #### Pods need to have access to use Pod Security Policies and the Service Account i.e. default needs to have access to the same. 55 | 56 | ```yaml 57 | cat << EOF > role-psp.yaml 58 | apiVersion: rbac.authorization.k8s.io/v1 59 | kind: ClusterRole 60 | metadata: 61 | name: role-psp 62 | rules: 63 | - apiGroups: ['policy'] 64 | resources: ['podsecuritypolicies'] 65 | verbs: ['use'] 66 | EOF 67 | 68 | kubectl apply -f role-psp.yaml 69 | 70 | cat << EOF > role-psp-binding.yaml 71 | apiVersion: rbac.authorization.k8s.io/v1 72 | kind: ClusterRoleBinding 73 | metadata: 74 | name: role-psp-binding 75 | roleRef: 76 | kind: ClusterRole 77 | name: role-psp 78 | apiGroup: rbac.authorization.k8s.io 79 | subjects: 80 | - kind: ServiceAccount 81 | name: default 82 | namespace: default 83 | EOF 84 | 85 | kubectl apply -f role-psp-binding.yaml 86 | ``` 87 | 88 | #### Update `/etc/kubernetes/manifests/kube-apiserver.yaml` to enable `PodSecurityPolicy` 89 | 90 | ```yaml 91 | --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # update the admission plugins 92 | ``` 93 | 94 | #### Verify 95 | ```bash 96 | kubectl apply -f nginx.yaml 97 | # Error from server (Forbidden): error when creating "nginx.yaml": pods "nginx" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.volumes[0]: Invalid value: "secret": secret volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed] 98 | ``` 99 | 100 |

101 | 102 |
103 | 104 | ### Update the `psp-example` Pod Security Policy to allow only `configMap` and `secret` volumes. Try creating the nginx pod with following specs. 105 | 106 | ```yaml 107 | cat << EOF > nginx.yaml 108 | apiVersion: v1 109 | kind: Pod 110 | metadata: 111 | name: nginx 112 | spec: 113 | containers: 114 | - image: nginx 115 | name: nginx 116 | volumeMounts: 117 | - mountPath: /cache 118 | name: cache-volume 119 | volumes: 120 | - name: cache-volume 121 | emptyDir: {} 122 | EOF 123 | ``` 124 | 125 |
show

126 | 127 | ```yaml 128 | cat << EOF > psp.yaml 129 | apiVersion: policy/v1beta1 130 | kind: PodSecurityPolicy 131 | metadata: 132 | name: psp-example 133 | spec: 134 | privileged: false 135 | seLinux: 136 | rule: RunAsAny 137 | runAsUser: 138 | rule: RunAsAny 139 | supplementalGroups: 140 | rule: RunAsAny 141 | fsGroup: 142 | rule: RunAsAny 143 | volumes: # add the volumes 144 | - 'configMap' 145 | - 'secret' 146 | EOF 147 | 148 | kubectl apply -f psp.yaml 149 | ``` 150 | 151 | #### Verify 152 | 153 | ```bash 154 | kubectl apply -f nginx.yaml 155 | # Error from server (Forbidden): error when creating "nginx.yaml": pods "nginx" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.volumes[0]: Invalid value: "emptyDir": emptyDir volumes are not allowed to be used] 156 | 157 | # NOTE : If the pod is created check for other psp which allows the creation and delete the same. 158 | ``` 159 | 160 |

161 | 162 |
163 | 164 | ### Update the following Pod Security Policy `psp-example` to allow only `/data` as host paths in `readOnly` mode. Try creating the nginx pod with following specs. 165 | 166 | ```yaml 167 | cat << EOF > nginx.yaml 168 | apiVersion: v1 169 | kind: Pod 170 | metadata: 171 | name: nginx 172 | spec: 173 | containers: 174 | - image: nginx 175 | name: nginx 176 | resources: {} 177 | volumeMounts: 178 | - mountPath: /test-pd 179 | name: test-volume 180 | volumes: 181 | - name: test-volume 182 | hostPath: 183 | path: /data 184 | type: Directory 185 | EOF 186 | ``` 187 | 188 |
show

189 | 190 | ```yaml 191 | cat << EOF > psp.yaml 192 | apiVersion: policy/v1beta1 193 | kind: PodSecurityPolicy 194 | metadata: 195 | name: psp-example 196 | spec: 197 | privileged: false 198 | seLinux: 199 | rule: RunAsAny 200 | runAsUser: 201 | rule: RunAsAny 202 | supplementalGroups: 203 | rule: RunAsAny 204 | fsGroup: 205 | rule: RunAsAny 206 | volumes: 207 | - 'configMap' 208 | - 'secret' 209 | - 'hostPath' 210 | allowedHostPaths: # add the allowed host paths 211 | - pathPrefix: "/data" 212 | readOnly: true 213 | EOF 214 | 215 | kubectl apply -f psp.yaml 216 | ``` 217 | 218 | #### Verify 219 | 220 | ```bash 221 | kubectl apply -f nginx.yaml 222 | # Error from server (Forbidden): error when creating "nginx.yaml": pods "nginx" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].volumeMounts[0].readOnly: Invalid value: false: must be read-only] 223 | ``` 224 | 225 | ```yaml 226 | cat << EOF > nginx.yaml 227 | apiVersion: v1 228 | kind: Pod 229 | metadata: 230 | name: nginx 231 | spec: 232 | containers: 233 | - image: nginx 234 | name: nginx 235 | resources: {} 236 | volumeMounts: 237 | - mountPath: /test-pd 238 | name: test-volume 239 | readOnly: true # add this 240 | volumes: 241 | - name: test-volume 242 | hostPath: 243 | path: /data 244 | type: Directory 245 | EOF 246 | ``` 247 | 248 | #### Verify 249 | 250 | ```bash 251 | kubectl apply -f nginx.yaml 252 | # pod/nginx created 253 | ``` 254 | 255 |

-------------------------------------------------------------------------------- /topics/pods.md: -------------------------------------------------------------------------------- 1 | # [Pod](https://kubernetes.io/docs/concepts/workloads/pods/) 2 | 3 | - A Kubernetes pod is a group of containers, and is the smallest unit that Kubernetes administers. 4 | - Pods have a single IP address that is applied to every container within the pod. 5 | - Pods can have single or multiple containers. 6 | - Containers in a pod share the same resources such as memory and storage. 7 | - Remember, you CANNOT edit specifications of an existing POD other than the below. 8 | - spec.containers[*].image 9 | - spec.initContainers[*].image 10 | - spec.activeDeadlineSeconds 11 | - spec.tolerations 12 | - Edit the pod for changes and a tmp file is created. Delete and Recreate the pod using the tmp file. 13 | 14 |
15 | 16 | - [Basics](#basics) 17 | - [Multi-container Pods](#multi-container-pods) 18 | - [Node Selector](#node-selector) 19 | - [Resources - Requests and limits](#resources) 20 | - [Static Pods](#static-pods) 21 | - [Init Containers](#init-containers) 22 | 23 | ## Basics 24 | 25 |
26 | 27 | ### Check number of pods in the default namespace 28 | 29 |
30 | 31 |
show

32 | 33 | ```bash 34 | kubectl get pods 35 | # OR 36 | kubectl get po 37 | ``` 38 |

39 | 40 |
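`kubectl get pods` prints a header line, so counting the pods in a saved listing means skipping that first row (the listing below is fabricated for illustration):

```shell
# fabricated `kubectl get pods` output
cat << 'EOF' > /tmp/default-pods.txt
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 5m
ubuntu-1 1/1 Running 0 2m
EOF

# number of pods = all lines except the header
tail -n +2 /tmp/default-pods.txt | wc -l
```

On a live cluster, `kubectl get pods --no-headers | wc -l` gives the same count without the `tail` step.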
41 | 42 | ### Create a new pod with name `nginx` and using the `nginx` image 43 | 44 |
show

45 | 46 | ```bash 47 | kubectl run nginx --image=nginx 48 | ``` 49 | 50 |

51 | 52 |
53 | 54 | 55 | ### Create a pod named `mongo` using image `mongo` in a new Kubernetes namespace `my-website` 56 | 57 |
show

58 | 59 | ```bash 60 | kubectl create namespace my-website 61 | kubectl run mongo --image=mongo --namespace=my-website 62 | ``` 63 | 64 |

65 | 66 |
67 | 68 | 69 | ### Create a new pod with name nginx and using the nginx image in the `alpha` namespace 70 | 71 |
show

72 | 73 | ```bash 74 | kubectl create namespace alpha 75 | kubectl run nginx --image=nginx --namespace=alpha 76 | ``` 77 | 78 |

79 | 80 |
81 | 82 | ### Create a new pod `custom-nginx` using the `nginx` image and expose it on container port 8080. 83 | 84 |
show

85 | 86 | ```bash 87 | kubectl run custom-nginx --image=nginx --port=8080 88 | ``` 89 | 90 |

91 | 92 |
93 | 94 | ### Check which node the pod is hosted on 95 | 96 |
show

97 | 98 | ```bash 99 | kubectl get pods -o wide 100 | ``` 101 | 102 |

103 | 104 |
105 | 106 | ### Get only the pods name 107 | 108 |
show

109 | 110 | ```bash 111 | kubectl get pods -o name 112 | ``` 113 | 114 |

115 | 116 |
117 | 118 | ### Delete the pod with the name nginx 119 | 120 |
show

121 | 122 | ```bash 123 | kubectl delete pod nginx 124 | ``` 125 | 126 |

127 | 128 |
129 | 130 | ### Delete the pod with the name nginx in the `alpha` namespace 131 | 132 |
show

133 | 134 | ```bash 135 | kubectl delete pod nginx --namespace=alpha 136 | ``` 137 | 138 |

139 | 140 |
141 | 142 | ### Create pod `nginx-labels` with `nginx` image and label `name=nginx`, `tier=frontend`, `env=dev` 143 | 144 |
145 | 146 |
show

147 | 148 | ```bash 149 | kubectl run nginx-labels --image=nginx --labels=name=nginx,tier=frontend,env=dev 150 | ``` 151 | 152 | OR 153 | 154 | ```yaml 155 | cat << EOF > nginx-labels.yaml 156 | apiVersion: v1 157 | kind: Pod 158 | metadata: 159 | labels: 160 | env: dev 161 | name: nginx 162 | tier: frontend 163 | name: nginx-labels 164 | spec: 165 | containers: 166 | - image: nginx 167 | name: nginx 168 | EOF 169 | 170 | kubectl apply -f nginx-labels.yaml 171 | ``` 172 | 173 |

175 | 176 |
177 | 178 | ### Delete the pod with name `nginx-labels` with force and no grace period 179 | 180 |
show

181 | 182 | ```bash 183 | kubectl delete pod nginx-labels --force --grace-period=0 184 | ``` 185 | 186 |

187 | 188 |
189 | 190 | ### Create a pod with name `nginx-file` and image nginx using a pod definition file 191 | 192 |
show

193 | 194 | ```bash 195 | kubectl run nginx-file --image=nginx --dry-run=client -o yaml > nginx-file.yaml 196 | kubectl apply -f nginx-file.yaml 197 | ``` 198 |

199 | 200 |
201 | 202 | ### Create a nginx pod with name nginx and copy the pod definition file to a nginx_definition.yaml file 203 | 204 |
show

205 | 206 | ```bash 207 | kubectl run nginx --image=nginx 208 | kubectl get pod nginx -o yaml > nginx_definition.yaml 209 | ``` 210 | 211 |

212 | 213 |
214 | 215 | ### Create a `ubuntu-1` pod with image `ubuntu` with command `sleep 4800` 216 | 217 |
show

218 | 219 | ```bash 220 | kubectl run ubuntu-1 --image=ubuntu --command -- sleep 4800 221 | ``` 222 | 223 |
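A declarative sketch of the same pod — the `command` field overrides the image's entrypoint, which is what `--command` does on the imperative side:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-1
spec:
  containers:
  - image: ubuntu
    name: ubuntu-1
    command: ["sleep", "4800"]   # overrides the image entrypoint
```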

224 | 225 |
226 | 227 | ### A web application requires a specific version of redis to be used as a cache. Create a pod with the following characteristics, and leave it running when complete: 228 | - The pod must run in the web namespace. 229 | - The name of the pod should be cache 230 | - Use the redis image with the 3.2 tag 231 | - Expose port 6379 232 | 233 |
234 | 235 |
show

236 | 237 | ```bash 238 | kubectl create namespace web 239 | kubectl run cache --image redis:3.2 --port 6379 --namespace web 240 | ``` 241 | 242 |

243 | 244 |
245 | 246 | ## Multi-container Pods 247 | 248 |
249 | 250 | Refer [Multi-container Pods](multi_container_pods.md) 251 | 252 |
253 | 254 | ## [Node Selector](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/) 255 | 256 |
257 | 258 | ### Create a pod `nginx-node-selector` that will be deployed to a Node that has the label `accelerator=nvidia-tesla-p100` 259 | 260 |
show

261 | 262 | Add the label to a node: 263 | 264 | ```bash 265 | kubectl label nodes node01 accelerator=nvidia-tesla-p100 266 | ``` 267 | 268 | We can use the 'nodeSelector' property on the Pod YAML: 269 | 270 | ```yaml 271 | cat << EOF > nginx-node-selector.yaml 272 | apiVersion: v1 273 | kind: Pod 274 | metadata: 275 | name: nginx-node-selector 276 | spec: 277 | containers: 278 | - name: nginx-node-selector 279 | image: nginx 280 | nodeSelector: # add this 281 | accelerator: nvidia-tesla-p100 # the selection label 282 | EOF 283 | 284 | kubectl apply -f nginx-node-selector.yaml 285 | ``` 286 | 287 | OR 288 | 289 | Use node affinity (https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/#schedule-a-pod-using-required-node-affinity) 290 | 291 | ```yaml 292 | cat << EOF > nginx-node-selector.yaml 293 | apiVersion: v1 294 | kind: Pod 295 | metadata: 296 | name: affinity-pod 297 | spec: 298 | affinity: 299 | nodeAffinity: 300 | requiredDuringSchedulingIgnoredDuringExecution: 301 | nodeSelectorTerms: 302 | - matchExpressions: 303 | - key: accelerator 304 | operator: In 305 | values: 306 | - nvidia-tesla-p100 307 | containers: 308 | - name: nginx-node-selector 309 | image: nginx 310 | EOF 311 | 312 | kubectl apply -f nginx-node-selector.yaml 313 | 314 | ``` 315 | 316 |

317 | 318 |
319 | 320 | ### Remove the `description` annotations for pod `nginx-annotations` 321 | 322 |
show

323 | 324 | ```bash 325 | kubectl annotate pod nginx-annotations description- 326 | ``` 327 | 328 |

329 | 330 |
331 | 332 | ### Delete the `nginx-annotations` pod to have a clean state in your cluster 333 | 334 |
show

335 | 336 | ```bash 337 | kubectl delete pod nginx-annotations --force 338 | ``` 339 | 340 |

341 | 342 |
343 | 344 | ## [Resources](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) 345 | 346 |
347 | 348 | ### Create an nginx pod name `nginx-resources` with `requests` `cpu=100m,memory=256Mi` and `limits` `cpu=200m,memory=512Mi` 349 | 350 |
351 | 352 |
show

353 | 354 | ```bash 355 | kubectl run nginx-resources --image=nginx --restart=Never --requests='cpu=100m,memory=256Mi' --limits='cpu=200m,memory=512Mi' # note: --requests/--limits were removed from kubectl run in newer releases; use the manifest below there 356 | ``` 357 | 358 | OR 359 | 360 | ```yaml 361 | cat << EOF > nginx-resources.yaml 362 | apiVersion: v1 363 | kind: Pod 364 | metadata: 365 | creationTimestamp: null 366 | labels: 367 | run: nginx-resources 368 | name: nginx-resources 369 | spec: 370 | containers: 371 | - image: nginx 372 | name: nginx-resources 373 | resources: 374 | limits: 375 | cpu: 200m 376 | memory: 512Mi 377 | requests: 378 | cpu: 100m 379 | memory: 256Mi 380 | dnsPolicy: ClusterFirst 381 | restartPolicy: Never 382 | status: {} 383 | EOF 384 | 385 | kubectl apply -f nginx-resources.yaml 386 | ``` 387 | 388 |

389 | 390 |
391 | 392 | ## [Static Pods](https://kubernetes.io/docs/concepts/workloads/pods/#static-pods) 393 | 394 |
395 | 396 | ### Configure the kubelet systemd-managed service, on the node labelled with name=node01, to launch a pod containing a single container of Image httpd named webtool automatically. Any spec files required should be placed in the /etc/kubernetes/manifests directory on the node. 397 | 398 |
399 | 400 |
show

401 | 402 | #### Check the static pod path in the kubelet config file 403 | 404 | ```bash 405 | ps -ef | grep kubelet 406 | # root 2794 1 3 07:43 ? 00:01:05 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf 407 | 408 | # Check the config file @ /var/lib/kubelet/config.yaml for the staticPodPath property 409 | staticPodPath: /etc/kubernetes/manifests 410 | ``` 411 | 412 | #### Execute the below on node01 413 | 414 | ```bash 415 | mkdir -p /etc/kubernetes/manifests # create the static pod path, if it does not exist. 416 | 417 | cat << EOF > /etc/kubernetes/manifests/webtool.yaml 418 | apiVersion: v1 419 | kind: Pod 420 | metadata: 421 | creationTimestamp: null 422 | labels: 423 | run: webtool 424 | name: webtool 425 | spec: 426 | containers: 427 | - image: httpd 428 | name: webtool 429 | resources: {} 430 | dnsPolicy: ClusterFirst 431 | restartPolicy: Always 432 | status: {} 433 | EOF 434 | 435 | systemctl restart kubelet # if required 436 | ``` 437 | 438 | #### Check on the control plane node 439 | 440 | ```bash 441 | kubectl get pods 442 | # NAME READY STATUS RESTARTS AGE 443 | # webtool-node01 1/1 Running 0 11s 444 | ``` 445 | 446 |

447 | 448 |
449 | 450 | ## Init Containers 451 | 452 | Refer [Init Containers](init_containers.md) 453 | 454 | ### Clean up 455 | 456 |
457 | 458 | ```bash 459 | rm nginx-labels.yaml nginx-file.yaml nginx_definition.yaml nginx-resources.yaml 460 | kubectl delete pod mongo -n my-website --force --grace-period=0 461 | kubectl delete pod cache -n web --force --grace-period=0 462 | kubectl delete pod nginx -n alpha --force --grace-period=0 463 | kubectl delete namespace alpha web my-website 464 | ``` -------------------------------------------------------------------------------- /topics/probes.md: -------------------------------------------------------------------------------- 1 | # Readiness & Liveness Probes 2 | 3 | - Readiness probes help the kubelet know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. 4 | - Liveness probes help the kubelet know when a container is unhealthy and needs to be restarted. 5 | 6 |
7 | 8 | - [Readiness probes](#readiness-probes) 9 | - [Liveness probes](#liveness-probes) 10 | 11 | ## Readiness probes 12 | 13 |
14 | 15 | ### Create a `nginx-readiness` pod with a readiness probe that just runs the http request on `/` with port `80` 16 | 17 |
show

18 | 19 | ```bash 20 | kubectl run nginx-readiness --image=nginx --restart=Never --dry-run=client -o yaml > nginx-readiness.yaml 21 | ``` 22 | 23 | Edit `nginx-readiness.yaml` file to add `readinessProbe` probe as below and apply `kubectl apply -f nginx-readiness.yaml` 24 | 25 | ```YAML 26 | apiVersion: v1 27 | kind: Pod 28 | metadata: 29 | creationTimestamp: null 30 | labels: 31 | run: nginx 32 | name: nginx 33 | spec: 34 | containers: 35 | - image: nginx 36 | imagePullPolicy: IfNotPresent 37 | name: nginx 38 | resources: {} 39 | readinessProbe: # declare the readiness probe 40 | httpGet: # add this line 41 | path: / # 42 | port: 80 # 43 | dnsPolicy: ClusterFirst 44 | restartPolicy: Never 45 | status: {} 46 | ``` 47 | 48 |

49 | 50 |
51 | 52 | ## Liveness probes 53 | 54 |
55 | 56 | ### Create a `nginx-liveness` pod with a liveness probe that just runs the command 'ls'. 57 | 58 |
show

59 | 60 | ```bash 61 | kubectl run nginx-liveness --image=nginx --restart=Never --dry-run=client -o yaml > nginx-liveness.yaml 62 | ``` 63 | 64 | Edit `nginx-liveness.yaml` file to add `livenessProbe` probe as below and apply `kubectl apply -f nginx-liveness.yaml` 65 | 66 | ```YAML 67 | apiVersion: v1 68 | kind: Pod 69 | metadata: 70 | creationTimestamp: null 71 | labels: 72 | run: nginx 73 | name: nginx 74 | spec: 75 | containers: 76 | - image: nginx 77 | imagePullPolicy: IfNotPresent 78 | name: nginx 79 | resources: {} 80 | livenessProbe: # add liveness probe 81 | exec: # add this line 82 | command: # command definition 83 | - ls # ls command 84 | dnsPolicy: ClusterFirst 85 | restartPolicy: Never 86 | status: {} 87 | ``` 88 |

89 | 90 |
91 | 92 | ### Modify `nginx-liveness` pod to add a delay of 30 seconds whereas the interval between probes would be 5 seconds. 93 | 94 |
show

95 | 96 | #### Edit `nginx-liveness.yaml` file to update `livenessProbe` probe as below. Delete and recreate pod using `kubectl apply -f nginx-liveness.yaml` 97 | 98 | ```YAML 99 | apiVersion: v1 100 | kind: Pod 101 | metadata: 102 | creationTimestamp: null 103 | labels: 104 | run: nginx 105 | name: nginx 106 | spec: 107 | containers: 108 | - image: nginx 109 | imagePullPolicy: IfNotPresent 110 | name: nginx 111 | resources: {} 112 | livenessProbe: 113 | initialDelaySeconds: 30 # add this line 114 | periodSeconds: 5 # add this line as well 115 | exec: 116 | command: 117 | - ls 118 | dnsPolicy: ClusterFirst 119 | restartPolicy: Never 120 | status: {} 121 | ``` 122 | 123 |
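Beyond delay and interval, probes accept a few more tuning fields; the values below are the documented defaults, shown here only for illustration:

```yaml
livenessProbe:
  initialDelaySeconds: 30  # wait before the first probe
  periodSeconds: 5         # interval between probes
  timeoutSeconds: 1        # per-probe timeout (default 1)
  failureThreshold: 3      # consecutive failures before the container is restarted (default 3)
  successThreshold: 1      # must be 1 for liveness probes
  exec:
    command:
    - ls
```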

124 | 125 |
126 | 127 | ## Troubleshooting 128 | 129 | ### Create a pod `liveness-exec` with the following specs. Wait for 30 secs and check if the pod restarts. Identify the reason. 130 | 131 | ```bash 132 | cat << EOF > exec-liveness.yaml 133 | apiVersion: v1 134 | kind: Pod 135 | metadata: 136 | labels: 137 | test: liveness 138 | name: liveness-exec 139 | spec: 140 | containers: 141 | - name: liveness 142 | image: k8s.gcr.io/busybox 143 | args: 144 | - /bin/sh 145 | - -c 146 | - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 147 | livenessProbe: 148 | exec: 149 | command: 150 | - cat 151 | - /tmp/healthy 152 | initialDelaySeconds: 5 153 | periodSeconds: 5 154 | EOF 155 | 156 | kubectl apply -f exec-liveness.yaml 157 | ``` 158 | 159 |
show

160 | 161 | ```bash 162 | kubectl get pod liveness-exec -w # pod restarts due to failed liveness check 163 | # NAME READY STATUS RESTARTS AGE 164 | # liveness-exec 1/1 Running 0 17s 165 | # liveness-exec 1/1 Running 1 76s 166 | 167 | kubectl describe pod liveness-exec 168 | 169 | # Normal Started 69s (x2 over 2m22s) kubelet, node01 Started container liveness 170 | # Warning Unhealthy 25s (x6 over 110s) kubelet, node01 Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory 171 | # Normal Killing 25s (x2 over 100s) kubelet, node01 Container liveness failed liveness probe, will be restarted 172 | ``` 173 | 174 |
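The mechanics can be reproduced locally without a cluster — the probe command is `cat /tmp/healthy`, and the kubelet only looks at its exit code (a temp file stands in for `/tmp/healthy` in this sketch):

```shell
f=$(mktemp)                                      # stand-in for /tmp/healthy
cat "$f" > /dev/null && echo "probe passes"      # file exists -> exit 0 -> container keeps running
rm -f "$f"                                       # the container deletes the file after 30s
cat "$f" > /dev/null 2>&1 || echo "probe fails"  # non-zero exit -> restart after failureThreshold failures
```

It prints `probe passes` then `probe fails`, matching the transition seen in `kubectl describe`.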

175 | 176 | ### Clean up 177 | 178 | ```bash 179 | rm nginx-liveness.yaml nginx-readiness.yaml 180 | kubectl delete pod nginx-readiness nginx-liveness liveness-exec --force 181 | ``` -------------------------------------------------------------------------------- /topics/rbac.md: -------------------------------------------------------------------------------- 1 | # [RBAC authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 2 | 3 | - Role and Role bindings are namespace scoped for e.g. pods, deployments, configmaps, etc. 4 | - Cluster Role and Cluster Role bindings are cluster scoped resources and not limited to namespaces for e.g. nodes, pv, etc. 5 | 6 | ## Table of Contents 7 | 1. [Role and Role Bindings](#role-and-role-bindings) 8 | 2. [Cluster Role and Cluster Role Bindings](#cluster-role-and-cluster-role-bindings) 9 | 10 |
11 | 12 | ### Check the current authorization used by the cluster 13 | 14 |
show

15 | 16 | Check the `/etc/kubernetes/manifests/kube-apiserver.yaml` for the `--authorization-mode=Node,RBAC` 17 | 18 |

19 | 20 |
21 | 22 | ## Role and Role Bindings 23 | 24 |
25 | 26 | ### Create the role `pods-read` to `get, create, list and delete` `pods` in the default namespace. 27 | 28 |
show

29 | 30 | ```bash 31 | kubectl create role pods-read --verb=get,create,list,delete --resource=pods 32 | ``` 33 | 34 | OR 35 | 36 | ```yaml 37 | cat << EOF > pods-read.yaml 38 | apiVersion: rbac.authorization.k8s.io/v1 39 | kind: Role 40 | metadata: 41 | name: pods-read 42 | rules: 43 | - apiGroups: 44 | - "" 45 | resources: 46 | - pods 47 | verbs: 48 | - get 49 | - create 50 | - list 51 | - delete 52 | EOF 53 | 54 | kubectl apply -f pods-read.yaml 55 | ``` 56 | 57 | ```bash 58 | # verify 59 | kubectl get role pods-read 60 | # NAME CREATED AT 61 | # pods-read 2021-12-13T01:35:10Z 62 | ``` 63 | 64 |

65 | 66 |
67 | 68 | ### Create a service account `sample-sa` 69 | 70 |
show

71 | 72 | ```bash 73 | kubectl create sa sample-sa 74 | ``` 75 | 76 | OR 77 | 78 | ```yaml 79 | cat << EOF > sample-sa.yaml 80 | apiVersion: v1 81 | kind: ServiceAccount 82 | metadata: 83 | creationTimestamp: null 84 | name: sample-sa 85 | EOF 86 | 87 | kubectl apply -f sample-sa.yaml 88 | ``` 89 | 90 | ```bash 91 | # verify 92 | kubectl get serviceaccount sample-sa 93 | # NAME SECRETS AGE 94 | # sample-sa 1 14s 95 | ``` 96 | 97 |

98 | 99 |
100 | 101 | ### Create a role binding `sample-sa-pods-read-role-binding` binding service account `sample-sa` and role `pods-read` 102 | 103 |
show

104 | 105 | ```bash 106 | kubectl create rolebinding sample-sa-pods-read-role-binding --serviceaccount=default:sample-sa --role=pods-read 107 | ``` 108 | 109 | OR 110 | 111 | ```yaml 112 | cat << EOF > sample-sa-pods-read-role-binding.yaml 113 | apiVersion: rbac.authorization.k8s.io/v1 114 | kind: RoleBinding 115 | metadata: 116 | creationTimestamp: null 117 | name: sample-sa-pods-read-role-binding 118 | roleRef: 119 | apiGroup: rbac.authorization.k8s.io 120 | kind: Role 121 | name: pods-read 122 | subjects: 123 | - kind: ServiceAccount 124 | name: sample-sa 125 | namespace: default 126 | EOF 127 | 128 | kubectl apply -f sample-sa-pods-read-role-binding.yaml 129 | ``` 130 | 131 | ```bash 132 | # verify 133 | kubectl get rolebinding sample-sa-pods-read-role-binding 134 | # NAME ROLE AGE 135 | # sample-sa-pods-read-role-binding Role/pods-read 18s 136 | ``` 137 | 138 |

139 | 140 |
141 | 142 | ### Verify service account `sample-sa` can get pods using the `auth can-i` command. 143 | 144 |
show

145 | 146 | ```bash 147 | # verify 148 | kubectl auth can-i get pods --as system:serviceaccount:default:sample-sa 149 | # yes 150 | ``` 151 |

152 | 153 |
154 | 155 | ## Cluster Role and Cluster Role Bindings 156 | 157 |
158 | 159 | ### Create the following for a user `proxy-admin` (which does not exist) 160 | - Cluster role `proxy-admin-role` with permissions to `nodes` with `get, list,create, update` actions 161 | - Cluster role binding `proxy-admin-role-binding` to bind cluster role `proxy-admin-role` to user `proxy-admin` 162 | 163 |
164 | 165 |
show

166 | 167 | ```bash 168 | kubectl create clusterrole proxy-admin-role --resource=nodes --verb=get,list,create,update 169 | kubectl create clusterrolebinding proxy-admin-role-binding --user=proxy-admin --clusterrole=proxy-admin-role 170 | ``` 171 | 172 | ```bash 173 | # verify 174 | kubectl auth can-i get nodes --as proxy-admin 175 | # yes 176 | ``` 177 | 178 |
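The declarative equivalents of the two imperative commands look roughly like this (note that `nodes` live in the core API group, hence the empty `apiGroups` entry):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: proxy-admin-role
rules:
- apiGroups: [""]            # core API group (nodes)
  resources: ["nodes"]
  verbs: ["get", "list", "create", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: proxy-admin-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: proxy-admin-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: proxy-admin
```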

179 | 180 |
181 | 182 | ### Create the following 183 | - Create a new role named `deployent-role` which only allows the `create` action on the following resource types in the `finance` namespace. 184 | - Deployment 185 | - StatefulSet 186 | - DaemonSet 187 | - Create a new Service Account named `cicd-token` in the existing namespace `finance` 188 | - Bind the new Role `deployent-role` to the new serviceaccount `cicd-token` using Role binding `deployent-role-binding` limited to the namespace `finance` 189 | 190 |
191 | 192 |
show

193 | 194 | ```bash 195 | kubectl create serviceaccount cicd-token -n finance 196 | kubectl create role deployent-role --verb=create --resource=deployments,statefulsets,daemonsets -n finance 197 | kubectl create rolebinding deployent-role-binding --serviceaccount=finance/cicd-token --role=deployent-role -n finance 198 | ``` 199 | 200 | ```bash 201 | # verify 202 | kubectl auth can-i create deployments --as system:serviceaccount:finance:cicd-token -n finance 203 | # yes 204 | ``` 205 | 206 |

207 | 208 |
209 | 210 | ## Clean up 211 | 212 |
213 | 214 | ```bash 215 | rm sample-sa-pods-read-role-binding.yaml pods-read.yaml 216 | kubectl delete rolebinding sample-sa-pods-read-role-binding 217 | kubectl delete serviceaccount sample-sa 218 | kubectl delete role pods-read 219 | kubectl delete clusterrolebinding proxy-admin-role-binding 220 | kubectl delete clusterrole proxy-admin-role 221 | ```
7 | 8 | ### Check number of replica sets in the default namespace 9 | 10 |
11 | 12 |
show

13 | 14 | ```bash 15 | kubectl get replicasets 16 | # OR 17 | kubectl get rs 18 | ``` 19 | 20 |

21 | 22 |
23 | 24 | ### Create a replica set named `replica-set-demo` using a pod template named nginx with the nginx image, labeled as `tier=frontend`, with a single replica. 25 | 26 |
27 | 28 |
show

29 | 30 | ```yaml 31 | cat << EOF > replica-set-demo.yaml 32 | apiVersion: apps/v1 33 | kind: ReplicaSet 34 | metadata: 35 | name: replica-set-demo 36 | spec: 37 | replicas: 1 38 | selector: 39 | matchLabels: 40 | tier: frontend 41 | template: 42 | metadata: 43 | labels: 44 | tier: frontend 45 | spec: 46 | containers: 47 | - name: nginx 48 | image: nginx 49 | EOF 50 | 51 | kubectl apply -f replica-set-demo.yaml 52 | ``` 53 | 54 |

55 | 56 |
57 | 58 | ### Scale up the `replica-set-demo` from 1 replica to 2 replicas 59 | 60 |
61 | 62 |
show

63 | 64 | ```bash 65 | kubectl scale replicaset replica-set-demo --replicas=2 66 | ``` 67 | 68 | OR 69 | 70 | Edit the replica set definition file `replica-set-demo.yaml` and apply `kubectl apply -f replica-set-demo.yaml` 71 | 72 | ```yaml 73 | apiVersion: apps/v1 74 | kind: ReplicaSet 75 | metadata: 76 | name: replica-set-demo 77 | spec: 78 | replicas: 2 # update this 79 | selector: 80 | matchLabels: 81 | tier: frontend 82 | template: 83 | metadata: 84 | labels: 85 | tier: frontend 86 | spec: 87 | containers: 88 | - name: nginx 89 | image: nginx 91 | ``` 92 | 93 |

94 | 95 |
96 | 97 | ### Scale down the `replica-set-demo` from 2 replicas to 1 replica 98 | 99 |
100 | 101 |
show

102 | 103 | ```bash 104 | kubectl scale replicaset replica-set-demo --replicas=1 105 | ``` 106 | 107 | OR 108 | 109 | #### Edit the replica set definition file `replica-set-demo.yaml` and apply `kubectl apply -f replica-set-demo.yaml` 110 | 111 | ```yaml 112 | apiVersion: apps/v1 113 | kind: ReplicaSet 114 | metadata: 115 | name: replica-set-demo 116 | spec: 117 | replicas: 1 # update this 118 | selector: 119 | matchLabels: 120 | tier: frontend 121 | template: 122 | metadata: 123 | labels: 124 | tier: frontend 125 | spec: 126 | containers: 127 | - name: nginx 128 | image: nginx 130 | ``` 131 | 132 |

133 | 134 |
135 | 136 | ### Create a replica set using the below definition and fix any issues. 137 | 138 |
139 | 140 | ```yaml 141 | apiVersion: v1 142 | kind: ReplicaSet 143 | metadata: 144 | name: replicaset-1 145 | spec: 146 | replicas: 1 147 | selector: 148 | matchLabels: 149 | tier: frontend 150 | template: 151 | metadata: 152 | labels: 153 | tier: frontend 154 | spec: 155 | containers: 156 | - name: nginx 157 | image: nginx 158 | ``` 159 | 160 |
show

161 | 162 | #### Check the apiVersion using `kubectl explain replicasets` which is `apps/v1`. 163 | Update the version and apply again. 164 | 165 |

166 | 167 |
168 | 169 | ### Create a replica set using the below definition and fix any issues. 170 | 171 |
172 | 173 | ```yaml 174 | apiVersion: apps/v1 175 | kind: ReplicaSet 176 | metadata: 177 | name: replicaset-2 178 | spec: 179 | replicas: 1 180 | selector: 181 | matchLabels: 182 | tier: frontend 183 | template: 184 | metadata: 185 | labels: 186 | tier: nginx 187 | spec: 188 | containers: 189 | - name: nginx 190 | image: nginx 191 | ``` 192 | 193 |
show

194 | 195 | The replica set selector field `tier: frontend` does not match the pod labels `tier: nginx`. Correct either of them and reapply. 196 | 197 |

198 | 199 |
200 | 201 | ### Clean up 202 | 203 | ```bash 204 | kubectl delete replicaset replica-set-demo replicaset-1 replicaset-2 205 | rm replica-set-demo.yaml 206 | ``` 207 | -------------------------------------------------------------------------------- /topics/runtimes.md: -------------------------------------------------------------------------------- 1 | # [Runtime Class](https://kubernetes.io/docs/concepts/containers/runtime-class/) 2 | 3 |
4 | 5 | ### Create the following `gvisor` runtime class and create a nginx pod referring the `gvisor` runtime. 6 | 7 |
8 | 9 | ```yaml 10 | cat << EOF > gvisor.yaml 11 | apiVersion: node.k8s.io/v1 12 | kind: RuntimeClass 13 | metadata: 14 | name: gvisor 15 | handler: runsc 16 | EOF 17 | 18 | kubectl apply -f gvisor.yaml 19 | ``` 20 | 21 |
show

22 | 23 | ```yaml 24 | cat << EOF > nginx.yaml 25 | apiVersion: v1 26 | kind: Pod 27 | metadata: 28 | name: nginx 29 | spec: 30 | runtimeClassName: gvisor 31 | containers: 32 | - image: nginx 33 | name: nginx 34 | restartPolicy: Always 35 | EOF 36 | 37 | kubectl apply -f nginx.yaml 38 | 39 | # NOTE : Pod may not come up as the runtime does not actually exist 40 | ``` 41 |

42 | 43 | -------------------------------------------------------------------------------- /topics/seccomp.md: -------------------------------------------------------------------------------- 1 | # [Seccomp - Secure Computing](https://kubernetes.io/docs/tutorials/clusters/seccomp/) 2 | 3 | - Seccomp stands for secure computing mode and has been a feature of the Linux kernel. 4 | - It can be used to sandbox the privileges of a process, restricting the calls it is able to make from userspace into the kernel. 5 | - Kubernetes lets you automatically apply seccomp profiles loaded onto a node to your Pods and containers. 6 | 7 | **NOTE** : Seccomp is available in kubernetes 1.19 and above only. 8 | 9 |
10 | 11 | ### Check the syscalls made by `ls` using the `strace` command 12 | 13 |
14 | 15 | ```bash 16 | strace -c ls 17 | 18 | # % time seconds usecs/call calls errors syscall 19 | # ------ ----------- ----------- --------- --------- ---------------- 20 | # 18.15 0.000051 4 12 mprotect 21 | # 16.01 0.000045 5 9 openat 22 | # 9.96 0.000028 3 11 close 23 | # 9.96 0.000028 14 2 getdents 24 | # 7.47 0.000021 3 7 read 25 | # 6.41 0.000018 2 10 fstat 26 | # 6.05 0.000017 9 2 2 statfs 27 | # 4.98 0.000014 14 1 munmap 28 | # 4.27 0.000012 6 2 ioctl 29 | # 3.56 0.000010 10 1 write 30 | # 3.20 0.000009 3 3 brk 31 | # 2.14 0.000006 0 17 mmap 32 | # 2.14 0.000006 1 8 8 access 33 | # 1.78 0.000005 3 2 rt_sigaction 34 | # 1.07 0.000003 3 1 set_tid_address 35 | # 0.71 0.000002 2 1 rt_sigprocmask 36 | # 0.71 0.000002 2 1 arch_prctl 37 | # 0.71 0.000002 2 1 set_robust_list 38 | # 0.71 0.000002 2 1 prlimit64 39 | # 0.00 0.000000 0 1 execve 40 | # ------ ----------- ----------- --------- --------- ---------------- 41 | # 100.00 0.000281 93 10 total 42 | ``` 43 | 44 |
45 | 46 | ### Check if the OS supports Seccomp 47 | 48 |
49 | 50 | ```bash 51 | grep -i seccomp /boot/config-$(uname -r) 52 | # CONFIG_HAVE_ARCH_SECCOMP_FILTER=y 53 | # CONFIG_SECCOMP_FILTER=y 54 | # CONFIG_SECCOMP=y 55 | ``` 56 | 57 |
58 | 59 | ### Check the status of Seccomp on the Kubernetes cluster 60 | 61 |
62 | 63 | ```bash 64 | kubectl run amicontained --image jess/amicontained -- amicontained 65 | kk logs amicontained 66 | # Container Runtime: docker 67 | # Has Namespaces: 68 | # pid: true 69 | # user: false 70 | # AppArmor Profile: docker-default (enforce) 71 | # Capabilities: 72 | # BOUNDING -> chown dac_override fowner fsetid kill setgid setuid setpcap net_bind_service net_raw sys_chroot mknod audit_write setfcap 73 | # Seccomp: disabled 74 | # Blocked Syscalls (22): 75 | # MSGRCV SYSLOG SETPGID SETSID VHANGUP PIVOT_ROOT ACCT SETTIMEOFDAY UMOUNT2 SWAPON SWAPOFF REBOOT SETHOSTNAME SETDOMAINNAME INIT_MODULE DELETE_MODULE LOOKUP_DCOOKIE KEXEC_LOAD FANOTIFY_INIT OPEN_BY_HANDLE_AT FINIT_MODULE KEXEC_FILE_LOAD 76 | # Looking for Docker.sock 77 | ``` 78 | 79 |
80 | 81 | ### Enable Seccomp for the `amicontained` using following specs and Seccomp `type: RuntimeDefault`. 82 | 83 |
84 | 85 | ```yaml 86 | apiVersion: v1 87 | kind: Pod 88 | metadata: 89 | name: amicontained 90 | spec: 91 | containers: 92 | - args: 93 | - amicontained 94 | image: jess/amicontained 95 | name: amicontained 96 | restartPolicy: Always 97 | ``` 98 | 99 |
show

100 | 101 | #### Apply Seccomp security context 102 | 103 | ```yaml 104 | cat << EOF > amicontained.yaml 105 | apiVersion: v1 106 | kind: Pod 107 | metadata: 108 | name: amicontained 109 | spec: 110 | securityContext: # add the security context with seccomp profile 111 | seccompProfile: 112 | type: RuntimeDefault 113 | containers: 114 | - args: 115 | - amicontained 116 | image: jess/amicontained 117 | name: amicontained 118 | restartPolicy: Always 119 | EOF 120 | 121 | kubectl apply -f amicontained.yaml 122 | ``` 123 | 124 | #### Verify 125 | 126 | ```bash 127 | 128 | kk logs amicontained 129 | # Container Runtime: kube 130 | # Has Namespaces: 131 | # pid: true 132 | # user: false 133 | # AppArmor Profile: docker-default (enforce) 134 | # Capabilities: 135 | # BOUNDING -> chown dac_override fowner fsetid kill setgid setuid setpcap net_bind_service net_raw sys_chroot mknod audit_write setfcap 136 | # Seccomp: filtering 137 | # Blocked Syscalls (62): 138 | # SYSLOG SETPGID SETSID USELIB USTAT SYSFS VHANGUP PIVOT_ROOT _SYSCTL ACCT SETTIMEOFDAY MOUNT UMOUNT2 SWAPON SWAPOFF REBOOT SETHOSTNAME SETDOMAINNAME IOPL IOPERM CREATE_MODULE INIT_MODULE DELETE_MODULE GET_KERNEL_SYMS QUERY_MODULE QUOTACTL NFSSERVCTL GETPMSG PUTPMSG AFS_SYSCALL TUXCALL SECURITY LOOKUP_DCOOKIE CLOCK_SETTIME VSERVER MBIND SET_MEMPOLICY GET_MEMPOLICY KEXEC_LOAD ADD_KEY REQUEST_KEY KEYCTL MIGRATE_PAGES UNSHARE MOVE_PAGES PERF_EVENT_OPEN FANOTIFY_INIT NAME_TO_HANDLE_AT OPEN_BY_HANDLE_AT CLOCK_ADJTIME SETNS PROCESS_VM_READV PROCESS_VM_WRITEV KCMP FINIT_MODULE KEXEC_FILE_LOAD BPF USERFAULTFD MEMBARRIER PKEY_MPROTECT PKEY_ALLOC PKEY_FREE 139 | # Looking for Docker.sock 140 | ``` 141 | 142 |

143 | 144 |
145 | 146 | ### Create a nginx pod named `audit-pod` using audit.json seccomp profile. 147 | 148 |
149 | 150 | ```bash 151 | # file is also available in the [data/seccomp](../data/Seccomp/audit.json) folder 152 | curl -L -o audit.json https://k8s.io/examples/pods/security/seccomp/profiles/audit.json 153 | ``` 154 | 155 |
show

156 | 157 | #### Copy the audit.json file to the default profiles location `/var/lib/kubelet/seccomp/` 158 | 159 | ```bash 160 | mkdir -p /var/lib/kubelet/seccomp/profiles 161 | cp audit.json /var/lib/kubelet/seccomp/profiles 162 | ``` 163 | 164 | #### Create nginx pod using the seccomp profile 165 | 166 | ```yaml 167 | cat << EOF > audit-pod.yaml 168 | apiVersion: v1 169 | kind: Pod 170 | metadata: 171 | name: audit-pod 172 | labels: 173 | app: audit-pod 174 | spec: 175 | securityContext: 176 | seccompProfile: 177 | type: Localhost 178 | localhostProfile: profiles/audit.json 179 | containers: 180 | - name: audit-pod 181 | image: nginx 182 | EOF 183 | 184 | kubectl apply -f audit-pod.yaml 185 | 186 | ``` 187 | 188 | #### Verify 189 | 190 | ````bash 191 | tail -f /var/log/syslog 192 | # Dec 16 02:07:21 vagrant kernel: [ 2253.183862] audit: type=1326 audit(1639620441.516:20): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=20123 comm="runc:[2:INIT]" exe="/" sig=0 arch=c000003e syscall=233 compat=0 ip=0x55e57ef09bc8 code=0x7ffc0000 193 | # Dec 16 02:07:21 vagrant kernel: [ 2253.183864] audit: type=1326 audit(1639620441.516:21): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=20123 comm="runc:[2:INIT]" exe="/" sig=0 arch=c000003e syscall=138 compat=0 ip=0x55e57ef5e230 code=0x7ffc0000 194 | ```` 195 | 196 |

197 | 198 |
199 | 200 | ### Clean up 201 | 202 | ```bash 203 | kubectl delete pod audit-pod amicontained --force 204 | rm audit-pod.yaml amicontained.yaml 205 | ``` -------------------------------------------------------------------------------- /topics/secrets.md: -------------------------------------------------------------------------------- 1 | # [Namespaces](https://kubernetes.io/docs/concepts/configuration/secret/) 2 | 3 | - A secret is an API object used to store non-confidential data in key-value pairs. 4 | - Pods can consume secrets as environment variables, command-line arguments, or as configuration files in a volume. 5 | - A secret allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable. 6 | 7 |
8 | 9 | ### Check the secrets on the cluster in the default namespace 10 | 11 |
show

12 | 13 | ```bash 14 | kubectl get secrets 15 | ``` 16 | 17 |

18 | 19 |
20 | 21 | ### Check the secrets on the cluster in all the namespaces 22 | 23 |
show

24 | 25 | ```bash 26 | kubectl get secrets --all-namespaces 27 | # OR 28 | kubectl get secrets -A 29 | ``` 30 | 31 |

32 | 33 |
34 | 35 | ### Create a secret named `db-secret-1` with data `DB_HOST=db.example.com`, `DB_USER=development`, `DB_PASSWD=password` 36 | 37 |
show

38 | 39 | ```bash 40 | kubectl create secret generic db-secret-1 --from-literal=DB_HOST=db.example.com --from-literal=DB_USER=development --from-literal=DB_PASSWD=password 41 | ``` 42 | 43 | OR 44 | 45 | ```yaml 46 | cat << EOF > db-secret-1.yaml 47 | apiVersion: v1 48 | kind: Secret 49 | metadata: 50 | name: db-secret-1 51 | data: 52 | DB_HOST: ZGIuZXhhbXBsZS5jb20= 53 | DB_PASSWD: cGFzc3dvcmQ= 54 | DB_USER: ZGV2ZWxvcG1lbnQ= 55 | EOF 56 | 57 | kubectl apply -f db-secret-1.yaml 58 | ``` 59 | 60 | ```bash 61 | kubectl describe secret db-secret-1 # verify 62 | Name: db-secret-1 63 | Namespace: default 64 | Labels: 65 | Annotations: 66 | 67 | Type: Opaque 68 | 69 | Data 70 | ==== 71 | DB_HOST: 14 bytes 72 | DB_PASSWD: 8 bytes 73 | DB_USER: 11 bytes 74 | ``` 75 | 76 |

77 | 78 |
79 | 80 | ### Create a secret named `db-secret-2` with data from file `secret.properties` 81 | 82 | ```bash 83 | cat << EOT > secret.properties 84 | DB_HOST=db.example.com 85 | DB_USER=development 86 | DB_PASSWD=password 87 | EOT 88 | ``` 89 | 90 |
show

91 | 92 | ```bash 93 | kubectl create secret generic db-secret-2 --from-file=secret.properties 94 | ``` 95 | 96 | ```bash 97 | kubectl describe secret db-secret-2 # verify 98 | Name: db-secret-2 99 | Namespace: default 100 | Labels: 101 | Annotations: 102 | 103 | Type: Opaque 104 | 105 | Data 106 | ==== 107 | secret.properties: 62 bytes 108 | ``` 109 | 110 |

111 | 112 |
113 | 114 | ### Create a new pod `nginx-2` with `nginx` image and add env variable for `DB_HOST` from secret `db-secret-1` 115 | 116 |
show

117 | 118 | ```yaml 119 | cat << EOF > nginx-2.yaml 120 | apiVersion: v1 121 | kind: Pod 122 | metadata: 123 | name: nginx-2 124 | spec: 125 | containers: 126 | - image: nginx 127 | name: nginx-2 128 | env: 129 | - name: DB_HOST 130 | valueFrom: 131 | secretKeyRef: 132 | name: db-secret-1 133 | key: DB_HOST 134 | EOF 135 | 136 | kubectl apply -f nginx-2.yaml 137 | ``` 138 | 139 | ```bash 140 | kubectl exec nginx-2 -- env | grep DB_HOST # verify env variables 141 | # DB_HOST=db.example.com 142 | ``` 143 | 144 |

145 | 146 |
147 | 148 | ### You are tasked to create a secret and consume the secret in a pod using environment variables as follows: 149 | - Create a secret named another-secret with a key/value pair; key1/value4 150 | - Start an nginx pod named nginx-secret using container image nginx, and add an environment variable exposing the value of the secret key key1, using COOL_VARIABLE as the name for the environment variable inside the pod 151 | 152 |
show

153 | 154 | ```bash 155 | kubectl create secret generic another-secret --from-literal=key1=value4 156 | ``` 157 | 158 | ```yaml 159 | cat << EOF > nginx-secret.yaml 160 | apiVersion: v1 161 | kind: Pod 162 | metadata: 163 | name: nginx-secret 164 | spec: 165 | containers: 166 | - image: nginx 167 | name: nginx-secret 168 | env: 169 | - name: COOL_VARIABLE 170 | valueFrom: 171 | secretKeyRef: 172 | name: another-secret 173 | key: key1 174 | EOF 175 | 176 | kubectl apply -f nginx-secret.yaml 177 | ``` 178 | 179 | ```bash 180 | kubectl exec nginx-secret -- env | grep COOL_VARIABLE # verify env variables 181 | # COOL_VARIABLE=value4 182 | ``` 183 | 184 |

185 | 186 |
187 | 188 | ### Create a new pod `nginx-3` with `nginx` image and add all env variables from secret `db-secret-1` 189 | 190 |
show

191 | 192 | ```yaml 193 | cat << EOF > nginx-3.yaml 194 | apiVersion: v1 195 | kind: Pod 196 | metadata: 197 | name: nginx-3 198 | spec: 199 | containers: 200 | - image: nginx 201 | name: nginx-3 202 | envFrom: 203 | - secretRef: 204 | name: db-secret-1 205 | EOF 206 | 207 | kubectl apply -f nginx-3.yaml 208 | ``` 209 | 210 | ```bash 211 | kubectl exec nginx-3 -- env | grep DB_ # verify env variables 212 | # DB_HOST=db.example.com 213 | # DB_PASSWD=password 214 | # DB_USER=development 215 | ``` 216 | 217 |

218 | 219 |
220 | 221 | ### Create a new pod `nginx-4` with `nginx` image and mount the secret `db-secret-1` as a volume named `db-secret` and mount path `/secret` 222 | 223 |
show

224 | 225 | ```yaml 226 | cat << EOF > nginx-4.yaml 227 | apiVersion: v1 228 | kind: Pod 229 | metadata: 230 | name: nginx-4 231 | spec: 232 | containers: 233 | - image: nginx 234 | name: nginx-4 235 | volumeMounts: 236 | - name: db-secret 237 | mountPath: "/secret" 238 | readOnly: true 239 | volumes: 240 | - name: db-secret 241 | secret: 242 | secretName: db-secret-1 243 | EOF 244 | 245 | kubectl apply -f nginx-4.yaml 246 | ``` 247 | 248 | ```bash 249 | kubectl exec nginx-4 -- cat /secret/DB_HOST # verify env variables 250 | # db.example.com 251 | ``` 252 | 253 |

254 | 255 |
256 | 257 | ### Create a tls secret using tls.crt and tls.key in the data folder. 258 | 259 |
show

260 | 261 | ```bash 262 | kubectl create secret tls my-tls-secret --cert=../data/tls.crt --key=../data/tls.key 263 | ``` 264 | 265 | ```bash 266 | kubectl describe secret my-tls-secret #verify 267 | Name: my-tls-secret 268 | Namespace: default 269 | Labels: 270 | Annotations: 271 | 272 | Type: kubernetes.io/tls 273 | 274 | Data 275 | ==== 276 | tls.crt: 1932 bytes 277 | tls.key: 3273 bytes 278 | ``` 279 | 280 |

281 | 282 |
283 | 284 | ### Create a docker registry secret `regcred` with the details below, and create a new pod `nginx-5` with `nginx` image that uses the private docker registry. 285 | - docker-server : example.com 286 | - docker-username : user_name 287 | - docker-password : password 288 | - docker-email : user_name@example.com 289 | 290 |
show

291 | 292 | ```bash 293 | kubectl create secret docker-registry regcred --docker-server=example.com --docker-username=user_name --docker-password=password --docker-email=user_name@example.com 294 | ``` 295 | 296 | ```yaml 297 | cat << EOF > nginx-5.yaml 298 | apiVersion: v1 299 | kind: Pod 300 | metadata: 301 | name: nginx-5 302 | spec: 303 | containers: 304 | - name: nginx-5 305 | image: nginx 306 | imagePullSecrets: 307 | - name: regcred 308 | EOF 309 | 310 | kubectl apply -f nginx-5.yaml 311 | ``` 312 | 313 |

314 | 315 |
316 | 317 | ### Clean up 318 | 319 | ```bash 320 | kubectl delete pod nginx-1 nginx-2 nginx-3 nginx-4 nginx-5 nginx-secret --force --grace-period=0 321 | kubectl delete secret db-secret-1 db-secret-2 my-tls-secret regcred 322 | rm secret.properties nginx-2.yaml nginx-3.yaml nginx-4.yaml nginx-5.yaml nginx-secret.yaml 323 | ``` 324 | -------------------------------------------------------------------------------- /topics/service_accounts.md: -------------------------------------------------------------------------------- 1 | # Service Account 2 | 3 | A service account provides an identity for processes that run in a Pod. 4 | **NOTE**: From k8s 1.24, when a ServiceAccount is created, its token and secrets are no longer created automatically. You can create the token and secrets manually. 5 | 6 |
7 | 8 | ### Create Service Account `sample-sa` 9 | 10 |
show

11 | 12 | ```bash 13 | kubectl create serviceaccount sample-sa 14 | # OR 15 | kubectl create sa sample-sa 16 | ``` 17 | 18 | OR 19 | 20 | ```yaml 21 | cat << EOF > sample-sa.yaml 22 | apiVersion: v1 23 | kind: ServiceAccount 24 | metadata: 25 | name: sample-sa 26 | EOF 27 | 28 | kubectl apply -f sample-sa.yaml 29 | ``` 30 | 31 | ```bash 32 | kubectl describe serviceaccount sample-sa # Verify, no secret and token are created automatically 33 | Name: sample-sa 34 | Namespace: default 35 | Labels: 36 | Annotations: 37 | Image pull secrets: 38 | Mountable secrets: 39 | Tokens: 40 | Events: 41 | 42 | ``` 43 | 44 |
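As the empty `Tokens:` field above shows, no token exists yet on 1.24+. One can be issued on demand with `kubectl create token sample-sa`, or a long-lived token Secret can be declared explicitly; a sketch of the latter (the secret name is hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sample-sa-token   # hypothetical name
  annotations:
    kubernetes.io/service-account.name: sample-sa
type: kubernetes.io/service-account-token
```

Once applied, the control plane populates the Secret's `token`, `ca.crt` and `namespace` keys.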

45 | 46 |
47 | 48 | ### Create Service Account `sample-sa-no-auto-mount` with auto mounting disabled 49 | 50 |
51 | 52 |
show

53 | 54 | ```yaml 55 | cat << EOF > sample-sa-no-auto-mount.yaml 56 | apiVersion: v1 57 | kind: ServiceAccount 58 | metadata: 59 | name: sample-sa-no-auto-mount 60 | automountServiceAccountToken: false 61 | EOF 62 | 63 | kubectl apply -f sample-sa-no-auto-mount.yaml 64 | ``` 65 | 66 |

67 | 68 |
69 | 70 | ### Create a pod with name `nginx-sa` and with image `nginx` and service account `sample-sa` 71 | 72 |
73 | 74 |
show

75 | 76 | ```bash 77 | kubectl run nginx-sa --image=nginx --serviceaccount=sample-sa # NOTE: --serviceaccount is deprecated in newer kubectl and may have no effect; prefer the manifest below 78 | ``` 79 | 80 | OR 81 | 82 | ```yaml 83 | cat << EOF > nginx-sa.yaml 84 | apiVersion: v1 85 | kind: Pod 86 | metadata: 87 | name: nginx-sa 88 | spec: 89 | containers: 90 | - image: nginx 91 | name: nginx-sa 92 | serviceAccountName: sample-sa 93 | EOF 94 | 95 | kubectl apply -f nginx-sa.yaml 96 | ``` 97 | 98 |

99 | 100 |
101 | 102 | ### Clean up 103 | 104 | ```bash 105 | rm nginx-sa.yaml sample-sa-no-auto-mount.yaml sample-sa.yaml 106 | kubectl delete pod nginx-sa --force --grace-period=0 107 | kubectl delete serviceaccount sample-sa-no-auto-mount sample-sa 108 | ``` -------------------------------------------------------------------------------- /topics/services.md: -------------------------------------------------------------------------------- 1 | # [Services](https://kubernetes.io/docs/concepts/workloads/controllers/services/) 2 | 3 |
4 | 5 | ### Create a pod `nginx-clusterip` with image `nginx`. Expose it as a ClusterIP service. 6 | 7 |
show

8 | 9 | ```bash 10 | kubectl run nginx-clusterip --image=nginx --restart=Never --port=80 --expose 11 | 12 | kubectl get service nginx-clusterip # verification 13 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 14 | # nginx-clusterip ClusterIP 10.104.163.30 80/TCP 6s 15 | ``` 16 | 17 |

18 | 19 |
20 | 21 | ### Create a pod `nginx-nodeport` with image `nginx`. Expose it as a NodePort service `nginx-nodeport-svc` 22 | 23 |
show

24 | 25 | ```bash 26 | kubectl run nginx-nodeport --image=nginx --restart=Never --port=80 27 | kubectl expose pod nginx-nodeport --name nginx-nodeport-svc --type NodePort --port 80 --target-port 80 28 | ``` 29 | 30 | OR 31 | 32 | ```yaml 33 | cat << EOF > nginx-nodeport.yaml 34 | apiVersion: v1 35 | kind: Service 36 | metadata: 37 | creationTimestamp: null 38 | labels: 39 | run: nginx-nodeport 40 | name: nginx-nodeport-svc 41 | spec: 42 | ports: 43 | - port: 80 44 | protocol: TCP 45 | targetPort: 80 46 | selector: 47 | run: nginx-nodeport 48 | type: NodePort 49 | status: 50 | loadBalancer: {} 51 | EOF 52 | 53 | kubectl apply -f nginx-nodeport.yaml 54 | ``` 55 | 56 | ```bash 57 | # verification - the assigned NodePort may differ 58 | kubectl get svc nginx-nodeport-svc 59 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 60 | # nginx-nodeport-svc NodePort 10.106.55.131 80:31287/TCP 12s 61 | ``` 62 | 63 |

64 | 65 |
66 | 67 | ### Create a deployment `nginx-deployment` with image `nginx` and 3 replicas. Expose it as a NodePort service `nginx-deployment-svc` on port 30080. 68 | 69 |
show

70 | 71 | ```bash 72 | kubectl create deploy nginx-deployment --image nginx && kubectl scale deploy nginx-deployment --replicas 3 73 | kubectl expose deployment nginx-deployment --type NodePort --port 80 --target-port 80 --dry-run=client -o yaml > nginx-deployment-svc.yaml 74 | ``` 75 | 76 | Edit `nginx-deployment-svc.yaml` to add `nodePort: 30080` and apply `kubectl apply -f nginx-deployment-svc.yaml` 77 | 78 | ```yaml 79 | apiVersion: v1 80 | kind: Service 81 | metadata: 82 | creationTimestamp: null 83 | labels: 84 | app: nginx-deployment 85 | name: nginx-deployment 86 | spec: 87 | ports: 88 | - port: 80 89 | protocol: TCP 90 | targetPort: 80 91 | nodePort: 30080 # add node port 92 | selector: 93 | app: nginx-deployment 94 | type: NodePort 95 | status: 96 | loadBalancer: {} 97 | ``` 98 | 99 | ```bash 100 | # verification 101 | kubectl get service nginx-deployment 102 | # NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 103 | # nginx-deployment NodePort 10.43.166.122 80:30080/TCP 38s 104 | ``` 105 |

106 | 107 |
108 | 109 | ### Clean up 110 | 111 | ```bash 112 | # clean up 113 | kubectl delete service nginx-clusterip nginx-deployment nginx-nodeport-svc 114 | kubectl delete pod nginx-clusterip nginx-nodeport 115 | kubectl delete deployment nginx-deployment 116 | ``` -------------------------------------------------------------------------------- /topics/taints_tolerations.md: -------------------------------------------------------------------------------- 1 | # [Taints and Tolerations](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) 2 | 3 |
4 | 5 | ### Create a taint on worker node `node01` with key `app` with value `critical` and effect of `NoSchedule` 6 | 7 |
8 | 9 |
show

10 | 11 | ```bash 12 | kubectl taint node node01 app=critical:NoSchedule 13 | ``` 14 | 15 |
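With the taint in place, only pods that tolerate it can be scheduled on `node01`. A sketch of a matching toleration (the pod name is hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: critical-app      # hypothetical name
spec:
  containers:
  - name: critical-app
    image: nginx
  tolerations:            # matches the taint app=critical:NoSchedule
  - key: "app"
    operator: "Equal"
    value: "critical"
    effect: "NoSchedule"
```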

16 | 17 |
18 | 19 | -------------------------------------------------------------------------------- /topics/trivy.md: -------------------------------------------------------------------------------- 1 | # [Trivy](https://aquasecurity.github.io/trivy) 2 | 3 | A Simple and Comprehensive Vulnerability Scanner for Containers and other Artifacts, Suitable for CI. 4 | 5 |
5 | 6 | ### Trivy Installation 7 | ```bash 8 | apt-get update 9 | apt-get install wget apt-transport-https gnupg lsb-release -y 10 | wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add - 11 | echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list 12 | 13 | # Update repo and install trivy 14 | apt-get update 15 | apt-get install trivy -y 16 | 17 | # OR (if you encounter issues 'Unable to locate package trivy') 18 | 19 | wget https://github.com/aquasecurity/trivy/releases/download/v0.17.0/trivy_0.17.0_Linux-64bit.deb 20 | sudo dpkg -i trivy_0.17.0_Linux-64bit.deb 21 | 22 | ``` 23 | 24 |
26 | 27 | ### Scan the following images with Trivy and check the `CRITICAL` issues. 28 | - nginx:1.21.4-alpine 29 | - amazonlinux:2.0.20211201.0 30 | - nginx:1.21.4 31 | 32 |
show

33 | 34 | ```bash 35 | docker pull nginx:1.21.4 36 | trivy image --severity CRITICAL nginx:1.21.4 37 | # nginx:1.21.4 (debian 11.1) 38 | # ========================== 39 | # Total: 7 (CRITICAL: 7) 40 | 41 | docker pull nginx:1.21.4-alpine 42 | trivy image --severity CRITICAL nginx:1.21.4-alpine 43 | # nginx:1.21.4-alpine (alpine 3.14.3) 44 | # =================================== 45 | # Total: 0 (CRITICAL: 0) 46 | 47 | docker pull amazonlinux:2.0.20211201.0 48 | trivy image --severity CRITICAL amazonlinux:2.0.20211201.0 49 | # amazonlinux:2.0.20211201.0 (amazon 2 (Karoo)) 50 | # ============================================= 51 | # Total: 0 (CRITICAL: 0) 52 | 53 | ``` 54 | 55 |

56 | 57 |
58 | 59 | ### Scan the following images with Trivy and check the `HIGH` issues with output in json format and redirected to `/root/nginx.json` 60 | - nginx:1.21.4 61 | 62 |
show

63 | 64 | ```bash 65 | docker pull nginx:1.21.4 66 | trivy image --severity HIGH --format json --output /root/nginx.json nginx:1.21.4 67 | # with --output, nothing is printed to stdout; 68 | # the JSON report is written to /root/nginx.json 69 | 70 | 71 | ``` 72 | 73 |
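The report can then be inspected with standard shell tools. A rough sketch that counts findings by matching the `Severity` field; the sample file below is a hypothetical miniature standing in for `/root/nginx.json`, and the key names assume Trivy's JSON schema:

```bash
# hypothetical miniature of a Trivy JSON report, for illustration only
cat << 'EOF' > /tmp/nginx-sample.json
{"Results":[{"Vulnerabilities":[
  {"VulnerabilityID":"CVE-0000-0001","Severity":"HIGH"},
  {"VulnerabilityID":"CVE-0000-0002","Severity":"HIGH"}
]}]}
EOF

# count HIGH findings by matching the Severity field
grep -c '"Severity":"HIGH"' /tmp/nginx-sample.json   # 2
```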

74 | 75 | -------------------------------------------------------------------------------- /topics/volumes.md: -------------------------------------------------------------------------------- 1 | # [Volumes](https://kubernetes.io/docs/concepts/storage/volumes/) 2 | 3 | Kubernetes supports many types of volumes. A Pod can use any number of volume types simultaneously. Ephemeral volume types have a lifetime of a pod, but persistent volumes exist beyond the lifetime of a pod. When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes. For any kind of volume in a given pod, data is preserved across container restarts. 4 | 5 | - [Config Volumes](#config-volumes) 6 | - [Secret Volumes](#secret-volumes) 7 | - [Ephemeral Volumes](#ephemeral-volumes) 8 | - [Persistent Volumes](#persistent-volumes) 9 | 10 |
11 | 12 | ## Config Volumes 13 | 14 |
15 | 16 | ### Create a new pod `nginx-3` with `nginx` image and mount the configmap `db-config-1` as a volume named `db-config` and mount path `/config` 17 | 18 |
show

19 | 20 | ```yaml 21 | cat << EOF > nginx-3.yaml 22 | apiVersion: v1 23 | kind: Pod 24 | metadata: 25 | name: nginx-3 26 | spec: 27 | containers: 28 | - image: nginx 29 | name: nginx-3 30 | volumeMounts: 31 | - name: db-config 32 | mountPath: "/config" 33 | readOnly: true 34 | volumes: 35 | - name: db-config 36 | configMap: 37 | name: db-config-1 38 | EOF 39 | 40 | kubectl apply -f nginx-3.yaml 41 | 42 | kubectl exec nginx-3 -- cat /config/DB_HOST # verify mounted config 43 | # db.example.com 44 | ``` 45 | 46 |

47 | 48 |
49 | 50 | ## Secret Volumes 51 | 52 |
53 | 54 | ### Create a new pod `nginx-4` with `nginx` image and mount the secret `db-secret-1` as a volume named `db-secret` and mount path `/secret` 55 | 56 |
show

57 | 58 | ```yaml 59 | cat << EOF > nginx-4.yaml 60 | apiVersion: v1 61 | kind: Pod 62 | metadata: 63 | name: nginx-4 64 | spec: 65 | containers: 66 | - image: nginx 67 | name: nginx-4 68 | volumeMounts: 69 | - name: db-secret 70 | mountPath: "/secret" 71 | readOnly: true 72 | volumes: 73 | - name: db-secret 74 | secret: 75 | secretName: db-secret-1 76 | EOF 77 | 78 | kubectl apply -f nginx-4.yaml 79 | ``` 80 | 81 | ```bash 82 | kubectl exec nginx-4 -- cat /secret/DB_HOST # verify env variables 83 | # db.example.com 84 | ``` 85 | 86 |

87 | 88 |
89 | 90 | ## [Ephemeral Volumes](https://kubernetes.io/docs/concepts/storage/ephemeral-volumes/) 91 | 92 |
93 | 94 | ### Create the redis pod with `redis` image with volume `redis-storage` as ephemeral storage mounted at `/data/redis`. 95 | 96 |
show

97 | 98 | ```yaml 99 | cat << EOF > redis.yaml 100 | apiVersion: v1 101 | kind: Pod 102 | metadata: 103 | name: redis 104 | spec: 105 | containers: 106 | - name: redis 107 | image: redis 108 | volumeMounts: 109 | - name: redis-storage 110 | mountPath: /data/redis 111 | volumes: 112 | - name: redis-storage 113 | emptyDir: {} # Ephemeral storage 114 | EOF 115 | 116 | kubectl apply -f redis.yaml 117 | ``` 118 | 119 |

120 | 121 |
122 | 123 | ### Create a pod as follows: 124 | - Name: non-persistent-redis 125 | - container Image:redis 126 | - Volume with name: cache-control 127 | - Mount path: /data/redis 128 | - The pod should launch in the staging namespace and the volume must not be persistent. 129 | 130 |
show

131 | 132 | ```yaml 133 | kubectl create namespace staging 134 | 135 | cat << EOF > non-persistent-redis.yaml 136 | apiVersion: v1 137 | kind: Pod 138 | metadata: 139 | name: non-persistent-redis 140 | namespace: staging 141 | spec: 142 | containers: 143 | - name: redis 144 | image: redis 145 | volumeMounts: 146 | - name: cache-control 147 | mountPath: /data/redis 148 | volumes: 149 | - name: cache-control 150 | emptyDir: {} 151 | EOF 152 | 153 | kubectl apply -f non-persistent-redis.yaml 154 | ``` 155 | 156 |

157 | 158 |
159 | 160 | ## [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) 161 | 162 |
163 | 164 | ### Create a persistent volume with name `app-data`, of capacity `200Mi` and access mode `ReadWriteMany`. The type of volume is `hostPath` and its location is `/srv/app-data`. 165 | 166 |
show

167 | 168 | ```yaml 169 | cat << EOF > app-data.yaml 170 | apiVersion: v1 171 | kind: PersistentVolume 172 | metadata: 173 | name: app-data 174 | spec: 175 | storageClassName: manual 176 | capacity: 177 | storage: 200Mi 178 | accessModes: 179 | - ReadWriteMany 180 | hostPath: 181 | path: "/srv/app-data" 182 | EOF 183 | 184 | kubectl apply -f app-data.yaml 185 | 186 | kubectl get pv 187 | # NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 188 | # app-data 200Mi RWX Retain Available manual 189 | ``` 190 | 191 |

192 | 193 |
194 | 195 | ### Create the following 196 | - PV `task-pv-volume` with storage `10Mi`, Access Mode `ReadWriteOnce` on hostpath `/mnt/data`. 197 | - PVC `task-pv-claim` to use the PV. 198 | - Create a pod `task-pv-pod` with `nginx` image to use the PVC mounted on `/usr/share/nginx/html` 199 | 200 |
show

201 | 202 | ```yaml 203 | cat << EOF > task-pv-volume.yaml 204 | apiVersion: v1 205 | kind: PersistentVolume 206 | metadata: 207 | name: task-pv-volume 208 | spec: 209 | storageClassName: manual 210 | capacity: 211 | storage: 10Mi 212 | accessModes: 213 | - ReadWriteOnce 214 | hostPath: 215 | path: "/mnt/data" 216 | EOF 217 | 218 | kubectl apply -f task-pv-volume.yaml 219 | 220 | kubectl get pv 221 | # NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 222 | # task-pv-volume 10Mi RWO Retain Available manual 6s 223 | ``` 224 | 225 | ```yaml 226 | cat << EOF > task-pv-claim.yaml 227 | apiVersion: v1 228 | kind: PersistentVolumeClaim 229 | metadata: 230 | name: task-pv-claim 231 | spec: 232 | storageClassName: manual 233 | accessModes: 234 | - ReadWriteOnce 235 | resources: 236 | requests: 237 | storage: 10Mi 238 | EOF 239 | 240 | kubectl apply -f task-pv-claim.yaml 241 | 242 | kubectl get pvc 243 | #NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 244 | #task-pv-claim Bound task-pv-volume 10Mi RWO manual 12s 245 | kubectl get pv # check status bound 246 | #NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 247 | #task-pv-volume 10Mi RWO Retain Bound default/task-pv-claim manual 64s 248 | ``` 249 | 250 | ```yaml 251 | cat << EOF > task-pv-pod.yaml 252 | apiVersion: v1 253 | kind: Pod 254 | metadata: 255 | name: task-pv-pod 256 | spec: 257 | volumes: 258 | - name: task-pv-storage 259 | persistentVolumeClaim: 260 | claimName: task-pv-claim 261 | containers: 262 | - name: task-pv-pod 263 | image: nginx 264 | ports: 265 | - containerPort: 80 266 | name: "http-server" 267 | volumeMounts: 268 | - mountPath: "/usr/share/nginx/html" 269 | name: task-pv-storage 270 | EOF 271 | 272 | kubectl apply -f task-pv-pod.yaml 273 | ``` 274 | 275 |

276 | 277 |
278 | 279 | ### Get the storage classes (Storage class does not belong to namespace) 280 | 281 |
282 | 283 |
show

284 | 285 | ```bash 286 | kubectl get storageclass 287 | # OR 288 | kubectl get sc 289 | ``` 290 |

291 | 292 |
293 | 294 | ### Clean up 295 | 296 |
show

297 | 298 | ```bash 299 | rm nginx-3.yaml nginx-4.yaml redis.yaml non-persistent-redis.yaml app-data.yaml task-pv-volume.yaml task-pv-claim.yaml task-pv-pod.yaml 300 | kubectl delete pod task-pv-pod redis nginx-3 nginx-4 --force 301 | kubectl delete pod non-persistent-redis -n staging --force 302 | kubectl delete pvc task-pv-claim 303 | kubectl delete pv task-pv-volume app-data 304 | kubectl delete namespace staging 305 | ``` 306 |

306 | 307 |
308 | 309 | --------------------------------------------------------------------------------