├── README.md
├── domain-1-cluster-setup
├── Readme.md
├── apiserver-https.md
├── audit-logs.md
├── authorization.md
├── certificate-auth-k8s.md
├── certificate-workflow.md
├── configure-apiserver.md
├── configure-ca.md
├── downside-token-auth.md
├── encryption-provider.md
├── etcd-https.md
├── etcd-systemd.md
├── ingress-controller.md
├── ingress-security.md
├── ingress-ssl-annotation.md
├── ingress.md
├── install-etcd.md
├── kubeadm-install.md
├── kubeadm.md
├── kubelet-security.md
├── mutual-tls.md
├── netpol-02.md
├── netpol-practical.md
├── netpol-structure.md
├── nginx-controller.yaml
├── systemd.md
├── taint-toleration.md
├── token-authentication.md
└── verify-binaries.md
├── domain-2-cluster-hardening
├── Readme.md
├── clusterrole.md
├── deploying-ingress.md
├── kubeadm-automate.md
├── kubeadm-automate.sh
├── kubeadm-version.md
├── kubeadm-worker-automate.sh
├── projected-volume.md
├── rbac.md
├── role-rolebinding.md
├── sa-pointers.md
├── sa-projectedvolume.md
├── sa-security.md
├── service-account.md
├── upgrade-kubeadm-master.md
├── upgrade-kubeadm-worker.md
└── user-rbac.md
├── domain-3-minimize-microservice-vulnerability
├── Readme.md
├── ac-alwayspullimages.md
├── api-user.crt
├── api-user.key
├── capabilities-pod.md
├── capabilities-practical.md
├── cilium-deny.md
├── cilium-dns.md
├── cilium-encryption-ipsec.md
├── cilium-encryption-wireguard.md
├── cilium-entities.md
├── cilium-install.md
├── cilium-layer-4.md
├── cilium-netpol.md
├── cnp-service.md
├── hack-case-01.md
├── hostpath.md
├── hostpid.md
├── image-pull-policy.md
├── imagepolicywebhook-custom.md
├── imagewebhook.md
├── kind-install.sh
├── kubeadm-cilium.sh
├── kubeadm-master.sh
├── linux-capability.md
├── mounting-secrets.md
├── pod-securitycontext.yaml
├── privileged-pod.md
├── privileged.yaml
├── pss-modes.md
├── pss-notes.md
├── pss.md
├── secrets.md
├── security-context-ro.md
├── security-context.md
├── webhook.crt
└── webhook.key
├── domain-4-system-hardening
├── Readme.md
├── apparmor-k8s.md
├── apparmor.md
├── gvisor.md
├── kubeadm-calico.md
├── kubeadm-containerd.md
├── netpol-01.md
├── netpol.md
└── oci.md
├── domain-5-supply-chain-security
├── Readme.md
├── docker-daemon.md
├── docker-security.md
├── docker-tls.md
├── dockerfile-best-practice.md
├── kube-bench.md
├── sbom.md
├── static-analysis.md
└── trivy.md
└── domain-6-monitor-log-runtimesec
├── Readme.md
├── audit-log-detailed.md
├── custom-falco-rules.md
├── falco-config-file.md
├── falco-exam-perspective.md
├── falco-install.md
├── falco-mem-rule.md
├── falco-practical.md
├── install-falco.md
├── kubeadm-automate.sh
├── sysdig.md
└── writing-falco-rules.md
/README.md:
--------------------------------------------------------------------------------
1 | # Certified Kubernetes Security Specialist 2025
2 |
3 | This Git repository contains all the commands and code files used throughout the Certified Kubernetes Security Specialist (CKS) video course by Zeal Vora.
4 |
5 | We also have a Discord community for support-related discussions and for connecting with other students taking the same course. Feel free to join.
6 |
7 | ```sh
8 | https://kplabs.in/chat
9 | ```
10 |
11 | Welcome to the community again, and we look forward to seeing you certified! :)
12 |
13 |
14 |
15 |
16 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/Readme.md:
--------------------------------------------------------------------------------
1 | # Domain 1 - Cluster Setup
2 |
3 | The code mentioned in this document is used in the Certified Kubernetes Security Specialist 2025 course.
4 |
5 |
6 | # Video-Document Mapper
7 |
8 | | Sr No | Document Link |
9 | | ------ | ------ |
10 | | 1 | [Configure etcd Binaries][PlDa] |
11 | | 2 | [Configure Certificate Authority][PlDb] |
12 | | 3 | [Workflow - Issuance of Signed Certificates][PlDc] |
13 | | 4 | [etcd - Transport Security with HTTPS][PlDd] |
14 | | 5 | [Practical - Mutual TLS Authentication][PlDe] |
15 | | 6 | [Integrating Systemd with etcd][PlDf] |
16 | | 7 | [Configuring API Server][PlDg] |
17 | | 8 | [Transport Security for API Server][PlDh] |
18 | | 9 | [Static Token Authentication][PlDi] |
19 | | 10 | [Downsides - Static Token Authentication][PlDj] |
20 | | 11 | [Implementing X509 Client Authentication][PlDk] |
21 | | 12 | [Authorization][PlDl] |
22 | | 13 | [Encryption Providers][PlDm] |
23 | | 14 | [Implementing Auditing][PlDn] |
24 | | 15 | [Setting up kubeadm cluster][PlDo] |
25 | | 16 | [Revising Taints and Tolerations][PlDp] |
26 | | 17 | [Kubelet Security][PlDq] |
27 | | 18 | [Verifying Platform Binaries][PlDr] |
28 | | 19 | [Practical - Ingress with TLS][PlDs] |
29 | | 20 | [Ingress Annotation - SSL Redirect][PlDt] |
30 | | 21 | [Structure of Network Policy][PlDu] |
31 | | 22 | [Practical - Network Policies][PlDv] |
32 | | 23 | [Network Policies - Except, Port and Protocol][PlDw] |
33 |
34 | [PlDa]: <./install-etcd.md>
35 | [PlDb]: <./configure-ca.md>
36 | [PlDc]: <./certificate-workflow.md>
37 | [PlDd]: <./etcd-https.md>
38 | [PlDe]: <./mutual-tls.md>
39 | [PlDf]: <./etcd-systemd.md>
40 | [PlDg]: <./configure-apiserver.md>
41 | [PlDh]: <./apiserver-https.md>
42 | [PlDi]: <./token-authentication.md>
43 | [PlDj]: <./downside-token-auth.md>
44 | [PlDk]: <./certificate-auth-k8s.md>
45 | [PlDl]: <./authorization.md>
46 | [PlDm]: <./encryption-provider.md>
47 | [PlDn]: <./audit-logs.md>
48 | [PlDo]: <./kubeadm-install.md>
49 | [PlDp]: <./taint-toleration.md>
50 | [PlDq]: <./kubelet-security.md>
51 | [PlDr]: <./verify-binaries.md>
52 | [PlDs]: <./ingress-security.md>
53 | [PlDt]: <./ingress-ssl-annotation.md>
54 | [PlDu]: <./netpol-structure.md>
55 | [PlDv]: <./netpol-practical.md>
56 | [PlDw]: <./netpol-02.md>
57 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/apiserver-https.md:
--------------------------------------------------------------------------------
1 | #### Step 1 - Verify Certificate Details
2 | ```sh
3 | openssl s_client -showcerts -connect localhost:6443 2>/dev/null | openssl x509 -inform pem -noout -text
4 | ```
5 |
6 | #### Step 2 - Generate Configuration File for CSR Creation:
7 | ```sh
8 | cd /root/certificates
9 | ```
10 |
11 | #### Verification
12 | ```sh
13 | openssl s_client -showcerts -connect localhost:6443 2>/dev/null | openssl x509 -inform pem -noout -text
14 |
15 | curl -k https://localhost:6443
16 | ```
62 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/audit-logs.md:
--------------------------------------------------------------------------------
1 | #### Reference Websites:
2 |
3 | https://jsonformatter.curiousconcept.com/#
4 |
5 | #### Step 1 - Create Sample Audit Policy File:
6 | ```sh
7 | nano /root/certificates/logging.yaml
8 | ```
9 | ```sh
10 | apiVersion: audit.k8s.io/v1
11 | kind: Policy
12 | rules:
13 | - level: Metadata
14 | ```
15 |
16 | #### Step 2 - Audit Configuration:
17 | ```sh
18 | nano /etc/systemd/system/kube-apiserver.service
19 | ```
20 | ```sh
21 | --audit-policy-file=/root/certificates/logging.yaml --audit-log-path=/var/log/api-audit.log --audit-log-maxage=30 --audit-log-maxbackup=10 --audit-log-maxsize=100
22 | ```
23 | ```sh
24 | systemctl daemon-reload
25 | systemctl restart kube-apiserver
26 | ```
27 | #### Step 3 - Run Some Queries Using the Bob User
28 |
29 | ```sh
30 | kubectl get secret --server=https://127.0.0.1:6443 --client-certificate /root/certificates/bob.crt --certificate-authority /root/certificates/ca.crt --client-key /root/certificates/bob.key
31 | ```
32 | #### Step 4: Verification
33 | ```sh
34 | cd /var/log
35 | grep -i bob api-audit.log
36 | ```
37 |
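Audit events written at the `Metadata` level are single-line JSON objects, one per request. As a sketch of what the `grep` in Step 4 matches, here is a hypothetical event (the field values are illustrative, not captured from a real cluster) filtered the same way:

```sh
# One made-up audit event whose shape follows audit.k8s.io/v1
cat <<'EOF' > /tmp/api-audit-sample.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","verb":"list","user":{"username":"bob","groups":["system:masters"]},"requestURI":"/api/v1/namespaces/default/secrets","responseStatus":{"code":200}}
EOF

# Same filter as Step 4, run against the sample
grep -i bob /tmp/api-audit-sample.log
```

Because the level is `Metadata`, the event records who did what (`user`, `verb`, `requestURI`) but not the request or response bodies.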
--------------------------------------------------------------------------------
/domain-1-cluster-setup/authorization.md:
--------------------------------------------------------------------------------
1 |
2 | #### Step 1 - Enable AlwaysDeny Authorization Mode
3 |
4 | ```sh
5 | nano /etc/systemd/system/kube-apiserver.service
6 | ```
7 | ```sh
8 | --authorization-mode=AlwaysDeny
9 | ```
10 | ```sh
11 | systemctl daemon-reload
12 | systemctl restart kube-apiserver
13 | ```
14 | ```sh
15 | kubectl get secret --server=https://127.0.0.1:6443 --client-certificate /root/certificates/alice.crt --certificate-authority /root/certificates/ca.crt --client-key /root/certificates/alice.key
16 | ```
17 | #### Step 2 - Enable RBAC Authorization Mode
18 |
19 | ```sh
20 | nano /etc/systemd/system/kube-apiserver.service
21 | ```
22 | ```sh
23 | --authorization-mode=RBAC
24 | ```
25 | ```sh
26 | systemctl daemon-reload
27 | systemctl restart kube-apiserver
28 | ```
29 | ```sh
30 | kubectl get secret --server=https://127.0.0.1:6443 --client-certificate /root/certificates/alice.crt --certificate-authority /root/certificates/ca.crt --client-key /root/certificates/alice.key
31 | ```
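The flag also accepts a comma-separated list of modes, evaluated left to right until one module admits or denies the request. A combination commonly seen on kubeadm-built clusters (not used in this lab's single-mode steps) is:

```sh
# Node authorizes kubelet API requests; RBAC handles everything else
--authorization-mode=Node,RBAC
```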
32 | #### Step 3 - Create Super User
33 |
34 | ```sh
35 | cd /root/certificates
36 | openssl genrsa -out bob.key 2048
37 | openssl req -new -key bob.key -subj "/CN=bob/O=system:masters" -out bob.csr
38 | openssl x509 -req -in bob.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out bob.crt -days 1000
39 | ```
40 | #### Step 4 Verification:
41 |
42 | ```sh
43 | kubectl get secret --server=https://127.0.0.1:6443 --client-certificate /root/certificates/bob.crt --certificate-authority /root/certificates/ca.crt --client-key /root/certificates/bob.key
44 | ```
--------------------------------------------------------------------------------
/domain-1-cluster-setup/certificate-auth-k8s.md:
--------------------------------------------------------------------------------
1 | ### Documentation Referred:
2 |
3 | https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
4 |
5 | #### Step 1 Creating Certificate for Alice:
6 | ```sh
7 | cd /root/certificates
8 | ```
9 | ```sh
10 | openssl genrsa -out alice.key 2048
11 | openssl req -new -key alice.key -subj "/CN=alice/O=developers" -out alice.csr
12 | openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out alice.crt -days 1000
13 | ```
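Before pointing the API server at the CA, it is worth confirming that the certificate's subject carries the values Kubernetes will authenticate: `CN` becomes the username and `O` the group. A self-contained sketch in a scratch directory (the `/tmp/cks-cert-demo` path and `demo-*`/`alice-demo` names are illustrative):

```sh
# Throwaway CA plus a user certificate mirroring Step 1
mkdir -p /tmp/cks-cert-demo && cd /tmp/cks-cert-demo
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key -subj "/CN=demo-ca" -days 1 -out demo-ca.crt
openssl genrsa -out alice-demo.key 2048
openssl req -new -key alice-demo.key -subj "/CN=alice/O=developers" -out alice-demo.csr
openssl x509 -req -in alice-demo.csr -CA demo-ca.crt -CAkey demo-ca.key -CAcreateserial -out alice-demo.crt -days 1

# The subject line shows the username (CN) and group (O) Kubernetes will see
openssl x509 -in alice-demo.crt -noout -subject
```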
14 | #### Step 2 Set ClientCA flag in API Server:
15 |
16 | ```sh
17 | nano /etc/systemd/system/kube-apiserver.service
18 | ```
19 | ```sh
20 | --client-ca-file /root/certificates/ca.crt
21 | ```
22 | ```sh
23 | systemctl daemon-reload
24 | systemctl restart kube-apiserver
25 | ```
26 | #### Step 3 Verification:
27 | ```sh
28 | kubectl get secret --server=https://127.0.0.1:6443 --client-certificate /root/certificates/alice.crt --certificate-authority /root/certificates/ca.crt --client-key /root/certificates/alice.key
29 | ```
30 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/certificate-workflow.md:
--------------------------------------------------------------------------------
1 | #### Step 1 - Generate Client CSR and Client Key:
2 | ```sh
3 | cd /root/certificates
4 | ```
5 | ```sh
6 | openssl genrsa -out zeal.key 2048
7 |
8 | openssl req -new -key zeal.key -subj "/CN=zealvora" -out zeal.csr
9 | ```
10 | #### Step 2 - Sign the Client CSR with Certificate Authority
11 | ```sh
12 | openssl x509 -req -in zeal.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out zeal.crt -days 1000
13 | ```
14 | #### Step 3 - Verify Client Certificate
15 | ```sh
16 | openssl x509 -in zeal.crt -text -noout
17 |
18 | openssl verify -CAfile ca.crt zeal.crt
19 | ```
20 |
21 | #### Step 4 - Delete the Client Certificate and Key
22 | ```sh
23 | rm -f zeal.crt zeal.key zeal.csr
24 | ```
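A signed certificate only verifies against the CA that issued it, which is exactly what lets the API server trust one CA and reject everything else. A scratch demonstration with two throwaway CAs (all paths and names under `/tmp/cks-verify-demo` are illustrative):

```sh
mkdir -p /tmp/cks-verify-demo && cd /tmp/cks-verify-demo
# Two independent throwaway CAs
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca1.key -subj "/CN=ca-one" -days 1 -out ca1.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca2.key -subj "/CN=ca-two" -days 1 -out ca2.crt

# Client key + CSR, signed by ca-one (mirrors Steps 1 and 2)
openssl genrsa -out demo.key 2048
openssl req -new -key demo.key -subj "/CN=zealvora" -out demo.csr
openssl x509 -req -in demo.csr -CA ca1.crt -CAkey ca1.key -CAcreateserial -out demo.crt -days 1

# Verifies against the issuing CA, fails against the unrelated one
openssl verify -CAfile ca1.crt demo.crt
openssl verify -CAfile ca2.crt demo.crt || echo "rejected by ca-two, as expected"
```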
25 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/configure-apiserver.md:
--------------------------------------------------------------------------------
1 | #### Step 1: Download Kubernetes Server Binaries
2 | ```sh
3 | cd /root/binaries
4 |
5 | wget https://dl.k8s.io/v1.32.1/kubernetes-server-linux-amd64.tar.gz
6 |
7 | tar -xzvf kubernetes-server-linux-amd64.tar.gz
8 |
9 | ls -lh /root/binaries/kubernetes/server/bin/
10 |
11 | cd /root/binaries/kubernetes/server/bin/
12 |
13 | cp kube-apiserver kubectl /usr/local/bin/
14 | ```
15 |
16 | #### Step 2 - Generate Client Certificate for API Server (etcd authentication):
17 | ```sh
18 | cd /root/certificates
19 | ```
20 | ```sh
21 | {
22 | openssl genrsa -out api-etcd.key 2048
23 |
24 | openssl req -new -key api-etcd.key -subj "/CN=kube-apiserver" -out api-etcd.csr
25 |
26 | openssl x509 -req -in api-etcd.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out api-etcd.crt -days 2000
27 | }
28 | ```
29 |
30 | #### Step 3 - Generate Service Account Certificates
31 | ```sh
32 | {
33 | openssl genrsa -out service-account.key 2048
34 |
35 | openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
36 |
37 | openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service-account.crt -days 100
38 | }
39 | ```
40 | #### Step 4 - Start kube-apiserver:
41 | ```sh
42 | /usr/local/bin/kube-apiserver --advertise-address=159.65.147.161 --etcd-cafile=/root/certificates/ca.crt --etcd-certfile=/root/certificates/api-etcd.crt --etcd-keyfile=/root/certificates/api-etcd.key --service-cluster-ip-range 10.0.0.0/24 --service-account-issuer=https://127.0.0.1:6443 --service-account-key-file=/root/certificates/service-account.crt --service-account-signing-key-file=/root/certificates/service-account.key --etcd-servers=https://127.0.0.1:2379
43 | ```
44 | #### Step 5 - Verify
45 |
46 | ```sh
47 | netstat -ntlp
48 | curl -k https://localhost:6443
49 | ```
50 |
51 | #### Step 6 - Integrate Systemd with API server
52 |
53 | Change the IP address in --advertise-address
54 |
55 | ```sh
56 | nano /etc/systemd/system/kube-apiserver.service
57 | ```
58 |
41 | ```sh
42 | curl -H "Host: kplabs.internal"
43 | ```
43 | ```
44 |
45 | #### Step 7: Delete All Resource
46 |
47 | ```sh
48 | kubectl delete -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/cloud/deploy.yaml
49 |
50 | kubectl delete ingress main-ingress
51 |
52 | kubectl delete service example-service
53 | kubectl delete service kplabs-service
54 | kubectl delete pod example-pod
55 | kubectl delete pod kplabs-pod
56 | ```
57 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/ingress-security.md:
--------------------------------------------------------------------------------
1 | ### Documentation Referred:
2 |
3 | https://kubernetes.io/docs/concepts/services-networking/ingress/
4 |
5 | ### Step 1 - Create Basic Pod and Service
6 | ```sh
7 | kubectl run example-pod --image=nginx
8 |
9 | kubectl expose pod example-pod --name example-service --port=80 --target-port=80
10 |
11 | kubectl get service
12 |
13 | kubectl describe service example-service
14 | ```
15 | ### Step 2 - Configure Nginx Ingress Controller
16 | ```sh
17 | kubectl create -f https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-1-cluster-setup/nginx-controller.yaml
18 |
19 | kubectl get pods -n ingress-nginx
20 |
21 | kubectl get service -n ingress-nginx
22 |
23 | ```
24 |
25 | ### Step 3 - Create Self Signed Certificate for Domain:
26 | ```sh
27 | mkdir /root/ingress
28 | cd /root/ingress
29 | openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ingress.key -out ingress.crt -subj "/CN=example.internal/O=security"
30 | ```
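One caveat with the CN-only certificate above: it carries no Subject Alternative Name, and strict TLS clients (including modern browsers) match the hostname against the SAN rather than the CN. Assuming OpenSSL 1.1.1+ for `-addext`, a scratch comparison (the `/tmp/cks-ingress-demo` path and file names are illustrative):

```sh
mkdir -p /tmp/cks-ingress-demo && cd /tmp/cks-ingress-demo
# CN-only certificate, as in Step 3
openssl req -x509 -nodes -days 1 -newkey rsa:2048 -keyout cn-only.key -out cn-only.crt -subj "/CN=example.internal/O=security"
# Same subject, plus a SAN entry
openssl req -x509 -nodes -days 1 -newkey rsa:2048 -keyout san.key -out san.crt -subj "/CN=example.internal/O=security" -addext "subjectAltName=DNS:example.internal"

# Only the second certificate prints a Subject Alternative Name section
openssl x509 -in san.crt -noout -text | grep -A1 "Subject Alternative Name"
```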
31 |
32 | ### Step 4 - Verify the Default TLS Certificate
33 | Use the NodePort associated with TLS
34 | ```sh
35 | curl -kv :NodePort
36 | ```
37 | ### Step 5 - Create Kubernetes TLS based secret :
38 | ```sh
39 | kubectl create secret tls tls-certificate --key ingress.key --cert ingress.crt
40 | kubectl get secret tls-certificate -o yaml
41 | ```
42 | ### Step 6 - Create Kubernetes Ingress with TLS:
43 | ```sh
44 | kubectl create ingress demo-ingress --class=nginx --rule=example.internal/*=example-service:80,tls=tls-certificate
45 | ```
46 | ### Step 7 - Make a request to Controller:
47 | ```sh
48 | kubectl get service -n ingress-nginx
49 | ```
50 | Add the `/etc/hosts` entry for mapping before running this command
51 | ```sh
52 | curl -kv https://example.internal:31893
53 | ```
54 |
56 | ### Step 8 - Delete All Resources
57 |
58 | ALERT: Don't delete the resources created in this practical; they will be needed in the next video.
59 | ```sh
60 | kubectl delete pod example-pod
61 | kubectl delete service example-service
62 | kubectl delete ingress demo-ingress
63 | kubectl delete secret tls-certificate
64 |
65 | kubectl delete -f https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-1-cluster-setup/nginx-controller.yaml
66 | ```
--------------------------------------------------------------------------------
/domain-1-cluster-setup/ingress-ssl-annotation.md:
--------------------------------------------------------------------------------
1 | ### Pre-Requisite:
2 |
3 | Complete the setup mentioned in the `Practical - Ingress with TLS` lecture
4 |
5 | ### Study the default behaviour
6 | ```sh
7 | kubectl get service -n ingress-nginx
8 |
9 | curl -kv https://example.internal:
10 |
11 | curl -I http://example.internal:
12 | ```
13 | ### Modify the Ingress
14 | ```sh
15 | kubectl edit ingress demo-ingress
16 | ```
17 | ```sh
18 | metadata:
19 | annotations:
20 | nginx.ingress.kubernetes.io/ssl-redirect: "false"
21 | ```
22 |
23 | ### Test the Setup
24 | ```sh
25 | curl -I http://example.internal:
26 | ```
27 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/ingress.md:
--------------------------------------------------------------------------------
1 | #### Documentation Referred:
2 |
3 | https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
4 |
5 | ### Ingress Resource - Rule 1
6 | ```sh
7 | kubectl create ingress --help
8 |
9 | kubectl create ingress first-ingress --rule="example.internal/*=example-service:80"
10 |
11 | kubectl describe ingress first-ingress
12 | ```
13 | ### Ingress Resource - Rule 2
14 | ```sh
15 | kubectl create ingress second-ingress --rule="example.internal/*=example-service:80" --rule="kplabs.internal/*=kplabs-service:80"
16 | ```
17 |
18 | ### Generating Manifest File for Ingress Resource
19 |
20 | ```sh
21 | kubectl create ingress second-ingress --rule="example.internal/*=example-service:80" --rule="kplabs.internal/*=kplabs-service:80" --dry-run=client -o yaml
22 | ```
23 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/install-etcd.md:
--------------------------------------------------------------------------------
1 |
2 | #### Step 1: Create the Base Binaries Directory
3 |
4 | ```sh
5 | mkdir /root/binaries
6 |
7 | cd /root/binaries
8 | ```
9 | #### Step 2: Download and Copy the ETCD Binaries to Path
10 | ```sh
11 | wget https://github.com/etcd-io/etcd/releases/download/v3.5.18/etcd-v3.5.18-linux-amd64.tar.gz
12 |
13 | tar -xzvf etcd-v3.5.18-linux-amd64.tar.gz
14 |
15 | cd /root/binaries/etcd-v3.5.18-linux-amd64/
16 |
17 | cp etcd etcdctl /usr/local/bin/
18 | ```
19 | #### Step 3: Start etcd
20 | ```sh
21 | cd /tmp
22 | etcd
23 | ```
24 |
25 | #### Step 4: Verification - Store and Fetch Data from etcd
26 | ```sh
27 | etcdctl put key1 "value1"
28 | ```
29 | ```sh
30 | etcdctl get key1
31 | ```
32 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/kubeadm-install.md:
--------------------------------------------------------------------------------
1 |
2 | ### Control Plane Node Configuration
3 |
4 | #### Step 1: Setup containerd
5 | ```sh
28 | containerd config default > /etc/containerd/config.toml
29 | ```
30 | ```sh
31 | nano /etc/containerd/config.toml
32 | ```
33 | --> `SystemdCgroup = true`
34 |
35 | ```sh
36 | systemctl restart containerd
37 | ```
38 |
39 | #### Step 2: Kernel Parameter Configuration
40 | ```sh
41 | cat <<EOF | tee /etc/sysctl.d/k8s.conf
42 | net.bridge.bridge-nf-call-iptables = 1
43 | net.bridge.bridge-nf-call-ip6tables = 1
44 | net.ipv4.ip_forward = 1
45 | EOF
46 |
47 | sysctl --system
48 |
30 | containerd config default > /etc/containerd/config.toml
31 | ```
32 | ```sh
33 | nano /etc/containerd/config.toml
34 | ```
35 | --> `SystemdCgroup = true`
36 |
37 | ```sh
38 | systemctl restart containerd
39 | ```
40 |
41 | ##### Step 2: Kernel Parameter Configuration
42 | ```sh
43 | cat <<EOF | tee /etc/sysctl.d/k8s.conf
44 | net.bridge.bridge-nf-call-iptables = 1
45 | net.bridge.bridge-nf-call-ip6tables = 1
46 | net.ipv4.ip_forward = 1
47 | EOF
48 |
49 | sysctl --system
50 |
54 | openssl s_client -showcerts -connect localhost:6443 2>/dev/null | openssl x509 -inform pem -noout -text
55 | ```
--------------------------------------------------------------------------------
/domain-1-cluster-setup/mutual-tls.md:
--------------------------------------------------------------------------------
1 |
2 | #### Step 1 - Generate Client Certificate and Client Key:
3 | ```sh
4 | cd /root/certificates
5 | ```
6 | ```sh
7 | openssl genrsa -out client.key 2048
8 | openssl req -new -key client.key -subj "/CN=client" -out client.csr
9 | openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -extensions v3_req -days 2000
10 | ```
11 |
12 | #### Step 2 - Start Etcd Server (Tab 1):
13 | ```sh
14 | cd /root/certificates
15 |
16 | etcd --cert-file=etcd.crt --key-file=etcd.key --advertise-client-urls=https://127.0.0.1:2379 --client-cert-auth --trusted-ca-file=ca.crt --listen-client-urls=https://127.0.0.1:2379
17 | ```
18 |
19 | #### Connect to Etcd (Tab 2)
20 | ```sh
21 | cd /root/certificates
22 |
23 | etcdctl --endpoints=https://127.0.0.1:2379 --cacert=ca.crt --cert=client.crt --key=client.key put key1 "value1"
24 |
25 | etcdctl --endpoints=https://127.0.0.1:2379 --cacert=ca.crt --cert=client.crt --key=client.key get key1
26 | ```
27 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/netpol-02.md:
--------------------------------------------------------------------------------
1 |
2 | ### Example 1 - Except Field
3 |
4 | ```sh
5 | apiVersion: networking.k8s.io/v1
6 | kind: NetworkPolicy
7 | metadata:
8 | name: except
9 | spec:
10 | podSelector:
11 | matchLabels:
12 | role: database
13 | ingress:
14 | - from:
15 | - ipBlock:
16 | cidr: 172.17.0.0/16
17 | except:
18 | - 172.17.1.0/24
19 | policyTypes:
20 | - Ingress
21 | ```
22 | ```sh
23 | kubectl create -f except.yaml
24 | ```
25 |
26 |
27 | ### Example 2 - Port and Protocol
28 | ```sh
29 | apiVersion: networking.k8s.io/v1
30 | kind: NetworkPolicy
31 | metadata:
32 | name: multi-port-egress
33 | namespace: default
34 | spec:
35 | podSelector:
36 | matchLabels:
37 | role: db
38 | policyTypes:
39 | - Egress
40 | egress:
41 | - to:
42 | - ipBlock:
43 | cidr: 10.0.0.0/24
44 | ports:
45 | - protocol: TCP
46 | port: 32000
47 | endPort: 32768
48 | ```
49 | ```sh
50 | kubectl create -f port-proto.yaml
51 | ```
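`endPort` extends the single `port` into a contiguous range (32000-32768 here) and is only meaningful alongside a `port` value with its `protocol`. Even without a cluster, the manifest can be materialized and its range fields sanity-checked (the `/tmp` path is illustrative):

```sh
# Write the egress policy from Example 2 to a file
cat <<'EOF' > /tmp/port-proto-demo.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: multi-port-egress
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 32000
          endPort: 32768
EOF

# Both ends of the range must be present for a range match
grep -E 'port: 32000|endPort: 32768' /tmp/port-proto-demo.yaml
```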
--------------------------------------------------------------------------------
/domain-1-cluster-setup/netpol-practical.md:
--------------------------------------------------------------------------------
1 |
2 | ### Base Network Policy
3 |
4 | base-netpol.yaml
5 |
6 | ```sh
7 | apiVersion: networking.k8s.io/v1
8 | kind: NetworkPolicy
9 | metadata:
10 | name: base-network-policy
11 | ```
12 |
13 | ### Example 1
14 | example-1.yaml
15 | ```sh
16 | apiVersion: networking.k8s.io/v1
17 | kind: NetworkPolicy
18 | metadata:
19 | name: example-1
20 | spec:
21 | podSelector: {}
22 | policyTypes:
23 | - Ingress
24 | - Egress
25 | ```
26 |
27 | Create Pod for Testing
28 |
29 | ```sh
30 | kubectl run test-pod --image=alpine/curl -- sleep 36000
31 | kubectl get pods
32 | kubectl exec -it test-pod -- sh
33 | ping
34 | curl
35 | ping google.com
36 | ```
37 | ```sh
38 | kubectl create -f example-1.yaml
39 | kubectl get netpol
40 | kubectl describe netpol example-1
41 | ```
42 |
43 | Testing the setup
44 | ```sh
45 | kubectl exec -it test-pod -- sh
46 | ping google.com
47 | ```
48 |
49 | ```sh
50 | kubectl delete -f example-1.yaml
51 | ```
52 | ### Example 2
53 | example-2.yaml
54 | ```sh
55 | apiVersion: networking.k8s.io/v1
56 | kind: NetworkPolicy
57 | metadata:
58 | name: example-2
59 | spec:
60 | podSelector: {}
61 | ingress:
62 | - {}
63 | policyTypes:
64 | - Ingress
65 | - Egress
66 | ```
67 | ```sh
68 | kubectl create -f example-2.yaml
69 | kubectl describe netpol example-2
70 | ```
71 | Testing the Setup
72 | ```sh
73 | kubectl run random-pod --image=alpine/curl -- sleep 36000
74 | kubectl get pods -o wide
75 | kubectl exec -it random-pod -- sh
76 | ping
77 | ```
78 | ```sh
79 | kubectl create ns testing
80 | kubectl run random-pod -n testing --image=alpine/curl -- sleep 36000
81 | kubectl get pods -n testing
82 | kubectl exec -it random-pod -n testing -- sh
83 | ping
84 | ```
85 |
86 | ### Example 3
87 | example-3.yaml
88 | ```sh
89 | apiVersion: networking.k8s.io/v1
90 | kind: NetworkPolicy
91 | metadata:
92 | name: example-3
93 | spec:
94 | podSelector:
95 | matchLabels:
96 | role: suspicious
97 | policyTypes:
98 | - Ingress
99 | - Egress
100 | ```
101 | ```sh
102 | kubectl create -f example-3.yaml
103 | ```
104 |
105 | Testing the Setup
106 | ```sh
107 | kubectl run suspicious-pod --image=alpine/curl -- sleep 36000
108 | kubectl label pod suspicious-pod role=suspicious
109 | kubectl exec -it suspicious-pod -- sh
110 | ping google.com
111 | ```
112 | ### Example 4
113 | example-4.yaml
114 | ```sh
115 | apiVersion: networking.k8s.io/v1
116 | kind: NetworkPolicy
117 | metadata:
118 | name: example-4
119 | spec:
120 | podSelector:
121 | matchLabels:
122 | role: database
123 | ingress:
124 | - from:
125 | - podSelector:
126 | matchLabels:
127 | role: app
128 | policyTypes:
129 | - Ingress
130 | ```
131 | ```sh
132 | kubectl create -f example-4.yaml
133 | ```
134 | Testing the Setup
135 | ```sh
136 | kubectl run app-pod --image=alpine/curl -- sleep 36000
137 | kubectl run database-pod --image=alpine/curl -- sleep 36000
138 | kubectl get pods -o wide
139 |
140 | kubectl exec -it test-pod -- sh
141 | ping
142 |
143 | kubectl label pod app-pod role=app
144 | kubectl exec -it app-pod -- sh
145 | ping
146 | ```
147 | ```sh
148 | kubectl delete -f example-4.yaml
149 | ```
150 |
151 | ### Example 5
152 | example-5.yaml
153 | ```sh
154 | apiVersion: networking.k8s.io/v1
155 | kind: NetworkPolicy
156 | metadata:
157 | name: example-5
158 | namespace: production
159 | spec:
160 | podSelector: {}
161 | ingress:
162 | - from:
163 | - namespaceSelector:
164 | matchLabels:
165 | kubernetes.io/metadata.name: security
166 | policyTypes:
167 | - Ingress
168 | ```
169 | ```sh
170 | kubectl create -f example-5.yaml
171 | ```
172 |
173 | ```sh
174 | kubectl create ns production
175 | kubectl create ns security
176 |
177 | kubectl get ns --show-labels
178 |
179 | kubectl run prod-pod -n production --image=alpine/curl -- sleep 36000
180 | kubectl run security-pod -n security --image=alpine/curl -- sleep 36000
181 |
182 | kubectl exec -it security-pod -n security -- sh
183 | ping
184 | ```
185 | ```sh
186 | kubectl delete -f example-5.yaml
187 | ```
188 | ### Example 6
189 | example-6.yaml
190 | ```sh
191 | apiVersion: networking.k8s.io/v1
192 | kind: NetworkPolicy
193 | metadata:
194 | name: example-6
195 | namespace: production
196 | spec:
197 | podSelector: {}
198 | egress:
199 | - to:
200 | - ipBlock:
201 | cidr: 8.8.8.8/32
202 | policyTypes:
203 | - Egress
204 | ```
205 | ```sh
206 | kubectl create -f example-6.yaml
207 | ```
208 | ```sh
209 | kubectl exec -it prod-pod -n production -- sh
210 | ping 8.8.8.8
211 | ```
212 | ```sh
213 | kubectl delete -f example-6.yaml
214 | ```
215 | ### Remove the Created Resources
216 | ```sh
217 | kubectl delete pods --all
218 | kubectl delete pod security-pod -n security
219 | kubectl delete pod prod-pod -n production
220 | kubectl delete pods --all -n testing
221 |
222 | kubectl delete ns testing
223 | kubectl delete ns production
224 | kubectl delete ns security
225 | ```
226 |
227 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/netpol-structure.md:
--------------------------------------------------------------------------------
1 |
2 | ### Manifest File Used in Video
3 |
4 | first-netpol.yaml
5 |
6 | ```sh
7 | apiVersion: networking.k8s.io/v1
8 | kind: NetworkPolicy
9 | metadata:
10 | name: demo-network-policy
11 | spec:
12 | podSelector:
13 | matchLabels:
14 | env: production
15 | policyTypes:
16 | - Ingress
17 | - Egress
18 | ingress:
19 | - from:
20 | - podSelector:
21 | matchLabels:
22 | env: security
23 | egress:
24 | - to:
25 | - ipBlock:
26 | cidr: 8.8.8.8/32
27 | ```
28 | ```sh
29 | kubectl create -f first-netpol.yaml
30 |
31 | kubectl describe netpol demo-network-policy
32 | ```
33 | ### Delete the Resource created
34 | ```sh
35 | kubectl delete -f first-netpol.yaml
36 | ```
--------------------------------------------------------------------------------
/domain-1-cluster-setup/nginx-controller.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Namespace
3 | metadata:
4 | labels:
5 | app.kubernetes.io/instance: ingress-nginx
6 | app.kubernetes.io/name: ingress-nginx
7 | name: ingress-nginx
8 | ---
9 | apiVersion: v1
10 | automountServiceAccountToken: true
11 | kind: ServiceAccount
12 | metadata:
13 | labels:
14 | app.kubernetes.io/component: controller
15 | app.kubernetes.io/instance: ingress-nginx
16 | app.kubernetes.io/name: ingress-nginx
17 | app.kubernetes.io/part-of: ingress-nginx
18 | app.kubernetes.io/version: 1.12.0
19 | name: ingress-nginx
20 | namespace: ingress-nginx
21 | ---
22 | apiVersion: v1
23 | automountServiceAccountToken: true
24 | kind: ServiceAccount
25 | metadata:
26 | labels:
27 | app.kubernetes.io/component: admission-webhook
28 | app.kubernetes.io/instance: ingress-nginx
29 | app.kubernetes.io/name: ingress-nginx
30 | app.kubernetes.io/part-of: ingress-nginx
31 | app.kubernetes.io/version: 1.12.0
32 | name: ingress-nginx-admission
33 | namespace: ingress-nginx
34 | ---
35 | apiVersion: rbac.authorization.k8s.io/v1
36 | kind: Role
37 | metadata:
38 | labels:
39 | app.kubernetes.io/component: controller
40 | app.kubernetes.io/instance: ingress-nginx
41 | app.kubernetes.io/name: ingress-nginx
42 | app.kubernetes.io/part-of: ingress-nginx
43 | app.kubernetes.io/version: 1.12.0
44 | name: ingress-nginx
45 | namespace: ingress-nginx
46 | rules:
47 | - apiGroups:
48 | - ""
49 | resources:
50 | - namespaces
51 | verbs:
52 | - get
53 | - apiGroups:
54 | - ""
55 | resources:
56 | - configmaps
57 | - pods
58 | - secrets
59 | - endpoints
60 | verbs:
61 | - get
62 | - list
63 | - watch
64 | - apiGroups:
65 | - ""
66 | resources:
67 | - services
68 | verbs:
69 | - get
70 | - list
71 | - watch
72 | - apiGroups:
73 | - networking.k8s.io
74 | resources:
75 | - ingresses
76 | verbs:
77 | - get
78 | - list
79 | - watch
80 | - apiGroups:
81 | - networking.k8s.io
82 | resources:
83 | - ingresses/status
84 | verbs:
85 | - update
86 | - apiGroups:
87 | - networking.k8s.io
88 | resources:
89 | - ingressclasses
90 | verbs:
91 | - get
92 | - list
93 | - watch
94 | - apiGroups:
95 | - coordination.k8s.io
96 | resourceNames:
97 | - ingress-nginx-leader
98 | resources:
99 | - leases
100 | verbs:
101 | - get
102 | - update
103 | - apiGroups:
104 | - coordination.k8s.io
105 | resources:
106 | - leases
107 | verbs:
108 | - create
109 | - apiGroups:
110 | - ""
111 | resources:
112 | - events
113 | verbs:
114 | - create
115 | - patch
116 | - apiGroups:
117 | - discovery.k8s.io
118 | resources:
119 | - endpointslices
120 | verbs:
121 | - list
122 | - watch
123 | - get
124 | ---
125 | apiVersion: rbac.authorization.k8s.io/v1
126 | kind: Role
127 | metadata:
128 | labels:
129 | app.kubernetes.io/component: admission-webhook
130 | app.kubernetes.io/instance: ingress-nginx
131 | app.kubernetes.io/name: ingress-nginx
132 | app.kubernetes.io/part-of: ingress-nginx
133 | app.kubernetes.io/version: 1.12.0
134 | name: ingress-nginx-admission
135 | namespace: ingress-nginx
136 | rules:
137 | - apiGroups:
138 | - ""
139 | resources:
140 | - secrets
141 | verbs:
142 | - get
143 | - create
144 | ---
145 | apiVersion: rbac.authorization.k8s.io/v1
146 | kind: ClusterRole
147 | metadata:
148 | labels:
149 | app.kubernetes.io/instance: ingress-nginx
150 | app.kubernetes.io/name: ingress-nginx
151 | app.kubernetes.io/part-of: ingress-nginx
152 | app.kubernetes.io/version: 1.12.0
153 | name: ingress-nginx
154 | rules:
155 | - apiGroups:
156 | - ""
157 | resources:
158 | - configmaps
159 | - endpoints
160 | - nodes
161 | - pods
162 | - secrets
163 | - namespaces
164 | verbs:
165 | - list
166 | - watch
167 | - apiGroups:
168 | - coordination.k8s.io
169 | resources:
170 | - leases
171 | verbs:
172 | - list
173 | - watch
174 | - apiGroups:
175 | - ""
176 | resources:
177 | - nodes
178 | verbs:
179 | - get
180 | - apiGroups:
181 | - ""
182 | resources:
183 | - services
184 | verbs:
185 | - get
186 | - list
187 | - watch
188 | - apiGroups:
189 | - networking.k8s.io
190 | resources:
191 | - ingresses
192 | verbs:
193 | - get
194 | - list
195 | - watch
196 | - apiGroups:
197 | - ""
198 | resources:
199 | - events
200 | verbs:
201 | - create
202 | - patch
203 | - apiGroups:
204 | - networking.k8s.io
205 | resources:
206 | - ingresses/status
207 | verbs:
208 | - update
209 | - apiGroups:
210 | - networking.k8s.io
211 | resources:
212 | - ingressclasses
213 | verbs:
214 | - get
215 | - list
216 | - watch
217 | - apiGroups:
218 | - discovery.k8s.io
219 | resources:
220 | - endpointslices
221 | verbs:
222 | - list
223 | - watch
224 | - get
225 | ---
226 | apiVersion: rbac.authorization.k8s.io/v1
227 | kind: ClusterRole
228 | metadata:
229 | labels:
230 | app.kubernetes.io/component: admission-webhook
231 | app.kubernetes.io/instance: ingress-nginx
232 | app.kubernetes.io/name: ingress-nginx
233 | app.kubernetes.io/part-of: ingress-nginx
234 | app.kubernetes.io/version: 1.12.0
235 | name: ingress-nginx-admission
236 | rules:
237 | - apiGroups:
238 | - admissionregistration.k8s.io
239 | resources:
240 | - validatingwebhookconfigurations
241 | verbs:
242 | - get
243 | - update
244 | ---
245 | apiVersion: rbac.authorization.k8s.io/v1
246 | kind: RoleBinding
247 | metadata:
248 | labels:
249 | app.kubernetes.io/component: controller
250 | app.kubernetes.io/instance: ingress-nginx
251 | app.kubernetes.io/name: ingress-nginx
252 | app.kubernetes.io/part-of: ingress-nginx
253 | app.kubernetes.io/version: 1.12.0
254 | name: ingress-nginx
255 | namespace: ingress-nginx
256 | roleRef:
257 | apiGroup: rbac.authorization.k8s.io
258 | kind: Role
259 | name: ingress-nginx
260 | subjects:
261 | - kind: ServiceAccount
262 | name: ingress-nginx
263 | namespace: ingress-nginx
264 | ---
265 | apiVersion: rbac.authorization.k8s.io/v1
266 | kind: RoleBinding
267 | metadata:
268 | labels:
269 | app.kubernetes.io/component: admission-webhook
270 | app.kubernetes.io/instance: ingress-nginx
271 | app.kubernetes.io/name: ingress-nginx
272 | app.kubernetes.io/part-of: ingress-nginx
273 | app.kubernetes.io/version: 1.12.0
274 | name: ingress-nginx-admission
275 | namespace: ingress-nginx
276 | roleRef:
277 | apiGroup: rbac.authorization.k8s.io
278 | kind: Role
279 | name: ingress-nginx-admission
280 | subjects:
281 | - kind: ServiceAccount
282 | name: ingress-nginx-admission
283 | namespace: ingress-nginx
284 | ---
285 | apiVersion: rbac.authorization.k8s.io/v1
286 | kind: ClusterRoleBinding
287 | metadata:
288 | labels:
289 | app.kubernetes.io/instance: ingress-nginx
290 | app.kubernetes.io/name: ingress-nginx
291 | app.kubernetes.io/part-of: ingress-nginx
292 | app.kubernetes.io/version: 1.12.0
293 | name: ingress-nginx
294 | roleRef:
295 | apiGroup: rbac.authorization.k8s.io
296 | kind: ClusterRole
297 | name: ingress-nginx
298 | subjects:
299 | - kind: ServiceAccount
300 | name: ingress-nginx
301 | namespace: ingress-nginx
302 | ---
303 | apiVersion: rbac.authorization.k8s.io/v1
304 | kind: ClusterRoleBinding
305 | metadata:
306 | labels:
307 | app.kubernetes.io/component: admission-webhook
308 | app.kubernetes.io/instance: ingress-nginx
309 | app.kubernetes.io/name: ingress-nginx
310 | app.kubernetes.io/part-of: ingress-nginx
311 | app.kubernetes.io/version: 1.12.0
312 | name: ingress-nginx-admission
313 | roleRef:
314 | apiGroup: rbac.authorization.k8s.io
315 | kind: ClusterRole
316 | name: ingress-nginx-admission
317 | subjects:
318 | - kind: ServiceAccount
319 | name: ingress-nginx-admission
320 | namespace: ingress-nginx
321 | ---
322 | apiVersion: v1
323 | data: null
324 | kind: ConfigMap
325 | metadata:
326 | labels:
327 | app.kubernetes.io/component: controller
328 | app.kubernetes.io/instance: ingress-nginx
329 | app.kubernetes.io/name: ingress-nginx
330 | app.kubernetes.io/part-of: ingress-nginx
331 | app.kubernetes.io/version: 1.12.0
332 | name: ingress-nginx-controller
333 | namespace: ingress-nginx
334 | ---
335 | apiVersion: v1
336 | kind: Service
337 | metadata:
338 | labels:
339 | app.kubernetes.io/component: controller
340 | app.kubernetes.io/instance: ingress-nginx
341 | app.kubernetes.io/name: ingress-nginx
342 | app.kubernetes.io/part-of: ingress-nginx
343 | app.kubernetes.io/version: 1.12.0
344 | name: ingress-nginx-controller
345 | namespace: ingress-nginx
346 | spec:
347 | externalTrafficPolicy: Local
348 | ipFamilies:
349 | - IPv4
350 | ipFamilyPolicy: SingleStack
351 | ports:
352 | - appProtocol: http
353 | name: http
354 | port: 80
355 | protocol: TCP
356 | targetPort: http
357 | - appProtocol: https
358 | name: https
359 | port: 443
360 | protocol: TCP
361 | targetPort: https
362 | selector:
363 | app.kubernetes.io/component: controller
364 | app.kubernetes.io/instance: ingress-nginx
365 | app.kubernetes.io/name: ingress-nginx
366 | type: NodePort
367 | ---
368 | apiVersion: v1
369 | kind: Service
370 | metadata:
371 | labels:
372 | app.kubernetes.io/component: controller
373 | app.kubernetes.io/instance: ingress-nginx
374 | app.kubernetes.io/name: ingress-nginx
375 | app.kubernetes.io/part-of: ingress-nginx
376 | app.kubernetes.io/version: 1.12.0
377 | name: ingress-nginx-controller-admission
378 | namespace: ingress-nginx
379 | spec:
380 | ports:
381 | - appProtocol: https
382 | name: https-webhook
383 | port: 443
384 | targetPort: webhook
385 | selector:
386 | app.kubernetes.io/component: controller
387 | app.kubernetes.io/instance: ingress-nginx
388 | app.kubernetes.io/name: ingress-nginx
389 | type: ClusterIP
390 | ---
391 | apiVersion: apps/v1
392 | kind: Deployment
393 | metadata:
394 | labels:
395 | app.kubernetes.io/component: controller
396 | app.kubernetes.io/instance: ingress-nginx
397 | app.kubernetes.io/name: ingress-nginx
398 | app.kubernetes.io/part-of: ingress-nginx
399 | app.kubernetes.io/version: 1.12.0
400 | name: ingress-nginx-controller
401 | namespace: ingress-nginx
402 | spec:
403 | minReadySeconds: 0
404 | revisionHistoryLimit: 10
405 | selector:
406 | matchLabels:
407 | app.kubernetes.io/component: controller
408 | app.kubernetes.io/instance: ingress-nginx
409 | app.kubernetes.io/name: ingress-nginx
410 | strategy:
411 | rollingUpdate:
412 | maxUnavailable: 1
413 | type: RollingUpdate
414 | template:
415 | metadata:
416 | labels:
417 | app.kubernetes.io/component: controller
418 | app.kubernetes.io/instance: ingress-nginx
419 | app.kubernetes.io/name: ingress-nginx
420 | app.kubernetes.io/part-of: ingress-nginx
421 | app.kubernetes.io/version: 1.12.0
422 | spec:
423 | containers:
424 | - args:
425 | - /nginx-ingress-controller
426 | - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
427 | - --election-id=ingress-nginx-leader
428 | - --controller-class=k8s.io/ingress-nginx
429 | - --ingress-class=nginx
430 | - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
431 | - --validating-webhook=:8443
432 | - --validating-webhook-certificate=/usr/local/certificates/cert
433 | - --validating-webhook-key=/usr/local/certificates/key
434 | env:
435 | - name: POD_NAME
436 | valueFrom:
437 | fieldRef:
438 | fieldPath: metadata.name
439 | - name: POD_NAMESPACE
440 | valueFrom:
441 | fieldRef:
442 | fieldPath: metadata.namespace
443 | - name: LD_PRELOAD
444 | value: /usr/local/lib/libmimalloc.so
445 | image: registry.k8s.io/ingress-nginx/controller:v1.12.0@sha256:e6b8de175acda6ca913891f0f727bca4527e797d52688cbe9fec9040d6f6b6fa
446 | imagePullPolicy: IfNotPresent
447 | lifecycle:
448 | preStop:
449 | exec:
450 | command:
451 | - /wait-shutdown
452 | livenessProbe:
453 | failureThreshold: 5
454 | httpGet:
455 | path: /healthz
456 | port: 10254
457 | scheme: HTTP
458 | initialDelaySeconds: 10
459 | periodSeconds: 10
460 | successThreshold: 1
461 | timeoutSeconds: 1
462 | name: controller
463 | ports:
464 | - containerPort: 80
465 | name: http
466 | protocol: TCP
467 | - containerPort: 443
468 | name: https
469 | protocol: TCP
470 | - containerPort: 8443
471 | name: webhook
472 | protocol: TCP
473 | readinessProbe:
474 | failureThreshold: 3
475 | httpGet:
476 | path: /healthz
477 | port: 10254
478 | scheme: HTTP
479 | initialDelaySeconds: 10
480 | periodSeconds: 10
481 | successThreshold: 1
482 | timeoutSeconds: 1
483 | resources:
484 | requests:
485 | cpu: 100m
486 | memory: 90Mi
487 | securityContext:
488 | allowPrivilegeEscalation: false
489 | capabilities:
490 | add:
491 | - NET_BIND_SERVICE
492 | drop:
493 | - ALL
494 | readOnlyRootFilesystem: false
495 | runAsGroup: 82
496 | runAsNonRoot: true
497 | runAsUser: 101
498 | seccompProfile:
499 | type: RuntimeDefault
500 | volumeMounts:
501 | - mountPath: /usr/local/certificates/
502 | name: webhook-cert
503 | readOnly: true
504 | dnsPolicy: ClusterFirst
505 | nodeSelector:
506 | kubernetes.io/os: linux
507 | serviceAccountName: ingress-nginx
508 | terminationGracePeriodSeconds: 300
509 | volumes:
510 | - name: webhook-cert
511 | secret:
512 | secretName: ingress-nginx-admission
513 | ---
514 | apiVersion: batch/v1
515 | kind: Job
516 | metadata:
517 | labels:
518 | app.kubernetes.io/component: admission-webhook
519 | app.kubernetes.io/instance: ingress-nginx
520 | app.kubernetes.io/name: ingress-nginx
521 | app.kubernetes.io/part-of: ingress-nginx
522 | app.kubernetes.io/version: 1.12.0
523 | name: ingress-nginx-admission-create
524 | namespace: ingress-nginx
525 | spec:
526 | template:
527 | metadata:
528 | labels:
529 | app.kubernetes.io/component: admission-webhook
530 | app.kubernetes.io/instance: ingress-nginx
531 | app.kubernetes.io/name: ingress-nginx
532 | app.kubernetes.io/part-of: ingress-nginx
533 | app.kubernetes.io/version: 1.12.0
534 | name: ingress-nginx-admission-create
535 | spec:
536 | containers:
537 | - args:
538 | - create
539 | - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
540 | - --namespace=$(POD_NAMESPACE)
541 | - --secret-name=ingress-nginx-admission
542 | env:
543 | - name: POD_NAMESPACE
544 | valueFrom:
545 | fieldRef:
546 | fieldPath: metadata.namespace
547 | image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.0@sha256:aaafd456bda110628b2d4ca6296f38731a3aaf0bf7581efae824a41c770a8fc4
548 | imagePullPolicy: IfNotPresent
549 | name: create
550 | securityContext:
551 | allowPrivilegeEscalation: false
552 | capabilities:
553 | drop:
554 | - ALL
555 | readOnlyRootFilesystem: true
556 | runAsGroup: 65532
557 | runAsNonRoot: true
558 | runAsUser: 65532
559 | seccompProfile:
560 | type: RuntimeDefault
561 | nodeSelector:
562 | kubernetes.io/os: linux
563 | restartPolicy: OnFailure
564 | serviceAccountName: ingress-nginx-admission
565 | ---
566 | apiVersion: batch/v1
567 | kind: Job
568 | metadata:
569 | labels:
570 | app.kubernetes.io/component: admission-webhook
571 | app.kubernetes.io/instance: ingress-nginx
572 | app.kubernetes.io/name: ingress-nginx
573 | app.kubernetes.io/part-of: ingress-nginx
574 | app.kubernetes.io/version: 1.12.0
575 | name: ingress-nginx-admission-patch
576 | namespace: ingress-nginx
577 | spec:
578 | template:
579 | metadata:
580 | labels:
581 | app.kubernetes.io/component: admission-webhook
582 | app.kubernetes.io/instance: ingress-nginx
583 | app.kubernetes.io/name: ingress-nginx
584 | app.kubernetes.io/part-of: ingress-nginx
585 | app.kubernetes.io/version: 1.12.0
586 | name: ingress-nginx-admission-patch
587 | spec:
588 | containers:
589 | - args:
590 | - patch
591 | - --webhook-name=ingress-nginx-admission
592 | - --namespace=$(POD_NAMESPACE)
593 | - --patch-mutating=false
594 | - --secret-name=ingress-nginx-admission
595 | - --patch-failure-policy=Fail
596 | env:
597 | - name: POD_NAMESPACE
598 | valueFrom:
599 | fieldRef:
600 | fieldPath: metadata.namespace
601 | image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.5.0@sha256:aaafd456bda110628b2d4ca6296f38731a3aaf0bf7581efae824a41c770a8fc4
602 | imagePullPolicy: IfNotPresent
603 | name: patch
604 | securityContext:
605 | allowPrivilegeEscalation: false
606 | capabilities:
607 | drop:
608 | - ALL
609 | readOnlyRootFilesystem: true
610 | runAsGroup: 65532
611 | runAsNonRoot: true
612 | runAsUser: 65532
613 | seccompProfile:
614 | type: RuntimeDefault
615 | nodeSelector:
616 | kubernetes.io/os: linux
617 | restartPolicy: OnFailure
618 | serviceAccountName: ingress-nginx-admission
619 | ---
620 | apiVersion: networking.k8s.io/v1
621 | kind: IngressClass
622 | metadata:
623 | labels:
624 | app.kubernetes.io/component: controller
625 | app.kubernetes.io/instance: ingress-nginx
626 | app.kubernetes.io/name: ingress-nginx
627 | app.kubernetes.io/part-of: ingress-nginx
628 | app.kubernetes.io/version: 1.12.0
629 | name: nginx
630 | spec:
631 | controller: k8s.io/ingress-nginx
632 | ---
633 | apiVersion: admissionregistration.k8s.io/v1
634 | kind: ValidatingWebhookConfiguration
635 | metadata:
636 | labels:
637 | app.kubernetes.io/component: admission-webhook
638 | app.kubernetes.io/instance: ingress-nginx
639 | app.kubernetes.io/name: ingress-nginx
640 | app.kubernetes.io/part-of: ingress-nginx
641 | app.kubernetes.io/version: 1.12.0
642 | name: ingress-nginx-admission
643 | webhooks:
644 | - admissionReviewVersions:
645 | - v1
646 | clientConfig:
647 | service:
648 | name: ingress-nginx-controller-admission
649 | namespace: ingress-nginx
650 | path: /networking/v1/ingresses
651 | port: 443
652 | failurePolicy: Fail
653 | matchPolicy: Equivalent
654 | name: validate.nginx.ingress.kubernetes.io
655 | rules:
656 | - apiGroups:
657 | - networking.k8s.io
658 | apiVersions:
659 | - v1
660 | operations:
661 | - CREATE
662 | - UPDATE
663 | resources:
664 | - ingresses
665 | sideEffects: None
666 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/systemd.md:
--------------------------------------------------------------------------------
1 | #### Create env variable:
2 | ```sh
3 | export SERVER_IP=172.31.54.201
4 | ```
5 | #### systemd file for etcd:
6 | ```sh
7 | cat <
10 | ```
11 |
12 | ### Taint a Node with Effect of NoSchedule
13 | ```sh
14 | kubectl taint node worker-01 key=value:NoSchedule
15 | ```
16 | ### Test with Deployment
17 | ```sh
18 | kubectl create deployment test-deploy --image=nginx --replicas=5
19 |
20 | kubectl get pods -o wide
21 | ```
22 | ### Define Tolerations
23 | ```sh
24 | kubectl create deployment test-deploy --image=nginx --replicas=5 --dry-run=client -o yaml > deployment-toleration.yaml
25 | ```
26 | Final manifest file after adding the toleration:
27 |
28 | ```sh
29 | apiVersion: apps/v1
30 | kind: Deployment
31 | metadata:
32 | creationTimestamp: null
33 | labels:
34 | app: test-deploy
35 | name: test-deploy
36 | spec:
37 | replicas: 5
38 | selector:
39 | matchLabels:
40 | app: test-deploy
41 | strategy: {}
42 | template:
43 | metadata:
44 | creationTimestamp: null
45 | labels:
46 | app: test-deploy
47 | spec:
48 | containers:
49 | - image: nginx
50 | name: nginx
51 | tolerations:
52 | - key: "key"
53 | operator: "Exists"
54 | effect: "NoSchedule"
55 | ```
56 | ```sh
57 | kubectl create -f deployment-toleration.yaml
58 |
59 | kubectl get pods -o wide
60 | ```
61 | ### Taint a Node with Effect of NoExecute
62 | ```sh
63 | kubectl taint node worker-01 key=value:NoExecute
64 |
65 | kubectl get pods -o wide
66 | ```
67 | ### Remove Taint from a Node
68 | ```sh
69 | kubectl taint node worker-01 key=value:NoSchedule-
70 |
71 | kubectl taint node worker-01 key=value:NoExecute-
72 | ```
--------------------------------------------------------------------------------
/domain-1-cluster-setup/token-authentication.md:
--------------------------------------------------------------------------------
1 |
2 | #### Format of Static Token File:
3 | ```sh
4 | token,user,uid,"group1,group2,group3"
5 | ```
6 | #### Create a token file with required data:
7 | ```sh
8 | nano /root/token.csv
9 | ```
10 | ```sh
11 | Dem0Passw0rd#,bob,01,admins
12 | ```
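A static token is effectively a long-lived password, so avoid guessable values like the one above. A minimal sketch for generating a random token line (assuming `openssl` is available; redirect the output into `/root/token.csv`):

```sh
# Generate a 32-character hex token and print a token.csv line for user bob
TOKEN=$(openssl rand -hex 16)
echo "${TOKEN},bob,01,admins"
```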
13 | #### Pass the token auth flag:
14 | ```sh
15 | nano /etc/systemd/system/kube-apiserver.service
16 | ```
17 | ```sh
18 | --token-auth-file /root/token.csv
19 | ```
20 | ```sh
21 | systemctl daemon-reload
22 | systemctl restart kube-apiserver
23 | ```
24 | #### Verification:
25 | ```sh
26 | curl -k --header "Authorization: Bearer Dem0Passw0rd#" https://localhost:6443
27 |
28 | kubectl get secret --server=https://localhost:6443 --token Dem0Passw0rd# --insecure-skip-tls-verify
29 | ```
30 | #### Testing:
31 | ```sh
32 | kubectl create secret generic my-secret --server=https://localhost:6443 --token Dem0Passw0rd# --insecure-skip-tls-verify
33 | kubectl delete secret my-secret --server=https://localhost:6443 --token Dem0Passw0rd# --insecure-skip-tls-verify
34 | ```
35 |
--------------------------------------------------------------------------------
/domain-1-cluster-setup/verify-binaries.md:
--------------------------------------------------------------------------------
1 | #### Kubernetes GitHub Repository:
2 |
3 | https://github.com/kubernetes/kubernetes/releases
4 |
5 | #### Step 1 - Download Binaries:
6 | ```sh
7 | wget https://dl.k8s.io/v1.33.0-alpha.1/kubernetes-client-darwin-arm64.tar.gz
8 | ```
9 |
10 | #### Step 2 - Verify the Message Digest:
11 | ```sh
12 | sha512sum kubernetes-client-darwin-arm64.tar.gz
13 | ```
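Comparing a long digest by eye is error-prone; `sha512sum --check` automates the comparison. A self-contained sketch of the workflow on a throwaway file (for a real release, the published `.sha512` file can be fetched from the same dl.k8s.io path):

```sh
# Publisher side: record the digest of the artifact
echo "demo content" > demo.tar.gz
sha512sum demo.tar.gz > demo.tar.gz.sha512

# Consumer side: verify the artifact against the recorded digest
sha512sum --check demo.tar.gz.sha512

rm demo.tar.gz demo.tar.gz.sha512
```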
14 |
--------------------------------------------------------------------------------
/domain-2-cluster-hardening/Readme.md:
--------------------------------------------------------------------------------
1 | # Domain - Cluster Hardening
2 |
3 | The code mentioned in this document is used in the Certified Kubernetes Security Specialist 2025 video course.
4 |
5 |
6 | # Video-Document Mapper
7 |
8 | | Sr No | Document Link |
9 | | ------ | ------ |
10 | | 1 | [Creating Token for RBAC Practicals][PlDa] |
11 | | 2 | [Practical - Role and RoleBinding][PlDb] |
12 | | 3 | [Practical - ClusterRole and ClusterRoleBinding][PlDc] |
13 | | 4 | [Overview of Service Accounts][PlDd] |
14 | | 5 | [Service Accounts - Points to Note][PlDe] |
15 | | 6 | [Service Account Security][PlDf] |
16 | | 7 | [Setup Environment for Upgrading Clusters][PlDg] |
17 | | 8 | [Practical - Upgrade Control Plane Node][PlDh] |
18 | | 9 | [Practical - Upgrade Worker Node][PlDi] |
19 | | 10 | [Overview of Projected Volumes][PlDj] |
20 | | 11 | [Mounting Service Accounts using Projected Volumes][PlDk] |
22 |
23 | [PlDa]: <./user-rbac.md>
24 | [PlDb]: <./role-rolebinding.md>
25 | [PlDc]: <./clusterrole.md>
26 | [PlDd]: <./service-account.md>
27 | [PlDe]: <./sa-pointers.md>
28 | [PlDf]: <./sa-security.md>
29 | [PlDg]: <./kubeadm-automate.md>
30 | [PlDh]: <./upgrade-kubeadm-master.md>
31 | [PlDi]: <./upgrade-kubeadm-worker.md>
32 | [PlDj]: <./projected-volume.md>
33 | [PlDk]: <./sa-projectedvolume.md>
--------------------------------------------------------------------------------
/domain-2-cluster-hardening/clusterrole.md:
--------------------------------------------------------------------------------
1 |
2 | #### Create ClusterRole
3 | ```sh
4 | kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
5 |
6 | kubectl describe clusterrole pod-reader
7 | ```
8 | #### Create ClusterRoleBinding
9 | ```sh
10 | kubectl create clusterrolebinding test-clusterrole --clusterrole=pod-reader --user=system:serviceaccount:default:test-sa
11 |
12 | kubectl describe clusterrolebinding test-clusterrole
13 | ```
14 | #### Test the Setup
15 |
16 | Replace the URL in the command below with your cluster's API server URL. If using macOS or Linux, use `$TOKEN` instead of `%TOKEN%`.
17 |
18 | ```sh
19 | curl -k https://38140ecd-e8d7-4fff-be52-24629c40cdac.k8s.ondigitalocean.com/api/v1/namespaces/default/pods --header "Authorization: Bearer %TOKEN%"
20 | ```
21 |
22 | ```sh
23 | curl -k https://38140ecd-e8d7-4fff-be52-24629c40cdac.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/pods --header "Authorization: Bearer %TOKEN%"
24 | ```
25 |
26 |
--------------------------------------------------------------------------------
/domain-2-cluster-hardening/deploying-ingress.md:
--------------------------------------------------------------------------------
1 |
2 | #### Step 1 - Create Nginx Pod:
3 | ```sh
4 | kubectl run example-pod --image=nginx -l=app=nginx
5 | ```
6 | #### Step 2 - Create Service:
7 | ```sh
8 | kubectl expose pod example-pod --port=80 --target-port=80 --name=example-svc
9 | ```
10 | #### Step 3 - Installing Ingress Controller:
11 | ```sh
12 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml
13 | ```
14 | #### Step 4 - Create Ingress:
15 | ```sh
16 | nano ingress.yaml
17 | ```
18 | ```sh
19 | apiVersion: networking.k8s.io/v1
20 | kind: Ingress
21 | metadata:
22 | name: example-ingress
23 | annotations:
24 | nginx.ingress.kubernetes.io/rewrite-target: /
25 | spec:
26 | ingressClassName: nginx
27 | rules:
28 | - host: example.internal
29 | http:
30 | paths:
31 | - pathType: Prefix
32 | path: "/"
33 | backend:
34 | service:
35 | name: example-svc
36 | port:
37 | number: 80
38 | ```
39 | ```sh
40 | kubectl apply -f ingress.yaml
41 | ```
42 | #### Step 5 - Verification:
43 | ```sh
44 | kubectl get svc -n ingress-nginx
45 | apt install net-tools
46 | ifconfig eth0
47 | ```
48 | Add a host entry mapping `example.internal` to the node IP found above. Note that the NodePort in the curl command will differ per cluster; use the port shown by `kubectl get svc -n ingress-nginx`.
49 | ```sh
50 | nano /etc/hosts
51 | ```
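For example, if `ifconfig eth0` showed 10.0.0.5, the entry would look like this (the IP is illustrative):

```sh
10.0.0.5    example.internal
```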
52 | ```sh
53 | curl http://example.internal:30429
54 | ```
55 |
--------------------------------------------------------------------------------
/domain-2-cluster-hardening/kubeadm-automate.md:
--------------------------------------------------------------------------------
1 | ### Configure Kubeadm for Control Plane Node
2 | ```sh
3 | curl -L https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-2-cluster-hardening/kubeadm-automate.sh -o kubeadm-control-plane.sh
4 |
5 | chmod +x kubeadm-control-plane.sh
6 |
7 | ./kubeadm-control-plane.sh
8 | ```
9 |
10 | ### Configure Kubeadm for Worker Node (Run in Worker Node)
11 | ```sh
12 | curl -L https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-2-cluster-hardening/kubeadm-worker-automate.sh -o kubeadm-worker.sh
13 |
14 | chmod +x kubeadm-worker.sh
15 |
16 | ./kubeadm-worker.sh
17 | ```
18 | Use the `kubeadm join` command that was generated in your Control Plane Node server. The below command is just for reference.
19 | ```sh
20 | kubeadm join 209.38.120.248:6443 --token 9vxoc8.cji5a4o82sd6lkqa \
21 | --discovery-token-ca-cert-hash sha256:1818dc0a5bad05b378dd3dcec2c048fd798e8f6ff69b396db4f5352b63414baf
22 | ```
23 |
24 |
--------------------------------------------------------------------------------
/domain-2-cluster-hardening/kubeadm-automate.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e # Exit on error
4 | set -o pipefail
5 | set -x # Enable debugging
6 |
7 | ### Step 1: Setup containerd ###
8 | echo "[Step 1] Installing and configuring containerd..."
9 |
10 | # Load required kernel modules
11 | cat </dev/null
35 |
36 | # Modify containerd config for systemd cgroup
37 | sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
38 |
39 | # Restart containerd
40 | sudo systemctl restart containerd
41 | sudo systemctl enable containerd
42 |
43 | ### Step 2: Kernel Parameter Configuration ###
44 | echo "[Step 2] Configuring kernel parameters..."
45 |
46 | cat < /etc/containerd/config.toml
30 | ```
31 | ```sh
32 | nano /etc/containerd/config.toml
33 | ```
34 | --> SystemdCgroup = true
35 |
36 | ```sh
37 | systemctl restart containerd
38 | ```
39 |
40 | ##### Step 2: Kernel Parameter Configuration
41 | ```sh
42 | cat </dev/null
35 |
36 | # Modify containerd config for systemd cgroup
37 | sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
38 |
39 | # Restart containerd
40 | sudo systemctl restart containerd
41 | sudo systemctl enable containerd
42 |
43 | ### Step 2: Kernel Parameter Configuration ###
44 | echo "[Step 2] Configuring kernel parameters..."
45 |
46 | cat <
33 | [PlDb]: <./capabilities-practical.md>
34 | [PlDc]: <./security-context.md>
35 | [PlDd]: <./privileged-pod.md>
36 | [PlDe]: <./capabilities-pod.md>
37 | [PlDf]: <./image-pull-policy.md>
38 | [PlDg]: <./ac-alwayspullimages.md>
39 | [PlDh]: <./pss.md>
40 | [PlDi]: <./pss-modes.md>
41 | [PlDj]: <./pss-notes.md>
42 | [PlDk]: <./imagewebhook.md>
43 | [PlDl]: <./secrets.md>
44 | [PlDm]: <./cilium-install.md>
45 | [PlDn]: <./cilium-netpol.md>
46 | [PlDo]: <./cilium-entities.md>
47 | [PlDp]: <./cilium-layer-4.md>
48 | [PlDq]: <./cilium-dns.md>
49 | [PlDr]: <./cilium-deny.md>
50 | [PlDs]: <./cilium-encryption-ipsec.md>
51 | [PlDt]: <./cilium-encryption-wireguard.md>
52 | [PlDu]: <./security-context-ro.md>
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/ac-alwayspullimages.md:
--------------------------------------------------------------------------------
1 |
2 | ### Test the Base Setup (Without AlwaysPullImages Enabled)
3 | ```sh
4 | kubectl run never-pod --image=nginx --image-pull-policy=Never
5 |
6 | kubectl describe pod never-pod
7 |
8 | kubectl get pod never-pod -o yaml
9 | ```
10 | #### Enable the AlwaysPullImages Admission Controller:
11 | ```sh
12 | nano /etc/kubernetes/manifests/kube-apiserver.yaml
13 | ```
14 | Add `AlwaysPullImages` to the `--enable-admission-plugins` flag in the kube-apiserver manifest to enable the admission controller:
15 | ```sh
16 | AlwaysPullImages
17 | ```
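For reference, the plugin is appended to the existing `--enable-admission-plugins` flag in the manifest. The plugin list varies by cluster, so the line below is only illustrative:

```sh
- --enable-admission-plugins=NodeRestriction,AlwaysPullImages
```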
18 |
19 | #### Test the Setup
20 |
21 | ```sh
22 | kubectl run never-pod-2 --image=nginx --image-pull-policy=Never
23 |
24 | kubectl describe pod never-pod-2
25 |
26 | kubectl get pod never-pod-2 -o yaml
27 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/api-user.crt:
--------------------------------------------------------------------------------
1 | -----BEGIN CERTIFICATE-----
2 | MIIDezCCAmOgAwIBAgIUB1LIJQjqJNw7OVA+aB0bv/JzCAcwDQYJKoZIhvcNAQEL
3 | BQAwFTETMBEGA1UEAxMKa3ViZXJuZXRlczAeFw0yMDEyMjExNDExMzlaFw0yMzA5
4 | MTcxNDExMzlaMH8xCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJWQTENMAsGA1UEBwwE
5 | Q2l0eTEZMBcGA1UECgwQWW91ck9yZ2FuaXphdGlvbjEdMBsGA1UECwwUWW91ck9y
6 | Z2FuaXphdGlvblVuaXQxGjAYBgNVBAMMEWJvdW5jZXIubG9jYWwubGFuMIIBIjAN
7 | BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAzPPGE8aQflkudbg5T9zbvRAOtfVj
8 | wCzedNg52hFvVR9xljElDCv57YcLDHNTQUddnIsSaM6rpvaaqFicf37W1+KWoOVQ
9 | iTJLZB2HNJiqU4Uo4JzTZkkVoread/CbpksRCLAIHQLB8WpHRNOJ91Sm04csW3uU
10 | 4YqknSHBa7CfX3HylYTg3WVezqXhO+HxvnYFydUDVhJU8tWj/0h5NAHAlTnwOMzF
11 | //p7tEu3x2sI4oRkXSV6sq2ZV6SiWnMG4sjBFR3lOtLSoTW6OECFS0BMT2/pHoDV
12 | kqONB0MODAxFp5aP9rzcpU7ygNyofEYutGEghbLz2hBCY7I0iRoYLIqZWQIDAQAB
13 | o1kwVzALBgNVHQ8EBAMCBDAwEwYDVR0lBAwwCgYIKwYBBQUHAwEwMwYDVR0RBCww
14 | KoIRYm91bmNlci5sb2NhbC5sYW6CFXd3dy5ib3VuY2VyLmxvY2FsLmxhbjANBgkq
15 | hkiG9w0BAQsFAAOCAQEAspqh3i3GiDbp5vXsK8Y6fgwmmvU/BRrjnwZdHR4S0hLd
16 | naBn4w1kGsPfSVgU3zPogcwL/XaARBaa60R3CBSaqAaoTs/I1XmofaNsj1+fo9Yo
17 | TY4Fxmo1Z48qaBjMN9jUlRpGBGngIFRBAz0cCF/wVSCxP7lgEFUFszgklwgGuKWc
18 | dTSiPuGL760YfJa7pr3Ymf77jBnKEgbIY80G82P0+KhdYzv6boDo1+Jx+DBNaza7
19 | tYhmNuEdOcVXCZIQHhxGe5XXkKZledfEg33PCwkG32o6dzp0SjDxFUpj7q/11c7z
20 | eOFxR9C59zP2sr6xVMk13iDJtynGsAUAuaPNIUxZuQ==
21 | -----END CERTIFICATE-----
22 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/api-user.key:
--------------------------------------------------------------------------------
1 | -----BEGIN RSA PRIVATE KEY-----
2 | MIIEowIBAAKCAQEAzPPGE8aQflkudbg5T9zbvRAOtfVjwCzedNg52hFvVR9xljEl
3 | DCv57YcLDHNTQUddnIsSaM6rpvaaqFicf37W1+KWoOVQiTJLZB2HNJiqU4Uo4JzT
4 | ZkkVoread/CbpksRCLAIHQLB8WpHRNOJ91Sm04csW3uU4YqknSHBa7CfX3HylYTg
5 | 3WVezqXhO+HxvnYFydUDVhJU8tWj/0h5NAHAlTnwOMzF//p7tEu3x2sI4oRkXSV6
6 | sq2ZV6SiWnMG4sjBFR3lOtLSoTW6OECFS0BMT2/pHoDVkqONB0MODAxFp5aP9rzc
7 | pU7ygNyofEYutGEghbLz2hBCY7I0iRoYLIqZWQIDAQABAoIBADJhYjmOQAqvBXqu
8 | lHgLRIDPJ66W6bRd0zlJxb7TNljoZ9WRsxew37kBzzd6SebsEhjfHuFgnFVonU/w
9 | qFe26D0dWAWpGQkAsgOkNo45UPVC8G92XYjxQj5Df9cn8DsKjN9j1jq7aM1dYLOM
10 | hIel6XRp7/90+34NxLVTjOZZ/nNJcMkS0xld+yg4GbpTyCLuegbvZjlQk+ffEoNY
11 | MJa6oG/QB0nB9l64KqgjBnKINgOGziuy8hOFOw98IBjIJ9e4eWLnrDQNtPInEP9D
12 | YB3RpuP/JuFKVN2fX2xb7VfeATVWE7ufTk5TQAt+nBZ+yEFETORiXUlGgPjXC2cI
13 | LSBYJwECgYEA5ZGz9Cw27P9E9tSGkmjkk7VuAOooSPeY8XWZNMwv5TPiSsMtbuRm
14 | BfoqXbSVJQlqO6ZHFIUsY1oTg9PTGUmUVAERw4KMwgGdFu3icWxnl8/T+YSHW+jX
15 | i3HnpbBfZCmVVJ4vLKA6LIgAfW39SbekrTnl3Vo7PSLK1KJxLx9ZiEkCgYEA5IyD
16 | uX0zlbhRJyGi13T9hO97PGEw7pOM4EPJM7Sl6pi66gwep6m65MROL68gYJAWSViO
17 | J4ds9PREkHK5uYrVotSogsjtum8pObUFMdJL5tiQm5iOj06kzY+11Kw4UisI38nG
18 | QoG0Fo7j3a6bzO2JJwqY6GwA+GyaZV3AdTtBKJECgYAvemzPSP2rEjg/HEEgspTj
19 | f5halBL01FBLT9j5tGkLbCmW8LrKvm3jOpPcgWZ/HG1eHMuCkPBXM9/pWbvE9RS6
20 | MuZrmupljVPh1B0K/DKIkTDz39bmyUcazdnsyIdR/c+miniTMCgX4aDIUCEcR+DE
21 | +r5xgyHRSQrN4zKpXkB0EQKBgGsjO7S+bmonJ1PSvsWFwDqLERgy739Hh+ixniYw
22 | 7v5Ubnq9B7nNJSGMrKJJ1EGwCeKEMs9w+rCxuVqFjW7fGFrmmcAFdPvKlGbK5w59
23 | 6LrklpV6JIolcbgzQCfcO+K47cYKjngq2UMh5MvMyJh+WacFnryFtMbAEnimREww
24 | ZNEhAoGBAONHAYZY5k53azAwZ02kumndTCxWjkZeGd92fvlpz+VxTWfucfehSfES
25 | qcNzTEbN7NUxsM+1TDNj1ciTvD7mSi47cvxW0mNVvNggqqpB/iFriWGMBoyaKEX4
26 | KqooJDIFlcDv+jCtFo4l3F6/uie5I2pu8BuzONNLa5xJYpAgzvbE
27 | -----END RSA PRIVATE KEY-----
28 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/capabilities-pod.md:
--------------------------------------------------------------------------------
1 | ### Documentation Referenced:
2 |
3 | https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-capabilities-for-a-container
4 |
5 | #### Create Normal Pod
6 | ```sh
7 | kubectl run normal-pod --image=busybox -- sleep 36000
8 |
9 | kubectl exec -it normal-pod -- sh
10 | ```
11 |
12 | #### Check Capabilities
13 | ```sh
14 | cat /proc/1/status
15 |
16 | capsh --decode=<CapEff-value-from-above>
17 | ```
18 |
19 | #### Capability 1 Pod
20 |
21 | ```sh
22 | nano capability-1.yaml
23 | ```
24 | ```sh
25 | apiVersion: v1
26 | kind: Pod
27 | metadata:
28 | name: capabilities-pod-1
29 | spec:
30 | containers:
31 | - name: demo
32 | image: busybox
33 | command: ["sleep","36000"]
34 | securityContext:
35 | capabilities:
36 | add: ["NET_ADMIN", "SYS_TIME"]
37 | ```
38 | ```sh
39 | kubectl apply -f capability-1.yaml
40 |
41 | kubectl exec -it capabilities-pod-1 -- sh
42 | ```
43 | ```sh
44 | cat /proc/1/status
45 |
46 | capsh --decode=<CapEff-value-from-above>
47 | ```
48 |
49 | #### Capability 2 Pod
50 |
51 | ```sh
52 | nano capability-2.yaml
53 | ```
54 | ```sh
55 | apiVersion: v1
56 | kind: Pod
57 | metadata:
58 | name: capabilities-pod-2
59 | spec:
60 | containers:
61 | - name: demo-2
62 | image: busybox
63 | command: ["sleep","36000"]
64 | securityContext:
65 | capabilities:
66 | add: ["NET_ADMIN", "SYS_TIME"]
67 | drop:
68 | - ALL
69 | ```
70 | ```sh
71 | kubectl apply -f capability-2.yaml
72 |
73 | kubectl exec -it capabilities-pod-2 -- sh
74 | ```
75 | ```sh
76 | cat /proc/1/status
77 |
78 | capsh --decode=<CapEff-value-from-above>
79 | ```
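The same inspection works for any Linux process. A quick sketch for checking the current shell's own capability sets on the host (`capsh` ships with the libcap tools and may need to be installed, so its line is commented out):

```sh
# Print the capability bitmasks (CapInh, CapPrm, CapEff, CapBnd, CapAmb) of this shell
grep Cap /proc/self/status

# Decode the effective set into capability names:
# capsh --decode=$(grep CapEff /proc/self/status | awk '{print $2}')
```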
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/capabilities-practical.md:
--------------------------------------------------------------------------------
1 | #### Base Code
2 | ```sh
3 | #include <stdio.h>
4 | #include <stdlib.h>
5 | #include <string.h>
6 | #include <unistd.h>
7 | #include <arpa/inet.h>
8 |
9 | #define PORT 900
10 |
11 | int main() {
12 | int server_fd;
13 | struct sockaddr_in server_addr;
14 |
15 | // Create socket
16 | server_fd = socket(AF_INET, SOCK_STREAM, 0);
17 | if (server_fd < 0) {
18 | perror("Socket failed");
19 | exit(EXIT_FAILURE);
20 | }
21 |
22 | // Set up server address structure
23 | server_addr.sin_family = AF_INET;
24 | server_addr.sin_addr.s_addr = INADDR_ANY;
25 | server_addr.sin_port = htons(PORT);
26 |
27 | // Bind socket to port 900
28 | if (bind(server_fd, (struct sockaddr *)&server_addr, sizeof(server_addr)) < 0) {
29 | perror("Bind failed");
30 | close(server_fd);
31 | exit(EXIT_FAILURE);
32 | }
33 |
34 | printf("Successfully bound to port %d\n", PORT);
35 |
36 | // Close socket
37 | close(server_fd);
38 |
39 | return 0;
40 | }
41 | ```
42 | #### Install GCC
43 | ```sh
44 | apt-get -y install gcc
45 | ```
46 |
47 | #### Compile the Program
48 | ```sh
49 | gcc bind_port_900.c -o bind_port_900
50 | ```
51 |
52 | #### Try running as a Normal user (fails: binding below port 1024 requires CAP_NET_BIND_SERVICE)
53 | ```sh
54 | ./bind_port_900
55 | ```
56 |
57 | #### Grant Capability to the Executable
58 | ```sh
59 | setcap 'cap_net_bind_service=+ep' ./bind_port_900
60 | ```
61 | #### Try running as a Normal user again (succeeds now that the file carries the capability)
62 | ```sh
63 | ./bind_port_900
64 | ```
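To verify what `setcap` granted (and to undo it later), the companion tools from the same libcap package can be used; the exact output format varies slightly between libcap versions:

```sh
getcap ./bind_port_900      # e.g. ./bind_port_900 cap_net_bind_service=ep
setcap -r ./bind_port_900   # remove the file capability again
```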
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-deny.md:
--------------------------------------------------------------------------------
1 |
2 | #### Create 3 Pods for Testing
3 | ```sh
4 | kubectl run nginx --image=nginx --labels=app=server
5 |
6 | kubectl run random-pod --labels=app=random-pod --image=alpine/curl -- sleep 36000
7 |
8 | kubectl run backend-pod --image=alpine/curl -- sleep 36000
9 | ```
10 | ### 1 - Create ingressDeny Policy
11 |
12 | ```sh
13 | nano ingressDeny.yaml
14 | ```
15 | ```sh
16 | apiVersion: "cilium.io/v2"
17 | kind: CiliumNetworkPolicy
18 | metadata:
19 | name: "deny-ingress"
20 | spec:
21 | endpointSelector:
22 | matchLabels:
23 | app: server
24 | ingress:
25 | - fromEntities:
26 | - all
27 | ingressDeny:
28 | - fromEndpoints:
29 | - matchLabels:
30 | app: random-pod
31 | ```
32 |
33 | ```sh
34 | kubectl create -f ingressDeny.yaml
35 | ```
36 |
37 | #### Verification
38 | ```sh
39 | kubectl get pods -o wide
40 |
41 | kubectl exec -it backend-pod -- sh
42 | curl <nginx-pod-ip>
43 | ping <nginx-pod-ip>
44 |
45 | kubectl exec -it random-pod -- sh
46 | curl <nginx-pod-ip>
47 | ping <nginx-pod-ip>
48 | ```
49 | ```sh
50 | kubectl delete -f ingressDeny.yaml
51 | ```
52 | ### 2 - Create egressDeny Policy
53 |
54 | ```sh
55 | nano egressDeny.yaml
56 | ```
57 | ```sh
58 | apiVersion: "cilium.io/v2"
59 | kind: CiliumNetworkPolicy
60 | metadata:
61 | name: "deny-egress"
62 | spec:
63 | endpointSelector:
64 | matchLabels:
65 | app: random-pod
66 | egress:
67 | - toEntities:
68 | - all
69 | egressDeny:
70 | - toEndpoints:
71 | - matchLabels:
72 | app: server
73 | ```
74 |
75 | ```sh
76 | kubectl create -f egressDeny.yaml
77 | ```
78 |
79 | #### Verification
80 | ```sh
81 | kubectl get pods -o wide
82 |
83 | kubectl exec -it random-pod -- sh
84 | curl <nginx-pod-ip>
85 | ping <nginx-pod-ip>
86 |
87 | curl google.com
88 | ping google.com
89 | ```
90 |
91 | ### Delete the Created Resources
92 |
93 | ```sh
94 | kubectl delete -f egressDeny.yaml
95 |
96 | kubectl delete pod nginx random-pod backend-pod
97 | ```
98 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-dns.md:
--------------------------------------------------------------------------------
1 | #### Create Pod for Testing
2 | ```sh
3 | kubectl run curl --image=alpine/curl -- sleep 36000
4 | ```
5 | #### DNS
6 | ```sh
7 | nano allow-dns.yaml
8 | ```
9 | ```sh
10 | apiVersion: "cilium.io/v2"
11 | kind: CiliumNetworkPolicy
12 | metadata:
13 | name: "allow-dns-kplabs"
14 | spec:
15 | endpointSelector: {}
16 | egress:
17 | - toPorts:
18 | - ports:
19 | - port: "53"
20 | rules:
21 | dns:
22 | - matchName: "kplabs.in"
23 | ```
24 | ```sh
25 | kubectl create -f allow-dns.yaml
26 | ```
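Note that the rule above only filters DNS lookups on port 53. To also restrict which destinations the pod may actually connect to by name, Cilium pairs a DNS rule like this with a `toFQDNs` egress rule; a minimal sketch following the Cilium FQDN policy documentation (the policy name is illustrative):

```sh
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-fqdn-kplabs"
spec:
  endpointSelector: {}
  egress:
  - toFQDNs:
    - matchName: "kplabs.in"
```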
27 |
28 | #### Testing
29 | ```sh
30 | kubectl exec -it curl -- sh
31 |
32 | nslookup google.com
33 |
34 | nslookup kplabs.in
35 | ```
34 |
35 | #### Delete the Setup
36 |
37 | ```sh
38 | kubectl delete -f allow-dns.yaml
39 | kubectl delete pod curl
40 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-encryption-ipsec.md:
--------------------------------------------------------------------------------
1 | #### Documentation Referenced:
2 |
3 | https://docs.cilium.io/en/stable/security/network/encryption-ipsec/
4 |
5 | #### Install Kind
6 | ```sh
7 | wget https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-3-minimize-microservice-vulnerability/kind-install.sh
8 |
9 | chmod +x kind-install.sh
10 |
11 | ./kind-install.sh
12 | ```
13 | #### Configure K8s using Kind
14 |
15 | ```sh
16 | nano config.yaml
17 | ```
18 | ```sh
19 | kind: Cluster
20 | apiVersion: kind.x-k8s.io/v1alpha4
21 | nodes:
22 | - role: control-plane
23 | image: kindest/node:v1.32.2
24 | - role: worker
25 | image: kindest/node:v1.32.2
26 | - role: worker
27 | image: kindest/node:v1.32.2
28 | networking:
29 | disableDefaultCNI: true
30 | ```
31 | ```sh
32 | kind create cluster --config config.yaml
33 | ```
34 | ### Install Cilium
35 | ```sh
36 | CLI_ARCH=amd64
37 |
38 | if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
39 | curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/v0.16.24/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
40 |
41 | sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
42 |
43 | sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
44 |
45 | rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
46 | ```
47 |
48 | ### Generate and Import PSK
49 |
50 | ```sh
51 | kubectl create -n kube-system secret generic cilium-ipsec-keys \
52 | --from-literal=keys="3+ rfc4106(gcm(aes)) $(echo $(dd if=/dev/urandom count=20 bs=1 2> /dev/null | xxd -p -c 64)) 128"
53 |
54 | kubectl -n kube-system get secrets cilium-ipsec-keys
55 | ```
56 |
57 | ### Enable Transparent Encryption in Cilium (IPSec)
58 | ```sh
59 | cilium install --version 1.17.1 --set encryption.enabled=true --set encryption.type=ipsec
60 |
61 | cilium status
62 |
63 | cilium config view | grep enable-ipsec
64 |
65 | kubectl get nodes
66 | ```
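As an additional check that IPsec security associations were actually installed, the kernel's XFRM state can be inspected from inside a Cilium agent pod (a check along the lines of the Cilium troubleshooting docs; output format depends on kernel and Cilium version):

```sh
kubectl -n kube-system exec -ti ds/cilium -- ip -s xfrm state
```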
67 | ### Testing the Setup
68 |
69 | #### Launch 2 Pods on different worker nodes (Terminal Tab 1)
70 |
71 | ```sh
72 | nano curl-pod.yaml
73 | ```
74 | ```sh
75 | apiVersion: v1
76 | kind: Pod
77 | metadata:
78 | name: curl
79 | spec:
80 | nodeSelector:
81 | kubernetes.io/hostname: kind-worker
82 | containers:
83 | - name: busybox
84 | image: alpine/curl
85 | command: ["sleep", "36000"]
86 | ```
87 | ```sh
88 | kubectl create -f curl-pod.yaml
89 | ```
90 | ```sh
91 | nano nginx-pod.yaml
92 | ```
93 |
94 | ```sh
95 | apiVersion: v1
96 | kind: Pod
97 | metadata:
98 | name: nginx
99 | spec:
100 | nodeSelector:
101 | kubernetes.io/hostname: kind-worker2
102 | containers:
103 | - name: nginx
104 | image: nginx
105 | ```
106 |
107 | ```sh
108 | kubectl create -f nginx-pod.yaml
109 | ```
110 |
111 | #### Run a bash shell in one of the Cilium pods (Terminal Tab 2)
112 | ```sh
113 | kubectl -n kube-system exec -ti ds/cilium -- bash
114 | ```
115 |
116 | #### Install tcpdump and check if traffic is encrypted
117 | ```sh
118 | apt-get update
119 | apt-get -y install tcpdump
120 | ```
121 |
122 | ```sh
123 | tcpdump -n -i cilium_vxlan esp
124 | ```
125 |
126 | #### In Terminal Tab 1
127 | ```sh
128 | kubectl get pods -o wide
129 |
130 | kubectl exec -it curl -- sh
131 |
132 | curl <nginx-pod-ip>
133 | ```
134 |
135 | #### Delete the Kind Cluster
136 | ```sh
137 | kind delete cluster
138 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-encryption-wireguard.md:
--------------------------------------------------------------------------------
1 | #### Documentation Referenced:
2 |
3 | https://docs.cilium.io/en/stable/security/network/encryption-wireguard/
4 |
5 | #### Install Kind
6 | ```sh
7 | wget https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-3-minimize-microservice-vulnerability/kind-install.sh
8 |
9 | chmod +x kind-install.sh
10 |
11 | ./kind-install.sh
12 | ```
13 | #### Configure K8s using Kind
14 |
15 | ```sh
16 | nano config.yaml
17 | ```
18 | ```sh
19 | kind: Cluster
20 | apiVersion: kind.x-k8s.io/v1alpha4
21 | nodes:
22 | - role: control-plane
23 | image: kindest/node:v1.32.2
24 | - role: worker
25 | image: kindest/node:v1.32.2
26 | - role: worker
27 | image: kindest/node:v1.32.2
28 | networking:
29 | disableDefaultCNI: true
30 | ```
31 | ```sh
32 | kind create cluster --config config.yaml
33 | ```
34 |
35 | ### Enable Transparent Encryption in Cilium (WireGuard)
36 | ```sh
37 | cilium install --version 1.17.1 --set encryption.enabled=true --set encryption.type=wireguard
38 |
39 | cilium status
40 |
41 | cilium config view | grep enable-wireguard
42 | ```
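The agent itself also reports the active encryption mode; one way to confirm it from inside a Cilium pod (the `cilium-dbg` binary name applies to recent Cilium releases; older images ship it as `cilium`):

```sh
kubectl -n kube-system exec -ti ds/cilium -- cilium-dbg status | grep Encryption
```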
43 | ### Testing the Setup
44 |
45 | #### Launch 2 Pods on different worker nodes (Terminal Tab 1)
46 |
47 | ```sh
48 | nano curl-pod.yaml
49 | ```
50 | ```sh
51 | apiVersion: v1
52 | kind: Pod
53 | metadata:
54 | name: curl
55 | spec:
56 | nodeSelector:
57 | kubernetes.io/hostname: kind-worker
58 | containers:
59 | - name: busybox
60 | image: alpine/curl
61 | command: ["sleep", "36000"]
62 | ```
63 | ```sh
64 | kubectl create -f curl-pod.yaml
65 | ```
66 | ```sh
67 | nano nginx-pod.yaml
68 | ```
69 |
70 | ```sh
71 | apiVersion: v1
72 | kind: Pod
73 | metadata:
74 | name: nginx
75 | spec:
76 | nodeSelector:
77 | kubernetes.io/hostname: kind-worker2
78 | containers:
79 | - name: nginx
80 | image: nginx
81 | ```
82 |
83 | ```sh
84 | kubectl create -f nginx-pod.yaml
85 | ```
86 |
87 | #### Run a bash shell in one of the Cilium pods (Terminal Tab 2)
88 | ```sh
89 | kubectl -n kube-system exec -ti ds/cilium -- bash
90 | ```
91 |
92 | #### Install tcpdump and check if traffic is encrypted
93 | ```sh
94 | apt-get update
95 | apt-get -y install tcpdump
96 | ```
97 | ```sh
98 | tcpdump -n -i cilium_wg0 -nn -vv
99 | ```
100 |
101 |
102 | #### In Terminal Tab 1
103 | ```sh
104 | kubectl get pods -o wide
105 |
106 | kubectl exec -it curl -- sh
107 |
108 | curl <nginx-pod-ip>
109 | ```
110 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-entities.md:
--------------------------------------------------------------------------------
1 | #### Create 2 Pods for Testing
2 | ```sh
3 | kubectl run nginx --image=nginx
4 |
5 | kubectl run curl --image=alpine/curl -- sleep 36000
6 | ```
7 |
8 | #### Entities - Cluster
9 | ```sh
10 | nano entities-cluster.yaml
11 | ```
12 | ```sh
13 | apiVersion: "cilium.io/v2"
14 | kind: CiliumNetworkPolicy
15 | metadata:
16 | name: "restrict-egress-to-cluster"
17 | spec:
18 | endpointSelector: {}
19 | egress:
20 | - toEntities:
21 | - "cluster"
22 | ```
23 | ```sh
24 | kubectl create -f entities-cluster.yaml
25 | ```
26 |
27 | #### Test the Setup
28 |
29 | ```sh
30 | kubectl get pods -o wide
31 |
32 | kubectl exec -it curl -- sh
33 |
34 | curl <nginx-pod-ip>
35 |
36 | ping google.com
37 |
38 | curl google.com
39 | ```
40 | ```sh
41 | kubectl delete -f entities-cluster.yaml
42 | ```
43 | #### Entities - World
44 |
45 | ```sh
46 | nano entities-world.yaml
47 | ```
48 | ```sh
49 | apiVersion: "cilium.io/v2"
50 | kind: CiliumNetworkPolicy
51 | metadata:
52 | name: "restrict-egress-to-world"
53 | spec:
54 | endpointSelector: {}
55 | egress:
56 | - toEntities:
57 | - "world"
58 | ```
59 | ```sh
60 | kubectl create -f entities-world.yaml
61 | ```
62 |
63 | #### Test the Setup
64 | ```sh
65 | kubectl get pods -o wide
66 |
67 | kubectl exec -it curl -- sh
68 |
69 | curl <nginx-pod-ip>
70 | curl google.com
71 | ping google.com
72 | ```
73 | ```sh
74 | kubectl delete -f entities-world.yaml
75 | ```
76 | #### Entities - All
77 |
78 | ```sh
79 | nano entities-all.yaml
80 | ```
81 | ```sh
82 | apiVersion: "cilium.io/v2"
83 | kind: CiliumNetworkPolicy
84 | metadata:
85 | name: "allow-all-egress"
86 | spec:
87 | endpointSelector: {}
88 | egress:
89 | - toEntities:
90 | - "all"
91 | ```
92 | ```sh
93 | kubectl create -f entities-all.yaml
94 | ```
95 | #### Test the Setup
96 | ```sh
97 | kubectl get pods -o wide
98 |
99 | kubectl exec -it curl -- sh
100 |
101 | curl <nginx-pod-ip>
102 |
103 | curl google.com
104 |
105 | ping google.com
106 | ```
107 | ```sh
108 | kubectl delete -f entities-all.yaml
109 | ```
110 |
111 | #### Delete the Resources Created for this Lab
112 | ```sh
113 | kubectl delete pods --all
114 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-install.md:
--------------------------------------------------------------------------------
1 | ### Configure kubeadm
2 | ```sh
3 | wget https://raw.githubusercontent.com/zealvora/certified-kubernetes-security-specialist/refs/heads/main/domain-3-minimize-microservice-vulnerability/kubeadm-master.sh
4 |
5 | chmod +x kubeadm-master.sh
6 |
7 | ./kubeadm-master.sh
8 | ```
9 |
10 | ### Install Cilium
11 | ```sh
12 | CLI_ARCH=amd64
13 |
14 | if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
15 | curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/v0.16.24/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
16 |
17 | sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
18 |
19 | sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
20 |
21 | rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
22 | ```
23 | ```sh
24 | cilium install
25 |
26 | cilium status
27 |
28 | kubectl get nodes
29 | ```
30 |
31 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-layer-4.md:
--------------------------------------------------------------------------------
1 | #### Documentation Referenced:
2 |
3 | https://docs.cilium.io/en/latest/security/policy/index.html
4 |
5 | #### Create 2 Pods for Testing
6 | ```sh
7 | kubectl run nginx --image=nginx
8 |
9 | kubectl run curl --image=alpine/curl -- sleep 36000
10 | ```
11 |
12 | #### Create Cilium Network Policy
13 | ```sh
14 | nano cnp-l4.yaml
15 | ```
16 | ```sh
17 | apiVersion: "cilium.io/v2"
18 | kind: CiliumNetworkPolicy
19 | metadata:
20 | name: allow-external-80
21 | spec:
22 | endpointSelector:
23 | matchLabels:
24 | run: curl
25 | egress:
26 | - toPorts:
27 | - ports:
28 | - port: "80"
29 | protocol: TCP
30 | ```
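The policy file then needs to be applied before testing, mirroring the other labs:

```sh
kubectl create -f cnp-l4.yaml

kubectl get cnp
```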
31 |
32 | #### Test the Setup
33 |
34 | ```sh
35 | kubectl get pods -o wide
36 |
37 | kubectl exec -it curl -- sh
38 |
39 | curl <nginx-pod-ip>
40 |
41 | ping google.com
42 |
43 | curl google.com
44 | ```
45 |
46 | #### Delete the Resources Created for this Lab
47 | ```sh
48 | kubectl delete -f cnp-l4.yaml
49 |
50 | kubectl delete pods --all
51 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cilium-netpol.md:
--------------------------------------------------------------------------------
1 | #### Documentation Referenced:
2 |
3 | https://docs.cilium.io/en/latest/security/policy/index.html
4 |
5 | #### Create 2 Pods for Testing
6 | ```sh
7 | kubectl run nginx --image=nginx
8 |
9 | kubectl run curl --image=alpine/curl -- sleep 36000
10 | ```
11 | #### Verify the connectivity
12 | ```sh
13 | kubectl get pods -o wide
14 |
15 | kubectl exec -it curl -- sh
16 |
17 | curl <nginx-pod-ip>
18 | ```
19 |
20 | ### Example 1 - Simple Deny Policy
21 | ```sh
22 | nano deny.yaml
23 | ```
24 | ```sh
25 | apiVersion: "cilium.io/v2"
26 | kind: CiliumNetworkPolicy
27 | metadata:
28 | name: "deny-traffic"
29 | spec:
30 | endpointSelector: {}
31 | ingress:
32 | - {}
33 | egress:
34 | - {}
35 | ```
36 | ```sh
37 | kubectl create -f deny.yaml
38 |
39 | kubectl get ciliumnetworkpolicies
40 | ```
41 | #### Testing - Simple Deny Policy
42 | ```sh
43 | kubectl exec -it curl -- sh
44 |
45 | curl <nginx-pod-ip>
46 |
47 | ping google.com
48 | ```
49 | ```sh
50 | kubectl delete -f deny.yaml
51 | ```
52 |
53 | ### Example 2 - Deny Policy for Specific Pod
54 | ```sh
55 | nano deny-pod.yaml
56 | ```
57 | ```sh
58 | apiVersion: "cilium.io/v2"
59 | kind: CiliumNetworkPolicy
60 | metadata:
61 | name: "deny-traffic"
62 | spec:
63 | endpointSelector:
64 | matchLabels:
65 | run: curl
66 | ingress:
67 | - {}
68 | egress:
69 | - {}
70 | ```
71 | ```sh
72 | kubectl create -f deny-pod.yaml
73 |
74 | kubectl get cnp
75 | ```
76 | #### Testing - Policy Number 2
77 | ```sh
78 | kubectl exec -it curl -- sh
79 | ping google.com
80 |
81 | kubectl exec -it nginx -- sh
82 | curl google.com
83 | ```
84 | ```sh
85 | kubectl delete -f deny-pod.yaml
86 | ```
87 |
88 | ### Example 3 - Allow Traffic from Curl Pod to Nginx Pod
89 |
90 | ```sh
91 | nano allow-curl-pod.yaml
92 | ```
93 | ```sh
94 | apiVersion: "cilium.io/v2"
95 | kind: CiliumNetworkPolicy
96 | metadata:
97 | name: allow-curl-nginx
98 | spec:
99 | endpointSelector:
100 | matchLabels:
101 | run: nginx
102 | ingress:
103 | - fromEndpoints:
104 | - matchLabels:
105 | run: curl
106 | ```
107 | ```sh
108 | kubectl create -f allow-curl-pod.yaml
109 | ```
110 | #### Testing - Policy 3
111 |
112 | ```sh
113 | kubectl exec -it curl -- sh
114 |
115 | curl <nginx-pod-ip>
116 |
117 | kubectl run random-pod --image=alpine/curl -- sleep 36000
118 |
119 | kubectl exec -it random-pod -- sh
120 |
121 | curl <nginx-pod-ip>
122 | ```
123 | ```sh
124 | kubectl delete -f allow-curl-pod.yaml
125 | ```
126 |
127 | ### Example 4 - Allow Egress Traffic from Curl Pod only to Nginx Pod
128 | ```sh
129 | nano egress-to-nginx.yaml
130 | ```
131 | ```sh
132 | apiVersion: "cilium.io/v2"
133 | kind: CiliumNetworkPolicy
134 | metadata:
135 | name: allow-egress-curl-to-nginx
136 | spec:
137 | endpointSelector:
138 | matchLabels:
139 | run: curl
140 | egress:
141 | - toEndpoints:
142 | - matchLabels:
143 | run: nginx
144 | ```
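As with the earlier examples, apply the policy before testing it:

```sh
kubectl create -f egress-to-nginx.yaml
```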
145 | #### Testing - Policy 4
146 |
147 | ```sh
148 | kubectl exec -it curl -- sh
149 |
150 | curl <nginx-pod-ip>
151 |
152 | ping 8.8.8.8
153 | ```
154 |
155 | ```sh
156 | kubectl delete -f egress-to-nginx.yaml
157 | ```
158 |
159 | ### Delete the Resources Created in this Practical
160 | ```sh
161 | kubectl delete pods --all
162 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/cnp-service.md:
--------------------------------------------------------------------------------
1 | #### Create Base Pods and Service
2 | ```sh
3 | kubectl run nginx-1 --image=nginx
4 |
5 | kubectl expose pod nginx-1 --port=80 --name=nginx-service --target-port=80
6 |
7 | kubectl run curl-pod --image=alpine/curl --command -- sleep infinity
8 | ```
9 |
10 | #### Create CNP
11 | ```sh
12 | nano cnp-service.yaml
13 | ```
14 | ```sh
15 | apiVersion: "cilium.io/v2"
16 | kind: CiliumNetworkPolicy
17 | metadata:
18 | name: allow-curl-to-nginx-service
19 | spec:
20 | endpointSelector:
21 | matchLabels:
22 | run: curl-pod
23 | egress:
24 | - toServices:
25 | - k8sService:
26 | serviceName: nginx-service
27 | namespace: default
28 | toPorts:
29 | - ports:
30 | - port: "80"
31 | protocol: TCP
32 | ```
33 | ```sh
34 | kubectl create -f cnp-service.yaml
35 | ```
36 | #### Testing
37 | ```sh
38 | kubectl get svc nginx-service
39 |
40 | kubectl exec curl-pod -- curl <nginx-service-cluster-ip>
41 | ```
40 |
41 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/hack-case-01.md:
--------------------------------------------------------------------------------
1 | #### Step 1 - Create a Non-Privileged Super Secret Pod:
2 | ```sh
3 | nano super-secret-pod.yaml
4 | ```
5 | ```sh
6 | apiVersion: v1
7 | kind: Pod
8 | metadata:
9 | name: super-secret
10 | namespace: development
11 | spec:
12 | containers:
13 | - image: nginx
14 | name: demo-pod
15 | ```
16 | ```sh
17 | kubectl apply -f super-secret-pod.yaml
18 | ```
19 | #### Step 2 - Create a Privileged Pod:
20 | ```sh
21 | nano privileged.yaml
22 | ```
23 | ```sh
24 | apiVersion: v1
25 | kind: Pod
26 | metadata:
27 | name: privileged
28 | namespace: development
29 | spec:
30 | containers:
31 | - image: nginx
32 | name: demo-pod
33 | securityContext:
34 | privileged: true
35 | ```
36 | ```sh
37 | kubectl apply -f privileged.yaml
38 | ```
39 | #### Step 3 - Store Secret Data in Super Secret Pod:
40 | ```sh
41 | kubectl exec -n development -it super-secret -- bash
42 | echo "TOP Secret" > /root/super-secret.txt
43 | ```
44 | logout of the super-secret pod
45 |
46 | #### Step 4 - Connect to Privileged Pod:
47 |
48 | Note: The disk name might be different in your case. Make sure you mount the correct device.
49 | ```sh
50 | kubectl exec -n development -it privileged -- bash
51 | mkdir /host-data
52 | ```
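Since the device name differs between environments (`/dev/vda1`, `/dev/sda1`, `/dev/nvme0n1p1`, ...), list the host's block devices from inside the privileged pod first:

```sh
lsblk
```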
53 | ```sh
54 | mount /dev/vda1 /host-data
55 | cd /host-data
56 | cd /host-data/var/lib/docker/overlay2
57 | find . -name super-secret.txt
58 | ```
59 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/hostpath.md:
--------------------------------------------------------------------------------
1 | #### Create a HostPath Volume:
2 | ```sh
3 | nano host-path.yaml
4 | ```
5 | ```sh
6 | apiVersion: v1
7 | kind: Pod
8 | metadata:
9 | name: hostpath-pod
10 | spec:
11 | containers:
12 | - image: nginx
13 | name: test-container
14 | volumeMounts:
15 | - mountPath: /host-path
16 | name: host-volume
17 | volumes:
18 | - name: host-volume
19 | hostPath:
20 | path: /
21 | ```
22 | ```sh
23 | kubectl apply -f host-path.yaml
24 | kubectl exec -it hostpath-pod -- bash
25 | ```
26 | #### Improved Pod Security Policy (note: PodSecurityPolicy was removed in Kubernetes v1.25; shown for reference):
27 | ```sh
28 | nano /root/psp/restrictive-psp.yaml
29 | ```
30 | ```sh
31 | apiVersion: policy/v1beta1
32 | kind: PodSecurityPolicy
33 | metadata:
34 | name: restrictive-psp
35 | annotations:
36 | seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
37 | spec:
38 | privileged: false
39 | allowPrivilegeEscalation: true
40 | allowedCapabilities:
41 | - '*'
42 | volumes:
43 | - configMap
44 | - downwardAPI
45 | - emptyDir
46 | - persistentVolumeClaim
47 | - secret
48 | - projected
49 | hostNetwork: true
50 | hostPorts:
51 | - min: 0
52 | max: 65535
53 | hostIPC: true
54 | hostPID: true
55 | runAsUser:
56 | rule: 'RunAsAny'
57 | seLinux:
58 | rule: 'RunAsAny'
59 | supplementalGroups:
60 | rule: 'RunAsAny'
61 | fsGroup:
62 | rule: 'RunAsAny'
63 | ```
64 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/hostpid.md:
--------------------------------------------------------------------------------
1 |
2 | #### Sample YAML File based on Host PID:
3 | ```sh
4 | nano host-pid.yaml
5 | ```
6 | ```sh
7 | apiVersion: v1
8 | kind: Pod
9 | metadata:
10 | name: hostpid
11 | spec:
12 | hostPID: true
13 | containers:
14 | - image: busybox
15 | name: demo-pod
16 | command:
17 | - sleep
18 | - "3600"
19 | ```
20 | ```sh
21 | kubectl apply -f host-pid.yaml
22 | ```
23 | #### Create Sample Pods:
24 | ```sh
25 | kubectl run httpd-pod-1 --image=httpd
26 | kubectl run httpd-pod-2 --image=httpd
27 | ```
28 | #### Connect to hostpid POD:
29 | ```sh
30 | kubectl exec -it hostpid -- sh
31 | ps -ef | grep httpd
32 | kill -9 <httpd-PID-1> <httpd-PID-2>
33 | ```
34 | #### Improved Pod Security Policy (note: PodSecurityPolicy was removed in Kubernetes v1.25; shown for reference):
35 | ```sh
36 | apiVersion: policy/v1beta1
37 | kind: PodSecurityPolicy
38 | metadata:
39 | name: restrictive-psp
40 | annotations:
41 | seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
42 | spec:
43 | privileged: false
44 | allowPrivilegeEscalation: true
45 | allowedCapabilities:
46 | - '*'
47 | volumes:
48 | - configMap
49 | - downwardAPI
50 | - emptyDir
51 | - persistentVolumeClaim
52 | - secret
53 | - projected
54 | hostNetwork: true
55 | hostPorts:
56 | - min: 0
57 | max: 65535
58 | hostIPC: true
59 | hostPID: false
60 | runAsUser:
61 | rule: 'RunAsAny'
62 | seLinux:
63 | rule: 'RunAsAny'
64 | supplementalGroups:
65 | rule: 'RunAsAny'
66 | fsGroup:
67 | rule: 'RunAsAny'
68 | ```
69 | #### Reapply the restrictive policy to disallow hostPID:
70 | ```sh
71 | kubectl apply -f restrictive-psp.yaml
72 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/image-pull-policy.md:
--------------------------------------------------------------------------------
1 | #### Using the --help to find exact flag
2 | ```sh
3 | kubectl run --help
4 | ```
5 | #### Create Pod with ImagePullPolicy of Always
6 | ```sh
7 | kubectl run always-pod --image=nginx --image-pull-policy=Always
8 |
9 | kubectl describe pod always-pod
10 | ```
11 | #### Create Pod with ImagePullPolicy of IfNotPresent
12 | ```sh
13 | kubectl run if-not-present --image=nginx --image-pull-policy=IfNotPresent
14 |
15 | kubectl describe pod if-not-present
16 | ```
17 | #### Create Pod with ImagePullPolicy of Never
18 | ```sh
19 | kubectl run never-pod --image=ubuntu --image-pull-policy=Never
20 |
21 | kubectl get pods
22 |
23 | kubectl describe pod never-pod
24 | ```
25 |
26 | #### Miscellaneous Commands Used for Demonstration
27 | ```sh
28 | crictl images
29 | crictl images -v
30 | ```
31 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/imagepolicywebhook-custom.md:
--------------------------------------------------------------------------------
1 | ### GitHub Page:
2 |
3 | https://github.com/flavio/kube-image-bouncer
4 |
5 | #### Switch to the controlconf directory:
6 | ```sh
7 | cd /etc/kubernetes/controlconf
8 | ```
9 | #### Creating Custom Webhook Configuration File:
10 | ```sh
11 | nano custom-config.yaml
12 | ```
13 | ```sh
14 | apiVersion: apiserver.config.k8s.io/v1
15 | kind: AdmissionConfiguration
16 | plugins:
17 | - name: ImagePolicyWebhook
18 | configuration:
19 | imagePolicy:
20 | kubeConfigFile: /etc/kubernetes/controlconf/custom-webhook.kubeconfig
21 | allowTTL: 50
22 | denyTTL: 50
23 | retryBackoff: 500
24 | defaultAllow: false
25 | ```
26 |
27 | #### Creating Custom Webhook kubeconfigfile:
28 | ```sh
29 | nano custom-webhook.kubeconfig
30 | ```
31 | ```sh
32 | apiVersion: v1
33 | kind: Config
34 | clusters:
35 | - cluster:
36 | certificate-authority: /etc/kubernetes/controlconf/webhook.crt
37 | server: https://bouncer.local.lan:1323/image_policy
38 | name: bouncer_webhook
39 | contexts:
40 | - context:
41 | cluster: bouncer_webhook
42 | user: api-server
43 | name: bouncer_validator
44 | current-context: bouncer_validator
45 | preferences: {}
46 | users:
47 | - name: api-server
48 | user:
49 | client-certificate: /etc/kubernetes/controlconf/api-user.crt
50 | client-key: /etc/kubernetes/controlconf/api-user.key
51 | ```
52 | #### Modify the kube-apiserver Manifest:
53 | ```sh
54 | nano /etc/kubernetes/manifests/kube-apiserver.yaml
55 | ```
56 | Setting Host Alias:
57 | ```sh
58 | hostAliases:
59 | - ip: "134.209.147.191"
60 | hostnames:
61 | - "bouncer.local.lan"
62 |
63 | - mountPath: /etc/kubernetes/controlconf
64 | name: admission-controller
65 | readOnly: true
66 | ```
67 | ```sh
68 | - hostPath:
69 | path: /etc/kubernetes/controlconf
70 | type: DirectoryOrCreate
71 | name: admission-controller
72 | ```
73 | ```sh
74 | - --admission-control-config-file=/etc/kubernetes/controlconf/custom-config.yaml
75 | ```
76 | #### Start the webhook service in a different tab:
77 | ```sh
78 | kube-image-bouncer --cert webhook.crt --key webhook.key
79 | ```
80 | #### Verification:
81 | ```sh
82 | kubectl run dem-nginx --image=nginx:latest   # rejected: ":latest" tag is bounced
83 | kubectl run dem-nginx --image=nginx:1.19.2   # allowed: pinned tag
84 | ```
85 |
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/imagewebhook.md:
--------------------------------------------------------------------------------
1 | ### Documentation:
2 |
3 | https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/
4 |
5 | ### Prerequisites
6 | ```sh
7 | apt install python3-pip -y
8 |
9 | apt install python3-venv -y
10 |
11 | python3 -m venv myenv
12 |
13 | source myenv/bin/activate
14 |
15 | pip3 install flask
16 | ```
17 | #### Configure Webhook
18 | ```sh
19 | cd /root
20 |
21 | nano image_policy_webhook.py
22 | ```
23 | ```sh
24 | from flask import Flask, request, jsonify
25 |
26 | app = Flask(__name__)
27 |
28 | # Only allow nginx and httpd images
29 | ALLOWED_IMAGES = {"nginx", "httpd"}
30 |
31 | @app.route('/validate', methods=['POST'])
32 | def validate():
33 | request_data = request.get_json()
34 |
35 | # Extract container images
36 | allowed = True
37 | for container in request_data.get("spec", {}).get("containers", []):
38 | image = container.get("image", "").split(":")[0] # Ignore tag
39 | if image not in ALLOWED_IMAGES:
40 | allowed = False
41 | break
42 |
43 | response = {
44 | "apiVersion": "imagepolicy.k8s.io/v1alpha1",
45 | "kind": "ImageReview",
46 | "status": {
47 | "allowed": allowed
48 | }
49 | }
50 |
51 | return jsonify(response)
52 |
53 | if __name__ == '__main__':
54 | app.run(host="0.0.0.0", port=8080, debug=True)
55 | ```
56 | ```sh
57 | python3 image_policy_webhook.py
58 | ```
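Before wiring the webhook into the API server, it can be smoke-tested directly. The JSON below hand-crafts a minimal `ImageReview`-style request matching what the Flask handler reads (`spec.containers[].image`); for an allowed image the response body should contain `"allowed": true`:

```sh
curl -s -X POST http://localhost:8080/validate \
  -H 'Content-Type: application/json' \
  -d '{"apiVersion":"imagepolicy.k8s.io/v1alpha1","kind":"ImageReview","spec":{"containers":[{"image":"nginx:latest"}]}}'
```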
59 |
60 |
61 | #### Configure API Server (Tab-2)
62 | ```sh
63 | apt install net-tools
64 | netstat -ntlp
65 |
66 | nano /etc/kubernetes/pki/admission-config.yaml
67 | ```
68 | ```sh
69 | apiVersion: apiserver.config.k8s.io/v1
70 | kind: AdmissionConfiguration
71 | plugins:
72 | - name: ImagePolicyWebhook
73 | configuration:
74 | imagePolicy:
75 | kubeConfigFile: "/etc/kubernetes/pki/webhook-kubeconfig"
76 | allowTTL: 50
77 | denyTTL: 50
78 | retryBackoff: 500
79 | defaultAllow: false
80 | ```
81 | ```sh
82 | nano /etc/kubernetes/pki/webhook-kubeconfig
83 | ```
84 |
85 | Replace the server IP address with your own.
86 |
87 | ```sh
88 | apiVersion: v1
89 | kind: Config
90 | clusters:
91 | - cluster:
92 | server: http://64.227.144.17:8080/validate
93 | name: webhook
94 | contexts:
95 | - context:
96 | cluster: webhook
97 | name: webhook-context
98 | current-context: webhook-context
99 | ```
100 |
101 | #### Enable Admission Controller Plugins
102 | ```sh
103 | nano /etc/kubernetes/manifests/kube-apiserver.yaml
104 | ```
105 |
106 | ```sh
107 | - --admission-control-config-file=/etc/kubernetes/pki/admission-config.yaml
108 | ```
109 |
110 | Ensure ImagePolicyWebhook Plugin is enabled
111 | ```sh
112 | - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
113 | ```
114 |
115 | #### Test the Setup
116 | ```sh
117 | kubectl run nginx-pod --image=nginx   # allowed by the webhook
118 |
119 | kubectl run redis-pod --image=redis   # denied by the webhook
120 | ```
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/kind-install.sh:
--------------------------------------------------------------------------------
1 | ### Installing Docker
2 |
3 | # Add Docker's official GPG key:
4 | sudo apt-get update
5 | sudo apt-get install ca-certificates curl -y
6 | sudo install -m 0755 -d /etc/apt/keyrings
7 | sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
8 | sudo chmod a+r /etc/apt/keyrings/docker.asc
9 |
10 | # Add the repository to Apt sources:
11 | echo \
12 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
13 | $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
14 | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
15 | sudo apt-get update
16 |
17 | sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
18 |
19 | # For AMD64 / x86_64
20 | [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64
21 | # For ARM64
22 | [ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-arm64
23 | chmod +x ./kind
24 | sudo mv ./kind /usr/local/bin/kind
25 |
26 | ### Install kubectl
27 |
28 | curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
29 |
30 | chmod +x kubectl
31 |
32 | mv kubectl /usr/local/bin
--------------------------------------------------------------------------------
/domain-3-minimize-microservice-vulnerability/kubeadm-cilium.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e # Exit on error
4 | set -o pipefail
5 | set -x # Enable debugging
6 |
7 | ### Step 1: Setup containerd ###
8 | echo "[Step 1] Installing and configuring containerd..."
9 |
10 | # Load required kernel modules
11 | cat </dev/null
35 |
36 | # Modify containerd config for systemd cgroup
37 | sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
38 |
39 | # Restart containerd
40 | sudo systemctl restart containerd
41 | sudo systemctl enable containerd
42 |
43 | ### Step 2: Kernel Parameter Configuration ###
44 | echo "[Step 2] Configuring kernel parameters..."
45 |
46 | cat </dev/null
17 | [PlDb]: <./apparmor-k8s.md>
18 | [PlDc]: <./oci.md>
19 | [PlDd]: <./kubeadm-containerd.md>
20 | [PlDe]: <./gvisor.md>
21 |
--------------------------------------------------------------------------------
/domain-4-system-hardening/apparmor-k8s.md:
--------------------------------------------------------------------------------
1 | #### Create a Sample Profile:
2 | ```sh
3 | apparmor_parser -q <<EOF
4 | #include <tunables/global>
5 |
6 | profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
7 | #include <abstractions/base>
8 |
9 | file,
10 |
11 | # Deny all file writes.
12 | deny /** w,
13 | }
14 | EOF
15 | ```
16 | #### Verify the status of the profile:
17 | ```sh
18 | aa-status
19 | ```
20 |
21 | #### Sample YAML File based on Host PID:
22 | ```sh
23 | cd /root
24 | ```
25 | ```sh
26 | nano hello-armor.yaml
27 | ```
28 | ```sh
29 | apiVersion: v1
30 | kind: Pod
31 | metadata:
32 | name: hello-apparmor
33 | spec:
34 | securityContext:
35 | appArmorProfile:
36 | type: Localhost
37 | localhostProfile: k8s-apparmor-example-deny-write
38 | containers:
39 | - name: hello
40 | image: busybox
41 | command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]
42 | ```
43 | ```sh
44 | kubectl apply -f hello-armor.yaml
45 | ```
46 |
47 | #### Verification Stage:
48 | ```sh
49 | kubectl exec -it hello-apparmor -- sh
50 | touch /tmp/file.txt   # expected to fail with "Permission denied" - the profile denies all file writes
51 | ```
52 |
--------------------------------------------------------------------------------
/domain-4-system-hardening/apparmor.md:
--------------------------------------------------------------------------------
1 | #### Check status of apparmor:
2 | ```sh
3 | systemctl status apparmor
4 |
5 | aa-status
6 | ```
7 | #### Sample Script:
8 | ```sh
9 | mkdir /root/apparmor
10 |
11 | cd /root/apparmor
12 | ```
13 | ```sh
14 | nano app.sh
15 | ```
16 | ```sh
17 | #!/bin/bash
18 | touch /tmp/file.txt
19 | echo "New File created"
20 |
21 | rm -f /tmp/file.txt
22 | echo "New file removed"
23 | ```
24 | ```sh
25 | chmod +x app.sh
26 | ```
27 | #### Install Apparmor Utils:
28 | ```sh
29 | apt install apparmor-utils -y
30 | ```
31 | #### Generate a new profile:
32 | ```sh
33 | aa-genprof ./app.sh
34 | ```
35 | ```sh
36 | ./app.sh   # run from a second terminal while aa-genprof is scanning
37 | ```
38 | #### Verify the new profile:
39 | ```sh
40 | cat /etc/apparmor.d/root.apparmor.app.sh
41 |
42 | aa-status
43 | ```
44 | #### Disable a profile:
45 | ```sh
46 | ln -s /etc/apparmor.d/root.apparmor.app.sh /etc/apparmor.d/disable/
47 |
48 | apparmor_parser -R /etc/apparmor.d/root.apparmor.app.sh
49 | ```
50 |
--------------------------------------------------------------------------------
/domain-4-system-hardening/gvisor.md:
--------------------------------------------------------------------------------
1 | ### Documentation:
2 |
3 | https://kubernetes.io/docs/concepts/containers/runtime-class/
4 |
5 | #### Step 1 - Configure Docker:
6 | ```sh
7 | sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
8 | sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
9 | sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
10 | sudo apt-get update
11 | sudo apt-get install docker-ce docker-ce-cli containerd.io
12 | ```
13 | #### Step 2 - Install and Configure Minikube:
14 | ```sh
15 | wget https://github.com/kubernetes/minikube/releases/download/v1.16.0/minikube-linux-amd64
16 | mv minikube-linux-amd64 minikube
17 | chmod +x minikube
18 | sudo mv ./minikube /usr/local/bin/minikube
19 | ```
20 | ```sh
21 | curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
22 | sudo mv kubectl /usr/local/bin
23 | chmod +x /usr/local/bin/kubectl
24 | ```
25 | ```sh
26 | sudo usermod -aG docker ubuntu
27 | ```
28 | Log out and log back in for the group change to take effect.
29 |
30 | #### Step 3 Start Minikube:
31 | ```sh
32 | minikube start --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock
33 | ```
34 | #### Step 4 Enable gVisor addon:
35 | ```sh
36 | minikube addons list
37 | minikube addons enable gvisor
38 | minikube addons list
39 | ```
40 | #### Step 5 - Explore gVisor:
41 | ```sh
42 | kubectl get runtimeclass
43 | ```
44 | ```sh
45 | nano runtimeclass.yaml
46 | ```
47 | ```sh
48 | apiVersion: node.k8s.io/v1
49 | kind: RuntimeClass
50 | metadata:
51 | name: gvisor2
52 | handler: runsc
53 | ```
54 | ```sh
55 | kubectl apply -f runtimeclass.yaml
56 | ```
57 | ```sh
58 | nano gvisor-pod.yaml
59 | ```
60 | ```sh
61 | apiVersion: v1
62 | kind: Pod
63 | metadata:
64 | name: nginx
65 | spec:
66 | runtimeClassName: gvisor2
67 | containers:
68 | - image: nginx
69 | name: nginx
70 | ```
71 | ```sh
72 | kubectl apply -f gvisor-pod.yaml
73 | ```
74 |
75 | #### Create one more pod for seeing the difference in dmesg output:
76 | ```sh
77 | kubectl run nginx-default --image=nginx
78 | ```
79 |
80 | #### Verify dmesg output
81 | ```sh
82 | kubectl exec -it nginx -- bash
83 | dmesg
84 | exit
85 | ```
86 | ```sh
87 | kubectl exec -it nginx-default -- bash
88 | dmesg
89 | ```
90 |
--------------------------------------------------------------------------------
/domain-4-system-hardening/kubeadm-calico.md:
--------------------------------------------------------------------------------
1 |
2 | ##### Step 1: Setup containerd
3 | ```sh
4 | cat < /etc/containerd/config.toml
27 | ```
28 | ```sh
29 | nano /etc/containerd/config.toml
30 | ```
31 | --> SystemdCgroup = true
32 |
33 | ```sh
34 | systemctl restart containerd
35 | ```
36 |
37 | ##### Step 2: Kernel Parameter Configuration
38 | ```sh
39 | cat < /etc/containerd/config.toml
32 | systemctl restart containerd
33 | ```
34 | #### Create Container with Containerd:
35 |
36 | ```sh
37 | ctr image pull docker.io/library/nginx:latest
38 | ctr image ls
39 | ctr container create docker.io/library/nginx:latest nginx
40 | ctr container list
41 | ```
42 | #### Get the snapshot to get the contents:
43 | ```sh
44 | mkdir /root/nginx-rootfs
45 | ctr snapshot mounts nginx-rootfs/ nginx | bash
46 | ```
47 |
48 | #### Generate the config.yaml:
49 | ```sh
50 | cd /root
51 | runc spec
52 | ```
53 | Modify config.yaml so that the rootfs path points to the nginx-rootfs directory.
54 |
55 | #### Create a container from runc:
56 | ```sh
57 | runc run mycontainer
58 | ```
59 |
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/Readme.md:
--------------------------------------------------------------------------------
1 | # Domain - Core Concepts
2 |
3 | The code mentioned in this document is used in the Certified Kubernetes Security Specialist 2025 video course.
4 |
5 |
6 | # Video-Document Mapper
7 |
8 | | Sr No | Document Link |
9 | | ------ | ------ |
10 | | 1 | [Scan images for known vulnerabilities][PlDa] |
11 | | 2 | [Scanning K8s Clusters for Security Best Practices][PlDb] |
12 | | 3 | [Static Analysis][PlDc] |
13 | | 4 | [Dockerfile - Security Best Practices][PlDd] |
14 | | 5 | [Securing Docker Daemon][PlDe] |
15 | | 6 | [Practical - Docker Daemon Socket on TLS][PlDf] |
16 | | 7 | [SBOM Practical][PlDg] |
17 |
18 | [PlDa]: <./trivy.md>
19 | [PlDb]: <./kube-bench.md>
20 | [PlDc]: <./static-analysis.md>
21 | [PlDd]: <./dockerfile-best-practice.md>
22 | [PlDe]: <./docker-security.md>
23 | [PlDf]: <./docker-tls.md>
24 | [PlDg]: <./sbom.md>
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/docker-daemon.md:
--------------------------------------------------------------------------------
1 | ### Run Docker through CLI
2 | ```sh
3 | systemctl stop docker
4 | systemctl stop docker.socket
5 |
6 | dockerd
7 | ```
8 | ### Test Container Launch
9 | ```sh
10 | docker run -d --name nginx nginx
11 |
12 | docker exec -it nginx bash
13 |
14 | cat /etc/resolv.conf
15 | ```
16 |
17 | ### Run Docker through CLI with DNS Flag
18 | ```sh
19 | dockerd --dns=8.8.8.8
20 |
21 | docker run -d --name nginx2 nginx
22 |
23 | docker exec -it nginx2 bash
24 |
25 | cat /etc/resolv.conf
26 | ```
27 |
28 | ### Configure daemon.json file
29 | ```sh
30 | nano /etc/docker/daemon.json
31 | ```
32 | ```sh
33 | {
34 | "dns": ["8.8.4.4"]
35 | }
36 | ```
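Before starting `dockerd` again, it is worth confirming the file is valid JSON, since a malformed daemon.json stops the daemon from starting at all. A minimal sketch, assuming `python3` is available (run `python3 -m json.tool /etc/docker/daemon.json` against the real file):
```sh
# Validate daemon.json syntax; json.tool exits non-zero on malformed JSON
echo '{"dns": ["8.8.4.4"]}' | python3 -m json.tool && echo "daemon.json OK"
```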
37 | ```sh
38 | dockerd
39 |
40 | docker run -d --name nginx3 nginx
41 |
42 | docker exec -it nginx3 bash
43 |
44 | cat /etc/resolv.conf
45 | ```
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/docker-security.md:
--------------------------------------------------------------------------------
1 | ### Install Docker in Ubuntu
2 |
3 | ```sh
4 | sudo apt-get update
5 | sudo apt-get install ca-certificates curl
6 | sudo install -m 0755 -d /etc/apt/keyrings
7 | sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
8 | sudo chmod a+r /etc/apt/keyrings/docker.asc
9 |
10 | echo \
11 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
12 | $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
13 | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
14 | sudo apt-get update
15 | ```
16 |
17 | ```sh
18 | sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
19 | ```
20 |
21 | ```sh
22 | systemctl status docker
23 | ```
24 |
25 | ### Docker Group Security
26 | ```sh
27 | useradd test-user
28 |
29 | id test-user
30 |
31 | usermod -aG docker test-user
32 | passwd test-user
33 |
34 | su - test-user
35 |
36 | sudo su -
37 |
38 | docker run --rm -it --privileged --net=host -v /:/mnt ubuntu chroot /mnt bash
39 |
40 | whoami
41 |
42 | useradd attacker
43 |
44 | usermod -aG root attacker
45 | ```
46 | Logout from container
47 | ```sh
48 | userdel attacker
49 |
50 | gpasswd -d test-user docker
51 |
52 | gpasswd -d attacker root
53 | ```
54 |
55 | ### Expose the Docker API
56 | ```sh
57 | nano /usr/lib/systemd/system/docker.service
58 | ```
59 | ```sh
60 | ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
61 | ```
62 |
63 | ```sh
64 | systemctl daemon-reload
65 |
66 | systemctl restart docker
67 | ```
68 | ### Testing if Docker API Is Exposed
69 |
70 | An attacker can check if the Docker remote API is accessible using curl:
71 | ```sh
72 | curl http://127.0.0.1:2375/version
73 | ```
74 | An attacker can list all running containers:
75 | ```sh
76 |
77 | docker run -d nginx
78 |
79 | curl http://127.0.0.1:2375/containers/json
80 | ```
81 |
82 | Create New Container
83 | ```sh
84 | curl -X POST -H "Content-Type: application/json" --data '{"Image": "nginx", "Cmd": ["tail", "-f", "/dev/null"], "HostConfig": {"Privileged": true}}' http://127.0.0.1:2375/containers/create
85 | ```
86 |
87 | Start the Container
88 | ```sh
89 | curl -X POST http://127.0.0.1:2375/containers/<container-id>/start
90 | ```
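The create call returns a JSON body whose `Id` field is what gets substituted into the start URL. A self-contained sketch of extracting it with `sed` (the response body below is a hand-written example, not real daemon output):
```sh
# Hypothetical response body from POST /containers/create:
RESPONSE='{"Id":"f1d2d2f924e986ac86fdf7b36c94bcdf32beec15","Warnings":[]}'

# Pull the Id field out of the JSON:
CONTAINER_ID=$(echo "$RESPONSE" | sed -n 's/.*"Id":"\([^"]*\)".*/\1/p')
echo "$CONTAINER_ID"
```
An attacker would then substitute `$CONTAINER_ID` into the start URL above.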
91 |
92 | ### Reverting the Changes:
93 |
94 | 1. Remove Docker API from /usr/lib/systemd/system/docker.service
95 | ```sh
96 | systemctl daemon-reload
97 |
98 | systemctl stop docker
99 |
100 | systemctl stop docker.socket
101 |
102 | systemctl start docker
103 |
104 | docker ps -a
105 | docker rm <container-id>
106 |
107 | docker images
108 | docker rmi <image-id>
109 | ```
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/docker-tls.md:
--------------------------------------------------------------------------------
1 |
2 | ### Generate CA Key and Certificate
3 | ```sh
4 | mkdir -p /etc/docker/certs
5 | cd /etc/docker/certs
6 |
7 | openssl genpkey -algorithm RSA -out ca-key.pem
8 |
9 | openssl req -new -x509 -days 365 -key ca-key.pem -subj "/CN=MyDockerCA" -out ca.pem
10 | ```
11 | ### Generate Server Certificate and Key
12 | ```sh
13 | openssl genpkey -algorithm RSA -out server-key.pem
14 |
15 | openssl req -new -key server-key.pem -subj "/CN=docker.kplabs.internal" -out server.csr
16 |
17 | openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server-cert.pem
18 | ```
19 | ### Generate Client Certificate and Key
20 | ```sh
21 | openssl genpkey -algorithm RSA -out client-key.pem
22 |
23 | openssl req -new -key client-key.pem -subj "/CN=client" -out client.csr
24 |
25 | openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out client-cert.pem
26 | ```
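Before wiring the certificates into Docker, the chain can be sanity-checked with `openssl verify`. This sketch repeats the client-certificate steps in a throwaway directory so the real files under /etc/docker/certs stay untouched:
```sh
mkdir -p /tmp/certs-demo && cd /tmp/certs-demo

# Throwaway CA
openssl genpkey -algorithm RSA -out ca-key.pem
openssl req -new -x509 -days 365 -key ca-key.pem -subj "/CN=MyDockerCA" -out ca.pem

# Throwaway client certificate signed by that CA
openssl genpkey -algorithm RSA -out client-key.pem
openssl req -new -key client-key.pem -subj "/CN=client" -out client.csr
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out client-cert.pem

# The client certificate should chain back to the CA
openssl verify -CAfile ca.pem client-cert.pem
```
The same `openssl verify -CAfile` check works against the real files generated above.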
27 |
28 | ### Configure Docker Daemon
29 | ```sh
30 | nano /etc/docker/daemon.json
31 | ```
32 | ```sh
33 | {
34 | "tls": true,
35 | "tlsverify": true,
36 | "tlscacert": "/etc/docker/certs/ca.pem",
37 | "tlscert": "/etc/docker/certs/server-cert.pem",
38 | "tlskey": "/etc/docker/certs/server-key.pem",
39 | "hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"]
40 | }
41 | ```
42 |
43 | ### Modify systemd service file
44 | ```sh
45 | nano /usr/lib/systemd/system/docker.service
46 | ```
47 | Modify the `ExecStart` to only use dockerd without any flags
48 | ```sh
49 | ExecStart=/usr/bin/dockerd
50 | ```
51 |
52 | ### Restart Docker
53 | ```sh
54 | systemctl daemon-reload
55 | systemctl restart docker
56 | ```
57 |
58 | ### Testing the Setup
59 | ```sh
60 | apt-get install net-tools
61 |
62 | netstat -ntlp
63 | ```
64 | ```sh
65 | curl http://127.0.0.1:2376/version
66 |
67 | curl --cert /etc/docker/certs/client-cert.pem --key /etc/docker/certs/client-key.pem --cacert /etc/docker/certs/ca.pem https://127.0.0.1:2376/version
68 | ```
69 | ```sh
70 | nano /etc/hosts
71 | ```
72 | Add following line
73 | ```sh
74 | 127.0.0.1 docker.kplabs.internal
75 | ```
76 | ```sh
77 | curl --cert /etc/docker/certs/client-cert.pem --key /etc/docker/certs/client-key.pem --cacert /etc/docker/certs/ca.pem https://docker.kplabs.internal:2376/version
78 | ```
79 |
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/dockerfile-best-practice.md:
--------------------------------------------------------------------------------
1 | ```sh
2 | nano app.sh
3 | ```
4 | ```sh
5 | #!/bin/sh
6 | echo "Hello, Docker"
7 | ```
8 | ### Base Dockerfile
9 |
10 | ```sh
11 | FROM ubuntu:18.04
12 | USER root
13 | RUN apt-get update
14 | RUN apt-get install -y curl
15 | RUN apt-get install -y wget
16 | RUN apt-get install -y nano
17 | EXPOSE 80
18 | WORKDIR /usr/src/app
19 | COPY app.sh .
20 | RUN chmod -R 777 /usr/src/app/app.sh
21 | CMD ["./app.sh"]
22 | ```
23 |
24 | #### Verification
25 | ```sh
26 | docker build -t myapp:v1 .
27 |
28 | docker images
29 |
30 | docker run myapp:v1
31 | ```
32 | ### Revision 1 - Dockerfile
33 | ```sh
34 | echo > Dockerfile
35 | ```
36 | ```sh
37 | FROM ubuntu:24.10
38 |
39 | RUN apt-get update && \
40 | apt-get install -y curl && \
41 | apt-get install -y wget && \
42 | apt-get install -y nano && \
43 | useradd appuser
44 |
45 | EXPOSE 80
46 | WORKDIR /usr/src/app
47 | COPY app.sh .
48 |
49 | RUN chmod 700 app.sh && chown appuser:appuser app.sh
50 | USER appuser
51 | CMD ["./app.sh"]
52 | ```
53 | #### Verification
54 | ```sh
55 | docker build -t myapp:v2 .
56 |
57 | docker images
58 |
59 | docker run myapp:v2
60 | ```
61 |
62 | ### Revision 2 - Dockerfile
63 | ```sh
64 | FROM alpine:latest
65 | WORKDIR /usr/src/app
66 | COPY app.sh .
67 | RUN chmod +x app.sh
68 | RUN adduser -D appuser && chown appuser:appuser app.sh
69 | USER appuser
70 | CMD ["/usr/src/app/app.sh"]
71 | ```
72 |
73 | #### Verification
74 | ```sh
75 | docker build -t myapp:v3 .
76 |
77 | docker images
78 |
79 | docker run myapp:v3
80 | ```
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/kube-bench.md:
--------------------------------------------------------------------------------
1 | ### Documentation:
2 |
3 | https://github.com/aquasecurity/kube-bench#download-and-install-binaries
4 |
5 | #### Installation Steps:
6 | ```sh
7 | curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.3.1/kube-bench_0.3.1_linux_amd64.deb -o kube-bench_0.3.1_linux_amd64.deb
8 | ```
9 | ```sh
10 | sudo apt install ./kube-bench_0.3.1_linux_amd64.deb -f
11 | ```
12 | #### Running kube-bench:
13 | ```sh
14 | kube-bench
15 | ```
16 |
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/sbom.md:
--------------------------------------------------------------------------------
1 | #### Pages Referred:
2 |
3 | https://github.com/kubernetes-sigs/bom
4 |
5 | https://trivy.dev/v0.33/docs/sbom/
6 |
7 |
8 | ### Install Bom
9 |
10 | ```sh
11 | wget https://github.com/kubernetes-sigs/bom/releases/download/v0.6.0/bom-amd64-linux
12 |
13 | mv bom-amd64-linux bom
14 |
15 | chmod +x bom
16 |
17 | mv bom /usr/local/bin
18 | ```
19 |
20 | ### Install Trivy
21 | ```sh
22 | curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh
23 |
24 | sudo mv bin/trivy /usr/local/bin
25 | ```
26 |
27 | ### Bom Learnings
28 | ```sh
29 | bom generate spdx-json --image nginx:latest --output nginx.spdx.json
30 |
31 | bom generate spdx-json --image nginx@sha256:28edb1806e63847a8d6f77a7c312045e1bd91d5e3c944c8a0012f0b14c830c44 --output nginx.spdx.json
32 |
33 | bom document outline nginx.spdx.json
34 |
35 | bom document outline nginx.spdx.json | grep dpkg
36 | ```
37 |
38 | ### Trivy Learnings
39 |
40 | ```sh
41 | trivy image --format spdx-json --output nginx-spdx.json nginx:latest
42 |
43 | trivy image --format cyclonedx --output nginx-cyclone.json nginx:latest
44 |
45 | trivy sbom nginx-spdx.json
46 | ```
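Both SPDX and CycloneDX outputs are plain JSON, so quick inspections need no special tooling. A standalone sketch using a minimal, hand-written SPDX-style fragment (not a full SBOM) to count package entries:
```sh
# Minimal SPDX-style fragment for illustration only:
cat > /tmp/sample.spdx.json <<'EOF'
{"packages": [
  {"name": "nginx", "versionInfo": "1.27.0"},
  {"name": "openssl", "versionInfo": "3.0.13"}
]}
EOF

# Count the package entries in the SBOM:
grep -c '"name":' /tmp/sample.spdx.json
```
The same `grep -c` works against the real nginx-spdx.json generated by trivy.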
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/static-analysis.md:
--------------------------------------------------------------------------------
1 | ### Documentation:
2 |
3 | https://github.com/bridgecrewio/checkov
4 |
5 | #### Install the Tool:
6 | ```sh
7 | apt install python3-pip
8 | pip3 install checkov
9 | ```
10 | #### Our Demo Manifest File:
11 | ```sh
12 | nano pod-priv.yaml
13 | ```
14 | ```sh
15 | apiVersion: v1
16 | kind: Pod
17 | metadata:
18 | name: privileged
19 | spec:
20 | containers:
21 | - image: nginx
22 | name: demo-pod
23 | securityContext:
24 | privileged: true
25 |
26 | ```
27 | #### Perform a static analysis:
28 | ```sh
29 | checkov -f pod-priv.yaml
30 | ```
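Checkov flags the privileged container in the manifest above. A corrected manifest that clears that finding might look like the sketch below (checkov will likely still report other, unrelated baseline checks, such as missing resource limits):
```sh
apiVersion: v1
kind: Pod
metadata:
  name: unprivileged
spec:
  containers:
  - image: nginx
    name: demo-pod
    securityContext:
      privileged: false
      allowPrivilegeEscalation: false
```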
31 |
--------------------------------------------------------------------------------
/domain-5-supply-chain-security/trivy.md:
--------------------------------------------------------------------------------
1 | ### Official Repository:
2 |
3 | https://github.com/aquasecurity/trivy
4 |
5 | #### Installing Trivy:
6 | ```sh
7 | curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/master/contrib/install.sh | sh -s -- -b /usr/local/bin
8 | ```
9 | #### DockerHub page for Nginx:
10 | ```sh
11 | https://hub.docker.com/_/nginx
12 | ```
13 | #### Trivy Command to Scan an Image:
14 | ```sh
15 | trivy image nginx:1.19.5
16 | ```
17 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/Readme.md:
--------------------------------------------------------------------------------
1 | # Domain - Core Concepts
2 |
3 | The code mentioned in this document is used in the Certified Kubernetes Security Specialist 2025 video course.
4 |
5 |
6 | # Video-Document Mapper
7 |
8 | | Sr No | Document Link |
9 | | ------ | ------ |
10 | | 1 | [Installing Falco][PlDa] |
11 | | 2 | [Falco - Practical][PlDb] |
12 | | 3 | [Writing Custom Falco Rules and Macros][PlDc] |
13 | | 4 | [Falco Rule for /dev/mem][PlDd] |
14 | | 5 | [Falco Configuration File][PlDe] |
15 | | 6 | [Introduction to Sysdig][PlDf] |
16 | | 7 | [Audit Logging][PlDg] |
17 |
18 |
19 |
20 |
21 | [PlDa]: <./install-falco.md>
22 | [PlDb]: <./falco-practical.md>
23 | [PlDc]: <./writing-falco-rules.md>
24 | [PlDd]: <./falco-mem-rule.md>
25 | [PlDe]: <./falco-config-file.md>
26 | [PlDf]: <./sysdig.md>
27 | [PlDg]: <./audit-log-detailed.md>
28 |
29 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/audit-log-detailed.md:
--------------------------------------------------------------------------------
1 |
2 | #### Step 1 Create a directory for storing audit logs and audit policy:
3 | ```sh
4 | cd /etc/kubernetes/
5 | mkdir audit
6 | ```
7 | #### Step 2 - Mount the directory as HostPath Volumes in kube-apiserver:
8 | ```sh
9 | - hostPath:
10 | path: /etc/kubernetes/audit
11 | type: DirectoryOrCreate
12 | name: auditing
13 |
14 | - mountPath: /etc/kubernetes/audit
15 | name: auditing
16 | ```
17 | #### Step 3 - Add Auditing Related Configuration in kube-apiserver:
18 | ```sh
19 | - --audit-policy-file=/etc/kubernetes/audit/audit-policy.yaml
20 | - --audit-log-path=/etc/kubernetes/audit/audit.log
21 | ```
22 |
23 | #### Our Final Audit Policy:
24 | ```sh
25 | apiVersion: audit.k8s.io/v1
26 | kind: Policy
27 | omitStages:
28 | - "RequestReceived"
29 |
30 | rules:
31 |
32 | - level: None
33 | resources:
34 | - group: ""
35 | resources: ["secrets"]
36 | namespaces: ["kube-system"]
37 |
38 | - level: None
39 | users: ["system:kube-controller-manager"]
40 | resources:
41 | - group: ""
42 | resources: ["secrets"]
43 |
44 | - level: RequestResponse
45 | resources:
46 | - group: ""
47 | resources: ["secrets"]
48 | ```
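With this policy in place, every secrets access not matched by the two `None` rules lands in audit.log as one JSON object per line, so ordinary text tools can filter it. A self-contained sketch (the event below is a hand-written, trimmed example, not real apiserver output):
```sh
# Hand-written, trimmed audit event for illustration:
cat > /tmp/sample-audit.log <<'EOF'
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","verb":"get","user":{"username":"kubernetes-admin"},"objectRef":{"resource":"secrets","namespace":"default","name":"db-secret"}}
EOF

# Show only events recorded at the RequestResponse level:
grep '"level":"RequestResponse"' /tmp/sample-audit.log
```
Against the real log, the same grep runs on /etc/kubernetes/audit/audit.log.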
49 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/custom-falco-rules.md:
--------------------------------------------------------------------------------
1 |
2 | ### Documentation:
3 |
4 | https://falco.org/docs/rules/supported-fields/
5 |
6 | #### 1st - Basic Rule:
7 | ```sh
8 | - rule: The program "cat" is run in a container
9 | desc: An event will trigger every time you run cat in a container
10 | condition: evt.type = execve and container.id != host and proc.name = cat
11 | output: "cat was run inside a container"
12 | priority: INFO
13 | ```
14 | #### 2nd rule - Using Sysdig Field Class Elements:
15 | ```sh
16 | - rule: The program "cat" is run in a container
17 | desc: An event will trigger every time you run cat in a container
18 | condition: evt.type = execve and container.id != host and proc.name = cat
19 | output: "cat was run inside a container (user=%user.name container=%container.name image=%container.image proc=%proc.cmdline)"
20 | priority: INFO
21 | ```
22 | #### 3rd rule - Using Macros and List:
23 | ```sh
24 | - macro: custom_macro
25 | condition: evt.type = execve and container.id != host
26 |
27 | - list: blacklist_binaries
28 | items: [cat, grep, date]
29 |
30 | - rule: The program "cat" is run in a container
31 | desc: An event will trigger every time you run cat in a container
32 | condition: custom_macro and proc.name in (blacklist_binaries)
33 | output: "cat was run inside a container (user=%user.name container=%container.name image=%container.image proc=%proc.cmdline)"
34 | priority: INFO
35 | ```
36 |
37 | #### Part 2: Add Output
38 | ```sh
39 | - rule: The program "cat" is run in a container
40 | desc: An event will trigger every time you run cat in a container
41 | condition: evt.type = execve and container.id != host and proc.name = cat
42 | output: demo %evt.time %user.name %container.name
43 | priority: ERROR
44 | tags: [demo]
45 | ```
46 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/falco-config-file.md:
--------------------------------------------------------------------------------
1 |
2 | ### Documentation Referred:
3 |
4 | https://github.com/falcosecurity/falco/blob/0.40.0/falco.yaml
5 |
6 | ### Commands Used for Testing
7 | ```sh
8 | cd /etc/falco
9 | cp falco.yaml falco.yaml.bak
10 | ```
11 | ```sh
12 | nano falco.yaml
13 | ```
14 | Press `CTRL+W` to search for `syslog_output` and change `enabled` from true to false:
15 |
16 | ```sh
17 | syslog_output:
18 | enabled: false
19 | ```
20 |
21 | ```sh
22 | systemctl restart falco
23 | ```
24 |
25 | ### Start Falco manually
26 |
27 | ```sh
28 | systemctl stop falco
29 |
30 | falco
31 |
32 | cat /etc/shadow
33 | ```
34 |
35 | ### Revert the Changes
36 | ```sh
37 | rm -f falco.yaml
38 |
39 | mv falco.yaml.bak falco.yaml
40 |
41 | systemctl restart falco
42 |
43 | cat /etc/shadow
44 | ```
45 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/falco-exam-perspective.md:
--------------------------------------------------------------------------------
1 |
2 | #### Create a demo rule condition with the required output:
3 | ```sh
4 | - rule: The program "cat" is run in a container
5 | desc: An event will trigger every time you run cat in a container
6 | condition: evt.type = execve and container.id != host and proc.name = cat
7 | output: demo %evt.time %user.name %proc.name
8 | priority: ERROR
9 | tags: [demo]
10 | ```
11 |
12 | #### Dealing with Logging Use-Case:
13 |
14 | ```sh
15 | timeout 25s falco | grep demo
16 | nano /root/logs.txt
17 | cat /root/logs.txt | awk '{print $4 " " $5 " " $6}' > /tmp/logs.txt
18 | cat /tmp/logs.txt
19 | ```
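The `$4 " " $5 " " $6` selection assumes Falco prefixes each line with its own timestamp and priority before the `demo` marker, so fields 4-6 carry the rule's `%evt.time %user.name %proc.name`. A standalone sketch using a hand-written line in that shape:
```sh
# Hand-written line in the shape Falco emits for the demo rule:
echo '12:34:56.789101112: Error demo 12:34:56.789 root cat' \
  | awk '{print $4 " " $5 " " $6}'
```
If the output format in your Falco version differs, adjust the field numbers accordingly.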
20 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/falco-install.md:
--------------------------------------------------------------------------------
1 | ### Documentation Referenced:
2 |
3 | https://falco.org/docs/setup/packages/
4 |
5 | ### Install Falco for Ubuntu
6 |
7 | ```sh
8 | curl -fsSL https://falco.org/repo/falcosecurity-packages.asc | \
9 | sudo gpg --dearmor -o /usr/share/keyrings/falco-archive-keyring.gpg
10 |
11 | echo "deb [signed-by=/usr/share/keyrings/falco-archive-keyring.gpg] https://download.falco.org/packages/deb stable main" | \
12 | sudo tee -a /etc/apt/sources.list.d/falcosecurity.list
13 |
14 | sudo apt-get update -y
15 |
16 | sudo apt install -y dkms make linux-headers-$(uname -r)
17 | sudo apt install -y clang llvm
18 | sudo apt install -y dialog
19 | ```
20 | ```sh
21 | sudo apt-get install -y falco
22 | ```
23 |
24 | ### Check Falco status
25 | ```sh
26 | systemctl status falco
27 | ```
28 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/falco-mem-rule.md:
--------------------------------------------------------------------------------
1 | ### Verify /dev/mem access
2 | ```sh
3 | kubectl run nginx-pod --image=nginx
4 | kubectl exec -it nginx-pod -- bash
5 | ls -l /dev
6 |
7 | kubectl run privileged-pod --image=nginx --privileged
8 | kubectl exec -it privileged-pod -- bash
9 | ls -l /dev
10 | ```
11 | ### Rule to Alert /dev/mem access from Container
12 |
13 | ```sh
14 | - rule: Detect /dev/mem Access from Containers
15 | desc: Detect processes inside containers attempting to access /dev/mem
16 | condition: >
17 | (open_read or open_write) and
18 | fd.name=/dev/mem and container.id != host
19 | output: >
20 | "Container: %container.id and %container.name attempted to access /dev/mem"
21 | priority: CRITICAL
22 | ```
23 |
24 | ### Testing
25 | ```sh
26 | kubectl exec -it privileged-pod -- bash
27 |
28 | cat /dev/mem
29 | ```
30 |
31 |
32 | ### Find the Pod from Container ID
33 | ```sh
34 | kubectl get pods -A -o json > pods.json
35 |
36 | nano pods.json
37 | ```
38 | Search using `CTRL+W` and add container ID
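Each pod's status in pods.json carries runtime-prefixed container IDs, so the ID reported by Falco can also be grepped for directly instead of searching in nano. A standalone sketch with a hand-written fragment (hypothetical container ID):
```sh
# Hand-written pods.json fragment with a hypothetical container ID:
cat > /tmp/pods.json <<'EOF'
        "name": "privileged-pod",
        "containerID": "containerd://4a6f1bb8c9d0"
EOF

# Print the line before the match to reveal the container/pod name:
grep -B 1 'containerd://4a6f1bb8c9d0' /tmp/pods.json
```
Falco reports a short container ID, which is a prefix of the full runtime ID, so the same grep still matches against the real pods.json.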
39 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/falco-practical.md:
--------------------------------------------------------------------------------
1 |
2 | ### Monitor Falco Logs (Terminal Tab 1)
3 | ```sh
4 | systemctl status falco
5 |
6 | journalctl -u falco-modern-bpf.service -f
7 | ```
8 | ### Testing
9 | ```sh
10 | cat /etc/shadow
11 |
12 | kubectl run nginx-pod --image=nginx
13 |
14 | kubectl exec -it nginx-pod -- bash
15 | ```
16 |
17 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/install-falco.md:
--------------------------------------------------------------------------------
1 | ### Falco Documentation:
2 |
3 | https://falco.org/docs/getting-started/installation/
4 |
5 | #### Installation Steps:
6 | ```sh
7 | curl -s https://falco.org/repo/falcosecurity-3672BA8F.asc | apt-key add -
8 | echo "deb https://dl.bintray.com/falcosecurity/deb stable main" | tee -a /etc/apt/sources.list.d/falcosecurity.list
9 | apt-get -y install linux-headers-$(uname -r)
10 | apt-get update && apt-get install -y falco
11 | ```
12 | #### Start falco:
13 | ```sh
14 | falco
15 | ```
16 | #### Sample Rules tested:
17 | ```sh
18 | kubectl run nginx --image=nginx
19 | kubectl exec -it nginx -- bash
20 | ```
21 | ```sh
22 | mkdir /bin/tmp-dir
23 | cat /etc/shadow
24 | ```
25 |
--------------------------------------------------------------------------------
/domain-6-monitor-log-runtimesec/kubeadm-automate.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e # Exit on error
4 | set -o pipefail
5 | set -x # Enable debugging
6 |
7 | ### Step 1: Setup containerd ###
8 | echo "[Step 1] Installing and configuring containerd..."
9 |
10 | # Load required kernel modules
11 | cat </dev/null
35 |
36 | # Modify containerd config for systemd cgroup
37 | sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
38 |
39 | # Restart containerd
40 | sudo systemctl restart containerd
41 | sudo systemctl enable containerd
42 |
43 | ### Step 2: Kernel Parameter Configuration ###
44 | echo "[Step 2] Configuring kernel parameters..."
45 |
46 | cat <
13 | spawned_process and container and
14 | proc.name = "curl"
15 | output: >
16 | Suspicious process detected (curl) inside a Container.
17 | priority: WARNING
18 | ```
19 |
20 |
21 | ### Testing
22 | ```sh
23 | systemctl restart falco
24 |
25 | kubectl run nginx-pod --image=nginx
26 |
27 | kubectl exec -it nginx-pod -- bash
28 |
29 | curl google.com
30 | ```
31 |
32 | ### Improvement in Falco Rule Output
33 |
34 | ```sh
35 | - rule: Detect curl Execution in Kubernetes Pod
36 | desc: Detects when the curl utility is executed within a Kubernetes pod.
37 | condition: >
38 | spawned_process and container and
39 | proc.name = "curl"
40 | output: >
41 | Suspicious process detected (curl) inside a container_id=%container.id and container_name=%container.name
42 | priority: WARNING
43 | ```
44 |
45 | ```sh
46 | systemctl restart falco
47 | ```
48 |
49 |
50 | ### Using Macros
51 | ```sh
52 | - macro: sensitive_files
53 | condition: fd.name in (/tmp/sensitive.txt)
54 |
55 | - rule: Access to Sensitive Files
56 | desc: Detect any process attempting to read or write sensitive files
57 | condition: sensitive_files
58 | output: "Sensitive file access detected (user=%user.name process=%proc.name file=%fd.name)"
59 | priority: WARNING
60 | ```
61 |
62 | ```sh
63 | touch /tmp/sensitive.txt
64 |
65 | systemctl restart falco
66 | ```
67 | ### Optimized Macro Rule
68 | ```sh
69 | - macro: sensitive_files
70 | condition: fd.name in (/tmp/sensitive.txt)
71 |
72 | - rule: Access to Sensitive Files
73 | desc: Detect any process attempting to read or write sensitive files
74 | condition: open_read and sensitive_files
75 | output: "Sensitive file access detected (user=%user.name process=%proc.name file=%fd.name)"
76 | priority: WARNING
77 | ```
78 |
79 | ```sh
80 | echo "Hi" > /tmp/sensitive.txt
81 | ```
--------------------------------------------------------------------------------