├── Images
│   ├── Drawing 2024-04-18 00.52.58.excalidraw 1.png
│   ├── Drawing 2024-04-18 00.52.58.excalidraw.png
│   ├── Kubernetes2.png
│   ├── Pasted Image 20240416070940_767.png
│   ├── Pasted Image 20240416111456_141.png
│   ├── Pasted image 20240214190247.png
│   ├── Pasted image 20240214191101.png
│   ├── Pasted image 20240214191539.png
│   ├── Pasted image 20240214192040.png
│   ├── Pasted image 20240214214629.png
│   ├── Pasted image 20240214215214.png
│   ├── Pasted image 20240215140148.png
│   ├── Pasted image 20240215140336.png
│   ├── Pasted image 20240215150723.png
│   ├── Pasted image 20240215150752.png
│   ├── Pasted image 20240416065149.png
│   ├── Pasted image 20240416072517.png
│   ├── Pasted image 20240416073420.png
│   ├── Pasted image 20240416074133.png
│   ├── Pasted image 20240416074642.png
│   ├── Pasted image 20240416075016.png
│   ├── Pasted image 20240416111343.png
│   ├── Pasted image 20240418010901.png
│   ├── Pasted image 20240418010952.png
│   ├── Pasted image 20240418014511.png
│   ├── Pasted image 20240418014821.png
│   ├── Pasted image 20240418083923.png
│   ├── Pasted image 20240418083949.png
│   ├── Pasted image 20240418180404.png
│   ├── Pasted image 20240418184731.png
│   ├── Pasted image 20240418184937.png
│   ├── Pasted image 20240418195027.png
│   ├── Pasted image 20240419151139.png
│   ├── Pasted image 20240419160500.png
│   ├── Pasted image 20240419160525.png
│   ├── Pasted image 20240420193032.png
│   ├── Pasted image 20240504184332.png
│   ├── Pasted image 20240504184430.png
│   ├── Pasted image 20240504220756.png
│   ├── Pasted image 20240504231934.png
│   ├── Pasted image 20240504235416.png
│   ├── Screenshot from 2024-02-14 21-50-49.png
│   └── Screenshot from 2024-02-14 21-57-01.png
├── Kubernetes Day-1.md
├── Kubernetes Day-2.md
├── Kubernetes Day-3.md
├── Kubernetes Day-4.md
├── Kubernetes Day-5.md
└── README.md
/Kubernetes Day-1.md:
--------------------------------------------------------------------------------

# Kubernetes Day-1

### What is YAML?
YAML is a human-readable data serialization language, often used to write configuration files, and it works with any programming language. It is widely used in Docker, Kubernetes, etc.
- #### What is Data Serialization?
    - Serialization is the process of converting a data object (a combination of code + data) into a series of bytes that captures the state of the object in a form that is easy to transmit. The result can go into a YAML file, a database, or memory, and deserialization later reverses the process.
    - We can also use other data serialization languages such as **JSON** or **XML**.

![Pasted image 20240416065149](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/192c0667-c027-429c-8307-93d0349b4734)

- Is **YAML case-sensitive**? -> Yes, it is case-sensitive.
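For instance, a small YAML document that mixes the basic building blocks (keys with `:`, a list with `-`, nesting with spaces); all field names here are invented for illustration:

```yaml
# Illustrative only; the field names are made up for this example.
app: demo
replicas: 3
ports:
  - 80
  - 443
labels:
  team: platform
  env: dev
```

Note that YAML indentation must use spaces, never tabs.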
- YAML is made up of four basic elements: plain English words, `:`, `-`, and spaces (indentation).
- The biggest advantage of YAML is that it is a human-readable language for representing data.

To check that your YAML is syntactically correct, you can use [YAML Lint](https://www.yamllint.com/).

![Pasted image 20240416072517](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/1baae86e-e032-4e54-883c-bf51f4b68de3)

---
### Some Important Commands in Linux
- `ps aux` - Lists all information about running processes.
- `pstree` - Displays a tree diagram of processes, visually representing the parent-child relationships between them.

![Pasted image 20240416074133](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/5a5d6946-881f-4db9-afe5-6124036c0ff9)

#### What is base64 in Linux?
- In Linux, Base64 is a method used to **encode binary data** (data that contains non-printable characters) into ASCII characters.
- This is very useful for Kubernetes Secrets.
```bash
echo "rohit" | base64
# Output -> cm9oaXQK
```
- For decoding the string, we use the `-d` (decode) flag.
```bash
echo "cm9oaXQK" | base64 -d
# Output -> rohit
```

- `tcpdump` - A command-line packet analyzer that lets the user capture and analyze network traffic passing through a network interface.

![Pasted image 20240416074642](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/008c9c0b-3844-4e24-9e5c-60aa6e63d2f2)

```bash
# To capture only a limited number of packets with tcpdump
sudo tcpdump -c 10
```

![Pasted image 20240416075016](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/2ab1aaad-73ff-42ff-9c59-86f31ec4a01b)

- To save the `tcpdump` capture to a particular file called rohit.txt, use the `-w` flag, which writes the captured packets to a file.
```bash
sudo tcpdump -c 2 -w rohit.txt
```

- To read or modify such a file you may need to adjust its permissions, but be cautious about loosening permissions in a production environment.
- To give a file read, write, and execute (rwx) permissions on Unix-like systems, use the `chmod` command followed by the appropriate permission mode.

```bash
# chmod a+rwx /document/rohit.txt
chmod a+rwx rohit.txt
```

- To save packets from a specific network interface to a file using **tcpdump** (note: the original used `-c enp1s0 2`, which is invalid; the interface is selected with `-i`):

```bash
sudo tcpdump -i enp1s0 -c 2 -w rohit.txt
# note: enp1s0 (the network interface name) can differ between systems.
# sudo tcpdump -i enp1s0 -c 2 > rohit.txt (Method 2)
```

#### Important Commands:

- `crictl` - A command for interacting directly with the container runtime; it works with any runtime that implements the Container Runtime Interface (CRI).
- `ctr` - Used only for **containerd**.
```bash
# To list all containers, including stopped ones
crictl ps -a
# To see the logs of a container
crictl logs <container-id>
```

- `journalctl` - Used for querying and displaying logs from the systemd journal; systemd itself is configured under **/etc/systemd**.
```bash
# To get the entire journal
sudo journalctl

# To get the journal without the pager
sudo journalctl --no-pager

# To get logs since yesterday
sudo journalctl --since yesterday

# To get the journal for a specific time period
journalctl --since "2024-04-15 00:00:00" --until "2024-04-16 00:00:00"

# To get realtime logging
journalctl -f

# Output in JSON format
journalctl -o json

# To filter by priority (e.g., emerg, alert, crit, err, warning, notice, info, debug)
journalctl -p err
```

![Pasted image 20240416111343](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/57cc35b8-b8ad-47bf-826d-6a43fd9a9d7a)

![image](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/00461084-16c6-46d6-9a66-63b874703918)

---
### Release Cycles of Kubernetes
- [Release Cycles of Companies](https://endoflife.date/)
---
### Installation of Kubernetes:

**Kubernetes installation via Docker:**

![Pasted image 20240214190247](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/6eea2b0b-d84f-409a-b038-0a6cacc12518)

**How to start Minikube with more CPUs:**

```bash
minikube start --cpus=4
```

![Pasted image 20240214192040](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/bf0d7b2b-5b7b-4c50-a5e6-af2778f96e1d)

**How to increase the memory allocated to the cluster:**

```bash
minikube start --memory=4096
```

**How to stop the Minikube cluster:**

```bash
minikube stop
```

![Pasted image 20240214191101](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/ebcb74bc-42a8-4ec7-a260-bf9e4dd7a12f)

**How to check the number of CPUs on Ubuntu for K8s:**
```bash
nproc

cat /proc/cpuinfo | grep "processor" | wc -l
```

![Pasted image 20240214191539](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/61ed3b72-befd-4b18-a90f-b6a4ce82bdd6)

**How do we delete a cluster in K8s:**

```bash
minikube delete
```

![Pasted image 20240214214629](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/9bced533-2498-4865-ab31-a97c6c4e89ee)

**How do we start another Minikube cluster:**

```bash
minikube start -p my-second-cluster

# this will list all the clusters
minikube profile list
```

![Screenshot from 2024-02-14 21-57-01](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/a08d7826-01aa-4360-b54b-6b1ce155961b)

### If an error occurs during installation:

If you get this error while the cluster is still running:

![Pasted image 20240214215214](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/2ecd6fc8-12fe-4485-a02a-a650759da261)

**Solution:**

- Check the Docker context:
```bash
docker context ls
```

- Switch context:
```bash
docker context use default
```

- If the context is not there, create one:
```bash
docker context create default
```

- Restart Docker:
```bash
sudo systemctl restart docker
```

![Screenshot from 2024-02-14 21-50-49](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/b4452971-1266-404a-bd17-4a90aef8eaaa)

---
#### Kubernetes describes itself as an open-source container orchestration tool, but Kubernetes is really all about controllers.

**For the following reasons:**
- Kubernetes manages containerized applications, being an open-source container orchestration tool.
- But the real magic lies in Kubernetes controllers. These act like control loops, constantly monitoring and adjusting the state of your applications to match your desired state. Imagine a thermostat for your applications: that is essentially a controller in action.

![image](https://github.com/rohit-rajput1/PokeQuest/assets/76991475/636fc215-6ed4-4cfd-901a-9a0abe7890f1)

---

**`Bonus:`** You can use [KillerKoda](https://killercoda.com/) for practicing Kubernetes.

/Kubernetes Day-2.md:
--------------------------------------------------------------------------------

# Kubernetes Day-2

### What is HTTP 1.0 vs HTTP 1.1 vs HTTP 2.0 vs HTTP 3.0?
- **HTTP/1.0:**
    - Released in 1996, it used a simple request-response model over TCP.
    - It was designed for a simple web environment and static web pages.
- **HTTP/1.1:**
    - Released in 1999, it offers improvements over HTTP/1.0 such as persistent connections, pipelining, caching, and chunked transfer encoding.
    - It enhances performance and reduces latency.
- **HTTP/2.0:**
    - Released in 2015, it introduces multiplexing, a binary protocol, header compression, server push, and prioritization.
    - It resolves head-of-line blocking and improves efficiency.
- **HTTP/3.0:**
    - Released in 2022, it uses the QUIC protocol over UDP, eliminating head-of-line blocking and improving performance.
    - It is designed for fast, reliable, and secure web connections across all devices.

![Pasted image 20240418010952](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/59bb3cad-2b75-4f51-aa15-eab9cb586bb3)

### What is an **API** and why is it used?
- An API (**Application Programming Interface**) is a standard way for systems to talk to each other, allowing them to exchange information and functionality in a defined manner, typically using a client-server architecture.
- An API is a middleman for software, letting apps talk to each other and share data easily. This saves developers time by providing pre-built functions they can integrate into their apps.

### What is gRPC?
- gRPC stands for **gRPC Remote Procedure Calls**. It's an open-source framework that enables high-performance communication between applications.
- gRPC leverages the advantages of HTTP/2 for transport, like binary framing and multiplexing, but adds its own layer for defining services and data-exchange procedures.
- **gRPC is based on two key components:**
    - **Remote Procedure Calls (RPC):** A networking paradigm that lets a client application call methods on a server as if they were local procedures. gRPC builds on this idea, making it appear as if applications are talking directly to each other.
    - **Protocol Buffers (Protobuf):** A language-neutral way to define data structures. gRPC uses Protocol Buffers to efficiently serialize the data exchanged between applications, ensuring both sides understand it unambiguously.
- In Kubernetes, internal services often talk with each other using **gRPC**, while we talk to a service from outside using **REST**.

![Pasted image 20240418014821](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/019f0878-4203-4c61-9b7b-775828e3918d)

# Kubernetes Architecture:

![Kubernetes excalidraw](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/b60832ca-2bbd-4f1a-a5b1-025dcdbeff35)

- Kubernetes has two main parts: the **Control Plane (master node)** and the **Worker Plane (worker nodes)**.
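If you already have a running cluster and `kubectl` configured, a quick way to see both planes in practice is to list the nodes and the control-plane pods (standard kubectl commands; the exact output varies by cluster):

```shell
# List the nodes and their roles (control-plane vs worker)
kubectl get nodes -o wide

# The control-plane components themselves run as pods in the kube-system namespace
kubectl get pods -n kube-system
```

On a Minikube cluster you will typically see kube-apiserver, etcd, kube-scheduler, and kube-controller-manager pods in that second listing.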
## The master node has five components, as follows:

### kube-apiserver:
- **kube-apiserver architecture:**

![Pasted image 20240418180404](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/f3920412-1af1-44da-862f-d79e71fdc922)

- This is the most important component in Kubernetes: if it goes down, the whole cluster's management goes down and no control-plane activity is possible.
- **Let's say my master node is down and my application is already deployed. Will my application still be running?**
    - -> The answer is yes, because the application is deployed on the worker nodes, and the master node is only for management. So the application will keep running.
- **Authentication:** The kube-apiserver verifies the identity of users or applications trying to access the API. It checks whether they have the proper credentials using methods like client certificates, bearer tokens, or an authenticating proxy.
- **Authorization:** After a user or application is authenticated, the kube-apiserver doesn't just grant full access. It checks their authorization level using Role-Based Access Control (RBAC). This determines what actions they can perform (e.g., read, create, update, delete) on specific Kubernetes resources.
- **Admission Control:** Admission control in Kubernetes is a process that happens **after** authentication and before resources are created or modified within the cluster. It acts as an additional layer of security and validation.
    - **How it works:** After a user or application submits a request to create, update, or delete a Kubernetes resource (like a pod or deployment), the request goes through admission control. There are two kinds of admission controllers:
        - **Validating** controllers check the request to ensure it complies with defined policies (e.g., resource quotas, security best practices).
        - **Mutating** controllers change the request to enforce specific configurations (e.g., injecting resource requests and limits).
    - If an admission controller rejects the request, the resource change won't be applied, and the user receives an error message.
    - So admission control helps **maintain cluster health**, **security**, and **consistency** by ensuring that only authorized and well-defined changes are made to Kubernetes resources.
- **Admission controller architecture:**

![Pasted image 20240418184937](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/98b63491-59af-46c1-be53-e1286c4f5c93)

- How to check which admission plugins are enabled?
    - In a terminal, `cd /etc/kubernetes/manifests`; this directory holds the manifests for all the control-plane components that run as static pods.
    - Open the **kube-apiserver** manifest and check which plugins are enabled, e.g. `cat kube-apiserver.yaml | grep 'enable'`.
- **What is a CRD (Custom Resource Definition), and how does the kube-apiserver leverage it?**
    - A **CRD** (**Custom Resource Definition**) is not directly part of the kube-apiserver itself. It's a mechanism that extends the Kubernetes API, allowing you to define and manage your own custom resources. Here's the relationship:
    - **CRD:** You create a CRD as a YAML or JSON file specifying the schema and behavior of your custom resources. This file is submitted to the kube-apiserver.
    - **kube-apiserver:** The kube-apiserver acts upon the submitted CRD definition. It understands how to handle requests for your custom resources based on the CRD information.
    - So the kube-apiserver leverages the CRD definition to manage your custom resources, but the CRD itself is a separate entity that extends the capabilities of the kube-apiserver.
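As a concrete illustration of that flow, a minimal CRD manifest might look like this (the `example.com` group and `CronTab` kind are placeholders, loosely following the classic example from the upstream Kubernetes docs):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must match <plural>.<group>
  name: crontabs.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
```

After this manifest is applied, the kube-apiserver starts serving `/apis/example.com/v1/.../crontabs` endpoints as if `CronTab` were a built-in resource type.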
69 | - **Watch for Updates:** There are two main ways to watch for updates in Kubernetes, depending upon your specific needs: 70 | - **Using the watch API:** The Kubernetes API server provides built-in functionality called the "watch" API. This allows you to establish a long-running connection and receive updates whenever a relevant resource changes within the cluster. This is ideal for scenarios where you need to react to changes in real-time, like automatically scaling deployments based on resource utilization. 71 | - **Periodic list with filter:** You can periodically query the API with a filter based on the resource version to get only updated resources. This is simpler but less real-time. 72 | 73 | ### etcd: 74 | - No etcd, no Kubernetes brain. It stores cluster data and coordinates actions, keeping everything in sync. 75 | - **etcd Architecture:** 76 | 77 | ![Pasted image 20240418195027](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/162f2a66-e6c5-4ef5-b923-25abc5a77ecc) 78 | 79 | - The different components of **etcd** are as follows: 80 | - **Distributed Key-Value Store:** etcd itself is the distributed key-value store. It stores Kubernetes cluster data (pod definitions, deployments, etc.) with keys representing unique identifiers and values holding the configuration details. Multiple Kubernetes nodes can access and update this data simultaneously. 81 | - **Write-Ahead Log (WAL):** Before etcd commits a change to the key-value store, it first writes that change to the WAL. This ensures data consistency even during failures. Imagine etcd writing down the intended change in a log before updating the main key-value store. 82 | - **Protocol Buffers (compact and fast):** etcd leverages Protocol Buffers to define how data is formatted when stored or transmitted between nodes. This ensures all machines, regardless of programming language, understand the data structure. Think of it like a common language for all the machines working with etcd.
83 | - **Raft Consensus Algorithm:** Many of the world's top distributed databases are built on Raft consensus. This is the core of keeping all the distributed copies of the key-value store consistent. Raft ensures that all etcd nodes in the cluster agree on the latest data, even if some nodes fail or experience network issues. It's like a voting system where a majority of nodes agree on the final state of the data, using a randomized election timeout (typically 150~300ms). [Check the Website For More](https://thesecretlivesofdata.com/) 84 | 85 | - **How do we secure data in etcd ?** 86 | - **Data Encryption at Rest** is the primary method for securing data within etcd. It involves encrypting the data on the storage media itself using encryption keys. Even if someone gains access to the physical storage, the data will be scrambled and unreadable without the decryption key. 87 | - **Minimize Sensitive Data Storage**, which means limiting the amount of highly sensitive information stored directly in etcd. This includes credentials like passwords, API keys, or tokens. 88 | 89 | ### kube-scheduler: 90 | - Kube-scheduler is a crucial component in Kubernetes responsible for assigning pods (containers needing resources) to appropriate nodes (worker machines) within the cluster. It ensures efficient resource utilization and optimal placement of pods based on various factors. 91 | - **kube-scheduler architecture:** 92 | 93 | ![Pasted image 20240419151139](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/c207edf4-fd70-4ee8-ab22-cc8faa52ed0c) 94 | 95 | - The kube-scheduler runs **`two cycles`**, as follows: 96 | - **Scheduling Cycle :** This phase focuses on selecting a suitable node for a new or pending pod. 97 | - **Queue Up:** A new pod enters the scheduling queue, waiting for its turn to be placed on a suitable node. 98 | - **Prefilter (Optional):** These lightning-fast plugins perform basic checks to exclude a large number of nodes quickly.
They eliminate nodes with insufficient resources (CPU, memory, etc.) or those marked as unavailable due to taints (special labels indicating specific restrictions). Think of this as a preliminary scan to remove obvious mismatches. 99 | - **Filter:** These plugins conduct more in-depth examinations. They consider pod annotations (custom labels) or node labels to enforce specific placement requirements. For example, a filter plugin might ensure a pod requiring a GPU is only placed on nodes with GPUs available. It's like a more thorough screening based on defined criteria. 100 | - **Score:** Only nodes that pass the filter stage remain. Now, score plugins come into play. Each plugin assigns a score to each remaining node based on various factors. This might include available resources, current node utilization, or affinity/anti-affinity rules defined for the pod. 101 | - **Normalize Score (Optional):** Since different score plugins might use varying scales, these plugins (if enabled) adjust the scores to a common scale. This ensures scores from diverse plugins can be compared fairly in the final decision. 102 | - **Post Filter (Optional):** These plugins act as a final safety net before selecting a node. They can perform additional checks based on the combined scores or other factors. For example, a post-filter might ensure a minimum score threshold is met before considering a node, or preempt lower-priority pods to make room. 103 | - **Permit (Optional):** These plugins provide an additional layer of authorization. They can enforce security policies or business logic before a pod is bound to a node. 104 | - **Binding Cycle:** The binding cycle in Kube-scheduler is another crucial step after the scheduling cycle selects a suitable node for a pod. 105 | - **Pre-Bind (Optional):** Reserves resources on the node to prevent conflicts with other pods and checks node health or readiness as additional safety measures. 106 | - **Bind:** This is the core of the binding cycle.
Here, the kube-scheduler records its decision through the API server, and the kubelet (agent running on the chosen node) picks up the pod bound to its node. The kubelet updates its internal state to reflect the incoming pod and allocates the necessary resources. 107 | - **Post-Bind (Optional):** Updates the pod object metadata with the assigned node information and triggers notifications or events to signal successful pod binding. 108 | - **Wait-on-Permit (Optional):** This phase involves waiting for an external approval (permit) before proceeding. This could be useful for integrating with security systems or external authorization mechanisms. 109 | 110 | ### kube-controller-manager: 111 | - The kube-controller-manager is a vital component in Kubernetes responsible for running multiple controllers in the background. These controllers continuously monitor the state of the cluster and take actions to ensure the desired state of your applications is maintained. 112 | - **kube-controller-manager architecture:** 113 | 114 | ![Pasted image 20240419160500](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/c11035df-d131-4898-966c-ceedc03906a1) 115 | 116 | - Its core controllers are as follows: 117 | - **Node Controller:** This controller manages worker nodes (machines running containerized workloads). It monitors node health, detects failures, and attempts to automatically restart them or cordon/drain unhealthy nodes (prevent pod scheduling/evict existing pods) to avoid impacting healthy workloads. 118 | - **Service Controller:** This controller watches over Service objects and ensures they are translated into Kubernetes networking resources like Endpoints (mapping pods to a service) or LoadBalancers (providing external access). It keeps service configurations in sync with the underlying network infrastructure. 119 | - **Namespace Controller:** This controller manages namespaces, which provide a way to isolate resources within a cluster.
It ensures namespaces are created/deleted as needed and enforces resource quotas within a namespace. 120 | - **DaemonSet Controller:** This controller ensures a DaemonSet (a special pod meant to run on all or a subset of nodes) has its desired number of pod replicas running on each node in the cluster. It creates or deletes pods as needed to maintain the specified state. 121 | - **CronJob Controller:** This controller manages CronJobs, which are Kubernetes objects that allow you to schedule tasks to run on a defined schedule (e.g., every hour, daily, etc.). The CronJob controller monitors these schedules and creates pods at the designated times to execute the desired tasks. 122 | - **Custom Controllers:** The beauty of Kubernetes is its extensibility. You can develop your own custom controllers to manage specific resources or automate tasks tailored to your application needs. 123 | 124 | ### cloud-controller-manager: 125 | - In Kubernetes, the cloud-controller-manager is a component that comes into play **specifically when you're using a cloud provider** to manage your Kubernetes cluster. It acts as an intermediary between the Kubernetes control plane and your cloud provider's API. 126 | 127 | 128 | ## The Worker Node has 3 components, as follows: 129 | - **Worker Node Internal Working Architecture:** 130 | 131 | ![Kubernetes2](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/6082a293-e642-41eb-a191-2a2ea8ffde29) 132 | 133 | ### pod: 134 | - A pod is the fundamental unit of deployment in Kubernetes. It represents a group of one or more containers that are meant to be deployed together on a shared underlying infrastructure. Think of it as a containerized application or service encapsulated within a single unit. 135 | - The components of **`Pods`** are as follows: 136 | - **Containers:** A pod can contain one or more containerized applications. These containers share the pod's storage, network resources, and lifecycle.
Imagine each container within a pod as a microservice contributing to an overall application. 137 | - **Shared Storage:** Pods have access to a shared storage volume mounted at a specific path within each container. This allows containers within the pod to share data and collaborate effectively. 138 | - **Pod Spec:** This is the blueprint that defines the pod's configuration, including the container images to be used, resource requests and limits, environment variables, and storage requirements. 139 | 140 | ### kube-proxy: 141 | - Kube-proxy is installed by default on all worker nodes in a Kubernetes cluster. Its primary function is to map Service objects to actual network rules on each node. 142 | - **Watches for Service Changes:** Kube-proxy keeps a close eye on the Kubernetes API server for any updates to Service objects. These Services define how network traffic should be directed to your pods based on labels or selectors. 143 | - **Translation to Network Rules:** Whenever a Service changes, Kube-proxy steps in and translates the Service definition into concrete network rules specific to the node's operating system. The chosen mode (iptables or ipvs for Linux, kernelspace for Windows) determines the translation method. 144 | - **Network Rule Implementation:** Based on the translated rules, Kube-proxy modifies firewall settings (iptables or similar) or load balancing configurations on the worker node. These rules ensure that traffic targeting a Service is effectively routed to the appropriate pods within the cluster, considering selector criteria defined in the Service. 145 | 146 | ### kubelet: 147 | - Kubelet, the **Kubernetes node agent**, is a critical component that runs on each worker node within a Kubernetes cluster. It acts as the bridge between the Kubernetes control plane and the individual nodes, playing a central role in managing container lifecycles and ensuring the smooth operation of your pods.
148 | - **Kubelet Cycle:** 149 | 150 | ![Pasted image 20240420193032](https://github.com/rohit-rajput1/Kubernetes-learning/assets/76991475/3eb368d6-78c4-4a72-99a9-739c5e331200) 151 | 152 | - **Node Registration:** Kubelet registers the worker node with the Kubernetes API server using its hostname or a specific cloud provider integration. This allows the control plane to be aware of available resources and schedule pods accordingly. 153 | - **Pod Management:** Kubelet receives pod specifications from the API server. It then translates these specifications into actionable steps for the container runtime engine (like Docker or containerd) on the node. Kubelet ensures pods are created, started, stopped, and deleted based on instructions from the control plane. 154 | - **Health Monitoring:** Kubelet continuously monitors the health of running pods and the overall health of the node itself. It reports this information back to the API server, allowing the control plane to take corrective actions if necessary (e.g., restarting unhealthy containers or rescheduling pods to healthy nodes). 155 | - **Resource Management:** Kubelet monitors resource utilization (CPU, memory, etc.) on the node. It enforces pod resource limits and requests defined in the pod specifications, ensuring fair resource allocation and preventing pods from consuming more than their allocated share. 156 | - **Secret and ConfigMap Management:** Kubelet fetches secrets and ConfigMaps required by pods from the API server and makes them accessible to containers within the pod. This allows pods to access sensitive information or configuration data securely. 157 | - Kubelet works through **3 standard interfaces**, as follows: 158 | - **CRI (Container Runtime Interface):** Kubelet talks to the container runtime engine on the node through the CRI, which covers the low-level tasks of creating, managing, and running containers.
Kubelet acts as an abstraction layer, shielding the Kubernetes control plane from the specifics of the underlying container runtime. 159 | - **CNI (Container Network Interface):** Provides a standardized way for Kubelet to configure networking for pods on worker nodes using different CNI plugins (e.g., overlay networks, bridge networking). 160 | - **CSI (Container Storage Interface):** Provides a standardized way for Kubelet to interact with various storage providers (cloud, local) through CSI drivers to provision and manage persistent storage volumes for pods. 161 | 162 | ### Key metrics in Kubernetes used in production: 163 | 164 | - **etcd_server_leader :** The etcd server leader is the single elected server in the cluster responsible for **processing writes** and **maintaining consistency** across all nodes. 165 | - **etcd_server_leader_changes_seen_total :** Tracks the number of times the leader node has changed within an etcd cluster. 166 | - **etcd_network_peer_round_trip_time_seconds_bucket :** Provides insight into the communication speed between etcd cluster members: the network performance, measured as the time taken to travel from one etcd server to another and back again. 167 | - **workqueue_adds_total :** A metric that tracks the total number of items added to a specific workqueue. A workqueue is essentially a queue used by controllers in Kubernetes to manage tasks. 168 | - **workqueue_depth :** A metric that gauges the current number of items waiting to be processed in a specific workqueue. 169 | - **kubelet_running_pods :** A way to check the number of running pods on a specific node within your Kubernetes cluster. 170 | - **kubelet_pod_start_duration_seconds_count :** This metric tracks the **total number of times** a pod has gone from a pending state to a running state on a specific node within a Kubernetes cluster. 171 | - **rule_sync :** Tracks network rule synchronization (e.g., by kube-proxy), ensuring that all nodes have the same set of rules and operate consistently.
172 | - **apiserver_admission_controller_duration_seconds :** It tracks the **time taken by admission controllers** in the Kubernetes API server to process requests. 173 | --- 174 | 175 | **Blog:** [**Scaling Kubernetes to 7,500 Nodes, by OpenAI**](https://openai.com/research/scaling-kubernetes-to-7500-nodes) -------------------------------------------------------------------------------- /Kubernetes Day-3.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Day-3 2 | 3 | ### What is a Kubernetes Cluster ? 4 | - A Kubernetes Cluster is a group of computers (physical or virtual machines) working together to run containerized applications. It provides a platform for deploying, scaling and managing containerized workloads in a highly automated and scalable way. 5 | - There are two main types of Kubernetes cluster, as follows: 6 | - **Single-Node-Cluster:** 7 | - Consists of a single machine that fulfills all the roles: running the control plane (API server, scheduler, controller manager) and worker node (hosting containerized applications in pods). 8 | - **Limitations:** Not suitable for production due to its lack of scalability, fault tolerance and high availability, plus resource limitations. 9 | - **Multi-Node-Cluster:** 10 | - Composed of **multiple machines**: 11 | - **Control Plane:** A set of dedicated machines running the Kubernetes control plane components. This ensures high availability and fault tolerance for critical cluster management tasks. 12 | - **Worker Nodes:** These machines run containerized applications packaged as pods. You can have multiple worker nodes to distribute the workload and scale your applications horizontally. 13 | - **Benefits:** It scales easily, offers high availability and tolerates faults, ensuring continuity. 14 | 15 | **Note:** The Worker Node version should never be greater than the Control Plane version, because backward-compatibility issues can arise.
16 | 17 | | Feature | Single-Node Cluster | Multi-Node Cluster | 18 | | ---------------------- | ------------------------------------- | ----------------------------- | 19 | | **Nodes:** | One node | Multiple nodes | 20 | | **Control Plane:** | Runs on the single node | Dedicated control plane nodes | 21 | | **Worker Nodes:** | The single node acts as a worker node | Multiple worker nodes | 22 | | **Scalability:** | Limited | Highly scalable | 23 | | **High Availability:** | No | Yes | 24 | | **Fault Tolerance:** | Limited | High | 25 | | **Use Cases:** | Development, testing | Production deployments | 26 | 27 | --- 28 | 29 | ### What are Kubernetes Objects ? 30 | - In Kubernetes, objects are the fundamental building blocks that define and manage your applications. They act as a record of intent, specifying the desired state of your containerized workloads. 31 | - By creating Kubernetes objects, we essentially tell the Kubernetes system how we want our cluster to be configured. The Kubernetes system then works automatically to make sure the cluster reaches that desired state. 32 | - The main types of Kubernetes objects are as follows: 33 | - **Pods:** The most basic unit in Kubernetes, representing a group of one or more containers that are deployed together on a shared underlying network namespace. 34 | - **Deployments:** Manage the lifecycle of your containerized applications. You specify the desired state (e.g., number of replicas) and Kubernetes automatically scales and updates the Pods. 35 | - **Services:** Provide a way to access your applications running on Pods across the cluster using a single network address and port. 36 | - **Namespaces:** Isolate groups of resources within a cluster, allowing for better organization and multi-tenancy. 37 | 38 | 39 | ### What tools are used for setting up and managing a Kubernetes Cluster ? 40 | - **kind:** 41 | - An open-source tool for creating local Kubernetes clusters in a runtime environment.
kind uses Docker containers to simulate a multi-node Kubernetes cluster on a single machine. 42 | - This provides a lightweight and portable way to experiment with Kubernetes locally without complex setups. 43 | - **Minikube:** 44 | - A tool for running a single-node Kubernetes cluster locally. It's often considered a user-friendly option for beginners. 45 | - Minikube provisions a single-node Kubernetes cluster on your local machine using a virtual machine or containerization technology like Docker. 46 | - **Kubeadm:** 47 | - A tool for deploying production-grade multi-node Kubernetes clusters. It's an official Kubernetes component for cluster bootstrapping. 48 | - Kubeadm provides a command-line interface for initializing a Kubernetes cluster on bare-metal servers, cloud instances, or virtual machines. It configures the control plane components and prepares the worker nodes to join the cluster. 49 | 50 | ### What is Containerization and how its portable ? 51 | - Containerization is a way of packaging software in a standardized unit that includes everything needed to run the code, regardless of the underlying computer system. It's like creating a self-contained shipping container for your application, with all its parts and instructions neatly packed inside. 52 | - **Portability:** 53 | - **Standardized Container Images:** Container images are created using standardized formats like Dockerfile. These formats specify the instructions for building the container, including the application code, dependencies, and environment variables. This ensures consistent behavior across different systems. 54 | - **Container Runtime Engines:** Container runtime engines like Docker or containerd are responsible for running container images. These engines are available on most Linux distributions and cloud platforms. 
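The kind tool described above can also simulate the multi-node cluster layout from the earlier section. A sketch of a kind cluster configuration (the file name and node counts are illustrative assumptions, not from these notes):

```yaml
# kind-multi-node.yaml — create with: kind create cluster --config kind-multi-node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane   # runs the API server, scheduler, controller manager
- role: worker          # hosts application pods
- role: worker          # a second worker, to demonstrate horizontal scaling
```

Each `role` entry becomes one Docker container acting as a cluster node, so the single-machine setup still exercises real multi-node behavior like scheduling across workers.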
55 | 56 | ![Pasted image 20240424181634](https://github.com/user-attachments/assets/7cd1cd08-3f3c-4424-98f5-27ceb2d9c332) 57 | 58 | 59 | ### Why can't we use the registry tag "latest" ? 60 | - For production deployments, we'll avoid using the "latest" tag for container images. This is because it's difficult to keep track of exactly which updates have been rolled out with "latest." Instead, we'll use specific version numbers from our registry to ensure we know precisely what code is running in our applications. 61 | - It also undermines backward compatibility, since you can't reliably roll back to a known version. 62 | 63 | ### Which commands create layers when building a Docker Image ? 64 | - A Dockerfile is a text file containing instructions that tell Docker how to build a container image. Each instruction you write in the Dockerfile contributes to the creation of a new layer in the final image. 65 | - Instructions that create layers are **FROM**, **COPY**, **RUN** (each **RUN** instruction creates its own layer, so multiple **RUN**s create multiple layers) and **ADD**. 66 | - Images with minimal layers are generally considered more efficient due to their smaller size and faster build times. 67 | 68 | ### What are distroless images ? 69 | - Distroless images are minimal images created by Google and made open-source for security purposes. 70 | - **What Distroless Images Don't Include:** 71 | - **No package installer:** Distroless images don't have tools like APT or yum to install extra programs your application might need. Instead, they only include the program your application needs to run and the bare minimum to make it work. 72 | - **No command prompt:** Distroless images don't come with a command prompt (like bash or sh) that you can type commands into. They're meant to be run automatically, not used for interactive work.
73 | - Size comparison of images: 74 | - Scratch < Distroless < Alpine (in increasing size) 75 | - Making distroless images means simplifying them by removing as many layers as possible, which helps them become smaller and faster. 76 | 77 | ### What is Multistage ? 78 | - In a production-grade multi-stage build we use two images: a build image (such as Alpine) and a minimal base from which we create our final image. 79 | - **Using Alpine for a Base Image:** 80 | - **Alpine Linux** is a popular choice for base images in multi-stage builds due to its: 81 | - **Small Size:** Alpine is a lightweight Linux distribution, leading to smaller final images. 82 | - **Package Management:** It uses apk, a package manager known for efficiency. 83 | - **Security Focus:** Alpine prioritizes security with regular updates. 84 | - **==Multi-Stage Build Process:==** 85 | - **Stage 1: Build Environment:** 86 | - You create a Dockerfile with a base image like `alpine:latest`. 87 | - In this stage, you install all the development tools and dependencies needed to build your application code. These tools might include compilers, libraries, and build utilities. 88 | - This stage can be large because it includes the build environment. 89 | - **Crucially, none of these build tools are carried over into the final image.** 90 | - **Stage 2: Final Image:** 91 | - You create another stage in the Dockerfile that uses a minimal image as its base, often referred to as a "scratch" image (`FROM scratch`). This ensures the final image has only the necessary components. 92 | - You copy the compiled application code (artifacts) from the build stage (stage 1) into this final image. 93 | - You also copy any essential runtime dependencies your application needs to function. 94 | - This final image is much smaller than the build stage because it excludes the bulky build tools. 95 | #### To Run Images we have another abstraction named "Containers". 96 | 97 | --- 98 | ## Containers 99 | 100 | ### What is a Container ?
101 | - Containers are a way to package software in a standardized unit that includes everything needed to run the code, regardless of the underlying computer system. They are like self-contained shipping containers for your applications. 102 | - Also, don't confuse "container" with Docker: Docker is a tool used for containerization, not the container itself. 103 | - There are **two types of containerized applications**, as follows: 104 | - **Stateless Application:** They don't remember anything about past interactions with users. Each request is treated independently, like a new customer at a store each time (a "forgetful" application). 105 | - **Stateful Application:** They keep track of user interactions and preferences across multiple requests. They "remember" you, like a store with a loyalty program (a "memorable" application). 106 | 107 | --- 108 | 109 | ## Kubectl 110 | 111 | ### What is Kubectl ? 112 | - Kubectl is the command-line tool or utility for interacting with Kubernetes clusters. We can use `alias k=kubectl` to save time, but don't use it in a production environment. 113 | - Appending `-o wide` to almost any `kubectl get` command prints extra columns, which is very helpful in debugging. 114 | - To view all the resource types Kubernetes serves: `kubectl api-resources`. 115 | 1. **Deployment Management**: 116 | - `kubectl create deployment <name> --image=<image>`: Creates a new deployment with a specified image. 117 | - `kubectl get deployments`: Lists all deployments in your cluster. 118 | - `kubectl scale deployment <name> --replicas=<count>`: Scales the deployment to a desired number of replicas (pods). 119 | - `kubectl delete deployment <name>`: Deletes a deployment. 120 | 2. **Pod Management**: 121 | - `kubectl get pods`: Lists all pods in your cluster. 122 | - `kubectl describe pod <pod-name>`: Shows detailed information about a specific pod. 123 | - `kubectl exec -it <pod-name> -- bash`: Opens an interactive shell session within a running pod. 124 | - `kubectl delete pod <pod-name>`: Deletes a pod. 125 | 3. 
**Service Management**: 126 | - `kubectl create service <type> <name> --tcp=<port>:<target-port>`: Creates a service of a specific type (e.g., NodePort, LoadBalancer) to expose your application. 127 | - `kubectl get services`: Lists all services in your cluster. 128 | - `kubectl describe service <service-name>`: Shows detailed information about a specific service. 129 | - `kubectl delete service <service-name>`: Deletes a service. 130 | 4. **Resource Management**: 131 | - `kubectl get all`: Lists all resources (pods, deployments, services, etc.) in your cluster. 132 | - `kubectl get nodes`: Lists all worker nodes in your cluster. 133 | - `kubectl get namespaces`: Lists all namespaces in your cluster. 134 | - `kubectl delete <resource-type> <name>`: Deletes a specific resource (e.g., `kubectl delete deployment myapp` deletes a deployment named "myapp"). 135 | 5. **Viewing Logs**: 136 | - `kubectl logs <pod-name>`: Shows logs generated by a specific pod. 137 | - `kubectl logs -f <pod-name>`: Follows the logs of a pod in real-time. 138 | 6. **Events:** 139 | - `kubectl get events`: Lists all the events from all namespaces in your cluster. 140 | - `kubectl get events -n <namespace>`: Lists events only for a specific namespace. 141 | - `kubectl get events --since=10m`: Shows events that occurred within the last specified duration. 142 | - `kubectl describe event <event-name>`: This command provides detailed information about a specific event, including its reason, message, involved object, and source. 143 | - `kubectl get events -w -n <namespace>`: This command continuously monitors and displays new events as they occur in your cluster. 144 | 145 | [**K9s**](https://k9scli.io/) is a dashboard for the resources we have in our Kubernetes (K8s) cluster. 146 | 147 | ## Pods 148 | 149 | ### What is a Pod ? 150 | - A pod is the smallest deployable unit in Kubernetes. It represents a group of one or more containers (like mini-applications) that are tightly coupled and share storage and network resources. 151 | ### Are Pods disposable and ephemeral ?
152 | - Yes, pods in Kubernetes are designed to be **disposable** and **ephemeral**. Here's a breakdown of the concept: 153 | - **Disposable:** Pods are meant to be short-lived and can be created, scheduled, and terminated as needed. They are not intended to be long-running processes like traditional virtual machines. 154 | - **Ephemeral:** This means pods are temporary and may not always be guaranteed to persist. Events like node failures or scaling actions can lead to pod restarts or terminations. 155 | 156 | ### What are Static Pods ? 157 | - The path `/etc/kubernetes/manifests` on a Kubernetes node is typically used to store the manifest files for static pods. Static pods are managed directly by the kubelet on each node, rather than by the Kubernetes API server. 158 | 159 | ![Pasted image 20240706112054](https://github.com/user-attachments/assets/ca0357c1-ac2b-42a1-b710-f61ef6a86325) 160 | 161 | ### How do we configure the kubelet, and where is the static pod path defined ? 162 | - The directory **`/var/lib/kubelet`** in Kubernetes stores various data used by the kubelet, the agent running on each node in the cluster. 163 | 164 | ![Pasted image 20240706112847](https://github.com/user-attachments/assets/72d7a39f-15a7-4093-91da-3bf3b7910a1c) 165 | 166 | - The static pod path is defined in **config.yaml**, and it can be changed. 167 | 168 | ![Pasted image 20240706113353](https://github.com/user-attachments/assets/7d3bb552-da2b-46d2-93da-94bbac91e855) 169 | 170 | ## Key Differences between Static Pods and DaemonSets. 171 | 172 | 1. **Management**: 173 | 174 | - **Static Pods**: Managed directly by the kubelet on each node, independently of the Kubernetes control plane. 175 | - **DaemonSets**: Managed by the Kubernetes control plane, ensuring consistent deployment across the cluster. 176 | 2. **Deployment Scope**: 177 | 178 | - **Static Pods**: Configured on a per-node basis. Each node runs only the static pods defined in its local manifest directory.
179 | - **DaemonSets**: Ensure that the specified pods are deployed to all or selected nodes across the entire cluster. 180 | 3. **Updates and Maintenance**: 181 | 182 | - **Static Pods**: Require manual updates to manifest files and kubelet restarts, which can be cumbersome. 183 | - **DaemonSets**: Allow for easier updates and rolling updates through Kubernetes commands and features. 184 | 4. **Lifecycle and State Awareness**: 185 | 186 | - **Static Pods**: Limited to the individual node's state and require manual intervention for lifecycle management. 187 | - **DaemonSets**: Provide better lifecycle management and are aware of the overall cluster state, allowing for more resilient and flexible operations. 188 | 189 | --- 190 | 191 | ## YAML 192 | 193 | ### How are pods created using YAML ? 194 | - This YAML file describes details like: the name of the pod, the container image to use, any ports to expose, resource requests and limits, and storage configurations. 195 | ```yaml 196 | apiVersion: v1 197 | kind: Pod 198 | metadata: 199 | name: nginx 200 | spec: 201 | containers: 202 | - name: nginx 203 | image: nginx:latest 204 | ports: 205 | - containerPort: 80 206 | ``` 207 | 208 | - **Breakdown of YAML Elements**: 209 | - **`apiVersion`** : This tells Kubernetes which version of the API you're using to define the pod. It's usually `v1` for core resources like pods. Other resource types come from different **API groups (like apps or RBAC)**. 210 | - **`kind`** : This simply states what kind of resource you're defining. In this case, it's `Pod`. Kubernetes recognizes various resource types like deployments, services, and persistent volumes, each with its own `kind`. 211 | - **`metadata`** : It provides information about your pod, like a name tag. **`name`** is a unique identifier for your pod within a Kubernetes cluster, and we can add optional labels and annotations for further details.
212 | - **`spec`** : This is the core part where we define how the pod runs. 213 | - **containers** : This defines the container(s) that make up the pod (like a program running inside). Each container has details like: 214 | - **`name`** : A name for the container within the pod. 215 | - **`image`** : Specifies the Docker image to use for the container. This image contains the application code and dependencies needed to run the container. 216 | - **`ports`** : Defines which ports on the container to expose to the external world. These ports allow communication with the application running within the container. 217 | - A container can also take additional configuration settings like environment variables and resource requests and limits. 218 | - **Volumes:** This section defines volumes for the pod. Volumes provide persistent storage for containers within the pod. Data written to volumes persists even if the pod restarts. 219 | 220 | ### What is exec in the Pod ? 221 | - **`kubectl exec`** - Executing Commands Inside a Pod 222 | - **`kubectl exec`** is a command-line tool in Kubernetes that allows you to run commands directly within a container that's already running inside a pod. This is helpful for various debugging, troubleshooting, and administrative tasks. 223 | - List out the pod for exec: 224 | ```bash 225 | kubectl get pods 226 | ``` 227 | 228 | ```bash 229 | # Syntax 230 | kubectl exec -it <pod-name> -- /bin/bash 231 | ``` 232 | 233 | --- 234 | 235 | ### What are Debug Containers ? 236 | - In Kubernetes, a debug container is a temporary container specifically designed to aid in troubleshooting issues within another container that's already running in a pod. It acts like a sidekick container alongside your application container. 237 | - You can use `kubectl debug` to create a debug container with a debugger image. 238 | - **Ephemeral Debug Containers**: These are temporary containers that are created in a running pod for debugging purposes.
They share the namespaces and volume mounts of the pod, allowing you to inspect the state of the application container and the pod. 239 | 240 | ```bash 241 | kubectl debug mypod --image=busybox --target=app-container -- /bin/sh 242 | ``` 243 | 244 | ### Init Containers: 245 | - Designed to run setup scripts or perform initialization tasks; each init container runs to completion and exits before the main application containers start. 246 | - We can run multiple init containers before the actual application runs. 247 | - It runs as its own container, but can prepare the environment (e.g., configure filesystems, set up initial state). 248 | - **Example**: Added to the pod specification to ensure preconditions are met before starting the main application containers. 249 | ```yaml 250 | apiVersion: v1 251 | kind: Pod 252 | metadata: 253 | name: mypod 254 | spec: 255 | initContainers: 256 | - name: debug-init 257 | image: busybox 258 | command: ["sh", "-c", "echo Debugging and sleeping; sleep 3600"] 259 | containers: 260 | - name: app-container 261 | image: myapp:latest 262 | ``` 263 | 264 | ```bash 265 | kubectl apply -f init-container.yaml 266 | 267 | # If the init container fails, check its logs 268 | kubectl logs mypod -c debug-init 269 | ``` 270 | ### Sidecar Containers: (Native support since 1.29) 271 | - Designed to run alongside the main application containers to provide additional functionality (e.g., logging, monitoring, proxying) and runs for the entire lifetime of the pod. 272 | - It shares the same pod environment and can interact with the main application containers. 273 | - A native sidecar is declared via a container-level **`restartPolicy`**, which plain init containers don't set. The pod-level `restartPolicy` has 3 values: `Never`, `OnFailure`, and `Always` (default); for a sidecar, the container-level field must be `Always`. 274 | - **Example**: Added to the pod setup to continuously help monitor or support the main application container.
275 | ```yaml 276 | apiVersion: v1 277 | kind: Pod 278 | metadata: 279 | name: mypod 280 | spec: 281 | containers: 282 | - name: app-container 283 | image: myapp:latest 284 | - name: debug-container 285 | image: busybox 286 | command: ["sh", "-c", "sleep 1d"] 287 | ``` 288 | 289 | ```bash 290 | kubectl apply -f sidecar.yaml 291 | 292 | # how to exec into the sidecar container 293 | kubectl exec -it mypod -c debug-container -- sh 294 | ``` 295 | 296 | ### Lifecycle of Pod and Container 297 | 1. **Pod Lifecycle:** 298 | - Pending 299 | - Running 300 | - Succeeded 301 | - Failed 302 | - Unknown 303 | 304 | 2. **Container Lifecycle:** 305 | - Running 306 | - Waiting 307 | - Terminated 308 | 309 | --- 310 | -------------------------------------------------------------------------------- /Kubernetes Day-4.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Day-4 2 | 3 | Let's take a scenario: we have an image that is built layer by layer, and one of the layers has a **"CVE (Common Vulnerabilities and Exposures)"** entry that can act as a backdoor for attackers. 4 | So we need to make sure that our image is secure, and for that we use a tool called **"Trivy"**. 5 | 6 | ### What is Trivy ? 7 | - Trivy is a popular **open-source scanner** designed to **identify vulnerabilities in software**. It's particularly useful in the DevSecOps (development, security, operations) space, where it can be integrated into your CI/CD pipeline to find issues early in the development process. 8 | - **Specific Use of Trivy:** 9 | - **Vulnerability Scanning:** Trivy can scan container images, code repositories, and cloud environments for known security vulnerabilities. 10 | - **Misconfiguration Detection:** It can identify misconfigurations in infrastructure as code (IaC) files. 11 | - **Secret Detection:** Trivy can uncover sensitive information like passwords and API keys accidentally embedded in code or configurations.
12 | - **SBOM Discovery:** It can help create a Software Bill of Materials (SBOM), which is a list of components used to build software, including their licenses and known vulnerabilities. 13 | - [**Trivy GitHub Repository**](https://github.com/aquasecurity/trivy) for exploration. 14 | 15 | ### Installation 16 | - Trivy Installation on **Debian/Ubuntu**: 17 | ```bash 18 | sudo apt-get install apt-transport-https 19 | wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add - 20 | echo deb https://aquasecurity.github.io/trivy-repo/deb [CODE_NAME] main | sudo tee -a /etc/apt/sources.list 21 | sudo apt-get update 22 | sudo apt-get install trivy 23 | ``` 24 | 25 | **Note:** *CODE_NAME* - wheezy, jessie, stretch, buster, trusty, xenial, bionic, jammy 26 | - **wheezy:** This is the code name for Debian 7, released in 2013. 27 | - **jessie:** This is the code name for Debian 8, released in 2015. 28 | - **stretch:** This is the code name for Debian 9, released in 2017. 29 | - **buster:** This is the code name for Debian 10, released in 2019. 30 | - **trusty:** This is the code name for Ubuntu 14.04 LTS (Long Term Support), released in 2014. LTS releases are supported for a longer period, typically 3-5 years, making them popular choices for enterprise environments. 31 | - **xenial:** This is the code name for Ubuntu 16.04 LTS, released in 2016. 32 | - **bionic:** This is the code name for Ubuntu 18.04 LTS, released in 2018. 33 | - **jammy:** This is the code name for Ubuntu 22.04 LTS, released in 2022. 34 | 35 | --- 36 | 37 | ### How do we scan a particular image ? 38 | - To scan a particular image, the syntax is `trivy image <image_name>`. Replace `<image_name>` with the name of the image you want to scan. The image can be a local image name or one stored in a container registry.
39 | ```shell 40 | trivy image ubuntu:latest 41 | ``` 42 | 43 | #### Filtering Results: 44 | - **`--severity <level>`:** This option allows you to filter vulnerabilities by their severity level (critical, high, medium, low). 45 | ```bash 46 | # This will only show "critical" vulnerabilities found in the "ubuntu:latest" image. 47 | trivy image --severity critical ubuntu:latest 48 | ``` 49 | 50 | - **`--ignore-unfixed`**: This option instructs Trivy to exclude vulnerabilities without known fixes from the scan results. 51 | ```bash 52 | # Here, the output will only show vulnerabilities with available patches or updates. This can be helpful to focus on "actively addressable security issues". 53 | trivy image --ignore-unfixed ubuntu:22.04 54 | 55 | # This command scans the current directory (`.`) for vulnerabilities and displays all results, including unfixed ones, but only for vulnerabilities with a severity level of "critical" or "high". 56 | # fs - File system 57 | # --ignore-unfixed=false : explicitly tells Trivy to include unfixed vulnerabilities 58 | trivy fs --ignore-unfixed=false --severity critical,high . 59 | ``` 60 | 61 | #### Output Formatting: 62 | - **`--format <format>`**: This option specifies the output format for the scan results. Trivy supports various formats like JSON, SARIF, and table (default). 63 | ```bash 64 | # This will display the scanned result in JSON Format. 65 | trivy image --format json ubuntu:latest 66 | ``` 67 | 68 | #### Additional Options: 69 | - **`--help, -h`**: Shows the help message with all available options and flags. 70 | - **`--version, -v`**: Prints the Trivy version information. 71 | - **`--cache-dir <path>`**: Specifies a custom location for the Trivy vulnerability database cache. 72 | ```bash 73 | # This command scans the `ubuntu:latest` image, but stores the vulnerability database cache in the `/shared/trivy-cache` directory.
74 | trivy image --cache-dir /shared/trivy-cache ubuntu:latest 75 | ``` 76 | - **`--offline-scan`**: Performs the scan without downloading updates for the vulnerability database. 77 | ```bash 78 | # This command scans the current directory (`.`) for vulnerabilities in an offline mode and displays only critical ones. This helps identify the most severe security issues even without updating the vulnerability database. 79 | trivy fs --offline-scan --severity critical . 80 | ``` 81 | 82 | --- 83 | 84 | ### Kind Installation in Kubernetes 85 | - Kind is a tool for running local Kubernetes clusters using Docker container “nodes”. It was primarily designed for testing Kubernetes itself, but may be used for local development or CI. 86 | 87 | [**Kind Installation for Different OS**](https://kind.sigs.k8s.io/docs/user/quick-start/#installation) 88 | 89 | - For Linux Binary Installation: 90 | 91 | ```bash 92 | # For AMD64 / x86_64 93 | [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-amd64 94 | # For ARM64 95 | [ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.22.0/kind-linux-arm64 96 | chmod +x ./kind 97 | sudo mv ./kind /usr/local/bin/kind 98 | ``` 99 | 100 | - Cluster Creation using `kind`. 101 | 102 | ```bash 103 | kind create cluster --name rohit 104 | ``` 105 | 106 | ![Pasted image 20240504220756](https://github.com/user-attachments/assets/1f82e58a-d937-410e-b58f-b3c98be1977b) 107 | 108 | - To see the created clusters. 109 | ```bash 110 | kind get clusters 111 | ``` 112 | 113 | ### Helm Installation on Ubuntu 114 | 115 | - Download the [**Helm Package**](https://github.com/helm/helm/releases) from the website. 116 | - After that, extract the downloaded package.
117 | ```bash 118 | tar xvf helm-v3.14.4-linux-amd64.tar.gz 119 | ``` 120 | 121 | - Move the **`linux-amd64/helm`** file to the **`/usr/local/bin`** directory: 122 | ```bash 123 | sudo mv linux-amd64/helm /usr/local/bin 124 | ``` 125 | 126 | - Finally, verify you have successfully installed Helm by checking the version of the software: 127 | ```bash 128 | helm version 129 | ``` 130 | ### Helm Uninstallation from Ubuntu 131 | - Remove the downloaded file using the following command: 132 | ```bash 133 | rm helm-v3.14.4-linux-amd64.tar.gz 134 | ``` 135 | 136 | - Remove the **`linux-amd64`** directory to clean up space by running: 137 | ```bash 138 | rm -rf linux-amd64 139 | ``` 140 | 141 | ### Kyverno Installation using Helm 142 | 143 | - What is Kyverno ? 144 | - Kyverno is used for enforcing security and compliance policies within your Kubernetes clusters [**Kyverno Documentation**](https://kyverno.io/docs/). 145 | 146 | - To install Kyverno with Helm, first add the Kyverno Helm Repository. 147 | ```bash 148 | helm repo add kyverno https://kyverno.github.io/kyverno/ 149 | ``` 150 | 151 | - After this we will scan the new repository for charts. 152 | ```bash 153 | helm repo update 154 | ``` 155 | 156 | 157 | - Now we will install Kyverno (policy engine) with 1 replica in a newly created **`kyverno`** namespace using Helm. 158 | ```bash 159 | helm install kyverno kyverno/kyverno -n kyverno --create-namespace --set replicaCount=1 160 | ``` 161 | 162 | ![Pasted image 20240504231934](https://github.com/user-attachments/assets/6f7284e4-b0ab-44c3-b7e5-566fd2d51147) 163 | 164 | - [**Kyverno Policies**](https://kyverno.io/policies/) are collections of rules that define how resources in your Kubernetes cluster should be managed and configured. They act as a safeguard to ensure your cluster adheres to security best practices and compliance requirements. 165 | ### Kubectl Installation 166 | - Let's first install snapd on the system.
167 | ```bash 168 | sudo apt install snapd 169 | ``` 170 | 171 | - Now we will install the kubectl CLI for the system. 172 | ```bash 173 | sudo snap install kubectl --classic 174 | ``` 175 | 176 | - Kubernetes Pod creation using **`kubectl`**. 177 | ```bash 178 | kubectl run before --image nginx -- sleep 1d 179 | ``` 180 | 181 | - To see the created pods in the terminal. 182 | ```bash 183 | kubectl get pods 184 | ``` 185 | 186 | - To watch the pods continuously. 187 | ```bash 188 | kubectl get pods --watch 189 | ``` 190 | 191 | - To describe the pod in kubectl. 192 | ```bash 193 | # kubectl describe pod <pod-name> 194 | kubectl describe pod before 195 | ``` 196 | 197 | - We will add a policy that assigns resources to pods, for example memory and storage limits. 198 | ```bash 199 | kubectl apply -f policy.yaml 200 | ``` 201 | 202 | - After this, any pod we create will automatically get memory and storage assigned to it. 203 | ```bash 204 | # Memory and storage settings are added automatically due to the policy we created earlier. 205 | kubectl run after --image nginx -- sleep 2d 206 | ``` 207 | 208 | ### Why do we prefer validating webhooks over mutating webhooks ? 209 | 210 | | Topics | Validating Webhooks | Mutating Webhooks | 211 | | --- | --- | --- | 212 | | *Predictability and Transparency* | These webhooks **reject** requests that violate policies before any changes are made to the cluster.
This provides clear feedback to the user about why their request was denied and helps maintain a predictable state in the cluster. | These webhooks **modify** requests to comply with policies. While convenient, unexpected modifications can be confusing and potentially lead to unintended consequences. | 213 | | *Error Handling and Debugging* | When a request is rejected, the user receives an error message explaining the violation. This allows them to easily diagnose the issue and resubmit a compliant request. | In case of errors during mutation, debugging can be more challenging. You might need to analyze the webhook's logs to understand why the mutation failed. Additionally, unexpected mutations could introduce cascading issues if dependent resources are modified incorrectly. | 214 | | *Security Considerations* | These webhooks operate on the original request data, minimizing the attack surface. Malicious actors cannot exploit vulnerabilities in the webhook to modify cluster state. | These webhooks have more access and control, potentially introducing a security risk if the webhook itself has vulnerabilities. An attacker could exploit these vulnerabilities to inject malicious code or manipulate the cluster state. | 215 | | *Flexibility and Maintainability* | Policies defined in validating webhooks are often simpler and easier to understand. They focus on rejecting non-compliant requests rather than modifying them, leading to cleaner and more maintainable code. | Mutating webhooks can become complex, especially when handling edge cases or dealing with multiple policies. This complexity can make them harder to maintain and debug over time. 216 | 217 | ### What is Kubelinter ? 218 | 219 | - KubeLinter is an open-source command-line tool specifically designed to analyze Kubernetes configurations for potential issues. It acts as a static code analysis tool, meaning it examines your code (in this case, Kubernetes YAML files and Helm charts) without actually running it. 
220 | - It acts as a spellchecker for your Kubernetes configurations. 221 | - **Static Analysis:** KubeLinter scans your Kubernetes YAML files and Helm charts to identify misconfigurations, security vulnerabilities, and potential best practice violations. 222 | - **Early Detection:** By catching these issues early in the development phase, KubeLinter helps you prevent problems from propagating to production environments. 223 | - **DevSecOps Integration:** It can be integrated into your CI/CD pipeline to automatically check your Kubernetes configurations before deployment, ensuring a proactive approach to security and best practices. 224 | - **Customization:** KubeLinter comes with a set of built-in checks, but you can also configure it to create custom checks tailored to your specific needs and organization's policies. 225 | - Overall, it helps improve security, apply best practices, and streamline development. 226 | - It flags potential errors in the YAML files you give it and suggests corresponding changes. 227 | 228 | ### Installation Guide 229 | 230 | #### First, install Homebrew to install KubeLinter. 231 | 232 | [**Kube-linter Documentation**](https://docs.kubelinter.io/#/using-kubelinter) 233 | - First we will install the `build-essential` package. 234 | ```bash 235 | sudo apt install build-essential 236 | ``` 237 | 238 | - Check that a compiler is available on the local system. 239 | ```bash 240 | which make 241 | ``` 242 | 243 | - Now we install the latest Homebrew for our KubeLinter installation. 244 | ```bash 245 | /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" 246 | ``` 247 | 248 | - Installation of KubeLinter. 249 | ```bash 250 | brew install kube-linter 251 | ``` 252 | 253 | **How to Use Kube-linter:** 254 | - For example, given a **YAML file**, we can **lint** it to find issues.
255 | ```bash 256 | # kube-linter lint <file> 257 | kube-linter lint sample.yml 258 | ``` 259 | 260 | ### What is Kube-bench ? 261 | - Kube-bench is an open-source tool designed to assess the security of your Kubernetes clusters. It scans the cluster and lists suggested fixes in the **Remediations** sections of its output (e.g., `Remediations master`). 262 | 263 | - **Installation for (Ubuntu/Debian):** 264 | ```bash 265 | curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.2/kube-bench_0.6.2_linux_amd64.deb -o kube-bench_0.6.2_linux_amd64.deb 266 | 267 | sudo apt install ./kube-bench_0.6.2_linux_amd64.deb -f 268 | ``` 269 | 270 | **After this run kube-bench directly:** 271 | 272 | ```bash 273 | kube-bench 274 | ``` 275 | 276 | **Binary installation where sudo isn't available:** 277 | 278 | ```bash 279 | curl -L https://github.com/aquasecurity/kube-bench/releases/download/v0.6.2/kube-bench_0.6.2_linux_amd64.tar.gz -o kube-bench_0.6.2_linux_amd64.tar.gz 280 | 281 | tar -xvf kube-bench_0.6.2_linux_amd64.tar.gz 282 | 283 | ./kube-bench --config-dir `pwd`/cfg --config `pwd`/cfg/config.yaml 284 | ``` 285 | 286 | It does this by: 287 | - **Checking against CIS Kubernetes Benchmarks:** These benchmarks are a set of best practices and security recommendations developed by the Center for Internet Security (CIS). Kube-bench compares your cluster configuration to these benchmarks to identify any deviations. 288 | - **Automating Security Checks:** It automates the process of running these checks, saving you time and effort compared to manual configuration reviews. 289 | - **Identifying Security Risks:** By highlighting areas where your cluster configuration doesn't align with CIS benchmarks, Kube-bench helps you identify potential security vulnerabilities and areas for improvement. 290 | 291 | ### What is Back-off Algorithm in k8s? 292 | 293 | - The back-off algorithm is applied across Kubernetes components.
The backoff algorithm in Kubernetes helps manage the retry behavior for pods and jobs, preventing constant, rapid retries that could overwhelm the system. 294 | - By introducing exponential delays between retries, Kubernetes ensures a more stable and manageable approach to handling pod failures. 295 | - The delays look like 10s, 20s, 40s, 80s, 160s, ... doubling each time up to a cap (five minutes for container restart back-off). 296 | 297 | --- 298 | -------------------------------------------------------------------------------- /Kubernetes Day-5.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Day-5 2 | 3 | ## Labels in K8s 4 | 5 | ### What are labels ? 6 | - In Kubernetes, **labels** are key-value pairs assigned to objects (like Pods, Nodes, or Services) to help organize, categorize, and select resources based on meaningful attributes. 7 | 8 | - **If we don't set a label, what will the label be by default ?** 9 | - Kubernetes does not assign a default key-value pair like **`run: <pod-name>`** unless explicitly defined by the user. 10 | - However, when we use **`kubectl run <pod-name>`** to create a pod, Kubernetes automatically assigns the label **Key: `run`**, **Value: `<pod-name>`**, and this happens only with this specific command: 11 | 12 | ```bash 13 | kubectl run my-app --image=nginx 14 | ``` 15 | 16 | **Generated Output:** 17 | 18 | ```yaml 19 | metadata: 20 | labels: 21 | run: my-app 22 | ``` 23 | - **Can we add more than one label in Kubernetes ?** 24 | - Yes, you can definitely add more than one label to resources in Kubernetes. In fact, using multiple labels is a common and recommended practice for better organization and management of your Kubernetes resources. 25 | 26 | ```yaml 27 | selector: 28 | matchLabels: 29 | environment: production 30 | tier: frontend 31 | ``` 32 | 33 | - This selector would match resources that have both the **`environment: production`** and **`tier: frontend`** labels.
Remember, while you can add many labels, it's good practice to keep your labeling strategy consistent and meaningful across your cluster. 34 | 35 | ### Key Points about Labels: 36 | - **Key-Value Pairs:** Labels consist of a unique key and an associated value. **Eg.** **`app: Frontend`** , **`env: production`**. 37 | - **Metadata:** Labels are part of an object's metadata, meaning they don't directly affect the object's behavior. 38 | - **Selectors:** Labels enable label selectors to filter or group objects (eg. select all pods with **`app=frontend`**) which is useful in Deployments, ReplicaSets and Services. 39 | - **Immutable Keys:** Once assigned, the key of a label cannot be changed, but values can be updated. 40 | 41 | ```yaml 42 | apiVersion: v1 43 | kind: Pod 44 | metadata: 45 | name: my-pod 46 | labels: 47 | app: frontend 48 | env: production 49 | spec: 50 | containers: 51 | - name: nginx 52 | image: nginx 53 | ``` 54 | 55 | ### Change labels using command line : 56 | 57 | 1. **Add or Update a label :** This command **adds or updates** the **environment label** with the value production on the pod named **`my-pod`**. 58 | 59 | ```bash 60 | # kubectl label pod <pod-name> <key>=<value> --overwrite 61 | kubectl label pod my-pod environment=production --overwrite 62 | ``` 63 | 64 | 2. **Remove a Label :** This removes the **`environment`** label from the pod named **`my-pod`**. Adding a **-** 65 | after the key removes the label. 66 | 67 | ```bash 68 | # kubectl label pod <pod-name> <key>- 69 | kubectl label pod my-pod environment- 70 | ``` 71 | 72 | 3. **Verify Labels :** After applying the changes, you can verify the labels using commands. 73 | 74 | ```bash 75 | # kubectl get pod <pod-name> --show-labels 76 | kubectl get pod my-pod --show-labels 77 | ``` 78 | 79 | 4. **See labels :** This command will display the labels directly in the output.
80 | 81 | ```bash 82 | kubectl get pods --show-labels 83 | ``` 84 | 85 | ![Pasted image 20241015190551](https://github.com/user-attachments/assets/3eb4a263-074e-4eda-982c-25fbeff6a77f) 86 | 87 | #### **Q**. What happens if we delete a cluster? Does everything inside it get deleted? 88 | 89 | - When you delete a Kubernetes cluster, resources like **Pods, Services, Deployments** are permanently deleted along with it. 90 | - Persistent Volumes (PVs) in **Kubernetes** are not deleted automatically when the cluster is removed, ensuring that **data is not lost by default**. 91 | - So, when a pod or even the cluster is deleted, the PV remains unless its **Reclaim Policy** is explicitly set to **Delete**. This is useful to avoid unintended data loss during cluster teardown. 92 | 93 | ### Step-by-Step: Using Private Repositories in Kubernetes 94 | 95 | 1. **Create a Docker Config File for Authentication** where we create a **`~/.docker/config.json`** file containing the credentials. This is more secure because it doesn't require you to directly input your credentials in the kubectl command, which could be logged or visible in your shell history. 96 | 97 | ```bash 98 | docker login -u <username> -p <password> 99 | ``` 100 | 101 | 2. **Create a Kubernetes Secret** where we will store the credentials inside a secret. 102 | 103 | ```bash 104 | kubectl create secret generic regcred \ 105 | --from-file=.dockerconfigjson=$HOME/.docker/config.json \ 106 | --type=kubernetes.io/dockerconfigjson 107 | ``` 108 | 109 | - **regcred** : The name of the secret. 110 | - **~/.docker/config.json** : Docker Authentication file. 111 | 112 | 3. Reference the **Secret** in the Deployment YAML file, where we put it as the value of **`imagePullSecrets`**.
113 | 114 | ```yaml 115 | apiVersion: apps/v1 116 | kind: Deployment 117 | metadata: 118 | name: my-app 119 | spec: 120 | replicas: 1 121 | selector: 122 | matchLabels: 123 | app: my-app 124 | template: 125 | metadata: 126 | labels: 127 | app: my-app 128 | spec: 129 | containers: 130 | - name: my-app-container 131 | image: <registry>/<user>/<image>:<tag> # Example: ghcr.io/user/my-app:latest 132 | imagePullSecrets: 133 | - name: regcred # Name of the secret 134 | ``` 135 | 136 | 4. After applying the YAML file, Kubernetes will use the **secret to authenticate** with the **Private Registry**. 137 | 138 | ```bash 139 | kubectl apply -f deployment.yml 140 | kubectl get pods 141 | ``` 142 | 143 | #### **Q**. How do we specify which pod will run on which node ? 144 | 145 | - In Kubernetes, we have several methods to control which pods run on which nodes: 146 | 1. **Node Selectors:** This is the simplest method. We add labels to nodes and use nodeSelector in the pod spec to match those labels. 147 | 2. **Node Affinity and Anti-Affinity:** These provide more flexible node selection. They allow complex matching rules and can express preferences rather than hard requirements. 148 | 3. **Taints and Tolerations:** Taints are applied to nodes to repel certain pods, while tolerations allow pods to schedule onto nodes with matching taints. 149 | 4. **Pod Affinity and Anti-Affinity:** These allow us to schedule pods based on the labels of other pods running on the node. 150 | 151 | - **Example of Node Affinity:** This would ensure the pod runs on nodes labeled with either 'us-west-1a' or 'us-west-1b' for the 'zone' key.
152 | 153 | ```yaml 154 | nodeAffinity: 155 | requiredDuringSchedulingIgnoredDuringExecution: 156 | nodeSelectorTerms: 157 | - matchExpressions: 158 | - key: zone 159 | operator: In 160 | values: 161 | - us-west-1a 162 | - us-west-1b 163 | ``` 164 | ### Probes 165 | 166 | - In Kubernetes, **probes** are mechanisms used to check the **health** and **availability** of containers to ensure they are functioning properly. There are three types of probes: 167 | - **Liveness Probe :** 168 | - It ensures that the container is **alive** and responsive. If the container fails the liveness probe, Kubernetes will restart it. 169 | - **Where to use:** It is used to detect and fix stuck or deadlock situations where the app stops responding. 170 | - **Readiness Probe :** 171 | - It ensures that the container is **ready** to accept traffic. If the readiness probe fails, Kubernetes will **remove the pod from the service's endpoints**, preventing traffic from being routed to it. 172 | - **Where to use:** It is useful for containers that need some **initialization time** before they can serve requests. 173 | - **Startup Probe :** 174 | - It ensures that the application has successfully started. If the startup probe fails, Kubernetes will **kill and restart the container**. 175 | - **Where to use:** It is useful for apps that take a long time to start. The startup probe disables the liveness and readiness probes until it succeeds.
176 | 177 | ```yaml 178 | apiVersion: v1 179 | kind: Pod 180 | metadata: 181 | name: my-app-pod 182 | spec: 183 | containers: 184 | - name: my-app 185 | image: my-app-image:latest 186 | ports: 187 | - containerPort: 8080 188 | livenessProbe: 189 | httpGet: 190 | path: /healthz 191 | port: 8080 192 | initialDelaySeconds: 3 193 | periodSeconds: 3 194 | readinessProbe: 195 | httpGet: 196 | path: /ready 197 | port: 8080 198 | periodSeconds: 5 199 | startupProbe: 200 | httpGet: 201 | path: /healthz 202 | port: 8080 203 | failureThreshold: 30 204 | periodSeconds: 10 205 | ``` 206 | 207 | - Probes can use different mechanisms to check container health: 208 | 1. **HTTP GET:** Performs an HTTP GET request against a specified endpoint and expects a **`2xx`** or **`3xx`** status. 209 | 2. **TCP Socket:** Attempts to open a socket connection on a specific port. 210 | 3. **Exec:** Executes a command inside the container. If the command returns a **0 status code**, the probe is successful. 211 | 212 | --- 213 | 214 | ## Replicas in K8s 215 | 216 | ### What are Replicas ? 217 | - In Kubernetes, **replicas** refer to the number of identical instances (Pods) of an application running at the same time to ensure **high availability**, **load balancing** and **fault tolerance**. Replicas are managed by a **ReplicaSet** or **Deployment**. 218 | 219 | ### Why is there a need for Replicas? 220 | 221 | - **High Availability :** 222 | - Ensure uptime by running multiple instances (Pods) of the application. If one pod crashes, others continue to serve traffic without downtime. 223 | - **Example:** If a web application runs with 3 replicas, the service stays available even if one pod fails. 224 | - **Load Balancing :** 225 | - Replicas distribute the workload across multiple instances, preventing any single pod from being overwhelmed. Kubernetes Services automatically balance traffic between replicas to improve performance.
226 | - **Example:** If 1000 users access the app, traffic will be spread across multiple pods to handle requests efficiently. 227 | - **Fault Tolerance :** 228 | - Kubernetes monitors replicas and replaces failed pods to maintain the desired number. This ensures the system recovers automatically from failures without manual intervention. 229 | - **Scalability :** 230 | - You can increase or decrease replicas based on the demand (horizontal scaling). During peak hours, you can scale up the replicas and scale down when demand reduces, optimizing resource usage. 231 | - **Example:** E-Commerce apps can scale from 2 replicas to 10 during a sale, ensuring they can handle the increased load. 232 | 233 | ```yaml 234 | apiVersion: apps/v1 235 | kind: Deployment 236 | metadata: 237 | name: my-app 238 | spec: 239 | replicas: 3 # Three replicas 240 | selector: 241 | matchLabels: 242 | app: my-app 243 | template: 244 | metadata: 245 | labels: 246 | app: my-app 247 | spec: 248 | containers: 249 | - name: nginx 250 | image: nginx 251 | ``` 252 | 253 | - **Rolling Updates & Zero Downtime Deployments :** 254 | - With replicas, Kubernetes can **update the application gradually** (rolling updates) to avoid downtime. 255 | - New Pods are created while old ones are terminated, ensuring continuous service availability. 256 | 257 | ### How do we define the availability of a website ?
- The **availability chart** for cloud services shows how downtime translates into **different availability percentages** over a given time period:

| **Availability %**   | **Downtime per Year**         | **Downtime per Month** | **Downtime per Week** | **Downtime per Day**   |
| -------------------- | ----------------------------- | ---------------------- | --------------------- | ---------------------- |
| 99.999% (Five Nines) | 5 minutes 15 seconds          | ~26 seconds            | ~6 seconds            | ~0.86 seconds          |
| 99.99%               | 52 minutes 36 seconds         | ~4 minutes 23 seconds  | ~1 minute             | ~8.6 seconds           |
| 99.9%                | 8 hours 45 minutes 36 seconds | ~43 minutes 12 seconds | ~10 minutes           | ~1 minute 26 seconds   |
| 99%                  | 3 days 15 hours 36 minutes    | 7 hours 12 minutes     | ~1 hour 41 minutes    | ~14 minutes 24 seconds |

- **Five nines (99.999%) availability** is the standard for **emergency response systems**, allowing only about **5 minutes and 15 seconds** of downtime per year.
- Achieving such high availability requires:
  - **Redundant infrastructure** (software, hardware, networks)
  - **Fault-tolerant systems** with automatic failover mechanisms
  - **Continuous monitoring** to detect and resolve issues instantly

### What kinds of Pods can a ReplicaSet run ?
- Pods can be classified as **Homogeneous** or **Heterogeneous**, and a **ReplicaSet** is designed to manage only **Homogeneous pods**, meaning **all pods it runs are identical**.

#### 1. Homogeneous Pods (Allowed)
- Pods within a ReplicaSet that have identical specifications, including container images, resource requests, and environment variables.
- **Where to use :** Used in stateless applications where any pod can handle the same workload, which ensures uniformity and load balancing across all replicas.
**Example:** A ReplicaSet managing **three Nginx Pods** that all use the same image and configuration.

```yaml
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```

#### 2. Heterogeneous Pods (Not Allowed)

- Pods within a ReplicaSet that have different specifications, such as varying container images, resource requests, or environment variables.
- **Where to use :** Used in a microservice architecture where one pod acts as a **frontend** and another as a **backend**.

#### How can we approximate Heterogeneous Pods using a ReplicaSet (Tweaks) :

- **Environment Variables or ConfigMaps:**
  - Inject dynamic configuration via **environment variables**, **ConfigMaps** or **Secrets** based on the Pod name or labels. This allows different behavior for each Pod, even though the Pod templates are technically identical.
  - **Example:** Using the **`POD_NAME`** environment variable for conditional logic within the container.
```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: heterogeneous-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: heterogeneous-app
  template:
    metadata:
      labels:
        app: heterogeneous-app
    spec:
      containers:
      - name: app-container
        image: myapp:latest
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        # Use a ConfigMap for additional configuration
        envFrom:
        - configMapRef:
            name: app-config
```

- **Mounted Volumes for Different Behavior :**
  - Mount ConfigMaps or Secrets as volumes containing Pod-specific configurations. Each Pod can mount a different configuration file based on its name or label.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: app-config
```

**Note :** The ConfigMap is mounted at `/etc/config`. Each Pod gets access to the same configuration.

- **Pod Affinity and Anti-Affinity :**
  - This YAML demonstrates **affinity** (Pods prefer to run together) and **anti-affinity** (Pods prefer to run on different nodes).
  - **PodAffinity :** Tries to schedule Pods on the same node, based on the `kubernetes.io/hostname` key.
  - **PodAntiAffinity :** Tries to **avoid placing Pods** on the same node, distributing them across nodes for fault tolerance.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: "kubernetes.io/hostname"
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx:latest
```

- **Adding Tolerations/Node Selectors :**
  - Modify node scheduling behavior based on environment variables or external configurations. Pods can exhibit different behavior depending on the node they're scheduled on.
  - Here the pod **tolerates nodes** with the taint `key1=value1:NoSchedule`, meaning it can be scheduled on such nodes despite the taint.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-example
spec:
  containers:
  - name: nginx
    image: nginx:latest
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
```

### How does a ReplicaSet Scale Up or Scale Down ?

- **Scale-Up:**
  - When scaling **up**, the ReplicaSet controller creates new Pods to meet the increased replica count.
- **Scale-Down:**
  - When scaling **down**, the controller decides which Pods to delete using the following **priority order**:
#### ReplicaSet Scale-Down Algorithm:
1. **Pending (and Unschedulable) Pods are Deleted First**
   - Any Pods that are **pending** (e.g., stuck in scheduling or waiting for resources) are prioritized for deletion.
2. **`controller.kubernetes.io/pod-deletion-cost` Annotation**
   - If the **`pod-deletion-cost`** annotation is present, the Pod with the **lower value** is deleted first.
   - **Pods with higher values** are more expensive to delete and are kept longer.

```yaml
metadata:
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-10"
```

3. **Pods on Nodes with More Replicas**
   - Pods running on **nodes with more replicas** are deleted before those on nodes with fewer replicas.
   - This ensures **better distribution** of Pods across nodes.
4. **Pod Creation Time (Logarithmic Scaling)**
   - If Pods have different **creation times**, **recently created Pods** are deleted before older ones.
   - When the **`LogarithmicScaleDown` feature gate** is enabled, the creation times are **bucketed on a logarithmic scale** to handle large clusters efficiently.


### How to Annotate a Pod ?

- There are two ways to annotate a Pod in Kubernetes: **add annotations** during Pod creation, or **modify annotations** on an existing Pod using **`kubectl`**.

- **Add Annotation During Pod Creation :**

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    environment: "production"
    owner: "dev-team"
spec:
  containers:
  - name: nginx
    image: nginx:latest
```

- **Add or Update Annotations on an Existing Pod :**

```bash
# kubectl annotate pod <pod-name> key1=value1 key2=value2
kubectl annotate pod example-pod environment=production owner=dev-team
```

- **Remove an Annotation :** To **remove an annotation**, append a hyphen (**`-`**) to the annotation key.
```bash
kubectl annotate pod example-pod environment-
```

- **Verify Annotations :** The following command checks the annotations on a Pod.

```bash
kubectl describe pod example-pod | grep Annotations
```

---

## Deployment in K8s

### What is a Deployment in Kubernetes ?

- A Deployment is a Kubernetes controller that manages a ReplicaSet to ensure the desired state of an application. It helps automate the **creation**, **scaling** and **updates** of Pods.
- A Deployment ensures that a specific number of identical pods are running at all times and provides features like **rolling updates** and **rollbacks**.

### Key Functions of a Deployment :

- **Scaling :** Scale the number of Pods up or down.
- **Rolling Updates :** Gradually update Pods to a new version.
- **Self-healing :** Recreate Pods if they fail or are deleted.
- **Rollback :** Revert to a previous version if a new update fails.

### Deployment Strategies in Kubernetes

1. **Recreate Strategy :** All existing pods are deleted first, and then new pods are created. It is used when downtime is acceptable or the new version is incompatible with the previous one.

```yaml
strategy:
  type: Recreate
```

2. **Rolling Update Strategy (Default) :** Pods are updated incrementally. A few old pods are terminated and new ones are started, until the entire application is updated. It is used when the update must happen without downtime.

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1
    maxSurge: 1
```

- **maxUnavailable :** Number of pods that can be unavailable during the update.
- **maxSurge :** Number of extra pods created temporarily during the update.

### Why use a Deployment Instead of a ReplicaSet ?
- **Automated Rolling Updates and Rollbacks :**
  - A Deployment supports **rolling updates**, ensuring that the application is updated **without downtime**. It also allows **rollback** to previous versions if an update fails.
  - A **ReplicaSet** alone does not provide these features; updates would have to be managed manually.
  - **Example :** With a Deployment, we can update the app version like this, which gradually replaces old pods with new ones, ensuring uptime.
```bash
kubectl set image deployment/my-app nginx=nginx:1.19.10
```

- **Version History Management :**
  - Deployments keep track of **previous ReplicaSets** (for rollback purposes).
  - With ReplicaSets alone, we would need to manually track versions to revert if something goes wrong.

- **Self-Healing :**
  - While both **ReplicaSets** and **Deployments** can restart failed Pods, Deployments manage **multiple ReplicaSets** and ensure the **desired state** is achieved across updates.

- **Declarative Management**
  - With Deployments, you just **declare the desired state** (e.g., number of replicas, version of the app), and Kubernetes takes care of the rest.
  - Using ReplicaSets alone would require more **manual management** (e.g., scaling ReplicaSets up/down and deleting old ones).

### When Would You Use a ReplicaSet Directly?

- If you don’t need the **rolling update or rollback** functionality (e.g., **batch jobs** or simple workloads).
- In some rare cases, you may want **full control** over the Pods without any automatic management, but this is uncommon.

---

## Canary Deployment in K8s

### What is Canary Deployment ?
- A Canary deployment is a strategy for gradually rolling out a **new version of an application** to a small subset of users or traffic, while the rest of the users continue to access the existing version.
- It ensures **new features are tested in production** without disrupting the entire system. If everything works as expected, traffic is shifted progressively to the new version; if not, it can be **rolled back** easily.

![Pasted image 20241017104243](https://github.com/user-attachments/assets/61dcd41d-683d-4885-8dc7-5404bcf62656)

### Step-by-Step Guide for Canary Deployment

- **Step 1 :** Pull the Docker image of Nginx and verify the downloaded image.

```bash
docker pull nginx
docker image ls
```

- **Step 2 :** Create the initial Deployment **(Version 1.0)**. Here we create one for Nginx, **`nginx-deployment.yaml`**. (Note: the volume name must be a valid DNS label, so a name like `index.html` with a dot would be rejected; `index-html` is used instead.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        version: "1.0"
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        resources:
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: index-html
      volumes:
      - name: index-html
        hostPath:
          path: /path/to/v1
```

- Apply and verify the Deployment of **(Version 1.0)**.

```bash
kubectl apply -f nginx-deployment.yaml
kubectl get pods -o wide
```

- **Step 3 :** Create a Service for the Deployment of **(Version 1.0)**, **`nginx-deployment-service.yaml`**.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
    version: "1.0"
  ports:
  - port: 8888
    targetPort: 80
```

- Apply and verify the Service of **(Version 1.0)**.
```bash
kubectl apply -f nginx-deployment-service.yaml
kubectl get service
```

- **Step 4 :** Now we create the Deployment for the **Canary version (Version 2.0)**, **`nginx-canary-deployment.yaml`**.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        version: "2.0"
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        resources:
          limits:
            memory: "128Mi"
            cpu: "50m"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: index-html
      volumes:
      - name: index-html
        hostPath:
          path: /path/to/v2
```

- Deploy and verify the canary pods of **(Version 2.0)** from **`nginx-canary-deployment.yaml`**.

```bash
kubectl apply -f nginx-canary-deployment.yaml
kubectl get pods -o wide
```

- **Step 5 :** Modify the Service to route traffic to the canary Pods. Dropping the `version` label from the selector makes the Service match Pods of **both versions**, so part of the traffic reaches the **Canary (Version 2.0)**.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 8888
    targetPort: 80
```

- Apply the updated Service and test the deployment by refreshing the webpage to see responses from both **Version 1** and **Version 2**.

```bash
kubectl apply -f nginx-deployment-service.yaml
```

**Step 6 :** Roll Back or Roll Out the Deployment
- **Roll Back** : If **(Version 2.0)** isn’t working correctly, delete the canary deployment.
```bash
kubectl delete deployment.apps/nginx-canary-deployment
```
- **Roll Out** : If (Version 2.0) works fine, update the Service to route all traffic to version 2 and **delete the old version’s deployment**, keeping the new one.

```bash
kubectl delete deployment.apps/nginx
```

---
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------

# Kubernetes-learning

Here I will be sharing my learning on Kubernetes with detailed explanations and examples.

## Table of Contents

- [**Kubernetes Day-1**](/Kubernetes%20Day-1.md) : Here I have walked through the basics of Kubernetes, installation of Kubernetes, release cycles of Kubernetes, and the `YAML` and `Linux commands` which are required for Kubernetes.

- [**Kubernetes Day-2**](/Kubernetes%20Day-2.md) : Here I have walked through the `Kubernetes Architecture` in depth, along with the `components of Kubernetes`.

- [**Kubernetes Day-3**](/Kubernetes%20Day-3.md) : Here I have explained Kubernetes clusters (cloud and on-prem), images, how to debug containers, and the kubectl commands needed to get off to a good start with Kubernetes.

- [**Kubernetes Day-4**](/Kubernetes%20Day-4.md) : Here I have covered essential Kubernetes topics including Image Security with Trivy, Admission Control with Kyverno, Kube Linter, kube-bench, Static Pods, initContainers, Sidecar vs. init Containers, Pod Termination, and Runtime Class, all crucial for enhancing Kubernetes' functionality and security.
- [**Kubernetes Day-5**](/Kubernetes%20Day-5.md) : Here I have covered important Kubernetes topics, including labels, probes and their types, cloud availability charts, pod types, ReplicaSets with scaling strategies, Deployments, and a step-by-step guide to canary deployment, key concepts to enhance your understanding and management of Kubernetes.
--------------------------------------------------------------------------------