├── 2024 ├── API_K8S_2.md ├── API_K8s_1.md ├── AWS_EKS.md ├── AWS_EKS_Project_1.md ├── Building Blocks of K8S.md ├── ConfigMap and Secret.md ├── Dynamic Provisioning of EBS Volumes on AWS EKS.md ├── Installation_Kubaeadm.md ├── K8S Networking_1.md ├── K8S_Namespace.md ├── K8S_RBAC_SA.md ├── K8S_SA_oles_ClusterRole_Nanespace.md ├── K8s Networking_2.md ├── K8s Networking_3.md ├── K8s Storage Introduction.md ├── K8s Storage_1.md ├── K8s Storage_2.md ├── K8s_Architecture.md ├── K8s_log.md ├── Labels and Selectors_1.md ├── Labels and Selectors_2.md ├── Labels_and_Annotations.md ├── Monolith_VS_Microservices.md ├── NameSPace, Role, SA example.md ├── NameSpace.md ├── Napspace, RBAC, ROLES LAB.md ├── Nginx deployment and a Cluster IP service.md ├── Nginx deployment and a NodePort service.md ├── Objects_K8S.md ├── Pod_2.md ├── Pod_Concept.md ├── RBAC.md ├── Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding.md └── What_Is_COntainer_OrchestrationTool.md ├── AzureDynamicStorageprovisioning.md ├── Deployment.yaml ├── Deployments.md ├── EFK.YAML ├── HowPodGetIPAddress.md ├── K8S_Certification Labs ├── README.md ├── a.core_concepts.md ├── b.multi_container_pods.md ├── c.pod_design.md ├── d.configuration.md ├── e.observability.md ├── f.services.md └── g.state.md ├── K8s.YAML ├── Kubernetes Installation.txt ├── Kubernetes.md ├── KubernetesControllers.md ├── Kubernetes_Installtion.md ├── Kubernetes_Services.md ├── Kubernetes_Storgae.md ├── Minikube Installation ├── Overview of Kubernetes.txt ├── PODS.md ├── Pod.YAML ├── README.md ├── ReplicaSet.md ├── ServiceDescoveryConcept.md ├── ServiceDiscoverLab.md ├── Service_Lab.md ├── StatefulSets.md ├── Topics.txt ├── minikubeInstalltion.sh └── minikube_Installtion_2025.md /2024/API_K8S_2.md: -------------------------------------------------------------------------------- 1 | ### Using `kubectl api-versions` and `kubectl api-resources` 2 | 3 | #### `kubectl api-versions` 4 | 5 | This command lists all API versions available on the Kubernetes cluster. It shows the different API groups and their versions that can be used to define Kubernetes objects. 6 | 7 | **Example Usage:** 8 | 9 | ```sh 10 | kubectl api-versions 11 | ``` 12 | 13 | **Example Output:** 14 | 15 | ```sh 16 | admissionregistration.k8s.io/v1 17 | admissionregistration.k8s.io/v1beta1 18 | apiextensions.k8s.io/v1 19 | apiextensions.k8s.io/v1beta1 20 | apiregistration.k8s.io/v1 21 | apiregistration.k8s.io/v1beta1 22 | apps/v1 23 | authentication.k8s.io/v1 24 | authorization.k8s.io/v1 25 | autoscaling/v1 26 | autoscaling/v2beta1 27 | autoscaling/v2beta2 28 | batch/v1 29 | batch/v1beta1 30 | certificates.k8s.io/v1 31 | coordination.k8s.io/v1 32 | events.k8s.io/v1 33 | events.k8s.io/v1beta1 34 | networking.k8s.io/v1 35 | policy/v1beta1 36 | rbac.authorization.k8s.io/v1 37 | scheduling.k8s.io/v1 38 | storage.k8s.io/v1 39 | v1 40 | ``` 41 | 42 | **Explanation:** 43 | - Each line represents an available API version in the cluster. 44 | - For example, `apps/v1` indicates that version `v1` of the `apps` API group is available. 45 | - `v1` (without a group prefix) indicates the core API group. 46 | 47 | #### `kubectl api-resources` 48 | 49 | This command lists all the resource types available in the Kubernetes API, grouped by their API group and version. It shows which resources you can create, manage, and interact with using Kubernetes. 
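The plain command shown in the example below lists every resource type at once. If you only need a narrower view, `kubectl api-resources` also accepts filter flags; a couple of hedged sketches (these flags exist in current kubectl releases, but check `kubectl api-resources --help` on your version):

```sh
# Only resources in the apps API group (Deployments, ReplicaSets, ...)
kubectl api-resources --api-group=apps

# Only cluster-scoped resources (nodes, namespaces, persistentvolumes, ...)
kubectl api-resources --namespaced=false

# Wide output adds the verbs each resource supports
kubectl api-resources -o wide
```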
50 | 51 | **Example Usage:** 52 | 53 | ```sh 54 | kubectl api-resources 55 | ``` 56 | 57 | **Example Output:** 58 | 59 | ```sh 60 | NAME SHORTNAMES APIGROUP NAMESPACED KIND 61 | bindings true Binding 62 | componentstatuses cs false ComponentStatus 63 | configmaps cm true ConfigMap 64 | endpoints ep true Endpoints 65 | events ev true Event 66 | limitranges limits true LimitRange 67 | namespaces ns false Namespace 68 | nodes no false Node 69 | persistentvolumeclaims pvc true PersistentVolumeClaim 70 | persistentvolumes pv false PersistentVolume 71 | pods po true Pod 72 | secrets true Secret 73 | serviceaccounts sa true ServiceAccount 74 | services svc true Service 75 | mutatingwebhookconfigurations admissionregistration.k8s.io false MutatingWebhookConfiguration 76 | validatingwebhookconfigurations admissionregistration.k8s.io false ValidatingWebhookConfiguration 77 | customresourcedefinitions crd apiextensions.k8s.io false CustomResourceDefinition 78 | ``` 79 | 80 | **Explanation:** 81 | 82 | - `NAME`: The name of the resource type. 83 | - `SHORTNAMES`: Shortened aliases for the resource type, useful for command-line shorthand. 84 | - `APIGROUP`: The API group the resource belongs to (if any). 85 | - `NAMESPACED`: Indicates whether the resource is namespaced (`true`) or cluster-wide (`false`). 86 | - `KIND`: The kind of the resource, used in the `kind` field of a YAML manifest. 87 | 88 | ### Example: Creating a Pod 89 | 90 | Using the information from `kubectl api-resources`, let's create a Pod. 91 | 92 | **Step 1: Write a YAML file for a Pod** 93 | 94 | `pod-nginx.yaml`: 95 | ```yaml 96 | apiVersion: v1 97 | kind: Pod 98 | metadata: 99 | name: nginx-pod 100 | spec: 101 | containers: 102 | - name: nginx 103 | image: nginx:latest 104 | ports: 105 | - containerPort: 80 106 | ``` 107 | 108 | **Step 2: Apply the YAML file using `kubectl apply`** 109 | 110 | ```sh 111 | kubectl apply -f pod-nginx.yaml 112 | ``` 113 | 114 | **Explanation of YAML:** 115 | - `apiVersion: v1`: Uses the `v1` version of the core API group. 116 | - `kind: Pod`: Specifies the resource type as Pod. 117 | - `metadata: name: nginx-pod`: Names the Pod `nginx-pod`. 118 | - `spec`: Defines the desired state of the Pod. 119 | - `containers`: Lists the containers in the Pod. 120 | - `name: nginx`: Names the container `nginx`. 121 | - `image: nginx:latest`: Uses the `nginx:latest` image. 122 | - `ports`: Lists the ports the container exposes. 123 | - `containerPort: 80`: Exposes port 80 on the container. 124 | 125 | ### Conclusion 126 | 127 | - **`kubectl api-versions`**: Lists all available API versions in the cluster. 128 | - **`kubectl api-resources`**: Lists all resource types available in the Kubernetes API, grouped by their API group and version. 129 | - These commands help you understand the API resources you can manage and which API versions are available, aiding in writing accurate and compatible Kubernetes manifests. 130 | -------------------------------------------------------------------------------- /2024/API_K8s_1.md: -------------------------------------------------------------------------------- 1 | ### Understanding API Versions in Kubernetes 2 | 3 | #### What is the API Version? 4 | 5 | The `apiVersion` in a Kubernetes manifest specifies the version of the Kubernetes API that you are using to create and manage the object. Kubernetes evolves over time, and API versions are used to ensure compatibility and stability for users as new features are added and old ones are deprecated. 
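A quick way to check which API version a given resource type serves on your own cluster is `kubectl explain`. A small sketch — the exact output layout varies slightly between kubectl versions:

```sh
kubectl explain deployment
# KIND:     Deployment
# VERSION:  apps/v1
# ... field documentation follows ...
```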
6 | 7 | API versions follow the pattern `apiVersion: <group>/<version>`, where: 8 | 9 | - **Group** is the API group (e.g., `apps`, `batch`, `networking.k8s.io`). 10 | - **Version** is the API version within that group (e.g., `v1`, `v1beta1`). 11 | 12 | For example, the `apiVersion` for a Deployment might be `apps/v1`. Core resources omit the group entirely and use plain `apiVersion: v1`. 13 | 14 | #### Different Types of API Versions 15 | 16 | Kubernetes API versions typically include: 17 | 18 | 1. **v1**: This is the stable version and is used for core resources such as Pods, Services, and ConfigMaps. For example, `apiVersion: v1`. 19 | 20 | 2. **v1beta1**, **v1alpha1**: These are pre-release versions. `v1beta1` is a beta version, meaning the feature is well-tested but may change in the future. `v1alpha1` is an alpha version, which is experimental and may change significantly. 21 | 22 | 3. **Group Versions**: Certain resources belong to specific API groups. For example: 23 | - `apps/v1`: Used for Deployments, StatefulSets, ReplicaSets. 24 | - `batch/v1`: Used for Jobs and CronJobs. 25 | - `networking.k8s.io/v1`: Used for NetworkPolicies and Ingress. 26 | 27 | #### How to See Available API Versions with `kubectl` 28 | 29 | To list all available API versions, you can use the following `kubectl` command: 30 | 31 | ```sh 32 | kubectl api-versions 33 | ``` 34 | 35 | This will output a list of all available API versions on your cluster. 36 | 37 | To get detailed information about all API resources, including their versions and kinds, you can use: 38 | 39 | ```sh 40 | kubectl api-resources 41 | ``` 42 | 43 | This command provides a comprehensive list of all API resources, grouped by their respective API groups and versions. 44 | 45 | #### Why There Are Different API Versions, and Their Use Cases 46 | 47 | Different API versions exist to handle the evolution and stability of features in Kubernetes: 48 | 49 | 1. **Backward Compatibility**: Newer versions of the API ensure that changes or enhancements do not break existing deployments. Older versions are maintained until they are officially deprecated. 50 | 51 | 2. **Feature Evolution**: As Kubernetes evolves, new features are added and tested in alpha and beta versions. Once these features are stable, they are promoted to the stable `v1` version. 52 | 53 | 3. **Segregation of Concerns**: Different API groups (like `apps`, `batch`, `networking.k8s.io`) allow for better organization and management of resources related to specific functionalities. 54 | 55 | **Use Case Examples:** 56 | 57 | - **Stable Features**: Use `apiVersion: v1` for stable, core resources like Pods, Services, ConfigMaps, and Secrets. 58 | - **Deployments and StatefulSets**: Use `apiVersion: apps/v1` for managing Deployments, StatefulSets, and ReplicaSets. 59 | - **Job Scheduling**: Use `apiVersion: batch/v1` for managing Jobs and CronJobs. 60 | - **Networking Policies**: Use `apiVersion: networking.k8s.io/v1` for NetworkPolicies and Ingress resources. 61 | 62 | ### Detailed Example of a Pod YAML with API Version 63 | 64 | Let's revisit the example YAML for launching a Pod and explain the details: 65 | 66 | ```yaml 67 | apiVersion: v1 68 | kind: Pod 69 | metadata: 70 | name: nginx-pod 71 | spec: 72 | containers: 73 | - name: nginx 74 | image: nginx:latest 75 | ports: 76 | - containerPort: 80 77 | ``` 78 | 79 | - `apiVersion: v1`: This specifies the API version as `v1`, indicating that this Pod uses the stable core API. 80 | - `kind: Pod`: Indicates that this manifest is for creating a Pod object. 81 | - `metadata:`: Provides metadata about the Pod.
82 | - `name: nginx-pod`: The name of the Pod. 83 | - `spec:`: Defines the desired state and configuration of the Pod. 84 | - `containers:`: Lists the containers that will run in the Pod. 85 | - `- name: nginx`: Specifies the name of the container. 86 | - `image: nginx:latest`: The Docker image to use for the container. 87 | - `ports:`: Lists the ports exposed by the container. 88 | - `- containerPort: 80`: The container port to expose. 89 | 90 | ### Summary 91 | 92 | - **API Versions**: Ensure compatibility and stability as Kubernetes evolves. 93 | - **Types of API Versions**: Include `v1`, `v1beta1`, `v1alpha1`, and various group versions like `apps/v1`. 94 | - **Listing API Versions**: Use `kubectl api-versions` and `kubectl api-resources` to see available API versions and resources. 95 | - **Use Cases**: Different API versions and groups help manage the evolution of features, maintain backward compatibility, and organize resources efficiently. 96 | 97 | Understanding API versions and their usage is crucial for managing Kubernetes objects effectively and ensuring your configurations remain compatible across Kubernetes updates. 98 | -------------------------------------------------------------------------------- /2024/AWS_EKS.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ### Setup Kubernetes on Amazon EKS 4 | 5 | #### Prerequisites: 6 | 7 | 1. **EC2 Instance:** 8 | - Launch an EC2 instance. 9 | 10 | 2. **Install AWS CLI (latest version):** 11 | - Follow the installation instructions for AWS CLI. 12 | 13 | 3. **Setup kubectl:** 14 | a. Download kubectl version 1.21 15 | ```bash 16 | curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl 17 | chmod +x ./kubectl 18 | mv ./kubectl /usr/local/bin 19 | ``` 20 | b. Test kubectl installation: 21 | ```bash 22 | kubectl version --short --client 23 | ``` 24 | 25 | 4. **Setup eksctl:** 26 | a. Download and extract the latest release 27 | ```bash 28 | curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp 29 | sudo mv /tmp/eksctl /usr/local/bin 30 | ``` 31 | b. Test eksctl installation: 32 | ```bash 33 | eksctl version 34 | ``` 35 | 36 | 5. **Create IAM Role:** 37 | - Create an IAM role and attach it to the EC2 instance. 38 | - The IAM user should have programmatic access and permissions for IAM, EC2, and CloudFormation. 39 | - Check eksctl documentation for minimum IAM policies. 
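Besides the flag-based commands shown in the next section, eksctl also accepts a declarative config file. A minimal sketch — the cluster name, region, and node-group values here are illustrative, not prescribed:

```yaml
# cluster.yaml -- declarative equivalent of the CLI flags used below
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: ap-south-1
nodeGroups:
  - name: ng-1
    instanceType: t2.small
    desiredCapacity: 2
```

Apply it with `eksctl create cluster -f cluster.yaml`.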
40 | 41 | #### Create Cluster and Nodes: 42 | 43 | ```bash 44 | eksctl create cluster --name <cluster-name> \ 45 | --region <region-name> \ 46 | --node-type <instance-type> \ 47 | --nodes-min 2 \ 48 | --nodes-max 2 \ 49 | --zones <zone1>,<zone2> 50 | ``` 51 | Example: 52 | ```bash 53 | eksctl create cluster --name my-cluster \ 54 | --region ap-south-1 \ 55 | --node-type t2.small 56 | ``` 57 | 58 | #### Delete EKS Cluster: 59 | 60 | ```bash 61 | eksctl delete cluster my-cluster --region ap-south-1 62 | ``` 63 | 64 | #### Validate Cluster: 65 | 66 | ```bash 67 | kubectl get nodes 68 | kubectl run tomcat --image=tomcat 69 | ``` 70 | 71 | #### Deploy Nginx Pods: 72 | 73 | ```bash 74 | kubectl create deployment demo-nginx --image=nginx --replicas=2 --port=80 75 | kubectl get all 76 | kubectl get pod 77 | ``` 78 | 79 | #### Expose Deployment as Service: 80 | 81 | ```bash 82 | kubectl expose deployment demo-nginx --port=80 --type=LoadBalancer 83 | kubectl get services -o wide 84 | ``` 85 | 86 | 87 | -------------------------------------------------------------------------------- /2024/Building Blocks of K8S.md: -------------------------------------------------------------------------------- 1 | ### Tutorial: Understanding Pods, Replica Sets, and Deployments in Kubernetes 2 | 3 | Kubernetes provides various constructs to manage containerized applications. Let's explore the differences between Pods, Replica Sets, and Deployments and how to use them with the NGINX image. 4 | 5 | #### 1. Pods 6 | 7 | A Pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in your cluster. 8 | 9 | **YAML for launching a Pod using the NGINX image:** 10 | ```yaml 11 | apiVersion: v1 12 | kind: Pod 13 | metadata: 14 | name: nginx-pod 15 | spec: 16 | containers: 17 | - name: nginx 18 | image: nginx:latest 19 | ports: 20 | - containerPort: 80 21 | ``` 22 | 23 | #### 2. Replica Sets 24 | 25 | A ReplicaSet ensures that a specified number of pod replicas are running at any given time. It is used primarily by Deployments as a mechanism to orchestrate pod scaling. 26 | 27 | **YAML for launching a ReplicaSet using the NGINX image:** 28 | ```yaml 29 | apiVersion: apps/v1 30 | kind: ReplicaSet 31 | metadata: 32 | name: nginx-replicaset 33 | spec: 34 | replicas: 3 35 | selector: 36 | matchLabels: 37 | app: nginx 38 | template: 39 | metadata: 40 | labels: 41 | app: nginx 42 | spec: 43 | containers: 44 | - name: nginx 45 | image: nginx:latest 46 | ports: 47 | - containerPort: 80 48 | ``` 49 | 50 | #### 3. Deployments 51 | 52 | A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment Controller changes the actual state to the desired state at a controlled rate. 53 | 54 | **YAML for launching a Deployment using the NGINX image:** 55 | ```yaml 56 | apiVersion: apps/v1 57 | kind: Deployment 58 | metadata: 59 | name: nginx-deployment 60 | spec: 61 | replicas: 3 62 | selector: 63 | matchLabels: 64 | app: nginx 65 | template: 66 | metadata: 67 | labels: 68 | app: nginx 69 | spec: 70 | containers: 71 | - name: nginx 72 | image: nginx:latest 73 | ports: 74 | - containerPort: 80 75 | ``` 76 |
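Because a Deployment sits above a ReplicaSet, it can roll a new image version out and back declaratively. A short sketch, assuming the `nginx-deployment` above has been applied (the target tag `nginx:1.25` is only an example):

```sh
# Roll out a new image version
kubectl set image deployment/nginx-deployment nginx=nginx:1.25

# Watch the rolling update progress
kubectl rollout status deployment/nginx-deployment

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/nginx-deployment
```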
77 | ### Which is Better: Pod, ReplicaSet, or Deployment? 78 | 79 | **1. Pods:** 80 | - **Use Case:** Useful for simple use cases like single-instance applications, testing, and debugging. 81 | - **Limitations:** Lack of scalability and fault tolerance. If a Pod fails, it needs to be recreated manually. 82 | 83 | **2. ReplicaSets:** 84 | - **Use Case:** Ensures a specified number of replicas are running. Suitable for applications needing high availability and redundancy. 85 | - **Limitations:** Manages replicas but does not support rolling updates. Typically used as a backend for Deployments. 86 | 87 | **3. Deployments:** 88 | - **Use Case:** Provides declarative updates, self-healing, and rollbacks. Suitable for production-grade applications requiring updates, scaling, and high availability. 89 | - **Advantages:** Rolling updates, rollbacks, and scaling. 90 | 91 | ### Step-by-Step Guide to Launch the Application 92 | 93 | 1. **Create a Pod:** 94 | ```sh 95 | kubectl apply -f pod-nginx.yaml 96 | ``` 97 | 98 | 2. **Create a ReplicaSet:** 99 | ```sh 100 | kubectl apply -f replicaset-nginx.yaml 101 | ``` 102 | 103 | 3. **Create a Deployment:** 104 | ```sh 105 | kubectl apply -f deployment-nginx.yaml 106 | ``` 107 | 108 | 4. **Verify Resources:** 109 | ```sh 110 | kubectl get pods 111 | kubectl get rs 112 | kubectl get deployments 113 | ``` 114 | 115 | 5. **Check Logs and Status:** 116 | ```sh 117 | kubectl logs <pod-name> 118 | kubectl describe pod <pod-name> 119 | kubectl describe rs <replicaset-name> 120 | kubectl describe deployment <deployment-name> 121 | ``` 122 | 123 | 6. **Scaling a Deployment:** 124 | ```sh 125 | kubectl scale deployment nginx-deployment --replicas=5 126 | ``` 127 | 128 | By using Deployments, you get advanced features like rolling updates and rollbacks, making them the preferred choice for most production applications. 129 | -------------------------------------------------------------------------------- /2024/ConfigMap and Secret.md: -------------------------------------------------------------------------------- 1 | ConfigMap and Secret are Kubernetes resources used to manage configuration data and sensitive information separately from application code. 2 | 3 | - **ConfigMap**: A ConfigMap is an API object that allows you to store non-sensitive data in key-value pairs. It's commonly used to store configuration files, environment variables, or any other kind of configuration data that your application needs. ConfigMaps decouple configuration data from container images, allowing for more flexible and portable deployments. 4 | 5 | - **Secret**: A Secret is similar to a ConfigMap but is specifically designed to store sensitive information such as passwords, API keys, and tokens. Secrets are base64 encoded and stored in etcd, Kubernetes' key-value store, and mounted into pods as files or environment variables. 6 | 7 | Here's an example YAML for a ConfigMap and a Secret: 8 | 9 | ### ConfigMap YAML Example: 10 | ```yaml 11 | apiVersion: v1 12 | kind: ConfigMap 13 | metadata: 14 | name: my-configmap 15 | data: 16 | server.properties: | 17 | server.port=8080 18 | server.host=localhost 19 | server.debug=false 20 | ``` 21 | 22 | ### Secret YAML Example: 23 | ```yaml 24 | apiVersion: v1 25 | kind: Secret 26 | metadata: 27 | name: my-secret 28 | type: Opaque 29 | data: 30 | username: YWRtaW4= # base64 encoded value of 'admin' 31 | password: cGFzc3dvcmQ= # base64 encoded value of 'password' 32 | ``` 33 | 34 | In the ConfigMap example: 35 | - We define a ConfigMap named "my-configmap" with a key-value pair "server.properties" containing some sample configuration data. 36 | 37 | In the Secret example: 38 | - We define a Secret named "my-secret" of type "Opaque" (which means it can contain arbitrary data). 39 | - Inside the Secret, we have base64-encoded values for a username and a password.
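The encoded values above can be produced (and checked) with the standard `base64` utility; note the `-n` flag, which keeps `echo` from appending a newline to the encoded value:

```sh
echo -n 'admin' | base64            # YWRtaW4=
echo -n 'password' | base64         # cGFzc3dvcmQ=
echo 'YWRtaW4=' | base64 --decode   # admin
```

Alternatively, `kubectl create secret generic my-secret --from-literal=username=admin --from-literal=password=password --dry-run=client -o yaml` generates an equivalent manifest with the encoding done for you.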
40 | 41 | Now, let's create a Deployment that uses both the ConfigMap and the Secret: 42 | 43 | ### Deployment YAML Using ConfigMap and Secret: 44 | ```yaml 45 | apiVersion: apps/v1 46 | kind: Deployment 47 | metadata: 48 | name: my-deployment 49 | spec: 50 | replicas: 1 51 | selector: 52 | matchLabels: 53 | app: my-app 54 | template: 55 | metadata: 56 | labels: 57 | app: my-app 58 | spec: 59 | containers: 60 | - name: my-container 61 | image: my-image 62 | env: 63 | - name: SERVER_PORT 64 | valueFrom: 65 | configMapKeyRef: 66 | name: my-configmap 67 | key: server.properties 68 | - name: DB_USERNAME 69 | valueFrom: 70 | secretKeyRef: 71 | name: my-secret 72 | key: username 73 | - name: DB_PASSWORD 74 | valueFrom: 75 | secretKeyRef: 76 | name: my-secret 77 | key: password 78 | ``` 79 | 80 | In this Deployment YAML: 81 | - We reference the ConfigMap "my-configmap" to set environment variables for the container. 82 | - We reference the Secret "my-secret" to set environment variables for sensitive information like username and password. These are retrieved from the Secret's data field using the specified keys. 83 | -------------------------------------------------------------------------------- /2024/Dynamic Provisioning of EBS Volumes on AWS EKS.md: -------------------------------------------------------------------------------- 1 | ### Dynamic Provisioning of EBS Volumes on AWS EKS 2 | 3 | Dynamic provisioning of EBS volumes in Amazon EKS (Elastic Kubernetes Service) involves creating storage classes, persistent volume claims (PVCs), and using these claims in deployments. Here is a step-by-step guide along with the necessary YAML files. 4 | 5 | ### Prerequisites 6 | 7 | 1. **AWS EKS Cluster**: You need an existing EKS cluster. 8 | 2. **IAM Role with Required Permissions**: The worker nodes should have an IAM role with the necessary permissions to provision EBS volumes. 9 | 10 | ### Step-by-Step Guide 11 | 12 | 1. **Create a Storage Class** 13 | 14 | A Storage Class defines how an EBS volume should be dynamically provisioned. 15 | 16 | ```yaml 17 | apiVersion: storage.k8s.io/v1 18 | kind: StorageClass 19 | metadata: 20 | name: ebs-sc 21 | provisioner: kubernetes.io/aws-ebs 22 | parameters: 23 | type: gp2 24 | fsType: ext4 25 | encrypted: "true" 26 | ``` 27 | 28 | 2. **Create a Persistent Volume Claim (PVC)** 29 | 30 | A PVC requests storage resources. Here’s how to create a PVC that uses the Storage Class defined above. 31 | 32 | ```yaml 33 | apiVersion: v1 34 | kind: PersistentVolumeClaim 35 | metadata: 36 | name: ebs-pvc 37 | spec: 38 | accessModes: 39 | - ReadWriteOnce 40 | storageClassName: ebs-sc 41 | resources: 42 | requests: 43 | storage: 20Gi 44 | ``` 45 | 46 | 3. **Create a Deployment that Uses the PVC** 47 | 48 | The deployment will use the PVC for storage. 
49 | 50 | ```yaml 51 | apiVersion: apps/v1 52 | kind: Deployment 53 | metadata: 54 | name: nginx-deployment 55 | spec: 56 | replicas: 2 57 | selector: 58 | matchLabels: 59 | app: nginx 60 | template: 61 | metadata: 62 | labels: 63 | app: nginx 64 | spec: 65 | containers: 66 | - name: nginx 67 | image: nginx 68 | ports: 69 | - containerPort: 80 70 | volumeMounts: 71 | - mountPath: "/usr/share/nginx/html" 72 | name: ebs-volume 73 | volumes: 74 | - name: ebs-volume 75 | persistentVolumeClaim: 76 | claimName: ebs-pvc 77 | ``` 78 | 79 | ### Applying the YAML Files 80 | 81 | To apply these YAML configurations, save each snippet to a file and then use the `kubectl apply` command: 82 | 83 | ```sh 84 | kubectl apply -f storageclass.yaml 85 | kubectl apply -f pvc.yaml 86 | kubectl apply -f deployment.yaml 87 | ``` 88 | 89 | ### Verifying the Setup 90 | 91 | 1. **Check Storage Class** 92 | 93 | ```sh 94 | kubectl get storageclass 95 | ``` 96 | 97 | Ensure `ebs-sc` is listed. 98 | 99 | 2. **Check PVC** 100 | 101 | ```sh 102 | kubectl get pvc 103 | ``` 104 | 105 | Ensure `ebs-pvc` is in the `Bound` state. 106 | 107 | 3. **Check Deployment** 108 | 109 | ```sh 110 | kubectl get deployments 111 | ``` 112 | 113 | Ensure `nginx-deployment` is created and running. 114 | 115 | 4. **Check Pods** 116 | 117 | ```sh 118 | kubectl get pods 119 | ``` 120 | 121 | Ensure the pods from the deployment are running. 122 | 123 | ### Detailed Steps Explanation 124 | 125 | 1. **Storage Class**: 126 | - The `StorageClass` named `ebs-sc` is configured to use the AWS EBS provisioner. 127 | - The `parameters` specify that the volume type is `gp2` (General Purpose SSD), the file system is `ext4`, and the volume should be encrypted. 128 | 129 | 2. **Persistent Volume Claim**: 130 | - The `PersistentVolumeClaim` named `ebs-pvc` requests 20Gi of storage using the `ebs-sc` StorageClass. 131 | - The `accessModes` specify that the volume can be mounted as read-write by a single node. 132 | 133 | 3. **Deployment**: 134 | - The `Deployment` named `nginx-deployment` creates two replicas of an NGINX container. 135 | - The `volumeMounts` section mounts the `ebs-volume` at `/usr/share/nginx/html`. 136 | - The `volumes` section specifies that the `ebs-volume` should use the `ebs-pvc` PersistentVolumeClaim. 137 | 138 | By following this guide, you can dynamically provision EBS volumes for your deployments on AWS EKS, ensuring efficient and scalable storage management for your applications. 139 | -------------------------------------------------------------------------------- /2024/Installation_Kubaeadm.md: -------------------------------------------------------------------------------- 1 | 2 | # Kubernetes Cluster Installation using kubeadm 3 | 4 | Follow this documentation to set up a Kubernetes cluster on CentOS 7 machines. This guide will help you create a cluster with one master node and two worker nodes. 
5 | 6 | ## Prerequisites 7 | 8 | ### System Requirements 9 | 10 | - **Master:** t2.medium (2 CPUs and 2GB Memory) 11 | - **Worker Nodes:** t2.micro 12 | 13 | ### Open Ports in Security Group 14 | 15 | #### Master node: 16 | - 6443 17 | - 32750 18 | - 10250 19 | - 4443 20 | - 443 21 | - 8080 22 | 23 | #### On Master and Worker: 24 | - 179 25 | 26 | ### On Master and Worker: 27 | - Perform all the commands as root user unless otherwise specified 28 | 29 | ## Installation Steps 30 | 31 | ### Install, Enable, and Start Docker Service 32 | 33 | ```bash 34 | yum install -y -q yum-utils device-mapper-persistent-data lvm2 > /dev/null 2>&1 35 | yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo > /dev/null 2>&1 36 | yum install -y -q docker-ce >/dev/null 2>&1 37 | systemctl enable docker 38 | systemctl start docker 39 | ``` 40 | 41 | ### Disable SELinux 42 | 43 | ```bash 44 | setenforce 0 45 | sed -i --follow-symlinks 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux 46 | ``` 47 | 48 | ### Disable Firewall 49 | 50 | ```bash 51 | systemctl disable firewalld 52 | systemctl stop firewalld 53 | ``` 54 | 55 | ### Disable Swap 56 | 57 | ```bash 58 | sed -i '/swap/d' /etc/fstab 59 | swapoff -a 60 | ``` 61 | 62 | ### Update sysctl settings for Kubernetes networking 63 | 64 | ```bash 65 | cat >> /etc/sysctl.d/kubernetes.conf <<EOF 66 | net.bridge.bridge-nf-call-ip6tables = 1 67 | net.bridge.bridge-nf-call-iptables = 1 68 | EOF 69 | sysctl --system 70 | ``` 71 | 72 | ### Install kubeadm, kubelet, and kubectl 73 | 74 | ```bash 75 | cat >>/etc/yum.repos.d/kubernetes.repo<<EOF 76 | [kubernetes] 77 | name=Kubernetes 78 | baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 79 | enabled=1 80 | gpgcheck=1 81 | repo_gpgcheck=1 82 | gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 83 | EOF 84 | yum install -y kubeadm kubelet kubectl 85 | systemctl enable kubelet 86 | systemctl start kubelet 87 | ``` 88 | 89 | #### On Master Node: 90 | 91 | ##### Initialize Kubernetes Cluster 92 | 93 | ```bash 94 | kubeadm init --apiserver-advertise-address=<MasterServerIP> --pod-network-cidr=192.168.0.0/16 95 | ``` 96 | 97 | ##### Create a user for Kubernetes administration and copy kubeconfig file 98 | 99 | ```bash 100 | useradd kubeadmin 101 | mkdir /home/kubeadmin/.kube 102 | cp /etc/kubernetes/admin.conf /home/kubeadmin/.kube/config 103 | chown -R kubeadmin:kubeadmin /home/kubeadmin/.kube 104 | ``` 105 | 106 | ##### Deploy Calico network as a kubeadmin user 107 | 108 | ```bash 109 | sudo su - kubeadmin 110 | kubectl create -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml 111 | ``` 112 | 113 | ##### Cluster join command 114 | 115 | ```bash 116 | kubeadm token create --print-join-command 117 | ``` 118 | 119 | #### On Worker Node: 120 | 121 | ##### Add worker nodes to the cluster 122 | 123 | Use the output from the `kubeadm token create` command from the master server and run it here. 124 | 125 | ### Verifying the Cluster 126 | 127 | #### Get Nodes status 128 | 129 | ```bash 130 | kubectl get nodes 131 | ``` 132 | 133 | #### Get component status 134 | 135 | ```bash 136 | kubectl get cs 137 | ``` 138 | 139 | This completes the Kubernetes cluster installation and verification process. 140 | 141 | Make sure to replace `<MasterServerIP>` with the actual IP address of your master node. 142 | -------------------------------------------------------------------------------- /2024/K8S Networking_1.md: -------------------------------------------------------------------------------- 1 | Introduction: 2 | 3 | - **Pods and External Requests**: 4 | - Pods in Kubernetes often need to respond to external requests, such as HTTP requests from other pods within the cluster or from clients outside the cluster. 5 | - Many modern applications, especially microservices, rely on this external interaction. 6 | 7 | - **Challenges with Pod Communication**: 8 | - Unlike traditional setups, where specific IP addresses or hostnames are configured in client apps, Kubernetes presents unique challenges. 9 | - Pods are ephemeral, constantly starting and stopping due to scaling, node failures, or resource constraints.
10 | - Kubernetes dynamically assigns IP addresses to pods after scheduling, making it impossible for clients to know these addresses in advance. 11 | - Horizontal scaling leads to multiple pods providing the same service, each with its own IP address. Clients shouldn't need to track these individual IPs. 12 | 13 | - **Introduction of Kubernetes Services**: 14 | - To address these challenges, Kubernetes introduces the concept of Services. 15 | - Services act as an abstraction layer, providing a stable endpoint for accessing a set of pods that offer the same service. 16 | - Clients interact with services rather than individual pods, simplifying communication and abstraction of underlying infrastructure complexities. 17 | 18 | This sets the stage for understanding Kubernetes networking and the role of Services in facilitating communication between pods. 19 | 20 | **Introducing Services** 21 | 22 | - **What is a Kubernetes Service**: 23 | - A Kubernetes Service is a resource created to provide a single, consistent entry point to a group of pods offering the same service. 24 | - Each service is assigned an IP address and port that remain constant as long as the service exists. 25 | - Clients connect to this IP and port, and the connections are intelligently routed to one of the pods serving the service, abstracting the underlying pod infrastructure. 26 | 27 | - **Example Illustration**: 28 | - Consider a scenario with a frontend web server and a backend database server. 29 | - Multiple pods may serve as the frontend, while there's typically only one backend database pod. 30 | - Two key challenges need addressing: 31 | 1. External clients require access to the frontend pods without concern for the number of web servers. 32 | 2. The frontend pods must reliably connect to the backend database, even as its pod location changes within the cluster. 33 | 34 | - **Solving the Challenges**: 35 | - By creating a service for the frontend pods and exposing it externally, you establish a consistent IP address for client access. 36 | - Similarly, creating a service for the backend pod ensures a stable address for the database, unaffected by pod movements. 37 | - Services enable seamless communication between components; frontend pods easily discover the backend service through environment variables or DNS, without the need for manual reconfiguration. 38 | 39 | This setup ensures the robustness and flexibility of your system, simplifying management and enhancing scalability. 40 | 41 | 42 | **Creating Services** 43 | 44 | - **Defining Service Backing Pods**: 45 | - A service can encompass multiple pods, with connections load-balanced across all these pods. 46 | - Label selectors play a crucial role in determining which pods are associated with a service. 47 | 48 | - **Using Label Selectors**: 49 | - Label selectors, familiar from ReplicationControllers and other pod controllers, specify the pods belonging to the same set. 50 | - They allow for dynamic grouping of pods based on shared characteristics or metadata. 51 | 52 | - **Creating a Service via YAML Descriptor**: 53 | - To create a service, a YAML descriptor file is used, providing detailed configuration. 54 | - Below is an example demonstrating how to define a service YAML using NGINX Deployment as a reference.
56 | 57 | ```yaml 58 | apiVersion: v1 59 | kind: Service 60 | metadata: 61 | name: nginx-service 62 | spec: 63 | selector: 64 | app: nginx 65 | ports: 66 | - protocol: TCP 67 | port: 80 68 | targetPort: 80 69 | type: ClusterIP 70 | ``` 71 | 72 | This YAML file creates a service named "nginx-service" that selects pods labeled with "app: nginx". It exposes port 80, which is the default port for HTTP traffic, and directs traffic to port 80 on the selected pods. 73 | The service type "ClusterIP" makes the service accessible only within the cluster. 74 | 75 | **Exposing Multiple Ports in the Same Service, and Using Named Ports** 76 | 77 | - **Supporting Multiple Ports in a Service**: 78 | - Services can accommodate multiple ports, allowing forwarding of different types of traffic to corresponding ports on the pods. 79 | - This capability is useful when pods listen on multiple ports, such as 8080 for HTTP and 8443 for HTTPS. 80 | 81 | - **Single Service Configuration**: 82 | - Instead of creating separate services for each port, a single service can expose multiple ports. 83 | - Each port definition in the service spec maps to a specific port on the pods. 84 | 85 | - **Label Selector Scope**: 86 | - The label selector specified in the service configuration applies to the entire service, not to individual ports. 87 | - If different ports require different subsets of pods, separate services are needed. 88 | 89 | - **Using Named Ports**: 90 | - Ports in pods can be named, enhancing clarity and flexibility in service configuration. 91 | - Named ports allow referring to ports by their descriptive names rather than numeric values. 92 | 93 | - **Changing Port Numbers**: 94 | - Naming ports facilitates port number changes without modifying the service specification. 95 | - Updating port numbers in pod specs while keeping port names unchanged ensures seamless transition. 96 | 97 | Example YAML configuration demonstrating named ports: 98 | 99 | ```yaml 100 | apiVersion: v1 101 | kind: Service 102 | metadata: 103 | name: kubia 104 | spec: 105 | ports: 106 | - name: http 107 | port: 80 108 | targetPort: http 109 | - name: https 110 | port: 443 111 | targetPort: https 112 | ``` 113 | 114 | Example pod configuration with named ports: 115 | 116 | ```yaml 117 | apiVersion: v1 118 | kind: Pod 119 | metadata: 120 | name: kubia 121 | spec: 122 | containers: 123 | - name: kubia 124 | ports: 125 | - name: http 126 | containerPort: 8080 127 | - name: https 128 | containerPort: 8443 129 | ``` 130 | 131 | Using named ports provides flexibility in port management, allowing for seamless updates without impacting service configurations. 132 | -------------------------------------------------------------------------------- /2024/K8S_Namespace.md: -------------------------------------------------------------------------------- 1 | ### **What is a Kubernetes Namespace?** 2 | 3 | A **Namespace** in Kubernetes is a virtual cluster within a physical cluster. It provides a way to divide cluster resources between multiple users or teams. Each Namespace acts as a logical boundary for resources and enables resource isolation and organization. 4 | 5 | 6 | ![image](https://github.com/user-attachments/assets/85e3150d-411e-4e12-8dcb-439dd169c809) 7 | 8 | 9 | --- 10 | 11 | ### **Key Features of Namespaces** 12 | 1. **Resource Isolation**: 13 | - Separate teams, projects, or applications can have their own resources without interference. 14 | 2. **Resource Quotas**: 15 | - Control resource usage (e.g., CPU, memory) within a Namespace.
16 | 3. **Organization**: 17 | - Helps group related resources for better management. 18 | 4. **Security**: 19 | - Role-Based Access Control (RBAC) can restrict user access to specific Namespaces. 20 | 21 | --- 22 | 23 | ### **Default Namespaces in Kubernetes** 24 | 1. **default**: 25 | - Used when no Namespace is specified. 26 | 2. **kube-system**: 27 | - Contains system components like kube-dns and the API server. 28 | 3. **kube-public**: 29 | - A Namespace that is readable by all users. 30 | 4. **kube-node-lease**: 31 | - Used for node heartbeats (since Kubernetes v1.13). 32 | 33 | --- 34 | 35 | ### **Use Cases for Namespaces** 36 | 1. **Environment Separation**: 37 | - Separate `dev`, `staging`, and `prod` environments. 38 | 2. **Team or Project Segmentation**: 39 | - Different teams or projects can operate in isolated Namespaces. 40 | 3. **Testing and Debugging**: 41 | - Run multiple versions of an application in isolated spaces. 42 | 43 | --- 44 | 45 | ### **Step-by-Step Lab: Working with Namespaces** 46 | 47 | #### **Step 1: Create a Namespace** 48 | 49 | 1. **Create a Namespace YAML**: 50 | Save the following YAML as `namespace.yaml`: 51 | ```yaml 52 | apiVersion: v1 53 | kind: Namespace 54 | metadata: 55 | name: my-namespace 56 | ``` 57 | 58 | 2. **Apply the YAML**: 59 | ```bash 60 | kubectl apply -f namespace.yaml 61 | ``` 62 | 63 | 3. **Verify the Namespace**: 64 | ```bash 65 | kubectl get namespaces 66 | ``` 67 | 68 | --- 69 | 70 | #### **Step 2: Create Resources in the Namespace** 71 | 72 | 1. **Deploy an NGINX Pod in `my-namespace`**: 73 | Save the following YAML as `nginx-deployment.yaml`: 74 | ```yaml 75 | apiVersion: apps/v1 76 | kind: Deployment 77 | metadata: 78 | name: nginx 79 | namespace: my-namespace 80 | spec: 81 | replicas: 2 82 | selector: 83 | matchLabels: 84 | app: nginx 85 | template: 86 | metadata: 87 | labels: 88 | app: nginx 89 | spec: 90 | containers: 91 | - name: nginx 92 | image: nginx:latest 93 | ports: 94 | - containerPort: 80 95 | ``` 96 | 97 | 2. **Apply the YAML**: 98 | ```bash 99 | kubectl apply -f nginx-deployment.yaml 100 | ``` 101 | 102 | 3. **Verify the Deployment**: 103 | ```bash 104 | kubectl get deployments -n my-namespace 105 | kubectl get pods -n my-namespace 106 | ``` 107 | 108 | --- 109 | 110 | #### **Step 3: Set a Default Namespace for Your `kubectl` Context** 111 | 112 | 1. **Check the Current Context**: 113 | ```bash 114 | kubectl config current-context 115 | ``` 116 | 117 | 2. **Set `my-namespace` as the Default Namespace**: 118 | ```bash 119 | kubectl config set-context --current --namespace=my-namespace 120 | ``` 121 | 122 | 3. **Test Default Namespace Behavior**: 123 | - Run a `kubectl` command without specifying the Namespace: 124 | ```bash 125 | kubectl get pods 126 | ``` 127 | 128 | - It should show Pods in `my-namespace`. 129 | 130 | --- 131 | 132 | #### **Step 4: Resource Quotas in a Namespace** 133 | 134 | 1. **Create a Resource Quota YAML**: 135 | Save the following YAML as `resource-quota.yaml`: 136 | ```yaml 137 | apiVersion: v1 138 | kind: ResourceQuota 139 | metadata: 140 | name: my-quota 141 | namespace: my-namespace 142 | spec: 143 | hard: 144 | pods: "5" # Maximum number of Pods 145 | requests.cpu: "2" # Total CPU requests 146 | requests.memory: "1Gi" # Total memory requests 147 | limits.cpu: "4" # Total CPU limits 148 | limits.memory: "2Gi" # Total memory limits 149 | ``` 150 | 151 | 2. **Apply the YAML**: 152 | ```bash 153 | kubectl apply -f resource-quota.yaml 154 | ``` 155 | 156 | 3. 
**Verify the Quota**: 157 | ```bash 158 | kubectl get resourcequota -n my-namespace 159 | ``` 160 | 161 | 4. **Test the Quota**: 162 | - Try deploying more than 5 Pods or exceeding the resource limits to observe the enforced restrictions. 163 | 164 | --- 165 | 166 | #### **Step 5: Clean Up** 167 | 168 | 1. **Delete the Resources**: 169 | ```bash 170 | kubectl delete namespace my-namespace 171 | ``` 172 | 173 | --- 174 | 175 | ### **Example Use Case: Isolating Dev and Prod Environments** 176 | 177 | 1. **Create Two Namespaces**: 178 | - `dev-environment` 179 | - `prod-environment` 180 | 181 | 2. **Deploy Different Versions of the Application**: 182 | - Use a `:dev` Docker image tag in `dev-environment`. 183 | - Use a `:stable` Docker image tag in `prod-environment`. 184 | 185 | 3. **Set Different Resource Quotas**: 186 | - Allocate fewer resources for `dev-environment`. 187 | - Allocate more resources for `prod-environment`. 188 | 189 | 4. **Test Application Behavior in Isolation**: 190 | - Developers can test in `dev-environment` without affecting the production system in `prod-environment`. 191 | 192 | --- 193 | 194 | ### **Commands Cheat Sheet** 195 | | Command | Description | 196 | |----------------------------------------------|--------------------------------------------| 197 | | `kubectl get namespaces` | List all Namespaces in the cluster. | 198 | | `kubectl apply -f <file.yaml>` | Apply a resource definition file. | 199 | | `kubectl config set-context --current --namespace=<namespace>` | Set default Namespace for the current context. | 200 | | `kubectl delete namespace <name>` | Delete a specific Namespace. | 201 | | `kubectl get pods -n <namespace>` | List Pods in a specific Namespace. | 202 | 203 | --- 204 | 205 | ### **Outcome** 206 | 1. You learned to create and manage Namespaces. 207 | 2. You deployed applications in isolated Namespaces. 208 | 3. You enforced resource quotas to control resource usage. 209 | 210 | Namespaces are a vital part of Kubernetes for organizing and isolating workloads, making them especially useful in multi-tenant environments or when managing environments like `dev`, `staging`, and `prod`. 211 | -------------------------------------------------------------------------------- /2024/K8S_RBAC_SA.md: -------------------------------------------------------------------------------- 1 | ### Kubernetes Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding 2 | 3 | **Service Account**: 4 | A Kubernetes Service Account (SA) provides an identity for processes that run in a Pod. By default, a Pod runs as the default service account in the namespace where the Pod is running. Service accounts are used to provide fine-grained access control for applications. 5 | 6 | **Role**: 7 | A Role in Kubernetes contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules). Roles are namespaced, meaning they only apply within a specific namespace. 8 | 9 | **RoleBinding**: 10 | A RoleBinding grants the permissions defined in a Role to a user or a service account within a namespace. It defines who can do what within that namespace. 11 | 12 | **ClusterRole**: 13 | A ClusterRole is similar to a Role but is cluster-wide. It can be used to define permissions that apply across all namespaces or to cluster-scoped resources like nodes. 14 | 15 | **ClusterRoleBinding**: 16 | A ClusterRoleBinding grants the permissions defined in a ClusterRole to a user or a service account across the entire cluster.
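Once the objects from the implementation below are in place, you can sanity-check what a service account is actually allowed to do with `kubectl auth can-i`. A quick sketch using the names created in the next section:

```sh
# Granted by the Role/RoleBinding below -- expected output: yes
kubectl auth can-i list pods -n default \
  --as=system:serviceaccount:default:my-service-account

# Not granted anywhere below -- expected output: no
kubectl auth can-i delete pods -n default \
  --as=system:serviceaccount:default:my-service-account
```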
17 | 18 | ### Step-by-Step YAML Implementation 19 | 20 | 1. **Create a Service Account** 21 | 22 | ```yaml 23 | apiVersion: v1 24 | kind: ServiceAccount 25 | metadata: 26 | name: my-service-account 27 | namespace: default 28 | ``` 29 | 30 | 2. **Create a Role** 31 | 32 | ```yaml 33 | apiVersion: rbac.authorization.k8s.io/v1 34 | kind: Role 35 | metadata: 36 | namespace: default 37 | name: my-role 38 | rules: 39 | - apiGroups: [""] 40 | resources: ["pods"] 41 | verbs: ["get", "list", "watch"] 42 | ``` 43 | 44 | 3. **Create a RoleBinding** 45 | 46 | ```yaml 47 | apiVersion: rbac.authorization.k8s.io/v1 48 | kind: RoleBinding 49 | metadata: 50 | name: my-rolebinding 51 | namespace: default 52 | subjects: 53 | - kind: ServiceAccount 54 | name: my-service-account 55 | namespace: default 56 | roleRef: 57 | kind: Role 58 | name: my-role 59 | apiGroup: rbac.authorization.k8s.io 60 | ``` 61 | 62 | 4. **Create a ClusterRole** 63 | 64 | ```yaml 65 | apiVersion: rbac.authorization.k8s.io/v1 66 | kind: ClusterRole 67 | metadata: 68 | name: my-clusterrole 69 | rules: 70 | - apiGroups: [""] 71 | resources: ["pods"] 72 | verbs: ["get", "list", "watch"] 73 | ``` 74 | 75 | 5. **Create a ClusterRoleBinding** 76 | 77 | ```yaml 78 | apiVersion: rbac.authorization.k8s.io/v1 79 | kind: ClusterRoleBinding 80 | metadata: 81 | name: my-clusterrolebinding 82 | subjects: 83 | - kind: ServiceAccount 84 | name: my-service-account 85 | namespace: default 86 | roleRef: 87 | kind: ClusterRole 88 | name: my-clusterrole 89 | apiGroup: rbac.authorization.k8s.io 90 | ``` 91 | 92 | ### Explanation 93 | 94 | - **Service Account**: `my-service-account` is created in the `default` namespace. 95 | - **Role**: `my-role` is created in the `default` namespace, allowing `get`, `list`, and `watch` on `pods`. 96 | - **RoleBinding**: Binds `my-role` to `my-service-account` in the `default` namespace. 97 | - **ClusterRole**: `my-clusterrole` is created with permissions to `get`, `list`, and `watch` on `pods` across all namespaces. 98 | - **ClusterRoleBinding**: Binds `my-clusterrole` to `my-service-account` across the entire cluster. 99 | 100 | ### Applying the YAML files 101 | 102 | To apply these YAML configurations, save each snippet to a file (e.g., `serviceaccount.yaml`, `role.yaml`, `rolebinding.yaml`, `clusterrole.yaml`, `clusterrolebinding.yaml`) and then use the `kubectl apply` command: 103 | 104 | ```sh 105 | kubectl apply -f serviceaccount.yaml 106 | kubectl apply -f role.yaml 107 | kubectl apply -f rolebinding.yaml 108 | kubectl apply -f clusterrole.yaml 109 | kubectl apply -f clusterrolebinding.yaml 110 | ``` 111 | 112 | This will create the service account, roles, and bindings in your Kubernetes cluster, granting the specified permissions to the service account. 113 | -------------------------------------------------------------------------------- /2024/K8S_SA_oles_ClusterRole_Nanespace.md: -------------------------------------------------------------------------------- 1 | ### Kubernetes Namespace 2 | 3 | A Kubernetes Namespace is a way to divide cluster resources between multiple users. Namespaces are intended for use in environments with many users spread across multiple teams, or projects. Namespaces provide a mechanism for isolating groups of resources within a single cluster. 4 | 5 | 6 | ![image](https://github.com/user-attachments/assets/820e6aac-8212-42ac-9658-8511a34bb7ea) 7 | 8 | 9 | ### YAML to Create Namespaces 10 | 11 | 1. 
**Create Namespaces `test` and `prod`** 12 | 13 | ```yaml 14 | # test namespace 15 | apiVersion: v1 16 | kind: Namespace 17 | metadata: 18 | name: test 19 | --- 20 | # prod namespace 21 | apiVersion: v1 22 | kind: Namespace 23 | metadata: 24 | name: prod 25 | ``` 26 | 27 | ### Create Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding for the Namespaces 28 | 29 | 1. **Service Account** 30 | 31 | ```yaml 32 | # Service Account for test namespace 33 | apiVersion: v1 34 | kind: ServiceAccount 35 | metadata: 36 | name: test-service-account 37 | namespace: test 38 | --- 39 | # Service Account for prod namespace 40 | apiVersion: v1 41 | kind: ServiceAccount 42 | metadata: 43 | name: prod-service-account 44 | namespace: prod 45 | ``` 46 | 47 | 2. **Role** 48 | 49 | ```yaml 50 | # Role for test namespace 51 | apiVersion: rbac.authorization.k8s.io/v1 52 | kind: Role 53 | metadata: 54 | namespace: test 55 | name: test-role 56 | rules: 57 | - apiGroups: [""] 58 | resources: ["pods"] 59 | verbs: ["get", "list", "create", "delete"] 60 | --- 61 | # Role for prod namespace 62 | apiVersion: rbac.authorization.k8s.io/v1 63 | kind: Role 64 | metadata: 65 | namespace: prod 66 | name: prod-role 67 | rules: 68 | - apiGroups: [""] 69 | resources: ["pods"] 70 | verbs: ["get", "list", "create", "delete"] 71 | ``` 72 | 73 | 3. **RoleBinding** 74 | 75 | ```yaml 76 | # RoleBinding for test namespace 77 | apiVersion: rbac.authorization.k8s.io/v1 78 | kind: RoleBinding 79 | metadata: 80 | name: test-rolebinding 81 | namespace: test 82 | subjects: 83 | - kind: ServiceAccount 84 | name: test-service-account 85 | namespace: test 86 | roleRef: 87 | kind: Role 88 | name: test-role 89 | apiGroup: rbac.authorization.k8s.io 90 | --- 91 | # RoleBinding for prod namespace 92 | apiVersion: rbac.authorization.k8s.io/v1 93 | kind: RoleBinding 94 | metadata: 95 | name: prod-rolebinding 96 | namespace: prod 97 | subjects: 98 | - kind: ServiceAccount 99 | name: prod-service-account 100 | namespace: prod 101 | roleRef: 102 | kind: Role 103 | name: prod-role 104 | apiGroup: rbac.authorization.k8s.io 105 | ``` 106 | 107 | 4. **ClusterRole** 108 | 109 | ```yaml 110 | apiVersion: rbac.authorization.k8s.io/v1 111 | kind: ClusterRole 112 | metadata: 113 | name: cluster-role 114 | rules: 115 | - apiGroups: [""] 116 | resources: ["pods"] 117 | verbs: ["get", "list", "create", "delete"] 118 | ``` 119 | 120 | 5. **ClusterRoleBinding** 121 | 122 | ```yaml 123 | apiVersion: rbac.authorization.k8s.io/v1 124 | kind: ClusterRoleBinding 125 | metadata: 126 | name: cluster-rolebinding 127 | subjects: 128 | - kind: ServiceAccount 129 | name: test-service-account 130 | namespace: test 131 | - kind: ServiceAccount 132 | name: prod-service-account 133 | namespace: prod 134 | roleRef: 135 | kind: ClusterRole 136 | name: cluster-role 137 | apiGroup: rbac.authorization.k8s.io 138 | ``` 139 | 140 | ### Deployment for Each Namespace 141 | 142 | 1. **Deployment in `test` Namespace** 143 | 144 | ```yaml 145 | apiVersion: apps/v1 146 | kind: Deployment 147 | metadata: 148 | name: test-deployment 149 | namespace: test 150 | spec: 151 | replicas: 2 152 | selector: 153 | matchLabels: 154 | app: test-app 155 | template: 156 | metadata: 157 | labels: 158 | app: test-app 159 | spec: 160 | serviceAccountName: test-service-account 161 | containers: 162 | - name: nginx 163 | image: nginx 164 | ports: 165 | - containerPort: 80 166 | ``` 167 | 168 | 2.
**Deployment in `prod` Namespace** 169 | 170 | ```yaml 171 | apiVersion: apps/v1 172 | kind: Deployment 173 | metadata: 174 | name: prod-deployment 175 | namespace: prod 176 | spec: 177 | replicas: 2 178 | selector: 179 | matchLabels: 180 | app: prod-app 181 | template: 182 | metadata: 183 | labels: 184 | app: prod-app 185 | spec: 186 | serviceAccountName: prod-service-account 187 | containers: 188 | - name: nginx 189 | image: nginx 190 | ports: 191 | - containerPort: 80 192 | ``` 193 | 194 | ### Applying the YAML files 195 | 196 | To apply these YAML configurations, save each snippet to a file and then use the `kubectl apply` command: 197 | 198 | ```sh 199 | kubectl apply -f namespace-test.yaml 200 | kubectl apply -f namespace-prod.yaml 201 | kubectl apply -f serviceaccount-test.yaml 202 | kubectl apply -f serviceaccount-prod.yaml 203 | kubectl apply -f role-test.yaml 204 | kubectl apply -f role-prod.yaml 205 | kubectl apply -f rolebinding-test.yaml 206 | kubectl apply -f rolebinding-prod.yaml 207 | kubectl apply -f clusterrole.yaml 208 | kubectl apply -f clusterrolebinding.yaml 209 | kubectl apply -f deployment-test.yaml 210 | kubectl apply -f deployment-prod.yaml 211 | ``` 212 | 213 | ### Example of Roles and ClusterRoles in Action 214 | 215 | #### Roles in Namespaces 216 | - A Role (e.g., `test-role` or `prod-role`) defines what actions can be performed within a specific namespace. 217 | - The RoleBinding binds this Role to a Service Account, ensuring the Service Account has the necessary permissions within that namespace. 218 | 219 | Example: The `test-service-account` in the `test` namespace can `get`, `list`, `create`, and `delete` pods due to its `RoleBinding` to `test-role`. 220 | 221 | #### ClusterRoles Across Namespaces 222 | - A ClusterRole provides similar permissions but across all namespaces or at the cluster level. 223 | - The ClusterRoleBinding binds the ClusterRole to a Service Account, granting it permissions cluster-wide. 224 | 225 | Example: The `test-service-account` and `prod-service-account` can perform cluster-wide actions like `get`, `list`, `create`, and `delete` pods due to their `ClusterRoleBinding` to `cluster-role`. 226 | 227 | By following this step-by-step approach, you can create namespaces, service accounts, roles, role bindings, cluster roles, and cluster role bindings, and deploy applications with the appropriate permissions within a Kubernetes cluster. 228 | -------------------------------------------------------------------------------- /2024/K8s Networking_2.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes Networking: How Network Packets Travel in a Cluster 2 | 3 | ### Overview of iptables and kube-proxy in Kubernetes Networking 4 | 5 | - **iptables**: A Linux kernel-based firewall used for network address translation (NAT) and packet filtering. In Kubernetes, iptables rules are configured to facilitate pod-to-pod communication. 6 | - **kube-proxy**: A Kubernetes component that manages the rules in iptables to handle traffic routing within the cluster. It ensures that services and their corresponding pods can communicate seamlessly. 7 | 8 | ### Intra-node Pod-to-Pod Communication 9 | 10 | 1. **Pod A sends a packet to Pod B**: 11 | - Pod A, residing on Node 1, initiates a connection to Pod B on the same node. 12 | - The packet from Pod A is encapsulated with the destination IP address of Pod B. 13 | 14 | 2. 
**Packet enters the Node's network stack**: 15 | - The packet leaves Pod A’s virtual network interface (veth pair) and enters the node’s network stack. 16 | 17 | 3. **iptables rules are applied**: 18 | - The packet is processed by iptables rules configured by kube-proxy. 19 | - These rules include filtering and NAT rules to route the packet correctly to the destination pod. 20 | 21 | 4. **Packet reaches Pod B**: 22 | - The packet is directed to Pod B’s network interface based on the iptables rules. 23 | - Pod B receives the packet, and communication is established. 24 | 25 | ### Inter-node Pod-to-Pod Communication 26 | 27 | 1. **Pod A sends a packet to Pod C**: 28 | - Pod A, residing on Node 1, initiates a connection to Pod C on Node 2. 29 | - The packet from Pod A is encapsulated with the destination IP address of Pod C. 30 | 31 | 2. **Packet enters Node 1's network stack**: 32 | - The packet leaves Pod A’s virtual network interface and enters Node 1’s network stack. 33 | 34 | 3. **iptables and kube-proxy routing on Node 1**: 35 | - iptables rules configured by kube-proxy determine that the destination IP belongs to a pod on a different node. 36 | - The packet is routed to Node 2’s IP address. 37 | 38 | 4. **Packet is forwarded to Node 2**: 39 | - Node 1 forwards the packet to Node 2 over the cluster network, often using an overlay network (e.g., Flannel, Calico) to encapsulate the packet. 40 | 41 | 5. **Packet enters Node 2's network stack**: 42 | - Node 2 receives the packet and decapsulates it if an overlay network is used. 43 | - The packet then enters Node 2’s network stack. 44 | 45 | 6. **iptables and kube-proxy routing on Node 2**: 46 | - iptables rules configured by kube-proxy on Node 2 route the packet to the correct destination pod based on the destination IP. 47 | 48 | 7. **Packet reaches Pod C**: 49 | - The packet is directed to Pod C’s network interface. 50 | - Pod C receives the packet, completing the communication. 51 | 52 | ### Detailed Step-by-Step Document 53 | 54 | #### Intra-node Pod-to-Pod Communication 55 | 56 | 1. **Pod A to Pod B Communication Initiation**: 57 | - Pod A’s application sends a packet to Pod B’s IP address (e.g., 10.244.1.5). 58 | 59 | 2. **Packet Handling by Node’s Network Stack**: 60 | - Packet travels from Pod A’s veth interface to the node’s network stack. 61 | 62 | 3. **iptables Processing**: 63 | - The packet hits iptables rules: 64 | - **Filter rules**: Ensure the packet is allowed. 65 | - **NAT rules**: Handle DNAT/SNAT if needed (typically no NAT for intra-node). 66 | - Example iptables command: `iptables -t nat -L -n -v` shows the NAT table. 67 | 68 | 4. **Packet Delivery to Pod B**: 69 | - The packet is forwarded to Pod B’s veth interface based on the routing rules. 70 | 71 | 5. **Pod B Receives the Packet**: 72 | - Pod B’s application processes the received packet. 73 | 74 | #### Inter-node Pod-to-Pod Communication 75 | 76 | 1. **Pod A to Pod C Communication Initiation**: 77 | - Pod A’s application sends a packet to Pod C’s IP address (e.g., 10.244.2.8). 78 | 79 | 2. **Packet Handling by Node 1’s Network Stack**: 80 | - Packet travels from Pod A’s veth interface to Node 1’s network stack. 81 | 82 | 3. **iptables Processing on Node 1**: 83 | - The packet hits iptables rules: 84 | - **Filter rules**: Ensure the packet is allowed. 85 | - **NAT rules**: Handle DNAT/SNAT if needed. 86 | - Example iptables command: `iptables -t nat -L -n -v` shows the NAT table. 87 | 88 | 4. 
**Packet Forwarding to Node 2**: 89 | - Node 1’s routing directs the packet to Node 2. 90 | - If using an overlay network, the packet is encapsulated with Node 2’s IP. 91 | 92 | 5. **Packet Arrival at Node 2**: 93 | - Node 2 decapsulates the packet if needed and passes it to its network stack. 94 | 95 | 6. **iptables Processing on Node 2**: 96 | - The packet hits iptables rules: 97 | - **Filter rules**: Ensure the packet is allowed. 98 | - **NAT rules**: Handle DNAT/SNAT if needed. 99 | 100 | 7. **Packet Delivery to Pod C**: 101 | - The packet is forwarded to Pod C’s veth interface based on the routing rules. 102 | 103 | 8. **Pod C Receives the Packet**: 104 | - Pod C’s application processes the received packet. 105 | 106 | ### Conclusion 107 | 108 | In Kubernetes, iptables and kube-proxy play crucial roles in managing network traffic and ensuring seamless communication between pods. Whether pods are on the same node or different nodes, iptables rules configured by kube-proxy handle packet filtering and routing, enabling efficient and reliable pod-to-pod communication within the cluster. 109 | -------------------------------------------------------------------------------- /2024/K8s Networking_3.md: -------------------------------------------------------------------------------- 1 | ## In-Depth Guide to Kubernetes Networking 2 | 3 | ### Table of Contents 4 | 1. [Kubernetes networking requirements](#kubernetes-networking-requirements) 5 | 2. [How Linux network namespaces work in a pod](#how-linux-network-namespaces-work-in-a-pod) 6 | 3. [The pause container creates the network namespace in the pod](#the-pause-container-creates-the-network-namespace-in-the-pod) 7 | 4. [The pod is assigned a single IP address](#the-pod-is-assigned-a-single-ip-address) 8 | 5. [Inspecting pod to pod traffic in the cluster](#inspecting-pod-to-pod-traffic-in-the-cluster) 9 | 6. [The pod network namespace is connected to an ethernet bridge](#the-pod-network-namespace-is-connected-to-an-ethernet-bridge) 10 | 7. [Tracing pod to pod traffic on the same node](#tracing-pod-to-pod-traffic-on-the-same-node) 11 | 8. [Tracing pod to pod communication on different nodes](#tracing-pod-to-pod-communication-on-different-nodes) 12 | 9. [The Container Network Interface - CNI](#the-container-network-interface---cni) 13 | 10. [Inspecting pod to service traffic](#inspecting-pod-to-service-traffic) 14 | 11. [Intercepting and rewriting traffic with Netfilter and Iptables](#intercepting-and-rewriting-traffic-with-netfilter-and-iptables) 15 | 12. [Inspecting responses from services](#inspecting-responses-from-services) 16 | 17 | ### Kubernetes networking requirements 18 | 19 | Before diving into the details on how packets flow inside a Kubernetes cluster, let's first clear up the requirements for a Kubernetes network. 20 | 21 | The Kubernetes networking model defines a set of fundamental rules: 22 | 23 | 1. A pod in the cluster should be able to freely communicate with any other pod without the use of Network Address Translation (NAT). 24 | 2. Any program running on a cluster node should communicate with any pod on the same node without using NAT. 25 | 3. Each pod has its own IP address (IP-per-Pod), and every other pod can reach it at that same address. 26 | 27 | These requirements describe the properties of the cluster network in general terms. 
Implementing these rules involves solving several challenges, such as ensuring container-to-container communication within a pod, inter-pod communication, service reachability, and external traffic handling.

This article will focus on the first three points, starting with intra-pod networking, or container-to-container communication.

### How Linux network namespaces work in a pod

Consider a pod with two containers, one running BusyBox and one running Nginx:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  containers:
  - name: container-1
    image: busybox
    command: ['/bin/sh', '-c', 'sleep 1d']
  - name: container-2
    image: nginx
```

When deployed, the following happens:

- The pod gets its own network namespace on the node.
- An IP address is assigned to the pod, and the ports are shared between the two containers.
- Both containers share the same networking namespace and can see each other on localhost.
- The network configuration happens rapidly in the background.

In Linux, network namespaces are isolated, logical spaces that can be configured with their own networking rules and resources. The physical network interface holds the root network namespace, and the physical interface processes all real packets in the end. Virtual interfaces created from the physical interface are managed with the `ip netns` tool.

### The pause container creates the network namespace in the pod

When you create a pod, the container runtime first creates a network namespace for the containers. Each pod in the cluster has an additional hidden container running in the background called the pause container, responsible for creating and holding the network namespace.

For example, listing the pause containers running on a node reveals:

```bash
docker ps | grep pause
fa9666c1d9c6 registry.k8s.io/pause:3.4.1 "/pause" k8s_POD_kube-dns-599484b884-sv2js…
44218e010aeb registry.k8s.io/pause:3.4.1 "/pause" k8s_POD_blackbox-exporter-55c457d…
5fb4b5942c66 registry.k8s.io/pause:3.4.1 "/pause" k8s_POD_kube-dns-599484b884-cq99x…
8007db79dcf2 registry.k8s.io/pause:3.4.1 "/pause" k8s_POD_konnectivity-agent-84f87c…
```

The pause container creates the network namespace with minimal code and goes to sleep, ensuring robust network namespace creation. If one of the containers in the pod crashes, the remaining container can still reply to network requests.

### The pod is assigned a single IP address

Inside the pod network namespace, an interface is created, and an IP address is assigned. This IP is shared between all containers within the pod.

To find the pod's IP address:

```bash
kubectl get pod multi-container-pod -o jsonpath={.status.podIP}
```

You can also verify the network namespace and interfaces from within the cluster node:

```bash
ip netns list
ip netns exec <pod-namespace-id> ip a
```

### Inspecting pod to pod traffic in the cluster

When inspecting pod-to-pod traffic, there are two scenarios:

1. Traffic destined for a pod on the same node.
2. Traffic destined for a pod on a different node.

For a pod to communicate with other pods, it must first have access to the node's root namespace. This is achieved using a virtual ethernet pair (veth), connecting the pod namespace to the root namespace; a manual sketch of this wiring follows.
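To make the veth mechanics concrete, here is a minimal hand-rolled sketch of what the container runtime and CNI plugin automate for every pod; the namespace name, interface names, and IP address below are illustrative assumptions, not the names Kubernetes actually generates:

```bash
# Create a namespace standing in for a pod's network namespace (illustrative name)
ip netns add demo-pod

# Create a veth pair: one end stays in the root namespace, the peer moves into the pod
ip link add veth-host type veth peer name eth0-pod
ip link set eth0-pod netns demo-pod

# Give the pod end an address and bring both ends up
ip netns exec demo-pod ip addr add 10.244.1.5/24 dev eth0-pod
ip netns exec demo-pod ip link set eth0-pod up
ip link set veth-host up
```

Deleting the namespace with `ip netns delete demo-pod` tears the pair down again, which is essentially what happens when a pod is removed.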
Each newly created pod on the node will have a veth pair. 98 | 99 | ### The pod network namespace is connected to an ethernet bridge 100 | 101 | An ethernet bridge at layer 2 of the OSI networking model connects each end of the virtual interfaces in the root namespace, allowing traffic to flow between virtual pairs. 102 | 103 | ### Tracing pod to pod traffic on the same node 104 | 105 | For pods on the same node, Pod-A sends a packet to its default interface `eth0`, tied to one end of the veth pair, forwarding packets to the root namespace on the node. 106 | 107 | ### Tracing pod to pod communication on different nodes 108 | 109 | For pods communicating across different nodes, an additional hop is required. The source node performs a bitwise operation to determine if the destination IP is on a different network. If so, the packet is forwarded to the default gateway of the node. 110 | 111 | ### The Container Network Interface - CNI 112 | 113 | The Container Network Interface (CNI) handles networking within the current node. The kubelet interacts with the CNI plugins to ensure the network setup for pods. 114 | 115 | ### Inspecting pod to service traffic 116 | 117 | To be detailed in subsequent sections. 118 | 119 | ### Intercepting and rewriting traffic with Netfilter and Iptables 120 | 121 | To be detailed in subsequent sections. 122 | 123 | ### Inspecting responses from services 124 | 125 | To be detailed in subsequent sections. 126 | -------------------------------------------------------------------------------- /2024/K8s Storage_1.md: -------------------------------------------------------------------------------- 1 | Kubernetes storage concepts and provide examples of PersistentVolume (PV), PersistentVolumeClaim (PVC), and 2 | StorageClass, along with YAML definitions for each. 3 | 4 | ### PersistentVolume (PV): 5 | A PersistentVolume (PV) in Kubernetes represents a piece of storage in the cluster. It could be a physical disk, a network storage volume, or any other type of storage resource available in the cluster. PVs are provisioned by an administrator, and they exist independently of any Pod that uses the volume. They have a lifecycle independent of any individual Pod that uses the PV. 6 | 7 | ### PersistentVolumeClaim (PVC): 8 | A PersistentVolumeClaim (PVC) is a request for storage by a user. It's a way for users to request the specific resources (such as storage) they need. When a user creates a PVC, Kubernetes looks for a PV that satisfies the claim's requirements (e.g., size, access mode) and binds the claim to that PV. PVCs enable developers to consume storage without needing to know the details of the underlying storage. 9 | 10 | ### Relationship between PV and PVC: 11 | PVCs consume PVs. When a PVC is created, Kubernetes searches for a suitable PV that matches the PVC's requirements (like capacity, access mode, etc.). Once a suitable PV is found, the PVC binds to that PV, and the PV becomes bound to the claim. After binding, the PV is exclusively reserved for that PVC until the PVC is deleted. 12 | 13 | ### StorageClass: 14 | A StorageClass in Kubernetes is used to define different "classes" of storage. It enables dynamic provisioning of storage volumes based on predefined policies and parameters. StorageClass allows administrators to offer different types of storage (e.g., fast SSD, slower HDD) to users with varying requirements. When a PVC does not specify a particular PV to bind to, Kubernetes uses the StorageClass to dynamically provision a matching PV. 
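Before writing the manifests below, it can help to see which classes and volumes already exist in the cluster; these are standard read-only kubectl commands:

```sh
kubectl get storageclass    # lists available StorageClasses and their provisioners
kubectl get pv,pvc          # shows PersistentVolumes and claims, including Bound status
```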
15 | 16 | Now, let's create YAML definitions for PV, PVC, and StorageClass: 17 | 18 | #### PV YAML: 19 | ```yaml 20 | apiVersion: v1 21 | kind: PersistentVolume 22 | metadata: 23 | name: my-pv 24 | spec: 25 | capacity: 26 | storage: 5Gi 27 | volumeMode: Filesystem 28 | accessModes: 29 | - ReadWriteOnce 30 | persistentVolumeReclaimPolicy: Retain 31 | storageClassName: slow 32 | hostPath: 33 | path: /mnt/data 34 | ``` 35 | 36 | #### PVC YAML: 37 | ```yaml 38 | apiVersion: v1 39 | kind: PersistentVolumeClaim 40 | metadata: 41 | name: my-pvc 42 | spec: 43 | accessModes: 44 | - ReadWriteOnce 45 | resources: 46 | requests: 47 | storage: 2Gi 48 | storageClassName: slow 49 | ``` 50 | 51 | #### StorageClass YAML: 52 | ```yaml 53 | apiVersion: storage.k8s.io/v1 54 | kind: StorageClass 55 | metadata: 56 | name: slow 57 | provisioner: kubernetes.io/hostpath 58 | parameters: 59 | type: slow 60 | ``` 61 | 62 | Explanation of components: 63 | - `apiVersion`: Specifies the API version being used. 64 | - `kind`: Defines the Kubernetes object type (PersistentVolume, PersistentVolumeClaim, StorageClass). 65 | - `metadata`: Contains metadata like name and labels. 66 | - `spec`: Specifies the desired state of the object. 67 | - `capacity`: Defines the size of the volume. 68 | - `accessModes`: Describes how the volume can be accessed (e.g., ReadWriteOnce, ReadOnlyMany, ReadWriteMany). 69 | - `persistentVolumeReclaimPolicy`: Defines what happens to the PV when it's released by the claim. 70 | - `storageClassName`: Refers to the StorageClass the PV or PVC belongs to. 71 | - `hostPath`: Specifies the local host path for the volume (used for demonstration purposes, not recommended for production). 72 | - `resources.requests.storage`: Defines the amount of storage requested by the PVC. 73 | - `provisioner`: Specifies the type of provisioner responsible for creating the volume. 74 | - `parameters`: Additional parameters specific to the provisioner, such as the type of storage. 75 | -------------------------------------------------------------------------------- /2024/K8s Storage_2.md: -------------------------------------------------------------------------------- 1 | How to create a PV YAML using a local directory as storage, followed by a PVC to claim that PV, and finally a Deployment that mounts this PV to a NGINX container. 
2 | 3 | ### PV YAML using Local Directory as Storage: 4 | ```yaml 5 | apiVersion: v1 6 | kind: PersistentVolume 7 | metadata: 8 | name: local-pv 9 | spec: 10 | capacity: 11 | storage: 5Gi 12 | volumeMode: Filesystem 13 | accessModes: 14 | - ReadWriteOnce 15 | persistentVolumeReclaimPolicy: Retain 16 | storageClassName: local-storage 17 | hostPath: 18 | path: /mnt/data 19 | ``` 20 | 21 | ### PVC to Claim the PV: 22 | ```yaml 23 | apiVersion: v1 24 | kind: PersistentVolumeClaim 25 | metadata: 26 | name: my-pvc 27 | spec: 28 | accessModes: 29 | - ReadWriteOnce 30 | resources: 31 | requests: 32 | storage: 2Gi 33 | storageClassName: local-storage 34 | ``` 35 | 36 | ### Deployment with NGINX Container Mounting the PV: 37 | ```yaml 38 | apiVersion: apps/v1 39 | kind: Deployment 40 | metadata: 41 | name: nginx-deployment 42 | spec: 43 | replicas: 1 44 | selector: 45 | matchLabels: 46 | app: nginx 47 | template: 48 | metadata: 49 | labels: 50 | app: nginx 51 | spec: 52 | volumes: 53 | - name: my-pv-storage 54 | persistentVolumeClaim: 55 | claimName: my-pvc 56 | containers: 57 | - name: nginx 58 | image: nginx 59 | volumeMounts: 60 | - mountPath: "/usr/share/nginx/html" 61 | name: my-pv-storage 62 | ``` 63 | 64 | In this example: 65 | 66 | - The PV YAML defines a PersistentVolume named "local-pv" with 5Gi storage capacity, using a local directory `/mnt/data` as storage. 67 | - The PVC YAML creates a PersistentVolumeClaim named "my-pvc" requesting 2Gi of storage and using the "local-storage" StorageClass. 68 | - The Deployment YAML creates a NGINX Deployment with one replica, which mounts the PVC claimed by "my-pvc" to the NGINX container's `/usr/share/nginx/html` directory. 69 | -------------------------------------------------------------------------------- /2024/K8s_Architecture.md: -------------------------------------------------------------------------------- 1 | Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes provides a framework for automating the deployment, scaling, and management of containerized applications, allowing developers to focus on building and running applications without worrying about the underlying infrastructure. 2 | 3 | 4 | ![image](https://github.com/discover-devops/kubernetes_workshop/assets/53135263/8fec9558-a887-4966-9b64-857c00a90d99) 5 | 6 | 7 | 8 | **Key Concepts:** 9 | 10 | 1. **Node:** A node is a physical or virtual machine that runs containerized applications. It could be a VM (Virtual Machine) or a physical server. 11 | 12 | 2. **Pod:** The smallest and simplest unit in the Kubernetes object model. A pod represents a single instance of a running process in a cluster and can contain one or more containers. 13 | 14 | 3. **Container:** A lightweight, standalone, and executable software package that includes everything needed to run a piece of software, including the code, runtime, libraries, and dependencies. 15 | 16 | 4. **Deployment:** A resource object in Kubernetes that provides declarative updates to applications. A deployment allows you to describe an application’s life cycle, such as which images to use for the app, the number of pod replicas, and the way to update them. 17 | 18 | 5. **Service:** A Kubernetes resource that defines a logical set of pods and a policy by which to access them. 
Services enable network communication between different sets of pods.

6. **Namespace:** A way to divide cluster resources between multiple users (via resource quota), teams, or projects. It's useful when there are multiple users or teams sharing a Kubernetes cluster.

**Kubernetes Architecture:**

Kubernetes follows a master-worker architecture, consisting of the following components:

1. **Master Node:**
   - **API Server:** Serves as the entry point for the Kubernetes control plane. It validates and processes requests, and then updates the corresponding objects like pods, services, etc.
   - **Controller Manager:** Ensures that the desired state of the cluster matches the actual state by controlling various controllers like the node controller, replication controller, etc.
   - **Scheduler:** Assigns nodes to newly created pods based on resource requirements and other constraints.
   - **etcd:** A distributed key-value store that stores the configuration data of the cluster, representing the current state of the entire system.

2. **Worker Node (Minion):**
   - **Kubelet:** An agent that runs on each node and is responsible for ensuring that the containers are running in a pod.
   - **Container Runtime:** The software responsible for running containers. Kubernetes supports various container runtimes, such as Docker, containerd, and others.
   - **Kube Proxy:** Maintains network rules on nodes and performs connection forwarding.

3. **Add-ons:**
   - Additional components and features that enhance the Kubernetes cluster, such as DNS for service discovery, a dashboard for web-based management, and others.

In summary, Kubernetes orchestrates the deployment and management of containerized applications, providing a scalable and resilient infrastructure for running modern, cloud-native applications. The master node controls and manages the overall state of the cluster, while the worker nodes execute and run the containers.

--------------------------------------------------------------------------------
/2024/K8s_log.md:
--------------------------------------------------------------------------------

When a Kubernetes pod fails to exit and remains in an intermediate state indefinitely, it's essential to diagnose the issue by checking the logs. You can follow these steps to view the logs of your pod:

1. **Identify the Pod Name**: Use the following command to list all pods in the namespace where your pod is running:

   ```
   kubectl get pods
   ```

   Identify the name of the pod that is stuck in an intermediate state.

2. **View Logs**: Once you have identified the pod name, you can view its logs using the following command:

   ```
   kubectl logs <pod-name>
   ```

   Replace `<pod-name>` with the name of your pod.

   If your pod has multiple containers, you can specify the container name as well:

   ```
   kubectl logs <pod-name> -c <container-name>
   ```

3. **Tail Logs**: To continuously stream the logs and see real-time updates, you can use the `-f` flag:

   ```
   kubectl logs -f <pod-name>
   ```

4. **Check Previous Logs**: If the pod has restarted, or if you suspect that the issue occurred in a previous instance of the pod, you can view logs from the previous instance using the `-p` (previous) flag:

   ```
   kubectl logs -p <pod-name>
   ```

5. **Check for Events**: Additionally, you can check for any events related to the pod using:

   ```
   kubectl describe pod <pod-name>
   ```

   This command will provide detailed information about the pod, including any events that occurred during its lifecycle.
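A few standard `kubectl logs` flags are worth keeping at hand alongside the steps above:

```
kubectl logs <pod-name> --previous              # logs from the previous container instance
kubectl logs <pod-name> --tail=50               # only the last 50 lines
kubectl logs <pod-name> --since=1h              # only entries from the last hour
kubectl logs -f <pod-name> -c <container-name>  # stream one container's logs
```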
By examining the logs and events, you should be able to identify the cause of the pod's failure to exit. Common issues include application errors, resource constraints, or misconfigurations.

If you're unable to execute commands within the pod, or if the pod itself is unresponsive, you might need to troubleshoot the issue from outside the pod. Here are some steps you can take:

1. **Check Pod Status**: First, verify the status of the pod using the following command:

   ```
   kubectl get pods
   ```

   Ensure that the pod is in a state that allows you to access it. If it's stuck in a Pending or ContainerCreating state, there may be underlying issues preventing it from starting properly.

2. **Check Node Status**: Verify the status of the node where the pod is scheduled to run:

   ```
   kubectl get nodes
   ```

   Ensure that the node is in a Ready state and has sufficient resources available.

3. **View Pod Description**: Get detailed information about the pod, including events and conditions, using:

   ```
   kubectl describe pod <pod-name>
   ```

   Look for any errors or warnings that might indicate why the pod is unresponsive.

4. **Access Container Logs from Node**: If you're unable to access the logs directly from within the pod, you can try accessing the container logs on the node where the pod is running. You can typically find container logs in the `/var/log/containers` directory on the node.

5. **Check Kubernetes Components**: Ensure that all Kubernetes components (API server, controller manager, scheduler, etc.) are running correctly. Any issues with these components could affect the overall stability of your cluster.

6. **Restart Pod**: If all else fails and the pod remains unresponsive, you can try deleting and recreating the pod:

   ```
   kubectl delete pod <pod-name>
   ```

   After deleting the pod, Kubernetes will attempt to create a new instance of the pod based on its configuration.

If none of these steps resolve the issue, you may need to investigate further by checking system-level logs on the node, examining cluster-wide configuration settings, or seeking assistance from your cluster administrator or cloud provider support.
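When a single pod's events are inconclusive, a time-ordered, cluster-wide event listing often surfaces scheduling or kubelet problems quickly; this is a standard `kubectl` query:

```
kubectl get events --all-namespaces --sort-by=.metadata.creationTimestamp
```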
--------------------------------------------------------------------------------
/2024/Labels and Selectors_1.md:
--------------------------------------------------------------------------------

In Kubernetes, labels are key-value pairs that are attached to objects, such as pods, services, deployments, and more. Labels are used to organize and select subsets of objects, allowing users to categorize resources in a way that makes sense for their application or infrastructure.

Selectors, on the other hand, are expressions that allow users to specify criteria for selecting objects based on their labels. Selectors are used in various Kubernetes components, such as services, deployments, replica sets, and more, to specify which objects they should target or operate on.

For example, you might label your pods with key-value pairs like "app=frontend" or "tier=backend". Then, you could use selectors to target pods with specific labels. This allows for more flexible and dynamic management of resources within Kubernetes clusters.

Let's go through a step-by-step example to understand labels and selectors in Kubernetes.

Step 1: Labeling Pods
First, let's create a simple pod and label it. We'll use a YAML file to define the pod and apply labels to it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: frontend
    environment: production
spec:
  containers:
  - name: nginx
    image: nginx:latest
```

Save the above YAML content to a file called `mypod.yaml`. Then, apply it to your Kubernetes cluster using the `kubectl apply` command:

```
kubectl apply -f mypod.yaml
```

Now, you have a pod named `mypod` with two labels: `app=frontend` and `environment=production`.

Step 2: Selecting Pods with Labels
Now, let's say you want to select pods with specific labels. You can use selectors to achieve this. For example, if you want to select pods with the label `app=frontend`, you can create a service that targets pods with that label.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```

Save the above YAML content to a file called `frontend-service.yaml`. Then, apply it to your Kubernetes cluster:

```
kubectl apply -f frontend-service.yaml
```

This service will select pods with the label `app=frontend` and expose them internally within the Kubernetes cluster.

Step 3: Using Selectors in Deployments
Selectors are also commonly used in deployments to specify which pods the deployment should manage. Here's an example of a deployment YAML file with a selector:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
```

In this deployment, the `selector` field specifies that it should manage pods with the label `app=frontend`. The `template` section specifies the pod template to be used for creating new pods.

By using labels and selectors effectively, you can organize and manage your Kubernetes resources in a more flexible and scalable way.

Below are some common `kubectl` commands related to labels and selectors in Kubernetes, along with examples:

1. **Labeling Resources:**

   Use the `kubectl label` command to attach labels to Kubernetes resources. Here's the syntax:

   ```
   kubectl label <resource-type> <resource-name> <key>=<value>
   ```

   Example: Labeling a pod named `mypod` with the label `app=frontend`.

   ```
   kubectl label pod mypod app=frontend
   ```

2. **Listing Resources with Labels:**

   Use the `kubectl get` command with the `--show-labels` flag to list resources along with their labels.

   ```
   kubectl get <resource-type> --show-labels
   ```

   Example: List all pods along with their labels.

   ```
   kubectl get pods --show-labels
   ```
3. **Selecting Resources using Label Selectors:**

   Use the `kubectl get` command with label selectors to filter resources based on their labels.

   ```
   kubectl get <resource-type> -l <key>=<value>
   ```

   Example: Get all pods with the label `app=frontend`.

   ```
   kubectl get pods -l app=frontend
   ```

4. **Updating Labels:**

   Use the `kubectl label` command with the `--overwrite` flag to update existing labels on resources.

   ```
   kubectl label <resource-type> <resource-name> <key>=<new-value> --overwrite
   ```

   Example: Update the label of pod `mypod` to `environment=staging`.

   ```
   kubectl label pod mypod environment=staging --overwrite
   ```

5. **Removing Labels:**

   Use the `kubectl label` command with a `-` appended after the label key to remove a label from a resource.

   ```
   kubectl label <resource-type> <resource-name> <key>-
   ```

   Example: Remove the label `environment` from pod `mypod`.

   ```
   kubectl label pod mypod environment-
   ```

These are some of the commonly used `kubectl` commands for working with labels and selectors in Kubernetes. Labels and selectors provide powerful mechanisms for organizing, selecting, and managing resources within Kubernetes clusters.

--------------------------------------------------------------------------------
/2024/Labels and Selectors_2.md:
--------------------------------------------------------------------------------

A step-by-step tutorial on working with labels, selectors, and annotations in Kubernetes.

### Step 1: Creating a Pod with Labels

Create a YAML file named `mypod.yaml` with the following content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    app: nginx
    env: production
spec:
  containers:
  - name: my-container
    image: nginx
```

Apply the YAML file to create the pod:

```bash
kubectl apply -f mypod.yaml
```

### Step 2: Adding Labels to a Running Pod

Create a YAML file named `mypod1.yaml` with the following content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod1
spec:
  containers:
  - name: my1-container
    image: nginx
```

Apply the YAML file to create the pod:

```bash
kubectl apply -f mypod1.yaml
```

Now, add labels to the running pod:

```bash
kubectl label pod mypod1 app=nginx
```

### Step 3: Modifying and Deleting Existing Labels

Create a YAML file named `pod3.yaml` with the following content:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod3
  labels:
    app: nginx
spec:
  containers:
  - name: mt-container3
    image: nginx
```

Apply the YAML file to create the pod:

```bash
kubectl apply -f pod3.yaml
```

Now, overwrite the existing label:

```bash
kubectl label --overwrite pod pod3 app=nginx-demo
```

### Step 4: Selecting Kubernetes Objects Using Label Selectors

Create three pods with different labels:

- `pod-frontend-production.yaml`
- `pod-backend-production.yaml`
- `pod-frontend-staging.yaml`

Ensure each pod YAML contains the appropriate labels; an example manifest is sketched below.
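For illustration, `pod-frontend-production.yaml` might look like the following; the file contents are not given in the original steps, so the labels here are assumptions inferred from the selectors used afterwards:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-frontend-production
  labels:
    environment: production
    role: frontend
spec:
  containers:
  - name: nginx
    image: nginx:latest
```

The other two manifests would follow the same pattern, with `role: backend` and `environment: staging` adjusted accordingly.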
92 | 93 | ```bash 94 | kubectl create -f pod-frontend-production.yaml 95 | kubectl create -f pod-backend-production.yaml 96 | kubectl create -f pod-frontend-staging.yaml 97 | ``` 98 | 99 | Now, you can select pods based on label selectors: 100 | 101 | ```bash 102 | kubectl get pods -l environment=production 103 | kubectl get pods -l role=frontend,environment=staging 104 | ``` 105 | 106 | ### Step 5: Working with Annotations 107 | 108 | Create a pod YAML file with annotations: 109 | 110 | ```yaml 111 | apiVersion: v1 112 | kind: Pod 113 | metadata: 114 | name: pod-with-annotations 115 | annotations: 116 | commit-SHA: d6s9shb82365yg4ygd782889us28377gf6 117 | JIRA-issue: "https://your-jira-link.com/issue/ABC-1234" 118 | timestamp: "123456789" 119 | owner: "https://internal-link.to.website/username" 120 | spec: 121 | containers: 122 | - name: application-container 123 | image: nginx 124 | ``` 125 | 126 | Apply the YAML file to create the pod: 127 | 128 | ```bash 129 | kubectl apply -f pod-with-annotations.yaml 130 | ``` 131 | 132 | Now, you can view the annotations of the pod: 133 | 134 | ```bash 135 | kubectl describe pod pod-with-annotations 136 | ``` 137 | 138 | That's it! You've created, labeled, selected, modified, and annotated Kubernetes pods using `kubectl` commands. 139 | -------------------------------------------------------------------------------- /2024/Labels_and_Annotations.md: -------------------------------------------------------------------------------- 1 | 2 | # Labels and Annotations 3 | 4 | ## Metadata in Kubernetes 5 | 6 | Metadata is essential for managing resources in a cluster, especially when dealing with potentially thousands of resources. Labels and annotations are two concepts used to add metadata to Kubernetes objects, such as pods. 7 | 8 | ## Labels 9 | 10 | Labels are key-value pairs that serve as metadata for Kubernetes objects. They can be attached to objects like pods at creation or modified during runtime. Each key must be unique for an object. 11 | 12 | ### Example of Labels in YAML: 13 | 14 | ```yaml 15 | metadata: 16 | labels: 17 | key1: value1 18 | key2: value2 19 | ``` 20 | 21 | ### Why Labels? 22 | 23 | Labels are used for organizing objects and filtering subsets based on those labels. They are helpful for running specific pods on selected nodes. Use cases include organizing objects based on teams or organizations within a company. 
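As a concrete sketch of the "selected nodes" use case mentioned above, a pod can declare a `nodeSelector` that matches labels on nodes; the `disktype: ssd` node label here is an assumed example, not something the cluster defines by default:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-on-ssd
  labels:
    app: nginx
spec:
  nodeSelector:
    disktype: ssd        # pod is scheduled only onto nodes labeled disktype=ssd
  containers:
  - name: container1
    image: nginx
```

A node is given the matching label with `kubectl label node <node-name> disktype=ssd`.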
### Creating a Pod with Labels:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  labels:
    app: nginx
    team: team_a
spec:
  containers:
  - name: container1
    image: nginx
```

Create the pod with:

```bash
kubectl create -f pod.yaml
kubectl get pod pod1
kubectl describe pod pod1
```

### Adding Labels to a Running Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  containers:
  - name: container2
    image: nginx
```

Create the pod with:

```bash
kubectl create -f pod2.yaml
kubectl get pod pod2
kubectl describe pod pod2
kubectl label pod pod2 app=prod
kubectl describe pod pod2
kubectl label pod pod2 team=team_A type=test
kubectl describe pod pod2
```

### Modifying and Deleting Labels for a Running Pod:

Modify the label:

```bash
kubectl label --overwrite pod pod2 app=nginx-application
kubectl describe pod pod2
```

Delete the label:

```bash
kubectl label pod pod2 app-
kubectl describe pod pod2
```

### Selecting Kubernetes Objects Using Label Selectors:

Use label selectors to group objects:

```bash
kubectl get pods -l {label_selector}
```

Examples:

```bash
kubectl get pods -l environment=prod
kubectl get pods -l team!=devops
kubectl get pods -l environment=prod,team!=devops
```

### Selecting Pods Using Equality-Based Label Selectors:

Examples in YAML:

```yaml
# pod3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-production
  labels:
    environment: production
    role: frontend
spec:
  containers:
  - name: application-container
    image: nginx
```

```yaml
# pod4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: backend-production
  labels:
    environment: production
    role: backend
spec:
  containers:
  - name: application-container
    image: nginx
```

```yaml
# pod5.yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-staging
  labels:
    environment: staging
    role: frontend
spec:
  containers:
  - name: application-container
    image: nginx
```

Create pods:

```bash
kubectl create -f pod3.yaml
kubectl create -f pod4.yaml
kubectl create -f pod5.yaml
kubectl get pod backend-production --show-labels
kubectl get pod frontend-staging --show-labels
kubectl get pods -l environment=production
kubectl get pods -l role=frontend,environment=staging
```

## Annotations

Labels have constraints on values, such as character limits and alphanumeric requirements. Annotations have fewer constraints and can store unstructured information related to Kubernetes objects.
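As a small illustration, the pods from the earlier sections could carry free-form annotations alongside their labels; the annotation keys and values below are invented examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod6
  labels:
    app: nginx
  annotations:
    commit-sha: "d6s9shb82365yg4ygd782889us28377gf6"   # values may exceed label length limits
    owner: "team_a"
    description: "Free-form text that would not be valid as a label value"
spec:
  containers:
  - name: container1
    image: nginx
```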
--------------------------------------------------------------------------------
/2024/Monolith_VS_Microservices.md:
--------------------------------------------------------------------------------

A summary of monoliths versus microservices, formatted as bullet points for easy reference:

### Monolith:

- **Definition**: A single-tier software application where the user interface and data access code are combined into a single program from a single platform.
- **Examples**: Single Java JAR file, COBOL program.
- **Advantages**:
  - Simplicity, especially for small projects.
  - Resource efficiency at small scale.
- **Challenges**:
  - Lack of modularity as the project grows.
  - Difficulty in enforcing scalability.
  - All-or-nothing deployment leading to long release cycles.
- **Integration with APIs**: A monolith can be fronted by an API Gateway or load balancer to enable API usage.

### Scaling Challenges with Monolith:

- Running on a virtual machine necessitates sizing for the entire application.
- Inefficient scaling, as the entire monolith must be scaled even when only specific components need it.
- Paying for resources not fully utilized.

### Microservices:

- **Definition**: An architecture where each component, or service, is independent, scalable, and deployed separately.
- **Characteristics**:
  - Independence of each service.
  - Scalability irrespective of others.
  - Different governance and security features.
  - Independent deployment.
- **Polyglot**: Different microservices can be written in different programming languages.
- **Example**: `store/get`, `store/post`, and `store/delete` implemented as separate microservices.
- **Advantages**:
  - Faster DevOps with independent deployments.
  - Testing and maintenance of services independent of each other.
- **Flexibility**: Can utilize shared databases but should optimize for increased connections.
- **Implementation**:
  - Each microservice has its own codebase and runs on separate virtual machines.
  - Services can communicate via APIs or message brokers.
  - Can be implemented with different programming languages for each service.
- **Key Takeaways**:
  - Independence, scalability, and functional separation are key.
  - Not necessary to fulfill every microservice characteristic; flexibility is important.

This reference document should aid in preparing for system design interviews, particularly for Cloud Solution Architect roles.

--------------------------------------------------------------------------------
/2024/NameSPace, Role, SA example.md:
--------------------------------------------------------------------------------

### Kubernetes Namespace

A Kubernetes Namespace is a way to divide cluster resources between multiple users. Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces provide a mechanism for isolating groups of resources within a single cluster.

### YAML to Create Namespaces

1.
**Create Namespaces `test` and `prod`** 8 | 9 | ```yaml 10 | # test namespace 11 | apiVersion: v1 12 | kind: Namespace 13 | metadata: 14 | name: test 15 | 16 | # prod namespace 17 | apiVersion: v1 18 | kind: Namespace 19 | metadata: 20 | name: prod 21 | ``` 22 | 23 | ### Create Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding for the Namespaces 24 | 25 | 1. **Service Account** 26 | 27 | ```yaml 28 | # Service Account for test namespace 29 | apiVersion: v1 30 | kind: ServiceAccount 31 | metadata: 32 | name: test-service-account 33 | namespace: test 34 | 35 | # Service Account for prod namespace 36 | apiVersion: v1 37 | kind: ServiceAccount 38 | metadata: 39 | name: prod-service-account 40 | namespace: prod 41 | ``` 42 | 43 | 2. **Role** 44 | 45 | ```yaml 46 | # Role for test namespace 47 | apiVersion: rbac.authorization.k8s.io/v1 48 | kind: Role 49 | metadata: 50 | namespace: test 51 | name: test-role 52 | rules: 53 | - apiGroups: [""] 54 | resources: ["pods"] 55 | verbs: ["get", "list", "create", "delete"] 56 | 57 | # Role for prod namespace 58 | apiVersion: rbac.authorization.k8s.io/v1 59 | kind: Role 60 | metadata: 61 | namespace: prod 62 | name: prod-role 63 | rules: 64 | - apiGroups: [""] 65 | resources: ["pods"] 66 | verbs: ["get", "list", "create", "delete"] 67 | ``` 68 | 69 | 3. **RoleBinding** 70 | 71 | ```yaml 72 | # RoleBinding for test namespace 73 | apiVersion: rbac.authorization.k8s.io/v1 74 | kind: RoleBinding 75 | metadata: 76 | name: test-rolebinding 77 | namespace: test 78 | subjects: 79 | - kind: ServiceAccount 80 | name: test-service-account 81 | namespace: test 82 | roleRef: 83 | kind: Role 84 | name: test-role 85 | apiGroup: rbac.authorization.k8s.io 86 | 87 | # RoleBinding for prod namespace 88 | apiVersion: rbac.authorization.k8s.io/v1 89 | kind: RoleBinding 90 | metadata: 91 | name: prod-rolebinding 92 | namespace: prod 93 | subjects: 94 | - kind: ServiceAccount 95 | name: prod-service-account 96 | namespace: prod 97 | roleRef: 98 | kind: Role 99 | name: prod-role 100 | apiGroup: rbac.authorization.k8s.io 101 | ``` 102 | 103 | 4. **ClusterRole** 104 | 105 | ```yaml 106 | apiVersion: rbac.authorization.k8s.io/v1 107 | kind: ClusterRole 108 | metadata: 109 | name: cluster-role 110 | rules: 111 | - apiGroups: [""] 112 | resources: ["pods"] 113 | verbs: ["get", "list", "create", "delete"] 114 | ``` 115 | 116 | 5. **ClusterRoleBinding** 117 | 118 | ```yaml 119 | apiVersion: rbac.authorization.k8s.io/v1 120 | kind: ClusterRoleBinding 121 | metadata: 122 | name: cluster-rolebinding 123 | subjects: 124 | - kind: ServiceAccount 125 | name: test-service-account 126 | namespace: test 127 | - kind: ServiceAccount 128 | name: prod-service-account 129 | namespace: prod 130 | roleRef: 131 | kind: ClusterRole 132 | name: cluster-role 133 | apiGroup: rbac.authorization.k8s.io 134 | ``` 135 | 136 | ### Deployment for Each Namespace 137 | 138 | 1. **Deployment in `test` Namespace** 139 | 140 | ```yaml 141 | apiVersion: apps/v1 142 | kind: Deployment 143 | metadata: 144 | name: test-deployment 145 | namespace: test 146 | spec: 147 | replicas: 2 148 | selector: 149 | matchLabels: 150 | app: test-app 151 | template: 152 | metadata: 153 | labels: 154 | app: test-app 155 | spec: 156 | serviceAccountName: test-service-account 157 | containers: 158 | - name: nginx 159 | image: nginx 160 | ports: 161 | - containerPort: 80 162 | ``` 163 | 164 | 2. 
**Deployment in `prod` Namespace** 165 | 166 | ```yaml 167 | apiVersion: apps/v1 168 | kind: Deployment 169 | metadata: 170 | name: prod-deployment 171 | namespace: prod 172 | spec: 173 | replicas: 2 174 | selector: 175 | matchLabels: 176 | app: prod-app 177 | template: 178 | metadata: 179 | labels: 180 | app: prod-app 181 | spec: 182 | serviceAccountName: prod-service-account 183 | containers: 184 | - name: nginx 185 | image: nginx 186 | ports: 187 | - containerPort: 80 188 | ``` 189 | 190 | ### Applying the YAML files 191 | 192 | To apply these YAML configurations, save each snippet to a file and then use the `kubectl apply` command: 193 | 194 | ```sh 195 | kubectl apply -f namespace-test.yaml 196 | kubectl apply -f namespace-prod.yaml 197 | kubectl apply -f serviceaccount-test.yaml 198 | kubectl apply -f serviceaccount-prod.yaml 199 | kubectl apply -f role-test.yaml 200 | kubectl apply -f role-prod.yaml 201 | kubectl apply -f rolebinding-test.yaml 202 | kubectl apply -f rolebinding-prod.yaml 203 | kubectl apply -f clusterrole.yaml 204 | kubectl apply -f clusterrolebinding.yaml 205 | kubectl apply -f deployment-test.yaml 206 | kubectl apply -f deployment-prod.yaml 207 | ``` 208 | 209 | ### Example of Roles and ClusterRoles in Action 210 | 211 | #### Roles in Namespaces 212 | - A Role (e.g., `test-role` or `prod-role`) defines what actions can be performed within a specific namespace. 213 | - The RoleBinding binds this Role to a Service Account, ensuring the Service Account has the necessary permissions within that namespace. 214 | 215 | Example: The `test-service-account` in the `test` namespace can `get`, `list`, `create`, and `delete` pods due to its `RoleBinding` to `test-role`. 216 | 217 | #### ClusterRoles Across Namespaces 218 | - A ClusterRole provides similar permissions but across all namespaces or at the cluster level. 219 | - The ClusterRoleBinding binds the ClusterRole to a Service Account, granting it permissions cluster-wide. 220 | 221 | Example: The `test-service-account` and `prod-service-account` can perform cluster-wide actions like `get`, `list`, `create`, and `delete` pods due to their `ClusterRoleBinding` to `cluster-role`. 222 | 223 | By following this step-by-step approach, you can create namespaces, service accounts, roles, role bindings, cluster roles, and cluster role bindings, and deploy applications with the appropriate permissions within a Kubernetes cluster. 224 | -------------------------------------------------------------------------------- /2024/NameSpace.md: -------------------------------------------------------------------------------- 1 | # Understanding Kubernetes Namespaces Tutorial 2 | 3 | ## Introduction to Kubernetes Namespaces 4 | 5 | Kubernetes namespaces are a way to create virtual clusters within a physical cluster. They allow you to partition and organize resources in a Kubernetes cluster, providing a scope for names and preventing naming conflicts. Namespaces are beneficial for multi-tenancy, isolation, and resource management. 6 | 7 | ## Why do we need namespaces? 8 | 9 | 1. **Isolation:** Namespaces provide a way to isolate resources. Each namespace has its own set of resources, preventing naming conflicts and resource collisions. 10 | 11 | 2. **Resource Management:** Namespaces help in organizing and managing resources efficiently. They allow for better control and understanding of the various components within a cluster. 12 | 13 | 3. 
**Multi-Tenancy:** In a multi-tenant environment, namespaces allow different teams or projects to use the same cluster without interfering with each other.

## List available namespaces

To list the available namespaces in your Kubernetes cluster, use the following command:

```bash
kubectl get namespaces
```

## Creating a namespace

### Method-1: Using YAML file

Create a YAML file, e.g., `mynamespace.yaml`, with the following content:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: mynamespace
```

Apply the namespace using the following command:

```bash
kubectl apply -f mynamespace.yaml
```

### Method-2: Using kubectl command

Run the following command to create a namespace directly:

```bash
kubectl create namespace mynamespace
```

## Get details of a namespace

To get details about a specific namespace, use the following command:

```bash
kubectl get namespace mynamespace
```

## Create resource objects in other namespaces

### Method-1: Using YAML file

Specify the namespace in your resource YAML file, for example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespace
spec:
  containers:
  - name: nginx
    image: nginx
```

Apply the resource using:

```bash
kubectl apply -f mypod.yaml
```

### Method-2: Using kubectl command

Use the `--namespace` flag with an imperative command such as `kubectl run`:

```bash
kubectl run mypod --image=nginx --namespace=mynamespace
```

## Terminating namespaces

### Deleting a Pod using name

```bash
kubectl delete pod mypod --namespace=mynamespace
```

### Deleting pods by deleting the whole namespace

```bash
kubectl delete namespace mynamespace
```

### Deleting all pods in a namespace, while keeping the namespace

```bash
kubectl delete pods --all --namespace=mynamespace
```

### Delete all resources in a namespace

```bash
kubectl delete all --all --namespace=mynamespace
```

## Conclusion

Kubernetes namespaces are powerful tools for managing and organizing resources in a cluster. They provide isolation, resource management, and support for multi-tenancy. Understanding how to create, use, and delete namespaces is crucial for effective Kubernetes cluster administration.

--------------------------------------------------------------------------------
/2024/Napspace, RBAC, ROLES LAB.md:
--------------------------------------------------------------------------------

To set up and test the Kubernetes Service Accounts (SA), Roles, RoleBindings, ClusterRole, and ClusterRoleBinding, together with the deployments that use them, follow these steps:

### Step-by-Step Guide to Setup and Test

#### 1. Apply the Namespace Configurations

Create the `test` and `prod` namespaces.
8 | 9 | **namespace-test.yaml** 10 | 11 | ```yaml 12 | apiVersion: v1 13 | kind: Namespace 14 | metadata: 15 | name: test 16 | ``` 17 | 18 | **namespace-prod.yaml** 19 | 20 | ```yaml 21 | apiVersion: v1 22 | kind: Namespace 23 | metadata: 24 | name: prod 25 | ``` 26 | 27 | Apply the namespaces: 28 | 29 | ```sh 30 | kubectl apply -f namespace-test.yaml 31 | kubectl apply -f namespace-prod.yaml 32 | ``` 33 | 34 | #### 2. Create Service Accounts 35 | 36 | Create service accounts for both namespaces. 37 | 38 | **serviceaccount-test.yaml** 39 | 40 | ```yaml 41 | apiVersion: v1 42 | kind: ServiceAccount 43 | metadata: 44 | name: test-service-account 45 | namespace: test 46 | ``` 47 | 48 | **serviceaccount-prod.yaml** 49 | 50 | ```yaml 51 | apiVersion: v1 52 | kind: ServiceAccount 53 | metadata: 54 | name: prod-service-account 55 | namespace: prod 56 | ``` 57 | 58 | Apply the service accounts: 59 | 60 | ```sh 61 | kubectl apply -f serviceaccount-test.yaml 62 | kubectl apply -f serviceaccount-prod.yaml 63 | ``` 64 | 65 | #### 3. Create Roles 66 | 67 | Create roles for the `test` and `prod` namespaces. 68 | 69 | **role-test.yaml** 70 | 71 | ```yaml 72 | apiVersion: rbac.authorization.k8s.io/v1 73 | kind: Role 74 | metadata: 75 | namespace: test 76 | name: test-role 77 | rules: 78 | - apiGroups: [""] 79 | resources: ["pods"] 80 | verbs: ["get", "list", "create", "delete"] 81 | ``` 82 | 83 | **role-prod.yaml** 84 | 85 | ```yaml 86 | apiVersion: rbac.authorization.k8s.io/v1 87 | kind: Role 88 | metadata: 89 | namespace: prod 90 | name: prod-role 91 | rules: 92 | - apiGroups: [""] 93 | resources: ["pods"] 94 | verbs: ["get", "list", "create", "delete"] 95 | ``` 96 | 97 | Apply the roles: 98 | 99 | ```sh 100 | kubectl apply -f role-test.yaml 101 | kubectl apply -f role-prod.yaml 102 | ``` 103 | 104 | #### 4. Create RoleBindings 105 | 106 | Bind the roles to the service accounts in each namespace. 107 | 108 | **rolebinding-test.yaml** 109 | 110 | ```yaml 111 | apiVersion: rbac.authorization.k8s.io/v1 112 | kind: RoleBinding 113 | metadata: 114 | name: test-rolebinding 115 | namespace: test 116 | subjects: 117 | - kind: ServiceAccount 118 | name: test-service-account 119 | namespace: test 120 | roleRef: 121 | kind: Role 122 | name: test-role 123 | apiGroup: rbac.authorization.k8s.io 124 | ``` 125 | 126 | **rolebinding-prod.yaml** 127 | 128 | ```yaml 129 | apiVersion: rbac.authorization.k8s.io/v1 130 | kind: RoleBinding 131 | metadata: 132 | name: prod-rolebinding 133 | namespace: prod 134 | subjects: 135 | - kind: ServiceAccount 136 | name: prod-service-account 137 | namespace: prod 138 | roleRef: 139 | kind: Role 140 | name: prod-role 141 | apiGroup: rbac.authorization.k8s.io 142 | ``` 143 | 144 | Apply the role bindings: 145 | 146 | ```sh 147 | kubectl apply -f rolebinding-test.yaml 148 | kubectl apply -f rolebinding-prod.yaml 149 | ``` 150 | 151 | #### 5. Create ClusterRole 152 | 153 | Create a cluster-wide role. 154 | 155 | **clusterrole.yaml** 156 | 157 | ```yaml 158 | apiVersion: rbac.authorization.k8s.io/v1 159 | kind: ClusterRole 160 | metadata: 161 | name: cluster-role 162 | rules: 163 | - apiGroups: [""] 164 | resources: ["pods"] 165 | verbs: ["get", "list", "create", "delete"] 166 | ``` 167 | 168 | Apply the cluster role: 169 | 170 | ```sh 171 | kubectl apply -f clusterrole.yaml 172 | ``` 173 | 174 | #### 6. Create ClusterRoleBinding 175 | 176 | Bind the cluster role to both service accounts. 
177 | 178 | **clusterrolebinding.yaml** 179 | 180 | ```yaml 181 | apiVersion: rbac.authorization.k8s.io/v1 182 | kind: ClusterRoleBinding 183 | metadata: 184 | name: cluster-rolebinding 185 | subjects: 186 | - kind: ServiceAccount 187 | name: test-service-account 188 | namespace: test 189 | - kind: ServiceAccount 190 | name: prod-service-account 191 | namespace: prod 192 | roleRef: 193 | kind: ClusterRole 194 | name: cluster-role 195 | apiGroup: rbac.authorization.k8s.io 196 | ``` 197 | 198 | Apply the cluster role binding: 199 | 200 | ```sh 201 | kubectl apply -f clusterrolebinding.yaml 202 | ``` 203 | 204 | #### 7. Create Deployments 205 | 206 | Deploy applications in each namespace. 207 | 208 | **deployment-test.yaml** 209 | 210 | ```yaml 211 | apiVersion: apps/v1 212 | kind: Deployment 213 | metadata: 214 | name: test-deployment 215 | namespace: test 216 | spec: 217 | replicas: 2 218 | selector: 219 | matchLabels: 220 | app: test-app 221 | template: 222 | metadata: 223 | labels: 224 | app: test-app 225 | spec: 226 | serviceAccountName: test-service-account 227 | containers: 228 | - name: nginx 229 | image: nginx 230 | ports: 231 | - containerPort: 80 232 | ``` 233 | 234 | **deployment-prod.yaml** 235 | 236 | ```yaml 237 | apiVersion: apps/v1 238 | kind: Deployment 239 | metadata: 240 | name: prod-deployment 241 | namespace: prod 242 | spec: 243 | replicas: 2 244 | selector: 245 | matchLabels: 246 | app: prod-app 247 | template: 248 | metadata: 249 | labels: 250 | app: prod-app 251 | spec: 252 | serviceAccountName: prod-service-account 253 | containers: 254 | - name: nginx 255 | image: nginx 256 | ports: 257 | - containerPort: 80 258 | ``` 259 | 260 | Apply the deployments: 261 | 262 | ```sh 263 | kubectl apply -f deployment-test.yaml 264 | kubectl apply -f deployment-prod.yaml 265 | ``` 266 | 267 | ### Test the Access and Deployment 268 | 269 | To test access as the Service Account, you can use the `kubectl run` command with the `--as` flag or create a pod that uses the service account. 270 | 271 | #### Using `kubectl run` with `--as` 272 | 273 | This method is straightforward for testing. 274 | 275 | ```sh 276 | kubectl auth can-i create pods --as=system:serviceaccount:test:test-service-account -n test 277 | kubectl auth can-i delete pods --as=system:serviceaccount:prod:prod-service-account -n prod 278 | ``` 279 | 280 | #### Creating a Pod to Test the Service Account 281 | 282 | 1. **Create a test pod spec file**: 283 | 284 | **test-pod.yaml** 285 | 286 | ```yaml 287 | apiVersion: v1 288 | kind: Pod 289 | metadata: 290 | name: test-pod 291 | namespace: test 292 | spec: 293 | serviceAccountName: test-service-account 294 | containers: 295 | - name: busybox 296 | image: busybox 297 | command: ["sleep", "3600"] 298 | ``` 299 | 300 | 2. **Create the pod**: 301 | 302 | ```sh 303 | kubectl apply -f test-pod.yaml 304 | ``` 305 | 306 | 3. **Exec into the pod to test permissions**: 307 | 308 | ```sh 309 | kubectl exec -it test-pod -n test -- sh 310 | ``` 311 | 312 | 4. **Inside the pod, try performing actions to verify access**: 313 | 314 | ```sh 315 | # List pods (should work) 316 | kubectl get pods -n test 317 | 318 | # Create a pod (should work) 319 | kubectl run nginx --image=nginx -n test 320 | 321 | # Delete a pod (should work) 322 | kubectl delete pod nginx -n test 323 | ``` 324 | 325 | By following these steps, you can set up and test the Kubernetes Service Accounts, roles, role bindings, cluster roles, and cluster role bindings, ensuring that your configurations work as expected. 
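As a complementary negative check, it is worth confirming that the service accounts are denied anything the Role and ClusterRole do not grant. Since both grant access only to the `pods` resource, checks against other resources should return `no`; this is a suggested sketch, not part of the original lab:

```sh
# Both commands should print "no": neither role covers deployments or services
kubectl auth can-i list deployments --as=system:serviceaccount:test:test-service-account -n test
kubectl auth can-i create services --as=system:serviceaccount:prod:prod-service-account -n prod
```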
326 | -------------------------------------------------------------------------------- /2024/Nginx deployment and a Cluster IP service.md: -------------------------------------------------------------------------------- 1 | To create an Nginx deployment and a Cluster IP service in Kubernetes, follow the steps outlined below. 2 | 3 | ### Step 1: Set up your Kubernetes environment 4 | 5 | Ensure you have a running Kubernetes cluster and `kubectl` configured to interact with your cluster. 6 | 7 | ### Step 2: Create the Nginx Deployment YAML 8 | 9 | Create a YAML file named `nginx-deployment.yaml` for the Nginx deployment. This file defines the deployment of Nginx pods. 10 | 11 | ```yaml 12 | apiVersion: apps/v1 13 | kind: Deployment 14 | metadata: 15 | name: nginx-deployment 16 | labels: 17 | app: nginx 18 | spec: 19 | replicas: 3 20 | selector: 21 | matchLabels: 22 | app: nginx 23 | template: 24 | metadata: 25 | labels: 26 | app: nginx 27 | spec: 28 | containers: 29 | - name: nginx 30 | image: nginx:latest 31 | ports: 32 | - containerPort: 80 33 | ``` 34 | 35 | This deployment will create 3 replicas of the Nginx pod. 36 | 37 | ### Step 3: Create the Cluster IP Service YAML 38 | 39 | Create a YAML file named `nginx-service.yaml` for the Cluster IP service. This service will expose the Nginx deployment internally within the cluster. 40 | 41 | ```yaml 42 | apiVersion: v1 43 | kind: Service 44 | metadata: 45 | name: nginx-service 46 | labels: 47 | app: nginx 48 | spec: 49 | selector: 50 | app: nginx 51 | ports: 52 | - protocol: TCP 53 | port: 80 54 | targetPort: 80 55 | type: ClusterIP 56 | ``` 57 | 58 | This service will route traffic on port 80 to the Nginx pods. 59 | 60 | ### Step 4: Apply the Deployment and Service YAML files 61 | 62 | Use `kubectl` to apply the YAML files you created. 63 | 64 | 1. Apply the Nginx deployment: 65 | 66 | ```sh 67 | kubectl apply -f nginx-deployment.yaml 68 | ``` 69 | 70 | 2. Apply the Cluster IP service: 71 | 72 | ```sh 73 | kubectl apply -f nginx-service.yaml 74 | ``` 75 | 76 | ### Step 5: Verify the Deployment and Service 77 | 78 | 1. Check the status of the deployment: 79 | 80 | ```sh 81 | kubectl get deployments 82 | ``` 83 | 84 | You should see the `nginx-deployment` listed with the desired number of replicas. 85 | 86 | 2. Check the status of the pods: 87 | 88 | ```sh 89 | kubectl get pods 90 | ``` 91 | 92 | You should see 3 running pods for the Nginx deployment. 93 | 94 | 3. Check the status of the service: 95 | 96 | ```sh 97 | kubectl get services 98 | ``` 99 | 100 | You should see the `nginx-service` listed with a Cluster IP. 101 | 102 | ### Step 6: Test the Service 103 | 104 | To test the service internally, you can use a temporary pod with `curl` to access the Nginx service. 105 | 106 | 1. Create a temporary pod: 107 | 108 | ```sh 109 | kubectl run curlpod --image=radial/busyboxplus:curl -i --tty 110 | ``` 111 | 112 | 2. Inside the pod, use `curl` to access the Nginx service: 113 | 114 | ```sh 115 | curl nginx-service 116 | ``` 117 | 118 | You should see the default Nginx welcome page HTML. 119 | 120 | ### Summary 121 | 122 | You have successfully created an Nginx deployment and a Cluster IP service in Kubernetes. This setup allows you to deploy Nginx pods and expose them internally within your cluster using a Cluster IP service. 
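Beyond the short name used in the test above, cluster DNS also resolves the service's fully qualified domain name, which is the form to use when calling the service from another namespace; this assumes the service was created in the `default` namespace and the cluster uses the default `cluster.local` domain:

```sh
curl nginx-service.default.svc.cluster.local
```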
-------------------------------------------------------------------------------- /2024/Nginx deployment and a NodePort service.md: --------------------------------------------------------------------------------

Below are the steps to create an Nginx deployment and a NodePort service with the port specified in the YAML files.

### Step 1: Set up your Kubernetes environment

Ensure you have a running Kubernetes cluster and `kubectl` configured to interact with your cluster.

### Step 2: Create the Nginx Deployment YAML

Create a YAML file named `nginx-deployment.yaml` for the Nginx deployment. This file defines the deployment of Nginx pods.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
```

This deployment will create 3 replicas of the Nginx pod.

### Step 3: Create the NodePort Service YAML

Create a YAML file named `nginx-service.yaml` for the NodePort service. This service will expose the Nginx deployment externally on a specified port.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007 # specify the node port here
  type: NodePort
```

This service will expose port 80 of the Nginx pods on port 30007 on each node in the cluster.

### Step 4: Apply the Deployment and Service YAML files

Use `kubectl` to apply the YAML files you created.

1. Apply the Nginx deployment:

```sh
kubectl apply -f nginx-deployment.yaml
```

2. Apply the NodePort service:

```sh
kubectl apply -f nginx-service.yaml
```

### Step 5: Verify the Deployment and Service

1. Check the status of the deployment:

```sh
kubectl get deployments
```

You should see the `nginx-deployment` listed with the desired number of replicas.

2. Check the status of the pods:

```sh
kubectl get pods
```

You should see 3 running pods for the Nginx deployment.

3. Check the status of the service:

```sh
kubectl get services
```

You should see the `nginx-service` listed with a NodePort.

### Step 6: Test the Service

To test the service, you can access the Nginx service externally using the NodePort on any node's IP address.

1. Get the IP address of one of your nodes:

```sh
kubectl get nodes -o wide
```

2. Use `curl` or a web browser to access the Nginx service:

```sh
curl http://<node-ip>:30007
```

Replace `<node-ip>` with the IP address of any of your cluster nodes.

You should see the default Nginx welcome page.

### Summary

You have successfully created an Nginx deployment and a NodePort service in Kubernetes. This setup allows you to deploy Nginx pods and expose them externally using a specified NodePort.
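To script the same test instead of copying the address by hand, you can pull a node IP straight from the API with `kubectl` JSONPath. This is a small illustrative sketch, not part of the original steps; `InternalIP` assumes you are curling from inside the same network, so substitute `ExternalIP` for a publicly reachable cluster:

```sh
# Grab the first node's internal IP via JSONPath.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')

# Hit the NodePort on that node.
curl "http://${NODE_IP}:30007"
```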
126 | -------------------------------------------------------------------------------- /2024/Objects_K8S.md: -------------------------------------------------------------------------------- 1 | ### Understanding Kubernetes Objects 2 | 3 | #### What are Kubernetes Objects? 4 | 5 | Kubernetes objects are persistent entities in the Kubernetes system. These objects represent the state of your cluster. Specifically, they describe: 6 | 7 | - What containerized applications are running (and on which nodes). 8 | - The resources available to those applications. 9 | - The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance. 10 | 11 | Common Kubernetes objects include: 12 | 13 | - **Pod** 14 | - **ReplicaSet** 15 | - **Deployment** 16 | - **Service** 17 | - **ConfigMap** 18 | - **Secret** 19 | - **PersistentVolume** 20 | - **PersistentVolumeClaim** 21 | - **Namespace** 22 | 23 | #### Why Does Kubernetes Need Objects? 24 | 25 | Kubernetes uses objects to: 26 | 27 | - **Define Desired State:** Kubernetes objects allow you to define the desired state of your cluster, including what applications or workloads should be running and their configurations. 28 | - **Facilitate Management:** Objects provide a way to manage the lifecycle of your applications, such as deploying, scaling, and updating them. 29 | - **Enable Automation:** Objects enable Kubernetes to automate the deployment, maintenance, and scaling of applications. 30 | 31 | ### Imperative vs. Declarative Way to Launch Kubernetes Objects 32 | 33 | #### Imperative Way 34 | 35 | The imperative way involves running commands directly to create or manage Kubernetes objects. This approach is more hands-on and involves immediate action. 36 | 37 | **Example: Launching a Pod Imperatively** 38 | ```sh 39 | kubectl run nginx-pod --image=nginx:latest --port=80 40 | ``` 41 | 42 | **Advantages:** 43 | - Quick and straightforward for simple tasks. 44 | - Useful for one-time operations or debugging. 45 | 46 | **Disadvantages:** 47 | - Not suitable for complex or large-scale deployments. 48 | - Harder to maintain consistency and track changes over time. 49 | 50 | #### Declarative Way 51 | 52 | The declarative way involves writing YAML or JSON files that describe the desired state of your objects, and then applying these files using `kubectl apply`. 53 | 54 | **Example: Launching a Pod Declaratively** 55 | ```sh 56 | kubectl apply -f pod-nginx.yaml 57 | ``` 58 | 59 | **Advantages:** 60 | - Better for complex deployments and infrastructure as code. 61 | - Easier to manage, version control, and audit changes. 62 | - Ensures consistency and repeatability. 63 | 64 | **Disadvantages:** 65 | - Initial setup can be more time-consuming. 66 | - Requires understanding of YAML/JSON syntax. 67 | 68 | ### Detailed Explanation of YAML for Launching a Pod 69 | 70 | **YAML for launching a Pod using the NGINX image:** 71 | ```yaml 72 | apiVersion: v1 73 | kind: Pod 74 | metadata: 75 | name: nginx-pod 76 | spec: 77 | containers: 78 | - name: nginx 79 | image: nginx:latest 80 | ports: 81 | - containerPort: 80 82 | ``` 83 | 84 | Let's break down each line and its syntax: 85 | 86 | 1. `apiVersion: v1` 87 | - **Description:** Specifies the API version of the Kubernetes object. 88 | - **Syntax:** `apiVersion: ` 89 | - **Details:** `v1` is the stable API version for core objects like Pods. 90 | 91 | 2. `kind: Pod` 92 | - **Description:** Specifies the type of Kubernetes object. 
- **Syntax:** `kind: <object-type>`
   - **Details:** `Pod` indicates this YAML defines a Pod object.

3. `metadata:`
   - **Description:** Provides metadata about the object, including its name, namespace, labels, and annotations.
   - **Syntax:** `metadata: <details>`
   - **Details:** This section typically includes the name and labels.

4. `name: nginx-pod`
   - **Description:** Specifies the name of the object.
   - **Syntax:** `name: <object-name>`
   - **Details:** `nginx-pod` is the name assigned to this Pod.

5. `spec:`
   - **Description:** Defines the desired state of the object.
   - **Syntax:** `spec: <specifications>`
   - **Details:** This section includes configurations specific to the Pod.

6. `containers:`
   - **Description:** Lists the containers that will run in the Pod.
   - **Syntax:** `containers: <list-of-containers>`
   - **Details:** A Pod can contain multiple containers.

7. `- name: nginx`
   - **Description:** Specifies the name of the container.
   - **Syntax:** `- name: <container-name>`
   - **Details:** `nginx` is the name assigned to this container.

8. `image: nginx:latest`
   - **Description:** Specifies the Docker image to use for the container.
   - **Syntax:** `image: <image-name>:<tag>`
   - **Details:** `nginx:latest` pulls the latest version of the NGINX image.

9. `ports:`
   - **Description:** Specifies the ports to expose from the container.
   - **Syntax:** `ports: <list-of-ports>`
   - **Details:** This section lists the ports the container will listen on.

10. `- containerPort: 80`
    - **Description:** Specifies the port number to expose from the container.
    - **Syntax:** `- containerPort: <port-number>`
    - **Details:** `80` is the default HTTP port for the NGINX server.

### Summary

- **Kubernetes Objects:** Represent the desired state and configurations of your cluster.
- **Imperative vs. Declarative:** Imperative is quick and straightforward for simple tasks, while Declarative is better for complex, scalable, and manageable deployments.
- **Example YAML:** A detailed explanation of each line in a Pod YAML file helps in understanding the syntax and purpose of each field.

Choosing the right approach depends on your use case. For production environments and larger deployments, the declarative method is generally preferred due to its maintainability and scalability.

-------------------------------------------------------------------------------- /2024/Pod_2.md: --------------------------------------------------------------------------------

# Methods to Create Objects in Kubernetes and Overview of Pods

## Introduction to Kubernetes Object Creation

In Kubernetes, objects are entities used to represent the state of your cluster. They can include Pods, Services, Deployments, and more. Different methods are available to create these objects, such as using YAML files or the `kubectl` command line tool.

## Overview of Kubernetes Pods

A Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process and can encapsulate one or more containers. The containers within a Pod share the same network namespace, allowing them to communicate easily. Understanding Pods is fundamental to working with Kubernetes.
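Before turning to YAML, note that there is also a quick imperative route — shown here as an illustrative aside for contrast with the declarative approach detailed in the next sections:

```bash
# Imperative one-liner: create a single nginx Pod without writing a manifest.
kubectl run mypod --image=nginx --restart=Never
```

The declarative YAML equivalent, which is easier to version-control and repeat, follows below.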
10 | 11 | ## Creating Pods using YAML file 12 | 13 | Create a Pod definition in a YAML file, for example, `mypod.yaml`: 14 | 15 | ```yaml 16 | apiVersion: v1 17 | kind: Pod 18 | metadata: 19 | name: mypod 20 | spec: 21 | containers: 22 | - name: mycontainer 23 | image: nginx 24 | ``` 25 | 26 | Apply the Pod using the command: 27 | 28 | ```bash 29 | kubectl apply -f mypod.yaml 30 | ``` 31 | 32 | ## Check status of the Pod 33 | 34 | Check the status of a Pod using: 35 | 36 | ```bash 37 | kubectl get pod mypod 38 | ``` 39 | 40 | ## Get details of the Pod 41 | 42 | Retrieve detailed information about a Pod with: 43 | 44 | ```bash 45 | kubectl describe pod mypod 46 | ``` 47 | 48 | ## Check status of the container from the Pod 49 | 50 | To check the status of a specific container within a Pod: 51 | 52 | ```bash 53 | kubectl get pod mypod -o jsonpath='{.status.containerStatuses[0].state}' 54 | ``` 55 | 56 | ## Connecting to the Pod 57 | 58 | Connect to a Pod interactively: 59 | 60 | ```bash 61 | kubectl exec -it mypod -- /bin/bash 62 | ``` 63 | 64 | ## Perform port forwarding using kubectl 65 | 66 | Forward a local port to a Pod: 67 | 68 | ```bash 69 | kubectl port-forward mypod 8080:80 70 | ``` 71 | 72 | Access the application at `http://localhost:8080`. 73 | 74 | ## Understanding multi-container Pods 75 | 76 | Define a multi-container Pod in a YAML file: 77 | 78 | ```yaml 79 | apiVersion: v1 80 | kind: Pod 81 | metadata: 82 | name: multi-container-pod 83 | spec: 84 | containers: 85 | - name: main-container 86 | image: nginx 87 | - name: sidecar-container 88 | image: busybox 89 | ``` 90 | 91 | ## Understanding the sidecar scenario 92 | 93 | In the multi-container Pod example, the `sidecar-container` acts as a sidecar, providing additional functionality to the main container. 94 | 95 | ## Inspecting Pods 96 | 97 | Inspect Pods with: 98 | 99 | ```bash 100 | kubectl get pods 101 | ``` 102 | 103 | ## Check the logs from a Pod 104 | 105 | Retrieve logs from a Pod: 106 | 107 | ```bash 108 | kubectl logs mypod 109 | ``` 110 | 111 | ## Deleting a Pod 112 | 113 | Delete a Pod using: 114 | 115 | ```bash 116 | kubectl delete pod mypod 117 | ``` 118 | 119 | ## Running pod instances in a Job 120 | 121 | Define a Job in a YAML file: 122 | 123 | ```yaml 124 | apiVersion: batch/v1 125 | kind: Job 126 | metadata: 127 | name: myjob 128 | spec: 129 | template: 130 | spec: 131 | containers: 132 | - name: myjob-container 133 | image: busybox 134 | command: ["echo", "Hello from the job"] 135 | backoffLimit: 4 136 | ``` 137 | 138 | ## Understanding different available Job types 139 | 140 | Explore different Job types, such as `Parallel` and `Serial`. 141 | 142 | ## Running job pods sequentially 143 | 144 | Configure a Job for sequential pod execution: 145 | 146 | ```yaml 147 | spec: 148 | parallelism: 1 149 | completions: 3 150 | ``` 151 | 152 | ## Deleting a job 153 | 154 | Delete a Job with: 155 | 156 | ```bash 157 | kubectl delete job myjob 158 | ``` 159 | 160 | ## Clean up finished jobs automatically 161 | 162 | Use a TTL controller to automatically clean up completed Jobs: 163 | 164 | ```yaml 165 | ttlSecondsAfterFinished: 600 166 | ``` 167 | 168 | ## Conclusion 169 | 170 | Understanding the various methods to create objects in Kubernetes, especially Pods, is crucial for effective cluster management. Pods, as the basic building blocks, provide a foundation for deploying and scaling applications. 
Exploring additional features like Jobs enhances your ability to manage workloads efficiently in a Kubernetes environment. 171 | -------------------------------------------------------------------------------- /2024/Pod_Concept.md: -------------------------------------------------------------------------------- 1 | # Introduction to Pods in Kubernetes 2 | 3 | ## Overview 4 | 5 | In this discussion, we will delve into the fundamental concept of **pods** within the Kubernetes ecosystem. The objective is to guide you through the proper configuration and deployment of pods. Starting with the creation of a simple pod housing your application container, we will explore the nuances of pod configuration. This includes deciphering various aspects of pod settings and choosing configurations tailored to your specific application or use case. Additionally, you will learn how to define resource allocation requirements and limits for pods. The discussion will extend to debugging, log inspection, and making necessary changes to the pod. Essential tools for managing faults, such as liveness and readiness probes, along with restart policies, will also be covered. 6 | 7 | ## Kubernetes Objects 8 | 9 | In the Kubernetes system, several entities embody the state of the cluster and define its workload. These entities are referred to as **Kubernetes objects**. They provide insights into what containers will run in the cluster, resource utilization, inter-container interactions, and exposure to the external environment. 10 | 11 | ## Understanding Pods 12 | 13 | A **pod** stands as the foundational unit in Kubernetes, representing the basic unit of deployment. Analogous to defining a process as a program in execution, a pod can be viewed as a running process in the Kubernetes realm. Pods serve as the smallest unit of replication in Kubernetes, capable of accommodating any number of containers. 14 | 15 | ## Benefits of Pods 16 | 17 | A pod acts as a wrapper around containers on a node, offering shared volumes, Linux namespaces, and cgroups. Each pod possesses a unique IP address, with port space shared among all its containers. This allows containers within a pod to communicate using their respective ports on localhost. 18 | 19 | ## Effective Use of Multiple Containers in a Pod 20 | 21 | Deploying multiple containers in a pod is advisable when they need to be managed and located together in the Kubernetes cluster. For instance, a pod might comprise an application container and another container responsible for fetching logs from the application and forwarding them to central storage. In such cases, having both containers in the same pod facilitates communication over localhost and shared storage access. 22 | 23 | ## Pod Configuration 24 | 25 | ```yaml 26 | apiVersion: v1 27 | kind: Pod 28 | metadata: 29 | name: pod-name 30 | spec: 31 | containers: 32 | - name: container1-name 33 | image: container1-image 34 | - name: container2-name 35 | image: container2-image 36 | ``` 37 | 38 | - **apiVersion:** Version of the Kubernetes API in use. 39 | - **kind:** The type of Kubernetes object, specifying a Pod in this context. 40 | - **metadata:** Unique information identifying the created object. 41 | - **spec:** Pod specifications, including container names, image names, volumes, and resource requests. While apiVersion, kind, and metadata are universal fields, spec, though mandatory, varies in layout across different object types. 
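As a concrete illustration of the log-forwarding scenario described earlier, here is a sketch of a two-container pod wired together with a shared `emptyDir` volume. The image choices, paths, and volume name are illustrative assumptions, not taken from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: shared-logs          # scratch space shared by both containers
    emptyDir: {}
  containers:
  - name: app                  # main application container
    image: nginx
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-forwarder        # sidecar that reads what the app writes
    image: busybox
    command: ["sh", "-c", "tail -F /logs/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```

Both containers see the same files, and because they share the pod's network namespace, the sidecar could equally reach the app over `localhost`.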
42 | 43 | 44 | -------------------------------------------------------------------------------- /2024/Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding.md: -------------------------------------------------------------------------------- 1 | ### Kubernetes Service Account, Role, RoleBinding, ClusterRole, and ClusterRoleBinding 2 | 3 | **Service Account**: 4 | A Kubernetes Service Account (SA) provides an identity for processes that run in a Pod. By default, a Pod runs as the default service account in the namespace where the Pod is running. Service accounts are used to provide fine-grained access control for applications. 5 | 6 | **Role**: 7 | A Role in Kubernetes contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules). Roles can be namespaced, meaning they only apply within a specific namespace. 8 | 9 | **RoleBinding**: 10 | A RoleBinding grants the permissions defined in a Role to a user or a service account within a namespace. It defines who can do what within that namespace. 11 | 12 | **ClusterRole**: 13 | A ClusterRole is similar to a Role but is cluster-wide. It can be used to define permissions that apply across all namespaces or to cluster-scoped resources like nodes. 14 | 15 | **ClusterRoleBinding**: 16 | A ClusterRoleBinding grants the permissions defined in a ClusterRole to a user or a service account across the entire cluster. 17 | 18 | ### Step-by-Step YAML Implementation 19 | 20 | 1. **Create a Service Account** 21 | 22 | ```yaml 23 | apiVersion: v1 24 | kind: ServiceAccount 25 | metadata: 26 | name: my-service-account 27 | namespace: default 28 | ``` 29 | 30 | 2. **Create a Role** 31 | 32 | ```yaml 33 | apiVersion: rbac.authorization.k8s.io/v1 34 | kind: Role 35 | metadata: 36 | namespace: default 37 | name: my-role 38 | rules: 39 | - apiGroups: [""] 40 | resources: ["pods"] 41 | verbs: ["get", "list", "watch"] 42 | ``` 43 | 44 | 3. **Create a RoleBinding** 45 | 46 | ```yaml 47 | apiVersion: rbac.authorization.k8s.io/v1 48 | kind: RoleBinding 49 | metadata: 50 | name: my-rolebinding 51 | namespace: default 52 | subjects: 53 | - kind: ServiceAccount 54 | name: my-service-account 55 | namespace: default 56 | roleRef: 57 | kind: Role 58 | name: my-role 59 | apiGroup: rbac.authorization.k8s.io 60 | ``` 61 | 62 | 4. **Create a ClusterRole** 63 | 64 | ```yaml 65 | apiVersion: rbac.authorization.k8s.io/v1 66 | kind: ClusterRole 67 | metadata: 68 | name: my-clusterrole 69 | rules: 70 | - apiGroups: [""] 71 | resources: ["pods"] 72 | verbs: ["get", "list", "watch"] 73 | ``` 74 | 75 | 5. **Create a ClusterRoleBinding** 76 | 77 | ```yaml 78 | apiVersion: rbac.authorization.k8s.io/v1 79 | kind: ClusterRoleBinding 80 | metadata: 81 | name: my-clusterrolebinding 82 | subjects: 83 | - kind: ServiceAccount 84 | name: my-service-account 85 | namespace: default 86 | roleRef: 87 | kind: ClusterRole 88 | name: my-clusterrole 89 | apiGroup: rbac.authorization.k8s.io 90 | ``` 91 | 92 | ### Explanation 93 | 94 | - **Service Account**: `my-service-account` is created in the `default` namespace. 95 | - **Role**: `my-role` is created in the `default` namespace, allowing `get`, `list`, and `watch` on `pods`. 96 | - **RoleBinding**: Binds `my-role` to `my-service-account` in the `default` namespace. 97 | - **ClusterRole**: `my-clusterrole` is created with permissions to `get`, `list`, and `watch` on `pods` across all namespaces. 
98 | - **ClusterRoleBinding**: Binds `my-clusterrole` to `my-service-account` across the entire cluster. 99 | 100 | ### Applying the YAML files 101 | 102 | To apply these YAML configurations, save each snippet to a file (e.g., `serviceaccount.yaml`, `role.yaml`, `rolebinding.yaml`, `clusterrole.yaml`, `clusterrolebinding.yaml`) and then use the `kubectl apply` command: 103 | 104 | ```sh 105 | kubectl apply -f serviceaccount.yaml 106 | kubectl apply -f role.yaml 107 | kubectl apply -f rolebinding.yaml 108 | kubectl apply -f clusterrole.yaml 109 | kubectl apply -f clusterrolebinding.yaml 110 | ``` 111 | 112 | This will create the service account, roles, and bindings in your Kubernetes cluster, granting the specified permissions to the service account. 113 | -------------------------------------------------------------------------------- /AzureDynamicStorageprovisioning.md: -------------------------------------------------------------------------------- 1 | 2 | ## Preconfigured Storage Classes in AKS 3 | 4 | Each AKS cluster comes with four precreated storage classes, two of which are configured to work with Azure Disks: 5 | 6 | 1. **Default Storage Class**: This class provisions a standard SSD Azure Disk. Standard SSDs are cost-effective while providing reliable performance. 7 | 8 | 2. **Managed-CSI-Premium Storage Class**: This class provisions a premium Azure Disk. Premium disks are SSD-based, offering high performance and low latency, making them ideal for VMs running production workloads. You can also use the managed-csi storage class, which is backed by Standard SSD locally redundant storage (LRS), when using the Azure Disk CSI driver on AKS. 9 | 10 | ![image](https://github.com/discover-devops/kubernetes_workshop/assets/53135263/3f06bd22-1364-4187-9384-c5959c4f2886) 11 | 12 | 13 | To view the available storage classes in your AKS cluster, use the following command: 14 | 15 | ```bash 16 | kubectl get sc 17 | ``` 18 | 19 | ## Creating a Persistent Volume Claim (PVC) 20 | 21 | A Persistent Volume Claim (PVC) automatically provisions storage based on a storage class. You can use one of the precreated storage classes to create either a standard or premium Azure managed disk. 22 | 23 | 1. Create a file named `azure-pvc.yaml` and copy the following manifest into it. This manifest requests a 5GB disk named `azure-managed-disk` with `ReadWriteOnce` access. The `managed-csi` storage class is specified. 24 | 25 | ```yaml 26 | apiVersion: v1 27 | kind: PersistentVolumeClaim 28 | metadata: 29 | name: azure-managed-disk 30 | spec: 31 | accessModes: 32 | - ReadWriteOnce 33 | storageClassName: managed-csi 34 | resources: 35 | requests: 36 | storage: 5Gi 37 | ``` 38 | 39 | 2. Create the persistent volume claim using the following `kubectl` command and specify your `azure-pvc.yaml` file: 40 | 41 | ```bash 42 | kubectl apply -f azure-pvc.yaml 43 | ``` 44 | 45 | ## Using the Persistent Volume 46 | 47 | After creating the persistent volume claim, you need to verify that it has a status of `Pending`, indicating that it's ready to be used by a pod. 48 | 49 | To check the status of the PVC, use the following command: 50 | 51 | ```bash 52 | kubectl describe pvc azure-managed-disk 53 | ``` 54 | 55 | ## Creating a Pod with the Persistent Volume 56 | 57 | 1. Create a file named `azure-pvc-disk.yaml` and copy the following manifest into it. This manifest creates a basic NGINX pod named `mypod` that uses the persistent volume claim named `azure-managed-disk` to mount the Azure Disk at the path `/mnt/azure`. 
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
    volumeMounts:
      - mountPath: "/mnt/azure"
        name: volume
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: azure-managed-disk
```

2. Create the pod using the following `kubectl` command:

```bash
kubectl apply -f azure-pvc-disk.yaml
```

Now, you have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. To check the pod configuration, use the following command:

```bash
kubectl describe pod mypod
```

```bash
$ kubectl get pvc
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
azure-managed-disk   Pending                                      managed-csi    13h

$ kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
azure-managed-disk   Bound    pvc-1ee49883-763f-4715-a9ae-ed0494a04956   5Gi        RWO            managed-csi    13h
```

![image](https://github.com/discover-devops/kubernetes_workshop/assets/53135263/9e1ade4f-1804-4c72-bb67-d08bdb289a1b)

For more information on storage options in AKS and related topics, you can refer to the following references:

- [Storage options for applications in Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/concepts-storage)
- [How to install the Azure CLI](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
- [Kubernetes Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes)
- [Kubectl Documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply)

-------------------------------------------------------------------------------- /HowPodGetIPAddress.md: --------------------------------------------------------------------------------

## How an IP Address Gets Assigned to a Pod in Kubernetes

### Container Runtime Initialization:

- When a Pod is created, the container runtime (e.g., Docker, containerd) starts the containers within the Pod.
- Containers within a Pod share the same network namespace, using the same IP address and port space.

### Container Network Interface (CNI) Plugins:

- Kubernetes uses the Container Network Interface (CNI) to configure networking interfaces for Linux containers.
- The CNI plugin is chosen based on configuration and may be specific to a cloud platform or a standalone Kubernetes cluster.
- IP address assignment and networking within the Pod are managed by the selected CNI plugin.

### CNI Configuration and IP Address Management (IPAM):

- Kubernetes delegates IP address assignment to the CNI plugin specified in the CNI configuration.
- IP Address Management (IPAM) varies based on the chosen CNI plugin and IPAM method.

### Pod Creation Trigger:

- When a Pod is created, the kubelet on the node sends a request to the configured CNI plugin to set up the network for the Pod.

### IP Address Assignment:

- The CNI plugin, upon receiving the request, interacts with the IPAM to obtain an available IP address for the new Pod.
26 | - IPAM ensures the assigned IP address is unique within the network. 27 | - Once an IP address is obtained, the CNI plugin configures the network namespace for the Pod, applying the assigned IP address and setting up necessary routes. 28 | 29 | ### Network Setup and Storage: 30 | 31 | - The CNI plugin establishes the network configuration within the Pod's namespace, ensuring proper connectivity. 32 | - Details of the assigned IP address, routes, and network setup are stored by the CNI plugin for future reference and cleanup. 33 | 34 | ### Dynamic IP Address Management: 35 | 36 | - IP addresses may change during Pod restarts, rescheduling, or scaling operations. 37 | - The CNI plugin and IPAM handle dynamic scenarios by reassigning IP addresses, ensuring proper network functioning. 38 | -------------------------------------------------------------------------------- /K8S_Certification Labs/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | # CKAD Exercises 5 | 6 | These are the sample questions for the [Certified Kubernetes Application Developer](https://www.cncf.io/certification/ckad/) exam, offered by the Cloud Native Computing Foundation, organized by curriculum domain. 7 | -------------------------------------------------------------------------------- /K8S_Certification Labs/b.multi_container_pods.md: -------------------------------------------------------------------------------- 1 | ![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/multi_container&empty) 2 | # Multi-container Pods (10%) 3 | 4 | ### Create a Pod with two containers, both with image busybox and command "echo hello; sleep 3600". Connect to the second container and run 'ls' 5 | 6 |
8 | 9 | Easiest way to do it is create a pod with a single container and save its definition in a YAML file: 10 | 11 | ```bash 12 | kubectl run busybox --image=busybox --restart=Never -o yaml --dry-run=client -- /bin/sh -c 'echo hello;sleep 3600' > pod.yaml 13 | vi pod.yaml 14 | ``` 15 | 16 | Copy/paste the container related values, so your final YAML should contain the following two containers (make sure those containers have a different name): 17 | 18 | ```YAML 19 | containers: 20 | - args: 21 | - /bin/sh 22 | - -c 23 | - echo hello;sleep 3600 24 | image: busybox 25 | imagePullPolicy: IfNotPresent 26 | name: busybox 27 | resources: {} 28 | - args: 29 | - /bin/sh 30 | - -c 31 | - echo hello;sleep 3600 32 | image: busybox 33 | name: busybox2 34 | ``` 35 | 36 | ```bash 37 | kubectl create -f pod.yaml 38 | # Connect to the busybox2 container within the pod 39 | kubectl exec -it busybox -c busybox2 -- /bin/sh 40 | ls 41 | exit 42 | 43 | # or you can do the above with just an one-liner 44 | kubectl exec -it busybox -c busybox2 -- ls 45 | 46 | # you can do some cleanup 47 | kubectl delete po busybox 48 | ``` 49 | 50 |
52 | 53 | ### Create a pod with an nginx container exposed on port 80. Add a busybox init container which downloads a page using "wget -O /work-dir/index.html http://neverssl.com/online". Make a volume of type emptyDir and mount it in both containers. For the nginx container, mount it on "/usr/share/nginx/html" and for the initcontainer, mount it on "/work-dir". When done, get the IP of the created pod and create a busybox pod and run "wget -O- IP" 54 | 55 |
57 | 58 | Easiest way to do it is create a pod with a single container and save its definition in a YAML file: 59 | 60 | ```bash 61 | kubectl run box --image=nginx --restart=Never --port=80 --dry-run=client -o yaml > pod-init.yaml 62 | ``` 63 | 64 | Copy/paste the container related values, so your final YAML should contain the volume and the initContainer: 65 | 66 | Volume: 67 | 68 | ```YAML 69 | containers: 70 | - image: nginx 71 | ... 72 | volumeMounts: 73 | - name: vol 74 | mountPath: /usr/share/nginx/html 75 | volumes: 76 | - name: vol 77 | emptyDir: {} 78 | ``` 79 | 80 | initContainer: 81 | 82 | ```YAML 83 | ... 84 | initContainers: 85 | - args: 86 | - /bin/sh 87 | - -c 88 | - wget -O /work-dir/index.html http://neverssl.com/online 89 | image: busybox 90 | name: box 91 | volumeMounts: 92 | - name: vol 93 | mountPath: /work-dir 94 | ``` 95 | 96 | In total you get: 97 | 98 | ```YAML 99 | 100 | apiVersion: v1 101 | kind: Pod 102 | metadata: 103 | labels: 104 | run: box 105 | name: box 106 | spec: 107 | initContainers: 108 | - args: 109 | - /bin/sh 110 | - -c 111 | - wget -O /work-dir/index.html http://neverssl.com/online 112 | image: busybox 113 | name: box 114 | volumeMounts: 115 | - name: vol 116 | mountPath: /work-dir 117 | containers: 118 | - image: nginx 119 | name: nginx 120 | ports: 121 | - containerPort: 80 122 | volumeMounts: 123 | - name: vol 124 | mountPath: /usr/share/nginx/html 125 | volumes: 126 | - name: vol 127 | emptyDir: {} 128 | ``` 129 | 130 | ```bash 131 | # Apply pod 132 | kubectl apply -f pod-init.yaml 133 | 134 | # Get IP 135 | kubectl get po -o wide 136 | 137 | # Execute wget 138 | kubectl run box-test --image=busybox --restart=Never -it --rm -- /bin/sh -c "wget -O- IP" 139 | 140 | # you can do some cleanup 141 | kubectl delete po box 142 | ``` 143 | 144 |
146 | 147 | -------------------------------------------------------------------------------- /K8S_Certification Labs/e.observability.md: -------------------------------------------------------------------------------- 1 | ![](https://gaforgithub.azurewebsites.net/api?repo=CKAD-exercises/observability&empty) 2 | # Observability (18%) 3 | 4 | ## Liveness, readiness and startup probes 5 | 6 | kubernetes.io > Documentation > Tasks > Configure Pods and Containers > [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) 7 | 8 | ### Create an nginx pod with a liveness probe that just runs the command 'ls'. Save its YAML in pod.yaml. Run it, check its probe status, delete it. 9 | 10 |
12 | 13 | ```bash 14 | kubectl run nginx --image=nginx --restart=Never --dry-run=client -o yaml > pod.yaml 15 | vi pod.yaml 16 | ``` 17 | 18 | ```YAML 19 | apiVersion: v1 20 | kind: Pod 21 | metadata: 22 | creationTimestamp: null 23 | labels: 24 | run: nginx 25 | name: nginx 26 | spec: 27 | containers: 28 | - image: nginx 29 | imagePullPolicy: IfNotPresent 30 | name: nginx 31 | resources: {} 32 | livenessProbe: # our probe 33 | exec: # add this line 34 | command: # command definition 35 | - ls # ls command 36 | dnsPolicy: ClusterFirst 37 | restartPolicy: Never 38 | status: {} 39 | ``` 40 | 41 | ```bash 42 | kubectl create -f pod.yaml 43 | kubectl describe pod nginx | grep -i liveness # run this to see that liveness probe works 44 | kubectl delete -f pod.yaml 45 | ``` 46 | 47 |
49 | 50 | ### Modify the pod.yaml file so that liveness probe starts kicking in after 5 seconds whereas the interval between probes would be 5 seconds. Run it, check the probe, delete it. 51 | 52 |
54 | 55 | ```bash 56 | kubectl explain pod.spec.containers.livenessProbe # get the exact names 57 | ``` 58 | 59 | ```YAML 60 | apiVersion: v1 61 | kind: Pod 62 | metadata: 63 | creationTimestamp: null 64 | labels: 65 | run: nginx 66 | name: nginx 67 | spec: 68 | containers: 69 | - image: nginx 70 | imagePullPolicy: IfNotPresent 71 | name: nginx 72 | resources: {} 73 | livenessProbe: 74 | initialDelaySeconds: 5 # add this line 75 | periodSeconds: 5 # add this line as well 76 | exec: 77 | command: 78 | - ls 79 | dnsPolicy: ClusterFirst 80 | restartPolicy: Never 81 | status: {} 82 | ``` 83 | 84 | ```bash 85 | kubectl create -f pod.yaml 86 | kubectl describe po nginx | grep -i liveness 87 | kubectl delete -f pod.yaml 88 | ``` 89 | 90 |
92 | 93 | ### Create an nginx pod (that includes port 80) with an HTTP readinessProbe on path '/' on port 80. Again, run it, check the readinessProbe, delete it. 94 | 95 |
97 | 98 | ```bash 99 | kubectl run nginx --image=nginx --dry-run=client -o yaml --restart=Never --port=80 > pod.yaml 100 | vi pod.yaml 101 | ``` 102 | 103 | ```YAML 104 | apiVersion: v1 105 | kind: Pod 106 | metadata: 107 | creationTimestamp: null 108 | labels: 109 | run: nginx 110 | name: nginx 111 | spec: 112 | containers: 113 | - image: nginx 114 | imagePullPolicy: IfNotPresent 115 | name: nginx 116 | resources: {} 117 | ports: 118 | - containerPort: 80 # Note: Readiness probes runs on the container during its whole lifecycle. Since nginx exposes 80, containerPort: 80 is not required for readiness to work. 119 | readinessProbe: # declare the readiness probe 120 | httpGet: # add this line 121 | path: / # 122 | port: 80 # 123 | dnsPolicy: ClusterFirst 124 | restartPolicy: Never 125 | status: {} 126 | ``` 127 | 128 | ```bash 129 | kubectl create -f pod.yaml 130 | kubectl describe pod nginx | grep -i readiness # to see the pod readiness details 131 | kubectl delete -f pod.yaml 132 | ``` 133 | 134 |
136 | 137 | ### Lots of pods are running in `qa`,`alan`,`test`,`production` namespaces. All of these pods are configured with liveness probe. Please list all pods whose liveness probe are failed in the format of `/` per line. 138 | 139 |
141 | 142 | A typical liveness probe failure event 143 | ``` 144 | LAST SEEN TYPE REASON OBJECT MESSAGE 145 | 22m Warning Unhealthy pod/liveness-exec Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory 146 | ``` 147 | 148 | collect failed pods namespace by namespace 149 | 150 | ```sh 151 | kubectl get events -o json | jq -r '.items[] | select(.message | contains("failed liveness probe")).involvedObject | .namespace + "/" + .name' 152 | ``` 153 | 154 |
156 | 157 | ## Logging 158 | 159 | ### Create a busybox pod that runs `i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done`. Check its logs 160 | 161 |
163 | 164 | ```bash 165 | kubectl run busybox --image=busybox --restart=Never -- /bin/sh -c 'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done' 166 | kubectl logs busybox -f # follow the logs 167 | ``` 168 | 169 |
171 | 172 | ## Debugging 173 | 174 | ### Create a busybox pod that runs 'ls /notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod 175 | 176 |
178 | 179 | ```bash 180 | kubectl run busybox --restart=Never --image=busybox -- /bin/sh -c 'ls /notexist' 181 | # show that there's an error 182 | kubectl logs busybox 183 | kubectl describe po busybox 184 | kubectl delete po busybox 185 | ``` 186 | 187 |
189 | 190 | ### Create a busybox pod that runs 'notexist'. Determine if there's an error (of course there is), see it. In the end, delete the pod forcefully with a 0 grace period 191 | 192 |
194 | 195 | ```bash 196 | kubectl run busybox --restart=Never --image=busybox -- notexist 197 | kubectl logs busybox # will bring nothing! container never started 198 | kubectl describe po busybox # in the events section, you'll see the error 199 | # also... 200 | kubectl get events | grep -i error # you'll see the error here as well 201 | kubectl delete po busybox --force --grace-period=0 202 | ``` 203 | 204 |
206 | 207 | 208 | ### Get CPU/memory utilization for nodes ([metrics-server](https://github.com/kubernetes-incubator/metrics-server) must be running) 209 | 210 |
212 | 213 | ```bash 214 | kubectl top nodes 215 | ``` 216 | 217 |
219 | -------------------------------------------------------------------------------- /K8s.YAML: -------------------------------------------------------------------------------- 1 | Master : 10.0.0.4 20.40.61.61 2 | WorkerA : 10.0.0.5 20.40.61.62 3 | 4 | ====== Installation ========= 5 | 6 | 7 | RUN Step 1 to 5 both on Master and Worker nodes 8 | Step:1 Update the apt package index and install packages needed to use the Kubernetes apt repository: 9 | 10 | sudo apt-get update 11 | sudo apt-get install -y apt-transport-https ca-certificates curl 12 | 13 | Step2: Download the Google Cloud public signing key: 14 | sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg 15 | 16 | Step3: Add the Kubernetes apt repository: 17 | echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list 18 | 19 | Step4: Update apt package index, install kubelet, kubeadm and kubectl, and pin their version: 20 | sudo apt-get update 21 | sudo apt-get install -y kubelet kubeadm kubectl 22 | sudo apt-mark hold kubelet kubeadm kubectl 23 | sudo systemctl enable kubelet 24 | 25 | Step5: Install RUNC 26 | 27 | sudo apt-get install docker.io 28 | sudo systemctl enable docker 29 | sudo systemctl daemon-reload 30 | sudo systemctl restart docker 31 | sudo systemctl status docker 32 | 33 | Step6: Run this command ONLY om Master 34 | 35 | sudo kubeadm init --apiserver-advertise-address=10.0.0.4 --pod-network-cidr=192.168.0.0/16 36 | 37 | 38 | $ mkdir -p $HOME/.kube 39 | $ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 40 | $ chown $(id -u):$(id -g) $HOME/.kube/config 41 | 42 | kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml 43 | 44 | 45 | Step: 7: (ONLY on worker NODE) Run the join commadn to the Worker Node 46 | 47 | 48 | kubeadm join 10.0.0.4:6443 --token jz8p4o.q6mhygk1hyjwdf8m --discovery-token-ca-cert-hash sha256:8df9f047cac7653c9c969f2f667254b3c138a50ad7c0b585f7cf0bd7a7f100a7 49 | 50 | kubeadm token create --print-join-command 51 | 52 | 53 | =================================== 54 | 55 | 56 | 57 | 58 | -------------------------------------------------------------------------------- /Kubernetes Installation.txt: -------------------------------------------------------------------------------- 1 | Kubernetes Installation 2 | ======================= 3 | 4 | Ref: 5 | https://kubernetes.io/ 6 | https://labs.play-with-k8s.com/ 7 | https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ 8 | https://projectcalico.docs.tigera.io/getting-started/kubernetes/self-managed-onprem/onpremises 9 | 10 | 11 | AMZON LINUX -----> CentOS based 12 | 13 | 1: Master Node 172.31.47.223 43.205.110.93 14 | 2: Worker node 172.31.47.46 52.66.119.59 15 | 16 | 17 | 18 | This is the installation step using KUBEADM 19 | ============================================== 20 | 21 | 22 | Step# 1: Install docker on BOTH master and worker node 23 | 24 | sudo yum update 25 | sudo yum install docker 26 | sudo systemctl enable docker.service 27 | sudo systemctl start docker.service 28 | sudo systemctl start docker.service 29 | systemctl status docker.service 30 | 31 | ======= 32 | Step#2: Run on BOTH MASTER and Worker 33 | 34 | cat <Kubernetes Installation** using Kubeadm 2 | 3 | 4 | 5 | Installation of Kubernetes using Kubeadm is simple 4 step process: 6 | 7 | 
##### Step1: Operating System Changes/Configuration updates:

This is a 2-node Kubernetes cluster with the following specifications:
**MasterNode** 10.0.0.8 Linux (ubuntu 18.04)
**WorkerNode1** 10.0.0.9 Linux (ubuntu 18.04)

1.1: Update the OS and add the master and worker node entries to the **/etc/hosts** file

> $ apt update -y && apt upgrade -y
> $ cp -p /etc/hosts /etc/hosts.ORIG
> $ echo "10.0.0.8 MasterNode" | tee -a /etc/hosts
> $ echo "10.0.0.9 WorkerNode1" | tee -a /etc/hosts

1.2: Disable Swap Memory

> $ sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
> $ swapoff -a

1.3: Configure Firewall and Networking parameters

> # Load Modules
>
> $ sudo modprobe overlay
> $ sudo modprobe br_netfilter
>
> ###### Set system configurations for Kubernetes networking:
>
> cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
> net.bridge.bridge-nf-call-iptables = 1
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> EOF
>
> ###### Apply new settings:
> sudo sysctl --system

##### Step2: Install Kubelet, Kubeadm and Kubectl:

> ###### Install dependency packages:
> apt-get update && apt-get install -y apt-transport-https curl
>
> ###### Download and add GPG key:
> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
>
> ###### Add Kubernetes to repository list:
> cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
> deb https://apt.kubernetes.io/ kubernetes-xenial main
> EOF
>
> ###### Update package listings:
> apt-get update
>
> ###### Install Kubernetes packages:
> apt-get install -y kubelet kubeadm kubectl
>
> ###### Turn off automatic updates:
> apt-mark hold kubelet kubeadm kubectl
>
> ###### Verify the Versions
>
> kubectl version --client && kubeadm version

##### Step3: Install Container Runtime

3.1: Docker as Container Runtime

> ###### Install the Docker
>
> $ apt update
> $ apt install -y curl gnupg2
> $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
> $ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
> $ apt update
> $ apt install -y containerd.io docker-ce
>
> ###### Configure the Docker daemon, in particular to use systemd for the management of the container's cgroups.
>
> $ mkdir /etc/docker
> cat <<EOF | sudo tee /etc/docker/daemon.json
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
>
> ###### Restart Docker and enable on boot:
>
> $ systemctl enable docker
> $ systemctl daemon-reload
> $ systemctl restart docker
>
> $ systemctl status docker

##### **OR**

##### **3.2: Containerd as Runtime**

> ###### Create configuration file for containerd:
>
> $ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
> overlay
> br_netfilter
> EOF
>
> ###### Create default configuration directory for containerd:
> $ mkdir -p /etc/containerd
>
> ###### Generate default containerd configuration:
> $ containerd config default | tee /etc/containerd/config.toml
>
> ###### Restart containerd to ensure new configuration file usage:
> $ systemctl restart containerd
>
> ###### Verify that containerd is running:
> $ systemctl status containerd

##### Step4: Create Kubernetes Cluster and Join Worker Nodes

> ###### Make sure that br_netfilter module is loaded:
>
> $ lsmod | grep br_netfilter
>
> ###### Start Kubelet
>
> $ systemctl enable kubelet
>
> ###### Create Cluster
>
> $ kubeadm init --apiserver-advertise-address=10.0.0.8 --pod-network-cidr=192.168.0.0/16
>
> ###### Set kubectl access:
> $ mkdir -p $HOME/.kube
> $ cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> $ chown $(id -u):$(id -g) $HOME/.kube/config
>
> ###### Install Calico Networking:
>
> $ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
>
> ###### Check status of the control plane node:
> $ kubectl get nodes
>
> ###### On the Control Plane Node, create the token and copy the kubeadm join command. The join command can also be found in the output of the kubeadm init command.
>
> $ kubeadm token create --print-join-command
>
> ###### On the Worker Node, run the below command to join the Cluster
>
> $ kubeadm join MasterNode:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
>
> ###### Check the Cluster Status
>
> $ kubectl cluster-info
>
> $ kubectl get nodes -o wide
>
> $ watch kubectl get pods --all-namespaces
>
> ###### Run a test Pod
>
> $ kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
>
> $ kubectl get deployments
>
> $ kubectl get pods

##### Troubleshooting

You may need the below steps to troubleshoot the installation:

> ###### Make sure that the below ports are open at your firewall or Cloud Security Groups and NACLs
>
> 6443, 10250, 10259, 10257, 2379, 2380, 30000-32767
>
> ###### At Master Nodes, open the below ports over OS firewall
>
> ###### UFW
>
> Master Nodes:
> sudo ufw allow 6443/tcp
> sudo ufw allow 10250/tcp
> sudo ufw allow 10259/tcp
> sudo ufw allow 10257/tcp
> sudo ufw allow 2379/tcp
> sudo ufw allow 2380/tcp
>
> Worker Nodes:
> sudo ufw allow 30000:32767/tcp
> sudo ufw allow 10250/tcp
>
> sudo ufw disable
> sudo ufw enable
>
> ###### iptables:
> sudo iptables -S
> sudo iptables -L
>
> Master:
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 6443 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 10250 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 10259 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 10257 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 2379 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 2380 -j ACCEPT
>
> Worker:
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 10250 -j ACCEPT
> sudo iptables -A INPUT -p tcp -s <trusted-cidr> --dport 30000:32767 -j ACCEPT
>
> sudo iptables-save
> service iptables stop
> service iptables start

##### References:

[1]: https://kubernetes.io/docs/setup/#production-environment
[2]: https://docs.docker.com/engine/install/#server

-------------------------------------------------------------------------------- /Kubernetes_Services.md: --------------------------------------------------------------------------------

# Kubernetes Services in Detail

## Introduction
Kubernetes Services act as a critical component for providing stable network endpoints within a cluster, facilitating seamless communication and load balancing for Pods. Let's explore the different types of Kubernetes Services and understand their use cases.

## Service Types

### 1. ClusterIP
- **Description:** Default service type for internal communication within the cluster.
- **Use Cases:** Internal service-to-service communication within the cluster.

**YAML Example:**
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-clusterip-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

### 2. NodePort
- **Description:** Exposes a specific port on each node, allowing external access to the service.
29 | - **Use Cases:** Development, testing, or debugging scenarios where external access is required. 30 | 31 | **YAML Example:** 32 | ```yaml 33 | apiVersion: v1 34 | kind: Service 35 | metadata: 36 | name: my-nodeport-service 37 | spec: 38 | selector: 39 | app: my-app 40 | ports: 41 | - protocol: TCP 42 | port: 80 43 | targetPort: 8080 44 | type: NodePort 45 | ``` 46 | 47 | ### 3. LoadBalancer 48 | - **Description:** Automatically provisions a cloud provider's load balancer, providing an external IP for the service. 49 | - **Use Cases:** Production environments requiring external access, high availability, and load balancing. 50 | 51 | **YAML Example:** 52 | ```yaml 53 | apiVersion: v1 54 | kind: Service 55 | metadata: 56 | name: my-loadbalancer-service 57 | spec: 58 | selector: 59 | app: my-app 60 | ports: 61 | - protocol: TCP 62 | port: 80 63 | targetPort: 8080 64 | type: LoadBalancer 65 | ``` 66 | 67 | ### 4. Headless 68 | - **Description:** A specialized service without a cluster IP, providing direct access to individual pod IP addresses. 69 | - **Use Cases:** Stateful applications, distributed databases, or scenarios requiring direct communication with specific pods. 70 | 71 | **YAML Example:** 72 | ```yaml 73 | apiVersion: v1 74 | kind: Service 75 | metadata: 76 | name: my-headless-service 77 | spec: 78 | clusterIP: None 79 | selector: 80 | app: my-app 81 | ports: 82 | - protocol: TCP 83 | port: 80 84 | targetPort: 8080 85 | ``` 86 | 87 | ### 5. ExternalName 88 | - **Description:** Maps a service to an external DNS name without proxying or load balancing. 89 | - **Use Cases:** Integration with external services or accessing resources outside the cluster. 90 | 91 | **YAML Example:** 92 | ```yaml 93 | apiVersion: v1 94 | kind: Service 95 | metadata: 96 | name: my-externalname-service 97 | spec: 98 | type: ExternalName 99 | externalName: external-service.example.com 100 | ``` 101 | 102 | ## Port Mapping 103 | Port mapping is crucial for directing network traffic from services to pods. Two main ports are specified: 104 | 105 | - **Port:** Exposes the service on this port for receiving traffic. 106 | - **TargetPort:** Directs network traffic to the pod's port where it will be received. 107 | 108 | Example (from ClusterIP service): 109 | ```yaml 110 | ports: 111 | - protocol: TCP 112 | port: 80 113 | targetPort: 8080 114 | ``` 115 | 116 | ## Traffic Flow 117 | 118 | ### ClusterIP 119 | - **Access:** Service is exposed on a cluster-internal IP. 120 | - **Traffic Flow:** Requests to the service port are forwarded to the pod's targetPort. 121 | 122 | ### NodePort 123 | - **Access:** Exposed on each node's IP, accessible both within and outside the cluster. 124 | - **Traffic Flow:** NodePort is allocated on each node; traffic to this port is directed to the pod's targetPort. 125 | 126 | ### LoadBalancer 127 | - **Access:** Exposed to the internet with a stable IP. 128 | - **Traffic Flow:** LoadBalancer provisions a cloud load balancer, directing traffic to NodePort and ClusterIP services. 129 | 130 | This detailed understanding of Kubernetes Services and their types provides a foundation for deploying and managing services within your Kubernetes cluster. 
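To see these fields on a live object, you can read them back with `kubectl` JSONPath — a quick illustrative sketch using the example service names above, offered as a convenience rather than as part of the original text:

```sh
# The stable virtual IP assigned to the ClusterIP service.
kubectl get service my-clusterip-service -o jsonpath='{.spec.clusterIP}'

# The node port that Kubernetes allocated (or that you pinned) for the NodePort service.
kubectl get service my-nodeport-service -o jsonpath='{.spec.ports[0].nodePort}'
```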
-------------------------------------------------------------------------------- /Kubernetes_Storgae.md: --------------------------------------------------------------------------------

### **Kubernetes Storage**

In the real world, applications perform read and write operations over data, so we need some mechanism to store data for applications running inside the Kubernetes cluster. There are two ways you can store data in Kubernetes:

**Note: For any kind of volume in a given pod, data is preserved across container restarts.**

So, let's first understand: what is a Volume?

We've seen in Docker that if you store data locally inside a container, it is lost when the container crashes. The new container that replaces it has none of the previous data, so the data is lost completely. Thus, we cannot rely on containers themselves for storing data.

Also, in the case of a Pod, we can have multiple containers running inside the same Pod, and there will be scenarios where a sidecar container needs to access or process the data produced by the main application container.

To solve the above two problems, Kubernetes **Volumes** come to the rescue. A Kubernetes Volume is exposed to applications as an abstraction, which eventually stores the data on the physical storage that you have provided. At its core, a volume could be a directory, a LUN, a block volume, an NFS file share, and so on.

**Types of Volumes**:

1: **Ephemeral Volumes**: This type of volume is tied to the lifecycle of the Pod, independent of the lifecycle of the containers running inside it. You can also share data between the containers inside the same Pod. The only issue with this type of volume is that you will lose your data if the Pod crashes. So, it is not a good fit for any production kind of workload.

2: **Persistent Volumes** (PV): Independent of the Pod's lifecycle (depending upon the RECLAIM POLICY), so your data remains available even after the Pod is deleted. There are different types of PV.

**Ephemeral Volumes**

**emptyDir**

An emptyDir is an empty volume that is created when a Pod is assigned to a node. It exists as long as that Pod is running on that node. All containers in the Pod can read and write the same files in the emptyDir volume, even though the mount point may be different in each container. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.

**hostPath**

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. For example, in the case of Docker, it uses a hostPath of /var/lib/docker. HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible.

**Persistent Volumes**

With the volumes we have seen so far (Ephemeral Volumes), the data is available only for the life of the Pod. And in the Kubernetes world, Pods are ephemeral; they come and go. So, ephemeral volumes are only recommended for test applications.

Now the question arises: how do we keep data persistent in Kubernetes? The answer is PersistentVolume, PersistentVolumeClaim, and StorageClass. We will go through all of these one by one. Before that, I would like to mention that in Kubernetes you can provision PersistentVolumes either **Manually** or **Dynamically**.
Also, we will be discussing the Container Storage Interface (CSI), which is a kind of plugin that we install based on the storage we have selected. 52 | 53 | Manual Provisioning: 54 | 55 | 56 | 57 | **Persistent Volumes**: 58 | A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. 59 | 60 | **PersistentVolumeClaims**: 61 | 62 | A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany, or ReadWriteMany). 63 | 64 | 65 | 66 | > See the Storage_Lab.txt 67 | > 68 | > 69 | 70 | 71 | 72 | **Container Storage Interface (CSI)**: 73 | Kubernetes just provides an interface for Pods to consume storage. Storage properties such as speed, replication, and resiliency are all the storage provider's responsibility. For any storage provider to integrate their storage with Kubernetes, they have to write a CSI plugin. 74 | 75 | 76 | 77 | 78 | 79 | Maintaining the CSI code, updating it, and fixing any bugs in it are the storage provider's responsibility; Kubernetes has nothing to do with that. 80 | 81 | https://kubernetes-csi.github.io/docs/ 82 | 83 | **Dynamic Provisioning:** 84 | 85 | Storage Class: 86 | 87 | StorageClass is an API object. It provides a way for administrators to describe the "classes" of storage they offer. With a microservices architecture in mind, storage needs to be dynamic, and this is what StorageClass offers out of the box. For a large enterprise-level application, it is not possible for an administrator to provision volumes manually; that approach simply does not scale. **StorageClass enables dynamic provisioning of volumes.** 88 | 89 | Every cloud provider supports different provisioners; check your cloud documentation for more information. 90 | 91 | 92 | 93 | 94 | 95 | 96 | 97 | > See Storage_Lab.txt 98 | 99 | 100 | 101 | So, we have seen how we can use the different types of Volumes to share temporary data among containers running in the same pod, how to persist data when the pod dies, and the different ways to provision a volume. 102 | 103 | -------------------------------------------------------------------------------- /Minikube Installation: -------------------------------------------------------------------------------- 1 | Introduction 2 | Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. 3 | Minikube is available for Linux, macOS, and Windows systems. Here, you will install Kubernetes with Minikube on Ubuntu 20.04. 4 | 5 | Minikube is a free utility, available on a variety of platforms, that facilitates the setup of single-node Kubernetes clusters. 6 | Numerous Kubernetes technologies, including NodePorts, DNS, the Container Network Interface, Ingress, ConfigMaps, Secrets, etc., are supported. 7 | 8 | Prerequisites 9 | Ubuntu 20.04 desktop installed on your system 10 | A root password set up on your system 11 | 12 | Step 1 – Updating the system: 13 | 1) Make sure that the system is running the latest package versions.
14 | 15 | $ apt-get update -y 16 | $ apt-get upgrade -y 17 | 18 | 2) Restart the computer so the modifications take effect. 19 | 20 | 3) Then, install a few packages that will be useful. 21 | $ apt-get install curl wget apt-transport-https -y 22 | 23 | Step 2 – Install VirtualBox Hypervisor: 24 | Minikube supports both the KVM and VirtualBox hypervisors; here we install VirtualBox. 25 | $ apt-get install virtualbox virtualbox-ext-pack 26 | 27 | Step 3 – Download and Install the latest Minikube: 28 | $ wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 29 | 30 | Copy the downloaded file to /usr/local/bin. 31 | $ cp minikube-linux-amd64 /usr/local/bin/minikube 32 | 33 | Give execution permission to Minikube. 34 | $ chmod 755 /usr/local/bin/minikube 35 | 36 | Check the version of Minikube: 37 | $ minikube version 38 | 39 | 40 | Step 4 – Install Kubectl: 41 | 42 | Download the GPG key, then add it. 43 | $ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - 44 | 45 | Add the Kubernetes apt repository with the following command: 46 | $ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | tee /etc/apt/sources.list.d/kubernetes.list 47 | 48 | Update the repository and install Kubectl: 49 | $ apt-get update -y 50 | $ apt-get install kubectl -y 51 | 52 | Check the version of Kubectl installed: 53 | kubectl version -o json 54 | 55 | Step 5 – Start Minikube: 56 | This command downloads the VirtualBox boot image and sets up the Kubernetes cluster. Make sure you 57 | run this command as a normal (non-root) user. 58 | 59 | 60 | $ minikube start 61 | 62 | * minikube v1.29.0 on Ubuntu 20.04 63 | * Automatically selected the virtualbox driver. Other choices: ssh, none 64 | * Downloading VM boot image ... 65 | > minikube-v1.29.0-amd64.iso....: 65 B / 65 B [---------] 100.00% ? p/s 0s 66 | > minikube-v1.29.0-amd64.iso: 276.35 MiB / 276.35 MiB 100.00% 115.29 MiB 67 | * Starting control plane node minikube in cluster minikube 68 | * Downloading Kubernetes v1.26.1 preload ... 69 | > preloaded-images-k8s-v18-v1...: 397.05 MiB / 397.05 MiB 100.00% 78.29 M 70 | * Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) .. 71 | 72 | 73 | Check the status of the cluster: 74 | 75 | $ kubectl cluster-info 76 | Kubernetes control plane is running at https://192.168.59.100:8443 77 | CoreDNS is running at https://192.168.59.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 78 | 79 | 80 | $ kubectl cluster-info dump 81 | $ kubectl config view 82 | 83 | $ kubectl get nodes 84 | NAME STATUS ROLES AGE VERSION 85 | minikube Ready control-plane 2m36s v1.26.1 86 | 87 | 88 | $ kubectl get pods --all-namespaces 89 | NAMESPACE NAME READY STATUS RESTARTS AGE 90 | kube-system coredns-787d4945fb-jx97g 1/1 Running 0 2m56s 91 | kube-system etcd-minikube 1/1 Running 0 3m7s 92 | kube-system kube-apiserver-minikube 1/1 Running 0 3m11s 93 | kube-system kube-controller-manager-minikube 1/1 Running 0 3m7s 94 | kube-system kube-proxy-9d9kk 1/1 Running 0 2m56s 95 | kube-system kube-scheduler-minikube 1/1 Running 0 3m11s 96 | kube-system storage-provisioner 1/1 Running 1 (2m22s ago) 3m4s 97 | 98 | -------------------------------------------------------------------------------- /Overview of Kubernetes.txt: -------------------------------------------------------------------------------- 1 | In this section, we will have our first hands-on introduction to Kubernetes with some fundamental 2 | Kubernetes components.
3 | 4 | Setting up Kubernetes 5 | ======================== 6 | 7 | 1: Setting up a Two-Node Cluster with kubeadm 8 | ============================================= 9 | 10 | 11 | MASTER Node: 10.0.0.4 40.81.243.184 12 | WorkerNode: 10.0.0.5 40.81.245.7 13 | 14 | 15 | Step 1: Operating System Changes/Configuration updates: 16 | This is a 2-node Kubernetes cluster with the below specifications: 17 | MasterNode 10.0.0.4 Linux (ubuntu 18.04) 18 | WorkerNode1 10.0.0.5 Linux (ubuntu 18.04) 19 | 20 | 21 | 22 | sudo apt-get update && sudo apt-get install -y apt-transport-https curl 23 | echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list 24 | sudo apt-get update 25 | sudo apt-get install -y kubelet kubeadm kubectl 26 | sudo apt-mark hold kubelet kubeadm kubectl 27 | 28 | kubeadm init --apiserver-advertise-address=10.0.0.4 --pod-network-cidr=192.168.0.0/16 29 | 30 | 31 | $ apt update 32 | $ apt install -y curl gnupg2 33 | $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - 34 | $ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 35 | $ apt update 36 | $ apt install -y containerd.io docker-ce 37 | 38 | 39 | 40 | Configure the Docker daemon, in particular to use systemd for the management of the container's cgroups. 41 | $ mkdir /etc/docker 42 | $ cat <<EOF | sudo tee /etc/docker/daemon.json 43 | { 44 | "exec-opts": ["native.cgroupdriver=systemd"] 45 | } 46 | EOF 47 | 48 | -------------------------------------------------------------------------------- /PODS.md: -------------------------------------------------------------------------------- ``` 166 | kubectl logs pod4 second-container -f 167 | ``` 168 | 169 | 170 | 171 | ``` 172 | Pod Running a Container with Resource Requirements 173 | 174 | vi container-pod4.yaml 175 | 176 | apiVersion: v1 177 | kind: Pod 178 | metadata: 179 | name: pod4 180 | spec: 181 | containers: 182 | - name: container4 183 | image: nginx 184 | resources: 185 | limits: 186 | memory: "128M" 187 | cpu: "1" 188 | requests: 189 | memory: "64M" 190 | cpu: "0.5" 191 | ``` 192 | 193 | 194 | 195 | **Life Cycle of a Pod** 196 | 197 | - **Pending**: This means that the pod has been submitted to the cluster, but the controller hasn't created all its containers yet. 198 | - **Running**: This state means that the pod has been assigned to one of the cluster nodes and at least one of the containers is either running or is in the process of starting up. 199 | - **Succeeded**: This state means that the pod has run, and all of its containers have been terminated with success. 200 | - **Failed**: This state means the pod has run and at least one of the containers has terminated with a non-zero exit code. 201 | - **Unknown**: This means that the state of the pod could not be found. This may be because of the inability of the controller to connect with the node that the pod was assigned to. 202 | 203 | 204 | 205 | ```shell 206 | More Commands 207 | 208 | $ kubectl get pods 209 | 210 | $ kubectl explain pods 211 | 212 | $ kubectl explain pod.spec 213 | 214 | $ kubectl create -f mypod.yaml 215 | 216 | $ kubectl get po mypod -o yaml 217 | $ kubectl get po mypod -o json 218 | 219 | 220 | Retrieving a pod's log with kubectl logs: 221 | 222 | To see your pod's log or a specific container's log, you run: 223 | $ docker logs <container-id> 224 | $ kubectl logs mypod 225 | 226 | ``` 227 | 228 | -------------------------------------------------------------------------------- /Pod.YAML: -------------------------------------------------------------------------------- 1 | PODS: 2 | ========= 3 | 4 | 5 | A pod is the basic building block of Kubernetes, and it can be described as the basic unit of deployment. 6 | Just like we define a process as a program in execution, we can define a pod as a running process in the Kubernetes world.
7 | Pods are the smallest unit of replication in Kubernetes. A pod can have any number of containers running in it. 8 | A pod is basically a wrapper around containers running on a node. 9 | 10 | Using pods instead of individual containers has a few benefits. For example, containers in a pod have shared volumes, 11 | Linux namespaces, and cgroups. 12 | 13 | Each pod has a unique IP address and the port space is shared by all the containers in that pod. 14 | This means that different containers inside a pod can communicate with each other using their corresponding ports on 15 | localhost. 16 | 17 | 18 | Ideally, we should use multiple containers in a pod only when we want them to be managed and located together in the 19 | Kubernetes cluster. For example, we may have a container running our application and another container that fetches 20 | logs from the application container and forwards them to some central storage. In this case, we would want both of our 21 | containers to stay together, to share the same IP so that they can communicate over localhost, and to share the same 22 | storage so that the second container can read the logs our application container is generating. 23 | 24 | 25 | 26 | Pod Configuration 27 | ==================== 28 | 29 | General Structure of POD.yaml 30 | 31 | apiVersion: v1 32 | kind: Pod 33 | metadata: 34 | name: pod-name 35 | spec: 36 | containers: 37 | - name: container1-name 38 | image: container1-image 39 | - name: container2-name 40 | image: container2-image 41 | 42 | 43 | apiVersion: Version of the Kubernetes API we are going to use. 44 | kind: The kind of Kubernetes object we are trying to create, which is a Pod in this case. 45 | metadata: Metadata or information that uniquely identifies the object we're creating. 46 | spec: Specification of our pod, such as container name, image name, volumes, and resource requests. 47 | 48 | Note: 49 | apiVersion, kind, and metadata apply to all types of Kubernetes objects and are required fields. spec is also a required field; however, its layout is different for different types of objects. 
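Tip: if you are unsure which fields are allowed under each of these sections, kubectl explain prints the
schema straight from the API server. A quick sketch (it works for any object type, not just Pods):

kubectl explain pod.metadata
kubectl explain pod.spec.containers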
50 | 51 | 52 | Creating a Pod with a Single Container: 53 | ------------- 54 | 55 | vi single-container-pod.yaml 56 | 57 | apiVersion: v1 58 | kind: Pod 59 | metadata: 60 | name: first-pod 61 | spec: 62 | containers: 63 | - name: my-first-container 64 | image: nginx 65 | 66 | 67 | kubectl create -f single-container-pod.yaml 68 | kubectl get pods 69 | kubectl describe pod first-pod 70 | 71 | 72 | 73 | Creating a Pod in a Different Namespace by Specifying the Namespace in the CLI: 74 | ------------------ 75 | kubectl get namespaces 76 | kubectl --namespace kube-public create -f single-container-pod.yaml 77 | kubectl --namespace kube-public get pods 78 | 79 | 80 | Creating a Pod in a Different Namespace by Specifying the Namespace in the Pod Configuration YAML file: 81 | ---------- 82 | kubectl get namespaces 83 | 84 | vi single-container-pod-with-namespace.yaml 85 | 86 | apiVersion: v1 87 | kind: Pod 88 | metadata: 89 | name: first-pod-with-namespace 90 | namespace: kube-public 91 | spec: 92 | containers: 93 | - name: my-first-container 94 | image: nginx 95 | 96 | 97 | kubectl create -f single-container-pod-with-namespace.yaml 98 | kubectl --namespace kube-public get pods 99 | 100 | 101 | Changing the Namespace for All Subsequent kubectl Commands: 102 | -------------- 103 | kubectl get namespaces 104 | kubectl config set-context $(kubectl config current-context) --namespace kube-public 105 | 106 | 107 | Creating a Pod Running a Container That Exposes a Port: 108 | ------------- 109 | vi pod-with-exposed-port.yaml 110 | 111 | 112 | apiVersion: v1 113 | kind: Pod 114 | metadata: 115 | name: port-exposed-pod 116 | spec: 117 | containers: 118 | - name: container-with-exposed-port 119 | image: nginx 120 | ports: 121 | - containerPort: 80 122 | 123 | kubectl create -f pod-with-exposed-port.yaml 124 | kubectl port-forward pod/port-exposed-pod 80 125 | curl 127.0.0.1 126 | kubectl logs port-exposed-pod 127 | 128 | 129 | Creating a Pod with Multiple Containers Running inside It: 130 | ============= 131 | vi multiple-container-pod.yaml 132 | 133 | apiVersion: v1 134 | 135 | kind: Pod 136 | 137 | metadata: 138 | 139 | name: multi-container-pod 140 | 141 | spec: 142 | 143 | containers: 144 | 145 | - name: first-container 146 | 147 | image: nginx 148 | 149 | - name: second-container 150 | 151 | image: ubuntu 152 | 153 | command: 154 | 155 | - /bin/bash 156 | 157 | - -ec 158 | 159 | - while :; do echo '.'; sleep 5; done 160 | 161 | kubectl create -f multiple-container-pod.yaml 162 | 163 | kubectl describe pod multi-container-pod 164 | 165 | kubectl logs <pod-name> <container-name> 166 | kubectl logs multi-container-pod second-container -f 167 | 168 | 169 | 170 | Life Cycle of a Pod 171 | 172 | kubectl get pod 173 | 174 | Pending: This means that the pod has been submitted to the cluster, but the controller hasn't created all its containers yet. 175 | Running: This state means that the pod has been assigned to one of the cluster nodes and at least one of the containers is either running or is in the process of starting up. 176 | Succeeded: This state means that the pod has run, and all of its containers have been terminated with success. 177 | Failed: This state means the pod has run and at least one of the containers has terminated with a non-zero exit code. 178 | Unknown: This means that the state of the pod could not be found, often because the controller cannot connect with the node that the pod was assigned to. 179 | 180 | 181 | 182 | How to Communicate with Kubernetes (API Server) 183 | 184 | Here we will build a foundational understanding of the Kubernetes API server and the various 185 | ways of interacting with it.
186 | 187 | 188 | We will learn how kubectl and other HTTP clients communicate with the Kubernetes API server. 189 | 190 | 191 | 192 | 193 | 194 | The Kubernetes API Server: 195 | 196 | - In Kubernetes, all communications and operations between the control plane components and external clients, such as 197 | kubectl, are translated into RESTful API calls that are handled by the API server. 198 | 199 | 200 | 234 | 235 | Deployment 236 | ================== 237 | 238 | apiVersion: apps/v1 239 | 240 | kind: Deployment 241 | 242 | metadata: 243 | 244 | name: kubeserve 245 | 246 | labels: 247 | 248 | app: kubeserve 249 | 250 | spec: 251 | 252 | replicas: 3 253 | 254 | selector: 255 | 256 | matchLabels: 257 | 258 | app: kubeserve 259 | 260 | template: 261 | 262 | metadata: 263 | 264 | labels: 265 | 266 | app: kubeserve 267 | 268 | spec: 269 | 270 | containers: 271 | 272 | - name: nginx 273 | 274 | image: nginx 275 | 276 | ports: 277 | 278 | - containerPort: 80 279 | 280 | 281 | kubectl apply -f sample-deployment.yaml 282 | kubectl get deployments 283 | kubectl describe deploy kubeserve 284 | 285 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Advanced Kubernetes Course 2 | 3 | ## Course Overview 4 | 5 | This course is tailored for learners to master Kubernetes, focusing on in-depth concepts, cluster management, security, networking, storage, and CI/CD integrations. It combines detailed theoretical explanations with practical, hands-on labs for a holistic learning experience. 6 | 7 | ## Modules 8 | 9 | ### Module 1: Deep Dive into Kubernetes Architecture 10 | 11 | #### Topics 12 | 13 | - Kubernetes Architecture Overview 14 | - Understanding Nodes, Pods, and Controllers 15 | - Etcd, API Server, Scheduler, and other Control Plane Components 16 | 17 | #### Lab 1: Explore Kubernetes Cluster Architecture 18 | 19 | - **Objective:** Set up a Kubernetes cluster and explore its components. 20 | - **Guide:** [Exploring Kubernetes Architecture](#) 21 | 22 | ### Module 2: Advanced Pod Management 23 | 24 | #### Topics 25 | 26 | - Pod Lifecycle and Management 27 | - Advanced Scheduling - Affinity/Anti-affinity, Taints, and Tolerations 28 | - Managing Container Resources 29 | 30 | #### Lab 2: Managing Pods and Scheduling 31 | 32 | - **Objective:** Implement advanced pod scheduling and resource management. 33 | - **Guide:** [Advanced Pod Management](#) 34 | 35 | ### Module 3: Kubernetes Networking 36 | 37 | #### Topics 38 | 39 | - Deep Dive into Kubernetes Networking Model 40 | - Services, Ingress, and Network Policies 41 | - Implementing Service Meshes with Istio 42 | 43 | #### Lab 3: Configuring Networking in Kubernetes 44 | 45 | - **Objective:** Set up networking, expose services, and implement network policies. 46 | - **Guide:** [Kubernetes Networking](#) 47 | 48 | ### Module 4: Persistent Storage in Kubernetes 49 | 50 | #### Topics 51 | 52 | - Understanding Persistent Volumes (PV) and Persistent Volume Claims (PVC) 53 | - StorageClasses and Dynamic Provisioning 54 | - StatefulSets for Stateful Applications 55 | 56 | #### Lab 4: Managing Storage in Kubernetes 57 | 58 | - **Objective:** Configure persistent storage using PV, PVCs, and StorageClasses.
59 | - **Guide:** [Managing Storage](#) 60 | 61 | ### Module 5: Security in Kubernetes 62 | 63 | #### Topics 64 | 65 | - Kubernetes Security Best Practices 66 | - Managing Kubernetes Secrets and ConfigMaps 67 | - Role-Based Access Control (RBAC) 68 | 69 | #### Lab 5: Implementing Security in Kubernetes 70 | 71 | - **Objective:** Secure a Kubernetes cluster using best practices and RBAC. 72 | - **Guide:** [Kubernetes Security](#) 73 | 74 | ### Module 6: Monitoring and Logging 75 | 76 | #### Topics 77 | 78 | - Monitoring Cluster and Application Health with Prometheus 79 | - Centralized Logging with Fluentd and Elasticsearch 80 | - Visualizing Metrics with Grafana 81 | 82 | #### Lab 6: Setting Up Monitoring and Logging 83 | 84 | - **Objective:** Implement monitoring and logging solutions in Kubernetes. 85 | - **Guide:** [Monitoring and Logging](#) 86 | 87 | ### Module 7: Advanced Kubernetes Deployments and CI/CD 88 | 89 | #### Topics 90 | 91 | - Rolling Updates and Rollbacks 92 | - Helm Charts for Package Management 93 | - Integrating Kubernetes with CI/CD Pipelines (Jenkins, GitLab CI/CD) 94 | 95 | #### Lab 7: CI/CD in Kubernetes 96 | 97 | - **Objective:** Deploy applications using CI/CD pipelines and manage releases with Helm. 98 | - **Guide:** [CI/CD Integrations](#) 99 | 100 | ## Additional Resources 101 | 102 | - [Official Kubernetes Documentation](https://kubernetes.io/docs/home/) 103 | - [Kubernetes GitHub Repository](https://github.com/kubernetes/kubernetes) 104 | 105 | ## Post-Training Evaluation 106 | 107 | Participants will engage in a capstone project, designing, deploying, and managing a full-stack application on Kubernetes, demonstrating their mastery of concepts learned throughout the course. 108 | 109 | ## Feedback 110 | 111 | We value your feedback to continually improve our course. Please share your experiences and suggestions. 112 | -------------------------------------------------------------------------------- /ReplicaSet.md: -------------------------------------------------------------------------------- 1 | # ReplicaSet 2 | 3 | - In the Pods section, you've learned that pods represent the basic deployable unit in Kubernetes. 4 | - We have seen how to create, supervise, and manage them manually. 5 | - We've also seen the challenges of launching applications in pods and managing applications with ONLY pods. 6 | - So, in real-world use cases, you want your deployments to stay up and running automatically and remain healthy without any manual intervention. 7 | - To do this, you almost never create pods directly. Instead, you create other types of resources, such as **ReplicationControllers** or **Deployments**, which then create and manage the actual pods. 8 | 9 | 10 | 11 | ##### What is a **ReplicationController**? 12 | 13 | - A ReplicationController is a Kubernetes resource that ensures its pods are always kept running. 14 | - If a pod disappears for any reason, such as when a node disappears from the cluster or because the pod was evicted from the node, the ReplicationController notices the missing pod and creates a replacement pod; for contrast with the ReplicaSet labs that follow, a minimal manifest is sketched below.
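Here is what a minimal ReplicationController manifest looks like — an illustrative sketch (the name and labels are assumptions). Note that it lives in the core `v1` API and only supports a simple equality-based selector, whereas the ReplicaSet used in the labs below is in `apps/v1` and supports set-based selectors such as `matchLabels`:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc          # assumed name
spec:
  replicas: 2
  selector:
    app: nginx            # equality-based selector only
  template:
    metadata:
      labels:
        app: nginx        # must match the selector above
    spec:
      containers:
      - name: nginx-container
        image: nginx
```

In practice, ReplicaSets — usually managed indirectly through Deployments — have superseded ReplicationControllers.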
15 | 16 | 17 | 18 | ```yaml 19 | 20 | Creating a Simple ReplicaSet with nginx Containers 21 | 22 | vi replicaset-nginx.yaml 23 | 24 | apiVersion: apps/v1 25 | 26 | kind: ReplicaSet 27 | 28 | metadata: 29 | 30 | name: nginx-replicaset 31 | 32 | labels: 33 | 34 | app: nginx 35 | 36 | spec: 37 | 38 | replicas: 2 39 | 40 | selector: 41 | 42 | matchLabels: 43 | 44 | environment: production 45 | 46 | template: 47 | 48 | metadata: 49 | 50 | labels: 51 | 52 | environment: production 53 | 54 | spec: 55 | 56 | containers: 57 | 58 | - name: nginx-container 59 | 60 | image: nginx 61 | 62 | 63 | kubectl create -f replicaset-nginx.yaml 64 | kubectl get rs nginx-replicaset 65 | kubectl get pods 66 | kubectl describe rs nginx-replicaset 67 | ``` 68 | 69 | ``` 70 | Deleting Pods Managed by a ReplicaSet 71 | ====================================== 72 | 73 | kubectl get pods 74 | kubectl delete pod <pod-name> 75 | kubectl describe rs nginx-replicaset 76 | kubectl delete rs nginx-replicaset 77 | kubectl get pods 78 | ``` 79 | 80 | 81 | 82 | ``` 83 | Scaling a ReplicaSet 84 | =========================================== 85 | 86 | vi replicaset-nginx.yaml 87 | 88 | apiVersion: apps/v1 89 | 90 | kind: ReplicaSet 91 | 92 | metadata: 93 | 94 | name: nginx-replicaset 95 | 96 | labels: 97 | 98 | app: nginx 99 | 100 | spec: 101 | 102 | replicas: 2 103 | 104 | selector: 105 | 106 | matchLabels: 107 | 108 | environment: production 109 | 110 | template: 111 | 112 | metadata: 113 | 114 | labels: 115 | 116 | environment: production 117 | 118 | spec: 119 | 120 | containers: 121 | 122 | - name: nginx-container 123 | 124 | image: nginx 125 | 126 | kubectl apply -f replicaset-nginx.yaml 127 | kubectl get pods 128 | kubectl scale --replicas=4 rs nginx-replicaset 129 | kubectl get pods 130 | kubectl scale --replicas=1 rs nginx-replicaset 131 | kubectl get pods 132 | kubectl delete rs nginx-replicaset 133 | ``` 134 | 135 | -------------------------------------------------------------------------------- /ServiceDescoveryConcept.md: -------------------------------------------------------------------------------- 1 | 2 | # Service Discovery in Kubernetes 3 | 4 | ## Overview 5 | 6 | In this section, we will explore how to route traffic between various Kubernetes objects and make them discoverable from both within and outside our cluster. We will introduce Kubernetes Services and explain how to use them to expose applications deployed using controllers such as Deployments. By the end, you will be able to make your application accessible to the external world and understand the different types of Services available. 7 | 8 | ### Problem Statement 9 | 10 | Each Kubernetes Pod gets its own IP address, which can change if the Pod is relaunched. To ensure reliability, we use Deployments to maintain a fixed number of Pods. However, the changing IP addresses of Pods create the need to make Pods discoverable within the cluster. 11 | 12 | ### Solution: Kubernetes Services 13 | 14 | Kubernetes Services enable communication between different components of our application, as well as between different applications. Services help connect applications with other applications or users. 15 | 16 | ## What is a Service? 17 | 18 | A Service defines policies by which a logical set of Pods can be accessed. It allows the discovery and access of Pods, either within the cluster or externally.
19 | 20 | ## Service Configuration 21 | 22 | Here is an example manifest for a Kubernetes Service: 23 | 24 | ```yaml 25 | apiVersion: v1 26 | kind: Service 27 | metadata: 28 | name: my-sample-service 29 | spec: 30 | ports: 31 | - port: 80 32 | targetPort: 80 33 | selector: 34 | key: value 35 | ``` 36 | 37 | - `apiVersion` is set to v1. 38 | - `kind` is always "Service." 39 | - In the `metadata` field, we specify the name of the Service. 40 | 41 | ## Types of Services 42 | 43 | There are four different types of Services in Kubernetes: 44 | 45 | 1. **NodePort**: Makes internal Pod(s) accessible on a port on the node where the Pod(s) are running. 46 | 47 | 2. **ClusterIP**: Exposes the Service on an IP address inside the cluster (default Service type). 48 | 49 | 3. **LoadBalancer**: Exposes the application externally using the cloud provider's load balancer. 50 | 51 | 4. **ExternalName**: Points to a DNS name rather than a set of Pods. It doesn't use selectors. 52 | 53 | ### NodePort Service 54 | 55 | A NodePort Service exposes the application on the same port on all nodes in the cluster. Pods may be running on multiple nodes, and the Service spans across all nodes, making the application accessible via `<NodeIP>:<NodePort>`. 56 | 57 | Example NodePort Service configuration: 58 | 59 | ```yaml 60 | apiVersion: v1 61 | kind: Service 62 | metadata: 63 | name: nginx-service 64 | spec: 65 | type: NodePort 66 | ports: 67 | - targetPort: 80 68 | port: 80 69 | nodePort: 32023 70 | selector: 71 | app: nginx 72 | environment: production 73 | ``` 74 | 75 | - `targetPort`: Port where the application on Pods is exposed. 76 | - `port`: Port of the Service itself. 77 | - `nodePort`: Port on the node to access the Service. 78 | 79 | #### Lab 1: Creating a NodePort Service with Nginx Containers 80 | 81 | - Create a Deployment for Nginx. 82 | - Create a NodePort Service for Nginx. 83 | - Verify the Service and access Nginx. 84 | 85 | ### ClusterIP Service 86 | 87 | A ClusterIP Service exposes the application on a specific IP address accessible only from inside the cluster. It's suitable for communication between different Pods within the cluster. 88 | 89 | Example ClusterIP Service configuration: 90 | 91 | ```yaml 92 | apiVersion: v1 93 | kind: Service 94 | metadata: 95 | name: nginx-service 96 | spec: 97 | type: ClusterIP 98 | ports: 99 | - targetPort: 80 100 | port: 80 101 | selector: 102 | app: nginx 103 | environment: production 104 | ``` 105 | 106 | - `type`: Set to ClusterIP. 107 | - `targetPort` and `port` specify the port of the Pods and the port for the Service. 108 | 109 | #### Lab 2: Creating a ClusterIP Service with Nginx Containers 110 | 111 | - Create a Deployment for Nginx. 112 | - Create a ClusterIP Service for Nginx. 113 | - Verify the Service and access Nginx. 114 | 115 | ### LoadBalancer Service 116 | 117 | A LoadBalancer Service exposes the application externally using the cloud provider's load balancer. It assigns an external IP address to the Service. Configuration may vary based on the cloud provider. 118 | 119 | Example LoadBalancer Service configuration (simplified): 120 | 121 | ```yaml 122 | apiVersion: v1 123 | kind: Service 124 | metadata: 125 | name: loadbalancer-service 126 | spec: 127 | type: LoadBalancer 128 | ports: 129 | - targetPort: 8080 130 | port: 80 131 | selector: 132 | app: nginx 133 | environment: production 134 | ``` 135 | 136 | - Each cloud provider requires specific annotations for LoadBalancer configuration, as illustrated in the sketch below.
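For example, on AWS it is common to add an annotation requesting a Network Load Balancer. This is an illustration only — annotation keys are cloud-specific, so confirm them in your provider's documentation:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer-service
  annotations:
    # AWS-specific: provision an NLB instead of the default Classic Load Balancer
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - targetPort: 8080
      port: 80
  selector:
    app: nginx
    environment: production
```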
137 | 138 | ## Conclusion 139 | 140 | Kubernetes Services play a crucial role in making applications and Pods discoverable and accessible within a cluster and from external sources. Understanding the types and configurations of Services is essential for effective service discovery and communication in Kubernetes. 141 | -------------------------------------------------------------------------------- /ServiceDiscoverLab.md: -------------------------------------------------------------------------------- 1 | 2 | # Kubernetes Labs 3 | 4 | In this repository, you'll find instructions and YAML files for two Kubernetes labs. 5 | 6 | ## Lab 1: Creating a NodePort Service with Nginx Containers 7 | 8 | **my-nginx-deployment.yaml** 9 | 10 | ```yaml 11 | apiVersion: apps/v1 12 | kind: Deployment 13 | metadata: 14 | name: my-nginx-deployment 15 | labels: 16 | app: nginx 17 | spec: 18 | replicas: 3 19 | strategy: 20 | type: Recreate 21 | selector: 22 | matchLabels: 23 | app: nginx 24 | environment: production 25 | template: 26 | metadata: 27 | labels: 28 | app: nginx 29 | environment: production 30 | spec: 31 | containers: 32 | - name: nginx-container 33 | image: nginx 34 | ``` 35 | 36 | **nginx-service-nodeport.yaml** 37 | 38 | ```yaml 39 | apiVersion: v1 40 | kind: Service 41 | metadata: 42 | name: nginx-service-nodeport 43 | spec: 44 | type: NodePort 45 | ports: 46 | - port: 80 47 | targetPort: 80 48 | nodePort: 32023 49 | selector: 50 | app: nginx 51 | environment: production 52 | ``` 53 | 54 | ... 55 | 56 | ## Lab 2: Creating a ClusterIP Service with Nginx Containers 57 | 58 | **nginx-deployment.yaml** 59 | 60 | ```yaml 61 | apiVersion: apps/v1 62 | kind: Deployment 63 | metadata: 64 | name: nginx-deployment 65 | labels: 66 | app: nginx 67 | spec: 68 | replicas: 3 69 | strategy: 70 | type: Recreate 71 | selector: 72 | matchLabels: 73 | app: nginx 74 | environment: production 75 | template: 76 | metadata: 77 | labels: 78 | app: nginx 79 | environment: production 80 | spec: 81 | containers: 82 | - name: nginx-container 83 | image: nginx 84 | ``` 85 | 86 | **nginx-service-clusterip.yaml** 87 | 88 | ```yaml 89 | apiVersion: v1 90 | kind: Service 91 | metadata: 92 | name: nginx-service-clusterip 93 | spec: 94 | type: ClusterIP 95 | ports: 96 | - port: 80 97 | targetPort: 80 98 | selector: 99 | app: nginx 100 | environment: production 101 | ``` 102 | 103 | ... 104 | 105 | These manifests can be kept together in a single Markdown file (e.g., `k8s-labs.md`) for easy reference while working through the labs; a few verification commands follow below.
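After applying the manifests from either lab, a few commands verify the result. This is a sketch: the NodePort value 32023 comes from the lab above, and `<node-ip>` is a placeholder for one of your node addresses:

```bash
# Confirm the services and their ports
kubectl get svc nginx-service-nodeport nginx-service-clusterip

# Confirm the service found the nginx pods (a non-empty ENDPOINTS column)
kubectl get endpoints nginx-service-nodeport

# From outside the cluster, reach nginx through the NodePort
curl http://<node-ip>:32023
```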
108 | -------------------------------------------------------------------------------- /Service_Lab.md: -------------------------------------------------------------------------------- 1 | 2 | ### Nginx Deployment 3 | 4 | **nginx-deployment.yaml:** 5 | ```yaml 6 | apiVersion: apps/v1 7 | kind: Deployment 8 | metadata: 9 | name: nginx-deployment 10 | spec: 11 | replicas: 3 12 | selector: 13 | matchLabels: 14 | app: nginx 15 | template: 16 | metadata: 17 | labels: 18 | app: nginx 19 | spec: 20 | containers: 21 | - name: nginx 22 | image: nginx:latest 23 | ports: 24 | - containerPort: 80 25 | ``` 26 | 27 | Apply the deployment with: 28 | 29 | ```bash 30 | kubectl apply -f nginx-deployment.yaml 31 | ``` 32 | 33 | ### Nginx ClusterIP Service 34 | 35 | **nginx-service-clusterip.yaml:** 36 | ```yaml 37 | apiVersion: v1 38 | kind: Service 39 | metadata: 40 | name: nginx-service-clusterip 41 | spec: 42 | selector: 43 | app: nginx 44 | ports: 45 | - protocol: TCP 46 | port: 80 47 | targetPort: 80 48 | ``` 49 | 50 | Apply the ClusterIP service with: 51 | 52 | ```bash 53 | kubectl apply -f nginx-service-clusterip.yaml 54 | ``` 55 | 56 | ### Nginx NodePort Service 57 | 58 | **nginx-service-nodeport.yaml:** 59 | ```yaml 60 | apiVersion: v1 61 | kind: Service 62 | metadata: 63 | name: nginx-service-nodeport 64 | spec: 65 | selector: 66 | app: nginx 67 | ports: 68 | - protocol: TCP 69 | port: 80 70 | targetPort: 80 71 | type: NodePort 72 | ``` 73 | 74 | Apply the NodePort service with: 75 | 76 | ```bash 77 | kubectl apply -f nginx-service-nodeport.yaml 78 | ``` 79 | 80 | After applying these configurations, you'll have an Nginx Deployment with three replicas and two Services: one using ClusterIP and the other using NodePort. 81 | 82 | - The ClusterIP Service allows internal communication within the cluster. You can access the Nginx service using its ClusterIP. 83 | 84 | - The NodePort Service exposes the Nginx service on a port on each node, making it accessible externally. You can use the NodePort and the IP of any node to access the Nginx service from outside the cluster. 85 | 86 | 87 | Here is the YAML for the NodePort Service with the NodePort specified explicitly (note that `nodePort` belongs inside the entry under `ports`, not at the top level of `spec`): 88 | 89 | **nginx-service-nodeport.yaml:** 90 | ```yaml 91 | apiVersion: v1 92 | kind: Service 93 | metadata: 94 | name: nginx-service-nodeport 95 | spec: 96 | selector: 97 | app: nginx 98 | ports: 99 | - protocol: TCP 100 | port: 80 101 | targetPort: 80 102 | nodePort: 30080 # Specify the NodePort value, adjust as needed 103 | type: NodePort 104 | ``` 105 | -------------------------------------------------------------------------------- /Topics.txt: -------------------------------------------------------------------------------- 1 | Kubernetes: 2 | 3 | Introduction to Kubernetes 4 | Kubernetes Architecture 5 | Introduction to Pods and Services 6 | Application Design and Build 7 | Define, build and modify container images 8 | Understand Jobs and CronJobs 9 | 10 | 11 | 12 | Understand multi-container Pod design patterns (e.g. sidecar, init and others) 13 | Utilize persistent and ephemeral volumes 14 | Application Deployment 15 | Define, build and modify container images 16 | Understand Jobs and CronJobs 17 | Understand multi-container Pod design patterns (e.g.
sidecar, init and others) 18 | Utilize persistent and ephemeral volumes 19 | Application observability and maintenance 20 | Understand API deprecations 21 | Implement probes and health checks 22 | Use provided tools to monitor Kubernetes applications 23 | Utilize container logs 24 | Debugging in Kubernetes 25 | Application Environment, Configuration and Security 26 | Discover and use resources that extend Kubernetes (CRD) 27 | Understand authentication, authorization and admission control 28 | Understanding and defining resource requirements, limits and quotas 29 | Understand ConfigMaps 30 | Create & consume Secrets 31 | Understand ServiceAccounts 32 | Understand SecurityContexts 33 | -------------------------------------------------------------------------------- /minikubeInstalltion.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ARCH=$(arch) 4 | 5 | ### installing Docker 6 | sudo apt-get update -y 7 | sudo apt-get install ca-certificates curl gnupg lsb-release -y 8 | sudo mkdir -p /etc/apt/keyrings 9 | curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg 10 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 11 | sudo apt-get update -y 12 | sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin -y 13 | 14 | if [ "$ARCH" = "x86_64" ] 15 | then 16 | echo executing on $ARCH 17 | # Legacy apt-key based Docker setup, kept commented out for reference: 18 | #sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common -y 19 | #curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - 20 | #sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" 21 | #sudo apt-get update -y 22 | #sudo apt-get install docker-ce docker-ce-cli containerd.io -y 23 | 24 | # Install kubectl and minikube binaries for x86_64 25 | curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl 26 | chmod +x ./kubectl 27 | sudo mv ./kubectl /usr/local/bin/kubectl 28 | 29 | curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 30 | sudo install minikube-linux-amd64 /usr/local/bin/minikube 31 | fi 32 | 33 | if [ "$ARCH" = "aarch64" ] 34 | then 35 | # Install minikube and kubectl for ARM64 36 | curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-arm64 37 | sudo install minikube-linux-arm64 /usr/local/bin/minikube 38 | sudo snap install kubectl --classic 39 | fi 40 | 41 | echo "the script is now ready" 42 | echo "manually run: minikube start --vm-driver=docker --memory=6G --cni=calico to start minikube" 43 | 44 | sudo usermod -aG docker $USER && newgrp docker 45 | -------------------------------------------------------------------------------- /minikube_Installtion_2025.md: -------------------------------------------------------------------------------- 1 | Ref: https://docs.docker.com/engine/install/ubuntu/ 2 | https://kubernetes.io/docs/reference/kubectl/cheatsheet/ 3 | 4 | 5 | # Installing Minikube on Ubuntu Linux 6 | 7 | Minikube is a valuable tool for setting up a local Kubernetes cluster, primarily designed for development purposes. In this guide, we'll walk you through the process of installing Minikube on Ubuntu Linux, enabling you to establish a functional Kubernetes environment in under five minutes.
8 | 9 | ## Prerequisites 10 | 11 | Before you begin, make sure you have the following prerequisites: 12 | 13 | - An active instance of an Ubuntu-based Linux distribution. 14 | - A user account with sudo privileges. 15 | - At least two CPUs. 16 | - A minimum of 2GB of available memory. 17 | - A minimum of 20GB of free disk space. 18 | 19 | ## Installing Docker 20 | 21 | Before you can use Minikube, you need to have Docker CE (Community Edition) installed. Follow these steps to install Docker: 22 | 23 | 1. Add the official Docker GPG key: 24 | 25 | ```bash 26 | sudo apt-get update 27 | sudo apt-get install ca-certificates curl 28 | sudo install -m 0755 -d /etc/apt/keyrings 29 | sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc 30 | sudo chmod a+r /etc/apt/keyrings/docker.asc 31 | ``` 32 | 33 | 2. Add the Docker repository: 34 | 35 | ```bash 36 | # Add the repository to Apt sources: 37 | echo \ 38 | "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \ 39 | $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \ 40 | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null 41 | ``` 42 | 43 | 3. Update the package list: 44 | 45 | ```bash 46 | $ sudo apt-get update 47 | ``` 48 | 49 | 4. Install Docker: 50 | 51 | ```bash 52 | $ sudo apt-get install docker-ce docker-ce-cli containerd.io -y 53 | ``` 54 | 55 | 5. Add your user to the docker group: 56 | 57 | ```bash 58 | $ sudo usermod -aG docker $USER 59 | ``` 60 | 61 | ## Installing Minikube 62 | 63 | Now, let's install Minikube: 64 | 65 | 1. Download the latest Minikube binary: 66 | 67 | ```bash 68 | $ wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 69 | ``` 70 | 71 | 2. Copy the Minikube binary to `/usr/local/bin`: 72 | 73 | ```bash 74 | $ sudo cp minikube-linux-amd64 /usr/local/bin/minikube 75 | ``` 76 | 77 | 3. Give the Minikube executable the proper permissions: 78 | 79 | ```bash 80 | $ sudo chmod +x /usr/local/bin/minikube 81 | ``` 82 | 83 | ## Installing kubectl 84 | 85 | Next, we need to install the `kubectl` command-line utility: 86 | 87 | 1. Download the binary executable file: 88 | 89 | ```bash 90 | $ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl 91 | ``` 92 | 93 | 2. Give the `kubectl` binary executable permission: 94 | 95 | ```bash 96 | $ chmod +x kubectl 97 | ``` 98 | 99 | 3. Move the `kubectl` binary to `/usr/local/bin`: 100 | 101 | ```bash 102 | $ sudo mv kubectl /usr/local/bin/ 103 | ``` 104 | 105 | ## Starting Minikube 106 | 107 | You can now start Minikube with the following command: 108 | 109 | ```bash 110 | $ sudo usermod -aG docker $USER && newgrp docker 111 | $ minikube start --driver=docker 112 | ``` 113 | --------------------------------------------------------------------------------
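As a closing note to the guide above: once `minikube start` completes, a quick sanity check (a short sketch) confirms the cluster is up:

```bash
$ minikube status
$ kubectl get nodes
```

The node should report a `Ready` status within a minute or two.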