├── DaemonSet
│   ├── README.md
│   └── daemonset-deploy.yaml
├── HELM
│   ├── README.md
│   ├── apache
│   │   ├── .helmignore
│   │   ├── Chart.yaml
│   │   ├── apache-0.1.0.tgz
│   │   ├── templates
│   │   │   ├── NOTES.txt
│   │   │   ├── _helpers.tpl
│   │   │   ├── deployment.yaml
│   │   │   ├── hpa.yaml
│   │   │   ├── ingress.yaml
│   │   │   ├── service.yaml
│   │   │   ├── serviceaccount.yaml
│   │   │   └── tests
│   │   │       └── test-connection.yaml
│   │   └── values.yaml
│   └── get_helm.sh
├── HPA_VPA
│   ├── README.md
│   ├── apache-deployment.yml
│   ├── apache-hpa.yml
│   └── apache-vpa.yml
├── Ingress
│   ├── README.md
│   ├── apache.yml
│   ├── ingress.yml
│   └── nginx.yml
├── Kubeadm_Installation_Scripts_and_Documentation
│   ├── Kubeadm_Installation_Common_Using_Containerd.sh
│   ├── Kubeadm_Installation_Master_Using_Containerd.sh
│   ├── Kubeadm_Installation_Slave_Using_Containerd.sh
│   └── README.md
├── Minikube_Windows_Installation.md
├── PersistentVolumes
│   ├── PersistentVolume.yaml
│   ├── PersistentVolumeClaim.yaml
│   ├── Pod.yaml
│   └── README.md
├── RBAC
│   ├── README.md
│   ├── apache-deployment.yml
│   ├── apache-role.yml
│   ├── apache-rolebinding.yml
│   ├── apache-serviceaccount.yml
│   └── namespace.yml
├── README.md
├── Taints-and-Tolerations
│   ├── README.md
│   └── pod.yml
├── ci_cd_with_kubernetes.md
├── eks_cluster_setup.md
├── examples
│   ├── More_K8s_Practice_Ideas.md
│   ├── helm
│   │   ├── README.md
│   │   └── node-app
│   │       ├── .helmignore
│   │       ├── Chart.yaml
│   │       ├── templates
│   │       │   ├── NOTES.txt
│   │       │   ├── _helpers.tpl
│   │       │   ├── deployment.yaml
│   │       │   ├── hpa.yaml
│   │       │   ├── ingress.yaml
│   │       │   ├── service.yaml
│   │       │   ├── serviceaccount.yaml
│   │       │   └── tests
│   │       │       └── test-connection.yaml
│   │       └── values.yaml
│   ├── mysql
│   │   ├── README.md
│   │   ├── configMap.yml
│   │   ├── deployment.yml
│   │   ├── persistentVols.yml
│   │   └── secrets.yml
│   └── nginx
│       ├── README.md
│       ├── deployment.yml
│       ├── pod.yml
│       └── service.yml
├── kind-cluster
│   ├── README.md
│   ├── config.yml
│   ├── dashboard-admin-user.yml
│   └── install.sh
├── kubernetes_architecture.md
├── minikube_installation.md
└── projectGuide
    ├── easyshop-kind.md
    └── online-shop.md

/DaemonSet/README.md:
--------------------------------------------------------------------------------
1 | ## Daemonset in Kubernetes
2 | 
3 | ### What is a daemonset?
4 | - A DaemonSet in Kubernetes is a workload controller that ensures a pod runs on all or some nodes in a cluster.
5 | - Example: If you create a daemonset in a cluster of 3 nodes, then 3 pods will be created. No need to manage replicas.
6 | - If you add another node to the cluster, a new pod will be automatically created on the new node.
7 | 
8 | ### How it works
9 | - A DaemonSet controller monitors for new and deleted nodes, and adds or removes pods as needed.
10 | 
11 | ### What it's used for
12 | - Logging collection
13 | - Kube-proxy
14 | - Weave-net
15 | - Node monitoring
16 | 
17 | ### Example of Daemonset:
18 | 
19 | ![image](https://github.com/user-attachments/assets/71725083-89a7-4e93-a1ed-df4c8adc94c3)
20 | 
21 | - In the above screenshot, you can see 2 daemonsets deployed in the kube-system namespace, i.e., Canal and Kube-proxy.
22 | - Similarly, we can also create a custom daemonset by following the steps below.
23 | 
24 | ### Steps to deploy daemonset:
25 | 
26 | - You will see 1 manifest in this directory (DaemonSet) named daemonset-deploy.yaml.
27 | - Copy the content of the manifest and run the following command to deploy it.
28 | ```bash
29 | kubectl apply -f daemonset-deploy.yaml
30 | ```
31 | - After applying, you will see that the daemonset pods are created and the number of replicas equals the number of nodes, including the control-plane.
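A quick way to confirm this (a minimal sketch, assuming the manifest above was applied unchanged; on clusters where the control-plane node is tainted, a toleration would be needed for a pod to land there) is to list the DaemonSet and its pods per node:

```bash
# DESIRED/CURRENT/READY should match the number of schedulable nodes
kubectl get daemonset nginx-daemonset

# One pod per node, placed by the DaemonSet controller
kubectl get pods -l tier=frontend -o wide
```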
32 | 33 | ![image](https://github.com/user-attachments/assets/e07e794e-4557-4ad1-bb4b-dddc4001697c) 34 | 35 | -------------------------------------------------------------------------------- /DaemonSet/daemonset-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: nginx-daemonset 5 | labels: 6 | tier: frontend 7 | spec: 8 | template: 9 | metadata: 10 | labels: 11 | tier: frontend 12 | name: nginx 13 | spec: 14 | containers: 15 | - image: nginx 16 | name: nginx 17 | ports: 18 | - containerPort: 80 19 | selector: 20 | matchLabels: 21 | tier: frontend 22 | -------------------------------------------------------------------------------- /HELM/README.md: -------------------------------------------------------------------------------- 1 | # Apache Helm Chart 2 | 3 | ### This Helm chart deploys an Apache HTTP Server on a Kubernetes cluster. 4 | 5 | ### Prerequisites 6 | - Kubernetes cluster running (local, cloud, or KIND). 7 | - kubectl installed and configured. 8 | - Helm 3 installed. Install Helm with: 9 | ```bash 10 | 11 | curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 12 | chmod 700 get_helm.sh 13 | ./get_helm.sh 14 | ``` 15 | 16 | ## Chart Structure 17 | ```bash 18 | 19 | apache/ 20 | ├── Chart.yaml # Chart metadata 21 | ├── values.yaml # Default values for customization 22 | └── templates/ # Kubernetes resource templates 23 | ├── deployment.yaml 24 | └── service.yaml 25 | ``` 26 | - Installation 27 | 28 | Clone or navigate to the Helm chart directory. 29 | 30 | Package the chart: 31 | 32 | ```bash 33 | 34 | helm package . 35 | ``` 36 | Install the Helm chart: 37 | ```bash 38 | 39 | helm install apache ./apache --namespace apache-namespace --create-namespace 40 | ``` 41 | Default Configuration 42 | The chart deploys an Apache HTTP Server with the following default values (from values.yaml): 43 | 44 | ```yaml 45 | 46 | replicaCount: 2 47 | 48 | image: 49 | repository: httpd 50 | tag: 2.4 51 | pullPolicy: IfNotPresent 52 | 53 | service: 54 | type: ClusterIP 55 | port: 80 56 | 57 | resources: 58 | requests: 59 | memory: "64Mi" 60 | cpu: "100m" 61 | limits: 62 | memory: "128Mi" 63 | cpu: "200m" 64 | ``` 65 | You can customize these values by modifying values.yaml or passing them via the --set flag during installation. 66 | 67 | Accessing the Deployment 68 | Check the status of your pods and services: 69 | ```bash 70 | 71 | kubectl get pods -n apache-namespace 72 | kubectl get svc -n apache-namespace 73 | ``` 74 | Forward the service port to access Apache locally: 75 | ```bash 76 | 77 | kubectl port-forward svc/apache 8080:80 -n apache-namespace 78 | ``` 79 | Open your browser and visit: 80 | ```arduino 81 | 82 | http://localhost:8080 83 | ``` 84 | Uninstallation 85 | To remove the deployment and associated resources: 86 | 87 | ```bash 88 | 89 | helm uninstall apache -n apache-namespace 90 | kubectl delete namespace apache-namespace 91 | ``` 92 | Customizing the Chart 93 | You can override default values using the --set flag. For example: 94 | 95 | ```bash 96 | 97 | helm install apache ./apache \ 98 | --namespace apache-namespace \ 99 | --set replicaCount=3 \ 100 | --set image.tag=2.4.53 \ 101 | --set service.type=LoadBalancer 102 | ``` 103 | Alternatively, modify the values.yaml file directly. 104 | 105 | ### Features 106 | - Scalable: Set replicaCount to scale the number of pods. 
107 | - Configurable Resources: Customize CPU/memory requests and limits. 108 | - Customizable Service: Supports ClusterIP, NodePort, or LoadBalancer service types. 109 | 110 | ### Troubleshooting 111 | 112 | Verify Helm and Kubernetes versions: 113 | ```bash 114 | 115 | helm version 116 | kubectl version 117 | ``` 118 | Check Helm release status: 119 | ```bash 120 | 121 | helm status apache -n apache-namespace 122 | ``` 123 | Inspect pod logs: 124 | ```bash 125 | 126 | kubectl logs -n apache-namespace 127 | ``` 128 | -------------------------------------------------------------------------------- /HELM/apache/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /HELM/apache/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: apache 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | # Versions are expected to follow Semantic Versioning (https://semver.org/) 18 | version: 0.1.0 19 | 20 | # This is the version number of the application being deployed. This version number should be 21 | # incremented each time you make changes to the application. Versions are not expected to 22 | # follow Semantic Versioning. They should reflect the version the application is using. 23 | # It is recommended to use it with quotes. 24 | appVersion: "1.16.0" 25 | -------------------------------------------------------------------------------- /HELM/apache/apache-0.1.0.tgz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/LondheShubham153/kubestarter/fe92ddedc6855d734be20ff2ff6362c68fcb64aa/HELM/apache/apache-0.1.0.tgz -------------------------------------------------------------------------------- /HELM/apache/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 1. 
Get the application URL by running these commands: 2 | {{- if .Values.ingress.enabled }} 3 | {{- range $host := .Values.ingress.hosts }} 4 | {{- range .paths }} 5 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }} 6 | {{- end }} 7 | {{- end }} 8 | {{- else if contains "NodePort" .Values.service.type }} 9 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "apache.fullname" . }}) 10 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") 11 | echo http://$NODE_IP:$NODE_PORT 12 | {{- else if contains "LoadBalancer" .Values.service.type }} 13 | NOTE: It may take a few minutes for the LoadBalancer IP to be available. 14 | You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "apache.fullname" . }}' 15 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "apache.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") 16 | echo http://$SERVICE_IP:{{ .Values.service.port }} 17 | {{- else if contains "ClusterIP" .Values.service.type }} 18 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "apache.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") 19 | export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") 20 | echo "Visit http://127.0.0.1:8080 to use your application" 21 | kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT 22 | {{- end }} 23 | -------------------------------------------------------------------------------- /HELM/apache/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "apache.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} 6 | {{- end }} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | If release name contains chart name it will be used as a full name. 12 | */}} 13 | {{- define "apache.fullname" -}} 14 | {{- if .Values.fullnameOverride }} 15 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} 16 | {{- else }} 17 | {{- $name := default .Chart.Name .Values.nameOverride }} 18 | {{- if contains $name .Release.Name }} 19 | {{- .Release.Name | trunc 63 | trimSuffix "-" }} 20 | {{- else }} 21 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} 22 | {{- end }} 23 | {{- end }} 24 | {{- end }} 25 | 26 | {{/* 27 | Create chart name and version as used by the chart label. 28 | */}} 29 | {{- define "apache.chart" -}} 30 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} 31 | {{- end }} 32 | 33 | {{/* 34 | Common labels 35 | */}} 36 | {{- define "apache.labels" -}} 37 | helm.sh/chart: {{ include "apache.chart" . }} 38 | {{ include "apache.selectorLabels" . 
}} 39 | {{- if .Chart.AppVersion }} 40 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 41 | {{- end }} 42 | app.kubernetes.io/managed-by: {{ .Release.Service }} 43 | {{- end }} 44 | 45 | {{/* 46 | Selector labels 47 | */}} 48 | {{- define "apache.selectorLabels" -}} 49 | app.kubernetes.io/name: {{ include "apache.name" . }} 50 | app.kubernetes.io/instance: {{ .Release.Name }} 51 | {{- end }} 52 | 53 | {{/* 54 | Create the name of the service account to use 55 | */}} 56 | {{- define "apache.serviceAccountName" -}} 57 | {{- if .Values.serviceAccount.create }} 58 | {{- default (include "apache.fullname" .) .Values.serviceAccount.name }} 59 | {{- else }} 60 | {{- default "default" .Values.serviceAccount.name }} 61 | {{- end }} 62 | {{- end }} 63 | -------------------------------------------------------------------------------- /HELM/apache/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "apache.fullname" . }} 5 | labels: 6 | {{- include "apache.labels" . | nindent 4 }} 7 | spec: 8 | {{- if not .Values.autoscaling.enabled }} 9 | replicas: {{ .Values.replicaCount }} 10 | {{- end }} 11 | selector: 12 | matchLabels: 13 | {{- include "apache.selectorLabels" . | nindent 6 }} 14 | template: 15 | metadata: 16 | {{- with .Values.podAnnotations }} 17 | annotations: 18 | {{- toYaml . | nindent 8 }} 19 | {{- end }} 20 | labels: 21 | {{- include "apache.labels" . | nindent 8 }} 22 | {{- with .Values.podLabels }} 23 | {{- toYaml . | nindent 8 }} 24 | {{- end }} 25 | spec: 26 | {{- with .Values.imagePullSecrets }} 27 | imagePullSecrets: 28 | {{- toYaml . | nindent 8 }} 29 | {{- end }} 30 | serviceAccountName: {{ include "apache.serviceAccountName" . }} 31 | securityContext: 32 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 33 | containers: 34 | - name: {{ .Chart.Name }} 35 | securityContext: 36 | {{- toYaml .Values.securityContext | nindent 12 }} 37 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" 38 | imagePullPolicy: {{ .Values.image.pullPolicy }} 39 | ports: 40 | - name: http 41 | containerPort: {{ .Values.service.port }} 42 | protocol: TCP 43 | livenessProbe: 44 | {{- toYaml .Values.livenessProbe | nindent 12 }} 45 | readinessProbe: 46 | {{- toYaml .Values.readinessProbe | nindent 12 }} 47 | resources: 48 | {{- toYaml .Values.resources | nindent 12 }} 49 | {{- with .Values.volumeMounts }} 50 | volumeMounts: 51 | {{- toYaml . | nindent 12 }} 52 | {{- end }} 53 | {{- with .Values.volumes }} 54 | volumes: 55 | {{- toYaml . | nindent 8 }} 56 | {{- end }} 57 | {{- with .Values.nodeSelector }} 58 | nodeSelector: 59 | {{- toYaml . | nindent 8 }} 60 | {{- end }} 61 | {{- with .Values.affinity }} 62 | affinity: 63 | {{- toYaml . | nindent 8 }} 64 | {{- end }} 65 | {{- with .Values.tolerations }} 66 | tolerations: 67 | {{- toYaml . | nindent 8 }} 68 | {{- end }} 69 | -------------------------------------------------------------------------------- /HELM/apache/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "apache.fullname" . }} 6 | labels: 7 | {{- include "apache.labels" . | nindent 4 }} 8 | spec: 9 | scaleTargetRef: 10 | apiVersion: apps/v1 11 | kind: Deployment 12 | name: {{ include "apache.fullname" . 
}} 13 | minReplicas: {{ .Values.autoscaling.minReplicas }} 14 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 15 | metrics: 16 | {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} 17 | - type: Resource 18 | resource: 19 | name: cpu 20 | target: 21 | type: Utilization 22 | averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} 23 | {{- end }} 24 | {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} 25 | - type: Resource 26 | resource: 27 | name: memory 28 | target: 29 | type: Utilization 30 | averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} 31 | {{- end }} 32 | {{- end }} 33 | -------------------------------------------------------------------------------- /HELM/apache/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.ingress.enabled -}} 2 | apiVersion: networking.k8s.io/v1 3 | kind: Ingress 4 | metadata: 5 | name: {{ include "apache.fullname" . }} 6 | labels: 7 | {{- include "apache.labels" . | nindent 4 }} 8 | {{- with .Values.ingress.annotations }} 9 | annotations: 10 | {{- toYaml . | nindent 4 }} 11 | {{- end }} 12 | spec: 13 | {{- with .Values.ingress.className }} 14 | ingressClassName: {{ . }} 15 | {{- end }} 16 | {{- if .Values.ingress.tls }} 17 | tls: 18 | {{- range .Values.ingress.tls }} 19 | - hosts: 20 | {{- range .hosts }} 21 | - {{ . | quote }} 22 | {{- end }} 23 | secretName: {{ .secretName }} 24 | {{- end }} 25 | {{- end }} 26 | rules: 27 | {{- range .Values.ingress.hosts }} 28 | - host: {{ .host | quote }} 29 | http: 30 | paths: 31 | {{- range .paths }} 32 | - path: {{ .path }} 33 | {{- with .pathType }} 34 | pathType: {{ . }} 35 | {{- end }} 36 | backend: 37 | service: 38 | name: {{ include "apache.fullname" $ }} 39 | port: 40 | number: {{ $.Values.service.port }} 41 | {{- end }} 42 | {{- end }} 43 | {{- end }} 44 | -------------------------------------------------------------------------------- /HELM/apache/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "apache.fullname" . }} 5 | labels: 6 | {{- include "apache.labels" . | nindent 4 }} 7 | spec: 8 | type: {{ .Values.service.type }} 9 | ports: 10 | - port: {{ .Values.service.port }} 11 | targetPort: 80 12 | protocol: TCP 13 | name: http 14 | selector: 15 | {{- include "apache.selectorLabels" . | nindent 4 }} 16 | -------------------------------------------------------------------------------- /HELM/apache/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ include "apache.serviceAccountName" . }} 6 | labels: 7 | {{- include "apache.labels" . | nindent 4 }} 8 | {{- with .Values.serviceAccount.annotations }} 9 | annotations: 10 | {{- toYaml . | nindent 4 }} 11 | {{- end }} 12 | automountServiceAccountToken: {{ .Values.serviceAccount.automount }} 13 | {{- end }} 14 | -------------------------------------------------------------------------------- /HELM/apache/templates/tests/test-connection.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "{{ include "apache.fullname" . }}-test-connection" 5 | labels: 6 | {{- include "apache.labels" . 
| nindent 4 }} 7 | annotations: 8 | "helm.sh/hook": test 9 | spec: 10 | containers: 11 | - name: wget 12 | image: busybox 13 | command: ['wget'] 14 | args: ['{{ include "apache.fullname" . }}:{{ .Values.service.port }}'] 15 | restartPolicy: Never 16 | -------------------------------------------------------------------------------- /HELM/apache/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for apache. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 4 | 5 | # This will set the replicaset count more information can be found here: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ 6 | replicaCount: 2 7 | 8 | # This sets the container image more information can be found here: https://kubernetes.io/docs/concepts/containers/images/ 9 | image: 10 | repository: httpd 11 | # This sets the pull policy for images. 12 | pullPolicy: IfNotPresent 13 | # Overrides the image tag whose default is the chart appVersion. 14 | tag: 2.4 15 | 16 | # This is for the secretes for pulling an image from a private repository more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 17 | imagePullSecrets: [] 18 | # This is to override the chart name. 19 | nameOverride: "" 20 | fullnameOverride: "" 21 | 22 | #This section builds out the service account more information can be found here: https://kubernetes.io/docs/concepts/security/service-accounts/ 23 | serviceAccount: 24 | # Specifies whether a service account should be created 25 | create: true 26 | # Automatically mount a ServiceAccount's API credentials? 27 | automount: true 28 | # Annotations to add to the service account 29 | annotations: {} 30 | # The name of the service account to use. 31 | # If not set and create is true, a name is generated using the fullname template 32 | name: "" 33 | 34 | # This is for setting Kubernetes Annotations to a Pod. 35 | # For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 36 | podAnnotations: {} 37 | # This is for setting Kubernetes Labels to a Pod. 
38 | # For more information checkout: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 39 | podLabels: {} 40 | 41 | podSecurityContext: {} 42 | # fsGroup: 2000 43 | 44 | securityContext: {} 45 | # capabilities: 46 | # drop: 47 | # - ALL 48 | # readOnlyRootFilesystem: true 49 | # runAsNonRoot: true 50 | # runAsUser: 1000 51 | 52 | # This is for setting up a service more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/ 53 | service: 54 | # This sets the service type more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types 55 | type: ClusterIP 56 | # This sets the ports more information can be found here: https://kubernetes.io/docs/concepts/services-networking/service/#field-spec-ports 57 | port: 80 58 | 59 | # This block is for setting up the ingress for more information can be found here: https://kubernetes.io/docs/concepts/services-networking/ingress/ 60 | ingress: 61 | enabled: false 62 | className: "" 63 | annotations: {} 64 | # kubernetes.io/ingress.class: nginx 65 | # kubernetes.io/tls-acme: "true" 66 | hosts: 67 | - host: chart-example.local 68 | paths: 69 | - path: / 70 | pathType: ImplementationSpecific 71 | tls: [] 72 | # - secretName: chart-example-tls 73 | # hosts: 74 | # - chart-example.local 75 | 76 | resources: {} 77 | # We usually recommend not to specify default resources and to leave this as a conscious 78 | # choice for the user. This also increases chances charts run on environments with little 79 | # resources, such as Minikube. If you do want to specify resources, uncomment the following 80 | # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 81 | #limits: 82 | # cpu: 100m 83 | # memory: 128Mi 84 | #requests: 85 | # cpu: 100m 86 | # memory: 64Mi 87 | 88 | # This is to setup the liveness and readiness probes more information can be found here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ 89 | livenessProbe: 90 | httpGet: 91 | path: / 92 | port: http 93 | readinessProbe: 94 | httpGet: 95 | path: / 96 | port: http 97 | 98 | #This section is for setting up autoscaling more information can be found here: https://kubernetes.io/docs/concepts/workloads/autoscaling/ 99 | autoscaling: 100 | enabled: false 101 | minReplicas: 1 102 | maxReplicas: 100 103 | targetCPUUtilizationPercentage: 80 104 | # targetMemoryUtilizationPercentage: 80 105 | 106 | # Additional volumes on the output Deployment definition. 107 | volumes: [] 108 | # - name: foo 109 | # secret: 110 | # secretName: mysecret 111 | # optional: false 112 | 113 | # Additional volumeMounts on the output Deployment definition. 114 | volumeMounts: [] 115 | # - name: foo 116 | # mountPath: "/etc/foo" 117 | # readOnly: true 118 | 119 | nodeSelector: {} 120 | 121 | tolerations: [] 122 | 123 | affinity: {} 124 | -------------------------------------------------------------------------------- /HELM/get_helm.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Copyright The Helm Authors. 4 | # 5 | # Licensed under the Apache License, Version 2.0 (the "License"); 6 | # you may not use this file except in compliance with the License. 
7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, software 12 | # distributed under the License is distributed on an "AS IS" BASIS, 13 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | # See the License for the specific language governing permissions and 15 | # limitations under the License. 16 | 17 | # The install script is based off of the MIT-licensed script from glide, 18 | # the package manager for Go: https://github.com/Masterminds/glide.sh/blob/master/get 19 | 20 | : ${BINARY_NAME:="helm"} 21 | : ${USE_SUDO:="true"} 22 | : ${DEBUG:="false"} 23 | : ${VERIFY_CHECKSUM:="true"} 24 | : ${VERIFY_SIGNATURES:="false"} 25 | : ${HELM_INSTALL_DIR:="/usr/local/bin"} 26 | : ${GPG_PUBRING:="pubring.kbx"} 27 | 28 | HAS_CURL="$(type "curl" &> /dev/null && echo true || echo false)" 29 | HAS_WGET="$(type "wget" &> /dev/null && echo true || echo false)" 30 | HAS_OPENSSL="$(type "openssl" &> /dev/null && echo true || echo false)" 31 | HAS_GPG="$(type "gpg" &> /dev/null && echo true || echo false)" 32 | HAS_GIT="$(type "git" &> /dev/null && echo true || echo false)" 33 | HAS_TAR="$(type "tar" &> /dev/null && echo true || echo false)" 34 | 35 | # initArch discovers the architecture for this system. 36 | initArch() { 37 | ARCH=$(uname -m) 38 | case $ARCH in 39 | armv5*) ARCH="armv5";; 40 | armv6*) ARCH="armv6";; 41 | armv7*) ARCH="arm";; 42 | aarch64) ARCH="arm64";; 43 | x86) ARCH="386";; 44 | x86_64) ARCH="amd64";; 45 | i686) ARCH="386";; 46 | i386) ARCH="386";; 47 | esac 48 | } 49 | 50 | # initOS discovers the operating system for this system. 51 | initOS() { 52 | OS=$(echo `uname`|tr '[:upper:]' '[:lower:]') 53 | 54 | case "$OS" in 55 | # Minimalist GNU for Windows 56 | mingw*|cygwin*) OS='windows';; 57 | esac 58 | } 59 | 60 | # runs the given command as root (detects if we are root already) 61 | runAsRoot() { 62 | if [ $EUID -ne 0 -a "$USE_SUDO" = "true" ]; then 63 | sudo "${@}" 64 | else 65 | "${@}" 66 | fi 67 | } 68 | 69 | # verifySupported checks that the os/arch combination is supported for 70 | # binary builds, as well whether or not necessary tools are present. 71 | verifySupported() { 72 | local supported="darwin-amd64\ndarwin-arm64\nlinux-386\nlinux-amd64\nlinux-arm\nlinux-arm64\nlinux-ppc64le\nlinux-s390x\nlinux-riscv64\nwindows-amd64\nwindows-arm64" 73 | if ! echo "${supported}" | grep -q "${OS}-${ARCH}"; then 74 | echo "No prebuilt binary for ${OS}-${ARCH}." 75 | echo "To build from source, go to https://github.com/helm/helm" 76 | exit 1 77 | fi 78 | 79 | if [ "${HAS_CURL}" != "true" ] && [ "${HAS_WGET}" != "true" ]; then 80 | echo "Either curl or wget is required" 81 | exit 1 82 | fi 83 | 84 | if [ "${VERIFY_CHECKSUM}" == "true" ] && [ "${HAS_OPENSSL}" != "true" ]; then 85 | echo "In order to verify checksum, openssl must first be installed." 86 | echo "Please install openssl or set VERIFY_CHECKSUM=false in your environment." 87 | exit 1 88 | fi 89 | 90 | if [ "${VERIFY_SIGNATURES}" == "true" ]; then 91 | if [ "${HAS_GPG}" != "true" ]; then 92 | echo "In order to verify signatures, gpg must first be installed." 93 | echo "Please install gpg or set VERIFY_SIGNATURES=false in your environment." 94 | exit 1 95 | fi 96 | if [ "${OS}" != "linux" ]; then 97 | echo "Signature verification is currently only supported on Linux." 98 | echo "Please set VERIFY_SIGNATURES=false or verify the signatures manually." 
99 | exit 1 100 | fi 101 | fi 102 | 103 | if [ "${HAS_GIT}" != "true" ]; then 104 | echo "[WARNING] Could not find git. It is required for plugin installation." 105 | fi 106 | 107 | if [ "${HAS_TAR}" != "true" ]; then 108 | echo "[ERROR] Could not find tar. It is required to extract the helm binary archive." 109 | exit 1 110 | fi 111 | } 112 | 113 | # checkDesiredVersion checks if the desired version is available. 114 | checkDesiredVersion() { 115 | if [ "x$DESIRED_VERSION" == "x" ]; then 116 | # Get tag from release URL 117 | local latest_release_url="https://get.helm.sh/helm-latest-version" 118 | local latest_release_response="" 119 | if [ "${HAS_CURL}" == "true" ]; then 120 | latest_release_response=$( curl -L --silent --show-error --fail "$latest_release_url" 2>&1 || true ) 121 | elif [ "${HAS_WGET}" == "true" ]; then 122 | latest_release_response=$( wget "$latest_release_url" -q -O - 2>&1 || true ) 123 | fi 124 | TAG=$( echo "$latest_release_response" | grep '^v[0-9]' ) 125 | if [ "x$TAG" == "x" ]; then 126 | printf "Could not retrieve the latest release tag information from %s: %s\n" "${latest_release_url}" "${latest_release_response}" 127 | exit 1 128 | fi 129 | else 130 | TAG=$DESIRED_VERSION 131 | fi 132 | } 133 | 134 | # checkHelmInstalledVersion checks which version of helm is installed and 135 | # if it needs to be changed. 136 | checkHelmInstalledVersion() { 137 | if [[ -f "${HELM_INSTALL_DIR}/${BINARY_NAME}" ]]; then 138 | local version=$("${HELM_INSTALL_DIR}/${BINARY_NAME}" version --template="{{ .Version }}") 139 | if [[ "$version" == "$TAG" ]]; then 140 | echo "Helm ${version} is already ${DESIRED_VERSION:-latest}" 141 | return 0 142 | else 143 | echo "Helm ${TAG} is available. Changing from version ${version}." 144 | return 1 145 | fi 146 | else 147 | return 1 148 | fi 149 | } 150 | 151 | # downloadFile downloads the latest binary package and also the checksum 152 | # for that binary. 153 | downloadFile() { 154 | HELM_DIST="helm-$TAG-$OS-$ARCH.tar.gz" 155 | DOWNLOAD_URL="https://get.helm.sh/$HELM_DIST" 156 | CHECKSUM_URL="$DOWNLOAD_URL.sha256" 157 | HELM_TMP_ROOT="$(mktemp -dt helm-installer-XXXXXX)" 158 | HELM_TMP_FILE="$HELM_TMP_ROOT/$HELM_DIST" 159 | HELM_SUM_FILE="$HELM_TMP_ROOT/$HELM_DIST.sha256" 160 | echo "Downloading $DOWNLOAD_URL" 161 | if [ "${HAS_CURL}" == "true" ]; then 162 | curl -SsL "$CHECKSUM_URL" -o "$HELM_SUM_FILE" 163 | curl -SsL "$DOWNLOAD_URL" -o "$HELM_TMP_FILE" 164 | elif [ "${HAS_WGET}" == "true" ]; then 165 | wget -q -O "$HELM_SUM_FILE" "$CHECKSUM_URL" 166 | wget -q -O "$HELM_TMP_FILE" "$DOWNLOAD_URL" 167 | fi 168 | } 169 | 170 | # verifyFile verifies the SHA256 checksum of the binary package 171 | # and the GPG signatures for both the package and checksum file 172 | # (depending on settings in environment). 173 | verifyFile() { 174 | if [ "${VERIFY_CHECKSUM}" == "true" ]; then 175 | verifyChecksum 176 | fi 177 | if [ "${VERIFY_SIGNATURES}" == "true" ]; then 178 | verifySignatures 179 | fi 180 | } 181 | 182 | # installFile installs the Helm binary. 183 | installFile() { 184 | HELM_TMP="$HELM_TMP_ROOT/$BINARY_NAME" 185 | mkdir -p "$HELM_TMP" 186 | tar xf "$HELM_TMP_FILE" -C "$HELM_TMP" 187 | HELM_TMP_BIN="$HELM_TMP/$OS-$ARCH/helm" 188 | echo "Preparing to install $BINARY_NAME into ${HELM_INSTALL_DIR}" 189 | runAsRoot cp "$HELM_TMP_BIN" "$HELM_INSTALL_DIR/$BINARY_NAME" 190 | echo "$BINARY_NAME installed into $HELM_INSTALL_DIR/$BINARY_NAME" 191 | } 192 | 193 | # verifyChecksum verifies the SHA256 checksum of the binary package. 
194 | verifyChecksum() { 195 | printf "Verifying checksum... " 196 | local sum=$(openssl sha1 -sha256 ${HELM_TMP_FILE} | awk '{print $2}') 197 | local expected_sum=$(cat ${HELM_SUM_FILE}) 198 | if [ "$sum" != "$expected_sum" ]; then 199 | echo "SHA sum of ${HELM_TMP_FILE} does not match. Aborting." 200 | exit 1 201 | fi 202 | echo "Done." 203 | } 204 | 205 | # verifySignatures obtains the latest KEYS file from GitHub main branch 206 | # as well as the signature .asc files from the specific GitHub release, 207 | # then verifies that the release artifacts were signed by a maintainer's key. 208 | verifySignatures() { 209 | printf "Verifying signatures... " 210 | local keys_filename="KEYS" 211 | local github_keys_url="https://raw.githubusercontent.com/helm/helm/main/${keys_filename}" 212 | if [ "${HAS_CURL}" == "true" ]; then 213 | curl -SsL "${github_keys_url}" -o "${HELM_TMP_ROOT}/${keys_filename}" 214 | elif [ "${HAS_WGET}" == "true" ]; then 215 | wget -q -O "${HELM_TMP_ROOT}/${keys_filename}" "${github_keys_url}" 216 | fi 217 | local gpg_keyring="${HELM_TMP_ROOT}/keyring.gpg" 218 | local gpg_homedir="${HELM_TMP_ROOT}/gnupg" 219 | mkdir -p -m 0700 "${gpg_homedir}" 220 | local gpg_stderr_device="/dev/null" 221 | if [ "${DEBUG}" == "true" ]; then 222 | gpg_stderr_device="/dev/stderr" 223 | fi 224 | gpg --batch --quiet --homedir="${gpg_homedir}" --import "${HELM_TMP_ROOT}/${keys_filename}" 2> "${gpg_stderr_device}" 225 | gpg --batch --no-default-keyring --keyring "${gpg_homedir}/${GPG_PUBRING}" --export > "${gpg_keyring}" 226 | local github_release_url="https://github.com/helm/helm/releases/download/${TAG}" 227 | if [ "${HAS_CURL}" == "true" ]; then 228 | curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" 229 | curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" 230 | elif [ "${HAS_WGET}" == "true" ]; then 231 | wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" 232 | wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" 233 | fi 234 | local error_text="If you think this might be a potential security issue," 235 | error_text="${error_text}\nplease see here: https://github.com/helm/community/blob/master/SECURITY.md" 236 | local num_goodlines_sha=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)') 237 | if [[ ${num_goodlines_sha} -lt 2 ]]; then 238 | echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256!" 239 | echo -e "${error_text}" 240 | exit 1 241 | fi 242 | local num_goodlines_tar=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)') 243 | if [[ ${num_goodlines_tar} -lt 2 ]]; then 244 | echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz!" 245 | echo -e "${error_text}" 246 | exit 1 247 | fi 248 | echo "Done." 249 | } 250 | 251 | # fail_trap is executed if an error occurs. 252 | fail_trap() { 253 | result=$? 
254 | if [ "$result" != "0" ]; then 255 | if [[ -n "$INPUT_ARGUMENTS" ]]; then 256 | echo "Failed to install $BINARY_NAME with the arguments provided: $INPUT_ARGUMENTS" 257 | help 258 | else 259 | echo "Failed to install $BINARY_NAME" 260 | fi 261 | echo -e "\tFor support, go to https://github.com/helm/helm." 262 | fi 263 | cleanup 264 | exit $result 265 | } 266 | 267 | # testVersion tests the installed client to make sure it is working. 268 | testVersion() { 269 | set +e 270 | HELM="$(command -v $BINARY_NAME)" 271 | if [ "$?" = "1" ]; then 272 | echo "$BINARY_NAME not found. Is $HELM_INSTALL_DIR on your "'$PATH?' 273 | exit 1 274 | fi 275 | set -e 276 | } 277 | 278 | # help provides possible cli installation arguments 279 | help () { 280 | echo "Accepted cli arguments are:" 281 | echo -e "\t[--help|-h ] ->> prints this help" 282 | echo -e "\t[--version|-v ] . When not defined it fetches the latest release from GitHub" 283 | echo -e "\te.g. --version v3.0.0 or -v canary" 284 | echo -e "\t[--no-sudo] ->> install without sudo" 285 | } 286 | 287 | # cleanup temporary files to avoid https://github.com/helm/helm/issues/2977 288 | cleanup() { 289 | if [[ -d "${HELM_TMP_ROOT:-}" ]]; then 290 | rm -rf "$HELM_TMP_ROOT" 291 | fi 292 | } 293 | 294 | # Execution 295 | 296 | #Stop execution on any error 297 | trap "fail_trap" EXIT 298 | set -e 299 | 300 | # Set debug if desired 301 | if [ "${DEBUG}" == "true" ]; then 302 | set -x 303 | fi 304 | 305 | # Parsing input arguments (if any) 306 | export INPUT_ARGUMENTS="${@}" 307 | set -u 308 | while [[ $# -gt 0 ]]; do 309 | case $1 in 310 | '--version'|-v) 311 | shift 312 | if [[ $# -ne 0 ]]; then 313 | export DESIRED_VERSION="${1}" 314 | if [[ "$1" != "v"* ]]; then 315 | echo "Expected version arg ('${DESIRED_VERSION}') to begin with 'v', fixing..." 316 | export DESIRED_VERSION="v${1}" 317 | fi 318 | else 319 | echo -e "Please provide the desired version. e.g. --version v3.0.0 or -v canary" 320 | exit 0 321 | fi 322 | ;; 323 | '--no-sudo') 324 | USE_SUDO="false" 325 | ;; 326 | '--help'|-h) 327 | help 328 | exit 0 329 | ;; 330 | *) exit 1 331 | ;; 332 | esac 333 | shift 334 | done 335 | set +u 336 | 337 | initArch 338 | initOS 339 | verifySupported 340 | checkDesiredVersion 341 | if ! checkHelmInstalledVersion; then 342 | downloadFile 343 | verifyFile 344 | installFile 345 | fi 346 | testVersion 347 | cleanup 348 | -------------------------------------------------------------------------------- /HPA_VPA/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes HPA & VPA Controller (Horizontal/Vertical Pod Autoscaler) on Minikube/KIND Cluster 2 | 3 | ## In this demo, we will see how to deploy HPA controller. HPA will automatically scale the number of pods based on CPU utilization whereas VPA scales by increasing or decreasing CPU and memory resources within the existing pod containers—thus scaling capacity vertically 4 | 5 | 6 | ### Pre-requisites to implement this project: 7 | 8 | - Create 1 virtual machine on AWS with 2 CPU, 4GB of RAM (t2.medium) 9 | - Setup minikube on it Minikube setup. 10 | - Ensure you have the Metrics Server installed in your Minikube cluster to enable HPA. 
If not already installed, you can install it using: 11 | ```bash 12 | minikube addons enable metrics-server 13 | ``` 14 | - Check minikube cluster status and nodes : 15 | ```bash 16 | minikube status 17 | kubectl get nodes 18 | ``` 19 | - If you are using a Kind cluster install Metrics Server 20 | ```bash 21 | kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml 22 | ``` 23 | - Edit the Metrics Server Deployment 24 | ```bash 25 | kubectl -n kube-system edit deployment metrics-server 26 | ``` 27 | - Add the security bypass to deployment under `container.args` 28 | ```bash 29 | - --kubelet-insecure-tls 30 | - --kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP 31 | ``` 32 | - Restart the deployment 33 | ```bash 34 | kubectl -n kube-system rollout restart deployment metrics-server 35 | ``` 36 | - Verify if the metrics server is running 37 | ```bash 38 | kubectl get pods -n kube-system 39 | kubectl top nodes 40 | ``` 41 | - For VPA 42 | ```bash 43 | git clone https://github.com/kubernetes/autoscaler.git 44 | cd autoscaler/vertical-pod-autoscaler 45 | ./hack/vpa-up.sh 46 | ``` 47 | - Verify the pods on VPA 48 | ```bash 49 | kubectl get pods -n kube-system 50 | ``` 51 | # 52 | ## What we are going to implement: 53 | In this demo, we will create an deployment & service files for Apache and with the help of HPA, we will automatically scale the number of pods based on CPU utilization. 54 | # 55 | ### Steps to implement HPA: 56 | 57 | - Update the Deployments: 58 | 59 | - We'll modify the existing Apache deployment YAML files to include resource requests and limits. This is required for HPA to monitor CPU usage. 60 | ```bash 61 | #apache-deployment.yaml 62 | apiVersion: apps/v1 63 | kind: Deployment 64 | metadata: 65 | name: apache-deployment 66 | spec: 67 | replicas: 1 68 | selector: 69 | matchLabels: 70 | app: apache 71 | template: 72 | metadata: 73 | labels: 74 | app: apache 75 | spec: 76 | containers: 77 | - name: apache 78 | image: httpd:2.4 79 | ports: 80 | - containerPort: 80 81 | resources: 82 | requests: 83 | cpu: 100m 84 | limits: 85 | cpu: 200m 86 | --- 87 | apiVersion: v1 88 | kind: Service 89 | metadata: 90 | name: apache-service 91 | spec: 92 | selector: 93 | app: apache 94 | ports: 95 | - protocol: TCP 96 | port: 80 97 | targetPort: 80 98 | type: ClusterIP 99 | ``` 100 | # 101 | - Apply the updated deployments: 102 | ```bash 103 | kubectl apply -f apache-deployment.yaml 104 | ``` 105 | # 106 | - Create HPA Resources 107 | - We will create HPA resources for both Apache and NGINX deployments. The HPA will scale the number of pods based on CPU utilization. 108 | ```bash 109 | #apache-hpa.yaml 110 | apiVersion: autoscaling/v2 111 | kind: HorizontalPodAutoscaler 112 | metadata: 113 | name: apache-hpa 114 | spec: 115 | scaleTargetRef: 116 | apiVersion: apps/v1 117 | kind: Deployment 118 | name: apache-deployment 119 | minReplicas: 1 120 | maxReplicas: 5 121 | metrics: 122 | - type: Resource 123 | resource: 124 | name: cpu 125 | target: 126 | type: Utilization 127 | averageUtilization: 20 128 | ``` 129 | # 130 | - Apply the HPA resources: 131 | ```bash 132 | kubectl apply -f apache-hpa.yaml 133 | ``` 134 | # 135 | - port forward to access the Apache service on browser. 
136 | ```bash 137 | kubectl port-forward svc/apache-service 8081:80 --address 0.0.0.0 & 138 | ``` 139 | # 140 | - Verify HPA 141 | - You can check the status of the HPA using the following command: 142 | ```bash 143 | kubectl get hpa 144 | ``` 145 | > This will show you the current state of the HPA, including the current and desired number of replicas. 146 | 147 | # 148 | ### Stress Testing 149 | # 150 | - To see HPA in action, you can perform a stress test on your deployments. Here is an example of how to generate load on the Apache deployment using 'BusyBox': 151 | ```bash 152 | kubectl run -i --tty load-generator --image=busybox /bin/sh 153 | ``` 154 | # 155 | - Inside the container, use 'wget' to generate load: 156 | ```bash 157 | while true; do wget -q -O- http://apache-service.default.svc.cluster.local; done 158 | ``` 159 | 160 | This will generate continuous load on the Apache service, causing the HPA to scale up the number of pods. 161 | 162 | # 163 | - Now to check if HPA worked or not, open a same new terminal and run the following command 164 | ```bash 165 | kubectl get hpa -w 166 | ``` 167 | 168 | > Note: Wait for few minutes to get the status reflected. 169 | 170 | # 171 | -------------------------------------------------------------------------------- /HPA_VPA/apache-deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: apache-deployment 5 | labels: 6 | app: apache 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: apache 12 | template: 13 | metadata: 14 | labels: 15 | app: apache 16 | spec: 17 | containers: 18 | - name: apache 19 | image: httpd:2.4 20 | ports: 21 | - containerPort: 80 22 | resources: 23 | requests: 24 | cpu: 100m 25 | memory: 128Mi 26 | limits: 27 | cpu: 200m 28 | memory: 256Mi 29 | --- 30 | apiVersion: v1 31 | kind: Service 32 | metadata: 33 | name: apache-service 34 | labels: 35 | app: apache 36 | spec: 37 | selector: 38 | app: apache 39 | ports: 40 | - protocol: TCP 41 | port: 80 42 | targetPort: 80 43 | type: ClusterIP 44 | 45 | -------------------------------------------------------------------------------- /HPA_VPA/apache-hpa.yml: -------------------------------------------------------------------------------- 1 | apiVersion: autoscaling/v2 2 | kind: HorizontalPodAutoscaler 3 | metadata: 4 | name: apache-hpa 5 | spec: 6 | scaleTargetRef: 7 | apiVersion: apps/v1 8 | kind: Deployment 9 | name: apache-deployment 10 | minReplicas: 1 11 | maxReplicas: 5 12 | metrics: 13 | - type: Resource 14 | resource: 15 | name: cpu 16 | target: 17 | type: Utilization 18 | averageUtilization: 5 19 | 20 | -------------------------------------------------------------------------------- /HPA_VPA/apache-vpa.yml: -------------------------------------------------------------------------------- 1 | apiVersion: autoscaling.k8s.io/v1 2 | kind: VerticalPodAutoscaler 3 | metadata: 4 | name: apache-vpa 5 | namespace: default 6 | spec: 7 | targetRef: 8 | apiVersion: apps/v1 9 | kind: Deployment 10 | name: apache-deployment 11 | updatePolicy: 12 | updateMode: "Auto" # Options: "Off", "Initial", "Auto" 13 | 14 | -------------------------------------------------------------------------------- /Ingress/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Ingress Controller on Minikube Cluster 2 | 3 | ### In this demo, we will see how to use ingress controller to route the traffic on different services. 
4 | 5 | ### Pre-requisites to implement this project: 6 | - Create 1 virtual machine on AWS with 2 CPU, 4GB of RAM (t2.medium) 7 | - Setup minikube on it Minikube setup. 8 | 9 | # 10 | 11 | ### What we are going to implement: 12 | - In this demo, we will create two deployment and services i.e nginx and apache and with the help of ingress, we will route the traffic between the services 13 | 14 | # 15 | ## Steps to implement ingress: 16 | 17 | 1) Create minikube cluster as mentioned in pre-requisites : 18 | 19 | # 20 | 2) Check minikube cluster status and nodes : 21 | ```bash 22 | minikube status 23 | kubectl get nodes 24 | ``` 25 | # 26 | 3) Create one yaml file for apache deployment and service : 27 | ```bash 28 | # apache-deployment.yaml 29 | apiVersion: apps/v1 30 | kind: Deployment 31 | metadata: 32 | name: apache-deployment 33 | spec: 34 | replicas: 1 35 | selector: 36 | matchLabels: 37 | app: apache 38 | template: 39 | metadata: 40 | labels: 41 | app: apache 42 | spec: 43 | containers: 44 | - name: apache 45 | image: httpd:2.4 46 | ports: 47 | - containerPort: 80 48 | --- 49 | apiVersion: v1 50 | kind: Service 51 | metadata: 52 | name: apache-service 53 | spec: 54 | selector: 55 | app: apache 56 | ports: 57 | - protocol: TCP 58 | port: 80 59 | targetPort: 80 60 | type: ClusterIP 61 | ``` 62 | 63 | # 64 | 4) Apply apache deployment : 65 | ```bash 66 | kubectl apply -f apache-deployment.yaml 67 | ``` 68 | 69 | # 70 | 5) Create one more yaml file for nginx deployment and service : 71 | ```bash 72 | # nginx-deployment.yaml 73 | apiVersion: apps/v1 74 | kind: Deployment 75 | metadata: 76 | name: nginx-deployment 77 | spec: 78 | replicas: 1 79 | selector: 80 | matchLabels: 81 | app: nginx 82 | template: 83 | metadata: 84 | labels: 85 | app: nginx 86 | spec: 87 | containers: 88 | - name: nginx 89 | image: nginx:latest 90 | ports: 91 | - containerPort: 80 92 | --- 93 | apiVersion: v1 94 | kind: Service 95 | metadata: 96 | name: nginx-service 97 | spec: 98 | selector: 99 | app: nginx 100 | ports: 101 | - protocol: TCP 102 | port: 80 103 | targetPort: 80 104 | type: ClusterIP 105 | 106 | ``` 107 | 108 | # 109 | 6) Apply nginx deployment : 110 | ```bash 111 | kubectl apply -f nginx-deployment.yaml 112 | ``` 113 | 114 | # 115 | 7) Enable the Ingress Controller : 116 | ```bash 117 | minikube addons enable ingress 118 | ``` 119 | 120 | # 121 | 8) Now create an Ingress resource that routes traffic to the Apache and NGINX services based on the URL path. 122 | ```bash 123 | # ingress.yaml 124 | apiVersion: networking.k8s.io/v1 125 | kind: Ingress 126 | metadata: 127 | name: apache-nginx-ingress 128 | annotations: 129 | nginx.ingress.kubernetes.io/rewrite-target: / 130 | spec: 131 | rules: 132 | - host: "tws.com" 133 | http: 134 | paths: 135 | - path: /apache 136 | pathType: Prefix 137 | backend: 138 | service: 139 | name: apache-service 140 | port: 141 | number: 80 142 | - path: /nginx 143 | pathType: Prefix 144 | backend: 145 | service: 146 | name: nginx-service 147 | port: 148 | number: 80 149 | ``` 150 | 151 | # 152 | 9) Apply the Ingress resource : 153 | ```bash 154 | kubectl apply -f ingress.yaml 155 | ``` 156 | 157 | # 158 | 10) To test the Ingress, map the hostname to the Minikube IP in your */etc/hosts* file : 159 | ```bash 160 | echo "$(minikube ip) tws.com" | sudo tee -a /etc/hosts 161 | ``` 162 |
OR
163 | Open the /etc/hosts file and add your Minikube IP and the domain name at the end.
164 | 
165 | # 
166 | 11) Now, test the routing:
167 | 
168 | - curl http://tws.com/apache to access the Apache service.
169 | ```bash
170 | curl http://tws.com/apache
171 | ```
172 | - curl http://tws.com/nginx to access the NGINX service.
173 | ```bash
174 | curl http://tws.com/nginx
175 | ```
176 | 
OR
177 | 178 | 179 | - port forward to access the Apache service on browser. 180 | ```bash 181 | kubectl port-forward svc/apache-service 8081:80 --address 0.0.0.0 & 182 | ``` 183 | - port forward to access the NGINX service on browser. 184 | ```bash 185 | kubectl port-forward svc/nginx-service 8082:80 --address 0.0.0.0 & 186 | ``` 187 | 188 | # 189 | 190 | -------------------------------------------------------------------------------- /Ingress/apache.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: apache-deployment 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | app: apache 10 | template: 11 | metadata: 12 | labels: 13 | app: apache 14 | spec: 15 | containers: 16 | - name: apache 17 | image: httpd:2.4 18 | ports: 19 | - containerPort: 80 20 | --- 21 | apiVersion: v1 22 | kind: Service 23 | metadata: 24 | name: apache-service 25 | spec: 26 | selector: 27 | app: apache 28 | ports: 29 | - protocol: TCP 30 | port: 80 31 | targetPort: 80 32 | type: ClusterIP 33 | -------------------------------------------------------------------------------- /Ingress/ingress.yml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1 2 | kind: Ingress 3 | metadata: 4 | name: apache-nginx-ingress 5 | annotations: 6 | nginx.ingress.kubernetes.io/rewrite-target: / 7 | spec: 8 | rules: 9 | - host: "tws.com" 10 | http: 11 | paths: 12 | - path: /apache 13 | pathType: Prefix 14 | backend: 15 | service: 16 | name: apache-service 17 | port: 18 | number: 80 19 | - path: /nginx 20 | pathType: Prefix 21 | backend: 22 | service: 23 | name: nginx-service 24 | port: 25 | number: 80 26 | -------------------------------------------------------------------------------- /Ingress/nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | app: nginx 10 | template: 11 | metadata: 12 | labels: 13 | app: nginx 14 | spec: 15 | containers: 16 | - name: nginx 17 | image: nginx:latest 18 | ports: 19 | - containerPort: 80 20 | --- 21 | apiVersion: v1 22 | kind: Service 23 | metadata: 24 | name: nginx-service 25 | spec: 26 | selector: 27 | app: nginx 28 | ports: 29 | - protocol: TCP 30 | port: 80 31 | targetPort: 80 32 | type: ClusterIP 33 | -------------------------------------------------------------------------------- /Kubeadm_Installation_Scripts_and_Documentation/Kubeadm_Installation_Common_Using_Containerd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Execute on Both "Master" & "Worker" Nodes: 4 | 5 | # 1. Disable Swap: Required for Kubernetes to function correctly. 6 | echo "Disabling swap..." 7 | sudo swapoff -a 8 | sleep 2 9 | 10 | # 2. Load Necessary Kernel Modules: Required for Kubernetes networking. 11 | echo "Loading necessary kernel modules for Kubernetes networking..." 
12 | cat < /dev/null 48 | 49 | sudo apt-get update 50 | sleep 2 51 | 52 | sudo apt-get install -y containerd.io 53 | sleep 2 54 | 55 | containerd config default | sed -e 's/SystemdCgroup = false/SystemdCgroup = true/' -e 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.k8s.io\/pause:3.9"/' | sudo tee /etc/containerd/config.toml 56 | 57 | sudo systemctl restart containerd 58 | sleep 2 59 | 60 | sudo systemctl is-active containerd 61 | sleep 2 62 | 63 | # 5. Install Kubernetes Components: 64 | echo "Installing Kubernetes components (kubelet, kubeadm, kubectl)..." 65 | sudo apt-get update 66 | sleep 2 67 | 68 | sudo apt-get install -y apt-transport-https ca-certificates curl gpg 69 | sleep 2 70 | 71 | curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg 72 | sleep 2 73 | 74 | echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list 75 | 76 | sudo apt-get update 77 | sleep 2 78 | 79 | sudo apt-get install -y kubelet kubeadm kubectl 80 | sleep 2 81 | 82 | sudo apt-mark hold kubelet kubeadm kubectl 83 | sleep 2 84 | 85 | echo "Kubernetes setup completed." 86 | -------------------------------------------------------------------------------- /Kubeadm_Installation_Scripts_and_Documentation/Kubeadm_Installation_Master_Using_Containerd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Execute ONLY on the "Master" Node 4 | 5 | # 1. Initialize the Cluster 6 | echo "Initializing Kubernetes Cluster..." 7 | sudo kubeadm init 8 | sleep 2 9 | 10 | # 2. Set Up Local kubeconfig 11 | echo "Setting up local kubeconfig..." 12 | mkdir -p "$HOME/.kube" 13 | sudo cp -i /etc/kubernetes/admin.conf "$HOME/.kube/config" 14 | sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config" 15 | sleep 2 16 | 17 | # 3. Install a Network Plugin (Calico) 18 | echo "Installing Calico network plugin..." 19 | kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml 20 | 21 | # 4. Generate Join Command 22 | echo "Generating join command for worker nodes..." 23 | kubeadm token create --print-join-command 24 | sleep 2 25 | 26 | echo "Kubernetes Master Node setup complete." 27 | 28 | 29 | -------------------------------------------------------------------------------- /Kubeadm_Installation_Scripts_and_Documentation/Kubeadm_Installation_Slave_Using_Containerd.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Execute on ALL of your Worker Nodes 4 | 5 | # 1. Perform pre-flight checks and reset the node: 6 | sudo kubeadm reset -f 7 | 8 | # 2. Paste the join command you got from the master node and append --v=5 at the end: 9 | # Example: 10 | # sudo kubeadm join :6443 --token \ 11 | # --discovery-token-ca-cert-hash sha256: \ 12 | # --cri-socket "unix:///run/containerd/containerd.sock" --v=5 13 | -------------------------------------------------------------------------------- /Kubeadm_Installation_Scripts_and_Documentation/README.md: -------------------------------------------------------------------------------- 1 | # Kubeadm Installation Guide 2 | 3 | This guide outlines the steps needed to set up a Kubernetes cluster using `kubeadm`. 
4 | 5 | ## Prerequisites 6 | 7 | - Ubuntu OS (Xenial or later) 8 | - `sudo` privileges 9 | - Internet access 10 | - t2.medium instance type or higher 11 | 12 | --- 13 | 14 | ## AWS Setup 15 | 16 | 1. Ensure that all instances are in the same **Security Group**. 17 | 2. Expose port **6443** in the **Security Group** to allow worker nodes to join the cluster. 18 | 3. Expose port **22** in the **Security Group** to allows SSH access to manage the instance.. 19 | 20 | 21 | ## To do above setup, follow below provided steps 22 | 23 | ### Step 1: Identify or Create a Security Group 24 | 25 | 1. **Log in to the AWS Management Console**: 26 | - Go to the **EC2 Dashboard**. 27 | 28 | 2. **Locate Security Groups**: 29 | - In the left menu under **Network & Security**, click on **Security Groups**. 30 | 31 | 3. **Create a New Security Group**: 32 | - Click on **Create Security Group**. 33 | - Provide the following details: 34 | - **Name**: (e.g., `Kubernetes-Cluster-SG`) 35 | - **Description**: A brief description for the security group (mandatory) 36 | - **VPC**: Select the appropriate VPC for your instances (default is acceptable) 37 | 38 | 4. **Add Rules to the Security Group**: 39 | - **Allow SSH Traffic (Port 22)**: 40 | - **Type**: SSH 41 | - **Port Range**: `22` 42 | - **Source**: `0.0.0.0/0` (Anywhere) or your specific IP 43 | 44 | - **Allow Kubernetes API Traffic (Port 6443)**: 45 | - **Type**: Custom TCP 46 | - **Port Range**: `6443` 47 | - **Source**: `0.0.0.0/0` (Anywhere) or specific IP ranges 48 | 49 | 5. **Save the Rules**: 50 | - Click on **Create Security Group** to save the settings. 51 | 52 | ### Step 2: Select the Security Group While Creating Instances 53 | 54 | - When launching EC2 instances: 55 | - Under **Configure Security Group**, select the existing security group (`Kubernetes-Cluster-SG`) 56 | 57 | > Note: Security group settings can be updated later as needed. 58 | 59 | --- 60 | 61 | 62 | ## Execute on Both "Master" & "Worker" Nodes 63 | 64 | 1. **Disable Swap**: Required for Kubernetes to function correctly. 65 | ```bash 66 | sudo swapoff -a 67 | ``` 68 | 69 | 2. **Load Necessary Kernel Modules**: Required for Kubernetes networking. 70 | ```bash 71 | cat < /dev/null 102 | 103 | sudo apt-get update 104 | sudo apt-get install -y containerd.io 105 | 106 | containerd config default | sed -e 's/SystemdCgroup = false/SystemdCgroup = true/' -e 's/sandbox_image = "registry.k8s.io\/pause:3.6"/sandbox_image = "registry.k8s.io\/pause:3.9"/' | sudo tee /etc/containerd/config.toml 107 | 108 | sudo systemctl restart containerd 109 | sudo systemctl status containerd 110 | ``` 111 | 112 | 5. **Install Kubernetes Components**: 113 | ```bash 114 | sudo apt-get update 115 | sudo apt-get install -y apt-transport-https ca-certificates curl gpg 116 | 117 | curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg 118 | 119 | echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list 120 | 121 | sudo apt-get update 122 | sudo apt-get install -y kubelet kubeadm kubectl 123 | sudo apt-mark hold kubelet kubeadm kubectl 124 | ``` 125 | 126 | ## Execute ONLY on the "Master" Node 127 | 128 | 1. **Initialize the Cluster**: 129 | ```bash 130 | sudo kubeadm init 131 | ``` 132 | 133 | 2. 
**Set Up Local kubeconfig**: 134 | ```bash 135 | mkdir -p "$HOME"/.kube 136 | sudo cp -i /etc/kubernetes/admin.conf "$HOME"/.kube/config 137 | sudo chown "$(id -u)":"$(id -g)" "$HOME"/.kube/config 138 | ``` 139 | 140 | 3. **Install a Network Plugin (Calico)**: 141 | ```bash 142 | kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml 143 | ``` 144 | 145 | 4. **Generate Join Command**: 146 | ```bash 147 | kubeadm token create --print-join-command 148 | ``` 149 | 150 | > Copy this generated token for next command. 151 | 152 | --- 153 | 154 | ## Execute on ALL of your Worker Nodes 155 | 156 | 1. Perform pre-flight checks: 157 | ```bash 158 | sudo kubeadm reset pre-flight checks 159 | ``` 160 | 161 | 2. Paste the join command you got from the master node and append `--v=5` at the end: 162 | ```bash 163 | sudo kubeadm join :6443 --token --discovery-token-ca-cert-hash sha256: --cri-socket 164 | "unix:///run/containerd/containerd.sock" --v=5 165 | ``` 166 | 167 | > **Note**: When pasting the join command from the master node: 168 | > 1. Add `sudo` at the beginning of the command 169 | > 2. Add `--v=5` at the end 170 | > 171 | > Example format: 172 | > ```bash 173 | > sudo --v=5 174 | > ``` 175 | 176 | --- 177 | 178 | ## Verify Cluster Connection 179 | 180 | **On Master Node:** 181 | 182 | ```bash 183 | kubectl get nodes 184 | 185 | ``` 186 | 187 | 188 | 189 | --- 190 | 191 | ## Verify Container Status on Worker Node 192 | 193 | 194 | 195 | -------------------------------------------------------------------------------- /Minikube_Windows_Installation.md: -------------------------------------------------------------------------------- 1 | 2 | # Minikube Installation on Windows/local System. 3 | 4 | This guide provides step-by-step instructions for installing Minikube on windows with pre-requisites. Minikube allows you to run a single-node Kubernetes cluster locally for development and testing purposes. 5 | 6 | 7 | ## Pre-requisites 8 | 9 | - Windows os 10 | - Internet connection 11 | - Container manager : Docker 12 | - kubernetes cmd : kubectl 13 | 14 | 15 | ## Step 1: Docker Installation 16 | visit the below link 17 | to download and install the docker desktop. 18 | 19 | ```bash 20 | https://docs.docker.com/desktop/install/windows-install/ 21 | ``` 22 | ## It will require a RESTART. 23 | Now open cmd and check installation. 24 | 25 | ```bash 26 | docker --version 27 | 28 | 29 | output: 30 | Docker version 27.0.3, build 7d4bcd8 31 | ``` 32 | successfully installed docker on windows. 33 | 34 | ## Step 2: Minikube Installation 35 | 36 | Now open poweshell and run below 2 commands. 37 | ```bash 38 | New-Item -Path 'c:\' -Name 'minikube' -ItemType Directory -Force 39 | Invoke-WebRequest -OutFile 'c:\minikube\minikube.exe' -Uri 'https://github.com/kubernetes/minikube/releases/latest/download/minikube-windows-amd64.exe' -UseBasicParsing 40 | 41 | ``` 42 | ```bash 43 | $oldPath = [Environment]::GetEnvironmentVariable('Path', [EnvironmentVariableTarget]::Machine) 44 | if ($oldPath.Split(';') -inotcontains 'C:\minikube'){ 45 | [Environment]::SetEnvironmentVariable('Path', $('{0};C:\minikube' -f $oldPath), [EnvironmentVariableTarget]::Machine) 46 | } 47 | ``` 48 | check the installation by 49 | ```bash 50 | minikube version 51 | 52 | 53 | output: 54 | minikube version: v1.33.1 55 | commit: 5883c09216182566a63dff4c326a6fc9ed2982ff 56 | ``` 57 | 58 | ## Step 3: Run Minikube 59 | - Open powershell run below cmd to create brand new cluster. 
60 | ```bash 61 | minikube start 62 | ``` 63 | ## Step 4: Install kubectl With Curl 64 | - Open powershell run below command to interact with your brand new cluster. 65 | - Install curl if not availabe bydefault it is present. 66 | ```bash 67 | curl.exe -LO "https://dl.k8s.io/release/v1.30.0/bin/windows/amd64/kubectl.exe" 68 | ``` 69 | ## Step 5: Play With Your Cluster. 70 | 71 | ```bash 72 | kubectl get nodes 73 | 74 | 75 | output: 76 | NAME STATUS ROLES AGE VERSION 77 | minikube Ready control-plane 3m54s v1.30.0 78 | ``` 79 | This shows your single node cluster in up and running. 80 | 81 | ## Step 6: Don't Forget to Stop & Delete Minikube. 82 | When you are done, you can stop the Minikube cluster with: 83 | ```bash 84 | minikube stop 85 | ``` 86 | If you wish to delete the Minikube cluster entirely, you can do so with: 87 | ```bash 88 | minikube delete 89 | ``` 90 | 91 | That's it! You've successfully installed Minikube on Windows, and you can now start deploying Kubernetes applications for development and testing. 92 | 93 | 94 | -------------------------------------------------------------------------------- /PersistentVolumes/PersistentVolume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: nginx-pv 5 | labels: 6 | env: dev 7 | spec: 8 | storageClassName: standard 9 | capacity: 10 | storage: 1Gi 11 | accessModes: 12 | - ReadWriteOnce 13 | hostPath: 14 | path: "/home/ubuntu/data" 15 | -------------------------------------------------------------------------------- /PersistentVolumes/PersistentVolumeClaim.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: nginx-pv-claim 5 | spec: 6 | storageClassName: standard 7 | accessModes: 8 | - ReadWriteOnce 9 | resources: 10 | requests: 11 | storage: 500Mi 12 | -------------------------------------------------------------------------------- /PersistentVolumes/Pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx-pod 5 | spec: 6 | volumes: 7 | - name: nginx-storage 8 | persistentVolumeClaim: 9 | claimName: nginx-pv-claim 10 | containers: 11 | - name: nginx-container 12 | image: nginx 13 | ports: 14 | - containerPort: 80 15 | volumeMounts: 16 | - mountPath: "/usr/share/nginx/html" 17 | name: nginx-storage 18 | -------------------------------------------------------------------------------- /PersistentVolumes/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes persistent volumes and persistent volume claim on Minikube Cluster 2 | 3 | ### In this demo, we will see how to persist data of a kubernetes pods using persistent volume on minikube cluster. 4 | 5 | ### Pre-requisites to implement this project: 6 | - Create 1 virtual machine on AWS with 2 CPU, 4GB of RAM (t2.medium) 7 | - Setup minikube on it Minikube setup. 8 | 9 | # 10 | 11 | ### What we are going to implement: 12 | - In this demo, we will create persistent volumes (PV) and persistent volume claim (PVC) to persist the data of an application so that it can be restored if our application crashes. 
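- A PVC binds to a PV whose `storageClassName` and access modes match and whose capacity is at least the requested size. Once both objects below are applied, you can confirm the binding at any time (a quick check, shown with the resource names used in this demo):

```bash
kubectl get pv,pvc
# Expected: nginx-pv and nginx-pv-claim both report STATUS "Bound"
kubectl describe pvc nginx-pv-claim | grep -i volume
```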
13 | # 14 | ## Steps to implement ingress: 15 | 16 | 1) Create minikube cluster as mentioned in pre-requisites : 17 | 18 | # 19 | 2) Check minikube cluster status and nodes : 20 | ```bash 21 | minikube status 22 | kubectl get nodes 23 | ``` 24 | # 25 | 3) Create persistent volume yaml file : 26 | ```bash 27 | apiVersion: v1 28 | kind: PersistentVolume 29 | metadata: 30 | name: nginx-pv 31 | labels: 32 | env: dev 33 | spec: 34 | storageClassName: standard 35 | capacity: 36 | storage: 1Gi 37 | accessModes: 38 | - ReadWriteOnce 39 | hostPath: 40 | path: "/home/ubuntu/data" 41 | ``` 42 | 43 | # 44 | 4) Apply persistent volume : 45 | ```bash 46 | kubectl apply -f PersistentVolume.yaml 47 | ``` 48 | ![image](https://github.com/user-attachments/assets/035b2b2b-254a-417e-a701-19966a76f559) 49 | 50 | # 51 | 5) Create one more yaml file for persistent volume claim : 52 | ```bash 53 | apiVersion: v1 54 | kind: PersistentVolumeClaim 55 | metadata: 56 | name: nginx-pv-claim 57 | spec: 58 | storageClassName: standard 59 | accessModes: 60 | - ReadWriteOnce 61 | resources: 62 | requests: 63 | storage: 500Mi 64 | ``` 65 | 66 | # 67 | 6) Apply persistent volume claim : 68 | ```bash 69 | kubectl apply -f PersistentVolumeClaim.yaml 70 | ``` 71 | ![image](https://github.com/user-attachments/assets/dc148d34-92f8-4842-91fb-831b11a40890) 72 | 73 | # 74 | 7) Create a simple nginx pod yaml to attach volume : 75 | ```bash 76 | apiVersion: v1 77 | kind: Pod 78 | metadata: 79 | name: nginx-pod 80 | spec: 81 | volumes: 82 | - name: nginx-storage 83 | persistentVolumeClaim: 84 | claimName: nginx-pv-claim 85 | containers: 86 | - name: nginx-container 87 | image: nginx 88 | ports: 89 | - containerPort: 80 90 | volumeMounts: 91 | - mountPath: "/usr/share/nginx/html" 92 | name: nginx-storage 93 | ``` 94 | 95 | # 96 | 8) Apply Pod yaml file : 97 | ```bash 98 | kubectl apply -f Pod.yaml 99 | ``` 100 | ![image](https://github.com/user-attachments/assets/959783f2-3499-42eb-adf5-b2bfc5b4d374) 101 | 102 | # 103 | 9) Now exec into the above the created pod i.e. nginx : 104 | ```bash 105 | kubectl exec -it -- sh 106 | ``` 107 | ![image](https://github.com/user-attachments/assets/045c63ca-b522-417b-adde-560a35217e14) 108 | 109 | # 110 | 10) Go to the path /usr/share/nginx/html and do ls -lrt : 111 | ```bash 112 | cd /usr/share/nginx/html 113 | ``` 114 | ![image](https://github.com/user-attachments/assets/65a1c51b-744e-4bf6-817f-7402029812ce) 115 | 116 | In the above screenshot, there is no files under /usr/share/nginx/html path 117 | 118 | # 119 | 11) Now let's create a file under /usr/share/nginx/html path : 120 | ```bash 121 | echo "Hello from nginx pod" > nginx-pod.txt 122 | ``` 123 | ![image](https://github.com/user-attachments/assets/d5a51332-65ba-43aa-b663-a8dd76a10713) 124 | 125 | # 126 | 12) Now exit from the pod and ssh into the minikube host : 127 | ```bash 128 | exit 129 | ``` 130 | ```bash 131 | minikube ssh 132 | ``` 133 | ![image](https://github.com/user-attachments/assets/b5c63a04-174f-4c47-ae3c-5a8828595537) 134 | 135 | # 136 | 13) Now go the path which you mentioned in PersistentVolume.yaml. 
In our case it is /home/ubuntu/data and check if the file is present or not : 137 | ```bash 138 | cd /home/ubuntu/data 139 | ls -lrt 140 | ``` 141 | ![image](https://github.com/user-attachments/assets/b73ca5a9-93d4-4509-8c6d-48d145843ddb) 142 | 143 | # 144 | 14) Now let's create one more file under /home/ubuntu/data inside minikube host : 145 | ```bash 146 | echo "Hello from minikube host pod" > minikube-host-pod.txt 147 | ``` 148 | ![image](https://github.com/user-attachments/assets/d046ca50-62d4-4ad5-8e9d-e8ef5ed1721c) 149 | 150 | # 151 | 15) At last, go to nginx pod and check if minikube-host-pod.txt file is present or not : 152 | ```bash 153 | kubectl exec -it -- sh 154 | ``` 155 | ```bash 156 | cd /usr/share/nginx/html 157 | ls -lrt 158 | ``` 159 | ![image](https://github.com/user-attachments/assets/9294b05b-ba25-4b6f-b010-5842d1736ed7) 160 | 161 | # 162 | Congratulations, you have done it 163 | Happy Learning :) 164 | -------------------------------------------------------------------------------- /RBAC/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes RBAC Controller on Minikube Cluster 2 | 3 | ### In this demo, we will see how to use RBAC controller for managing access to your Kubernetes resources. 4 | # 5 | 6 | ## Pre-requisites to implement this project: 7 | 8 | - Create 1 virtual machine on AWS with 2 CPU, 4GB of RAM (t2.medium) 9 | - Setup minikube on it Minikube setup 10 | - Check minikube cluster status and nodes : 11 | ```bash 12 | minikube status 13 | kubectl get nodes 14 | ``` 15 | # 16 | ### What we are going to implement: 17 | - In this demo, we will create a deployment and services for Apache and with the help of RBAC, we will manage the access. 18 | # 19 | 20 | ### Steps to implement RBAC: 21 | 22 | - First, let's define namespaces.yaml to separate your resources. This helps in organizing and applying RBAC policies. 23 | ```bash 24 | # namespaces.yml 25 | apiVersion: v1 26 | kind: Namespace 27 | metadata: 28 | name: apache-namespace 29 | ``` 30 | # 31 | - Apply apache deployment : 32 | ```bash 33 | # apache-deployment.yml 34 | apiVersion: apps/v1 35 | kind: Deployment 36 | metadata: 37 | name: apache-deployment 38 | namespace: apache-namespace 39 | spec: 40 | replicas: 1 41 | selector: 42 | matchLabels: 43 | app: apache 44 | template: 45 | metadata: 46 | labels: 47 | app: apache 48 | spec: 49 | containers: 50 | - name: apache 51 | image: httpd:2.4 52 | ports: 53 | - containerPort: 80 54 | --- 55 | apiVersion: v1 56 | kind: Service 57 | metadata: 58 | name: apache-service 59 | namespace: apache-namespace 60 | spec: 61 | selector: 62 | app: apache 63 | ports: 64 | - protocol: TCP 65 | port: 80 66 | targetPort: 80 67 | type: ClusterIP 68 | ``` 69 | # 70 | - Apply apache-deployment & namespaces.yaml: 71 | ```bash 72 | kubectl apply -f apache-deployment.yml 73 | kubectl apply -f namespaces.yml 74 | ``` 75 | # 76 | - Create Roles and RoleBindings: 77 | 78 | - Role for Managing Apache Deployment.
79 | `We'll create a role that allows managing Apache resources within the apache-namespace.` 80 | 81 | ```bash 82 | # apache-role.yaml 83 | apiVersion: rbac.authorization.k8s.io/v1 84 | kind: Role 85 | metadata: 86 | namespace: apache-namespace 87 | name: apache-manager 88 | rules: 89 | - apiGroups: ["", "apps", "extensions"] 90 | resources: ["deployments", "services", "pods"] 91 | verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 92 | ``` 93 | # 94 | - RoleBinding for Apache Manager 95 | We'll bind the role to a user or service account. 96 | ```bash 97 | # apache-rolebinding.yaml 98 | apiVersion: rbac.authorization.k8s.io/v1 99 | kind: RoleBinding 100 | metadata: 101 | name: apache-manager-binding 102 | namespace: apache-namespace 103 | subjects: 104 | - kind: User 105 | name: apache-user # This should be replaced with the actual user name 106 | apiGroup: rbac.authorization.k8s.io 107 | roleRef: 108 | kind: Role 109 | name: apache-manager 110 | apiGroup: rbac.authorization.k8s.io 111 | ``` 112 | # 113 | - Apply apache-role & rolebinding: 114 | ```bash 115 | kubectl apply -f apache-role.yaml 116 | kubectl apply -f apache-rolebinding.yaml 117 | ``` 118 | # 119 | - Create Users (Optional): 120 | 121 | **If you are using a Kubernetes distribution that supports user management, you can create users and assign them the respective roles. Here, we'll use basic ServiceAccounts for simplicity.** 122 | ```bash 123 | #apache-serviceaccount.yaml 124 | apiVersion: v1 125 | kind: ServiceAccount 126 | metadata: 127 | name: apache-user 128 | namespace: apache-namespace 129 | ``` 130 | # 131 | - Apply the service accounts: 132 | ```bash 133 | kubectl apply -f apache-serviceaccount.yaml 134 | ``` 135 | # 136 | 137 | ## Verify Roles and RoleBinding 138 | 139 | **To cross-verify the RBAC setup, you can perform a series of checks and actions to ensure that the roles and permissions are applied correctly. Here’s how you can do it:** 140 | 141 | - Check Roles: 142 | ```bash 143 | kubectl get roles -n apache-namespace 144 | ``` 145 | Ensure apache-manager and nginx-manager roles are listed. 146 | # 147 | - Check RoleBindings: 148 | ```bash 149 | kubectl get rolebindings -n apache-namespace 150 | ``` 151 | Ensure apache-manager-binding and nginx-manager-binding role bindings are listed. 152 | # 153 | ### Verify ServiceAccounts 154 | - Check ServiceAccounts: 155 | ```bash 156 | kubectl get serviceaccounts -n apache-namespace 157 | ``` 158 | Ensure apache-user and nginx-user service accounts are listed. 159 | 160 | ### Test Access Using Impersonation 161 | You can use the 'kubectl auth can-i' command to verify the permissions granted by the roles. 162 | 163 | **Test Apache User Permissions:** 164 | ```bash 165 | kubectl auth can-i create deployment -n apache-namespace --as=apache-user 166 | kubectl auth can-i get pods -n apache-namespace --as=apache-user 167 | kubectl auth can-i delete service -n apache-namespace --as=apache-user 168 | ``` 169 | # 170 | **To test denial of permissions outside the namespace:** 171 | ```bash 172 | kubectl auth can-i create deployment -n nginx-namespace --as=apache-user 173 | ``` 174 | # 175 | ### Create Test Resources 176 | 177 | **Test Apache User Resource Creation:** 178 | ```bash 179 | kubectl run apache-test --image=httpd:2.4 -n apache-namespace 180 | kubectl get pods -n apache-namespace 181 | ``` 182 | > Verify that the 'apache-test' pod is created in the 'apache-namespace'. 
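**Testing as the ServiceAccount (optional):**

The RoleBinding in this demo binds the role to a `User` named `apache-user`, which is why the impersonation checks above use `--as=apache-user`. If you instead bind the role to the ServiceAccount created earlier (subject `kind: ServiceAccount`, name `apache-user`), the equivalent check uses the full ServiceAccount username — a sketch, assuming that change is made to `apache-rolebinding.yaml`:

```bash
# ServiceAccounts authenticate as system:serviceaccount:<namespace>:<name>
kubectl auth can-i create deployments -n apache-namespace \
  --as=system:serviceaccount:apache-namespace:apache-user
```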
183 | 184 | **Clean Up Test Resources** 185 | 186 | After verification, clean up the test resources: 187 | Delete Test Pods: 188 | ```bash 189 | kubectl delete pod apache-test -n apache-namespace 190 | ``` 191 | # 192 | ## This should provide you with a thorough verification of your RBAC setup, ensuring that the roles and permissions are correctly applied and functioning as intended. 193 | 194 | -------------------------------------------------------------------------------- /RBAC/apache-deployment.yml: -------------------------------------------------------------------------------- 1 | # apache-deployment.yaml 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: apache-deployment 6 | namespace: apache-namespace 7 | spec: 8 | replicas: 1 9 | selector: 10 | matchLabels: 11 | app: apache 12 | template: 13 | metadata: 14 | labels: 15 | app: apache 16 | spec: 17 | containers: 18 | - name: apache 19 | image: httpd:2.4 20 | ports: 21 | - containerPort: 80 22 | --- 23 | apiVersion: v1 24 | kind: Service 25 | metadata: 26 | name: apache-service 27 | namespace: apache-namespace 28 | spec: 29 | selector: 30 | app: apache 31 | ports: 32 | - protocol: TCP 33 | port: 80 34 | targetPort: 80 35 | type: ClusterIP 36 | -------------------------------------------------------------------------------- /RBAC/apache-role.yml: -------------------------------------------------------------------------------- 1 | # apache-role.yaml 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: Role 4 | metadata: 5 | namespace: apache-namespace 6 | name: apache-manager 7 | rules: 8 | - apiGroups: ["", "apps", "extensions"] 9 | resources: ["deployments", "services", "pods"] 10 | verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 11 | -------------------------------------------------------------------------------- /RBAC/apache-rolebinding.yml: -------------------------------------------------------------------------------- 1 | # apache-rolebinding.yaml 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: RoleBinding 4 | metadata: 5 | name: apache-manager-binding 6 | namespace: apache-namespace 7 | subjects: 8 | - kind: User 9 | name: apache-user # This should be replaced with the actual user name 10 | apiGroup: rbac.authorization.k8s.io 11 | roleRef: 12 | kind: Role 13 | name: apache-manager 14 | apiGroup: rbac.authorization.k8s.io 15 | -------------------------------------------------------------------------------- /RBAC/apache-serviceaccount.yml: -------------------------------------------------------------------------------- 1 | #apache-serviceaccount.yaml 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: apache-user 6 | namespace: apache-namespace 7 | -------------------------------------------------------------------------------- /RBAC/namespace.yml: -------------------------------------------------------------------------------- 1 | # namespaces.yaml 2 | apiVersion: v1 3 | kind: Namespace 4 | metadata: 5 | name: apache-namespace 6 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes Kickstarter 2 | 3 | ## Architecture Guides 4 | 5 | 1. [Kubernetes Architecture Guide](./kubernetes_architecture.md) 6 | 7 | ## Examples with Interview Questions 8 | 9 | 1. [NGINX with Deployment & Service](./examples/nginx) 10 | 2. [MySQL with ConfigMaps, Secrets & Persistent Volumes](./examples/mysql) 11 | 12 | ## Installation Guides 13 | 14 | 1. 
[Kubeadm Installation Guide](./kubeadm_installation.md) 15 | 2. [Minikube Installation Guide](./minikube_installation.md) 16 | 3. [EKS Installation Guide](./eks_cluster_setup.md) 17 | 18 | ## Practice Projects 19 | 20 | 1. [Microservices on k8s](https://github.com/LondheShubham153/microservices-k8s) 21 | 2. [Django App Deployment](https://github.com/LondheShubham153/django-todo-cicd) 22 | 3. [Redit Clone with Ingress](https://github.com/LondheShubham153/reddit-clone-k8s-ingress) 23 | 4. [AWS EKS Best Practices](https://github.com/LondheShubham153/aws-eks-devops-best-practices) 24 | 5. [For More Challenges, Check Out These Ideas](./examples/More_K8s_Practice_Ideas.md) 25 | -------------------------------------------------------------------------------- /Taints-and-Tolerations/README.md: -------------------------------------------------------------------------------- 1 | ## Taints and Tolerations in Kubernetes 2 | 3 | ### What are Taints amd Tolerations ? 4 | - Taints and tolerations are a mechanism in Kubernetes that allows you to control which pods can be scheduled on specific nodes. 5 | - They work together to ensure pods are not placed on inappropriate nodes. 6 | 7 | ### How it works ? 8 | - By default, pods cannot be scheduled on tainted nodes unless they have a special permission called toleration. 9 | - A pod will only be allocated to a node when a toleration on the pod corresponds with the taint of the node. 10 | 11 | --- 12 | 13 | ### Implementation of Taints and Tolerations: 14 | # 15 | - Taint a node 16 | ```bash 17 | kubectl taint nodes node1 prod=true:NoSchedule 18 | ``` 19 | Note: The above command will taint the node1 with key "prod", without the appropriate tolerations no pods will schedule to node1. 20 | 21 | To remove the taint , add - at the end of the command 22 | ```bash 23 | kubectl taint nodes node1 prod=true:NoSchedule- 24 | ``` 25 | 26 | - Try to apply the below manifest 27 | ```yaml 28 | apiVersion: v1 29 | kind: Pod 30 | metadata: 31 | labels: 32 | run: nginx 33 | name: nginx 34 | spec: 35 | containers: 36 | - image: nginx 37 | name: nginx 38 | ``` 39 | 40 | Have you noticed pod is in pending state ? Why this is so ? - It's because we have applied taint to node1 but we haven't applied toleration to the nginx pod. 41 | 42 | - Now copy the below code and try to deploy it 43 | 44 | ```yaml 45 | apiVersion: v1 46 | kind: Pod 47 | metadata: 48 | labels: 49 | run: nginx 50 | name: nginx 51 | spec: 52 | containers: 53 | - image: nginx 54 | name: nginx 55 | tolerations: 56 | - key: "prod" 57 | operator: "Equal" 58 | value: "true" 59 | effect: "NoSchedule" 60 | ``` 61 | >Note: This pod specification defines a toleration for the "prod" taint with the effect "NoSchedule." This allows the pod to be scheduled on tainted nodes. 62 | -------------------------------------------------------------------------------- /Taints-and-Tolerations/pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | labels: 5 | run: nginx 6 | name: nginx 7 | spec: 8 | containers: 9 | - image: nginx 10 | name: nginx 11 | 12 | -------------------------------------------------------------------------------- /ci_cd_with_kubernetes.md: -------------------------------------------------------------------------------- 1 | # CI/CD with Kubernetes 2 | 3 | This guide explains the basic workflow of how Kubernetes can be integrated with the CI/CD pipelines for deployment of an application in a simplified way. 4 | 5 | ## What is CI? 
6 | 7 | CI stands for Continuous Integration. In CI, when people work on a project together, code changes are integrated into a shared repository on a regular basis. This is typically done multiple times a day. Then, automated tests run to check if everything is okay with the new code. This helps catch problems early in the development cycle. If these tests say "Everything is ok!" then the code is allowed to move to the next steps in the pipeline. 8 | There are a lot of tools that help in achieving CI/CD, like Jenkins, AWS CodePipeline, etc. 9 | 10 | ## What is CD? 11 | 12 | CD stands for both, Continuous Delivery and Continuous Deployment. Continuous Deployment extends CI by automating the deployment process. Once the code passes CI stage in the pipeline, it's automatically deployed to a staging environment for further testing. If everything works well in the staging environment, the code is then automatically promoted to the production environment, making the release process more efficient and reducing the risk of manual errors. However, Continuous Delivery means that your code changes are automatically tested and built as part of the CI stage. If everything looks good, the application is all set to be deployed. However, you decide when to actually release it to the end users. 13 | 14 | ## Role of Kubernetes in CI/CD 15 | 16 | Kubernetes is like a super helper in the process of getting the application ready and out to end users. It helps put the application on different places in the same way every time. This helps make sure everything works well, no matter where it goes. 17 | 18 | Let us understand this through a simple workflow of a CI/CD pipeline for a containerized application. 19 | 20 | 1. **Committing Code**: Developers commit their code changes to a version control system like Git. 21 | 2. **Build**: An automated build is triggered in a CI/CD tool like Jenkins to build a Docker image from the application code. 22 | 3. **Run Tests**: Automated tests are run on the Docker image to ensure it meets the set functionality/standards. 23 | 4. **Push Artifacts**: Once the testing stage is completed, the image is pushed to an Artifact Management System like Artifactory/Dockerhub/Docker Trusted Registry. 24 | 5. **Deployment using Kubernetes**: In this stage, we leverage the power of Kubernetes to streamline and automate the deployment process of the application. Let's get into the process by breaking it down into various essential steps. 25 | 26 | * **Deployment Manifest File**: 27 | A Kubernetes deployment script, deployment YAML file, holds the critical configuration instructions that Kubernetes requires to manage and deploy the application. Within this YAML file, we define the application's name, specify the container image to be used, determine the desired number of instances (replicas), and include other essential configurations. This YAML file acts as a comprehensive guide for Kubernetes, detailing how to create, maintain, and update the application's environment. 28 | The deployment YAML file is stored in the same repository as the application code. This allows for easy collaboration, versioning, and synchronization between the application code and its deployment configuration. 29 | 30 | * **Deployment in Staging Environment**: 31 | After generating a new image for the application through CI, the next step is to deploy it to a staging environment within the Kubernetes cluster. 
This staging environment, often known as "Staging" or "Pre-Prod", closely resembles the production setup but operates in isolation for testing. 32 | 33 | * **Testing in Staging Environment**: 34 | Automated testing and validation are performed in the staging environment to ensure the new version of the application image works as expected. 35 | 36 | * **Production Deployment**: 37 | Once the testing in the staging environment passes successfully, the new version of the application can be deployed to the production environment using the same deployment YAML script that guided the staging deployment. This ensures consistent setup in the production environment and the transition from staging to production is seamless. 38 | 39 | 40 | Screenshot 2023-08-21 at 4 16 35 PM 41 | 42 | 43 | ## Conclusion 44 | 45 | By automating the above-mentioned steps and using Kubernetes, the CI/CD process becomes more efficient, reliable, and repeatable. It helps development teams release new features faster while maintaining stability! Happy Learning! 46 | -------------------------------------------------------------------------------- /eks_cluster_setup.md: -------------------------------------------------------------------------------- 1 | # How to create EKS cluster using eksctl utility 2 | 3 | ## Pre-requisites: 4 | - IAM user with **access keys and secret access keys** 5 | - AWSCLI should be configured (Setup AWSCLI) 6 | ```bash 7 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 8 | sudo apt install unzip 9 | unzip awscliv2.zip 10 | sudo ./aws/install 11 | aws configure 12 | ``` 13 | 14 | - Install **kubectl** (Setup kubectl) 15 | ```bash 16 | curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl 17 | chmod +x ./kubectl 18 | sudo mv ./kubectl /usr/local/bin 19 | kubectl version --short --client 20 | ``` 21 | 22 | - Install **eksctl** (Setup eksctl) 23 | ```bash 24 | curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp 25 | sudo mv /tmp/eksctl /usr/local/bin 26 | eksctl version 27 | ``` 28 | 29 | # 30 | ### Steps to create EKS cluster: 31 | - Create EKS Cluster 32 | ```bash 33 | eksctl create cluster --name=my-cluster \ 34 | --region=us-west-2 \ 35 | --version=1.30 \ 36 | --without-nodegroup 37 | ``` 38 | 39 | - Associate IAM OIDC Provider 40 | ```bash 41 | eksctl utils associate-iam-oidc-provider \ 42 | --region us-west-2 \ 43 | --cluster my-cluster \ 44 | --approve 45 | ``` 46 | # 47 | 48 | - Create Nodegroup 49 | ```bash 50 | eksctl create nodegroup --cluster=my-cluster \ 51 | --region=us-west-2 \ 52 | --name=my-cluster \ 53 | --node-type=t2.medium \ 54 | --nodes=2 \ 55 | --nodes-min=2 \ 56 | --nodes-max=2 \ 57 | --node-volume-size=29 \ 58 | --ssh-access \ 59 | --ssh-public-key=eks-nodegroup-key 60 | ``` 61 | #### Note: Make sure the ssh-public-key "eks-nodegroup-key is available in your aws account" 62 | # 63 | 64 | - Update Kubectl Context 65 | ```bash 66 | aws eks update-kubeconfig --region us-west-2 --name my-cluster 67 | ``` 68 | # 69 | 70 | - Delete EKS Cluster 71 | ```bash 72 | eksctl delete cluster --name=my-cluster --region=us-west-2 73 | ``` 74 | -------------------------------------------------------------------------------- /examples/More_K8s_Practice_Ideas.md: -------------------------------------------------------------------------------- 1 | # Real World Apps to Deploy on Kubernetes 2 | 3 | There are many real world apps 
that you can deploy on Kubernetes to learn and practice its concepts. Here are some of the best options: 4 | 5 | ## Robot Shop 6 | 7 | Robot Shop is an e-commerce store developed by Instana to learn monitoring techniques on Kubernetes. It consists of several microservices implemented in different technologies: 8 | 9 | - Angular (frontend) 10 | - Nginx (reverse proxy) 11 | - Node.js (Express) 12 | - Java (Spark Java) 13 | - Python (Flask) 14 | - Golang 15 | - MongoDB 16 | - RabbitMQ 17 | - Redis 18 | - MySQL 19 | 20 | Deploying and managing this multi-tier application on Kubernetes will teach you about: 21 | 22 | - Deployments 23 | - Services 24 | - Ingress 25 | - ConfigMaps 26 | - Secrets 27 | - Monitoring 28 | - Scaling 29 | 30 | ## WordPress 31 | 32 | Deploying WordPress on Kubernetes is a great way to learn about: 33 | 34 | - StatefulSets 35 | - PersistentVolumes 36 | - Ingress 37 | - ConfigMaps 38 | - Secrets 39 | 40 | You'll need to deploy MySQL as a database for WordPress. Managing the lifecycle of WordPress and MySQL together on Kubernetes will teach you how to orchestrate related applications. 41 | 42 | ## Cassandra 43 | 44 | Cassandra is a NoSQL database that makes for an interesting Kubernetes use case. Deploying Cassandra on Kubernetes involves: 45 | 46 | - StatefulSets 47 | - Headless Services 48 | - DaemonSets 49 | - ConfigMaps 50 | - ReplicationControllers 51 | 52 | This will give you practice working with stateful applications on Kubernetes. 53 | 54 | ## Jenkins 55 | 56 | Deploying Jenkins on Kubernetes allows you to have a CI/CD pipeline that can take advantage of Kubernetes features like: 57 | 58 | - Auto-scaling 59 | - Resource management 60 | - Self-healing 61 | 62 | You can then build CI/CD pipelines for your microservices that run inside Kubernetes. 63 | 64 | In summary, real world apps like e-commerce stores, databases and CI/CD tools make for the best Kubernetes learning projects. Choose an app you're familiar with to get the most value while practicing Kubernetes concepts. Let me know if you have any other questions! 65 | 66 | 67 | ## Sources 68 | 69 | - [Kubernetes Projects for Beginners](https://www.airplane.dev/blog/kubernetes-projects-for-beginners) 70 | - [Kubernetes Use Cases](https://phoenixnap.com/kb/kubernetes-use-cases) 71 | - [Kubernetes Examples on GitHub](https://github.com/kubernetes/examples) 72 | -------------------------------------------------------------------------------- /examples/helm/README.md: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | #### Section 1: Introduction and Helm Basics 4 | 5 | ##### What is Helm? 6 | 7 | Helm is often referred to as the package manager for Kubernetes. It enables you to define, install, and manage even the most complex Kubernetes applications. Helm uses a packaging format called charts, which include all the resources needed to run an application, service, or a complete cloud-native stack inside Kubernetes. 
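Concretely, a chart is just a directory with a fixed layout. Scaffolding one with `helm create` (covered in Section 3) produces roughly the following structure, and `helm template <release> <chart>` lets you preview the rendered Kubernetes manifests before installing anything:

```bash
node-app-chart/
├── Chart.yaml          # chart name, version and metadata
├── values.yaml         # default configuration values
├── charts/             # chart dependencies (empty by default)
└── templates/          # Kubernetes manifests with Go templating
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── _helpers.tpl    # reusable template snippets
```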
8 | 9 | ##### How to Install helm in Ubuntu 10 | 11 | ``` 12 | curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null 13 | sudo apt-get install apt-transport-https --yes 14 | echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list 15 | sudo apt-get update 16 | sudo apt-get install helm 17 | ``` 18 | 19 | 20 | ##### Important Helm Commands 21 | 22 | - `helm create [CHART]`: Scaffold a new Helm chart. 23 | - `helm package [CHART]`: Package the chart into a chart archive. 24 | - `helm install [NAME] [CHART]`: Install a Helm chart. 25 | - `helm upgrade [NAME] [CHART]`: Upgrade an installed Helm chart. 26 | - `helm uninstall [NAME]`: Uninstall an installed Helm chart. 27 | - `helm list`: List all installed Helm charts. 28 | - `helm rollback [NAME] [REVISION]`: Roll back a release to a specific revision. 29 | 30 | #### Section 2: Prerequisites 31 | 32 | - Helm installed 33 | - Kubernetes cluster set up (e.g., Minikube, kind, or any cloud-based Kubernetes) 34 | - Docker installed (Optional for custom images) 35 | - Basic understanding of Kubernetes resources like Pod, Service, Deployment 36 | 37 | #### Section 3: Create Helm Chart Structure 38 | 39 | Run the following command to scaffold a new Helm chart: 40 | 41 | ```bash 42 | helm create node-app-chart 43 | ``` 44 | 45 | This will create a folder named `node-app-chart` with the initial chart structure. 46 | 47 | #### Section 4: Chart Metadata (`Chart.yaml`) 48 | 49 | Open `Chart.yaml` and modify it for your application: 50 | 51 | ```yaml 52 | apiVersion: v2 53 | name: node-app-chart 54 | description: A Helm chart for a Node.js application 55 | version: 0.1.0 56 | ``` 57 | 58 | #### Section 5: Default Values (`values.yaml`) 59 | 60 | Edit `values.yaml` to include: 61 | 62 | ```yaml 63 | replicaCount: 1 64 | 65 | image: 66 | repository: trainwithshubham/node-app-test-new 67 | tag: latest 68 | pullPolicy: IfNotPresent 69 | 70 | service: 71 | type: NodePort 72 | port: 30007 73 | targetPort: 8000 74 | ``` 75 | 76 | #### Section 6: Deployment Manifest (`templates/deployment.yaml`) 77 | 78 | Modify `deployment.yaml` under `templates/` to look like this: 79 | 80 | ```yaml 81 | apiVersion: apps/v1 82 | kind: Deployment 83 | metadata: 84 | name: {{ .Release.Name }}-deployment 85 | spec: 86 | replicas: {{ .Values.replicaCount }} 87 | selector: 88 | matchLabels: 89 | app: {{ .Release.Name }} 90 | template: 91 | metadata: 92 | labels: 93 | app: {{ .Release.Name }} 94 | spec: 95 | containers: 96 | - name: {{ .Chart.Name }} 97 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 98 | imagePullPolicy: {{ .Values.image.pullPolicy }} 99 | ports: 100 | - containerPort: 8000 101 | ``` 102 | 103 | #### Section 7: Service Manifest (`templates/service.yaml`) 104 | 105 | Open `service.yaml` under `templates/` and modify it: 106 | 107 | ```yaml 108 | apiVersion: v1 109 | kind: Service 110 | metadata: 111 | name: {{ .Release.Name }}-service 112 | spec: 113 | type: {{ .Values.service.type }} 114 | ports: 115 | - port: {{ .Values.service.port }} 116 | targetPort: {{ .Values.service.targetPort }} 117 | nodePort: {{ .Values.service.port }} 118 | selector: 119 | app: {{ .Release.Name }} 120 | ``` 121 | 122 | #### Section 8: Deploying the Helm Chart 123 | 124 | Use these commands to package and deploy the chart: 125 | 126 | ```bash 127 | # Package the chart (Optional) 128 
| helm package node-app-chart 129 | 130 | # Deploy the chart 131 | helm install my-node-app ./node-app-chart 132 | ``` 133 | 134 | #### Section 9: List the Deployment 135 | 136 | ```bash 137 | # list the charts (Optional) 138 | helm list 139 | 140 | # Check the status 141 | helm status 142 | ``` 143 | 144 | -------------------------------------------------------------------------------- /examples/helm/node-app/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *.orig 18 | *~ 19 | # Various IDEs 20 | .project 21 | .idea/ 22 | *.tmproj 23 | .vscode/ 24 | -------------------------------------------------------------------------------- /examples/helm/node-app/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v2 2 | name: node-app 3 | description: A Helm chart for Kubernetes 4 | 5 | # A chart can be either an 'application' or a 'library' chart. 6 | # 7 | # Application charts are a collection of templates that can be packaged into versioned archives 8 | # to be deployed. 9 | # 10 | # Library charts provide useful utilities or functions for the chart developer. They're included as 11 | # a dependency of application charts to inject those utilities and functions into the rendering 12 | # pipeline. Library charts do not define any templates and therefore cannot be deployed. 13 | type: application 14 | 15 | # This is the chart version. This version number should be incremented each time you make changes 16 | # to the chart and its templates, including the app version. 17 | # Versions are expected to follow Semantic Versioning (https://semver.org/) 18 | version: 0.1.1 19 | 20 | # This is the version number of the application being deployed. This version number should be 21 | # incremented each time you make changes to the application. Versions are not expected to 22 | # follow Semantic Versioning. They should reflect the version the application is using. 23 | # It is recommended to use it with quotes. 24 | appVersion: "1.16.0" 25 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | 1. Get the application URL by running these commands: 2 | {{- if .Values.ingress.enabled }} 3 | {{- range $host := .Values.ingress.hosts }} 4 | {{- range .paths }} 5 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }} 6 | {{- end }} 7 | {{- end }} 8 | {{- else if contains "NodePort" .Values.service.type }} 9 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "flask-chart.fullname" . }}) 10 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") 11 | echo http://$NODE_IP:$NODE_PORT 12 | {{- else if contains "LoadBalancer" .Values.service.type }} 13 | NOTE: It may take a few minutes for the LoadBalancer IP to be available. 
14 | You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "flask-chart.fullname" . }}' 15 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "flask-chart.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") 16 | echo http://$SERVICE_IP:{{ .Values.service.port }} 17 | {{- else if contains "ClusterIP" .Values.service.type }} 18 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "flask-chart.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") 19 | export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}") 20 | echo "Visit http://127.0.0.1:8080 to use your application" 21 | kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT 22 | {{- end }} 23 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* 2 | Expand the name of the chart. 3 | */}} 4 | {{- define "flask-chart.name" -}} 5 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} 6 | {{- end }} 7 | 8 | {{/* 9 | Create a default fully qualified app name. 10 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 11 | If release name contains chart name it will be used as a full name. 12 | */}} 13 | {{- define "flask-chart.fullname" -}} 14 | {{- if .Values.fullnameOverride }} 15 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} 16 | {{- else }} 17 | {{- $name := default .Chart.Name .Values.nameOverride }} 18 | {{- if contains $name .Release.Name }} 19 | {{- .Release.Name | trunc 63 | trimSuffix "-" }} 20 | {{- else }} 21 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} 22 | {{- end }} 23 | {{- end }} 24 | {{- end }} 25 | 26 | {{/* 27 | Create chart name and version as used by the chart label. 28 | */}} 29 | {{- define "flask-chart.chart" -}} 30 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} 31 | {{- end }} 32 | 33 | {{/* 34 | Common labels 35 | */}} 36 | {{- define "flask-chart.labels" -}} 37 | helm.sh/chart: {{ include "flask-chart.chart" . }} 38 | {{ include "flask-chart.selectorLabels" . }} 39 | {{- if .Chart.AppVersion }} 40 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 41 | {{- end }} 42 | app.kubernetes.io/managed-by: {{ .Release.Service }} 43 | {{- end }} 44 | 45 | {{/* 46 | Selector labels 47 | */}} 48 | {{- define "flask-chart.selectorLabels" -}} 49 | app.kubernetes.io/name: {{ include "flask-chart.name" . }} 50 | app.kubernetes.io/instance: {{ .Release.Name }} 51 | {{- end }} 52 | 53 | {{/* 54 | Create the name of the service account to use 55 | */}} 56 | {{- define "flask-chart.serviceAccountName" -}} 57 | {{- if .Values.serviceAccount.create }} 58 | {{- default (include "flask-chart.fullname" .) 
.Values.serviceAccount.name }} 59 | {{- else }} 60 | {{- default "default" .Values.serviceAccount.name }} 61 | {{- end }} 62 | {{- end }} 63 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ include "flask-chart.fullname" . }} 5 | labels: 6 | {{- include "flask-chart.labels" . | nindent 4 }} 7 | spec: 8 | {{- if not .Values.autoscaling.enabled }} 9 | replicas: {{ .Values.replicaCount }} 10 | {{- end }} 11 | selector: 12 | matchLabels: 13 | {{- include "flask-chart.selectorLabels" . | nindent 6 }} 14 | template: 15 | metadata: 16 | {{- with .Values.podAnnotations }} 17 | annotations: 18 | {{- toYaml . | nindent 8 }} 19 | {{- end }} 20 | labels: 21 | {{- include "flask-chart.selectorLabels" . | nindent 8 }} 22 | spec: 23 | {{- with .Values.imagePullSecrets }} 24 | imagePullSecrets: 25 | {{- toYaml . | nindent 8 }} 26 | {{- end }} 27 | serviceAccountName: {{ include "flask-chart.serviceAccountName" . }} 28 | securityContext: 29 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 30 | containers: 31 | - name: {{ .Chart.Name }} 32 | securityContext: 33 | {{- toYaml .Values.securityContext | nindent 12 }} 34 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" 35 | imagePullPolicy: {{ .Values.image.pullPolicy }} 36 | ports: 37 | - name: http 38 | containerPort: {{ .Values.service.port }} 39 | protocol: TCP 40 | livenessProbe: 41 | httpGet: 42 | path: / 43 | port: http 44 | readinessProbe: 45 | httpGet: 46 | path: / 47 | port: http 48 | resources: 49 | {{- toYaml .Values.resources | nindent 12 }} 50 | {{- with .Values.nodeSelector }} 51 | nodeSelector: 52 | {{- toYaml . | nindent 8 }} 53 | {{- end }} 54 | {{- with .Values.affinity }} 55 | affinity: 56 | {{- toYaml . | nindent 8 }} 57 | {{- end }} 58 | {{- with .Values.tolerations }} 59 | tolerations: 60 | {{- toYaml . | nindent 8 }} 61 | {{- end }} 62 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/hpa.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.autoscaling.enabled }} 2 | apiVersion: autoscaling/v2 3 | kind: HorizontalPodAutoscaler 4 | metadata: 5 | name: {{ include "flask-chart.fullname" . }} 6 | labels: 7 | {{- include "flask-chart.labels" . | nindent 4 }} 8 | spec: 9 | scaleTargetRef: 10 | apiVersion: apps/v1 11 | kind: Deployment 12 | name: {{ include "flask-chart.fullname" . 
}} 13 | minReplicas: {{ .Values.autoscaling.minReplicas }} 14 | maxReplicas: {{ .Values.autoscaling.maxReplicas }} 15 | metrics: 16 | {{- if .Values.autoscaling.targetCPUUtilizationPercentage }} 17 | - type: Resource 18 | resource: 19 | name: cpu 20 | target: 21 | type: Utilization 22 | averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} 23 | {{- end }} 24 | {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }} 25 | - type: Resource 26 | resource: 27 | name: memory 28 | target: 29 | type: Utilization 30 | averageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }} 31 | {{- end }} 32 | {{- end }} 33 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.ingress.enabled -}} 2 | {{- $fullName := include "flask-chart.fullname" . -}} 3 | {{- $svcPort := .Values.service.port -}} 4 | {{- if and .Values.ingress.className (not (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion)) }} 5 | {{- if not (hasKey .Values.ingress.annotations "kubernetes.io/ingress.class") }} 6 | {{- $_ := set .Values.ingress.annotations "kubernetes.io/ingress.class" .Values.ingress.className}} 7 | {{- end }} 8 | {{- end }} 9 | {{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}} 10 | apiVersion: networking.k8s.io/v1 11 | {{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}} 12 | apiVersion: networking.k8s.io/v1beta1 13 | {{- else -}} 14 | apiVersion: extensions/v1beta1 15 | {{- end }} 16 | kind: Ingress 17 | metadata: 18 | name: {{ $fullName }} 19 | labels: 20 | {{- include "flask-chart.labels" . | nindent 4 }} 21 | {{- with .Values.ingress.annotations }} 22 | annotations: 23 | {{- toYaml . | nindent 4 }} 24 | {{- end }} 25 | spec: 26 | {{- if and .Values.ingress.className (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }} 27 | ingressClassName: {{ .Values.ingress.className }} 28 | {{- end }} 29 | {{- if .Values.ingress.tls }} 30 | tls: 31 | {{- range .Values.ingress.tls }} 32 | - hosts: 33 | {{- range .hosts }} 34 | - {{ . | quote }} 35 | {{- end }} 36 | secretName: {{ .secretName }} 37 | {{- end }} 38 | {{- end }} 39 | rules: 40 | {{- range .Values.ingress.hosts }} 41 | - host: {{ .host | quote }} 42 | http: 43 | paths: 44 | {{- range .paths }} 45 | - path: {{ .path }} 46 | {{- if and .pathType (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }} 47 | pathType: {{ .pathType }} 48 | {{- end }} 49 | backend: 50 | {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }} 51 | service: 52 | name: {{ $fullName }} 53 | port: 54 | number: {{ $svcPort }} 55 | {{- else }} 56 | serviceName: {{ $fullName }} 57 | servicePort: {{ $svcPort }} 58 | {{- end }} 59 | {{- end }} 60 | {{- end }} 61 | {{- end }} 62 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ include "flask-chart.fullname" . }} 5 | labels: 6 | {{- include "flask-chart.labels" . 
| nindent 4 }} 7 | spec: 8 | type: {{ .Values.service.type }} 9 | ports: 10 | - port: {{ .Values.service.ports.port }} 11 | targetPort: {{ .Values.service.ports.targetPort }} 12 | protocol: TCP 13 | name: http 14 | nodePort: {{ .Values.service.ports.nodePort }} 15 | selector: 16 | {{- include "flask-chart.selectorLabels" . | nindent 4 }} 17 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ include "flask-chart.serviceAccountName" . }} 6 | labels: 7 | {{- include "flask-chart.labels" . | nindent 4 }} 8 | {{- with .Values.serviceAccount.annotations }} 9 | annotations: 10 | {{- toYaml . | nindent 4 }} 11 | {{- end }} 12 | {{- end }} 13 | -------------------------------------------------------------------------------- /examples/helm/node-app/templates/tests/test-connection.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: "{{ include "flask-chart.fullname" . }}-test-connection" 5 | labels: 6 | {{- include "flask-chart.labels" . | nindent 4 }} 7 | annotations: 8 | "helm.sh/hook": test 9 | spec: 10 | containers: 11 | - name: wget 12 | image: busybox 13 | command: ['wget'] 14 | args: ['{{ include "flask-chart.fullname" . }}:{{ .Values.service.port }}'] 15 | restartPolicy: Never 16 | -------------------------------------------------------------------------------- /examples/helm/node-app/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for flask-chart. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 4 | 5 | replicaCount: 1 6 | 7 | image: 8 | repository: trainwithshubham/node-app-test-new 9 | pullPolicy: Always 10 | # Overrides the image tag whose default is the chart appVersion. 11 | tag: "latest" 12 | 13 | imagePullSecrets: [] 14 | nameOverride: "" 15 | fullnameOverride: "" 16 | 17 | serviceAccount: 18 | # Specifies whether a service account should be created 19 | create: true 20 | # Annotations to add to the service account 21 | annotations: {} 22 | # The name of the service account to use. 23 | # If not set and create is true, a name is generated using the fullname template 24 | name: "" 25 | 26 | podAnnotations: {} 27 | 28 | podSecurityContext: {} 29 | # fsGroup: 2000 30 | 31 | securityContext: {} 32 | # capabilities: 33 | # drop: 34 | # - ALL 35 | # readOnlyRootFilesystem: true 36 | # runAsNonRoot: true 37 | # runAsUser: 1000 38 | 39 | service: 40 | enabled: true 41 | type: NodePort 42 | port: 8000 43 | ports: 44 | port: 80 45 | targetPort: 8000 46 | protocol: TCP 47 | nodePort: 30007 48 | 49 | ingress: 50 | enabled: false 51 | className: "" 52 | annotations: {} 53 | # kubernetes.io/ingress.class: nginx 54 | # kubernetes.io/tls-acme: "true" 55 | hosts: 56 | - host: chart-example.local 57 | paths: 58 | - path: / 59 | pathType: ImplementationSpecific 60 | tls: [] 61 | # - secretName: chart-example-tls 62 | # hosts: 63 | # - chart-example.local 64 | 65 | resources: {} 66 | # We usually recommend not to specify default resources and to leave this as a conscious 67 | # choice for the user. This also increases chances charts run on environments with little 68 | # resources, such as Minikube. 
If you do want to specify resources, uncomment the following 69 | # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 70 | # limits: 71 | # cpu: 100m 72 | # memory: 128Mi 73 | # requests: 74 | # cpu: 100m 75 | # memory: 128Mi 76 | 77 | autoscaling: 78 | enabled: false 79 | minReplicas: 1 80 | maxReplicas: 100 81 | targetCPUUtilizationPercentage: 80 82 | # targetMemoryUtilizationPercentage: 80 83 | 84 | nodeSelector: {} 85 | 86 | tolerations: [] 87 | 88 | affinity: {} 89 | -------------------------------------------------------------------------------- /examples/mysql/README.md: -------------------------------------------------------------------------------- 1 | ### Interview Questions: 2 | 3 | 1. **Explain the deployment pattern you'd use to deploy MySQL on Kubernetes while ensuring data persistence and high availability.** 4 | 5 | Answer: To deploy MySQL on Kubernetes with data persistence and high availability, I'd use a combination of StatefulSets and PersistentVolumeClaims (PVCs). StatefulSets provide unique network identities for each pod, and PVCs ensure data is retained even if pods are restarted or rescheduled. 6 | 7 | 2. **Describe the role of a Kubernetes StatefulSet when deploying MySQL.** 8 | 9 | Answer: A Kubernetes StatefulSet is the ideal choice for deploying stateful applications like MySQL. It maintains stable network identities for pods, manages ordered scaling, ensures ordered pod termination, and allows data persistence across pod rescheduling. 10 | 11 | 3. **What is the purpose of using a PersistentVolumeClaim (PVC) when deploying MySQL on Kubernetes?** 12 | 13 | Answer: A PersistentVolumeClaim defines a request for storage resources, and when used with a StorageClass, dynamically provisions PersistentVolumes for pods. For MySQL, PVCs guarantee data persistence by binding the same volume even after pod rescheduling. 14 | 15 | 4. **Explain the concept of a StorageClass and how it contributes to deploying MySQL on Kubernetes.** 16 | 17 | Answer: A StorageClass is an abstraction that defines the storage provisioner and settings for dynamically creating PersistentVolumes. When deploying MySQL, StorageClass allows automatic provisioning of storage resources for PersistentVolumeClaims, ensuring data persistence. 18 | 19 | ### Manifest File Examples: 20 | 21 | 1. **MySQL StorageClass and PersistentVolumeClaim (PVC) Manifests:** 22 | 23 | - **StorageClass** (`mysql-storageclass.yaml`): 24 | 25 | ```yaml 26 | apiVersion: storage.k8s.io/v1 27 | kind: StorageClass 28 | metadata: 29 | name: mysql-storage 30 | provisioner: kubernetes.io/no-provisioner 31 | volumeBindingMode: WaitForFirstConsumer 32 | ``` 33 | 34 | - **PersistentVolumeClaim** (`mysql-pvc.yaml`): 35 | 36 | ```yaml 37 | apiVersion: v1 38 | kind: PersistentVolumeClaim 39 | metadata: 40 | name: mysql-pvc 41 | spec: 42 | storageClassName: mysql-storage 43 | accessModes: 44 | - ReadWriteOnce 45 | resources: 46 | requests: 47 | storage: 1Gi 48 | ``` 49 | 50 | 2. 
**MySQL Deployment Manifest (`mysql-deployment.yaml`):** 51 | 52 | ```yaml 53 | apiVersion: apps/v1 54 | kind: Deployment 55 | metadata: 56 | name: mysql-deployment 57 | spec: 58 | replicas: 1 59 | selector: 60 | matchLabels: 61 | app: mysql 62 | template: 63 | metadata: 64 | labels: 65 | app: mysql 66 | spec: 67 | containers: 68 | - name: mysql 69 | image: mysql:latest 70 | env: 71 | - name: MYSQL_ROOT_PASSWORD 72 | value: rootpassword 73 | ports: 74 | - containerPort: 3306 75 | volumeMounts: 76 | - name: mysql-data 77 | mountPath: /var/lib/mysql 78 | volumes: 79 | - name: mysql-data 80 | persistentVolumeClaim: 81 | claimName: mysql-pvc 82 | ``` 83 | -------------------------------------------------------------------------------- /examples/mysql/configMap.yml: -------------------------------------------------------------------------------- 1 | kind: ConfigMap 2 | apiVersion: v1 3 | metadata: 4 | name: mysql-config 5 | namespace: mysql 6 | labels: 7 | app: todo 8 | data: 9 | MYSQL_DB: "todo-db" 10 | -------------------------------------------------------------------------------- /examples/mysql/deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: mysql 5 | namespace: mysql 6 | labels: 7 | app: mysql 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: mysql 13 | template: 14 | metadata: 15 | labels: 16 | app: mysql 17 | spec: 18 | containers: 19 | - name: mysql 20 | image: mysql:8 21 | ports: 22 | - containerPort: 3306 23 | env: 24 | - name: MYSQL_ROOT_PASSWORD 25 | valueFrom: 26 | secretKeyRef: 27 | name: mysql-secret 28 | key: password 29 | - name: MYSQL_DATABASE 30 | valueFrom: 31 | configMapKeyRef: 32 | name: mysql-config 33 | key: MYSQL_DB 34 | volumeMounts: 35 | - name: mysql-persistent-storage 36 | mountPath: /var/lib/mysql 37 | volumes: 38 | - name: mysql-persistent-storage 39 | persistentVolumeClaim: 40 | claimName: mysql-pv-claim 41 | -------------------------------------------------------------------------------- /examples/mysql/persistentVols.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: mysql-pv-volume 5 | namespace: mysql 6 | labels: 7 | type: local 8 | spec: 9 | storageClassName: manual 10 | capacity: 11 | storage: 2Gi 12 | accessModes: 13 | - ReadWriteOnce 14 | hostPath: 15 | path: "/home/ubuntu/projects/mysql/volume" 16 | --- 17 | apiVersion: v1 18 | kind: PersistentVolumeClaim 19 | metadata: 20 | name: mysql-pv-claim 21 | namespace: mysql 22 | spec: 23 | storageClassName: manual 24 | accessModes: 25 | - ReadWriteOnce 26 | resources: 27 | requests: 28 | storage: 2Gi 29 | -------------------------------------------------------------------------------- /examples/mysql/secrets.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: mysql-secret 5 | namespace: mysql 6 | type: Opaque 7 | data: 8 | password: dHJhaW53aXRoc2h1YmhhbQ== 9 | -------------------------------------------------------------------------------- /examples/nginx/README.md: -------------------------------------------------------------------------------- 1 | ### Interview Questions: 2 | 3 | 1. **Explain the role of a Kubernetes Pod when deploying an Nginx web server.** 4 | 5 | Answer: A Kubernetes Pod is the smallest deployable unit and represents a single instance of a running process. 
When deploying Nginx, a Pod encapsulates the Nginx container along with shared networking and storage resources. 6 | 7 | 2. **What is the purpose of using a Kubernetes Service when deploying Nginx?** 8 | 9 | Answer: A Kubernetes Service provides a stable network endpoint for accessing Nginx Pods. It enables load balancing, service discovery, and ensures availability even if Pods are rescheduled or scaled. 10 | 11 | 3. **How does a Kubernetes Deployment enhance the deployment of an Nginx application?** 12 | 13 | Answer: A Kubernetes Deployment manages the lifecycle of Nginx Pods. It enables declarative updates, scaling, and self-healing by maintaining a desired number of replica Pods based on a defined configuration. 14 | 15 | 4. **Why would you use a Namespace when deploying Nginx and related resources?** 16 | 17 | Answer: A Namespace provides a virtual cluster environment within a physical cluster. It helps in organizing and isolating resources, making it easier to manage and monitor Nginx-related components separately. 18 | 19 | ### Manifest File Examples: 20 | 21 | 1. **Nginx Pod Manifest in "nginx" Namespace:** 22 | 23 | ```yaml 24 | apiVersion: v1 25 | kind: Pod 26 | metadata: 27 | name: nginx-pod 28 | namespace: nginx 29 | spec: 30 | containers: 31 | - name: nginx-container 32 | image: nginx:latest 33 | ``` 34 | 35 | 2. **Nginx Service Manifest in "nginx" Namespace:** 36 | 37 | ```yaml 38 | apiVersion: v1 39 | kind: Service 40 | metadata: 41 | name: nginx-service 42 | namespace: nginx 43 | spec: 44 | selector: 45 | app: nginx-app 46 | ports: 47 | - protocol: TCP 48 | port: 80 49 | targetPort: 80 50 | ``` 51 | 52 | 3. **Nginx Deployment Manifest in "nginx" Namespace:** 53 | 54 | ```yaml 55 | apiVersion: apps/v1 56 | kind: Deployment 57 | metadata: 58 | name: nginx-deployment 59 | namespace: nginx 60 | spec: 61 | replicas: 3 62 | selector: 63 | matchLabels: 64 | app: nginx-app 65 | template: 66 | metadata: 67 | labels: 68 | app: nginx-app 69 | spec: 70 | containers: 71 | - name: nginx-container 72 | image: nginx:latest 73 | ``` 74 | 75 | ### Nginx Namespace Deployment Steps: 76 | 77 | 1. **Create the "nginx" Namespace:** 78 | 79 | ```sh 80 | kubectl create namespace nginx 81 | ``` 82 | 83 | 2. 
**Apply the Nginx Pod, Service, and Deployment YAMLs within the "nginx" Namespace:** 84 | 85 | ```sh 86 | kubectl apply -f nginx-pod.yaml -n nginx 87 | kubectl apply -f nginx-service.yaml -n nginx 88 | kubectl apply -f nginx-deployment.yaml -n nginx 89 | ``` 90 | 91 | -------------------------------------------------------------------------------- /examples/nginx/deployment.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | namespace: nginx 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: nginx-app 11 | template: 12 | metadata: 13 | labels: 14 | app: nginx-app 15 | spec: 16 | containers: 17 | - name: nginx-container 18 | image: nginx:latest 19 | -------------------------------------------------------------------------------- /examples/nginx/pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: my-pod 5 | namespace: nginx 6 | spec: 7 | containers: 8 | - name: nginx-container 9 | image: nginx:latest 10 | -------------------------------------------------------------------------------- /examples/nginx/service.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx-service 5 | namespace: nginx 6 | spec: 7 | selector: 8 | app: nginx-app # Use the appropriate label to match your pod 9 | ports: 10 | - protocol: TCP 11 | port: 80 # Port in the service 12 | targetPort: 80 # Port in the pod 13 | type: NodePort 14 | -------------------------------------------------------------------------------- /kind-cluster/README.md: -------------------------------------------------------------------------------- 1 | # KIND Cluster Setup Guide 2 | 3 | ## 1. Installing KIND and kubectl 4 | Install KIND and kubectl using the provided script: 5 | ```bash 6 | 7 | #!/bin/bash 8 | 9 | [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64 10 | chmod +x ./kind 11 | sudo mv ./kind /usr/local/bin/kind 12 | 13 | VERSION="v1.30.0" 14 | URL="https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl" 15 | INSTALL_DIR="/usr/local/bin" 16 | 17 | curl -LO "$URL" 18 | chmod +x kubectl 19 | sudo mv kubectl $INSTALL_DIR/ 20 | kubectl version --client 21 | 22 | rm -f kubectl 23 | rm -rf kind 24 | 25 | echo "kind & kubectl installation complete." 26 | ``` 27 | 28 | ## 2. Setting Up the KIND Cluster 29 | Create a kind-cluster-config.yaml file: 30 | 31 | ```yaml 32 | 33 | kind: Cluster 34 | apiVersion: kind.x-k8s.io/v1alpha4 35 | 36 | nodes: 37 | - role: control-plane 38 | image: kindest/node:v1.31.2 39 | - role: worker 40 | image: kindest/node:v1.31.2 41 | - role: worker 42 | image: kindest/node:v1.31.2 43 | ``` 44 | Create the cluster using the configuration file: 45 | 46 | ```bash 47 | 48 | kind create cluster --config kind-cluster-config.yaml --name my-kind-cluster 49 | ``` 50 | Verify the cluster: 51 | 52 | ```bash 53 | 54 | kubectl get nodes 55 | kubectl cluster-info 56 | ``` 57 | ## 3. Accessing the Cluster 58 | Use kubectl to interact with the cluster: 59 | ```bash 60 | 61 | kubectl cluster-info 62 | ``` 63 | 64 | 65 | ## 4. 
Setting Up the Kubernetes Dashboard 66 | Deploy the Dashboard 67 | Apply the Kubernetes Dashboard manifest: 68 | ```bash 69 | 70 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml 71 | ``` 72 | Create an Admin User 73 | Create a dashboard-admin-user.yml file with the following content: 74 | 75 | ```yaml 76 | 77 | apiVersion: v1 78 | kind: ServiceAccount 79 | metadata: 80 | name: admin-user 81 | namespace: kubernetes-dashboard 82 | --- 83 | apiVersion: rbac.authorization.k8s.io/v1 84 | kind: ClusterRoleBinding 85 | metadata: 86 | name: admin-user 87 | roleRef: 88 | apiGroup: rbac.authorization.k8s.io 89 | kind: ClusterRole 90 | name: cluster-admin 91 | subjects: 92 | - kind: ServiceAccount 93 | name: admin-user 94 | namespace: kubernetes-dashboard 95 | ``` 96 | Apply the configuration: 97 | 98 | ```bash 99 | 100 | kubectl apply -f dashboard-admin-user.yml 101 | ``` 102 | Get the Access Token 103 | Retrieve the token for the admin-user: 104 | 105 | ```bash 106 | 107 | kubectl -n kubernetes-dashboard create token admin-user 108 | ``` 109 | Copy the token for use in the Dashboard login. 110 | 111 | Access the Dashboard 112 | Start the Dashboard using kubectl proxy: 113 | 114 | ```bash 115 | 116 | kubectl proxy 117 | ``` 118 | Open the Dashboard in your browser: 119 | 120 | ```bash 121 | 122 | http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ 123 | ``` 124 | Use the token from the previous step to log in. 125 | 126 | ## 5. Deleting the Cluster 127 | Delete the KIND cluster: 128 | ```bash 129 | 130 | kind delete cluster --name my-kind-cluster 131 | ``` 132 | 133 | ## 6. Notes 134 | 135 | Multiple Clusters: KIND supports multiple clusters. Use unique --name for each cluster. 136 | Custom Node Images: Specify Kubernetes versions by updating the image in the configuration file. 137 | Ephemeral Clusters: KIND clusters are temporary and will be lost if Docker is restarted. 
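As a quick illustration of the notes above (the cluster name `my-kind-cluster` comes from the earlier example; adjust it to your own), you can list the KIND clusters on a machine and switch the kubectl context between them:

```bash
# List all KIND clusters on this machine
kind get clusters

# KIND registers a kubectl context named "kind-<cluster-name>" for each cluster
kubectl config get-contexts

# Point kubectl at a specific cluster, e.g. the one created above
kubectl config use-context kind-my-kind-cluster
```

All kubectl commands then run against whichever context is currently selected, which keeps work on multiple clusters from interfering with each other.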
138 | 139 | -------------------------------------------------------------------------------- /kind-cluster/config.yml: -------------------------------------------------------------------------------- 1 | kind: Cluster 2 | apiVersion: kind.x-k8s.io/v1alpha4 3 | 4 | nodes: 5 | - role: control-plane 6 | image: kindest/node:v1.31.2 7 | - role: worker 8 | image: kindest/node:v1.31.2 9 | - role: worker 10 | image: kindest/node:v1.31.2 11 | -------------------------------------------------------------------------------- /kind-cluster/dashboard-admin-user.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: admin-user 5 | namespace: kubernetes-dashboard 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1 8 | kind: ClusterRoleBinding 9 | metadata: 10 | name: admin-user 11 | roleRef: 12 | apiGroup: rbac.authorization.k8s.io 13 | kind: ClusterRole 14 | name: cluster-admin 15 | subjects: 16 | - kind: ServiceAccount 17 | name: admin-user 18 | namespace: kubernetes-dashboard 19 | -------------------------------------------------------------------------------- /kind-cluster/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # For AMD64 / x86_64 4 | 5 | [ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64 6 | chmod +x ./kind 7 | sudo cp ./kind /usr/local/bin/kind 8 | 9 | 10 | VERSION="v1.30.0" 11 | URL="https://dl.k8s.io/release/${VERSION}/bin/linux/amd64/kubectl" 12 | INSTALL_DIR="/usr/local/bin" 13 | 14 | # Download and install kubectl 15 | curl -LO "$URL" 16 | chmod +x kubectl 17 | sudo mv kubectl $INSTALL_DIR/ 18 | kubectl version --client 19 | 20 | # Clean up 21 | rm -f kubectl 22 | rm -rf kind 23 | 24 | echo "kind & kubectl installation complete." 25 | -------------------------------------------------------------------------------- /kubernetes_architecture.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Architecture Explained 2 | 3 | This document explains the key components that make up the architecture of a Kubernetes cluster, in simple terms. 4 | 5 | ## Table of Contents 6 | 7 | - [Control Plane (Master Node Components)](#control-plane-master-node-components) 8 | - [Worker Node Components](#worker-node-components) 9 | - [Other Components](#other-components) 10 | 11 | --- 12 | 13 | ![Kubernetes Architecture Diagram](https://miro.medium.com/v2/resize:fit:1400/1*0Sudxeu5mQyN3ahi1FV49A.png) 14 | 15 | 16 | ## Control Plane (Master Node Components) 17 | 18 | ### API Server 19 | 20 | This is the "front desk" of Kubernetes. Whenever you want to interact with your cluster, your request goes through the API Server. It validates and processes these requests to the backend components. 21 | 22 | ### etcd 23 | 24 | Think of this as the "database" of Kubernetes. It stores all the information about your cluster—what nodes are part of the cluster, what pods are running, what their statuses are, and more. 25 | 26 | ### Scheduler 27 | 28 | The "event planner" for your containers. When you ask for a container to be run, the Scheduler decides which machine (Node) in your cluster should run it. It considers resource availability and other constraints while making this decision. 29 | 30 | ### Controller Manager 31 | 32 | Imagine a bunch of small robots that continuously monitor the cluster to make sure everything is running smoothly. 
If something goes wrong (e.g., a Pod crashes), they work to fix it, ensuring the cluster state matches your desired state. 33 | 34 | ### Cloud Controller Manager 35 | 36 | This is a specialized component that allows Kubernetes to interact with the underlying cloud provider, like AWS or Azure. It helps in tasks like setting up load balancers and persistent storage. 37 | 38 | --- 39 | 40 | ## Worker Node Components 41 | 42 | ### kubelet 43 | 44 | This is the "manager" for each worker node. It ensures all containers on the node are healthy and running as they should be. 45 | 46 | ### kube-proxy 47 | 48 | Think of this as the "traffic cop" for network communication either between Pods or from external clients to Pods. It helps in routing the network traffic appropriately. 49 | 50 | ### Container Runtime 51 | 52 | This is the software used to run containers. Docker is commonly used, but other runtimes like containerd can also be used. 53 | 54 | --- 55 | 56 | ## Other Components 57 | 58 | ### Pod 59 | 60 | The smallest unit in Kubernetes, a Pod is a group of one or more containers. Think of it like an apartment in an apartment building. 61 | 62 | ### Service 63 | 64 | This is like a phone directory for Pods. Since Pods can come and go, a Service provides a stable "address" so that other parts of your application can find them. 65 | 66 | ### Volume 67 | 68 | This is like an external hard-drive that can be attached to a Pod to store data. 69 | 70 | ### Namespace 71 | 72 | A way to divide cluster resources among multiple users or teams. Think of it as having different folders on a shared computer, where each team can only see their own folder. 73 | 74 | ### Ingress 75 | 76 | Think of this as the "front door" for external access to your applications, controlling how HTTP and HTTPS traffic should be routed to your services. 77 | 78 | --- 79 | 80 | And there you have it! That's a simplified breakdown of Kubernetes architecture components. 81 | 82 | ``` 83 | -------------------------------------------------------------------------------- /minikube_installation.md: -------------------------------------------------------------------------------- 1 | # Minikube Installation Guide for Ubuntu 2 | 3 | This guide provides step-by-step instructions for installing Minikube on Ubuntu. Minikube allows you to run a single-node Kubernetes cluster locally for development and testing purposes. 4 | 5 | ## Pre-requisites 6 | 7 | * Ubuntu OS 8 | * sudo privileges 9 | * Internet access 10 | * Virtualization support enabled (Check with `egrep -c '(vmx|svm)' /proc/cpuinfo`, 0=disabled 1=enabled) 11 | 12 | --- 13 | 14 | ## Step 1: Update System Packages 15 | 16 | Update your package lists to make sure you are getting the latest version and dependencies. 17 | 18 | ```bash 19 | sudo apt update 20 | ``` 21 | 22 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/57f1c5d9-474a-43b8-90b9-fe542e122f3f) 23 | 24 | 25 | ## Step 2: Install Required Packages 26 | 27 | Install some basic required packages. 28 | 29 | ```bash 30 | sudo apt install -y curl wget apt-transport-https 31 | ``` 32 | 33 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/84ad8474-8d4d-4d4b-a04d-def88f76dc9a) 34 | 35 | --- 36 | 37 | ## Step 3: Install Docker 38 | 39 | Minikube can run a Kubernetes cluster either in a VM or locally via Docker. This guide demonstrates the Docker method. 
40 | 41 | ```bash 42 | sudo apt install -y docker.io 43 | ``` 44 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/d261f75b-a22f-4510-b3a3-14e1cecaf3e1) 45 | 46 | 47 | Start and enable Docker. 48 | 49 | ```bash 50 | sudo systemctl enable --now docker 51 | ``` 52 | 53 | Add current user to docker group (To use docker without root) 54 | 55 | ```bash 56 | sudo usermod -aG docker $USER && newgrp docker 57 | ``` 58 | Now, logout (use `exit` command) and connect again. 59 | 60 | --- 61 | 62 | ## Step 4: Install Minikube 63 | 64 | First, download the Minikube binary using `curl`: 65 | 66 | ```bash 67 | curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 68 | ``` 69 | 70 | Make it executable and move it into your path: 71 | 72 | ```bash 73 | chmod +x minikube 74 | sudo mv minikube /usr/local/bin/ 75 | ``` 76 | 77 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/80e8a137-286a-4334-886b-ea4821f596b2) 78 | 79 | --- 80 | 81 | ## Step 5: Install kubectl 82 | 83 | Download kubectl, which is a Kubernetes command-line tool. 84 | 85 | ```bash 86 | curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" 87 | ``` 88 | **Check above image ⬆️** 89 | Make it executable and move it into your path: 90 | 91 | ```bash 92 | chmod +x kubectl 93 | sudo mv kubectl /usr/local/bin/ 94 | ``` 95 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/cdda6c84-f6c9-4d05-87e0-ed8627e46a3a) 96 | 97 | --- 98 | 99 | ## Step 6: Start Minikube 100 | 101 | Now, you can start Minikube with the following command: 102 | 103 | ```bash 104 | minikube start --driver=docker --vm=true 105 | ``` 106 | 107 | This command will start a single-node Kubernetes cluster inside a Docker container. 108 | 109 | --- 110 | 111 | ## Step 7: Check Cluster Status 112 | 113 | Check the cluster status with: 114 | 115 | ```bash 116 | minikube status 117 | ``` 118 | 119 | ![image](https://github.com/paragpallavsingh/kubernetes-kickstarter/assets/40052830/a2dabec8-b073-4e1e-a831-dd6845000230) 120 | 121 | 122 | You can also use `kubectl` to interact with your cluster: 123 | 124 | ```bash 125 | kubectl get nodes 126 | ``` 127 | 128 | --- 129 | 130 | ## Step 8: Stop Minikube 131 | 132 | When you are done, you can stop the Minikube cluster with: 133 | 134 | ```bash 135 | minikube stop 136 | ``` 137 | 138 | --- 139 | 140 | ## Optional: Delete Minikube Cluster 141 | 142 | If you wish to delete the Minikube cluster entirely, you can do so with: 143 | 144 | ```bash 145 | minikube delete 146 | ``` 147 | 148 | --- 149 | 150 | That's it! You've successfully installed Minikube on Ubuntu, and you can now start deploying Kubernetes applications for development and testing. 151 | ``` 152 | -------------------------------------------------------------------------------- /projectGuide/easyshop-kind.md: -------------------------------------------------------------------------------- 1 | # Kind Cluster Setup Guide for EasyShop 2 | 3 | This guide will help you set up a Kind (Kubernetes in Docker) cluster for the EasyShop application. Instructions are provided for Windows, Linux, and macOS. 4 | 5 | ## 📋 Prerequisites Installation 6 | 7 | > ### Windows 8 | > 1. Install Docker Desktop for Windows 9 | > ```powershell 10 | > winget install Docker.DockerDesktop 11 | > ``` 12 | > 2. 
Install Kind 13 | > ```powershell 14 | > curl.exe -Lo kind-windows-amd64.exe https://kind.sigs.k8s.io/dl/v0.20.0/kind-windows-amd64 15 | > Move-Item .\kind-windows-amd64.exe c:\windows\system32\kind.exe 16 | > ``` 17 | > 3. Install kubectl 18 | > ```powershell 19 | > curl.exe -LO "https://dl.k8s.io/release/v1.28.0/bin/windows/amd64/kubectl.exe" 20 | > Move-Item .\kubectl.exe c:\windows\system32\kubectl.exe 21 | > ``` 22 | 23 | > ### Linux 24 | > 1. Install Docker 25 | > ```bash 26 | > curl -fsSL https://get.docker.com -o get-docker.sh 27 | > sudo sh get-docker.sh 28 | > rm get-docker.sh 29 | > ``` 30 | > 2. Install Kind 31 | > ```bash 32 | > curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64 33 | > chmod +x ./kind 34 | > sudo mv ./kind /usr/local/bin/kind 35 | > ``` 36 | > 3. Install kubectl 37 | > ```bash 38 | > curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl" 39 | > chmod +x kubectl 40 | > sudo mv ./kubectl /usr/local/bin/kubectl 41 | > ``` 42 | 43 | > ### macOS 44 | > 1. Install Docker Desktop for Mac 45 | > ```bash 46 | > brew install --cask docker 47 | > ``` 48 | > 2. Install Kind 49 | > ```bash 50 | > brew install kind 51 | > ``` 52 | > 3. Install kubectl 53 | > ```bash 54 | > brew install kubectl 55 | > ``` 56 | 57 | 58 | ## Project dir structure: 59 | This is just a preview want more details, then use `tree` inside the repo. 60 | ```bash 61 | . 62 | ├── Dockerfile 63 | ├── Dockerfile.dev 64 | ├── JENKINS.md 65 | ├── Jenkinsfile 66 | ├── LICENSE 67 | ├── README.md 68 | ├── components.json 69 | ├── docker-compose.yml 70 | ├── ecosystem.config.cjs 71 | ├── kubernetes 72 | │   ├── 00-kind-config.yaml 73 | │   ├── 01-namespace.yaml 74 | │   ├── 02-mongodb-pv.yaml 75 | │   ├── 03-mongodb-pvc.yaml 76 | │   ├── 04-configmap.yaml 77 | │   ├── 05-secrets.yaml 78 | │   ├── 06-mongodb-service.yaml 79 | │   ├── 07-mongodb-statefulset.yaml 80 | │   ├── 08-easyshop-deployment.yaml 81 | │   ├── 09-easyshop-service.yaml 82 | │   ├── 10-ingress.yaml 83 | │   ├── 11-hpa.yaml 84 | │   └── 12-migration-job.yaml 85 | ├── next.config.cjs 86 | ├── next.config.js 87 | ├── package-lock.json 88 | ├── package.json 89 | ├── postcss.config.js 90 | ├── public 91 | ├── scripts 92 | │   ├── Dockerfile.migration 93 | │   ├── migrate-data.ts 94 | │   └── tsconfig.json 95 | ├── src 96 | │   ├── app 97 | │   ├── data 98 | │   ├── lib 99 | │   ├── middleware.ts 100 | │   ├── styles 101 | │   ├── types 102 | │   └── types.d.ts 103 | ├── tailwind.config.ts 104 | ├── tsconfig.json 105 | └── yarn.lock 106 | ``` 107 | 108 | 109 | ## 🛠️ Environment Setup 110 | 111 | > ### 1. Clone the repository and navigate to the project directory 112 | > ```bash 113 | > git clone https://github.com/iemafzalhassan/EasyShop.git 114 | > cd EasyShop 115 | > ``` 116 | 117 | > ### 2. Build and Push Docker Images 118 | > 119 | > #### 2.1 Login to Docker Hub 120 | > First, login to Docker Hub (create an account at [hub.docker.com](https://hub.docker.com) if you haven't): 121 | > ```bash 122 | > docker login 123 | > ``` 124 | > 125 | > #### 2.2 Build Application Image 126 | > ```bash 127 | > # Build the application image 128 | > docker build -t your-dockerhub-username/easyshop:latest . 
129 | > 130 | > # Push to Docker Hub 131 | > docker push your-dockerhub-username/easyshop:latest 132 | > ``` 133 | > 134 | > #### 2.3 Build Migration Image 135 | > ```bash 136 | > # Build the migration image 137 | > docker build -t your-dockerhub-username/easyshop-migration:latest -f Dockerfile.migration . 138 | > 139 | > # Push to Docker Hub 140 | > docker push your-dockerhub-username/easyshop-migration:latest 141 | > ``` 142 | 143 | 144 | ## Kind Cluster Setup 145 | 146 | > ##### 3. Create new cluster 147 | > 148 | > ```bash 149 | > kind create cluster --name easyshop --config kubernetes/00-kind-config.yaml 150 | > ``` 151 | > This command creates a new Kind cluster using our custom configuration with one control plane and two worker nodes. 152 | 153 | > ##### 4. Create namespace 154 | > ```zsh 155 | > kubectl apply -f kubernetes/01-namespace.yaml 156 | > ``` 157 | 158 | > ##### 5. Setup storage 159 | > ```zsh 160 | > kubectl apply -f kubernetes/02-mongodb-pv.yaml 161 | > kubectl apply -f kubernetes/03-mongodb-pvc.yaml 162 | > ``` 163 | 164 | > ### 5. Create ConfigMap 165 | > Create `kubernetes/04-configmap.yaml` with the following content: 166 | > ```yaml 167 | > apiVersion: v1 168 | > kind: ConfigMap 169 | > metadata: 170 | > name: easyshop-config 171 | > namespace: easyshop 172 | > data: 173 | > MONGODB_URI: "mongodb://mongodb-service:27017/easyshop" 174 | > NODE_ENV: "production" 175 | > NEXT_PUBLIC_API_URL: "http://YOUR_EC2_PUBLIC_IP/api" # Replace with your YOUR_EC2_PUBLIC_IP 176 | > NEXTAUTH_URL: "http://YOUR_EC2_PUBLIC_IP" # Replace with your YOUR_EC2_PUBLIC_IP 177 | > NEXTAUTH_SECRET: "HmaFjYZ2jbUK7Ef+wZrBiJei4ZNGBAJ5IdiOGAyQegw=" 178 | > JWT_SECRET: "e5e425764a34a2117ec2028bd53d6f1388e7b90aeae9fa7735f2469ea3a6cc8c" 179 | > ``` 180 | > 181 | > ```zsh 182 | > kubectl apply -f kubernetes/04-configmap.yaml 183 | > ``` 184 | 185 | 186 | > ##### 6. Setup configuration 187 | > ```zsh 188 | > kubectl apply -f kubernetes/05-secrets.yaml 189 | > ``` 190 | 191 | > ##### 7. Deploy MongoDB 192 | >```zsh 193 | >kubectl apply -f kubernetes/06-mongodb-service.yaml 194 | >kubectl apply -f kubernetes/07-mongodb-statefulset.yaml 195 | >``` 196 | 197 | ##### 8. 
Deploy EasyShop 198 | ###### Create or update `kubernetes/08easyshop-deployment.yaml`: 199 | > ```yaml 200 | >apiVersion: apps/v1 201 | >kind: Deployment 202 | >metadata: 203 | > name: easyshop 204 | > namespace: easyshop 205 | >spec: 206 | > replicas: 2 207 | > selector: 208 | > matchLabels: 209 | > app: easyshop 210 | > template: 211 | > metadata: 212 | > labels: 213 | > app: easyshop 214 | > spec: 215 | > containers: 216 | > - name: easyshop 217 | > image: iemafzal/easyshop:latest 218 | > imagePullPolicy: Always 219 | > ports: 220 | > - containerPort: 3000 221 | > envFrom: 222 | > - configMapRef: 223 | > name: easyshop-config 224 | > - secretRef: 225 | > name: easyshop-secrets 226 | > env: 227 | > - name: NEXTAUTH_URL 228 | > valueFrom: 229 | > configMapKeyRef: 230 | > name: easyshop-config 231 | > key: NEXTAUTH_URL 232 | > - name: NEXTAUTH_SECRET 233 | > valueFrom: 234 | > secretKeyRef: 235 | > name: easyshop-secrets 236 | > key: NEXTAUTH_SECRET 237 | > - name: JWT_SECRET 238 | > valueFrom: 239 | > secretKeyRef: 240 | > name: easyshop-secrets 241 | > key: JWT_SECRET 242 | > resources: 243 | > requests: 244 | > memory: "256Mi" 245 | > cpu: "200m" 246 | > limits: 247 | > memory: "512Mi" 248 | > cpu: "500m" 249 | > startupProbe: 250 | > httpGet: 251 | > path: / 252 | > port: 3000 253 | > failureThreshold: 30 254 | > periodSeconds: 10 255 | > readinessProbe: 256 | > httpGet: 257 | > path: / 258 | > port: 3000 259 | > initialDelaySeconds: 20 260 | > periodSeconds: 15 261 | > livenessProbe: 262 | > httpGet: 263 | > path: / 264 | > port: 3000 265 | > initialDelaySeconds: 25 266 | > periodSeconds: 20 267 | > ``` 268 | 269 | > ```zsh 270 | > kubectl apply -f kubernetes/08-easyshop-deployment.yaml 271 | > ``` 272 | > 273 | > ```zsh 274 | > kubectl apply -f kubernetes/09-easyshop-service.yaml 275 | > ``` 276 | 277 | > ##### 7. Install NGINX Ingress Controller 278 | > 279 | > ```bash 280 | > kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml 281 | > ``` 282 | 283 | > ##### 8. Wait for the ingress controller to be ready: 284 | > ```bash 285 | > kubectl wait --namespace ingress-nginx \ 286 | > --for=condition=ready pod \ 287 | > --selector=app.kubernetes.io/component=controller \ 288 | > --timeout=90s 289 | > ``` 290 | 291 | 292 | > ##### 9. Deploy Ingress and HPA 293 | > ###### Create or update `kubernetes/10-ingress.yaml`: 294 | > ```yaml 295 | > apiVersion: networking.k8s.io/v1 296 | > kind: Ingress 297 | > metadata: 298 | > name: easyshop-ingress 299 | > namespace: easyshop 300 | > annotations: 301 | > nginx.ingress.kubernetes.io/ssl-redirect: "false" 302 | > nginx.ingress.kubernetes.io/proxy-body-size: "50m" 303 | > spec: 304 | > rules: 305 | > - host: "51.20.251.235.nip.io" 306 | > http: 307 | > paths: 308 | > - path: / 309 | > pathType: Prefix 310 | > backend: 311 | > service: 312 | > name: easyshop-service 313 | > port: 314 | > number: 80 315 | > ``` 316 | > ```zsh 317 | > kubectl apply -f kubernetes/10-ingress.yaml 318 | > kubectl apply -f kubernetes/11-hpa.yaml 319 | > ``` 320 | 321 | > ##### 10. Update Migration Job 322 | > ###### Create or update `kubernetes/12-migration-job.yaml`: 323 | > ```yaml 324 | > apiVersion: batch/v1 325 | > kind: Job 326 | > metadata: 327 | > name: db-migration 328 | > namespace: easyshop 329 | > spec: 330 | > template: 331 | > spec: 332 | > containers: 333 | > - name: migration 334 | > image: iemafzal/easyshop-migration:latest # update with the name that you have build. 
335 | > imagePullPolicy: Always 336 | > env: 337 | > - name: MONGODB_URI 338 | > value: "mongodb://mongodb-service:27017/easyshop" 339 | > restartPolicy: OnFailure 340 | > ``` 341 | 342 | > ```bash 343 | > # Run database migration 344 | > kubectl apply -f kubernetes/12-migration-job.yaml 345 | > ``` 346 | 347 | ## Verification 348 | 1. Check deployment status 349 | 350 | ```bash 351 | kubectl get pods -n easyshop 352 | ``` 353 | 2. Check services 354 | 355 | ```bash 356 | kubectl get svc -n easyshop 357 | ``` 358 | 3. Verify ingress 359 | 360 | ```bash 361 | kubectl get ingress -n easyshop 362 | ``` 363 | 4. Test MongoDB connection 364 | 365 | ```bash 366 | kubectl exec -it -n easyshop mongodb-0 -- mongosh --eval "db.serverStatus()" 367 | ``` 368 | ## Accessing the Application 369 | The application should now be accessible at: 370 | > [!WARNING] 371 | > - http://`<YOUR_EC2_PUBLIC_IP>`.nip.io 372 | > 373 | > 374 | > `nip.io` is a free wildcard DNS service that automatically maps any subdomain to the corresponding IP address. 375 | 376 | > [!NOTE] 377 | > - If you access `http://203.0.113.10.nip.io`, it resolves to `203.0.113.10`. 378 | > - If you use `app.203.0.113.10.nip.io`, it still resolves to `203.0.113.10`. 379 | > 380 | > **Why Use `nip.io`?** 381 | > - **Simplifies local and remote testing:** No need to set up custom DNS records. 382 | > - **Useful for Kubernetes Ingress:** You can access services using public IP-based domains. 383 | > - **Great for temporary or dynamic environments:** Works with CI/CD pipelines, cloud VMs, and local testing. 384 | > 385 | > **Who Provides This Service?** 386 | > - `nip.io` is an **open-source project maintained by Vincent Bernat**. 387 | > - It is provided **for free**, with no registration required. 388 | > - The service works by dynamically resolving any subdomain containing an IP address. 389 | > 390 | > **More details:** [https://nip.io](https://nip.io) 391 | 392 | 393 | 394 | 395 | 396 | 397 | ## Troubleshooting 398 | 1. If pods are not starting, check logs: 399 | 400 | ```bash 401 | kubectl logs <pod-name> -n easyshop 402 | ``` 403 | 2. For MongoDB connection issues: 404 | 405 | ```bash 406 | kubectl exec -it -n easyshop mongodb-0 -- mongosh 407 | ``` 408 | 3. To restart deployments: 409 | 410 | ```bash 411 | kubectl rollout restart deployment/easyshop -n easyshop 412 | ``` 413 | ## Cleanup 414 | To delete the cluster: 415 | 416 | ```bash 417 | kind delete cluster --name easyshop 418 | ``` 419 | -------------------------------------------------------------------------------- /projectGuide/online-shop.md: -------------------------------------------------------------------------------- 1 | # Online Shop Kubernetes Deployment Guide 2 | 3 | This guide provides step-by-step instructions for deploying the Online Shop application on Kubernetes using Kind (Kubernetes in Docker) and Ingress NGINX. 4 | 5 | ## Prerequisites 6 | 7 | - Docker installed 8 | - kubectl installed 9 | - Helm installed 10 | - Kind installed 11 | - An EC2 instance or any cloud VM (optional for production deployment) 12 | 13 | ## Table of Contents 14 | 15 | 1. [Setting Up the Kubernetes Cluster](#1-setting-up-the-kubernetes-cluster) 16 | 2. [Building and Loading the Docker Image](#2-building-and-loading-the-docker-image) 17 | 3. [Deploying the Application](#3-deploying-the-application) 18 | 4. [Setting Up Ingress NGINX](#4-setting-up-ingress-nginx) 19 | 5. [Accessing the Application](#5-accessing-the-application) 20 | 6. [Troubleshooting](#6-troubleshooting) 21 | 22 | ## 1.
Setting Up the Kubernetes Cluster 23 | 24 | Create a Kubernetes cluster using Kind with proper port mappings: 25 | 26 | ```bash 27 | # Create a Kind configuration file 28 | cat <<EOF > kind-cluster.yaml 29 | kind: Cluster 30 | apiVersion: kind.x-k8s.io/v1alpha4 31 | nodes: 32 | - role: control-plane 33 | kubeadmConfigPatches: 34 | - | 35 | kind: InitConfiguration 36 | nodeRegistration: 37 | kubeletExtraArgs: 38 | node-labels: "ingress-ready=true" 39 | extraPortMappings: 40 | - containerPort: 80 41 | hostPort: 80 42 | protocol: TCP 43 | listenAddress: "0.0.0.0" 44 | - containerPort: 443 45 | hostPort: 443 46 | protocol: TCP 47 | listenAddress: "0.0.0.0" 48 | - role: worker 49 | - role: worker 50 | EOF 51 | ``` 52 | Create the Kind cluster: 53 | ```bash 54 | kind create cluster --name online-shop --config kind-cluster.yaml 55 | ``` 56 | Verify the cluster is running: 57 | ```bash 58 | kubectl cluster-info 59 | ``` 60 | 61 | ## 2. Building and Loading the Docker Image 62 | Next, build the Docker image for the application and load it into the Kind cluster: 63 | 64 | ```bash 65 | # Build the Docker image 66 | docker build -t online_shop:latest . 67 | 68 | # Load the image into the Kind cluster 69 | kind load docker-image online_shop:latest --name online-shop 70 | ``` 71 | 72 | ## 3. Deploying the Application 73 | Create the necessary Kubernetes resources: 74 | 75 | ```bash 76 | # Create namespace 77 | kubectl apply -f namespace.yml 78 | 79 | # Create ConfigMap 80 | kubectl apply -f configmap.yml 81 | 82 | # Create Deployment 83 | kubectl apply -f deployment.yml 84 | 85 | # Create Service 86 | kubectl apply -f service.yml 87 | ``` 88 | 89 | Verify the deployment: 90 | 91 | ```bash 92 | # Check if pods are running 93 | kubectl get pods -n online-shop-prod 94 | 95 | # Check service 96 | kubectl get svc -n online-shop-prod 97 | ``` 98 | 99 | ## 4. Setting Up Ingress NGINX 100 | Install and configure Ingress NGINX. The Helm repository is added for convenience; the controller itself is installed below from the Kind-specific manifest: 101 | 102 | ```bash 103 | # Add the Ingress NGINX Helm repository 104 | helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx 105 | helm repo update 106 | ``` 107 | ```bash 108 | # Create a NodePort Service override for the Ingress NGINX controller 109 | cat <<EOF > ingress-nginx-values.yml 110 | apiVersion: v1 111 | kind: Service 112 | metadata: 113 | name: ingress-nginx-controller 114 | namespace: ingress-nginx 115 | spec: 116 | type: NodePort 117 | ports: 118 | - name: http 119 | port: 80 120 | protocol: TCP 121 | targetPort: http 122 | nodePort: 30080 123 | - name: https 124 | port: 443 125 | protocol: TCP 126 | targetPort: https 127 | nodePort: 30443 128 | selector: 129 | app.kubernetes.io/component: controller 130 | app.kubernetes.io/instance: ingress-nginx 131 | app.kubernetes.io/name: ingress-nginx 132 | EOF 133 | ``` 134 | 135 | Install Ingress NGINX: 136 | ```bash 137 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/kind/deploy.yaml 138 | 139 | # Wait for Ingress NGINX to be ready 140 | kubectl wait --namespace ingress-nginx \ 141 | --for=condition=ready pod \ 142 | --selector=app.kubernetes.io/component=controller \ 143 | --timeout=120s 144 | ``` 145 | 146 | Create an Ingress resource: 147 | 148 | ```bash 149 | # Create Ingress resource 150 | kubectl apply -f ingress.yml 151 | ``` 152 | 153 | ## 5. Accessing the Application 154 | 1.
After deployment, verify everything is running: 155 | ```bash 156 | # Check all resources 157 | kubectl get pods,svc,ingress -n online-shop-prod 158 | kubectl get pods -n ingress-nginx 159 | 160 | # Check ingress controller logs 161 | kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller 162 | 163 | # Test local connectivity 164 | curl -v http://localhost 165 | 166 | # Test with public IP 167 | curl -v http://13.53.137.11 168 | ``` 169 | --------------------------------------------------------------------------------