```
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-1-Responses/jothilal22.md:
--------------------------------------------------------------------------------
1 | **Name**: Jothilal Sottallu
2 |
3 | **Email**: srjothilal2001@gmail.com
4 |
5 | ---
6 |
7 | ### SOLUTION
8 |
9 | #### SETUP
10 |
11 | Minikube is a tool that lets you run a single-node Kubernetes cluster locally on your computer. It's designed for development, testing, and learning purposes, allowing you to create, manage, and deploy Kubernetes applications without the need for a full-scale production cluster.
12 |
13 | **INSTALL A HYPERVISOR** ~ DOCKER DESKTOP [With Kubernetes Enabled]
14 |
15 | **INSTALL MINIKUBE** ~ You can download and install Minikube from the official GitHub repository or with a package manager like Homebrew (on macOS).
16 |
17 | **START MINIKUBE** ~ Open a Terminal and run the command:
18 | **minikube start**
19 |
20 | **WAIT FOR SETUP** ~ Minikube will download the necessary ISO image, create a virtual machine, and configure the Kubernetes cluster. This might take a few minutes depending on your internet speed.
21 |
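**VERIFY THE CLUSTER** ~ (Optional) Before deploying anything, you can confirm the cluster is actually up:
```
minikube status
kubectl get nodes
```
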
22 | **Stop Minikube** ~ When you're done using the Minikube cluster, you can stop it with:
23 | **minikube stop**
24 |
25 | #### SOLUTION DETAILS
26 |
27 | Task: Created deployment configurations for two applications using the nginx:latest image.
28 |
29 | Deployment for Production:
30 | * Created the deployment configuration for the production application (production-app.yaml) in the local Minikube environment.
31 | * Applied the deployment using the command: kubectl apply -f production-app.yaml
32 |
33 | Pod Status Check:
34 | * Checked the status of pods using kubectl get pods.
35 | * Observed that the production application pod was stuck in a "Pending" state, indicating it hadn't been scheduled.
36 |
37 | **ERROR**
38 | ```
39 | -> Warning FailedScheduling 31s default-scheduler 0/1 nodes are available: 1 node(s) didn't match Pod's node affinity/selector. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
40 | ```
41 |
42 | Node Assignment:
43 | * Examined the details of the pending pod and noted that it hadn't been assigned to a node yet.
44 | * Labeled the Minikube node with `kubectl label node`, marking it as suitable for the production workload.
45 |
46 | ```
47 | kubectl label node minikube environment=production
48 | node/minikube labeled
49 | ```
50 |
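To confirm the label is in place before re-checking the pods, the node labels can be listed:

```
kubectl get nodes --show-labels
kubectl get nodes -l environment=production
```
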
51 | Pod Status Update:
52 | 1. After updating the node label, checked pod status again using kubectl get pods.
53 |
54 | 2. Found that the production application pod was now running successfully, indicating that it had been scheduled on the labeled node.
55 |
56 | Log Analysis:
57 | * Inspected the logs of the running pod to gain insight into its startup process.
58 | ```
59 | # Example: following the logs of one of the pods
60 | kubectl logs -f production-app-c58d79c56-2g577
61 |
62 | ```
63 | * Observed that the nginx server was starting up, and worker processes were being initialized.
64 |
65 | ## CODE SNIPPET
66 |
67 | PRODUCTION DEPLOYMENT
68 |
69 | ```
70 | apiVersion: apps/v1
71 | kind: Deployment
72 | metadata:
73 | name: production-app
74 | spec:
75 | selector:
76 | matchLabels:
77 | app: prod-app
78 | replicas: 3
79 | template:
80 | metadata:
81 | labels:
82 | app: prod-app
83 | spec:
84 | affinity:
85 | nodeAffinity:
86 | requiredDuringSchedulingIgnoredDuringExecution:
87 | nodeSelectorTerms:
88 | - matchExpressions:
89 | - key: environment
90 | operator: In
91 | values:
92 | - production
93 | containers:
94 | - image: nginx:latest
95 | name: nginx
96 |
97 | ```
98 | POD LOGS
99 |
100 | ```
101 | $ kubectl logs -f production-app-c58d79c56-2g577
102 |
103 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
104 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
105 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
106 | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
107 | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
108 | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
109 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
110 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
111 | /docker-entrypoint.sh: Configuration complete; ready for start up
112 | 2023/08/30 15:26:43 [notice] 1#1: using the "epoll" event method
113 | 2023/08/30 15:26:43 [notice] 1#1: nginx/1.25.2
114 | 2023/08/30 15:26:43 [notice] 1#1: built by gcc 12.2.0 (Debian 12.2.0-14)
115 | 2023/08/30 15:26:43 [notice] 1#1: OS: Linux 5.15.49-linuxkit-pr
116 | 2023/08/30 15:26:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
117 | 2023/08/30 15:26:43 [notice] 1#1: start worker processes
118 | 2023/08/30 15:26:43 [notice] 1#1: start worker process 29
119 | 2023/08/30 15:26:43 [notice] 1#1: start worker process 30
120 | 2023/08/30 15:26:43 [notice] 1#1: start worker process 31
121 | 2023/08/30 15:26:43 [notice] 1#1: start worker process 32
122 | 2023/08/30 15:26:43 [notice] 1#1: start worker process 33
123 |
124 | ```
125 |
126 |
127 |
128 | ## CHALLENGE 2
129 | #### SOLUTION DETAILS
130 |
131 | Task 2: Propose a solution to ensure high availability of the service during and after deployments. Consider the following requirements:
132 |
133 | * Minimize the occurrence of 503 - Service Unavailable errors during deployments.
134 | * Ensure that the service is only considered ready when all its required dependencies are also ready.
135 |
136 | Liveness Probe: The liveness probe is responsible for determining whether a container is alive or dead. If the liveness probe fails (returns an error), Kubernetes will restart the container.
137 | During deployment, liveness probes quickly detect and replace containers with issues.
138 | * It can indirectly help prevent 503 errors by ensuring that containers that are unresponsive or in a failed state are restarted promptly. If a container becomes unresponsive, it's replaced, reducing the chances of a long-lasting 503 error.
139 |
140 | Readiness Probe: The readiness probe is responsible for determining whether a container is ready to receive incoming network traffic. If the readiness probe fails (returns an error), the container is not included in the pool of endpoints for a Service.
141 | * Readiness probes are crucial during deployment to confirm that a container is ready before it gets traffic. They prevent routing requests to containers that are still initializing or not fully functional.
142 | * They prevent 503 errors during deployment by blocking traffic to unready containers. If a container is still initializing, it won't receive requests until it's ready, preventing 503 errors from unprepared containers.
143 |
144 | ## CODE SNIPPET
145 |
146 | ```
147 | Production Deployment
148 |
149 | apiVersion: apps/v1
150 | kind: Deployment
151 | metadata:
152 | name: production-app
153 | spec:
154 | selector:
155 | matchLabels:
156 | app: prod-app
157 | replicas: 3
158 | template:
159 | metadata:
160 | labels:
161 | app: prod-app
162 | spec:
163 | affinity:
164 | nodeAffinity:
165 | requiredDuringSchedulingIgnoredDuringExecution:
166 | nodeSelectorTerms:
167 | - matchExpressions:
168 | - key: environment
169 | operator: In
170 | values:
171 | - production
172 | containers:
173 | - image: nginx:latest
174 | name: nginx
175 | ports:
176 | - containerPort: 80
177 |
178 | # Readiness probe
179 | readinessProbe:
180 | failureThreshold: 3
181 | httpGet:
182 | path: /actuator/health/readiness
183 | port: 80
184 | scheme: HTTP
185 | periodSeconds: 20
186 | successThreshold: 1
187 | timeoutSeconds: 60
188 | initialDelaySeconds: 60
189 |
190 | # Liveness probe
191 | livenessProbe:
192 | httpGet:
193 | path: /actuator/health/liveness
194 | port: 80
195 | initialDelaySeconds: 60
196 | periodSeconds: 10
197 | timeoutSeconds: 60
198 | successThreshold: 1
199 | failureThreshold: 3
200 |
201 | ```
202 |
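To watch how the probes behave while a new version rolls out, the rollout status and pod readiness can be followed — an optional check, assuming the `production-app` deployment above:

```
kubectl rollout status deployment/production-app
kubectl get pods -l app=prod-app -w
```
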
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-1-Responses/surajdubey08.md:
--------------------------------------------------------------------------------
1 |
2 | **Name**: Suraj Dhar Dubey
3 |
4 | **Email**: surajdhar08.dubey@gmail.com
5 |
6 | ---
7 |
8 | ## Solution
9 |
10 | ### Solution Details
11 |
12 | **Task 1: Deployment Configuration**
13 |
14 | When tackling the task of setting up deployments for the development and production applications on the Kubernetes cluster, I took a methodical approach to ensure everything was in place for proper resource management.
15 |
16 | Firstly, I wanted to create a deployment configuration for the development application. To achieve this, I put together a YAML file named `development-deployment.yaml`. Within this file, I outlined the Deployment resource specifications according to the requirements. I made sure to set the number of replicas to 1, added clear labels (`app: dev-app`) to distinguish the application, and incorporated affinity settings. The affinity part was particularly crucial; I utilized it to guarantee that the pods would be scheduled on nodes with the label `environment=development`.
17 |
18 | Next, I moved on to the production application. Following the same approach, I crafted another YAML file called `production-deployment.yaml`. This file mirrored the previous one in terms of structure but was customized to fit the production environment's needs. I ensured the right number of replicas (3 in this case), labeled the application (`app: prod-app`), and configured affinity to make certain that pods were deployed to nodes marked with `environment=production`.
19 |
20 | In the task, the `requiredDuringSchedulingIgnoredDuringExecution` affinity setting was used to ensure that the pods of the applications were scheduled on nodes with the desired environment label (`development` or `production`). This setting enforces the placement rules at scheduling time, while the `IgnoredDuringExecution` part means that already-running pods are left in place even if a node's labels change later.
21 |
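For these affinity rules to be satisfiable, the target nodes need the matching labels. If they are not already present, they can be added along these lines (the node names below are placeholders):

```bash
# hypothetical node names; replace with the actual names from `kubectl get nodes`
kubectl label node dev-node-1 environment=development
kubectl label node prod-node-1 environment=production
```
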
22 | Once the YAML files were ready, I used the `kubectl apply -f` command to put the deployment configurations into action. This step allowed the applications to be set up and the associated configurations to take effect within the Kubernetes cluster.
23 |
24 | **Task 2: Log Inspection**
25 |
26 | To accomplish this, I considered a straightforward yet effective approach. I opted for the `kubectl logs` command. By utilizing this command and specifying `-l app=prod-app`, I was able to specifically target the pods that belonged to the production application. This meant I could access and review the logs from these particular pods, ensuring everything was functioning as expected.
27 |
28 | ### Code Snippet
29 | **Task 1: Deployment Configuration**
30 |
31 | development-deployment.yaml
32 | ```yaml
33 | apiVersion: apps/v1
34 | kind: Deployment
35 | metadata:
36 |   name: development-app
37 | spec:
38 | replicas: 1
39 | selector:
40 | matchLabels:
41 | app: dev-app
42 | template:
43 | metadata:
44 | labels:
45 | app: dev-app
46 | spec:
47 | containers:
48 | - name: nginx
49 | image: nginx:latest
50 | affinity:
51 | nodeAffinity:
52 | requiredDuringSchedulingIgnoredDuringExecution:
53 | nodeSelectorTerms:
54 | - matchExpressions:
55 | - key: environment
56 | operator: In
57 | values:
58 | - development
59 | ```
60 |
61 | production-deployment.yaml
62 | ```yaml
63 | apiVersion: apps/v1
64 | kind: Deployment
65 | metadata:
66 | name: production-app
67 | spec:
68 | replicas: 3
69 | selector:
70 | matchLabels:
71 | app: prod-app
72 | template:
73 | metadata:
74 | labels:
75 | app: prod-app
76 | spec:
77 | containers:
78 | - name: nginx
79 | image: nginx:latest
80 | affinity:
81 | nodeAffinity:
82 | requiredDuringSchedulingIgnoredDuringExecution:
83 | nodeSelectorTerms:
84 | - matchExpressions:
85 | - key: environment
86 | operator: In
87 | values:
88 | - production
89 | ```
90 |
91 | Apply the Manifests:
92 | ```bash
93 | kubectl apply -f development-deployment.yaml
94 | kubectl apply -f production-deployment.yaml
95 | ```
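
As a quick sanity check (not part of the original write-up), the `-o wide` output shows which node each pod landed on, confirming the affinity rules:

```bash
kubectl get pods -l app=dev-app -o wide
kubectl get pods -l app=prod-app -o wide
```
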
96 | **Task 2: Log Inspection**
97 | ```bash
98 | kubectl logs -l app=prod-app
99 | ```
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-2-Responses/Basil.md:
--------------------------------------------------------------------------------
1 | **Name**: basil
2 |
3 | **Slack User ID**: U05QWDXNWLF
4 |
5 | **Email**: basil-is@superhero.true
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | Given that we are working with a deployment that relies on services external to Kubernetes, we can utilize two features to meet our needs: the readiness probe and the deployment strategy.
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | apiVersion: apps/v1
19 | kind: Deployment
20 | metadata:
21 | name: nginx
22 | spec:
23 | replicas: 3
24 | selector:
25 | matchLabels:
26 | app: next-gen-app
27 | strategy:
28 | type: RollingUpdate
29 | rollingUpdate:
30 | maxUnavailable: 1
31 | maxSurge: 1
32 | template:
33 | metadata:
34 | labels:
35 | app: next-gen-app
36 | spec:
37 | containers:
38 | - name: nginx-container
39 | image: nginx
40 | ports:
41 | - containerPort: 80
42 | readinessProbe:
43 | httpGet:
44 | path: /index.html
45 | port: 80
46 | initialDelaySeconds: 5
47 | periodSeconds: 5
48 | ```
49 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-2-Responses/John_Kennedy.md:
--------------------------------------------------------------------------------
1 | **Name**: John Kennedy
2 |
3 | **Slack User ID**: U05D6BR916Z
4 |
5 | **Email**: johnkdevops@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Question: Service Availability During Deployment
14 |
15 | ## Task
16 |
17 | ### Minimize the occurrence of 503 - Service Unavailable errors during deployments.
18 |
19 | 1. Utilize Kubernetes deployments instead of directly running pods to ensure better control over your application's lifecycle.
20 |
21 | 2. Within your deployments, implement Rolling Updates rather than recreating pods. This approach ensures a smoother transition between versions, reducing the likelihood of service unavailability.
22 |
23 | 3. Employ environment variables inside your pods and utilize ConfigMaps as volumes. This practice enables you to update connection parameters to your backend SQL servers seamlessly by modifying the ConfigMap, simplifying configuration management.
24 |
25 | 4. Expose your deployment as a service, such as NodePort, to make your application accessible and maintain communication during deployments.
26 |
27 | 5. Consider implementing an Ingress Controller, like the NGINX ingress controller, to efficiently manage external access to your services and enhance routing capabilities.
28 |
29 | 6. Implement health checks for all external dependencies (Database, Kafka, Redis) to ensure they are ready before considering the service itself as ready. Kubernetes provides readiness probes that can be defined in your service's pod configuration. These probes can be customized to check the health of dependencies before allowing traffic to be routed to the service.
30 |
31 | For example, you can create readiness probes for each dependency by defining HTTP endpoints, TCP checks, or custom scripts to verify the health of these dependencies. Your service should only be considered ready when all dependency readiness probes report success.
32 |
33 | ### Ensure that the service is only considered ready when all its required dependencies are also ready.
34 |
35 | 1. Implement Readiness and Liveness Probes to assess the health of your service and its dependencies. This proactive monitoring ensures that your service is considered ready only when all required components are in a healthy state.
36 |
37 | 2. Leverage labels from your deployment as selector labels in your service configuration to ensure that the service correctly routes traffic to pods with the necessary dependencies.
38 |
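A minimal sketch of what such probes could look like in the deployment's pod template — the port and the `/ready` and `/healthz` paths are assumptions standing in for whatever endpoints the service actually exposes (the `/ready` endpoint is assumed to also verify the Database, Kafka, and Redis connections):

```yaml
# fragment for spec.template.spec.containers[...] of the Deployment
readinessProbe:
  httpGet:
    path: /ready        # assumed endpoint that also checks DB/Kafka/Redis connectivity
    port: 8080          # assumed service port
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
livenessProbe:
  httpGet:
    path: /healthz      # assumed basic process-health endpoint
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  failureThreshold: 3
```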
39 |
40 |
41 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-2-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
19 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-2-Responses/kodekloud_response.md:
--------------------------------------------------------------------------------
1 | # Weekly Challenge: Service Availability During Deployment
2 |
3 | ## Task: Ensuring Service Availability During and After Deployments
4 |
5 | **Objective:** Propose a solution to ensure high availability of the service during and after deployments, considering the following requirements:
6 |
7 | 1. Minimize the occurrence of 503 - Service Unavailable errors during deployments.
8 | 2. Ensure that the service is only considered ready when all its required dependencies are also ready.
9 |
10 | ### Proposed Solution
11 |
12 | To meet these requirements, we can implement the following steps:
13 |
14 | 1. **ReadinessProbe Configuration:** Configure a readinessProbe for the pod to determine when the service is ready to accept traffic. This probe can be customized based on your specific service requirements. For example, you can check if the service's critical endpoints are responsive, and all dependencies are ready. Here's an example of a readinessProbe configuration:
15 |
16 | ```yaml
17 | readinessProbe:
18 | httpGet:
19 | path: /healthz
20 | port: 80
21 | initialDelaySeconds: 5
22 | periodSeconds: 5
23 | ```
24 |
25 | 2. **Rolling Updates:** Ensure that the deployment strategy type is set to RollingUpdate. This ensures that Kubernetes will gradually update the pods during deployments, reducing the risk of a sudden surge in traffic to the new pods that might not be fully ready.
26 |
27 | Here's an example of how to set the deployment strategy type to RollingUpdate in a deployment manifest:
28 |
29 |
30 |
31 | ```yaml
32 | strategy:
33 | type: RollingUpdate
34 | rollingUpdate:
35 | maxUnavailable: 25%
36 | maxSurge: 25%
37 | ```
38 |
39 | 3. **Dependency Checks:** Implement checks within your service code to verify that all required dependencies (Database, Kafka, Redis, etc.) are reachable and ready before marking the service as ready. You can use health checks or custom logic to achieve this; a sketch of one possible probe-level check appears after this list.
40 |
41 | 4. **Monitoring and Alerts:** Set up monitoring for your service and dependencies. Use tools like Prometheus and Grafana to collect metrics and set alerts for any anomalies or service unavailability.
42 |
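Expanding on the dependency checks in step 3, one possible shape is a readiness probe that runs a small command testing reachability of each dependency. The hostnames and ports below are placeholders, and this assumes a shell and `nc` are available in the image:

```yaml
# fragment for the service's container spec; hostnames/ports are placeholders
readinessProbe:
  exec:
    command:
      - sh
      - -c
      - nc -z db-host 5432 && nc -z kafka-host 9092 && nc -z redis-host 6379
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 3
```
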
43 | By following these steps, you can ensure high availability of your service during and after deployments while minimizing the occurrence of 503 - Service Unavailable errors.
44 |
45 | Additionally, this approach promotes a smoother transition during updates and reduces the impact on end-users.
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/Basil.md:
--------------------------------------------------------------------------------
1 | **Name**: Basil
2 | **Slack User ID**: U05QWDXNWLF
3 | **Email**: basil-is@superhero.true
4 |
5 | ---
6 |
7 | ### Solution:
8 |
9 | #### Step 1: Create a ConfigMap with the Nginx Configuration
10 |
11 | The following manifest creates a `ConfigMap` named `nginx-conf` that contains the Nginx configuration:
12 |
13 | ```yaml
14 | apiVersion: v1
15 | kind: ConfigMap
16 | metadata:
17 | name: nginx-conf
18 | data:
19 | default.conf: |
20 | server {
21 | listen 80;
22 | server_name localhost;
23 | location / {
24 | proxy_set_header X-Forwarded-For $remote_addr;
25 | proxy_set_header Host $http_host;
26 | proxy_pass "http://127.0.0.1:8080";
27 | }
28 | }
29 | ```
30 |
31 | #### Step 2: Define the Deployment
32 |
33 | The next step is to create a deployment named `nginx-proxy-deployment` that consists of the two containers and mounts the earlier created `ConfigMap`:
34 |
35 | ```yaml
36 | apiVersion: apps/v1
37 | kind: Deployment
38 | metadata:
39 | labels:
40 | app: nginx-proxy-deployment
41 | name: nginx-proxy-deployment
42 | spec:
43 | replicas: 1
44 | selector:
45 | matchLabels:
46 | app: nginx-proxy-deployment
47 | template:
48 | metadata:
49 | labels:
50 | app: nginx-proxy-deployment
51 | spec:
52 | containers:
53 | - image: kodekloud/simple-webapp
54 | name: simple-webapp
55 | ports:
56 | - containerPort: 8080
57 | - image: nginx
58 | name: nginx-proxy
59 | ports:
60 | - containerPort: 80
61 | volumeMounts:
62 | - name: nginx-config
63 | mountPath: /etc/nginx/conf.d
64 | volumes:
65 | - name: nginx-config
66 | configMap:
67 | name: nginx-conf
68 | ```
69 |
70 | #### Step 3: Expose the service
71 |
72 | ```yaml
73 | apiVersion: v1
74 | kind: Service
75 | metadata:
76 | name: nginx-proxy-service
77 | spec:
78 | selector:
79 | app: nginx-proxy-deployment
80 | ports:
81 | - protocol: TCP
82 | port: 80
83 | targetPort: 80
84 | ```
85 |
86 | #### Step 4: Validate the Deployment
87 |
88 | 1. Get the service IP:
89 |
90 | ```bash
91 | kubectl get svc nginx-proxy-service
92 | ```
93 |
94 | 2. Test the setup:
95 |
96 | Using `curl` or any HTTP client, send a request to the IP address retrieved:
97 |
98 | ```bash
99 | curl 10.42.0.15
100 | ```
101 |
102 | As expected the output is an HTML response from the `simple-webapp`:
103 |
104 | ```
105 | Hello from Flask
106 | Hello from nginx-proxy-deployment-56bb6db74f-2px7v!
107 | ```
117 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/John_Kennedy.md:
--------------------------------------------------------------------------------
1 | **Name**: John Kennedy
2 |
3 | **Slack User ID**: U05D6BR916Z
4 |
5 | **Email**: johnkdevops@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Question: Deployment Configuration in Kubernetes
14 |
15 | ## Task
16 |
17 | ### Create a ConfigMap named nginx-conf with the provided Nginx configuration.
18 |
19 | 1. I used the Kubernetes docs to get a sample ConfigMap YAML file.
20 | 2. I created a YAML file using vi nginx-conf.yaml.
21 | 3. I changed the name to nginx-conf and added the key nginx.conf:.
22 | 4. I added the nginx configuration to the value of that key.
23 | 5. I ran a dry run before creating the ConfigMap, using k apply -f nginx-conf.yaml --dry-run=client -o yaml to verify the YAML file.
24 | 6. I created the ConfigMap by using k apply -f nginx-conf.yaml.
25 | 7. I inspected the ConfigMap by using k describe cm nginx-conf.
26 |
27 | ## YAML File
28 |
29 | apiVersion: v1
30 | kind: ConfigMap
31 | metadata:
32 | name: nginx-conf
33 | data:
34 | nginx.conf: |
35 | server {
36 | listen 80;
37 | server_name localhost;
38 | location / {
39 | proxy_set_header X-Forwarded-For $remote_addr;
40 | proxy_set_header Host $http_host;
41 | proxy_pass "http://127.0.0.1:8080";
42 | }
43 | }
44 |
45 | ### Define a deployment named nginx-proxy-deployment with two containers: simple-webapp and nginx-proxy.
46 |
47 | 1. I used imperative commands to create the deployment by running k create deploy nginx-proxy-deployment --image=kodekloud/simple-webapp --port=8080 --dry-run=client -o yaml > nginx-proxy-deploy.yaml
48 | 2. I opened the file by using vi nginx-proxy-deploy.yaml and added the nginx-proxy container.
49 |
50 | ## YAML File
51 |
52 | apiVersion: apps/v1
53 | kind: Deployment
54 | metadata:
55 | name: nginx-proxy-deployment
56 | spec:
57 | replicas: 1 # You can adjust the number of replicas as needed
58 | selector:
59 | matchLabels:
60 | app: nginx-proxy
61 | template:
62 | metadata:
63 | labels:
64 | app: nginx-proxy
65 | spec:
66 | containers:
67 | - name: simple-webapp
68 | image: kodekloud/simple-webapp
69 | ports:
70 | - containerPort: 8080
71 | - name: nginx-proxy # Add
72 | image: nginx # Add
73 | ports: # Add
74 | - containerPort: 80 # Add
75 |
76 |
77 | ### Mount the nginx-conf ConfigMap to the path /etc/nginx/conf.d/default.conf in the nginx-proxy container.
78 |
79 | 1. I added volumeMounts section name: nginx-conf-volume with mountPath: /etc/nginx/conf.d/default.conf subPath: nginx.conf to nginx-proxy container.
80 | 2. I added volumes section with name of volume: name: nginx-conf-volume with configMap: name: nginx-conf.
81 | 3. I create the deploy by using k apply -f nginx-proxy-deploy.yaml
82 | 4. I watch the deploy to see when it is ready by using k get deploy -w
83 | 5. I inspect deployment by using k describe deploy nginx-proxy-deployment.
84 |
85 | ## YAML File
86 |
87 | apiVersion: apps/v1
88 | kind: Deployment
89 | metadata:
90 | name: nginx-proxy-deployment
91 | spec:
92 | replicas: 1 # You can adjust the number of replicas as needed
93 | selector:
94 | matchLabels:
95 | app: nginx-proxy
96 | template:
97 | metadata:
98 | labels:
99 | app: nginx-proxy
100 | spec:
101 | containers:
102 | - name: simple-webapp
103 | image: kodekloud/simple-webapp
104 | ports:
105 | - containerPort: 8080
106 | - name: nginx-proxy
107 | image: nginx
108 | ports:
109 | - containerPort: 80
110 | volumeMounts: # Add
111 | - name: nginx-conf-volume # Add
112 | mountPath: /etc/nginx/conf.d/default.conf # Add
113 | subPath: nginx.conf # Add
114 | volumes: # Add
115 | - name: nginx-conf-volume # Add
116 | configMap: # Add
117 | name: nginx-conf # Add
118 |
119 | ### Expose the nginx-proxy deployment on port 80.
120 |
121 | 1. I expose the deployment with dry run by using k expose deploy nginx-proxy-deployment --name=nginx-proxy-service --type=NodePort --port=80 --target-port=80 --dry-run=client -o yaml > nginx-proxy-svc.yaml
122 | 2. I would expose the deployment using k apply -f nginx-proxy-svc.yaml
123 | 3. I would verify the svc was created for the deployment by using k get svc nginx-proxy-service -o wide
124 | 4. I will note what is the NodePort to use: 30576
125 |
126 | ## YAML File
127 |
128 | apiVersion: v1
129 | kind: Service
130 | metadata:
131 | creationTimestamp: null
132 | name: nginx-proxy-service
133 | spec:
134 | ports:
135 | - port: 80
136 | protocol: TCP
137 | targetPort: 80
138 | selector:
139 | app: nginx-proxy
140 | type: NodePort
141 |
142 | ### Validate the setup by accessing the service. You should observe content from the simple-webapp response.
143 |
144 | 1. To verify I can reach the simple-webapp application through the proxy, I use curl -I http://localhost:30576.
145 | 2. I get a 200 OK.
146 | 3. To verify I can reach the nginx-proxy, I use curl -I http://localhost:8080.
147 | 4. I get a 200 OK.
148 | 5. Everything is up, running, and working!
149 |
150 |
151 |
152 |
153 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/Mason889.md:
--------------------------------------------------------------------------------
1 | **Name**: Kostiantyn Zaihraiev
2 |
3 | **Slack User ID**: U04FZK4HL0L
4 |
5 | **Email**: kostya.zaigraev@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | After reading the task, I decided to pregenerate YAML templates to ease configuration and management of all resources.
14 |
15 | First of all, I created an `nginx.conf` file and filled it in with the provided Nginx configuration.
16 |
17 | Then I've pregenerated `configmap.yaml` file using `kubectl` command:
18 |
19 | ```bash
20 | kubectl create configmap nginx-conf --from-file=nginx.conf --dry-run=client -o yaml > configmap.yaml
21 | ```
22 |
23 | After making some changes in the file I've applied it to the cluster:
24 |
25 | ```bash
26 | kubectl apply -f configmap.yaml
27 | ```
28 |
29 | Then I've pregenerated `deployment.yaml` file in which I've defined a deployment named `nginx-proxy-deployment` with two containers: `simple-webapp` and `nginx-proxy` using `kubectl` command:
30 |
31 | ```bash
32 | kubectl create deployment nginx-proxy-deployment --image=kodekloud/simple-webapp --image=nginx --dry-run=client -o yaml > deployment.yaml
33 | ```
34 |
35 | After making some changes (including mounting the `nginx-conf` ConfigMap to the path `/etc/nginx/conf.d/default.conf` in the `nginx-proxy` container) in the file I've applied it to the cluster:
36 |
37 | ```bash
38 | kubectl apply -f deployment.yaml
39 | ```
40 |
41 | Finally I've exposed the `nginx-proxy` deployment on port 80 (I've used `NodePort` type of service to expose deployment).
42 |
43 | ```bash
44 | kubectl expose deployment nginx-proxy-deployment --port=80 --type=NodePort --dry-run=client -o yaml > service.yaml
45 | ```
46 |
47 | After making some changes in the file I've applied it to the cluster:
48 |
49 | ```bash
50 | kubectl apply -f service.yaml
51 | ```
52 |
53 | ### Code Snippet
54 |
55 | #### configmap.yaml
56 | ```yaml
57 | apiVersion: v1
58 | data:
59 | nginx.conf: |-
60 | server {
61 | listen 80;
62 | server_name localhost;
63 | location / {
64 | proxy_set_header X-Forwarded-For $remote_addr;
65 | proxy_set_header Host $http_host;
66 | proxy_pass "http://127.0.0.1:8080";
67 | }
68 | }
69 | kind: ConfigMap
70 | metadata:
71 | name: nginx-conf
72 | ```
73 |
74 | #### deployment.yaml
75 | ```yaml
76 | apiVersion: apps/v1
77 | kind: Deployment
78 | metadata:
79 | labels:
80 | app: nginx-proxy-deployment
81 | name: nginx-proxy-deployment
82 | spec:
83 | replicas: 1
84 | selector:
85 | matchLabels:
86 | app: nginx-proxy-deployment
87 | strategy: {}
88 | template:
89 | metadata:
90 | labels:
91 | app: nginx-proxy-deployment
92 | spec:
93 | containers:
94 | - image: kodekloud/simple-webapp
95 | name: simple-webapp
96 | resources: {}
97 | ports:
98 | - containerPort: 8080
99 | - image: nginx
100 | name: nginx-proxy
101 | resources: {}
102 | ports:
103 | - containerPort: 80
104 | volumeMounts:
105 | - name: nginx-conf
106 | mountPath: /etc/nginx/conf.d/default.conf
107 | subPath: nginx.conf
108 | volumes:
109 | - name: nginx-conf
110 | configMap:
111 | name: nginx-conf
112 | ```
113 |
114 | #### service.yaml
115 | ```yaml
116 | apiVersion: v1
117 | kind: Service
118 | metadata:
119 | labels:
120 | app: nginx-proxy-deployment
121 | name: nginx-proxy-svc
122 | spec:
123 | ports:
124 | - port: 80
125 | protocol: TCP
126 | targetPort: 80
127 | selector:
128 | app: nginx-proxy-deployment
129 | type: NodePort # I've used NodePort type of service to expose deployment
130 | ```
131 |
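#### Validation (optional)

Not part of the original write-up — one way to check the result is to look up the NodePort and curl it (the node IP and port below are placeholders):

```bash
kubectl get svc nginx-proxy-svc
curl http://<node-ip>:<node-port>
```
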
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/adjurk.md:
--------------------------------------------------------------------------------
1 | **Name**: Adam Jurkiewicz
2 |
3 | **Slack User ID**: U05PNQU3CUA
4 |
5 | **Email**: a.jurkiewicz5 (at) gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | First, create a `default.conf` file and paste the provided nginx configuration into it. Next, the ConfigMap can be created imperatively:
14 |
15 | ```bash
16 | $ kubectl create cm nginx-conf --from-file default.conf
17 | ```
18 |
19 | Secondly, create a Deployment manifest using `kubectl create deployment` client dry-run. Container names will be customized after generating the manifest.
20 |
21 | ```bash
22 | $ kubectl create deploy nginx-proxy-deployment --image kodekloud/simple-webapp --image nginx -o yaml --dry-run=client > nginx-proxy-deployment.yml
23 | ```
24 |
25 | The Deployment should look as follows:
26 |
27 | ```yaml
28 | apiVersion: apps/v1
29 | kind: Deployment
30 | metadata:
31 | labels:
32 | app: nginx-proxy-deployment
33 | name: nginx-proxy-deployment
34 | spec:
35 | replicas: 1
36 | selector:
37 | matchLabels:
38 | app: nginx-proxy-deployment
39 | template:
40 | metadata:
41 | labels:
42 | app: nginx-proxy-deployment
43 | spec:
44 | containers:
45 | - image: kodekloud/simple-webapp
46 | name: simple-webapp
47 | ports:
48 | - containerPort: 8080
49 | - image: nginx
50 | name: nginx-proxy
51 | ports:
52 | - containerPort: 80
53 | volumeMounts:
54 | - name: nginx-conf-volume
55 | mountPath: /etc/nginx/conf.d
56 | volumes:
57 | - name: nginx-conf-volume
58 | configMap:
59 | name: nginx-conf
60 |
61 | ```
62 |
63 | Apply the Deployment manifest file using `kubectl apply`:
64 |
65 | ```bash
66 | $ kubectl apply -f nginx-proxy-deployment.yml
67 | ```
68 |
69 | The Deployment can then be exposed with `kubectl expose`:
70 |
71 | ```bash
72 | $ kubectl expose deploy nginx-proxy-deployment --port 80
73 | ```
74 |
75 | ### Testing
76 |
77 | Tests can be made within a `curlimages/curl` Pod:
78 |
79 | ```
80 | $ kubectl run -it curlpod --image curlimages/curl -- curl -v nginx-proxy-deployment
81 | ```
82 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/cc-connected.md:
--------------------------------------------------------------------------------
1 | **Name**: cc-connected
2 |
3 | **Slack User ID**: U053SNUCHAM
4 |
5 | **Email**: office@cc-connected.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | Create ConfigMap with desired nginx configuration specified in default.conf file.\
14 | Create Deployment contained of 2 containers and a volume mapped to ConfigMap. In containers specification: \
15 | 1) simple-webapp\
16 | 2) nginx-proxy - with volumeMount containing volume's name and mountPath directory location of default.conf file\
17 | Create a service to make the deployment accessible within cluster.\
18 | Create a temporary pod to test response from service.
19 |
20 | ### Code Snippet
21 |
22 | ```bash
23 | vi nginx-conf.yaml
24 | ```
25 |
26 | ```yaml
27 | apiVersion: v1
28 | kind: ConfigMap
29 | metadata:
30 | name: nginx-conf
31 | data:
32 | default.conf: |
33 | server {
34 | listen 80;
35 | server_name localhost;
36 | location / {
37 | proxy_set_header X-Forwarded-For $remote_addr;
38 | proxy_set_header Host $http_host;
39 | proxy_pass "http://127.0.0.1:8080";
40 | }
41 | }
42 | ```
43 |
44 | ```bash
45 | kubectl apply -f nginx-conf.yaml
46 | ```
47 |
48 | ```bash
49 | kubectl get configmap
50 | ```
51 |
52 | ```bash
53 | vi nginx-proxy-deployment.yaml
54 | ```
55 |
56 | ```yaml
57 | apiVersion: apps/v1
58 | kind: Deployment
59 | metadata:
60 | name: nginx-proxy-deployment
61 | spec:
62 | replicas: 1
63 | selector:
64 | matchLabels:
65 | app: nginx
66 | template:
67 | metadata:
68 | labels:
69 | app: nginx
70 | spec:
71 | containers:
72 | - name: simple-webapp
73 | image: kodekloud/simple-webapp:latest
74 | ports:
75 | - containerPort: 8080
76 | - name: nginx-proxy
77 | image: nginx:latest
78 | ports:
79 | - containerPort: 80
80 | volumeMounts:
81 | - name: nginx-vol-config
82 | mountPath: /etc/nginx/conf.d
83 | volumes:
84 | - name: nginx-vol-config
85 | configMap:
86 | name: nginx-conf
87 | ```
88 |
89 | ```bash
90 | kubectl apply -f nginx-proxy-deployment.yaml
91 | ```
92 |
93 | ```bash
94 | kubectl get deploy -o=wide -w
95 | ```
96 |
97 | ```bash
98 | kubectl get pods -o=wide -w
99 | ```
100 |
101 | ```bash
102 | kubectl describe pod nginx-proxy-deployment-867bbd6b9b-szlqs
103 | ```
104 |
105 | ```bash
106 | kubectl expose deployment nginx-proxy-deployment --port=80 --target-port=80 --name=nginx-service
107 | ```
108 |
109 | ```bash
110 | kubectl get service -o=wide -w #[Type:ClusterIP Cluster-Ip:10.107.147.92 External-Ip: Port:80/TCP]
111 | ```
112 |
113 | ```bash
114 | kubectl run curltestpod --image=curlimages/curl -it --rm --restart=Never -- curl 10.107.147.92:80
115 | ```
116 |
117 | ```bash
118 | kubectl logs nginx-proxy-deployment-867bbd6b9b-szlqs
119 | ```
120 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
19 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/housseinhmila.md:
--------------------------------------------------------------------------------
1 | **Name**: Houssein Hmila
2 |
3 | **Slack User ID**: U054SML63K5
4 |
5 | **Email**: housseinhmila@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | #### Step 1 (ConfigMap Creation):
14 |
15 | Created a directory called deployment.
16 | Inside the deployment directory, created a file named default.conf and pasted the Nginx configuration into it.
17 | Changed the permissions of default.conf to make it readable.
18 | Created a ConfigMap named nginx-conf from the default.conf file.
19 | #### Step 2, 3, and 4 (Deployment Creation, Volume Mount, and Service Exposure):
20 |
21 | Created a YAML file named nginx-proxy-deployment.yaml containing the Deployment resource configuration with two containers (simple-webapp and nginx-proxy) and volume mounts for the Nginx configuration.
22 | Applied the nginx-proxy-deployment.yaml using kubectl apply to create the deployment.
23 | Exposed the deployment as a Kubernetes Service on port 80 using kubectl expose
24 |
25 | ### Code Snippet
26 |
27 | ```
28 | mkdir deployment
29 | cd deployment
30 | vim default.conf
31 | ```
32 | ##### paste this config
33 | ```
34 | server {
35 | listen 80;
36 | server_name localhost;
37 | location / {
38 | proxy_set_header X-Forwarded-For $remote_addr;
39 | proxy_set_header Host $http_host;
40 | proxy_pass "http://127.0.0.1:8080";
41 | }
42 | }
43 | ```
44 | ##### run these commands:
45 | ```
46 | chmod 666 default.conf
47 | kubectl create configmap nginx-conf --from-file=./default.conf
48 | vim nginx-proxy-deployment.yaml
49 | ```
50 | ##### paste this code
51 | ```
apiVersion: apps/v1
52 | kind: Deployment
53 | metadata:
54 | name: nginx-proxy-deployment
55 | labels:
56 | app: nginx
57 | spec:
58 | replicas: 1
59 | selector:
60 | matchLabels:
61 | app: nginx
62 | template:
63 | metadata:
64 | labels:
65 | app: nginx
66 | spec:
67 | containers:
68 | - name: simple-webapp
69 | image: kodekloud/simple-webapp
70 | ports:
71 | - containerPort: 8080
72 | - name: nginx-proxy
73 | image: nginx
74 | ports:
75 | - containerPort: 80
76 | volumeMounts:
77 | - name: config-volume
78 | mountPath: /etc/nginx/conf.d/
79 | volumes:
80 | - name: config-volume
81 | configMap:
82 | name: nginx-conf
83 | ```
84 | ##### run this command:
85 | ```
86 | kubectl apply -f nginx-proxy-deployment.yaml
87 | kubectl expose deployment nginx-proxy-deployment --port=80 --target-port=80
88 | ```
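
##### optionally verify the setup
A quick check from a throwaway pod; this assumes the service created by `kubectl expose` kept the default name `nginx-proxy-deployment`:
```
kubectl run curlpod --image=curlimages/curl -it --rm --restart=Never -- curl http://nginx-proxy-deployment
```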
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/jothilal22.md:
--------------------------------------------------------------------------------
1 | **Name**: Jothilal Sottallu
2 |
3 | **Email**: srjothilal2001@gmail.com
4 |
5 | ---
6 |
7 | ### SOLUTION
8 |
9 | Imagine you have a website, and you want it to be super reliable and handle lots of visitors smoothly. To achieve this, you set up a special system using Kubernetes. In this system, you have two main parts.
10 |
11 | * First, there's the "simple-webapp," which is like the heart of your website. It knows how to show your web pages and listens for requests on the internet at port 8080.
12 |
13 | * Think of 'nginx-proxy' as a traffic director for your website, just like a friendly hotel receptionist. It stands at the main entrance (port 80) and guides visitors to the 'simple-webapp' (port 8080), ensuring they find what they're looking for on your website smoothly.
14 |
15 | First we will create a configmap
16 |
17 | ```
18 | apiVersion: v1
19 | kind: ConfigMap
20 | metadata:
21 | name: nginx-conf
22 | data:
23 | default.conf: |
24 | server {
25 | listen 80;
26 | server_name localhost;
27 | location / {
28 | proxy_set_header X-Forwarded-For $remote_addr;
29 | proxy_set_header Host $http_host;
30 | proxy_pass "http://127.0.0.1:8080";
31 | }
32 | }
33 |
34 | ```
35 |
36 | * Command to apply the ConfigMap
37 |
38 | ```kubectl apply -f nginx-config.yaml```
39 |
40 |
41 |
42 | ```
43 | apiVersion: apps/v1
44 | kind: Deployment
45 | metadata:
46 | name: nginx-proxy-deployment
47 | spec:
48 | replicas: 1
49 | selector:
50 | matchLabels:
51 | app: nginx-proxy
52 | template:
53 | metadata:
54 | labels:
55 | app: nginx-proxy
56 | spec:
57 | containers:
58 | - name: simple-webapp
59 | image: kodekloud/simple-webapp
60 | ports:
61 | - containerPort: 8080
62 | - name: nginx-proxy
63 | image: nginx
64 | ports:
65 | - containerPort: 80
66 | volumeMounts:
67 | - name: nginx-conf
68 | mountPath: /etc/nginx/conf.d/default.conf
69 | subPath: default.conf
70 | volumes:
71 | - name: nginx-conf
72 | configMap:
73 | name: nginx-conf
74 | ```
75 | * Command to Apply the Deployment
76 |
77 |
78 | ```kubectl apply -f nginx-proxy-deployment.yaml```
79 |
80 | Expose the nginx-proxy deployment on port 80.
81 |
82 | ```
apiVersion: v1
83 | kind: Service
84 | metadata:
85 | name: nginx-proxy-service
86 | spec:
87 | selector:
88 | app: nginx-proxy
89 | ports:
90 | - protocol: TCP
91 | port: 80
92 | targetPort: 80
93 | type: NodePort # You can choose the appropriate service type for your environment
94 | ```
95 |
96 | * Command to Apply the Service Yaml
97 |
98 | ```kubectl apply -f nginx-proxy-service.yaml```
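
As an optional check (assuming the cluster is Minikube, as in the earlier challenge), the NodePort URL can be retrieved and curled to see the simple-webapp response:

```
kubectl get svc nginx-proxy-service
minikube service nginx-proxy-service --url
```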
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-3-Responses/kodekloud_response.md:
--------------------------------------------------------------------------------
1 | # Deployment Configuration in Kubernetes
2 |
3 | You are tasked with setting up a deployment named `nginx-proxy-deployment` that includes two containers: `simple-webapp` and `nginx-proxy`.
4 |
5 | ## Solution
6 |
7 | To accomplish this task, follow these steps:
8 |
9 | 1. **Create a ConfigMap for Nginx Configuration (`nginx-conf`):**
10 |
11 | ```yaml
12 | apiVersion: v1
13 | kind: ConfigMap
14 | metadata:
15 | name: nginx-conf
16 | data:
17 | default.conf: |
18 | server {
19 | listen 80;
20 | server_name localhost;
21 | location / {
22 | proxy_set_header X-Forwarded-For $remote_addr;
23 | proxy_set_header Host $http_host;
24 | proxy_pass "http://127.0.0.1:8080";
25 | }
26 | }
27 | ```
28 |
29 | 2. **Define a Deployment (nginx-proxy-deployment) with Two Containers (simple-webapp and nginx-proxy):**
30 |
31 | ```yaml
32 | apiVersion: apps/v1
33 | kind: Deployment
34 | metadata:
35 | name: nginx-proxy-deployment
36 | labels:
37 | app: nginx-proxy
38 | spec:
39 | replicas: 1
40 | selector:
41 | matchLabels:
42 | app: nginx-proxy
43 | template:
44 | metadata:
45 | labels:
46 | app: nginx-proxy
47 | spec:
48 | volumes:
49 | - name: nginx-conf
50 | configMap:
51 | name: nginx-conf
52 | items:
53 | - key: default.conf
54 | path: default.conf
55 | containers:
56 | - name: nginx-proxy
57 | image: nginx
58 | ports:
59 | - containerPort: 80
60 | protocol: TCP
61 | volumeMounts:
62 | - name: nginx-conf
63 | readOnly: true
64 | mountPath: /etc/nginx/conf.d/default.conf
65 | subPath: default.conf
66 | - name: simple-webapp
67 | image: kodekloud/simple-webapp
68 | ports:
69 | - containerPort: 8080
70 | protocol: TCP
71 |
72 | ```
73 |
74 |
75 | 3. **Create a Service (nginx-proxy-deployment) to Expose the Deployment:**
76 |
77 | ```yaml
78 | apiVersion: v1
79 | kind: Service
80 | metadata:
81 | name: nginx-proxy-deployment
82 | labels:
83 | app: nginx-proxy
84 | spec:
85 | ports:
86 | - protocol: TCP
87 | port: 80
88 | targetPort: 80
89 | selector:
90 | app: nginx-proxy
91 | type: ClusterIP
92 | ```
93 |
94 | 4. **Validate the Setup:**
95 |
96 | Access the service to ensure it's correctly configured. You should observe content from the simple-webapp response.
97 |
98 | This solution sets up a Kubernetes Deployment named nginx-proxy-deployment with two containers, simple-webapp and nginx-proxy, and ensures that the Nginx configuration is correctly applied using a ConfigMap.
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/Basil.md:
--------------------------------------------------------------------------------
1 | **Name**: Basil
2 | **Slack User ID**: U05QWDXNWLF
3 | **Email**: basil-is@superhero.true
4 |
5 | ---
6 |
7 | ### Solution:
8 |
9 | #### Step 1: Create a ServiceAccount
10 |
11 | ```bash
12 | k create sa developer -n default
13 | ```
14 |
15 | #### Step 2: Generate an API Token
16 |
17 | We'll generate the token and store it in env var for future use.
18 |
19 | ```bash
20 | export TOKEN=$(k create token developer)
21 | ```
22 |
23 | #### Step 3. Define a ClusterRole
24 |
25 | ```bash
26 | k create clusterrole read-deploy-pod-clusterrole --verb=get,list,watch --resource=pods,pods/log,deployments
27 | ```
28 |
29 | #### Step 4. Create a ClusterRoleBinding
30 |
31 | ```bash
32 | k create clusterrolebinding developer-read-deploy-pod --clusterrole=read-deploy-pod-clusterrole --serviceaccount=default:developer
33 | ```
34 |
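Before generating a kubeconfig, the binding can be sanity-checked with impersonation (the first check should return `yes`, the second `no`, since services were not granted):

```bash
k auth can-i list pods --as=system:serviceaccount:default:developer
k auth can-i list services --as=system:serviceaccount:default:developer
```
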
35 | #### Step 5. Generate a kubeconfig file
36 |
37 | First, we need to locate the existing configuration to use with the `kubeadm kubeconfig` command. Normally, it is stored in `~/.kube/config`.
38 |
39 | But let's consider that we are in a custom setup where it's not in the default folder.
40 |
41 | In that case, we can find the kubeconfig path among the kubectl process parameters using a `ps aux` and `grep` combination.
42 |
43 | We store the value in env variable:
44 |
45 | ```bash
46 | export CONFIG_PATH=$(ps aux | grep kubectl | grep -oP '(?<=--kubeconfig )[^ ]+')
47 | ```
48 |
49 | Let's check the value:
50 |
51 | ```bash
52 | echo $CONFIG_PATH
53 | /root/.kube/config
54 | ```
55 |
56 | In case it's empty, we can still get the configuration from a ConfigMap in the cluster itself:
57 |
58 | ```
59 | kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}" > ~/clean-config
60 | export CONFIG_PATH=~/clean-config
61 | ```
62 |
63 | **NOTE:** This is the preferred way, since your local config file could already contain other users with their tokens. Getting the config from the ConfigMap gives you a _clean_ config file with only cluster info.
64 |
65 | Now, we use the `kubeadm` command to add a user:
66 |
67 | ```bash
68 | kubeadm kubeconfig user --client-name=basil --token=$TOKEN --config=$CONFIG_PATH > config-basil.yaml
69 | ```
70 |
71 | Locally, we could test this out by running next commands:
72 |
73 | ```bash
74 | k get pods -A --kubeconfig=config-basil.yaml
75 | (success)
76 | ```
77 |
78 | ```bash
79 | k get svc -A --kubeconfig=config-basil.yaml
80 | (failed: no access to services)
81 | ```
82 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/John_Kennedy.md:
--------------------------------------------------------------------------------
1 | **Name**: John Kennedy
2 |
3 | **Slack User ID**: U05D6BR916Z
4 |
5 | **Email**: johnkdevops@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Question: Kubernetes Cluster Administration Task
14 |
15 | ## Task
16 |
17 | ### 1: Create a ServiceAccount
18 |
19 | Using imperative commands to create & verify the service account for the developer:
20 | 1. To create the service account, I ran k create sa developer.
21 | 2. To verify the service account was created correctly, I ran k get sa developer:
22 | NAME SECRETS AGE
23 | developer 0 3m16s
24 |
25 | ### 2: Generate an API Token
26 |
27 | 1. I searched for serviceaccount API Tokens on kubernetes docs.
28 | Using imperative command to create API Token for developer service account:
29 | 2. To create API Token, I ran k create token developer.
30 | eyJhbGciOiJSUzI1NiIsImtpZCI6IlB0T2gxRFR3bDZFdkxwZlQ3M1hyYjNxbXBIUFYyakc3aU1xS283a01id1EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk1MzE4MDI1LCJpYXQiOjE2OTUzMTQ0MjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRldmVsb3BlciIsInVpZCI6ImRiYWFhNDE0LWRlYmItNDc1YS1hMGIyLTg0ODIwYzhiMDVhOSJ9fSwibmJmIjoxNjk1MzE0NDI1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZXZlbG9wZXIifQ.o3yLaVqmX75kwJobuxi70bZ8Ku-HxZ8DH5_j3-D-wmOhtx7vCFZdlQKUxXW5-J680unUGvbwjts4030QU0EGsSSmcut3_OxD18S3pCsnoTIcqzMZfbSRDDO5c7hUynToBNucH55v-CWJxcQFvRO2NL1DSAhuclNffC-D5L9aLN0PFmzghVBihUZebvTbigvF8qz18CFE6l7Vy1g2tT7AtOmO1vLFxmbKeFvUrR2EvdZMVAfbUGBPRiWX-cZ0yak6WHKHakD3dBbc-XNqFbePWV4BP6JGD7eRQVuTxoXOYpCzvXU0rmMxu8m-C3oAzowkQE8OLe1HV3PSiMK2K-PLcA
31 |
32 |
33 | ### 3: Define a ClusterRole
34 |
35 | 1. I searched for pods/log under ClusterRoles in the Kubernetes docs.
36 | Using imperative commands: get api-resources, then create & verify the ClusterRole for the developer.
37 | 2. To verify the correct names for the resources, I ran k api-resources.
38 | Make sure I have correct resources: pods, deployments, pods/log
39 | 3. To create the clusterrole, I ran k create clusterrole read-deploy-pod-clusterrole --verb=get,list,watch --resource=pods,pods/log,deployments --dry-run=client -o yaml
40 | 4. To verify the ClusterRole is created & permissions are correct, k describe clusterrole read-deploy-pod-clusterrole
41 | Name: read-deploy-pod-clusterrole
42 | Labels:
43 | Annotations:
44 | PolicyRule:
45 | Resources Non-Resource URLs Resource Names Verbs
46 | --------- ----------------- -------------- -----
47 | pods/log [] [] [get list watch]
48 | pods [] [] [get list watch]
49 | deployments.apps [] [] [get list watch]
50 |
51 |
52 | ### 4: Create a ClusterRoleBinding
53 |
54 | Using imperative commands: create & verify ClusterRoleBinding for developer was created correctly.
55 | 1. To create the clusterrolebinding, I ran k create clusterrolebinding developer-read-deploy-pod --clusterrole=read-deploy-pod-clusterrole --serviceaccount=default:developer --dry-run=client -o yaml
56 | 2. To verify the ClusterRoleBinding is created correctly by running k describe clusterrolebinding developer-read-deploy-pod
57 | Name: developer-read-deploy-pod
58 | Labels:
59 | Annotations:
60 | Role:
61 | Kind: ClusterRole
62 | Name: read-deploy-pod-clusterrole
63 | Subjects:
64 | Kind Name Namespace
65 | ---- ---- ---------
66 | ServiceAccount developer default
67 |
68 | 3. To verify that the developer ServiceAccount has the correct permissions, I ran k auth can-i get po --as=system:serviceaccount:default:developer and got yes for all allowed resources; when I ran k auth can-i create po --as=system:serviceaccount:default:developer, the answer came back as no.
69 |
70 | # YAML File
71 | apiVersion: rbac.authorization.k8s.io/v1
72 | kind: ClusterRoleBinding
73 | metadata:
74 | creationTimestamp: null
75 | name: developer-read-deploy-pod
76 | roleRef:
77 | apiGroup: rbac.authorization.k8s.io
78 | kind: ClusterRole
79 | name: read-deploy-pod-clusterrole
80 | subjects:
81 | - kind: ServiceAccount
82 | name: developer
83 | namespace: default
84 |
85 | ### 5: Generate a kubeconfig File
86 |
87 | 1. Copied default kubeconfig file by running cp /root/.kube/config developer-config
88 | 2. Edited developer-config by running vi developer-config, made the changes shown below.
89 | 3. Verified that config is working by running k get po --kubeconfig developer-config.
90 | NAME READY STATUS RESTARTS AGE
91 | coredns-5dd5756b68-c7zkz 1/1 Running 0 86m
92 | coredns-5dd5756b68-ckx7f 1/1 Running 0 86m
93 | etcd-controlplane 1/1 Running 0 87m
94 | kube-apiserver-controlplane 1/1 Running 0 87m
95 | kube-controller-manager-controlplane 1/1 Running 0 87m
96 | kube-proxy-k29l2 1/1 Running 0 86m
97 | kube-proxy-nzzqf 1/1 Running 0 86m
98 | kube-scheduler-controlplane 1/1 Running 0 87m
99 |
100 | ## YAML File
101 | The copied developer-config file initially looks like this:
102 | apiVersion: v1
103 | clusters:
104 | - cluster:
105 | certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSEp6UVJyV0MvVE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNU1qRXhOakExTXpaYUZ3MHpNekE1TVRneE5qRXdNelphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUMxM2dVREgvbGMrVFhNcFpjVGZNMWUvVW5pR2ZkZEpwd2FFWGZPZTR4dy9TTG1ycGRvUkh6MUt3d0sKOEZNRk9WVUlLSkRVT2t6dy9WUFJnSm43MndIU2ZPVTl0QjZqd0RhZTUyb2FrZDhzdmFKZUk0VFVTOGViM2dRRgpuV1RIT0JpcStIOU5Ca2V2R2I2MGszRHRHSEJmcG9OYVRxZzBOdC9yQUtPSnRsclhVWTVwOXhDSmYzbUZxQS94CmFEV0dYWENqTG1IemdNQXozcFJ4eWFFeHdsQzYzNW8xQy9tUzZseXF2b0dUajNjOXJBQjdXbHR3N2hKcGZqZzUKaDRRa2NiMk5wNVVXVzRnV3c1MDJBL0I3ZW9NNWtYYjVJcENOV05ZNVJvMXBrUnJ0WUVldlczNFF3WkVZcDNGOAoyM2hYUUhLYkpvNjd4ZlhBV1ZxNm5BVWxjbGdKQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTN1ZsUzFURzBBdWExRFpFWXBTL1VpckxGQ1BUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQTFNN3JZMVFJcwpqaXlPcVowRFA4RllwcVpmdXZwZ2RZUHJaZFd6K25rTnl3NmJXZHhyd3BlR2tiWEd4RDVhSjYrMElJMnZkUTVhCnEwdXNWSEtNZ3FLdlBTT1VHMVNYZGgxSS91T0sreitHWFpxSDNOUWxXT1B6MWZ6bFQzWktJS25hazh5aHVTNDkKRWFHWEZnWFBxNFFtelBZd1lWVC9HSk8xdkJlVUlPMkw2TnN3VmtLRVBRZ2VLRTFtK0p5Zk13OHlnT3JDVmdERwpya2MxM3VuZFNOanZ3dlBDSjRjOGM1RU9IalJRajIzaHQ3eVpOd25uZEJXZVpGZFJEK3dkUG5RTTlTeWpUWDFOCnZaV2xxbWg5OVRPZG5ZNnVOdzNjdXd2SnBkT1A4WG96Uk5oditQYzM5aDFGQUh6dGRLMkNPK1ZXMkJjTXc4YksKMCtibzBCVGZxV0RmCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
106 | server: https://controlplane:6443
107 | name: kubernetes
108 | contexts:
109 | - context:
110 | cluster: kubernetes
111 | user: kubernetes-admin
112 | name: kubernetes-admin@kubernetes
113 | current-context: kubernetes-admin@kubernetes
114 | kind: Config
115 | preferences: {}
116 | users:
117 | - name: kubernetes-admin
118 | user:
119 | client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJQ3AzZDE0VDVzcHd3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNU1qRXhOakExTXpaYUZ3MHlOREE1TWpBeE5qRXdNemxhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW9xUGdCMnZWSDRQTDYrMk0KZGR3YUlvY21jUlJRQlJTcEFua0RFdVVUaWtEbjY1c0dWbUNJMXdlamw5YzRLcU1HVGx4UHBBakpKemdyZS9DTAptVmtlTDdhSDN0RVVJaGpORUgwQTJLNitrYmFib3BLNXo4V3JJS0xicDBqSWR1eUU2YjI3bDd3VU52MWx1UmtFCkpxY0swMWVIaWVvUFIxaG44a2hBK05BSE1BQ3ZSSTNybjNVd1U2THpZUWltdS9lV0hET0RmUjc2OUZ4alFGSTkKaHJMLzhlMDlMaDBHa0RXVFozRmNSMXdDZGovaUtCRUN6UFdZUDBRTlNKamtTSDZuMUZ0aDQ4ZlI0YnQrVTF5TgpZRWFRTE1rKzdJSnBtUEFhNURkd0x4OXFka2s4VVo1aUhsWlJnamtOekVoMUdib2MrdmNCWXVpVm9ZN0tHZ3ZsCktkc2Fxd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTN1ZsUzFURzBBdWExRFpFWXBTL1VpckxGQwpQVEFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBRlY4dnh3OTdsT2lCM2crZTJwYU1Ya0N1RHpLUERxNTJxQllYCkF6YVdqMXhnSGp3bWtUa2k2TmRVUmhUdFNjUFVzWHZnNi9QOFpuaEd1UGkyU3d0dzY5SUMxU3VrMG90cndKaTcKMFdaaHgyM2FNZ0dvNVRJZnNEQTJqUWRQM3A0bUpUNElIVHkzanhFbEdHVS9BV3oreTM1UnlyQk4zSDcvNDlkQQo5OHFpWDU1UUppYlJBRXorU3V3b0pNQk1VaVRHdlgzZUw2SGgyd25zbXdHMU8xeTFjdm12dmVNWDZpN01YdXpBCjN5ZDlJa2lpdDNPVkdwNVU1aDVVdUtUS0MvN2ZmTzViNGtSTDdrNW5nc1RvWEt6bWtUVjlDMy81ZjhyWXo3YU8KVGYwaytBOFBESks3R3R3Q24wSHJvWE5lK0locHpmMENZRHQxdmxnNGhZblFoVTlZc1E9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
120 |
121 | Now, the developer-config file looks like this:
122 | apiVersion: v1
123 | clusters:
124 | - cluster:
125 | certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJSEp6UVJyV0MvVE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBNU1qRXhOakExTXpaYUZ3MHpNekE1TVRneE5qRXdNelphTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUMxM2dVREgvbGMrVFhNcFpjVGZNMWUvVW5pR2ZkZEpwd2FFWGZPZTR4dy9TTG1ycGRvUkh6MUt3d0sKOEZNRk9WVUlLSkRVT2t6dy9WUFJnSm43MndIU2ZPVTl0QjZqd0RhZTUyb2FrZDhzdmFKZUk0VFVTOGViM2dRRgpuV1RIT0JpcStIOU5Ca2V2R2I2MGszRHRHSEJmcG9OYVRxZzBOdC9yQUtPSnRsclhVWTVwOXhDSmYzbUZxQS94CmFEV0dYWENqTG1IemdNQXozcFJ4eWFFeHdsQzYzNW8xQy9tUzZseXF2b0dUajNjOXJBQjdXbHR3N2hKcGZqZzUKaDRRa2NiMk5wNVVXVzRnV3c1MDJBL0I3ZW9NNWtYYjVJcENOV05ZNVJvMXBrUnJ0WUVldlczNFF3WkVZcDNGOAoyM2hYUUhLYkpvNjd4ZlhBV1ZxNm5BVWxjbGdKQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTN1ZsUzFURzBBdWExRFpFWXBTL1VpckxGQ1BUQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQTFNN3JZMVFJcwpqaXlPcVowRFA4RllwcVpmdXZwZ2RZUHJaZFd6K25rTnl3NmJXZHhyd3BlR2tiWEd4RDVhSjYrMElJMnZkUTVhCnEwdXNWSEtNZ3FLdlBTT1VHMVNYZGgxSS91T0sreitHWFpxSDNOUWxXT1B6MWZ6bFQzWktJS25hazh5aHVTNDkKRWFHWEZnWFBxNFFtelBZd1lWVC9HSk8xdkJlVUlPMkw2TnN3VmtLRVBRZ2VLRTFtK0p5Zk13OHlnT3JDVmdERwpya2MxM3VuZFNOanZ3dlBDSjRjOGM1RU9IalJRajIzaHQ3eVpOd25uZEJXZVpGZFJEK3dkUG5RTTlTeWpUWDFOCnZaV2xxbWg5OVRPZG5ZNnVOdzNjdXd2SnBkT1A4WG96Uk5oditQYzM5aDFGQUh6dGRLMkNPK1ZXMkJjTXc4YksKMCtibzBCVGZxV0RmCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
126 | server: https://controlplane:6443
127 | name: kubernetes
128 | contexts:
129 | - context:
130 | cluster: kubernetes
131 | user: developer
132 | name: developer-context
133 | current-context: developer-context
134 | kind: Config
135 | preferences: {}
136 | users:
137 | - name: developer
138 | user:
139 | token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlB0T2gxRFR3bDZFdkxwZlQ3M1hyYjNxbXBIUFYyakc3aU1xS283a01id1EifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjk1MzE4MDI1LCJpYXQiOjE2OTUzMTQ0MjUsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImRldmVsb3BlciIsInVpZCI6ImRiYWFhNDE0LWRlYmItNDc1YS1hMGIyLTg0ODIwYzhiMDVhOSJ9fSwibmJmIjoxNjk1MzE0NDI1LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkZXZlbG9wZXIifQ.o3yLaVqmX75kwJobuxi70bZ8Ku-HxZ8DH5_j3-D-wmOhtx7vCFZdlQKUxXW5-J680unUGvbwjts4030QU0EGsSSmcut3_OxD18S3pCsnoTIcqzMZfbSRDDO5c7hUynToBNucH55v-CWJxcQFvRO2NL1DSAhuclNffC-D5L9aLN0PFmzghVBihUZebvTbigvF8qz18CFE6l7Vy1g2tT7AtOmO1vLFxmbKeFvUrR2EvdZMVAfbUGBPRiWX-cZ0yak6WHKHakD3dBbc-XNqFbePWV4BP6JGD7eRQVuTxoXOYpCzvXU0rmMxu8m-C3oAzowkQE8OLe1HV3PSiMK2K-PLcA
140 |
141 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/Mason889.md:
--------------------------------------------------------------------------------
1 | **Name**: Kostiantyn Zaihraiev
2 |
3 | **Slack User ID**: U04FZK4HL0L
4 |
5 | **Email**: kostya.zaigraev@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | After reading the task, I decided to pre-generate YAML templates to simplify the configuration and management of all the resources.
14 |
15 | First of all, I pre-generated the `serviceaccount.yaml` file using a `kubectl` command:
16 |
17 | ```bash
18 | kubectl create serviceaccount developer --dry-run=client -o yaml > serviceaccount.yaml
19 | ```
20 |
21 | After making some changes in the file I've applied it to the cluster:
22 |
23 | ```bash
24 | kubectl apply -f serviceaccount.yaml
25 | ```
26 |
27 | Then I pre-generated the `secret.yaml` file using a `kubectl` command:
28 |
29 | ```bash
30 | kubectl create secret generic developer --dry-run=client -o yaml > secret.yaml
31 | ```
32 |
33 | After changing the file to use a service account token, I applied it to the cluster:
34 |
35 | ```bash
36 | kubectl apply -f secret.yaml
37 | ```
38 |
39 | Then I pre-generated the `clusterrole.yaml` file using a `kubectl` command:
40 |
41 | ```bash
42 | kubectl create clusterrole read-deploy-pod-clusterrole --verb=get,list,watch --resource=pods,pods/log,deployments --dry-run=client -o yaml > clusterrole.yaml
43 | ```
44 |
45 | After making some changes in the file I've applied it to the cluster:
46 |
47 | ```bash
48 | kubectl apply -f clusterrole.yaml
49 | ```
50 |
51 | Then I pre-generated the `clusterrolebinding.yaml` file using a `kubectl` command:
52 |
53 | ```bash
54 | kubectl create clusterrolebinding developer-read-deploy-pod --serviceaccount=default:developer --clusterrole=read-deploy-pod-clusterrole --dry-run=client -o yaml > clusterrolebinding.yaml
55 | ```
56 |
57 | After making some changes in the file I've applied it to the cluster:
58 |
59 | ```bash
60 | kubectl apply -f clusterrolebinding.yaml
61 | ```
62 |
63 | #### Kubeconfig file creation
64 |
65 | First of all, I saved the service account token to an environment variable called `TOKEN` for easier use:
66 |
67 | ```bash
68 | export TOKEN=$(kubectl get secret $(kubectl get serviceaccount developer -o jsonpath='{.metadata.name}') -o jsonpath='{.data.token}' | base64 --decode)
69 | ```
70 |
71 | Then I added the service account credentials to the kubeconfig file:
72 |
73 | ```bash
74 | kubectl config set-credentials developer --token=$TOKEN
75 | ```
76 |
77 | Then I changed the current context to use the service account:
78 |
79 | ```bash
80 | kubectl config set-context --current --user=developer
81 | ```
82 |
83 | Then I saved the resulting kubeconfig to the `developer.kubeconfig` file:
84 |
85 | ```bash
86 | kubectl config view --minify --flatten > developer.kubeconfig
87 | ```
88 |
89 | Then I've checked that I can get/list pods:
90 |
91 | ```bash
92 | kubectl get pods -n kube-system
93 | ```
94 |
95 | Result:
96 | ```bash
97 | NAME READY STATUS RESTARTS AGE
98 | coredns-5d78c9869d-4b5qz 1/1 Running 0 5h54m
99 | etcd-minikube 1/1 Running 0 5h54m
100 | kube-apiserver-minikube 1/1 Running 0 5h54m
101 | kube-controller-manager-minikube 1/1 Running 0 5h54m
102 | kube-proxy-rcbb7 1/1 Running 0 5h54m
103 | kube-scheduler-minikube 1/1 Running 0 5h54m
104 | storage-provisioner 1/1 Running 1 (5h53m ago) 5h54
105 | ```
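As an extra sanity check (not part of the original write-up), the RBAC boundaries can also be probed with `kubectl auth can-i`, pointing at the `developer.kubeconfig` generated above:

```bash
# Reads on pods and deployments are granted through read-deploy-pod-clusterrole
kubectl --kubeconfig developer.kubeconfig auth can-i list pods            # expected: yes
kubectl --kubeconfig developer.kubeconfig auth can-i get deployments      # expected: yes

# Writes were never granted, so this should be denied
kubectl --kubeconfig developer.kubeconfig auth can-i create pods          # expected: no
```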
106 |
107 | ### Code Snippet
108 |
109 | #### serviceaccount.yaml
110 | ```yaml
111 | apiVersion: v1
112 | kind: ServiceAccount
113 | metadata:
114 | name: developer
115 | namespace: default
116 |
117 | ```
118 |
119 | #### secret.yaml
120 | ```yaml
121 | apiVersion: v1
122 | kind: Secret
123 | metadata:
124 | name: developer
125 | annotations:
126 | kubernetes.io/service-account.name: developer
127 | type: kubernetes.io/service-account-token
128 |
129 | ```
130 |
131 | #### clusterrole.yaml
132 | ```yaml
133 | apiVersion: rbac.authorization.k8s.io/v1
134 | kind: ClusterRole
135 | metadata:
136 | name: read-deploy-pod-clusterrole
137 |   namespace: default   # note: ClusterRoles are cluster-scoped, so this field has no effect
138 | rules:
139 | - apiGroups:
140 | - ""
141 | resources:
142 | - pods
143 | - pods/log
144 | verbs:
145 | - get
146 | - list
147 | - watch
148 | - apiGroups:
149 | - apps
150 | resources:
151 | - deployments
152 | verbs:
153 | - get
154 | - list
155 | - watch
156 | ```
157 |
158 | #### clusterrolebinding.yaml
159 | ```yaml
160 | apiVersion: rbac.authorization.k8s.io/v1
161 | kind: ClusterRoleBinding
162 | metadata:
163 | name: developer-read-deploy-pod
164 |   namespace: default   # note: ClusterRoleBindings are cluster-scoped, so this field has no effect
165 | roleRef:
166 | apiGroup: rbac.authorization.k8s.io
167 | kind: ClusterRole
168 | name: read-deploy-pod-clusterrole
169 | subjects:
170 | - kind: ServiceAccount
171 | name: developer
172 | namespace: default
173 | ```
174 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/housseinhmila.md:
--------------------------------------------------------------------------------
1 | **Name**: Houssein Hmila
2 |
3 | **Slack User ID**: U054SML63K5
4 |
5 | **Email**: housseinhmila@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | #### Create a ServiceAccount:
14 | We create a service account named "developer" in the default namespace.
15 |
16 | ### Code Snippet
17 | ```
18 | kubectl create serviceaccount developer
19 | ```
20 | #### Generate an API Token:
21 | We create an API token for the "developer" service account. This token is used by the "developer" to authenticate when making API requests to the Kubernetes cluster.
22 |
23 | ### Code Snippet
24 | ```yaml
25 | kubectl apply -f - <
87 | kubectl config use-context developer-context
88 | ```
89 | ##### You can test this by running ```kubectl get pod``` and ```kubectl get secret``` under the developer context, and you will see the difference.
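A quick sketch of what that difference should look like under the developer context (expected outcomes noted as comments, since the full RBAC objects are elided above):

```bash
kubectl get pod      # pods are readable through the bound read-only ClusterRole
kubectl get secret   # secrets were never granted, so expect "Error from server (Forbidden)"
```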
90 |
91 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/jothilal22.md:
--------------------------------------------------------------------------------
1 | **Name**: Jothilal Sottallu
2 |
3 | **Email**: srjothilal2001@gmail.com
4 |
5 |
6 | ### SOLUTION
7 |
8 |
9 | ### TASK: Kubernetes Cluster Administration Task
10 |
11 | ### SOLUTION DETAILS
12 |
13 | ### 1. Create a ServiceAccount
14 |
15 | * To create a service account, Ran the command -
16 |
17 | ```
18 | └─(17:35:43)──> kbt create serviceaccount developer
19 | +kubectl
20 | serviceaccount/developer created
21 | ```
22 | * To check the service account
23 |
24 | ```
25 | kbt get serviceaccount developer
26 | +kubectl
27 | NAME SECRETS AGE
28 | developer 0 48s
29 | ```
30 |
31 | ### 2. Create an API token for the developer ServiceAccount.
32 |
33 | * To create an API token for the developer
34 |
35 | ```
36 | kubectl create secret generic developer-api-token --from-literal=token=$(openssl rand -base64 32)
37 |
38 | secret/developer-api-token created
39 | ```
40 | * To check and decode the value stored in the secret
41 |
42 | * Note that this generic secret only holds the random string generated above; it is not a token signed by the cluster for the developer ServiceAccount (and since Kubernetes v1.24 no token Secret is auto-created when a ServiceAccount is created)
43 |
44 | ```
45 | kubectl get secret developer-api-token -o jsonpath='{.data.token}' | base64 -d
46 | PDpk0i8j3PSips/zoGITX0qulPCfdQwi4L0yOpQqR2E=%
47 | ```
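As an aside (not part of the original submission): a token that the API server will actually accept for the developer ServiceAccount can be obtained via the TokenRequest API on Kubernetes v1.24+, or by creating a Secret of type kubernetes.io/service-account-token (the Secret name developer-token below is just an example):

```bash
# Short-lived token signed by the cluster (Kubernetes v1.24+)
kubectl create token developer

# Long-lived token backed by a Secret bound to the ServiceAccount
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: developer-token
  annotations:
    kubernetes.io/service-account.name: developer
type: kubernetes.io/service-account-token
EOF

# The token controller then populates .data.token, which can be decoded as above
kubectl get secret developer-token -o jsonpath='{.data.token}' | base64 -d
```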
48 |
49 | ### 3. Create a ClusterRole named read-deploy-pod-clusterrole with the following permissions:
50 | Verbs: get, list, watch
51 | Resources: pods, pods/log, deployments
52 |
53 | * To create a clusterRole
54 |
55 | ```
56 | kubectl create clusterrole read-deploy-pod-clusterrole \
57 | --verb=get,list,watch \
58 | --resource=pods,pods/log,deployments
59 |
60 | clusterrole.rbac.authorization.k8s.io/read-deploy-pod-clusterrole created
61 | ```
62 | * Verified this using the kubectl get command
63 |
64 | ```
65 | kubectl get clusterrole read-deploy-pod-clusterrole
66 | NAME CREATED AT
67 | read-deploy-pod-clusterrole 2023-09-27T12:30:30Z
68 | ```
69 |
70 | ### 4. Create a ClusterRoleBinding
71 |
72 |
73 | ```
74 | kubectl create clusterrolebinding developer-read-deploy-pod \
75 | --clusterrole=read-deploy-pod-clusterrole \
76 | --serviceaccount=default:developer
77 |
78 | clusterrolebinding.rbac.authorization.k8s.io/developer-read-deploy-pod created
79 | ```
80 |
81 | ### 5. Generate a kubeconfig file for the developer.
82 |
83 | kubectl config set-cluster my-cluster \
84 | --kubeconfig=$HOME/path/to/developer-kubeconfig.yaml \
85 | --server=https://192.168.49.2:8443 \
86 | --insecure-skip-tls-verify=true
87 |
88 | Cluster "my-cluster" set.
89 |
90 | Before this, I faced a few issues with the API server address: since I am using minikube, I needed to find the API server endpoint to configure the cluster entry
91 | minikube ip
92 | 192.168.49.2 (the minikube API server listens on port 8443)
93 |
94 | kubectl config set-credentials developer \
95 | --kubeconfig=$HOME/path/to/developer-kubeconfig.yaml \
96 | --token=$(kubectl get secret developer-api-token -o jsonpath='{.data.token}' | base64 -d)
97 | User "developer" set.
98 |
99 | kubectl config set-context developer-context \
100 | --kubeconfig=$HOME/path/to/developer-kubeconfig.yaml \
101 | --cluster=my-cluster \
102 | --user=developer \
103 | --namespace=default
104 | Context "developer-context" created.
105 |
106 | kubectl config use-context developer-context \
107 | --kubeconfig=$HOME/path/to/developer-kubeconfig.yaml
108 | Switched to context "developer-context".
109 |
110 | * Resulting kubeconfig YAML:
111 | ```
112 | apiVersion: v1
113 | clusters:
114 | - cluster:
115 | insecure-skip-tls-verify: true
116 | server: https://192.168.49.2:8443
117 | name: my-cluster
118 | contexts:
119 | - context:
120 | cluster: my-cluster
121 | namespace: default
122 | user: developer
123 | name: developer-context
124 | current-context: developer-context
125 | kind: Config
126 | preferences: {}
127 | users:
128 | - name: developer
129 | user:
130 | token: PDpk0i8j3PSips/zoGITX0qulPCfdQwi4L0yOpQqR2E=
131 | ```
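One possible refinement (an aside, assuming minikube's default CA location of `~/.minikube/ca.crt`): instead of `--insecure-skip-tls-verify=true`, the cluster entry can embed the CA certificate so TLS is actually verified:

```bash
kubectl config set-cluster my-cluster \
  --kubeconfig=$HOME/path/to/developer-kubeconfig.yaml \
  --server=https://192.168.49.2:8443 \
  --certificate-authority=$HOME/.minikube/ca.crt \
  --embed-certs=true
```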
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-4-Responses/kodekloud_response.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Cluster Administration
2 |
3 | ## Create service account developer
4 |
5 | ```yaml
6 | apiVersion: v1
7 | kind: ServiceAccount
8 | metadata:
9 | name: developer
10 | ```
11 |
12 | ## Manually create an API token for a ServiceAccount, reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-an-api-token-for-a-serviceaccount
13 | ```yaml
14 | apiVersion: v1
15 | kind: Secret
16 | metadata:
17 | name: developer-secret
18 | annotations:
19 | kubernetes.io/service-account.name: developer
20 | type: kubernetes.io/service-account-token
21 | ```
22 |
23 | ## Create ClusterRole
24 | ```yaml
25 | apiVersion: rbac.authorization.k8s.io/v1
26 | kind: ClusterRole
27 | metadata:
28 | name: read-deploy-pod-clusterrole
29 | rules:
30 | - verbs:
31 | - get
32 | - list
33 | - watch
34 | apiGroups:
35 | - ''
36 | resources:
37 | - pods
38 | - pods/log
39 | - verbs:
40 | - list
41 | - get
42 | - watch
43 | apiGroups:
44 | - apps
45 | resources:
46 | - deployments
47 | ```
48 |
49 | ## Create ClusterRoleBinding
50 | ```yaml
51 | apiVersion: rbac.authorization.k8s.io/v1
52 | kind: ClusterRoleBinding
53 | metadata:
54 | name: developer-read-deploy-pod
55 | subjects:
56 | - kind: ServiceAccount
57 | name: developer
58 | namespace: default
59 | roleRef:
60 | kind: ClusterRole
61 | name: read-deploy-pod-clusterrole
62 | apiGroup: rbac.authorization.k8s.io
63 | ```
64 |
65 | ## Finally, decode the ca and token from developer-secret and paste them into the config file. Here is an example:
66 | ```yaml
67 | apiVersion: v1
68 | kind: Config
69 | clusters:
70 | - name: demo-cluster
71 | cluster:
72 | server: https://x.x.x.x:6443/
73 | certificate-authority-data: >
74 | LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EWXlOekEwTkRBeE0xb1hEVE15TURZeU5EQTBOREF4TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1gvCmZmTW5NOHJKTWhBMHFkVm9RTkJBWEJFeXF4WjA3WnFwYUQ2bWx4ZFQxMEtMajJ6aHliekV6Q2NQTldCMzdsVncKbHZVZmlQelpjZ3lRN0krdWtXZDBUNWhRSjd4S0ozbTlFSnBJMDU2WEhKK2RKSXRuRlIybnUzY1gwdFIxREZMbQpUSG14eGlIRXdMdE1ESjdaNjdIcFR5Zk1JVjBwTXdMQXorRFNDMjNlLzludGJja045WExReUlobzc1MFZ2Y3lqCmJ5dmo0Y3dOSkFaNTU2Z21BdEY0a04ydVcxaWY2ZHJQZ2JQUU54Q2ZoMU1IVFB5QTBCdzVuQWVWNXg4UVB3eFIKcU0zWTZ3TVJwOGoxTFFFeGdPYTJuSVFNZ29tdlFSUEUzRzdSUW1WSWkvcDM2MkxRQldIVlJqT1lRTjd4cmlvQQpxKzRNOWRQaERja2Fxbm9LOVdFQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZBaUlkWm9HU25wQ3QyTnBPbmtjMGN2US9nVFdNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRHdyTDVHVzdEMGdISk1ONC9mcgpIYU9wZktDR213VHVtVmdvZE9POHpFYzMwS1l0TXVsd2YxY1B4cGJlSDYzUExoaUNuckJldkJpNUdwTUNMVzd5CjQ5L2kyQ0FlN3RQWW01WENneVZxZHVma21RbU9URVoza2NuNzJnTWN4WGpNaW5abkc4MlFHWE5EUnZSQ3N3L1kKZ1JXSnhtLzc0ZVBQallqeDUrR05WZ1lkcHJHWmlkcFVCYVBtcXZFUzdUdTQ4TDVmRjRscGNDVkprSi9RblFxego1VEZpUzdGQ1ZXSW9PQ3ZhNzBGN0tLc1lSL3REeDhiV1RYMlkxeE5zMlpYd25sRlZLUy9neGhIMDVzV1NGK3JlCjJXL01qYUpoTWlBZ2FKUjJ2NDFzT0lsQUV1OXVmRC9RakN6QXU4VmNzTmMybzNoa1pGOEZNYUkxQ2VRaVI0WFEKeTk0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
75 | users:
76 | - name: developer
77 | user:
78 | token: >-
79 | eyJhbGciOiJSUzI1NiIsImtpZCI6IkxUOW1HdmxObnFEWDhlLTBTTkswaWJGcDkzZVVoWWZGRjJiNDhNZlpvY0UifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRldmVsb3Blci1zZWNyZXQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGV2ZWxvcGVyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzEyYTE0NTMtMzE5ZC00NzhlLWFjN2QtYzI1MzJmZGNlM2UzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6ZGV2ZWxvcGVyIn0.sSIyyzCHzBCd4GVsjKdi_8bETitzG202ogFuT6EacUDbBWxYTpBv5s3otZOIANH5CPy2Ng6DAiTmq0mEzYTXL2j9AcNaFUYE7ydO3K2fpGwdLQ3qb5YbnZY9Y0mcIE3ShkyGISp0vIHr9RPlJv4QwbBSczDjvGikOKaVclWRdH4W0h5ngOfXINfRjLrJ9GQ7nUZED6jQYBA3CoEFFrbgEIBf8DUtSDF-IPxE2F8GlIBEHh0-gOHVTg4cef2NRsrUmI2uqy4JnlVeXzc4dbeToXQuBPHs7JmuQ1zzqxoXL-xsjvaH9sa9wan7Sjpgj9ZzTBClqUGJEbOxK2KnKxfG7Q
80 | contexts:
81 | - name: demo-cluster-developer
82 | context:
83 | user: developer
84 | cluster: demo-cluster
85 | ```
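A minimal sketch of how the two values can be pulled out of developer-secret (the token must be base64-decoded before pasting; the CA is pasted as-is, still base64-encoded, under certificate-authority-data):

```bash
# Decoded bearer token for the users[].user.token field
kubectl get secret developer-secret -o jsonpath='{.data.token}' | base64 -d

# Base64-encoded CA bundle for the clusters[].cluster.certificate-authority-data field
kubectl get secret developer-secret -o jsonpath='{.data.ca\.crt}'
```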
86 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/John_Kennedy.md:
--------------------------------------------------------------------------------
1 | **Name**: John Kennedy
2 |
3 | **Slack User ID**: U05D6BR916Z
4 |
5 | **Email**: johnkdevops@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Question: MongoDB Deployment Challenge
14 |
15 | ## Task
16 |
17 | ### 1: **Create a MongoDB PV (Persistent Volume):**
18 |
19 | 1. Go to https://kubernetes.io/docs/home/ and search for PersistentVolumes.
20 | 2. Choose Configure a Pod to Use a PersistentVolume for Storage.
21 | 3. Search for PersistentVolume and copy pods/storage/pv-volume.yaml file.
22 | 4. Create a blank yaml file for the persistent volume by running vi mongodb.pv.yaml, :set paste, and paste the YAML content.
23 | 5. Make the changes to the yaml file and save it by using :wq!
24 | 6. Do a dry run by running kubectl apply -f mongodb.pv.yaml --dry-run=client -o yaml to verify the YAML file is correct.
25 | 7. Remove the dry-run option, apply, and verify that the persistent volume was created successfully by running the following imperative commands:
26 | kubectl get pv mongodb-data-pv -o wide
27 | kubectl describe pv mongodb-data-pv
28 |
29 | # YAML File
30 |
31 | apiVersion: v1
32 | kind: PersistentVolume
33 | metadata:
34 | name: mongodb-data-pv
35 | spec:
36 | capacity:
37 | storage: 10Gi
38 | accessModes:
39 | - ReadWriteOnce
40 | hostPath:
41 | path: "/mnt/mongo"
42 |
43 | ### 2: **Create a MongoDB PVC (Persistent Volume Claim):**
44 |
45 | 1. Go to https://kubernetes.io/docs/home/ and search for PersistentVolumes.
46 | 2. Choose Configure a Pod to Use a PersistentVolume for Storage.
47 | 3. Search for PersistentVolumeClaim and copy pods/storage/pv-claim.yaml file.
48 | 4. Create a blank yaml file for the persistent volume claim by running vi mongodb.pvc.yaml, :set paste, and paste the YAML content.
49 | 5. Make the changes to the yaml file and save it by using :wq!
50 | 6. Do a dry run by running kubectl apply -f mongodb.pvc.yaml --dry-run=client -o yaml to verify the YAML file is correct.
51 | 7. Remove the dry-run option, apply, and verify that the persistent volume claim was created successfully by running the following imperative commands:
52 | kubectl get pvc mongodb-data-pvc -o wide
53 | kubectl describe pvc mongodb-data-pvc
54 |
55 | # YAML File
56 |
57 | apiVersion: v1
58 | kind: PersistentVolumeClaim
59 | metadata:
60 | name: mongodb-data-pvc
61 | spec:
62 | accessModes:
63 | - ReadWriteOnce
64 | resources:
65 | requests:
66 | storage: 10Gi
67 |
68 | ## Verify that the PersistentVolumeClaim is bound to the PersistentVolume
69 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS
70 | pv***/mongodb-data-pv 10Gi RWO Retain Bound
71 |
72 | NAME STATUS VOLUME CAPACITY ACCESS MODES
73 | persistentvolumeclaim/mongodb-data-pvc Bound mongodb-data-pv 10Gi RWO
74 |
75 | ### 3: **Create a MongoDB Deployment:**
76 |
77 | 1. Go to https://kubernetes.io/docs/home/ and search for PersistentVolumes.
78 | 2. Choose Configure a Pod to Use a PersistentVolume for Storage.
79 | 3. Search for Create a Pod and keep the kubernetes doc for reference.
80 |
81 | Using imperative commands to create, deploy & verify the MongoDB Deployment:
82 | 1. Create the deployment YAML by running kubectl create deploy mongodb-deployment --image=mongo:4.2 --port=27017 --replicas=2 --dry-run=client -o yaml > mongodb-deploy.yaml
83 | 2. Open the yaml file by running vi mongodb-deploy.yaml and add the persistent volume claim, volume, and environment variables.
84 | 3. Search for Environment Variables and keep the kubernetes doc for reference.
85 | 4. Add both environment variables to the mongodb container.
86 | 5. Add a volumeMounts section with name: mongodb-pvc-volume and mountPath: "/data/db".
87 | 6. Add a volumes section with the volume name: mongodb-pvc-volume and persistentVolumeClaim claimName: mongodb-data-pvc.
88 | 7. Create the deployment by using kubectl apply -f mongodb-deploy.yaml.
89 | 8. Watch the deployment to see when it is ready by using k get deploy -w
90 | 9. Inspect the deployment by using k describe deploy mongodb-deployment.
91 |
92 | # YAML File
93 |
94 | apiVersion: apps/v1
95 | kind: Deployment
96 | metadata:
97 | labels:
98 | app: mongodb-deployment
99 | name: mongodb-deployment
100 | spec:
101 | replicas: 1 # You can adjust the number of replicas as needed
102 | selector:
103 | matchLabels:
104 | app: mongodb-deployment
105 | template:
106 | metadata:
107 | labels:
108 | app: mongodb-deployment
109 | spec:
110 | volumes:
111 | - name: mongodb-pvc-volume # Add
112 | persistentVolumeClaim: # Add
113 | claimName: mongodb-data-pvc # Add
114 | containers:
115 | - name: mongodb # Change from mongo
116 | image: mongo:4.2
117 | env: # Add
118 | - name: MONGO_INITDB_ROOT_USERNAME # Add
119 | value: "root" # Add
120 | - name: MONGO_INITDB_ROOT_PASSWORD # Add
121 | value: "KodeKloudCKA" # Add
122 | ports:
123 | - containerPort: 27017
124 | volumeMounts: # Add
125 | - mountPath: "/data/db" # Add
126 | name: mongodb-pvc-volume # Add
127 |
128 | ### Expose the mongodb deployment on ClusterIP.
129 |
130 | 1. Expose the deployment with dry run by using kubectl expose deploy mongodb-deployment --name=mongodb-service --port=27017 --target-port=27017 --dry-run=client -o yaml > mongodb-svc.yaml.
131 | 2. Create ClusterIP service for the deployment using kubectl apply -f mongodb-svc.yaml.
132 | 3. I would verify the svc was created for the deployment by using kubectl get svc mongodb-service -o wide.
133 |
134 | ## YAML File
135 |
136 | apiVersion: v1
137 | kind: Service
138 | metadata:
139 | labels:
140 | app: mongodb-deployment
141 | name: mongodb-service
142 | spec:
143 | ports:
144 | - port: 27017
145 | protocol: TCP
146 | targetPort: 27017
147 | selector:
148 | app: mongodb-deployment
149 |
150 |
151 | ### 4: **Verify Data Retention Across Restarts:**
152 |
153 | 1. Connect to running pod by running kubectl exec -it mongodb-deployment-84dcb59d94-crmrd -- bash
154 | 2. Connect to mongodb by running mongo -u root and enter the password KodeKloudCKA when prompted
155 | 3. To see a list of database run show dbs
156 | 4. Create or Use a database run use data
157 | 5. Show current database run db
158 | 6. Create a persons collection run db.createCollection("persons")
159 | 7. Insert data into persons collection run:
160 | db.persons.insert({"name": "KodeKloudDevOps1", "experience": "Beginner",})
161 | 8. Verify data was inserted run db.persons.find()
162 | { "_id" : ObjectId("651492156f6daed11b03dfd6"), "name" : "KodeKloudDevOps1", "experience" : "Beginner" }
163 | 9. Exit out of mongodb and the pod by running Ctrl+C and exit.
164 | 10. Delete the pod by running kubectl delete po mongodb-deployment-84dcb59d94-crmrd.
165 | 11. Watch for the pod to come back up by running kubectl get po -w.
166 | 12. Once the pod is back up, connect to it by running kubectl exec -it mongodb-deployment-84dcb59d94-crmrd -- mongo -u root -p KodeKloudCKA (note: the recreated pod gets a new name, so use the name shown by kubectl get po)
167 | 13. Select the database by running use data.
168 | 14. The persons collection can then be queried directly as db.persons.
169 | 15. Verify the row of data you inserted earlier is still available by running db.persons.find()
170 | { "_id" : ObjectId("651492156f6daed11b03dfd6"), "name" : "KodeKloudDevOps1", "experience" : "Beginner" }
171 |
172 | ## Is there a risk of data loss when the pod is restarted, and if so, what improvement can be taken to ensure it doesn't occur?
173 |
174 | There could be, if the node that holds the hostPath volume becomes unavailable.
175 | Here are a couple of improvements that could be made:
176 | 1. Increase the replicas of your deployment and use node affinity to make sure the pods are placed on appropriately sized nodes if your demand spikes.
177 | 2. Create a Kubernetes batch Job or CronJob to back up the MongoDB database regularly in case of emergencies or maintenance (see the sketch below).
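A minimal sketch of improvement 2 (assumptions: the in-cluster service is reachable as mongodb-service on port 27017, the credentials above are reused, and backups land on a hostPath directory on the node):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mongodb-backup
spec:
  schedule: "0 2 * * *"              # run every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: mongodump
            image: mongo:4.2
            command: ["sh", "-c"]
            args:
            - mongodump --host mongodb-service --port 27017 -u root -p KodeKloudCKA --out /backup/$(date +%F)
            volumeMounts:
            - name: backup
              mountPath: /backup
          volumes:
          - name: backup
            hostPath:
              path: /mnt/mongo-backups
              type: DirectoryOrCreate
```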
178 |
179 |
180 |
181 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/Mason889.md:
--------------------------------------------------------------------------------
1 | **Name**: Kostiantyn Zaihraiev
2 |
3 | **Slack User ID**: U04FZK4HL0L
4 |
5 | **Email**: kostya.zaigraev@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | After reading the task, I decided to create YAML templates to simplify the configuration and management of all the resources.
14 |
15 | I created the `persistentvolume.yaml` and `persistentvolumeclaim.yaml` files and then applied them to the cluster:
16 |
17 | ```bash
18 | kubectl apply -f persistentvolume.yaml
19 | kubectl apply -f persistentvolumeclaim.yaml
20 | ```
21 |
22 | To check that everything was OK, I used the `kubectl get pv` and `kubectl get pvc` commands:
23 |
24 | ```bash
25 | kubectl get pv
26 |
27 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
28 | mongodb-data-pv 10Gi RWO Retain Bound default/mongodb-data-pvc standard 7s
29 |
30 | kubectl get pvc
31 |
32 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
33 | mongodb-data-pvc Bound mongodb-data-pv 10Gi RWO standard 2s
34 | ```
35 |
36 | Then I pre-generated the `deployment.yaml` file using a `kubectl` command:
37 |
38 | ```bash
39 | kubectl create deployment mongodb-deployment --image=mongo:4.2 --dry-run=client -o yaml > deployment.yaml
40 | ```
41 |
42 | After making some changes in the file I've applied it to the cluster:
43 |
44 | ```bash
45 | kubectl apply -f deployment.yaml
46 | ```
47 |
48 | (And validated that everything is ok using `kubectl get pods` command)
49 | ```bash
50 | kubectl get pods
51 |
52 | NAME READY STATUS RESTARTS AGE
53 | mongodb-deployment-5fbc48f59d-nng6w 1/1 Running 0 3m12s
54 | ```
55 |
56 | To check that everything is working correctly I've connected to the pod and inserted some data:
57 |
58 | ```bash
59 | kubectl exec -it mongodb-deployment-5fbc48f59d-nng6w -- mongo -u root -p KodeKloudCKA
60 | ---
61 |
62 | > db.createCollection("test")
63 | { "ok" : 1 }
64 | > db.test.insert({"name": "test"})
65 | WriteResult({ "nInserted" : 1 })
66 | > exit
67 | bye
68 | ```
69 |
70 | Then I've restarted the pod and checked that data is still there:
71 |
72 | ```bash
73 | kubectl delete pod mongodb-deployment-5fbc48f59d-nng6w
74 |
75 | kubectl get pods
76 |
77 | NAME READY STATUS RESTARTS AGE
78 | mongodb-deployment-5fbc48f59d-z4cmv 1/1 Running 0 11s
79 | ```
80 |
81 | And checked that data is still there:
82 |
83 | ```bash
84 | kubectl exec -it mongodb-deployment-5fbc48f59d-z4cmv -- mongo -u root -p KodeKloudCKA
85 | ---
86 |
87 | > db.test.find()
88 | { "_id" : ObjectId("6515c16f91cea01bdf14c2e5"), "name" : "test" }
89 | > exit
90 | bye
91 | ```
92 |
93 | And as you can see, the data is still there, so the persistent volume claim works correctly.
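Worth noting as an aside (not from the original submission): because the PV is a hostPath volume, the files live on the minikube node itself and can be inspected directly, which is another way to confirm the data survived the pod restart:

```bash
# List the MongoDB data files stored on the node-backed hostPath volume
minikube ssh -- ls -l /mnt/mongo
```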
94 |
95 | ### Code Snippet
96 |
97 | #### persistentvolume.yaml
98 | ```yaml
99 | apiVersion: v1
100 | kind: PersistentVolume
101 | metadata:
102 | name: mongodb-data-pv
103 | spec:
104 | capacity:
105 | storage: 10Gi
106 | storageClassName: standard
107 | accessModes:
108 | - ReadWriteOnce
109 | hostPath:
110 | path: /mnt/mongo
111 | ```
112 |
113 | #### persistentvolumeclaim.yaml
114 | ```yaml
115 | apiVersion: v1
116 | kind: PersistentVolumeClaim
117 | metadata:
118 | name: mongodb-data-pvc
119 | spec:
120 | resources:
121 | requests:
122 | storage: 10Gi
123 | storageClassName: standard
124 | accessModes:
125 | - ReadWriteOnce
126 | ```
127 |
128 |
129 | #### deployment.yaml
130 | ```yaml
131 | apiVersion: apps/v1
132 | kind: Deployment
133 | metadata:
134 | name: mongodb-deployment
135 | spec:
136 | selector:
137 | matchLabels:
138 | app: mongodb-deployment
139 | template:
140 | metadata:
141 | labels:
142 | app: mongodb-deployment
143 | spec:
144 | containers:
145 | - name: mongodb-container
146 | image: mongo:4.2
147 | resources: {}
148 | env:
149 | - name: MONGO_INITDB_ROOT_USERNAME
150 | value: root
151 | - name: MONGO_INITDB_ROOT_PASSWORD
152 | value: KodeKloudCKA
153 | ports:
154 | - containerPort: 27017
155 | volumeMounts:
156 | - name: mongodb-data
157 | mountPath: /data/db
158 | volumes:
159 | - name: mongodb-data
160 | persistentVolumeClaim:
161 | claimName: mongodb-data-pvc
162 | ```
163 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/housseinhmila.md:
--------------------------------------------------------------------------------
1 | **Name**: Houssein Hmila
2 |
3 | **Slack User ID**: U054SML63K5
4 |
5 | **Email**: housseinhmila@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 | To achieve the task of deploying MongoDB in a Kubernetes cluster with persistent storage and verifying data retention across pod restarts, we can follow these steps:
13 |
14 | #### 1. Create a MongoDB PV (Persistent Volume):
15 | ```vim pv.yaml```
16 | ```yaml
17 | apiVersion: v1
18 | kind: PersistentVolume
19 | metadata:
20 | name: mongodb-data-pv
21 | spec:
22 | hostPath:
23 | path: /mnt/mongo
24 | capacity:
25 | storage: 10Gi
26 | volumeMode: Filesystem
27 | accessModes:
28 | - ReadWriteOnce
29 | storageClassName: manual
30 | ```
31 | #### 2. Create a MongoDB PVC (Persistent Volume Claim):
32 |
33 | ```vim pvc.yaml```
34 | ```yaml
35 | apiVersion: v1
36 | kind: PersistentVolumeClaim
37 | metadata:
38 | name: mongodb-data-pvc
39 | spec:
40 | accessModes:
41 | - ReadWriteOnce
42 | volumeMode: Filesystem
43 | resources:
44 | requests:
45 | storage: 10Gi
46 | storageClassName: manual
47 | ```
48 | After applying these two YAML files, you should see your PV and PVC in Bound status.
49 |
50 | #### 3. Create a MongoDB Deployment:
51 | ```
52 | k create deployment mongodb-deployment --image=mongo:4.2 --port=27017 --dry-run=client -o yaml > mongo.yaml
53 | ```
54 |
55 | After adding the env variables and the volumeMount, our file should look like this:
56 |
57 | ```yaml
58 | apiVersion: apps/v1
59 | kind: Deployment
60 | metadata:
61 | creationTimestamp: null
62 | labels:
63 | app: mongodb-deployment
64 | name: mongodb-deployment
65 | spec:
66 | replicas: 1
67 | selector:
68 | matchLabels:
69 | app: mongodb-deployment
70 | strategy: {}
71 | template:
72 | metadata:
73 | creationTimestamp: null
74 | labels:
75 | app: mongodb-deployment
76 | spec:
77 | containers:
78 | - image: mongo:4.2
79 | name: mongo
80 | ports:
81 | - containerPort: 27017
82 | env:
83 | - name: MONGO_INITDB_ROOT_USERNAME
84 | value: "root"
85 | - name: MONGO_INITDB_ROOT_PASSWORD
86 | value: "KodeKloudCKA"
87 | volumeMounts:
88 | - mountPath: "/data/db"
89 | name: mypd
90 | volumes:
91 | - name: mypd
92 | persistentVolumeClaim:
93 | claimName: mongodb-data-pvc
94 | ```
95 | Now access your pod using:
96 | ```
97 | kubectl exec -ti <pod-name> -- sh
98 | ```
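To fill in the pod name, something like the following works (a small sketch using the app=mongodb-deployment label from the Deployment above):

```bash
# Grab the first pod created by the Deployment and open a shell in it
POD=$(kubectl get pods -l app=mongodb-deployment -o jsonpath='{.items[0].metadata.name}')
kubectl exec -ti "$POD" -- sh
```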
99 | then connect to our mongo database:
100 | ```
101 | mongo --username root --password KodeKloudCKA
102 | use mydatabase
103 | ```
104 | finally insert your data:
105 | ```
106 | db.mycollection.insertOne({
107 | name: "Houssein Hmila",
108 | linkedin: "https://www.linkedin.com/in/houssein-hmila/",
109 | email: "housseinhmila@gmail.com"
110 | })
111 | ```
112 | Now we delete the pod (another pod will be deployed in its place).
113 | -> We can access the new pod, reconnect to our mongo database, and then check the data inserted before using this command:
114 | ```
115 | db.mycollection.find()
116 | ```
117 | you will get something like this:
118 |
119 | { "_id" : ObjectId("651464aebd39fc98c777a3dd"), "name" : "Houssein Hmila", "linkedin" : "https://www.linkedin.com/in/houssein-hmila/", "email" : "housseinhmila@gmail.com" }
120 |
121 | ##### So now we are sure that our data persists.
122 |
123 | #### To improve data resilience, we can use cloud-based solutions like Amazon EBS (Elastic Block Store) or equivalent offerings from other cloud providers.
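A minimal sketch of that idea (assumption: the cluster runs on AWS with the EBS CSI driver installed and a StorageClass named ebs-sc; the name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data-pvc
spec:
  storageClassName: ebs-sc        # assumed StorageClass backed by the AWS EBS CSI driver
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```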
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/jothilal22.md:
--------------------------------------------------------------------------------
1 | **Name**: Jothilal Sottallu
2 |
3 | **Email**: srjothilal2001@gmail.com
4 |
5 |
6 | ### SOLUTION
7 |
8 |
9 | ### TASK:
10 |
11 | 1. Create a MongoDB PV (Persistent Volume):
12 | Name: mongodb-data-pv
13 | Storage: 10Gi
14 | Access Modes: ReadWriteOnce
15 | Host Path: "/mnt/mongo"
16 |
17 | * Persistent Volume
18 | * Think of a Persistent Volume as a reserved space on a Persistent Disk. It's like reserving a section of that virtual hard drive for a specific purpose. This way, you can use it later to store your files or data.
19 |
20 | ##### PV YAML
21 | ```
22 | apiVersion: v1
23 | kind: PersistentVolume
24 | metadata:
25 | name: mongodb-data-pv
26 | spec:
27 | capacity:
28 | storage: 10Gi
29 | accessModes:
30 | - ReadWriteOnce
31 | hostPath:
32 | path: /mnt/mongo
33 | ```
34 | Apply this configuration to the Kubernetes cluster using the kubectl command
35 |
36 | ```kubectl apply -f pv.yaml```
37 |
38 |
39 | 2. Create a MongoDB PVC (Persistent Volume Claim):
40 | Name: mongodb-data-pvc
41 | Storage: 10Gi
42 | Access Modes: ReadWriteOnce
43 | Create a MongoDB Deployment:
44 |
45 |
46 | * Persistent Volume Claim
47 | * A Persistent Volume Claim is like saying, "Hey, I need some space on that virtual hard drive." It's a request you make when you want to use some storage for your application. When you make this request, the system finds a suitable place on a Persistent Disk and sets it aside for you to use.
48 |
49 | ##### PVC YAML
50 |
51 | ```
52 | apiVersion: v1
53 | kind: PersistentVolumeClaim
54 | metadata:
55 | name: mongodb-data-pvc
56 | spec:
57 | accessModes:
58 | - ReadWriteOnce
59 | resources:
60 | requests:
61 | storage: 10Gi
62 | ```
63 |
64 | Apply this configuration to the Kubernetes cluster using the kubectl command
65 |
66 | ```kubectl apply -f pvc.yaml```
67 |
68 | 3. Name: mongodb-deployment
69 | Image: mongo:4.2
70 | Container Port: 27017
71 | Mount mongodb-data-pvc to path /data/db
72 | Env MONGO_INITDB_ROOT_USERNAME: root
73 | Env MONGO_INITDB_ROOT_PASSWORD: KodeKloudCKA
74 |
75 | #### Deployment YAML
76 |
77 | ```
78 | apiVersion: apps/v1
79 | kind: Deployment
80 | metadata:
81 | name: mongodb-deployment
82 | spec:
83 | replicas: 1
84 | selector:
85 | matchLabels:
86 | app: mongodb
87 | template:
88 | metadata:
89 | labels:
90 | app: mongodb
91 | spec:
92 | containers:
93 | - name: mongodb
94 | image: mongo:4.2
95 | ports:
96 | - containerPort: 27017
97 | env:
98 | - name: MONGO_INITDB_ROOT_USERNAME
99 | value: root
100 | - name: MONGO_INITDB_ROOT_PASSWORD
101 | value: KodeKloudCKA
102 | volumeMounts:
103 | - name: mongo-data
104 | mountPath: /data/db
105 | volumes:
106 | - name: mongo-data
107 | persistentVolumeClaim:
108 | claimName: mongodb-data-pvc
109 |
110 | ```
111 | Apply this configuration to the Kubernetes cluster using the kubectl command
112 |
113 | ```kubectl apply -f deployment.yaml```
114 |
115 |
116 | 4. Verify Data Retention Across Restarts:
117 | Insert some test data into the database
118 | Restart the pod
119 | Confirm that the data inserted earlier still exists after the pod restarts.
120 | Is there a risk of data loss when the pod is restarted, and if so, what improvement can be taken to ensure it doesn't occur?
121 |
122 | * I inserted some data to check whether it is retained when the pod is restarted (see the sketch below)
123 | * kubectl delete pod deployment-x8tsd
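A sketch of that check (the pod names in angle brackets are placeholders to be read from kubectl get pods; the credentials come from the Deployment above):

```bash
# Insert a test document
kubectl exec -it <mongodb-pod> -- mongo -u root -p KodeKloudCKA --eval 'db.test.insertOne({check: "retention"})'

# Delete the pod; the Deployment recreates it with a new name
kubectl delete pod <mongodb-pod>
kubectl get pods -w

# Query again in the replacement pod; the document should still be there
kubectl exec -it <new-mongodb-pod> -- mongo -u root -p KodeKloudCKA --eval 'db.test.find()'
```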
124 |
125 | When a pod restarts in Kubernetes, the data within the attached Persistent Volume (PV) should be retained, assuming the PV is properly configured and the data is persisted within it.
126 |
127 | ### RISK OF DATA LOSS
128 | 1. Ensure that data is always persisted within the PVC or PV. MongoDB data should be stored in a path that is mounted from the PV, as configured in your MongoDB container.
129 | 2. Use appropriate storage classes and reclaim policies to retain PVs when pods are deleted. For example, use the "Retain" policy to prevent automatic PV deletion.
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-5-Responses/kodekloud_response.md:
--------------------------------------------------------------------------------
1 | ### MongoDB Deployment:
2 |
3 | Below are the YAML manifests for creating the MongoDB PV, PVC, and Deployment:
4 |
5 | **Step 1: Create a MongoDB PV (Persistent Volume)**
6 |
7 | ```yaml
8 | apiVersion: v1
9 | kind: PersistentVolume
10 | metadata:
11 | name: mongodb-data-pv
12 | spec:
13 | capacity:
14 | storage: 10Gi
15 | accessModes:
16 | - ReadWriteOnce
17 | hostPath:
18 | path: "/mnt/mongo"
19 |
20 | ```
21 | Explanation:
22 | 1. This YAML creates a Persistent Volume named mongodb-data-pv with a capacity of 10Gi.
23 | 2. It's configured for ReadWriteOnce access mode, meaning it can be mounted by a single node for both read and write operations.
24 | 3. The storage is allocated on the host at the path /mnt/mongo.
25 |
26 | **Step 2: Create a MongoDB PVC (Persistent Volume Claim)**
27 |
28 | ```yaml
29 | apiVersion: v1
30 | kind: PersistentVolumeClaim
31 | metadata:
32 | name: mongodb-data-pvc
33 | spec:
34 | accessModes:
35 | - ReadWriteOnce
36 | resources:
37 | requests:
38 |       storage: 10Gi
39 | ```
40 |
41 | Explanation:
42 | 1. This YAML creates a Persistent Volume Claim named mongodb-data-pvc.
43 | 2. It requests storage of 10Gi with ReadWriteOnce access mode, matching the PV created earlier.
44 |
45 | **Step 3: Create a MongoDB Deployment**
46 |
47 | ```yaml
48 |
49 | apiVersion: apps/v1
50 | kind: Deployment
51 | metadata:
52 | name: mongodb-deployment
53 | labels:
54 | app: mongodb
55 | spec:
56 | replicas: 1
57 | selector:
58 | matchLabels:
59 | app: mongodb
60 | template:
61 | metadata:
62 | labels:
63 | app: mongodb
64 | spec:
65 | containers:
66 | - env:
67 | - name: MONGO_INITDB_ROOT_USERNAME
68 | value: root
69 | - name: MONGO_INITDB_ROOT_PASSWORD
70 | value: KodeKloudCKA
71 | name: mongodb
72 | image: mongo:4.2
73 | ports:
74 | - containerPort: 27017
75 | volumeMounts:
76 | - mountPath: /data/db
77 | name: data
78 | volumes:
79 | - name: data
80 | persistentVolumeClaim:
81 | claimName: mongodb-data-pvc
82 | nodeName: worker-01 # Ensure that the MongoDB pod is deployed on a single node. (Last question)
83 | ```
84 |
85 | Explanation:
86 |
87 | 1. This YAML creates a Deployment named mongodb-deployment with one replica.
88 | 2. It specifies that the pod should have a label app: mongodb.
89 | 3. The pod template contains a container running the mongo:4.2 image.
90 | 4. Environment variables MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD are set for MongoDB initialization.
91 | 5. The PVC created earlier (mongodb-data-pvc) is mounted at /data/db to persist data.
92 | 6. The nodeName attribute ensures that the MongoDB pod is deployed on a specific node (worker-01 in this example).
93 |
94 | **Verification and Considerations:**
95 |
96 | To verify that data remains intact after pod restarts, follow these steps:
97 |
98 | 1. Insert test data into the database.
99 | 2. Restart the pod.
100 | 3. Confirm that the data inserted earlier still exists after the pod restarts.
101 |
102 | Regarding the risk of data loss on pod restarts, in this configuration, since we are using a Persistent Volume Claim (PVC) backed by a Persistent Volume (PV), data is stored persistently. This means that even if the pod is restarted or rescheduled, the data will remain intact.
103 |
104 | However, if the PV is not properly backed up, there could be a risk of data loss in the event of PV failure. To mitigate this, regular backups of the PV should be performed.
105 |
106 |
107 | ---
108 |
109 | Good job on completing this challenge! We hope you found it informative and practical.
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/John_Kennedy.md:
--------------------------------------------------------------------------------
1 | **Name**: John Kennedy
2 |
3 | **Slack User ID**: U05D6BR916Z
4 |
5 | **Email**: johnkdevops@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Question: Ingress and Networking
14 |
15 | ## Task
16 |
17 | ## Deploy NGINX Ingress Controller using Helm:
18 | 1. Install Helm by running wget https://get.helm.sh/helm-v3.10.3-linux-amd64.tar.gz && \
19 | tar -zxf helm-v3.10.3-linux-amd64.tar.gz && \
20 | sudo mv linux-amd64/helm /usr/local/bin
21 | 2. Cleanup installation files by running rm helm-v3.10.3-linux-amd64.tar.gz && \
22 | rm -rf linux-amd64
23 | 3. Verify helm is installed by running helm version.
24 | version.BuildInfo{Version:"v3.10.3", GitCommit:"3a31588ad33fe3b89af5a2a54ee1d25bfe6eaa5e", GitTreeState:"clean", GoVersion:"go1.18.9"}
25 | 4. Create a new namespace 'ingress-nginx' by running kubectl create ns ingress-nginx
26 | 5. Add NGINX Ingress Repository by running helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx && helm repo update
27 | 6. Install NGINX Ingress Controller using Helm by running helm install ingress-nginx \
28 | --set controller.service.type=NodePort \
29 | --set controller.service.nodePorts.http=30080 \
30 | --set controller.service.nodePorts.https=30443 \
31 | --repo https://kubernetes.github.io/ingress-nginx \
32 | ingress-nginx
33 | 7. Verify the Installation by running kubectl get deploy
34 | NAME READY UP-TO-DATE AVAILABLE AGE
35 | ingress-nginx-controller 1/1 1 1 2m12s
36 |
37 | ### 1: **Create a `wear-deployment`:**
38 |
39 | Using imperative commands to create, deploy & verify the Wear Deployment:
40 | 1. Create deployment by running kubectl create deploy wear-deployment --image=kodekloud/ecommerce:apparels --port 8080 --dry-run=client -o yaml > wear-deploy.yaml
41 | 2. Create the deployment by using kubectl apply -f wear-deploy.yaml.
42 | 3. Watch the deployment to see when it is ready by using kubectl get deploy -w
43 | 4. Inspect deployment by using kubectl describe deploy wear-deployment.
44 | 5. Expose the deployment run kubectl expose deploy wear-deployment --name=wear-service --type=ClusterIP --port=8080 --target-port=8080 --dry-run=client -o yaml > wear-service.yaml
45 | 6. Create the service by using kubectl apply -f wear-service.yaml.
46 | 7. Verify the wear-deployment and wear-service were successfully created by running kubectl get deploy wear-deployment --show-labels && kubectl get svc wear-service -o wide && kubectl get ep | grep -e wear
47 | NAME READY UP-TO-DATE AVAILABLE AGE LABELS
48 | wear-deployment 1/1 1 1 9m app=wear-deployment
49 | NAME TYPE CLUSTER-IP PORT(S) AGE SELECTOR
50 | wear-service ClusterIP 10.103.41.162 8080/TCP 54s app=wear-deployment
51 | NAME ENDPOINTS AGE
52 | ingress-nginx-controller 10.244.0.5:443,10.244.0.5:80 9m1s
53 | ingress-nginx-controller-admission 10.244.0.5:8443 9m1s
54 | wear-service 10.244.0.7:8080 6s
55 |
56 | # YAML File
57 | ---
58 | apiVersion: apps/v1
59 | kind: Deployment
60 | metadata:
61 | labels:
62 | app: wear-deployment
63 | name: wear-deployment
64 | spec:
65 | replicas: 1
66 | selector:
67 | matchLabels:
68 | app: wear-deployment
69 | template:
70 | metadata:
71 | labels:
72 | app: wear-deployment
73 | spec:
74 | containers:
75 | - name: wear-container
76 | image: kodekloud/ecommerce:apparels
77 | ports:
78 | - containerPort: 8080
79 | ---
80 | apiVersion: v1
81 | kind: Service
82 | metadata:
83 | labels:
84 | app: wear-deployment
85 | name: wear-service
86 | spec:
87 | ports:
88 | - port: 8080
89 | protocol: TCP
90 | targetPort: 8080
91 | selector:
92 | app: wear-deployment
93 | type: ClusterIP
94 | ---
95 |
96 | ### 2: **Create a `video-deployment`:**
97 |
98 | Using imperative commands to create, deploy & verify the Video Deployment:
99 | 1. Create deployment by running kubectl create deploy video-deployment --image=kodekloud/ecommerce:video --port=8080 --dry-run=client -o yaml > video-deploy.yaml
100 | 2. Create the deployment by using kubectl apply -f video-deploy.yaml.
101 | 3. Watch the deployment to see when it is ready by using kubectl get deploy -w
102 | 4. Inspect deployment by using kubectl describe deploy video-deployment.
103 | 5. Expose the deployment run kubectl expose deploy video-deployment --name=video-service --type=ClusterIP --port=8080 --target-port=8080 --dry-run=client -o yaml > video-service.yaml
104 | 6. Create the service by using kubectl apply -f video-service.yaml.
105 | 7. Verify the video-deployment and video-service were successfully created by running kubectl get deploy video-deployment --show-labels && kubectl get svc video-service -o wide
106 | NAME READY UP-TO-DATE AVAILABLE AGE LABELS
107 | video-deployment 1/1 1 1 58s app=video-deployment
108 | NAME TYPE CLUSTER-IP PORT(S) SELECTOR AGE
109 | ingress-nginx-controller NodePort 10.96.176.190 80:30080/TCP,443:30443/TCP 19m
110 | video-service ClusterIP 10.96.4.39 8080/TCP app=video-deployment
111 | NAME ENDPOINTS AGE
112 | ingress-nginx-controller 10.244.0.5:443,10.244.0.5:80 18m
113 | ingress-nginx-controller-admission 10.244.0.5:8443 18m
114 | video-service 10.244.0.8:8080 11s
115 |
116 | # YAML File
117 |
118 | ---
119 | apiVersion: apps/v1
120 | kind: Deployment
121 | metadata:
122 | name: video-deployment
123 | labels:
124 | app: video-deployment
125 | spec:
126 | replicas: 1
127 | selector:
128 | matchLabels:
129 | app: video-deployment
130 | template:
131 | metadata:
132 | labels:
133 | app: video-deployment
134 | spec:
135 | containers:
136 | - name: video-container
137 |         image: kodekloud/ecommerce:video
138 | ports:
139 | - containerPort: 8080
140 | ---
141 | apiVersion: v1
142 | kind: Service
143 | metadata:
144 | labels:
145 | app: video-deployment
146 | name: video-service
147 | spec:
148 | ports:
149 | - port: 8080
150 | protocol: TCP
151 | targetPort: 8080
152 | selector:
153 | app: video-deployment
154 | type: ClusterIP
155 | ---
156 |
157 | ### 3: **Create an Ingress:**
158 |
159 | 1. Go to https://kubernetes.io/docs/home/ and search for Ingress.
160 | 2. Search for hostname and copy service-networking-ingress-wildcard-host-yaml file
161 | 3. Create a blank yaml file for the ingress resource by running vi ecommerce-ingress.yaml, :set paste, and paste the YAML content.
162 | 4. Make the changes to the yaml file and save it by using :wq!
163 | 5. Do a dry run by running kubectl apply -f ecommerce-ingress.yaml --dry-run=client -o yaml to verify the YAML file is correct.
164 | 6. Remove the dry-run option, apply, and verify that the ingress resource was created successfully by running the following imperative commands:
165 | kubectl describe ing ecommerce-ingress
166 | Name: ecommerce-ingress
167 | Labels:
168 | Namespace: default
169 | Address:
170 | Ingress Class: nginx
171 | Default backend:
172 | Rules:
173 | Host Path Backends
174 | ---- ---- --------
175 | cka.kodekloud.local
176 | /wear wear-service:8080 (10.244.0.7:8080)
177 | /video video-service:8080 (10.244.0.8:8080)
178 | Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
179 | Events:
180 | Type Reason Age From Message
181 | ---- ------ ---- ---- -------
182 | Normal Sync 9s nginx-ingress-controller Scheduled for sync
183 |
184 | Note: Also, you could use imperative command to create the ingress resource by running kubectl create ingress ecommerce-ingress --dry-run=client -o yaml --annotation nginx.ingress.kubernetes.io/rewrite-target=/ --rule cka.kodekloud.local/wear*=wear-service:8080 --rule cka.kodekloud.local/video*=video-service:8080 > ingress.yaml
185 |
186 | # YAML File
187 |
188 | ---
189 | apiVersion: networking.k8s.io/v1
190 | kind: Ingress
191 | metadata:
192 | name: ecommerce-ingress
193 | annotations:
194 | nginx.ingress.kubernetes.io/rewrite-target: /
195 | spec:
196 | ingressClassName: nginx # Add
197 | rules:
198 | - host: cka.kodekloud.local
199 | http:
200 | paths:
201 | - path: /wear
202 | pathType: Prefix
203 | backend:
204 | service:
205 | name: wear-service
206 | port:
207 | number: 8080
208 | - path: /video
209 | pathType: Prefix
210 | backend:
211 | service:
212 | name: video-service
213 | port:
214 | number: 8080
215 | ---
216 |
217 | 4. **Point virtual domain `cka.kodekloud.local` to your kubernetes cluster and test it.**
218 |
219 | 1. Find out the IP address of the controlplane node by running kubectl get no controlplane -o wide
220 | NAME STATUS ROLES AGE VERSION INTERNAL-IP
221 | controlplane Ready control-plane 4h55m v1.28.0 192.26.65.8
222 |
223 | 2. Find out the ingress NodePort number by running kubectl get svc | grep -e NodePort
224 | NodePort: http 30080/TCP
225 | 3. Add the virtual domain 'cka.kodekloud.local' to the end of the local hosts file by running echo '192.26.65.8 cka.kodekloud.local' >> /etc/hosts && cat /etc/hosts (this verifies the entry was added to the hosts file)
226 | 127.0.0.1 localhost
227 | ...
228 | ...
229 | ...
230 | 192.26.65.8 cka.kodekloud.local
231 |
232 | 4. Verify can get to wear and video applications:
233 | Wear Application:
234 |
235 | curl -I cka.kodekloud.local:30080/wear
236 | HTTP/1.1 200 OK
237 | Date: Sat, 07 Oct 2023 23:07:14 GMT
238 | Content-Type: text/html; charset=utf-8
239 | Content-Length: 296
240 | Connection: keep-alive
241 |
242 | curl -H "Host: cka.kodekloud.local" 192.26.65.8:30080/wear
243 |
244 | Hello from Flask
245 |
246 |
247 |
251 |
252 |
253 |
254 |
255 |
256 |
257 | Video Application:
258 |
259 | curl -I cka.kodekloud.local:30080/video
260 | HTTP/1.1 200 OK
261 | Date: Sat, 07 Oct 2023 23:09:09 GMT
262 | Content-Type: text/html; charset=utf-8
263 | Content-Length: 293
264 | Connection: keep-alive
265 |
266 | curl -H "Host: cka.kodekloud.local" 192.26.65.8:30080/video
267 |
268 | Hello from Flask
269 |
270 |
271 |
275 |
276 |
277 |
278 |
279 |
280 |
281 | Tests all passed!!
282 |
283 |
284 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/Mason889.md:
--------------------------------------------------------------------------------
1 | **Name**: Kostiantyn Zaihraiev
2 |
3 | **Slack User ID**: U04FZK4HL0L
4 |
5 | **Email**: kostya.zaigraev@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Pre-requisites
12 |
13 | Run minikube cluster with ingress addon enabled:
14 | ```bash
15 | minikube addons enable ingress
16 | ```
17 |
18 | ### Solution Details
19 |
20 | After reading the task, I decided to create YAML templates to simplify the configuration and management of all the resources.
21 |
22 | ```bash
23 | kubectl create deploy wear-deployment --image=kodekloud/ecommerce:apparels --dry-run=client -o yaml > wear-deployment.yaml
24 | kubectl create deploy video-deployment --image=kodekloud/ecommerce:video --dry-run=client -o yaml > video-deployment.yaml
25 | kubectl create service clusterip wear --tcp=80:8080 --dry-run=client -o yaml > wear-service.yaml
26 | kubectl create service clusterip video --tcp=80:8080 --dry-run=client -o yaml > video-service.yaml
27 | kubectl create ingress ecommerce-ingress --rule="cka.kodekloud.local/wear=wear:80" --rule="cka.kodekloud.local/video=video:80" --dry-run=client -o yaml > ingress.yaml
28 | ```
29 |
30 | After making some changes in the files, I created the following resources:
31 | ```bash
32 | kubectl apply -f wear-deployment.yaml
33 | kubectl apply -f video-deployment.yaml
34 | kubectl apply -f wear-service.yaml
35 | kubectl apply -f video-service.yaml
36 | kubectl apply -f ingress.yaml
37 | ```
38 |
39 | After that, I've added the following line to the `/etc/hosts` file:
40 | ```bash
41 | sudo vi /etc/hosts
42 | ```
43 |
44 | Inside the hosts file I've added the following line:
45 | ```
46 | minikube-ip cka.kodekloud.local
47 | ```
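One way to fill in that placeholder line (a small sketch, not part of the original submission):

```bash
# Resolve the minikube node IP and append the hosts entry in one go
echo "$(minikube ip) cka.kodekloud.local" | sudo tee -a /etc/hosts
```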
48 |
49 | After that I've tested the ingress:
50 | ```bash
51 | curl cka.kodekloud.local/wear
52 | ---
53 | HTTP/1.1 200 OK
54 | Date: Thu, 05 Oct 2023 10:43:32 GMT
55 | Content-Type: text/html; charset=utf-8
56 | Content-Length: 296
57 | Connection: keep-alive
58 |
59 |
60 |
61 | curl cka.kodekloud.local/video
62 | ---
63 | HTTP/1.1 200 OK
64 | Date: Thu, 05 Oct 2023 10:43:39 GMT
65 | Content-Type: text/html; charset=utf-8
66 | Content-Length: 293
67 | Connection: keep-alive
68 |
69 | ```
70 |
71 | ### Code Snippet
72 |
73 | #### wear-deployment.yaml
74 | ```yaml
75 | apiVersion: apps/v1
76 | kind: Deployment
77 | metadata:
78 | name: wear-deployment
79 | spec:
80 | replicas: 1
81 | selector:
82 | matchLabels:
83 | app: wear
84 | template:
85 | metadata:
86 | labels:
87 | app: wear
88 | spec:
89 | containers:
90 | - name: wear-container
91 | image: kodekloud/ecommerce:apparels
92 | resources: {}
93 | ports:
94 | - containerPort: 8080
95 |
96 | ```
97 |
98 | #### video-deployment.yaml
99 | ```yaml
100 | apiVersion: apps/v1
101 | kind: Deployment
102 | metadata:
103 | name: video-deployment
104 | spec:
105 | replicas: 1
106 | selector:
107 | matchLabels:
108 | app: video
109 | template:
110 | metadata:
111 | labels:
112 | app: video
113 | spec:
114 | containers:
115 | - name: video-container
116 | image: kodekloud/ecommerce:video
117 | resources: {}
118 | ports:
119 | - containerPort: 8080
120 | ```
121 |
122 | #### wear-service.yaml
123 | ```yaml
124 | apiVersion: v1
125 | kind: Service
126 | metadata:
127 | name: wear
128 | spec:
129 | selector:
130 | app: wear
131 | ports:
132 | - protocol: TCP
133 | port: 80
134 | targetPort: 8080
135 | ```
136 |
137 | #### video-service.yaml
138 | ```yaml
139 | apiVersion: v1
140 | kind: Service
141 | metadata:
142 | name: video
143 | spec:
144 | selector:
145 | app: video
146 | ports:
147 | - protocol: TCP
148 | port: 80
149 | targetPort: 8080
150 | ```
151 |
152 | #### ingress.yaml
153 | ```yaml
154 | apiVersion: networking.k8s.io/v1
155 | kind: Ingress
156 | metadata:
157 | name: ecommerce-ingress
158 | annotations:
159 | nginx.ingress.kubernetes.io/rewrite-target: /
160 | spec:
161 | rules:
162 | - host: cka.kodekloud.local
163 | http:
164 | paths:
165 | - path: /wear
166 | pathType: Prefix
167 | backend:
168 | service:
169 | name: wear
170 | port:
171 | number: 80
172 | - path: /video
173 | pathType: Prefix
174 | backend:
175 | service:
176 | name: video
177 | port:
178 | number: 80
179 | ```
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/housseinhmila.md:
--------------------------------------------------------------------------------
1 | **Name**: Houssein Hmila
2 |
3 | **Slack User ID**: U054SML63K5
4 |
5 | **Email**: housseinhmila@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | #### 1. Create Deployments:
14 |
15 | We create the wear-deployment and video-deployment using the provided specifications.
16 | ```
17 | kubectl create deploy wear-deployment --image=kodekloud/ecommerce:apparels --port=8080
18 | kubectl create deploy video-deployment --image=kodekloud/ecommerce:video --port=8080
19 | ```
20 |
21 | #### 2. Expose Deployments:
22 |
23 | Expose both deployments to create services that can be accessed within the cluster.
24 | ```
25 | kubectl expose deploy wear-deployment --name=wear-service --port=8080
26 | kubectl expose deploy video-deployment --name=video-service --port=8080
27 | ```
28 | #### 3. Configure Ingress:
29 | We need to ensure that the DNS resolution for cka.kodekloud.local points to the IP address of your Kubernetes cluster's Ingress Controller. This can be done by adding an entry in your local DNS resolver or configuring a DNS server to resolve this domain to the cluster's IP.
30 |
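For a simple local test this usually comes down to a hosts-file entry; a minimal sketch, where 192.168.49.2 is only a placeholder for your ingress controller / node IP:

```
# /etc/hosts
192.168.49.2  cka.kodekloud.local
```

The Ingress itself is defined as follows: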
31 | ```yaml
32 | apiVersion: networking.k8s.io/v1
33 | kind: Ingress
34 | metadata:
35 | name: ecommerce-ingress
36 | annotations:
37 | nginx.ingress.kubernetes.io/rewrite-target: /
38 | spec:
39 | ingressClassName: nginx # Specify Your Ingress class name
40 | rules:
41 | - host: cka.kodekloud.local
42 | http:
43 | paths:
44 | - path: /wear
45 | pathType: Prefix
46 | backend:
47 | service:
48 | name: wear-service
49 | port:
50 | number: 8080
51 | - path: /video
52 | pathType: Prefix
53 | backend:
54 | service:
55 | name: video-service
56 | port:
57 | number: 8080
58 | ```
59 | Now we can access our services via ``` curl cka.kodekloud.local/wear ``` or ``` curl cka.kodekloud.local/video ```
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/jothilal22.md:
--------------------------------------------------------------------------------
1 | **Name**: Jothilal Sottallu
2 |
3 | **Email**: srjothilal2001@gmail.com
4 |
5 |
6 | ### SOLUTION
7 |
8 | ### TASK : Kubernetes Challenge: Ingress and Networking
9 |
10 | ### SOLUTION DETAILS
11 |
12 | ### 1. Create a wear-deployment:
13 | Name: wear-deployment
14 | Image: kodekloud/ecommerce:apparels
15 | Port: 8080
16 | Expose this deployment
17 |
18 | First, I created the wear deployment file and then services
19 | #### YAML
20 | ```
21 | apiVersion: apps/v1
22 | kind: Deployment
23 | metadata:
24 | name: wear-deployment
25 | spec:
26 | replicas: 1
27 | selector:
28 | matchLabels:
29 | app: wear
30 | template:
31 | metadata:
32 | labels:
33 | app: wear
34 | spec:
35 | containers:
36 | - name: wear-container
37 | image: kodekloud/ecommerce:apparels
38 | ports:
39 | - containerPort: 8080
40 | ```
41 | * Command to apply the Deployment
42 | ```kubectl apply -f wear-deployment.yaml```
43 |
44 | ### SERVICES
45 | ```
46 | apiVersion: v1
47 | kind: Service
48 | metadata:
49 | name: wear-service
50 | spec:
51 | selector:
52 | app: wear
53 | ports:
54 | - protocol: TCP
55 | port: 8080
56 | targetPort: 8080
57 |
58 | ```
59 | * Command to apply the Services
60 | ```kubectl apply -f wear-services.yaml```
61 |
62 |
63 | ### 2. Create a video-deployment:
64 |
65 | #### YAML
66 | ```
67 | apiVersion: apps/v1
68 | kind: Deployment
69 | metadata:
70 | name: video-deployment
71 | spec:
72 | replicas: 1
73 | selector:
74 | matchLabels:
75 | app: video
76 | template:
77 | metadata:
78 | labels:
79 | app: video
80 | spec:
81 | containers:
82 | - name: video-container
83 | image: kodekloud/ecommerce:video
84 | ports:
85 | - containerPort: 8080
86 | ```
87 | * Command to apply the Deployment
88 | ```kubectl apply -f video-deployment.yaml```
89 |
90 | ### SERVICES
91 | ```
92 | apiVersion: v1
93 | kind: Service
94 | metadata:
95 | name: video-service
96 | spec:
97 | selector:
98 | app: video
99 | ports:
100 | - protocol: TCP
101 | port: 8080
102 | targetPort: 8080
103 |
104 | ```
105 | * Command to apply the Services
106 | ```kubectl apply -f video-services.yaml```
107 |
108 | #### 3. Create an Ingress:
109 | Name: ecommerce-ingress
110 | Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
111 | Host: cka.kodekloud.local
112 | Paths:
113 | /wear routes to service wear
114 | /video routes to service video
115 |
116 | ```
117 | apiVersion: networking.k8s.io/v1
118 | kind: Ingress
119 | metadata:
120 | name: ecommerce-ingress
121 | annotations:
122 | nginx.ingress.kubernetes.io/rewrite-target: /
123 | spec:
124 | rules:
125 | - host: cka.kodekloud.local
126 | http:
127 | paths:
128 | - path: /wear
129 | pathType: Prefix
130 | backend:
131 | service:
132 | name: wear-service
133 | port:
134 | number: 8080
135 | - path: /video
136 | pathType: Prefix
137 | backend:
138 | service:
139 | name: video-service
140 | port:
141 | number: 8080
142 |
143 | ```
144 | Apply the Ingress configuration using ```kubectl apply -f ecommerce-ingress.yaml```
145 |
146 | ### 4. Point the Virtual Domain
147 |
148 | To point the virtual domain cka.kodekloud.local to the cluster, you need to modify your local machine's DNS settings by adding an entry to /etc/hosts:
149 |
150 | ```
151 | 192.168.99.100 cka.kodekloud.local
152 | ```
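If you are not sure which IP to use, it can be looked up from the Ingress resource or, on Minikube, from the cluster itself (a sketch; `minikube ip` assumes a Minikube cluster):

```
kubectl get ingress ecommerce-ingress
minikube ip
```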
153 |
154 | ### 5. Verify
155 |
156 | 1. Testing from Local Machine:
157 |
158 | Open a web browser and access cka.kodekloud.local/wear and cka.kodekloud.local/video. The traffic should be routed to the respective services.
159 |
160 | 2. Testing from Cluster Pods:
161 |
162 | To ensure that pods within the cluster can access cka.kodekloud.local, you may also need DNS resolution inside the cluster. Kubernetes sets up DNS for Services automatically, but a custom hostname like cka.kodekloud.local requires an extra entry (for example in the CoreDNS configuration).
163 |
164 | ```
165 | kubectl exec -it wear-d6d98d -- curl http://cka.kodekloud.local/wear
166 |
167 | ```
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-6-Responses/kodekloud_response.md:
--------------------------------------------------------------------------------
1 | # nginx-ingress-setup-solution
2 |
3 | ## wear-deployment.yaml
4 | ```yaml
5 | apiVersion: apps/v1
6 | kind: Deployment
7 | metadata:
8 | labels:
9 | app: wear-deployment
10 | name: wear-deployment
11 | spec:
12 | replicas: 1
13 | selector:
14 | matchLabels:
15 | app: wear-deployment
16 | template:
17 | metadata:
18 | labels:
19 | app: wear-deployment
20 | spec:
21 | containers:
22 | - image: kodekloud/ecommerce:apparels
23 | name: wear-deployment
24 | ports:
25 | - containerPort: 8080
26 | ```
27 | ---
28 | ## wear-service.yaml
29 | ```yaml
30 | apiVersion: v1
31 | kind: Service
32 | metadata:
33 | creationTimestamp: null
34 | labels:
35 | app: wear-deployment
36 | name: wear-deployment
37 | spec:
38 | ports:
39 | - port: 8080
40 | protocol: TCP
41 | targetPort: 8080
42 | selector:
43 | app: wear-deployment
44 | ```
45 | ---
46 | ## video-deployment.yaml
47 | ```yaml
48 | apiVersion: apps/v1
49 | kind: Deployment
50 | metadata:
51 | labels:
52 | app: video-deployment
53 | name: video-deployment
54 | spec:
55 | replicas: 1
56 | selector:
57 | matchLabels:
58 | app: video-deployment
59 | template:
60 | metadata:
61 | labels:
62 | app: video-deployment
63 | spec:
64 | containers:
65 | - image: kodekloud/ecommerce:video
66 | name: video-deployment
67 | ports:
68 | - containerPort: 8080
69 | ```
70 | ---
71 | ## video-service.yaml
72 | ```yaml
73 | apiVersion: v1
74 | kind: Service
75 | metadata:
76 | creationTimestamp: null
77 | labels:
78 | app: video-deployment
79 | name: video-deployment
80 | spec:
81 | ports:
82 | - port: 8080
83 | protocol: TCP
84 | targetPort: 8080
85 | selector:
86 | app: video-deployment
87 | ```
88 | ---
89 | ## ecommerce-ingress.yaml
90 | ```yaml
91 | apiVersion: networking.k8s.io/v1
92 | kind: Ingress
93 | metadata:
94 | name: ecommerce-ingress
95 | annotations:
96 | nginx.ingress.kubernetes.io/rewrite-target: /
97 | spec:
98 | ingressClassName: nginx
99 | rules:
100 | - host: cka.kodekloud.local
101 | http:
102 | paths:
103 | - path: /wear
104 | pathType: Prefix
105 | backend:
106 | service:
107 | name: wear-deployment
108 | port:
109 | number: 8080
110 | - path: /video
111 | pathType: Prefix
112 | backend:
113 | service:
114 | name: video-deployment
115 | port:
116 | number: 8080
117 | ```
118 |
119 | ### Add the node IP and cka.kodekloud.local to the hosts file (e.g., /etc/hosts) and test it.
120 |
121 | ### DNS Configuration (if needed):
122 | 1. Add cka.kodekloud.local to configmap/coredns (see the sketch below).
123 | 2. Restart deployment/coredns.
124 | 3. Test the setup:
125 | `kubectl run curl --image curlimages/curl:7.65.3 --command -- curl http://cka.kodekloud.local/wear`
126 |
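For step 1 above, a minimal sketch of the CoreDNS change (the IP is only a placeholder for the ingress controller / node IP; the rest of the Corefile stays as it is):

```
# kubectl -n kube-system edit configmap coredns
# add a hosts block inside the Corefile:
    hosts {
      192.168.49.2 cka.kodekloud.local
      fallthrough
    }
# then restart CoreDNS:
# kubectl -n kube-system rollout restart deployment coredns
```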
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-7-Responses/Mason889.md:
--------------------------------------------------------------------------------
1 | **Name**: Kostiantyn Zaihraiev
2 |
3 | **Slack User ID**: U04FZK4HL0L
4 |
5 | **Email**: kostya.zaigraev@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Pre-requisites
12 |
13 | Have AWS account.
14 |
15 | ### Solution Details
16 |
17 | #### Preparations
18 |
19 | After reading the task, I've decided to create 3 virtual machines in AWS.
20 |
21 | And set up an AWS Network Load Balancer (NLB) with 3 targets (one per control plane node).
22 |
23 | I've opened the following ports on the NLB:
24 | * 6443 - for control plane nodes
25 | * 2379-2380 - for etcd
26 | * 10250 - for kubelet API
27 | * 10251 - for kube-scheduler
28 | * 10252 - for kube-controller-manager
29 |
30 | I've spun up 3 EC2 instances based on Ubuntu images, named `control-plane-1`, `control-plane-2` and `control-plane-3`, and installed kubeadm, kubelet, kubectl and containerd using the commands below.
31 |
32 | ```bash
33 | sudo apt-get update && sudo apt-get install -y apt-transport-https curl
34 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
35 | sudo sh -c 'echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list'
36 | sudo apt-get update
37 | sudo apt-get install -y kubelet kubeadm kubectl containerd
38 | ```
39 |
40 | After that I've disabled swap on all nodes:
41 |
42 | ```bash
43 | sudo swapoff -a
44 | sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
45 | ```
46 |
47 | Make sure that sysctl parameters `net.bridge.bridge-nf-call-iptables` and `net.ipv4.ip_forward` are set to `1` on all nodes:
48 |
49 | ```bash
50 | sudo sysctl net.bridge.bridge-nf-call-iptables=1
51 | sudo sysctl net.ipv4.ip_forward=1
52 | ```
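These settings do not survive a reboot; a common way to persist them (a sketch — the file names under /etc here are my choice, not part of the original write-up):

```bash
# Load br_netfilter now and on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf

# Persist the sysctl settings
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```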
53 |
54 | #### Control-Planes initialization
55 |
56 | On the `control-plane-1` node I've initialized the cluster:
57 |
58 | ```bash
59 | sudo kubeadm init --control-plane-endpoint <load-balancer-dns>:6443 --upload-certs --kubernetes-version 1.28.0
60 | ```
61 |
62 | In the output of the command I've found the command to join other control plane nodes to the cluster:
63 |
64 | ```bash
65 | kubeadm join <load-balancer-dns>:6443 --token e2ya3q.qn6om3tbo4pwvspl \
66 | --discovery-token-ca-cert-hash sha256:23528857afadce668be595d8954995257f27f1d984001f025cceec1fdd99071f \
67 | --control-plane --certificate-key 3200e4d214ccbf384bc5f1e95e9eeea40035d96af499ac82b3a531965b8209c7
68 | ```
69 |
70 | I've executed the command on the `control-plane-2` and `control-plane-3` nodes.
71 |
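If the bootstrap token or certificate key expires before the remaining nodes join, they can be regenerated on the first control plane node; a sketch of the commonly used commands:

```bash
# Print a fresh worker join command (creates a new token)
sudo kubeadm token create --print-join-command

# Re-upload the control-plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs
```
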
72 | After that I've installed the CNI plugin:
73 |
74 | ```bash
75 | kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
76 | ```
77 |
78 | While waiting for that command to finish, I've created the `.kube` directory and copied the `admin.conf` file into it (these steps were done on the `control-plane-1` node):
79 |
80 | ```bash
81 | mkdir -p $HOME/.kube
82 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
83 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
84 | ```
85 |
86 | After that I've applied the following command on all control plane nodes to apply the CNI plugin:
87 |
88 | ```bash
89 | kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
90 | ```
91 |
92 | And after that I've checked that all nodes are in `Ready` state:
93 |
94 | ```bash
95 | kubectl get node
96 |
97 | NAME STATUS ROLES AGE VERSION
98 | ip-172-16-1-110 Ready control-plane 10m2s v1.28.2
99 | ip-172-16-1-56 Ready control-plane 6m12s v1.28.2
100 | ip-172-16-1-74 Ready control-plane 6m14s v1.28.2
101 | ```
102 |
103 | #### Worker node initialization
104 |
105 | The prerequisite steps described in the sections above also need to be done on the worker node before joining it to the cluster.
106 |
107 | I've then joined one worker node using the following command:
108 |
109 | ```bash
110 | kubeadm join <load-balancer-dns>:6443 --token e2ya3q.qn6om3tbo4pwvspl \
111 | --discovery-token-ca-cert-hash sha256:23528857afadce668be595d8954995257f27f1d984001f025cceec1fdd99071f
112 | ```
113 |
114 | After that I've applied the following command on the worker node to apply the CNI plugin:
115 |
116 | ```bash
117 | kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
118 | ```
119 |
120 | And after that I've checked that all nodes are in `Ready` state (done on the `control-plane-1` node):
121 |
122 | ```bash
123 | kubectl get node
124 |
125 |
126 | NAME STATUS ROLES AGE VERSION
127 | ip-172-16-1-110 Ready control-plane 14m52s v1.28.2
128 | ip-172-16-1-56 Ready control-plane 10m31s v1.28.2
129 | ip-172-16-1-74 Ready control-plane 10m29s v1.28.2
130 | ip-172-16-1-237 Ready <none> 58s v1.28.2
131 | ```
132 |
133 | ### Code Snippet
134 |
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-7-Responses/challenge_response_template.md:
--------------------------------------------------------------------------------
1 | **Name**: user-name
2 |
3 | **Slack User ID**: 00000000000
4 |
5 | **Email**: user-email@example.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | (Explain your approach, thought process, and steps you took to solve the challenge)
14 |
15 | ### Code Snippet
16 |
17 | ```yaml
18 | # Place your code or configuration snippet here
--------------------------------------------------------------------------------
/Study Group Submissions/Challenge-7-Responses/housseinhmila.md:
--------------------------------------------------------------------------------
1 | **Name**: Houssein Hmila
2 |
3 | **Slack User ID**: U054SML63K5
4 |
5 | **Email**: housseinhmila@gmail.com
6 |
7 | ---
8 |
9 | ## Solution
10 |
11 | ### Solution Details
12 |
13 | # Kubernetes Cluster Setup Guide
14 |
15 | This guide walks you through setting up a highly available Kubernetes cluster with multiple control-plane nodes using kubeadm on Microsoft Azure.
16 |
17 | ## Prerequisites
18 |
19 | Before you begin, make sure you've reviewed the [Kubernetes setup prerequisites](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#before-you-begin).
20 |
21 | ## Infrastructure Setup
22 |
23 | The following Azure CLI commands demonstrate how to set up the necessary infrastructure on Microsoft Azure:
24 |
25 | ```bash
26 | # Create resource group
27 | az group create -n MyKubeCluster -l eastus
28 | # Create VNet
29 | az network vnet create -g MyKubeCluster -n KubeVNet --address-prefix 172.0.0.0/16 \
30 | --subnet-name MySubnet --subnet-prefix 172.0.0.0/24
31 | # Create Master Node - This will have a public IP
32 | az vm create -n kube-master1 -g MyKubeCluster --image Ubuntu2204 \
33 | --size Standard_DS2_v2 \
34 | --data-disk-sizes-gb 10 --generate-ssh-keys \
35 | --public-ip-address-dns-name kubeadm-master-1
36 | az vm create -n kube-master2 -g MyKubeCluster --image Ubuntu2204 \
37 | --size Standard_DS2_v2 \
38 | --data-disk-sizes-gb 10 --generate-ssh-keys \
39 | --public-ip-address-dns-name kubeadm-master-2
40 | az vm create -n kube-master3 -g MyKubeCluster --image Ubuntu2204 \
41 | --size Standard_DS2_v2 \
42 | --data-disk-sizes-gb 10 --generate-ssh-keys \
43 | --public-ip-address-dns-name kubeadm-master-3
44 | # Create the two worker nodes
45 | az vm create -n kube-worker-1 \
46 | -g MyKubeCluster --image Ubuntu2204 \
47 | --size Standard_B1s --data-disk-sizes-gb 4 \
48 | --generate-ssh-keys
49 | az vm create -n kube-worker-2 \
50 | -g MyKubeCluster --image Ubuntu2204 \
51 | --size Standard_B2s --data-disk-sizes-gb 4 \
52 | --generate-ssh-keys
53 | ```
54 | ### Configuration on All Nodes
55 | After creating the infrastructure, we need to run these commands on all nodes:
56 | ```bash
57 | sudo apt update
58 | # Install Docker
59 | sudo apt install docker.io -y
60 | sudo systemctl enable docker
61 | # Get the gpg keys for Kubeadm
62 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
63 | sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
64 | # Install Kubeadm
65 | sudo apt install kubeadm -y
66 | sudo apt-get install -y kubelet kubeadm kubectl
67 | sudo apt-mark hold kubelet kubeadm kubectl
68 | #Disable Firewall
69 | ufw disable
70 | #Disable swap
71 | swapoff -a; sed -i '/swap/d' /etc/fstab
72 | #Update sysctl settings for Kubernetes networking
73 | cat >>/etc/sysctl.d/kubernetes.conf<<EOF
74 | net.bridge.bridge-nf-call-ip6tables = 1
75 | net.bridge.bridge-nf-call-iptables = 1
76 | net.ipv4.ip_forward = 1
77 | EOF
78 | sysctl --system
107 | server kube-master1 <master1-ip>:6443 check fall 3 rise 2
108 | server kube-master2 <master2-ip>:6443 check fall 3 rise 2
109 | server kube-master3 <master3-ip>:6443 check fall 3 rise 2
110 | ```
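For reference, a minimal haproxy.cfg sketch for load-balancing the API server across the three masters (host names and IP placeholders are mine; this assumes HAProxy runs on a separate load-balancer VM):

```
frontend kubernetes-api
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server kube-master1 <master1-ip>:6443 check fall 3 rise 2
    server kube-master2 <master2-ip>:6443 check fall 3 rise 2
    server kube-master3 <master3-ip>:6443 check fall 3 rise 2
```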
111 | ### Don't forget to configure firewall rules to enable communication between the load balancer and master nodes.
112 |
113 | You can check that from your master nodes, for example by testing that the load balancer answers on port 6443:
114 | ```nc -zv <load-balancer-dns> 6443```
115 |
116 | ### Initializing the Cluster
117 | On one of the master nodes, initialize the cluster:
118 | ```
119 | sudo kubeadm init --control-plane-endpoint "<load-balancer-dns>:6443" --upload-certs
120 | ```
121 | If you encounter the "Unimplemented service" error, follow the steps below to resolve it.
122 | ```yaml
123 | sudo vim /etc/crictl.yaml
124 |
125 | #put these lines :
126 |
127 | runtime-endpoint: "unix:///run/containerd/containerd.sock"
128 | timeout: 0
129 | debug: false
130 | ```
131 | After the initialization finishes, you should see two join commands, one for the master nodes and one for the worker nodes; execute them to join the rest of your nodes.
132 |
133 | We can now configure the kubeconfig so we can run kubectl as a normal user (run this on all master nodes):
134 | ```
135 | mkdir -p $HOME/.kube
136 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
137 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
138 | ```
139 | The last thing is to deploy the CNI:
140 | ```
141 | kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
142 | ```
143 | Once you have completed the setup, you can run the kubectl get node command to view the list of nodes in the cluster. The output of this command should look something like this:
144 | ```
145 | NAME STATUS ROLES AGE VERSION
146 | kube-master1 Ready master 3m7s v1.28.2
147 | kube-master2 Ready master 3m5s v1.28.2
148 | kube-master3 Ready master 3m3s v1.28.2
149 | kube-worker1 Ready <none> 3m2s v1.28.2
150 | kube-worker2 Ready <none> 3m v1.28.2
151 | ```
152 | ### Conclusion
153 | You've successfully set up a highly available Kubernetes cluster with a control plane on Microsoft Azure. Your cluster is now ready for running containerized applications.
--------------------------------------------------------------------------------