├── README.md
├── challenge-1
│   ├── README.md
│   ├── developer-role.yaml
│   ├── developer-rolebinding.yaml
│   ├── jekyll-node-service.yaml
│   ├── jekyll-pod.yaml
│   ├── jekyll-pvc.yaml
│   └── namespace.yaml
├── challenge-2
│   ├── README.md
│   ├── fileserver-pod.yaml
│   ├── fileserver-pv.yaml
│   ├── fileserver-pvc.yaml
│   └── fileserver-svc.yaml
├── challenge-3
│   ├── README.md
│   ├── db-deployment.yml
│   ├── db-service.yml
│   ├── redis-deployment.yml
│   ├── redis-service.yml
│   ├── result-deployment.yml
│   ├── result-service.yml
│   ├── vote-deployment.yml
│   ├── vote-namespace.yml
│   ├── vote-service.yml
│   └── worker.yml
├── challenge-4
│   ├── README.md
│   ├── pv-cluster.yaml
│   ├── redis-cluster-service.yaml
│   └── redis-statefulset.yaml
└── old-challenges
    ├── challenge-1-wordpress
    │   ├── README.md
    │   ├── app
    │   │   ├── Dockerfile
    │   │   └── docker-entrypoint.sh
    │   ├── k8s
    │   │   ├── 00-wordpress-mysql-pv.yml
    │   │   ├── 01-mysql-pvc.yml
    │   │   ├── 02-wordpress-pvc.yml
    │   │   ├── 03-secret.yml
    │   │   ├── 04-mysql-deploy.yml
    │   │   └── 05-wordpress-deploy.yml
    │   └── nfs
    │       └── nfs.sh
    └── challenge-2-CI-CD
        ├── Dockerfile
        ├── Jenkinsfiles
        │   └── nodejs
        │       └── Jenkinsfile
        ├── README.md
        ├── app
        │   ├── routes
        │   │   └── root.js
        │   └── server.js
        ├── img
        │   ├── 01-weekly-jenkins.png
        │   └── Arch.jpg
        ├── nodejs-k8s
        │   ├── .helmignore
        │   ├── Chart.yaml
        │   ├── templates
        │   │   ├── _helpers.tpl
        │   │   ├── deployment.yml
        │   │   └── service.yml
        │   └── values.yaml
        └── package.json
/README.md:
--------------------------------------------------------------------------------
1 | # [Kubernetes Challenges Series](https://kodekloud.com/courses/kubernetes-challenges/)
2 |
3 | These are fun and exciting Kubernetes challenges from the __[Kubernetes Challenges Series](https://kodekloud.com/courses/kubernetes-challenges/)__ hosted on the **KodeKloud** platform and available for free.
4 |
5 | These challenges are specially designed to give you more hands-on practice and help you excel in Kubernetes.
6 |
7 |
8 | # Sections
9 |
10 | - [Challenge-1](https://kodekloud.com/topic/kubernetes-challenge-1/)
11 | - [Solutions](https://github.com/kodekloudhub/kubernetes-challenges/tree/master/challenge-1)
12 |
13 | - [Challenge-2](https://kodekloud.com/topic/kubernetes-challenge-2/)
14 | - [Solutions](https://github.com/kodekloudhub/kubernetes-challenges/tree/master/challenge-2)
15 |
16 | - [Challenge-3](https://kodekloud.com/topic/kubernetes-challenge-3/)
17 | - [Solutions](https://github.com/kodekloudhub/kubernetes-challenges/tree/master/challenge-3)
18 |
19 | - [Challenge-4](https://kodekloud.com/topic/kubernetes-challenge-4/)
20 | - [Solutions](https://github.com/kodekloudhub/kubernetes-challenges/tree/master/challenge-4)
21 |
22 |
23 |
--------------------------------------------------------------------------------
/challenge-1/README.md:
--------------------------------------------------------------------------------
1 | # Challenge 1
2 |
3 | Deploy the given architecture diagram for implementing a `Jekyll SSG`. Find the lab [here](https://kodekloud.com/topic/kubernetes-challenge-1/).
4 |
5 | As ever, the order in which you create the resources is significant, and is largely governed by the direction of the arrows in the diagram.
6 |
7 | For this challenge, all the namespaced resources are to be created in the existing namespace `development`. When writing YAML manifests, you must include `namespace: development` in the `metadata` section.
8 |
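As a quick illustration, a minimal sketch of the `metadata` section for the PVC in this challenge (names as used in the provided manifests):

```yaml
metadata:
  name: jekyll-site
  namespace: development   # all namespaced resources in this challenge go here
```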
9 | Solve in the following order:
10 |
11 | All are solved by creating a YAML manifest for each resource, as directed by the details shown when you select each icon, with the exception of steps 7 and 8, which are done with `kubectl config`. Expand the solutions below by clicking on the arrowhead icons.
12 |
13 | You should study the manifests provided in the repo carefully and understand how they provide what the question asks.
14 |
15 | 1. `jekyll-pv` - The PV is pre-created; however, you should examine it and check its properties. Getting the PVC correct depends on this.
16 | 1. `jekyll-pvc`
17 |
18 |
19 | Apply the [manifest](./jekyll-pvc.yaml)
20 |
21 |
22 |
23 | 1. `jekyll`
24 |
25 |
26 | Apply the [manifest](./jekyll-pod.yaml)
27 |
28 | The pod will take at least 30 seconds to initialize.
29 |
30 |
31 |
32 | 1. `jekyll-node-service`
33 |
34 |
35 | Apply the [manifest](./jekyll-node-service.yaml)
36 |
37 |
38 |
39 | 1. `developer-role`
40 |
41 |
42 |
43 | ```bash
44 | kubectl create role developer-role --resource=pods,svc,pvc --verb="*" -n development
45 | ```
46 |
47 | --- OR --- Apply the [manifest](./developer-role.yaml)
48 |
49 |
50 |
51 | 1. `developer-rolebinding`
52 |
53 |
54 |
55 | ```bash
56 | kubectl create rolebinding developer-rolebinding --role=developer-role --user=martin -n development
57 | ```
58 |
59 | --- OR --- Apply the [manifest](./developer-rolebinding.yaml)
60 |
61 |
62 |
63 | 1. `kube-config`
64 |
65 |
66 | ```bash
67 | kubectl config set-credentials martin --client-certificate ./martin.crt --client-key ./martin.key
68 | kubectl config set-context developer --cluster kubernetes --user martin
69 | ```
70 |
71 |
72 |
73 | 1. `martin`
74 |
75 |
76 | ```bash
77 | kubectl config use-context developer
78 | ```
79 |
80 |
81 |
82 | # Automate the lab in a single script!
83 |
84 | As DevOps engineers, we love everything to be automated!
85 |
86 | What we can do here is to clone this repo down to the lab to get all the YAML manifest solutions, then apply them in the correct order. The script also waits for the Jekyll pod to be fully started before progressing, thus when the script completes, you can press the `Check` button and the lab will be complete!
87 |
88 |
89 | Automation Script
90 |
91 | Paste this entire script to the lab terminal, sit back and enjoy!
92 |
93 | ```bash
94 | {
95 | # Clone this repo to get the manifests
96 | git clone --depth 1 https://github.com/kodekloudhub/kubernetes-challenges.git
97 |
98 | ### PVC
99 | kubectl apply -f kubernetes-challenges/challenge-1/jekyll-pvc.yaml
100 |
101 | ### POD
102 | kubectl apply -f kubernetes-challenges/challenge-1/jekyll-pod.yaml
103 |
104 | # Wait for pod to be running
105 | echo "Waiting up to 120s for Jekyll pod to be running..."
106 | kubectl wait -n development --for=condition=ready pod -l run=jekyll --timeout 120s
107 |
108 | if [ $? -ne 0 ]
109 | then
110 | echo "The pod did not start correctly. Please reload the lab and try again."
111 | echo "If the issue persists, please report it on the community forum."
112 | echo "https://kodekloud.com/community/c/kubernetes/6"
113 | cd ~
114 | echo "Press CTRL-C to exit"
115 | read x
116 | fi
117 |
118 | ### Service
119 | kubectl apply -f kubernetes-challenges/challenge-1/jekyll-node-service.yaml
120 |
121 | ### Role
122 | kubectl create role developer-role --resource=pods,svc,pvc --verb="*" -n development
123 |
124 | ## RoleBinding
125 | kubectl create rolebinding developer-rolebinding --role=developer-role --user=martin -n development
126 |
127 | ## Martin
128 |
129 | kubectl config set-credentials martin --client-certificate ./martin.crt --client-key ./martin.key
130 | kubectl config set-context developer --cluster kubernetes --user martin
131 |
132 | ## kube-config
133 |
134 | kubectl config use-context developer
135 |
136 | echo -e "\n\nAutomation complete! Press the Check button.\n"
137 | }
138 |
139 | ```
140 |
141 |
142 |
143 |
144 |
145 |
146 |
147 |
148 |
--------------------------------------------------------------------------------
/challenge-1/developer-role.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: rbac.authorization.k8s.io/v1
3 | kind: Role
4 | metadata:
5 | creationTimestamp: null
6 | name: developer-role
7 | namespace: development
8 | rules:
9 | - apiGroups:
10 | - ""
11 | resources:
12 | - pods
13 | - services
14 | - persistentvolumeclaims
15 | verbs:
16 | - '*'
17 |
18 |
--------------------------------------------------------------------------------
/challenge-1/developer-rolebinding.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: rbac.authorization.k8s.io/v1
3 | kind: RoleBinding
4 | metadata:
5 | creationTimestamp: null
6 | name: developer-rolebinding
7 | namespace: development
8 | roleRef:
9 | apiGroup: rbac.authorization.k8s.io
10 | kind: Role
11 | name: developer-role
12 | subjects:
13 | - apiGroup: rbac.authorization.k8s.io
14 | kind: User
15 | name: martin
16 |
17 |
--------------------------------------------------------------------------------
/challenge-1/jekyll-node-service.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | namespace: development
6 | name: jekyll
7 | spec:
8 | type: NodePort
9 | ports:
10 | - port: 8080
11 | targetPort: 4000
12 | nodePort: 30097
13 | selector:
14 | run: jekyll
15 |
16 |
--------------------------------------------------------------------------------
/challenge-1/jekyll-pod.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Pod
4 | metadata:
5 | namespace: development
6 | name: jekyll
7 | labels:
8 | run: jekyll
9 | spec:
10 | containers:
11 | - name: jekyll
12 | image: gcr.io/kodekloud/customimage/jekyll-serve
13 | volumeMounts:
14 | - mountPath: /site
15 | name: site
16 | initContainers:
17 | - name: copy-jekyll-site
18 | image: gcr.io/kodekloud/customimage/jekyll
19 | command: [ "jekyll", "new", "/site" ]
20 | volumeMounts:
21 | - mountPath: /site
22 | name: site
23 | volumes:
24 | - name: site
25 | persistentVolumeClaim:
26 | claimName: jekyll-site
27 |
28 |
--------------------------------------------------------------------------------
/challenge-1/jekyll-pvc.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: PersistentVolumeClaim
4 | metadata:
5 | name: jekyll-site
6 | namespace: development
7 | spec:
8 | accessModes:
9 | - ReadWriteMany
10 | storageClassName: local-storage
11 | resources:
12 | requests:
13 | storage: 1Gi
14 |
15 |
--------------------------------------------------------------------------------
/challenge-1/namespace.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | creationTimestamp: null
6 | name: development
7 |
8 |
--------------------------------------------------------------------------------
/challenge-2/README.md:
--------------------------------------------------------------------------------
1 | # Challenge 2
2 |
3 | Please note that the first two parts of this challenge are more CKA-focused.
4 |
5 | This 2-Node Kubernetes cluster is broken! Troubleshoot and fix the cluster issues, then deploy the objects according to the given architecture diagram to unlock our `Image Gallery`! Find the lab [here](https://kodekloud.com/topic/kubernetes-challenge-2/).
6 |
7 | As ever, the order in which you create the resources is significant, and is largely governed by the direction of the arrows in the diagram.
8 |
9 | You should study the manifests provided in the repo carefully and understand how they provide what the question asks.
10 |
11 | 1. controlplane
12 |
13 |
14 | Fix the controlplane node. This has three subtasks. The order in which to do them is actually the *reverse* of the order in which they are listed!
15 |
16 | 1. kubeconfig = /root/.kube/config, User = kubernetes-admin, Cluster: Server Port = 6443
17 |
18 |
19 | Before we can execute any `kubectl` commands, we must fix the kubeconfig. The server port is incorrect and should be `6443`. Edit this in `vi` and save.
20 |
21 | ```bash
22 | vi .kube/config
23 | ```
24 |
25 | Change the following line to have the correct port `6443`, save and exit vi.
26 |
27 | ```yaml
28 | server: https://controlplane:6433
29 | ```
30 |
31 |
32 | 1. Fix kube-apiserver. Make sure it's running and healthy.
33 |
34 |
35 | The file referenced by the `--client-ca-file` argument to the API server doesn't exist. Edit the API server manifest and correct this.
36 |
37 | ```bash
38 | ls -l /etc/kubernetes/pki/*.crt
39 | # Notice that the correct certificate is ca.crt
40 | vi /etc/kubernetes/manifests/kube-apiserver.yaml
41 | ```
42 |
43 | Change the following line to refer to the correct certificate file, save and exit vi.
44 |
45 | ```yaml
46 | - --client-ca-file=/etc/kubernetes/pki/ca-authority.crt
47 | ```
48 |
49 | Now wait for the API server to restart. This may take a minute or so. You can run the following to check if the container has been created. Press `CTRL-C` to escape from the following command.
50 |
51 | ```bash
52 | watch crictl ps
53 | ```
54 |
55 | If it still hasn't started, then give it a nudge by restarting the kubelet.
56 |
57 | ```bash
58 | systemctl restart kubelet
59 | ```
60 |
61 | ...then run the crictl command again. If you see it starting and stopping, then you've made an error in the manifest that you need to fix.
62 |
63 | You should also be aware of how to [diagnose a crashed API server](https://github.com/kodekloudhub/community-faq/blob/main/docs/diagnose-crashed-apiserver.md).
64 |
65 |
66 |
67 | 1. Master node: coredns deployment has image: registry.k8s.io/coredns/coredns:v1.8.6
68 |
69 |
70 | Run the following:
71 |
72 | ```bash
73 | kubectl get pods -n kube-system
74 | ```
75 |
76 | You will see that CoreDNS has ImagePull errors because the container image is incorrect. To fix this, run the following, update the `image:` to that specified in the question, then save and exit.
77 |
78 | ```bash
79 | kubectl edit deployment -n kube-system coredns
80 | ```
81 |
82 | ---- OR ----
83 |
84 | Edit the image directly
85 |
86 | ```bash
87 | kubectl set image deployment/coredns -n kube-system \
88 | coredns=registry.k8s.io/coredns/coredns:v1.8.6
89 | ```
90 |
91 | Now re-run the `get pods` command above (or use `watch` with it) until the coredns pods have recycled and there are two healthy pods.
92 |
93 |
94 |
95 | 1. node01
96 |
97 |
98 | Ensure node01 is ready and can schedule pods. Run the following:
99 |
100 | ```bash
101 | kubectl get nodes
102 | ```
103 |
104 | We can see that `node01` is in state `Ready,SchedulingDisabled`. This usually means that it is cordoned, so...
105 |
106 | ```bash
107 | kubectl uncordon node01
108 | ```
109 |
110 |
111 |
112 | 1. web
113 |
114 |
115 | Copy all images from the directory '/media' on the controlplane node to the '/web' directory on node01. Here we are setting up the content of the directory on `node01`, which will ultimately be served as a hostPath persistent volume. It's a straightforward copy with ssh (scp).
116 |
117 | ```bash
118 | scp /media/* node01:/web
119 | ```
120 |
121 |
122 |
123 | 1. `data-pv`
124 |
125 | Create new PersistentVolume = 'data-pv'. Apply the [manifest](./fileserver-pv.yaml) with `kubectl apply -f`
126 |
127 |
128 |
129 |
130 | 1. `data-pvc`
131 |
132 | Create new PersistentVolumeClaim = 'data-pvc'. Apply the [manifest](./fileserver-pvc.yaml)
133 |
134 |
135 |
136 |
137 | 1. `gop-file-server`
138 |
139 | Create a pod for the file server, name: 'gop-file-server'. Apply the [manifest](./fileserver-pod.yaml)
140 |
141 |
142 |
143 |
144 | 1. `gop-fs-service`
145 |
146 | New Service, name: 'gop-fs-service'. Apply the [manifest](./fileserver-svc.yaml)
147 |
148 |
149 |
150 |
151 | # Automate the lab in a single script!
152 |
153 | As DevOps engineers, we love everything to be automated!
154 |
155 | What we can do here is to clone this repo down to the lab to get all the YAML manifest solutions, then apply them in the correct order. We will also use some Linux trickery to fix the API server. When the script completes, you can press the `Check` button and the lab will be complete!
156 |
157 |
158 | Automation Script
159 |
160 | Paste this entire script to the lab terminal, sit back and enjoy!
161 |
162 | ```bash
163 | {
164 | # Clone this repo to get the manifests
165 | git clone --depth 1 https://github.com/kodekloudhub/kubernetes-challenges.git
166 |
167 | ### Fix API server
168 |
169 | #### kubeconfig
170 | sed -i 's/6433/6443/' .kube/config
171 |
172 | #### API server
173 | sed -i 's/ca-authority\.crt/ca.crt/' /etc/kubernetes/manifests/kube-apiserver.yaml
174 | # Restart the kubelet to ensure the container is started
175 | systemctl restart kubelet
176 | # Wait for it to be running. We will get back the container ID when it is
177 | id=""
178 | while [ -z "$id" ]
179 | do
180 | echo "Waiting for API server to start..."
181 | sleep 2
182 | id=$(crictl ps -a --name kube-apiserver --state running --output json | awk -F '"' '/"id":/{print $4}')
183 | done
184 |
185 | echo "API Server has started (ID = $id). Giving it 10 seconds to initialise..."
186 | sleep 10
187 |
188 | #### CoreDNS
189 | kubectl set image deployment/coredns -n kube-system coredns=registry.k8s.io/coredns/coredns:v1.8.6
190 |
191 | ### Fix node01
192 | kubectl uncordon node01
193 |
194 | ### Web directory
195 | scp /media/* node01:/web
196 |
197 | ### data-pv
198 | kubectl apply -f kubernetes-challenges/challenge-2/fileserver-pv.yaml
199 |
200 | ### data-pvc
201 | kubectl apply -f kubernetes-challenges/challenge-2/fileserver-pvc.yaml
202 |
203 | ### gop-file-server
204 | kubectl apply -f kubernetes-challenges/challenge-2/fileserver-pod.yaml
205 |
206 | ### gop-fs-service
207 | kubectl apply -f kubernetes-challenges/challenge-2/fileserver-svc.yaml
208 |
209 | echo -e "\n\nAutomation complete! Press the Check button.\n"
210 | }
211 |
212 | ```
213 |
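The `awk` one-liner in the script pulls the `id` field out of crictl's JSON output by splitting each line on double quotes. A standalone sketch of the same extraction, run against a made-up sample (the sample and container ID are illustrative, not captured from a real cluster):

```bash
# crictl-like JSON fragment with a hypothetical container ID
sample='
{
  "containers": [
    {
      "id": "3f2a9c1d4e5b",
      "state": "CONTAINER_RUNNING"
    }
  ]
}'

# Split on double quotes: on the line containing "id":, the 4th field is the value.
echo "$sample" | awk -F '"' '/"id":/{print $4}'
```

The same idea is what lets the script detect, without `jq`, when the API server container reaches the running state.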
--------------------------------------------------------------------------------
/challenge-2/fileserver-pod.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Pod
4 | metadata:
5 | labels:
6 | run: gop-file-server
7 | name: gop-file-server
8 | spec:
9 | volumes:
10 | - name: data-store
11 | persistentVolumeClaim:
12 | claimName: data-pvc
13 | containers:
14 | - image: kodekloud/fileserver
15 | imagePullPolicy: IfNotPresent
16 | name: gop-file-server
17 | volumeMounts:
18 | - name: data-store
19 | mountPath: /web
20 | dnsPolicy: ClusterFirst
21 | restartPolicy: Never
22 |
23 |
24 |
25 |
--------------------------------------------------------------------------------
/challenge-2/fileserver-pv.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | kind: PersistentVolume
3 | apiVersion: v1
4 | metadata:
5 | name: data-pv
6 | spec:
7 | accessModes: ["ReadWriteMany"]
8 | capacity:
9 | storage: 1Gi
10 | hostPath:
11 | path: /web
12 | type: DirectoryOrCreate
13 |
14 |
--------------------------------------------------------------------------------
/challenge-2/fileserver-pvc.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | kind: PersistentVolumeClaim
3 | apiVersion: v1
4 | metadata:
5 | name: data-pvc
6 | spec:
7 | accessModes: ["ReadWriteMany"]
8 | resources:
9 | requests:
10 | storage: 1Gi
11 | volumeName: data-pv
12 |
13 |
--------------------------------------------------------------------------------
/challenge-2/fileserver-svc.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | creationTimestamp: null
6 | labels:
7 | app: gop-fs-service
8 | name: gop-fs-service
9 | spec:
10 | ports:
11 | - name: 8080-8080
12 | port: 8080
13 | protocol: TCP
14 | targetPort: 8080
15 | nodePort: 31200
16 | selector:
17 | run: gop-file-server
18 | type: NodePort
19 |
20 |
21 |
--------------------------------------------------------------------------------
/challenge-3/README.md:
--------------------------------------------------------------------------------
1 | # Challenge 3
2 |
3 | Deploy the given architecture to the `vote` namespace. Find the lab [here](https://kodekloud.com/topic/kubernetes-challenge-3/).
4 |
5 | As ever, the order in which you create the resources is significant, and is largely governed by the direction of the arrows in the diagram. There are a lot of inter-dependencies in this one, and things will break if deployed in the wrong order. Since all the applications are going to depend on their data sources, we do those first, followed by the services that front the data sources, then the rest.
6 |
7 | Note that some of the deployments may take up to a minute to start fully and reach running state.
8 |
9 | You should study the manifests provided in the repo carefully and understand how they provide what the question asks.
10 |
11 | 1. vote (namespace)
12 |
13 |
14 | Create a new namespace: name = 'vote'
15 |
16 | ```bash
17 | kubectl create namespace vote
18 | ```
19 |
20 |
21 |
22 | 1. db-deployment
23 |
24 |
25 | Create new deployment. name: 'db-deployment'
26 |
27 | Apply the [manifest](./db-deployment.yml)
28 |
29 |
30 |
31 | 1. db-service
32 |
33 |
34 | Create new service: 'db'
35 |
36 | Apply the [manifest](./db-service.yml)
37 |
38 |
39 |
40 | 1. redis-deployment
41 |
42 |
43 | Create new deployment, name: 'redis-deployment'
44 |
45 | Apply the [manifest](./redis-deployment.yml)
46 |
47 |
48 |
49 | 1. redis-service
50 |
51 |
52 | New Service, name = 'redis'
53 |
54 | Apply the [manifest](./redis-service.yml)
55 |
56 |
57 |
58 | 1. worker
59 |
60 |
61 | Create new deployment. name: 'worker'
62 |
63 | Apply the [manifest](./worker.yml)
64 |
65 |
66 |
67 | 1. vote-deployment
68 |
69 |
70 | Create a deployment: name = 'vote-deployment'
71 |
72 | Apply the [manifest](./vote-deployment.yml)
73 |
74 |
75 |
76 | 1. vote-service
77 |
78 |
79 | Create a new service: name = vote-service
80 |
81 | Apply the [manifest](./vote-service.yml)
82 |
83 |
84 |
85 | 1. result-deployment
86 |
87 |
88 | Create a new deployment: name = result-deployment
89 |
90 | Apply the [manifest](./result-deployment.yml)
91 |
92 |
93 |
94 | 1. result-service
95 |
96 |
97 | Create a new service: name = result-service
98 |
99 | Apply the [manifest](./result-service.yml)
100 |
101 |
102 |
103 | # Automate the lab in a single script!
104 |
105 | As DevOps engineers, we love everything to be automated!
106 |
107 | What we can do here is to clone this repo down to the lab to get all the YAML manifest solutions, then apply them in the correct order. When the script completes, you can press the `Check` button and the lab will be complete!
108 |
109 | Since there is a strong dependency between all the deployments, i.e. worker will fail if its data sources aren't ready, we use a shell function to wait for a pod to be running given the name of its deployment.
110 |
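That wait-and-check idea is a generic shell pattern: poll a command until it succeeds or a timeout expires. A minimal sketch of the pattern (the script itself leans on `kubectl wait`, which does this polling for you; `wait_for` here is just an illustrative helper):

```bash
# Poll a command once per second until it succeeds, or fail after $1 seconds.
wait_for() {
  local timeout=$1
  shift
  local waited=0
  until "$@"
  do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]
    then
      return 1
    fi
  done
}

wait_for 3 true && echo "condition met"
```

In the real script, the polled command is `kubectl wait ... --for condition=Available=True`, and the failure branch tells the user how to get help.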
111 | You should study the manifests provided in the repo carefully and understand how they provide what the question asks.
112 |
113 |
114 | Automation Script
115 |
116 | Paste this entire script to the lab terminal, sit back and enjoy!
117 |
118 | ```bash
119 | {
120 | wait_deployment() {
121 | deployment=$1
122 |
123 | echo "Waiting up to 120s for $deployment deployment to be available..."
124 | kubectl wait -n vote deployment $deployment --for condition=Available=True --timeout=120s
125 |
126 | if [ $? -ne 0 ]
127 | then
128 | echo "The deployment did not rollout correctly. Please reload the lab and try again."
129 | echo "If the issue persists, please report it in Slack in kubernetes-challenges channel"
130 | echo "https://kodekloud.slack.com/archives/C02LS58EGQ4"
131 | echo "Press CTRL-C to exit"
132 | read x
133 | fi
134 | }
135 |
136 | git clone https://github.com/kodekloudhub/kubernetes-challenges.git
137 |
138 | kubectl create namespace vote
139 |
140 | kubectl apply -f kubernetes-challenges/challenge-3/db-deployment.yml
141 |
142 | wait_deployment db-deployment
143 |
144 | kubectl apply -f kubernetes-challenges/challenge-3/db-service.yml
145 |
146 | kubectl apply -f kubernetes-challenges/challenge-3/redis-deployment.yml
147 |
148 | wait_deployment redis-deployment
149 |
150 | kubectl apply -f kubernetes-challenges/challenge-3/redis-service.yml
151 |
152 | kubectl apply -f kubernetes-challenges/challenge-3/worker.yml
153 |
154 | wait_deployment worker
155 |
156 | kubectl apply -f kubernetes-challenges/challenge-3/result-deployment.yml
157 |
158 | wait_deployment result-deployment
159 |
160 | kubectl apply -f kubernetes-challenges/challenge-3/result-service.yml
161 |
162 | kubectl apply -f kubernetes-challenges/challenge-3/vote-deployment.yml
163 |
164 | wait_deployment vote-deployment
165 |
166 | kubectl apply -f kubernetes-challenges/challenge-3/vote-service.yml
167 |
168 | echo -e "\nAutomation complete. Press the Check button.\n"
169 | }
170 | ```
171 |
172 |
--------------------------------------------------------------------------------
/challenge-3/db-deployment.yml:
--------------------------------------------------------------------------------
1 | # kubectl create deployment db-deployment --image=postgres:9.4 --dry-run=client -o yaml -n vote > db-deployment.yaml
2 |
3 | ---
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | labels:
8 | app: db-deployment
9 | name: db-deployment
10 | namespace: vote
11 | spec:
12 | replicas: 1
13 | selector:
14 | matchLabels:
15 | app: db-deployment
16 | template:
17 | metadata:
18 | labels:
19 | app: db-deployment
20 | spec:
21 | containers:
22 | - image: postgres:9.4
23 | name: postgres
24 | env:
25 | - name: POSTGRES_HOST_AUTH_METHOD
26 | value: trust
27 | volumeMounts:
28 | - mountPath: /var/lib/postgresql/data
29 | name: db-data
30 | volumes:
31 | - name: db-data
32 | emptyDir: {}
33 |
34 |
35 |
36 |
--------------------------------------------------------------------------------
/challenge-3/db-service.yml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | name: db
6 | namespace: vote
7 | spec:
8 | type: ClusterIP
9 | ports:
10 | - port: 5432
11 | targetPort: 5432
12 | selector:
13 | app: db-deployment
14 |
15 |
--------------------------------------------------------------------------------
/challenge-3/redis-deployment.yml:
--------------------------------------------------------------------------------
1 | # kubectl create deployment redis-deployment --image=redis:alpine --dry-run=client -o yaml -n vote > redis-deployment.yaml
2 | # Add emptyDir type volume under the volumes section.
3 |
4 | ---
5 | apiVersion: apps/v1
6 | kind: Deployment
7 | metadata:
8 | labels:
9 | app: redis-deployment
10 | name: redis-deployment
11 | namespace: vote
12 | spec:
13 | replicas: 1
14 | selector:
15 | matchLabels:
16 | app: redis-deployment
17 | template:
18 | metadata:
19 | labels:
20 | app: redis-deployment
21 | spec:
22 | containers:
23 | - image: redis:alpine
24 | name: redis-deployment
25 | volumeMounts:
26 | - mountPath: /data
27 | name: redis-data
28 | volumes:
29 | - name: redis-data
30 | emptyDir: {}
31 |
32 |
--------------------------------------------------------------------------------
/challenge-3/redis-service.yml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | name: redis
6 | namespace: vote
7 | spec:
8 | type: ClusterIP
9 | ports:
10 | - port: 6379
11 | targetPort: 6379
12 | selector:
13 | app: redis-deployment
14 |
15 |
--------------------------------------------------------------------------------
/challenge-3/result-deployment.yml:
--------------------------------------------------------------------------------
1 | # kubectl create deployment result-deployment --image=kodekloud/examplevotingapp_result:before --dry-run=client -oyaml -n vote > result-deployment.yaml
2 |
3 | ---
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | creationTimestamp: null
8 | labels:
9 | app: result-deployment
10 | name: result-deployment
11 | namespace: vote
12 | spec:
13 | replicas: 1
14 | selector:
15 | matchLabels:
16 | app: result-deployment
17 | strategy: {}
18 | template:
19 | metadata:
20 | creationTimestamp: null
21 | labels:
22 | app: result-deployment
23 | spec:
24 | containers:
25 | - image: kodekloud/examplevotingapp_result:before
26 | name: examplevotingapp-result-shxrp
27 |
28 |
29 |
30 |
--------------------------------------------------------------------------------
/challenge-3/result-service.yml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | name: result-service
6 | namespace: vote
7 | spec:
8 | type: NodePort
9 | ports:
10 | - port: 5001
11 | targetPort: 80
12 | nodePort: 31001
13 | selector:
14 | app: result-deployment
15 |
16 |
--------------------------------------------------------------------------------
/challenge-3/vote-deployment.yml:
--------------------------------------------------------------------------------
1 | # kubectl create deployment vote-deployment --image=kodekloud/examplevotingapp_vote:before -n vote --dry-run=client -o yaml > deploy.yaml
2 |
3 | ---
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | creationTimestamp: null
8 | labels:
9 | app: vote-deployment
10 | name: vote-deployment
11 | namespace: vote
12 | spec:
13 | replicas: 1
14 | selector:
15 | matchLabels:
16 | app: vote-deployment
17 | strategy: {}
18 | template:
19 | metadata:
20 | creationTimestamp: null
21 | labels:
22 | app: vote-deployment
23 | spec:
24 | containers:
25 | - image: kodekloud/examplevotingapp_vote:before
26 | name: vote
27 | resources: {}
28 |
29 |
30 |
--------------------------------------------------------------------------------
/challenge-3/vote-namespace.yml:
--------------------------------------------------------------------------------
1 | # To create a new namespace from the CLI command :- kubectl create namespace vote
2 | ---
3 | apiVersion: v1
4 | kind: Namespace
5 | metadata:
6 | name: vote
7 |
--------------------------------------------------------------------------------
/challenge-3/vote-service.yml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | name: vote-service
6 | namespace: vote
7 | spec:
8 | type: NodePort
9 | ports:
10 | - port: 5000
11 | targetPort: 80
12 | nodePort: 31000
13 | selector:
14 | app: vote-deployment
15 |
--------------------------------------------------------------------------------
/challenge-3/worker.yml:
--------------------------------------------------------------------------------
1 | # kubectl create deployment worker --image=kodekloud/examplevotingapp_worker --dry-run=client -o yaml -n vote > worker.yaml
2 |
3 | ---
4 | apiVersion: apps/v1
5 | kind: Deployment
6 | metadata:
7 | creationTimestamp: null
8 | labels:
9 | app: worker
10 | name: worker
11 | namespace: vote
12 | spec:
13 | replicas: 1
14 | selector:
15 | matchLabels:
16 | app: worker
17 | strategy: {}
18 | template:
19 | metadata:
20 | creationTimestamp: null
21 | labels:
22 | app: worker
23 | spec:
24 | containers:
25 | - image: kodekloud/examplevotingapp_worker
26 | name: examplevotingapp-worker-s7cwx
27 |
28 |
--------------------------------------------------------------------------------
/challenge-4/README.md:
--------------------------------------------------------------------------------
1 | # Challenge 4
2 |
3 | Build a highly available Redis Cluster based on the given architecture diagram. Find the lab [here](https://kodekloud.com/topic/kubernetes-challenge-4/).
4 |
5 | As ever, the order in which you create the resources is significant, and is largely governed by the direction of the arrows in the diagram.
6 |
7 | You should study the manifests provided in the repo carefully and understand how they provide what the question asks.
8 |
9 |
10 | 1. redis01 thru redis06 - create directories
11 |
12 |
13 | Using a shell for loop, we can create all of these at once.
14 |
15 | 1. Determine the name of the worker node
16 |
17 |
18 | ```bash
19 | kubectl get nodes
20 | ```
21 |
22 |
23 |
24 | 1. ssh to the worker node
25 |
26 |
27 | ```bash
28 | ssh node01
29 | ```
30 |
31 |
32 | 1. Create the required directories
33 |
34 |
35 | ```bash
36 | for i in $(seq 1 6) ; do mkdir "/redis0$i" ; done
37 | ```
38 |
39 | Verify
40 |
41 | ```bash
42 | ls -ld /redis*
43 | ```
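Before running the loop on the node, you can sanity-check the exact same construct locally against a throwaway directory (`mktemp -d` below is a stand-in for the node's `/`):

```shell
# Dry-run of the directory-creation loop against a scratch base directory
base=$(mktemp -d)
for i in $(seq 1 6) ; do mkdir "$base/redis0$i" ; done
ls -ld "$base"/redis*
```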
44 |
45 | Now exit the worker node with `CTRL-D` or `exit`
46 |
47 |
48 |
49 |
50 | 1.
51 | redis01 thru redis06 - create persistent volumes
52 |
53 | You could create a manifest for each persistent volume individually, but that's repetitive and time-consuming, so let's instead use the power of Linux for loops, [heredocs](https://linuxize.com/post/bash-heredoc/) and variable substitution!
54 |
55 | The manifest will be generated once for each value 1 thru 6 and each one piped into `kubectl` which will apply it.
56 |
57 | ```bash
58 | for i in $(seq 1 6)
59 | do
60 | cat <<EOF | kubectl apply -f -
61 | apiVersion: v1
62 | kind: PersistentVolume
63 | metadata:
64 |   name: redis0$i
65 | spec:
66 |   accessModes: ["ReadWriteOnce"]
67 |   capacity:
68 |     storage: 1Gi
69 |   hostPath:
70 |     path: /redis0$i
71 | EOF
72 | done
73 | ```
74 |
81 | 1.
82 | redis-cluster-service
83 |
84 | Because the redis cluster is a StatefulSet, it is necessary for the service to exist first, as the StatefulSet manifest refers to it by name.
85 |
86 | Apply the [manifest](./redis-cluster-service.yaml)
87 |
88 | 1.
89 | redis-cluster
90 |
91 | Apply the [manifest](./redis-statefulset.yaml)
92 |
93 |
94 |
95 | 1.
96 | redis-cluster-config
97 |
98 | Now we bootstrap the redis cluster. We execute a command in the first replica of the StatefulSet, i.e. `redis-cluster-0`. The command is provided in the question; it gathers the IPs of all the cluster member pods using jsonpath and passes them as arguments to the cluster initialization tool.
99 |
100 | It will ask you if you want to proceed. Type `yes`
101 |
102 | ```bash
103 | kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
104 | $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
105 | ```
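The jsonpath expression simply flattens the pod list into space-separated `IP:6379` tokens. A minimal sketch of that flattening, using hypothetical pod IPs in place of live `kubectl` output:

```shell
# Hypothetical pod IPs standing in for the output of:
#   kubectl get pods -l app=redis-cluster -o jsonpath='{.items[*].status.podIP}'
pod_ips="10.42.0.10 10.42.0.11 10.42.0.12"

# Append the Redis client port to each IP, exactly as the jsonpath template does
endpoints=""
for ip in $pod_ips; do
  endpoints="${endpoints}${ip}:6379 "
done

echo "$endpoints"
```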
106 |
107 |
108 |
109 | # Automate the lab in a single script!
110 |
111 | As DevOps engineers, we love everything to be automated!
112 |
113 | What we can do here is to clone this repo down to the lab to get all the YAML manifest solutions, then apply them in the correct order. When the script completes, you can press the `Check` button and the lab will be complete!
114 |
115 |
116 |
117 | Automation Script
118 |
119 | Paste this entire script to the lab terminal, sit back and enjoy!
120 |
121 | ```bash
122 | {
123 | ### Clone this repo to get the manifests
124 | git clone --depth 1 https://github.com/kodekloudhub/kubernetes-challenges.git
125 |
126 | ### Create PV directories on node01
127 | # See https://www.cyberciti.biz/faq/unix-linux-execute-command-using-ssh/
128 | ssh node01 'for i in $(seq 1 6) ; do mkdir "/redis0$i" ; done'
129 |
130 | ### Create PVs
131 | for i in $(seq 1 6)
132 | do
133 |   cat <<EOF | kubectl apply -f -
134 | apiVersion: v1
135 | kind: PersistentVolume
136 | metadata:
137 |   name: redis0$i
138 | spec:
139 |   accessModes: ["ReadWriteOnce"]
140 |   capacity:
141 |     storage: 1Gi
142 |   hostPath:
143 |     path: /redis0$i
144 | EOF
145 | done
146 |
147 | ### Apply the service and StatefulSet manifests
148 | kubectl apply -f kubernetes-challenges/challenge-4/redis-cluster-service.yaml
149 | kubectl apply -f kubernetes-challenges/challenge-4/redis-statefulset.yaml
150 |
151 | ### Wait for all replicas, then bootstrap the cluster
152 | kubectl rollout status statefulset/redis-cluster
153 | echo yes | kubectl exec -i redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 \
154 |   $(kubectl get pods -l app=redis-cluster -o jsonpath='{range.items[*]}{.status.podIP}:6379 {end}')
155 | }
156 | ```
--------------------------------------------------------------------------------
/challenge-4/pv-cluster.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: v1
3 | kind: List
4 | items:
5 | - kind: PersistentVolume
6 | apiVersion: v1
7 | metadata:
8 | name: redis01
9 | spec:
10 | accessModes: ["ReadWriteOnce"]
11 | capacity:
12 | storage: 1Gi
13 | hostPath:
14 | path: /redis01
15 | - kind: PersistentVolume
16 | apiVersion: v1
17 | metadata:
18 | name: redis02
19 | spec:
20 | accessModes: ["ReadWriteOnce"]
21 | capacity:
22 | storage: 1Gi
23 | hostPath:
24 | path: /redis02
25 | - kind: PersistentVolume
26 | apiVersion: v1
27 | metadata:
28 | name: redis03
29 | spec:
30 | accessModes: ["ReadWriteOnce"]
31 | capacity:
32 | storage: 1Gi
33 | hostPath:
34 | path: /redis03
35 | - kind: PersistentVolume
36 | apiVersion: v1
37 | metadata:
38 | name: redis04
39 | spec:
40 | accessModes: ["ReadWriteOnce"]
41 | capacity:
42 | storage: 1Gi
43 | hostPath:
44 | path: /redis04
45 | - kind: PersistentVolume
46 | apiVersion: v1
47 | metadata:
48 | name: redis05
49 | spec:
50 | accessModes: ["ReadWriteOnce"]
51 | capacity:
52 | storage: 1Gi
53 | hostPath:
54 | path: /redis05
55 | - kind: PersistentVolume
56 | apiVersion: v1
57 | metadata:
58 | name: redis06
59 | spec:
60 | accessModes: ["ReadWriteOnce"]
61 | capacity:
62 | storage: 1Gi
63 | hostPath:
64 | path: /redis06
65 |
66 |
--------------------------------------------------------------------------------
/challenge-4/redis-cluster-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | name: redis-cluster-service
5 | spec:
6 | ports:
7 | - port: 6379
8 | name: client
9 | targetPort: 6379
10 | - port: 16379
11 | name: gossip
12 | targetPort: 16379
13 | selector:
14 | app: redis-cluster
15 |
--------------------------------------------------------------------------------
/challenge-4/redis-statefulset.yaml:
--------------------------------------------------------------------------------
1 | ---
2 | apiVersion: apps/v1
3 | kind: StatefulSet
4 | metadata:
5 | name: redis-cluster
6 | labels:
7 | run: redis-cluster
8 | spec:
9 | serviceName: redis-cluster-service
10 | replicas: 6
11 | selector:
12 | matchLabels:
13 | app: redis-cluster
14 | template:
15 | metadata:
16 | name: redis-cluster
17 | labels:
18 | app: redis-cluster
19 | spec:
20 | volumes:
21 | - name: conf
22 | configMap:
23 | name: redis-cluster-configmap
24 | defaultMode: 0755
25 | containers:
26 | - image: redis:5.0.1-alpine
27 | name: redis
28 | command:
29 | - "/conf/update-node.sh"
30 | - "redis-server"
31 | - "/conf/redis.conf"
32 | env:
33 | - name: POD_IP
34 | valueFrom:
35 | fieldRef:
36 | fieldPath: status.podIP
37 | apiVersion: v1
38 | ports:
39 | - containerPort: 6379
40 | name: client
41 | - name: gossip
42 | containerPort: 16379
43 | volumeMounts:
44 | - name: conf
45 | mountPath: /conf
46 | readOnly: false
47 | - name: data
48 | mountPath: /data
49 | readOnly: false
50 | volumeClaimTemplates:
51 | - metadata:
52 | name: data
53 | spec:
54 | accessModes: ["ReadWriteOnce"]
55 | resources:
56 | requests:
57 | storage: 1Gi
58 |
59 |
60 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/README.md:
--------------------------------------------------------------------------------
1 |
2 | This challenge is based on the example posted on Kubernetes Documentation: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
3 |
4 | # NFS
5 |
6 | ## Creating/Adjusting Folder /mysql & /html and editing /etc/exports
7 |
8 | $ cd nfs
9 |
10 | $ chmod u+x nfs.sh
11 |
12 | $ sudo ./nfs.sh
13 |
14 |
15 | # K8s
16 |
17 | ## Creating persistent volumes for WordPress & MySQL
18 |
19 | $ cd k8s
20 |
21 | $ vim 00-wordpress-mysql-pv.yml
22 |
23 | replace "server: x.x.x.x" with NFS IP
24 |
25 | $ kubectl create -f 00-wordpress-mysql-pv.yml
26 |
27 |
28 | Verify the persistent volumes for WordPress & MySQL
29 |
30 | $ kubectl get pv --show-labels
31 |
32 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
33 | mysql-persistent-storage 10Gi RWX Retain Available 43m app=wordpress,tier=mysql
34 | wordpress-persistent-storage 10Gi RWX Retain Available 43m app=wordpress,tier=frontend
35 |
36 |
37 | ## Creating a persistent volume claim for MySQL
38 |
39 | $ kubectl create -f 01-mysql-pvc.yml
40 |
41 | ## Creating a persistent volume claim for WordPress
42 |
43 | $ kubectl create -f 02-wordpress-pvc.yml
44 |
45 | Verify the persistent volume claims for WordPress & MySQL
46 |
47 | $ kubectl get pvc --show-labels
48 |
49 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE LABELS
50 | mysql-persistent-storage Bound mysql-persistent-storage 10Gi RWX 42m app=wordpress
51 | wordpress-persistent-storage Bound wordpress-persistent-storage 10Gi RWX 42m app=wordpress
52 |
53 | Check the PVs again and note the STATUS and CLAIM columns:
54 |
55 | the "STATUS" has changed from "Available" to "Bound"
56 |
57 | the "CLAIM" has changed from empty to our new PersistentVolumeClaim
58 |
59 | $ kubectl get pv --show-labels
60 |
61 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE LABELS
62 | mysql-persistent-storage 10Gi RWX Retain Bound default/mysql-persistent-storage 53m app=wordpress,tier=mysql
63 | wordpress-persistent-storage 10Gi RWX Retain Bound default/wordpress-persistent-storage 53m app=wordpress,tier=frontend
64 |
65 |
66 | ## Creating the Secret used by MySQL
67 |
68 | $ echo -n 'admin' | base64
69 |
70 | $ kubectl create -f 03-secret.yml
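The `-n` flag on `echo` matters here: without it the trailing newline is encoded too and silently becomes part of the password. Compare:

```shell
# Correct: encode exactly the five bytes of 'admin'
echo -n 'admin' | base64      # YWRtaW4=

# Wrong: the trailing newline from echo gets encoded as well
echo 'admin' | base64         # YWRtaW4K
```

The `YWRtaW4=` value is what appears in the `password` field of 03-secret.yml.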
71 |
72 | ## Verify the Secret used by MySQL
73 |
74 | $ kubectl get secret
75 |
76 | NAME TYPE DATA AGE
77 | mysql-pass Opaque 1 19s
78 |
79 |
80 | ## Deploy MySQL with its "ClusterIP" service
81 |
82 | $ kubectl create -f 04-mysql-deploy.yml
83 |
84 | Verify the MySQL Service "wordpress-mysql", port "3306", label "app=wordpress"
85 |
86 | $ kubectl get svc --show-labels
87 |
88 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
89 | wordpress-mysql ClusterIP None 3306/TCP 1m app=wordpress
90 |
91 | Verify MySQL Deployment "mysql", desired "1", current "1"
92 |
93 | $ kubectl get deployment --show-labels
94 |
95 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
96 | mysql 1 1 1 1 45m app=wordpress
97 |
98 | Verify MySQL pod with label "app=wordpress,tier=mysql"
99 |
100 | $ kubectl get pods --selector=app=wordpress,tier=mysql
101 |
102 | NAME READY STATUS RESTARTS AGE
103 | mysql-6d468bfbf7-g7hhh 1/1 Running 1 20m
104 |
105 | Verify that MySQL stores its data on NFS
106 |
107 | $ ssh nfs-server
108 |
109 | $ ls -lh /mysql
110 |
111 |
112 | ## Deploy WordPress with its "NodePort" service, image "mohamedayman/wordpress"
113 |
114 | $ kubectl create -f 05-wordpress-deploy.yml
115 |
116 | Verify the WordPress Service "wordpress", port "80", NodePort "31004", label "app=wordpress"
117 |
118 | $ kubectl get svc --show-labels
119 |
120 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
121 | wordpress NodePort 10.43.71.69 80:31004/TCP 21s app=wordpress
122 |
123 | Verify Wordpress Deployment "wordpress", desired "2", current "2"
124 |
125 | $ kubectl get deployment --show-labels
126 |
127 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE LABELS
128 | wordpress 2 2 2 2 2m app=wordpress
129 |
130 | Verify wordpress pod with label "app=wordpress,tier=frontend"
131 |
132 | $ kubectl get pods --selector=app=wordpress,tier=frontend
133 |
134 | NAME READY STATUS RESTARTS AGE
135 | wordpress-5f85d888df-8s5tw 1/1 Running 0 11m
136 | wordpress-5f85d888df-l882q 1/1 Running 0 11m
137 |
138 | Verify that WordPress stores its data on NFS
139 |
140 | $ ssh nfs-server
141 |
142 | $ ls -lh /html
143 |
144 |
145 | # Helm
146 |
147 | Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
148 |
149 | ## Use Helm to:
150 |
151 | Find and use popular software packaged as Helm charts to run in Kubernetes
152 |
153 | Share your own applications as Helm charts
154 |
155 | Create reproducible builds of your Kubernetes applications
156 |
157 | Intelligently manage your Kubernetes manifest files
158 |
159 | Manage releases of Helm packages
160 |
161 |
162 | ### Helm in a Handbasket
163 |
164 | - Helm has two parts: a client (helm) and a server (tiller). Tiller runs inside your Kubernetes cluster and manages releases (installations) of your charts.
165 |
166 | - Helm runs on your laptop, CI/CD, or wherever you want it to run.
167 |
168 | - Charts are Helm packages that contain at least two things: a description of the package (Chart.yaml) and one or more templates, which contain Kubernetes manifest files.
169 |
170 | - Charts can be stored on disk, or fetched from remote chart repositories (like Debian or RedHat packages).
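Those two required parts can be seen by assembling a skeletal chart by hand (the `mychart` name and its single template are hypothetical, for illustration only):

```shell
# Minimal chart layout: Chart.yaml (the description) plus a templates/ directory
mkdir -p mychart/templates

cat > mychart/Chart.yaml <<'EOF'
apiVersion: v1
name: mychart
version: 0.1.0
description: A minimal example chart
EOF

cat > mychart/templates/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-svc
EOF

find mychart -type f
```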
173 |
174 | ### Installing the helm client
175 |
176 | - Download your desired version https://github.com/helm/helm/releases
177 |
178 | - Unpack it (tar -zxvf helm-v2.0.0-linux-amd64.tgz)
179 |
180 | - Find the helm binary in the unpacked directory, and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)
181 |
182 | $ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
183 |
184 | $ tar -xzvf helm-v2.11.0-linux-amd64.tar.gz
185 |
186 | $ sudo mv linux-amd64/helm /usr/local/bin/
187 |
188 | $ helm version
189 |
190 | Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
191 |
192 |
193 |
194 | ### Installing Tiller
195 |
196 | Tiller, the server portion of Helm, typically runs inside of your Kubernetes cluster.
197 |
198 | $ helm init --upgrade
199 |
200 | $ kubectl get deployment -n kube-system --selector=app=helm
201 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
202 | tiller-deploy 1 1 1 1 1m
203 |
204 | $ kubectl get pods -n kube-system --selector=app=helm
205 | NAME READY STATUS RESTARTS AGE
206 | tiller-deploy-759b9d56c-wcpxx 1/1 Running 0 2m
207 |
208 |
209 | ### Create a new chart with the given name
210 |
211 | $ cd helm
212 |
213 | $ helm create wordpress
214 |
215 | $ helm install --name wordpress wordpress --set nfs.server=x.x.x.x
216 |
217 | $ helm ls
218 |
219 | NAME REVISION UPDATED STATUS CHART NAMESPACE
220 | wordpress 1 Wed Nov 14 00:58:25 2018 DEPLOYED wordpress-0.1.0 default
221 |
222 | ### Clean Up
223 |
224 | $ helm delete wordpress --purge
225 |
226 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/app/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM php:7.0-apache
2 |
3 | # install the PHP extensions we need
4 | RUN set -ex; \
5 | \
6 | savedAptMark="$(apt-mark showmanual)"; \
7 | \
8 | apt-get update; \
9 | apt-get install -y --no-install-recommends \
10 | libjpeg-dev \
11 | libpng-dev \
12 | ; \
13 | \
14 | docker-php-ext-configure gd --with-png-dir=/usr --with-jpeg-dir=/usr; \
15 | docker-php-ext-install gd mysqli opcache zip; \
16 | \
17 | # reset apt-mark's "manual" list so that "purge --auto-remove" will remove all build dependencies
18 | apt-mark auto '.*' > /dev/null; \
19 | apt-mark manual $savedAptMark; \
20 | ldd "$(php -r 'echo ini_get("extension_dir");')"/*.so \
21 | | awk '/=>/ { print $3 }' \
22 | | sort -u \
23 | | xargs -r dpkg-query -S \
24 | | cut -d: -f1 \
25 | | sort -u \
26 | | xargs -rt apt-mark manual; \
27 | \
28 | apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
29 | rm -rf /var/lib/apt/lists/*
30 |
31 | # set recommended PHP.ini settings
32 | # see https://secure.php.net/manual/en/opcache.installation.php
33 | RUN { \
34 | echo 'opcache.memory_consumption=128'; \
35 | echo 'opcache.interned_strings_buffer=8'; \
36 | echo 'opcache.max_accelerated_files=4000'; \
37 | echo 'opcache.revalidate_freq=2'; \
38 | echo 'opcache.fast_shutdown=1'; \
39 | echo 'opcache.enable_cli=1'; \
40 | } > /usr/local/etc/php/conf.d/opcache-recommended.ini
41 |
42 | RUN a2enmod rewrite expires
43 |
44 | VOLUME /var/www/html
45 |
46 | ENV WORDPRESS_VERSION 4.9.7
47 | ENV WORDPRESS_SHA1 7bf349133750618e388e7a447bc9cdc405967b7d
48 |
49 | RUN set -ex; \
50 | curl -o wordpress.tar.gz -fSL "https://wordpress.org/wordpress-${WORDPRESS_VERSION}.tar.gz"; \
51 | echo "$WORDPRESS_SHA1 *wordpress.tar.gz" | sha1sum -c -; \
52 | # upstream tarballs include ./wordpress/ so this gives us /usr/src/wordpress
53 | tar -xzf wordpress.tar.gz -C /usr/src/; \
54 | rm wordpress.tar.gz; \
55 | chown -R www-data:www-data /usr/src/wordpress
56 |
57 | COPY docker-entrypoint.sh /usr/local/bin/
58 | RUN chmod u+x /usr/local/bin/docker-entrypoint.sh
59 |
60 | ENTRYPOINT ["docker-entrypoint.sh"]
61 | CMD ["apache2-foreground"]
62 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/app/docker-entrypoint.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | set -euo pipefail
3 |
4 | # usage: file_env VAR [DEFAULT]
5 | # ie: file_env 'XYZ_DB_PASSWORD' 'example'
6 | # (will allow for "$XYZ_DB_PASSWORD_FILE" to fill in the value of
7 | # "$XYZ_DB_PASSWORD" from a file, especially for Docker's secrets feature)
8 | file_env() {
9 | local var="$1"
10 | local fileVar="${var}_FILE"
11 | local def="${2:-}"
12 | if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
13 | echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
14 | exit 1
15 | fi
16 | local val="$def"
17 | if [ "${!var:-}" ]; then
18 | val="${!var}"
19 | elif [ "${!fileVar:-}" ]; then
20 | val="$(< "${!fileVar}")"
21 | fi
22 | export "$var"="$val"
23 | unset "$fileVar"
24 | }
25 |
26 | if [[ "$1" == apache2* ]] || [ "$1" == php-fpm ]; then
27 | if [ "$(id -u)" = '0' ]; then
28 | case "$1" in
29 | apache2*)
30 | user="${APACHE_RUN_USER:-www-data}"
31 | group="${APACHE_RUN_GROUP:-www-data}"
32 | ;;
33 | *) # php-fpm
34 | user='www-data'
35 | group='www-data'
36 | ;;
37 | esac
38 | else
39 | user="$(id -u)"
40 | group="$(id -g)"
41 | fi
42 |
43 | if ! [ -e index.php -a -e wp-includes/version.php ]; then
44 | echo >&2 "WordPress not found in $PWD - copying now..."
45 | if [ "$(ls -A)" ]; then
46 | echo >&2 "WARNING: $PWD is not empty - press Ctrl+C now if this is an error!"
47 | ( set -x; ls -A; sleep 10 )
48 | fi
49 | tar --create \
50 | --file - \
51 | --one-file-system \
52 | --directory /usr/src/wordpress \
53 | --owner "$user" --group "$group" \
54 | . | tar --extract --file -
55 | echo >&2 "Complete! WordPress has been successfully copied to $PWD"
56 | if [ ! -e .htaccess ]; then
57 | # NOTE: The "Indexes" option is disabled in the php:apache base image
58 | cat > .htaccess <<-'EOF'
59 | # BEGIN WordPress
60 |
61 | RewriteEngine On
62 | RewriteBase /
63 | RewriteRule ^index\.php$ - [L]
64 | RewriteCond %{REQUEST_FILENAME} !-f
65 | RewriteCond %{REQUEST_FILENAME} !-d
66 | RewriteRule . /index.php [L]
67 |
68 | # END WordPress
69 | EOF
70 | chown "$user:$group" .htaccess
71 | fi
72 | fi
73 |
74 | # TODO handle WordPress upgrades magically in the same way, but only if wp-includes/version.php's $wp_version is less than /usr/src/wordpress/wp-includes/version.php's $wp_version
75 |
76 | # allow any of these "Authentication Unique Keys and Salts." to be specified via
77 | # environment variables with a "WORDPRESS_" prefix (ie, "WORDPRESS_AUTH_KEY")
78 | uniqueEnvs=(
79 | AUTH_KEY
80 | SECURE_AUTH_KEY
81 | LOGGED_IN_KEY
82 | NONCE_KEY
83 | AUTH_SALT
84 | SECURE_AUTH_SALT
85 | LOGGED_IN_SALT
86 | NONCE_SALT
87 | )
88 | envs=(
89 | WORDPRESS_DB_HOST
90 | WORDPRESS_DB_USER
91 | WORDPRESS_DB_PASSWORD
92 | WORDPRESS_DB_NAME
93 | "${uniqueEnvs[@]/#/WORDPRESS_}"
94 | WORDPRESS_TABLE_PREFIX
95 | WORDPRESS_DEBUG
96 | )
97 | haveConfig=
98 | for e in "${envs[@]}"; do
99 | file_env "$e"
100 | if [ -z "$haveConfig" ] && [ -n "${!e}" ]; then
101 | haveConfig=1
102 | fi
103 | done
104 |
105 | # linking backwards-compatibility
106 | if [ -n "${!MYSQL_ENV_MYSQL_*}" ]; then
107 | haveConfig=1
108 | # host defaults to "mysql" below if unspecified
109 | : "${WORDPRESS_DB_USER:=${MYSQL_ENV_MYSQL_USER:-root}}"
110 | if [ "$WORDPRESS_DB_USER" = 'root' ]; then
111 | : "${WORDPRESS_DB_PASSWORD:=${MYSQL_ENV_MYSQL_ROOT_PASSWORD:-}}"
112 | else
113 | : "${WORDPRESS_DB_PASSWORD:=${MYSQL_ENV_MYSQL_PASSWORD:-}}"
114 | fi
115 | : "${WORDPRESS_DB_NAME:=${MYSQL_ENV_MYSQL_DATABASE:-}}"
116 | fi
117 |
118 | # only touch "wp-config.php" if we have environment-supplied configuration values
119 | if [ "$haveConfig" ]; then
120 | : "${WORDPRESS_DB_HOST:=mysql}"
121 | : "${WORDPRESS_DB_USER:=root}"
122 | : "${WORDPRESS_DB_PASSWORD:=}"
123 | : "${WORDPRESS_DB_NAME:=wordpress}"
124 |
125 | # version 4.4.1 decided to switch to windows line endings, that breaks our seds and awks
126 | # https://github.com/docker-library/wordpress/issues/116
127 | # https://github.com/WordPress/WordPress/commit/1acedc542fba2482bab88ec70d4bea4b997a92e4
128 | sed -ri -e 's/\r$//' wp-config*
129 |
130 | if [ ! -e wp-config.php ]; then
131 | awk '/^\/\*.*stop editing.*\*\/$/ && c == 0 { c = 1; system("cat") } { print }' wp-config-sample.php > wp-config.php <<'EOPHP'
132 | // If we're behind a proxy server and using HTTPS, we need to alert Wordpress of that fact
133 | // see also http://codex.wordpress.org/Administration_Over_SSL#Using_a_Reverse_Proxy
134 | if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
135 | $_SERVER['HTTPS'] = 'on';
136 | }
137 |
138 | EOPHP
139 | chown "$user:$group" wp-config.php
140 | fi
141 |
142 | # see http://stackoverflow.com/a/2705678/433558
143 | sed_escape_lhs() {
144 | echo "$@" | sed -e 's/[]\/$*.^|[]/\\&/g'
145 | }
146 | sed_escape_rhs() {
147 | echo "$@" | sed -e 's/[\/&]/\\&/g'
148 | }
149 | php_escape() {
150 | local escaped="$(php -r 'var_export(('"$2"') $argv[1]);' -- "$1")"
151 | if [ "$2" = 'string' ] && [ "${escaped:0:1}" = "'" ]; then
152 | escaped="${escaped//$'\n'/"' + \"\\n\" + '"}"
153 | fi
154 | echo "$escaped"
155 | }
156 | set_config() {
157 | key="$1"
158 | value="$2"
159 | var_type="${3:-string}"
160 | start="(['\"])$(sed_escape_lhs "$key")\2\s*,"
161 | end="\);"
162 | if [ "${key:0:1}" = '$' ]; then
163 | start="^(\s*)$(sed_escape_lhs "$key")\s*="
164 | end=";"
165 | fi
166 | sed -ri -e "s/($start\s*).*($end)$/\1$(sed_escape_rhs "$(php_escape "$value" "$var_type")")\3/" wp-config.php
167 | }
168 |
169 | set_config 'DB_HOST' "$WORDPRESS_DB_HOST"
170 | set_config 'DB_USER' "$WORDPRESS_DB_USER"
171 | set_config 'DB_PASSWORD' "$WORDPRESS_DB_PASSWORD"
172 | set_config 'DB_NAME' "$WORDPRESS_DB_NAME"
173 |
174 | for unique in "${uniqueEnvs[@]}"; do
175 | uniqVar="WORDPRESS_$unique"
176 | if [ -n "${!uniqVar}" ]; then
177 | set_config "$unique" "${!uniqVar}"
178 | else
179 | # if not specified, let's generate a random value
180 | currentVal="$(sed -rn -e "s/define\((([\'\"])$unique\2\s*,\s*)(['\"])(.*)\3\);/\4/p" wp-config.php)"
181 | if [ "$currentVal" = 'put your unique phrase here' ]; then
182 | set_config "$unique" "$(head -c1m /dev/urandom | sha1sum | cut -d' ' -f1)"
183 | fi
184 | fi
185 | done
186 |
187 | if [ "$WORDPRESS_TABLE_PREFIX" ]; then
188 | set_config '$table_prefix' "$WORDPRESS_TABLE_PREFIX"
189 | fi
190 |
191 | if [ "$WORDPRESS_DEBUG" ]; then
192 | set_config 'WP_DEBUG' 1 boolean
193 | fi
194 |
195 | TERM=dumb php -- <<'EOPHP'
196 | connect_error) {
219 | fwrite($stderr, "\n" . 'MySQL Connection Error: (' . $mysql->connect_errno . ') ' . $mysql->connect_error . "\n");
220 | --$maxTries;
221 | if ($maxTries <= 0) {
222 | exit(1);
223 | }
224 | sleep(3);
225 | }
226 | } while ($mysql->connect_error);
227 |
228 | if (!$mysql->query('CREATE DATABASE IF NOT EXISTS `' . $mysql->real_escape_string($dbName) . '`')) {
229 | fwrite($stderr, "\n" . 'MySQL "CREATE DATABASE" Error: ' . $mysql->error . "\n");
230 | $mysql->close();
231 | exit(1);
232 | }
233 |
234 | $mysql->close();
235 | EOPHP
236 | fi
237 |
238 | # now that we're definitely done writing configuration, let's clear out the relevant envrionment variables (so that stray "phpinfo()" calls don't leak secrets from our code)
239 | for e in "${envs[@]}"; do
240 | unset "$e"
241 | done
242 | fi
243 |
244 | exec "$@"
245 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/00-wordpress-mysql-pv.yml:
--------------------------------------------------------------------------------
1 | # Create PersistentVolume
2 | # change the ip of NFS server
3 | apiVersion: v1
4 | kind: PersistentVolume
5 | metadata:
6 | name: wordpress-persistent-storage
7 | labels:
8 | app: wordpress
9 | tier: frontend
10 | spec:
11 | capacity:
12 | storage: 1Gi
13 | accessModes:
14 | - ReadWriteMany
15 | nfs:
16 | server: nfs01
17 | # Exported path of your NFS server
18 | path: "/html"
19 |
20 | ---
21 | apiVersion: v1
22 | kind: PersistentVolume
23 | metadata:
24 | name: mysql-persistent-storage
25 | labels:
26 | app: wordpress
27 | tier: mysql
28 | spec:
29 | capacity:
30 | storage: 1Gi
31 | accessModes:
32 | - ReadWriteMany
33 | nfs:
34 | server: nfs01
35 | # Exported path of your NFS server
36 | path: "/mysql"
37 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/01-mysql-pvc.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: PersistentVolumeClaim
3 | metadata:
4 | name: mysql-persistent-storage
5 | labels:
6 | app: wordpress
7 | spec:
8 | accessModes:
9 | - ReadWriteMany
10 | resources:
11 | requests:
12 | storage: 1Gi
13 | selector:
14 | matchLabels:
15 | tier: "mysql"
16 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/02-wordpress-pvc.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: PersistentVolumeClaim
3 | metadata:
4 | name: wordpress-persistent-storage
5 | labels:
6 | app: wordpress
7 | spec:
8 | accessModes:
9 | - ReadWriteMany
10 | resources:
11 | requests:
12 | storage: 1Gi
13 | selector:
14 | matchLabels:
15 | tier: "frontend"
16 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/03-secret.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Secret
3 | metadata:
4 | name: mysql-pass
5 | type: Opaque
6 | data:
7 | password: YWRtaW4=
8 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/04-mysql-deploy.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | name: wordpress-mysql
5 | labels:
6 | app: wordpress
7 | spec:
8 | ports:
9 | - port: 3306
10 | selector:
11 | app: wordpress
12 | tier: mysql
13 | clusterIP: None
14 | ---
15 | apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta2
16 | kind: Deployment
17 | metadata:
18 | name: mysql
19 | labels:
20 | app: wordpress
21 | spec:
22 | selector:
23 | matchLabels:
24 | app: wordpress
25 | tier: mysql
26 | strategy:
27 | type: Recreate
28 | template:
29 | metadata:
30 | labels:
31 | app: wordpress
32 | tier: mysql
33 | spec:
34 | containers:
35 | - image: mysql:5.7
36 | name: mysql
37 | env:
38 | - name: MYSQL_ROOT_PASSWORD
39 | valueFrom:
40 | secretKeyRef:
41 | name: mysql-pass
42 | key: password
43 | ports:
44 | - containerPort: 3306
45 | name: mysql
46 | volumeMounts:
47 | - name: mysql-persistent-storage
48 | mountPath: "/var/lib/mysql"
49 | volumes:
50 | - name: mysql-persistent-storage
51 | persistentVolumeClaim:
52 | claimName: mysql-persistent-storage
53 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/k8s/05-wordpress-deploy.yml:
--------------------------------------------------------------------------------
1 | # create a service for wordpress
2 | apiVersion: v1
3 | kind: Service
4 | metadata:
5 | name: wordpress
6 | labels:
7 | app: wordpress
8 | spec:
9 | ports:
10 | - port: 80
11 | nodePort: 31004
12 | selector:
13 | app: wordpress
14 | tier: frontend
15 | type: NodePort
16 |
17 |
18 |
19 | ---
20 | apiVersion: apps/v1beta2 # for versions before 1.9.0 use apps/v1beta2
21 | kind: Deployment
22 | metadata:
23 | name: wordpress
24 | labels:
25 | app: wordpress
26 | spec:
27 | replicas: 2
28 | selector:
29 | matchLabels:
30 | app: wordpress
31 | tier: frontend
32 | strategy:
33 | type: Recreate
34 | template:
35 | metadata:
36 | labels:
37 | app: wordpress
38 | tier: frontend
39 | spec:
40 | containers:
41 | - image: wordpress
42 | name: wordpress
43 | env:
44 | - name: WORDPRESS_DB_HOST
45 | value: wordpress-mysql
46 | - name: WORDPRESS_DB_PASSWORD
47 | valueFrom:
48 | secretKeyRef:
49 | name: mysql-pass
50 | key: password
51 | ports:
52 | - containerPort: 80
53 | name: wordpress
54 | volumeMounts:
55 | - name: wordpress-persistent-storage
56 | mountPath: "/var/www/html"
57 | volumes:
58 | - name: wordpress-persistent-storage
59 | persistentVolumeClaim:
60 | claimName: wordpress-persistent-storage
61 |
--------------------------------------------------------------------------------
/old-challenges/challenge-1-wordpress/nfs/nfs.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | ## Info
4 | echo -n "Installing all updates, this may take a few minutes ......";
5 | apt-get update -y ;
6 |
7 | ## Install NFS
8 | echo -n "Installing NFS ......";
9 | apt-get install nfs-kernel-server -y
10 |
11 |
12 | ## NFS Folders
13 | echo -n "Creating/Adjusting Folder /mysql & /html and editing /etc/exports ......";
14 |
15 | mkdir /{mysql,html} >/dev/null 2>&1
16 | chmod -R 755 /{mysql,html}
17 | chown nobody:nogroup {/mysql,/html}
18 |
19 |
20 | echo "/mysql *(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
21 | echo "/html *(rw,sync,no_root_squash,no_all_squash)" >> /etc/exports
21 |
22 | ## Restart Services
23 | echo -n "Restarting Services ......";
24 | service nfs-kernel-server restart
25 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM node:carbon
2 |
3 | WORKDIR /usr/src/app
4 |
5 | # Install app dependencies
6 | COPY package.json /usr/src/app/
7 | RUN npm install
8 | # Copy app
9 | COPY . /usr/src/app
10 |
11 |
12 | # Expose for api
13 | EXPOSE 3000
14 |
15 | CMD [ "node", "app/server.js"]
16 |
17 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/Jenkinsfiles/nodejs/Jenkinsfile:
--------------------------------------------------------------------------------
1 | node{
2 | def gitUrl = "https://github.com/kodekloudhub/kubernetes-challenges.git"
3 | def ImageName = "mohamedayman/nodejs-demo"
4 | def gitCred = "git"
5 | def dockerhubCred = "dockerhub"
6 |
7 | try{
8 | stage('Checkout'){
9 | git credentialsId: "$gitCred", url: "$gitUrl"
10 |         // tag image with the commit id
11 | sh "git rev-parse --short HEAD > .git/commit-id"
12 | imageTag = readFile('.git/commit-id').trim()
13 |
14 | }
15 |
16 | stage('Docker Build, Push'){
17 | withDockerRegistry([credentialsId: "${dockerhubCred}", url: 'https://index.docker.io/v1/']) {
18 | sh "docker build -t ${ImageName}:${imageTag} ."
19 | sh "docker push ${ImageName}:${imageTag}"
20 | }
21 |
22 | }
23 |
24 | stage('Deploy nodejs App on K8s'){
25 | sh script: """
26 | set +e
27 | helm install nodejs --set image.repository=${ImageName} --set image.tag=${imageTag} ./nodejs-k8s
28 | set -e
29 | """
30 | // update to New version
31 | sh "helm upgrade --wait --recreate-pods --set image.repository=${ImageName} --set image.tag=${imageTag} nodejs ./nodejs-k8s"
32 |
33 |
34 | }
35 | } catch (err) {
36 | currentBuild.result = 'FAILURE'
37 | }
38 | }
39 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes
2 |
3 | In this lab, we will achieve the CI/CD for a "Hello Kubernetes" NodeJS App using
4 |
5 | - Jenkins
6 | - Docker
7 | - DockerHub
8 | - Kubernetes
9 | - Helm package Manager
10 |
11 | 
12 |
13 |
14 |
15 | ### Prepare Jenkins VM
16 |
17 | Install Java
18 |
19 | $ sudo apt update -y
20 | $ sudo apt upgrade -y
21 | $ sudo apt-get install default-jre -y
22 |
23 | Install Jenkins
24 |
25 | $ wget -q -O - https://pkg.jenkins.io/debian-stable/jenkins.io.key | sudo apt-key add -
26 |     $ echo "deb https://pkg.jenkins.io/debian-stable binary/" | sudo tee -a /etc/apt/sources.list
27 | $ sudo apt-get update
28 | $ sudo apt-get install jenkins -y
29 |
30 | Start Jenkins and enable it on next boot
31 |
32 |     $ sudo systemctl start jenkins
33 |     $ sudo systemctl enable jenkins
34 |     $ sudo cat /var/lib/jenkins/secrets/initialAdminPassword
35 |
36 | On your browser Jenkins-IP:8080, paste the previous password and select "Install suggested plugins"
37 |
38 |
39 | Installing Docker
40 |
41 | $ curl -fsSL https://get.docker.com -o get-docker.sh
42 | $ sudo sh get-docker.sh
43 |
44 | Add the jenkins user to the docker group
45 |
46 | $ sudo usermod -aG docker jenkins
47 |
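Group membership changes only apply to new sessions, so the already-running Jenkins service will not see the docker group until it is restarted. A quick sanity check (a sketch, run on the Jenkins VM after the steps above):

```shell
# Restart Jenkins so the new docker group membership takes effect
sudo systemctl restart jenkins

# Verify the jenkins user can reach the Docker daemon
sudo -u jenkins docker version
```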
48 | Installing the helm client
49 |
50 | - Download your desired version from https://github.com/helm/helm/releases
51 |
52 | - Unpack it (tar -zxvf helm-v2.11.0-linux-amd64.tar.gz)
53 |
54 | - Find the helm binary in the unpacked directory and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)
55 |
56 | $ wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
57 |
58 | $ tar -xzvf helm-v2.11.0-linux-amd64.tar.gz
59 |
60 | $ sudo mv linux-amd64/helm /usr/local/bin/
61 |
62 | $ helm version
63 |
64 | Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
65 |
66 |
67 |
68 |
69 |
70 |
71 | ### Creating a pipeline
72 |
73 | Create GitHub and DockerHub credentials with the IDs "git" and "dockerhub"
74 |
75 | Go to jenkins-IP:8080/credentials/store/system/domain/ to add your Git credentials under the matching ID
76 |
77 | 
78 |
79 | Create a "New Item" of type "Pipeline", then copy the content of "Jenkinsfiles/nodejs/Jenkinsfile" into the Pipeline script and replace
80 | - the 2nd line with your repo
81 | - the 3rd line with dockerhub-username/repo-name
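The pipeline's deploy stage wraps `helm install` in `set +e` so that re-runs (where the release already exists) do not fail the build, then performs the actual rollout with `helm upgrade`. On Helm versions that support it, the two steps collapse into one idempotent command. A sketch, assuming the same release and chart path; `<dockerhub-user>/<repo>` and `<git-sha>` stand in for the pipeline's ImageName and imageTag variables:

```shell
# Install the "nodejs" release if it does not exist, upgrade it otherwise
helm upgrade --install --wait nodejs ./nodejs-k8s \
  --set image.repository=<dockerhub-user>/<repo> \
  --set image.tag=<git-sha>
```

Note that `--recreate-pods`, used in the Jenkinsfile, is a Helm 2-era flag that was deprecated in Helm 3.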
82 | ### Add Changes to the Code
83 |
84 | - In the repository, navigate to "app/routes/root.js" and change the "background-color" to red
85 |
86 | - Run the Jenkins pipeline
87 |
88 | - Open IP:30333 in your browser
89 |
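Once the pipeline succeeds, the rollout can also be verified from the command line. A sketch, assuming kubectl is configured for the cluster and `<node-ip>` is a node's address:

```shell
# The chart creates a ReplicationController "node-deploy" and a NodePort service "node-svc"
kubectl get rc node-deploy
kubectl get svc node-svc

# Smoke-test the app through the NodePort
curl http://<node-ip>:30333
```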
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/app/routes/root.js:
--------------------------------------------------------------------------------
1 | module.exports = function(req, res, next) {
2 | res.contentType = "text/html";
3 |   res.end('Hello Kubernetes!');
4 | next();
5 | };
6 |
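The handler above can be unit-tested without restify or a live socket by stubbing `res` and `next` (package.json already lists mocha, chai, and sinon as devDependencies). A minimal sketch; the inline `rootHandler` is a hypothetical copy of this module's export, and a real test would `require` the file instead:

```javascript
// Hypothetical inline copy of the export in app/routes/root.js.
const rootHandler = function (req, res, next) {
  res.contentType = "text/html";
  res.end("Hello Kubernetes!");
  next();
};

// Stub res and next to observe the handler's effects directly.
let body;
let nextCalled = false;
const res = { end: (s) => { body = s; } };

rootHandler({}, res, () => { nextCalled = true; });

console.log(body);            // Hello Kubernetes!
console.log(res.contentType); // text/html
console.log(nextCalled);      // true
```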
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/app/server.js:
--------------------------------------------------------------------------------
1 | var restify = require("restify");
2 | var server = restify.createServer();
3 | var pkg = require("../package.json");
4 | var rootResponder = require("./routes/root");
5 |
6 | server.get("/", rootResponder);
7 |
8 | server.listen(3000, function() {
9 | console.log("restifyjs version %s running on port 3000", pkg.version);
10 | setInterval(function() {
11 | console.log("restifyjs version %s running on port 3000", pkg.version);
12 | }, 10000);
13 | });
14 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/img/01-weekly-jenkins.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kodekloudhub/kubernetes-challenges/2b57a328f7c796a2ca96f3df7d585ec8355c8afb/old-challenges/challenge-2-CI-CD/img/01-weekly-jenkins.png
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/img/Arch.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/kodekloudhub/kubernetes-challenges/2b57a328f7c796a2ca96f3df7d585ec8355c8afb/old-challenges/challenge-2-CI-CD/img/Arch.jpg
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/.helmignore:
--------------------------------------------------------------------------------
1 | # Patterns to ignore when building packages.
2 | # This supports shell glob matching, relative path matching, and
3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store
5 | # Common VCS dirs
6 | .git/
7 | .gitignore
8 | .bzr/
9 | .bzrignore
10 | .hg/
11 | .hgignore
12 | .svn/
13 | # Common backup files
14 | *.swp
15 | *.bak
16 | *.tmp
17 | *~
18 | # Various IDEs
19 | .project
20 | .idea/
21 | *.tmproj
22 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/Chart.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | appVersion: "1.0"
3 | description: A Helm chart for Kubernetes
4 | name: nodejs-k8s
5 | version: 0.1.0
6 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/templates/_helpers.tpl:
--------------------------------------------------------------------------------
1 | {{/* vim: set filetype=mustache: */}}
2 | {{/*
3 | Expand the name of the chart.
4 | */}}
5 | {{- define "nodejs-k8s.name" -}}
6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
7 | {{- end -}}
8 |
9 | {{/*
10 | Create a default fully qualified app name.
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
12 | If release name contains chart name it will be used as a full name.
13 | */}}
14 | {{- define "nodejs-k8s.fullname" -}}
15 | {{- if .Values.fullnameOverride -}}
16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
17 | {{- else -}}
18 | {{- $name := default .Chart.Name .Values.nameOverride -}}
19 | {{- if contains $name .Release.Name -}}
20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
21 | {{- else -}}
22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
23 | {{- end -}}
24 | {{- end -}}
25 | {{- end -}}
26 |
27 | {{/*
28 | Create chart name and version as used by the chart label.
29 | */}}
30 | {{- define "nodejs-k8s.chart" -}}
31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
32 | {{- end -}}
33 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/templates/deployment.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ReplicationController
3 | metadata:
4 | name: node-deploy
5 | spec:
6 | replicas: {{ .Values.replicaCount }}
7 | selector:
8 | app: nodejs
9 | template:
10 | metadata:
11 | name: nodejs
12 | labels:
13 | app: nodejs
14 | spec:
15 | containers:
16 | - name: nodejs
17 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
18 | ports:
19 | - name: nodejsport
20 | containerPort: {{ .Values.service.internalPort }}
21 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/templates/service.yml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | name: node-svc
5 | spec:
6 | ports:
7 | - port: {{ .Values.service.internalPort }}
8 | nodePort: {{ .Values.service.externallPort }}
9 | selector:
10 | app: nodejs
11 | type: {{ .Values.service.type }}
12 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/nodejs-k8s/values.yaml:
--------------------------------------------------------------------------------
1 | # Default values for nodejs-k8s.
2 | # This is a YAML-formatted file.
3 | # Declare variables to be passed into your templates.
4 |
5 | replicaCount: 2
6 |
7 | image:
8 | repository: NULL
9 | tag: NULL
10 |
11 | service:
12 | type: NodePort
13 | internalPort: 3000
14 | externallPort: 30333
15 |
--------------------------------------------------------------------------------
/old-challenges/challenge-2-CI-CD/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "ctr-test-restifyjs",
3 | "version": "1.0.4",
4 | "description": "",
5 | "main": "index.js",
6 | "scripts": {
7 | "start": "node app/server.js",
8 | "test": "node_modules/.bin/mocha app/test"
9 | },
10 | "author": "",
11 | "license": "ISC",
12 | "dependencies": {
13 | "restify": "^6.3.4"
14 | },
15 | "devDependencies": {
16 | "chai": "^4.1.2",
17 | "mocha": "^5.0.1",
18 | "sinon": "^4.4.2"
19 | }
20 | }
--------------------------------------------------------------------------------