option which allows a user to **override** the default and specify their own value.
166 |
167 | You must specify an additional flag --force along with --grace-period=0 in order to perform force deletions, as follows :
168 |
169 | k delete pods -n melonspace --all --grace-period=0 --force
170 |
171 |
172 |
173 | ### Play 6. Playing with JSONPath
174 |
175 | To play with JSONPath, first make sure the output of your command is in JSON format, for example like the following :
176 |
177 | kubectl get pods -o json
178 |
179 | Then you can start to query the output with a JSONPath expression.
180 |
181 | A JSONPath query starts with the symbol '$', which refers to the root item; the kubectl utility adds it for you automatically. Just take a simple sample output :
182 |
183 | ```json
184 | {
185 |     "apiVersion": "v1",
186 |     "kind": "Pod",
187 |     "metadata": {
188 |         "name": "nginx-pod",
189 |         "namespace": "default"
190 |     },
191 |     "spec": {
192 |         "containers": [
193 |             {
194 |                 "image": "nginx:alpine",
195 |                 "name": "nginx"
196 |             }
197 |         ],
198 |         "nodeName": "node01"
199 |     }
200 | }
201 | ```
202 |
203 | If you want the image the pod is using, you can query it with :
204 |
205 | ```shell
206 | $.spec.containers[0].image
207 | ```
208 |
209 | This query can be tested on a jsonpath emulator such as http://jsonpath.com/
210 |
211 |
212 | The translation using the kubectl utility is :
213 |
214 | kubectl get pods -o=jsonpath='{.spec.containers[0].image}'
215 |
216 |
217 | We can also use a single command to get multiple outputs, for example :
218 |
219 | kubectl get pods -o=jsonpath='{.spec.containers[0].image}{.spec.containers[0].name}'
220 |
221 |
222 | There are also some predefined formatting options :
223 |
224 | - New line : {"\n"}
225 | - Tab : {"\t"}
226 |
227 |
228 | An example could also be like the following :
229 |
230 | kubectl get pods -o=jsonpath='{.spec.containers[0].image}{"\n"}{.spec.containers[0].name}'
231 |
232 |
233 | With kubectl, you can also use loops and range, which is pretty cool ( this example is borrowed from elsewhere, thanks ) :
234 |
235 | ```shell
236 | kubectl get nodes -o=jsonpath='{range .items[*]}{.metadata.name} {"\t"} {.status.capacity.cpu} {"\n"}{end}'
237 | ```
238 |
239 | What about custom columns ? We can use them like the following :
240 |
241 | ```shell
242 | kubectl get nodes -o=custom-columns=meloncolumn:jsonpath
243 | ```
244 |
245 | To reproduce the previous command, you can use the following :
246 |
247 | ```shell
248 | kubectl get nodes -o=custom-columns=NODE:.metadata.name,CPU:.status.capacity.cpu
249 | ```
250 |
251 | To learn more about JSONPath with kubectl, please check this page :
252 |
253 | https://kubernetes.io/docs/reference/kubectl/jsonpath/
254 |
255 |
256 | Finally, here are some queries that might not be used that often :
257 |
258 | If the document starts with a list, you can use [*] to represent all of its items :
259 |
260 | $[*].metadata.name
261 |
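The kubectl equivalent of this, since `kubectl get` returns the list of objects under .items ( a quick example ) :

```shell
kubectl get pods -o=jsonpath='{.items[*].metadata.name}'
```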
262 |
263 | If you want only some items in the list, here are quick ways of querying it :
264 |
265 | - The beginning 3 items, for example ( the end index is exclusive ) :
266 |
267 | ```shell
268 | $[0:3]
269 | ```
270 |
271 | - If you want to skip some items, you can also specify a step :
272 |
273 | ```shell
274 | $[0:8:2]
275 | ```
276 |
277 | which stands for : 0 is the start index, 8 is the end index ( exclusive ), and 2 is the step
278 |
279 | - Get the last item, by starting at the last item and going all the way to the end :
280 |
281 | ```shell
282 | $[-1:]
283 | ```
290 |
291 | - Get the last 3 items :
292 |
293 | ```shell
294 | $[-3:]
295 | ```
296 |
297 | In the case where the document starts with a dictionary ( you might be more comfortable calling it an object if you're a JS developer ) followed by a list, you can also do the following :
298 |
299 | $.users[*].name
300 |
301 | For a conditional query :
302 |
303 | $.status.containerStatuses[?(@.name == 'redis-container')].restartCount
304 |
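A hedged kubectl translation of the conditional query above ( the pod name melon-pod is an assumption for illustration ) :

```shell
kubectl get pod melon-pod -o=jsonpath='{.status.containerStatuses[?(@.name=="redis-container")].restartCount}'
```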
305 |
306 |
--------------------------------------------------------------------------------
/01 - Pod design.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 1 : Pod design
2 |
3 | **Kubelet** is like the captain of your ship : it registers the node, creates pods on the worker node, and monitors nodes and pods on a regular basis.
4 |
5 | There are two ways for the kubelet to create pods :
6 |
7 | - From static pod path ( which is known as the static pods )
8 | - Through an HTTP API endpoint which is kube-apiserver ( the pods from the api-server )
9 |
10 | ## Play 1 : Static Pod
11 |
12 | The kubelet can manage a node independently and can create pods ( but it needs the API server to provide pod details ), so the question here is : how do we provide a pod definition file to the kubelet without a kube-apiserver ?
13 |
14 | You can configure the kubelet to read pod definition files from a directory on the server designated to store pod definitions, like the following :
15 |
16 | /etc/kubernetes/manifests
17 |
18 | The kubelet checks this directory periodically, reads these files, and creates pods on the host. Not only does it create the pod, it also ensures that the pod stays alive : if the application crashes, the kubelet attempts to restart it. If you make a change to any of the files within this directory, the kubelet recreates the pod for those changes to take effect. If you remove a file from this directory, the pod is deleted automatically. These pods, created by the kubelet on its own without intervention from the API server or the rest of the Kubernetes cluster components, are known as **static pods**. Remember this only applies to **Pods** : there are no static DaemonSets or ReplicaSets, as those need to interact with other parts of the Kubernetes cluster.
19 |
20 | This path can be configured as the following in **kubelet.service**:
21 |
22 | --pod-manifest-path=/etc/kubernetes/manifests
23 |
24 | Or using the **--config** option :
25 |
26 | --config=kubeconfig.yaml
27 |
28 | then in **kubeconfig.yaml** file :
29 |
30 | staticPodPath: /etc/kubernetes/manifests
31 |
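A hedged way to locate the configured static pod path on a node ( the paths below are the common defaults and may differ on your cluster ) :

```shell
# Find the kubelet config file from the running kubelet process
ps aux | grep kubelet | grep -o -- '--config=[^ ]*'

# Then read staticPodPath from that config file
grep -i staticPodPath /var/lib/kubelet/config.yaml
```
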
32 | Once the static pod has been created, you can use the **docker ps** command to review it. The reason we don't use the kubectl command is that we don't have the rest of the Kubernetes cluster here : the kubectl utility works with the kube-apiserver.
33 |
34 | In this case the kube-apiserver will still become aware of the static pod later on : after the kubelet creates the pod, it also creates a mirror object in the kube-apiserver. So if you use **kubectl get pods** ( the static pod's name is suffixed with the node name ), what you see is just a read-only mirror of the pod, which means you can view details of the pod but you cannot edit or delete it like usual pods ( you can only delete it by modifying the files in the node's manifests folder ).
35 |
36 | A use case for static pods is to deploy the control plane components themselves as pods on a node : start by installing the kubelet on all the master nodes, then create pod definition files that use the Docker images of the various control plane components such as the api-server, controller-manager, etcd etc. Place the definition files in the designated manifests folder, and the kubelet takes care of deploying the control plane components themselves as pods on the cluster. This way you don't have to download binaries or configure services, or worry about these services crashing : since they are static pods, they are automatically restarted by the kubelet if they crash. That is how the **kubeadm** tool sets up the cluster.
37 |
38 | You can use the following command to check the pods bootstrapped by kubeadm :
39 |
40 | kubectl get pods -n kube-system
41 |
42 |
43 |
44 | ## Play 2 : InitContainer
45 |
46 | An **initContainer** is configured in a pod like all other containers, except that it is specified inside an initContainers section, like the following :
47 |
48 | ```yaml
49 | apiVersion: v1
50 | kind: Pod
51 | metadata:
52 |   name: melon-pod
53 |   labels:
54 |     app: melonapp
55 | spec:
56 |   containers:
57 |   - name: melonapp-container
58 |     image: busybox:1.28
59 |     command: ['sh', '-c', 'echo The melonapp is running! && sleep 3600']
60 |   initContainers:
61 |   - name: init-melonservice
62 |     image: busybox:1.28
63 |     command: ['sh', '-c', 'until nslookup melonservice; do echo waiting for melonservice; sleep 2; done;']
64 |   - name: init-melondb
65 |     image: busybox:1.28
66 |     command: ['sh', '-c', 'until nslookup melondb; do echo waiting for melondb; sleep 2; done;']
67 | ```
68 |
69 | When a Pod is first created, the initContainer is run, and the process in the initContainer must run to completion before the real container hosting the application starts.
70 |
71 | You can configure multiple such initContainers as well, like we did for multi-container pods. In that case each init container is run one at a time, in sequential order.
72 |
73 | If any of the initContainers fail to complete, Kubernetes restarts the Pod repeatedly until the Init Container succeeds.
74 |
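To see whether the init containers have completed, you can watch the pod status and read the logs of a specific init container ( a brief sketch reusing the names from the manifest above ) :

```shell
# The STATUS column shows Init:0/2, Init:1/2 ... until both init containers succeed
kubectl get pod melon-pod -w

# -c selects a specific ( init ) container inside the pod
kubectl logs melon-pod -c init-melonservice
```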
75 |
76 | ## Play 3 : Pod Design - Multi-container Pods
77 |
78 | In general, it is good to have a one-to-one relationship between container and pod.
79 |
80 | Create a pod using an imperative command like the following :
81 |
82 | kubectl run --generator=run-pod/v1 nginx-pod --image=nginx:alpine
83 |
84 | ### Basics
85 |
86 | Multi-container pods are simply pods with more than one container that are working together and sort of forming a single unit. It is often a good idea to keep containers separate by keeping them in their own separate pods, but there are several cases where multi-container pods can be beneficial.
87 |
88 | ### How your containers interact with one another
89 |
90 | Containers in a pod can interact with one another in three ways :
91 | - Shared **Network**. It is as if the two containers were running on the same host : they can access each other simply using localhost. All listening ports are accessible to other containers in the pod even if they're not exposed outside the pod.
92 |
93 |
94 |
95 | - Shared **Storage Volumes**. We can mount the same volume to two different containers so that they can both interact with the same files : you could have one container outputting files to the volume and the other container reading them, or you could have both of them reading and writing files to that same volume.
96 |
97 |
98 |
99 | - Shared **Process Namespace**. Essentially, this allows the two containers to signal one another's processes. In order to implement it, you have to add an attribute to your pod spec called **shareProcessNamespace** and set it to **true**. Once that is set to true, your containers can interact directly with one another's processes using a shared process namespace ( see the sketch after this list ).
100 |
101 |
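A minimal sketch of enabling a shared process namespace ( not from the original notes; the pod name, images and commands are illustrative assumptions ) :

```shell
# Create a two-container pod with shareProcessNamespace enabled
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: melon-shared-pns-pod
spec:
  shareProcessNamespace: true
  containers:
  - name: main
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
  - name: sidecar
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
EOF

# From the sidecar you can now see ( and signal ) the main container's processes
kubectl exec melon-shared-pns-pod -c sidecar -- ps
```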
102 |
103 | ### Replication Controller & ReplicaSet
104 |
105 | A replication controller spans multiple nodes in the cluster. It helps us balance the load across multiple pods on different nodes, as well as scale our application when demand increases.
106 |
107 | A **ReplicaSet** is the new generation of replication controller and helps pods achieve higher availability : we can ensure we always have a defined number of replicas running at a time. The role of the ReplicaSet is to monitor the pods and, if any of them were to fail, deploy new ones. There could be hundreds of other pods in the cluster running different applications; this is where labelling our pods during creation comes in handy. We can provide these labels as a filter for the ReplicaSet : under the selector section we use the matchLabels filter and provide the same label we used while creating the pods.
108 |
109 |
110 | ```yaml
111 | apiVersion: apps/v1
112 | kind: ReplicaSet
113 | metadata:
114 |   name: frontend
115 |   labels:
116 |     app: melonapp-rs
117 | spec:
118 |   replicas: 3
119 |   selector:
120 |     matchLabels:
121 |       app: melonapp-rs
122 |   template:
123 |     metadata:
124 |       labels:
125 |         app: melonapp-rs
126 |     spec:
127 |       containers:
128 |       - name: nginx
129 |         image: nginx
130 | ```
131 |
132 | The **matchLabels** selector simply matches the labels specified under it to the labels on the pods.
133 |
134 | To check your ReplicaSet, use the following command :
135 |
136 | kubectl get replicaset
137 |
138 | Or
139 |
140 | kubectl get rs
141 |
142 | To update the number of replicas there are two ways :
143 |
144 | 1. Update **replicas: 3** to a specific number, for example **replicas: 6**
145 |
146 | 2. Use the following command :
147 |
148 | kubectl scale replicaset melon-rs --replicas=6
149 |
150 | Or scale using the spec file :
151 |
152 | kubectl scale --replicas=6 -f melon-rs-spec.yaml
153 |
154 | To delete the ReplicaSet, use the following command :
155 |
156 | kubectl delete replicaset melon-rs
157 |
158 | ### Daemon Sets
159 |
160 | Previously, we've talked about how, with the help of ReplicaSets and Deployments, we make sure multiple copies of our applications are made available across various worker nodes. A DaemonSet is like a ReplicaSet, but it runs one copy of your pod on each node in your cluster. Whenever a new node is added to the cluster, a replica of the pod is automatically added to that node, and when a node is removed the pod is automatically removed.
161 |
162 | The key thing that defines a DaemonSet is that it ensures one copy of the pod is always present on every node in the cluster ( and it is ignored by the kube-scheduler ).
163 |
164 | DaemonSets do not use the scheduler to deploy pods. When you need to run exactly one replica on each node, a DaemonSet is a perfect match. As nodes are removed from the cluster, those Pods are garbage collected.
165 |
166 | The main use case of DaemonSets is that you can deploy the following components to monitor your cluster better :
167 |
168 | - Monitoring agent
169 | - Logs collector
170 |
171 | Then you don't have to worry about adding or removing monitoring agents from these nodes when there are changes in your cluster. For example, kube-proxy can be deployed as a DaemonSet in your cluster; another example is networking, as networking solutions like Weave Net require an agent to be deployed on each node in the cluster.
172 |
173 |
174 | ```yaml
175 | apiVersion: apps/v1
176 | kind: DaemonSet
177 | metadata:
178 |   name: monitoring-daemon
179 | spec:
180 |   selector:
181 |     matchLabels:
182 |       name: monitoring-agent
183 |   template:
184 |     metadata:
185 |       labels:
186 |         name: monitoring-agent
187 |     spec:
188 |       containers:
189 |       - name: monitoring-agent
190 |         image: monitoring-agent
191 | ```
192 |
193 | To check that the DaemonSet has been created, use the following :
194 |
195 | kubectl get daemonsets
196 |
197 | We can also simplify the command like the following :
198 |
199 | kubectl get ds
200 |
201 | And we can check the details of the DaemonSet by using :
202 |
203 | kubectl describe daemonsets monitoring-daemon
204 |
205 |
206 | ### Stateful Set
207 |
208 | Check the stateful sets in Kubernetes :
209 |
210 | kubectl get sts
211 |
212 |
213 | ## Play 4 : Multi-container pod design pattern
214 |
215 | Three multi-container pod design patterns :
216 | - **sidecar** pattern uses a sidecar container that enhances the functionality of the main container. Example : a sidecar container that syncs files from a git repository to the file system of a web server container, checking every two minutes for new versions of these files. If the files have been updated, it pulls in the new files and pushes them into the file system of the main container, so they're automatically updated without even having to restart or redeploy that container.
217 |
218 | - **ambassador** pattern is all about capturing and translating network traffic. One example would be a container that listens on a custom port and forwards traffic to the main container on a hardcoded port. When deploying some legacy applications, you can solve that kind of problem simply by forwarding the traffic from one port to another using the ambassador pattern.
219 |
220 | - **adaptor** pattern is all about transforming the output from the main container in some way in order to adapt it to some other type of system. An example could be that the adaptor container formats or changes the output of the main container and then makes that output available outside of the pod itself. One of the common use cases is when you have some kind of log analyzer, something like Prometheus or Logstash : you can use the adaptor container to apply specific formatting to those logs and allow them to be pushed or pulled to some kind of external system.
221 |
222 |
223 | Multi-container example:
224 |
225 | ```yaml
226 | apiVersion: v1
227 | kind: Pod
228 | metadata:
229 |   name: melon-multicontainer-pod
230 |   labels:
231 |     app: melon-multiapp
232 | spec:
233 |   containers:
234 |   - name: nginx
235 |     image: nginx
236 |     ports:
237 |     - containerPort: 80
238 |   - name: busybox-sidecar
239 |     image: busybox
240 |     command: ['sh', '-c', 'while true; do sleep 3600; done;']
241 | ```
242 |
243 | After creating the pod, you can use the kubectl command below to check the status of the pod and see if it is up and running; you'll have an output similar to the following, where the pod shows Ready 2/2 :
244 |
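For instance ( a minimal check, using the pod name from the manifest above ) :

```shell
kubectl get pod melon-multicontainer-pod
```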
245 |
246 |
247 |
248 | ## Play 5 : Manage namespaces
249 |
250 | Default namespaces created by Kubernetes : kube-system, default and kube-public.
251 |
252 |
253 |
254 | Get all namespaces using the following :
255 |
256 | kubectl get namespaces
257 |
258 | Create a namespace by using the following :
259 |
260 | kubectl create ns melon-ns
261 |
262 |
263 | Or create a pod inside that namespace using a yaml definition :
264 |
265 | ```yaml
266 | apiVersion: v1
267 | kind: Pod
268 | metadata:
269 |   name: melon-ns-pod
270 |   namespace: melon-ns
271 |   labels:
272 |     app: melonapp
273 | spec:
274 |   containers:
275 |   - name: melonapp-container
276 |     image: busybox
277 |     command: ['sh', '-c', 'echo Salut K8S! && sleep 3600']
278 |
279 | ```
280 |
281 | Get the pods in a namespace :
282 |
283 | kubectl get pods -n melon-ns
284 |
285 |
286 | Check a pod in a namespace :
287 |
288 | kubectl describe pod melon-ns-pod -n melon-ns
289 |
290 |
291 | We have to use the namespace option if the pods are not in the default namespace, but what if we want to switch to the dev namespace permanently so that we don't have to specify the namespace option any more ? You can use the following to do this :
292 |
293 | kubectl config set-context $(kubectl config current-context) --namespace=dev
294 |
295 | You can then simply run the following command without the namespace option to list pods :
296 |
297 | kubectl get pods
298 |
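To double-check which namespace your current context now points to ( a quick verification, not from the original notes ) :

```shell
kubectl config view --minify | grep namespace:
```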
299 |
300 | ## Play 6 : Jobs and CronJobs
301 |
302 | Jobs can be used to reliably execute a workload until it completes. The job will create one or more pods. When the job is finished, the containers will exit and the pods will enter the **Completed** status. An example of using a job is when we want to run a particular workload once and make sure it runs to completion successfully.
303 |
304 | You can create a job through a YAML descriptor :
305 |
306 | ```yaml
307 | apiVersion: batch/v1
308 | kind: Job
309 | metadata:
310 |   name: melon-job
311 | spec:
312 |   template:
313 |     spec:
314 |       containers:
315 |       - name: melonapp-job
316 |         image: perl
317 |         command: ["perl", "-Mbignum=bpi", "-wle","print bpi(2000)"]
318 |       restartPolicy: Never
319 |   backoffLimit: 4
320 | ```
321 |
322 | If it fails up to 4 times ( the backoffLimit ), it is not going to continue retrying. All the job really does is create a pod behind the scenes, but instead of a normal pod that's constantly running, it's a pod that runs and, when it's complete, goes into the Completed status. This means the container is no longer running; the pod still exists, but its container has completed.
323 |
324 | You can run the following command to check the job status:
325 |
326 | kubectl get job
327 |
328 | When the job is still running you can see the status as the following
329 |
330 |
331 |
332 | When the job is finished you can see the job has been completed.
333 |
334 |
335 |
336 |
337 | CronJobs build upon the functionality of job by allowing you to execute jobs on a schedule ( based on a cron expression ).
338 |
339 | ```yaml
340 | apiVersion: batch/v1beta1
341 | kind: CronJob
342 | metadata:
343 |   name: hello
344 | spec:
345 |   schedule: "*/1 * * * *"
346 |   jobTemplate:
347 |     spec:
348 |       template:
349 |         spec:
350 |           containers:
351 |           - name: hello
352 |             image: busybox
353 |             args:
354 |             - /bin/sh
355 |             - -c
356 |             - date; echo Hello from the Kubernetes cluster
357 |           restartPolicy: OnFailure
358 | ```
359 |
360 | You can use the following command to check the cron job:
361 |
362 | kubectl get cronjob
363 |
364 | You'll have an output like the following :
365 |
366 |
367 |
368 | You can also check the logs of the job ( basically the same as for a pod ), by first checking the pod status :
369 |
370 | kubectl get pods | grep hello
371 |
372 | You'll have an output like the following :
373 |
374 |
375 |
376 | Then :
377 |
378 | kubectl logs hello-xxxx
379 |
380 | You'll see that the cron job has been executed :
381 |
382 |
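If you don't want to wait for the schedule, you can also trigger the CronJob manually by creating a Job from it ( a hedged example; the job name hello-manual is arbitrary ) :

```shell
kubectl create job hello-manual --from=cronjob/hello
```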
383 |
384 |
385 |
386 |
387 | ## Play 7 : Labels, Selectors, and Annotations
388 |
389 | **Labels** are key-value pairs attached to Kubernetes objects, we can list them in metadata.labels section of an object descriptor.
390 |
391 | **Selectors** are used for identifying and selecting a group of objects using their labels.
392 |
393 | Examples of **equality-based** selectors:
394 |
395 | kubectl get pods -l app=my-app
396 |
397 | kubectl get pods -l environment=production
398 |
399 | Example of **inequality** :
400 |
401 | kubectl get pods -l environment!=production
402 |
403 | Example of **set-based** selectors
404 |
405 | kubectl get pods -l 'environment in (production,development)'
406 |
407 | Example of chaining multiple selectors together using a **comma-delimited** list :
408 |
409 | kubectl get pods -l app=myapp,environment=production
410 |
411 | **Annotations** are similar to labels in that they can be used to store custom metadata about objects. However, they **cannot** be used to select or group objects in Kubernetes. We can attach annotations to objects using the metadata.annotations section of the object descriptor.
412 |
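For example, an annotation can also be attached imperatively ( a small sketch; the key and value below are arbitrary assumptions ) :

```shell
kubectl annotate pod nginx-pod owner=melon-team
kubectl describe pod nginx-pod | grep -i -A1 annotations
```
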
413 | ## Play 8 : Resource requirements
414 |
415 | Kubernetes allows us to specify the resource requirements of a container in the pod spec.
416 |
417 | **Resource Request** means the amount of resources that is necessary to run a container, and requests govern which worker node the containers will actually be scheduled on. When Kubernetes is getting ready to run a particular pod, it's going to choose a worker node based on the resource requests of that pod's containers, and it will use those values to ensure that it chooses a node that actually has enough resources available to run that pod. A pod will only be run on a node that has enough available resources to run the pod's containers.
418 |
419 | **Resource Limit** defines a maximum value for the resource usage of a container. If the container goes above that maximum value, it's likely to be killed or restarted by the Kubernetes cluster. Resource limits provide a way to put some constraints around how much resource your containers are allowed to use, and prevent certain containers from consuming a whole bunch of resources and running away with all the resources in your cluster, potentially causing issues for other containers and applications as well.
420 |
421 | ```yaml
422 | apiVersion: v1
423 | kind: Pod
424 | metadata:
425 |   name: melonapp-pod
426 | spec:
427 |   containers:
428 |   - name: melonapp-container
429 |     image: busybox
430 |     command: ['sh', '-c', 'echo stay tuned! && sleep 3600']
431 |     resources:
432 |       requests:
433 |         memory: "64Mi" # 64 megabytes
434 |         cpu: "250m" # 250m means 250 milliCPUs; 1m is 1/1000 of a CPU, so 250m is 0.25 CPU cores ( one quarter of a core )
435 |       limits:
436 |         memory: "128Mi"
437 |         cpu: "500m"
438 |
439 | ```
440 |
441 |
442 | By default the limit is 1 vCPU and 512Mi of memory; if you don't like the default limit you can change it by adding a limits section under resources. Remember that the limits and requests are set for each container within the pod.
443 |
444 | You can refer to the following diagram to gain a better view :
445 |
446 |
447 |
448 | So what happens when a pod tries to exceed resources beyond its specified limit ?
449 | - A container cannot use more CPU resources than its limit.
450 | - However, it can use more memory resources than its limit.
451 | So if a pod constantly tries to consume more memory than its limit, the pod will be terminated.
452 |
453 | **Resource quota** provides constraints that limit aggregate resource consumption per namespace.
454 |
455 | ```yaml
456 | apiVersion: v1
457 | kind: ResourceQuota
458 | metadata:
459 |   name: melon-pods
460 |   namespace: melon-ns
461 | spec:
462 |   hard:
463 |     cpu: "5"
464 |     memory: 10Gi
465 |     pods: "10"
466 | ```
467 |
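Once the quota is created, you can check how much of it is consumed ( a quick check using the names above ) :

```shell
kubectl get resourcequota melon-pods -n melon-ns
kubectl describe resourcequota melon-pods -n melon-ns
```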
468 |
469 |
470 | ## Play 9 : Manual Scheduling
471 |
472 | You can constrain a Pod to only be able to run on particular Node(s) , or to prefer to run on particular nodes. There are several ways to do this, and the recommended approaches all use label selectors to make the selection. Generally such constraints are unnecessary, as the scheduler will automatically do a reasonable placement (e.g. spread your pods across nodes, not place the pod on a node with insufficient free resources, etc.) but there are some circumstances where you may want more control on a node where a pod lands, e.g. to ensure that a pod ends up on a machine with an SSD attached to it, or to co-locate pods from two different services that communicate a lot into the same availability zone.
473 |
474 |
475 | ```yaml
476 | apiVersion: v1
477 | kind: Pod
478 | metadata:
479 |   name: nginx
480 | spec:
481 |   containers:
482 |   - name: nginx
483 |     image: nginx
484 |   nodeName: melonnode
485 | ```
486 |
487 | Ref : https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
488 |
489 | We can set the nodeName property in the pod specification before it is created, to bypass the scheduler and get the pod placed on a node directly; when such pods are created, they automatically land on the respective nodes. Note that this is also how DaemonSets used to place their pods until Kubernetes **v1.12**; from v1.12 onwards, a DaemonSet uses the default scheduler and **node affinity rules** to schedule pods on nodes.
490 |
491 |
492 |
--------------------------------------------------------------------------------
/02 - Rolling updates and rollbacks.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 2: Deploying an Application, Rolling Updates, and Rollback
2 |
3 | In this section we're going to talk a little bit about how to gradually roll out new versions using rolling updates and how to roll back in the event of a problem.
4 |
5 | Let's start with deploying an application in production. Say you have a number of web servers that need to be deployed, i.e. many instances. Whenever a newer version of the application build becomes available on the Docker registry, you would like to upgrade your Docker instances seamlessly. However, you do not want to upgrade all of them at once, as this may impact users accessing the application, so you might want to upgrade them one after the other. That kind of upgrade is known as a rolling update.
6 |
7 | ## Play 0 : Scenario playing with deployment
8 |
9 | Suppose one of the upgrades you performed resulted in an unexpected error and you're asked to undo the recent change : you would like to be able to roll back the changes that were recently carried out. Finally, say you would like to make multiple changes to your environment, such as upgrading the underlying web server versions as well as scaling your environment and modifying resource allocations. You do not want to apply each change immediately after the command is run; instead you would like to pause your environment, make the changes, and then resume so that all changes are rolled out together. All of these capabilities are available with the Kubernetes Deployment, which comes higher in the hierarchy.
10 |
11 | To wrap up, the Deployment provides us with the capability to upgrade the underlying instances seamlessly using rolling updates, undo changes, and pause and resume changes as required.
12 |
13 |
14 | ## Play 1 : Manage deployments
15 |
16 | **Deployments** provide a way to define a desired state for the replica pod.
17 |
18 | You can use a yaml template to define a deployment :
19 |
20 | ```yaml
21 | apiVersion: apps/v1
22 | kind: Deployment
23 | metadata:
24 |   name: melon-deploy
25 |   labels:
26 |     app: melonapp
27 | spec:
28 |   replicas: 3
29 |   selector:
30 |     matchLabels:
31 |       app: melonapp
32 |   template:
33 |     metadata:
34 |       labels:
35 |         app: melonapp
36 |     spec:
37 |       containers:
38 |       - name: nginx
39 |         image: nginx
40 |         ports:
41 |         - containerPort: 80
42 |
43 | ```
44 |
45 | The above yaml manifest means :
46 |
47 | - **spec.replicas** is the number of replica pods
48 | - **spec.template** is the template pod descriptor which defines the pods which will be created
49 | - **spec.selector** means the deployment will manage all pods whose labels match this selector
50 |
51 | Run Busybox image :
52 |
53 | kubectl run busybox --rm -it --image=busybox /bin/sh
54 |
55 | Run nginx image :
56 |
57 | kubectl run nginx --image=nginx --dry-run -o yaml > pod-sample.yaml
58 |
59 | Create a deployment using the following :
60 |
61 | kubectl create deployment kubeserve --image=nginx:1.7.8
62 |
63 | Scale a deployment using the following :
64 |
65 | kubectl scale deployment kubeserve --replicas=5
66 |
67 |
68 | Other useful commands to query, edit, and delete a deployment :
69 |
70 | kubectl get deployments
71 |
72 | kubectl get deployment melon-deploy
73 |
74 | kubectl describe deployment melon-deploy
75 |
76 | kubectl edit deployment melon-deploy
77 |
78 | kubectl delete deployment melon-deploy
79 |
80 | Additionally, you can also use **kubectl patch** to update an API object in place. An example is like the following :
81 |
82 | kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
83 |
84 | ## Play 2 : Rolling updates
85 |
86 | **Rolling updates** provide a way to update a deployment to a new container version by gradually updating replicas so that there is no downtime
87 |
88 | Deploying an application :
89 |
90 | kubectl run kubedeploy --image=nginx
91 |
92 |
93 | Or using a YAML definition file :
94 |
95 | ```yaml
96 | apiVersion: apps/v1
97 | kind: Deployment
98 | metadata:
99 |   name: nginx-deployment
100 |   labels:
101 |     app: nginx
102 | spec:
103 |   replicas: 3
104 |   selector:
105 |     matchLabels:
106 |       app: nginx
107 |   template:
108 |     metadata:
109 |       labels:
110 |         app: nginx
111 |     spec:
112 |       containers:
113 |       - name: nginx
114 |         image: nginx:1.7.9
115 |         ports:
116 |         - containerPort: 80
117 | ```
118 |
119 | Then apply it using the following command :
120 | kubectl apply -f filename.yaml
121 |
122 |
123 | Check status:
124 |
125 | kubectl rollout status deployments kubedeploy
126 |
127 | Then update the image of the deployment :
128 | kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
129 |
130 |
131 | Or
132 |
133 | kubectl set image deployment/kubedeploy nginx=nginx:1.9.1 --record
134 |
135 | The **--record** flag records information about the updates so that they can be rolled back later.
136 |
137 |
138 | ## Play 3 : Rollback
139 |
140 | **Rollbacks** allow us to revert to a previous state. For example, if a rolling update breaks something, we can quickly recover by using a rollback.
141 |
142 | To trigger an update that you may later want to roll back, simply use :
143 |
144 | kubectl set image deployment kubedeploy nginx=nginx:1.9.1 --record=true
145 |
146 | Performing a rollback :
147 |
148 | kubectl rollout undo deployments kubedeploy
149 |
150 | Check the history of deployments :
151 |
152 | kubectl rollout history deployment kubedeploy
153 |
154 | Then you can get back to the specific revision :
155 |
156 | kubectl rollout undo deployment kubedeploy --to-revision=2
157 |
158 | Check if rollout successfully using the following :
159 |
160 | kubectl rollout status deployment/candy-deployment
161 |
162 | You'll see output similar to the following :
163 |
164 |
165 |
166 | The **--revision** flag will give more information on a specific revision number :
167 |
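For example ( reusing the deployment name from above ) :

```shell
kubectl rollout history deployment kubedeploy --revision=2
```
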
168 | Specifically :
169 | Speaking of the rolling update strategy : under the spec, the rollingUpdate section is populated simply with default values, because we didn't give it specific values when we created the deployment. You'll notice two values there, called maxSurge and maxUnavailable; here is an idea of what they do.
170 |
171 | **maxSurge** determines the maximum number of extra replicas that can be created during a rolling deployment. Assume we have 3 replicas here : as soon as we start a rolling update, it's going to create some additional replicas running the new version, so for a short period of time there could potentially be more than three replicas, and maxSurge simply puts a hard maximum on that. You can use it, especially if you have a big deployment with a lot of replicas, to control how fast or how gradually the deployment rolls out. Two scenarios :
172 | - You could set it very low, and it's going to update just a few at a time and gradually replace the replicas in your deployment.
173 | - Or you could set it to a very high number, and it's going to do a huge chunk of them all at once.
174 |
175 | So it just gives you a little more control over how quickly those new instances are created. maxSurge can be set as a percentage or a specific number.
176 |
177 | **maxUnavailable** sets a maximum on the number of replicas that can be considered unavailable during the rolling update. It can be a number or a percentage. So whether you're doing a rolling update or a rollback, you can use that value to ensure that at any given time a sufficient number of your replicas are actually available.
178 |
179 |
180 | So those just give you a little more control over what actually occurs when you do a rolling update.
181 |
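You can inspect the strategy currently applied to a deployment ( a hedged check; with no explicit values the defaults are typically 25% maxSurge / 25% maxUnavailable ) :

```shell
kubectl describe deployment kubedeploy | grep -i strategy
```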
182 |
183 | ## Play 4: OS Upgrades
184 |
185 | Imagine there is one node offline now which hosts your blue and green applications. When you have multiple replicas of the blue pod, the users accessing the blue application are not impacted, as they're being served through the other blue pod that's online; however, users accessing the green pod are impacted, as that was the only pod running the green application.
186 |
187 | What does Kubernetes do in this case ? If the node comes back online immediately, then the kubelet process starts and the pods come back. However, if the node is down for more than **5 minutes**, then the pods are **terminated** from that node : Kubernetes considers them dead. If the pods were part of a ReplicaSet, then they're recreated on other nodes. The time it waits for a pod to come back online is known as the pod eviction timeout, which is set on the controller manager with a default value of 5 minutes. When the node comes back online after the pod eviction timeout, it comes up blank, without any pods scheduled on it.
188 |
189 | If you're not sure the node will be back online within 5 minutes, the safe way is to drain the node of all workloads so that the pods are moved to other nodes in the cluster, by using the following command :
190 |
191 | kubectl drain node-super-007
192 |
193 | k drain node-super-007 --ignore-daemonsets
194 |
195 | Technically the pods are not moved when you drain the node : they are gracefully terminated on the node they're on and recreated on another. The node is also **cordoned**, or marked as **unschedulable**, meaning no pods can be scheduled on this node until you specifically remove the restriction. When the pods are safe on the other nodes, you can upgrade and reboot the node. When it comes back online, it is still unschedulable; you then need to uncordon it so that pods can be scheduled on it again, by using the following command :
196 |
197 | kubectl uncordon node-super-007
198 |
199 |
200 | Now remember, the pods that were moved to the other nodes don't automatically fall back to this node. Only when pods are deleted or new pods are created in the cluster may they be scheduled on it.
201 |
202 | There is another command that simply marks the node unschedulable; unlike drain, it does not terminate or move the pods already running on the node :
203 |
204 | kubectl cordon node-super-007
205 |
206 | ## Play 5: Taints and Tolerations
207 |
208 | Taints are set on nodes, and tolerations on pods.
209 |
210 | You can use the following command to taint the node :
211 |
212 | kubectl taint nodes node-name key=value:taint-effect
213 |
214 | Example is as the following :
215 |
216 | kubectl taint nodes melonnode app=melonapp:NoSchedule
217 |
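To verify that the taint was applied ( a quick check on the node used above ) :

```shell
kubectl describe node melonnode | grep -i taint
```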
218 |
219 | The taint-effect ( what happens to pods that do not tolerate the taint ) can be one of the following :
220 | - NoSchedule
221 | - PreferNoSchedule
222 | - NoExecute
223 |
224 | Translated into a pod yaml definition file, the toleration is as follows :
225 |
226 | ```yaml
227 | apiVersion: v1
228 | kind: Pod
229 | metadata:
230 |   name: melon-ns-pod
231 |   namespace: melon-ns
232 |   labels:
233 |     app: melonapp
234 | spec:
235 |   containers:
236 |   - name: melonapp-container
237 |     image: busybox
238 |     command: ['sh', '-c', 'echo Salut K8S! && sleep 3600']
239 |   tolerations:
240 |   - key: "app"
241 |     operator: "Equal"
242 |     value: "melonapp"
243 |     effect: "NoSchedule"
244 | ```
245 |
246 | When a Kubernetes cluster is first set up, a taint is applied automatically to the master node : the scheduler will not schedule any pods on the master, which is referred to as a best practice so that no workloads are deployed on the master.
247 |
248 | To untaint a node ( this is not a good practice, but just to show how to untaint the master node ) :
249 |
250 | kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-
251 |
--------------------------------------------------------------------------------
/03 - Networking.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 3 : Networking
2 |
3 | Kubernetes creates Docker containers on the none network first; it then invokes the configured CNI plugins, which take care of the rest of the configuration.
4 |
5 | ## Play 0 : the Container Network Interface
6 |
7 | ### 1st Glance at CNI
8 |
9 | **CNI (Container Network Interface)**, a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted.
10 |
11 | You can find more on Github : https://github.com/containernetworking/cni
12 |
13 | CNI already comes with a set of supported plugins, such as bridge, VLAN, IPVLAN, MACVLAN, as well as IPAM plugins like host-local and dhcp. Other plugins available from third-party organisations, such as Weave, Flannel, Cilium, VMware NSX, Calico, Infoblox etc., have implemented the CNI standards. However, Docker does not implement CNI, because it has its own set of standards known as CNM, which stands for **Container Network Model**, another standard that aims at solving container networking challenges similar to CNI but with some differences. That is also the reason why those plugins do not natively integrate with Docker, but you can still work with them the way Kubernetes does :
14 | - Create the Docker container on the none network
15 | - Then invoke the configured CNI plugins, which take care of the rest of the configuration
16 |
17 | ### CNI in Kubernetes
18 |
19 | Master :
20 | - Kube-api requests port 6443 open
21 | - Kube-scheduler requests port 10251 open
22 | - Kube-controller-manager request port 10252 open
23 | - etcd requests port 2380 open
24 |
25 | Worker node :
26 | - Kubelet requests port 10250 open
27 | - The worker nodes expose services for external access on ports 30000 to 32767
28 |
29 | You can find the details on official documentation : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports
30 |
31 | Be careful with the network requirements at both the cluster and pod level :
32 | - Every Pod should have a unique IP address
33 | - Every Pod should be able to communicate with other pods on the same node and on other nodes without NAT
34 |
35 | We just need to implement a solution that takes care of automatically assigning IP addresses and establishing connectivity between the pods on a node, as well as pods on different nodes, without having to configure any NAT rules.
36 |
37 | To put these concepts together, we have to know how it works underneath.
38 |
39 |
40 | ## Play 1 : Network Policy
41 |
42 | By default, all pods in the cluster can communicate with any other pods within the Kubernetes cluster and reach out to any available IP. This is accomplished by deploying a pod networking solution to the cluster : a pod network is an internal virtual network that spans across all the nodes in the cluster and to which all the pods connect. But there is no guarantee that the IPs will always remain the same.
43 |
44 | Imagine that we have a web application that wants to access a database. This is also the reason why the better way for the web application to access the database is using a service : if we create a service, it can expose the database application across the cluster from any node.
45 |
46 | Here the interest of using a network policy is to allow the user to restrict what is allowed to talk to your pods and what your pods are allowed to talk to in your cluster. The web application can now access the database using the name of the service, db. The service also gets an IP address assigned to it : whenever a pod tries to reach the service using its IP or name, it forwards the traffic to the backend pod, in this case the database. Here is also where kube-proxy comes in. Kube-proxy is a process that runs on each node in the Kubernetes cluster and creates rules on each node to forward traffic headed for those services to the backend pods. One way it does this is using **iptables** rules : in this case, it creates an iptables rule on each node in the cluster to forward traffic heading to the IP of the service.
47 |
48 | Network policies work on a whitelist model, which means that as soon as a network policy selects a pod using the pod selector, that pod is completely locked down and cannot talk to anything until we provide some rule that whitelists specific traffic in and out of the pod.
49 |
50 | Definition :
51 |
52 | - **ingress** defines rules for incoming traffic.
53 | - **egress** defines rules for outgoing traffic.
54 | - **rules** both ingress and egress rules are whitelist-based, meaning that any traffic that does not match at least one rule will be blocked.
55 | - **port** specifies the protocols and ports that match the rule.
56 | - **from/to** selectors specify the sources and destinations of network traffic that matches the rule
57 |
58 | Check network policy :
59 |
60 | kubectl get netpol
61 |
62 | Or
63 |
64 | kubectl get networkpolicies
65 |
66 | Check the content of the network policy :
67 |
68 | kubectl describe netpol netpolicyname
69 |
70 | Define a network policy :
71 |
72 | ```yaml
73 | apiVersion: networking.k8s.io/v1
74 | kind: NetworkPolicy
75 | metadata:
76 |   name: melon-network-policy
77 | spec:
78 |   podSelector:
79 |     matchLabels:
80 |       app: secure-app
81 |   policyTypes:
82 |   - Ingress
83 |   - Egress
84 |   ingress:
85 |   - from:
86 |     - podSelector:
87 |         matchLabels:
88 |           allow-access: "true"
89 |     ports:
90 |     - protocol: TCP
91 |       port: 80
92 |   egress:
93 |   - to:
94 |     - podSelector:
95 |         matchLabels:
96 |           allow-access: "true"
97 |     ports:
98 |     - protocol: TCP
99 |       port: 80
100 | ```
101 |
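A hedged way to test the policy is to launch a temporary pod with and without the allow-access=true label and try to reach the protected pod; the service name secure-app-service below is an assumption for illustration :

```shell
# Allowed: the client pod carries the whitelisted label
kubectl run test-allowed --rm -it --image=busybox --labels="allow-access=true" \
  -- wget -qO- -T 2 http://secure-app-service

# Blocked: the same test without the label should time out
kubectl run test-blocked --rm -it --image=busybox \
  -- wget -qO- -T 2 http://secure-app-service
```
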
102 | For some default policy :
103 |
104 | Deny all ingress
105 |
106 | ```yaml
107 | apiVersion: networking.k8s.io/v1
108 | kind: NetworkPolicy
109 | metadata:
110 |   name: default-deny
111 | spec:
112 |   podSelector: {}
113 |   policyTypes:
114 |   - Ingress
115 | ```
116 |
117 | Allow all ingress :
118 |
119 | ```yaml
120 | apiVersion: networking.k8s.io/v1
121 | kind: NetworkPolicy
122 | metadata:
123 |   name: allow-all
124 | spec:
125 |   podSelector: {}
126 |   policyTypes:
127 |   - Ingress
128 |   ingress:
129 |   - {}
130 |
131 | ```
132 |
133 | ## About Selectors
134 |
135 | There are multiple types of selectors:
136 |
137 | - **podSelector** matches traffic from/to pods which match the selector
138 | - **namespaceSelector** matches traffic from/to pods within namespaces which match the selector. Note that when podSelector and namespaceSelector are both present, the matching pods must also be within a matching namespace.
139 | - **ipBlock** specifies a CIDR range of IPs that will match the rule. This is mostly used for traffic from/to outside the cluster. You can also specify exceptions to the range using except.
140 |
141 |
142 | ## Play 2 : DNS in Kubernetes
143 |
144 | CoreDNS in Kubernetes :
145 | From version 1.12 the recommended DNS server is CoreDNS. CoreDNS uses a file named Corefile located at /etc/coredns ( the full path is /etc/coredns/Corefile ); within this file you have a number of plugins configured.
146 |
147 | For pods, remember we talked about a record being created for each Pod by converting its IP into a dashed format; that is disabled by default. Queries that CoreDNS cannot resolve are forwarded to the nameserver specified in the CoreDNS pod's /etc/resolv.conf file, which is set to use the nameserver of the Kubernetes node. Also note that this Corefile is passed into the pods as a ConfigMap object, so you can modify it; you can list it using the following command :
148 |
149 | kubectl get configmap -n kube-system
150 |
151 | Every DNS record in CoreDNS falls under the **cluster.local** domain; it usually looks like :
152 |
153 | For the service :
154 | servicename.namespace.svc.cluster.local
155 |
156 |
157 | For a pod, the record is its IP converted into a dashed format, for example 10-244-1-5.namespace.pod.cluster.local.
158 |
159 | When we deploy the CoreDNS solution, it also creates a service to make it available to other components within the cluster, named kube-dns. The IP address of this service is configured as the nameserver on pods :
160 |
161 | kubectl get svc kube-dns -n kube-system
162 |
163 | The DNS configuration on pods is done by Kubernetes automatically when the pods are created. Want to guess which Kubernetes component is responsible for that ? The kubelet. You can check the following file to find the IP of the cluster DNS server and the domain in it :
164 |
165 | cat /var/lib/kubelet/config.yaml
166 |
167 | Once the pods are configured with the right nameserver, you can now resolve other pods and services.
168 |
169 | Checking if DNS is actually working:
170 |
171 | host webservice
172 |
173 | It will return the fully qualified domain name of the web service, which happens to give the following output :
174 |
175 | web-service.default.svc.cluster.local has address xxx.xxx.xxx.xxx
176 |
177 |
178 | kubectl exec -ti busybox -- nslookup nginx
179 |
180 | So the resolv.conf file helps you resolve the name even though you didn't ask for the full name, as this file has a search entry which is set to default.svc.cluster.local, as well as svc.cluster.local and cluster.local. This allows you to find any **service** by its short name, but not a pod; for a Pod you'll still need the FQDN.
181 |
182 | kubectl exec busybox cat /etc/resolv.conf
183 |
184 |
185 | ### References
186 |
187 | https://github.com/kubernetes/dns/blob/master/docs/specification.md
188 |
189 | https://coredns.io/plugins/kubernetes/
190 |
191 |
192 | ## Play 3 : Ingress in Kubernetes
193 |
194 | Ingress Controller : a reverse proxy product such as NGINX, HAProxy, Traefik, Contour etc. deployed in the Kubernetes cluster and configured to route traffic to other services, which involves defining URL routes, SSL certificates, load balancing etc. ( a set of rules defined as Ingress resources ). Service mesh solutions such as Linkerd and Istio also provide capabilities similar to an Ingress Controller.
195 |
196 | Remember a Kubernetes cluster does not come with an Ingress Controller by default. You can check out the following page to get more ideas :
197 |
198 | https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/
199 |
200 |
201 |
202 | With an Ingress object you define a set of rules, and an Ingress Controller will be able to do things like the following :
203 | - Route users to a simple application
204 | - URL-based routing such as different pages for the same web application ( example : www.cloud-melon.com/contacts and www.cloud-melon.com/aboutme )
205 | - Based on domain name itself to route specific users ( such as blog.cloud-melon.com and code.cloud-melon.com )
206 |
207 | All those will fully depend on your backend configuration of the ingress controller.
208 |
209 | You can check the examples of ingress resources and rules from
210 | the following link :
211 |
212 | https://kubernetes.io/docs/concepts/services-networking/ingress/
213 |
214 |
215 | ### NGINX as Ingress Controller in its simplest form
216 |
217 | For NGINX, you can check the following yaml file to see how it works; it is actually defined as a Deployment :
218 |
219 | https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/static-ip/nginx-ingress-controller.yaml
220 |
221 |
222 | In addition, you need some other objects to help you succeed :
223 | - As a best practice, configure a ConfigMap object to pass in any configuration you need to change.
224 | - A service account ( for auth ) with the right set of permissions ( correct ClusterRole, Role and RoleBindings ) is also needed, as the Ingress controller has some additional built-in intelligence, which explains why it needs some additional permissions.
225 | - A service to expose the ingress controller is also needed.
226 |
227 |
228 | A minimal ingress example is as follows :
229 |
230 | ```yaml
231 | apiVersion: networking.k8s.io/v1beta1
232 | kind: Ingress
233 | metadata:
234 |   name: test-ingress
235 |   annotations:
236 |     nginx.ingress.kubernetes.io/rewrite-target: /
237 | spec:
238 |   rules:
239 |   - http:
240 |       paths:
241 |       - path: /testpath
242 |         backend:
243 |           serviceName: test
244 |           servicePort: 80
245 | ```
246 |
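After applying it, you can list and inspect the ingress resource ( a quick check using the name above ) :

```shell
kubectl get ingress
kubectl describe ingress test-ingress
```
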
247 | Please refer to : https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
248 |
249 |
250 | Specifically, when you need to rewrite an ingress ( for NGINX ) :
251 |
252 | ```yaml
253 | apiVersion: extensions/v1beta1
254 | kind: Ingress
255 | metadata:
256 |   name: melon-ingress
257 |   namespace: melon-space
258 |   annotations:
259 |     nginx.ingress.kubernetes.io/rewrite-target: /
260 | spec:
261 |   rules:
262 |   - http:
263 |       paths:
264 |       - path: /contacts
265 |         backend:
266 |           serviceName: back-service
267 |           servicePort: 8080
268 | ```
269 |
270 | Refer to : https://kubernetes.github.io/ingress-nginx/examples/rewrite/
271 |
--------------------------------------------------------------------------------
/04 - ETCD.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 4 : ETCD
2 |
3 | ETCD is a distributed, reliable key-value store that is simple, secure and fast. The etcd data store stores information regarding the cluster such as the nodes, pods, configs, secrets, accounts, roles, bindings and others ( the state of the cluster and information about the cluster itself ). All the information you see when you run the **kubectl get** command comes from the etcd server. Every change you make to your cluster, such as adding additional nodes or deploying pods or replicasets, is updated in the etcd server. Only once it is updated in the etcd server is the change considered to be complete.
4 |
5 | Let me elaborate a bit further here. Traditionally, the limitation of a relational database is that every time a new piece of information needs to be added, the entire table is affected, which leads to a lot of empty cells. A key-value store stores information in the form of documents or pages : each individual gets a document, and all information about that individual is stored in that file. These files can be in any format or structure, and changes to one file do not affect the others. While you could store and retrieve simple keys and values, when your data gets complex you typically end up transacting in data formats like JSON or YAML. Etcd is easy to install and get started with.
6 |
7 |
8 | ### Play 1 : Install etcd
9 |
10 | - From scratch:
11 | Download the relevant binary for your OS from the GitHub releases page, extract it, and run the etcd executable. Find more details here : https://github.com/etcd-io/etcd
12 |
13 | When you run etcd, it starts a service that listens on port 2379 by default. You can then attach any client to the etcd service to store and retrieve information. The default client that comes with etcd is **etcdctl**, a command-line client for etcd. You can use it to store and retrieve key-value pairs. The usage is like the following :
14 |
15 | ./etcdctl set key1 value1
16 |
17 | ./etcdctl get key1
18 |
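Note that the set/get syntax above is the v2 API; with the v3 API the equivalent commands are ( a brief aside, not in the original notes ) :

```shell
ETCDCTL_API=3 etcdctl put key1 value1
ETCDCTL_API=3 etcdctl get key1
```
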
19 | - Via kubeadm:
20 | If you set up your cluster using kubeadm, then kubeadm deploys the etcd server for you as a **Pod** in the **kube-system** namespace. You can explore the etcd database using the etcdctl utility within this pod.
21 |
22 |
23 | To list all keys stored by Kubernetes, run the etcdctl get command like the following :
24 | kubectl exec etcd-master -n kube-system etcdctl get / --prefix --keys-only
25 |
26 | When we set up high availability in Kubernetes, the only option to note for now is the advertise client URL. This is the address on which etcd listens. It happens to be the IP of the server and port **2379**, which is the default port on which etcd listens. This is the **URL** that should be configured on the **kube-apiserver** when it tries to reach the etcd server.
27 |
28 | Kubernetes stores data in a specific directory structure : the root directory is the registry, and under that you have the various Kubernetes constructs such as minions ( nodes ), pods, replicasets, deployments etc. In an HA environment you'll have multiple masters, and therefore multiple etcd instances spread across the master nodes. In that case, make sure the etcd instances know about each other by setting the right parameter in the etcd service configuration : the **initial-cluster** option is where you must specify the different instances of the etcd service.
29 |
30 |
31 | ### Play 2 : Backup etcd
32 |
33 | The etcd cluster is hosted on the master node. While configuring etcd we specified a location where all the data will be stored, shown below; that is the directory that can be configured to be backed up by the backup tool :
34 |
35 | --data-dir=/var/lib/etcd
36 |
37 | Set environment variable ETCDCTL_API=3 to use v3 API or ETCDCTL_API=2 to use v2 API. If you're using v3 version you can use the following:
38 |
39 | ETCDCTL_API=3 etcdctl --endpoints http://127.0.0.1:4001 snapshot save snapshot.db --cacert /xxx --cert /... --key /...
40 |
41 | Then you can use **ls** to check the backup; it would look like the following :
42 |
43 | snapshot.db
44 |
45 | You can also check the status of your backup :
46 |
47 | ETCDCTL_API=3 etcdctl snapshot status snapshot.db
48 |
49 |
50 |
51 |
52 |
53 | ### Play 3 : Restore etcd
54 |
55 | As the kube-apiserver depends on the etcd cluster, we have to stop the apiserver first :
56 |
57 | service kube-apiserver stop
58 |
59 | Then restore using the following command :
60 |
61 | ETCDCTL_API=3 etcdctl snapshot restore snapshot.db \
62 | --name m1 \
63 | --initial-cluster m1=http://host1:2380,m2=http://host2:2380,m3=http://host3:2380 \
64 | --initial-cluster-token etcd-cluster-1 \
65 | --initial-advertise-peer-urls http://host1:2380
66 |
67 |
68 | Then
69 |
70 | systemctl daemon-reload
71 |
72 | service etcd restart
73 |
74 | service kube-apiserver start
75 |
76 |
77 |
78 | Ref : https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md
--------------------------------------------------------------------------------
/05 - Volumes.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 5: Volumes
2 |
3 | Talking about volumes : in general, the storage that's internal to your containers is ephemeral, as it is designed to be temporary; once the container is stopped or destroyed, any internal storage is completely removed. What volumes do is allow us to provide some sort of more permanent, external storage to our pods and their containers. This storage exists outside the life of the container, and therefore it can continue on even if the container stops or is replaced.
4 |
5 | ### 1. Create Volumes :
6 |
7 | Here I set up an **EmptyDir** volume, which creates storage on a node when the pod is assigned to that node. The storage disappears when the pod leaves the node. It is also possible to create multiple containers and mount that storage into each of them, letting them share the same volume so that they can interact with each other through that same shared file system.
8 |
9 | ```yaml
10 | apiVersion: v1
11 | kind: Pod
12 | metadata:
13 | name: volume-pod
14 | spec:
15 | containers:
16 | - image: busybox
17 | name: busybox
18 | command: ["/bin/sh","-c","while true; do sleep 3600; done"]
19 | volumeMounts:
20 | - name: my-volume
21 | mountPath: /tmp/storage
22 | volumes:
23 | - name: my-volume
24 | emptyDir: {}
25 |
26 | ```
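As a quick check ( assuming the manifest above is saved as volume-pod.yaml ), you can write a file into the mount and read it back :

```shell
# Create the pod, then confirm the emptyDir mount is writable
kubectl apply -f volume-pod.yaml
kubectl exec volume-pod -- sh -c 'echo hello > /tmp/storage/hello.txt'
kubectl exec volume-pod -- cat /tmp/storage/hello.txt
```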
27 |
28 |
29 | Kubernetes is designed to maintain stateless containers. That means we can freely delete and replace containers without worrying about them containing important information that we don't want to lose. Any information that we do want to keep has to be stored somewhere outside the container, and that's what keeps the container stateless.
30 |
31 | State persistence means keeping data or information alive beyond the life of the container, even when the container is deleted or replaced, while still allowing the container to modify or update it while it's running. One of the most sophisticated ways that Kubernetes allows us to implement persistent storage is through the use of persistent volumes and persistent volume claims. They represent storage resources that can be dynamically allocated as requested within the cluster.
32 |
33 |
34 | ### 2. Create PV :
35 |
36 | A **PV ( Persistent Volume )** represents a storage resource ( just as a node represents compute resources such as CPU and memory ).
37 |
38 | Create it through the yaml definition :
39 |
40 | ```yaml
41 | apiVersion: v1
42 | kind: PersistentVolume
43 | metadata:
44 | name: data-pv
45 | spec:
46 | storageClassName: local-storage
47 | capacity:
48 | storage: 1Gi
49 | accessModes:
50 | - ReadWriteOnce
51 | hostPath:
52 | path: "/mnt/data"
53 |
54 | ```
55 |
56 | This means it is going to allocate a total of 1 gigabyte of storage. Note that binding between a PVC and a PV is one-to-one : a claim for, say, 512 megabytes that binds to this volume consumes the whole 1Gi PV, and no other claim can bind to it.
57 |
58 | The actual storage here is implemented by **hostPath**, which simply allocates storage on a node in the cluster : the storage is allocated on the node where the pod that consumes it is running, i.e. it uses the node's local filesystem.
59 |
60 | Use kubectl get pv and you'll get output like the following :
61 |
62 |
63 |
64 | Notice here that the status is **Available**, which means it is currently not bound to a persistent volume claim. It's waiting to be claimed.
65 |
66 |
67 | ### 3. Create PVC :
68 |
69 | A **PVC ( Persistent Volume Claim )** is the abstraction layer between the user of the resource ( the Pod ) and the PV itself. The benefit of a PVC is that users don't need to worry about the details of where the storage is located or even how it's implemented. All they need to know is the storage class and the accessMode. PVCs automatically bind themselves to a PV that has a compatible StorageClass and accessMode.
70 |
71 |
72 | ```yaml
73 | apiVersion: v1
74 | kind: PersistentVolumeClaim
75 | metadata:
76 | name: data-pvc
77 | spec:
78 | storageClassName: local-storage
79 | accessModes:
80 | - ReadWriteOnce
81 | resources:
82 | requests:
83 | storage: 512Mi
84 | ```
85 |
86 | Use kubectl get pvc and you'll get output like the following :
87 |
88 |
89 |
90 | You may notice that the status of this PVC is **Bound**, which means it is already bound to the persistent volume. If you check back on the PVs as well, you'll see that the status of the PV is also **Bound** ( as shown below ).
91 |
92 |
93 |
94 |
95 | ### 4. Create Pod :
96 |
97 | If you're going to create a pod to consume the target PV, you'll do it as follows :
98 |
99 | ```yaml
100 | apiVersion: v1
101 | kind: Pod
102 | metadata:
103 | name: data-pod
104 | spec:
105 | containers:
106 | - name: busybox
107 | image: busybox
108 | command: ["/bin/sh", "-c","while true; do sleep 3600; done"]
109 | volumeMounts:
110 | - name: temp-data
111 | mountPath: /tmp/data
112 | volumes:
113 | - name: temp-data
114 | persistentVolumeClaim:
115 | claimName: data-pvc
116 | restartPolicy: Always
117 | ```
118 |
119 | Check back using kubectl get pods and you'll see your pod is up and running.
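A small verification sketch, using the names from the manifests above :

```shell
# Write into the PVC-backed mount and read it back
kubectl exec data-pod -- sh -c 'echo persisted > /tmp/data/test.txt'
kubectl exec data-pod -- cat /tmp/data/test.txt
```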
120 |
--------------------------------------------------------------------------------
/06 - Observability.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 6: Observability - Debug and troubleshootings
2 |
3 | As you may know, Kubernetes runs an agent on each node known as the kubelet, which is responsible for receiving instructions from the Kubernetes API server and running pods on the node. The kubelet also contains a subcomponent known as **cAdvisor** ( Container Advisor ), which is responsible for retrieving performance metrics from pods and exposing them through the kubelet API to make the metrics available to the metrics server.
4 |
5 | ## Play 0. Cluster Monitoring
6 |
7 | You can have one metrics server per Kubernetes cluster. The metrics server retrieves metrics from each of the Kubernetes nodes and pods, aggregates them and stores them in memory. Note that the metrics server is only an in-memory monitoring solution and does not store the metrics on disk; as a result you cannot see historical performance data.
8 |
9 | ### Cluster monitoring solutions :
10 |
11 | - Prometheus
12 | - Elastic Stack
13 | - DataDog
14 | - Dynatrace
15 |
16 |
17 | ## Play 1. Liveness and Readiness Probes :
18 |
19 | **Probes** allow you to customize how Kubernetes determines the status of your containers.
20 |
21 | A **Liveness Probe** indicates whether the container is running properly ( it governs when the cluster will automatically restart the container ).
22 |
23 | A **Readiness Probe** indicates whether the container is ready to accept requests.
24 |
25 |
26 | ## Play 2. Get your hands dirty :
27 |
28 |
29 | ```yaml
30 | apiVersion: v1
31 | kind: Pod
32 | metadata:
33 | name: melon-pod-ops
34 | spec:
35 | containers:
36 | - name: my-container-ops
37 |     image: nginx
38 |     # nginx serves HTTP on port 80, so the httpGet probes below can succeed
39 | livenessProbe:
40 | httpGet:
41 | path: /
42 | port: 80
43 | initialDelaySeconds: 5
44 | periodSeconds: 5
45 | readinessProbe:
46 | httpGet:
47 | path: /
48 | port: 80
49 | initialDelaySeconds: 5
50 | periodSeconds: 5
51 | ```
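Once the pod is created, a hedged way to see how the kubelet reports the probe results :

```shell
# Probe failures and restarts show up in the pod's events and restart count
kubectl describe pod melon-pod-ops
kubectl get pod melon-pod-ops -w
```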
52 |
53 |
54 | ## Play 3. Container logging :
55 |
56 | Use the following command for single container :
57 |
58 | kubectl logs melonpod
59 |
60 | Use the following command for a multi-container pod :
61 |
62 | kubectl logs melonpod -c melon-container
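A few extra logging flags that are often handy ( optional sketch ) :

```shell
# Stream logs as they are written
kubectl logs -f melonpod
# Logs of the previous ( crashed ) container instance
kubectl logs melonpod --previous
# Only the last 20 lines
kubectl logs melonpod --tail=20
```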
63 |
64 |
65 |
66 | ## Play 4. Monitoring applications :
67 |
68 | Basically, what you have to bear in mind is to use the kubectl top command :
69 |
70 | kubectl top pods
71 |
72 | kubectl top pod melon-pod
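If you want the usage broken down per container inside the pod ( optional ) :

```shell
# Per-container CPU and memory usage
kubectl top pod melon-pod --containers
```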
73 |
74 |
75 | You can also use the following command to check the CPU and memory usage of each node :
76 |
77 | kubectl top nodes
78 |
79 | The output looks like the following :
80 |
81 |
82 |
83 |
84 | ## Play 5. Debugging & Troubleshooting :
85 |
86 |
87 | ### Use the following to check the container :
88 |
89 | kubectl describe pod melon-pod
90 |
91 | The **Events** section is very important to check.
92 |
93 | ### Edit the pod :
94 |
95 | kubectl edit pod melon-pod -n melon-ns
96 |
97 | Remember, you CANNOT edit specifications of an existing POD other than the below.
98 |
99 | spec.containers[*].image
100 |
101 | spec.initContainers[*].image
102 |
103 | spec.activeDeadlineSeconds
104 |
105 | spec.tolerations
106 |
107 |
108 | ### Add liveness probe
109 |
110 | Save the object specification to a file ( note that the old --export flag has been removed from recent kubectl versions, so simply dump the YAML ) :
111 |
112 | kubectl get pod melon-pod -n melon-ns -o yaml > melon-pod.yaml
113 |
114 | Add the liveness probe :
115 |
116 | ```yaml
117 | livenessProbe:
118 | httpGet:
119 | path: /
120 | port: 80
121 | ```
122 |
123 | ### Delete pod :
124 |
125 | kubectl delete pod melon-pod -n melon-ns
126 |
127 | Then redeploy it by running kubectl apply with the target spec.
128 |
129 |
130 | ### Check Controlplane services
131 |
132 | Use the following commands to check the master node :
133 |
134 | service kube-apiserver status
135 |
136 | service kube-controller-manager status
137 |
138 | service kube-scheduler status
139 |
140 |
141 | Also use the following commands to check a worker node :
142 |
143 | service kubelet status
144 |
145 | service kube-proxy status
146 |
147 |
148 |
149 | ### Check service logs :
150 |
151 | Use the following command to check the logs :
152 |
153 | kubectl logs kube-apiserver-master -n kube-system
154 |
155 |
156 | You can also use journalctl utility to check the logs :
157 |
158 | sudo journalctl -u kube-apiserver
159 |
160 | ### Check worker node failure
161 |
162 | Use the following command to check node status :
163 |
164 | kubectl get nodes
165 |
166 | Then dive into more details using the following :
167 |
168 | kubectl describe node melon-worknode
169 |
170 | Each node has a set of conditions that can point us in a direction as to why a node might have failed. Depending on the status, they are set to **true**, **false** or **unknown** :
171 | - OutOfDisk ( true )
172 | - MemoryPressure ( true )
173 | - DiskPressure ( true ) : the disk capacity is low
174 | - PIDPressure ( true ) : there are too many processes
175 | - Ready ( true ) : indicates whether the node as a whole is healthy
176 |
177 | When a condition is set to unknown, it may indicate a loss of the node ( from the perspective of the master node ). Check the last heartbeat time field to find out when the node might have crashed. In such cases, proceed to check the status of the node itself and, if it crashed, bring it back up.
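To pull out just the conditions without reading the whole describe output, a jsonpath query like the following can help ( the node name is illustrative ) :

```shell
# List each condition type with its status for one node
kubectl get node melon-worknode \
  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'
```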
178 |
179 | Check for possible CPU, Memory and Disk space on the node by using the following command :
180 |
181 | top
182 |
183 | df -h
184 |
185 | And check the state of kubelet using the following :
186 |
187 | service kubelet status
188 |
189 | Check the kubelet logs for possible issues :
190 |
191 | sudo journalctl -u kubelet
192 |
193 | Check the kubelet certificates. Ensure they are not expired, that they are part of the right group, and that the certificates are issued by the right CA :
194 |
195 | openssl x509 -in /var/lib/kubelet/worker-1.crt -text
196 |
197 |
198 |
199 |
--------------------------------------------------------------------------------
/07 - ConfigMap.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 7: ConfigMap
2 |
3 | A **ConfigMap** is simply a Kubernetes object that stores configuration data in a key-value format. This configuration data can then be used to configure software running in a container, by referencing the ConfigMap in the Pod spec.
4 |
5 | To check the ConfigMaps, use the following command :
6 |
7 | kubectl get configmap
8 |
9 | You can also use a YAML descriptor to define a ConfigMap :
10 |
11 | ```yaml
12 | apiVersion: v1
13 | kind: ConfigMap
14 | metadata:
15 | name: melon-configmap
16 | data:
17 | myKey: myValue
18 | myFav: myHome
19 | ```
20 |
21 | There are a couple of ways to pass that data to a pod.
22 |
23 | ## Play 1 : Mount as environment variable
24 |
25 | Create a pod to use configmap data by using environment variables.
26 |
27 | ```yaml
28 | apiVersion: v1
29 | kind: Pod
30 | metadata:
31 | name: melon-configmap
32 | spec:
33 | containers:
34 | - name: melonapp-container
35 | image: busybox
36 | command: ['sh', '-c', "echo $(MY_VAR) && sleep 3600"]
37 | env:
38 | - name: MY_VAR
39 | valueFrom:
40 | configMapKeyRef:
41 |           name: melon-configmap
42 |           key: myKey
43 | ```
44 |
45 | You can use the following command to check the configmap value :
46 |
47 | kubectl logs melon-configmap
48 |
49 | The output is similar to the following :
50 |
51 |
52 |
53 | ## Play 2 : Using mounted volume
54 |
55 | Create a pod to use configmap data by mounting a data volume.
56 |
57 | ```yaml
58 | apiVersion: v1
59 | kind: Pod
60 | metadata:
61 | name: melon-volume-pod
62 | spec:
63 | containers:
64 | - name: myapp-container
65 | image: busybox
66 | command: ['sh', '-c', "echo $(cat /etc/config/myKey) && sleep 3600"]
67 | volumeMounts:
68 | - name: config-volume
69 | mountPath: /etc/config
70 | volumes:
71 | - name: config-volume
72 | configMap:
73 | name: melon-configmap
74 | ```
75 |
76 | You can use the following command to check it as we did for play 1 :
77 |
78 | kubectl logs melon-volume-pod
79 |
80 | Using the following command to check the config map :
81 |
82 | kubectl exec melon-volume-pod -- ls /etc/config
83 |
84 | The output will look similar to the following :
85 |
86 |
87 |
88 | You can use the following command to check the exact value :
89 |
90 | kubectl exec melon-volume-pod -- cat /etc/config/myKey
91 |
92 |
93 | I also enjoy creating a ConfigMap from a file, such as the following ( here k is the alias for kubectl ) :
94 |
95 | k create configmap fluentd-config --from-file=/usr/test/fluent.conf
96 |
97 | k get configmap -o yaml > config.yaml
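You can also create a ConfigMap directly from literal key-value pairs ( an optional sketch, mirroring the YAML definition above ) :

```shell
# Same data as the YAML definition, created imperatively
k create configmap melon-configmap --from-literal=myKey=myValue --from-literal=myFav=myHome
k describe configmap melon-configmap
```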
--------------------------------------------------------------------------------
/08 - Services.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 8: Services
2 |
3 | Services enable communication between various components within and outside of the application by enabling connectivity between groups of pods. Services in Kubernetes provide an abstraction layer which allows network access to a dynamic set of pods. One of their use cases is to listen on a port on the node and forward requests on that port to a port on a pod running a frontend application ( known as a NodePort service, because the service listens on a port on the node and forwards the requests to the pods ).
4 |
5 | Services use a **selector** : when anything tries to reach the service with network traffic, that traffic is proxied to one of the pods selected through that selector.
6 |
7 | To emphasize : the set of Pods targeted by a Service is usually determined by a selector, which is very important.
8 |
9 | ### Play 1 : Service types and its scenarios
10 |
11 | - **ClusterIP** - In this case the service creates a virtual IP inside the cluster to enable communication between different services. A ClusterIP service is designed to be accessed by other pods within the cluster. This is the default ServiceType, and it is also accessible using the cluster DNS.
12 |
13 | - **NodePort** - Exposes the Service on each Node's IP at a static port, the NodePort, which also means it is exposed outside the cluster by accessing that port ( by requesting NodeIP:NodePort ). What it does is select an open port on all the nodes and listen on that port on each one of them.
14 |
15 | - **LoadBalancer** - Exposes the Service externally using a cloud provider's load balancer. It works only when you're actually running within a cloud platform or are set up to interact with one. A good example would be to distribute load across the different web servers in your front-end tier.
16 |
17 | - **ExternalName** - Maps the Service to an external address ( e.g. foo.bar.example.com, by returning a CNAME record with its value ). It is used to allow resources within the cluster to access things outside the cluster through a service.
18 |
19 | ### Creating a service
20 |
21 | Create a service using yaml definition :
22 |
23 | ```yaml
24 | apiVersion: v1
25 | kind: Service
26 | metadata:
27 | name: melon-service
28 | spec:
29 | type: ClusterIP
30 | selector :
31 | app: nginx
32 | ports:
33 | - protocol: TCP
34 | port: 8080
35 | targetPort: 80
36 |
37 | ```
38 |
39 | In this specification, port 8080 is what the **service** is going to listen on ( remember these terms are from the viewpoint of the service ); it isn't necessarily the same port that the containers in the pods are listening on. The **targetPort** of 80 is where the pod is actually listening, and it is where the service forwards the request to.
40 |
41 | The service is in fact like a virtual server inside the node, inside the cluster : it has its own IP address, and that IP address is called the cluster IP of the service. The **NodePort** is the port on the node itself which we use to access the webserver externally. NodePorts can only be in a valid range, which by default is from **30000** to **32767**.
42 |
43 | If you don't provide a target port, it is assumed to be the same as port. And if you don't provide a node port, a free port in the range between 30000 and 32767 is automatically allocated.
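Once the service exists, a small sketch to check which ports were actually assigned ( the service name is the one from the definition above ) :

```shell
# Show the port, target port and ( for NodePort services ) the allocated node port
kubectl get svc melon-service -o jsonpath='{.spec.ports[0]}{"\n"}'
```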
44 |
45 | Check out the following diagram :
46 |
47 |
48 |
49 |
50 | Create a service using the following :
51 |
52 | kubectl expose deployment webfront-deploy --port=80 --target-port=80 --type=NodePort
53 |
54 |
55 | Playing with services is fun ( here k is the alias of kubectl ) :
56 |
57 | k expose deployment auth-deployment --type=NodePort --port 8080 --name=auth-svc --target-port 80
58 |
59 | k expose deployment data-deployment --type=ClusterIP --port 8080 --name=data-svc --target-port 80
60 |
61 |
62 | In any case, the service is automatically updated once created, making it highly flexible and adaptive :
63 |
64 | - Single pod on a single node ( label selector : yes )
65 | - Multiple pods on a single node ( label selector : yes, node affinity : yes )
66 | - Multiple pods across multiple nodes ( label selector : yes )
67 |
68 |
69 |
70 | ### Check services and endpoints
71 |
72 | Using the following command to check the service available :
73 |
74 | kubectl get svc
75 |
76 |
77 | Using the following command to check the endpoint of the service :
78 |
79 | kubectl get endpoints melon-service
80 |
81 | It can also be like the following :
82 |
83 | kubectl get ep
84 |
85 |
86 | Look up a service within the cluster using the following command :
87 |
88 | k run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup melon-service
89 |
90 |
91 | Specifically if you want to look up pod DNS :
92 |
93 | k get pod melon-pod -o wide
94 |
95 | After getting its IP address, replace the '.' characters with '-', then use the following command :
96 |
97 | k run --generator=run-pod/v1 test-nslookup --image=busybox:1.28 --rm -it -- nslookup 10-44-0-22.default.pod
--------------------------------------------------------------------------------
/09 - Security.md:
--------------------------------------------------------------------------------
1 | # Playbook Part 9: Security
2 |
3 | How does someone gain access to the Kubernetes cluster, and how are their actions controlled at a high level ?
4 |
5 | ### Play 0 : Overview of Security
6 |
7 | This section explains :
8 | - What are the risks ?
9 | - What measures do you need to take to secure the cluster ?
10 |
11 | From the viewpoint of architecture, the kube-apiserver is at the center of all operations within Kubernetes. You perform all operations by interacting with it, which also means it is the first line of defense of Kubernetes. Two types of questions come up here :
12 |
13 | #### Before starting :
14 |
15 | - Who can access the cluster ?
16 |   This can be answered by :
17 |   - Files - Username and Passwords
18 |   - Files - Username and Tokens
19 |   - Certificates
20 |   - External Authentication providers - LDAP
21 |   - Service Accounts ( non-human parties )
22 |
23 | - What can they do ?
24 |   This can be answered by :
25 |   - RBAC Authorization
26 |   - ABAC Authorization
27 |   - Node Authorization
28 |   - Webhook Mode
29 |
30 | Another thing to note : inside the Kubernetes cluster, the communication among components such as the ETCD cluster, kubelet, kube-proxy, kube-scheduler, kube-controller-manager and kube-apiserver is secured using TLS encryption.
31 |
32 | #### How does it work ?
33 | You can access the cluster through the kubectl tool or the API directly; all of these requests go through the kube-apiserver. The API server authenticates a request before processing it.
34 |
35 |
36 |
37 |
38 | #### Transport Layer Security ( TLS ) Basics
39 |
40 | A certificate is used to guarantee trust between two parties during a transaction. For example, when a user tries to access a web server, TLS certificates ensure that the communication between the user and the server is encrypted and that the server is who it says it is.
41 |
42 | Let's put it into a scenario. Without secure connectivity, if a user were to access his online banking application, the credentials he types in would be sent in plain text. A hacker sniffing network traffic could easily retrieve the credentials and use them to hack into the user's bank account. So the data being transferred must be encrypted using an encryption key, which is basically a set of random numbers and letters : you apply the key to your data and encrypt it into a format that cannot be recognized, and the data is then sent to the server.
43 |
44 | The hacker sniffing the network might still get the data, but they can't do anything with it. However, the same is true for the server receiving the data : it cannot decrypt the data without the key. A copy of the key therefore must also be sent to the server so that the server can decrypt and read the message. Since the key is sent over the same network, an attacker can sniff that as well and decrypt the data with it. This is known as SYMMETRIC ENCRYPTION. It is a secure way of encryption, but since it uses the same key to encrypt and decrypt the data, and since the key has to be exchanged between the sender and the receiver, there is a risk of a hacker gaining access to the key and decrypting the data. That's where asymmetric encryption comes in.
45 |
46 | Instead of using a single key to encrypt and decrypt data, asymmetric encryption uses a pair of keys : a private key and a public key. No matter what is locked using the public key, it can only be unlocked by the private key.
47 |
48 | Another example : if you want to access a server via SSH, you can use the **ssh-keygen** command to generate a private key ( id_rsa ) and a public key ( id_rsa.pub ), and then add an entry to the file on the server usually located at ~/.ssh/authorized_keys ( so always remember the command **cat ~/.ssh/authorized_keys**, which shows an rsa key that looks something like ssh-rsa assdfdfdsfdfxx...super...longs...tring...xxxxxx usersuper007 ). Anyone can attempt to break in via the public key, but as long as no one gets their hands on your private key, which stays safe with you on your laptop, no one can gain access to the server. When you SSH, you specify the location of your private key in your SSH command.
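For illustration only ( the user and host names are hypothetical ), the flow looks like this :

```shell
# Generate a key pair, copy the public key to the server, then log in with the private key
ssh-keygen -t rsa -f ~/.ssh/id_rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub user@server
ssh -i ~/.ssh/id_rsa user@server
```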
49 |
50 | #### What is a certificate ?
51 |
52 | How do you make sure it is legit and not just a self-signed certificate ( check the issuer of the certificate ) ? All web browsers have a built-in certificate validation mechanism : the browser checks the certificate received from the server and validates it to make sure it is legitimate. If it identifies the certificate as fake, it actually warns you.
53 |
54 | So how do you create a legitimate certificate for your web servers that the browsers will trust, and how do you get your certificate signed by someone with authority ? That's where Certificate Authorities ( CAs ) come in. They're well-known organisations that can sign and validate your certificate for you. Famous CAs include Symantec, DigiCert, Comodo, GlobalSign etc.
55 |
56 | The way this works is that you generate a Certificate Signing Request ( CSR ) using the key you generated earlier and the domain name of your website. The CA verifies your details, and once everything checks out it signs the certificate and sends it back to you. You now have a certificate signed by a CA that the browsers trust. If a hacker tried to get his certificate signed the same way, he would fail during the validation phase and his certificate would be rejected by the CA, so the website he's hosting won't have a valid certificate. The CAs use different techniques to make sure that you're the actual owner of the domain. Another interesting question is : how do the browsers know that the CA itself is legitimate ? What if the certificate was in fact signed by a fake CA ? The CAs themselves have a set of key pairs : the CAs use their private keys to sign the certificates, and the public keys of all the CAs are built into the browsers. The browser uses the public key of the CA to validate that the certificate was actually signed by the CA itself.
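As a hedged illustration of that flow with openssl ( the file names and the CN are placeholders ) :

```shell
# Generate a private key and a certificate signing request to submit to a CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=my-site.example.com" -out server.csr
# Inspect the CSR before submitting it
openssl req -in server.csr -noout -text
```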
57 |
58 | The certificates ( public keys ) are often named like the following ( *.crt, *.pem ) :
59 | - server.crt or server.pem for server
60 | - client.crt or client.pem for client
61 |
62 | The private keys are often named like the following ( *.key, *-key.pem ) :
63 | - server.key or server-key.pem
64 | - client.key or client-key.pem
65 |
66 | #### Security in Kubernetes
67 |
68 | An administrator interacting with the Kubernetes cluster through the **kubectl utility** or by **accessing the Kubernetes API** directly must establish a secure TLS connection. Communication between all the components within the Kubernetes cluster also has to be secured. So the two primary requirements are to have all the various services within the cluster use server certificates, and all clients use client certificates to verify they are who they say they are.
69 |
70 | Basically there are a couple of components that need to be secured by TLS; let's start with the server side :
71 | - Kube-API Server
72 | - ETCD Server ( in practice only the kube-apiserver talks to it, so the kube-apiserver acts as a client to the ETCD server )
73 | - Kubelet server
74 |
75 | Then there is the client side ( everyone who has to interact with the API server; as mentioned, the API server itself talks to the etcd server and the kubelet server of each individual node ) :
76 | - People ( such as administrator )
77 | - Kube-Scheduler
78 | - Kube-Controller-Manager
79 | - Kube-proxy
80 |
81 | Here is a recap of what I mentioned, in the following diagram :
82 |
83 |
84 |
85 |
86 | ### Play 1 : Security context
87 |
88 | A pod's securityContext defines privilege and access control settings for the pod. If a pod or container needs to interact with the security mechanisms of the underlying operating system in a customized way, then securityContext is how we can accomplish that.
89 |
90 | Display the current-context
91 |
92 | kubectl config current-context
93 |
94 | Set the default context to my-cluster-name
95 |
96 | kubectl config use-context my-cluster-name
97 |
98 | The securityContext is defined as part of a pod's spec such as the following:
99 |
100 | You can create a user and group beforehand :
101 |
102 | sudo useradd -u 2000 container-user-0
103 |
104 | sudo groupadd -g 3000 container-group-0
105 |
106 | Create the file on both worker nodes :
107 |
108 | sudo mkdir -p /etc/message
109 |
110 | echo "hello" | sudo tee -a /etc/message/message.txt
111 |
112 | Change the ownership and permissions :
113 |
114 | sudo chown 2000:3000 /etc/message/message.txt
115 |
116 | sudo chmod 640 /etc/message/message.txt
117 |
118 |
119 | ```yaml
120 | apiVersion: v1
121 | kind: Pod
122 | metadata:
123 | name: melon-securitycontext-pod
124 | spec:
125 | securityContext:
126 | runAsUser: 2000
127 | fsGroup: 3000
128 | containers:
129 | - name: melonapp-secret-container
130 | image: busybox
131 | command: ['sh', '-c','cat /message/message.txt && sleep 3600']
132 | volumeMounts:
133 | - name: message-volume
134 | mountPath: /message
135 | volumes:
136 | - name: message-volume
137 | hostPath:
138 | path: /etc/message
139 | ```
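To confirm the securityContext took effect ( a quick sketch using the names above ) :

```shell
# The container should run as uid 2000 with group 3000
kubectl exec melon-securitycontext-pod -- id
# And its startup command should have been able to read the mounted file
kubectl logs melon-securitycontext-pod
```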
140 |
141 |
142 |
143 | ### Play 2 : Secrets
144 |
145 | **Secrets** are simply a way to store sensitive data in your Kubernetes cluster, such as passwords, tokens and keys, and then pass it to the container at runtime ( rather than storing it in a pod spec or in the container itself ).
146 |
147 | A yaml definition for a secret :
148 |
149 | ```yaml
150 | apiVersion: v1
151 | kind: Secret
152 | metadata:
153 | name: melon-secret
154 | stringData:
155 | myKey: myPassword
156 | ```
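The same secret can also be created imperatively and inspected ( an optional sketch ) :

```shell
# Equivalent imperative creation; the value is stored base64-encoded in the object
kubectl create secret generic melon-secret --from-literal=myKey=myPassword
kubectl get secret melon-secret -o yaml
```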
157 |
158 | Create a pod to consume the secret using an environment variable :
159 |
160 |
161 | ```yaml
162 | apiVersion: v1
163 | kind: Pod
164 | metadata:
165 | name: melon-secret-pod
166 | spec:
167 | containers:
168 | - name: melonapp-secret-container
169 | image: busybox
170 | command: ['sh', '-c','echo stay tuned!&& sleep 3600']
171 | env:
172 | - name: MY_PASSWORD
173 | valueFrom:
174 | secretKeyRef:
175 | name: melon-secret
176 | key: myKey
177 | ```
178 |
179 | We can also consume that secret via volumes, as mentioned in Part 5 Volumes. Please have a look there if you need further details.
180 |
181 |
182 | ### Play 3 : Service Accounts
183 |
184 | You may have applications that need to talk to the Kubernetes cluster in order to do some automation or to get information. **ServiceAccounts** allow containers running in a pod to access the Kubernetes API securely, with properly limited permissions.
185 |
186 | You can create a service account with the following command :
187 |
188 | kubectl create serviceaccount melon-serviceaccount
189 |
190 | Double-check the available service accounts in your cluster :
191 |
192 | kubectl get serviceaccount
193 |
194 | You can determine the ServiceAccount that a pod will use by specifying a **serviceAccountName** in the pod spec like the following :
195 |
196 | ```yaml
197 | apiVersion: v1
198 | kind: Pod
199 | metadata:
200 | name: melon-serviceaccount-pod
201 | spec:
202 | serviceAccountName: melon-serviceaccount
203 | containers:
204 | - name: melonapp-svcaccount-container
205 | image: busybox
206 | command: ['sh', '-c','echo stay tuned!&& sleep 3600']
207 | ```
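The ServiceAccount's credentials are mounted into the pod automatically; a hedged way to see them :

```shell
# The token, CA bundle and namespace are mounted at a well-known path
kubectl exec melon-serviceaccount-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount
```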
208 |
209 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2019 Melony QIN
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Cloud Native Playbook
2 | A collection of cloud-native technologies learning resources from cloudmelon
3 |
4 |

5 |

6 |

7 |

8 |

9 |

10 |
11 |
12 | Loved the project? Please visit CloudMelon's Github profile
13 |
14 |
15 |
16 | # Ultimate Kubernetes Playbook
17 |
18 | This repository recaps useful kubectl commands and explanations of core concepts and API primitives of Kubernetes.
19 |
20 | | Chapters | Description |
21 | | --- | --- |
22 | | [00 - Shortcuts](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/00%20-%20Shortcuts.md)| Kubectl commands shortcuts|
23 | | [01 - Pod design](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/01%20-%20Pod%20design.md) | Pod Design |
24 | | [02 - Rolling updates & rollbacks](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/02%20-%20Rolling%20updates%20and%20rollbacks.md) | Rolling updates & rollbacks |
25 | | [03 - Networking](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/03%20-%20Networking.md) | Kubernetes Networking |
26 | | [04 - ETCD](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/04%20-%20ETCD.md) | ETCD |
27 | | [05 - Volumes](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/05%20-%20Volumes.md) | Kubernetes Storage |
28 | | [06 - Observability](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/06%20-%20Observability.md) | Kubernetes Observability |
29 | | [07 - ConfigMap](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/07%20-%20ConfigMap.md) | ConfigMap |
30 | | [08 - Services](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/08%20-%20Services.md) | Services |
31 | | [09 - Security](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/09%20-%20Security.md) | Security |
32 |
33 | ## Playground
34 |
35 | To learn how to get started with Kubernetes in 2024, check out [this blog post](https://cloudmelonvision.com/if-i-were-about-to-get-started-on-kubernetes-in-2024/)
36 |
37 | ## Set up a local Kubernetes Cluster
38 | | Chapters | Description |
39 | | --- | --- |
40 | | [Setup Minikube Cluster](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/platform-ops/Set%20up%20Minikube%20Cluster.md)| End-to-End how to Setup Minikube Cluster|
41 | | [Set up AKS cluster](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/platform-ops/azure-kubernetes-service/Set%20up%20AKS%20Cluster.md) | End-to-End how to Set up AKS cluster |
42 |
43 |
44 | ## Kubernetes Useful references :
45 |
46 | - Mastering Kubernetes By doing AKS DevSecOps Workshop :
47 | https://azure.github.io/AKS-DevSecOps-Workshop/
48 |
49 | - Kubernetes the hard way on Azure :
50 | https://github.com/ivanfioravanti/kubernetes-the-hard-way-on-azure/tree/master/docs
51 |
52 |
53 | ## Cloud-Native certification
54 |
55 | There are 5 major cloud-native certifications, and holding all 5 makes you eligible for [Cloud Native Computing Foundation(CNCF)](https://www.cncf.io/)'s [Kubestronaut Program](https://www.cncf.io/training/kubestronaut/), under the condition that the candidate successfully maintains the following 5 certifications in good standing at the same time :
56 | • [Certified Kubernetes Administrator (CKA)](https://www.cncf.io/training/certification/cka/)
57 | • [Certified Kubernetes Application Developer (CKAD)](https://www.cncf.io/training/certification/ckad/)
58 | • [Certified Kubernetes Security Specialist (CKS)](https://www.cncf.io/training/certification/cks/)
59 | • [Kubernetes and Cloud Native Associate (KCNA)](https://www.cncf.io/training/certification/kcna/)
60 | • [Kubernetes and Cloud Native Security Associate (KCSA)](https://www.cncf.io/training/certification/kcsa/)
61 |
62 | Additional Kubernetes ecosystem :
63 | • [Prometheus Certified Associate (PCA)](https://www.cncf.io/training/certification/pca/)
64 | • [Istio Certified Associate (ICA)](https://www.cncf.io/training/certification/ica/)
65 | • [Cilium Certified Associate (CCA)](https://www.cncf.io/training/certification/cca/)
66 | • [Certified Argo Project Associate (CAPA)](https://www.cncf.io/training/certification/capa/)
67 | • [GitOps Certified Associate (CGOA)](https://www.cncf.io/training/certification/cgoa/)
68 |
69 |
70 | ## Azure Kubernetes services ( AKS ) useful references :
71 |
72 | - AKS Current preview features: https://aka.ms/aks/previewfeatures
73 | - AKS Release notes: https://aka.ms/aks/releasenotes
74 | - AKS Public roadmap: http://aka.ms/aks/roadmap
75 | - AKS Known-issues: https://aka.ms/aks/knownissues
76 |
77 | ## Interested in deploying Master-Slave architecture clustering solution on Azure ?
78 |
79 | Please check it out :
80 | https://azure.microsoft.com/en-us/resources/templates/201-vmss-master-slave-customscript/
81 |
82 | ## More details on my blog :
83 |
84 | Please go to my blog cloud-melon.com to get more details about how to implement this solution and more about Microsoft Azure ( ref link : https://cloudmelonvision.com )
85 |
86 | Feel free to reach out to my twitter **@MelonyQ** for more details ( https://twitter.com/MelonyQ ).
87 |
88 | # Authors
89 |
90 | Contributors names and contacts
91 |
92 | - Github profile [here](https://github.com/cloudmelon)
93 | - Twitter [@MelonyQ](https://twitter.com/melonyq)
94 | - Blog [CloudMelon Vis](https://cloudmelonvision.com)
95 | - Youtube[ CloudMelon Vis](https://www.youtube.com/@CloudMelonVis?sub_confirmation=1)
96 |
97 | # Contribute
98 |
99 | Contributions are always welcome! Please create a PR to add Github Profile.
100 |
101 | ## License
102 |
103 | This project is licensed under [MIT](https://opensource.org/licenses/MIT) license.
104 |
105 | ## Show your support
106 |
107 | Give a ⭐️ if this project helped you!
108 |
109 | [](https://star-history.com/#cloudmelon/Cloud-Native-Playbook&Date)
--------------------------------------------------------------------------------
/_config.yml:
--------------------------------------------------------------------------------
1 | theme: jekyll-theme-slate
--------------------------------------------------------------------------------
/platform-ops/Set up Minikube Cluster.md:
--------------------------------------------------------------------------------
1 | # Playbook Before Part 0 : Set up Minikube
2 |
3 | ## Pre-read
4 | 1. If you get started on Kubernetes in 2024 and want to set up your local minikube cluster or playground, check out [this post](https://cloudmelonvision.com/if-i-were-about-to-get-started-on-kubernetes-in-2024/)
5 | 2. Kubernetes cluster architecture and key concepts, check out [this post](https://cloudmelonvision.com/what-is-kubernetes-really-about/)
6 |
7 | ## Prerequisites
8 |
9 | ### Set up K8S with Minikube
10 |
11 | You can follow [my article : Playbook Before Part 0 : Minikube setting up in Azure VM ](https://github.com/cloudmelon/Cloud-Native-Playbook/blob/master/platform-ops/Set%20up%20Minikube%20Cluster.md) to learn how to install Kubernetes with Minikube on a single VM in Microsoft Azure.
12 |
13 | See [the article : Playbook Before Part 0 : How to deploy NGINX Ingress in Minikube](https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/) to learn how to set up an NGINX Ingress controller on Minikube.
14 |
15 | In your Azure VM OR AWS EC2 instance :
16 |
17 | Update current packages of the system to the latest version.
18 | ```
19 | sudo apt update
20 | sudo apt upgrade
21 | ```
22 | To check if virtualization is supported on Linux, run the following command and verify that the output is non-empty:
23 | ```
24 | grep -E --color 'vmx|svm' /proc/cpuinfo
25 | ```
26 |
27 | ## Install VirtualBox (Option #1)
28 |
29 | Import the Oracle public keys used to sign the Debian packages with the following commands.
30 | ```
31 | wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
32 | wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
33 | ```
34 | You need to add the Oracle VirtualBox PPA to your Ubuntu system. You can do this by running the command below on your system.
35 |
36 | ```
37 | sudo add-apt-repository "deb http://download.virtualbox.org/virtualbox/debian bionic contrib"
38 | ```
39 | Update VirtualBox using the following commands :
40 | ```
41 | sudo apt update
42 | sudo apt install virtualbox-6.0
43 | ```
44 | ## Install Docker (Option #2)
45 |
46 |
47 |
48 | ## Install Kubectl
49 |
50 | Download the latest release and install kubectl binary with curl on Linux:
51 | ```
52 | sudo apt-get update && sudo apt-get install -y apt-transport-https
53 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
54 | echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
55 | sudo apt-get update
56 | sudo apt-get install -y kubectl
57 | ```
58 |
59 | ## Install Minikube
60 |
61 | Download a stand-alone binary and use the following command :
62 | ```
63 | curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
64 | && chmod +x minikube
65 | ```
66 |
67 | Add Minikube to your path
68 | ```
69 | sudo mkdir -p /usr/local/bin/
70 | sudo install minikube /usr/local/bin/
71 | ```
72 | ## Start Minikube
73 |
74 | Situation #1 : if you're using VirtualBox as the VM driver
75 | ```
76 | minikube start --vm-driver=virtualbox
77 | ```
78 |
79 | While starting the minikube, you can see an output similar to the following :
80 |
81 |
82 |
83 | Situation #2:
84 |
85 |
86 | You can use the following command to check if Minikube works well :
87 | ```
88 | minikube status
89 | ```
90 | You can expect an output similar to the following :
91 |
92 |
93 |
94 |
95 | ## Update Minikube
96 |
97 | 1. Find the current version
98 |
99 | ```
100 | $ minikube version
101 | minikube version: v1.33.0
102 | ```
103 |
104 | 2. Check if there are newer versions available
105 |
106 | ```
107 | $ minikube update-check
108 | CurrentVersion: v1.33.0
109 | LatestVersion: v1.33.0
110 | ```
111 |
112 | 3. Update to latest version
113 |
114 | ```
115 | curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
116 | && sudo install minikube-linux-amd64 /usr/local/bin/minikube
117 | ```
118 |
119 | 4. Check new minikube version
120 |
121 | ```
122 | $ minikube version
123 | minikube version: v1.33.0
124 | commit: 86fc9d54fca63f295d8737c8eacdbb7987e89c67
125 | ```
126 |
127 |
128 | ## Confirmation
129 |
130 | To check out the version of your kubectl, you can use the following command:
131 |
132 | kubectl version
133 |
134 | You should see the following output:
135 |
136 |
137 |
138 | To check out the detailed information use the following command:
139 |
140 | kubectl cluster-info
141 |
142 | You should see the following output:
143 |
144 |
145 |
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/Set up AKS Cluster.md:
--------------------------------------------------------------------------------
1 | # Deploy BDC on Azure Kubernetes Service cluster
2 |
3 |
4 | This repository contains the scripts that you can use to deploy a BDC cluster on Azure Kubernetes Service (AKS) cluster with basic networking ( Kubenet ) and advanced networking ( CNI ).
5 |
6 | This repository contains 2 bash scripts :
7 | - **deploy-cni-aks.sh** : You can use it to deploy an AKS cluster using CNI networking. It fits the use case where you need to integrate with existing virtual networks in Azure; this network model allows greater separation of resources and controls in an enterprise environment.
8 |
9 | - **deploy-kubenet-aks.sh** : You can use it to deploy an AKS cluster using kubenet networking. Kubenet is a basic network plugin, available on Linux only, and is the AKS default. After provisioning, it also creates an Azure virtual network and a subnet : your nodes get an IP address from the subnet, and all pods receive an IP address from a logically different address space than the subnet of the nodes.
10 |
11 |
12 |
13 | ## Instructions
14 |
15 | ### deploy-cni-aks.sh
16 |
17 | 1. Download the script on the location that you are planning to use for the deployment
18 |
19 | ``` bash
20 | curl --output deploy-cni-aks.sh https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/platform-ops/azure-kubernetes-service/deploy-cni-aks.sh
21 | ```
22 |
23 | 2. Make the script executable
24 |
25 | ``` bash
26 | chmod +x deploy-cni-aks.sh
27 | ```
28 |
29 | 3. Run the script (make sure you are running with sudo)
30 |
31 | ``` bash
32 | sudo ./deploy-cni-aks.sh
33 | ```
34 |
35 | ### deploy-kubenet-aks.sh
36 |
37 | 1. Download the script on the location that you are planning to use for the deployment
38 |
39 | ``` bash
40 | curl --output deploy-kubenet-aks.sh https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/platform-ops/azure-kubernetes-service/scripts/deploy-kubenet-aks.sh
41 | ```
42 |
43 | 2. Make the script executable
44 |
45 | ``` bash
46 | chmod +x deploy-kubenet-aks.sh
47 | ```
48 |
49 | 3. Run the script (make sure you are running with sudo)
50 |
51 | ``` bash
52 | sudo ./deploy-kubenet-aks.sh
53 | ```
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/deploy-cni-aks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #Define a set of environment variables to be used in resource creations.
3 | #
4 |
5 | #!/bin/bash
6 | #Get Subscription ID and resource groups. It is used as default for controller, SQL Server Master instance (sa account) and Knox.
7 | #
8 |
9 | read -p "Your Azure Subscription: " subscription
10 | echo
11 | read -p "Your Resource Group Name: " resourcegroup
12 | echo
13 | read -p "In which region you're deploying: " region
14 | echo
15 | read -p "Which Kubernetes version you're deploying: " version
16 |
17 | #Define a set of environment variables to be used in resource creations.
18 | export SUBID=$subscription
19 |
20 | export REGION_NAME=$region
21 | export RESOURCE_GROUP=$resourcegroup
22 | export KUBERNETES_VERSION=$version
23 | export SUBNET_NAME=aks-subnet
24 | export VNET_NAME=bdc-vnet
25 | export AKS_NAME=bdcakscluster
26 |
27 | #Set Azure subscription current in use
28 | az account set --subscription $subscription
29 |
30 | #Create Azure Resource Group
31 | az group create -n $RESOURCE_GROUP -l $REGION_NAME
32 |
33 | #Create Azure Virtual Network to host your AKS cluster
34 | az network vnet create \
35 | --resource-group $RESOURCE_GROUP \
36 | --location $REGION_NAME \
37 | --name $VNET_NAME \
38 | --address-prefixes 10.0.0.0/8 \
39 | --subnet-name $SUBNET_NAME \
40 | --subnet-prefix 10.1.0.0/16
41 |
42 | SUBNET_ID=$(az network vnet subnet show \
43 | --resource-group $RESOURCE_GROUP \
44 | --vnet-name $VNET_NAME \
45 | --name $SUBNET_NAME \
46 | --query id -o tsv)
47 |
48 | #Create AKS Cluster
49 | az aks create \
50 | --resource-group $RESOURCE_GROUP \
51 | --name $AKS_NAME \
52 | --kubernetes-version $version \
53 | --load-balancer-sku standard \
54 | --network-plugin azure \
55 | --vnet-subnet-id $SUBNET_ID \
56 | --docker-bridge-address 172.17.0.1/16 \
57 | --dns-service-ip 10.2.0.10 \
58 | --service-cidr 10.2.0.0/24 \
59 | --node-vm-size Standard_D13_v2 \
60 | --node-count 2 \
61 | --generate-ssh-keys
62 |
63 | az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
64 |
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/deploy-kubenet-aks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #Define a set of environment variables to be used in resource creations.
3 | #
4 |
5 | #!/bin/bash
6 | #Get Subscription ID and resource groups. It is used as default for controller, SQL Server Master instance (sa account) and Knox.
7 | #
8 |
9 | read -p "Your Azure Subscription: " subscription
10 | echo
11 | read -p "Your Resource Group Name: " resourcegroup
12 | echo
13 | read -p "In which region you're deploying: " region
14 | echo
15 | read -p "Which Kubernetes version you're deploying: " version
16 |
17 | #Define a set of environment variables to be used in resource creations.
18 | export SUBID=$subscription
19 |
20 | export REGION_NAME=$region
21 | export RESOURCE_GROUP=$resourcegroup
22 | export KUBERNETES_VERSION=$version
23 | export AKS_NAME=bdcakscluster
24 |
25 | #Set Azure subscription current in use
26 | az account set --subscription $subscription
27 |
28 | #Create Azure Resource Group
29 | az group create -n $RESOURCE_GROUP -l $REGION_NAME
30 |
31 | #Create AKS Cluster
32 | az aks create \
33 | --resource-group $RESOURCE_GROUP \
34 | --name $AKS_NAME \
35 | --kubernetes-version $version \
36 | --node-vm-size Standard_D13_v2 \
37 | --node-count 2 \
38 | --generate-ssh-keys
39 |
40 | az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/private-aks/Set up Private AKS Cluster.md:
--------------------------------------------------------------------------------
1 | # Deploy private AKS cluster with Advanced Networking (CNI)
2 |
3 | This repository contains the scripts that you can use to deploy a Azure Kubernetes Service (AKS) private cluster with advanced networking ( CNI ).
4 |
5 | This repository contains 2 bash scripts :
6 |
7 | - **deploy-private-aks.sh** : You can use it to deploy private AKS cluster with private endpoint, it fits the use case that you need to deploy AKS private cluster.
8 |
9 | - **deploy-private-aks-udr.sh** : You can use it to deploy private AKS cluster with private endpoint, it fits the use case that you need to deploy AKS private cluster and limit egress traffic with UDR ( User-defined Routes ).
10 |
11 |
12 | ## Instructions
13 |
14 | ### deploy-private-aks.sh
15 |
16 | 1. Download the script on the location that you are planning to use for the deployment
17 |
18 | ``` bash
19 | curl --output deploy-private-aks.sh https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/platform-ops/azure-kubernetes-service/private-aks/scripts/deploy-private-aks.sh
20 | ```
21 |
22 | 2. Make the script executable
23 |
24 | ``` bash
25 | chmod +x deploy-private-aks.sh
26 | ```
27 |
28 | 3. Run the script (make sure you are running with sudo)
29 |
30 | ``` bash
31 | sudo ./deploy-private-aks.sh
32 | ```
33 |
34 | ### deploy-private-aks-udr.sh
35 |
36 | 1. Download the script on the location that you are planning to use for the deployment
37 |
38 | ``` bash
39 | curl --output deploy-private-aks-udr.sh https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/platform-ops/azure-kubernetes-service/private-aks/scripts/deploy-private-aks-udr.sh
40 | ```
41 |
42 | 2. Make the script executable
43 |
44 | ``` bash
45 | chmod +x deploy-private-aks-udr.sh
46 | ```
47 |
48 | 3. Run the script (make sure you are running with sudo)
49 |
50 | ``` bash
51 | sudo ./deploy-private-aks-udr.sh
52 | ```
53 |
54 |
55 |
56 |
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/private-aks/scripts/deploy-bdc-private-aks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | #Get password as input. It is used as default for controller, SQL Server Master instance (sa account) and Knox.
4 | #
5 | while true; do
6 | read -p "Create Admin username for Big Data Cluster: " bdcadmin
7 | echo
8 | read -s -p "Create Password for Big Data Cluster: " password
9 | echo
10 | read -s -p "Confirm your Password: " password2
11 | echo
12 | [ "$password" = "$password2" ] && break
13 | echo "Password mismatch. Please try again."
14 | done
15 |
16 |
17 | #Create BDC custom profile
18 | azdata bdc config init --source aks-dev-test --target private-bdc-aks --force
19 |
20 | #Configurations for BDC deployment
21 | azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.docker.imageTag=2019-CU13-ubuntu-20.04"
22 | azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.data.className=default"
23 | azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.storage.logs.className=default"
24 |
25 | azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.endpoints[0].serviceType=NodePort"
26 | azdata bdc config replace -p private-bdc-aks/control.json -j "$.spec.endpoints[1].serviceType=NodePort"
27 |
28 | azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.master.spec.endpoints[0].serviceType=NodePort"
29 | azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.gateway.spec.endpoints[0].serviceType=NodePort"
30 | azdata bdc config replace -p private-bdc-aks/bdc.json -j "$.spec.resources.appproxy.spec.endpoints[0].serviceType=NodePort"
31 |
32 | #In case you're deploying BDC in HA mode ( aks-dev-test-ha profile ) please also use the following command
33 | #azdata bdc config replace -c private-bdc-aks /bdc.json -j "$.spec.resources.master.spec.endpoints[1].serviceType=NodePort"
34 | export AZDATA_USERNAME=$bdcadmin
35 | export AZDATA_PASSWORD=$password
36 |
37 | azdata bdc create --config-profile private-bdc-aks --accept-eula yes
38 |
39 | #Login and get endpoint list for the cluster.
40 | #
41 | azdata login -n mssql-cluster
42 |
43 | azdata bdc endpoint list --output table
44 |
--------------------------------------------------------------------------------
/platform-ops/azure-kubernetes-service/private-aks/scripts/deploy-private-aks-udr.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #Get Subscription ID and Azure service principal as input. It is used as default for controller, SQL Server Master instance (sa account) and Knox.
3 | ##
4 | #You can also create service principal instead of using an existing one
5 | #az ad sp create-for-rbac -n "bdcaks-sp" --skip-assignment
6 | #
7 | read -p "Your Azure Subscription: " subscription
8 | echo
9 | read -p "Your Resource Group Name: " resourcegroup
10 | echo
11 | read -p "In which region you're deploying: " region
12 | echo
13 | read -p "Your Azure service principal ID: " sp_id
14 | echo
15 | read -p "Your Azure service principal Password: " sp_pwd
16 |
17 |
18 | #Define a set of environment variables to be used in resource creations.
19 | export SUBID=$subscription
20 |
21 | export REGION_NAME=$region
22 | export RESOURCE_GROUP=$resourcegroup
23 | export SUBNET_NAME=aks-subnet
24 | export VNET_NAME=bdc-vnet
25 | export AKS_NAME=bdcaksprivatecluster
26 | export FWNAME=bdcaksazfw
27 | export FWPUBIP=$FWNAME-ip
28 | export FWIPCONFIG_NAME=$FWNAME-config
29 |
30 | export FWROUTE_TABLE_NAME=bdcaks-rt
31 | export FWROUTE_NAME=bdcaksroute
32 | export FWROUTE_NAME_INTERNET=bdcaksrouteinet
33 |
34 | #Set Azure subscription current in use
35 | az account set --subscription $subscription
36 |
37 | #Create Azure Resource Group
38 | az group create -n $RESOURCE_GROUP -l $REGION_NAME
39 |
40 | #Create Azure Virtual Network to host your AKS cluster
41 | az network vnet create \
42 | --resource-group $RESOURCE_GROUP \
43 | --location $REGION_NAME \
44 | --name $VNET_NAME \
45 | --address-prefixes 10.0.0.0/8 \
46 | --subnet-name $SUBNET_NAME \
47 | --subnet-prefix 10.1.0.0/16
48 |
49 |
50 | SUBNET_ID=$(az network vnet subnet show \
51 | --resource-group $RESOURCE_GROUP \
52 | --vnet-name $VNET_NAME \
53 | --name $SUBNET_NAME \
54 | --query id -o tsv)
55 |
56 |
57 | #Add Azure firewall extension
58 | az extension add --name azure-firewall
59 |
60 | #Dedicated subnet for Azure Firewall (Firewall name cannot be changed)
61 | az network vnet subnet create \
62 | --resource-group $RESOURCE_GROUP \
63 | --vnet-name $VNET_NAME \
64 | --name AzureFirewallSubnet \
65 | --address-prefix 10.3.0.0/24
66 |
67 | #Create Azure firewall
68 | az network firewall create -g $RESOURCE_GROUP -n $FWNAME -l $REGION_NAME --enable-dns-proxy true
69 |
70 | #Create public IP for Azure Firewall
71 | az network public-ip create -g $RESOURCE_GROUP -n $FWPUBIP -l $REGION_NAME --sku "Standard"
72 |
73 | #Create IP configurations for Azure Firewall
74 | az network firewall ip-config create -g $RESOURCE_GROUP -f $FWNAME -n $FWIPCONFIG_NAME --public-ip-address $FWPUBIP --vnet-name $VNET_NAME
75 |
76 |
77 | #Getting public and private IP addresses for Azure Firewall
78 | export FWPUBLIC_IP=$(az network public-ip show -g $RESOURCE_GROUP -n $FWPUBIP --query "ipAddress" -o tsv)
79 | export FWPRIVATE_IP=$(az network firewall show -g $RESOURCE_GROUP -n $FWNAME --query "ipConfigurations[0].privateIpAddress" -o tsv)
80 |
81 | #Create an User defined route table
82 | az network route-table create -g $RESOURCE_GROUP --name $FWROUTE_TABLE_NAME
83 |
84 | #Create User defined routes
85 | az network route-table route create -g $RESOURCE_GROUP --name $FWROUTE_NAME --route-table-name $FWROUTE_TABLE_NAME --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP --subscription $SUBID
86 |
87 | az network route-table route create -g $RESOURCE_GROUP --name $FWROUTE_NAME_INTERNET --route-table-name $FWROUTE_TABLE_NAME --address-prefix $FWPUBLIC_IP/32 --next-hop-type Internet
88 |
89 |
90 | #Add FW Network Rules
91 | az network firewall network-rule create -g $RESOURCE_GROUP -f $FWNAME --collection-name 'aksfwnr' -n 'apiudp' --protocols 'UDP' --source-addresses '*' --destination-addresses "AzureCloud.$REGION_NAME" --destination-ports 1194 --action allow --priority 100
92 | az network firewall network-rule create -g $RESOURCE_GROUP -f $FWNAME --collection-name 'aksfwnr' -n 'apitcp' --protocols 'TCP' --source-addresses '*' --destination-addresses "AzureCloud.$REGION_NAME" --destination-ports 9000
93 | az network firewall network-rule create -g $RESOURCE_GROUP -f $FWNAME --collection-name 'aksfwnr' -n 'time' --protocols 'UDP' --source-addresses '*' --destination-fqdns 'ntp.ubuntu.com' --destination-ports 123
94 |
95 | #Add FW Application Rules
96 | az network firewall application-rule create -g $RESOURCE_GROUP -f $FWNAME --collection-name 'aksfwar' -n 'fqdn' --source-addresses '*' --protocols 'http=80' 'https=443' --fqdn-tags "AzureKubernetesService" --action allow --priority 100
97 |
98 | #Associate the user-defined route table (UDR) with the subnet used by the AKS cluster (where BDC is deployed)
99 | az network vnet subnet update -g $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --route-table $FWROUTE_TABLE_NAME
100 |
101 |
102 |
103 |
104 |
105 | #Reference the provided SP and assign it permissions on the virtual network
106 | export APPID=$sp_id
107 | export PASSWORD=$sp_pwd
108 | export VNETID=$(az network vnet show -g $RESOURCE_GROUP --name $VNET_NAME --query id -o tsv)
109 |
110 | #Assign SP Permission to VNET
111 | az role assignment create --assignee $APPID --scope $VNETID --role "Network Contributor"
112 |
113 | #Assign SP Permission to route table
114 | export RTID=$(az network route-table show -g $RESOURCE_GROUP -n $FWROUTE_TABLE_NAME --query id -o tsv)
115 | az role assignment create --assignee $APPID --scope $RTID --role "Network Contributor"
116 |
117 |
118 | #Create AKS Cluster
119 | az aks create \
120 | --resource-group $RESOURCE_GROUP \
121 | --location $REGION_NAME \
122 | --name $AKS_NAME \
123 | --load-balancer-sku standard \
124 | --outbound-type userDefinedRouting \
125 | --enable-private-cluster \
126 | --network-plugin azure \
127 | --vnet-subnet-id $SUBNET_ID \
128 | --docker-bridge-address 172.17.0.1/16 \
129 | --dns-service-ip 10.2.0.10 \
130 | --service-cidr 10.2.0.0/24 \
131 | --service-principal $APPID \
132 | --client-secret $PASSWORD \
133 | --node-vm-size Standard_D13_v2 \
134 | --node-count 2 \
135 | --generate-ssh-keys
136 |
137 |
138 | az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
139 |
--------------------------------------------------------------------------------
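Here is a minimal sketch of how you might run the UDR script above and sanity-check the result. The cluster name bdcaksprivatecluster comes from the script itself; the kubectl check assumes you are on a machine that can reach the private API server, such as a jumpbox in bdc-vnet or a peered network:

```shell
# make the script executable and run it; it prompts for subscription, resource group, region and SP credentials
chmod +x deploy-private-aks-udr.sh
./deploy-private-aks-udr.sh

# confirm the cluster was created as a private cluster
az aks show -g <your-resource-group> -n bdcaksprivatecluster \
  --query "apiServerAccessProfile.enablePrivateCluster" -o tsv

# from inside the VNet, the kubeconfig fetched by the script should work
kubectl get nodes -o wide
```
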
/platform-ops/azure-kubernetes-service/private-aks/scripts/deploy-private-aks.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | #
3 | #Get the Azure subscription, resource group, region, and Kubernetes version as input.
4 | #These values are used in the resource creation commands below.
5 | #
6 |
7 |
8 |
9 | read -p "Your Azure Subscription: " subscription
10 | echo
11 | read -p "Your Resource Group Name: " resourcegroup
12 | echo
13 | read -p "Which Azure region are you deploying to: " region
14 | echo
15 | read -p "Kubernetes version to deploy (see az aks get-versions): " version
16 | echo
17 | #Define a set of environment variables to be used in resource creations.
18 | export SUBID=$subscription
19 |
20 | export REGION_NAME=$region
21 | export RESOURCE_GROUP=$resourcegroup
22 | export KUBERNETES_VERSION=$version
23 | export SUBNET_NAME=aks-subnet
24 | export VNET_NAME=bdc-vnet
25 | export AKS_NAME=bdcaksprivatecluster
26 |
27 | #Set the Azure subscription currently in use
28 | az account set --subscription $subscription
29 |
30 | #Create Azure Resource Group
31 | az group create -n $RESOURCE_GROUP -l $REGION_NAME
32 |
33 | #Create Azure Virtual Network to host your AKS cluster
34 | az network vnet create \
35 | --resource-group $RESOURCE_GROUP \
36 | --location $REGION_NAME \
37 | --name $VNET_NAME \
38 | --address-prefixes 10.0.0.0/8 \
39 | --subnet-name $SUBNET_NAME \
40 | --subnet-prefix 10.1.0.0/16
41 |
42 | SUBNET_ID=$(az network vnet subnet show \
43 | --resource-group $RESOURCE_GROUP \
44 | --vnet-name $VNET_NAME \
45 | --name $SUBNET_NAME \
46 | --query id -o tsv)
47 |
48 | #Create AKS Cluster
49 | az aks create \
50 | --resource-group $RESOURCE_GROUP \
51 | --name $AKS_NAME \
52 | --load-balancer-sku standard \
53 | --enable-private-cluster \
54 | --kubernetes-version $KUBERNETES_VERSION \
55 | --network-plugin azure \
56 | --vnet-subnet-id $SUBNET_ID \
57 | --docker-bridge-address 172.17.0.1/16 \
58 | --dns-service-ip 10.2.0.10 \
59 | --service-cidr 10.2.0.0/24 \
60 | --node-vm-size Standard_D13_v2 \
61 | --node-count 2 \
62 | --generate-ssh-keys
63 |
64 | az aks get-credentials -g $RESOURCE_GROUP -n $AKS_NAME
65 |
--------------------------------------------------------------------------------
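Both scripts create a private cluster, so the API server is only resolvable and reachable from inside the virtual network or a peered/connected network. If you cannot use a jumpbox, one possible workaround, assuming a recent Azure CLI, is to run kubectl through the managed control plane with az aks command invoke:

```shell
# run a kubectl command against the private cluster without direct network access to its API server
az aks command invoke \
  --resource-group <your-resource-group> \
  --name bdcaksprivatecluster \
  --command "kubectl get nodes -o wide"
```
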
/sampleapp/sampleapp.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 |   name: redis-back
5 | spec:
6 |   replicas: 1
7 |   selector:
8 |     matchLabels:
9 |       app: redis-back
10 |   template:
11 |     metadata:
12 |       labels:
13 |         app: redis-back
14 |     spec:
15 |       nodeSelector:
16 |         "kubernetes.io/os": linux
17 |       containers:
18 |       - name: redis-back
19 |         image: redis
20 |         resources:
21 |           requests:
22 |             cpu: 100m
23 |             memory: 128Mi
24 |           limits:
25 |             cpu: 250m
26 |             memory: 256Mi
27 |         ports:
28 |         - containerPort: 6379
29 |           name: redis
30 | ---
31 | apiVersion: v1
32 | kind: Service
33 | metadata:
34 |   name: redis-back
35 | spec:
36 |   ports:
37 |   - port: 6379
38 |   selector:
39 |     app: redis-back
40 | ---
41 | apiVersion: apps/v1
42 | kind: Deployment
43 | metadata:
44 |   name: melonvote-front
45 | spec:
46 |   replicas: 1
47 |   selector:
48 |     matchLabels:
49 |       app: melonvote-front
50 |   template:
51 |     metadata:
52 |       labels:
53 |         app: melonvote-front
54 |     spec:
55 |       nodeSelector:
56 |         "kubernetes.io/os": linux
57 |       containers:
58 |       - name: melonvote-front
59 |         image: microsoft/azure-vote-front:v1
60 |         resources:
61 |           requests:
62 |             cpu: 100m
63 |             memory: 128Mi
64 |           limits:
65 |             cpu: 250m
66 |             memory: 256Mi
67 |         ports:
68 |         - containerPort: 80
69 |         env:
70 |         - name: REDIS
71 |           value: "redis-back"
72 | ---
73 | apiVersion: v1
74 | kind: Service
75 | metadata:
76 |   name: melonvote-front
77 | spec:
78 |   type: LoadBalancer
79 |   ports:
80 |   - port: 80
81 |   selector:
82 |     app: melonvote-front
83 |
--------------------------------------------------------------------------------
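To try the sample voting app above, a minimal sketch (assuming your kubeconfig already points at a cluster that can provision a public LoadBalancer) is to apply the manifest and wait for the front-end service to receive an external IP:

```shell
kubectl apply -f sampleapp/sampleapp.yaml

# wait for EXTERNAL-IP to be populated, then browse to it on port 80
kubectl get service melonvote-front --watch
```
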
/screenshots/Certificates.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Certificates.PNG
--------------------------------------------------------------------------------
/screenshots/Check back get pv.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Check back get pv.PNG
--------------------------------------------------------------------------------
/screenshots/Communication.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Communication.PNG
--------------------------------------------------------------------------------
/screenshots/ConfigMap output env.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/ConfigMap output env.PNG
--------------------------------------------------------------------------------
/screenshots/ConfigMap output.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/ConfigMap output.PNG
--------------------------------------------------------------------------------
/screenshots/Controllers.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Controllers.PNG
--------------------------------------------------------------------------------
/screenshots/Cron Job completed.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Cron Job completed.PNG
--------------------------------------------------------------------------------
/screenshots/Cronlog.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Cronlog.PNG
--------------------------------------------------------------------------------
/screenshots/Get PV.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Get PV.PNG
--------------------------------------------------------------------------------
/screenshots/Get PVC.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Get PVC.PNG
--------------------------------------------------------------------------------
/screenshots/Gigabyte and Gibibyte.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Gigabyte and Gibibyte.PNG
--------------------------------------------------------------------------------
/screenshots/Grep.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Grep.PNG
--------------------------------------------------------------------------------
/screenshots/Job completed.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Job completed.PNG
--------------------------------------------------------------------------------
/screenshots/Job running.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Job running.PNG
--------------------------------------------------------------------------------
/screenshots/Kubectl top.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Kubectl top.PNG
--------------------------------------------------------------------------------
/screenshots/Minikubestart.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Minikubestart.PNG
--------------------------------------------------------------------------------
/screenshots/Multi-container network.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Multi-container network.PNG
--------------------------------------------------------------------------------
/screenshots/Multi-container process namespace.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Multi-container process namespace.PNG
--------------------------------------------------------------------------------
/screenshots/Multi-container storage.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Multi-container storage.PNG
--------------------------------------------------------------------------------
/screenshots/Rollout status.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Rollout status.PNG
--------------------------------------------------------------------------------
/screenshots/Service pod across nodes.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Service pod across nodes.PNG
--------------------------------------------------------------------------------
/screenshots/Service.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Service.PNG
--------------------------------------------------------------------------------
/screenshots/Static pod.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/Static pod.PNG
--------------------------------------------------------------------------------
/screenshots/clusterinfo.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/clusterinfo.PNG
--------------------------------------------------------------------------------
/screenshots/kubectlversion.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/kubectlversion.PNG
--------------------------------------------------------------------------------
/screenshots/minikubestatus.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/minikubestatus.PNG
--------------------------------------------------------------------------------
/screenshots/multi-container.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/cloudmelon/Cloud-Native-Playbook/57241ca905432e76067c001ab3db6d6bdfcc5f97/screenshots/multi-container.PNG
--------------------------------------------------------------------------------