├── demo
│   ├── 05-docker-program
│   │   ├── .dockerignore
│   │   ├── Dockerfile
│   │   ├── package.json
│   │   ├── index.js
│   │   ├── blue-docker.html
│   │   └── purple-docker.html
│   ├── 07-helm-demo
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   │   ├── templates
│   │   │   ├── service.yaml
│   │   │   ├── ingress.yaml
│   │   │   ├── deployment.yaml
│   │   │   ├── tests
│   │   │   │   └── test-connection.yaml
│   │   │   ├── _helpers.tpl
│   │   │   └── NOTES.txt
│   │   └── .helmignore
│   ├── 06-helm-demo-without-values
│   │   ├── Chart.yaml
│   │   ├── templates
│   │   │   ├── service.yaml
│   │   │   ├── ingress.yaml
│   │   │   ├── deployment.yaml
│   │   │   ├── tests
│   │   │   │   └── test-connection.yaml
│   │   │   ├── _helpers.tpl
│   │   │   └── NOTES.txt
│   │   └── .helmignore
│   ├── 02-service-demo
│   │   ├── service.yaml
│   │   └── kubernetes-demo.yaml
│   ├── 01-pod-demo
│   │   └── kubernetes-demo.yaml
│   ├── 03-deployment-demo
│   │   └── deployment.yaml
│   └── 04-ingress-demo
│       ├── ingress.yaml
│       ├── service.yaml
│       └── deployment.yaml
├── src
│   ├── Ingress-concept.jpeg
│   ├── blue-whale-screenshot.png
│   ├── kubernetes-structure.jpeg
│   └── purple-whale-screenshot.png
└── README.md
/demo/05-docker-program/.dockerignore:
--------------------------------------------------------------------------------
1 | node_modules
--------------------------------------------------------------------------------
/src/Ingress-concept.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HcwXd/kubernetes-tutorial/HEAD/src/Ingress-concept.jpeg
--------------------------------------------------------------------------------
/src/blue-whale-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HcwXd/kubernetes-tutorial/HEAD/src/blue-whale-screenshot.png
--------------------------------------------------------------------------------
/src/kubernetes-structure.jpeg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HcwXd/kubernetes-tutorial/HEAD/src/kubernetes-structure.jpeg
--------------------------------------------------------------------------------
/src/purple-whale-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/HcwXd/kubernetes-tutorial/HEAD/src/purple-whale-screenshot.png
--------------------------------------------------------------------------------
/demo/05-docker-program/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM node:10.15.3-alpine
2 | WORKDIR /app
3 | ADD . /app
4 | RUN npm install
5 | EXPOSE 3000
6 | CMD node index.js
--------------------------------------------------------------------------------
/demo/07-helm-demo/Chart.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | appVersion: "1.0"
3 | description: A Helm chart for Kubernetes
4 | name: helm-demo
5 | version: 0.1.0
6 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/Chart.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | appVersion: "1.0"
3 | description: A Helm chart for Kubernetes
4 | name: helm-demo-without-values
5 | version: 0.1.0
6 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/values.yaml:
--------------------------------------------------------------------------------
1 | replicaCount: 2
2 |
3 | image:
4 |   repository: hcwxd/blue-whale
5 |
6 | service:
7 |   type: NodePort
8 |   port: 80
9 |
10 | ingress:
11 |   enabled: true
12 |
13 |   hosts:
14 |     - host: blue.demo.com
15 |       paths: [/]
--------------------------------------------------------------------------------
/demo/05-docker-program/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "docker-demo-app",
3 | "version": "1.0.0",
4 | "description": "",
5 | "main": "index.js",
6 | "scripts": {
7 | "start": "node index.js"
8 | },
9 | "dependencies": {
10 | "express": "^4.16.2"
11 | }
12 | }
13 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: blue-service
5 | spec:
6 |   type: NodePort
7 |   selector:
8 |     app: blue-nginx
9 |   ports:
10 |     - protocol: TCP
11 |       port: 80
12 |       targetPort: 3000
13 |
--------------------------------------------------------------------------------
/demo/02-service-demo/service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: my-service
5 | spec:
6 |   selector:
7 |     app: myDeployApp
8 |   type: NodePort
9 |   ports:
10 |     - protocol: TCP
11 |       port: 3002
12 |       targetPort: 3000
13 |       nodePort: 30391
14 |
--------------------------------------------------------------------------------
/demo/01-pod-demo/kubernetes-demo.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 |   name: kubernetes-demo-pod
5 |   labels:
6 |     app: demoApp
7 | spec:
8 |   containers:
9 |     - name: kubernetes-demo-container
10 |       image: hcwxd/kubernetes-demo
11 |       ports:
12 |         - containerPort: 3000
13 |
--------------------------------------------------------------------------------
/demo/02-service-demo/kubernetes-demo.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 |   name: kubernetes-demo-pod
5 |   labels:
6 |     app: demoApp
7 | spec:
8 |   containers:
9 |     - name: kubernetes-demo-container
10 |       image: hcwxd/kubernetes-demo
11 |       ports:
12 |         - containerPort: 3000
13 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/ingress.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Ingress
3 | metadata:
4 |   name: web
5 | spec:
6 |   rules:
7 |     - host: blue.demo.com
8 |       http:
9 |         paths:
10 |           - backend:
11 |               serviceName: blue-service
12 |               servicePort: 80
13 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: {{ include "value-helm-demo.fullname" . }}
5 | spec:
6 |   type: {{ .Values.service.type }}
7 |   ports:
8 |     - port: {{ .Values.service.port }}
9 |       targetPort: 3000
10 |       protocol: TCP
11 |   selector:
12 |     app: {{ include "value-helm-demo.fullname" . }}
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 |   name: blue-nginx
5 | spec:
6 |   replicas: 2
7 |   template:
8 |     metadata:
9 |       labels:
10 |         app: blue-nginx
11 |     spec:
12 |       containers:
13 |         - name: nginx
14 |           image: hcwxd/blue-whale
15 |           ports:
16 |             - containerPort: 3000
17 |
--------------------------------------------------------------------------------
/demo/05-docker-program/index.js:
--------------------------------------------------------------------------------
1 | const express = require('express');
2 | const app = express();
3 | const fs = require('fs');
4 | const docker = fs.readFileSync('./blue-docker.html', 'utf8');
5 |
6 | app.get('/', function(req, res) {
7 |   res.send(docker);
8 | });
9 | const server = require('http').Server(app);
10 | const port = process.env.PORT || 3000;
11 |
12 | server.listen(port, function() {
13 |   console.log(`listening on port ${port}`);
14 | });
15 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/.helmignore:
--------------------------------------------------------------------------------
1 | # Patterns to ignore when building packages.
2 | # This supports shell glob matching, relative path matching, and
3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store
5 | # Common VCS dirs
6 | .git/
7 | .gitignore
8 | .bzr/
9 | .bzrignore
10 | .hg/
11 | .hgignore
12 | .svn/
13 | # Common backup files
14 | *.swp
15 | *.bak
16 | *.tmp
17 | *~
18 | # Various IDEs
19 | .project
20 | .idea/
21 | *.tmproj
22 | .vscode/
23 |
--------------------------------------------------------------------------------
/demo/03-deployment-demo/deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 |   name: my-deployment
5 | spec:
6 |   replicas: 3
7 |   template:
8 |     metadata:
9 |       labels:
10 |         app: demoApp
11 |     spec:
12 |       containers:
13 |         - name: kubernetes-demo-container
14 |           image: hcwxd/kubernetes-demo
15 |           ports:
16 |             - containerPort: 3000
17 |   selector:
18 |     matchLabels:
19 |       app: demoApp
20 |
--------------------------------------------------------------------------------
/demo/04-ingress-demo/ingress.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Ingress
3 | metadata:
4 |   name: web
5 | spec:
6 |   rules:
7 |     - host: blue.demo.com
8 |       http:
9 |         paths:
10 |           - backend:
11 |               serviceName: blue-service
12 |               servicePort: 80
13 |     - host: purple.demo.com
14 |       http:
15 |         paths:
16 |           - backend:
17 |               serviceName: purple-service
18 |               servicePort: 80
19 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/.helmignore:
--------------------------------------------------------------------------------
1 | # Patterns to ignore when building packages.
2 | # This supports shell glob matching, relative path matching, and
3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store
5 | # Common VCS dirs
6 | .git/
7 | .gitignore
8 | .bzr/
9 | .bzrignore
10 | .hg/
11 | .hgignore
12 | .svn/
13 | # Common backup files
14 | *.swp
15 | *.bak
16 | *.tmp
17 | *~
18 | # Various IDEs
19 | .project
20 | .idea/
21 | *.tmproj
22 | .vscode/
23 |
--------------------------------------------------------------------------------
/demo/04-ingress-demo/service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: blue-service
5 | spec:
6 |   type: NodePort
7 |   selector:
8 |     app: blue-nginx
9 |   ports:
10 |     - protocol: TCP
11 |       port: 80
12 |       targetPort: 3000
13 |
14 | ---
15 | apiVersion: v1
16 | kind: Service
17 | metadata:
18 |   name: purple-service
19 | spec:
20 |   type: NodePort
21 |   selector:
22 |     app: purple-nginx
23 |   ports:
24 |     - protocol: TCP
25 |       port: 80
26 |       targetPort: 3000
27 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/ingress.yaml:
--------------------------------------------------------------------------------
1 | {{- if .Values.ingress.enabled -}}
2 | {{- $fullName := include "value-helm-demo.fullname" . -}}
3 | apiVersion: extensions/v1beta1
4 | kind: Ingress
5 | metadata:
6 |   name: {{ $fullName }}
7 | spec:
8 |   rules:
9 |   {{- range .Values.ingress.hosts }}
10 |     - host: {{ .host | quote }}
11 |       http:
12 |         paths:
13 |         {{- range .paths }}
14 |           - backend:
15 |               serviceName: {{ $fullName }}
16 |               servicePort: 80
17 |         {{- end }}
18 |   {{- end }}
19 | {{- end }}
20 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 |   name: {{ include "value-helm-demo.fullname" . }}
5 | spec:
6 |   replicas: {{ .Values.replicaCount }}
7 |   selector:
8 |     matchLabels:
9 |       app: {{ include "value-helm-demo.fullname" . }}
10 |   template:
11 |     metadata:
12 |       labels:
13 |         app: {{ include "value-helm-demo.fullname" . }}
14 |     spec:
15 |       containers:
16 |         - name: {{ .Chart.Name }}
17 |           image: "{{ .Values.image.repository }}"
18 |           ports:
19 |             - containerPort: 3000
20 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/tests/test-connection.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 | name: "{{ include "helm-demo.fullname" . }}-test-connection"
5 | labels:
6 | app.kubernetes.io/name: {{ include "helm-demo.name" . }}
7 | helm.sh/chart: {{ include "helm-demo.chart" . }}
8 | app.kubernetes.io/instance: {{ .Release.Name }}
9 | app.kubernetes.io/managed-by: {{ .Release.Service }}
10 | annotations:
11 | "helm.sh/hook": test-success
12 | spec:
13 | containers:
14 | - name: wget
15 | image: busybox
16 | command: ['wget']
17 | args: ['{{ include "helm-demo.fullname" . }}:{{ .Values.service.port }}']
18 | restartPolicy: Never
19 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/tests/test-connection.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 | name: "{{ include "value-helm-demo.fullname" . }}-test-connection"
5 | labels:
6 | app.kubernetes.io/name: {{ include "value-helm-demo.name" . }}
7 | helm.sh/chart: {{ include "value-helm-demo.chart" . }}
8 | app.kubernetes.io/instance: {{ .Release.Name }}
9 | app.kubernetes.io/managed-by: {{ .Release.Service }}
10 | annotations:
11 | "helm.sh/hook": test-success
12 | spec:
13 | containers:
14 | - name: wget
15 | image: busybox
16 | command: ['wget']
17 | args: ['{{ include "value-helm-demo.fullname" . }}:{{ .Values.service.port }}']
18 | restartPolicy: Never
19 |
--------------------------------------------------------------------------------
/demo/04-ingress-demo/deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Deployment
3 | metadata:
4 | name: blue-nginx
5 | spec:
6 | replicas: 2
7 | template:
8 | metadata:
9 | labels:
10 | app: blue-nginx
11 | spec:
12 | containers:
13 | - name: nginx
14 | image: hcwxd/blue-whale
15 | ports:
16 | - containerPort: 3000
17 |
18 | ---
19 | apiVersion: extensions/v1beta1
20 | kind: Deployment
21 | metadata:
22 | name: purple-nginx
23 | spec:
24 | replicas: 2
25 | template:
26 | metadata:
27 | labels:
28 | app: purple-nginx
29 | spec:
30 | containers:
31 | - name: nginx
32 | image: hcwxd/purple-whale
33 | ports:
34 | - containerPort: 3000
35 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/_helpers.tpl:
--------------------------------------------------------------------------------
1 | {{/* vim: set filetype=mustache: */}}
2 | {{/*
3 | Expand the name of the chart.
4 | */}}
5 | {{- define "helm-demo.name" -}}
6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
7 | {{- end -}}
8 |
9 | {{/*
10 | Create a default fully qualified app name.
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
12 | If release name contains chart name it will be used as a full name.
13 | */}}
14 | {{- define "helm-demo.fullname" -}}
15 | {{- if .Values.fullnameOverride -}}
16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
17 | {{- else -}}
18 | {{- $name := default .Chart.Name .Values.nameOverride -}}
19 | {{- if contains $name .Release.Name -}}
20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
21 | {{- else -}}
22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
23 | {{- end -}}
24 | {{- end -}}
25 | {{- end -}}
26 |
27 | {{/*
28 | Create chart name and version as used by the chart label.
29 | */}}
30 | {{- define "helm-demo.chart" -}}
31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
32 | {{- end -}}
33 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/_helpers.tpl:
--------------------------------------------------------------------------------
1 | {{/* vim: set filetype=mustache: */}}
2 | {{/*
3 | Expand the name of the chart.
4 | */}}
5 | {{- define "value-helm-demo.name" -}}
6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
7 | {{- end -}}
8 |
9 | {{/*
10 | Create a default fully qualified app name.
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
12 | If release name contains chart name it will be used as a full name.
13 | */}}
14 | {{- define "value-helm-demo.fullname" -}}
15 | {{- if .Values.fullnameOverride -}}
16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
17 | {{- else -}}
18 | {{- $name := default .Chart.Name .Values.nameOverride -}}
19 | {{- if contains $name .Release.Name -}}
20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
21 | {{- else -}}
22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
23 | {{- end -}}
24 | {{- end -}}
25 | {{- end -}}
26 |
27 | {{/*
28 | Create chart name and version as used by the chart label.
29 | */}}
30 | {{- define "value-helm-demo.chart" -}}
31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
32 | {{- end -}}
33 |
--------------------------------------------------------------------------------
/demo/06-helm-demo-without-values/templates/NOTES.txt:
--------------------------------------------------------------------------------
1 | 1. Get the application URL by running these commands:
2 | {{- if .Values.ingress.enabled }}
3 | {{- range $host := .Values.ingress.hosts }}
4 | {{- range .paths }}
5 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
6 | {{- end }}
7 | {{- end }}
8 | {{- else if contains "NodePort" .Values.service.type }}
9 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "helm-demo.fullname" . }})
10 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
11 | echo http://$NODE_IP:$NODE_PORT
12 | {{- else if contains "LoadBalancer" .Values.service.type }}
13 | NOTE: It may take a few minutes for the LoadBalancer IP to be available.
14 | You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "helm-demo.fullname" . }}'
15 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "helm-demo.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
16 | echo http://$SERVICE_IP:{{ .Values.service.port }}
17 | {{- else if contains "ClusterIP" .Values.service.type }}
18 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "helm-demo.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
19 | echo "Visit http://127.0.0.1:8080 to use your application"
20 | kubectl port-forward $POD_NAME 8080:80
21 | {{- end }}
22 |
--------------------------------------------------------------------------------
/demo/07-helm-demo/templates/NOTES.txt:
--------------------------------------------------------------------------------
1 | 1. Get the application URL by running these commands:
2 | {{- if .Values.ingress.enabled }}
3 | {{- range $host := .Values.ingress.hosts }}
4 | {{- range .paths }}
5 | http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
6 | {{- end }}
7 | {{- end }}
8 | {{- else if contains "NodePort" .Values.service.type }}
9 | export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "value-helm-demo.fullname" . }})
10 | export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
11 | echo http://$NODE_IP:$NODE_PORT
12 | {{- else if contains "LoadBalancer" .Values.service.type }}
13 | NOTE: It may take a few minutes for the LoadBalancer IP to be available.
14 | You can watch the status of by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "value-helm-demo.fullname" . }}'
15 | export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "value-helm-demo.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
16 | echo http://$SERVICE_IP:{{ .Values.service.port }}
17 | {{- else if contains "ClusterIP" .Values.service.type }}
18 | export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "value-helm-demo.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
19 | echo "Visit http://127.0.0.1:8080 to use your application"
20 | kubectl port-forward $POD_NAME 8080:80
21 | {{- end }}
22 |
--------------------------------------------------------------------------------
/demo/05-docker-program/blue-docker.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
14 |
15 |
16 |
17 |
18 | ............................................................,.......................................
19 | ................................................?%%%%%%%%%%S,.......................................
20 | ................................................%%????????%%........................................
21 | ................................................%%?????????%........................................
22 | ................................................%%?%?*??%??%........................................
23 | ................................................%%?%????%??%........................................
24 | ................................................%%?%????%??%........................................
25 | ................................................%%?%????%??%........................................
26 | ................................................%%?%????%??%........................................
27 | ................................................%%?????????%........................................
28 | ............................;*******************%%%%%%%%%%%%........................................
29 | ...........................%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%........................................
30 | ...........................%%*??????*%%?????????%%????????%%........................................
31 | ...........................%%????????%%?????????%%?????????%.................;%.....................
32 | ...........................%%?%????%?%%??%?%????%%?%????%??%.................%%%*...................
33 | ...........................%%?%????%?%%??%?%????%%?%????%??%................%%?%%?..................
34 | ...........................%%?%????%?%%??%?%????%%?%????%??%................%%??%%*.................
35 | ...........................%%?%????%?%%??%?%????%%?%????%??%...............%%?????%:................
36 | ...........................%%?%*???%?%%??%?%????%%?%????%??%...............%%?????%S................
37 | ...........................%%????????%%*????????%%?????????%...............%???????%*...............
38 | .................+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%....,%???????%%,..............
39 | ................,%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%S%...,%????????%...............
40 | ................+%?????????%%????????%%??*???**?%%???????*?%?????*???%%...,%????????S,+?%%?;........
41 | ................+%?????????%%????????%%??????*??%%*??**??*?%?????????%%...,%???????*%%%%%%%%%%%.....
42 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....%?????????????????%%%....
43 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....%%??????????????????%%...
44 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....?%??????????????????%....
45 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%.....%%????????????????%%....
46 | ................+%?????????%%????????%%*????????%%?????????%?????????%%.....?%??????????????*%%.....
47 | ................+%?????????%%????????%%????????*%%?????????%?????????%%..,%%S%??????????????%%,.....
48 | ............:%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*?????????????%%%.......
49 | ...........;%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%???????????????%%%%?........
50 | ...........%%?????????????????????????????????????????????????????????????????????%%%%%%%+..........
51 | ...........%?????????????????????????????????????????????????????????????????????*%+,...............
52 | ...........%?????????????????????????????????????????????????????????????????????%%,................
53 | ...........%%????????????????????????????????????????????????????????????????????%?,................
54 | ...........%%???????????????????????????????????????????????????????????????????%%,.................
55 | ...........%%???????????????????????????????????????????????????????????????????%%..................
56 | ...........%%??????????????????????????????????????????????????????????????????%S...................
57 | ...........%%??????????????????????????????????????????????????????????????????%%...................
58 | ...........;%??????????????????????????????????????????????????????????????????%....................
59 | ...........,%?????????????????????????????????????????????????????????????????S%....................
60 | ............%???????????????????????**???????????????????????????????????????%%,....................
61 | ............%%???????????????????????...?????????????????????????????????????%?.....................
62 | ............*%?????????????????????*.*%..???????????????????????????????????%%......................
63 | .............%?????????????????????*.%%,.??????????????????????????????????%%.......................
64 | .............%%?????????????????????,%%%.*?????????????????????????????????%*.......................
65 | .............%%?????????????????????.,%..?????????????????????????????????%S........................
66 | ..............%?????????????????????*..,?????????????????????????????????%S.........................
67 | ..............%%????????????????????????????????????????????????????????%%..........................
68 | ..............,%???????????????????????????????????????????????????????%%...........................
69 | ...............%%??????????????%%%????????????????????????????????????%S,...........................
70 | ................%%??????????%%%%,,??????????????????????????????????%S%,............................
71 | ................;%%%%%%%%%%%%%....,????????????????????????????????%%%..............................
72 | .................%%%%%%%%%%,.......+??????????????????????????????%%+...............................
73 | ..................%S;...............????????????????????????????%%%,................................
74 | ...................%%+...............?????????????????????????%%%;..,...............................
75 | ....................*%%...............+*???????????????????*%%%?....................................
76 | .....................:%%%..............,*???????????????*?SS%%......................................
77 | ....................,...%%%%+.............????????????%%%%%.........................................
78 | ..........................*%%%%%%:,.......,.????%%%%%%%%,...........................................
79 | .............................:%%%%%%%%%%%%%%%%%%%%%?:...............................................
80 | ...................................,;*??%??*+;,.....................................................
81 | ....................................................................................................
82 |
83 |
84 |
--------------------------------------------------------------------------------
/demo/05-docker-program/purple-docker.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
14 |
15 |
16 |
17 |
18 | ............................................................,.......................................
19 | ................................................?%%%%%%%%%%S,.......................................
20 | ................................................%%????????%%........................................
21 | ................................................%%?????????%........................................
22 | ................................................%%?%?*??%??%........................................
23 | ................................................%%?%????%??%........................................
24 | ................................................%%?%????%??%........................................
25 | ................................................%%?%????%??%........................................
26 | ................................................%%?%????%??%........................................
27 | ................................................%%?????????%........................................
28 | ............................;*******************%%%%%%%%%%%%........................................
29 | ...........................%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%........................................
30 | ...........................%%*??????*%%?????????%%????????%%........................................
31 | ...........................%%????????%%?????????%%?????????%.................;%.....................
32 | ...........................%%?%????%?%%??%?%????%%?%????%??%.................%%%*...................
33 | ...........................%%?%????%?%%??%?%????%%?%????%??%................%%?%%?..................
34 | ...........................%%?%????%?%%??%?%????%%?%????%??%................%%??%%*.................
35 | ...........................%%?%????%?%%??%?%????%%?%????%??%...............%%?????%:................
36 | ...........................%%?%*???%?%%??%?%????%%?%????%??%...............%%?????%S................
37 | ...........................%%????????%%*????????%%?????????%...............%???????%*...............
38 | .................+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%....,%???????%%,..............
39 | ................,%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%S%...,%????????%...............
40 | ................+%?????????%%????????%%??*???**?%%???????*?%?????*???%%...,%????????S,+?%%?;........
41 | ................+%?????????%%????????%%??????*??%%*??**??*?%?????????%%...,%???????*%%%%%%%%%%%.....
42 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....%?????????????????%%%....
43 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....%%??????????????????%%...
44 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%....?%??????????????????%....
45 | ................+%??%?%????%%?%????%?%%??%?%????%%?%????%??%??%?%??%?%%.....%%????????????????%%....
46 | ................+%?????????%%????????%%*????????%%?????????%?????????%%.....?%??????????????*%%.....
47 | ................+%?????????%%????????%%????????*%%?????????%?????????%%..,%%S%??????????????%%,.....
48 | ............:%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%*?????????????%%%.......
49 | ...........;%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%???????????????%%%%?........
50 | ...........%%?????????????????????????????????????????????????????????????????????%%%%%%%+..........
51 | ...........%?????????????????????????????????????????????????????????????????????*%+,...............
52 | ...........%?????????????????????????????????????????????????????????????????????%%,................
53 | ...........%%????????????????????????????????????????????????????????????????????%?,................
54 | ...........%%???????????????????????????????????????????????????????????????????%%,.................
55 | ...........%%???????????????????????????????????????????????????????????????????%%..................
56 | ...........%%??????????????????????????????????????????????????????????????????%S...................
57 | ...........%%??????????????????????????????????????????????????????????????????%%...................
58 | ...........;%??????????????????????????????????????????????????????????????????%....................
59 | ...........,%?????????????????????????????????????????????????????????????????S%....................
60 | ............%???????????????????????**???????????????????????????????????????%%,....................
61 | ............%%???????????????????????...?????????????????????????????????????%?.....................
62 | ............*%?????????????????????*.*%..???????????????????????????????????%%......................
63 | .............%?????????????????????*.%%,.??????????????????????????????????%%.......................
64 | .............%%?????????????????????,%%%.*?????????????????????????????????%*.......................
65 | .............%%?????????????????????.,%..?????????????????????????????????%S........................
66 | ..............%?????????????????????*..,?????????????????????????????????%S.........................
67 | ..............%%????????????????????????????????????????????????????????%%..........................
68 | ..............,%???????????????????????????????????????????????????????%%...........................
69 | ...............%%??????????????%%%????????????????????????????????????%S,...........................
70 | ................%%??????????%%%%,,??????????????????????????????????%S%,............................
71 | ................;%%%%%%%%%%%%%....,????????????????????????????????%%%..............................
72 | .................%%%%%%%%%%,.......+??????????????????????????????%%+...............................
73 | ..................%S;...............????????????????????????????%%%,................................
74 | ...................%%+...............?????????????????????????%%%;..,...............................
75 | ....................*%%...............+*???????????????????*%%%?....................................
76 | .....................:%%%..............,*???????????????*?SS%%......................................
77 | ....................,...%%%%+.............????????????%%%%%.........................................
78 | ..........................*%%%%%%:,.......,.????%%%%%%%%,...........................................
79 | .............................:%%%%%%%%%%%%%%%%%%%%%?:...............................................
80 | ...................................,;*??%??*+;,.....................................................
81 | ....................................................................................................
82 |
83 |
84 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes - Basic Concepts 101
2 |
3 | Web-friendly version: [https://chengweihu.com/kubernetes-tutorial-1-pod-node/](https://chengweihu.com/kubernetes-tutorial-1-pod-node/)
4 |
5 | > The problem Kubernetes aims to solve: manually deploying multiple containers to multiple machines, and then monitoring and managing the state of all of those containers, is very tedious.
6 | >
7 | > The solution Kubernetes offers: a platform that automates the operation and management of containers through a higher level of abstraction.
8 |
9 | 
10 |
11 | Kubernetes (K8S) is a system that helps us manage microservices. It can automatically deploy and manage multiple containers across multiple machines. In short, it can:
12 |
13 | - Deploy multiple containers to multiple machines at the same time
14 | - Manage the state of those containers, automatically detecting and restarting failed ones
15 |
16 | ### Table of Contents
17 |
18 | - [**Four Basic Kubernetes Components**](#four-basic-kubernetes-components)
19 | - [**Basic Operation and Installation**](#basic-operation-and-installation)
20 | - [**How to Create a Pod**](#how-to-create-a-pod)
21 | - [**Three Advanced Kubernetes Components**](#three-advanced-kubernetes-components)
22 | - [**Helm**](#helm)
23 | - [**Additional kubectl Notes**](#additional-kubectl-notes)
24 |
25 | ## Four Basic Kubernetes Components
26 |
27 | Before looking at how Kubernetes helps us manage containers, we first need to understand, from smallest to largest, the four most basic components that make up Kubernetes: Pod, Worker Node, Master Node, and Cluster.
28 |
29 | ### Pod
30 |
31 | The smallest unit of operation in Kubernetes; one Pod corresponds to one application service (Application).
32 |
33 | - Every Pod has an ID card, namely the `yaml` file that belongs to it
34 | - A Pod can contain one or more Containers, but in general a Pod should hold only one Container
35 | - Containers in the same Pod share the same resources and network, and communicate with each other through local port numbers
36 |
37 | ### Worker Node
38 |
39 | The smallest hardware unit in Kubernetes. A Worker Node (or simply Node) corresponds to one machine; it can be a physical machine such as your laptop, or a virtual machine such as an EC2 instance on AWS or a Compute Engine instance on GCP.
40 |
41 | Every Node contains three components: kubelet, kube-proxy, and the Container Runtime (a quick way to inspect them on a running Node is sketched after this list).
42 |
43 | - kubelet
44 |   - The manager of the Node, responsible for managing the state of every Pod on the Node and for communicating with the Master
45 | - kube-proxy
46 |   - The messenger of the Node, responsible for updating the Node's iptables so that other Kubernetes objects outside the Node can learn the latest state of all Pods on it
47 | - Container Runtime
48 |   - The program on the Node that actually runs the containers; for Docker containers the corresponding Container Runtime is the Docker Engine
49 |
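On a running cluster (assuming the minikube setup used later in this tutorial), you can take a quick look at these per-Node components; this is just an illustrative sketch and the exact output depends on your Kubernetes version:

```
# The CONTAINER-RUNTIME column shows which runtime each Node uses
kubectl get nodes -o wide

# Shows the kubelet version, the runtime, capacity, and the Pods on the Node
kubectl describe node minikube
```
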
50 | ### Master Node
51 |
52 | The command center of Kubernetes. You can think of it, simplified, as a specialized Node responsible for managing all the other Nodes. A Master Node (or simply Master) contains four components: kube-apiserver, etcd, kube-scheduler, and kube-controller-manager (you can list them on a live cluster as shown right after this list).
53 |
54 | - kube-apiserver
55 |
56 |   - The endpoint for all the APIs Kubernetes needs; for example, kubectl commands issued from the command line are sent here
57 |   - The communication bridge between Nodes: Nodes cannot talk to each other directly and must go through the apiserver
58 |   - Responsible for authenticating and authorizing requests inside Kubernetes
59 |
60 | - etcd
61 |
62 |   - Stores the data of the Kubernetes Cluster as a backup; when the Master fails for some reason, we can use etcd to restore the state of Kubernetes
63 |
64 | - kube-controller-manager
65 |
66 |   - The component that manages and runs the Kubernetes controllers. Simply put, a controller is a process in Kubernetes that watches the state of the Cluster, for example the Node Controller or the Replication Controller. When the current state does not match the desired state, these processes try to update the current state
67 |   - For example: if we need one more machine to handle a sudden spike in traffic, the desired state becomes N+1 while the current state is N, and the corresponding controller will try to bring up one more machine
68 |   - The controller-manager's watching and updating also go through the kube-apiserver
69 |
70 | - kube-scheduler
71 |
72 |   - The Pod scheduler for the whole cluster. The scheduler watches for newly created Pods that have not yet been assigned to a Node and, based on each Node's resource requirements, hardware constraints, and other conditions, picks the most suitable Node for the Pod to run on
73 |
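As an aside (again assuming minikube), these Master components run as Pods in the `kube-system` namespace, so you can list them directly; the exact Pod names vary by version:

```
# kube-apiserver, etcd, kube-scheduler and kube-controller-manager
# should all appear in this listing
kubectl get pods -n kube-system
```
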
74 | ### Cluster
75 |
76 | The collection of multiple Nodes and Masters in Kubernetes. You can basically think of it as the unit that groups together all the Nodes in the same environment.
77 |
78 | ## Basic Operation and Installation
79 |
80 | ![Kubernetes structure](./src/kubernetes-structure.jpeg)
81 |
82 | ### Basic Operation
83 |
84 | Next, let's use a simple question, "How does Kubernetes create a Pod?", to review the overall architecture of Kubernetes. The figure above shows a simplified Kubernetes Cluster; a Cluster usually has multiple Masters for redundancy, but for simplicity only one is shown.
85 |
86 | When a user wants to deploy a new Pod to the Kubernetes Cluster, the user first enters the corresponding command through the User Command interface (kubectl); how to create a Pod is explained further below. The command passes through an authentication layer that confirms the sender's identity and is then forwarded to the API Server on the Master Node, and the API Server backs the request up to etcd.
87 |
88 | Next, the controller-manager learns from the API Server that a new Pod needs to be created and, after checking that resources allow it, creates the new Pod. Finally, when the Scheduler periodically polls the API Server, it asks the controller-manager whether any new Pods have been built; when it finds a newly created Pod, the Scheduler assigns it to the most suitable Node.
89 |
90 | ### Installing Kubernetes
91 |
92 | Before getting hands-on with Kubernetes on your local machine, you need to install three tools: Minikube, VirtualBox, and kubectl. The instructions below assume macOS (one possible Homebrew-based installation is sketched after this list):
93 |
94 | Minikube
95 |
96 | - A lightweight tool released by Google that lets developers easily try out a Kubernetes Cluster. Minikube creates a virtual machine on the local host and runs a single-node Kubernetes Cluster inside it
97 | - [Download from GitHub](https://github.com/kubernetes/minikube)
98 |
99 | VirtualBox
100 |
101 | - Because Minikube runs Kubernetes inside a virtual machine, you first need to install a virtualization tool; here we simply use VirtualBox
102 | - [Download from the official site](https://www.virtualbox.org/wiki/Downloads)
103 |
104 | Kubectl
105 |
106 | - kubectl is the command-line tool for Kubernetes; we will use it to operate our Kubernetes Cluster
107 | - [Download from the official site](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
108 |
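As a minimal sketch (assuming Homebrew is already installed; formula and cask names may differ slightly between Homebrew versions), all three tools can also be installed from the terminal:

```
# Kubernetes command-line tool
brew install kubectl

# Local single-node cluster
brew install minikube

# Virtualization backend used by minikube in this tutorial
brew install --cask virtualbox
```
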
109 | ## How to Create a Pod
110 |
111 | With the required tools installed, we can now practice creating our first Pod.
112 |
113 | ### Starting Minikube
114 |
115 | After installing minikube, we can first run
116 |
117 | ```
118 | minikube
119 | ```
120 |
121 | to see all of the available minikube commands, and then run
122 |
123 | ```
124 | minikube start
125 | ```
126 |
127 | to start minikube. Finally, here are five more commonly used commands:
128 |
129 | List the status of minikube
130 |
131 | ```
132 | minikube status
133 | ```
134 |
135 | Stop minikube
136 |
137 | ```
138 | minikube stop
139 | ```
140 |
141 | SSH into the minikube VM
142 |
143 | ```
144 | minikube ssh
145 | ```
146 |
147 | Look up minikube's external IP
148 |
149 | ```
150 | minikube ip
151 | ```
152 |
153 | Inspect the state of the Cluster through the browser GUI that minikube provides
154 |
155 | ```
156 | minikube dashboard
157 | ```
158 |
159 | ### Preparing the Program to Run in the Pod
160 |
161 | After starting Minikube, the next step is to pick the program that will run inside our Pod. We will package this program into an image and push it to DockerHub. Our target example here is a Node.js web app; the code is available in this [Github Repo](https://github.com/HcwXd/docker-tutorial/tree/master/docker-demo-app). In short, the program starts a server listening on port 3000; when a request comes in it renders the `docker.html` file, and a cute little whale appears on the page.
162 |
163 | You can also try building the Docker image yourself first with `docker build -t`
164 |
165 | ```
166 | docker build -t yourDockerAccount/yourDockerApp .
167 | ```
168 |
169 | and then pushing it to DockerHub with `docker push`
170 |
171 | ```
172 | docker push yourDockerAccount/yourDockerApp:latest
173 | ```
174 |
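Before pushing, you can optionally sanity-check the image locally; this sketch reuses the placeholder image name from above:

```
# Map local port 3000 to the container's port 3000,
# then open localhost:3000 in the browser
docker run -p 3000:3000 yourDockerAccount/yourDockerApp
```
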
175 | Here, I have already pushed the Node.js web app above to this [Dockerhub Repo](https://hub.docker.com/r/hcwxd/kubernetes-demo).
176 |
177 | Now we are ready to actually create a Pod.
178 |
179 | ### Writing the Pod's ID Card
180 |
181 | Remember that when introducing Kubernetes we mentioned that every Pod has an ID card, namely the `.yaml` file that belongs to it. By writing this `.yaml` file we can create the Pod.
182 |
183 | - `kubernetes-demo.yaml`
184 |
185 | ```yaml
186 | apiVersion: v1
187 | kind: Pod
188 | metadata:
189 |   name: kubernetes-demo-pod
190 |   labels:
191 |     app: demoApp
192 | spec:
193 |   containers:
194 |     - name: kubernetes-demo-container
195 |       image: hcwxd/kubernetes-demo
196 |       ports:
197 |         - containerPort: 3000
198 | ```
199 |
200 | - apiVersion
201 |   The API version for this object
202 |
203 | - kind
204 |
205 |   What kind of object this is; common kinds include `Pod`, `Node`, `Service`, `Namespace`, `ReplicationController`, and so on
206 |
207 | - metadata
208 |   - name
209 |     The name of this Pod
210 |   - labels
211 |     The labels attached to this Pod; for now we label it with `app=demoApp`
212 | - spec
213 |   - container.name
214 |     The name of the Container that will be run
215 |   - container.image
216 |     The image the Container should use; it is looked up on DockerHub
217 |   - container.ports
218 |     The port numbers on the Container that external resources are allowed to access
219 |
220 | ### Creating the Pod with kubectl
221 |
222 | With the ID card in hand, we can create the Pod with a kubectl command
223 |
224 | ```
225 | kubectl create -f kubernetes-demo.yaml
226 | ```
227 |
228 | Seeing the message `pod/kubernetes-demo-pod created` means our first Pod was created successfully. We can then run
229 |
230 | ```
231 | kubectl get pods
232 | ```
233 |
234 | to see our running Pod:
235 |
236 | ```
237 | NAME READY STATUS RESTARTS AGE
238 | kubernetes-demo-pod 1/1 Running 0 60s
239 | ```
240 |
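To dig a little deeper into the Pod, two standard kubectl commands are handy (shown here as a sketch; the output depends on your cluster):

```
# Events, assigned Node, container image, ports, and more
kubectl describe pod kubernetes-demo-pod

# Stdout of the Node.js server running inside the Pod
kubectl logs kubernetes-demo-pod
```
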
241 | ### Connecting to the Service Running in Our Pod
242 |
243 | After the Pod is created, if we open `localhost:3000` in the browser we find that nothing shows up. That is because the port specified inside the Pod is not connected to the port on our local machine. We therefore still need `kubectl port-forward` to map the two ports to each other.
244 |
245 | ```
246 | kubectl port-forward kubernetes-demo-pod 3000:3000
247 | ```
248 |
249 | Once the mapping is in place, open `localhost:3000` in the browser again and we can welcome a cute little whale!
250 |
251 | ![Blue whale screenshot](./src/blue-whale-screenshot.png)
252 |
253 | ## Three Advanced Kubernetes Components
254 |
255 | Now that we know how to build a Kubernetes Cluster from scratch and create a Pod, let's look at the advanced Kubernetes components we also use in real-world applications. The three most important ones are Service, Ingress, and Deployment.
256 |
257 | ### Service
258 |
259 | Recall that when connecting to the service running in a single Pod we used the `port-forward` command. But when we have multiple Pods that need to be reachable at the same time, we can use the Service component. Simply put, a Service is the Kubernetes component that defines how a group of Pods is connected to and accessed.
260 |
261 | To create a Service, we again write an ID card for it.
262 |
263 | - `service.yaml`
264 |
265 | ```yaml
266 | apiVersion: v1
267 | kind: Service
268 | metadata:
269 |   name: my-service
270 | spec:
271 |   selector:
272 |     app: demoApp
273 |   type: NodePort
274 |   ports:
275 |     - protocol: TCP
276 |       port: 3001
277 |       targetPort: 3000
278 |       nodePort: 30390
279 | ```
280 |
281 | - apiVersion
282 |   The API version for this object
283 |
284 | - kind
285 |
286 |   What kind of object this is; common kinds include `Pod`, `Node`, `Service`, `Namespace`, `ReplicationController`, and so on
287 |
288 | - metadata
289 |
290 |   - name
291 |     The name of this Service
292 |
293 | - spec
294 |
295 |   - selector
296 |
297 |     Which group of Pods this Service's connection rules apply to. Remember that when we created the Pod we gave it a `label`; with `app: demoApp` the Service finds the group of Pods whose `app` label is `demoApp`
298 |
299 |   - ports (the sketch right after this list walks through how the three ports chain together)
300 |     - targetPort
301 |       The port number on our Pod that external resources are allowed to access
302 |     - port
303 |       The port on the Service's ClusterIP that the Pod's targetPort is mapped to
304 |     - nodePort
305 |       The port on the Node that the Pod's targetPort is mapped to
306 |
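To make the three port fields concrete, here is how a request flows with the values above (a sketch; the IPs are the example values used later in this section):

```
# From outside the cluster: hit the Node's IP on the nodePort
curl 192.168.99.100:30390

# From inside the cluster: hit the Service's ClusterIP on `port`
curl 10.110.237.205:3001

# Either way, the Service forwards the request to targetPort 3000
# on one of the Pods labeled app=demoApp
```
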
307 | Next, let's first re-create our Pod
308 |
309 | ```
310 | kubectl create -f kubernetes-demo.yaml
311 | ```
312 |
313 | and then create our Service component from `service.yaml`
314 |
315 | ```
316 | kubectl create -f service.yaml
317 | ```
318 |
319 | We can then run
320 |
321 | ```
322 | kubectl get services
323 | ```
324 |
325 | to get the details of our newly created Service
326 |
327 | ```
328 | NAME         TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
329 | my-service   NodePort   10.110.237.205   <none>        3001:30390/TCP   60s
330 | ```
331 |
332 | With the Service in place, there are two ways to connect to the service our Pods provide. First, to connect from the outside, we need the externally reachable IP of our Kubernetes Cluster (here, minikube). Run
333 |
334 | ```
335 | minikube ip
336 | ```
337 |
338 | to get the minikube IP
339 |
340 | ```
341 | 192.168.99.100
342 | ```
343 |
344 | Then open the browser and enter that IP plus the `nodePort` we specified in the `yaml` file, here `192.168.99.100:30390`, and we get our little whale.
345 |
346 | To connect to our Pod from inside minikube, first run
347 |
348 | ```
349 | minikube ssh
350 | ```
351 |
352 | to SSH into the minikube cluster, and then run
353 |
354 | ```
355 | curl <CLUSTER-IP>:<port>
356 | ```
357 |
358 | where `CLUSTER-IP` is the IP of our Service obtained from `kubectl get services`, and `port` is the `port` we specified in the `yaml` file; here that works out to `10.110.237.205:3001`, so we run
359 |
360 | ```
361 | curl 10.110.237.205:3001
362 | ```
363 |
364 | and we get our little whale from inside minikube too!
365 |
366 | ### Deployment
367 |
368 | After Service, the second advanced component to understand is Deployment. Suppose we want to scale a Pod horizontally, that is, replicate several identical Pods that serve traffic in the Cluster at the same time, and restart any Pod that crashes. Creating and monitoring those Pods one at a time by hand would take far too long, so we use the Deployment component to meet these requirements for us.
369 |
370 | Likewise, to create a Deployment we first write an ID card for it.
371 |
372 | `deployment.yaml`
373 |
374 | ```yaml
375 | apiVersion: apps/v1
376 | kind: Deployment
377 | metadata:
378 |   name: my-deployment
379 | spec:
380 |   replicas: 3
381 |   template:
382 |     metadata:
383 |       labels:
384 |         app: demoApp
385 |     spec:
386 |       containers:
387 |         - name: kubernetes-demo-container
388 |           image: hcwxd/kubernetes-demo
389 |           ports:
390 |             - containerPort: 3000
391 |   selector:
392 |     matchLabels:
393 |       app: demoApp
394 | ```
395 |
396 | - apiVersion
397 |   The API version for this object
398 |
399 | - kind
400 |
401 |   What kind of object this is; common kinds include `Pod`, `Node`, `Service`, `Namespace`, `ReplicationController`, and so on
402 |
403 | - metadata
404 |
405 |   - name
406 |     The name of this Deployment
407 |
408 | - spec
409 |
410 |   - replicas
411 |
412 |     How many identical Pods to create. This number is the desired state: while the Cluster is running, if the Pod count falls below it Kubernetes automatically adds Pods for us, and if it rises above it Kubernetes shuts Pods down (see the scaling example right after this list)
413 |
414 |   - template
415 |
416 |     - The shared settings for the Pods this Deployment creates, including their metadata and their Containers; here we reuse the settings from the Pod we created earlier
417 |
418 |   - selector
419 |
420 |     - Which Pods this Deployment's rules apply to; here it targets the labels we specified in the template
421 |
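For reference, once the Deployment has been created (which we do next), the desired replica count can also be changed on the fly; this is a sketch of the standard command rather than a step from the original tutorial:

```
# Change the desired state to 5 replicas;
# the Deployment controller creates or removes Pods to match
kubectl scale deployment my-deployment --replicas=5
```
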
422 | Now we can run
423 |
424 | ```
425 | kubectl create -f deployment.yaml
426 | ```
427 |
428 | to create our Deployment, and then check whether the Deployment was created successfully
429 |
430 | ```
431 | kubectl get deploy
432 | ```
433 |
434 | ```
435 | NAME            READY   UP-TO-DATE   AVAILABLE   AGE
436 | my-deployment   3/3     3            3           60s
437 | ```
438 |
439 | Next, let's check whether the Pods were created according to the Deployment
440 |
441 | ```
442 | kubectl get pods
443 | ```
444 |
445 | ```
446 | NAME READY STATUS RESTARTS AGE
447 | my-deployment-5454f687cd-bxjfz 1/1 Running 0 60s
448 | my-deployment-5454f687cd-gszbr 1/1 Running 0 60s
449 | my-deployment-5454f687cd-k6zfv 1/1 Running 0 60s
450 | ```
451 |
452 | Here we can see that all three Pods have been created, so we have successfully scaled the Pod horizontally. Besides horizontal scaling, another benefit of a Deployment is that it gives us zero-downtime rollouts: when we update our Pods, Kubernetes does not kill them all at once; it creates new Pods and replaces the old ones only after the new ones are up and running normally.
453 |
454 | For example, suppose we want to update the port our Pod exposes. We can first run
455 |
456 | ```
457 | kubectl edit deployments my-deployment
458 | ```
459 |
460 | and we will see the live YAML of the Deployment
461 |
462 | ```yaml
463 | apiVersion: extensions/v1beta1
464 | kind: Deployment
465 | metadata:
466 |   annotations:
467 |     deployment.kubernetes.io/revision: '2'
468 |   creationTimestamp: '2019-04-26T04:18:26Z'
469 |   generation: 2
470 |   labels:
471 |     app: demoApp
472 |   name: my-deployment
473 |   namespace: default
474 |   resourceVersion: '328692'
475 |   selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/my-deployment
476 |   uid: 56608fb5-67da-11e9-933f-08002789461f
477 | spec:
478 |   progressDeadlineSeconds: 600
479 |   replicas: 3
480 |   revisionHistoryLimit: 10
481 |   selector:
482 |     matchLabels:
483 |       app: demoApp
484 |   strategy:
485 |     rollingUpdate:
486 |       maxSurge: 25%
487 |       maxUnavailable: 25%
488 |     type: RollingUpdate
489 |   template:
490 |     metadata:
491 |       creationTimestamp: null
492 |       labels:
493 |         app: demoApp
494 |     spec:
495 |       containers:
496 |         - image: hcwxd/kubernetes-demo
497 |           imagePullPolicy: Always
498 |           name: kubernetes-demo-container
499 |           ports:
500 |             - containerPort: 3000
501 |               protocol: TCP
502 |           resources: {}
503 |           terminationMessagePath: /dev/termination-log
504 |           terminationMessagePolicy: File
505 |       dnsPolicy: ClusterFirst
506 |       restartPolicy: Always
507 |       schedulerName: default-scheduler
508 |       securityContext: {}
509 |       terminationGracePeriodSeconds: 30
510 | ```
511 |
512 | Change `containerPort: 3000` to `3001` and save, and Kubernetes will start rolling out the update for us. If we keep running `kubectl get pods` we will see
513 |
514 | ```
515 | NAME                             READY   STATUS              RESTARTS   AGE
516 | my-deployment-5454f687cd-bxjfz   1/1     Running             0          60s
517 | my-deployment-5454f687cd-gszbr   1/1     Terminating         0          60s
518 | my-deployment-5454f687cd-k6zfv   1/1     Running             0          60s
519 | my-deployment-78dc8dcb89-59272   0/1     ContainerCreating   0          1s
520 | my-deployment-78dc8dcb89-dwtls   1/1     Running             0          5s
521 | ```
522 |
523 | As shown above, Kubernetes always keeps 3 Pods running normally; while a new Pod is still in the `ContainerCreating` stage, the old Pod it is meant to replace is not shut down yet. After a while, running the same command again shows
524 |
525 | ```
526 | NAME READY STATUS RESTARTS AGE
527 | my-deployment-5454f687cd-bxjfz 1/1 Terminating 0 60s
528 | my-deployment-5454f687cd-gszbr 1/1 Terminating 0 60s
529 | my-deployment-5454f687cd-k6zfv 1/1 Terminating 0 60s
530 | my-deployment-78dc8dcb89-59272 1/1 Running 0 11s
531 | my-deployment-78dc8dcb89-7b7hg 1/1 Running 0 7s
532 | my-deployment-78dc8dcb89-dwtls 1/1 Running 0 15s
533 | ```
534 |
535 | All three new Pods have been deployed and have replaced the old ones. With this mechanism we can make sure that the service never becomes unavailable while the system is being updated. We can now run
536 |
537 | ```
538 | kubectl rollout history deployment my-deployment
539 | ```
540 |
541 | to see the revisions we have rolled out so far
542 |
543 | ```
544 | deployment.extensions/my-deployment
545 | REVISION   CHANGE-CAUSE
546 | 1
547 | 2
548 | ```
549 |
550 | As we can see, there are currently two revisions. If we find that the code in revision 2 has a problem and want to restore the service to revision 1 first (a rollback), we can run
551 |
552 | ```
553 | kubectl rollout undo deploy my-deployment
554 | ```
555 |
556 | to bring our Pods back to revision 1. Later, once there are more revisions, we can even specify which revision to roll back to
557 |
558 | ```
559 | kubectl rollout undo deploy my-deployment --to-revision=2
560 | ```
561 |
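Two related commands are worth knowing here (a sketch, not part of the original walkthrough; the image tag is a hypothetical placeholder):

```
# Watch a rollout until it finishes or fails
kubectl rollout status deployment my-deployment

# Trigger a new rollout by changing the image instead of editing the YAML
kubectl set image deployment/my-deployment kubernetes-demo-container=hcwxd/kubernetes-demo:some-new-tag
```
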
562 | ### Ingress
563 |
564 | With Service and Deployment covered, it is the turn of the slightly more complex Ingress component. We mentioned above that a Service is the Kubernetes component that defines how a group of Pods is connected to and accessed. But with Services, each Service's external port number is mapped to a port number on the Node, so as the number of Services grows, managing the port numbers and the routing rules becomes quite difficult.
565 |
566 | An Ingress, by contrast, puts a reverse proxy in front of our many Services over HTTP/HTTPS. It lets us expose a single external port number and decides, based on the hostname or the pathname, which Service each packet should be forwarded to, as the comparison in the figure below shows:
567 | ![Ingress concept](./src/Ingress-concept.jpeg)
568 | In Kubernetes, the Ingress feature is actually made up of Ingress Resources, an Ingress Server, and an Ingress Controller. The Ingress Resources are the ID card that defines the Ingress, while the Ingress Server is the actual web server that accepts the HTTP/HTTPS connections. In practice there are many different Ingress Server implementations, just as there is a dazzling variety of web servers on the market. The Ingress Controller is therefore the piece that turns the Ingress Resources you define into the configuration of a specific Ingress Server implementation.
569 |
570 | For example, two Ingress Controllers maintained by the Kubernetes project itself are [ingress-gce](https://github.com/kubernetes/ingress-gce/blob/master/README.md) and [ingress-nginx](https://github.com/kubernetes/ingress-nginx/blob/master/README.md), which translate to GCE and Nginx respectively. There are also controllers maintained by third parties; see [additional-controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/#additional-controllers) on the official site for the full list.
571 |
572 | Next, let's try creating an Ingress object that forwards packets to different Pods based on the hostname. The first step is to create several different Pods with a Deployment. Here we build the Containers from two prepared images: the program in the blue-whale image listens on port 3000 and serves a blue whale when accessed from a browser, while purple-whale serves a purple whale.
573 |
574 | `deployment.yaml`
575 |
576 | ```yaml
577 | apiVersion: extensions/v1beta1
578 | kind: Deployment
579 | metadata:
580 |   name: blue-nginx
581 | spec:
582 |   replicas: 2
583 |   template:
584 |     metadata:
585 |       labels:
586 |         app: blue-nginx
587 |     spec:
588 |       containers:
589 |         - name: nginx
590 |           image: hcwxd/blue-whale
591 |           ports:
592 |             - containerPort: 3000
593 |
594 | ---
595 |
596 | apiVersion: extensions/v1beta1
597 | kind: Deployment
598 | metadata:
599 |   name: purple-nginx
600 | spec:
601 |   replicas: 2
602 |   template:
603 |     metadata:
604 |       labels:
605 |         app: purple-nginx
606 |     spec:
607 |       containers:
608 |         - name: nginx
609 |           image: hcwxd/purple-whale
610 |           ports:
611 |             - containerPort: 3000
612 |
613 | ```
614 |
615 | Then we can create our Pods with `kubectl create -f deployment.yaml`.
616 |
617 | ```
618 | NAME                           READY   STATUS    RESTARTS   AGE
619 | blue-nginx-6b68c797c7-28tkz    1/1     Running   0          60s
620 | blue-nginx-6b68c797c7-8ww8l    1/1     Running   0          60s
621 | purple-nginx-84854fd7c-8g4nl   1/1     Running   0          60s
622 | purple-nginx-84854fd7c-tmrbs   1/1     Running   0          60s
623 | ```
624 |
625 | With the Pods created, the next step is to create the Service for each of them; the figure above is a good way to review how these pieces relate to one another. Here we map port 3000 on each Container to port 80.
626 |
627 | `service.yaml`
628 |
629 | ```yaml
630 | apiVersion: v1
631 | kind: Service
632 | metadata:
633 |   name: blue-service
634 | spec:
635 |   type: NodePort
636 |   selector:
637 |     app: blue-nginx
638 |   ports:
639 |     - protocol: TCP
640 |       port: 80
641 |       targetPort: 3000
642 |
643 | ---
644 | apiVersion: v1
645 | kind: Service
646 | metadata:
647 |   name: purple-service
648 | spec:
649 |   type: NodePort
650 |   selector:
651 |     app: purple-nginx
652 |   ports:
653 |     - protocol: TCP
654 |       port: 80
655 |       targetPort: 3000
656 | ```
657 |
658 | Create the Services with `kubectl create -f service.yaml`.
659 |
660 | ```
661 | NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
662 | blue-service     NodePort   10.111.192.164   <none>        80:30492/TCP   60s
663 | purple-service   NodePort   10.107.21.77     <none>        80:32086/TCP   60s
664 | ```
665 |
666 | Finally, we can create the star of the show: the Ingress! Our Ingress here has very simple rules: every packet sent to `blue.demo.com` is handed to the `blue-service` Service which, according to the `service.yaml` above, passes it on to the `blue-nginx` Pods, while packets sent to `purple.demo.com` are passed on to `purple-nginx`.
667 |
668 | Here, remember to run `minikube addons enable ingress` first to enable minikube's ingress feature. Then we write the Ingress's ID card.
669 |
670 | `ingress.yaml`
671 |
672 | ```yaml
673 | apiVersion: extensions/v1beta1
674 | kind: Ingress
675 | metadata:
676 |   name: web
677 | spec:
678 |   rules:
679 |     - host: blue.demo.com
680 |       http:
681 |         paths:
682 |           - backend:
683 |               serviceName: blue-service
684 |               servicePort: 80
685 |     - host: purple.demo.com
686 |       http:
687 |         paths:
688 |           - backend:
689 |               serviceName: purple-service
690 |               servicePort: 80
691 | ```
692 |
693 | As before, we create the Ingress object with `kubectl create -f ingress.yaml`, and then check its status with `kubectl get ingress`:
694 |
695 | ```
696 | NAME HOSTS ADDRESS PORTS AGE
697 | web blue.demo.com,purple.demo.com 10.0.2.15 80 60s
698 | ```
699 |
700 | Next, let's test whether the Ingress is forwarding traffic for us as expected. Since the Cluster's actual external IP is the `192.168.99.100` we see from `minikube ip`, how can that single IP serve as both the `blue.demo.com` and the `purple.demo.com` from our rules?
701 |
702 | We know that when a hostname is resolved, the local `/etc/hosts` file is consulted before any other DNS server. So with a small trick we can point both `blue.demo.com` and `purple.demo.com` to `192.168.99.100` on our own machine, either with the commands (writing to `/etc/hosts` needs root, hence `sudo tee`)
703 |
704 | ```
705 | echo "192.168.99.100 blue.demo.com" | sudo tee -a /etc/hosts
706 | echo "192.168.99.100 purple.demo.com" | sudo tee -a /etc/hosts
707 | ```
708 |
709 | or by adding the two entries manually with `sudo vim /etc/hosts`. With DNS sorted out, we can start testing. Open the browser and enter `blue.demo.com` and we get the familiar blue whale
710 |
711 | ![Blue whale screenshot](./src/blue-whale-screenshot.png)
712 |
713 | and then enter `purple.demo.com` to get the purple whale!
714 |
715 | ![Purple whale screenshot](./src/purple-whale-screenshot.png)
716 |
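If you prefer not to touch `/etc/hosts`, a hedged alternative is to test from the terminal by setting the `Host` header explicitly, since the Ingress routes on the hostname:

```
# Should return the blue whale page
curl -H "Host: blue.demo.com" http://192.168.99.100

# Should return the purple whale page
curl -H "Host: purple.demo.com" http://192.168.99.100
```
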
717 | ## Helm
718 |
719 | Above we introduced several Kubernetes components and the `yaml` configuration files they correspond to. But suppose we have a complex service that involves many different configuration files at once: version-controlling, managing, and updating all of them together is not easy, and quickly deploying a service made up of that many configuration files also becomes difficult. Helm is a tool built to solve exactly these problems.
720 |
721 | Simply put, Helm is a tool for managing configuration files. It bundles the `yaml` files of the various components of a Kubernetes service into a collection called a `chart`, and then lets us manage and configure those `yaml` files together by supplying parameters.
722 |
723 | ### Using an Existing Helm Chart
724 |
725 | Next, let's use an existing Helm Chart to deploy a WordPress service. The first step, of course, is to install Helm. On macOS we can install it directly with `Homebrew`; for other environments see [Helm's Github](https://github.com/helm/helm#install).
726 |
727 | ```
728 | brew install kubernetes-helm
729 | ```
730 |
731 | After installing, remember to have Helm initialize its configuration on the Cluster
732 |
733 | ```
734 | helm init
735 | ```
736 |
737 | Next, let's install the WordPress [Chart](https://github.com/helm/charts/tree/master/stable/wordpress). We can do that directly with
738 |
739 | ```
740 | helm install stable/wordpress
741 | ```
742 |
743 | This command fetches the Chart from the Chart Repository and deploys it to our Kubernetes Cluster. We can now inspect our Cluster with
744 |
745 | ```
746 | kubectl get all
747 | ```
748 |
749 | ```
750 | NAME READY STATUS RESTARTS AGE
751 | pod/peddling-hog-mariadb-0 1/1 Running 0 60s
752 | pod/peddling-hog-wordpress-7bf6d69c8b-b5flx 1/1 Running 1 60s
753 |
754 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
755 | service/peddling-hog-mariadb ClusterIP 10.109.96.113 <none> 3306/TCP 60s
756 | service/peddling-hog-wordpress LoadBalancer 10.101.157.184 <pending> 80:30439/TCP,443:31824/TCP 60s
757 |
758 | NAME READY UP-TO-DATE AVAILABLE AGE
759 | deployment.apps/peddling-hog-wordpress 1/1 1 1 60s
760 |
761 | NAME DESIRED CURRENT READY AGE
762 | replicaset.apps/peddling-hog-wordpress-7bf6d69c8b 1 1 1 60s
763 |
764 | NAME READY AGE
765 | statefulset.apps/peddling-hog-mariadb 1/1 60s
766 | ```
767 |
768 | With a single Chart we installed and deployed two Pods, two Services, and several other components in one go. To delete everything the Chart installed at once, first list what we have installed with `helm list`:
769 |
770 | ```
771 | NAME REVISION UPDATED STATUS CHART
772 | peddling-hog 1 Fri Apr 26 16:08:30 2019 DEPLOYED wordpress-5.9.0
773 | ```
774 |
775 | Then `helm delete peddling-hog` removes every one of those components at once.
776 |
777 | ### How a Chart Works
778 |
779 | Having deployed components from a Chart, let's dig into how a Chart actually works. We can browse the WordPress chart's file structure on [GitHub](https://github.com/helm/charts/tree/master/stable/wordpress), or generate a minimal Chart of our own with
780 |
781 | ```
782 | helm create helm-demo
783 | ```
784 |
785 | Now let's look inside the generated `./helm-demo` folder:
786 |
787 | ```
788 | .
789 | ├── Chart.yaml
790 | ├── charts
791 | ├── templates
792 | │ ├── deployment.yaml
793 | │ ├── ingress.yaml
794 | │ ├── service.yaml
795 | └── values.yaml
796 | ```
797 |
798 | Simplified, the Chart's file structure looks like the tree above.
799 |
800 | - `Chart.yaml`
801 |   Defines the Chart's metadata, including the Chart's version, name, description, and so on
802 |
803 | - `charts`
804 |
805 |   This folder can hold other Charts, which are then called subcharts
806 |
807 | - `templates`
808 |   Defines the Kubernetes components this Chart's service needs. Instead of hard-coding each component's settings, the templates take them in as parameters
809 |
810 | - `values.yaml`
811 |   Defines all of the Chart's parameters, which get substituted into the components under `templates`. For example, this is where we define `nodePorts` for `service.yaml`, `replicaCount` for `deployment.yaml`, `hosts` for `ingress.yaml`, and so on
812 |
813 | As the structure above shows, editing `values.yaml` is all it takes to version-control and manage every `yaml` manifest, and install / delete gives us one-command deployment and teardown.
814 |
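To see which parameters an existing chart exposes before installing it, Helm can print a chart's default values, for example for the WordPress chart used earlier (the output is long, so only the command is sketched here):

```
# Show the default values.yaml shipped with the chart
helm inspect values stable/wordpress
```
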
815 | ### Building Your Own Chart
816 |
817 | Now that we roughly understand how a Chart works, let's build a simple one ourselves. The goal is to use a `deployment`, a `service`, and an `ingress` so that visiting `blue.demo.com` returns a little whale. As before, we start with
818 |
819 | ```
820 | helm create helm-demo
821 | ```
822 |
823 | Then let's first revisit the `yaml` files we used back in the `ingress` section.
824 |
825 | `deployment.yaml`
826 |
827 | ```yaml
828 | apiVersion: extensions/v1beta1
829 | kind: Deployment
830 | metadata:
831 | name: blue-nginx
832 | spec:
833 | replicas: 2
834 | template:
835 | metadata:
836 | labels:
837 | app: blue-nginx
838 | spec:
839 | containers:
840 | - name: nginx
841 | image: hcwxd/blue-whale
842 | ports:
843 | - containerPort: 3000
844 | ```
845 |
846 | `service.yaml`
847 |
848 | ```yaml
849 | apiVersion: v1
850 | kind: Service
851 | metadata:
852 | name: blue-service
853 | spec:
854 | type: NodePort
855 | selector:
856 | app: blue-nginx
857 | ports:
858 | - protocol: TCP
859 | port: 80
860 | targetPort: 3000
861 | ```
862 |
863 | `ingress.yaml`
864 |
865 | ```yaml
866 | apiVersion: extensions/v1beta1
867 | kind: Ingress
868 | metadata:
869 | name: web
870 | spec:
871 | rules:
872 | - host: blue.demo.com
873 | http:
874 | paths:
875 | - backend:
876 | serviceName: blue-service
877 | servicePort: 80
878 | ```
879 |
880 | Next we extract the parts of these `yaml` files that make sense as parameters. To keep the complexity down we pick only a few, and write them into `values.yaml`.
881 |
882 | `values.yaml`
883 |
884 | ```yaml
885 | replicaCount: 2
886 |
887 | image:
888 | repository: hcwxd/blue-whale
889 |
890 | service:
891 | type: NodePort
892 | port: 80
893 |
894 | ingress:
895 | enabled: true
896 |
897 | hosts:
898 | - host: blue.demo.com
899 | paths: [/]
900 | ```
901 |
902 | With the parameters extracted, we rewrite the three `yaml` files under `templates` so that they accept these parameters:
903 |
904 | `deployment.yaml`
905 |
906 | ```yaml
907 | apiVersion: apps/v1
908 | kind: Deployment
909 | metadata:
910 |   name: {{ include "value-helm-demo.fullname" . }}
911 | spec:
912 |   replicas: {{ .Values.replicaCount }}
913 |   selector:
914 |     matchLabels:
915 |       app: {{ include "value-helm-demo.fullname" . }}
916 |   template:
917 |     metadata:
918 |       labels:
919 |         app: {{ include "value-helm-demo.fullname" . }}
920 |     spec:
921 |       containers:
922 |         - name: {{ .Chart.Name }}
923 |           image: '{{ .Values.image.repository }}'
924 |           ports:
925 |             - containerPort: 3000
926 | ```
927 |
928 | `service.yaml`
929 |
930 | ```yaml
931 | apiVersion: v1
932 | kind: Service
933 | metadata:
934 |   name: {{ include "value-helm-demo.fullname" . }}
935 | spec:
936 |   type: {{ .Values.service.type }}
937 |   ports:
938 |     - port: {{ .Values.service.port }}
939 |       targetPort: 3000
940 |       protocol: TCP
941 |   selector:
942 |     app: {{ include "value-helm-demo.fullname" . }}
943 | ```
944 |
945 | `ingress.yaml`
946 |
947 | ```yaml
948 | {{- if .Values.ingress.enabled -}}
949 | {{- $fullName := include "value-helm-demo.fullname" . -}}
950 | apiVersion: extensions/v1beta1
951 | kind: Ingress
952 | metadata:
953 | name: {{ $fullName }}
954 | spec:
955 | rules:
956 | {{- range .Values.ingress.hosts }}
957 | - host: {{ .host | quote }}
958 | http:
959 | paths:
960 | {{- range .paths }}
961 | - backend:
962 | serviceName: {{ $fullName }}
963 | servicePort: 80
964 | {{- end }}
965 | {{- end }}
966 | {{- end }}
967 | ```
968 |
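Before installing anything, we can render the templates locally to make sure the values are substituted the way we expect (a sketch; Helm 2's dry run prints the generated manifests without creating resources on the cluster):

```
# Run from inside the chart directory; nothing is applied to the cluster
helm install . --dry-run --debug
```
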
969 | Once the templates are written, we can deploy all three files with a single command. From inside the `/helm-demo` folder, run
970 |
971 | ```
972 | helm install .
973 | ```
974 |
975 | ```
976 | NAME: gilded-peacock
977 | LAST DEPLOYED: Mon May 6 16:31:27 2019
978 | NAMESPACE: default
979 | STATUS: DEPLOYED
980 |
981 | RESOURCES:
982 | ==> v1/Deployment
983 | NAME READY UP-TO-DATE AVAILABLE AGE
984 | gilded-peacock-helm-demo 0/2 2 0 0s
985 |
986 | ==> v1/Pod(related)
987 | NAME READY STATUS RESTARTS AGE
988 | gilded-peacock-helm-demo-5fc5964759-thcbz 0/1 ContainerCreating 0 0s
989 | gilded-peacock-helm-demo-5fc5964759-wrb2w 0/1 ContainerCreating 0 0s
990 |
991 | ==> v1/Service
992 | NAME                      TYPE      CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
993 | gilded-peacock-helm-demo  NodePort  10.106.164.53  <none>       80:30333/TCP  0s
994 |
995 | ==> v1beta1/Ingress
996 | NAME HOSTS ADDRESS PORTS AGE
997 | gilded-peacock-helm-demo blue.demo.com 80 0s
998 |
999 |
1000 | NOTES:
1001 | 1. Get the application URL by running these commands:
1002 | http://blue.demo.com/
1003 | ```
1004 |
1005 | The `NAME: gilded-peacock` shown after a successful deployment is the name this Chart was deployed under (Helm calls this a `release`). We can then run
1006 |
1007 | ```
1008 | helm list
1009 | ```
1010 |
1011 | to list all of our current `releases`. After that, `kubectl get all` shows the current state of our Kubernetes cluster:
1012 |
1013 | ```
1014 | NAME READY STATUS RESTARTS AGE
1015 | pod/gilded-peacock-helm-demo-5fc5964759-thcbz 1/1 Running 0 8m51s
1016 | pod/gilded-peacock-helm-demo-5fc5964759-wrb2w 1/1 Running 0 8m51s
1017 |
1018 | NAME                               TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
1019 | service/gilded-peacock-helm-demo   NodePort   10.106.164.53   <none>        80:30333/TCP   8m51s
1020 |
1021 | NAME READY UP-TO-DATE AVAILABLE AGE
1022 | deployment.apps/gilded-peacock-helm-demo 2/2 2 2 8m51s
1023 |
1024 | NAME DESIRED CURRENT READY AGE
1025 | replicaset.apps/gilded-peacock-helm-demo-5fc5964759 2 2 2 8m51s
1026 | ```
1027 |
1028 | All of the resources we specified have been created according to the chart's configuration, so opening `blue.demo.com` now shows a little whale deployed entirely through Helm.
1029 |
1030 | Other commonly used Helm commands include:
1031 |
1032 | ```
1033 | helm delete --purge RELEASE_NAME
1034 | ```
1035 |
1036 | Deletes a release (the `--purge` flag also removes the release record, freeing up `RELEASE_NAME` so it can be reused later).
1037 |
1038 | ```
1039 | helm upgrade RELEASE_NAME CHART_PATH
1040 | ```
1041 |
1042 | When you update the Chart's files, `upgrade` rolls the changes out to the corresponding release.
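
For example, reusing the release created above (a sketch; `--set` overrides work at upgrade time just as they do at install time):

```
# Bump the replica count of the running release from inside the chart directory
helm upgrade gilded-peacock . --set replicaCount=3
```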
1043 |
1044 | ```
1045 | helm lint CHART_PATH
1046 | ```
1047 |
1048 | Checks your Chart files for syntax errors.
1049 |
1050 | ```
1051 | helm package CHART_PATH
1052 | ```
1053 |
1054 | Packages the whole Chart directory into a compressed archive.
1055 |
1056 | ## Additional kubectl Tips
1057 |
1058 | ### Shorthand
1059 |
1060 | If typing `kubectl` for every command feels slow, an alias saves time, e.g. `alias kbs=kubectl`.
1061 |
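For example (just a sketch; pick whatever alias name you prefer; `k` is another common choice):

```
# Make the alias permanent for zsh
echo "alias kbs=kubectl" >> ~/.zshrc
```
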
1062 | Every kubectl resource type also has a built-in short name. Running
1063 |
1064 | ```
1065 | kubectl api-resources
1066 | ```
1067 |
1068 | lists the short name of each resource; for example, deployments can be abbreviated to `deploy` and services to `svc`.
1069 |
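So, for example, the following pairs of commands are equivalent:

```
kubectl get deployments   # full resource name
kubectl get deploy        # built-in short name
kubectl get services
kubectl get svc
```
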
1070 | ### auto-complete
1071 |
1072 | If you miss auto-completion for kubectl commands, the [official docs](https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-autocomplete) explain how to set it up. With zsh, for example, running
1073 |
1074 | ```
1075 | echo "if [ $commands[kubectl] ]; then source <(kubectl completion zsh); fi" >> ~/.zshrc
1076 | ```
1077 |
1078 | enables kubectl auto-completion.
1079 |
1080 | ### create vs apply
1081 |
1082 | Throughout this tutorial we used `kubectl create -f` to create resources from `yaml`. `kubectl apply -f` can create and also update resources; for plain creation the two behave the same, but for the other differences see [kubectl apply vs kubectl create](https://stackoverflow.com/questions/47369351/kubectl-apply-vs-kubectl-create).
1083 |
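A minimal sketch of the practical difference (the file name is just illustrative):

```
# create: fails if the resource already exists
kubectl create -f deployment.yaml

# apply: creates the resource on the first run and patches it on later runs,
# so the same command covers both the initial deploy and subsequent updates
kubectl apply -f deployment.yaml
```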
--------------------------------------------------------------------------------