├── .gitignore
├── CONTRIBUTING.md
├── COPYRIGHT.md
├── LICENSE
├── README.md
├── deployments
│   ├── core-dns.yaml
│   ├── coredns-1.7.0.yaml
│   └── kube-dns.yaml
└── docs
    ├── 01-prerequisites.md
    ├── 02-client-tools.md
    ├── 03-compute-resources.md
    ├── 04-certificate-authority.md
    ├── 05-kubernetes-configuration-files.md
    ├── 06-data-encryption-keys.md
    ├── 07-bootstrapping-etcd.md
    ├── 08-bootstrapping-kubernetes-controllers.md
    ├── 09-bootstrapping-kubernetes-workers.md
    ├── 10-configuring-kubectl.md
    ├── 11-pod-network-routes.md
    ├── 12-dns-addon.md
    ├── 13-smoke-test.md
    ├── 14-cleanup.md
    └── images
        └── tmux-screenshot.png
/.gitignore:
--------------------------------------------------------------------------------
1 | admin-csr.json
2 | admin-key.pem
3 | admin.csr
4 | admin.pem
5 | admin.kubeconfig
6 | ca-config.json
7 | ca-csr.json
8 | ca-key.pem
9 | ca.csr
10 | ca.pem
11 | encryption-config.yaml
12 | kube-controller-manager-csr.json
13 | kube-controller-manager-key.pem
14 | kube-controller-manager.csr
15 | kube-controller-manager.kubeconfig
16 | kube-controller-manager.pem
17 | kube-scheduler-csr.json
18 | kube-scheduler-key.pem
19 | kube-scheduler.csr
20 | kube-scheduler.kubeconfig
21 | kube-scheduler.pem
22 | kube-proxy-csr.json
23 | kube-proxy-key.pem
24 | kube-proxy.csr
25 | kube-proxy.kubeconfig
26 | kube-proxy.pem
27 | kubernetes-csr.json
28 | kubernetes-key.pem
29 | kubernetes.csr
30 | kubernetes.pem
31 | worker-0-csr.json
32 | worker-0-key.pem
33 | worker-0.csr
34 | worker-0.kubeconfig
35 | worker-0.pem
36 | worker-1-csr.json
37 | worker-1-key.pem
38 | worker-1.csr
39 | worker-1.kubeconfig
40 | worker-1.pem
41 | worker-2-csr.json
42 | worker-2-key.pem
43 | worker-2.csr
44 | worker-2.kubeconfig
45 | worker-2.pem
46 | service-account-key.pem
47 | service-account.csr
48 | service-account.pem
49 | service-account-csr.json
50 | *.swp
51 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | This project is made possible by contributors like YOU! While all contributions are welcome, please be sure to follow these suggestions to help your PR get merged.
2 |
3 | ## License
4 |
5 | This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
6 |
7 | ## Review and merge process
8 |
9 | Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
10 |
11 | Here are some examples of the review and justification process:
12 | - [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
13 | - [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
14 |
15 | ## Notes on minutiae
16 |
17 | If you find a bug that breaks the guide, please do report it. If you are considering a minor copy edit for tone, grammar, or simple whitespace inconsistencies, weigh the maintainer time involved against the community benefit before investing too much of your own.
18 |
19 |
--------------------------------------------------------------------------------
/COPYRIGHT.md:
--------------------------------------------------------------------------------
1 | # Copyright
2 |
3 | This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
4 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
177 | END OF TERMS AND CONDITIONS
178 |
179 | APPENDIX: How to apply the Apache License to your work.
180 |
181 | To apply the Apache License to your work, attach the following
182 | boilerplate notice, with the fields enclosed by brackets "[]"
183 | replaced with your own identifying information. (Don't include
184 | the brackets!) The text should be enclosed in the appropriate
185 | comment syntax for the file format. We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright [yyyy] [name of copyright owner]
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # About
2 |
3 | This is a fork of the awesome [Kubernetes The Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) by Kelsey Hightower, geared towards running it on AWS. Thanks also to [@slawekzachcial](https://github.com/slawekzachcial) for his [work](https://github.com/slawekzachcial/kubernetes-the-hard-way-aws) that made this easier.
4 |
5 | There are currently no tool upgrades compared to the original.
6 |
7 | # Kubernetes The Hard Way
8 |
9 | This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you, check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), [Amazon Elastic Kubernetes Service](https://aws.amazon.com/eks/) or the [Getting Started Guides](https://kubernetes.io/docs/setup).
10 |
11 | Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
12 |
13 | > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
14 |
15 | ## Target Audience
16 |
17 | The target audience for this tutorial is someone who plans to support a production Kubernetes cluster and wants to understand how everything fits together.
18 |
19 | ## Cluster Details
20 |
21 | Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
22 |
23 | * [kubernetes](https://github.com/kubernetes/kubernetes) v1.21.0
24 | * [containerd](https://github.com/containerd/containerd) v1.4.4
25 | * [coredns](https://github.com/coredns/coredns) v1.8.3
26 | * [cni](https://github.com/containernetworking/cni) v0.9.1
27 | * [etcd](https://github.com/etcd-io/etcd) v3.4.15
28 |
29 | ## Labs
30 |
31 | This tutorial assumes you have access to [Amazon Web Services](https://aws.amazon.com/). If you are looking for the GCP version of this guide, see [https://github.com/kelseyhightower/kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way).
32 |
33 | * [Prerequisites](docs/01-prerequisites.md)
34 | * [Installing the Client Tools](docs/02-client-tools.md)
35 | * [Provisioning Compute Resources](docs/03-compute-resources.md)
36 | * [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
37 | * [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
38 | * [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
39 | * [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
40 | * [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
41 | * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
42 | * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
43 | * [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
44 | * [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
45 | * [Smoke Test](docs/13-smoke-test.md)
46 | * [Cleaning Up](docs/14-cleanup.md)
47 |
--------------------------------------------------------------------------------
/deployments/core-dns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: coredns
5 | namespace: kube-system
6 | ---
7 | apiVersion: rbac.authorization.k8s.io/v1
8 | kind: ClusterRole
9 | metadata:
10 | labels:
11 | kubernetes.io/bootstrapping: rbac-defaults
12 | name: system:coredns
13 | rules:
14 | - apiGroups:
15 | - ""
16 | resources:
17 | - endpoints
18 | - services
19 | - pods
20 | - namespaces
21 | verbs:
22 | - list
23 | - watch
24 | - apiGroups:
25 | - ""
26 | resources:
27 | - nodes
28 | verbs:
29 | - get
30 | ---
31 | apiVersion: rbac.authorization.k8s.io/v1
32 | kind: ClusterRoleBinding
33 | metadata:
34 | annotations:
35 | rbac.authorization.kubernetes.io/autoupdate: "true"
36 | labels:
37 | kubernetes.io/bootstrapping: rbac-defaults
38 | name: system:coredns
39 | roleRef:
40 | apiGroup: rbac.authorization.k8s.io
41 | kind: ClusterRole
42 | name: system:coredns
43 | subjects:
44 | - kind: ServiceAccount
45 | name: coredns
46 | namespace: kube-system
47 | ---
48 | apiVersion: v1
49 | kind: ConfigMap
50 | metadata:
51 | name: coredns
52 | namespace: kube-system
53 | data:
54 | Corefile: |
55 | .:53 {
56 | errors
57 | health
58 | ready
59 | kubernetes cluster.local in-addr.arpa ip6.arpa {
60 | pods insecure
61 | fallthrough in-addr.arpa ip6.arpa
62 | }
63 | prometheus :9153
64 | forward . /etc/resolv.conf
65 | cache 30
66 | loop
67 | reload
68 | loadbalance
69 | }
70 | ---
71 | apiVersion: apps/v1
72 | kind: Deployment
73 | metadata:
74 | name: coredns
75 | namespace: kube-system
76 | labels:
77 | k8s-app: kube-dns
78 | kubernetes.io/name: "CoreDNS"
79 | spec:
80 | replicas: 1
81 | strategy:
82 | type: RollingUpdate
83 | rollingUpdate:
84 | maxUnavailable: 1
85 | selector:
86 | matchLabels:
87 | k8s-app: kube-dns
88 | template:
89 | metadata:
90 | labels:
91 | k8s-app: kube-dns
92 | spec:
93 | priorityClassName: system-cluster-critical
94 | serviceAccountName: coredns
95 | tolerations:
96 | - key: "CriticalAddonsOnly"
97 | operator: "Exists"
98 | nodeSelector:
99 | kubernetes.io/os: linux
100 | containers:
101 | - name: coredns
102 | image: coredns/coredns:1.7.0
103 | imagePullPolicy: IfNotPresent
104 | resources:
105 | limits:
106 | memory: 170Mi
107 | requests:
108 | cpu: 100m
109 | memory: 70Mi
110 | args: [ "-conf", "/etc/coredns/Corefile" ]
111 | volumeMounts:
112 | - name: config-volume
113 | mountPath: /etc/coredns
114 | readOnly: true
115 | ports:
116 | - containerPort: 53
117 | name: dns
118 | protocol: UDP
119 | - containerPort: 53
120 | name: dns-tcp
121 | protocol: TCP
122 | - containerPort: 9153
123 | name: metrics
124 | protocol: TCP
125 | securityContext:
126 | allowPrivilegeEscalation: false
127 | capabilities:
128 | add:
129 | - NET_BIND_SERVICE
130 | drop:
131 | - all
132 | readOnlyRootFilesystem: true
133 | livenessProbe:
134 | httpGet:
135 | path: /health
136 | port: 8080
137 | scheme: HTTP
138 | initialDelaySeconds: 60
139 | timeoutSeconds: 5
140 | successThreshold: 1
141 | failureThreshold: 5
142 | readinessProbe:
143 | httpGet:
144 | path: /ready
145 | port: 8181
146 | scheme: HTTP
147 | dnsPolicy: Default
148 | volumes:
149 | - name: config-volume
150 | configMap:
151 | name: coredns
152 | items:
153 | - key: Corefile
154 | path: Corefile
155 | ---
156 | apiVersion: v1
157 | kind: Service
158 | metadata:
159 | name: kube-dns
160 | namespace: kube-system
161 | annotations:
162 | prometheus.io/port: "9153"
163 | prometheus.io/scrape: "true"
164 | labels:
165 | k8s-app: kube-dns
166 | kubernetes.io/cluster-service: "true"
167 | kubernetes.io/name: "CoreDNS"
168 | spec:
169 | selector:
170 | k8s-app: kube-dns
171 | clusterIP: 10.32.0.10
172 | ports:
173 | - name: dns
174 | port: 53
175 | protocol: UDP
176 | - name: dns-tcp
177 | port: 53
178 | protocol: TCP
179 | - name: metrics
180 | port: 9153
181 | protocol: TCP
182 |
--------------------------------------------------------------------------------
/deployments/coredns-1.7.0.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: coredns
5 | namespace: kube-system
6 | ---
7 | apiVersion: rbac.authorization.k8s.io/v1
8 | kind: ClusterRole
9 | metadata:
10 | labels:
11 | kubernetes.io/bootstrapping: rbac-defaults
12 | name: system:coredns
13 | rules:
14 | - apiGroups:
15 | - ""
16 | resources:
17 | - endpoints
18 | - services
19 | - pods
20 | - namespaces
21 | verbs:
22 | - list
23 | - watch
24 | - apiGroups:
25 | - ""
26 | resources:
27 | - nodes
28 | verbs:
29 | - get
30 | ---
31 | apiVersion: rbac.authorization.k8s.io/v1
32 | kind: ClusterRoleBinding
33 | metadata:
34 | annotations:
35 | rbac.authorization.kubernetes.io/autoupdate: "true"
36 | labels:
37 | kubernetes.io/bootstrapping: rbac-defaults
38 | name: system:coredns
39 | roleRef:
40 | apiGroup: rbac.authorization.k8s.io
41 | kind: ClusterRole
42 | name: system:coredns
43 | subjects:
44 | - kind: ServiceAccount
45 | name: coredns
46 | namespace: kube-system
47 | ---
48 | apiVersion: v1
49 | kind: ConfigMap
50 | metadata:
51 | name: coredns
52 | namespace: kube-system
53 | data:
54 | Corefile: |
55 | .:53 {
56 | errors
57 | health
58 | ready
59 | kubernetes cluster.local in-addr.arpa ip6.arpa {
60 | pods insecure
61 | fallthrough in-addr.arpa ip6.arpa
62 | }
63 | prometheus :9153
64 | cache 30
65 | loop
66 | reload
67 | loadbalance
68 | }
69 | ---
70 | apiVersion: apps/v1
71 | kind: Deployment
72 | metadata:
73 | name: coredns
74 | namespace: kube-system
75 | labels:
76 | k8s-app: kube-dns
77 | kubernetes.io/name: "CoreDNS"
78 | spec:
79 | replicas: 2
80 | strategy:
81 | type: RollingUpdate
82 | rollingUpdate:
83 | maxUnavailable: 1
84 | selector:
85 | matchLabels:
86 | k8s-app: kube-dns
87 | template:
88 | metadata:
89 | labels:
90 | k8s-app: kube-dns
91 | spec:
92 | priorityClassName: system-cluster-critical
93 | serviceAccountName: coredns
94 | tolerations:
95 | - key: "CriticalAddonsOnly"
96 | operator: "Exists"
97 | nodeSelector:
98 | beta.kubernetes.io/os: linux
99 | containers:
100 | - name: coredns
101 | image: coredns/coredns:1.7.0
102 | imagePullPolicy: IfNotPresent
103 | resources:
104 | limits:
105 | memory: 170Mi
106 | requests:
107 | cpu: 100m
108 | memory: 70Mi
109 | args: [ "-conf", "/etc/coredns/Corefile" ]
110 | volumeMounts:
111 | - name: config-volume
112 | mountPath: /etc/coredns
113 | readOnly: true
114 | ports:
115 | - containerPort: 53
116 | name: dns
117 | protocol: UDP
118 | - containerPort: 53
119 | name: dns-tcp
120 | protocol: TCP
121 | - containerPort: 9153
122 | name: metrics
123 | protocol: TCP
124 | securityContext:
125 | allowPrivilegeEscalation: false
126 | capabilities:
127 | add:
128 | - NET_BIND_SERVICE
129 | drop:
130 | - all
131 | readOnlyRootFilesystem: true
132 | livenessProbe:
133 | httpGet:
134 | path: /health
135 | port: 8080
136 | scheme: HTTP
137 | initialDelaySeconds: 60
138 | timeoutSeconds: 5
139 | successThreshold: 1
140 | failureThreshold: 5
141 | readinessProbe:
142 | httpGet:
143 | path: /ready
144 | port: 8181
145 | scheme: HTTP
146 | dnsPolicy: Default
147 | volumes:
148 | - name: config-volume
149 | configMap:
150 | name: coredns
151 | items:
152 | - key: Corefile
153 | path: Corefile
154 | ---
155 | apiVersion: v1
156 | kind: Service
157 | metadata:
158 | name: kube-dns
159 | namespace: kube-system
160 | annotations:
161 | prometheus.io/port: "9153"
162 | prometheus.io/scrape: "true"
163 | labels:
164 | k8s-app: kube-dns
165 | kubernetes.io/cluster-service: "true"
166 | kubernetes.io/name: "CoreDNS"
167 | spec:
168 | selector:
169 | k8s-app: kube-dns
170 | clusterIP: 10.32.0.10
171 | ports:
172 | - name: dns
173 | port: 53
174 | protocol: UDP
175 | - name: dns-tcp
176 | port: 53
177 | protocol: TCP
178 | - name: metrics
179 | port: 9153
180 | protocol: TCP
181 |
--------------------------------------------------------------------------------
/deployments/kube-dns.yaml:
--------------------------------------------------------------------------------
1 | # Copyright 2016 The Kubernetes Authors.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | apiVersion: v1
16 | kind: Service
17 | metadata:
18 | name: kube-dns
19 | namespace: kube-system
20 | labels:
21 | k8s-app: kube-dns
22 | kubernetes.io/cluster-service: "true"
23 | addonmanager.kubernetes.io/mode: Reconcile
24 | kubernetes.io/name: "KubeDNS"
25 | spec:
26 | selector:
27 | k8s-app: kube-dns
28 | clusterIP: 10.32.0.10
29 | ports:
30 | - name: dns
31 | port: 53
32 | protocol: UDP
33 | - name: dns-tcp
34 | port: 53
35 | protocol: TCP
36 | ---
37 | apiVersion: v1
38 | kind: ServiceAccount
39 | metadata:
40 | name: kube-dns
41 | namespace: kube-system
42 | labels:
43 | kubernetes.io/cluster-service: "true"
44 | addonmanager.kubernetes.io/mode: Reconcile
45 | ---
46 | apiVersion: v1
47 | kind: ConfigMap
48 | metadata:
49 | name: kube-dns
50 | namespace: kube-system
51 | labels:
52 | addonmanager.kubernetes.io/mode: EnsureExists
53 | ---
54 | apiVersion: apps/v1
55 | kind: Deployment
56 | metadata:
57 | name: kube-dns
58 | namespace: kube-system
59 | labels:
60 | k8s-app: kube-dns
61 | kubernetes.io/cluster-service: "true"
62 | addonmanager.kubernetes.io/mode: Reconcile
63 | spec:
64 | # replicas: not specified here:
65 | # 1. To prevent the Addon Manager from reconciling this replicas parameter.
66 | # 2. The default is 1.
67 | # 3. It will be tuned in real time if DNS horizontal auto-scaling is turned on.
68 | strategy:
69 | rollingUpdate:
70 | maxSurge: 10%
71 | maxUnavailable: 0
72 | selector:
73 | matchLabels:
74 | k8s-app: kube-dns
75 | template:
76 | metadata:
77 | labels:
78 | k8s-app: kube-dns
79 | annotations:
80 | scheduler.alpha.kubernetes.io/critical-pod: ''
81 | spec:
82 | tolerations:
83 | - key: "CriticalAddonsOnly"
84 | operator: "Exists"
85 | volumes:
86 | - name: kube-dns-config
87 | configMap:
88 | name: kube-dns
89 | optional: true
90 | containers:
91 | - name: kubedns
92 | image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
93 | resources:
94 | # TODO: Set memory limits when we've profiled the container for large
95 | # clusters, then set request = limit to keep this container in
96 | # guaranteed class. Currently, this container falls into the
97 | # "burstable" category so the kubelet doesn't backoff from restarting it.
98 | limits:
99 | memory: 170Mi
100 | requests:
101 | cpu: 100m
102 | memory: 70Mi
103 | livenessProbe:
104 | httpGet:
105 | path: /healthcheck/kubedns
106 | port: 10054
107 | scheme: HTTP
108 | initialDelaySeconds: 60
109 | timeoutSeconds: 5
110 | successThreshold: 1
111 | failureThreshold: 5
112 | readinessProbe:
113 | httpGet:
114 | path: /readiness
115 | port: 8081
116 | scheme: HTTP
117 | # we poll on pod startup for the Kubernetes master service and
118 | # only setup the /readiness HTTP server once that's available.
119 | initialDelaySeconds: 3
120 | timeoutSeconds: 5
121 | args:
122 | - --domain=cluster.local.
123 | - --dns-port=10053
124 | - --config-dir=/kube-dns-config
125 | - --v=2
126 | env:
127 | - name: PROMETHEUS_PORT
128 | value: "10055"
129 | ports:
130 | - containerPort: 10053
131 | name: dns-local
132 | protocol: UDP
133 | - containerPort: 10053
134 | name: dns-tcp-local
135 | protocol: TCP
136 | - containerPort: 10055
137 | name: metrics
138 | protocol: TCP
139 | volumeMounts:
140 | - name: kube-dns-config
141 | mountPath: /kube-dns-config
142 | - name: dnsmasq
143 | image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
144 | livenessProbe:
145 | httpGet:
146 | path: /healthcheck/dnsmasq
147 | port: 10054
148 | scheme: HTTP
149 | initialDelaySeconds: 60
150 | timeoutSeconds: 5
151 | successThreshold: 1
152 | failureThreshold: 5
153 | args:
154 | - -v=2
155 | - -logtostderr
156 | - -configDir=/etc/k8s/dns/dnsmasq-nanny
157 | - -restartDnsmasq=true
158 | - --
159 | - -k
160 | - --cache-size=1000
161 | - --no-negcache
162 | - --log-facility=-
163 | - --server=/cluster.local/127.0.0.1#10053
164 | - --server=/in-addr.arpa/127.0.0.1#10053
165 | - --server=/ip6.arpa/127.0.0.1#10053
166 | ports:
167 | - containerPort: 53
168 | name: dns
169 | protocol: UDP
170 | - containerPort: 53
171 | name: dns-tcp
172 | protocol: TCP
173 | # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
174 | resources:
175 | requests:
176 | cpu: 150m
177 | memory: 20Mi
178 | volumeMounts:
179 | - name: kube-dns-config
180 | mountPath: /etc/k8s/dns/dnsmasq-nanny
181 | - name: sidecar
182 | image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
183 | livenessProbe:
184 | httpGet:
185 | path: /metrics
186 | port: 10054
187 | scheme: HTTP
188 | initialDelaySeconds: 60
189 | timeoutSeconds: 5
190 | successThreshold: 1
191 | failureThreshold: 5
192 | args:
193 | - --v=2
194 | - --logtostderr
195 | - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
196 | - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
197 | ports:
198 | - containerPort: 10054
199 | name: metrics
200 | protocol: TCP
201 | resources:
202 | requests:
203 | memory: 20Mi
204 | cpu: 10m
205 | dnsPolicy: Default # Don't use cluster DNS.
206 | serviceAccountName: kube-dns
207 |
--------------------------------------------------------------------------------
/docs/01-prerequisites.md:
--------------------------------------------------------------------------------
1 | # Prerequisites
2 |
3 | ## Amazon Web Services
4 |
5 | This tutorial leverages [Amazon Web Services](https://aws.amazon.com/) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up. It should cost less than $2 to run the required resources for the roughly 24 hours it takes to complete this exercise.
6 |
7 | > The compute resources required for this tutorial exceed the Amazon Web Services free tier. Make sure that you clean up the resources at the end of the activity to avoid incurring unwanted costs.
8 |
9 | ## Amazon Web Services CLI
10 |
11 | ### Install the AWS CLI
12 |
13 | Follow the AWS CLI [documentation](https://aws.amazon.com/cli/) to install and configure the `aws` command line utility.
14 |
15 | Verify the AWS CLI version using:
16 |
17 | ```
18 | aws --version
19 | ```
20 |
21 | ### Set a Default Compute Region
22 | 
23 | This tutorial assumes a default compute region has been configured.
24 |
25 | Go ahead and set a default compute region:
26 |
27 | ```
28 | AWS_REGION=us-west-1
29 |
30 | aws configure set default.region $AWS_REGION
31 | ```
32 |
33 |
34 | ## Running Commands in Parallel with tmux
35 |
36 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances; in those cases, consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
37 |
38 | > The use of tmux is optional and not required to complete this tutorial.
39 |
40 | 
41 |
42 | > Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
43 |
44 | Next: [Installing the Client Tools](02-client-tools.md)
45 |
46 | ### Download and install jq
47 |
48 | Refer to this [link](https://jqlang.github.io/jq/download/) and install `jq` based on your OS.
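Once installed, a quick sanity check confirms `jq` is on your `PATH` and parsing JSON (the sample document below is arbitrary):

```
# Pipe a trivial JSON document through jq; prints the value of .tool.
echo '{"tool": "jq", "ok": true}' | jq -r '.tool'
```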
--------------------------------------------------------------------------------
/docs/02-client-tools.md:
--------------------------------------------------------------------------------
1 | # Installing the Client Tools
2 |
3 | In this lab you will install the command line utilities required to complete this tutorial: [cfssl](https://github.com/cloudflare/cfssl), [cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
4 |
5 |
6 | ## Install CFSSL
7 |
8 | The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
9 |
10 | Download and install `cfssl` and `cfssljson`:
11 |
12 | ### OS X
13 |
14 | ```
15 | curl -L -o cfssl https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_darwin_amd64
16 | curl -L -o cfssljson https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_darwin_amd64
17 | ```
18 |
19 | ```
20 | chmod +x cfssl cfssljson
21 | ```
22 |
23 | ```
24 | sudo mv cfssl cfssljson /usr/local/bin/
25 | ```
26 |
27 | Some OS X users may experience problems using the pre-built binaries in which case [Homebrew](https://brew.sh) might be a better option:
28 |
29 | ```
30 | brew install cfssl
31 | ```
32 |
33 | ### Linux
34 |
35 | ```
36 | wget -q --show-progress --https-only --timestamping \
37 | https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssl \
38 | https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/1.4.1/linux/cfssljson
39 | ```
40 |
41 | ```
42 | chmod +x cfssl cfssljson
43 | ```
44 |
45 | ```
46 | sudo mv cfssl cfssljson /usr/local/bin/
47 | ```
48 |
49 | ### Verification
50 |
51 | Verify that `cfssl` and `cfssljson` version 1.4.1 or higher are installed:
52 |
53 | ```
54 | cfssl version
55 | ```
56 |
57 | > output
58 |
59 | ```
60 | Version: 1.4.1
61 | Runtime: go1.12.12
62 | ```
63 |
64 | ```
65 | cfssljson --version
66 | ```
67 | ```
68 | Version: 1.4.1
69 | Runtime: go1.12.12
70 | ```
71 |
72 | ## Install kubectl
73 |
74 | The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
75 |
76 | ### OS X
77 |
78 | ```
79 | curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/darwin/amd64/kubectl
80 | ```
81 |
82 | ```
83 | chmod +x kubectl
84 | ```
85 |
86 | ```
87 | sudo mv kubectl /usr/local/bin/
88 | ```
89 |
90 | ### Linux
91 |
92 | ```
93 | wget https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl
94 | ```
95 |
96 | ```
97 | chmod +x kubectl
98 | ```
99 |
100 | ```
101 | sudo mv kubectl /usr/local/bin/
102 | ```
103 |
104 | ### Verification
105 |
106 | Verify `kubectl` version 1.21.0 or higher is installed:
107 |
108 | ```
109 | kubectl version --client
110 | ```
111 |
112 | > output
113 |
114 | ```
115 | Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
116 | ```
117 |
118 | Next: [Provisioning Compute Resources](03-compute-resources.md)
119 |
--------------------------------------------------------------------------------
/docs/03-compute-resources.md:
--------------------------------------------------------------------------------
1 | # Provisioning Compute Resources
2 |
3 | [Guide](https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/03-compute-resources.md)
4 |
5 | ## Networking
6 |
7 | ### VPC
8 |
9 | ```sh
10 | VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --output text --query 'Vpc.VpcId')
11 | aws ec2 create-tags --resources ${VPC_ID} --tags Key=Name,Value=kubernetes-the-hard-way
12 | aws ec2 modify-vpc-attribute --vpc-id ${VPC_ID} --enable-dns-support '{"Value": true}'
13 | aws ec2 modify-vpc-attribute --vpc-id ${VPC_ID} --enable-dns-hostnames '{"Value": true}'
14 | ```
15 |
16 | ### Subnet
17 |
18 | ```sh
19 | SUBNET_ID=$(aws ec2 create-subnet \
20 | --vpc-id ${VPC_ID} \
21 | --cidr-block 10.0.1.0/24 \
22 | --output text --query 'Subnet.SubnetId')
23 | aws ec2 create-tags --resources ${SUBNET_ID} --tags Key=Name,Value=kubernetes
24 | ```
25 |
26 | ### Internet Gateway
27 |
28 | ```sh
29 | INTERNET_GATEWAY_ID=$(aws ec2 create-internet-gateway --output text --query 'InternetGateway.InternetGatewayId')
30 | aws ec2 create-tags --resources ${INTERNET_GATEWAY_ID} --tags Key=Name,Value=kubernetes
31 | aws ec2 attach-internet-gateway --internet-gateway-id ${INTERNET_GATEWAY_ID} --vpc-id ${VPC_ID}
32 | ```
33 |
34 | ### Route Tables
35 |
36 | ```sh
37 | ROUTE_TABLE_ID=$(aws ec2 create-route-table --vpc-id ${VPC_ID} --output text --query 'RouteTable.RouteTableId')
38 | aws ec2 create-tags --resources ${ROUTE_TABLE_ID} --tags Key=Name,Value=kubernetes
39 | aws ec2 associate-route-table --route-table-id ${ROUTE_TABLE_ID} --subnet-id ${SUBNET_ID}
40 | aws ec2 create-route --route-table-id ${ROUTE_TABLE_ID} --destination-cidr-block 0.0.0.0/0 --gateway-id ${INTERNET_GATEWAY_ID}
41 | ```
42 |
43 | ### Security Groups (aka Firewall Rules)
44 |
45 | ```sh
46 | SECURITY_GROUP_ID=$(aws ec2 create-security-group \
47 | --group-name kubernetes \
48 | --description "Kubernetes security group" \
49 | --vpc-id ${VPC_ID} \
50 | --output text --query 'GroupId')
51 | aws ec2 create-tags --resources ${SECURITY_GROUP_ID} --tags Key=Name,Value=kubernetes
52 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol all --cidr 10.0.0.0/16
53 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol all --cidr 10.200.0.0/16
54 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 22 --cidr 0.0.0.0/0
55 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 6443 --cidr 0.0.0.0/0
56 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol tcp --port 443 --cidr 0.0.0.0/0
57 | aws ec2 authorize-security-group-ingress --group-id ${SECURITY_GROUP_ID} --protocol icmp --port -1 --cidr 0.0.0.0/0
58 | ```
59 |
60 | ### Kubernetes Public Access - Create a Network Load Balancer
61 |
62 | ```sh
63 | LOAD_BALANCER_ARN=$(aws elbv2 create-load-balancer \
64 | --name kubernetes \
65 | --subnets ${SUBNET_ID} \
66 | --scheme internet-facing \
67 | --type network \
68 | --output text --query 'LoadBalancers[].LoadBalancerArn')
69 | TARGET_GROUP_ARN=$(aws elbv2 create-target-group \
70 | --name kubernetes \
71 | --protocol TCP \
72 | --port 6443 \
73 | --vpc-id ${VPC_ID} \
74 | --target-type ip \
75 | --output text --query 'TargetGroups[].TargetGroupArn')
76 | aws elbv2 register-targets --target-group-arn ${TARGET_GROUP_ARN} --targets Id=10.0.1.1{0,1,2}
77 | aws elbv2 create-listener \
78 | --load-balancer-arn ${LOAD_BALANCER_ARN} \
79 | --protocol TCP \
80 | --port 443 \
81 | --default-actions Type=forward,TargetGroupArn=${TARGET_GROUP_ARN} \
82 | --output text --query 'Listeners[].ListenerArn'
83 | ```
84 |
85 | ```sh
86 | KUBERNETES_PUBLIC_ADDRESS=$(aws elbv2 describe-load-balancers \
87 | --load-balancer-arns ${LOAD_BALANCER_ARN} \
88 | --output text --query 'LoadBalancers[].DNSName')
89 | ```
90 |
91 | ## Compute Instances
92 |
93 | ### Instance Image
94 |
95 | ```sh
96 | IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 \
97 | --output json \
98 | --filters \
99 | 'Name=root-device-type,Values=ebs' \
100 | 'Name=architecture,Values=x86_64' \
101 | 'Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*' \
102 | | jq -r '.Images|sort_by(.Name)[-1]|.ImageId')
103 | ```
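The `jq` filter above sorts the returned images by `Name` and takes the last (newest) entry's `ImageId`. The same `sort_by`/`[-1]` pattern can be sanity-checked locally against a made-up payload (the image names and IDs below are hypothetical):

```sh
# Fake describe-images output with two image versions.
cat > /tmp/images.json <<'EOF'
{"Images": [
  {"Name": "ubuntu-focal-20240101", "ImageId": "ami-newer"},
  {"Name": "ubuntu-focal-20200101", "ImageId": "ami-older"}
]}
EOF

# sort_by(.Name) orders ascending, [-1] picks the newest, .ImageId extracts the ID.
jq -r '.Images|sort_by(.Name)[-1]|.ImageId' /tmp/images.json
```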
104 |
105 | ### SSH Key Pair
106 |
107 | ```sh
108 | aws ec2 create-key-pair --key-name kubernetes --output text --query 'KeyMaterial' > kubernetes.id_rsa
109 | chmod 600 kubernetes.id_rsa
110 | ```
111 |
112 | ### Kubernetes Controllers
113 |
114 | Using `t3.micro` instances:
115 |
116 | ```sh
117 | for i in 0 1 2; do
118 | instance_id=$(aws ec2 run-instances \
119 | --associate-public-ip-address \
120 | --image-id ${IMAGE_ID} \
121 | --count 1 \
122 | --key-name kubernetes \
123 | --security-group-ids ${SECURITY_GROUP_ID} \
124 | --instance-type t3.micro \
125 | --private-ip-address 10.0.1.1${i} \
126 | --user-data "name=controller-${i}" \
127 | --subnet-id ${SUBNET_ID} \
128 | --block-device-mappings='{"DeviceName": "/dev/sda1", "Ebs": { "VolumeSize": 50 }, "NoDevice": "" }' \
129 | --output text --query 'Instances[].InstanceId')
130 | aws ec2 modify-instance-attribute --instance-id ${instance_id} --no-source-dest-check
131 | aws ec2 create-tags --resources ${instance_id} --tags "Key=Name,Value=controller-${i}"
132 | echo "controller-${i} created "
133 | done
134 | ```
135 |
136 | ### Kubernetes Workers
137 |
138 | ```sh
139 | for i in 0 1 2; do
140 | instance_id=$(aws ec2 run-instances \
141 | --associate-public-ip-address \
142 | --image-id ${IMAGE_ID} \
143 | --count 1 \
144 | --key-name kubernetes \
145 | --security-group-ids ${SECURITY_GROUP_ID} \
146 | --instance-type t3.micro \
147 | --private-ip-address 10.0.1.2${i} \
148 | --user-data "name=worker-${i}|pod-cidr=10.200.${i}.0/24" \
149 | --subnet-id ${SUBNET_ID} \
150 | --block-device-mappings='{"DeviceName": "/dev/sda1", "Ebs": { "VolumeSize": 50 }, "NoDevice": "" }' \
151 | --output text --query 'Instances[].InstanceId')
152 | aws ec2 modify-instance-attribute --instance-id ${instance_id} --no-source-dest-check
153 | aws ec2 create-tags --resources ${instance_id} --tags "Key=Name,Value=worker-${i}"
154 | echo "worker-${i} created"
155 | done
156 | ```
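The worker `user-data` string packs the hostname and the pod CIDR into one value, separated by `|`; later labs read these fields back on the worker. A local sketch of that parsing (the string below mirrors the format used above):

```sh
# Example user-data value as written for worker-1.
user_data='name=worker-1|pod-cidr=10.200.1.0/24'

# Split on '|' and strip the key= prefixes.
name=$(echo "${user_data}" | tr '|' '\n' | grep '^name=' | cut -d= -f2)
pod_cidr=$(echo "${user_data}" | tr '|' '\n' | grep '^pod-cidr=' | cut -d= -f2)
echo "${name} ${pod_cidr}"
```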
157 |
158 | Next: [Certificate Authority](04-certificate-authority.md)
159 |
--------------------------------------------------------------------------------
/docs/04-certificate-authority.md:
--------------------------------------------------------------------------------
1 | # Provisioning a CA and Generating TLS Certificates
2 |
3 | In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
4 |
5 | ## Certificate Authority
6 |
7 | In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
8 |
9 | Generate the CA configuration file, certificate, and private key:
10 |
11 | ```
12 | cat > ca-config.json < ca-csr.json < admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
103 |
104 | Generate a certificate and private key for each Kubernetes worker node:
105 |
106 | ```
107 | for i in 0 1 2; do
108 | instance="worker-${i}"
109 | instance_hostname="ip-10-0-1-2${i}"
110 | cat > ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
318 |
319 | Results:
320 |
321 | ```
322 | kubernetes-key.pem
323 | kubernetes.pem
324 | ```
325 |
326 | ## The Service Account Key Pair
327 |
328 | The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
329 |
330 | Generate the `service-account` certificate and private key:
331 |
332 | ```
333 | cat > service-account-csr.json < The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
400 |
401 | Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
402 |
--------------------------------------------------------------------------------
/docs/05-kubernetes-configuration-files.md:
--------------------------------------------------------------------------------
1 | # Generating Kubernetes Configuration Files for Authentication
2 |
3 | In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
4 |
5 | ## Client Authentication Configs
6 |
7 | In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
8 |
9 | ### Kubernetes Public DNS Address
10 |
11 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability, the DNS name assigned to the external load balancer fronting the Kubernetes API Servers will be used.
12 |
13 | Retrieve the `kubernetes-the-hard-way` DNS address:
14 |
15 | ```
16 | KUBERNETES_PUBLIC_ADDRESS=$(aws elbv2 describe-load-balancers \
17 | --load-balancer-arns ${LOAD_BALANCER_ARN} \
18 | --output text --query 'LoadBalancers[0].DNSName')
19 | ```
20 |
21 | ### The kubelet Kubernetes Configuration File
22 |
23 | When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
24 |
25 | > The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
26 |
27 | Generate a kubeconfig file for each worker node:
28 |
29 | ```
30 | for instance in worker-0 worker-1 worker-2; do
31 | kubectl config set-cluster kubernetes-the-hard-way \
32 | --certificate-authority=ca.pem \
33 | --embed-certs=true \
34 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:443 \
35 | --kubeconfig=${instance}.kubeconfig
36 |
37 | kubectl config set-credentials system:node:${instance} \
38 | --client-certificate=${instance}.pem \
39 | --client-key=${instance}-key.pem \
40 | --embed-certs=true \
41 | --kubeconfig=${instance}.kubeconfig
42 |
43 | kubectl config set-context default \
44 | --cluster=kubernetes-the-hard-way \
45 | --user=system:node:${instance} \
46 | --kubeconfig=${instance}.kubeconfig
47 |
48 | kubectl config use-context default --kubeconfig=${instance}.kubeconfig
49 | done
50 | ```
51 |
52 | Results:
53 |
54 | ```
55 | worker-0.kubeconfig
56 | worker-1.kubeconfig
57 | worker-2.kubeconfig
58 | ```
59 |
60 | ### The kube-proxy Kubernetes Configuration File
61 |
62 | Generate a kubeconfig file for the `kube-proxy` service:
63 |
64 | ```
65 | kubectl config set-cluster kubernetes-the-hard-way \
66 | --certificate-authority=ca.pem \
67 | --embed-certs=true \
68 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:443 \
69 | --kubeconfig=kube-proxy.kubeconfig
70 |
71 | kubectl config set-credentials system:kube-proxy \
72 | --client-certificate=kube-proxy.pem \
73 | --client-key=kube-proxy-key.pem \
74 | --embed-certs=true \
75 | --kubeconfig=kube-proxy.kubeconfig
76 |
77 | kubectl config set-context default \
78 | --cluster=kubernetes-the-hard-way \
79 | --user=system:kube-proxy \
80 | --kubeconfig=kube-proxy.kubeconfig
81 |
82 | kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
83 | ```
84 |
85 | Results:
86 |
87 | ```
88 | kube-proxy.kubeconfig
89 | ```
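The generated kubeconfig is a YAML file with this general shape (abridged sketch; the embedded base64 certificate data is elided here, and the placeholder values are illustrative):

```
apiVersion: v1
kind: Config
clusters:
- name: kubernetes-the-hard-way
  cluster:
    certificate-authority-data: <base64-encoded CA certificate>
    server: https://<KUBERNETES_PUBLIC_ADDRESS>:443
users:
- name: system:kube-proxy
  user:
    client-certificate-data: <base64-encoded client certificate>
    client-key-data: <base64-encoded client key>
contexts:
- name: default
  context:
    cluster: kubernetes-the-hard-way
    user: system:kube-proxy
current-context: default
```

Because `--embed-certs=true` was used, the certificates are inlined as data rather than referenced by file path, so the kubeconfig is portable to other machines.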
90 |
91 | ### The kube-controller-manager Kubernetes Configuration File
92 |
93 | Generate a kubeconfig file for the `kube-controller-manager` service:
94 |
95 | ```
96 | kubectl config set-cluster kubernetes-the-hard-way \
97 | --certificate-authority=ca.pem \
98 | --embed-certs=true \
99 | --server=https://127.0.0.1:6443 \
100 | --kubeconfig=kube-controller-manager.kubeconfig
101 |
102 | kubectl config set-credentials system:kube-controller-manager \
103 | --client-certificate=kube-controller-manager.pem \
104 | --client-key=kube-controller-manager-key.pem \
105 | --embed-certs=true \
106 | --kubeconfig=kube-controller-manager.kubeconfig
107 |
108 | kubectl config set-context default \
109 | --cluster=kubernetes-the-hard-way \
110 | --user=system:kube-controller-manager \
111 | --kubeconfig=kube-controller-manager.kubeconfig
112 |
113 | kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
114 | ```
115 |
116 | Results:
117 |
118 | ```
119 | kube-controller-manager.kubeconfig
120 | ```
121 |
122 |
123 | ### The kube-scheduler Kubernetes Configuration File
124 |
125 | Generate a kubeconfig file for the `kube-scheduler` service:
126 |
127 | ```
128 | kubectl config set-cluster kubernetes-the-hard-way \
129 | --certificate-authority=ca.pem \
130 | --embed-certs=true \
131 | --server=https://127.0.0.1:6443 \
132 | --kubeconfig=kube-scheduler.kubeconfig
133 |
134 | kubectl config set-credentials system:kube-scheduler \
135 | --client-certificate=kube-scheduler.pem \
136 | --client-key=kube-scheduler-key.pem \
137 | --embed-certs=true \
138 | --kubeconfig=kube-scheduler.kubeconfig
139 |
140 | kubectl config set-context default \
141 | --cluster=kubernetes-the-hard-way \
142 | --user=system:kube-scheduler \
143 | --kubeconfig=kube-scheduler.kubeconfig
144 |
145 | kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
146 | ```
147 |
148 | Results:
149 |
150 | ```
151 | kube-scheduler.kubeconfig
152 | ```
153 |
154 | ### The admin Kubernetes Configuration File
155 |
156 | Generate a kubeconfig file for the `admin` user:
157 |
158 | ```
159 | kubectl config set-cluster kubernetes-the-hard-way \
160 | --certificate-authority=ca.pem \
161 | --embed-certs=true \
162 | --server=https://127.0.0.1:6443 \
163 | --kubeconfig=admin.kubeconfig
164 |
165 | kubectl config set-credentials admin \
166 | --client-certificate=admin.pem \
167 | --client-key=admin-key.pem \
168 | --embed-certs=true \
169 | --kubeconfig=admin.kubeconfig
170 |
171 | kubectl config set-context default \
172 | --cluster=kubernetes-the-hard-way \
173 | --user=admin \
174 | --kubeconfig=admin.kubeconfig
175 |
176 | kubectl config use-context default --kubeconfig=admin.kubeconfig
177 | ```
178 |
179 | Results:
180 |
181 | ```
182 | admin.kubeconfig
183 | ```
184 |
185 |
187 |
188 | ## Distribute the Kubernetes Configuration Files
189 |
190 | Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
191 |
192 | ```
193 | for instance in worker-0 worker-1 worker-2; do
194 | external_ip=$(aws ec2 describe-instances --filters \
195 | "Name=tag:Name,Values=${instance}" \
196 | "Name=instance-state-name,Values=running" \
197 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
198 |
199 | scp -i kubernetes.id_rsa \
200 | ${instance}.kubeconfig kube-proxy.kubeconfig ubuntu@${external_ip}:~/
201 | done
202 | ```
203 |
204 | Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
205 |
206 | ```
207 | for instance in controller-0 controller-1 controller-2; do
208 | external_ip=$(aws ec2 describe-instances --filters \
209 | "Name=tag:Name,Values=${instance}" \
210 | "Name=instance-state-name,Values=running" \
211 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
212 |
213 | scp -i kubernetes.id_rsa \
214 | admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ubuntu@${external_ip}:~/
215 | done
216 | ```
217 |
218 | Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
219 |
--------------------------------------------------------------------------------
/docs/06-data-encryption-keys.md:
--------------------------------------------------------------------------------
1 | # Generating the Data Encryption Config and Key
2 |
3 | Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
4 |
5 | In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
6 |
7 | ## The Encryption Key
8 |
9 | Generate an encryption key:
10 |
11 | ```
12 | ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
13 | ```
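The encryption config (which uses the `aescbc` provider in the upstream tutorial) requires a 32-byte key. The base64 value generated above can be sanity-checked by decoding it back — a quick local check, not part of the original steps:

```
# Generate a key the same way as above, then decode and count the raw bytes.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
KEY_BYTES=$(echo -n "${ENCRYPTION_KEY}" | base64 -d | wc -c | tr -d ' ')
echo "${KEY_BYTES} bytes"
```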
14 |
15 | ## The Encryption Config File
16 |
17 | Create the `encryption-config.yaml` encryption config file:
18 |
19 | ```
20 | cat > encryption-config.yaml < Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
111 |
112 | ## Verification
113 |
114 | List the etcd cluster members:
115 |
116 | ```
117 | sudo ETCDCTL_API=3 etcdctl member list \
118 | --endpoints=https://127.0.0.1:2379 \
119 | --cacert=/etc/etcd/ca.pem \
120 | --cert=/etc/etcd/kubernetes.pem \
121 | --key=/etc/etcd/kubernetes-key.pem
122 | ```
123 |
124 | > output
125 |
126 | ```
127 | bbeedf10f5bbaa0c, started, controller-2, https://10.0.1.12:2380, https://10.0.1.12:2379, false
128 | f9b0e395cb8278dc, started, controller-0, https://10.0.1.10:2380, https://10.0.1.10:2379, false
129 | eecdfcb7e79fc5dd, started, controller-1, https://10.0.1.11:2380, https://10.0.1.11:2379, false
130 | ```
131 |
132 | Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
133 |
--------------------------------------------------------------------------------
/docs/08-bootstrapping-kubernetes-controllers.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the Kubernetes Control Plane
2 |
3 | In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `ssh` command. Example:
8 |
9 | ```
10 | for instance in controller-0 controller-1 controller-2; do
11 | external_ip=$(aws ec2 describe-instances --filters \
12 | "Name=tag:Name,Values=${instance}" \
13 | "Name=instance-state-name,Values=running" \
14 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
15 |
16 | echo ssh -i kubernetes.id_rsa ubuntu@$external_ip
17 | done
18 | ```
19 |
20 | Now SSH into each of the IP addresses printed in the previous step.
21 |
22 | ### Running commands in parallel with tmux
23 |
24 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
25 |
26 | ## Provision the Kubernetes Control Plane
27 |
28 | Create the Kubernetes configuration directory:
29 |
30 | ```
31 | sudo mkdir -p /etc/kubernetes/config
32 | ```
33 |
34 | ### Download and Install the Kubernetes Controller Binaries
35 |
36 | Download the official Kubernetes release binaries:
37 |
38 | ```
39 | wget -q --show-progress --https-only --timestamping \
40 | "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-apiserver" \
41 | "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-controller-manager" \
42 | "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-scheduler" \
43 | "https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl"
44 | ```
45 |
46 | Install the Kubernetes binaries:
47 |
48 | ```
49 | chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
50 | sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
51 | ```
52 |
53 | ### Configure the Kubernetes API Server
54 |
55 | ```
56 | sudo mkdir -p /var/lib/kubernetes/
57 |
58 | sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
59 | service-account-key.pem service-account.pem \
60 | encryption-config.yaml /var/lib/kubernetes/
61 | ```
62 |
63 | The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current compute instance:
64 |
65 | ```
66 | INTERNAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
67 | ```
68 |
69 | Create the `kube-apiserver.service` systemd unit file:
70 |
71 | ```
72 | cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
204 |
205 | ### Verification
206 |
207 | ```
208 | kubectl cluster-info --kubeconfig admin.kubeconfig
209 | ```
210 |
211 | ```
212 | Kubernetes control plane is running at https://127.0.0.1:6443
213 | ```
214 |
215 | > Remember to run the above command on each controller node: `controller-0`, `controller-1`, and `controller-2`.
216 |
217 | ### Add Host File Entries
218 |
219 | In order for `kubectl exec` commands to work, the controller nodes must each
220 | be able to resolve the worker hostnames. This is not set up by default in
221 | AWS. The workaround is to add manual host entries on each of the controller
222 | nodes with this command:
223 |
224 | ```
225 | cat < If this step is missed, the [DNS Cluster Add-on](12-dns-addon.md) testing will
233 | fail with an error like this: `Error from server: error dialing backend: dial tcp: lookup ip-10-0-1-22 on 127.0.0.53:53: server misbehaving`
234 |
235 | ## RBAC for Kubelet Authorization
236 |
237 | In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
238 |
239 | > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
240 |
241 | The commands in this section will affect the entire cluster and only need to be run once, from one of the controller nodes.
242 |
243 | ```
244 | external_ip=$(aws ec2 describe-instances --filters \
245 | "Name=tag:Name,Values=controller-0" \
246 | "Name=instance-state-name,Values=running" \
247 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
248 |
249 | ssh -i kubernetes.id_rsa ubuntu@${external_ip}
250 | ```
251 |
252 | Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
253 |
254 | ```
255 | cat < The compute instances created in this tutorial will not have permission to complete this section. **Run the following commands from the same machine used to create the compute instances**.
303 |
304 | Retrieve the `kubernetes-the-hard-way` Load Balancer address:
305 |
306 | ```
307 | KUBERNETES_PUBLIC_ADDRESS=$(aws elbv2 describe-load-balancers \
308 | --load-balancer-arns ${LOAD_BALANCER_ARN} \
309 | --output text --query 'LoadBalancers[].DNSName')
310 | ```
311 |
312 | Make an HTTP request for the Kubernetes version info:
313 |
314 | ```
315 | curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}/version
316 | ```
317 |
318 | > output
319 |
320 | ```
321 | {
322 | "major": "1",
323 | "minor": "21",
324 | "gitVersion": "v1.21.0",
325 | "gitCommit": "cb303e613a121a29364f75cc67d3d580833a7479",
326 | "gitTreeState": "clean",
327 | "buildDate": "2021-04-08T16:25:06Z",
328 | "goVersion": "go1.16.1",
329 | "compiler": "gc",
330 | "platform": "linux/amd64"
331 | }
332 | ```
333 |
334 | Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
335 |
--------------------------------------------------------------------------------
/docs/09-bootstrapping-kubernetes-workers.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the Kubernetes Worker Nodes
2 |
3 | In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `ssh` command. Example:
8 |
9 | ```
10 | for instance in worker-0 worker-1 worker-2; do
11 | external_ip=$(aws ec2 describe-instances --filters \
12 | "Name=tag:Name,Values=${instance}" \
13 | "Name=instance-state-name,Values=running" \
14 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
15 |
16 | echo ssh -i kubernetes.id_rsa ubuntu@$external_ip
17 | done
18 | ```
19 |
20 | Now SSH into each of the IP addresses printed in the previous step.
21 |
22 | ### Running commands in parallel with tmux
23 |
24 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
25 |
26 | ## Provisioning a Kubernetes Worker Node
27 |
28 | Install the OS dependencies:
29 |
30 | ```
31 | sudo apt-get update
32 | sudo apt-get -y install socat conntrack ipset
33 | ```
34 |
35 | > The socat binary enables support for the `kubectl port-forward` command.
36 |
37 | ### Disable Swap
38 |
39 | By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
40 |
41 | Verify if swap is enabled:
42 |
43 | ```
44 | sudo swapon --show
45 | ```
46 |
47 | If the output is empty, swap is not enabled. If swap is enabled, run the following command to disable it immediately:
48 |
49 | ```
50 | sudo swapoff -a
51 | ```
52 |
53 | > To ensure swap remains off after a reboot, consult your Linux distro's documentation.
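
On Ubuntu, one common way to keep swap disabled across reboots is to comment out any swap entries in `/etc/fstab`. A minimal sketch, assuming GNU `sed` (the default on Ubuntu); review the file before and after editing:

```sh
# Back up /etc/fstab, then comment out any line that mounts swap so the
# setting survives a reboot. Assumes GNU sed.
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```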
54 |
55 | ### Download and Install Worker Binaries
56 |
57 | ```
58 | wget -q --show-progress --https-only --timestamping \
59 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.21.0/crictl-v1.21.0-linux-amd64.tar.gz \
60 | https://github.com/opencontainers/runc/releases/download/v1.0.0-rc93/runc.amd64 \
61 | https://github.com/containernetworking/plugins/releases/download/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz \
62 | https://github.com/containerd/containerd/releases/download/v1.4.4/containerd-1.4.4-linux-amd64.tar.gz \
63 | https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubectl \
64 | https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kube-proxy \
65 | https://storage.googleapis.com/kubernetes-release/release/v1.21.0/bin/linux/amd64/kubelet
66 | ```
67 |
68 | Create the installation directories:
69 |
70 | ```
71 | sudo mkdir -p \
72 | /etc/cni/net.d \
73 | /opt/cni/bin \
74 | /var/lib/kubelet \
75 | /var/lib/kube-proxy \
76 | /var/lib/kubernetes \
77 | /var/run/kubernetes
78 | ```
79 |
80 | Install the worker binaries:
81 |
82 | ```
83 | mkdir containerd
84 | tar -xvf crictl-v1.21.0-linux-amd64.tar.gz
85 | tar -xvf containerd-1.4.4-linux-amd64.tar.gz -C containerd
86 | sudo tar -xvf cni-plugins-linux-amd64-v0.9.1.tgz -C /opt/cni/bin/
87 | sudo mv runc.amd64 runc
88 | chmod +x crictl kubectl kube-proxy kubelet runc
89 | sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
90 | sudo mv containerd/bin/* /bin/
91 | ```
92 |
93 | ### Configure CNI Networking
94 |
95 | Retrieve the Pod CIDR range for the current compute instance:
96 |
97 | ```
98 | POD_CIDR=$(curl -s http://169.254.169.254/latest/user-data/ \
99 | | tr "|" "\n" | grep "^pod-cidr" | cut -d"=" -f2)
100 | echo "${POD_CIDR}"
101 | ```
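
The pipeline above splits the `|`-delimited user-data string attached to the instance and extracts the `pod-cidr` value. The same parsing can be seen on a sample string in that format (the sample value here is illustrative):

```sh
# Demonstration of the parsing only: split the pipe-delimited user-data into
# lines, keep the pod-cidr entry, and take the value after "=".
user_data='name=worker-0|pod-cidr=10.200.0.0/24'
POD_CIDR=$(echo "${user_data}" | tr "|" "\n" | grep "^pod-cidr" | cut -d"=" -f2)
echo "${POD_CIDR}"   # → 10.200.0.0/24
```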
102 |
103 | Create the `bridge` network configuration file:
104 |
105 | ```
106 | cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
222 |
223 | Create the `kubelet.service` systemd unit file:
224 |
225 | ```
226 | cat < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
298 |
299 | ## Verification
300 |
301 | > The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances.
302 |
303 | List the registered Kubernetes nodes:
304 |
305 | ```
306 | external_ip=$(aws ec2 describe-instances --filters \
307 | "Name=tag:Name,Values=controller-0" \
308 | "Name=instance-state-name,Values=running" \
309 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
310 |
311 | ssh -i kubernetes.id_rsa ubuntu@${external_ip} kubectl get nodes --kubeconfig admin.kubeconfig
312 | ```
313 |
314 | > output
315 |
316 | ```
317 | NAME           STATUS   ROLES    AGE   VERSION
318 | ip-10-0-1-20   Ready    <none>   51s   v1.21.0
319 | ip-10-0-1-21   Ready    <none>   51s   v1.21.0
320 | ip-10-0-1-22   Ready    <none>   51s   v1.21.0
321 | ```
322 |
323 | Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
324 |
--------------------------------------------------------------------------------
/docs/10-configuring-kubectl.md:
--------------------------------------------------------------------------------
1 | # Configuring kubectl for Remote Access
2 |
3 | In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
4 |
5 | > Run the commands in this lab from the same directory used to generate the admin client certificates.
6 |
7 | ## The Admin Kubernetes Configuration File
8 |
9 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
10 |
11 | Generate a kubeconfig file suitable for authenticating as the `admin` user:
12 |
13 | ```
14 | KUBERNETES_PUBLIC_ADDRESS=$(aws elbv2 describe-load-balancers \
15 | --load-balancer-arns ${LOAD_BALANCER_ARN} \
16 | --output text --query 'LoadBalancers[].DNSName')
17 |
18 | kubectl config set-cluster kubernetes-the-hard-way \
19 | --certificate-authority=ca.pem \
20 | --embed-certs=true \
21 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:443
22 |
23 | kubectl config set-credentials admin \
24 | --client-certificate=admin.pem \
25 | --client-key=admin-key.pem
26 |
27 | kubectl config set-context kubernetes-the-hard-way \
28 | --cluster=kubernetes-the-hard-way \
29 | --user=admin
30 |
31 | kubectl config use-context kubernetes-the-hard-way
32 | ```
33 |
34 | ## Verification
35 |
36 | Check the version of the remote Kubernetes cluster:
37 |
38 | ```
39 | kubectl version
40 | ```
41 |
42 | > output
43 |
44 | ```
45 | Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
46 | Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
47 | ```
48 |
49 | List the nodes in the remote Kubernetes cluster:
50 |
51 | ```
52 | kubectl get nodes
53 | ```
54 |
55 | > output
56 |
57 | ```
58 | NAME           STATUS   ROLES    AGE     VERSION
59 | ip-10-0-1-20   Ready    <none>   3m35s   v1.21.0
60 | ip-10-0-1-21   Ready    <none>   3m35s   v1.21.0
61 | ip-10-0-1-22   Ready    <none>   3m35s   v1.21.0
62 | ```
63 |
64 | Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
65 |
--------------------------------------------------------------------------------
/docs/11-pod-network-routes.md:
--------------------------------------------------------------------------------
1 | # Provisioning Pod Network Routes
2 |
3 | Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html).
4 |
5 | In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
6 |
7 | > There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
8 |
9 | ## The Routing Table and Routes
10 |
11 | In this section you will gather the information required to create routes in the `kubernetes-the-hard-way` VPC network and use that to create route table entries.
12 |
13 | In production clusters this functionality is typically provided by CNI plugins such as Flannel, Calico, or amazon-vpc-cni-k8s. Doing it by hand makes it easier to understand what those plugins do behind the scenes.
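
For the instances in this tutorial the result is three routes, one per worker. A sketch of the expected pairing, assuming the internal IPs and Pod CIDR allocation used throughout this guide:

```sh
# worker-N has internal IP 10.0.1.2N and pod CIDR 10.200.N.0/24, so the route
# table needs one entry per worker mapping the CIDR to that instance.
for i in 0 1 2; do
  echo "10.200.${i}.0/24 via 10.0.1.2${i}"
done
```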
14 |
15 | Print the internal IP address and Pod CIDR range for each worker instance and create route table entries:
16 |
17 | ```sh
18 | for instance in worker-0 worker-1 worker-2; do
19 | instance_id_ip="$(aws ec2 describe-instances \
20 | --filters "Name=tag:Name,Values=${instance}" \
21 | --output text --query 'Reservations[].Instances[].[InstanceId,PrivateIpAddress]')"
22 | instance_id="$(echo "${instance_id_ip}" | cut -f1)"
23 | instance_ip="$(echo "${instance_id_ip}" | cut -f2)"
24 | pod_cidr="$(aws ec2 describe-instance-attribute \
25 | --instance-id "${instance_id}" \
26 | --attribute userData \
27 | --output text --query 'UserData.Value' \
28 | | base64 --decode | tr "|" "\n" | grep "^pod-cidr" | cut -d'=' -f2)"
29 | echo "${instance_ip} ${pod_cidr}"
30 |
31 | aws ec2 create-route \
32 | --route-table-id "${ROUTE_TABLE_ID}" \
33 | --destination-cidr-block "${pod_cidr}" \
34 | --instance-id "${instance_id}"
35 | done
36 | ```
37 |
38 | > output
39 |
40 | ```
41 | 10.0.1.20 10.200.0.0/24
42 | {
43 | "Return": true
44 | }
45 | 10.0.1.21 10.200.1.0/24
46 | {
47 | "Return": true
48 | }
49 | 10.0.1.22 10.200.2.0/24
50 | {
51 | "Return": true
52 | }
53 | ```
54 |
55 | ## Validate Routes
56 |
57 | Validate network routes for each worker instance:
58 |
59 | ```sh
60 | aws ec2 describe-route-tables \
61 | --route-table-ids "${ROUTE_TABLE_ID}" \
62 | --query 'RouteTables[].Routes'
63 | ```
64 |
65 | > output
66 |
67 | ```
68 | [
69 | [
70 | {
71 | "DestinationCidrBlock": "10.200.0.0/24",
72 | "InstanceId": "i-0879fa49c49be1a3e",
73 | "InstanceOwnerId": "107995894928",
74 | "NetworkInterfaceId": "eni-0612e82f1247c6282",
75 | "Origin": "CreateRoute",
76 | "State": "active"
77 | },
78 | {
79 | "DestinationCidrBlock": "10.200.1.0/24",
80 | "InstanceId": "i-0db245a70483daa43",
81 | "InstanceOwnerId": "107995894928",
82 | "NetworkInterfaceId": "eni-0db39a19f4f3970f8",
83 | "Origin": "CreateRoute",
84 | "State": "active"
85 | },
86 | {
87 | "DestinationCidrBlock": "10.200.2.0/24",
88 | "InstanceId": "i-0b93625175de8ee43",
89 | "InstanceOwnerId": "107995894928",
90 | "NetworkInterfaceId": "eni-0cc95f34f747734d3",
91 | "Origin": "CreateRoute",
92 | "State": "active"
93 | },
94 | {
95 | "DestinationCidrBlock": "10.0.0.0/16",
96 | "GatewayId": "local",
97 | "Origin": "CreateRouteTable",
98 | "State": "active"
99 | },
100 | {
101 | "DestinationCidrBlock": "0.0.0.0/0",
102 | "GatewayId": "igw-00d618a99e45fa508",
103 | "Origin": "CreateRoute",
104 | "State": "active"
105 | }
106 | ]
107 | ]
108 | ```
109 |
110 | Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
111 |
--------------------------------------------------------------------------------
/docs/12-dns-addon.md:
--------------------------------------------------------------------------------
1 | # Deploying the DNS Cluster Add-on
2 |
3 | In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
4 |
5 | ## The DNS Cluster Add-on
6 |
7 | Deploy the `coredns` cluster add-on:
8 |
9 | ```
10 | kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.8.yaml
11 | ```
12 |
13 | > output
14 |
15 | ```
16 | serviceaccount/coredns created
17 | clusterrole.rbac.authorization.k8s.io/system:coredns created
18 | clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
19 | configmap/coredns created
20 | deployment.apps/coredns created
21 | service/kube-dns created
22 | ```
23 |
24 | List the pods created by the `coredns` deployment:
25 |
26 | ```
27 | kubectl get pods -l k8s-app=kube-dns -n kube-system
28 | ```
29 |
30 | > output
31 |
32 | ```
33 | NAME READY STATUS RESTARTS AGE
34 | coredns-8494f9c688-hh7r2 1/1 Running 0 10s
35 | coredns-8494f9c688-zqrj2 1/1 Running 0 10s
36 | ```
37 |
38 | ## Verification
39 |
40 | Create a `busybox` deployment:
41 |
42 | ```
43 | kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
44 | ```
45 |
46 | List the pod created by the `busybox` deployment:
47 |
48 | ```
49 | kubectl get pods -l run=busybox
50 | ```
51 |
52 | > output
53 |
54 | ```
55 | NAME READY STATUS RESTARTS AGE
56 | busybox 1/1 Running 0 3s
57 | ```
58 |
59 | Retrieve the full name of the `busybox` pod:
60 |
61 | ```
62 | POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
63 | ```
64 |
65 | Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
66 |
67 | ```
68 | kubectl exec -ti $POD_NAME -- nslookup kubernetes
69 | ```
70 |
71 | > output
72 |
73 | ```
74 | Server: 10.32.0.10
75 | Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
76 |
77 | Name: kubernetes
78 | Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
79 | ```
80 |
81 | Next: [Smoke Test](13-smoke-test.md)
82 |
--------------------------------------------------------------------------------
/docs/13-smoke-test.md:
--------------------------------------------------------------------------------
1 | # Smoke Test
2 |
3 | In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
4 |
5 | ## Data Encryption
6 |
7 | In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
8 |
9 | Create a generic secret:
10 |
11 | ```
12 | kubectl create secret generic kubernetes-the-hard-way \
13 | --from-literal="mykey=mydata"
14 | ```
15 |
16 | Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
17 |
18 | ```sh
19 | external_ip=$(aws ec2 describe-instances --filters \
20 | "Name=tag:Name,Values=controller-0" \
21 | "Name=instance-state-name,Values=running" \
22 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
23 |
24 | ssh -i kubernetes.id_rsa ubuntu@${external_ip} \
25 | "sudo ETCDCTL_API=3 etcdctl get \
26 | --endpoints=https://127.0.0.1:2379 \
27 | --cacert=/etc/etcd/ca.pem \
28 | --cert=/etc/etcd/kubernetes.pem \
29 | --key=/etc/etcd/kubernetes-key.pem \
30 | /registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
31 | ```
32 |
33 | > output
34 |
35 | ```
36 | 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
37 | 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
38 | 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
39 | 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
40 | 00000040 3a 76 31 3a 6b 65 79 31 3a 97 d1 2c cd 89 0d 08 |:v1:key1:..,....|
41 | 00000050 29 3c 7d 19 41 cb ea d7 3d 50 45 88 82 a3 1f 11 |)<}.A...=PE.....|
42 | 00000060 26 cb 43 2e c8 cf 73 7d 34 7e b1 7f 9f 71 d2 51 |&.C...s}4~...q.Q|
43 | 00000070 45 05 16 e9 07 d4 62 af f8 2e 6d 4a cf c8 e8 75 |E.....b...mJ...u|
44 | 00000080 6b 75 1e b7 64 db 7d 7f fd f3 96 62 e2 a7 ce 22 |ku..d.}....b..."|
45 | 00000090 2b 2a 82 01 c3 f5 83 ae 12 8b d5 1d 2e e6 a9 90 |+*..............|
46 | 000000a0 bd f0 23 6c 0c 55 e2 52 18 78 fe bf 6d 76 ea 98 |..#l.U.R.x..mv..|
47 | 000000b0 fc 2c 17 36 e3 40 87 15 25 13 be d6 04 88 68 5b |.,.6.@..%.....h[|
48 | 000000c0 a4 16 81 f6 8e 3b 10 46 cb 2c ba 21 35 0c 5b 49 |.....;.F.,.!5.[I|
49 | 000000d0 e5 27 20 4c b3 8e 6b d0 91 c2 28 f1 cc fa 6a 1b |.' L..k...(...j.|
50 | 000000e0 31 19 74 e7 a5 66 6a 99 1c 84 c7 e0 b0 fc 32 86 |1.t..fj.......2.|
51 | 000000f0 f3 29 5a a4 1c d5 a4 e3 63 26 90 95 1e 27 d0 14 |.)Z.....c&...'..|
52 | 00000100 94 f0 ac 1a cd 0d b9 4b ae 32 02 a0 f8 b7 3f 0b |.......K.2....?.|
53 | 00000110 6f ad 1f 4d 15 8a d6 68 95 63 cf 7d 04 9a 52 71 |o..M...h.c.}..Rq|
54 | 00000120 75 ff 87 6b c5 42 e1 72 27 b5 e9 1a fe e8 c0 3f |u..k.B.r'......?|
55 | 00000130 d9 04 5e eb 5d 43 0d 90 ce fa 04 a8 4a b0 aa 01 |..^.]C......J...|
56 | 00000140 cf 6d 5b 80 70 5b 99 3c d6 5c c0 dc d1 f5 52 4a |.m[.p[.<.\....RJ|
57 | 00000150 2c 2d 28 5a 63 57 8e 4f df 0a |,-(ZcW.O..|
58 | 0000015a
59 | ```
60 |
61 | The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
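
If you prefer a scripted check over reading the hexdump, the prefix can be asserted directly. A sketch, where `stored` stands in for the raw value returned by the `etcdctl get` command above:

```sh
# Check the encryption prefix instead of eyeballing the hexdump. "stored" is a
# placeholder for the raw etcdctl output; the ciphertext shown is not real.
stored='k8s:enc:aescbc:v1:key1:<ciphertext>'
if printf '%s' "${stored}" | grep -q 'k8s:enc:aescbc:v1:key1:'; then
  echo "secret is encrypted at rest with the aescbc provider (key1)"
else
  echo "WARNING: secret may be stored in plaintext" >&2
fi
```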
62 |
63 | ## Deployments (run from your local machine)
64 |
65 | In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
66 |
67 | Create a deployment for the [nginx](https://nginx.org/en/) web server:
68 |
69 | ```
70 | kubectl create deployment nginx --image=nginx
71 | ```
72 |
73 | List the pod created by the `nginx` deployment:
74 |
75 | ```
76 | kubectl get pods -l app=nginx
77 | ```
78 |
79 | > output
80 |
81 | ```
82 | NAME READY STATUS RESTARTS AGE
83 | nginx-f89759699-kpn5m 1/1 Running 0 10s
84 | ```
85 |
86 | ### Port Forwarding
87 |
88 | In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
89 |
90 | Retrieve the full name of the `nginx` pod:
91 |
92 | ```
93 | POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
94 | ```
95 |
96 | Forward port `8080` on your local machine to port `80` of the `nginx` pod:
97 |
98 | ```
99 | kubectl port-forward $POD_NAME 8080:80
100 | ```
101 |
102 | > output
103 |
104 | ```
105 | Forwarding from 127.0.0.1:8080 -> 80
106 | Forwarding from [::1]:8080 -> 80
107 | ```
108 |
109 | In a new terminal make an HTTP request using the forwarding address:
110 |
111 | ```
112 | curl --head http://127.0.0.1:8080
113 | ```
114 |
115 | > output
116 |
117 | ```
118 | HTTP/1.1 200 OK
119 | Server: nginx/1.21.1
120 | Date: Sat, 07 Aug 2021 21:08:34 GMT
121 | Content-Type: text/html
122 | Content-Length: 612
123 | Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
124 | Connection: keep-alive
125 | ETag: "60e46fc5-264"
126 | Accept-Ranges: bytes
127 | ```
128 |
129 | Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
130 |
131 | ```
132 | Forwarding from 127.0.0.1:8080 -> 80
133 | Forwarding from [::1]:8080 -> 80
134 | Handling connection for 8080
135 | ^C
136 | ```
137 |
138 | ### Logs
139 |
140 | In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
141 |
142 | Print the `nginx` pod logs:
143 |
144 | ```
145 | kubectl logs $POD_NAME
146 | ```
147 |
148 | > output
149 |
150 | ```
151 | ...
152 | 127.0.0.1 - - [07/Aug/2021:21:08:34 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.1" "-"
153 | ```
154 |
155 | ### Exec
156 |
157 | In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
158 |
159 | Print the nginx version by executing the `nginx -v` command in the `nginx` container:
160 |
161 | ```
162 | kubectl exec -ti $POD_NAME -- nginx -v
163 | ```
164 |
165 | > output
166 |
167 | ```
168 | nginx version: nginx/1.21.1
169 | ```
170 |
171 | ## Services
172 |
173 | In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
174 |
175 | Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
176 |
177 | ```
178 | kubectl expose deployment nginx --port 80 --type NodePort
179 | ```
180 |
181 | > The LoadBalancer service type cannot be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
182 |
183 | Retrieve the node port assigned to the `nginx` service:
184 |
185 | ```
186 | NODE_PORT=$(kubectl get svc nginx \
187 | --output=jsonpath='{.spec.ports[0].nodePort}')
188 | ```
189 |
190 | Create a firewall rule that allows remote access to the `nginx` node port:
191 |
192 | ```
193 | aws ec2 authorize-security-group-ingress \
194 | --group-id ${SECURITY_GROUP_ID} \
195 | --protocol tcp \
196 | --port ${NODE_PORT} \
197 | --cidr 0.0.0.0/0
198 | ```
199 |
200 | Get the worker node name where the `nginx` pod is running:
201 |
202 | ```
203 | INSTANCE_NAME=$(kubectl get pod $POD_NAME --output=jsonpath='{.spec.nodeName}')
204 | ```
205 |
206 | Retrieve the external IP address of a worker instance:
207 |
208 | ```
209 | EXTERNAL_IP=$(aws ec2 describe-instances --filters \
210 | "Name=instance-state-name,Values=running" \
211 | "Name=network-interface.private-dns-name,Values=${INSTANCE_NAME}.*.internal*" \
212 | --output text --query 'Reservations[].Instances[].PublicIpAddress')
213 | ```
214 |
215 | Make an HTTP request using the external IP address and the `nginx` node port:
216 |
217 | ```
218 | curl -I http://${EXTERNAL_IP}:${NODE_PORT}
219 | ```
220 |
221 | > output
222 |
223 | ```
224 | HTTP/1.1 200 OK
225 | Server: nginx/1.21.1
226 | Date: Sat, 07 Aug 2021 21:16:44 GMT
227 | Content-Type: text/html
228 | Content-Length: 612
229 | Last-Modified: Tue, 06 Jul 2021 14:59:17 GMT
230 | Connection: keep-alive
231 | ETag: "60e46fc5-264"
232 | Accept-Ranges: bytes
233 | ```
234 |
235 | Next: [Cleaning Up](14-cleanup.md)
236 |
--------------------------------------------------------------------------------
/docs/14-cleanup.md:
--------------------------------------------------------------------------------
1 | # Cleaning Up
2 |
3 | In this lab you will delete the compute resources created during this tutorial.
4 |
5 | ## Compute Instances
6 |
7 | Delete all worker instances, then delete the controller instances:
8 |
9 | ```
10 | echo "Issuing shutdown to worker nodes.. " && \
11 | aws ec2 terminate-instances \
12 | --instance-ids \
13 | $(aws ec2 describe-instances --filters \
14 | "Name=tag:Name,Values=worker-0,worker-1,worker-2" \
15 | "Name=instance-state-name,Values=running" \
16 | --output text --query 'Reservations[].Instances[].InstanceId')
17 |
18 | echo "Waiting for worker nodes to finish terminating.. " && \
19 | aws ec2 wait instance-terminated \
20 | --instance-ids \
21 | $(aws ec2 describe-instances \
22 | --filter "Name=tag:Name,Values=worker-0,worker-1,worker-2" \
23 | --output text --query 'Reservations[].Instances[].InstanceId')
24 |
25 | echo "Issuing shutdown to master nodes.. " && \
26 | aws ec2 terminate-instances \
27 | --instance-ids \
28 | $(aws ec2 describe-instances --filter \
29 | "Name=tag:Name,Values=controller-0,controller-1,controller-2" \
30 | "Name=instance-state-name,Values=running" \
31 | --output text --query 'Reservations[].Instances[].InstanceId')
32 |
33 | echo "Waiting for master nodes to finish terminating.. " && \
34 | aws ec2 wait instance-terminated \
35 | --instance-ids \
36 | $(aws ec2 describe-instances \
37 | --filter "Name=tag:Name,Values=controller-0,controller-1,controller-2" \
38 | --output text --query 'Reservations[].Instances[].InstanceId')
39 |
40 | aws ec2 delete-key-pair --key-name kubernetes
41 | ```
42 |
43 | ## Networking
44 |
45 | Delete the external load balancer network resources:
46 |
47 | ```
48 | aws elbv2 delete-load-balancer --load-balancer-arn "${LOAD_BALANCER_ARN}"
49 | aws elbv2 delete-target-group --target-group-arn "${TARGET_GROUP_ARN}"
50 | aws ec2 delete-security-group --group-id "${SECURITY_GROUP_ID}"
51 | ROUTE_TABLE_ASSOCIATION_ID="$(aws ec2 describe-route-tables \
52 | --route-table-ids "${ROUTE_TABLE_ID}" \
53 | --output text --query 'RouteTables[].Associations[].RouteTableAssociationId')"
54 | aws ec2 disassociate-route-table --association-id "${ROUTE_TABLE_ASSOCIATION_ID}"
55 | aws ec2 delete-route-table --route-table-id "${ROUTE_TABLE_ID}"
56 | echo "Waiting a minute for all public address(es) to be unmapped.. " && sleep 60
57 |
58 | aws ec2 detach-internet-gateway \
59 | --internet-gateway-id "${INTERNET_GATEWAY_ID}" \
60 | --vpc-id "${VPC_ID}"
61 | aws ec2 delete-internet-gateway --internet-gateway-id "${INTERNET_GATEWAY_ID}"
62 | aws ec2 delete-subnet --subnet-id "${SUBNET_ID}"
63 | aws ec2 delete-vpc --vpc-id "${VPC_ID}"
64 |
65 | ```
66 |
--------------------------------------------------------------------------------
/docs/images/tmux-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/prabhatsharma/kubernetes-the-hard-way-aws/d942fd39bc0f66c9bb6ffd275044767af396c3ca/docs/images/tmux-screenshot.png
--------------------------------------------------------------------------------