├── .gitignore
├── CONTRIBUTING.md
├── COPYRIGHT.md
├── LICENSE
├── README.md
├── cloudformation
│   ├── hard-k8s-eip.cfn.yml
│   ├── hard-k8s-master-nodes.cfn.yml
│   ├── hard-k8s-network.cfn.yml
│   ├── hard-k8s-nlb.cfn.yml
│   ├── hard-k8s-nodeport-sg-ingress.cfn.yml
│   ├── hard-k8s-pod-routes.cfn.yml
│   ├── hard-k8s-security-groups.cfn.yml
│   └── hard-k8s-worker-nodes.cfn.yml
├── deployments
│   ├── coredns.yaml
│   └── kube-dns.yaml
├── docs
│   ├── 01-prerequisites.md
│   ├── 02-client-tools.md
│   ├── 03-compute-resources.md
│   ├── 04-certificate-authority.md
│   ├── 05-kubernetes-configuration-files.md
│   ├── 06-data-encryption-keys.md
│   ├── 07-bootstrapping-etcd.md
│   ├── 08-bootstrapping-kubernetes-controllers.md
│   ├── 09-bootstrapping-kubernetes-workers.md
│   ├── 10-configuring-kubectl.md
│   ├── 11-pod-network-routes.md
│   ├── 12-dns-addon.md
│   ├── 13-smoke-test.md
│   ├── 14-cleanup.md
│   └── images
│       ├── k8s_the_hard_way_on_aws_diagram.png
│       └── tmux-screenshot.png
└── retrieve_ec2_instances.sh
/.gitignore:
--------------------------------------------------------------------------------
1 | admin-csr.json
2 | admin-key.pem
3 | admin.csr
4 | admin.pem
5 | admin.kubeconfig
6 | ca-config.json
7 | ca-csr.json
8 | ca-key.pem
9 | ca.csr
10 | ca.pem
11 | encryption-config.yaml
12 | kube-controller-manager-csr.json
13 | kube-controller-manager-key.pem
14 | kube-controller-manager.csr
15 | kube-controller-manager.kubeconfig
16 | kube-controller-manager.pem
17 | kube-scheduler-csr.json
18 | kube-scheduler-key.pem
19 | kube-scheduler.csr
20 | kube-scheduler.kubeconfig
21 | kube-scheduler.pem
22 | kube-proxy-csr.json
23 | kube-proxy-key.pem
24 | kube-proxy.csr
25 | kube-proxy.kubeconfig
26 | kube-proxy.pem
27 | kubernetes-csr.json
28 | kubernetes-key.pem
29 | kubernetes.csr
30 | kubernetes.pem
31 | worker-0-csr.json
32 | worker-0-key.pem
33 | worker-0.csr
34 | worker-0.kubeconfig
35 | worker-0.pem
36 | worker-1-csr.json
37 | worker-1-key.pem
38 | worker-1.csr
39 | worker-1.kubeconfig
40 | worker-1.pem
41 | worker-2-csr.json
42 | worker-2-key.pem
43 | worker-2.csr
44 | worker-2.kubeconfig
45 | worker-2.pem
46 | service-account-key.pem
47 | service-account.csr
48 | service-account.pem
49 | service-account-csr.json
50 | *.swp
51 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | This project is made possible by contributors like YOU! While all contributions are welcome, please follow the suggestions below to help your PR get merged.
2 |
3 | ## License
4 |
5 | This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
6 |
7 | ## Review and merge process
8 |
9 | Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
10 |
11 | Here are some examples of the review and justification process:
12 | - [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
13 | - [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
14 |
15 | ## Notes on minutiae
16 |
17 | If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.
18 |
19 |
--------------------------------------------------------------------------------
/COPYRIGHT.md:
--------------------------------------------------------------------------------
1 | # Copyright
2 |
3 | 
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
4 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
177 | END OF TERMS AND CONDITIONS
178 |
179 | APPENDIX: How to apply the Apache License to your work.
180 |
181 | To apply the Apache License to your work, attach the following
182 | boilerplate notice, with the fields enclosed by brackets "[]"
183 | replaced with your own identifying information. (Don't include
184 | the brackets!) The text should be enclosed in the appropriate
185 | comment syntax for the file format. We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright [yyyy] [name of copyright owner]
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes The Hard Way "on AWS"
2 |
3 | 
4 |
5 | This tutorial walks you through setting up Kubernetes the hard way on AWS. Note that this repository is a fork of [kelseyhightower/kubernetes-the-hard-way](https://github.com/kelseyhightower/kubernetes-the-hard-way), tweaked to use AWS instead of GCP.
6 |
7 | This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
8 |
9 | > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
10 |
11 |
12 | ## Copyright
13 |
14 | 
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
15 |
16 |
17 | ## Target Audience
18 |
19 | The target audience for this tutorial is someone planning to support a production Kubernetes cluster who wants to understand how everything fits together.
20 |
21 | ## Cluster Details
22 |
23 | Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
24 |
25 | * [kubernetes](https://github.com/kubernetes/kubernetes) 1.15.3
26 | * [containerd](https://github.com/containerd/containerd) 1.2.9
27 | * [coredns](https://github.com/coredns/coredns) v1.6.3
28 | * [cni](https://github.com/containernetworking/cni) v0.7.1
29 | * [etcd](https://github.com/coreos/etcd) v3.4.0
30 |
31 |
32 | ## Labs
33 |
34 | This tutorial assumes you have access to [Amazon Web Services (AWS)](https://aws.amazon.com). While AWS is used for basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms.
35 |
36 | * [Prerequisites](docs/01-prerequisites.md)
37 | * [Installing the Client Tools](docs/02-client-tools.md)
38 | * [Provisioning Compute Resources](docs/03-compute-resources.md)
39 | * [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
40 | * [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
41 | * [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
42 | * [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
43 | * [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
44 | * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
45 | * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
46 | * [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
47 | * [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
48 | * [Smoke Test](docs/13-smoke-test.md)
49 | * [Cleaning Up](docs/14-cleanup.md)
50 |
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-eip.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sEIP:
3 | Type: AWS::EC2::EIP
4 | Properties:
5 | Tags:
6 | - Key: Name
7 | Value: eip-kubernetes-the-hard-way
8 |
9 | Outputs:
10 | EipAllocation:
11 | Value: !GetAtt HardK8sEIP.AllocationId
12 | Export: { Name: hard-k8s-eipalloc }
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-master-nodes.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sMaster0:
3 | Type: AWS::EC2::Instance
4 | Properties:
5 | InstanceType: t3.micro
6 | SubnetId: !ImportValue hard-k8s-subnet
7 | SecurityGroupIds:
8 | - !ImportValue hard-k8s-sg
9 | ImageId:
10 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
11 | KeyName: !Ref ParamKeyName
12 | PrivateIpAddress: 10.240.0.10
13 | # SourceDestCheck: false
14 | UserData:
15 | Fn::Base64: |-
16 | #cloud-config
17 | fqdn: master-0.k8shardway.local
18 | hostname: master-0
19 | runcmd:
20 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
21 | write_files:
22 | - path: /etc/hosts
23 | permissions: '0644'
24 | content: |
25 | 127.0.0.1 localhost localhost.localdomain
26 | # Kubernetes the Hard Way - hostnames
27 | 10.240.0.10 master-0
28 | 10.240.0.11 master-1
29 | 10.240.0.12 master-2
30 | 10.240.0.20 worker-0
31 | 10.240.0.21 worker-1
32 | 10.240.0.22 worker-2
33 | Tags: [ { "Key": "Name", "Value": "master-0" } ]
34 |
35 | HardK8sMaster1:
36 | Type: AWS::EC2::Instance
37 | Properties:
38 | InstanceType: t3.micro
39 | SubnetId: !ImportValue hard-k8s-subnet
40 | SecurityGroupIds:
41 | - !ImportValue hard-k8s-sg
42 | ImageId:
43 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
44 | KeyName: !Ref ParamKeyName
45 | PrivateIpAddress: 10.240.0.11
46 | # SourceDestCheck: false
47 | UserData:
48 | Fn::Base64: |-
49 | #cloud-config
50 | fqdn: master-1.k8shardway.local
51 | hostname: master-1
52 | runcmd:
53 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
54 | write_files:
55 | - path: /etc/hosts
56 | permissions: '0644'
57 | content: |
58 | 127.0.0.1 localhost localhost.localdomain
59 | # Kubernetes the Hard Way - hostnames
60 | 10.240.0.10 master-0
61 | 10.240.0.11 master-1
62 | 10.240.0.12 master-2
63 | 10.240.0.20 worker-0
64 | 10.240.0.21 worker-1
65 | 10.240.0.22 worker-2
66 | Tags: [ { "Key": "Name", "Value": "master-1" } ]
67 |
68 | HardK8sMaster2:
69 | Type: AWS::EC2::Instance
70 | Properties:
71 | InstanceType: t3.micro
72 | SubnetId: !ImportValue hard-k8s-subnet
73 | SecurityGroupIds:
74 | - !ImportValue hard-k8s-sg
75 | ImageId:
76 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
77 | KeyName: !Ref ParamKeyName
78 | PrivateIpAddress: 10.240.0.12
79 | # SourceDestCheck: false
80 | UserData:
81 | Fn::Base64: |-
82 | #cloud-config
83 | fqdn: master-2.k8shardway.local
84 | hostname: master-2
85 | runcmd:
86 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
87 | write_files:
88 | - path: /etc/hosts
89 | permissions: '0644'
90 | content: |
91 | 127.0.0.1 localhost localhost.localdomain
92 | # Kubernetes the Hard Way - hostnames
93 | 10.240.0.10 master-0
94 | 10.240.0.11 master-1
95 | 10.240.0.12 master-2
96 | 10.240.0.20 worker-0
97 | 10.240.0.21 worker-1
98 | 10.240.0.22 worker-2
99 | Tags: [ { "Key": "Name", "Value": "master-2" } ]
100 |
101 | Parameters:
102 | ParamKeyName:
103 | Type: AWS::EC2::KeyPair::KeyName
104 | Default: ec2-key
105 |
106 | # $ aws ec2 describe-regions --query 'Regions[].RegionName' --output text \
107 | # | tr "\t" "\n" | sort \
108 | # | xargs -I _R_ aws --region _R_ ec2 describe-images \
109 | # --filters Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20191002" \
110 | #     --query 'Images[0].ImageId' --output text
111 | Mappings:
112 | UbuntuAMIs:
113 | ap-northeast-1: { "id": "ami-0cd744adeca97abb1" }
114 | ap-northeast-2: { "id": "ami-00379ec40a3e30f87" }
115 | ap-northeast-3: { "id": "ami-0bd42271bb31d96d2" }
116 | ap-south-1: { "id": "ami-0123b531fc646552f" }
117 | ap-southeast-1: { "id": "ami-061eb2b23f9f8839c" }
118 | ap-southeast-2: { "id": "ami-00a54827eb7ffcd3c" }
119 | ca-central-1: { "id": "ami-0b683aae4ee93ef87" }
120 | eu-central-1: { "id": "ami-0cc0a36f626a4fdf5" }
121 | eu-north-1: { "id": "ami-1dab2163" }
122 | eu-west-1: { "id": "ami-02df9ea15c1778c9c" }
123 | eu-west-2: { "id": "ami-0be057a22c63962cb" }
124 | eu-west-3: { "id": "ami-087855b6c8b59a9e4" }
125 | sa-east-1: { "id": "ami-02c8813f1ea04d4ab" }
126 | us-east-1: { "id": "ami-04b9e92b5572fa0d1" }
127 | us-east-2: { "id": "ami-0d5d9d301c853a04a" }
128 | us-west-1: { "id": "ami-0dd655843c87b6930" }
129 | us-west-2: { "id": "ami-06d51e91cea0dac8d" }
130 |
131 | Outputs:
132 | Master0:
133 | Value: !Ref HardK8sMaster0
134 | Export: { Name: hard-k8s-master-0 }
135 | Master1:
136 | Value: !Ref HardK8sMaster1
137 | Export: { Name: hard-k8s-master-1 }
138 | Master2:
139 | Value: !Ref HardK8sMaster2
140 | Export: { Name: hard-k8s-master-2 }
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-network.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sVpc:
3 | Type: AWS::EC2::VPC
4 | Properties:
5 | CidrBlock: "10.240.0.0/16"
6 | EnableDnsHostnames: true
7 | EnableDnsSupport: true
8 | HardK8sSubnet:
9 | Type: AWS::EC2::Subnet
10 | Properties:
11 | VpcId: !Ref HardK8sVpc
12 | CidrBlock: "10.240.0.0/24"
13 | MapPublicIpOnLaunch: true
14 | HardK8sRtb:
15 | Type: AWS::EC2::RouteTable
16 | Properties:
17 | VpcId: !Ref HardK8sVpc
18 | HardK8sRtbAssociation:
19 | Type: AWS::EC2::SubnetRouteTableAssociation
20 | Properties:
21 | RouteTableId: !Ref HardK8sRtb
22 | SubnetId: !Ref HardK8sSubnet
23 | HardK8sIgw:
24 | Type: AWS::EC2::InternetGateway
25 | HardK8sGwAttach:
26 | Type: AWS::EC2::VPCGatewayAttachment
27 | Properties:
28 | VpcId: !Ref HardK8sVpc
29 | InternetGatewayId: !Ref HardK8sIgw
30 | HardK8sDefaultRoute:
31 | Type: AWS::EC2::Route
32 | Properties:
33 | DestinationCidrBlock: 0.0.0.0/0
34 | RouteTableId: !Ref HardK8sRtb
35 | GatewayId: !Ref HardK8sIgw
36 |
37 | Outputs:
38 | VpcId:
39 | Value: !Ref HardK8sVpc
40 | Export: { Name: hard-k8s-vpc }
41 | SubnetId:
42 | Value: !Ref HardK8sSubnet
43 | Export: { Name: hard-k8s-subnet }
44 | RouteTableId:
45 | Value: !Ref HardK8sRtb
46 | Export: { Name: hard-k8s-rtb }
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-nlb.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sNLB:
3 | Type: AWS::ElasticLoadBalancingV2::LoadBalancer
4 | Properties:
5 | Type: network
6 | Scheme: internet-facing
7 | SubnetMappings:
8 | - AllocationId: !ImportValue hard-k8s-eipalloc
9 | SubnetId: !ImportValue hard-k8s-subnet
10 |
11 | HardK8sListener:
12 | Type: AWS::ElasticLoadBalancingV2::Listener
13 | Properties:
14 | DefaultActions:
15 | - TargetGroupArn: !Ref HardK8sTargetGroup
16 | Type: forward
17 | LoadBalancerArn: !Ref HardK8sNLB
18 | Port: 6443
19 | Protocol: TCP
20 |
21 | HardK8sTargetGroup:
22 | Type: AWS::ElasticLoadBalancingV2::TargetGroup
23 | Properties:
24 | VpcId: !ImportValue hard-k8s-vpc
25 | Protocol: TCP
26 | Port: 6443
27 | Targets:
28 | - Id: !ImportValue hard-k8s-master-0
29 | - Id: !ImportValue hard-k8s-master-1
30 | - Id: !ImportValue hard-k8s-master-2
31 | HealthCheckPort: "80" # default is "traffic-port", which means 6443.
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-nodeport-sg-ingress.cfn.yml:
--------------------------------------------------------------------------------
1 | Parameters:
2 | ParamNodePort:
3 | Type: Number
4 | # ref: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
5 | MinValue: 30000
6 | MaxValue: 32767
7 |
8 | Resources:
9 | HardK8sSmokeIngress:
10 | Type: AWS::EC2::SecurityGroupIngress
11 | Properties:
12 | GroupId: !ImportValue hard-k8s-sg
13 | CidrIp: 0.0.0.0/0
14 | IpProtocol: tcp
15 | FromPort: !Ref ParamNodePort
16 | ToPort: !Ref ParamNodePort
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-pod-routes.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | RouteWorker0:
3 | Type: AWS::EC2::Route
4 | Properties:
5 | DestinationCidrBlock: 10.200.0.0/24
6 | RouteTableId: !ImportValue hard-k8s-rtb
7 | InstanceId: !ImportValue hard-k8s-worker-0
8 |
9 | RouteWorker1:
10 | Type: AWS::EC2::Route
11 | Properties:
12 | DestinationCidrBlock: 10.200.1.0/24
13 | RouteTableId: !ImportValue hard-k8s-rtb
14 | InstanceId: !ImportValue hard-k8s-worker-1
15 |
16 | RouteWorker2:
17 | Type: AWS::EC2::Route
18 | Properties:
19 | DestinationCidrBlock: 10.200.2.0/24
20 | RouteTableId: !ImportValue hard-k8s-rtb
21 | InstanceId: !ImportValue hard-k8s-worker-2
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-security-groups.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sSg:
3 | Type: AWS::EC2::SecurityGroup
4 | Properties:
5 | GroupDescription: security group for Kubernetes the hard way
6 | VpcId: !ImportValue hard-k8s-vpc
7 | SecurityGroupIngress:
8 | # ingress internal traffic - allow all protocols/ports
9 | - { "CidrIp": "10.240.0.0/24", "IpProtocol": "-1" } # master/worker nodes cidr range
10 | - { "CidrIp": "10.200.0.0/16", "IpProtocol": "-1" } # pod cidr range
11 | # ingress external traffic
12 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": 6443, "ToPort": 6443 }
13 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22 }
14 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "icmp", "FromPort": -1, "ToPort": -1 }
15 |
16 | Outputs:
17 | SgId:
18 | Value: !Ref HardK8sSg
19 | Export: { Name: hard-k8s-sg }
--------------------------------------------------------------------------------
/cloudformation/hard-k8s-worker-nodes.cfn.yml:
--------------------------------------------------------------------------------
1 | Resources:
2 | HardK8sWorker0:
3 | Type: AWS::EC2::Instance
4 | Properties:
5 | InstanceType: t3.micro
6 | SubnetId: !ImportValue hard-k8s-subnet
7 | SecurityGroupIds:
8 | - !ImportValue hard-k8s-sg
9 | ImageId:
10 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
11 | KeyName: !Ref ParamKeyName
12 | PrivateIpAddress: 10.240.0.20
13 | # SourceDestCheck: false
14 | UserData:
15 | Fn::Base64: |-
16 | Content-Type: multipart/mixed; boundary="//"
17 | MIME-Version: 1.0
18 |
19 | --//
20 | Content-Type: text/cloud-config; charset="us-ascii"
21 | MIME-Version: 1.0
22 | Content-Transfer-Encoding: 7bit
23 | Content-Disposition: attachment; filename="cloud-config.txt"
24 |
25 | #cloud-config
26 | fqdn: worker-0.k8shardway.local
27 | hostname: worker-0
28 | runcmd:
29 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
30 | write_files:
31 | - path: /etc/hosts
32 | permissions: '0644'
33 | content: |
34 | 127.0.0.1 localhost localhost.localdomain
35 | # Kubernetes the Hard Way - hostnames
36 | 10.240.0.10 master-0
37 | 10.240.0.11 master-1
38 | 10.240.0.12 master-2
39 | 10.240.0.20 worker-0
40 | 10.240.0.21 worker-1
41 | 10.240.0.22 worker-2
42 |
43 | --//
44 | Content-Type: text/x-shellscript; charset="us-ascii"
45 | MIME-Version: 1.0
46 | Content-Transfer-Encoding: 7bit
47 | Content-Disposition: attachment; filename="userdata.txt"
48 |
49 | #!/bin/bash
50 | echo 10.200.0.0/24 > /opt/pod_cidr.txt
51 | --//
52 | Tags: [ { "Key": "Name", "Value": "worker-0" } ]
53 |
54 | HardK8sWorker1:
55 | Type: AWS::EC2::Instance
56 | Properties:
57 | InstanceType: t3.micro
58 | SubnetId: !ImportValue hard-k8s-subnet
59 | SecurityGroupIds:
60 | - !ImportValue hard-k8s-sg
61 | ImageId:
62 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
63 | KeyName: !Ref ParamKeyName
64 | PrivateIpAddress: 10.240.0.21
65 | # SourceDestCheck: false
66 | UserData:
67 | Fn::Base64: |-
68 | Content-Type: multipart/mixed; boundary="//"
69 | MIME-Version: 1.0
70 |
71 | --//
72 | Content-Type: text/cloud-config; charset="us-ascii"
73 | MIME-Version: 1.0
74 | Content-Transfer-Encoding: 7bit
75 | Content-Disposition: attachment; filename="cloud-config.txt"
76 |
77 | #cloud-config
78 | fqdn: worker-1.k8shardway.local
79 | hostname: worker-1
80 | runcmd:
81 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
82 | write_files:
83 | - path: /etc/hosts
84 | permissions: '0644'
85 | content: |
86 | 127.0.0.1 localhost localhost.localdomain
87 | # Kubernetes the Hard Way - hostnames
88 | 10.240.0.10 master-0
89 | 10.240.0.11 master-1
90 | 10.240.0.12 master-2
91 | 10.240.0.20 worker-0
92 | 10.240.0.21 worker-1
93 | 10.240.0.22 worker-2
94 |
95 | --//
96 | Content-Type: text/x-shellscript; charset="us-ascii"
97 | MIME-Version: 1.0
98 | Content-Transfer-Encoding: 7bit
99 | Content-Disposition: attachment; filename="userdata.txt"
100 |
101 | #!/bin/bash
102 | echo 10.200.1.0/24 > /opt/pod_cidr.txt
103 | --//
104 | Tags: [ { "Key": "Name", "Value": "worker-1" } ]
105 |
106 | HardK8sWorker2:
107 | Type: AWS::EC2::Instance
108 | Properties:
109 | InstanceType: t3.micro
110 | SubnetId: !ImportValue hard-k8s-subnet
111 | SecurityGroupIds:
112 | - !ImportValue hard-k8s-sg
113 | ImageId:
114 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
115 | KeyName: !Ref ParamKeyName
116 | PrivateIpAddress: 10.240.0.22
117 | # SourceDestCheck: false
118 | UserData:
119 | Fn::Base64: |-
120 | Content-Type: multipart/mixed; boundary="//"
121 | MIME-Version: 1.0
122 |
123 | --//
124 | Content-Type: text/cloud-config; charset="us-ascii"
125 | MIME-Version: 1.0
126 | Content-Transfer-Encoding: 7bit
127 | Content-Disposition: attachment; filename="cloud-config.txt"
128 |
129 | #cloud-config
130 | fqdn: worker-2.k8shardway.local
131 | hostname: worker-2
132 | runcmd:
133 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
134 | write_files:
135 | - path: /etc/hosts
136 | permissions: '0644'
137 | content: |
138 | 127.0.0.1 localhost localhost.localdomain
139 | # Kubernetes the Hard Way - hostnames
140 | 10.240.0.10 master-0
141 | 10.240.0.11 master-1
142 | 10.240.0.12 master-2
143 | 10.240.0.20 worker-0
144 | 10.240.0.21 worker-1
145 | 10.240.0.22 worker-2
146 |
147 | --//
148 | Content-Type: text/x-shellscript; charset="us-ascii"
149 | MIME-Version: 1.0
150 | Content-Transfer-Encoding: 7bit
151 | Content-Disposition: attachment; filename="userdata.txt"
152 |
153 | #!/bin/bash
154 | echo 10.200.2.0/24 > /opt/pod_cidr.txt
155 | --//
156 | Tags: [ { "Key": "Name", "Value": "worker-2" } ]
157 |
158 | Parameters:
159 | ParamKeyName:
160 | Type: AWS::EC2::KeyPair::KeyName
161 | Default: ec2-key
162 |
163 | # $ aws ec2 describe-regions --query 'Regions[].RegionName' --output text \
164 | # | tr "\t" "\n" | sort \
165 | # | xargs -I _R_ aws --region _R_ ec2 describe-images \
166 | # --filters Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20191002" \
167 | #     --query 'Images[0].ImageId' --output text
168 | Mappings:
169 | UbuntuAMIs:
170 | ap-northeast-1: { "id": "ami-0cd744adeca97abb1" }
171 | ap-northeast-2: { "id": "ami-00379ec40a3e30f87" }
172 | ap-northeast-3: { "id": "ami-0bd42271bb31d96d2" }
173 | ap-south-1: { "id": "ami-0123b531fc646552f" }
174 | ap-southeast-1: { "id": "ami-061eb2b23f9f8839c" }
175 | ap-southeast-2: { "id": "ami-00a54827eb7ffcd3c" }
176 | ca-central-1: { "id": "ami-0b683aae4ee93ef87" }
177 | eu-central-1: { "id": "ami-0cc0a36f626a4fdf5" }
178 | eu-north-1: { "id": "ami-1dab2163" }
179 | eu-west-1: { "id": "ami-02df9ea15c1778c9c" }
180 | eu-west-2: { "id": "ami-0be057a22c63962cb" }
181 | eu-west-3: { "id": "ami-087855b6c8b59a9e4" }
182 | sa-east-1: { "id": "ami-02c8813f1ea04d4ab" }
183 | us-east-1: { "id": "ami-04b9e92b5572fa0d1" }
184 | us-east-2: { "id": "ami-0d5d9d301c853a04a" }
185 | us-west-1: { "id": "ami-0dd655843c87b6930" }
186 | us-west-2: { "id": "ami-06d51e91cea0dac8d" }
187 |
188 | Outputs:
189 | Worker0:
190 | Value: !Ref HardK8sWorker0
191 | Export: { Name: hard-k8s-worker-0 }
192 | Worker1:
193 | Value: !Ref HardK8sWorker1
194 | Export: { Name: hard-k8s-worker-1 }
195 | Worker2:
196 | Value: !Ref HardK8sWorker2
197 | Export: { Name: hard-k8s-worker-2 }
--------------------------------------------------------------------------------
/deployments/coredns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: coredns
5 | namespace: kube-system
6 | ---
7 | apiVersion: rbac.authorization.k8s.io/v1
8 | kind: ClusterRole
9 | metadata:
10 | labels:
11 | kubernetes.io/bootstrapping: rbac-defaults
12 | name: system:coredns
13 | rules:
14 | - apiGroups:
15 | - ""
16 | resources:
17 | - endpoints
18 | - services
19 | - pods
20 | - namespaces
21 | verbs:
22 | - list
23 | - watch
24 | - apiGroups:
25 | - ""
26 | resources:
27 | - nodes
28 | verbs:
29 | - get
30 | ---
31 | apiVersion: rbac.authorization.k8s.io/v1
32 | kind: ClusterRoleBinding
33 | metadata:
34 | annotations:
35 | rbac.authorization.kubernetes.io/autoupdate: "true"
36 | labels:
37 | kubernetes.io/bootstrapping: rbac-defaults
38 | name: system:coredns
39 | roleRef:
40 | apiGroup: rbac.authorization.k8s.io
41 | kind: ClusterRole
42 | name: system:coredns
43 | subjects:
44 | - kind: ServiceAccount
45 | name: coredns
46 | namespace: kube-system
47 | ---
48 | apiVersion: v1
49 | kind: ConfigMap
50 | metadata:
51 | name: coredns
52 | namespace: kube-system
53 | data:
54 | Corefile: |
55 | .:53 {
56 | errors
57 | health
58 | ready
59 | kubernetes cluster.local in-addr.arpa ip6.arpa {
60 | pods insecure
61 | fallthrough in-addr.arpa ip6.arpa
62 | }
63 | prometheus :9153
64 | cache 30
65 | loop
66 | reload
67 | loadbalance
68 | }
69 | ---
70 | apiVersion: apps/v1
71 | kind: Deployment
72 | metadata:
73 | name: coredns
74 | namespace: kube-system
75 | labels:
76 | k8s-app: kube-dns
77 | kubernetes.io/name: "CoreDNS"
78 | spec:
79 | replicas: 2
80 | strategy:
81 | type: RollingUpdate
82 | rollingUpdate:
83 | maxUnavailable: 1
84 | selector:
85 | matchLabels:
86 | k8s-app: kube-dns
87 | template:
88 | metadata:
89 | labels:
90 | k8s-app: kube-dns
91 | spec:
92 | priorityClassName: system-cluster-critical
93 | serviceAccountName: coredns
94 | tolerations:
95 | - key: "CriticalAddonsOnly"
96 | operator: "Exists"
97 | nodeSelector:
98 | beta.kubernetes.io/os: linux
99 | containers:
100 | - name: coredns
101 | image: coredns/coredns:1.6.2
102 | imagePullPolicy: IfNotPresent
103 | resources:
104 | limits:
105 | memory: 170Mi
106 | requests:
107 | cpu: 100m
108 | memory: 70Mi
109 | args: [ "-conf", "/etc/coredns/Corefile" ]
110 | volumeMounts:
111 | - name: config-volume
112 | mountPath: /etc/coredns
113 | readOnly: true
114 | ports:
115 | - containerPort: 53
116 | name: dns
117 | protocol: UDP
118 | - containerPort: 53
119 | name: dns-tcp
120 | protocol: TCP
121 | - containerPort: 9153
122 | name: metrics
123 | protocol: TCP
124 | securityContext:
125 | allowPrivilegeEscalation: false
126 | capabilities:
127 | add:
128 | - NET_BIND_SERVICE
129 | drop:
130 | - all
131 | readOnlyRootFilesystem: true
132 | livenessProbe:
133 | httpGet:
134 | path: /health
135 | port: 8080
136 | scheme: HTTP
137 | initialDelaySeconds: 60
138 | timeoutSeconds: 5
139 | successThreshold: 1
140 | failureThreshold: 5
141 | readinessProbe:
142 | httpGet:
143 | path: /ready
144 | port: 8181
145 | scheme: HTTP
146 | dnsPolicy: Default
147 | volumes:
148 | - name: config-volume
149 | configMap:
150 | name: coredns
151 | items:
152 | - key: Corefile
153 | path: Corefile
154 | ---
155 | apiVersion: v1
156 | kind: Service
157 | metadata:
158 | name: kube-dns
159 | namespace: kube-system
160 | annotations:
161 | prometheus.io/port: "9153"
162 | prometheus.io/scrape: "true"
163 | labels:
164 | k8s-app: kube-dns
165 | kubernetes.io/cluster-service: "true"
166 | kubernetes.io/name: "CoreDNS"
167 | spec:
168 | selector:
169 | k8s-app: kube-dns
170 | clusterIP: 10.32.0.10
171 | ports:
172 | - name: dns
173 | port: 53
174 | protocol: UDP
175 | - name: dns-tcp
176 | port: 53
177 | protocol: TCP
178 | - name: metrics
179 | port: 9153
180 | protocol: TCP
181 |
--------------------------------------------------------------------------------
/deployments/kube-dns.yaml:
--------------------------------------------------------------------------------
1 | # Copyright 2016 The Kubernetes Authors.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | apiVersion: v1
16 | kind: Service
17 | metadata:
18 | name: kube-dns
19 | namespace: kube-system
20 | labels:
21 | k8s-app: kube-dns
22 | kubernetes.io/cluster-service: "true"
23 | addonmanager.kubernetes.io/mode: Reconcile
24 | kubernetes.io/name: "KubeDNS"
25 | spec:
26 | selector:
27 | k8s-app: kube-dns
28 | clusterIP: 10.32.0.10
29 | ports:
30 | - name: dns
31 | port: 53
32 | protocol: UDP
33 | - name: dns-tcp
34 | port: 53
35 | protocol: TCP
36 | ---
37 | apiVersion: v1
38 | kind: ServiceAccount
39 | metadata:
40 | name: kube-dns
41 | namespace: kube-system
42 | labels:
43 | kubernetes.io/cluster-service: "true"
44 | addonmanager.kubernetes.io/mode: Reconcile
45 | ---
46 | apiVersion: v1
47 | kind: ConfigMap
48 | metadata:
49 | name: kube-dns
50 | namespace: kube-system
51 | labels:
52 | addonmanager.kubernetes.io/mode: EnsureExists
53 | ---
54 | apiVersion: apps/v1
55 | kind: Deployment
56 | metadata:
57 | name: kube-dns
58 | namespace: kube-system
59 | labels:
60 | k8s-app: kube-dns
61 | kubernetes.io/cluster-service: "true"
62 | addonmanager.kubernetes.io/mode: Reconcile
63 | spec:
64 | # replicas: not specified here:
65 | # 1. In order to make Addon Manager do not reconcile this replicas parameter.
66 | # 2. Default is 1.
67 | # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
68 | strategy:
69 | rollingUpdate:
70 | maxSurge: 10%
71 | maxUnavailable: 0
72 | selector:
73 | matchLabels:
74 | k8s-app: kube-dns
75 | template:
76 | metadata:
77 | labels:
78 | k8s-app: kube-dns
79 | annotations:
80 | scheduler.alpha.kubernetes.io/critical-pod: ''
81 | spec:
82 | tolerations:
83 | - key: "CriticalAddonsOnly"
84 | operator: "Exists"
85 | volumes:
86 | - name: kube-dns-config
87 | configMap:
88 | name: kube-dns
89 | optional: true
90 | containers:
91 | - name: kubedns
92 | image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
93 | resources:
94 | # TODO: Set memory limits when we've profiled the container for large
95 | # clusters, then set request = limit to keep this container in
96 | # guaranteed class. Currently, this container falls into the
97 | # "burstable" category so the kubelet doesn't backoff from restarting it.
98 | limits:
99 | memory: 170Mi
100 | requests:
101 | cpu: 100m
102 | memory: 70Mi
103 | livenessProbe:
104 | httpGet:
105 | path: /healthcheck/kubedns
106 | port: 10054
107 | scheme: HTTP
108 | initialDelaySeconds: 60
109 | timeoutSeconds: 5
110 | successThreshold: 1
111 | failureThreshold: 5
112 | readinessProbe:
113 | httpGet:
114 | path: /readiness
115 | port: 8081
116 | scheme: HTTP
117 | # we poll on pod startup for the Kubernetes master service and
118 | # only setup the /readiness HTTP server once that's available.
119 | initialDelaySeconds: 3
120 | timeoutSeconds: 5
121 | args:
122 | - --domain=cluster.local.
123 | - --dns-port=10053
124 | - --config-dir=/kube-dns-config
125 | - --v=2
126 | env:
127 | - name: PROMETHEUS_PORT
128 | value: "10055"
129 | ports:
130 | - containerPort: 10053
131 | name: dns-local
132 | protocol: UDP
133 | - containerPort: 10053
134 | name: dns-tcp-local
135 | protocol: TCP
136 | - containerPort: 10055
137 | name: metrics
138 | protocol: TCP
139 | volumeMounts:
140 | - name: kube-dns-config
141 | mountPath: /kube-dns-config
142 | - name: dnsmasq
143 | image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
144 | livenessProbe:
145 | httpGet:
146 | path: /healthcheck/dnsmasq
147 | port: 10054
148 | scheme: HTTP
149 | initialDelaySeconds: 60
150 | timeoutSeconds: 5
151 | successThreshold: 1
152 | failureThreshold: 5
153 | args:
154 | - -v=2
155 | - -logtostderr
156 | - -configDir=/etc/k8s/dns/dnsmasq-nanny
157 | - -restartDnsmasq=true
158 | - --
159 | - -k
160 | - --cache-size=1000
161 | - --no-negcache
162 | - --log-facility=-
163 | - --server=/cluster.local/127.0.0.1#10053
164 | - --server=/in-addr.arpa/127.0.0.1#10053
165 | - --server=/ip6.arpa/127.0.0.1#10053
166 | ports:
167 | - containerPort: 53
168 | name: dns
169 | protocol: UDP
170 | - containerPort: 53
171 | name: dns-tcp
172 | protocol: TCP
173 | # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
174 | resources:
175 | requests:
176 | cpu: 150m
177 | memory: 20Mi
178 | volumeMounts:
179 | - name: kube-dns-config
180 | mountPath: /etc/k8s/dns/dnsmasq-nanny
181 | - name: sidecar
182 | image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
183 | livenessProbe:
184 | httpGet:
185 | path: /metrics
186 | port: 10054
187 | scheme: HTTP
188 | initialDelaySeconds: 60
189 | timeoutSeconds: 5
190 | successThreshold: 1
191 | failureThreshold: 5
192 | args:
193 | - --v=2
194 | - --logtostderr
195 | - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
196 | - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
197 | ports:
198 | - containerPort: 10054
199 | name: metrics
200 | protocol: TCP
201 | resources:
202 | requests:
203 | memory: 20Mi
204 | cpu: 10m
205 | dnsPolicy: Default # Don't use cluster DNS.
206 | serviceAccountName: kube-dns
207 |
--------------------------------------------------------------------------------
/docs/01-prerequisites.md:
--------------------------------------------------------------------------------
1 | # Prerequisites
2 |
3 | ## Amazon Web Services (AWS)
4 |
5 | This tutorial leverages [Amazon Web Services](https://aws.amazon.com) to streamline provisioning of the compute infrastructure required to bootstrap a Kubernetes cluster from the ground up.
6 |
7 | > The compute resources required for this tutorial exceed the Amazon Web Services free tier.
8 |
9 |
10 | ## CloudFormation - Infrastructure as Code
11 |
12 | In this tutorial we use [CloudFormation](https://aws.amazon.com/cloudformation/), which enables you to provision AWS resources as code (YAML templates).
13 |
14 | As a best practice you should consider using [Nested Stacks](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-nested-stacks.html) to combine associated CloudFormation stacks. However, in this tutorial we provision AWS resources one by one via separate CloudFormation stacks for learning purposes.
15 |
16 | All CloudFormation templates are in the [cloudformation directory](../cloudformation/) of this repository.
17 |
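Each lab that provisions AWS resources follows the same pattern with the AWS CLI. As a preview, here is a minimal sketch using the `hard-k8s-network` template from this repository (the actual stack is created later, in the compute resources lab):

```
$ aws cloudformation validate-template \
    --template-body file://cloudformation/hard-k8s-network.cfn.yml

$ aws cloudformation create-stack \
    --stack-name hard-k8s-network \
    --template-body file://cloudformation/hard-k8s-network.cfn.yml

$ aws cloudformation wait stack-create-complete --stack-name hard-k8s-network
```
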
18 | ## AWS CLI
19 |
20 | ### Install the AWS CLI
21 |
22 | Follow the AWS documentation [Installing the AWS CLI version 1](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv1.html) to install and configure the `aws` command line utility.
23 |
24 | ```
25 | $ aws --version
26 | ```
27 |
28 | ### Set a default region and credentials
29 |
30 | This tutorial assumes a default region and credentials. To configure the AWS CLI, follow these instructions: [Configuring the AWS CLI - AWS Command Line Interface](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html)
31 |
32 | ```
33 | $ aws configure
34 | AWS Access Key ID [None]: AKIxxxxxxxxxxxxxMPLE
35 | AWS Secret Access Key [None]: wJalrXUxxxxxxxxxxxxxxxxxxxxxxxxxxxxLEKEY
36 | Default region name [None]: us-west-2
37 | Default output format [None]: json
38 | ```
39 |
40 | ## Running Commands in Parallel with tmux
41 |
42 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances; in those cases, consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
43 |
44 | > The use of tmux is optional and not required to complete this tutorial.
45 |
46 | 
47 |
48 | > Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
49 |
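For example, here is a minimal sketch of a synchronized tmux layout (the session name `k8s-hard-way` is only illustrative):

```
$ tmux new-session -d -s k8s-hard-way                          # start a detached session
$ tmux split-window -v -t k8s-hard-way                         # split into two panes
$ tmux split-window -h -t k8s-hard-way                         # split again: three panes
$ tmux set-window-option -t k8s-hard-way synchronize-panes on  # type into all panes at once
$ tmux attach -t k8s-hard-way
```
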
50 | Next: [Installing the Client Tools](02-client-tools.md)
51 |
--------------------------------------------------------------------------------
/docs/02-client-tools.md:
--------------------------------------------------------------------------------
1 | # Installing the Client Tools
2 |
3 | In this lab you will install the command line utilities required to complete this tutorial: [cfssl, cfssljson](https://github.com/cloudflare/cfssl), and [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl).
4 |
5 |
6 | ## Install CFSSL
7 |
8 | The `cfssl` and `cfssljson` command line utilities will be used to provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) and generate TLS certificates.
9 |
10 | Download and install `cfssl` and `cfssljson`:
11 |
12 | ### OS X
13 |
14 | ```
15 | $ curl -o cfssl https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssl
16 | $ curl -o cfssljson https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/darwin/cfssljson
17 | ```
18 |
19 | ```
20 | $ chmod +x cfssl cfssljson
21 | ```
22 |
23 | ```
24 | $ sudo mv cfssl cfssljson /usr/local/bin/
25 | ```
26 |
27 | Some OS X users may experience problems using the pre-built binaries, in which case [Homebrew](https://brew.sh) might be a better option:
28 |
29 | ```
30 | $ brew install cfssl
31 | ```
32 |
33 | ### Linux
34 |
35 | ```
36 | $ wget -q --show-progress --https-only --timestamping \
37 | https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssl \
38 | https://storage.googleapis.com/kubernetes-the-hard-way/cfssl/linux/cfssljson
39 | ```
40 |
41 | ```
42 | $ chmod +x cfssl cfssljson
43 | ```
44 |
45 | ```
46 | $ sudo mv cfssl cfssljson /usr/local/bin/
47 | ```
48 |
49 | ### Verification
50 |
51 | Verify `cfssl` and `cfssljson` version 1.3.4 or higher is installed:
52 |
53 | ```
54 | $ cfssl version
55 | Version: 1.3.4
56 | Revision: dev
57 | Runtime: go1.13
58 | ```
59 |
60 | ```
61 | $ cfssljson --version
62 | Version: 1.3.4
63 | Revision: dev
64 | Runtime: go1.13
65 | ```
66 |
67 | ## Install kubectl
68 |
69 | The `kubectl` command line utility is used to interact with the Kubernetes API Server. Download and install `kubectl` from the official release binaries:
70 |
71 | ### OS X
72 |
73 | ```
74 | $ curl -o kubectl https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/darwin/amd64/kubectl
75 | ```
76 |
77 | ```
78 | $ chmod +x kubectl
79 | ```
80 |
81 | ```
82 | $ sudo mv kubectl /usr/local/bin/
83 | ```
84 |
85 | ### Linux
86 |
87 | ```
88 | $ wget https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl
89 | ```
90 |
91 | ```
92 | $ chmod +x kubectl
93 | ```
94 |
95 | ```
96 | $ sudo mv kubectl /usr/local/bin/
97 | ```
98 |
99 | ### Verification
100 |
101 | Verify `kubectl` version 1.15.3 or higher is installed:
102 |
103 | ```
104 | $ kubectl version --client
105 | Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
106 | ```
107 |
108 | Next: [Provisioning Compute Resources](03-compute-resources.md)
--------------------------------------------------------------------------------
/docs/03-compute-resources.md:
--------------------------------------------------------------------------------
1 | # Provisioning Compute Resources
2 |
3 | Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will provision the compute resources required for running a secure and highly available Kubernetes cluster across a single [availability zone](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) on AWS.
4 |
5 | > Ensure a default region has been set as described in the [Prerequisites](01-prerequisites.md) lab.
6 |
7 | ## Networking
8 |
9 | The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
10 |
11 | > Setting up network policies is out of scope for this tutorial.
12 |
13 | ### Amazon Virtual Private Cloud (VPC)
14 |
15 | In this section a dedicated [Amazon Virtual Private Cloud](https://aws.amazon.com/vpc/) (VPC) network will be set up to host the Kubernetes cluster. The VPC will contain a public [subnet](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html), routing rules, and security groups.
16 |
17 | Here's a CloudFormation template that defines network resources:
18 |
19 | Reference: [cloudformation/hard-k8s-network.cfn.yml](../cloudformation/hard-k8s-network.cfn.yml)
20 | ```yaml
21 | Resources:
22 | HardK8sVpc:
23 | Type: AWS::EC2::VPC
24 | Properties:
25 | CidrBlock: "10.240.0.0/16"
26 | EnableDnsHostnames: true
27 | EnableDnsSupport: true
28 | HardK8sSubnet:
29 | Type: AWS::EC2::Subnet
30 | Properties:
31 | VpcId: !Ref HardK8sVpc
32 | CidrBlock: "10.240.0.0/24"
33 | MapPublicIpOnLaunch: true
34 | # ...
35 | ```
36 |
37 | Please note that the subnet `CidrBlock` must be provisioned with an IP address range large enough to assign a private IP address to each node in the Kubernetes cluster.
38 |
39 | > The `10.240.0.0/24` IP address range provides 256 addresses; since AWS reserves 5 addresses in every subnet, it can host up to 251 EC2 instances.
40 |
41 | Now create network resources via AWS CLI command:
42 |
43 | ```
44 | $ aws cloudformation create-stack \
45 | --stack-name hard-k8s-network \
46 | --template-body file://cloudformation/hard-k8s-network.cfn.yml
47 | ```
48 |
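The `Outputs` section of this template exports the VPC, subnet, and route table IDs, and the following stacks import them via `!ImportValue`. Once the stack has been created you can inspect the exported values; a sketch (output formatting may differ):

```
$ aws cloudformation list-exports \
    --query "Exports[?starts_with(Name, 'hard-k8s')].[Name,Value]" --output table
```
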
49 |
50 | ### Security Groups
51 |
52 | Create a security group that meets the following requirements:
53 |
54 | * allows all internal traffic from the subnet CIDR range (as defined above, `10.240.0.0/24`)
55 | * allows all internal traffic from the Pod CIDR range (this can be chosen arbitrarily; let's say `10.200.0.0/16`)
56 | * allows external ingress TCP traffic on ports 22 and 6443 from anywhere (`0.0.0.0/0`)
57 | * allows external ingress ICMP traffic from anywhere (`0.0.0.0/0`)
58 | * external egress traffic is allowed implicitly, so we don't need to define it
59 |
60 | Here's a CloudFormation template file to create a security group with the requirements above.
61 |
62 | Reference: [cloudformation/hard-k8s-security-groups.cfn.yml](../cloudformation/hard-k8s-security-groups.cfn.yml)
63 | ```yaml
64 | Resources:
65 | HardK8sSg:
66 | Type: AWS::EC2::SecurityGroup
67 | Properties:
68 | GroupDescription: security group for Kubernetes the hard way
69 | VpcId: !ImportValue hard-k8s-vpc
70 | SecurityGroupIngress:
71 | # ingress internal traffic - allow all protocols/ports
72 | - { "CidrIp": "10.240.0.0/24", "IpProtocol": "-1" } # master/worker nodes cidr range
73 | - { "CidrIp": "10.200.0.0/16", "IpProtocol": "-1" } # pod cidr range
74 | # ingress external traffic
75 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": 6443, "ToPort": 6443 }
76 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22 }
77 | - { "CidrIp": "0.0.0.0/0", "IpProtocol": "icmp", "FromPort": -1, "ToPort": -1 }
78 | # ...
79 | ```
80 |
81 | This security group will be used for both master and worker nodes. It allows all internal traffic from `10.240.0.0/24` (the subnet CIDR range we created above) and `10.200.0.0/16` (the Pod CIDR range).
82 |
83 |
84 | Then create a CloudFormation stack to provision the security group.
85 |
86 | ```
87 | $ aws cloudformation create-stack \
88 | --stack-name hard-k8s-security-groups \
89 | --template-body file://cloudformation/hard-k8s-security-groups.cfn.yml
90 | ```
91 |
92 | List rules in the created security group:
93 |
94 | ```
95 | $ aws ec2 describe-security-groups \
96 | --filters 'Name=description,Values="security group for Kubernetes the hard way"' \
97 | --query 'SecurityGroups[0].IpPermissions'
98 |
99 | [
100 | {
101 | "IpProtocol": "-1",
102 | "IpRanges": [ { "CidrIp": "10.240.0.0/24" }, { "CidrIp": "10.200.0.0/16" } ],...
103 | },
104 | {
105 | "IpProtocol": "tcp",
106 | "IpRanges": [ { "CidrIp": "0.0.0.0/0" } ],
107 | "FromPort": 6443, "ToPort": 6443,...
108 | },
109 | {
110 | "IpProtocol": "tcp",
111 | "IpRanges": [ { "CidrIp": "0.0.0.0/0" } ],
112 | "FromPort": 22, "ToPort": 22,...
113 | },
114 | {
115 | "IpProtocol": "icmp",
116 | "IpRanges": [ { "CidrIp": "0.0.0.0/0" } ],
117 | "FromPort": -1, "ToPort": -1,...
118 | }
119 | ]
120 | ```
121 |
122 | ### Kubernetes Public IP Address
123 |
124 | Using [Elastic IP Addresses (EIP)](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/elastic-ip-addresses-eip.html) you can allocate a static IP address that will be attached to the [Network Load Balancer (NLB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) fronting the Kubernetes API servers.
125 |
126 | Let's create an EIP which we'll use for NLB later.
127 |
128 | Reference: [cloudformation/hard-k8s-eip.cfn.yml](../cloudformation/hard-k8s-eip.cfn.yml)
129 | ```yaml
130 | Resources:
131 | HardK8sEIP:
132 | Type: AWS::EC2::EIP
133 | Properties:
134 | Tags:
135 | - Key: Name
136 | Value: eip-kubernetes-the-hard-way
137 |
138 | Outputs:
139 | EipAllocation:
140 | Value: !GetAtt HardK8sEIP.AllocationId
141 | Export: { Name: hard-k8s-eipalloc }
142 | ```
143 |
144 | Allocate Elastic IP Address via CloudFormation:
145 |
146 | ```
147 | $ aws cloudformation create-stack \
148 | --stack-name hard-k8s-eip \
149 | --template-body file://cloudformation/hard-k8s-eip.cfn.yml
150 | ```
151 |
152 | The EIP is tagged with the name `eip-kubernetes-the-hard-way` so that we can retrieve it easily later.
153 |
154 | ```
155 | $ aws ec2 describe-addresses --filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way"
156 | {
157 | "Addresses": [
158 | {
159 | "PublicIp": "x.xxx.xx.xx",
160 | "AllocationId": "eipalloc-xxxxxxxxxxxxxxxxx",
161 | "Domain": "vpc",
162 | "PublicIpv4Pool": "amazon",
163 | "Tags": [
164 | { "Key": "Name", "Value": "eip-kubernetes-the-hard-way" },...
165 | ]
166 | }
167 | ]
168 | }
169 | ```
170 |
171 | ## EC2 instances
172 |
173 | [Amazon EC2](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html) instances in this lab will be provisioned using [Ubuntu Server](https://www.ubuntu.com/server) 18.04. Each EC2 instance will be provisioned with a fixed private IP address to simplify the Kubernetes bootstrapping process.
174 |
175 | You will connect to the EC2 instances via SSH, so make sure you have at least one SSH key pair in your account, in the region you're working in. For more information: [Amazon EC2 Key Pairs - Amazon Elastic Compute Cloud](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html).
176 |
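If you don't have a key pair yet, one way to create one is via the AWS CLI. This is a sketch only; `hard-k8s-key` is just an example name, so use whatever name you prefer:

```
# Create a new key pair and save the private key locally (example name: hard-k8s-key)
$ aws ec2 create-key-pair \
    --key-name hard-k8s-key \
    --query 'KeyMaterial' --output text > ~/.ssh/hard-k8s-key.pem

# Restrict permissions so ssh accepts the key file
$ chmod 400 ~/.ssh/hard-k8s-key.pem
```
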
177 | ### Kubernetes Master nodes (Control Plane)
178 |
179 | Create three EC2 instances which will host the Kubernetes control plane:
180 |
181 | Reference: [cloudformation/hard-k8s-master-nodes.cfn.yml](../cloudformation/hard-k8s-master-nodes.cfn.yml)
182 | ```yaml
183 | Resources:
184 | HardK8sMaster0:
185 | Type: AWS::EC2::Instance
186 | Properties:
187 | InstanceType: t3.micro
188 | SubnetId: !ImportValue hard-k8s-subnet
189 | SecurityGroupIds:
190 | - !ImportValue hard-k8s-sg
191 | ImageId:
192 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
193 | KeyName: !Ref ParamKeyName
194 | PrivateIpAddress: 10.240.0.10
195 | UserData:
196 | Fn::Base64: |-
197 | #cloud-config
198 | fqdn: master-0.k8shardway.local
199 | hostname: master-0
200 | runcmd:
201 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
202 | write_files:
203 | - path: /etc/hosts
204 | permissions: '0644'
205 | content: |
206 | 127.0.0.1 localhost localhost.localdomain
207 | # Kubernetes the Hard Way - hostnames
208 | 10.240.0.10 master-0
209 | 10.240.0.11 master-1
210 | 10.240.0.12 master-2
211 | 10.240.0.20 worker-0
212 | 10.240.0.21 worker-1
213 | 10.240.0.22 worker-2
214 | Tags: [ { "Key": "Name", "Value": "master-0" } ]
215 | # ...
216 |
217 | Parameters:
218 | ParamKeyName:
219 | Type: AWS::EC2::KeyPair::KeyName
220 | Default: ec2-key
221 |
222 | # $ aws ec2 describe-regions --query 'Regions[].RegionName' --output text \
223 | # | tr "\t" "\n" | sort \
224 | # | xargs -I _R_ aws --region _R_ ec2 describe-images \
225 | # --filters Name=name,Values="ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20191002" \
226 | # --query 'Images[0].ImageId' --output
227 | Mappings:
228 | UbuntuAMIs:
229 | ap-northeast-1: { "id": "ami-0cd744adeca97abb1" }
230 | # ...
231 |
232 | Outputs:
233 | Master0:
234 | Value: !Ref HardK8sMaster0
235 | Export: { Name: hard-k8s-master-0 }
236 | # ...
237 | ```
238 |
239 | Note that we use cloud-config definitions to set the hostname of each master node (control plane): `master-0`, `master-1`, and `master-2`.
240 |
241 | Create master nodes via CloudFormation. Please note that you have to replace `<your_keypair_name>` with your own EC2 key pair name (one of the names returned by the first command below).
242 |
243 | ```
244 | $ aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName'
245 | [
246 | "my-key-name-1",
247 | "my-key-name-2"
248 | ]
249 |
250 | $ aws cloudformation create-stack \
251 | --stack-name hard-k8s-master-nodes \
252 | --parameters ParameterKey=ParamKeyName,ParameterValue=<your_keypair_name> \
253 | --template-body file://cloudformation/hard-k8s-master-nodes.cfn.yml
254 | ```
255 |
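Stack creation takes a couple of minutes. Optionally, you can block until the instances are provisioned; the same wait command works for the other stacks in this tutorial as well:

```
$ aws cloudformation wait stack-create-complete --stack-name hard-k8s-master-nodes
```
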
256 |
257 | ### Kubernetes Worker nodes (Data Plane)
258 |
259 | Each worker instance requires a pod subnet allocation from the Kubernetes cluster CIDR range. The pod subnet allocation will be used to configure container networking in a later exercise. We will use [instance UserData](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instancedata-add-user-data.html) to write each node's pod subnet allocation to `/opt/pod_cidr.txt` on the EC2 instance at launch time.
260 |
261 | > The Kubernetes cluster CIDR range is defined by the Controller Manager's `--cluster-cidr` flag. In this tutorial the cluster CIDR range will be set to `10.200.0.0/16`, which supports 254 subnets.
262 |
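Concretely, this tutorial allocates one `/24` per worker node out of that range. These values appear again in the worker UserData below and in the [pod network routes lab](11-pod-network-routes.md):

```
# Pod CIDR allocation per worker node used in this tutorial:
#   worker-0 -> 10.200.0.0/24
#   worker-1 -> 10.200.1.0/24
#   worker-2 -> 10.200.2.0/24
```
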
263 | Create three EC2 instances which will host the Kubernetes worker nodes:
264 |
265 | Reference: [cloudformation/hard-k8s-worker-nodes.cfn.yml](../cloudformation/hard-k8s-worker-nodes.cfn.yml)
266 | ```yaml
267 | Resources:
268 | HardK8sWorker0:
269 | Type: AWS::EC2::Instance
270 | Properties:
271 | InstanceType: t3.micro
272 | SubnetId: !ImportValue hard-k8s-subnet
273 | SecurityGroupIds:
274 | - !ImportValue hard-k8s-sg
275 | ImageId:
276 | Fn::FindInMap: [UbuntuAMIs, !Ref "AWS::Region", "id"]
277 | KeyName: !Ref ParamKeyName
278 | PrivateIpAddress: 10.240.0.20
279 | UserData:
280 | Fn::Base64: |-
281 | Content-Type: multipart/mixed; boundary="//"
282 | # ...
283 | #cloud-config
284 | fqdn: worker-0.k8shardway.local
285 | hostname: worker-0
286 | runcmd:
287 | - echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
288 | write_files:
289 | - path: /etc/hosts
290 | permissions: '0644'
291 | content: |
292 | 127.0.0.1 localhost localhost.localdomain
293 | # Kubernetes the Hard Way - hostnames
294 | 10.240.0.10 master-0
295 | 10.240.0.11 master-1
296 | 10.240.0.12 master-2
297 | 10.240.0.20 worker-0
298 | 10.240.0.21 worker-1
299 | 10.240.0.22 worker-2
300 |
301 | --//
302 | # ...
303 | #!/bin/bash
304 | echo 10.200.0.0/24 > /opt/pod_cidr.txt
305 | --//
306 | # ...
307 | ```
308 |
309 | Here we use cloud-config to set the hostname of each worker node (data plane) as well: `worker-0`, `worker-1`, and `worker-2`. In addition, using [MIME multi-part](https://www.w3.org/Protocols/rfc1341/7_2_Multipart.html) content for the [cloud-init UserData formats](https://cloudinit.readthedocs.io/en/latest/topics/format.html), we define a shell script that saves the node's pod CIDR range to `/opt/pod_cidr.txt` on the instance filesystem.
310 |
311 | Create worker nodes via CloudFormation, again replacing `<your_keypair_name>` with your EC2 key pair name.
312 |
313 | ```
314 | $ aws cloudformation create-stack \
315 | --stack-name hard-k8s-worker-nodes \
316 | --parameters ParameterKey=ParamKeyName,ParameterValue=<your_keypair_name> \
317 | --template-body file://cloudformation/hard-k8s-worker-nodes.cfn.yml
318 | ```
319 |
320 |
321 | ### Verification of nodes
322 |
323 | List the instances in your newly created VPC:
324 |
325 | ```
326 | $ aws cloudformation describe-stacks --stack-name hard-k8s-network --query 'Stacks[0].Outputs[].OutputValue'
327 | [
328 | "vpc-xxxxxxxxxxxxxxxxx",
329 | "subnet-yyyyyyyyyyyyyyyyy",
330 | "rtb-zzzzzzzzzzzzzzzzz"
331 | ]
332 |
333 | $ VPC_ID=$(aws cloudformation describe-stacks \
334 | --stack-name hard-k8s-network \
335 | --query 'Stacks[0].Outputs[?ExportName==`hard-k8s-vpc`].OutputValue' --output text)
336 |
337 | $ aws ec2 describe-instances \
338 | --filters Name=vpc-id,Values=$VPC_ID \
339 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
340 | --output text | sort
341 |
342 | master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1c 10.240.0.10 xx.xxx.xx.xxx running
343 | master-1 i-yyyyyyyyyyyyyyyyy ap-northeast-1c 10.240.0.11 xx.xxx.xxx.xxx running
344 | master-2 i-zzzzzzzzzzzzzzzzz ap-northeast-1c 10.240.0.12 xx.xxx.xx.xxx running
345 | worker-0 i-aaaaaaaaaaaaaaaaa ap-northeast-1c 10.240.0.20 x.xxx.xx.xx running
346 | worker-1 i-bbbbbbbbbbbbbbbbb ap-northeast-1c 10.240.0.21 xx.xxx.xx.xxx running
347 | worker-2 i-ccccccccccccccccc ap-northeast-1c 10.240.0.22 xx.xxx.xxx.xxx running
348 | ```
349 |
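The same query is wrapped in [`retrieve_ec2_instances.sh`](../retrieve_ec2_instances.sh) at the repository root, so you can re-run it whenever you need the instance list again:

```
$ ./retrieve_ec2_instances.sh
```
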
350 |
351 | ## Verifying SSH Access
352 |
353 | As mentioned above, SSH will be used to configure the master and worker instances. Since we provisioned those instances with the `KeyName` property, you can connect to them via SSH. For more details please take a look at the documentation: [Connecting to Your Linux Instance Using SSH - Amazon Elastic Compute Cloud](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html)
354 |
355 | Let's test SSH access to the `master-0` EC2 instance via its Public IP address:
356 |
357 | ```
358 | $ ssh -i ~/.ssh/your_ssh_key ubuntu@xx.xxx.xx.xxx
359 | # ...
360 | Are you sure you want to continue connecting (yes/no)? yes
361 | Warning: Permanently added 'xx.xxx.xx.xxx' (ECDSA) to the list of known hosts.
362 | Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-1051-aws x86_64)
363 | # ...
364 | ubuntu@master-0:~$
365 | ```
366 |
367 | Type `exit` at the prompt to exit the `master-0` instance:
368 |
369 | ```
370 | ubuntu@master-0:~$ exit
371 |
372 | logout
373 | Connection to xx.xxx.xx.xxx closed.
374 | ```
375 |
376 | Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
--------------------------------------------------------------------------------
/docs/04-certificate-authority.md:
--------------------------------------------------------------------------------
1 | # Provisioning a CA and Generating TLS Certificates
2 |
3 | In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
4 |
5 | ## Certificate Authority
6 |
7 | In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
8 |
9 | Generate the CA configuration file, certificate, and private key:
10 |
11 | ```
12 | $ cat > ca-config.json < ca-csr.json < admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
103 |
104 | Generate a certificate and private key for each Kubernetes worker node:
105 |
106 | ```
107 | $ for instance in worker-0 worker-1 worker-2; do
108 | cat > ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md) lab.
324 |
325 | Results:
326 |
327 | ```
328 | kubernetes-key.pem
329 | kubernetes.pem
330 | ```
331 |
332 | ## The Service Account Key Pair
333 |
334 | The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
335 |
336 | Generate the `service-account` certificate and private key:
337 |
338 | ```
339 | $ cat > service-account-csr.json < The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
409 |
410 | Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
411 |
--------------------------------------------------------------------------------
/docs/05-kubernetes-configuration-files.md:
--------------------------------------------------------------------------------
1 | # Generating Kubernetes Configuration Files for Authentication
2 |
3 | In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
4 |
5 | ## Client Authentication Configs
6 |
7 | In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
8 |
9 | ### Kubernetes Public IP Address
10 |
11 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
12 |
13 | Retrieve the EIP (Elastic IP Address) named `eip-kubernetes-the-hard-way`:
14 |
15 | ```
16 | $ KUBERNETES_PUBLIC_ADDRESS=$(aws ec2 describe-addresses \
17 | --filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way" \
18 | --query 'Addresses[0].PublicIp' --output text)
19 | ```
20 |
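As a quick sanity check, confirm the variable is set (the actual address will differ in your environment):

```
$ echo ${KUBERNETES_PUBLIC_ADDRESS}
x.xxx.xx.xx
```
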
21 | ### The kubelet Kubernetes Configuration File
22 |
23 | When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
24 |
25 | > The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
26 |
27 | Generate a kubeconfig file for each worker node:
28 |
29 | ```
30 | $ for instance in worker-0 worker-1 worker-2; do
31 | kubectl config set-cluster kubernetes-the-hard-way \
32 | --certificate-authority=ca.pem \
33 | --embed-certs=true \
34 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
35 | --kubeconfig=${instance}.kubeconfig
36 |
37 | kubectl config set-credentials system:node:${instance} \
38 | --client-certificate=${instance}.pem \
39 | --client-key=${instance}-key.pem \
40 | --embed-certs=true \
41 | --kubeconfig=${instance}.kubeconfig
42 |
43 | kubectl config set-context default \
44 | --cluster=kubernetes-the-hard-way \
45 | --user=system:node:${instance} \
46 | --kubeconfig=${instance}.kubeconfig
47 |
48 | kubectl config use-context default --kubeconfig=${instance}.kubeconfig
49 | done
50 | ```
51 |
52 | Results:
53 |
54 | ```
55 | worker-0.kubeconfig
56 | worker-1.kubeconfig
57 | worker-2.kubeconfig
58 | ```
59 |
60 | ### The kube-proxy Kubernetes Configuration File
61 |
62 | Generate a kubeconfig file for the `kube-proxy` service:
63 |
64 | ```
65 | $ kubectl config set-cluster kubernetes-the-hard-way \
66 | --certificate-authority=ca.pem \
67 | --embed-certs=true \
68 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
69 | --kubeconfig=kube-proxy.kubeconfig
70 |
71 | $ kubectl config set-credentials system:kube-proxy \
72 | --client-certificate=kube-proxy.pem \
73 | --client-key=kube-proxy-key.pem \
74 | --embed-certs=true \
75 | --kubeconfig=kube-proxy.kubeconfig
76 |
77 | $ kubectl config set-context default \
78 | --cluster=kubernetes-the-hard-way \
79 | --user=system:kube-proxy \
80 | --kubeconfig=kube-proxy.kubeconfig
81 |
82 | $ kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
83 | ```
84 |
85 | Results:
86 |
87 | ```
88 | kube-proxy.kubeconfig
89 | ```
90 |
91 | ### The kube-controller-manager Kubernetes Configuration File
92 |
93 | Generate a kubeconfig file for the `kube-controller-manager` service:
94 |
95 | ```
96 | $ kubectl config set-cluster kubernetes-the-hard-way \
97 | --certificate-authority=ca.pem \
98 | --embed-certs=true \
99 | --server=https://127.0.0.1:6443 \
100 | --kubeconfig=kube-controller-manager.kubeconfig
101 |
102 | $ kubectl config set-credentials system:kube-controller-manager \
103 | --client-certificate=kube-controller-manager.pem \
104 | --client-key=kube-controller-manager-key.pem \
105 | --embed-certs=true \
106 | --kubeconfig=kube-controller-manager.kubeconfig
107 |
108 | $ kubectl config set-context default \
109 | --cluster=kubernetes-the-hard-way \
110 | --user=system:kube-controller-manager \
111 | --kubeconfig=kube-controller-manager.kubeconfig
112 |
113 | $ kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
114 | ```
115 |
116 | Results:
117 |
118 | ```
119 | kube-controller-manager.kubeconfig
120 | ```
121 |
122 |
123 | ### The kube-scheduler Kubernetes Configuration File
124 |
125 | Generate a kubeconfig file for the `kube-scheduler` service:
126 |
127 | ```
128 | $ kubectl config set-cluster kubernetes-the-hard-way \
129 | --certificate-authority=ca.pem \
130 | --embed-certs=true \
131 | --server=https://127.0.0.1:6443 \
132 | --kubeconfig=kube-scheduler.kubeconfig
133 |
134 | $ kubectl config set-credentials system:kube-scheduler \
135 | --client-certificate=kube-scheduler.pem \
136 | --client-key=kube-scheduler-key.pem \
137 | --embed-certs=true \
138 | --kubeconfig=kube-scheduler.kubeconfig
139 |
140 | $ kubectl config set-context default \
141 | --cluster=kubernetes-the-hard-way \
142 | --user=system:kube-scheduler \
143 | --kubeconfig=kube-scheduler.kubeconfig
144 |
145 | $ kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
146 | ```
147 |
148 | Results:
149 |
150 | ```
151 | kube-scheduler.kubeconfig
152 | ```
153 |
154 | ### The admin Kubernetes Configuration File
155 |
156 | Generate a kubeconfig file for the `admin` user:
157 |
158 | ```
159 | $ kubectl config set-cluster kubernetes-the-hard-way \
160 | --certificate-authority=ca.pem \
161 | --embed-certs=true \
162 | --server=https://127.0.0.1:6443 \
163 | --kubeconfig=admin.kubeconfig
164 |
165 | $ kubectl config set-credentials admin \
166 | --client-certificate=admin.pem \
167 | --client-key=admin-key.pem \
168 | --embed-certs=true \
169 | --kubeconfig=admin.kubeconfig
170 |
171 | $ kubectl config set-context default \
172 | --cluster=kubernetes-the-hard-way \
173 | --user=admin \
174 | --kubeconfig=admin.kubeconfig
175 |
176 | $ kubectl config use-context default --kubeconfig=admin.kubeconfig
177 | ```
178 |
179 | Results:
180 |
181 | ```
182 | admin.kubeconfig
183 | ```
184 |
185 |
186 | ## Distribute the Kubernetes Configuration Files
187 |
188 | Copy the appropriate `kubelet` (`worker-*.kubeconfig`) and `kube-proxy` kubeconfig files to each worker instance:
189 |
190 | ```
191 | $ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
192 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
193 | --output text | sort | grep worker
194 | worker-0 i-aaaaaaaaaaaaaaaaa ap-northeast-1c 10.240.0.20 aa.aaa.aaa.aaa running
195 | worker-1 i-bbbbbbbbbbbbbbbbb ap-northeast-1c 10.240.0.21 b.bbb.b.bbb running
196 | worker-2 i-ccccccccccccccccc ap-northeast-1c 10.240.0.22 cc.ccc.cc.ccc running
197 |
198 | $ scp -i ~/.ssh/your_ssh_key worker-0.kubeconfig kube-proxy.kubeconfig ubuntu@aa.aaa.aaa.aaa:~/
199 | $ scp -i ~/.ssh/your_ssh_key worker-1.kubeconfig kube-proxy.kubeconfig ubuntu@b.bbb.b.bbb:~/
200 | $ scp -i ~/.ssh/your_ssh_key worker-2.kubeconfig kube-proxy.kubeconfig ubuntu@cc.ccc.cc.ccc:~/
201 | ```
202 |
203 | Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
204 |
205 | ```
206 | $ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
207 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
208 | --output text | sort | grep master
209 | master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1c 10.240.0.10 xx.xxx.xxx.xxx running
210 | master-1 i-yyyyyyyyyyyyyyyyy ap-northeast-1c 10.240.0.11 yy.yyy.yyy.yy running
211 | master-2 i-zzzzzzzzzzzzzzzzz ap-northeast-1c 10.240.0.12 zz.zzz.z.zzz running
212 |
213 | $ for masternode in xx.xxx.xxx.xxx yy.yyy.yyy.yy zz.zzz.z.zzz; do
214 | scp -i ~/.ssh/your_ssh_key \
215 | admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig \
216 | ubuntu@${masternode}:~/
217 | done
218 | ```
219 |
220 | Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
--------------------------------------------------------------------------------
/docs/06-data-encryption-keys.md:
--------------------------------------------------------------------------------
1 | # Generating the Data Encryption Config and Key
2 |
3 | Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
4 |
5 | In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
6 |
7 | ## The Encryption Key
8 |
9 | Generate an encryption key:
10 |
11 | ```
12 | $ ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
13 | ```
14 |
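Optionally, verify that the decoded key is 32 bytes long. This sketch assumes GNU coreutils (`base64 -d`); on macOS use `base64 -D` instead:

```
$ echo -n "${ENCRYPTION_KEY}" | base64 -d | wc -c
32
```
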
15 | ## The Encryption Config File
16 |
17 | Create the `encryption-config.yaml` encryption config file:
18 |
19 | ```
20 | $ cat > encryption-config.yaml < Remember to run the above commands on each master node: `master-0`, `master-1`, and `master-2`.
129 |
130 | Verify etcd servers are running as systemd services.
131 |
132 | ```
133 | master-x $ systemctl status etcd.service
134 | ● etcd.service - etcd
135 | Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
136 | Active: active (running) since Mon 2020-01-20 18:01:29 UTC; 21s ago
137 | ...
138 | ```
139 |
140 | ## Verification
141 |
142 | List the etcd cluster members:
143 |
144 | ```
145 | master-0 $ sudo ETCDCTL_API=3 etcdctl member list \
146 | --endpoints=https://127.0.0.1:2379 \
147 | --cacert=/etc/etcd/ca.pem \
148 | --cert=/etc/etcd/kubernetes.pem \
149 | --key=/etc/etcd/kubernetes-key.pem
150 |
151 | 3a57933972cb5131, started, master-2, https://10.240.0.12:2380, https://10.240.0.12:2379, false
152 | f98dc20bce6225a0, started, master-0, https://10.240.0.10:2380, https://10.240.0.10:2379, false
153 | ffed16798470cab5, started, master-1, https://10.240.0.11:2380, https://10.240.0.11:2379, false
154 | ```
155 |
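Optionally, you can also check endpoint health with the same certificates. This is a sketch; the exact output wording may vary slightly between etcd versions:

```
master-0 $ sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem

https://127.0.0.1:2379 is healthy: successfully committed proposal: took = ...
```
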
156 | Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
--------------------------------------------------------------------------------
/docs/08-bootstrapping-kubernetes-controllers.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the Kubernetes Control Plane
2 |
3 | In this lab you will bootstrap the Kubernetes control plane across three EC2 instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each master instance: `master-0`, `master-1`, and `master-2`. Log in to each master instance using SSH. Example:
8 |
9 | ```
10 | $ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
11 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
12 | --output text | sort | grep master
13 | master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1c 10.240.0.10 xx.xxx.xxx.xxx running
14 | ...
15 |
16 | $ ssh -i ~/.ssh/your_ssh_key ubuntu@xx.xxx.xxx.xxx
17 | ```
18 |
19 | ### Running commands in parallel with tmux
20 |
21 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple EC2 instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
22 |
23 | ## Provision the Kubernetes Control Plane
24 |
25 | Create the Kubernetes configuration directory:
26 |
27 | ```
28 | master-x $ sudo mkdir -p /etc/kubernetes/config
29 | ```
30 |
31 | ### Download and Install the Kubernetes Controller Binaries
32 |
33 | Download the official Kubernetes release binaries - `kube-apiserver`, `kube-controller-manager`, `kube-scheduler`, and `kubectl`:
34 |
35 | ```
36 | master-x $ wget -q --show-progress --https-only --timestamping \
37 | "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-apiserver" \
38 | "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-controller-manager" \
39 | "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-scheduler" \
40 | "https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl"
41 | ```
42 |
43 | Install the Kubernetes binaries:
44 |
45 | ```
46 | master-x $ chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
47 | master-x $ sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
48 | ```
49 |
50 | ### Configure the Kubernetes API Server
51 |
52 | ```
53 | master-x $ sudo mkdir -p /var/lib/kubernetes/
54 |
55 | master-x $ sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
56 | service-account-key.pem service-account.pem \
57 | encryption-config.yaml /var/lib/kubernetes/
58 | ```
59 |
60 | The instance internal IP address will be used to advertise the API Server to members of the cluster. Retrieve the internal IP address for the current EC2 instance.
61 |
62 | ```
63 | master-x $ INTERNAL_IP=$(curl 169.254.169.254/latest/meta-data/local-ipv4)
64 | ```
65 |
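If your instances are configured to require IMDSv2 (token-based access to instance metadata), the same value can be retrieved with a session token. A minimal sketch:

```
# Request a session token valid for 6 hours, then query the metadata endpoint with it
master-x $ TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
master-x $ INTERNAL_IP=$(curl -s -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  169.254.169.254/latest/meta-data/local-ipv4)
```
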
66 | Create the `kube-apiserver.service` systemd unit file:
67 |
68 | ```
69 | master-x $ cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
200 |
201 | Verify controller services are running.
202 |
203 | ```
204 | master-x $ for svc in kube-apiserver kube-controller-manager kube-scheduler; \
205 | do sudo systemctl status --no-pager $svc | grep -B 3 Active; \
206 | done
207 | ● kube-apiserver.service - Kubernetes API Server
208 | Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
209 | Active: active (running) since Tue 2020-01-21 11:05:50 UTC; 3h 39min ago
210 | ● kube-controller-manager.service - Kubernetes Controller Manager
211 | Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
212 | Active: active (running) since Tue 2020-01-21 11:05:50 UTC; 3h 39min ago
213 | ● kube-scheduler.service - Kubernetes Scheduler
214 | Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
215 | Active: active (running) since Tue 2020-01-21 11:05:50 UTC; 3h 39min ago
216 | ```
217 |
218 | ### Enable HTTP Health Checks
219 |
220 | AWS [Network Load Balancer (NLB)](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/introduction.html) will be used to distribute traffic across the three API servers and allow each API server to terminate TLS connections and validate client certificates. We use HTTP NLB health checks instead of the HTTPS endpoint exposed by the API server; for that purpose the nginx webserver can proxy the HTTP health checks. In this section nginx will be installed and configured to accept HTTP health checks on port `80` and proxy the connections to the API server at `https://127.0.0.1:6443/healthz`.
221 |
222 | > The `/healthz` API server endpoint does not require authentication by default.
223 |
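Conceptually, the proxy is just a small nginx `server` block that listens on port 80 and forwards `/healthz` requests to the local API server. A minimal sketch of such a configuration, written as a heredoc in the same style as the steps below (the exact directives used by this tutorial may differ slightly):

```
# Sketch only - writes an nginx site definition into the working directory;
# the tutorial's own configuration steps follow below.
master-x $ cat > kubernetes.default.svc.cluster.local <<EOF
server {
  listen      80;
  server_name kubernetes.default.svc.cluster.local;

  location /healthz {
     proxy_pass https://127.0.0.1:6443/healthz;
  }
}
EOF
```
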
224 | Install a basic web server to handle HTTP health checks:
225 |
226 | ```
227 | master-x $ sudo apt-get update
228 | master-x $ sudo apt-get install -y nginx
229 | ```
230 |
231 | Now configure the actual nginx config file to proxy the HTTP health checks:
232 |
233 | ```
234 | master-x $ cat > kubernetes.default.svc.cluster.local < Remember to run the above commands on each master node: `master-0`, `master-1`, and `master-2`.
294 |
295 | ## RBAC for Kubelet Authorization
296 |
297 | In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node (`master --> worker`). Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
298 |
299 | > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
300 |
301 | The commands in this section will affect the entire cluster and only need to be run once from **one of the** master nodes.
302 |
303 | ```
304 | $ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
305 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
306 | --output text | sort | grep master
307 | master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1c 10.240.0.10 xx.xxx.xxx.xxx running
308 | ...
309 |
310 | $ ssh -i ~/.ssh/your_ssh_key ubuntu@xx.xxx.xxx.xxx
311 | ```
312 |
313 | Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods.
314 |
315 | > NOTE: you should turn off tmux's synchronized panes and execute the following command on only one master node, as `ClusterRole` is a cluster-wide resource.
316 |
317 | ```
318 | master-0 $ hostname
319 | master-0
320 |
321 | master-0 $ cat < The socat binary enables support for the `kubectl port-forward` command.
33 |
34 | ### Disable Swap
35 |
36 | By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
37 |
38 | Verify if swap is enabled:
39 |
40 | ```
41 | worker-x $ sudo swapon --show
42 | ```
43 |
44 | If the output is empty then swap is not enabled. If swap is enabled, run the following command to disable swap immediately:
45 |
46 | ```
47 | worker-x $ sudo swapoff -a
48 | ```
49 |
50 | > To ensure swap remains off after reboot consult your Linux distro documentation.
51 |
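One common approach on Ubuntu is to comment out any swap entries in `/etc/fstab`. This is a sketch only; review your `/etc/fstab` before editing it (EC2 Ubuntu AMIs typically have no swap configured, so the file may not need any change):

```
# Comment out fstab lines whose type field is "swap" (GNU sed; edits the file in place)
worker-x $ sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```
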
52 | ### Download and Install Worker Binaries
53 |
54 | ```
55 | worker-x $ wget -q --show-progress --https-only --timestamping \
56 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.15.0/crictl-v1.15.0-linux-amd64.tar.gz \
57 | https://github.com/opencontainers/runc/releases/download/v1.0.0-rc8/runc.amd64 \
58 | https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz \
59 | https://github.com/containerd/containerd/releases/download/v1.2.9/containerd-1.2.9.linux-amd64.tar.gz \
60 | https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubectl \
61 | https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kube-proxy \
62 | https://storage.googleapis.com/kubernetes-release/release/v1.15.3/bin/linux/amd64/kubelet
63 | ```
64 |
65 | Create the installation directories:
66 |
67 | ```
68 | worker-x $ sudo mkdir -p \
69 | /etc/cni/net.d \
70 | /opt/cni/bin \
71 | /var/lib/kubelet \
72 | /var/lib/kube-proxy \
73 | /var/lib/kubernetes \
74 | /var/run/kubernetes
75 | ```
76 |
77 | Install the worker binaries:
78 |
79 | ```
80 | worker-x $ mkdir containerd
81 | worker-x $ tar -xvf crictl-v1.15.0-linux-amd64.tar.gz
82 | worker-x $ tar -xvf containerd-1.2.9.linux-amd64.tar.gz -C containerd
83 | worker-x $ sudo tar -xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/
84 | worker-x $ sudo mv runc.amd64 runc
85 | worker-x $ chmod +x crictl kubectl kube-proxy kubelet runc
86 | worker-x $ sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
87 | worker-x $ sudo mv containerd/bin/* /bin/
88 | ```
89 |
90 | Verify:
91 |
92 | ```
93 | worker-x $ ls /opt/cni/bin/
94 | bandwidth bridge dhcp firewall flannel host-device host-local ipvlan loopback macvlan portmap ptp sbr static tuning vlan
95 |
96 | worker-x $ ls /bin/container*
97 | /bin/containerd /bin/containerd-shim /bin/containerd-shim-runc-v1 /bin/containerd-stress
98 | worker-x $ ls /usr/local/bin/
99 | crictl kube-proxy kubectl kubelet runc
100 | ```
101 |
102 | ### Configure CNI Networking
103 |
104 | Retrieve the Pod CIDR range for the current EC2 instance. Remember that we stored the Pod CIDR range by executing `echo 10.200.x.0/24 > /opt/pod_cidr.txt` via UserData in [cloudformation/hard-k8s-worker-nodes.cfn.yml](../cloudformation/hard-k8s-worker-nodes.cfn.yml).
105 |
106 | Example:
107 |
108 | ```
109 | worker-0 $ cat /opt/pod_cidr.txt
110 | 10.200.0.0/24
111 | ```
112 |
113 | Save this range in an environment variable named `POD_CIDR`:
114 |
115 | ```
116 | worker-x $ POD_CIDR=$(cat /opt/pod_cidr.txt)
117 | ```
118 |
119 | Create the `bridge` network configuration file:
120 |
121 | ```
122 | worker-x $ cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
241 |
242 | Create the `kubelet.service` systemd unit file:
243 |
244 | ```
245 | worker-x $ cat < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
320 |
321 | ## Verification
322 |
323 | > The EC2 instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the EC2 instances.
324 |
325 | List the registered Kubernetes nodes:
326 |
327 | ```
328 | $ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxxxxxxxxxxxxxxxx \
329 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
330 | --output text | sort | grep master-0
331 | master-0 i-xxxxxxxxxxxxxxxxx ap-northeast-1d 10.240.0.10 xx.xxx.xx.xx running
332 |
333 | $ ssh -i ~/.ssh/your_ssh_key ubuntu@xx.xxx.xx.xx "kubectl get nodes --kubeconfig admin.kubeconfig"
334 | NAME STATUS ROLES AGE VERSION
335 | worker-0 Ready 2m18s v1.15.3
336 | worker-1 Ready 2m18s v1.15.3
337 | worker-2 Ready 2m18s v1.15.3
338 | ```
339 |
340 | Now all 3 workers have been registered with the cluster.
341 |
342 | Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
--------------------------------------------------------------------------------
/docs/10-configuring-kubectl.md:
--------------------------------------------------------------------------------
1 | # Configuring kubectl for Remote Access
2 |
3 | In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
4 |
5 | > Run the commands in this lab from the same directory used to generate the admin client certificates.
6 |
7 | ## The Admin Kubernetes Configuration File
8 |
9 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
10 |
11 | Generate a kubeconfig file suitable for authenticating as the `admin` user:
12 |
13 | ```
14 | $ KUBERNETES_PUBLIC_ADDRESS=$(aws ec2 describe-addresses \
15 | --filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way" \
16 | --query 'Addresses[0].PublicIp' --output text)
17 |
18 | $ kubectl config set-cluster kubernetes-the-hard-way \
19 | --certificate-authority=ca.pem \
20 | --embed-certs=true \
21 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
22 |
23 | $ kubectl config set-credentials admin \
24 | --client-certificate=admin.pem \
25 | --client-key=admin-key.pem
26 |
27 | $ kubectl config set-context kubernetes-the-hard-way \
28 | --cluster=kubernetes-the-hard-way \
29 | --user=admin
30 |
31 | $ kubectl config use-context kubernetes-the-hard-way
32 | ```
33 |
34 | ## Verification
35 |
36 | Check the health of the remote Kubernetes cluster:
37 |
38 | ```
39 | $ kubectl get componentstatuses
40 | NAME STATUS MESSAGE ERROR
41 | controller-manager Healthy ok
42 | scheduler Healthy ok
43 | etcd-1 Healthy {"health":"true"}
44 | etcd-2 Healthy {"health":"true"}
45 | etcd-0 Healthy {"health":"true"}
46 | ```
47 |
48 | List the nodes in the remote Kubernetes cluster:
49 |
50 | ```
51 | $ kubectl get nodes
52 | NAME STATUS ROLES AGE VERSION
53 | worker-0 Ready 6m38s v1.15.3
54 | worker-1 Ready 6m38s v1.15.3
55 | worker-2 Ready 6m38s v1.15.3
56 | ```
57 |
58 | Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
--------------------------------------------------------------------------------
/docs/11-pod-network-routes.md:
--------------------------------------------------------------------------------
1 | # Provisioning Pod Network Routes
2 |
3 | Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing [network routes](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Route_Tables.html).
4 |
5 | In this lab, firstly you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
6 |
7 | > There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
8 |
9 |
10 | ## The Routing Table
11 |
12 | In this section you will gather the information required to create new routes in the VPC network. Remember that we have already created a route table for the dedicated subnet of the k8s cluster. What we need to do in this section is add new route resources to that existing route table (which we can reference with `!ImportValue hard-k8s-rtb`).
13 |
14 | Reference: [cloudformation/hard-k8s-pod-routes.cfn.yml](../cloudformation/hard-k8s-pod-routes.cfn.yml)
15 | ```yaml
16 | Resources:
17 | RouteWorker0:
18 | Type: AWS::EC2::Route
19 | Properties:
20 | DestinationCidrBlock: 10.200.0.0/24
21 | RouteTableId: !ImportValue hard-k8s-rtb
22 | InstanceId: !ImportValue hard-k8s-worker-0
23 |
24 | RouteWorker1:
25 | Type: AWS::EC2::Route
26 | Properties:
27 | DestinationCidrBlock: 10.200.1.0/24
28 | RouteTableId: !ImportValue hard-k8s-rtb
29 | InstanceId: !ImportValue hard-k8s-worker-1
30 |
31 | RouteWorker2:
32 | Type: AWS::EC2::Route
33 | Properties:
34 | DestinationCidrBlock: 10.200.2.0/24
35 | RouteTableId: !ImportValue hard-k8s-rtb
36 | InstanceId: !ImportValue hard-k8s-worker-2
37 | ```
38 |
39 | Now create the route resources via the AWS CLI:
40 |
41 | ```
42 | $ aws cloudformation create-stack \
43 | --stack-name hard-k8s-pod-routes \
44 | --template-body file://cloudformation/hard-k8s-pod-routes.cfn.yml
45 | ```
46 |
47 | Verify:
48 |
49 | ```
50 | $ aws cloudformation describe-stacks \
51 | --stack-name hard-k8s-network \
52 | --query 'Stacks[0].Outputs' --output table
53 | -----------------------------------------------------------------
54 | | DescribeStacks |
55 | +-----------------+----------------+----------------------------+
56 | | ExportName | OutputKey | OutputValue |
57 | +-----------------+----------------+----------------------------+
58 | | hard-k8s-rtb | RouteTableId | rtb-sssssssssssssssss |
59 | | hard-k8s-vpc | VpcId | vpc-ppppppppppppppppp |
60 | | hard-k8s-subnet| SubnetId | subnet-qqqqqqqqqqqqqqqqq |
61 | +-----------------+----------------+----------------------------+
62 |
63 | $ ROUTE_TABLE_ID=$(aws cloudformation describe-stacks \
64 | --stack-name hard-k8s-network \
65 | --query 'Stacks[0].Outputs[?ExportName==`hard-k8s-rtb`].OutputValue' --output text)
66 |
67 | $ aws ec2 describe-route-tables \
68 | --route-table-ids $ROUTE_TABLE_ID \
69 | --query 'RouteTables[0].Routes[].[DestinationCidrBlock,InstanceId,GatewayId]' --output table
70 | -------------------------------------------------------------------
71 | | DescribeRouteTables |
72 | +---------------+-----------------------+-------------------------+
73 | | 10.200.0.0/24| i-aaaaaaaaaaaaaaaaa | None | # worker-0
74 | | 10.200.1.0/24| i-bbbbbbbbbbbbbbbbb | None | # worker-1
75 | | 10.200.2.0/24| i-ccccccccccccccccc | None | # worker-2
76 | | 10.240.0.0/16| None | local | # inter-vpc traffic among 10.240.0.0/16 range
77 | | 0.0.0.0/0 | None | igw-xxxxxxxxxxxxxxxxx | # default internet gateway
78 | +---------------+-----------------------+-------------------------+
79 | ```
80 |
81 | So this route table ensures that traffic destined for pods running on worker-0, whose pod CIDR range is `10.200.0.0/24`, is routed to the worker-0 node (and likewise for worker-1 and worker-2).
82 |
83 | Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
--------------------------------------------------------------------------------
/docs/12-dns-addon.md:
--------------------------------------------------------------------------------
1 | # Deploying the DNS Cluster Add-on
2 |
3 | In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
4 |
5 | ## The DNS Cluster Add-on
6 |
7 | First check registered worker nodes:
8 |
9 | ```
10 | $ kubectl get nodes -o wide
11 | NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
12 | worker-0 Ready 26h v1.15.3 10.240.0.20 Ubuntu 18.04.3 LTS 4.15.0-1051-aws containerd://1.2.9
13 | worker-1 Ready 26h v1.15.3 10.240.0.21 Ubuntu 18.04.3 LTS 4.15.0-1051-aws containerd://1.2.9
14 | worker-2 Ready 26h v1.15.3 10.240.0.22 Ubuntu 18.04.3 LTS 4.15.0-1051-aws containerd://1.2.9
15 | ```
16 |
17 | Deploy the `coredns` cluster add-on:
18 |
19 | ```
20 | $ kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns.yaml
21 |
22 | serviceaccount/coredns created
23 | clusterrole.rbac.authorization.k8s.io/system:coredns created
24 | clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
25 | configmap/coredns created
26 | deployment.extensions/coredns created
27 | service/kube-dns created
28 | ```
29 |
30 | List the pods created by the `kube-dns` deployment:
31 |
32 | ```
33 | $ kubectl get pods -l k8s-app=kube-dns -n kube-system -o wide
34 |
35 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
36 | coredns-5fb99965-gk2j7 1/1 Running 0 98s 10.200.1.3 worker-1
37 | coredns-5fb99965-w6hxj 1/1 Running 0 98s 10.200.2.3 worker-2
38 | ```
39 |
40 | Note that the pods are running within the pre-defined pod CIDR ranges. Your results may differ since we haven't specified which worker node each pod should run on.
41 |
42 |
43 | ## Verification
44 |
45 | Create a `busybox` deployment:
46 |
47 | ```
48 | $ kubectl run --generator=run-pod/v1 busybox --image=busybox:1.28 --command -- sleep 3600
49 |
50 | pod/busybox created
51 | ```
52 |
53 | List the pod created by the `busybox` deployment:
54 |
55 | ```
56 | $ kubectl get pods -l run=busybox -o wide
57 |
58 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
59 | busybox 1/1 Running 0 3m45s 10.200.2.2 worker-2
60 | ```
61 |
62 | Retrieve the full name of the `busybox` pod:
63 |
64 | ```
65 | $ POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
66 | ```
67 |
68 | Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
69 |
70 | ```
71 | $ kubectl exec -ti $POD_NAME -- nslookup kubernetes
72 |
73 | Server: 10.32.0.10
74 | Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
75 |
76 | Name: kubernetes
77 | Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
78 | ```
79 |
80 | Next: [Smoke Test](13-smoke-test.md)
--------------------------------------------------------------------------------
/docs/13-smoke-test.md:
--------------------------------------------------------------------------------
1 | # Smoke Test
2 |
3 | In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
4 |
5 | ## Data Encryption
6 |
7 | In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
8 |
9 | Create a generic secret:
10 |
11 | ```
12 | $ kubectl create secret generic kubernetes-the-hard-way \
13 | --from-literal="mykey=mydata"
14 | ```
15 |
16 | Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
17 |
18 | ```
19 | $ ssh -i ~/.ssh/your_ssh_key ubuntu@<master-0 public IP>
20 | ```
21 |
22 | ```
23 | master-0 $ sudo ETCDCTL_API=3 etcdctl get \
24 | --endpoints=https://127.0.0.1:2379 \
25 | --cacert=/etc/etcd/ca.pem \
26 | --cert=/etc/etcd/kubernetes.pem \
27 | --key=/etc/etcd/kubernetes-key.pem\
28 | /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
29 |
30 | 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
31 | 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
32 | 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
33 | 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
34 | 00000040 3a 76 31 3a 6b 65 79 31 3a 24 a3 f7 aa 22 b1 d2 |:v1:key1:$..."..|
35 | 00000050 7b 9f 89 aa 53 a6 a0 5e e4 5f 1f ea b2 d6 c4 de |{...S..^._......|
36 | 00000060 c2 80 02 a9 57 e7 e6 b0 46 57 9f fa c8 dd 89 c3 |....W...FW......|
37 | 00000070 ef 15 58 71 ab ec c3 6a 9f 7e da b9 d8 94 2e 0d |..Xq...j.~......|
38 | 00000080 85 a3 ff 94 56 62 a1 dd f6 4b a6 47 d1 46 b6 92 |....Vb...K.G.F..|
39 | 00000090 27 9f 4d e0 5c 81 4e b4 fe 2e ca d5 5b d2 be 07 |'.M.\.N.....[...|
40 | 000000a0 1d 4e 38 b8 2b 03 37 0d 65 84 e2 8c de 87 80 c8 |.N8.+.7.e.......|
41 | 000000b0 9c f9 08 0e 4f 29 fc 5f b3 e8 10 99 b4 00 b3 ad |....O)._........|
42 | 000000c0 6c dd 81 28 a0 2d a6 82 41 0e 7d ba a8 a0 7d d6 |l..(.-..A.}...}.|
43 | 000000d0 15 f0 80 a5 1d 27 33 aa a1 b5 e0 d1 e7 5b 63 22 |.....'3......[c"|
44 | 000000e0 9a 10 68 42 e6 d4 9f 0d ab 0a |..hB......|
45 | 000000ea
46 | ```
47 |
48 | The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
49 |
50 | ## Deployments
51 |
52 | In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
53 |
54 | Create a deployment for the [nginx](https://nginx.org/en/) web server:
55 |
56 | ```
57 | $ kubectl create deployment nginx --image=nginx
58 | ```
59 |
60 | List the pod created by the `nginx` deployment:
61 |
62 | ```
63 | $ kubectl get pods -l app=nginx
64 |
65 | NAME READY STATUS RESTARTS AGE
66 | nginx-554b9c67f9-vt5rn 1/1 Running 0 10s
67 | ```
68 |
69 | ### Port Forwarding
70 |
71 | In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
72 |
73 | Retrieve the full name of the `nginx` pod:
74 |
75 | ```
76 | $ POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
77 | ```
78 |
79 | Forward port `8080` on your local machine to port `80` of the `nginx` pod:
80 |
81 | ```
82 | $ kubectl port-forward $POD_NAME 8080:80
83 |
84 | Forwarding from 127.0.0.1:8080 -> 80
85 | Forwarding from [::1]:8080 -> 80
86 | ```
87 |
88 | In a new terminal make an HTTP request using the forwarding address:
89 |
90 | ```
91 | $ curl --head http://127.0.0.1:8080
92 |
93 | HTTP/1.1 200 OK
94 | Server: nginx/1.17.8
95 | Date: Fri, 24 Jan 2020 19:31:41 GMT
96 | Content-Type: text/html
97 | Content-Length: 612
98 | Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
99 | Connection: keep-alive
100 | ETag: "5e26fe48-264"
101 | Accept-Ranges: bytes
102 | ```
103 |
104 | Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
105 |
106 | ```
107 | Forwarding from 127.0.0.1:8080 -> 80
108 | Forwarding from [::1]:8080 -> 80
109 | Handling connection for 8080
110 | ^C
111 | ```
112 |
113 | ### Logs
114 |
115 | In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
116 |
117 | Print the `nginx` pod logs:
118 |
119 | ```
120 | $ kubectl logs $POD_NAME
121 |
122 | 127.0.0.1 - - [14/Sep/2019:21:10:11 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.52.1" "-"
123 | ```
124 |
125 | ### Exec
126 |
127 | In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
128 |
129 | Print the nginx version by executing the `nginx -v` command in the `nginx` container:
130 |
131 | ```
132 | $ kubectl exec -ti $POD_NAME -- nginx -v
133 |
134 | nginx version: nginx/1.17.8
135 | ```
136 |
137 | ## Services
138 |
139 | In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
140 |
141 | Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
142 |
143 | ```
144 | $ kubectl expose deployment nginx --port 80 --type NodePort
145 | ```
146 |
147 | > The LoadBalancer service type can not be used because your cluster is not configured with [cloud provider integration](https://kubernetes.io/docs/getting-started-guides/scratch/#cloud-provider). Setting up cloud provider integration is out of scope for this tutorial.
148 |
149 | Retrieve the node port assigned to the `nginx` service:
150 |
151 | ```
152 | $ NODE_PORT=$(kubectl get svc nginx \
153 | --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
154 | $ echo $NODE_PORT
155 | 30712
156 | ```
157 |
158 | The value of `$NODE_PORT` varies. Create a security group ingress rule that allows remote access to the `nginx` node port with the following CloudFormation template:
159 |
160 | Reference: [cloudformation/hard-k8s-nodeport-sg-ingress](../cloudformation/hard-k8s-nodeport-sg-ingress.cfn.yml)
161 | ```yaml
162 | Parameters:
163 | ParamNodePort:
164 | Type: Number
165 | # ref: https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
166 | MinValue: 30000
167 | MaxValue: 32767
168 |
169 | Resources:
170 | HardK8sSmokeIngress:
171 | Type: AWS::EC2::SecurityGroupIngress
172 | Properties:
173 | GroupId: !ImportValue hard-k8s-sg
174 | CidrIp: 0.0.0.0/0
175 | IpProtocol: tcp
176 | FromPort: !Ref ParamNodePort
177 | ToPort: !Ref ParamNodePort
178 | ```
179 |
180 | Pass the `$NODE_PORT` environment variable as a CloudFormation stack parameter:
181 |
182 | ```
183 | $ aws cloudformation create-stack \
184 | --stack-name hard-k8s-nodeport-sg-ingress \
185 | --parameters ParameterKey=ParamNodePort,ParameterValue=$NODE_PORT \
186 | --template-body file://cloudformation/hard-k8s-nodeport-sg-ingress.cfn.yml
187 | ```
188 |
189 | Retrieve the external IP address of the worker instance that is hosting the nginx pod:
190 |
191 | ```
192 | $ kubectl get pods -l app=nginx -o wide
193 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
194 | nginx-554b9c67f9-gw87z 1/1 Running 0 27m 10.200.1.3 worker-1
195 |
196 | $ WORKER_NODE_NAME=$(kubectl get pods -l app=nginx -o=jsonpath='{.items[0].spec.nodeName}')
197 | $ echo $WORKER_NODE_NAME
198 | worker-1
199 |
200 | $ EXTERNAL_IP=$(aws ec2 describe-instances \
201 | --filter "Name=tag:Name,Values=${WORKER_NODE_NAME}" \
202 | --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
203 | $ echo $EXTERNAL_IP
204 | 54.xxx.xxx.18
205 | ```
206 |
207 | Make an HTTP request using the external IP address and the `nginx` node port:
208 |
209 | ```
210 | $ curl -I http://${EXTERNAL_IP}:${NODE_PORT}
211 |
212 | HTTP/1.1 200 OK
213 | Server: nginx/1.17.8
214 | Date: Fri, 24 Jan 2020 20:02:27 GMT
215 | Content-Type: text/html
216 | Content-Length: 612
217 | Last-Modified: Tue, 21 Jan 2020 13:36:08 GMT
218 | Connection: keep-alive
219 | ETag: "5e26fe48-264"
220 | Accept-Ranges: bytes
221 | ```
222 |
223 | Congrats! Now you have built your own Kubernetes cluster the hard way.
224 |
225 | Next: [Cleaning Up](14-cleanup.md)
--------------------------------------------------------------------------------
/docs/14-cleanup.md:
--------------------------------------------------------------------------------
1 | # Cleaning Up
2 |
3 | In this lab you will delete the AWS resources created during this tutorial.
4 |
5 | ## Delete CloudFormation stacks
6 |
7 | As you've created all resources via CloudFormation stacks, the only thing you need to do is delete those stacks. Doing so removes the underlying resources such as the EC2 instances (master/worker), security groups, the NLB, and the EIP.
8 |
9 | One thing you should be aware of is the dependencies between stacks - if a stack imports values exported by another stack via `!ImportValue`, the importing stack must be deleted first.
10 |
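If you are unsure whether a stack's exports are still referenced, `list-imports` shows which stacks import a given export (it returns an error if the export is not imported by any stack). For example, using the security group export:

```
$ aws cloudformation list-imports --export-name hard-k8s-sg
```
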
11 | ```
12 | $ for stack in hard-k8s-nodeport-sg-ingress \
13 | hard-k8s-pod-routes \
14 | hard-k8s-nlb \
15 | hard-k8s-worker-nodes \
16 | hard-k8s-master-nodes; \
17 | do \
18 | aws cloudformation delete-stack --stack-name ${stack} && \
19 | aws cloudformation wait stack-delete-complete --stack-name ${stack}
20 | done
21 | ```
22 |
23 | Next, release the Elastic IP (EIP) that was used as the Kubernetes API server frontend. After that you can remove the CloudFormation stack with the `--retain-resources` option, which here doesn't really "retain" the EIP but rather skips deleting the already-released resource.
24 |
25 | ```
26 | $ ALLOCATION_ID=$(aws ec2 describe-addresses \
27 | --filters "Name=tag:Name,Values=eip-kubernetes-the-hard-way" \
28 | --query 'Addresses[0].AllocationId' --output text)
29 |
30 | $ aws ec2 release-address --allocation-id $ALLOCATION_ID
31 |
32 | $ aws cloudformation delete-stack --stack-name hard-k8s-eip --retain-resources HardK8sEIP
33 | ```
34 |
35 | Now you can delete the rest of the stacks.
36 |
37 | ```
38 | $ for stack in hard-k8s-security-groups \
39 | hard-k8s-network; \
40 | do \
41 | aws cloudformation delete-stack --stack-name ${stack} && \
42 | aws cloudformation wait stack-delete-complete --stack-name ${stack}
43 | done
44 | ```
45 |
46 | I hope you've enjoyed this tutorial. If you find any problems or have suggestions, please [open an issue](https://github.com/thash/kubernetes-the-hard-way-on-aws/issues).
--------------------------------------------------------------------------------
/docs/images/k8s_the_hard_way_on_aws_diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/thash/kubernetes-the-hard-way-on-aws/f1e4d23de76a5cb17e03da78516cb0c975cc0bb5/docs/images/k8s_the_hard_way_on_aws_diagram.png
--------------------------------------------------------------------------------
/docs/images/tmux-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/thash/kubernetes-the-hard-way-on-aws/f1e4d23de76a5cb17e03da78516cb0c975cc0bb5/docs/images/tmux-screenshot.png
--------------------------------------------------------------------------------
/retrieve_ec2_instances.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | # NOTE: setting AWS_REGION environment variable allows you to query regions in which you have resources.
4 |
5 | VPC_ID=$(aws cloudformation describe-stacks \
6 | --stack-name hard-k8s-network \
7 | --query 'Stacks[0].Outputs[?ExportName==`hard-k8s-vpc`].OutputValue' --output text)
8 |
9 | aws ec2 describe-instances \
10 | --filters Name=vpc-id,Values=$VPC_ID \
11 | --query 'Reservations[].Instances[].[Tags[?Key==`Name`].Value | [0],InstanceId,Placement.AvailabilityZone,PrivateIpAddress,PublicIpAddress,State.Name]' \
12 | --output text | sort
13 |
--------------------------------------------------------------------------------