├── docs
│   ├── images
│   │   ├── proxmox-network.PNG
│   │   ├── proxmox-vm-list.PNG
│   │   ├── tmux-screenshot.png
│   │   ├── architecture-network.png
│   │   ├── proxmox-vm-hardware.PNG
│   │   └── proxmox-vm-hardware-gw.PNG
│   ├── 14-cleanup.md
│   ├── 06-data-encryption-keys.md
│   ├── 02-client-tools.md
│   ├── 12-dns-addon.md
│   ├── 11-pod-network-routes.md
│   ├── 10-configuring-kubectl.md
│   ├── 07-bootstrapping-etcd.md
│   ├── 13-smoke-test.md
│   ├── 05-kubernetes-configuration-files.md
│   ├── 03-compute-resources.md
│   ├── 09-bootstrapping-kubernetes-workers.md
│   ├── 01-prerequisites.md
│   ├── 04-certificate-authority.md
│   └── 08-bootstrapping-kubernetes-controllers.md
├── COPYRIGHT.md
├── CONTRIBUTING.md
├── .gitignore
├── README.md
├── deployments
│   ├── coredns.yaml
│   └── kube-dns.yaml
└── LICENSE
/docs/images/proxmox-network.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/proxmox-network.PNG
--------------------------------------------------------------------------------
/docs/images/proxmox-vm-list.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/proxmox-vm-list.PNG
--------------------------------------------------------------------------------
/docs/images/tmux-screenshot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/tmux-screenshot.png
--------------------------------------------------------------------------------
/docs/images/architecture-network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/architecture-network.png
--------------------------------------------------------------------------------
/docs/images/proxmox-vm-hardware.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/proxmox-vm-hardware.PNG
--------------------------------------------------------------------------------
/docs/images/proxmox-vm-hardware-gw.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Wirebrass/kubernetes-the-hard-way-on-proxmox/HEAD/docs/images/proxmox-vm-hardware-gw.PNG
--------------------------------------------------------------------------------
/COPYRIGHT.md:
--------------------------------------------------------------------------------
1 | # Copyright
2 |
3 | This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
4 |
--------------------------------------------------------------------------------
/docs/14-cleanup.md:
--------------------------------------------------------------------------------
1 | # Cleaning Up
2 |
3 | In this lab you will delete the resources created during this tutorial.
4 |
5 | ## Virtual Machines
6 |
7 | Stop the 7 VMs created for this tutorial (run the following command on each of them):
8 |
9 | ```bash
10 | sudo shutdown -h now
11 | ```
12 |
13 | Delete all the VMs via the Proxmox WebUI or the Proxmox CLI (on the hypervisor):
14 |
15 | ```bash
16 | sudo qm destroy <vmid> --purge
17 | ```
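For example, if the 7 VMs were created with consecutive VMIDs (hypothetical IDs 100-106 here, adjust to the IDs shown in your Proxmox VM list), a small loop on the hypervisor can remove them all:

```bash
# Hypothetical VMID range: replace 100..106 with your actual VM IDs
for vmid in 100 101 102 103 104 105 106; do
  sudo qm stop "${vmid}"             # make sure the VM is powered off
  sudo qm destroy "${vmid}" --purge  # delete the VM, its disks and its configuration
done
```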
18 |
19 | ## Networking
20 |
21 | Delete the private Kubernetes network (`vmbr8`) via the Proxmox WebUI (to avoid fatal misconfiguration).
22 |
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing
2 |
3 | This project is made possible by contributors like YOU! While all contributions are welcome, please be sure to follow these suggestions to help your PR get merged.
4 |
5 | ## License
6 |
7 | This project uses an [Apache license](LICENSE). Be sure you're comfortable with the implications of that before working up a patch.
8 |
9 | ## Review and merge process
10 |
11 | Review and merge duties are managed by [@kelseyhightower](https://github.com/kelseyhightower). Expect some burden of proof for demonstrating the marginal value of adding new content to the tutorial.
12 |
13 | Here are some examples of the review and justification process:
14 |
15 | - [#208](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/208)
16 | - [#282](https://github.com/kelseyhightower/kubernetes-the-hard-way/pull/282)
17 |
18 | ## Notes on minutiae
19 |
20 | If you find a bug that breaks the guide, please do submit it. If you are considering a minor copy edit for tone, grammar, or simple inconsistent whitespace, consider the tradeoff between maintainer time and community benefit before investing too much of your time.
21 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | admin-csr.json
2 | admin-key.pem
3 | admin.csr
4 | admin.pem
5 | admin.kubeconfig
6 | ca-config.json
7 | ca-csr.json
8 | ca-key.pem
9 | ca.csr
10 | ca.pem
11 | encryption-config.yaml
12 | kube-controller-manager-csr.json
13 | kube-controller-manager-key.pem
14 | kube-controller-manager.csr
15 | kube-controller-manager.kubeconfig
16 | kube-controller-manager.pem
17 | kube-scheduler-csr.json
18 | kube-scheduler-key.pem
19 | kube-scheduler.csr
20 | kube-scheduler.kubeconfig
21 | kube-scheduler.pem
22 | kube-proxy-csr.json
23 | kube-proxy-key.pem
24 | kube-proxy.csr
25 | kube-proxy.kubeconfig
26 | kube-proxy.pem
27 | kubernetes-csr.json
28 | kubernetes-key.pem
29 | kubernetes.csr
30 | kubernetes.pem
31 | worker-0-csr.json
32 | worker-0-key.pem
33 | worker-0.csr
34 | worker-0.kubeconfig
35 | worker-0.pem
36 | worker-1-csr.json
37 | worker-1-key.pem
38 | worker-1.csr
39 | worker-1.kubeconfig
40 | worker-1.pem
41 | worker-2-csr.json
42 | worker-2-key.pem
43 | worker-2.csr
44 | worker-2.kubeconfig
45 | worker-2.pem
46 | service-account-key.pem
47 | service-account.csr
48 | service-account.pem
49 | service-account-csr.json
50 | *.swp
51 |
--------------------------------------------------------------------------------
/docs/06-data-encryption-keys.md:
--------------------------------------------------------------------------------
1 | # Generating the Data Encryption Config and Key
2 |
3 | Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest.
4 |
5 | In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets.
6 |
7 | ## The Encryption Key
8 |
9 | Generate an encryption key:
10 |
11 | ```bash
12 | ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
13 | ```
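Optionally (not part of the original lab), you can check that the generated key decodes to exactly 32 bytes, which gives AES-256 with the `aescbc` provider:

```bash
echo -n "${ENCRYPTION_KEY}" | base64 --decode | wc -c
```

> Expected output: `32`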
14 |
15 | ## The Encryption Config File
16 |
17 | Create the `encryption-config.yaml` encryption config file:
18 |
19 | ```bash
20 | cat > encryption-config.yaml <<EOF
21 | kind: EncryptionConfiguration
22 | apiVersion: apiserver.config.k8s.io/v1
23 | resources:
24 |   - resources:
25 |       - secrets
26 |     providers:
27 |       - aescbc:
28 |           keys:
29 |             - name: key1
30 |               secret: ${ENCRYPTION_KEY}
31 |       - identity: {}
32 | EOF
33 | ```
--------------------------------------------------------------------------------
/docs/02-client-tools.md:
--------------------------------------------------------------------------------
1 | # Installing the Client Tools
2 |
3 | In this lab you will install the command line utilities required to complete this tutorial: `cfssl`, `cfssljson` and `kubectl`.
26 | Verify `cfssl` version 1.3.4 or higher is installed:
27 |
28 | ```bash
29 | cfssl version
30 | ```
31 |
32 | > Output:
33 |
34 | ```bash
35 | Version: 1.3.4
36 | Revision: dev
37 | Runtime: go1.13
38 | ```
39 |
40 | ```bash
41 | cfssljson --version
42 | ```
43 |
44 | > Output:
45 |
46 | ```bash
47 | Version: 1.3.4
48 | Revision: dev
49 | Runtime: go1.13
50 | ```
51 |
52 | ## Install kubectl
53 |
54 | The `kubectl` command line utility is used to interact with the Kubernetes API Server. On the **gateway-01** VM, download and install `kubectl` from the official release binaries:
55 |
56 | ```bash
57 | wget https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kubectl
58 | ```
59 |
60 | ```bash
61 | chmod +x kubectl
62 | ```
63 |
64 | ```bash
65 | sudo mv kubectl /usr/local/bin/
66 | ```
67 |
68 | ### Verification
69 |
70 | Verify `kubectl` version 1.29.1 or higher is installed:
71 |
72 | ```bash
73 | kubectl version --client
74 | ```
75 |
76 | > Output:
77 |
78 | ```bash
79 | Client Version: v1.29.1
80 | Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
81 |
82 | ```
83 |
84 | Next: [Provisioning Compute Resources](03-compute-resources.md)
85 |
--------------------------------------------------------------------------------
/docs/12-dns-addon.md:
--------------------------------------------------------------------------------
1 | # Deploying the DNS Cluster Add-on
2 |
3 | In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/) which provides DNS based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster.
4 |
5 | ## The DNS Cluster Add-on
6 |
7 | Deploy the `coredns` cluster add-on:
8 |
9 | ```bash
10 | kubectl apply -f https://raw.githubusercontent.com/DushanthaS/kubernetes-the-hard-way-on-proxmox/master/deployments/coredns.yaml
11 | ```
12 |
13 |
14 | > Output:
15 |
16 | ```bash
17 | serviceaccount/coredns created
18 | clusterrole.rbac.authorization.k8s.io/system:coredns created
19 | clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
20 | configmap/coredns created
21 | deployment.apps/coredns created
22 | service/kube-dns created
23 | ```
24 |
25 | List the pods created by the `coredns` deployment:
26 |
27 | ```bash
28 | kubectl get pods -l k8s-app=kube-dns -n kube-system
29 | ```
30 |
31 | > Output (you may need to wait a few seconds to see the pods "READY"):
32 |
33 | ```bash
34 | NAME READY STATUS RESTARTS AGE
35 | coredns-699f8ddd77-94qv9 1/1 Running 0 20s
36 | coredns-699f8ddd77-gtcgb 1/1 Running 0 20s
37 | ```
38 |
39 | ## Verification
40 |
41 | Create a `busybox` pod:
42 |
43 | ```bash
44 | kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
45 | ```
46 |
47 | List the `busybox` pod:
48 |
49 | ```bash
50 | kubectl get pods -l run=busybox
51 | ```
52 |
53 | > Output (you may need to wait a few seconds to see the pod "READY"):
54 |
55 | ```bash
56 | NAME READY STATUS RESTARTS AGE
57 | busybox 1/1 Running 0 3s
58 | ```
59 |
60 | Retrieve the full name of the `busybox` pod:
61 |
62 | ```bash
63 | POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
64 | ```
65 |
66 | Execute a DNS lookup for the `kubernetes` service inside the `busybox` pod:
67 |
68 | ```bash
69 | kubectl exec -ti $POD_NAME -- nslookup kubernetes
70 | ```
71 |
72 | > Output:
73 |
74 | ```bash
75 | Server: 10.32.0.10
76 | Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
77 |
78 | Name: kubernetes
79 | Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
80 | ```
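As an optional extra check (not part of the original lab), the fully qualified service name should resolve to the same cluster IP:

```bash
kubectl exec -ti $POD_NAME -- nslookup kubernetes.default.svc.cluster.local
```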
81 |
82 | Next: [Smoke Test](13-smoke-test.md)
83 |
--------------------------------------------------------------------------------
/docs/11-pod-network-routes.md:
--------------------------------------------------------------------------------
1 | # Provisioning Pod Network Routes
2 |
3 | Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing network [routes](https://cloud.google.com/compute/docs/vpc/routes).
4 |
5 | In this lab you will create a route for each worker node that maps the node's Pod CIDR range to the node's internal IP address.
6 |
7 | > There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model.
8 |
9 | ## The Routing Table
10 |
11 | **On each worker node**, add the following routes:
12 |
13 | > **WARNING**: don't add the route associated with the Pod CIDR of the current node (e.g. don't add the 10.200.0.0/24 route if you are on the worker-0 node).
14 |
15 | ```bash
16 | ip route add 10.200.0.0/24 via 192.168.8.20 # Don't add on worker-0
17 | ip route add 10.200.1.0/24 via 192.168.8.21 # Don't add on worker-1
18 | ip route add 10.200.2.0/24 via 192.168.8.22 # Don't add on worker-2
19 | ```
20 |
21 | > You can ignore any `RTNETLINK answers: File exists` message; it only appears when you try to add a route that already exists, and it is not a problem.
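If you prefer not to type the routes by hand, a small loop can add only the routes for the *other* workers. This is just a sketch that assumes the layout used in this tutorial (worker-0, worker-1 and worker-2 reachable at 192.168.8.20, 192.168.8.21 and 192.168.8.22, with the Pod CIDRs listed above):

```bash
# Run on each worker: skip the Pod CIDR owned by the current node
for i in 0 1 2; do
  [ "$(hostname -s)" = "worker-${i}" ] && continue
  ip route add "10.200.${i}.0/24" via "192.168.8.2${i}"
done
```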
22 |
23 | List the routes on the current worker node:
24 |
25 | ```bash
26 | ip route
27 | ```
28 |
29 | > Output (example for worker-0):
30 |
31 | ```bash
32 | default via 192.168.8.1 dev ens18 proto static
33 | 10.200.1.0/24 via 192.168.8.21 dev ens18
34 | 10.200.2.0/24 via 192.168.8.22 dev ens18
35 | 192.168.8.0/24 dev ens18 proto kernel scope link src 192.168.8.20
36 | ```
37 |
38 | To make these routes persistent across reboots, you need to edit your network configuration (the exact method depends on your Linux distribution).
39 |
40 | Example for **Ubuntu 18.04** and higher:
41 |
42 | ```bash
43 | vi /etc/netplan/00-installer-config.yaml
44 | ```
45 |
46 | > Content (example for worker-0, **don't specify the POD CIDR associated with the current node**):
47 |
48 | ```bash
49 | # This is the network config written by 'subiquity'
50 | network:
51 |   ethernets:
52 |     ens18:
53 |       addresses:
54 |       - 192.168.8.20/24
55 |       gateway4: 192.168.8.1
56 |       nameservers:
57 |         addresses:
58 |         - 9.9.9.9
59 |       routes:
60 |       - to: 10.200.1.0/24
61 |         via: 192.168.8.21
62 |       - to: 10.200.2.0/24
63 |         via: 192.168.8.22
64 |   version: 2
65 | ```
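After editing the file, apply the new configuration (assuming netplan is in use, as on Ubuntu 18.04 and higher):

```bash
sudo netplan apply
```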
66 |
67 | Next: [Deploying the DNS Cluster Add-on](12-dns-addon.md)
68 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes The Hard Way - Proxmox (KVM)
2 |
3 | This tutorial walks you through setting up Kubernetes the hard way. This guide is not for people looking for a fully automated command to bring up a Kubernetes cluster. If that's you then check out [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine), or the [Getting Started Guides](https://kubernetes.io/docs/setup).
4 |
5 | Kubernetes The Hard Way is optimized for learning, which means taking the long route to ensure you understand each task required to bootstrap a Kubernetes cluster.
6 |
7 | > The results of this tutorial should not be viewed as production ready, and may receive limited support from the community, but don't let that stop you from learning!
8 |
9 | ## Overview of the Network Architecture
10 |
11 | 
12 |
13 | ## Copyright
14 |
15 | This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
16 |
17 | ## Target Audience
18 |
19 | The target audience for this tutorial is someone planning to support a production Kubernetes cluster who wants to understand how everything fits together, in particular when running on a Proxmox hypervisor (or KVM).
20 |
21 | ## Cluster Details
22 |
23 | Kubernetes The Hard Way guides you through bootstrapping a highly available Kubernetes cluster with end-to-end encryption between components and RBAC authentication.
24 |
25 | * [kubernetes](https://github.com/kubernetes/kubernetes) v1.29.1
26 | * [containerd](https://github.com/containerd/containerd) v1.7.13
27 | * [coredns](https://github.com/coredns/coredns) v1.11.1
28 | * [cni-plugins](https://github.com/containernetworking/plugins) v1.4.0
29 | * [etcd](https://github.com/etcd-io/etcd) v3.5.12
30 |
31 | ## Labs
32 |
33 | This tutorial assumes you have access to a Proxmox hypervisor with at least 25GB of free RAM and 140GB of free HDD/SSD. While a Proxmox server is used for basic infrastructure requirements, the lessons learned in this tutorial can be applied to other platforms (ESXi, KVM, VirtualBox, ...).
34 |
35 | * [Prerequisites](docs/01-prerequisites.md)
36 | * [Installing the Client Tools](docs/02-client-tools.md)
37 | * [Provisioning Compute Resources](docs/03-compute-resources.md)
38 | * [Provisioning the CA and Generating TLS Certificates](docs/04-certificate-authority.md)
39 | * [Generating Kubernetes Configuration Files for Authentication](docs/05-kubernetes-configuration-files.md)
40 | * [Generating the Data Encryption Config and Key](docs/06-data-encryption-keys.md)
41 | * [Bootstrapping the etcd Cluster](docs/07-bootstrapping-etcd.md)
42 | * [Bootstrapping the Kubernetes Control Plane](docs/08-bootstrapping-kubernetes-controllers.md)
43 | * [Bootstrapping the Kubernetes Worker Nodes](docs/09-bootstrapping-kubernetes-workers.md)
44 | * [Configuring kubectl for Remote Access](docs/10-configuring-kubectl.md)
45 | * [Provisioning Pod Network Routes](docs/11-pod-network-routes.md)
46 | * [Deploying the DNS Cluster Add-on](docs/12-dns-addon.md)
47 | * [Smoke Test](docs/13-smoke-test.md)
48 | * [Cleaning Up](docs/14-cleanup.md)
49 |
--------------------------------------------------------------------------------
/docs/10-configuring-kubectl.md:
--------------------------------------------------------------------------------
1 | # Configuring kubectl for Remote Access
2 |
3 | In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials.
4 |
5 | > Run the commands in this lab from the same directory used to generate the admin client certificates.
6 |
7 | ## The Admin Kubernetes Configuration File
8 |
9 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
10 |
11 | Generate a kubeconfig file suitable for authenticating as the `admin` user (replace PUBLIC_IP_ADDRESS with your public IP address on the `gateway-01` VM):
12 |
13 | ```bash
14 | KUBERNETES_PUBLIC_ADDRESS=PUBLIC_IP_ADDRESS
15 |
16 | kubectl config set-cluster kubernetes-the-hard-way \
17 | --certificate-authority=ca.pem \
18 | --embed-certs=true \
19 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
20 |
21 | kubectl config set-credentials admin \
22 | --client-certificate=admin.pem \
23 | --client-key=admin-key.pem
24 |
25 | kubectl config set-context kubernetes-the-hard-way \
26 | --cluster=kubernetes-the-hard-way \
27 | --user=admin
28 |
29 | kubectl config use-context kubernetes-the-hard-way
30 | ```
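Optionally (not part of the original lab), you can confirm which API server the new context points at before querying the cluster:

```bash
kubectl config view --minify --output 'jsonpath={.clusters[0].cluster.server}'
```

> Output (with your own public IP address): `https://PUBLIC_IP_ADDRESS:6443`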
31 |
32 | ## Verification
33 |
34 | Check the health of the remote Kubernetes cluster:
35 |
36 | ```bash
37 | kubectl get componentstatuses
38 | ```
39 |
40 | > Output:
41 |
42 | ```bash
43 | NAME STATUS MESSAGE ERROR
44 | controller-manager Healthy ok
45 | scheduler Healthy ok
46 | etcd-1 Healthy {"health":"true"}
47 | etcd-2 Healthy {"health":"true"}
48 | etcd-0 Healthy {"health":"true"}
49 | ```
50 |
51 | However, component statuses are deprecated since Kubernetes 1.19, so the recommended way to check cluster health is:
52 | ```bash
53 | kubectl get --raw='/readyz?verbose'
54 | ```
55 |
56 | > Output:
57 |
58 | ```bash
59 | [+]ping ok
60 | [+]log ok
61 | [+]etcd ok
62 | [+]etcd-readiness ok
63 | [+]informer-sync ok
64 | [+]poststarthook/start-kube-apiserver-admission-initializer ok
65 | [+]poststarthook/generic-apiserver-start-informers ok
66 | [+]poststarthook/priority-and-fairness-config-consumer ok
67 | [+]poststarthook/priority-and-fairness-filter ok
68 | [+]poststarthook/storage-object-count-tracker-hook ok
69 | [+]poststarthook/start-apiextensions-informers ok
70 | [+]poststarthook/start-apiextensions-controllers ok
71 | [+]poststarthook/crd-informer-synced ok
72 | [+]poststarthook/start-service-ip-repair-controllers ok
73 | [+]poststarthook/rbac/bootstrap-roles ok
74 | [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
75 | [+]poststarthook/priority-and-fairness-config-producer ok
76 | [+]poststarthook/start-system-namespaces-controller ok
77 | [+]poststarthook/bootstrap-controller ok
78 | [+]poststarthook/start-cluster-authentication-info-controller ok
79 | [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
80 | [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
81 | [+]poststarthook/start-legacy-token-tracking-controller ok
82 | [+]poststarthook/start-kube-aggregator-informers ok
83 | [+]poststarthook/apiservice-registration-controller ok
84 | [+]poststarthook/apiservice-status-available-controller ok
85 | [+]poststarthook/kube-apiserver-autoregistration ok
86 | [+]autoregister-completion ok
87 | [+]poststarthook/apiservice-openapi-controller ok
88 | [+]poststarthook/apiservice-openapiv3-controller ok
89 | [+]poststarthook/apiservice-discovery-controller ok
90 | [+]shutdown ok
91 | readyz check passed
92 | ```
93 |
94 | List the nodes in the remote Kubernetes cluster:
95 |
96 | ```bash
97 | kubectl get nodes
98 | ```
99 |
100 | > Output:
101 |
102 | ```bash
103 | NAME       STATUS   ROLES    AGE   VERSION
104 | worker-0   Ready    <none>   90s   v1.29.1
105 | worker-1   Ready    <none>   91s   v1.29.1
106 | worker-2   Ready    <none>   90s   v1.29.1
107 | ```
108 |
109 | Next: [Provisioning Pod Network Routes](11-pod-network-routes.md)
110 |
--------------------------------------------------------------------------------
/docs/07-bootstrapping-etcd.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the etcd Cluster
2 |
3 | Kubernetes components are stateless and store cluster state in [etcd](https://github.com/etcd-io/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access.
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Login to each controller instance using the `ssh` command. Example for `controller-0`:
8 |
9 | ```bash
10 | ssh root@controller-0
11 | ```
12 |
13 | ### Running commands in parallel with tmux
14 |
15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
16 |
17 | ## Bootstrapping an etcd Cluster Member
18 |
19 | ### Download and Install the etcd Binaries
20 |
21 | Download the official etcd release binaries from the [etcd](https://github.com/etcd-io/etcd) GitHub project:
22 |
23 | ```bash
24 | wget -q --show-progress --https-only --timestamping \
25 | "https://github.com/etcd-io/etcd/releases/download/v3.5.12/etcd-v3.5.12-linux-amd64.tar.gz"
26 | ```
27 |
28 | Extract and install the `etcd` server and the `etcdctl` command line utility:
29 |
30 | ```bash
31 | tar -xvf etcd-v3.5.12-linux-amd64.tar.gz
32 | sudo mv etcd-v3.5.12-linux-amd64/etcd* /usr/local/bin/
33 | ```
34 |
35 | ### Configure the etcd Server
36 |
37 | ```bash
38 | sudo mkdir -p /etc/etcd /var/lib/etcd
39 | sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
40 | ```
41 |
42 | The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Define the INTERNAL_IP variable (replace MY_NODE_INTERNAL_IP with the internal IP address of the current node):
43 |
44 | ```bash
45 | INTERNAL_IP=MY_NODE_INTERNAL_IP
46 | ```
47 |
48 | > Example for controller-0 : 192.168.8.10
49 |
50 | Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance:
51 |
52 | ```bash
53 | ETCD_NAME=$(hostname -s)
54 | ```
55 |
56 | Create the `etcd.service` systemd unit file:
57 |
58 | ```bash
59 | cat <<EOF | sudo tee /etc/systemd/system/etcd.service
60 | [Unit]
61 | Description=etcd
62 | Documentation=https://github.com/etcd-io/etcd
63 |
64 | [Service]
65 | Type=notify
66 | ExecStart=/usr/local/bin/etcd \\
67 |   --name ${ETCD_NAME} \\
68 |   --cert-file=/etc/etcd/kubernetes.pem \\
69 |   --key-file=/etc/etcd/kubernetes-key.pem \\
70 |   --peer-cert-file=/etc/etcd/kubernetes.pem \\
71 |   --peer-key-file=/etc/etcd/kubernetes-key.pem \\
72 |   --trusted-ca-file=/etc/etcd/ca.pem \\
73 |   --peer-trusted-ca-file=/etc/etcd/ca.pem \\
74 |   --peer-client-cert-auth \\
75 |   --client-cert-auth \\
76 |   --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
77 |   --listen-peer-urls https://${INTERNAL_IP}:2380 \\
78 |   --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
79 |   --advertise-client-urls https://${INTERNAL_IP}:2379 \\
80 |   --initial-cluster-token etcd-cluster-0 \\
81 |   --initial-cluster controller-0=https://192.168.8.10:2380,controller-1=https://192.168.8.11:2380,controller-2=https://192.168.8.12:2380 \\
82 |   --initial-cluster-state new \\
83 |   --data-dir=/var/lib/etcd
84 | Restart=on-failure
85 | RestartSec=5
86 |
87 | [Install]
88 | WantedBy=multi-user.target
89 | EOF
90 | ```
91 |
92 | ### Start the etcd Server
93 |
94 | ```bash
95 | sudo systemctl daemon-reload
96 | sudo systemctl enable etcd
97 | sudo systemctl start etcd
98 | ```
99 |
100 | > Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
101 |
102 | ## Verification
103 |
104 | List the etcd cluster members:
105 |
106 | ```bash
107 | sudo ETCDCTL_API=3 etcdctl member list \
108 | --endpoints=https://127.0.0.1:2379 \
109 | --cacert=/etc/etcd/ca.pem \
110 | --cert=/etc/etcd/kubernetes.pem \
111 | --key=/etc/etcd/kubernetes-key.pem
112 | ```
113 |
114 | > Output:
115 |
116 | ```bash
117 | 3a57933972cb5131, started, controller-2, https://192.168.8.12:2380, https://192.168.8.12:2379
118 | f98dc20bce6225a0, started, controller-0, https://192.168.8.10:2380, https://192.168.8.10:2379
119 | ffed16798470cab5, started, controller-1, https://192.168.8.11:2380, https://192.168.8.11:2379
120 | ```
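Optionally (not part of the original lab), you can also query the health of every member over the internal network:

```bash
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --endpoints=https://192.168.8.10:2379,https://192.168.8.11:2379,https://192.168.8.12:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem
```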
121 |
122 | Next: [Bootstrapping the Kubernetes Control Plane](08-bootstrapping-kubernetes-controllers.md)
123 |
--------------------------------------------------------------------------------
/deployments/coredns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: coredns
5 | namespace: kube-system
6 | ---
7 | apiVersion: rbac.authorization.k8s.io/v1
8 | kind: ClusterRole
9 | metadata:
10 | labels:
11 | kubernetes.io/bootstrapping: rbac-defaults
12 | name: system:coredns
13 | rules:
14 | - apiGroups:
15 | - ""
16 | resources:
17 | - endpoints
18 | - services
19 | - pods
20 | - namespaces
21 | verbs:
22 | - list
23 | - watch
24 | - apiGroups:
25 | - ""
26 | resources:
27 | - nodes
28 | verbs:
29 | - get
30 | - apiGroups:
31 | - discovery.k8s.io
32 | resources:
33 | - endpointslices
34 | verbs:
35 | - list
36 | - watch
37 | ---
38 | apiVersion: rbac.authorization.k8s.io/v1
39 | kind: ClusterRoleBinding
40 | metadata:
41 | annotations:
42 | rbac.authorization.kubernetes.io/autoupdate: "true"
43 | labels:
44 | kubernetes.io/bootstrapping: rbac-defaults
45 | name: system:coredns
46 | roleRef:
47 | apiGroup: rbac.authorization.k8s.io
48 | kind: ClusterRole
49 | name: system:coredns
50 | subjects:
51 | - kind: ServiceAccount
52 | name: coredns
53 | namespace: kube-system
54 | ---
55 | apiVersion: v1
56 | kind: ConfigMap
57 | metadata:
58 | name: coredns
59 | namespace: kube-system
60 | data:
61 | Corefile: |
62 | .:53 {
63 | errors
64 | health
65 | ready
66 | kubernetes cluster.local in-addr.arpa ip6.arpa {
67 | pods insecure
68 | fallthrough in-addr.arpa ip6.arpa
69 | }
70 | prometheus :9153
71 | forward . /etc/resolv.conf
72 | cache 30
73 | loop
74 | reload
75 | loadbalance
76 | }
77 | ---
78 | apiVersion: apps/v1
79 | kind: Deployment
80 | metadata:
81 | name: coredns
82 | namespace: kube-system
83 | labels:
84 | k8s-app: kube-dns
85 | kubernetes.io/name: "CoreDNS"
86 | spec:
87 | replicas: 2
88 | strategy:
89 | type: RollingUpdate
90 | rollingUpdate:
91 | maxUnavailable: 1
92 | selector:
93 | matchLabels:
94 | k8s-app: kube-dns
95 | template:
96 | metadata:
97 | labels:
98 | k8s-app: kube-dns
99 | spec:
100 | priorityClassName: system-cluster-critical
101 | serviceAccountName: coredns
102 | tolerations:
103 | - key: "CriticalAddonsOnly"
104 | operator: "Exists"
105 | nodeSelector:
106 | kubernetes.io/os: linux
107 | containers:
108 | - name: coredns
109 | image: coredns/coredns:1.11.1
110 | imagePullPolicy: IfNotPresent
111 | resources:
112 | limits:
113 | memory: 170Mi
114 | requests:
115 | cpu: 100m
116 | memory: 70Mi
117 | args: [ "-conf", "/etc/coredns/Corefile" ]
118 | volumeMounts:
119 | - name: config-volume
120 | mountPath: /etc/coredns
121 | readOnly: true
122 | ports:
123 | - containerPort: 53
124 | name: dns
125 | protocol: UDP
126 | - containerPort: 53
127 | name: dns-tcp
128 | protocol: TCP
129 | - containerPort: 9153
130 | name: metrics
131 | protocol: TCP
132 | securityContext:
133 | allowPrivilegeEscalation: false
134 | capabilities:
135 | add:
136 | - NET_BIND_SERVICE
137 | drop:
138 | - all
139 | readOnlyRootFilesystem: true
140 | livenessProbe:
141 | httpGet:
142 | path: /health
143 | port: 8080
144 | scheme: HTTP
145 | initialDelaySeconds: 60
146 | timeoutSeconds: 5
147 | successThreshold: 1
148 | failureThreshold: 5
149 | readinessProbe:
150 | httpGet:
151 | path: /ready
152 | port: 8181
153 | scheme: HTTP
154 | dnsPolicy: Default
155 | volumes:
156 | - name: config-volume
157 | configMap:
158 | name: coredns
159 | items:
160 | - key: Corefile
161 | path: Corefile
162 | ---
163 | apiVersion: v1
164 | kind: Service
165 | metadata:
166 | name: kube-dns
167 | namespace: kube-system
168 | annotations:
169 | prometheus.io/port: "9153"
170 | prometheus.io/scrape: "true"
171 | labels:
172 | k8s-app: kube-dns
173 | kubernetes.io/cluster-service: "true"
174 | kubernetes.io/name: "CoreDNS"
175 | spec:
176 | selector:
177 | k8s-app: kube-dns
178 | clusterIP: 10.32.0.10
179 | ports:
180 | - name: dns
181 | port: 53
182 | protocol: UDP
183 | - name: dns-tcp
184 | port: 53
185 | protocol: TCP
186 | - name: metrics
187 | port: 9153
188 | protocol: TCP
189 |
--------------------------------------------------------------------------------
/deployments/kube-dns.yaml:
--------------------------------------------------------------------------------
1 | # Copyright 2016 The Kubernetes Authors.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | apiVersion: v1
16 | kind: Service
17 | metadata:
18 | name: kube-dns
19 | namespace: kube-system
20 | labels:
21 | k8s-app: kube-dns
22 | kubernetes.io/cluster-service: "true"
23 | addonmanager.kubernetes.io/mode: Reconcile
24 | kubernetes.io/name: "KubeDNS"
25 | spec:
26 | selector:
27 | k8s-app: kube-dns
28 | clusterIP: 10.32.0.10
29 | ports:
30 | - name: dns
31 | port: 53
32 | protocol: UDP
33 | - name: dns-tcp
34 | port: 53
35 | protocol: TCP
36 | ---
37 | apiVersion: v1
38 | kind: ServiceAccount
39 | metadata:
40 | name: kube-dns
41 | namespace: kube-system
42 | labels:
43 | kubernetes.io/cluster-service: "true"
44 | addonmanager.kubernetes.io/mode: Reconcile
45 | ---
46 | apiVersion: v1
47 | kind: ConfigMap
48 | metadata:
49 | name: kube-dns
50 | namespace: kube-system
51 | labels:
52 | addonmanager.kubernetes.io/mode: EnsureExists
53 | ---
54 | apiVersion: apps/v1
55 | kind: Deployment
56 | metadata:
57 | name: kube-dns
58 | namespace: kube-system
59 | labels:
60 | k8s-app: kube-dns
61 | kubernetes.io/cluster-service: "true"
62 | addonmanager.kubernetes.io/mode: Reconcile
63 | spec:
64 | # replicas: not specified here:
65 | # 1. In order to make Addon Manager do not reconcile this replicas parameter.
66 | # 2. Default is 1.
67 | # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
68 | strategy:
69 | rollingUpdate:
70 | maxSurge: 10%
71 | maxUnavailable: 0
72 | selector:
73 | matchLabels:
74 | k8s-app: kube-dns
75 | template:
76 | metadata:
77 | labels:
78 | k8s-app: kube-dns
79 | annotations:
80 | scheduler.alpha.kubernetes.io/critical-pod: ''
81 | spec:
82 | tolerations:
83 | - key: "CriticalAddonsOnly"
84 | operator: "Exists"
85 | volumes:
86 | - name: kube-dns-config
87 | configMap:
88 | name: kube-dns
89 | optional: true
90 | containers:
91 | - name: kubedns
92 | image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
93 | resources:
94 | # TODO: Set memory limits when we've profiled the container for large
95 | # clusters, then set request = limit to keep this container in
96 | # guaranteed class. Currently, this container falls into the
97 | # "burstable" category so the kubelet doesn't backoff from restarting it.
98 | limits:
99 | memory: 170Mi
100 | requests:
101 | cpu: 100m
102 | memory: 70Mi
103 | livenessProbe:
104 | httpGet:
105 | path: /healthcheck/kubedns
106 | port: 10054
107 | scheme: HTTP
108 | initialDelaySeconds: 60
109 | timeoutSeconds: 5
110 | successThreshold: 1
111 | failureThreshold: 5
112 | readinessProbe:
113 | httpGet:
114 | path: /readiness
115 | port: 8081
116 | scheme: HTTP
117 | # we poll on pod startup for the Kubernetes master service and
118 | # only setup the /readiness HTTP server once that's available.
119 | initialDelaySeconds: 3
120 | timeoutSeconds: 5
121 | args:
122 | - --domain=cluster.local.
123 | - --dns-port=10053
124 | - --config-dir=/kube-dns-config
125 | - --v=2
126 | env:
127 | - name: PROMETHEUS_PORT
128 | value: "10055"
129 | ports:
130 | - containerPort: 10053
131 | name: dns-local
132 | protocol: UDP
133 | - containerPort: 10053
134 | name: dns-tcp-local
135 | protocol: TCP
136 | - containerPort: 10055
137 | name: metrics
138 | protocol: TCP
139 | volumeMounts:
140 | - name: kube-dns-config
141 | mountPath: /kube-dns-config
142 | - name: dnsmasq
143 | image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
144 | livenessProbe:
145 | httpGet:
146 | path: /healthcheck/dnsmasq
147 | port: 10054
148 | scheme: HTTP
149 | initialDelaySeconds: 60
150 | timeoutSeconds: 5
151 | successThreshold: 1
152 | failureThreshold: 5
153 | args:
154 | - -v=2
155 | - -logtostderr
156 | - -configDir=/etc/k8s/dns/dnsmasq-nanny
157 | - -restartDnsmasq=true
158 | - --
159 | - -k
160 | - --cache-size=1000
161 | - --no-negcache
162 | - --log-facility=-
163 | - --server=/cluster.local/127.0.0.1#10053
164 | - --server=/in-addr.arpa/127.0.0.1#10053
165 | - --server=/ip6.arpa/127.0.0.1#10053
166 | ports:
167 | - containerPort: 53
168 | name: dns
169 | protocol: UDP
170 | - containerPort: 53
171 | name: dns-tcp
172 | protocol: TCP
173 | # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
174 | resources:
175 | requests:
176 | cpu: 150m
177 | memory: 20Mi
178 | volumeMounts:
179 | - name: kube-dns-config
180 | mountPath: /etc/k8s/dns/dnsmasq-nanny
181 | - name: sidecar
182 | image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
183 | livenessProbe:
184 | httpGet:
185 | path: /metrics
186 | port: 10054
187 | scheme: HTTP
188 | initialDelaySeconds: 60
189 | timeoutSeconds: 5
190 | successThreshold: 1
191 | failureThreshold: 5
192 | args:
193 | - --v=2
194 | - --logtostderr
195 | - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
196 | - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
197 | ports:
198 | - containerPort: 10054
199 | name: metrics
200 | protocol: TCP
201 | resources:
202 | requests:
203 | memory: 20Mi
204 | cpu: 10m
205 | dnsPolicy: Default # Don't use cluster DNS.
206 | serviceAccountName: kube-dns
207 |
--------------------------------------------------------------------------------
/docs/13-smoke-test.md:
--------------------------------------------------------------------------------
1 | # Smoke Test
2 |
3 | In this lab you will complete a series of tasks to ensure your Kubernetes cluster is functioning correctly.
4 |
5 | ## Data Encryption
6 |
7 | In this section you will verify the ability to [encrypt secret data at rest](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#verifying-that-data-is-encrypted).
8 |
9 | Create a generic secret:
10 |
11 | ```bash
12 | kubectl create secret generic kubernetes-the-hard-way \
13 | --from-literal="mykey=mydata"
14 | ```
15 |
16 | Print a hexdump of the `kubernetes-the-hard-way` secret stored in etcd:
17 |
18 | ```bash
19 | ssh root@controller-0 \
20 | sudo ETCDCTL_API=3 etcdctl get \
21 | --endpoints=https://127.0.0.1:2379 \
22 | --cacert=/etc/etcd/ca.pem \
23 | --cert=/etc/etcd/kubernetes.pem \
24 | --key=/etc/etcd/kubernetes-key.pem \
25 | /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
26 | ```
27 |
28 | > Output:
29 |
30 | ```bash
31 | 00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
32 | 00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
33 | 00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
34 | 00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
35 | 00000040 3a 76 31 3a 6b 65 79 31 3a 44 ac 6e ac 11 2f 28 |:v1:key1:D.n../(|
36 | 00000050 02 46 3d ad 9d cd 68 be e4 cc 63 ae 13 e4 99 e8 |.F=...h...c.....|
37 | 00000060 6e 55 a0 fd 9d 33 7a b1 17 6b 20 19 23 dc 3e 67 |nU...3z..k .#.>g|
38 | 00000070 c9 6c 47 fa 78 8b 4d 28 cd d1 71 25 e9 29 ec 88 |.lG.x.M(..q%.)..|
39 | 00000080 7f c9 76 b6 31 63 6e ea ac c5 e4 2f 32 d7 a6 94 |..v.1cn..../2...|
40 | 00000090 3c 3d 97 29 40 5a ee e1 ef d6 b2 17 01 75 a4 a3 |<=.)@Z.......u..|
41 | 000000a0 e2 c2 70 5b 77 1a 0b ec 71 c3 87 7a 1f 68 73 03 |..p[w...q..z.hs.|
42 | 000000b0 67 70 5e ba 5e 65 ff 6f 0c 40 5a f9 2a bd d6 0e |gp^.^e.o.@Z.*...|
43 | 000000c0 44 8d 62 21 1a 30 4f 43 b8 03 69 52 c0 b7 2e 16 |D.b!.0OC..iR....|
44 | 000000d0 14 a5 91 21 29 fa 6e 03 47 e2 06 25 45 7c 4f 8f |...!).n.G..%E|O.|
45 | 000000e0 6e bb 9d 3b e9 e5 2d 9e 3e 0a |n..;..-.>.|
46 | ```
47 |
48 | The etcd key should be prefixed with `k8s:enc:aescbc:v1:key1`, which indicates the `aescbc` provider was used to encrypt the data with the `key1` encryption key.
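Optionally, you can confirm that the encryption is transparent to API clients by reading the secret back through the API server:

```bash
kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 --decode
```

> Output: `mydata`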
49 |
50 | ## Deployments
51 |
52 | In this section you will verify the ability to create and manage [Deployments](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
53 |
54 | Create a deployment for the [nginx](https://nginx.org/en/) web server:
55 |
56 | ```bash
57 | kubectl create deployment nginx --image=nginx
58 | ```
59 |
60 | List the pod created by the `nginx` deployment:
61 |
62 | ```bash
63 | kubectl get pods -l app=nginx
64 | ```
65 |
66 | > Output (you may need to wait a few seconds to see the pod "READY"):
67 |
68 | ```bash
69 | NAME READY STATUS RESTARTS AGE
70 | nginx-554b9c67f9-vt5rn 1/1 Running 0 10s
71 | ```
72 |
73 | ### Port Forwarding
74 |
75 | In this section you will verify the ability to access applications remotely using [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
76 |
77 | Retrieve the full name of the `nginx` pod:
78 |
79 | ```bash
80 | POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
81 | ```
82 |
83 | Forward port `8080` on your local machine to port `80` of the `nginx` pod:
84 |
85 | ```bash
86 | kubectl port-forward $POD_NAME 8080:80
87 | ```
88 |
89 | > Output:
90 |
91 | ```bash
92 | Forwarding from 127.0.0.1:8080 -> 80
93 | Forwarding from [::1]:8080 -> 80
94 | ```
95 |
96 | In a new terminal make an HTTP request using the forwarding address:
97 |
98 | ```bash
99 | curl --head http://127.0.0.1:8080
100 | ```
101 |
102 | > Output:
103 |
104 | ```bash
105 | HTTP/1.1 200 OK
106 | Server: nginx/1.19.0
107 | Date: Wed, 24 Jun 2020 12:55:15 GMT
108 | Content-Type: text/html
109 | Content-Length: 612
110 | Last-Modified: Tue, 26 May 2020 15:00:20 GMT
111 | Connection: keep-alive
112 | ETag: "5ecd2f04-264"
113 | Accept-Ranges: bytes
114 | ```
115 |
116 | Switch back to the previous terminal and stop the port forwarding to the `nginx` pod:
117 |
118 | ```bash
119 | Forwarding from 127.0.0.1:8080 -> 80
120 | Forwarding from [::1]:8080 -> 80
121 | Handling connection for 8080
122 | ^C
123 | ```
124 |
125 | ### Logs
126 |
127 | In this section you will verify the ability to [retrieve container logs](https://kubernetes.io/docs/concepts/cluster-administration/logging/).
128 |
129 | Print the `nginx` pod logs:
130 |
131 | ```bash
132 | kubectl logs $POD_NAME
133 | ```
134 |
135 | > Output:
136 |
137 | ```bash
138 | 127.0.0.1 - - [24/Jun/2020:12:55:15 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.64.0" "-"
139 | ```
140 |
141 | ### Exec
142 |
143 | In this section you will verify the ability to [execute commands in a container](https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/#running-individual-commands-in-a-container).
144 |
145 | Print the nginx version by executing the `nginx -v` command in the `nginx` container:
146 |
147 | ```bash
148 | kubectl exec -ti $POD_NAME -- nginx -v
149 | ```
150 |
151 | > Output:
152 |
153 | ```bash
154 | nginx version: nginx/1.19.0
155 | ```
156 |
157 | ## Services
158 |
159 | In this section you will verify the ability to expose applications using a [Service](https://kubernetes.io/docs/concepts/services-networking/service/).
160 |
161 | Expose the `nginx` deployment using a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) service:
162 |
163 | ```bash
164 | kubectl expose deployment nginx --port 80 --type NodePort
165 | ```
166 |
167 | > The LoadBalancer service type can not be used because your cluster is not configured with a cloud provider / load balancer integration, which is out of scope for this tutorial.
168 |
169 | Retrieve the node port assigned to the `nginx` service:
170 |
171 | ```bash
172 | NODE_PORT=$(kubectl get svc nginx \
173 | --output=jsonpath='{range .spec.ports[0]}{.nodePort}')
174 | ```
175 |
176 | Define the Kubernetes network IP address of a worker instance (replace MY_WORKER_IP with the private IP defined on a worker):
177 |
178 | ```bash
179 | NODE_IP=MY_WORKER_IP
180 | ```
181 |
182 | > Example for worker-0: 192.168.8.20
183 |
184 | Make an HTTP request using the external IP address and the `nginx` node port:
185 |
186 | ```bash
187 | curl -I http://${NODE_IP}:${NODE_PORT}
188 | ```
189 |
190 | > Output:
191 |
192 | ```bash
193 | HTTP/1.1 200 OK
194 | Server: nginx/1.19.0
195 | Date: Wed, 24 Jun 2020 12:57:37 GMT
196 | Content-Type: text/html
197 | Content-Length: 612
198 | Last-Modified: Tue, 26 May 2020 15:00:20 GMT
199 | Connection: keep-alive
200 | ETag: "5ecd2f04-264"
201 | Accept-Ranges: bytes
202 | ```
203 |
204 | Next: [Cleaning Up](14-cleanup.md)
205 |
--------------------------------------------------------------------------------
/docs/05-kubernetes-configuration-files.md:
--------------------------------------------------------------------------------
1 | # Generating Kubernetes Configuration Files for Authentication
2 |
3 | In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers.
4 |
5 | ## Client Authentication Configs
6 |
7 | In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user.
8 |
9 | ### Kubernetes Public IP Address
10 |
11 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used.
12 |
13 | Define the static public IP address (replace PUBLIC_IP_ADDRESS with your external IP address, the `ens18` IP address on the diagram):
14 |
15 | ```bash
16 | KUBERNETES_PUBLIC_ADDRESS=PUBLIC_IP_ADDRESS
17 | ```
18 |
19 | ### The kubelet Kubernetes Configuration File
20 |
21 | When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/).
22 |
23 | > The following commands must be run in the same directory used to generate the SSL certificates during the [Generating TLS Certificates](04-certificate-authority.md) lab.
24 |
25 | Generate a kubeconfig file for each worker node:
26 |
27 | ```bash
28 | for instance in worker-0 worker-1 worker-2; do
29 | kubectl config set-cluster kubernetes-the-hard-way \
30 | --certificate-authority=ca.pem \
31 | --embed-certs=true \
32 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
33 | --kubeconfig=${instance}.kubeconfig
34 |
35 | kubectl config set-credentials system:node:${instance} \
36 | --client-certificate=${instance}.pem \
37 | --client-key=${instance}-key.pem \
38 | --embed-certs=true \
39 | --kubeconfig=${instance}.kubeconfig
40 |
41 | kubectl config set-context default \
42 | --cluster=kubernetes-the-hard-way \
43 | --user=system:node:${instance} \
44 | --kubeconfig=${instance}.kubeconfig
45 |
46 | kubectl config use-context default --kubeconfig=${instance}.kubeconfig
47 | done
48 | ```
49 |
50 | Results:
51 |
52 | ```bash
53 | worker-0.kubeconfig
54 | worker-1.kubeconfig
55 | worker-2.kubeconfig
56 | ```
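If you want to inspect one of the generated files (optional), `kubectl config view` shows the cluster, user and context it contains (embedded certificates are redacted):

```bash
kubectl config view --kubeconfig=worker-0.kubeconfig
```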
57 |
58 | ### The kube-proxy Kubernetes Configuration File
59 |
60 | Generate a kubeconfig file for the `kube-proxy` service:
61 |
62 | ```bash
63 | kubectl config set-cluster kubernetes-the-hard-way \
64 | --certificate-authority=ca.pem \
65 | --embed-certs=true \
66 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \
67 | --kubeconfig=kube-proxy.kubeconfig
68 |
69 | kubectl config set-credentials system:kube-proxy \
70 | --client-certificate=kube-proxy.pem \
71 | --client-key=kube-proxy-key.pem \
72 | --embed-certs=true \
73 | --kubeconfig=kube-proxy.kubeconfig
74 |
75 | kubectl config set-context default \
76 | --cluster=kubernetes-the-hard-way \
77 | --user=system:kube-proxy \
78 | --kubeconfig=kube-proxy.kubeconfig
79 |
80 | kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
81 | ```
82 |
83 | Results:
84 |
85 | ```bash
86 | kube-proxy.kubeconfig
87 | ```
88 |
89 | ### The kube-controller-manager Kubernetes Configuration File
90 |
91 | Generate a kubeconfig file for the `kube-controller-manager` service:
92 |
93 | ```bash
94 | kubectl config set-cluster kubernetes-the-hard-way \
95 | --certificate-authority=ca.pem \
96 | --embed-certs=true \
97 | --server=https://127.0.0.1:6443 \
98 | --kubeconfig=kube-controller-manager.kubeconfig
99 |
100 | kubectl config set-credentials system:kube-controller-manager \
101 | --client-certificate=kube-controller-manager.pem \
102 | --client-key=kube-controller-manager-key.pem \
103 | --embed-certs=true \
104 | --kubeconfig=kube-controller-manager.kubeconfig
105 |
106 | kubectl config set-context default \
107 | --cluster=kubernetes-the-hard-way \
108 | --user=system:kube-controller-manager \
109 | --kubeconfig=kube-controller-manager.kubeconfig
110 |
111 | kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
112 | ```
113 |
114 | Results:
115 |
116 | ```bash
117 | kube-controller-manager.kubeconfig
118 | ```
119 |
120 | ### The kube-scheduler Kubernetes Configuration File
121 |
122 | Generate a kubeconfig file for the `kube-scheduler` service:
123 |
124 | ```bash
125 | kubectl config set-cluster kubernetes-the-hard-way \
126 | --certificate-authority=ca.pem \
127 | --embed-certs=true \
128 | --server=https://127.0.0.1:6443 \
129 | --kubeconfig=kube-scheduler.kubeconfig
130 |
131 | kubectl config set-credentials system:kube-scheduler \
132 | --client-certificate=kube-scheduler.pem \
133 | --client-key=kube-scheduler-key.pem \
134 | --embed-certs=true \
135 | --kubeconfig=kube-scheduler.kubeconfig
136 |
137 | kubectl config set-context default \
138 | --cluster=kubernetes-the-hard-way \
139 | --user=system:kube-scheduler \
140 | --kubeconfig=kube-scheduler.kubeconfig
141 |
142 | kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
143 | ```
144 |
145 | Results:
146 |
147 | ```bash
148 | kube-scheduler.kubeconfig
149 | ```
150 |
151 | ### The admin Kubernetes Configuration File
152 |
153 | Generate a kubeconfig file for the `admin` user:
154 |
155 | ```bash
156 | kubectl config set-cluster kubernetes-the-hard-way \
157 | --certificate-authority=ca.pem \
158 | --embed-certs=true \
159 | --server=https://127.0.0.1:6443 \
160 | --kubeconfig=admin.kubeconfig
161 |
162 | kubectl config set-credentials admin \
163 | --client-certificate=admin.pem \
164 | --client-key=admin-key.pem \
165 | --embed-certs=true \
166 | --kubeconfig=admin.kubeconfig
167 |
168 | kubectl config set-context default \
169 | --cluster=kubernetes-the-hard-way \
170 | --user=admin \
171 | --kubeconfig=admin.kubeconfig
172 |
173 | kubectl config use-context default --kubeconfig=admin.kubeconfig
174 | ```
175 |
176 | Results:
177 |
178 | ```bash
179 | admin.kubeconfig
180 | ```
181 |
182 | ## Distribute the Kubernetes Configuration Files
183 |
184 | Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance:
185 |
186 | ```bash
187 | for instance in worker-0 worker-1 worker-2; do
188 | scp ${instance}.kubeconfig kube-proxy.kubeconfig root@${instance}:~/
189 | done
190 | ```
191 |
192 | Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance:
193 |
194 | ```bash
195 | for instance in controller-0 controller-1 controller-2; do
196 | scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig root@${instance}:~/
197 | done
198 | ```
199 |
200 | Next: [Generating the Data Encryption Config and Key](06-data-encryption-keys.md)
201 |
--------------------------------------------------------------------------------
/docs/03-compute-resources.md:
--------------------------------------------------------------------------------
1 | # Provisioning Compute Resources
2 |
3 | Kubernetes requires a set of machines to host the Kubernetes control plane and the worker nodes where containers are ultimately run. In this lab you will check and, if necessary, adjust the configuration defined in the `01-prerequisites` part.
4 |
5 | ## Networking
6 |
7 | The Kubernetes [networking model](https://kubernetes.io/docs/concepts/cluster-administration/networking/#kubernetes-model) assumes a flat network in which containers and nodes can communicate with each other. In cases where this is not desired [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) can limit how groups of containers are allowed to communicate with each other and external network endpoints.
8 |
9 | > Setting up network policies is out of scope for this tutorial.
10 |
11 | ### Virtual Private Cloud Network
12 |
13 | We provisioned this network in the `01-prerequisites` part: `192.168.8.0/24`, which can host up to `253` Kubernetes nodes (`254` usable addresses minus `1` for the gateway). This is our "VPC-like" network with private IP addresses.
14 |
15 | ### Pods Network Ranges
16 |
17 | Containers/Pods running on each worker need networks to communicate with other resources. We will use the `10.200.0.0/16` private range to create the Pods subnetworks:
18 |
19 | * 10.200.0.0/24 : worker-0
20 | * 10.200.1.0/24 : worker-1
21 | * 10.200.2.0/24 : worker-2
22 |
23 | ### Firewall Rules
24 |
25 | All the flows are allowed inside the Kubernetes private network (`vmbr8`). In the `01-prerequisites` part, the `gateway-01` VM firewall has been configured to use NAT and allow the following INPUT protocols (from external): `icmp`, `tcp/22`, `tcp/80`, `tcp/443` and `tcp/6443`.
26 |
27 | Check the rules on the `gateway-01` VM (example if `ens18` is the public network interface):
28 |
29 | ```bash
30 | root@gateway-01:~# iptables -L INPUT -v -n
31 | Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
32 | pkts bytes target prot opt in out source destination
33 | 2062 905K ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
34 | 150K 21M ACCEPT tcp -- ens18 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
35 | 7259 598K ACCEPT tcp -- ens18 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
36 | 772 32380 ACCEPT tcp -- ens18 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
37 | 772 32380 ACCEPT tcp -- ens18 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:6443
38 | 23318 1673K ACCEPT icmp -- ens18 * 0.0.0.0/0 0.0.0.0/0
39 | 36M 6163M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
40 | 113K 5899K DROP all -- ens18 * 0.0.0.0/0 0.0.0.0/0
41 | ```
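For reference, here is a minimal sketch of how rules equivalent to the ones above could be recreated on `gateway-01` (your actual rules were already configured in the `01-prerequisites` part, so only use this if you need to rebuild them; `ens18` is assumed to be the public interface):

```bash
# Accept loopback, the allowed INPUT ports/protocols and established traffic;
# drop everything else coming in on the public interface
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i ens18 -p tcp --dport 22   -j ACCEPT
iptables -A INPUT -i ens18 -p tcp --dport 80   -j ACCEPT
iptables -A INPUT -i ens18 -p tcp --dport 443  -j ACCEPT
iptables -A INPUT -i ens18 -p tcp --dport 6443 -j ACCEPT
iptables -A INPUT -i ens18 -p icmp -j ACCEPT
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i ens18 -j DROP
```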
42 |
43 | ### Kubernetes Public IP Address
44 |
45 | A public IP address needs to be defined on the public network interface of the `gateway-01` VM (done in the `01-prerequisites` part).
46 |
47 | ### Verification
48 |
49 | On each VM, check the active IP address(es) with the following command:
50 |
51 | ```bash
52 | ip a
53 | ```
54 |
55 | > Output (example with controller-0):
56 |
57 | ```bash
58 | 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
59 | link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
60 | inet 127.0.0.1/8 scope host lo
61 | valid_lft forever preferred_lft forever
62 | inet6 ::1/128 scope host
63 | valid_lft forever preferred_lft forever
64 | 2: ens18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
65 | link/ether e6:27:6e:8c:d6:7b brd ff:ff:ff:ff:ff:ff
66 | inet 192.168.8.10/24 brd 192.168.8.255 scope global ens18
67 | valid_lft forever preferred_lft forever
68 | inet6 fe80::e427:6eff:fe8c:d67b/64 scope link
69 | valid_lft forever preferred_lft forever
70 | ```
71 |
72 | From the `gateway-01` VM, try to ping all the controller and worker VMs:
73 |
74 | ```bash
75 | for i in 0 1 2; do ping -c1 controller-$i; ping -c1 worker-$i; done
76 | ```
77 |
78 | > Output:
79 |
80 | ```bash
81 | PING controller-0 (192.168.8.10) 56(84) bytes of data.
82 | 64 bytes from controller-0 (192.168.8.10): icmp_seq=1 ttl=64 time=0.598 ms
83 |
84 | --- controller-0 ping statistics ---
85 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
86 | rtt min/avg/max/mdev = 0.598/0.598/0.598/0.000 ms
87 | PING worker-0 (192.168.8.20) 56(84) bytes of data.
88 | 64 bytes from worker-0 (192.168.8.20): icmp_seq=1 ttl=64 time=0.474 ms
89 |
90 | --- worker-0 ping statistics ---
91 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
92 | rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms
93 | PING controller-1 (192.168.8.11) 56(84) bytes of data.
94 | 64 bytes from controller-1 (192.168.8.11): icmp_seq=1 ttl=64 time=0.546 ms
95 |
96 | --- controller-1 ping statistics ---
97 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
98 | rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms
99 | PING worker-1 (192.168.8.21) 56(84) bytes of data.
100 | 64 bytes from worker-1 (192.168.8.21): icmp_seq=1 ttl=64 time=1.10 ms
101 |
102 | --- worker-1 ping statistics ---
103 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
104 | rtt min/avg/max/mdev = 1.101/1.101/1.101/0.000 ms
105 | PING controller-2 (192.168.8.12) 56(84) bytes of data.
106 | 64 bytes from controller-2 (192.168.8.12): icmp_seq=1 ttl=64 time=0.483 ms
107 |
108 | --- controller-2 ping statistics ---
109 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
110 | rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms
111 | PING worker-2 (192.168.8.22) 56(84) bytes of data.
112 | 64 bytes from worker-2 (192.168.8.22): icmp_seq=1 ttl=64 time=0.650 ms
113 |
114 | --- worker-2 ping statistics ---
115 | 1 packets transmitted, 1 received, 0% packet loss, time 0ms
116 | rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms
117 | ```
118 |
119 | ## Configuring SSH Access
120 |
121 | SSH will be used to configure the controller and worker instances.
122 |
123 | On the `gateway-01` VM, generate an SSH key pair for your working user:
124 |
125 | ```bash
126 | ssh-keygen
127 | ```
128 |
129 | > Output (example for the user nemo):
130 |
131 | ```bash
132 | Generating public/private rsa key pair.
133 | Enter file in which to save the key (/home/nemo/.ssh/id_rsa):
134 | Created directory '/home/nemo/.ssh'.
135 | Enter passphrase (empty for no passphrase):
136 | Enter same passphrase again:
137 | Your identification has been saved in /home/nemo/.ssh/id_rsa.
138 | Your public key has been saved in /home/nemo/.ssh/id_rsa.pub.
139 | The key fingerprint is:
140 | SHA256:QIhkUeJWxh9lJRwfpJpkYXiuHjgE7icWVjo8dQzh+2Q nemo@gateway-01
141 | The key's randomart image is:
142 | +---[RSA 2048]----+
143 | | .=BBo+o=++ |
144 | |.oo*+=oo.+ . |
145 | |o.*..++.. . |
146 | | X. .oo+ |
147 | |o.+o Eo S |
148 | | +o.* |
149 | |. oo o |
150 | | . |
151 | | |
152 | +----[SHA256]-----+
153 | ```
154 |
155 | Print the public key and copy it:
156 |
157 | ```bash
158 | cat /home/nemo/.ssh/id_rsa.pub
159 | ```
160 |
161 | > Output (example for the user nemo):
162 |
163 | ```bash
164 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDZwdkThm90GKiBPcECnxqPfPIy0jz3KAVxS5i1GcfdOMmj947/iYlKrYVqXmPqHOy1vDRJQHD1KpkADSnXREoUJp6RpugR+qei962udVY+Y/eNV2JZRt/dcTlGwqSwKjjE8a5n84fu4zgJcvIIZYG/vJpN3ock189IuSjSeLSBAPU/UQzTDAcNnHEeHDv7Yo2wxGoDziM7sRGQyFLVHKJKtA28+OZT8DKaE4XY78ovmsMJuMDMF+YLKm12/f79xS0AYw0KXb97TAb9PhFMqqOKknN+mvzbccAih6gJEwB646Ju6VlBRBky7c6ZMsDR9l99uQtlXcv8lwiheYE4nJmF nemo@gateway-01
165 | ```
166 |
167 | On the controller and worker nodes, create the `/root/.ssh` folder and the `/root/.ssh/authorized_keys` file, then paste the previously copied public key into it:
168 |
169 | ```bash
170 | mkdir -p /root/.ssh
171 | vi /root/.ssh/authorized_keys
172 | ```
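Alternatively (assuming root password authentication over SSH is temporarily enabled on the nodes), `ssh-copy-id` can install the key in one step per node:

```bash
for host in controller-0 controller-1 controller-2 worker-0 worker-1 worker-2; do
  ssh-copy-id root@${host}
done
```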
173 |
174 | From the `gateway-01` VM, check that you can connect to the `root` account of all the controllers and workers (example for controller-0):
175 |
176 | ```bash
177 | ssh root@controller-0
178 | ```
179 |
180 | > Output:
181 |
182 | ```bash
183 | Welcome to Ubuntu 18.04.4 LTS (GNU/Linux 4.15.0-101-generic x86_64)
184 |
185 | ...
186 |
187 | Last login: Sat Jun 20 11:03:45 2020 from 192.168.8.1
188 | root@controller-0:~#
189 | ```
190 |
191 | Now, you can logout:
192 |
193 | ```bash
194 | exit
195 | ```
196 |
197 | > Output:
198 |
199 | ```bash
200 | logout
201 | Connection to controller-0 closed.
202 | nemo@gateway-01:~$
203 | ```
204 |
205 | Next: [Provisioning a CA and Generating TLS Certificates](04-certificate-authority.md)
206 |
--------------------------------------------------------------------------------
/docs/09-bootstrapping-kubernetes-workers.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the Kubernetes Worker Nodes
2 |
3 | In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies).
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each worker instance: `worker-0`, `worker-1`, and `worker-2`. Log in to each worker instance using the `ssh` command. Example:
8 |
9 | ```bash
10 | ssh root@worker-0
11 | ```
12 |
13 | ### Running commands in parallel with tmux
14 |
15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
16 |
17 | ## Provisioning a Kubernetes Worker Node
18 |
19 | Install the OS dependencies:
20 |
21 | ```bash
22 | sudo apt-get update
23 | sudo apt-get -y install socat conntrack ipset
24 | ```
25 |
26 | > The socat binary enables support for the `kubectl port-forward` command.
27 |
28 | ### Disable Swap
29 |
30 | By default the kubelet will fail to start if [swap](https://help.ubuntu.com/community/SwapFaq) is enabled. It is [recommended](https://github.com/kubernetes/kubernetes/issues/7294) that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
31 |
32 | Verify if swap is enabled:
33 |
34 | ```bash
35 | sudo swapon --show
36 | ```
37 |
38 | If the output is empty then swap is not enabled. If swap is enabled, run the following command to disable swap immediately:
39 |
40 | ```bash
41 | sudo swapoff -a
42 | ```
43 |
44 | > To ensure swap remains off after reboot, consult your Linux distro documentation. You may need to comment out the swap line in the `/etc/fstab` file (one possible approach is sketched below).
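
> One non-interactive way to do this, assuming a standard `/etc/fstab` layout (a sketch using GNU sed; adapt it to your system):

```bash
# Comment out any fstab entry whose filesystem type is "swap"
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```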
45 |
46 | ### Download and Install Worker Binaries
47 |
48 | ```bash
49 | wget -q --show-progress --https-only --timestamping \
50 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.29.0/crictl-v1.29.0-linux-amd64.tar.gz \
51 | https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64 \
52 | https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz \
53 | https://github.com/containerd/containerd/releases/download/v1.7.13/containerd-1.7.13-linux-amd64.tar.gz \
54 | https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kubectl \
55 | https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kube-proxy \
56 | https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kubelet
57 | ```
58 |
59 | Create the installation directories:
60 |
61 | ```bash
62 | sudo mkdir -p \
63 | /etc/cni/net.d \
64 | /opt/cni/bin \
65 | /var/lib/kubelet \
66 | /var/lib/kube-proxy \
67 | /var/lib/kubernetes \
68 | /var/run/kubernetes
69 | ```
70 |
71 | Install the worker binaries:
72 |
73 | ```bash
74 | mkdir containerd
75 | tar -xvf crictl-v1.29.0-linux-amd64.tar.gz
76 | tar -xvf containerd-1.7.13-linux-amd64.tar.gz -C containerd
77 | sudo tar -xvf cni-plugins-linux-amd64-v1.4.0.tgz -C /opt/cni/bin/
78 | sudo mv runc.amd64 runc
79 | chmod +x crictl kubectl kube-proxy kubelet runc
80 | sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
81 | sudo mv containerd/bin/* /bin/
82 | ```
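
Optionally, confirm the binaries are installed and on the PATH before continuing (a quick check, not part of the original steps):

```bash
runc --version
crictl --version
containerd --version
kubelet --version
kube-proxy --version
```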
83 |
84 | ### Configure CNI Networking
85 |
86 | Define the Pod CIDR range for the current node (it is different for each worker). Replace `THE_POD_CIDR` with the Pod CIDR network for this node (see the network architecture diagram):
87 |
88 | ```bash
89 | POD_CIDR=THE_POD_CIDR
90 | ```
91 |
92 | > Example for worker-0: 10.200.0.0/24
93 |
94 | Create the `bridge` network configuration file:
95 |
96 | ```bash
97 | cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`.
203 |
204 | Create the `kubelet.service` systemd unit file:
205 |
206 | ```bash
207 | cat < Remember to run the above commands on each worker node: `worker-0`, `worker-1`, and `worker-2`.
274 |
275 | ## Verification
276 |
277 | List the registered Kubernetes nodes:
278 |
279 | ```bash
280 | ssh root@controller-0 kubectl get nodes --kubeconfig admin.kubeconfig
281 | ```
282 |
283 | > Output:
284 |
285 | ```bash
286 | NAME       STATUS   ROLES    AGE   VERSION
287 | worker-0   Ready    <none>   15s   v1.29.1
288 | worker-1   Ready    <none>   15s   v1.29.1
289 | worker-2   Ready    <none>   15s   v1.29.1
290 | ```
291 |
292 | > [!NOTE]
293 | > By default kube-proxy uses iptables to set up Service IP handling and load balancing. Unfortunately, bridged container traffic bypasses iptables by default, which breaks this deployment; the workaround is to force Linux to pass bridged traffic through iptables:
294 | >
295 | > Run this on all control and worker nodes.
296 |
297 | ```bash
298 | sudo modprobe br_netfilter
299 | echo "br_netfilter" | sudo tee -a /etc/modules-load.d/modules.conf
300 | sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
301 | ```
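
> To keep this setting across reboots, the sysctl value can also be persisted (a sketch; the file name `99-kubernetes-bridge.conf` is an arbitrary choice):

```bash
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee /etc/sysctl.d/99-kubernetes-bridge.conf
sudo sysctl --system
```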
302 |
303 |
304 | Next: [Configuring kubectl for Remote Access](10-configuring-kubectl.md)
305 |
--------------------------------------------------------------------------------
/docs/01-prerequisites.md:
--------------------------------------------------------------------------------
1 | # Prerequisites
2 |
3 | ## Proxmox Hypervisor
4 |
5 | This tutorial is intended to be performed with a [Proxmox](https://proxmox.com/en/) hypervisor, but you can also follow it with ESXi, KVM, VirtualBox, or another hypervisor.
6 |
7 | > The compute resources required for this tutorial are 25GB of RAM and 140GB of HDD (or SSD) storage.
8 |
9 | List of the VMs used in this tutorial:
10 |
11 | |Name|Role|vCPU|RAM|Storage (thin)|IP|OS|
12 | |--|--|--|--|--|--|--|
13 | |controller-0|controller|2|4GB|20GB|192.168.8.10/24|Ubuntu|
14 | |controller-1|controller|2|4GB|20GB|192.168.8.11/24|Ubuntu|
15 | |controller-2|controller|2|4GB|20GB|192.168.8.12/24|Ubuntu|
16 | |worker-0|worker|2|4GB|20GB|192.168.8.20/24|Ubuntu|
17 | |worker-1|worker|2|4GB|20GB|192.168.8.21/24|Ubuntu|
18 | |worker-2|worker|2|4GB|20GB|192.168.8.22/24|Ubuntu|
19 | |gateway-01|Reverse Proxy, client tools, gateway|1|1GB|20GB|192.168.8.1/24 + PUBLIC IP|Debian|
20 |
21 | On the Proxmox hypervisor, I just added the `k8s-` prefix to the VM names.
22 |
23 | 
24 |
25 | ## Prepare the environment
26 |
27 | ### Hypervisor network
28 |
29 | For this tutorial, you need 2 networks on your Proxmox hypervisor:
30 |
31 | * a public network bridge (`vmbr0` in the following screenshot).
32 | * a private Kubernetes network bridge (`vmbr8` in the following screenshot).
33 |
34 | 
35 |
36 | > Note: the pod networks will be defined later.
37 |
38 | All the Kubernetes nodes (workers and controllers) only need one network interface linked to the private Kubernetes network (`vmbr8`).
39 |
40 | 
41 |
42 | The reverse proxy / client tools / gateway VM needs 2 network interfaces, one linked to the private Kubernetes network (`vmbr8`) and the other linked to the public network (`vmbr0`).
43 |
44 | 
45 |
46 | ### Network architecture
47 |
48 | This diagram represents the network design:
49 |
50 | 
51 |
52 | > If you want, you can define the IPv6 stack configuration.
53 |
54 | ### Gateway VM installation
55 |
56 | > The basic VM installation process is not the purpose of this tutorial.
57 | >
58 | > Because it's just a tutorial, the IPv6 stack is not configured, but you can configure it if you want.
59 |
60 | This VM is used as a NAT gateway for the private Kubernetes network, as a reverse proxy, and as a client tools machine.
61 |
62 | This means that all the client steps, like certificate generation, will be done on this VM (in the next parts of this tutorial).
63 |
64 | You have to:
65 |
66 | * Install the latest [amd64 Debian netinst image](https://www.debian.org/CD/netinst/) on this VM.
67 |
68 | * Configure the network interfaces (see the network architecture). Example of the `/etc/network/interfaces` file if your public interface is ens18 and your private interface is ens19 (you need to replace `PUBLIC_IP_ADDRESS`, `MASK` and `PUBLIC_IP_GATEWAY` with your values):
69 |
70 | If your Proxmox instance is directly connected to the internet, then PUBLIC_IP_GATEWAY is the default gateway (next hop) used to reach other internet IP addresses.
71 | If your Proxmox instance is instead connected to a LAN (ens18 on the diagram connected to your LAN), then PUBLIC_IP_GATEWAY is your default gateway to reach the internet (for example, your ISP router).
72 |
73 | ```bash
74 | source /etc/network/interfaces.d/*
75 |
76 | # The loopback network interface
77 | auto lo
78 | iface lo inet loopback
79 |
80 | # The public network interface
81 | auto ens18
82 | allow-hotplug ens18
83 | iface ens18 inet static
84 | address PUBLIC_IP_ADDRESS/MASK
85 | gateway PUBLIC_IP_GATEWAY
86 | dns-nameservers 9.9.9.9
87 |
88 | # The private network interface
89 | auto ens19
90 | allow-hotplug ens19
91 | iface ens19 inet static
92 | address 192.168.8.1/24
93 | dns-nameservers 9.9.9.9
94 | ```
95 |
99 | > If you want, you can define the IPv6 stack configuration.
100 | >
101 | > If you want, you can use another DNS resolver.
102 |
103 | * Define the VM hostname:
104 |
105 | ```bash
106 | sudo hostnamectl set-hostname gateway-01
107 | ```
108 |
109 | * Update the packages list and update the system:
110 |
111 | ```bash
112 | sudo apt-get update && sudo apt-get upgrade -y
113 | ```
114 |
115 | * Install SSH, vim, tmux, curl, NTP and iptables-persistent:
116 |
117 | ```bash
118 | sudo apt-get install ssh vim tmux curl ntp iptables-persistent -y
119 | ```
120 |
121 | * Enable and start the SSH and NTP services:
122 |
123 | ```bash
124 | sudo systemctl enable ntp
125 | sudo systemctl start ntp
126 | sudo systemctl enable ssh
127 | sudo systemctl start ssh
128 | ```
129 |
130 | * Enable IP routing:
131 |
132 | ```bash
133 | echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
134 | sudo sysctl -w net.ipv4.ip_forward=1
135 | ```
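
You can verify that forwarding is now active (an optional check, not part of the original steps):

```bash
sysctl net.ipv4.ip_forward
# Expected: net.ipv4.ip_forward = 1
```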
136 |
137 | > If you want, you can define the IPv6 stack configuration.
138 |
139 | * Configure the iptables firewall (allow some ports and configure NAT). Example of `/etc/iptables/rules.v4` file if ens18 is your public interface and ens19 is your private interface:
140 |
141 | ```bash
142 | # Generated by xtables-save v1.8.2 on Fri Jun 5 16:45:02 2020
143 | *nat
144 | -A POSTROUTING -o ens18 -j MASQUERADE
145 | COMMIT
146 |
147 | *filter
148 | -A INPUT -i lo -j ACCEPT
149 | # allow ssh, so that we do not lock ourselves
150 | -A INPUT -i ens18 -p tcp -m tcp --dport 22 -j ACCEPT
151 | -A INPUT -i ens18 -p tcp -m tcp --dport 80 -j ACCEPT
152 | -A INPUT -i ens18 -p tcp -m tcp --dport 443 -j ACCEPT
153 | -A INPUT -i ens18 -p tcp -m tcp --dport 6443 -j ACCEPT
154 | -A INPUT -i ens18 -p icmp -j ACCEPT
155 | # allow incoming traffic to the outgoing connections,
156 | # et al for clients from the private network
157 | -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
158 | # prohibit everything else incoming
159 | -A INPUT -i ens18 -j DROP
160 | COMMIT
161 | # Completed on Fri Jun 5 16:45:02 2020
162 | ```
163 |
164 | > If you want, you can define the IPv6 stack configuration.
165 |
166 | * If you want to restore/activate the iptables rules:
167 |
168 | ```bash
169 | sudo iptables-restore < /etc/iptables/rules.v4
170 | ```
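
> With the `iptables-persistent` package installed, the currently loaded rules can also be saved back to `/etc/iptables/rules.v4` so they are reapplied at boot (an optional step):

```bash
sudo netfilter-persistent save
```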
171 |
172 | * Configure the `/etc/hosts` file (you need to replace `PUBLIC_IP_GATEWAY`):
173 |
174 | ```bash
175 | 127.0.0.1 localhost
176 | PUBLIC_IP_GATEWAY gateway-01.external gateway-01
177 |
178 | # The following lines are desirable for IPv6 capable hosts
179 | ::1 localhost ip6-localhost ip6-loopback
180 | ff02::1 ip6-allnodes
181 | ff02::2 ip6-allrouters
182 |
183 | 192.168.8.10 controller-0
184 | 192.168.8.11 controller-1
185 | 192.168.8.12 controller-2
186 |
187 | 192.168.8.20 worker-0
188 | 192.168.8.21 worker-1
189 | 192.168.8.22 worker-2
190 | ```
191 |
192 | * To confirm the network configuration, reboot the VM and check the active IP addresses:
193 |
194 | ```bash
195 | sudo reboot
196 | ```
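
After the reboot, a quick way to list the active addresses is the `ip` tool from iproute2 (an optional check):

```bash
ip -brief address show
```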
197 |
198 | ### Kubernetes nodes VM installation
199 |
200 | > The basic VM installation process is not the purpose of this tutorial.
201 | >
202 | > Because it's just a tutorial, the IPv6 stack is not configured, but you can configure it if you want.
203 |
204 | These VMs are used as Kubernetes nodes (controllers or workers).
205 |
206 | The basic VM configuration process is the same for the 6 VMs (you can also configure one, clone it, and change the IP address and hostname on each clone).
207 |
208 | You have to:
209 |
210 | * Install the [Ubuntu 22.04.3 LTS Server install image](https://releases.ubuntu.com/22.04/) on this VM.
211 |
212 | * Configure the network interface (see the network architecture). Example of the `/etc/netplan/00-installer-config.yaml` file if ens18 is the name of your private network interface (you need to change the IP address depending on the server being installed):
213 |
214 | ```bash
215 | # This is the network config written by 'subiquity'
216 | network:
217 | ethernets:
218 | ens18:
219 | addresses:
220 | - 192.168.8.10/24
221 | gateway4: 192.168.8.1
222 | nameservers:
223 | addresses:
224 | - 9.9.9.9
225 | version: 2
226 | ```
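
> Note: on recent netplan releases (including the one shipped with Ubuntu 22.04), `gateway4` is deprecated in favour of a `routes` entry. A sketch of the equivalent configuration, under that assumption:

```bash
# Replaces "gateway4: 192.168.8.1" under the ens18 interface
network:
  ethernets:
    ens18:
      routes:
        - to: default
          via: 192.168.8.1
```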
227 |
228 | > If you want, you can define the IPv6 stack configuration.
229 | >
230 | > If you want, you can use another DNS resolver.
231 |
232 | * Define the VM hostname (example for controller-0):
233 |
234 | ```bash
235 | sudo hostnamectl set-hostname controller-0
236 | ```
237 |
238 | * Update the packages list and update the system:
239 |
240 | ```bash
241 | sudo apt-get update && sudo apt-get upgrade -y
242 | ```
243 |
244 | * Install SSH and NTP:
245 |
246 | ```bash
247 | sudo apt-get install ssh ntp -y
248 | ```
249 |
250 | * Enable and start the SSH and NTP services:
251 |
252 | ```bash
253 | sudo systemctl enable ntp
254 | sudo systemctl start ntp
255 | sudo systemctl enable ssh
256 | sudo systemctl start ssh
257 | ```
258 |
259 | * Configure the `/etc/hosts` file. Example for controller-0 (you need to replace `PUBLIC_IP_GATEWAY` and adapt this sample config for each VM):
260 |
261 | ```bash
262 | 127.0.0.1 localhost
263 | 127.0.1.1 controller-0
264 |
265 | # The following lines are desirable for IPv6 capable hosts
266 | ::1 ip6-localhost ip6-loopback
267 | fe00::0 ip6-localnet
268 | ff00::0 ip6-mcastprefix
269 | ff02::1 ip6-allnodes
270 | ff02::2 ip6-allrouters
271 |
272 | PUBLIC_IP_GATEWAY gateway-01.external
273 | 192.168.8.1 gateway-01
274 |
275 | 192.168.8.11 controller-1
276 | 192.168.8.12 controller-2
277 |
278 | 192.168.8.20 worker-0
279 | 192.168.8.21 worker-1
280 | 192.168.8.22 worker-2
281 | ```
282 |
283 | * To confirm the network configuration, reboot the VM and check the active IP address:
284 |
285 | ```bash
286 | sudo reboot
287 | ```
288 |
289 | ## Running Commands in Parallel with tmux
290 |
291 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. Labs in this tutorial may require running the same commands across multiple compute instances, in those cases consider using tmux and splitting a window into multiple panes with synchronize-panes enabled to speed up the provisioning process.
292 |
293 | > The use of tmux is optional and not required to complete this tutorial.
294 |
295 | 
296 |
297 | > Enable synchronize-panes by pressing `ctrl+b` followed by `shift+:`. Next type `set synchronize-panes on` at the prompt. To disable synchronization: `set synchronize-panes off`.
298 |
299 | Next: [Installing the Client Tools](02-client-tools.md)
300 |
--------------------------------------------------------------------------------
/docs/04-certificate-authority.md:
--------------------------------------------------------------------------------
1 | # Provisioning a CA and Generating TLS Certificates
2 |
3 | In this lab you will provision a [PKI Infrastructure](https://en.wikipedia.org/wiki/Public_key_infrastructure) using CloudFlare's PKI toolkit, [cfssl](https://github.com/cloudflare/cfssl), then use it to bootstrap a Certificate Authority, and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, and kube-proxy.
4 |
5 | ## Certificate Authority
6 |
7 | In this section you will provision a Certificate Authority that can be used to generate additional TLS certificates.
8 |
9 | On the `gateway-01` VM, generate the CA configuration file, certificate, and private key:
10 |
11 | ```bash
12 | cat > ca-config.json < ca-csr.json < admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements.
103 |
104 | On the `gateway-01` VM, generate a certificate and private key for each Kubernetes worker node (you need to replace PUBLIC_IP_ADDRESS with your external IP address, i.e. the ens18 IP address on the diagram):
105 |
106 | ```bash
107 | EXTERNAL_IP=PUBLIC_IP_ADDRESS
108 |
109 | for id_instance in 0 1 2; do
110 | cat > worker-${id_instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < The Kubernetes API server is automatically assigned the `kubernetes` internal dns name, which will be linked to the first IP address (`10.32.0.1`) from the address range (`10.32.0.0/24`) reserved for internal cluster services during the [control plane bootstrapping](08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server) lab.
309 |
310 | Results:
311 |
312 | ```bash
313 | kubernetes-key.pem
314 | kubernetes.pem
315 | ```
316 |
317 | ## The Service Account Key Pair
318 |
319 | The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the [managing service accounts](https://kubernetes.io/docs/admin/service-accounts-admin/) documentation.
320 |
321 | On the `gateway-01` VM, generate the `service-account` certificate and private key:
322 |
323 | ```bash
324 | cat > service-account-csr.json < The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab.
378 |
379 | Next: [Generating Kubernetes Configuration Files for Authentication](05-kubernetes-configuration-files.md)
380 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 |
2 | Apache License
3 | Version 2.0, January 2004
4 | http://www.apache.org/licenses/
5 |
6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
7 |
8 | 1. Definitions.
9 |
10 | "License" shall mean the terms and conditions for use, reproduction,
11 | and distribution as defined by Sections 1 through 9 of this document.
12 |
13 | "Licensor" shall mean the copyright owner or entity authorized by
14 | the copyright owner that is granting the License.
15 |
16 | "Legal Entity" shall mean the union of the acting entity and all
17 | other entities that control, are controlled by, or are under common
18 | control with that entity. For the purposes of this definition,
19 | "control" means (i) the power, direct or indirect, to cause the
20 | direction or management of such entity, whether by contract or
21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
22 | outstanding shares, or (iii) beneficial ownership of such entity.
23 |
24 | "You" (or "Your") shall mean an individual or Legal Entity
25 | exercising permissions granted by this License.
26 |
27 | "Source" form shall mean the preferred form for making modifications,
28 | including but not limited to software source code, documentation
29 | source, and configuration files.
30 |
31 | "Object" form shall mean any form resulting from mechanical
32 | transformation or translation of a Source form, including but
33 | not limited to compiled object code, generated documentation,
34 | and conversions to other media types.
35 |
36 | "Work" shall mean the work of authorship, whether in Source or
37 | Object form, made available under the License, as indicated by a
38 | copyright notice that is included in or attached to the work
39 | (an example is provided in the Appendix below).
40 |
41 | "Derivative Works" shall mean any work, whether in Source or Object
42 | form, that is based on (or derived from) the Work and for which the
43 | editorial revisions, annotations, elaborations, or other modifications
44 | represent, as a whole, an original work of authorship. For the purposes
45 | of this License, Derivative Works shall not include works that remain
46 | separable from, or merely link (or bind by name) to the interfaces of,
47 | the Work and Derivative Works thereof.
48 |
49 | "Contribution" shall mean any work of authorship, including
50 | the original version of the Work and any modifications or additions
51 | to that Work or Derivative Works thereof, that is intentionally
52 | submitted to Licensor for inclusion in the Work by the copyright owner
53 | or by an individual or Legal Entity authorized to submit on behalf of
54 | the copyright owner. For the purposes of this definition, "submitted"
55 | means any form of electronic, verbal, or written communication sent
56 | to the Licensor or its representatives, including but not limited to
57 | communication on electronic mailing lists, source code control systems,
58 | and issue tracking systems that are managed by, or on behalf of, the
59 | Licensor for the purpose of discussing and improving the Work, but
60 | excluding communication that is conspicuously marked or otherwise
61 | designated in writing by the copyright owner as "Not a Contribution."
62 |
63 | "Contributor" shall mean Licensor and any individual or Legal Entity
64 | on behalf of whom a Contribution has been received by Licensor and
65 | subsequently incorporated within the Work.
66 |
67 | 2. Grant of Copyright License. Subject to the terms and conditions of
68 | this License, each Contributor hereby grants to You a perpetual,
69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
70 | copyright license to reproduce, prepare Derivative Works of,
71 | publicly display, publicly perform, sublicense, and distribute the
72 | Work and such Derivative Works in Source or Object form.
73 |
74 | 3. Grant of Patent License. Subject to the terms and conditions of
75 | this License, each Contributor hereby grants to You a perpetual,
76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
77 | (except as stated in this section) patent license to make, have made,
78 | use, offer to sell, sell, import, and otherwise transfer the Work,
79 | where such license applies only to those patent claims licensable
80 | by such Contributor that are necessarily infringed by their
81 | Contribution(s) alone or by combination of their Contribution(s)
82 | with the Work to which such Contribution(s) was submitted. If You
83 | institute patent litigation against any entity (including a
84 | cross-claim or counterclaim in a lawsuit) alleging that the Work
85 | or a Contribution incorporated within the Work constitutes direct
86 | or contributory patent infringement, then any patent licenses
87 | granted to You under this License for that Work shall terminate
88 | as of the date such litigation is filed.
89 |
90 | 4. Redistribution. You may reproduce and distribute copies of the
91 | Work or Derivative Works thereof in any medium, with or without
92 | modifications, and in Source or Object form, provided that You
93 | meet the following conditions:
94 |
95 | (a) You must give any other recipients of the Work or
96 | Derivative Works a copy of this License; and
97 |
98 | (b) You must cause any modified files to carry prominent notices
99 | stating that You changed the files; and
100 |
101 | (c) You must retain, in the Source form of any Derivative Works
102 | that You distribute, all copyright, patent, trademark, and
103 | attribution notices from the Source form of the Work,
104 | excluding those notices that do not pertain to any part of
105 | the Derivative Works; and
106 |
107 | (d) If the Work includes a "NOTICE" text file as part of its
108 | distribution, then any Derivative Works that You distribute must
109 | include a readable copy of the attribution notices contained
110 | within such NOTICE file, excluding those notices that do not
111 | pertain to any part of the Derivative Works, in at least one
112 | of the following places: within a NOTICE text file distributed
113 | as part of the Derivative Works; within the Source form or
114 | documentation, if provided along with the Derivative Works; or,
115 | within a display generated by the Derivative Works, if and
116 | wherever such third-party notices normally appear. The contents
117 | of the NOTICE file are for informational purposes only and
118 | do not modify the License. You may add Your own attribution
119 | notices within Derivative Works that You distribute, alongside
120 | or as an addendum to the NOTICE text from the Work, provided
121 | that such additional attribution notices cannot be construed
122 | as modifying the License.
123 |
124 | You may add Your own copyright statement to Your modifications and
125 | may provide additional or different license terms and conditions
126 | for use, reproduction, or distribution of Your modifications, or
127 | for any such Derivative Works as a whole, provided Your use,
128 | reproduction, and distribution of the Work otherwise complies with
129 | the conditions stated in this License.
130 |
131 | 5. Submission of Contributions. Unless You explicitly state otherwise,
132 | any Contribution intentionally submitted for inclusion in the Work
133 | by You to the Licensor shall be under the terms and conditions of
134 | this License, without any additional terms or conditions.
135 | Notwithstanding the above, nothing herein shall supersede or modify
136 | the terms of any separate license agreement you may have executed
137 | with Licensor regarding such Contributions.
138 |
139 | 6. Trademarks. This License does not grant permission to use the trade
140 | names, trademarks, service marks, or product names of the Licensor,
141 | except as required for reasonable and customary use in describing the
142 | origin of the Work and reproducing the content of the NOTICE file.
143 |
144 | 7. Disclaimer of Warranty. Unless required by applicable law or
145 | agreed to in writing, Licensor provides the Work (and each
146 | Contributor provides its Contributions) on an "AS IS" BASIS,
147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
148 | implied, including, without limitation, any warranties or conditions
149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
150 | PARTICULAR PURPOSE. You are solely responsible for determining the
151 | appropriateness of using or redistributing the Work and assume any
152 | risks associated with Your exercise of permissions under this License.
153 |
154 | 8. Limitation of Liability. In no event and under no legal theory,
155 | whether in tort (including negligence), contract, or otherwise,
156 | unless required by applicable law (such as deliberate and grossly
157 | negligent acts) or agreed to in writing, shall any Contributor be
158 | liable to You for damages, including any direct, indirect, special,
159 | incidental, or consequential damages of any character arising as a
160 | result of this License or out of the use or inability to use the
161 | Work (including but not limited to damages for loss of goodwill,
162 | work stoppage, computer failure or malfunction, or any and all
163 | other commercial damages or losses), even if such Contributor
164 | has been advised of the possibility of such damages.
165 |
166 | 9. Accepting Warranty or Additional Liability. While redistributing
167 | the Work or Derivative Works thereof, You may choose to offer,
168 | and charge a fee for, acceptance of support, warranty, indemnity,
169 | or other liability obligations and/or rights consistent with this
170 | License. However, in accepting such obligations, You may act only
171 | on Your own behalf and on Your sole responsibility, not on behalf
172 | of any other Contributor, and only if You agree to indemnify,
173 | defend, and hold each Contributor harmless for any liability
174 | incurred by, or claims asserted against, such Contributor by reason
175 | of your accepting any such warranty or additional liability.
176 |
177 | END OF TERMS AND CONDITIONS
178 |
179 | APPENDIX: How to apply the Apache License to your work.
180 |
181 | To apply the Apache License to your work, attach the following
182 | boilerplate notice, with the fields enclosed by brackets "[]"
183 | replaced with your own identifying information. (Don't include
184 | the brackets!) The text should be enclosed in the appropriate
185 | comment syntax for the file format. We also recommend that a
186 | file or class name and description of purpose be included on the
187 | same "printed page" as the copyright notice for easier
188 | identification within third-party archives.
189 |
190 | Copyright [yyyy] [name of copyright owner]
191 |
192 | Licensed under the Apache License, Version 2.0 (the "License");
193 | you may not use this file except in compliance with the License.
194 | You may obtain a copy of the License at
195 |
196 | http://www.apache.org/licenses/LICENSE-2.0
197 |
198 | Unless required by applicable law or agreed to in writing, software
199 | distributed under the License is distributed on an "AS IS" BASIS,
200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
201 | See the License for the specific language governing permissions and
202 | limitations under the License.
203 |
--------------------------------------------------------------------------------
/docs/08-bootstrapping-kubernetes-controllers.md:
--------------------------------------------------------------------------------
1 | # Bootstrapping the Kubernetes Control Plane
2 |
3 | In this lab you will bootstrap the Kubernetes control plane across three VM instances and configure it for high availability. You will also create a load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
4 |
5 | ## Prerequisites
6 |
7 | The commands in this lab must be run on each controller instance: `controller-0`, `controller-1`, and `controller-2`. Log in to each controller instance using the `ssh` command. Example:
8 |
9 | ```bash
10 | ssh root@controller-0
11 | ```
12 |
13 | ### Running commands in parallel with tmux
14 |
15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. See the [Running commands in parallel with tmux](01-prerequisites.md#running-commands-in-parallel-with-tmux) section in the Prerequisites lab.
16 |
17 | ## Provision the Kubernetes Control Plane
18 |
19 | Create the Kubernetes configuration directory:
20 |
21 | ```bash
22 | sudo mkdir -p /etc/kubernetes/config
23 | ```
24 |
25 | ### Download and Install the Kubernetes Controller Binaries
26 |
27 | Download the official Kubernetes release binaries:
28 |
29 | ```bash
30 | wget -q --show-progress --https-only --timestamping \
31 | "https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kube-apiserver" \
32 | "https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kube-controller-manager" \
33 | "https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kube-scheduler" \
34 | "https://storage.googleapis.com/kubernetes-release/release/v1.29.1/bin/linux/amd64/kubectl"
35 | ```
36 |
37 | Install the Kubernetes binaries:
38 |
39 | ```bash
40 | chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
41 | sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
42 | ```
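
Optionally, check that the binaries respond before continuing (a quick verification, not part of the original steps):

```bash
kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version
kubectl version --client
```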
43 |
44 | ### Configure the Kubernetes API Server
45 |
46 | ```bash
47 | sudo mkdir -p /var/lib/kubernetes/
48 |
49 | sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
50 | service-account-key.pem service-account.pem \
51 | encryption-config.yaml /var/lib/kubernetes/
52 | ```
53 |
54 | The instance internal IP address will be used to advertise the API Server to members of the cluster. Define `INTERNAL_IP` (replace MY_NODE_INTERNAL_IP with this node's IP address):
55 |
56 | ```bash
57 | INTERNAL_IP=MY_NODE_INTERNAL_IP
58 | ```
59 |
60 | > Example for controller-0: 192.168.8.10
61 |
62 | Create the `kube-apiserver.service` systemd unit file:
63 |
64 | ```bash
65 | KUBERNETES_PUBLIC_ADDRESS=PUBLIC_IP_ADDRESS
66 |
67 | cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
199 |
200 | ### Verification
201 |
202 | ```bash
203 | kubectl cluster-info --kubeconfig admin.kubeconfig
204 | ```
205 |
206 | ```bash
207 | Kubernetes control plane is running at https://127.0.0.1:6443
208 | ```
209 |
210 | Test the HTTPS health check:
211 |
212 | ```bash
213 | curl -kH "Host: kubernetes.default.svc.cluster.local" -i https://127.0.0.1:6443/healthz
214 | ```
215 |
216 | ```bash
217 | HTTP/2 200
218 | content-type: text/plain; charset=utf-8
219 | x-content-type-options: nosniff
220 | content-length: 2
221 | date: Wed, 24 Jun 2020 12:24:52 GMT
222 |
223 | ok
224 | ```
225 |
226 | > Remember to run the above commands on each controller node: `controller-0`, `controller-1`, and `controller-2`.
227 |
228 | ## RBAC for Kubelet Authorization
229 |
230 | In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
231 |
232 | > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization.
233 |
234 | The commands in this section affect the entire cluster and only need to be run once, from one of the controller nodes.
235 |
236 | ```bash
237 | ssh root@controller-0
238 | ```
239 |
240 | Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
241 |
242 | ```bash
243 | cat <> /etc/nginx/nginx.conf
305 | stream {
306 | upstream controller_backend {
307 | server 192.168.8.10:6443;
308 | server 192.168.8.11:6443;
309 | server 192.168.8.12:6443;
310 | }
311 | server {
312 | listen 6443;
313 | proxy_pass controller_backend;
314 | # health_check; # Only Nginx commercial subscription can use this directive...
315 | }
316 | }
317 | EOF
318 | ```
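
Before restarting, it can be worth validating the new configuration syntax (an optional check):

```bash
sudo nginx -t
```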
319 |
320 | Restart the service:
321 |
322 | ```bash
323 | sudo systemctl restart nginx
324 | ```
325 |
326 | Enable the service:
327 |
328 | ```bash
329 | sudo systemctl enable nginx
330 | ```
331 |
332 | ### Load Balancer Verification
333 |
334 | Define the static public IP address (replace MY_PUBLIC_IP_ADDRESS with your public IP address on the `gateway-01` VM):
335 |
336 | ```bash
337 | KUBERNETES_PUBLIC_ADDRESS=MY_PUBLIC_IP_ADDRESS
338 | ```
339 |
340 | Make an HTTP request for the Kubernetes version info:
341 |
342 | ```bash
343 | curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version
344 | ```
345 |
346 | > Output:
347 |
348 | ```bash
349 | {
350 | "major": "1",
351 | "minor": "29",
352 | "gitVersion": "v1.29.1",
353 | "gitCommit": "bc401b91f2782410b3fb3f9acf43a995c4de90d2",
354 | "gitTreeState": "clean",
355 | "buildDate": "2024-01-17T15:41:12Z",
356 | "goVersion": "go1.21.6",
357 | "compiler": "gc",
358 | "platform": "linux/amd64"
359 | }
360 | ```
361 |
362 | Next: [Bootstrapping the Kubernetes Worker Nodes](09-bootstrapping-kubernetes-workers.md)
363 |
--------------------------------------------------------------------------------