├── LICENSE
├── README.md
├── designs
│   ├── cluster-cooling-fan.jpg
│   ├── cluster-preview.jpg
│   ├── cluster.jpg
│   ├── netgear-pi-mount-left.stl
│   └── netgear-pi-mount-right.stl
└── docs
    ├── BOM.md
    ├── all-node-setup.md
    ├── construction.md
    ├── gluster-setup.md
    ├── kubernetes-setup.md
    └── master-node-setup.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 Bryce

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Raspberry PI Based Kubernetes Cluster
Designs, instructions, and more for a seven-node Raspberry PI Kubernetes cluster.

As an enclosure alternative, [Uptime Labs released the STL files](https://uplab.pro/2020/12/raspberry-pi-server-mark-iii/) for a 19" rackmount that fits 14 Raspberry Pis.

![PI Cluster](https://i.imgur.com/z3KjNY4.jpg)

**Total Build Time:** Expert < 1 week / Newbie 4 weeks

### [Physical Construction](docs/construction.md)
When using the 3D printed brackets it is possible to squeeze 7 Raspberry PIs across the length of the POE switch. Doing so makes for a tidy little cluster that will fit on most bookshelves.

### [All Node Setup](docs/all-node-setup.md)
Each node requires some basic setup to prep it for use with Kubernetes.

### [Master Node Setup](docs/master-node-setup.md)
The master nodes utilize Keepalived and HAProxy to ensure the API address (192.168.200.249 in this example) is always online.

### [Kubernetes Setup](docs/kubernetes-setup.md)
In this section we set up Kubernetes on the first master node and then join the remaining master and worker nodes to it. Be sure to copy the certificates from kuber04m01 to the other master nodes. Do NOT copy the certificates to the worker nodes.

### [Gluster Setup](docs/gluster-setup.md)
In this implementation we use [Gluster](https://gluster.org) as the backing filesystem for Kubernetes Persistent Volumes. Gluster provides a fault-tolerant system across commodity and disparate hardware.

## Additional Reading
- [Igor Cicimov's Kubernetes cluster step-by-step](https://icicimov.github.io/blog/kubernetes/Kubernetes-cluster-step-by-step/)
- [Securing Raspbian](https://www.raspberrypi.org/documentation/configuration/security.md)
- [HAProxy Web Server on Raspbian](http://gregtrowbridge.com/setting-up-a-multiple-raspberry-pi-web-server-part-5/)
- [High Availability HAProxy](https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-haproxy-setup-with-corosync-pacemaker-and-floating-ips-on-ubuntu-14-04)
- [Known HAProxy won't start issue](https://discourse.haproxy.org/t/haproxy-wont-start-properly/1394)
- [K8s on Raspbian](https://github.com/teamserverless/k8s-on-raspbian/blob/master/GUIDE.md)
- [Kubernetes High Availability](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/)
- [Flannel](https://blog.laputa.io/kubernetes-flannel-networking-6a1cb1f8ec7c)
- [CIFS Volumes](https://github.com/fstab/cifs)
--------------------------------------------------------------------------------
/designs/cluster-cooling-fan.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BryceAshey/raspberry-pi-kubernetes-cluster/95388497b13b94373bdb36f0a492a87dfc4858bf/designs/cluster-cooling-fan.jpg
--------------------------------------------------------------------------------
/designs/cluster-preview.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BryceAshey/raspberry-pi-kubernetes-cluster/95388497b13b94373bdb36f0a492a87dfc4858bf/designs/cluster-preview.jpg
--------------------------------------------------------------------------------
/designs/cluster.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BryceAshey/raspberry-pi-kubernetes-cluster/95388497b13b94373bdb36f0a492a87dfc4858bf/designs/cluster.jpg
--------------------------------------------------------------------------------
/designs/netgear-pi-mount-left.stl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BryceAshey/raspberry-pi-kubernetes-cluster/95388497b13b94373bdb36f0a492a87dfc4858bf/designs/netgear-pi-mount-left.stl
--------------------------------------------------------------------------------
/designs/netgear-pi-mount-right.stl:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/BryceAshey/raspberry-pi-kubernetes-cluster/95388497b13b94373bdb36f0a492a87dfc4858bf/designs/netgear-pi-mount-right.stl
--------------------------------------------------------------------------------
/docs/BOM.md:
--------------------------------------------------------------------------------
# Bill of Materials

### Compute
- 7 [Raspberry PI 4 Model B w/4GB RAM](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/)
- 7 [Raspberry PI POE HAT](https://www.amazon.com/gp/product/B07GR9XQJH/ref=ppx_yo_dt_b_asin_title_o01_s01?ie=UTF8&psc=1)
- 7 [SanDisk 64GB SD](https://www.amazon.com/gp/product/B07FCMBLV6/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1)

### Network
- 1 [Netgear GS108PP 8-port unmanaged switch w/POE](https://www.amazon.com/gp/product/B07788WK5V/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1)
- 7 [6" Cat6 Ethernet Cables](https://www.amazon.com/gp/product/B00AJHCAPC/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1)

### Mounting Hardware
- 2 [3D Printed Mounts (left & right)](https://github.com/BryceAshey/raspberry-pi-kubernetes-cluster/tree/master/designs)
- 1 [GeauxRobot Dog Bone](https://www.amazon.com/GeauxRobot-Raspberry-Model-4-layer-Enclosure/dp/B00MYFAAPO/ref=sr_1_1?crid=60PS19G0QFU1&keywords=geauxrobot+4+layer+dog+bone&qid=1567441074&s=hi&sprefix=geauxrobot+%2Ctools%2C208&sr=8-1)
- 100pcs M2.5 x 10mm + 6mm Brass Standoffs Male to Female
- [Assorted M2.5 Brass Standoffs](https://www.amazon.com/gp/product/B075K3QBMX/ref=ppx_yo_dt_b_asin_title_o02_s00?ie=UTF8&psc=1)
- [Assorted M3 Standoffs](https://www.amazon.com/gp/product/B013ZWM1F6/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&psc=1)

### Optional Cooling
- 1 [Antec TrueQuiet 120mm Case Cooling Fan](https://www.amazon.com/gp/product/B004AGXHE6/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1)
- 1 [12v DC Fan Power Supply](https://www.amazon.com/gp/product/B07MZP9247/ref=ppx_yo_dt_b_asin_title_o07_s00?ie=UTF8&psc=1)
--------------------------------------------------------------------------------
/docs/all-node-setup.md:
--------------------------------------------------------------------------------
# Node Setup

### Raspberry PI Image
Download the [Raspbian Buster Lite](https://downloads.raspberrypi.org/raspbian_lite/images/raspbian_lite-2019-07-12/2019-07-10-raspbian-buster-lite.zip) image, write it to each SD card, and boot your PIs up.

### Via raspi-config:
- update hostname
- expand filesystem
- enable ssh
- update timezone as appropriate

### General Setup
We highly recommend reading the Raspberry Pi Foundation's [guidance](https://www.raspberrypi.org/documentation/configuration/security.md) on security best practices.


If you are in the US, update your keyboard layout to "US"
```
> sudo nano /etc/default/keyboard
```

Create the Kubernetes admin user and add it to the sudo group
```
> sudo adduser kadmin
> sudo adduser kadmin sudo
```

Log out and log back in as kadmin

Delete the default user
```
> sudo deluser pi
```

Require a sudo password for kadmin. Change "pi" to "kadmin" in the file below
```
> sudo nano /etc/sudoers.d/010_pi-nopasswd
```

Optionally you may want to install [fail2ban](https://www.fail2ban.org/wiki/index.php/Main_Page) to lock the account after a number of failed login attempts.

Make dang sure that swap is turned off. Previously we could simply disable it as below, but on Buster it appears we now also need to set the swap size to 0 to truly disable it.
```
> sudo dphys-swapfile swapoff
> sudo dphys-swapfile uninstall
> sudo update-rc.d dphys-swapfile remove

> sudo nano /etc/dphys-swapfile
# find CONF_SWAPSIZE=100 and change it to CONF_SWAPSIZE=0
```
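
If you'd rather not edit the file by hand, the same change can be scripted, and `free` gives a quick way to confirm swap is really gone after the next reboot. This is just a minimal sketch of the edit above:
```
# non-interactive equivalent of the CONF_SWAPSIZE edit above
> sudo sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=0/' /etc/dphys-swapfile

# after a reboot the Swap row should read all zeros
> free -h
```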

Configure cgroup
```
> sudo nano /boot/cmdline.txt
# add the values below to the end of the existing line. Make sure there is a space
# (not a carriage return) between the last existing value and the values below:
cgroup_enable=cpuset cgroup_memory=1 cgroup_enable=memory
```

Disable the WIFI adapter
```
> echo "dtoverlay=pi3-disable-wifi" | sudo tee -a /boot/config.txt
```

Set up iptables to forward packets (required by Kubernetes networking)
```
> sudo iptables -P FORWARD ACCEPT
```

Add the above line to /etc/rc.local so that it gets reapplied on every boot
```
> sudo nano /etc/rc.local
# Paste the following line above "exit 0" and make sure the top of the file has "#!/bin/sh -e"
/sbin/iptables -P FORWARD ACCEPT
```

Allow non-local binds (required by HAProxy) and allow the bridge to call iptables
```
# note: values set with sysctl like this do not survive a reboot;
# add them to /etc/sysctl.conf to make them permanent
> sudo sysctl net.ipv4.ip_nonlocal_bind=1
> sudo sysctl net.bridge.bridge-nf-call-iptables=1
```

Give each node a static IP, either by configuring its networking directly or by setting up a DHCP reservation on your router for each server. Then map the hostnames in /etc/hosts on every node, changing the IPs in the list below to match your assignments.
```
> sudo nano /etc/hosts

# paste the following values at the end of the file
192.168.200.249 kubernetes
192.168.200.250 kuber04m01
192.168.200.251 kuber04m02
192.168.200.252 kuber04m03
192.168.200.230 kuber04w01
192.168.200.231 kuber04w02
192.168.200.232 kuber04w03
192.168.200.233 kuber04w04
```

Update your distribution
```
> sudo apt-get update
> sudo apt-get upgrade
```

Free up some space
```
> sudo apt-get -y purge "pulseaudio*"
```

Reboot to make sure everything is fresh
```
> sudo reboot now
```
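
After the node comes back up, it's worth a quick check that the memory cgroup actually got enabled by the cmdline.txt change. This is just a read of /proc and assumes nothing beyond the steps above:
```
# the memory row should show a 1 in the last ("enabled") column
> grep memory /proc/cgroups
```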

## Install Docker and Kubernetes

### Install Docker

Note the hack here: the sed tricks the install script into treating Buster (Debian 10) like Stretch (Debian 9). At the time of this writing there was not yet an ARM version of Docker compiled for Buster.
```
> curl -sL get.docker.com | sed 's/9)/10)/' | sh

# Once an ARM version for Buster is available the standard command (below) should work again
# curl -sSL get.docker.com | sh
```

Add kadmin to the docker group
```
> sudo usermod kadmin -aG docker
```

### Install Kubernetes

```
> curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
  echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
  sudo apt-get update -q && \
  sudo apt-get install -qy kubeadm
```

On the **master** nodes only, prepull the images for Kubernetes
```
> sudo kubeadm config images pull -v3
```

### [Optional] Enable CIFS volumes
```
> sudo apt-get install -y jq
> VOLUME_PLUGIN_DIR="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"
> sudo mkdir -p "$VOLUME_PLUGIN_DIR/fstab~cifs"
> cd "$VOLUME_PLUGIN_DIR/fstab~cifs"
> sudo curl -L -O https://raw.githubusercontent.com/fstab/cifs/master/cifs
> sudo chmod 755 cifs

# Verify the installation
> $VOLUME_PLUGIN_DIR/fstab~cifs/cifs init
```

Reboot again - I ran into an issue once or twice when I didn't reboot
```
> sudo reboot now
```
--------------------------------------------------------------------------------
/docs/construction.md:
--------------------------------------------------------------------------------
# Building the Cluster

[**Bill of Materials**](BOM.md)

Constructing the physical parts of the cluster is fairly straightforward once the mounts are printed.

## Printing the Switch Rack Mounts

Each mount should be printed with PLA. I used a .4 mm nozzle on an Ultimaker 2+ Extended (any printer should work though).

- [Left Mount](https://github.com/BryceAshey/raspberry-pi-kubernetes-cluster/blob/master/designs/netgear-pi-mount-left.stl)
- [Right Mount](https://github.com/BryceAshey/raspberry-pi-kubernetes-cluster/blob/master/designs/netgear-pi-mount-right.stl)

## Putting it all Together

As can be seen in the image below, I used three parts of the GeauxRobot Dogbone case: one on each end and one in the middle. This allowed for good rigidity throughout the structure.

I then used M3 standoffs for the connecting rails.

The PIs themselves are mounted to either the left or the right Dogbone with M2.5 x 10mm + 6mm standoffs between the boards (including the POE hats).

## Connecting the Electrical

Once the physical structure is in place go ahead and connect the 6" ethernet cables to the switch and each PI. Connect your network to the far right port of the switch.

Finally, connect up the power cable and watch the PIs come up.

Regarding the optional cooling fan - it is not required, but I found the cluster to be MUCH quieter with it than with the POE HAT fans alone. For now I have it standing on another small switch behind the cluster. I'll work on adding mounts for that fan and its power supply at some point in the future. If you choose to add the fan be sure to angle it slightly so that all the PI "blades" get air pushed across them.

![PI Cluster](https://i.imgur.com/z3KjNY4.jpg)

![PI Cluster Cooling Fans](https://i.imgur.com/9nAlQBW.jpg)
--------------------------------------------------------------------------------
/docs/gluster-setup.md:
--------------------------------------------------------------------------------
# Gluster
[Gluster](https://www.gluster.org/) is a wonderful storage system built to run on commodity hardware. This makes it the perfect solution for a persistent volume file system for Kubernetes (you can't get much more "commodity" than a PI). For this example we're just going to use a file system on the root SD card; however, you may wish to use a USB drive on the USB 3.0 ports instead, both to preserve the health of your SD cards and to avoid contention.

NOTE: In the example below we are installing the Gluster server on the worker nodes as well. While this allows connectivity to Gluster from the Kubernetes pods on the worker nodes, we really only need the [Gluster Client](https://docs.gluster.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/) installed there. I'll refine the implementation at a later date.

Additional Reference: [Gluster on ARM](https://www.gopeedesignstudio.com/2018/07/13/glusterfs-on-arm/)

## Gluster Setup

Install GlusterFS (every node)
```
> sudo modprobe fuse
> sudo apt-get install -y xfsprogs glusterfs-server
```

Make sure Gluster is started (every node)
```
> sudo systemctl start glusterd
```

Peer all nodes together (run from kuber04m01)
```
> sudo gluster peer probe kuber04m02
> sudo gluster peer probe kuber04m03
> sudo gluster peer probe kuber04w01
> sudo gluster peer probe kuber04w02
> sudo gluster peer probe kuber04w03
> sudo gluster peer probe kuber04w04
```

Validate all peers are connected
```
> sudo gluster peer status

Number of Peers: 6

Hostname: kuber04m03
Uuid: c251f7f7-4682-4ccf-81d1-39bf9961cc12
State: Peer in Cluster (Connected)

Hostname: kuber04w02
Uuid: 27cd9ef1-7218-4d0b-9ad6-46cc187bf026
State: Peer in Cluster (Connected)

Hostname: kuber04m02
Uuid: d70f992b-5295-41b7-9992-38f3be627d98
State: Peer in Cluster (Connected)

Hostname: kuber04w03
Uuid: f5680983-5ad3-4c47-bad7-51bb96b2182e
State: Peer in Cluster (Connected)

Hostname: kuber04w01
Uuid: a71db145-5f43-4412-9da0-8f1de1e6ab1c
State: Peer in Cluster (Connected)

Hostname: kuber04w04
Uuid: 365fc2c9-7b3f-4874-90e5-708387744cc2
State: Peer in Cluster (Connected)
```

## Gluster Volume Setup

Create the first volume directory (all master nodes)
```
> sudo mkdir -p /data/glusterfs/myvol1/brick1/
```

(Optionally, format and mount an external storage device to the above directory at this point - see the sketch below.)
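
If you do go the external-drive route, the step might look like the following. This is only a sketch: it assumes the drive shows up as /dev/sda with a single partition (verify with lsblk first), and it relies on the xfsprogs package installed above:
```
# identify your drive first; /dev/sda1 below is an assumption
> lsblk

# format the partition as XFS and mount it at the brick directory on every boot
> sudo mkfs.xfs /dev/sda1
> echo "/dev/sda1 /data/glusterfs/myvol1/brick1 xfs defaults 0 0" | sudo tee -a /etc/fstab
> sudo mount -a
```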

Create and start the first volume
```
> sudo gluster volume create brick1 kuber04m01:/data/glusterfs/myvol1/brick1/ kuber04m02:/data/glusterfs/myvol1/brick1/ kuber04m03:/data/glusterfs/myvol1/brick1/
> sudo gluster volume start brick1
```

Note: as written this creates a distributed (non-replicated) volume. If you want the data mirrored across all three masters for fault tolerance, add **replica 3** immediately after the volume name in the create command.

Validate the volume setup

**Make note of the TCP port - you will need it to set up the endpoint in Kubernetes**

```
> sudo gluster volume status

Status of volume: brick1
Gluster process                                  TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick kuber04m01:/data/glusterfs/myvol1/brick1/  49152     0          Y       15088
Brick kuber04m02:/data/glusterfs/myvol1/brick1/  49152     0          Y       27562
Brick kuber04m03:/data/glusterfs/myvol1/brick1/  49152     0          Y       25493

Task Status of Volume brick1
------------------------------------------------------------------------------
There are no active volume tasks
```
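
Before wiring the volume into Kubernetes, a quick smoke test from any node can confirm it actually mounts and accepts writes. This is optional and assumes nothing else is mounted at /mnt:
```
> sudo mount -t glusterfs kuber04m01:/brick1 /mnt
> echo hello | sudo tee /mnt/hello.txt
> sudo umount /mnt
```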

## Setup Kubernetes

Create the gluster endpoints in kubernetes. In the example below be sure to update the IPs to match your network and update the port to match the value from the output above.

```
> sudo nano glusterfs-endpoints.json

# paste the following and save
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        { "ip": "192.168.200.250" }
      ],
      "ports": [
        { "port": 49152 }
      ]
    },
    {
      "addresses": [
        { "ip": "192.168.200.251" }
      ],
      "ports": [
        { "port": 49152 }
      ]
    },
    {
      "addresses": [
        { "ip": "192.168.200.252" }
      ],
      "ports": [
        { "port": 49152 }
      ]
    }
  ]
}
```

Add the endpoint to Kubernetes
```
> kubectl create -f glusterfs-endpoints.json
```

Validate
```
> kubectl get endpoints

NAME                ENDPOINTS                                                            AGE
glusterfs-cluster   192.168.200.250:49152,192.168.200.251:49152,192.168.200.252:49152   23h
kubernetes          192.168.200.250:6443,192.168.200.251:6443,192.168.200.252:6443      24h
```

## Create a Persistent Volume

Create a volume for your pods to use. Note the **path** matches the "brick1" name from above.
```
> sudo nano brick1-volume.yaml

# paste

apiVersion: v1
kind: PersistentVolume
metadata:
  name: brick1
  annotations:
    pv.beta.kubernetes.io/gid: "0"
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: brick1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
```

Create the volume
```
> kubectl create -f brick1-volume.yaml
```

## Create a Persistent Volume Claim

```
> sudo nano brick1-pvc.yaml

# paste

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: brick1-gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```

Create the claim
```
> kubectl create -f brick1-pvc.yaml
```

Validate
```
> kubectl get persistentvolumes

NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                          STORAGECLASS   REASON   AGE
brick1   5Gi        RWX            Retain           Bound    default/brick1-gluster-claim                           18h
```

### Example spec for using the volume

In the example below we are assigning the Persistent Volume "brick1" to the jenkins_home mount path in the jenkins image.

Note: if you're going to try to set up Jenkins on ARM (i.e. the PI) be sure to use the jenkins4eval/jenkins image since it is the only one with ARM support.

```
spec:
  containers:
    - name: jenkins
      image: jenkins4eval/jenkins
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: brick1-home
          mountPath: /var/jenkins_home
          readOnly: false
  volumes:
    - name: brick1-home
      persistentVolumeClaim:
        claimName: brick1-gluster-claim
```
--------------------------------------------------------------------------------
/docs/kubernetes-setup.md:
--------------------------------------------------------------------------------
# Kubernetes Setup

## First Master Node (kuber04m01)

Perform the initial Kubernetes setup. As before, change the controlPlaneEndpoint to match your setup.

```
> sudo nano kubeadm-config.yaml
# paste the following

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "192.168.200.249:443"
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
    - kubernetes
```

Initialize the Kubernetes cluster
```
> sudo kubeadm init --config=kubeadm-config.yaml --upload-certs -v 4
```
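
If the init fails partway, wipe the partial state before retrying - **kubeadm reset** undoes what **kubeadm init** set up on this node:
```
> sudo kubeadm reset
```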

Configure your local kubectl. If for some reason you have to start over and issue a **kubeadm reset** you will need to do this again. **Make note of the join commands** that are presented once this command completes.

```
# from ~

> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Install [Flannel](https://coreos.com/flannel/docs/latest/) as the networking component of the cluster
```
> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

Copy the cluster certs to the other master nodes (put the kadmin password between the empty quotes, or drop sshpass and enter it interactively)
```
> sudo apt install sshpass

> sudo sshpass -p "" scp /etc/kubernetes/pki/ca.crt kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/ca.key kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/sa.key kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/sa.pub kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/front-proxy-ca.crt kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/front-proxy-ca.key kadmin@kuber04m02: && sudo sshpass -p "" scp /etc/kubernetes/pki/etcd/ca.crt kadmin@kuber04m02:etcd-ca.crt && sudo sshpass -p "" scp /etc/kubernetes/pki/etcd/ca.key kadmin@kuber04m02:etcd-ca.key

> sudo sshpass -p "" scp /etc/kubernetes/pki/ca.crt kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/ca.key kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/sa.key kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/sa.pub kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/front-proxy-ca.crt kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/front-proxy-ca.key kadmin@kuber04m03: && sudo sshpass -p "" scp /etc/kubernetes/pki/etcd/ca.crt kadmin@kuber04m03:etcd-ca.crt && sudo sshpass -p "" scp /etc/kubernetes/pki/etcd/ca.key kadmin@kuber04m03:etcd-ca.key
```

## Master Nodes 2 and 3 (kuber04m02 & kuber04m03)

Move the certs to the /etc/kubernetes/pki path
```
sudo mv ca.crt /etc/kubernetes/pki && sudo mv ca.key /etc/kubernetes/pki && sudo mv sa.key /etc/kubernetes/pki && sudo mv sa.pub /etc/kubernetes/pki && sudo mv front-proxy-ca.crt /etc/kubernetes/pki && sudo mv front-proxy-ca.key /etc/kubernetes/pki && sudo mkdir /etc/kubernetes/pki/etcd && sudo cp etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt && sudo cp etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
```

Join the master nodes to kuber04m01 using the join command from when you ran **kubeadm init**. Make sure it's the command with **--control-plane** in it.

To regenerate the join commands use:
```
> sudo kubeadm token create --print-join-command
```

Be sure to do the following after the join succeeds
```
> mkdir -p $HOME/.kube
> sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
> sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

## All Worker Nodes

Join the worker nodes to kuber04m01 using the join command from when you ran **kubeadm init**. Make sure the command does **NOT** have **--control-plane** in it.

To regenerate the join commands use:
```
> sudo kubeadm token create --print-join-command
```

## Confirmation

You should be able to see the following response from kuber04m01
```
> kubectl get nodes

NAME         STATUS   ROLES    AGE   VERSION
kuber04m01   Ready    master   24h   v1.15.1
kuber04m02   Ready    master   24h   v1.15.1
kuber04m03   Ready    master   24h   v1.15.1
kuber04w01   Ready    <none>   23h   v1.15.2
kuber04w02   Ready    <none>   23h   v1.15.2
kuber04w03   Ready    <none>   23h   v1.15.2
kuber04w04   Ready    <none>   23h   v1.15.3
```
--------------------------------------------------------------------------------
/docs/master-node-setup.md:
--------------------------------------------------------------------------------
# Master Node Setup

## Configure Keepalived
We use Keepalived on each of the 3 master nodes to ensure a single static IP is always available for the Kubernetes API. If a node fails, Keepalived will move the IP to one of the remaining "up" nodes. As before, change the IPs found within to meet your needs.

Install Keepalived
```
> sudo apt-get install keepalived
```

Configure Keepalived

Note: on each master node you need to change the "unicast_src_ip" to match the IP of the node. Then change the "unicast_peer" list to match the other two nodes.
```
> sudo nano /etc/keepalived/keepalived.conf

# replace with the following:
vrrp_script haproxy-check {
    script "killall -0 haproxy"
    interval 2
    weight 20
}

vrrp_instance haproxy-vip {
    state BACKUP
    priority 101
    interface eth0
    virtual_router_id 47
    advert_int 3

    unicast_src_ip 192.168.200.252
    unicast_peer {
        192.168.200.250
        192.168.200.251
    }

    virtual_ipaddress {
        192.168.200.249
    }

    track_script {
        haproxy-check weight 20
    }
}
```

Reboot
```
> sudo reboot now
```
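
Once the masters are back up, you can check which one currently holds the virtual IP - it should appear on eth0 of exactly one node. This is a plain status check, assuming the config above is unchanged:
```
# prints a line only on the node that currently owns the VIP
> ip addr show eth0 | grep 192.168.200.249
```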

## HAProxy Setup
We use HAProxy to route traffic arriving on the aforementioned Keepalived IP (192.168.200.249 in this example) to the Kubernetes API server on each master node.

Install HAProxy
```
> sudo apt-get install haproxy
```

Enable on Startup
```
> sudo nano /etc/default/haproxy
# Set ENABLED=1
```
Configure HAProxy
The SSH proxy is optional - I found that sometimes HAProxy would interfere with SSH unless I put the proxy in. Something to figure out longer term.
```
> sudo nano /etc/haproxy/haproxy.cfg

# Edit with the following

global
    log 127.0.0.1:514 local0
    log 127.0.0.1:514 local0 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    # user haproxy
    # group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    # An alternative list with additional directives can be obtained from
    # https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    timeout connect 1s
    timeout client 20s
    timeout server 20s
    timeout client-fin 20s
    timeout tunnel 1h
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats
    bind *:9000
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth Admin:Password

frontend k8s-api
    bind 192.168.200.249:443
    bind 127.0.0.1:443
    mode tcp
    option tcplog
    default_backend k8s-api

frontend ssh-proxy
    bind 192.168.200.251:2222
    mode tcp
    option tcplog
    default_backend ssh-proxy-backend

backend k8s-api
    mode tcp
    server kuber04m01 192.168.200.250:6443 check
    server kuber04m02 192.168.200.251:6443 check
    server kuber04m03 192.168.200.252:6443 check

backend ssh-proxy-backend
    mode tcp
    server kuber04m02 192.168.200.251:22 check id 1
```

Reboot
```
> sudo reboot now
```
--------------------------------------------------------------------------------