That's it with the control-plane node!

---
## Worker Nodes

The following instructions apply similarly to both worker nodes. I will document the steps for the **k8swrknode1** node, but please follow the same process for the **k8swrknode2** node.

### Ports on the worker nodes

As we learned in the control-plane section above, Kubernetes runs a few services:

| Protocol | Direction | Port Range  | Purpose           | Used By              |
|----------|-----------|-------------|-------------------|----------------------|
| TCP      | Inbound   | 10250       | Kubelet API       | Self, Control plane  |
| TCP      | Inbound   | 10256       | kube-proxy        | Self, Load balancers |
| TCP      | Inbound   | 30000-32767 | NodePort Services | All                  |

### Firewall on the worker nodes

So we need to open the necessary ports on the worker nodes too.

```bash
sudo ufw allow 10250/tcp
sudo ufw allow 10256/tcp
sudo ufw allow 30000:32767/tcp

sudo ufw status
```

The output should appear as follows:

```
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
10250/tcp                  ALLOW       Anywhere
30000:32767/tcp            ALLOW       Anywhere
22/tcp (v6)                ALLOW       Anywhere (v6)
10250/tcp (v6)             ALLOW       Anywhere (v6)
30000:32767/tcp (v6)       ALLOW       Anywhere (v6)
```

And do not forget: we also need to open UDP 8472 for flannel.

```bash
sudo ufw allow 8472/udp
```

The next few steps are pretty much exactly the same as on the control-plane node.
In order to keep this documentation short, I'll just copy/paste the commands.

### Hosts file in the worker node

Update the `/etc/hosts` file to include the IPs and hostnames of all VMs.

```bash
192.168.122.223 k8scpnode1
192.168.122.50  k8swrknode1
192.168.122.10  k8swrknode2
```

### No Swap on the worker node

```bash
sudo swapoff -a
```

### Kernel modules on the worker node

```bash
sudo tee /etc/modules-load.d/kubernetes.conf <<EOF
overlay
br_netfilter
EOF
```
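To load the modules immediately and confirm they are present, the same quick check as on the control-plane node applies (a minimal sketch; the `egrep` filter is just illustrative):

```bash
sudo modprobe overlay
sudo modprobe br_netfilter

# both modules should show up as loaded
lsmod | egrep 'overlay|br_netfilter'
```

From here, the remaining worker-node preparation continues in the same copy/paste fashion as on the control-plane node.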
---
## Kubernetes Dashboard

> Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage applications running in the cluster and troubleshoot them, as well as manage the cluster itself.

Next, we can move forward with installing the Kubernetes dashboard on our cluster.

### Helm

Helm is a package manager for Kubernetes that simplifies the process of deploying applications to a Kubernetes cluster. As of version 7.0.0, kubernetes-dashboard has dropped support for Manifest-based installation; only Helm-based installation is supported now.

Live on the edge!

```bash
curl -sL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

### Install kubernetes dashboard

We need to add the kubernetes-dashboard helm repository first and install the helm chart after:

```bash
# Add kubernetes-dashboard repository
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# Deploy a Helm Release named "kubernetes-dashboard" using the kubernetes-dashboard chart
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
```

The output of the command above should resemble something like this:

```bash
Release "kubernetes-dashboard" does not exist. Installing it now.

NAME: kubernetes-dashboard
LAST DEPLOYED: Mon Nov 25 15:36:51 2024
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None

NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
  kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443
```

Verify the installation:

`kubectl -n kubernetes-dashboard get svc`

```
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard-api   ClusterIP   10.106.254.153   <none>        ...
```
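Logging in to the dashboard requires a bearer token. A minimal sketch for a lab cluster, assuming a hypothetical `admin-user` ServiceAccount bound to the `cluster-admin` role (far too broad for production use):

```bash
# create a service account for dashboard logins (the name is an assumption)
kubectl -n kubernetes-dashboard create serviceaccount admin-user

# grant it cluster-admin rights - acceptable in a disposable lab, nowhere else
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user

# issue a short-lived bearer token to paste into the dashboard login screen
kubectl -n kubernetes-dashboard create token admin-user
```

With the token in hand, the `kubectl port-forward` command from the Helm NOTES above makes the dashboard reachable at `https://localhost:8443`.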
---
## Nginx App

Before finishing this blog post, I would also like to share how to install a simple nginx app, as it is customary to do such a thing in every new k8s cluster.

But please excuse me, I will not go into much detail.
You should be able to understand the k8s commands below.

### Install nginx-app

```bash
kubectl create deployment nginx-app --image=nginx --replicas=2
```

```bash
deployment.apps/nginx-app created
```

### Get Deployment

```bash
kubectl get deployment nginx-app -o wide
```

```bash
NAME        READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
nginx-app   2/2     2            2           64s   nginx        nginx    app=nginx-app
```

### Expose Nginx-App

```bash
kubectl expose deployment nginx-app --type=NodePort --port=80
```

```bash
service/nginx-app exposed
```

### Verify Service nginx-app

```bash
kubectl get svc nginx-app -o wide
```

```bash
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginx-app   NodePort   10.98.170.185   <none>        ...
```
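The service is reachable on the NodePort that Kubernetes assigned from the 30000-32767 range; it differs per cluster. A small sketch for testing it with `curl` (the jsonpath query and the worker-node IP are illustrative assumptions):

```bash
# look up the NodePort assigned to the nginx-app service
NODE_PORT=$(kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}')

# request the page through one of the worker nodes, e.g. k8swrknode1
curl "http://192.168.122.50:${NODE_PORT}"
```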
The response is the default nginx welcome page:

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

### Nginx-App from Browser

![](attachments/SCR20241127pnxl.png)

### Change the default page

Last but not least, let's modify the default index page to something different, for educational purposes, with the help of a **ConfigMap**.

The idea is to create a ConfigMap with the HTML of our new index page, then attach it to our nginx deployment as a volume mount!

```bash
cat > nginx_config.map << EOF
apiVersion: v1
data:
  index.html: |
    <html>
    <head>
    </head>
    <body>
    <h1>Change the default nginx page</h1>
    </body>
    </html>
kind: ConfigMap
metadata:
  name: nginx-config-page
  namespace: default
EOF
```

`cat nginx_config.map`

```yaml
apiVersion: v1
data:
  index.html: |
    <html>
    <head>
    </head>
    <body>
    <h1>Change the default nginx page</h1>
    </body>
    </html>
kind: ConfigMap
metadata:
  name: nginx-config-page
  namespace: default
```

Apply the ConfigMap:

```bash
kubectl apply -f nginx_config.map
```

Verify:

```bash
kubectl get configmap
```

```
NAME                DATA   AGE
kube-root-ca.crt    1      2d3h
nginx-config-page   1      16m
```

Now the difficult part: we need to mount our ConfigMap into the nginx deployment, and to do that we need to edit the nginx deployment.

```bash
kubectl edit deployments.apps nginx-app
```

Rewrite the spec section to include:

* the VolumeMount &
* the ConfigMap as Volume

```yaml
spec:
  containers:
  - image: nginx
    ...
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginx-config
  ...
  volumes:
  - configMap:
      name: nginx-config-page
    name: nginx-config
```

After saving, the nginx deployment will be updated by itself.

Finally, we can see our updated index page:

![](attachments/SCR20241127phat.png)
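The same check works from the CLI as well; a quick sketch (using `kubectl exec` against the deployment, which picks one of its pods):

```bash
# wait until the updated pods are ready
kubectl rollout status deployment nginx-app

# read the served index page straight out of a running pod
kubectl exec deploy/nginx-app -- cat /usr/share/nginx/html/index.html
```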
---
## That's it

I hope you enjoyed this post.

-Evaggelos Balaskas
---
### destroy our lab

```bash
./destroy.sh
```

```bash
...

libvirt_domain.domain-ubuntu["k8wrknode1"]: Destroying... [id=446cae2a-ce14-488f-b8e9-f44839091bce]
libvirt_domain.domain-ubuntu["k8scpnode"]: Destroying... [id=51e12abb-b14b-4ab8-b098-c1ce0b4073e3]
time_sleep.wait_for_cloud_init: Destroying... [id=2022-08-30T18:02:06Z]
libvirt_domain.domain-ubuntu["k8wrknode2"]: Destroying... [id=0767fb62-4600-4bc8-a94a-8e10c222b92e]
time_sleep.wait_for_cloud_init: Destruction complete after 0s
libvirt_domain.domain-ubuntu["k8wrknode1"]: Destruction complete after 1s
libvirt_domain.domain-ubuntu["k8scpnode"]: Destruction complete after 1s
libvirt_domain.domain-ubuntu["k8wrknode2"]: Destruction complete after 1s
libvirt_cloudinit_disk.cloud-init["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_cloud-init.iso;b8ddfa73-a770-46de-ad16-b0a5a08c8550]
libvirt_cloudinit_disk.cloud-init["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_cloud-init.iso;5511ed7f-a864-4d3f-985a-c4ac07eac233]
libvirt_volume.ubuntu-base["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_ubuntu-base]
libvirt_volume.ubuntu-base["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_ubuntu-base]
libvirt_cloudinit_disk.cloud-init["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_cloud-init.iso;11ef6bb7-a688-4c15-ae33-10690500705f]
libvirt_volume.ubuntu-base["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_ubuntu-base]
libvirt_cloudinit_disk.cloud-init["k8wrknode1"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8wrknode2"]: Destruction complete after 1s
libvirt_cloudinit_disk.cloud-init["k8scpnode"]: Destruction complete after 1s
libvirt_cloudinit_disk.cloud-init["k8wrknode2"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8wrknode1"]: Destruction complete after 1s
libvirt_volume.ubuntu-base["k8scpnode"]: Destruction complete after 2s
libvirt_volume.ubuntu-vol["k8wrknode1"]: Destroying... [id=/var/lib/libvirt/images/Jpw2Sg_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8scpnode"]: Destroying... [id=/var/lib/libvirt/images/l5Rr1w_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8wrknode2"]: Destroying... [id=/var/lib/libvirt/images/VdUklQ_ubuntu-vol]
libvirt_volume.ubuntu-vol["k8scpnode"]: Destruction complete after 0s
libvirt_volume.ubuntu-vol["k8wrknode2"]: Destruction complete after 0s
libvirt_volume.ubuntu-vol["k8wrknode1"]: Destruction complete after 0s
random_id.id["k8scpnode"]: Destroying... [id=l5Rr1w]
random_id.id["k8wrknode2"]: Destroying... [id=VdUklQ]
random_id.id["k8wrknode1"]: Destroying... [id=Jpw2Sg]
random_id.id["k8wrknode2"]: Destruction complete after 0s
random_id.id["k8scpnode"]: Destruction complete after 0s
random_id.id["k8wrknode1"]: Destruction complete after 0s

Destroy complete! Resources: 16 destroyed.
```
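The `destroy.sh` script itself is not listed in this post; judging by the output above, it presumably just wraps `terraform destroy`, something like:

```bash
#!/bin/bash
# hypothetical sketch of destroy.sh - tear down the terraform-managed lab VMs
terraform destroy -auto-approve
```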
--------------------------------------------------------------------------------
/scripts/CNI_Calico.sh:
--------------------------------------------------------------------------------

```bash
#!/bin/bash

# Calico instead of Flannel as the CNI (Container Network Interface)

# remove flannel, if installed!
kubectl delete -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# open firewall ports
sudo ufw allow proto tcp from any to any port 443
# BGP
sudo ufw allow proto tcp from any to any port 179
sudo ufw allow proto udp from any to any port 9099

# install calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

echo "You need to open firewall ports on the worker nodes too!"
```
--------------------------------------------------------------------------------
/scripts/setup_k8s_control.sh:
--------------------------------------------------------------------------------

```bash
#!/usr/bin/bash

# Open the necessary ports on the CP's (control-plane node) firewall.
sudo ufw allow 6443/tcp
sudo ufw allow 2379:2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10259/tcp
sudo ufw allow 10257/tcp

# Get status
sudo ufw status

# Disable Swap
sudo swapoff -a
sudo sed -i 's/^swap/#swap/' /etc/fstab

# Always load on boot the k8s modules needed.
sudo tee /etc/modules-load.d/kubernetes.conf <<EOF
overlay
br_netfilter
EOF
```
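These helper scripts mirror the manual steps documented above; presumably they are meant to be run directly on the matching VM, for example:

```bash
# on the control-plane node, assuming this repository is cloned there
bash scripts/setup_k8s_control.sh
```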