├── 19.Helm_Charts.md ├── images ├── README.md ├── overview.PNG ├── ingress-app.png └── dash-board-v1.PNG ├── .gitignore └── README.md ├── config ├── iscsi-provisioner-pvc.yaml ├── iscsi-provisioner-class.yaml ├── iscsi-provisioner-d.yaml └── coredns.yaml ├── 03.Install-Client-Tools.md ├── 06.Data-encryption-keys.md ├── 14.Test-Loadbalancer-type.md ├── README.md ├── 12.DNS-Add-On.md ├── 10.Configuring-kubectl.md ├── 13.Load-Balancer.md ├── 11.Pod-Network-Routes.md ├── 02.Prerequisites-HA-LB-Configuration.md ├── 07.Bootstrapping-etcd.md ├── 18.Ingress-Controller-using-NGINX.md ├── 16.Dash-Board.md ├── 01.Prerequisites-VM-Configuration.md ├── 05.Kubernetes-configuration-files.md ├── 15.Deploy-Metric-Server.md ├── 17.Dynamic-iSCSI-Volume-Provisioner.md ├── 09.Bootstrapping-kubernetes-workers.md ├── 04.Certificate-Authority.md └── 08.Bootstrapping-kubernetes-controllers.md /19.Helm_Charts.md: -------------------------------------------------------------------------------- 1 | # "Helm" charts 2 | -------------------------------------------------------------------------------- /images/README.md: -------------------------------------------------------------------------------- 1 | #### Place to store all images used in this guide 2 | -------------------------------------------------------------------------------- /images/overview.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/HEAD/images/overview.PNG -------------------------------------------------------------------------------- /images/ingress-app.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/HEAD/images/ingress-app.png -------------------------------------------------------------------------------- /images/dash-board-v1.PNG: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/HEAD/images/dash-board-v1.PNG -------------------------------------------------------------------------------- /.gitignore/README.md: -------------------------------------------------------------------------------- 1 | # kubernetes-the-hardway-virtualbox 2 | Setup a kubernetes cluster in VirtualBox to prepare for CKA exam 3 | -------------------------------------------------------------------------------- /config/iscsi-provisioner-pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | name: myclaim 5 | annotations: 6 | volume.beta.kubernetes.io/storage-class: "iscsi-targetd-vg-targetd" 7 | spec: 8 | accessModes: 9 | - ReadWriteOnce 10 | resources: 11 | requests: 12 | storage: 100Mi 13 | -------------------------------------------------------------------------------- /config/iscsi-provisioner-class.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1 3 | metadata: 4 | name: iscsi-targetd-vg-targetd 5 | provisioner: iscsi-targetd 6 | parameters: 7 | # this id where the iscsi server is running 8 | targetPortal: 192.168.78.226:3260 9 | 10 | # this is the iscsi server iqn 11 | iqn: iqn.2003-01.org.linux-iscsi.linxlabs:targetd 12 | 13 | # this is the iscsi interface to be used, the default is default 14 | # iscsiInterface: default 15 | 16 | # this must be on 
eof the volume groups condifgured in targed.yaml, the default is vg-targetd 17 | # volumeGroup: vg-targetd 18 | 19 | # this is a comma separated list of initiators that will be give access to the created volumes, they must correspond to what you have configured in your nodes. 20 | initiators: iqn.1993-08.org.debian:01:worker-01,iqn.1993-08.org.debian:01:worker-02,iqn.1993-08.org.debian:01:worker-03 21 | 22 | # whether or not to use chap authentication for discovery operations 23 | chapAuthDiscovery: "false" 24 | 25 | # whether or not to use chap authentication for session operations 26 | chapAuthSession: "false" 27 | -------------------------------------------------------------------------------- /03.Install-Client-Tools.md: -------------------------------------------------------------------------------- 1 | # Installing the Client tools 2 | 3 | #### Execute below commands in both lb-01 and lb-02 4 | 5 | #### 1) Install cfssl to generate certificates 6 | ``` 7 | wget -q --show-progress --https-only --timestamping \ 8 | https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 \ 9 | https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 10 | ``` 11 | ``` 12 | chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 13 | ``` 14 | ``` 15 | sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl 16 | sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson 17 | ``` 18 | - Verification 19 | ``` 20 | cfssl version 21 | ``` 22 | - Output 23 | ``` 24 | Version: 1.2.0 25 | Revision: dev 26 | Runtime: go1.6 27 | ``` 28 | #### 2) Install kubectl 29 | ``` 30 | wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl 31 | ``` 32 | ``` 33 | chmod +x kubectl 34 | sudo mv kubectl /usr/local/bin/ 35 | ``` 36 | - Verification 37 | ``` 38 | kubectl version --client 39 | ``` 40 | - Output 41 | ``` 42 | Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} 43 | ``` 44 | Part 4 - [Certificate Authotiry and Certificates](04.Certificate-Authority.md) 45 | -------------------------------------------------------------------------------- /06.Data-encryption-keys.md: -------------------------------------------------------------------------------- 1 | # Generating the Data Encryption Config and Key 2 | 3 | Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to [encrypt](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data) cluster data at rest. 4 | 5 | In this lab you will generate an encryption key and an [encryption config](https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/#understanding-the-encryption-at-rest-configuration) suitable for encrypting Kubernetes Secrets. 6 | 7 | ## The Encryption Key 8 | 9 | Generate an encryption key: 10 | 11 | ``` 12 | ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64) 13 | ``` 14 | 15 | ## The Encryption Config File 16 | 17 | Create the `encryption-config.yaml` encryption config file: 18 | 19 | ``` 20 | cat > encryption-config.yaml < 443/TCP 14h 20 | nginx LoadBalancer 172.168.171.249 192.168.78.50 80:32057/TCP 4s 21 | ``` 22 | - Access nginx using loadbalncer external IP 23 | ``` 24 | $ curl 192.168.78.50 25 | ``` 26 | - Output 27 | ``` 28 | 29 | 30 | 31 | Welcome to nginx! 32 | 39 | 40 | 41 |

Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

51 | 52 | 53 | ``` 54 | Part 15 - [Deploy Metric Server](15.Deploy-Metric-Server.md) 55 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # kubernetes-the-hardway-virtualbox 2 | Set up a Kubernetes cluster in VirtualBox 3 | 4 | This guide will give an overall idea of how each component works in Kubernetes. 5 | 6 | #### Cluster overview 7 | ![Overview](https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/images/overview.PNG) 8 | 9 | 10 | Part 1 - [Prerequisites](01.Prerequisites-VM-Configuration.md) 11 | 12 | Part 2 - [A Virtual LoadBalancer for a Highly Available API Server](02.Prerequisites-HA-LB-Configuration.md) 13 | 14 | Part 3 - [Install Client Tools](03.Install-Client-Tools.md) 15 | 16 | Part 4 - [Certificate Authority and Certificates](04.Certificate-Authority.md) 17 | 18 | Part 5 - [Kubernetes Configuration Files](05.Kubernetes-configuration-files.md) 19 | 20 | Part 6 - [Generating the Data Encryption Config and Key](06.Data-encryption-keys.md) 21 | 22 | Part 7 - [Bootstrapping the etcd Cluster](07.Bootstrapping-etcd.md) 23 | 24 | Part 8 - [Bootstrapping the Kubernetes Control Plane](08.Bootstrapping-kubernetes-controllers.md) 25 | 26 | Part 9 - [Bootstrapping the Kubernetes Worker Nodes](09.Bootstrapping-kubernetes-workers.md) 27 | 28 | Part 10 - [Configuring kubectl for Remote Access](10.Configuring-kubectl.md) 29 | 30 | Part 11 - [Provisioning Pod Network Routes](11.Pod-Network-Routes.md) 31 | 32 | Part 12 - [Deploying the DNS Cluster Add-on](12.DNS-Add-On.md) 33 | 34 | Part 13 - [Bare Metal Load Balancer Configuration](13.Load-Balancer.md) 35 | 36 | Part 14 - [Test LoadBalancer](14.Test-Loadbalancer-type.md) 37 | 38 | Part 15 - [Deploy Metric Server](15.Deploy-Metric-Server.md) 39 | 40 | Part 16 - [Deploy Dashboard](16.Dash-Board.md) 41 | 42 | Part 17 - [Dynamic iSCSI Volume Provisioner](17.Dynamic-iSCSI-Volume-Provisioner.md) 43 | 44 | Part 18 - [Ingress Controller using NGINX](18.Ingress-Controller-using-NGINX.md) 45 | 46 | Note: Unless specified otherwise, all commands should be executed from `lb-01` 47 | -------------------------------------------------------------------------------- /12.DNS-Add-On.md: -------------------------------------------------------------------------------- 1 | # Deploying the DNS Cluster Add-on 2 | 3 | In this lab you will deploy the [DNS add-on](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/), which provides DNS-based service discovery, backed by [CoreDNS](https://coredns.io/), to applications running inside the Kubernetes cluster. 
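Once the add-on is running, every Service gets a DNS name of the form `<service>.<namespace>.svc.cluster.local`, and the kubelet points each Pod's resolver at the kube-dns Service IP (172.168.0.2 in this guide). A quick way to confirm that resolver wiring once the deployment below is done (a sketch; the `dns-check` Pod name and the busybox image are arbitrary choices, not part of this guide):
```
kubectl run -it --rm --restart=Never --image=busybox:1.28 dns-check -- cat /etc/resolv.conf
```
The output should show `nameserver 172.168.0.2` together with the `cluster.local` search domains.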
4 | 5 | ## Enable resolved 6 | ``` 7 | for instance in worker-01 worker-02 worker-03 8 | do 9 | ssh -t k8s@${instance} sudo systemctl enable systemd-resolved.service 10 | ssh -t k8s@${instance} sudo systemctl start systemd-resolved.service 11 | done 12 | ``` 13 | ## The DNS Cluster Add-on 14 | 15 | Deploy the `coredns` cluster add-on: 16 | 17 | ``` 18 | kubectl apply -f https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/config/coredns.yaml 19 | ``` 20 | 21 | > output 22 | 23 | ``` 24 | serviceaccount/coredns created 25 | clusterrole.rbac.authorization.k8s.io/system:coredns created 26 | clusterrolebinding.rbac.authorization.k8s.io/system:coredns created 27 | configmap/coredns created 28 | deployment.extensions/coredns created 29 | service/kube-dns created 30 | ``` 31 | 32 | List the pods created by the `kube-dns` deployment: 33 | 34 | ``` 35 | kubectl get pods -l k8s-app=kube-dns -n kube-system 36 | ``` 37 | 38 | > output 39 | 40 | ``` 41 | NAME READY STATUS RESTARTS AGE 42 | coredns-699f8ddd77-5bszs 1/1 Running 0 11m 43 | coredns-699f8ddd77-bz2jd 1/1 Running 0 11m 44 | ``` 45 | 46 | ## Verification 47 | 48 | Create a run-once `dnstool` Pod: 49 | 50 | ``` 51 | kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools 52 | ``` 53 | - From temonal execute nslookup 54 | ``` 55 | dnstools# time nslookup kubernetes 56 | Server: 172.168.0.2 57 | Address: 172.168.0.2#53 58 | 59 | Name: kubernetes.default.svc.cluster.local 60 | Address: 172.168.0.1 61 | 62 | real 0m 0.00s 63 | user 0m 0.00s 64 | sys 0m 0.00s 65 | ``` 66 | - Exit from terminal (Pod will be removed automatically after exit.!) 67 | 68 | Part 13 - [Bare Metal Load Balancer Configuration](13.Load-Balancer.md) 69 | -------------------------------------------------------------------------------- /10.Configuring-kubectl.md: -------------------------------------------------------------------------------- 1 | # Configuring kubectl for Remote Access 2 | 3 | In this lab you will generate a kubeconfig file for the `kubectl` command line utility based on the `admin` user credentials. 4 | 5 | > Run the commands in this lab from the same directory used to generate the admin client certificates.(lb-01) 6 | 7 | ## The Admin Kubernetes Configuration File 8 | 9 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the external load balancer fronting the Kubernetes API Servers will be used. 
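Before generating the kubeconfig it may be worth confirming that the load balancer VIP really fronts the API servers. A minimal check (a sketch, assuming anonymous access to the discovery endpoints has not been disabled, which is the default in this Kubernetes release):
```
curl --cacert ca.pem https://192.168.78.220:6443/version
```
A JSON response containing `"gitVersion": "v1.12.0"` indicates HAProxy is forwarding traffic to a healthy controller.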
10 | 11 | Generate a kubeconfig file suitable for authenticating as the `admin` user: 12 | 13 | ``` 14 | { 15 | KUBERNETES_PUBLIC_ADDRESS=192.168.78.220 16 | 17 | kubectl config set-cluster kubernetes-the-hard-way \ 18 | --certificate-authority=ca.pem \ 19 | --embed-certs=true \ 20 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 21 | 22 | kubectl config set-credentials admin \ 23 | --client-certificate=admin.pem \ 24 | --client-key=admin-key.pem 25 | 26 | kubectl config set-context kubernetes-the-hard-way \ 27 | --cluster=kubernetes-the-hard-way \ 28 | --user=admin 29 | 30 | kubectl config use-context kubernetes-the-hard-way 31 | } 32 | ``` 33 | 34 | ## Verification 35 | 36 | Check the health of the remote Kubernetes cluster: 37 | 38 | ``` 39 | kubectl get componentstatuses 40 | ``` 41 | 42 | > output 43 | 44 | ``` 45 | NAME STATUS MESSAGE ERROR 46 | controller-manager Healthy ok 47 | scheduler Healthy ok 48 | etcd-1 Healthy {"health":"true"} 49 | etcd-2 Healthy {"health":"true"} 50 | etcd-0 Healthy {"health":"true"} 51 | ``` 52 | 53 | List the nodes in the remote Kubernetes cluster: 54 | 55 | ``` 56 | kubectl get nodes 57 | ``` 58 | 59 | > output 60 | 61 | ``` 62 | NAME STATUS ROLES AGE VERSION 63 | worker-01 Ready 9m46s v1.12.0 64 | worker-02 Ready 9m46s v1.12.0 65 | worker-03 Ready 9m46s v1.12.0 66 | 67 | ``` 68 | 69 | Part 11 - [Provisioning Pod Network Routes](11.Pod-Network-Routes.md) 70 | -------------------------------------------------------------------------------- /config/iscsi-provisioner-d.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRole 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: iscsi-provisioner-runner 5 | rules: 6 | - apiGroups: [""] 7 | resources: ["persistentvolumes"] 8 | verbs: ["get", "list", "watch", "create", "delete"] 9 | - apiGroups: [""] 10 | resources: ["persistentvolumeclaims"] 11 | verbs: ["get", "list", "watch", "update"] 12 | - apiGroups: ["storage.k8s.io"] 13 | resources: ["storageclasses"] 14 | verbs: ["get", "list", "watch"] 15 | - apiGroups: [""] 16 | resources: ["events"] 17 | verbs: ["create", "update", "patch"] 18 | --- 19 | kind: ClusterRoleBinding 20 | apiVersion: rbac.authorization.k8s.io/v1 21 | metadata: 22 | name: run-iscsi-provisioner 23 | subjects: 24 | - kind: ServiceAccount 25 | name: iscsi-provisioner 26 | namespace: default 27 | roleRef: 28 | kind: ClusterRole 29 | name: iscsi-provisioner-runner 30 | apiGroup: rbac.authorization.k8s.io 31 | --- 32 | apiVersion: v1 33 | kind: ServiceAccount 34 | metadata: 35 | name: iscsi-provisioner 36 | --- 37 | kind: Deployment 38 | apiVersion: extensions/v1beta1 39 | metadata: 40 | name: iscsi-provisioner 41 | spec: 42 | replicas: 1 43 | template: 44 | metadata: 45 | labels: 46 | app: iscsi-provisioner 47 | spec: 48 | containers: 49 | - name: iscsi-provisioner 50 | imagePullPolicy: Always 51 | image: quay.io/external_storage/iscsi-controller:latest 52 | args: 53 | - "start" 54 | env: 55 | - name: PROVISIONER_NAME 56 | value: iscsi-targetd 57 | - name: LOG_LEVEL 58 | value: debug 59 | - name: TARGETD_USERNAME 60 | valueFrom: 61 | secretKeyRef: 62 | name: targetd-account 63 | key: username 64 | - name: TARGETD_PASSWORD 65 | valueFrom: 66 | secretKeyRef: 67 | name: targetd-account 68 | key: password 69 | - name: TARGETD_ADDRESS 70 | value: 192.168.78.226 71 | serviceAccount: iscsi-provisioner 72 | -------------------------------------------------------------------------------- /13.Load-Balancer.md: 
-------------------------------------------------------------------------------- 1 | # LoadBalancer 2 | 3 | [MetalLB](https://metallb.universe.tf/) hooks into your Kubernetes cluster, and provides a network load-balancer implementation. 4 | In short, it allows you to create Kubernetes services of type “LoadBalancer” in clusters that don’t run on a cloud provider, and thus cannot simply hook into paid products to provide load-balancers. 5 | ``` 6 | kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.7.3/manifests/metallb.yaml 7 | ``` 8 | 9 | This will deploy MetalLB to your cluster, under the metallb-system namespace. The components in the manifest are: 10 | 11 | - The `metallb-system/controller` deployment. This is the cluster-wide controller that handles IP address assignments. 12 | - The `metallb-system/speaker` daemonset. This is the component that speaks the protocol(s) of your choice to make the services reachable. 13 | - Service accounts for the controller and speaker, along with the RBAC permissions that the components need to function. 14 | 15 | - Verify 16 | ``` 17 | kubectl get pods -n metallb-system -o wide 18 | ``` 19 | - Output 20 | ``` 21 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE 22 | controller-765899887-g9pl7 1/1 Running 0 36s 10.10.3.2 worker-03 23 | speaker-78znz 1/1 Running 0 36s 192.168.78.213 worker-03 24 | speaker-kvb4q 1/1 Running 0 36s 192.168.78.211 worker-01 25 | speaker-mmlzl 1/1 Running 0 36s 192.168.78.212 worker-02 26 | ``` 27 | The installation manifest does not include a configuration file. MetalLB’s components will still start, but will remain idle until you define and deploy a configmap. 28 | 29 | Layer 2 mode is the simplest to configure: in many cases, you don’t need any protocol-specific configuration, only IP addresses. 30 | For example, the following configuration gives MetalLB control over IPs from `192.168.78.50` to `192.168.78.150`, and configures Layer 2 mode: 31 | 32 | Lets create a configmap yaml 33 | ``` 34 | vi metallb-config.yaml 35 | ``` 36 | ``` 37 | apiVersion: v1 38 | kind: ConfigMap 39 | metadata: 40 | namespace: metallb-system 41 | name: config 42 | data: 43 | config: | 44 | address-pools: 45 | - name: default 46 | protocol: layer2 47 | addresses: 48 | - 192.168.78.50-192.168.78.150 49 | ``` 50 | Create config map 51 | ``` 52 | kubectl create -f metallb-config.yaml 53 | ``` 54 | Part 14 - [Test LoadBalancer](14.Test-Loadbalancer-type.md) 55 | -------------------------------------------------------------------------------- /11.Pod-Network-Routes.md: -------------------------------------------------------------------------------- 1 | # Provisioning Pod Network Routes 2 | 3 | Pods scheduled to a node receive an IP address from the node's Pod CIDR range. At this point pods can not communicate with other pods running on different nodes due to missing routes which is responsible to do NAT 4 | 5 | In this lab you will create a route for each worker node that routes Pod IP to 192.168.78.0/24 network . 6 | 7 | > There are [other ways](https://kubernetes.io/docs/concepts/cluster-administration/networking/#how-to-achieve-this) to implement the Kubernetes networking model. 8 | 9 | ## The Routing Table 10 | 11 | In this section you will gather the information required to create routes in the worker nodes. 
12 | 13 | IP address and Pod CIDR range for each worker instance: 14 | 15 | ``` 16 | +------------+------------------+----------------+ 17 | | Worker | IP Address | Pod CIDR | 18 | +------------+------------------+----------------+ 19 | | worker-01 | 192.168.78.211 | 10.10.1.0/24 | 20 | | worker-02 | 192.168.78.212 | 10.10.2.0/24 | 21 | | worker-03 | 192.168.78.213 | 10.10.3.0/24 | 22 | +------------+------------------+----------------+ 23 | 24 | ``` 25 | 26 | 27 | ## Routes 28 | 29 | Create network routes for each worker instance: 30 | 31 | ``` 32 | ROUTE_DATA="worker-01,192.168.78.211,10.10.1.0/24 33 | worker-02,192.168.78.212,10.10.2.0/24 34 | worker-03,192.168.78.213,10.10.3.0/24" 35 | 36 | for instance in worker-01 worker-02 worker-03 37 | do 38 | ROUTE_MINI=$(echo "${ROUTE_DATA}" | grep -v -w ${instance}) 39 | for ROUTE in ${ROUTE_MINI} 40 | do 41 | echo "#### Adding Route for ${instance} ####" 42 | NET=$(echo ${ROUTE} |awk -F "," '{print $3}') 43 | GW=$(echo ${ROUTE} |awk -F "," '{print $2}') 44 | ssh -t k8s@${instance} sudo -i "echo -e \"\tpost-up route add -net ${NET} gw ${GW}\"|sudo tee --append /etc/network/interfaces" 45 | ssh -t k8s@${instance} sudo -i "route add -net ${NET} gw ${GW}" 46 | done 47 | done 48 | 49 | ``` 50 | List all routes 51 | 52 | ``` 53 | for instance in worker-01 worker-02 worker-03; do ssh k8s@${instance} "ip route show|grep ^10.10"; done 54 | ``` 55 | - Output 56 | ``` 57 | k8s@worker-01's password: 58 | 10.10.2.0/24 via 192.168.78.212 dev enp0s8 59 | 10.10.3.0/24 via 192.168.78.213 dev enp0s8 60 | k8s@worker-02's password: 61 | 10.10.1.0/24 via 192.168.78.211 dev enp0s8 62 | 10.10.3.0/24 via 192.168.78.213 dev enp0s8 63 | k8s@worker-03's password: 64 | 10.10.1.0/24 via 192.168.78.211 dev enp0s8 65 | 10.10.2.0/24 via 192.168.78.212 dev enp0s8 66 | ``` 67 | 68 | Part 12 - [Deploying the DNS Cluster Add-on](12.DNS-Add-On.md) 69 | -------------------------------------------------------------------------------- /02.Prerequisites-HA-LB-Configuration.md: -------------------------------------------------------------------------------- 1 | # Prerequisite - HA LoadBalancer configuration 2 | 3 | - DO NOT USE THIS CONFIGURATION FOR PRODUCTION USE. THIS CONFIG IS FOR EDUCATIONAL PURPOSE ONLY 4 | ## HA-Proxy Load Balancer cluster using Keepalived 5 | 6 | We need a floating IP for the loadbalance which will move across two LB nodes when ever there is a node failure. 7 | HA-Proxy will be running on both hosts and will take over new connections from the node once folating IP become active on that node. 8 | In this way, we will have a highly available LoadBalancer for Kubernetes API communication. 
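At any point after the configuration below you can check which node currently owns the floating IP; only one of the two LB nodes should report it (a sketch, assuming the HostOnly interface is `enp0s8`, as used in the keepalived configuration below):
```
ip -4 addr show enp0s8 | grep 192.168.78.220
```
Stopping haproxy on that node should make the address move to the other node within a few seconds.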
9 | 10 | ##### 1) Install & Configure HA-Proxy on both lb-01 and lb-02 11 | ``` 12 | sudo apt-get install haproxy hatop 13 | sudo sed -i -e "s/ENABLED=1/ENABLED=0/g" /etc/default/haproxy 14 | sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-orig 15 | ``` 16 | - Configure HA-Proxy to pass traffic to controller VMs 17 | ``` 18 | sudo vi /etc/haproxy/haproxy.cfg 19 | ``` 20 | - Copy & paste below contents to the file 21 | ``` 22 | global 23 | log /var/lib/haproxy/dev/log local0 debug 24 | maxconn 2000 25 | user haproxy 26 | group haproxy 27 | defaults 28 | log global 29 | mode http 30 | option httplog 31 | option dontlognull 32 | retries 3 33 | option redispatch 34 | timeout connect 5000 35 | timeout client 1000000000 36 | timeout server 1000000000 37 | frontend k8s-api 38 | bind 192.168.78.220:6443 39 | bind 127.0.0.1:6443 40 | mode tcp 41 | option tcplog 42 | default_backend k8s-api 43 | 44 | backend k8s-api 45 | mode tcp 46 | option tcplog 47 | option tcp-check 48 | balance source 49 | default-server inter 600s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100 50 | server controller-01 192.168.78.201:6443 check 51 | server controller-02 192.168.78.202:6443 check 52 | server controller-03 192.168.78.203:6443 check 53 | ``` 54 | 55 | ##### 2) Install & Configure Keepalived on both lb-01 and lb-02 56 | 57 | ``` 58 | sudo apt-get install keepalived 59 | sudo bash -c 'echo "net.ipv4.ip_nonlocal_bind=1" >> /etc/sysctl.conf' 60 | sudo sysctl -p 61 | ``` 62 | - Configure keepalived to for an HA floating IP 63 | ``` 64 | sudo vi /etc/keepalived/keepalived.conf 65 | ``` 66 | - Copy & paste below contents to the file 67 | ``` 68 | vrrp_script chk_haproxy { 69 | script "killall -0 haproxy" 70 | interval 2 71 | weight 2 72 | } 73 | 74 | vrrp_instance VI_1 { 75 | interface enp0s8 76 | state MASTER 77 | virtual_router_id 91 78 | priority 101 79 | virtual_ipaddress { 80 | 192.168.78.220/24 81 | } 82 | track_script { 83 | chk_haproxy 84 | } 85 | } 86 | ``` 87 | - Start keepalived service 88 | ``` 89 | sudo service keepalived start 90 | ``` 91 | - Restart rsyslogd to start ha-proxy logging 92 | ``` 93 | sudo service rsyslog start 94 | ``` 95 | Part 3 - [Install Client Tools](03.Install-Client-Tools.md) 96 | -------------------------------------------------------------------------------- /config/coredns.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: coredns 5 | namespace: kube-system 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1beta1 8 | kind: ClusterRole 9 | metadata: 10 | labels: 11 | kubernetes.io/bootstrapping: rbac-defaults 12 | name: system:coredns 13 | rules: 14 | - apiGroups: 15 | - "" 16 | resources: 17 | - endpoints 18 | - services 19 | - pods 20 | - namespaces 21 | verbs: 22 | - list 23 | - watch 24 | --- 25 | apiVersion: rbac.authorization.k8s.io/v1beta1 26 | kind: ClusterRoleBinding 27 | metadata: 28 | annotations: 29 | rbac.authorization.kubernetes.io/autoupdate: "true" 30 | labels: 31 | kubernetes.io/bootstrapping: rbac-defaults 32 | name: system:coredns 33 | roleRef: 34 | apiGroup: rbac.authorization.k8s.io 35 | kind: ClusterRole 36 | name: system:coredns 37 | subjects: 38 | - kind: ServiceAccount 39 | name: coredns 40 | namespace: kube-system 41 | --- 42 | apiVersion: v1 43 | kind: ConfigMap 44 | metadata: 45 | name: coredns 46 | namespace: kube-system 47 | data: 48 | Corefile: | 49 | .:53 { 50 | errors 51 | health 52 | kubernetes 
cluster.local in-addr.arpa ip6.arpa { 53 | pods insecure 54 | upstream 55 | fallthrough in-addr.arpa ip6.arpa 56 | } 57 | prometheus :9153 58 | proxy . /etc/resolv.conf 59 | cache 30 60 | loop 61 | reload 62 | loadbalance 63 | } 64 | --- 65 | apiVersion: extensions/v1beta1 66 | kind: Deployment 67 | metadata: 68 | name: coredns 69 | namespace: kube-system 70 | labels: 71 | k8s-app: kube-dns 72 | kubernetes.io/name: "CoreDNS" 73 | spec: 74 | replicas: 2 75 | strategy: 76 | type: RollingUpdate 77 | rollingUpdate: 78 | maxUnavailable: 1 79 | selector: 80 | matchLabels: 81 | k8s-app: kube-dns 82 | template: 83 | metadata: 84 | labels: 85 | k8s-app: kube-dns 86 | spec: 87 | serviceAccountName: coredns 88 | tolerations: 89 | - key: node-role.kubernetes.io/master 90 | effect: NoSchedule 91 | - key: "CriticalAddonsOnly" 92 | operator: "Exists" 93 | containers: 94 | - name: coredns 95 | image: coredns/coredns:1.2.2 96 | imagePullPolicy: IfNotPresent 97 | resources: 98 | limits: 99 | memory: 170Mi 100 | requests: 101 | cpu: 100m 102 | memory: 70Mi 103 | args: [ "-conf", "/etc/coredns/Corefile" ] 104 | volumeMounts: 105 | - name: config-volume 106 | mountPath: /etc/coredns 107 | readOnly: true 108 | ports: 109 | - containerPort: 53 110 | name: dns 111 | protocol: UDP 112 | - containerPort: 53 113 | name: dns-tcp 114 | protocol: TCP 115 | - containerPort: 9153 116 | name: metrics 117 | protocol: TCP 118 | securityContext: 119 | allowPrivilegeEscalation: false 120 | capabilities: 121 | add: 122 | - NET_BIND_SERVICE 123 | drop: 124 | - all 125 | readOnlyRootFilesystem: true 126 | livenessProbe: 127 | httpGet: 128 | path: /health 129 | port: 8080 130 | scheme: HTTP 131 | initialDelaySeconds: 60 132 | timeoutSeconds: 5 133 | successThreshold: 1 134 | failureThreshold: 5 135 | dnsPolicy: Default 136 | volumes: 137 | - name: config-volume 138 | configMap: 139 | name: coredns 140 | items: 141 | - key: Corefile 142 | path: Corefile 143 | --- 144 | apiVersion: v1 145 | kind: Service 146 | metadata: 147 | name: kube-dns 148 | namespace: kube-system 149 | annotations: 150 | prometheus.io/port: "9153" 151 | prometheus.io/scrape: "true" 152 | labels: 153 | k8s-app: kube-dns 154 | kubernetes.io/cluster-service: "true" 155 | kubernetes.io/name: "CoreDNS" 156 | spec: 157 | selector: 158 | k8s-app: kube-dns 159 | clusterIP: 172.168.0.2 160 | ports: 161 | - name: dns 162 | port: 53 163 | protocol: UDP 164 | - name: dns-tcp 165 | port: 53 166 | protocol: TCP 167 | -------------------------------------------------------------------------------- /07.Bootstrapping-etcd.md: -------------------------------------------------------------------------------- 1 | # Bootstrapping the etcd Cluster 2 | 3 | Kubernetes components are stateless and store cluster state in [etcd](https://github.com/coreos/etcd). In this lab you will bootstrap a three node etcd cluster and configure it for high availability and secure remote access. 4 | 5 | ## Prerequisites 6 | 7 | The commands in this lab must be run on each controller instance: `controller-01`, `controller-02`, and `controller-03`. Login to each controller instance using the `k8s` user. Example: 8 | 9 | ``` 10 | ssh k8s@controller-01 11 | ``` 12 | 13 | ### Running commands in parallel with tmux 14 | 15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. 
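A minimal tmux workflow for this lab could look like the following (a sketch; the key bindings assume the default `Ctrl-b` prefix, and pane synchronization is optional):
```
tmux new-session -s etcd        # start a session
# Ctrl-b "   -> split the window; ssh to a different controller in each pane
# Ctrl-b :   -> open the tmux command prompt, then type:
setw synchronize-panes on       # everything typed now goes to all panes
```
Turn synchronization off again (`setw synchronize-panes off`) before running anything node-specific.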
16 | 17 | ## Bootstrapping an etcd Cluster Member 18 | 19 | ### Download and Install the etcd Binaries 20 | 21 | Download the official etcd release binaries from the [coreos/etcd](https://github.com/coreos/etcd) GitHub project: 22 | 23 | ``` 24 | wget -q --show-progress --https-only --timestamping \ 25 | "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz" 26 | ``` 27 | 28 | Extract and install the `etcd` server and the `etcdctl` command line utility: 29 | 30 | ``` 31 | { 32 | tar -xvf etcd-v3.3.9-linux-amd64.tar.gz 33 | sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/ 34 | } 35 | ``` 36 | 37 | ### Configure the etcd Server 38 | 39 | ``` 40 | { 41 | sudo mkdir -p /etc/etcd /var/lib/etcd 42 | sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/ 43 | } 44 | ``` 45 | 46 | The instance internal IP address will be used to serve client requests and communicate with etcd cluster peers. Retrieve the internal IP address for the current compute instance: 47 | 48 | ``` 49 | INTERNAL_IP=$(grep -w $(hostname) /etc/hosts |awk '{print $1}') 50 | ``` 51 | 52 | Each etcd member must have a unique name within an etcd cluster. Set the etcd name to match the hostname of the current compute instance: 53 | 54 | ``` 55 | ETCD_NAME=$(hostname -s) 56 | ``` 57 | 58 | Create the `etcd.service` systemd unit file: 59 | 60 | ``` 61 | cat < Remember to run the above commands on each controller node: `controller-01`, `controller-02`, and `controller-03`. 104 | 105 | ## Verification 106 | 107 | List the etcd cluster members: 108 | 109 | ``` 110 | sudo ETCDCTL_API=3 etcdctl member list \ 111 | --endpoints=https://127.0.0.1:2379 \ 112 | --cacert=/etc/etcd/ca.pem \ 113 | --cert=/etc/etcd/kubernetes.pem \ 114 | --key=/etc/etcd/kubernetes-key.pem 115 | ``` 116 | 117 | > output 118 | 119 | ``` 120 | ff3c9dc8bc4ff6e, started, controller-01, https://192.168.78.201:2380, https://192.168.78.201:2379 121 | adfbdba88b62084e, started, controller-02, https://192.168.78.202:2380, https://192.168.78.202:2379 122 | b9a01cb565f3c5e8, started, controller-03, https://192.168.78.203:2380, https://192.168.78.203:2379 123 | ``` 124 | 125 | Part 8 - [Bootstrapping the Kubernetes Control Plane](08.Bootstrapping-kubernetes-controllers.md) 126 | -------------------------------------------------------------------------------- /18.Ingress-Controller-using-NGINX.md: -------------------------------------------------------------------------------- 1 | # Ingress Controller Using NGINX 2 | 3 | - Deploy Ingress controller and default backend. 
4 | ``` 5 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml 6 | ``` 7 | - Verify ingress controller Pod started or not 8 | ``` 9 | kubectl get all -n ingress-nginx 10 | ``` 11 | - Output 12 | ``` 13 | NAME READY STATUS RESTARTS AGE 14 | pod/nginx-ingress-controller-5bffb7dd4d-58vpp 1/1 Running 0 6m34s 15 | 16 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 17 | service/ingress-nginx LoadBalancer 172.168.6.87 192.168.31.51 80:31892/TCP,443:31951/TCP 4m34s 18 | 19 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 20 | deployment.apps/nginx-ingress-controller 1 1 1 1 6m34s 21 | 22 | NAME DESIRED CURRENT READY AGE 23 | replicaset.apps/nginx-ingress-controller-5bffb7dd4d 1 1 1 6m34s 24 | ``` 25 | - Verify NGINX ingress version 26 | ``` 27 | POD_NAMESPACE=ingress-nginx 28 | POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') 29 | kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version 30 | ``` 31 | - Output 32 | ``` 33 | ------------------------------------------------------------------------------- 34 | NGINX Ingress controller 35 | Release: 0.19.0 36 | Build: git-05025d6 37 | Repository: https://github.com/kubernetes/ingress-nginx.git 38 | ------------------------------------------------------------------------------- 39 | ``` 40 | - Create ingress-nginx service 41 | 42 | ``` 43 | vi ingress-nginx.yaml 44 | ``` 45 | 46 | ``` 47 | kind: Service 48 | apiVersion: v1 49 | metadata: 50 | name: ingress-nginx 51 | namespace: ingress-nginx 52 | labels: 53 | app.kubernetes.io/name: ingress-nginx 54 | app.kubernetes.io/part-of: ingress-nginx 55 | spec: 56 | externalTrafficPolicy: Local 57 | type: LoadBalancer 58 | selector: 59 | app.kubernetes.io/name: ingress-nginx 60 | ports: 61 | - name: http 62 | port: 80 63 | targetPort: http 64 | - name: https 65 | port: 443 66 | targetPort: https 67 | ``` 68 | - Create Ingress Service 69 | ``` 70 | kubectl create -f ingress-nginx.yaml 71 | ``` 72 | 73 | - Verify ingress service 74 | 75 | ``` 76 | kubectl get svc -n ingress-nginx 77 | ``` 78 | 79 | - Output 80 | 81 | ``` 82 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 83 | ingress-nginx LoadBalancer 172.168.34.173 192.168.78.51 80:31709/TCP,443:32254/TCP 39s 84 | ``` 85 | 86 | ## Deploy `coffee` and `tea` application and configure ingress to access it with same hostname 87 | 88 | - Set environments 89 | ``` 90 | IC_IP=192.168.78.51 91 | IC_HTTPS_PORT=443 92 | ``` 93 | - Create ingress 94 | ``` 95 | kubectl create -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/examples/complete-example/cafe-ingress.yaml 96 | ``` 97 | - Deploy application pods 98 | ``` 99 | kubectl create -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/examples/complete-example/cafe.yaml 100 | ``` 101 | - Deploy certificates and key for HTTPS access 102 | ``` 103 | kubectl create -f https://raw.githubusercontent.com/nginxinc/kubernetes-ingress/master/examples/complete-example/cafe-secret.yaml 104 | ``` 105 | - Wait for few minites and make sure you got ADDRESS field populated in ingress 106 | ``` 107 | kubectl get ingresses.extensions 108 | ``` 109 | - Output 110 | ``` 111 | NAME HOSTS ADDRESS PORTS AGE 112 | cafe-ingress cafe.example.com 192.168.78.51 80, 443 2m21s 113 | ``` 114 | - Test `coffee` app 115 | 116 | ``` 117 | curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/coffee --insecure 118 | ``` 
119 | - Output 120 | ``` 121 | Server address: 10.10.2.25:80 122 | Server name: coffee-56668d6f78-mdghx 123 | Date: 04/Oct/2018:15:20:50 +0000 124 | URI: /coffee 125 | ``` 126 | - Test `tea` app 127 | 128 | ``` 129 | curl --resolve cafe.example.com:$IC_HTTPS_PORT:$IC_IP https://cafe.example.com:$IC_HTTPS_PORT/tea --insecure 130 | ``` 131 | - Output 132 | ``` 133 | Server address: 10.10.1.23:80 134 | Server name: tea-85f8bf86fd-49tp4 135 | Date: 04/Oct/2018:15:20:58 +0000 136 | URI: /tea 137 | Request ID: 535dc70dab7f31c3d22b317d4570a289 138 | ``` 139 | 140 | - You may also add entries in Host file (``/etc/hosts`` in Linux and ``C:\Windows\System32\drivers\etc\hosts`` for windows ) 141 | ``` 142 | 192.168.78.51 cafe.example.com 143 | ``` 144 | - Now you can use your browser to access the application 145 | 146 | ![Ingress sample output](https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/images/ingress-app.png) 147 | -------------------------------------------------------------------------------- /16.Dash-Board.md: -------------------------------------------------------------------------------- 1 | # DashBoard 2 | 3 | ![Web UI - Dashboard](https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/images/dash-board-v1.PNG) 4 | 5 | Heapster is depricated , but DashBoard still need Heapster , so we need to have both heapster and metric-server in our cluster for Dashboard and HPA 6 | 7 | - We need additional permission to one of the subresouce `stats` in `node` ; but the default `system:heapster` role don't have it. 8 | 9 | So lets add that first 10 | 11 | ``` 12 | kubectl edit clusterroles system:heapster 13 | ``` 14 | Add below lines to the end and save the file 15 | ``` 16 | - apiGroups: 17 | - "" 18 | resources: 19 | - nodes/stats 20 | verbs: 21 | - get 22 | ``` 23 | 24 | - Create a directory to store manifests 25 | ``` 26 | mkdir dash-board 27 | cd dash-board 28 | ``` 29 | 30 | - File lists 31 | ``` 32 | YAML_FILES="grafana.yaml 33 | heapster.yaml 34 | influxdb.yaml" 35 | ``` 36 | 37 | - Download manifests 38 | ``` 39 | for FILE in ${YAML_FILES} 40 | do 41 | wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/${FILE} 42 | done 43 | cd .. 
44 | wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml 45 | ``` 46 | - Edit heapster.yaml and replace --source=kubernetes:https://kubernetes.default with below 47 | ``` 48 | --source=kubernetes.summary_api:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true 49 | ``` 50 | - Deploy heapster and its dependancies 51 | ``` 52 | kubectl create -f dash-board 53 | kubectl create -f heapster-rbac.yaml 54 | ``` 55 | 56 | - Make sure Pods with STATUS Running and RESTARTS count 0 (or not incresing if you were troubleshooting issues) 57 | ``` 58 | kubectl get pods --namespace=kube-system 59 | ``` 60 | - Output 61 | ``` 62 | NAME READY STATUS RESTARTS AGE 63 | coredns-699f8ddd77-5bszs 1/1 Running 0 6h30m 64 | coredns-699f8ddd77-bz2jd 1/1 Running 0 6h30m 65 | heapster-684777c4cb-x5tfs 1/1 Running 0 74s 66 | metrics-server-76ff48d4cd-tcsbn 1/1 Running 0 36m 67 | monitoring-grafana-56b668bccf-4rg9g 1/1 Running 0 75s 68 | monitoring-influxdb-5c5bf4949d-q8hfw 1/1 Running 0 75s 69 | ``` 70 | - Verify monitoring-grafana monitoring-influxdb services 71 | ``` 72 | kubectl get services --namespace=kube-system monitoring-grafana monitoring-influxdb 73 | ``` 74 | - Output 75 | 76 | ``` 77 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 78 | monitoring-grafana ClusterIP 172.168.63.17 80/TCP 63s 79 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 80 | monitoring-influxdb ClusterIP 172.168.178.68 8086/TCP 63s 81 | ``` 82 | - Verify cluster services endpoints 83 | ``` 84 | kubectl cluster-info 85 | ``` 86 | - Output 87 | 88 | ``` 89 | Kubernetes master is running at https://192.168.78.220:6443 90 | Heapster is running at https://192.168.78.220:6443/api/v1/namespaces/kube-system/services/heapster/proxy 91 | CoreDNS is running at https://192.168.78.220:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 92 | monitoring-grafana is running at https://192.168.78.220:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy 93 | monitoring-influxdb is running at https://192.168.78.220:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy 94 | ``` 95 | - Deploy DashBoard 96 | ``` 97 | kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml 98 | ``` 99 | - Make sure dash-board Pod is running 100 | ``` 101 | kubectl get pods --selector=k8s-app=kubernetes-dashboard -n kube-system 102 | ``` 103 | - Output 104 | 105 | ``` 106 | NAME READY STATUS RESTARTS AGE 107 | kubernetes-dashboard-77fd78f978-cwpwr 1/1 Running 0 5m5s 108 | 109 | ``` 110 | - Create a DashBoard user with all privileges 111 | ``` 112 | kubectl create serviceaccount cluster-admin-dashboard-sa 113 | kubectl create clusterrolebinding cluster-admin-dashboard-sa \ 114 | --clusterrole=cluster-admin \ 115 | --serviceaccount=default:cluster-admin-dashboard-sa 116 | ``` 117 | - Get the token to login 118 | ``` 119 | kubectl describe secret $(kubectl get secret | grep cluster-admin-dashboard-sa|awk '{print $1}') |awk '/token/{print $2}' 120 | ``` 121 | - Use the token to login to DashBoard 122 | - To Access Dashboard via Browser 123 | ``` 124 | kubectl proxy 125 | ``` 126 | - Open an SSH tunnel ( 8001:127.0.0.1:8001 ) from local system (You can use Putty as well for tunneling) 127 | - Access Dashboard using URL 128 | ``` 129 | http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy 130 | ``` 131 | - We already have token from previous steps ; use 
that for login 132 | 133 | Part 17 - [Dynamic iSCSI Volume Provisioniner](17.Dynamic-iSCSI-Volume-Provisioner.md) 134 | -------------------------------------------------------------------------------- /01.Prerequisites-VM-Configuration.md: -------------------------------------------------------------------------------- 1 | # Prerequisites - VM configuration 2 | 3 | This tutorial leverges VirtualBox to provision compute infrastructure required to 4 | bootstrap a Kubernetes cluster from groundup . 5 | All you need is a Windows system (or Linux system) with VirtualBox installed. 6 | 7 | ##### 1) VirtualBox network configuration 8 | - Create HostOnly network with IP range 192.168.78.0 9 | - DHCP should be disabled on this network 10 | - Internet access is needed on all VMs (only for downloading stuffs) 11 | - A NAT network which can be leverged by VMs 12 | ``` 13 | +--------------------------------+ 14 | | VBox Host Networking | 15 | +---------------+----------------+ 16 | | HostOnly | 192.168.78.0 | 17 | | NAT | VBOX Defined | 18 | +---------------+----------------+ 19 | ``` 20 | ##### 2) Create a template VM which will be used to clone all needed VMs 21 | 22 | - You need atleast 50GB free space to host all VMs 23 | - All Vms will be placed in a directory called (Don't create these manually now!) 24 | `DRIVE_NAME:/VMs/k8s_the_hardway/` 25 | 26 | - Install Ubuntu 16.04 with latest patches 27 | - VM configuration 28 | - VM Name : `Ubuntu16-template` 29 | - Memory : 2 GB 30 | - CPU : 1 31 | - Disk : 100GB 32 | - NAT network interface : 1 33 | - HostOnly interface : 1 (ref. step 1). 34 | 35 | NAT interface should be the first interface and HostOnly should be second 36 | 37 | - Install Ubuntu on this VM and go ahead with all default options 38 | - When asked prvide user name `k8s` and set password 39 | - Select below in `Software Selection` screen 40 | - Manual Software Selection 41 | - OpenSSH Server 42 | 43 | - After restart , make sure NAT interface is up 44 | - Login to the template VM with user `k8s` and execute below commands to install latest patches. 45 | ``` 46 | $ sudo apt-get update 47 | $ sudo apt-get upgrade 48 | ``` 49 | - Poweroff template VM 50 | ``` 51 | $ sudo poweroff 52 | ``` 53 | - Open CMD and execute below commands to create all needed VMs. 
54 | You can replace the valye of `DRIVER_NAME` with a drive which is having enough free space (~50GB) 55 | ``` 56 | set DRIVE_NAME=D 57 | cd C:\Program Files\Oracle\VirtualBox 58 | VBoxManage.exe clonevm "Ubuntu16-template" --name "controller-01" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 59 | VBoxManage.exe clonevm "Ubuntu16-template" --name "controller-02" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 60 | VBoxManage.exe clonevm "Ubuntu16-template" --name "controller-03" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 61 | VBoxManage.exe clonevm "Ubuntu16-template" --name "worker-01" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 62 | VBoxManage.exe clonevm "Ubuntu16-template" --name "worker-02" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 63 | VBoxManage.exe clonevm "Ubuntu16-template" --name "worker-03" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 64 | VBoxManage.exe clonevm "Ubuntu16-template" --name "lb-01" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 65 | VBoxManage.exe clonevm "Ubuntu16-template" --name "lb-02" --groups "/K8S The Hard Way LAB" --basefolder "%DRIVE_NAME%:\VMs" --register 66 | ``` 67 | ##### 3) Start VMs one by one and perform below 68 | - IP Address and Hostname for each VMs 69 | ``` 70 | 192.168.78.201 controller-01 71 | 192.168.78.202 controller-02 72 | 192.168.78.203 controller-03 73 | 192.168.78.211 worker-01 74 | 192.168.78.212 worker-02 75 | 192.168.78.213 worker-03 76 | 192.168.78.225 lb-01 77 | 192.168.78.226 lb-02 78 | ``` 79 | - Assign IP address and make sure it comesup at boot 80 | ``` 81 | sudo systemctl stop networking 82 | sudo vi /etc/network/interfaces 83 | 84 | auto enp0s8 85 | iface enp0s8 inet static 86 | address 192.168.78.X #<--- Replace X with corresponding IP octect 87 | netmask 255.255.255.0 88 | 89 | sudo systemctl restart networking 90 | ``` 91 | - You may access the VM using the IP via SSH and can complete all remaining steps from that session (for copy paste :) ) 92 | - Change Host name 93 | ``` 94 | HOST_NAME= # <--- Replace with corresponding one 95 | sudo hostnamectl set-hostname ${HOST_NAME} --static --transient 96 | ``` 97 | - Regenrate SSH Keys 98 | ``` 99 | sudo /bin/rm -v /etc/ssh/ssh_host_* 100 | sudo dpkg-reconfigure openssh-server 101 | ``` 102 | - Change iSCSI initiator IQN 103 | ``` 104 | sudo vi /etc/iscsi/initiatorname.iscsi 105 | InitiatorName=iqn.1993-08.org.debian:01:HOST_NAME #<--- Append HostName to have unique iscsi iqn 106 | ``` 107 | - Change Machine UUID 108 | ``` 109 | sudo rm /etc/machine-id /var/lib/dbus/machine-id 110 | sudo systemd-machine-id-setup 111 | ``` 112 | - Add needed entries in /etc/hosts 113 | ``` 114 | sudo bash -c "cat <>/etc/hosts 115 | 192.168.78.201 controller-01 116 | 192.168.78.202 controller-02 117 | 192.168.78.203 controller-03 118 | 192.168.78.211 worker-01 119 | 192.168.78.212 worker-02 120 | 192.168.78.213 worker-03 121 | 192.168.78.225 lb-01 122 | 192.168.78.226 lb-02 123 | 192.168.78.220 lb 124 | EOF" 125 | ``` 126 | - Reboot VM 127 | ``` 128 | sudo reboot 129 | ``` 130 | - Repeat the steps above for all VMs 131 | - Do a ping test to make sure all VMs can reach each other. 
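One way to run the ping test from any of the VMs, using the host entries added above (a sketch; adjust the list if you used different hostnames):
```
for h in controller-01 controller-02 controller-03 worker-01 worker-02 worker-03 lb-01 lb-02
do
  ping -c1 -W1 ${h} > /dev/null && echo "${h} reachable" || echo "${h} UNREACHABLE"
done
```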
132 | 133 | ##### 4) Make a note of below Pod CIDR assignment and Service IP range , which will be used later 134 | ``` 135 | +--------------------------------+ 136 | | Pod CIDR range | 137 | +---------------+----------------+ 138 | | worker-01 | 10.10.1.0/24 | 139 | | worker-02 | 10.10.2.0/24 | 140 | | worker-03 | 10.10.3.0/24 | 141 | +---------------+----------------+ 142 | 143 | +--------------------------------+ 144 | | Service IP range | 145 | +---------------+----------------+ 146 | | 172.168.0.0/16 | 147 | +---------------+----------------+ 148 | 149 | +--------------------------------+ 150 | | Cluster Service IP | 151 | +---------------+----------------+ 152 | | 172.168.0.1 | 153 | +---------------+----------------+ 154 | 155 | +--------------------------------+ 156 | | DNS Service IP | 157 | +--------------------------------+ 158 | | 172.168.0.2 | 159 | +--------------------------------+ 160 | ``` 161 | 162 | Part 2 - [A Virtual LoadBalancer for Highly available API server](02.Prerequisites-HA-LB-Configuration.md) 163 | -------------------------------------------------------------------------------- /05.Kubernetes-configuration-files.md: -------------------------------------------------------------------------------- 1 | # Generating Kubernetes Configuration Files for Authentication 2 | 3 | In this lab you will generate [Kubernetes configuration files](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/), also known as kubeconfigs, which enable Kubernetes clients to locate and authenticate to the Kubernetes API Servers. 4 | 5 | ## Client Authentication Configs 6 | 7 | In this section you will generate kubeconfig files for the `controller manager`, `kubelet`, `kube-proxy`, and `scheduler` clients and the `admin` user. 8 | 9 | ### Kubernetes Public IP Address 10 | 11 | Each kubeconfig requires a Kubernetes API Server to connect to. To support high availability the IP address assigned to the HA-Proxy load balancer fronting the Kubernetes API Servers will be used. 12 | 13 | Set the `lb0` virtual IP address: 192.168.78.220 as KUBERNETES_PUBLIC_ADDRESS 14 | 15 | ``` 16 | KUBERNETES_PUBLIC_ADDRESS=192.168.78.220 17 | ``` 18 | 19 | ### The kubelet Kubernetes Configuration File 20 | 21 | When generating kubeconfig files for Kubelets the client certificate matching the Kubelet's node name must be used. This will ensure Kubelets are properly authorized by the Kubernetes [Node Authorizer](https://kubernetes.io/docs/admin/authorization/node/). 
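If you want to double-check the identity that will be embedded in each kubeconfig, inspect the subject of the worker certificate first (a sketch, assuming the Part 4 certificates were created with `CN=system:node:<hostname>` and `O=system:nodes`, which is what the Node Authorizer expects):
```
openssl x509 -in worker-01.pem -noout -subject
```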
22 | 23 | Generate a kubeconfig file for each worker node: 24 | 25 | ``` 26 | for instance in worker-01 worker-02 worker-03; do 27 | kubectl config set-cluster kubernetes-the-hard-way \ 28 | --certificate-authority=ca.pem \ 29 | --embed-certs=true \ 30 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ 31 | --kubeconfig=${instance}.kubeconfig 32 | 33 | kubectl config set-credentials system:node:${instance} \ 34 | --client-certificate=${instance}.pem \ 35 | --client-key=${instance}-key.pem \ 36 | --embed-certs=true \ 37 | --kubeconfig=${instance}.kubeconfig 38 | 39 | kubectl config set-context default \ 40 | --cluster=kubernetes-the-hard-way \ 41 | --user=system:node:${instance} \ 42 | --kubeconfig=${instance}.kubeconfig 43 | 44 | kubectl config use-context default --kubeconfig=${instance}.kubeconfig 45 | done 46 | ``` 47 | 48 | Results: 49 | 50 | ``` 51 | worker-01.kubeconfig 52 | worker-02.kubeconfig 53 | worker-03.kubeconfig 54 | ``` 55 | 56 | ### The kube-proxy Kubernetes Configuration File 57 | 58 | Generate a kubeconfig file for the `kube-proxy` service: 59 | 60 | ``` 61 | { 62 | kubectl config set-cluster kubernetes-the-hard-way \ 63 | --certificate-authority=ca.pem \ 64 | --embed-certs=true \ 65 | --server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443 \ 66 | --kubeconfig=kube-proxy.kubeconfig 67 | 68 | kubectl config set-credentials system:kube-proxy \ 69 | --client-certificate=kube-proxy.pem \ 70 | --client-key=kube-proxy-key.pem \ 71 | --embed-certs=true \ 72 | --kubeconfig=kube-proxy.kubeconfig 73 | 74 | kubectl config set-context default \ 75 | --cluster=kubernetes-the-hard-way \ 76 | --user=system:kube-proxy \ 77 | --kubeconfig=kube-proxy.kubeconfig 78 | 79 | kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig 80 | } 81 | ``` 82 | 83 | Results: 84 | 85 | ``` 86 | kube-proxy.kubeconfig 87 | ``` 88 | 89 | ### The kube-controller-manager Kubernetes Configuration File 90 | 91 | Generate a kubeconfig file for the `kube-controller-manager` service: 92 | 93 | ``` 94 | { 95 | kubectl config set-cluster kubernetes-the-hard-way \ 96 | --certificate-authority=ca.pem \ 97 | --embed-certs=true \ 98 | --server=https://127.0.0.1:6443 \ 99 | --kubeconfig=kube-controller-manager.kubeconfig 100 | 101 | kubectl config set-credentials system:kube-controller-manager \ 102 | --client-certificate=kube-controller-manager.pem \ 103 | --client-key=kube-controller-manager-key.pem \ 104 | --embed-certs=true \ 105 | --kubeconfig=kube-controller-manager.kubeconfig 106 | 107 | kubectl config set-context default \ 108 | --cluster=kubernetes-the-hard-way \ 109 | --user=system:kube-controller-manager \ 110 | --kubeconfig=kube-controller-manager.kubeconfig 111 | 112 | kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig 113 | } 114 | ``` 115 | 116 | Results: 117 | 118 | ``` 119 | kube-controller-manager.kubeconfig 120 | ``` 121 | 122 | 123 | ### The kube-scheduler Kubernetes Configuration File 124 | 125 | Generate a kubeconfig file for the `kube-scheduler` service: 126 | 127 | ``` 128 | { 129 | kubectl config set-cluster kubernetes-the-hard-way \ 130 | --certificate-authority=ca.pem \ 131 | --embed-certs=true \ 132 | --server=https://127.0.0.1:6443 \ 133 | --kubeconfig=kube-scheduler.kubeconfig 134 | 135 | kubectl config set-credentials system:kube-scheduler \ 136 | --client-certificate=kube-scheduler.pem \ 137 | --client-key=kube-scheduler-key.pem \ 138 | --embed-certs=true \ 139 | --kubeconfig=kube-scheduler.kubeconfig 140 | 141 | kubectl config set-context 
default \ 142 | --cluster=kubernetes-the-hard-way \ 143 | --user=system:kube-scheduler \ 144 | --kubeconfig=kube-scheduler.kubeconfig 145 | 146 | kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig 147 | } 148 | ``` 149 | 150 | Results: 151 | 152 | ``` 153 | kube-scheduler.kubeconfig 154 | ``` 155 | 156 | ### The admin Kubernetes Configuration File 157 | 158 | Generate a kubeconfig file for the `admin` user: 159 | 160 | ``` 161 | { 162 | kubectl config set-cluster kubernetes-the-hard-way \ 163 | --certificate-authority=ca.pem \ 164 | --embed-certs=true \ 165 | --server=https://127.0.0.1:6443 \ 166 | --kubeconfig=admin.kubeconfig 167 | 168 | kubectl config set-credentials admin \ 169 | --client-certificate=admin.pem \ 170 | --client-key=admin-key.pem \ 171 | --embed-certs=true \ 172 | --kubeconfig=admin.kubeconfig 173 | 174 | kubectl config set-context default \ 175 | --cluster=kubernetes-the-hard-way \ 176 | --user=admin \ 177 | --kubeconfig=admin.kubeconfig 178 | 179 | kubectl config use-context default --kubeconfig=admin.kubeconfig 180 | } 181 | ``` 182 | 183 | Results: 184 | 185 | ``` 186 | admin.kubeconfig 187 | ``` 188 | 189 | 190 | ## 191 | 192 | ## Distribute the Kubernetes Configuration Files 193 | 194 | Copy the appropriate `kubelet` and `kube-proxy` kubeconfig files to each worker instance: 195 | 196 | ``` 197 | for instance in worker-01 worker-02 worker-03; do 198 | scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/ 199 | done 200 | ``` 201 | 202 | Copy the appropriate `kube-controller-manager` and `kube-scheduler` kubeconfig files to each controller instance: 203 | 204 | ``` 205 | for instance in controller-01 controller-02 controller-03; do 206 | scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/ 207 | done 208 | ``` 209 | 210 | Part 6 - [Generating the Data Encryption Config and Key](06.Data-encryption-keys.md) 211 | -------------------------------------------------------------------------------- /15.Deploy-Metric-Server.md: -------------------------------------------------------------------------------- 1 | # Metric Server for HPA & Resource monitoring 2 | 3 | We need Metric Server for resource usage monitoring as well as for HPA (Horizontal Pod AutoScaler) 4 | 5 | ### Create CA and certificates 6 | 7 | ``` 8 | mkdir front-proxy 9 | cd front-proxy 10 | ``` 11 | ``` 12 | vi ca-config.json 13 | ``` 14 | ``` 15 | { 16 | "signing": { 17 | "default": { 18 | "expiry": "8760h" 19 | }, 20 | "profiles": { 21 | "kubernetes": { 22 | "expiry": "8760h", 23 | "usages": [ 24 | "signing", 25 | "key encipherment", 26 | "server auth", 27 | "client auth" 28 | ] 29 | } 30 | } 31 | } 32 | } 33 | ``` 34 | ``` 35 | vi ca-csr.json 36 | ``` 37 | ``` 38 | { 39 | "CN": "Kubernetes", 40 | "key": { 41 | "algo": "rsa", 42 | "size": 2048 43 | }, 44 | "names": [ 45 | { 46 | "C": "IN", 47 | "L": "KL", 48 | "O": "Kubernetes", 49 | "OU": "CA", 50 | "ST": "Kerala" 51 | } 52 | ] 53 | } 54 | ``` 55 | ``` 56 | cfssl gencert -initca ca-csr.json |cfssljson -bare front-proxy-ca 57 | ``` 58 | ``` 59 | vi front-proxy-csr.json 60 | ``` 61 | ``` 62 | { 63 | "CN": "front-proxy-ca", 64 | "key": { 65 | "algo": "rsa", 66 | "size": 2048 67 | }, 68 | "names": [ 69 | { 70 | "C": "IN", 71 | "L": "KL", 72 | "O": "Kubernetes", 73 | "OU": "CA", 74 | "ST": "Kerala" 75 | } 76 | ] 77 | } 78 | ``` 79 | ``` 80 | cfssl gencert \ 81 | -ca=front-proxy-ca.pem \ 82 | -ca-key=front-proxy-ca-key.pem \ 83 | -config=ca-config.json \ 84 | -profile=kubernetes \ 85 | 
front-proxy-csr.json | cfssljson -bare front-proxy 86 | ``` 87 | 88 | ``` 89 | for instance in controller-01 controller-02 controller-03 90 | do 91 | scp front-proxy-ca.pem front-proxy.pem front-proxy-key.pem k8s@${instance}:~/ 92 | ssh -t k8s@${instance} sudo "mv front-proxy-ca.pem front-proxy.pem front-proxy-key.pem /var/lib/kubernetes/" 93 | done 94 | ``` 95 | ``` 96 | vi /etc/systemd/system/kube-apiserver.service 97 | ``` 98 | ``` 99 | --requestheader-client-ca-file=/var/lib/kubernetes/front-proxy-ca.pem \ 100 | --enable-aggregator-routing=true \ 101 | --requestheader-allowed-names=front-proxy-ca \ 102 | --requestheader-extra-headers-prefix=X-Remote-Extra- \ 103 | --requestheader-group-headers=X-Remote-Group \ 104 | --requestheader-username-headers=X-Remote-User \ 105 | --proxy-client-cert-file=/var/lib/kubernetes/front-proxy.pem \ 106 | --proxy-client-key-file=/var/lib/kubernetes/front-proxy-key.pem \ 107 | ``` 108 | ``` 109 | systemctl daemon-reload 110 | systemctl restart kube-apiserver 111 | ``` 112 | 113 | 114 | ### Download manifest files for deployment 115 | - List of manifest files 116 | ``` 117 | YAMLS="auth-delegator.yaml 118 | auth-reader.yaml 119 | metrics-apiservice.yaml 120 | metrics-server-deployment.yaml 121 | metrics-server-service.yaml 122 | resource-reader.yaml" 123 | ``` 124 | - Create a directory to put all manifests 125 | ``` 126 | mkdir metric-server 127 | cd metric-server 128 | ``` 129 | - Download all files 130 | ``` 131 | for FILE in ${YAMLS} 132 | do 133 | wget https://raw.githubusercontent.com/kubernetes-incubator/metrics-server/master/deploy/1.8%2B/${FILE} 134 | done 135 | cd .. 136 | ``` 137 | - Add node's hostname/IP to deployment , so that node hostnames can be resolved by metric-server Pod 138 | ``` 139 | vi metric-server/metrics-server-deployment.yaml 140 | ``` 141 | Add below in Deployment.spec.template.spec 142 | ``` 143 | hostAliases: 144 | - ip: 192.168.78.211 145 | hostnames: 146 | - worker-01 147 | - ip: 192.168.78.212 148 | hostnames: 149 | - worker-02 150 | - ip: 192.168.78.213 151 | hostnames: 152 | - worker-03 153 | 154 | ``` 155 | 156 | - Add routes on controller nodes , so that API can reach metric-server Pod IP 157 | 158 | From lb-01 , execute below 159 | 160 | ``` 161 | ROUTE_DATA="worker-01,192.168.78.211,10.10.1.0/24 162 | worker-02,192.168.78.212,10.10.2.0/24 163 | worker-03,192.168.78.213,10.10.3.0/24" 164 | 165 | for instance in controller-01 controller-02 controller-03 166 | do 167 | ROUTE_MINI=$(echo "${ROUTE_DATA}" | grep -v -w ${instance}) 168 | for ROUTE in ${ROUTE_MINI} 169 | do 170 | echo "#### Adding Route for ${instance} ####" 171 | NET=$(echo ${ROUTE} |awk -F "," '{print $3}') 172 | GW=$(echo ${ROUTE} |awk -F "," '{print $2}') 173 | ssh -t k8s@${instance} sudo -i "echo -e \"\tpost-up route add -net ${NET} gw ${GW}\"|sudo tee --append /etc/network/interfaces" 174 | ssh -t k8s@${instance} sudo -i "route add -net ${NET} gw ${GW}" 175 | done 176 | done 177 | ``` 178 | 179 | - Deploy Metric Server 180 | 181 | ``` 182 | kubectl create -f metric-server/ 183 | ``` 184 | 185 | - Wait for 5 minutes and execute below to verify 186 | 187 | ``` 188 | kubectl describe apiservices v1beta1.metrics.k8s.io 189 | ``` 190 | - Output 191 | ``` 192 | Name: v1beta1.metrics.k8s.io 193 | Namespace: 194 | Labels: 195 | Annotations: 196 | API Version: apiregistration.k8s.io/v1 197 | Kind: APIService 198 | Metadata: 199 | Creation Timestamp: 2018-10-02T13:49:50Z 200 | Resource Version: 98243 201 | Self Link: 
/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io 202 | UID: 081f7634-c64a-11e8-b143-0800276d1e86 203 | Spec: 204 | Group: metrics.k8s.io 205 | Group Priority Minimum: 100 206 | Insecure Skip TLS Verify: true 207 | Service: 208 | Name: metrics-server 209 | Namespace: kube-system 210 | Version: v1beta1 211 | Version Priority: 100 212 | Status: 213 | Conditions: 214 | Last Transition Time: 2018-10-02T13:49:55Z 215 | Message: all checks passed 216 | Reason: Passed 217 | Status: True 218 | Type: Available 219 | Events: 220 | ``` 221 | - Node resource usage 222 | ``` 223 | kubectl top nodes 224 | ``` 225 | - Output 226 | 227 | ``` 228 | NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% 229 | worker-01 21m 2% 957Mi 50% 230 | worker-02 21m 2% 974Mi 51% 231 | worker-03 20m 2% 1031Mi 54% 232 | ``` 233 | - Pod resource usage 234 | ``` 235 | kubectl top pods 236 | ``` 237 | - Output 238 | ``` 239 | NAME CPU(cores) MEMORY(bytes) 240 | busybox-bd8fb7cbd-bqv8z 0m 0Mi 241 | nginx-cdb6b5b95-b8swv 0m 1Mi 242 | nginx-cdb6b5b95-tq946 0m 1Mi 243 | nginx-cdb6b5b95-w4t84 0m 1Mi 244 | 245 | ``` 246 | Part 16- [Deploy Dashboard](16.Dash-Board.md) 247 | -------------------------------------------------------------------------------- /17.Dynamic-iSCSI-Volume-Provisioner.md: -------------------------------------------------------------------------------- 1 | # Volume Provisioner with iSCSI-targetd 2 | 3 | - Power-off lb-02 4 | - Add a new disk with size 100GB 5 | - Creata a VG using the new disk 6 | 7 | ``` 8 | pvcreate /dev/sdb 9 | vgcreate vg-targetd /dev/sdb 10 | ``` 11 | 12 | - Install python build tools and targetd dependancies 13 | ``` 14 | sudo apt-get install zip python python-setuptools python-gi python-setproctitle python-yaml python-lvm2 15 | ``` 16 | - Download and build targcli , targetd and its dependancies 17 | ``` 18 | wget https://github.com/open-iscsi/rtslib-fb/archive/master.zip -O rtslib-fb.zip 19 | wget https://github.com/open-iscsi/targetcli-fb/archive/master.zip -O targetcli-fb.zip 20 | wget https://github.com/open-iscsi/configshell-fb/archive/master.zip -O configshell-fb.zip 21 | wget https://github.com/open-iscsi/targetd/archive/master.zip -O targetd.zip 22 | ``` 23 | - Extract all downloaded zip files 24 | ``` 25 | unzip rtslib-fb.zip 26 | unzip targetcli-fb.zip 27 | unzip configshell-fb.zip 28 | unzip targetd.zip 29 | ``` 30 | - Build and install all packages 31 | ``` 32 | cd rtslib-fb-master 33 | ./setup.py build 34 | sudo ./setup.py install 35 | cd 36 | cd configshell-fb-master 37 | ./setup.py build 38 | sudo ./setup.py install 39 | cd 40 | cd targetcli-fb-master 41 | ./setup.py build 42 | sudo ./setup.py install 43 | cd 44 | cd targetd-master 45 | ./setup.py build 46 | sudo ./setup.py install 47 | ``` 48 | - Create a untit file for targetd 49 | 50 | ``` 51 | vi targetd.service 52 | ``` 53 | ``` 54 | [Unit] 55 | Description=targetd storage array API daemon 56 | 57 | [Service] 58 | ExecStartPre=-/sbin/modprobe configfs 59 | ExecStart=/usr/local/bin/targetd 60 | 61 | [Install] 62 | WantedBy=multi-user.target 63 | ``` 64 | 65 | - Move unit file to systemd path 66 | ``` 67 | sudo mv targetd.service /etc/systemd/system/ 68 | ``` 69 | 70 | - Edit targetd configuration 71 | ``` 72 | sudo mkdir /etc/target/ 73 | sudo vi /etc/target/targetd.yaml 74 | ``` 75 | 76 | ``` 77 | password: nutanix 78 | 79 | # defaults below; uncomment and edit 80 | # if using a thin pool, use / 81 | # e.g vg-targetd/pool 82 | pool_name: vg-targetd 83 | user: admin 84 | ssl: false 85 | target_name: 
iqn.2003-01.org.linux-iscsi.linxlabs:targetd 86 | ``` 87 | 88 | - Start targetd API service 89 | ``` 90 | { 91 | sudo systemctl daemon-reload 92 | sudo systemctl enable targetd 93 | sudo systemctl start targetd 94 | } 95 | ``` 96 | 97 | - Execute below commands from lb-01 98 | - Create secret 99 | ``` 100 | kubectl create secret generic targetd-account --from-literal=username=admin --from-literal=password=nutanix 101 | ``` 102 | - Create Provisioner 103 | ``` 104 | kubectl apply -f https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/config/iscsi-provisioner-d.yaml 105 | ``` 106 | - Create Persistent Volume Claim 107 | ``` 108 | kubectl apply -f https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/config/iscsi-provisioner-pvc.yaml 109 | ``` 110 | - Create Storage Class 111 | ``` 112 | kubectl create -f https://raw.githubusercontent.com/ansilh/kubernetes-the-hardway-virtualbox/master/config/iscsi-provisioner-class.yaml 113 | ``` 114 | 115 | - Verify volume created by provisioner 116 | ``` 117 | kubectl get pvc 118 | ``` 119 | - Output 120 | ``` 121 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 122 | myclaim Bound pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 100Mi RWO iscsi-targetd-vg-targetd 142m 123 | ``` 124 | ``` 125 | sudo targetcli ls 126 | ``` 127 | - Output 128 | 129 | ``` 130 | o- / ......................................................................................................................... [...] 131 | o- backstores .............................................................................................................. [...] 132 | | o- block .................................................................................................. [Storage Objects: 1] 133 | | | o- vg-targetd:pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 [/dev/vg-targetd/pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 (100.0MiB) write-thru activated] 134 | | | o- alua ................................................................................................... [ALUA Groups: 1] 135 | | | o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized] 136 | | o- fileio ................................................................................................. [Storage Objects: 0] 137 | | o- pscsi .................................................................................................. [Storage Objects: 0] 138 | | o- ramdisk ................................................................................................ [Storage Objects: 0] 139 | o- iscsi ............................................................................................................ [Targets: 1] 140 | | o- iqn.2003-01.org.linux-iscsi.linxlabs:targetd ...................................................................... [TPGs: 1] 141 | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth] 142 | | o- acls .......................................................................................................... [ACLs: 3] 143 | | | o- iqn.1993-08.org.debian:01:worker-01 .................................................................. [Mapped LUNs: 1] 144 | | | | o- mapped_lun0 ................................... [lun0 block/vg-targetd:pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 (rw)] 145 | | | o- iqn.1993-08.org.debian:01:worker-02 .................................................................. 
[Mapped LUNs: 1] 146 | | | | o- mapped_lun0 ................................... [lun0 block/vg-targetd:pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 (rw)] 147 | | | o- iqn.1993-08.org.debian:01:worker-03 .................................................................. [Mapped LUNs: 1] 148 | | | o- mapped_lun0 ................................... [lun0 block/vg-targetd:pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 (rw)] 149 | | o- luns .......................................................................................................... [LUNs: 1] 150 | | | o- lun0 [block/vg-targetd:pvc-6a727bf8-c66d-11e8-b143-0800276d1e86 (/dev/vg-targetd/pvc-6a727bf8-c66d-11e8-b143-0800276d1e86) (default_tg_pt_gp)] 151 | | o- portals .................................................................................................... [Portals: 1] 152 | | o- 0.0.0.0:3260 ..................................................................................................... [OK] 153 | o- loopback ......................................................................................................... [Targets: 0] 154 | o- vhost ............................................................................................................ [Targets: 0] 155 | o- xen-pvscsi ....................................................................................................... [Targets: 0] 156 | 157 | ``` 158 | Part 18 - [Ingress Controller using NIGIX](18.Ingress-Controller-using-NGINX.md) 159 | -------------------------------------------------------------------------------- /09.Bootstrapping-kubernetes-workers.md: -------------------------------------------------------------------------------- 1 | # Bootstrapping the Kubernetes Worker Nodes 2 | 3 | In this lab you will bootstrap three Kubernetes worker nodes. The following components will be installed on each node: [runc](https://github.com/opencontainers/runc), [gVisor](https://github.com/google/gvisor), [container networking plugins](https://github.com/containernetworking/cni), [containerd](https://github.com/containerd/containerd), [kubelet](https://kubernetes.io/docs/admin/kubelet), and [kube-proxy](https://kubernetes.io/docs/concepts/cluster-administration/proxies). 4 | 5 | ## Prerequisites 6 | 7 | The commands in this lab must be run on each worker instance: `worker-01`, `worker-02`, and `worker-03`. Login to each worker instance using the `ssh` command. Example: 8 | 9 | ``` 10 | ssh worker-01 11 | ``` 12 | 13 | ### Running commands in parallel with tmux 14 | 15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. 16 | 17 | ## Disable Swap 18 | 19 | ``` 20 | sudo swapoff -a 21 | sudo vi /etc/fstab #<-- Comment out the line for swap and save it 22 | 23 | ``` 24 | ## Provisioning a Kubernetes Worker Node 25 | 26 | Install the OS dependencies: 27 | 28 | ``` 29 | { 30 | sudo apt-get update 31 | sudo apt-get -y install socat conntrack ipset 32 | } 33 | ``` 34 | 35 | > The socat binary enables support for the `kubectl port-forward` command. 
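As a rough illustration of what that enables (a sketch only; it assumes the cluster build is finished and uses one of the `nginx` pods that appear later in this guide as an example target):

```
# Forward local port 8080 to port 80 of a running pod; kubelet relies on socat
# on the worker node to proxy this traffic. The pod name is only an example.
kubectl port-forward nginx-cdb6b5b95-b8swv 8080:80 &
sleep 2   # give the tunnel a moment to come up
# The pod now answers on the forwarded local port.
curl http://127.0.0.1:8080
# Stop the background port-forward when finished.
kill %1
```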
36 | 37 | ### Download and Install Worker Binaries 38 | 39 | ``` 40 | wget -q --show-progress --https-only --timestamping \ 41 | https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.12.0/crictl-v1.12.0-linux-amd64.tar.gz \ 42 | https://storage.googleapis.com/kubernetes-the-hard-way/runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 \ 43 | https://github.com/opencontainers/runc/releases/download/v1.0.0-rc5/runc.amd64 \ 44 | https://github.com/containernetworking/plugins/releases/download/v0.6.0/cni-plugins-amd64-v0.6.0.tgz \ 45 | https://github.com/containerd/containerd/releases/download/v1.2.0-rc.0/containerd-1.2.0-rc.0.linux-amd64.tar.gz \ 46 | https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl \ 47 | https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-proxy \ 48 | https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubelet 49 | ``` 50 | 51 | Create the installation directories: 52 | 53 | ``` 54 | sudo mkdir -p \ 55 | /etc/cni/net.d \ 56 | /opt/cni/bin \ 57 | /var/lib/kubelet \ 58 | /var/lib/kube-proxy \ 59 | /var/lib/kubernetes \ 60 | /var/run/kubernetes 61 | ``` 62 | 63 | Install the worker binaries: 64 | 65 | ``` 66 | { 67 | sudo mv runsc-50c283b9f56bb7200938d9e207355f05f79f0d17 runsc 68 | sudo mv runc.amd64 runc 69 | chmod +x kubectl kube-proxy kubelet runc runsc 70 | sudo mv kubectl kube-proxy kubelet runc runsc /usr/local/bin/ 71 | sudo tar -xvf crictl-v1.12.0-linux-amd64.tar.gz -C /usr/local/bin/ 72 | sudo tar -xvf cni-plugins-amd64-v0.6.0.tgz -C /opt/cni/bin/ 73 | sudo tar -xvf containerd-1.2.0-rc.0.linux-amd64.tar.gz -C / 74 | } 75 | ``` 76 | 77 | ### Configure CNI Networking 78 | 79 | Retrieve the Pod CIDR range for the current compute instance: 80 | 81 | ``` 82 | CIDR_LIST="worker-01,10.10.1.0/24 83 | worker-02,10.10.2.0/24 84 | worker-03,10.10.3.0/24" 85 | 86 | for CIDR in ${CIDR_LIST} 87 | do 88 | POD_CIDR=$(echo ${CIDR} |grep $(hostname -s)|awk -F "," '{print $2}') 89 | if [ ! -z ${POD_CIDR} ] 90 | then 91 | break 92 | fi 93 | done 94 | 95 | echo $POD_CIDR 96 | ``` 97 | 98 | Create the `bridge` network configuration file: 99 | 100 | ``` 101 | cat < Untrusted workloads will be run using the gVisor (runsc) runtime. 160 | 161 | Create the `containerd.service` systemd unit file: 162 | 163 | ``` 164 | cat < The `resolvConf` configuration is used to avoid loops when using CoreDNS for service discovery on systems running `systemd-resolved`. 224 | 225 | Create the `kubelet.service` systemd unit file: 226 | 227 | ``` 228 | cat < Remember to run the above commands on each worker node: `worker-01`, `worker-02`, and `worker-03`. 302 | 303 | ## Verification 304 | 305 | > The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. 
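If a worker does not show up as `Ready` in the check below, a quick first step is to log in to that worker and confirm the services started cleanly (a minimal troubleshooting sketch, run on the worker itself):

```
# All three services should report "active" on a healthy worker.
sudo systemctl is-active containerd kubelet kube-proxy
# Recent kubelet log entries usually point at certificate, kubeconfig or CNI problems.
sudo journalctl -u kubelet --no-pager -n 20
```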
306 | 307 | List the registered Kubernetes nodes: 308 | 309 | ``` 310 | ssh k8s@controller-01 \ 311 | "kubectl get nodes --kubeconfig admin.kubeconfig" 312 | ``` 313 | 314 | > output 315 | 316 | ``` 317 | NAME STATUS ROLES AGE VERSION 318 | worker-01 Ready 2m45s v1.12.0 319 | worker-02 Ready 2m45s v1.12.0 320 | worker-03 Ready 2m45s v1.12.0 321 | 322 | ``` 323 | 324 | Part 10 - [Configuring kubectl for Remote Access](10.Configuring-kubectl.md) 325 | -------------------------------------------------------------------------------- /04.Certificate-Authority.md: -------------------------------------------------------------------------------- 1 | # Provisioning a CA and Generating TLS Certificates 2 | 3 | We will provision a PKI Infrastructure using CloudFlare's PKI toolkit, cfssl, then use it to bootstrap a Certificate Authority, 4 | and generate TLS certificates for the following components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, 5 | and kube-proxy. 6 | 7 | ## Certificate Authority 8 | 9 | - Generate CA default files (To understand the structure of CA and CSR json . We will overwrite this configs in next steps) 10 | ``` 11 | cfssl print-defaults config > ca-config.json 12 | cfssl print-defaults csr > ca-csr.json 13 | ``` 14 | - Modify ca-config and ca-csr to fit your requirement 15 | 16 | ``` 17 | cat <ca-config.json 18 | { 19 | "signing": { 20 | "default": { 21 | "expiry": "8760h" 22 | }, 23 | "profiles": { 24 | "kubernetes": { 25 | "expiry": "8760h", 26 | "usages": [ 27 | "signing", 28 | "key encipherment", 29 | "server auth", 30 | "client auth" 31 | ] 32 | } 33 | } 34 | } 35 | } 36 | EOF 37 | ``` 38 | 39 | ``` 40 | cat <ca-csr.json 41 | { 42 | "CN": "Kubernetes", 43 | "key": { 44 | "algo": "rsa", 45 | "size": 2048 46 | }, 47 | "names": [ 48 | { 49 | "C": "IN", 50 | "L": "KL", 51 | "O": "Kubernetes", 52 | "OU": "CA", 53 | "ST": "Kerala" 54 | } 55 | ] 56 | } 57 | EOF 58 | ``` 59 | ``` 60 | cfssl gencert -initca ca-csr.json |cfssljson -bare ca 61 | ``` 62 | - Output 63 | ``` 64 | 2018/10/01 22:03:14 [INFO] generating a new CA key and certificate from CSR 65 | 2018/10/01 22:03:14 [INFO] generate received request 66 | 2018/10/01 22:03:14 [INFO] received CSR 67 | 2018/10/01 22:03:14 [INFO] generating key: rsa-2048 68 | 2018/10/01 22:03:14 [INFO] encoded CSR 69 | 2018/10/01 22:03:14 [INFO] signed certificate with serial number 621260968886516247086480084671432552497699065843 70 | ``` 71 | - ca.pem , ca-key.pem, ca.csr files will be created , but we need only ca.pem and ca-key.pem 72 | ``` 73 | k8s@lb-01:~$ ls -lrt ca* 74 | -rw-rw-r-- 1 k8s k8s 385 Oct 1 21:53 ca-config.json 75 | -rw-rw-r-- 1 k8s k8s 262 Oct 1 21:56 ca-csr.json 76 | -rw-rw-r-- 1 k8s k8s 1350 Oct 1 22:03 ca.pem 77 | -rw------- 1 k8s k8s 1679 Oct 1 22:03 ca-key.pem 78 | -rw-r--r-- 1 k8s k8s 997 Oct 1 22:03 ca.csr 79 | ``` 80 | 81 | ## Client and Server Certificates 82 | 83 | In this section you will generate client and server certificates for each Kubernetes component and a client certificate for 84 | the Kubernetes admin user. 85 | 86 | ### The Admin Client Certificate 87 | ``` 88 | { 89 | 90 | cat > admin-csr.json <`. In this section you will create a certificate for each Kubernetes worker node that meets the Node Authorizer requirements. 
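Once the loop below has produced the per-node files, each certificate subject can be checked against that requirement with `openssl` (a sketch, shown for `worker-01.pem`):

```
# Print the subject of the kubelet client certificate generated below for worker-01.
# The Node Authorizer expects the group (O) system:nodes and a CN of system:node:<hostname>.
openssl x509 -in worker-01.pem -noout -subject
```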
128 | 129 | Generate a certificate and private key for each Kubernetes worker node: 130 | 131 | ``` 132 | for instance in worker-01 worker-02 worker-03; do 133 | cat > ${instance}-csr.json < kube-controller-manager-csr.json < kube-proxy-csr.json < kube-scheduler-csr.json < kubernetes-csr.json < service-account-csr.json < The `kube-proxy`, `kube-controller-manager`, `kube-scheduler`, and `kubelet` client certificates will be used to generate client authentication configuration files in the next lab. 419 | 420 | Part 5 - [Kubernetes Configuration Files](05.Kubernetes-configuration-files.md) 421 | -------------------------------------------------------------------------------- /08.Bootstrapping-kubernetes-controllers.md: -------------------------------------------------------------------------------- 1 | # Bootstrapping the Kubernetes Control Plane 2 | 3 | In this lab you will bootstrap the Kubernetes control plane across three compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager. 4 | 5 | ## Prerequisites 6 | 7 | The commands in this lab must be run on each controller instance: `controller-01`, `controller-02`, and `controller-03`. Login to each controller instance using the `ssh` command. Example: 8 | 9 | ``` 10 | ssh controller-01 11 | ``` 12 | 13 | ### Running commands in parallel with tmux 14 | 15 | [tmux](https://github.com/tmux/tmux/wiki) can be used to run commands on multiple compute instances at the same time. 16 | 17 | ## Provision the Kubernetes Control Plane 18 | 19 | Create the Kubernetes configuration directory: 20 | 21 | ``` 22 | sudo mkdir -p /etc/kubernetes/config 23 | ``` 24 | 25 | ### Download and Install the Kubernetes Controller Binaries 26 | 27 | Download the official Kubernetes release binaries: 28 | 29 | ``` 30 | wget -q --show-progress --https-only --timestamping \ 31 | "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-apiserver" \ 32 | "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-controller-manager" \ 33 | "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler" \ 34 | "https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kubectl" 35 | ``` 36 | 37 | Install the Kubernetes binaries: 38 | 39 | ``` 40 | { 41 | chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl 42 | sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/ 43 | } 44 | ``` 45 | 46 | ### Configure the Kubernetes API Server 47 | 48 | ``` 49 | { 50 | sudo mkdir -p /var/lib/kubernetes/ 51 | 52 | sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \ 53 | service-account-key.pem service-account.pem \ 54 | encryption-config.yaml /var/lib/kubernetes/ 55 | } 56 | ``` 57 | 58 | The instance IP address will be used to advertise the API Server to members of the cluster. Retrieve the IP address for the current compute instance: 59 | 60 | ``` 61 | INTERNAL_IP=$(grep -w $(hostname) /etc/hosts |awk '{print $1}') 62 | ``` 63 | 64 | Create the `kube-apiserver.service` systemd unit file: 65 | 66 | ``` 67 | cat < Allow up to 10 seconds for the Kubernetes API Server to fully initialize. 
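Before moving on, it is worth confirming that all three services actually stayed up (a minimal sketch, run on the controller you just configured):

```
# Each control plane service should report "active".
sudo systemctl is-active kube-apiserver kube-controller-manager kube-scheduler
# If one of them keeps restarting, the journal normally shows a flag or certificate error.
sudo journalctl -u kube-apiserver --no-pager -n 20
```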
202 | 203 | ### Enable HTTP Health Checks 204 | 205 | ``` 206 | We will skip this for now as its not so important for a learning environment . 207 | If time permits we will add that to HA-Proxy configuration in future 208 | ``` 209 | 210 | ### Verification 211 | 212 | ``` 213 | kubectl get componentstatuses --kubeconfig admin.kubeconfig 214 | ``` 215 | 216 | ``` 217 | NAME STATUS MESSAGE ERROR 218 | controller-manager Healthy ok 219 | scheduler Healthy ok 220 | etcd-0 Healthy {"health":"true"} 221 | etcd-2 Healthy {"health":"true"} 222 | etcd-1 Healthy {"health":"true"} 223 | ``` 224 | 225 | 226 | > Remember to run the above commands on each controller node: `controller-01`, `controller-02`, and `controller-03`. 227 | 228 | ## RBAC for Kubelet Authorization 229 | 230 | In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods. 231 | 232 | > This tutorial sets the Kubelet `--authorization-mode` flag to `Webhook`. Webhook mode uses the [SubjectAccessReview](https://kubernetes.io/docs/admin/authorization/#checking-api-access) API to determine authorization. 233 | 234 | ``` 235 | ssh controller-01 236 | ``` 237 | 238 | Create the `system:kube-apiserver-to-kubelet` [ClusterRole](https://kubernetes.io/docs/admin/authorization/rbac/#role-and-clusterrole) with permissions to access the Kubelet API and perform most common tasks associated with managing pods: 239 | 240 | ``` 241 | cat < The compute instances created in this tutorial will not have permission to complete this section. Run the following commands from the same machine used to create the compute instances. 291 | 292 | 293 | 294 | 295 | ### Verification 296 | 297 | Logon to lb-01 and lb-02 and then restart haproxy service to make active backend 298 | 299 | ``` 300 | systemctl restart haproxy 301 | ``` 302 | 303 | Retrieve the `kubernetes-the-hard-way` Load Balancer IP address: 304 | 305 | ``` 306 | KUBERNETES_PUBLIC_ADDRESS=192.168.78.220 307 | ``` 308 | 309 | Make a HTTP request for the Kubernetes version info: 310 | 311 | ``` 312 | curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version 313 | ``` 314 | 315 | > output 316 | 317 | ``` 318 | { 319 | "major": "1", 320 | "minor": "12", 321 | "gitVersion": "v1.12.0", 322 | "gitCommit": "0ed33881dc4355495f623c6f22e7dd0b7632b7c0", 323 | "gitTreeState": "clean", 324 | "buildDate": "2018-09-27T16:55:41Z", 325 | "goVersion": "go1.10.4", 326 | "compiler": "gc", 327 | "platform": "linux/amd64" 328 | 329 | ``` 330 | 331 | Part 9 - [Bootstrapping the Kubernetes Worker Nodes](09.Bootstrapping-kubernetes-workers.md) 332 | --------------------------------------------------------------------------------