├── .gitignore ├── ContainerLinux-with-kubeadm.md ├── README.md ├── Raspberry_Pi_Setup.md ├── centos7-kubeadm.md ├── connect-to-master.md ├── images └── README.md ├── master └── README.md ├── sddefault.jpg ├── terraform └── alvyl │ ├── ec2-setup.tf │ └── install_kubernetes.sh ├── tips └── README.md └── ubuntu16-kubeadm.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Terraform 2 | 3 | # Local .terraform directories 4 | **/.terraform/* 5 | 6 | # .tfstate files 7 | *.tfstate 8 | *.tfstate.* 9 | 10 | # Crash log files 11 | crash.log 12 | 13 | # Ignore any .tfvars files that are generated automatically for each Terraform run. Most 14 | # .tfvars files are managed as part of configuration and so should be included in 15 | # version control. 16 | # 17 | # example.tfvars 18 | *.auto.tfvars 19 | 20 | # Ignore override files as they are usually used to override resources locally and so 21 | # are not checked in 22 | override.tf 23 | override.tf.json 24 | *_override.tf 25 | *_override.tf.json 26 | 27 | # Include override files you do wish to add to version control using negated pattern 28 | # 29 | # !example_override.tf 30 | 31 | # Include tfplan files to ignore the plan output of command: terraform plan -out=tfplan 32 | # example: *tfplan* 33 | -------------------------------------------------------------------------------- /ContainerLinux-with-kubeadm.md: -------------------------------------------------------------------------------- 1 | # Installing kubernetes with kubeadm on Container Linux 2 | 3 | ## Installing kubeadm kubectl kubelet 4 | 5 | ### Prerequisite: 6 | 7 | Install CNI plugins (required for most pod network): 8 | 9 | ``` 10 | CNI_VERSION="v0.8.2" 11 | mkdir -p /opt/cni/bin 12 | curl -L "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-amd64-${CNI_VERSION}.tgz" | tar -C /opt/cni/bin -xz 13 | ``` 14 | 15 | ## Install crictl (required for kubeadm / Kubelet 
Container Runtime Interface (CRI)) 16 | 17 | ``` 18 | CRICTL_VERSION="v1.17.0" 19 | mkdir -p /opt/bin 20 | curl -L "https://github.com/kubernetes-sigs/cri-tools/releases/download/${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz" | tar -C /opt/bin -xz 21 | ``` 22 | 23 | ## Copy and paste the snippets below one by one into your terminal - this is for both Master and Worker Nodes 24 | 25 | ``` 26 | RELEASE="$(curl -sSL https://dl.k8s.io/release/stable.txt)" 27 | 28 | mkdir -p /opt/bin 29 | cd /opt/bin 30 | curl -L --remote-name-all https://storage.googleapis.com/kubernetes-release/release/${RELEASE}/bin/linux/amd64/{kubeadm,kubelet,kubectl} 31 | chmod +x {kubeadm,kubelet,kubectl} 32 | 33 | RELEASE_VERSION="v0.2.7" 34 | curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubelet/lib/systemd/system/kubelet.service" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service 35 | mkdir -p /etc/systemd/system/kubelet.service.d 36 | curl -sSL "https://raw.githubusercontent.com/kubernetes/release/${RELEASE_VERSION}/cmd/kubepkg/templates/latest/deb/kubeadm/10-kubeadm.conf" | sed "s:/usr/bin:/opt/bin:g" > /etc/systemd/system/kubelet.service.d/10-kubeadm.conf 37 | 38 | systemctl enable --now kubelet 39 | ``` 40 | 41 | ## Initializing kubeadm - This is only for the Master 42 | 43 | ``` 44 | kubeadm init 45 | ``` 46 | 47 | ## Setting up kubeconfig - This is only for the Master Node 48 | 49 | ``` 50 | mkdir -p $HOME/.kube 51 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 52 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 53 | ``` 54 | 55 | ## Copying the token from the kubeadm init output - copy this from the Master.
This join command is printed by the Master node only. 56 | 57 | ``` 58 | kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> 59 | ``` 60 | 61 | ## Installing Network Plugin - This is only for the Master Node 62 | 63 | ``` 64 | kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" 65 | ``` 66 | 67 | ## Joining Worker Nodes - This is only for Worker Nodes 68 | 69 | ``` 70 | kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> 71 | ``` 72 | 73 | ## Note: 74 | 75 | Please do not run Master node snippet commands on Worker Nodes. 76 | 77 | ## References 78 | 1. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl 79 | 2. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node 80 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # KubeZilla Community Collaborative Project 2 | [![Gitter](https://img.shields.io/gitter/room/DAVFoundation/DAV-Contributors.svg?style=flat-square)](https://gitter.im/kubezilla/community) 3 | 4 | 5 | ## Special Thanks 6 | 7 | 8 | [![Kubezilla in action](https://github.com/collabnix/kubezilla/blob/master/sddefault.jpg)](https://www.youtube.com/watch?v=ghyjiV3ID-k) 9 | 10 | https://www.youtube.com/watch?v=ghyjiV3ID-k 11 | 12 | **DigitalOcean sponsored $500 in credits to provision multi-master nodes for HA** 13 | 14 | 15 | We are aiming to build the largest Kubernetes community cluster and plan to showcase it on OSCONF Kochi Day. 16 | 17 | ## Why are we doing this? 18 | 19 | It's a great opportunity for community members to learn, collaborate and contribute around Kubernetes and related technologies.
As a team, we will learn how a Kubernetes cluster is set up, how apps get deployed on the Cloud, and what kinds of microservices can possibly run on these HUGE cluster nodes. Community members will learn how monitoring tools like Prometheus and Grafana can be deployed to fetch time-series metrics out of this HUGE cluster of nodes. In a nutshell, it's a 2-3 hour effort that will let every single individual learn Kubernetes and understand its scalability. 20 | 21 | ## When? 22 | 23 | > We are targeting 27th June, starting 2:00 PM till 4:00 PM, for Kubezilla. [Refer to Issue #6](https://github.com/collabnix/kubezilla/issues/6) 24 | 25 | | Activity | Date | Time | 26 | | :-------: | :------------: | :----------------: | 27 | | Rehearsal | 21st June 2020 |11:00 AM to 1:00 PM | 28 | | Live Demo | 27th June 2020 | 2:00 PM to 4:00 PM | 29 | 30 | 31 | ## How shall I join my worker node? 32 | 33 | It's simple. Follow the steps below: 34 | 35 | ### Installing Docker 36 | 37 | ``` 38 | curl -sSL https://get.docker.com/ | sh 39 | ``` 40 | 41 | ## Run the container below 42 | 43 | **Make sure to update the labels below before running.** 44 | 45 | > Very Important 46 | 47 | Node can be **node=cloud** or **node=rpi** or **node=jetson** 48 | 49 | **name=<your-name>** 50 | 51 | ``` 52 | sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.5 --server https://success.kubezilla.com --token xg8sm7fgxgls7p52rcjbkjsszp2cb8l4gmfqpvxt48s65dcqhjvqww --worker --label node=cloud --label name=kubezilla 53 | ``` 54 | 55 | That's it. Open up success.kubezilla.com with kubezilla as the login and kubezilla as the password. 56 | 57 | 58 | ## Contribution Proposal 59 | 60 | 1. Create a pull request if you're interested in contributing your FREE Cloud instance (AWS/Azure/GCP/DO). 61 | 2. You can even contribute your Raspberry Pi if you know how to connect it to this cluster. 62 | 3.
Please also include your full name, Twitter handle *and* your company name. 63 | 4. Please note that the node's specification for this run is **XGB of RAM with X vCore**. 64 | We're sorry, but 512MB will not be enough for our testing. 65 | 66 | 67 | ## What are the minimum requirements for a node? 68 | 69 | 70 | - 2 GB or more of RAM per machine (any less will leave little room for your apps) 71 | - 2 CPUs or more 72 | - Full network connectivity between all machines in the cluster (public or private network is fine) 73 | - Unique hostname, MAC address, and product_uuid for every node. See here for more details. 74 | - Certain ports are open on your machines. See here for more details. 75 | - Swap disabled. You MUST disable swap in order for the kubelet to work properly. 76 | - TCP Inbound 10250 open for the Kubelet API 77 | - TCP Inbound 30000-32767 open for NodePort Services 78 | 79 | ## Size of Master & Master Components 80 | 81 | 82 | On GCE/Google Kubernetes Engine and AWS, kube-up automatically configures the proper VM size for your master depending on the number of nodes in your cluster. On other providers, you will need to configure it manually. 83 | 84 | For reference, the sizes we use on GCE are: 85 | 86 | ``` 87 | 1-5 nodes: n1-standard-1 88 | 6-10 nodes: n1-standard-2 89 | 11-100 nodes: n1-standard-4 90 | 101-250 nodes: n1-standard-8 91 | 251-500 nodes: n1-standard-16 92 | more than 500 nodes: n1-standard-32 93 | ``` 94 | 95 | And the sizes we use on AWS are: 96 | 97 | ``` 98 | 1-5 nodes: m3.medium 99 | 6-10 nodes: m3.large 100 | 11-100 nodes: m3.xlarge 101 | 101-250 nodes: m3.2xlarge 102 | 251-500 nodes: c4.4xlarge 103 | more than 500 nodes: c4.8xlarge 104 | ``` 105 | 106 | # Ports required to be open on Worker Nodes 107 | 108 | ``` 109 | TCP 10250 Kubelet API 110 | TCP 10255 Read-Only Kubelet API 111 | ``` 112 | 113 | ## Contributors 114 | 115 | 116 | | Name | Company | Number of Nodes
Expected to Contribute | Machine Type | 117 | | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | :-----------------------------------: | :---------------------------------------: | :----------------------------------: | 118 | | [@ajeetsraina](https://twitter.com/ajeetsraina) | Collabnix | 10 | | 119 | | [@anmolnagpal](https://twitter.com/anmol_nagpal)
([github](https://github.com/anmolnagpal)) ([linkedin](https://www.linkedin.com/in/anmolnagpal/)) | [CloudDrove Inc.](https://clouddrove.com) | 45 | t3.medium/t3a.medium (2 vCPUs, 4 GB) | 120 | | [@cube8021](https://twitter.com/cube8021) | Rancher Labs | 5 | | 121 | | [@dostiharise](https://twitter.com/dostiharise)
([github](https://github.com/dostiharise)) ([linkedin](https://www.linkedin.com/in/harikrishnaganji/)) | [Alvyl Consulting](https://alvyl.com) | 32 | t3.medium/t3a.medium (2 vCPUs, 4 GB) | 122 | | [@dostiharise](https://twitter.com/dostiharise)
([github](https://github.com/dostiharise)) ([linkedin](https://www.linkedin.com/in/harikrishnaganji/)) | [Alvyl Consulting](https://alvyl.com) | 3 | Raspberry Pis 3B+/4B | 123 | | [@vinayaggarwal](https://twitter.com/vnyagarwal) | Dellemc | 2 | | 124 | | [@josanabr](https://twitter.com/josanabr) | Personal | 2 | Raspberry Pis 3B+/4B | 125 | | [@kvenu](https://www.linkedin.com/in/kumaresan-venu-91649aa1/) | Personal | 2 | | 126 | | [@MeenachiSundaram](https://twitter.com/vmeenachis)
([github](https://github.com/MeenachiSundaram)) ([linkedin](https://www.linkedin.com/in/meenz/)) | Personal | 2 | Raspberry Pis 4B | 127 | | [@stefscherer](https://twitter.com/stefscherer) | [Docker Inc.](https://docker.com) | 2 | Raspberry Pis 3B+ | 128 | | [@stefscherer](https://twitter.com/stefscherer) | [Docker Inc.](https://docker.com) | 10 | Azure (5x D8s, 5x D32s) | 129 | | [@ginigangadharan](https://twitter.com/ginigangadharan) | | | | 130 | | [@omkarjoshi](https://www.linkedin.com/in/omkarj/) | | | | 131 | 132 | | | Your Company Name | 8 | | 133 | 134 | 135 | ## Beginner's Guide 136 | If you're an individual and it's your first time joining KubeZilla, we encourage you *not* to contribute more than 50 nodes. 137 | 138 | ## Setting up Master Node 139 | 140 | - [Click Here](https://github.com/collabnix/kubezilla/blob/master/master/README.md) 141 | 142 | ## Goals 143 | - This is the 1st collaborative project powered by the Collabnix Slack community to form 100+ nodes 144 | - This is being planned on the latest Kubernetes version 145 | - Networking: we will be creating a large /20 subnet and trying to assign as many IP addresses as possible to containers on each node, distributed across the cluster. We expect to have around ~1k IP addresses assigned, and the workload application should keep working fine. 146 | - Once the 100+ node K8s cluster is set up, we will be running an application to see the performance 147 | - We will be using Prometheus as well as Grafana for visualisation 148 | 149 | ## Results 150 | All experimental results will be provided publicly for all of you to analyze, write blogs about, 151 | or even use as input for further development of your own commercial projects. Please feel free to use them. 152 | 153 | ## Who's behind Kubezilla? 154 | 155 | Kubezilla is a non-profit project organized by Docker Captain [Ajeet Singh Raina](https://twitter.com/ajeetsraina). If you want to be part of the organizing team or support it through sponsorship, please send us a DM on Gitter.
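As a rough sanity check on the networking goal above: a /20 subnet leaves 12 host bits, so it holds 4096 addresses (4094 usable once the network and broadcast addresses are subtracted), which comfortably covers the ~1k container IPs we expect to assign. A quick shell sketch of the arithmetic:

```shell
# Addresses available in a subnet: 2^(32 - prefix_length).
PREFIX=20
TOTAL=$(( 1 << (32 - PREFIX) ))
USABLE=$(( TOTAL - 2 ))  # subtract network and broadcast addresses
echo "/$PREFIX gives $TOTAL addresses ($USABLE usable)"
```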
156 | 157 | ## Finding it difficult to raise a PR to contribute here? 158 | Add your nodes [here](https://docs.google.com/forms/d/e/1FAIpQLScoezFOQjtXUY2U0bkxdyr0BXTR__1ARufoJNd1l5m8idewrQ/viewform?usp=sf_link) 159 | -------------------------------------------------------------------------------- /Raspberry_Pi_Setup.md: -------------------------------------------------------------------------------- 1 | # Steps for setting up a Raspberry Pi: 2 | 3 | 1) Download the Ubuntu image for your device into your **Downloads** folder using this link: 4 | 5 | https://ubuntu.com/download/raspberry-pi/thank-you?version=18.04.4&architecture=arm64+raspi3 6 | 2) Insert your microSD card. 7 | 8 | 3) Unmount your microSD card with the following command: 9 | 10 | diskutil unmountDisk < drive address > 11 | 12 | 4) You can now copy the image to the microSD card using the following command (the image is xz-compressed, so use xzcat rather than gunzip): 13 | 14 | sudo sh -c 'xzcat ~/Downloads/ubuntu-18.04.4-preinstalled-server-arm64+raspi3.img.xz | dd of=< drive address > bs=32m' 15 | 16 | 5) Make sure all machines are updated (using the command **sudo apt-get update**) 17 | 18 | 6) Enable the cgroup memory: 19 | 20 | Add **cgroup_enable=cpuset cgroup_enable=memory cgroup_memory=1** to /boot/firmware/nobtcmd.txt and restart. 21 | 22 | ## Installing Docker 23 | 1) The first thing to do is install Docker. To do this, log into the Pi using SSH and issue the command: 24 | 25 | **sudo apt-get install docker.io** 26 | 2) Once Docker is installed, you need to add your user to the docker group. To do so, issue the command: 27 | 28 | **sudo usermod -aG docker $USER** 29 | 3) Log out and log back in, so the changes will take effect.
30 | Start and enable the docker daemon with the commands: 31 | 32 | **sudo systemctl start docker** 33 | 34 | **sudo systemctl enable docker** 35 | ## Installing Kubernetes 36 | 1) First, add the Kubernetes GPG key with the command (note the trailing dash, which tells apt-key to read from stdin): 37 | 38 | **curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -** 39 | 2) If you find that curl isn't installed (it should be), install it with the command: 40 | 41 | **sudo apt-get install curl -y** 42 | 3) Next, add the necessary repository with the command: 43 | 44 | **sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"** 45 | 4) Install the necessary software with the command: 46 | 47 | **sudo apt-get install kubeadm kubelet kubectl -y** 48 | 49 | 5) Join the Worker Node using this command: 50 | 51 | **kubeadm join --token \<token\> \<master-ip\>:\<master-port\> --discovery-token-ca-cert-hash sha256:\<hash\>** 52 | -------------------------------------------------------------------------------- /centos7-kubeadm.md: -------------------------------------------------------------------------------- 1 | # Installing Kubernetes with Kubeadm on CentOS 7/RHEL 7/Fedora 25+ 2 | 3 | ## Installing kubeadm, kubectl & kubelet 4 | 5 | 1.
Copy and paste the snippets below one by one into your terminal - this is for both Master and Worker Nodes 6 | 7 | ``` 8 | cat <<EOF > /etc/yum.repos.d/kubernetes.repo 9 | [kubernetes] 10 | name=Kubernetes 11 | baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch 12 | enabled=1 13 | gpgcheck=1 14 | repo_gpgcheck=1 15 | gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg 16 | exclude=kubelet kubeadm kubectl 17 | EOF 18 | ``` 19 | 20 | ## Setting up SELinux in permissive mode (effectively disabling it) 21 | 22 | ``` 23 | setenforce 0 24 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 25 | yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes 26 | systemctl enable --now kubelet 27 | ``` 28 | 29 | ## Initialize kubeadm - This is only for the Master 30 | 31 | ``` 32 | kubeadm init 33 | ``` 34 | 35 | ## Setting up kubeconfig - This is only for the Master Node 36 | 37 | ``` 38 | mkdir -p $HOME/.kube 39 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 40 | chown $(id -u):$(id -g) $HOME/.kube/config 41 | ``` 42 | 43 | ## Copy the token from the kubeadm init output - copy this from the Master node 44 | 45 | ``` 46 | kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> 47 | ``` 48 | 49 | ## Installing Network Plugin - This is only for the Master Node 50 | 51 | ``` 52 | kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" 53 | ``` 54 | 55 | ## Join Worker Nodes - This is only for Worker Nodes 56 | 57 | ``` 58 | kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> 59 | ``` 60 | 61 | ## Note: 62 | 63 | Please do not run Master node snippet commands on Worker Nodes. 64 | 65 | ## References 66 | 1. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl 67 | 2.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node 68 | -------------------------------------------------------------------------------- /connect-to-master.md: -------------------------------------------------------------------------------- 1 | - [Steps for connecting to Master Node](#steps-for-connecting-to-master-node) 2 | - [Follow below documentation for kubeadm, kubectl & kubelet installation.](#follow-below-documentation-for-kubeadm-kubectl--kubelet-installation) 3 | - [Reset existing kubeadm nodes](#reset-existing-kubeadm-nodes) 4 | - [Connect to Master](#connect-to-master) 5 | 6 | # Steps for connecting to Master Node 7 | 8 | The steps below help you install, reset existing nodes (optional) and connect to the master node. 9 | 10 | ## Follow below documentation for kubeadm, kubectl & kubelet installation. 11 | 12 | But skip the master node part and follow the steps below in this doc for connecting to the master node. 13 | 14 | * [ubuntu16](./ubuntu16-kubeadm.md) 15 | * [centos7](./centos7-kubeadm.md) 16 | * [container linux](./ContainerLinux-with-kubeadm.md) 17 | 18 | ## Reset existing kubeadm nodes 19 | 20 | If you have existing, working kubeadm nodes that are already connected to a master, please use the following commands to reset them before connecting to the new master.
21 | 22 | ``` 23 | ubuntu@rpi1:~$ sudo kubeadm reset -f 24 | # the -f/--force flag resets without asking for confirmation 25 | 26 | ubuntu@rpi1:~$ sudo rm -rf /etc/cni/net.d 27 | 28 | ubuntu@rpi1:~$ sudo iptables -F 29 | ``` 30 | 31 | ## Connect to Master 32 | 33 | TODO -------------------------------------------------------------------------------- /images/README.md: -------------------------------------------------------------------------------- 1 | # Images 2 | -------------------------------------------------------------------------------- /master/README.md: -------------------------------------------------------------------------------- 1 | # How to set up the Master Node for Kubezilla 2 | 3 | 4 | ## Configuring the network 5 | 6 | By default, if you run ```ifconfig``` on your Cloud instance, it shows the private IP address (not the public one). 7 | You must add a public-IP interface for kubeadm init to work. Caution: if you don't add the public-IP interface, kubeadm init --apiserver-advertise-address=<public-ip> will not succeed. 8 | 9 | Follow the steps below to configure the public IP: 10 | 11 | 12 | ## Tested Environment: Ubuntu 16.04/18.04/20.04 13 | 14 | 15 | ### Step #1: Access your Cloud Instance via SSH 16 | 17 | ### Step #2: Create the configuration file and open an editor 18 | 19 | ``` 20 | touch /etc/netplan/60-floating-ip.yaml 21 | nano /etc/netplan/60-floating-ip.yaml 22 | ``` 23 | 24 | ### Step #3: Paste the following configuration into the editor and replace <your-public-ip> with your actual public IP. 25 | 26 | IPv4: 27 | 28 | ``` 29 | network: 30 | version: 2 31 | ethernets: 32 | eth0: 33 | addresses: 34 | - <your-public-ip>/32 35 | ``` 36 | 37 | 38 | ### Step #4: Restart your network.
Caution: This will reset your network connection 39 | 40 | ``` 41 | sudo netplan apply 42 | ``` 43 | 44 | ### Step #5: Verify that the private IP on the interface has been replaced by the public IP 45 | 46 | ``` 47 | $ ifconfig ens4 48 | ens4: flags=4163 mtu 1460 49 | inet 34.75.235.117 netmask 255.255.255.255 broadcast 0.0.0.0 50 | inet6 fe80::4001:aff:fe8e:3 prefixlen 64 scopeid 0x20 51 | ether 42:01:0a:8e:00:03 txqueuelen 1000 (Ethernet) 52 | RX packets 487336 bytes 175945336 (175.9 MB) 53 | RX errors 0 dropped 0 overruns 0 frame 0 54 | TX packets 409437 bytes 278796504 (278.7 MB) 55 | TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 56 | ``` 57 | 58 | 59 | ### Step #6: Initialize kubeadm 60 | 61 | 62 | 63 | ``` 64 | sudo kubeadm init --apiserver-advertise-address=34.75.235.117 --control-plane-endpoint=34.75.235.117 --upload-certs --pod-network-cidr 10.5.0.0/16 65 | ``` 66 | 67 | ### Step #7: Kubeconfig Configuration 68 | 69 | ``` 70 | mkdir -p $HOME/.kube 71 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 72 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 73 | ``` 74 | 75 | 76 | ### Step #8: Configuring Kube Router 77 | 78 | We tested with Calico and Weave, but they kept crashing. Kube Router looks to be the perfect solution.
79 | 80 | ``` 81 | kubectl apply -f https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/kubeadm-kuberouter.yaml 82 | ``` 83 | 84 | ### Step #9: Verifying Kube components 85 | 86 | ``` 87 | $ sudo kubectl get componentstatus 88 | NAME STATUS MESSAGE ERROR 89 | scheduler Healthy ok 90 | controller-manager Healthy ok 91 | etcd-0 Healthy {"health":"true"} 92 | 93 | ``` 94 | 95 | ### Step #10: Ensuring kube router is up and running 96 | 97 | 98 | ``` 99 | $ sudo kubectl get po -A | grep kube 100 | kube-system coredns-66bff467f8-m2k4d 1/1 Running 0 90m 101 | kube-system coredns-66bff467f8-s55wl 1/1 Running 0 90m 102 | kube-system etcd-worker1 1/1 Running 0 90m 103 | kube-system kube-apiserver-worker1 1/1 Running 0 90m 104 | kube-system kube-controller-manager-worker1 1/1 Running 0 90m 105 | kube-system kube-proxy-bmz5t 1/1 Running 0 81m 106 | kube-system kube-proxy-czxns 1/1 Running 0 87m 107 | kube-system kube-proxy-f2lx9 1/1 Running 0 90m 108 | kube-system kube-proxy-fmtv5 1/1 Running 0 78m 109 | kube-system kube-proxy-gh9jf 1/1 Running 0 44m 110 | kube-system kube-proxy-kdbv6 1/1 Running 0 52m 111 | kube-system kube-proxy-sswqx 1/1 Running 0 47m 112 | kube-system kube-router-lnrkq 1/1 Running 0 90m 113 | kube-system kube-router-mmf95 1/1 Running 0 47m 114 | kube-system kube-router-nmfhc 1/1 Running 0 78m 115 | kube-system kube-router-pxkvt 1/1 Running 0 52m 116 | kube-system kube-router-q7lq6 1/1 Running 0 81m 117 | kube-system kube-router-rx4bm 1/1 Running 0 44m 118 | kube-system kube-router-xkdpd 1/1 Running 0 87m 119 | kube-system kube-scheduler-worker1 1/1 Running 0 90m 120 | ``` 121 | 122 | ### Step #11: Verify the nodes 123 | 124 | ``` 125 | $ sudo kubectl get nodes 126 | NAME STATUS ROLES AGE VERSION 127 | node1 Ready master 96m v1.18.4 128 | ``` 129 | -------------------------------------------------------------------------------- /sddefault.jpg: --------------------------------------------------------------------------------
https://raw.githubusercontent.com/collabnix/kubezilla/52331903864bd5e2bcee527371b5b6503d34568a/sddefault.jpg -------------------------------------------------------------------------------- /terraform/alvyl/ec2-setup.tf: -------------------------------------------------------------------------------- 1 | 2 | terraform { 3 | required_version = "~> 0.12.0" 4 | } 5 | 6 | variable "ami" { 7 | default = "ami-068663a3c619dd892" 8 | } 9 | 10 | variable "instance_count" { 11 | default = "2" 12 | } 13 | 14 | variable "instance_type" { 15 | default = "t3.micro" 16 | } 17 | 18 | variable "aws_region" { 19 | default = "us-east-1" 20 | } 21 | 22 | variable "ssh_key_home" { 23 | default = "~/.ssh" 24 | } 25 | 26 | provider "aws" { 27 | version = "~> 2.0" 28 | region = var.aws_region 29 | profile = "kubezilla" 30 | } 31 | 32 | data "template_file" "installation_file" { 33 | count = "${var.instance_count}" 34 | template = file("${path.module}/install_kubernetes.sh") 35 | 36 | vars = { 37 | server_name = "kubezilla-alvyl-${count.index + 1}" 38 | } 39 | } 40 | 41 | resource "aws_key_pair" "setup_key_pair" { 42 | key_name = "kubezilla" 43 | public_key = file("${var.ssh_key_home}/kubezilla.pub") 44 | } 45 | 46 | resource "aws_instance" "my-instance" { 47 | count = var.instance_count 48 | ami = var.ami 49 | instance_type = var.instance_type 50 | key_name = aws_key_pair.setup_key_pair.key_name 51 | user_data = data.template_file.installation_file[count.index].rendered 52 | 53 | tags = { 54 | Name = "kubezilla-alvyl-${count.index + 1}" 55 | } 56 | } -------------------------------------------------------------------------------- /terraform/alvyl/install_kubernetes.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | sudo apt-get update && sudo apt-get upgrade -y 4 | 5 | sudo apt-get install -y docker.io 6 | 7 | sudo usermod -aG docker $USER 8 | 9 | sudo systemctl start docker 10 | 11 | sudo systemctl enable docker 12 | 13 | # installing
kubernetes 14 | 15 | sudo apt-get install curl -y 16 | 17 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 18 | 19 | sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main" 20 | 21 | sudo apt-get update 22 | 23 | sudo apt-get install kubeadm kubelet kubectl -y 24 | 25 | sudo hostnamectl set-hostname "${server_name}" -------------------------------------------------------------------------------- /tips/README.md: -------------------------------------------------------------------------------- 1 | # How to retrieve Kubernetes Master token 2 | 3 | ``` 4 | kubeadm token generate 5 | kubeadm token create --print-join-command --ttl=0 6 | ``` 7 | -------------------------------------------------------------------------------- /ubuntu16-kubeadm.md: -------------------------------------------------------------------------------- 1 | 2 | # Installing Kubernetes with kubeadm on Ubuntu 16.04+/Debian 9+/HypriotOS v1.0.1+ 3 | 4 | 5 | ## Pre-requisite 6 | 7 | - Install Docker using ```curl -sSL https://get.docker.com/ | sh ``` 8 | 9 | ## Installing kubeadm kubectl kubelet 10 | 11 | Copy and paste the snippets below one by one into your terminal - this is for both Master and Worker Nodes 12 | 13 | ``` 14 | sudo apt-get update && sudo apt-get install -y apt-transport-https curl 15 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 16 | cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list 17 | deb https://apt.kubernetes.io/ kubernetes-xenial main 18 | EOF 19 | sudo apt-get update 20 | sudo apt-get install -y kubelet kubeadm kubectl 21 | ``` 22 | 23 | ## Initializing kubeadm - This is only for the Master 24 | 25 | ``` 26 | kubeadm init 27 | ``` 28 | 29 | ## Setting up kubeconfig - This is only for the Master Node 30 | 31 | ``` 32 | mkdir -p $HOME/.kube 33 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 34 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 35 | ``` 36 | 37 | ## Copying the token from the kubeadm init output - copy this from the Master node 38 | 39 | ``` 40 | kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> 41 | ``` 42 | 43 | 44 | ## Installing Network Plugin - This is only for the Master Node 45 | 46 | ``` 47 | kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" 48 | ``` 49 | 50 | 51 | ## Joining Worker Nodes - This is only for Worker Nodes 52 | 53 | ``` 54 | kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash> 55 | ``` 56 | 57 | ## Note: 58 | 59 | Please do not run Master node snippet commands on Worker Nodes. 60 | 61 | ## References 62 | 1.
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl 65 | 2. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#initializing-your-control-plane-node 66 | --------------------------------------------------------------------------------
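A practical tip that applies to every join command in these guides: if you have lost the kubeadm init output, the value for --discovery-token-ca-cert-hash can be recomputed from the cluster CA certificate. This is a sketch; /etc/kubernetes/pki/ca.crt is the default kubeadm location on the master, so adjust the path if your setup differs.

```shell
# Recompute the sha256 CA cert hash used by "kubeadm join".
# CA_CRT defaults to the standard kubeadm location; override it if needed.
CA_CRT=${CA_CRT:-/etc/kubernetes/pki/ca.crt}
if [ -f "$CA_CRT" ]; then
  openssl x509 -pubkey -in "$CA_CRT" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
else
  echo "CA certificate not found at $CA_CRT" >&2
fi
```

Pair the resulting hash with a fresh token from `kubeadm token create --print-join-command` (see tips/README.md) to rebuild the full join command.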