├── .gitignore
├── LICENSE
├── README.md
├── ks.cfg.template
├── provision.sh
├── provision.vars
└── scripts
    ├── backup_etcd.sh
    └── install_rancher_server.sh

/.gitignore:
--------------------------------------------------------------------------------
cluster.rkestate
local
cluster.yml
hosts_entries
ks-1.cfg
ks-2.cfg
ks-3.cfg
ks-4.cfg
ks-5.cfg
ks-6.cfg
kube_config_cluster.yml
tiller-rbac-config.yaml
system-tools
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Henrik René Høegh

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Rancher launcher kvm
An easy way to get a Rancher Kubernetes cluster up and running on KVM/Libvirt.

> This project is sponsored by Praqma.com

This script creates virtual machines in KVM, prepared with Docker and an SSH key. It also generates a cluster.yml that RKE can use to provision a Kubernetes cluster. The cluster can then be imported into the Rancher management UI.

Create 3 nodes by running
```
./provision.sh 3
```

You will end up with 3 virtual machines, each with a user named rke authorized with the SSH public key found in ~/.ssh/ on your host. A cluster.yml will also be generated.

By default, all nodes run the etcd, controlplane and worker roles. Edit cluster.yml to change this to your liking.

Then simply run rke to create the cluster:

```
rke up
```

When it finishes, the cluster is running and RKE has generated a kubeconfig file you can use with kubectl.

```
kubectl get cs --kubeconfig kube_config_cluster.yml
```
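A quick sanity check after `rke up` — a minimal sketch, assuming the generated `hosts_entries` and `kube_config_cluster.yml` files are in the current directory and you can use sudo:

```
# Make the node hostnames resolvable on your host (hosts_entries is written by provision.sh)
cat hosts_entries | sudo tee -a /etc/hosts

# Confirm every node registered and reports Ready
kubectl get nodes --kubeconfig kube_config_cluster.yml
```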
# Install Rancher UI
I made a small script that installs the Rancher Server (UI) via Helm.
To use it, you need to download the Helm client and have it in your PATH.
```
vi install_rancher_server.sh
# Change the line with the hostname
# Add the hostname to your /etc/hosts on a worker node
```

Or provide the IP address of the load balancer as an input argument when running the script, and it will be used as the host.
Hostname: e.g. rancher-127.0.0.1.nip.io
```
cd scripts
./install_rancher_server.sh '127.0.0.1'
```
It creates namespaces for cert-manager and Rancher, adds the Helm repositories for Rancher and cert-manager, and installs them in their respective namespaces.
--------------------------------------------------------------------------------
/ks.cfg.template:
--------------------------------------------------------------------------------
# Install OS instead of upgrade
install
# Install from CDROM media
cdrom
# Root password
rootpw TMPL_PSWD
# System authorization information
auth --useshadow --passalgo=sha512

# Firewall configuration
firewall --disabled
# SELinux configuration
selinux --permissive

# Installation logging level
logging --level=info
# Use text mode install
text
# Do not configure the X Window System
skipx
# System timezone, language and keyboard
timezone --utc Europe/Copenhagen
lang da_DK.UTF-8
keyboard dk-latin1
# Network information
# Example of a plain static config:
# network --bootproto=static --ip=192.168.122.110 --device=eth0 --onboot=on
# The active static config below gets its TMPL_* placeholders filled in by provision.sh:
network --device eth0 --hostname TMPL_HOSTNAME --bootproto=static --ip=TMPL_IP --netmask=255.255.255.0 --gateway=192.168.122.1 --nameserver 192.168.122.1


# System bootloader configuration
bootloader --location=mbr
# Partition clearing information
clearpart --all --initlabel
# Disk partitioning information
part /boot --fstype="ext4" --size=512
#part swap --fstype="swap" --recommended
part /var --fstype="ext4" --size=5120 --grow
part / --fstype="ext4" --size=1024 --grow
part /usr --fstype="ext4" --size=3072
part /home --fstype="ext4" --size=512
part /tmp --fstype="ext4" --size=1024

# Reboot after installation
reboot

%packages --nobase
@core
# @base

%end

%post --log=/root/ks-post.log
#---- Install packages used by kubernetes
yum update -y
yum install -y socat libseccomp-devel btrfs-progs-devel util-linux nfs-utils conntrack-tools.x86_64

#---- Install packages used by Longhorn
yum install -y curl findmnt grep awk blkid iscsi-initiator-utils

#---- Set bridge-nf-call
echo "net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.conf

#---- Add user RKE -----
groupadd docker
adduser rke
echo "rke:praqma" | chpasswd
usermod -aG docker rke

#---- Enable kernel modules -----

for module in br_netfilter ip6_udp_tunnel ip_set ip_set_hash_ip ip_set_hash_net iptable_filter iptable_nat iptable_mangle iptable_raw nf_conntrack_netlink nf_conntrack nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat nf_nat_ipv4 nf_nat_masquerade_ipv4 nfnetlink udp_tunnel veth vxlan x_tables xt_addrtype xt_conntrack xt_comment xt_mark xt_multiport xt_nat xt_recent xt_set xt_statistic xt_tcpudp;
do
  modprobe $module
done


#---- Install our SSH key ----
mkdir -m0700 /home/rke/.ssh/
cat <<EOF >/home/rke/.ssh/authorized_keys
TMPL_SSH_KEY
EOF


### Disabling swap (now and permanently)
swapoff -a
sed -i '/^\/swapfile/ d' /etc/fstab


### set permissions
chmod 0600 /home/rke/.ssh/authorized_keys
chown -R rke:rke /home/rke/.ssh

### fix up selinux context
restorecon -R /home/rke/.ssh/authorized_keys


### Install Docker
curl https://releases.rancher.com/install-docker/18.09.2.sh | sh
systemctl enable docker

### Enabling iscsid for Longhorn
systemctl enable iscsid

%end
--------------------------------------------------------------------------------
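Once a VM built from this kickstart has rebooted, a rough check that the %post section took effect is to SSH in as the rke user — a sketch, assuming the first node got the default 192.168.122.111 address:

```
ssh rke@192.168.122.111 'docker info --format "{{.ServerVersion}}"'   # Docker installed and the rke user is in the docker group
ssh rke@192.168.122.111 'systemctl is-enabled docker iscsid'          # both should print "enabled"
ssh rke@192.168.122.111 'lsmod | grep br_netfilter'                   # bridge netfilter module loaded for Kubernetes networking
```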
/provision.sh:
--------------------------------------------------------------------------------
#!/bin/bash
source provision.vars

add_node_to_cluster() {
  local VM_IP=$((110 + $1))

  echo " - address: 192.168.122.$VM_IP
   user: rke
   ssh_key_path: ~/.ssh/id_rsa
   role:
    - controlplane
    - etcd
    - worker" >> cluster.yml
}

create_vm () {
  local VM_NB=$1
  local VM_KS="ks-$VM_NB.cfg"
  local VM_IP=$((110 + $VM_NB))
  local VM_PORT=$((5900 + $VM_NB))

  echo "Using port $VM_PORT"

  echo "Cleaning up old kickstart file..."
  rm -f $VM_KS

  echo "Creating new ks.cfg file..."
  cp ks.cfg.template $VM_KS
  sed -i 's/TMPL_PSWD/praqma/g' $VM_KS
  sed -i 's/TMPL_HOSTNAME/'$vm_prefix-$VM_NB'/g' $VM_KS
  sed -i 's/TMPL_IP/192.168.122.'$VM_IP'/g' $VM_KS
  sed -i "s;TMPL_SSH_KEY;$SSH_KEY;g" $VM_KS

  echo "Creating disc image..."
  qemu-img create -f qcow2 $image_location/$vm_prefix-$VM_NB.qcow2 $vm_disc_size

  echo "Creating virtual machine and running installer..."
  virt-install --name $vm_prefix-$VM_NB \
    --description "$vm_description-$VM_NB" \
    --ram $vm_ram \
    --vcpus $vm_vcpu \
    --disk path=$image_location/$vm_prefix-$VM_NB.qcow2,size=15 \
    --os-type linux \
    --os-variant $vm_variant \
    --network bridge=virbr0 \
    --graphics vnc,listen=127.0.0.1,port=$VM_PORT \
    --location $vm_iso \
    --noautoconsole \
    --initrd-inject $VM_KS --extra-args="ks=file:/$VM_KS"

}

# Check if an SSH public key exists
if [ -f ~/.ssh/id_rsa.pub ]; then
  SSH_KEY=$(cat ~/.ssh/id_rsa.pub)
else
  echo "Public key not found. It will be left blank..."
  SSH_KEY=""
fi

# If no input is given, default to 1 server
SRV_NB=$1
if [ -z "$SRV_NB" ]; then
  SRV_NB=1
fi

echo "cluster_name: $k8s_name" > cluster.yml
echo "kubernetes_version: \"$k8s_version\"" >> cluster.yml

echo "" > hosts_entries

echo "nodes:" >> cluster.yml
echo "Creating $SRV_NB server(s)..."

for i in $( seq 1 $SRV_NB )
do
  echo "Creating VM $i"
  create_vm $i &
  add_node_to_cluster $i
  echo "192.168.122.$((110 + $i)) $vm_prefix-$i" >> hosts_entries
done

# Wait for machine commands to finish
wait

# Increase timeout for addons
# Workaround for issue: https://github.com/rancher/rke/issues/1652
echo "
addon_job_timeout: 60" >> cluster.yml


# Add network plugin
echo "
network:
  plugin: $k8s_network" >> cluster.yml


# Disable the built-in Nginx ingress if needed
if [ "$k8s_ingress" == "false" ]; then
  echo "ingress:
  provider: none" >> cluster.yml
fi

echo "Add these entries to your /etc/hosts"
cat hosts_entries

echo "

If you want to run Sonobuoy, you need to run this command after running rke up
kubectl label --overwrite node --selector node-role.kubernetes.io/controlplane=\"true\" node-role.kubernetes.io/master=\"true\"

This works around issue https://github.com/vmware-tanzu/sonobuoy/issues/574
"
--------------------------------------------------------------------------------
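Typical usage, as a sketch — it assumes libvirt is running, the ISO and disk paths from provision.vars exist, and your user is allowed to run virt-install:

```
# Create three VMs and watch the unattended installs
./provision.sh 3
watch virsh list --all            # wait until all installers have finished

# Peek at an individual install with any VNC client if you are curious
vncviewer 127.0.0.1:1             # display :1 = TCP port 5901, i.e. node 1

# If a VM powers off after the kickstart install instead of rebooting, start it again
virsh start k8s-1
```

When every node is back up, continue with `rke up` as described in the README.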
/provision.vars:
--------------------------------------------------------------------------------
# Password for the rke user on each VM.
rke_password="praqma"

# Where to store the qcow2 disc images
image_location="/vm-disks"

# Disc size for each VM
vm_disc_size="50G"

# A description of the VM
vm_description="Centos 7 - Kubernetes"

# Prefix to use for each machine
vm_prefix="k8s"

# Amount of memory each VM will get
vm_ram="8192"

# Number of virtual CPUs each VM will get
vm_vcpu="3"

# Set the variant for the OS
vm_variant="centos7.0"

# Path to the iso image to use.
# It needs to be a Red Hat family OS (RHEL, CentOS, Fedora...)
vm_iso="/cdimages/CentOS-7-x86_64-Minimal-1804.iso"

# Network options:
# none, flannel, canal, calico, weave
k8s_network="flannel"

# If enabled, the RKE ingress controller will be deployed.
# Set to "false" to use your own (traefik, istio, contour...)
k8s_ingress="enable"

# Name of the cluster
k8s_name="praqma"

# Get a list of the versions supported by your RKE with:
# rke config --system-images --all | grep images | cut -d"[" -f2 | cut -d"]" -f1
k8s_version="v1.17.17-rancher1-1"

--------------------------------------------------------------------------------
/scripts/backup_etcd.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# https://rancher.com/docs/rke/v0.1.x/en/etcd-snapshots/
rke etcd snapshot-save --name production --config cluster.yml
scp rke@192.168.122.111:/opt/rke/etcd-snapshots/production .
--------------------------------------------------------------------------------
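The matching restore path is not scripted here; a minimal sketch using the same snapshot name would be:

```
# Restore cluster state from the "production" snapshot taken above
rke etcd snapshot-restore --name production --config cluster.yml

# Depending on your RKE version, a follow-up reconcile may be needed
rke up --config cluster.yml
```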
/scripts/install_rancher_server.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Load balancer / node IP used to build the Rancher hostname; defaults to 127.0.0.1
LB_IP="${1:-127.0.0.1}"

echo ""
echo -e "Adding helm repos for cert-manager & Rancher\n"
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo update

echo -e "Creating namespaces\n"
kubectl create namespace cert-manager
kubectl create namespace cattle-system

sleep 1
echo -e "Installing cert-manager\n"

helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --version v1.2.0 \
  --set installCRDs=true

echo -e "Checking rollout status\n"
kubectl -n cert-manager rollout status deploy/cert-manager
echo ""
echo -e "Waiting for pods to initialize\n"
sleep 20

echo -e "Installing Rancher\n"
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher-${LB_IP}.nip.io
echo -e ""

echo -e "Getting deployment status\n"
kubectl -n cattle-system rollout status deploy/rancher
echo -e ""
kubectl -n cattle-system get deploy rancher
--------------------------------------------------------------------------------
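After the rollout completes, the UI should answer on the hostname the script set (e.g. https://rancher-127.0.0.1.nip.io). A sketch for the first login, assuming a recent Rancher chart that stores the bootstrap password in a secret (older releases instead prompt you to set an admin password on first login):

```
# Confirm the hostname the chart was installed with
kubectl -n cattle-system get ingress rancher

# Retrieve the generated bootstrap password (Rancher v2.6+ charts)
kubectl -n cattle-system get secret bootstrap-secret \
  -o go-template='{{.data.bootstrapPassword | base64decode}}{{"\n"}}'
```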