├── .gitignore
├── README.md
├── packer
│   ├── centos-7.1-overlayfs-thin.json
│   ├── http
│   │   └── kickstart
│   └── scripts
│       ├── compact.sh
│       ├── docker-cfg.sh
│       ├── install-vbox-tools.sh
│       ├── update-flannel.sh
│       └── update-kubernetes.sh
└── vagrant
    ├── kube-master
    │   └── Vagrantfile
    ├── kube-minion
    │   └── Vagrantfile
    └── salt
        ├── pillar
        │   ├── kube-global.sls
        │   ├── pillar_roots
        │   ├── system.sls
        │   └── top.sls
        └── salt
            ├── docker
            │   ├── config-files
            │   │   ├── cfg
            │   │   │   ├── docker-network.cfg
            │   │   │   ├── docker-storage.cfg
            │   │   │   ├── docker-storage.conf
            │   │   │   └── docker.service
            │   │   └── init.sls
            │   └── init.sls
            ├── etcd
            │   ├── config-files
            │   │   ├── cfg
            │   │   │   ├── etcd.conf
            │   │   │   └── network.json
            │   │   └── init.sls
            │   └── init.sls
            ├── file_roots.sls
            ├── flannel
            │   ├── config-files
            │   │   ├── cfg
            │   │   │   └── flanneld.cfg
            │   │   └── init.sls
            │   └── init.sls
            ├── master
            │   ├── init.sls
            │   ├── kubernetes
            │   │   ├── cfg
            │   │   │   ├── apiserver
            │   │   │   ├── authorization-policy.json
            │   │   │   ├── config
            │   │   │   ├── controller-manager
            │   │   │   ├── kube-apiserver.service
            │   │   │   ├── kube-controller-manager.service
            │   │   │   ├── kube-proxy.service
            │   │   │   ├── kube-scheduler.service
            │   │   │   ├── kubeconfig
            │   │   │   ├── kubectl-config
            │   │   │   ├── kubelet
            │   │   │   ├── kubelet.service
            │   │   │   ├── kubernetes-accounting.conf
            │   │   │   ├── kubernetes.conf
            │   │   │   ├── proxy
            │   │   │   ├── scheduler
            │   │   │   └── token.csv
            │   │   ├── cluster-addons
            │   │   │   ├── dns
            │   │   │   │   └── cfg
            │   │   │   │       ├── skydns-rc.yaml
            │   │   │   │       └── skydns-svc.yaml
            │   │   │   ├── grafana
            │   │   │   │   └── cfg
            │   │   │   │       ├── grafana-service.yaml
            │   │   │   │       ├── heapster-controller.yaml
            │   │   │   │       ├── heapster-service.yaml
            │   │   │   │       ├── influxdb-grafana-controller.yaml
            │   │   │   │       └── influxdb-service.yaml
            │   │   │   ├── kubernetes-dashboard
            │   │   │   │   └── cfg
            │   │   │   │       └── kubernetes-dashboard.yaml
            │   │   │   └── registry
            │   │   │       ├── registry-pv.yaml
            │   │   │       ├── registry-pvc.yaml
            │   │   │       ├── registry-rc.yaml
            │   │   │       └── registry-svc.yaml
            │   │   ├── init.sls
            │   │   └── users
            │   │       └── init.sls
            │   ├── post-boot-scripts
            │   │   └── configure.sh
            │   └── pre-start-scripts
            │       ├── gen-minion-cert.sh
            │       └── generate-certs.sh
            ├── minion
            │   ├── init.sls
            │   ├── kubernetes
            │   │   ├── cfg
            │   │   │   ├── config
            │   │   │   ├── kube-proxy.service
            │   │   │   ├── kubectl-config
            │   │   │   ├── kubelet
            │   │   │   ├── kubelet.service
            │   │   │   ├── kubernetes-accounting.conf
            │   │   │   ├── kubernetes.conf
            │   │   │   └── proxy
            │   │   ├── init.sls
            │   │   └── users
            │   │       └── init.sls
            │   └── post-boot-scripts
            │       ├── configure.sh
            │       └── copy-master-ca.sh
            ├── nfs
            │   ├── cfg
            │   │   └── exports
            │   └── init.sls
            ├── ntpd
            │   └── init.sls
            └── top.sls

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | .vagrant/
2 | **/add-route*
3 | packer_cache/
4 | **/*.box
5 | enm/
6 | .DS_Store
7 | 
8 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Dev-Stack
2 | 
3 | ## Background
4 | A small proof of concept for running a kubernetes cluster, specifically intended for development environments. It can create a kubernetes cluster comprised of one master and an arbitrary number of minions, and it can run on Linux, Windows or Mac.
5 | 
6 | The vagrant box is based on Centos 7.3 with the latest stable kernel 4.9.5 and docker 1.13.1; selinux is set to permissive mode and the firewall is down. The intention is to keep Centos, the kernel, docker and kubernetes up to date - they should always be the latest (see below for more details on current versions). There are several branches with different combinations of docker storage drivers and filesystems.
7 | 
8 | #### overlay2 with xfs or ext4
9 | 
10 | The master branch has docker running with the overlay2 storage driver backed by xfs. 
11 | *There are known bugs when using overlay2 with xfs (directories showing ????? instead of permissions etc.), so be aware - or alternatively use overlay2 with ext4 (clone the overlay2-ext4 branch), which seems to be far more stable.*
12 | 
13 | #### zfs storage driver
14 | 
15 | You can also check out the **zfs-filesystem** branch, which has docker running with the zfs storage driver. In the case of zfs, the kernel module is built via dkms, so do not update the kernel, as that requires a zfs kernel module rebuild - you have been warned!
16 | 
17 | #### lvm block device
18 | 
19 | Or, if you are a fan of lvm, you can check out the *lvm-blockdevice* branch, which has the lvm storage driver backed by a block device (keeping in mind this is just a virtual machine, the block device is a virtual sata disk added to the vm).
20 | 
21 | **This little demo was created as a development environment and should not be used in production, as kubernetes is configured with minimal security.**
22 | 
23 | ### What's inside the tin can
24 | - Centos 7.3, kernel 4.9.5, xfs, ext4, lvm or zfs
25 | - Docker 1.13.1, overlay2 storage driver
26 | - Kubernetes 1.5.2 with cluster-addons
27 | - Flanneld 0.7.0
28 | - Saltstack 2015.5.10 (Lithium)
29 | 
30 | 
31 | ### Requirements
32 | 1. [VirtualBox 5.1.4 or greater](https://www.virtualbox.org) or [Parallels 12](http://www.parallels.com)
33 | 2. [Vagrant 1.8.5 or greater](http://www.vagrantup.com)
34 | 3. VT-x/AMD-V virtualization must be enabled in the BIOS, as the virtual machines run 64-bit guests
35 | 4. If running in bridged mode (the default), DHCP is expected to assign an address to the vagrant box(es); otherwise you can export NETWORK_TYPE=private before starting the master and minions, and they will get private addresses and will not be accessible from outside (they also have to be on the same machine then; setting up access with the master and minions on separate machines is possible with some NAT magic, but that is beyond the scope of this little project).
36 | 5. Make sure to use the latest version of the vagrant boxes (1.5.0); there was a change
37 | in the flannel configuration which is not compatible with previous versions.
38 | 
39 | ### Getting started
40 | To get started, first start the kube master; after that you can start multiple minions anywhere on the network, as long as they can reach the master.
41 | 
42 | #### Starting kube master
43 | To start the kubernetes master (which will also be used to schedule docker containers), do:
44 | 
45 | 
46 |     cd vagrant/kube-master
47 |     ## By default the vagrant box will use 4G of RAM;
48 |     ## to use less or more, export the env variable MEM_SIZE,
49 |     ## example: export MEM_SIZE=8192 for an 8G VM
50 | 
51 |     ## Start the VM (or --provider=parallels)
52 |     vagrant up --provider=virtualbox
53 | 
54 | After the initial one-off download of the vagrant box from the vagrant repository, the box will be configured automatically. Depending on the network setup of your machine, it might ask which network interface you wish to use - normally choose the one you use to connect to the Internet (choice #1 is usually what you need, but this can vary from machine to machine).
55 | 
56 | Since kubernetes operates on a separate network, a script to create a route to your newly created kubernetes cloud will be generated in the same dir (for Windows, Linux and Mac), so run:
57 | 
58 |     ## Depending on your operating system, for example Linux:
59 | 
60 |     ./add-route-LIN.sh
61 | 
62 |     ## Which will create the route
63 |     ## using your VM as gateway. 
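64 | 
65 | If you prefer to add the route by hand, or the generated script does not fit your setup, it boils down to routing the two kubernetes networks via the VM. A minimal sketch for Linux, assuming your master was given 192.168.1.50 (substitute the address your VM actually got):
66 | 
67 |     ## Service and pod networks, matching service_cluster_cidr
68 |     ## and kube_cluster_cidr from salt/pillar/kube-global.sls:
69 |     sudo ip route add 10.0.0.0/16 via 192.168.1.50
70 |     sudo ip route add 10.244.0.0/16 via 192.168.1.50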
71 | 
72 | You can now ssh into your kubernetes master with:
73 | 
74 |     vagrant ssh
75 | 
76 | The kubernetes master should be up and running for you:
77 | 
78 |     ## Below will show you all kube members
79 |     kubectl get nodes
80 | 
81 |     ## Below will show you the state of the cluster
82 |     kubectl get cs
83 | 
84 |     ## Below will show everything that currently runs
85 |     ## in the kube-system namespace (dns, ui, grafana etc.)
86 |     kubectl get po --all-namespaces
87 | 
88 |     ## Gives you cluster info, all cluster services running
89 |     kubectl cluster-info
90 | 
91 |     ## You can start the dns server with (continue reading to see how to change/specify your own dns domain instead of the default)
92 |     kubectl create -f /etc/kubernetes/dns/
93 | 
94 |     ## You can start the kubernetes dashboard, for example:
95 |     kubectl create -f /etc/kubernetes/kubernetes-dashboard/
96 | 
97 |     ## Or Grafana:
98 |     kubectl create -f /etc/kubernetes/grafana/
99 | 
100 |     ## And monitor progress with:
101 |     kubectl get po --all-namespaces --watch
102 | 
103 |     ## Once up and running, cluster-info will tell you where to go:
104 | 
105 |     kubectl cluster-info
106 | 
107 |     ## and open the Grafana url shown in your browser.
108 |     ## NOTE: cluster-info shows https urls on port 6443, which require authentication (to talk to the kubernetes api server); to avoid authentication, use http on port 8080, which is left unsecured.
109 | 
110 | Upon starting dns - depending on how fast your network is - it might take up to a minute or two for docker to pull the required images. The DNS server will be at 10.0.0.10 and serve the domain **dekstroza.local** (read on to see how to change the domain).
111 | 
112 | To verify dns is up and running, run this inside the master or a minion:
113 | 
114 |     dig @10.0.0.10 kubernetes.default.svc.dekstroza.local
115 | 
116 | If you have added the route as described above, dns will be reachable not only from inside the VM but also from your host OS.
117 | You can find its configuration in salt/pillar/kube-global.sls, where you can set a different cluster CIDR, service CIDR, DNS domain or DNS IP address. After changing any of these, re-running salt can reconfigure an already running VM, but I would recommend restarting your VMs (master and minions). 
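118 | 
119 | If you do want to reconfigure in place, reapplying the states is a single salt-call away; the sketch below is the same call the shell provisioner runs at boot (the -l quiet flag just cuts the log noise):
120 | 
121 |     ## Inside each VM (vagrant ssh first):
122 |     sudo salt-call state.highstate -l quiet
123 | 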
124 | Below is the current content of kube-global.sls:
125 | 
126 |     ## cat kube-global.sls:
127 |     service_cluster_cidr: 10.0.0.0/16
128 |     kube_cluster_cidr: 10.244.0.0/16
129 |     dns_replicas: 1
130 |     dns_server: 10.0.0.10
131 |     dns_domain: dekstroza.local
132 |     cluster_registry_disk_size: 1G
133 | 
134 | The important bits are:
135 | - service_cluster_cidr : Range from which kubernetes services will get their cluster IPs (the apiserver's --service-cluster-ip-range)
136 | - kube_cluster_cidr : Range from which pods will get their addresses, handed out by flannel
137 | - dns_replicas : Number of DNS server replicas
138 | - dns_server : IP that will be assigned to the DNS server (must fall within service_cluster_cidr)
139 | - dns_domain : Chosen DNS domain
140 | - cluster_registry_disk_size : Internal docker registry disk size
141 | 
142 | *Note: cluster_registry_disk_size is not used and has not been tested.*
143 | 
144 | #### Starting kube minion(s) (same machine or somewhere else, as long as you have network connectivity between them)
145 | 
146 | Change directory to kube-minion:
147 | 
148 |     cd kubernetes-dev-stack/vagrant/kube-minion
149 |     ## Set the MASTER_IP env variable to point to your kubernetes master
150 | 
151 |     export MASTER_IP=<ip address of your master>
152 | 
153 |     ## Set the MEM_SIZE env variable if you wish more or less than 4G per minion ##
154 |     ## Set NUM_MINIONS=n, where n is the number of minions you wish to start ##
155 |     ## Start the VM (or --provider=parallels) ##
156 |     vagrant up --provider=virtualbox
157 | 
158 | Vagrant will start up your minions and salt-stack will configure them correctly. Again, depending on your network setup, you might be asked to select the network interface over which the minions will communicate (normally the one you use to access the Internet, usually choice #1).
159 | 
160 | #### Master and minions on separate machines
161 | 
162 | Since the master and minions are bridged to your host interface, they can be on different hosts; the only thing required is for the minions to export MASTER_IP as shown above.
163 | 
164 | #### How it works
165 | 
166 | The packer template provided in the repo is used to create the vagrant box, in case you wish to create your own. The code here will use the one I have already created and deployed to the vagrant repository.
167 | 
168 | Salt-stack is used to configure the VM upon startup; you can find the configuration in the salt directory.
169 | 
170 | #### Adding files into a running master or minion
171 | 
172 | Vagrant will mount the Vagrantfile directory inside the VM under the /vagrant path. You can use this to add more files into the box, i.e. pass in docker images instead of downloading them. 
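173 | 
174 | For example, a minimal sketch for sideloading an image instead of pulling it (myapp:latest is just a placeholder for your own image):
175 | 
176 |     ## On the host, next to the Vagrantfile:
177 |     docker save myapp:latest -o myapp.tar
178 | 
179 |     ## Then load it inside the VM:
180 |     vagrant ssh -c 'docker load -i /vagrant/myapp.tar'
181 | 
182 | Happy hacking.... 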
183 | Dejan
184 | 
--------------------------------------------------------------------------------
/packer/centos-7.1-overlayfs-thin.json:
--------------------------------------------------------------------------------
1 | {
2 | "builders": [{
3 | "type": "virtualbox-iso",
4 | "boot_wait": "10s",
5 | "guest_os_type": "RedHat_64",
6 | "iso_checksum_type": "sha256",
7 | "iso_checksum": "27bd866242ee058b7a5754e83d8ee8403e216b93d130d800852a96f41c34d86a",
8 | "iso_url": "http://ftp.heanet.ie/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
9 | "ssh_username": "vagrant",
10 | "ssh_password": "vagrant",
11 | "ssh_wait_timeout": "20000s",
12 | "ssh_port": 22,
13 | "headless": "false",
14 | "shutdown_command": "sudo -S shutdown -P now",
15 | "http_directory": "http",
16 | "vboxmanage": [
17 | [
18 | "modifyvm",
19 | "{{.Name}}",
20 | "--memory",
21 | "1024"
22 | ],
23 | [
24 | "modifyvm",
25 | "{{.Name}}",
26 | "--cpus",
27 | "1"
28 | ]
29 | ],
30 | "virtualbox_version_file": ".vbox_version",
31 | "boot_command": [
32 | "<esc><wait>",
33 | "linux ks=http://{{.HTTPIP}}:{{.HTTPPort}}/kickstart<enter>"
34 | ]
35 | },
36 | {
37 | "type": "parallels-iso",
38 | "guest_os_type": "rhel",
39 | "iso_checksum_type": "sha256",
40 | "iso_checksum": "27bd866242ee058b7a5754e83d8ee8403e216b93d130d800852a96f41c34d86a",
41 | "iso_url": "http://ftp.heanet.ie/pub/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso",
42 | "parallels_tools_flavor": "lin",
43 | "ssh_username": "vagrant",
44 | "ssh_password": "vagrant",
45 | "ssh_wait_timeout": "20000s",
46 | "http_directory": "http",
47 | "ssh_port": 22,
48 | "shutdown_command": "sudo -S shutdown -P now",
49 | "prlctl": [
50 | ["set", "{{.Name}}", "--memsize", "1024"],
51 | ["set", "{{.Name}}", "--cpus", "2"]
52 | ],
53 | "boot_command": [
54 | "<esc><wait>",
55 | "linux ks=http://{{.HTTPIP}}:{{.HTTPPort}}/kickstart<enter>"
56 | ]
57 | }
58 | ],
59 | "push": {
60 | "name": "dekstroza/kube-overlay-xfs"
61 | },
62 | "provisioners": [
63 | {
64 | "execute_command": "echo 'vagrant' | {{.Vars}} sudo -S -E bash '{{.Path}}'",
65 | "scripts": [
66 | "scripts/install-vbox-tools.sh",
67 | "scripts/docker-cfg.sh",
68 | "scripts/update-kubernetes.sh",
69 | "scripts/update-flannel.sh",
70 | "scripts/compact.sh"
71 | ],
72 | "type": "shell"
73 | }
74 | ],
75 | "post-processors": [
76 | [{
77 | "type": "vagrant",
78 | "keep_input_artifact": false
79 | },
80 | {
81 | "type": "atlas",
82 | "artifact": "dekstroza/kube-overlay-xfs",
83 | "artifact_type": "vagrant.box",
84 | "metadata": {
85 | "provider": "parallels",
86 | "created_at": "{{timestamp}}",
87 | "version": "1.4.11"
88 | }
89 | },
90 | {
91 | "type": "atlas",
92 | "artifact": "dekstroza/kube-overlay-xfs",
93 | "artifact_type": "vagrant.box",
94 | "metadata": {
95 | "provider": "virtualbox",
96 | "created_at": "{{timestamp}}",
97 | "version": "1.4.11"
98 | }
99 | }
100 | ]]
101 | }
102 | 
--------------------------------------------------------------------------------
/packer/http/kickstart:
--------------------------------------------------------------------------------
1 | install
2 | text
3 | cdrom
4 | lang en_US.UTF-8
5 | keyboard us
6 | network --onboot yes --device eth0 --bootproto dhcp --noipv6 --hostname vagrant-centos-7.vagrantup.com
7 | rootpw vagrant
8 | firewall --disabled
9 | authconfig --enableshadow --passalgo=sha512
10 | selinux --permissive
11 | timezone --utc Europe/Dublin
12 | zerombr
13 | clearpart --all
14 | part /boot --fstype=xfs --size=512
15 | part pv.01 --grow --size=1
16 | volgroup 
vg_vagrantcentos --pesize=4096 pv.01 17 | logvol swap --name=lv_swap --vgname=vg_vagrantcentos --size=1024 18 | logvol / --fstype=xfs --name=lv_root --vgname=vg_vagrantcentos --grow --size=5000 --maxsize=6000 19 | bootloader --location=mbr --append="crashkernel=auto rhgb quiet" 20 | user --name=vagrant --groups=wheel --password=vagrant 21 | reboot 22 | 23 | repo --name=base --baseurl=http://mirror.centos.org/centos/7.3.1611/os/x86_64/ 24 | url --url="http://mirror.centos.org/centos/7.3.1611/os/x86_64/" 25 | 26 | repo --name=centos7.3-x86_64-extras --baseurl=http://mirror.centos.org/centos/7.3.1611/extras/x86_64/ 27 | repo --name=epel-release --baseurl=http://anorien.csc.warwick.ac.uk/mirrors/epel/7/x86_64/ 28 | repo --name=elrepo-kernel --baseurl=http://elrepo.org/linux/kernel/el7/x86_64/ 29 | repo --name=elrepo-release --baseurl=http://elrepo.org/linux/elrepo/el7/x86_64/ 30 | repo --name=elrepo-extras --baseurl=http://elrepo.org/linux/extras/el7/x86_64/ 31 | repo --name=docker-repo --baseurl=https://yum.dockerproject.org/repo/main/centos/7/ 32 | 33 | %packages --nobase --excludedocs 34 | @core --nodefaults 35 | kernel-ml 36 | kernel-ml-devel 37 | kernel-ml-tools 38 | kernel-ml-tools-libs 39 | kernel-ml-headers 40 | selinux-policy-devel 41 | docker-engine 42 | docker-engine-selinux 43 | libtool-ltdl 44 | openssl 45 | expect 46 | make 47 | perl 48 | patch 49 | dkms 50 | gcc 51 | bzip2 52 | etcd 53 | flannel 54 | ntp 55 | nfs-utils 56 | bind-utils 57 | net-tools 58 | bridge-utils 59 | iperf 60 | iperf3 61 | salt-minion 62 | -kernel 63 | -kernel-devel 64 | -kernel-tools-libs 65 | -kernel-tools 66 | -kernel-headers 67 | -aic94xx-firmware 68 | -atmel-firmware 69 | -b43-openfwwf 70 | -bfa-firmware 71 | -ipw2100-firmware 72 | -ipw2200-firmware 73 | -ivtv-firmware 74 | -iwl100-firmware 75 | -iwl105-firmware 76 | -iwl135-firmware 77 | -iwl1000-firmware 78 | -iwl2000-firmware 79 | -iwl2030-firmware 80 | -iwl3160-firmware 81 | -iwl3945-firmware 82 | -iwl4965-firmware 83 | -iwl5000-firmware 84 | -iwl5150-firmware 85 | -iwl6000-firmware 86 | -iwl6000g2a-firmware 87 | -iwl6000g2b-firmware 88 | -iwl6050-firmware 89 | -iwl7260-firmware 90 | -libertas-usb8388-firmware 91 | -libertas-sd8686-firmware 92 | -libertas-sd8787-firmware 93 | -ql2100-firmware 94 | -ql2200-firmware 95 | -ql23xx-firmware 96 | -ql2400-firmware 97 | -ql2500-firmware 98 | -rt61pci-firmware 99 | -rt73usb-firmware 100 | -xorg-x11-drv-ati-firmware 101 | -zd1211-firmware 102 | -iprutils 103 | -fprintd-pam 104 | -intltool 105 | %end 106 | 107 | %post --nochroot 108 | cp /etc/resolv.conf /mnt/sysimage/etc/resolv.conf 109 | %end 110 | 111 | %post 112 | /usr/bin/yum -y install sudo 113 | /bin/cat << EOF > /etc/sudoers.d/wheel 114 | Defaults:%wheel env_keep += "SSH_AUTH_SOCK" 115 | Defaults:%wheel !requiretty 116 | %wheel ALL=(ALL) NOPASSWD: ALL 117 | EOF 118 | /bin/chmod 0440 /etc/sudoers.d/wheel 119 | /bin/mkdir /home/vagrant/.ssh 120 | /bin/chmod 700 /home/vagrant/.ssh 121 | /usr/bin/curl -L -o /home/vagrant/.ssh/id_rsa https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant 122 | /usr/bin/curl -L -o /home/vagrant/.ssh/authorized_keys https://raw.githubusercontent.com/mitchellh/vagrant/master/keys/vagrant.pub 123 | /bin/chown -R vagrant:vagrant /home/vagrant/.ssh 124 | /bin/chmod 0400 /home/vagrant/.ssh/* 125 | /bin/echo 'UseDNS no' >> /etc/ssh/sshd_config 126 | /bin/echo '127.0.0.1 vagrant-centos-7.vagrantup.com' >> /etc/hosts 127 | sed -i -- 's/\#file_client:\ remote/file_client:\ local/g' /etc/salt/minion 128 | echo 
'tsflags=nodocs' >> /etc/yum.conf
129 | yum update -y && rm -rf /var/cache/yum
130 | 
131 | localedef --list-archive | grep -v -i ^en | xargs localedef --delete-from-archive
132 | mv /usr/lib/locale/locale-archive /usr/lib/locale/locale-archive.tmpl
133 | build-locale-archive
134 | %end
135 | 
--------------------------------------------------------------------------------
/packer/scripts/compact.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Clean up and zero the swap partition
4 | readonly swapuuid=$(/sbin/blkid -o value -l -s UUID -t TYPE=swap)
5 | readonly swappart=$(readlink -f /dev/disk/by-uuid/"$swapuuid")
6 | /sbin/swapoff "$swappart"
7 | dd if=/dev/zero of="$swappart" bs=1M || echo "dd exit code $? is suppressed"
8 | /sbin/mkswap -U "$swapuuid" "$swappart"
9 | 
10 | if [ ! "${PACKER_BUILDER_TYPE}" == "qemu" ]; then
11 | dd if=/dev/zero of=/EMPTY bs=1M
12 | rm -f /EMPTY
13 | fi
14 | sync
15 | 
16 | # Clean and zero the disk and boot partition:
17 | rm -rf /var/cache/yum
18 | rm -r /home/vagrant/*.iso
19 | /bin/dd if=/dev/zero of=/boot/EMPTY bs=1M
20 | /bin/rm -f /boot/EMPTY
21 | /bin/dd if=/dev/zero of=/EMPTY bs=1M
22 | /bin/rm -f /EMPTY
23 | sync
24 | 
--------------------------------------------------------------------------------
/packer/scripts/docker-cfg.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | groupadd docker
4 | usermod -a -G docker vagrant
5 | 
6 | # Raise the file-descriptor limits for docker workloads
7 | cat >> /etc/security/limits.conf <<EOF
8 | *       soft    nofile  65536
9 | *       hard    nofile  65536
10 | EOF
11 | 
--------------------------------------------------------------------------------
/packer/scripts/install-vbox-tools.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | case "$PACKER_BUILDER_TYPE" in
4 | 
5 | virtualbox-iso|virtualbox-ovf)
6 |     # Build the VirtualBox guest additions against the installed kernel
7 |     mkdir -p /tmp/vbox
8 |     VER="$(cat /home/vagrant/.vbox_version)"
9 |     mount -o loop /home/vagrant/VBoxGuestAdditions_$VER.iso /tmp/vbox
10 |     sh /tmp/vbox/VBoxLinuxAdditions.run --nox11
11 |     umount /tmp/vbox
12 |     rmdir /tmp/vbox
13 |     ;;
14 | 
15 | parallels-iso|parallels-pvm)
16 |     # Install the Parallels guest tools
17 |     mkdir -p /tmp/parallels
18 |     mount -o loop /home/vagrant/prl-tools-lin.iso /tmp/parallels
19 |     /tmp/parallels/install --install-unattended-with-deps
20 |     umount /tmp/parallels
21 |     rmdir /tmp/parallels
22 |     ;;
23 | 
24 | *)
25 |     echo "Unknown packer builder type >>$PACKER_BUILDER_TYPE<< selected."
26 |     echo "Known are virtualbox-iso|virtualbox-ovf|parallels-iso|parallels-pvm."
27 |     ;;
28 | esac
29 | 
--------------------------------------------------------------------------------
/packer/scripts/update-flannel.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | VERSION="0.7.0"
4 | echo "####### Updating flannel to version $VERSION #########"
5 | CWD=`pwd`
6 | cd /tmp
7 | curl -L -k https://github.com/coreos/flannel/releases/download/v$VERSION/flannel-v$VERSION-linux-amd64.tar.gz -o flannel-$VERSION-linux-amd64.tar.gz
8 | tar zxvf flannel-$VERSION-linux-amd64.tar.gz
9 | cp -f flanneld /bin/flanneld
10 | cp -f mk-docker-opts.sh /usr/libexec/flannel/mk-docker-opts.sh
11 | rm -rf flannel-$VERSION
12 | rm -rf flannel-$VERSION-linux-amd64.tar.gz
13 | echo "####### Flannel update complete #########"
14 | cd $CWD
15 | 
--------------------------------------------------------------------------------
/packer/scripts/update-kubernetes.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | KUBERNETES_VERSION="1.5.2"
4 | 
5 | echo "####### Updating kubernetes #########"
6 | mkdir -p /opt/kubernetes && cd /opt/kubernetes
7 | 
8 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kube-apiserver
9 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kube-controller-manager
10 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kube-scheduler
11 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubectl
12 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kube-proxy
13 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kubelet
14 | curl -L -O https://storage.googleapis.com/kubernetes-release/release/v${KUBERNETES_VERSION}/bin/linux/amd64/kube-dns
15 | 
16 | chmod +x /opt/kubernetes/ -R
17 | 
18 | mv -f /opt/kubernetes/kube* /usr/bin/ && cd /opt/ && rm -rf /opt/kubernetes
19 | echo "###### Updating kubernetes done ####"
20 | 
21 | echo "###### Installing cfssl tools ######"
22 | curl -L -O https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
23 | curl -L -O https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
24 | chmod +x cfssljson_linux-amd64
25 | chmod +x cfssl_linux-amd64
26 | mv cfssl_linux-amd64 /usr/bin/cfssl
27 | mv cfssljson_linux-amd64 /usr/bin/cfssljson
28 | echo "###### Installing cfssl tools done ####"
29 | 
--------------------------------------------------------------------------------
/vagrant/kube-master/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | 
4 | # Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
5 | VAGRANTFILE_API_VERSION = "2"
6 | 
7 | # Require a recent version of vagrant otherwise some have reported errors setting host names on boxes
8 | Vagrant.require_version ">= 1.6.2"
9 | file_to_disk = '.tmp/large_disk.vdi'
10 | 
11 | $vm_mem_size = (ENV['MEM_SIZE'] || "4096").to_i
12 | Vagrant.configure(2) do |config|
13 | config.ssh.insert_key = false
14 | config.vm.define "kube-master" do |master|
15 | master.vm.synced_folder "../salt/", "/srv/"
16 | if ENV['NETWORK_TYPE'].to_s == "" then
17 | master.vm.network "public_network"
18 | end
19 | if ENV['NETWORK_TYPE'].to_s == "private"
20 | master.vm.network "private_network", type: "dhcp"
21 | end
22 | master.vm.provider :virtualbox do |v, override|
23 | override.vm.box = (ENV['BOX_NAME'] || "dekstroza/kube-overlay-xfs").to_s
24 | override.vbguest.auto_update = false
25 | v.memory = $vm_mem_size
26 | v.cpus = $vm_cpus
27 | v.gui = false
28 | v.customize ["modifyvm", :id, "--nictype1", "virtio"]
29 | v.customize ["modifyvm", :id, "--nictype2", "virtio"]
30 | v.customize ["modifyvm", :id, "--cableconnected1", "on"]
31 | v.customize ["modifyvm", :id, "--cableconnected2", "on"]
32 | v.customize ['createhd', '--filename', file_to_disk, '--size', 500 * 1024]
33 | v.customize ["storagectl", :id, "--name", "SATA Controller", "--add", "sata", "--hostiocache", "on"]
34 | v.customize ["storageattach", :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
35 | override.vm.provision :shell, :inline => (%q{
36 | ip=$(ip -f inet -o addr show eth1|cut -d\  -f 7 | cut -d/ -f 1)
37 | echo -e "roles:\n - kube-master" >> /etc/salt/grains
38 | echo -e master_ip: $ip >> /etc/salt/grains
39 | echo -e nfs_ip: $ip >> /etc/salt/grains
40 | #/usr/sbin/ifconfig eth0 down && /usr/bin/sleep 2 && /usr/sbin/ifconfig eth0 up
41 | salt-call state.highstate -l quiet
42 | echo "#################################################################################"
43 | echo "# Please create route to your cloud by running: #"
44 | echo "# 1. For MacOS, please run: add-route-osX.sh #"
45 | echo "# 2. For Linux, please run: add-route-LIN.sh #"
46 | echo "# 3. 
For Windows, please run: add-route-WIN.bat #" 47 | echo "# #" 48 | echo "# SSH into machine with vagrant ssh command #" 49 | echo "# ~Have fun, Dejan~ #" 50 | echo "#################################################################################" 51 | exit 52 | }).strip 53 | end 54 | master.vm.provider :parallels do |v, override| 55 | override.vm.box = (ENV['BOX_NAME'] || "dekstroza/kube-overlay-xfs").to_s 56 | v.memory = $vm_mem_size 57 | v.cpus = $vm_cpus 58 | v.update_guest_tools = false 59 | # Set up Parallels folder sharing to behave like VirtualBox (i.e., 60 | # mount the current directory as /vagrant and that's it) 61 | v.customize ['set', :id, '--shf-guest', 'off'] 62 | v.customize ['set', :id, '--shf-guest-automount', 'off'] 63 | v.customize ['set', :id, '--shf-host', 'on'] 64 | v.customize ['set', :id, '--device-add', 'hdd'] 65 | # Remove all auto-mounted "shared folders"; the result seems to 66 | # persist between runs (i.e., vagrant halt && vagrant up) 67 | override.vm.provision :shell, :inline => (%q{ 68 | if [ -d /media/psf ]; then 69 | for i in /media/psf/*; do 70 | if [ -d "${i}" ]; then 71 | (umount "${i}" || true) > /dev/null 2>&1 72 | rmdir -v "${i}" > /dev/null 2>&1 73 | fi 74 | done 75 | rmdir -v /media/psf > /dev/null 2>&1 76 | fi 77 | ip=$(ip -f inet -o addr show eth1|cut -d\ -f 7 | cut -d/ -f 1) 78 | echo -e "roles:\n - kube-master" >> /etc/salt/grains 79 | echo -e master_ip: $ip >> /etc/salt/grains 80 | echo -e nfs_ip: $ip >> /etc/salt/grains 81 | /usr/sbin/ifconfig eth0 down && /usr/bin/sleep 2 && /usr/sbin/ifconfig eth0 up 82 | salt-call state.highstate -l quiet 83 | echo "#################################################################################" 84 | echo "# Please create route to your cloud by running: #" 85 | echo "# 1. For MacOS, please run: add-route-osX.sh #" 86 | echo "# 2. For Linux, please run: add-route-LIN.sh #" 87 | echo "# 3. For Windows, please run: add-route-WIN.bat #" 88 | echo "# #" 89 | echo "# SSH into machine with vagrant ssh command #" 90 | echo "# ~Have fun, Dejan~ #" 91 | echo "#################################################################################" 92 | exit 93 | }).strip 94 | end 95 | end 96 | 97 | # Give access to all physical cpu cores 98 | # Previously cargo-culted from here: 99 | # http://www.stefanwrobel.com/how-to-make-vagrant-performance-not-suck 100 | # Rewritten to actually determine the number of hardware cores instead of assuming 101 | # that the host has hyperthreading enabled. 102 | host = RbConfig::CONFIG['host_os'] 103 | if host =~ /darwin/ 104 | $vm_cpus = `sysctl -n hw.physicalcpu`.to_i 105 | elsif host =~ /linux/ 106 | #This should work on most processors, however it will fail on ones without the core id field. 107 | #So far i have only seen this on a raspberry pi. which you probably don't want to run vagrant on anyhow... 108 | #But just in case we'll default to the result of nproc if we get 0 just to be safe. 109 | $vm_cpus = `cat /proc/cpuinfo | grep 'core id' | sort -u | wc -l`.to_i 110 | if $vm_cpus < 1 111 | $vm_cpus = `nproc`.to_i 112 | end 113 | else # sorry Windows folks, I can't help you 114 | $vm_cpus = 4 115 | end 116 | end 117 | 118 | -------------------------------------------------------------------------------- /vagrant/kube-minion/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # Vagrantfile API/syntax version. Don't touch unless you know what you're doing! 
5 | VAGRANTFILE_API_VERSION = "2"
6 | 
7 | # Require a recent version of vagrant otherwise some have reported errors setting host names on boxes
8 | Vagrant.require_version ">= 1.6.2"
9 | 
10 | 
11 | if ENV['MASTER_IP'].to_s == ""
12 | raise Vagrant::Errors::VagrantError.new, <<EOM
13 | MASTER_IP environment variable is not set. Please export MASTER_IP=<address of your kube-master> and try again.
14 | EOM
15 | end
16 | 
17 | $vm_mem_size = (ENV['MEM_SIZE'] || "4096").to_i
18 | $num_minions = (ENV['NUM_MINIONS'] || 1).to_i
19 | 
20 | Vagrant.configure(2) do |config|
21 | config.ssh.insert_key = false
22 | (1..$num_minions).each do |i|
23 | config.vm.define "kube-minion-#{i}" do |minion|
24 | minion.vm.synced_folder "../salt/", "/srv/"
25 | if ENV['NETWORK_TYPE'].to_s == "" then
26 | minion.vm.network "public_network"
27 | end
28 | if ENV['NETWORK_TYPE'].to_s == "private"
29 | minion.vm.network "private_network", type: "dhcp"
30 | end
31 | 
32 | # Pass the kube master address to salt as a grain; the salt states
33 | # use it to point flannel and the kubernetes services at the master
34 | minion.vm.provision :shell, :inline => "echo master_ip: #{ENV['MASTER_IP']} >> /etc/salt/grains"
35 | 
36 | minion.vm.provider :virtualbox do |v, override|
37 | override.vm.box = (ENV['BOX_NAME'] || "dekstroza/kube-overlay-xfs").to_s
38 | override.vbguest.auto_update = false
39 | v.memory = $vm_mem_size
40 | v.cpus = $vm_cpus
41 | v.gui = false
42 | v.customize ["modifyvm", :id, "--nictype1", "virtio"]
43 | v.customize ["modifyvm", :id, "--nictype2", "virtio"]
44 | v.customize ["modifyvm", :id, "--cableconnected1", "on"]
45 | v.customize ["modifyvm", :id, "--cableconnected2", "on"]
46 | file_to_disk = ".tmp/large_disk_#{i}.vdi"
47 | v.customize ['createhd', '--filename', file_to_disk, '--size', 500 * 1024]
48 | v.customize ["storagectl", :id, "--name", "SATA Controller", "--add", "sata", "--hostiocache", "on"]
49 | v.customize ["storageattach", :id, '--storagectl', 'SATA Controller', '--port', 2, '--device', 0, '--type', 'hdd', '--medium', file_to_disk]
50 | 
51 | # Tag the box with the kube-minion role, run the salt highstate,
52 | # and print the post-boot route instructions
53 | override.vm.provision :shell, :inline => (%q{
54 | echo -e "roles:\n - kube-minion" >> /etc/salt/grains
55 | /usr/sbin/ifconfig eth0 down && /usr/bin/sleep 2 && /usr/sbin/ifconfig eth0 up
56 | salt-call state.highstate -l quiet
57 | echo "#################################################################################"
58 | echo "# Please create route to your cloud by running: #"
59 | echo "# 1. For MacOS, please run: add-route-osX.sh #"
60 | echo "# 2. For Linux, please run: add-route-LIN.sh #"
61 | echo "# 3. For Windows, please run: add-route-WIN.bat #"
62 | echo "# #"
63 | echo "# SSH into machine with vagrant ssh command #"
64 | echo "# ~Have fun, Dejan~ #"
65 | echo "#################################################################################"
66 | exit
67 | }).strip
68 | end
69 | 
70 | minion.vm.provider :parallels do |v, override|
71 | override.vm.box = (ENV['BOX_NAME'] || "dekstroza/kube-overlay-xfs").to_s
72 | v.memory = $vm_mem_size
73 | v.cpus = $vm_cpus
74 | # Don't attempt to update the Parallels tools on the image (this can
75 | # be done manually if necessary)
76 | v.update_guest_tools = false
77 | 
78 | # Set up Parallels folder sharing to behave like VirtualBox (i.e.,
79 | # mount the current directory as /vagrant and that's it)
80 | v.customize ['set', :id, '--shf-guest', 'off']
81 | v.customize ['set', :id, '--shf-guest-automount', 'off']
82 | v.customize ['set', :id, '--shf-host', 'on']
83 | v.customize ['set', :id, '--device-add', 'hdd']
84 | # Remove all auto-mounted "shared folders"; the result seems to
85 | # persist between runs (i.e., vagrant halt && vagrant up)
86 | override.vm.provision :shell, :inline => (%q{
87 | if [ -d /media/psf ]; then
88 | for i in /media/psf/*; do
89 | if [ -d "${i}" ]; then
90 | (umount "${i}" || true) > /dev/null 2>&1
91 | rmdir -v "${i}" > /dev/null 2>&1
92 | fi
93 | done
94 | rmdir -v /media/psf > /dev/null 2>&1
95 | fi
96 | echo -e "roles:\n - kube-minion" >> /etc/salt/grains
97 | /usr/sbin/ifconfig eth0 down && /usr/bin/sleep 2 && /usr/sbin/ifconfig eth0 up
98 | salt-call state.highstate -l quiet
99 | echo "#################################################################################"
100 | echo "# Please create route to your cloud by running: #"
101 | echo "# 1. For MacOS, please run: add-route-osX.sh #"
102 | echo "# 2. For Linux, please run: add-route-LIN.sh #"
103 | echo "# 3. For Windows, please run: add-route-WIN.bat #"
104 | echo "# #"
105 | echo "# SSH into machine with vagrant ssh command #"
106 | echo "# ~Have fun, Dejan~ #"
107 | echo "#################################################################################"
108 | exit
109 | }).strip
110 | end
111 | end
112 | end
113 | 
114 | 
115 | 
116 | 
117 | # Give access to all physical cpu cores
118 | # Previously cargo-culted from here:
119 | # http://www.stefanwrobel.com/how-to-make-vagrant-performance-not-suck
120 | # Rewritten to actually determine the number of hardware cores instead of assuming
121 | # that the host has hyperthreading enabled.
122 | host = RbConfig::CONFIG['host_os']
123 | if host =~ /darwin/
124 | $vm_cpus = `sysctl -n hw.physicalcpu`.to_i
125 | elsif host =~ /linux/
126 | #This should work on most processors, however it will fail on ones without the core id field. 
127 | #So far i have only seen this on a raspberry pi. which you probably don't want to run vagrant on anyhow... 128 | #But just in case we'll default to the result of nproc if we get 0 just to be safe. 129 | $vm_cpus = `cat /proc/cpuinfo | grep 'core id' | sort -u | wc -l`.to_i 130 | if $vm_cpus < 1 131 | $vm_cpus = `nproc`.to_i 132 | end 133 | else # sorry Windows folks, I can't help you 134 | $vm_cpus = 4 135 | end 136 | end 137 | 138 | -------------------------------------------------------------------------------- /vagrant/salt/pillar/kube-global.sls: -------------------------------------------------------------------------------- 1 | service_cluster_cidr: 10.0.0.0/16 2 | kube_cluster_cidr: 10.244.0.0/16 3 | dns_replicas: 1 4 | dns_server: 10.0.0.10 5 | dns_domain: dekstroza.local 6 | cluster_registry_disk_size: 1G 7 | 8 | -------------------------------------------------------------------------------- /vagrant/salt/pillar/pillar_roots: -------------------------------------------------------------------------------- 1 | pillar_roots: 2 | base: 3 | - /srv/pillar 4 | 5 | -------------------------------------------------------------------------------- /vagrant/salt/pillar/system.sls: -------------------------------------------------------------------------------- 1 | docker_fs_type: xfs 2 | docker_fs_partition_name: /dev/sdb1 3 | docker_partition_device: /dev/sdb 4 | 5 | -------------------------------------------------------------------------------- /vagrant/salt/pillar/top.sls: -------------------------------------------------------------------------------- 1 | base: 2 | 'roles:kube-minion': 3 | - match: grain 4 | - kube-global 5 | - system 6 | 7 | 'roles:kube-master': 8 | - match: grain 9 | - kube-global 10 | - system 11 | 12 | 13 | 14 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/config-files/cfg/docker-network.cfg: -------------------------------------------------------------------------------- 1 | #/etc/sysconfig/docker-network 2 | DOCKER_NETWORK_OPTIONS='--iptables=false ' 3 | 4 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/config-files/cfg/docker-storage.cfg: -------------------------------------------------------------------------------- 1 | DOCKER_STORAGE_OPTIONS=' --storage-driver=overlay2' 2 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/config-files/cfg/docker-storage.conf: -------------------------------------------------------------------------------- 1 | [Service] 2 | EnvironmentFile=-/etc/sysconfig/docker-storage 3 | 4 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/config-files/cfg/docker.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Docker Application Container Engine 3 | Documentation=https://docs.docker.com 4 | After=network.target 5 | 6 | [Service] 7 | Type=notify 8 | # the default is not to use systemd for cgroups because the delegate issues still 9 | # exists and systemd currently does not support the cgroup feature set required 10 | # for containers run by docker 11 | ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS $DOCKER_STORAGE_OPTIONS 12 | ExecReload=/bin/kill -s HUP $MAINPID 13 | # Having non-zero Limit*s causes performance problems due to accounting overhead 14 | # in the kernel. We recommend using cgroups to do container-local accounting. 
15 | LimitNOFILE=infinity 16 | LimitNPROC=infinity 17 | LimitCORE=infinity 18 | # Uncomment TasksMax if your systemd version supports it. 19 | # Only systemd 226 and above support this version. 20 | #TasksMax=infinity 21 | TimeoutStartSec=0 22 | # set delegate yes so that systemd does not reset the cgroups of docker containers 23 | Delegate=yes 24 | # kill only the docker process, not all processes in the cgroup 25 | KillMode=process 26 | 27 | [Install] 28 | WantedBy=multi-user.target 29 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/config-files/init.sls: -------------------------------------------------------------------------------- 1 | docker-network-config: 2 | file.managed: 3 | - name: /etc/sysconfig/docker-network 4 | - user: root 5 | - group: root 6 | - source: salt://docker/config-files/cfg/docker-network.cfg 7 | - mode: 644 8 | docker-storage-config: 9 | file.managed: 10 | - name: /etc/sysconfig/docker-storage 11 | - user: root 12 | - group: root 13 | - source: salt://docker/config-files/cfg/docker-storage.cfg 14 | - mode: 644 15 | docker-systemd-config: 16 | file.managed: 17 | - name: /usr/lib/systemd/system/docker.service 18 | - user: root 19 | - group: root 20 | - makedirs: True 21 | - source: salt://docker/config-files/cfg/docker.service 22 | - mode: 644 23 | docker-systemd-config-storage: 24 | file.managed: 25 | - name: /usr/lib/systemd/system/docker.service.d/docker-storage.conf 26 | - user: root 27 | - group: root 28 | - makedirs: True 29 | - source: salt://docker/config-files/cfg/docker-storage.conf 30 | - mode: 644 31 | 32 | -------------------------------------------------------------------------------- /vagrant/salt/salt/docker/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - docker.config-files 3 | 4 | 5 | docker-running: 6 | service.running: 7 | - name: docker 8 | - require: 9 | - sls: docker.config-files 10 | - service: flanneld 11 | - mount: /var/lib/docker 12 | 13 | label-docker-disk: 14 | module.run: 15 | - name: partition.mklabel 16 | - device: {{ pillar['docker_partition_device'] }} 17 | - label_type: gpt 18 | - unless: fdisk -l {{ pillar['docker_partition_device'] }} | grep gpt 19 | 20 | create-docker-partition: 21 | module.run: 22 | - name: partition.mkpart 23 | - device: {{ pillar['docker_partition_device'] }} 24 | - part_type: primary 25 | - start: 0% 26 | - end: 100% 27 | - require: 28 | - module: label-docker-disk 29 | - unless: fdisk -l {{ pillar['docker_partition_device'] }} | grep {{ pillar['docker_fs_partition_name'] }} 30 | 31 | docker-device: 32 | blockdev.formatted: 33 | - name: {{ pillar['docker_fs_partition_name'] }} 34 | - fs_type: {{ pillar['docker_fs_type'] }} 35 | - unless: blkid | grep {{ pillar['docker_partition_device'] }} | grep {{ pillar['docker_fs_type'] }} 36 | - require: 37 | - module: create-docker-partition 38 | 39 | mount-docker-partition: 40 | mount.mounted: 41 | - name: /var/lib/docker 42 | - device: {{ pillar['docker_fs_partition_name'] }} 43 | - fstype: {{ pillar['docker_fs_type'] }} 44 | - mkmnt: True 45 | - opts: 46 | - defaults 47 | - require: 48 | - blockdev: {{ pillar['docker_fs_partition_name'] }} 49 | - unless: mount | grep {{ pillar['docker_fs_partition_name'] }} 50 | 51 | -------------------------------------------------------------------------------- /vagrant/salt/salt/etcd/config-files/cfg/etcd.conf: -------------------------------------------------------------------------------- 1 | # [member] 2 
| ETCD_NAME=default 3 | ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 4 | #ETCD_SNAPSHOT_COUNTER="10000" 5 | #ETCD_HEARTBEAT_INTERVAL="100" 6 | #ETCD_ELECTION_TIMEOUT="1000" 7 | ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380" 8 | ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" 9 | #ETCD_MAX_SNAPSHOTS="5" 10 | #ETCD_MAX_WALS="5" 11 | #ETCD_CORS="" 12 | # 13 | #[cluster] 14 | #ETCD_INITIAL_ADVERTISE_PEER_URLS="http://0.0.0.0:2380" 15 | # if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..." 16 | #ETCD_INITIAL_CLUSTER="default=http://localhost:2380" 17 | #ETCD_INITIAL_CLUSTER_STATE="new" 18 | #ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" 19 | #ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" 20 | #ETCD_DISCOVERY="" 21 | #ETCD_DISCOVERY_SRV="" 22 | #ETCD_DISCOVERY_FALLBACK="proxy" 23 | #ETCD_DISCOVERY_PROXY="" 24 | # 25 | #[proxy] 26 | #ETCD_PROXY="off" 27 | # 28 | #[security] 29 | #ETCD_CERT_FILE="" 30 | #ETCD_KEY_FILE="" 31 | #ETCD_CLIENT_CERT_AUTH="false" 32 | #ETCD_TRUSTED_CA_FILE="" 33 | #ETCD_PEER_CERT_FILE="" 34 | #ETCD_PEER_KEY_FILE="" 35 | #ETCD_PEER_CLIENT_CERT_AUTH="false" 36 | #ETCD_PEER_TRUSTED_CA_FILE="" 37 | # 38 | #[logging] 39 | #ETCD_DEBUG="false" 40 | # examples for -log-package-levels etcdserver=WARNING,security=DEBUG 41 | #ETCD_LOG_PACKAGE_LEVELS="" 42 | 43 | -------------------------------------------------------------------------------- /vagrant/salt/salt/etcd/config-files/cfg/network.json: -------------------------------------------------------------------------------- 1 | { 2 | "Network": "{{ pillar['kube_cluster_cidr'] }}", 3 | "SubnetLen": 24, 4 | "Backend": { 5 | "Type": "vxlan", 6 | "VNI": 1 7 | } 8 | } 9 | 10 | -------------------------------------------------------------------------------- /vagrant/salt/salt/etcd/config-files/init.sls: -------------------------------------------------------------------------------- 1 | 2 | etcd-config: 3 | file.managed: 4 | - name: /etc/etcd/etcd.conf 5 | - user: root 6 | - group: root 7 | - source: salt://etcd/config-files/cfg/etcd.conf 8 | - mode: 644 9 | - template: jinja 10 | 11 | etcd-network-config: 12 | file.managed: 13 | - name: /etc/etcd/network.json 14 | - user: root 15 | - group: root 16 | - source: salt://etcd/config-files/cfg/network.json 17 | - mode: 644 18 | - template: jinja 19 | -------------------------------------------------------------------------------- /vagrant/salt/salt/etcd/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - etcd.config-files 3 | 4 | etcd-running: 5 | service.running: 6 | - name: etcd 7 | - watch: 8 | - file: /etc/etcd/etcd.conf 9 | - require: 10 | - file: /etc/etcd/etcd.conf 11 | - file: /etc/etcd/network.json 12 | 13 | configure-cluster-network: 14 | cmd.run: 15 | - name: sleep 5 && etcdctl set /coreos.com/network/config < /etc/etcd/network.json 16 | - require: 17 | - service: etcd 18 | - unless: etcdctl get /coreos.com/network/config 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/file_roots.sls: -------------------------------------------------------------------------------- 1 | file_roots: 2 | base: 3 | - /srv/salt 4 | minions: 5 | - /srv/salt/minions 6 | 7 | -------------------------------------------------------------------------------- /vagrant/salt/salt/flannel/config-files/cfg/flanneld.cfg: -------------------------------------------------------------------------------- 1 | # Flanneld configuration options 2 | 3 | 
# etcd url location. Point this to the server where etcd runs 4 | FLANNEL_ETCD_ENDPOINTS="http://{{ salt['grains.get']('master_ip') }}:2379" 5 | 6 | # etcd config key. This is the configuration key that flannel queries 7 | # For address range assignment 8 | FLANNEL_ETCD_PREFIX="/coreos.com/network" 9 | 10 | # Any additional options that you want to pass 11 | FLANNEL_OPTIONS="--iface=eth1 --ip-masq=true " 12 | 13 | -------------------------------------------------------------------------------- /vagrant/salt/salt/flannel/config-files/init.sls: -------------------------------------------------------------------------------- 1 | 2 | flannel-config: 3 | file.managed: 4 | - name: /etc/sysconfig/flanneld 5 | - user: root 6 | - group: root 7 | - source: salt://flannel/config-files/cfg/flanneld.cfg 8 | - mode: 644 9 | - template: jinja 10 | 11 | -------------------------------------------------------------------------------- /vagrant/salt/salt/flannel/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - flannel.config-files 3 | - etcd 4 | 5 | flannel-running: 6 | service.running: 7 | - name: flanneld 8 | - watch: 9 | - file: /etc/sysconfig/flanneld 10 | - require: 11 | - service: etcd 12 | - file: /etc/sysconfig/flanneld 13 | - cmd: configure-cluster-network 14 | 15 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - master.kubernetes 3 | 4 | 5 | {% set master_ip = salt['grains.get']('master_ip') %} 6 | {% set nfs_ip = salt['grains.get']('nfs_ip') %} 7 | 8 | permissive: 9 | selinux.mode 10 | 11 | dekstroza-nfs-server: 12 | host.present: 13 | - ip: {{ nfs_ip }} 14 | - names: 15 | - nfs 16 | - nfs.{{ pillar['dns_domain'] }} 17 | 18 | 19 | firewalld: 20 | service.dead: 21 | - name: firewalld 22 | - enable: false 23 | 24 | kube-apiserver-running: 25 | service.running: 26 | - name: kube-apiserver 27 | - watch: 28 | - file: /etc/kubernetes/apiserver 29 | - file: /var/lib/kubernetes/authorization-policy.json 30 | - require: 31 | - service: docker 32 | - file: /etc/kubernetes/apiserver 33 | - file: /var/lib/kubernetes/authorization-policy.json 34 | - cmd: generate-certs 35 | 36 | kube-controller-manager-running: 37 | service.running: 38 | - name: kube-controller-manager 39 | - require: 40 | - file: /etc/kubernetes/controller-manager 41 | - service: kube-apiserver 42 | - service: docker 43 | 44 | kube-scheduler-running: 45 | service.running: 46 | - name: kube-scheduler 47 | - require: 48 | - file: /etc/kubernetes/scheduler 49 | - service: kube-apiserver 50 | - service: docker 51 | 52 | kubelet: 53 | service.running: 54 | - name: kubelet 55 | - watch: 56 | - file: /etc/kubernetes/config 57 | - file: /etc/kubernetes/kubelet 58 | - file: /var/lib/kubelet/kubeconfig 59 | - require: 60 | - service: docker 61 | - service: kube-apiserver 62 | - file: /etc/kubernetes/config 63 | - file: /etc/kubernetes/kubelet 64 | - file: /var/lib/kubelet/kubeconfig 65 | 66 | kube-proxy: 67 | service.running: 68 | - name: kube-proxy 69 | - watch: 70 | - file: /etc/kubernetes/config 71 | - file: /etc/kubernetes/proxy 72 | - file: /var/lib/kubelet/kubeconfig 73 | - require: 74 | - file: /etc/kubernetes/config 75 | - file: /etc/kubernetes/proxy 76 | - file: /var/lib/kubelet/kubeconfig 77 | - service: kubelet 78 | 79 | create-routing-scripts: 80 | cmd.script: 81 | - source: 
salt://master/post-boot-scripts/configure.sh 82 | - user: root 83 | - template: jinja 84 | - require: 85 | - service: kube-proxy 86 | 87 | generate-certs: 88 | cmd.script: 89 | - source: salt://master/pre-start-scripts/generate-certs.sh 90 | - user: root 91 | - template: jinja 92 | 93 | generate-minion-cert: 94 | file.managed: 95 | - name: /usr/sbin/gen-minion-cert.sh 96 | - user: root 97 | - group: root 98 | - makedirs: True 99 | - source: salt://master/pre-start-scripts/gen-minion-cert.sh 100 | - mode: 755 101 | 102 | kubectl-setup-root: 103 | cmd.run: 104 | - name: kubectl config set-cluster kubernetes --certificate-authority=/var/lib/kubernetes/ca.pem --embed-certs=true --server=https://{{ master_ip }}:6443 && kubectl config set-credentials admin --token chAng3m3 && kubectl config set-context default-context --cluster=kubernetes --user=admin && kubectl config use-context default-context 105 | - user: root 106 | - template: jinja 107 | - require: 108 | - cmd: generate-certs 109 | 110 | kubectl-setup-vagrant: 111 | cmd.run: 112 | - name: kubectl config set-cluster kubernetes --certificate-authority=/var/lib/kubernetes/ca.pem --embed-certs=true --server=https://{{ master_ip }}:6443 && kubectl config set-credentials admin --token chAng3m3 && kubectl config set-context default-context --cluster=kubernetes --user=admin && kubectl config use-context default-context 113 | - user: vagrant 114 | - template: jinja 115 | - require: 116 | - cmd: generate-certs 117 | 118 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/apiserver: -------------------------------------------------------------------------------- 1 | ### 2 | # kubernetes system config 3 | # 4 | # The following values are used to configure the kube-apiserver 5 | # 6 | {% set master_ip = salt['grains.get']('master_ip') %} 7 | 8 | # The address on the local server to listen to. 9 | KUBE_API_ADDRESS="--insecure-bind-address={{ master_ip }} --advertise-address={{ master_ip }} --bind-address={{ master_ip }} --allow-privileged=true --tls-cert-file=/var/lib/kubernetes/kubernetes.pem --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem --service-account-key-file=/var/lib/kubernetes/kubernetes-key.pem --authorization-policy-file=/var/lib/kubernetes/authorization-policy.json --authorization-mode=ABAC --token-auth-file=/var/lib/kubernetes/token.csv --cloud-config= --cloud-provider= --admission_control=AlwaysAdmit,NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --cors_allowed_origins=.*" 10 | 11 | # Comma separated list of nodes in the etcd cluster 12 | KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379" 13 | 14 | # Address range to use for services 15 | KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range={{ pillar['service_cluster_cidr'] }}" 16 | 17 | 18 | # Add your own! 
19 | KUBE_API_ARGS="" 20 | 21 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/authorization-policy.json: -------------------------------------------------------------------------------- 1 | {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"*", "nonResourcePath": "*", "readonly": true}} 2 | {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*"}} 3 | {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"scheduler", "namespace": "*", "resource": "*", "apiGroup": "*"}} 4 | {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "*", "apiGroup": "*"}} 5 | {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}} 6 | 7 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/config: -------------------------------------------------------------------------------- 1 | ### 2 | # kubernetes system config 3 | # 4 | # The following values are used to configure various aspects of all 5 | # kubernetes services, including 6 | # 7 | # kube-apiserver.service 8 | # kube-controller-manager.service 9 | # kube-scheduler.service 10 | # kubelet.service 11 | # kube-proxy.service 12 | # logging to stderr means we get it in the systemd journal 13 | {% set master_ip = salt['grains.get']('master_ip') %} 14 | KUBE_LOGTOSTDERR="--logtostderr=true" 15 | 16 | # journal message level, 0 is debug 17 | KUBE_LOG_LEVEL="--v=0" 18 | 19 | # Should this cluster be allowed to run privileged docker containers 20 | KUBE_ALLOW_PRIV="" 21 | 22 | # How the controller-manager, scheduler, and proxy find the apiserver 23 | KUBE_MASTER="" 24 | 25 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/controller-manager: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | ### 3 | # The following values are used to configure the kubernetes controller-manager 4 | 5 | # defaults from config and apiserver should be adequate 6 | 7 | # Add your own! 
8 | KUBE_CONTROLLER_MANAGER_ARGS="--master=http://{{ master_ip }}:8080 --service-account-private-key-file=/var/lib/kubernetes/kubernetes-key.pem --service-cluster-ip-range={{ pillar['service_cluster_cidr'] }} --root-ca-file=/var/lib/kubernetes/ca.pem" 9 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kube-apiserver.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes API Server 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | After=network.target 5 | After=etcd.service 6 | 7 | [Service] 8 | EnvironmentFile=-/etc/kubernetes/config 9 | EnvironmentFile=-/etc/kubernetes/apiserver 10 | User=kube 11 | ExecStart=/usr/bin/kube-apiserver \ 12 | $KUBE_LOGTOSTDERR \ 13 | $KUBE_LOG_LEVEL \ 14 | $KUBE_ETCD_SERVERS \ 15 | $KUBE_API_ADDRESS \ 16 | $KUBE_API_PORT \ 17 | $KUBELET_PORT \ 18 | $KUBE_ALLOW_PRIV \ 19 | $KUBE_SERVICE_ADDRESSES \ 20 | $KUBE_ADMISSION_CONTROL \ 21 | $KUBE_API_ARGS 22 | Restart=on-failure 23 | Type=notify 24 | LimitNOFILE=65536 25 | 26 | [Install] 27 | WantedBy=multi-user.target 28 | 29 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kube-controller-manager.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Controller Manager 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | 5 | [Service] 6 | EnvironmentFile=-/etc/kubernetes/config 7 | EnvironmentFile=-/etc/kubernetes/controller-manager 8 | User=kube 9 | ExecStart=/usr/bin/kube-controller-manager \ 10 | $KUBE_LOGTOSTDERR \ 11 | $KUBE_LOG_LEVEL \ 12 | $KUBE_MASTER \ 13 | $KUBE_CONTROLLER_MANAGER_ARGS 14 | Restart=on-failure 15 | LimitNOFILE=65536 16 | 17 | [Install] 18 | WantedBy=multi-user.target 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kube-proxy.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kube-Proxy Server 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | After=network.target 5 | 6 | [Service] 7 | EnvironmentFile=-/etc/kubernetes/config 8 | EnvironmentFile=-/etc/kubernetes/proxy 9 | ExecStart=/usr/bin/kube-proxy \ 10 | $KUBE_LOGTOSTDERR \ 11 | $KUBE_LOG_LEVEL \ 12 | $KUBE_MASTER \ 13 | $KUBE_PROXY_ARGS 14 | Restart=on-failure 15 | LimitNOFILE=65536 16 | 17 | [Install] 18 | WantedBy=multi-user.target 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kube-scheduler.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Scheduler Plugin 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | 5 | [Service] 6 | EnvironmentFile=-/etc/kubernetes/config 7 | EnvironmentFile=-/etc/kubernetes/scheduler 8 | User=kube 9 | ExecStart=/usr/bin/kube-scheduler \ 10 | $KUBE_LOGTOSTDERR \ 11 | $KUBE_LOG_LEVEL \ 12 | $KUBE_MASTER \ 13 | $KUBE_SCHEDULER_ARGS 14 | Restart=on-failure 15 | LimitNOFILE=65536 16 | 17 | [Install] 18 | WantedBy=multi-user.target 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubeconfig: 
-------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | 3 | apiVersion: v1 4 | kind: Config 5 | clusters: 6 | - cluster: 7 | certificate-authority: /var/lib/kubernetes/ca.pem 8 | server: https://{{ master_ip }}:6443 9 | name: kubernetes 10 | contexts: 11 | - context: 12 | cluster: kubernetes 13 | user: kubelet 14 | name: kubelet 15 | current-context: kubelet 16 | users: 17 | - name: kubelet 18 | user: 19 | token: chAng3m3 20 | 21 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubectl-config: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | apiVersion: v1 3 | clusters: 4 | - cluster: 5 | certificate-authority: /var/lib/kubernetes/ca.pem 6 | server: https://{{ master_ip }}:6443 7 | name: default 8 | contexts: 9 | - context: 10 | cluster: default 11 | user: root 12 | name: default-context 13 | current-context: default-context 14 | kind: Config 15 | preferences: 16 | colors: true 17 | users: [] 18 | 19 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubelet: -------------------------------------------------------------------------------- 1 | ### 2 | # kubernetes kubelet (minion) config 3 | 4 | {% set master_ip = salt['grains.get']('master_ip') %} 5 | 6 | # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) 7 | KUBELET_ADDRESS="--address={{ master_ip }}" 8 | 9 | # The port for the info server to serve on 10 | KUBELET_PORT="" 11 | 12 | # You may leave this blank to use the actual hostname 13 | KUBELET_HOSTNAME="--hostname_override={{ master_ip }}" 14 | 15 | # location of the api-server 16 | KUBELET_API_SERVER="--api-servers=https://{{ master_ip }}:6443" 17 | 18 | # Add your own! 
19 | KUBELET_ARGS="--cluster-dns={{ pillar['dns_server'] }} --cluster-domain={{ pillar['dns_domain'] }} --kubeconfig=/var/lib/kubelet/kubeconfig --tls-cert-file=/var/lib/kubernetes/kubernetes.pem --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem" 20 | 21 | 22 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet Server 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | After=docker.service 5 | Requires=docker.service 6 | 7 | [Service] 8 | WorkingDirectory=/var/lib/kubelet 9 | EnvironmentFile=-/etc/kubernetes/config 10 | EnvironmentFile=-/etc/kubernetes/kubelet 11 | ExecStart=/usr/bin/kubelet \ 12 | $KUBE_LOGTOSTDERR \ 13 | $KUBE_LOG_LEVEL \ 14 | $KUBELET_API_SERVER \ 15 | $KUBELET_ADDRESS \ 16 | $KUBELET_PORT \ 17 | $KUBELET_HOSTNAME \ 18 | $KUBE_ALLOW_PRIV \ 19 | $KUBELET_POD_INFRA_CONTAINER \ 20 | $KUBELET_ARGS 21 | Restart=on-failure 22 | 23 | [Install] 24 | WantedBy=multi-user.target 25 | 26 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubernetes-accounting.conf: -------------------------------------------------------------------------------- 1 | [Manager] 2 | DefaultCPUAccounting=yes 3 | DefaultMemoryAccounting=yes 4 | 5 | 6 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/kubernetes.conf: -------------------------------------------------------------------------------- 1 | d /var/run/kubernetes 0755 kube kube - 2 | 3 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/proxy: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | ### 3 | # kubernetes proxy config 4 | 5 | # default config should be adequate 6 | 7 | # Add your own! 8 | KUBE_PROXY_ARGS="--bind-address={{ master_ip }} --cluster-cidr={{ pillar['kube_cluster_cidr'] }} --proxy-mode=iptables --hostname_override={{ master_ip }} --kubeconfig=/var/lib/kubelet/kubeconfig" 9 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/scheduler: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | ### 3 | # kubernetes scheduler config 4 | 5 | # default config should be adequate 6 | 7 | # Add your own! 
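# (added note) As with the controller-manager above, the scheduler reaches
# the apiserver over plain HTTP on port 8080; only the kubelet and kube-proxy
# authenticate against the TLS endpoint on 6443 via the kubeconfig.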
8 | KUBE_SCHEDULER_ARGS="--master=http://{{ master_ip }}:8080" 9 | 10 | 11 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cfg/token.csv: -------------------------------------------------------------------------------- 1 | chAng3m3,admin,admin 2 | chAng3m3,scheduler,scheduler 3 | chAng3m3,kubelet,kubelet 4 | 5 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/dns/cfg/skydns-rc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: kube-dns-v19 5 | namespace: kube-system 6 | labels: 7 | k8s-app: kube-dns 8 | version: v19 9 | kubernetes.io/cluster-service: "true" 10 | spec: 11 | replicas: {{ pillar['dns_replicas'] }} 12 | selector: 13 | k8s-app: kube-dns 14 | version: v19 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: kube-dns 19 | version: v19 20 | kubernetes.io/cluster-service: "true" 21 | annotations: 22 | scheduler.alpha.kubernetes.io/critical-pod: '' 23 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 24 | spec: 25 | containers: 26 | - name: kubedns 27 | image: gcr.io/google_containers/kubedns-amd64:1.7 28 | resources: 29 | # TODO: Set memory limits when we've profiled the container for large 30 | # clusters, then set request = limit to keep this container in 31 | # guaranteed class. Currently, this container falls into the 32 | # "burstable" category so the kubelet doesn't backoff from restarting it. 33 | limits: 34 | memory: 170Mi 35 | requests: 36 | cpu: 100m 37 | memory: 70Mi 38 | livenessProbe: 39 | httpGet: 40 | path: /healthz 41 | port: 8080 42 | scheme: HTTP 43 | initialDelaySeconds: 60 44 | timeoutSeconds: 5 45 | successThreshold: 1 46 | failureThreshold: 5 47 | readinessProbe: 48 | httpGet: 49 | path: /readiness 50 | port: 8081 51 | scheme: HTTP 52 | # we poll on pod startup for the Kubernetes master service and 53 | # only setup the /readiness HTTP server once that's available. 54 | initialDelaySeconds: 30 55 | timeoutSeconds: 5 56 | args: 57 | # command = "/kube-dns" 58 | - --domain={{ pillar['dns_domain'] }}. 59 | - --dns-port=10053 60 | ports: 61 | - containerPort: 10053 62 | name: dns-local 63 | protocol: UDP 64 | - containerPort: 10053 65 | name: dns-tcp-local 66 | protocol: TCP 67 | - name: dnsmasq 68 | image: gcr.io/google_containers/kube-dnsmasq-amd64:1.3 69 | args: 70 | - --cache-size=1000 71 | - --no-resolv 72 | - --server=127.0.0.1#10053 73 | ports: 74 | - containerPort: 53 75 | name: dns 76 | protocol: UDP 77 | - containerPort: 53 78 | name: dns-tcp 79 | protocol: TCP 80 | - name: healthz 81 | image: gcr.io/google_containers/exechealthz-amd64:1.1 82 | resources: 83 | limits: 84 | memory: 50Mi 85 | requests: 86 | cpu: 10m 87 | # Note that this container shouldn't really need 50Mi of memory. The 88 | # limits are set higher than expected pending investigation on #29688. 89 | # The extra memory was stolen from the kubedns container to keep the 90 | # net memory requested by the pod constant. 91 | memory: 50Mi 92 | args: 93 | - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1:10053 >/dev/null 94 | - -port=8080 95 | - -quiet 96 | ports: 97 | - containerPort: 8080 98 | protocol: TCP 99 | dnsPolicy: Default # Don't use cluster DNS. 
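# (usage sketch) master/kubernetes/init.sls installs this manifest at
# /etc/kubernetes/dns/skydns-rc.yaml; once the master is up it can be
# loaded by hand with e.g. `kubectl create -f /etc/kubernetes/dns/skydns-rc.yaml`.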
100 | 101 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/dns/cfg/skydns-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kube-dns 5 | namespace: kube-system 6 | labels: 7 | k8s-app: kube-dns 8 | kubernetes.io/cluster-service: "true" 9 | kubernetes.io/name: "KubeDNS" 10 | spec: 11 | selector: 12 | k8s-app: kube-dns 13 | clusterIP: {{ pillar['dns_server'] }} 14 | ports: 15 | - name: dns 16 | port: 53 17 | protocol: UDP 18 | - name: dns-tcp 19 | port: 53 20 | protocol: TCP 21 | 22 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/grafana/cfg/grafana-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: monitoring-grafana 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "Grafana" 9 | spec: 10 | # On production clusters, consider setting up auth for grafana, and 11 | # exposing Grafana either using a LoadBalancer or a public IP. 12 | # type: LoadBalancer 13 | ports: 14 | - port: 80 15 | targetPort: 3000 16 | selector: 17 | k8s-app: influxGrafana 18 | 19 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/grafana/cfg/heapster-controller.yaml: -------------------------------------------------------------------------------- 1 | {% set heapster_memory = "300Mi" -%} 2 | {% if pillar['num_nodes'] is defined -%} 3 | {% set heapster_memory = (200 + pillar['num_nodes'] * 12)|string + "Mi" -%} 4 | {% endif -%} 5 | {% set master_ip = salt['grains.get']('master_ip') %} 6 | 7 | apiVersion: v1 8 | kind: ReplicationController 9 | metadata: 10 | name: heapster-v1.1.0 11 | namespace: kube-system 12 | labels: 13 | k8s-app: heapster 14 | version: v1.1.0 15 | kubernetes.io/cluster-service: "true" 16 | spec: 17 | replicas: 1 18 | selector: 19 | k8s-app: heapster 20 | version: v1.1.0 21 | template: 22 | metadata: 23 | labels: 24 | k8s-app: heapster 25 | version: v1.1.0 26 | kubernetes.io/cluster-service: "true" 27 | spec: 28 | containers: 29 | - image: gcr.io/google_containers/heapster:v1.1.0 30 | name: heapster 31 | resources: 32 | limits: 33 | cpu: 100m 34 | memory: {{ heapster_memory }} 35 | command: 36 | - /heapster 37 | - --source=kubernetes:http://{{ master_ip }}:8080?inClusterConfig=false&insecure=true&auth= 38 | - --sink=influxdb:http://monitoring-influxdb:8086 39 | 40 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/grafana/cfg/heapster-service.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: heapster 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "Heapster" 9 | spec: 10 | ports: 11 | - port: 80 12 | targetPort: 8082 13 | selector: 14 | k8s-app: heapster 15 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/grafana/cfg/influxdb-grafana-controller.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: 
ReplicationController 3 | metadata: 4 | name: monitoring-influxdb-grafana-v3 5 | namespace: kube-system 6 | labels: 7 | k8s-app: influxGrafana 8 | version: v3 9 | kubernetes.io/cluster-service: "true" 10 | spec: 11 | replicas: 1 12 | selector: 13 | k8s-app: influxGrafana 14 | version: v3 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: influxGrafana 19 | version: v3 20 | kubernetes.io/cluster-service: "true" 21 | spec: 22 | containers: 23 | - image: gcr.io/google_containers/heapster_influxdb:v0.5 24 | name: influxdb 25 | resources: 26 | limits: 27 | cpu: 100m 28 | memory: 200Mi 29 | ports: 30 | - containerPort: 8083 31 | hostPort: 8083 32 | - containerPort: 8086 33 | hostPort: 8086 34 | volumeMounts: 35 | - name: influxdb-persistent-storage 36 | mountPath: /data 37 | - image: gcr.io/google_containers/heapster_grafana:v2.6.0-2 38 | name: grafana 39 | 40 | resources: 41 | limits: 42 | cpu: 100m 43 | memory: 100Mi 44 | env: 45 | # This variable is required to set up templates in Grafana. 46 | - name: INFLUXDB_SERVICE_URL 47 | value: http://monitoring-influxdb:8086 48 | # The following env variables are required to make Grafana accessible via 49 | # the kubernetes api-server proxy. On production clusters, we recommend 50 | # removing these env variables, setting up auth for Grafana, and exposing the 51 | # Grafana service using a LoadBalancer or a public IP. 52 | - name: GF_AUTH_BASIC_ENABLED 53 | value: "false" 54 | - name: GF_AUTH_ANONYMOUS_ENABLED 55 | value: "true" 56 | - name: GF_AUTH_ANONYMOUS_ORG_ROLE 57 | value: Admin 58 | - name: GF_SERVER_ROOT_URL 59 | value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ 60 | volumeMounts: 61 | - name: grafana-persistent-storage 62 | mountPath: /var 63 | 64 | volumes: 65 | - name: influxdb-persistent-storage 66 | emptyDir: {} 67 | - name: grafana-persistent-storage 68 | emptyDir: {} 69 | 70 | 71 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/grafana/cfg/influxdb-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: monitoring-influxdb 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "InfluxDB" 9 | spec: 10 | ports: 11 | - name: http 12 | port: 8083 13 | targetPort: 8083 14 | - name: api 15 | port: 8086 16 | targetPort: 8086 17 | selector: 18 | k8s-app: influxGrafana 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/kubernetes-dashboard/cfg/kubernetes-dashboard.yaml: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | kind: Deployment 3 | apiVersion: extensions/v1beta1 4 | metadata: 5 | labels: 6 | app: kubernetes-dashboard 7 | version: v1.4.0 8 | kubernetes.io/cluster-service: "true" 9 | name: kubernetes-dashboard 10 | namespace: kube-system 11 | spec: 12 | replicas: 1 13 | selector: 14 | matchLabels: 15 | app: kubernetes-dashboard 16 | template: 17 | metadata: 18 | labels: 19 | app: kubernetes-dashboard 20 | kubernetes.io/cluster-service: "true" 21 | version: v1.4.0 22 | spec: 23 | containers: 24 | - name: kubernetes-dashboard 25 | image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.4.0 26 | imagePullPolicy: Always 27 | ports: 28 | - containerPort: 9090 29 | protocol: TCP 30 | args: 31 | #
The API server address is manually specified below. 32 | # If this flag is omitted, Dashboard will attempt to auto-discover the API server 33 | # and connect to it; here it is pointed explicitly at the master. 34 | - --apiserver-host=http://{{ master_ip }}:8080 35 | livenessProbe: 36 | httpGet: 37 | path: / 38 | port: 9090 39 | initialDelaySeconds: 30 40 | timeoutSeconds: 30 41 | --- 42 | kind: Service 43 | apiVersion: v1 44 | metadata: 45 | labels: 46 | app: kubernetes-dashboard 47 | kubernetes.io/cluster-service: "true" 48 | kubernetes.io/name: "KubeDashboard" 49 | name: kubernetes-dashboard 50 | namespace: kube-system 51 | spec: 52 | type: NodePort 53 | ports: 54 | - port: 80 55 | targetPort: 9090 56 | selector: 57 | app: kubernetes-dashboard 58 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/registry/registry-pv.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolume 2 | apiVersion: v1 3 | metadata: 4 | name: kube-system-kube-registry-pv 5 | labels: 6 | kubernetes.io/cluster-service: "true" 7 | spec: 8 | capacity: 9 | storage: {{ pillar['cluster_registry_disk_size'] }} 10 | accessModes: 11 | - ReadWriteOnce 12 | nfs: 13 | server: nfs.dekstroza.local 14 | path: /opt/docker-registry 15 | 16 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/registry/registry-pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | name: kube-registry-pvc 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | spec: 9 | accessModes: 10 | - ReadWriteOnce 11 | resources: 12 | requests: 13 | storage: {{ pillar['cluster_registry_disk_size'] }} 14 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/registry/registry-rc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: kube-registry-v0 5 | namespace: kube-system 6 | labels: 7 | k8s-app: kube-registry 8 | version: v0 9 | kubernetes.io/cluster-service: "true" 10 | spec: 11 | replicas: 1 12 | selector: 13 | k8s-app: kube-registry 14 | version: v0 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: kube-registry 19 | version: v0 20 | kubernetes.io/cluster-service: "true" 21 | spec: 22 | containers: 23 | - name: registry 24 | image: registry:2 25 | resources: 26 | limits: 27 | cpu: 100m 28 | memory: 100Mi 29 | env: 30 | - name: REGISTRY_HTTP_ADDR 31 | value: :5000 32 | - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY 33 | value: /var/lib/registry 34 | volumeMounts: 35 | - name: image-store 36 | mountPath: /var/lib/registry 37 | ports: 38 | - containerPort: 5000 39 | name: registry 40 | protocol: TCP 41 | # image data lives on the NFS-backed PVC defined in registry-pvc.yaml 42 | volumes: 43 | - name: image-store 44 | persistentVolumeClaim: 45 | claimName: kube-registry-pvc 46 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/cluster-addons/registry/registry-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kube-registry 5 | namespace: kube-system 6 | labels: 7 |
k8s-app: kube-registry 8 | kubernetes.io/cluster-service: "true" 9 | kubernetes.io/name: "KubeRegistry" 10 | spec: 11 | selector: 12 | k8s-app: kube-registry 13 | ports: 14 | - name: registry 15 | port: 5000 16 | protocol: TCP 17 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - master.kubernetes.users 3 | 4 | var-run-kubernetes-dir: 5 | file.directory: 6 | - name: /var/run/kubernetes 7 | - user: kube 8 | - group: kube 9 | - makedirs: True 10 | kube-apiserver-service: 11 | file.managed: 12 | - name: /usr/lib/systemd/system/kube-apiserver.service 13 | - user: root 14 | - group: root 15 | - makedirs: True 16 | - source: salt://master/kubernetes/cfg/kube-apiserver.service 17 | - mode: 644 18 | kube-controller-manager-service: 19 | file.managed: 20 | - name: /usr/lib/systemd/system/kube-controller-manager.service 21 | - user: root 22 | - group: root 23 | - makedirs: True 24 | - source: salt://master/kubernetes/cfg/kube-controller-manager.service 25 | - mode: 644 26 | kube-scheduler-service: 27 | file.managed: 28 | - name: /usr/lib/systemd/system/kube-scheduler.service 29 | - user: root 30 | - group: root 31 | - makedirs: True 32 | - source: salt://master/kubernetes/cfg/kube-scheduler.service 33 | - mode: 644 34 | kubernetes-conf: 35 | file.managed: 36 | - name: /usr/lib/tmpfiles.d/kubernetes.conf 37 | - user: root 38 | - group: root 39 | - makedirs: True 40 | - source: salt://master/kubernetes/cfg/kubernetes.conf 41 | - mode: 644 42 | kube-proxy-service: 43 | file.managed: 44 | - name: /usr/lib/systemd/system/kube-proxy.service 45 | - user: root 46 | - group: root 47 | - makedirs: True 48 | - source: salt://master/kubernetes/cfg/kube-proxy.service 49 | - mode: 644 50 | kubelet-service: 51 | file.managed: 52 | - name: /usr/lib/systemd/system/kubelet.service 53 | - user: root 54 | - group: root 55 | - makedirs: True 56 | - source: salt://master/kubernetes/cfg/kubelet.service 57 | - mode: 644 58 | kubernetes-accounting-conf: 59 | file.managed: 60 | - name: /etc/systemd/system.conf.d/kubernetes-accounting.conf 61 | - user: root 62 | - group: root 63 | - makedirs: True 64 | - source: salt://master/kubernetes/cfg/kubernetes-accounting.conf 65 | - mode: 644 66 | kube-apiserver-config: 67 | file.managed: 68 | - name: /etc/kubernetes/apiserver 69 | - user: root 70 | - group: root 71 | - makedirs: True 72 | - source: salt://master/kubernetes/cfg/apiserver 73 | - mode: 644 74 | - template: jinja 75 | 76 | kubernetes-config: 77 | file.managed: 78 | - name: /etc/kubernetes/config 79 | - user: root 80 | - group: root 81 | - makedirs: True 82 | - source: salt://master/kubernetes/cfg/config 83 | - mode: 644 84 | - template: jinja 85 | 86 | kube-controller-manager: 87 | file.managed: 88 | - name: /etc/kubernetes/controller-manager 89 | - user: root 90 | - group: root 91 | - makedirs: True 92 | - source: salt://master/kubernetes/cfg/controller-manager 93 | - mode: 644 94 | - template: jinja 95 | 96 | kube-scheduler: 97 | file.managed: 98 | - name: /etc/kubernetes/scheduler 99 | - user: root 100 | - group: root 101 | - makedirs: True 102 | - source: salt://master/kubernetes/cfg/scheduler 103 | - mode: 644 104 | - template: jinja 105 | 106 | kubelet-config: 107 | file.managed: 108 | - name: /etc/kubernetes/kubelet 109 | - user: root 110 | - group: root 111 | - makedirs: True 112 | - source: salt://master/kubernetes/cfg/kubelet 113 | - 
mode: 644 114 | - template: jinja 115 | 116 | kube-proxy-config: 117 | file.managed: 118 | - name: /etc/kubernetes/proxy 119 | - user: root 120 | - group: root 121 | - makedirs: True 122 | - source: salt://master/kubernetes/cfg/proxy 123 | - mode: 644 124 | - template: jinja 125 | ## Configuration file for all auth with api server ## 126 | kubeconfig: 127 | file.managed: 128 | - name: /var/lib/kubelet/kubeconfig 129 | - user: root 130 | - group: root 131 | - makedirs: True 132 | - source: salt://master/kubernetes/cfg/kubeconfig 133 | - mode: 644 134 | - template: jinja 135 | - makedirs: True 136 | ## Token file ## 137 | token.csv: 138 | file.managed: 139 | - name: /var/lib/kubernetes/token.csv 140 | - user: root 141 | - group: root 142 | - makedirs: True 143 | - source: salt://master/kubernetes/cfg/token.csv 144 | - mode: 644 145 | - template: jinja 146 | - makedirs: True 147 | ## Authorization policy file ## 148 | authorization-policy.json: 149 | file.managed: 150 | - name: /var/lib/kubernetes/authorization-policy.json 151 | - user: root 152 | - group: root 153 | - makedirs: True 154 | - source: salt://master/kubernetes/cfg/authorization-policy.json 155 | - mode: 644 156 | - template: jinja 157 | - makedirs: True 158 | 159 | 160 | ### Cluster services ### 161 | dns-rc-setup: 162 | file.managed: 163 | - name: /etc/kubernetes/dns/skydns-rc.yaml 164 | - makedirs: True 165 | - user: root 166 | - group: root 167 | - makedirs: True 168 | - source: salt://master/kubernetes/cluster-addons/dns/cfg/skydns-rc.yaml 169 | - template: jinja 170 | - require: 171 | - service: kube-proxy 172 | 173 | dns-svc-setup: 174 | file.managed: 175 | - name: /etc/kubernetes/dns/skydns-svc.yaml 176 | - makedirs: True 177 | - user: root 178 | - group: root 179 | - makedirs: True 180 | - source: salt://master/kubernetes/cluster-addons/dns/cfg/skydns-svc.yaml 181 | - template: jinja 182 | - require: 183 | - service: kube-proxy 184 | 185 | kubernetes-dashboard: 186 | file.managed: 187 | - name: /etc/kubernetes/kubernetes-dashboard/kubernetes-dashboard.yaml 188 | - makedirs: True 189 | - user: root 190 | - group: root 191 | - makedirs: True 192 | - source: salt://master/kubernetes/cluster-addons/kubernetes-dashboard/cfg/kubernetes-dashboard.yaml 193 | - template: jinja 194 | - require: 195 | - service: kube-proxy 196 | 197 | kube-grafana-servicesetup: 198 | file.managed: 199 | - name: /etc/kubernetes/grafana/grafana-service.yaml 200 | - makedirs: True 201 | - user: root 202 | - group: root 203 | - makedirs: True 204 | - source: salt://master/kubernetes/cluster-addons/grafana/cfg/grafana-service.yaml 205 | - template: jinja 206 | - require: 207 | - service: kube-proxy 208 | 209 | kube-grafana-heapster-controller: 210 | file.managed: 211 | - name: /etc/kubernetes/grafana/heapster-controller.yaml 212 | - makedirs: True 213 | - user: root 214 | - group: root 215 | - makedirs: True 216 | - source: salt://master/kubernetes/cluster-addons/grafana/cfg/heapster-controller.yaml 217 | - template: jinja 218 | - require: 219 | - service: kube-proxy 220 | 221 | kube-grafana-heapster-svc: 222 | file.managed: 223 | - name: /etc/kubernetes/grafana/heapster-service.yaml 224 | - makedirs: True 225 | - user: root 226 | - group: root 227 | - makedirs: True 228 | - source: salt://master/kubernetes/cluster-addons/grafana/cfg/heapster-service.yaml 229 | - template: jinja 230 | - require: 231 | - service: kube-proxy 232 | 233 | kube-grafana-influx-rc: 234 | file.managed: 235 | - name: /etc/kubernetes/grafana/influxdb-grafana-controller.yaml 236 | 
- makedirs: True 237 | - user: root 238 | - group: root 239 | - makedirs: True 240 | - source: salt://master/kubernetes/cluster-addons/grafana/cfg/influxdb-grafana-controller.yaml 241 | - template: jinja 242 | - require: 243 | - service: kube-proxy 244 | 245 | kube-grafana-influx-svc: 246 | file.managed: 247 | - name: /etc/kubernetes/grafana/influxdb-service.yaml 248 | - makedirs: True 249 | - user: root 250 | - group: root 251 | - makedirs: True 252 | - source: salt://master/kubernetes/cluster-addons/grafana/cfg/influxdb-service.yaml 253 | - template: jinja 254 | - require: 255 | - service: kube-proxy 256 | 257 | registry-svc: 258 | file.managed: 259 | - name: /etc/kubernetes/registry/registry-svc.yaml 260 | - makedirs: True 261 | - user: root 262 | - group: root 263 | - makedirs: True 264 | - source: salt://master/kubernetes/cluster-addons/registry/registry-svc.yaml 265 | - template: jinja 266 | - require: 267 | - service: kube-proxy 268 | 269 | registry-rc: 270 | file.managed: 271 | - name: /etc/kubernetes/registry/registry-rc.yaml 272 | - makedirs: True 273 | - user: root 274 | - group: root 275 | - makedirs: True 276 | - source: salt://master/kubernetes/cluster-addons/registry/registry-rc.yaml 277 | - template: jinja 278 | - require: 279 | - service: kube-proxy 280 | 281 | registry-pv: 282 | file.managed: 283 | - name: /etc/kubernetes/registry/registry-pv.yaml 284 | - makedirs: True 285 | - user: root 286 | - group: root 287 | - makedirs: True 288 | - source: salt://master/kubernetes/cluster-addons/registry/registry-pv.yaml 289 | - template: jinja 290 | - require: 291 | - service: kube-proxy 292 | 293 | registry-pvc: 294 | file.managed: 295 | - name: /etc/kubernetes/registry/registry-pvc.yaml 296 | - makedirs: True 297 | - user: root 298 | - group: root 299 | - makedirs: True 300 | - source: salt://master/kubernetes/cluster-addons/registry/registry-pvc.yaml 301 | - template: jinja 302 | - require: 303 | - service: kube-proxy 304 | 305 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/kubernetes/users/init.sls: -------------------------------------------------------------------------------- 1 | kube-group: 2 | group.present: 3 | - system: True 4 | kube-user: 5 | user.present: 6 | - name: kube 7 | - fullname: Kubernetes user 8 | - shell: /sbin/nologin 9 | - createhome: False 10 | - gid_from_name: True 11 | 12 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/post-boot-scripts/configure.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | {% set master_ip = salt['grains.get']('master_ip') %} 3 | ### add route command for OSX ### 4 | echo -e "sudo route -n delete {{ pillar['service_cluster_cidr'] }}" > /vagrant/add-route-osX.sh 5 | echo -e "sudo route -n add {{ pillar['service_cluster_cidr'] }} {{ master_ip }}" >> /vagrant/add-route-osX.sh 6 | chmod +x /vagrant/add-route-osX.sh 7 | 8 | ### add route command for LINUX ### 9 | echo -e "sudo route del -net {{ pillar['service_cluster_cidr'] }}" > /vagrant/add-route-LIN.sh 10 | echo -e "sudo route add -net {{ pillar['service_cluster_cidr'] }} gw {{ master_ip }}" >> /vagrant/add-route-LIN.sh 11 | chmod +x /vagrant/add-route-LIN.sh 12 | 13 | ### add route command for WINDOWS ### 14 | echo -e "route delete {{ pillar['service_cluster_cidr'] }} mask 255.255.0.0 & route add {{ pillar['service_cluster_cidr'] }} mask 255.255.0.0 {{ master_ip }}" > /vagrant/add-route-WIN.bat 15 | 16 | 
echo -e "MASTER IP: {{ master_ip }}" 17 | 18 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/pre-start-scripts/gen-minion-cert.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | MINION_IP=$1 4 | 5 | mkdir -p /opt/certs/$MINION_IP && cd /opt/certs/$MINION_IP 6 | 7 | echo '{ 8 | "CN": "kubernetes", 9 | "hosts": [ 10 | "MINION_IP" 11 | ], 12 | "key": { 13 | "algo": "rsa", 14 | "size": 2048 15 | }, 16 | "names": [ 17 | { 18 | "C": "IE", 19 | "L": "Athlone", 20 | "O": "Kubernetes", 21 | "OU": "Cluster", 22 | "ST": "Westmeath" 23 | } 24 | ] 25 | }' > $MINION_IP-csr.json 26 | 27 | sed -i "s/MINION_IP/$MINION_IP/g" $MINION_IP-csr.json 28 | 29 | cfssl gencert \ 30 | -ca=../root-ca/ca.pem \ 31 | -ca-key=../root-ca/ca-key.pem \ 32 | -config=../ca-config.json \ 33 | -profile=kubernetes \ 34 | $MINION_IP-csr.json | cfssljson -bare kubernetes 35 | 36 | chmod +r /opt/certs/$MINION_IP/ -R 37 | 38 | 39 | -------------------------------------------------------------------------------- /vagrant/salt/salt/master/pre-start-scripts/generate-certs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | {% set master_ip = salt['grains.get']('master_ip') %} 4 | 5 | mkdir /opt/certs && cd /opt/certs 6 | 7 | echo '{ 8 | "signing": { 9 | "default": { 10 | "expiry": "8760h" 11 | }, 12 | "profiles": { 13 | "kubernetes": { 14 | "usages": ["signing", "key encipherment", "server auth", "client auth"], 15 | "expiry": "8760h" 16 | } 17 | } 18 | } 19 | }' > ca-config.json 20 | 21 | echo '{ 22 | "CN": "Kubernetes", 23 | "key": { 24 | "algo": "rsa", 25 | "size": 2048 26 | }, 27 | "names": [ 28 | { 29 | "C": "IE", 30 | "L": "Athlone", 31 | "O": "Kubernetes", 32 | "OU": "CA", 33 | "ST": "Westmeath" 34 | } 35 | ] 36 | }' > ca-csr.json 37 | 38 | cfssl gencert -initca ca-csr.json | cfssljson -bare ca 39 | mkdir root-ca 40 | mv ca-key.pem root-ca/ 41 | mv ca.csr root-ca/ 42 | mv ca.pem root-ca/ 43 | 44 | mkdir kube-master 45 | echo '{ 46 | "CN": "kubernetes", 47 | "hosts": [ 48 | "10.0.0.1", 49 | "{{ master_ip }}" 50 | ], 51 | "key": { 52 | "algo": "rsa", 53 | "size": 2048 54 | }, 55 | "names": [ 56 | { 57 | "C": "IE", 58 | "L": "Athlone", 59 | "O": "Kubernetes", 60 | "OU": "Cluster", 61 | "ST": "Westmeath" 62 | } 63 | ] 64 | }' > kube-master/kubernetes-csr.json 65 | 66 | cd kube-master 67 | cfssl gencert \ 68 | -ca=../root-ca/ca.pem \ 69 | -ca-key=../root-ca/ca-key.pem \ 70 | -config=../ca-config.json \ 71 | -profile=kubernetes \ 72 | ../kube-master/kubernetes-csr.json | cfssljson -bare kubernetes 73 | cd .. 
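# (optional sanity check, not part of the original flow) confirm the freshly
# generated server cert chains back to the CA:
# openssl verify -CAfile root-ca/ca.pem kube-master/kubernetes.pem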
74 | 75 | mkdir -p /var/lib/kubernetes 76 | 77 | cp root-ca/ca.csr /var/lib/kubernetes/ 78 | cp root-ca/ca-key.pem /var/lib/kubernetes/ 79 | cp root-ca/ca.pem /var/lib/kubernetes/ 80 | cp kube-master/kubernetes-key.pem /var/lib/kubernetes/ 81 | cp kube-master/kubernetes.csr /var/lib/kubernetes/ 82 | cp kube-master/kubernetes.pem /var/lib/kubernetes/ 83 | chmod +r /var/lib/kubernetes/ -R 84 | 85 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - minion.kubernetes 3 | 4 | {% set nfs_ip = salt['grains.get']('nfs_ip') %} 5 | 6 | dekstroza-nfs-server: 7 | host.present: 8 | - ip: {{ nfs_ip }} 9 | - names: 10 | - nfs 11 | - nfs.{{ pillar['dns_domain'] }} 12 | 13 | permissive: 14 | selinux.mode 15 | 16 | firewalld: 17 | service.dead: 18 | - name: firewalld 19 | - enable: false 20 | 21 | kubelet-running: 22 | service.running: 23 | - name: kubelet 24 | - watch: 25 | - file: /etc/kubernetes/config 26 | - file: /etc/kubernetes/kubelet 27 | - require: 28 | - service: docker 29 | - file: /etc/kubernetes/config 30 | - file: /etc/kubernetes/kubelet 31 | - cmd: setup-ca-from-master 32 | 33 | kube-proxy-running: 34 | service.running: 35 | - name: kube-proxy 36 | - watch: 37 | - file: /etc/kubernetes/proxy 38 | - require: 39 | - file: /etc/kubernetes/proxy 40 | - service: kubelet 41 | - cmd: setup-ca-from-master 42 | 43 | setup-ca-from-master: 44 | cmd.script: 45 | - source: salt://minion/post-boot-scripts/copy-master-ca.sh 46 | - user: root 47 | - template: jinja 48 | 49 | create-routing-scripts: 50 | cmd.script: 51 | - source: salt://minion/post-boot-scripts/configure.sh 52 | - user: root 53 | - template: jinja 54 | - require: 55 | - service: kube-proxy 56 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/config: -------------------------------------------------------------------------------- 1 | ### 2 | # kubernetes system config 3 | # 4 | # The following values are used to configure various aspects of all 5 | # kubernetes services, including 6 | # 7 | # kube-apiserver.service 8 | # kube-controller-manager.service 9 | # kube-scheduler.service 10 | # kubelet.service 11 | # kube-proxy.service 12 | # logging to stderr means we get it in the systemd journal 13 | KUBE_LOGTOSTDERR="--logtostderr=true" 14 | 15 | # journal message level, 0 is debug 16 | KUBE_LOG_LEVEL="--v=0" 17 | 18 | # Should this cluster be allowed to run privileged docker containers 19 | KUBE_ALLOW_PRIV="--allow_privileged=true" 20 | 21 | # How the controller-manager, scheduler, and proxy find the apiserver 22 | {% set master_ip = salt['grains.get']('master_ip') %} 23 | KUBE_MASTER="--master=https://{{ master_ip }}:6443" 24 | 25 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kube-proxy.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kube-Proxy Server 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | After=network.target 5 | 6 | [Service] 7 | EnvironmentFile=-/etc/kubernetes/config 8 | EnvironmentFile=-/etc/kubernetes/proxy 9 | ExecStart=/usr/bin/kube-proxy \ 10 | $KUBE_LOGTOSTDERR \ 11 | $KUBE_LOG_LEVEL \ 12 | $KUBE_MASTER \ 13 | $KUBE_PROXY_ARGS 14 | Restart=on-failure 15 | LimitNOFILE=65536 16 | 17 | [Install] 18 | 
WantedBy=multi-user.target 19 | 20 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kubectl-config: -------------------------------------------------------------------------------- 1 | {% set master_ip = salt['grains.get']('master_ip') %} 2 | apiVersion: v1 3 | clusters: 4 | - cluster: 5 | server: http://{{ master_ip }}:8080 6 | name: default 7 | contexts: 8 | - context: 9 | cluster: default 10 | user: root 11 | name: default-context 12 | current-context: default-context 13 | kind: Config 14 | preferences: 15 | colors: true 16 | users: [] 17 | 18 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kubelet: -------------------------------------------------------------------------------- 1 | ### 2 | # kubernetes kubelet (minion) config 3 | 4 | {% set master_ip = salt['grains.get']('master_ip') %} 5 | 6 | # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces) 7 | KUBELET_ADDRESS="--address={{ salt['network.interfaces']()['eth1']['inet'][0]['address'] }}" 8 | 9 | # The port for the info server to serve on 10 | KUBELET_PORT="--port=10250" 11 | 12 | # You may leave this blank to use the actual hostname 13 | KUBELET_HOSTNAME="--hostname_override={{ salt['network.interfaces']()['eth1']['inet'][0]['address'] }}" 14 | 15 | # location of the api-server 16 | {% set master_ip = salt['grains.get']('master_ip') %} 17 | KUBELET_API_SERVER="--api_servers=https://{{ master_ip }}:6443" 18 | 19 | # Add your own! 20 | KUBELET_ARGS="--cluster-dns={{ pillar['dns_server'] }} --cluster-domain={{ pillar['dns_domain'] }} --kubeconfig=/var/lib/kubelet/kubeconfig --tls-cert-file=/var/lib/kubernetes/kubernetes.pem --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem" 21 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet Server 3 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 4 | After=docker.service 5 | Requires=docker.service 6 | 7 | [Service] 8 | WorkingDirectory=/var/lib/kubelet 9 | EnvironmentFile=-/etc/kubernetes/config 10 | EnvironmentFile=-/etc/kubernetes/kubelet 11 | ExecStart=/usr/bin/kubelet \ 12 | $KUBE_LOGTOSTDERR \ 13 | $KUBE_LOG_LEVEL \ 14 | $KUBELET_API_SERVER \ 15 | $KUBELET_ADDRESS \ 16 | $KUBELET_PORT \ 17 | $KUBELET_HOSTNAME \ 18 | $KUBE_ALLOW_PRIV \ 19 | $KUBELET_POD_INFRA_CONTAINER \ 20 | $KUBELET_ARGS 21 | Restart=on-failure 22 | 23 | [Install] 24 | WantedBy=multi-user.target 25 | 26 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kubernetes-accounting.conf: -------------------------------------------------------------------------------- 1 | [Manager] 2 | DefaultCPUAccounting=yes 3 | DefaultMemoryAccounting=yes 4 | 5 | 6 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/kubernetes.conf: -------------------------------------------------------------------------------- 1 | d /var/run/kubernetes 0755 kube kube - 2 | 3 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/cfg/proxy: -------------------------------------------------------------------------------- 1 | 2 | 
KUBE_PROXY_ARGS="--bind-address={{ salt['network.interfaces']()['eth1']['inet'][0]['address'] }} --cluster-cidr={{ pillar['kube_cluster_cidr'] }} --proxy-mode=iptables --hostname_override={{ salt['network.interfaces']()['eth1']['inet'][0]['address'] }} --kubeconfig=/var/lib/kubelet/kubeconfig" 3 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/init.sls: -------------------------------------------------------------------------------- 1 | include: 2 | - minion.kubernetes.users 3 | 4 | var-lib-kubernetes-dir: 5 | file.directory: 6 | - name: /var/lib/kubernetes 7 | - user: kube 8 | - group: kube 9 | - makedirs: True 10 | var-run-kubelet-dir: 11 | file.directory: 12 | - name: /var/lib/kubelet 13 | - user: kube 14 | - group: kube 15 | - makedirs: True 16 | kubernetes-conf: 17 | file.managed: 18 | - name: /usr/lib/tmpfiles.d/kubernetes.conf 19 | - user: root 20 | - group: root 21 | - makedirs: True 22 | - source: salt://minion/kubernetes/cfg/kubernetes.conf 23 | - mode: 644 24 | kube-proxy-service: 25 | file.managed: 26 | - name: /usr/lib/systemd/system/kube-proxy.service 27 | - user: root 28 | - group: root 29 | - makedirs: True 30 | - source: salt://minion/kubernetes/cfg/kube-proxy.service 31 | - mode: 644 32 | kubelet-service: 33 | file.managed: 34 | - name: /usr/lib/systemd/system/kubelet.service 35 | - user: root 36 | - group: root 37 | - makedirs: True 38 | - source: salt://minion/kubernetes/cfg/kubelet.service 39 | - mode: 644 40 | kubernetes-accounting-conf: 41 | file.managed: 42 | - name: /etc/systemd/system.conf.d/kubernetes-accounting.conf 43 | - user: root 44 | - group: root 45 | - makedirs: True 46 | - source: salt://minion/kubernetes/cfg/kubernetes-accounting.conf 47 | - mode: 644 48 | kube-config: 49 | file.managed: 50 | - name: /etc/kubernetes/config 51 | - user: root 52 | - group: root 53 | - makedirs: True 54 | - source: salt://minion/kubernetes/cfg/config 55 | - mode: 644 56 | - template: jinja 57 | 58 | kubelet-config: 59 | file.managed: 60 | - name: /etc/kubernetes/kubelet 61 | - user: root 62 | - group: root 63 | - makedirs: True 64 | - source: salt://minion/kubernetes/cfg/kubelet 65 | - mode: 644 66 | - template: jinja 67 | 68 | kube-proxy-config: 69 | file.managed: 70 | - name: /etc/kubernetes/proxy 71 | - user: root 72 | - group: root 73 | - makedirs: True 74 | - source: salt://minion/kubernetes/cfg/proxy 75 | - mode: 644 76 | - template: jinja 77 | 78 | kubectl-config: 79 | file.managed: 80 | - name: /root/.kube/config 81 | - user: root 82 | - group: root 83 | - makedirs: True 84 | - source: salt://minion/kubernetes/cfg/kubectl-config 85 | - mode: 644 86 | - template: jinja 87 | - makedirs: True 88 | 89 | kubectl-config-vagrant: 90 | file.managed: 91 | - name: /home/vagrant/.kube/config 92 | - user: vagrant 93 | - group: vagrant 94 | - makedirs: True 95 | - source: salt://minion/kubernetes/cfg/kubectl-config 96 | - mode: 644 97 | - template: jinja 98 | - makedirs: True 99 | 100 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/kubernetes/users/init.sls: -------------------------------------------------------------------------------- 1 | kube-group: 2 | group.present: 3 | - system: True 4 | kube-user: 5 | user.present: 6 | - name: kube 7 | - fullname: Kubernetes user 8 | - shell: /sbin/nologin 9 | - createhome: False 10 | - gid_from_name: True 11 | 12 | -------------------------------------------------------------------------------- 
/vagrant/salt/salt/minion/post-boot-scripts/configure.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | {% set master_ip = salt['grains.get']('master_ip') %} 3 | 4 | ### add route command for OSX ### 5 | echo -e "sudo route -n delete {{ pillar['service_cluster_cidr'] }}" > /vagrant/add-route-osX.sh 6 | echo -e "sudo route -n add {{ pillar['service_cluster_cidr'] }} {{ master_ip }}" >> /vagrant/add-route-osX.sh 7 | chmod +x /vagrant/add-route-osX.sh 8 | 9 | ### add route command for LINUX ### 10 | echo -e "sudo route del -net {{ pillar['service_cluster_cidr'] }}" > /vagrant/add-route-LIN.sh 11 | echo -e "sudo route add -net {{ pillar['service_cluster_cidr'] }} gw {{ master_ip }}" >> /vagrant/add-route-LIN.sh 12 | chmod +x /vagrant/add-route-LIN.sh 13 | 14 | ### add route command for WINDOWS ### 15 | echo -e "route delete {{ pillar['service_cluster_cidr'] }} mask 255.255.0.0 & route add {{ pillar['service_cluster_cidr'] }} mask 255.255.0.0 {{ master_ip }}" > /vagrant/add-route-WIN.bat 16 | echo -e "MASTER IP: {{ master_ip }}" 17 | 18 | -------------------------------------------------------------------------------- /vagrant/salt/salt/minion/post-boot-scripts/copy-master-ca.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | {% set master_ip = salt['grains.get']('master_ip') %} 4 | 5 | HOST_IP=$(ip -4 -o addr show dev eth1| awk '{split($4,a,"/");print a[1]}') 6 | 7 | expect -c " 8 | set timeout 1 9 | spawn scp vagrant@{{ master_ip }}:/var/lib/kubernetes/ca.pem /var/lib/kubernetes/ 10 | expect yes/no { send yes\r ; exp_continue } 11 | expect password: { send vagrant\r } 12 | expect 100% 13 | sleep 1 14 | exit 15 | " 16 | 17 | 18 | 19 | expect -c " 20 | set timeout 1 21 | spawn scp vagrant@{{ master_ip }}:/var/lib/kubelet/kubeconfig /var/lib/kubelet/ 22 | expect yes/no { send yes\r ; exp_continue } 23 | expect password: { send vagrant\r } 24 | expect 100% 25 | sleep 1 26 | exit 27 | " 28 | expect -c " 29 | set timeout 1 30 | spawn ssh vagrant@{{ master_ip }} sudo /usr/sbin/gen-minion-cert.sh $HOST_IP 31 | expect yes/no { send yes\r ; exp_continue } 32 | expect password: { send vagrant\r } 33 | expect 100% 34 | sleep 1 35 | exit 36 | " 37 | 38 | expect -c " 39 | set timeout 1 40 | spawn scp -r vagrant@{{ master_ip }}:/opt/certs/$HOST_IP/* /var/lib/kubernetes/ 41 | expect yes/no { send yes\r ; exp_continue } 42 | expect password: { send vagrant\r } 43 | expect 100% 44 | sleep 1 45 | exit 46 | " 47 | 48 | 49 | chmod +r /var/run/kubernetes/ -R 50 | chmod +r /var/lib/kubelet/ -R 51 | 52 | -------------------------------------------------------------------------------- /vagrant/salt/salt/nfs/cfg/exports: -------------------------------------------------------------------------------- 1 | /opt/docker-registry *(rw,sync,no_root_squash,insecure) 2 | 3 | -------------------------------------------------------------------------------- /vagrant/salt/salt/nfs/init.sls: -------------------------------------------------------------------------------- 1 | 2 | rpcbind-running: 3 | service.running: 4 | - name: rpcbind 5 | 6 | nfs-running: 7 | service.running: 8 | - name: nfs 9 | - watch: 10 | - file: /etc/exports 11 | - require: 12 | - file: /etc/exports 13 | - file: /opt/docker-registry 14 | 15 | nfs-config-exports: 16 | file.managed: 17 | - name: /etc/exports 18 | - user: root 19 | - group: root 20 | - source: salt://nfs/cfg/exports 21 | - mode: 644 22 | - template: jinja 23 | 24 | 
/opt/docker-registry: 25 | file.directory: 26 | - user: root 27 | - group: root 28 | - mode: 777 29 | - makedirs: True 30 | 31 | -------------------------------------------------------------------------------- /vagrant/salt/salt/ntpd/init.sls: -------------------------------------------------------------------------------- 1 | ntp-running: 2 | service.running: 3 | - name: ntpd 4 | 5 | adjust-timezone: 6 | cmd.run: 7 | - name: cp -rf /usr/share/zoneinfo/Europe/Dublin /etc/localtime 8 | - unless: cmp /usr/share/zoneinfo/Europe/Dublin /etc/localtime 9 | 10 | -------------------------------------------------------------------------------- /vagrant/salt/salt/top.sls: -------------------------------------------------------------------------------- 1 | base: 2 | 3 | 'roles:kube-minion': 4 | - match: grain 5 | - minion 6 | - etcd 7 | - flannel 8 | - docker 9 | - ntpd 10 | 11 | 'roles:kube-master': 12 | - match: grain 13 | - master 14 | - etcd 15 | - flannel 16 | - docker 17 | - ntpd 18 | - nfs 19 | 20 | --------------------------------------------------------------------------------
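For reference: the top file above dispatches states by the `roles` grain, which is set when each box is provisioned. A quick way to inspect what a given VM will apply (a sketch, assuming the masterless salt-call setup the Vagrant Salt provisioner normally uses):

    ## inside the VM, e.g. after `vagrant ssh`
    sudo salt-call --local grains.get roles              # kube-master or kube-minion
    sudo salt-call --local state.highstate test=True     # dry-run the matched states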