├── .gitignore ├── Dockerfile ├── LICENSE ├── README.md ├── playbook ├── addons.yml ├── ansible.cfg ├── cloudstack.ini ├── cluster-bootstrap.yml ├── cluster-upgrade.yml ├── group_vars │ └── all ├── roles │ ├── addons │ │ ├── tasks │ │ │ └── main.yml │ │ └── templates │ │ │ ├── heapster.j2 │ │ │ ├── kube_dns.j2 │ │ │ ├── nginx_ingress.j2 │ │ │ └── registry.j2 │ ├── certificates │ │ ├── tasks │ │ │ ├── _generate_cas.yml │ │ │ ├── _generate_certificates.yml │ │ │ ├── _setup.yml │ │ │ └── main.yml │ │ └── templates │ │ │ ├── ca_config.j2 │ │ │ ├── ca_csr.j2 │ │ │ ├── ecdsa_csr.j2 │ │ │ └── rsa_csr.j2 │ ├── common │ │ ├── tasks │ │ │ ├── cloudstack_ini.yml │ │ │ ├── create_secgroup_rules.yml │ │ │ ├── create_secgroups.yml │ │ │ ├── create_sshkey.yml │ │ │ └── main.yml │ │ └── templates │ │ │ └── cloudstack_ini.j2 │ ├── defunctzombie.coreos-bootstrap │ │ ├── .editorconfig │ │ ├── .travis.yml │ │ ├── LICENSE │ │ ├── README.md │ │ ├── files │ │ │ ├── bootstrap.sh │ │ │ ├── get-pip.py │ │ │ └── runner │ │ ├── meta │ │ │ ├── .galaxy_install_info │ │ │ └── main.yml │ │ ├── tasks │ │ │ └── main.yml │ │ └── tests │ │ │ └── test.yml │ ├── infra-inventory │ │ ├── tasks │ │ │ └── main.yml │ │ └── templates │ │ │ └── inventory.j2 │ ├── infra-master │ │ ├── tasks │ │ │ ├── create_nodes.yml │ │ │ ├── main.yml │ │ │ └── register_eip.yml │ │ └── templates │ │ │ └── master_user_data.j2 │ ├── infra-worker │ │ ├── tasks │ │ │ ├── create_nodes.yml │ │ │ └── main.yml │ │ └── templates │ │ │ └── worker_user_data.j2 │ ├── kubectl │ │ └── tasks │ │ │ └── main.yml │ ├── kubernetes-master │ │ ├── handlers │ │ │ └── main.yml │ │ ├── tasks │ │ │ ├── _certificates.yml │ │ │ ├── _master_components.yml │ │ │ └── main.yml │ │ └── templates │ │ │ ├── eip.network.j2 │ │ │ ├── exoip.j2 │ │ │ ├── kube_apiserver.j2 │ │ │ ├── kube_config.j2 │ │ │ ├── kube_controller_manager.j2 │ │ │ ├── kube_etcd.j2 │ │ │ ├── kube_proxy.j2 │ │ │ ├── kube_scheduler.j2 │ │ │ └── kubelet.service.j2 │ └── kubernetes-worker │ │ ├── handlers │ │ └── main.yml │ │ ├── tasks │ │ ├── _certificates.yml │ │ ├── _kubernetes_components.yml │ │ └── main.yml │ │ └── templates │ │ ├── haproxy.cfg.j2 │ │ ├── haproxy.j2 │ │ ├── kube_config.j2 │ │ ├── kube_proxy.j2 │ │ └── kubelet.service.j2 └── worker-add.yml └── requirements.txt /.gitignore: -------------------------------------------------------------------------------- 1 | *.retry 2 | -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:2.7 2 | 3 | ADD https://storage.googleapis.com/kubernetes-release/release/v1.6.4/bin/linux/amd64/kubectl /usr/local/bin/kubectl 4 | RUN chmod +x /usr/local/bin/kubectl 5 | ENV KUBECONFIG /secret/kubeconfig 6 | 7 | ADD playbook /playbook 8 | 9 | VOLUME /secret 10 | 11 | ADD requirements.txt requirements.txt 12 | RUN pip install -r requirements.txt 13 | 14 | RUN apt-get update && apt-get install -y vim bash-completion && apt-get clean 15 | RUN echo 'source <(kubectl completion bash)\n \ 16 | [[ $PS1 && -f /usr/share/bash-completion/bash_completion ]]\n \ 17 | . 
/usr/share/bash-completion/bash_completion\n' >> ~/.bashrc 18 | 19 | WORKDIR /playbook 20 | CMD bash 21 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2018 Exoscale / Akenes SA 2 | 3 | Permission to use, copy, modify, and distribute this software for any 4 | purpose with or without fee is hereby granted, provided that the above 5 | notice and this permission notice appear in all copies. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 8 | WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 9 | MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 10 | ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 11 | WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 12 | ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 13 | OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 14 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # DEPRECATED 2 | 3 | Please look at other deployment options for production-ready Kubernetes 4 | clusters on Exoscale: 5 | 6 | * [Cluster API](https://cluster-api.sigs.k8s.io/reference/providers.html) 7 | * Native managed Kubernetes from Exoscale 8 | * Rancher and other control panels 9 | 10 | # Multi-master Kubernetes 11 | 12 | This Ansible playbook helps you set up a multi-master 13 | [Kubernetes](http://kubernetes.io/) cluster on 14 | [Exoscale](https://www.exoscale.com/). 15 | 16 | ## Getting started 17 | 18 | To run this playbook, you need a working Docker installation and a basic 19 | understanding of containers and volumes. You also need an Exoscale account and 20 | the corresponding API key and secret. 21 | 22 | You can get your key and secret here: 23 | https://portal.exoscale.com/account/profile/api 24 | 25 | Let's bootstrap a cluster. 26 | 27 | ``` 28 | # Run the container and mount a data volume for the cluster-specific secrets 29 | docker run -ti -v k8s_secrets:/secret exoscale/multi-master-kubernetes 30 | 31 | # Set the EXO_API_KEY and EXO_API_SECRET environment variables 32 | export EXO_API_KEY= 33 | export EXO_API_SECRET= 34 | 35 | # Then run the cluster-bootstrap playbook 36 | ansible-playbook cluster-bootstrap.yml 37 | ``` 38 | 39 | > Tip: The cluster-bootstrap playbook is safe to re-run at any time 40 | > to make sure your cluster is configured correctly. 41 | 42 | Bootstrapping the cluster takes a few minutes. When the playbook finishes, 43 | you can watch the cluster nodes come up using: 44 | 45 | ``` 46 | kubectl get nodes -w 47 | ``` 48 | 49 | > Note: kubectl is set up automatically inside the container. To use it outside 50 | > the container as well, simply get the `kubeconfig` file from that data volume 51 | > and copy it into `~/.kube/config`. 52 | 53 | ## Add more worker nodes 54 | 55 | If you want to add more workers, simply run the worker-add playbook and 56 | specify the desired number of worker nodes. The default cluster has 3 worker 57 | nodes; the command below adds 2 more for a total of 5. 58 | 59 | ``` 60 | ansible-playbook -e desired_num_worker_nodes=5 worker-add.yml 61 | ``` 62 | 63 | ## Update Kubernetes 64 | 65 | The cluster-upgrade playbook updates Kubernetes on each of the nodes one by 66 | one and restarts services as required.
The upgrade does cause 67 | a brief apiserver outage while the etcd members restart. Member restarts 68 | take a couple of retries before they succeed because their ports are still 69 | in use. 70 | 71 | ``` 72 | ansible-playbook cluster-upgrade.yml 73 | ``` 74 | 75 | ## Architecture 76 | 77 | The initial cluster consists of 3 master nodes and 3 worker nodes. Master nodes 78 | are pets, worker nodes are cattle. All nodes run CoreOS. 79 | 80 | __Master nodes run:__ 81 | 82 | * infra-etcd2: Etcd2 cluster used for Flanneld overlay networking and 83 | Locksmithd 84 | * flanneld: for the container overlay network 85 | * locksmithd: to orchestrate automatic updates 86 | * dockerd 87 | * kubelet 88 | * kubernetes-etcd2: Etcd2 cluster used for Kubernetes 89 | * kube-apiserver 90 | * kube-scheduler 91 | * kube-controller-manager 92 | * kube-proxy 93 | 94 | __Worker nodes run:__ 95 | 96 | * flanneld: for the container overlay network 97 | * locksmithd: to orchestrate automatic updates 98 | * dockerd 99 | * kubelet 100 | * kube-proxy 101 | * haproxy 102 | * and your containers, of course 103 | 104 | Flanneld, Locksmithd, Docker, infra-etcd2 and the kubelet are started by 105 | systemd. All other components, most notably kubernetes-etcd2 and the kube-* 106 | components, are started by the kubelet. 107 | 108 | CoreOS is configured to do automatic updates. Locksmith is configured to make 109 | sure only one of the six cluster nodes reboots at a time. It is also 110 | configured to enforce a daily maintenance window for master nodes between 4 and 111 | 5 am and for worker nodes between 5 and 6 am. Automatic updates cover only 112 | the OS components that are part of CoreOS. 113 | 114 | ## Ingress 115 | 116 | Cluster bootstrap includes the nginx-ingress-controller to make services 117 | available externally using ingress resources. 118 | 119 | HAProxy on each worker node listens on `0.0.0.0:80` and `0.0.0.0:443` and 120 | forwards TCP traffic to the ingress controller service. 121 | 122 | Simply set up a wildcard DNS entry that points to the IPs of your worker nodes. 123 | 124 | [Kube-lego](https://github.com/jetstack/kube-lego) is supported by the 125 | nginx-ingress-controller but is not automatically installed. 126 | 127 | ## Security 128 | 129 | Master and worker nodes each have their own security groups and only open 130 | the required ports between nodes within the same group or between nodes of the 131 | other group, respectively. 132 | 133 | All nodes allow external SSH access. (Required for Ansible unless you use a 134 | bastion host.) 135 | 136 | On top of the firewall rules enforced by the security groups, all components 137 | are configured to communicate via TLS using certificates. 138 | 139 | The required certificate authorities and certificates are generated 140 | automatically using cfssl.
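For a quick sanity check, you can inspect one of the generated certificates from inside the container, for example the kubectl client certificate. This is a minimal sketch; the path assumes the default `/secret` data volume mounted in the Getting started step:

```
# Print the kubectl client certificate generated by the certificates role
openssl x509 -in /secret/ssl/kubernetes/kubectl.pem -noout -text
```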
141 | -------------------------------------------------------------------------------- /playbook/addons.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | connection: local 4 | roles: 5 | - kubectl 6 | - addons 7 | -------------------------------------------------------------------------------- /playbook/ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | hostfile = /secret/inventory 3 | host_key_checking = False 4 | timeout = 60 5 | -------------------------------------------------------------------------------- /playbook/cloudstack.ini: -------------------------------------------------------------------------------- 1 | ../secret/cloudstack.ini -------------------------------------------------------------------------------- /playbook/cluster-bootstrap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Setup ssh-keys, security_groups, anti_affinity_groups and start VMs 3 | - hosts: localhost 4 | connection: local 5 | vars: 6 | - num_worker_nodes: "{{ initial_num_worker_nodes }}" 7 | - desired_num_worker_nodes: "{{ initial_num_worker_nodes }}" 8 | roles: 9 | - common 10 | - infra-master 11 | - infra-worker 12 | - infra-inventory 13 | tasks: 14 | - wait_for: 15 | port: 22 16 | host: "{{ hostvars[item]['ansible_host'] }}" 17 | search_regex: OpenSSH 18 | delay: 10 19 | with_items: "{{ groups['all'] }}" 20 | 21 | # Bootstrap ansible for CoreOS 22 | - hosts: master_nodes, worker_nodes 23 | gather_facts: False 24 | roles: 25 | - defunctzombie.coreos-bootstrap 26 | 27 | # Gather facts on all nodes 28 | - hosts: all 29 | tasks: [] 30 | 31 | # Generate certificates 32 | - hosts: localhost 33 | connection: local 34 | roles: 35 | - certificates 36 | 37 | # Deploy kubernetes components for master nodes 38 | - hosts: master_nodes 39 | become: true 40 | roles: 41 | - kubernetes-master 42 | 43 | # Deploy kubernetes components for worker nodes 44 | - hosts: worker_nodes 45 | become: true 46 | roles: 47 | - kubernetes-worker 48 | 49 | # Setup kubectl and deploy addons 50 | - hosts: localhost 51 | connection: local 52 | roles: 53 | - kubectl 54 | - addons 55 | -------------------------------------------------------------------------------- /playbook/cluster-upgrade.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Upgrade security groups if necessary 3 | - hosts: localhost 4 | connection: local 5 | roles: 6 | - common 7 | 8 | # Cleanup old rkt images 9 | - hosts: all 10 | tasks: 11 | - name: garbage collect rkt images unused for more than 24h 12 | shell: "rkt image gc" 13 | become: true 14 | 15 | # Prefetch new images to minimize service restart time 16 | - hosts: master_nodes 17 | tasks: 18 | - name: prefetch new etcd version 19 | shell: "docker pull gcr.io/google_containers/etcd:{{ k8s_etcd_version }}" 20 | 21 | - hosts: all 22 | tasks: 23 | - name: prefetch new rkt hyperkube version 24 | shell: "rkt fetch quay.io/coreos/hyperkube:{{ k8s_version }}" 25 | - name: prefetch new docker hyperkube version 26 | shell: "docker pull quay.io/coreos/hyperkube:{{ k8s_version }}" 27 | 28 | # Make sure httplib2 is installed (required for api version check below) 29 | - hosts: all 30 | tasks: 31 | - name: install httplib2 with pip 32 | shell: | 33 | PATH="/home/core/bin:$PATH" 34 | pip install httplib2 35 | 36 | # Upgrade kubernetes on master_nodes 37 | - hosts: master_nodes 38 | vars: 39 | - 
k8s_git_version: "{{ k8s_version | regex_replace('_', '+') }}" 40 | become: true 41 | roles: 42 | - kubernetes-master 43 | tasks: 44 | - debug: msg="Waiting for api to report correct version" 45 | - uri: 46 | url: http://127.0.0.1:8080/version 47 | return_content: yes 48 | register: version 49 | until: "'json' in version and version.json['gitVersion'] == '{{ k8s_git_version }}'" 50 | retries: 10 51 | delay: 10 52 | serial: 1 53 | 54 | # Upgrade kubernetes on worker_nodes 55 | - hosts: worker_nodes 56 | vars: 57 | - k8s_git_version: "{{ k8s_version | regex_replace('_', '+') }}" 58 | become: true 59 | roles: 60 | - kubernetes-worker 61 | serial: 1 62 | 63 | # Check all versions 64 | - hosts: localhost 65 | connection: local 66 | tasks: 67 | - name: "check etcd versions" 68 | shell: "kubectl --kubeconfig=/secret/kubeconfig --namespace=kube-system get pod -l k8s-app=etcd -o jsonpath={$..containers..image}" 69 | register: result 70 | until: result.stdout.find("{{ k8s_etcd_version }}") != -1 71 | retries: 5 72 | delay: 10 73 | - name: "check all kube-* versions" 74 | shell: "kubectl --kubeconfig=/secret/kubeconfig --namespace=kube-system get pod -l k8s-app={{ item }} -o jsonpath={$..containers..image}" 75 | register: result 76 | until: result.stdout.find("{{ k8s_version }}") != -1 77 | retries: 5 78 | delay: 10 79 | with_items: 80 | - kube-apiserver 81 | - kube-scheduler 82 | - kube-controller-manager 83 | - kube-proxy 84 | 85 | # Setup kubectl and deploy addons 86 | - hosts: localhost 87 | connection: local 88 | roles: 89 | - kubectl 90 | - addons 91 | -------------------------------------------------------------------------------- /playbook/group_vars/all: -------------------------------------------------------------------------------- 1 | # common variables 2 | cluster_name: k8s 3 | cluster_zone: CH-GVA-2 4 | ssh_key: "{{ cluster_name }}-key" 5 | anti_affinity_group_name: "{{ cluster_name }}-aag" 6 | 7 | # master node variables 8 | master_security_group_name: "{{ cluster_name }}-master-sg" 9 | master_instance_size: Tiny 10 | 11 | # worker node variables 12 | initial_num_worker_nodes: 3 13 | worker_security_group_name: "{{ cluster_name }}-worker-sg" 14 | worker_instance_size: Small 15 | 16 | # exoip variables 17 | exoip_version: "latest" 18 | 19 | # kubernetes variables 20 | k8s_version: "v1.6.4_coreos.0" 21 | k8s_etcd_version: "2.2.1" 22 | cluster_dns: "10.100.0.10" 23 | service_cluster_ip_range: "10.100.0.0/16" 24 | 25 | # certificate variables 26 | cfssl_version: "R1.2" 27 | 28 | # ingress variables 29 | haproxy_version: 1.6 30 | 31 | haproxy_timeout_connect: 10s 32 | haproxy_timeout_client: 30s 33 | haproxy_timeout_client_fin: 30s 34 | haproxy_timeout_server: 30s 35 | haproxy_timeout_tunnel: 3600s 36 | 37 | nginx_proxy_connect_timeout: 10 38 | nginx_proxy_read_timeout: 30 39 | nginx_proxy_send_timeout: 30 40 | 41 | # registry variables 42 | region: "{{ cluster_zone }}" 43 | region_endpoint: "sos.exo.io" 44 | -------------------------------------------------------------------------------- /playbook/roles/addons/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - debug: 2 | msg: Wait up to 30 minutes for our cluster to initialize and come up before we can deploy addons. 
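# The wait_for task below polls port 443 on the first master's elastic IP for
# up to 30 minutes until the apiserver answers; the mktemp task then provides a
# scratch file that each addon template is rendered into before being applied
# with kubectl.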
3 | 4 | - wait_for: 5 | port: 443 6 | host: "{{ hostvars[groups['master_nodes'][0]]['master_eip'] }}" 7 | timeout: 1800 8 | 9 | - name: tempfile 10 | shell: mktemp 11 | register: tempfile 12 | 13 | # DNS 14 | - template: 15 | src: templates/kube_dns.j2 16 | dest: "{{ tempfile.stdout }}" 17 | 18 | - name: kubectl apply kube-dns 19 | shell: kubectl --kubeconfig=/secret/kubeconfig apply -f "{{ tempfile.stdout }}" 20 | 21 | # Heapster 22 | - template: 23 | src: templates/heapster.j2 24 | dest: "{{ tempfile.stdout }}" 25 | 26 | - name: kubectl apply heapster 27 | shell: kubectl --kubeconfig=/secret/kubeconfig apply -f "{{ tempfile.stdout }}" 28 | 29 | # Nginx Ingress 30 | - template: 31 | src: templates/nginx_ingress.j2 32 | dest: "{{ tempfile.stdout }}" 33 | 34 | - name: kubectl apply nginx-ingress 35 | shell: kubectl --kubeconfig=/secret/kubeconfig apply -f "{{ tempfile.stdout }}" 36 | 37 | # Registry 38 | - set_fact: EXO_API_KEY="{{ lookup('ini', 'key section=cloudstack file=../secret/cloudstack.ini') }}" 39 | - set_fact: EXO_API_SECRET="{{ lookup('ini', 'secret section=cloudstack file=../secret/cloudstack.ini') }}" 40 | - set_fact: EXO_SOS_ENDPOINT="{{ lookup('env', 'EXO_SOS_ENDPOINT') | default(region_endpoint, true) }}" 41 | - set_fact: bucket_name="{{ cluster_name }}-registry-{{ (cluster_name + EXO_API_KEY) | md5 | reverse | truncate(7, True, '') }}" 42 | - set_fact: registry_secret="{{ lookup('password', '../secret/registry_secret length=64') }}" 43 | 44 | - s3_bucket: 45 | access_key: "{{ EXO_API_KEY }}" 46 | secret_key: "{{ EXO_API_SECRET }}" 47 | name: "{{ bucket_name }}" 48 | region: "{{ region }}" 49 | s3_url: "https://{{ EXO_SOS_ENDPOINT }}" 50 | ceph: true 51 | 52 | - template: 53 | src: templates/registry.j2 54 | dest: "{{ tempfile.stdout }}" 55 | 56 | - name: kubectl apply registry 57 | shell: kubectl --kubeconfig=/secret/kubeconfig apply -f "{{ tempfile.stdout }}" 58 | -------------------------------------------------------------------------------- /playbook/roles/addons/templates/heapster.j2: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: heapster 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "Heapster" 9 | spec: 10 | ports: 11 | - port: 80 12 | targetPort: 8082 13 | selector: 14 | k8s-app: heapster 15 | --- 16 | {% set base_metrics_memory = "140Mi" -%} 17 | {% set metrics_memory = base_metrics_memory -%} 18 | {% set metrics_memory_per_node = 4 -%} 19 | {% set base_metrics_cpu = "80m" -%} 20 | {% set metrics_cpu = base_metrics_cpu -%} 21 | {% set metrics_cpu_per_node = 0.5 -%} 22 | {% set num_nodes = (groups['master_nodes'] | length) + (groups['worker_nodes'] | length) %} 23 | {% set nanny_memory = "90Mi" -%} 24 | {% set nanny_memory_per_node = 200 -%} 25 | {% if num_nodes >= 0 -%} 26 | {% set metrics_memory = (200 + num_nodes * metrics_memory_per_node)|string + "Mi" -%} 27 | {% set nanny_memory = (90 * 1024 + num_nodes * nanny_memory_per_node)|string + "Ki" -%} 28 | {% set metrics_cpu = (80 + num_nodes * metrics_cpu_per_node)|string + "m" -%} 29 | {% endif -%} 30 | 31 | apiVersion: extensions/v1beta1 32 | kind: Deployment 33 | metadata: 34 | name: heapster-v1.2.0 35 | namespace: kube-system 36 | labels: 37 | k8s-app: heapster 38 | kubernetes.io/cluster-service: "true" 39 | version: v1.2.0 40 | spec: 41 | replicas: 1 42 | selector: 43 | matchLabels: 44 | k8s-app: heapster 45 | version: v1.2.0 46 | template: 47 | metadata: 48 
| labels: 49 | k8s-app: heapster 50 | version: v1.2.0 51 | annotations: 52 | scheduler.alpha.kubernetes.io/critical-pod: '' 53 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 54 | spec: 55 | containers: 56 | - image: gcr.io/google_containers/heapster:v1.2.0 57 | name: heapster 58 | livenessProbe: 59 | httpGet: 60 | path: /healthz 61 | port: 8082 62 | scheme: HTTP 63 | initialDelaySeconds: 180 64 | timeoutSeconds: 5 65 | resources: 66 | # keep request = limit to keep this container in guaranteed class 67 | limits: 68 | cpu: {{ metrics_cpu }} 69 | memory: {{ metrics_memory }} 70 | requests: 71 | cpu: {{ metrics_cpu }} 72 | memory: {{ metrics_memory }} 73 | command: 74 | - /heapster 75 | - --source=kubernetes.summary_api:'' 76 | - image: gcr.io/google_containers/addon-resizer:1.6 77 | name: heapster-nanny 78 | resources: 79 | limits: 80 | cpu: 50m 81 | memory: {{ nanny_memory }} 82 | requests: 83 | cpu: 50m 84 | memory: {{ nanny_memory }} 85 | env: 86 | - name: MY_POD_NAME 87 | valueFrom: 88 | fieldRef: 89 | fieldPath: metadata.name 90 | - name: MY_POD_NAMESPACE 91 | valueFrom: 92 | fieldRef: 93 | fieldPath: metadata.namespace 94 | command: 95 | - /pod_nanny 96 | - --cpu={{ base_metrics_cpu }} 97 | - --extra-cpu={{ metrics_cpu_per_node }}m 98 | - --memory={{ base_metrics_memory }} 99 | - --extra-memory={{ metrics_memory_per_node }}Mi 100 | - --threshold=5 101 | - --deployment=heapster-v1.2.0 102 | - --container=heapster 103 | - --poll-period=300000 104 | - --estimator=exponential 105 | -------------------------------------------------------------------------------- /playbook/roles/addons/templates/kube_dns.j2: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: kube-dns 5 | namespace: kube-system 6 | labels: 7 | k8s-app: kube-dns 8 | kubernetes.io/cluster-service: "true" 9 | kubernetes.io/name: "KubeDNS" 10 | spec: 11 | selector: 12 | k8s-app: kube-dns 13 | clusterIP: {{ cluster_dns }} 14 | ports: 15 | - name: dns 16 | port: 53 17 | protocol: UDP 18 | - name: dns-tcp 19 | port: 53 20 | protocol: TCP 21 | --- 22 | apiVersion: v1 23 | kind: ReplicationController 24 | metadata: 25 | name: kube-dns-v20 26 | namespace: kube-system 27 | labels: 28 | k8s-app: kube-dns 29 | version: v20 30 | kubernetes.io/cluster-service: "true" 31 | spec: 32 | replicas: 1 33 | selector: 34 | k8s-app: kube-dns 35 | version: v20 36 | template: 37 | metadata: 38 | labels: 39 | k8s-app: kube-dns 40 | version: v20 41 | kubernetes.io/cluster-service: "true" 42 | spec: 43 | containers: 44 | - name: kubedns 45 | image: gcr.io/google_containers/kubedns-amd64:1.9 46 | resources: 47 | limits: 48 | memory: 200Mi 49 | requests: 50 | cpu: 100m 51 | memory: 100Mi 52 | livenessProbe: 53 | httpGet: 54 | path: /healthz 55 | port: 8080 56 | scheme: HTTP 57 | initialDelaySeconds: 60 58 | timeoutSeconds: 5 59 | successThreshold: 1 60 | failureThreshold: 5 61 | readinessProbe: 62 | httpGet: 63 | path: /readiness 64 | port: 8081 65 | scheme: HTTP 66 | # we poll on pod startup for the Kubernetes master service and 67 | # only setup the /readiness HTTP server once that's available. 68 | initialDelaySeconds: 3 69 | timeoutSeconds: 5 70 | args: 71 | # command = "/kube-dns" 72 | - --domain=cluster.local. 
73 | - --dns-port=10053 74 | - --config-map=kube-dns 75 | - --v=0 76 | env: 77 | - name: PROMETHEUS_PORT 78 | value: "10055" 79 | ports: 80 | - containerPort: 10053 81 | name: dns-local 82 | protocol: UDP 83 | - containerPort: 10053 84 | name: dns-tcp-local 85 | protocol: TCP 86 | - containerPort: 10055 87 | name: metrics 88 | protocol: TCP 89 | - name: dnsmasq 90 | image: gcr.io/google_containers/kube-dnsmasq-amd64:1.4 91 | args: 92 | - --cache-size=1000 93 | - --no-resolv 94 | - --server=127.0.0.1#10053 95 | - --log-facility=- 96 | livenessProbe: 97 | httpGet: 98 | path: /healthz-dnsmasq 99 | port: 8080 100 | scheme: HTTP 101 | initialDelaySeconds: 60 102 | timeoutSeconds: 5 103 | successThreshold: 1 104 | failureThreshold: 5 105 | ports: 106 | - containerPort: 53 107 | name: dns 108 | protocol: UDP 109 | - containerPort: 53 110 | name: dns-tcp 111 | protocol: TCP 112 | - name: dnsmasq-metrics 113 | image: gcr.io/google_containers/dnsmasq-metrics-amd64:1.0 114 | livenessProbe: 115 | httpGet: 116 | path: /metrics 117 | port: 10054 118 | scheme: HTTP 119 | initialDelaySeconds: 60 120 | timeoutSeconds: 5 121 | successThreshold: 1 122 | failureThreshold: 5 123 | args: 124 | - --v=2 125 | - --logtostderr 126 | ports: 127 | - containerPort: 10054 128 | name: metrics 129 | protocol: TCP 130 | resources: 131 | requests: 132 | memory: 10Mi 133 | - name: healthz 134 | image: gcr.io/google_containers/exechealthz-amd64:1.2 135 | resources: 136 | limits: 137 | memory: 50Mi 138 | requests: 139 | cpu: 10m 140 | # Note that this container shouldn't really need 50Mi of memory. The 141 | # limits are set higher than expected pending investigation on #29688. 142 | # The extra memory was stolen from the kubedns container to keep the 143 | # net memory requested by the pod constant. 144 | memory: 50Mi 145 | args: 146 | - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null 147 | - --url=/healthz-dnsmasq 148 | - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null 149 | - --url=/healthz-kubedns 150 | - --port=8080 151 | - --quiet 152 | ports: 153 | - containerPort: 8080 154 | protocol: TCP 155 | dnsPolicy: Default # Don't use cluster DNS. 
156 | -------------------------------------------------------------------------------- /playbook/roles/addons/templates/nginx_ingress.j2: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | labels: 6 | k8s-app: default-http-backend 7 | name: default-http-backend 8 | namespace: kube-system 9 | spec: 10 | ports: 11 | - 12 | name: http 13 | port: 80 14 | protocol: TCP 15 | targetPort: 8080 16 | selector: 17 | k8s-app: default-http-backend 18 | --- 19 | apiVersion: extensions/v1beta1 20 | kind: Deployment 21 | metadata: 22 | name: default-http-backend 23 | namespace: kube-system 24 | spec: 25 | replicas: 1 26 | template: 27 | metadata: 28 | labels: 29 | k8s-app: default-http-backend 30 | name: default-http-backend 31 | namespace: kube-system 32 | spec: 33 | containers: 34 | - 35 | image: "gcr.io/google_containers/defaultbackend:1.0" 36 | livenessProbe: 37 | httpGet: 38 | path: /healthz 39 | port: 8080 40 | scheme: HTTP 41 | initialDelaySeconds: 30 42 | timeoutSeconds: 5 43 | name: default-http-backend 44 | ports: 45 | - 46 | containerPort: 8080 47 | resources: 48 | limits: 49 | cpu: 10m 50 | memory: 20Mi 51 | requests: 52 | cpu: 10m 53 | memory: 20Mi 54 | terminationGracePeriodSeconds: 60 55 | --- 56 | apiVersion: v1 57 | kind: Service 58 | metadata: 59 | name: nginx-ingress-lb 60 | namespace: kube-system 61 | spec: 62 | ports: 63 | - 64 | nodePort: 30080 65 | port: 80 66 | targetPort: 80 67 | name: http 68 | - 69 | nodePort: 30443 70 | port: 443 71 | targetPort: 443 72 | name: https 73 | selector: 74 | k8s-app: nginx-ingress-lb 75 | type: NodePort 76 | --- 77 | apiVersion: v1 78 | kind: ConfigMap 79 | metadata: 80 | name: nginx-ingress-controller 81 | namespace: kube-system 82 | data: 83 | proxy-connect-timeout: "{{ nginx_proxy_connect_timeout }}" 84 | proxy-read-timeout: "{{ nginx_proxy_read_timeout }}" 85 | proxy-send-timeout: "{{ nginx_proxy_send_timeout }}" 86 | --- 87 | apiVersion: extensions/v1beta1 88 | kind: Deployment 89 | metadata: 90 | labels: 91 | k8s-app: nginx-ingress-lb 92 | name: nginx-ingress-controller 93 | namespace: kube-system 94 | spec: 95 | replicas: 2 96 | template: 97 | metadata: 98 | labels: 99 | k8s-app: nginx-ingress-lb 100 | name: nginx-ingress-lb 101 | namespace: kube-system 102 | spec: 103 | containers: 104 | - 105 | args: 106 | - /nginx-ingress-controller 107 | - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend" 108 | - "--nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller" 109 | env: 110 | - 111 | name: POD_NAME 112 | valueFrom: 113 | fieldRef: 114 | fieldPath: metadata.name 115 | - 116 | name: POD_NAMESPACE 117 | valueFrom: 118 | fieldRef: 119 | fieldPath: metadata.namespace 120 | image: "gcr.io/google_containers/nginx-ingress-controller:0.8.3" 121 | imagePullPolicy: Always 122 | name: nginx-ingress-lb 123 | ports: 124 | - 125 | containerPort: 80 126 | - 127 | containerPort: 443 128 | - 129 | containerPort: 18080 130 | readinessProbe: 131 | httpGet: 132 | scheme: HTTP 133 | port: 18080 134 | path: /healthz 135 | livenessProbe: 136 | httpGet: 137 | scheme: HTTP 138 | port: 18080 139 | path: /healthz 140 | initialDelaySeconds: 10 141 | timeoutSeconds: 1 142 | terminationGracePeriodSeconds: 60 143 | -------------------------------------------------------------------------------- /playbook/roles/addons/templates/registry.j2: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 
4 | metadata: 5 | name: registry 6 | namespace: kube-system 7 | data: 8 | config.yaml: | 9 | --- 10 | version: 0.1 11 | health: 12 | storagedriver: 13 | enabled: true 14 | interval: 10s 15 | threshold: 3 16 | http: 17 | addr: "0.0.0.0:5000" 18 | debug: 19 | addr: "0.0.0.0:5001" 20 | headers: 21 | X-Content-Type-Options: 22 | - nosniff 23 | secret: "{{ registry_secret }}" 24 | log: 25 | fields: 26 | service: registry 27 | formatter: text 28 | level: info 29 | storage: 30 | delete: 31 | enabled: true 32 | maintenance: 33 | uploadpurging: 34 | age: 168h 35 | dryrun: true 36 | enabled: true 37 | interval: 24h 38 | s3: 39 | accesskey: "{{ EXO_API_KEY }}" 40 | secretkey: "{{ EXO_API_SECRET }}" 41 | region: "{{ region }}" 42 | regionendpoint: "{{ EXO_SOS_ENDPOINT }}" 43 | bucket: "{{ bucket_name }}" 44 | secure: true 45 | v4auth: false 46 | multipartcopythresholdsize: 5368709120 47 | --- 48 | apiVersion: extensions/v1beta1 49 | kind: Deployment 50 | metadata: 51 | name: registry 52 | namespace: kube-system 53 | spec: 54 | replicas: 1 55 | template: 56 | metadata: 57 | labels: 58 | k8s-app: registry 59 | spec: 60 | containers: 61 | - name: registry 62 | image: pst418/distribution:pr 63 | args: 64 | - serve 65 | - /etc/registry/config.yaml 66 | imagePullPolicy: IfNotPresent 67 | resources: 68 | limits: 69 | cpu: 100m 70 | memory: 100Mi 71 | livenessProbe: 72 | httpGet: 73 | path: / 74 | port: 5000 75 | scheme: HTTP 76 | readinessProbe: 77 | httpGet: 78 | path: /debug/health 79 | port: 5001 80 | scheme: HTTP 81 | ports: 82 | - containerPort: 5000 83 | name: http 84 | - containerPort: 5001 85 | name: health 86 | volumeMounts: 87 | - name: registry 88 | readOnly: true 89 | mountPath: /etc/registry 90 | volumes: 91 | - name: registry 92 | configMap: 93 | name: registry 94 | items: 95 | - key: config.yaml 96 | path: config.yaml 97 | --- 98 | apiVersion: v1 99 | kind: Service 100 | metadata: 101 | name: registry 102 | namespace: kube-system 103 | spec: 104 | type: NodePort 105 | ports: 106 | - port: 5000 107 | targetPort: 5000 108 | nodePort: 32500 109 | protocol: TCP 110 | selector: 111 | k8s-app: registry 112 | -------------------------------------------------------------------------------- /playbook/roles/certificates/tasks/_generate_cas.yml: -------------------------------------------------------------------------------- 1 | - name: create infra certificate authorities 2 | shell: echo '{{ lookup('template', '../templates/ca_csr.j2', convert_data=False) }}' | bin/cfssl gencert -initca - | bin/cfssljson -bare "{{ item }}" - 3 | args: 4 | chdir: ../secret/ssl 5 | creates: "{{ item }}.pem" 6 | with_items: 7 | - infra/etcd-ca 8 | 9 | - name: create Kubernetes certificate authorities 10 | shell: echo '{{ lookup('template', '../templates/ca_csr.j2', convert_data=False) }}' | bin/cfssl gencert -initca - | bin/cfssljson -bare "{{ item }}" - 11 | args: 12 | chdir: ../secret/ssl 13 | creates: "{{ item }}.pem" 14 | with_items: 15 | - kubernetes/etcd-ca 16 | - kubernetes/apiserver-ca 17 | 18 | - template: 19 | src: ../templates/ca_config.j2 20 | dest: ../secret/ssl/ca_config.json 21 | -------------------------------------------------------------------------------- /playbook/roles/certificates/tasks/_generate_certificates.yml: -------------------------------------------------------------------------------- 1 | - file: 2 | path: ../secret/ssl/kubernetes/{{ item }} 3 | state: directory 4 | mode: 0770 5 | with_items: "{{ groups['all'] }}" 6 | 7 | - name: create kubernetes apiserver certificates 8 | shell: echo '{{ 
lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/apiserver-ca.pem -ca-key=kubernetes/apiserver-ca-key.pem -config=ca_config.json -profile=server -hostname="127.0.0.1,10.100.0.1,{{ hostvars[item]['ansible_host'] }},{{ hostvars[item]['master_eip'] }}" - | bin/cfssljson -bare "kubernetes/{{ item }}/apiserver" 9 | args: 10 | chdir: ../secret/ssl 11 | creates: kubernetes/{{ item }}/apiserver.pem 12 | with_items: "{{ groups['master_nodes'] }}" 13 | 14 | - name: create kubernetes service account certificates 15 | shell: echo '{{ lookup('template', '../templates/rsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/apiserver-ca.pem -ca-key=kubernetes/apiserver-ca-key.pem -config=ca_config.json -profile=server -hostname="10.100.0.1" - | bin/cfssljson -bare "kubernetes/{{ item }}" 16 | args: 17 | chdir: ../secret/ssl 18 | creates: kubernetes/{{ item }}.pem 19 | with_items: 20 | - service-account 21 | 22 | - name: create kubernetes apiserver client certificates 23 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/apiserver-ca.pem -ca-key=kubernetes/apiserver-ca-key.pem -config=ca_config.json -profile=client - | bin/cfssljson -bare "kubernetes/{{ item }}/apiserver-client" 24 | args: 25 | chdir: ../secret/ssl 26 | creates: kubernetes/{{ item }}/apiserver-client.pem 27 | with_items: "{{ groups['all'] }}" 28 | 29 | - name: create kubectl client certificate 30 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/apiserver-ca.pem -ca-key=kubernetes/apiserver-ca-key.pem -config=ca_config.json -profile=client - | bin/cfssljson -bare "kubernetes/kubectl" 31 | args: 32 | chdir: ../secret/ssl 33 | creates: kubernetes/kubectl.pem 34 | with_items: 35 | - kubectl 36 | 37 | # kubernetes-etcd certificates 38 | - name: create kubernetes etcd server certificates 39 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/etcd-ca.pem -ca-key=kubernetes/etcd-ca-key.pem -config=ca_config.json -profile=client-server -hostname="127.0.0.1,{{ hostvars[item]['ansible_host'] }}" - | bin/cfssljson -bare "kubernetes/{{ item }}/etcd" 40 | args: 41 | chdir: ../secret/ssl 42 | creates: kubernetes/{{ item }}/etcd.pem 43 | with_items: "{{ groups['master_nodes'] }}" 44 | 45 | 46 | - name: create kubernetes etcd client certificates 47 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=kubernetes/etcd-ca.pem -ca-key=kubernetes/etcd-ca-key.pem -config=ca_config.json -profile=client - | bin/cfssljson -bare "kubernetes/{{ item }}/etcd-client" 48 | args: 49 | chdir: ../secret/ssl 50 | creates: kubernetes/{{ item }}/etcd-client.pem 51 | with_items: "{{ groups['master_nodes'] }}" 52 | 53 | # infra-etcd certificates 54 | - file: 55 | path: ../secret/ssl/infra/{{ item }} 56 | state: directory 57 | mode: 0770 58 | with_items: "{{ groups['all'] }}" 59 | 60 | - name: create infra etcd server certificates 61 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=infra/etcd-ca.pem -ca-key=infra/etcd-ca-key.pem -config=ca_config.json -profile=client-server -hostname="127.0.0.1,{{ hostvars[item]['ansible_host'] }}" - | bin/cfssljson -bare "infra/{{ item }}/etcd" 62 | args: 63 | chdir: ../secret/ssl 64 | creates: infra/{{ item }}/etcd.pem 65 | 
with_items: "{{ groups['master_nodes'] }}" 66 | 67 | - name: create infra etcd client certificates 68 | shell: echo '{{ lookup('template', '../templates/ecdsa_csr.j2', convert_data=False) }}' | bin/cfssl gencert -ca=infra/etcd-ca.pem -ca-key=infra/etcd-ca-key.pem -config=ca_config.json -profile=client - | bin/cfssljson -bare "infra/{{ item }}/etcd-client" 69 | args: 70 | chdir: ../secret/ssl 71 | creates: infra/{{ item }}/etcd-client.pem 72 | with_items: "{{ groups['all'] }}" 73 | -------------------------------------------------------------------------------- /playbook/roles/certificates/tasks/_setup.yml: -------------------------------------------------------------------------------- 1 | - stat: path=ssl 2 | register: ssl_move_required 3 | 4 | - copy: 5 | src: ssl 6 | dest: ../secret/ssl 7 | directory_mode: true 8 | when: ssl_move_required.stat.exists 9 | 10 | - file: 11 | path: ssl 12 | state: absent 13 | when: ssl_move_required.stat.exists 14 | 15 | - file: 16 | path: ../secret/ssl 17 | state: directory 18 | mode: 0770 19 | 20 | - file: 21 | path: ../secret/ssl/bin 22 | state: directory 23 | mode: 0770 24 | 25 | - name: download cfssl 26 | get_url: 27 | url: https://pkg.cfssl.org/{{ cfssl_version }}/cfssl_{{ ansible_system | lower }}-amd64 28 | dest: ../secret/ssl/bin/cfssl 29 | mode: 0755 30 | 31 | - name: download cfssljson 32 | get_url: 33 | url: https://pkg.cfssl.org/{{ cfssl_version }}/cfssljson_{{ ansible_system | lower }}-amd64 34 | dest: ../secret/ssl/bin/cfssljson 35 | mode: 0755 36 | 37 | - file: 38 | path: ../secret/ssl/infra 39 | state: directory 40 | mode: 0770 41 | 42 | - file: 43 | path: ../secret/ssl/kubernetes 44 | state: directory 45 | mode: 0770 46 | -------------------------------------------------------------------------------- /playbook/roles/certificates/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - include: _setup.yml 3 | - include: _generate_cas.yml 4 | - include: _generate_certificates.yml 5 | -------------------------------------------------------------------------------- /playbook/roles/certificates/templates/ca_config.j2: -------------------------------------------------------------------------------- 1 | { 2 | "signing": { 3 | "default": { 4 | "expiry": "43800h" 5 | }, 6 | "profiles": { 7 | "server": { 8 | "expiry": "43800h", 9 | "usages": [ 10 | "signing", 11 | "key encipherment", 12 | "server auth" 13 | ] 14 | }, 15 | "client": { 16 | "expiry": "43800h", 17 | "usages": [ 18 | "signing", 19 | "key encipherment", 20 | "client auth" 21 | ] 22 | }, 23 | "client-server": { 24 | "expiry": "43800h", 25 | "usages": [ 26 | "signing", 27 | "key encipherment", 28 | "server auth", 29 | "client auth" 30 | ] 31 | } 32 | } 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /playbook/roles/certificates/templates/ca_csr.j2: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "{{ item }}", 3 | "key": { 4 | "algo": "rsa", 5 | "size": 2048 6 | } 7 | } 8 | -------------------------------------------------------------------------------- /playbook/roles/certificates/templates/ecdsa_csr.j2: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "{{ item }}", 3 | "hosts": [], 4 | "key": { 5 | "algo": "ecdsa", 6 | "size": 256 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /playbook/roles/certificates/templates/rsa_csr.j2: 
-------------------------------------------------------------------------------- 1 | { 2 | "CN": "{{ item }}", 3 | "hosts": [], 4 | "key": { 5 | "algo": "rsa", 6 | "size": 2048 7 | } 8 | } 9 | -------------------------------------------------------------------------------- /playbook/roles/common/tasks/cloudstack_ini.yml: -------------------------------------------------------------------------------- 1 | - stat: 2 | path: /secret/cloudstack.ini 3 | register: csini 4 | 5 | - set_fact: 6 | EXO_API_KEY: "{{ lookup('ini', 'key section=cloudstack file=/secret/cloudstack.ini') }}" 7 | when: "csini.stat.exists and csini.stat.isreg" 8 | 9 | - set_fact: 10 | EXO_API_SECRET: "{{ lookup('ini', 'secret section=cloudstack file=/secret/cloudstack.ini') }}" 11 | when: "csini.stat.exists and csini.stat.isreg" 12 | 13 | - fail: 14 | msg: "Please set the EXO_API_KEY and EXO_API_SECRET environment variables. You can find these on your account page. https://portal.exoscale.com/account/profile/api" 15 | when: EXO_API_KEY is not defined and EXO_API_SECRET is not defined and not lookup('env', 'EXO_API_KEY') and not lookup('env', 'EXO_API_SECRET') 16 | 17 | - set_fact: 18 | EXO_API_KEY: "{{ lookup('env', 'EXO_API_KEY') | default(EXO_API_KEY, true) }}" 19 | 20 | - set_fact: 21 | EXO_API_SECRET: "{{ lookup('env', 'EXO_API_SECRET') | default(EXO_API_SECRET, true) }}" 22 | 23 | - set_fact: 24 | EXO_API_ENDPOINT: "{{ lookup('ini', 'endpoint section=cloudstack file=/secret/cloudstack.ini') }}" 25 | when: "csini.stat.exists and csini.stat.isreg" 26 | 27 | - set_fact: 28 | EXO_API_ENDPOINT: 'https://api.exoscale.ch/compute' 29 | when: EXO_API_ENDPOINT is not defined 30 | 31 | - set_fact: 32 | EXO_API_ENDPOINT: "{{ lookup('env', 'EXO_API_ENDPOINT') | default(EXO_API_ENDPOINT, true) }}" 33 | 34 | - name: write cloudstack.ini 35 | template: 36 | src: cloudstack_ini.j2 37 | dest: /secret/cloudstack.ini 38 | -------------------------------------------------------------------------------- /playbook/roles/common/tasks/create_secgroup_rules.yml: -------------------------------------------------------------------------------- 1 | 2 | # Rules for the master nodes 3 | - name: public to master ssh 4 | local_action: 5 | module: cs_securitygroup_rule 6 | security_group: "{{ master_security_group_name }}" 7 | start_port: 22 8 | end_port: 22 9 | - name: public to master https (kubernetes api) 10 | local_action: 11 | module: cs_securitygroup_rule 12 | security_group: "{{ master_security_group_name }}" 13 | start_port: 443 14 | end_port: 443 15 | - name: master to master exoip 16 | local_action: 17 | module: cs_securitygroup_rule 18 | security_group: "{{ master_security_group_name }}" 19 | user_security_group: "{{ master_security_group_name }}" 20 | protocol: udp 21 | start_port: 12345 22 | end_port: 12345 23 | # infra-etcd 24 | - name: master to master etcd2 client ports 25 | local_action: 26 | module: cs_securitygroup_rule 27 | security_group: "{{ master_security_group_name }}" 28 | user_security_group: "{{ master_security_group_name }}" 29 | start_port: 2379 30 | end_port: 2379 31 | - name: worker to master etcd2 client ports 32 | local_action: 33 | module: cs_securitygroup_rule 34 | security_group: "{{ master_security_group_name }}" 35 | user_security_group: "{{ worker_security_group_name }}" 36 | start_port: 2379 37 | end_port: 2379 38 | - name: master to master etcd2 peer ports 39 | local_action: 40 | module: cs_securitygroup_rule 41 | security_group: "{{ master_security_group_name }}" 42 | user_security_group: "{{ 
master_security_group_name }}" 43 | start_port: 2380 44 | end_port: 2380 45 | # kubernetes-etcd 46 | - name: master to master etcd2 client ports 47 | local_action: 48 | module: cs_securitygroup_rule 49 | security_group: "{{ master_security_group_name }}" 50 | user_security_group: "{{ master_security_group_name }}" 51 | start_port: 12379 52 | end_port: 12379 53 | - name: master to master etcd2 peer ports 54 | local_action: 55 | module: cs_securitygroup_rule 56 | security_group: "{{ master_security_group_name }}" 57 | user_security_group: "{{ master_security_group_name }}" 58 | start_port: 12380 59 | end_port: 12380 60 | - name: master apiserver to master kubelet 61 | local_action: 62 | module: cs_securitygroup_rule 63 | security_group: "{{ master_security_group_name }}" 64 | user_security_group: "{{ master_security_group_name }}" 65 | start_port: 10250 66 | end_port: 10250 67 | protocol: tcp 68 | 69 | # Rules for the worker nodes 70 | - name: public to worker ssh 71 | local_action: 72 | module: cs_securitygroup_rule 73 | security_group: "{{ worker_security_group_name }}" 74 | start_port: 22 75 | end_port: 22 76 | - name: public to worker http 77 | local_action: 78 | module: cs_securitygroup_rule 79 | security_group: "{{ worker_security_group_name }}" 80 | start_port: 80 81 | end_port: 80 82 | - name: public to worker https 83 | local_action: 84 | module: cs_securitygroup_rule 85 | security_group: "{{ worker_security_group_name }}" 86 | start_port: 443 87 | end_port: 443 88 | - name: master apiserver to worker kubelet 89 | local_action: 90 | module: cs_securitygroup_rule 91 | security_group: "{{ worker_security_group_name }}" 92 | user_security_group: "{{ master_security_group_name }}" 93 | start_port: 10250 94 | end_port: 10250 95 | protocol: tcp 96 | 97 | # Rules for Heapster worker->worker and worker->master 98 | - name: heapster to master kubelet 99 | local_action: 100 | module: cs_securitygroup_rule 101 | security_group: "{{ master_security_group_name }}" 102 | user_security_group: "{{ worker_security_group_name }}" 103 | start_port: 10255 104 | end_port: 10255 105 | protocol: tcp 106 | - name: heapster to worker kubelet 107 | local_action: 108 | module: cs_securitygroup_rule 109 | security_group: "{{ worker_security_group_name }}" 110 | user_security_group: "{{ worker_security_group_name }}" 111 | start_port: 10255 112 | end_port: 10255 113 | protocol: tcp 114 | 115 | # Rules for vxlan between all masters and workers 116 | - name: vxlan master to master 117 | local_action: 118 | module: cs_securitygroup_rule 119 | security_group: "{{ master_security_group_name }}" 120 | user_security_group: "{{ master_security_group_name }}" 121 | start_port: 8472 122 | end_port: 8472 123 | protocol: udp 124 | - name: vxlan worker to master 125 | local_action: 126 | module: cs_securitygroup_rule 127 | security_group: "{{ master_security_group_name }}" 128 | user_security_group: "{{ worker_security_group_name }}" 129 | start_port: 8472 130 | end_port: 8472 131 | protocol: udp 132 | - name: vxlan worker to worker 133 | local_action: 134 | module: cs_securitygroup_rule 135 | security_group: "{{ worker_security_group_name }}" 136 | user_security_group: "{{ worker_security_group_name }}" 137 | start_port: 8472 138 | end_port: 8472 139 | protocol: udp 140 | - name: vxlan master to worker 141 | local_action: 142 | module: cs_securitygroup_rule 143 | security_group: "{{ worker_security_group_name }}" 144 | user_security_group: "{{ master_security_group_name }}" 145 | start_port: 8472 146 | end_port: 8472 147 | 
protocol: udp 148 | -------------------------------------------------------------------------------- /playbook/roles/common/tasks/create_secgroups.yml: -------------------------------------------------------------------------------- 1 | # Create k8s security group 2 | 3 | - name: Create Security Group 4 | local_action: 5 | module: cs_securitygroup 6 | name: "{{ master_security_group_name }}" 7 | description: k8s master nodes 8 | 9 | - name: Create Security Group 10 | local_action: 11 | module: cs_securitygroup 12 | name: "{{ worker_security_group_name }}" 13 | description: k8s worker nodes 14 | -------------------------------------------------------------------------------- /playbook/roles/common/tasks/create_sshkey.yml: -------------------------------------------------------------------------------- 1 | - name: Create SSH Key 2 | local_action: 3 | module: cs_sshkeypair 4 | name: "{{ ssh_key }}" 5 | register: key 6 | tags: sshkey 7 | 8 | - local_action: copy content="{{ key.private_key }}" dest="../secret/id_rsa_{{ ssh_key }}" 9 | when: key.changed 10 | tags: sshkey 11 | 12 | - file: path="../secret/id_rsa_{{ ssh_key }}" mode=0600 13 | when: key.changed 14 | tags: sshkey 15 | -------------------------------------------------------------------------------- /playbook/roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - include: cloudstack_ini.yml 3 | - include: create_sshkey.yml 4 | - include: create_secgroups.yml 5 | - include: create_secgroup_rules.yml 6 | -------------------------------------------------------------------------------- /playbook/roles/common/templates/cloudstack_ini.j2: -------------------------------------------------------------------------------- 1 | [cloudstack] 2 | endpoint = {{ EXO_API_ENDPOINT }} 3 | key = {{ EXO_API_KEY }} 4 | secret = {{ EXO_API_SECRET }} 5 | -------------------------------------------------------------------------------- /playbook/roles/defunctzombie.coreos-bootstrap/.editorconfig: -------------------------------------------------------------------------------- 1 | # EditorConfig is awesome: http://EditorConfig.org 2 | 3 | # top-most EditorConfig file 4 | root = true 5 | 6 | # Unix-style newlines with a newline ending every file 7 | [*] 8 | end_of_line = lf 9 | insert_final_newline = true 10 | indent_style = space 11 | indent_size = 2 12 | -------------------------------------------------------------------------------- /playbook/roles/defunctzombie.coreos-bootstrap/.travis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | language: python 3 | python: "2.7" 4 | 5 | env: 6 | - SITE=test.yml 7 | 8 | before_install: 9 | - sudo apt-get update -qq 10 | - sudo apt-get install -y curl 11 | 12 | install: 13 | - pip install ansible 14 | 15 | # Add ansible.cfg to pick up roles path. 
16 | - "printf '[defaults]\nroles_path = ../' > ansible.cfg" 17 | 18 | script: 19 | - "ansible-playbook -i tests/inventory tests/$SITE --syntax-check" 20 | -------------------------------------------------------------------------------- /playbook/roles/defunctzombie.coreos-bootstrap/LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2014 Roman Shtylman 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /playbook/roles/defunctzombie.coreos-bootstrap/README.md: -------------------------------------------------------------------------------- 1 | # coreos-bootstrap 2 | 3 | In order to run Ansible effectively, the target machine needs to have a Python interpreter. CoreOS machines are minimal and do not ship with any version of Python. To get around this limitation we can install [pypy](http://pypy.org/), a lightweight Python interpreter. The coreos-bootstrap role will install pypy for us and we will update our inventory file to use the installed Python interpreter. 4 | 5 | # install 6 | 7 | ``` 8 | ansible-galaxy install defunctzombie.coreos-bootstrap 9 | ``` 10 | 11 | # Configure your project 12 | 13 | Unlike a typical role, you need to configure Ansible to use an alternative Python interpreter for CoreOS hosts. This can be done by adding a `coreos` group to your inventory file and setting the group's vars to use the new Python interpreter. This way, you can use Ansible to manage CoreOS and non-CoreOS hosts. Simply put every host that runs CoreOS into the `coreos` inventory group and it will automatically use the specified Python interpreter. 14 | ``` 15 | [coreos] 16 | host-01 17 | host-02 18 | 19 | [coreos:vars] 20 | ansible_ssh_user=core 21 | ansible_python_interpreter=/home/core/bin/python 22 | ``` 23 | 24 | This will configure Ansible to use the Python interpreter at `/home/core/bin/python`, which will be created by the coreos-bootstrap role. 25 | 26 | ## Bootstrap Playbook 27 | 28 | Now you can simply add the following to your playbook file and include it in your `site.yml` so that it runs on all hosts in the coreos group.
29 | 30 | ```yaml 31 | - hosts: coreos 32 | gather_facts: False 33 | roles: 34 | - defunctzombie.coreos-bootstrap 35 | ``` 36 | 37 | Make sure that `gather_facts` is set to false, otherwise Ansible will try to first gather system facts using Python, which is not yet installed! 38 | 39 | ## Example Playbook 40 | 41 | After bootstrap, you can use Ansible as usual to manage system services, install Python modules (via pip), and run containers. Below is a basic example that starts the `etcd` service, installs the `docker-py` module and then uses the Ansible `docker` module to pull and start a basic nginx container. 42 | 43 | ```yaml 44 | - name: Nginx Example 45 | hosts: web 46 | sudo: true 47 | tasks: 48 | - name: Start etcd 49 | service: name=etcd.service state=started 50 | 51 | - name: Install docker-py 52 | pip: name=docker-py 53 | 54 | - name: pull container 55 | raw: docker pull nginx:1.7.1 56 | 57 | - name: launch nginx container 58 | docker: 59 | image="nginx:1.7.1" 60 | name="example-nginx" 61 | ports="8080:80" 62 | state=running 63 | ``` 64 | 65 | # License 66 | MIT 67 | -------------------------------------------------------------------------------- /playbook/roles/defunctzombie.coreos-bootstrap/files/bootstrap.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | 5 | cd 6 | 7 | if [[ -e $HOME/.bootstrapped ]]; then 8 | exit 0 9 | fi 10 | 11 | PYPY_VERSION=5.1.0 12 | 13 | if [[ -e $HOME/pypy-$PYPY_VERSION-linux64.tar.bz2 ]]; then 14 | tar -xjf $HOME/pypy-$PYPY_VERSION-linux64.tar.bz2 15 | rm -rf $HOME/pypy-$PYPY_VERSION-linux64.tar.bz2 16 | else 17 | wget -O - https://bitbucket.org/pypy/pypy/downloads/pypy-$PYPY_VERSION-linux64.tar.bz2 | tar -xjf - 18 | fi 19 | 20 | mv -n pypy-$PYPY_VERSION-linux64 pypy 21 | 22 | ## library fixup 23 | mkdir -p pypy/lib 24 | ln -snf /lib64/libncurses.so.5.9 $HOME/pypy/lib/libtinfo.so.5 25 | 26 | mkdir -p $HOME/bin 27 | 28 | cat > $HOME/bin/python <