├── .gitignore ├── LICENSE ├── README.md ├── ansible.cfg ├── bootstrap.yml ├── deploy.yml ├── geck.png ├── geck_network.png ├── group_vars └── all.yml ├── inventory └── hosts.ini ├── pi ├── hypriot │ ├── static-ip-partial.yml │ └── user-data.yml └── provision.sh ├── reboot.yml ├── requirements.txt ├── roles ├── cert-manager │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── cert-manager-helm.yaml │ │ ├── prod-clusterissuer.yaml │ │ └── staging-clusterissuer.yaml ├── cloudflare-ddns │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── cloudflare-ddns-job.yaml │ │ └── cloudflare-ddns-secrets.yaml ├── common │ └── tasks │ │ └── main.yml ├── external-services │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── external-ingress.yaml │ │ └── external-service.yaml ├── gitea │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── gitea-helm.yaml │ │ └── gitea-ingress.yaml ├── helm │ └── tasks │ │ └── main.yml ├── kube-dashboard │ ├── files │ │ ├── dashboard-cluster-role-binding.yaml │ │ └── dashboard-service-account.yaml │ └── tasks │ │ └── main.yml ├── metallb │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── metallb-helm.yaml ├── nfs-provisioner │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── nfs-provisioner-helm.yaml ├── nfs-server │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── exports ├── openvpn │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── openvpn-helm.yaml ├── reboot │ └── tasks │ │ └── main.yml ├── reset-gateway │ └── tasks │ │ └── main.yml ├── reset-master │ └── tasks │ │ └── main.yml ├── reset-worker │ └── tasks │ │ └── main.yml ├── shutdown │ └── tasks │ │ └── main.yml ├── upgrade-gateway │ └── tasks │ │ └── main.yml ├── upgrade-master │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── traefik-helm.yaml └── upgrade-worker │ └── tasks │ └── main.yml ├── scripts ├── generate-vpn-cert.sh └── get-dashboard-token.sh ├── secrets └── secrets.sample ├── shutdown.yml ├── upgrade.yml └── utils ├── get-openvpn-key.sh └── revoke-openvpn-key.sh /.gitignore: -------------------------------------------------------------------------------- 1 | /secrets/secrets.yml 2 | /secrets/admin.conf 3 | /secrets/*.ovpn 4 | /pi/tmp/ 5 | /tmp/ 6 | *.retry 7 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Copyright 2020 Siyuan Gao 2 | 3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 4 | 5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 6 | 7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
8 | 9 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # geck 2 | 3 | ![Garden of Eden Creation Kit](geck.png) 4 | 5 | Raspberry Pi based Kubernetes cluster provisioned with Ansible 6 | 7 | based on https://github.com/ljfranklin/k8s-pi 8 | 9 | ## The idea 10 | As single board computers like the Raspberry Pi or the LattePanda get more and more popular, more people are moving their homelab solutions onto them because they are cheaper, more energy efficient and quieter. (Also, it's fun.) The idea is to use a set of Ansible playbooks to provision N Raspberry Pis to run a Kubernetes cluster and its necessary services. The cluster can then be used to run various applications in the homelab. 11 | 12 | > Please do not use this to provision production environments; it is only meant for experimental use in a homelab setting 13 | 14 | ## The cluster 15 | The cluster will be provisioned to run Kubernetes with the Traefik ingress controller and MetalLB as a bare-metal load balancer. Cert-manager will also be deployed to the cluster so SSL certificates can be provisioned automatically later. NFS-provisioner is deployed to automatically provision PVs as we spin up more services. 16 | 17 | ## Networking 18 | I had a lot of confusion about how this can work in a homelab environment, since we do not have access to cloud load balancers and it is hard to tell which type of service to use. After some investigation, I finally ironed out a plan that works in a homelab. Some lessons I have learned: 19 | 20 | - Never expose using NodePort, just no. 21 | - ClusterIP is useful, but you are not going to proxy to it every time you use it 22 | - LoadBalancer is good, but you need something that assigns the LoadBalancer IPs 23 | - Proxying to a gateway and then using nginx to route traffic requires too much manual configuration (I even made an API server that updates the configs; still too complex) 24 | 25 | ### Current Solution 26 | 27 | With MetalLB running in layer2 mode, I have given it an IP range it is allowed to assign, and those IPs are excluded from DHCP in my router. When the ingress controller is created first, it requests a LoadBalancer IP and MetalLB assigns one to it (this is the IP you set your router to port forward to; it will not change). Then, when you create a service that should be publicly accessible, just create the service as ClusterIP and create an Ingress pointing to it. Because we also added cert-manager, an SSL certificate will be automatically provisioned and assigned to that Ingress. 28 | 29 | No More Manual Config! 30 | 31 | ![Geck Network](geck_network.png) 32 |
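As a minimal sketch of the pattern (the `myapp` names and `myapp.example.com` domain are placeholders; `traefik` and `certissuer-staging` are the default `ingress_class` and `ingress_issuer` from `group_vars/all.yml`), exposing a service publicly looks roughly like this:

```
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: ClusterIP            # no NodePort or LoadBalancer needed for the app itself
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    cert-manager.io/cluster-issuer: "certissuer-staging"
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp
              servicePort: 80
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-example-com-tls   # cert-manager stores the created certificate here
```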
33 | ## Playbooks 34 | 35 | __bootstrap.yml__ 36 | 37 | This is the first thing you run when you want to deploy a fresh cluster; you can also run it to remove the current deployment and create a fresh cluster. You should be able to run it over and over again on the same cluster without re-imaging. 38 | 39 | __upgrade.yml__ 40 | 41 | This is the playbook to run when you want to upgrade an existing cluster to the latest version; you can run it over and over again to update the infrastructure (i.e. Kubernetes). 42 | 43 | __deploy.yml__ 44 | 45 | This runs on an existing cluster once you already have the necessary infrastructure in place. It deploys all the useful services for the cluster. 46 | 47 | ## Using the playbooks 48 | Usually you can just run `bootstrap.yml`; it includes all the other playbooks. The order is 49 | 50 | bootstrap -> upgrade -> deploy 51 |
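The later stages can also be run individually against the same inventory, following the same pattern as the command in the "Run it" section below (this assumes you have created `secrets/secrets.yml` from `secrets/secrets.sample`):

```
ansible-playbook -i inventory/hosts.ini --extra-vars @secrets/secrets.yml upgrade.yml
ansible-playbook -i inventory/hosts.ini --extra-vars @secrets/secrets.yml deploy.yml
```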
52 | ## Preparing the SD card 53 | The project includes scripts from the [ljfranklin/k8s-pi](https://github.com/ljfranklin/k8s-pi) project that create SD cards ready to run in a Raspberry Pi using HypriotOS. The resulting SD card also makes the Raspberry Pi use cloud-init for its initial setup in a headless way. To provision SD cards run: 54 | 55 | ``` 56 | /pi/provision.sh -d /dev/sda -n k8s-node1 -p "$(kr me)" -i 192.168.1.100 57 | ``` 58 | 59 | * -d Your SD card device path 60 | * -p Your SSH public key 61 | * -n Hostname 62 | * -i The IP address 63 | 64 | ## Run it 65 | 66 | ``` 67 | ansible-playbook -i inventory/hosts.ini --extra-vars @secrets/secrets.yml bootstrap.yml 68 | ``` 69 | 70 | ## Config 71 | You can configure how the playbooks run using `all.yml` in the `group_vars` folder. There you can configure everything needed to run your cluster. 72 | 73 | The config file is grouped into various sections: 74 | 75 | - k3s helm related 76 | - Configure k3s version and helm install version 77 | - Chart urls 78 | - Various helm chart repos 79 | - cert-manager 80 | - Cert-manager settings (names for the cert issuers) 81 | - Image tags 82 | - Image tags for the services used; you can update them if they are outdated 83 | - network related 84 | - MetalLB configuration, IP range and mode 85 | - file related 86 | - Temp file folder 87 | - storage 88 | - Storage information (local or external) 89 | - ddns 90 | - Used for the DDNS service 91 | - ingress 92 | - Ingress information, domains 93 | - unifi 94 | - Unifi controller url and configuration 95 | - external services 96 | - External services to be exposed through the cluster 97 | - misc 98 | - Anything else 99 | 100 | ## Included services 101 | Included services are the ones installed when you run `bootstrap.yml`. I believe anyone using the cluster will need them, so they are installed by default. 102 | - K3s with traefik 103 | - MetalLB 104 | - Cert Manager 105 | - NFS Provisioner 106 | - Cloudflare DDNS 107 | - Gitea 108 | - Kube dashboard 109 | - OpenVPN 110 | - Proxy for other internal services 111 | 112 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | host_key_checking = False 3 | remote_user = k8s 4 | roles_path = roles/ 5 | 6 | [ssh_connection] 7 | pipelining = True 8 | ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null 9 | -------------------------------------------------------------------------------- /bootstrap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | any_errors_fatal: true 4 | roles: 5 | - common 6 | 7 | - hosts: kube_node 8 | any_errors_fatal: true 9 | roles: 10 | - reset-worker 11 | 12 | - hosts: kube_master 13 | any_errors_fatal: true 14 | roles: 15 | - reset-master 16 | 17 | - import_playbook: upgrade.yml 18 | -------------------------------------------------------------------------------- /deploy.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: kube_master[0] 3 | any_errors_fatal: true 4 | roles: 5 | - kube-dashboard 6 | - nfs-server 7 | - helm 8 | - metallb 9 | - cert-manager 10 | - nfs-provisioner 11 | - cloudflare-ddns 12 | - gitea 13 | - openvpn 14 | - external-services 15 | -------------------------------------------------------------------------------- /geck.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/r1cebank/geck/1f63cba89a511695631cb006112c9b7fe1fb668f/geck.png -------------------------------------------------------------------------------- /geck_network.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/r1cebank/geck/1f63cba89a511695631cb006112c9b7fe1fb668f/geck_network.png -------------------------------------------------------------------------------- /group_vars/all.yml: -------------------------------------------------------------------------------- 1 | # K3s helm related 2 | get_k3s_url: https://get.k3s.io 3 | get_helm_url: https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 4 | k3s_version: v1.17.3+k3s1 5 | kube_dashboard_version: v2.0.0-rc5 6 | 7 | # Chart urls 8 | helm_chart_url: https://kubernetes-charts.storage.googleapis.com 9 | jetstack_chart_url: https://charts.jetstack.io 10 | personal_chart_url: https://r1cebank.github.io/helm-charts 11 | openvpn_chart_url: http://storage.googleapis.com/kubernetes-charts 12 | 13 | # cert-manager 14 | cert_manager_crds_url: https://raw.githubusercontent.com/jetstack/cert-manager/release-0.12/deploy/manifests/00-crds.yaml 15 | cert_manager_issuer_prod: certissuer-prod 16 | cert_manager_issuer_staging: certissuer-staging 17 | 18 | # Image and image tags 19 | openvpn_image: r1cebank/openvpn 20 | cloudflare_ddns_image: r1cebank/cloudflare-ddns 21 | gitea_image: r1cebank/gitea:v1.11.3 22 | nfs_provisioner_image: quay.io/external_storage/nfs-client-provisioner-arm 23 | 24 | # network related 25 | metallb_mode: layer2 26 | ingress_class: traefik 27 | ingress_loadbalancer_ip: 192.168.1.120 28 | gitea_ssh_loadbalancer_ip: 192.168.1.121 29 | openvpn_loadbalancer_ip: 192.168.1.122 30 | metallb_address_range:
192.168.1.120-192.168.1.200 31 | traefik_tls_min_version: VersionTLS12 32 | traefik_dashboard_enabled: true 33 | traefik_force_ssl: true 34 | 35 | # file related 36 | temp_dir: /tmp/geck 37 | 38 | # storage options 39 | storage: 40 | storage_type: device 41 | device: /dev/sda 42 | location: /mnt/ssd 43 | wipe: true 44 | 45 | # storage: 46 | # storage_type: local 47 | # location: /home/k8s/kube_storage 48 | # wipe: true 49 | 50 | # acme 51 | acme_email: siyuangao@gmail.com 52 | acme_staging_url: https://acme-staging-v02.api.letsencrypt.org/directory 53 | acme_prod_url: https://acme-v02.api.letsencrypt.org/directory 54 | # ddns 55 | ddns_domains: "*.elica.io,elica.io" 56 | 57 | # which issuer to use for ingress 58 | ingress_issuer: certissuer-staging 59 | 60 | # Ingress Information for services here 61 | gitea_ingress_domain: git.elica.io 62 | 63 | # External services to be exposed in kubernetes 64 | external_services: 65 | - name: synology-nas 66 | ip: 192.168.1.131 67 | source_port: 5000 68 | service_port: 80 69 | protocol: TCP 70 | ingress_domain: nas.elica.io 71 | - name: jupyter-lab 72 | ip: 192.168.1.41 73 | source_port: 8888 74 | service_port: 80 75 | protocol: TCP 76 | ingress_domain: lab.elica.io 77 | 78 | # for ansible 79 | ansible_python_interpreter: "/usr/bin/python" 80 | -------------------------------------------------------------------------------- /inventory/hosts.ini: -------------------------------------------------------------------------------- 1 | [all] 2 | geck-master ansible_host=192.168.1.100 3 | geck-01 ansible_host=192.168.1.101 4 | geck-02 ansible_host=192.168.1.102 5 | geck-03 ansible_host=192.168.1.103 6 | geck-04 ansible_host=192.168.1.104 7 | 8 | [kube_master] 9 | geck-master 10 | 11 | [kube_node] 12 | geck-01 13 | geck-02 14 | geck-03 15 | geck-04 16 | -------------------------------------------------------------------------------- /pi/hypriot/static-ip-partial.yml: -------------------------------------------------------------------------------- 1 | # Static IP address 2 | write_files: 3 | - content: | 4 | persistent 5 | # Generate Stable Private IPv6 Addresses instead of hardware based ones 6 | slaac private 7 | # static IP configuration: 8 | interface eth0 9 | static ip_address=REPLACE_WITH_STATIC_IP/24 10 | static routers=REPLACE_WITH_SUBNET_PREFIX.1 11 | static domain_name_servers=REPLACE_WITH_SUBNET_PREFIX.1 8.8.8.8 4.4.4.4 12 | path: /etc/dhcpcd.conf 13 | -------------------------------------------------------------------------------- /pi/hypriot/user-data.yml: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | # vim: syntax=yaml 3 | # 4 | 5 | # The current version of cloud-init in the Hypriot rpi-64 is 0.7.6 6 | # When dealing with cloud-init, it is SUPER important to know the version 7 | # I have wasted many hours creating servers to find out the module I was trying to use wasn't in the cloud-init version I had 8 | # Documentation: http://cloudinit.readthedocs.io/en/0.7.9/index.html 9 | 10 | hostname: REPLACE_WITH_HOSTNAME 11 | manage_etc_hosts: true 12 | 13 | users: 14 | - name: k8s 15 | gecos: "k8s" 16 | sudo: ALL=(ALL) NOPASSWD:ALL 17 | shell: /bin/bash 18 | groups: users,docker,video,input 19 | lock_passwd: true 20 | ssh_import_id: None 21 | ssh_pwauth: no 22 | ssh_authorized_keys: 23 | - "REPLACE_WITH_PUB_KEY" 24 | 25 | locale: "en_US.UTF-8" 26 | 27 | package_upgrade: false 28 | 29 | packages: 30 | # Host is able to resolve .local addresses 31 | - libnss-mdns 32 | 33 | # These commands will be 
run once on first boot only 34 | runcmd: 35 | # force clock sync 36 | - 'service ntp stop && ntpd -gq && service ntp start' 37 | # install HypriotOS GPG key 38 | - 'curl -L https://packagecloud.io/Hypriot/rpi/gpgkey | sudo apt-key add -' 39 | power_state: 40 | mode: reboot 41 | condition: true 42 | -------------------------------------------------------------------------------- /pi/provision.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -eu -o pipefail 4 | 5 | sd_device_path="" 6 | static_ip="" 7 | pi_hostname="" 8 | ssh_pub_key="" 9 | os_version="1.11.3" 10 | 11 | usage() { 12 | cat <<EOF 13 | Usage: $0 <options> 14 | 15 | Example: 16 | $0 -d /dev/sda -n k8s-node1 -p "\$(cat ~/.ssh/id_rsa.pub)" -i 192.168.1.100 17 | 18 | Options: 19 | -d (required) The device path of the target SD card, e.g. /dev/sda 20 | Note: Unmount this device before running this script 21 | -i (optional) Static IP to assign to machine, e.g. 192.168.1.240 22 | Note: Give master node a static IP, others can have dynamic 23 | -n (required) The hostname for the Pi, e.g. k8s-master 24 | The device should be reachable via k8s-master.local 25 | -p (required) The SSH public key contents to add to Pi's authorized keys 26 | -o (optional) The HypriotOS version to install, defaults to '${os_version}' 27 | -h Show this help text 28 | EOF 29 | exit 1 30 | } 31 | 32 | while getopts "d:i:n:p:o:h" opt; do 33 | case "${opt}" in 34 | d) 35 | sd_device_path="$OPTARG" 36 | ;; 37 | i) 38 | static_ip="$OPTARG" 39 | ;; 40 | n) 41 | pi_hostname="$OPTARG" 42 | ;; 43 | p) 44 | ssh_pub_key="$OPTARG" 45 | ;; 46 | o) 47 | os_version="$OPTARG" 48 | ;; 49 | h) 50 | usage 51 | ;; 52 | *) 53 | echo "Unknown argument: ${opt}!" 54 | usage 55 | ;; 56 | esac 57 | done 58 | 59 | if [ -z "${sd_device_path}" ] || [ -z "${pi_hostname}" ] || [ -z "${ssh_pub_key}" ]; then 60 | echo "Missing required arg!" 61 | usage 62 | fi 63 | 64 | script_dir="$( cd "$( dirname "$0" )" && pwd )" 65 | pushd "${script_dir}" > /dev/null 66 | mkdir -p ./tmp 67 | pushd ./tmp > /dev/null 68 | if [ ! -f "./hypriotos-rpi-v${os_version}.img" ]; then 69 | echo "Downloading HypriotOS ${os_version}..." 70 | 71 | curl -sSL "https://github.com/hypriot/image-builder-rpi/releases/download/v${os_version}/hypriotos-rpi-v${os_version}.img.zip" \ 72 | -o "hypriotos-rpi-v${os_version}.img.zip" 73 | curl -sSL "https://github.com/hypriot/image-builder-rpi/releases/download/v${os_version}/hypriotos-rpi-v${os_version}.img.zip.sha256" \ 74 | -o "hypriotos-rpi-v${os_version}.img.zip.sha256" 75 | sha256sum -c "hypriotos-rpi-v${os_version}.img.zip.sha256" 76 | 77 | unzip "hypriotos-rpi-v${os_version}.img.zip" 78 | rm "hypriotos-rpi-v${os_version}.img.zip" 79 | fi 80 | popd > /dev/null 81 | 82 | echo "Flashing HypriotOS ${os_version} to SD card ${sd_device_path}..." 83 | if mount | grep -q "${sd_device_path}"; then 84 | echo "Error: unmount ${sd_device_path} before running this script!" 85 | exit 1 86 | fi 87 | 88 | sudo dd if="./tmp/hypriotos-rpi-v${os_version}.img" of="${sd_device_path}" bs=1M 89 | 90 | read -p "Press any key after disk is mounted" 91 | 92 | sleep 1 # give drive time to mount 93 | os_mount="$(mount | grep "${sd_device_path}.*HypriotOS" | awk '{print $3}')" 94 | root_mount="$(mount | grep "${sd_device_path}.*root" | awk '{print $3}')" 95 | 96 | echo "Writing user-data to ${os_mount}/user-data..."
97 | trimmed_pub_key="$(xargs echo <<< "${ssh_pub_key}")" 98 | config_contents="$(cat ./hypriot/user-data.yml)" 99 | config_contents="$(sed -e "s#REPLACE_WITH_HOSTNAME#${pi_hostname}#g" <<< "${config_contents}")" 100 | config_contents="$(sed -e "s#REPLACE_WITH_PUB_KEY#${trimmed_pub_key}#g" <<< "${config_contents}")" 101 | if [ -n "${static_ip}" ]; then 102 | subnet_prefix="$(cut -d '.' -f1-3 <<< "${static_ip}")" # assumes /24 subnet 103 | static_ip_config="$(cat ./hypriot/static-ip-partial.yml)" 104 | static_ip_config="$(sed -e "s#REPLACE_WITH_STATIC_IP#${static_ip}#g" <<< "${static_ip_config}")" 105 | static_ip_config="$(sed -e "s#REPLACE_WITH_SUBNET_PREFIX#${subnet_prefix}#g" <<< "${static_ip_config}")" 106 | config_contents="${config_contents}\n${static_ip_config}" 107 | fi 108 | echo -e "${config_contents}" > "${os_mount}/user-data" 109 | 110 | umount "${os_mount}" 111 | umount "${root_mount}" 112 | 113 | echo "Success!" 114 | popd > /dev/null 115 | -------------------------------------------------------------------------------- /reboot.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: yes 4 | roles: 5 | - reboot 6 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | ansible>=2.7.12 2 | jinja2>=2.9.6 3 | -------------------------------------------------------------------------------- /roles/cert-manager/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Copy over helm deploy file 3 | template: 4 | src: cert-manager-helm.yaml 5 | dest: "{{ temp_dir }}/cert-manager-helm.yaml" 6 | 7 | - name: Install cert-manager crds 8 | shell: "kubectl apply --validate=false -f {{ cert_manager_crds_url }}" 9 | 10 | - name: Install cert-manager from helm 11 | k8s: 12 | state: present 13 | kubeconfig: /etc/rancher/k3s/k3s.yaml 14 | src: "{{ temp_dir }}/cert-manager-helm.yaml" 15 | 16 | - name: Copy over clusterIssuer files 17 | template: 18 | src: "{{ item }}" 19 | dest: "{{ temp_dir }}/{{ item }}" 20 | with_items: 21 | - prod-clusterissuer.yaml 22 | - staging-clusterissuer.yaml 23 | 24 | - name: Creating clusterIssuer 25 | k8s: 26 | state: present 27 | kubeconfig: /etc/rancher/k3s/k3s.yaml 28 | src: "{{ temp_dir }}/{{ item }}" 29 | with_items: 30 | - prod-clusterissuer.yaml 31 | - staging-clusterissuer.yaml 32 | -------------------------------------------------------------------------------- /roles/cert-manager/templates/cert-manager-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: cert-manager 5 | namespace: kube-system 6 | spec: 7 | chart: cert-manager 8 | repo: {{ jetstack_chart_url }} 9 | -------------------------------------------------------------------------------- /roles/cert-manager/templates/prod-clusterissuer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1alpha2 2 | kind: ClusterIssuer 3 | metadata: 4 | name: {{ cert_manager_issuer_prod }} 5 | spec: 6 | acme: 7 | # You must replace this email address with your own. 8 | # Let's Encrypt will use this to contact you about expiring 9 | # certificates, and issues related to your account.
10 | email: {{ acme_email }} 11 | server: {{ acme_prod_url }} 12 | privateKeySecretRef: 13 | # Secret resource used to store the account's private key. 14 | name: {{ cert_manager_issuer_prod }} 15 | # Add a single challenge solver, HTTP01 using the configured ingress class 16 | solvers: 17 | - http01: 18 | ingress: 19 | class: {{ ingress_class }} 20 | -------------------------------------------------------------------------------- /roles/cert-manager/templates/staging-clusterissuer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: cert-manager.io/v1alpha2 2 | kind: ClusterIssuer 3 | metadata: 4 | name: {{ cert_manager_issuer_staging }} 5 | spec: 6 | acme: 7 | # You must replace this email address with your own. 8 | # Let's Encrypt will use this to contact you about expiring 9 | # certificates, and issues related to your account. 10 | email: {{ acme_email }} 11 | server: {{ acme_staging_url }} 12 | privateKeySecretRef: 13 | # Secret resource used to store the account's private key. 14 | name: {{ cert_manager_issuer_staging }} 15 | # Add a single challenge solver, HTTP01 using the configured ingress class 16 | solvers: 17 | - http01: 18 | ingress: 19 | class: {{ ingress_class }} 20 | -------------------------------------------------------------------------------- /roles/cloudflare-ddns/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Copy over deploy files 3 | template: 4 | src: "{{ item }}" 5 | dest: "{{ temp_dir }}/{{ item }}" 6 | with_items: 7 | - cloudflare-ddns-secrets.yaml 8 | - cloudflare-ddns-job.yaml 9 | 10 | - name: Create ddns namespace 11 | k8s: 12 | name: cloudflare-ddns 13 | kubeconfig: /etc/rancher/k3s/k3s.yaml 14 | kind: Namespace 15 | state: present 16 | 17 | - name: Import secret 18 | k8s: 19 | state: present 20 | namespace: cloudflare-ddns 21 | kubeconfig: /etc/rancher/k3s/k3s.yaml 22 | src: "{{ temp_dir }}/cloudflare-ddns-secrets.yaml" 23 | 24 | - name: Remove secret file 25 | file: 26 | path: "{{ temp_dir }}/cloudflare-ddns-secrets.yaml" 27 | state: absent 28 | 29 | - name: Create cron job 30 | k8s: 31 | state: present 32 | namespace: cloudflare-ddns 33 | kubeconfig: /etc/rancher/k3s/k3s.yaml 34 | src: "{{ temp_dir }}/cloudflare-ddns-job.yaml" 35 | -------------------------------------------------------------------------------- /roles/cloudflare-ddns/templates/cloudflare-ddns-job.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: batch/v1beta1 3 | kind: CronJob 4 | metadata: 5 | name: cloudflare-ddns 6 | spec: 7 | schedule: "@hourly" 8 | jobTemplate: 9 | spec: 10 | template: 11 | spec: 12 | containers: 13 | - name: cloudflare-ddns 14 | imagePullPolicy: IfNotPresent 15 | image: {{ cloudflare_ddns_image }} 16 | env: 17 | - name: DOMAIN_NAMES 18 | value: "{{ ddns_domains }}" 19 | - name: API_KEY 20 | valueFrom: 21 | secretKeyRef: 22 | name: cloudflare-secret 23 | key: api_token 24 | - name: ACCOUNT_EMAIL 25 | valueFrom: 26 | secretKeyRef: 27 | name: cloudflare-secret 28 | key: email 29 | restartPolicy: OnFailure 30 | -------------------------------------------------------------------------------- /roles/cloudflare-ddns/templates/cloudflare-ddns-secrets.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Secret 4 | metadata: 5 | name: cloudflare-secret 6 | data: 7 | email: "{{ cloudflare_email | string | b64encode }}" 8 | api_token: "{{ cloudflare_token | string |
b64encode }}" 9 | -------------------------------------------------------------------------------- /roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Test connection 3 | ping: 4 | 5 | - name: Update apt 6 | become: yes 7 | become_user: root 8 | apt: 9 | update_cache: yes 10 | 11 | - name: Install tool packages 12 | become: yes 13 | become_user: root 14 | apt: 15 | name: "{{ packages }}" 16 | vars: 17 | packages: 18 | - nano 19 | - build-essential 20 | - python3-pip 21 | - python-dev 22 | 23 | - name: Switch iptables to legacy mode 24 | become: yes 25 | ignore_errors: yes 26 | become_user: root 27 | command: "{{ item }}" 28 | with_items: 29 | - update-alternatives --set iptables /usr/sbin/iptables-legacy 30 | -------------------------------------------------------------------------------- /roles/external-services/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create external-services namespace 3 | k8s: 4 | name: external-services 5 | kubeconfig: /etc/rancher/k3s/k3s.yaml 6 | kind: Namespace 7 | state: present 8 | 9 | - name: Copy over external service definition file 10 | template: 11 | src: external-service.yaml 12 | dest: "{{ temp_dir }}/external-{{ item.name }}-service.yaml" 13 | with_items: "{{ external_services }}" 14 | 15 | - name: Copy over external service ingress file 16 | template: 17 | src: external-ingress.yaml 18 | dest: "{{ temp_dir }}/external-{{ item.name }}-ingress.yaml" 19 | with_items: "{{ external_services }}" 20 | 21 | - name: Apply service definitions 22 | k8s: 23 | state: present 24 | kubeconfig: /etc/rancher/k3s/k3s.yaml 25 | src: "{{ temp_dir }}/external-{{ item.name }}-service.yaml" 26 | with_items: "{{ external_services }}" 27 | 28 | - name: Apply ingress definitions 29 | k8s: 30 | state: present 31 | kubeconfig: /etc/rancher/k3s/k3s.yaml 32 | src: "{{ temp_dir }}/external-{{ item.name }}-ingress.yaml" 33 | with_items: "{{ external_services }}" 34 | -------------------------------------------------------------------------------- /roles/external-services/templates/external-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Ingress 3 | metadata: 4 | annotations: 5 | # add an annotation indicating the issuer to use. 6 | kubernetes.io/ingress.class: "{{ ingress_class }}" 7 | cert-manager.io/cluster-issuer: "{{ ingress_issuer }}" 8 | name: {{ item.name }}-ingress 9 | namespace: external-services 10 | spec: 11 | rules: 12 | - host: {{ item.ingress_domain }} 13 | http: 14 | paths: 15 | - backend: 16 | serviceName: external-{{ item.name }}-http 17 | servicePort: {{ item.service_port }} 18 | path: / 19 | tls: # < placing a host in the TLS config will indicate a certificate should be created 20 | - hosts: 21 | - {{ item.ingress_domain }} 22 | secretName: {{ item.ingress_domain }}-tls # < cert-manager will store the created certificate in this secret. 
23 | -------------------------------------------------------------------------------- /roles/external-services/templates/external-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: external-{{ item.name }}-http 5 | namespace: external-services 6 | spec: 7 | ports: 8 | - name: {{ item.name }} 9 | port: {{ item.service_port }} 10 | protocol: {{ item.protocol }} 11 | targetPort: {{ item.source_port }} 12 | clusterIP: None 13 | type: ClusterIP 14 | --- 15 | apiVersion: v1 16 | kind: Endpoints 17 | metadata: 18 | name: external-{{ item.name }}-http 19 | namespace: external-services 20 | subsets: 21 | - addresses: 22 | # list all external ips for this service 23 | - ip: {{ item.ip }} 24 | ports: 25 | - name: {{ item.name }} 26 | port: {{ item.source_port }} 27 | protocol: {{ item.protocol }} 28 | -------------------------------------------------------------------------------- /roles/gitea/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create gitea namespace 3 | k8s: 4 | name: gitea 5 | kubeconfig: /etc/rancher/k3s/k3s.yaml 6 | kind: Namespace 7 | state: present 8 | 9 | - name: Copy over helm deploy file 10 | template: 11 | src: "{{ item }}" 12 | dest: "{{ temp_dir }}/{{ item }}" 13 | with_items: 14 | - gitea-helm.yaml 15 | - gitea-ingress.yaml 16 | 17 | - name: Install gitea from helm 18 | k8s: 19 | state: present 20 | kubeconfig: /etc/rancher/k3s/k3s.yaml 21 | src: "{{ temp_dir }}/{{ item }}" 22 | with_items: 23 | - gitea-helm.yaml 24 | - gitea-ingress.yaml 25 | -------------------------------------------------------------------------------- /roles/gitea/templates/gitea-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: gitea 5 | namespace: gitea 6 | spec: 7 | chart: gitea 8 | repo: {{ personal_chart_url }} 9 | set: 10 | app.theme: "arc-green" 11 | images.gitea: "{{ gitea_image }}" 12 | ingress.tls: "true" 13 | service.ssh.serviceType: LoadBalancer 14 | service.ssh.loadBalancerIP: "{{ gitea_ssh_loadbalancer_ip }}" 15 | service.http.externalHost: "{{ gitea_ingress_domain }}" 16 | service.ssh.externalHost: "{{ gitea_ingress_domain }}" 17 | config.openidSignin: "false" 18 | persistence.enabled: "true" 19 | persistence.giteaSize: "100Gi" 20 | persistence.storageClass: "nfs-client" 21 | config.requireSignin: "true" 22 | config.disableRegistration: "true" 23 | valuesContent: |- 24 | app: 25 | title: "Hachune-CX: Git for all" 26 | -------------------------------------------------------------------------------- /roles/gitea/templates/gitea-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Ingress 3 | metadata: 4 | annotations: 5 | # add an annotation indicating the issuer to use. 
6 | kubernetes.io/ingress.class: "{{ ingress_class }}" 7 | cert-manager.io/cluster-issuer: "{{ ingress_issuer }}" 8 | name: gitea-ingress 9 | namespace: gitea 10 | spec: 11 | rules: 12 | - host: {{ gitea_ingress_domain }} 13 | http: 14 | paths: 15 | - backend: 16 | serviceName: gitea-gitea-http 17 | servicePort: 3000 18 | path: / 19 | tls: # < placing a host in the TLS config will indicate a certificate should be created 20 | - hosts: 21 | - {{ gitea_ingress_domain }} 22 | secretName: {{ gitea_ingress_domain }}-tls # < cert-manager will store the created certificate in this secret. 23 | -------------------------------------------------------------------------------- /roles/helm/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Check if helm is installed 3 | stat: 4 | path: /usr/local/bin/helm 5 | register: helm_installed 6 | 7 | - name: Install helm 8 | become: yes 9 | shell: "curl -sfL {{ get_helm_url }} | bash" 10 | args: 11 | warn: no 12 | when: helm_installed.stat.exists == false 13 | 14 | - name: Remove existing helm repo 15 | ignore_errors: yes 16 | shell: "helm repo remove stable" 17 | args: 18 | warn: no 19 | when: helm_installed.stat.exists == true 20 | 21 | - name: Install helm repo 22 | shell: "helm repo add stable {{ helm_chart_url }}" 23 | args: 24 | warn: no 25 | when: helm_installed.stat.exists == true 26 | 27 | - name: Update helm repo 28 | shell: "helm repo update" 29 | args: 30 | warn: no 31 | when: helm_installed.stat.exists == true 32 | -------------------------------------------------------------------------------- /roles/kube-dashboard/files/dashboard-cluster-role-binding.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRoleBinding 3 | metadata: 4 | name: admin-user 5 | roleRef: 6 | apiGroup: rbac.authorization.k8s.io 7 | kind: ClusterRole 8 | name: cluster-admin 9 | subjects: 10 | - kind: ServiceAccount 11 | name: admin-user 12 | namespace: kubernetes-dashboard 13 | -------------------------------------------------------------------------------- /roles/kube-dashboard/files/dashboard-service-account.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: admin-user 5 | namespace: kubernetes-dashboard 6 | -------------------------------------------------------------------------------- /roles/kube-dashboard/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Download dashboard yaml 3 | get_url: 4 | url: "https://raw.githubusercontent.com/kubernetes/dashboard/{{ kube_dashboard_version }}/aio/deploy/recommended.yaml" 5 | dest: "{{ temp_dir }}/dashboard-service.yaml" 6 | 7 | - name: Copy service files 8 | copy: 9 | src: "{{ item }}" 10 | dest: "{{ temp_dir }}/{{ item }}" 11 | with_items: 12 | - dashboard-cluster-role-binding.yaml 13 | - dashboard-service-account.yaml 14 | 15 | - name: Creating deployment 16 | k8s: 17 | state: present 18 | kubeconfig: /etc/rancher/k3s/k3s.yaml 19 | src: "{{ temp_dir }}/{{ item }}" 20 | with_items: 21 | - dashboard-service.yaml 22 | - dashboard-service-account.yaml 23 | - dashboard-cluster-role-binding.yaml 24 | -------------------------------------------------------------------------------- /roles/metallb/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - 
name: Copy over helm deploy file 3 | template: 4 | src: metallb-helm.yaml 5 | dest: "{{ temp_dir }}/metallb-helm.yaml" 6 | 7 | - name: Install metallb from helm 8 | k8s: 9 | state: present 10 | kubeconfig: /etc/rancher/k3s/k3s.yaml 11 | src: "{{ temp_dir }}/metallb-helm.yaml" 12 | -------------------------------------------------------------------------------- /roles/metallb/templates/metallb-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: metallb 5 | namespace: kube-system 6 | spec: 7 | chart: stable/metallb 8 | set: 9 | configInline.address-pools[0].name: "default" 10 | configInline.address-pools[0].protocol: "{{ metallb_mode }}" 11 | configInline.address-pools[0].addresses[0]: "{{ metallb_address_range }}" 12 | -------------------------------------------------------------------------------- /roles/nfs-provisioner/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Copy over helm deploy file 3 | template: 4 | src: nfs-provisioner-helm.yaml 5 | dest: "{{ temp_dir }}/nfs-provisioner-helm.yaml" 6 | 7 | - name: Install nfs-provisioner from helm 8 | k8s: 9 | state: present 10 | kubeconfig: /etc/rancher/k3s/k3s.yaml 11 | src: "{{ temp_dir }}/nfs-provisioner-helm.yaml" 12 | -------------------------------------------------------------------------------- /roles/nfs-provisioner/templates/nfs-provisioner-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: nfs-client-provisioner 5 | namespace: kube-system 6 | spec: 7 | chart: stable/nfs-client-provisioner 8 | set: 9 | image.repository: {{ nfs_provisioner_image }} 10 | nfs.server: "{{ hostvars[groups['kube_master'][0]].ansible_default_ipv4.address }}" 11 | nfs.path: "{{ storage.location }}" 12 | -------------------------------------------------------------------------------- /roles/nfs-server/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Mount Storage 3 | become: yes 4 | mount: 5 | path: "{{ storage.location }}" 6 | src: "{{ storage.device }}" 7 | fstype: ext4 8 | state: mounted 9 | when: storage.storage_type == "device" 10 | 11 | - name: Set permission on mount 12 | become: yes 13 | file: 14 | path: "{{ storage.location }}" 15 | state: directory 16 | owner: k8s 17 | group: k8s 18 | when: storage.storage_type == "device" 19 | 20 | - name: Ensure local directory 21 | file: 22 | path: "{{ storage.location }}" 23 | state: directory 24 | when: storage.storage_type == "local" 25 | 26 | - name: Install nfs-kernel-server 27 | become: yes 28 | apt: 29 | name: nfs-kernel-server 30 | state: present 31 | 32 | - name: Copy nfs exports 33 | become: yes 34 | template: 35 | src: exports 36 | dest: /etc/exports 37 | group: root 38 | owner: root 39 | 40 | - name: Restart nfs server 41 | become: yes 42 | service: 43 | name: nfs-kernel-server 44 | state: restarted 45 | -------------------------------------------------------------------------------- /roles/nfs-server/templates/exports: -------------------------------------------------------------------------------- 1 | # to NFS clients. See exports(5). 
2 | # 3 | # Example for NFSv2 and NFSv3: 4 | # /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check) 5 | # 6 | # Example for NFSv4: 7 | # /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check) 8 | # /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check) 9 | # 10 | {{ storage.location }} 192.168.1.0/24(rw,no_root_squash,insecure,async,no_subtree_check,anonuid=1000,anongid=1000) 11 | -------------------------------------------------------------------------------- /roles/openvpn/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create openvpn namespace 3 | k8s: 4 | name: openvpn 5 | kubeconfig: /etc/rancher/k3s/k3s.yaml 6 | kind: Namespace 7 | state: present 8 | 9 | - name: Copy over helm deploy file 10 | template: 11 | src: openvpn-helm.yaml 12 | dest: "{{ temp_dir }}/openvpn-helm.yaml" 13 | 14 | - name: Install openvpn from helm 15 | k8s: 16 | state: present 17 | kubeconfig: /etc/rancher/k3s/k3s.yaml 18 | src: "{{ temp_dir }}/openvpn-helm.yaml" 19 | -------------------------------------------------------------------------------- /roles/openvpn/templates/openvpn-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: openvpn 5 | namespace: openvpn 6 | spec: 7 | chart: openvpn 8 | repo: {{ openvpn_chart_url }} 9 | set: 10 | image.repository: "{{ openvpn_image }}" 11 | image.tag: latest 12 | service.type: LoadBalancer 13 | service.loadBalancerIP: "{{ openvpn_loadbalancer_ip }}" 14 | persistence.storageClass: nfs-client 15 | openvpn.OVPN_K8S_SVC_NETWORK: "{{ (ansible_default_ipv4.address.split('.')[:3]|join('.')) + '.0' }}" 16 | openvpn.OVPN_K8S_SVC_SUBNET: 255.255.255.0 17 | -------------------------------------------------------------------------------- /roles/reboot/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Reboot host 3 | shell: sleep 2 && shutdown -r now "Ansible Reboot" 4 | async: 1 5 | poll: 0 6 | ignore_errors: yes 7 | 8 | - name: Wait for host 9 | wait_for_connection: 10 | connect_timeout: 20 11 | sleep: 5 12 | delay: 10 13 | timeout: 300 14 | -------------------------------------------------------------------------------- /roles/reset-gateway/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: remove dns-updater directory 3 | become: yes 4 | file: 5 | path: /home/k8s/dns-updater 6 | state: absent 7 | 8 | - name: remove dns-updater configuration 9 | become: yes 10 | file: 11 | path: /home/k8s/.cloudflare-ddns 12 | state: absent 13 | 14 | - name: remove dns-updater cron job 15 | cron: 16 | name: dns-updater 17 | state: absent 18 | 19 | - name: remove certbot configuration 20 | become: yes 21 | file: 22 | path: /home/k8s/.cloudflare.ini 23 | state: absent 24 | 25 | - name: remove letsencrypt directory 26 | become: yes 27 | file: 28 | path: /var/log/letsencrypt 29 | state: absent 30 | 31 | - name: remove certbot 32 | become: yes 33 | shell: pip3 uninstall -y certbot certbot-dns-cloudflare 34 | 35 | - name: remove certbot-renew cron job 36 | cron: 37 | name: certbot-renew 38 | state: absent 39 | 40 | - name: remove nginx directory 41 | become: yes 42 | file: 43 | path: "{{ nginx_config_dir }}" 44 | state: absent 45 | 46 | - name: create temp directory 47 | become: yes 48 | file: 49 | path: "{{ temp_dir }}" 50 | state: directory 51 | 52 | - 
name: remove temp files directory 53 | become: yes 54 | file: 55 | path: "{{ temp_dir }}/" 56 | state: absent 57 | -------------------------------------------------------------------------------- /roles/reset-master/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Check if k3s is installed 3 | stat: 4 | path: /usr/local/bin/k3s 5 | register: k3s_installed 6 | 7 | - name: Kill all k3s resources 8 | shell: /usr/local/bin/k3s-killall.sh 9 | when: k3s_installed.stat.exists == true 10 | 11 | - name: Uninstall k3s 12 | shell: /usr/local/bin/k3s-uninstall.sh 13 | when: k3s_installed.stat.exists == true 14 | 15 | - name: Remove temp files directory 16 | become: yes 17 | file: 18 | path: "{{ temp_dir }}" 19 | state: absent 20 | 21 | - name: Create temp directory 22 | file: 23 | path: "{{ temp_dir }}" 24 | state: directory 25 | 26 | - name: Remove nfs exports 27 | become: yes 28 | file: 29 | path: /etc/exports 30 | state: absent 31 | 32 | - name: Remove nfs-kernel-server package 33 | become: yes 34 | apt: 35 | name: nfs-kernel-server 36 | state: absent 37 | 38 | - name: Reset mount 39 | become: yes 40 | mount: 41 | path: "{{ storage.location }}" 42 | src: "{{ storage.device }}" 43 | fstype: ext4 44 | state: absent 45 | when: storage.storage_type == "device" 46 | 47 | - name: Create a ext4 filesystem on storage device 48 | become: yes 49 | filesystem: 50 | fstype: ext4 51 | force: true 52 | dev: "{{ storage.device }}" 53 | when: storage.storage_type == "device" and storage.wipe == true 54 | 55 | - name: Remove storage files directory 56 | become: yes 57 | file: 58 | path: "{{ storage.location }}" 59 | state: absent 60 | when: storage.storage_type == "local" and storage.wipe == true 61 | -------------------------------------------------------------------------------- /roles/reset-worker/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: check if k3s is installed 3 | stat: 4 | path: /usr/local/bin/k3s 5 | register: k3s_installed 6 | 7 | - name: kill all k3s resources 8 | shell: /usr/local/bin/k3s-killall.sh 9 | when: k3s_installed.stat.exists == true 10 | 11 | - name: uninstall k3s 12 | shell: /usr/local/bin/k3s-agent-uninstall.sh 13 | when: k3s_installed.stat.exists == true 14 | -------------------------------------------------------------------------------- /roles/shutdown/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Shutdown host 3 | shell: sleep 2 && shutdown -h now "Ansible Shutdown" 4 | async: 1 5 | poll: 0 6 | ignore_errors: yes 7 | -------------------------------------------------------------------------------- /roles/upgrade-gateway/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: update latest certbot 3 | become: yes 4 | shell: pip3 install certbot certbot-dns-cloudflare -U 5 | -------------------------------------------------------------------------------- /roles/upgrade-master/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install Openshift module 3 | pip: 4 | name: openshift 5 | extra_args: --user 6 | 7 | - name: install k3s 8 | shell: "curl -sfL {{ get_k3s_url }} | INSTALL_K3S_EXEC=\" --no-deploy traefik --no-deploy servicelb\" K3S_KUBECONFIG_MODE=\"644\" INSTALL_K3S_VERSION=\"{{ k3s_version }}\" sh -" 9 | args: 10 | warn: no 11 | 12 | - name: Wait 
for k3s to finish loading 13 | wait_for: 14 | path: /etc/rancher/k3s/k3s.yaml 15 | 16 | - name: Copy over helm deploy file 17 | template: 18 | src: traefik-helm.yaml 19 | dest: "{{ temp_dir }}/traefik-helm.yaml" 20 | 21 | - name: Install traefik from helm 22 | k8s: 23 | state: present 24 | kubeconfig: /etc/rancher/k3s/k3s.yaml 25 | src: "{{ temp_dir }}/traefik-helm.yaml" 26 | 27 | # # There isn't a way now to wait for the service to be deployed, using a hardcoded wait for now 28 | # - name: Wait for traefik to finish deploy 29 | # wait_for: 30 | # timeout: 120 31 | 32 | - name: retrieve kubeconfig 33 | fetch: 34 | src: /etc/rancher/k3s/k3s.yaml 35 | dest: ~/.kube/geck.yaml 36 | flat: yes 37 | 38 | - name: update ip address of kube config 39 | replace: 40 | path: ~/.kube/geck.yaml 41 | regexp: '(?:[0-9]{1,3}\.){3}[0-9]{1,3}' 42 | replace: "{{ hostvars[groups['kube_master'][0]].ansible_default_ipv4.address }}" 43 | delegate_to: localhost 44 | -------------------------------------------------------------------------------- /roles/upgrade-master/templates/traefik-helm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: helm.cattle.io/v1 2 | kind: HelmChart 3 | metadata: 4 | name: traefik 5 | namespace: kube-system 6 | spec: 7 | chart: stable/traefik 8 | set: 9 | loadBalancerIP: "{{ ingress_loadbalancer_ip }}" 10 | rbac.enabled: "true" 11 | ssl.enabled: "true" 12 | sniStrict: "true" 13 | ssl.enforced: "{{ traefik_force_ssl }}" 14 | ssl.permanentRedirect: "true" 15 | ssl.tlsMinVersion: "{{ traefik_tls_min_version }}" 16 | metrics.prometheus.enabled: "true" 17 | kubernetes.ingressEndpoint.useDefaultPublishedService: "true" 18 | dashboard.enabled: "{{ traefik_dashboard_enabled }}" 19 | valuesContent: |- 20 | ssl: 21 | cipherSuites: 22 | - TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 23 | - TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 24 | - TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 25 | - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 26 | - TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305 27 | - TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305 28 | -------------------------------------------------------------------------------- /roles/upgrade-worker/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: retrieve k3s node token from master 3 | register: k3s_master_token 4 | become: yes 5 | become_user: root 6 | delegate_to: "{{ groups['kube_master'][0] }}" 7 | shell: "cat /var/lib/rancher/k3s/server/node-token" 8 | 9 | - name: install k3s - agent 10 | shell: "curl -sfL {{ get_k3s_url }} | INSTALL_K3S_VERSION=\"{{ k3s_version }}\" K3S_URL=https://{{ hostvars[groups['kube_master'][0]].ansible_default_ipv4.address }}:6443 K3S_TOKEN={{ k3s_master_token.stdout }} sh -" 11 | args: 12 | warn: no 13 | -------------------------------------------------------------------------------- /scripts/generate-vpn-cert.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -eu 4 | 5 | if [ "$#" -ne 1 ]; then 6 | echo "Usage: $0 <hostname>" 7 | exit 1 8 | fi 9 | 10 | hostname="$1" 11 | 12 | key_path="secrets/k8s.ovpn" 13 | mkdir -p secrets 14 | 15 | if [ -f "${key_path}" ]; then 16 | echo "VPN config already exists at '${key_path}'. Delete it and try again." 17 | exit 1 18 | fi 19 | 20 | pod_name=$(kubectl get pods -l "app=openvpn" -o jsonpath='{.items[0].metadata.name}') 21 | kubectl exec -it "${pod_name}" /etc/openvpn/setup/newClientCert.sh "k8s" "${hostname}" 22 | kubectl exec -it "${pod_name}" cat "/etc/openvpn/certs/pki/k8s.ovpn" > "${key_path}" 23 | 24 | echo "Wrote VPN config to '${key_path}'" 25 |
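A quick usage sketch for `scripts/generate-vpn-cert.sh` (the hostname below is a placeholder; pass whatever public name or IP your VPN clients should connect to, since it gets baked into the generated client config):

```
./scripts/generate-vpn-cert.sh vpn.example.com
```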
-------------------------------------------------------------------------------- /scripts/get-dashboard-token.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -eu 4 | 5 | # why so complicated... 6 | kubectl -n kubernetes-dashboard get secrets -o json | \ 7 | jq -r -e --arg user 'admin-user' '.items | map(select(.metadata.name | startswith($user)))[0].data.token' | \ 8 | base64 -d 9 | -------------------------------------------------------------------------------- /secrets/secrets.sample: -------------------------------------------------------------------------------- 1 | # Cloudflare API credentials used by the Cloudflare DDNS job 2 | cloudflare_email: 3 | cloudflare_token: 4 | -------------------------------------------------------------------------------- /shutdown.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | become: yes 4 | roles: 5 | - shutdown 6 | -------------------------------------------------------------------------------- /upgrade.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: kube_master 3 | any_errors_fatal: true 4 | roles: 5 | - upgrade-master 6 | 7 | - hosts: kube_node 8 | roles: 9 | - upgrade-worker 10 | 11 | - import_playbook: deploy.yml 12 | -------------------------------------------------------------------------------- /utils/get-openvpn-key.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ $# -ne 3 ] 4 | then 5 | echo "Usage: $0 <key-name> <namespace> <helm-release>" 6 | exit 1 7 | fi 8 | 9 | KEY_NAME=$1 10 | NAMESPACE=$2 11 | HELM_RELEASE=$3 12 | POD_NAME=$(kubectl get pods -n "$NAMESPACE" -l "app=openvpn,release=$HELM_RELEASE" -o jsonpath='{.items[0].metadata.name}') 13 | SERVICE_NAME=$(kubectl get svc -n "$NAMESPACE" -l "app=openvpn,release=$HELM_RELEASE" -o jsonpath='{.items[0].metadata.name}') 14 | SERVICE_IP=$(kubectl get svc -n "$NAMESPACE" "$SERVICE_NAME" -o go-template='{{range $k, $v := (index .status.loadBalancer.ingress 0)}}{{$v}}{{end}}') 15 | kubectl -n "$NAMESPACE" exec -it "$POD_NAME" /etc/openvpn/setup/newClientCert.sh "$KEY_NAME" "$SERVICE_IP" 16 | kubectl -n "$NAMESPACE" exec -it "$POD_NAME" cat "/etc/openvpn/certs/pki/$KEY_NAME.ovpn" > "$KEY_NAME.ovpn" 17 | -------------------------------------------------------------------------------- /utils/revoke-openvpn-key.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | if [ $# -ne 3 ] 4 | then 5 | echo "Usage: $0 <key-name> <namespace> <helm-release>" 6 | exit 1 7 | fi 8 | 9 | KEY_NAME=$1 10 | NAMESPACE=$2 11 | HELM_RELEASE=$3 12 | POD_NAME=$(kubectl get pods -n "$NAMESPACE" -l "app=openvpn,release=$HELM_RELEASE" -o jsonpath='{.items[0].metadata.name}') 13 | kubectl -n "$NAMESPACE" exec -it "$POD_NAME" /etc/openvpn/setup/revokeClientCert.sh "$KEY_NAME" 14 | --------------------------------------------------------------------------------
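Usage sketch for the two OpenVPN helpers above (the namespace and Helm release names are assumed to both be `openvpn`, matching what the geck playbooks deploy; substitute your own values if they differ):

```
# issue a client config for a device named "laptop"
./utils/get-openvpn-key.sh laptop openvpn openvpn
# later, revoke that client's certificate
./utils/revoke-openvpn-key.sh laptop openvpn openvpn
```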