├── .gitignore ├── README.md ├── ansible ├── bootstrap.yaml ├── control-plane.yaml ├── group_vars │ └── all ├── inventory.yaml ├── kubernetes-fetch-config.yaml ├── kubernetes-install.yaml ├── kubernetes-reset.yaml ├── lab-setup.yaml ├── metallb.yaml ├── roles │ ├── calico │ │ ├── tasks │ │ │ └── main.yaml │ │ └── templates │ │ │ └── calico.yaml │ ├── commons │ │ ├── tasks │ │ │ └── main.yaml │ │ └── templates │ │ │ └── k8s.iptables.conf │ ├── container-runtime │ │ ├── tasks │ │ │ └── main.yaml │ │ └── templates │ │ │ ├── containerd-config.toml │ │ │ └── containerd.service │ ├── kubeadm-init │ │ ├── tasks │ │ │ └── main.yaml │ │ └── templates │ │ │ └── kubeadm.yaml │ ├── kubeadm-join-config │ │ └── tasks │ │ │ └── main.yaml │ ├── kubeadm-join │ │ ├── tasks │ │ │ └── main.yaml │ │ └── templates │ │ │ └── kubeadm-join.yaml │ ├── kubeadm-reset │ │ └── tasks │ │ │ └── main.yaml │ ├── kubernetes-packages │ │ └── tasks │ │ │ └── main.yaml │ └── metallb │ │ ├── tasks │ │ └── main.yaml │ │ └── templates │ │ ├── metallb-config.yaml │ │ └── metallb.yaml └── worker-nodes.yaml └── proxmox ├── README.md ├── ansible ├── create-vm-template.yaml └── inventory.yaml └── terraform ├── .gitignore ├── main.tf ├── provider.tf └── variables.tf /.gitignore: -------------------------------------------------------------------------------- 1 | .idea/ 2 | .run/ 3 | admin.conf 4 | *.pem 5 | *.csr 6 | csr.json 7 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # On-premises Kubernetes deployment 2 | ## Overview 3 | This repository contains a reference implementation of bootstrapping and installation 4 | of a Kubernetes cluster on-premises. The provided tooling can be used both as a basis 5 | for personal projects and for educational purposes. 
6 | 7 | The goal of the project is to provide tooling for reproducible deployment of a fully 8 | functional Kubernetes cluster on-premises, including support for dynamic 9 | provisioning of `PersistentVolumes` and `LoadBalancer` service types. 10 | 11 | A detailed description is available in the blog post 12 | [The Ultimate Kubernetes Homelab Guide: From Zero to Production Cluster On-Premises](https://datastrophic.io/kubernetes-homelab-with-proxmox-kubeadm-calico-openebs-and-metallb/). 13 | 14 | Software used: 15 | * `Ansible` for deployment automation 16 | * `kubeadm` for Kubernetes cluster bootstrapping 17 | * `containerd` container runtime 18 | * `Calico` for pod networking 19 | * `MetalLB` for exposing `LoadBalancer` type services 20 | * `OpenEBS` for volume provisioning 21 | * `Istio` for ingress and traffic management 22 | 23 | ## Pre-requisites 24 | * cluster machines/VMs should be provisioned and accessible over SSH 25 | * it is recommended to use Ubuntu 20.04 as the cluster OS 26 | * the current user should have superuser privileges on the cluster nodes 27 | * Ansible installed locally 28 | 29 | ## Bootstrapping the infrastructure on Proxmox 30 | The [proxmox](proxmox) directory of this repo contains automation for the initial 31 | infrastructure bootstrapping using `cloud-init` templates and the Proxmox Terraform provider. 32 | 33 | ## Quickstart 34 | Installation consists of the following phases: 35 | * prepare machines for Kubernetes installation 36 | * install common packages, disable swap, enable port forwarding, install the container runtime 37 | * Kubernetes installation 38 | * bootstrap the control plane, install container networking, bootstrap worker nodes 39 | 40 | To prepare machines for Kubernetes installation, run: 41 | ``` 42 | ansible-playbook -i ansible/inventory.yaml ansible/bootstrap.yaml -K 43 | ``` 44 | 45 | > **NOTE:** the bootstrap step usually needs to run only once, or again when new nodes join.
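One of the "prepare machines" steps comments out swap entries in `/etc/fstab` (the `commons` role's `replace` task). Its effect can be previewed without touching a cluster node; the sketch below applies a rough `sed`/ERE translation of that pattern to a sample fstab with hypothetical entries:

```shell
# Build a sample fstab (hypothetical device names) and apply the same
# "comment out swap" substitution that the commons role performs.
printf '%s\n' \
  'UUID=abc123 / ext4 defaults 0 1' \
  '/swap.img none swap sw 0 0' > /tmp/fstab-sample

# ERE approximation of the role's regexp '^([^#].*?\sswap\s+sw\s+.*)$'
sed -E -i 's@^([^#].*[[:space:]]swap[[:space:]]+sw[[:space:]].*)$@# \1@' /tmp/fstab-sample

# The swap line is now commented out; the root filesystem line is untouched.
cat /tmp/fstab-sample
```

Re-running the substitution is safe: the leading `[^#]` in the pattern skips lines that are already commented out, which is why the bootstrap playbook can be replayed on already-prepared nodes.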
46 | 47 | To install Kubernetes, run: 48 | ``` 49 | ansible-playbook -i ansible/inventory.yaml ansible/kubernetes-install.yaml -K 50 | ``` 51 | 52 | Once the playbook run completes, a kubeconfig file `admin.conf` will be fetched to the current directory. To verify 53 | that the cluster is up and available, run: 54 | ``` 55 | $> kubectl --kubeconfig=admin.conf get nodes 56 | NAME STATUS ROLES AGE VERSION 57 | control-plane-0.k8s.cluster Ready control-plane,master 4m40s v1.21.6 58 | worker-0 Ready 4m5s v1.21.6 59 | worker-1 Ready 4m5s v1.21.6 60 | worker-2 Ready 4m4s v1.21.6 61 | ``` 62 | 63 | Consider running a [sonobuoy](https://sonobuoy.io/) conformance test to validate the cluster configuration and health. 64 | 65 | To uninstall Kubernetes, run: 66 | ``` 67 | ansible-playbook -i ansible/inventory.yaml ansible/kubernetes-reset.yaml -K 68 | ``` 69 | This playbook runs `kubeadm reset` on all nodes, reverts configuration changes, and stops the kubelets. 70 | 71 | ## Persistent volumes with OpenEBS 72 | There are plenty of storage solutions for Kubernetes. At the time of writing, 73 | [OpenEBS](https://openebs.io/) looked like a good fit for having storage installed 74 | with minimal friction. 75 | 76 | For the homelab setup, a [local hostpath](https://openebs.io/docs/user-guides/localpv-hostpath) 77 | provisioner should be sufficient; however, OpenEBS provides multiple options for 78 | replicated storage backing Persistent Volumes. 79 | 80 | To use only host-local Persistent Volumes, it is sufficient to install the lite 81 | version of OpenEBS: 82 | ``` 83 | kubectl apply -f https://openebs.github.io/charts/openebs-operator-lite.yaml 84 | ``` 85 | 86 | Once the Operator is installed, create a `StorageClass` annotated as **default** and apply it inline with `kubectl apply -f -`. 279 | Requests to the `/nginx-test` path will be routed to the Nginx `Service`. 280 | The endpoint URL is the load balancer address of the Istio Ingress Gateway.
281 | It comes in handy to discover it and export it to an environment variable for later use: 282 | ``` 283 | export INGRESS_HOST=$(kubectl get svc istio-ingressgateway --namespace istio-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}') 284 | ``` 285 | 286 | Now, we can verify that the deployment is exposed via the gateway at `http://$INGRESS_HOST/nginx-test`. 287 | -------------------------------------------------------------------------------- /ansible/bootstrap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | name: "Bootstrapping hosts" 4 | roles: 5 | - name: base setup 6 | role: commons 7 | 8 | - name: installing container runtime 9 | role: container-runtime 10 | 11 | - name: downloading Kubernetes dependencies 12 | role: kubernetes-packages 13 | -------------------------------------------------------------------------------- /ansible/control-plane.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: control-plane 3 | name: "Bootstrap Kubernetes Control Plane" 4 | roles: 5 | - name: kubeadm init 6 | role: kubeadm-init 7 | 8 | - name: install calico 9 | role: calico 10 | -------------------------------------------------------------------------------- /ansible/group_vars/all: -------------------------------------------------------------------------------- 1 | kubernetes: 2 | name: k8s-homelab 3 | version: v1.21.6 4 | apt_version: 1.21.6-00 5 | 6 | networking: 7 | domain: cluster.local 8 | pod_subnet: 192.168.64.0/20 9 | service_subnet: 10.96.0.0/12 10 | 11 | packages: 12 | containerd_download_url: https://github.com/containerd/containerd/releases/download/v1.5.7/containerd-1.5.7-linux-amd64.tar.gz 13 | crictl_download_url: https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.22.0/crictl-v1.22.0-linux-amd64.tar.gz 14 | runc_download_url: https://github.com/opencontainers/runc/releases/download/v1.0.2/runc.amd64 15 | 16 
| local: 17 | artifact_dir: .run 18 | token_file: .run/token 19 | cert_hash_file: .run/cert-hash 20 | 21 | lab: 22 | dns: 192.168.50.1 23 | metallb_address_range: 192.168.50.150-192.168.50.160 24 | -------------------------------------------------------------------------------- /ansible/inventory.yaml: -------------------------------------------------------------------------------- 1 | all: 2 | children: 3 | control-plane: 4 | hosts: 5 | control-plane-0.k8s.cluster: 6 | ansible_host: 192.168.50.110 7 | worker-nodes: 8 | hosts: 9 | worker-0.k8s.cluster: 10 | ansible_host: 192.168.50.120 11 | worker-1.k8s.cluster: 12 | ansible_host: 192.168.50.121 13 | worker-2.k8s.cluster: 14 | ansible_host: 192.168.50.122 15 | -------------------------------------------------------------------------------- /ansible/kubernetes-fetch-config.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: control-plane 3 | name: "Copy kubeconfig from remote" 4 | tasks: 5 | - name: fetching 6 | become: yes 7 | fetch: 8 | src: /etc/kubernetes/admin.conf 9 | dest: ../admin.conf 10 | flat: yes 11 | run_once: True 12 | -------------------------------------------------------------------------------- /ansible/kubernetes-install.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - import_playbook: control-plane.yaml 4 | - import_playbook: worker-nodes.yaml 5 | - import_playbook: kubernetes-fetch-config.yaml 6 | -------------------------------------------------------------------------------- /ansible/kubernetes-reset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: worker-nodes 3 | name: "Resetting worker nodes" 4 | roles: 5 | - name: run kubeadm reset 6 | role: kubeadm-reset 7 | 8 | - hosts: control-plane 9 | name: "Resetting control plane nodes" 10 | roles: 11 | - name: run kubeadm reset 12 | role: kubeadm-reset 13 | 14 | 
-------------------------------------------------------------------------------- /ansible/lab-setup.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: all 3 | name: "Lab-specific configuration" 4 | tasks: 5 | - name: configure DNS server 6 | become: yes 7 | lineinfile: 8 | path: /etc/systemd/resolved.conf 9 | regexp: "^#DNS=" 10 | line: "DNS={{ lab.dns }}" 11 | 12 | - name: configure DNS cache 13 | become: yes 14 | lineinfile: 15 | path: /etc/systemd/resolved.conf 16 | regexp: "^#Cache=" 17 | line: "Cache=yes" 18 | 19 | - name: restart systemd-resolved 20 | become: yes 21 | systemd: 22 | state: restarted 23 | daemon_reload: yes 24 | name: systemd-resolved 25 | -------------------------------------------------------------------------------- /ansible/metallb.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: control-plane 3 | name: "Install MetalLB" 4 | roles: 5 | - role: metallb 6 | run_once: True 7 | -------------------------------------------------------------------------------- /ansible/roles/calico/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: copy Calico manifests 2 | become: yes 3 | template: 4 | src: calico.yaml 5 | dest: /etc/calico.yaml 6 | 7 | - name: install Calico 8 | become: yes 9 | command: kubectl apply -f /etc/calico.yaml --kubeconfig=/etc/kubernetes/admin.conf 10 | -------------------------------------------------------------------------------- /ansible/roles/commons/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: install common packages 2 | become: yes 3 | apt: 4 | pkg: 5 | - apt-transport-https 6 | - curl 7 | update_cache: yes 8 | 9 | - name: disable swap 10 | become: yes 11 | command: swapoff -a 12 | 13 | - name: disable swap in fstab 14 | become: yes 15 | replace: 16 | path: /etc/fstab 17 | regexp: 
'^([^#].*?\sswap\s+sw\s+.*)$' 18 | replace: '# \1' 19 | 20 | - name: enable br_netfilter 21 | become: yes 22 | command: modprobe br_netfilter 23 | 24 | - name: ensure iptables enabled 25 | become: yes 26 | template: 27 | src: k8s.iptables.conf 28 | dest: /etc/sysctl.d/k8s.iptables.conf 29 | 30 | - name: enable port forward 31 | become: yes 32 | sysctl: 33 | name: net.ipv4.ip_forward 34 | value: 1 35 | sysctl_set: yes 36 | reload: yes 37 | -------------------------------------------------------------------------------- /ansible/roles/commons/templates/k8s.iptables.conf: -------------------------------------------------------------------------------- 1 | net.bridge.bridge-nf-call-ip6tables = 1 2 | net.bridge.bridge-nf-call-iptables = 1 -------------------------------------------------------------------------------- /ansible/roles/container-runtime/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: create config and data dirs 2 | become: yes 3 | file: 4 | path: "{{ item }}" 5 | state: directory 6 | mode: "0755" 7 | with_items: 8 | - /etc/containerd 9 | - /tmp/containerd 10 | 11 | - name: download and install runc 12 | become: yes 13 | get_url: 14 | url: "{{ packages.runc_download_url }}" 15 | dest: /usr/local/bin/runc 16 | mode: 0700 17 | 18 | - name: download and install crictl 19 | become: yes 20 | unarchive: 21 | src: "{{ packages.crictl_download_url }}" 22 | dest: /usr/local/bin 23 | mode: 0700 24 | remote_src: yes 25 | 26 | - name: download containerd 27 | become: yes 28 | unarchive: 29 | src: "{{ packages.containerd_download_url }}" 30 | dest: /tmp/containerd 31 | remote_src: yes 32 | extra_opts: [--strip-components=1] 33 | 34 | - name: copy containerd binaries 35 | become: yes 36 | shell: "find /tmp/containerd -type f | xargs -I {} mv {} /bin/" 37 | 38 | # Configure containerd 39 | - name: copy containerd config 40 | become: yes 41 | template: 42 | src: containerd-config.toml 43 | dest: 
/etc/containerd/config.toml 44 | 45 | - name: create containerd systemd service 46 | become: yes 47 | template: 48 | src: containerd.service 49 | dest: /etc/systemd/system/containerd.service 50 | 51 | # Starting containerd systemd service 52 | 53 | - name: reload systemd 54 | systemd: 55 | daemon_reload: yes 56 | become: yes 57 | 58 | - name: enable containerd systemd service 59 | systemd: 60 | name: containerd 61 | enabled: yes 62 | become: yes 63 | 64 | - name: start containerd service 65 | systemd: 66 | name: containerd 67 | state: started 68 | become: yes 69 | -------------------------------------------------------------------------------- /ansible/roles/container-runtime/templates/containerd-config.toml: -------------------------------------------------------------------------------- 1 | [plugins] 2 | [plugins.cri.containerd] 3 | snapshotter = "overlayfs" 4 | [plugins.cri.containerd.default_runtime] 5 | runtime_type = "io.containerd.runtime.v1.linux" 6 | runtime_engine = "/usr/local/bin/runc" 7 | runtime_root = "" 8 | -------------------------------------------------------------------------------- /ansible/roles/container-runtime/templates/containerd.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=containerd container runtime 3 | Documentation=https://containerd.io 4 | After=network.target local-fs.target 5 | 6 | [Service] 7 | ExecStartPre=/sbin/modprobe overlay 8 | ExecStart=/bin/containerd 9 | 10 | Type=notify 11 | Delegate=yes 12 | KillMode=process 13 | Restart=always 14 | RestartSec=5 15 | LimitNPROC=infinity 16 | LimitCORE=infinity 17 | LimitNOFILE=1048576 18 | OOMScoreAdjust=-999 19 | 20 | [Install] 21 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /ansible/roles/kubeadm-init/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: copy kubeadm init config 2 | become: yes 
3 | template: 4 | src: kubeadm.yaml 5 | dest: /etc/kubeadm.yaml 6 | 7 | - name: running kubeadm init 8 | become: yes 9 | command: kubeadm init --config /etc/kubeadm.yaml 10 | -------------------------------------------------------------------------------- /ansible/roles/kubeadm-init/templates/kubeadm.yaml: -------------------------------------------------------------------------------- 1 | # Link to API docs: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 2 | 3 | apiVersion: kubeadm.k8s.io/v1beta2 4 | kind: ClusterConfiguration 5 | clusterName: "{{ kubernetes.name }}" 6 | kubernetesVersion: "{{ kubernetes.version }}" 7 | certificatesDir: /etc/kubernetes/pki 8 | apiServer: 9 | extraArgs: 10 | authorization-mode: Node,RBAC 11 | # enable-admission-plugins: NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota 12 | timeoutForControlPlane: 4m0s 13 | controllerManager: {} 14 | scheduler: {} 15 | dns: 16 | type: CoreDNS 17 | etcd: 18 | local: 19 | dataDir: /var/lib/etcd 20 | networking: 21 | dnsDomain: "{{ networking.domain }}" 22 | podSubnet: "{{ networking.pod_subnet }}" 23 | serviceSubnet: "{{ networking.service_subnet }}" 24 | --- 25 | apiVersion: kubeadm.k8s.io/v1beta2 26 | kind: InitConfiguration 27 | nodeRegistration: 28 | criSocket: "unix:///run/containerd/containerd.sock" 29 | name: "{{ inventory_hostname }}" 30 | taints: 31 | - effect: NoSchedule 32 | key: node-role.kubernetes.io/master 33 | localAPIEndpoint: 34 | advertiseAddress: "{{ ansible_default_ipv4.address }}" 35 | bindPort: 6443 36 | -------------------------------------------------------------------------------- /ansible/roles/kubeadm-join-config/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: create local dir for token and cert hash 2 | file: 3 | path: "{{ local.artifact_dir }}" 4 | state: directory 5 | delegate_to: localhost 6 | run_once: True 7 | 8 | - name: kubeadm token 
generate 9 | become: yes 10 | command: kubeadm token list --kubeconfig=/etc/kubernetes/admin.conf -o jsonpath="{ .token }" 11 | run_once: True 12 | register: token 13 | 14 | - name: generate cert hash 15 | become: yes 16 | shell: "openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'" 17 | run_once: True 18 | register: cert_hash 19 | 20 | - name: persist token locally 21 | copy: 22 | dest: "{{ local.token_file }}" 23 | content: "{{ token.stdout }}" 24 | delegate_to: localhost 25 | run_once: True 26 | 27 | - name: persist cert hash locally 28 | copy: 29 | dest: "{{ local.cert_hash_file }}" 30 | content: "{{ cert_hash.stdout }}" 31 | delegate_to: localhost 32 | run_once: True -------------------------------------------------------------------------------- /ansible/roles/kubeadm-join/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: copy kubeadm join config 2 | become: yes 3 | template: 4 | src: kubeadm-join.yaml 5 | dest: /etc/kubeadm-join.yaml 6 | 7 | - name: running kubeadm join 8 | become: yes 9 | command: kubeadm join --config /etc/kubeadm-join.yaml 10 | -------------------------------------------------------------------------------- /ansible/roles/kubeadm-join/templates/kubeadm-join.yaml: -------------------------------------------------------------------------------- 1 | # Link to API docs: https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2 2 | 3 | apiVersion: kubeadm.k8s.io/v1beta2 4 | kind: JoinConfiguration 5 | nodeRegistration: 6 | criSocket: "unix:///run/containerd/containerd.sock" 7 | discovery: 8 | bootstrapToken: 9 | apiServerEndpoint: "{{ hostvars[groups['control-plane'][0]]['ansible_default_ipv4']['address'] }}:6443" 10 | token: "{{ token }}" 11 | caCertHashes: 12 | - "{{ cert_hash }}" 13 | -------------------------------------------------------------------------------- 
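The `generate cert hash` task in the `kubeadm-join-config` role derives the kubeadm discovery hash from the cluster CA certificate. The same pipeline can be exercised locally against a throwaway self-signed RSA certificate (hypothetical CN, written to `/tmp`) to see the output format:

```shell
# Create a throwaway CA-like certificate, then compute the discovery hash
# the same way the kubeadm-join-config role does: SHA-256 over the
# DER-encoded public key, prefixed with "sha256:".
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj '/CN=demo-ca' -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* /sha256:/'
# Prints sha256:<64 hex digits>
```

Worker nodes then supply this value through `caCertHashes` in their `JoinConfiguration`, which lets `kubeadm join` verify it is talking to the intended control plane.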
/ansible/roles/kubeadm-reset/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: run kubeadm reset 2 | become: yes 3 | command: kubeadm reset -f 4 | -------------------------------------------------------------------------------- /ansible/roles/kubernetes-packages/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: adding Kubernetes repository apt key 2 | become: yes 3 | apt_key: 4 | url: https://packages.cloud.google.com/apt/doc/apt-key.gpg 5 | state: present 6 | 7 | - name: adding Kubernetes deb repository 8 | become: yes 9 | apt_repository: 10 | repo: deb https://apt.kubernetes.io/ kubernetes-xenial main 11 | state: present 12 | filename: kubernetes 13 | 14 | - name: installing Kubernetes packages 15 | become: yes 16 | apt: 17 | pkg: 18 | - "kubeadm={{ kubernetes.apt_version }}" 19 | - "kubectl={{ kubernetes.apt_version }}" 20 | - "kubelet={{ kubernetes.apt_version }}" 21 | update_cache: yes 22 | 23 | - name: hold kubeadm, kubectl, kubelet 24 | become: yes 25 | dpkg_selections: 26 | name: "{{ item }}" 27 | selection: hold 28 | with_items: 29 | - kubeadm 30 | - kubectl 31 | - kubelet 32 | -------------------------------------------------------------------------------- /ansible/roles/metallb/tasks/main.yaml: -------------------------------------------------------------------------------- 1 | - name: copy MetalLB manifests 2 | become: yes 3 | template: 4 | src: metallb.yaml 5 | dest: /etc/metallb.yaml 6 | 7 | - name: copy MetalLB config 8 | become: yes 9 | template: 10 | src: metallb-config.yaml 11 | dest: /etc/metallb-config.yaml 12 | 13 | - name: create namespace 14 | become: yes 15 | shell: "kubectl create namespace metallb-system --dry-run=client -o yaml --kubeconfig=/etc/kubernetes/admin.conf | kubectl apply -f - --kubeconfig=/etc/kubernetes/admin.conf" 16 | 17 | - name: install MetalLB config 18 | become: yes 19 | command: kubectl apply -f /etc/metallb-config.yaml --kubeconfig=/etc/kubernetes/admin.conf 20 | 21 | - name: install 
MetalLB 22 | become: yes 23 | command: kubectl apply -f /etc/metallb.yaml --kubeconfig=/etc/kubernetes/admin.conf 24 | -------------------------------------------------------------------------------- /ansible/roles/metallb/templates/metallb-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - "{{ lab.metallb_address_range }}" 13 | -------------------------------------------------------------------------------- /ansible/roles/metallb/templates/metallb.yaml: -------------------------------------------------------------------------------- 1 | # Retrieved from https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml 2 | # November 8, 2021 3 | 4 | apiVersion: policy/v1beta1 5 | kind: PodSecurityPolicy 6 | metadata: 7 | labels: 8 | app: metallb 9 | name: controller 10 | namespace: metallb-system 11 | spec: 12 | allowPrivilegeEscalation: false 13 | allowedCapabilities: [] 14 | allowedHostPaths: [] 15 | defaultAddCapabilities: [] 16 | defaultAllowPrivilegeEscalation: false 17 | fsGroup: 18 | ranges: 19 | - max: 65535 20 | min: 1 21 | rule: MustRunAs 22 | hostIPC: false 23 | hostNetwork: false 24 | hostPID: false 25 | privileged: false 26 | readOnlyRootFilesystem: true 27 | requiredDropCapabilities: 28 | - ALL 29 | runAsUser: 30 | ranges: 31 | - max: 65535 32 | min: 1 33 | rule: MustRunAs 34 | seLinux: 35 | rule: RunAsAny 36 | supplementalGroups: 37 | ranges: 38 | - max: 65535 39 | min: 1 40 | rule: MustRunAs 41 | volumes: 42 | - configMap 43 | - secret 44 | - emptyDir 45 | --- 46 | apiVersion: policy/v1beta1 47 | kind: PodSecurityPolicy 48 | metadata: 49 | labels: 50 | app: metallb 51 | name: speaker 52 | namespace: metallb-system 53 | spec: 54 | allowPrivilegeEscalation: false 55 | allowedCapabilities: 
56 | - NET_RAW 57 | allowedHostPaths: [] 58 | defaultAddCapabilities: [] 59 | defaultAllowPrivilegeEscalation: false 60 | fsGroup: 61 | rule: RunAsAny 62 | hostIPC: false 63 | hostNetwork: true 64 | hostPID: false 65 | hostPorts: 66 | - max: 7472 67 | min: 7472 68 | - max: 7946 69 | min: 7946 70 | privileged: true 71 | readOnlyRootFilesystem: true 72 | requiredDropCapabilities: 73 | - ALL 74 | runAsUser: 75 | rule: RunAsAny 76 | seLinux: 77 | rule: RunAsAny 78 | supplementalGroups: 79 | rule: RunAsAny 80 | volumes: 81 | - configMap 82 | - secret 83 | - emptyDir 84 | --- 85 | apiVersion: v1 86 | kind: ServiceAccount 87 | metadata: 88 | labels: 89 | app: metallb 90 | name: controller 91 | namespace: metallb-system 92 | --- 93 | apiVersion: v1 94 | kind: ServiceAccount 95 | metadata: 96 | labels: 97 | app: metallb 98 | name: speaker 99 | namespace: metallb-system 100 | --- 101 | apiVersion: rbac.authorization.k8s.io/v1 102 | kind: ClusterRole 103 | metadata: 104 | labels: 105 | app: metallb 106 | name: metallb-system:controller 107 | rules: 108 | - apiGroups: 109 | - '' 110 | resources: 111 | - services 112 | verbs: 113 | - get 114 | - list 115 | - watch 116 | - apiGroups: 117 | - '' 118 | resources: 119 | - services/status 120 | verbs: 121 | - update 122 | - apiGroups: 123 | - '' 124 | resources: 125 | - events 126 | verbs: 127 | - create 128 | - patch 129 | - apiGroups: 130 | - policy 131 | resourceNames: 132 | - controller 133 | resources: 134 | - podsecuritypolicies 135 | verbs: 136 | - use 137 | --- 138 | apiVersion: rbac.authorization.k8s.io/v1 139 | kind: ClusterRole 140 | metadata: 141 | labels: 142 | app: metallb 143 | name: metallb-system:speaker 144 | rules: 145 | - apiGroups: 146 | - '' 147 | resources: 148 | - services 149 | - endpoints 150 | - nodes 151 | verbs: 152 | - get 153 | - list 154 | - watch 155 | - apiGroups: ["discovery.k8s.io"] 156 | resources: 157 | - endpointslices 158 | verbs: 159 | - get 160 | - list 161 | - watch 162 | - apiGroups: 163 | 
- '' 164 | resources: 165 | - events 166 | verbs: 167 | - create 168 | - patch 169 | - apiGroups: 170 | - policy 171 | resourceNames: 172 | - speaker 173 | resources: 174 | - podsecuritypolicies 175 | verbs: 176 | - use 177 | --- 178 | apiVersion: rbac.authorization.k8s.io/v1 179 | kind: Role 180 | metadata: 181 | labels: 182 | app: metallb 183 | name: config-watcher 184 | namespace: metallb-system 185 | rules: 186 | - apiGroups: 187 | - '' 188 | resources: 189 | - configmaps 190 | verbs: 191 | - get 192 | - list 193 | - watch 194 | --- 195 | apiVersion: rbac.authorization.k8s.io/v1 196 | kind: Role 197 | metadata: 198 | labels: 199 | app: metallb 200 | name: pod-lister 201 | namespace: metallb-system 202 | rules: 203 | - apiGroups: 204 | - '' 205 | resources: 206 | - pods 207 | verbs: 208 | - list 209 | --- 210 | apiVersion: rbac.authorization.k8s.io/v1 211 | kind: Role 212 | metadata: 213 | labels: 214 | app: metallb 215 | name: controller 216 | namespace: metallb-system 217 | rules: 218 | - apiGroups: 219 | - '' 220 | resources: 221 | - secrets 222 | verbs: 223 | - create 224 | - apiGroups: 225 | - '' 226 | resources: 227 | - secrets 228 | resourceNames: 229 | - memberlist 230 | verbs: 231 | - list 232 | - apiGroups: 233 | - apps 234 | resources: 235 | - deployments 236 | resourceNames: 237 | - controller 238 | verbs: 239 | - get 240 | --- 241 | apiVersion: rbac.authorization.k8s.io/v1 242 | kind: ClusterRoleBinding 243 | metadata: 244 | labels: 245 | app: metallb 246 | name: metallb-system:controller 247 | roleRef: 248 | apiGroup: rbac.authorization.k8s.io 249 | kind: ClusterRole 250 | name: metallb-system:controller 251 | subjects: 252 | - kind: ServiceAccount 253 | name: controller 254 | namespace: metallb-system 255 | --- 256 | apiVersion: rbac.authorization.k8s.io/v1 257 | kind: ClusterRoleBinding 258 | metadata: 259 | labels: 260 | app: metallb 261 | name: metallb-system:speaker 262 | roleRef: 263 | apiGroup: rbac.authorization.k8s.io 264 | kind: 
ClusterRole 265 | name: metallb-system:speaker 266 | subjects: 267 | - kind: ServiceAccount 268 | name: speaker 269 | namespace: metallb-system 270 | --- 271 | apiVersion: rbac.authorization.k8s.io/v1 272 | kind: RoleBinding 273 | metadata: 274 | labels: 275 | app: metallb 276 | name: config-watcher 277 | namespace: metallb-system 278 | roleRef: 279 | apiGroup: rbac.authorization.k8s.io 280 | kind: Role 281 | name: config-watcher 282 | subjects: 283 | - kind: ServiceAccount 284 | name: controller 285 | - kind: ServiceAccount 286 | name: speaker 287 | --- 288 | apiVersion: rbac.authorization.k8s.io/v1 289 | kind: RoleBinding 290 | metadata: 291 | labels: 292 | app: metallb 293 | name: pod-lister 294 | namespace: metallb-system 295 | roleRef: 296 | apiGroup: rbac.authorization.k8s.io 297 | kind: Role 298 | name: pod-lister 299 | subjects: 300 | - kind: ServiceAccount 301 | name: speaker 302 | --- 303 | apiVersion: rbac.authorization.k8s.io/v1 304 | kind: RoleBinding 305 | metadata: 306 | labels: 307 | app: metallb 308 | name: controller 309 | namespace: metallb-system 310 | roleRef: 311 | apiGroup: rbac.authorization.k8s.io 312 | kind: Role 313 | name: controller 314 | subjects: 315 | - kind: ServiceAccount 316 | name: controller 317 | --- 318 | apiVersion: apps/v1 319 | kind: DaemonSet 320 | metadata: 321 | labels: 322 | app: metallb 323 | component: speaker 324 | name: speaker 325 | namespace: metallb-system 326 | spec: 327 | selector: 328 | matchLabels: 329 | app: metallb 330 | component: speaker 331 | template: 332 | metadata: 333 | annotations: 334 | prometheus.io/port: '7472' 335 | prometheus.io/scrape: 'true' 336 | labels: 337 | app: metallb 338 | component: speaker 339 | spec: 340 | containers: 341 | - args: 342 | - --port=7472 343 | - --config=config 344 | - --log-level=info 345 | env: 346 | - name: METALLB_NODE_NAME 347 | valueFrom: 348 | fieldRef: 349 | fieldPath: spec.nodeName 350 | - name: METALLB_HOST 351 | valueFrom: 352 | fieldRef: 353 | fieldPath: 
status.hostIP 354 | - name: METALLB_ML_BIND_ADDR 355 | valueFrom: 356 | fieldRef: 357 | fieldPath: status.podIP 358 | # needed when another software is also using memberlist / port 7946 359 | # when changing this default you also need to update the container ports definition 360 | # and the PodSecurityPolicy hostPorts definition 361 | #- name: METALLB_ML_BIND_PORT 362 | # value: "7946" 363 | - name: METALLB_ML_LABELS 364 | value: "app=metallb,component=speaker" 365 | - name: METALLB_ML_SECRET_KEY 366 | valueFrom: 367 | secretKeyRef: 368 | name: memberlist 369 | key: secretkey 370 | image: quay.io/metallb/speaker:v0.11.0 371 | name: speaker 372 | ports: 373 | - containerPort: 7472 374 | name: monitoring 375 | - containerPort: 7946 376 | name: memberlist-tcp 377 | - containerPort: 7946 378 | name: memberlist-udp 379 | protocol: UDP 380 | securityContext: 381 | allowPrivilegeEscalation: false 382 | capabilities: 383 | add: 384 | - NET_RAW 385 | drop: 386 | - ALL 387 | readOnlyRootFilesystem: true 388 | hostNetwork: true 389 | nodeSelector: 390 | kubernetes.io/os: linux 391 | serviceAccountName: speaker 392 | terminationGracePeriodSeconds: 2 393 | tolerations: 394 | - effect: NoSchedule 395 | key: node-role.kubernetes.io/master 396 | operator: Exists 397 | --- 398 | apiVersion: apps/v1 399 | kind: Deployment 400 | metadata: 401 | labels: 402 | app: metallb 403 | component: controller 404 | name: controller 405 | namespace: metallb-system 406 | spec: 407 | revisionHistoryLimit: 3 408 | selector: 409 | matchLabels: 410 | app: metallb 411 | component: controller 412 | template: 413 | metadata: 414 | annotations: 415 | prometheus.io/port: '7472' 416 | prometheus.io/scrape: 'true' 417 | labels: 418 | app: metallb 419 | component: controller 420 | spec: 421 | containers: 422 | - args: 423 | - --port=7472 424 | - --config=config 425 | - --log-level=info 426 | env: 427 | - name: METALLB_ML_SECRET_NAME 428 | value: memberlist 429 | - name: METALLB_DEPLOYMENT 430 | value: 
controller 431 | image: quay.io/metallb/controller:v0.11.0 432 | name: controller 433 | ports: 434 | - containerPort: 7472 435 | name: monitoring 436 | securityContext: 437 | allowPrivilegeEscalation: false 438 | capabilities: 439 | drop: 440 | - all 441 | readOnlyRootFilesystem: true 442 | nodeSelector: 443 | kubernetes.io/os: linux 444 | securityContext: 445 | runAsNonRoot: true 446 | runAsUser: 65534 447 | fsGroup: 65534 448 | serviceAccountName: controller 449 | terminationGracePeriodSeconds: 0 450 | -------------------------------------------------------------------------------- /ansible/worker-nodes.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: control-plane 3 | name: "Retrieve join token and certificate hash" 4 | roles: 5 | - name: retrieve kubeadm join config 6 | role: kubeadm-join-config 7 | 8 | - hosts: worker-nodes 9 | vars: 10 | token: "{{ lookup('file', local.token_file) }}" 11 | cert_hash: "{{ lookup('file', local.cert_hash_file) }}" 12 | name: "Join Kubernetes worker nodes" 13 | roles: 14 | - name: run kubeadm join 15 | role: kubeadm-join 16 | -------------------------------------------------------------------------------- /proxmox/README.md: -------------------------------------------------------------------------------- 1 | # Proxmox VM setup 2 | 3 | ## Overview 4 | This directory contains tooling for: 5 | - creating a Proxmox VM template to use as a source for future VMs (Ansible) 6 | - provisioning VMs based on the created template (Terraform) 7 | 8 | Required software: 9 | - Ansible: `pip3 install ansible` 10 | - Terraform: `brew install terraform` 11 | 12 | Contents: 13 | - [create-vm-template.yaml](create-vm-template.yaml) - contains an Ansible playbook to 14 | run tasks for downloading a cloud-init image and creating a VM template in PVE. The playbook should be run as root 15 | on the PVE host itself. 
16 | - [terraform/main.tf](terraform/main.tf) contains the definition of the VMs and their quantity/resources etc. 17 | 18 | ## Creating cloud-init template 19 | 20 | To create a new cloud-init VM template: 21 | - update [ansible/inventory.yaml](ansible/inventory.yaml) and replace `proxmox.local` with the address of the PVE host 22 | - modify the `vars` section in [ansible/create-vm-template.yaml](ansible/create-vm-template.yaml) as needed 23 | - note: `cloud_image_path` must have a `.qcow2` extension due to a PVE compatibility issue 24 | - from the directory root, run: 25 | ``` 26 | ansible-playbook -i ansible/inventory.yaml ansible/create-vm-template.yaml -K 27 | ``` 28 | 29 | ## Provisioning VMs 30 | 31 | To provision VMs: 32 | - update [terraform/variables.tf](terraform/variables.tf) as needed; replace `proxmox.local` with the address of the PVE host 33 | - update/modify [terraform/main.tf](terraform/main.tf) to tweak the configuration of the VMs 34 | - from the [terraform](terraform) directory, run: 35 | ``` 36 | terraform init 37 | terraform plan -var="pm_user=root@pam" -var="pm_password=" -out plan 38 | 39 | terraform apply "plan" 40 | ``` 41 | 42 | ## References 43 | - [docs for the Terraform Proxmox provider](https://registry.terraform.io/providers/Telmate/proxmox/latest/docs) 44 | - [Deploy Proxmox virtual machines using Cloud-init](https://norocketscience.at/deploy-proxmox-virtual-machines-using-cloud-init/) 45 | - [Proxmox VMs with Terraform](https://norocketscience.at/provision-proxmox-virtual-machines-with-terraform/) 46 | - [Proxmox, Terraform, and Cloud-Init](https://yetiops.net/posts/proxmox-terraform-cloudinit-saltstack-prometheus/) 47 | -------------------------------------------------------------------------------- /proxmox/ansible/create-vm-template.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: pve 3 | gather_facts: no 4 | name: "create VM template" 5 | vars: 6 | vm: 7 | cloud_image_url:
https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img 8 | cloud_image_path: /tmp/ubuntu-2004-server-amd64.qcow2 9 | template_id: 1001 10 | template_name: ubuntu-2004-cloudinit-template 11 | template_memory: 4096 12 | tasks: 13 | - name: download cloud image 14 | get_url: 15 | url: "{{ vm.cloud_image_url }}" 16 | dest: "{{ vm.cloud_image_path }}" 17 | mode: '0700' 18 | 19 | - name: create a VM to use as a template 20 | command: "qm create {{ vm.template_id }} --name {{ vm.template_name }} --memory {{ vm.template_memory }} --net0 virtio,bridge=vmbr0" 21 | become: yes 22 | 23 | - name: import disk image 24 | command: "qm importdisk {{ vm.template_id }} {{ vm.cloud_image_path }} local-lvm" 25 | become: yes 26 | 27 | - name: configure VM to use imported image 28 | command: "qm set {{ vm.template_id }} --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-{{ vm.template_id }}-disk-0" 29 | become: yes 30 | 31 | - name: add cloud-init image as CDROM 32 | command: "qm set {{ vm.template_id }} --ide2 local-lvm:cloudinit" 33 | become: yes 34 | 35 | - name: configure boot from the image 36 | command: "qm set {{ vm.template_id }} --boot c --bootdisk scsi0" 37 | become: yes 38 | 39 | - name: attach serial console 40 | command: "qm set {{ vm.template_id }} --serial0 socket --vga serial0" 41 | become: yes 42 | 43 | - name: create template 44 | command: "qm template {{ vm.template_id }}" 45 | become: yes 46 | -------------------------------------------------------------------------------- /proxmox/ansible/inventory.yaml: -------------------------------------------------------------------------------- 1 | all: 2 | children: 3 | pve: 4 | hosts: 5 | proxmox-master: 6 | ansible_host: proxmox.local 7 | -------------------------------------------------------------------------------- /proxmox/terraform/.gitignore: -------------------------------------------------------------------------------- 1 | .terraform* 2 | plan 3 | *.tfstate 4 | 
-------------------------------------------------------------------------------- /proxmox/terraform/main.tf: -------------------------------------------------------------------------------- 1 | resource "proxmox_vm_qemu" "control_plane" { 2 | count = 1 3 | name = "control-plane-${count.index}.k8s.cluster" 4 | target_node = var.pm_node 5 | 6 | clone = "ubuntu-2004-cloudinit-template" 7 | 8 | os_type = "cloud-init" 9 | cores = 4 10 | sockets = "1" 11 | cpu = "host" 12 | memory = 2048 13 | scsihw = "virtio-scsi-pci" 14 | bootdisk = "scsi0" 15 | 16 | disk { 17 | size = "20G" 18 | type = "scsi" 19 | storage = "local-lvm" 20 | iothread = 1 21 | } 22 | 23 | network { 24 | model = "virtio" 25 | bridge = "vmbr0" 26 | } 27 | 28 | # cloud-init settings 29 | # adjust the ip and gateway addresses as needed 30 | ipconfig0 = "ip=192.168.0.11${count.index}/24,gw=192.168.0.1" 31 | sshkeys = file(var.ssh_key_file) 32 | } 33 | 34 | resource "proxmox_vm_qemu" "worker_nodes" { 35 | count = 3 36 | name = "worker-${count.index}.k8s.cluster" 37 | target_node = var.pm_node 38 | 39 | clone = "ubuntu-2004-cloudinit-template" 40 | 41 | os_type = "cloud-init" 42 | cores = 4 43 | sockets = "1" 44 | cpu = "host" 45 | memory = 4096 46 | scsihw = "virtio-scsi-pci" 47 | bootdisk = "scsi0" 48 | 49 | disk { 50 | size = "20G" 51 | type = "scsi" 52 | storage = "local-lvm" 53 | iothread = 1 54 | } 55 | 56 | network { 57 | model = "virtio" 58 | bridge = "vmbr0" 59 | } 60 | 61 | # cloud-init settings 62 | # adjust the ip and gateway addresses as needed 63 | ipconfig0 = "ip=192.168.0.12${count.index}/24,gw=192.168.0.1" 64 | sshkeys = file(var.ssh_key_file) 65 | } 66 | -------------------------------------------------------------------------------- /proxmox/terraform/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | proxmox = { 4 | source = "telmate/proxmox" 5 | version = "2.9.0" 6 | } 7 | } 8 | } 9 
| 10 | provider "proxmox" { 11 | pm_parallel = 1 12 | pm_tls_insecure = true 13 | pm_api_url = var.pm_api_url 14 | pm_password = var.pm_password 15 | pm_user = var.pm_user 16 | } 17 | -------------------------------------------------------------------------------- /proxmox/terraform/variables.tf: -------------------------------------------------------------------------------- 1 | variable "pm_api_url" { 2 | default = "https://proxmox.local:8006/api2/json" 3 | } 4 | 5 | variable "pm_node" { 6 | default = "pve" 7 | } 8 | 9 | variable "pm_user" { 10 | default = "" 11 | } 12 | 13 | variable "pm_password" { 14 | default = "" 15 | sensitive = true 16 | } 17 | 18 | variable "ssh_key_file" { 19 | default = "~/.ssh/id_rsa.pub" 20 | } 21 | --------------------------------------------------------------------------------