├── .gitignore
├── README.md
├── ansible.cfg
├── create-k8s-vm.yaml
├── group_vars
│   └── example-all.yaml
└── inventories
    └── proxmox.py
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
group_vars/all.yaml
mycluster/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# PROXMOX KUBERNETES BOOTSTRAP

## Description

This is a collection of resources for getting a Kubernetes cluster up and running in a Proxmox Virtual Environment. These tools and commands assume the user is working in a Linux or Linux-like environment.

# Prerequisites

* A Proxmox 8 or newer server, up and running
* Python 3 (comes pre-installed on most Linux distros)
* [Generate a set of SSH keys](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#generating-a-new-ssh-key)
* [Install Docker](https://github.com/docker/docker-install#usage)
* [Install Ansible](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html)

```bash
# There are many ways to install Ansible. pipx is my preferred method.
python3 -m pip install pipx
pipx install --include-deps ansible
```

Note: in some environments you may need to use `python` instead of `python3`.

# Create Kubernetes nodes

## Copy the example configuration file

```bash
# From the repo's root path
cp group_vars/example-all.yaml group_vars/all.yaml
```

Fill in `group_vars/all.yaml` with values that are appropriate for your environment.
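As a rough sketch of what a filled-in file might look like — only `proxmox_iso_storage` is actually referenced in the playbook excerpt below, and the `vm_tags` default is inferred from the playbook's documented behavior; `group_vars/example-all.yaml` is the authoritative list of keys:

```yaml
# Hypothetical values; consult group_vars/example-all.yaml for the real keys.
proxmox_iso_storage: local                    # Proxmox storage ID where ISO/cloud images live
vm_tags: kube_node,kube_control_plane,etcd    # default role tags, overridable via --extra-vars
```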
## Create Kubernetes node VM(s)

By default, `create-k8s-vm.yaml` tags the VM with **kube_node**, **kube_control_plane**, and **etcd**:

* **kube_node**: Kubernetes nodes where pods will run.
* **kube_control_plane**: where the Kubernetes control plane components (apiserver, scheduler, controller-manager) will run.
* **etcd**: the etcd servers. You should have three of them for failover purposes.

For high availability, it is recommended to have 3+ nodes with these roles.
See [Kubespray's documentation](https://github.com/kubernetes-sigs/kubespray/blob/v2.27.0/docs/ansible/inventory.md) for more details.

To adjust the role(s) of the VM, adjust the tags:

```bash
# worker, control plane, and etcd node
ansible-playbook create-k8s-vm.yaml -K

# control plane and etcd node
ansible-playbook create-k8s-vm.yaml --extra-vars "vm_tags=kube_control_plane,etcd" -K

# worker node
ansible-playbook create-k8s-vm.yaml --extra-vars "vm_tags=kube_node" -K

# control plane node
ansible-playbook create-k8s-vm.yaml --extra-vars "vm_tags=kube_control_plane" -K

# etcd node
ansible-playbook create-k8s-vm.yaml --extra-vars "vm_tags=etcd" -K
```

# Deploy Kubernetes via Kubespray

## Copy sample inventory files

```bash
DOCKER_IMAGE=quay.io/kubespray/kubespray:v2.27.0

CID=$(docker create "${DOCKER_IMAGE}")
docker cp "${CID}:/kubespray/inventory/sample" mycluster
docker rm "${CID}"
```

Make any desired modifications to the files in `mycluster`.

## Run deploy

```bash
docker run --rm --name kubespray --tty --interactive \
  -v "$(pwd)/mycluster:/kubespray/inventory/mycluster" \
  -v "$(pwd)/group_vars/all.yaml:/kubespray/inventory/group_vars/all.yaml" \
  -v "$(pwd)/inventories/proxmox.py:/kubespray/inventory/mycluster/proxmox.py" \
  -v "${HOME}/.ssh:/root/.ssh" \
  "${DOCKER_IMAGE}" bash

# Inside the container, take ownership of the mounted SSH config so ssh will read it:
chown "$(whoami)" ~/.ssh/config
ansible-playbook -i inventory/mycluster/proxmox.py --user=user --become --become-user=root cluster.yml
exit

# Back on the host, restore ownership of your SSH files:
sudo chown "$(whoami)" ~/.ssh/config
sudo chown "$(whoami)" ~/.ssh/known_hosts
```

# kubectl

Install [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl):

> The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.

## Authorization

```bash
IP=192.0.2.2 # Control plane node IP address
ssh user@${IP} 'mkdir -p "${HOME}/.kube" && sudo cp -f /etc/kubernetes/admin.conf "${HOME}/.kube/config"'

mkdir -p "${HOME}/.kube"
ssh user@${IP} 'sudo cat "${HOME}/.kube/config"' > "${HOME}/.kube/config"
sudo chown $(id -u):$(id -g) "${HOME}/.kube/config"
sed -i "s/127.0.0.1/${IP}/g" "${HOME}/.kube/config"
chmod 400 ~/.kube/config

kubectl get nodes
```
--------------------------------------------------------------------------------
/ansible.cfg:
--------------------------------------------------------------------------------
[defaults]
host_key_checking = False
inventory = inventories/proxmox.py
--------------------------------------------------------------------------------
/create-k8s-vm.yaml:
--------------------------------------------------------------------------------
- hosts: proxmox
  become_method: sudo
  become_user: root
  become: true

  tasks:
    - name: Get local SSH public key
      ansible.builtin.shell: |
        ssh-agent sh -c 'ssh-add > /dev/null 2>&1 ; ssh-add -L'
      register: ssh_public_key
      become: false
      delegate_to: localhost

    - name: Set SSH public key
      ansible.builtin.set_fact:
        ssh_public_key: "{{ ssh_public_key.stdout }}"

    - name: Determine ISO download path
      ansible.builtin.shell: |
        pvesh get /storage/{{ proxmox_iso_storage }} --output-format json
      register: iso_storage

    - name: Set ISO download path
      ansible.builtin.set_fact:
        iso_path: "{{ (iso_storage.stdout | from_json).path }}/template/iso"

    - name: Download Ubuntu cloud image
      ansible.builtin.get_url:
        url: https://cloud-images.ubuntu.com/minimal/releases/noble/release/ubuntu-24.04-minimal-cloudimg-amd64.img
        dest: "{{ iso_path }}/ubuntu-24.04-minimal-cloudimg-amd64.img"
        checksum: sha256:https://cloud-images.ubuntu.com/minimal/releases/noble/release/SHA256SUMS

    - name: Generate VM name
      ansible.builtin.shell: |
        echo "k8s-node-$(tr -dc a-z0-9 < /dev/urandom | head -c 5)"

# [... remaining tasks truncated in the source ...]
--------------------------------------------------------------------------------
/inventories/proxmox.py:
--------------------------------------------------------------------------------
# [... imports and the PromoxHost, ProxmoxVM, and ProxmoxAPI class definitions
# are truncated in the source; get_vms below is a ProxmoxAPI method ...]

    def get_vms(self) -> List[ProxmoxVM]:
        vms = []

        for node in self.get("/api2/json/nodes"):
            node_vms = self.get(f"/api2/json/nodes/{node['node']}/qemu")

            for node_vm in node_vms:
                response = self.get(
                    f"/api2/json/nodes/{node['node']}/qemu/{node_vm['vmid']}/agent/network-get-interfaces"
                )
                # The agent endpoint returns None when the VM isn't running
                network_interfaces = response["result"] if response else None

                vm = ProxmoxVM(node["node"], node_vm, network_interfaces)
                vms.append(vm)

        return vms


if __name__ == "__main__":
    proxmox = PromoxHost()

    inventory = {
        "_meta": {
            "hostvars": {
                "proxmox": {
                    "ansible_host": proxmox.ip,
                    "ansible_port": proxmox.ssh_port,
                }
            }
        },
        "all": {"hosts": ["proxmox"], "children": []},
    }

    for vm in ProxmoxAPI(proxmox).get_vms():
        if not vm.running or not vm.ip:
            continue

        inventory["_meta"]["hostvars"][vm.name] = {"ansible_host": vm.ip}

        for tag in vm.tags:
            if tag not in inventory:
                inventory["all"]["children"].append(tag)
                inventory[tag] = {"hosts": [vm.name]}
            else:
                inventory[tag]["hosts"].append(vm.name)

    print(json.dumps(inventory, indent=2))
--------------------------------------------------------------------------------
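The tag-grouping loop at the end of `proxmox.py` — each Proxmox VM tag becomes an Ansible group under `all.children` — can be exercised in isolation with stub data. `build_groups` and the `(name, tags)` pairs below are hypothetical stand-ins for the script's real `ProxmoxVM` objects, not part of the repo:

```python
import json


def build_groups(vms):
    """Group VM names under their tags, mirroring the inventory script's loop.

    `vms` is a list of (name, tags) pairs -- stub data standing in for the
    ProxmoxVM objects the real script builds from the Proxmox API.
    """
    inventory = {"_meta": {"hostvars": {}}, "all": {"hosts": [], "children": []}}
    for name, tags in vms:
        inventory["_meta"]["hostvars"][name] = {}
        for tag in tags:
            if tag not in inventory:
                # First VM with this tag: register the group and seed its hosts
                inventory["all"]["children"].append(tag)
                inventory[tag] = {"hosts": [name]}
            else:
                inventory[tag]["hosts"].append(name)
    return inventory


inv = build_groups([
    ("k8s-node-a1b2c", ["kube_node", "kube_control_plane", "etcd"]),
    ("k8s-node-d3e4f", ["kube_node"]),
])
print(json.dumps(inv, indent=2))
```

Because the groups are plain top-level keys with a `hosts` list, the JSON this emits is exactly the shape Ansible expects from a dynamic inventory script, and Kubespray picks up the `kube_node` / `kube_control_plane` / `etcd` groups directly.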