├── LICENSE
└── README.md

/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2019 Marijn Koesen

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------

/README.md:
--------------------------------------------------------------------------------
# Installing a kubernetes master on proxmox behind NAT using wireguard

**NOTE: These are my personal notes on what I needed to do to get kubernetes working in proxmox using LXC and a container. They may not be complete or work on your machine; PR's are welcome.**

```
# <- execute inside the (proxmox) host
$ <- execute inside the container
```



## Step 1: Prepare the proxmox host

Ensure the `br_netfilter` module is loaded; the following file only exists when it is:

```
# cat /proc/sys/net/bridge/bridge-nf-call-iptables
```

Now make sure swappiness is 0, so that swap will not be used, otherwise kubernetes will not start:

```
# cat /proc/sys/vm/swappiness
[should be 0]
```

If it is not, set it:

```
# sysctl vm.swappiness=0
```

Disable swap; it may take some time to drain the swap area:

```
# swapoff -a
```

Now wait for swap to be empty.
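The host checks above can be bundled into one small script (a hypothetical helper of my own, not part of any tool; the paths are the standard Linux sysctl files):

```shell
# Hypothetical pre-flight helper for the proxmox host: reports the
# bridge-netfilter sysctl and the current swappiness in one go.
if [ -e /proc/sys/net/bridge/bridge-nf-call-iptables ]; then
    echo "bridge-nf-call-iptables: $(cat /proc/sys/net/bridge/bridge-nf-call-iptables)"
else
    echo "WARN: br_netfilter not loaded, try: modprobe br_netfilter"
fi

swappiness=$(cat /proc/sys/vm/swappiness)
echo "vm.swappiness: $swappiness"
if [ "$swappiness" != "0" ]; then
    echo "WARN: run 'sysctl vm.swappiness=0' and 'swapoff -a' before continuing"
fi
```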



## Step 2: Creating the kubernetes container

1) Create a new container in proxmox, making sure to give it 0 swap, and make it a privileged container
2) Edit the config file `/etc/pve/lxc/$ID.conf` and add the following part:

```
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:
lxc.mount.auto: "proc:rw sys:rw"
```

If you are using zfs on proxmox, make sure to create ext4 volumes, as zfs is not supported by kubeadm.
See: https://github.com/corneliusweig/kubernetes-lxd

```
zfs create -V 50G mypool/my-dockervol
zfs create -V 5G mypool/my-kubeletvol
mkfs.ext4 /dev/zvol/mypool/my-dockervol
mkfs.ext4 /dev/zvol/mypool/my-kubeletvol
```

Then make sure to mount them inside the container:

```
mp0: /dev/zvol/mypool/my-dockervol,mp=/var/lib/docker,backup=0
mp1: /dev/zvol/mypool/my-kubeletvol,mp=/var/lib/kubelet,backup=0
```


Next make sure `conntrack` is working in the container:

```
$ sudo conntrack -L
```

Now we can set up the VPN we need; see the wireguard documentation for installation:

```
$ sudo add-apt-repository ppa:wireguard/wireguard
$ sudo apt-get update
$ sudo apt-get install wireguard
```

Create the config:

```
$ cat > /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.0.1/32
ListenPort = 55555
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
PrivateKey = WNeaIBT40mN/asu9zXrPeSYA+4pFmZA9lUBvHTx+TG8=
MTU = 1500

[Peer]
# server2
PublicKey = NSWzZOIUHPqRxOxUmB/A7+Gs6oECYGojREvGs/ZEi2o=
AllowedIPs = 10.0.0.2/32

[Peer]
# server3
PublicKey = JhT41so2SiITMe2uqPoNB40kkwxRqyklWiILyhT1uVY=
AllowedIPs = 10.0.0.3/32
```

And start the vpn:

```
$ wg-quick up wg0
$ wg show
```

To make sure we start the vpn on boot, and to fix some other small issues, create the following rc.local file:

```
$ cat > /etc/rc.local
#!/bin/sh -e

# Kubeadm 1.15 needs /dev/kmsg to be there, but it's not in lxc; we can just use /dev/console instead
# see: https://github.com/kubernetes-sigs/kind/issues/662
if [ ! -e /dev/kmsg ]; then
    ln -s /dev/console /dev/kmsg
fi

# Make sure our VPN is set up so we can connect to the other nodes
wg-quick up wg0

# https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
mount --make-rshared /

exit 0
```

Set the permissions and reboot:

```
$ chmod +x /etc/rc.local
$ sudo reboot
```

Now make sure we have booted properly:

```
$ ls -l /dev/kmsg
[this should exist]
$ wg show
[this should show wireguard is started]
```


Now we can start installing kubernetes:

```
$ apt-get update && apt-get install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ apt-mark hold kubelet kubeadm kubectl
```

To make sure kubernetes advertises the correct ip, and connects through the VPN, we must tell kubelet to use the vpn server ip:

```
$ echo "KUBELET_EXTRA_ARGS=--node-ip=10.0.0.1" >> /etc/default/kubelet
```

Now we can set up kubeadm:

Make sure to
specify the pod-network-cidr and service-cidr if they overlap with your normal LAN network, as they do in my case (I run 192.168.x.x on my proxmox LAN). Note that I add extra apiserver-cert-extra-sans entries, so I can also connect to the api server from its LAN ip.

Some of kubeadm's preflight checks fail inside LXC, so we ignore them all; it should still work just fine.


```
$ kubeadm init --pod-network-cidr=10.250.0.0/16 --service-cidr=172.31.0.0/16 --apiserver-advertise-address 10.0.0.1 --apiserver-cert-extra-sans k8s.mydomain.com --apiserver-cert-extra-sans 192.168.1.13 --apiserver-cert-extra-sans 10.0.0.1 --ignore-preflight-errors=all
```

Next copy the kubectl config:

```
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

And apply the calico config:

```
$ curl https://docs.projectcalico.org/v3.8/manifests/calico.yaml -O
$ vim calico.yaml

Change:
IP_AUTODETECTION_METHOD: "interface=wg*"
CALICO_IPV4POOL_CIDR: "10.250.0.0/16"
```

The `IP_AUTODETECTION_METHOD` needs to match the wireguard interface (`wg*`) so that all nodes connect through the VPN; that makes sure everything works behind NAT and all inter-node traffic is encrypted.

The `CALICO_IPV4POOL_CIDR` is needed to make sure all pods are created in the correct network.

When this is correct we can apply the network config:

```
$ kubectl apply -f calico.yaml
```

Keep note of the join command, so we can add another node.
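If you do lose the join command, `kubeadm token create --print-join-command` prints a fresh one on the master. The `--discovery-token-ca-cert-hash` part is not magic either: it is the SHA-256 of the cluster CA's DER-encoded public key. The sketch below demonstrates the openssl recipe on a throwaway self-signed cert (the `demo-ca` filenames are made up for the demonstration); on the real master you would point it at `/etc/kubernetes/pki/ca.crt` instead:

```shell
# Create a throwaway CA cert purely to demonstrate the hash recipe;
# on the master, use /etc/kubernetes/pki/ca.crt instead of demo-ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
    -out demo-ca.crt -days 1 -subj "/CN=demo" 2>/dev/null

# SHA-256 over the DER-encoded public key, the format kubeadm expects
# after the "sha256:" prefix in --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in demo-ca.crt \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}')

echo "sha256:$hash"
```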

I like k8s to schedule pods on the master too, which by default does not happen, so I need to run:

```
$ kubectl taint nodes --all node-role.kubernetes.io/master-
```


# Adding another node

1) Prepare the node, in this example 'server2', with wireguard


VPN config on server2:

```
[Interface]
PrivateKey = wPMsKBkqbdz1WBx8MhYM7/GwzYd6U7DWuef1FoeUdkg=
Address = 10.0.0.2/32
MTU = 1500

[Peer]
PublicKey = 8q+JKbrXDs86lnBvAl4lx6QiCzgoOOaAc7jtjz/lFBM=
AllowedIPs = 10.0.0.0/24
Endpoint = 123.123.123.123:55555
PersistentKeepalive = 15
```

Start the vpn, and make sure master and server2 can connect to each other:

```
master > ping 10.0.0.2
server2 > ping 10.0.0.1
```

Now install `kubeadm` like before, and fix the `--node-ip` like we did before.

We can now join the node:

```
kubeadm join 10.0.0.1:6443 --token j9dg9i.u023uf023pr902u4 \
    --discovery-token-ca-cert-hash sha256:bb69ce798968473041754992927ce3b8154b526485055a0bf9fdd34c2aa34944
```

# Making sure everything is ok

A couple of things we can do to make sure it's all ok. First check that all nodes are ready and using the vpn ip:

```
$ kubectl get node -o wide
NAME      STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
master    Ready    master   9h    v1.15.2   10.0.0.1      <none>        Ubuntu 18.04 LTS        4.15.18-13-pve               docker://19.3.1
server2   Ready    <none>   9h    v1.15.3   10.0.0.2      <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://1.13.1
```

If the internal IPs are wrong, check your calico settings; beware that you might need to recreate the cluster using `kubeadm reset` first, as calico mentions that changing the config later might not have an effect.
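Since every node talks to every other node over the wg0 addresses, a quick loop over the peer IPs catches VPN problems early. A sketch (the IP list matches the example network in these notes and would need adjusting for your peers):

```shell
# Ping each wireguard peer once; an "unreachable" line means the VPN
# (or a firewall in front of UDP port 55555) needs attention.
for ip in 10.0.0.1 10.0.0.2 10.0.0.3; do
    if ping -c 1 -W 2 "$ip" >/dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip unreachable"
    fi
done
```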

Also make sure calico is fine:

```
$ curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.8.2/calicoctl
$ chmod +x calicoctl
$ DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config ./calicoctl get node -o wide
NAME      ASN       IPV4          IPV6
k8s       (64512)   10.0.0.1/32
server2   (64512)   10.0.0.2/32

$ DATASTORE_TYPE=kubernetes KUBECONFIG=~/.kube/config ./calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+--------------+-------------------+-------+----------+-------------+
| 10.0.0.2     | node-to-node mesh | up    | 07:22:50 | Established |
+--------------+-------------------+-------+----------+-------------+
```

If it's not, make sure the calico settings are correct and the firewall allows BGP (tcp port 179):

```
master > $ nc -v 10.0.0.3 179
Connection to 10.0.0.3 179 port [tcp/bgp] succeeded!
```


Now make sure inter-node connections are working as expected:

```
$ cat > test-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: shell-test
  labels:
    k8s-app: shell-test
spec:
  selector:
    matchLabels:
      name: shell-test
  template:
    metadata:
      labels:
        name: shell-test
    spec:
      containers:
      - name: shell-test
        image: debian:stable-slim
```

Now apply the daemonset:

```
$ kubectl apply -f test-daemonset.yaml
$ kubectl get pod -o wide
NAME               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
shell-test-7h995   1/1     Running   0          9h    10.250.116.129   server2
shell-test-qp6k5   1/1     Running   2          9h    10.250.77.15     master
```

Now make sure we can ping both IPs from both servers:

```
master > ping 10.250.116.129
master > ping 10.250.77.15
server2 > ping 10.250.116.129
server2 > ping 10.250.77.15
```

After this you should have a k8s cluster running, from within proxmox.

# References

[1] https://medium.com/@kvaps/run-kubernetes-in-lxc-container-f04aa94b6c9c
[2] https://gist.github.com/kvaps/25f730e0ec39dd2e5749fb6b020e71fc
[3] https://stackoverflow.com/questions/55813994/install-and-create-a-kubernetes-cluster-on-lxc-proxmox
[4] https://github.com/corneliusweig/kubernetes-lxd
[5] https://www.wireguard.com/install/#ubuntu-module-tools
[6] https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
[7] https://blog.lbdg.me/proxmox-best-performance-disable-swappiness/
--------------------------------------------------------------------------------