# Setup an HA Kubernetes cluster on bare metal

This is a Frankenstein how-to on setting up an HA cluster on bare metal / VMs.

**Resources used:**
* http://nixaid.com/kubernetes-from-scratch/
* https://github.com/kelseyhightower/kubernetes-the-hard-way
* https://github.com/cookeem/kubeadm-ha
* https://kubernetes.io/docs/setup/independent/install-kubeadm/

The writeups I found aren't really aimed at beginners. My goal is to explain each step, so that if something goes wrong you can troubleshoot it with some understanding of what that step was doing.

While this writeup is written for Debian, you can easily adapt it to CentOS/RHEL/OEL by adding the necessary yum repos. Please check the [kubeadm install guide](https://kubernetes.io/docs/setup/independent/install-kubeadm/ "kubeadm install guide") for more info.

### My current setup:
* 3 VMs acting as master nodes, which will run the kube-apiserver, controller-manager, and scheduler. etcd will also run on these 3 nodes.
* 1 VM acting as a minion (due to compute resource constraints in my lab at the moment) -- this can be scaled at will.

### Server specs:
* 2 vCPU
* 2 GB RAM
* No swap (or disable swap) -- https://github.com/kubernetes/kubernetes/issues/53533
* 16 GB disk
* No SELinux / disabled SELinux

### OS:
* Debian 9.2

### Node info:
Node Name|IP|Purpose
-|-|-
kub01|10.0.0.21|k8s master / etcd node
kub02|10.0.0.22|k8s master / etcd node
kub03|10.0.0.26|k8s master / etcd node
kubminion01|10.0.0.23|k8s minion
kublb01|10.0.0.27|nginx proxy/lb

Creating the VMs and installing the OS are out of scope here. I assume you have the base OS up and running. If not, please check Google for tutorials on how to get your infrastructure up.

You will also need to make sure that either your resolvers can resolve the hostnames of the nodes, or you have the master, minion, and lb hostnames added to your hosts file. As a last resort, just use the IPs everywhere to reach the hosts.
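For example, if you don't have working DNS for these names, a minimal /etc/hosts snippet matching the node table above, added on every node, is enough:
```
cat <<__EOF__>>/etc/hosts
10.0.0.21 kub01
10.0.0.22 kub02
10.0.0.26 kub03
10.0.0.23 kubminion01
10.0.0.27 kublb01 kublb01.home
__EOF__
```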
### Steps to run on all nodes:
```
apt-get update && apt-get install -y curl apt-transport-https
```
### Install docker 17.03 -- higher versions might work, but they're not supported by kubernetes at this time:
```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/docker.list
deb https://download.docker.com/linux/$(lsb_release -si | tr '[:upper:]' '[:lower:]') $(lsb_release -cs) stable
EOF
apt-get update && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.03 | head -1 | awk '{print $3}')
```
**Note**: you might have to fix /etc/apt/sources.list.d/docker.list by hand if your dist is not detected correctly, or the resulting repo is not found.

Enable bridged traffic to pass through iptables, and allow forwarding:
```
cat <<__EOF__ > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
__EOF__
sysctl --system
sysctl -p /etc/sysctl.d/k8s.conf
iptables -P FORWARD ACCEPT
```
## Install kubeadm, kubectl, kubelet
```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
```

## Generate required certs:
```
mkdir -p ~/k8s/crt ~/k8s/key ~/k8s/csr
cat <<__EOF__>~/k8s/openssl.cnf
[ req ]
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_ca ]
basicConstraints = critical, CA:TRUE
keyUsage = critical, digitalSignature, keyEncipherment, keyCertSign
[ v3_req_etcd ]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names_etcd
[ alt_names_etcd ]
DNS.1 = kub01
DNS.2 = kub02
DNS.3 = kub03
IP.1 = 10.0.0.21
IP.2 = 10.0.0.22
IP.3 = 10.0.0.26
__EOF__
```
### Generate the etcd CA which will be used to sign all our etcd certs:
```
openssl genrsa -out ~/k8s/key/etcd-ca.key 4096
openssl req -x509 -new -sha256 -nodes -key ~/k8s/key/etcd-ca.key -days 3650 -out ~/k8s/crt/etcd-ca.crt -subj "/CN=etcd-ca" -extensions v3_ca -config ~/k8s/openssl.cnf
```
### Generate the etcd local and peer certs:
```
openssl genrsa -out ~/k8s/key/etcd.key 4096
openssl req -new -sha256 -key ~/k8s/key/etcd.key -subj "/CN=etcd" -out ~/k8s/csr/etcd.csr
openssl x509 -req -in ~/k8s/csr/etcd.csr -sha256 -CA ~/k8s/crt/etcd-ca.crt -CAkey ~/k8s/key/etcd-ca.key -CAcreateserial -out ~/k8s/crt/etcd.crt -days 365 -extensions v3_req_etcd -extfile ~/k8s/openssl.cnf
openssl genrsa -out ~/k8s/key/etcd-peer.key 4096
openssl req -new -sha256 -key ~/k8s/key/etcd-peer.key -subj "/CN=etcd-peer" -out ~/k8s/csr/etcd-peer.csr
openssl x509 -req -in ~/k8s/csr/etcd-peer.csr -sha256 -CA ~/k8s/crt/etcd-ca.crt -CAkey ~/k8s/key/etcd-ca.key -CAcreateserial -out ~/k8s/crt/etcd-peer.crt -days 365 -extensions v3_req_etcd -extfile ~/k8s/openssl.cnf
```
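Optionally, sanity-check the generated certs before distributing them -- the SAN list should contain the etcd node names and IPs from openssl.cnf, and both leaf certs should verify against the CA:
```
openssl x509 -in ~/k8s/crt/etcd.crt -noout -text | grep -A1 'Subject Alternative Name'
openssl verify -CAfile ~/k8s/crt/etcd-ca.crt ~/k8s/crt/etcd.crt ~/k8s/crt/etcd-peer.crt
```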
### Setup etcd on all 3 master nodes
Run these as root, if you can. Otherwise, make adjustments to run them as the proper user.

### Download the etcd binaries:
```
ETCD_VER=v3.2.9
GOOGLE_URL=https://storage.googleapis.com/etcd
GITHUB_URL=https://github.com/coreos/etcd/releases/download
DOWNLOAD_URL=${GOOGLE_URL}
mkdir ~/etcd_${ETCD_VER}
cd ~/etcd_${ETCD_VER}
cat <<__EOF__>etcd_${ETCD_VER}-install.sh
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o etcd-${ETCD_VER}-linux-amd64.tar.gz
tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz -C .
__EOF__
chmod +x etcd_${ETCD_VER}-install.sh
./etcd_${ETCD_VER}-install.sh
```
### Create the etcd systemd service entry -- remember to update this with your correct host IPs and node names
```
cd ~
cat <<__EOF__>~/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
  --name kub01 \\
  --cert-file=/etc/etcd/pki/etcd.crt \\
  --key-file=/etc/etcd/pki/etcd.key \\
  --peer-cert-file=/etc/etcd/pki/etcd-peer.crt \\
  --peer-key-file=/etc/etcd/pki/etcd-peer.key \\
  --trusted-ca-file=/etc/etcd/pki/etcd-ca.crt \\
  --peer-trusted-ca-file=/etc/etcd/pki/etcd-ca.crt \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://10.0.0.21:2380 \\
  --listen-peer-urls https://10.0.0.21:2380 \\
  --listen-client-urls https://10.0.0.21:2379,http://127.0.0.1:2379 \\
  --advertise-client-urls https://10.0.0.21:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster kub01=https://10.0.0.21:2380,kub02=https://10.0.0.22:2380,kub03=https://10.0.0.26:2380 \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
__EOF__

ETCD_VER=v3.2.9
for master in kub01 kub02 kub03; do \
  ssh ${master} "test -d /etc/etcd/pki && rm -rf /etc/etcd/pki" ; \
  ssh ${master} "test -d /var/lib/etcd && rm -rf /var/lib/etcd" ; \
  ssh ${master} "mkdir -p /etc/etcd/pki ; mkdir -p /var/lib/etcd" ; \
  scp ~/k8s/crt/etcd* ~/k8s/key/etcd* ${master}:/etc/etcd/pki/ ; \
  scp etcd.service ${master}:/etc/systemd/system/etcd.service ; \
  scp ~/etcd_${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64/etcd ${master}:/usr/local/bin ; \
  scp ~/etcd_${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64/etcdctl ${master}:/usr/local/bin ; \
done
```

**Important**: Update /etc/systemd/system/etcd.service on each node with the correct values for the following before starting etcd:
```
--name
--initial-advertise-peer-urls
--listen-peer-urls
--listen-client-urls
--advertise-client-urls
```

Start etcd:
```
for master in kub01 kub02 kub03; do \
  ssh ${master} "systemctl daemon-reload" ; \
  ssh ${master} "systemctl start etcd" ; \
done
```
### Verify etcd cluster health, and member list:
```
etcdctl --ca-file /etc/etcd/pki/etcd-ca.crt --cert-file /etc/etcd/pki/etcd.crt --key-file /etc/etcd/pki/etcd.key cluster-health
```
```
member 33d87194523dae28 is healthy: got healthy result from https://10.0.0.22:2379
member c4d8e71bc32e75e7 is healthy: got healthy result from https://10.0.0.26:2379
member d39138844daf67cb is healthy: got healthy result from https://10.0.0.21:2379
cluster is healthy
```
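If you'll be poking at etcd more than once, etcdctl can also pick up the TLS settings from environment variables, which saves repeating the flags on every command (same cert paths as above):
```
export ETCDCTL_CA_FILE=/etc/etcd/pki/etcd-ca.crt
export ETCDCTL_CERT_FILE=/etc/etcd/pki/etcd.crt
export ETCDCTL_KEY_FILE=/etc/etcd/pki/etcd.key
etcdctl cluster-health
```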
```
etcdctl --ca-file /etc/etcd/pki/etcd-ca.crt --cert-file /etc/etcd/pki/etcd.crt --key-file /etc/etcd/pki/etcd.key member list
```
```
33d87194523dae28: name=kub02 peerURLs=https://10.0.0.22:2380 clientURLs=https://10.0.0.22:2379 isLeader=true
c4d8e71bc32e75e7: name=kub03 peerURLs=https://10.0.0.26:2380 clientURLs=https://10.0.0.26:2379 isLeader=false
d39138844daf67cb: name=kub01 peerURLs=https://10.0.0.21:2380 clientURLs=https://10.0.0.21:2379 isLeader=false
```
### Create the kubeadm init file:
Update this so that advertiseAddress (the address where the kube-apiserver will listen) matches the IP of your first master, and the etcd endpoints match your etcd nodes. Also, update apiServerCertSANs so that your correct FQDNs/shortnames/IPs are listed. kubeadm will use this config file to generate the certs that it needs, and also to configure the etcd endpoints for your cluster.

You can also change the podSubnet, but remember to make note of it as we'll also have to tell flannel the correct podSubnet to use later. 10.244.0.0/16 is the subnet which the flannel manifest uses by default: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
cat <<__EOF__>~/kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 10.0.0.21
  bindPort: 6443
etcd:
  endpoints:
  - https://10.0.0.21:2379
  - https://10.0.0.22:2379
  - https://10.0.0.26:2379
  caFile: /etc/etcd/pki/etcd-ca.crt
  certFile: /etc/etcd/pki/etcd.crt
  keyFile: /etc/etcd/pki/etcd.key
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- kub01
- kub02
- kub03
- kublb01.home
- kublb01
- 10.0.0.21
- 10.0.0.22
- 10.0.0.26
- 10.0.0.27
certificatesDir: /etc/kubernetes/pki/
__EOF__
scp ~/kubeadm-init.yaml root@kub01:
```
**Setup k8s on the first master**:
```
ssh root@kub01
useradd -m -d /home/kubeadmin -s /bin/bash -G docker,sudo kubeadmin
kubeadm init --config ~/kubeadm-init.yaml
```
This will take a few minutes to complete.

Once done, you should see instructions on copying the admin config to your user's home directory, and a token which minions can use to join the cluster. You can make note of the join command to use later, or you can generate your own token (as we'll do below).

**Note**: In 1.8+, tokens expire after 24 hours, unless you specify a different TTL.

```
su - kubeadmin
rm -rf .kube
mkdir .kube
sudo cp /etc/kubernetes/admin.conf .kube/config
sudo chown $(id -u):$(id -g) .kube/config
```
### Install flannel:
```
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
If you changed the podSubnet above, download this file, update the "Network": "10.244.0.0/16" line to match what you chose, and then run `kubectl apply -f kube-flannel.yml` to perform the flannel install/setup.
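For example, if you had picked 10.245.0.0/16 (just an example value) as your podSubnet, something like this would do it:
```
curl -sLO https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# replace the default flannel network with your chosen podSubnet
sed -i 's#10.244.0.0/16#10.245.0.0/16#g' kube-flannel.yml
kubectl apply -f kube-flannel.yml
```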
Verify that the master reports as Ready:
```
kubectl get nodes
```
You should see something like this:
```
NAME      STATUS    ROLES     AGE       VERSION
kub01     Ready     master    1d        v1.8.1
```

If the master reports as NotReady, run `kubectl describe node kub01` and look at the 'Conditions' section to see if any errors are reported. If you see anything about CNI or the network not being configured, check and verify that your flannel (or whatever else you chose to use) apply succeeded. For other errors, try Google, or the k8s Slack channel.

Before we go further, we need to make one change to the kube-apiserver manifest. In order to do so, we need to stop the services first.
```
sudo systemctl stop kubelet docker
sudo systemctl status kubelet docker
```

Open /etc/kubernetes/manifests/kube-apiserver.yaml in an editor, and look for the following line:
```
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
```
Change this to the following, dropping NodeRestriction (otherwise the additional masters we set up later, whose kubelets reuse credentials generated on kub01, would be blocked from registering):
```
--admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,ResourceQuota
```

Restart the services:
```
sudo systemctl start docker kubelet
sudo systemctl status docker kubelet
```

### Join your minion(s):
If you made a note of the token and full join command which kubeadm generated, and it has not been more than 24 hours yet, you can just copy-paste that full command as root on your minion to join the cluster. Otherwise, generate a new token on your master, and use that to have your minion join the cluster.

### Generate the sha256 ca hash:
```
sudo openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256 -hex
```
**Your generated hash will look similar to this -- the string after `(stdin)=` is your hash:**
```
(stdin)= f11ea8662ab0f0931de5cf9671cc7c97d640e7fc53c7dadf23rdsf24easdfwe1
```

### Generate a token:
```
sudo kubeadm token create --groups system:bootstrappers:kubeadm:default-node-token
```
**Make note of the token this command outputs.**

**You can also list all current tokens with the following command:**
```
sudo kubeadm token list
```
### Have your minion(s) join the cluster:
Substitute the token and CA cert hash you generated above:
```
ssh root@kubminion01
kubeadm join --token <token> kub01:6443 --discovery-token-ca-cert-hash sha256:<hash>
```

### Check on the master to see if the minion joined:
```
su - kubeadmin
kubectl get nodes
NAME          STATUS    ROLES     AGE       VERSION
kub01         Ready     master    1d        v1.8.1
kubminion01   Ready     <none>    1d        v1.8.1
```

### Do a test deployment to make sure things are working correctly up to this point:
```
kubectl run nginx --image=nginx:alpine
kubectl get pods -owide
```
You should see the pod and container getting created on one of the minions -- the -owide flag shows the hostname of the node where the scheduler placed the deployment.
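For a slightly more end-to-end check, you can expose the test deployment via a NodePort, curl it, and then clean up. The `<nodeport>` below is a placeholder -- use whatever NodePort `kubectl get svc` reports:
```
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get svc nginx
# curl the assigned NodePort (the 3xxxx port shown above) on the minion
curl -I http://kubminion01:<nodeport>
kubectl delete service,deployment nginx
```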
### Setup the other masters:
```
ssh root@kub01
for master in kub02 kub03; do \
  rsync -av -e ssh --progress /etc/kubernetes ${master}:/etc/ ; \
done

ssh root@kub02
cd /etc/kubernetes && \
MY_IP=$(hostname -I | awk '{print $1}') && \
MY_HOSTNAME=$(hostname -s) && \
echo ${MY_IP} && \
echo ${MY_HOSTNAME} && \
sed -i.bak "s/kub01/${MY_HOSTNAME}/g" /etc/kubernetes/*.conf && \
sed -i.bak "s/10.0.0.21/${MY_IP}/g" /etc/kubernetes/*.conf && \
sed -i.bak "s/advertise-address=10.0.0.21/advertise-address=${MY_IP}/g" /etc/kubernetes/manifests/kube-apiserver.yaml && \
systemctl daemon-reload && \
systemctl restart docker && \
systemctl restart kubelet

ssh root@kub03
cd /etc/kubernetes && \
MY_IP=$(hostname -I | awk '{print $1}') && \
MY_HOSTNAME=$(hostname -s) && \
echo ${MY_IP} && \
echo ${MY_HOSTNAME} && \
sed -i.bak "s/kub01/${MY_HOSTNAME}/g" /etc/kubernetes/*.conf && \
sed -i.bak "s/10.0.0.21/${MY_IP}/g" /etc/kubernetes/*.conf && \
sed -i.bak "s/advertise-address=10.0.0.21/advertise-address=${MY_IP}/g" /etc/kubernetes/manifests/kube-apiserver.yaml && \
systemctl daemon-reload && \
systemctl restart docker && \
systemctl restart kubelet
```

### Check if the two new masters joined the cluster
**on kub01**:
```
watch kubectl get nodes
```
**It should show something like this, eventually:**
```
NAME          STATUS    ROLES     AGE       VERSION
kub01         Ready     master    1d        v1.8.1
kub02         Ready     <none>    1d        v1.8.1
kub03         Ready     <none>    1d        v1.8.1
kubminion01   Ready     <none>    1d        v1.8.1
```
### Mark the two new masters as master nodes:
```
kubectl patch node kub02 -p '{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}'

kubectl patch node kub03 -p '{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}'
```

Run `kubectl get nodes` again to verify that all master nodes show as master.

### Setup your nginx proxy/lb
Run these on kublb01:
```
apt-get install nginx nginx-extras
cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bak
cat <<'__EOF__' >/etc/nginx/nginx.conf
worker_processes 1;
include /etc/nginx/modules-enabled/*.conf;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
stream {
    upstream apiserver {
        server 10.0.0.21:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.0.0.22:6443 weight=5 max_fails=3 fail_timeout=30s;
        server 10.0.0.26:6443 weight=5 max_fails=3 fail_timeout=30s;
        #server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
        #server ${HOST_IP}:6443 weight=5 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 6443;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass apiserver;
    }
}
__EOF__
systemctl restart nginx
systemctl status nginx
```
**Make sure to update the IPs in the config to match your env.**

### Update the kube-proxy config so that requests go through your lb:
```
kubectl edit configmap kube-proxy -n kube-system
```
**Look for the line starting with `server:` and update the IP/hostname to match your lb. Save the file once done.**

### Verify communication:
On each master:

```
kubectl get pods --all-namespaces -owide
```
**You should see output similar to this**:
```
NAME                            READY     STATUS    RESTARTS   AGE       IP           NODE
kube-apiserver-kub01            1/1       Running   1          1d        10.0.0.21    kub01
kube-apiserver-kub02            1/1       Running   1          1d        10.0.0.22    kub02
kube-apiserver-kub03            1/1       Running   1          1d        10.0.0.26    kub03
kube-controller-manager-kub01   1/1       Running   2          1d        10.0.0.21    kub01
kube-controller-manager-kub02   1/1       Running   1          1d        10.0.0.22    kub02
kube-controller-manager-kub03   1/1       Running   1          1d        10.0.0.26    kub03
kube-dns-545bc4bfd4-mll6z       3/3       Running   3          1d        10.244.0.6   kub01
kube-flannel-ds-46r9v           1/1       Running   1          1d        10.0.0.21    kub01
kube-flannel-ds-5ck84           1/1       Running   4          1d        10.0.0.26    kub03
kube-flannel-ds-f75hz           1/1       Running   2          1d        10.0.0.22    kub02
kube-flannel-ds-kjzdw           1/1       Running   0          1d        10.0.0.23    kubminion01
kube-proxy-6hxlq                1/1       Running   1          1d        10.0.0.26    kub03
kube-proxy-llzd6                1/1       Running   2          1d        10.0.0.21    kub01
kube-proxy-lpx67                1/1       Running   0          1d        10.0.0.23    kubminion01
kube-proxy-s754k                1/1       Running   1          1d        10.0.0.22    kub02
kube-scheduler-kub01            1/1       Running   2          1d        10.0.0.21    kub01
kube-scheduler-kub02            1/1       Running   1          1d        10.0.0.22    kub02
kube-scheduler-kub03            1/1       Running   1          1d        10.0.0.26    kub03
```
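As a final smoke test, you can check that cluster DNS (the kube-dns pod above) resolves service names from a throwaway pod. The pod name dns-test is just an arbitrary choice, and some busybox image versions ship a flaky nslookup, so a failure here is not necessarily fatal:
```
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup kubernetes.default
```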