├── README.md
├── download_binary.sh
├── group_vars
│   └── all
├── inventory
│   └── hosts
├── new_nodes.yml
├── roles
│   ├── calico
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   ├── distribute_calicoctl.yml
│   │   │   ├── install_calico.yml
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── calico-typha.yaml.j2
│   │       ├── calico.yaml.j2
│   │       ├── calicoctl.cfg.j2
│   │       ├── calicoctl.sh.j2
│   │       └── calicoctl.yaml.j2
│   ├── cert
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   ├── generate_certs.yml
│   │   │   ├── install_cfssl.yml
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── admin-csr.json.j2
│   │       ├── ca-config.json.j2
│   │       ├── ca-csr.json.j2
│   │       ├── kube-proxy-csr.json.j2
│   │       ├── metrics-server-csr.json.j2
│   │       └── server-csr.json.j2
│   ├── coredns
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── coredns.yaml.j2
│   ├── etcd
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   ├── config_network.yml
│   │   │   ├── create_dir.yml
│   │   │   ├── distribute_etcd.yml
│   │   │   ├── main.yml
│   │   │   └── start_etcd.yml
│   │   └── templates
│   │       ├── config_network.sh.j2
│   │       ├── etcd-3.3.yml.j2
│   │       ├── etcd-3.4.yml.j2
│   │       ├── etcd.service.j2
│   │       └── etcdctl.sh.j2
│   ├── flannel
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── files
│   │   │   └── remove-docker0.sh
│   │   ├── tasks
│   │   │   ├── distribute_cni_plugins.yml
│   │   │   ├── distribute_flannel.yml
│   │   │   ├── distribute_link_etcd_cert.yml
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── 10-flannel.conflist.j2
│   │       ├── flanneld.j2
│   │       └── flanneld.service.j2
│   ├── init
│   │   ├── files
│   │   │   ├── 99-k8s.conf
│   │   │   ├── daemon.json
│   │   │   ├── ipvs.modules
│   │   │   └── kubernetes.repo
│   │   ├── tasks
│   │   │   ├── copy_hostfile.yml
│   │   │   ├── disable_selinux.yml
│   │   │   ├── disable_swap.yml
│   │   │   ├── install_docker.yml
│   │   │   ├── install_some_pkgs.yml
│   │   │   ├── main.yml
│   │   │   ├── start_docker.yml
│   │   │   ├── stop_firewall.yml
│   │   │   ├── support_ipvs.yml
│   │   │   ├── sysctl.yml
│   │   │   └── timezone.yml
│   │   └── templates
│   │       └── daemon.json.j2
│   ├── label_master
│   │   └── tasks
│   │       ├── main.yml
│   │       └── make_master_labels_and_taints.yml
│   ├── master
│   │   ├── defaults
│   │   │   └── main.yml
│   │   ├── files
│   │   │   └── tls-instructs-csr.yaml
│   │   ├── tasks
│   │   │   ├── copy_admin_config.yml
│   │   │   ├── create_dir.yml
│   │   │   ├── create_some_roles.yml
│   │   │   ├── distribute_apiserver.yml
│   │   │   ├── distribute_controller_manager.yml
│   │   │   ├── distribute_k8s_file.yml
│   │   │   ├── distribute_scheduler.yml
│   │   │   ├── genarate_kube_config.yml
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── kube-apiserver.j2
│   │       ├── kube-apiserver.service.j2
│   │       ├── kube-controller-manager.j2
│   │       ├── kube-controller-manager.service.j2
│   │       ├── kube-scheduler.j2
│   │       ├── kube-scheduler.service.j2
│   │       └── kubectl.sh.j2
│   ├── metrics-server
│   │   ├── files
│   │   │   └── metrics.yaml
│   │   └── tasks
│   │       └── main.yml
│   ├── nginx
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── lb.tcp.j2
│   └── node
│       ├── tasks
│       │   ├── create_dir.yml
│       │   ├── distribute_k8s_file.yml
│       │   ├── distribute_kubelet.yml
│       │   ├── distribute_proxy.yml
│       │   └── main.yml
│       └── templates
│           ├── kube-proxy-config.yml.j2
│           ├── kube-proxy.conf.j2
│           ├── kube-proxy.service.j2
│           ├── kubelet-config.yml.j2
│           ├── kubelet.conf.j2
│           └── kubelet.service.j2
├── site.yml
├── tests
│   └── myapp.yaml
└── tools
    ├── clean.sh
    └── move_pkg.sh

/README.md:
--------------------------------------------------------------------------------
# k8s-ansible

Before you start, edit `group_vars/all` and `inventory/hosts` to match your own environment.

## Install Ansible and set up passwordless SSH
```bash
# Install Ansible
yum install -y ansible
# Write every host into the control machine's /etc/hosts and set each machine's hostname
vim /etc/hosts
...
your-ip-1 your-host-name-1
your-ip-2 your-host-name-2
your-ip-3 your-host-name-3
...
# Set the hostname on each machine
hostnamectl set-hostname your-host-name-x
# Passwordless SSH login
ssh-copy-id root@your-host-ip-or-name-x
```
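Before running anything else, it is worth confirming that Ansible can actually reach every host over the freshly distributed keys. A minimal check against this repo's inventory:

```bash
# Every host should answer "pong"
ansible -i inventory/hosts all -m ping
```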
## Versions used

K8S_SERVER_VER=1.18.8

ETCD_VER=3.4.9

FLANNEL_VER=0.12.0

CNI_PLUGIN_VER=0.8.6

CALICO_VER=3.15.0

DOCKER_VER=19.03.10

## Network ranges

Pod CIDR: 10.244.0.0/16

Service CIDR: 10.96.0.0/12

kubernetes internal address: 10.96.0.1

CoreDNS address: 10.96.0.10


## Host layout

| Hostname      | IP           |   Roles and components   |                        k8s components                        |
| ------------- | ------------ | :----------------------: | :----------------------------------------------------------: |
| centos7-nginx | 10.10.10.127 | nginx layer-4 proxy (control machine) | nginx ansible |
| centos7-a     | 10.10.10.128 | master,node,etcd,flannel | kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy |
| centos7-b     | 10.10.10.129 | master,node,etcd,flannel | kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy |
| centos7-c     | 10.10.10.130 | master,node,etcd,flannel | kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy |
| centos7-d     | 10.10.10.131 | node,flannel             | kubelet kube-proxy |
| centos7-e     | 10.10.10.132 | node,flannel             | kubelet kube-proxy |

## Notes
If there is a load balancer in front of the cluster, use layer-4 (TCP) mode on port 6443 and comment out lines 2-6 of `site.yml`. Any of the master machines can then also act as the control machine.

If there is no load balancer, prepare Nginx yourself, preferably on a dedicated machine.


## Download the binary packages in advance
Run the `download_binary.sh` script to download the packages:
```bash
bash download_binary.sh
```
If the downloads fail, fetch the packages into `/opt/pkg/` on the control machine yourself first:
```bash
wget https://github.com/containernetworking/plugins/releases/download/v${CNI_PLUGIN_VER}/cni-plugins-linux-amd64-v${CNI_PLUGIN_VER}.tgz && \
wget https://github.com/coreos/flannel/releases/download/v${FLANNEL_VER}/flannel-v${FLANNEL_VER}-linux-amd64.tar.gz && \
wget https://dl.k8s.io/v${K8S_SERVER_VER}/kubernetes-server-linux-amd64.tar.gz && \
wget https://github.com/etcd-io/etcd/releases/download/v${ETCD_VER}/etcd-v${ETCD_VER}-linux-amd64.tar.gz && \
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && \
wget https://github.com/projectcalico/calicoctl/releases/download/v${CALICO_VER}/calicoctl
```
Then run the `tools/move_pkg.sh` script to unpack the packages into their target directories:
```bash
bash tools/move_pkg.sh
```

## Edit the control machine's hosts file

```bash
[root@centos7-nginx k8s-ansible]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.10.127 centos7-nginx lb.5179.top inner-lb.5179.top ng.5179.top ng-inner.5179.top
10.10.10.128 centos7-a
10.10.10.129 centos7-b
10.10.10.130 centos7-c
10.10.10.131 centos7-d
10.10.10.132 centos7-e
```

## Run
```bash
ansible-playbook -i inventory/hosts site.yml
```
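The playbook is organized into tagged roles (the sections below use `-t cert` and `-t make_master_labels_and_taints` the same way), so individual phases can be re-run on their own. A sketch, assuming `init` and `node` are among the tags wired up in `site.yml`:

```bash
# Re-run only one phase
ansible-playbook -i inventory/hosts site.yml -t init
# Restrict a run to a single machine
ansible-playbook -i inventory/hosts site.yml -t node --limit 10.10.10.131
```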
## Label and taint the `Master` nodes
```bash
ansible-playbook -i inventory/hosts site.yml -t make_master_labels_and_taints
```
After it finishes you should see the following:
```bash
[root@centos7-nginx k8s-ansible]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
10.10.10.128   Ready    master   7m48s   v1.18.8
10.10.10.129   Ready    master   7m49s   v1.18.8
10.10.10.130   Ready    master   7m49s   v1.18.8
10.10.10.131   Ready    <none>   7m49s   v1.18.8
10.10.10.132   Ready    <none>   7m49s   v1.18.8

[root@centos7-nginx k8s-ansible]# kubectl describe nodes 10.10.10.128 |grep -C 3 Taints
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 25 Jun 2020 17:38:09 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  10.10.10.128
```
> The k8s node names here have since been changed from the original IPs to the nodes' hostnames.

You can also do it by hand:
```bash
# Give a node the master role
kubectl label nodes xxx node-role.kubernetes.io/master=
# Give a node the node role
kubectl label nodes xxx node-role.kubernetes.io/node=
# Once tainted as unschedulable, a master node will not run pods unless they tolerate the taint
kubectl taint nodes xxx node-role.kubernetes.io/master=:NoSchedule
# The opposite of the above: use the master as a regular node
kubectl taint nodes xxx node-role.kubernetes.io/master-
```

## Regenerating certificates
Once the certificates have been generated, they will not be regenerated unless you delete them by hand.

To force regeneration, add `CERT_POLICY=update`; the command below also backs up the old certificates:
```bash
ansible-playbook -i inventory/hosts site.yml -t cert -e 'CERT_POLICY=update'
```

## Adding new nodes

First add the node addresses under `[new_nodes]` in `inventory/hosts`.

Then set up passwordless SSH, set the hostname, and add the new entries to the control machine's `/etc/hosts`.

Then run:
```bash
ansible-playbook -i inventory/hosts new_nodes.yml
```
## Testing the cluster
```
[root@centos7-nginx k8s-ansible]# kubectl apply -f tests/myapp.yaml
```
Then run the following commands for a basic functional check:
```bash
[root@centos7-nginx k8s-ansible]# kubectl exec -it busybox -- sh
/ #
/ # nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ # nslookup myapp
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      myapp
Address 1: 10.102.233.224 myapp.default.svc.cluster.local
/ #
/ # curl myapp/hostname.html
myapp-5cbd66595b-p6zlp
```
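The same check can be made from the control machine without entering the pod; the ClusterIP shown will differ per cluster:

```bash
kubectl get svc,endpoints myapp
# The Service ClusterIP should match the address nslookup returned inside busybox
```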
## Cleaning up the cluster

```bash
bash ./tools/clean.sh
```
--------------------------------------------------------------------------------
/download_binary.sh:
--------------------------------------------------------------------------------
#!/bin/bash
# Download

#cni_plugins_version=0.8.6
#etcd_version=3.4.9
#kubernetes_version=1.18.8
#flannel_version=0.12.0
#calico_version=3.15.0

cni_plugins_version=$(awk '/^cni_plugins_version/ {print $2}' ./group_vars/all)
etcd_version=$(awk '/^etcd_version/ {print $2}' ./group_vars/all)
kubernetes_version=$(awk '/^kubernetes_version/ {print $2}' ./group_vars/all)
flannel_version=$(awk '/^flannel_version/ {print $2}' ./group_vars/all)
calico_version=$(awk '/^calico_version/ {print $2}' ./group_vars/all)
network_type=$(awk '/^network_type/ {print $2}' ./group_vars/all)

mkdir -p /opt/pkg
cd /opt/pkg
#wget https://github.com/containernetworking/plugins/releases/download/v${cni_plugins_version}/cni-plugins-linux-amd64-v${cni_plugins_version}.tgz && \
#wget https://github.com/coreos/flannel/releases/download/v${flannel_version}/flannel-v${flannel_version}-linux-amd64.tar.gz && \
#wget https://dl.k8s.io/v${kubernetes_version}/kubernetes-server-linux-amd64.tar.gz && \
#wget https://github.com/etcd-io/etcd/releases/download/v${etcd_version}/etcd-v${etcd_version}-linux-amd64.tar.gz && \
#wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
#wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
#wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && \
#wget https://github.com/projectcalico/calicoctl/releases/download/v${calico_version}/calicoctl

if [ -f ./cni-plugins/${cni_plugins_version}/cni-plugins-linux-amd64-v${cni_plugins_version}.tgz ];then
    echo "[INFO] cni-plugins already exists"
else
    wget https://github.com/containernetworking/plugins/releases/download/v${cni_plugins_version}/cni-plugins-linux-amd64-v${cni_plugins_version}.tgz

    if [ -f ./cni-plugins-linux-amd64-v${cni_plugins_version}.tgz ];then
        mkdir -p cni-plugins/${cni_plugins_version}
        mv cni-plugins-linux-amd64-v${cni_plugins_version}.tgz ./cni-plugins/${cni_plugins_version}/ && \
        cd ./cni-plugins/${cni_plugins_version}/ && \
        tar xf cni-plugins-linux-amd64-v${cni_plugins_version}.tgz
        echo "[INFO] cni-plugins downloaded and unpacked"
        cd - &>/dev/null
    fi
fi

if [ -f ./flannel/${flannel_version}/flannel-v${flannel_version}-linux-amd64.tar.gz ];then
    echo "[INFO] flannel already exists"
else
    wget https://github.com/coreos/flannel/releases/download/v${flannel_version}/flannel-v${flannel_version}-linux-amd64.tar.gz

    if [ -f ./flannel-v${flannel_version}-linux-amd64.tar.gz ];then
        mkdir -p flannel/${flannel_version}
        mv ./flannel-v${flannel_version}-linux-amd64.tar.gz ./flannel/${flannel_version}/ && \
        cd ./flannel/${flannel_version}/ && \
        tar xf flannel-v${flannel_version}-linux-amd64.tar.gz
        echo "[INFO] flannel downloaded and unpacked"
        cd - &>/dev/null
    fi
fi

if [ -f ./calico/${calico_version}/calicoctl ];then
    echo "[INFO] calicoctl already exists"
else
    wget https://github.com/projectcalico/calicoctl/releases/download/v${calico_version}/calicoctl

    if [ -f ./calicoctl ];then
        mkdir -p calico/${calico_version}
        mv ./calicoctl ./calico/${calico_version}/ && \
        cd ./calico/${calico_version}/ && \
        echo "[INFO] calicoctl downloaded"
        cd - &>/dev/null
    fi
fi

if [ -f ./k8s/${kubernetes_version}/kubernetes-server-linux-amd64.tar.gz ];then
    echo "[INFO] k8s server already exists"
else
    wget https://dl.k8s.io/v${kubernetes_version}/kubernetes-server-linux-amd64.tar.gz

    if [ -f ./kubernetes-server-linux-amd64.tar.gz ];then
        mkdir -p k8s/${kubernetes_version}
        mv kubernetes-server-linux-amd64.tar.gz ./k8s/${kubernetes_version}/ && \
        cd ./k8s/${kubernetes_version}/ && \
        tar xf kubernetes-server-linux-amd64.tar.gz
        echo "[INFO] k8s-server downloaded and unpacked"
        cd - &>/dev/null
    fi
fi

if [ -f ./etcd/etcd-v${etcd_version}-linux-amd64.tar.gz ];then
    echo "[INFO] etcd already exists"
else
    wget https://github.com/etcd-io/etcd/releases/download/v${etcd_version}/etcd-v${etcd_version}-linux-amd64.tar.gz

    if [ -f etcd-v${etcd_version}-linux-amd64.tar.gz ];then
        mkdir -p etcd
        mv etcd-v${etcd_version}-linux-amd64.tar.gz ./etcd/ && \
        cd ./etcd/ && \
        tar xf etcd-v${etcd_version}-linux-amd64.tar.gz
        echo "[INFO] etcd downloaded and unpacked"
        cd - &>/dev/null
    fi
fi

if [ -f ./cfssl/cfssl_linux-amd64 ];then
    echo "[INFO] cfssl already exists"
else
    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

    if [ -f cfssl_linux-amd64 ];then
        mkdir -p cfssl
        mv cfssl_linux-amd64 ./cfssl/ && \
        mv cfssljson_linux-amd64 ./cfssl/ && \
        mv cfssl-certinfo_linux-amd64 ./cfssl/
        echo "[INFO] cfssl packages downloaded"
        cd - &>/dev/null
    fi
fi
--------------------------------------------------------------------------------
/group_vars/all:
--------------------------------------------------------------------------------
# Working directory; changing it to anything else is not recommended
workdir: /opt

# Where the binary packages are stored
package_root_dir: "{{ workdir }}/pkg"

# Where certificates are stored on the Ansible control machine
cert_root_dir: "{{ workdir }}/ssl"

# Default path of the k8s admin kubeconfig
kube_config_dir: /root/.kube

# etcd version
# https://github.com/etcd-io/etcd/releases/download/v{{etcd_version}}/etcd-v{{etcd_version}}-linux-amd64.tar.gz
etcd_version: 3.4.9

# kubernetes version
# https://dl.k8s.io/v{{kubernetes_version}}/kubernetes-server-linux-amd64.tar.gz
kubernetes_version: 1.18.8

# flannel version
# https://github.com/coreos/flannel/releases/download/v{{flannel_version}}/flannel-v{{flannel_version}}-linux-amd64.tar.gz
flannel_version: 0.12.0

# calico version
# calicoctl: https://github.com/projectcalico/calicoctl/releases/download/v{{calico_version}}/calicoctl
calico_version: 3.15.0

# cni-plugins version
# https://github.com/containernetworking/plugins/releases/download/v{{cni_plugins_version}}/cni-plugins-linux-amd64-v{{cni_plugins_version}}.tgz
cni_plugins_version: 0.8.6

# coredns image tag; mind the compatibility matrix: https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md
coredns_image_tag: 1.6.7

# docker version
docker_version: 19.03.10

# docker repo URL
docker_repo: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# docker cgroupDriver; the kubelet's driver must match docker's
cgroupdriver: systemd

# docker registry mirror (accelerator)
mirror_accelerator: https://ajpb7tdn.mirror.aliyuncs.com

# docker storage path, default /var/lib/docker
docker_storage_path: "{{ workdir }}/docker"

# kubelet storage path, default /var/lib/kubelet
kubelet_storage_path: "{{ workdir }}/kubelet"

# pod CIDR
pod_network: 10.244.0.0/16

# service CIDR
service_network: 10.96.0.0/12

# kubernetes internal address, normally the first IP of the service CIDR
kubernetes_service_ip: 10.96.0.1

# coredns address
coredns_service_ip: 10.96.0.10

# cluster domain
cluster_domain: cluster.local

# VIP|LB|NG addresses that can reach the api-server, used when generating certificates; comma-separate multiple values
apiserver_lb_address: 10.10.10.127,lb.5179.top,inner-lb.5179.top

# For an HA cluster, one VIP|LB|NG address (an internal LB address is recommended), used when generating kubeconfig files
apiserver_master_addr: inner-lb.5179.top

# A token can be generated with `head -c 16 /dev/urandom | od -An -t x | tr -d ' '`
bootstrap_token: d908e9988e51cae3b76ecae1ebacaef3

# service port range
service_port_range: 30000-50000

# Resource reservation; the memory reserved on a node is the sum of the three values
system_reserved_memory: 30Mi
kube_reserved_memory: 40Mi
eviction_hard_memory: 30Mi

# flannel network range config
#flannel_network: '{"Network":"{10.244.0.0/16}", "Backend":{"Type": "vxlan"}}'

# network plugin type [flannel|calico]
network_type: calico

# network adapter name
network_adapter_name: eth0

# flannel network mode
flannel_network_mode: vxlan

# calico network mode [ipip|bgp]
calico_network_mode: ipip

# kube-proxy mode [iptables|ipvs]
kubeproxy_mode: ipvs
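If you rotate `bootstrap_token`, generate the new value with the command from the comment above; it must stay a 32-character hex string:

```bash
head -c 16 /dev/urandom | od -An -t x | tr -d ' '
```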
--------------------------------------------------------------------------------
/inventory/hosts:
--------------------------------------------------------------------------------
[masters]
10.10.10.128
10.10.10.129
10.10.10.130

[etcd]
10.10.10.128 NODE_NAME=etcd01
10.10.10.129 NODE_NAME=etcd02
10.10.10.130 NODE_NAME=etcd03

[nodes]
10.10.10.128
10.10.10.129
10.10.10.130
10.10.10.131
10.10.10.132

[new_nodes]
10.10.10.128
10.10.10.129
10.10.10.130

[k8s]
10.10.10.[128:132]

[local]
127.0.0.1

[lb]
10.10.10.127
--------------------------------------------------------------------------------
/new_nodes.yml:
--------------------------------------------------------------------------------
# Initialize the environment
- name: Init Environment
  hosts: new_nodes
  roles:
    - init
  tags: init

# Install nodes
- name: Install Node Cluster
  hosts: new_nodes
  roles:
    - node
  tags: node

# Install the network plugin
- name: Install Network Plugins
  hosts: new_nodes
  roles:
    - { role: flannel, tags: flannel, when: network_type == 'flannel' }
    - { role: calico, tags: calico, when: network_type == 'calico' }
  tags: network
--------------------------------------------------------------------------------
/roles/calico/defaults/main.yml:
--------------------------------------------------------------------------------
ETCD_SERVERS: "{% for host in groups['etcd'] %}https://{{ host }}:2379{% if not loop.last %},{% endif %}{% endfor %}"
--------------------------------------------------------------------------------
/roles/calico/tasks/distribute_calicoctl.yml:
--------------------------------------------------------------------------------
# Two installation methods follow; pick either one

# Method 1:
- name: "Distribute Calicoctl Binary File"
  copy:
    src: "{{ package_root_dir }}/calico/{{ calico_version }}/calicoctl"
    dest: "{{ workdir }}/kubernetes/bin"
    mode: 0755
  run_once: true

- name: "Create Calicoctl Config Dir"
  file:
    name: "/etc/calico"
    state: directory
    recurse: true
  run_once: true

- name: "Distribute Calicoctl Config File"
  template:
    src: calicoctl.cfg.j2
    dest: "/etc/calico/calicoctl.cfg"
  run_once: true

# Method 2:

#- name: "Create Calicoctl.yaml"
#  template:
#    src: calicoctl.yaml.j2
#    dest: "{{ workdir }}/yamls/calicoctl.yaml"
#    mode: 0644
#  run_once: true
#
#- name: "Write Calicoctl Command to Profile"
#  template:
#    src: calicoctl.sh.j2
#    dest: /etc/profile.d/calicoctl.sh
#    mode: 0755
#  run_once: true
--------------------------------------------------------------------------------
/roles/calico/tasks/install_calico.yml:
--------------------------------------------------------------------------------
- name: "Create yamls Dir"
  file:
    name: "{{ workdir }}/yamls"
    state: directory
    recurse: true
  run_once: true

- name: "Create Calico-typha.yaml"
  template:
    src: calico-typha.yaml.j2
    dest: "{{ workdir }}/yamls/calico-typha.yaml"
    mode: 0644
  run_once: true
  when: groups['nodes'] | length > 50

- name: "Create Calico.yaml"
  template:
    src: calico.yaml.j2
    dest: "{{ workdir }}/yamls/calico.yaml"
    mode: 0644
  run_once: true
  when: groups['nodes'] | length <= 50

- name: "Install Calico By Calico-typha.yaml"
  shell: |
    sleep 10
    kubectl apply -f calico-typha.yaml
  args:
    chdir: "{{ workdir }}/yamls"
  run_once: true
  ignore_errors: true
  when: groups['nodes'] | length > 50

- name: "Install Calico By Calico.yaml"
  shell: |
    sleep 10
    kubectl apply -f calico.yaml
  args:
    chdir: "{{ workdir }}/yamls"
  run_once: true
  ignore_errors: true
  when: groups['nodes'] | length <= 50
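After the manifest is applied, the DaemonSet may take a minute to settle; a quick way to watch it, assuming the stock `k8s-app: calico-node` label from the upstream manifest:

```bash
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
```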
--------------------------------------------------------------------------------
/roles/calico/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: install_calico
  include: install_calico.yml
  tags: install_calico
- name: distribute_calicoctl
  include: distribute_calicoctl.yml
  tags: distribute_calicoctl
--------------------------------------------------------------------------------
/roles/calico/templates/calicoctl.cfg.j2:
--------------------------------------------------------------------------------
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "kubernetes"
  kubeconfig: "{{ kube_config_dir }}/config"
--------------------------------------------------------------------------------
/roles/calico/templates/calicoctl.sh.j2:
--------------------------------------------------------------------------------
#!/bin/bash
#
alias calicoctl="kubectl exec -i -n kube-system calicoctl -- /calicoctl"
--------------------------------------------------------------------------------
/roles/calico/templates/calicoctl.yaml.j2:
--------------------------------------------------------------------------------
# Calico Version v{{ calico_version }}
# https://docs.projectcalico.org/releases#v{{ calico_version }}
# This manifest includes the following component versions:
#   calico/ctl:v{{ calico_version }}

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calicoctl
  namespace: kube-system

---

apiVersion: v1
kind: Pod
metadata:
  name: calicoctl
  namespace: kube-system
spec:
  nodeSelector:
    kubernetes.io/os: linux
  hostNetwork: true
  serviceAccountName: calicoctl
  containers:
  - name: calicoctl
    image: calico/ctl:v{{ calico_version }}
    command: ["/bin/sh", "-c", "while true; do sleep 3600; done"]
    env:
    - name: DATASTORE_TYPE
      value: kubernetes

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calicoctl
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - nodes
    verbs:
      - get
      - list
      - update
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      - update
  - apiGroups: [""]
    resources:
      - pods
      - serviceaccounts
    verbs:
      - get
      - list
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgppeers
      - bgpconfigurations
      - clusterinformations
      - felixconfigurations
      - globalnetworkpolicies
      - globalnetworksets
      - ippools
      - kubecontrollersconfigurations
      - networkpolicies
      - networksets
      - hostendpoints
      - ipamblocks
      - blockaffinities
      - ipamhandles
      - ipamconfigs
    verbs:
      - create
      - get
      - list
      - update
      - delete
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calicoctl
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calicoctl
subjects:
- kind: ServiceAccount
  name: calicoctl
  namespace: kube-system
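Whichever method is used, calicoctl then talks to the cluster through the kubeconfig; for example, with the standard calicoctl subcommands:

```bash
calicoctl get nodes
calicoctl get ippool -o wide   # should show the 10.244.0.0/16 pool in ipip mode
```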
--------------------------------------------------------------------------------
/roles/cert/defaults/main.yml:
--------------------------------------------------------------------------------
# CA certificate parameters
# 100 years
CA_EXPIRY: "876000h"
# 50 years
CERT_EXPIRY: "438000h"

# Certificate renewal policy [stay|update]
CERT_POLICY: stay

# API server node list
API_SERVICE_NODES: '{% for h in groups["masters"] %}"{{ h }}",{% endfor %}'

# etcd node list
ETCD_NODES: '{% for h in groups["etcd"] %}"{{ h }}",{% endfor %}'
--------------------------------------------------------------------------------
/roles/cert/tasks/generate_certs.yml:
--------------------------------------------------------------------------------
- name: "Backup Old Certs"
  shell:
    mv ssl ssl-`date "+%Y-%m-%d~%H:%M:%S"` && mkdir ssl
  args:
    chdir: "{{ workdir }}"
  when: (CERT_POLICY) == "update"

- name: "Generate ca-config.json"
  template:
    src: ca-config.json.j2
    dest: "{{ cert_root_dir }}/ca-config.json"

- name: "Generate ca-csr.json"
  template:
    src: ca-csr.json.j2
    dest: "{{ cert_root_dir }}/ca-csr.json"

- name: "Generate CA File and Key File"
  shell:
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  args:
    chdir: "{{ cert_root_dir }}"

- name: "Generate server-csr.json"
  template:
    src: server-csr.json.j2
    dest: "{{ cert_root_dir }}/server-csr.json"

- name: "Generate Server Certs File and Key File"
  shell:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
  args:
    chdir: "{{ cert_root_dir }}"

- name: "Generate admin-csr.json"
  template:
    src: admin-csr.json.j2
    dest: "{{ cert_root_dir }}/admin-csr.json"

- name: "Generate Admin Certs File and Key File"
  shell:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
  args:
    chdir: "{{ cert_root_dir }}"

- name: "Generate kube-proxy-csr.json"
  template:
    src: kube-proxy-csr.json.j2
    dest: "{{ cert_root_dir }}/kube-proxy-csr.json"

- name: "Generate Kube-proxy Certs File and Key File"
  shell:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
  args:
    chdir: "{{ cert_root_dir }}"

- name: "Generate metrics-server-csr.json"
  template:
    src: metrics-server-csr.json.j2
    dest: "{{ cert_root_dir }}/metrics-server-csr.json"

- name: "Generate Metrics-server Certs File and Key File"
  shell:
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
  args:
    chdir: "{{ cert_root_dir }}"
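To spot-check the result, `cfssl-certinfo` (one of the three binaries this playbook installs) prints a cert as JSON; with the default `cert_root_dir` of `/opt/ssl`:

```bash
cfssl-certinfo -cert /opt/ssl/server.pem | jq '{subject, sans, not_after}'
```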
--------------------------------------------------------------------------------
/roles/cert/tasks/install_cfssl.yml:
--------------------------------------------------------------------------------
- name: "Make cfssl dir"
  file:
    name: "{{ package_root_dir }}/cfssl"
    state: directory
    recurse: true

- name: "Check cfssl exist or not"
  shell:
    ls "{{ package_root_dir }}/cfssl/cfssl_linux-amd64"
  register: cfssl_exist
  ignore_errors: yes

- name: "Download cfssl"
  get_url:
    url: "{{ item }}"
    dest: "{{ package_root_dir }}/cfssl"
    timeout: 120
    mode: 0755
  with_items:
    - https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    - https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    - https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
  when: cfssl_exist is failed

- name: "Copy cfssl to /usr/local/bin/"
  copy:
    src: "{{ package_root_dir }}/cfssl/{{ item.src }}"
    dest: /usr/local/bin/{{ item.dest }}
    mode: 0755
  with_items:
    - {src: "cfssl_linux-amd64", dest: "cfssl"}
    - {src: "cfssljson_linux-amd64", dest: "cfssljson"}
    - {src: "cfssl-certinfo_linux-amd64", dest: "cfssl-certinfo"}
--------------------------------------------------------------------------------
/roles/cert/tasks/main.yml:
--------------------------------------------------------------------------------
---

- include: install_cfssl.yml
  tags: install_cfssl

- name: "Make Certs Directory"
  file:
    name: "{{ cert_root_dir }}"
    state: directory
    recurse: true

- name: "Check Certs exist or not"
  shell:
    ls "{{ cert_root_dir }}" |wc -l
  register: certs_number
  # ignore_errors: yes

- name: "Debug File numbers"
  debug:
    msg: "{{ certs_number.stdout }}"
    #msg: "{{ certs_number['stdout'] }}"

- include: generate_certs.yml
  tags: generate_certs
  when: certs_number.stdout | int <= 20 or (CERT_POLICY) == "update"
--------------------------------------------------------------------------------
/roles/cert/templates/admin-csr.json.j2:
--------------------------------------------------------------------------------
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
--------------------------------------------------------------------------------
/roles/cert/templates/ca-config.json.j2:
--------------------------------------------------------------------------------
{
  "signing": {
    "default": {
      "expiry": "{{ CERT_EXPIRY }}"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "{{ CERT_EXPIRY }}"
      }
    }
  }
}
--------------------------------------------------------------------------------
/roles/cert/templates/ca-csr.json.j2:
--------------------------------------------------------------------------------
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "{{ CA_EXPIRY }}"
  }
}
--------------------------------------------------------------------------------
/roles/cert/templates/kube-proxy-csr.json.j2:
--------------------------------------------------------------------------------
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
14 | "OU": "System" 15 | } 16 | ] 17 | } -------------------------------------------------------------------------------- /roles/cert/templates/metrics-server-csr.json.j2: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "system:metrics-server", 3 | "hosts": [], 4 | "key": { 5 | "algo": "rsa", 6 | "size": 2048 7 | }, 8 | "names": [ 9 | { 10 | "C": "CN", 11 | "ST": "BeiJing", 12 | "L": "BeiJing", 13 | "O": "k8s", 14 | "OU": "system" 15 | } 16 | ] 17 | } -------------------------------------------------------------------------------- /roles/cert/templates/server-csr.json.j2: -------------------------------------------------------------------------------- 1 | { 2 | "CN": "kubernetes", 3 | "hosts": [ 4 | {% if API_SERVICE_NODES |string() == ETCD_NODES |string() %} 5 | {{ API_SERVICE_NODES }} 6 | {% else %} 7 | {{ API_SERVICE_NODES }} 8 | {{ ETCD_NODES }} 9 | {% endif %} 10 | {% if kubernetes_service_ip %} 11 | "{{ kubernetes_service_ip }}", 12 | {% endif %} 13 | {% if apiserver_lb_address %} 14 | {% for h in apiserver_lb_address.split(',') %} 15 | "{{h}}", 16 | {% endfor %} 17 | {% endif %} 18 | "127.0.0.1", 19 | "kubernetes", 20 | "kubernetes.default", 21 | "kubernetes.default.svc", 22 | "kubernetes.default.svc.cluster", 23 | "kubernetes.default.svc.cluster.local" 24 | ], 25 | "key": { 26 | "algo": "rsa", 27 | "size": 2048 28 | }, 29 | "names": [ 30 | { 31 | "C": "CN", 32 | "L": "BeiJing", 33 | "ST": "BeiJing", 34 | "O": "k8s", 35 | "OU": "System" 36 | } 37 | ] 38 | } -------------------------------------------------------------------------------- /roles/coredns/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: "Create Dir yaml" 2 | file: 3 | name: "{{ workdir }}/yamls" 4 | state: directory 5 | recurse: true 6 | 7 | - name: "Copy Coredns.yaml Files" 8 | template: 9 | src: coredns.yaml.j2 10 | dest: "{{workdir}}/yamls/coredns.yaml" 11 | 12 | - name: "Run 'kubectl apply -f coredns.yaml'" 13 | shell: 14 | "kubectl apply -f coredns.yaml" 15 | args: 16 | chdir: "{{ workdir }}/yamls" 17 | run_once: true -------------------------------------------------------------------------------- /roles/coredns/templates/coredns.yaml.j2: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: coredns 5 | namespace: kube-system 6 | --- 7 | apiVersion: rbac.authorization.k8s.io/v1 8 | kind: ClusterRole 9 | metadata: 10 | labels: 11 | kubernetes.io/bootstrapping: rbac-defaults 12 | name: system:coredns 13 | rules: 14 | - apiGroups: 15 | - "" 16 | resources: 17 | - endpoints 18 | - services 19 | - pods 20 | - namespaces 21 | verbs: 22 | - list 23 | - watch 24 | --- 25 | apiVersion: rbac.authorization.k8s.io/v1 26 | kind: ClusterRoleBinding 27 | metadata: 28 | annotations: 29 | rbac.authorization.kubernetes.io/autoupdate: "true" 30 | labels: 31 | kubernetes.io/bootstrapping: rbac-defaults 32 | name: system:coredns 33 | roleRef: 34 | apiGroup: rbac.authorization.k8s.io 35 | kind: ClusterRole 36 | name: system:coredns 37 | subjects: 38 | - kind: ServiceAccount 39 | name: coredns 40 | namespace: kube-system 41 | --- 42 | apiVersion: v1 43 | kind: ConfigMap 44 | metadata: 45 | name: coredns 46 | namespace: kube-system 47 | data: 48 | Corefile: | 49 | .:53 { 50 | errors 51 | health { 52 | lameduck 5s 53 | } 54 | ready 55 | kubernetes cluster.local in-addr.arpa ip6.arpa { 56 | fallthrough in-addr.arpa ip6.arpa 57 | } 58 | 
--------------------------------------------------------------------------------
/roles/coredns/templates/coredns.yaml.j2:
--------------------------------------------------------------------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: coredns/coredns:{{ coredns_image_tag }}
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: {{ coredns_service_ip }}
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
--------------------------------------------------------------------------------
/roles/etcd/defaults/main.yml:
--------------------------------------------------------------------------------
# IPs and ports for intra-cluster etcd communication, generated from the etcd group members
TMP_NODES: "{% for h in groups['etcd'] %}{{ hostvars[h]['NODE_NAME'] }}=https://{{ h }}:2380,{% endfor %}"
ETCD_CLUSTER: "{{ TMP_NODES.rstrip(',') }}"

# Initial etcd cluster state new/existing
CLUSTER_STATE: "new"

ENDPOINTS: "{% for host in groups['etcd'] %}https://{{ host }}:2379{% if not loop.last %},{% endif %}{% endfor %}"
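For reference, with the three `[etcd]` hosts from this repo's inventory the two variables render as follows (computed at runtime; shown here only for illustration):

```
ETCD_CLUSTER: etcd01=https://10.10.10.128:2380,etcd02=https://10.10.10.129:2380,etcd03=https://10.10.10.130:2380
ENDPOINTS:    https://10.10.10.128:2379,https://10.10.10.129:2379,https://10.10.10.130:2379
```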
endfor %}" -------------------------------------------------------------------------------- /roles/etcd/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - name: restart etcd 2 | systemd: 3 | name: etcd 4 | state: restarted -------------------------------------------------------------------------------- /roles/etcd/tasks/config_network.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Config_network.sh Files" 2 | template: 3 | src: config_network.sh.j2 4 | dest: "{{workdir}}/etcd/bin/config_network.sh" 5 | mode: 0755 6 | 7 | - name: "Run Config_network.sh" 8 | shell: 9 | "bash config_network.sh" 10 | args: 11 | chdir: "{{ workdir }}/etcd/bin" 12 | run_once: true 13 | -------------------------------------------------------------------------------- /roles/etcd/tasks/create_dir.yml: -------------------------------------------------------------------------------- 1 | # Create some dir 2 | 3 | - name: "Make Etcd Directory" 4 | file: 5 | name: "{{workdir}}/{{ item }}" 6 | state: directory 7 | recurse: true 8 | with_items: 9 | - etcd 10 | - etcd/bin 11 | - etcd/cfg 12 | - etcd/ssl 13 | - etcd/data -------------------------------------------------------------------------------- /roles/etcd/tasks/distribute_etcd.yml: -------------------------------------------------------------------------------- 1 | - name: "Distribute ETCD Binary File" 2 | copy: 3 | src: "{{ package_root_dir }}/etcd/etcd-v{{ etcd_version }}-linux-amd64/{{ item }}" 4 | dest: "{{ workdir }}/etcd/bin" 5 | mode: 0755 6 | with_items: 7 | - etcd 8 | - etcdctl 9 | 10 | - name: "Distribute ETCD Certs" 11 | copy: 12 | src: "{{cert_root_dir}}/{{ item }}" 13 | dest: "{{workdir}}/etcd/ssl/" 14 | force: true 15 | with_items: 16 | - ca.pem 17 | - server-key.pem 18 | - server.pem 19 | 20 | - name: "Copy ETCD Configuration Files" 21 | template: 22 | src: etcd-3.{{ etcd_version.split('.')[1] }}.yml.j2 23 | dest: "{{workdir}}/etcd/cfg/etcd.yml" 24 | force: true 25 | notify: 26 | restart etcd 27 | 28 | #- name: "Copy ETCD Configuration Files" 29 | # template: 30 | # src: etcd-3.3.yml.j2 31 | # dest: "{{workdir}}/etcd/cfg/etcd.yml" 32 | # force: true 33 | # when: etcd_version.startswith("3.3") 34 | # 35 | #- name: "Copy ETCD Configuration Files" 36 | # template: 37 | # src: etcd-3.4.yml.j2 38 | # dest: "{{workdir}}/etcd/cfg/etcd.yml" 39 | # force: true 40 | # when: etcd_version.startswith("3.4") 41 | 42 | - name: "Write ETCD Command to Profile" 43 | template: 44 | src: etcdctl.sh.j2 45 | dest: /etc/profile.d/etcd.sh 46 | mode: 0755 47 | 48 | - name: "Copy ETCD Systemd File" 49 | template: 50 | src: etcd.service.j2 51 | dest: /usr/lib/systemd/system/etcd.service -------------------------------------------------------------------------------- /roles/etcd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: create_dir 3 | include: create_dir.yml 4 | tags: create_dir 5 | - name: distribute_etcd 6 | include: distribute_etcd.yml 7 | tags: install_etcd_pkg 8 | - name: start_etcd 9 | include: start_etcd.yml 10 | tags: start_etcd 11 | - include: config_network.yml 12 | tags: config_network 13 | when: network_type == 'flannel' -------------------------------------------------------------------------------- /roles/etcd/tasks/start_etcd.yml: -------------------------------------------------------------------------------- 1 | - name: "Start ETCD" 2 | systemd: 3 | name: etcd 4 | state: 
--------------------------------------------------------------------------------
/roles/etcd/templates/config_network.sh.j2:
--------------------------------------------------------------------------------
#!/bin/bash

ETCDCTL_API=2 etcdctl --ca-file={{ workdir }}/etcd/ssl/ca.pem \
--cert-file={{ workdir }}/etcd/ssl/server.pem \
--key-file={{ workdir }}/etcd/ssl/server-key.pem \
--endpoints={{ ENDPOINTS }} \
set /coreos.com/network/config '{"Network": '\"{{ pod_network }}\"', "Backend": {"Type": '\"{{ flannel_network_mode }}\"'}}'
--------------------------------------------------------------------------------
/roles/etcd/templates/etcd-3.3.yml.j2:
--------------------------------------------------------------------------------
#etcd {{ etcd_version }}
name: {{ NODE_NAME }}
data-dir: {{ workdir }}/etcd/data
listen-peer-urls: https://{{ inventory_hostname }}:2380
listen-client-urls: https://{{ inventory_hostname }}:2379,https://127.0.0.1:2379

advertise-client-urls: https://{{ inventory_hostname }}:2379
initial-advertise-peer-urls: https://{{ inventory_hostname }}:2380
initial-cluster: {{ ETCD_CLUSTER }}
initial-cluster-token: etcd-cluster
initial-cluster-state: {{ CLUSTER_STATE }}

client-transport-security:
  cert-file: {{ workdir }}/etcd/ssl/server.pem
  key-file: {{ workdir }}/etcd/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: {{ workdir }}/etcd/ssl/ca.pem
  auto-tls: false

peer-transport-security:
  cert-file: {{ workdir }}/etcd/ssl/server.pem
  key-file: {{ workdir }}/etcd/ssl/server-key.pem
  peer-client-cert-auth: false
  trusted-ca-file: {{ workdir }}/etcd/ssl/ca.pem
  auto-tls: false

debug: false
log-package-levels: etcdmain=CRITICAL,etcdserver=DEBUG
log-outputs: default
--------------------------------------------------------------------------------
/roles/etcd/templates/etcd-3.4.yml.j2:
--------------------------------------------------------------------------------
#etcd {{ etcd_version }}
name: {{ NODE_NAME }}
data-dir: {{ workdir }}/etcd/data
listen-peer-urls: https://{{ inventory_hostname }}:2380
listen-client-urls: https://{{ inventory_hostname }}:2379,https://127.0.0.1:2379

advertise-client-urls: https://{{ inventory_hostname }}:2379
initial-advertise-peer-urls: https://{{ inventory_hostname }}:2380
initial-cluster: {{ ETCD_CLUSTER }}
initial-cluster-token: etcd-cluster
initial-cluster-state: {{ CLUSTER_STATE }}
enable-v2: true

client-transport-security:
  cert-file: {{ workdir }}/etcd/ssl/server.pem
  key-file: {{ workdir }}/etcd/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: {{ workdir }}/etcd/ssl/ca.pem
  auto-tls: false

peer-transport-security:
  cert-file: {{ workdir }}/etcd/ssl/server.pem
  key-file: {{ workdir }}/etcd/ssl/server-key.pem
  client-cert-auth: false
  trusted-ca-file: {{ workdir }}/etcd/ssl/ca.pem
  auto-tls: false

debug: false
logger: zap
log-outputs: [stderr]
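When `network_type` is flannel, `config_network.sh` writes the subnet configuration into etcd over the v2 API (hence `enable-v2: true` above); it can be read back the same way with the `etcdctl2` alias:

```bash
etcdctl2 get /coreos.com/network/config
# {"Network": "10.244.0.0/16", "Backend": {"Type": "vxlan"}}
```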
--------------------------------------------------------------------------------
/roles/etcd/templates/etcd.service.j2:
--------------------------------------------------------------------------------
[Unit]
Description=Etcd Server
Documentation=https://github.com/etcd-io/etcd
Conflicts=etcd.service
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
LimitNOFILE=65536
Restart=on-failure
RestartSec=5s
TimeoutStartSec=0
ExecStart={{ workdir }}/etcd/bin/etcd --config-file={{ workdir }}/etcd/cfg/etcd.yml

[Install]
WantedBy=multi-user.target
--------------------------------------------------------------------------------
/roles/etcd/templates/etcdctl.sh.j2:
--------------------------------------------------------------------------------
#!/bin/bash
#

export PATH=$PATH:{{ workdir }}/etcd/bin

alias etcdctl2="ETCDCTL_API=2 etcdctl --ca-file={{ workdir }}/etcd/ssl/ca.pem --cert-file={{ workdir }}/etcd/ssl/server.pem --key-file={{ workdir }}/etcd/ssl/server-key.pem --endpoints={{ ENDPOINTS }}"

alias etcdctl3="ETCDCTL_API=3 etcdctl --cacert={{ workdir }}/etcd/ssl/ca.pem --cert={{ workdir }}/etcd/ssl/server.pem --key={{ workdir }}/etcd/ssl/server-key.pem --endpoints={{ ENDPOINTS }}"
--------------------------------------------------------------------------------
/roles/flannel/defaults/main.yml:
--------------------------------------------------------------------------------
ETCD_SERVERS: "{% for host in groups['etcd'] %}https://{{ host }}:2379{% if not loop.last %},{% endif %}{% endfor %}"
--------------------------------------------------------------------------------
/roles/flannel/files/remove-docker0.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Copyright 2014 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Delete default docker bridge, so that docker can start with flannel network.

# exit on any error
set -e

rc=0
ip link show docker0 >/dev/null 2>&1 || rc="$?"
if [[ "$rc" -eq "0" ]]; then
  ip link set dev docker0 down
  ip link delete docker0
fi
--------------------------------------------------------------------------------
/roles/flannel/tasks/distribute_cni_plugins.yml:
--------------------------------------------------------------------------------
- name: "Create Cni-plugins Dirs"
  file:
    name: "{{ item }}"
    state: directory
    recurse: true
  with_items:
    - /etc/cni/net.d
    - /opt/cni/bin

- name: "Distribute Cni-plugins Binary File"
  copy:
    src: "{{ package_root_dir }}/cni-plugins/{{ cni_plugins_version }}/"
    dest: "/opt/cni/bin"
    mode: 0755
--------------------------------------------------------------------------------
/roles/flannel/tasks/distribute_flannel.yml:
--------------------------------------------------------------------------------
- name: "Distribute Flannel Binary File"
  copy:
    src: "{{ package_root_dir }}/flannel/{{ flannel_version }}/{{ item }}"
    dest: "{{ workdir }}/kubernetes/bin"
    mode: 0755
  with_items:
    - flanneld
    - mk-docker-opts.sh

- name: "Distribute Flannel Conflist File"
  template:
    src: 10-flannel.conflist.j2
    dest: "/etc/cni/net.d/10-flannel.conflist"

- name: "Distribute Flannel Config File"
  template:
    src: flanneld.j2
    dest: "{{ workdir }}/kubernetes/cfg/flanneld"

- name: "Copy Flanneld Systemd File"
  template:
    src: flanneld.service.j2
    dest: /usr/lib/systemd/system/flanneld.service

- name: "Start Flannel"
  systemd:
    name: flanneld
    state: restarted
    enabled: true
    daemon_reload: true
--------------------------------------------------------------------------------
/roles/flannel/tasks/distribute_link_etcd_cert.yml:
--------------------------------------------------------------------------------
- name: "Distribute Flannel Link Etcd Certs"
  copy:
    src: "{{ cert_root_dir }}/{{ item }}"
    dest: "{{ workdir }}/kubernetes/ssl"
  with_items:
    - server.pem
    - server-key.pem
--------------------------------------------------------------------------------
/roles/flannel/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: distribute_link_etcd_cert
  include: distribute_link_etcd_cert.yml
  tags: distribute_link_etcd_cert
- name: distribute_cni_plugins
  include: distribute_cni_plugins.yml
  tags: distribute_cni_plugins
- name: distribute_flannel
  include: distribute_flannel.yml
  tags: distribute_flannel
--------------------------------------------------------------------------------
/roles/flannel/templates/10-flannel.conflist.j2:
--------------------------------------------------------------------------------
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
--------------------------------------------------------------------------------
/roles/flannel/templates/flanneld.j2:
--------------------------------------------------------------------------------
FLANNEL_OPTIONS="--etcd-endpoints={{ ETCD_SERVERS }} \
-etcd-cafile={{ workdir }}/kubernetes/ssl/ca.pem \
-etcd-certfile={{ workdir }}/kubernetes/ssl/server.pem \
-etcd-keyfile={{ workdir }}/kubernetes/ssl/server-key.pem"
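On a node where flanneld has started, the lease it obtained lands in flannel's default subnet file; checking it, and the overlay device, is the quickest way to confirm the network is up:

```bash
cat /run/flannel/subnet.env
ip -d link show flannel.1   # present when flannel_network_mode is vxlan
```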
--------------------------------------------------------------------------------
/roles/flannel/templates/flanneld.service.j2:
--------------------------------------------------------------------------------
[Unit]
Description=Flanneld Overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile={{ workdir }}/kubernetes/cfg/flanneld
#ExecStartPre={{ workdir }}/kubernetes/bin/remove-docker0.sh
ExecStart={{ workdir }}/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
#ExecStartPost={{ workdir }}/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
--------------------------------------------------------------------------------
/roles/init/files/99-k8s.conf:
--------------------------------------------------------------------------------
#sysctls for k8s node config
net.ipv4.ip_forward=1
net.ipv4.tcp_slow_start_after_idle=0
net.core.rmem_max=16777216
fs.inotify.max_user_watches=524288
kernel.softlockup_all_cpu_backtrace=1
kernel.softlockup_panic=1
fs.file-max=2097152
fs.inotify.max_user_instances=8192
fs.inotify.max_queued_events=16384
vm.max_map_count=262144
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.may_detach_mounts=1
net.core.netdev_max_backlog=16384
net.ipv4.tcp_wmem=4096 12582912 16777216
net.core.wmem_max=16777216
net.core.somaxconn=32768
net.ipv4.tcp_max_syn_backlog=8096
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.tcp_rmem=4096 12582912 16777216
--------------------------------------------------------------------------------
/roles/init/files/daemon.json:
--------------------------------------------------------------------------------
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://ajpb7tdn.mirror.aliyuncs.com"],
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "100m",
    "max-file": "10"
  },
  "data-root": "/opt/docker",
  "oom-score-adjust": -1000
}
--------------------------------------------------------------------------------
/roles/init/files/ipvs.modules:
--------------------------------------------------------------------------------
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
--------------------------------------------------------------------------------
/roles/init/files/kubernetes.repo:
--------------------------------------------------------------------------------
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
enabled=1
--------------------------------------------------------------------------------
/roles/init/tasks/copy_hostfile.yml:
--------------------------------------------------------------------------------
# Copy hosts file
- name: Copy hosts file
  copy:
    src: /etc/hosts
    dest: /etc/hosts
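Because the kubelet must run with the same cgroup driver as Docker (the `cgroupdriver` variable), it is worth verifying the rendered daemon.json took effect on a node:

```bash
docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: systemd
```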
--------------------------------------------------------------------------------
/roles/init/tasks/disable_selinux.yml:
--------------------------------------------------------------------------------
# Disable Selinux
- name: Change selinux config file
  lineinfile:
    path: /etc/selinux/config
    regexp: '^SELINUX=enforcing'
    line: SELINUX=disabled
- name: Exec setenforce 0 command
  shell: setenforce 0
  ignore_errors: yes
--------------------------------------------------------------------------------
/roles/init/tasks/disable_swap.yml:
--------------------------------------------------------------------------------
# Disable Swap
- name: Exec swapoff command
  shell: swapoff -a
- name: Change fstab file
  replace:
    path: /etc/fstab
    regexp: '^([/|U].+swap.+)$'
    replace: '#\1'
--------------------------------------------------------------------------------
/roles/init/tasks/install_docker.yml:
--------------------------------------------------------------------------------
- name: Install epel-release And yum-utils Packages
  yum:
    name: ['epel-release', 'yum-utils', 'device-mapper-persistent-data', 'lvm2']
    state: present
- name: Add docker-ce Repo
  shell: yum-config-manager --add-repo {{ docker_repo }}
# - name: Add kubernetes repo
#   copy:
#     src: kubernetes.repo
#     dest: /etc/yum.repos.d/kubernetes.repo
- name: Install Docker Packages
  yum:
    name: ['docker-ce-{{ docker_version }}', 'docker-ce-cli-{{ docker_version }}']
    state: present

- name: Create /etc/docker Directory
  file:
    path: "{{ item }}"
    state: directory
  with_items:
    - /etc/docker
    - "{{ docker_storage_path }}"
- name: Copy docker daemon.json File
  # copy:
  #   src: daemon.json
  #   dest: /etc/docker/daemon.json
  template:
    src: daemon.json.j2
    dest: /etc/docker/daemon.json
    mode: 0644
- name: Started Docker Service
  systemd:
    name: docker
    state: started
    enabled: yes
    daemon_reload: true
--------------------------------------------------------------------------------
/roles/init/tasks/install_some_pkgs.yml:
--------------------------------------------------------------------------------
- name: Install new packages
  yum:
    name: ['conntrack', 'ipvsadm', 'ipset', 'iptables-services', 'libseccomp', 'jq', 'sysstat', 'curl', 'wget', 'net-tools', 'telnet', 'vim', 'socat']
    # name: "{{ item }}"
    state: present
  # with_items:
  #   - conntrack
  #   - ipvsadm
  #   - ipset
  #   - iptables-services
  #   - libseccomp
  #   - jq
  #   - sysstat
  #   - curl
  #   - wget
  #   - net-tools
  #   - telnet
  #   - vim
  #   - socat
  #   - chrony
--------------------------------------------------------------------------------
/roles/init/tasks/main.yml:
--------------------------------------------------------------------------------
---
- name: disable_selinux
  include: disable_selinux.yml
  tags: disable_selinux
- include: stop_firewall.yml
  tags: stop_firewall
- include: disable_swap.yml
  tags: disable_swap
- include: sysctl.yml
  tags: sysctl
- include: timezone.yml
  tags: timezone
- include: install_docker.yml
  tags: install_docker
- include: install_some_pkgs.yml
  tags: install_some_pkgs
- include: support_ipvs.yml
  tags: support_ipvs
- include: copy_hostfile.yml
  tags: copy_hostfile
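A quick spot-check that the init role left a host in the expected state:

```bash
getenforce        # Permissive or Disabled
swapon --show     # no output means swap is off
systemctl is-active docker firewalld   # expect: active / inactive
```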
-------------------------------------------------------------------------------- 1 | # Start docker service 2 | - name: Create /etc/docker directory 3 | file: 4 | path: /etc/docker 5 | state: directory 6 | - name: Copy docker daemon.json file 7 | copy: 8 | src: daemon.json 9 | dest: /etc/docker/daemon.json 10 | - name: Started docker service 11 | service: 12 | name: docker 13 | state: started 14 | enabled: yes -------------------------------------------------------------------------------- /roles/init/tasks/stop_firewall.yml: -------------------------------------------------------------------------------- 1 | # Turn off the firewall 2 | - name: Stop firewalld 3 | systemd: 4 | name: firewalld 5 | state: stopped 6 | enabled: no 7 | # Flush existing iptables rules 8 | - name: Clean iptables 9 | shell: iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT -------------------------------------------------------------------------------- /roles/init/tasks/support_ipvs.yml: -------------------------------------------------------------------------------- 1 | # Support ipvs 2 | - name: "Copy ipvs.modules" 3 | copy: 4 | src: ipvs.modules 5 | dest: /etc/sysconfig/modules/ipvs.modules 6 | mode: 0755 7 | 8 | - name: "Load /etc/sysconfig/modules/ipvs.modules" 9 | shell: 10 | bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4 11 | register: ipvs_status 12 | 13 | - name: "Debug IP_VS Status" 14 | debug: 15 | msg: "IP_VS Status: {{ ipvs_status.stdout }}" -------------------------------------------------------------------------------- /roles/init/tasks/sysctl.yml: -------------------------------------------------------------------------------- 1 | # Set system parameters 2 | - name: Copy system parameters config file 3 | copy: 4 | src: 99-k8s.conf 5 | dest: /etc/sysctl.d/99-k8s.conf 6 | - name: Load system parameters 7 | shell: modprobe br_netfilter && sysctl -p /etc/sysctl.d/99-k8s.conf -------------------------------------------------------------------------------- /roles/init/tasks/timezone.yml: -------------------------------------------------------------------------------- 1 | # Set the time zone to Asia/Shanghai 2 | - name: Set the time zone to Asia/Shanghai 3 | shell: timedatectl set-timezone Asia/Shanghai 4 | # Start chronyd service and set up clock synchronization 5 | # - name: Set clock synchronization 6 | # service: 7 | # name: chronyd 8 | # state: started 9 | # enabled: yes -------------------------------------------------------------------------------- /roles/init/templates/daemon.json.j2: -------------------------------------------------------------------------------- 1 | { 2 | "exec-opts": ["native.cgroupdriver={{ cgroupdriver }}"], 3 | {% if mirror_accelerator is not none %} 4 | "registry-mirrors": ["{{ mirror_accelerator }}"], 5 | {% endif %} 6 | "log-driver": "json-file", 7 | "log-opts": { 8 | "max-size": "100m", 9 | "max-file": "10" 10 | }, 11 | "oom-score-adjust": -1000, 12 | "data-root": "{{ docker_storage_path }}" 13 | } -------------------------------------------------------------------------------- /roles/label_master/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: make_master_labels_and_taints 2 | include: make_master_labels_and_taints.yml 3 | tags: make_master_labels_and_taints -------------------------------------------------------------------------------- /roles/label_master/tasks/make_master_labels_and_taints.yml: 
-------------------------------------------------------------------------------- 1 | - name: "Gather Facts to Set ansible_hostname" 2 | # Gather facts explicitly so this tagged task can also run on its own, e.g.: 3 | # ansible-playbook -i inventory/hosts site.yml -t make_master_labels_and_taints 4 | setup: 5 | tags: [ 'make_master_labels_and_taints' ] 6 | when: ansible_facts == {} 7 | 8 | - name: "Add Role Labels to Masters" 9 | shell: | 10 | sleep 5 11 | kubectl label nodes {{ ansible_hostname }} node-role.kubernetes.io/master= 12 | ignore_errors: yes 13 | 14 | - name: "Add Taints to Masters" 15 | shell: | 16 | sleep 5 17 | kubectl taint nodes {{ ansible_hostname }} node-role.kubernetes.io/master=:NoSchedule 18 | ignore_errors: yes -------------------------------------------------------------------------------- /roles/master/defaults/main.yml: -------------------------------------------------------------------------------- 1 | ETCD_SERVERS: "{% for host in groups['etcd'] %}https://{{ host }}:2379{% if not loop.last %},{% endif %}{% endfor %}" -------------------------------------------------------------------------------- /roles/master/files/tls-instructs-csr.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRole 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: system:certificates.k8s.io:certificatesigningrequests:selfnodeserver 5 | rules: 6 | - apiGroups: ["certificates.k8s.io"] 7 | resources: ["certificatesigningrequests/selfnodeserver"] 8 | verbs: ["create"] -------------------------------------------------------------------------------- /roles/master/tasks/copy_admin_config.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Admin Kubeconfig to /root/.kube/config" 2 | copy: 3 | src: "{{ cert_root_dir }}/admin.kubeconfig" 4 | dest: "{{ kube_config_dir }}/config" 5 | when: admin_kubeconfig is failed 6 | connection: local 7 | run_once: true -------------------------------------------------------------------------------- /roles/master/tasks/create_dir.yml: -------------------------------------------------------------------------------- 1 | # Create required directories 2 | 3 | - name: "Make Master Directory" 4 | file: 5 | name: "{{workdir}}/{{ item }}" 6 | state: directory 7 | recurse: true 8 | with_items: 9 | - kubernetes 10 | - kubernetes/bin 11 | - kubernetes/cfg 12 | - kubernetes/ssl 13 | - kubernetes/logs 14 | - kubernetes/logs/kube-scheduler 15 | - kubernetes/logs/kube-apiserver 16 | - kubernetes/logs/kube-controller-manager -------------------------------------------------------------------------------- /roles/master/tasks/create_some_roles.yml: -------------------------------------------------------------------------------- 1 | - name: "Add Kubelet-bootstrap ClusterRoleBinding So A CSR Is Automatically Created When A Kubelet Starts" 2 | shell: | 3 | sleep 10 4 | kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap 5 | run_once: true 6 | ignore_errors: true 7 | 8 | - name: "Create Dir yaml" 9 | file: 10 | name: "{{ workdir }}/yamls" 11 | state: directory 12 | recurse: true 13 | run_once: true 14 | 15 | - name: "Copy File tls-instructs-csr.yaml" 16 | copy: 17 | src: tls-instructs-csr.yaml 18 | dest: "{{ workdir }}/yamls" 19 | run_once: true 20 | 21 | - name: "Create Approve CSR ClusterRole" 22 | shell: | 23 | sleep 10 24 | kubectl apply -f tls-instructs-csr.yaml 25 | args: 26 | chdir: "{{ 
workdir }}/yamls" 27 | run_once: true 28 | 29 | - name: "Create Approve CSR ClusterRoleBinding" 30 | shell: | 31 | sleep 10 32 | kubectl create clusterrolebinding node-client-auto-approve-csr --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap 33 | run_once: true 34 | ignore_errors: true 35 | 36 | - name: "Create CRT Renew ClusterRoleBinding" 37 | shell: | 38 | sleep 10 39 | kubectl create clusterrolebinding node-client-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes 40 | kubectl create clusterrolebinding node-server-auto-renew-crt --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeserver --group=system:nodes 41 | run_once: true 42 | ignore_errors: true 43 | 44 | - name: "Create kubelet-api-admin clusterRoleBinding To Access Log Or Exec" 45 | shell: | 46 | sleep 10 47 | kubectl create clusterrolebinding kubelet-api-admin --user=kubernetes --clusterrole=cluster-admin 48 | run_once: true 49 | ignore_errors: true 50 | 51 | -------------------------------------------------------------------------------- /roles/master/tasks/distribute_apiserver.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Kube-apiserver Configuration Files" 2 | template: 3 | src: kube-apiserver.j2 4 | dest: "{{workdir}}/kubernetes/cfg/kube-apiserver" 5 | 6 | - name: "Copy Kube-apiserver Systemd File" 7 | template: 8 | src: kube-apiserver.service.j2 9 | dest: /usr/lib/systemd/system/kube-apiserver.service 10 | 11 | - name: "Start Kube-apiserver" 12 | systemd: 13 | name: kube-apiserver 14 | state: restarted 15 | enabled: true 16 | daemon-reload: true -------------------------------------------------------------------------------- /roles/master/tasks/distribute_controller_manager.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Kube-controller-manager Configuration Files" 2 | template: 3 | src: kube-controller-manager.j2 4 | dest: "{{workdir}}/kubernetes/cfg/kube-controller-manager" 5 | 6 | - name: "Copy Kube-controller-manager Systemd File" 7 | template: 8 | src: kube-controller-manager.service.j2 9 | dest: /usr/lib/systemd/system/kube-controller-manager.service 10 | 11 | - name: "Start Kube-controller-manager" 12 | systemd: 13 | name: kube-controller-manager 14 | state: restarted 15 | enabled: true 16 | daemon-reload: true -------------------------------------------------------------------------------- /roles/master/tasks/distribute_k8s_file.yml: -------------------------------------------------------------------------------- 1 | - name: "Distribute K8s Master Binary File" 2 | copy: 3 | src: "{{ package_root_dir }}/k8s/{{ kubernetes_version }}/kubernetes/server/bin/{{ item }}" 4 | dest: "{{ workdir }}/kubernetes/bin" 5 | mode: 0755 6 | with_items: 7 | - kubectl 8 | - kube-apiserver 9 | - kube-controller-manager 10 | - kube-scheduler 11 | 12 | - name: "Distribute K8s Certs" 13 | copy: 14 | src: "{{ cert_root_dir }}/{{ item }}" 15 | dest: "{{ workdir }}/kubernetes/ssl" 16 | with_items: 17 | - ca.pem 18 | - ca-key.pem 19 | - server.pem 20 | - server-key.pem 21 | - metrics-server.pem 22 | - metrics-server-key.pem 23 | 24 | - name: "Distribute Token.csv" 25 | copy: 26 | src: "{{ cert_root_dir }}/token.csv" 27 | dest: "{{ workdir }}/kubernetes/cfg" 28 | 29 | - name: "Write Kubectl Command to Profile" 30 | template: 31 | src: kubectl.sh.j2 32 | dest: /etc/profile.d/kubectl.sh 33 
| mode: 0755 34 | -------------------------------------------------------------------------------- /roles/master/tasks/distribute_scheduler.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Kube-scheduler Configuration Files" 2 | template: 3 | src: kube-scheduler.j2 4 | dest: "{{workdir}}/kubernetes/cfg/kube-scheduler" 5 | 6 | - name: "Copy Kube-scheduler Systemd File" 7 | template: 8 | src: kube-scheduler.service.j2 9 | dest: /usr/lib/systemd/system/kube-scheduler.service 10 | 11 | - name: "Start Kube-scheduler" 12 | systemd: 13 | name: kube-scheduler 14 | state: restarted 15 | enabled: true 16 | daemon-reload: true -------------------------------------------------------------------------------- /roles/master/tasks/genarate_kube_config.yml: -------------------------------------------------------------------------------- 1 | - name: "Distribute K8s Kubectl Binary File To Localhost" 2 | copy: 3 | src: "{{ package_root_dir }}/k8s/{{ kubernetes_version }}/kubernetes/server/bin/kubectl" 4 | dest: "/usr/local/bin" 5 | mode: 0755 6 | connection: local 7 | run_once: true 8 | 9 | - name: "Check Token.csv Exist Or Not" 10 | shell: 11 | "grep kubelet {{ cert_root_dir }}/token.csv" 12 | ignore_errors: true 13 | register: token_csv 14 | connection: local 15 | run_once: true 16 | 17 | - name: "Generate Token File" 18 | lineinfile: 19 | line: "{{ bootstrap_token }},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" 20 | path: "{{ cert_root_dir }}/token.csv" 21 | create: true 22 | when: token_csv is failed 23 | connection: local 24 | run_once: true 25 | 26 | - name: "Check Admin.kubeconfig Exist Or Not" 27 | shell: 28 | "ls {{ cert_root_dir }}/admin.kubeconfig" 29 | ignore_errors: true 30 | register: admin_kubeconfig 31 | connection: local 32 | run_once: true 33 | 34 | - name: "Generate Admin Kubeconfig" 35 | shell: | 36 | kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=https://{{ apiserver_master_addr }}:6443 --kubeconfig=admin.kubeconfig 37 | kubectl config set-credentials admin --client-certificate=./admin.pem --embed-certs=true --client-key=./admin-key.pem --kubeconfig=admin.kubeconfig 38 | kubectl config set-context default --cluster=kubernetes --user=admin --kubeconfig=admin.kubeconfig 39 | kubectl config use-context default --kubeconfig=admin.kubeconfig 40 | args: 41 | chdir: "{{ cert_root_dir }}" 42 | when: admin_kubeconfig is failed 43 | connection: local 44 | run_once: true 45 | 46 | - name: "Check Bootstrap.kubeconfig Exist Or Not" 47 | shell: 48 | "ls {{ cert_root_dir }}/bootstrap.kubeconfig" 49 | ignore_errors: true 50 | register: bootstrap_kubeconfig 51 | connection: local 52 | run_once: true 53 | 54 | - name: "Generate Bootstrap Kubeconfig" 55 | shell: | 56 | kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=https://{{ apiserver_master_addr }}:6443 --kubeconfig=bootstrap.kubeconfig 57 | kubectl config set-credentials kubelet-bootstrap --token={{ bootstrap_token }} --kubeconfig=bootstrap.kubeconfig 58 | kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig 59 | kubectl config use-context default --kubeconfig=bootstrap.kubeconfig 60 | args: 61 | chdir: "{{ cert_root_dir }}" 62 | when: bootstrap_kubeconfig is failed 63 | connection: local 64 | run_once: true 65 | 66 | - name: "Check Kube-proxy.kubeconfig Exist Or Not" 67 | shell: 68 | "ls {{ cert_root_dir 
}}/kube-proxy.kubeconfig" 69 | ignore_errors: true 70 | register: kube_proxy_kubeconfig 71 | connection: local 72 | run_once: true 73 | 74 | - name: "Generate Kube-proxy Kubeconfig" 75 | shell: | 76 | kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=https://{{ apiserver_master_addr }}:6443 --kubeconfig=kube-proxy.kubeconfig 77 | kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig 78 | kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig 79 | kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig 80 | args: 81 | chdir: "{{ cert_root_dir }}" 82 | when: kube_proxy_kubeconfig is failed 83 | connection: local 84 | run_once: true -------------------------------------------------------------------------------- /roles/master/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: create_dir 3 | include: create_dir.yml 4 | tags: create_dir 5 | - name: genarate_kube_config 6 | include: genarate_kube_config.yml 7 | tags: genarate_kube_config 8 | - name: distribute_k8s_file 9 | include: distribute_k8s_file.yml 10 | tags: distribute_k8s_file 11 | - name: distribute_apiserver 12 | include: distribute_apiserver.yml 13 | tags: distribute_apiserver 14 | - name: distribute_controller_manager 15 | include: distribute_controller_manager.yml 16 | tags: distribute_controller_manager 17 | - name: distribute_scheduler 18 | include: distribute_scheduler.yml 19 | tags: distribute_scheduler 20 | - name: create_some_roles 21 | include: create_some_roles.yml 22 | tags: create_some_roles 23 | - name: copy_admin_config 24 | include: copy_admin_config.yml 25 | tags: copy_admin_config -------------------------------------------------------------------------------- /roles/master/templates/kube-apiserver.j2: -------------------------------------------------------------------------------- 1 | KUBE_APISERVER_OPTS="--logtostderr=false \ 2 | --v=2 \ 3 | --log-dir={{ workdir}}/kubernetes/logs/kube-apiserver \ 4 | --etcd-servers={{ ETCD_SERVERS }} \ 5 | --bind-address=0.0.0.0 \ 6 | --secure-port=6443 \ 7 | --advertise-address={{ inventory_hostname }} \ 8 | --allow-privileged=true \ 9 | --service-cluster-ip-range={{ service_network }} \ 10 | --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction \ 11 | --authorization-mode=RBAC,Node \ 12 | --kubelet-https=true \ 13 | --enable-bootstrap-token-auth=true \ 14 | --token-auth-file={{ workdir}}/kubernetes/cfg/token.csv \ 15 | --service-node-port-range={{ service_port_range }} \ 16 | --kubelet-client-certificate={{ workdir}}/kubernetes/ssl/server.pem \ 17 | --kubelet-client-key={{ workdir}}/kubernetes/ssl/server-key.pem \ 18 | --tls-cert-file={{ workdir}}/kubernetes/ssl/server.pem \ 19 | --tls-private-key-file={{ workdir}}/kubernetes/ssl/server-key.pem \ 20 | --client-ca-file={{ workdir}}/kubernetes/ssl/ca.pem \ 21 | --service-account-key-file={{ workdir}}/kubernetes/ssl/ca-key.pem \ 22 | --etcd-cafile={{ workdir}}/kubernetes/ssl/ca.pem \ 23 | --etcd-certfile={{ workdir}}/kubernetes/ssl/server.pem \ 24 | --etcd-keyfile={{ workdir}}/kubernetes/ssl/server-key.pem \ 25 | --requestheader-client-ca-file={{ workdir}}/kubernetes/ssl/ca.pem \ 26 | 
--requestheader-extra-headers-prefix=X-Remote-Extra- \ 27 | --requestheader-group-headers=X-Remote-Group \ 28 | --requestheader-username-headers=X-Remote-User \ 29 | --proxy-client-cert-file={{ workdir}}/kubernetes/ssl/metrics-server.pem \ 30 | --proxy-client-key-file={{ workdir}}/kubernetes/ssl/metrics-server-key.pem \ 31 | --runtime-config=api/all=true \ 32 | --audit-log-maxage=30 \ 33 | --audit-log-maxbackup=3 \ 34 | --audit-log-maxsize=100 \ 35 | --audit-log-truncate-enabled=true \ 36 | --audit-log-path={{ workdir}}/kubernetes/logs/k8s-audit.log" -------------------------------------------------------------------------------- /roles/master/templates/kube-apiserver.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes API Server 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | EnvironmentFile=-{{ workdir}}/kubernetes/cfg/kube-apiserver 7 | ExecStart={{ workdir}}/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS 8 | Restart=on-failure 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /roles/master/templates/kube-controller-manager.j2: -------------------------------------------------------------------------------- 1 | KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \ 2 | --v=2 \ 3 | --log-dir={{ workdir}}/kubernetes/logs/kube-controller-manager \ 4 | --master=127.0.0.1:8080 \ 5 | --leader-elect=true \ 6 | --bind-address=0.0.0.0 \ 7 | --service-cluster-ip-range={{ service_network }} \ 8 | --cluster-name=kubernetes \ 9 | --cluster-signing-cert-file={{ workdir}}/kubernetes/ssl/ca.pem \ 10 | --cluster-signing-key-file={{ workdir}}/kubernetes/ssl/ca-key.pem \ 11 | --service-account-private-key-file={{ workdir}}/kubernetes/ssl/ca-key.pem \ 12 | --experimental-cluster-signing-duration=87600h0m0s \ 13 | --feature-gates=RotateKubeletServerCertificate=true \ 14 | --feature-gates=RotateKubeletClientCertificate=true \ 15 | --allocate-node-cidrs=true \ 16 | --cluster-cidr={{ pod_network }} \ 17 | --root-ca-file={{ workdir }}/kubernetes/ssl/ca.pem" -------------------------------------------------------------------------------- /roles/master/templates/kube-controller-manager.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Controller Manager 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | EnvironmentFile=-{{ workdir}}/kubernetes/cfg/kube-controller-manager 7 | ExecStart={{ workdir}}/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS 8 | Restart=on-failure 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /roles/master/templates/kube-scheduler.j2: -------------------------------------------------------------------------------- 1 | KUBE_SCHEDULER_OPTS="--logtostderr=false \ 2 | --v=2 \ 3 | --log-dir={{ workdir}}/kubernetes/logs/kube-scheduler \ 4 | --master=127.0.0.1:8080 \ 5 | --address=0.0.0.0 \ 6 | --leader-elect" -------------------------------------------------------------------------------- /roles/master/templates/kube-scheduler.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Scheduler 3 | Documentation=https://github.com/kubernetes/kubernetes 4 | 5 | [Service] 6 | EnvironmentFile=-{{ 
workdir}}/kubernetes/cfg/kube-scheduler 7 | ExecStart={{ workdir}}/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS 8 | Restart=on-failure 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /roles/master/templates/kubectl.sh.j2: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | export PATH=$PATH:{{ workdir }}/kubernetes/bin -------------------------------------------------------------------------------- /roles/metrics-server/files/metrics.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | kind: ClusterRole 4 | metadata: 5 | name: system:aggregated-metrics-reader 6 | labels: 7 | rbac.authorization.k8s.io/aggregate-to-view: "true" 8 | rbac.authorization.k8s.io/aggregate-to-edit: "true" 9 | rbac.authorization.k8s.io/aggregate-to-admin: "true" 10 | rules: 11 | - apiGroups: ["metrics.k8s.io"] 12 | resources: ["pods", "nodes"] 13 | verbs: ["get", "list", "watch"] 14 | --- 15 | apiVersion: rbac.authorization.k8s.io/v1 16 | kind: ClusterRoleBinding 17 | metadata: 18 | name: metrics-server:system:auth-delegator 19 | roleRef: 20 | apiGroup: rbac.authorization.k8s.io 21 | kind: ClusterRole 22 | name: system:auth-delegator 23 | subjects: 24 | - kind: ServiceAccount 25 | name: metrics-server 26 | namespace: kube-system 27 | --- 28 | apiVersion: rbac.authorization.k8s.io/v1 29 | kind: RoleBinding 30 | metadata: 31 | name: metrics-server-auth-reader 32 | namespace: kube-system 33 | roleRef: 34 | apiGroup: rbac.authorization.k8s.io 35 | kind: Role 36 | name: extension-apiserver-authentication-reader 37 | subjects: 38 | - kind: ServiceAccount 39 | name: metrics-server 40 | namespace: kube-system 41 | --- 42 | apiVersion: apiregistration.k8s.io/v1beta1 43 | kind: APIService 44 | metadata: 45 | name: v1beta1.metrics.k8s.io 46 | spec: 47 | service: 48 | name: metrics-server 49 | namespace: kube-system 50 | group: metrics.k8s.io 51 | version: v1beta1 52 | insecureSkipTLSVerify: true 53 | groupPriorityMinimum: 100 54 | versionPriority: 100 55 | --- 56 | apiVersion: v1 57 | kind: ServiceAccount 58 | metadata: 59 | name: metrics-server 60 | namespace: kube-system 61 | --- 62 | apiVersion: apps/v1 63 | kind: Deployment 64 | metadata: 65 | name: metrics-server 66 | namespace: kube-system 67 | labels: 68 | k8s-app: metrics-server 69 | spec: 70 | selector: 71 | matchLabels: 72 | k8s-app: metrics-server 73 | template: 74 | metadata: 75 | name: metrics-server 76 | labels: 77 | k8s-app: metrics-server 78 | spec: 79 | serviceAccountName: metrics-server 80 | volumes: 81 | # mount in tmp so we can safely use from-scratch images and/or read-only containers 82 | - name: tmp-dir 83 | emptyDir: {} 84 | containers: 85 | - name: metrics-server 86 | image: registry.cn-beijing.aliyuncs.com/liyongjian5179/metrics-server-amd64:v0.3.6 87 | imagePullPolicy: IfNotPresent 88 | resources: 89 | limits: 90 | cpu: 400m 91 | memory: 512Mi 92 | requests: 93 | cpu: 50m 94 | memory: 50Mi 95 | command: 96 | - /metrics-server 97 | - --kubelet-insecure-tls 98 | - --kubelet-preferred-address-types=InternalIP 99 | args: 100 | - --cert-dir=/tmp 101 | - --secure-port=4443 102 | ports: 103 | - name: main-port 104 | containerPort: 4443 105 | protocol: TCP 106 | securityContext: 107 | readOnlyRootFilesystem: true 108 | runAsNonRoot: true 109 | runAsUser: 1000 110 | volumeMounts: 111 | - name: tmp-dir 112 | mountPath: /tmp 113 
| nodeSelector: 114 | kubernetes.io/os: linux 115 | kubernetes.io/arch: "amd64" 116 | --- 117 | apiVersion: v1 118 | kind: Service 119 | metadata: 120 | name: metrics-server 121 | namespace: kube-system 122 | labels: 123 | kubernetes.io/name: "Metrics-server" 124 | kubernetes.io/cluster-service: "true" 125 | spec: 126 | selector: 127 | k8s-app: metrics-server 128 | ports: 129 | - port: 443 130 | protocol: TCP 131 | targetPort: main-port 132 | --- 133 | apiVersion: rbac.authorization.k8s.io/v1 134 | kind: ClusterRole 135 | metadata: 136 | name: system:metrics-server 137 | rules: 138 | - apiGroups: 139 | - "" 140 | resources: 141 | - pods 142 | - nodes 143 | - nodes/stats 144 | - namespaces 145 | - configmaps 146 | verbs: 147 | - get 148 | - list 149 | - watch 150 | --- 151 | apiVersion: rbac.authorization.k8s.io/v1 152 | kind: ClusterRoleBinding 153 | metadata: 154 | name: system:metrics-server 155 | roleRef: 156 | apiGroup: rbac.authorization.k8s.io 157 | kind: ClusterRole 158 | name: system:metrics-server 159 | subjects: 160 | - kind: ServiceAccount 161 | name: metrics-server 162 | namespace: kube-system -------------------------------------------------------------------------------- /roles/metrics-server/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: "Create Dir yaml" 2 | file: 3 | name: "{{ workdir }}/yamls" 4 | state: directory 5 | recurse: true 6 | 7 | - name: "Copy Metrics.yaml Files" 8 | copy: 9 | src: metrics.yaml 10 | dest: "{{workdir}}/yamls/metrics.yaml" 11 | 12 | - name: "Run 'kubectl apply -f metrics.yaml'" 13 | shell: 14 | "kubectl apply -f metrics.yaml" 15 | args: 16 | chdir: "{{ workdir }}/yamls" 17 | run_once: true -------------------------------------------------------------------------------- /roles/nginx/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: "Install Nginx Server" 2 | yum: 3 | name: "nginx" 4 | state: present 5 | 6 | - name: "Copy Apiserver Config" 7 | template: 8 | src: "lb.tcp.j2" 9 | dest: "/etc/nginx/conf.d/lb.tcp" 10 | 11 | - name: "Check Include Exist Or Not" 12 | shell: 13 | grep "tcp;" nginx.conf 14 | args: 15 | chdir: "/etc/nginx" 16 | register: nginx_include 17 | ignore_errors: yes 18 | 19 | - name: "Add lb.tcp To Nginx Config" 20 | lineinfile: 21 | path: /etc/nginx/nginx.conf 22 | insertafter: "include /usr/share/nginx/modules" 23 | line: "include /etc/nginx/conf.d/*.tcp;" 24 | when: nginx_include is failed 25 | 26 | - name: "Start Nginx Server" 27 | systemd: 28 | name: nginx 29 | state: restarted 30 | enabled: true 31 | daemon-reload: true 32 | -------------------------------------------------------------------------------- /roles/nginx/templates/lb.tcp.j2: -------------------------------------------------------------------------------- 1 | stream { 2 | upstream master { 3 | hash $remote_addr consistent; 4 | {% for host in groups['masters'] %} 5 | server {{ host }}:6443 max_fails=3 fail_timeout=30; 6 | {% endfor %} 7 | } 8 | 9 | server { 10 | listen 6443; 11 | proxy_pass master; 12 | } 13 | } -------------------------------------------------------------------------------- /roles/node/tasks/create_dir.yml: -------------------------------------------------------------------------------- 1 | - name: "Make Nodes Directory" 2 | file: 3 | name: "{{workdir}}/{{ item }}" 4 | state: directory 5 | recurse: true 6 | with_items: 7 | - kubernetes 8 | - kubernetes/bin 9 | - kubernetes/cfg 10 | - kubernetes/ssl 11 | - kubernetes/logs 12 | - 
kubernetes/logs/kubelet 13 | - kubernetes/logs/kube-proxy -------------------------------------------------------------------------------- /roles/node/tasks/distribute_k8s_file.yml: -------------------------------------------------------------------------------- 1 | - name: "Distribute K8s Node Binary File" 2 | copy: 3 | src: "{{ package_root_dir }}/k8s/{{ kubernetes_version }}/kubernetes/server/bin/{{ item }}" 4 | dest: "{{ workdir }}/kubernetes/bin" 5 | mode: 0755 6 | with_items: 7 | - kubelet 8 | - kube-proxy 9 | 10 | - name: "Distribute K8s Kubeconfigs" 11 | copy: 12 | src: "{{ cert_root_dir }}/{{ item }}" 13 | dest: "{{ workdir }}/kubernetes/cfg" 14 | with_items: 15 | - kube-proxy.kubeconfig 16 | - bootstrap.kubeconfig 17 | 18 | - name: "Distribute K8s Certs" 19 | copy: 20 | src: "{{ cert_root_dir }}/ca.pem" 21 | dest: "{{ workdir }}/kubernetes/ssl" 22 | 23 | - name: "Distribute Token.csv" 24 | copy: 25 | src: "{{ cert_root_dir }}/token.csv" 26 | dest: "{{ workdir }}/kubernetes/cfg" 27 | 28 | -------------------------------------------------------------------------------- /roles/node/tasks/distribute_kubelet.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Kubelet Configuration Files" 2 | template: 3 | src: "{{ item }}.j2" 4 | dest: "{{ workdir}}/kubernetes/cfg/{{ item }}" 5 | with_items: 6 | - kubelet.conf 7 | - kubelet-config.yml 8 | 9 | - name: "Copy Kubelet Systemd File" 10 | template: 11 | src: kubelet.service.j2 12 | dest: /usr/lib/systemd/system/kubelet.service 13 | 14 | - name: "Start Kubelet" 15 | systemd: 16 | name: kubelet 17 | state: restarted 18 | enabled: true 19 | daemon-reload: true -------------------------------------------------------------------------------- /roles/node/tasks/distribute_proxy.yml: -------------------------------------------------------------------------------- 1 | - name: "Copy Kube-proxy Configuration Files" 2 | template: 3 | src: "{{ item }}.j2" 4 | dest: "{{ workdir}}/kubernetes/cfg/{{ item }}" 5 | with_items: 6 | - kube-proxy.conf 7 | - kube-proxy-config.yml 8 | 9 | - name: "Copy Kube-proxy Systemd File" 10 | template: 11 | src: kube-proxy.service.j2 12 | dest: /usr/lib/systemd/system/kube-proxy.service 13 | 14 | - name: "Start Kube-proxy" 15 | systemd: 16 | name: kube-proxy 17 | state: restarted 18 | enabled: true 19 | daemon-reload: true -------------------------------------------------------------------------------- /roles/node/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: create_dir 3 | include: create_dir.yml 4 | tags: create_dir 5 | - name: distribute_k8s_file 6 | include: distribute_k8s_file.yml 7 | tags: distribute_k8s_file 8 | - name: distribute_proxy 9 | include: distribute_proxy.yml 10 | tags: distribute_proxy 11 | - name: distribute_kubelet 12 | include: distribute_kubelet.yml 13 | tags: distribute_kubelet 14 | -------------------------------------------------------------------------------- /roles/node/templates/kube-proxy-config.yml.j2: -------------------------------------------------------------------------------- 1 | kind: KubeProxyConfiguration 2 | apiVersion: kubeproxy.config.k8s.io/v1alpha1 3 | bindAddress: 0.0.0.0 # listen address 4 | metricsBindAddress: 0.0.0.0:10249 # metrics endpoint; monitoring systems scrape their data from here 5 | clientConnection: 6 | kubeconfig: {{ workdir }}/kubernetes/cfg/kube-proxy.kubeconfig # kubeconfig used to reach the apiserver 7 | hostnameOverride: {{ ansible_hostname }} # node name registered with k8s; must be unique 8 | clusterCIDR: {{ pod_network }} 9 | mode: {{ kubeproxy_mode }} 
10 | 11 | # To use ipvs mode explicitly: 12 | #mode: ipvs 13 | #ipvs: 14 | # scheduler: "rr" 15 | #iptables: 16 | # masqueradeAll: true -------------------------------------------------------------------------------- /roles/node/templates/kube-proxy.conf.j2: -------------------------------------------------------------------------------- 1 | KUBE_PROXY_OPTS="--logtostderr=false \ 2 | --v=2 \ 3 | --log-dir={{ workdir }}/kubernetes/logs/kube-proxy \ 4 | --config={{ workdir }}/kubernetes/cfg/kube-proxy-config.yml" -------------------------------------------------------------------------------- /roles/node/templates/kube-proxy.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Proxy 3 | After=network.target 4 | 5 | [Service] 6 | EnvironmentFile=-{{ workdir }}/kubernetes/cfg/kube-proxy.conf 7 | ExecStart={{ workdir }}/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS 8 | Restart=on-failure 9 | 10 | [Install] 11 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /roles/node/templates/kubelet-config.yml.j2: -------------------------------------------------------------------------------- 1 | kind: KubeletConfiguration # object kind 2 | apiVersion: kubelet.config.k8s.io/v1beta1 # API version 3 | address: 0.0.0.0 # listen address 4 | port: 10250 # kubelet serving port 5 | readOnlyPort: 10255 # read-only port exposed by the kubelet 6 | cgroupDriver: {{ cgroupdriver }} # must match the driver shown by docker info 7 | clusterDNS: 8 | - {{ coredns_service_ip }} 9 | clusterDomain: {{ cluster_domain }} # cluster domain 10 | failSwapOn: false # do not fail when swap is enabled 11 | 12 | # Authentication 13 | authentication: 14 | anonymous: 15 | enabled: false 16 | webhook: 17 | cacheTTL: 2m0s 18 | enabled: true 19 | x509: 20 | clientCAFile: {{ workdir }}/kubernetes/ssl/ca.pem 21 | 22 | # Authorization 23 | authorization: 24 | mode: Webhook 25 | webhook: 26 | cacheAuthorizedTTL: 5m0s 27 | cacheUnauthorizedTTL: 30s 28 | 29 | # Node resource reservation (eviction thresholds) 30 | evictionHard: 31 | imagefs.available: 15% 32 | memory.available: {{ eviction_hard_memory }} 33 | nodefs.available: 10% 34 | nodefs.inodesFree: 5% 35 | evictionPressureTransitionPeriod: 5m0s 36 | 37 | # Image garbage collection policy 38 | imageGCHighThresholdPercent: 85 39 | imageGCLowThresholdPercent: 80 40 | imageMinimumGCAge: 2m0s 41 | 42 | # Certificate rotation 43 | rotateCertificates: true # rotate the kubelet client certificate 44 | featureGates: 45 | RotateKubeletServerCertificate: true 46 | RotateKubeletClientCertificate: true 47 | 48 | maxOpenFiles: 1000000 49 | maxPods: 110 -------------------------------------------------------------------------------- /roles/node/templates/kubelet.conf.j2: -------------------------------------------------------------------------------- 1 | KUBELET_OPTS="--logtostderr=false \ 2 | --v=2 \ 3 | --log-dir={{ workdir }}/kubernetes/logs/kubelet \ 4 | --hostname-override={{ ansible_hostname }} \ 5 | --kubeconfig={{ workdir }}/kubernetes/cfg/kubelet.kubeconfig \ 6 | --bootstrap-kubeconfig={{ workdir }}/kubernetes/cfg/bootstrap.kubeconfig \ 7 | --config={{ workdir }}/kubernetes/cfg/kubelet-config.yml \ 8 | --cert-dir={{ workdir }}/kubernetes/ssl \ 9 | --network-plugin=cni \ 10 | --cni-conf-dir=/etc/cni/net.d \ 11 | --cni-bin-dir=/opt/cni/bin \ 12 | --root-dir={{ kubelet_storage_path }} \ 13 | --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 \ 14 | --system-reserved=memory={{ system_reserved_memory }} \ 15 | --kube-reserved=memory={{ kube_reserved_memory }}" -------------------------------------------------------------------------------- 
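A quick way to confirm which mode kube-proxy actually picked up from the rendered `kube-proxy-config.yml` after the node role has run (a minimal sketch, not part of the playbooks; it assumes `kubeproxy_mode` was set to `ipvs` in `group_vars/all` and that the init role installed `ipvsadm`):
```bash
# /proxyMode is served on the metrics port configured above (0.0.0.0:10249)
curl -s http://127.0.0.1:10249/proxyMode
# expected output: ipvs

# with ipvs active, every Service should have a virtual-server entry
ipvsadm -Ln
```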
/roles/node/templates/kubelet.service.j2: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet 3 | After=docker.service 4 | Requires=docker.service 5 | 6 | [Service] 7 | EnvironmentFile=-{{ workdir }}/kubernetes/cfg/kubelet.conf 8 | ExecStart={{ workdir }}/kubernetes/bin/kubelet $KUBELET_OPTS 9 | Restart=on-failure 10 | KillMode=process 11 | 12 | [Install] 13 | WantedBy=multi-user.target -------------------------------------------------------------------------------- /site.yml: -------------------------------------------------------------------------------- 1 | # Install nginx; comment this play out if you do not need it 2 | - name: Nginx 3 | hosts: lb 4 | roles: 5 | - nginx 6 | tags: nginx 7 | 8 | # Initialize the environment 9 | - name: Init Environment 10 | hosts: k8s 11 | roles: 12 | - init 13 | tags: init 14 | 15 | # Create certificates 16 | - name: Create Certs 17 | hosts: local 18 | roles: 19 | - cert 20 | tags: cert 21 | 22 | # Install etcd 23 | - name: Install Etcd Cluster 24 | hosts: masters 25 | roles: 26 | - etcd 27 | tags: etcd 28 | 29 | # Install masters 30 | - name: Install Master Cluster 31 | hosts: masters 32 | roles: 33 | - master 34 | tags: master 35 | 36 | # Install nodes 37 | - name: Install Node Cluster 38 | hosts: nodes 39 | roles: 40 | - node 41 | tags: node 42 | 43 | # Install the network plugin 44 | - name: Install Network Plugins 45 | hosts: nodes 46 | roles: 47 | - { role: flannel, tags: flannel, when: network_type == 'flannel' } 48 | - { role: calico, tags: calico, when: network_type == 'calico' } 49 | tags: network 50 | 51 | # Install coredns 52 | - name: Install Coredns 53 | hosts: masters 54 | roles: 55 | - coredns 56 | tags: coredns 57 | 58 | # Install metrics-server 59 | - name: Install Metrics-server 60 | hosts: masters 61 | roles: 62 | - metrics-server 63 | tags: metrics 64 | 65 | # Label and taint the master nodes 66 | - name: Masters -> Label and Taints 67 | hosts: masters 68 | roles: 69 | - label_master 70 | tags: make_master_labels_and_taints -------------------------------------------------------------------------------- /tests/myapp.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | # Source: tests/busybox.yaml 3 | apiVersion: v1 4 | kind: Pod 5 | metadata: 6 | name: busybox 7 | namespace: default 8 | spec: 9 | containers: 10 | - name: busybox 11 | # Make sure you use the busybox:1.28 image (or an earlier version) for any tests; the latest versions have an upstream bug that breaks nslookup 12 | #image: busybox:1.28 13 | # includes the curl command 14 | image: radial/busyboxplus 15 | command: 16 | - sleep 17 | - "3600" 18 | imagePullPolicy: IfNotPresent 19 | restartPolicy: Always 20 | --- 21 | # Source: tests/myapp-deployment.yaml 22 | apiVersion: apps/v1 23 | kind: Deployment 24 | metadata: 25 | labels: 26 | app: myapp 27 | name: myapp 28 | namespace: default 29 | spec: 30 | replicas: 1 31 | selector: 32 | matchLabels: 33 | app: myapp 34 | template: 35 | metadata: 36 | labels: 37 | app: myapp 38 | spec: 39 | containers: 40 | - name: myapp 41 | image: ikubernetes/myapp:v1 42 | resources: {} 43 | ports: 44 | - name: http 45 | containerPort: 80 46 | dnsPolicy: ClusterFirst 47 | restartPolicy: Always 48 | --- 49 | # Source: tests/myapp-service.yaml 50 | apiVersion: v1 51 | kind: Service 52 | metadata: 53 | labels: 54 | app: myapp 55 | name: myapp 56 | namespace: default 57 | spec: 58 | ports: 59 | - port: 80 60 | protocol: TCP 61 | targetPort: 80 62 | selector: 63 | app: myapp -------------------------------------------------------------------------------- /tools/clean.sh: -------------------------------------------------------------------------------- 1 
| #!/bin/bash 2 | ansible -i ./inventory/hosts masters -m systemd -a 'name=kube-apiserver state=stopped enabled=no' 3 | ansible -i ./inventory/hosts masters -m systemd -a 'name=kube-controller-manager state=stopped enabled=no' 4 | ansible -i ./inventory/hosts masters -m systemd -a 'name=kube-scheduler state=stopped enabled=no' 5 | ansible -i ./inventory/hosts etcd -m systemd -a 'name=etcd state=stopped enabled=no' 6 | ansible -i ./inventory/hosts k8s -m systemd -a 'name=kubelet state=stopped enabled=no' 7 | ansible -i ./inventory/hosts k8s -m systemd -a 'name=kube-proxy state=stopped enabled=no' 8 | ansible -i ./inventory/hosts k8s -m systemd -a 'name=flanneld state=stopped enabled=no' 9 | ansible -i ./inventory/hosts k8s -m systemd -a 'name=docker state=stopped enabled=no' 10 | ansible -i ./inventory/hosts k8s -m yum -a 'name=docker-ce state=absent' 11 | ansible -i ./inventory/hosts k8s -m yum -a 'name=docker-ce-cli state=absent' 12 | ansible -i ./inventory/hosts masters -m shell -a 'mv -f /root/.kube/config /tmp/' 13 | # Unmount the tmpfs mount points under the kubelet pods directory 14 | ansible -i inventory/hosts k8s -m raw -a "umount \$(df -HT | grep '/opt/kubelet/pods' | awk '{print \$7}')" 15 | ansible -i ./inventory/hosts k8s -m shell -a 'rm -rf /opt/kubernetes /opt/etcd /opt/cni /etc/cni /var/run/calico /etc/calico /var/run/flannel /opt/yamls /var/lib/kubelet /etc/docker /var/lib/docker /opt/docker /var/lib/dockershim /opt/kubelet' 16 | ansible -i ./inventory/hosts k8s -m shell -a "rm -f /usr/lib/systemd/system/{docker,etcd,flanneld,kubelet,kube-proxy,kube-apiserver,kube-controller-manager,kube-scheduler}.service" 17 | ansible -i ./inventory/hosts k8s -m shell -a 'ip link set dev cni0 down; ip link set dev docker0 down; ip link set dev flannel.1 down' 18 | ansible -i ./inventory/hosts k8s -m shell -a 'ip link delete cni0; ip link delete docker0; ip link delete flannel.1' 19 | ansible -i ./inventory/hosts k8s -m shell -a 'ipvsadm -C && iptables -F' 20 | ansible -i ./inventory/hosts k8s -m shell -a 'modprobe -r ipip' -------------------------------------------------------------------------------- /tools/move_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | #cni_plugins_version=0.8.6 4 | #etcd_version=3.4.9 5 | #kubernetes_version=1.18.8 6 | #flannel_version=0.12.0 7 | #calico_version=3.15.0 8 | 9 | cni_plugins_version=$(awk '/^cni_plugins_version/ {print $2}' ./group_vars/all) 10 | etcd_version=$(awk '/^etcd_version/ {print $2}' ./group_vars/all) 11 | kubernetes_version=$(awk '/^kubernetes_version/ {print $2}' ./group_vars/all) 12 | flannel_version=$(awk '/^flannel_version/ {print $2}' ./group_vars/all) 13 | calico_version=$(awk '/^calico_version/ {print $2}' ./group_vars/all) 14 | network_type=$(awk '/^network_type/ {print $2}' ./group_vars/all) 15 | 16 | cd /opt/pkg/ 17 | 18 | if [ -f ./cni-plugins-linux-amd64-v${cni_plugins_version}.tgz ];then 19 | mkdir -p cni-plugins/${cni_plugins_version} 20 | mv cni-plugins-linux-amd64-v${cni_plugins_version}.tgz ./cni-plugins/${cni_plugins_version}/ && \ 21 | cd ./cni-plugins/${cni_plugins_version}/ && \ 22 | tar xf cni-plugins-linux-amd64-v${cni_plugins_version}.tgz 23 | echo "[INFO] cni-plugins downloaded and unpacked" 24 | cd - &>/dev/null 25 | fi 26 | if [ -f ./flannel-v${flannel_version}-linux-amd64.tar.gz ];then 27 | mkdir -p flannel/${flannel_version} 28 | mv ./flannel-v${flannel_version}-linux-amd64.tar.gz ./flannel/${flannel_version}/ && \ 29 | cd ./flannel/${flannel_version}/ && \ 30 | tar xf 
flannel-v${flannel_version}-linux-amd64.tar.gz 31 | echo "[INFO] flannel downloaded and unpacked" 32 | cd - &>/dev/null 33 | fi 34 | if [ -f ./calicoctl ];then 35 | mkdir -p calico/${calico_version} 36 | mv ./calicoctl ./calico/${calico_version}/ && \ 37 | cd ./calico/${calico_version}/ && \ 38 | echo "[INFO] calicoctl downloaded" 39 | cd - &>/dev/null 40 | fi 41 | if [ -f ./kubernetes-server-linux-amd64.tar.gz ];then 42 | mkdir -p k8s/${kubernetes_version} 43 | mv kubernetes-server-linux-amd64.tar.gz ./k8s/${kubernetes_version}/ && \ 44 | cd ./k8s/${kubernetes_version}/ && \ 45 | tar xf kubernetes-server-linux-amd64.tar.gz 46 | echo "[INFO] kubernetes-server downloaded and unpacked" 47 | cd - &>/dev/null 48 | fi 49 | if [ -f etcd-v${etcd_version}-linux-amd64.tar.gz ];then 50 | mkdir -p etcd 51 | mv etcd-v${etcd_version}-linux-amd64.tar.gz ./etcd/ && \ 52 | cd ./etcd/ && \ 53 | tar xf etcd-v${etcd_version}-linux-amd64.tar.gz 54 | echo "[INFO] etcd downloaded and unpacked" 55 | cd - &>/dev/null 56 | fi 57 | if [ -f cfssl_linux-amd64 ];then 58 | mkdir -p cfssl 59 | mv cfssl_linux-amd64 ./cfssl/ && \ 60 | mv cfssljson_linux-amd64 ./cfssl/ && \ 61 | mv cfssl-certinfo_linux-amd64 ./cfssl/ 62 | echo "[INFO] cfssl tools downloaded" 63 | fi --------------------------------------------------------------------------------
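After a full `site.yml` run, the manifest under `tests/` doubles as a smoke test for the network plugin, coredns and kube-proxy together (a minimal sketch; it assumes `kubectl` is already on the PATH via the generated `/etc/profile.d/kubectl.sh`):
```bash
# Deploy the sample busybox pod plus the myapp Deployment and Service
kubectl apply -f tests/myapp.yaml

# Wait until both pods are Running
kubectl get pods -o wide

# From inside busybox, resolve and curl the myapp Service via coredns and kube-proxy
kubectl exec busybox -- nslookup myapp
kubectl exec busybox -- curl -s http://myapp
```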