├── .gitattributes ├── README.md ├── Tomcat服务日志收集.md ├── images ├── calico.jpg ├── dashboard.jpg └── flannel.jpg ├── 创建TLS证书和秘钥.md ├── 部署Calico服务.md ├── 部署CoreDNS服务.md ├── 部署Etcd集群服务.md ├── 部署Flannel服务.md ├── 部署Haproxy服务.md ├── 部署Kubrnetes-Master节点.md ├── 部署Kubrnetes-Node节点.md └── 配置kubeconfig访问多个集群.md /.gitattributes: -------------------------------------------------------------------------------- 1 | *.md linguist-language=shell 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes-install 2 | 3 | 本篇文章主要介绍kubernetes v1.15.x版本(启用TLS bootstrap认证方式)的安装步骤和注意事项;而不是使用kubeadm等自动化方式来部署集群。 4 | 5 | 安装步骤和流程,Kubernetes搭建部分参考文章:`和我一步步部署kubernetes集群`。主要修改:移除Token认证,增加了`Node TLS bootstrap`认证和使用`IPVS`实现Kubernetes入口流量负载均衡,以及`Master`高可用等;后面组件增加了`Calico`网络,`Traefik Ingress`,`Prometheus`监控报警,日志搜集等等;以及简单说明各个组件和服务的原理和作用。 6 | 7 | 本篇文章主要包含几部分:Kubernetes ipvs模式,CoreDNS,监控报警,日志搜集,docker私有仓库等等一系列解决方案;安装过程中,可能不会详细说明各组件的启动参数和作用;启动参数的详细介绍请参考其他的优秀博客和官方文档。 8 | 9 | 10 | 所有服务的搭建过程均在CentOS7,内核版本:4.4.xx系统上操作通过,其他系统未验证,有任何问题欢迎反馈。 11 | 12 | ## 环境准备 13 | 14 | ### 系统环境 15 | 16 | | 主机名称 | IP地址 | 系统 | 内核 | 节点角色 | 17 | | :-------: | :----------: | :---------: | :-------------------------: | :-----------------: | 18 | | k8s-node1 | 172.16.0.101 | CentOS-7.4 | 4.4.166-1.el7.elrepo.x86_64 | Matser, Node, Etcd | 19 | | k8s-node2 | 172.16.0.102 | CentOS-7.4 | 4.4.166-1.el7.elrepo.x86_64 | Node, Etcd | 20 | | k8s-node3 | 172.16.0.103 | CentOS-7.4 | 4.4.166-1.el7.elrepo.x86_64 | Node, Etcd | 21 | | haproxy | 172.16.0.104 | CentOS-7.4 | 4.4.166-1.el7.elrepo.x86_64 | Haproxy, Keepalived | 22 | | haproxy | 172.16.0.105 | CentOS-7.4 | 4.4.166-1.el7.elrepo.x86_64 | Haproxy, Keepalived | 23 | 24 | ### 网络规划 25 | 26 | | 网络类型 | 地址网段 | 27 | | :-----: | :----------: | 28 | | 物理网段 | 172.16.0.0/24 | 29 | | 容器网段 | 10.240.0.0/16 | 30 | | SVC网段 | 10.241.0.0/16 | 31 | 32 | ## kube-ansible快速部署 33 | 新增基于`ansible-playbook`方式实现的自动化部署`kubernetes`高可用集群环境,具体流程参考--[kube-ansible快速部署](https://github.com/Donyintao/kube-ansible) 34 | 35 | ## 手动安装步骤流程 36 | 37 | 1. [CA证书和秘钥](创建TLS证书和秘钥.md) 38 | 1. [Etcd集群服务](部署Etcd集群服务.md) 39 | 1. [网络方案选型]() 40 | 1. [Flannel](部署Flannel服务.md) 41 | 1. [Calico](部署Calico服务.md) 42 | 1. [Haproxy服务](部署Haproxy服务.md) 43 | 1. [Master高可用服务之Haproxy服务](部署Haproxy服务.md) 44 | 1. [Kubrnetes服务](https://github.com/Donyintao/Kubernetes-install) 45 | 1. [Master install](部署Kubrnetes-Master节点.md) 46 | 1. [Node install](部署Kubrnetes-Node节点.md) 47 | 1. [配置kubeconfig访问多个集群](配置kubeconfig访问多个集群.md) 48 | 1. [Kubrnetes组件](https://github.com/Donyintao/Kubernetes-install) 49 | 1. [CoreDNS](部署CoreDNS服务.md) 50 | 1. [Dashboard](https://github.com/Donyintao/kubernetes-dashboard/) 51 | 1. [Ingress controllers](https://github.com/Donyintao/Kubernetes-install) 52 | 1. [Ingress nginx](https://github.com/Donyintao/nginx-ingress/) 53 | 2. [Ingress traefik](https://github.com/Donyintao/traefik/) 54 | 1. [Kubrnetes监控](https://github.com/Donyintao/Kubernetes-install) 55 | 1. [Prometheus](https://github.com/Donyintao/Prometheus/) 56 | 1. [Grafana](https://github.com/Donyintao/Grafana/) 57 | 1. [Alertmanager](https://github.com/Donyintao/Alertmanager/) 58 | 1. [应用服务日志搜集](https://github.com/Donyintao/Kubernetes-install) 59 | 1. [Tomcat服务日志搜集](Tomcat服务日志收集.md) 60 | 1. 
[后续相关组件待补充......](后续相关组件待补充.md) 61 | -------------------------------------------------------------------------------- /Tomcat服务日志收集.md: -------------------------------------------------------------------------------- 1 | # Kubernetes应用服务日志收集 2 | 3 | ## 方案选择 4 | 5 | Kubernetes官方提供了EFK的日志收集解决方案,但是这种方案并不适合所有的业务场景,它本身就有一些局限性,例如: 6 | 7 | * 所有日志都必须是out前台输出,真实业务场景中无法保证所有日志都在前台输出 8 | * 只能有一个日志输出文件,而真实业务场景中往往有多个日志输出文件 9 | * 已经有自己的ELK集群且有专人维护,没有必要再在kubernetes上做一个日志收集服务 10 | 11 | 12 | ## 日志收集解决方案 13 | 方案:结合当前服务的场景,单独创建一个日志收集的容器和应用的容器一起运行在同一个pod中; 14 | 优点:低耦合,扩展性强,方便维护和升级; 15 | 缺点:需要对kubernetes的yaml文件进行单独配置,略显繁琐。 16 | 17 | 该方案虽然不一定是最优的解决方案,但是在在扩展性、个性化、部署和后期维护方面都能做到均衡,也适合们现在的使用场景,因此选择该方案。 18 | 19 | ## 构建镜像 20 | 21 | * 编写Filebeat Dockerfile文件 22 | 23 | ```bash 24 | # mkdir filebeat && cd filebeat 25 | # vim Dockerfile 26 | FROM alpine:3.10 27 | 28 | ENV FILEBEAT_VERSION 7.3.0 29 | 30 | RUN apk add --no-cache --virtual=build-dependencies curl iputils busybox-extras &&\ 31 | curl -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz && \ 32 | tar fx filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz && \ 33 | mv filebeat-${FILEBEAT_VERSION}-linux-x86_64 /filebeat && \ 34 | ln -sf /filebeat/filebeat /bin/filebeat && \ 35 | rm -rf filebeat-${FILEBEAT_VERSION}-linux-x86_64.tar.gz && \ 36 | rm -rf fields.yml filebeat.reference.yml LICENSE.txt NOTICE.txt README.md 37 | 38 | CMD ["/filebeat/filebeat", "-e", "-c", "/filebeat/filebeat.yml"] 39 | ``` 40 | 41 | * 构建Docker镜像 42 | ```bash 43 | # 实际环境,改成自己公司的私有镜像仓库地址 44 | # docker build -t docker.io/filebeat-agent:v7.3.0 . 45 | ``` 46 | 47 | ## 测试实例 48 | ```bash 49 | # 注意事项: 将tomcat容器的/usr/local/tomcat/logs目录挂载到filebeat容器的/logs目录 50 | # vim filebeat-agent.yaml 51 | apiVersion: apps/v1beta1 52 | kind: Deployment 53 | metadata: 54 | name: tomcat-testing 55 | spec: 56 | replicas: 1 57 | template: 58 | metadata: 59 | labels: 60 | k8s-app: tomcat-testing 61 | spec: 62 | containers: 63 | - name: tomcat 64 | image: tomcat:8.5 65 | ports: 66 | - containerPort: 8080 67 | volumeMounts: 68 | - name: tomcat-logs 69 | mountPath: /usr/local/tomcat/logs 70 | - name: filebeat 71 | image: docker.io/filebeat-agent:v7.3.0 72 | volumeMounts: 73 | - name: tomcat-logs 74 | mountPath: /logs 75 | - name: filebeat-config 76 | mountPath: /filebeat 77 | volumes: 78 | - name: tomcat-logs 79 | emptyDir: {} 80 | - name: filebeat-config 81 | configMap: 82 | name: filebeat-config 83 | --- 84 | apiVersion: v1 85 | kind: ConfigMap 86 | metadata: 87 | name: filebeat-config 88 | data: 89 | filebeat.yml: | 90 | filebeat.prospectors: 91 | - input_type: log 92 | paths: 93 | - /logs/catalina.*.log 94 | multiline: 95 | pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}' 96 | negate: true 97 | match: after 98 | 99 | output.kafka: 100 | hosts: ["192.168.3.95:9092"] 101 | topic: 'k8s-filebeat' 102 | partition.round_robin: 103 | reachable_only: false 104 | required_acks: 1 105 | compression: gzip 106 | max_retries: 3 107 | bulk_max_size: 4096 108 | max_message_bytes: 1000000 109 | # kubectl apply -f filebeat-agent.yaml 110 | ``` 111 | 112 | ## 验证测试 113 | 114 | 待补充...... 
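在验证文档补充完整之前,可以先按下面的思路做一个简单检查(仅为示例:Kafka地址与topic沿用上文配置,kafka-console-consumer.sh的路径请按实际环境调整):

```bash
# 1. 确认Pod中tomcat和filebeat两个容器均处于Running状态
# kubectl get pod -l k8s-app=tomcat-testing
# 2. 查看filebeat容器日志,确认已读取/logs/catalina.*.log并成功连接Kafka
# kubectl logs deployment/tomcat-testing -c filebeat
# 3. 消费k8s-filebeat这个topic,确认能收到Tomcat日志
# bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.95:9092 --topic k8s-filebeat --from-beginning
```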
115 | -------------------------------------------------------------------------------- /images/calico.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Donyintao/Kubernetes-install/dfde37bd1ee9475381dde1af63ef3834b7e0b6ba/images/calico.jpg -------------------------------------------------------------------------------- /images/dashboard.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Donyintao/Kubernetes-install/dfde37bd1ee9475381dde1af63ef3834b7e0b6ba/images/dashboard.jpg -------------------------------------------------------------------------------- /images/flannel.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Donyintao/Kubernetes-install/dfde37bd1ee9475381dde1af63ef3834b7e0b6ba/images/flannel.jpg -------------------------------------------------------------------------------- /创建TLS证书和秘钥.md: -------------------------------------------------------------------------------- 1 | # 创建TLS证书和秘钥 2 | 3 | `kubernetes`系统各组件需要使用`TLS`证书对通信进行加密,本文档使用`CloudFlare`的PKI工具集[cfssl](https://github.com/cloudflare/cfssl)来生成Certificate Authority(CA)证书和秘钥文件,CA 是自签名的证书,用来签名后续创建的其它TLS证书。 4 | 5 | ## 安装CFSSL 6 | 7 | ``` bash 8 | # wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl 9 | # wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson 10 | # wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo 11 | # chmod +x /usr/local/bin/cfssl* 12 | ``` 13 | 14 | ## 创建CA证书(Certificate Authority) 15 | 16 | 创建CA证书配置文件 17 | 18 | ``` bash 19 | # mkdir /tmp/sslTmp && cd /tmp/sslTmp 20 | # cfssl print-defaults config > ca-config.json 21 | # cat > ca-config.json << EOF 22 | { 23 | "signing": { 24 | "default": { 25 | "expiry": "87600h" 26 | }, 27 | "profiles": { 28 | "kubernetes": { 29 | "usages": [ 30 | "signing", 31 | "key encipherment", 32 | "server auth", 33 | "client auth" 34 | ], 35 | "expiry": "87600h" 36 | } 37 | } 38 | } 39 | } 40 | EOF 41 | ``` 42 | 43 | 创建CA证书签名请求 44 | 45 | ``` bash 46 | # cfssl print-defaults csr > ca-csr.json 47 | # cat > ca-csr.json << EOF 48 | { 49 | "CN": "kubernetes", 50 | "key": { 51 | "algo": "rsa", 52 | "size": 2048 53 | }, 54 | "names": [ 55 | { 56 | "C": "CN", 57 | "ST": "BeiJing", 58 | "L": "BeiJing", 59 | "O": "k8s", 60 | "OU": "System" 61 | } 62 | ] 63 | } 64 | EOF 65 | # cfssl gencert -initca ca-csr.json | cfssljson -bare ca 66 | 67 | ``` 68 | 69 | ## 创建kube-apiserver证书 70 | 71 | 创建kube-apiserver证书签名请求 72 | 73 | 注意:默认kube-apiserver证书没有权限访问API接口, 会提示: Unauthorized 74 | 75 | 注意:如果kube-apiserver证书访问API接口, 需要设置: ["O": "system:masters"] 76 | 77 | 注意:此处需要将`SVC网段`(首个IP地址)、`etcd`、`k8s-master`节点的`IP(包含VIP)`地址全部加上. 
78 | 79 | ``` bash 80 | # cfssl print-defaults csr > kube-apiserver-csr.json 81 | # cat > kube-apiserver-csr.json << EOF 82 | { 83 | "CN": "kubernetes", 84 | "hosts": [ 85 | "127.0.0.1", 86 | "10.241.0.1", 87 | "172.16.0.101", 88 | "172.16.0.102", 89 | "172.16.0.103", 90 | "kubernetes", 91 | "kubernetes.default", 92 | "kubernetes.default.svc", 93 | "kubernetes.default.svc.testing", 94 | "kubernetes.default.svc.testing.com" 95 | ], 96 | "key": { 97 | "algo": "rsa", 98 | "size": 2048 99 | }, 100 | "names": [ 101 | { 102 | "C": "CN", 103 | "ST": "BeiJing", 104 | "L": "BeiJing", 105 | "O": "k8s", 106 | "OU": "System" 107 | } 108 | ] 109 | } 110 | EOF 111 | ``` 112 | 113 | 生成kubernetes证书和私钥 114 | 115 | ``` bash 116 | # cfssl gencert -ca=ca.pem \ 117 | -ca-key=ca-key.pem \ 118 | -config=ca-config.json \ 119 | -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver 120 | ``` 121 | 122 | ## 创建kube-controller-manager证书 123 | 124 | 创建kube-controller-manager证书签名请求 125 | 126 | ``` bash 127 | # cfssl print-defaults csr > kube-controller-manager-csr.json 128 | # cat > kube-controller-manager-csr.json << EOF 129 | { 130 | "CN": "system:kube-controller-manager", 131 | "hosts": [], 132 | "key": { 133 | "algo": "rsa", 134 | "size": 2048 135 | }, 136 | "names": [ 137 | { 138 | "C": "CN", 139 | "ST": "BeiJing", 140 | "L": "BeiJing", 141 | "O": "system:kube-controller-manager", 142 | "OU": "System" 143 | } 144 | ] 145 | } 146 | EOF 147 | ``` 148 | 生成kube-controller-manager证书和私钥 149 | 150 | ``` bash 151 | # cfssl gencert -ca=ca.pem \ 152 | -ca-key=ca-key.pem \ 153 | -config=ca-config.json \ 154 | -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager 155 | ``` 156 | 157 | ## 创建kube-scheduler证书 158 | 159 | 创建kube-scheduler证书签名请求 160 | 161 | ``` bash 162 | # cfssl print-defaults csr > kube-scheduler-csr.json 163 | # cat > kube-scheduler-csr.json << EOF 164 | { 165 | "CN": "system:kube-scheduler", 166 | "hosts": [], 167 | "key": { 168 | "algo": "rsa", 169 | "size": 2048 170 | }, 171 | "names": [ 172 | { 173 | "C": "CN", 174 | "ST": "BeiJing", 175 | "L": "BeiJing", 176 | "O": "system:kube-scheduler", 177 | "OU": "System" 178 | } 179 | ] 180 | } 181 | EOF 182 | ``` 183 | 184 | 生成kube-scheduler证书和私钥 185 | 186 | ``` bash 187 | # cfssl gencert -ca=ca.pem \ 188 | -ca-key=ca-key.pem \ 189 | -config=ca-config.json \ 190 | -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler 191 | ``` 192 | 193 | ## 创建kubelet证书 194 | 195 | 创建kubelet证书签名请求 196 | 197 | ``` bash 198 | # cfssl print-defaults csr > kubelet-csr.json 199 | # cat > kubelet-csr.json << EOF 200 | { 201 | "CN": "kubelet", 202 | "hosts": [], 203 | "key": { 204 | "algo": "rsa", 205 | "size": 2048 206 | }, 207 | "names": [ 208 | { 209 | "C": "CN", 210 | "ST": "BeiJing", 211 | "L": "BeiJing", 212 | "O": "system:masters", 213 | "OU": "System" 214 | } 215 | ] 216 | } 217 | EOF 218 | ``` 219 | 220 | 生成kubelet证书和私钥 221 | 222 | ``` bash 223 | # cfssl gencert -ca=ca.pem \ 224 | -ca-key=ca-key.pem \ 225 | -config=ca-config.json \ 226 | -profile=kubernetes kubelet-csr.json | cfssljson -bare kubelet 227 | ``` 228 | 229 | ## 创建kube-proxy证书 230 | 231 | 创建kube-proxy证书签名请求 232 | 233 | ``` bash 234 | # cfssl print-defaults csr > kube-proxy-csr.json 235 | # cat > kube-proxy-csr.json << EOF 236 | { 237 | "CN": "system:kube-proxy", 238 | "hosts": [], 239 | "key": { 240 | "algo": "rsa", 241 | "size": 2048 242 | }, 243 | "names": [ 244 | { 245 | "C": "CN", 246 | "ST": "BeiJing", 247 | "L": "BeiJing", 248 | 
"O": "system:node-proxier", 249 | "OU": "System" 250 | } 251 | ] 252 | } 253 | EOF 254 | ``` 255 | 256 | 生成kube-proxy客户端证书和私钥 257 | 258 | ``` bash 259 | # cfssl gencert -ca=ca.pem \ 260 | -ca-key=ca-key.pem \ 261 | -config=ca-config.json \ 262 | -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy 263 | ``` 264 | 265 | ## 证书校验 266 | 267 | 校验kube-apiserver证书 268 | 269 | ``` bash 270 | # cfssl-certinfo -cert kube-apiserver.pem 271 | { 272 | "subject": { 273 | "common_name": "kubernetes", 274 | "country": "CN", 275 | "organization": "system:masters", 276 | "organizational_unit": "System", 277 | "locality": "BeiJing", 278 | "province": "BeiJing", 279 | "names": [ 280 | "CN", 281 | "BeiJing", 282 | "BeiJing", 283 | "system:masters", 284 | "System", 285 | "kubernetes" 286 | ] 287 | }, 288 | "issuer": { 289 | "common_name": "kubernetes", 290 | "country": "CN", 291 | "organization": "k8s", 292 | "organizational_unit": "System", 293 | "locality": "BeiJing", 294 | "province": "BeiJing", 295 | "names": [ 296 | "CN", 297 | "BeiJing", 298 | "BeiJing", 299 | "k8s", 300 | "System", 301 | "kubernetes" 302 | ] 303 | }, 304 | "serial_number": "533666226632105718421042600083075622217402341392", 305 | "sans": [ 306 | "kubernetes", 307 | "kubernetes.default", 308 | "kubernetes.default.svc", 309 | "kubernetes.default.svc.cluster", 310 | "kubernetes.default.svc.cluster.local", 311 | "127.0.0.1", 312 | "172.21.0.1", 313 | "172.16.30.171", 314 | "172.16.30.172", 315 | "172.16.30.173" 316 | ], 317 | "not_before": "2017-07-31T08:57:00Z", 318 | "not_after": "2018-07-31T08:57:00Z", 319 | "sigalg": "SHA256WithRSA", 320 | "authority_key_id": "6B:68:CF:57:62:6B:60:7E:F3:2C:AC:1A:20:6F:27:6A:EA:84:98:A8", 321 | "subject_key_id": "3C:6C:67:14:69:F8:42:2A:5C:3C:28:65:B6:A3:95:80:49:A6:6:C", 322 | "pem": "-----BEGIN CERTIFICATE----- 323 | MIIEkDCCA3igAwIBAgIUEdNzDqRQMswGL4KikzjnizkfBS4wDQYJKoZIhvcNAQEL 324 | BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl 325 | aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr 326 | dWJlcm5ldGVzMB4XDTE3MDcyNzA5MjcwMFoXDTE4MDcyNzA5MjcwMFowcDELMAkG 327 | A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxFzAV 328 | BgNVBAoTDnN5c3RlbTptYXN0ZXJzMQ8wDQYDVQQLEwZTeXN0ZW0xEzARBgNVBAMT 329 | Cmt1YmVybmV0ZXMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQClXFE1 330 | qVQ9HiHDEbyfDMqsrO8p0Rn02ta+xWAbmJhwgNstFfuW0Lz9XmtpclDRfF2U5QOJ 331 | X7TrTZz2xhjRxXzUb/4EU035VH273tb+3+orUbggMcUzavpbm0zFqqSeSTxIoWhw 332 | wiIUG33BR7i6kvyH7eHraq/vYn8NbG2t8ufoJFgPys6zjC9rDWqNlBXume69n8BD 333 | HTfDQUgUVLZDDZyef+KwvtziHUtEgEakaI9MgDV3CdkMAvXrnIeiMHQzRBen3gli 334 | zk4i+OCWd9oI7cB7oqvXUm+pTEAzOPQaGkkq7A2R8UHTFgOyAkw8saKwRvBacWhm 335 | BDa/+CVYKfiNBzDRAgMBAAGjggErMIIBJzAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0l 336 | BBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMAwGA1UdEwEB/wQCMAAwHQYDVR0OBBYE 337 | FHfVB5vi0gEh2rGjBzWVr9+2Jrs9MB8GA1UdIwQYMBaAFGHhP32/2ThF4VlOuaKj 338 | iKbG/CMcMIGnBgNVHREEgZ8wgZyCCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVm 339 | YXVsdIIWa3ViZXJuZXRlcy5kZWZhdWx0LnN2Y4Iea3ViZXJuZXRlcy5kZWZhdWx0 340 | LnN2Yy5jbHVzdGVygiRrdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9j 341 | YWyHBH8AAAGHBAr+AAGHBMCoA1+HBMCoA2CHBMCoA2MwDQYJKoZIhvcNAQELBQAD 342 | ggEBABlTNX+MVTfViozPrwH6QkXfbHTH9kpsm9SPZhpzjON4pAcY5kP3t6DInX9D 343 | SdivyuVn3jJz6BaBIoUh5RJRsq6ArMpbl1g7dyZnHZXPjLtMAFYGgnBjH6XVEQ1f 344 | FZbSZjvbti/l7SH7f9aqtywzqNCDqmwx+2gNoWwd11y0A7zxMVK28l6apbMfcVHL 345 | rHLKikoV+sLmvKCLdh7/qrTToono0j5nMzuQWfNU3UsNHOZZ1uNUQsuurv95LUWG 346 | 5t3PKpRoi0Z5kePBdLoD1CHqS1DEPkZt+sj6e6vqQSBAM8usNEUwi7ASOY2zAaMG 347 | 
aDz1i4/WZhJSUQyDfx7HzJpAmBE= 348 | -----END CERTIFICATE-----" 349 | } 350 | ``` 351 | ## 分发证书 352 | 353 | 将`TLS`证书拷贝到`Kubernetes Master`和`Kubernetes node`的配置目录 354 | 355 | ``` bash 356 | # mkdir -p /etc/kubernetes/ssl && cp /tmp/sslTmp/*.pem /etc/kubernetes/ssl 357 | ``` -------------------------------------------------------------------------------- /部署Calico服务.md: -------------------------------------------------------------------------------- 1 | # 部署Calico服务 2 | 3 | ## 什么是Calico? 4 | `Calico`是一个纯三层的方案,为虚机及容器提供多主机间通信,没有使用Overlay Network驱动,采用虚拟路由代替虚拟交换,每一台虚拟路由器通过BGP协议传播可达信息(路由)到其他虚拟或物理路由器。 5 | 6 | ## Calico网络模型 7 | 8 | ### IPIP模式 9 | 从字面来理解,就是把一个IP数据包又套在一个IP包里,即把IP层封装到IP层的一个tunnel,看起来似乎是浪费,实则不然。它的作用其实基本上就相当于一个基于IP层的网桥!一般来说,普通的网桥是基于mac层的,根本不需IP,而这个IPIP 则是通过两端的路由做一个tunnel,把两个本来不通的网络通过点对点连接起来。 10 | 11 | ### BGP模式 12 | 边界网关协议(Border Gateway Protocol, BGP)是互联网上一个核心的去中心化自治路由协议。它通过维护IP路由表或‘前缀’表来实现自治系统(AS)之间的可达性,属于矢量路由协议。BGP不使用传统的内部网关协议(IGP)的指标,而使用基于路径、网络策略或规则集来决定路由。因此,它更适合被称为矢量性协议,而不是路由协议。BGP,通俗的讲就是讲接入到机房的多条线路(如电信、联通、移动等)融合为一体,实现多线单IP,BGP 机房的优点:服务器只需要设置一个IP地址,最佳访问路由是由网络上的骨干路由器根据路由跳数与其它技术指标来确定的,不会占用服务器的任何系统。 13 | 14 | ## Calico工作流程 15 | 16 | ![Calico](./images/calico.jpg) 17 | 18 | ## Installing Calico 19 | 20 | 说明: Calico使用Deployment方式部署,需要在Kubernetes集群运行正常后,再执行部署操作。 21 | 22 | 说明: Calico网络`kubelet`配置必须增加`--network-plugin=cni`选项,否则`Pod`无法正常获取Calico分配的网段IP地址。 23 | 24 | 25 | ``` bash 26 | # mkdir -p /tmp/calico && cd /tmp/calico 27 | # wget https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/calico.yaml 28 | # cp calico.yaml calico.yaml.bak 29 | # diff calico.yaml calico.yaml.bak 30 | 17,19c17,19 31 | < etcd-key: (cat /etc/etcd/ssl/etcd-key.pem | base64 -w 0) 32 | < etcd-cert: (cat /etc/etcd/ssl/etcd.pem | base64 -w 0) 33 | < etcd-ca: (cat /etc/etcd/ssl/ca.pem | base64 -w 0) 34 | --- 35 | > # etcd-key: null 36 | > # etcd-cert: null 37 | > # etcd-ca: null 38 | 30c30 39 | < etcd_endpoints: "https://172.16.0.101:2379,https://172.16.0.102:2379,https://172.16.0.103:2379" 40 | --- 41 | > etcd_endpoints: "http://:" 42 | 34,36c34,36 43 | < etcd_ca: "/calico-secrets/etcd-ca" 44 | < etcd_cert: "/calico-secrets/etcd-cert" 45 | < etcd_key: "/calico-secrets/etcd-key" 46 | --- 47 | > etcd_ca: "" # "/calico-secrets/etcd-ca" 48 | > etcd_cert: "" # "/calico-secrets/etcd-cert" 49 | > etcd_key: "" # "/calico-secrets/etcd-key" 50 | 300c300 51 | < value: "CrossSubnet" 52 | --- 53 | > value: "Always" 54 | 311c311 55 | < value: "10.240.0.0/16" 56 | --- 57 | > value: "192.168.0.0/16" 58 | # kubectl apply -f calico.yaml 59 | ``` 60 | 61 | ### Installing calicoctl 62 | 63 | ``` bash 64 | # curl -L -o /usr/local/bin/calicoctl https://github.com/projectcalico/calicoctl/releases/download/v3.6.0/calicoctl 65 | # chmod +x /usr/local/bin/calicoctl 66 | # mkdir -p /etc/calico 67 | # vim /etc/calico/calicoctl.cfg 68 | apiVersion: projectcalico.org/v3 69 | kind: CalicoAPIConfig 70 | metadata: 71 | spec: 72 | datastoreType: "etcdv3" 73 | etcdEndpoints: "https://172.16.0.101:2379,https://172.16.0.102:2379,https://172.16.0.103:2379" 74 | etcdKeyFile: "/etc/etcd/ssl/etcd-key.pem" 75 | etcdCertFile: "/etc/etcd/ssl/etcd.pem" 76 | etcdCACertFile: "/etc/etcd/ssl/ca.pem" 77 | ``` 78 | 79 | ### 验证 80 | 81 | ``` bash 82 | # calicoctl get ipPool -o wide 83 | NAME CIDR NAT IPIPMODE DISABLED 84 | default-ipv4-ippool 10.240.0.0/16 true CrossSubnet false 85 | 86 | # calicoctl node status 87 | Calico process is running. 
88 | 89 | IPv4 BGP status 90 | +--------------+-------------------+-------+------------+-------------+ 91 | | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | 92 | +--------------+-------------------+-------+------------+-------------+ 93 | | 172.16.0.102 | node-to-node mesh | up | 2019-01-03 | Established | 94 | | 172.16.0.103 | node-to-node mesh | up | 2019-01-03 | Established | 95 | +--------------+-------------------+-------+------------+-------------+ 96 | ``` -------------------------------------------------------------------------------- /部署CoreDNS服务.md: -------------------------------------------------------------------------------- 1 | # 部署CoreDNS服务 2 | 3 | ## CoreDNS简介 4 | 5 | `CoreDNS`是一个DNS服务器,和Caddy Server具有相同的模型:它链接插件。CoreDNS是云本土计算基金会启动阶段项目。 6 | 7 | `CoreDNS`是`SkyDNS`的继任者;`SkyDNS`是一个薄层,暴露了DNS中的etcd中的服务。`CoreDNS`建立在这个想法上,是一个通用的DNS服务器,可以与多个后端(etcd,kubernetes等)进行通信。 8 | 9 | `CoreDNS`旨在成为一个快速灵活的DNS服务器。这里的关键灵活指的是:使用`CoreDNS`,您可以使用DNS数据进行所需的操作;还可以自已写插件来实现DNS的功能。 10 | 11 | CoreDNS可以通过UDP/TCP(旧式的DNS),TLS(RFC 7858)和gRPC(不是标准)监听DNS请求。 12 | 13 | ## 下载CoreDNS镜像文件 14 | 15 | ``` bash 16 | # mkdir -p coredns && cd coredns 17 | # wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base 18 | # cp coredns.yaml.base coredns.yaml 19 | ``` 20 | 21 | ## 配置CoreDNS服务 22 | 23 | 注意:需要将`__PILLAR__DNS__DOMAIN__`设置为集群环境中的域名, 这个域名需要和`kubelet`的`--cluster-domain`参数值一致. 24 | 25 | 注意:需要将`__PILLAR__DNS__SERVER__`设置为集群环境的IP, 这个`IP`需要和`kubelet`的`--cluster-dns`参数值一致. 26 | 27 | ``` bash 28 | # diff coredns.yaml coredns.yaml.base 29 | 67c67 30 | < kubernetes linux-testing.com. in-addr.arpa ip6.arpa { 31 | --- 32 | > kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa { 33 | 115c115 34 | < image: coredns/coredns:1.2.6 35 | --- 36 | > image: k8s.gcr.io/coredns:1.2.6 37 | 180c180 38 | < clusterIP: 172.21.0.254 39 | --- 40 | > clusterIP: __PILLAR__DNS__SERVER__ 41 | ``` 42 | ## 安装CoreDNS服务 43 | 44 | ``` bash 45 | # kubectl apply -f coredns.yaml 46 | # kubectl get pod -n kube-system 47 | NAME READY STATUS RESTARTS AGE 48 | coredns-5984fb8cbb-4ss5v 1/1 Running 0 19s 49 | coredns-5984fb8cbb-79wnz 1/1 Running 0 20s 50 | ``` 51 | ## 验证CoreDNS功能 52 | 53 | 待补充... 
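在验证内容补充完整之前,可以先用下面的方式做一个快速检查(镜像与Pod名称仅为示意):

``` bash
# 启动一个临时busybox容器,验证Service域名解析(busybox:1.28的nslookup行为较为稳定)
# kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default
# 若能解析出kubernetes服务的ClusterIP(按本文SVC网段10.241.0.0/16的规划应为10.241.0.1),说明CoreDNS工作正常
```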
54 | -------------------------------------------------------------------------------- /部署Etcd集群服务.md: -------------------------------------------------------------------------------- 1 | # 搭建etcd集群服务 2 | 3 | ## Etcd服务应用场景 4 | 5 | 要问`etcd`是什么?很多人第一反应可能是一个键值存储仓库,却没有重视官方定义的后半句,用于配置共享和服务发现。`etcd`作为一个受到`ZooKeeper`与`doozer`启发而催生的项目,除了拥有与之类似的功能外,更专注于以下四点: 6 | 7 | + 简单:基于HTTP+JSON的API让你用curl就可以轻松使用。 8 | + 安全:可选SSL客户认证机制。 9 | + 快速:每个实例每秒支持一千次写操作。 10 | + 可信:使用Raft算法充分实现了分布式。 11 | 12 | 分布式系统中的数据分为控制数据和应用数据。etcd的使用场景默认处理的数据都是控制数据,对于应用数据,只推荐数据量很小,但是更新访问频繁的情况。应用场景有如下几类: 13 | 14 | + 场景一:服务发现(Service Discovery) 15 | + 场景二:消息发布与订阅 16 | + 场景三:负载均衡 17 | + 场景四:分布式通知与协调 18 | + 场景五:分布式锁、分布式队列 19 | + 场景六:集群监控与Leader竞选 20 | 21 | 举个最简单的例子,如果你需要一个分布式存储仓库来存储配置信息,并且希望这个仓库读写速度快、支持高可用、部署简单、支持http接口,那么就可以使用etcd。 22 | 23 | ## 创建Etcd证书 24 | ``` bash 25 | # cat > etcd-csr.json << EOF 26 | { 27 | "CN": "etcd", 28 | "hosts": [ 29 | "127.0.0.1", 30 | "172.16.0.101", 31 | "172.16.0.102", 32 | "172.16.0.103" 33 | ], 34 | "key": { 35 | "algo": "rsa", 36 | "size": 2048 37 | }, 38 | "names": [ 39 | { 40 | "C": "CN", 41 | "ST": "BeiJing", 42 | "L": "BeiJing", 43 | "O": "k8s", 44 | "OU": "System" 45 | } 46 | ] 47 | } 48 | EOF 49 | ``` 50 | 51 | 生成Etcd证书和私钥 52 | 53 | ``` bash 54 | # cfssl gencert -ca=ca.pem \ 55 | -ca-key=ca-key.pem \ 56 | -config=ca-config.json \ 57 | -profile=kubernetes etcd-csr.json | cfssljson -bare etcd 58 | ``` 59 | 60 | ## 安装Etcd服务 61 | 62 | ``` bash 63 | # yum -y install etcd 64 | # cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak_$(date +%Y%m%d) 65 | ``` 66 | 67 | ## 配置Etcd服务 68 | 69 | ``` bash 70 | # 拷贝Etcd证书 71 | # mkdir /etc/etcd/ssl 72 | # cp /tmp/sslTmp/ca*.pem /etc/etcd/ssl 73 | # cp /tmp/sslTmp/etcd*.pem /etc/etcd/ssl 74 | # chown -R etcd: /etc/etcd/ssl 75 | # 配置ETCD服务 76 | # vim /etc/etcd/etcd.conf 77 | ETCD_NAME=etcd_node1 // 节点名称 78 | ETCD_DATA_DIR="/var/lib/etcd/etcd_node1.etcd" 79 | ETCD_LISTEN_PEER_URLS="https://172.16.0.101:2380" 80 | ETCD_LISTEN_CLIENT_URLS="https://172.16.0.101:2379,https://127.0.0.1:2379" // 必须增加127.0.0.1否则启动会报错 81 | ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.101:2380" 82 | ETCD_INITIAL_CLUSTER="etcd_node1=https://172.16.0.101:2380,etcd_node2=https://172.16.0.102:2380,etcd_node3=https://172.16.0.103:2380" // 集群IP地址 83 | ETCD_INITIAL_CLUSTER_STATE="new" // 初始化集群,第二次启动时将状态改为: "existing" 84 | ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" 85 | ETCD_ADVERTISE_CLIENT_URLS="https://172.16.0.101:2379" 86 | ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem" 87 | ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem" 88 | ETCD_CLIENT_CERT_AUTH="true" 89 | ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem" 90 | ETCD_AUTO_TLS="true" 91 | ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem" 92 | ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem" 93 | ETCD_PEER_CLIENT_CERT_AUTH="true" 94 | ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem" 95 | ETCD_PEER_AUTO_TLS="true" 96 | # systemctl enable etcd.service 97 | # systemctl start etcd.service && systemctl status etcd.service 98 | ``` 99 | 100 | 设置API版本 默认版本:v2 [根据实际情况设定] 101 | 102 | ``` bash 103 | # echo 'export ETCDCTL_API=3' >> /etc/profile 104 | # . 
/etc/profile 105 | ``` 106 | 107 | ## 查看和验证etcd集群服务状态 108 | 109 | 查看etcd集群成员 110 | 111 | ``` bash 112 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 113 | --ca-file=/etc/etcd/ssl/ca.pem \ 114 | --cert-file=/etc/etcd/ssl/etcd.pem \ 115 | --key-file=/etc/etcd/ssl/etcd-key.pem member list 116 | 7e218077496bccf9: name=etcd_node1 peerURLs=https://172.16.0.101:2380 clientURLs=https://172.16.0.101:2379 isLeader=true 117 | 92f1b7c038a4300a: name=etcd_node2 peerURLs=https://172.16.0.102:2380 clientURLs=https://172.16.0.102:2379 isLeader=false 118 | c8611e11b142e510: name=etcd_node3 peerURLs=https://172.16.0.103:2380 clientURLs=https://172.16.0.103:2379 isLeader=false 119 | ``` 120 | 121 | 验证etcd集群状态 122 | 123 | ``` bash 124 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 125 | --ca-file=/etc/etcd/ssl/ca.pem \ 126 | --cert-file=/etc/etcd/ssl/etcd.pem \ 127 | --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health 128 | member 7e218077496bccf9 is healthy: got healthy result from https://172.16.0.101:2379 129 | member 92f1b7c038a4300a is healthy: got healthy result from https://172.16.0.102:2379 130 | member c8611e11b142e510 is healthy: got healthy result from https://172.16.0.103:2379 131 | cluster is healthy //表示安装成功 132 | ``` 133 | 134 | ## etcd集群增加节点 135 | 136 | 将目标节点添加到etcd集群 137 | 138 | 注意: 需要重新生成新的证书,将新的IP地址添加进去。 139 | 140 | ``` bash 141 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 142 | --ca-file=/etc/etcd/ssl/ca.pem \ 143 | --cert-file=/etc/etcd/ssl/etcd.pem \ 144 | --key-file=/etc/etcd/ssl/etcd-key.pem member add etcd_node4 https://172.16.0.104:2380 145 | Added member named etcd_etcd4 with ID 5282b16e923af92f to cluster 146 | 147 | ETCD_NAME="etcd_node4" 148 | ETCD_INITIAL_CLUSTER="etcd_node4=https://172.16.0.104:2380,etcd_node1=https://172.16.0.101:2380,etcd_node2=https://172.16.0.102:2380,etcd_node3=https://172.16.0.103:2380" 149 | ETCD_INITIAL_CLUSTER_STATE="existing" 150 | ``` 151 | 152 | 查看成员列表. 
此时etcd_node4节点状态为: unstarted 153 | 154 | ``` bash 155 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 156 | --ca-file=/etc/etcd/ssl/ca.pem \ 157 | --cert-file=/etc/etcd/ssl/etcd.pem \ 158 | --key-file=/etc/etcd/ssl/etcd-key.pem member list 159 | 5282b16e923af92f[unstarted]: peerURLs=https://172.16.0.104:2380 160 | 7e218077496bccf9: name=etcd_node1 peerURLs=https://172.16.0.101:2380 clientURLs=https://172.16.0.101:2379 isLeader=true 161 | 92f1b7c038a4300a: name=etcd_node2 peerURLs=https://172.16.0.102:2380 clientURLs=https://172.16.0.102:2379 isLeader=false 162 | c8611e11b142e510: name=etcd_node3 peerURLs=https://172.16.0.103:2380 clientURLs=https://172.16.0.103:2379 isLeader=false 163 | ``` 164 | 165 | 配置etcd_node4节点的etcd.conf文件 166 | 167 | ``` bash 168 | # vim /etc/etcd/etcd.conf 169 | ETCD_NAME="etcd_node4" // 节点名称,对应etcd添加节点命令时输出的信息 170 | ETCD_DATA_DIR="/var/lib/etcd/etcd_node4.etcd" 171 | ETCD_LISTEN_PEER_URLS="https://172.16.0.104:2380" 172 | ETCD_LISTEN_CLIENT_URLS="https://172.16.0.104:2379,https://127.0.0.1:2379" 173 | ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.0.104:2380" 174 | ETCD_INITIAL_CLUSTER="etcd_node4=https://172.16.0.104:2380,etcd_node1=https://172.16.0.101:2380,etcd_node2=https://172.16.0.101:2380,etcd_node3=https://172.16.0.103:2380" // 集群列表,对应etcd添加节点命令时输出的信息 175 | ETCD_INITIAL_CLUSTER_STATE="existing" // 集群状态,对应etcd添加节点命令时输出的信息 176 | ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" 177 | ETCD_ADVERTISE_CLIENT_URLS="https://172.16.0.104:2379" 178 | ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem" 179 | ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem" 180 | ETCD_CLIENT_CERT_AUTH="true" 181 | ETCD_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem" 182 | ETCD_AUTO_TLS="true" 183 | ETCD_PEER_CERT_FILE="/etc/etcd/ssl/etcd.pem" 184 | ETCD_PEER_KEY_FILE="/etc/etcd/ssl/etcd-key.pem" 185 | ETCD_PEER_CLIENT_CERT_AUTH="true" 186 | ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ssl/ca.pem" 187 | ETCD_PEER_AUTO_TLS="true" 188 | # systemctl enable etcd.service 189 | # systemctl start etcd.service && systemctl status etcd.service 190 | ``` 191 | 192 | 再次查看成员列表. 
此时etcd_etcd4节点状态已经显示正常 193 | 194 | ``` bash 195 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 196 | --ca-file=/etc/etcd/ssl/ca.pem \ 197 | --cert-file=/etc/etcd/ssl/etcd.pem \ 198 | --key-file=/etc/etcd/ssl/etcd-key.pem member list 199 | 5282b16e923af92f: name=etcd_node4 peerURLs=https://172.16.0.104:2380 clientURLs=https://172.16.0.104:2379 isLeader=false 200 | 7e218077496bccf9: name=etcd_node1 peerURLs=https://172.16.0.101:2380 clientURLs=https://172.16.0.101:2379 isLeader=true 201 | 92f1b7c038a4300a: name=etcd_node2 peerURLs=https://172.16.0.102:2380 clientURLs=https://172.16.0.102:2379 isLeader=false 202 | c8611e11b142e510: name=etcd_node3 peerURLs=https://172.16.0.103:2380 clientURLs=https://172.16.0.103:2379 isLeader=false 203 | ``` 204 | 205 | ## etcd集群删除节点 206 | 207 | 删除etcd_etcd4节点 208 | 209 | ``` bash 210 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 211 | --ca-file=/etc/etcd/ssl/ca.pem \ 212 | --cert-file=/etc/etcd/ssl/etcd.pem \ 213 | --key-file=/etc/etcd/ssl/etcd-key.pem member remove 5282b16e923af92f 214 | Removed member 5282b16e923af92f from cluster 215 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 216 | --ca-file=/etc/etcd/ssl/ca.pem \ 217 | --cert-file=/etc/etcd/ssl/etcd.pem \ 218 | --key-file=/etc/etcd/ssl/etcd-key.pem member list 219 | 7e218077496bccf9: name=etcd_node1 peerURLs=https://172.16.0.101:2380 clientURLs=https://172.16.0.101:2379 isLeader=true 220 | 92f1b7c038a4300a: name=etcd_node2 peerURLs=https://172.16.0.102:2380 clientURLs=https://172.16.0.102:2379 isLeader=false 221 | c8611e11b142e510: name=etcd_node3 peerURLs=https://172.16.0.103:2380 clientURLs=https://172.16.0.103:2379 isLeader=false 222 | ``` 223 | -------------------------------------------------------------------------------- /部署Flannel服务.md: -------------------------------------------------------------------------------- 1 | # 搭建Flannel服务 2 | 3 | ## 什么是Flannel? 
4 | `Flannel`是`CoreOS`团队针对`Kubernetes`设计的一个覆盖网络(Overlay Network)工具,其目的在于帮助每一个使用`Kuberentes`的`CoreOS`主机拥有一个完整的子网。 5 | 6 | ## Flannel工作原理 7 | `flannel`为全部的容器使用一个`network`,然后在每个`host`上从`network`中划分一个子网`subnet`。`host`上的容器创建网络时,从`subnet`中划分一个ip给容器。`flannel`不存在所谓的控制节点,而是每个`host`上的`flanneld`从一个etcd中获取相关数据,然后声明自己的子网网段,并记录在etcd中。如果有`host`对数据转发时,从`etcd`中查询到该子网所在的`host`的`ip`,然后将数据发往对应`host`上的`flanneld`,交由其进行转发。 8 | 9 | ## Flannel架构介绍 10 | 11 | ![](https://github.com/coreos/flannel/blob/master/packet-01.png) 12 | 13 | ## 创建Flanneld证书 14 | ``` bash 15 | # cd /tmp/sslTmp 16 | # cat > flanneld-csr.json << EOF 17 | { 18 | "CN": "flanneld", 19 | "hosts": [], 20 | "key": { 21 | "algo": "rsa", 22 | "size": 2048 23 | }, 24 | "names": [ 25 | { 26 | "C": "CN", 27 | "ST": "BeiJing", 28 | "L": "BeiJing", 29 | "O": "k8s", 30 | "OU": "System" 31 | } 32 | ] 33 | } 34 | EOF 35 | ``` 36 | 37 | 生成Flanneld证书和私钥 38 | 39 | ``` bash 40 | # cfssl gencert -ca=ca.pem \ 41 | -ca-key=ca-key.pem \ 42 | -config=ca-config.json \ 43 | -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld 44 | ``` 45 | 46 | ## 创建Pod Network 47 | 48 | 注意:flanneld v0.11.0版本目前不支持etcd v3, 使用etcd v2 API写入配置key和网段数据 49 | 50 | 注意:容器网段地址`10.240.0/16`, SVC(DNS)网段地址`10.241.0.0/16` 51 | 52 | ``` bash 53 | # mkdir -p /etc/flannel/ssl 54 | # cp /tmp/sslTmp/{ca.pem,flanneld.pem,flanneld-key.pem} /etc/flannel/ssl 55 | # etcdctl --endpoints=https://127.0.0.1:2379 \ 56 | --ca-file=/etc/flannel/ssl/ca.pem \ 57 | --cert-file=/etc/flannel/ssl/flanneld.pem \ 58 | --key-file=/etc/flannel/ssl/flanneld-key.pem \ 59 | set /flannel/network/config '{ "Network": "10.240.0.0/16", "Backend": { "Type": "host-gw" } }' 60 | ``` 61 | 62 | ## 安装flannel服务 63 | 64 | ``` bash 65 | # yum -y install flannel 66 | # vim /etc/sysconfig/flanneld 67 | # Flanneld configuration options 68 | 69 | # etcd url location. Point this to the server where etcd runs 70 | FLANNEL_ETCD_ENDPOINTS="https://172.16.0.101:2379,https://172.16.0.102:2379,https://172.16.0.103:2379" 71 | 72 | # etcd config key. 
This is the configuration key that flannel queries 73 | # For address range assignment 74 | FLANNEL_ETCD_PREFIX="/flannel/network" 75 | 76 | # Any additional options that you want to pass 77 | FLANNEL_OPTIONS="-etcd-cafile=/etc/flannel/ssl/ca.pem -etcd-certfile=/etc/flannel/ssl/flanneld.pem -etcd-keyfile=/etc/flannel/ssl/flanneld-key.pem -iface=eth0 -ip-masq" 78 | 79 | # systemctl enable flanneld && systemctl restart flanneld && systemctl status flanneld 80 | ``` 81 | 82 | ## 安装Docker CE服务 83 | ``` bash 84 | # wget -P /etc/yum.repos.d/ https://download.docker.com/linux/centos/docker-ce.repo 85 | # yum list docker-ce.x86_64 --showduplicates | sort -r 86 | # yum -y install docker-ce-17.09.1.ce-1.el7.centos.x86_64 87 | ``` 88 | ## 配置docker启动服务 89 | 90 | ``` bash 91 | # vim /etc/docker/daemon.json 92 | { 93 | "registry-mirrors": ["http://d7eabb7d.m.daocloud.io"], 94 | "exec-opts": ["native.cgroupdriver=systemd"], 95 | "graph": "/data/docker" 96 | } 97 | 98 | # vim /usr/lib/systemd/system/docker.service 99 | [Unit] 100 | Description=Docker Application Container Engine 101 | Documentation=https://docs.docker.com 102 | After=network-online.target firewalld.service flanneld.service 103 | Wants=network-online.target 104 | 105 | [Service] 106 | Type=notify 107 | ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS 108 | ExecReload=/bin/kill -s HUP $MAINPID 109 | LimitNOFILE=infinity 110 | LimitNPROC=infinity 111 | LimitCORE=infinity 112 | TimeoutStartSec=0 113 | Delegate=yes 114 | KillMode=process 115 | Restart=on-failure 116 | StartLimitBurst=3 117 | StartLimitInterval=60s 118 | 119 | [Install] 120 | WantedBy=multi-user.target 121 | 122 | # systemctl enable docker && systemctl restart docker && systemctl status docker 123 | ``` 124 | ## 验证Docker服务获取IP是否正常 125 | 126 | 注意:如果Docker服务没有获取正确的IP地址,请检查Docker启动脚本是否配置`$DOCKER_NETWORK_OPTIONS`参数;并检查flannel服务是否正常启动。 127 | 128 | ``` bash 129 | # ifconfig docker0 130 | docker0: flags=4163 mtu 1500 131 | inet 10.240.36.1 netmask 255.255.255.0 broadcast 0.0.0.0 132 | ether 02:42:d0:0b:23:be txqueuelen 0 (Ethernet) 133 | RX packets 39657261 bytes 7409081483 (6.9 GiB) 134 | RX errors 0 dropped 0 overruns 0 frame 0 135 | TX packets 40524935 bytes 23758435104 (22.1 GiB) 136 | TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 137 | ``` 138 | -------------------------------------------------------------------------------- /部署Haproxy服务.md: -------------------------------------------------------------------------------- 1 | # 部署haproxy + keepalived服务 2 | 3 | ## Keepalived高可用方案 4 | 5 | 说明: keepalived软件主要是通过VRRP协议实现高可用功能的,因此还可以作为其他服务的高可用解决方案;下面的解决方案并非完美解决方案,仅供参考学习。 6 | 7 | + `keepalived`在运行过程中周期检查本机的haproxy进程状态,如果检测到haproxy进程异常,则触发重新选主的过程,VIP将飘移到新选出来的主节点,从而实现VIP的高可用。 8 | 9 | ## Kubernetes高可用方案 10 | 11 | `kubernetes`的`Master`节点为三台主机,当前示例的haproxy监听的端口是`8443`,与`kube-apiserver`的端口`6443`不同,避免冲突。 12 | 13 | `kubernetes`组件相关组件`kube-controller-manager`、`kube-scheduler`、`kubelet`、`kube-proxy`等均都通过`VIP`和`haproxy`监听的`8443`端口访问`kube-apiserver`服务。 14 | 15 | ## 安装haproxy服务 16 | ``` bash 17 | # yum install haproxy -y 18 | ``` 19 | 20 | ## 配置haproxy服务 21 | ``` bash 22 | # cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak_$(date '+%Y%m%d') 23 | # vim /etc/haproxy/haproxy.cfg 24 | global 25 | log 127.0.0.1 local3 26 | maxconn 20480 27 | chroot /var/lib/haproxy 28 | user haproxy 29 | group haproxy 30 | nbproc 8 31 | daemon 32 | quiet 33 | 34 | defaults 35 | log global 36 | mode tcp 37 | option tcplog 38 | option dontlognull 39 | option redispatch 40 | option forwardfor 41 | option 
http-pretend-keepalive 42 | retries 3 43 | redispatch 44 | contimeout 5000 45 | clitimeout 50000 46 | srvtimeout 50000 47 | 48 | frontend kube_https *:8443 49 | mode tcp 50 | maxconn 20480 51 | default_backend kube_backend 52 | 53 | backend kube_backend 54 | balance roundrobin 55 | server kube-master-01 172.16.0.101:6443 check inter 5000 fall 3 rise 3 weight 1 56 | server kube-master-02 172.16.0.102:6443 check inter 5000 fall 3 rise 3 weight 1 57 | server kube-master-03 172.16.0.103:6443 check inter 5000 fall 3 rise 3 weight 1 58 | 59 | listen haproxy-status 60 | bind 0.0.0.0:18443 61 | mode http 62 | stats refresh 30s 63 | stats uri /haproxy-status 64 | stats realm welcome login\ Haproxy 65 | stats auth admin:admin 66 | 67 | # systemctl enable haproxy 68 | # systemctl restart haproxy 69 | # netstat -ntpl|grep haproxy 70 | tcp 0 0 0.0.0.0:8443 0.0.0.0:* LISTEN 7456/haproxy 71 | tcp 0 0 0.0.0.0:18443 0.0.0.0:* LISTEN 7456/haproxy 72 | ``` 73 | 74 | ## 安装keepalived服务 75 | ``` bash 76 | # yum install keepalived -y 77 | ``` 78 | 79 | ## 配置haproxy服务健康检查脚本 80 | ``` bash 81 | # vim /etc/keepalived/haproxy_check.sh 82 | #!/bin/bash 83 | 84 | flag=$(systemctl status haproxy &> /dev/null;echo $?) 85 | 86 | if [[ $flag != 0 ]];then 87 | echo "haproxy is down,close the keepalived" 88 | systemctl stop keepalived 89 | fi 90 | # chmod +x /etc/keepalived/haproxy_check.sh 91 | ``` 92 | 93 | ## 配置keepalived服务 94 | ``` bash 95 | # cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak_$(date '+%Y%m%d') 96 | # vim /etc/keepalived/keepalived.conf 97 | ! Configuration File for keepalived 98 | 99 | global_defs { 100 | notification_email { 101 | failover@firewall.loc 102 | sysadmin@firewall.loc 103 | } 104 | notification_email_from Alexandre.Cassen@firewall.loc 105 | smtp_server 127.0.0.1 106 | smtp_connect_timeout 30 107 | router_id LVS_DEVEL 108 | } 109 | 110 | vrrp_script haproxy_check { 111 | script "/etc/keepalived/haproxy_check.sh" 112 | interval 5 113 | } 114 | 115 | vrrp_instance VI_1 { 116 | state MASTER // 在备节点设置为BACKUP 117 | interface eth0 118 | virtual_router_id 51 119 | priority 200 // 备节点的阀值小于主节点 120 | nopreempt // MASTER节点故障恢复后不重新抢回VIP 121 | authentication { 122 | auth_type PASS 123 | auth_pass 1111 124 | } 125 | virtual_ipaddress { 126 | 172.16.0.253 // MASTER VIP 127 | } 128 | 129 | track_script { 130 | haproxy_check // 检查脚本 131 | } 132 | } 133 | 134 | # systemctl enable keepalived 135 | # systemctl restart keepalived 136 | # systemctl status keepalived 137 | ``` 138 | 139 | 140 | -------------------------------------------------------------------------------- /部署Kubrnetes-Master节点.md: -------------------------------------------------------------------------------- 1 | # 部署Kubernetes Master节点 2 | 3 | ## kube-apiserver组件 4 | kube-apiserver是Kubernetes最重要的核心组件之一,主要提供以下的功能: 5 | 6 | + 提供集群管理的REST API接口,包括认证授权、数据校验以及集群状态变更等 7 | + 提供其他模块之间的数据交互和通信的枢纽(其他模块通过API Server查询或修改数据,只有API Server才直接操作etcd)。 8 | 9 | ## kube-scheduler组件 10 | kube-scheduler是Kubernetes最重要的核心组件之一,主要提供以下的功能: 11 | + 负责分配调度Pod到集群内的节点上,它监听kube-apiserver,查询还未分配Node的Pod,然后根据调度策略为这些Pod分配节点(更新Pod的NodeName字段)。 12 | 13 | ## kube-controller-manager组件 14 | Controller Manager是Kubernetes最重要的核心组件之一,主要提供以下的功能: 15 | + 主要kube-controller-manager和cloud-controller-manager组成,是Kubernetes的大脑,它通过apiserver监控整个集群的状态,并确保集群处于预期的工作状态。 16 | 17 | ## 升级master和node节点内核版本 18 | ``` bash 19 | # rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org 20 | # rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm 21 | # yum -y install 
--enablerepo=elrepo-kernel kernel-lt-devel kernel-lt 22 | # grub2-set-default 0 23 | # grub2-mkconfig -o /boot/grub2/grub.cfg 24 | # reboot 25 | # uname -r 26 | 4.4.112-1.el7.elrepo.x86_64 27 | ``` 28 | 29 | ## 下载kubernetes组件的二进制文件 30 | 31 | ``` bash 32 | # wget https://storage.googleapis.com/kubernetes-release/release/v1.15.2/kubernetes-server-linux-amd64.tar.gz 33 | # tar fx kubernetes-server-linux-amd64.tar.gz 34 | ``` 35 | 36 | 拷贝二进制文件 37 | 38 | ``` bash 39 | # mkdir -p /usr/local/kubernetes-v1.15.2/bin 40 | # ln -s /usr/local/kubernetes-v1.15.2 /usr/local/kubernetes 41 | # cp -r `pwd`/kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kube-proxy,kubelet} /usr/local/kubernetes/bin 42 | ``` 43 | 44 | ## 配置kube-config文件 45 | 46 | ``` bash 47 | # vim /etc/kubernetes/kube-config 48 | ### 49 | # kubernetes system config 50 | # 51 | # The following values are used to configure various aspects of all 52 | # kubernetes services, including 53 | # 54 | # kube-apiserver.service 55 | # kube-controller-manager.service 56 | # kube-scheduler.service 57 | # kubelet.service 58 | # kube-proxy.service 59 | ### 60 | # logging to stderr means we get it in the systemd journal 61 | KUBE_LOGTOSTDERR="--logtostderr=false" 62 | 63 | # journal message level, 0 is debug 64 | KUBE_LOG_LEVEL="--v=1" 65 | 66 | # Should this cluster be allowed to run privileged docker containers 67 | KUBE_ALLOW_PRIV="--allow-privileged=true" 68 | 69 | # How the controller-manager, scheduler, and proxy find the apiserver 70 | KUBE_MASTER="--master=https://172.16.0.253:6443" 71 | ``` 72 | 73 | ## 配置和启动kube-apiserver 74 | 75 | 创建kube-apiserver配置文件 76 | 77 | ``` bash 78 | # vim /etc/kubernetes/kube-apiserver 79 | #### 80 | ## kubernetes system config 81 | ## 82 | ## The following values are used to configure the kube-apiserver 83 | ## 84 | #### 85 | ## The address on the local server to listen to. 86 | KUBE_API_ADDRESS="--advertise-address=0.0.0.0 --bind-address=0.0.0.0" 87 | # 88 | ## The port on the local server to listen on. 89 | KUBE_API_PORT="--secure-port=6443" 90 | # 91 | ## Comma separated list of nodes in the etcd cluster 92 | KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.0.101:2379,https://172.16.0.102:2379,https://172.16.0.103:2379" 93 | # 94 | ## Address range to use for services 95 | KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.241.0.0/16" 96 | # 97 | ## default admission control policies 98 | KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota" 99 | # 100 | ## Add your own! 
101 | KUBE_API_ARGS="--event-ttl=1h \ 102 | --apiserver-count=1 \ 103 | --audit-log-maxage=30 \ 104 | --audit-log-maxbackup=3 \ 105 | --audit-log-maxsize=100 \ 106 | --storage-backend=etcd3 \ 107 | --enable-swagger-ui=true \ 108 | --requestheader-allowed-names \ 109 | --authorization-mode=Node,RBAC \ 110 | --audit-log-path=/var/log/audit.log \ 111 | --service-node-port-range=1024-65535 \ 112 | --requestheader-group-headers=X-Remote-Group \ 113 | --requestheader-username-headers=X-Remote-User \ 114 | --log-dir=/data/logs/kubernetes/kube-apiserver \ 115 | --etcd-cafile=/etc/kubernetes/ssl/ca.pem \ 116 | --etcd-certfile=/etc/kubernetes/ssl/kube-apiserver.pem \ 117 | --etcd-keyfile=/etc/kubernetes/ssl/kube-apiserver-key.pem \ 118 | --client-ca-file=/etc/kubernetes/ssl/ca.pem \ 119 | --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \ 120 | --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \ 121 | --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \ 122 | --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem" 123 | ``` 124 | 125 | 创建kube-apiserver启动脚本 126 | 127 | ``` bash 128 | # vim /usr/lib/systemd/system/kube-apiserver.service 129 | [Unit] 130 | Description=Kube-apiserver Service 131 | After=network.target 132 | 133 | [Service] 134 | EnvironmentFile=-/etc/kubernetes/kube-config 135 | EnvironmentFile=-/etc/kubernetes/kube-apiserver 136 | ExecStart=/usr/local/kubernetes/bin/kube-apiserver \ 137 | $KUBE_LOGTOSTDERR \ 138 | $KUBE_LOG_LEVEL \ 139 | $KUBE_ETCD_SERVERS \ 140 | $KUBE_API_ADDRESS \ 141 | $KUBE_API_PORT \ 142 | $KUBELET_PORT \ 143 | $KUBE_SERVICE_ADDRESSES \ 144 | $KUBE_ADMISSION_CONTROL \ 145 | $KUBE_API_ARGS 146 | 147 | Restart=on-failure 148 | Type=notify 149 | LimitNOFILE=65536 150 | 151 | [Install] 152 | WantedBy=multi-user.target 153 | ``` 154 | 155 | 创建kube-apiserver日志目录 156 | 157 | ``` bash 158 | # mkdir /data/logs/kubernetes/kube-apiserver -p 159 | # systemctl daemon-reload 160 | # systemctl enable kube-apiserver && systemctl start kube-apiserver && systemctl status kube-apiserver 161 | ``` 162 | 163 | ## 配置和启动kube-controller-manager服务 164 | 165 | 创建kube-controller-manager配置文件 166 | 167 | ``` bash 168 | # vim /etc/kubernetes/kube-controller-manager 169 | #### 170 | ## The following values are used to configure the kubernetes controller-manager 171 | #### 172 | # 173 | ## Add your own! 
174 | KUBE_CONTROLLER_MANAGER_ARGS=--address=127.0.0.1 \ 175 | --leader-elect=true \ 176 | --cluster-name=kubernetes \ 177 | --cluster-cidr=10.240.0.0/16 \ 178 | --use-service-account-credentials=true \ 179 | --root-ca-file=/etc/kubernetes/ssl/ca.pem \ 180 | --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \ 181 | --log-dir=/data/logs/kubernetes/kube-controller-manager \ 182 | --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \ 183 | --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \ 184 | --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig 185 | ``` 186 | 187 | 创建kube-controller-manager TLS认证配置文件 188 | 189 | ``` bash 190 | # vim /etc/kubernetes/kube-controller-manager.kubeconfig 191 | apiVersion: v1 192 | kind: Config 193 | clusters: 194 | - cluster: 195 | certificate-authority: /etc/kubernetes/ssl/ca.pem 196 | server: https://172.16.0.253:6443 197 | name: kubernetes 198 | contexts: 199 | - context: 200 | cluster: kubernetes 201 | user: system:kube-controller-manager 202 | name: kube-context 203 | current-context: kube-context 204 | users: 205 | - name: system:kube-controller-manager 206 | user: 207 | client-certificate: /etc/kubernetes/ssl/kube-controller-manager.pem 208 | client-key: /etc/kubernetes/ssl/kube-controller-manager-key.pem 209 | ``` 210 | 211 | 创建kube-controller-manager启动脚本 212 | 213 | ``` bash 214 | # vim /usr/lib/systemd/system/kube-controller-manager.service 215 | [Unit] 216 | Description=Kube-controller-manager Service 217 | After=network.target 218 | 219 | [Service] 220 | EnvironmentFile=-/etc/kubernetes/kube-config 221 | EnvironmentFile=-/etc/kubernetes/kube-controller-manager 222 | ExecStart=/usr/local/kubernetes/bin/kube-controller-manager \ 223 | $KUBE_LOGTOSTDERR \ 224 | $KUBE_LOG_LEVEL \ 225 | $KUBE_MASTER \ 226 | $KUBE_CONTROLLER_MANAGER_ARGS 227 | 228 | Restart=on-failure 229 | LimitNOFILE=65536 230 | 231 | [Install] 232 | WantedBy=multi-user.target 233 | ``` 234 | 235 | 创建kube-controller-manager日志目录 236 | 237 | ``` bash 238 | # mkdir /data/logs/kubernetes/kube-controller-manager -p 239 | # systemctl daemon-reload 240 | # systemctl enable kube-controller-manager && systemctl start kube-controller-manager && systemctl status kube-controller-manager 241 | ``` 242 | 243 | ## 配置和启动kube-scheduler服务 244 | 245 | 创建kube-scheduler配置文件 246 | 247 | ``` bash 248 | # vim /etc/kubernetes/kube-scheduler 249 | #### 250 | ## kubernetes scheduler config 251 | #### 252 | # 253 | ## Add your own! 
254 | KUBE_SCHEDULER_ARGS="--leader-elect=true \ 255 | --address=127.0.0.1 \ 256 | --log-dir=/data/logs/kubernetes/kube-scheduler \ 257 | --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig" 258 | ``` 259 | 260 | 创建kube-scheduler TLS认证配置文件 261 | 262 | ``` bash 263 | # vim /etc/kubernetes/kube-scheduler.kubeconfig 264 | apiVersion: v1 265 | kind: Config 266 | clusters: 267 | - cluster: 268 | certificate-authority: /etc/kubernetes/ssl/ca.pem 269 | server: https://172.16.0.253:6443 270 | name: kubernetes 271 | contexts: 272 | - context: 273 | cluster: kubernetes 274 | user: system:kube-scheduler 275 | name: kube-context 276 | current-context: kube-context 277 | users: 278 | - name: system:kube-scheduler 279 | user: 280 | client-certificate: /etc/kubernetes/ssl/kube-scheduler.pem 281 | client-key: /etc/kubernetes/ssl/kube-scheduler-key.pem 282 | ``` 283 | 284 | 创建kube-scheduler启动脚本 285 | 286 | ``` bash 287 | # vim /usr/lib/systemd/system/kube-scheduler.service 288 | [Unit] 289 | Description=kube-scheduler Service 290 | After=network.target 291 | 292 | [Service] 293 | EnvironmentFile=-/etc/kubernetes/kube-config 294 | EnvironmentFile=-/etc/kubernetes/kube-scheduler 295 | ExecStart=/usr/local/kubernetes/bin/kube-scheduler \ 296 | $KUBE_LOGTOSTDERR \ 297 | $KUBE_LOG_LEVEL \ 298 | $KUBE_MASTER \ 299 | $KUBE_SCHEDULER_ARGS 300 | 301 | Restart=on-failure 302 | LimitNOFILE=65536 303 | 304 | [Install] 305 | WantedBy=multi-user.target 306 | ``` 307 | 308 | 创建kube-scheduler日志目录 309 | 310 | ``` bash 311 | # mkdir /data/logs/kubernetes/kube-scheduler -p 312 | # systemctl daemon-reload 313 | # systemctl enable kube-scheduler && systemctl start kube-scheduler && systemctl status kube-scheduler 314 | ``` 315 | # 验证Kubernetes Master节点状态 316 | 317 | ``` bash 318 | # ln -s /usr/local/kubernetes/bin/kubectl /usr/bin 319 | # kubectl get cs 320 | NAME STATUS MESSAGE ERROR 321 | controller-manager Healthy ok 322 | scheduler Healthy ok 323 | etcd-0 Healthy {"health": "true"} 324 | etcd-1 Healthy {"health": "true"} 325 | etcd-2 Healthy {"health": "true"} 326 | ``` 327 | -------------------------------------------------------------------------------- /部署Kubrnetes-Node节点.md: -------------------------------------------------------------------------------- 1 | # 部署Kubernetes Node节点 2 | 3 | ## kubelet组件 4 | 5 | 每个节点上都运行一个kubelet服务进程,默认监听10250端口,接收并执行master发来的指令,管理Pod及Pod中的容器。每个kubelet进程会在API Server上注册节点自身信息,定期向master节点汇报节点的资源使用情况,并通过cAdvisor监控节点和容器的资源。 6 | 7 | ## kubelet管理 8 | 9 | 节点管理主要是节点自注册和节点状态更新: 10 | 11 | + Kubelet可以通过设置启动参数 --register-node 来确定是否向API Server注册自己; 12 | + 如果Kubelet没有选择自注册模式,则需要用户自己配置Node资源信息,同时需要告知Kubelet集群上的API Server的位置; 13 | + Kubelet在启动时通过API Server注册节点信息,并定时向API Server发送节点新消息,API Server在接收到新消息后,将信息写入etcd。 14 | 15 | ## kube-proxy组件 16 | 17 | 每台机器上都运行一个kube-proxy服务,它监听API server中service和endpoint的变化情况,并通过iptables等来为服务配置负载均衡(仅支持TCP和UDP)。 18 | kube-proxy可以直接运行在物理机上,也可以以static pod或者daemonset的方式运行。 19 | 20 | ## kube-proxy实现方式 21 | 22 | 当前kube-proxy支持四种实现方式: 23 | 24 | + userspace:最早的负载均衡方案,它在用户空间监听一个端口,所有服务通过iptables转发到这个端口,然后在其内部负载均衡到实际的Pod。该方式最主要的问题是效率低,有明显的性能瓶颈。 25 | + iptables:目前推荐的方案,完全以iptables规则的方式来实现service负载均衡。该方式最主要的问题是在服务多的时候产生太多的iptables规则(社区有人提到过几万条),大规模下也有性能问题 26 | + winuserspace:同userspace,但仅工作在windows上 27 | + ipvs:1.8版本以后引入了ipvs模式,目前最新版本为GA版本 28 | 29 | ## 升级master和node节点内核版本 30 | ``` bash 31 | # rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org 32 | # rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm 33 | # yum -y install --enablerepo=elrepo-kernel 
kernel-lt-devel kernel-lt 34 | # grub2-set-default 0 35 | # grub2-mkconfig -o /boot/grub2/grub.cfg 36 | # reboot 37 | # uname -r 38 | 4.4.112-1.el7.elrepo.x86_64 39 | ``` 40 | 41 | ## 下载kubernetes组件的二进制文件 42 | 43 | ``` bash 44 | # wget https://storage.googleapis.com/kubernetes-release/release/v1.15.2/kubernetes-server-linux-amd64.tar.gz 45 | # tar fx kubernetes-server-linux-amd64.tar.gz 46 | ``` 47 | 48 | 拷贝二进制文件 49 | 50 | ``` bash 51 | # mkdir -p /usr/local/kubernetes-v1.15.2/bin 52 | # ln -s /usr/local/kubernetes-v1.15.2 /usr/local/kubernetes 53 | # cp -r `pwd`/kubernetes/server/bin/{kube-proxy,kubelet} /usr/local/kubernetes/bin 54 | ``` 55 | 56 | ## 配置kube-config文件 57 | 58 | ``` bash 59 | # vim /etc/kubernetes/kube-config 60 | ### 61 | # kubernetes system config 62 | # 63 | # The following values are used to configure various aspects of all 64 | # kubernetes services, including 65 | # 66 | # kube-apiserver.service 67 | # kube-controller-manager.service 68 | # kube-scheduler.service 69 | # kubelet.service 70 | # kube-proxy.service 71 | ### 72 | # logging to stderr means we get it in the systemd journal 73 | KUBE_LOGTOSTDERR="--logtostderr=false" 74 | 75 | # journal message level, 0 is debug 76 | KUBE_LOG_LEVEL="--v=1" 77 | 78 | # Should this cluster be allowed to run privileged docker containers 79 | KUBE_ALLOW_PRIV="--allow-privileged=true" 80 | 81 | # How the controller-manager, scheduler, and proxy find the apiserver 82 | KUBE_MASTER="--master=https://172.16.0.253:8443" 83 | ``` 84 | 85 | ## 配置和启动kubelet服务 86 | 87 | 创建kubelet文件 88 | 89 | ``` bash 90 | # vim /etc/kubernetes/kubelet 91 | #### 92 | ## kubernetes kubelet config 93 | #### 94 | # 95 | ## You may leave this blank to use the actual hostname 96 | KUBELET_HOSTNAME="--hostname-override=k8s-node1" 97 | # 98 | ## pod infrastructure container 99 | KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=k8s.gcr.io/pause:3.1" 100 | # 101 | ## Add your own! 102 | KUBELET_ARGS="--root-dir=/data/kubelet \ 103 | --log-dir=/data/logs/kubernetes/kubelet \ 104 | --cert-dir=/etc/kubernetes/ssl \ 105 | --config=/etc/kubernetes/kubelet.config \ 106 | --kubeconfig=/etc/kubernetes/kubelet.kubeconfig" 107 | ``` 108 | 109 | 创建kubelet config文件 110 | 111 | 说明:1.12.x版本后,kubelet服务建议使用config方式配置 112 | 113 | ``` bash 114 | # vim /etc/kubernetes/kubelet.config 115 | kind: KubeletConfiguration 116 | apiVersion: kubelet.config.k8s.io/v1beta1 117 | address: 172.16.30.0.101 118 | port: 10250 119 | cgroupDriver: cgroupfs 120 | clusterDNS: 121 | - 10.241.0.254 122 | clusterDomain: testing.com. 
123 | failSwapOn: false 124 | authentication: 125 | anonymous: 126 | enabled: true 127 | webhook: 128 | cacheTTL: 2m0s 129 | enabled: true 130 | x509: 131 | clientCAFile: /etc/kubernetes/ssl/ca.pem 132 | authorization: 133 | mode: Webhook 134 | webhook: 135 | cacheAuthorizedTTL: 5m0s 136 | cacheUnauthorizedTTL: 30s 137 | maxOpenFiles: 1000000 138 | maxPods: 200 139 | serializeImagePulls: true 140 | failSwapOn: false 141 | fileCheckFrequency: 20s 142 | hairpinMode: promiscuous-bridge 143 | healthzBindAddress: 127.0.0.1 144 | healthzPort: 10248 145 | httpCheckFrequency: 20s 146 | resolvConf: /etc/resolv.conf 147 | runtimeRequestTimeout: 2m0s 148 | ``` 149 | 150 | 创建kubelet TLS认证配置文件 151 | 152 | 注意:1.7.x以上版本中增加了Node Restriction模式,采用system:node:NodeName方式来认证 153 | 154 | ``` bash 155 | # vim /etc/kubernetes/kubelet.kubeconfig 156 | apiVersion: v1 157 | kind: Config 158 | clusters: 159 | - cluster: 160 | certificate-authority: /etc/kubernetes/ssl/ca.pem 161 | server: https://172.16.30.0.101:6443 162 | name: kubernetes 163 | contexts: 164 | - context: 165 | cluster: kubernetes 166 | user: system:node:k8s-node1 167 | name: kube-context 168 | current-context: kube-context 169 | users: 170 | - name: system:node:k8s-node1 171 | user: 172 | client-certificate: /etc/kubernetes/ssl/kubelet.pem 173 | client-key: /etc/kubernetes/ssl/kubelet-key.pem 174 | ``` 175 | 176 | 创建kubelet启动脚本 177 | 178 | ``` bash 179 | # vim /usr/lib/systemd/system/kubelet.service 180 | [Unit] 181 | Description=Kubelet Service 182 | After=network.target 183 | 184 | [Service] 185 | WorkingDirectory=/data/kubelet 186 | EnvironmentFile=-/etc/kubernetes/kube-config 187 | EnvironmentFile=-/etc/kubernetes/kubelet 188 | ExecStart=/usr/local/kubernetes/bin/kubelet \ 189 | $KUBE_LOGTOSTDERR \ 190 | $KUBE_LOG_LEVEL \ 191 | $KUBELET_API_SERVER \ 192 | $KUBELET_HOSTNAME \ 193 | $KUBELET_POD_INFRA_CONTAINER \ 194 | $KUBELET_ARGS 195 | 196 | Restart=on-failure 197 | LimitNOFILE=65536 198 | 199 | [Install] 200 | WantedBy=multi-user.target 201 | ``` 202 | 203 | 创建kubelet数据目录和日志目录 204 | 205 | ``` bash 206 | # mkdir /data/kubelet -p 207 | # mkdir /data/logs/kubernetes/kubelet -p 208 | # systemctl daemon-reload 209 | # systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet 210 | ``` 211 | 212 | ## 内核模块加载 213 | ``` bash 214 | # yum -y install conntrack-tools ipvsadm ipset 215 | # cat > /etc/sysconfig/modules/ipvs.modules < /dev/null 2>&1 220 | if [ $? -eq 0 ]; then 221 | /sbin/modprobe \${kernel_module} 222 | fi 223 | done 224 | EOF 225 | # chmod +x /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs 226 | ip_vs_ftp 16384 0 227 | ip_vs_sed 16384 0 228 | ip_vs_nq 16384 0 229 | ip_vs_fo 16384 0 230 | ip_vs_sh 16384 0 231 | ip_vs_dh 16384 0 232 | ip_vs_lblcr 16384 0 233 | ip_vs_lblc 16384 0 234 | ip_vs_wrr 16384 0 235 | ip_vs_rr 16384 0 236 | ip_vs_wlc 16384 0 237 | ip_vs_lc 16384 0 238 | ip_vs 147456 24 ip_vs_dh,ip_vs_fo,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc 239 | nf_nat 28672 3 ip_vs_ftp,nf_nat_ipv4,nf_nat_masquerade_ipv4 240 | nf_conntrack 110592 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4 241 | libcrc32c 16384 2 xfs,ip_vs 242 | ``` 243 | 244 | ## 配置和启动kube-proxy服务 245 | 246 | 创建kube-proxy配置文件 247 | 248 | 注意:1.12.x版本ipvs默认已经开启,可以启用`--proxy-mode=ipvs`参数开启ipvs. 
249 | 250 | ``` bash 251 | # vim /etc/kubernetes/kube-proxy 252 | #### 253 | ## kubernetes proxy config 254 | #### 255 | # 256 | ## You may leave this blank to use the actual hostname 257 | KUBE_PROXY_HOSTNAME="--hostname-override=k8s-node1" 258 | # 259 | ## Add your own! 260 | KUBE_PROXY_ARGS="--config=/etc/kubernetes/kube-proxy.config \ 261 | --log-dir=/data/logs/kubernetes/kube-proxy" 262 | ``` 263 | 264 | 创建kube-proxy config文件 265 | 266 | 说明:1.12.x版本后,kube-proxy服务建议使用config方式配置 267 | 268 | ``` bash 269 | # vim /etc/kubernetes/kube-proxy.config 270 | apiVersion: kubeproxy.config.k8s.io/v1alpha1 271 | bindAddress: 172.16.0.101 272 | clientConnection: 273 | acceptContentTypes: "" 274 | burst: 10 275 | contentType: application/vnd.kubernetes.protobuf 276 | kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig 277 | qps: 5 278 | clusterCIDR: 10.240.0.0/16 279 | configSyncPeriod: 15m0s 280 | conntrack: 281 | maxPerCore: 32768 282 | min: 131072 283 | tcpCloseWaitTimeout: 1h0m0s 284 | tcpEstablishedTimeout: 24h0m0s 285 | enableProfiling: false 286 | healthzBindAddress: 0.0.0.0:10256 287 | ipvs: 288 | scheduler: "rr" 289 | syncPeriod: 15s 290 | minSyncPeriod: 5s 291 | 292 | kind: KubeProxyConfiguration 293 | metricsBindAddress: 127.0.0.1:10249 294 | mode: "ipvs" 295 | nodePortAddresses: null 296 | oomScoreAdj: -999 297 | portRange: "" 298 | resourceContainer: /kube-proxy 299 | udpIdleTimeout: 250ms 300 | " 301 | ``` 302 | 303 | 创建kube-proxy TLS认证配置文件 304 | 305 | ``` bash 306 | # vim /etc/kubernetes/kube-proxy.kubeconfig 307 | apiVersion: v1 308 | kind: Config 309 | clusters: 310 | - cluster: 311 | certificate-authority: /etc/kubernetes/ssl/ca.pem 312 | server: https://172.16.30.0.101:6443 313 | name: kubernetes 314 | contexts: 315 | - context: 316 | cluster: kubernetes 317 | user: system:kube-proxy 318 | name: kube-context 319 | current-context: kube-context 320 | users: 321 | - name: system:kube-proxy 322 | user: 323 | client-certificate: /etc/kubernetes/ssl/kube-proxy.pem 324 | client-key: /etc/kubernetes/ssl/kube-proxy-key.pem 325 | ``` 326 | 327 | 创建kube-proxy启动脚本 328 | 329 | ``` bash 330 | # vim /usr/lib/systemd/system/kube-proxy.service 331 | [Unit] 332 | Description=Kube-proxy Service 333 | After=network.target 334 | 335 | [Service] 336 | EnvironmentFile=-/etc/kubernetes/kube-config 337 | EnvironmentFile=-/etc/kubernetes/kube-proxy 338 | ExecStart=/usr/local/kubernetes/bin/kube-proxy \ 339 | $KUBE_LOGTOSTDERR \ 340 | $KUBE_LOG_LEVEL \ 341 | $KUBE_MASTER \ 342 | $KUBE_PROXY_ADDRESS \ 343 | $KUBE_PROXY_HOSTNAME \ 344 | $KUBE_PROXY_ARGS 345 | 346 | Restart=on-failure 347 | LimitNOFILE=65536 348 | 349 | [Install] 350 | WantedBy=multi-user.target 351 | ``` 352 | 353 | 创建kube-proxy日志目录 354 | 355 | ``` bash 356 | # mkdir /data/logs/kubernetes/kube-proxy -p 357 | # systemctl daemon-reload 358 | # systemctl enable kube-proxy && systemctl start kube-proxy && systemctl status kube-proxy 359 | ``` 360 | 361 | 362 | ## 验证Node节点是否正常 363 | 364 | 注意: 验证输出结果为某测试环境节点信息,输出`STATUS`为`Ready`说明正常。可以自行创建Pod验证 365 | 366 | ``` bash 367 | # kubectl get node -o wide 368 | NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME 369 | k8s-node1 Ready 2d v1.13.4 CentOS Linux 7 (Core) 4.4.166-1.el7.elrepo.x86_64 docker://17.9.1 370 | k8s-node2 Ready 2d v1.13.4 CentOS Linux 7 (Core) 4.4.166-1.el7.elrepo.x86_64 docker://17.9.1 371 | k8s-node3 Ready 2d v1.13.4 CentOS Linux 7 (Core) 4.4.166-1.el7.elrepo.x86_64 docker://17.9.1 372 | ``` 373 | 
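如上所述,可以自行创建Pod进一步验证,下面给出一个简单的验证示例(镜像与名称仅为示意,可按需替换):

``` bash
# 创建一个测试Deployment并暴露为NodePort服务,验证调度、容器网络和kube-proxy转发是否正常
# kubectl create deployment nginx-testing --image=nginx:1.16
# kubectl expose deployment nginx-testing --port=80 --type=NodePort
# 查看Pod状态, Pod IP应落在容器网段10.240.0.0/16内
# kubectl get pod -o wide -l app=nginx-testing
# 查看Service分配的NodePort, 通过任意Node的IP:NodePort应能访问到nginx页面
# kubectl get svc nginx-testing
# 验证完成后清理测试资源
# kubectl delete svc,deployment nginx-testing
```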
-------------------------------------------------------------------------------- /配置kubeconfig访问多个集群.md: -------------------------------------------------------------------------------- 1 | # 配置kubeconfig访问多个集群 2 | 本篇文章主要讲如何使用一个配置文件,在将集群、用户和上下文定义在一个或多个配置文件中之后,用户可以使用`kubectl config use-context`命令快速的在不同集群之间进行切换。 3 | 4 | ## 定义集群、用户和上下文 5 | 假设当前环境有两个集群,一个是生产环境,一个是测试环境。 6 | ``` bash 7 | # mkdir -p ~/.kube && cd ~/.kube 8 | # vim config 9 | apiVersion: v1 10 | clusters: 11 | - cluster: 12 | certificate-authority: /etc/kubernetes/ssl/ca.pem # dev集群CA证书 13 | server: https://10.8.0.136:6443 # dev集群APIServer地址 14 | name: kubernetes-cluser-dev # dev集群名称(自定义) 15 | - cluster: 16 | certificate-authority: /etc/kubernetes/ssl/ca.pem # pro集群CA证书 17 | server: https://10.8.1.136:6443 # pro集群CA证书 18 | name: kubernetes-cluser-pro # pro集群名称(自定义) 19 | contexts: 20 | - context: # 定义集群上下文 21 | cluster: kubernetes-cluser-dev 22 | user: cluser-dev-admin 23 | name: kubernetes-cluser-dev # dev集群context名称(自定义) 24 | - context: 25 | cluster: kubernetes-cluser-pro 26 | user: cluser-pro-admin 27 | name: kubernetes-cluser-pro # pro集群context名称(自定义) 28 | current-context: kubernetes-cluser-dev # 定义默认集群 29 | kind: Config 30 | preferences: {} 31 | users: 32 | - name: cluser-dev-admin # dev集群用户名称(自定义) 33 | user: 34 | client-certificate: /etc/kubernetes/ssl/kubelet.pem # dev集群客户证书 35 | client-key: /etc/kubernetes/ssl/kubelet-key.pem 36 | - name: cluser-pro-admin # pro集群用户名称(自定义) 37 | user: 38 | client-certificate: /etc/kubernetes/ssl/kubelet.pem # pro集群客户证书 39 | client-key: /etc/kubernetes/ssl/kubelet-key.pem 40 | ``` 41 | 42 | * 查看所有的可使用的kubernetes集群角色 43 | 44 | ``` bash 45 | # kubectl config get-contexts 46 | CURRENT NAME CLUSTER AUTHINFO NAMESPACE 47 | * kubernetes-cluser-dev kubernetes-cluser-dev cluser-dev-admin 48 | kubernetes-cluser-pro kubernetes-cluser-pro cluser-pro-admin 49 | ``` 50 | 51 | * 切换kubernetes配置 52 | 53 | ``` bash 54 | # kubectl config use-context kubernetes-cluser-pro 55 | Switched to context "kubernetes-cluser-pro". 56 | # kubectl config get-contexts 57 | CURRENT NAME CLUSTER AUTHINFO NAMESPACE 58 | kubernetes-cluser-dev kubernetes-cluser-dev cluser-dev-admin 59 | * kubernetes-cluser-pro kubernetes-cluser-pro cluser-pro-admin 60 | ``` --------------------------------------------------------------------------------
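* 临时指定上下文执行命令

除了使用`use-context`切换默认集群外,也可以在单条命令中通过`--context`参数临时指定要访问的集群,不会改变当前默认上下文(上下文名称沿用上文定义):

``` bash
# 不切换默认上下文, 临时查看另一个集群的节点状态
# kubectl --context=kubernetes-cluser-dev get nodes
```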