├── README.md ├── controller ├── README.md ├── cb.yaml ├── dp.yaml ├── ds.yaml ├── jb.yaml ├── rc.yaml └── rs.yaml ├── elk ├── README.md ├── elk-cluster-with-role.yaml ├── elk-cluster.yaml ├── elk-single.yaml ├── filebeat-dp.yml ├── filebeat-ds.yml └── logstash.yaml ├── getMasterImages.sh ├── getNodeImages.sh ├── golearn └── calcproject │ ├── bin │ └── calc │ └── src │ ├── calc │ └── calc.go │ └── simplemath │ ├── add.go │ ├── add_test.go │ ├── sqrt.go │ └── sqrt_test.go ├── imgs ├── Screen Shot 2018-10-26 at 7.50.29 PM.png ├── Screen Shot 2018-10-26 at 7.51.11 PM.png ├── Screen Shot 2018-10-26 at 7.51.21 PM.png ├── Screen Shot 2018-10-26 at 8.04.15 PM.png ├── Screen Shot 2018-10-26 at 8.15.26 PM.png ├── Screen Shot 2018-10-26 at 8.18.54 PM.png ├── Screen Shot 2018-10-26 at 8.37.03 PM.png ├── Screen Shot 2018-10-26 at 9.11.00 PM.png ├── Screen Shot 2018-10-26 at 9.12.06 PM.png ├── Screen Shot 2018-10-26 at 9.15.25 PM.png ├── Screen Shot 2018-10-26 at 9.19.29 PM.png ├── Screen Shot 2018-10-26 at 9.30.54 PM.png └── Screen Shot 2018-10-26 at 9.32.37 PM.png ├── install └── offline-binary.md ├── learnrecord ├── logging.MD └── namespace.MD ├── metadata ├── README.md ├── curl_img.yaml ├── meta_env.yaml └── meta_file.yaml ├── playground ├── anaconda-ks.cfg ├── cm-vars.yml ├── dp-nginx.yml ├── ds-nginx.yml ├── es-pv.yml ├── es-pvc.yml ├── es6-deployment.yaml ├── es6.yaml ├── first-pod.yml ├── getImages.sh ├── iptables.after.log ├── iptables.before.log ├── jb-perl.yml ├── kube-flannel.yml ├── kube-flannel.yml.1 ├── kubeadm.yml ├── local-storage.yaml ├── pod-init-service.yml ├── pod-init.yml ├── pod-nginx.yml ├── preInstall.sh ├── rc-nginx.yml ├── redis-master-deployment.yaml ├── second-pod.yml ├── service-es.yaml ├── ss-nginx.yml ├── ss-pv.yml ├── ss-single-pv.yml ├── tc.sh ├── tomcat-1.yaml ├── tomcat.yaml └── tomcat_example.yaml ├── pod-probe ├── README.md ├── liveness-exec.yaml ├── liveness-http.yaml └── liveness-tcp.yaml ├── pod ├── README.md ├── first-pod.yml └── second-pod.yml ├── preInstall.sh ├── service ├── external-service-endpoints.yaml ├── external-service.yaml ├── ingress │ ├── cafe-example.yaml │ ├── httpd-deploy.yaml │ ├── httpd-service.yaml │ ├── ingress-install.yaml │ ├── k8s-ingress-nginx.yaml │ ├── tomcat-deploy.yaml │ ├── tomcat-ingress.yaml │ └── tomcat-service.yaml └── svc-nodeport.yaml ├── statefulset ├── README.md ├── es-pv.yml └── ss-nginx.yml └── storage ├── README.md ├── configmap.md ├── pod-with-pvc.yaml ├── pv-local.yaml ├── pvc-local.yaml └── storage_basic.png /README.md: -------------------------------------------------------------------------------- 1 | # 目录 2 | 3 | 这里整理了我学习Kubernetes的资料,供大家参考交流 4 | - 安装 5 | - [离线环境二进制方式安装Kubernetes集群](https://github.com/cocowool/k8s-go/blob/master/install/offline-binary.md)) 6 | - 概念 7 | - [Kubernetes的Controllers](https://github.com/cocowool/k8s-go/blob/master/controller/README.md) 8 | - [Kubernetes的命名空间](https://github.com/cocowool/k8s-go/blob/master/learnrecord/namespace.MD) 9 | - [Kubernetes Pod详细介绍](https://github.com/cocowool/k8s-go/tree/master/pod) 10 | - [Kubernetes 存储系统介绍](https://github.com/cocowool/k8s-go/tree/master/storage) 11 | - [Kubernetes 中的ConfigMap和Secret](https://github.com/cocowool/k8s-go/tree/master/storage/configmap.md) 12 | - 案例 13 | - [Kubernetes部署ELK并且使用Filebeat收集集群容器日志](https://github.com/cocowool/k8s-go/tree/master/elk) 14 | 15 | # kubeadm安装kubernetes V1.11.1 集群 16 | 17 | > 之前测试了[离线环境下使用二进制方法安装配置Kubernetes集群](https://www.cnblogs.com/cocowool/p/install_k8s_offline.html)的方法,安装的过程中听说 kubeadm 
安装配置集群更加方便，因此试着折腾了一下。安装过程中也有一些坑，相对来说操作上要比二进制方便一点，毕竟不用手工创建那么多配置文件；但是对于了解Kubernetes的运作方式来说，可能不如二进制方式直观。同时，kubeadm方式下很多集群依赖的组件都以容器方式运行在Master节点上，对虚拟机资源的消耗也比二进制方式高。 18 | 19 | ## 0. kubeadm 介绍与准备工作 20 | > kubeadm is designed to be a simple way for new users to start trying Kubernetes out, possibly for the first time, a way for existing users to test their application on and stitch together a cluster easily, and also to be a building block in other ecosystem and/or installer tool with a larger scope. 21 | kubeadm是一个用Go语言编写的项目，代码在[这里](https://github.com/kubernetes/kubeadm)，用来帮助快速部署Kubernetes集群环境，但目前主要还是建议用于测试环境，如果想在生产环境使用，可要三思。 22 | 23 | 本文所用的环境： 24 | - 虚拟机软件：VirtualBox 25 | - 操作系统：CentOS 7.3 minimal 安装 26 | - 网卡：两块网卡，一块 Host-Only 方式，一块 NAT 方式。 27 | - 网络规划： 28 | - Master：192.168.0.101 29 | - Node：192.168.0.102-104 30 | 31 | ### 0.1 关掉 selinux 32 | ```sh 33 | $ setenforce 0 34 | $ sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 35 | ``` 36 | 37 | ### 0.2 关掉防火墙 38 | ```sh 39 | $ systemctl stop firewalld 40 | $ systemctl disable firewalld 41 | ``` 42 | 43 | ### 0.3 关闭 swap 44 | ```sh 45 | $ swapoff -a 46 | $ sed -i 's/.*swap.*/#&/' /etc/fstab 47 | ``` 48 | 49 | ### 0.4 配置转发参数 50 | ```sh 51 | $ cat <<EOF > /etc/sysctl.d/k8s.conf 52 | net.bridge.bridge-nf-call-ip6tables = 1 53 | net.bridge.bridge-nf-call-iptables = 1 54 | EOF 55 | $ sysctl --system 56 | ``` 57 | 58 | ### 0.5 设置国内 yum 源 59 | ```sh 60 | $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo 61 | [kubernetes] 62 | name=Kubernetes 63 | baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ 64 | enabled=1 65 | gpgcheck=1 66 | repo_gpgcheck=1 67 | gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 68 | EOF 69 | ``` 70 | 71 | ### 0.6 安装一些必备的工具 72 | ```sh 73 | $ yum install -y epel-release 74 | $ yum install -y net-tools wget vim ntpdate 75 | ``` 76 | 77 | ## 1. 安装 kubeadm 必须的软件，在所有节点上运行 78 | ### 1.1 安装Docker 79 | ```sh 80 | $ yum install -y docker 81 | $ systemctl enable docker && systemctl start docker 82 | $ # 设置系统服务，如果不设置，后面 kubeadm init 的时候会有 warning 83 | $ systemctl enable docker.service 84 | ``` 85 | 如果想要用二进制方式安装最新版本的Docker，可以参考我之前的文章[在Redhat 7.3中采用离线方式安装Docker](https://www.cnblogs.com/cocowool/p/install_docker_ce_in_redhat_73.html)。 86 | 87 | ### 1.2 安装kubeadm、kubectl、kubelet 88 | ```sh 89 | $ yum install -y kubelet kubeadm kubectl kubernetes-cni 90 | $ systemctl enable kubelet && systemctl start kubelet 91 | ``` 92 | 这一步之后kubelet还不能正常运行，处于下面的状态。 93 | > The kubelet is now restarting every few seconds, as it waits in a crashloop for kubeadm to tell it what to do. 94 | 95 | ## 2. 
安装Master节点 96 | 因为国内没办法访问Google的镜像源,变通的方法是从其他镜像源下载后,修改tag。执行下面这个Shell脚本即可。 97 | ```sh 98 | #!/bin/bash 99 | images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0 100 | etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9 101 | k8s-dns-dnsmasq-nanny-amd64:1.14.9 ) 102 | for imageName in ${images[@]} ; do 103 | docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 104 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName 105 | #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 106 | done 107 | docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1 108 | ``` 109 | 接下来执行Master节点的初始化,因为我的虚拟机是双网卡,需要指定apiserver的监听地址。 110 | ```sh 111 | [root@devops-101 ~]# kubeadm init --kubernetes-version=v1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.101 112 | [init] using Kubernetes version: v1.11.0 113 | [preflight] running pre-flight checks 114 | I0724 08:36:35.636931 3409 kernel_validator.go:81] Validating kernel version 115 | I0724 08:36:35.637052 3409 kernel_validator.go:96] Validating kernel config 116 | [WARNING Hostname]: hostname "devops-101" could not be reached 117 | [WARNING Hostname]: hostname "devops-101" lookup devops-101 on 172.20.10.1:53: no such host 118 | [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' 119 | [preflight/images] Pulling images required for setting up a Kubernetes cluster 120 | [preflight/images] This might take a minute or two, depending on the speed of your internet connection 121 | [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull' 122 | [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" 123 | [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" 124 | [preflight] Activating the kubelet service 125 | [certificates] Generated ca certificate and key. 126 | [certificates] Generated apiserver certificate and key. 127 | [certificates] apiserver serving cert is signed for DNS names [devops-101 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.101] 128 | [certificates] Generated apiserver-kubelet-client certificate and key. 129 | [certificates] Generated sa key and public key. 130 | [certificates] Generated front-proxy-ca certificate and key. 131 | [certificates] Generated front-proxy-client certificate and key. 132 | [certificates] Generated etcd/ca certificate and key. 133 | [certificates] Generated etcd/server certificate and key. 134 | [certificates] etcd/server serving cert is signed for DNS names [devops-101 localhost] and IPs [127.0.0.1 ::1] 135 | [certificates] Generated etcd/peer certificate and key. 136 | [certificates] etcd/peer serving cert is signed for DNS names [devops-101 localhost] and IPs [192.168.0.101 127.0.0.1 ::1] 137 | [certificates] Generated etcd/healthcheck-client certificate and key. 138 | [certificates] Generated apiserver-etcd-client certificate and key. 
139 | [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki" 140 | [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf" 141 | [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf" 142 | [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf" 143 | [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf" 144 | [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml" 145 | [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml" 146 | [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml" 147 | [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml" 148 | [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 149 | [init] this might take a minute or longer if the control plane images have to be pulled 150 | [apiclient] All control plane components are healthy after 46.002877 seconds 151 | [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace 152 | [kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster 153 | [markmaster] Marking the node devops-101 as master by adding the label "node-role.kubernetes.io/master=''" 154 | [markmaster] Marking the node devops-101 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule] 155 | [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "devops-101" as an annotation 156 | [bootstraptoken] using token: wkj0bo.pzibll6rd9gyi5z8 157 | [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials 158 | [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token 159 | [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster 160 | [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace 161 | [addons] Applied essential addon: CoreDNS 162 | [addons] Applied essential addon: kube-proxy 163 | 164 | Your Kubernetes master has initialized successfully! 165 | 166 | To start using your cluster, you need to run the following as a regular user: 167 | 168 | mkdir -p $HOME/.kube 169 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 170 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 171 | 172 | You should now deploy a pod network to the cluster. 
173 | Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: 174 | https://kubernetes.io/docs/concepts/cluster-administration/addons/ 175 | 176 | You can now join any number of machines by running the following on each node 177 | as root: 178 | 179 | kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05 180 | ``` 181 | 看到以上信息表示Master节点已经初始化成功了。如果需要用普通用户管理集群,可以按照提示进行操作,如果是使用root用户管理,执行下面的命令。 182 | 183 | ```sh 184 | [root@devops-101 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf 185 | [root@devops-101 ~]# kubectl get nodes 186 | NAME STATUS ROLES AGE VERSION 187 | devops-101 NotReady master 7m v1.11.1 188 | [root@devops-101 ~]# kubectl get pods --all-namespaces 189 | NAMESPACE NAME READY STATUS RESTARTS AGE 190 | kube-system coredns-78fcdf6894-8sd6g 0/1 Pending 0 7m 191 | kube-system coredns-78fcdf6894-lgvd9 0/1 Pending 0 7m 192 | kube-system etcd-devops-101 1/1 Running 0 6m 193 | kube-system kube-apiserver-devops-101 1/1 Running 0 6m 194 | kube-system kube-controller-manager-devops-101 1/1 Running 0 6m 195 | kube-system kube-proxy-bhmj8 1/1 Running 0 7m 196 | kube-system kube-scheduler-devops-101 1/1 Running 0 6m 197 | ``` 198 | 可以看到节点还没有Ready,dns的两个pod也没不正常,还需要安装网络配置。 199 | 200 | ## 3. Master节点的网络配置 201 | 这里我选用了 Flannel 的方案。 202 | > kubeadm only supports Container Network Interface (CNI) based networks (and does not support kubenet). 203 | 204 | 修改系统设置,创建 flannel 网络。 205 | ```sh 206 | [root@devops-101 ~]# sysctl net.bridge.bridge-nf-call-iptables=1 207 | net.bridge.bridge-nf-call-iptables = 1 208 | [root@devops-101 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml 209 | clusterrole.rbac.authorization.k8s.io/flannel created 210 | clusterrolebinding.rbac.authorization.k8s.io/flannel created 211 | serviceaccount/flannel created 212 | configmap/kube-flannel-cfg created 213 | daemonset.extensions/kube-flannel-ds created 214 | ``` 215 | flannel 默认会使用主机的第一张网卡,如果你有多张网卡,需要通过配置单独指定。修改 kube-flannel.yml 中的以下部分 216 | ```yaml 217 | containers: 218 | - name: kube-flannel 219 | image: quay.io/coreos/flannel:v0.10.0-amd64 220 | command: 221 | - /opt/bin/flanneld 222 | args: 223 | - --ip-masq 224 | - --kube-subnet-mgr 225 | - --iface=enp0s3 #指定内网网卡 226 | ``` 227 | 执行成功后,Master并不能马上变成Ready状态,稍等几分钟,就可以看到所有状态都正常了。 228 | 229 | ```sh 230 | [root@devops-101 ~]# kubectl get pods --all-namespaces 231 | NAMESPACE NAME READY STATUS RESTARTS AGE 232 | kube-system coredns-78fcdf6894-8sd6g 1/1 Running 0 14m 233 | kube-system coredns-78fcdf6894-lgvd9 1/1 Running 0 14m 234 | kube-system etcd-devops-101 1/1 Running 0 13m 235 | kube-system kube-apiserver-devops-101 1/1 Running 0 13m 236 | kube-system kube-controller-manager-devops-101 1/1 Running 0 13m 237 | kube-system kube-flannel-ds-6zljr 1/1 Running 0 48s 238 | kube-system kube-proxy-bhmj8 1/1 Running 0 14m 239 | kube-system kube-scheduler-devops-101 1/1 Running 0 13m 240 | [root@devops-101 ~]# kubectl get nodes 241 | NAME STATUS ROLES AGE VERSION 242 | devops-101 Ready master 14m v1.11.1 243 | ``` 244 | ## 4. 
加入节点 245 | Node节点的加入集群前,首先需要按照本文的第0节和第1节做好准备工作,然后下载镜像。 246 | ```sh 247 | $ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 248 | $ docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 249 | $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1 250 | $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0 251 | $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1 252 | ``` 253 | 最后再根据Master节点的提示加入集群。 254 | ```sh 255 | $ kubeadm join 192.168.0.101:6443 --token wkj0bo.pzibll6rd9gyi5z8 --discovery-token-ca-cert-hash sha256:51985223a369a1f8c226f3ccdcf97f4ad5ff201a7c8c708e1636eea0739c0f05 256 | ``` 257 | 节点的启动也需要一点时间,稍后再到Master上查看状态。 258 | ```sh 259 | [root@devops-101 ~]# kubectl get nodes 260 | NAME STATUS ROLES AGE VERSION 261 | devops-101 Ready master 1h v1.11.1 262 | devops-102 Ready 11m v1.11.1 263 | ``` 264 | 265 | 我把安装中需要用到的一些命令整理成了几个脚本,放在我的[Github](https://github.com/cocowool/k8s-go)上,大家可以下载使用。 266 | 267 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 268 | 269 | ## X. 坑 270 | 271 | ### pause:3.1 272 | 安装的过程中,发现kubeadmin会找 pause:3.1 的镜像,所以需要重新 tag 。 273 | ```sh 274 | $ docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1 275 | ``` 276 | 277 | ### 两台服务器时间不同步。 278 | 报错信息 279 | ```sh 280 | [discovery] Failed to request cluster info, will try again: [Get https://192.168.0.101:6443/api/v1/namespaces/kube-public/configmaps/cluster-info: x509: certificate has expired or is not yet valid] 281 | ``` 282 | 解决方法,设定一个时间服务器同步两台服务器的时间。 283 | ```sh 284 | $ ntpdate ntp1.aliyun.com 285 | ``` 286 | 287 | ## 参考资料 288 | 1. [centos7.3 kubernetes/k8s 1.10 离线安装](https://www.jianshu.com/p/9c7e1c957752) 289 | 2. [Kubeadm安装Kubernetes环境](https://www.cnblogs.com/ericnie/p/7749588.html) 290 | 3. [Steps to install kubernetes](https://www.assistanz.com/steps-to-install-kubernetes-cluster-manually-using-centos-7/) 291 | 4. [kubeadm reference guide](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/) 292 | 5. [kubeadm安装Kubernetes V1.10集群详细文档](https://www.kubernetes.org.cn/3808.html) 293 | 6. [kubeadm reference](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file) 294 | 7. [kubeadm搭建kubernetes1.7.5集群](https://blog.csdn.net/zhongyuemengxiang/article/details/79121932) 295 | 8. [安装部署 Kubernetes 集群](https://www.cnblogs.com/Leo_wl/p/8511902.html) 296 | 9. [linux 命令 ---- 同步当前服务器时间](https://www.cnblogs.com/chenzeyong/p/5951959.html) 297 | 10. [CentOS 7.4 安装 K8S v1.11.0 集群所遇到的问题](https://www.cnblogs.com/myzony/p/9298783.html#1.准备工作) 298 | 11. [使用kubeadm部署kubernetes](https://blog.csdn.net/andriy_dangli/article/details/79269348) 299 | -------------------------------------------------------------------------------- /controller/README.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes Controller 介绍 2 | 3 | ## 0. 
概述 4 | Kubernetes提供了很多Controller资源来管理、调度Pod,包括Replication Controller、ReplicaSet、Deployments、StatefulSet、DaemonSet等等。本文介绍这些控制器的功能和用法。控制器是Kubernetes中的一种资源,用来方便管理Pod。可以把控制器想象成进程管理器,负责维护进程的状态。进程掉了负责拉起,需要更多进程了负责增加进程,可以监控进程根据进程消耗资源的情况动态扩缩容。只是在Kubernetes中,控制器管理的是Pods。Controller通过API Server提供的接口实时监控整个集群的每个资源对象的当前状态,当发生各种故障导致系统状态发生变化时,会尝试将系统状态修复到“期望状态”。 5 | 6 | 7 | ### Spec 8 | ```yaml 9 | apiVersion: apps/v1 10 | kind: ReplicaSet 11 | metadata: 12 | spec: 13 | replicas: #默认为1 14 | selector: 15 | 16 | template: 17 | metadata: 18 | labels: 19 | spec: 20 | containers: 21 | - name: image-name 22 | image: 23 | 24 | ``` 25 | 26 | ### 必须字段 27 | 1.9版本之后,应当使用 ```apps/v1```。 28 | 29 | ### Pod Template 30 | 31 | ### Pod Selector 32 | 并不区分Pod的创建人或进程,好处是容易被其他的管理工具替换。 33 | 34 | > Also you should not normally create any pods whose labels match this selector, either directly, with another ReplicaSet, or with another controller such as a Deployment. If you do so, the ReplicaSet thinks that it created the other pods. Kubernetes does not stop you from doing this. 35 | 36 | ## 1. ReplicationController 37 | Replication Controller 通常缩写为 rc、rcs。RC同RS一样,保持Pod数量始终维持在期望值。RC创建的Pod,遇到失败后会自动重启。RC的编排文件必须的字段包括apiVersion、kind、metadata、.spec.repicas、.spec.template。其中```.spec.template.spec.restartPolicy``` 只能是 ```Always```,默认为空。看一下RC的编排文件。 38 | 39 | ```yaml 40 | apiVersion: v1 41 | kind: ReplicationController 42 | metadata: 43 | name: nginx 44 | spec: 45 | replicas: 3 46 | selector: 47 | app: nginx 48 | template: 49 | metadata: 50 | name: nginx 51 | labels: 52 | app: nginx 53 | spec: 54 | containers: 55 | - name: nginx 56 | image: docker.io/nginx 57 | ports: 58 | - containerPort: 80 59 | ``` 60 | 61 | ### 1.1 常用管理操作 62 | - 删除RC和相关的Pod,使用```kubectl delete```命令删除RC,会同步删除RC创建的Pods 63 | - 仅删除RC,使用```kubectl delete --cascade=false```仅删除RC而不删除相关的Pods 64 | - 隔离Pod,通过修改标签的方式隔离Pod 65 | 66 | ### 1.2 常用场景 67 | - Rescheduling,RC控制器会确保集群中始终存在你设定数量的Pod在运行 68 | - Scaling,通过修改replicas字段,可以方便的扩容 69 | - Rolling updates,可以使用命令行工具```kubectl rolling-update```实现滚动升级 70 | - Multriple release tracks,配合label和service,可以实现金丝雀发布 71 | - 与Services配合 72 | 73 | > RC没有探测功能,也没有自动扩缩容功能。也不会检查 74 | 75 | ## 2. 
ReplicaSet 76 | RS是RC的下一代，两者只在标签选择器的支持上有所不同：RS支持集合方式的选择，RC仅支持相等方式的选择。ReplicaSet确保集群在任何时间都运行指定数量的Pod副本，看一下RS的编排文件。 77 | 78 | ```yaml 79 | apiVersion: apps/v1 80 | kind: ReplicaSet 81 | metadata: 82 | name: nginx 83 | labels: 84 | app: nginx 85 | tier: frontend 86 | spec: 87 | replicas: 3 88 | selector: 89 | matchLabels: 90 | tier: frontend 91 | matchExpressions: 92 | - {key: tier, operator: In, values: [frontend]} 93 | template: 94 | metadata: 95 | labels: 96 | app: nginx 97 | tier: frontend 98 | spec: 99 | containers: 100 | - name: nginx 101 | image: docker.io/nginx 102 | resources: 103 | requests: 104 | cpu: 100m 105 | memory: 100Mi 106 | ports: 107 | - containerPort: 80 108 | ``` 109 | 编排文件必须的几个字段包括：apiVersion、kind、metadata、spec以及spec.replicas、spec.template、spec.selector。 110 | 111 | > 尽管ReplicaSet可以单独使用，但是如今推荐使用Deployments作为Pod编排（新建、删除、更新）的主要方式。Deployments是更高一级的抽象，提供了RS的管理功能，除非你要使用自定义的更新编排或者不希望所有Pod同时更新，否则基本上没有直接使用RS的机会。 112 | 113 | 114 | 115 | ### 2.1 常用管理操作 116 | - 删除RS和相关Pods，```kubectl delete ``` 117 | - 仅删除RS，```kubectl delete --cascade=false``` 118 | - Pod 隔离，通过修改Pod的label，可以将Pod隔离从而进行测试、数据恢复等操作。 119 | - HPA 自动扩容，ReplicaSet可以作为HPA的目标 120 | ```yaml 121 | apiVersion: autoscaling/v1 122 | kind: HorizontalPodAutoscaler 123 | metadata: 124 | name: nginx-scaler 125 | spec: 126 | scaleTargetRef: 127 | kind: ReplicaSet 128 | name: nginx 129 | minReplicas: 3 130 | maxReplicas: 10 131 | targetCPUUtilizationPercentage: 50 132 | ``` 133 | 134 | ## 3. Deployments 135 | Deployment实际上是对RS和Pod的管理：它总是先创建RS，再由RS创建Pods。由Deployment创建的RS的命名规则为```[DEPLOYMENT-NAME]-[POD-TEMPLATE-HASH-VALUE]```，建议不要手工维护Deployment创建的RS。只有Pod的template发生变化时，才会触发Deployment的更新。 136 | 137 | 下面介绍几个Deployment使用的典型场景。 138 | ### 3.1 创建部署 Deployment 139 | ```yaml 140 | apiVersion: apps/v1 141 | kind: Deployment 142 | metadata: 143 | name: nginx-deployment # 部署名称 144 | labels: 145 | app: nginx 146 | spec: 147 | replicas: 3 # 副本数量 148 | selector: # Pod选择规则 149 | matchLabels: 150 | app: nginx 151 | template: 152 | metadata: 153 | labels: # Pods标签 154 | app: nginx 155 | spec: 156 | containers: 157 | - name: nginx 158 | image: docker.io/nginx:1.7.9 159 | ports: 160 | - containerPort: 80 161 | ``` 162 | 163 | ```sh 164 | kubectl apply -f dp.yaml 165 | kubectl get deployment nginx-deployment 166 | kubectl rollout status deployment nginx-deployment 167 | kubectl get pods --show-labels 168 | ``` 169 | 170 | ### 3.2 更新部署 Deployment 171 | 对已经创建的Deployment进行更新有两种方法：一种是修改编排文件并应用更新，一种是直接通过命令更新部署的参数，分别介绍如下。 172 | 173 | **命令方式更新** 174 | 更新镜像文件的版本 175 | ```sh 176 | kubectl set image deployment/nginx-deployment nginx=docker.io/nginx:1.9.1 177 | ``` 178 | 179 | **更新编排文件的方式** 180 | 首先修改编排文件，然后执行 181 | ```sh 182 | kubectl apply -f dp.yaml 183 | ``` 184 | 185 | > 如果一个Deployment已经创建完成，更新Deployment会创建新的RS并逐渐替换旧的RS（按一定的速率创建新的Pod，确保新的Pod运行正常后，再删掉旧的Pod）。因此如果查看Pods，可能会发现一段时间内Pods的总量会超过replicas设定的数量。如果一个Deployment还在创建过程中，此时更新Deployment会导致刚创建的Pods马上被杀掉，并开始创建新的Pods。 186 | 187 | ### 3.3 回滚更新 188 | 有时部署的版本存在问题，我们需要回滚到之前的版本，Deployment也提供了这种功能。默认情况下，Deployment的更新历史保存在系统中，我们能够据此实现版本的回滚。 189 | 190 | > 只有更新.spec.template的内容才会触发版本记录，单纯的扩容不会记录历史，因此回滚也不会造成Pods数量的变化。 191 | 192 | ```sh 193 | kubectl apply -f dp.yaml 194 | kubectl set image deployment/nginx-deployment nginx=docker.io/nginx:1.91 195 | kubectl rollout status deployments nginx-deployment 196 | kubectl get rs 197 | kubectl get pods 198 | # kubectl rollout history deployment/nginx-deployment 199 | kubectl rollout history deployment/nginx-deployment --revision=2 200 | kubectl rollout undo 
deployment/nginx-deployment 201 | kubectl rollout undo deployment/nginx-deployment --to-revision=2 202 | kubectl get deployment 203 | ``` 204 | > 默认记录10个版本，可以通过```.spec.revisionHistoryLimit```修改。 205 | 206 | ### 3.4 扩容 207 | ```sh 208 | # kubectl scale deployment nginx-deployment --replicas=5 209 | ``` 210 | 211 | 如果集群打开了自动扩容功能，还可以设置自动扩容的条件。 212 | ```sh 213 | # kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80 214 | ``` 215 | 216 | ### 3.5 暂停Deployment 217 | 创建部署之后，我们可以暂停和恢复部署过程，并在此期间执行一些操作。 218 | ```sh 219 | $ kubectl rollout pause deployment/nginx-deployment 220 | $ # 在此期间执行需要的操作 221 | $ kubectl rollout resume deployment/nginx-deployment 222 | ``` 223 | 224 | ### 3.6 Deployment的状态 225 | Deployment包含几种可能的状态。 226 | - Progressing 227 | - 创建了新的ReplicaSet 228 | - 正在对新的ReplicaSet进行Scaling up 229 | - 正在对旧的ReplicaSet进行Scaling down 230 | - 新的Pods准备就绪 231 | - Complete 232 | - 所有的副本都已更新为最新状态 233 | - 所有的副本都已可用 234 | - 没有旧的副本正在运行 235 | - Failed 236 | - Quota不足 237 | - Readiness探测失败 238 | - 镜像拉取失败 239 | - 权限不足 240 | - 应用运行错误 241 | 242 | ### 3.7 一些参数 243 | - Strategy ```.spec.strategy```，有两个选项，分别是Recreate和RollingUpdate，默认为后者。前者的策略为先杀死旧Pods再创建新Pods，后者为一边创建新Pods，一边杀掉旧Pods 244 | - Max Unavailable ```.spec.strategy.rollingUpdate.maxUnavailable```，更新过程中允许不可用Pods的最大比例，默认为25% 245 | - Max Surge ```.spec.strategy.rollingUpdate.maxSurge```，更新过程中允许超出replicas的最大Pods数量，默认为25% 246 | - Progress Deadline Seconds ```.spec.progressDeadlineSeconds```，可选参数，设置等待多少秒未检测到进展后，将Deployment标记为进度失败 247 | - Min Ready Seconds ```.spec.minReadySeconds```，可选参数，设置新建Pod至少正常运行多少秒后才被视为可用 248 | - Revision History Limit ```.spec.revisionHistoryLimit```，可选参数，设置保留的历史版本数量 249 | 250 | ## 4. StatefulSets 251 | 252 | StatefulSets我专门有一篇文章介绍，大家可以参考[这里](https://www.cnblogs.com/cocowool/p/kubernetes_statefulset.html)。 253 | 254 | ## 5. DaemonSet 255 | DaemonSet确保所有的Node上都运行了一份Pod的副本，只要Node加入到集群中，Pod就会在Node上创建。典型的应用场景包括：运行一个存储集群（glusterd、ceph）、运行一个日志收集集群（fluentd、logstash）、运行监控程序（Prometheus Node Exporter、collectd、Datadog等）。默认情况下DaemonSet的Pod由DaemonSet控制器调度，如果设置了```nodeAffinity```参数，则会由默认的scheduler调度。 256 | 257 | 典型的编排文件如下。 258 | ```yaml 259 | apiVersion: apps/v1 260 | kind: DaemonSet 261 | metadata: 262 | name: fluentd-es 263 | namespace: kube-system 264 | labels: 265 | k8s-app: fluentd-logging 266 | spec: 267 | selector: 268 | matchLabels: 269 | name: fluentd-es 270 | template: 271 | metadata: 272 | labels: 273 | name: fluentd-es 274 | spec: 275 | tolerations: 276 | - key: node-role.kubernetes.io/master 277 | effect: NoSchedule 278 | containers: 279 | - name: fluentd-es 280 | image: docker.io/fluentd:1.20 281 | resources: 282 | limits: 283 | memory: 200Mi 284 | requests: 285 | cpu: 100m 286 | memory: 200Mi 287 | volumeMounts: 288 | - name: varlog 289 | mountPath: /var/log 290 | - name: varlibdockercontainers 291 | mountPath: /var/lib/docker/containers 292 | readOnly: true 293 | terminationGracePeriodSeconds: 30 294 | volumes: 295 | - name: varlog 296 | hostPath: 297 | path: /var/log 298 | - name: varlibdockercontainers 299 | hostPath: 300 | path: /var/lib/docker/containers 301 | ``` 302 | 303 | ## 6. 
Grabage Collection 304 | Kubernetes中一些对象间有从属关系,例如一个RS会拥有一组Pod。Kubernetes中的GC用来删除那些曾经有过属主,但是后来没有属主的对象。Kubernetes中拥有属主的对象有一个```metadata.ownerReferences```属性指向属主。在Kubernetes的1.8版本之后,系统会自动为ReplicationController、ReplicaSet、StatefulSet、DaemonSet、Deployment、Job和CronJob创建的对象设置ownerReference。 305 | 306 | 之前各种控制器中我们提到过级联删除,就是通过这个属性来实现的。级联删除有两种形式 Foreground 以及 Background ,Foreground模式中,选择级联删除后GC会自动将所有```ownerReference.blockOwnerDeletion=true```的对象删除,最后再删除owner对象。Background模式中,owner对象会被立即删除,然后GC在后台删除其他依赖对象。如果我们在删除RS的时候,选择不进行级联删除,那么这个RS创建的Pods就变成了没有属主的孤儿。 307 | 308 | ## 7. Jobs 309 | Job通过创建一个或多个Pod来运行特定的任务,当正常完成任务的Pod数量达到设定标准时,Job就会结束。删除Job会将Job创建的所有Pods删除。 310 | 311 | 典型的编排文件 312 | ```yaml 313 | apiVersion: batch/v1 314 | kind: Job 315 | metadata: 316 | name: pi 317 | spec: 318 | backoffLimit: 4 319 | activeDeadlineSeconds: 100 320 | template: 321 | spec: 322 | containers: 323 | - name: pi 324 | image: docker.io/perl 325 | command: ["perl", "-Mbignum=bpi", "-wle", "print bpid(2000)"] 326 | restartPolicy: Never 327 | ``` 328 | 329 | 主要有三种类型的Job 330 | - 非并行的Job,通常只启动一个Pod执行任务 331 | - 带有固定完成数量的并行Job,需要将```.spec.completions```设置为非零值 332 | - 与队列结合的并行Job,不需要设置```.spec.completions```,设置```.spec.parallelism``` 333 | 334 | > Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice. 335 | > 感觉有坑啊。 336 | 337 | Kubernetes提供的并行Job并不适合科学计算或者执行相关的任务,更适合执行邮件发送、渲染、文件转义等等单独的任务。 338 | 339 | ## 8. CronJob 340 | Cron Job是根据时间来自动创建Job对象。类似于Crontab,周期性的执行一个任务。每次执行期间,会创建一个Job对象。也可能会创建两个或者不创建Job,这个情况可能会发生,因此应该保证Job是幂等的。 341 | 342 | > For every CronJob, the CronJob controller checks how many schedules it missed in the duration from its last scheduled time until now. If there are more than 100 missed schedules, then it does not start the job and logs the error 如果错过太多,就不要追了 343 | 344 | 典型的编排文件 345 | ```yaml 346 | apiVersion: batch/v1beta1 347 | kind: CronJob 348 | metadata: 349 | name: hello 350 | spec: 351 | schedule: "*/1 * * * *" 352 | jobTemplate: 353 | spec: 354 | template: 355 | spec: 356 | containers: 357 | - name: hello 358 | image: docker.io/busybox 359 | args: 360 | - /bin/sh 361 | - -c 362 | - date; echo Hello from the Kubernetes cluster 363 | restartPolicy: OnFailure 364 | ``` 365 | 366 | 所有的编排文件都上传到了我的[Github](https://github.com/cocowool/k8s-go/tree/master/controller)上,大家可以自行[下载](https://github.com/cocowool/k8s-go/tree/master/controller)。 367 | 368 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 369 | 370 | ## 参考资料 371 | 1. [Kubernetes ReplicaSet](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/) 372 | 2. 
[Running Automated Tasks with a CronJob](https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/) 373 | -------------------------------------------------------------------------------- /controller/cb.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1beta1 2 | kind: CronJob 3 | metadata: 4 | name: hello 5 | spec: 6 | schedule: "*/1 * * * *" 7 | jobTemplate: 8 | spec: 9 | template: 10 | spec: 11 | containers: 12 | - name: hello 13 | image: docker.io/busybox 14 | args: 15 | - /bin/sh 16 | - -c 17 | - date; echo Hello from the Kubernetes cluster 18 | restartPolicy: OnFailure -------------------------------------------------------------------------------- /controller/dp.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: docker.io/nginx 20 | ports: 21 | - contaienrPort: 80 -------------------------------------------------------------------------------- /controller/ds.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: fluentd-es 5 | namespace: kube-system 6 | labels: 7 | k8s-app: fluentd-logging 8 | spec: 9 | selector: 10 | matchLabels: 11 | name: fluentd-es 12 | template: 13 | metadata: 14 | labels: 15 | name: fluentd-es 16 | spec: 17 | tolerations: 18 | - key: node-role.kubernetes.io/master 19 | effect: NoSchedule 20 | containers: 21 | - name: fluentd-es 22 | image: docker.io/fluentd:1.20 23 | resources: 24 | limits: 25 | memory: 200Mi 26 | requests: 27 | cpu: 100m 28 | memory: 200Mi 29 | volumeMounts: 30 | - name: varlog 31 | mountPath: /var/log 32 | - name: varlibdockercontainers 33 | mountPath: /var/lib/docker/containers 34 | readOnly: true 35 | terminationGracePeriodSeconds: 30 36 | volumes: 37 | - name: varlog 38 | hostPath: 39 | path: /var/log 40 | - name: varlibdockercontainers 41 | hostPath: 42 | path: /var/lib/docker/contaienrs -------------------------------------------------------------------------------- /controller/jb.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1 2 | kind: Job 3 | metadata: 4 | name: pi 5 | spec: 6 | backoffLimit: 4 7 | activeDeadlineSeconds: 100 8 | template: 9 | spec: 10 | containers: 11 | - name: pi 12 | image: docker.io/perl 13 | command: ["perl", "-Mbignum=bpi", "-wle", "print bpid(2000)"] 14 | restartPolicy: Never -------------------------------------------------------------------------------- /controller/rc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: nginx 5 | spec: 6 | replicas: 3 7 | selector: 8 | app: nginx 9 | template: 10 | metadata: 11 | name: nginx 12 | labels: 13 | app: nginx 14 | spec: 15 | containers: 16 | - name: nginx 17 | image: docker.io/nginx 18 | ports: 19 | - containerPort: 80 -------------------------------------------------------------------------------- /controller/rs.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: ReplicaSet 3 | metadata: 4 | name: nginx 5 | labels: 6 | app: 
nginx 7 | tier: frontend 8 | spec: 9 | replicas: 3 10 | selector: 11 | matchLabels: 12 | tier: frontend 13 | matchExpressions: 14 | - {key:tier, operator: In, values: [frontend]} 15 | template: 16 | metadata: 17 | labels: 18 | app: nginx 19 | tier: frontend 20 | spec: 21 | containers: 22 | - name: nginx 23 | image: docker.io/nginx 24 | resources: 25 | requests: 26 | cpu: 100m 27 | memory: 100Mi 28 | ports: 29 | - containerPort: 80 -------------------------------------------------------------------------------- /elk/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes部署ELK并使用Filebeat收集容器日志 2 | 3 | > 本文的试验环境为CentOS 7.3,Kubernetes集群为1.11.2,安装步骤参见[kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | ## 1. 环境准备 6 | Elasticsearch运行时要求```vm.max_map_count```内核参数必须大于262144,因此开始之前需要确保这个参数正常调整过。 7 | ```sh 8 | $ sysctl -w vm.max_map_count=262144 9 | ``` 10 | 11 | 也可以在ES的的编排文件中增加一个initContainer来修改内核参数,但这要求kublet启动的时候必须添加了```--allow-privileged```参数,但是一般生产中不会给加这个参数,因此最好在系统供给的时候要求这个参数修改完成。 12 | 13 | ### ES的配置方式 14 | - 使用Cluster Update Setting API动态修改配置 15 | - 使用配置文件的方式,配置文件默认在 config 文件夹下,具体位置取决于安装方式。 16 | - elasticsearch.yml 配置Elasticsearch 17 | - jvm.options 配置ES JVM参数 18 | - log4j.properties 配置ES logging参数 19 | - 使用Prompt方式在启动时输入 20 | 21 | 最常使用的配置方式为使用配置文件,ES的配置文件为yaml格式,格式要求和Kubernetes的编排文件一样。配置文件中可以引用环境变量,例如```node.name: ${HOSTNAME}``` 22 | 23 | ### ES的节点 24 | ES的节点[Node](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html)可以分为几种角色: 25 | - Master-eligible node,是指有资格被选为Master节点的Node,可以统称为Master节点。设置```node.master: true``` 26 | - Data node,存储数据的节点,设置方式为```node.data: true```。 27 | - Ingest node,进行数据处理的节点,设置方式为```node.ingest: true```。 28 | - Trible node,为了做集群整合用的。 29 | 30 | 对于单节点的Node,默认是master-eligible和data,对于多节点的集群,就要仔细规划每个节点的角色。 31 | 32 | ## 2. 单实例方式部署ELK 33 | 单实例部署ELK的方法非常简单,可以参考我Github上的[elk-single.yaml](https://github.com/cocowool/k8s-go/blob/master/elk/elk-single.yaml)文件,整体就是创建一个ES的部署,创建一个Kibana的部署,创建一个ES的Headless服务,创建一个Kiana的NodePort服务,本地通过节点的NodePort访问Kibana。 34 | 35 | ```sh 36 | [root@devops-101 ~]# curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-single.yaml 37 | [root@devops-101 ~]# kubectl apply -f elk-single.yaml 38 | deployment.apps/kb-single created 39 | service/kb-single-svc unchanged 40 | deployment.apps/es-single created 41 | service/es-single-nodeport unchanged 42 | service/es-single unchanged 43 | [root@devops-101 ~]# kubectl get all 44 | NAME READY STATUS RESTARTS AGE 45 | pod/es-single-5b8b696ff8-9mqrz 1/1 Running 0 26s 46 | pod/kb-single-69d6d9c744-sxzw9 1/1 Running 0 26s 47 | 48 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 49 | service/es-single ClusterIP None 9200/TCP,9300/TCP 19m 50 | service/es-single-nodeport NodePort 172.17.197.237 9200:31200/TCP,9300:31300/TCP 13h 51 | service/kb-single-svc NodePort 172.17.27.11 5601:32601/TCP 19m 52 | service/kubernetes ClusterIP 172.17.0.1 443/TCP 14d 53 | 54 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 55 | deployment.apps/es-single 1 1 1 1 26s 56 | deployment.apps/kb-single 1 1 1 1 26s 57 | 58 | NAME DESIRED CURRENT READY AGE 59 | replicaset.apps/es-single-5b8b696ff8 1 1 1 26s 60 | replicaset.apps/kb-single-69d6d9c744 1 1 1 26s 61 | ``` 62 | 63 | 可以看看效果如下: 64 | ![](https://images2018.cnblogs.com/blog/39469/201809/39469-20180910182121558-1067453160.png) 65 | 66 | ## 3. 
集群部署ELK 67 | ### 3.1 不区分集群中的节点角色 68 | ```sh 69 | [root@devops-101 ~]# curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-cluster.yaml 70 | [root@devops-101 ~]# kubectl apply -f elk-cluster.yaml 71 | deployment.apps/kb-single created 72 | service/kb-single-svc created 73 | statefulset.apps/es-cluster created 74 | service/es-cluster-nodeport created 75 | service/es-cluster created 76 | ``` 77 | 78 | 效果如下 79 | ![](https://images2018.cnblogs.com/blog/39469/201809/39469-20180910182150309-194695122.png) 80 | 81 | ### 3.2 区分集群中节点角色 82 | 如果需要区分节点的角色,就需要建立两个StatefulSet部署,一个是Master集群,一个是Data集群。Data集群的存储我这里为了简单使用了```emptyDir```,可以使用```localStorage```或者```hostPath```,关于存储的介绍,可以参考[Kubernetes存储系统介绍](https://www.cnblogs.com/cocowool/p/kubernetes_storage.html)。这样就可以避免Data节点在本机重启时发生数据丢失而重建索引,但是如果发生迁移的话,如果想保留数据,只能采用共享存储的方案了。具体的编排文件在这里[elk-cluster-with-role](https://github.com/cocowool/k8s-go/blob/master/elk/elk-cluster-with-role.yaml) 83 | ```sh 84 | [root@devops-101 ~]# curl -L -O https://raw.githubusercontent.com/cocowool/k8s-go/master/elk/elk-cluster-with-role.yaml 85 | [root@devops-101 ~]# kubectl apply -f elk-cluster-with-role.yaml 86 | deployment.apps/kb-single created 87 | service/kb-single-svc created 88 | statefulset.apps/es-cluster created 89 | statefulset.apps/es-cluster-data created 90 | service/es-cluster-nodeport created 91 | service/es-cluster created 92 | [root@devops-101 ~]# kubectl get all 93 | NAME READY STATUS RESTARTS AGE 94 | pod/es-cluster-0 1/1 Running 0 13s 95 | pod/es-cluster-1 0/1 ContainerCreating 0 2s 96 | pod/es-cluster-data-0 1/1 Running 0 13s 97 | pod/es-cluster-data-1 0/1 ContainerCreating 0 2s 98 | pod/kb-single-5848f5f967-w8hwq 1/1 Running 0 14s 99 | 100 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 101 | service/es-cluster ClusterIP None 9200/TCP,9300/TCP 13s 102 | service/es-cluster-nodeport NodePort 172.17.207.135 9200:31200/TCP,9300:31300/TCP 13s 103 | service/kb-single-svc NodePort 172.17.8.137 5601:32601/TCP 14s 104 | service/kubernetes ClusterIP 172.17.0.1 443/TCP 16d 105 | 106 | NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE 107 | deployment.apps/kb-single 1 1 1 1 14s 108 | 109 | NAME DESIRED CURRENT READY AGE 110 | replicaset.apps/kb-single-5848f5f967 1 1 1 14s 111 | 112 | NAME DESIRED CURRENT AGE 113 | statefulset.apps/es-cluster 3 2 14s 114 | statefulset.apps/es-cluster-data 2 2 13s 115 | ``` 116 | 117 | 效果如下 118 | ![](https://images2018.cnblogs.com/blog/39469/201809/39469-20180910182220529-48582025.png) 119 | 120 | ## 4. 
使用Filebeat监控收集容器日志 121 | 使用Logstash,可以监测具有一定命名规律的日志文件,但是对于容器日志,很多文件名都是没有规律的,这种情况比较适合使用Filebeat来对日志目录进行监测,发现有更新的日志后上送到Logstash处理或者直接送入到ES中。 122 | 123 | 每个Node节点上的容器应用日志,默认都会在```/var/log/containers```目录下创建软链接,这里我遇到了两个小问题,第一个就是当时挂载```hostPath```的时候没有挂载软链接的目的文件夹,导致在容器中能看到软链接,但是找不到对应的文件;第二个问题是宿主机上这些日志权限都是root,而Pod默认用filebeat用户启动的应用,因此要单独设置下。 124 | 125 | 效果如下 126 | ![](https://images2018.cnblogs.com/blog/39469/201809/39469-20180910182334500-531866919.png) 127 | ![](https://images2018.cnblogs.com/blog/39469/201809/39469-20180910182345566-25819522.png) 128 | 129 | 具体的编排文件可以参考我的Github主页,提供了[Deployment](https://github.com/cocowool/k8s-go/blob/master/elk/filebeat-dp.yml)方式的编排和[DaemonSet](https://github.com/cocowool/k8s-go/blob/master/elk/filebeat-ds.yml)方式的编排。 130 | 131 | 对于具体日志的格式,因为时间问题没有做进一步的解析,这里如果有朋友做过,可以分享出来。 132 | 133 | 主要的编排文件内容摘抄如下。 134 | ```yaml 135 | kind: List 136 | apiVersion: v1 137 | items: 138 | - apiVersion: v1 139 | kind: ConfigMap 140 | metadata: 141 | name: filebeat-config 142 | labels: 143 | k8s-app: filebeat 144 | kubernetes.io/cluster-service: "true" 145 | app: filebeat-config 146 | data: 147 | filebeat.yml: | 148 | processors: 149 | - add_cloud_metadata: 150 | filebeat.modules: 151 | - module: system 152 | filebeat.inputs: 153 | - type: log 154 | paths: 155 | - /var/log/containers/*.log 156 | symlinks: true 157 | # json.message_key: log 158 | # json.keys_under_root: true 159 | output.elasticsearch: 160 | hosts: ['es-single:9200'] 161 | logging.level: info 162 | - apiVersion: extensions/v1beta1 163 | kind: DaemonSet 164 | metadata: 165 | name: filebeat 166 | labels: 167 | k8s-app: filebeat 168 | kubernetes.io/cluster-service: "true" 169 | spec: 170 | template: 171 | metadata: 172 | name: filebeat 173 | labels: 174 | app: filebeat 175 | k8s-app: filebeat 176 | kubernetes.io/cluster-service: "true" 177 | spec: 178 | containers: 179 | - image: docker.elastic.co/beats/filebeat:6.4.0 180 | name: filebeat 181 | args: [ 182 | "-c", "/home/filebeat-config/filebeat.yml", 183 | "-e", 184 | ] 185 | securityContext: 186 | runAsUser: 0 187 | volumeMounts: 188 | - name: filebeat-storage 189 | mountPath: /var/log/containers 190 | - name: varlogpods 191 | mountPath: /var/log/pods 192 | - name: varlibdockercontainers 193 | mountPath: /var/lib/docker/containers 194 | - name: "filebeat-volume" 195 | mountPath: "/home/filebeat-config" 196 | nodeSelector: 197 | role: front 198 | volumes: 199 | - name: filebeat-storage 200 | hostPath: 201 | path: /var/log/containers 202 | - name: varlogpods 203 | hostPath: 204 | path: /var/log/pods 205 | - name: varlibdockercontainers 206 | hostPath: 207 | path: /var/lib/docker/containers 208 | - name: filebeat-volume 209 | configMap: 210 | name: filebeat-config 211 | ``` 212 | 213 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 214 | 215 | ## 参考资料: 216 | 1. [Elasticsearch cluster on top of Kubernetes made easy](https://github.com/pires/kubernetes-elasticsearch-cluster) 217 | 2. [Install Elasticseaerch with Docker](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html) 218 | 3. [Docker Elasticsearch](https://docs.docker.com/samples/library/elasticsearch/) 219 | 4. [Running Kibana on Docker](https://www.elastic.co/guide/en/kibana/current/docker.html) 220 | 5. [Configuring Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/6.3/settings.html) 221 | 6. [Elasticsearch Node](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-node.html) 222 | 7. 
[Loggin Using Elasticsearch and kibana](https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana/) 223 | 8. [Configuring Logstash for Docker](https://www.elastic.co/guide/en/logstash/current/docker-config.html) 224 | 9. [Running Filebeat on Docker](https://www.elastic.co/guide/en/beats/filebeat/current/running-on-docker.html) 225 | 10. [Filebeat中文指南](http://www.cnblogs.com/kerwinC/p/6227768.html) 226 | 11. [Add experimental symlink support](https://github.com/elastic/beats/pull/2478) 227 | -------------------------------------------------------------------------------- /elk/elk-cluster-with-role.yaml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: apps/v1beta1 5 | kind: Deployment 6 | metadata: 7 | name: kb-single 8 | labels: 9 | author: shiqiang 10 | spec: 11 | replicas: 1 12 | template: 13 | metadata: 14 | name: kb-single 15 | labels: 16 | app: kb-single 17 | author: shiqiang 18 | spec: 19 | containers: 20 | - image: docker.elastic.co/kibana/kibana:6.4.0 21 | name: kb 22 | env: 23 | - name: ELASTICSEARCH_URL 24 | value: "http://es-cluster:9200" 25 | ports: 26 | - name: http 27 | containerPort: 5601 28 | - apiVersion: v1 29 | kind: Service 30 | metadata: 31 | name: kb-single-svc 32 | labels: 33 | author: shiqiang 34 | spec: 35 | type: NodePort 36 | ports: 37 | - name: http 38 | port: 5601 39 | targetPort: 5601 40 | nodePort: 32601 41 | selector: 42 | app: kb-single 43 | - apiVersion: apps/v1beta1 44 | kind: StatefulSet 45 | metadata: 46 | name: es-cluster 47 | spec: 48 | serviceName: es-cluster 49 | replicas: 3 50 | selector: 51 | matchLabels: 52 | app: es-cluster 53 | template: 54 | metadata: 55 | name: es-cluster 56 | labels: 57 | app: es-cluster 58 | author: shiqiang 59 | role: master 60 | spec: 61 | containers: 62 | - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0 63 | name: es 64 | resources: 65 | limits: 66 | cpu: 300m 67 | memory: 512Mi 68 | requests: 69 | cpu: 200m 70 | memory: 256Mi 71 | env: 72 | - name: network.host 73 | value: "_site_" 74 | - name: node.name 75 | value: "${HOSTNAME}" 76 | - name: discovery.zen.ping.unicast.hosts 77 | value: "es-cluster" 78 | - name: discovery.zen.minimum_master_nodes 79 | value: "2" 80 | - name: cluster.name 81 | value: "test-cluster" 82 | - name: node.master 83 | value: "true" 84 | - name: node.data 85 | value: "false" 86 | - name: node.ingest 87 | value: "false" 88 | - name: ES_JAVA_OPTS 89 | value: "-Xms128m -Xmx128m" 90 | volumeMounts: 91 | - name: es-cluster-storage 92 | mountPath: /usr/share/elasticsearch/data 93 | volumes: 94 | - name: es-cluster-storage 95 | emptyDir: {} 96 | - apiVersion: apps/v1beta1 97 | kind: StatefulSet 98 | metadata: 99 | name: es-cluster-data 100 | spec: 101 | serviceName: es-cluster-data 102 | replicas: 3 103 | selector: 104 | matchLabels: 105 | app: es-cluster-data 106 | template: 107 | metadata: 108 | name: es-cluster-data 109 | labels: 110 | app: es-cluster-data 111 | author: shiqiang 112 | role: master 113 | spec: 114 | containers: 115 | - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0 116 | name: es-data 117 | resources: 118 | limits: 119 | cpu: 300m 120 | memory: 512Mi 121 | requests: 122 | cpu: 200m 123 | memory: 256Mi 124 | env: 125 | - name: network.host 126 | value: "_site_" 127 | - name: node.name 128 | value: "${HOSTNAME}" 129 | - name: discovery.zen.ping.unicast.hosts 130 | value: "es-cluster" 131 | - name: discovery.zen.minimum_master_nodes 132 | 
value: "2" 133 | - name: cluster.name 134 | value: "test-cluster" 135 | - name: node.master 136 | value: "false" 137 | - name: node.data 138 | value: "true" 139 | - name: node.ingest 140 | value: "false" 141 | - name: ES_JAVA_OPTS 142 | value: "-Xms128m -Xmx128m" 143 | volumeMounts: 144 | - name: es-cluster-storage 145 | mountPath: /usr/share/elasticsearch/data 146 | volumes: 147 | - name: es-cluster-storage 148 | emptyDir: {} 149 | - apiVersion: v1 150 | kind: Service 151 | metadata: 152 | name: es-cluster-nodeport 153 | labels: 154 | author: shiqiang 155 | spec: 156 | type: NodePort 157 | ports: 158 | - name: http 159 | port: 9200 160 | targetPort: 9200 161 | nodePort: 31200 162 | - name: tcp 163 | port: 9300 164 | targetPort: 9300 165 | nodePort: 31300 166 | selector: 167 | app: es-cluster 168 | - apiVersion: v1 169 | kind: Service 170 | metadata: 171 | name: es-cluster 172 | labels: 173 | author: shiqiang 174 | spec: 175 | clusterIP: None 176 | ports: 177 | - name: http 178 | port: 9200 179 | - name: tcp 180 | port: 9300 181 | selector: 182 | app: es-cluster -------------------------------------------------------------------------------- /elk/elk-cluster.yaml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: apps/v1beta1 5 | kind: Deployment 6 | metadata: 7 | name: kb-single 8 | labels: 9 | author: shiqiang 10 | spec: 11 | replicas: 1 12 | template: 13 | metadata: 14 | name: kb-single 15 | labels: 16 | app: kb-single 17 | author: shiqiang 18 | spec: 19 | containers: 20 | - image: docker.elastic.co/kibana/kibana:6.4.0 21 | name: kb 22 | env: 23 | - name: ELASTICSEARCH_URL 24 | value: "http://es-cluster:9200" 25 | ports: 26 | - name: http 27 | containerPort: 5601 28 | - apiVersion: v1 29 | kind: Service 30 | metadata: 31 | name: kb-single-svc 32 | labels: 33 | author: shiqiang 34 | spec: 35 | type: NodePort 36 | ports: 37 | - name: http 38 | port: 5601 39 | targetPort: 5601 40 | nodePort: 32601 41 | selector: 42 | app: kb-single 43 | - apiVersion: apps/v1beta1 44 | kind: StatefulSet 45 | metadata: 46 | name: es-cluster 47 | spec: 48 | serviceName: es-cluster 49 | replicas: 3 50 | template: 51 | metadata: 52 | name: es-cluster 53 | labels: 54 | app: es-cluster 55 | author: shiqiang 56 | spec: 57 | containers: 58 | - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0 59 | name: es 60 | resources: 61 | limits: 62 | cpu: 300m 63 | memory: 512Mi 64 | requests: 65 | cpu: 200m 66 | memory: 256Mi 67 | env: 68 | - name: network.host 69 | value: "_site_" 70 | - name: node.name 71 | value: "${HOSTNAME}" 72 | - name: discovery.zen.ping.unicast.hosts 73 | value: "es-cluster" 74 | - name: discovery.zen.minimum_master_nodes 75 | value: "2" 76 | - name: cluster.name 77 | value: "test-cluster" 78 | - name: ES_JAVA_OPTS 79 | value: "-Xms128m -Xmx128m" 80 | volumeMounts: 81 | - name: es-cluster-data 82 | mountPath: /usr/share/elasticsearch/data 83 | volumes: 84 | - name: es-cluster-data 85 | emptyDir: {} 86 | - apiVersion: v1 87 | kind: Service 88 | metadata: 89 | name: es-cluster-nodeport 90 | labels: 91 | author: shiqiang 92 | spec: 93 | type: NodePort 94 | ports: 95 | - name: http 96 | port: 9200 97 | targetPort: 9200 98 | nodePort: 31200 99 | - name: tcp 100 | port: 9300 101 | targetPort: 9300 102 | nodePort: 31300 103 | selector: 104 | app: es-cluster 105 | - apiVersion: v1 106 | kind: Service 107 | metadata: 108 | name: es-cluster 109 | labels: 110 | author: shiqiang 111 | spec: 112 | clusterIP: 
None 113 | ports: 114 | - name: http 115 | port: 9200 116 | - name: tcp 117 | port: 9300 118 | selector: 119 | app: es-cluster -------------------------------------------------------------------------------- /elk/elk-single.yaml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: apps/v1beta1 5 | kind: Deployment 6 | metadata: 7 | name: kb-single 8 | spec: 9 | replicas: 1 10 | template: 11 | metadata: 12 | name: kb-single 13 | labels: 14 | app: kb-single 15 | spec: 16 | containers: 17 | - image: docker.elastic.co/kibana/kibana:6.4.0 18 | name: kb 19 | env: 20 | - name: ELASTICSEARCH_URL 21 | value: "http://es-single:9200" 22 | ports: 23 | - name: http 24 | containerPort: 5601 25 | - apiVersion: v1 26 | kind: Service 27 | metadata: 28 | name: kb-single-svc 29 | spec: 30 | type: NodePort 31 | ports: 32 | - name: http 33 | port: 5601 34 | targetPort: 5601 35 | nodePort: 32601 36 | selector: 37 | app: kb-single 38 | - apiVersion: apps/v1beta1 39 | kind: Deployment 40 | metadata: 41 | name: es-single 42 | spec: 43 | replicas: 1 44 | template: 45 | metadata: 46 | name: es-single 47 | labels: 48 | app: es-single 49 | spec: 50 | containers: 51 | - image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0 52 | name: es 53 | env: 54 | - name: network.host 55 | value: "_site_" 56 | - name: node.name 57 | value: "${HOSTNAME}" 58 | - name: discovery.zen.ping.unicast.hosts 59 | value: "${ES_SINGLE_NODEPORT_SERVICE_HOST}" 60 | - name: cluster.name 61 | value: "test-single" 62 | - name: ES_JAVA_OPTS 63 | value: "-Xms128m -Xmx128m" 64 | volumeMounts: 65 | - name: es-single-data 66 | mountPath: /usr/share/elasticsearch/data 67 | volumes: 68 | - name: es-single-data 69 | emptyDir: {} 70 | - apiVersion: v1 71 | kind: Service 72 | metadata: 73 | name: es-single-nodeport 74 | spec: 75 | type: NodePort 76 | ports: 77 | - name: http 78 | port: 9200 79 | targetPort: 9200 80 | nodePort: 31200 81 | - name: tcp 82 | port: 9300 83 | targetPort: 9300 84 | nodePort: 31300 85 | selector: 86 | app: es-single 87 | - apiVersion: v1 88 | kind: Service 89 | metadata: 90 | name: es-single 91 | spec: 92 | clusterIP: None 93 | ports: 94 | - name: http 95 | port: 9200 96 | - name: tcp 97 | port: 9300 98 | selector: 99 | app: es-single -------------------------------------------------------------------------------- /elk/filebeat-dp.yml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: ConfigMap 6 | metadata: 7 | name: filebeat-config 8 | labels: 9 | k8s-app: filebeat 10 | kubernetes.io/cluster-service: "true" 11 | app: filebeat-config 12 | data: 13 | filebeat.yml: | 14 | processors: 15 | - add_cloud_metadata: 16 | filebeat.modules: 17 | - module: system 18 | filebeat.inputs: 19 | - type: log 20 | paths: 21 | - /var/log/containers/*.log 22 | symlinks: true 23 | # json.message_key: log 24 | # json.keys_under_root: true 25 | output.elasticsearch: 26 | hosts: ['es-single:9200'] 27 | logging.level: info 28 | - apiVersion: extensions/v1beta1 29 | kind: Deployment 30 | metadata: 31 | name: filebeat 32 | labels: 33 | k8s-app: filebeat 34 | kubernetes.io/cluster-service: "true" 35 | spec: 36 | replicas: 1 37 | template: 38 | metadata: 39 | name: filebeat 40 | labels: 41 | app: filebeat 42 | k8s-app: filebeat 43 | kubernetes.io/cluster-service: "true" 44 | spec: 45 | containers: 46 | - image: docker.elastic.co/beats/filebeat:6.4.0 47 | name: 
filebeat 48 | args: [ 49 | "-c", "/home/filebeat-config/filebeat.yml", 50 | "-e", 51 | ] 52 | securityContext: 53 | runAsUser: 0 54 | volumeMounts: 55 | - name: filebeat-storage 56 | mountPath: /var/log/containers 57 | - name: varlogpods 58 | mountPath: /var/log/pods 59 | - name: varlibdockercontainers 60 | mountPath: /var/lib/docker/containers 61 | - name: "filebeat-volume" 62 | mountPath: "/home/filebeat-config" 63 | volumes: 64 | - name: filebeat-storage 65 | hostPath: 66 | path: /var/log/containers 67 | - name: varlogpods 68 | hostPath: 69 | path: /var/log/pods 70 | - name: varlibdockercontainers 71 | hostPath: 72 | path: /var/lib/docker/containers 73 | - name: filebeat-volume 74 | configMap: 75 | name: filebeat-config 76 | - apiVersion: rbac.authorization.k8s.io/v1beta1 77 | kind: ClusterRoleBinding 78 | metadata: 79 | name: filebeat 80 | subjects: 81 | - kind: ServiceAccount 82 | name: filebeat 83 | namespace: default 84 | roleRef: 85 | kind: ClusterRole 86 | name: filebeat 87 | apiGroup: rbac.authorization.k8s.io 88 | - apiVersion: rbac.authorization.k8s.io/v1beta1 89 | kind: ClusterRole 90 | metadata: 91 | name: filebeat 92 | labels: 93 | k8s-app: filebeat 94 | rules: 95 | - apiGroups: [""] # "" indicates the core API group 96 | resources: 97 | - namespaces 98 | - pods 99 | verbs: 100 | - get 101 | - watch 102 | - list 103 | - apiVersion: v1 104 | kind: ServiceAccount 105 | metadata: 106 | name: filebeat 107 | namespace: default 108 | labels: 109 | k8s-app: filebeat -------------------------------------------------------------------------------- /elk/filebeat-ds.yml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: ConfigMap 6 | metadata: 7 | name: filebeat-config 8 | labels: 9 | k8s-app: filebeat 10 | kubernetes.io/cluster-service: "true" 11 | app: filebeat-config 12 | data: 13 | filebeat.yml: | 14 | processors: 15 | - add_cloud_metadata: 16 | filebeat.modules: 17 | - module: system 18 | filebeat.inputs: 19 | - type: log 20 | paths: 21 | - /var/log/containers/*.log 22 | symlinks: true 23 | # json.message_key: log 24 | # json.keys_under_root: true 25 | output.elasticsearch: 26 | hosts: ['es-single:9200'] 27 | logging.level: info 28 | - apiVersion: extensions/v1beta1 29 | kind: DaemonSet 30 | metadata: 31 | name: filebeat 32 | labels: 33 | k8s-app: filebeat 34 | kubernetes.io/cluster-service: "true" 35 | spec: 36 | template: 37 | metadata: 38 | name: filebeat 39 | labels: 40 | app: filebeat 41 | k8s-app: filebeat 42 | kubernetes.io/cluster-service: "true" 43 | spec: 44 | containers: 45 | - image: docker.elastic.co/beats/filebeat:6.4.0 46 | name: filebeat 47 | args: [ 48 | "-c", "/home/filebeat-config/filebeat.yml", 49 | "-e", 50 | ] 51 | securityContext: 52 | runAsUser: 0 53 | volumeMounts: 54 | - name: filebeat-storage 55 | mountPath: /var/log/containers 56 | - name: varlogpods 57 | mountPath: /var/log/pods 58 | - name: varlibdockercontainers 59 | mountPath: /var/lib/docker/containers 60 | - name: "filebeat-volume" 61 | mountPath: "/home/filebeat-config" 62 | nodeSelector: 63 | role: front 64 | volumes: 65 | - name: filebeat-storage 66 | hostPath: 67 | path: /var/log/containers 68 | - name: varlogpods 69 | hostPath: 70 | path: /var/log/pods 71 | - name: varlibdockercontainers 72 | hostPath: 73 | path: /var/lib/docker/containers 74 | - name: filebeat-volume 75 | configMap: 76 | name: filebeat-config 77 | - apiVersion: rbac.authorization.k8s.io/v1beta1 78 | kind: 
ClusterRoleBinding 79 | metadata: 80 | name: filebeat 81 | subjects: 82 | - kind: ServiceAccount 83 | name: filebeat 84 | namespace: default 85 | roleRef: 86 | kind: ClusterRole 87 | name: filebeat 88 | apiGroup: rbac.authorization.k8s.io 89 | - apiVersion: rbac.authorization.k8s.io/v1beta1 90 | kind: ClusterRole 91 | metadata: 92 | name: filebeat 93 | labels: 94 | k8s-app: filebeat 95 | rules: 96 | - apiGroups: [""] # "" indicates the core API group 97 | resources: 98 | - namespaces 99 | - pods 100 | verbs: 101 | - get 102 | - watch 103 | - list 104 | - apiVersion: v1 105 | kind: ServiceAccount 106 | metadata: 107 | name: filebeat 108 | namespace: default 109 | labels: 110 | k8s-app: filebeat -------------------------------------------------------------------------------- /elk/logstash.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: k8s-logstash 5 | namespace: default 6 | spec: 7 | replicas: 1 8 | template: 9 | metadata: 10 | labels: 11 | app: logstash 12 | role: k8s 13 | spec: 14 | containers: 15 | - image: docker.elastic.co/logstash/logstash:6.4.0 16 | name: logstash 17 | resources: 18 | requests: 19 | cpu: 100m 20 | memory: 200M 21 | volumeMounts: 22 | - name: container-logs 23 | mountPath: /log 24 | env: 25 | - name: LogFile 26 | value: '["/log/*"]' 27 | - name: ES_SERVER 28 | value: es-single:9200 29 | - name: XPACK_MONITORING_ELASTICSEARCH_URL 30 | value: http://es-single:9200 31 | - name: INDICES 32 | value: k8s-log 33 | - name: CODEC 34 | value: plain 35 | volumes: 36 | - name: container-logs 37 | hostPath: 38 | path: /var/log/containers 39 | type: Directory 40 | -------------------------------------------------------------------------------- /getMasterImages.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0 3 | etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9 4 | k8s-dns-dnsmasq-nanny-amd64:1.14.9 ) 5 | for imageName in ${images[@]} ; do 6 | docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 7 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName 8 | docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 9 | done 10 | # 个人新加的一句,V 1.11.0 必加 11 | docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1 12 | 13 | docker pull docker.io/mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.9 14 | docker tag docker.io/mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.9 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.9 15 | docker rmi docker.io/mirrorgooglecontainers/k8s-dns-sidecar-amd64:1.14.9 16 | docker pull docker.io/mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.9 17 | docker tag docker.io/mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.9 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.9 18 | docker rmi docker.io/mirrorgooglecontainers/k8s-dns-kube-dns-amd64:1.14.9 19 | docker pull docker.io/mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.9 20 | docker tag docker.io/mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.9 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.9 21 | docker rmi docker.io/mirrorgooglecontainers/k8s-dns-dnsmasq-nanny-amd64:1.14.9 22 | 23 | docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 24 | docker tag 
registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64 25 | docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 26 | -------------------------------------------------------------------------------- /getNodeImages.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 3 | docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 4 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1 5 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 k8s.gcr.io/kube-proxy-amd64:v1.11.0 6 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 k8s.gcr.io/pause:3.1 7 | docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/kube-proxy-amd64:v1.11.0 8 | docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.1 9 | -------------------------------------------------------------------------------- /golearn/calcproject/bin/calc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/golearn/calcproject/bin/calc -------------------------------------------------------------------------------- /golearn/calcproject/src/calc/calc.go: -------------------------------------------------------------------------------- 1 | //calc.go 2 | package main 3 | 4 | import "os" 5 | import "fmt" 6 | import "simplemath" 7 | import "strconv" 8 | 9 | var Usage = func() { 10 | fmt.Println("Usage: calc command [arguments] ... ") 11 | fmt.Println("\nThe commands are:\n\tadd\tAddition of two values.\n\tsqrt\tSquare root of a non-negative value.") 12 | } 13 | 14 | func main() { 15 | args := os.Args 16 | // fmt.Println("Args[0] = %s", args[0]) 17 | // fmt.Println("Args[1] = %s", args[1]) 18 | if args == nil || len(args) < 2 { 19 | Usage() 20 | return 21 | } 22 | 23 | switch args[1] { 24 | case "add": 25 | if len(args) != 4 { 26 | fmt.Println("USAGE: calc add ") 27 | return 28 | } 29 | v1, err1 := strconv.Atoi(args[2]) 30 | v2, err2 := strconv.Atoi(args[3]) 31 | if err1 != nil || err2 != nil { 32 | fmt.Println("USAGE: calc add ") 33 | return 34 | } 35 | ret := simplemath.Add(v1, v2) 36 | fmt.Println("Result: ", ret) 37 | case "sqrt": 38 | if len(args) != 3 { 39 | fmt.Println("USAGE: calc sqrt ") 40 | return 41 | } 42 | v, err := strconv.Atoi(args[2]) 43 | if err != nil { 44 | fmt.Println("USAGE: calc sqrt ") 45 | return 46 | } 47 | ret := simplemath.Sqrt(v) 48 | fmt.Println("Result: ", ret) 49 | default: 50 | Usage() 51 | } 52 | } 53 | -------------------------------------------------------------------------------- /golearn/calcproject/src/simplemath/add.go: -------------------------------------------------------------------------------- 1 | package simplemath 2 | 3 | func Add(a int, b int) int { 4 | return a + b 5 | } -------------------------------------------------------------------------------- /golearn/calcproject/src/simplemath/add_test.go: -------------------------------------------------------------------------------- 1 | package simplemath 2 | 3 | import "testing" 4 | 5 | func TestAdd1(t *testing.T) { 6 | r := Add(1,2) 7 | if r != 3 { 8 | t.Errorf("Add(1,2) failed. 
Got %d, expected 3.", r) 9 | } 10 | } -------------------------------------------------------------------------------- /golearn/calcproject/src/simplemath/sqrt.go: -------------------------------------------------------------------------------- 1 | package simplemath 2 | 3 | import "math" 4 | 5 | func Sqrt(i int) int { 6 | v := math.Sqrt(float64(i)) 7 | return int(v) 8 | } -------------------------------------------------------------------------------- /golearn/calcproject/src/simplemath/sqrt_test.go: -------------------------------------------------------------------------------- 1 | package simplemath 2 | 3 | import "testing" 4 | 5 | func TestSqrt1(t *testing.T) { 6 | v := Sqrt(16) 7 | if v != 4 { 8 | t.Errorf("Sqrt(16) failed. Got %v, expected 4.", v) 9 | } 10 | } -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 7.50.29 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 7.50.29 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 7.51.11 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 7.51.11 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 7.51.21 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 7.51.21 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 8.04.15 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 8.04.15 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 8.15.26 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 8.15.26 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 8.18.54 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 8.18.54 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 8.37.03 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 8.37.03 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.11.00 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.11.00 PM.png 
-------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.12.06 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.12.06 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.15.25 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.15.25 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.19.29 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.19.29 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.30.54 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.30.54 PM.png -------------------------------------------------------------------------------- /imgs/Screen Shot 2018-10-26 at 9.32.37 PM.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/imgs/Screen Shot 2018-10-26 at 9.32.37 PM.png -------------------------------------------------------------------------------- /install/offline-binary.md: -------------------------------------------------------------------------------- 1 | # 离线环境二进制方式安装Kubernetes集群 2 | 3 | > 本文环境 Redhat Linux 7.3,操作系统采用的最小安装方式。 4 | > Kubernetes的版本为 V1.10。 5 | > Docker版本为18.03.1-ce。 6 | > etcd 版本为 V3.3.8。 7 | 8 | ## 1. 准备规划 9 | 10 | ### 1.1 Node 规划 11 | 主机名|地址|角色 12 | --|--|-- 13 | devops-101|192.168.0.101|k8s master 14 | devops-102|192.168.0.102|k8s node 15 | 16 | ### 1.2 Network 网络 17 | 18 | ### 1.3 安装文件 19 | Kubernetes安装需要以下二进制文件: 20 | - etcd 21 | - docker 22 | - Kubernetes 23 | - kubelet 24 | - kube-proxy 25 | - kube-apiserver 26 | - kube-controller-manager 27 | - kube-scheduler 28 | 29 | 我们可以下载编译好的二进制文件,也可以下载源码自己编译,源码编译可以参考[这里](https://git.k8s.io/community/contributors/devel/)本文只讨论二进制的安装方式。在Kubernetes的Github [Latest](https://github.com/kubernetes/kubernetes/releases/latest) 页面,可以看到最新打包的版本。也可以到 Tag 页面中找到自己需要的版本,我下载的是 [V1.11](https://github.com/kubernetes/kubernetes/releases/tag/v1.11.0)。 30 | 31 | > 注意这个页面有可能不是最新的版本,我查看的时候显示的版本是 V1.9.9,但是最新的版本是 V1.11,这时就需要切换到 Tag 页面查找。 32 | 33 | 服务器上需要的二进制文件并不在下载的 tar 包中,需要解压tar包,然后执行```cluster/get-kube-binaries.sh```。下载需要访问 storage.googleapis.com,因为大家都知道的原因,可能无法正常访问,需要大家科学的获取安装文件。下载完成后,解压```kubernetes-server-linux-amd64.tar.gz```。 34 | 35 | 可以看到文件列表 36 | ```sh 37 | [root@devops-101 bin]# pwd 38 | /root/kubernetes/server/bin 39 | [root@devops-101 bin]# ls -lh 40 | total 1.8G 41 | -rwxr-xr-x. 1 root root 57M Jun 28 04:55 apiextensions-apiserver 42 | -rwxr-xr-x. 1 root root 132M Jun 28 04:55 cloud-controller-manager 43 | -rw-r--r--. 1 root root 8 Jun 28 04:55 cloud-controller-manager.docker_tag 44 | -rw-r--r--. 1 root root 134M Jun 28 04:55 cloud-controller-manager.tar 45 | -rwxr-xr-x. 
1 root root 218M Jun 28 04:55 hyperkube 46 | -rwxr-xr-x. 1 root root 56M Jun 28 04:55 kube-aggregator 47 | -rw-r--r--. 1 root root 8 Jun 28 04:55 kube-aggregator.docker_tag 48 | -rw-r--r--. 1 root root 57M Jun 28 04:55 kube-aggregator.tar 49 | -rwxr-xr-x. 1 root root 177M Jun 28 04:55 kube-apiserver 50 | -rw-r--r--. 1 root root 8 Jun 28 04:55 kube-apiserver.docker_tag 51 | -rw-r--r--. 1 root root 179M Jun 28 04:55 kube-apiserver.tar 52 | -rwxr-xr-x. 1 root root 147M Jun 28 04:55 kube-controller-manager 53 | -rw-r--r--. 1 root root 8 Jun 28 04:55 kube-controller-manager.docker_tag 54 | -rw-r--r--. 1 root root 149M Jun 28 04:55 kube-controller-manager.tar 55 | -rwxr-xr-x. 1 root root 50M Jun 28 04:55 kube-proxy 56 | -rw-r--r--. 1 root root 8 Jun 28 04:55 kube-proxy.docker_tag 57 | -rw-r--r--. 1 root root 96M Jun 28 04:55 kube-proxy.tar 58 | -rwxr-xr-x. 1 root root 54M Jun 28 04:55 kube-scheduler 59 | -rw-r--r--. 1 root root 8 Jun 28 04:55 kube-scheduler.docker_tag 60 | -rw-r--r--. 1 root root 55M Jun 28 04:55 kube-scheduler.tar 61 | -rwxr-xr-x. 1 root root 55M Jun 28 04:55 kubeadm 62 | -rwxr-xr-x. 1 root root 53M Jun 28 04:56 kubectl 63 | -rwxr-xr-x. 1 root root 156M Jun 28 04:55 kubelet 64 | -rwxr-xr-x. 1 root root 2.3M Jun 28 04:55 mounter 65 | ``` 66 | ### 1.4 系统配置 67 | - 配置Hosts 68 | - 关闭防火墙 69 | ```sh 70 | $ systemctl stop firewalld 71 | $ systemctl disable firewalld 72 | ``` 73 | - 关闭selinux 74 | ```sh 75 | $ setenforce 0 #临时关闭selinux 76 | $ vim /etc/selinux/config 77 | ``` 78 | 将SELINUX=enforcing改为SELINUX=disabled,wq保存退出。 79 | - 关闭swap 80 | ```sh 81 | $ swapoff -a 82 | $ vim /etc/fstab #修改自动挂载配置,注释掉即可 83 | #/dev/mapper/centos-swap swap swap defaults 0 0 84 | ``` 85 | - 配置路由 86 | ```sh 87 | $ cat <<EOF > /etc/sysctl.d/k8s.conf 88 | net.bridge.bridge-nf-call-ip6tables = 1 89 | net.bridge.bridge-nf-call-iptables = 1 90 | EOF 91 | $ sysctl --system 92 | ``` 93 | 94 | ## 2. 
安装 Node 95 | 我们需要在Node机器上安装以下应用: 96 | - Docker 97 | - kubelet 98 | - kube-proxy 99 | 100 | ### 2.1 Docker 101 | Docker的版本需要与kubelete版本相对应,最好都使用最新的版本。Redhat 中需要使用 Static Binary 方式安装,具体可以参考我之前的[一篇文章](https://www.cnblogs.com/cocowool/p/install_docker_ce_in_redhat_73.html)。 102 | 103 | ### 2.2 拷贝 kubelet、kube-proxy 104 | 在之前解压的 kubernetes 文件夹中拷贝二进制文件 105 | ```sh 106 | $ cp /root/kubernetes/server/bin/kubelet /usr/bin/ 107 | $ cp /root/kubernetes/server/bin/kube-proxy /usr/bin/ 108 | ``` 109 | 110 | ### 2.3 安装 kube-proxy 服务 111 | ```sh 112 | $ vim /usr/lib/systemd/system/kube-proxy.service 113 | [Unit] 114 | Description=Kubernetes Kube-Proxy Server 115 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 116 | After=network.target 117 | 118 | [Service] 119 | EnvironmentFile=/etc/kubernetes/config 120 | EnvironmentFile=/etc/kubernetes/proxy 121 | ExecStart=/usr/bin/kube-proxy \ 122 | $KUBE_LOGTOSTDERR \ 123 | $KUBE_LOG_LEVEL \ 124 | $KUBE_MASTER \ 125 | $KUBE_PROXY_ARGS 126 | Restart=on-failure 127 | LimitNOFILE=65536 128 | 129 | [Install] 130 | WantedBy=multi-user.target 131 | ``` 132 | 创建配置目录,并添加配置文件 133 | ```sh 134 | $ mkdir -p /etc/kubernetes 135 | $ vim /etc/kubernetes/proxy 136 | KUBE_PROXY_ARGS="" 137 | $ vim /etc/kubernetes/config 138 | KUBE_LOGTOSTDERR="--logtostderr=true" 139 | KUBE_LOG_LEVEL="--v=0" 140 | KUBE_ALLOW_PRIV="--allow_privileged=false" 141 | KUBE_MASTER="--master=http://192.168.0.101:8080" 142 | ``` 143 | 启动服务 144 | ```sh 145 | [root@devops-102 ~]# systemctl daemon-reload 146 | [root@devops-102 ~]# systemctl start kube-proxy.service 147 | [root@devops-102 ~]# netstat -lntp | grep kube-proxy 148 | tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN 10522/kube-proxy 149 | tcp6 0 0 :::10256 :::* LISTEN 10522/kube-proxy 150 | ``` 151 | 152 | ### 2.4 安装 kubelete 服务 153 | ```sh 154 | $ vim /usr/lib/systemd/system/kubelet.service 155 | [Unit] 156 | Description=Kubernetes Kubelet Server 157 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes 158 | After=docker.service 159 | Requires=docker.service 160 | 161 | [Service] 162 | WorkingDirectory=/var/lib/kubelet 163 | EnvironmentFile=/etc/kubernetes/kubelet 164 | ExecStart=/usr/bin/kubelet $KUBELET_ARGS 165 | Restart=on-failure 166 | KillMode=process 167 | 168 | [Install] 169 | WantedBy=multi-user.target 170 | $ mkdir -p /var/lib/kubelet 171 | $ vim /etc/kubernetes/kubelet 172 | KUBELET_ADDRESS="--address=0.0.0.0" 173 | KUBELET_HOSTNAME="--hostname-override=192.168.0.102" 174 | KUBELET_API_SERVER="--api-servers=http://192.168.0.101:8080" 175 | KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=reg.docker.tb/harbor/pod-infrastructure:latest" 176 | KUBELET_ARGS="--enable-server=true --enable-debugging-handlers=true --fail-swap-on=false --kubeconfig=/var/lib/kubelet/kubeconfig" 177 | ``` 178 | 179 | 创建配置文件 ```vim /var/lib/kubelet/kubeconfig``` 180 | ```yaml 181 | apiVersion: v1 182 | kind: Config 183 | users: 184 | - name: kubelet 185 | clusters: 186 | - name: kubernetes 187 | cluster: 188 | server: http://192.168.0.101:8080 189 | contexts: 190 | - context: 191 | cluster: kubernetes 192 | user: kubelet 193 | name: service-account-context 194 | current-context: service-account-context 195 | ``` 196 | 197 | 启动kubelet并进行验证。 198 | ```sh 199 | $ swapoff -a 200 | $ systemctl daemon-reload 201 | $ systemctl start kubelet.service 202 | $ netstat -tnlp | grep kubelet 203 | tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 10630/kubelet 204 | tcp 0 0 127.0.0.1:37865 0.0.0.0:* LISTEN 10630/kubelet 205 | tcp6 0 0 :::10250 :::* LISTEN 
10630/kubelet 206 | tcp6 0 0 :::10255 :::* LISTEN 10630/kubelet 207 | ``` 208 | 209 | ## 3. 安装 Master 210 | 211 | ### 3.1 安装etcd 212 | 本文采用二进制安装方法,首先[下载](https://github.com/coreos/etcd/releases)安装包。 213 | 之后进行解压,文件拷贝,编辑 etcd.service、etcd.conf文件夹 214 | ```sh 215 | $ tar zxf etcd-v3.2.11-linux-amd64.tar.gz 216 | $ cd etcd-v3.2.11-linux-amd64 217 | $ cp etcd etcdctl /usr/bin/ 218 | $ vim /usr/lib/systemd/system/etcd.service 219 | [Unit] 220 | Description=etcd.service 221 | 222 | [Service] 223 | Type=notify 224 | TimeoutStartSec=0 225 | Restart=always 226 | WorkingDirectory=/var/lib/etcd 227 | EnvironmentFile=-/etc/etcd/etcd.conf 228 | ExecStart=/usr/bin/etcd 229 | 230 | [Install] 231 | WantedBy=multi-user.target 232 | $ mkdir -p /var/lib/etcd && mkdir -p /etc/etcd/ 233 | $ vim /etc/etcd/etcd.conf 234 | ETCD_NAME=ETCD Server 235 | ETCD_DATA_DIR="/var/lib/etcd/" 236 | ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379 237 | ETCD_ADVERTISE_CLIENT_URLS="http://192.168.0.101:2379" 238 | # 启动etcd 239 | $ systemctl daemon-reload 240 | $ systemctl start etcd.service 241 | ``` 242 | 查看etcd状态是否正常 243 | ```sh 244 | $ etcdctl cluster-health 245 | member 8e9e05c52164694d is healthy: got healthy result from http://192.168.0.101:2379 246 | cluster is healthy 247 | ``` 248 | ### 3.2 安装kube-apiserver 249 | 添加启动文件 250 | ```sh 251 | [Unit] 252 | Description=Kubernetes API Server 253 | After=etcd.service 254 | Wants=etcd.service 255 | 256 | [Service] 257 | EnvironmentFile=/etc/kubernetes/apiserver 258 | ExecStart=/usr/bin/kube-apiserver \ 259 | $KUBE_ETCD_SERVERS \ 260 | $KUBE_API_ADDRESS \ 261 | $KUBE_API_PORT \ 262 | $KUBE_SERVICE_ADDRESSES \ 263 | $KUBE_ADMISSION_CONTROL \ 264 | $KUBE_API_LOG \ 265 | $KUBE_API_ARGS 266 | Restart=on-failure 267 | Type=notify 268 | LimitNOFILE=65536 269 | 270 | [Install] 271 | WantedBy=multi-user.target 272 | ``` 273 | 274 | 创建配置文件 275 | ```sh 276 | $ vim /etc/kubernetes/apiserver 277 | KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0" 278 | KUBE_API_PORT="--port=8080" 279 | KUBELET_PORT="--kubelet-port=10250" 280 | KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.0.101:2379" 281 | KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.0.0/24" 282 | KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota" 283 | KUBE_API_ARGS="" 284 | ``` 285 | 启动服务 286 | ```sh 287 | $ systemctl daemon-reload 288 | $ systemctl start kube-apiserver.service 289 | ``` 290 | 查看启动是否成功 291 | ```sh 292 | $ netstat -tnlp | grep kube 293 | tcp6 0 0 :::6443 :::* LISTEN 10144/kube-apiserve 294 | tcp6 0 0 :::8080 :::* LISTEN 10144/kube-apiserve 295 | ``` 296 | 297 | ### 3.3 安装kube-controller-manager 298 | 创建启动文件 299 | ```sh 300 | $ vim /usr/lib/systemd/system/kube-controller-manager.service 301 | [Unit] 302 | Description=Kubernetes Scheduler 303 | After=kube-apiserver.service 304 | Requires=kube-apiserver.service 305 | 306 | [Service] 307 | EnvironmentFile=-/etc/kubernetes/controller-manager 308 | ExecStart=/usr/bin/kube-controller-manager \ 309 | $KUBE_MASTER \ 310 | $KUBE_CONTROLLER_MANAGER_ARGS 311 | Restart=on-failure 312 | LimitNOFILE=65536 313 | 314 | [Install] 315 | WantedBy=multi-user.target 316 | ``` 317 | 创建配置文件 318 | ```sh 319 | $ vim /etc/kubernetes/controller-manager 320 | KUBE_MASTER="--master=http://192.168.0.101:8080" 321 | KUBE_CONTROLLER_MANAGER_ARGS=" " 322 | ``` 323 | 启动服务 324 | ```sh 325 | $ systemctl daemon-reload 326 | $ systemctl start kube-controller-manager.service 327 | ``` 328 | 验证服务状态 329 | ```sh 330 | $ netstat 
-lntp | grep kube-controll 331 | tcp6 0 0 :::10252 :::* LISTEN 10163/kube-controll 332 | ``` 333 | ### 3.4 安装kube-scheduler 334 | 创建启动文件 335 | ```sh 336 | $ vim /usr/lib/systemd/system/kube-scheduler.service 337 | [Unit] 338 | Description=Kubernetes Scheduler 339 | After=kube-apiserver.service 340 | Requires=kube-apiserver.service 341 | 342 | [Service] 343 | User=root 344 | EnvironmentFile=/etc/kubernetes/scheduler 345 | ExecStart=/usr/bin/kube-scheduler \ 346 | $KUBE_MASTER \ 347 | $KUBE_SCHEDULER_ARGS 348 | Restart=on-failure 349 | LimitNOFILE=65536 350 | 351 | [Install] 352 | WantedBy=multi-user.target 353 | ``` 354 | 修改配置 355 | ```sh 356 | $ vim /etc/kubernetes/scheduler 357 | KUBE_MASTER="--master=http://192.168.0.101:8080" 358 | KUBE_SCHEDULER_ARGS="--logtostderr=true --log-dir=/home/log/kubernetes --v=2" 359 | ``` 360 | 启动服务 361 | ```sh 362 | $ systemctl daemon-reload 363 | $ systemctl start kube-scheduler.service 364 | ``` 365 | 验证服务状态 366 | ```sh 367 | $ netstat -lntp | grep kube-schedule 368 | tcp6 0 0 :::10251 :::* LISTEN 10179/kube-schedule 369 | ``` 370 | ### 3.5 配置Profile 371 | 372 | ```sh 373 | $ sed -i '$a export PATH=$PATH:/root/kubernetes/server/bin/' /etc/profile 374 | $ source /etc/profile 375 | ``` 376 | ### 3.6 安装 kubectl 并查看状态 377 | 378 | ```sh 379 | $ cp /root/kubernetes/server/bin/kubectl /usr/bin/ 380 | $ kubectl get cs 381 | NAME STATUS MESSAGE ERROR 382 | etcd-0 Healthy {"health":"true"} 383 | controller-manager Healthy ok 384 | scheduler Healthy ok 385 | ``` 386 | 387 | 到这里Master节点就配置完毕。 388 | 389 | ## 4. 配置flannel网络 390 | Flannel可以使整个集群的docker容器拥有唯一的内网IP,并且多个node之间的docker0可以互相访问。[下载地址]() 391 | 392 | ## 5. 集群验证 393 | 在101上执行命令,检查nodes,如果能看到,表明集群现在已经OK了。 394 | ```sh 395 | $ kubectl get nodes 396 | NAME STATUS ROLES AGE VERSION 397 | devops-102 Ready 12s v1.11.0 398 | ``` 399 | 400 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 401 | 402 | ## 参考资料 403 | 1. [Creating a Custom Cluster from Scratch](https://kubernetes.io/docs/setup/scratch/) 404 | 2. [etcd](https://coreos.com/etcd/docs/latest/dev-guide/local_cluster.html) 405 | 3. [Creating a single master cluster with kubeadm](https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/) 406 | 4. [etcd download](https://github.com/coreos/etcd/releases) 407 | 5. [离线安装k8s](http://blog.51cto.com/13120271/2115310) 408 | 6. [centos7.3 kubernetes/k8s 1.10 离线安装](https://www.jianshu.com/p/9c7e1c957752) 409 | 7. [Kubernetes the hardest way](https://github.com/kelseyhightower/kubernetes-the-hard-way) 410 | 8. [kubernetes 安装学习](https://www.cnblogs.com/fengjian2016/p/6392900.html) 411 | 9. [kubectl get nodes returns "No resources found."](https://linuxacademy.com/community/posts/show/topic/19040-kubectl-get-nodes-returns-no-resources-found) 412 | 10. 
[nodes with multiple network interfaces can fail to talk to services](https://github.com/kubernetes/kubeadm/issues/102) 413 | -------------------------------------------------------------------------------- /learnrecord/logging.MD: -------------------------------------------------------------------------------- 1 | # Kubernetes日志处理架构 2 | 3 | -------------------------------------------------------------------------------- /learnrecord/namespace.MD: -------------------------------------------------------------------------------- 1 | # Kubernetes 的命名空间 2 | 3 | > 本文环境为Kubernetes V1.11,操作系统版本为 CentOs 7.3,Kubernetes集群安装可以参考 [kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | ## 1. 什么是Namespaces 6 | Kubernetes中提供了命名空间,但是如果你的团队规模比较小并且集群规模也不大,完全可以不用Namespaces而使用```labels```来区分不同的资源,随着项目增多、集群规模扩大、人员的增加,你才需要使用Namespaces,通过namespace你可以创建多个虚拟的集群。 7 | 8 | Namespaces提供了一种在不同用户间分隔集群资源的方法,未来Kubernetes可能会提供基于命名空间的权限控制。 9 | 10 | ## 2. Namespaces 的常用操作 11 | ### 2.1 查看命名空间 12 | ```sh 13 | [root@devops-101 ~]# kubectl get namespaces 14 | NAME STATUS AGE 15 | default Active 7d 16 | kube-public Active 7d 17 | kube-system Active 7d 18 | ``` 19 | 20 | Kubernetes默认有三个命名空间 21 | - default:默认的命名空间 22 | - kube-system:由Kubernetes系统对象组成的命名空间 23 | - kube-public:该空间由系统自动创建并且对所有用户可读性,做为集群公用资源的保留命名空间 24 | 25 | ### 2.2 创建命名空间 26 | ```sh 27 | [root@devops-101 ~]# kubectl create namespace test-cluster 28 | namespace/test-cluster created 29 | [root@devops-101 ~]# kubectl get namespaces 30 | NAME STATUS AGE 31 | default Active 7d 32 | kube-public Active 7d 33 | kube-system Active 7d 34 | test-cluster Active 3s 35 | ``` 36 | 37 | ### 2.2 查询命名空间中的资源 38 | ```sh 39 | [root@devops-101 ~]# kubectl get all --namespace=test-cluster 40 | No resources found. 41 | [root@devops-101 ~]# kubectl get all -n test-clutser 42 | No resources found. 43 | [root@devops-101 ~]# kubectl get nodes 44 | NAME STATUS ROLES AGE VERSION 45 | devops-101 Ready master 7d v1.11.2 46 | devops-102 Ready 7d v1.11.2 47 | devops-103 NotReady 4d v1.11.2 48 | ``` 49 | 50 | ### 2.3 修改默认的namespace配置 51 | ```sh 52 | # kubectl config view //先查看是否设置了current-context 53 | # kubectl config set-context default --namespace=bs-test //设置default配置的namespace参数 54 | # kubectl config set current-context default //设置当前环境变量为 default 55 | ``` 56 | 通过这段代码设置默认的命名空间后,就不用每次在输入命令的时候带上```--namespace```参数了。 57 | 58 | ## 3. 注意 59 | 不是所有的对象都在命名空间中,例如 nodes、persistentVolumes 就没有命名空间,所有用户都是可见的。 60 | 61 | 可以通过下面的命令查看命名空间中的资源。 62 | ```sh 63 | [root@devops-101 ~]# kubectl api-resources --namespaced=true 64 | # 查看不在命名空间中的资源 65 | [root@devops-101 ~]# kubectl api-resources --namespaced=false 66 | ``` 67 | 68 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 69 | 70 | ## 参考资料: 71 | 1. [Kubernetes Namespaces](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) 72 | 2. [Kubernetes中文手册](http://docs.kubernetes.org.cn/749.html) 73 | -------------------------------------------------------------------------------- /metadata/README.md: -------------------------------------------------------------------------------- 1 | # Metadata 获取的三种方式 2 | 3 | > 本文的试验环境为CentOS 7.3,Kubernetes集群为1.11.2,安装步骤参见[kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | ## 0. 
Metadata 6 | 每个Pod都有一些信息,包括但不限于以下的内容: 7 | - Pod 名称 8 | - Pod IP 9 | - Pod 所属的命名空间 10 | - Pod 所在的 Node 11 | - Pod 对应的 service account 12 | - 每个容器的CPU、内存请求 13 | - 每个容器的CPU、内存上限 14 | - Pod 的标签 15 | - Pod 的 annotations 16 | 17 | 这些信息都可以通过kubectl命令获取,但是有的情况下,我们需要从应用内获取,例如获取当前Pod的地址、主机名等一些信息,这就要求我们必须知道如何在应用内获取Pod的metadata,本文介绍三种应用内获取Pod的metadata的方式,供大家参考。 18 | 19 | ## 1. 通过环境变量暴露Metadata 20 | ```yaml 21 | apiVersion: v1 22 | kind: Pod 23 | metadata: 24 | name: downward 25 | spec: 26 | containers: 27 | - name: main 28 | image: docker.io/busybox 29 | command: ["sleep", "99999"] 30 | resources: 31 | requests: 32 | cpu: 15m 33 | memory: 100Ki 34 | limits: 35 | cpu: 100m 36 | memory: 4Mi 37 | env: 38 | - name: POD_NAME 39 | valueFrom: 40 | fieldRef: 41 | fieldPath: metadata.name 42 | - name: POD_NAMESPACE 43 | valueFrom: 44 | fieldRef: 45 | fieldPath: metadata.namespace 46 | - name: POD_IP 47 | valueFrom: 48 | fieldRef: 49 | fieldPath: status.podIP 50 | - name: NODE_NAME 51 | valueFrom: 52 | fieldRef: 53 | fieldPath: spec.nodeName 54 | - name: SERVICE_ACCOUNT 55 | valueFrom: 56 | fieldRef: 57 | fieldPath: spec.serviceAccountName 58 | - name: CONTAINER_CPU_REQUEST_MILLICORES 59 | valueFrom: 60 | resourceFieldRef: 61 | resource: requests.cpu 62 | divisor: 1m 63 | - name: CONTAINER_MEMORY_LIMIT_KIBIBYTES 64 | valueFrom: 65 | resourceFieldRef: 66 | resource: limits.memory 67 | divisor: 1Ki 68 | ``` 69 | 70 | 在设置资源请求情况的变量时,会设置一个除数,所以环境变量最后显示计算后的结果。CPU的除数可以是1或者1m,内存的除数可以是1、1k、1Ki、1M、1Mi。 71 | 可以看到实际执行的情况 72 | [图片上传失败...(image-13b224-1541487636046)] 73 | 74 | ## 2. 以文件的形式传递参数 75 | 通过定义```downwardAPI```卷,可以将环境变量以配置文件的方式暴露给容器的应用。 76 | ```yaml 77 | apiVersion: v1 78 | kind: Pod 79 | metadata: 80 | name: downward 81 | labels: 82 | foo: bar 83 | annotations: 84 | key1: value1 85 | key2: | 86 | multi 87 | line 88 | value 89 | spec: 90 | containers: 91 | - name: main 92 | image: busybox 93 | command: ["sleep", "9999999"] 94 | resources: 95 | requests: 96 | cpu: 15m 97 | memory: 100Ki 98 | limits: 99 | cpu: 100m 100 | memory: 4Mi 101 | volumeMounts: 102 | - name: downward 103 | mountPath: /etc/downward 104 | volumes: 105 | - name: downward 106 | downwardAPI: 107 | items: 108 | - path: "podName" 109 | fieldRef: 110 | fieldPath: metadata.name 111 | - path: "podNamespace" 112 | fieldRef: 113 | fieldPath: metadata.namespace 114 | - path: "labels" 115 | fieldRef: 116 | fieldPath: metadata.labels 117 | - path: "annotations" 118 | fieldRef: 119 | fieldPath: metadata.annotations 120 | - path: "containerCpuRequestMilliCores" 121 | resourceFieldRef: 122 | containerName: main 123 | resource: requests.cpu 124 | divisor: 1m 125 | - path: "containerMemoryLimitBytes" 126 | resourceFieldRef: 127 | containerName: main 128 | resource: limits.memory 129 | divisor: 1 130 | ``` 131 | 132 | 创建Pod后可以查看挂载的文件。 133 | ```sh 134 | $ kubectl exec downward ls -lL /etc/downward 135 | ``` 136 | 137 | 利用环境变量的方式无法将labels和annotations导入为环境变量,使用挂载文件的方式就可以,我们因此可以查看Pod具有的labels和annotations。当labels和annotations在Pod运行期间被修改后,修改也可以反映到文件上。这也就是为什么不能用作环境变量的原因。 138 | 139 | ```sh 140 | $ kubectl exec downward cat /etc/downward/labels 141 | $ kubectl exec downward cat /etc/downward/annotations 142 | ``` 143 | ![image](http://upload-images.jianshu.io/upload_images/3736984-78e2c6f3e55fbb53.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 144 | 145 | 在获取容器的资源请求数据时,我们必须指定容器的名称。不管一个Pod中有一个还是多个容器,我们都需要明确指定容器的名称。利用这种方式,如果一个Pod含有多个容器,我们可以将其他容器的资源使用情况传递到另外一个容器中。 146 | 147 | ## 3. 
容器外通过API server获取metadata 148 | 上面介绍的两种方法可以获取Pod的相关信息,但是这些信息并不是完整的,如果我们需要更多的信息,就需要用到API server。 149 | 150 | ```sh 151 | $ kubectl cluster-info #查看API Server的位置 152 | $ curl http://ip:port/ #查看API列表,如果是https就不行了 153 | [root@devops-101 ~]# kubectl cluster-info 154 | Kubernetes master is running at https://192.168.0.101:6443 155 | KubeDNS is running at https://192.168.0.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy 156 | ``` 157 | ![image](http://upload-images.jianshu.io/upload_images/3736984-de09a1c0eaf5f2bb.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 158 | 159 | 对于https的情况,可以设置代理,通过代理来访问,具体如下: 160 | ```sh 161 | [root@devops-101 ~]# kubectl proxy 162 | Starting to serve on 127.0.0.1:8001 163 | # 换一个终端窗口 164 | [root@devops-101 ~]# curl http://localhost:8001 165 | { 166 | "paths": [ 167 | "/api", 168 | "/api/v1", 169 | "/apis", 170 | "/apis/", 171 | "/apis/admissionregistration.k8s.io", 172 | "/apis/admissionregistration.k8s.io/v1beta1", 173 | "/apis/apiextensions.k8s.io", 174 | "/apis/apiextensions.k8s.io/v1beta1", 175 | ... 176 | ``` 177 | 能够看到一个列表,通过API的路径,可以访问我们想要找到的任何资源。例如查找一个deployment。 178 | ```sh 179 | [root@devops-101 ~]# curl http://localhost:8001/apis/apps/v1/deployments 180 | { 181 | "kind": "DeploymentList", 182 | "apiVersion": "apps/v1", 183 | "metadata": { 184 | "selfLink": "/apis/apps/v1/deployments", 185 | "resourceVersion": "326381" 186 | }, 187 | ... 188 | ``` 189 | 190 | ## 4. 容器内访问 API Server 191 | 192 | 容器内访问API server需要认证,并且需要通过环境变量获取API Server的地址和端口。 193 | 地址的获取方式如下: 194 | ```sh 195 | root@curl:/# env | grep KUBERNETES_SERVICE 196 | KUBERNETES_SERVICE_PORT=443 197 | KUBERNETES_SERVICE_HOST=10.0.0.1 198 | KUBERNETES_SERVICE_PORT_HTTPS=443 199 | ``` 200 | 201 | 认证主要通过ca.crt及ServiceAccount的token,ca.crt文件默认挂载在```/var/run/secrets/kubernetes.io/serviceaccount/```。 202 | 具体方法: 203 | ```sh 204 | [root@devops-101 ~]# kubectl exec -it img-curl /bin/sh 205 | / # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) 206 | / # curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN" https://kubernetes 207 | { 208 | "paths": [ 209 | "/api", 210 | "/api/v1", 211 | "/apis", 212 | "/apis/", 213 | "/apis/admissionregistration.k8s.io", 214 | "/apis/admissionregistration.k8s.io/v1beta1", 215 | "/apis/apiextensions.k8s.io", 216 | ``` 217 | 如果遇到了下图中的错误,需要创建RBAC的角色绑定并且重新执行一下上面的命令。 218 | ![image](http://upload-images.jianshu.io/upload_images/3736984-ce83b35c8fee1e88.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 219 | 220 | 接下来就可以在Pod的容器中查看metadata的信息,如下查看当前命名空间所有运行的Pods 221 | ![image](http://upload-images.jianshu.io/upload_images/3736984-7c4274d6c18ce113.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 222 | 223 | 有了访问API server的能力,就为我们定义容器内应用的行为提供了无限的想象力,我们可以通过curl来访问API server,同时也有很多语言的客户端库,让我们方便地在自己的应用中调用API server: 224 | - [Java client ](https://github.com/fabric8io/kubernetes-client) 225 | - [Node.js](https://github.com/tenxcloud/node-kubernetes-client) 226 | - [PHP](https://github.com/devstub/kubernetes-api-php-client) 227 | 228 | ![image](http://upload-images.jianshu.io/upload_images/3736984-dbae41e94ae6493d.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240) 229 | -------------------------------------------------------------------------------- /metadata/curl_img.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: img-curl 5 | spec: 6 | containers: 7 | - name: main 8 | image: docker.io/appropriate/curl 9 
| command: ["sleep", "999999"] 10 | -------------------------------------------------------------------------------- /metadata/meta_env.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: metadata-downward 5 | spec: 6 | containers: 7 | - name: main 8 | image: docker.io/busybox 9 | command: ["sleep", "99999"] 10 | resources: 11 | requests: 12 | cpu: 15m 13 | memory: 100Ki 14 | limits: 15 | cpu: 100m 16 | memory: 4Mi 17 | env: 18 | - name: POD_NAME 19 | valueFrom: 20 | fieldRef: 21 | fieldPath: metadata.name 22 | - name: POD_NAMESPACE 23 | valueFrom: 24 | fieldRef: 25 | fieldPath: metadata.namespace 26 | - name: POD_IP 27 | valueFrom: 28 | fieldRef: 29 | fieldPath: status.podIP 30 | - name: NODE_NAME 31 | valueFrom: 32 | fieldRef: 33 | fieldPath: spec.nodeName 34 | - name: SERVICE_ACCOUNT 35 | valueFrom: 36 | fieldRef: 37 | fieldPath: spec.serviceAccountName 38 | - name: CONTAINER_CPU_REQUEST_MILLICORES 39 | valueFrom: 40 | resourceFieldRef: 41 | resource: requests.cpu 42 | divisor: 1m 43 | - name: CONTAINER_MEMORY_LIMIT_KIBIBYTES 44 | valueFrom: 45 | resourceFieldRef: 46 | resource: limits.memory 47 | divisor: 1Ki 48 | -------------------------------------------------------------------------------- /metadata/meta_file.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: meta-file 5 | labels: 6 | foo: bar 7 | annotations: 8 | key1: value1 9 | key2: | 10 | multi 11 | line 12 | value 13 | spec: 14 | containers: 15 | - name: main 16 | image: busybox 17 | command: ["sleep", "9999999"] 18 | resources: 19 | requests: 20 | cpu: 15m 21 | memory: 100Ki 22 | limits: 23 | cpu: 100m 24 | memory: 4Mi 25 | volumeMounts: 26 | - name: downward 27 | mountPath: /etc/downward 28 | volumes: 29 | - name: downward 30 | downwardAPI: 31 | items: 32 | - path: "podName" 33 | fieldRef: 34 | fieldPath: metadata.name 35 | - path: "podNamespace" 36 | fieldRef: 37 | fieldPath: metadata.namespace 38 | - path: "labels" 39 | fieldRef: 40 | fieldPath: metadata.labels 41 | - path: "annotations" 42 | fieldRef: 43 | fieldPath: metadata.annotations 44 | - path: "containerCpuRequestMilliCores" 45 | resourceFieldRef: 46 | containerName: main 47 | resource: requests.cpu 48 | divisor: 1m 49 | - path: "containerMemoryLimitBytes" 50 | resourceFieldRef: 51 | containerName: main 52 | resource: limits.memory 53 | divisor: 1 54 | -------------------------------------------------------------------------------- /playground/anaconda-ks.cfg: -------------------------------------------------------------------------------- 1 | #version=DEVEL 2 | # System authorization information 3 | auth --enableshadow --passalgo=sha512 4 | # Use CDROM installation media 5 | cdrom 6 | # Use graphical install 7 | graphical 8 | # Run the Setup Agent on first boot 9 | firstboot --enable 10 | ignoredisk --only-use=sda 11 | # Keyboard layouts 12 | keyboard --vckeymap=us --xlayouts='us' 13 | # System language 14 | lang en_US.UTF-8 15 | 16 | # Network information 17 | network --bootproto=static --device=enp0s3 --ip=192.168.0.103 --netmask=255.255.255.0 --ipv6=auto --activate 18 | network --bootproto=dhcp --device=enp0s8 --ipv6=auto --activate 19 | network --hostname=devops-103 20 | 21 | # Root password 22 | rootpw --iscrypted $6$VMD2JhOpL5iWoS2T$iVGICuZ4sAeFs8uiSyVZGgvFgA/hHBntkwEbkqjyu9QY9OjGkAoUew0bvNNYNG81tmJQISu8IAAxqAvezwMPF. 
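# 补充说明(示意):上面 rootpw --iscrypted 后面是 SHA-512 crypt 哈希($6$ 开头),不是明文密码。
# 类似的哈希可以自行生成,例如(假设系统中带有 Python 3 的 crypt 模块,命令仅供参考):
#   python3 -c 'import crypt; print(crypt.crypt("你的密码", crypt.mksalt(crypt.METHOD_SHA512)))'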
23 | # System services 24 | services --disabled="chronyd" 25 | # System timezone 26 | timezone Asia/Shanghai --isUtc --nontp 27 | # System bootloader configuration 28 | bootloader --append=" crashkernel=auto" --location=mbr --boot-drive=sda 29 | autopart --type=lvm 30 | # Partition clearing information 31 | clearpart --none --initlabel 32 | 33 | %packages 34 | @^minimal 35 | @core 36 | @debugging 37 | @development 38 | @security-tools 39 | kexec-tools 40 | 41 | %end 42 | 43 | %addon com_redhat_kdump --enable --reserve-mb='auto' 44 | 45 | %end 46 | 47 | %anaconda 48 | pwpolicy root --minlen=6 --minquality=50 --notstrict --nochanges --notempty 49 | pwpolicy user --minlen=6 --minquality=50 --notstrict --nochanges --notempty 50 | pwpolicy luks --minlen=6 --minquality=50 --notstrict --nochanges --notempty 51 | %end 52 | -------------------------------------------------------------------------------- /playground/cm-vars.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: cm-vars 5 | data: 6 | apploglevel: info 7 | appdatadir: /var/data 8 | -------------------------------------------------------------------------------- /playground/dp-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx-deployment 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: docker.io/nginx 20 | ports: 21 | - containerPort: 80 22 | -------------------------------------------------------------------------------- /playground/ds-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: DaemonSet 3 | metadata: 4 | name: es-logstash 5 | labels: 6 | app: logstash 7 | spec: 8 | selector: 9 | matchLabels: 10 | name: es-logstash 11 | template: 12 | metadata: 13 | labels: 14 | name: es-logstash 15 | spec: 16 | tolerations: 17 | - key: node-role.kubernetes.io/master 18 | effect: NoSchedule 19 | containers: 20 | - name: es-logstash 21 | image: docker.elastic.co/elasticsearch/logstash 22 | -------------------------------------------------------------------------------- /playground/es-pv.yml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: PersistentVolume 6 | metadata: 7 | name: es-storage-pv-01 8 | spec: 9 | capacity: 10 | storage: 100Mi 11 | volumeMode: Filesystem 12 | accessModes: ["ReadWriteOnce"] 13 | persistentVolumeReclaimPolicy: Delete 14 | storageClassName: local-storage 15 | local: 16 | path: /home/es 17 | nodeAffinity: 18 | required: 19 | nodeSelectorTerms: 20 | - matchExpressions: 21 | - key: kubernetes.io/hostname 22 | operator: In 23 | values: 24 | - devops-102 25 | - devops-103 26 | - apiVersion: v1 27 | kind: PersistentVolume 28 | metadata: 29 | name: es-storage-pv-02 30 | spec: 31 | capacity: 32 | storage: 100Mi 33 | volumeMode: Filesystem 34 | accessModes: ["ReadWriteOnce"] 35 | persistentVolumeReclaimPolicy: Delete 36 | storageClassName: local-storage 37 | local: 38 | path: /home/es01 39 | nodeAffinity: 40 | required: 41 | nodeSelectorTerms: 42 | - matchExpressions: 43 | - key: kubernetes.io/hostname 44 | operator: In 45 | values: 46 | - devops-102 47 | - devops-103 48 | 
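# 补充说明(示意):上面两个 local PV 通过 nodeAffinity 固定在 devops-102 / devops-103 节点上,
# 只有 storageClassName 为 local-storage、访问模式兼容的 PVC(例如下面 es-pvc.yml 中请求 30Mi 的 PVC)才会与其中一个绑定。
# 使用 local PV 时通常还需要先创建一个 no-provisioner 的 StorageClass,大致形如下面这段(仅为示例草稿,并非本仓库原文):
# apiVersion: storage.k8s.io/v1
# kind: StorageClass
# metadata:
#   name: local-storage
# provisioner: kubernetes.io/no-provisioner
# volumeBindingMode: WaitForFirstConsumer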
-------------------------------------------------------------------------------- /playground/es-pvc.yml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | name: es-storage-pvc 5 | spec: 6 | storageClassName: local-storage 7 | accessModes: 8 | - ReadWriteOnce 9 | resources: 10 | requests: 11 | storage: 30Mi 12 | -------------------------------------------------------------------------------- /playground/es6-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: es-cluster 5 | namespace: default 6 | labels: 7 | app: es-cluster 8 | spec: 9 | selector: 10 | matchLabels: 11 | app: es 12 | tier: backend 13 | version: 6.3.2 14 | replicas: 3 15 | template: 16 | metadata: 17 | labels: 18 | app: es 19 | tier: backend 20 | version: 6.3.2 21 | spec: 22 | containers: 23 | - name: elasticsearch 24 | image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2 25 | volumeMounts: 26 | - name: es-storage 27 | mountPath: /data/es 28 | env: 29 | - name: ES_JAVA_OPTS 30 | value: "-Xms128m -Xmx128m" 31 | ports: 32 | - containerPort: 9200 33 | protocol: TCP 34 | - containerPort: 9300 35 | protocol: TCP 36 | volumes: 37 | - name: es-storage 38 | hostPath: 39 | path: /home/es 40 | -------------------------------------------------------------------------------- /playground/es6.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: ccb-es6 5 | namespace: default 6 | labels: 7 | app: ccb-es6 8 | spec: 9 | containers: 10 | - name: elasticsearch 11 | image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2 12 | volumeMounts: 13 | - name: es-storage 14 | mountPath: /data/es 15 | env: 16 | - name: ES_JAVA_OPTS 17 | value: "-Xms128m -Xmx128m" 18 | ports: 19 | - containerPort: 9200 20 | protocol: TCP 21 | - containerPort: 9300 22 | protocol: TCP 23 | volumes: 24 | - name: es-storage 25 | hostPath: 26 | path: /home/es 27 | -------------------------------------------------------------------------------- /playground/first-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: first-pod 5 | labels: 6 | app: bash 7 | tir: backend 8 | spec: 9 | containers: 10 | - name: bash-container 11 | image: docker.io/busybox 12 | command: ['sh', '-c', 'echo Hello Kubernetes! 
&& sleep 3600'] 13 | -------------------------------------------------------------------------------- /playground/getImages.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | images=(kube-proxy-amd64:v1.11.0 kube-scheduler-amd64:v1.11.0 kube-controller-manager-amd64:v1.11.0 kube-apiserver-amd64:v1.11.0 3 | etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9 4 | k8s-dns-dnsmasq-nanny-amd64:1.14.9 ) 5 | for imageName in ${images[@]} ; do 6 | docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 7 | docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName 8 | docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName 9 | done 10 | # 个人新加的一句,V 1.11.0 必加 11 | docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1 12 | -------------------------------------------------------------------------------- /playground/iptables.after.log: -------------------------------------------------------------------------------- 1 | Chain INPUT (policy ACCEPT) 2 | target prot opt source destination 3 | 4 | Chain FORWARD (policy DROP) 5 | target prot opt source destination 6 | 7 | Chain OUTPUT (policy ACCEPT) 8 | target prot opt source destination 9 | 10 | Chain DOCKER (0 references) 11 | target prot opt source destination 12 | 13 | Chain DOCKER-ISOLATION (0 references) 14 | target prot opt source destination 15 | 16 | Chain KUBE-EXTERNAL-SERVICES (0 references) 17 | target prot opt source destination 18 | 19 | Chain KUBE-FIREWALL (0 references) 20 | target prot opt source destination 21 | 22 | Chain KUBE-FORWARD (0 references) 23 | target prot opt source destination 24 | 25 | Chain KUBE-SERVICES (0 references) 26 | target prot opt source destination 27 | -------------------------------------------------------------------------------- /playground/iptables.before.log: -------------------------------------------------------------------------------- 1 | Chain INPUT (policy ACCEPT) 2 | target prot opt source destination 3 | KUBE-EXTERNAL-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes externally-visible service portals */ 4 | KUBE-FIREWALL all -- anywhere anywhere 5 | 6 | Chain FORWARD (policy DROP) 7 | target prot opt source destination 8 | KUBE-FORWARD all -- anywhere anywhere /* kubernetes forwarding rules */ 9 | DOCKER-ISOLATION all -- anywhere anywhere 10 | DOCKER all -- anywhere anywhere 11 | ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED 12 | ACCEPT all -- anywhere anywhere 13 | ACCEPT all -- anywhere anywhere 14 | ACCEPT all -- bogon/16 anywhere 15 | ACCEPT all -- anywhere bogon/16 16 | 17 | Chain OUTPUT (policy ACCEPT) 18 | target prot opt source destination 19 | KUBE-SERVICES all -- anywhere anywhere ctstate NEW /* kubernetes service portals */ 20 | KUBE-FIREWALL all -- anywhere anywhere 21 | 22 | Chain DOCKER (1 references) 23 | target prot opt source destination 24 | 25 | Chain DOCKER-ISOLATION (1 references) 26 | target prot opt source destination 27 | RETURN all -- anywhere anywhere 28 | 29 | Chain KUBE-EXTERNAL-SERVICES (1 references) 30 | target prot opt source destination 31 | 32 | Chain KUBE-FIREWALL (2 references) 33 | target prot opt source destination 34 | DROP all -- anywhere anywhere /* kubernetes firewall for dropping marked packets */ mark match 0x8000/0x8000 35 | 36 | Chain KUBE-FORWARD (1 references) 37 | target prot opt source destination 38 | ACCEPT all -- anywhere anywhere /* kubernetes forwarding 
rules */ mark match 0x4000/0x4000 39 | ACCEPT all -- bogon/16 anywhere /* kubernetes forwarding conntrack pod source rule */ ctstate RELATED,ESTABLISHED 40 | ACCEPT all -- anywhere bogon/16 /* kubernetes forwarding conntrack pod destination rule */ ctstate RELATED,ESTABLISHED 41 | 42 | Chain KUBE-SERVICES (1 references) 43 | target prot opt source destination 44 | -------------------------------------------------------------------------------- /playground/jb-perl.yml: -------------------------------------------------------------------------------- 1 | apiVersion: batch/v1 2 | kind: Job 3 | metadata: 4 | name: pi 5 | spec: 6 | template: 7 | spec: 8 | containers: 9 | - name: pi 10 | image: docker.io/busybox 11 | command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"] 12 | restartPolicy: Never 13 | backoffLimit: 4 14 | -------------------------------------------------------------------------------- /playground/kube-flannel.yml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ClusterRole 3 | apiVersion: rbac.authorization.k8s.io/v1beta1 4 | metadata: 5 | name: flannel 6 | rules: 7 | - apiGroups: 8 | - "" 9 | resources: 10 | - pods 11 | verbs: 12 | - get 13 | - apiGroups: 14 | - "" 15 | resources: 16 | - nodes 17 | verbs: 18 | - list 19 | - watch 20 | - apiGroups: 21 | - "" 22 | resources: 23 | - nodes/status 24 | verbs: 25 | - patch 26 | --- 27 | kind: ClusterRoleBinding 28 | apiVersion: rbac.authorization.k8s.io/v1beta1 29 | metadata: 30 | name: flannel 31 | roleRef: 32 | apiGroup: rbac.authorization.k8s.io 33 | kind: ClusterRole 34 | name: flannel 35 | subjects: 36 | - kind: ServiceAccount 37 | name: flannel 38 | namespace: kube-system 39 | --- 40 | apiVersion: v1 41 | kind: ServiceAccount 42 | metadata: 43 | name: flannel 44 | namespace: kube-system 45 | --- 46 | kind: ConfigMap 47 | apiVersion: v1 48 | metadata: 49 | name: kube-flannel-cfg 50 | namespace: kube-system 51 | labels: 52 | tier: node 53 | app: flannel 54 | data: 55 | cni-conf.json: | 56 | { 57 | "name": "cbr0", 58 | "plugins": [ 59 | { 60 | "type": "flannel", 61 | "delegate": { 62 | "hairpinMode": true, 63 | "isDefaultGateway": true 64 | } 65 | }, 66 | { 67 | "type": "portmap", 68 | "capabilities": { 69 | "portMappings": true 70 | } 71 | } 72 | ] 73 | } 74 | net-conf.json: | 75 | { 76 | "Network": "10.244.0.0/16", 77 | "Backend": { 78 | "Type": "vxlan" 79 | } 80 | } 81 | --- 82 | apiVersion: extensions/v1beta1 83 | kind: DaemonSet 84 | metadata: 85 | name: kube-flannel-ds 86 | namespace: kube-system 87 | labels: 88 | tier: node 89 | app: flannel 90 | spec: 91 | template: 92 | metadata: 93 | labels: 94 | tier: node 95 | app: flannel 96 | spec: 97 | hostNetwork: true 98 | nodeSelector: 99 | beta.kubernetes.io/arch: amd64 100 | tolerations: 101 | - key: node-role.kubernetes.io/master 102 | operator: Exists 103 | effect: NoSchedule 104 | serviceAccountName: flannel 105 | initContainers: 106 | - name: install-cni 107 | image: quay.io/coreos/flannel:v0.10.0-amd64 108 | command: 109 | - cp 110 | args: 111 | - -f 112 | - /etc/kube-flannel/cni-conf.json 113 | - /etc/cni/net.d/10-flannel.conflist 114 | volumeMounts: 115 | - name: cni 116 | mountPath: /etc/cni/net.d 117 | - name: flannel-cfg 118 | mountPath: /etc/kube-flannel/ 119 | containers: 120 | - name: kube-flannel 121 | image: quay.io/coreos/flannel:v0.10.0-amd64 122 | command: 123 | - /opt/bin/flanneld 124 | args: 125 | - --ip-masq 126 | - --kube-subnet-mgr 127 | - --iface=enp0s3 128 | resources: 129 | requests: 130 | cpu: 
"100m" 131 | memory: "50Mi" 132 | limits: 133 | cpu: "100m" 134 | memory: "50Mi" 135 | securityContext: 136 | privileged: true 137 | env: 138 | - name: POD_NAME 139 | valueFrom: 140 | fieldRef: 141 | fieldPath: metadata.name 142 | - name: POD_NAMESPACE 143 | valueFrom: 144 | fieldRef: 145 | fieldPath: metadata.namespace 146 | volumeMounts: 147 | - name: run 148 | mountPath: /run 149 | - name: flannel-cfg 150 | mountPath: /etc/kube-flannel/ 151 | volumes: 152 | - name: run 153 | hostPath: 154 | path: /run 155 | - name: cni 156 | hostPath: 157 | path: /etc/cni/net.d 158 | - name: flannel-cfg 159 | configMap: 160 | name: kube-flannel-cfg 161 | -------------------------------------------------------------------------------- /playground/kube-flannel.yml.1: -------------------------------------------------------------------------------- 1 | --- 2 | kind: ClusterRole 3 | apiVersion: rbac.authorization.k8s.io/v1beta1 4 | metadata: 5 | name: flannel 6 | rules: 7 | - apiGroups: 8 | - "" 9 | resources: 10 | - pods 11 | verbs: 12 | - get 13 | - apiGroups: 14 | - "" 15 | resources: 16 | - nodes 17 | verbs: 18 | - list 19 | - watch 20 | - apiGroups: 21 | - "" 22 | resources: 23 | - nodes/status 24 | verbs: 25 | - patch 26 | --- 27 | kind: ClusterRoleBinding 28 | apiVersion: rbac.authorization.k8s.io/v1beta1 29 | metadata: 30 | name: flannel 31 | roleRef: 32 | apiGroup: rbac.authorization.k8s.io 33 | kind: ClusterRole 34 | name: flannel 35 | subjects: 36 | - kind: ServiceAccount 37 | name: flannel 38 | namespace: kube-system 39 | --- 40 | apiVersion: v1 41 | kind: ServiceAccount 42 | metadata: 43 | name: flannel 44 | namespace: kube-system 45 | --- 46 | kind: ConfigMap 47 | apiVersion: v1 48 | metadata: 49 | name: kube-flannel-cfg 50 | namespace: kube-system 51 | labels: 52 | tier: node 53 | app: flannel 54 | data: 55 | cni-conf.json: | 56 | { 57 | "name": "cbr0", 58 | "plugins": [ 59 | { 60 | "type": "flannel", 61 | "delegate": { 62 | "hairpinMode": true, 63 | "isDefaultGateway": true 64 | } 65 | }, 66 | { 67 | "type": "portmap", 68 | "capabilities": { 69 | "portMappings": true 70 | } 71 | } 72 | ] 73 | } 74 | net-conf.json: | 75 | { 76 | "Network": "10.244.0.0/16", 77 | "Backend": { 78 | "Type": "vxlan" 79 | } 80 | } 81 | --- 82 | apiVersion: extensions/v1beta1 83 | kind: DaemonSet 84 | metadata: 85 | name: kube-flannel-ds-amd64 86 | namespace: kube-system 87 | labels: 88 | tier: node 89 | app: flannel 90 | spec: 91 | template: 92 | metadata: 93 | labels: 94 | tier: node 95 | app: flannel 96 | spec: 97 | hostNetwork: true 98 | nodeSelector: 99 | beta.kubernetes.io/arch: amd64 100 | tolerations: 101 | - key: node-role.kubernetes.io/master 102 | operator: Exists 103 | effect: NoSchedule 104 | serviceAccountName: flannel 105 | initContainers: 106 | - name: install-cni 107 | image: quay.io/coreos/flannel:v0.10.0-amd64 108 | command: 109 | - cp 110 | args: 111 | - -f 112 | - /etc/kube-flannel/cni-conf.json 113 | - /etc/cni/net.d/10-flannel.conflist 114 | volumeMounts: 115 | - name: cni 116 | mountPath: /etc/cni/net.d 117 | - name: flannel-cfg 118 | mountPath: /etc/kube-flannel/ 119 | containers: 120 | - name: kube-flannel 121 | image: quay.io/coreos/flannel:v0.10.0-amd64 122 | command: 123 | - /opt/bin/flanneld 124 | args: 125 | - --ip-masq 126 | - --kube-subnet-mgr 127 | resources: 128 | requests: 129 | cpu: "100m" 130 | memory: "50Mi" 131 | limits: 132 | cpu: "100m" 133 | memory: "50Mi" 134 | securityContext: 135 | privileged: true 136 | env: 137 | - name: POD_NAME 138 | valueFrom: 139 | fieldRef: 140 | 
fieldPath: metadata.name 141 | - name: POD_NAMESPACE 142 | valueFrom: 143 | fieldRef: 144 | fieldPath: metadata.namespace 145 | volumeMounts: 146 | - name: run 147 | mountPath: /run 148 | - name: flannel-cfg 149 | mountPath: /etc/kube-flannel/ 150 | volumes: 151 | - name: run 152 | hostPath: 153 | path: /run 154 | - name: cni 155 | hostPath: 156 | path: /etc/cni/net.d 157 | - name: flannel-cfg 158 | configMap: 159 | name: kube-flannel-cfg 160 | --- 161 | apiVersion: extensions/v1beta1 162 | kind: DaemonSet 163 | metadata: 164 | name: kube-flannel-ds-arm64 165 | namespace: kube-system 166 | labels: 167 | tier: node 168 | app: flannel 169 | spec: 170 | template: 171 | metadata: 172 | labels: 173 | tier: node 174 | app: flannel 175 | spec: 176 | hostNetwork: true 177 | nodeSelector: 178 | beta.kubernetes.io/arch: arm64 179 | tolerations: 180 | - key: node-role.kubernetes.io/master 181 | operator: Exists 182 | effect: NoSchedule 183 | serviceAccountName: flannel 184 | initContainers: 185 | - name: install-cni 186 | image: quay.io/coreos/flannel:v0.10.0-arm64 187 | command: 188 | - cp 189 | args: 190 | - -f 191 | - /etc/kube-flannel/cni-conf.json 192 | - /etc/cni/net.d/10-flannel.conflist 193 | volumeMounts: 194 | - name: cni 195 | mountPath: /etc/cni/net.d 196 | - name: flannel-cfg 197 | mountPath: /etc/kube-flannel/ 198 | containers: 199 | - name: kube-flannel 200 | image: quay.io/coreos/flannel:v0.10.0-arm64 201 | command: 202 | - /opt/bin/flanneld 203 | args: 204 | - --ip-masq 205 | - --kube-subnet-mgr 206 | resources: 207 | requests: 208 | cpu: "100m" 209 | memory: "50Mi" 210 | limits: 211 | cpu: "100m" 212 | memory: "50Mi" 213 | securityContext: 214 | privileged: true 215 | env: 216 | - name: POD_NAME 217 | valueFrom: 218 | fieldRef: 219 | fieldPath: metadata.name 220 | - name: POD_NAMESPACE 221 | valueFrom: 222 | fieldRef: 223 | fieldPath: metadata.namespace 224 | volumeMounts: 225 | - name: run 226 | mountPath: /run 227 | - name: flannel-cfg 228 | mountPath: /etc/kube-flannel/ 229 | volumes: 230 | - name: run 231 | hostPath: 232 | path: /run 233 | - name: cni 234 | hostPath: 235 | path: /etc/cni/net.d 236 | - name: flannel-cfg 237 | configMap: 238 | name: kube-flannel-cfg 239 | --- 240 | apiVersion: extensions/v1beta1 241 | kind: DaemonSet 242 | metadata: 243 | name: kube-flannel-ds-arm 244 | namespace: kube-system 245 | labels: 246 | tier: node 247 | app: flannel 248 | spec: 249 | template: 250 | metadata: 251 | labels: 252 | tier: node 253 | app: flannel 254 | spec: 255 | hostNetwork: true 256 | nodeSelector: 257 | beta.kubernetes.io/arch: arm 258 | tolerations: 259 | - key: node-role.kubernetes.io/master 260 | operator: Exists 261 | effect: NoSchedule 262 | serviceAccountName: flannel 263 | initContainers: 264 | - name: install-cni 265 | image: quay.io/coreos/flannel:v0.10.0-arm 266 | command: 267 | - cp 268 | args: 269 | - -f 270 | - /etc/kube-flannel/cni-conf.json 271 | - /etc/cni/net.d/10-flannel.conflist 272 | volumeMounts: 273 | - name: cni 274 | mountPath: /etc/cni/net.d 275 | - name: flannel-cfg 276 | mountPath: /etc/kube-flannel/ 277 | containers: 278 | - name: kube-flannel 279 | image: quay.io/coreos/flannel:v0.10.0-arm 280 | command: 281 | - /opt/bin/flanneld 282 | args: 283 | - --ip-masq 284 | - --kube-subnet-mgr 285 | resources: 286 | requests: 287 | cpu: "100m" 288 | memory: "50Mi" 289 | limits: 290 | cpu: "100m" 291 | memory: "50Mi" 292 | securityContext: 293 | privileged: true 294 | env: 295 | - name: POD_NAME 296 | valueFrom: 297 | fieldRef: 298 | fieldPath: 
metadata.name 299 | - name: POD_NAMESPACE 300 | valueFrom: 301 | fieldRef: 302 | fieldPath: metadata.namespace 303 | volumeMounts: 304 | - name: run 305 | mountPath: /run 306 | - name: flannel-cfg 307 | mountPath: /etc/kube-flannel/ 308 | volumes: 309 | - name: run 310 | hostPath: 311 | path: /run 312 | - name: cni 313 | hostPath: 314 | path: /etc/cni/net.d 315 | - name: flannel-cfg 316 | configMap: 317 | name: kube-flannel-cfg 318 | --- 319 | apiVersion: extensions/v1beta1 320 | kind: DaemonSet 321 | metadata: 322 | name: kube-flannel-ds-ppc64le 323 | namespace: kube-system 324 | labels: 325 | tier: node 326 | app: flannel 327 | spec: 328 | template: 329 | metadata: 330 | labels: 331 | tier: node 332 | app: flannel 333 | spec: 334 | hostNetwork: true 335 | nodeSelector: 336 | beta.kubernetes.io/arch: ppc64le 337 | tolerations: 338 | - key: node-role.kubernetes.io/master 339 | operator: Exists 340 | effect: NoSchedule 341 | serviceAccountName: flannel 342 | initContainers: 343 | - name: install-cni 344 | image: quay.io/coreos/flannel:v0.10.0-ppc64le 345 | command: 346 | - cp 347 | args: 348 | - -f 349 | - /etc/kube-flannel/cni-conf.json 350 | - /etc/cni/net.d/10-flannel.conflist 351 | volumeMounts: 352 | - name: cni 353 | mountPath: /etc/cni/net.d 354 | - name: flannel-cfg 355 | mountPath: /etc/kube-flannel/ 356 | containers: 357 | - name: kube-flannel 358 | image: quay.io/coreos/flannel:v0.10.0-ppc64le 359 | command: 360 | - /opt/bin/flanneld 361 | args: 362 | - --ip-masq 363 | - --kube-subnet-mgr 364 | resources: 365 | requests: 366 | cpu: "100m" 367 | memory: "50Mi" 368 | limits: 369 | cpu: "100m" 370 | memory: "50Mi" 371 | securityContext: 372 | privileged: true 373 | env: 374 | - name: POD_NAME 375 | valueFrom: 376 | fieldRef: 377 | fieldPath: metadata.name 378 | - name: POD_NAMESPACE 379 | valueFrom: 380 | fieldRef: 381 | fieldPath: metadata.namespace 382 | volumeMounts: 383 | - name: run 384 | mountPath: /run 385 | - name: flannel-cfg 386 | mountPath: /etc/kube-flannel/ 387 | volumes: 388 | - name: run 389 | hostPath: 390 | path: /run 391 | - name: cni 392 | hostPath: 393 | path: /etc/cni/net.d 394 | - name: flannel-cfg 395 | configMap: 396 | name: kube-flannel-cfg 397 | --- 398 | apiVersion: extensions/v1beta1 399 | kind: DaemonSet 400 | metadata: 401 | name: kube-flannel-ds-s390x 402 | namespace: kube-system 403 | labels: 404 | tier: node 405 | app: flannel 406 | spec: 407 | template: 408 | metadata: 409 | labels: 410 | tier: node 411 | app: flannel 412 | spec: 413 | hostNetwork: true 414 | nodeSelector: 415 | beta.kubernetes.io/arch: s390x 416 | tolerations: 417 | - key: node-role.kubernetes.io/master 418 | operator: Exists 419 | effect: NoSchedule 420 | serviceAccountName: flannel 421 | initContainers: 422 | - name: install-cni 423 | image: quay.io/coreos/flannel:v0.10.0-s390x 424 | command: 425 | - cp 426 | args: 427 | - -f 428 | - /etc/kube-flannel/cni-conf.json 429 | - /etc/cni/net.d/10-flannel.conflist 430 | volumeMounts: 431 | - name: cni 432 | mountPath: /etc/cni/net.d 433 | - name: flannel-cfg 434 | mountPath: /etc/kube-flannel/ 435 | containers: 436 | - name: kube-flannel 437 | image: quay.io/coreos/flannel:v0.10.0-s390x 438 | command: 439 | - /opt/bin/flanneld 440 | args: 441 | - --ip-masq 442 | - --kube-subnet-mgr 443 | resources: 444 | requests: 445 | cpu: "100m" 446 | memory: "50Mi" 447 | limits: 448 | cpu: "100m" 449 | memory: "50Mi" 450 | securityContext: 451 | privileged: true 452 | env: 453 | - name: POD_NAME 454 | valueFrom: 455 | fieldRef: 456 | fieldPath: 
metadata.name 457 | - name: POD_NAMESPACE 458 | valueFrom: 459 | fieldRef: 460 | fieldPath: metadata.namespace 461 | volumeMounts: 462 | - name: run 463 | mountPath: /run 464 | - name: flannel-cfg 465 | mountPath: /etc/kube-flannel/ 466 | volumes: 467 | - name: run 468 | hostPath: 469 | path: /run 470 | - name: cni 471 | hostPath: 472 | path: /etc/cni/net.d 473 | - name: flannel-cfg 474 | configMap: 475 | name: kube-flannel-cfg 476 | -------------------------------------------------------------------------------- /playground/kubeadm.yml: -------------------------------------------------------------------------------- 1 | api: 2 | advertiseAddress: 192.168.0.101 3 | bindPort: 6443 4 | controlPlaneEndpoint: "" 5 | apiServerExtraArgs: 6 | authorization-mode: Node,RBAC 7 | apiVersion: kubeadm.k8s.io/v1alpha2 8 | auditPolicy: 9 | logDir: /var/log/kubernetes/audit 10 | logMaxAge: 2 11 | path: "" 12 | certificatesDir: /etc/kubernetes/pki 13 | clusterName: kubernetes 14 | etcd: 15 | local: 16 | dataDir: /var/lib/etcd 17 | image: "" 18 | imageRepository: k8s.gcr.io 19 | kind: MasterConfiguration 20 | kubeProxy: 21 | config: 22 | bindAddress: 0.0.0.0 23 | clientConnection: 24 | acceptContentTypes: "" 25 | burst: 10 26 | contentType: application/vnd.kubernetes.protobuf 27 | kubeconfig: /var/lib/kube-proxy/kubeconfig.conf 28 | qps: 5 29 | clusterCIDR: 10.244.0.0/16 30 | configSyncPeriod: 15m0s 31 | conntrack: 32 | max: null 33 | maxPerCore: 32768 34 | min: 131072 35 | tcpCloseWaitTimeout: 1h0m0s 36 | tcpEstablishedTimeout: 24h0m0s 37 | enableProfiling: false 38 | healthzBindAddress: 0.0.0.0:10256 39 | hostnameOverride: "" 40 | iptables: 41 | masqueradeAll: false 42 | masqueradeBit: 14 43 | minSyncPeriod: 0s 44 | syncPeriod: 30s 45 | ipvs: 46 | excludeCIDRs: null 47 | minSyncPeriod: 0s 48 | scheduler: "" 49 | syncPeriod: 30s 50 | metricsBindAddress: 127.0.0.1:10249 51 | mode: "" 52 | nodePortAddresses: null 53 | oomScoreAdj: -999 54 | portRange: "" 55 | resourceContainer: /kube-proxy 56 | udpIdleTimeout: 250ms 57 | kubeletConfiguration: 58 | baseConfig: 59 | address: 0.0.0.0 60 | authentication: 61 | anonymous: 62 | enabled: false 63 | webhook: 64 | cacheTTL: 2m0s 65 | enabled: true 66 | x509: 67 | clientCAFile: /etc/kubernetes/pki/ca.crt 68 | authorization: 69 | mode: Webhook 70 | webhook: 71 | cacheAuthorizedTTL: 5m0s 72 | cacheUnauthorizedTTL: 30s 73 | cgroupDriver: cgroupfs 74 | cgroupsPerQOS: true 75 | clusterDNS: 76 | - 172.17.0.10 77 | clusterDomain: cluster.local 78 | containerLogMaxFiles: 5 79 | containerLogMaxSize: 10Mi 80 | contentType: application/vnd.kubernetes.protobuf 81 | cpuCFSQuota: true 82 | cpuManagerPolicy: none 83 | cpuManagerReconcilePeriod: 10s 84 | enableControllerAttachDetach: true 85 | enableDebuggingHandlers: true 86 | enforceNodeAllocatable: 87 | - pods 88 | eventBurst: 10 89 | eventRecordQPS: 5 90 | evictionHard: 91 | imagefs.available: 15% 92 | memory.available: 100Mi 93 | nodefs.available: 10% 94 | nodefs.inodesFree: 5% 95 | evictionPressureTransitionPeriod: 5m0s 96 | failSwapOn: true 97 | fileCheckFrequency: 20s 98 | hairpinMode: promiscuous-bridge 99 | healthzBindAddress: 127.0.0.1 100 | healthzPort: 10248 101 | httpCheckFrequency: 20s 102 | imageGCHighThresholdPercent: 85 103 | imageGCLowThresholdPercent: 80 104 | imageMinimumGCAge: 2m0s 105 | iptablesDropBit: 15 106 | iptablesMasqueradeBit: 14 107 | kubeAPIBurst: 10 108 | kubeAPIQPS: 5 109 | makeIPTablesUtilChains: true 110 | maxOpenFiles: 1000000 111 | maxPods: 110 112 | nodeStatusUpdateFrequency: 10s 113 | 
oomScoreAdj: -999 114 | podPidsLimit: -1 115 | port: 10250 116 | registryBurst: 10 117 | registryPullQPS: 5 118 | resolvConf: /etc/resolv.conf 119 | rotateCertificates: true 120 | runtimeRequestTimeout: 2m0s 121 | serializeImagePulls: true 122 | staticPodPath: /etc/kubernetes/manifests 123 | streamingConnectionIdleTimeout: 4h0m0s 124 | syncFrequency: 1m0s 125 | volumeStatsAggPeriod: 1m0s 126 | kubernetesVersion: v1.11.0 127 | networking: 128 | dnsDomain: cluster.local 129 | podSubnet: 172.16.0.0/16 130 | serviceSubnet: 172.17.0.0/16 131 | nodeRegistration: {} 132 | unifiedControlPlaneImage: "" 133 | -------------------------------------------------------------------------------- /playground/local-storage.yaml: -------------------------------------------------------------------------------- 1 | kind: StorageClass 2 | apiVersion: storage.k8s.io/v1 3 | metadata: 4 | name: local-storage 5 | provisioner: kubernetes.io/no-provisioner 6 | volumeBindingMode: WaitForFirstConsumer -------------------------------------------------------------------------------- /playground/pod-init-service.yml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: myservice 5 | spec: 6 | ports: 7 | - protocol: TCP 8 | port: 80 9 | targetPort: 9376 10 | --- 11 | kind: Service 12 | apiVersion: v1 13 | metadata: 14 | name: mydb 15 | spec: 16 | ports: 17 | - protocol: TCP 18 | port: 80 19 | targetPort: 9377 20 | -------------------------------------------------------------------------------- /playground/pod-init.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: myapp-pod 5 | labels: 6 | app: myapp 7 | spec: 8 | containers: 9 | - name: myapp-container 10 | image: docker.io/busybox 11 | command: ['sh', '-c', 'echo The app is running! 
&& sleep 3600'] 12 | initContainers: 13 | - name: init-myservice 14 | image: docker.io/busybox 15 | command: ['sh', '-c', 'echo init-service && sleep 2'] 16 | - name: init-mydb 17 | image: docker.io/busybox 18 | command: ['sh', '-c', 'echo init-mydb && sleep 2'] 19 | -------------------------------------------------------------------------------- /playground/pod-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: ccb-nginx 5 | labels: 6 | app: nginx 7 | spec: 8 | containers: 9 | - name: nginx 10 | image: docker.io/nginx 11 | ports: 12 | - containerPort: 80 13 | -------------------------------------------------------------------------------- /playground/preInstall.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | setenforce 0 3 | sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 4 | 5 | systemctl stop firewalld 6 | systemctl disable firewalld 7 | swapoff -a 8 | sed -i 's/.*swap.*/#&/' /etc/fstab 9 | 10 | cat <<EOF > /etc/sysctl.d/k8s.conf 11 | net.bridge.bridge-nf-call-ip6tables = 1 12 | net.bridge.bridge-nf-call-iptables = 1 13 | EOF 14 | 15 | sysctl --system 16 | 17 | cat <<EOF > /etc/yum.repos.d/kubernetes.repo 18 | [kubernetes] 19 | name=Kubernetes 20 | baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ 21 | enabled=1 22 | gpgcheck=1 23 | repo_gpgcheck=1 24 | gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 25 | EOF 26 | 27 | yum install -y epel-release 28 | yum install -y net-tools wget vim ntpdate 29 | yum install -y docker 30 | systemctl enable docker && systemctl start docker 31 | systemctl enable docker.service 32 | yum install -y kubelet kubeadm kubectl kubernetes-cni 33 | systemctl enable kubelet && systemctl start kubelet 34 | -------------------------------------------------------------------------------- /playground/rc-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: nginx 5 | spec: 6 | replicas: 3 7 | selector: 8 | app: nginx 9 | template: 10 | metadata: 11 | name: nginx 12 | labels: 13 | app: nginx 14 | spec: 15 | containers: 16 | - name: nginx 17 | image: docker.io/nginx 18 | ports: 19 | - containerPort: 80 20 | -------------------------------------------------------------------------------- /playground/redis-master-deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 2 | kind: Deployment 3 | metadata: 4 | name: redis-master 5 | labels: 6 | app: redis 7 | spec: 8 | selector: 9 | matchLabels: 10 | app: redis 11 | role: master 12 | tier: backend 13 | replicas: 1 14 | template: 15 | metadata: 16 | labels: 17 | app: redis 18 | role: master 19 | tier: backend 20 | spec: 21 | containers: 22 | - name: master 23 | image: k8s.gcr.io/redis:e2e # or just image: redis 24 | resources: 25 | requests: 26 | cpu: 100m 27 | memory: 100Mi 28 | ports: 29 | - containerPort: 6379 30 | -------------------------------------------------------------------------------- /playground/second-pod.yml: 
-------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: second-pod 5 | labels: 6 | app: bash 7 | tir: backend 8 | spec: 9 | containers: 10 | - name: bash-container 11 | image: docker.io/busybox 12 | command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] 13 | -------------------------------------------------------------------------------- /playground/service-es.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: es-service 5 | spec: 6 | selector: 7 | app: ccb-es6 8 | ports: 9 | - protocol: TCP 10 | name: http 11 | port: 9200 12 | targetPort: 9200 13 | - protocol: TCP 14 | name: tcp 15 | port: 9300 16 | targetPort: 9300 17 | -------------------------------------------------------------------------------- /playground/ss-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx 5 | labels: 6 | app: nginx 7 | spec: 8 | ports: 9 | - port: 80 10 | name: web 11 | clusterIP: None 12 | selector: 13 | app: nginx 14 | --- 15 | apiVersion: apps/v1 16 | kind: StatefulSet 17 | metadata: 18 | name: web 19 | spec: 20 | serviceName: "nginx" 21 | replicas: 2 22 | selector: 23 | matchLabels: 24 | app: nginx 25 | template: 26 | metadata: 27 | labels: 28 | app: nginx 29 | spec: 30 | containers: 31 | - name: nginx 32 | image: docker.io/nginx 33 | ports: 34 | - containerPort: 80 35 | name: web 36 | volumeMounts: 37 | - name: www 38 | mountPath: /usr/share/nginx/html 39 | volumeClaimTemplates: 40 | - metadata: 41 | name: www 42 | spec: 43 | accessModes: ["ReadWriteOnce"] 44 | volumeMode: Filesystem 45 | resources: 46 | requests: 47 | storage: 50Mi 48 | #storageClassName: local-storage 49 | -------------------------------------------------------------------------------- /playground/ss-pv.yml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: PersistentVolume 6 | metadata: 7 | name: es-storage-pv-04 8 | spec: 9 | capacity: 10 | storage: 50Mi 11 | volumeMode: Filesystem 12 | accessModes: ["ReadWriteOnce"] 13 | persistentVolumeReclaimPolicy: Delete 14 | # storageClassName: local-storage 15 | local: 16 | path: /home/es 17 | nodeAffinity: 18 | required: 19 | nodeSelectorTerms: 20 | - matchExpressions: 21 | - key: kubernetes.io/hostname 22 | operator: In 23 | values: 24 | - devops-102 25 | # - devops-103 26 | - apiVersion: v1 27 | kind: PersistentVolume 28 | metadata: 29 | name: es-storage-pv-05 30 | spec: 31 | capacity: 32 | storage: 60Mi 33 | volumeMode: Filesystem 34 | accessModes: ["ReadWriteOnce"] 35 | persistentVolumeReclaimPolicy: Delete 36 | # storageClassName: local-storage 37 | local: 38 | path: /home/es 39 | nodeAffinity: 40 | required: 41 | nodeSelectorTerms: 42 | - matchExpressions: 43 | - key: kubernetes.io/hostname 44 | operator: In 45 | values: 46 | # - devops-102 47 | - devops-103 48 | - apiVersion: v1 49 | kind: PersistentVolume 50 | metadata: 51 | name: es-storage-pv-06 52 | spec: 53 | capacity: 54 | storage: 50Mi 55 | volumeMode: Filesystem 56 | accessModes: ["ReadWriteOnce"] 57 | persistentVolumeReclaimPolicy: Delete 58 | # storageClassName: local-storage 59 | local: 60 | path: /home/es 61 | nodeAffinity: 62 | required: 63 | nodeSelectorTerms: 64 | - matchExpressions: 65 | - key: kubernetes.io/hostname 66 | 
operator: In 67 | values: 68 | # - devops-102 69 | - devops-103 70 | -------------------------------------------------------------------------------- /playground/ss-single-pv.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: es-storage-pv-06 5 | spec: 6 | capacity: 7 | storage: 50Mi 8 | volumeMode: Filesystem 9 | accessModes: ["ReadWriteOnce"] 10 | persistentVolumeReclaimPolicy: Delete 11 | # storageClassName: local-storage 12 | local: 13 | path: /home/es 14 | nodeAffinity: 15 | required: 16 | nodeSelectorTerms: 17 | - matchExpressions: 18 | - key: kubernetes.io/hostname 19 | operator: In 20 | values: 21 | # - devops-102 22 | - devops-103 23 | -------------------------------------------------------------------------------- /playground/tc.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | icourse=('tencent.' 'cloudedu') 3 | x=${icourse[0]} 4 | y=${icourse[1]} 5 | i=${#x} 6 | e=${#y} 7 | unset icourse 8 | echo ${icourse:0:8}$i$e 9 | -------------------------------------------------------------------------------- /playground/tomcat-1.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: tomcat-ccb-1 5 | namespace: default 6 | labels: 7 | app: tomcat 8 | node: devops-103 9 | spec: 10 | containers: 11 | - name: tomcat 12 | image: docker.io/tomcat 13 | volumeMounts: 14 | - name: tomcat-storage 15 | mountPath: /data/tomcat 16 | - name: cache-storage 17 | mountPath: /data/cache 18 | ports: 19 | - containerPort: 8080 20 | protocol: TCP 21 | env: 22 | - name: GREETING 23 | value: "Hello from devops-103" 24 | volumes: 25 | - name: tomcat-storage 26 | hostPath: 27 | path: /home/es 28 | - name: cache-storage 29 | emptyDir: {} 30 | -------------------------------------------------------------------------------- /playground/tomcat.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: tomcat-ccb-1 5 | namespace: default 6 | labels: 7 | app: tomcat 8 | node: devops-103 9 | spec: 10 | containers: 11 | - name: tomcat 12 | image: docker.io/tomcat 13 | volumeMounts: 14 | - name: tomcat-storage 15 | mountPath: /data/tomcat 16 | - name: cache-storage 17 | mountPath: /data/cache 18 | - name: mypd 19 | mountPath: /data/pvc 20 | ports: 21 | - containerPort: 8080 22 | protocol: TCP 23 | env: 24 | - name: GREETING 25 | value: "Hello from devops-103" 26 | volumes: 27 | - name: tomcat-storage 28 | hostPath: 29 | path: /home/es 30 | - name: cache-storage 31 | emptyDir: {} 32 | - name: mypd 33 | persistentVolumeClaim: 34 | claimName: es-storage-pvc 35 | 36 | -------------------------------------------------------------------------------- /playground/tomcat_example.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | creationTimestamp: 2018-07-24T08:35:08Z 5 | generateName: tomcat-858b8c476d- 6 | labels: 7 | app: tomcat 8 | node: devops-102 9 | pod-template-hash: "4146470328" 10 | name: tomcat-858b8c476d-vnm98 11 | namespace: default 12 | ownerReferences: 13 | - apiVersion: apps/v1 14 | blockOwnerDeletion: true 15 | controller: true 16 | kind: ReplicaSet 17 | name: tomcat-858b8c476d 18 | uid: 78daa97d-8f1c-11e8-82cd-080027b7c4e9 19 | resourceVersion: "27194" 20 | selfLink: 
/api/v1/namespaces/default/pods/tomcat-858b8c476d-vnm98 21 | uid: 78e6d785-8f1c-11e8-82cd-080027b7c4e9 22 | spec: 23 | containers: 24 | - image: docker.io/tomcat 25 | imagePullPolicy: Always 26 | name: tomcat 27 | ports: 28 | - containerPort: 8080 29 | protocol: TCP 30 | resources: {} 31 | terminationMessagePath: /dev/termination-log 32 | terminationMessagePolicy: File 33 | volumeMounts: 34 | - mountPath: /var/run/secrets/kubernetes.io/serviceaccount 35 | name: default-token-trvqv 36 | readOnly: true 37 | dnsPolicy: ClusterFirst 38 | nodeName: devops-102 39 | restartPolicy: Always 40 | schedulerName: default-scheduler 41 | securityContext: {} 42 | serviceAccount: default 43 | serviceAccountName: default 44 | terminationGracePeriodSeconds: 30 45 | tolerations: 46 | - effect: NoExecute 47 | key: node.kubernetes.io/not-ready 48 | operator: Exists 49 | tolerationSeconds: 300 50 | - effect: NoExecute 51 | key: node.kubernetes.io/unreachable 52 | operator: Exists 53 | tolerationSeconds: 300 54 | volumes: 55 | - name: default-token-trvqv 56 | secret: 57 | defaultMode: 420 58 | secretName: default-token-trvqv 59 | status: 60 | conditions: 61 | - lastProbeTime: null 62 | lastTransitionTime: 2018-07-24T08:35:08Z 63 | status: "True" 64 | type: Initialized 65 | - lastProbeTime: null 66 | lastTransitionTime: 2018-07-24T08:35:38Z 67 | status: "True" 68 | type: Ready 69 | - lastProbeTime: null 70 | lastTransitionTime: null 71 | status: "True" 72 | type: ContainersReady 73 | - lastProbeTime: null 74 | lastTransitionTime: 2018-07-24T08:35:08Z 75 | status: "True" 76 | type: PodScheduled 77 | containerStatuses: 78 | - containerID: docker://9f3aa2d3d6c1937d4209a44820c1cd06f7eaf8796848c759e19410358aea4866 79 | image: docker.io/tomcat:latest 80 | imageID: docker-pullable://docker.io/tomcat@sha256:87ad70ceaafd5c71301b081b37ca2795bd6c7c1a5599a8c92c9447bbd225ae47 81 | lastState: {} 82 | name: tomcat 83 | ready: true 84 | restartCount: 0 85 | state: 86 | running: 87 | startedAt: 2018-07-24T08:35:37Z 88 | hostIP: 192.168.0.102 89 | phase: Running 90 | podIP: 10.244.2.6 91 | qosClass: BestEffort 92 | startTime: 2018-07-24T08:35:08Z 93 | -------------------------------------------------------------------------------- /pod-probe/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes中Pod的健康检查 2 | 3 | > 本文介绍 Pod 中容器健康检查相关的内容、配置方法以及实验测试,实验环境为 Kubernetes 1.11,搭建方法参考[kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | ## 0. 什么是 Container Probes 6 | 我们先来看一下Kubernetes的架构图,每个Node节点上都有 ```kubelet``` ,Container Probe 也就是容器的健康检查是由 ```kubelet``` 定期执行的。 7 | 8 | Kubelet通过调用Pod中容器的[Handler](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#probe-v1-core)来执行检查的动作,Handler有三种类型。 9 | - ExecAction,在容器中执行特定的命令,命令退出返回0表示成功 10 | - TCPSocketAction,根据容器IP地址及特定的端口进行TCP检查,端口开放表示成功 11 | - HTTPGetAction,根据容器IP、端口及访问路径发起一次HTTP请求,如果返回码在200到400之间表示成功 12 | 每种检查动作都可能有三种返回状态。 13 | - Success,表示通过了健康检查 14 | - Failure,表示没有通过健康检查 15 | - Unknown,表示检查动作失败 16 | 17 | 在创建Pod时,可以通过```liveness```和```readiness```两种方式来探测Pod内容器的运行情况。```liveness```可以用来检查容器内应用的存活的情况来,如果检查失败会杀掉容器进程,是否重启容器则取决于Pod的[重启策略](https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy)。```readiness```检查容器内的应用是否能够正常对外提供服务,如果探测失败,则Endpoint Controller会将这个Pod的IP从服务中删除。 18 | 19 | ## 1. 
应用场景 20 | 我们都知道Kubernetes会维持Pod的状态及个数,因此如果你只是希望保持Pod内容器失败后能够重启,那么其实没有必要添加健康检查,只需要合理配置Pod的重启策略即可。更适合健康检查的场景是在我们根据检查结果需要主动杀掉容器并重启的场景,还有一些容器在正式提供服务之前需要加载一些数据,那么可以采用```readiness```来检查这些动作是否完成。 21 | 22 | ## 2. liveness 检查实例 23 | ### 2.1 Container Exec 24 | ```yaml 25 | apiVersion: v1 26 | kind: Pod 27 | metadata: 28 | labels: 29 | test: liveness 30 | name: liveness-exec 31 | spec: 32 | containers: 33 | - name: liveness 34 | image: docker.io/alpine 35 | args: 36 | - /bin/sh 37 | - -c 38 | - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 39 | livenessProbe: 40 | exec: 41 | command: 42 | - cat 43 | - /tmp/healthy 44 | initialDelaySeconds: 5 45 | periodSeconds: 5 46 | ``` 47 | 本例创建了一个容器,通过检查一个文件是否存在来判断容器运行是否正常。容器运行30秒后,将文件删除,这样容器的liveness检查失败从而会将容器重启。 48 | 49 | ### 2.2 HTTP Health Check 50 | ```yaml 51 | apiVersion: v1 52 | kind: Pod 53 | metadata: 54 | labels: 55 | test: liveness 56 | app: httpd 57 | name: liveness-http 58 | spec: 59 | containers: 60 | - name: liveness 61 | image: docker.io/httpd 62 | ports: 63 | - containerPort: 80 64 | livenessProbe: 65 | httpGet: 66 | path: /index.html 67 | port: 80 68 | httpHeaders: 69 | - name: X-Custom-Header 70 | value: Awesome 71 | initialDelaySeconds: 5 72 | periodSeconds: 5 73 | ``` 74 | 本例通过创建一个Apache服务器,通过访问 index 来判断服务是否存活。通过手工删除这个文件的方式,可以导致检查失败,从而重启容器。 75 | ```sh 76 | [root@devops-101 ~]# kubectl exec -it liveness-http /bin/sh 77 | # 78 | # ls 79 | bin build cgi-bin conf error htdocs icons include logs modules 80 | # ps -ef 81 | UID PID PPID C STIME TTY TIME CMD 82 | root 1 0 0 11:39 ? 00:00:00 httpd -DFOREGROUND 83 | daemon 6 1 0 11:39 ? 00:00:00 httpd -DFOREGROUND 84 | daemon 7 1 0 11:39 ? 00:00:00 httpd -DFOREGROUND 85 | daemon 8 1 0 11:39 ? 00:00:00 httpd -DFOREGROUND 86 | root 90 0 0 11:39 ? 00:00:00 /bin/sh 87 | root 94 90 0 11:39 ? 00:00:00 ps -ef 88 | # 89 | # cd /usr/local/apache2 90 | # ls 91 | bin build cgi-bin conf error htdocs icons include logs modules 92 | # cd htdocs 93 | # ls 94 | index.html 95 | # rm index.html 96 | # command terminated with exit code 137 97 | [root@devops-101 ~]# kubectl describe pod liveness-http 98 | Events: 99 | Type Reason Age From Message 100 | ---- ------ ---- ---- ------- 101 | Normal Scheduled 1m default-scheduler Successfully assigned default/liveness-http to devops-102 102 | Warning Unhealthy 8s (x3 over 18s) kubelet, devops-102 Liveness probe failed: HTTP probe failed with statuscode: 404 103 | Normal Pulling 7s (x2 over 1m) kubelet, devops-102 pulling image "docker.io/httpd" 104 | Normal Killing 7s kubelet, devops-102 Killing container with id docker://liveness:Container failed liveness probe.. Container will be killed and recreated. 105 | Normal Pulled 1s (x2 over 1m) kubelet, devops-102 Successfully pulled image "docker.io/httpd" 106 | Normal Created 1s (x2 over 1m) kubelet, devops-102 Created container 107 | Normal Started 1s (x2 over 1m) kubelet, devops-102 Started container 108 | ``` 109 | 110 | ### 2.3 TCP Socket 111 | 这种方式通过TCP连接来判断是否存活,Pod编排示例。 112 | ```yaml 113 | apiVersion: v1 114 | kind: Pod 115 | metadata: 116 | labels: 117 | test: liveness 118 | app: node 119 | name: liveness-tcp 120 | spec: 121 | containers: 122 | - name: goproxy 123 | image: docker.io/googlecontainer/goproxy:0.1 124 | ports: 125 | - containerPort: 8080 126 | readinessProbe: 127 | tcpSocket: 128 | port: 8080 129 | initialDelaySeconds: 5 130 | periodSeconds: 10 131 | livenessProbe: 132 | tcpSocket: 133 | port: 8080 134 | initialDelaySeconds: 15 135 | periodSeconds: 20 136 | ``` 137 | 138 | ## 3. 
readiness 检查实例 139 | ```readiness``` 的配置方式和```liveness```类似,只需将```livenessProbe```改为```readinessProbe```即可。 140 | 141 | ## 4. 配置参数 142 | 我们可以通过```kubectl explain```命令来查看具体的配置属性,在这里还是简单列一下主要的属性。 143 | 144 | 145 | - initialDelaySeconds:检查开始执行的时间,以容器启动完成为起点计算 146 | - periodSeconds:检查执行的周期,默认为10秒,最小为1秒 147 | - timeoutSeconds:检查超时的时间,默认为1秒,最小为1秒 148 | - successThreshold:从上次检查失败后重新认定检查成功的检查次数阈值(必须是连续成功),默认为1 149 | - failureThreshold:从上次检查成功后认定检查失败的检查次数阈值(必须是连续失败),默认为3 150 | - httpGet的属性 151 | - host:主机名或IP 152 | - scheme:协议类型,HTTP或HTTPS,默认为HTTP 153 | - path:请求路径 154 | - httpHeaders:自定义请求头 155 | - port:请求端口 156 | 157 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 158 | 159 | ## 参考资料 160 | 1. [Kubernetes 201](https://kubernetes.io/docs/tutorials/k8s201/) 161 | 2. [Container Probes](https://kubernetes.io/docs/user-guide/pod-states/#container-probes) 162 | 3. [Kubernetes Task Probe](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) 163 | 4. [Configure Liveness and Readiness Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/) 164 | 5. [package handler](https://godoc.org/sigs.k8s.io/controller-runtime/pkg/handler) 165 | 6. [Kubernetes Reference Probe](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.11/#probe-v1-core) 166 | -------------------------------------------------------------------------------- /pod-probe/liveness-exec.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | labels: 5 | test: liveness 6 | name: liveness-exec 7 | spec: 8 | containers: 9 | - name: liveness 10 | image: docker.io/alpine 11 | args: 12 | - /bin/sh 13 | - -c 14 | - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600 15 | livenessProbe: 16 | exec: 17 | command: 18 | - cat 19 | - /tmp/healthy 20 | initialDelaySeconds: 5 21 | periodSeconds: 5 22 | -------------------------------------------------------------------------------- /pod-probe/liveness-http.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | labels: 5 | test: liveness 6 | app: httpd 7 | name: liveness-http 8 | spec: 9 | containers: 10 | - name: liveness 11 | image: docker.io/httpd 12 | ports: 13 | - containerPort: 80 14 | livenessProbe: 15 | httpGet: 16 | path: /index.html 17 | port: 80 18 | httpHeaders: 19 | - name: X-Custom-Header 20 | value: Awesome 21 | initialDelaySeconds: 5 22 | periodSeconds: 5 23 | -------------------------------------------------------------------------------- /pod-probe/liveness-tcp.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | labels: 5 | test: liveness 6 | app: node 7 | name: liveness-tcp 8 | spec: 9 | containers: 10 | - name: goproxy 11 | image: docker.io/googlecontainer/goproxy:0.1 12 | ports: 13 | - containerPort: 8080 14 | readinessProbe: 15 | tcpSocket: 16 | port: 8080 17 | initialDelaySeconds: 5 18 | periodSeconds: 10 19 | livenessProbe: 20 | tcpSocket: 21 | port: 8080 22 | initialDelaySeconds: 15 23 | periodSeconds: 20 24 | -------------------------------------------------------------------------------- /pod/README.md: -------------------------------------------------------------------------------- 1 | > 本文的演练环境为基于Virtualbox搭建的Kubernetes集群,具体搭建步骤可以参考[kubeadm安装kubernetes V1.11.1 
集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 2 | 3 | ## 1. 基本概念 4 | ### 1.1 Pod是什么 5 | Pod是Kubernetes中能够创建和部署的最小单元,是Kubernetes集群中的一个应用实例,总是部署在同一个节点Node上。Pod中包含了一个或多个容器,还包括了存储、网络等各个容器共享的资源。Pod支持多种容器环境,Docker则是最流行的容器环境。 6 | - 单容器Pod,最常见的应用方式。 7 | - 多容器Pod,对于多容器Pod,Kubernetes会保证所有的容器都在同一台物理主机或虚拟主机中运行。多容器Pod是相对高阶的使用方式,除非应用耦合特别严重,一般不推荐使用这种方式。一个Pod内的容器共享IP地址和端口范围,容器之间可以通过 localhost 互相访问。 8 | ![多容器Pod示意图](https://d33wubrfki0l68.cloudfront.net/aecab1f649bc640ebef1f05581bfcc91a48038c4/728d6/images/docs/pod.svg) 9 | 10 | Pod并不提供保证正常运行的能力,因为可能遭受Node节点的物理故障、网络分区等等的影响,整体的高可用是Kubernetes集群通过在集群内调度Node来实现的。通常情况下我们不要直接创建Pod,一般都是通过Controller来进行管理,但是了解Pod对于我们熟悉控制器非常有好处。 11 | 12 | ### 1.2 Pod带来的好处 13 | Pod带来的好处 14 | - Pod做为一个可以独立运行的服务单元,简化了应用部署的难度,以更高的抽象层次为应用部署管提供了极大的方便。 15 | - Pod做为最小的应用实例可以独立运行,因此可以方便的进行部署、水平扩展和收缩、方便进行调度管理与资源的分配。 16 | - Pod中的容器共享相同的数据和网络地址空间,Pod之间也进行了统一的资源管理与分配。 17 | 18 | ### 1.3 常用Pod管理命令 19 | Pod的配置信息中有几个重要部分,apiVersion、kind、metadata、spec以及status。其中```apiVersion```和```kind```是比较固定的,```status```是运行时的状态,所以最重要的就是```metadata```和```spec```两个部分。 20 | 21 | 先来看一个典型的配置文件,命名为 first-pod.yml 22 | ```yaml 23 | apiVersion: v1 24 | kind: Pod 25 | metadata: 26 | name: first-pod 27 | labels: 28 | app: bash 29 | tir: backend 30 | spec: 31 | containers: 32 | - name: bash-container 33 | image: docker.io/busybox 34 | command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] 35 | ``` 36 | 37 | 在编写配置文件时,可以通过[API Reference](https://kubernetes.io/docs/reference/)来参考,也可以通过命令查看。 38 | ```sh 39 | [root@devops-101 ~]# kubectl explain pod 40 | KIND: Pod 41 | VERSION: v1 42 | 43 | DESCRIPTION: 44 | Pod is a collection of containers that can run on a host. This resource is 45 | created by clients and scheduled onto hosts. 46 | 47 | FIELDS: 48 | apiVersion 49 | APIVersion defines the versioned schema of this representation of an 50 | object. Servers should convert recognized schemas to the latest internal 51 | value, and may reject unrecognized values. More info: 52 | https://git.k8s.io/community/contributors/devel/api-conventions.md#resources 53 | 54 | kind 55 | Kind is a string value representing the REST resource this object 56 | represents. Servers may infer this from the endpoint the client submits 57 | requests to. Cannot be updated. In CamelCase. More info: 58 | https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds 59 | 60 | metadata 61 | Standard object's metadata. More info: 62 | https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata 63 | 64 | spec 65 | Specification of the desired behavior of the pod. More info: 66 | https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status 67 | 68 | status 69 | Most recently observed status of the pod. This data may not be up to date. 70 | Populated by the system. Read-only. More info: 71 | https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status 72 | [root@devops-101 ~]# kubectl explain pod.spec 73 | KIND: Pod 74 | VERSION: v1 75 | 76 | RESOURCE: spec 77 | 78 | DESCRIPTION: 79 | Specification of the desired behavior of the pod. More info: 80 | https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status 81 | 82 | PodSpec is a description of a pod. 83 | 84 | FIELDS: 85 | activeDeadlineSeconds 86 | Optional duration in seconds the pod may be active on the node relative to 87 | StartTime before the system will actively try to mark it failed and kill 88 | associated containers. Value must be a positive integer. 
89 | 90 | affinity 91 | If specified, the pod's scheduling constraints 92 | 93 | automountServiceAccountToken 94 | AutomountServiceAccountToken indicates whether a service account token 95 | should be automatically mounted. 96 | ``` 97 | 98 | #### 1.3.1 创建 99 | 100 | 利用kubectl命令行管理工具,我们可以直接在命令行通过配置文件创建。如果安装了Dashboard图形管理界面,还可以通过图形界面创建Pod。因为最终Pod的创建都是落在命令上的,这里只介绍如何使用kubectl管理工具来创建。 101 | 102 | 使用配置文件的方式创建Pod。 103 | ```sh 104 | $ kubectl create -f first-pod.yml 105 | ``` 106 | 107 | #### 1.3.2 查看配置 108 | 如果想了解一个正在运行的Pod的配置,可以通过以下命令获取。 109 | ```sh 110 | [root@devops-101 ~]# kubectl get pod first-pod -o yaml 111 | apiVersion: v1 112 | kind: Pod 113 | metadata: 114 | creationTimestamp: 2018-08-08T01:45:16Z 115 | labels: 116 | app: bash 117 | name: first-pod 118 | namespace: default 119 | resourceVersion: "184988" 120 | selfLink: /api/v1/namespaces/default/pods/first-pod 121 | uid: b2d3d2b7-9aac-11e8-84f4-080027b7c4e9 122 | spec: 123 | containers: 124 | - command: 125 | - sh 126 | - -c 127 | - echo Hello Kubernetes! && sleep 3600 128 | image: docker.io/busybox 129 | imagePullPolicy: Always 130 | name: bash-container 131 | resources: {} 132 | terminationMessagePath: /dev/termination-log 133 | terminationMessagePolicy: File 134 | volumeMounts: 135 | - mountPath: /var/run/secrets/kubernetes.io/serviceaccount 136 | name: default-token-trvqv 137 | readOnly: true 138 | dnsPolicy: ClusterFirst 139 | nodeName: devops-102 140 | restartPolicy: Always 141 | schedulerName: default-scheduler 142 | securityContext: {} 143 | serviceAccount: default 144 | serviceAccountName: default 145 | terminationGracePeriodSeconds: 30 146 | tolerations: 147 | - effect: NoExecute 148 | key: node.kubernetes.io/not-ready 149 | operator: Exists 150 | tolerationSeconds: 300 151 | - effect: NoExecute 152 | key: node.kubernetes.io/unreachable 153 | operator: Exists 154 | tolerationSeconds: 300 155 | volumes: 156 | - name: default-token-trvqv 157 | secret: 158 | defaultMode: 420 159 | secretName: default-token-trvqv 160 | status: 161 | conditions: 162 | - lastProbeTime: null 163 | lastTransitionTime: 2018-08-08T01:45:16Z 164 | status: "True" 165 | type: Initialized 166 | - lastProbeTime: null 167 | lastTransitionTime: 2018-08-08T01:45:16Z 168 | message: 'containers with unready status: [bash-container]' 169 | reason: ContainersNotReady 170 | status: "False" 171 | type: Ready 172 | - lastProbeTime: null 173 | lastTransitionTime: null 174 | message: 'containers with unready status: [bash-container]' 175 | reason: ContainersNotReady 176 | status: "False" 177 | type: ContainersReady 178 | - lastProbeTime: null 179 | lastTransitionTime: 2018-08-08T01:45:16Z 180 | status: "True" 181 | type: PodScheduled 182 | containerStatuses: 183 | - image: docker.io/busybox 184 | imageID: "" 185 | lastState: {} 186 | name: bash-container 187 | ready: false 188 | restartCount: 0 189 | state: 190 | waiting: 191 | reason: ContainerCreating 192 | hostIP: 192.168.0.102 193 | phase: Pending 194 | qosClass: BestEffort 195 | startTime: 2018-08-08T01:45:16Z 196 | ``` 197 | 198 | #### 1.3.3 查看日志 199 | 可以查看命令行标准输出的日志。 200 | ```sh 201 | [root@devops-101 ~]# kubectl logs first-pod 202 | Hello Kubernetes! 
203 | ``` 204 | 如果Pod中有多个容器,查看特定容器的日志需要指定容器名称```kubectl logs pod-name -c container-name```。 205 | 206 | #### 1.3.4 标签管理 207 | 标签是Kubernetes管理Pod的重要依据,我们可以在Pod yaml文件中 metadata 中指定,也可以通过命令行进行管理。 208 | 209 | 显示Pod的标签 210 | ```sh 211 | [root@devops-101 ~]# kubectl get pods --show-labels 212 | NAME READY STATUS RESTARTS AGE LABELS 213 | first-pod 1/1 Running 0 15m app=bash 214 | ``` 215 | 使用 second-pod.yml 我们再创建一个包含两个标签的Pod。 216 | ```sh 217 | [root@devops-101 ~]# kubectl create -f first-pod.yml 218 | pod/second-pod created 219 | [root@devops-101 ~]# kubectl get pods --show-labels 220 | NAME READY STATUS RESTARTS AGE LABELS 221 | first-pod 1/1 Running 0 17m app=bash 222 | second-pod 0/1 ContainerCreating 0 20s app=bash,tir=backend 223 | ``` 224 | 根据标签来查询Pod。 225 | ```sh 226 | [root@devops-101 ~]# kubectl get pods -l tir=backend --show-labels 227 | NAME READY STATUS RESTARTS AGE LABELS 228 | second-pod 1/1 Running 0 1m app=bash,tir=backend 229 | ``` 230 | 增加标签 231 | ```sh 232 | [root@devops-101 ~]# kubectl label pod first-pod tir=frontend 233 | pod/first-pod labeled 234 | [root@devops-101 ~]# kubectl get pods --show-labels 235 | NAME READY STATUS RESTARTS AGE LABELS 236 | first-pod 1/1 Running 0 24m app=bash,tir=frontend 237 | second-pod 1/1 Running 0 7m app=bash,tir=backend 238 | ``` 239 | 修改标签 240 | ```sh 241 | [root@devops-101 ~]# kubectl label pod first-pod tir=unkonwn --overwrite 242 | pod/first-pod labeled 243 | [root@devops-101 ~]# kubectl get pods --show-labels 244 | NAME READY STATUS RESTARTS AGE LABELS 245 | first-pod 1/1 Running 0 25m app=bash,tir=unkonwn 246 | second-pod 1/1 Running 0 8m app=bash,tir=backend 247 | ``` 248 | 249 | 可以将标签显示为列 250 | ```sh 251 | [root@devops-101 ~]# kubectl get pods -L app,tir 252 | NAME READY STATUS RESTARTS AGE APP TIR 253 | first-pod 1/1 Running 0 26m bash unkonwn 254 | second-pod 1/1 Running 0 9m bash backend 255 | ``` 256 | 257 | 标签是Kubernetes中非常强大的一个功能,Node节点也可以增加标签,再利用Pod的标签选择器,可以将Pod分配到不同类型的Node上。 258 | 259 | #### 1.3.5 删除Pod 260 | ```sh 261 | [root@devops-101 ~]# kubectl delete pods first-pod 262 | pod "first-pod" deleted 263 | ``` 264 | 也可以根据标签选择器删除。 265 | ```sh 266 | [root@devops-101 ~]# kubectl delete pods -l tir=backend 267 | pod "second-pod" deleted 268 | ``` 269 | 270 | ### 1.4 Pod的生命周期 271 | 像单独的容器应用一样,Pod并不是持久运行的。Pod创建后,Kubernetes为其分配一个UID,并且通过Controller调度到Node中运行,然后Pod一直保持运行状态直到运行正常结束或者被删除。在Node发生故障时,Controller负责将其调度到其他的Node中。Kubernetes为Pod定义了几种状态,分别如下: 272 | - Pending,Pod已创建,正在等待容器创建。经常是正在下载镜像,因为这一步骤最耗费时间。 273 | - Running,Pod已经绑定到某个Node并且正在运行。或者可能正在进行意外中断后的重启。 274 | - Succeeded,表示Pod中的容器已经正常结束并且不需要重启。 275 | - Failed,表示Pod中的容器遇到了错误而终止。 276 | - Unknown,因为网络或其他原因,无法获取Pod的状态。 277 | 278 | ## 2. 如何对Pod进行健康检查 279 | Kubernetes利用[Handler](https://godoc.org/k8s.io/kubernetes/pkg/api/v1#Handler)功能,可以对容器的状况进行探测,有以下三种形式。 280 | - ExecAction:在容器中执行特定的命令。 281 | - TCPSocketAction:检查容器端口是否可以连接。 282 | - HTTPGetAction:检查HTTP请求状态是否正常。 283 | 284 | 这部分内容展开来也比较多,后续计划单独写一篇来介绍。 285 | 286 | ## 3. Init Containers 287 | Pod中可以包含一到多个Init Container,在其他容器之前开始运行。Init Container 只能是运行到完成状态,即不能够一直存在。Init Container必须依次执行。在App Container运行前,所有的Init Container必须全部正常结束。 288 | 289 | 在Pod启动过程中,Init Container在网络和存储初始化完成后开始按顺序启动。Pod重启的时候,所有的Init Container都会重新执行。 290 | 291 | > However, if the Pod restartPolicy is set to Always, the Init Containers use RestartPolicy OnFailure. 
292 | 293 | ### 3.1 好处 294 | - 运行一些不希望在 App Container 中运行的命令或工具 295 | - 包含一些App Image中没有的工具或特定代码 296 | - 应用镜像构建人员和部署人员可以独立工作而不需要依赖对方 297 | - 拥有与App Container不同的命名空间 298 | - 因为在App Container运行前必须运行结束,适合做一些前置条件的检查和配置 299 | 300 | ### 3.2 语法 301 | 先看一下解释 302 | ```sh 303 | [root@devops-101 ~]# kubectl explain pod.spec.initContainers 304 | KIND: Pod 305 | VERSION: v1 306 | 307 | RESOURCE: initContainers <[]Object> 308 | 309 | DESCRIPTION: 310 | List of initialization containers belonging to the pod. Init containers are 311 | executed in order prior to containers being started. If any init container 312 | fails, the pod is considered to have failed and is handled according to its 313 | restartPolicy. The name for an init container or normal container must be 314 | unique among all containers. Init containers may not have Lifecycle 315 | actions, Readiness probes, or Liveness probes. The resourceRequirements of 316 | an init container are taken into account during scheduling by finding the 317 | highest request/limit for each resource type, and then using the max of of 318 | that value or the sum of the normal containers. Limits are applied to init 319 | containers in a similar fashion. Init containers cannot currently be added 320 | or removed. Cannot be updated. More info: 321 | https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ 322 | 323 | A single application container that you want to run within a pod. 324 | ``` 325 | 具体语法。 326 | ```yaml 327 | apiVersion: v1 328 | kind: Pod 329 | metadata: 330 | name: myapp-pod 331 | labels: 332 | app: myapp 333 | spec: 334 | containers: 335 | - name: myapp-container 336 | image: docker.io/busybox 337 | command: ['sh', '-c', 'echo The app is running! && sleep 3600'] 338 | initContainers: 339 | - name: init-myservice 340 | image: docker.io/busybox 341 | command: ['sh', '-c', 'echo init-service && sleep 2'] 342 | - name: init-mydb 343 | image: docker.io/busybox 344 | command: ['sh', '-c', 'echo init-mydb && sleep 2'] 345 | ``` 346 | > 兼容性问题 347 | > 1.5之前的语法都写在 annotation 中,1.6 以上的版本使用 ```.spec.initContainers``` 字段。建议还是使用 1.6 版本的语法。1.6、1.7的版本还兼容1.5以下的版本,1.8之后就不再兼容老版本了。 348 | 349 | 350 | ## 4. Pod Preset 351 | 利用这个特性,可以在Pod启动过程中向Pod中注入密码 Secrets、存储 Volumes、挂载点 Volume Mounts和环境变量。通过标签选择器来指定Pod。利用这个特性,Pod Template的维护人员就不需要为每个Pod显式地提供相关的属性。 352 | 353 | 具体的工作步骤 354 | - 检查所有可用的PodPresets 355 | - 检查是否有PodPreset的标签与即将创建的Pod相匹配 356 | - 将PodPreset中定义的参数与Pod定义合并 357 | - 如果参数合并出错,则丢弃PodPreset参数,继续创建Pod 358 | - 为Pod增加注解,表示曾被PodPreset修改过,形式为 ```podpreset.admission.kubernetes.io/podpreset-<名称>: ""``` 359 | 360 | 对于 ```Env```、```EnvFrom```、```VolumeMounts``` Kubernetes修改Container Spec,对于```Volume```修改Pod Spec。 361 | 362 | ### 4.1 对个别Pod停用 363 | 在Spec中增加注解: 364 | ```yaml 365 | podpreset.admission.kubernetes.io/exclude: "true" 366 | ``` 367 | 368 | ## 5. 中断 369 | Pod会因为各种各样的原因发生中断。 370 | 371 | ### 5.1 计划内中断 372 | - 删除部署 Deployment或者其他控制器 373 | - 更新部署模版导致的Pod重启 374 | - 直接删除Pod 375 | - 集群的缩容 376 | - 手工移除 377 | 378 | ### 5.2 计划外中断 379 | - 硬件故障、物理节点宕机 380 | - 集群管理员误删VM 381 | - 云供应商故障导致的主机不可用 382 | - Kernel panic 383 | - 集群网络分区导致节点消失 384 | - 资源耗尽导致的节点剔除 385 | 386 | 387 | ### 5.3 PDB(Pod Disruption Budgets) 388 | > Kubernetes offers features to help run highly available applications at the same time as frequent voluntary disruptions. We call this set of features Disruption Budgets. 389 | 390 | Kubernetes允许我们创建一个PDB对象,来确保一个RS中正常运行的Pod个数不会低于预算(个数)。 391 | 驱逐操作通过 Eviction API 进行,并且会遵守PDB的限制。 392 | 393 | PDB是用来解决集群管理和应用管理职责分离的情况,如果你的单位不存在这种情况,就可以不使用PDB。 394 | 395 | ## 参考资料 396 | 1. 
[Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/) 397 | 2. [Kubernetes in action](#) 398 | -------------------------------------------------------------------------------- /pod/first-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: first-pod 5 | labels: 6 | app: bash 7 | tir: backend 8 | spec: 9 | containers: 10 | - name: bash-container 11 | image: docker.io/busybox 12 | command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] 13 | -------------------------------------------------------------------------------- /pod/second-pod.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: second-pod 5 | labels: 6 | app: bash 7 | tir: backend 8 | spec: 9 | containers: 10 | - name: bash-container 11 | image: docker.io/busybox 12 | command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600'] 13 | -------------------------------------------------------------------------------- /preInstall.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | #关闭Selinux 3 | setenforce 0 4 | sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux 5 | 6 | #关闭防火墙 7 | systemctl stop firewalld 8 | systemctl disable firewalld 9 | 10 | #关闭Swap 11 | swapoff -a 12 | sed -i 's/.*swap.*/#&/' /etc/fstab 13 | 14 | #修改转发配置 15 | cat <<EOF > /etc/sysctl.d/k8s.conf 16 | net.bridge.bridge-nf-call-ip6tables = 1 17 | net.bridge.bridge-nf-call-iptables = 1 18 | EOF 19 | 20 | sysctl --system 21 | 22 | #添加yum源 23 | cat <<EOF > /etc/yum.repos.d/kubernetes.repo 24 | [kubernetes] 25 | name=Kubernetes 26 | baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/ 27 | enabled=1 28 | gpgcheck=1 29 | repo_gpgcheck=1 30 | gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 31 | EOF 32 | 33 | yum install -y epel-release 34 | yum install -y net-tools wget vim ntpdate 35 | yum install -y docker 36 | systemctl enable docker && systemctl start docker 37 | systemctl enable docker.service 38 | yum install -y kubelet kubeadm kubectl kubernetes-cni 39 | systemctl enable kubelet && systemctl start kubelet 40 | systemctl enable kubelet.service 41 | -------------------------------------------------------------------------------- /service/external-service-endpoints.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Endpoints 3 | metadata: 4 | name: external-service 5 | subsets: 6 | - addresses: 7 | - ip: 220.181.57.216 8 | - ip: 123.58.180.7 9 | ports: 10 | - port: 80 -------------------------------------------------------------------------------- /service/external-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: external-service 5 | spec: 6 | ports: 7 | - port: 80 8 | -------------------------------------------------------------------------------- /service/ingress/cafe-example.yaml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: extensions/v1beta1 5 | kind: Deployment 6 | metadata: 7 | name: coffee 8 | namespace: nginx-ingress 9 | spec: 10 | replicas: 2 11 | selector: 12 | matchLabels: 13 | app: coffee 14 | template: 15 | metadata: 
16 | labels: 17 | app: coffee 18 | spec: 19 | containers: 20 | - name: coffee 21 | image: nginxdemos/hello:plain-text 22 | ports: 23 | - containerPort: 80 24 | - apiVersion: v1 25 | kind: Service 26 | metadata: 27 | name: coffee-svc 28 | namespace: nginx-ingress 29 | spec: 30 | ports: 31 | - port: 80 32 | targetPort: 80 33 | protocol: TCP 34 | name: http 35 | selector: 36 | app: coffee 37 | - apiVersion: extensions/v1beta1 38 | kind: Deployment 39 | metadata: 40 | name: tea 41 | namespace: nginx-ingress 42 | spec: 43 | replicas: 3 44 | selector: 45 | matchLabels: 46 | app: tea 47 | template: 48 | metadata: 49 | labels: 50 | app: tea 51 | spec: 52 | containers: 53 | - name: tea 54 | image: nginxdemos/hello:plain-text 55 | ports: 56 | - containerPort: 80 57 | - apiVersion: v1 58 | kind: Service 59 | metadata: 60 | name: tea-svc 61 | namespace: nginx-ingress 62 | labels: 63 | spec: 64 | ports: 65 | - port: 80 66 | targetPort: 80 67 | protocol: TCP 68 | name: http 69 | selector: 70 | app: tea 71 | - apiVersion: v1 72 | kind: Secret 73 | metadata: 74 | name: cafe-secret 75 | namespace: nginx-ingress 76 | type: Opaque 77 | data: 78 | tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMakNDQWhZQ0NRREFPRjl0THNhWFdqQU5CZ2txaGtpRzl3MEJBUXNGQURCYU1Rc3dDUVlEVlFRR0V3SlYKVXpFTE1Ba0dBMVVFQ0F3Q1EwRXhJVEFmQmdOVkJBb01HRWx1ZEdWeWJtVjBJRmRwWkdkcGRITWdVSFI1SUV4MApaREViTUJrR0ExVUVBd3dTWTJGbVpTNWxlR0Z0Y0d4bExtTnZiU0FnTUI0WERURTRNRGt4TWpFMk1UVXpOVm9YCkRUSXpNRGt4TVRFMk1UVXpOVm93V0RFTE1Ba0dBMVVFQmhNQ1ZWTXhDekFKQmdOVkJBZ01Ba05CTVNFd0h3WUQKVlFRS0RCaEpiblJsY201bGRDQlhhV1JuYVhSeklGQjBlU0JNZEdReEdUQVhCZ05WQkFNTUVHTmhabVV1WlhoaApiWEJzWlM1amIyMHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDcDZLbjdzeTgxCnAwanVKL2N5ayt2Q0FtbHNmanRGTTJtdVpOSzBLdGVjcUcyZmpXUWI1NXhRMVlGQTJYT1N3SEFZdlNkd0kyaloKcnVXOHFYWENMMnJiNENaQ0Z4d3BWRUNyY3hkam0zdGVWaVJYVnNZSW1tSkhQUFN5UWdwaW9iczl4N0RsTGM2SQpCQTBaalVPeWwwUHFHOVNKZXhNVjczV0lJYTVyRFZTRjJyNGtTa2JBajREY2o3TFhlRmxWWEgySTVYd1hDcHRDCm42N0pDZzQyZitrOHdnemNSVnA4WFprWldaVmp3cTlSVUtEWG1GQjJZeU4xWEVXZFowZXdSdUtZVUpsc202OTIKc2tPcktRajB2a29QbjQxRUUvK1RhVkVwcUxUUm9VWTNyemc3RGtkemZkQml6Rk8yZHNQTkZ4MkNXMGpYa05MdgpLbzI1Q1pyT2hYQUhBZ01CQUFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFLSEZDY3lPalp2b0hzd1VCTWRMClJkSEliMzgzcFdGeW5acS9MdVVvdnNWQTU4QjBDZzdCRWZ5NXZXVlZycTVSSWt2NGxaODFOMjl4MjFkMUpINnIKalNuUXgrRFhDTy9USkVWNWxTQ1VwSUd6RVVZYVVQZ1J5anNNL05VZENKOHVIVmhaSitTNkZBK0NuT0Q5cm4yaQpaQmVQQ0k1ckh3RVh3bm5sOHl3aWozdnZRNXpISXV5QmdsV3IvUXl1aTlmalBwd1dVdlVtNG52NVNNRzl6Q1Y3ClBwdXd2dWF0cWpPMTIwOEJqZkUvY1pISWc4SHc5bXZXOXg5QytJUU1JTURFN2IvZzZPY0s3TEdUTHdsRnh2QTgKN1dqRWVxdW5heUlwaE1oS1JYVmYxTjM0OWVOOThFejM4Zk9USFRQYmRKakZBL1BjQytHeW1lK2lHdDVPUWRGaAp5UkU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K 79 | tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcWVpcCs3TXZOYWRJN2lmM01wUHJ3Z0pwYkg0N1JUTnBybVRTdENyWG5LaHRuNDFrCkcrZWNVTldCUU5semtzQndHTDBuY0NObzJhN2x2S2wxd2k5cTIrQW1RaGNjS1ZSQXEzTVhZNXQ3WGxZa1YxYkcKQ0pwaVJ6ejBza0lLWXFHN1BjZXc1UzNPaUFRTkdZMURzcGRENmh2VWlYc1RGZTkxaUNHdWF3MVVoZHErSkVwRwp3SStBM0kreTEzaFpWVng5aU9WOEZ3cWJRcCt1eVFvT05uL3BQTUlNM0VWYWZGMlpHVm1WWThLdlVWQ2cxNWhRCmRtTWpkVnhGbldkSHNFYmltRkNaYkp1dmRySkRxeWtJOUw1S0Q1K05SQlAvazJsUkthaTAwYUZHTjY4NE93NUgKYzMzUVlzeFR0bmJEelJjZGdsdEkxNURTN3lxTnVRbWF6b1Z3QndJREFRQUJBb0lCQVFDUFNkU1luUXRTUHlxbApGZlZGcFRPc29PWVJoZjhzSStpYkZ4SU91UmF1V2VoaEp4ZG01Uk9ScEF6bUNMeUw1VmhqdEptZTIyM2dMcncyCk45OUVqVUtiL1ZPbVp1RHNCYzZvQ0Y2UU5SNThkejhjbk9SVGV3Y290c0pSMXBuMWhobG5SNUhxSkpCSmFzazEKWkVuVVFmY1hackw5NGxvOUpIM0UrVXFqbzFGRnM4eHhFOHdvUEJxalpzVjdwUlVaZ0MzTGh4bndMU0V4eUZvNApjeGI5U09HNU9tQUpvelN0Rm9RMkdKT2VzOHJKNXFmZHZ5dGdnOXhiTGFRTC94MGtwUTYyQm9GTUJEZHFPZVBXCktmUDV6WjYvMDcvdnBqNDh5QTFRMzJQem9idWJzQkxkM0tjbjMyamZtMUU3cHJ0V2wrSmVPRmlPem5CUUZKYk4KNHFQVlJ6NWhBb0dCQU50V3l4aE5DU0x1NFArWGdLeWNrbGpKNkY1NjY4Zk5qNUN6Z0ZScUowOXpuMFRsc05ybwpGVExaY3hEcW5SM0hQWU00MkpFUmgySi9xREZaeW5SUW8zY2czb2VpdlVkQlZHWTgrRkkxVzBxZHViL0w5K3l1CmVkT1pUUTVYbUdHcDZyNmpleHltY0ppbS9Pc0IzWm5ZT3BPcmxEN1NQbUJ2ek5MazRNRjZneGJYQW9HQkFNWk8KMHA2SGJCbWNQMHRqRlhmY0tFNzdJbUxtMHNBRzR1SG9VeDBlUGovMnFyblRuT0JCTkU0TXZnRHVUSnp5K2NhVQprOFJxbWRIQ2JIelRlNmZ6WXEvOWl0OHNaNzdLVk4xcWtiSWN1YytSVHhBOW5OaDFUanNSbmU3NFowajFGQ0xrCmhIY3FIMHJpN1BZU0tIVEU4RnZGQ3haWWRidUI4NENtWmlodnhicFJBb0dBSWJqcWFNWVBUWXVrbENkYTVTNzkKWVNGSjFKelplMUtqYS8vdER3MXpGY2dWQ0thMzFqQXdjaXowZi9sU1JxM0hTMUdHR21lemhQVlRpcUxmZVpxYwpSMGlLYmhnYk9jVlZrSkozSzB5QXlLd1BUdW14S0haNnpJbVpTMGMwYW0rUlk5WUdxNVQ3WXJ6cHpjZnZwaU9VCmZmZTNSeUZUN2NmQ21mb09oREN0enVrQ2dZQjMwb0xDMVJMRk9ycW40M3ZDUzUxemM1em9ZNDR1QnpzcHd3WU4KVHd2UC9FeFdNZjNWSnJEakJDSCtULzZzeXNlUGJKRUltbHpNK0l3eXRGcEFOZmlJWEV0LzQ4WGY2ME54OGdXTQp1SHl4Wlp4L05LdER3MFY4dlgxUE9ucTJBNWVpS2ErOGpSQVJZS0pMWU5kZkR1d29seHZHNmJaaGtQaS80RXRUCjNZMThzUUtCZ0h0S2JrKzdsTkpWZXN3WEU1Y1VHNkVEVXNEZS8yVWE3ZlhwN0ZjanFCRW9hcDFMU3crNlRYcDAKWmdybUtFOEFSek00NytFSkhVdmlpcS9udXBFMTVnMGtKVzNzeWhwVTl6WkxPN2x0QjBLSWtPOVpSY21Vam84UQpjcExsSE1BcWJMSjhXWUdKQ2toaVd4eWFsNmhZVHlXWTRjVmtDMHh0VGwvaFVFOUllTktvCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== 80 | - apiVersion: extensions/v1beta1 81 | kind: Ingress 82 | metadata: 83 | name: cafe-ingress 84 | namespace: nginx-ingress 85 | spec: 86 | tls: 87 | - hosts: 88 | - cafe.example.com 89 | secretName: cafe-secret 90 | rules: 91 | - host: cafe.example.com 92 | http: 93 | paths: 94 | - path: /tea 95 | backend: 96 | serviceName: tea-svc 97 | servicePort: 80 98 | - path: /coffee 99 | backend: 100 | serviceName: coffee-svc 101 | servicePort: 80 -------------------------------------------------------------------------------- /service/ingress/httpd-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: httpd 5 | # namespace: nginx-ingress 6 | spec: 7 | replicas: 1 8 | template: 9 | metadata: 10 | name: httpd 11 | labels: 12 | app: httpd 13 | spec: 14 | containers: 15 | - image: docker.io/httpd 16 | name: httpd 17 | ports: 18 | - name: http 19 | containerPort: 80 20 | -------------------------------------------------------------------------------- /service/ingress/httpd-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: httpd-service 5 | # namespace: nginx-ingress 6 | spec: 7 | ports: 8 | 
- name: http 9 | port: 80 10 | selector: 11 | app: httpd 12 | -------------------------------------------------------------------------------- /service/ingress/ingress-install.yaml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: Namespace 6 | metadata: 7 | name: nginx-ingress 8 | - apiVersion: v1 9 | kind: ServiceAccount 10 | metadata: 11 | name: nginx-ingress 12 | namespace: nginx-ingress 13 | - apiVersion: v1 14 | kind: Secret 15 | metadata: 16 | name: default-server-secret 17 | namespace: nginx-ingress 18 | type: Opaque 19 | data: 20 | tls.crt: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURTVENDQWpHZ0F3SUJBZ0lKQUs5L2NDNWZocDJHTUEwR0NTcUdTSWIzRFFFQkJRVUFNQ0V4SHpBZEJnTlYKQkFNVEZrNUhTVTVZU1c1bmNtVnpjME52Ym5SeWIyeHNaWEl3SGhjTk1UY3dPRE14TVRBeE16UTRXaGNOTVRndwpPRE14TVRBeE16UTRXakFoTVI4d0hRWURWUVFERXhaT1IwbE9XRWx1WjNKbGMzTkRiMjUwY205c2JHVnlNSUlCCklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF0bXhhMDhadExIaWxleWhOUWN5OUl4ankKWTBYdy9CRmZvM3duMDRsSXRoaGRxbkZ3NTZIVG1RVjIvbnEyRUxMdTNoejNjc3Urc3M5WFEzL3BrbXVwTEE5TApuaVVRZFVNcER4VlE1VFFKRW5CanJ5aXc4RWFlcEp4NUNCYVB5V3ZSZkpPb0pFSW56ZmNaYnE4OEVmQklYOHdtClFCa0xlcnFTVmRYWjBXR3FINVVQVlVZMVBqZXBqSXAyZ0NvbDRMUjM1aHRlSk9OMmZVTEF6cmRGMDBDT092WGsKUzgwRGw5eHdoUkVwVWVySGNuNXZod3BJazNkY3FNS3BxWTY2elF3dStMcFJEM3ZVWjR0eC9VYnlUdStkMkdhVwpWaG1RLy85RmtzUzVBS1d2ZXkrK3pPUTFDZTAxNzhDU0hRYXRDaWFuU2lTT3lwakZtTUZ0N1Mra25pbm9Xd0lECkFRQUJvNEdETUlHQU1CMEdBMVVkRGdRV0JCUlFUODVHRzV6a0QxV3FNSzZvOW8xWWFqUVBXVEJSQmdOVkhTTUUKU2pCSWdCUlFUODVHRzV6a0QxV3FNSzZvOW8xWWFqUVBXYUVscENNd0lURWZNQjBHQTFVRUF4TVdUa2RKVGxoSgpibWR5WlhOelEyOXVkSEp2Ykd4bGNvSUpBSzkvY0M1ZmhwMkdNQXdHQTFVZEV3UUZNQU1CQWY4d0RRWUpLb1pJCmh2Y05BUUVGQlFBRGdnRUJBSTIxcXpDN0lIYTEzblNvRkMxVFdtSUZydjQ2L2hRSFRjSFhxazRXZW16Z3VwVW8Kdmp0R05DVFlaR1VtL3RZY1FobDZvOXVJZlV5N3NlVS9OeWVCWHpOdGFiQUczQUIzanREVUJySy9xeVJ5cDZjRApIL0MzNmd5VFh3OGJxYVdOSzg0VGhYOVg2MFVFNVE2NzFUQUJMbk9paEhKUVVxTHdRc1VkdEkxRHBQb1BOOFlWCm5YQVl1RXJKWTVRckhzdHZoOFNZM2xoV3BSOWJ0eTVySldweUhIM3NDL1lHN2lFam5TUXp2LzdhK3cxTW1RQ0EKTk1wQnFvdzJKZkdveklyV2JvcFBVR2lmZ2szSjBKT24rcnA4RDRVc1lvNEo4Y3RvVk5qUFdmeU9zczB6ZWZ2aQpyUmVEUDdJOXc5THF1eERIRUhzeUpMUXN0MzNlQWlna1FBQU9zMUU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K 21 | tls.key: 
LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdG14YTA4WnRMSGlsZXloTlFjeTlJeGp5WTBYdy9CRmZvM3duMDRsSXRoaGRxbkZ3CjU2SFRtUVYyL25xMkVMTHUzaHozY3N1K3NzOVhRMy9wa211cExBOUxuaVVRZFVNcER4VlE1VFFKRW5CanJ5aXcKOEVhZXBKeDVDQmFQeVd2UmZKT29KRUluemZjWmJxODhFZkJJWDh3bVFCa0xlcnFTVmRYWjBXR3FINVVQVlVZMQpQamVwaklwMmdDb2w0TFIzNWh0ZUpPTjJmVUxBenJkRjAwQ09PdlhrUzgwRGw5eHdoUkVwVWVySGNuNXZod3BJCmszZGNxTUtwcVk2NnpRd3UrTHBSRDN2VVo0dHgvVWJ5VHUrZDJHYVdWaG1RLy85RmtzUzVBS1d2ZXkrK3pPUTEKQ2UwMTc4Q1NIUWF0Q2lhblNpU095cGpGbU1GdDdTK2tuaW5vV3dJREFRQUJBb0lCQVFDQ002UkFNd2dKRGJOTwp5OTBZY2NFdEk4a2RBZmFXY3ZBSUI3MkZSaDhYbVJ5QllxWnJMUjJSd2t6RUpXRjlXYmtUM3lqZVRuMjFzamRlCmZoVi81RWZDb3NnZC8rWlhTN0FxaTlSSlEzS1dMcEYzbTF0dW8zam5sS2J1RnV4Wm54TE9EN1dhNjN6dGpNZ2kKTUFCMzdVQTYzOE1OVE5MY3JmMTBOa1paSTVRQkpYWWNPRk1ueDJ4MXVLRkU5RHQzWUEzbE9nOWNGdmFJTFpEQQo3WTVHVDlmUXdJQS92OGRWRU1DTkNiSzI1b1dnRG90WUdZaUhiYm1hUk9DTkRpNzVQZFpkM2daQ3IxUHFPWEZHCkJaVEh1L3Q4OXMwV1QyUkpNV2ljVW5XV0oyVHhmRWU1YUQ4R0JjRzEyN0pkamxLSitWZCtHWmxvODVYYVBvdnUKTVFxek1nbUJBb0dCQU9IS1pGbzVnSVkzL0J3aElCZ2RGUytnOG1GK21JTWpxSGVMN1NFSTNYL0UzWjhJd0syUgpmTTVFRUpTZnlETFpDVkNlSS8veWhBOUF6dG9Dam12TzdjMUxJT3kwR3k5dFlJVHlYY0xQNWNBWitBTkJCRExFCitYZkx5SE9KVXBDM2o4RFRZWDF0RENiUGJ5UFZTZENUNHNKT2JrNDVZVXQ3a3pEYTVHSFpsL3hqQW9HQkFNN1UKayt6TE5zbFQ2azJaakJaZW81YUdoMUNCSVV4bzNFNVpGYUZWR2lyMSs4NVlkVDdXVEpublJ6K0l6QXBMMmRqZApPZjVlQS9wa3JVNExMeGMzVVNEYjJwczJuT1hQd1p1OWdqRTM3aml0SUFRd3BHL3FiamQ3Y1ZaR2hlUkQyK3l4ClptTWU3c1BCZEVmcldmK1REYU9lT3B4L2RRcnFyTEc2UXo1ZHlQbXBBb0dBVmsyZ0VnU01wY0RjY253TzRtaXIKWW1zb2VpK0RhQXpISmZxc0JzWjJzNUd5REVteUxDWENDSzFua1FlSjVEV2xJOVZ1ZVRSZldkMHhzNDdxbFRhaApHcWt1eW9zRklSbXpuTjF2RFRtZDNkR1BSTjhqRmF6SWxndWtjTlQ2WkNwbG5oU3QzTjFEbWNvTDl5eGRiSVk2ClZIN2FGcmhFQWpBWDBNSzZMTlNaRFhVQ2dZQlRYc3JWeTBBbFBTY1g2b25XUm9Xb1drZlhBb1lhbDdZZCtyakcKVkZoODhyUnlnNk9YRmFqQTdNSUNjVERXQWFjcFRGdGhGaUtDWHV5Z3BjOXdpMEt2ZlErTU95SlpYRHBOZmNFcAo5OEtWbyt0ZzVQNlRnaXExUUpQNTArbUtqblBxMzhOR3R5UkZVZ2grS1BjWkZ2eUxkRzlwdjlLOCtNVnR5b2ZxCmJzRmhLUUtCZ0NvcEg5Wm95MjJBNStLcnJYZmQ0VXRBcndjN0dVanFUT1hhTzgyd3FpU0hZMndPTGdkWWw0L3kKSDJEYy9EMWxmWS9GL09sckNMZDNpL0lLc0wxNG13R2dxODZRdDhxeTIwcWw4RFNyWG91TmhsQTJmL1ZUTk1SMAp2OXAwU1JrQjI2UVYyUitndnNVYk9xb1lhMlVQVkNuQW9QeTYwTXlBaVJUR3cyeTExbm9lCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg== 22 | - kind: ConfigMap 23 | apiVersion: v1 24 | metadata: 25 | name: nginx-config 26 | namespace: nginx-ingress 27 | data: 28 | - kind: ClusterRole 29 | apiVersion: rbac.authorization.k8s.io/v1beta1 30 | metadata: 31 | name: nginx-ingress 32 | rules: 33 | - apiGroups: 34 | - "" 35 | resources: 36 | - services 37 | - endpoints 38 | verbs: 39 | - get 40 | - list 41 | - watch 42 | - apiGroups: 43 | - "" 44 | resources: 45 | - secrets 46 | verbs: 47 | - get 48 | - list 49 | - watch 50 | - apiGroups: 51 | - "" 52 | resources: 53 | - configmaps 54 | verbs: 55 | - get 56 | - list 57 | - watch 58 | - update 59 | - create 60 | - apiGroups: 61 | - "" 62 | resources: 63 | - pods 64 | verbs: 65 | - list 66 | - apiGroups: 67 | - "" 68 | resources: 69 | - events 70 | verbs: 71 | - create 72 | - patch 73 | - apiGroups: 74 | - extensions 75 | resources: 76 | - ingresses 77 | verbs: 78 | - list 79 | - watch 80 | - get 81 | - apiGroups: 82 | - "extensions" 83 | resources: 84 | - ingresses/status 85 | verbs: 86 | - update 87 | - kind: ClusterRoleBinding 88 | apiVersion: rbac.authorization.k8s.io/v1beta1 89 | metadata: 90 | name: nginx-ingress 91 | subjects: 92 | - kind: ServiceAccount 93 | name: nginx-ingress 94 | namespace: nginx-ingress 95 | roleRef: 96 | kind: ClusterRole 97 | name: nginx-ingress 98 | apiGroup: 
rbac.authorization.k8s.io 99 | - apiVersion: extensions/v1beta1 100 | kind: Deployment 101 | metadata: 102 | name: nginx-ingress 103 | namespace: nginx-ingress 104 | spec: 105 | replicas: 1 106 | selector: 107 | matchLabels: 108 | app: nginx-ingress 109 | template: 110 | metadata: 111 | labels: 112 | app: nginx-ingress 113 | spec: 114 | serviceAccountName: nginx-ingress 115 | containers: 116 | - image: nginx/nginx-ingress:1.3.0 117 | name: nginx-ingress 118 | ports: 119 | - name: http 120 | containerPort: 80 121 | - name: https 122 | containerPort: 443 123 | env: 124 | - name: POD_NAMESPACE 125 | valueFrom: 126 | fieldRef: 127 | fieldPath: metadata.namespace 128 | - name: POD_NAME 129 | valueFrom: 130 | fieldRef: 131 | fieldPath: metadata.name 132 | args: 133 | - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config 134 | - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret 135 | - -report-ingress-status 136 | - apiVersion: v1 137 | kind: Service 138 | metadata: 139 | name: nginx-ingress 140 | namespace: nginx-ingress 141 | spec: 142 | type: NodePort 143 | ports: 144 | - port: 80 145 | targetPort: 80 146 | protocol: TCP 147 | name: http 148 | - port: 443 149 | targetPort: 443 150 | protocol: TCP 151 | name: https 152 | selector: 153 | app: nginx-ingress -------------------------------------------------------------------------------- /service/ingress/k8s-ingress-nginx.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: ingress-nginx 5 | 6 | --- 7 | 8 | kind: ConfigMap 9 | apiVersion: v1 10 | metadata: 11 | name: nginx-configuration 12 | namespace: ingress-nginx 13 | labels: 14 | app.kubernetes.io/name: ingress-nginx 15 | app.kubernetes.io/part-of: ingress-nginx 16 | 17 | --- 18 | 19 | apiVersion: v1 20 | kind: ServiceAccount 21 | metadata: 22 | name: nginx-ingress-serviceaccount 23 | namespace: ingress-nginx 24 | labels: 25 | app.kubernetes.io/name: ingress-nginx 26 | app.kubernetes.io/part-of: ingress-nginx 27 | 28 | --- 29 | apiVersion: rbac.authorization.k8s.io/v1beta1 30 | kind: ClusterRole 31 | metadata: 32 | name: nginx-ingress-clusterrole 33 | labels: 34 | app.kubernetes.io/name: ingress-nginx 35 | app.kubernetes.io/part-of: ingress-nginx 36 | rules: 37 | - apiGroups: 38 | - "" 39 | resources: 40 | - configmaps 41 | - endpoints 42 | - nodes 43 | - pods 44 | - secrets 45 | verbs: 46 | - list 47 | - watch 48 | - apiGroups: 49 | - "" 50 | resources: 51 | - nodes 52 | verbs: 53 | - get 54 | - apiGroups: 55 | - "" 56 | resources: 57 | - services 58 | verbs: 59 | - get 60 | - list 61 | - watch 62 | - apiGroups: 63 | - "extensions" 64 | resources: 65 | - ingresses 66 | verbs: 67 | - get 68 | - list 69 | - watch 70 | - apiGroups: 71 | - "" 72 | resources: 73 | - events 74 | verbs: 75 | - create 76 | - patch 77 | - apiGroups: 78 | - "extensions" 79 | resources: 80 | - ingresses/status 81 | verbs: 82 | - update 83 | 84 | --- 85 | apiVersion: rbac.authorization.k8s.io/v1beta1 86 | kind: Role 87 | metadata: 88 | name: nginx-ingress-role 89 | namespace: ingress-nginx 90 | labels: 91 | app.kubernetes.io/name: ingress-nginx 92 | app.kubernetes.io/part-of: ingress-nginx 93 | rules: 94 | - apiGroups: 95 | - "" 96 | resources: 97 | - configmaps 98 | - pods 99 | - secrets 100 | - namespaces 101 | verbs: 102 | - get 103 | - apiGroups: 104 | - "" 105 | resources: 106 | - configmaps 107 | resourceNames: 108 | # Defaults to "-" 109 | # Here: "-" 110 | # This has to be adapted if you change either 
parameter 111 | # when launching the nginx-ingress-controller. 112 | - "ingress-controller-leader-nginx" 113 | verbs: 114 | - get 115 | - update 116 | - apiGroups: 117 | - "" 118 | resources: 119 | - configmaps 120 | verbs: 121 | - create 122 | - apiGroups: 123 | - "" 124 | resources: 125 | - endpoints 126 | verbs: 127 | - get 128 | 129 | --- 130 | apiVersion: rbac.authorization.k8s.io/v1beta1 131 | kind: RoleBinding 132 | metadata: 133 | name: nginx-ingress-role-nisa-binding 134 | namespace: ingress-nginx 135 | labels: 136 | app.kubernetes.io/name: ingress-nginx 137 | app.kubernetes.io/part-of: ingress-nginx 138 | roleRef: 139 | apiGroup: rbac.authorization.k8s.io 140 | kind: Role 141 | name: nginx-ingress-role 142 | subjects: 143 | - kind: ServiceAccount 144 | name: nginx-ingress-serviceaccount 145 | namespace: ingress-nginx 146 | 147 | --- 148 | apiVersion: rbac.authorization.k8s.io/v1beta1 149 | kind: ClusterRoleBinding 150 | metadata: 151 | name: nginx-ingress-clusterrole-nisa-binding 152 | labels: 153 | app.kubernetes.io/name: ingress-nginx 154 | app.kubernetes.io/part-of: ingress-nginx 155 | roleRef: 156 | apiGroup: rbac.authorization.k8s.io 157 | kind: ClusterRole 158 | name: nginx-ingress-clusterrole 159 | subjects: 160 | - kind: ServiceAccount 161 | name: nginx-ingress-serviceaccount 162 | namespace: ingress-nginx 163 | 164 | --- 165 | 166 | apiVersion: extensions/v1beta1 167 | kind: Deployment 168 | metadata: 169 | name: nginx-ingress-controller 170 | namespace: ingress-nginx 171 | labels: 172 | app.kubernetes.io/name: ingress-nginx 173 | app.kubernetes.io/part-of: ingress-nginx 174 | spec: 175 | replicas: 1 176 | selector: 177 | matchLabels: 178 | app.kubernetes.io/name: ingress-nginx 179 | app.kubernetes.io/part-of: ingress-nginx 180 | template: 181 | metadata: 182 | labels: 183 | app.kubernetes.io/name: ingress-nginx 184 | app.kubernetes.io/part-of: ingress-nginx 185 | annotations: 186 | prometheus.io/port: "10254" 187 | prometheus.io/scrape: "true" 188 | spec: 189 | hostNetwork: true 190 | serviceAccountName: nginx-ingress-serviceaccount 191 | containers: 192 | - name: nginx-ingress-controller 193 | image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0 194 | args: 195 | - /nginx-ingress-controller 196 | - --configmap=$(POD_NAMESPACE)/nginx-configuration 197 | - --publish-service=$(POD_NAMESPACE)/ingress-nginx 198 | - --annotations-prefix=nginx.ingress.kubernetes.io 199 | securityContext: 200 | capabilities: 201 | drop: 202 | - ALL 203 | add: 204 | - NET_BIND_SERVICE 205 | # www-data -> 33 206 | runAsUser: 33 207 | env: 208 | - name: POD_NAME 209 | valueFrom: 210 | fieldRef: 211 | fieldPath: metadata.name 212 | - name: POD_NAMESPACE 213 | valueFrom: 214 | fieldRef: 215 | fieldPath: metadata.namespace 216 | ports: 217 | - name: http 218 | containerPort: 80 219 | - name: https 220 | containerPort: 443 221 | livenessProbe: 222 | failureThreshold: 3 223 | httpGet: 224 | path: /healthz 225 | port: 10254 226 | scheme: HTTP 227 | initialDelaySeconds: 10 228 | periodSeconds: 10 229 | successThreshold: 1 230 | timeoutSeconds: 1 231 | readinessProbe: 232 | failureThreshold: 3 233 | httpGet: 234 | path: /healthz 235 | port: 10254 236 | scheme: HTTP 237 | periodSeconds: 10 238 | successThreshold: 1 239 | timeoutSeconds: 1 240 | 241 | --- 242 | -------------------------------------------------------------------------------- /service/ingress/tomcat-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
apps/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: tomcat 5 | spec: 6 | replicas: 1 7 | template: 8 | metadata: 9 | name: tomcat 10 | labels: 11 | app: tomcat 12 | spec: 13 | containers: 14 | - image: docker.io/tomcat 15 | name: tomcat 16 | ports: 17 | - name: http 18 | containerPort: 8080 19 | -------------------------------------------------------------------------------- /service/ingress/tomcat-ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Ingress 3 | metadata: 4 | name: tomcat-ingress 5 | # namespace: nginx-ingress 6 | annotations: 7 | nginx.ingress.kubernetes.io/rewrite-target: / 8 | spec: 9 | rules: 10 | - host: ingressweb.com 11 | http: 12 | paths: 13 | - path: / 14 | backend: 15 | serviceName: tomcat-service 16 | servicePort: 8080 17 | - path: /httpd 18 | backend: 19 | serviceName: httpd-service 20 | servicePort: 80 21 | -------------------------------------------------------------------------------- /service/ingress/tomcat-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: tomcat-service 5 | # namespace: nginx-ingress 6 | spec: 7 | ports: 8 | - name: http 9 | port: 8080 10 | selector: 11 | app: tomcat 12 | -------------------------------------------------------------------------------- /service/svc-nodeport.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nodeport 5 | spec: 6 | type: NodePort 7 | ports: 8 | - port: 80 9 | targetPort: 8080 10 | nodePort: 30123 11 | selector: 12 | app: httpd -------------------------------------------------------------------------------- /statefulset/README.md: -------------------------------------------------------------------------------- 1 | ## StatefulSet 介绍 2 | -------------------------------------------------------------------------------- /statefulset/es-pv.yml: -------------------------------------------------------------------------------- 1 | kind: List 2 | apiVersion: v1 3 | items: 4 | - apiVersion: v1 5 | kind: PersistentVolume 6 | metadata: 7 | name: es-storage-pv-01 8 | spec: 9 | capacity: 10 | storage: 100Mi 11 | volumeMode: Filesystem 12 | accessModes: ["ReadWriteOnce"] 13 | persistentVolumeReclaimPolicy: Delete 14 | storageClassName: local-storage 15 | local: 16 | path: /home/es 17 | nodeAffinity: 18 | required: 19 | nodeSelectorTerms: 20 | - matchExpressions: 21 | - key: kubernetes.io/hostname 22 | operator: In 23 | values: 24 | - devops-102 25 | - devops-103 26 | - apiVersion: v1 27 | kind: PersistentVolume 28 | metadata: 29 | name: es-storage-pv-02 30 | spec: 31 | capacity: 32 | storage: 100Mi 33 | volumeMode: Filesystem 34 | accessModes: ["ReadWriteOnce"] 35 | persistentVolumeReclaimPolicy: Delete 36 | storageClassName: local-storage 37 | local: 38 | path: /home/es01 39 | nodeAffinity: 40 | required: 41 | nodeSelectorTerms: 42 | - matchExpressions: 43 | - key: kubernetes.io/hostname 44 | operator: In 45 | values: 46 | - devops-102 47 | - devops-103 48 | -------------------------------------------------------------------------------- /statefulset/ss-nginx.yml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx 5 | labels: 6 | app: nginx 7 | spec: 8 | ports: 9 | - port: 80 10 | name: web 11 | clusterIP: None 12 | selector: 13 | app: 
nginx 14 | --- 15 | apiVersion: apps/v1 16 | kind: StatefulSet 17 | metadata: 18 | name: web 19 | spec: 20 | serviceName: "nginx" 21 | replicas: 2 22 | selector: 23 | matchLabels: 24 | app: nginx 25 | template: 26 | metadata: 27 | labels: 28 | app: nginx 29 | spec: 30 | containers: 31 | - name: nginx 32 | image: docker.io/nginx 33 | ports: 34 | - containerPort: 80 35 | name: web 36 | volumeMounts: 37 | - name: www 38 | mountPath: /usr/share/nginx/html 39 | volumeClaimTemplates: 40 | - metadata: 41 | name: www 42 | spec: 43 | accessModes: ["ReadWriteOnce"] 44 | volumeMode: Filesystem 45 | resources: 46 | requests: 47 | storage: 50Mi 48 | storageClassName: local-storage 49 | -------------------------------------------------------------------------------- /storage/README.md: -------------------------------------------------------------------------------- 1 | 一张图搞懂Kubernetes存储系统。 2 | 3 | > 本文环境为Kubernetes V1.11,操作系统版本为 CentOs 7.3,Kubernetes集群安装可以参考 [kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | 容器中的存储都是临时的,因此Pod重启的时候,内部的数据会发生丢失。实际应用中,我们有些应用是无状态,有些应用则需要保持状态数据,确保Pod重启之后能够读取到之前的状态数据,有些应用则作为集群提供服务。这三种服务归纳为无状态服务、有状态服务以及有状态的集群服务,其中后面两个存在数据保存与共享的需求,因此就要采用容器外的存储方案。 6 | 7 | Kubernetes中存储中有四个重要的概念:Volume、PersistentVolume PV、PersistentVolumeClaim PVC、StorageClass。掌握了这四个概念,就掌握了Kubernetes中存储系统的核心。我用一张图来说明这四者之间的关系。 8 | 9 | ![](./storage_basic.png) 10 | 11 | - Volumes是最基础的存储抽象,其支持多种类型,包括本地存储、NFS、FC以及众多的云存储,我们也可以编写自己的存储插件来支持特定的存储系统。Volume可以被Pod直接使用,也可以被PV使用。普通的Volume和Pod之间是一种静态的绑定关系,在定义Pod的同时,通过```volume```属性来定义存储的类型,通过```volumeMount```来定义容器内的挂载点。 12 | - PersistentVolume。与普通的Volume不同,PV是Kubernetes中的一个资源对象,创建一个PV相当于创建了一个存储资源对象,这个资源的使用要通过PVC来请求。 13 | - PersistentVolumeClaim。PVC是用户对存储资源PV的请求,根据PVC中指定的条件Kubernetes动态的寻找系统中的PV资源并进行绑定。目前PVC与PV匹配可以通过```StorageClassName```、```matchLabels```或者```matchExpressions```三种方式。 14 | - StorageClass。 15 | 16 | 总结一句话就是,Volumes可以直接使用,但是与存储类型有强绑定关系,PV、PVC将Pod挂载与具体存储类型进行了解耦,StorageClass提供了自动存储供给的定义机制。 17 | 18 | ## Volumes 19 | Docker提供了[Volumes](https://docs.docker.com/engine/admin/volumes/),Volume 是磁盘上的文件夹并且没有生命周期的管理。Kubernetes 中的 Volume 是存储的抽象,并且能够为Pod提供多种存储解决方案。Volume 最终会映射为Pod中容器可访问的一个文件夹或裸设备,但是背后的实现方式可以有很多种。 20 | 21 | ### Volumes的类型 22 | - [cephfs](https://github.com/kubernetes/examples/tree/master/staging/volumes/cephfs/) 23 | - configMap、secret 24 | - emptyDir 25 | - hostPath 26 | - local 27 | - nfs 28 | - persistentVolumeClaim 29 | 30 | 实际上Volume还支持gitRepo、gcePersistentDisk、awsElasticBlockStore、azureDisk等类型的存储,但是在本地的测试环境很少用到,就不做介绍了,感兴趣的可以去官方了解。 31 | 32 | ### emptyDir 33 | emptyDir在Pod被分配到Node上之后创建,并且在Pod运行期间一直存在,即它的生命周期和Pod一致。初始的时候为一个空文件夹,当Pod从Node中移除时,emptyDir将被永久删除。Container的意外退出并不会导致emptyDir被删除。emptyDir适用于一些临时存放数据的场景。默认情况下,emptyDir存储在Node支持的介质上,不管是磁盘、SSD还是网络存储,也可以设置为```Memory```。emptyDir特别适合在Pod内的不同容器间共享临时文件。 34 | 35 | ```yaml 36 | apiVersion: v1 37 | kind: Pod 38 | metadata: 39 | name: tomcat-ccb 40 | namespace: default 41 | labels: 42 | app: tomcat 43 | node: devops-103 44 | spec: 45 | containers: 46 | - name: tomcat 47 | image: docker.io/tomcat 48 | volumeMounts: 49 | - name: tomcat-storage 50 | mountPath: /data/tomcat 51 | - name: cache-storage 52 | mountPath: /data/cache 53 | ports: 54 | - containerPort: 8080 55 | protocol: TCP 56 | env: 57 | - name: GREETING 58 | value: "Hello from devops-103" 59 | volumes: 60 | - name: tomcat-storage 61 | hostPath: 62 | path: /home/es 63 | - name: cache-storage 64 | emptyDir: {} 65 | ``` 66 | 67 | 可以将emptyDir指定到内存中,配置如下: 68 | ```yaml 69 | volumes: 70 | - name: html 71 | 
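  # 补充注释:指定 medium: Memory 后,emptyDir 会以 tmpfs 的形式挂载
  # 数据只保存在内存中,Pod 被删除或节点重启后即丢失,并且会计入容器的内存使用量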
emptyDir: 72 | medium: Memory 73 | ``` 74 | 75 | ### hostPath 76 | hostPath就是将Node节点的文件系统挂载到Pod中,在之前的例子中也可以看到用法。使用kubeadm部署的Kubernetes,其很多组件都使用了hostPath的类型,将宿主机的一些目录挂载到容器中。 77 | 78 | ```yaml 79 | apiVersion: v1 80 | kind: Pod 81 | metadata: 82 | name: test-pd 83 | spec: 84 | containers: 85 | - image: k8s.gcr.io/test-webserver 86 | name: test-container 87 | volumeMounts: 88 | - mountPath: /test-pd 89 | name: test-volume 90 | volumes: 91 | - name: test-volume 92 | hostPath: 93 | # directory location on host 94 | path: /data 95 | # this field is optional 96 | type: Directory 97 | ``` 98 | 99 | ### local 100 | 101 | > A local volume represents a mounted local storage device such as a disk, partition or directory. 102 | 103 | local类型作为静态资源被PersistentVolume使用,不支持Dynamic provisioning。与hostPath相比,因为能够通过PersistentVolume的节点亲和策略来进行调度,因此比hostPath类型更加适用。local类型也存在一些问题,如果Node的状态异常,那么local存储将无法访问,从而导致Pod运行状态异常。使用这种类型存储的应用必须能够承受可用性的降低、可能的数据丢失等。 104 | 105 | ```yaml 106 | apiVersion: v1 107 | kind: PersistentVolume 108 | metadata: 109 | name: www 110 | spec: 111 | capacity: 112 | storage: 100Mi 113 | volumeMode: Filesystem 114 | accessModes: ["ReadWriteOnce"] 115 | persistentVolumeReclaimPolicy: Delete 116 | storageClassName: local-storage 117 | local: 118 | path: /home/es 119 | nodeAffinity: 120 | required: 121 | nodeSelectorTerms: 122 | - matchExpressions: 123 | - key: kubernetes.io/hostname 124 | operator: In 125 | values: 126 | - devops-102 127 | - devops-103 128 | ``` 129 | 130 | 对于使用了PV的Pod,Kubernetes会调度到具有对应PV的Node上,因此PV的节点亲和性 nodeAffinity 属性是必须的。 131 | > PersistentVolume nodeAffinity is required when using local volumes. It enables the Kubernetes scheduler to correctly schedule Pods using local volumes to the correct node. 132 | 133 | ## Persistent Volumes 134 | Persistent Volumes 提供了一个抽象层,向用户屏蔽了具体的存储实现形式。 135 | - PersistentVolume PV:集群管理员提供的一块存储,是Volumes的插件。类似于Pod,但是具有独立于Pod的生命周期。具体存储可以是NFS、云服务商提供的存储服务。 136 | - PersistentVolumeClaim PVC:PVC是用户的存储请求,PVC消耗PV资源。 137 | 138 | 生命周期: 139 | - 供给 140 | - 静态供给 141 | - 动态供给:动态供给的请求基于StorageClass,集群针对用户的PVC请求,可以产生动态供给。 142 | - 绑定 Binding 143 | - 使用 144 | - 在用对象保护:对于正在使用的PV提供了保护机制,正在使用的PV如果被用户删除,PV的删除会推迟到用户对PV的使用结束。 145 | - 重用 Reclaim 策略 146 | - 保留 Retain:保留现场,Kubernetes等待用户手工处理数据。 147 | - 删除 Delete:Kubernetes会自动删除数据 148 | - 重用 Recycle:这个策略已经不推荐使用了,应该使用 Dynamic Provisioning 代替。 149 | - 扩容。主要是对于一些云存储类型,例如gcePersistentDisk、Azure Disk提供了扩容特性,在1.11版本还处于测试阶段。 150 | 151 | PersistenVolume 这个功能目前是通过Plugin插件的形式实现的,目前的版本V1.11.1有19中,特别关注了一下HostPath。 152 | > HostPath (Single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster) 153 | 154 | ```yaml 155 | apiVersion: v1 156 | kind: PersistentVolume 157 | metadata: 158 | name: pv-localstorage 159 | spec: 160 | capacity: 161 | storage: 100Mi 162 | accessModes: 163 | - ReadWriteOnce 164 | - ReadOnlyMany 165 | persistentVolumeReclaimPolicy: Retain 166 | storageClassName: local-storage 167 | local: 168 | path: /home 169 | nodeAffinity: 170 | required: 171 | nodeSelectorTerms: 172 | - matchExpressions: 173 | - key: kubernetes.io/hostname 174 | operator: In 175 | values: 176 | - devops-102 177 | ``` 178 | 179 | PV创建后就会处于Available状态,等待PVC的申请。 180 | ```sh 181 | [root@devops-101 ~]# kubectl apply -f pv-local.yaml 182 | persistentvolume/pv-localstorage created 183 | [root@devops-101 ~]# kubectl get pv 184 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 185 | pv-localstorage 100Mi RWO,ROX Retain Available local-storage 6s 186 | ``` 187 | 188 | > 
这里使用了local-storage的类型,必须有节点亲和性,节点亲和性的内容可以参考 [Kubernetes中的亲和性与反亲和性](https://www.cnblogs.com/cocowool/p/kubernetes_affinity.html) 189 | 190 | ### Persistent Volumes 的一些属性 191 | 192 | - Capacity:一般情况PV拥有固定的容量 193 | - Volume Mode:在1.9版本中是alpha特性,允许设置 Filesystem 使用文件系统(默认),设置 Block 使用裸块设备(raw block device)。 194 | - Access Modes 195 | - Class:可以设置成StorageClass的名称。具有Class属性的PV只能绑定到具有相同Class名称的PVC上。没有Class的PV只能绑定到没有Class的PVC上。 196 | - Reclaim Policy 197 | 198 | ### 状态 199 | - Available:未被任何PVC使用 200 | - Bound:绑定到了PVC上 201 | - Released:PVC被删掉,资源未被使用 202 | - Failed:自动回收失败 203 | 204 | ## PersistentVolumeClaims 205 | ```yaml 206 | kind: PersistentVolumeClaim 207 | apiVersion: v1 208 | metadata: 209 | name: pvc-localstorage 210 | spec: 211 | storageClassName: local-storage 212 | accessModes: 213 | - ReadWriteOnce 214 | resources: 215 | requests: 216 | storage: 30Mi 217 | ``` 218 | 219 | ### 一些属性 220 | - Access Modes 221 | - Volume Modes 222 | - Resources 223 | - Selector:PVC可以通过标签选择器选择PV资源。可以包含两个字段```matchLabels```和```matchExpressions```。 224 | - storageClassName:类似标签选择器,通过 storageClassName 来确定PV资源。 225 | 226 | ```sh 227 | [root@devops-101 ~]# kubectl apply -f pvc-local.yaml 228 | persistentvolumeclaim/pvc-localstorage created 229 | [root@devops-101 ~]# kubectl get pvc 230 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 231 | pvc-localstorage Bound pv-localstorage 100Mi RWO,ROX local-storage 7s 232 | [root@devops-101 ~]# kubectl get pv 233 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE 234 | pv-localstorage 100Mi RWO,ROX Retain Bound default/pvc-localstorage local-storage 8m 235 | ``` 236 | 237 | 238 | 239 | ## Storage Class 240 | StorageClass为管理员提供了一种描述存储类型的方法。通常情况下,管理员需要手工创建所需的存储资源。利用动态容量供给的功能,就可以实现动态创建PV的能力。动态容量供给 Dynamic Volume Provisioning 主要依靠StorageClass中指定的```provisioner```。如果希望集群在PVC没有指定StorageClass的情况下也能提供动态供给的能力,需要设置```DefaultStorageClass```。 241 | ```yaml 242 | kind: StorageClass 243 | apiVersion: storage.k8s.io/v1 244 | metadata: 245 | name: local-storage 246 | provisioner: kubernetes.io/no-provisioner 247 | volumeBindingMode: WaitForFirstConsumer 248 | ``` 249 | 250 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 251 | 252 | 253 | ## 参考资料: 254 | 1. [Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/volumes) 255 | 2. [Configure a Pod to Use a PersistentVolume for Storage](https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/) 256 | 3. [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes) 257 | 4. [kubernetes存储系统介绍(Volume、PV、dynamic provisioning)](https://blog.csdn.net/liukuan73/article/details/60089305) 258 | 5. [Kubernetes 1.4 新特性 持久卷 ](https://blog.csdn.net/qq_26923057/article/details/52713463) 259 | 6. 
[DockOne微信分享(一零三):Kubernetes 有状态集群服务部署与管理](http://dockone.io/article/2016?utm_source=tuicool&utm_medium=referral) 260 | -------------------------------------------------------------------------------- /storage/configmap.md: -------------------------------------------------------------------------------- 1 | # Kubernetes中的Configmap和Secret 2 | 3 | > 本文的试验环境为CentOS 7.3,Kubernetes集群为1.11.2,安装步骤参见[kubeadm安装kubernetes V1.11.1 集群](https://www.cnblogs.com/cocowool/p/kubeadm_install_kubernetes.html) 4 | 5 | > 应用场景:镜像往往是一个应用的基础,还有很多需要自定义的参数或配置,例如资源的消耗、日志的位置级别等等,这些配置可能会有很多,因此不能放入镜像中,Kubernetes中提供了Configmap来实现向容器中提供配置文件或环境变量来实现不同配置,从而实现了镜像配置与镜像本身解耦,使容器应用做到不依赖于环境配置。 6 | 7 | ## 向容器传递参数 8 | 9 | | Docker | Kubernetes | 描述 | 10 | | --- | --- | --- | 11 | | ENTRYPOINT | command | 容器中的可执行文件 | 12 | | CMD | args | 需要传递给可执行文件的参数 | 13 | 14 | 如果需要向容器传递参数,可以在Yaml文件中通过command和args或者环境变量的方式实现。 15 | ```yaml 16 | kind: Pod 17 | spec: 18 | containers: 19 | - image: docker.io/nginx 20 | command: ["/bin/command"] 21 | args: ["arg1", "arg2", "arg3"] 22 | env: 23 | - name: INTERVAL 24 | value: "30" 25 | - name: FIRST_VAR 26 | value: "foo" 27 | - name: SECOND_VAR 28 | value: "$(FIRST_VAR)bar" 29 | ``` 30 | 31 | 可以看到,我们可以利用env标签向容器中传递环境变量,环境变量还可以相互引用。这种方式的问题在于配置文件和部署是绑定的,那么对于同样的应用,测试环境的参数和生产环境是不一样的,这样就要求写两个部署文件,管理起来不是很方便。 32 | 33 | ## 什么是ConfigMap 34 | 35 | 上面提到的例子,利用ConfigMap可以解耦部署与配置的关系,对于同一个应用部署文件,可以利用```valueFrom```字段引用一个在测试环境和生产环境都有的ConfigMap(当然配置内容不相同,只是名字相同),就可以降低环境管理和部署的复杂度。 36 | 37 | ![](https://img2018.cnblogs.com/blog/39469/201811/39469-20181101083024064-1406584186.png) 38 | 39 | ConfigMap有三种用法: 40 | - 生成为容器内的环境变量 41 | - 设置容器启动命令的参数 42 | - 挂载为容器内部的文件或目录 43 | 44 | ## ConfigMap的缺点 45 | - ConfigMap必须在Pod之前创建 46 | - ConfigMap属于某个NameSpace,只有处于相同NameSpace的Pod才可以应用它 47 | - ConfigMap中的配额管理还未实现 48 | - 如果是volume的形式挂载到容器内部,只能挂载到某个目录下,该目录下原有的文件会被覆盖掉 49 | - 静态Pod不能用ConfigMap 50 | 51 | ## ConfigMap的创建 52 | 53 | ```sh 54 | $ kubectl create configmap --from-literal== 55 | $ kubectl create configmap --from-literal== --from-literal== --from-literal== 56 | $ kubectl create configmap --from-file= 57 | $ kubectl apply -f 58 | # 还可以从一个文件夹创建configmap 59 | $ kubectl create configmap --from-file=/path/to/dir 60 | ``` 61 | Yaml 的声明方式 62 | 63 | ```yaml 64 | apiVersion: v1 65 | data: 66 | my-nginx-config.conf: | 67 |     server { 68 |     listen              80; 69 |       server_name         www.kubia-example.com; 70 | 71 | gzip on; 72 |       gzip_types text/plain application/xml; 73 | 74 | location / { 75 |         root   /usr/share/nginx/html; 76 |         index  index.html index.htm; 77 | } 78 | } 79 |   sleep-interval: | 80 |     25 81 | kind: ConfigMap 82 | ``` 83 | 84 | ## ConfigMap的调用 85 | ### 环境变量的方式 86 | ```yaml 87 | apiVersion: v1 88 | kind: Pod 89 | metadata: 90 | name: env-configmap 91 | spec: 92 | containers: 93 | - image: nginx 94 | env: 95 | - name: INTERVAL 96 | valueFrom: 97 | configMapKeyRef: 98 | name: 99 | key: sleep-interval 100 | ``` 101 | 102 | > 如果引用了一个不存在的ConfigMap,则创建Pod时会报错,直到能够正常读取ConfigMap后,Pod会自动创建。 103 | 104 | 一次传递所有的环境变量 105 | ```yaml 106 | spec: 107 | containers: 108 | - image: nginx 109 | envFrom: 110 | - prefix: CONFIG_ 111 | configMapRef: 112 | name: 113 | ``` 114 | 115 | ### 命令行参数的方式 116 | ```yaml 117 | apiVersion: v1 118 | kind: Pod 119 | metadata: 120 | name: env-configmap 121 | spec: 122 | containers: 123 | - image: nginx 124 | env: 125 | - name: INTERVAL 126 | valueFrom: 127 | configMapKeyRef: 128 | name: 129 | key: sleep-interval 130 | args: ["$(INTERVAL)"] 131 | ``` 132 | 133 | ### 以配置文件的方式 134 | 
```yaml 135 | apiVersion: v1 136 | kind: Pod 137 | metadata: 138 | name: nginx-test 139 | spec: 140 | containers: 141 | - image: nginx 142 | name: web-server 143 | volumeMounts: 144 | - name: config 145 | mountPath: /etc/nginx/conf.d 146 | readOnly: true 147 | volumes: 148 | - name: config 149 | configMap: 150 | name: <configmap-name> 151 | ``` 152 | 153 | 将ConfigMap以Volume方式挂载到某个目录后,镜像中该目录原有的内容就看不到了,这是因为挂载的Volume会覆盖(遮蔽)容器内原目录下的文件。为了避免这种挂载方式影响应用的正常运行,可以通过subPath只将ConfigMap挂载为目录下的单个配置文件。 154 | ```yaml 155 | spec: 156 | containers: 157 | - image: nginx 158 | volumeMounts: 159 | - name: config 160 | mountPath: /etc/someconfig.conf 161 | subPath: myconfig.conf 162 | ``` 163 | ![](https://img2018.cnblogs.com/blog/39469/201811/39469-20181101083101837-948645932.png) 164 | 165 | ## Configmap的更新 166 | ```sh 167 | $ kubectl edit configmap <configmap-name> 168 | 169 | ``` 170 | 171 | ConfigMap更新后,如果是以文件夹方式挂载的,会自动将挂载的Volume更新。如果是以文件形式挂载的,则不会自动更新。 172 | 但是对多数情况的应用来说,配置文件更新后,最简单的办法就是重启Pod(杀掉再重新拉起)。如果是以文件夹形式挂载的,可以通过在容器内重启应用的方式实现配置文件更新生效。即便是重启容器内的应用,也要注意configmap的更新和容器内挂载文件的更新不是同步的,可能会有延时,因此一定要确保容器内的配置也已经更新为最新版本后再重新加载应用。 173 | 174 | ## 什么是Secret 175 | Secret与ConfigMap类似,但是用来存储敏感信息。在Master节点上,secret以非加密的形式存储(意味着我们要对master严加管理)。从Kubernetes 1.7之后,可以配置etcd以加密的形式保存secret。secret的大小被限制为1MB。当Secret挂载到Pod上时,是以tmpfs的形式挂载,即这些内容都是保存在节点的内存中,而不是写入磁盘,通过这种方式来确保信息的安全性。 176 | 177 | > Kubernetes helps keep your Secrets safe by making sure each Secret is only distributed to the nodes that run the pods that need access to the Secret. Also, on the nodes themselves, Secrets are always stored in memory and never written to physical storage, which would require wiping the disks after deleting the Secrets from them. 178 | 179 | 每个Kubernetes集群都有一个默认的secrets 180 | ![](https://img2018.cnblogs.com/blog/39469/201811/39469-20181101083123554-1363293401.png) 181 | 182 | 创建和调用的过程与configmap大同小异,这里就不再赘述了。 183 | 184 | ![](https://images2018.cnblogs.com/blog/39469/201807/39469-20180710163655709-89635310.png) 185 | 186 | ## 参考资料 187 | 1. [Kubernetes Pod 深入理解与实践](https://www.jianshu.com/p/d867539a15cf) 188 | 2. 
[Configmap](https://www.jianshu.com/p/571383da7adf) 189 | -------------------------------------------------------------------------------- /storage/pod-with-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-with-pvc 5 | spec: 6 | containers: 7 | - image: docker.io/nginx 8 | name: alpine 9 | volumeMounts: 10 | - name: localstorage 11 | mountPath: /data/db 12 | volumes: 13 | - name: localstorage 14 | persistentVolumeClaim: 15 | claimName: pvc-localstorage -------------------------------------------------------------------------------- /storage/pv-local.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: pv-localstorage 5 | spec: 6 | capacity: 7 | storage: 100Mi 8 | accessModes: 9 | - ReadWriteOnce 10 | - ReadOnlyMany 11 | persistentVolumeReclaimPolicy: Retain 12 | storageClassName: local-storage 13 | local: 14 | path: /home 15 | nodeAffinity: 16 | required: 17 | nodeSelectorTerms: 18 | - matchExpressions: 19 | - key: kubernetes.io/hostname 20 | operator: In 21 | values: 22 | - devops-102 -------------------------------------------------------------------------------- /storage/pvc-local.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolumeClaim 2 | apiVersion: v1 3 | metadata: 4 | name: pvc-localstorage 5 | spec: 6 | storageClassName: local-storage 7 | accessModes: 8 | - ReadWriteOnce 9 | resources: 10 | requests: 11 | storage: 30Mi -------------------------------------------------------------------------------- /storage/storage_basic.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/cocowool/k8s-go/aca7c938dc8271f2a2897501e3e3d9e3784b12df/storage/storage_basic.png --------------------------------------------------------------------------------
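
> 补充:storage 目录下的 pv-local.yaml、pvc-local.yaml、pod-with-pvc.yaml 三个文件正好构成一个"PV → PVC → Pod"的本地静态供给示例。下面给出一个假定的创建与验证顺序,仅供参考,命令中的文件路径以仓库根目录为准,输出从略:

```sh
$ kubectl apply -f storage/pv-local.yaml        # 先创建 PV,此时状态应为 Available
$ kubectl apply -f storage/pvc-local.yaml       # 再创建 PVC,与上面的 PV 绑定后状态变为 Bound
$ kubectl apply -f storage/pod-with-pvc.yaml    # 最后创建使用该 PVC 的 Pod
$ kubectl get pv,pvc                            # 确认 pv-localstorage / pvc-localstorage 均为 Bound
$ kubectl describe pod pod-with-pvc             # 查看事件,确认 Pod 被调度到 PV 亲和的 devops-102 节点
```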