├── LICENSE
├── README.md
├── apps
├── README.md
├── nginx
│ └── README.md
├── ops
│ └── README.md
└── wordpress
│ ├── README.md
│ ├── 基于PV_PVC部署Wordpress 示例.md
│ └── 部署Wordpress 示例.md
├── components
├── README.md
├── cronjob
│ └── README.md
├── dashboard
│ ├── Kubernetes-Dashboard v2.0.0.md
│ └── README.md
├── external-storage
│ ├── 0、nfs服务端搭建.md
│ ├── 1、k8s的pv和pvc简述.md
│ ├── 2、静态配置PV和PVC.md
│ ├── 3、动态申请PV卷.md
│ ├── 4、Kubernetes之MySQL持久存储和故障转移.md
│ ├── 5、Kubernetes之Nginx动静态PV持久存储.md
│ └── README.md
├── heapster
│ └── README.md
├── ingress
│ ├── 0.通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系.md
│ ├── 1.kubernetes部署Ingress-nginx单点和高可用.md
│ ├── 1.外部服务发现之Ingress介绍.md
│ ├── 2.ingress tls配置.md
│ ├── 3.ingress-http使用示例.md
│ ├── 4.ingress-https使用示例.md
│ ├── 5.hello-tls.md
│ ├── 6.ingress-https使用示例.md
│ ├── README.md
│ ├── nginx-ingress
│ │ └── README.md
│ ├── traefik-ingress
│ │ ├── 1.traefik反向代理Deamonset模式.md
│ │ ├── 2.traefik反向代理Deamonset模式TLS.md
│ │ └── README.md
│ └── 常用操作.md
├── initContainers
│ └── README.md
├── job
│ └── README.md
├── k8s-monitor
│ └── README.md
├── kube-proxy
│ └── README.md
├── nfs
│ └── README.md
└── pressure
│ ├── README.md
│ ├── calico bgp网络需要物理路由和交换机支持吗.md
│ └── k8s集群更换网段方案.md
├── docs
├── Envoy的架构与基本术语.md
├── Kubernetes学习笔记.md
├── Kubernetes架构介绍.md
├── Kubernetes集群环境准备.md
├── app.md
├── app2.md
├── ca.md
├── coredns.md
├── dashboard.md
├── dashboard_op.md
├── delete.md
├── docker-install.md
├── etcd-install.md
├── flannel.md
├── k8s-error-resolution.md
├── k8s_pv_local.md
├── k8s重启pod.md
├── master.md
├── node.md
├── operational.md
├── 外部访问K8s中Pod的几种方式.md
└── 虚拟机环境准备.md
├── example
├── coredns
│ └── coredns.yaml
└── nginx
│ ├── nginx-daemonset.yaml
│ ├── nginx-deployment.yaml
│ ├── nginx-ingress.yaml
│ ├── nginx-pod.yaml
│ ├── nginx-rc.yaml
│ ├── nginx-rs.yaml
│ ├── nginx-service-nodeport.yaml
│ └── nginx-service.yaml
├── helm
└── README.md
├── images
├── Dashboard-login.jpg
├── Dashboard.jpg
├── Ingress Controller01.png
├── Ingress-nginx.png
├── K8S.png
├── calico_bgp_01.png
├── calico_bgp_02.png
├── calico_bgp_03.png
├── calico_bgp_04.png
├── calico_bgp_05.png
├── calico_bgp_06.png
├── calico_bgp_07.png
├── calico_bgp_08.png
├── calico_bgp_09.png
├── calico_bgp_10.png
├── calico_bgp_11.png
├── calico_bgp_12.png
├── change network.png
├── change_ip_01.png
├── change_ip_02.png
├── change_ip_05.png
├── change_ip_06.png
├── coredns-01.png
├── dynamic-pv.png
├── heapster-01.png
├── heapster-02.png
├── ingress-k8s-01.png
├── ingress-k8s-02.png
├── ingress-k8s-03.png
├── ingress-k8s-04.png
├── install centos7.png
├── k8s-soft.jpg
├── k8s架构图.jpg
├── kubeadm-ha.jpg
├── kubernetes架构.jpg
├── pressure_calico_01.png
├── pressure_flannel_01.png
├── pressure_physical_01.png
├── pv01.png
├── rp_filter.png
├── traefik-architecture.png
├── virtualbox-network-eth0.jpg
├── virtualbox-network-eth1.png
├── vmware-fusion-network.png
├── vmware-network.png
├── wordpress-01.png
└── 安装流程.png
├── kubeadm
├── K8S-HA-V1.13.4-关闭防火墙版.md
├── K8S-HA-V1.16.x-云环境-Calico.md
├── K8S-V1.16.2-开启防火墙-Flannel.md
├── Kubernetes 集群变更IP地址.md
├── README.md
├── k8S-HA-V1.15.3-Calico-开启防火墙版.md
├── k8S-HA-V1.15.3-Flannel-开启防火墙版.md
├── k8s清理.md
├── kubeadm.yaml
├── kubeadm初始化k8s集群延长证书过期时间.md
└── kubeadm无法下载镜像问题.md
├── manual
├── README.md
├── v1.14
│ └── README.md
└── v1.15.3
│ └── README.md
├── mysql
├── README.md
└── kubernetes访问外部mysql服务.md
├── redis
├── K8s上Redis集群动态扩容.md
├── K8s上运行Redis单实例.md
├── K8s上运行Redis集群指南.md
└── README.md
├── rke
├── README.md
└── cluster.yml
└── tools
├── Linux Kernel 升级.md
├── README.md
├── k8s域名解析coredns问题排查过程.md
├── kubernetes-node打标签.md
├── kubernetes-常用操作.md
├── kubernetes-批量删除Pods.md
├── kubernetes访问外部mysql服务.md
└── ssh_copy.sh
/README.md:
--------------------------------------------------------------------------------
1 | # 1. Kubernetes Guide
2 | - [Kubernetes Architecture Overview](docs/Kubernetes架构介绍.md)
3 | - [Kubernetes Cluster Environment Preparation](docs/Kubernetes集群环境准备.md)
4 | - [Docker Installation](docs/docker-install.md)
5 | - [Creating the CA Certificates](docs/ca.md)
6 | - [etcd Cluster Deployment](docs/etcd-install.md)
7 | - [Master Node Deployment](docs/master.md)
8 | - [Node Deployment](docs/node.md)
9 | - [Flannel Deployment](docs/flannel.md)
10 | - [Creating Applications](docs/app.md)
11 | - [Troubleshooting Notes](docs/k8s-error-resolution.md)
12 | - [Operations Handbook](docs/operational.md)
13 | - [Envoy Architecture and Basic Terminology](docs/Envoy的架构与基本术语.md)
14 | - [Kubernetes Study Notes](docs/Kubernetes学习笔记.md)
15 | - [Restarting Pods in Kubernetes](docs/k8s%E9%87%8D%E5%90%AFpod.md)
16 | - [Kubernetes Cleanup](docs/delete.md)
17 | - [Ways to Access Pods in Kubernetes from Outside](docs/外部访问K8s中Pod的几种方式.md)
18 | - [Application Testing](docs/app2.md)
19 | - [PVC](docs/k8s_pv_local.md)
20 | - [Dashboard Operations](docs/dashboard_op.md)
21 |
22 |
23 | # User Manual
24 |
47 |
48 | # 2. Kubernetes resource cleanup
49 | ```
50 | 1. # Clean up Services
51 | $ kubectl delete svc $(kubectl get svc -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
52 | service "mysql-production" deleted
53 | service "nginx-test" deleted
54 | service "redis-cluster" deleted
55 | service "redis-production" deleted
56 |
57 | 2. # Clean up Deployments
58 | $ kubectl delete deployment $(kubectl get deployment -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
59 | deployment.extensions "centos7-app" deleted
60 |
61 | 3. # Clean up ConfigMaps
62 | $ kubectl delete cm $(kubectl get cm -n mos-namespace|grep -v NAME|awk '{print $1}') -n mos-namespace
63 | ```
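As a shortcut, most namespaced workload objects can be removed in one pass with `kubectl delete all`; a minimal sketch (assuming the same `mos-namespace`, and noting that ConfigMaps are not part of the `all` alias and still need their own pass):

```bash
# Delete pods, services, deployments, replicasets, statefulsets, ... in one namespace
kubectl delete all --all -n mos-namespace

# ConfigMaps are not included in "all" and must be deleted separately
kubectl delete cm --all -n mos-namespace
```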
64 |
65 |
66 | https://www.xiaodianer.net/index.php/kubernetes/istio/41-istio-https-demo
67 |
68 | https://mp.weixin.qq.com/s/jnVn6_cyRUILBQ0cBhBNyQ Kubernetes v1.18.2 binary high-availability deployment
69 |
--------------------------------------------------------------------------------
/apps/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/apps/nginx/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/apps/ops/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/apps/wordpress/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/apps/wordpress/基于PV_PVC部署Wordpress 示例.md:
--------------------------------------------------------------------------------
1 | # 1. PV (PersistentVolume)
2 |
3 | A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by the administrator. Like a Volume, a PV is persistent, and its lifecycle is independent of any Pod.
4 |
5 | 1. A PV and a PVC are bound one-to-one: once a PV is claimed by a PVC its status shows Bound, and no other PVC can use that PV.
6 |
7 | 2. Once a PVC is bound to a PV it acts as a storage volume and can be used by multiple Pods (whether a PVC may actually be accessed by multiple Pods depends on its access mode, accessMode).
8 |
9 | 3. If no suitable PV is found for a PVC, the PVC stays in the Pending state.
10 |
11 | 4. PV reclaim policy options:
12 |
13 |     Retain (default): keep the volume and the data generated on it.
14 |     Recycle: delete the generated data and recycle the PV.
15 |     Delete: once the PVC is released, the PV is deleted automatically.
16 |
17 | # 2. PVC
18 |
19 | A PersistentVolumeClaim (PVC) is a claim for a PV, usually created and maintained by ordinary users. When a Pod needs storage, the user creates a PVC specifying the required capacity, access mode (for example read-only), and so on, and Kubernetes finds and provides a PV that satisfies the request.
20 |
21 | With a PersistentVolumeClaim, users only tell Kubernetes what kind of storage they need, without caring where the space actually comes from or how it is accessed. Those low-level Storage Provider details are left to the administrator; only the administrator should care about the details of creating PersistentVolumes.
22 |
23 | ## A PVC resource specifies:
24 |
25 | 1. accessModes: the access modes; possible values:
26 |
27 |     ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node
28 |     ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
29 |     ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes
30 |
31 | 2. resources: the resource request (for example, requesting 5GB means we expect the backing storage to offer at least 5GB).
32 |
33 | 3. selector: a label selector. Without labels, the best match is searched for among all PVs.
34 |
35 | 4. storageClassName: the name of the storage class.
36 |
37 | 5. volumeMode: the mode of the backing volume; it can be used to restrict which kind of PV may be used by this claim.
38 |
39 | 6. volumeName: the volume name, which pins the claim to a specific backing PV (effectively a manual binding).
40 |
41 |
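To make the fields above concrete, here is a minimal PVC sketch (the `nfs-storage` class name and the 5Gi size are illustrative assumptions, not values required by this repo):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                 # a PVC is namespaced; it is created in the current namespace
spec:
  accessModes:
    - ReadWriteOnce              # RWO: read-write by a single node
  resources:
    requests:
      storage: 5Gi               # the bound PV must offer at least 5Gi
  storageClassName: nfs-storage  # assumed storage class name, adjust to your cluster
  # selector / volumeName / volumeMode can be added to narrow down or pin the PV
```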
42 | # 3. Differences between the two
43 |
44 | 1. A PV is a cluster-level resource and cannot be defined inside a namespace.
45 |
46 | 2. A PVC is a namespace-level resource.
47 |
48 | References:
49 |
50 | https://blog.csdn.net/weixin_42973226/article/details/86501693 Deploying WordPress on rook-ceph
51 |
52 | https://www.cnblogs.com/benjamin77/p/9944268.html Kubernetes persistent storage: PV & PVC
53 |
--------------------------------------------------------------------------------
/components/README.md:
--------------------------------------------------------------------------------
1 |
2 | # ingress
3 |
4 | # helm
5 |
6 | https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#check-required-ports Ports that need to be opened
7 |
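A sketch of opening those ports with firewalld (assuming firewalld is in use; the port list follows the kubeadm page linked above):

```bash
# Control-plane node: API server, etcd, kubelet, kube-scheduler, kube-controller-manager
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250-10252/tcp

# Worker nodes: kubelet plus the default NodePort range
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=30000-32767/tcp

firewall-cmd --reload
```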
--------------------------------------------------------------------------------
/components/cronjob/README.md:
--------------------------------------------------------------------------------
1 | References:
2 |
3 | https://www.jianshu.com/p/62b4f0a3134b The Kubernetes CronJob object
4 |
--------------------------------------------------------------------------------
/components/dashboard/Kubernetes-Dashboard v2.0.0.md:
--------------------------------------------------------------------------------
1 | ```bash
2 | # Install
3 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
4 |
5 | # Uninstall
6 | kubectl delete -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
7 |
8 | # Grant admin access via a ServiceAccount
9 | kubectl delete -f admin.yaml
10 |
11 | cat > admin.yaml << \EOF
12 | kind: ClusterRoleBinding
13 | apiVersion: rbac.authorization.k8s.io/v1
14 | metadata:
15 | name: admin
16 | annotations:
17 | rbac.authorization.kubernetes.io/autoupdate: "true"
18 | roleRef:
19 | kind: ClusterRole
20 | name: cluster-admin
21 | apiGroup: rbac.authorization.k8s.io
22 | subjects:
23 | - kind: ServiceAccount
24 | name: admin
25 | namespace: kube-system
26 | ---
27 | apiVersion: v1
28 | kind: ServiceAccount
29 | metadata:
30 | name: admin
31 | namespace: kube-system
32 | labels:
33 | kubernetes.io/cluster-service: "true"
34 | addonmanager.kubernetes.io/mode: Reconcile
35 | EOF
36 |
37 | kubectl apply -f admin.yaml
38 |
39 | kubectl describe secret/$(kubectl get secret -n kube-system |grep admin|awk '{print $1}') -n kube-system
40 | ```
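One common way to reach the dashboard afterwards is through `kubectl proxy` (a sketch; it assumes the default `kubernetes-dashboard` namespace and service created by the recommended.yaml above):

```bash
# Open a local proxy to the API server
kubectl proxy

# Then open this URL in a browser and log in with the token printed above:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```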
41 | Reference:
42 |
43 | http://www.mydlq.club/article/28/
44 |
--------------------------------------------------------------------------------
/components/external-storage/0、nfs服务端搭建.md:
--------------------------------------------------------------------------------
1 | ## 1. NFS server
2 | ```bash
3 | # Install NFS packages on all nodes
4 | yum install -y nfs-utils rpcbind
5 |
6 | # Create the NFS directory
7 | mkdir -p /nfs/data/
8 |
9 | # Adjust permissions
10 | chmod -R 666 /nfs/data
11 |
12 | # Edit the exports file
13 | vim /etc/exports
14 | /nfs/data 192.168.56.0/24(rw,async,no_root_squash)
15 | # If set to /nfs/data *(rw,async,no_root_squash), the export applies to all client IPs
16 |
17 | Common options:
18 | ro: the client mount is read-only (the default);
19 | rw: read-write;
20 | sync: write data to memory and disk at the same time;
21 | async: asynchronous; data is kept in memory first and written to disk later;
22 | secure: require the source port of the request to be below 1024
23 | User mapping:
24 | root_squash: when an NFS client accesses as root, map it to the anonymous user on the NFS server;
25 | no_root_squash: when an NFS client accesses as root, map it to root on the NFS server;
26 | all_squash: map all users to the server-side anonymous user;
27 | anonuid=UID: map the client login user to the specified uid;
28 | anongid=GID: map the client login user to the specified gid
29 |
30 | # Reload the export configuration
31 | exportfs -r
32 |
33 | # Verify the exports
34 | exportfs
35 |
36 | # Start and enable the rpcbind and nfs services
37 | systemctl restart rpcbind && systemctl enable rpcbind
38 | systemctl restart nfs && systemctl enable nfs
39 |
40 | # Check the RPC service registrations (note: the services below must be allowed in /etc/hosts.deny)
41 | $ rpcinfo -p localhost
42 | program vers proto port service
43 | 100000 4 tcp 111 portmapper
44 | 100000 3 tcp 111 portmapper
45 | 100000 2 tcp 111 portmapper
46 | 100000 4 udp 111 portmapper
47 | 100000 3 udp 111 portmapper
48 | 100000 2 udp 111 portmapper
49 | 100005 1 udp 20048 mountd
50 | 100005 1 tcp 20048 mountd
51 | 100005 2 udp 20048 mountd
52 | 100005 2 tcp 20048 mountd
53 | 100005 3 udp 20048 mountd
54 | 100005 3 tcp 20048 mountd
55 | 100024 1 udp 34666 status
56 | 100024 1 tcp 7951 status
57 | 100003 3 tcp 2049 nfs
58 | 100003 4 tcp 2049 nfs
59 | 100227 3 tcp 2049 nfs_acl
60 | 100003 3 udp 2049 nfs
61 | 100003 4 udp 2049 nfs
62 | 100227 3 udp 2049 nfs_acl
63 | 100021 1 udp 31088 nlockmgr
64 | 100021 3 udp 31088 nlockmgr
65 | 100021 4 udp 31088 nlockmgr
66 | 100021 1 tcp 27131 nlockmgr
67 | 100021 3 tcp 27131 nlockmgr
68 | 100021 4 tcp 27131 nlockmgr
69 |
70 | # Allow rpcbind in /etc/hosts.allow (needed on both the NFS server and the clients)
71 | chattr -i /etc/hosts.allow
72 | echo "nfsd:all" >>/etc/hosts.allow
73 | echo "rpcbind:all" >>/etc/hosts.allow
74 | echo "mountd:all" >>/etc/hosts.allow
75 | chattr +i /etc/hosts.allow
76 |
77 | # Test with showmount
78 | showmount -e 192.168.56.11
79 |
80 | # Test with tcpdmatch
81 | $ tcpdmatch rpcbind 192.168.56.11
82 | client: address 192.168.56.11
83 | server: process rpcbind
84 | access: granted
85 | ```
86 |
87 | ## 2. NFS client
88 | ```bash
89 | yum install -y nfs-utils rpcbind
90 |
91 | # Create a mount point on the client, then mount
92 | mkdir -p /mnt/nfs # (note: after a successful mount, existing data under the mount point is hidden and cannot be reached)
93 |
94 | mount -t nfs -o nolock,vers=4 192.168.56.11:/nfs/data /mnt/nfs
95 | ```
96 |
97 | ## 3. Mounting NFS via fstab
98 | ```bash
99 | # Alternatively, add the mount directly to /etc/fstab
100 | vim /etc/fstab
101 | 192.168.56.11:/nfs/data /mnt/nfs/ nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
102 |
103 | # Mount everything listed in fstab
104 | mount -a
105 |
106 | # Unmount
107 | umount /mnt/nfs
108 |
109 | # Show NFS server statistics
110 | nfsstat -s
111 |
112 | # Show NFS client statistics
113 | nfsstat -c
114 | ```
115 |
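Since this export is usually consumed from Kubernetes, here is a minimal PV sketch pointing at it (server IP and path are taken from the example above; the name and capacity are illustrative assumptions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-demo-pv
spec:
  capacity:
    storage: 10Gi              # assumed size, for illustration only
  accessModes:
    - ReadWriteMany            # NFS supports RWX
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.56.11      # the NFS server configured above
    path: /nfs/data            # the exported directory
```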
116 | References:
117 |
118 | http://www.mydlq.club/article/3/ Setting up an NFS server on CentOS 7
119 |
120 | https://blog.rot13.org/2012/05/rpcbind-is-new-portmap-or-how-to-make-nfs-secure.html
121 |
122 | https://yq.aliyun.com/articles/694065
123 |
124 | https://www.crifan.com/linux_fstab_and_mount_nfs_syntax_and_parameter_meaning/ fstab syntax and the meaning of the parameters used when mounting NFS on Linux
125 |
--------------------------------------------------------------------------------
/components/external-storage/1、k8s的pv和pvc简述.md:
--------------------------------------------------------------------------------
1 | # 1. PV (PersistentVolume)
2 |
3 | A PersistentVolume (PV) is a piece of storage in an external storage system, created and maintained by the administrator. Like a Volume, a PV is persistent, and its lifecycle is independent of any Pod.
4 |
5 | 1. A PV and a PVC are bound one-to-one: once a PV is claimed by a PVC its status shows Bound, and no other PVC can use that PV.
6 |
7 | 2. Once a PVC is bound to a PV it acts as a storage volume and can be used by multiple Pods (whether a PVC may actually be accessed by multiple Pods depends on its access mode, accessMode).
8 |
9 | 3. If no suitable PV is found for a PVC, the PVC stays in the Pending state.
10 |
11 | 4. PV reclaim policy options:
12 |
13 |     Retain (default): keep the volume and the data generated on it.
14 |     Recycle: delete the generated data and recycle the PV.
15 |     Delete: once the PVC is released, the PV is deleted automatically.
16 |
17 | # 2. PVC
18 |
19 | A PersistentVolumeClaim (PVC) is a claim for a PV, usually created and maintained by ordinary users. When a Pod needs storage, the user creates a PVC specifying the required capacity, access mode (for example read-only), and so on, and Kubernetes finds and provides a PV that satisfies the request.
20 |
21 | With a PersistentVolumeClaim, users only tell Kubernetes what kind of storage they need, without caring where the space actually comes from or how it is accessed. Those low-level Storage Provider details are left to the administrator; only the administrator should care about the details of creating PersistentVolumes.
22 |
23 | ## A PVC resource specifies:
24 |
25 | 1. accessModes: the access modes; possible values:
26 |
27 |     ReadWriteOnce (RWO) – the volume can be mounted as read-write by a single node
28 |     ReadOnlyMany (ROX) – the volume can be mounted read-only by many nodes
29 |     ReadWriteMany (RWX) – the volume can be mounted as read-write by many nodes
30 |
31 | 2. resources: the resource request (for example, requesting 5GB means we expect the backing storage to offer at least 5GB).
32 |
33 | 3. selector: a label selector. Without labels, the best match is searched for among all PVs.
34 |
35 | 4. storageClassName: the name of the storage class.
36 |
37 | 5. volumeMode: the mode of the backing volume; it can be used to restrict which kind of PV may be used by this claim.
38 |
39 | 6. volumeName: the volume name, which pins the claim to a specific backing PV (effectively a manual binding).
40 |
41 |
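A minimal sketch of the fields above, here using `volumeName` to pin the claim to a specific PV (names and size are illustrative assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadOnlyMany             # ROX: read-only by many nodes
  resources:
    requests:
      storage: 5Gi
  volumeName: demo-pv          # bind directly to the PV named demo-pv
  storageClassName: ""         # empty string keeps the default StorageClass from provisioning a new PV
```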
42 | # 3. Differences between the two
43 |
44 | 1. A PV is a cluster-level resource and cannot be defined inside a namespace.
45 |
46 | 2. A PVC is a namespace-level resource.
47 |
48 | References:
49 |
50 | https://blog.csdn.net/weixin_42973226/article/details/86501693 Deploying WordPress on rook-ceph
51 |
52 | https://www.cnblogs.com/benjamin77/p/9944268.html Kubernetes persistent storage: PV & PVC
53 |
--------------------------------------------------------------------------------
/components/external-storage/4、Kubernetes之MySQL持久存储和故障转移.md:
--------------------------------------------------------------------------------
1 | Table of Contents
2 | =================
3 |
4 | * [1. MySQL persistence walkthrough](#1-mysql-persistence-walkthrough)
5 |   * [1. Steps for providing the database with persistent storage](#1-steps-for-providing-the-database-with-persistent-storage)
6 | * [2. Static PV and PVC](#2-static-pv-and-pvc)
7 |   * [1. Create the PV](#1-create-the-pv)
8 |   * [2. Create the PVC](#2-create-the-pvc)
9 | * [3. Deploy MySQL](#3-deploy-mysql)
10 |   * [1. The MySQL manifest mysql.yaml](#1-the-mysql-manifest-mysqlyaml)
11 |   * [2. Write data into MySQL](#2-write-data-into-mysql)
12 |   * [3. Failover](#3-failover)
13 | * [4. Using a fresh namespace](#4-using-a-fresh-namespace)
14 |
15 | # 1. MySQL persistence walkthrough
16 |
17 | ## 1. Steps for providing the database with persistent storage:
18 |
19 | 1. Create the PV and PVC
20 |
21 | 2. Deploy MySQL
22 |
23 | 3. Add data to MySQL
24 |
25 | 4. Simulate a node failure and let Kubernetes migrate MySQL to another node automatically
26 |
27 | 5. Verify data consistency
28 |
29 |
30 | # 2. Static PV and PVC
31 |
32 | ```bash
33 | A PV is like a warehouse: first you have to acquire the warehouse, i.e. define a PV backed by some storage service such as Ceph, NFS, or a local hostPath.
34 |
35 | A PVC is like the tenant: a PV and a PVC are bound one-to-one, the PVC is then mounted into Pods, and one PVC can be mounted by multiple Pods.
36 | ```
37 |
38 | ## 1. Create the PV
39 |
40 | ```bash
41 | # Clean up any existing PV
42 | kubectl delete -f mysql-static-pv.yaml
43 |
44 | # Write the PV manifest
45 | cat > mysql-static-pv.yaml <<\EOF
46 | apiVersion: v1
47 | kind: PersistentVolume
48 | metadata:
49 | name: mysql-static-pv
50 | spec:
51 | capacity:
52 | storage: 80Gi
53 |
54 | accessModes:
55 | - ReadWriteOnce
56 | #ReadWriteOnce - the volume can be mounted read-write by a single node
57 | #ReadOnlyMany - the volume can be mounted read-only by many nodes
58 | #ReadWriteMany - the volume can be mounted read-write by many nodes
59 |
60 | persistentVolumeReclaimPolicy: Retain
61 | #Retain - keep the volume and its data (manual cleanup required)
62 | #Recycle - delete the data, i.e. rm -rf /thevolume/* (only NFS and HostPath support this)
63 | #Delete - delete the underlying storage, e.g. the AWS EBS volume (only AWS EBS, GCE PD, Azure Disk and Cinder support this)
64 |
65 | nfs:
66 | path: /data/nfs/mysql/
67 | server: 10.198.1.155
68 | mountOptions:
69 | - vers=4
70 | - minorversion=0
71 | - noresvport
72 | EOF
73 |
74 | # Apply the PV to the cluster
75 | kubectl apply -f mysql-static-pv.yaml
76 |
77 | # Check the PV
78 | $ kubectl get pv
79 | NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
80 | mysql-static-pv 80Gi RWO Retain Available 4m20s
81 | ```
82 |
83 | ## 2. Create the PVC
84 |
85 | ```bash
86 | # Clean up any existing PVC
87 | kubectl delete -f mysql-pvc.yaml
88 |
89 | # Write the PVC manifest
90 | cat > mysql-pvc.yaml <<\EOF
91 | apiVersion: v1
92 | kind: PersistentVolumeClaim
93 | metadata:
94 | name: mysql-static-pvc
95 | spec:
96 | accessModes:
97 | - ReadWriteOnce
98 | resources:
99 | requests:
100 | storage: 80Gi
101 | EOF
102 |
103 | # Create the PVC
104 | kubectl apply -f mysql-pvc.yaml
105 |
106 | # Check the PVC
107 | $ kubectl get pvc
108 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
109 | mysql-static-pvc Bound pvc-c55f8695-2a0b-4127-a60b-5c1aba8b9104 80Gi RWO nfs-storage 81s
110 | ```
111 |
112 | # 3. Deploy MySQL
113 |
114 | ## 1. The MySQL manifest mysql.yaml:
115 |
116 | ```bash
117 | kubectl delete -f mysql.yaml
118 |
119 | cat >mysql.yaml<<\EOF
120 | apiVersion: v1
121 | kind: Service
122 | metadata:
123 | name: mysql
124 | spec:
125 | ports:
126 | - port: 3306
127 | selector:
128 | app: mysql
129 | ---
130 | apiVersion: extensions/v1beta1
131 | kind: Deployment
132 | metadata:
133 | name: mysql
134 | spec:
135 | selector:
136 | matchLabels:
137 | app: mysql
138 | template:
139 | metadata:
140 | labels:
141 | app: mysql
142 | spec:
143 | containers:
144 | - name: mysql
145 | image: mysql:5.6
146 | env:
147 | - name: MYSQL_ROOT_PASSWORD
148 | value: password
149 | ports:
150 | - name: mysql
151 | containerPort: 3306
152 | volumeMounts:
153 | - name: mysql-persistent-storage
154 | mountPath: /var/lib/mysql
155 | volumes:
156 | - name: mysql-persistent-storage
157 | persistentVolumeClaim:
158 | claimName: mysql-static-pvc
159 | EOF
160 |
161 | kubectl apply -f mysql.yaml
162 |
163 | # The PV mysql-static-pv bound by the PVC mysql-static-pvc will be mounted at MySQL's data directory /var/lib/mysql.
164 | ```
165 |
166 | ## 2. Write data into MySQL
167 |
168 | MySQL is scheduled onto k8s-node02; below we access the Service mysql through a client:
169 |
170 | ```bash
171 | $ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
172 | If you don't see a command prompt, try pressing enter.
173 | mysql>
174 |
175 | In the mysql database we create a table myid and insert a few rows.
176 |
177 | mysql> use mysql
178 | Database changed
179 |
180 | mysql> drop table myid;
181 | Query OK, 0 rows affected (0.12 sec)
182 |
183 | mysql> create table myid(id int(4));
184 | Query OK, 0 rows affected (0.23 sec)
185 |
186 | mysql> insert myid values(888);
187 | Query OK, 1 row affected (0.03 sec)
188 |
189 | mysql> select * from myid;
190 | +------+
191 | | id |
192 | +------+
193 | | 888 |
194 | +------+
195 | 1 row in set (0.00 sec)
196 | ```
197 |
198 | ## 3. Failover
199 |
200 | Now we power off the node02 machine to simulate a node failure.
201 |
202 |
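If powering off the VM is inconvenient, a similar effect can be produced from kubectl (a sketch; the node name k8s-node02 is assumed, and a drain is an orderly eviction rather than a real crash):

```bash
# Mark the node unschedulable and evict its pods; the Deployment recreates MySQL on another node
kubectl cordon k8s-node02
kubectl drain k8s-node02 --ignore-daemonsets --delete-local-data
```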
203 | ```bash
204 | 1. After a while, Kubernetes migrates MySQL to k8s-node01
205 |
206 | $ kubectl get pod -o wide
207 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
208 | mysql-7686899cf9-8z6tc 1/1 Running 0 21s 10.244.1.19 node01
209 | mysql-7686899cf9-d4m42 1/1 Terminating 0 23m 10.244.2.17 node02
210 |
211 | 2. Verify data consistency
212 |
213 | $ kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
214 | If you don't see a command prompt, try pressing enter.
215 | mysql> use mysql
216 | Reading table information for completion of table and column names
217 | You can turn off this feature to get a quicker startup with -A
218 |
219 | Database changed
220 | mysql> select * from myid;
221 | +------+
222 | | id |
223 | +------+
224 | | 888 |
225 | +------+
226 | 1 row in set (0.00 sec)
227 |
228 | 3. The MySQL service is back and the data is intact; we can also inspect the database files generated on the storage node.
229 |
230 | [root@nfs_server mysql-pv]# ll
231 | -rw-rw---- 1 systemd-bus-proxy ssh_keys 56 12月 14 09:53 auto.cnf
232 | -rw-rw---- 1 systemd-bus-proxy ssh_keys 12582912 12月 14 10:15 ibdata1
233 | -rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 10:15 ib_logfile0
234 | -rw-rw---- 1 systemd-bus-proxy ssh_keys 50331648 12月 14 09:53 ib_logfile1
235 | drwx------ 2 systemd-bus-proxy ssh_keys 4096 12月 14 10:05 mysql
236 | drwx------ 2 systemd-bus-proxy ssh_keys 4096 12月 14 09:53 performance_schema
237 | ```
238 |
239 | # 4. Using a fresh namespace
240 |
241 | A PV is cluster-wide; a PVC can be created in a specific namespace
242 |
243 | ```bash
244 | kubectl delete ns test-ns
245 |
246 | kubectl create ns test-ns
247 |
248 | kubectl apply -f mysql-pvc.yaml -n test-ns
249 |
250 | kubectl apply -f mysql.yaml -n test-ns
251 |
252 | kubectl get pods -n test-ns -o wide
253 |
254 | kubectl -n test-ns logs -f $(kubectl get pods -n test-ns|grep mysql|awk '{print $1}')
255 |
256 | kubectl run -n test-ns -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword
257 | ```
258 |
259 | References:
260 |
261 | https://blog.51cto.com/wzlinux/2330295 MySQL persistent storage and failover on Kubernetes (part 11)
262 |
263 | https://qingmu.io/2019/08/11/Run-mysql-on-kubernetes/ Deploying MySQL: a look at stateful services, PV and PVC
264 |
--------------------------------------------------------------------------------
/components/external-storage/README.md:
--------------------------------------------------------------------------------
1 | PersistentVolume (PV): an abstraction over creating and consuming storage resources, so that storage is managed as a cluster resource
2 |
3 | PVs can be provisioned statically or dynamically; dynamic provisioning creates PVs automatically
4 |
5 | PersistentVolumeClaim (PVC): lets users ignore the implementation details of the underlying Volume
6 |
7 | The relationship between containers, PV and PVC is shown in the figure below:
8 |
9 |
10 |
11 | In short, the PV is the provider and the PVC is the consumer; consuming a PV means binding to it
12 |
13 | # Issue 1
14 |
15 | The PV mounts fine, but the PVC stays in the Pending state
16 |
17 | ```bash
18 | # Create the PVC in the test namespace
19 | $ kubectl get pvc -n test
20 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
21 | nfs-pvc Pending <-- stuck in Pending nfs-storage 10s
22 |
23 | # Inspect the events
24 | $ kubectl describe pvc nfs-pvc -n test
25 | failed to provision volume with StorageClass "nfs-storage": claim Selector is not supported
26 | # The event shows the problem is in the label selector
27 | ```
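A sketch of the fix (assuming the claim was meant to be provisioned dynamically by the nfs-storage StorageClass: dynamic provisioning does not support a selector, so the selector block is simply removed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  namespace: test
spec:
  storageClassName: nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  # no "selector:" here -- dynamically provisioned claims must not use one
```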
28 |
29 | References:
30 |
31 | https://blog.csdn.net/qq_25611295/article/details/86065053 Kubernetes PV and PVC persistent storage (static and dynamic)
32 |
33 | https://www.jianshu.com/p/5e565a8049fc Deploying NFS persistent storage on Kubernetes (static and dynamic)
34 |
--------------------------------------------------------------------------------
/components/heapster/README.md:
--------------------------------------------------------------------------------
1 | # 1. Symptom
2 | heapster has already been deprecated by Kubernetes
3 | ```bash
4 | What does this error in the heapster logs mean?
5 | E0918 16:56:05.022867 1 manager.go:101] Error in scraping containers from kubelet_summary:10.10.188.242:10255: Get http://10.10.188.242:10255/stats/summary/: dial tcp 10.10.188.242:10255: getsockopt: connection refused
6 | ```
7 | # 2. Troubleshooting
8 |
9 |
10 | ```
11 | 1. Check kubelet first; 10255 is the port it exposes
12 |
13 | service kubelet status # the status looks normal
14 |
15 | # Run on 10.10.188.242
16 | [root@localhost ~]# netstat -lnpt | grep 10255
17 | tcp 0 0 10.10.188.240:10255 0.0.0.0:* LISTEN 9243/kubelet
18 |
19 | The logs under /var/log/pods/kube-system_heapster-5f848f54bc-rtbv4_abf53b7c-491f-472a-9e8b-815066a6ae3d/heapster show connection refused on 10255 for every physical node
20 |
21 |
22 | 2. Check the data in a browser
23 |
24 | 10.10.188.242 should be one of your node IPs; normally opening http://IP:10255/stats/summary in a browser returns data. Check it, and if nothing comes back the kubelet configuration is the problem
25 |
26 | ```
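The same check can be reproduced from any node with curl (a sketch; the IP is the node from the error above):

```bash
# 10255 is the kubelet read-only port; "connection refused" usually means it is
# disabled (kubelet --read-only-port=0) or blocked by a firewall
curl http://10.10.188.242:10255/stats/summary
```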
27 | 
28 |
29 | 
30 |
--------------------------------------------------------------------------------
/components/ingress/0.通俗理解Kubernetes中Service、Ingress与Ingress Controller的作用与关系.md:
--------------------------------------------------------------------------------
1 | # 1. In plain terms:
2 |
3 | - 1. A Service is an abstraction over real backend services; one Service can represent several identical backends
4 |
5 | - 2. An Ingress is a set of reverse-proxy rules that decide which Service an HTTP/S request is forwarded to, for example routing requests to different Services depending on the Host header and URL path
6 |
7 | - 3. An Ingress Controller is the reverse-proxy program that interprets the Ingress rules. Whenever an Ingress is added, changed or removed, every Ingress Controller promptly updates its own forwarding rules; when the Ingress Controller receives a request, it forwards it to the matching Service according to those rules
8 |
9 | # 2. Data flow
10 |
11 | Kubernetes does not ship with an Ingress Controller; Ingress is only a specification with several implementations that you install yourself, most commonly the Nginx Ingress Controller and the Traefik Ingress Controller. So an Ingress is an abstract set of forwarding rules, and the Ingress Controller implementation forwards requests to the corresponding Service according to those rules. The diagram below should make this easier to follow:
12 |
13 |
14 |
15 | As the diagram shows, the Ingress Controller receives a request, matches it against the Ingress rules and forwards it to the backend Service; that Service may represent several Pods, one of which is picked to finally handle the request.
16 |
17 | # 3. Exposing the Ingress Controller
18 |
19 | You may ask: the Ingress Controller has to accept outside traffic, yet it runs inside the cluster, so how is the Ingress Controller itself made reachable from outside? There are several options:
20 |
21 | - 1. Deploy the Ingress Controller as a Deployment and give it a Service of type LoadBalancer. An IP address is allocated automatically, and it is usually highly available (this requires LoadBalancer support, typically only on cloud providers; self-built clusters normally lack it)
22 |
23 | - 2. Use one or several cluster nodes as edge nodes and label them. Deploy the Ingress Controller as a DaemonSet bound to the edge nodes with a nodeSelector, so each edge node runs one instance, and expose ports on those hosts directly with hostPort; external clients then reach the Ingress Controller through the edge nodes' ports
24 |
25 | - 3. Deploy the Ingress Controller as a Deployment with a Service of type NodePort. After deployment, kubectl get svc shows the allocated port, which is reachable on every node of the cluster. But with many nodes, and the port not being 80/443, this is clumsy, so usually a load balancer such as Nginx is put in front to forward requests to that port on the cluster nodes; hitting the Nginx is then equivalent to hitting the Ingress Controller
26 |
27 | The first two options are generally the recommended ones.
28 |
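To tie the three pieces together, here is a minimal Ingress sketch in the API version used elsewhere in this repo (the host name and backend Service are illustrative assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"   # which Ingress Controller should handle this rule
spec:
  rules:
  - host: demo.k8s.com                # route by Host header
    http:
      paths:
      - path: /                       # route by URL path
        backend:
          serviceName: demo-service   # the Service behind this rule
          servicePort: 80
```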
29 | References:
30 |
31 | https://cloud.tencent.com/developer/article/1326535 Understanding the roles of Service, Ingress and Ingress Controller in Kubernetes
32 |
--------------------------------------------------------------------------------
/components/ingress/2.ingress tls配置.md:
--------------------------------------------------------------------------------
1 | # 1. Ingress TLS
2 |
3 | The previous lesson showed how to install and use Traefik and how to configure a simple Ingress; this lesson covers Ingress TLS and how to use paths in an Ingress object.
4 |
5 | # 2. TLS certificates
6 |
7 | These days most services are accessed over HTTPS. In this lesson we use a self-signed certificate; a certificate bought from a proper CA is of course better, since browsers will then trust your service for everyone. Generate the certificate with the following openssl command:
8 | ```
9 | mkdir -p /ssl/
10 | cd /ssl/
11 | openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
12 | ```
13 | Now that we have the certificate, we can store it in a secret object created with kubectl:
14 | ```
15 | kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
16 | ```
17 | # 3. Configuring Traefik
18 |
19 | So far we have used Traefik's default configuration; now we configure Traefik to support https:
20 |
21 | ```
22 | mkdir -p /config/
23 | cd /config/
24 |
25 | cat > traefik.toml <<\EOF
26 | defaultEntryPoints = ["http", "https"]
27 |
28 | [entryPoints]
29 | [entryPoints.http]
30 | address = ":80"
31 | [entryPoints.http.redirect]
32 | entryPoint = "https"
33 | [entryPoints.https]
34 | address = ":443"
35 | [entryPoints.https.tls]
36 | [[entryPoints.https.tls.certificates]]
37 | CertFile = "/ssl/tls.crt"
38 | KeyFile = "/ssl/tls.key"
39 | EOF
40 |
41 | In the configuration above we define the http and https entry points and force a redirect from http to https, so every service that passes through traefik is served over https. Serving https naturally requires the certificates, and you can see we point at the CertFile and KeyFile files. The traefik pod does not contain those two files, so we have to mount the certificates generated above into the Pod; remember that a secret object can be mounted into a Pod as a volume. And how do we make traefik.toml available inside the traefik pod? Remember ConfigMap? We can mount the traefik.toml configuration file into the traefik pod through a ConfigMap object:
42 |
43 | kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
44 |
45 | root># kubectl get configmap -n kube-system
46 | NAME DATA AGE
47 | coredns 1 11h
48 | extension-apiserver-authentication 6 11h
49 | kube-flannel-cfg 2 11h
50 | kube-proxy 2 11h
51 | kubeadm-config 2 11h
52 | kubelet-config-1.15 1 11h
53 | traefik-conf 1 10s
54 |
55 | Now we can update the traefik pod yaml from the previous lesson:
56 |
57 | cd /data/components/ingress/
58 |
59 | cat > traefik.yaml <<\EOF
60 | kind: Deployment
61 | apiVersion: extensions/v1beta1
62 | metadata:
63 | name: traefik-ingress-controller
64 | namespace: kube-system
65 | labels:
66 | k8s-app: traefik-ingress-lb
67 | spec:
68 | replicas: 1
69 | selector:
70 | matchLabels:
71 | k8s-app: traefik-ingress-lb
72 | template:
73 | metadata:
74 | labels:
75 | k8s-app: traefik-ingress-lb
76 | name: traefik-ingress-lb
77 | spec:
78 | serviceAccountName: traefik-ingress-controller
79 | terminationGracePeriodSeconds: 60
80 | volumes:
81 | - name: ssl
82 | secret:
83 | secretName: traefik-cert
84 | - name: config
85 | configMap:
86 | name: traefik-conf
87 | tolerations:
88 | - operator: "Exists"
89 | nodeSelector:
90 | kubernetes.io/hostname: linux-node1.example.com
91 | containers:
92 | - image: traefik
93 | name: traefik-ingress-lb
94 | volumeMounts:
95 | - mountPath: "/ssl" #这里注意挂载的路径
96 | name: "ssl"
97 | - mountPath: "/config" #这里注意挂载的路径
98 | name: "config"
99 | ports:
100 | - name: http
101 | containerPort: 80
102 | hostPort: 80
103 | - name: https
104 | containerPort: 443
105 | hostPort: 443
106 | - name: admin
107 | containerPort: 8080
108 | args:
109 | - --configfile=/config/traefik.toml
110 | - --api
111 | - --kubernetes
112 | - --logLevel=INFO
113 | EOF
114 |
115 | Compared with before, we added the 443 port and, in the startup arguments, pointed --configfile at the traefik.toml configuration file, which is mounted in through a volume. Now update the traefik pod:
116 |
117 | kubectl apply -f traefik.yaml
118 | kubectl logs -f traefik-ingress-controller-7dcfd9c6df-v58k7 -n kube-system
119 |
120 | After the update, check the traefik pod logs; log lines similar to the above mean the update succeeded. Visiting the traefik dashboard now redirects to the https address, and the browser warns about the certificate because it is self-signed and not trusted; with a certificate bought from a proper CA the warning would not appear and you would see the usual green lock:
121 |
122 | https://traefik.k8s.com/dashboard/
123 | ```
124 |
125 | # 4. Configuring the Ingress
126 |
127 | The TLS setup above is in fact already working. Next we use an example to show how paths are used in an Ingress. We deploy three simple web services, each identified by an environment variable: (backend.yaml)
128 |
129 | ```
130 | cd /data/components/ingress/
131 |
132 | cat > backend.yaml <<\EOF
133 | kind: Deployment
134 | apiVersion: extensions/v1beta1
135 | metadata:
136 | name: svc1
137 | spec:
138 | replicas: 1
139 | template:
140 | metadata:
141 | labels:
142 | app: svc1
143 | spec:
144 | containers:
145 | - name: svc1
146 | image: cnych/example-web-service
147 | env:
148 | - name: APP_SVC
149 | value: svc1
150 | ports:
151 | - containerPort: 8080
152 | protocol: TCP
153 |
154 | ---
155 | kind: Deployment
156 | apiVersion: extensions/v1beta1
157 | metadata:
158 | name: svc2
159 | spec:
160 | replicas: 1
161 | template:
162 | metadata:
163 | labels:
164 | app: svc2
165 | spec:
166 | containers:
167 | - name: svc2
168 | image: cnych/example-web-service
169 | env:
170 | - name: APP_SVC
171 | value: svc2
172 | ports:
173 | - containerPort: 8080
174 | protocol: TCP
175 |
176 | ---
177 | kind: Deployment
178 | apiVersion: extensions/v1beta1
179 | metadata:
180 | name: svc3
181 | spec:
182 | replicas: 1
183 | template:
184 | metadata:
185 | labels:
186 | app: svc3
187 | spec:
188 | containers:
189 | - name: svc3
190 | image: cnych/example-web-service
191 | env:
192 | - name: APP_SVC
193 | value: svc3
194 | ports:
195 | - containerPort: 8080
196 | protocol: TCP
197 |
198 | ---
199 | kind: Service
200 | apiVersion: v1
201 | metadata:
202 | labels:
203 | app: svc1
204 | name: svc1
205 | spec:
206 | type: ClusterIP
207 | ports:
208 | - port: 8080
209 | name: http
210 | selector:
211 | app: svc1
212 |
213 | ---
214 | kind: Service
215 | apiVersion: v1
216 | metadata:
217 | labels:
218 | app: svc2
219 | name: svc2
220 | spec:
221 | type: ClusterIP
222 | ports:
223 | - port: 8080
224 | name: http
225 | selector:
226 | app: svc2
227 |
228 | ---
229 | kind: Service
230 | apiVersion: v1
231 | metadata:
232 | labels:
233 | app: svc3
234 | name: svc3
235 | spec:
236 | type: ClusterIP
237 | ports:
238 | - port: 8080
239 | name: http
240 | selector:
241 | app: svc3
242 | EOF
243 |
244 | As you can see, we define three Deployments, each with a corresponding Service:
245 |
246 | kubectl create -f backend.yaml
247 |
248 | Then we create an Ingress object to reach the three services above: (example-ingress.yaml)
249 |
250 | cat > example-ingress.yaml <<\EOF
251 | apiVersion: extensions/v1beta1
252 | kind: Ingress
253 | metadata:
254 | name: example-web-app
255 | annotations:
256 | kubernetes.io/ingress.class: "traefik"
257 | spec:
258 | rules:
259 | - host: example.k8s.com
260 | http:
261 | paths:
262 | - path: /s1
263 | backend:
264 | serviceName: svc1
265 | servicePort: 8080
266 | - path: /s2
267 | backend:
268 | serviceName: svc2
269 | servicePort: 8080
270 | - path: /
271 | backend:
272 | serviceName: svc3
273 | servicePort: 8080
274 | EOF
275 |
276 |
277 | Note that, unlike before, this Ingress object adds path definitions; if a path is not specified the default is '/'. Create the Ingress object:
278 |
279 | kubectl create -f example-ingress.yaml
280 |
281 | Now add a local hosts entry for the domain example.k8s.com and open it in a browser; you will see that it also redirects to the https page by default:
282 | ```
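The path routing can also be checked from the command line (a sketch; the edge-node IP below is an assumption, use the node where the traefik pod actually runs):

```bash
# Point the test host name at the Traefik node
echo "192.168.56.11 example.k8s.com" >> /etc/hosts

# -k skips verification because the certificate is self-signed
curl -k https://example.k8s.com/s1   # answered by svc1
curl -k https://example.k8s.com/s2   # answered by svc2
curl -k https://example.k8s.com/     # falls through to svc3
```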
283 |
284 | Reference:
285 |
286 | https://www.qikqiak.com/k8s-book/docs/41.ingress%20config.html
287 |
--------------------------------------------------------------------------------
/components/ingress/3.ingress-http使用示例.md:
--------------------------------------------------------------------------------
1 | # 1. ingress-http example
2 |
3 | ## 1. Three key points:
4 |
5 | Note that these three resources must share the same namespace: kube-system
6 |
7 | Deployment
8 |
9 | Service
10 |
11 | Ingress
12 |
13 | ```
14 | $ vim nginx-deployment-http.yaml
15 |
16 | ---
17 | apiVersion: apps/v1beta1
18 | kind: Deployment
19 | metadata:
20 | name: nginx-deployment
21 | namespace: kube-system
22 | spec:
23 | replicas: 2
24 | template:
25 | metadata:
26 | labels:
27 | app: nginx-pod
28 | spec:
29 | containers:
30 | - name: nginx
31 | image: nginx:1.15.5
32 | ports:
33 | - containerPort: 80
34 | ---
35 | apiVersion: v1
36 | kind: Service
37 | metadata:
38 | name: nginx-service
39 | namespace: kube-system
40 | annotations:
41 | traefik.ingress.kubernetes.io/load-balancer-method: drr #动态加权轮训调度
42 | spec:
43 | template:
44 | metadata:
45 | labels:
46 | name: nginx-service
47 | spec:
48 | selector:
49 | app: nginx-pod
50 | ports:
51 | - port: 80
52 | targetPort: 80
53 | ---
54 | apiVersion: extensions/v1beta1
55 | kind: Ingress
56 | metadata:
57 | name: nginx-ingress
58 | namespace: kube-system
59 | annotations:
60 | kubernetes.io/ingress.class: traefik
61 | spec:
62 | rules:
63 | - host: k8s.nginx.com
64 | http:
65 | paths:
66 | - backend:
67 | serviceName: nginx-service
68 | servicePort: 80
69 | ```
70 |
71 | ## 2. Create the resources
72 |
73 | ```
74 | $ kubectl apply -f nginx-deployment-http.yaml
75 |
76 | deployment.apps/nginx-pod created
77 | service/nginx-service created
78 | ingress.extensions/nginx-ingress created
79 | ```
80 |
81 | ## 3. Access the resources just created
82 |
83 | First find out which node the traefik-ingress pod was scheduled to; here it landed on 10.199.1.159, so we map the host name to that node's public IP, assumed here to be 16.21.26.139
84 |
85 | ```
86 | 16.21.26.139 k8s.nginx.com
87 | ```
88 |
89 | ```
90 | $ kubectl get pod -A -o wide|grep traefik-ingress
91 | kube-system traefik-ingress-controller-7d454d7c68-8qpjq 1/1 Running 0 21h 10.46.2.10 10.199.1.159
92 | ```
93 |
94 | 
95 |
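Instead of editing the hosts file, the routing can also be checked directly with curl by overriding the Host header (a sketch; the IP is the assumed public IP from above):

```bash
# Send the request to the edge node while presenting the Ingress host name
curl -H "Host: k8s.nginx.com" http://16.21.26.139/
```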
96 |
97 | ## 4. Clean up
98 |
99 | ### 1. Delete the deployment
100 | ```
101 | # List deployments
102 | $ kubectl get deploy -A
103 |
104 | NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
105 | kube-system coredns 2/2 2 2 3d
106 | kube-system heapster 1/1 1 1 3d
107 | kube-system kubernetes-dashboard 1/1 1 1 3d
108 | kube-system metrics-server 1/1 1 1 3d
109 | kube-system nginx-pod 2/2 2 2 25m
110 | kube-system traefik-ingress-controller 1/1 1 1 2d22h
111 |
112 | # Delete the deployment
113 | $ kubectl delete deploy nginx-pod -n kube-system
114 |
115 | deployment.extensions "nginx-pod" deleted
116 | ```
117 |
118 | ### 2. Delete the service
119 | ```
120 | # List services
121 | $ kubectl get svc -A
122 |
123 | NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
124 | default kubernetes ClusterIP 10.44.0.1 443/TCP 3d
125 | kube-system heapster ClusterIP 10.44.158.46 80/TCP 3d
126 | kube-system kube-dns ClusterIP 10.44.0.2 53/UDP,53/TCP,9153/TCP 3d
127 | kube-system kubernetes-dashboard NodePort 10.44.176.99 443:27008/TCP 3d
128 | kube-system metrics-server ClusterIP 10.44.40.157 443/TCP 3d
129 | kube-system nginx-service ClusterIP 10.44.148.252 80/TCP 28m
130 | kube-system traefik-ingress-service NodePort 10.44.67.195 80:23456/TCP,443:23457/TCP,8080:33192/TCP 2d22h
131 |
132 | # Delete the service
133 | $ kubectl delete svc nginx-service -n kube-system
134 |
135 | service "nginx-service" deleted
136 | ```
137 |
138 | ### 3. Delete the ingress
139 |
140 | ```
141 | # List ingresses
142 | $ kubectl get ingress -A
143 |
144 | NAMESPACE NAME HOSTS ADDRESS PORTS AGE
145 | kube-system kubernetes-dashboard dashboard.test.com 80 2d22h
146 | kube-system nginx-ingress k8s.nginx.com 80 29m
147 | kube-system traefik-web-ui traefik-ui.test.com 80 2d22h
148 |
149 | # Delete the ingress
150 | $ kubectl delete ingress nginx-ingress -n kube-system
151 |
152 | ingress.extensions "nginx-ingress" deleted
153 | ```
154 |
155 |
156 | References:
157 |
158 | https://xuchao918.github.io/2019/03/01/Kubernetes-traefik-ingress%E4%BD%BF%E7%94%A8/ Using Kubernetes traefik ingress
159 |
--------------------------------------------------------------------------------
/components/ingress/5.hello-tls.md:
--------------------------------------------------------------------------------
1 | # Certificate files
2 |
3 | 1. Generate the certificates
4 | ```
5 | mkdir -p /ssl/{default,first,second}
6 | cd /ssl/default/
7 | openssl req -x509 -nodes -days 165 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s.test.com"
8 | kubectl -n kube-system create secret tls traefik-cert --key=tls.key --cert=tls.crt
9 |
10 | cd /ssl/first/
11 | openssl req -x509 -nodes -days 265 -newkey rsa:2048 -keyout tls_first.key -out tls_first.crt -subj "/CN=k8s.first.com"
12 | kubectl create secret generic first-k8s --from-file=tls_first.crt --from-file=tls_first.key -n kube-system
13 |
14 | cd /ssl/second/
15 | openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls_second.key -out tls_second.crt -subj "/CN=k8s.second.com"
16 | kubectl create secret generic second-k8s --from-file=tls_second.crt --from-file=tls_second.key -n kube-system
17 |
18 | # Inspect the certificates
19 | kubectl get secret traefik-cert first-k8s second-k8s -n kube-system
20 | kubectl describe secret traefik-cert first-k8s second-k8s -n kube-system
21 | ```
22 |
23 | 2. Delete the certificates
24 |
25 | ```
26 | $ kubectl delete secret traefik-cert first-k8s second-k8s -n kube-system
27 |
28 | secret "second-k8s" deleted
29 | secret "traefik-cert" deleted
30 | secret "first-k8s" deleted
31 | ```
32 |
33 | # Certificate configuration
34 |
35 | 1. Create the ConfigMap (cm)
36 |
37 | ```
38 | mkdir -p /config/
39 | cd /config/
40 |
41 | $ vim traefik.toml
42 | defaultEntryPoints = ["http", "https"]
43 | [entryPoints]
44 | [entryPoints.http]
45 | address = ":80"
46 | [entryPoints.https]
47 | address = ":443"
48 | [entryPoints.https.tls]
49 | [[entryPoints.https.tls.certificates]]
50 | CertFile = "/ssl/default/tls.crt"
51 | KeyFile = "/ssl/default/tls.key"
52 | [[entryPoints.https.tls.certificates]]
53 | CertFile = "/ssl/first/tls_first.crt"
54 | KeyFile = "/ssl/first/tls_first.key"
55 | [[entryPoints.https.tls.certificates]]
56 | CertFile = "/ssl/second/tls_second.crt"
57 | KeyFile = "/ssl/second/tls_second.key"
58 |
59 | $ kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system
60 |
61 | $ kubectl get configmap traefik-conf -n kube-system
62 |
63 | $ kubectl describe cm traefik-conf -n kube-system
64 | ```
65 | 2. Delete the ConfigMap (cm)
66 |
67 | ```
68 | $ kubectl delete cm traefik-conf -n kube-system
69 | ```
70 |
71 | # traefik-ingress-controller manifest
72 |
73 | 1. Create the manifest
74 | ```
75 | $ vim traefik-controller-tls.yaml
76 | ---
77 | apiVersion: v1
78 | kind: ConfigMap
79 | metadata:
80 | name: traefik-conf
81 | namespace: kube-system
82 | data:
83 | traefik.toml: |
84 | insecureSkipVerify = true
85 | defaultEntryPoints = ["http", "https"]
86 | [entryPoints]
87 | [entryPoints.http]
88 | address = ":80"
89 | [entryPoints.https]
90 | address = ":443"
91 | [entryPoints.https.tls]
92 | [[entryPoints.https.tls.certificates]]
93 | CertFile = "/ssl/default/tls.crt"
94 | KeyFile = "/ssl/default/tls.key"
95 | [[entryPoints.https.tls.certificates]]
96 | CertFile = "/ssl/first/tls_first.crt"
97 | KeyFile = "/ssl/first/tls_first.key"
98 | [[entryPoints.https.tls.certificates]]
99 | CertFile = "/ssl/second/tls_second.crt"
100 | KeyFile = "/ssl/second/tls_second.key"
101 | ---
102 | kind: Deployment
103 | apiVersion: apps/v1beta1
104 | metadata:
105 | name: traefik-ingress-controller
106 | namespace: kube-system
107 | labels:
108 | k8s-app: traefik-ingress-lb
109 | spec:
110 | replicas: 1
111 | selector:
112 | matchLabels:
113 | k8s-app: traefik-ingress-lb
114 | template:
115 | metadata:
116 | labels:
117 | k8s-app: traefik-ingress-lb
118 | name: traefik-ingress-lb
119 | spec:
120 | serviceAccountName: traefik-ingress-controller
121 | terminationGracePeriodSeconds: 60
122 | volumes:
123 | - name: ssl
124 | secret:
125 | secretName: traefik-cert
126 | - name: config
127 | configMap:
128 | name: traefik-conf
129 | #nodeSelector:
130 | # node-role.kubernetes.io/traefik: "true"
131 | containers:
132 | - image: traefik:v1.7.12
133 | imagePullPolicy: IfNotPresent
134 | name: traefik-ingress-lb
135 | volumeMounts:
136 | - mountPath: "/ssl"
137 | name: "ssl"
138 | - mountPath: "/config"
139 | name: "config"
140 | resources:
141 | limits:
142 | cpu: 1000m
143 | memory: 800Mi
144 | requests:
145 | cpu: 500m
146 | memory: 600Mi
147 | args:
148 | - --configfile=/config/traefik.toml
149 | - --api
150 | - --kubernetes
151 | - --logLevel=INFO
152 | securityContext:
153 | capabilities:
154 | drop:
155 | - ALL
156 | add:
157 | - NET_BIND_SERVICE
158 | ports:
159 | - name: http
160 | containerPort: 80
161 | hostPort: 80
162 | - name: https
163 | containerPort: 443
164 | hostPort: 443
165 | ---
166 | kind: Service
167 | apiVersion: v1
168 | metadata:
169 | name: traefik-ingress-service
170 | namespace: kube-system
171 | spec:
172 | selector:
173 | k8s-app: traefik-ingress-lb
174 | ports:
175 | - protocol: TCP
176 | # this is the service port of the traefik ingress-controller
177 | port: 80
178 | # NODE_PORT_RANGE set in the cluster hosts file defines the usable NodePort range
179 | # pick a free port from the default 20000-40000 range to expose the ingress-controller externally
180 | nodePort: 23456
181 | name: http
182 | - protocol: TCP
183 | #
184 | port: 443
185 | nodePort: 23457
186 | name: https
187 | - protocol: TCP
188 | # this port serves the traefik admin web UI
189 | port: 8080
190 | name: admin
191 | type: NodePort
192 | ---
193 | kind: ClusterRole
194 | apiVersion: rbac.authorization.k8s.io/v1
195 | metadata:
196 | name: traefik-ingress-controller
197 | rules:
198 | - apiGroups:
199 | - ""
200 | resources:
201 | - pods
202 | - services
203 | - endpoints
204 | - secrets
205 | verbs:
206 | - get
207 | - list
208 | - watch
209 | - apiGroups:
210 | - extensions
211 | resources:
212 | - ingresses
213 | verbs:
214 | - get
215 | - list
216 | - watch
217 | ---
218 | kind: ClusterRoleBinding
219 | apiVersion: rbac.authorization.k8s.io/v1
220 | metadata:
221 | name: traefik-ingress-controller
222 | roleRef:
223 | apiGroup: rbac.authorization.k8s.io
224 | kind: ClusterRole
225 | name: traefik-ingress-controller
226 | subjects:
227 | - kind: ServiceAccount
228 | name: traefik-ingress-controller
229 | namespace: kube-system
230 | ---
231 | apiVersion: v1
232 | kind: ServiceAccount
233 | metadata:
234 | name: traefik-ingress-controller
235 | namespace: kube-system
236 | ```
237 |
238 | 2. Apply it
239 | ```
240 | $ kubectl apply -f traefik-controller-tls.yaml
241 |
242 | configmap/traefik-conf created
243 | deployment.apps/traefik-ingress-controller created
244 | service/traefik-ingress-service created
245 | clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
246 | clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
247 | serviceaccount/traefik-ingress-controller created
248 |
249 | # Delete the resources
250 | $ kubectl delete -f traefik-controller-tls.yaml
251 | ```
252 | # Test Deployment and Ingress
253 | ```
254 | $ vim nginx-ingress-deploy.yaml
255 | ---
256 | apiVersion: apps/v1beta1
257 | kind: Deployment
258 | metadata:
259 | name: nginx-deployment
260 | namespace: kube-system
261 | spec:
262 | replicas: 2
263 | template:
264 | metadata:
265 | labels:
266 | app: nginx-pod
267 | spec:
268 | containers:
269 | - name: nginx
270 | image: nginx:1.15.5
271 | ports:
272 | - containerPort: 80
273 | ---
274 | apiVersion: v1
275 | kind: Service
276 | metadata:
277 | name: nginx-service
278 | namespace: kube-system
279 | annotations:
280 | traefik.ingress.kubernetes.io/load-balancer-method: drr #动态加权轮训调度
281 | spec:
282 | template:
283 | metadata:
284 | labels:
285 | name: nginx-service
286 | spec:
287 | selector:
288 | app: nginx-pod
289 | ports:
290 | - port: 80
291 | targetPort: 80
292 | ---
293 | apiVersion: extensions/v1beta1
294 | kind: Ingress
295 | metadata:
296 | name: nginx-ingress
297 | namespace: kube-system
298 | annotations:
299 | kubernetes.io/ingress.class: traefik
300 | spec:
301 | tls:
302 | - secretName: first-k8s
303 | - secretName: second-k8s
304 | rules:
305 | - host: k8s.first.com
306 | http:
307 | paths:
308 | - backend:
309 | serviceName: nginx-service
310 | servicePort: 80
311 | - host: k8s.second.com
312 | http:
313 | paths:
314 | - backend:
315 | serviceName: nginx-service
316 | servicePort: 80
317 |
318 | $ kubectl apply -f nginx-ingress-deploy.yaml
319 | $ kubectl delete -f nginx-ingress-deploy.yaml
320 | ```
321 |
--------------------------------------------------------------------------------
/components/ingress/README.md:
--------------------------------------------------------------------------------
1 | References:
2 |
3 | https://segmentfault.com/a/1190000019908991 How Kubernetes Ingress works, plus deploying and testing ingress-nginx
4 |
5 | https://www.cnblogs.com/tchua/p/11174386.html Highly available Ingress deployment for a Kubernetes cluster
6 |
--------------------------------------------------------------------------------
/components/ingress/nginx-ingress/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/components/ingress/traefik-ingress/1.traefik反向代理Deamonset模式.md:
--------------------------------------------------------------------------------
1 | # 1. Deploying traefik-ingress-controller as a DaemonSet
2 |
3 | https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-ds.yaml
4 |
5 | With the DaemonSet approach, only traefik-ds.yaml, traefik-rbac.yaml and ui.yaml are used
6 |
7 | ```bash
8 | kubectl delete -f traefik-ds.yaml
9 |
10 | rm -f ./traefik-ds.yaml
11 |
12 | cat >traefik-ds.yaml<<\EOF
13 | ---
14 | apiVersion: v1
15 | kind: ServiceAccount
16 | metadata:
17 | name: traefik-ingress-controller
18 | namespace: kube-system
19 | ---
20 | kind: DaemonSet
21 | apiVersion: apps/v1
22 | metadata:
23 | name: traefik-ingress-controller
24 | namespace: kube-system
25 | labels:
26 | k8s-app: traefik-ingress-lb
27 | spec:
28 | selector:
29 | matchLabels:
30 | k8s-app: traefik-ingress-lb
31 | template:
32 | metadata:
33 | labels:
34 | k8s-app: traefik-ingress-lb
35 | name: traefik-ingress-lb
36 | spec:
37 | serviceAccountName: traefik-ingress-controller
38 | terminationGracePeriodSeconds: 60
39 | #======= nodeSelector: create only on the master node =======
40 | tolerations:
41 | - operator: "Exists"
42 | nodeSelector:
43 | kubernetes.io/role: master # by default the master is unschedulable; the tolerations above allow scheduling there. Check node labels with kubectl get nodes --show-labels
44 | #===================================================
45 | containers:
46 | - image: traefik:v1.7
47 | name: traefik-ingress-lb
48 | ports:
49 | - name: http
50 | containerPort: 80
51 | hostPort: 80
52 | - name: admin
53 | containerPort: 8080
54 | hostPort: 8080
55 | securityContext:
56 | capabilities:
57 | drop:
58 | - ALL
59 | add:
60 | - NET_BIND_SERVICE
61 | args:
62 | - --api
63 | - --kubernetes
64 | - --logLevel=INFO
65 | ---
66 | kind: Service
67 | apiVersion: v1
68 | metadata:
69 | name: traefik-ingress-service
70 | namespace: kube-system
71 | spec:
72 | selector:
73 | k8s-app: traefik-ingress-lb
74 | ports:
75 | - protocol: TCP
76 | port: 80
77 | name: web
78 | - protocol: TCP
79 | port: 8080
80 | name: admin
81 | EOF
82 |
83 | kubectl apply -f traefik-ds.yaml
84 | ```
85 |
86 | # 2. traefik-rbac configuration
87 |
88 | https://github.com/containous/traefik/blob/v1.7/examples/k8s/traefik-rbac.yaml
89 |
90 | ```
91 | kubectl delete -f traefik-rbac.yaml
92 |
93 | rm -f ./traefik-rbac.yaml
94 |
95 | cat >traefik-rbac.yaml<<\EOF
96 | ---
97 | kind: ClusterRole
98 | apiVersion: rbac.authorization.k8s.io/v1beta1
99 | metadata:
100 | name: traefik-ingress-controller
101 | rules:
102 | - apiGroups:
103 | - ""
104 | resources:
105 | - services
106 | - endpoints
107 | - secrets
108 | verbs:
109 | - get
110 | - list
111 | - watch
112 | - apiGroups:
113 | - extensions
114 | resources:
115 | - ingresses
116 | verbs:
117 | - get
118 | - list
119 | - watch
120 | - apiGroups:
121 | - extensions
122 | resources:
123 | - ingresses/status
124 | verbs:
125 | - update
126 | ---
127 | kind: ClusterRoleBinding
128 | apiVersion: rbac.authorization.k8s.io/v1beta1
129 | metadata:
130 | name: traefik-ingress-controller
131 | roleRef:
132 | apiGroup: rbac.authorization.k8s.io
133 | kind: ClusterRole
134 | name: traefik-ingress-controller
135 | subjects:
136 | - kind: ServiceAccount
137 | name: traefik-ingress-controller
138 | namespace: kube-system
139 | ---
140 | EOF
141 |
142 | kubectl apply -f traefik-rbac.yaml
143 | ```
144 |
145 | # 3. Proxying the traefik UI through traefik itself
146 |
147 | https://github.com/containous/traefik/blob/v1.7/examples/k8s/ui.yaml
148 |
149 | 1. Proxy option 1
150 |
151 | ```bash
152 | kubectl delete -f ui.yaml
153 |
154 | rm -f ./ui.yaml
155 |
156 | cat >ui.yaml<<\EOF
157 | ---
158 | apiVersion: v1
159 | kind: Service
160 | metadata:
161 | name: traefik-web-ui
162 | namespace: kube-system
163 | spec:
164 | selector:
165 | k8s-app: traefik-ingress-lb
166 | ports:
167 | - name: web
168 | port: 80
169 | targetPort: 8080
170 | ---
171 | apiVersion: extensions/v1beta1
172 | kind: Ingress
173 | metadata:
174 | name: traefik-web-ui
175 | namespace: kube-system
176 | spec:
177 | rules:
178 | - host: traefik-ui.devops.com
179 | http:
180 | paths:
181 | - path: /
182 | backend:
183 | serviceName: traefik-web-ui
184 | servicePort: web
185 | ---
186 | EOF
187 |
188 | kubectl apply -f ui.yaml
189 | ```
190 |
191 | 2. Proxy option 2
192 |
193 | ```
194 | kubectl delete -f ui.yaml
195 |
196 | rm -f ./ui.yaml
197 |
198 | cat >ui.yaml<<\EOF
199 | ---
200 | kind: Service
201 | apiVersion: v1
202 | metadata:
203 | name: traefik-ingress-service
204 | namespace: kube-system
205 | spec:
206 | selector:
207 | k8s-app: traefik-ingress-lb
208 | ports:
209 | - protocol: TCP
210 | # this is the service port of the traefik ingress-controller
211 | port: 80
212 | name: web
213 | - protocol: TCP
214 | # this port serves the traefik admin web UI
215 | port: 8080
216 | name: admin
217 | ---
218 | apiVersion: extensions/v1beta1
219 | kind: Ingress
220 | metadata:
221 | name: traefik-web-ui
222 | namespace: kube-system
223 | annotations:
224 | kubernetes.io/ingress.class: traefik
225 | spec:
226 | rules:
227 | - host: traefik-ui.devops.com
228 | http:
229 | paths:
230 | - backend:
231 | serviceName: traefik-ingress-service
232 | #servicePort: 8080
233 | servicePort: admin # matches the port name in the Service above
234 | ---
235 | EOF
236 |
237 | kubectl apply -f ui.yaml
238 | ```
239 |
240 | # 4. Access test
241 |
242 | `http://traefik-ui.devops.com`
243 |
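Since traefik-ui.devops.com is not a real DNS name, point it at one of the nodes running the DaemonSet first (a sketch; the IP is an assumed node address):

```bash
# Resolve the UI host name to a traefik node, then open the dashboard
echo "192.168.56.11 traefik-ui.devops.com" >> /etc/hosts
curl http://traefik-ui.devops.com/dashboard/
```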
244 | # 5. All-in-one manifest
245 | ```
246 | kubectl delete -f all-ds.yaml
247 |
248 | rm -f ./all-ds.yaml
249 |
250 | cat >all-ds.yaml<<\EOF
251 | ---
252 | apiVersion: v1
253 | kind: ServiceAccount
254 | metadata:
255 | name: traefik-ingress-controller
256 | namespace: kube-system
257 | ---
258 | kind: DaemonSet
259 | apiVersion: apps/v1
260 | metadata:
261 | name: traefik-ingress-controller
262 | namespace: kube-system
263 | labels:
264 | k8s-app: traefik-ingress-lb
265 | spec:
266 | selector:
267 | matchLabels:
268 | k8s-app: traefik-ingress-lb
269 | template:
270 | metadata:
271 | labels:
272 | k8s-app: traefik-ingress-lb
273 | name: traefik-ingress-lb
274 | spec:
275 | serviceAccountName: traefik-ingress-controller
276 | terminationGracePeriodSeconds: 60
277 | #======= nodeSelector: create only on the master node =======
278 | tolerations:
279 | - operator: "Exists"
280 | nodeSelector:
281 | kubernetes.io/role: master # by default the master is unschedulable; the tolerations above allow scheduling there. Check node labels with kubectl get nodes --show-labels
282 | #===================================================
283 | containers:
284 | - image: traefik:v1.7
285 | name: traefik-ingress-lb
286 | ports:
287 | - name: http
288 | containerPort: 80
289 | hostPort: 80
290 | - name: admin
291 | containerPort: 8080
292 | hostPort: 8080
293 | securityContext:
294 | capabilities:
295 | drop:
296 | - ALL
297 | add:
298 | - NET_BIND_SERVICE
299 | args:
300 | - --api
301 | - --kubernetes
302 | - --logLevel=INFO
303 | ---
304 | kind: Service
305 | apiVersion: v1
306 | metadata:
307 | name: traefik-ingress-service
308 | namespace: kube-system
309 | spec:
310 | selector:
311 | k8s-app: traefik-ingress-lb
312 | ports:
313 | - protocol: TCP
314 | port: 80
315 | name: web
316 | - protocol: TCP
317 | port: 8080
318 | name: admin
319 | ---
320 | kind: ClusterRole
321 | apiVersion: rbac.authorization.k8s.io/v1beta1
322 | metadata:
323 | name: traefik-ingress-controller
324 | rules:
325 | - apiGroups:
326 | - ""
327 | resources:
328 | - services
329 | - endpoints
330 | - secrets
331 | verbs:
332 | - get
333 | - list
334 | - watch
335 | - apiGroups:
336 | - extensions
337 | resources:
338 | - ingresses
339 | verbs:
340 | - get
341 | - list
342 | - watch
343 | - apiGroups:
344 | - extensions
345 | resources:
346 | - ingresses/status
347 | verbs:
348 | - update
349 | ---
350 | kind: ClusterRoleBinding
351 | apiVersion: rbac.authorization.k8s.io/v1beta1
352 | metadata:
353 | name: traefik-ingress-controller
354 | roleRef:
355 | apiGroup: rbac.authorization.k8s.io
356 | kind: ClusterRole
357 | name: traefik-ingress-controller
358 | subjects:
359 | - kind: ServiceAccount
360 | name: traefik-ingress-controller
361 | namespace: kube-system
362 | ---
363 | apiVersion: v1
364 | kind: Service
365 | metadata:
366 | name: traefik-web-ui
367 | namespace: kube-system
368 | spec:
369 | selector:
370 | k8s-app: traefik-ingress-lb
371 | ports:
372 | - name: web
373 | port: 80
374 | targetPort: 8080
375 | ---
376 | apiVersion: extensions/v1beta1
377 | kind: Ingress
378 | metadata:
379 | name: traefik-web-ui
380 | namespace: kube-system
381 | spec:
382 | rules:
383 | - host: traefik-ui.devops.com
384 | http:
385 | paths:
386 | - path: /
387 | backend:
388 | serviceName: traefik-web-ui
389 | servicePort: web
390 | EOF
391 |
392 | kubectl apply -f all-ds.yaml
393 | ```
394 |
395 | References:
396 |
397 | https://blog.csdn.net/oyym_mv/article/details/86986510 Using traefik as a reverse proxy in Kubernetes (DaemonSet mode)
398 |
399 | https://www.cnblogs.com/twodoge/p/11663006.html Second gotcha: with the newer apps/v1 API, selector becomes a required field in the yaml
400 |
--------------------------------------------------------------------------------
/components/ingress/traefik-ingress/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/components/ingress/常用操作.md:
--------------------------------------------------------------------------------
1 | ```
2 | [root@master ingress]# kubectl get ingress -A
3 | NAMESPACE NAME HOSTS ADDRESS PORTS AGE
4 | default nginx-ingress k8s.nginx.com 80 40m
5 | kube-system kubernetes-dashboard dashboard.test.com 80 2d21h
6 | kube-system traefik-web-ui traefik-ui.test.com 80 2d21h
7 |
8 |
9 |
10 | [root@master ingress]# kubectl delete ingress hello-tls-ingress
11 | ingress.extensions "hello-tls-ingress" deleted
12 |
13 | ```
14 |
15 | # 1. rbac.yaml
16 |
17 | First, to be safe we use RBAC authorization: (rbac.yaml)
18 |
19 | ```
20 | mkdir -p /data/components/ingress
21 |
22 | cat > /data/components/ingress/rbac.yaml << \EOF
23 | ---
24 | apiVersion: v1
25 | kind: ServiceAccount
26 | metadata:
27 | name: traefik-ingress-controller
28 | namespace: kube-system
29 | ---
30 | kind: ClusterRole
31 | apiVersion: rbac.authorization.k8s.io/v1beta1
32 | metadata:
33 | name: traefik-ingress-controller
34 | rules:
35 | - apiGroups:
36 | - ""
37 | resources:
38 | - services
39 | - endpoints
40 | - secrets
41 | verbs:
42 | - get
43 | - list
44 | - watch
45 | - apiGroups:
46 | - extensions
47 | resources:
48 | - ingresses
49 | verbs:
50 | - get
51 | - list
52 | - watch
53 | ---
54 | kind: ClusterRoleBinding
55 | apiVersion: rbac.authorization.k8s.io/v1beta1
56 | metadata:
57 | name: traefik-ingress-controller
58 | roleRef:
59 | apiGroup: rbac.authorization.k8s.io
60 | kind: ClusterRole
61 | name: traefik-ingress-controller
62 | subjects:
63 | - kind: ServiceAccount
64 | name: traefik-ingress-controller
65 | namespace: kube-system
66 | EOF
67 |
68 | kubectl create -f /data/components/ingress/rbac.yaml
69 | ```
70 |
71 | # 2、traefik.yaml
72 |
73 | 然后使用 Deployment 来管理 Pod,直接使用官方的 traefik 镜像部署即可(traefik.yaml)
74 | ```
75 | cat > /data/components/ingress/traefik.yaml << \EOF
76 | ---
77 | kind: Deployment
78 | apiVersion: extensions/v1beta1
79 | metadata:
80 | name: traefik-ingress-controller
81 | namespace: kube-system
82 | labels:
83 | k8s-app: traefik-ingress-lb
84 | spec:
85 | replicas: 1
86 | selector:
87 | matchLabels:
88 | k8s-app: traefik-ingress-lb
89 | template:
90 | metadata:
91 | labels:
92 | k8s-app: traefik-ingress-lb
93 | name: traefik-ingress-lb
94 | spec:
95 | serviceAccountName: traefik-ingress-controller
96 | terminationGracePeriodSeconds: 60
97 | tolerations:
98 | - operator: "Exists"
99 | nodeSelector:
100 | kubernetes.io/hostname: linux-node1.example.com #默认master是不允许被调度的,加上tolerations后允许被调度
101 | containers:
102 | - image: traefik
103 | name: traefik-ingress-lb
104 | ports:
105 | - name: http
106 | containerPort: 80
107 | - name: admin
108 | containerPort: 8080
109 | args:
110 | - --api
111 | - --kubernetes
112 | - --logLevel=INFO
113 | ---
114 | kind: Service
115 | apiVersion: v1
116 | metadata:
117 | name: traefik-ingress-service
118 | namespace: kube-system
119 | spec:
120 | selector:
121 | k8s-app: traefik-ingress-lb
122 | ports:
123 | - protocol: TCP
124 | port: 80
125 | name: web
126 | - protocol: TCP
127 | port: 8080
128 | name: admin
129 | type: NodePort
130 | EOF
131 |
132 | kubectl create -f /data/components/ingress/traefik.yaml
133 |
134 | kubectl apply -f /data/components/ingress/traefik.yaml
135 | ```
136 | ```
137 | 要注意上面 yaml 文件:
138 | tolerations:
139 | - operator: "Exists"
140 | nodeSelector:
141 | kubernetes.io/hostname: linux-node1.example.com
142 |
143 | 由于我们这里的特殊性,只有 master 节点有外网访问权限,所以我们使用 nodeSelector 标签将 traefik 固定调度到 master(即 linux-node1.example.com)这个节点上,那么上面的 tolerations 是干什么的呢?这是因为我们的集群是使用 kubeadm 安装的,master 节点默认是不能被普通应用调度的,要被调度的话就需要添加这里的 tolerations 属性,当然如果你的集群和我们的不太一样,直接去掉这里的调度策略就行。
144 |
145 | nodeSelector 和 tolerations 都属于 Pod 的调度策略,在后面的课程中会为大家讲解。
146 |
147 | ```
148 | # 3、traefik-ui
149 |
150 | traefik 还提供了一个 web ui 工具,就是上面的 8080 端口对应的服务,为了能够访问到该服务,我们这里将服务类型设置成了 NodePort
151 |
152 | ```
153 | root># kubectl get pods -n kube-system -l k8s-app=traefik-ingress-lb -o wide
154 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
155 | traefik-ingress-controller-5b58d5c998-6dn97 1/1 Running 0 88s 10.244.0.2 linux-node1.example.com
156 |
157 | root># kubectl get svc -n kube-system|grep traefik-ingress-service
158 | traefik-ingress-service NodePort 10.102.214.49 80:32472/TCP,8080:32482/TCP 44s
159 |
160 | 现在在浏览器中输入 master_node_ip:32482(即上面 Service 中 8080 端口对应的 NodePort)就可以访问到 traefik 的 dashboard 了
161 | ```
162 | http://192.168.56.11:32482/dashboard/
163 |
164 | # 4、Ingress 对象
165 |
166 | 现在我们是通过 NodePort 来访问 traefik 的 Dashboard 的,那怎样通过 ingress 来访问呢? 首先,需要创建一个 ingress 对象:(ingress.yaml)
167 |
168 | ```
169 | cat > /data/components/ingress/ingress.yaml <<\EOF
170 | apiVersion: extensions/v1beta1
171 | kind: Ingress
172 | metadata:
173 | name: traefik-web-ui
174 | namespace: kube-system
175 | annotations:
176 | kubernetes.io/ingress.class: traefik
177 | spec:
178 | rules:
179 | - host: traefik.k8s.com
180 | http:
181 | paths:
182 | - backend:
183 | serviceName: traefik-ingress-service
184 | #servicePort: 8080
185 | servicePort: admin #这里建议使用servicePort: admin,这样就避免端口的调整
186 | EOF
187 |
188 | kubectl create -f /data/components/ingress/ingress.yaml
189 | kubectl apply -f /data/components/ingress/ingress.yaml
190 |
191 | 要注意上面的 ingress 对象的规则,特别是 rules 区域,我们这里是要为 traefik 的 dashboard 建立一个 ingress 对象,所以这里的 serviceName 对应的是上面我们创建的 traefik-ingress-service,端口也要注意对应 8080 端口,为了避免端口更改,这里的 servicePort 的值也可以替换成上面定义的 port 的名字:admin
192 | ```
193 | 创建完成后,我们应该怎么来测试呢?
194 |
195 | ```
196 | 第一步,在本地的/etc/hosts里面添加上 traefik.k8s.com 与 master 节点外网 IP 的映射关系
197 |
198 | 第二步,在浏览器中访问:http://traefik.k8s.com 我们会发现并没有得到我们期望的 dashboard 界面,这是因为我们上面部署 traefik 的时候使用的是 NodePort 这种 Service 对象,所以我们只能通过上面的 32482 端口访问到我们的目标对象:http://traefik.k8s.com:32482
199 |
200 | 加上端口后我们发现可以访问到 dashboard 了,而且在 dashboard 当中多了一条记录,正是上面我们创建的 ingress 对象的数据,我们还可以切换到 HEALTH 界面中,可以查看当前 traefik 代理的服务的整体的健康状态
201 |
202 | 第三步,上面我们已经可以通过自定义域名加上端口访问我们的服务了,但是我们平时访问别人的服务是不是都是直接用的域名啊,http 或者 https 的,几乎很少有在域名后面加上端口访问的吧?为什么?太麻烦啊,端口也记不住。要解决这个问题,我们只需要把上面 traefik 的核心应用的端口映射到 master 节点上的 80 端口,是不是就可以了,因为 http 默认就是访问 80 端口。但是我们在 Service 里面添加的是一个 NodePort 类型的服务,没办法映射 80 端口,怎么办?这里直接在 Pod 中指定一个 hostPort 即可,更改上面的 traefik.yaml 文件中的容器端口:
203 |
204 | containers:
205 | - image: traefik
206 | name: traefik-ingress-lb
207 | ports:
208 | - name: http
209 | containerPort: 80
210 | hostPort: 80 #新增这行
211 | - name: admin
212 | containerPort: 8080
213 |
214 | 添加以后hostPort: 80,然后更新应用
215 | kubectl apply -f traefik.yaml
216 |
217 | 更新完成后,这个时候我们在浏览器中直接使用域名方法测试下
218 | http://traefik.k8s.com
219 |
220 | 第四步,正常来说,我们如果有自己的域名,我们可以将我们的域名添加一条 DNS 记录,解析到 master 的外网 IP 上面,这样任何人都可以通过域名来访问我的暴露的服务了。
221 |
222 | 如果你有多个边缘节点的话,可以在每个边缘节点上部署一个 ingress-controller 服务,然后在边缘节点前面挂一个负载均衡器,比如 nginx,将所有的边缘节点均作为这个负载均衡器的后端,这样就可以实现 ingress-controller 的高可用和负载均衡了。
223 | ```
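The note above suggests fronting several edge nodes (each running an ingress-controller) with a load balancer such as nginx. A rough sketch of the nginx side, assuming two edge nodes at 192.168.56.12/13 and a stock nginx install (these values are placeholders, not from this document):

```
cat > /etc/nginx/conf.d/ingress-edge.conf << \EOF
# forward all HTTP traffic to the ingress-controllers on the edge nodes
upstream ingress_edge {
    server 192.168.56.12:80;   # edge node 1
    server 192.168.56.13:80;   # edge node 2
}
server {
    listen 80;
    location / {
        proxy_pass http://ingress_edge;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF

nginx -t && nginx -s reload
```
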
224 |
225 | # 5、ingress tls
226 |
227 | 上节课给大家展示了 traefik 的安装使用以及简单的 ingress 的配置方法,这节课我们来学习一下 ingress tls 以及 path 路径在 ingress 对象中的使用方法。
228 |
229 | 1、TLS 认证
230 |
231 | 在现在大部分场景下面我们都会使用 https 来访问我们的服务,这节课我们将使用一个自签名的证书,当然如果你有在正规机构购买的 CA 证书是最好的,这样任何人访问你的服务的时候用的都是受浏览器信任的证书。使用下面的 openssl 命令生成自签名证书:
232 | ```
233 | openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -out tls.crt
234 | ```
235 | 现在我们有了证书,我们可以使用 kubectl 创建一个 secret 对象来存储上面的证书:
236 | ```
237 | kubectl create secret generic traefik-cert --from-file=tls.crt --from-file=tls.key -n kube-system
238 | ```
239 |
240 | 3、配置 Traefik
241 |
242 | 前面我们使用的是 Traefik 的默认配置,现在我们来配置 Traefik,让其支持 https:
243 |
244 |
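To actually enable HTTPS on Traefik 1.x, one common approach (shown here only as a hedged sketch) is to keep a traefik.toml in a ConfigMap, mount it together with the traefik-cert secret created above, and start Traefik with --configFile. The ConfigMap name traefik-conf and the mount paths /config and /ssl are assumptions, not something this document specifies.

```
cat > /data/components/ingress/traefik.toml << \EOF
defaultEntryPoints = ["http", "https"]
[entryPoints]
  [entryPoints.http]
  address = ":80"
    [entryPoints.http.redirect]
    entryPoint = "https"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/ssl/tls.crt"
      keyFile = "/ssl/tls.key"
EOF

kubectl create configmap traefik-conf --from-file=traefik.toml -n kube-system

# then, in the Pod template of traefik.yaml:
#   - mount the secret "traefik-cert" at /ssl and the configmap "traefik-conf" at /config
#   - add containerPort 443 (plus hostPort 443 if you used hostPort above)
#   - add the argument: --configFile=/config/traefik.toml
kubectl apply -f /data/components/ingress/traefik.yaml
```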
--------------------------------------------------------------------------------
/components/initContainers/README.md:
--------------------------------------------------------------------------------
1 | 参考资料:
2 |
3 | https://www.cnblogs.com/yanh0606/p/11395920.html Kubernetes的初始化容器initContainers
4 |
5 | https://www.jianshu.com/p/e57c3e17ce8c 理解 Init 容器
6 |
--------------------------------------------------------------------------------
/components/job/README.md:
--------------------------------------------------------------------------------
1 | 参考资料:
2 |
3 | https://www.jianshu.com/p/bd6cd1b4e076 Kubernetes对象之Job
4 |
5 |
6 | https://www.cnblogs.com/lvcisco/p/9670100.html k8s Job、Cronjob 的使用
7 |
--------------------------------------------------------------------------------
/components/k8s-monitor/README.md:
--------------------------------------------------------------------------------
1 | ```
2 | # 1、持久化监控数据
3 | cat > prometheus-class.yaml <<-EOF
4 | apiVersion: storage.k8s.io/v1
5 | kind: StorageClass
6 | metadata:
7 | name: fast
8 | provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME'
9 | parameters:
10 | archiveOnDelete: "true"
11 | EOF
12 |
13 | #部署class.yaml
14 | kubectl apply -f prometheus-class.yaml
15 |
16 | #查看创建的storageclass
17 | kubectl get sc
18 |
19 | #2、修改 Prometheus 持久化
20 | prometheus是一种 StatefulSet 有状态集的部署模式,所以直接将 StorageClass 配置到里面,在下面的yaml中最下面添加持久化配置
21 | #cat prometheus/prometheus-prometheus.yaml
22 | apiVersion: monitoring.coreos.com/v1
23 | kind: Prometheus
24 | metadata:
25 | labels:
26 | prometheus: k8s
27 | name: k8s
28 | namespace: monitoring
29 | spec:
30 | alerting:
31 | alertmanagers:
32 | - name: alertmanager-main
33 | namespace: monitoring
34 | port: web
35 | baseImage: quay.io/prometheus/prometheus
36 | nodeSelector:
37 | kubernetes.io/os: linux
38 | podMonitorSelector: {}
39 | replicas: 2
40 | resources:
41 | requests:
42 | memory: 400Mi
43 | ruleSelector:
44 | matchLabels:
45 | prometheus: k8s
46 | role: alert-rules
47 | securityContext:
48 | fsGroup: 2000
49 | runAsNonRoot: true
50 | runAsUser: 1000
51 | serviceAccountName: prometheus-k8s
52 | serviceMonitorNamespaceSelector: {}
53 | serviceMonitorSelector: {}
54 | version: v2.11.0
55 | storage: #----添加持久化配置,指定StorageClass为上面创建的fast
56 | volumeClaimTemplate:
57 | spec:
58 | storageClassName: fast #---指定为fast
59 | resources:
60 | requests:
61 | storage: 300Gi
62 |
63 | kubectl apply -f prometheus/prometheus-prometheus.yaml
64 |
65 | #3、修改 Grafana 持久化配置
66 |
67 | 由于 Grafana 是部署模式为 Deployment,所以我们提前为其创建一个 grafana-pvc.yaml 文件,加入下面 PVC 配置。
68 | #vim grafana-pvc.yaml
69 | kind: PersistentVolumeClaim
70 | apiVersion: v1
71 | metadata:
72 | name: grafana
73 | namespace: monitoring #---指定namespace为monitoring
74 | spec:
75 | storageClassName: fast #---指定StorageClass为上面创建的fast
76 | accessModes:
77 | - ReadWriteOnce
78 | resources:
79 | requests:
80 | storage: 200Gi
81 |
82 | kubectl apply -f grafana-pvc.yaml
83 |
84 | #vim grafana/grafana-deployment.yaml
85 | ......
86 | volumes:
87 | - name: grafana-storage #-------新增持久化配置
88 | persistentVolumeClaim:
89 | claimName: grafana #-------设置为创建的PVC名称
90 | #- emptyDir: {} #-------注释掉旧的配置
91 | # name: grafana-storage
92 | - name: grafana-datasources
93 | secret:
94 | secretName: grafana-datasources
95 | - configMap:
96 | name: grafana-dashboards
97 | name: grafana-dashboards
98 | ......
99 |
100 | kubectl apply -f grafana/grafana-deployment.yaml
101 | ```
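After applying the changes above, a quick way to confirm the persistence really took effect is to check that the PVCs end up Bound through the fast StorageClass:

```
# PVCs from the Prometheus volumeClaimTemplate and the grafana PVC should be Bound
kubectl get pvc -n monitoring

# the PVs provisioned for them via the "fast" StorageClass
kubectl get pv | grep monitoring

# the StorageClass itself
kubectl get sc fast
```
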
102 | 参考资料:
103 |
104 | https://www.cnblogs.com/skyflask/articles/11410063.html kubernetes监控方案--cAdvisor+Heapster+InfluxDB+Grafana
105 |
106 | https://www.cnblogs.com/skyflask/p/11480988.html kubernetes监控终极方案-kube-promethues
107 |
108 | http://www.mydlq.club/article/10/#wow1 Kube-promethues监控k8s集群
109 |
110 | https://jicki.me/docker/kubernetes/2019/07/22/kube-prometheus/ Coreos kube-prometheus 监控
111 |
--------------------------------------------------------------------------------
/components/kube-proxy/README.md:
--------------------------------------------------------------------------------
1 | # Kube-Proxy简述
2 |
3 | ```
4 | 运行在每个节点上,监听 API Server 中服务对象的变化,再通过管理 IPtables 来实现网络的转发
5 | Kube-Proxy 目前支持三种模式:
6 |
7 | UserSpace
8 | k8s v1.2 后就已经淘汰
9 |
10 | IPtables
11 | 目前默认方式
12 |
13 | IPVS
14 | 需要安装ipvsadm、ipset 工具包和加载 ip_vs 内核模块
15 |
16 | ```
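Since IPVS mode needs the user-space tools and kernel modules mentioned above, a minimal preparation sketch for a node looks roughly like this (module names are the usual ones for CentOS 7; how kube-proxy itself is switched to IPVS depends on how it was deployed, for a binary install it is the --proxy-mode=ipvs flag):

```
# install the tools
yum install -y ipvsadm ipset conntrack

# load the IPVS kernel modules (use nf_conntrack instead of nf_conntrack_ipv4 on newer kernels)
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do
    modprobe $mod
done
lsmod | grep ip_vs

# after restarting kube-proxy with --proxy-mode=ipvs, check the rules with:
ipvsadm -Ln
```
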
17 | 参考资料:
18 |
19 | https://ywnz.com/linuxyffq/2530.html 解析从外部访问Kubernetes集群中应用的几种方法
20 |
21 | https://www.jianshu.com/p/b2d13cec7091 浅谈 k8s service&kube-proxy
22 |
23 | https://www.codercto.com/a/90806.html 探究K8S Service内部iptables路由规则
24 |
25 | https://blog.51cto.com/goome/2369150 k8s实践7:ipvs结合iptables使用过程分析
26 |
27 | https://blog.csdn.net/xinghun_4/article/details/50492041 kubernetes中port、target port、node port的对比分析,以及kube-proxy代理
28 |
--------------------------------------------------------------------------------
/components/nfs/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/components/pressure/README.md:
--------------------------------------------------------------------------------
1 | # 一、生产大规模集群,网络组件选择
2 |
3 | 如果用calico-RR反射器这种模式,保证性能的情况下大概能支撑多少个节点?
4 |
5 | RR反射器还分为两种 可以由calico的节点服务承载 也可以是直接的物理路由器做RR
6 |
7 | 超大规模Calico如果全以BGP来跑没什么问题 只是要做好网络地址规划 即便是不同集群容器地址也不能重叠
8 |
9 | # 二、flannel网络组件压测
10 |
11 | ```
12 | flannel受限于cpu压力
13 | ```
14 | 
15 |
16 | # 三、calico网络组件压测
17 |
18 | ```
19 | calico则轻轻松松与宿主机性能相差无几
20 |
21 | 如果单单一个集群 节点数超级多 如果不做BGP路由聚合 物理路由器或三层交换机会扛不住的
22 | ```
23 | 
24 |
25 | # 四、calico网络和宿主机压测
26 |
27 | 
28 |
--------------------------------------------------------------------------------
/components/pressure/calico bgp网络需要物理路由和交换机支持吗.md:
--------------------------------------------------------------------------------
1 | 
2 |
3 | 
4 |
5 | 
6 |
7 | 
8 |
9 | 
10 |
11 | 
12 |
13 | 
14 |
15 | 
16 |
17 | 
18 |
19 | 
20 |
21 | 
22 |
23 | 
24 |
--------------------------------------------------------------------------------
/components/pressure/k8s集群更换网段方案.md:
--------------------------------------------------------------------------------
1 | ```
2 | 1、服务器IP更换网段 有什么解决方案吗?不重新搭建集群的话?
3 |
4 | 方案一:
5 |
6 | 改监听地址,重做集群证书
7 |
8 | 不然还真不好搞的
9 |
10 | 方案二:
11 |
12 | 如果etcd一开始是静态的 那就不好玩了
13 |
14 | 得一开始就是基于dns discovery方式
15 |
16 | 简明扼要的说
17 |
18 | 就是但凡涉及IP地址的地方
19 |
20 | 全部用fqdn
21 |
22 | 无论是证书还是配置文件
23 |
24 | 这四句话核心就够了
25 |
26 | etcd官方本来就有正式文档讲dns discovery部署
27 |
28 | 只是k8s部分,官方部署没有提
29 |
30 | ```
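To make the "use FQDNs everywhere" advice concrete: etcd's DNS discovery works off SRV records instead of member IPs, so changing the servers' network segment only means updating DNS. A rough sketch, with example.com and the etcd1/2/3 names as placeholders:

```
# DNS zone entries (placeholders), one SRV pair per member plus matching A records:
#   _etcd-server._tcp.example.com.  SRV  0 0 2380  etcd1.example.com.
#   _etcd-client._tcp.example.com.  SRV  0 0 2379  etcd1.example.com.

# verify the records resolve
dig +short SRV _etcd-server._tcp.example.com

# start each member with DNS discovery instead of a static --initial-cluster list
etcd --name etcd1 \
  --discovery-srv example.com \
  --initial-advertise-peer-urls https://etcd1.example.com:2380 \
  --advertise-client-urls https://etcd1.example.com:2379 \
  --listen-peer-urls https://0.0.0.0:2380 \
  --listen-client-urls https://0.0.0.0:2379
```
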
31 |
32 | 
33 |
34 | 
35 |
36 | 
37 |
38 | 
39 |
40 |
41 | 来自: 广大群友讨论集锦
42 |
43 | https://github.com/etcd-io/etcd/blob/a4018f25c91fff8f4f15cd2cee9f026650c7e688/Documentation/clustering.md#dns-discovery
44 |
--------------------------------------------------------------------------------
/docs/Envoy的架构与基本术语.md:
--------------------------------------------------------------------------------
1 | 参考文档:
2 |
3 | https://jimmysong.io/kubernetes-handbook/usecases/envoy-terminology.html Envoy 的架构与基本术语
4 |
--------------------------------------------------------------------------------
/docs/Kubernetes学习笔记.md:
--------------------------------------------------------------------------------
1 | 参考文档:
2 |
3 | https://blog.gmem.cc/kubernetes-study-note Kubernetes学习笔记
4 |
--------------------------------------------------------------------------------
/docs/Kubernetes架构介绍.md:
--------------------------------------------------------------------------------
1 | # Kubernetes架构介绍
2 |
3 | ## Kubernetes架构
4 |
5 | 
6 |
7 | ## k8s架构图
8 |
9 | 
10 |
11 | ## 一、K8S Master节点
12 | ### API Server
13 | apiserver提供集群管理的REST API接口,包括认证授权、数据校验以及集群状态变更等
14 | 只有API Server才直接操作etcd
15 | 其他模块通过API Server查询或修改数据
16 | 提供其他模块之间的数据交互和通信的枢纽
17 |
18 | ### Scheduler
19 | scheduler负责分配调度Pod到集群内的node节点
20 | 监听kube-apiserver,查询还未分配Node的Pod
21 | 根据调度策略为这些Pod分配节点
22 |
23 | ### Controller Manager
24 | controller-manager由一系列的控制器组成,它通过apiserver监控整个集群的状态,并确保集群处于预期的工作状态
25 |
26 | ### ETCD
27 | 所有持久化的状态信息存储在ETCD中
28 |
29 | ## 二、K8S Node节点
30 | ### Kubelet
31 | 1. 管理Pods以及容器、镜像、Volume等,实现对集群节点的管理。
32 | ### Kube-proxy
33 | 2. 提供网络代理以及负载均衡,实现与Service通信。
34 | ### Docker Engine
35 | 3. 负责节点的容器的管理工作。
36 |
37 | ## 三、资源对象介绍
38 |
39 | ### 3.1 Replication Controller,RC
40 |
41 | 1. RC是K8s集群中最早的保证Pod高可用的API对象。通过监控运行中的Pod来保证集群中运行指定数目的Pod副本。
43 |
44 | 2. 指定的数目可以是多个也可以是1个;少于指定数目,RC就会启动运行新的Pod副本;多于指定数目,RC就会杀死多余的Pod副本。
46 |
47 | 3. 即使在指定数目为1的情况下,通过RC运行Pod也比直接运行Pod更明智,因为RC也可以发挥它高可用的能力,保证永远有1个Pod在运行。
48 |
49 | ### 3.2 Replica Set,RS
50 |
51 | 1. RS是新一代RC,提供同样的高可用能力,区别主要在于RS后来居上,能支持更多种的匹配模式。副本集对象一般不单独使用,而是作为部署的理想状态参数使用。
52 |
53 | 2. 是K8S 1.2中出现的概念,是RC的升级。一般和Deployment共同使用。
54 |
55 | ### 3.3 Deployment
56 | 1. Deployment表示用户对K8s集群的一次更新操作。Deployment是一个比RS应用模式更广的API对象,
57 |
58 | 2. 可以是创建一个新的服务,更新一个新的服务,也可以是滚动升级一个服务。滚动升级一个服务,实际是创建一个新的RS,然后逐渐将新RS中副本数增加到理想状态,将旧RS中的副本数减小到0的复合操作;
59 |
60 | 3. 这样一个复合操作用一个RS是不太好描述的,所以用一个更通用的Deployment来描述。
61 |
62 | ### 3.4 Service
63 | 1. RC、RS和Deployment只是保证了支撑服务的POD的数量,但是没有解决如何访问这些服务的问题。一个Pod只是一个运行服务的实例,随时可能在一个节点上停止,在另一个节点以一个新的IP启动一个新的Pod,因此不能以确定的IP和端口号提供服务。
64 |
65 | 2. 要稳定地提供服务需要服务发现和负载均衡能力。服务发现完成的工作,是针对客户端访问的服务,找到对应的后端服务实例。
66 |
67 | 3. 在K8s集群中,客户端需要访问的服务就是Service对象。每个Service会对应一个集群内部有效的虚拟IP,集群内部通过虚拟IP访问一个服务。
68 |
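As a small illustration of the Service idea described above, a minimal ClusterIP Service simply maps a stable virtual IP and port to whatever Pods match its selector (the name and label below are made-up examples):

```
apiVersion: v1
kind: Service
metadata:
  name: demo-svc            # example name
spec:
  selector:
    app: demo               # Pods carrying this label become the backends
  ports:
  - port: 80                # port exposed on the Cluster (virtual) IP
    targetPort: 8080        # container port the traffic is forwarded to
```
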
69 | ## 四、K8S的IP地址
70 | 1. Node IP: 节点设备的IP,如物理机,虚拟机等容器宿主的实际IP。
71 |
72 | 2. Pod IP: Pod 的IP地址,是根据docker0网桥IP段进行分配的。
73 |
74 | 3. Cluster IP: Service的IP,是一个虚拟IP,仅作用于service对象,由k8s管理和分配,需要结合service port才能使用,单独的IP没有通信功能,集群外访问需要一些修改。
77 |
78 | 4. 在K8S集群内部,node ip、pod ip、cluster ip的通信机制是由k8s制定的路由规则,不是IP路由。
80 |
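A quick way to see the three kinds of addresses side by side with standard kubectl queries:

```
kubectl get nodes -o wide    # INTERNAL-IP column -> Node IP
kubectl get pods -o wide     # IP column          -> Pod IP
kubectl get svc              # CLUSTER-IP column  -> Cluster IP (virtual)
```
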
--------------------------------------------------------------------------------
/docs/Kubernetes集群环境准备.md:
--------------------------------------------------------------------------------
1 | # 一、k8s集群实验环境准备
2 |
3 | 
4 |
5 |
6 |
7 | | 主机名 | IP地址(NAT) | 描述 |
8 | | ----------------------- | ------------------- | ------------------------------ |
9 | | linux-node1.example.com | eth0:192.168.56.11 | Kubernetes Master节点/Etcd节点 |
10 | | linux-node2.example.com | eth0:192.168.56.12 | Kubernetes Node节点/Etcd节点 |
11 | | linux-node3.example.com | eth0:192.168.56.13 | Kubernetes Node节点/Etcd节点 |
25 |
26 |
27 |
28 | # 二、准备工作
29 |
30 | 1、设置主机名
31 | ```
32 | hostnamectl set-hostname linux-node1
33 | hostnamectl set-hostname linux-node2
34 | hostnamectl set-hostname linux-node3
35 | ```
36 |
37 | 2、绑定主机host
38 | ```
39 | cat > /etc/hosts </dev/null 2>&1
74 |
75 | #设置时区
76 | cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
77 |
78 | #SSH登录慢
79 | sed -i "s/#UseDNS yes/UseDNS no/" /etc/ssh/sshd_config
80 | sed -i "s/GSSAPIAuthentication yes/GSSAPIAuthentication no/" /etc/ssh/sshd_config
81 | systemctl restart sshd.service
82 | ```
83 |
84 | 6、软件包下载
85 |
86 | k8s-v1.12.0版本网盘地址: https://pan.baidu.com/s/1jU427W1f3oSDnzB3bU2s5w
87 |
88 | ```
89 | #所有文件存放在/opt/kubernetes目录下
90 | mkdir -p /opt/kubernetes/{cfg,bin,ssl,log}
91 |
92 | #使用二进制方式进行部署
93 | 官网下载地址: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md#downloads-for-v1121
94 |
95 | #添加环境变量
96 | vim /root/.bash_profile
97 | PATH=$PATH:$HOME/bin:/opt/kubernetes/bin
98 | source /root/.bash_profile
99 | ```
100 | 
101 |
102 | 7、解压软件包
103 | ```
104 | tar -zxvf kubernetes.tar.gz -C /usr/local/src/
105 | tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /usr/local/src/
106 | tar -zxvf kubernetes-client-linux-amd64.tar.gz -C /usr/local/src/
107 | tar -zxvf kubernetes-node-linux-amd64.tar.gz -C /usr/local/src/
108 | ```
109 |
110 |
--------------------------------------------------------------------------------
/docs/ca.md:
--------------------------------------------------------------------------------
1 | # 手动制作CA证书
2 |
3 | ```
4 | Kubernetes 系统各组件需要使用 TLS 证书对通信进行加密。
5 |
6 | CA证书管理工具:
7 | • easyrsa ---openvpn比较常用
8 | • openssl
9 | • cfssl ---使用最多,使用json文件格式,相对简单
10 | ```
11 |
12 | ## 1.安装 CFSSL
13 | ```
14 | [root@linux-node1 ~]# cd /usr/local/src
15 | [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
16 | [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
17 | [root@linux-node1 src]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
18 | [root@linux-node1 src]# chmod +x cfssl*
19 | [root@linux-node1 src]# mv cfssl-certinfo_linux-amd64 /opt/kubernetes/bin/cfssl-certinfo
20 | [root@linux-node1 src]# mv cfssljson_linux-amd64 /opt/kubernetes/bin/cfssljson
21 | [root@linux-node1 src]# mv cfssl_linux-amd64 /opt/kubernetes/bin/cfssl
22 |
23 | #复制cfssl命令文件到k8s-node1和k8s-node2节点。如果实际中多个节点,就都需要同步复制。
24 | [root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.12:/opt/kubernetes/bin
25 | [root@linux-node1 ~]# scp /opt/kubernetes/bin/cfssl* 192.168.56.13:/opt/kubernetes/bin
26 | ```
27 |
28 | ## 2.初始化cfssl
29 | ```
30 | [root@linux-node1 src]# mkdir ssl && cd ssl
31 | [root@linux-node1 ssl]# cfssl print-defaults config > config.json --生成ca-config.json的样例(可省略)
32 | [root@linux-node1 ssl]# cfssl print-defaults csr > csr.json --生成ca-csr.json的样例(可省略)
33 | ```
34 |
35 | ## 3.创建用来生成 CA 文件的 JSON 配置文件
36 | ```
37 | [root@linux-node1 ssl]#
38 | cat > ca-config.json < ca-csr.json < 53/UDP,53/TCP 2m
22 |
23 | #在node节点使用ipvsadm -Ln查看转发的后端节点(TCP和UDP的53端口)
24 | [root@linux-node2 ~]# ipvsadm -Ln
25 | IP Virtual Server version 1.2.1 (size=4096)
26 | Prot LocalAddress:Port Scheduler Flags
27 | -> RemoteAddress:Port Forward Weight ActiveConn InActConn
28 | TCP 10.1.0.2:53 rr
29 | -> 10.2.76.14:53 Masq 1 0 0
30 | -> 10.2.76.20:53 Masq 1 0 0
31 | UDP 10.1.0.2:53 rr
32 | -> 10.2.76.14:53 Masq 1 0 0
33 | -> 10.2.76.20:53 Masq 1 0 0
34 |
35 | #发现是转到这2个pod容器
36 | [root@linux-node1 ~]# kubectl get pod -n kube-system -o wide
37 | NAME READY STATUS RESTARTS AGE IP NODE
38 | coredns-77c989547b-4f9xz 1/1 Running 0 5m 10.2.76.20 192.168.56.12
39 | coredns-77c989547b-9zm4m 1/1 Running 0 5m 10.2.76.14 192.168.56.13
40 | ```
41 |
42 | ## 测试CoreDNS
43 |
44 | ```
45 | [root@linux-node1 coredns]# kubectl run dns-test --rm -it --image=alpine /bin/sh
46 | If you don't see a command prompt, try pressing enter.
47 | / # ping www.qq.com
48 | PING www.qq.com (121.51.142.21): 56 data bytes
49 | 64 bytes from 121.51.142.21: seq=0 ttl=127 time=20.864 ms
50 | 64 bytes from 121.51.142.21: seq=1 ttl=127 time=19.937 ms
51 | ```
52 |
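Pinging an external name only shows that upstream resolution works; to confirm that CoreDNS also resolves in-cluster Service names, one can additionally run inside the same test pod (assuming the default cluster.local domain):

```
/ # nslookup kubernetes.default.svc.cluster.local
# the answer should come from the cluster DNS at 10.1.0.2 and resolve to the
# kubernetes Service ClusterIP (10.1.0.1 in this setup)
```
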
--------------------------------------------------------------------------------
/docs/dashboard.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Dashboard
2 |
3 | ## 创建Dashboard
4 | ```
5 | [root@linux-node1 ~]# kubectl create -f /srv/addons/dashboard/
6 | [root@linux-node1 ~]# kubectl cluster-info
7 | Kubernetes master is running at https://192.168.56.11:6443
8 | kubernetes-dashboard is running at https://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
9 |
10 | To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
11 |
12 | ```
13 | ## 查看Dashboard信息
14 | ```
15 | #发现Dashboard是运行在node3节点
16 | [root@linux-node1 ~]# kubectl get pod -n kube-system -o wide
17 | NAME READY STATUS RESTARTS AGE IP NODE
18 | kubernetes-dashboard-66c9d98865-bqwl5 1/1 Running 0 1h 10.2.76.3 192.168.56.13
19 |
20 | #查看Dashboard运行日志
21 | [root@linux-node1 ~]# kubectl logs pod/kubernetes-dashboard-66c9d98865-bqwl5 -n kube-system
22 |
23 | #查看Dashboard服务IP(访问任意node节点的34696端口就可以访问到Dashboard页面 https://192.168.56.13:34696/#!/overview?namespace=default,如果master节点安装了kube-proxy也可以访问)
24 | [root@linux-node1 ~]# kubectl get service -n kube-system
25 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
26 | kubernetes-dashboard NodePort 10.1.36.42 443:34696/TCP 1h
27 |
28 | ```
29 | https://192.168.56.13:34696/#!/overview?namespace=default
30 |
31 | 
32 |
33 |
34 | ## 访问Dashboard
35 |
36 | https://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
37 | 用户名:admin 密码:admin 选择令牌模式登录。
38 |
39 | ### 获取Token
40 | ```
41 | [root@linux-node1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
42 | Name: admin-user-token-c97bl
43 | Namespace: kube-system
44 | Labels:
45 | Annotations: kubernetes.io/service-account.name=admin-user
46 | kubernetes.io/service-account.uid=379208ff-cb86-11e8-9f1c-080027dc9cd8
47 |
48 | Type: kubernetes.io/service-account-token
49 |
50 | Data
51 | ====
52 | ca.crt: 1359 bytes
53 | namespace: 11 bytes
54 | token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWM5N2JsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzkyMDhmZi1jYjg2LTExZTgtOWYxYy0wODAwMjdkYzljZDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.LopL7AD9feBZmhAuAUlPNjfthlJ1lJAPG6VXgBl-MZdofZpqNU9m-o-7M4hHa5AXkpeLvQrA1UKWWSR9eWEN06ugIkcH4Pk-tKrSVQUM6CDaE7eBdK91x1ltTonLz62_z_X8IvRYx1piv3wRUijoyRHCdziBnOhg67sT974CSPoRSOpl7ZR0Kn_L0LYRMOE9xfU3w4-sCpSx-jgc5oysAix95NqZgIkaZ6TRANpCnHE66fqL6yUwQxQ5yt7pw7J2iuSE3OxPU_cKArjYlWUvr72zG3SxZaR7dzQEggwmjSSeHRs0OK0968QAtCca1NTmcPaTtKhXYfXXdtusVCx7bA
55 | ```
56 | 
57 |
--------------------------------------------------------------------------------
/docs/dashboard_op.md:
--------------------------------------------------------------------------------
1 | # Kubernetes Dashboard
2 |
3 | ```
4 | chattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*
5 |
6 | cd /etc/ansible/
7 | ansible-playbook 07.cluster-addon.yml
8 |
9 | ansible-playbook 90.setup.yml
10 |
11 | systemctl restart iptables
12 | systemctl restart kube-scheduler
13 | systemctl restart kube-controller-manager
14 | systemctl restart kube-apiserver
15 | systemctl restart etcd
16 | systemctl restart docker
17 |
18 | systemctl restart iptables
19 | systemctl restart kubelet
20 | systemctl restart kube-proxy
21 | systemctl restart etcd
22 | systemctl restart docker
23 | ```
24 |
25 | ## 1、查看deployment
26 | ```
27 | [root@node1 ~]# kubectl get deployment -A
28 | NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
29 | default my-mc-deployment 3/3 3 3 2d18h
30 | default net 3/3 3 3 4d15h
31 | default net-test 2/2 2 2 4d16h
32 | default test-hello 1/1 1 1 6d
33 | default test-jrr 1/1 1 1 43h
34 | kube-system coredns 0/2 2 0 4d15h
35 | kube-system heapster 1/1 1 1 8d
36 | kube-system kubernetes-dashboard 0/1 1 0 4m42s
37 | kube-system metrics-server 0/1 1 0 8d
38 | kube-system traefik-ingress-controller 1/1 1 1 2d18h
39 |
40 | #查看deployment详情
41 | [root@node1 ~]# kubectl describe deployment kubernetes-dashboard -n kube-system
42 |
43 | #删除deployment
44 | [root@node1 ~]# kubectl delete deployment kubernetes-dashboard -n kube-system
45 | deployment.extensions "kubernetes-dashboard" deleted
46 | ```
47 |
48 | ## 2、查看Service信息
49 | ```
50 | [root@tw06a2753 ~]# kubectl get service -A -o wide
51 | NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
52 | default kubernetes ClusterIP 172.68.0.1 443/TCP 8d
53 | default my-mc-service ClusterIP 172.68.113.166 60001/TCP,60002/TCP 3d14h app=products,department=sales
54 | default nginx-service ClusterIP 172.68.176.9 80/TCP 5d app=nginx
55 | default php-service ClusterIP 172.68.210.6 9898/TCP 5d19h app=nginx-php
56 | default test-hello ClusterIP 172.68.248.205 80/TCP 6d run=test-hello
57 | default test-jrr-php-service ClusterIP 172.68.58.202 9090/TCP 43h app=test-jrr-nginx-php
58 | kube-system heapster ClusterIP 172.68.19.198 80/TCP 8d k8s-app=heapster
59 | kube-system kube-dns ClusterIP 172.68.0.2 53/UDP,53/TCP,9153/TCP 4d15h k8s-app=kube-dns
60 | kube-system kubernetes-dashboard NodePort 172.68.46.171 443:29107/TCP 6m31s k8s-app=kubernetes-dashboard
61 | kube-system metrics-server ClusterIP 172.68.31.222 443/TCP 8d k8s-app=metrics-server
62 | kube-system traefik-ingress-service NodePort 172.68.124.46 80:33813/TCP,8080:21315/TCP 2d18h k8s-app=traefik-ingress-lb
63 | kube-system traefik-web-ui ClusterIP 172.68.226.139 80/TCP 2d19h k8s-app=traefik-ingress-lb
64 |
65 | #查看service详情
66 | [root@node1 ~]# kubectl describe svc kubernetes-dashboard -n kube-system
67 |
68 | #删除service
69 | [root@node1 ~]# kubectl delete svc kubernetes-dashboard -n kube-system
70 | service "kubernetes-dashboard" deleted
71 | ```
72 |
73 | ## 3、查看Service对应的后端节点
74 |
75 | ```
76 | #查看kubernetes-dashboard
77 | [root@node1 ~]# kubectl describe svc kubernetes-dashboard -n kube-system
78 |
79 | #查看服务my-mc-service
80 | [root@node1 ~]# kubectl describe svc my-mc-service -n default
81 | Name: my-mc-service
82 | Namespace: default
83 | Labels:
84 | Annotations: kubectl.kubernetes.io/last-applied-configuration:
85 | {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-mc-service","namespace":"default"},"spec":{"ports":[{"name":"m...
86 | Selector: app=products,department=sales
87 | Type: ClusterIP
88 | IP: 172.68.113.166
89 | Port: my-first-port 60001/TCP
90 | TargetPort: 50001/TCP
91 | Endpoints: 172.20.1.209:50001,172.20.2.206:50001,172.20.2.208:50001
92 | Port: my-second-port 60002/TCP
93 | TargetPort: 50002/TCP
94 | Endpoints: 172.20.1.209:50002,172.20.2.206:50002,172.20.2.208:50002 ---发现这个service有这3个后端
95 | Session Affinity: None
96 | Events:
97 | ```
98 |
99 | ## 4、Dashboard运行在哪个节点
100 | ```
101 | #发现Dashboard是运行在node3节点
102 | [root@linux-node1 ~]# kubectl get pod -n kube-system -o wide
103 | NAME READY STATUS RESTARTS AGE IP NODE
104 | kubernetes-dashboard-66c9d98865-bqwl5 1/1 Running 0 1h 10.2.76.3 192.168.56.13
105 |
106 | #查看Dashboard运行日志
107 | [root@linux-node1 ~]# kubectl logs pod/kubernetes-dashboard-66c9d98865-bqwl5 -n kube-system
108 |
109 | #查看Dashboard服务IP(访问任意node节点的34696端口就可以访问到Dashboard页面 https://192.168.56.13:34696/#!/overview?namespace=default,如果master节点安装了kube-proxy也可以访问)
110 | [root@linux-node1 ~]# kubectl get service -n kube-system
111 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
112 | kubernetes-dashboard NodePort 10.1.36.42 443:34696/TCP 1h
113 |
114 | ```
115 | https://192.168.56.13:34696/#!/overview?namespace=default
116 |
117 | 
118 |
119 |
120 | ## 访问Dashboard
121 |
122 | https://192.168.56.11:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
123 | 用户名:admin 密码:admin 选择令牌模式登录。
124 |
125 | ### 获取Token
126 | ```
127 | [root@linux-node1 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
128 | Name: admin-user-token-c97bl
129 | Namespace: kube-system
130 | Labels:
131 | Annotations: kubernetes.io/service-account.name=admin-user
132 | kubernetes.io/service-account.uid=379208ff-cb86-11e8-9f1c-080027dc9cd8
133 |
134 | Type: kubernetes.io/service-account-token
135 |
136 | Data
137 | ====
138 | ca.crt: 1359 bytes
139 | namespace: 11 bytes
140 | token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWM5N2JsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIzNzkyMDhmZi1jYjg2LTExZTgtOWYxYy0wODAwMjdkYzljZDgiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.LopL7AD9feBZmhAuAUlPNjfthlJ1lJAPG6VXgBl-MZdofZpqNU9m-o-7M4hHa5AXkpeLvQrA1UKWWSR9eWEN06ugIkcH4Pk-tKrSVQUM6CDaE7eBdK91x1ltTonLz62_z_X8IvRYx1piv3wRUijoyRHCdziBnOhg67sT974CSPoRSOpl7ZR0Kn_L0LYRMOE9xfU3w4-sCpSx-jgc5oysAix95NqZgIkaZ6TRANpCnHE66fqL6yUwQxQ5yt7pw7J2iuSE3OxPU_cKArjYlWUvr72zG3SxZaR7dzQEggwmjSSeHRs0OK0968QAtCca1NTmcPaTtKhXYfXXdtusVCx7bA
141 | ```
142 | 
143 |
--------------------------------------------------------------------------------
/docs/delete.md:
--------------------------------------------------------------------------------
1 | ```
2 | #master
3 | systemctl restart kube-scheduler
4 | systemctl restart kube-controller-manager
5 | systemctl restart kube-apiserver
6 | systemctl restart flannel
7 | systemctl restart etcd
8 | systemctl restart docker
9 |
10 |
11 | systemctl stop kube-scheduler
12 | systemctl stop kube-controller-manager
13 | systemctl stop kube-apiserver
14 | systemctl stop flannel
15 | systemctl stop etcd
16 | systemctl stop docker
17 |
18 | #node
19 | systemctl restart kubelet
20 | systemctl restart kube-proxy
21 | systemctl restart flannel
22 | systemctl restart etcd
23 | systemctl restart docker
24 |
25 |
26 | systemctl stop kubelet
27 | systemctl stop kube-proxy
28 | systemctl stop flannel
29 | systemctl stop etcd
30 | systemctl stop docker
31 |
32 | ```
33 |
34 | ```
35 | # 清理k8s集群
36 | rm -rf /var/lib/etcd/
37 | rm -rf /var/lib/docker
38 | rm -rf /opt/containerd
39 | rm -rf /opt/kubernetes
40 | rm -rf /var/lib/kubelet
41 | rm -rf /var/lib/chrony
42 | rm -rf /var/lib/kube-proxy
43 | rm -rf /srv/*
44 |
45 |
46 | systemctl disable kube-scheduler
47 | systemctl disable kube-controller-manager
48 | systemctl disable kube-apiserver
49 | systemctl disable flannel
50 | systemctl disable etcd
51 | systemctl disable docker
52 |
53 | systemctl disable kubelet
54 | systemctl disable kube-proxy
55 | systemctl disable flannel
56 | systemctl disable etcd
57 | systemctl disable docker
58 |
59 | ```
60 |
--------------------------------------------------------------------------------
/docs/docker-install.md:
--------------------------------------------------------------------------------
1 | # study_docker
2 |
3 | ## 0.卸载旧版本
4 | ```bash
5 | yum remove -y docker \
6 | docker-client \
7 | docker-client-latest \
8 | docker-common \
9 | docker-latest \
10 | docker-latest-logrotate \
11 | docker-logrotate \
12 | docker-selinux \
13 | docker-engine-selinux \
14 | docker-engine
15 | ```
16 |
17 | ## 1.安装Docker
18 |
19 | 第一步:使用国内Docker源
20 | ```
21 | cd /etc/yum.repos.d/
22 | wget -O docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
23 |
24 | #或
25 | yum -y install yum-utils
26 | yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
27 |
28 | yum install -y yum-utils \
29 | device-mapper-persistent-data \
30 | lvm2
31 | ```
32 |
33 | 第二步:Docker安装:
34 | ```
35 | yum install -y docker-ce
36 | ```
37 |
38 | 第三步:启动后台进程:
39 | ```bash
40 | #启动docker服务
41 | systemctl restart docker
42 |
43 | #设置docker服务开启自启
44 | systemctl enable docker
45 |
46 | #Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
47 |
48 | #查看是否成功设置docker服务开启自启
49 | systemctl list-unit-files|grep docker
50 |
51 | docker.service enabled
52 |
53 | #关闭docker服务开启自启
54 | systemctl disable docker
55 |
56 | #Removed symlink /etc/systemd/system/multi-user.target.wants/docker.service.
57 | ```
58 |
59 | ## 2.脚本安装Docker
60 | ```bash
61 | #2.1、Docker官方安装脚本
62 | curl -sSL https://get.docker.com/ | sh
63 |
64 | #这个脚本会添加docker.repo仓库并且安装Docker
65 |
66 | #2.2、阿里云的安装脚本
67 | curl -sSL http://acs-public-mirror.oss-cn-hangzhou.aliyuncs.com/docker-engine/internet | sh -
68 |
69 | #2.3、DaoCloud 的安装脚本
70 | curl -sSL https://get.daocloud.io/docker | sh
71 |
72 | ```
73 |
74 | ## 3.Docker服务文件
75 | ```bash
76 | # Docker从1.13版本开始调整了默认的防火墙规则,禁用了iptables filter表中FORWARD链,这样会引起Kubernetes集群中跨Node的Pod无法通信,执行下面命令
77 | #注意,有变量的地方需要使用转义符号
78 |
79 | cat > /usr/lib/systemd/system/docker.service << EOF
80 | [Unit]
81 | Description=Docker Application Container Engine
82 | Documentation=https://docs.docker.com
83 | BindsTo=containerd.service
84 | After=network-online.target firewalld.service containerd.service
85 | Wants=network-online.target
86 | Requires=docker.socket
87 |
88 | [Service]
89 | Type=notify
90 | # the default is not to use systemd for cgroups because the delegate issues still
91 | # exists and systemd currently does not support the cgroup feature set required
92 | # for containers run by docker
93 | ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
94 | ExecReload=/bin/kill -s HUP \$MAINPID
95 | ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
96 | TimeoutSec=0
97 | RestartSec=2
98 | Restart=always
99 |
100 | # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
101 | # Both the old, and new location are accepted by systemd 229 and up, so using the old location
102 | # to make them work for either version of systemd.
103 | StartLimitBurst=3
104 |
105 | # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
106 | # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
107 | # this option work for either version of systemd.
108 | StartLimitInterval=60s
109 |
110 | # Having non-zero Limit*s causes performance problems due to accounting overhead
111 | # in the kernel. We recommend using cgroups to do container-local accounting.
112 | LimitNOFILE=infinity
113 | LimitNPROC=infinity
114 | LimitCORE=infinity
115 |
116 | # Comment TasksMax if your systemd version does not support it.
117 | # Only systemd 226 and above support this option.
118 | TasksMax=infinity
119 |
120 | # set delegate yes so that systemd does not reset the cgroups of docker containers
121 | Delegate=yes
122 |
123 | # kill only the docker process, not all processes in the cgroup
124 | KillMode=process
125 |
126 | [Install]
127 | WantedBy=multi-user.target
128 | EOF
129 | ```
130 | ### 3.1、配置docker加速器
131 | ```bash
132 | mkdir -p /data0/docker-data
133 |
134 | cat > /etc/docker/daemon.json << \EOF
135 | {
136 | "exec-opts": ["native.cgroupdriver=systemd"],
137 | "data-root": "/data0/docker-data",
138 | "registry-mirrors" : [
139 | "https://ot2k4d59.mirror.aliyuncs.com/"
140 | ],
141 | "insecure-registries": ["reg.hub.com"]
142 | }
143 | EOF
144 |
145 | 或者
146 | curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://f1361db2.m.daocloud.io
147 | ```
148 |
149 | ### 3.2、重新加载docker的配置文件
150 | ```bash
151 | systemctl daemon-reload
152 | systemctl restart docker
153 | ```
154 | ### 3.3、内核参数配置
155 | ```bash
156 | #编辑文件
157 | vim /etc/sysctl.conf
158 |
159 | net.bridge.bridge-nf-call-ip6tables = 1
160 | net.bridge.bridge-nf-call-iptables = 1
161 |
162 | #然后执行
163 | sysctl -p
164 |
165 | #查看docker信息是否生效
166 | docker info
167 | ```
168 |
169 | ## 4.通过测试镜像运行一个容器来验证Docker是否安装正确
170 | ```bash
171 | docker run hello-world
172 | ```
173 |
--------------------------------------------------------------------------------
/docs/etcd-install.md:
--------------------------------------------------------------------------------
1 |
2 | # 手动部署ETCD集群
3 |
4 | ## 0.准备etcd软件包
5 | ```
6 | [root@linux-node1 src]# wget https://github.com/coreos/etcd/releases/download/v3.2.18/etcd-v3.2.18-linux-amd64.tar.gz
7 | [root@linux-node1 src]# tar zxf etcd-v3.2.18-linux-amd64.tar.gz
8 | [root@linux-node1 src]# cd etcd-v3.2.18-linux-amd64
9 | [root@linux-node1 etcd-v3.2.18-linux-amd64]# cp etcd etcdctl /opt/kubernetes/bin/
10 | [root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.12:/opt/kubernetes/bin/
11 | [root@linux-node1 etcd-v3.2.18-linux-amd64]# scp etcd etcdctl 192.168.56.13:/opt/kubernetes/bin/
12 | ```
13 |
14 | ## 1.创建 etcd 证书签名请求:
15 | ```
16 | #约定所有证书都放在 /usr/local/src/ssl 目录中,然后同步到其他机器
17 |
18 | [root@linux-node1 ~]# cd /usr/local/src/ssl
19 | [root@linux-node1 ssl]#
20 | cat > etcd-csr.json < flanneld-csr.json </dev/null 2>&1
119 | ```
120 |
121 | 启动flannel
122 | ```
123 | [root@linux-node1 ~]# systemctl daemon-reload
124 | [root@linux-node1 ~]# systemctl enable flannel
125 | [root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
126 | [root@linux-node1 ~]# systemctl start flannel
127 | ```
128 |
129 | 查看服务状态
130 | ```
131 | [root@linux-node1 ~]# systemctl status flannel
132 | ```
133 |
134 | ## 配置Docker使用Flannel
135 | ```
136 | [root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
137 | [Unit] #在Unit下面修改After和增加Requires
138 | After=network-online.target flannel.service
139 | Wants=network-online.target
140 | Requires=flannel.service
141 |
142 | [Service] #增加EnvironmentFile=-/run/flannel/docker
143 | Type=notify
144 | EnvironmentFile=-/run/flannel/docker
145 | ExecStart=/usr/bin/dockerd $DOCKER_OPTS
146 |
147 | #最终配置
148 | cat /usr/lib/systemd/system/docker.service
149 | [Unit]
150 | Description=Docker Application Container Engine
151 | Documentation=http://docs.docker.com
152 | After=network.target flannel.service
153 | Requires=flannel.service
154 |
155 | [Service]
156 | Type=notify
157 | EnvironmentFile=-/run/flannel/docker
158 | EnvironmentFile=-/opt/kubernetes/cfg/docker
159 | ExecStart=/usr/bin/dockerd $DOCKER_OPT_BIP $DOCKER_OPT_MTU $DOCKER_OPTS
160 | LimitNOFILE=1048576
161 | LimitNPROC=1048576
162 | ExecReload=/bin/kill -s HUP $MAINPID
163 | # Having non-zero Limit*s causes performance problems due to accounting overhead
164 | # in the kernel. We recommend using cgroups to do container-local accounting.
165 | LimitNOFILE=infinity
166 | LimitNPROC=infinity
167 | LimitCORE=infinity
168 | # Uncomment TasksMax if your systemd version supports it.
169 | # Only systemd 226 and above support this version.
170 | #TasksMax=infinity
171 | TimeoutStartSec=0
172 | # set delegate yes so that systemd does not reset the cgroups of docker containers
173 | Delegate=yes
174 | # kill only the docker process, not all processes in the cgroup
175 | KillMode=process
176 | # restart the docker process if it exits prematurely
177 | Restart=on-failure
178 | StartLimitBurst=3
179 | StartLimitInterval=60s
180 |
181 | [Install]
182 | WantedBy=multi-user.target
183 | ```
184 |
185 | 将配置复制到另外两个节点
186 | ```
187 | [root@linux-node1 ~]# scp /usr/lib/systemd/system/docker.service 192.168.56.12:/usr/lib/systemd/system/
188 | [root@linux-node1 ~]# scp /usr/lib/systemd/system/docker.service 192.168.56.13:/usr/lib/systemd/system/
189 | ```
190 |
191 | 重启Docker
192 | ```
193 | systemctl daemon-reload
194 | systemctl restart docker
195 | ```
196 |
--------------------------------------------------------------------------------
/docs/k8s-error-resolution.md:
--------------------------------------------------------------------------------
1 | ## 报错一:flanneld 启动不了
2 | ```
3 | Oct 10 10:42:19 linux-node1 flanneld: E1010 10:42:19.499080 1816 main.go:349] Couldn't fetch network config: 100: Key not found (/coreos.com) [11]
4 | ```
5 | ## 解决办法:
6 | ```
7 | #首先查看flannel使用的是哪种网络模式,对应的etcd中的key是哪个(/kubernetes/network/config 或 /coreos.com/network)
8 | [root@linux-node3 cfg]# cat /opt/kubernetes/cfg/flannel
9 | FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379"
10 | FLANNEL_ETCD_KEY="-etcd-prefix=/coreos.com/network" ----这个参数值
11 | FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
12 | FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
13 | FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
14 |
15 | #etcd集群集群执行下面命令,清空etcd数据
16 | rm -rf /var/lib/etcd/default.etcd/
17 |
18 | #下面这条只需在一个节点执行就可以
19 | #如果是/coreos.com/network则执行下面的
20 | [root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
21 | --cert-file /opt/kubernetes/ssl/flanneld.pem \
22 | --key-file /opt/kubernetes/ssl/flanneld-key.pem \
23 | --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \
24 | mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
25 |
26 | #如果是/kubernetes/network/config则执行下面的
27 | [root@linux-node1 ~]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem \
28 | --cert-file /opt/kubernetes/ssl/flanneld.pem \
29 | --key-file /opt/kubernetes/ssl/flanneld-key.pem \
30 | --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \
31 | mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
32 | ```
33 | 参考文档:https://stackoverflow.com/questions/34439659/flannel-and-docker-dont-start
34 |
35 | ## 报错二:flanneld 启动不了
36 | ```
37 | Oct 10 11:40:11 linux-node1 flanneld: E1010 11:40:11.797324 20669 main.go:349] Couldn't fetch network config: 104: Not a directory (/kubernetes/network/config) [12]
38 |
39 | 问题原因:在初次配置的时候,把flannel的配置文件中的etcd-prefix-key配置成了/kubernetes/network/config,实际上应该是/kubernetes/network
40 |
41 | [root@linux-node1 ~]# cat /opt/kubernetes/cfg/flannel
42 | FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379"
43 | FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network/config" --正确的应该为 /kubernetes/network/
44 | FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
45 | FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
46 | FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
47 |
48 | ```
49 | 参考文档:https://www.cnblogs.com/lyzw/p/6016789.html
50 |
--------------------------------------------------------------------------------
/docs/k8s_pv_local.md:
--------------------------------------------------------------------------------
1 | 参考文档:
2 |
3 | https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/
4 |
--------------------------------------------------------------------------------
/docs/k8s重启pod.md:
--------------------------------------------------------------------------------
1 | 通过kubectl delete批量删除全部Pod
2 | ```
3 | kubectl delete pod --all
4 | ```
5 |
6 | ```
7 | 在没有pod 的yaml文件时,强制重启某个pod
8 |
9 | kubectl get pod PODNAME -n NAMESPACE -o yaml | kubectl replace --force -f -
10 |
11 | ```
12 |
13 | ```
14 | Q:如何进入一个 pod ?
15 |
16 | kubectl get pod 查看pod name
17 |
18 | kubectl describe pod name_of_pod 查看pod详细信息
19 |
20 | 进入pod:
21 |
22 | [root@test001 ~]# kubectl get pod -o wide
23 | NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
24 | nginx-deployment-68c7f5464c-p52rl 1/1 Running 0 17m 172.20.1.22 10.33.35.6
25 | nginx-deployment-68c7f5464c-qfd24 1/1 Running 0 17m 172.20.2.16 10.33.35.7
26 |
27 | kubectl exec -it name-of-pod /bin/bash
28 | ```
29 | 参考资料:
30 |
31 | https://www.jianshu.com/p/baa6b11062de
32 |
--------------------------------------------------------------------------------
/docs/node.md:
--------------------------------------------------------------------------------
1 | ## 部署kubelet
2 |
3 | 1.二进制包准备
4 | 将软件包从linux-node1复制到linux-node2中去。
5 | ```
6 | [root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
7 | [root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
8 | [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.12:/opt/kubernetes/bin/
9 | [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.13:/opt/kubernetes/bin/
10 | ```
11 |
12 | 2.创建角色绑定
13 |
14 | kubelet启动的时候会向kube-apiserver发送tls-bootstrap的请求,所以需要先将bootstrap的token绑定到对应的角色上,这样kubelet才有权限去创建证书签名请求(CSR)。这个请求是怎么回事呢?kubelet起来的时候会访问apiserver,来动态获取证书。
15 |
16 | ```
17 | [root@linux-node1 ~]# cd /usr/local/src/ssl
18 | [root@linux-node1 ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
19 | clusterrolebinding "kubelet-bootstrap" created
20 | ```
21 |
22 | 3.创建 kubelet bootstrapping kubeconfig 文件
23 | 设置集群参数
24 | ```
25 | [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
26 | --certificate-authority=/opt/kubernetes/ssl/ca.pem \
27 | --embed-certs=true \
28 | --server=https://192.168.56.11:6443 \
29 | --kubeconfig=bootstrap.kubeconfig
30 | Cluster "kubernetes" set.
31 | ```
32 |
33 | 设置客户端认证参数
34 | ```
35 | #注意这里的token需要跟之前kube-apiserver配置的一致(/usr/lib/systemd/system/kube-apiserver.service 中 /opt/kubernetes/ssl/bootstrap-token.csv 中的一致)
36 |
37 | [root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
38 | --token=ad6d5bb607a186796d8861557df0d17f \
39 | --kubeconfig=bootstrap.kubeconfig
40 | User "kubelet-bootstrap" set.
41 | ```
42 |
43 | 设置上下文参数
44 | ```
45 | [root@linux-node1 ssl]# kubectl config set-context default \
46 | --cluster=kubernetes \
47 | --user=kubelet-bootstrap \
48 | --kubeconfig=bootstrap.kubeconfig
49 | Context "default" created.
50 | ```
51 |
52 | 选择默认上下文
53 | ```
54 | [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
55 | Switched to context "default".
56 |
57 | #以上操作,就是为了生成这个文件 bootstrap.kubeconfig (需要往所有节点上拷贝过去)
58 | [root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
59 | [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.12:/opt/kubernetes/cfg
60 | [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.13:/opt/kubernetes/cfg
61 | ```
62 |
63 | 部署kubelet
64 | 1.设置CNI支持(k8s网络接口的插件)
65 | ```
66 | [root@linux-node2 ~]# mkdir -p /etc/cni/net.d
67 | [root@linux-node2 ~]#
68 | cat > /etc/cni/net.d/10-default.conf <0{print $1}'| xargs kubectl certificate approve
148 | ```
149 | 执行完毕后,查看节点状态已经是Ready的状态了
150 | ```
151 | [root@linux-node1 ~]# kubectl get node
152 | NAME STATUS ROLES AGE VERSION
153 | 192.168.56.12 Ready 103s v1.12.1
154 | 192.168.56.13 Ready 103s v1.12.1
155 | ```
156 | ## 部署Kubernetes Proxy
157 | 1.配置kube-proxy使用LVS
158 | ```
159 | [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
160 | ```
161 |
162 | 2.创建 kube-proxy 证书请求
163 | ```
164 | [root@linux-node1 ~]# cd /usr/local/src/ssl/
165 | [root@linux-node1 ssl]#
166 | cat > kube-proxy-csr.json < RemoteAddress:Port Forward Weight ActiveConn InActConn
285 | TCP 10.1.0.1:443 rr persistent 10800
286 | -> 192.168.56.11:6443 Masq 1 0 0
287 | ```
288 | 如果你在两台实验机器都安装了kubelet和proxy服务,使用下面的命令可以检查状态:
289 | ```
290 | [root@linux-node1 ssl]# kubectl get node
291 | NAME STATUS ROLES AGE VERSION
292 | 192.168.56.12 Ready 22m v1.10.1
293 | 192.168.56.13 Ready 3m v1.10.1
294 | ```
295 | linux-node3节点请自行部署。
296 |
--------------------------------------------------------------------------------
/docs/operational.md:
--------------------------------------------------------------------------------
1 |
2 | ## 一、服务重启
3 | ```
4 | #master
5 | systemctl restart kube-scheduler
6 | systemctl restart kube-controller-manager
7 | systemctl restart kube-apiserver
8 | systemctl restart flannel
9 | systemctl restart etcd
10 |
11 | systemctl stop kube-scheduler
12 | systemctl stop kube-controller-manager
13 | systemctl stop kube-apiserver
14 | systemctl stop flannel
15 | systemctl stop etcd
16 |
17 | systemctl status kube-apiserver
18 | systemctl status kube-scheduler
19 | systemctl status kube-controller-manager
20 | systemctl status etcd
21 |
22 | #node
23 | systemctl restart kubelet
24 | systemctl restart kube-proxy
25 | systemctl restart flannel
26 | systemctl restart etcd
27 |
28 | systemctl stop kubelet
29 | systemctl stop kube-proxy
30 | systemctl stop flannel
31 | systemctl stop etcd
32 |
33 | systemctl status kubelet
34 | systemctl status kube-proxy
35 | systemctl status flannel
36 | systemctl status etcd
37 | ```
38 |
39 | ## 二、常用查询
40 | ```
41 | #查询命名空间
42 | [root@linux-node1 ~]# kubectl get namespace --all-namespaces
43 | NAME STATUS AGE
44 | default Active 3d13h
45 | kube-node-lease Active 3d13h
46 | kube-public Active 3d13h
47 | kube-system Active 3d13h
48 |
49 | #查询健康状况
50 | [root@linux-node1 ~]# kubectl get cs --all-namespaces
51 | NAME STATUS MESSAGE ERROR
52 | controller-manager Healthy ok
53 | scheduler Healthy ok
54 | etcd-0 Healthy {"health":"true"}
55 | etcd-2 Healthy {"health":"true"}
56 | etcd-1 Healthy {"health":"true"}
57 |
58 | #查询node
59 | [root@linux-node1 ~]# kubectl get node -o wide
60 | NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
61 | 192.168.56.12 Ready 2m v1.10.3 CentOS Linux 7 (Core) 3.10.0-862.el7.x86_64 docker://18.6.1
62 | 192.168.56.13 Ready 2m v1.10.3 CentOS Linux 7 (Core) 3.10.0-862.el7.x86_64 docker://18.6.1
63 |
64 | #创建测试deployment
65 | [root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 360000
66 |
67 | #查看创建的deployment
68 | kubectl get deployment -o wide --all-namespaces
69 |
70 | #查询pod
71 | [root@linux-node1 ~]# kubectl get pod -o wide --all-namespaces
72 | NAME READY STATUS RESTARTS AGE IP NODE
73 | net-test-5767cb94df-6smfk 1/1 Running 1 1h 10.2.69.3 192.168.56.12
74 | net-test-5767cb94df-ctkhz 1/1 Running 1 1h 10.2.17.3 192.168.56.13
75 |
76 | #查询service
77 | [root@linux-node1 ~]# kubectl get service
78 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
79 | kubernetes ClusterIP 10.1.0.1 443/TCP 4m
80 |
81 | #Etcd集群健康状况查询
82 | [root@linux-node1 ~]# etcdctl --endpoints=https://192.168.56.11:2379 \
83 | --ca-file=/opt/kubernetes/ssl/ca.pem \
84 | --cert-file=/opt/kubernetes/ssl/etcd.pem \
85 | --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
86 | ```
87 |
88 | ## 三、修改POD的IP地址段
89 | ```
90 | #修改一
91 | [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-controller-manager.service
92 | [Unit]
93 | Description=Kubernetes Controller Manager
94 | Documentation=https://github.com/GoogleCloudPlatform/kubernetes
95 |
96 | [Service]
97 | ExecStart=/opt/kubernetes/bin/kube-controller-manager \
98 | --address=127.0.0.1 \
99 | --master=http://127.0.0.1:8080 \
100 | --allocate-node-cidrs=true \
101 | --service-cluster-ip-range=10.1.0.0/16 \
102 | --cluster-cidr=10.2.0.0/16 \ ---POD的IP地址段
103 | --cluster-name=kubernetes \
104 | --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
105 | --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
106 | --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
107 | --root-ca-file=/opt/kubernetes/ssl/ca.pem \
108 | --leader-elect=true \
109 | --v=2 \
110 | --logtostderr=false \
111 | --log-dir=/opt/kubernetes/log
112 |
113 | Restart=on-failure
114 | RestartSec=5
115 |
116 | [Install]
117 | WantedBy=multi-user.target
118 |
119 | #修改二(修改etcd key中的值)
120 |
121 | #创建etcd的key值
122 | /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
123 | --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \
124 | mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
125 |
126 | 获取etcd中key的值
127 | /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
128 | --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \
129 | get /kubernetes/network/config
130 |
131 | 修改etcd中key的值
132 | /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
133 | --no-sync -C https://192.168.56.11:2379,https://192.168.56.12:2379,https://192.168.56.13:2379 \
134 | set /kubernetes/network/config '{ "Network": "10.3.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
135 | ```
136 |
137 |
--------------------------------------------------------------------------------
/docs/外部访问K8s中Pod的几种方式.md:
--------------------------------------------------------------------------------
1 | ```
2 | Ingress是个什么鬼,网上资料很多(推荐官方),大家自行研究。简单来讲,就是一个负载均衡的玩意,其主要用来解决使用NodePort暴露Service的端口时Node IP会漂移的问题。同时,若大量使用NodePort暴露主机端口,管理会非常混乱。
3 |
4 | 好的解决方案就是让外界通过域名去访问Service,而无需关心其Node IP及Port。那为什么不直接使用Nginx?这是因为在K8S集群中,如果每加入一个服务,我们都在Nginx中添加一个配置,其实是一个重复性的体力活,只要是重复性的体力活,我们都应该通过技术将它干掉。
5 |
6 | Ingress就可以解决上面的问题,其包含两个组件Ingress Controller和Ingress:
7 |
8 | Ingress
9 | 将Nginx的配置抽象成一个Ingress对象,每添加一个新的服务只需写一个新的Ingress的yaml文件即可
10 |
11 | Ingress Controller
12 | 将新加入的Ingress转化成Nginx的配置文件并使之生效
13 | ```
14 |
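To make the two roles concrete: once an Ingress Controller is running, exposing one more Service is just a matter of writing another small Ingress object. A minimal sketch using the same API version as the rest of this repo (the hostname and Service name are placeholders):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com          # domain users will visit
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp-service # existing Service to expose
          servicePort: 80
```
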
15 | 参考文档:
16 |
17 | https://blog.csdn.net/qq_23348071/article/details/87185025 从外部访问K8s中Pod的五种方式
18 |
--------------------------------------------------------------------------------
/docs/虚拟机环境准备.md:
--------------------------------------------------------------------------------
1 | # 一、安装环境准备
2 |
3 | 下载系统镜像:可以在阿里云镜像站点下载 CentOS
4 |
5 | 镜像: http://mirrors.aliyun.com/centos/7/isos/x86_64/CentOS-7-x86_64-DVD-1804.iso
6 |
7 | 创建虚拟机:步骤略。
8 |
9 | # 二、操作系统安装
10 | 为了统一环境,保证实验的通用性,将网卡名称设置为 eth*,不使用 CentOS 7 默认的网卡命名规则。所以需要在安装的时候,增加内核参数。
11 |
12 | ## 1)光标选择“Install CentOS 7”
13 |
14 | 
15 |
16 | ## 2)点击 Tab,打开 kernel 启动选项后,增加 net.ifnames=0 biosdevname=0,如下图所示。
17 |
18 | 
19 |
20 | # 三、设置网络
21 |
22 | ## 1.vmware-workstation设置网络。
23 |
24 | 如果你的默认 NAT 地址段不是 192.168.56.0/24,可以修改 VMware Workstation 的配置:点击编辑 -> 虚拟网络配置,然后进行配置。
25 |
26 | 
27 |
28 | ## 2.virtualbox设置网络。
29 |
30 | 
31 |
32 | 
33 |
34 | # 四、系统配置
35 |
36 | ## 1.设置主机名
37 | ```
38 | [root@localhost ~]# vi /etc/hostname
39 | linux-node1.example.com
40 | 或
41 | #修改本机hostname
42 | [root@localhost ~]# hostnamectl set-hostname linux-node1.example.com
43 |
44 | #让主机名修改生效
45 | [root@localhost ~]# su -l
46 | Last login: Sun Sep 30 04:30:53 EDT 2018 on pts/0
47 | [root@linux-node1 ~]#
48 | ```
49 |
50 | ## 2.安装依赖
51 | ```
52 | #为了保证各服务器间时间一致,使用ntpdate同步时间。
53 | # 安装ntpdate
54 | [root@linux-node1 ~]# yum install -y wget lrzsz vim net-tools openssh-clients ntpdate unzip xz
55 |
56 | $ 加入crontab
57 | 1 * * * * (/usr/sbin/ntpdate -s ntp1.aliyun.com;/usr/sbin/hwclock -w) > /dev/null 2>&1
58 | 1 * * * * /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1
59 |
60 | #设置时区
61 | [root@linux-node1 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
62 | ```
63 |
64 | ## 3.设置 IP 地址
65 |
66 | 请配置静态 IP 地址。注意将 UUID 和 MAC 地址以及其它配置删除掉,便于进行虚拟机克隆,请参考下面的配置。
67 | ```
68 | [root@linux-node1 ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
69 | TYPE=Ethernet
70 | BOOTPROTO=static
71 | NAME=eth0
72 | DEVICE=eth0
73 | ONBOOT=yes
74 | IPADDR=192.168.56.11
75 | NETMASK=255.255.255.0
76 | #GATEWAY=192.168.56.2
77 |
78 | #重启网络服务
79 | [root@linux-node1 ~]# systemctl restart network
80 | ```
81 |
82 |
83 |
84 | ## 4.关闭 NetworkManager 和防火墙开启自启动
85 | ```
86 | [root@linux-node1 ~]# systemctl disable firewalld
87 | [root@linux-node1 ~]# systemctl disable NetworkManager
88 | ```
89 |
90 | ## 5.设置主机名解析
91 | ```
92 | [root@linux-node1 ~]#
93 | cat > /etc/hosts <> /etc/sysctl.conf
126 |
127 | [root@linux-node1 ~]# sysctl -p
128 | ```
129 |
130 | ## 8.重启
131 | ```
132 | [root@linux-node1 ~]# reboot
133 | ```
134 |
135 | ## 9.克隆虚拟机
136 |
137 | 关闭虚拟机,并克隆当前虚拟机 linux-node1 到 linux-node2、linux-node3,建议选择“创建完整克隆”,而不是“创建链接克隆”。
138 | 克隆完毕后请给 linux-node2 linux-node3 设置正确的 IP 地址和主机名。
139 |
140 | ## 10.给虚拟机做快照
141 |
142 | 分别给三台虚拟机做快照,以便于随时回到一个刚初始化完毕的系统中,可以有效地减少学习过程中的环境准备时间。同时,请确保实验环境的一致性,便于顺利地完成所有实验。
143 |
144 |
--------------------------------------------------------------------------------
/example/coredns/coredns.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: coredns
5 | namespace: kube-system
6 | labels:
7 | kubernetes.io/cluster-service: "true"
8 | addonmanager.kubernetes.io/mode: Reconcile
9 | ---
10 | apiVersion: rbac.authorization.k8s.io/v1
11 | kind: ClusterRole
12 | metadata:
13 | labels:
14 | kubernetes.io/bootstrapping: rbac-defaults
15 | addonmanager.kubernetes.io/mode: Reconcile
16 | name: system:coredns
17 | rules:
18 | - apiGroups:
19 | - ""
20 | resources:
21 | - endpoints
22 | - services
23 | - pods
24 | - namespaces
25 | verbs:
26 | - list
27 | - watch
28 | ---
29 | apiVersion: rbac.authorization.k8s.io/v1
30 | kind: ClusterRoleBinding
31 | metadata:
32 | annotations:
33 | rbac.authorization.kubernetes.io/autoupdate: "true"
34 | labels:
35 | kubernetes.io/bootstrapping: rbac-defaults
36 | addonmanager.kubernetes.io/mode: EnsureExists
37 | name: system:coredns
38 | roleRef:
39 | apiGroup: rbac.authorization.k8s.io
40 | kind: ClusterRole
41 | name: system:coredns
42 | subjects:
43 | - kind: ServiceAccount
44 | name: coredns
45 | namespace: kube-system
46 | ---
47 | apiVersion: v1
48 | kind: ConfigMap
49 | metadata:
50 | name: coredns
51 | namespace: kube-system
52 | labels:
53 | addonmanager.kubernetes.io/mode: EnsureExists
54 | data:
55 | Corefile: |
56 | .:53 {
57 | errors
58 | health
59 | kubernetes cluster.local. in-addr.arpa ip6.arpa {
60 | pods insecure
61 | upstream
62 | fallthrough in-addr.arpa ip6.arpa
63 | }
64 | prometheus :9153
65 | proxy . /etc/resolv.conf
66 | cache 30
67 | }
68 | ---
69 | apiVersion: extensions/v1beta1
70 | kind: Deployment
71 | metadata:
72 | name: coredns
73 | namespace: kube-system
74 | labels:
75 | k8s-app: coredns
76 | kubernetes.io/cluster-service: "true"
77 | addonmanager.kubernetes.io/mode: Reconcile
78 | kubernetes.io/name: "CoreDNS"
79 | spec:
80 | replicas: 2
81 | strategy:
82 | type: RollingUpdate
83 | rollingUpdate:
84 | maxUnavailable: 1
85 | selector:
86 | matchLabels:
87 | k8s-app: coredns
88 | template:
89 | metadata:
90 | labels:
91 | k8s-app: coredns
92 | spec:
93 | serviceAccountName: coredns
94 | tolerations:
95 | - key: node-role.kubernetes.io/master
96 | effect: NoSchedule
97 | - key: "CriticalAddonsOnly"
98 | operator: "Exists"
99 | containers:
100 | - name: coredns
101 | image: coredns/coredns:1.0.6
102 | imagePullPolicy: IfNotPresent
103 | resources:
104 | limits:
105 | memory: 170Mi
106 | requests:
107 | cpu: 100m
108 | memory: 70Mi
109 | args: [ "-conf", "/etc/coredns/Corefile" ]
110 | volumeMounts:
111 | - name: config-volume
112 | mountPath: /etc/coredns
113 | ports:
114 | - containerPort: 53
115 | name: dns
116 | protocol: UDP
117 | - containerPort: 53
118 | name: dns-tcp
119 | protocol: TCP
120 | livenessProbe:
121 | httpGet:
122 | path: /health
123 | port: 8080
124 | scheme: HTTP
125 | initialDelaySeconds: 60
126 | timeoutSeconds: 5
127 | successThreshold: 1
128 | failureThreshold: 5
129 | dnsPolicy: Default
130 | volumes:
131 | - name: config-volume
132 | configMap:
133 | name: coredns
134 | items:
135 | - key: Corefile
136 | path: Corefile
137 | ---
138 | apiVersion: v1
139 | kind: Service
140 | metadata:
141 | name: coredns
142 | namespace: kube-system
143 | labels:
144 | k8s-app: coredns
145 | kubernetes.io/cluster-service: "true"
146 | addonmanager.kubernetes.io/mode: Reconcile
147 | kubernetes.io/name: "CoreDNS"
148 | spec:
149 | selector:
150 | k8s-app: coredns
151 | clusterIP: 10.1.0.2
152 | ports:
153 | - name: dns
154 | port: 53
155 | protocol: UDP
156 | - name: dns-tcp
157 | port: 53
158 | protocol: TCP
159 |
--------------------------------------------------------------------------------
/example/nginx/nginx-daemonset.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: DaemonSet
3 | metadata:
4 | name: nginx-daemonset
5 | labels:
6 | app: nginx
7 | spec:
8 | selector:
9 | matchLabels:
10 | app: nginx
11 | template:
12 | metadata:
13 | labels:
14 | app: nginx
15 | spec:
16 | containers:
17 | - name: nginx
18 | image: nginx:1.13.12
19 | ports:
20 | - containerPort: 80
21 |
--------------------------------------------------------------------------------
/example/nginx/nginx-deployment.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 | name: nginx-deployment
5 | labels:
6 | app: nginx
7 | spec:
8 | replicas: 3
9 | selector:
10 | matchLabels:
11 | app: nginx
12 | template:
13 | metadata:
14 | labels:
15 | app: nginx
16 | spec:
17 | containers:
18 | - name: nginx
19 | image: nginx:1.13.12
20 | ports:
21 | - containerPort: 80
22 |
--------------------------------------------------------------------------------
/example/nginx/nginx-ingress.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: extensions/v1beta1
2 | kind: Ingress
3 | metadata:
4 | name: nginx-ingress
5 | spec:
6 | rules:
7 | - host: www.example.com
8 | http:
9 | paths:
10 | - path: /
11 | backend:
12 | serviceName: nginx-service
13 | servicePort: 80
14 |
--------------------------------------------------------------------------------
/example/nginx/nginx-pod.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Pod
3 | metadata:
4 | name: nginx-pod
5 | labels:
6 | app: nginx
7 | spec:
8 | containers:
9 | - name: nginx
10 | image: nginx:1.13.12
11 | ports:
12 | - containerPort: 80
13 |
--------------------------------------------------------------------------------
/example/nginx/nginx-rc.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ReplicationController
3 | metadata:
4 | name: nginx-rc
5 | spec:
6 | replicas: 3
7 | selector:
8 | app: nginx
9 | template:
10 | metadata:
11 | name: nginx
12 | labels:
13 | app: nginx
14 | spec:
15 | containers:
16 | - name: nginx
17 | image: nginx:1.13.12
18 | ports:
19 | - containerPort: 80
20 |
--------------------------------------------------------------------------------
/example/nginx/nginx-rs.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: ReplicaSet
3 | metadata:
4 | name: nginx-rs
5 | labels:
6 | app: nginx
7 | spec:
8 | replicas: 3
9 | selector:
10 | matchLabels:
11 | app: nginx
12 | template:
13 | metadata:
14 | labels:
15 | app: nginx
16 | spec:
17 | containers:
18 | - name: nginx
19 | image: nginx:1.13.12
20 | ports:
21 | - containerPort: 80
22 |
--------------------------------------------------------------------------------
/example/nginx/nginx-service-nodeport.yaml:
--------------------------------------------------------------------------------
1 | kind: Service
2 | apiVersion: v1
3 | metadata:
4 | name: nginx-service
5 | spec:
6 | selector:
7 | app: nginx
8 | ports:
9 | - protocol: TCP
10 | port: 80
11 | targetPort: 80
12 | type: NodePort
13 |
14 |
--------------------------------------------------------------------------------
/example/nginx/nginx-service.yaml:
--------------------------------------------------------------------------------
1 | kind: Service
2 | apiVersion: v1
3 | metadata:
4 | name: nginx-service
5 | spec:
6 | selector:
7 | app: nginx
8 | ports:
9 | - protocol: TCP
10 | port: 80
11 | targetPort: 80
12 |
--------------------------------------------------------------------------------
/helm/README.md:
--------------------------------------------------------------------------------
1 | # 1. Helm - the package manager for K8S
2 |
3 | Similar to yum on CentOS
4 |
5 | ## 1. Helm architecture
6 | ```bash
7 | Helm works with charts and releases.
8 | Helm consists of two components: the Helm client and the Tiller server.
9 | ```
10 |
11 | ## 2. Installing the Helm client
12 |
13 | 1. Script install
14 | ```bash
15 | #Install
16 | curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get |bash
17 |
18 | #Check
19 | which helm
20 |
21 | #The server side is not installed yet, so this reports a connection error
22 | helm version
23 |
24 | #Add command completion
25 | helm completion bash > .helmrc
26 | echo "source .helmrc" >> .bashrc
27 | ```
28 |
29 | 2. Install from the release tarball
30 | ```bash
31 | #Install from the release tarball
32 | #curl -O https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz
33 |
34 | wget -O helm-v2.16.0-linux-amd64.tar.gz https://get.helm.sh/helm-v2.16.0-linux-amd64.tar.gz
35 | tar -zxvf helm-v2.16.0-linux-amd64.tar.gz
36 | cd linux-amd64 #If Tiller is deployed into kubernetes as a container, the tiller binary here can be ignored; just copy helm to /usr/bin
37 | cp helm /usr/bin/
38 | echo "source <(helm completion bash)" >> /root/.bashrc # command auto-completion
39 | ```
40 |
41 | ## 3. Installing the Tiller server
42 |
43 | 1. Install
44 |
45 | ```bash
46 | helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
47 |
48 | #Check
49 | kubectl get --namespace=kube-system service tiller-deploy
50 | kubectl get --namespace=kube-system deployments. tiller-deploy
51 | kubectl get --namespace=kube-system pods|grep tiller-deploy
52 |
53 | #The server version information should now be shown
54 | helm version
55 |
56 | #Add a new repo
57 | helm repo add stable http://mirror.azure.cn/kubernetes/charts/
58 | ```
59 |
60 | 2. Create the helm-rbac.yaml file
61 | ```bash
62 | cat >helm-rbac.yaml<<\EOF
63 | apiVersion: v1
64 | kind: ServiceAccount
65 | metadata:
66 | name: tiller
67 | namespace: kube-system
68 | ---
69 | apiVersion: rbac.authorization.k8s.io/v1beta1
70 | kind: ClusterRoleBinding
71 | metadata:
72 | name: tiller
73 | roleRef:
74 | apiGroup: rbac.authorization.k8s.io
75 | kind: ClusterRole
76 | name: cluster-admin
77 | subjects:
78 | - kind: ServiceAccount
79 | name: tiller
80 | namespace: kube-system
81 | EOF
82 |
83 | kubectl apply -f helm-rbac.yaml
84 | ```
85 |
86 | ## 4. Using Helm
87 | ```bash
88 | #Search for charts
89 | helm search
90 |
91 | #Run the following commands to grant permissions
92 | kubectl create serviceaccount --namespace kube-system tiller
93 | kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
94 | kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
95 |
96 | #Install the mysql application from its chart
97 | helm install stable/mysql
98 |
99 | This automatically deploys a Service, Deployment, Secret and PersistentVolumeClaim, and prints many hints, e.g. how to retrieve the mysql password and which port to connect to (see the sketch after this block).
100 |
101 | #Inspect the individual objects of the release
102 | kubectl get service doltish-beetle-mysql
103 | kubectl get deployments. doltish-beetle-mysql
104 | kubectl get pods doltish-beetle-mysql-75fbddbd9d-f64j4
105 | kubectl get pvc doltish-beetle-mysql
106 | helm list # list the releases that have been deployed
107 |
108 | #Delete the release
109 | helm delete doltish-beetle
110 | kubectl get pods
111 | kubectl get service
112 | kubectl get deployments.
113 | kubectl get pvc
114 | ```
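As an example of the hints the chart prints, the root password of a release installed from stable/mysql is stored in a Secret named <release>-mysql. A sketch of retrieving it and connecting, using the release name doltish-beetle from above (the Secret key name is taken from the chart's notes and may differ between chart versions; a local mysql client is assumed):

```bash
# Read the generated root password from the release Secret
kubectl get secret doltish-beetle-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

# Forward the MySQL port locally and connect with the password printed above
kubectl port-forward svc/doltish-beetle-mysql 3306:3306 &
mysql -h 127.0.0.1 -P 3306 -u root -p
```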
115 |
116 | # 2. Deploying Nginx Ingress with Helm
117 |
118 | ## 1. Label the edge node
119 |
120 | We use k8s-master-01 (192.168.56.11) as the edge node and give it a Label
121 |
122 | ```bash
123 | #Show node labels
124 | kubectl get nodes --show-labels
125 |
126 | kubectl label node k8s-master-01 node-role.kubernetes.io/edge=
127 |
128 | $ kubectl get node
129 | NAME STATUS ROLES AGE VERSION
130 | k8s-master-01 Ready edge,master 59m v1.16.2
131 | k8s-master-02 Ready 58m v1.16.2
132 | k8s-master-03 Ready 58m v1.16.2
133 | ```
134 |
135 | ## 2. Write the chart values file ingress-nginx.yaml
136 |
137 | ```bash
138 | cat >ingress-nginx.yaml<<\EOF
139 | controller:
140 | hostNetwork: true
141 | daemonset:
142 | useHostPort: false
143 | hostPorts:
144 | http: 80
145 | https: 443
146 | service:
147 | type: ClusterIP
148 | tolerations:
149 | - operator: "Exists"
150 | nodeSelector:
151 | node-role.kubernetes.io/edge: ''
152 |
153 | defaultBackend:
154 | tolerations:
155 | - operator: "Exists"
156 | nodeSelector:
157 | node-role.kubernetes.io/edge: ''
158 | EOF
159 | ```
160 |
161 | ## 3. Install nginx-ingress
162 |
163 | ```bash
164 | helm del --purge nginx-ingress
165 |
166 | helm repo update
167 |
168 | helm install stable/nginx-ingress \
169 | --name nginx-ingress \
170 | --namespace kube-system \
171 | -f ingress-nginx.yaml
172 |
173 | If visiting http://192.168.56.11 returns "default backend", the deployment is complete.
174 |
175 | #nginx-ingress
176 | docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
177 | docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
178 | docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.26.1
179 |
180 | #defaultbackend
181 | docker pull googlecontainer/defaultbackend-amd64:1.5
182 | docker tag googlecontainer/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
183 | docker rmi googlecontainer/defaultbackend-amd64:1.5
184 | ```
185 |
186 | ## 4. Check the nginx-ingress Pods
187 |
188 | ```bash
189 | kubectl get pods -n kube-system | grep nginx-ingress
190 | ```
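A quick way to confirm the controller is actually serving on the edge node labelled above (192.168.56.11). Hitting it without a matching host should land on the default backend, and a host header can simulate a routed request once an Ingress exists (www.example.com is a placeholder):

```bash
# Expect "default backend - 404" from the default backend
curl http://192.168.56.11

# Simulate a host-based route (only succeeds if an Ingress for this host exists)
curl -H "Host: www.example.com" http://192.168.56.11
```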
191 |
192 | # 3. Installing the Kubernetes dashboard with Helm
193 |
194 | ## 1. Create the tls certificate
195 |
196 | ```bash
197 | openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout ./tls.key -out ./tls.crt -subj "/CN=k8s.test.com"
198 | ```
199 |
200 | ## 2. Install the tls secret
201 | ```bash
202 | kubectl delete secret dashboard-tls-secret -n kube-system
203 |
204 | kubectl -n kube-system create secret tls dashboard-tls-secret --key ./tls.key --cert ./tls.crt
205 |
206 | kubectl get secret -n kube-system |grep dashboard
207 | ```
208 |
209 | ## 3. Prepare the values file
210 |
211 | ```bash
212 | cat >kubernetes-dashboard.yaml<<\EOF
213 | image:
214 | repository: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64
215 | tag: v1.10.1
216 | ingress:
217 | enabled: true
218 | hosts:
219 | - k8s.test.com
220 | annotations:
221 | nginx.ingress.kubernetes.io/ssl-redirect: "false"
222 | nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
223 | tls:
224 | - secretName: dashboard-tls-secret
225 | hosts:
226 | - k8s.test.com
227 | nodeSelector:
228 | node-role.kubernetes.io/edge: ''
229 | tolerations:
230 | - key: node-role.kubernetes.io/master
231 | operator: Exists
232 | effect: NoSchedule
233 | - key: node-role.kubernetes.io/master
234 | operator: Exists
235 | effect: PreferNoSchedule
236 | rbac:
237 | clusterAdminRole: true
238 | EOF
239 |
240 | Compared with the default configuration, the following items are changed:
241 |
242 | ingress.enabled - set to true to enable Ingress, exposing the Kubernetes Dashboard service through it so it can be reached from a browser
243 |
244 | ingress.annotations - let the Nginx Ingress Controller installed earlier reverse-proxy the Kubernetes Dashboard service; since the Dashboard backend listens over https while the Nginx Ingress Controller forwards to backends over HTTP by default, the secure-backends/backend-protocol annotation tells it to forward requests to the backend over HTTPS
245 |
246 | ingress.hosts - replace with the domain the certificate was issued for
247 |
248 | ingress.tls - secretName is the Secret that holds the certificate (the dashboard-tls-secret created above), and hosts is replaced with the certificate's domain
249 |
250 | rbac.clusterAdminRole - set to true to give the dashboard broad permissions, which makes it convenient to operate across multiple namespaces
251 |
252 | ```
253 |
254 | ## 4. Install with helm
255 |
256 | 1. Install
257 | ```bash
258 | #Remove any previous release first
259 | helm delete kubernetes-dashboard
260 | helm del --purge kubernetes-dashboard
261 |
262 | #Install
263 | helm install stable/kubernetes-dashboard \
264 | -n kubernetes-dashboard \
265 | --namespace kube-system \
266 | -f kubernetes-dashboard.yaml
267 | ```
268 |
269 | 2. Check the pod
270 | ```bash
271 | kubectl get pods -n kube-system -o wide
272 | ```
273 |
274 | 3. View the details
275 | ```bash
276 | kubectl describe pod `kubectl get pod -A|grep dashboard|awk '{print $2}'` -n kube-system
277 | ```
278 |
279 | 4. Access
280 | ```bash
281 | #Get the login token
282 | kubectl describe -n kube-system secret/`kubectl -n kube-system get secret | grep kubernetes-dashboard-token|awk '{print $1}'`
283 |
284 | #Access the dashboard at
285 | https://k8s.test.com
286 | ```
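If k8s.test.com does not resolve publicly, point it at the edge node either via an /etc/hosts entry or just for a single request with curl --resolve; a sketch (192.168.56.11 is the edge node labelled earlier, -k because the certificate is self-signed):

```bash
# Resolve the dashboard hostname to the edge node for this request only
curl -k --resolve k8s.test.com:443:192.168.56.11 https://k8s.test.com/
```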
287 |
288 | References:
289 |
290 | https://www.cnblogs.com/hongdada/p/11395200.html image issues
291 |
292 | https://www.qikqiak.com/post/install-nginx-ingress/
293 |
294 | https://www.cnblogs.com/bugutian/p/11366556.html Installing K8S in China without a proxy, part 3: installing kubernetes-dashboard with helm
295 |
296 | https://www.cnblogs.com/hongdada/p/11284534.html Installing the Kubernetes dashboard with Helm
297 |
298 | https://www.cnblogs.com/chanix/p/11731388.html Helm - the package manager for K8S
299 |
300 | https://www.cnblogs.com/peitianwang/p/11649621.html
301 |
302 |
303 |
--------------------------------------------------------------------------------
/images/Dashboard-login.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/Dashboard-login.jpg
--------------------------------------------------------------------------------
/images/Dashboard.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/Dashboard.jpg
--------------------------------------------------------------------------------
/images/Ingress Controller01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/Ingress Controller01.png
--------------------------------------------------------------------------------
/images/Ingress-nginx.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/Ingress-nginx.png
--------------------------------------------------------------------------------
/images/K8S.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/K8S.png
--------------------------------------------------------------------------------
/images/calico_bgp_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_01.png
--------------------------------------------------------------------------------
/images/calico_bgp_02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_02.png
--------------------------------------------------------------------------------
/images/calico_bgp_03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_03.png
--------------------------------------------------------------------------------
/images/calico_bgp_04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_04.png
--------------------------------------------------------------------------------
/images/calico_bgp_05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_05.png
--------------------------------------------------------------------------------
/images/calico_bgp_06.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_06.png
--------------------------------------------------------------------------------
/images/calico_bgp_07.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_07.png
--------------------------------------------------------------------------------
/images/calico_bgp_08.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_08.png
--------------------------------------------------------------------------------
/images/calico_bgp_09.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_09.png
--------------------------------------------------------------------------------
/images/calico_bgp_10.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_10.png
--------------------------------------------------------------------------------
/images/calico_bgp_11.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_11.png
--------------------------------------------------------------------------------
/images/calico_bgp_12.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/calico_bgp_12.png
--------------------------------------------------------------------------------
/images/change network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/change network.png
--------------------------------------------------------------------------------
/images/change_ip_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/change_ip_01.png
--------------------------------------------------------------------------------
/images/change_ip_02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/change_ip_02.png
--------------------------------------------------------------------------------
/images/change_ip_05.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/change_ip_05.png
--------------------------------------------------------------------------------
/images/change_ip_06.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/change_ip_06.png
--------------------------------------------------------------------------------
/images/coredns-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/coredns-01.png
--------------------------------------------------------------------------------
/images/dynamic-pv.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/dynamic-pv.png
--------------------------------------------------------------------------------
/images/heapster-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/heapster-01.png
--------------------------------------------------------------------------------
/images/heapster-02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/heapster-02.png
--------------------------------------------------------------------------------
/images/ingress-k8s-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/ingress-k8s-01.png
--------------------------------------------------------------------------------
/images/ingress-k8s-02.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/ingress-k8s-02.png
--------------------------------------------------------------------------------
/images/ingress-k8s-03.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/ingress-k8s-03.png
--------------------------------------------------------------------------------
/images/ingress-k8s-04.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/ingress-k8s-04.png
--------------------------------------------------------------------------------
/images/install centos7.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/install centos7.png
--------------------------------------------------------------------------------
/images/k8s-soft.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/k8s-soft.jpg
--------------------------------------------------------------------------------
/images/k8s架构图.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/k8s架构图.jpg
--------------------------------------------------------------------------------
/images/kubeadm-ha.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/kubeadm-ha.jpg
--------------------------------------------------------------------------------
/images/kubernetes架构.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/kubernetes架构.jpg
--------------------------------------------------------------------------------
/images/pressure_calico_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/pressure_calico_01.png
--------------------------------------------------------------------------------
/images/pressure_flannel_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/pressure_flannel_01.png
--------------------------------------------------------------------------------
/images/pressure_physical_01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/pressure_physical_01.png
--------------------------------------------------------------------------------
/images/pv01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/pv01.png
--------------------------------------------------------------------------------
/images/rp_filter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/rp_filter.png
--------------------------------------------------------------------------------
/images/traefik-architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/traefik-architecture.png
--------------------------------------------------------------------------------
/images/virtualbox-network-eth0.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/virtualbox-network-eth0.jpg
--------------------------------------------------------------------------------
/images/virtualbox-network-eth1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/virtualbox-network-eth1.png
--------------------------------------------------------------------------------
/images/vmware-fusion-network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/vmware-fusion-network.png
--------------------------------------------------------------------------------
/images/vmware-network.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/vmware-network.png
--------------------------------------------------------------------------------
/images/wordpress-01.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/wordpress-01.png
--------------------------------------------------------------------------------
/images/安装流程.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Lancger/opsfull/5b36608dbe13d6260df97712dda76052ef1d4f2b/images/安装流程.png
--------------------------------------------------------------------------------
/kubeadm/Kubernetes 集群变更IP地址.md:
--------------------------------------------------------------------------------
1 | References:
2 |
3 | https://blog.csdn.net/whywhy0716/article/details/92658111 Changing the IP addresses of a Kubernetes cluster
4 |
--------------------------------------------------------------------------------
/kubeadm/k8s清理.md:
--------------------------------------------------------------------------------
1 | # 1. Clean up resources
2 | ```
3 | systemctl stop kubelet
4 | systemctl stop docker
5 |
6 | kubeadm reset
7 | #yum remove -y kubelet kubeadm kubectl --disableexcludes=kubernetes
8 |
9 | rm -rf /etc/kubernetes/
10 | rm -rf /root/.kube/
11 | rm -rf $HOME/.kube/
12 | rm -rf /var/lib/etcd/
13 | rm -rf /var/lib/cni/
14 | rm -rf /var/lib/kubelet/
15 | rm -rf /etc/cni/
16 | rm -rf /opt/cni/
17 |
18 | ifconfig cni0 down
19 | ifconfig flannel.1 down
20 | ifconfig docker0 down
21 | ip link delete cni0
22 | ip link delete flannel.1
23 |
24 | #docker rmi -f $(docker images -q)
25 | #docker rm -f `docker ps -a -q`
26 |
27 | #yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
28 | kubeadm version
29 | systemctl restart kubelet.service
30 | systemctl enable kubelet.service
31 | ```
32 |
33 | # 2. Re-initialize
34 | ```
35 | swapoff -a
36 | modprobe br_netfilter
37 | sysctl -p /etc/sysctl.d/k8s.conf
38 | chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
39 |
40 | kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x
41 | docker images |grep google_containers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x
42 | docker images |grep google_containers |awk '{print "docker rmi ", $1":"$2}' |sh -x
43 | docker pull coredns/coredns:1.3.1
44 | docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
45 | docker rmi coredns/coredns:1.3.1
46 |
47 | kubeadm init --kubernetes-version=v1.15.3 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.56.11 --apiserver-bind-port=6443
48 |
49 | #Get the command for joining the cluster
50 | kubeadm token create --print-join-command
51 | ```
52 |
53 | # 3. Node operations
54 | ```
55 | mkdir -p $HOME/.kube
56 | ```
57 |
58 | # 4. Master operations
59 | ```
60 | mkdir -p $HOME/.kube
61 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
62 | chown $(id -u):$(id -g) $HOME/.kube/config
63 |
64 | scp $HOME/.kube/config root@linux-node2:$HOME/.kube/config
65 | scp $HOME/.kube/config root@linux-node3:$HOME/.kube/config
66 | scp $HOME/.kube/config root@linux-node4:$HOME/.kube/config
67 | ```
68 |
69 | # 5. On both Master and Node nodes
70 | ```
71 | chown $(id -u):$(id -g) $HOME/.kube/config
72 |
73 | ```
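After the kubeconfig has been distributed, a quick sanity check (run on any node that received the config) that the rebuilt cluster is healthy:

```bash
# All nodes should report Ready once the network plugin is up
kubectl get nodes -o wide

# Control-plane components and the CNI pods should be Running
kubectl get pods -n kube-system
```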
74 |
75 |
76 | References:
77 |
78 | https://blog.51cto.com/wutengfei/2121202 Network errors in kubernetes
79 |
--------------------------------------------------------------------------------
/kubeadm/kubeadm.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: kubeadm.k8s.io/v1beta2
2 | bootstrapTokens:
3 | - groups:
4 | - system:bootstrappers:kubeadm:default-node-token
5 | token: abcdef.0123456789abcdef
6 | ttl: 24h0m0s
7 | usages:
8 | - signing
9 | - authentication
10 | kind: InitConfiguration
11 | localAPIEndpoint:
12 | advertiseAddress: 192.168.56.11
13 | bindPort: 6443
14 | nodeRegistration:
15 | criSocket: /var/run/dockershim.sock
16 | name: linux-node1.example.com
17 | taints:
18 | - effect: NoSchedule
19 | key: node-role.kubernetes.io/master
20 | ---
21 | apiServer:
22 | timeoutForControlPlane: 4m0s
23 | apiVersion: kubeadm.k8s.io/v1beta2
24 | certificatesDir: /etc/kubernetes/pki
25 | clusterName: kubernetes
26 | controllerManager: {}
27 | dns:
28 | type: CoreDNS
29 | etcd:
30 | local:
31 | dataDir: /var/lib/etcd
32 | imageRepository: k8s.gcr.io
33 | kind: ClusterConfiguration
34 | kubernetesVersion: v1.15.0
35 | networking:
36 | dnsDomain: cluster.local
37 | podSubnet: 172.168.0.0/16
38 | serviceSubnet: 10.96.0.0/12
39 | scheduler: {}
40 | ---
41 | apiVersion: kubeproxy.config.k8s.io/v1alpha1
42 | kind: KubeProxyConfiguration
43 | mode: ipvs # kube-proxy mode
44 |
--------------------------------------------------------------------------------
/kubeadm/kubeadm初始化k8s集群延长证书过期时间.md:
--------------------------------------------------------------------------------
1 | # 1. Overview
2 |
3 | When kubeadm initializes a k8s cluster, the CA certificate it issues is valid for 10 years by default, while the apiserver certificate is only valid for 1 year. Once it expires, requests to the apiserver fail; use the openssl command to check whether the certificates have expired.
4 | The renewal method below applies to kubernetes 1.14, 1.15, 1.16, 1.17 and 1.18
5 |
6 | # 2. Check the certificate validity
7 | ```bash
8 | openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
9 |
10 | The output below shows the ca certificate is valid for 10 years, from 2020 to 2030:
11 | Not Before: Apr 22 04:09:07 2020 GMT
12 | Not After : Apr 20 04:09:07 2030 GMT
13 |
14 | openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
15 |
16 | The output below shows the apiserver certificate is valid for 1 year, from 2020 to 2021:
17 | Not Before: Apr 22 04:09:07 2020 GMT
18 | Not After : Apr 22 04:09:07 2021 GMT
19 | ```
20 |
21 | # 3. Extend the certificate validity
22 |
23 | ```bash
24 | 1. Upload the update-kubeadm-cert.sh file to the master1, master2 and master3 nodes
25 | The github address of the update-kubeadm-cert.sh file is:
26 | https://github.com/luckylucky421/kubernetes1.17.3
27 | Clone or download update-kubeadm-cert.sh and copy it to master1, master2 and master3
28 |
29 | 2. Run the following commands on every master node
30 | 1) Make update-kubeadm-cert.sh executable
31 | chmod +x update-kubeadm-cert.sh
32 |
33 | 2) Run the command below to renew the certificates and extend their validity to 10 years
34 | ./update-kubeadm-cert.sh all
35 |
36 | 3) On master1, check that Pods can still be queried; if data comes back, the new certificates are in effect
37 | kubectl get pods -n kube-system
38 |
39 | The output shows pod information, so the certificate renewal worked:
40 | ......
41 | calico-node-b5ks5 1/1 Running 0 157m
42 | calico-node-r6bfr 1/1 Running 0 155m
43 | calico-node-r8qzv 1/1 Running 0 7h1m
44 | coredns-66bff467f8-5vk2q 1/1 Running 0 7h30m
45 | ......
46 | ```
47 |
48 | # 4. Verify the validity has been extended to 10 years
49 |
50 | ```bash
51 | openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -text |grep Not
52 | The ca certificate is valid for 10 years, from 2020 to 2030:
53 | Not Before: Apr 22 04:09:07 2020 GMT
54 | Not After : Apr 20 04:09:07 2030 GMT
55 | openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep Not
56 | The apiserver certificate is now valid for 10 years, from 2020 to 2030:
57 | Not Before: Apr 22 11:15:53 2020 GMT
58 | Not After : Apr 20 11:15:53 2030 GMT
59 | openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep Not
60 | The etcd client certificate is now valid for 10 years, from 2020 to 2030:
61 | Not Before: Apr 22 11:32:24 2020 GMT
62 | Not After : Apr 20 11:32:24 2030 GMT
63 | openssl x509 -in /etc/kubernetes/pki/front-proxy-ca.crt -noout -text |grep Not
64 | The front-proxy certificate is now valid for 10 years, from 2020 to 2030:
65 | Not Before: Apr 22 04:09:08 2020 GMT
66 | Not After : Apr 20 04:09:08 2030 GMT
67 | ```
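A convenience sketch for checking the expiry of every certificate under /etc/kubernetes/pki in one pass, instead of running openssl per file:

```bash
# Print the expiry date of every kubeadm-managed certificate (including etcd's)
for crt in /etc/kubernetes/pki/*.crt /etc/kubernetes/pki/etcd/*.crt; do
  echo "== $crt"
  openssl x509 -in "$crt" -noout -enddate
done
```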
68 |
69 | References:
70 |
71 | https://mp.weixin.qq.com/s/N7WRT0OkyJHec35BH_X1Hg Extending certificate expiry for a kubeadm-initialized k8s cluster
72 |
--------------------------------------------------------------------------------
/kubeadm/kubeadm无法下载镜像问题.md:
--------------------------------------------------------------------------------
1 | 0. About the kubeadm images
2 | ```
3 | kubeadm is the cluster installation tool for kubernetes; it can set up a kubernetes cluster quickly.
4 | kubeadm init pulls its docker images from the k8s.gcr.io registry by default, which cannot be reached directly from mainland China, so a workaround is needed.
5 | ```
6 |
7 | 1. First, check which images are needed
8 | ```
9 | kubeadm config images list
10 | #The output is as follows
11 |
12 | k8s.gcr.io/kube-apiserver:v1.15.3
13 | k8s.gcr.io/kube-controller-manager:v1.15.3
14 | k8s.gcr.io/kube-scheduler:v1.15.3
15 | k8s.gcr.io/kube-proxy:v1.15.3
16 | k8s.gcr.io/pause:3.1
17 | k8s.gcr.io/etcd:3.3.10
18 | k8s.gcr.io/coredns:1.3.1
19 |
20 | We relay the images through docker.io/mirrorgooglecontainers
21 | ```
22 |
23 | 2. Batch download and re-tag
24 |
25 | ```
26 | #Relay the images via docker.io/mirrorgooglecontainers
27 |
28 | kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#docker.io/mirrorgooglecontainers#g' |sh -x
29 | docker images |grep mirrorgooglecontainers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#mirrorgooglecontainers#k8s.gcr.io#2' |sh -x
30 | docker images |grep mirrorgooglecontainers |awk '{print "docker rmi ", $1":"$2}' |sh -x
31 | docker pull coredns/coredns:1.3.1
32 | docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
33 | docker rmi coredns/coredns:1.3.1
34 |
35 | Note: coredns is not included in docker.io/mirrorgooglecontainers, so it has to be re-tagged manually from the official coredns image.
36 |
37 |
38 | #Relay the images via the Aliyun mirror
39 |
40 | kubeadm config images list |sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g' |sh -x
41 | docker images |grep google_containers |awk '{print "docker tag ",$1":"$2,$1":"$2}' |sed -e 's#registry.cn-hangzhou.aliyuncs.com/google_containers#k8s.gcr.io#2' |sh -x
42 | docker images |grep google_containers |awk '{print "docker rmi ", $1":"$2}' |sh -x
43 | docker pull coredns/coredns:1.3.1
44 | docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
45 | docker rmi coredns/coredns:1.3.1
46 | ```
47 |
48 | 3. Check the image list
49 | ```
50 | docker images
51 |
52 | REPOSITORY TAG IMAGE ID CREATED SIZE
53 | k8s.gcr.io/kube-proxy v1.15.3 232b5c793146 2 weeks ago 82.4MB
54 | k8s.gcr.io/kube-scheduler v1.15.3 703f9c69a5d5 2 weeks ago 81.1MB
55 | k8s.gcr.io/kube-controller-manager v1.15.3 e77c31de5547 2 weeks ago 159MB
56 | k8s.gcr.io/coredns 1.3.1 eb516548c180 7 months ago 40.3MB
57 | k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 9 months ago 258MB
58 | k8s.gcr.io/pause 3.1 da86e6ba6ca1 20 months ago 742kB
59 |
60 |
61 | docker rmi -f $(docker images -q)
62 | docker rm -f `docker ps -a -q`
63 | ```
64 |
65 | References:
66 |
67 | https://cloud.tencent.com/info/6db42438f5dd7842bcecb6baf61833aa.html kubeadm cannot download images
68 |
69 | https://juejin.im/post/5b8a4536e51d4538c545645c Deploying Kubernetes with kubeadm (in China)
70 |
--------------------------------------------------------------------------------
/manual/README.md:
--------------------------------------------------------------------------------
1 | # Kernel upgrade
2 | ```
3 | # Import the public key
4 | rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
5 |
6 | # Install ELRepo
7 | rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
8 |
9 | # Load the elrepo-kernel metadata
10 | yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
11 |
12 | # List the available rpm packages
13 | yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
14 |
15 | # Install the long-term-support kernel
16 | yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
17 |
18 | # Remove the old kernel tool packages
19 | yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
20 |
21 | # Install the new kernel tool packages
22 | yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
23 |
24 | #Check the default boot order
25 | awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
26 | CentOS Linux (4.4.183-1.el7.elrepo.x86_64) 7 (Core)
27 | CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
28 | CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)
29 |
30 | #Boot entries are numbered from 0 and the new kernel is inserted at the top (it sits at position 0, the old 3.10 kernel at 1), so select 0.
31 | grub2-set-default 0
32 |
33 | #Reboot and check
34 | reboot
35 | ```
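After the reboot, a quick check that the node actually booted into the new kernel:

```bash
# Should print the elrepo long-term kernel version, e.g. 4.4.x-1.el7.elrepo.x86_64
uname -r
```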
36 |
37 | References
38 |
39 | https://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md
40 |
--------------------------------------------------------------------------------
/manual/v1.14/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/manual/v1.15.3/README.md:
--------------------------------------------------------------------------------
1 | # 1. Kubernetes 1.15 binary cluster installation
2 |
3 | This series describes how to deploy a Kubernetes v1.15.3 cluster entirely from binaries, rather than with the automated (kubeadm) approach. The startup parameters of each component and the related configuration are listed in detail along the way. After working through the document you should understand how the k8s components interact and be able to troubleshoot real problems quickly.
4 |
5 | ## 1.1 Component versions
6 |
7 | ```
8 | Kubernetes 1.15.3
9 | Docker 18.09 (docker is installed with the official script; it may later be upgraded to a newer version, which does not affect anything)
10 | Etcd 3.3.13
11 | Flanneld 0.11.0
12 | ```
13 |
14 | ## 1.2 Component notes
15 |
16 | ### kube-apiserver
17 |
18 | ```
19 | High availability via a node-local Nginx layer-4 transparent proxy (haproxy also works; it only proxies the apiserver)
20 | The insecure port 8080 and anonymous access are disabled
21 | Https requests are accepted on the secure port 6443
22 | Strict authentication and authorization policies (x509, token, rbac)
23 | Bootstrap token authentication is enabled to support kubelet TLS bootstrapping;
24 | kubelet and etcd are accessed over https
25 | ```
26 |
27 | ### kube-controller-manager
28 | ```
29 | 3-node high availability (some k8s components use leader election, so an odd number of nodes is used for the HA setup)
30 | The insecure port is disabled; port 10252 accepts https requests
31 | A kubeconfig is used to access the apiserver's secure port
32 | Kubelet certificate signing requests (CSRs) are approved, and certificates rotate automatically after expiry
33 | Each controller accesses the apiserver with its own ServiceAccount
34 | ```
35 | ### kube-scheduler
36 | ```
37 | 3-node high availability;
38 | A kubeconfig is used to access the apiserver's secure port
39 | ```
40 | ### kubelet
41 | ```
42 | Bootstrap tokens are created dynamically with kubeadm
43 | Client and server certificates are generated automatically through TLS bootstrapping and rotated after expiry
44 | Main parameters are configured in a KubeletConfiguration-type JSON file
45 | The read-only port is disabled; https requests are accepted on the secure port 10250 with authentication and authorization, rejecting anonymous and unauthorized access
46 | A kubeconfig is used to access the apiserver's secure port
47 | ```
48 | ### kube-proxy
49 | ```
50 | A kubeconfig is used to access the apiserver's secure port
51 | Main parameters are configured in a KubeProxyConfiguration-type JSON file
52 | The ipvs proxy mode is used
53 | ```
54 | ### Cluster add-ons
55 | ```
56 | DNS: coredns is used for its better features and performance
57 | Network: Flanneld is used as the cluster network plugin
58 | ```
59 |
60 | # 2. Initializing the environment
61 |
62 | ## 1.1 Cluster machines
63 | ```
64 | #master nodes
65 | 192.168.0.50 k8s-01
66 | 192.168.0.51 k8s-02
67 | 192.168.0.52 k8s-03
68 |
69 | #worker node
70 | 192.168.0.53 k8s-04 #this node only runs as a worker, but its IP still has to be included when the certificates are created
71 | ```
72 | The etcd cluster, master cluster and worker nodes in this document all use the three machines above, and the initialization steps must be run on every machine. Unless stated otherwise, all operations are performed on 192.168.0.50
73 |
74 | The worker node has its own steps later, but this initialization applies to all cluster machines, including the worker node
75 |
76 | ## 1.2 Set the hostname
77 |
78 | Set a permanent hostname on every machine
79 |
80 | ```
81 | hostnamectl set-hostname abcdocker-k8s01 #adjust on every machine as required
82 | bash #reload the hostname
83 | ```
84 | Next we need to add hosts entries on all machines
85 | ```
86 | cat >> /etc/hosts <>/etc/profile
139 | [root@abcdocker-k8s01 ~]# source /etc/profile
140 | [root@abcdocker-k8s01 ~]# env|grep PATH
141 | PATH=/opt/k8s/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
142 | ```
143 |
144 | ## 1.4 Install dependency packages
145 |
146 | Install the dependency packages on every server
147 |
148 | ```
149 | yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
150 | ```
151 |
152 | Disable the firewall, SELinux and the swap partition
153 |
154 | ```
155 | systemctl stop firewalld
156 | systemctl disable firewalld
157 | iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
158 | iptables -P FORWARD ACCEPT
159 | swapoff -a
160 | sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
161 | setenforce 0
162 | sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
163 |
164 | #If the swap partition is enabled, kubelet fails to start (this can be overridden by setting the --fail-swap-on parameter to false)
165 | ```
166 |
167 | Upgrade the kernel
168 |
169 | ```
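# Sketch: the ELRepo-based kernel upgrade, following the steps in manual/README.md of this repo
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
grub2-set-default 0
reboot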
170 |
171 | ```
172 |
173 |
174 |
175 |
176 | References
177 |
178 | https://i4t.com/4253.html Kubernetes 1.14 binary cluster installation
179 |
180 | https://github.com/kubernetes/kubernetes/releases/tag/v1.15.3 download link
181 |
--------------------------------------------------------------------------------
/mysql/README.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/mysql/kubernetes访问外部mysql服务.md:
--------------------------------------------------------------------------------
1 | Table of Contents
2 | =================
3 |
4 | * [Table of Contents](#table-of-contents)
5 | * [1. Create the endpoints](#1-create-the-endpoints)
6 | * [2. Create the service](#2-create-the-service)
7 | * [3. Merge the files](#3-merge-the-files)
8 | * [4. Run a centos7 base image](#4-run-a-centos7-base-image)
9 | * [5. Test the database connection](#5-test-the-database-connection)
10 |
11 | `The best way for k8s to access a standalone service outside the cluster is the Endpoints approach (think of it as abstracting an external service into an in-cluster one); the mysql service is used as the example`
12 |
13 | # 1. Create the endpoints
14 |
15 | (The annotated steps are best run one at a time - this tripped me up for quite a while - or you can use the merged files below and apply everything in one step)
16 |
17 | ```bash
18 | # Delete the mysql-endpoints
19 | kubectl delete -f mysql-endpoints.yaml -n mos-namespace
20 |
21 | # Create mysql-endpoints.yaml
22 | cat >mysql-endpoints.yaml<<\EOF
23 | apiVersion: v1
24 | kind: Endpoints
25 | metadata:
26 | name: mysql-production
27 | subsets:
28 | - addresses:
29 | - ip: 10.198.1.155 #-note: the database on this server must allow remote access
30 | ports:
31 | - port: 3306
32 | protocol: TCP
33 | EOF
34 |
35 | # Apply the mysql-endpoints
36 | kubectl apply -f mysql-endpoints.yaml -n mos-namespace
37 |
38 | # View the mysql-endpoints
39 | kubectl get endpoints mysql-production -n mos-namespace
40 |
41 | # View the mysql-endpoints details
42 | kubectl describe endpoints mysql-production -n mos-namespace
43 |
44 | # Probe whether the service is reachable
45 | nc -zv 10.198.1.155 3306
46 | ```
47 |
48 | # 2. Create the service
49 | ```bash
50 | # Delete the mysql-service
51 | kubectl delete -f mysql-service.yaml -n mos-namespace
52 |
53 | # Write mysql-service.yaml
54 | cat >mysql-service.yaml<<\EOF
55 | apiVersion: v1
56 | kind: Service
57 | metadata:
58 | name: mysql-production
59 | spec:
60 | ports:
61 | - port: 3306
62 | protocol: TCP
63 | EOF
64 |
65 | # Apply the mysql-service
66 | kubectl apply -f mysql-service.yaml -n mos-namespace
67 |
68 | # View the mysql-service
69 | kubectl get svc mysql-production -n mos-namespace
70 |
71 | # View the mysql-service details
72 | kubectl describe svc mysql-production -n mos-namespace
73 |
74 | # Verify connectivity to the service ip
75 | nc -zv `kubectl get svc mysql-production -n mos-namespace|grep mysql-production|awk '{print $3}'` 3306
76 | ```
77 |
78 | # 3. Merge the files
79 |
80 | `Note: an Endpoints object can carry labels, but the Service must not select it with a label selector - simply leave out selector: name: mysql-endpoints, otherwise the Service cannot find the endpoints`
81 |
82 | ```
83 | cat << EOF > mysql-service-new.yaml
84 | apiVersion: v1
85 | kind: Service
86 | metadata:
87 | name: mysql-production
88 | spec:
89 | #selector: ---do not use a label selector here; leave it out
90 | # name: mysql-endpoints
91 | ports:
92 | - port: 3306
93 | protocol: TCP
94 | EOF
95 | ```
96 | Complete files
97 | ```bash
98 | kubectl delete -f mysql-endpoints-new.yaml -n mos-namespace
99 | kubectl delete -f mysql-service-new.yaml -n mos-namespace
100 |
101 | cat << EOF > mysql-endpoints-new.yaml
102 | apiVersion: v1
103 | kind: Endpoints
104 | metadata:
105 | name: mysql-production
106 | labels:
107 | name: mysql-endpoints
108 | subsets:
109 | - addresses:
110 | - ip: 10.198.1.155
111 | ports:
112 | - port: 3306
113 | protocol: TCP
114 | EOF
115 |
116 | cat << EOF > mysql-service-new.yaml
117 | apiVersion: v1
118 | kind: Service
119 | metadata:
120 | name: mysql-production
121 | spec:
122 | ports:
123 | - port: 3306
124 | protocol: TCP
125 | EOF
126 |
127 | kubectl apply -f mysql-endpoints-new.yaml -n mos-namespace
128 | kubectl apply -f mysql-service-new.yaml -n mos-namespace
129 |
130 | nc -zv `kubectl get svc mysql-production -n mos-namespace|grep mysql-production|awk '{print $3}'` 3306
131 | ```
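Because the Service has no selector, it only works if the manually created Endpoints object with the same name is attached to it; a quick check (same namespace as above):

```bash
# The Endpoints field should show 10.198.1.155:3306 rather than <none>
kubectl describe svc mysql-production -n mos-namespace | grep -i endpoints
```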
132 |
133 | # 4. Run a centos7 base image
134 | ```bash
135 | # View the pod resources in mos-namespace
136 | kubectl get pods -n mos-namespace
137 |
138 | # Clean up the deployment created from the command line
139 | kubectl delete deployment centos7-app -n mos-namespace
140 |
141 | # Run a centos7 bash base container from the command line
142 | #kubectl run --rm --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace
143 | kubectl run --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace
144 |
145 | # Install the mysql client
146 | yum install vim net-tools telnet nc -y
147 | yum install -y mariadb.x86_64 mariadb-libs.x86_64
148 | ```
149 |
150 | # 5. Test the database connection
151 |
152 | ```bash
153 | # Enter the container
154 | kubectl exec `kubectl get pods -n mos-namespace|grep centos7-app|awk '{print $1}'` -it /bin/bash -n mos-namespace
155 |
156 | # Check network connectivity
157 | ping mysql-production
158 |
159 | # Test whether the mysql service port is reachable
160 | nc -zv mysql-production 3306
161 |
162 | # Connection test
163 | mysql -h'mysql-production' -u'root' -p'password'
164 | ```
165 |
166 | References:
167 |
168 | https://blog.csdn.net/hxpjava1/article/details/80040407 Accessing external mysql/redis services from kubernetes
169 |
170 | https://blog.csdn.net/liyingke112/article/details/76204038
171 |
172 | https://blog.csdn.net/ybt_c_index/article/details/80881157 istio 0.8: accessing external services (e.g. RDS) with ServiceEntry
173 |
--------------------------------------------------------------------------------
/redis/K8s上Redis集群动态扩容.md:
--------------------------------------------------------------------------------
1 | References:
2 |
3 | http://redisdoc.com/topic/cluster-tutorial.html#id10 Redis command reference
4 |
5 | https://cloud.tencent.com/developer/article/1392872
6 |
--------------------------------------------------------------------------------
/redis/K8s上运行Redis单实例.md:
--------------------------------------------------------------------------------
1 | Table of Contents
2 | =================
3 |
4 | * [1. Create the namespace](#1-create-the-namespace)
5 | * [2. Create a configmap](#2-create-a-configmap)
6 | * [3. Create the redis container](#3-create-the-redis-container)
7 | * [4. Create the redis-service](#4-create-the-redis-service)
8 | * [5. Verify the redis instance](#5-verify-the-redis-instance)
9 |
10 | # 1. Create the namespace
11 | ```bash
12 | # Clean up the namespace
13 | kubectl delete -f mos-namespace.yaml
14 |
15 | # Create a dedicated namespace
16 | cat > mos-namespace.yaml <<\EOF
17 | ---
18 | apiVersion: v1
19 | kind: Namespace
20 | metadata:
21 | name: mos-namespace
22 | EOF
23 |
24 | kubectl apply -f mos-namespace.yaml
25 |
26 | # View namespaces
27 | kubectl get namespace -A
28 | ```
29 |
30 | # 2. Create a configmap
31 |
32 | ```bash
33 | mkdir config && cd config
34 |
35 | # Clean up the configmap
36 | kubectl delete configmap redis-conf -n mos-namespace
37 |
38 | # Create the redis configuration file
39 | cat >redis.conf <<\EOF
40 | #daemonize yes
41 | pidfile /data/redis.pid
42 | port 6379
43 | tcp-backlog 30000
44 | timeout 0
45 | tcp-keepalive 10
46 | loglevel notice
47 | logfile /data/redis.log
48 | databases 16
49 | #save 900 1
50 | #save 300 10
51 | #save 60 10000
52 | stop-writes-on-bgsave-error no
53 | rdbcompression yes
54 | rdbchecksum yes
55 | dbfilename dump.rdb
56 | dir /data
57 | slave-serve-stale-data yes
58 | slave-read-only yes
59 | repl-diskless-sync no
60 | repl-diskless-sync-delay 5
61 | repl-disable-tcp-nodelay no
62 | slave-priority 100
63 | requirepass redispassword
64 | maxclients 30000
65 | appendonly no
66 | appendfilename "appendonly.aof"
67 | appendfsync everysec
68 | no-appendfsync-on-rewrite no
69 | auto-aof-rewrite-percentage 100
70 | auto-aof-rewrite-min-size 64mb
71 | aof-load-truncated yes
72 | lua-time-limit 5000
73 | slowlog-log-slower-than 10000
74 | slowlog-max-len 128
75 | latency-monitor-threshold 0
76 | notify-keyspace-events KEA
77 | hash-max-ziplist-entries 512
78 | hash-max-ziplist-value 64
79 | list-max-ziplist-entries 512
80 | list-max-ziplist-value 64
81 | set-max-intset-entries 1000
82 | zset-max-ziplist-entries 128
83 | zset-max-ziplist-value 64
84 | hll-sparse-max-bytes 3000
85 | activerehashing yes
86 | client-output-buffer-limit normal 0 0 0
87 | client-output-buffer-limit slave 256mb 64mb 60
88 | client-output-buffer-limit pubsub 32mb 8mb 60
89 | hz 10
90 | EOF
91 |
92 | # Create the configmap in mos-namespace
93 | kubectl create configmap redis-conf --from-file=redis.conf -n mos-namespace
94 | ```
95 |
96 | # 3. Create the redis container
97 | ```bash
98 | # Clean up the pod
99 | kubectl delete -f mos_redis.yaml
100 |
101 | cat > mos_redis.yaml <<\EOF
102 | apiVersion: apps/v1
103 | kind: Deployment
104 | metadata:
105 | name: mos-redis
106 | namespace: mos-namespace
107 | spec:
108 | selector:
109 | matchLabels:
110 | name: mos-redis
111 | replicas: 1
112 | template:
113 | metadata:
114 | labels:
115 | name: mos-redis
116 | spec:
117 | containers:
118 | - name: mos-redis
119 | image: redis
120 | volumeMounts:
121 | - name: mos
122 | mountPath: "/usr/local/etc"
123 | command:
124 | - "redis-server"
125 | args:
126 | - "/usr/local/etc/redis/redis.conf"
127 | volumes:
128 | - name: mos
129 | configMap:
130 | name: redis-conf
131 | items:
132 | - key: redis.conf
133 | path: redis/redis.conf
134 | EOF
135 |
136 | # Create and view the pod
137 | kubectl apply -f mos_redis.yaml
138 | kubectl get pods -n mos-namespace
139 |
140 | # Note: the configMap is mounted at /usr/local/etc/redis/redis.conf, determined jointly by mountPath and the path under configMap
141 | ```
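To confirm the ConfigMap really ends up where the container expects it, a quick check (the pod name is resolved the same way as elsewhere in this doc):

```bash
# The redis.conf rendered from the ConfigMap should be listed here
kubectl exec `kubectl get pods -n mos-namespace|grep redis|awk '{print $1}'` -n mos-namespace -- ls -l /usr/local/etc/redis/
```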
142 |
143 | # 4. Create the redis-service
144 |
145 | ```bash
146 | # Delete the service
147 | kubectl delete -f redis-service.yaml -n mos-namespace
148 |
149 | # Write redis-service.yaml
150 | cat >redis-service.yaml<<\EOF
151 | apiVersion: v1
152 | kind: Service
153 | metadata:
154 | name: redis-production
155 | namespace: mos-namespace
156 | spec:
157 | selector:
158 | name: mos-redis
159 | ports:
160 | - port: 6379
161 | protocol: TCP
162 | EOF
163 |
164 | # Apply the service
165 | kubectl apply -f redis-service.yaml -n mos-namespace
166 |
167 | # View the service
168 | kubectl get svc redis-production -n mos-namespace
169 |
170 | # View the service details
171 | kubectl describe svc redis-production -n mos-namespace
172 | ```
173 |
174 |
175 | # 5. Verify the redis instance
176 |
177 | 1. Verify directly inside the pod
178 |
179 | ```bash
180 | # Enter the container
181 | kubectl exec -it `kubectl get pods -n mos-namespace|grep redis|awk '{print $1}'` /bin/bash -n mos-namespace
182 |
183 | redis-cli -h 127.0.0.1 -a redispassword
184 | # 127.0.0.1:6379> set a b
185 | # 127.0.0.1:6379> get a
186 | "b"
187 |
188 | # View the log (the config file writes the log to /data/redis.log inside the container)
189 | kubectl exec -it `kubectl get pods -n mos-namespace|grep redis|awk '{print $1}'` /bin/bash -n mos-namespace
190 |
191 | $ tail -100f /data/redis.log
192 | 1:C 14 Nov 2019 06:46:13.476 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
193 | 1:C 14 Nov 2019 06:46:13.476 # Redis version=5.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
194 | 1:C 14 Nov 2019 06:46:13.476 # Configuration loaded
195 | 1:M 14 Nov 2019 06:46:13.478 * Running mode=standalone, port=6379.
196 | 1:M 14 Nov 2019 06:46:13.478 # WARNING: The TCP backlog setting of 30000 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
197 | 1:M 14 Nov 2019 06:46:13.478 # Server initialized
198 | 1:M 14 Nov 2019 06:46:13.478 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
199 | 1:M 14 Nov 2019 06:46:13.478 * Ready to accept connections
200 | ```
201 |
202 | 2. Verify through the exposed service
203 |
204 | ```bash
205 | # Run a centos7 bash base container from the command line
206 | kubectl run --image=centos:7.2.1511 centos7-app -it --port=8080 --replicas=1 -n mos-namespace
207 |
208 | # Verify via the service
209 | kubectl exec `kubectl get pods -n mos-namespace|grep centos7-app|awk '{print $1}'` -it /bin/bash -n mos-namespace
210 |
211 | yum install -y epel-release
212 | yum install -y redis
213 |
214 | redis-cli -h redis-production -a redispassword
215 | ```
216 |
217 | References:
218 |
219 | https://www.cnblogs.com/klvchen/p/10862607.html
220 |
--------------------------------------------------------------------------------
/redis/README.md:
--------------------------------------------------------------------------------
1 | References:
2 |
3 | https://mp.weixin.qq.com/s/noVUEO5tbdcdx8AzYNrsMw Testing a Redis Cluster on Kubernetes with a StatefulSet
4 |
--------------------------------------------------------------------------------
/rke/README.md:
--------------------------------------------------------------------------------
1 | # 1. Basic configuration tuning
2 | ```
3 | chattr -i /etc/passwd* && chattr -i /etc/group* && chattr -i /etc/shadow* && chattr -i /etc/gshadow*
4 | groupadd docker
5 | useradd -g docker docker
6 | echo "1Qaz2Wsx3Edc" | passwd --stdin docker
usermod -G docker docker   # note: the user's supplementary group must be docker, otherwise later steps will fail
8 |
9 | setenforce 0
10 | sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # 关闭selinux
11 | systemctl daemon-reload
12 | systemctl stop firewalld.service && systemctl disable firewalld.service # 关闭防火墙
13 | #echo 'LANG="en_US.UTF-8"' >> /etc/profile; source /etc/profile # 修改系统语言
14 | ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime # 修改时区(如果需要修改)
15 |
# Kernel parameters required by kubernetes
cat > /etc/sysctl.d/k8s.conf <<EOF
26 | net.ipv4.ip_forward=1
27 | net.bridge.bridge-nf-call-ip6tables = 1
28 | net.bridge.bridge-nf-call-iptables = 1
29 | vm.swappiness=0
30 | EOF
31 | sysctl --system
32 |
33 | #docker用户免密登录
34 | mkdir -p /home/docker/.ssh/
35 | chmod 700 /home/docker/.ssh/
36 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7bRm20od1b3rzW3ZPLB5NZn3jQesvfiz2p0WlfcYJrFHfF5Ap0ubIBUSQpVNLn94u8ABGBLboZL8Pjo+rXQPkIcObJxoKS8gz6ZOxcxJhl11JKxTz7s49nNYaNDIwB13KaNpvBEHVoW3frUnP+RnIKIIDsr1QCr9t64D9TE99mbNkEvDXr021UQi12Bf4KP/8gfYK3hDMRuX634/K8yu7+IaO1vEPNT8HDo9XGcvrOD1QGV+is8mrU53Xa2qTsto7AOb2J8M6n1mSZxgNz2oGc6ZDuN1iMBfHm4O/s5VEgbttzB2PtI0meKeaLt8VaqwTth631EN1ryjRYUuav7bf docker@k8s-master-01' > /home/docker/.ssh/authorized_keys
37 | chmod 400 /home/docker/.ssh/authorized_keys
38 | ```
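
The authorized_keys entry above is the public key of the control node; a sketch of how it is typically generated and distributed (host names are placeholders), including the ownership fix needed because the steps above run as root:

```
# On the control node, as the docker user: generate a key pair once
ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa

# Push the public key to each node (repeat per node or loop over a host list)
ssh-copy-id docker@<node-ip>

# On every node: the .ssh directory was created by root above, so hand it to the docker user
chown -R docker:docker /home/docker/.ssh
```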
39 |
40 | ## 二、基础环境准备
41 |
42 | ```
43 | mkdir -p /etc/yum.repos.d_bak/
44 | mv /etc/yum.repos.d/* /etc/yum.repos.d_bak/
45 | curl http://mirrors.aliyun.com/repo/Centos-7.repo >/etc/yum.repos.d/Centos-7.repo
46 | curl http://mirrors.aliyun.com/repo/epel-7.repo >/etc/yum.repos.d/epel-7.repo
47 | sed -i '/aliyuncs/d' /etc/yum.repos.d/Centos-7.repo
48 | yum clean all && yum makecache fast
49 |
50 | yum -y install yum-utils
51 | yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
52 | yum install -y device-mapper-persistent-data lvm2
53 |
54 | yum install docker-ce -y
55 |
56 | #从docker1.13版本开始,docker会自动设置iptables的FORWARD默认策略为DROP,所以需要修改docker的启动配置文件/usr/lib/systemd/system/docker.service
57 |
58 | cat > /usr/lib/systemd/system/docker.service << \EOF
59 | [Unit]
60 | Description=Docker Application Container Engine
61 | Documentation=https://docs.docker.com
62 | BindsTo=containerd.service
63 | After=network-online.target firewalld.service containerd.service
64 | Wants=network-online.target
65 | Requires=docker.socket
66 | [Service]
67 | Type=notify
68 | ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
69 | ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
71 | TimeoutSec=0
72 | RestartSec=2
73 | Restart=always
74 | StartLimitBurst=3
75 | StartLimitInterval=60s
76 | LimitNOFILE=infinity
77 | LimitNPROC=infinity
78 | LimitCORE=infinity
79 | TasksMax=infinity
80 | Delegate=yes
81 | KillMode=process
82 | [Install]
83 | WantedBy=multi-user.target
84 | EOF
85 |
86 | #设置加速器
87 | curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://41935bf4.m.daocloud.io
88 | #这个脚本在centos 7上有个bug,脚本会改变docker的配置文件/etc/docker/daemon.json但修改的时候多了一个逗号,导致docker无法启动
89 |
90 | #或者直接执行这个指令
91 | tee /etc/docker/daemon.json <<-'EOF'
92 | {
93 | "registry-mirrors": ["https://1z45x7d0.mirror.aliyuncs.com"],
94 | "insecure-registries": ["192.168.56.11:5000"],
95 | "storage-driver": "overlay2",
96 | "log-driver": "json-file",
97 | "log-opts": {
98 | "max-size": "100m",
99 | "max-file": "3"
100 | }
101 | }
102 | EOF
103 | systemctl daemon-reload
104 | systemctl restart docker
105 |
# Check that the registry mirror is in effect
docker info
Registry Mirrors:
 https://1z45x7d0.mirror.aliyuncs.com/    # the mirror setting has taken effect
110 | Live Restore Enabled: false
111 | ```
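
After the restart it is worth confirming that the ExecStartPost rule reset the FORWARD policy and that docker will come up on boot:

```
systemctl enable docker

# The first line should read "-P FORWARD ACCEPT"; if it still shows DROP,
# cross-node pod traffic will be dropped silently
iptables -S FORWARD | head -1
```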
112 |
113 | ## 三、RKE安装
114 |
To install with RKE, Docker must already be installed and password-less (SSH key) login configured for both root and the regular user.
116 |
117 | 1、下载RKE
118 | ```
# Download the binary from https://github.com/rancher/rke/releases (v0.2.8 is used here) and upload it to any node

wget https://github.com/rancher/rke/releases/download/v0.2.8/rke_linux-amd64
chmod +x rke_linux-amd64
123 | mv rke_linux-amd64 /usr/local/bin/rke
124 | ```
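
A quick sanity check that the binary is on PATH and runs:

```
rke --version
```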
125 |
126 | 2、创建集群配置文件
127 | ```
# The cluster definition (nodes, roles, services, network plugin, etc.) goes into /tmp/cluster.yml;
# the complete file used for this cluster is included in this repo as rke/cluster.yml
212 | ```
213 |
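Bringing the cluster up and pointing kubectl at it can be sketched as follows; the paths assume RKE's defaults (the admin kubeconfig `kube_config_cluster.yml` is written next to the cluster file):

```
# Provision the cluster described in cluster.yml (run as a user that can SSH to every node)
rke up --config /tmp/cluster.yml

# Use the kubeconfig generated by RKE as the default for kubectl
echo 'export KUBECONFIG=/tmp/kube_config_cluster.yml' >> ~/.bashrc
source ~/.bashrc
kubectl get nodes
```
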
214 | # 四、helm将rancher部署在k8s集群
215 |
216 | 1、安装并配置helm客户端
217 | ```
218 | #使用官方提供的脚本一键安装
219 | curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
220 | chmod 700 get_helm.sh
221 | ./get_helm.sh
222 |
223 |
224 | #手动下载安装
225 | #下载 Helm
226 | wget https://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
227 | #解压 Helm
228 | tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
229 | #复制客户端执行文件到 bin 目录下
230 | cp linux-amd64/helm /usr/local/bin/
231 | ```
232 |
233 | 2、配置helm客户端具有访问k8s集群的权限
234 | ```
235 | kubectl -n kube-system create serviceaccount tiller
236 | kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
237 |
238 | ```
3、Deploy the helm server (tiller) to the k8s cluster
240 | ```
241 | helm init --service-account tiller --tiller-image hongxiaolu/tiller:v2.12.3 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
242 | ```
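Tiller runs as a Deployment named tiller-deploy in kube-system; before continuing it should be rolled out, and `helm version` should report both the client and the server version:

```
kubectl -n kube-system rollout status deploy/tiller-deploy
helm version
```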
243 | 4、为helm客户端配置chart仓库
244 | ```
245 | helm repo add rancher-stable https://releases.rancher.com/server-charts/stable
246 | ```
247 | 5、检查rancher chart仓库可用
248 | ```
249 | helm search rancher
250 | ```
6、Install the certificate manager, then rancher
```
# Install cert-manager
253 | helm install stable/cert-manager \
254 | --name cert-manager \
255 | --namespace kube-system
256 |
257 | kubectl get pods --all-namespaces|grep cert-manager
258 |
259 |
260 | helm install rancher-stable/rancher \
261 | --name rancher \
262 | --namespace cattle-system \
263 | --set hostname=acai.rancher.com
264 |
265 | ```
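
The chart creates a Deployment named rancher in the cattle-system namespace; a quick way to confirm the rollout and to reach the UI (the hostname set above has to resolve to a node running the ingress controller, e.g. via /etc/hosts on the client machine; the worker IP below is only an example):

```
kubectl -n cattle-system rollout status deploy/rancher
kubectl -n cattle-system get pods

# On the client used to open the UI
echo '10.198.1.158 acai.rancher.com' >> /etc/hosts
```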
266 |
267 | 参考资料:
268 |
269 | http://www.acaiblog.cn/2019/03/15/RKE%E9%83%A8%E7%BD%B2rancher%E9%AB%98%E5%8F%AF%E7%94%A8%E9%9B%86%E7%BE%A4/
270 |
271 | https://blog.csdn.net/login_sonata/article/details/93847888
272 |
--------------------------------------------------------------------------------
/rke/cluster.yml:
--------------------------------------------------------------------------------
# If you intend to deploy Kubernetes in an air-gapped environment,
2 | # please consult the documentation on how to configure custom RKE images.
3 | nodes:
4 | - address: 10.198.1.156
5 | port: "22"
6 | internal_address: ""
7 | role:
8 | - controlplane
9 | - worker
10 | - etcd
11 | hostname_override: ""
12 | user: k8s
13 | docker_socket: /var/run/docker.sock
14 | ssh_key: ""
15 | ssh_key_path: ~/.ssh/id_rsa
16 | labels: {}
17 | - address: 10.198.1.157
18 | port: "22"
19 | internal_address: ""
20 | role:
21 | - controlplane
22 | - worker
23 | - etcd
24 | hostname_override: ""
25 | user: k8s
26 | docker_socket: /var/run/docker.sock
27 | ssh_key: ""
28 | ssh_key_path: ~/.ssh/id_rsa
29 | labels: {}
30 | - address: 10.198.1.158
31 | port: "22"
32 | internal_address: ""
33 | role:
34 | - worker
35 | hostname_override: ""
36 | user: k8s
37 | docker_socket: /var/run/docker.sock
38 | ssh_key: ""
39 | ssh_key_path: ~/.ssh/id_rsa
40 | labels: {}
41 | - address: 10.198.1.159
42 | port: "22"
43 | internal_address: ""
44 | role:
45 | - worker
46 | hostname_override: ""
47 | user: k8s
48 | docker_socket: /var/run/docker.sock
49 | ssh_key: ""
50 | ssh_key_path: ~/.ssh/id_rsa
51 | labels: {}
52 | - address: 10.198.1.160
53 | port: "22"
54 | internal_address: ""
55 | role:
56 | - worker
57 | hostname_override: ""
58 | user: k8s
59 | docker_socket: /var/run/docker.sock
60 | ssh_key: ""
61 | ssh_key_path: ~/.ssh/id_rsa
62 | labels: {}
63 | services:
64 | etcd:
65 | image: ""
66 | extra_args: {}
67 | extra_binds: []
68 | extra_env: []
69 | external_urls: []
70 | ca_cert: ""
71 | cert: ""
72 | key: ""
73 | path: ""
74 | snapshot: null
75 | retention: ""
76 | creation: ""
77 | kube-api:
78 | image: ""
79 | extra_args:
80 | enable-admission-plugins: NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,Initializers
81 | runtime-config: api/all=true,admissionregistration.k8s.io/v1alpha1=true
82 | extra_binds: []
83 | extra_env: []
84 | service_cluster_ip_range: 10.44.0.0/16
85 | service_node_port_range: ""
86 | pod_security_policy: true
87 | kube-controller:
88 | image: ""
89 | extra_args: {}
90 | extra_binds: []
91 | extra_env: []
92 | cluster_cidr: 10.46.0.0/16
93 | service_cluster_ip_range: 10.44.0.0/16
94 | scheduler:
95 | image: ""
96 | extra_args: {}
97 | extra_binds: []
98 | extra_env: []
99 | kubelet:
100 | image: ""
101 | extra_args:
102 | enforce-node-allocatable: "pods,kube-reserved,system-reserved"
103 | system-reserved-cgroup: "/system.slice"
104 | system-reserved: "cpu=500m,memory=1Gi"
105 | kube-reserved-cgroup: "/system.slice/kubelet.service"
106 | kube-reserved: "cpu=1,memory=2Gi"
107 | eviction-soft: "memory.available<10%,nodefs.available<10%,imagefs.available<10%"
108 | eviction-soft-grace-period: "memory.available=2m,nodefs.available=2m,imagefs.available=2m"
109 | extra_binds: []
110 | extra_env: []
111 | cluster_domain: k8s.test.net
112 | infra_container_image: ""
113 | cluster_dns_server: 10.44.0.10
114 | fail_swap_on: false
115 | kubeproxy:
116 | image: ""
117 | extra_args: {}
118 | extra_binds: []
119 | extra_env: []
120 | network:
121 | plugin: calico
122 | options: {}
123 | authentication:
124 | strategy: x509
125 | options: {}
126 | sans: []
127 | addons: ""
128 | addons_include: []
129 | system_images:
130 | etcd: rancher/coreos-etcd:v3.2.24
131 | alpine: rancher/rke-tools:v0.1.25
132 | nginx_proxy: rancher/rke-tools:v0.1.25
133 | cert_downloader: rancher/rke-tools:v0.1.25
134 | kubernetes_services_sidecar: rancher/rke-tools:v0.1.25
135 | kubedns: rancher/k8s-dns-kube-dns-amd64:1.14.13
136 | dnsmasq: rancher/k8s-dns-dnsmasq-nanny-amd64:1.14.13
137 | kubedns_sidecar: rancher/k8s-dns-sidecar-amd64:1.14.13
138 | kubedns_autoscaler: rancher/cluster-proportional-autoscaler-amd64:1.0.0
139 | kubernetes: rancher/hyperkube:v1.12.6-rancher1
140 | flannel: rancher/coreos-flannel:v0.10.0
141 | flannel_cni: rancher/coreos-flannel-cni:v0.3.0
142 | calico_node: rancher/calico-node:v3.1.3
143 | calico_cni: rancher/calico-cni:v3.1.3
144 | calico_controllers: ""
145 | calico_ctl: rancher/calico-ctl:v2.0.0
146 | canal_node: rancher/calico-node:v3.1.3
147 | canal_cni: rancher/calico-cni:v3.1.3
148 | canal_flannel: rancher/coreos-flannel:v0.10.0
  weave_node: weaveworks/weave-kube:2.1.2
150 | weave_cni: weaveworks/weave-npc:2.1.2
151 | pod_infra_container: rancher/pause-amd64:3.1
152 | ingress: rancher/nginx-ingress-controller:0.21.0-rancher1
153 | ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.4
154 | metrics_server: rancher/metrics-server-amd64:v0.3.1
155 | ssh_key_path: ~/.ssh/id_rsa
156 | ssh_agent_auth: false
157 | authorization:
158 | mode: rbac
159 | options: {}
160 | ignore_docker_version: false
161 | kubernetes_version: ""
162 | private_registries: []
163 | ingress:
164 | provider: ""
165 | options: {}
166 | node_selector: {}
167 | extra_args: {}
168 | cluster_name: ""
169 | cloud_provider:
170 | name: ""
171 | prefix_path: ""
172 | addon_job_timeout: 0
173 | bastion_host:
174 | address: ""
175 | port: ""
176 | user: ""
177 | ssh_key: ""
178 | ssh_key_path: ""
179 | monitoring:
180 | provider: ""
181 | options: {}
182 |
--------------------------------------------------------------------------------
/tools/Linux Kernel 升级.md:
--------------------------------------------------------------------------------
1 | # Linux Kernel 升级
2 |
3 | k8s,docker,cilium等很多功能、特性需要较新的linux内核支持,所以有必要在集群部署前对内核进行升级;CentOS7 和 Ubuntu16.04可以很方便的完成内核升级。
4 |
5 | ## CentOS7
6 |
7 | 红帽企业版 Linux 仓库网站 https://www.elrepo.org,主要提供各种硬件驱动(显卡、网卡、声卡等)和内核升级相关资源;兼容 CentOS7 内核升级。如下按照网站提示载入elrepo公钥及最新elrepo版本,然后按步骤升级内核(以安装长期支持版本 kernel-lt 为例)
8 |
9 | ``` bash
10 | # 载入公钥
11 | rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
12 | # 安装ELRepo
13 | rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
14 | # 载入elrepo-kernel元数据
15 | yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
16 | # 查看可用的rpm包
17 | yum --disablerepo=\* --enablerepo=elrepo-kernel list kernel*
18 | # 安装长期支持版本的kernel
19 | yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
20 | # 删除旧版本工具包
21 | yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
22 | # 安装新版本工具包
23 | yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
24 |
25 | # 查看默认启动顺序
26 | awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
27 | CentOS Linux (4.4.208-1.el7.elrepo.x86_64) 7 (Core)
28 | CentOS Linux (3.10.0-1062.9.1.el7.x86_64) 7 (Core)
29 | CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)
30 | CentOS Linux (0-rescue-292a31ba53a34a6aa077e3467b6f9541) 7 (Core)
31 |
# Boot entries are numbered from 0; the new kernel (4.4.208) is inserted at the top (index 0) and the old 3.10 kernel moves down to index 1, so select 0.
33 | grub2-set-default 0
34 |
35 | # 将第一个内核作为默认内核
36 | sed -i 's/GRUB_DEFAULT=saved/GRUB_DEFAULT=0/g' /etc/default/grub
37 |
38 | # 更新 grub
39 | grub2-mkconfig -o /boot/grub2/grub.cfg
40 |
41 | # 重启并检查
42 | reboot
43 | ```
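
After the reboot, confirm that the node actually came up on the new kernel:

``` bash
uname -r
# 4.4.208-1.el7.elrepo.x86_64
```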
44 |
45 | ## Ubuntu16.04
46 |
Open http://kernel.ubuntu.com/~kernel-ppa/mainline/ and pick the version you need (4.16.3 is used as the example), then download the .deb files that match your architecture:

``` bash
# Build for amd64 succeeded (see BUILD.LOG.amd64):
#   linux-headers-4.16.3-041603_4.16.3-041603.201804190730_all.deb
#   linux-headers-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb
#   linux-image-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb

# Install the packages, then reboot
sudo dpkg -i *.deb
reboot
56 | ```
57 |
58 | 参考文档:
59 |
60 | https://github.com/easzlab/kubeasz/blob/master/docs/guide/kernel_upgrade.md
61 |
--------------------------------------------------------------------------------
/tools/README.md:
--------------------------------------------------------------------------------
1 | # 同步工具
2 |
3 | 1、同步主机host文件
4 | ```
5 | [root@master01 ~]# ./ssh_copy.sh /etc/hosts
6 | spawn scp /etc/hosts root@master01:/etc/hosts
7 | hosts 100% 440 940.4KB/s 00:00
8 | spawn scp /etc/hosts root@master02:/etc/hosts
9 | hosts 100% 440 774.6KB/s 00:00
10 | spawn scp /etc/hosts root@master03:/etc/hosts
11 | hosts 100% 440 1.4MB/s 00:00
12 | spawn scp /etc/hosts root@slave01:/etc/hosts
13 | hosts 100% 440 912.6KB/s 00:00
14 | spawn scp /etc/hosts root@slave02:/etc/hosts
15 | hosts 100% 440 826.8KB/s 00:00
16 | spawn scp /etc/hosts root@slave03:/etc/hosts
17 | hosts
18 | ```
19 |
20 | 2、iptables多端口
21 | ```bash
22 | #iptables多端口
23 | -A RH-Firewall-1-INPUT -s 13.138.33.20/32 -p tcp -m tcp -m multiport --dports 80,443,6443,20000:40000 -j ACCEPT
24 |
25 | #同步防火墙
26 | ./ssh_copy.sh /etc/sysconfig/iptables
27 | ```
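
Copying the file only updates the ruleset on disk; it still has to be reloaded on every node. A sketch, assuming the firewall is managed by the iptables-services unit and the same host list as ssh_copy.sh:

```bash
for i in master01 master02 master03 slave01 slave02 slave03; do
  ssh root@$i systemctl restart iptables
done
```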
28 |
--------------------------------------------------------------------------------
/tools/k8s域名解析coredns问题排查过程.md:
--------------------------------------------------------------------------------
1 | 参考资料:
2 |
3 | https://segmentfault.com/a/1190000019823091?utm_source=tag-newest
4 |
--------------------------------------------------------------------------------
/tools/kubernetes-node打标签.md:
--------------------------------------------------------------------------------
1 | ```
kubectl get nodes --show-labels
3 |
4 |
5 | kubectl label nodes 10.199.1.159 node=10.199.1.159
6 | kubectl label nodes 10.199.1.160 node=10.199.1.160
7 | ```
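
The labels are what workloads key on: a pod (or a Deployment's pod template) with a matching nodeSelector is only scheduled onto that node. A minimal sketch (the image and resource names are illustrative):

```
cat > nginx-on-159.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-159
spec:
  nodeSelector:
    node: 10.199.1.159
  containers:
  - name: nginx
    image: nginx:1.16
EOF

kubectl apply -f nginx-on-159.yaml
kubectl get pod nginx-on-159 -o wide    # NODE should be 10.199.1.159
```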
8 |
--------------------------------------------------------------------------------
/tools/kubernetes-常用操作.md:
--------------------------------------------------------------------------------
1 | # 一、节点调度配置
2 | ```
3 | [root@master01 ~]# kubectl get nodes -A
4 | NAME STATUS ROLES AGE VERSION
5 | 10.19.2.246 Ready node 3h13m v1.15.2
6 | 10.19.2.247 Ready node 3h13m v1.15.2
7 | 10.19.2.248 Ready node 3h13m v1.15.2
8 | 10.19.2.56 Ready,SchedulingDisabled master 4h55m v1.15.2
9 | 10.19.2.57 Ready,SchedulingDisabled master 4h55m v1.15.2
10 | 10.19.2.58 Ready,SchedulingDisabled master 4h55m v1.15.2
11 |
12 | #方法一
13 | [root@master01 ~]# kubectl uncordon 10.19.2.56
14 | node/10.19.2.56 uncordoned
15 |
16 | [root@master01 ~]# kubectl get nodes -A
17 | NAME STATUS ROLES AGE VERSION
18 | 10.19.2.246 Ready node 3h13m v1.15.2
19 | 10.19.2.247 Ready node 3h13m v1.15.2
20 | 10.19.2.248 Ready node 3h13m v1.15.2
21 | 10.19.2.56 Ready master 4h56m v1.15.2
22 | 10.19.2.57 Ready,SchedulingDisabled master 4h56m v1.15.2
23 | 10.19.2.58 Ready,SchedulingDisabled master 4h56m v1.15.2
24 |
25 | #方法二
26 | [root@master01 ~]# kubectl patch node 10.19.2.56 -p '{"spec":{"unschedulable":false}}'
27 | node/10.19.2.56 patched
28 |
29 | [root@master01 ~]# kubectl get nodes -A
30 | NAME STATUS ROLES AGE VERSION
31 | 10.19.2.246 Ready node 3h17m v1.15.2
32 | 10.19.2.247 Ready node 3h17m v1.15.2
33 | 10.19.2.248 Ready node 3h17m v1.15.2
34 | 10.19.2.56 Ready master 5h v1.15.2
35 | 10.19.2.57 Ready,SchedulingDisabled master 5h v1.15.2
36 | 10.19.2.58 Ready,SchedulingDisabled master 5h v1.15.2
37 | ```
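
The opposite operation, taking a node out of scheduling before maintenance, is the counterpart of the commands above; a sketch:

```
# Stop new pods from being scheduled onto the node
kubectl cordon 10.19.2.56

# Additionally evict the pods already running there (DaemonSet pods are left in place)
kubectl drain 10.19.2.56 --ignore-daemonsets --delete-local-data --force
```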
38 |
39 | # 二、标签查看
40 | ```
41 | [root@master01 ~]# kubectl get nodes --show-labels
42 | NAME STATUS ROLES AGE VERSION LABELS
43 | 10.19.2.246 Ready node 3h15m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.246,kubernetes.io/os=linux,kubernetes.io/role=node
44 | 10.19.2.247 Ready node 3h15m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.247,kubernetes.io/os=linux,kubernetes.io/role=node
45 | 10.19.2.248 Ready node 3h15m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.248,kubernetes.io/os=linux,kubernetes.io/role=node
46 | 10.19.2.56 Ready master 4h57m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.56,kubernetes.io/os=linux,kubernetes.io/role=master
47 | 10.19.2.57 Ready,SchedulingDisabled master 4h57m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.57,kubernetes.io/os=linux,kubernetes.io/role=master
48 | 10.19.2.58 Ready,SchedulingDisabled master 4h57m v1.15.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.19.2.58,kubernetes.io/os=linux,kubernetes.io/role=master
49 | ```
50 | 参考文档:
51 |
52 | https://blog.csdn.net/miss1181248983/article/details/88181434 Kubectl常用命令
53 |
--------------------------------------------------------------------------------
/tools/kubernetes-批量删除Pods.md:
--------------------------------------------------------------------------------
1 | # 一、批量删除处于Pending状态的pod
2 | ```
3 | kubectl get pods | grep Pending | awk '{print $1}' | xargs kubectl delete pod
4 | ```
5 |
6 | # 二、批量删除处于Evicted状态的pod
7 | ```
8 | kubectl get pods | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
9 | ```
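
The same pattern works across all namespaces; with `-A` the namespace is the first column, so both columns have to be handed back to kubectl:

```
kubectl get pods -A | grep Evicted | awk '{print "-n "$1" "$2}' | xargs -r -L1 kubectl delete pod
```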
10 |
11 | 参考文档:
12 |
13 | https://blog.csdn.net/weixin_39686421/article/details/80574131 kubernetes-批量删除Evicted Pods
14 |
--------------------------------------------------------------------------------
/tools/kubernetes访问外部mysql服务.md:
--------------------------------------------------------------------------------
The cleanest way for workloads in a k8s cluster to reach a standalone service outside the cluster is the Endpoints approach (the external service is, in effect, abstracted into an in-cluster Service). MySQL is used as the example below.
4 |
5 | # 一、创建endpoints
6 | ```bash
7 | #创建 mysql-endpoints.yaml
8 | cat > mysql-endpoints.yaml <<\EOF
9 | kind: Endpoints
10 | apiVersion: v1
11 | metadata:
12 | name: mysql-production
13 | namespace: default
14 | subsets:
15 | - addresses:
16 | - ip: 10.198.1.155
17 | ports:
18 | - port: 3306
19 | EOF
20 |
21 | kubectl apply -f mysql-endpoints.yaml
22 | ```
23 |
24 | # 二、创建service
25 | ```bash
26 | #创建 mysql-service.yaml
27 | cat > mysql-service.yaml <<\EOF
28 | apiVersion: v1
29 | kind: Service
30 | metadata:
31 | name: mysql-production
32 | spec:
33 | ports:
34 | - port: 3306
35 | EOF
36 |
37 | kubectl apply -f mysql-service.yaml
38 | ```
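
Because the Service has no selector, Kubernetes does not manage its Endpoints; the two objects are linked only by sharing the name `mysql-production`, so it is worth confirming they are bound:

```bash
# The ENDPOINTS column should show 10.198.1.155:3306
kubectl get svc,endpoints mysql-production
kubectl describe svc mysql-production
```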
39 |
40 | # 三、测试连接数据库
41 | ```bash
42 | cat > mysql-rc.yaml <<\EOF
43 | apiVersion: v1
44 | kind: ReplicationController
45 | metadata:
46 | name: mysql
47 | spec:
48 | replicas: 1
49 | selector:
50 | app: mysql
51 | template:
52 | metadata:
53 | labels:
54 | app: mysql
55 | spec:
56 | containers:
57 | - name: mysql
58 | image: docker.io/mysql:5.7
59 | imagePullPolicy: IfNotPresent
60 | ports:
61 | - containerPort: 3306
62 | env:
63 | - name: MYSQL_ROOT_PASSWORD
64 | value: "123456"
65 | EOF
66 |
67 | kubectl apply -f mysql-rc.yaml
68 | ```
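
The pod created above only serves as a client with the mysql binary available; the actual test is connecting through the Service name, which should land on the external instance at 10.198.1.155 (use the external server's root password at the prompt, not the one set in the RC):

```bash
kubectl exec -it `kubectl get pods | grep mysql | awk '{print $1}'` -- \
  mysql -h mysql-production -uroot -p
```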
69 | 参考资料:
70 |
71 | https://blog.csdn.net/hxpjava1/article/details/80040407 使用kubernetes访问外部服务mysql/redis
72 |
--------------------------------------------------------------------------------
/tools/ssh_copy.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
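# Copy a file to the same path on every node listed below, using expect to answer
# the ssh host-key and password prompts automatically (the password is hard-coded).
# Usage: ./ssh_copy.sh /etc/hosts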
2 |
3 | for i in `echo master01 master02 master03 slave01 slave02 slave03`;do
4 | expect -c "
5 | spawn scp $1 root@$i:$1
6 | expect {
7 | \"*yes/no*\" {send \"yes\r\"; exp_continue}
8 | \"*password*\" {send \"123456\r\"; exp_continue}
9 | \"*Password*\" {send \"123456\r\";}
10 | } "
11 | done
12 |
--------------------------------------------------------------------------------