├── 15---每天5分钟玩转Kubernetes_cloudman(著) 清华大学出版社 2018-04-01.pdf
├── README.md
├── centos-k8s-dashboard.md
├── harbor-docker.md
├── rancher-k8s-install.md
└── useful-k8s-bash.md
/15---每天5分钟玩转Kubernetes_cloudman(著) 清华大学出版社 2018-04-01.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/qxl1231/2019-k8s-centos/29797e3804ef5283b2a823663220d1c268b5da74/15---每天5分钟玩转Kubernetes_cloudman(著) 清华大学出版社 2018-04-01.pdf
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # 2019-k8s-centos/Ubuntu
2 | An up-to-date 2019 k8s cluster setup guide (CentOS/Ubuntu)
3 |
4 | - The commands below are the manual install route (roughly 1-2 hours, depending on how many errors you hit!). It is not for beginners and the process is tedious, but it is great training and builds a real understanding of k8s. I tried all kinds of guides found online; most were outdated or incomplete documents from 2016 or 2017. After following them N times, uninstalling and reinstalling over and over, my CentOS box ended up a mess of conflicting configuration and kubeadm init kept failing. Having gotten nowhere, I reinstalled CentOS, started from scratch, and it finally installed successfully!
5 | - Many thanks to ops engineer @丿陌路灬再见ミ for the technical support and patient guidance; I learned a lot of new things.
6 |
7 | - Ubuntu quick-install route: **for complete beginners, I recommend installing Rancher on Ubuntu 16 and then following its guide to set up the k8s cluster; it takes only about half an hour.** [Rancher one-click install guide](https://github.com/qxl1231/2019-k8s-centos/blob/master/rancher-k8s-install.md)
8 | ## First, clone this repository onto your CentOS host
9 | ```sh
10 | git clone https://github.com/qxl1231/2019-k8s-centos.git
11 | cd 2019-k8s-centos
12 | ```
13 | - Follow the steps below to set up the master first; after that you still need to install the dashboard, which is covered in the separate dashboard md document.
14 |
15 | ## Deploying a k8s cluster on CentOS 7
16 |
17 | #### Install docker-ce
18 |
19 | [Official documentation](https://docs.docker.com/install/linux/docker-ce/centos/)
20 |
21 | **Docker must be installed and configured on both the Master and the Nodes.**
22 |
23 | ```sh
24 | # Remove any previously installed Docker packages
25 | sudo yum remove docker \
26 | docker-client \
27 | docker-client-latest \
28 | docker-common \
29 | docker-latest \
30 | docker-latest-logrotate \
31 | docker-logrotate \
32 | docker-engine
33 |
34 | # Install dependencies
35 | sudo yum update -y && sudo yum install -y yum-utils \
36 | device-mapper-persistent-data \
37 | lvm2
38 |
39 | # Add the official yum repository
40 | sudo yum-config-manager \
41 | --add-repo \
42 | https://download.docker.com/linux/centos/docker-ce.repo
43 |
44 | # Install Docker
45 | sudo yum install docker-ce docker-ce-cli containerd.io
46 |
47 | # Check the Docker version
48 | docker --version
49 |
50 | # Enable Docker at boot and start it now
51 | systemctl enable --now docker
52 | ```
53 |
54 | Or install it with the one-line convenience script:
55 |
56 | ```shell
57 | curl -fsSL "https://get.docker.com/" | sh
58 | systemctl enable --now docker
59 | ```
60 |
61 |
62 |
63 | **Change Docker's cgroup driver to systemd, matching what k8s expects.**
64 |
65 | ```shell
66 | # Change the Docker cgroup driver to native.cgroupdriver=systemd
67 | cat > /etc/docker/daemon.json <<EOF
68 | {
69 |   "exec-opts": ["native.cgroupdriver=systemd"]
70 | }
71 | EOF
72 | 
73 | # Restart Docker for the change to take effect
74 | systemctl restart docker
75 | ```
76 | 
77 | #### Install kubelet, kubeadm and kubectl
89 |
90 | **kubelet, kubeadm and kubectl must be installed on both the master and the nodes.**
91 |
92 | **Installing Kubernetes requires packages such as kubelet and kubeadm, but the yum source given on the official site, packages.cloud.google.com, is unreachable from mainland China; the Aliyun mirror of the repository can be used instead.**
93 |
94 | ```shell
95 | cat <<EOF > /etc/yum.repos.d/kubernetes.repo
96 | [kubernetes]
97 | name=Kubernetes
98 | baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
99 | enabled=1
100 | gpgcheck=0
101 | repo_gpgcheck=0
102 | gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
103 |        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
104 | EOF
105 |
106 | # Disable SELinux (switch to permissive mode)
107 | setenforce 0
108 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
109 |
110 | # Install kubelet, kubeadm and kubectl
111 | yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
112 |
113 | systemctl enable --now kubelet # enable kubelet at boot and start it now
114 |
115 | # CentOS 7 users also need to configure bridge/routing settings:
116 | yum install -y bridge-utils.x86_64
117 | modprobe br_netfilter # load the br_netfilter module; use lsmod to check loaded modules
118 | cat <<EOF > /etc/sysctl.d/k8s.conf
119 | net.bridge.bridge-nf-call-ip6tables = 1
120 | net.bridge.bridge-nf-call-iptables = 1
121 | EOF
122 | sysctl --system # reload all sysctl configuration files
123 |
124 | systemctl disable --now firewalld # turn off the firewall
125 |
126 | # k8s requires swap to be disabled
127 | swapoff -a && sysctl -w vm.swappiness=0 # turn off swap
128 | sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab # stop mounting swap at boot
129 | ```
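
Before going further, it may help to verify that the settings above actually took effect; a quick sanity check (my own addition, expected results in the comments):

```shell
getenforce                                  # expect: Permissive
lsmod | grep br_netfilter                   # expect: the module is listed
sysctl net.bridge.bridge-nf-call-iptables   # expect: = 1
free -m | grep -i swap                      # expect: 0 total, i.e. swap is off
systemctl is-enabled docker kubelet         # expect: enabled, enabled
```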
130 |
131 | **If you are using virtual machines, clone the VM after completing the steps above. The lab environment is 1 Master and 2 Nodes.**
132 |
133 | #### Preparation for creating the cluster
134 |
135 | ```shell
136 | # On the Master:
137 | kubeadm config images pull # pull the images the cluster needs; this requires access past the GFW
138 | 
139 | # --- If you cannot get past the GFW, try the following ---
140 | kubeadm config images list # list the required images
141 | # (Do not take the versions below as given; go by the actual output. If the list shows, say, k8s.gcr.io/kube-apiserver:v1.14.1, then the tag applied below must be exactly k8s.gcr.io/kube-apiserver:v1.14.1, otherwise kubeadm will go back to the Google registry to pull it.)
142 | # Pull the equivalents from a domestic mirror first, based on the required image names
143 | docker pull mirrorgooglecontainers/kube-apiserver:v1.14.1
144 | docker pull mirrorgooglecontainers/kube-controller-manager:v1.14.1
145 | docker pull mirrorgooglecontainers/kube-scheduler:v1.14.1
146 | docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
147 | docker pull mirrorgooglecontainers/pause:3.1
148 | docker pull mirrorgooglecontainers/etcd:3.3.10
149 | docker pull coredns/coredns:1.3.1 # this one does not exist under mirrorgooglecontainers
150 |
151 | # Retag the images
152 | docker tag mirrorgooglecontainers/kube-apiserver:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
153 | docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
154 | docker tag mirrorgooglecontainers/kube-scheduler:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
155 | docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
156 | docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
157 | docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
158 | docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
159 |
160 |
161 | # With the required images downloaded ahead of time, init will not pull them again and fail because the Google registry is unreachable
162 | 
163 | # Remove the original mirror-tagged images
164 | docker rmi mirrorgooglecontainers/kube-apiserver:v1.14.1
165 | docker rmi mirrorgooglecontainers/kube-controller-manager:v1.14.1
166 | docker rmi mirrorgooglecontainers/kube-scheduler:v1.14.1
167 | docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
168 | docker rmi mirrorgooglecontainers/pause:3.1
169 | docker rmi mirrorgooglecontainers/etcd:3.3.10
170 | docker rmi coredns/coredns:1.3.1
171 |
172 | # --- end of the workaround for blocked access ---
173 |
174 | # On the Nodes:
175 | # Pull the equivalents from a domestic mirror first, based on the required image names
176 | docker pull mirrorgooglecontainers/kube-proxy:v1.14.1
177 | docker pull mirrorgooglecontainers/pause:3.1
178 |
179 |
180 | # Retag the images
181 | docker tag mirrorgooglecontainers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
182 | docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
183 |
184 | # Remove the original mirror-tagged images
185 | docker rmi mirrorgooglecontainers/kube-proxy:v1.14.1
186 | docker rmi mirrorgooglecontainers/pause:3.1
187 | # Without these images preloaded, the node cannot join the cluster
188 | ```
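
The pull/tag/rmi triplets above can also be scripted; a small helper sketch for the Master side, assuming the v1.14.1 image list printed by kubeadm config images list:

```shell
#!/bin/bash
# Pull each image from the mirror, retag it as k8s.gcr.io, then drop the mirror tag
images=(
  kube-apiserver:v1.14.1
  kube-controller-manager:v1.14.1
  kube-scheduler:v1.14.1
  kube-proxy:v1.14.1
  pause:3.1
  etcd:3.3.10
)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi "mirrorgooglecontainers/${img}"
done
# coredns is not under mirrorgooglecontainers
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi coredns/coredns:1.3.1
```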
189 |
190 |
191 |
192 | ### Creating the cluster with kubeadm
193 |
194 | ```shell
195 | # If /etc/kubernetes/admin.conf already exists as an empty file during the first init (I had created it by hand), init fails with: panic: runtime error: invalid memory address or nil pointer dereference
196 | ls /etc/kubernetes/admin.conf && mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.bak # move it aside as a backup
197 |
198 | # Initialize the Master (it needs at least 2 CPU cores). All kinds of errors and surprises can appear here... this step decides success or failure
199 | kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 # --kubernetes-version 1.14.1
200 | # --apiserver-advertise-address specifies the interface used to communicate with the other nodes
201 | # --pod-network-cidr specifies the pod network subnet; the flannel network requires exactly this CIDR
202 | ```
203 |
204 | + During initialization the tool checks the environment for consistency; use the actual error messages to fix any remaining problems.
205 | + The tool fetches https://dl.k8s.io/release/stable-1.txt to determine the latest k8s version, and reaching that URL requires getting past the GFW. If it is unreachable, the kubeadm client's own version is used as the install version (check it with kubeadm version). You can also pin the version explicitly with --kubernetes-version, as in the sketch below.
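
A hedged example of pinning the version to skip the network probe entirely (address and CIDR as used above):

```shell
kubeadm version -o short   # shows the client version, e.g. v1.14.1
kubeadm init --kubernetes-version v1.14.1 \
  --apiserver-advertise-address 192.168.200.25 \
  --pod-network-cidr 10.244.0.0/16
```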
206 |
207 | ```
208 | # Initialization output:
209 | [init] Using Kubernetes version: v1.14.1
210 | [preflight] Running pre-flight checks
211 | [preflight] Pulling images required for setting up a Kubernetes cluster
212 | [preflight] This might take a minute or two, depending on the speed of your internet connection
213 | [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
214 | [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
215 | [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
216 | [kubelet-start] Activating the kubelet service
217 | [certs] Using certificateDir folder "/etc/kubernetes/pki"
218 | [certs] Using existing etcd/ca certificate authority
219 | [certs] Using existing etcd/server certificate and key on disk
220 | [certs] Using existing etcd/peer certificate and key on disk
221 | [certs] Using existing etcd/healthcheck-client certificate and key on disk
222 | [certs] Using existing apiserver-etcd-client certificate and key on disk
223 | [certs] Using existing ca certificate authority
224 | [certs] Using existing apiserver certificate and key on disk
225 | [certs] Using existing apiserver-kubelet-client certificate and key on disk
226 | [certs] Using existing front-proxy-ca certificate authority
227 | [certs] Using existing front-proxy-client certificate and key on disk
228 | [certs] Using the existing "sa" key
229 | [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
230 | [kubeconfig] Writing "admin.conf" kubeconfig file
231 | [kubeconfig] Writing "kubelet.conf" kubeconfig file
232 | [kubeconfig] Writing "controller-manager.conf" kubeconfig file
233 | [kubeconfig] Writing "scheduler.conf" kubeconfig file
234 | [control-plane] Using manifest folder "/etc/kubernetes/manifests"
235 | [control-plane] Creating static Pod manifest for "kube-apiserver"
236 | [control-plane] Creating static Pod manifest for "kube-controller-manager"
237 | [control-plane] Creating static Pod manifest for "kube-scheduler"
238 | [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
239 | [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
240 | [apiclient] All control plane components are healthy after 21.503375 seconds
241 | [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
242 | [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
243 | [upload-certs] Skipping phase. Please see --experimental-upload-certs
244 | [mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
245 | [mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
246 | [bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
247 | [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
248 | [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
249 | [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
250 | [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
251 | [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
252 | [addons] Applied essential addon: CoreDNS
253 | [addons] Applied essential addon: kube-proxy
254 |
255 | Your Kubernetes control-plane has initialized successfully!
256 |
257 | To start using your cluster, you need to run the following as a regular user:
258 |
259 | mkdir -p $HOME/.kube
260 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
261 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
262 |
263 | You should now deploy a pod network to the cluster.
264 | Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
265 | https://kubernetes.io/docs/concepts/cluster-administration/addons/
266 |
267 | Then you can join any number of worker nodes by running the following on each as root:
268 |
269 | # The join command below is different for every cluster; be sure to save your own copy
270 | kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
271 | --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16
272 | ```
273 |
274 | #### Setting up access for a regular user
275 |
276 | ```shell
277 | # On the Master:
278 | mkdir -p $HOME/.kube
279 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
280 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
281 | # On the Nodes:
282 | mkdir -p $HOME/.kube
283 | # Copy the Master's $HOME/.kube/config into the same location here (see the sketch after this block), otherwise kubectl get nodes will later fail with:
284 | # The connection to the server localhost:8080 was refused - did you specify the right host or port?
285 | sudo chown $(id -u):$(id -g) $HOME/.kube/config
286 | ```
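
One way to copy the config from the Master (a sketch; node1 and node2 are the hostnames used in this guide, adjust users and hosts to your environment):

```shell
# Run on the Master: push the kubeconfig to each node
for host in node1 node2; do
  ssh "$host" "mkdir -p ~/.kube"
  scp "$HOME/.kube/config" "$host":~/.kube/config
done
```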
287 |
288 | #### Applying the flannel network
289 |
290 | ```shell
291 | kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
292 | ```
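
Optionally, watch the flannel DaemonSet pods come up before joining the nodes (the app=flannel label comes from the manifest above):

```shell
kubectl -n kube-system get pods -l app=flannel -w   # Ctrl-C once every pod is Running
```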
293 |
294 | ### Joining the nodes
295 |
296 | ```shell
297 | # node1:
298 | kubeadm join 192.168.200.25:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
299 | --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252
300 | # node2:
301 | kubeadm join 192.168.200.25:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
302 | --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252
303 | ```
304 |
305 | Output log:
306 |
307 | ```
308 | [preflight] Running pre-flight checks
309 | [preflight] Reading configuration from the cluster...
310 | [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
311 | [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
312 | [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
313 | [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
314 | [kubelet-start] Activating the kubelet service
315 | [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
316 |
317 | This node has joined the cluster:
318 | * Certificate signing request was sent to apiserver and a response was received.
319 | * The Kubelet was informed of the new secure connection details.
320 |
321 | Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
322 | ```
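
Bootstrap tokens expire after 24 hours by default; if the saved join command no longer works, a fresh one can be generated on the Master:

```shell
kubeadm token create --print-join-command   # prints a complete new kubeadm join line
```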
323 |
324 | ```shell
325 | # On the master:
326 | kubectl get pods --all-namespaces
327 | # --- output ---
328 | NAMESPACE NAME READY STATUS RESTARTS AGE
329 | kube-system coredns-fb8b8dccf-rn8kd 1/1 Running 0 170m
330 | kube-system coredns-fb8b8dccf-slwr4 1/1 Running 0 170m
331 | kube-system etcd-master 1/1 Running 0 169m
332 | kube-system kube-apiserver-master 1/1 Running 0 169m
333 | kube-system kube-controller-manager-master 1/1 Running 0 169m
334 | kube-system kube-flannel-ds-amd64-l8c7c 1/1 Running 0 130m
335 | kube-system kube-flannel-ds-amd64-lcmxw 1/1 Running 1 117m
336 | kube-system kube-flannel-ds-amd64-pqnln 1/1 Running 1 72m
337 | kube-system kube-proxy-4kcqb 1/1 Running 0 170m
338 | kube-system kube-proxy-jcqjd 1/1 Running 0 72m
339 | kube-system kube-proxy-vm9sj 1/1 Running 0 117m
340 | kube-system kube-scheduler-master 1/1 Running 0 169m
341 | # --- output ---
342 |
343 |
344 | kubectl get nodes
345 | # --- output ---
346 | NAME STATUS ROLES AGE VERSION
347 | master Ready master 171m v1.14.1
348 | node1    Ready    <none>   118m   v1.14.1
349 | node2    Ready    <none>   74m    v1.14.1
350 | # --- output ---
351 | ```
352 |
353 | Troubleshooting:
354 |
355 | ```shell
356 | journalctl -f # follow the current log output
357 | journalctl -f -u kubelet # follow only the kubelet service log
358 | ```
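
Beyond the kubelet logs, a couple of kubectl commands often help pinpoint a stuck pod (replace <pod-name> with the pod in question):

```shell
kubectl -n kube-system get pods                  # find the failing pod
kubectl -n kube-system describe pod <pod-name>   # the Events section shows scheduling/pull errors
kubectl -n kube-system logs <pod-name>           # container logs, if the container ever started
```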
359 |
360 | For safety, Kubernetes does not schedule Pods onto the Master node by default. If you want k8s-master to double as a Node, run the following:
361 | ```shell
362 | kubectl describe node localhost
363 | # Output: Taints: node-role.kubernetes.io/master:NoSchedule (this taint means that by default the master node schedules no Pods, i.e. it runs no workloads)
364 | # Allow Pods to be scheduled onto the master
365 | kubectl taint node localhost node-role.kubernetes.io/master=:NoSchedule-
366 | ```
367 | Here k8s-master is the hostname of the master node. To restore the master-only state, run:
368 |
369 | ```shell
370 | # Stop scheduling Pods onto the master
371 | kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule
372 | ```
373 | **Note:** in a cluster initialized by kubeadm, the master node is also given the label node-role.kubernetes.io/master=, identifying its role as master.
374 | Setting a label on a Node and setting a taint are two different operations.
375 |
376 | ### Practice: keeping Kubernetes master nodes free of workloads
377 | The master nodes of a Kubernetes cluster are critical; a highly available cluster generally has 3 or more of them. To keep the masters stable, scheduling business Pods onto them is generally not recommended. Below is our practice of using the Taints and Tolerations scheduling features to keep the Kubernetes masters from running workloads.
378 | 
379 | Our Kubernetes cluster has 3 master nodes in total, named k8s-01, k8s-02 and k8s-03. To keep the cluster stable while still getting some use out of the masters, we set one node to node-role.kubernetes.io/master:NoSchedule and the other two to node-role.kubernetes.io/master:PreferNoSchedule. This guarantees that 1 of the 3 never runs business Pods under any circumstances, while the other 2 avoid running them as long as the cluster has spare capacity. A toleration example follows the commands below.
380 |
381 | ```shell
382 | kubectl taint nodes k8s-01 node-role.kubernetes.io/master=:NoSchedule
383 |
384 | kubectl taint nodes k8s-02 node-role.kubernetes.io/master=:PreferNoSchedule
385 |
386 | kubectl taint nodes k8s-03 node-role.kubernetes.io/master=:PreferNoSchedule
387 | ```
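
For completeness: a Pod can still opt into running on a tainted master by declaring a matching toleration. A minimal sketch (pod name and image are placeholders):

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: nginx
    image: nginx:latest
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
EOF
```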
388 |
389 |
390 |
391 |
392 |
393 |
394 |
--------------------------------------------------------------------------------
/centos-k8s-dashboard.md:
--------------------------------------------------------------------------------
1 | # Installing the Dashboard
2 |
3 | ```shell
4 | # Install the dashboard; inside China you can substitute another source for this yaml
5 | kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
6 |
7 | # Change the service type to NodePort
8 | kubectl patch svc -n kube-system kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
9 |
10 | # Inspect the service (this shows the dashboard exposed on 443:32383/TCP)
11 | kubectl get svc -n kube-system
12 | # --- output ---
13 | NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
14 | kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7h40m
15 | kubernetes-dashboard   NodePort    10.111.77.210   <none>        443:32383/TCP            3h42m
16 | # --- output ---
17 |
18 | # Find which node the dashboard runs on (here it is on 192.168.20.4)
19 | kubectl get pods -A -o wide
20 | # --- output ---
21 | NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
22 | kube-system   coredns-fb8b8dccf-rn8kd                 1/1     Running   0          7h43m   10.244.0.2     master   <none>           <none>
23 | kube-system   coredns-fb8b8dccf-slwr4                 1/1     Running   0          7h43m   10.244.0.3     master   <none>           <none>
24 | kube-system   etcd-master                             1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
25 | kube-system   kube-apiserver-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
26 | kube-system   kube-controller-manager-master          1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
27 | kube-system   kube-flannel-ds-amd64-l8c7c             1/1     Running   0          7h3m    192.168.20.5   master   <none>           <none>
28 | kube-system   kube-flannel-ds-amd64-lcmxw             1/1     Running   1          6h50m   192.168.20.4   node1    <none>           <none>
29 | kube-system   kube-flannel-ds-amd64-pqnln             1/1     Running   1          6h5m    192.168.20.3   node2    <none>           <none>
30 | kube-system   kube-proxy-4kcqb                        1/1     Running   0          7h43m   192.168.20.5   master   <none>           <none>
31 | kube-system   kube-proxy-jcqjd                        1/1     Running   0          6h5m    192.168.20.3   node2    <none>           <none>
32 | kube-system   kube-proxy-vm9sj                        1/1     Running   0          6h50m   192.168.20.4   node1    <none>           <none>
33 | kube-system   kube-scheduler-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
34 | kube-system   kubernetes-dashboard-5f7b999d65-2ltmv   1/1     Running   0          3h45m   10.244.1.2     node1    <none>           <none>
35 | # --- output ---
36 | # If a pod never reaches the Running state, troubleshoot with:
37 | journalctl -f -u kubelet # follow only the kubelet service log
38 | ### If the error is a failed image pull and you cannot get past the GFW, pre-pull the image as follows
39 | ### Run this on the node where kubernetes-dashboard is scheduled:
40 | docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
41 | docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
42 | docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
43 | ```
44 |
45 | From the information above you now know the dashboard's IP and port. Use **Firefox** or an up-to-date **Chrome** to open https://192.168.200.25:32383 (it must be **https**).
46 |
47 | **Note: on CentOS 7 you need to either disable iptables or add a rule:**
48 | ```
49 | # Edit the service unit
50 | vim /lib/systemd/system/docker.service
51 | # Add this rule under the [Service] section
52 | ExecStartPost=/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT
53 | # Restart docker; the dashboard then becomes reachable
54 | systemctl restart docker
55 | ```
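
To confirm the rule took effect after the restart, a quick check:

```sh
iptables -L FORWARD -n --line-numbers | head -5   # the inserted ACCEPT rule should sit at the top
```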
56 |
57 | 
58 |
59 | ```sh
60 | # Create the dashboard admin service account
61 | kubectl create serviceaccount dashboard-admin -n kube-system
62 |
63 | # Bind the account to the cluster-admin role
64 | kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
65 |
66 | # Retrieve the token
67 | kubectl describe secret -n kube-system dashboard-admin-token
68 | # --- output ---
69 | Name: dashboard-admin-token-pb78x
70 | Namespace: kube-system
71 | Labels:              <none>
72 | Annotations: kubernetes.io/service-account.name: dashboard-admin
73 | kubernetes.io/service-account.uid: 166aeb8d-604e-11e9-80d6-080027d8332b
74 |
75 | Type: kubernetes.io/service-account-token
76 |
77 |
78 | Data
79 | ====
80 | ca.crt: 1025 bytes
81 | namespace: 11 bytes
82 | token:
83 | eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbHBzc2oiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOGUxNzM3YjUtNjE3OC0xMWU5LWJlMTktMDAwYzI5M2YxNDg2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.KHTf4_3DJu0liKeoOIoCssmIRXSHM_A4w9XVJKQ44jqEfPSbpwohqKnHxOspWAWsjwRrc3kSQyC9KEDCfTYl91ZY_PzUSqPG8XY58ab1p9q1xUxdDYu3qCyaSHWTQ2dATl1G5nNZQLfrarwWIPurm0BLBLsR1crIQj1P8VGafJJXz-TCQZgiw1OHqB8w89IBUhGrn8vuaIdspNLNZmrl-icjFS4eAevBREwlxqxX0-3-mzTFE8xqCHyfJ7pKpK-Jv1jSpuHjb0CfDPvNBuAGp5jQG44Ya6wq1BcqQO4RiQ07hjfIrnwmfWyZWmBn9YLvBVByupLv872kUUSSxjxxbg
84 | # ------
85 | ```
86 |
87 | Use the token shown above to log in to the dashboard.
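
Since the real secret name carries a random suffix, the token can also be extracted in one line (a convenience sketch):

```sh
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep dashboard-admin-token | awk '{print $1}') | grep ^token
```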
88 |
89 | 
90 |
--------------------------------------------------------------------------------
/harbor-docker.md:
--------------------------------------------------------------------------------
1 | # Setting up Harbor, an enterprise-grade Docker registry, on CentOS 7
2 |
3 | ### Install Docker
4 |
5 | ```shell
6 | curl -fsSL "https://get.docker.com/" | sh
7 | systemctl enable --now docker
8 | ```
9 |
10 | ### Install docker-compose
11 |
12 | ```shell
13 | yum update -y
14 | yum install python-pip
15 | pip install --upgrade setuptools # an outdated setuptools version may cause errors
16 | pip install docker-compose # if this errors, try adding --ignore-installed
17 | ```
18 |
19 | ### Install Harbor
20 |
21 | ```shell
22 | wget -P /usr/local/src/ https://storage.googleapis.com/harbor-releases/release-1.7.0/harbor-online-installer-v1.7.5.tgz # online installer
23 | # See https://github.com/goharbor/harbor/releases/ for the latest version
24 |
25 | cd /usr/local/src/
26 | tar zxf harbor-online-installer-v1.7.5.tgz -C /usr/local/
27 | cd /usr/local/harbor/
28 | bash install.sh # add --with-clair to enable image vulnerability scanning
29 | ```
30 |
31 | ### Configuration file
32 |
33 | ```shell
34 | vim /usr/local/harbor/harbor.cfg # Harbor configuration file
35 | 
36 | # Find and edit the following entries
37 | hostname = test.com # the access domain; to use a different port, append it, e.g. test.com:8080
38 | # Mail settings (fill in your real account)
39 | email_server = smtp.qq.com
40 | email_server_port = 465
41 | email_username = test@qq.com
42 | email_password = 123456
43 | email_from = test@qq.com # in testing, mail is only sent when this matches email_username
44 | email_ssl = true # enable SSL and use port 465; with SSL off, use port 25
45 | # Disable user self-registration
46 | self_registration = off
47 | # Only administrators may create projects
48 | project_creation_restriction = adminonly
49 | # Set the administrator password
50 | harbor_admin_password = 123456
51 | ```
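
Note that edits to harbor.cfg do not take effect on their own: either rerun install.sh (as noted later in this guide) or regenerate the runtime config and restart, roughly as follows:

```shell
cd /usr/local/harbor/
docker-compose down      # stop the stack (no -v, so data volumes are kept)
./prepare                # re-render the runtime config from harbor.cfg
docker-compose up -d     # start Harbor with the new settings
```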
52 |
53 | ### Managing the Harbor containers
54 |
55 | ```shell
56 | cd /usr/local/harbor/
57 | docker-compose ps # list the Harbor containers; they are already running after installation
58 |
59 | # ---------- control ----------
60 | # Must be run in /usr/local/harbor/, or pass -f with the path to docker-compose.yml
61 | # Start Harbor
62 | docker-compose start
63 | # Stop Harbor
64 | docker-compose stop
65 | # Restart Harbor
66 | docker-compose restart
67 | # Tear down Harbor
68 | docker-compose down -v # -v also removes the volumes
69 | # Recreate and start
70 | docker-compose up -d
71 | # ---------- control ----------
72 | ```
73 |
74 | #### Changing the nginx port (if needed)
75 |
76 | ```shell
77 | vim /usr/local/harbor/docker-compose.yml
78 | # Under proxy, change 80:80 to 8080:80 to serve Harbor on port 8080
79 | docker-compose stop proxy # proxy is in fact nginx
80 | docker-compose up -d proxy # bring nginx back up
81 | netstat -lntp # list listening ports; a docker-proxy entry on 8080 means the change worked
82 | # If there is a security group or firewall, open the port first
83 | ```
84 |
85 | ### Accessing the web UI
86 |
87 | Log in as admin with the default password Harbor12345, or with the password you set in the configuration file above.
88 |
89 | 
90 |
91 | + By default anyone can create an account and log in; the installation configuration above simply disables self-registration.
92 |
93 | 
94 |
95 | 
96 |
97 | + The system settings cover mail configuration, authentication, garbage collection and so on, but not the web port.
98 |
99 | 
100 |
101 | + The vulnerability scan identifies vulnerable components present in an image and provides remediation suggestions.
102 |
103 | ### Pushing and pulling images
104 |
105 | ```shell
106 | # Because serving on port 80 would require ICP registration, the Harbor page here has been moved to port 8080 (note: after changing hostname in harbor.cfg you must rerun install.sh)
107 | vim /etc/docker/daemon.json
108 | # Add: {"insecure-registries":["test.com:8080"]}
109 | docker login test.com:8080 # try logging in
110 | # Write a Dockerfile
111 | mkdir ~/test_harbor && cd ~/test_harbor
112 | cat << EOF > Dockerfile
113 | FROM nginx:latest
114 | MAINTAINER test "test@qq.com"
115 | # Set environment variables
116 | ENV LANG=C.UTF-8 TZ=Asia/Shanghai
117 | EOF
118 | # Build the image
119 | docker build -t test.com:8080/library/nginx:latest .
120 | # Push the image to the remote registry
121 | docker push test.com:8080/library/nginx:latest
122 | # Pull the image from the remote registry
123 | docker pull test.com:8080/library/nginx:latest
124 | ```
125 |
126 | [Reference](https://www.cnblogs.com/pangguoping/p/7650014.html)
127 |
128 | [Why do we need Harbor when Docker registry exists?](https://blog.csdn.net/jessise_zhan/article/details/80130104)
129 |
130 |
131 |
132 |
133 |
134 |
135 |
136 |
137 |
138 |
--------------------------------------------------------------------------------
/rancher-k8s-install.md:
--------------------------------------------------------------------------------
1 | ## One-click k8s cluster deployment with Rancher
2 |
3 | Highlights of this approach: simple, foolproof, fast... It skips the tedious steps of a kubeadm install entirely; honestly startling... from installation to a working cluster in under half an hour.
4 |
5 | [Official documentation](https://www.cnrancher.com/docs/rancher/v2.x/cn/overview/quick-start-guide/)
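
For reference, the quick start boils down to launching the Rancher server container on the Ubuntu host and then following the UI to create the cluster (from the Rancher 2.x quick-start docs; check the link above for the current form):

```shell
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
```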
6 |
--------------------------------------------------------------------------------
/useful-k8s-bash.md:
--------------------------------------------------------------------------------
1 | ```bash
2 | journalctl -xeu kubelet # kubelet logs with extra explanation, newest entries
3 | 
4 | kubeadm reset # wipe the state left by a failed init/join and start over
5 | 
6 | kubeadm init --apiserver-advertise-address=192.168.200.24 --kubernetes-version v1.13.5 --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
7 | 
8 | systemctl status kube-apiserver.service # check the apiserver service status
9 | 
10 | netstat -lnp|grep 10250 # see what is listening on the kubelet port
11 | 
12 | ps -ef|grep kube # list all kube-related processes
13 | ```
14 |
15 |
16 |
--------------------------------------------------------------------------------