├── README.md ├── addons ├── dashboard │ ├── MAINTAINERS.md │ ├── README.md │ ├── dashboard-controller.yaml │ └── dashboard-service.yaml ├── dns │ ├── skydns-rc.yaml │ └── skydns-svc.yaml ├── haconfd │ ├── Dockerfile │ ├── docker-entrypoint.sh │ ├── etc │ │ ├── confd │ │ │ ├── conf.d │ │ │ │ └── haproxy.toml │ │ │ ├── confd.toml │ │ │ └── templates │ │ │ │ ├── haproxy-kube-system.tmpl │ │ │ │ ├── haproxy.tmpl │ │ │ │ └── haproxy_beautiful.tmpl │ │ └── haproxy │ │ │ ├── errors │ │ │ ├── 400.http │ │ │ ├── 403.http │ │ │ ├── 408.http │ │ │ ├── 500.http │ │ │ ├── 502.http │ │ │ ├── 503.http │ │ │ └── 504.http │ │ │ └── haproxy.cfg │ └── haproxy-confd.yaml ├── heapster │ ├── grafana-service.yaml │ ├── heapster-controller-without-addon-resizer.yaml │ ├── heapster-controller.yaml │ ├── heapster-service.yaml │ ├── heapster-serviceaccount.yaml │ ├── influxdb-grafana-controler.yaml │ └── influxdb-service.yaml ├── ingress │ ├── nginx-controller-without-httpcheck.yaml │ └── nginx-controller.yaml └── prometheus │ ├── grafana.ini │ ├── prometheus-cm.yaml │ ├── prometheus-dm-use-hostpath.yaml │ ├── prometheus-dm.yaml │ ├── prometheus-pv-pvc.yaml │ ├── prometheus-svc.yaml │ └── prometheus.yml ├── base ├── agent │ ├── flannel.yaml │ ├── kube-proxy.yaml │ └── kubelet.service ├── config.yml ├── docker.service ├── haka │ ├── flannel.yaml │ ├── haproxy-keepalived.yaml │ ├── haproxy │ │ ├── Dockerfile │ │ ├── haproxy.cfg │ │ ├── haproxy.cfg.etc │ │ └── haproxy.sh │ └── keepalived │ │ ├── Dockerfile │ │ ├── entrypoint.sh │ │ ├── health.sh │ │ ├── keepalived.conf.backup1 │ │ ├── keepalived.conf.backup2 │ │ ├── keepalived.conf.master │ │ └── notify.sh ├── k8s-log.cron ├── master │ ├── etcd.yaml │ ├── flannel.yaml │ ├── kube-apiserver.yaml │ ├── kube-controller-manager.yaml │ ├── kube-scheduler.yaml │ └── kubelet.service └── tools.sh ├── doc ├── base_env.md ├── haproxy_keepalived.md ├── k8s_master_moudle.md └── kubernetes生产环境配置信息.xlsx ├── images ├── Architecture.png └── Architecture.svg ├── network ├── calico │ ├── calico-without-auth.yaml │ ├── calico.images │ ├── calico.yaml │ ├── ipPool.yaml │ ├── kube-apiserver.yaml │ ├── kubelet.service │ ├── policy.yaml │ └── profile.yaml └── flannel │ ├── flannel.yaml │ └── restart_docker.sh └── storage └── rbd ├── nginx-pvc.yaml └── rbd.yaml /README.md: -------------------------------------------------------------------------------- 1 | # deployk8s 2 | > - Questions can be asked at [dockone.io](http://dockone.io/people/xwisen), or filed directly as issues 3 | > - I work on container orchestration, and this project is essentially a set of working notes. It covers the popular kubernetes and mesos orchestration tools plus the surrounding networking, storage, and monitoring solutions (ps: for reference only; it may be a bit hard for newcomers. Example `yaml` files take priority). 4 | ## Deployment documentation 5 | # Notes 6 | **For high availability there must be at least 2 master nodes; 3 are recommended**
7 | **The documentation is divided into six parts**
8 | 9 | 1. [Base images and environment preparation](/doc/base_env.md) 10 | 2. [Master node deployment](/doc/k8s_master_moudle.md) 11 | 3. [Using haproxy to take load off a single kube-apiserver, with keepalived for high availability](/doc/haproxy_keepalived.md) 12 | 4. [Agent node deployment](/doc/agent_moudle.md) 13 | 5. [Add-on (plugin) deployment](/doc/plugins_install.md) 14 | 6. [Network solution deployment](/doc/network_install.md) 15 | ## Directory structure 16 | > ##### Attentions: 17 | * Version notes: the OS is CentOS 7.2, the Docker version is 1.10+ (currently 1.12.6), and Kubernetes was originally 1.4.6; the Kubernetes core components were later upgraded to 1.5.1, but the add-ons have not been updated. 18 | * For Kubernetes and its surrounding components my rule is: `whatever can be containerized never runs directly on the host, and whatever can be deployed as an application never runs as a static pod` 19 | * - [x] means worth paying attention to 20 | * - [ ] means it can be skipped 21 | 22 | - [ ] [/addons](/addons) `some Kubernetes add-ons, mostly deployed as deployments/replication controllers` 23 | - [x] [/dashboard](/addons/dashboard) `the dashboard provided upstream by Kubernetes` 24 | - [x] [/dns](/addons/dns) `the DNS add-on provided upstream by Kubernetes` 25 | - [x] [/haconfd](/addons/haconfd) `haproxy + confd for external access to applications, similar in principle to ingress; currently it can only be deployed as a static pod, and the routing-rule templates have to be configured by hand` 26 | - [ ] [/ingress](/addons/ingress) `another way to expose applications externally in Kubernetes; see the official documentation for details` 27 | - [x] [/prometheus](/addons/prometheus) `prometheus is a popular monitoring and alerting tool that, like Kubernetes, belongs to the CNCF` 28 | - [x] [/base](/base) `the base directory contains the core components of Kubernetes and its high-availability setup` 29 | - [x] [/agent](/base/agent) `Kubernetes agent node services: kubelet (systemd service) and kube-proxy (static pod)` 30 | - [x] [/haka](/base/haka) `haproxy + keepalived (systemd service), the key components for kube-apiserver high availability` 31 | - [x] [/master](/base/master) `Kubernetes master node services: kubelet (systemd service) and etcd/kube-apiserver/kube-controller-manager/kube-scheduler (static pods)` 32 | - [ ] [/config.yml](/base/config.yml) `registry configuration file, needed when images must be deleted` 33 | - [x] [/docker.service](/base/docker.service) `example configuration for the shared docker (systemd service) base service` 34 | - [x] [/k8s-log.cron](/base/k8s-log.cron) `scheduled log-handling script` 35 | - [ ] [/tools.sh](/base/tools.sh) `script for image registry operations` 36 | - [ ] [/doc](/doc) 37 | - [ ] [/images](/images) 38 | - [ ] [/network](/network) `optional container network solutions for Kubernetes` 39 | - [x] [/calico](/network/calico) `calico network solution, deployed as a daemonset; IPIP is enabled on the default ippool [ps: recommended]` 40 | - [x] [/flannel](/network/flannel) `flannel network solution, deployed as a static pod (after deployment succeeds, restart docker with the restart_docker.sh script) (ps: considering switching to a daemonset deployment)` 41 | - [ ] [/storage](/storage) `optional container storage solutions for Kubernetes` 42 | - [x] [/rbd](/storage/rbd) `example files for wiring Kubernetes up to ceph rbd (ps: yaml files take priority)` 43 | ## Production high-availability Kubernetes component architecture diagram 44 | ![Architecture](images/Architecture.png) 45 | -------------------------------------------------------------------------------- /addons/dashboard/MAINTAINERS.md: -------------------------------------------------------------------------------- 1 | # Maintainers 2 | 3 | Piotr Bryk and committers to the https://github.com/kubernetes/dashboard repository. 4 | 5 | 6 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dashboard/MAINTAINERS.md?pixel)]() 7 | -------------------------------------------------------------------------------- /addons/dashboard/README.md: -------------------------------------------------------------------------------- 1 | # Kubernetes Dashboard 2 | ============== 3 | 4 | Kubernetes Dashboard is a general purpose, web-based UI for Kubernetes clusters. 5 | It allows users to manage applications running in the cluster, troubleshoot them, 6 | as well as manage the cluster itself.
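As a quick sketch of how the manifests in this directory are used in this repo (assuming `kubectl` is already pointed at this cluster's kube-apiserver):

```sh
# Create the dashboard ReplicationController and Service (both declare the kube-system namespace).
kubectl create -f dashboard-controller.yaml
kubectl create -f dashboard-service.yaml

# Check that the dashboard pod comes up.
kubectl --namespace=kube-system get pods -l k8s-app=kubernetes-dashboard
```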
7 | 8 | Learn more at: https://github.com/kubernetes/dashboard 9 | 10 | 11 | [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/addons/dashboard/README.md?pixel)]() 12 | -------------------------------------------------------------------------------- /addons/dashboard/dashboard-controller.yaml: -------------------------------------------------------------------------------- 1 | # This file should be kept in sync with cluster/gce/coreos/kube-manifests/addons/dashboard/dashboard-controller.yaml 2 | apiVersion: v1 3 | kind: ReplicationController 4 | metadata: 5 | name: kubernetes-dashboard-v1.4.0 6 | namespace: kube-system 7 | labels: 8 | k8s-app: kubernetes-dashboard 9 | version: v1.4.0 10 | kubernetes.io/cluster-service: "true" 11 | spec: 12 | replicas: 1 13 | selector: 14 | k8s-app: kubernetes-dashboard 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: kubernetes-dashboard 19 | version: v1.4.0 20 | kubernetes.io/cluster-service: "true" 21 | annotations: 22 | scheduler.alpha.kubernetes.io/critical-pod: '' 23 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 24 | spec: 25 | containers: 26 | - name: kubernetes-dashboard 27 | image: reg.dnt:5000/google_containers/kubernetes-dashboard-amd64:v1.4.0 28 | args: 29 | - --apiserver-host=http://10.78.198.74:18080 30 | resources: 31 | # keep request = limit to keep this container in guaranteed class 32 | limits: 33 | cpu: 200m 34 | memory: 512Mi 35 | requests: 36 | cpu: 100m 37 | memory: 256Mi 38 | ports: 39 | - containerPort: 9090 40 | livenessProbe: 41 | httpGet: 42 | path: / 43 | port: 9090 44 | initialDelaySeconds: 30 45 | timeoutSeconds: 30 46 | -------------------------------------------------------------------------------- /addons/dashboard/dashboard-service.yaml: -------------------------------------------------------------------------------- 1 | # This file should be kept in sync with cluster/gce/coreos/kube-manifests/addons/dashboard/dashboard-service.yaml 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: kubernetes-dashboard 6 | namespace: kube-system 7 | labels: 8 | k8s-app: kubernetes-dashboard 9 | kubernetes.io/cluster-service: "true" 10 | spec: 11 | selector: 12 | k8s-app: kubernetes-dashboard 13 | ports: 14 | - port: 80 15 | targetPort: 9090 16 | -------------------------------------------------------------------------------- /addons/dns/skydns-rc.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2016 The Kubernetes Authors. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
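# Overview (based on the container specs below): this kube-dns ReplicationController runs three containers:
# kubedns watches the apiserver given by --kube_master_url and answers DNS for --domain=cluster.local. on port 10053,
# dnsmasq caches and forwards *:53 to 127.0.0.1#10053, and
# exechealthz serves /healthz on port 8080 by running nslookup against both resolvers.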
14 | 15 | # TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.* 16 | 17 | # Warning: This is a file generated from the base underscore template file: skydns-rc.yaml.base 18 | 19 | apiVersion: v1 20 | kind: ReplicationController 21 | metadata: 22 | name: kube-dns-v19 23 | namespace: kube-system 24 | labels: 25 | k8s-app: kube-dns 26 | version: v19 27 | kubernetes.io/cluster-service: "true" 28 | spec: 29 | replicas: 1 30 | selector: 31 | k8s-app: kube-dns 32 | version: v19 33 | template: 34 | metadata: 35 | labels: 36 | k8s-app: kube-dns 37 | version: v19 38 | kubernetes.io/cluster-service: "true" 39 | annotations: 40 | scheduler.alpha.kubernetes.io/critical-pod: '' 41 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 42 | spec: 43 | containers: 44 | - name: kubedns 45 | image: reg.dnt:5000/google_containers/kubedns-amd64:1.7 46 | resources: 47 | # TODO: Set memory limits when we've profiled the container for large 48 | # clusters, then set request = limit to keep this container in 49 | # guaranteed class. Currently, this container falls into the 50 | # "burstable" category so the kubelet doesn't backoff from restarting it. 51 | limits: 52 | cpu: 200m 53 | memory: 512Mi 54 | requests: 55 | cpu: 100m 56 | memory: 256Mi 57 | livenessProbe: 58 | httpGet: 59 | path: /healthz 60 | port: 8080 61 | scheme: HTTP 62 | initialDelaySeconds: 60 63 | timeoutSeconds: 5 64 | successThreshold: 1 65 | failureThreshold: 5 66 | readinessProbe: 67 | httpGet: 68 | path: /readiness 69 | port: 8081 70 | scheme: HTTP 71 | # we poll on pod startup for the Kubernetes master service and 72 | # only setup the /readiness HTTP server once that's available. 73 | initialDelaySeconds: 30 74 | timeoutSeconds: 5 75 | args: 76 | # command = "/kube-dns" 77 | #command: 78 | # - /kube-dns 79 | - --kube_master_url=http://10.78.198.74:18080 80 | - --domain=cluster.local. 81 | - --dns-port=10053 82 | ports: 83 | - containerPort: 10053 84 | name: dns-local 85 | protocol: UDP 86 | - containerPort: 10053 87 | name: dns-tcp-local 88 | protocol: TCP 89 | - name: dnsmasq 90 | image: reg.dnt:5000/google_containers/kube-dnsmasq-amd64:1.3 91 | args: 92 | - --cache-size=1000 93 | - --no-resolv 94 | - --server=127.0.0.1#10053 95 | ports: 96 | - containerPort: 53 97 | name: dns 98 | protocol: UDP 99 | - containerPort: 53 100 | name: dns-tcp 101 | protocol: TCP 102 | - name: healthz 103 | image: reg.dnt:5000/google_containers/exechealthz-amd64:1.1 104 | resources: 105 | limits: 106 | memory: 50Mi 107 | requests: 108 | cpu: 10m 109 | # Note that this container shouldn't really need 50Mi of memory. The 110 | # limits are set higher than expected pending investigation on #29688. 111 | # The extra memory was stolen from the kubedns container to keep the 112 | # net memory requested by the pod constant. 113 | memory: 50Mi 114 | args: 115 | - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null && nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null 116 | - -port=8080 117 | - -quiet 118 | ports: 119 | - containerPort: 8080 120 | protocol: TCP 121 | dnsPolicy: Default # Don't use cluster DNS. 122 | -------------------------------------------------------------------------------- /addons/dns/skydns-svc.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2016 The Kubernetes Authors. 
2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 14 | 15 | # TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.* 16 | 17 | # Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base 18 | 19 | apiVersion: v1 20 | kind: Service 21 | metadata: 22 | name: kube-dns 23 | namespace: kube-system 24 | labels: 25 | k8s-app: kube-dns 26 | kubernetes.io/cluster-service: "true" 27 | kubernetes.io/name: "KubeDNS" 28 | spec: 29 | selector: 30 | k8s-app: kube-dns 31 | clusterIP: 10.0.1.1 32 | ports: 33 | - name: dns 34 | port: 53 35 | protocol: UDP 36 | - name: dns-tcp 37 | port: 53 38 | protocol: TCP 39 | -------------------------------------------------------------------------------- /addons/haconfd/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM reg.dnt:5000/haproxy:1.6.9-alpine 2 | MAINTAINER xwisen <1031649164@qq.com> 3 | 4 | ADD ./confd /usr/local/sbin/ 5 | ADD ./docker-entrypoint.sh / 6 | RUN chmod +x /usr/local/sbin/confd && chmod +x /docker-entrypoint.sh && touch /var/run/haproxy.pid 7 | COPY ./etc/ /etc/ 8 | 9 | CMD ["confd"] 10 | -------------------------------------------------------------------------------- /addons/haconfd/docker-entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | set -e 3 | echo "start confd ++++++++++++++++++++" 4 | confd -interval 10 -backend "etcd" -confdir "/etc/confd" -watch -log-level debug -node http://$ETCD_HOST:14001 -config-file /etc/confd/conf.d/haproxy.toml 5 | -------------------------------------------------------------------------------- /addons/haconfd/etc/confd/conf.d/haproxy.toml: -------------------------------------------------------------------------------- 1 | [template] 2 | 3 | # The name of the template that will be used to render the application's configuration file 4 | # Confd will look in `/etc/conf.d/templates` for these files by default 5 | src = "haproxy.tmpl" 6 | 7 | # The location to place the rendered configuration file 8 | dest = "/etc/haproxy/haproxy.cfg" 9 | 10 | # The etcd keys or directory to watch. This is where the information to fill in 11 | # the template will come from. 
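# (The keys below are the raw etcd v2 paths where the apiserver stores Endpoints objects, assuming the
# default --etcd-prefix=/registry; that is what lets confd watch endpoint changes directly in etcd.)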
12 | keys = [ 13 | "/registry/services/endpoints/kube-system", 14 | ] 15 | 16 | # File ownership and mode information 17 | owner = "root" 18 | mode = "0644" 19 | 20 | # These are the commands that will be used to check whether the rendered config is 21 | # valid and to reload the actual service once the new config is in place 22 | check_cmd = "/usr/local/sbin/haproxy -c -q -f {{ .src }}" 23 | # reload_cmd = "echo '[kubernetes-endpoint-proxy] Reloading HAProxy'; pkill haproxy" 24 | reload_cmd = "/usr/local/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -D -sf `cat /var/run/haproxy.pid`" 25 | -------------------------------------------------------------------------------- /addons/haconfd/etc/confd/confd.toml: -------------------------------------------------------------------------------- 1 | backend = "etcd" 2 | confdir = "/etc/confd" 3 | debug = true 4 | interval = 10 5 | nodes = [ 6 | "http://10.78.198.74:14001", 7 | ] 8 | watch = true 9 | quiet = true 10 | verbose = true 11 | -------------------------------------------------------------------------------- /addons/haconfd/etc/confd/templates/haproxy-kube-system.tmpl: -------------------------------------------------------------------------------- 1 | global 2 | maxconn 4096 3 | log /dev/log local0 4 | log /dev/log local1 notice 5 | debug 6 | #user haproxy 7 | #group haproxy 8 | 9 | defaults 10 | log global 11 | mode http 12 | option httplog 13 | option dontlognull 14 | option redispatch 15 | retries 3 16 | maxconn 2000 17 | timeout connect 5000 18 | timeout client 50000 19 | timeout server 50000 20 | errorfile 400 /etc/haproxy/errors/400.http 21 | errorfile 403 /etc/haproxy/errors/403.http 22 | errorfile 408 /etc/haproxy/errors/408.http 23 | errorfile 500 /etc/haproxy/errors/500.http 24 | errorfile 502 /etc/haproxy/errors/502.http 25 | errorfile 503 /etc/haproxy/errors/503.http 26 | errorfile 504 /etc/haproxy/errors/504.http 27 | 28 | frontend http-stats 29 | bind *:10081 30 | mode http 31 | stats enable 32 | stats realm DCOS\ Haproxy 33 | stats auth admin:zjdcos01 34 | stats uri /status 35 | frontend http-in 36 | bind *:80 37 | #-------------------frontend acl list 38 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}}{{$svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }} {{if $svc}}{{range $spec := getvs $svc}}{{ $data := json $spec }}{{range $subset := $data.subsets}}{{if printf "%s" $subset}}{{range $port := $subset.ports}}{{if printf "%s" $port}}{{if eq $port.protocol "TCP" "HTTP"}} 39 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 40 | acl ::{{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}}-aclrule hdr(Host) -i {{$data.metadata.name}}.reg.dnt 41 | use_backend ::{{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} if ::{{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}}-aclrule {{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}} 42 | 43 | #-------------------bankend app list 44 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}}{{$svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }}{{if $svc}}{{range $spec := getvs $svc}}{{ $data := json $spec }}{{range $subset := $data.subsets}}{{if printf "%s" $subset}}{{range $port := $subset.ports}}{{if printf "%s" $port}}{{if eq $port.protocol "TCP" "HTTP"}} 45 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 46 | backend ::{{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 47 | option accept-invalid-http-request 48 | option accept-invalid-http-response 
{{range $address := $subset.addresses}} 49 | cookie SESSION_COOKIE insert indirect nocache httponly secure 50 | #reqrep ^([^\ :]*)\ /{{$data.metadata.name}} \1\2 51 | server ::{{$data.metadata.name}}-{{$port.protocol}}-{{$address.ip}}-{{$port.port}} {{$address.ip}}:{{$port.port}} cookie {{$address.ip}}:{{$port.port}} weight 2 check inter 30000 fall 3{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}} 52 | -------------------------------------------------------------------------------- /addons/haconfd/etc/confd/templates/haproxy.tmpl: -------------------------------------------------------------------------------- 1 | global 2 | maxconn 4096 3 | log /dev/log local0 4 | log /dev/log local1 notice 5 | debug 6 | #user haproxy 7 | #group haproxy 8 | 9 | defaults 10 | log global 11 | mode http 12 | option httplog 13 | option dontlognull 14 | option redispatch 15 | retries 3 16 | maxconn 2000 17 | timeout connect 5000 18 | timeout client 50000 19 | timeout server 50000 20 | errorfile 400 /etc/haproxy/errors/400.http 21 | errorfile 403 /etc/haproxy/errors/403.http 22 | errorfile 408 /etc/haproxy/errors/408.http 23 | errorfile 500 /etc/haproxy/errors/500.http 24 | errorfile 502 /etc/haproxy/errors/502.http 25 | errorfile 503 /etc/haproxy/errors/503.http 26 | errorfile 504 /etc/haproxy/errors/504.http 27 | 28 | frontend http-stats 29 | bind *:10080 30 | mode http 31 | stats enable 32 | stats realm DCOS\ Haproxy 33 | stats auth admin:zjdcos01 34 | stats uri /status 35 | frontend http-in 36 | bind *:80 37 | #-------------------frontend acl list 38 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}}{{$svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }} {{if $svc}}{{range $spec := getvs $svc}}{{ $data := json $spec }}{{range $subset := $data.subsets}}{{if printf "%s" $subset}}{{range $port := $subset.ports}}{{if printf "%s" $port}}{{if eq $port.protocol "TCP" "HTTP"}} 39 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 40 | acl {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} path_beg -i {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 41 | use_backend {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} if {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}} 42 | 43 | #-------------------bankend app list 44 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}}{{$svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }}{{if $svc}}{{range $spec := getvs $svc}}{{ $data := json $spec }}{{range $subset := $data.subsets}}{{if printf "%s" $subset}}{{range $port := $subset.ports}}{{if printf "%s" $port}}{{if eq $port.protocol "TCP" "HTTP"}} 45 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 46 | backend {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 47 | balance source {{range $address := $subset.addresses}} 48 | server {{$data.metadata.name}}-{{$port.protocol}}-{{$address.ip}}-{{$port.port}} {{$address.ip}}:{{$port.port}} weight 2 check inter 30000{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}}{{end}} 49 | -------------------------------------------------------------------------------- /addons/haconfd/etc/confd/templates/haproxy_beautiful.tmpl: -------------------------------------------------------------------------------- 1 | global 2 | maxconn 4096 3 | log /dev/log local0 4 | log /dev/log local1 notice 5 | #user haproxy 6 | #group haproxy 7 | 8 | defaults 9 | log global 10 | mode http 11 | option httplog 12 | 
option dontlognull 13 | option redispatch 14 | retries 3 15 | maxconn 2000 16 | timeout connect 5000 17 | timeout client 50000 18 | timeout server 50000 19 | errorfile 400 /etc/haproxy/errors/400.http 20 | errorfile 403 /etc/haproxy/errors/403.http 21 | errorfile 408 /etc/haproxy/errors/408.http 22 | errorfile 500 /etc/haproxy/errors/500.http 23 | errorfile 502 /etc/haproxy/errors/502.http 24 | errorfile 503 /etc/haproxy/errors/503.http 25 | errorfile 504 /etc/haproxy/errors/504.http 26 | 27 | frontend http-stats 28 | bind *:10081 29 | mode http 30 | stats enable 31 | stats realm DCOS\ Haproxy 32 | stats auth admin:zjdcos01 33 | stats uri /status 34 | frontend http-in 35 | bind *:80 36 | # frontend acl 37 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}} 38 | {{if $svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }} 39 | {{range $spec := getvs $svc}} 40 | {{ $data := json $spec }} 41 | {{range $subset := $data.subsets}} 42 | {{if printf "%s" $subset}} 43 | {{range $port := $subset.ports}} 44 | {{if printf "%s" $port}} 45 | {{if eq $port.protocol "TCP" "HTTP"}} 46 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 47 | acl {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} path_beg -i {{$data.metadata.name}}-{{$port.port}} 48 | use_backend {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} if {{$data.metadata.name}}-{{$port.port}} 49 | {{end}} 50 | {{end}} 51 | {{end}} 52 | {{end}} 53 | {{end}} 54 | {{end}} 55 | {{end}} 56 | {{end}} 57 | # bankend list 58 | {{range $svcs := ls "/registry/services/endpoints/kube-system"}} 59 | {{if $svc := printf "/registry/services/endpoints/kube-system/%s" $svcs }} 60 | {{range $spec := getvs $svc}} 61 | {{ $data := json $spec }} 62 | {{range $subset := $data.subsets}} 63 | {{if printf "%s" $subset}} 64 | {{range $port := $subset.ports}} 65 | {{if printf "%s" $port}} 66 | {{if eq $port.protocol "TCP" "HTTP"}} 67 | # {{$data.metadata.name}}-{{$port.protocol}}-{{$port.port}} 68 | backend {{$data.metadata.name}}-{{$port.port}} 69 | balance leastconn 70 | {{range $address := $subset.addresses}} 71 | server {{$data.metadata.name}}-{{$port.protocol}}-{{$port.protocol}}-{{$address.ip}}-{{$port.port}} {{$address.ip}}:{{$port.port}} check inter 30000 72 | {{end}} 73 | {{end}} 74 | {{end}} 75 | {{end}} 76 | {{end}} 77 | {{end}} 78 | {{end}} 79 | {{end}} 80 | {{end}} 81 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/400.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 400 Bad request 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 400 166 |
167 |
168 | Oops, something doesn't appear right 169 | with your request 170 |
171 |
172 |
173 |
174 | 175 |

176 | Go back to the previous page and try again. 177 | If you think something is broken, report a problem. 178 |

179 |
180 |
181 | Go To Homepage 182 | 183 |
184 |
185 | 186 | 197 | 198 | 199 | 200 | 201 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/403.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 403 Forbidden 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 403 166 |
167 |
168 | Oops, you need more
permission to 169 | view this page. 170 |
171 |
172 |
173 |
174 | 175 |

176 | You may want to head back to the homepage.
177 | If you think something is broken, report a problem. 178 |

179 |
180 |
181 | Go To Homepage 182 | 183 |
184 |
185 | 186 | 197 | 198 | 199 | 200 | 201 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/408.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 408 Request Time-out 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 408 166 |
167 |
168 | Oops, the server took too long to respond. 169 |
170 |
171 |
172 |
173 | 174 |

175 | Go back to the previous page and try again. 176 | If you think something is broken, report a problem. 177 |

178 |
179 |
180 | Go To Homepage 181 | 182 |
183 |
184 | 185 | 196 | 197 | 198 | 199 | 200 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/500.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 500 Server Error 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 500 166 |
167 |
168 | Looks like we're having
169 | some server issues. 170 |
171 |
172 |
173 |
174 | 175 |

176 | Go back to the previous page and try again. 177 | If you think something is broken, report a problem. 178 |

179 | 180 | 181 |
182 |
183 | 184 |
185 |
186 | 187 | 198 | 199 | 200 | 201 | 202 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/502.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 502 Bad Gateway 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 502 166 |
167 |
168 | Looks like we're having
169 | some server issues. 170 |
171 |
172 |
173 |
174 | 175 |

176 | Go back to the previous page and try again. 177 | If you think something is broken, report a problem. 178 |

179 | 180 | 181 |
182 |
183 | 184 |
185 |
186 | 187 | 198 | 199 | 200 | 201 | 202 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/503.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 503 Service Unavailable 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 503 166 |
167 |
168 | Looks like we're having
169 | some server issues. 170 |
171 |
172 |
173 |
174 | 175 |

176 | Go back to the previous page and try again. 177 | If you think something is broken, report a problem. 178 |

179 | 180 | 181 |
182 |
183 | 184 |
185 |
186 | 187 | 198 | 199 | 200 | 201 | 202 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/errors/504.http: -------------------------------------------------------------------------------- 1 | HTTP/1.0 504 Gateway Time-out 2 | Cache-Control: no-cache 3 | Connection: close 4 | Content-Type: text/html 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 19 | 132 | 161 | 162 | 163 |
164 |
165 | 504 166 |
167 |
168 | Looks like we're having
169 | some server issues. 170 |
171 |
172 |
173 |
174 | 175 |

176 | Go back to the previous page and try again. 177 | If you think something is broken, report a problem. 178 |

179 | 180 | 181 |
182 |
183 | 184 |
185 |
186 | 187 | 198 | 199 | 200 | 201 | 202 | -------------------------------------------------------------------------------- /addons/haconfd/etc/haproxy/haproxy.cfg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/xwisen/deployk8s/74831a5f1c347d1b56f97d077bbdd6da2eb4d791/addons/haconfd/etc/haproxy/haproxy.cfg -------------------------------------------------------------------------------- /addons/haconfd/haproxy-confd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: haproxy-confd 5 | namespace: system 6 | labels: 7 | name: haproxy-confd 8 | enable: "true" 9 | spec: 10 | hostNetwork: true 11 | containers: 12 | - image: reg.dnt:5000/confd:2016101701 13 | name: haproxy-confd 14 | resources: 15 | limits: 16 | cpu: 1000m 17 | memory: 2048Mi 18 | requests: 19 | cpu: 500m 20 | memory: 1024Mi 21 | env: 22 | - name: ETCD_HOST 23 | value: "10.78.198.74" 24 | livenessProbe: 25 | httpGet: 26 | path: / 27 | port: 80 28 | initialDelaySeconds: 30 29 | timeoutSeconds: 30 30 | volumeMounts: 31 | - mountPath: /etc/haproxy 32 | name: haproxycfg 33 | - mountPath: /etc/confd 34 | name: confdcfg 35 | volumes: 36 | - hostPath: 37 | path: /opt/haproxy 38 | name: haproxycfg 39 | - hostPath: 40 | path: /opt/confd 41 | name: confdcfg 42 | -------------------------------------------------------------------------------- /addons/heapster/grafana-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: monitoring-grafana 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "Grafana" 9 | spec: 10 | # On production clusters, consider setting up auth for grafana, and 11 | # exposing Grafana either using a LoadBalancer or a public IP.
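# Without a LoadBalancer, Grafana can also be reached through the apiserver proxy path
# /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/, which matches the
# GF_SERVER_ROOT_URL value set in influxdb-grafana-controler.yaml.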
12 | # type: LoadBalancer 13 | ports: 14 | - port: 80 15 | targetPort: 3000 16 | selector: 17 | k8s-app: influxGrafana 18 | -------------------------------------------------------------------------------- /addons/heapster/heapster-controller-without-addon-resizer.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: heapster-v1.2.0 5 | namespace: kube-system 6 | labels: 7 | k8s-app: heapster 8 | kubernetes.io/cluster-service: "true" 9 | version: v1.2.0 10 | spec: 11 | replicas: 1 12 | selector: 13 | matchLabels: 14 | k8s-app: heapster 15 | version: v1.2.0 16 | template: 17 | metadata: 18 | labels: 19 | k8s-app: heapster 20 | version: v1.2.0 21 | spec: 22 | containers: 23 | - image: reg.dnt:5000/google_containers/heapster:v1.2.0 24 | name: heapster 25 | resources: 26 | # keep request = limit to keep this container in guaranteed class 27 | limits: 28 | cpu: 250m 29 | memory: 500Mi 30 | requests: 31 | cpu: 500m 32 | memory: 1000Mi 33 | command: 34 | - /heapster 35 | - --source=kubernetes:http://20.26.2.110:18080?insecure=true&inClusterConfig=false&useServiceAccount=false&auth= 36 | - --sink=log 37 | - image: reg.dnt:5000/google_containers/heapster:v1.2.0 38 | name: eventer 39 | resources: 40 | # keep request = limit to keep this container in guaranteed class 41 | limits: 42 | cpu: 250m 43 | memory: 500Mi 44 | requests: 45 | cpu: 500m 46 | memory: 1000Mi 47 | command: 48 | - /eventer 49 | - --source=kubernetes:http://20.26.2.110:18080?insecure=true&inClusterConfig=false&useServiceAccount=false&auth= 50 | - --sink=log 51 | -------------------------------------------------------------------------------- /addons/heapster/heapster-controller.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: heapster-v1.2.0 5 | namespace: kube-system 6 | labels: 7 | k8s-app: heapster 8 | kubernetes.io/cluster-service: "true" 9 | version: v1.2.0 10 | spec: 11 | replicas: 1 12 | selector: 13 | matchLabels: 14 | k8s-app: heapster 15 | version: v1.2.0 16 | template: 17 | metadata: 18 | labels: 19 | k8s-app: heapster 20 | version: v1.2.0 21 | spec: 22 | containers: 23 | - image: reg.dnt:5000/google_containers/heapster:v1.2.0 24 | name: heapster 25 | resources: 26 | # keep request = limit to keep this container in guaranteed class 27 | limits: 28 | cpu: 100m 29 | memory: 256Mi 30 | requests: 31 | cpu: 100m 32 | memory: 256Mi 33 | command: 34 | - /heapster 35 | - --source=kubernetes:http://20.26.2.110:18080?insecure=true&inClusterConfig=false&useServiceAccount=false&auth= 36 | - --sink=log 37 | - image: reg.dnt:5000/google_containers/heapster:v1.2.0 38 | name: eventer 39 | resources: 40 | # keep request = limit to keep this container in guaranteed class 41 | limits: 42 | cpu: 100m 43 | memory: 256Mi 44 | requests: 45 | cpu: 100m 46 | memory: 256Mi 47 | command: 48 | - /eventer 49 | - --source=kubernetes:http://20.26.2.110:18080?insecure=true&inClusterConfig=false&useServiceAccount=false&auth= 50 | - --sink=log 51 | - image: reg.dnt:5000/google_containers/addon-resizer:1.6 52 | name: heapster-nanny 53 | resources: 54 | limits: 55 | cpu: 200m 56 | memory: 200Mi 57 | requests: 58 | cpu: 200m 59 | memory: 200Mi 60 | env: 61 | - name: MY_POD_NAME 62 | valueFrom: 63 | fieldRef: 64 | fieldPath: metadata.name 65 | - name: MY_POD_NAMESPACE 66 | valueFrom: 67 | fieldRef: 68 | fieldPath: metadata.namespace 
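# The addon-resizer ("pod nanny") containers below watch the cluster size and patch this Deployment:
# the resource target for the watched container is roughly the base --cpu/--memory plus the per-node
# --extra-cpu/--extra-memory, and an update is only applied once the drift exceeds --threshold percent.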
69 | command: 70 | - /pod_nanny 71 | - --cpu=200m 72 | - --extra-cpu=0m 73 | - --memory=500Mi 74 | - --extra-memory=100Mi 75 | - --threshold=5 76 | - --deployment=heapster-v1.2.0 77 | - --container=heapster 78 | - --poll-period=300000 79 | - image: reg.dnt:5000/google_containers/addon-resizer:1.6 80 | name: eventer-nanny 81 | resources: 82 | limits: 83 | cpu: 100m 84 | memory: 200Mi 85 | requests: 86 | cpu: 100m 87 | memory: 200Mi 88 | env: 89 | - name: MY_POD_NAME 90 | valueFrom: 91 | fieldRef: 92 | fieldPath: metadata.name 93 | - name: MY_POD_NAMESPACE 94 | valueFrom: 95 | fieldRef: 96 | fieldPath: metadata.namespace 97 | command: 98 | - /pod_nanny 99 | - --cpu=500m 100 | - --extra-cpu=0m 101 | - --memory=500Mi 102 | - --extra-memory=100Mi 103 | - --threshold=5 104 | - --deployment=heapster-v1.2.0 105 | - --container=eventer 106 | - --poll-period=300000 107 | -------------------------------------------------------------------------------- /addons/heapster/heapster-service.yaml: -------------------------------------------------------------------------------- 1 | kind: Service 2 | apiVersion: v1 3 | metadata: 4 | name: heapster 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "Heapster" 9 | spec: 10 | ports: 11 | - port: 80 12 | targetPort: 8082 13 | selector: 14 | k8s-app: heapster 15 | -------------------------------------------------------------------------------- /addons/heapster/heapster-serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: heapster 5 | namespace: kube-system 6 | -------------------------------------------------------------------------------- /addons/heapster/influxdb-grafana-controler.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ReplicationController 3 | metadata: 4 | name: monitoring-influxdb-grafana-v3 5 | namespace: kube-system 6 | labels: 7 | k8s-app: influxGrafana 8 | version: v3 9 | kubernetes.io/cluster-service: "true" 10 | spec: 11 | replicas: 1 12 | selector: 13 | k8s-app: influxGrafana 14 | version: v3 15 | template: 16 | metadata: 17 | labels: 18 | k8s-app: influxGrafana 19 | version: v3 20 | kubernetes.io/cluster-service: "true" 21 | spec: 22 | containers: 23 | - image: reg.dnt:5000/google_containers/heapster_influxdb:v0.7 24 | name: influxdb 25 | resources: 26 | # keep request = limit to keep this container in guaranteed class 27 | limits: 28 | cpu: 1000m 29 | memory: 1024Mi 30 | requests: 31 | cpu: 1000m 32 | memory: 1024Mi 33 | ports: 34 | - containerPort: 8083 35 | - containerPort: 8086 36 | volumeMounts: 37 | - name: influxdb-persistent-storage 38 | mountPath: /data 39 | - image: reg.dnt:5000/google_containers/heapster_grafana:v2.6.0-2 40 | name: grafana 41 | env: 42 | resources: 43 | # keep request = limit to keep this container in guaranteed class 44 | limits: 45 | cpu: 1000m 46 | memory: 1024Mi 47 | requests: 48 | cpu: 1000m 49 | memory: 1024Mi 50 | env: 51 | # This variable is required to setup templates in Grafana. 52 | - name: INFLUXDB_SERVICE_URL 53 | value: http://monitoring-influxdb:8086 54 | # The following env variables are required to make Grafana accessible via 55 | # the kubernetes api-server proxy. On production clusters, we recommend 56 | # removing these env variables, setup auth for grafana, and expose the grafana 57 | # service using a LoadBalancer or a public IP. 
58 | - name: GF_AUTH_BASIC_ENABLED 59 | value: "false" 60 | - name: GF_AUTH_ANONYMOUS_ENABLED 61 | value: "true" 62 | - name: GF_AUTH_ANONYMOUS_ORG_ROLE 63 | value: Admin 64 | - name: GF_SERVER_ROOT_URL 65 | value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/ 66 | volumeMounts: 67 | - name: grafana-persistent-storage 68 | mountPath: /var 69 | volumes: 70 | - name: influxdb-persistent-storage 71 | hostPath: 72 | path: /heapster-influxdb 73 | - name: grafana-persistent-storage 74 | hostPath: 75 | path: /heapster-grafana 76 | -------------------------------------------------------------------------------- /addons/heapster/influxdb-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: monitoring-influxdb 5 | namespace: kube-system 6 | labels: 7 | kubernetes.io/cluster-service: "true" 8 | kubernetes.io/name: "InfluxDB" 9 | spec: 10 | ports: 11 | - name: http 12 | port: 8083 13 | targetPort: 8083 14 | - name: api 15 | port: 8086 16 | targetPort: 8086 17 | selector: 18 | k8s-app: influxGrafana 19 | -------------------------------------------------------------------------------- /addons/ingress/nginx-controller-without-httpcheck.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: default-http-backend 5 | namespace: kube-system 6 | labels: 7 | k8s-app: default-http-backend 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: 8080 12 | protocol: TCP 13 | name: http 14 | selector: 15 | k8s-app: default-http-backend 16 | --- 17 | apiVersion: v1 18 | kind: ReplicationController 19 | metadata: 20 | name: default-http-backend 21 | namespace: kube-system 22 | spec: 23 | replicas: 1 24 | selector: 25 | k8s-app: default-http-backend 26 | template: 27 | metadata: 28 | labels: 29 | k8s-app: default-http-backend 30 | spec: 31 | terminationGracePeriodSeconds: 60 32 | containers: 33 | - name: default-http-backend 34 | # Any image is permissable as long as: 35 | # 1. It serves a 404 page at / 36 | # 2. 
It serves 200 on a /healthz endpoint 37 | image: reg.dnt:5000/google_containers/defaultbackend:1.0 38 | livenessProbe: 39 | httpGet: 40 | path: /healthz 41 | port: 8080 42 | scheme: HTTP 43 | initialDelaySeconds: 30 44 | timeoutSeconds: 5 45 | ports: 46 | - containerPort: 8080 47 | resources: 48 | limits: 49 | cpu: 10m 50 | memory: 20Mi 51 | requests: 52 | cpu: 10m 53 | memory: 20Mi 54 | --- 55 | apiVersion: v1 56 | kind: ReplicationController 57 | metadata: 58 | name: nginx-ingress-controller 59 | namespace: kube-system 60 | labels: 61 | k8s-app: nginx-ingress-lb 62 | spec: 63 | replicas: 1 64 | selector: 65 | k8s-app: nginx-ingress-lb 66 | template: 67 | metadata: 68 | labels: 69 | k8s-app: nginx-ingress-lb 70 | name: nginx-ingress-lb 71 | spec: 72 | terminationGracePeriodSeconds: 60 73 | containers: 74 | - image: reg.dnt:5000/google_containers/nginx-ingress-controller:0.8.3 75 | name: nginx-ingress-lb 76 | imagePullPolicy: Always 77 | # use downward API 78 | env: 79 | - name: KUBERNETES_MASTER 80 | value: http://20.26.2.110:18080 81 | - name: POD_NAME 82 | valueFrom: 83 | fieldRef: 84 | fieldPath: metadata.name 85 | - name: POD_NAMESPACE 86 | valueFrom: 87 | fieldRef: 88 | fieldPath: metadata.namespace 89 | ports: 90 | - containerPort: 80 91 | hostPort: 80 92 | - containerPort: 443 93 | hostPort: 443 94 | # we expose 18080 to access nginx stats in url /nginx-status 95 | # this is optional 96 | - containerPort: 18080 97 | hostPort: 18080 98 | args: 99 | - /nginx-ingress-controller 100 | - --default-backend-service=$(POD_NAMESPACE)/default-http-backend 101 | - --logtostderr 102 | - --v=6 103 | -------------------------------------------------------------------------------- /addons/ingress/nginx-controller.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: default-http-backend 5 | namespace: kube-system 6 | labels: 7 | k8s-app: default-http-backend 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: 8080 12 | protocol: TCP 13 | name: http 14 | selector: 15 | k8s-app: default-http-backend 16 | --- 17 | apiVersion: v1 18 | kind: ReplicationController 19 | metadata: 20 | name: default-http-backend 21 | namespace: kube-system 22 | spec: 23 | replicas: 1 24 | selector: 25 | k8s-app: default-http-backend 26 | template: 27 | metadata: 28 | labels: 29 | k8s-app: default-http-backend 30 | spec: 31 | terminationGracePeriodSeconds: 60 32 | containers: 33 | - name: default-http-backend 34 | # Any image is permissable as long as: 35 | # 1. It serves a 404 page at / 36 | # 2. 
It serves 200 on a /healthz endpoint 37 | image: reg.dnt:5000/google_containers/defaultbackend:1.0 38 | livenessProbe: 39 | httpGet: 40 | path: /healthz 41 | port: 8080 42 | scheme: HTTP 43 | initialDelaySeconds: 30 44 | timeoutSeconds: 5 45 | ports: 46 | - containerPort: 8080 47 | resources: 48 | limits: 49 | cpu: 10m 50 | memory: 20Mi 51 | requests: 52 | cpu: 10m 53 | memory: 20Mi 54 | --- 55 | apiVersion: v1 56 | kind: ReplicationController 57 | metadata: 58 | name: nginx-ingress-controller 59 | namespace: kube-system 60 | labels: 61 | k8s-app: nginx-ingress-lb 62 | spec: 63 | replicas: 1 64 | selector: 65 | k8s-app: nginx-ingress-lb 66 | template: 67 | metadata: 68 | labels: 69 | k8s-app: nginx-ingress-lb 70 | name: nginx-ingress-lb 71 | spec: 72 | terminationGracePeriodSeconds: 60 73 | containers: 74 | - image: reg.dnt:5000/google_containers/nginx-ingress-controller:0.8.3 75 | name: nginx-ingress-lb 76 | imagePullPolicy: Always 77 | readinessProbe: 78 | httpGet: 79 | path: /healthz 80 | port: 10254 81 | scheme: HTTP 82 | livenessProbe: 83 | httpGet: 84 | path: /healthz 85 | port: 10254 86 | scheme: HTTP 87 | initialDelaySeconds: 10 88 | timeoutSeconds: 1 89 | # use downward API 90 | env: 91 | - name: KUBERNETES_MASTER 92 | value: http://20.26.2.110:18080 93 | - name: POD_NAME 94 | valueFrom: 95 | fieldRef: 96 | fieldPath: metadata.name 97 | - name: POD_NAMESPACE 98 | valueFrom: 99 | fieldRef: 100 | fieldPath: metadata.namespace 101 | ports: 102 | - containerPort: 80 103 | hostPort: 80 104 | - containerPort: 443 105 | hostPort: 443 106 | # we expose 18080 to access nginx stats in url /nginx-status 107 | # this is optional 108 | - containerPort: 18080 109 | hostPort: 18080 110 | args: 111 | - /nginx-ingress-controller 112 | - --default-backend-service=$(POD_NAMESPACE)/default-http-backend 113 | -------------------------------------------------------------------------------- /addons/prometheus/grafana.ini: -------------------------------------------------------------------------------- 1 | ##################### Grafana Configuration Example ##################### 2 | # 3 | # Everything has defaults so you only need to uncomment things you want to 4 | # change 5 | 6 | # possible values : production, development 7 | ; app_mode = production 8 | 9 | # instance name, defaults to HOSTNAME environment variable value or hostname if HOSTNAME var is empty 10 | ; instance_name = ${HOSTNAME} 11 | 12 | #################################### Paths #################################### 13 | [paths] 14 | # Path to where grafana can store temp files, sessions, and the sqlite3 db (if that is used) 15 | # 16 | ;data = /var/lib/grafana 17 | # 18 | # Directory where grafana can store logs 19 | # 20 | ;logs = /var/log/grafana 21 | # 22 | # Directory where grafana will automatically scan and look for plugins 23 | # 24 | ;plugins = /var/lib/grafana/plugins 25 | 26 | # 27 | #################################### Server #################################### 28 | [server] 29 | # Protocol (http or https) 30 | ;protocol = http 31 | 32 | # The ip address to bind to, empty will bind to all interfaces 33 | ;http_addr = 34 | 35 | # The http port to use 36 | ;http_port = 3000 37 | 38 | # The public facing domain name used to access grafana from a browser 39 | ;domain = localhost 40 | 41 | # Redirect to correct domain if host header does not match domain 42 | # Prevents DNS rebinding attacks 43 | ;enforce_domain = false 44 | 45 | # The full public facing url 46 | ;root_url = %(protocol)s://%(domain)s:%(http_port)s/ 47 | 48 | # 
Log web requests 49 | ;router_logging = false 50 | 51 | # the path relative working path 52 | ;static_root_path = public 53 | 54 | # enable gzip 55 | ;enable_gzip = false 56 | 57 | # https certs & key file 58 | ;cert_file = 59 | ;cert_key = 60 | 61 | #################################### Database #################################### 62 | [database] 63 | # Either "mysql", "postgres" or "sqlite3", it's your choice 64 | ;type = sqlite3 65 | ;host = 127.0.0.1:3306 66 | ;name = grafana 67 | ;user = root 68 | ;password = 69 | 70 | # For "postgres" only, either "disable", "require" or "verify-full" 71 | ;ssl_mode = disable 72 | 73 | # For "sqlite3" only, path relative to data_path setting 74 | ;path = grafana.db 75 | 76 | #################################### Session #################################### 77 | [session] 78 | # Either "memory", "file", "redis", "mysql", "postgres", default is "file" 79 | ;provider = file 80 | 81 | # Provider config options 82 | # memory: not have any config yet 83 | # file: session dir path, is relative to grafana data_path 84 | # redis: config like redis server e.g. `addr=127.0.0.1:6379,pool_size=100,db=grafana` 85 | # mysql: go-sql-driver/mysql dsn config string, e.g. `user:password@tcp(127.0.0.1:3306)/database_name` 86 | # postgres: user=a password=b host=localhost port=5432 dbname=c sslmode=disable 87 | ;provider_config = sessions 88 | 89 | # Session cookie name 90 | ;cookie_name = grafana_sess 91 | 92 | # If you use session in https only, default is false 93 | ;cookie_secure = false 94 | 95 | # Session life time, default is 86400 96 | ;session_life_time = 86400 97 | 98 | #################################### Analytics #################################### 99 | [analytics] 100 | # Server reporting, sends usage counters to stats.grafana.org every 24 hours. 101 | # No ip addresses are being tracked, only simple counters to track 102 | # running instances, dashboard and error counts. It is very helpful to us. 103 | # Change this option to false to disable reporting. 
104 | ;reporting_enabled = true 105 | 106 | # Set to false to disable all checks to https://grafana.net 107 | # for new vesions (grafana itself and plugins), check is used 108 | # in some UI views to notify that grafana or plugin update exists 109 | # This option does not cause any auto updates, nor send any information 110 | # only a GET request to http://grafana.net to get latest versions 111 | check_for_updates = true 112 | 113 | # Google Analytics universal tracking code, only enabled if you specify an id here 114 | ;google_analytics_ua_id = 115 | 116 | #################################### Security #################################### 117 | [security] 118 | # default admin user, created on startup 119 | ;admin_user = admin 120 | 121 | # default admin password, can be changed before first start of grafana, or in profile settings 122 | ;admin_password = admin 123 | 124 | # used for signing 125 | ;secret_key = SW2YcwTIb9zpOOhoPsMm 126 | 127 | # Auto-login remember days 128 | ;login_remember_days = 7 129 | ;cookie_username = grafana_user 130 | ;cookie_remember_name = grafana_remember 131 | 132 | # disable gravatar profile images 133 | ;disable_gravatar = false 134 | 135 | # data source proxy whitelist (ip_or_domain:port separated by spaces) 136 | ;data_source_proxy_whitelist = 137 | 138 | [snapshots] 139 | # snapshot sharing options 140 | ;external_enabled = true 141 | ;external_snapshot_url = https://snapshots-origin.raintank.io 142 | ;external_snapshot_name = Publish to snapshot.raintank.io 143 | 144 | #################################### Users #################################### 145 | [users] 146 | # disable user signup / registration 147 | ;allow_sign_up = true 148 | 149 | # Allow non admin users to create organizations 150 | ;allow_org_create = true 151 | 152 | # Set to true to automatically assign new users to the default organization (id 1) 153 | ;auto_assign_org = true 154 | 155 | # Default role new users will be automatically assigned (if disabled above is set to true) 156 | ;auto_assign_org_role = Viewer 157 | 158 | # Background text for the user field on the login page 159 | ;login_hint = email or username 160 | 161 | # Default UI theme ("dark" or "light") 162 | ;default_theme = dark 163 | 164 | #################################### Anonymous Auth ########################## 165 | [auth.anonymous] 166 | # enable anonymous access 167 | ;enabled = false 168 | 169 | # specify organization name that should be used for unauthenticated users 170 | ;org_name = Main Org. 
171 | 172 | # specify role for unauthenticated users 173 | ;org_role = Viewer 174 | 175 | #################################### Github Auth ########################## 176 | [auth.github] 177 | ;enabled = false 178 | ;allow_sign_up = false 179 | ;client_id = some_id 180 | ;client_secret = some_secret 181 | ;scopes = user:email,read:org 182 | ;auth_url = https://github.com/login/oauth/authorize 183 | ;token_url = https://github.com/login/oauth/access_token 184 | ;api_url = https://api.github.com/user 185 | ;team_ids = 186 | ;allowed_organizations = 187 | 188 | #################################### Google Auth ########################## 189 | [auth.google] 190 | ;enabled = false 191 | ;allow_sign_up = false 192 | ;client_id = some_client_id 193 | ;client_secret = some_client_secret 194 | ;scopes = https://www.googleapis.com/auth/userinfo.profile https://www.googleapis.com/auth/userinfo.email 195 | ;auth_url = https://accounts.google.com/o/oauth2/auth 196 | ;token_url = https://accounts.google.com/o/oauth2/token 197 | ;api_url = https://www.googleapis.com/oauth2/v1/userinfo 198 | ;allowed_domains = 199 | 200 | #################################### Auth Proxy ########################## 201 | [auth.proxy] 202 | ;enabled = false 203 | ;header_name = X-WEBAUTH-USER 204 | ;header_property = username 205 | ;auto_sign_up = true 206 | 207 | #################################### Basic Auth ########################## 208 | [auth.basic] 209 | ;enabled = true 210 | 211 | #################################### Auth LDAP ########################## 212 | [auth.ldap] 213 | ;enabled = false 214 | ;config_file = /etc/grafana/ldap.toml 215 | 216 | #################################### SMTP / Emailing ########################## 217 | [smtp] 218 | ;enabled = false 219 | ;host = localhost:25 220 | ;user = 221 | ;password = 222 | ;cert_file = 223 | ;key_file = 224 | ;skip_verify = false 225 | ;from_address = admin@grafana.localhost 226 | 227 | [emails] 228 | ;welcome_email_on_sign_up = false 229 | 230 | #################################### Logging ########################## 231 | [log] 232 | # Either "console", "file", "syslog". Default is console and file 233 | # Use space to separate multiple modes, e.g. "console file" 234 | ;mode = console, file 235 | 236 | # Either "trace", "debug", "info", "warn", "error", "critical", default is "info" 237 | ;level = info 238 | 239 | # For "console" mode only 240 | [log.console] 241 | ;level = 242 | 243 | # log line format, valid options are text, console and json 244 | ;format = console 245 | 246 | # For "file" mode only 247 | [log.file] 248 | ;level = 249 | 250 | # log line format, valid options are text, console and json 251 | ;format = text 252 | 253 | # This enables automated log rotate(switch of following options), default is true 254 | ;log_rotate = true 255 | 256 | # Max line number of single file, default is 1000000 257 | ;max_lines = 1000000 258 | 259 | # Max size shift of single file, default is 28 means 1 << 28, 256MB 260 | ;max_size_shift = 28 261 | 262 | # Segment log daily, default is true 263 | ;daily_rotate = true 264 | 265 | # Expired days of log file(delete after max days), default is 7 266 | ;max_days = 7 267 | 268 | [log.syslog] 269 | ;level = 270 | 271 | # log line format, valid options are text, console and json 272 | ;format = text 273 | 274 | # Syslog network type and address. This can be udp, tcp, or unix. If left blank, the default unix endpoints will be used. 275 | ;network = 276 | ;address = 277 | 278 | # Syslog facility. 
user, daemon and local0 through local7 are valid. 279 | ;facility = 280 | 281 | # Syslog tag. By default, the process' argv[0] is used. 282 | ;tag = 283 | 284 | 285 | #################################### AMQP Event Publisher ########################## 286 | [event_publisher] 287 | ;enabled = false 288 | ;rabbitmq_url = amqp://localhost/ 289 | ;exchange = grafana_events 290 | 291 | ;#################################### Dashboard JSON files ########################## 292 | [dashboards.json] 293 | ;enabled = false 294 | ;path = /var/lib/grafana/dashboards 295 | 296 | #################################### Internal Grafana Metrics ########################## 297 | # Metrics available at HTTP API Url /api/metrics 298 | [metrics] 299 | # Disable / Enable internal metrics 300 | ;enabled = true 301 | 302 | # Publish interval 303 | ;interval_seconds = 10 304 | 305 | # Send internal metrics to Graphite 306 | ; [metrics.graphite] 307 | ; address = localhost:2003 308 | ; prefix = prod.grafana.%(instance_name)s. 309 | 310 | #################################### Internal Grafana Metrics ########################## 311 | # Url used to to import dashboards directly from Grafana.net 312 | [grafana_net] 313 | url = https://grafana.net 314 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus-cm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: prometheus-config 5 | namespace: kube-system 6 | labels: 7 | name: prometheus 8 | data: 9 | prometheus.yml: | 10 | global: 11 | scrape_interval: 15s # By default, scrape targets every 15 seconds. 12 | evaluation_interval: 15s # By default, scrape targets every 15 seconds. 13 | # scrape_timeout is set to the global default (10s). 14 | 15 | # Attach these labels to any time series or alerts when communicating with 16 | # external systems (federation, remote storage, Alertmanager). 17 | external_labels: 18 | monitor: 'codelab-monitor' 19 | 20 | # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. 21 | rule_files: 22 | # - "first.rules" 23 | # - "second.rules" 24 | 25 | # A scrape configuration containing exactly one endpoint to scrape: 26 | # Here it's Prometheus itself. 27 | scrape_configs: 28 | # The job name is added as a label `job=` to any timeseries scraped from this config. 29 | - job_name: 'prometheus' 30 | 31 | # Override the global default and scrape targets from this job every 5 seconds. 32 | scrape_interval: 5s 33 | 34 | # metrics_path defaults to '/metrics' 35 | # scheme defaults to 'http'. 36 | 37 | static_configs: 38 | - targets: ['localhost:9090'] 39 | 40 | #scrape_configs: 41 | # The job name is added as a label `job=` to any timeseries scraped from this config. 42 | - job_name: 'cadvisor' 43 | 44 | # Override the global default and scrape targets from this job every 5 seconds. 45 | scrape_interval: 5s 46 | 47 | # metrics_path defaults to '/metrics' 48 | # scheme defaults to 'http'. 
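# The targets below are the kubelets' embedded cAdvisor endpoints (kubelet --cadvisor-port, which
# defaults to 4194 on this Kubernetes version); replace the IPs with your own master and agent nodes.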
49 | 50 | static_configs: 51 | - targets: ['20.26.2.51:4194','20.26.2.52:4194','20.26.2.53:4194'] 52 | labels: 53 | group: 'k8s-master' 54 | namespace: 'system' 55 | - targets: ['20.26.2.54:4194','20.26.20.156:4194','20.26.20.157:4194'] 56 | labels: 57 | group: 'k8s-agent' 58 | namespace: 'system' 59 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus-dm-use-hostpath.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | namespace: kube-system 5 | labels: 6 | name: prometheus-deployment 7 | name: prometheus 8 | spec: 9 | replicas: 1 10 | template: 11 | metadata: 12 | labels: 13 | app: prometheus 14 | spec: 15 | containers: 16 | - image: reg.dnt:5000/prom/prometheus:v1.1.2 17 | name: prometheus 18 | command: 19 | - "/bin/prometheus" 20 | args: 21 | - "-config.file=/etc/prometheus/prometheus.yml" 22 | - "-storage.local.path=/prometheus" 23 | - "-storage.local.retention=24h" 24 | ports: 25 | - containerPort: 9090 26 | protocol: TCP 27 | volumeMounts: 28 | - mountPath: "/prometheus" 29 | name: data 30 | - mountPath: "/etc/prometheus" 31 | name: config-volume 32 | resources: 33 | requests: 34 | cpu: 500m 35 | memory: 1024Mi 36 | limits: 37 | cpu: 1000m 38 | memory: 2048Mi 39 | nodeSelector: 40 | kubernetes.io/hostname: agent3 41 | volumes: 42 | - name: data 43 | hostPath: 44 | path: /prometheus-data 45 | - configMap: 46 | name: prometheus-config 47 | name: config-volume 48 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus-dm.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | namespace: kube-system 5 | labels: 6 | name: prometheus-deployment 7 | name: prometheus 8 | spec: 9 | replicas: 1 10 | template: 11 | metadata: 12 | labels: 13 | app: prometheus 14 | spec: 15 | containers: 16 | - image: reg.dnt:5000/prom/prometheus:v1.1.2 17 | name: prometheus 18 | command: 19 | - "/bin/prometheus" 20 | args: 21 | - "-config.file=/etc/prometheus/prometheus.yml" 22 | - "-storage.local.path=/prometheus" 23 | - "-storage.local.retention=24h" 24 | ports: 25 | - containerPort: 9090 26 | protocol: TCP 27 | volumeMounts: 28 | - mountPath: "/prometheus" 29 | name: data 30 | - mountPath: "/etc/prometheus" 31 | name: config-volume 32 | resources: 33 | requests: 34 | cpu: 500m 35 | memory: 1024Mi 36 | limits: 37 | cpu: 1000m 38 | memory: 2048Mi 39 | nodeSelector: 40 | kubernetes.io/hostname: wz-agent3 41 | volumes: 42 | - name: data 43 | persistentVolumeClaim: 44 | claimName: prometheus-storage 45 | - configMap: 46 | name: prometheus-config 47 | name: config-volume 48 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus-pv-pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: PersistentVolume 2 | apiVersion: v1 3 | metadata: 4 | name: prometheus-storage 5 | namespace: kube-system 6 | labels: 7 | name: prometheus-storage 8 | spec: 9 | capacity: 10 | storage: 600Mi 11 | accessModes: 12 | - ReadWriteOnce 13 | persistentVolumeReclaimPolicy: Retain 14 | rbd: 15 | monitors: 16 | - 20.26.28.13:6789 17 | - 20.26.28.14:6789 18 | - 20.26.28.15:6789 19 | user: admin 20 | pool: k8s 21 | image: prometheus-storage 22 | fsType: ext4 23 | --- 24 | kind: PersistentVolumeClaim 25 | apiVersion: v1 
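# This claim binds to the RBD PersistentVolume above: the selector matches the
# name=prometheus-storage label and the 300Mi request fits within the PV's
# 600Mi capacity. prometheus-dm.yaml mounts it via claimName: prometheus-storage.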
26 | metadata: 27 | name: prometheus-storage 28 | namespace: kube-system 29 | labels: 30 | name: prometheus-storage 31 | spec: 32 | accessModes: 33 | - ReadWriteOnce 34 | resources: 35 | requests: 36 | storage: 300Mi 37 | selector: 38 | matchLabels: 39 | name: "prometheus-storage" 40 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: prometheus-svc 5 | namespace: kube-system 6 | spec: 7 | type: NodePort 8 | selector: 9 | app: prometheus 10 | ports: 11 | - port: 80 12 | targetPort: 9090 13 | -------------------------------------------------------------------------------- /addons/prometheus/prometheus.yml: -------------------------------------------------------------------------------- 1 | # my global config 2 | global: 3 | scrape_interval: 15s # By default, scrape targets every 15 seconds. 4 | evaluation_interval: 15s # By default, scrape targets every 15 seconds. 5 | # scrape_timeout is set to the global default (10s). 6 | 7 | # Attach these labels to any time series or alerts when communicating with 8 | # external systems (federation, remote storage, Alertmanager). 9 | external_labels: 10 | monitor: 'codelab-monitor' 11 | 12 | # Load rules once and periodically evaluate them according to the global 'evaluation_interval'. 13 | rule_files: 14 | # - "first.rules" 15 | # - "second.rules" 16 | 17 | # A scrape configuration containing exactly one endpoint to scrape: 18 | # Here it's Prometheus itself. 19 | scrape_configs: 20 | # The job name is added as a label `job=` to any timeseries scraped from this config. 21 | - job_name: 'prometheus' 22 | 23 | # Override the global default and scrape targets from this job every 5 seconds. 24 | scrape_interval: 5s 25 | 26 | # metrics_path defaults to '/metrics' 27 | # scheme defaults to 'http'. 28 | 29 | static_configs: 30 | - targets: ['localhost:9090'] 31 | 32 | #scrape_configs: 33 | # The job name is added as a label `job=` to any timeseries scraped from this config. 34 | - job_name: 'cadvisor' 35 | 36 | # Override the global default and scrape targets from this job every 5 seconds. 37 | scrape_interval: 5s 38 | 39 | # metrics_path defaults to '/metrics' 40 | # scheme defaults to 'http'. 41 | 42 | static_configs: 43 | - targets: ['20.26.2.51:4194','20.26.2.52:4194','20.26.2.53:4194'] 44 | labels: 45 | group: 'k8s-master' 46 | namespace: 'system' 47 | - targets: ['20.26.2.54:4194','20.26.20.156:4194','20.26.20.157:4194'] 48 | labels: 49 | group: 'k8s-agent' 50 | namespace: 'system' 51 | 52 | #scrape_configs: 53 | # # The job name is added as a label `job=` to any timeseries scraped from this config. 54 | # - job_name: 'haproxy' 55 | # 56 | # # Override the global default and scrape targets from this job every 5 seconds. 57 | # scrape_interval: 5s 58 | # 59 | # # metrics_path defaults to '/metrics' 60 | # # scheme defaults to 'http'. 
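# # NOTE: port 9101 on the target below is assumed to be a haproxy_exporter
# # instance running next to haproxy; uncomment this job once that exporter
# # is actually deployed.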
61 | # 62 | # static_configs: 63 | # - targets: ['20.26.25.148:9101'] 64 | # labels: 65 | # group: 'k8s-haproxy' 66 | # namespace: 'system' 67 | -------------------------------------------------------------------------------- /base/agent/flannel.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: flannel-server 5 | namespace: system 6 | labels: 7 | app: "flannel-server" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - name: flannel-server 14 | image: reg.dnt:5000/coreos/flannel:v0.6.2 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /opt/bin/flanneld --logtostderr -ip-masq -etcd-endpoints $ETCD_INFO -etcd-prefix /kubernetes/network 1>>/var/log/flannel.log 2>&1 19 | resources: 20 | limits: 21 | cpu: 1000m 22 | memory: 2048Mi 23 | requests: 24 | cpu: 500m 25 | memory: 1024Mi 26 | env: 27 | - name: ETCD_INFO 28 | value: "http://10.78.198.74:14001" 29 | volumeMounts: 30 | - mountPath: /var/log/flannel.log 31 | name: logfile 32 | - mountPath: /run/flannel 33 | name: flannelenv 34 | - mountPath: /dev/net 35 | name: flannelnet 36 | securityContext: 37 | privileged: true 38 | volumes: 39 | - hostPath: 40 | path: /data/logs/base/flannel.log 41 | name: logfile 42 | - hostPath: 43 | path: /run/flannel 44 | name: flannelenv 45 | - hostPath: 46 | path: /dev/net 47 | name: flannelnet 48 | -------------------------------------------------------------------------------- /base/agent/kube-proxy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-proxy 5 | namespace: system 6 | labels: 7 | app: "kube-proxy" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - name: kube-proxy 14 | image: reg.dnt:5000/google_containers/kube-proxy:v1.4.0 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /usr/local/bin/kube-proxy --master=$ETCD_INFO --proxy-mode=iptables 1>>/var/log/kube-proxy.log 2>&1 19 | securityContext: 20 | privileged: true 21 | resources: 22 | limits: 23 | cpu: 1000m 24 | memory: 2048Mi 25 | requests: 26 | cpu: 500m 27 | memory: 1024Mi 28 | env: 29 | - name: ETCD_INFO 30 | value: "http://10.78.198.74:18080" 31 | volumeMounts: 32 | - mountPath: /var/log/kube-proxy.log 33 | name: logfile 34 | volumes: 35 | - hostPath: 36 | path: /data/logs/base/kube-proxy.log 37 | name: logfile 38 | -------------------------------------------------------------------------------- /base/agent/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet 3 | After=docker.service 4 | Requires=docker.service 5 | 6 | [Service] 7 | ExecStart=/usr/local/bin/kubelet --register-schedulable=false --allow-privileged=true --logtostderr=true --address=0.0.0.0 --port=10250 --cluster-dns=10.0.1.1 --cluster-domain=cluster.local --pod-infra-container-image=reg.dnt:5000/google_containers/pause:3.0 --api-servers=http://10.78.198.74:18080 --config=/data/kubernetes/manifests 8 | Restart=on-failure 9 | KillMode=process 10 | 11 | [Install] 12 | WantedBy=multi-user.target 13 | 14 | -------------------------------------------------------------------------------- /base/config.yml: -------------------------------------------------------------------------------- 1 | version: 0.1 2 | log: 3 | fields: 4 | service: registry 5 | storage: 6 | delete: 7 | enabled: true 8 | cache: 9 | blobdescriptor: inmemory 
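  # storage.delete.enabled above is what lets manifests be removed through the
  # registry v2 API (the regc_del helper in /base/tools.sh relies on it).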
10 | filesystem: 11 | rootdirectory: /var/lib/registry 12 | http: 13 | addr: :5000 14 | headers: 15 | X-Content-Type-Options: [nosniff] 16 | health: 17 | storagedriver: 18 | enabled: true 19 | interval: 10s 20 | threshold: 3 21 | -------------------------------------------------------------------------------- /base/docker.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Docker Application Container Engine 3 | Documentation=https://docs.docker.com 4 | After=network.target 5 | 6 | [Service] 7 | Type=notify 8 | # the default is not to use systemd for cgroups because the delegate issues still 9 | # exists and systemd currently does not support the cgroup feature set required 10 | # for containers run by docker 11 | #Environment="HTTP_PROXY=http://192.168.1.106:8118" "HTTPS_PROXY=http://192.168.1.106:8118" "NO_PROXY=localhost,127.0.0.1,reg.dnt" 12 | #Environment="https_proxy=http://192.168.1.106:8118" 13 | EnvironmentFile=-/etc/default/docker 14 | ExecStart=/usr/bin/dockerd --graph=/data/docker --insecure-registry 0.0.0.0/0 -H tcp://0.0.0.0:4243 -H tcp://0.0.0.0:6071 -H unix:///var/run/docker.sock --log-driver=json-file --log-opt max-size=1g --log-opt max-file=1 --live-restore=false $DOCKER_OPTS 15 | ExecReload=/bin/kill -s HUP $MAINPID 16 | Restart=on-failure 17 | # Having non-zero Limit*s causes performance problems due to accounting overhead 18 | # in the kernel. We recommend using cgroups to do container-local accounting. 19 | LimitNOFILE=infinity 20 | LimitNPROC=infinity 21 | LimitCORE=infinity 22 | # Uncomment TasksMax if your systemd version supports it. 23 | # Only systemd 226 and above support this version. 24 | #TasksMax=infinity 25 | TimeoutStartSec=0 26 | # set delegate yes so that systemd does not reset the cgroups of docker containers 27 | Delegate=yes 28 | # kill only the docker process, not all processes in the cgroup 29 | KillMode=process 30 | 31 | [Install] 32 | WantedBy=multi-user.target 33 | MountFlags=shared 34 | -------------------------------------------------------------------------------- /base/haka/flannel.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: flannel-server 5 | namespace: system 6 | labels: 7 | app: flannel-server 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - name: flannel-server 14 | image: reg.dnt:5000/coreos/flannel:v0.6.2 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /opt/bin/flanneld --logtostderr -ip-masq -etcd-endpoints $ETCD_INFO -etcd-prefix /kubernetes/network 1>>/var/log/flannel.log 2>&1 19 | resources: 20 | limits: 21 | cpu: 1000m 22 | memory: 2048Mi 23 | requests: 24 | cpu: 500m 25 | memory: 1024Mi 26 | env: 27 | - name: ETCD_INFO 28 | value: "http://10.78.198.74:14001" 29 | volumeMounts: 30 | - mountPath: /var/log/flannel.log 31 | name: logfile 32 | - mountPath: /run/flannel 33 | name: flannelenv 34 | - mountPath: /dev/net 35 | name: flannelnet 36 | securityContext: 37 | privileged: true 38 | volumes: 39 | - hostPath: 40 | path: /data/logs/base/flannel.log 41 | name: logfile 42 | - hostPath: 43 | path: /run/flannel 44 | name: flannelenv 45 | - hostPath: 46 | path: /dev/net 47 | name: flannelnet 48 | -------------------------------------------------------------------------------- /base/haka/haproxy-keepalived.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | 
metadata: 4 | name: haproxy-keepalived 5 | namespace: system 6 | labels: 7 | app: "haproxy-keepalived" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | securityContext: 13 | privileged: true 14 | containers: 15 | - name: haproxy 16 | image: reg.dnt:5000/haproxy:20161016 17 | command: 18 | - /usr/local/sbin/haproxy 19 | - -p 20 | - /run/haproxy.pid 21 | - -f 22 | - /etc/haproxy/haproxy.cfg 23 | - -d 24 | resources: 25 | limits: 26 | cpu: 2000m 27 | memory: 4096Mi 28 | requests: 29 | cpu: 1000m 30 | memory: 2048Mi 31 | volumeMounts: 32 | - mountPath: /etc/haproxy/haproxy.cfg 33 | name: haproxy-cfg 34 | readOnly: true 35 | - name: keepalived 36 | image: reg.dnt:5000/keepalived:20161016 37 | securityContext: 38 | privileged: true 39 | command: 40 | - /bin/bash 41 | - -c 42 | - /entrypoint.sh 43 | resources: 44 | limits: 45 | cpu: 1000m 46 | memory: 2048Mi 47 | requests: 48 | cpu: 500m 49 | memory: 1024Mi 50 | volumeMounts: 51 | - mountPath: /etc/keepalived 52 | name: keepalived-cfg 53 | readOnly: true 54 | - mountPath: /entrypoint.sh 55 | name: entrypoint 56 | readOnly: true 57 | volumes: 58 | - hostPath: 59 | path: /etc/haproxy/haproxy.cfg 60 | name: haproxy-cfg 61 | - hostPath: 62 | path: /etc/keepalived 63 | name: keepalived-cfg 64 | - hostPath: 65 | path: /etc/keepalived/entrypoint.sh 66 | name: entrypoint 67 | -------------------------------------------------------------------------------- /base/haka/haproxy/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM reg.dnt:5000/haproxy:1.6.9-alpine 2 | 3 | MAINTAINER xwisen <1031649164@qq.com> 4 | 5 | ADD ./haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg 6 | 7 | CMD ['haproxy','-f','/usr/local/etc/haproxy/haproxy.cfg'] 8 | -------------------------------------------------------------------------------- /base/haka/haproxy/haproxy.cfg: -------------------------------------------------------------------------------- 1 | global 2 | log 127.0.0.1 local0 info 3 | maxconn 4096 4 | user nobody 5 | group nobody 6 | daemon 7 | nbproc 1 8 | pidfile /run/haproxy.pid 9 | 10 | defaults 11 | mode http 12 | retries 3 13 | maxconn 20000 14 | timeout connect 10s 15 | timeout client 30s 16 | timeout server 30s 17 | timeout check 2s 18 | 19 | listen admin_stats 20 | bind 0.0.0.0:10080 21 | mode http 22 | log 127.0.0.1 local0 err 23 | stats refresh 30s 24 | stats uri /status 25 | stats realm welcome login\ Haproxy 26 | stats auth admin:123456 27 | stats hide-version 28 | stats admin if TRUE 29 | 30 | frontend api 31 | bind *:18080 32 | mode http 33 | option httplog 34 | option forwardfor 35 | log global 36 | default_backend api_backend 37 | 38 | frontend etcd 39 | bind *:14001 40 | mode http 41 | option httplog 42 | option forwardfor 43 | log global 44 | default_backend etcd_backend 45 | 46 | backend api_backend 47 | mode http 48 | option redispatch 49 | option abortonclose 50 | balance source 51 | cookie SERVERID 52 | option httpchk GET /version 53 | server kube-apiserver1 10.78.238.24:8080 cookie api1 weight 2 check inter 2000 rise 2 fall 3 54 | server kube-apiserver2 10.78.238.25:8080 cookie api2 weight 2 check inter 2000 rise 2 fall 3 55 | server kube-apiserver3 10.78.198.69:8080 cookie api3 weight 2 check inter 2000 rise 2 fall 3 56 | 57 | backend etcd_backend 58 | mode http 59 | option redispatch 60 | option abortonclose 61 | balance source 62 | cookie SERVERID 63 | option httpchk GET /version 64 | server etcd1 10.78.238.24:4001 cookie etcd1 weight 2 check inter 2000 rise 2 fall 3 
65 | server etcd2 10.78.238.25:4001 cookie etcd2 weight 2 check inter 2000 rise 2 fall 3 66 | server etcd3 10.78.198.69:4001 cookie etcd3 weight 2 check inter 2000 rise 2 fall 3 67 | -------------------------------------------------------------------------------- /base/haka/haproxy/haproxy.cfg.etc: -------------------------------------------------------------------------------- 1 | #--------------------------------------------------------------------- 2 | # Example configuration for a possible web application. See the 3 | # full configuration options online. 4 | # 5 | # http://haproxy.1wt.eu/download/1.4/doc/configuration.txt 6 | # 7 | #--------------------------------------------------------------------- 8 | 9 | #--------------------------------------------------------------------- 10 | # Global settings 11 | #--------------------------------------------------------------------- 12 | global 13 | # to have these messages end up in /var/log/haproxy.log you will 14 | # need to: 15 | # 16 | # 1) configure syslog to accept network log events. This is done 17 | # by adding the '-r' option to the SYSLOGD_OPTIONS in 18 | # /etc/sysconfig/syslog 19 | # 20 | # 2) configure local2 events to go to the /var/log/haproxy.log 21 | # file. A line like the following can be added to 22 | # /etc/sysconfig/syslog 23 | # 24 | # local2.* /var/log/haproxy.log 25 | # 26 | log 127.0.0.1 local2 27 | 28 | chroot /var/lib/haproxy 29 | pidfile /var/run/haproxy.pid 30 | maxconn 4000 31 | user nobody 32 | group nobody 33 | daemon 34 | 35 | # turn on stats unix socket 36 | # stats socket /var/lib/haproxy/stats 37 | 38 | #--------------------------------------------------------------------- 39 | # common defaults that all the 'listen' and 'backend' sections will 40 | # use if not designated in their block 41 | #--------------------------------------------------------------------- 42 | defaults 43 | mode http 44 | log global 45 | option httplog 46 | option dontlognull 47 | option http-server-close 48 | option forwardfor except 127.0.0.0/8 49 | option redispatch 50 | retries 3 51 | timeout http-request 10s 52 | timeout queue 1m 53 | timeout connect 10s 54 | timeout client 1m 55 | timeout server 1m 56 | timeout http-keep-alive 10s 57 | timeout check 10s 58 | maxconn 3000 59 | 60 | #--------------------------------------------------------------------- 61 | # main frontend which proxys to the backends 62 | #--------------------------------------------------------------------- 63 | frontend main 64 | bind *:80 65 | acl url_static path_beg -i /static /images /javascript /stylesheets 66 | acl url_static path_end -i .jpg .gif .png .css .js 67 | 68 | use_backend static if url_static 69 | default_backend app 70 | 71 | #--------------------------------------------------------------------- 72 | # static backend for serving up images, stylesheets and such 73 | #--------------------------------------------------------------------- 74 | backend static 75 | balance roundrobin 76 | server static 20.26.2.51:8080 check 77 | 78 | #--------------------------------------------------------------------- 79 | # round robin balancing between the various backends 80 | #--------------------------------------------------------------------- 81 | backend app 82 | balance roundrobin 83 | server app1 20.26.2.51:8080 check 84 | server app2 20.26.2.52:8080 check 85 | server app3 20.26.2.53:8080 check 86 | 87 | -------------------------------------------------------------------------------- /base/haka/haproxy/haproxy.sh: 
-------------------------------------------------------------------------------- 1 | docker rm -f haproxy && 2 | docker run -itd --name haproxy --net host -v /root/wz/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg reg.dnt:5000/haproxy:1.6.9-alpine /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -d 3 | -------------------------------------------------------------------------------- /base/haka/keepalived/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM reg.dnt:5000/alterway/keepalived:1.2 2 | 3 | MAINTAINER xwisen <1031649164@qq.com> 4 | 5 | ADD ./* /etc/keepalived/ 6 | ADD ./entrypoint.sh /entrypoint.sh 7 | 8 | CMD ["/entrypoint.sh"] 9 | -------------------------------------------------------------------------------- /base/haka/keepalived/entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | /usr/sbin/keepalived -P -C -d -D -S 7 -f /etc/keepalived/keepalived.conf --dont-fork --log-console 3 | 4 | -------------------------------------------------------------------------------- /base/haka/keepalived/health.sh: -------------------------------------------------------------------------------- 1 | curl http://127.0.0.1:10080/status > /dev/null 2 | -------------------------------------------------------------------------------- /base/haka/keepalived/keepalived.conf.backup1: -------------------------------------------------------------------------------- 1 | vrrp_script chk_haproxy { 2 | script "/etc/keepalived/health.sh" #服务探测,返回0说明服务是正常的 3 | interval 1 #每隔1秒探测一次 4 | weight 2 #haproxy权重 5 | } 6 | # 7 | vrrp_instance VI_1 { 8 | state BACKUP 9 | interface ens32 #使用的网卡名称 10 | virtual_router_id 100 #虚拟路由id,处于同一keepalived必须保证一致 11 | garp_master_delay 1 12 | priority 99 #优先级,越大越优先 13 | advert_int 1 14 | authentication { 15 | auth_type PASS 16 | auth_pass 123456 17 | } 18 | # 19 | virtual_ipaddress { 20 | 20.26.2.110/24 dev ens32 #虚IP 21 | } 22 | track_interface { 23 | ens32 24 | } 25 | # 26 | track_script { #脚本追踪 27 | chk_haproxy 28 | } 29 | } 30 | -------------------------------------------------------------------------------- /base/haka/keepalived/keepalived.conf.backup2: -------------------------------------------------------------------------------- 1 | vrrp_script chk_haproxy { 2 | script "/etc/keepalived/health.sh" #服务探测,返回0说明服务是正常的 3 | interval 1 #每隔1秒探测一次 4 | weight 2 #haproxy权重 5 | } 6 | # 7 | vrrp_instance VI_1 { 8 | state BACKUP 9 | interface ens32 #使用的网卡名称 10 | virtual_router_id 100 #虚拟路由id,处于同一keepalived必须保证一致 11 | garp_master_delay 1 12 | priority 98 #优先级,越大越优先 13 | advert_int 1 14 | authentication { 15 | auth_type PASS 16 | auth_pass 123456 17 | } 18 | # 19 | virtual_ipaddress { 20 | 20.26.2.110/24 dev ens32 21 | } 22 | track_interface { 23 | ens32 24 | } 25 | # 26 | track_script { #脚本追踪 27 | chk_haproxy 28 | } 29 | } 30 | -------------------------------------------------------------------------------- /base/haka/keepalived/keepalived.conf.master: -------------------------------------------------------------------------------- 1 | vrrp_script chk_haproxy { 2 | script "/etc/keepalived/health.sh" #服务探测,返回0说明服务是正常的 3 | interval 1 #每隔1秒探测一次 4 | weight 2 #haproxy权重 5 | } 6 | # 7 | vrrp_instance VI_1 { 8 | state MASTER 9 | interface ens32 #使用的网卡名称 10 | virtual_router_id 100 #虚拟路由id,处于同一keepalived必须保证一致 11 | garp_master_delay 1 12 | priority 100 #优先级,越大越优先 13 | advert_int 1 14 | authentication { 15 | auth_type PASS 16 | auth_pass 123456 17 | 
} 18 | # 19 | virtual_ipaddress { 20 | 20.26.2.110/24 dev ens32 #虚IP 21 | } 22 | track_interface { 23 | ens32 24 | } 25 | # 26 | track_script { #脚本追踪 27 | chk_haproxy 28 | } 29 | } 30 | -------------------------------------------------------------------------------- /base/haka/keepalived/notify.sh: -------------------------------------------------------------------------------- 1 | # description: An example of notify script 2 | # 3 | vip=172.17.7.88 4 | contact='root@localhost' 5 | notify() { 6 | mailsubject="`hostname` to be $1: $vip floating" 7 | mailbody="`date '+%F %H:%M:%S'`: vrrp transition, `hostname` changed to be $1" 8 | echo $mailbody | mail -s "$mailsubject" $contact 9 | } 10 | case "$1" in 11 | master) 12 | #notify master 13 | /etc/rc.d/init.d/haproxy start 14 | exit 0 15 | ;; 16 | backup) 17 | #notify backup 18 | /etc/rc.d/init.d/haproxy stop 19 | exit 0 20 | ;; 21 | fault) 22 | #notify fault 23 | /etc/rc.d/init.d/haproxy stop 24 | exit 0 25 | ;; 26 | *) 27 | echo 'Usage: `basename $0` {master|backup|fault}' 28 | exit 1 29 | ;; 30 | esac 31 | -------------------------------------------------------------------------------- /base/k8s-log.cron: -------------------------------------------------------------------------------- 1 | for log in `ls /data/log/*.log`; 2 | do 3 | tail -c 100m $log > $log.tmp; 4 | cat $log.tmp > $log; 5 | rm -rf $log.tmp 6 | done 7 | -------------------------------------------------------------------------------- /base/master/etcd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: etcd-server 5 | namespace: system 6 | labels: 7 | app: "etcd-server" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - image: gcr.io/coreos/etcd:v3.0.17 14 | name: etcd-container 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /usr/local/bin/etcd --name $NAME --initial-advertise-peer-urls http://$ADDR:7001 --listen-peer-urls http://$ADDR:7001 --advertise-client-urls http://$ADDR:4001 --listen-client-urls http://0.0.0.0:4001 --data-dir /var/etcd/data --initial-cluster-token etcd123456 --initial-cluster-state new --initial-cluster master1=http://$MASTER1:7001,master2=http://$MASTER2:7001,master3=http://$MASTER3:7001 1>>/var/log/etcd.log 2>&1 19 | resources: 20 | limits: 21 | cpu: 1000m 22 | memory: 2048Mi 23 | requests: 24 | cpu: 500m 25 | memory: 1024Mi 26 | livenessProbe: 27 | httpGet: 28 | path: /version 29 | port: 4001 30 | initialDelaySeconds: 30 31 | timeoutSeconds: 30 32 | env: 33 | - name: NAME 34 | value: master3 35 | - name: ADDR 36 | value: 20.26.28.85 37 | - name: MASTER1 38 | value: 20.26.28.83 39 | - name: MASTER2 40 | value: 20.26.28.84 41 | - name: MASTER3 42 | value: 20.26.28.85 43 | volumeMounts: 44 | - mountPath: /var/etcd 45 | name: varetcd 46 | - mountPath: /etc/ssl 47 | name: etcssl 48 | readOnly: true 49 | - mountPath: /usr/share/ssl 50 | name: usrsharessl 51 | readOnly: true 52 | - mountPath: /var/ssl 53 | name: varssl 54 | readOnly: true 55 | - mountPath: /usr/ssl 56 | name: usrssl 57 | readOnly: true 58 | - mountPath: /usr/lib/ssl 59 | name: usrlibssl 60 | readOnly: true 61 | - mountPath: /usr/local/openssl 62 | name: usrlocalopenssl 63 | readOnly: true 64 | - mountPath: /etc/openssl 65 | name: etcopenssl 66 | readOnly: true 67 | - mountPath: /etc/pki/tls 68 | name: etcpkitls 69 | readOnly: true 70 | - mountPath: /var/log/etcd.log 71 | name: logfile 72 | volumes: 73 | - hostPath: 74 | path: /data/etcd/data 75 | name: 
varetcd 76 | - hostPath: 77 | path: /etc/ssl 78 | name: etcssl 79 | - hostPath: 80 | path: /usr/share/ssl 81 | name: usrsharessl 82 | - hostPath: 83 | path: /var/ssl 84 | name: varssl 85 | - hostPath: 86 | path: /usr/ssl 87 | name: usrssl 88 | - hostPath: 89 | path: /usr/lib/ssl 90 | name: usrlibssl 91 | - hostPath: 92 | path: /usr/local/openssl 93 | name: usrlocalopenssl 94 | - hostPath: 95 | path: /etc/openssl 96 | name: etcopenssl 97 | - hostPath: 98 | path: /etc/pki/tls 99 | name: etcpkitls 100 | - hostPath: 101 | path: /data/logs/base/etcd.log 102 | name: logfile 103 | -------------------------------------------------------------------------------- /base/master/flannel.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: flannel-server 5 | namespace: system 6 | labels: 7 | app: "flannel-server" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - name: flannel-server 14 | image: reg.dnt:5000/coreos/flannel:v0.6.2 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /opt/bin/flanneld --logtostderr -ip-masq -etcd-endpoints http://127.0.0.1:4001 -etcd-prefix /kubernetes/network 1>>/var/log/flannel.log 2>&1 19 | resources: 20 | limits: 21 | cpu: 1000m 22 | memory: 2048Mi 23 | requests: 24 | cpu: 500m 25 | memory: 1024Mi 26 | volumeMounts: 27 | - mountPath: /var/log/flannel.log 28 | name: logfile 29 | - mountPath: /run/flannel 30 | name: flannelenv 31 | - mountPath: /dev/net 32 | name: flannelnet 33 | securityContext: 34 | privileged: true 35 | volumes: 36 | - hostPath: 37 | path: /data/logs/base/flannel.log 38 | name: logfile 39 | - hostPath: 40 | path: /run/flannel 41 | name: flannelenv 42 | - hostPath: 43 | path: /dev/net 44 | name: flannelnet 45 | -------------------------------------------------------------------------------- /base/master/kube-apiserver.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-apiserver 5 | namespace: system 6 | labels: 7 | app: "kube-apiserver" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | containers: 13 | - name: kube-apiserver 14 | image: gcr.io/google_containers/kube-apiserver:6987e76bea391a234a856fbdac637d66-v1.6.1 15 | command: 16 | - /bin/sh 17 | - -c 18 | - /usr/local/bin/kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --service-cluster-ip-range=10.0.0.0/16 --v=2 --allow-privileged=True 1>>/var/log/kube-apiserver.log 2>&1 19 | resources: 20 | limits: 21 | cpu: 2000m 22 | memory: 4096Mi 23 | requests: 24 | cpu: 1000m 25 | memory: 2048Mi 26 | livenessProbe: 27 | httpGet: 28 | path: /healthz 29 | port: 8080 30 | initialDelaySeconds: 30 31 | timeoutSeconds: 30 32 | volumeMounts: 33 | - mountPath: /srv/kubernetes 34 | name: srvkube 35 | readOnly: true 36 | - mountPath: /var/log/kube-apiserver.log 37 | name: logfile 38 | - mountPath: /etc/ssl 39 | name: etcssl 40 | readOnly: true 41 | - mountPath: /usr/share/ssl 42 | name: usrsharessl 43 | readOnly: true 44 | - mountPath: /var/ssl 45 | name: varssl 46 | readOnly: true 47 | - mountPath: /usr/ssl 48 | name: usrssl 49 | readOnly: true 50 | - mountPath: /usr/lib/ssl 51 | name: usrlibssl 52 | readOnly: true 53 | - mountPath: /usr/local/openssl 54 | name: usrlocalopenssl 55 | readOnly: true 56 | - mountPath: /etc/openssl 57 | name: etcopenssl 58 | readOnly: true 59 | - mountPath: /etc/pki/tls 60 | name: 
etcpkitls 61 | readOnly: true 62 | volumes: 63 | - hostPath: 64 | path: /srv/kubernetes 65 | name: srvkube 66 | - hostPath: 67 | path: /data/logs/base/kube-apiserver.log 68 | name: logfile 69 | - hostPath: 70 | path: /etc/ssl 71 | name: etcssl 72 | - hostPath: 73 | path: /usr/share/ssl 74 | name: usrsharessl 75 | - hostPath: 76 | path: /var/ssl 77 | name: varssl 78 | - hostPath: 79 | path: /usr/ssl 80 | name: usrssl 81 | - hostPath: 82 | path: /usr/lib/ssl 83 | name: usrlibssl 84 | - hostPath: 85 | path: /usr/local/openssl 86 | name: usrlocalopenssl 87 | - hostPath: 88 | path: /etc/openssl 89 | name: etcopenssl 90 | - hostPath: 91 | path: /etc/pki/tls 92 | name: etcpkitls 93 | -------------------------------------------------------------------------------- /base/master/kube-controller-manager.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-controller-manager 5 | namespace: system 6 | labels: 7 | app: "kube-controller-manager" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | containers: 12 | - command: 13 | - /bin/sh 14 | - -c 15 | - /usr/local/bin/kube-controller-manager --master=127.0.0.1:8080 --cluster-cidr=10.245.0.0/16 --allocate-node-cidrs=true --v=2 --leader-elect=true 1>>/var/log/kube-controller-manager.log 2>&1 16 | image: gcr.io/google_containers/kube-controller-manager:27b2a3c3a09e6d502e56d7abc69dc8c9-v1.6.1 17 | resources: 18 | limits: 19 | cpu: 1000m 20 | memory: 2048Mi 21 | requests: 22 | cpu: 500m 23 | memory: 1024Mi 24 | livenessProbe: 25 | httpGet: 26 | path: /healthz 27 | port: 10252 28 | initialDelaySeconds: 15 29 | timeoutSeconds: 1 30 | name: kube-controller-manager 31 | volumeMounts: 32 | - mountPath: /srv/kubernetes 33 | name: srvkube 34 | readOnly: true 35 | - mountPath: /var/log/kube-controller-manager.log 36 | name: logfile 37 | - mountPath: /etc/ssl 38 | name: etcssl 39 | readOnly: true 40 | - mountPath: /usr/share/ssl 41 | name: usrsharessl 42 | readOnly: true 43 | - mountPath: /var/ssl 44 | name: varssl 45 | readOnly: true 46 | - mountPath: /usr/ssl 47 | name: usrssl 48 | readOnly: true 49 | - mountPath: /usr/lib/ssl 50 | name: usrlibssl 51 | readOnly: true 52 | - mountPath: /usr/local/openssl 53 | name: usrlocalopenssl 54 | readOnly: true 55 | - mountPath: /etc/openssl 56 | name: etcopenssl 57 | readOnly: true 58 | - mountPath: /etc/pki/tls 59 | name: etcpkitls 60 | readOnly: true 61 | hostNetwork: true 62 | volumes: 63 | - hostPath: 64 | path: /srv/kubernetes 65 | name: srvkube 66 | - hostPath: 67 | path: /data/logs/base/kube-controller-manager.log 68 | name: logfile 69 | - hostPath: 70 | path: /etc/ssl 71 | name: etcssl 72 | - hostPath: 73 | path: /usr/share/ssl 74 | name: usrsharessl 75 | - hostPath: 76 | path: /var/ssl 77 | name: varssl 78 | - hostPath: 79 | path: /usr/ssl 80 | name: usrssl 81 | - hostPath: 82 | path: /usr/lib/ssl 83 | name: usrlibssl 84 | - hostPath: 85 | path: /usr/local/openssl 86 | name: usrlocalopenssl 87 | - hostPath: 88 | path: /etc/openssl 89 | name: etcopenssl 90 | - hostPath: 91 | path: /etc/pki/tls 92 | name: etcpkitls 93 | -------------------------------------------------------------------------------- /base/master/kube-scheduler.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-scheduler 5 | namespace: system 6 | labels: 7 | app: "kube-scheduler" 8 | enable: "true" 9 | service: "base" 10 | spec: 11 | hostNetwork: true 12 | 
containers: 13 | - name: kube-scheduler 14 | image: reg.dnt:5000/google_containers/kube-scheduler:v1.4.0 15 | image: gcr.io/google_containers/kube-scheduler:67021c49b24e106a323b398aa7ee95a2-v1.6.1 16 | command: 17 | - /bin/sh 18 | - -c 19 | - /usr/local/bin/kube-scheduler --master=127.0.0.1:8080 --leader-elect=true 1>>/var/log/kube-scheduler.log 2>&1 20 | resources: 21 | limits: 22 | cpu: 500m 23 | memory: 512Mi 24 | requests: 25 | cpu: 500m 26 | memory: 512Mi 27 | livenessProbe: 28 | httpGet: 29 | path: /healthz 30 | port: 10251 31 | initialDelaySeconds: 15 32 | timeoutSeconds: 1 33 | volumeMounts: 34 | - mountPath: /var/log/kube-scheduler.log 35 | name: logfile 36 | volumes: 37 | - hostPath: 38 | path: /data/logs/base/kube-scheduler.log 39 | name: logfile 40 | -------------------------------------------------------------------------------- /base/master/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Kubernetes Kubelet 3 | After=docker.service 4 | Requires=docker.service 5 | 6 | [Service] 7 | ExecStart=/usr/local/bin/kubelet --register-schedulable=false --allow-privileged=true --logtostderr=true --address=0.0.0.0 --port=10250 --cluster-dns=10.0.1.1 --cluster-domain=cluster.local --pod-infra-container-image=gcr.io/google_containers/pause:3.0 --api-servers=http://127.0.0.1:8080 --pod-manifest-path=/data/kubernetes/manifests 8 | Restart=on-failure 9 | KillMode=process 10 | 11 | [Install] 12 | WantedBy=multi-user.target 13 | 14 | -------------------------------------------------------------------------------- /base/tools.sh: -------------------------------------------------------------------------------- 1 | # SHELL 2 | # *********************************************** 3 | # 4 | # Filename: regC.sh 5 | # 6 | # Author: xwisen 1031649164@qq.com 7 | # Description: --- 8 | # Create: 2016-11-25 10:24:04 9 | # Last Modified: 2016-11-25 10:24:04 10 | # *********************************************** 11 | 12 | #eg: regc_search 13 | #eg: regc_search nginx 14 | function regc_search() { 15 | REG_INFO=${REG_INFO:-reg.dnt:5000} 16 | if [[ -z $1 ]];then 17 | #curl http://${REG_HOST}:5000/v2/_catalog | jq . 
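    # The inline python below is equivalent to the curl+jq pipeline above: it
    # lists every repository from /v2/_catalog, then queries /v2/<repo>/tags/list
    # and prints pullable <registry>/<repo>:<tag> references (or ":deleted" when
    # a repository has no tags left).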
18 | python -c ' 19 | import urllib2,json 20 | try: 21 | resp=urllib2.urlopen("http://"+"'${REG_INFO}'"+"/v2/_catalog") 22 | except urllib2.HTTPError as e: 23 | print(e) 24 | exit(1) 25 | repos=json.loads(resp.read())["repositories"][0:] 26 | #repos= "\n".join(str(image) for image in data["repositories"][0:]) 27 | for repo in repos: 28 | try: 29 | resp=urllib2.urlopen("http://"+"'${REG_INFO}'"+"/v2/"+repo+"/tags/list") 30 | except urllib2.HTTPError as e: 31 | print(e) 32 | exit(1) 33 | tags=json.loads(resp.read())["tags"] 34 | if tags: 35 | for tag in tags: 36 | print("'${REG_INFO}'"+"/"+repo+":"+tag) 37 | else: 38 | print("'${REG_INFO}'"+"/"+repo+":"+"deleted") 39 | ' 40 | else 41 | #curl http://${REG_HOST}:5000/v2/$1/tags/list | jq .tags 42 | REPO=${1:-"busybox"} 43 | python -c ' 44 | import urllib2,json 45 | try: 46 | resp=urllib2.urlopen("http://"+"'${REG_INFO}'"+"/v2/"+"'${REPO}'"+"/tags/list") 47 | except urllib2.HTTPError as e: 48 | print(e) 49 | exit(1) 50 | tags=json.loads(resp.read())["tags"] 51 | if tags: 52 | for tag in tags: 53 | print("'${REG_INFO}'"+"/"+"'${REPO}'"+":"+tag) 54 | else: 55 | print("'${REG_INFO}'"+"/"+"'${REPO}'"+":"+"deleted") 56 | ' 57 | fi 58 | } 59 | 60 | #eg: regc_del reg.dnt:5000/nginx:1.9 61 | function regc_del() { 62 | REG_INFO=${REG_INFO:-reg.dnt:5000} 63 | if [[ $# -ne 1 ]];then 64 | echo "Example: " 65 | echo "regc_del ${REG_INFO}/nginx:1.9" 66 | return 1 67 | fi 68 | #echo "image need delete is : $1" 69 | reg_info=`echo $1 | cut -d "/" -f 1` 70 | image_name=`echo $1 | cut -d "/" -f 2- | cut -d ":" -f 1` 71 | tag_name=`echo $1 | cut -d "/" -f 2- | cut -d ":" -f 2` 72 | #echo "reg info is : $reg_info" 73 | #echo "image name is : $image_name" 74 | #echo "tag name is : $tag_name" 75 | python -c ' 76 | import urllib2,json 77 | try: 78 | req=urllib2.Request("http://"+"'${reg_info}'"+"/v2/"+"'${image_name}'"+"/manifests/"+"'${tag_name}'") 79 | req.add_header("Accept","application/vnd.docker.distribution.manifest.v2+json") 80 | req.get_method=lambda: "GET" 81 | resp=urllib2.urlopen(req) 82 | except urllib2.HTTPError as e: 83 | print(e) 84 | exit(1) 85 | #print(str(resp.read())) 86 | try: 87 | digest=resp.info()["Docker-Content-Digest"] 88 | req=urllib2.Request("http://"+"'${reg_info}'"+"/v2/"+"'${image_name}'"+"/manifests/"+digest) 89 | req.add_header("Accept","application/vnd.docker.distribution.manifest.v2+json") 90 | req.get_method=lambda: "DELETE" 91 | resp=urllib2.urlopen(req) 92 | except urllib2.HTTPError as e: 93 | print(e) 94 | exit(1) 95 | print("delete image : "+"'${1}'" + " succeed !") 96 | #print(str(resp.read())) 97 | # other useful note 98 | #print(resp.info()["Docker-Content-Digest"]) 99 | #headers=resp.info() 100 | #print(headers.getheaders("Docker-Content-Digest")) 101 | #print(json.loads(resp.read())) 102 | ' 103 | } 104 | 105 | #eg: yamltojson test.yaml test.json 106 | function yamltojson () { 107 | python -c ' 108 | import sys, yaml, json; 109 | json.dump(yaml.load(sys.stdin), sys.stdout, indent=4) 110 | '< $1 > $2 111 | if [[ $? -ne 0 ]];then 112 | echo ">>>>>>convert failed !" 113 | else 114 | echo ">>>>>>convert succeed ! input is : $1, output is : $2" 115 | fi 116 | } 117 | 118 | #eg: jsontoyaml test.json test.yaml 119 | function jsontoyaml () { 120 | python -c ' 121 | import sys, yaml, json; 122 | yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False) 123 | '< $1 > $2 124 | if [[ $? -ne 0 ]];then 125 | echo ">>>>>>convert failed !" 126 | else 127 | echo ">>>>>>convert succeed ! 
input is : $1, output is : $2" 128 | fi 129 | } 130 | -------------------------------------------------------------------------------- /doc/base_env.md: -------------------------------------------------------------------------------- 1 | # 基础镜像和环境准备 2 | 1. 基础镜像(二进制文件)(以1.6.1为例): 3 | * 可以从[kubernetes release server](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md)获得的(一般需要下载server包即可,解压文件后,使用docker load 命令把tar包load成可用镜像(如有需要打上自己想要的tag)。示例: `docker load -i kube-apiserver.tar`): 4 | ```shell 5 | gcr.io/google_containers/kube-apiserver:6987e76bea391a234a856fbdac637d66-v1.6.1 6 | gcr.io/google_containers/kube-controller-manager:27b2a3c3a09e6d502e56d7abc69dc8c9-v1.6.1 7 | gcr.io/google_containers/kube-scheduler:67021c49b24e106a323b398aa7ee95a2-v1.6. 8 | gcr.io/google_containers/kube-proxy:d9f201c130ce77ce273f486e147f0ee1-v1.6.1 9 | kubectl/kubelet 10 | ``` 11 | * 其他镜像(二进制文件) 12 | ```shell 13 | docker pull quay.io/coreos/etcd:v3.0.17 && docker tag quay.io/coreos/etcd:v3.0.17 gcr.io/coreos/etcd:v3.0.17 14 |  docker pull gcr.io/google_containers/pause:3.0 15 | ``` 16 | * 插件镜像(二进制文件) 17 | * 网络组件镜像(二进制文件) 18 | 2. 日志目录/文件创建/hosts设置 19 | ```shell 20 | pssh -i -t 1200 -h /etc/nhosts mkdir -vp /data/logs/base /data/kubernetes/manifests /data/logs/app /data/etcd/data 21 | pssh -i -t 1200 -h /etc/nhosts touch /data/logs/base/{etcd.log,kube-apiserver.log,kube-controller-manager.log,kube-scheduler.log,kube-proxy.log,flannel.log} 22 | ``` 23 | 26 | 3. 确保所有节点[docker](/base/docker.service)服务正常运行 27 | -------------------------------------------------------------------------------- /doc/haproxy_keepalived.md: -------------------------------------------------------------------------------- 1 | 2 | # 使用haproxy负载减轻单个kube-apiserver压力,并使用keepalived保证高可用 3 | 在每个master节点上部署haproxy + keepalived服务。 4 | * keepalived 部分功能需要内核支持: 5 | ```shell 6 | modprobe ip_vs 7 | modprobe ip_vs_rr 8 | modprobe ip_vs_wrr 9 | lsmod | grep ip_vs 10 | sysctl -w net.ipv4.ip_forward=1 11 | sysctl -w net.ipv4.ip_nonlocal_bind=1 12 | ``` 13 | * keepalived 配置文件每个节点都不一样.示例: [主](/base/haka/keepalived/keepalived.conf.master)--[备1](/base/haka/keepalived/keepalived.conf.backup1)--[备2](/base/haka/keepalived/keepalived.conf.backup2) 14 | * [haproxy-keepalived.yaml](/base/haka/haproxy-keepalived.yaml)需要修改字段如下: 15 | ```yaml 16 | image:[自己镜像名字,haproxy 和 keepalived都需要修改] 17 | volumes: 18 | - hostPath: 19 | path: /etc/haproxy/haproxy.cfg [haproxy 配置文件位置] 20 | name: haproxy-cfg 21 | - hostPath: 22 | path: /etc/keepalived [keepalived 配置文件位置] 23 | name: keepalived-cfg 24 | - hostPath: 25 | path: /etc/keepalived/entrypoint.sh [entrypoint.sh 文件位置] 26 | name: entrypoint 27 | ``` 28 | * [haproxy.cfg](/base/haka/haproxy/haproxy.cfg)文件中后端实例信息需要修改。示例文件中对kube-apiserver和etcd都做了负载均衡。 29 | * [haproxy](/base/haka/haproxy/Dockerfile) 和 [keepalived](/base/haka/keepalived/Dockerfile) 镜像制作请参考其Dockerfile,基础镜像根据后缀可在hub上找到。 30 | * 修改好之后将yaml文件拷贝到kubelet配置`--config=/data/kubernetes/manifests`所在目录下 31 | -------------------------------------------------------------------------------- /doc/k8s_master_moudle.md: -------------------------------------------------------------------------------- 1 | # master节点部署 2 | 1. 运行kubelet服务。centos7.2 systemd service文件可参考[master kubelet.service](/base/master/kubelet.service) 3 | > [kubelet.service](/base/master/kubelet.service)需要修改: 4 | 5 | ``` 6 | /usr/local/bin/kubelet //二进制路径 7 | --pod-infra-container-image= //自己的基础镜像 8 | ``` 9 | 14 | > **再次确保docker/kubelet服务处于running状态,下面开始安装etcd以及kubernetes master服务组件**
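> A quick way to confirm this on every node at once. The sketch below simply reuses the `pssh` host list (`/etc/nhosts`) introduced in [base_env.md](/doc/base_env.md); both units should report `active`:
```shell
# check docker and kubelet on all hosts listed in /etc/nhosts
pssh -i -h /etc/nhosts "systemctl is-active docker kubelet"
```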
15 | 2. 部署etcd集群: 在每个master节点上启动etcd服务,并配置为集群模式。[etcd.yaml](/base/master/etcd.yaml)需要修改字段如下: 16 | ```yaml 17 | image: gcr.io/coreos/etcd:v3.0.17//自己镜像名字 18 | env: 19 | - name: NAME 20 |  value: master3 //本etcd名字 21 | - name: ADDR 22 |  value: 20.26.28.85//本etcd ip 23 | - name: MASTER1 24 | value: 20.26.28.83//etcd1 ip 25 | - name: MASTER2 26 | value: 20.26.28.84//etcd2 ip 27 | - name: MASTER3 28 | value: 20.26.28.85//etcd3 ip 29 | volume: 30 | - hostPath: 31 |  path: /data/etcd/data //etcd数据 32 |  name: varetcd 33 | - hostPath: 34 |  path: /data/logs/base/etcd.log //日志外挂路径,需确认宿主机是否存在该文件 35 | name: logfile 36 | ``` 37 | * 修改好之后将yaml文件拷贝到kubelet配置`--pod-manifest-path=/data/kubernetes/manifests`所在目录下 38 | * 部署完之后使用`etcdctl member list` 命令确认etcd集群是否就绪。 39 | 3. 部署kube-apiserver: 在每个master节点上部署kube-apiserver服务。[kube-apiserver.yaml](/base/master/kube-apiserver.yaml)需要修改字段如下: 40 | ```yaml 41 | image: gcr.io/google_containers/kube-apiserver:6987e76bea391a234a856fbdac637d66-v1.6.1//自己镜像名字 42 | volume: 43 | - hostPath: 44 | path: /data/logs/base/kube-apiserver.log //日志外挂路径,需确认宿主机是否存在该文件 45 | name: logfile 46 | ``` 47 | * 修改好之后将yaml文件拷贝到kubelet配置`--pod-manifest-path=/data/kubernetes/manifests`所在目录下 48 | 4. 部署kube-controller-manager: 在每个master节点上部署kube-controller-manager服务。[kube-controller-manager.yaml](/base/master/kube-controller-manager.yaml)需要修改字段如下: 49 | ```yaml 50 | image: gcr.io/google_containers/kube-controller-manager:27b2a3c3a09e6d502e56d7abc69dc8c9-v1.6.1//自己镜像名字 51 | volume: 52 | - hostPath: 53 | path: /data/logs/base/kube-controller-manager.log //日志外挂路径,需确认宿主机是否存在该文件 54 | name: logfile 55 | ``` 56 | * 修改好之后将yaml文件拷贝到kubelet配置`--pod-manifest-path=/data/kubernetes/manifests`所在目录下 57 | 5. 部署kube-scheduler: 在每个master节点上部署kube-scheduler服务。[kube-scheduler.yaml](/base/master/kube-scheduler.yaml)需要修改字段如下: 58 | ```yaml 59 | image: gcr.io/google_containers/kube-scheduler:67021c49b24e106a323b398aa7ee95a2-v1.6.1//自己镜像名字 60 | volume: 61 | - hostPath: 62 | path: /data/logs/base/kube-scheduler.log //日志外挂路径,需确认宿主机是否存在该文件 63 | name: logfile 64 | ``` 65 | * 修改好之后将yaml文件拷贝到kubelet配置`--pod-manifest-path=/data/kubernetes/manifests`所在目录下 66 | > 到这里,kubernetes相关组件都已经安装完毕,执行如下命令确认集群状态是否正确。如有错请至`/data/logs/base/`目录下查看日志: 67 | ```shell 68 | [root@csv-xzcs01 master]# kubectl get cs 69 | NAME STATUS MESSAGE ERROR 70 | scheduler Healthy ok 71 | controller-manager Healthy ok 72 | etcd-0 Healthy {"health": "true"} 73 | [root@csv-xzcs01 master]# kubectl get pod --namespace=system 74 | NAME READY STATUS RESTARTS AGE 75 | etcd-server-csv-xzcs01 1/1 Running 0 11m 76 | etcd-server-csv-xzcs02 1/1 Running 0 8m 77 | etcd-server-csv-xzcs03 1/1 Running 0 8m 78 | kube-apiserver-csv-xzcs01 1/1 Running 0 12m 79 | kube-apiserver-csv-xzcs02 1/1 Running 0 8m 80 | kube-apiserver-csv-xzcs03 1/1 Running 0 8m 81 | kube-controller-manager-csv-xzcs01 1/1 Running 1 5m 82 | kube-controller-manager-csv-xzcs02 1/1 Running 2 5m 83 | kube-controller-manager-csv-xzcs03 1/1 Running 0 5m 84 | kube-scheduler-csv-xzcs01 1/1 Running 0 1m 85 | kube-scheduler-csv-xzcs02 1/1 Running 0 1m 86 | kube-scheduler-csv-xzcs03 1/1 Running 1 3m 87 | ``` 88 | -------------------------------------------------------------------------------- /doc/kubernetes生产环境配置信息.xlsx: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/xwisen/deployk8s/74831a5f1c347d1b56f97d077bbdd6da2eb4d791/doc/kubernetes生产环境配置信息.xlsx -------------------------------------------------------------------------------- 
/images/Architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/xwisen/deployk8s/74831a5f1c347d1b56f97d077bbdd6da2eb4d791/images/Architecture.png -------------------------------------------------------------------------------- /network/calico/calico-without-auth.yaml: -------------------------------------------------------------------------------- 1 | # This ConfigMap is used to configure a self-hosted Calico installation. 2 | kind: ConfigMap 3 | apiVersion: v1 4 | metadata: 5 | name: calico-config 6 | namespace: kube-system 7 | data: 8 | # Configure this with the location of your etcd cluster. 9 | etcd_endpoints: "http://20.26.2.110:14001" 10 | 11 | # Configure the Calico backend to use. 12 | calico_backend: "bird" 13 | 14 | # The CNI network configuration to install on each node. 15 | cni_network_config: |- 16 | { 17 | "name": "k8s-pod-network", 18 | "type": "calico", 19 | "etcd_endpoints": "http://20.26.2.110:14001", 20 | "log_level": "info", 21 | "ipam": { 22 | "type": "calico-ipam" 23 | }, 24 | "policy": { 25 | "type": "k8s", 26 | "k8s_api_root": "http://20.26.2.110:18080" 27 | }, 28 | "kubernetes": { 29 | "kubeconfig": "/root/.kube/config" 30 | } 31 | } 32 | 33 | # The default IP Pool to be created for the cluster. 34 | # Pod IP addresses will be assigned from this pool. 35 | ippool.yaml: | 36 | apiVersion: v1 37 | kind: ipPool 38 | metadata: 39 | cidr: 10.100.0.0/16 40 | spec: 41 | ipip: 42 | enabled: true 43 | nat-outgoing: true 44 | 45 | # If you're using TLS enabled etcd uncomment the following. 46 | # You must also populate the Secret below with these files. 47 | etcd_ca: "" # "/calico-secrets/etcd-ca" 48 | etcd_cert: "" # "/calico-secrets/etcd-cert" 49 | etcd_key: "" # "/calico-secrets/etcd-key" 50 | 51 | --- 52 | 53 | # The following contains k8s Secrets for use with a TLS enabled etcd cluster. 54 | # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/ 55 | apiVersion: v1 56 | kind: Secret 57 | type: Opaque 58 | metadata: 59 | name: calico-etcd-secrets 60 | namespace: kube-system 61 | data: 62 | # Populate the following files with etcd TLS configuration if desired, but leave blank if 63 | # not using TLS for etcd. 64 | # This self-hosted install expects three files with the following names. The values 65 | # should be base64 encoded strings of the entire contents of each file. 66 | # etcd-key: null 67 | # etcd-cert: null 68 | # etcd-ca: null 69 | 70 | --- 71 | 72 | # This manifest installs the calico/node container, as well 73 | # as the Calico CNI plugins and network config on 74 | # each master and worker node in a Kubernetes cluster. 75 | kind: DaemonSet 76 | apiVersion: extensions/v1beta1 77 | metadata: 78 | name: calico-node 79 | namespace: kube-system 80 | labels: 81 | k8s-app: calico-node 82 | spec: 83 | selector: 84 | matchLabels: 85 | k8s-app: calico-node 86 | template: 87 | metadata: 88 | labels: 89 | k8s-app: calico-node 90 | annotations: 91 | scheduler.alpha.kubernetes.io/critical-pod: '' 92 | scheduler.alpha.kubernetes.io/tolerations: | 93 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 94 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 95 | spec: 96 | hostNetwork: true 97 | containers: 98 | # Runs calico/node container on each Kubernetes node. This 99 | # container programs network policy and routes on each 100 | # host. 
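      # Because it programs routes and iptables on the node itself, the pod
      # runs with hostNetwork: true and this container is privileged (see the
      # securityContext further down).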
101 | - name: calico-node 102 | image: reg.dnt:5000/calico/node:v1.0.0 103 | env: 104 | # The location of the Calico etcd cluster. 105 | - name: ETCD_ENDPOINTS 106 | valueFrom: 107 | configMapKeyRef: 108 | name: calico-config 109 | key: etcd_endpoints 110 | # Choose the backend to use. 111 | - name: CALICO_NETWORKING_BACKEND 112 | valueFrom: 113 | configMapKeyRef: 114 | name: calico-config 115 | key: calico_backend 116 | # Disable file logging so `kubectl logs` works. 117 | - name: CALICO_DISABLE_FILE_LOGGING 118 | value: "true" 119 | # Don't configure a default pool. This is done by the Job 120 | # below. 121 | - name: NO_DEFAULT_POOLS 122 | value: "true" 123 | - name: FELIX_LOGSEVERITYSCREEN 124 | value: "info" 125 | # Location of the CA certificate for etcd. 126 | - name: ETCD_CA_CERT_FILE 127 | valueFrom: 128 | configMapKeyRef: 129 | name: calico-config 130 | key: etcd_ca 131 | # Location of the client key for etcd. 132 | - name: ETCD_KEY_FILE 133 | valueFrom: 134 | configMapKeyRef: 135 | name: calico-config 136 | key: etcd_key 137 | # Location of the client certificate for etcd. 138 | - name: ETCD_CERT_FILE 139 | valueFrom: 140 | configMapKeyRef: 141 | name: calico-config 142 | key: etcd_cert 143 | # Auto-detect the BGP IP address. 144 | - name: IP 145 | value: "" 146 | securityContext: 147 | privileged: true 148 | volumeMounts: 149 | - mountPath: /lib/modules 150 | name: lib-modules 151 | readOnly: true 152 | - mountPath: /var/run/calico 153 | name: var-run-calico 154 | readOnly: false 155 | - mountPath: /calico-secrets 156 | name: etcd-certs 157 | # This container installs the Calico CNI binaries 158 | # and CNI network config file on each node. 159 | - name: install-cni 160 | image: reg.dnt:5000/calico/cni:v1.5.5 161 | command: ["/install-cni.sh"] 162 | env: 163 | # The location of the Calico etcd cluster. 164 | - name: ETCD_ENDPOINTS 165 | valueFrom: 166 | configMapKeyRef: 167 | name: calico-config 168 | key: etcd_endpoints 169 | # The CNI network config to install on each node. 170 | - name: CNI_NETWORK_CONFIG 171 | valueFrom: 172 | configMapKeyRef: 173 | name: calico-config 174 | key: cni_network_config 175 | volumeMounts: 176 | - mountPath: /host/opt/cni/bin 177 | name: cni-bin-dir 178 | - mountPath: /host/etc/cni/net.d 179 | name: cni-net-dir 180 | - mountPath: /calico-secrets 181 | name: etcd-certs 182 | volumes: 183 | # Used by calico/node. 184 | - name: lib-modules 185 | hostPath: 186 | path: /lib/modules 187 | - name: var-run-calico 188 | hostPath: 189 | path: /var/run/calico 190 | # Used to install CNI. 191 | - name: cni-bin-dir 192 | hostPath: 193 | path: /opt/cni/bin 194 | - name: cni-net-dir 195 | hostPath: 196 | path: /etc/cni/net.d 197 | # Mount in the etcd TLS secrets. 198 | - name: etcd-certs 199 | secret: 200 | secretName: calico-etcd-secrets 201 | 202 | --- 203 | 204 | # This manifest deploys the Calico policy controller on Kubernetes. 205 | # See https://github.com/projectcalico/k8s-policy 206 | apiVersion: extensions/v1beta1 207 | kind: Deployment 208 | metadata: 209 | name: calico-policy-controller 210 | namespace: kube-system 211 | labels: 212 | k8s-app: calico-policy 213 | annotations: 214 | scheduler.alpha.kubernetes.io/critical-pod: '' 215 | scheduler.alpha.kubernetes.io/tolerations: | 216 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 217 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 218 | spec: 219 | # The policy controller can only have a single active instance. 
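      # replicas is therefore kept at 1, and the Recreate strategy below stops
      # the old pod before a replacement is created, so two controllers never
      # run at the same time during an update.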
220 | replicas: 1 221 | strategy: 222 | type: Recreate 223 | template: 224 | metadata: 225 | name: calico-policy-controller 226 | namespace: kube-system 227 | labels: 228 | k8s-app: calico-policy 229 | spec: 230 | # The policy controller must run in the host network namespace so that 231 | # it isn't governed by policy that would prevent it from working. 232 | hostNetwork: true 233 | containers: 234 | - name: calico-policy-controller 235 | image: reg.dnt:5000/calico/kube-policy-controller:v0.5.1 236 | env: 237 | # The location of the Calico etcd cluster. 238 | - name: ETCD_ENDPOINTS 239 | valueFrom: 240 | configMapKeyRef: 241 | name: calico-config 242 | key: etcd_endpoints 243 | # Location of the CA certificate for etcd. 244 | - name: ETCD_CA_CERT_FILE 245 | valueFrom: 246 | configMapKeyRef: 247 | name: calico-config 248 | key: etcd_ca 249 | # Location of the client key for etcd. 250 | - name: ETCD_KEY_FILE 251 | valueFrom: 252 | configMapKeyRef: 253 | name: calico-config 254 | key: etcd_key 255 | # Location of the client certificate for etcd. 256 | - name: ETCD_CERT_FILE 257 | valueFrom: 258 | configMapKeyRef: 259 | name: calico-config 260 | key: etcd_cert 261 | # The location of the Kubernetes API. Use the default Kubernetes 262 | # service for API access. 263 | - name: K8S_API 264 | value: "http://20.26.2.110:18080" 265 | # Since we're running in the host namespace and might not have KubeDNS 266 | # access, configure the container's /etc/hosts to resolve 267 | # kubernetes.default to the correct service clusterIP. 268 | - name: CONFIGURE_ETC_HOSTS 269 | value: "true" 270 | volumeMounts: 271 | # Mount in the etcd TLS secrets. 272 | - mountPath: /calico-secrets 273 | name: etcd-certs 274 | volumes: 275 | # Mount in the etcd TLS secrets. 276 | - name: etcd-certs 277 | secret: 278 | secretName: calico-etcd-secrets 279 | 280 | --- 281 | 282 | ## This manifest deploys a Job which performs one time 283 | # configuration of Calico 284 | apiVersion: batch/v1 285 | kind: Job 286 | metadata: 287 | name: configure-calico 288 | namespace: kube-system 289 | labels: 290 | k8s-app: calico 291 | spec: 292 | template: 293 | metadata: 294 | name: configure-calico 295 | annotations: 296 | scheduler.alpha.kubernetes.io/critical-pod: '' 297 | scheduler.alpha.kubernetes.io/tolerations: | 298 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 299 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 300 | spec: 301 | hostNetwork: true 302 | restartPolicy: OnFailure 303 | containers: 304 | # Writes basic configuration to datastore. 305 | - name: configure-calico 306 | image: reg.dnt:5000/calico/ctl:v1.0.0 307 | args: 308 | - apply 309 | - -f 310 | - /etc/config/calico/ippool.yaml 311 | volumeMounts: 312 | - name: config-volume 313 | mountPath: /etc/config 314 | env: 315 | # The location of the etcd cluster. 
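            # (read from the calico-config ConfigMap defined at the top of
            # this manifest)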
316 | - name: ETCD_ENDPOINTS 317 | valueFrom: 318 | configMapKeyRef: 319 | name: calico-config 320 | key: etcd_endpoints 321 | volumes: 322 | - name: config-volume 323 | configMap: 324 | name: calico-config 325 | items: 326 | - key: ippool.yaml 327 | path: calico/ippool.yaml 328 | -------------------------------------------------------------------------------- /network/calico/calico.images: -------------------------------------------------------------------------------- 1 | calico/node:v1.0.0 2 | calico/cni:v1.5.5 3 | calico/kube-policy-controller:v0.5.1 4 | calico/kube-policy-agent:v0.5.1 5 | calico/ctl:v1.0.0 6 | -------------------------------------------------------------------------------- /network/calico/calico.yaml: -------------------------------------------------------------------------------- 1 | # This ConfigMap is used to configure a self-hosted Calico installation. 2 | kind: ConfigMap 3 | apiVersion: v1 4 | metadata: 5 | name: calico-config 6 | namespace: kube-system 7 | data: 8 | # Configure this with the location of your etcd cluster. 9 | etcd_endpoints: "http://20.26.2.110:14001" 10 | 11 | # Configure the Calico backend to use. 12 | calico_backend: "bird" 13 | 14 | # The CNI network configuration to install on each node. 15 | cni_network_config: |- 16 | { 17 | "name": "k8s-pod-network", 18 | "type": "calico", 19 | "etcd_endpoints": "http://20.26.2.110:14001", 20 | "etcd_key_file": "__ETCD_KEY_FILE__", 21 | "etcd_cert_file": "__ETCD_CERT_FILE__", 22 | "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__", 23 | "log_level": "info", 24 | "ipam": { 25 | "type": "calico-ipam" 26 | }, 27 | "policy": { 28 | "type": "k8s", 29 | "k8s_api_root": "https://20.26.2.110:18080", 30 | "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__" 31 | }, 32 | "kubernetes": { 33 | "kubeconfig": "__KUBECONFIG_FILEPATH__" 34 | } 35 | } 36 | 37 | # The default IP Pool to be created for the cluster. 38 | # Pod IP addresses will be assigned from this pool. 39 | ippool.yaml: | 40 | apiVersion: v1 41 | kind: ipPool 42 | metadata: 43 | cidr: 192.168.0.0/16 44 | spec: 45 | nat-outgoing: true 46 | 47 | # If you're using TLS enabled etcd uncomment the following. 48 | # You must also populate the Secret below with these files. 49 | etcd_ca: "" # "/calico-secrets/etcd-ca" 50 | etcd_cert: "" # "/calico-secrets/etcd-cert" 51 | etcd_key: "" # "/calico-secrets/etcd-key" 52 | 53 | --- 54 | 55 | # The following contains k8s Secrets for use with a TLS enabled etcd cluster. 56 | # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/ 57 | apiVersion: v1 58 | kind: Secret 59 | type: Opaque 60 | metadata: 61 | name: calico-etcd-secrets 62 | namespace: kube-system 63 | data: 64 | # Populate the following files with etcd TLS configuration if desired, but leave blank if 65 | # not using TLS for etcd. 66 | # This self-hosted install expects three files with the following names. The values 67 | # should be base64 encoded strings of the entire contents of each file. 68 | # etcd-key: null 69 | # etcd-cert: null 70 | # etcd-ca: null 71 | 72 | --- 73 | 74 | # This manifest installs the calico/node container, as well 75 | # as the Calico CNI plugins and network config on 76 | # each master and worker node in a Kubernetes cluster. 
77 | kind: DaemonSet 78 | apiVersion: extensions/v1beta1 79 | metadata: 80 | name: calico-node 81 | namespace: kube-system 82 | labels: 83 | k8s-app: calico-node 84 | spec: 85 | selector: 86 | matchLabels: 87 | k8s-app: calico-node 88 | template: 89 | metadata: 90 | labels: 91 | k8s-app: calico-node 92 | annotations: 93 | scheduler.alpha.kubernetes.io/critical-pod: '' 94 | scheduler.alpha.kubernetes.io/tolerations: | 95 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 96 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 97 | spec: 98 | hostNetwork: true 99 | containers: 100 | # Runs calico/node container on each Kubernetes node. This 101 | # container programs network policy and routes on each 102 | # host. 103 | - name: calico-node 104 | image: reg.dnt:5000/calico/node:v1.0.0 105 | env: 106 | # The location of the Calico etcd cluster. 107 | - name: ETCD_ENDPOINTS 108 | valueFrom: 109 | configMapKeyRef: 110 | name: calico-config 111 | key: etcd_endpoints 112 | # Choose the backend to use. 113 | - name: CALICO_NETWORKING_BACKEND 114 | valueFrom: 115 | configMapKeyRef: 116 | name: calico-config 117 | key: calico_backend 118 | # Disable file logging so `kubectl logs` works. 119 | - name: CALICO_DISABLE_FILE_LOGGING 120 | value: "true" 121 | # Don't configure a default pool. This is done by the Job 122 | # below. 123 | - name: NO_DEFAULT_POOLS 124 | value: "true" 125 | - name: FELIX_LOGSEVERITYSCREEN 126 | value: "info" 127 | # Location of the CA certificate for etcd. 128 | - name: ETCD_CA_CERT_FILE 129 | valueFrom: 130 | configMapKeyRef: 131 | name: calico-config 132 | key: etcd_ca 133 | # Location of the client key for etcd. 134 | - name: ETCD_KEY_FILE 135 | valueFrom: 136 | configMapKeyRef: 137 | name: calico-config 138 | key: etcd_key 139 | # Location of the client certificate for etcd. 140 | - name: ETCD_CERT_FILE 141 | valueFrom: 142 | configMapKeyRef: 143 | name: calico-config 144 | key: etcd_cert 145 | # Auto-detect the BGP IP address. 146 | - name: IP 147 | value: "" 148 | securityContext: 149 | privileged: true 150 | volumeMounts: 151 | - mountPath: /lib/modules 152 | name: lib-modules 153 | readOnly: true 154 | - mountPath: /var/run/calico 155 | name: var-run-calico 156 | readOnly: false 157 | - mountPath: /calico-secrets 158 | name: etcd-certs 159 | # This container installs the Calico CNI binaries 160 | # and CNI network config file on each node. 161 | - name: install-cni 162 | image: reg.dnt:5000/calico/cni:v1.5.5 163 | command: ["/install-cni.sh"] 164 | env: 165 | # The location of the Calico etcd cluster. 166 | - name: ETCD_ENDPOINTS 167 | valueFrom: 168 | configMapKeyRef: 169 | name: calico-config 170 | key: etcd_endpoints 171 | # The CNI network config to install on each node. 172 | - name: CNI_NETWORK_CONFIG 173 | valueFrom: 174 | configMapKeyRef: 175 | name: calico-config 176 | key: cni_network_config 177 | volumeMounts: 178 | - mountPath: /host/opt/cni/bin 179 | name: cni-bin-dir 180 | - mountPath: /host/etc/cni/net.d 181 | name: cni-net-dir 182 | - mountPath: /calico-secrets 183 | name: etcd-certs 184 | volumes: 185 | # Used by calico/node. 186 | - name: lib-modules 187 | hostPath: 188 | path: /lib/modules 189 | - name: var-run-calico 190 | hostPath: 191 | path: /var/run/calico 192 | # Used to install CNI. 193 | - name: cni-bin-dir 194 | hostPath: 195 | path: /opt/cni/bin 196 | - name: cni-net-dir 197 | hostPath: 198 | path: /etc/cni/net.d 199 | # Mount in the etcd TLS secrets. 
200 | - name: etcd-certs 201 | secret: 202 | secretName: calico-etcd-secrets 203 | 204 | --- 205 | 206 | # This manifest deploys the Calico policy controller on Kubernetes. 207 | # See https://github.com/projectcalico/k8s-policy 208 | apiVersion: extensions/v1beta1 209 | kind: Deployment 210 | metadata: 211 | name: calico-policy-controller 212 | namespace: kube-system 213 | labels: 214 | k8s-app: calico-policy 215 | annotations: 216 | scheduler.alpha.kubernetes.io/critical-pod: '' 217 | scheduler.alpha.kubernetes.io/tolerations: | 218 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 219 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 220 | spec: 221 | # The policy controller can only have a single active instance. 222 | replicas: 1 223 | strategy: 224 | type: Recreate 225 | template: 226 | metadata: 227 | name: calico-policy-controller 228 | namespace: kube-system 229 | labels: 230 | k8s-app: calico-policy 231 | spec: 232 | # The policy controller must run in the host network namespace so that 233 | # it isn't governed by policy that would prevent it from working. 234 | hostNetwork: true 235 | containers: 236 | - name: calico-policy-controller 237 | image: reg.dnt:5000/calico/kube-policy-controller:v0.5.1 238 | env: 239 | # The location of the Calico etcd cluster. 240 | - name: ETCD_ENDPOINTS 241 | valueFrom: 242 | configMapKeyRef: 243 | name: calico-config 244 | key: etcd_endpoints 245 | # Location of the CA certificate for etcd. 246 | - name: ETCD_CA_CERT_FILE 247 | valueFrom: 248 | configMapKeyRef: 249 | name: calico-config 250 | key: etcd_ca 251 | # Location of the client key for etcd. 252 | - name: ETCD_KEY_FILE 253 | valueFrom: 254 | configMapKeyRef: 255 | name: calico-config 256 | key: etcd_key 257 | # Location of the client certificate for etcd. 258 | - name: ETCD_CERT_FILE 259 | valueFrom: 260 | configMapKeyRef: 261 | name: calico-config 262 | key: etcd_cert 263 | # The location of the Kubernetes API. Use the default Kubernetes 264 | # service for API access. 265 | - name: K8S_API 266 | value: "https://kubernetes.default:443" 267 | # Since we're running in the host namespace and might not have KubeDNS 268 | # access, configure the container's /etc/hosts to resolve 269 | # kubernetes.default to the correct service clusterIP. 270 | - name: CONFIGURE_ETC_HOSTS 271 | value: "true" 272 | volumeMounts: 273 | # Mount in the etcd TLS secrets. 274 | - mountPath: /calico-secrets 275 | name: etcd-certs 276 | volumes: 277 | # Mount in the etcd TLS secrets. 278 | - name: etcd-certs 279 | secret: 280 | secretName: calico-etcd-secrets 281 | 282 | --- 283 | 284 | ## This manifest deploys a Job which performs one time 285 | # configuration of Calico 286 | apiVersion: batch/v1 287 | kind: Job 288 | metadata: 289 | name: configure-calico 290 | namespace: kube-system 291 | labels: 292 | k8s-app: calico 293 | spec: 294 | template: 295 | metadata: 296 | name: configure-calico 297 | annotations: 298 | scheduler.alpha.kubernetes.io/critical-pod: '' 299 | scheduler.alpha.kubernetes.io/tolerations: | 300 | [{"key": "dedicated", "value": "master", "effect": "NoSchedule" }, 301 | {"key":"CriticalAddonsOnly", "operator":"Exists"}] 302 | spec: 303 | hostNetwork: true 304 | restartPolicy: OnFailure 305 | containers: 306 | # Writes basic configuration to datastore. 
307 | - name: configure-calico 308 | image: reg.dnt:5000/calico/ctl:v1.0.0 309 | args: 310 | - apply 311 | - -f 312 | - /etc/config/calico/ippool.yaml 313 | volumeMounts: 314 | - name: config-volume 315 | mountPath: /etc/config 316 | env: 317 | # The location of the etcd cluster. 318 | - name: ETCD_ENDPOINTS 319 | valueFrom: 320 | configMapKeyRef: 321 | name: calico-config 322 | key: etcd_endpoints 323 | volumes: 324 | - name: config-volume 325 | configMap: 326 | name: calico-config 327 | items: 328 | - key: ippool.yaml 329 | path: calico/ippool.yaml 330 | -------------------------------------------------------------------------------- /network/calico/ipPool.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ipPool 3 | metadata: 4 | cidr: 192.168.22.0/24 5 | -------------------------------------------------------------------------------- /network/calico/kube-apiserver.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: kube-apiserver 5 | namespace: system 6 | labels: 7 | app: kube-apiserver 8 | enable: "true" 9 | spec: 10 | hostNetwork: true 11 | containers: 12 | - name: kube-apiserver 13 | image: reg.dnt:5000/google_containers/kube-apiserver:v1.5.1 14 | command: 15 | - /bin/sh 16 | - -c 17 | - /usr/local/bin/kube-apiserver --feature-gates=AllAlpha=true --enable-swagger-ui --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota --insecure-bind-address=0.0.0.0 --insecure-port=8080 --etcd-servers=http://127.0.0.1:4001 --service-cluster-ip-range=10.0.0.0/16 --v=2 --allow-privileged=True 1>>/var/log/kube-apiserver.log 2>&1 18 | resources: 19 | limits: 20 | cpu: 512m 21 | memory: 1024Mi 22 | requests: 23 | cpu: 512m 24 | memory: 1024Mi 25 | livenessProbe: 26 | httpGet: 27 | path: /healthz 28 | port: 8080 29 | initialDelaySeconds: 30 30 | timeoutSeconds: 30 31 | volumeMounts: 32 | - mountPath: /srv/kubernetes 33 | name: srvkube 34 | readOnly: true 35 | - mountPath: /var/log/kube-apiserver.log 36 | name: logfile 37 | - mountPath: /etc/ssl 38 | name: etcssl 39 | readOnly: true 40 | - mountPath: /usr/share/ssl 41 | name: usrsharessl 42 | readOnly: true 43 | - mountPath: /var/ssl 44 | name: varssl 45 | readOnly: true 46 | - mountPath: /usr/ssl 47 | name: usrssl 48 | readOnly: true 49 | - mountPath: /usr/lib/ssl 50 | name: usrlibssl 51 | readOnly: true 52 | - mountPath: /usr/local/openssl 53 | name: usrlocalopenssl 54 | readOnly: true 55 | - mountPath: /etc/openssl 56 | name: etcopenssl 57 | readOnly: true 58 | - mountPath: /etc/pki/tls 59 | name: etcpkitls 60 | readOnly: true 61 | volumes: 62 | - hostPath: 63 | path: /srv/kubernetes 64 | name: srvkube 65 | - hostPath: 66 | path: /data/log/kube-apiserver.log 67 | name: logfile 68 | - hostPath: 69 | path: /etc/ssl 70 | name: etcssl 71 | - hostPath: 72 | path: /usr/share/ssl 73 | name: usrsharessl 74 | - hostPath: 75 | path: /var/ssl 76 | name: varssl 77 | - hostPath: 78 | path: /usr/ssl 79 | name: usrssl 80 | - hostPath: 81 | path: /usr/lib/ssl 82 | name: usrlibssl 83 | - hostPath: 84 | path: /usr/local/openssl 85 | name: usrlocalopenssl 86 | - hostPath: 87 | path: /etc/openssl 88 | name: etcopenssl 89 | - hostPath: 90 | path: /etc/pki/tls 91 | name: etcpkitls 92 | -------------------------------------------------------------------------------- /network/calico/kubelet.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | 
Description=Kubernetes Kubelet 3 | After=docker.service 4 | Requires=docker.service 5 | 6 | [Service] 7 | ExecStart=/usr/local/bin/kubelet --network-plugin=cni --allow-privileged=true --logtostderr=true --address=0.0.0.0 --port=10250 --cluster-dns=10.0.1.1 --cluster-domain=cluster.local --pod-infra-container-image=reg.dnt:5000/google_containers/pause:3.0 --api-servers=http://20.26.2.110:18080 --config=/etc/kubernetes/manifests 8 | Restart=on-failure 9 | KillMode=process 10 | 11 | [Install] 12 | WantedBy=multi-user.target 13 | 14 | -------------------------------------------------------------------------------- /network/calico/policy.yaml: -------------------------------------------------------------------------------- 1 | - apiVersion: v1 2 | kind: policy 3 | metadata: 4 | name: k8s-policy-no-match 5 | spec: 6 | egress: 7 | - action: pass 8 | destination: {} 9 | source: {} 10 | - action: pass 11 | destination: {} 12 | source: 13 | net: 20.26.2.0/24 14 | - action: pass 15 | destination: {} 16 | source: 17 | net: 20.26.20.0/24 18 | ingress: 19 | - action: pass 20 | destination: {} 21 | source: {} 22 | - action: pass 23 | destination: {} 24 | source: 25 | net: 20.26.2.0/24 26 | - action: pass 27 | destination: {} 28 | source: 29 | net: 20.26.20.0/24 30 | order: 2000 31 | selector: has(calico/k8s_ns) 32 | -------------------------------------------------------------------------------- /network/calico/profile.yaml: -------------------------------------------------------------------------------- 1 | - apiVersion: v1 2 | kind: profile 3 | metadata: 4 | name: k8s_ns.default 5 | tags: 6 | - k8s_ns.default 7 | spec: 8 | egress: 9 | - action: allow 10 | destination: {} 11 | source: {} 12 | - action: allow 13 | destination: {} 14 | source: 15 | net: 20.26.2.0/24 16 | ingress: 17 | - action: allow 18 | destination: {} 19 | source: {} 20 | - action: allow 21 | destination: {} 22 | source: 23 | net: 20.26.2.0/24 24 | -------------------------------------------------------------------------------- /network/flannel/flannel.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: flannel-server 5 | namespace: system 6 | labels: 7 | app: flannel-server 8 | enable: "true" 9 | spec: 10 | hostNetwork: true 11 | containers: 12 | - name: flannel-server 13 | image: reg.dnt:5000/coreos/flannel:v0.6.2 14 | command: 15 | - /bin/sh 16 | - -c 17 | - /opt/bin/flanneld -v 6 --logtostderr -ip-masq -etcd-endpoints http://127.0.0.1:4001 -etcd-prefix /kubernetes/network 1>>/var/log/flannel.log 2>&1 18 | resources: 19 | limits: 20 | cpu: 1000m 21 | memory: 2048Mi 22 | requests: 23 | cpu: 500m 24 | memory: 1024Mi 25 | volumeMounts: 26 | - mountPath: /var/log/flannel.log 27 | name: logfile 28 | - mountPath: /run/flannel 29 | name: flannelenv 30 | - mountPath: /dev/net 31 | name: flannelnet 32 | securityContext: 33 | privileged: true 34 | volumes: 35 | - hostPath: 36 | path: /data/log/flannel.log 37 | name: logfile 38 | - hostPath: 39 | path: /run/flannel 40 | name: flannelenv 41 | - hostPath: 42 | path: /dev/net 43 | name: flannelnet 44 | -------------------------------------------------------------------------------- /network/flannel/restart_docker.sh: -------------------------------------------------------------------------------- 1 | function restart_docker { 2 | attempt=0 3 | while [[ ! 
-f /run/flannel/subnet.env ]]; do 4 | if (( attempt > 200 )); then 5 | echo "timeout waiting for /run/flannel/subnet.env" >> ~/kube/err.log 6 | exit 2 7 | fi 8 | attempt=$((attempt+1)) 9 | sleep 3 10 | done 11 | 12 | sudo ip link set dev docker0 down 13 | sudo brctl delbr docker0 14 | 15 | source /run/flannel/subnet.env 16 | echo DOCKER_OPTS=\"${DOCKER_OPTS} --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}\" > /etc/default/docker 17 | source /etc/default/docker 18 | sudo systemctl daemon-reload 19 | sudo systemctl restart docker 20 | } 21 | 22 | echo "start config docker --------------------" 23 | restart_docker 24 | echo "exit code is $?-------------------------" 25 | -------------------------------------------------------------------------------- /storage/rbd/nginx-pvc.yaml: -------------------------------------------------------------------------------- 1 | # Test ReplicationController: mounts the rbd-backed PVC "tf-ceph" defined in rbd.yaml 2 | apiVersion: v1 3 | kind: ReplicationController 4 | metadata: 5 | name: pvc 6 | namespace: default 7 | labels: 8 | kubernetes.io/cluster-service: "true" 9 | spec: 10 | replicas: 1 11 | template: 12 | metadata: 13 | labels: 14 | kubernetes.io/cluster-service: "true" 15 | annotations: 16 | scheduler.alpha.kubernetes.io/critical-pod: '' 17 | scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]' 18 | spec: 19 | containers: 20 | - name: pvc-test 21 | image: reg.dnt:5000/nginx:1.10.0-alpine 22 | resources: 23 | # keep request = limit to keep this container in guaranteed class 24 | limits: 25 | cpu: 100m 26 | memory: 256Mi 27 | requests: 28 | cpu: 100m 29 | memory: 256Mi 30 | ports: 31 | - containerPort: 80 32 | livenessProbe: 33 | httpGet: 34 | path: / 35 | port: 80 36 | initialDelaySeconds: 30 37 | timeoutSeconds: 30 38 | volumeMounts: 39 | - name: tf-ceph 40 | mountPath: /var/tensorflow 41 | volumes: 42 | - name: tf-ceph 43 | persistentVolumeClaim: 44 | claimName: tf-ceph 45 | -------------------------------------------------------------------------------- /storage/rbd/rbd.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: ceph-secret 5 | data: 6 | key: QVFBTXh3WlkrdldBREJBQW92UE5jZGxFRlNhWG9tVWV2bU5sN0E9PQ== 7 | --- 8 | kind: PersistentVolume 9 | apiVersion: v1 10 | metadata: 11 | name: tf-ceph 12 | namespace: default 13 | spec: 14 | capacity: 15 | storage: 100Mi 16 | accessModes: 17 | - ReadWriteOnce 18 | rbd: 19 | monitors: 20 | - 20.26.2.51:6789 21 | - 20.26.2.53:6789 22 | - 20.26.2.54:6789 23 | user: admin 24 | pool: rbd 25 | image: tst 26 | fsType: ext4 27 | secretRef: 28 | name: ceph-secret 29 | readOnly: true 30 | --- 31 | kind: PersistentVolumeClaim 32 | apiVersion: v1 33 | metadata: 34 | name: tf-ceph 35 | namespace: default 36 | spec: 37 | accessModes: 38 | - ReadWriteOnce 39 | resources: 40 | requests: 41 | storage: 50Mi 42 | --------------------------------------------------------------------------------
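
A minimal usage sketch for the `/storage/rbd` example above (not part of the repository): it assumes a reachable Ceph cluster, a kubectl context pointing at this Kubernetes cluster, and that the pool `rbd` and image `tst` from the PV spec are what you want; the admin keyring, image size, and monitor addresses are site-specific assumptions to adapt.

```bash
# Hypothetical wiring of storage/rbd; adjust names, sizes, and credentials to your Ceph cluster.
ceph auth get-key client.admin | base64        # paste the output into the "key" field of ceph-secret in rbd.yaml
rbd create tst --size 100 --pool rbd           # the PersistentVolume references pool "rbd" and image "tst"
kubectl create -f storage/rbd/rbd.yaml         # Secret + PersistentVolume + PersistentVolumeClaim
kubectl create -f storage/rbd/nginx-pvc.yaml   # test ReplicationController that mounts the claim
kubectl get pv,pvc                             # the "tf-ceph" claim should end up Bound
```

Note that every node eligible to run the pod also needs the rbd client (ceph-common) installed, since the kubelet shells out to `rbd` to map the image before mounting it.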