├── .gitignore
├── README.md
├── etcd
│   ├── README.md
│   ├── deploy-etcd.sh
│   └── etcd_discovery.yaml
├── images
│   └── .gitignore
├── k8s-deploy.sh
├── network
│   ├── kube-flannel-rbac.yml
│   └── kube-flannel.yml
└── rpms
    └── .gitignore

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
/etcd/*.conf
/etcd/temp-etcd
/etcd/*.tar.gz
/etcd/*.service

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Offline installation of a highly available Kubernetes cluster

Setting up a brand-new k8s cluster comes up again and again, so this project turns the whole process into an offline install. In keeping with the principle of minimal dependencies, it is written in plain shell scripts.

Tested OK on CentOS 7.2-1511-minimal; by default it installs docker 1.12.6, etcd-v3.0.17 and k8s-v1.6.2.

All dependencies of the offline install are packaged on [Baidu Netdisk](https://pan.baidu.com/s/1nvQDdsl). If you have security concerns, repackage and replace them yourself; they are just image tarballs and rpms.

Brief notes

* A Kubernetes 1.6 HA cluster built with kubeadm
* The HA deployment requires an `existing etcd cluster`; the one-shot etcd deployment script in the etcd directory can create one
* Three nodes provide mutual redundancy; master and etcd can also be deployed separately
* The masters are made redundant (master-backup-backup) via keepalived; controller-manager and scheduler rely on their built-in --leader-elect option
* If you want kubeadm's default mode, i.e. fully containerized but single-instance everywhere, see [here](https://github.com/xiaoping378/blog/issues/5)
* [TODO] keepalived and the etcd cluster do not run in containers yet; full containerization may be attempted later
* The figure below is the official HA model; apart from the LB part, for which this project uses keepalived's VIP feature, the two are essentially the same
![overview](http://kubernetes.io/images/docs/ha.svg)

## Step 1

The basic idea of the offline install is to start a temporary HTTP server in the k8s-deploy directory; the nodes pull the required images and rpms from it.

```
# python -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
```

On Windows you can use hfs to start a temporary HTTP server; google it for usage.
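If only Python 3 is available, `python3 -m http.server` is the equivalent one-liner. Either way, it is worth checking from a node that the server is reachable before kicking off the install. A minimal sketch, assuming the same 192.168.56.1:8000 address used in the examples below:

```
# serve the k8s-deploy directory on port 8000 (Python 3 equivalent of SimpleHTTPServer)
cd k8s-deploy && python3 -m http.server 8000

# from any node, confirm the deploy script is reachable before starting
curl -sI http://192.168.56.1:8000/k8s-deploy.sh | head -n 1
```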
## Master side

Run the following command to initialize the master. If the master has only a single CPU core, the DNS add-on will fail to install due to insufficient resources.

```
curl -L http://192.168.56.1:8000/k8s-deploy.sh | bash -s master \
--VIP=192.168.56.103 \
--etcd-endpoints=http://192.168.56.100:2379,http://192.168.56.101:2379,http://192.168.56.102:2379
```

* **192.168.56.1:8000** is my http-server; remember to also change the HTTP_SERVER variable in k8s-deploy.sh (see the sketch right after this list)

* **--VIP** is the floating IP address handled by keepalived

* **--etcd-endpoints** are the addresses of your etcd cluster; for a first-time install, the script in the etcd directory can deploy one in one shot

* Write down the token that gets printed; the minion side needs it

* When installing docker, leftovers from an earlier "unclean" install may cause dependency problems; in my case it was a systemd-python dependency conflict, which goes away after uninstalling it:
```yum remove -y systemd-python```
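Changing that variable by hand is easy to forget; a one-liner along these lines (run in the k8s-deploy directory before serving it, with your own server address substituted) keeps the script and the HTTP server in sync. This is only a sketch of the edit, not part of the original workflow:

```
# point the deploy script at your own file server (replace the address as needed)
sed -i 's/^HTTP_SERVER=.*/HTTP_SERVER=192.168.56.1:8000/' k8s-deploy.sh
```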
## Replica master side

Run the command below on each replica master; it will automatically pair up with the first master for redundancy.

It is best to set up passwordless SSH with the first master beforehand, because this step copies configuration from it.
```
curl -L http://192.168.56.1:8000/k8s-deploy.sh | bash -s replica \
--VIP=192.168.56.103 \
--etcd-endpoints=http://192.168.56.100:2379,http://192.168.56.101:2379,http://192.168.56.102:2379
```
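The passwordless SSH mentioned above can be prepared like this. A sketch, assuming the example addresses used above: the script reads the first master's IP from etcd (the ha_master key) and then copies /etc/kubernetes from that master, so root SSH has to work to both of those hosts.

```
# on the replica: create a key if one does not exist yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa

# root SSH must work to the first etcd endpoint (to read ha_master)
# and to the first master (to copy /etc/kubernetes); adjust the IPs to your hosts
for host in 192.168.56.100 192.168.56.101; do
    ssh-copy-id root@${host}
done
```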
After repeating the step above you will have a 3-instance HA cluster. While running the commands below, you can shut down the first master to verify high availability.

* Verify the network impact of the VIP failing over

    systemctl status keepalived
    # confirm which node currently holds the VIP
    # simulate an apiserver failure or a power loss
    systemctl stop docker

* Verify the impact of a kube-apiserver failure

```
while true; do kubectl get po -n kube-system; sleep 1; done
```

## Minion side

Adapt this to your own setup: use the token generated on the first master, and note that 56.103 here is your VIP address.

```
curl -L http://192.168.56.1:8000/k8s-deploy.sh | bash -s join --token 32d98a.4076a0f48b5abd3f 192.168.56.103:6443
```
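Once the join finishes, a quick check from the first master confirms that the node registered and the system pods are running; the kubeconfig path below is the one the deploy script itself uses:

```
export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes
kubectl get po -n kube-system -o wide
```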
## Summary

* If the script errors out partway through, it exits immediately; rerun the step it stopped at by hand, find the cause, fix it, and then run the original curl -L ... | bash -s ... command again

* Since 1.5.1 anonymous access is disabled by default; the API can be reached with a token, see [here](http://kubernetes.io/docs/user-guide/accessing-the-cluster/):
```
TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
curl -k --tlsv1 -H "Authorization: Bearer $TOKEN" https://192.168.56.103:6443/api
```
Of course you can also enable anonymous access by changing the apiserver startup flags; google it yourself.

* In v1.6.2, a kubeadm install enables RBAC authorization by default; see [here](https://kubernetes.io/docs/admin/authorization/rbac/) for details

* To me the biggest change between 1.3 and 1.5 is the networking: 1.5 switched to CNI network plugins,
so flannel and docker no longer have to be coupled as before (flannel had to start before docker could). See [here](https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#cni)

* Some people have also reported kube-flannel going into CrashLoopBackOff on certain nodes (a sketch of the workaround follows below):
```
I reproduced it once; the error was dial tcp 10.96.0.1:443: i/o timeout, i.e. flannel could not reach the api-server.
iptables looked fine and all the service endpoints were present; flushing iptables and restarting kube-proxy fixed it.
```
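A sketch of that workaround on the affected node. Handle with care: flushing iptables removes all current rules, and the kube-proxy pod label used here is an assumption (it is what kubeadm normally applies; verify it with --show-labels first):

```
# on the affected node: drop the stale rules
iptables -F && iptables -t nat -F

# recreate the kube-proxy pods so they rewrite the service rules
# (label assumed to be k8s-app=kube-proxy; check before deleting)
kubectl -n kube-system get po --show-labels | grep kube-proxy
kubectl -n kube-system delete po -l k8s-app=kube-proxy
```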
* To keep kube-dns highly available as well, run 3 replicas:
```
kubectl --namespace=kube-system scale deployment kube-dns --replicas=3
```

* If resources are limited and you want the masters to schedule pods as well, do this:
```
kubectl taint node --all node-role.kubernetes.io/master-
```

--------------------------------------------------------------------------------
/etcd/README.md:
--------------------------------------------------------------------------------
# One-shot etcd cluster deployment

The cluster is bootstrapped in static mode by default; the etcd cluster does not use TLS encryption yet.

* Change the NODE_MAP variable in deploy-etcd.sh to the IPs of the nodes your etcd cluster should run on

* Set up passwordless SSH to each node, and make sure NTP time is in sync on all of them yourself

* Run the script to deploy to all nodes in one go

By default the script deploys the binaries from the temp-etcd directory; if you delete that directory, the script downloads the release tarball from GitHub instead.

```
bash ./deploy-etcd.sh
```

note. If etcd was deployed on a node before, clean up the leftover data yourself: ```systemctl stop etcd && rm -rf /var/lib/etcd/*```
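Once the script reports success, the cluster can be checked from any of the nodes. The endpoints below are the defaults from NODE_MAP; substitute your own:

```
etcdctl --endpoints=http://192.168.56.100:2379,http://192.168.56.101:2379,http://192.168.56.102:2379 cluster-health
etcdctl --endpoints=http://192.168.56.100:2379 member list
```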
--------------------------------------------------------------------------------
/etcd/deploy-etcd.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -x
set -e

### Change these IPs to your own; make sure NTP is in sync on the nodes yourself
declare -A NODE_MAP=( ["etcd0"]="192.168.56.100" ["etcd1"]="192.168.56.101" ["etcd2"]="192.168.56.102" )
### Only a 3-node etcd cluster is supported


etcd::download()
{
    ETCD_VER=v3.0.17
    DOWNLOAD_URL=https://github.com/coreos/etcd/releases/download
    [ -f ${PWD}/temp-etcd/etcd ] && return
    curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o ${PWD}/etcd-${ETCD_VER}-linux-amd64.tar.gz
    mkdir -p ${PWD}/temp-etcd && tar xzvf ${PWD}/etcd-${ETCD_VER}-linux-amd64.tar.gz -C ${PWD}/temp-etcd --strip-components=1
}

etcd::config()
{
    local node_index=$1

    cat <<EOF >${PWD}/${node_index}.conf
ETCD_NAME=${node_index}
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${NODE_MAP[${node_index}]}:2380"
ETCD_LISTEN_PEER_URLS="http://${NODE_MAP[${node_index}]}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${NODE_MAP[${node_index}]}:2379,http://127.0.0.1:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://${NODE_MAP[${node_index}]}:2379"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-378"
ETCD_INITIAL_CLUSTER="etcd0=http://${NODE_MAP['etcd0']}:2380,etcd1=http://${NODE_MAP['etcd1']}:2380,etcd2=http://${NODE_MAP['etcd2']}:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
# ETCD_DISCOVERY=""
# ETCD_DISCOVERY_SRV=""
# ETCD_DISCOVERY_FALLBACK="proxy"
# ETCD_DISCOVERY_PROXY=""
#
# ETCD_CA_FILE=""
# ETCD_CERT_FILE=""
# ETCD_KEY_FILE=""
# ETCD_PEER_CA_FILE=""
# ETCD_PEER_CERT_FILE=""
# ETCD_PEER_KEY_FILE=""
EOF
}

etcd::gen_unit()
{
    # cat <<EOF >/usr/lib/systemd/system/etcd.service
    cat <<EOF >${PWD}/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
EnvironmentFile=-/etc/etcd/10-etcd.conf
ExecStart=/usr/bin/etcd
Restart=always
RestartSec=8s
LimitNOFILE=40000

[Install]
WantedBy=multi-user.target
EOF
}

SSH_OPTS="-oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oLogLevel=ERROR -C"
etcd::scp()
{
    local host="$1"
    local src=($2)
    local dst="$3"
    scp -r ${SSH_OPTS} ${src[*]} "${host}:${dst}"
}
etcd::ssh()
{
    local host="$1"
    shift
    ssh ${SSH_OPTS} -t "${host}" "$@" >/dev/null 2>&1
}
etcd::ssh_nowait()
{
    local host="$1"
    shift
    ssh ${SSH_OPTS} -t "${host}" "nohup $@" >/dev/null 2>&1 &
}

etcd::deploy()
{
    for key in ${!NODE_MAP[@]}
    do
        etcd::config $key
        etcd::ssh "root@${NODE_MAP[$key]}" "mkdir -p /var/lib/etcd /etc/etcd"
        etcd::ssh "root@${NODE_MAP[$key]}" "systemctl stop firewalld && systemctl disable firewalld"
        etcd::scp "root@${NODE_MAP[$key]}" "${key}.conf" "/etc/etcd/10-etcd.conf"
        etcd::scp "root@${NODE_MAP[$key]}" "etcd.service" "/usr/lib/systemd/system"
        etcd::scp "root@${NODE_MAP[$key]}" "${PWD}/temp-etcd/etcd ${PWD}/temp-etcd/etcdctl" "/usr/bin"
        etcd::ssh "root@${NODE_MAP[$key]}" "chmod 755 /usr/bin/etcd*"
        etcd::ssh_nowait "root@${NODE_MAP[$key]}" "systemctl daemon-reload && systemctl enable etcd && nohup systemctl start etcd"
    done

}

etcd::clean()
{
    for key in ${!NODE_MAP[@]}
    do
        rm -f ${PWD}/${key}.conf
    done
    rm -f ${PWD}/etcd.service
}


etcd::download
etcd::gen_unit
etcd::deploy
etcd::clean
echo -e "\033[32m Deployment finished!
Run etcdctl cluster-health to check that everything is OK \033[0m"

--------------------------------------------------------------------------------
/etcd/etcd_discovery.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd-discovery
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --listen-peer-urls=http://127.0.0.1:4379
    - --listen-client-urls=http://127.0.0.1:4378
    - --advertise-client-urls=http://127.0.0.1:4378
    - --data-dir=/var/etcd/data
    image: quay.io/coreos/etcd:v3.0.15
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 4378
        scheme: HTTP
      initialDelaySeconds: 15
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 15
    name: etcd-disk
    resources:
      requests:
        cpu: 200m
    securityContext:
      seLinuxOptions:
        type: spc_t
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/etcd
      name: etcd
  - env:
    - name: DISC_ETCD
      value: http://127.0.0.1:4378
    - name: DISC_HOST
      value: http://192.168.56.1:8087
    image: quay.io/coreos/discovery.etcd.io
    imagePullPolicy: IfNotPresent
    name: etcd-discovery
    resources:
      requests:
        cpu: 200m
    securityContext:
      seLinuxOptions:
        type: spc_t
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/etcd
      name: etcd
  hostNetwork: true
  restartPolicy: Always
  volumes:
  - name: etcd
    emptyDir: {}

--------------------------------------------------------------------------------
/images/.gitignore:
--------------------------------------------------------------------------------
# ignore everything
*
# except this file
!.gitignore

--------------------------------------------------------------------------------
/k8s-deploy.sh:
--------------------------------------------------------------------------------
#!/bin/bash
set -x
set -e

HTTP_SERVER=192.168.56.1:8000
KUBE_HA=true

KUBE_REPO_PREFIX=gcr.io/google_containers
KUBE_ETCD_IMAGE=quay.io/coreos/etcd:v3.0.17

root=$(id -u)
if [ "$root" -ne 0 ] ;then
    echo must run as root
    exit 1
fi

kube::install_docker()
{
    set +e
    docker info > /dev/null 2>&1
    i=$?
    set -e
    if [ $i -ne 0 ]; then
        curl -L http://$HTTP_SERVER/rpms/docker.tar.gz > /tmp/docker.tar.gz
        tar zxf /tmp/docker.tar.gz -C /tmp
        yum localinstall -y /tmp/docker/*.rpm
        systemctl enable docker.service && systemctl start docker.service
        kube::config_docker
    fi
    echo docker has been installed
    rm -rf /tmp/docker /tmp/docker.tar.gz
}

kube::config_docker()
{
    setenforce 0 > /dev/null 2>&1 && sed -i -e 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

    sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl -w net.bridge.bridge-nf-call-ip6tables=1
    cat <<EOF >>/etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

    sed -i -e 's/DOCKER_STORAGE_OPTIONS=/DOCKER_STORAGE_OPTIONS="-s overlay --selinux-enabled=false"/g' /etc/sysconfig/docker-storage

    systemctl daemon-reload && systemctl restart docker.service
}

kube::load_images()
{
    mkdir -p /tmp/k8s

    images=(
        kube-apiserver-amd64_v1.6.2
        kube-controller-manager-amd64_v1.6.2
        kube-scheduler-amd64_v1.6.2
        kube-proxy-amd64_v1.6.2
        pause-amd64_3.0
        k8s-dns-dnsmasq-nanny-amd64_1.14.1
        k8s-dns-kube-dns-amd64_1.14.1
        k8s-dns-sidecar-amd64_1.14.1
        etcd_v3.0.17
        flannel-amd64_v0.7.1
    )

    for i in "${!images[@]}"; do
        ret=$(docker images | awk 'NR!=1{print $1"_"$2}'| grep $KUBE_REPO_PREFIX/${images[$i]} | wc -l)
        if [ $ret -lt 1 ];then
            curl -L http://$HTTP_SERVER/images/${images[$i]}.tar > /tmp/k8s/${images[$i]}.tar
            docker load < /tmp/k8s/${images[$i]}.tar
        fi
    done

    rm /tmp/k8s* -rf
}

kube::install_k8s()
{
    set +e
    which kubeadm > /dev/null 2>&1
    i=$?
    set -e
    if [ $i -ne 0 ]; then
        curl -L http://$HTTP_SERVER/rpms/k8s.tar.gz > /tmp/k8s.tar.gz
        tar zxf /tmp/k8s.tar.gz -C /tmp
        yum localinstall -y /tmp/k8s/*.rpm
        rm -rf /tmp/k8s*
        # raise the limit: allow each node to run up to 1024 pods
        sed -i 's/--allow-privileged=true/--allow-privileged=true --max-pods=1024/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
        systemctl enable kubelet.service && systemctl daemon-reload && systemctl start kubelet.service && rm -rf /etc/kubernetes
    fi
}

kube::config_firewalld()
{
    systemctl disable firewalld && systemctl stop firewalld
    # iptables -A IN_public_allow -p tcp -m tcp --dport 9898 -m conntrack --ctstate NEW -j ACCEPT
    # iptables -A IN_public_allow -p tcp -m tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
    # iptables -A IN_public_allow -p tcp -m tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
}

kube::get_env()
{
    HA_STATE=$1
    [ $HA_STATE == "MASTER" ] && HA_PRIORITY=200 || HA_PRIORITY=`expr 200 - ${RANDOM} / 1000 + 1`
    KUBE_VIP=$(echo $2 |awk -F= '{print $2}')
    VIP_PREFIX=$(echo ${KUBE_VIP} | cut -d . -f 1,2,3)
    ### different lookups for dhcp vs. static addresses
    VIP_INTERFACE=$(ip addr show | grep ${VIP_PREFIX} | awk -F 'dynamic' '{print $2}' | head -1)
    [ -z ${VIP_INTERFACE} ] && VIP_INTERFACE=$(ip addr show | grep ${VIP_PREFIX} | awk -F 'global' '{print $2}' | head -1)
    ###
    LOCAL_IP=$(ip addr show | grep ${VIP_PREFIX} | awk -F / '{print $1}' | awk -F ' ' '{print $2}' | head -1)
    MASTER_NODES=$(echo $3 | grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}')
    MASTER_NODES_NO_LOCAL_IP=$(echo "${MASTER_NODES}" | sed -e 's/'${LOCAL_IP}'//g')
}

kube::install_keepalived()
{
    kube::get_env $@
    set +e
    which keepalived > /dev/null 2>&1
    i=$?
    set -e
    if [ $i -ne 0 ]; then
        ip addr add ${KUBE_VIP}/32 dev ${VIP_INTERFACE}
        curl -L http://$HTTP_SERVER/rpms/keepalived.tar.gz > /tmp/keepalived.tar.gz
        tar zxf /tmp/keepalived.tar.gz -C /tmp
        yum localinstall -y /tmp/keepalived/*.rpm
        rm -rf /tmp/keepalived*
        systemctl enable keepalived.service && systemctl start keepalived.service
        kube::config_keepalived
    fi
}

kube::config_keepalived()
{
    echo "gen keepalived configuration"
    cat <<EOF >/etc/keepalived/keepalived.conf
global_defs {
   router_id LVS_k8s
}

vrrp_script CheckK8sMaster {
    script "curl -k https://127.0.0.1:6443/api"
    interval 3
    timeout 9
    fall 2
    rise 2
}

vrrp_instance VI_1 {
    state ${HA_STATE}
    interface ${VIP_INTERFACE}
    virtual_router_id 61
    priority ${HA_PRIORITY}
    advert_int 1
    mcast_src_ip ${LOCAL_IP}
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 378378
    }
    unicast_peer {
        ${MASTER_NODES_NO_LOCAL_IP}
    }
    virtual_ipaddress {
        ${KUBE_VIP}
    }
    track_script {
        CheckK8sMaster
    }
}

EOF
    modprobe ip_vs
    systemctl daemon-reload && systemctl restart keepalived.service
}


kube::get_etcd_endpoint()
{
    local var=$2
    local temp=${var#*//}
    etcd_endpoint=${temp%%:*}
}

kube::save_master_ip()
{
    if [ ${KUBE_HA} == true ];then
        kube::get_etcd_endpoint $@
        set +e; ssh root@$etcd_endpoint "etcdctl mk ha_master ${LOCAL_IP}"; set -e
    fi
}

kube::copy_master_config()
{
    kube::get_etcd_endpoint $@
    local master_ip=$(ssh root@$etcd_endpoint "etcdctl get /ha_master")
    mkdir -p /etc/kubernetes
    scp -r root@${master_ip}:/etc/kubernetes/* /etc/kubernetes/
    systemctl daemon-reload && systemctl start kubelet
}

kube::set_label()
{
    export KUBECONFIG=/etc/kubernetes/admin.conf
    local hstnm=`hostname`
    local lowhstnm=$(echo $hstnm | tr '[A-Z]' '[a-z]')
    until kubectl get no | grep -i $lowhstnm; do sleep 1; done
    kubectl label node $lowhstnm kubeadm.alpha.kubernetes.io/role=master
}

kube::config_kubeadm()
{

    # kubeadm wants to reach the internet to look up the latest version
    echo $HTTP_SERVER storage.googleapis.com >> /etc/hosts

    local endpoints=$2
    local temp=${endpoints#*=}
    local etcd0=$(echo $temp | awk -F ',' '{print $1}')
    local etcd1=$(echo $temp | awk -F ',' '{print $2}')
    local etcd2=$(echo $temp | awk -F ',' '{print $3}')
    local advertiseAddress=$1
    local advertiseAddress=${advertiseAddress#*=}

    cat <<EOF >$HOME/kubeadm-config.yml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "$advertiseAddress"
#  bindPort:
etcd:
  endpoints:
  - "$etcd0"
  - "$etcd1"
  - "$etcd2"
#  caFile:
#  certFile:
#  keyFile:
kubernetesVersion: "v1.6.2"
networking:
#  dnsDomain:
#  serviceSubnet:
  # this is the --pod-network-cidr setting; it must be provided, otherwise the flannel network breaks later
  podSubnet: 12.240.0.0/12
EOF
}

kube::install_cni()
{
    # install flannel network
    export KUBECONFIG=/etc/kubernetes/admin.conf
    kubectl apply -f http://$HTTP_SERVER/network/kube-flannel-rbac.yml
    kubectl apply -f http://$HTTP_SERVER/network/kube-flannel.yml --namespace=kube-system
}

kube::master_up()
{
    shift

    kube::install_docker

    kube::load_images

    kube::install_k8s

    [ ${KUBE_HA} == true ] && kube::install_keepalived "MASTER" $@

    # store the master ip; the replica side needs it to copy the config
    kube::save_master_ip $@

    kube::config_kubeadm $@

    kubeadm init --config=$HOME/kubeadm-config.yml

    echo -e "\033[32m Find somewhere to write down the token above, now! \033[0m"

    kube::install_cni

    # so that kubectl get no shows the master role label
    kube::set_label

    # show pods
    kubectl get po --all-namespaces
}

kube::replica_up()
{
    shift

    kube::install_docker

    kube::load_images

    kube::install_k8s

    kube::install_keepalived "BACKUP" $@

    kube::copy_master_config $@

    kube::set_label

}

kube::node_up()
{
    shift

    kube::install_docker

    kube::load_images

    kube::install_k8s

    kube::config_firewalld

    kubeadm join $@
}

kube::tear_down()
{
    systemctl stop kubelet.service
    docker ps -aq|xargs -I '{}' docker stop {}
    docker ps -aq|xargs -I '{}' docker rm {}
    df |grep /var/lib/kubelet|awk '{ print $6 }'|xargs -I '{}' umount {}
    rm -rf /var/lib/kubelet && rm -rf /etc/kubernetes/ && rm -rf /var/lib/etcd
    yum remove -y kubectl kubeadm kubelet kubernetes-cni
    if [ ${KUBE_HA} == true ]
    then
        yum remove -y keepalived
        rm -rf /etc/keepalived/keepalived.conf
    fi
    rm -rf /var/lib/cni
    rm -rf /etc/systemd/system/docker.service.d/*
    ip link del cni0
    ip link del docker0
    ip link del flannel.1
}

kube::test()
{
    shift
    kube::config_kubeadm $@
}

main()
{
    case $1 in
    "m" | "master" )
        kube::master_up $@
        ;;
    "r" | "replica" )
        kube::replica_up $@
        ;;
    "j" | "join" )
        kube::node_up $@
        ;;
    "d" | "down" )
        kube::tear_down
        ;;
    "t" | "test" )
        kube::test $@
        ;;
    *)
        echo "usage: $0 m[master] | r[replica] | j[join] token | d[down] "
        echo "       $0 master    to setup master "
        echo "       $0 replica   to setup replica master "
        echo "       $0 join      to join master with token "
        echo "       $0 down      to tear everything down, including all data! so be careful"
        echo " unknown command $0 $@"
        ;;
    esac
}

main $@

--------------------------------------------------------------------------------
/network/kube-flannel-rbac.yml:
--------------------------------------------------------------------------------
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system

--------------------------------------------------------------------------------
/network/kube-flannel.yml:
--------------------------------------------------------------------------------
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "12.240.0.0/12",
      "Backend": {
        "Type": "host-gw"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.1-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
--------------------------------------------------------------------------------
/rpms/.gitignore:
--------------------------------------------------------------------------------
# ignore everything
*
# except this file
!.gitignore
--------------------------------------------------------------------------------