├── .gitignore ├── .gitmodules ├── 3rd └── README.md ├── Dockerfile ├── README.md ├── action_plugins └── kube.py ├── ansible.cfg ├── build.sh ├── callback_plugins └── ignore.py ├── link_cache_dir.sh ├── modify_download_retries.sh ├── package_amd64.yaml ├── package_arm64.yaml ├── pb_backup_etcd.yaml ├── pb_cluster.yaml ├── pb_cluster_version_containerd.yaml ├── pb_cluster_version_docker.yaml ├── pb_deploy_kube_bench.yaml ├── pb_drain_node.yaml ├── pb_install_addon.yaml ├── pb_remove_addon.yaml ├── pb_remove_node.yaml ├── pb_renew_cert.yaml ├── pb_restore_etcd.yaml ├── pb_scale.yaml ├── pb_sync_container_engine_params.yaml ├── pb_sync_etcd_address.yaml ├── pb_sync_nginx_config.yaml ├── pb_uncordon_node.yaml ├── pb_upgrade_cluster.yaml ├── release.md ├── roles ├── backup-etcd │ ├── defaults │ │ └── main.yml │ └── tasks │ │ └── main.yml ├── bootstrap-os │ ├── defaults │ │ └── main.yml │ ├── files │ │ └── bootstrap.sh │ ├── handlers │ │ └── main.yml │ ├── molecule │ │ └── default │ │ │ ├── converge.yml │ │ │ ├── molecule.yml │ │ │ └── tests │ │ │ └── test_default.py │ └── tasks │ │ ├── bootstrap-almalinux.yml │ │ ├── bootstrap-anolis.yml │ │ ├── bootstrap-centos.yml │ │ ├── bootstrap-kylin linux advanced server.yml │ │ ├── bootstrap-openeuler.yml │ │ ├── bootstrap-opensuse leap.yml │ │ ├── bootstrap-oraclelinux.yml │ │ ├── bootstrap-redhat.yml │ │ ├── bootstrap-rocky.yml │ │ ├── bootstrap-ubuntu.yml │ │ ├── bootstrap-uniontech os server 20.yml │ │ ├── centos-alike.yml │ │ ├── debian.yml │ │ └── main.yml ├── config-apt-sources │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ └── etc │ │ └── apt │ │ └── sources.list.j2 ├── config-yum-repo │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ ├── main.yml │ │ ├── openeuler.yml │ │ └── yum_repo.yml │ └── templates │ │ ├── anolis_8 │ │ ├── AnolisOS-AppStream.repo │ │ ├── AnolisOS-BaseOS.repo │ │ ├── AnolisOS-DDE.repo │ │ ├── AnolisOS-Extras.repo │ │ ├── AnolisOS-HighAvailability.repo │ │ ├── 
AnolisOS-Plus.repo │ │ └── AnolisOS-PowerTools.repo │ │ ├── centos_7 │ │ ├── CentOS-Base.repo │ │ ├── CentOS-CR.repo │ │ ├── CentOS-fasttrack.repo │ │ └── CentOS-x86_64-kernel.repo │ │ ├── centos_8 │ │ ├── CentOS-Stream-AppStream.repo │ │ ├── CentOS-Stream-BaseOS.repo │ │ ├── CentOS-Stream-Extras.repo │ │ ├── CentOS-Stream-HighAvailability.repo │ │ ├── CentOS-Stream-NFV.repo │ │ ├── CentOS-Stream-PowerTools.repo │ │ ├── CentOS-Stream-RealTime.repo │ │ └── CentOS-Stream-ResilientStorage.repo │ │ └── rocky_8 │ │ ├── Rocky-AppStream.repo │ │ ├── Rocky-BaseOS.repo │ │ ├── Rocky-Debuginfo.repo │ │ ├── Rocky-Devel.repo │ │ ├── Rocky-Extras.repo │ │ ├── Rocky-HighAvailability.repo │ │ ├── Rocky-Media.repo │ │ ├── Rocky-NFV.repo │ │ ├── Rocky-Plus.repo │ │ ├── Rocky-PowerTools.repo │ │ ├── Rocky-RT.repo │ │ ├── Rocky-ResilientStorage.repo │ │ └── Rocky-Sources.repo ├── configure-docker-repo │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ ├── templates │ │ └── docker.repo.j2 │ └── vars │ │ ├── almalinux.yml │ │ ├── anolis.yml │ │ ├── centos.yml │ │ ├── kylin linux advanced server.yml │ │ ├── openeuler.yml │ │ ├── opensuse leap.yml │ │ ├── oraclelinux.yml │ │ ├── redhat.yml │ │ ├── rocky.yml │ │ ├── ubuntu.yml │ │ └── uniontech os server 20.yml ├── deploy-kube-bench │ ├── kube-bench │ │ ├── cfg │ │ │ ├── cis-1.20 │ │ │ │ ├── config.yaml │ │ │ │ ├── controlplane.yaml │ │ │ │ ├── etcd.yaml │ │ │ │ ├── master.yaml │ │ │ │ ├── node.yaml │ │ │ │ └── policies.yaml │ │ │ └── config.yaml │ │ ├── kube-bench-amd64 │ │ └── kube-bench-arm64 │ └── tasks │ │ └── main.yml ├── deploy-kuboard │ ├── defaults │ │ └── main.yml │ ├── tasks │ │ └── main.yml │ └── templates │ │ ├── import-cluster-once.yaml.j2 │ │ ├── kuboard-pod.yaml.j2 │ │ └── kuboard-service-accounts.yaml.j2 ├── kuboard-spray-facts │ ├── defaults │ │ └── main.yml │ └── tasks │ │ └── main.yml ├── renew-cert │ └── tasks │ │ └── main.yml └── restore-etcd │ ├── defaults │ └── main.yml │ └── tasks │ └── main.yml └── 
test-cases.md /.gitignore: -------------------------------------------------------------------------------- 1 | kubespray_cache 2 | cache 3 | __pycache__ 4 | package.yaml 5 | .dockerignore 6 | .vscode 7 | .DS_Store -------------------------------------------------------------------------------- /.gitmodules: -------------------------------------------------------------------------------- 1 | [submodule "3rd/kubespray"] 2 | path = 3rd/kubespray 3 | url = git@github.com:eip-work/kubespray.git 4 | [submodule "3rd/ansible-apt-mirror"] 5 | path = 3rd/ansible-apt-mirror 6 | url = git@github.com:eip-work/ansible-apt-mirror.git 7 | [submodule "3rd/roles/ansible-apt-mirror"] 8 | path = 3rd/roles/ansible-apt-mirror 9 | url = git@github.com:eip-work/ansible-apt-mirror.git 10 | -------------------------------------------------------------------------------- /3rd/README.md: -------------------------------------------------------------------------------- 1 | kuboard-spray relies on kubespray as the cluster installation tool -------------------------------------------------------------------------------- /Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:3.15.0 2 | 3 | COPY kubespray_cache/ /kuboard-spray/resource/content/kubespray_cache/ 4 | COPY image_cache_k8s/ /kuboard-spray/resource/content/kubespray_cache/images/ 5 | COPY image_cache/ /kuboard-spray/resource/content/kubespray_cache/images/ 6 | COPY image_cache_kuboard/ /kuboard-spray/resource/content/kubespray_cache/images/ 7 | COPY 3rd/ /kuboard-spray/resource/content/3rd/ 8 | COPY roles/ /kuboard-spray/resource/content/roles/ 9 | COPY action_plugins/ /kuboard-spray/resource/content/action_plugins/ 10 | COPY callback_plugins/ /kuboard-spray/resource/content/callback_plugins/ 11 | COPY ansible.cfg README.md release.md *.yaml *.md /kuboard-spray/resource/content/ -------------------------------------------------------------------------------- /README.md:
-------------------------------------------------------------------------------- 1 | # kuboard-spray-resource 2 | 3 | This project provides resource packages for Kuboard-Spray 4 | 5 | ## Version encoding rules for resource packages: 6 | 7 | A resource package version number is made up of the following parts: 8 | * the `spray-` prefix + the kubespray version, e.g. `spray-v2.18.0-2`, which specifies the Tag in the `https://github.com/eip-work/kubespray` repository for the `/kuboard-spray/resource/content/3rd/kubespray` directory within the resource package; 9 | * the `k8s-` prefix + the Kubernetes version, e.g. `k8s-v1.23.1`, which specifies the Kubernetes version supported by the resource package; 10 | * the resource package's own version number + the CPU architecture it targets, e.g. `v1.2-amd64`: 11 | * the same resource package version number may appear for different combinations of kubespray version + k8s version; 12 | * but for any one combination of kubespray version + k8s version, the resource package version number must be unique; 13 | 14 | These three parts, joined by underscores `_`, form the complete resource package version, e.g. `spray-v2.18.0-2_k8s-v1.23.1_v1.2-amd64`. 15 | 16 | 17 | ## Open issues: 18 | 19 | * the missing nfs-utils package 20 | -------------------------------------------------------------------------------- /action_plugins/kube.py: -------------------------------------------------------------------------------- 1 | from __future__ import (absolute_import, division, print_function) 2 | __metaclass__ = type 3 | 4 | from ansible.plugins.action import ActionBase 5 | 6 | 7 | class ActionModule(ActionBase): 8 | def run(self, tmp=None, task_vars=None): 9 | super(ActionModule, self).run(tmp, task_vars) 10 | print(task_vars['kuboardspray_operation']) 11 | module_args = self._task.args.copy() 12 | if task_vars['kuboardspray_operation'] == 'remove_addon': 13 | if (module_args['resource'] == 'ns'): 14 | print("ignore namespace ", module_args['namespace']) 15 | else: 16 | module_args['state'] = 'absent' 17 | 18 | module_return = self._execute_module(module_name='kube', 19 | module_args=module_args, 20 | task_vars=task_vars, tmp=tmp) 21 | return module_return -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | # https://github.com/ansible/ansible/issues/56930 (to ignore group names with - and .)
3 | force_valid_group_names = ignore 4 | 5 | host_key_checking=False 6 | gathering = smart 7 | fact_caching = jsonfile 8 | fact_caching_connection = /tmp/facts_cache 9 | fact_caching_timeout = 7200 10 | stdout_callback = default 11 | display_skipped_hosts = no 12 | library = ./3rd/kubespray/library 13 | callback_whitelist = timer, profile_tasks 14 | roles_path = ./roles:./3rd/kubespray/roles:./3rd/kubespray/contrib:./3rd/roles:$VIRTUAL_ENV/usr/local/share/kubespray/roles:$VIRTUAL_ENV/usr/local/share/ansible/roles:/usr/share/kubespray/roles 15 | deprecation_warnings=False 16 | inventory_ignore_extensions = ~, .orig, .bak, .ini, .cfg, .retry, .pyc, .pyo, .creds, .gpg 17 | 18 | force_color=True 19 | 20 | interpreter_python=auto 21 | 22 | [inventory] 23 | ignore_patterns = artifacts, credentials 24 | 25 | [callback_profile_tasks] 26 | task_output_limit = 20 -------------------------------------------------------------------------------- /build.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo 4 | echo Remove package.yaml 5 | rm package.yaml 6 | if [ $(uname -m) == "x86_64" ]; then 7 | echo Link to package_amd64.yaml 8 | ln -s "package_amd64.yaml" package.yaml 9 | else 10 | echo Link to package_arm64.yaml 11 | ln -s "package_arm64.yaml" package.yaml 12 | fi 13 | 14 | # tag=swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard-spray-resource 15 | tag=$(cat package.yaml | shyaml get-value metadata.available_at.0) 16 | 17 | version=$(cat package.yaml | shyaml get-value metadata.version) 18 | kubespray_version="${version#*spray-}" 19 | kubespray_version="${kubespray_version%%_k8s*}" 20 | 21 | echo 22 | echo "cache/*" > .dockerignore 23 | echo "!cache/${version}" >> .dockerignore 24 | 25 | 26 | echo "checkout eip-work/kubespray:${kubespray_version}" 27 | 28 | cd ./3rd/kubespray 29 | git pull 30 | git checkout ${kubespray_version} 31 | 32 | cd ../..
33 | 34 | # echo 35 | # echo "checkout eip-work/kuboard-spray-resource:${version}" 36 | # git checkout ${version} 37 | # git pull 38 | 39 | echo 40 | echo "Link to /cache/${version}" 41 | 42 | # Move the files around so that the directories map to different layers of the image 43 | rm kubespray_cache 44 | mkdir image_cache_k8s 45 | mkdir image_cache_kuboard 46 | mv ./cache/${version}/images/k8s.* image_cache_k8s/ 47 | mv ./cache/${version}/images/eipwork* image_cache_kuboard/ 48 | mv ./cache/${version}/images image_cache 49 | mv ./cache/${version} kubespray_cache 50 | 51 | echo 52 | echo "Build the image" 53 | 54 | docker build -f Dockerfile -t $tag:$version . 55 | 56 | echo 57 | echo "Restore to /cache/${version}" 58 | # Move the files back to their original locations 59 | mv kubespray_cache ./cache/${version} 60 | mv image_cache ./cache/${version}/images 61 | mv image_cache_kuboard/eipwork* ./cache/${version}/images/ 62 | mv image_cache_k8s/k8s.* ./cache/${version}/images/ 63 | rm -r image_cache_k8s 64 | rm -r image_cache_kuboard 65 | 66 | ln -s "./cache/${version}" kubespray_cache 67 | ls -lh kubespray_cache 68 | 69 | echo 70 | echo "Push the image" 71 | docker push $tag:$version 72 | 73 | available_at_length=$(cat package.yaml | shyaml get-length metadata.available_at) 74 | 75 | echo $available_at_length 76 | for ((i = 1; i < ${available_at_length}; i++)) do 77 | tag_backup=$(cat package.yaml | shyaml get-value metadata.available_at.${i}) 78 | echo "" 79 | echo "Push image ${tag_backup}" 80 | docker tag $tag:$version $tag_backup:$version 81 | docker push $tag_backup:$version 82 | done 83 | 84 | -------------------------------------------------------------------------------- /callback_plugins/ignore.py: -------------------------------------------------------------------------------- 1 | # (c) 2012-2014, Michael DeHaan 2 | # (c) 2017 Ansible Project 3 | # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt) 4 | 5 | # Make coding more python3-ish 6 | from __future__ import (absolute_import, division, print_function) 7 | __metaclass__ = type 8 | 
DOCUMENTATION = ''' 10 | name: ignore 11 | ''' 12 | 13 | from ansible.plugins.callback import CallbackBase 14 | from ansible import constants as C 15 | 16 | class CallbackModule(CallbackBase): 17 | 18 | ''' 19 | Prompt the user to ignore error messages marked as ignore. 20 | ''' 21 | 22 | CALLBACK_VERSION = 2.0 23 | CALLBACK_NAME = 'ignore' 24 | 25 | def v2_runner_on_failed(self, result, ignore_errors=False): 26 | if ignore_errors: 27 | self._display.display("This error message is normal and expected; please ignore it", color=C.COLOR_SKIP) 28 | -------------------------------------------------------------------------------- /link_cache_dir.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | version=$(cat package.yaml | shyaml get-value metadata.version) 4 | kubespray_version="${version#*spray-}" 5 | kubespray_version="${kubespray_version%%_k8s*}" 6 | 7 | echo 8 | echo Remove package.yaml 9 | rm package.yaml 10 | if [ $(uname -m) == "x86_64" ]; then 11 | ln -s "package_amd64.yaml" package.yaml 12 | else 13 | ln -s "package_arm64.yaml" package.yaml 14 | fi 15 | 16 | if [ "${1}x" == "x" ]; then 17 | 18 | echo "checkout eip-work/kubespray:${kubespray_version}" 19 | cd ./3rd/kubespray 20 | git pull 21 | git checkout ${kubespray_version} 22 | 23 | cd ../..
24 | fi 25 | 26 | echo Remove the existing link 27 | rm kubespray_cache 28 | 29 | echo 30 | echo "Link to /cache/${version}" 31 | mkdir "./cache/${version}" || true 32 | ln -s "./cache/${version}" kubespray_cache 33 | 34 | echo 35 | 36 | ls -lh kubespray_cache 37 | -------------------------------------------------------------------------------- /modify_download_retries.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | sed -i 's/retries: 4/retries: 0/g' ./3rd/kubespray/roles/download/tasks/download_container.yml 3 | sed -i 's/retries: 4/retries: 0/g' ./3rd/kubespray/roles/download/tasks/download_file.yml -------------------------------------------------------------------------------- /package_amd64.yaml: -------------------------------------------------------------------------------- 1 | metadata: 2 | version: spray-v2.21.0c_k8s-v1.26.3_v4.3-amd64 3 | type: kubernetes-offline-resource 4 | kuboard_spray_version: 5 | min: v1.2.4 6 | available_at: 7 | - registry.cn-shanghai.aliyuncs.com/kuboard-spray/kuboard-spray-resource 8 | - swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard-spray-resource 9 | - eipwork/kuboard-spray-resource 10 | issue_date: "2023-3-26" 11 | owner: "shaohq@foxmail.com" 12 | can_upgrade_from: 13 | include: 14 | - spray-v2.21.0[a-c]_k8s-v1.26.[0-2]*_v4.[1-2]-amd64 15 | - spray-v2.20.0[a-b]_k8s-v1.25.[0-9]*_v3.[0-9]*-amd64 16 | exclude: 17 | can_replace_to: 18 | supported_os: 19 | - distribution: Ubuntu 20 | versions: 21 | - "20.04" 22 | - "22.04" 23 | - distribution: Anolis 24 | versions: 25 | - "8.4" 26 | - "8.5" 27 | - "8.6" 28 | - distribution: CentOS 29 | versions: 30 | - "7.6" 31 | - "7.8" 32 | - "7.9" 33 | - "8" 34 | - distribution: RedHat 35 | versions: 36 | - "7.9" 37 | - "8.5" 38 | - distribution: OracleLinux 39 | versions: 40 | - "8.5" 41 | - "8.7" 42 | - "9.1" 43 | - distribution: Rocky 44 | versions: 45 | - "8.5" 46 | - "8.7" 47 | - "9.1" 48 | - distribution: openEuler 49 | versions: 50 | - "20.03" 51 | - 
"22.03" 52 | - distribution: Kylin Linux Advanced Server 53 | versions: 54 | - "V10" 55 | - distribution: openSUSE Leap 56 | versions: 57 | - "15.3" 58 | - distribution: UnionTech OS Server 20 59 | versions: 60 | - "20" 61 | - distribution: AlmaLinux 62 | versions: 63 | - "8.7" 64 | - "9.1" 65 | supported_feature: 66 | eviction_hard: true 67 | 68 | data: 69 | kubespray_version: v2.21.0c 70 | supported_playbooks: 71 | install_cluster: pb_cluster.yaml 72 | remove_node: pb_remove_node.yaml 73 | add_node: pb_scale.yaml 74 | sync_nginx_config: pb_sync_nginx_config.yaml 75 | sync_etcd_address: pb_sync_etcd_address.yaml 76 | install_addon: pb_install_addon.yaml 77 | remove_addon: pb_remove_addon.yaml 78 | cluster_version_containerd: pb_cluster_version_containerd.yaml 79 | cluster_version_docker: pb_cluster_version_docker.yaml 80 | upgrade_cluster: pb_upgrade_cluster.yaml 81 | drain_node: pb_drain_node.yaml 82 | uncordon_node: pb_uncordon_node.yaml 83 | cis_scan: true # CIS scanning is activated only when this property is true 84 | renew_cert: pb_renew_cert.yaml 85 | sync_container_engine_params: pb_sync_container_engine_params.yaml 86 | backup_etcd: pb_backup_etcd.yaml 87 | restore_etcd: pb_restore_etcd.yaml 88 | 89 | kubernetes: 90 | kube_version: "v1.26.3" 91 | image_arch: amd64 92 | gcr_image_repo: "gcr.io" 93 | kube_image_repo: "k8s.gcr.io" 94 | candidate_admission_plugins: 
AlwaysAdmit,AlwaysDeny,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,DenyServiceExternalIPs,EventRateLimit,ExtendedResourceToleration,ImagePolicyWebhook,LimitPodHardAntiAffinityTopology,LimitRanger,MutatingAdmissionWebhook,NamespaceAutoProvision,NamespaceExists,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PersistentVolumeClaimResize,PersistentVolumeLabel,PodNodeSelector,PodSecurity,PodTolerationRestriction,Priority,ResourceQuota,RuntimeClass,SecurityContextDeny,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook 95 | default_enabled_admission_plugins: CertificateApproval,CertificateSigning,CertificateSubjectRestriction,DefaultIngressClass,DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,MutatingAdmissionWebhook,NamespaceLifecycle,PersistentVolumeClaimResize,Priority,ResourceQuota,RuntimeClass,ServiceAccount,StorageObjectInUseProtection,TaintNodesByCondition,ValidatingAdmissionWebhook 96 | 97 | container_engine: 98 | - container_manager: "containerd" 99 | params: 100 | containerd_version: 1.6.19 101 | # - container_manager: "docker" 102 | # params: 103 | # docker_version: "20.10" 104 | # docker_containerd_version: 1.4.12 105 | 106 | vars: 107 | target: 108 | containerd_version: 1.6.19 109 | etcd_version: v3.5.6 110 | calico_version: "v3.24.5" 111 | flannel_cni_version: "v1.2.0" 112 | kubelet_checksums: 113 | arm64: 114 | v1.26.3: d360f919c279a05441b27178030c3d17134c1f257c95f4b22bdb28c2290993e7 115 | amd64: 116 | v1.26.3: 992d6298bd494b65f54c838419773c4976aca72dfb36271c613537efae7ab7d2 117 | kubectl_checksums: 118 | arm64: 119 | v1.26.3: 0f62cbb6fafa109f235a08348d74499a57bb294c2a2e6ee34be1fa83432fec1d 120 | amd64: 121 | v1.26.3: 026c8412d373064ab0359ed0d1a25c975e9ce803a093d76c8b30c5996ad73e75 122 | kubeadm_checksums: 123 | arm64: 124 | v1.26.3: 
e9a7dbca77f9576a98af1db8747e9dc13e930e40295eaa259dd99fd6e17a173f 125 | amd64: 126 | v1.26.3: 87a1bf6603e252a8fa46be44382ea218cb8e4f066874d149dc589d0f3a405fed 127 | crun_checksums: 128 | arm64: 129 | 1.4.5: 64a01114060ec12e66b1520c6ee6967410022d1ec73cdc7d14f952343c0769f2 130 | amd64: 131 | 1.4.5: 84cf20a6060cd53ac21a0590367d1ab65f74baae005c42f2d5bc1af918470455 132 | runc_checksums: 133 | arm64: 134 | v1.1.4: dbb71e737eaef454a406ce21fd021bd8f1b35afb7635016745992bbd7c17a223 135 | amd64: 136 | v1.1.4: db772be63147a4e747b4fe286c7c16a2edc4a8458bd3092ea46aaee77750e8ce 137 | containerd_archive_checksums: 138 | arm64: 139 | 1.6.19: 25a0dd6cce4e1058824d6dc277fc01dc45da92539ccb39bb6c8a481c24d2476e 140 | amd64: 141 | 1.6.19: 3262454d9b3581f4d4da0948f77dde1be51cfc42347a1548bc9ab6870b055815 142 | nerdctl_archive_checksums: 143 | arm64: 144 | 1.0.0: 27622c9d95efe6d807d5f3770d24ddd71719c6ae18f76b5fc89663a51bcd6208 145 | amd64: 146 | 1.0.0: 3e993d714e6b88d1803a58d9ff5a00d121f0544c35efed3a3789e19d6ab36964 147 | etcd_binary_checksums: 148 | arm64: 149 | v3.5.6: 888e25c9c94702ac1254c7655709b44bb3711ebaabd3cb05439f3dd1f2b51a87 150 | amd64: 151 | v3.5.6: 4db32e3bc06dd0999e2171f76a87c1cffed8369475ec7aa7abee9023635670fb 152 | cni_binary_checksums: 153 | arm64: 154 | v1.2.0: 525e2b62ba92a1b6f3dc9612449a84aa61652e680f7ebf4eff579795fe464b57 155 | amd64: 156 | v1.2.0: f3a841324845ca6bf0d4091b4fc7f97e18a623172158b72fc3fdcdb9d42d2d37 157 | flannel_cni_binary_checksums: 158 | arm64: 159 | v1.2.0: f813ae49b7b84eb95db73f7a3c34d2ee101f8cfc27e3a8054297a36d53308543 160 | amd64: 161 | v1.2.0: 63906a5b7dc78fbf1fbd484adbf4931aea5b15546ece3c7202c779ab9ea994a2 162 | flannel_image_repo: "{{ docker_image_repo }}/flannelcni/flannel" 163 | flannel_image_tag: "{{ flannel_version }}-{{ image_arch }}" 164 | flannel_init_image_repo: "{{ docker_image_repo }}/flannelcni/flannel-cni-plugin" 165 | flannel_init_image_tag: "{{ flannel_cni_version }}-{{ image_arch }}" 166 | calicoctl_download_url: 
"https://github.com/projectcalico/calico/releases/download/{{ calico_ctl_version }}/calicoctl-linux-{{ image_arch }}" 167 | calicoctl_binary_checksums: 168 | amd64: 169 | v3.24.5: 01e6c8a2371050f9edd0ade9dcde89da054e84d8e96bd4ba8cf82806c8d3e8e7 170 | arm64: 171 | v3.24.5: 2d56b768ed346129b0249261db27d97458cfb35f98bd028a0c817a23180ab2d2 172 | calico_crds_archive_checksums: 173 | v3.24.5: 10320b45ebcf4335703d692adacc96cdd3a27de62b4599238604bd7b0bedccc3 174 | krew_archive_checksums: 175 | linux: 176 | arm64: 177 | v0.4.3: 0994923848882ad0d4825d5af1dc227687a10a02688f785709b03549dd34d71d 178 | amd64: 179 | v0.4.3: 5df32eaa0e888a2566439c4ccb2ef3a3e6e89522f2f2126030171e2585585e4f 180 | crictl_checksums: 181 | arm64: 182 | v1.26.0: b632ca705a98edc8ad7806f4279feaff956ac83aa109bba8a85ed81e6b900599 183 | amd64: 184 | v1.26.0: cda5e2143bf19f6b548110ffba0fe3565e03e8743fadd625fee3d62fc4134eed 185 | snapshot_controller_image_tag: "v4.2.1" 186 | dns_min_replicas: "{{ [ 2, groups['kube_control_plane'] | length ] | min }}" 187 | kuboardspray_extra_downloads: 188 | kuboard: 189 | container: true 190 | file: false 191 | enabled: "{{ kuboard_enabled | default(false) }}" 192 | version: "{{ kuboard_version | default('v3.5.2.3') }}" 193 | repo: "eipwork/kuboard" 194 | tag: "{{ kuboard_version }}" 195 | sha256: "" 196 | groups: 197 | - kube_control_plane 198 | netcheck_etcd: 199 | container: true 200 | file: false 201 | enabled: "{{ deploy_netchecker }}" 202 | version: "{{ netcheck_etcd_image_tag }}" 203 | dest: "{{ local_release_dir }}/etcd-{{ netcheck_etcd_image_tag }}-linux-{{ image_arch }}.tar.gz" 204 | repo: "{{ etcd_image_repo }}" 205 | tag: "{{ netcheck_etcd_image_tag }}" 206 | sha256: >- 207 | {{ etcd_digest_checksum|d(None) }} 208 | unarchive: false 209 | owner: "root" 210 | mode: "0755" 211 | groups: 212 | - k8s_cluster 213 | coredns: 214 | enabled: "{{ dns_mode in ['coredns', 'coredns_dual'] }}" 215 | container: true 216 | repo: "{{ coredns_image_repo }}" 217 | tag: "{{ 
coredns_image_tag }}" 218 | sha256: "{{ coredns_digest_checksum|default(None) }}" 219 | groups: 220 | - k8s_cluster 221 | 222 | etcd: 223 | etcd_version: v3.5.6 224 | etcd_params: 225 | etcd_deployment_type: 226 | - "host" 227 | dependency: 228 | - name: crun 229 | version: 1.4.5 230 | target: crun_version 231 | - name: krew 232 | version: "v0.4.3" 233 | target: krew_version 234 | - name: runc 235 | version: v1.1.4 236 | target: runc_version 237 | - name: cni-plugins 238 | version: "v1.2.0" 239 | target: cni_version 240 | - name: crictl 241 | version: "v1.26.0" 242 | target: crictl_version 243 | - name: nerdctl 244 | version: "1.0.0" 245 | target: nerdctl_version 246 | - name: nginx_image 247 | version: 1.23.2 248 | target: nginx_image_tag 249 | - name: coredns 250 | target: coredns_version 251 | version: "v1.9.3" 252 | - name: cluster-proportional-autoscaler 253 | target: dnsautoscaler_version 254 | version: 1.8.5 255 | - name: pause 256 | target: pod_infra_version 257 | version: "3.8" 258 | network_plugin: 259 | - name: calico 260 | params: 261 | calico_version: "v3.24.5" 262 | - name: flannel 263 | params: 264 | flannel_version: "v0.20.2" 265 | flannel_cni_version: "v1.2.0" 266 | addon: 267 | - name: kuboard 268 | target: kuboard_enabled 269 | lifecycle: 270 | install_by_default: true 271 | check: 272 | shell: "kubectl get pods -n kuboard -l k8s.kuboard.cn/name=kuboard-v3" 273 | keyword: 'kuboard-v3' 274 | install_addon_tags: 275 | - download 276 | - upgrade 277 | - kuboard 278 | remove_addon_tags: 279 | - upgrade 280 | - kuboard 281 | downloads: 282 | - kuboard 283 | params_default: 284 | kuboard_version: 'v3.5.2.3' 285 | kuboard_port: 80 286 | kuboard_cluster_name: 'default' 287 | kuboard_data_dir: '/root/kuboard-data' 288 | params: 289 | - name: nodelocaldns 290 | target: enable_nodelocaldns 291 | lifecycle: 292 | install_by_default: true 293 | check: 294 | shell: "kubectl get daemonset -n kube-system nodelocaldns -o json" 295 | keyword: '"k8s-app": 
"kube-dns"' 296 | install_addon_tags: 297 | - download 298 | - upgrade 299 | - coredns 300 | - nodelocaldns 301 | downloads: 302 | - nodelocaldns 303 | - coredns 304 | params: 305 | nodelocaldns_version: "1.22.18" 306 | enable_nodelocaldns_secondary: false 307 | - name: netchecker 308 | target: deploy_netchecker 309 | lifecycle: 310 | install_by_default: true 311 | check: 312 | shell: "kubectl get deployment -n {{ netcheck_namespace | default('default') }} netchecker-server -o json" 313 | keyword: "k8s-netchecker-server" 314 | install_addon_tags: 315 | - download 316 | - upgrade 317 | - netchecker 318 | remove_addon_tags: 319 | - upgrade 320 | - netchecker 321 | downloads: 322 | - netcheck_server 323 | - netcheck_agent 324 | - netcheck_etcd 325 | params: 326 | netcheck_version: "v1.2.2" 327 | netcheck_agent_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-agent" 328 | netcheck_agent_image_tag: "{{ netcheck_version }}" 329 | netcheck_server_image_repo: "{{ docker_image_repo }}/mirantis/k8s-netchecker-server" 330 | netcheck_server_image_tag: "{{ netcheck_version }}" 331 | netcheck_etcd_image_tag: "v3.5.6" 332 | # - name: helm 333 | # install_by_default: false 334 | # target: helm_enabled 335 | # params: 336 | # helm_version: "v3.7.1" 337 | - name: metrics_server 338 | target: metrics_server_enabled 339 | lifecycle: 340 | install_by_default: true 341 | check: 342 | shell: "kubectl get deployments -n kube-system metrics-server -o json" 343 | keyword: "k8s.gcr.io/metrics-server/metrics-server" 344 | install_addon_tags: 345 | - download 346 | - upgrade 347 | - metrics_server 348 | remove_addon_tags: 349 | - upgrade 350 | - metrics_server 351 | downloads: 352 | - metrics_server 353 | params: 354 | metrics_server_version: "v0.6.2" 355 | # - name: cephfs_provisioner 356 | # install_by_default: false 357 | # target: cephfs_provisioner_enabled 358 | # params: 359 | # csi_attacher_image_repo: "{{ kube_image_repo }}/sig-storage/csi-attacher" 360 | # 
csi_attacher_image_tag: "v3.3.0" 361 | # csi_provisioner_image_repo: "{{ kube_image_repo }}/sig-storage/csi-provisioner" 362 | # csi_provisioner_image_tag: "v3.0.0" 363 | # csi_snapshotter_image_repo: "{{ kube_image_repo }}/sig-storage/csi-snapshotter" 364 | # csi_snapshotter_image_tag: "v4.2.1" 365 | # csi_resizer_image_repo: "{{ kube_image_repo }}/sig-storage/csi-resizer" 366 | # csi_resizer_image_tag: "v1.3.0" 367 | # csi_node_driver_registrar_image_repo: "{{ kube_image_repo }}/sig-storage/csi-node-driver-registrar" 368 | # csi_node_driver_registrar_image_tag: "v2.4.0" 369 | # csi_livenessprobe_image_repo: "{{ kube_image_repo }}/sig-storage/livenessprobe" 370 | # csi_livenessprobe_image_tag: "v2.5.0" 371 | # - name: local_path_provisioner 372 | # install_by_default: false 373 | # target: local_path_provisioner_enabled 374 | # params: 375 | # local_path_provisioner_image_tag: "v0.0.19" 376 | -------------------------------------------------------------------------------- /pb_backup_etcd.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: all 4 | tasks: 5 | - set_fact: 6 | unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}" 7 | 8 | - name: Gather facts 9 | gather_facts: False 10 | hosts: kube_control_plane:etcd 11 | any_errors_fatal: true 12 | roles: 13 | - { role: kuboard-spray-facts } 14 | 15 | - name: Back up etcd 16 | gather_facts: False 17 | hosts: etcd 18 | roles: 19 | - { role: backup-etcd } 20 | -------------------------------------------------------------------------------- /pb_cluster.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: all 4 | tasks: 5 | - set_fact: 6 | unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}" 7 | 8 | # Since this is always distributed via the image, we do not check ansible_version 9 | # - name: Check ansible version 10 | # import_playbook: 3rd/kubespray/ansible_version.yml 11 | 12 | # - name: Ensure compatibility with old 
groups
# import_playbook: legacy_groups.yml

# - name: Gather facts
#   tags: always
#   import_playbook: 3rd/kubespray/facts.yml
- name: Gather facts
  gather_facts: False
  hosts: k8s_cluster:etcd:calico_rr
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts }

- hosts: target
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: config-apt-sources }
    - { role: config-yum-repo }
    - { role: kubespray-defaults }
    - { role: configure-docker-repo, when: container_manager == 'docker' }
    - { role: deploy-kube-bench, tags: download, when: "not skip_downloads" }

# add os_service task to disable firewalld
- name: Disable firewalld
  gather_facts: False
  hosts: target
  roles:
    - { role: os-services/roles/prepare }

- hosts: k8s_cluster:etcd
  gather_facts: False
  strategy: linear
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os }

- hosts: k8s_cluster:etcd
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: "container-engine", tags: "container-engine", when: deploy_container_engine }
    - { role: download, tags: download, when: "not skip_downloads" }

- hosts: etcd
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - role: etcd
      tags: etcd
      vars:
        etcd_cluster_setup: true
        etcd_events_cluster_setup: "{{ etcd_events_cluster_enabled }}"
      when: etcd_deployment_type != "kubeadm"

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - role: etcd
      tags: etcd
      vars:
        etcd_cluster_setup: false
        etcd_events_cluster_setup: false
      when: etcd_deployment_type != "kubeadm"

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/node, tags: node }

- hosts: kube_control_plane
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/control-plane, tags: master }
    - { role: kubernetes/client, tags: client }
    - { role: kubernetes-apps/cluster_roles, tags: cluster-roles }

- hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/kubeadm, tags: kubeadm }
    - { role: kubernetes/node-label, tags: node-label }
    - { role: network_plugin, tags: network }

- hosts: calico_rr
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: network_plugin/calico/rr, tags: ['network', 'calico_rr'] }

- hosts: kube_control_plane[0]
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] }

- hosts: kube_control_plane
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: deploy-kuboard, tags: kuboard }
    - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
    - { role: kubernetes-apps/network_plugin, tags: network }
    - { role: kubernetes-apps/policy_controller, tags: policy-controller }
    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
    - { role: kubernetes-apps, tags: apps }

- name: Apply resolv.conf changes now that cluster DNS is up
  hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }

- hosts: etcd
  gather_facts: False
  tasks:
    - name: "Change the mode of {{ etcd_data_dir }} to satisfy the CIS scan requirements"
      ansible.builtin.file:
        path: "{{ etcd_data_dir }}"
        mode: '0700'
--------------------------------------------------------------------------------
/pb_cluster_version_containerd.yaml:
--------------------------------------------------------------------------------
---
- name: Gather K8s Cluster Version
  hosts: target
  gather_facts: False
  strategy: free
  tasks:
    - name: kubelet
      shell: kubelet --version | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: kube-apiserver
      shell: kubectl version | grep Server | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: kubectl
      shell: kubectl version --client=true | grep -Eo "\"v[0-9]+\.[0-9]+\.[0-9]+\"" | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: kubeadm
      shell: kubeadm version | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: kubeproxy
      shell: |+
        ps_value=$(crictl ps | grep kube-proxy)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "k8s.gcr.io' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: etcd
      shell: echo v$(etcd --version | grep etcd | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+")
      when:
        - inventory_hostname in groups['etcd']
    - name: containerd
      shell: containerd --version | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: crictl
      shell: crictl --version | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: calico
      shell: |+
        ps_value=$(crictl ps | grep calico-node)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "quay.io' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
        - kube_network_plugin == 'calico'
    - name: flannel
      shell: |+
        ps_value=$(crictl ps | grep flannel)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "quay.io\|"image": "docker.io' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
        - kube_network_plugin == 'flannel'
    - name: nerdctl
      shell: nerdctl --version | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: runc
      shell: echo v$(runc --version | grep runc | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+")
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: nginx_image
      shell: |+
        ps_value=$(crictl ps | grep nginx-proxy)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "docker.io' | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['kube_node']
        - inventory_hostname not in groups['kube_control_plane']
    - name: coredns
      shell: |+
        ps_value=$(crictl ps | grep coredns)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "k8s' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: nodelocaldns
      shell: |+
        ps_value=$(crictl ps | grep node-cache)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "k8s' | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - enable_nodelocaldns
    - name: netchecker
      shell: |+
        ps_value=$(crictl ps | grep netchecker-agent)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "docker.io' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - deploy_netchecker
    - name: metrics_server
      shell: |+
        ps_value=$(crictl ps | grep metrics-server)
        if [ "${ps_value}" != "" ]; then
          crictl inspect $(echo ${ps_value} | awk '{print $1}') | grep '"image": "k8s.gcr.io' | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - metrics_server_enabled | default(false)
--------------------------------------------------------------------------------
/pb_cluster_version_docker.yaml:
--------------------------------------------------------------------------------
---
- name: Gather K8s Cluster Version
  hosts: target
  gather_facts: False
  strategy: free
  tasks:
    - name: kubelet
      shell: kubelet --version | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: kube-apiserver
      shell: kubectl version | grep Server | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: kubectl
      shell: kubectl version --client=true | grep -Eo "\"v[0-9]+\.[0-9]+\.[0-9]+\"" | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: kubeadm
      shell: kubeadm version | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: kubeproxy
      shell: |+
        ps_value=$(docker ps | grep k8s_kube-proxy_kube-proxy)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
    - name: etcd
      shell: echo v$(etcd --version | grep etcd | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+")
      when:
        - inventory_hostname in groups['etcd']
        - etcd_deployment_type is defined and etcd_deployment_type == 'host'
    - name: etcd_docker
      shell: |+
        ps_value=$(docker ps | grep quay.io/coreos/etcd | grep {{ etcd_member_name }})
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['etcd']
        - etcd_deployment_type is defined and etcd_deployment_type == 'docker'
    - name: docker
      shell: docker version --format '{{"{{.Server.Version}}"}}'
    - name: calico
      shell: |+
        ps_value=$(docker ps | grep k8s_calico-node_calico-node)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
        - kube_network_plugin == 'calico'
    - name: flannel
      shell: |+
        ps_value=$(docker ps | grep k8s_kube-flannel_kube-flannel)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['k8s_cluster']
        - kube_network_plugin == 'flannel'
    - name: runc
      shell: echo v$(runc --version | grep runc | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+")
    - name: nginx_image
      shell: |+
        ps_value=$(docker ps | grep k8s_nginx-proxy_nginx-proxy)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "nginx:[0-9]+\.[0-9]+\.[0-9]+" | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['kube_node']
        - inventory_hostname not in groups['kube_control_plane']
    - name: coredns
      shell: |+
        ps_value=$(docker ps | grep k8s_coredns_coredns)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - inventory_hostname in groups['kube_control_plane']
    - name: nodelocaldns
      shell: |+
        ps_value=$(docker ps | grep k8s_node-cache_nodelocaldns)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep k8s.gcr.io/dns/k8s-dns | grep -Eo "[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - enable_nodelocaldns
    - name: netchecker
      shell: |+
        ps_value=$(docker ps | grep k8s_netchecker-agent_netchecker-agent)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - deploy_netchecker
    - name: metrics_server
      shell: |+
        ps_value=$(docker ps | grep k8s_metrics-server_metrics-server)
        if [ "${ps_value}" != "" ]; then
          imageId=$(docker inspect $(echo ${ps_value} | awk '{print $1}') | grep -Eo -m 1 'sha256:[0-9a-z]*')
          imageId=${imageId#sha256:}
          docker inspect $imageId | grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+";
        fi
      when:
        - metrics_server_enabled | default(false)
--------------------------------------------------------------------------------
/pb_deploy_kube_bench.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: deploy kube-bench
  hosts: target
  roles:
    - { role: kuboard-spray-facts }
    - { role: kubespray-defaults }
    - { role: deploy-kube-bench }
--------------------------------------------------------------------------------
/pb_drain_node.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: Drain node
  hosts: target
  vars:
    # If non-empty, will use this string as identification instead of the actual hostname
    kube_override_hostname: >-
      {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
      {%- else -%}
      {{ inventory_hostname }}
      {%- endif -%}
    kubectl: kubectl
    drain_nodes: true
    drain_pod_selector: ''
    drain_fallback_enabled: true
    drain_fallback_grace_period: "{{ drain_grace_period }}"
    upgrade_node_uncordon_after_drain_failure: true
    drain_fallback_timeout: "{{ drain_timeout }}"
    drain_fallback_retries: "{{ drain_retries }}"
    drain_fallback_retry_delay_seconds: "{{ drain_retry_delay_seconds }}"
  tasks:
    # Node Ready: type = ready, status = True
    # Node NotReady: type = ready, status = Unknown
    - name: See if node is in ready state
      command: >
        {{ kubectl }} get node {{ kube_override_hostname|default(inventory_hostname) }}
        -o jsonpath="{ range .status.conditions[?(@.type == 'Ready')].status }{ @ }{ end }"
      register: kubectl_node_ready
      delegate_to: "{{ groups['kube_control_plane'][0] }}"
      failed_when: false
      changed_when: false

    # SchedulingDisabled: unschedulable = true
    # else unschedulable key doesn't exist
    - name: See if node is schedulable
      command: >
        {{ kubectl }} get node {{ kube_override_hostname|default(inventory_hostname) }}
        -o jsonpath={ .spec.unschedulable }
      register: kubectl_node_schedulable
      delegate_to: "{{ groups['kube_control_plane'][0] }}"
      failed_when: false
      changed_when: false

    - name: Set if node needs cordoning
      set_fact:
        needs_cordoning: >-
          {% if (kubectl_node_ready.stdout == "True" and not kubectl_node_schedulable.stdout) or upgrade_node_always_cordon -%}
          true
          {%- else -%}
          false
          {%- endif %}

    - name: Node draining
      block:
        - name: Cordon node
          command: "{{ kubectl }} cordon {{ kube_override_hostname|default(inventory_hostname) }}"
          delegate_to: "{{ groups['kube_control_plane'][0] }}"

        - name: Check kubectl version
          command: "{{ kubectl }} version --client --short"
          register: kubectl_version
          delegate_to: "{{ groups['kube_control_plane'][0] }}"
          run_once: yes
          changed_when: false
          when:
            - drain_nodes
            - drain_pod_selector

        - name: Ensure minimum version for drain label selector if necessary
          assert:
            that: "kubectl_version.stdout.split(' ')[-1] is version('v1.10.0', '>=')"
          when:
            - drain_nodes
            - drain_pod_selector

        - name: Drain node
          command: >-
            {{ kubectl }} drain
            --force
            --ignore-daemonsets
            --grace-period {{ hostvars['localhost']['drain_grace_period_after_failure'] | default(drain_grace_period) }}
            --timeout {{ hostvars['localhost']['drain_timeout_after_failure'] | default(drain_timeout) }}
            --delete-emptydir-data {{ kube_override_hostname|default(inventory_hostname) }}
            {% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
          when: drain_nodes
          register: result
          failed_when:
            - result.rc != 0
            - not drain_fallback_enabled
          until: result.rc == 0
          retries: "{{ drain_retries }}"
          delay: "{{ drain_retry_delay_seconds }}"

        - name: Drain fallback
          block:
            - name: Set facts after regular drain has failed
              set_fact:
                drain_grace_period_after_failure: "{{ drain_fallback_grace_period }}"
                drain_timeout_after_failure: "{{ drain_fallback_timeout }}"
              delegate_to: localhost
              delegate_facts: yes
              run_once: yes

            - name: Drain node - fallback with disabled eviction
              command: >-
                {{ kubectl }} drain
                --force
                --ignore-daemonsets
                --grace-period {{ drain_fallback_grace_period }}
                --timeout {{ drain_fallback_timeout }}
                --delete-emptydir-data {{ kube_override_hostname|default(inventory_hostname) }}
                {% if drain_pod_selector %}--pod-selector '{{ drain_pod_selector }}'{% endif %}
                --disable-eviction
              register: drain_fallback_result
              until: drain_fallback_result.rc == 0
              retries: "{{ drain_fallback_retries }}"
              delay: "{{ drain_fallback_retry_delay_seconds }}"
          when:
            - drain_nodes
            - drain_fallback_enabled
            - result.rc != 0

      rescue:
        - name: Set node back to schedulable
          command: "{{ kubectl }} uncordon {{ inventory_hostname }}"
          when: upgrade_node_uncordon_after_drain_failure
        # - name: Fail after rescue
        #   fail:
        #     msg: "Failed to drain node {{ inventory_hostname }}"
        #   when: upgrade_node_fail_if_drain_fails
      delegate_to: "{{ groups['kube_control_plane'][0] }}"
      when:
        - needs_cordoning
--------------------------------------------------------------------------------
/pb_install_addon.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

# - name: Ensure compatibility with old groups
#   import_playbook: legacy_groups.yml

# - name: Gather facts
#   tags: always
#   import_playbook: 3rd/kubespray/facts.yml
- name: Gather facts
  gather_facts: False
  hosts: k8s_cluster
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts, tags: download }

- name: Pre-install
  gather_facts: False
  hosts: k8s_cluster
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: download, tags: download, when: "not skip_downloads" }

- hosts: kube_control_plane
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: deploy-kuboard, tags: kuboard }
    - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
    - { role: kubernetes-apps/network_plugin, tags: network }
    - { role: kubernetes-apps/policy_controller, tags: policy-controller }
    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
    - { role: kubernetes-apps, tags: apps }
--------------------------------------------------------------------------------
/pb_remove_addon.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

# - name: Ensure compatibility with old groups
#   import_playbook: legacy_groups.yml

# - name: Gather facts
#   tags: always
#   import_playbook: 3rd/kubespray/facts.yml
- name: Gather facts
  gather_facts: False
  hosts: k8s_cluster
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts }

- name: Pre-install
  hosts: k8s_cluster
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }

- hosts: kube_control_plane
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: deploy-kuboard, tags: kuboard }
    - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller }
    - { role: kubernetes-apps/network_plugin, tags: network }
    - { role: kubernetes-apps/policy_controller, tags: policy-controller }
    - { role: kubernetes-apps/ingress_controller, tags: ingress-controller }
    - { role: kubernetes-apps/external_provisioner, tags: external-provisioner }
    - { role: kubernetes-apps, tags: apps }
--------------------------------------------------------------------------------
/pb_remove_node.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

# content from 3rd/kubespray/facts.yml
- name: Gather facts
  gather_facts: no
  hosts: kube_control_plane[0],etcd[0],{{node}}
  roles:
    - { role: kuboard-spray-facts, when: reset_nodes|default(True)|bool }
    - { role: kubespray-defaults, when: reset_nodes|default(True)|bool }

# content from 3rd/kubespray/remove_node.yml
- hosts: kube_control_plane[0]:etcd[0]
  gather_facts: no
  roles:
    - { role: kuboard-spray-facts, when: not reset_nodes|default(True)|bool }
    - { role: kubespray-defaults, when: not reset_nodes|default(True)|bool }

- hosts: "{{ node }}"
  gather_facts: no
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults, when: reset_nodes|default(True)|bool }
    - { role: remove-node/pre-remove, tags: pre-remove }
    - { role: remove-node/remove-etcd-node, when: inventory_hostname in groups.etcd }
    - { role: reset, tags: reset, when: reset_nodes|default(True)|bool }

- hosts: "{{ node }}"
  gather_facts: no
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults, when: reset_nodes|default(True)|bool }
    - { role: remove-node/post-remove, tags: post-remove }
--------------------------------------------------------------------------------
/pb_renew_cert.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: Renew certificates
  hosts: kube_control_plane
  gather_facts: False
  tasks:
    - name: Renew the certificates of kube_control_plane nodes one by one
      include_tasks: "./roles/renew-cert/tasks/main.yml"
      when: inventory_hostname == item
      loop: "{{ groups['kube_control_plane'] }}"
--------------------------------------------------------------------------------
/pb_restore_etcd.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: Restore etcd data
  hosts: etcd
  gather_facts: False
  roles:
    - { role: kuboard-spray-facts }
    - { role: kubespray-defaults }
    - { role: restore-etcd }
--------------------------------------------------------------------------------
/pb_scale.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

# - name: Ensure compatibility with old groups
#   import_playbook: legacy_groups.yml

- name: Disable firewalld
  hosts: "{{ node }}"
  roles:
    - { role: os-services/roles/prepare }

- name: Config apt-sources / yum-repo
  hosts: "{{ node }}"
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  roles:
    - { role: config-apt-sources }
    - { role: config-yum-repo }
    - { role: kubespray-defaults }
    - { role: configure-docker-repo, when: container_manager == 'docker' }


- name: Bootstrap any new workers
  hosts: "{{ node }}"
  strategy: linear
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  gather_facts: false
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: bootstrap-os, tags: bootstrap-os }
    - { role: deploy-kube-bench, tags: download, when: "not skip_downloads" }

- name: Gather facts
  hosts: kube_control_plane,{{node}}
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts }
    - { role: kubespray-defaults }

# - name: Generate the etcd certificates beforehand
#   hosts: etcd
#   gather_facts: False
#   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
#   environment: "{{ proxy_disable_env }}"
#   roles:
#     - { role: kubespray-defaults }
#     - { role: etcd, tags: etcd, etcd_cluster_setup: false }

# - name: Download images to ansible host cache via first kube_control_plane node
#   hosts: kube_control_plane[0]
#   gather_facts: False
#   any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
#   environment: "{{ proxy_disable_env }}"
#   roles:
#     - { role: kubespray-defaults, when: "not skip_downloads and download_run_once and not download_localhost" }
#     - { role: kubernetes/preinstall, tags: preinstall, when: "not skip_downloads and download_run_once and not download_localhost" }
#     - { role: download, tags: download, when: "not skip_downloads and download_run_once and not download_localhost" }

- name: Target only workers to get kubelet installed and checking in on any new nodes(engine)
  hosts: "{{ node }}"
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, tags: preinstall }
    - { role: container-engine, tags: "container-engine", when: deploy_container_engine }
    - { role: download, tags: download, when: "not skip_downloads" }
    # - { role: etcd, tags: etcd, etcd_cluster_setup: false, when: "not etcd_kubeadm_enabled|default(false)" }
    - role: etcd
      tags: etcd
      etcd_cluster_setup: false
      when:
        - "etcd_deployment_type != 'kubeadm'"
        - inventory_hostname in groups['etcd']
  vars:
    kubeadm_images:
      kubeadm_kube-proxy:
        enabled: true
        container: true
        repo: "{{ kube_image_repo }}/kube-proxy"
        tag: "{{ kube_version }}"
        groups: "k8s_cluster"


- name: Target only workers to get kubelet installed and checking in on any new nodes(node)
  hosts: "{{ node }}"
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/node, tags: node }

- name: Upload control plane certs and retrieve encryption key
  hosts: kube_control_plane | first
  environment: "{{ proxy_disable_env }}"
  gather_facts: False
  tags: kubeadm
  roles:
    - { role: kubespray-defaults }
  tasks:
    - name: Upload control plane certificates
      command: >-
        {{ bin_dir }}/kubeadm init phase
        --config {{ kube_config_dir }}/kubeadm-config.yaml
        upload-certs
        --upload-certs
      environment: "{{ proxy_disable_env }}"
      register: kubeadm_upload_cert
      changed_when: false
    - name: set fact 'kubeadm_certificate_key' for later use
      set_fact:
        kubeadm_certificate_key: "{{ kubeadm_upload_cert.stdout_lines[-1] | trim }}"
      when: kubeadm_certificate_key is not defined

- name: Target only workers to get kubelet installed and checking in on any new nodes(network)
  hosts: "{{ node }}"
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/kubeadm, tags: kubeadm }
    - { role: kubernetes/node-label, tags: node-label }
    - { role: network_plugin, tags: network }

- name: Apply resolv.conf changes now that cluster DNS is up
  hosts: k8s_cluster
  gather_facts: False
  any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
  environment: "{{ proxy_disable_env }}"
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true }
--------------------------------------------------------------------------------
/pb_sync_container_engine_params.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: Gather facts
  gather_facts: False
  hosts: k8s_cluster
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts }
    - { role: kubespray-defaults }
    - role: configure-docker-repo
      when: container_manager == 'docker'
    - role: container-engine/docker
      when: container_manager == 'docker'
    - role: container-engine/containerd
      when: container_manager == 'containerd'

# - name: Update container engine parameters
#   hosts: k8s_cluster
#   gather_facts: False
#   tasks:
#     - include_role:
#         name: container-engine/docker
#       when:
#         - inventory_hostname == iterate_k8s_cluster_item
#         - container_manager == 'docker'
#       loop: "{{ groups['k8s_cluster'] }}"
#       loop_control:
#         loop_var: iterate_k8s_cluster_item
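The pb_cluster_version_* playbooks above all lean on one shell idiom: print a component's version banner, then cut the semantic version out of it with `grep -Eo`. A minimal standalone sketch of that idiom (the sample banner strings and the `extract_semver` helper are illustrative, not part of the playbooks):

```shell
#!/bin/sh
# Extract the first semantic version (optionally "v"-prefixed) from stdin,
# mirroring the `grep -Eo "v[0-9]+\.[0-9]+\.[0-9]+"` pattern used by the
# kubelet/kubeadm/etcd version tasks.
extract_semver() {
  grep -Eo "v?[0-9]+\.[0-9]+\.[0-9]+" | head -n 1
}

echo 'Kubernetes v1.23.17' | extract_semver   # -> v1.23.17
echo 'etcd Version: 3.5.6' | extract_semver   # -> 3.5.6
```

Note that the trailing `+` on the last digit group matters: with `v[0-9]+\.[0-9]+\.[0-9]` instead, a version like `v1.23.17` would be truncated to `v1.23.1`.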
--------------------------------------------------------------------------------
/pb_sync_etcd_address.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

- name: Gather facts
  gather_facts: False
  hosts: kube_control_plane:etcd
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts }
    - { role: kubespray-defaults }

- name: fix kube-apiserver.yaml
  hosts: kube_control_plane
  vars:
    etcd_hosts: "{{ groups['etcd'] | default(groups['kube_control_plane']) }}"
    etcd_access_addresses: |-
      {% for item in etcd_hosts -%}
        https://{{ hostvars[item]['etcd_access_address'] | default(hostvars[item]['ip'] | default(fallback_ips[item])) }}:2379{% if not loop.last %},{% endif %}
      {%- endfor %}
  tasks:
    - name: Update etcd-servers for apiserver
      lineinfile:
        dest: "{{ kube_config_dir }}/manifests/kube-apiserver.yaml"
        regexp: '^    - --etcd-servers='
        line: '    - --etcd-servers={{ etcd_access_addresses }}'
      when:
        - not etcd_kubeadm_enabled | default(false)
        - kuboardspray_node_action is not defined

    - name: print etcd-servers for apiserver
      debug:
        msg:
          - "The --etcd-servers parameter in /etc/kubernetes/manifests/kube-apiserver.yaml has been replaced with: {{ etcd_access_addresses }}"
      when:
        - kuboardspray_node_action is not defined

    - name: Configure | Wait for apiserver to be healthy
      shell: "set -o pipefail && kubectl get pods | grep -v 'did you specify the right host or port' > /dev/null"
      args:
        executable: /bin/bash
      register: kube_apiserver_is_healthy
      until: kube_apiserver_is_healthy.rc == 0
      retries: "25"
      delay: "5"
      changed_when: false
      check_mode: no
      when:
        - kuboardspray_node_action is not defined
--------------------------------------------------------------------------------
/pb_sync_nginx_config.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

# Since this is always distributed via the image, we don't check ansible_version
# - name: Check ansible version
#   import_playbook: 3rd/kubespray/ansible_version.yml

- name: Gather facts
  gather_facts: False
  hosts: k8s_cluster
  any_errors_fatal: true
  roles:
    - { role: kuboard-spray-facts, tags: ['nginx'] }

- name: Config nginx proxy for apiserver
  gather_facts: False
  hosts: k8s_cluster
  roles:
    - { role: kubespray-defaults }
    - { role: kubernetes/node }

- name: Configure | Wait for nginx to restart
  gather_facts: False
  hosts: kube_node,!kube_control_plane
  tags: nginx
  tasks:

    - name: Configure | Wait for nginx to restart
      shell: "set -o pipefail && curl -ik https://localhost:6443/healthz | grep -v '\"apiVersion\":' > /dev/null"
      args:
        executable: /bin/bash
      register: nginx_is_healthy
      until: nginx_is_healthy.rc == 0
      retries: "25"
      delay: "5"
      changed_when: false
      check_mode: no
--------------------------------------------------------------------------------
/pb_uncordon_node.yaml:
--------------------------------------------------------------------------------
---

- hosts: all
  tasks:
    - set_fact:
        unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}"

- name: Uncordon node
  hosts: target
  vars:
    # If non-empty, will use this string as identification instead of the actual hostname
    kube_override_hostname: >-
      {%- if cloud_provider is defined and cloud_provider in [ 'aws' ] -%}
      {%- else -%}
      {{ inventory_hostname }}
      {%- endif -%}
    kubectl: kubectl
tasks: 19 | # Node Ready: type = ready, status = True 20 | # Node NotReady: type = ready, status = Unknown 21 | - name: See if node is in ready state 22 | command: > 23 | {{ kubectl }} get node {{ kube_override_hostname|default(inventory_hostname) }} 24 | -o jsonpath="{ range .status.conditions[?(@.type == 'Ready')].status }{ @ }{ end }" 25 | register: kubectl_node_ready 26 | delegate_to: "{{ groups['kube_control_plane'][0] }}" 27 | failed_when: false 28 | changed_when: false 29 | 30 | - name: Fail when node is not ready 31 | fail: 32 | msg: "Node {{ inventory_hostname }} is not currently in the READY state" 33 | when: kubectl_node_ready.stdout != "True" 34 | 35 | - name: debug 36 | debug: 37 | msg: "Node {{ inventory_hostname }} is in the READY state; uncordoning it" 38 | 39 | - name: Uncordon node 40 | command: "{{ kubectl }} uncordon {{ kube_override_hostname|default(inventory_hostname) }}" 41 | delegate_to: "{{ groups['kube_control_plane'][0] }}" 42 | -------------------------------------------------------------------------------- /pb_upgrade_cluster.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - hosts: all 4 | tasks: 5 | - set_fact: 6 | unsafe_show_logs: "{{ not (kuboardspray_no_log | default(true)) }}" 7 | 8 | - name: Gather facts 9 | gather_facts: False 10 | hosts: target 11 | any_errors_fatal: true 12 | roles: 13 | - { role: kuboard-spray-facts, tags: download } 14 | 15 | - hosts: k8s_cluster:etcd:calico_rr 16 | strategy: linear 17 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 18 | gather_facts: false 19 | environment: "{{ proxy_disable_env }}" 20 | vars: 21 | # Need to disable pipelining for bootstrap-os as some systems have requiretty in sudoers set, which makes pipelining 22 | # fail. bootstrap-os fixes this on these systems, so in later plays it can be enabled.
23 | ansible_ssh_pipelining: false 24 | roles: 25 | - { role: kubespray-defaults } 26 | - { role: bootstrap-os, tags: bootstrap-os} 27 | - { role: deploy-kube-bench, tags: download, when: "not skip_downloads" } 28 | 29 | - name: Download images to ansible host cache via first kube_control_plane node 30 | hosts: kube_control_plane[0] 31 | gather_facts: False 32 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 33 | environment: "{{ proxy_disable_env }}" 34 | roles: 35 | - { role: kubespray-defaults, when: "not skip_downloads and download_run_once and not download_localhost"} 36 | - { role: kubernetes/preinstall, tags: preinstall, when: "not skip_downloads and download_run_once and not download_localhost" } 37 | - { role: download, tags: download, when: "not skip_downloads and download_run_once and not download_localhost" } 38 | 39 | - name: Prepare nodes for upgrade 40 | hosts: k8s_cluster:etcd:calico_rr 41 | gather_facts: False 42 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 43 | environment: "{{ proxy_disable_env }}" 44 | roles: 45 | - { role: kubespray-defaults } 46 | - { role: kubernetes/preinstall, tags: preinstall } 47 | - { role: download, tags: download, when: "not skip_downloads" } 48 | 49 | - name: Upgrade container engine on non-cluster nodes 50 | hosts: etcd:calico_rr:!k8s_cluster 51 | gather_facts: False 52 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 53 | environment: "{{ proxy_disable_env }}" 54 | serial: "{{ serial | default('20%') }}" 55 | roles: 56 | - { role: kubespray-defaults } 57 | - { role: container-engine, tags: "container-engine", when: deploy_container_engine } 58 | 59 | - hosts: etcd 60 | gather_facts: False 61 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 62 | environment: "{{ proxy_disable_env }}" 63 | roles: 64 | - { role: kubespray-defaults } 65 | - role: etcd 66 | tags: etcd 67 | vars: 68 | etcd_cluster_setup: true 69 | etcd_events_cluster_setup: "{{ 
etcd_events_cluster_enabled }}" 70 | when: etcd_deployment_type != "kubeadm" 71 | 72 | - hosts: k8s_cluster 73 | gather_facts: False 74 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 75 | environment: "{{ proxy_disable_env }}" 76 | roles: 77 | - { role: kubespray-defaults } 78 | - role: etcd 79 | tags: etcd 80 | vars: 81 | etcd_cluster_setup: false 82 | etcd_events_cluster_setup: false 83 | when: etcd_deployment_type != "kubeadm" 84 | 85 | - name: Handle upgrades to master components first to maintain backwards compat. 86 | gather_facts: False 87 | hosts: kube_control_plane 88 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 89 | environment: "{{ proxy_disable_env }}" 90 | serial: 1 91 | roles: 92 | - { role: kubespray-defaults } 93 | - { role: upgrade/pre-upgrade, tags: pre-upgrade } 94 | - { role: container-engine, tags: "container-engine", when: deploy_container_engine } 95 | - { role: kubernetes/node, tags: node } 96 | - { role: kubernetes/control-plane, tags: master, upgrade_cluster_setup: true } 97 | - { role: kubernetes/client, tags: client } 98 | - { role: kubernetes/node-label, tags: node-label } 99 | - { role: kubernetes-apps/cluster_roles, tags: cluster-roles } 100 | - { role: kubernetes-apps, tags: csi-driver } 101 | - { role: upgrade/post-upgrade, tags: post-upgrade } 102 | 103 | - name: Upgrade calico and external cloud provider on all masters, calico-rrs, and nodes 104 | hosts: kube_control_plane:calico_rr:kube_node 105 | gather_facts: False 106 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 107 | serial: "{{ serial | default('20%') }}" 108 | environment: "{{ proxy_disable_env }}" 109 | roles: 110 | - { role: kubespray-defaults } 111 | - { role: kubernetes-apps/external_cloud_controller, tags: external-cloud-controller } 112 | - { role: network_plugin, tags: network } 113 | - { role: kubernetes-apps/network_plugin, tags: network } 114 | - { role: kubernetes-apps/policy_controller, tags: policy-controller } 115 
| 116 | - name: Finally handle worker upgrades, based on given batch size 117 | hosts: kube_node:calico_rr:!kube_control_plane 118 | gather_facts: False 119 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 120 | environment: "{{ proxy_disable_env }}" 121 | serial: "{{ serial | default('20%') }}" 122 | roles: 123 | - { role: kubespray-defaults } 124 | - { role: upgrade/pre-upgrade, tags: pre-upgrade } 125 | - { role: container-engine, tags: "container-engine", when: deploy_container_engine } 126 | - { role: kubernetes/node, tags: node } 127 | - { role: kubernetes/kubeadm, tags: kubeadm } 128 | - { role: kubernetes/node-label, tags: node-label } 129 | - { role: upgrade/post-upgrade, tags: post-upgrade } 130 | 131 | - hosts: kube_control_plane[0] 132 | gather_facts: False 133 | any_errors_fatal: true 134 | environment: "{{ proxy_disable_env }}" 135 | roles: 136 | - { role: kubespray-defaults } 137 | - { role: win_nodes/kubernetes_patch, tags: ["master", "win_nodes"] } 138 | 139 | - hosts: calico_rr 140 | gather_facts: False 141 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 142 | environment: "{{ proxy_disable_env }}" 143 | roles: 144 | - { role: kubespray-defaults } 145 | - { role: network_plugin/calico/rr, tags: network } 146 | 147 | - hosts: kube_control_plane 148 | gather_facts: False 149 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 150 | environment: "{{ proxy_disable_env }}" 151 | roles: 152 | - { role: kubespray-defaults } 153 | - { role: kubernetes-apps/ingress_controller, tags: ingress-controller } 154 | - { role: kubernetes-apps/external_provisioner, tags: external-provisioner } 155 | - { role: kubernetes-apps, tags: apps } 156 | 157 | - name: Apply resolv.conf changes now that cluster DNS is up 158 | hosts: k8s_cluster 159 | gather_facts: False 160 | any_errors_fatal: "{{ any_errors_fatal | default(true) }}" 161 | environment: "{{ proxy_disable_env }}" 162 | roles: 163 | - { role: kubespray-defaults } 164 | - { 
role: kubernetes/preinstall, when: "dns_mode != 'none' and resolvconf_mode == 'host_resolvconf'", tags: resolvconf, dns_late: true } 165 | -------------------------------------------------------------------------------- /release.md: -------------------------------------------------------------------------------- 1 | # Release Notes 2 | 3 | ## spray-v2.21.0c_k8s-v1.26.3_v4.3 4 | 5 | **Release date** 6 | 7 | April 1, 2023 8 | 9 | **New features** 10 | 11 | * Updated kubernetes to v1.26.3 12 | 13 | ## spray-v2.21.0c_k8s-v1.26.2_v4.2 14 | 15 | **Release date** 16 | 17 | March 5, 2023 18 | 19 | **New features** 20 | 21 | * Updated kubernetes to v1.26.2 22 | 23 | ## spray-v2.21.0c_k8s-v1.26.1_v4.1 24 | 25 | **Release date** 26 | 27 | February 19, 2023 28 | 29 | **New features** 30 | 31 | * Updated kubernetes to v1.26.1 32 | 33 | ## spray-v2.20.0b_k8s-v1.25.6_v3.5 34 | 35 | **Release date** 36 | 37 | January 25, 2023 38 | 39 | **New features** 40 | 41 | * Updated containerd to v1.6.15 42 | * Updated kubernetes to v1.25.6 43 | * Updated calico to v3.24.5 44 | * Updated flannel to v0.20.2 45 | 46 | ## spray-v2.20.0b_k8s-v1.25.5_v3.4 47 | 48 | **Release date** 49 | 50 | December 12, 2022 51 | 52 | **New features** 53 | 54 | * Updated kubernetes to v1.25.5 55 | 56 | ## spray-v2.20.0b_k8s-v1.25.4_v3.3 57 | 58 | **Release date** 59 | 60 | November 13, 2022 61 | 62 | **New features** 63 | 64 | * Updated kubernetes to v1.25.4 65 | * Support for customizing the kubelet parameter evictionHard (requires kuboard-spray v1.2.2 or later) 66 | 67 | ## spray-v2.20.0a_k8s-v1.25.3_v3.2 68 | 69 | **Release date** 70 | 71 | October 16, 2022 72 | 73 | **New features** 74 | 75 | * Updated kubernetes to v1.25.3 76 | 77 | ## spray-v2.20.0a_k8s-v1.25.2_v3.1 78 | 79 | **Release date** 80 | 81 | October 6, 2022 82 | 83 | **New features** 84 | 85 | * Updated kubernetes to v1.25.2 86 | 87 | ## spray-v2.19.0b_k8s-v1.24.6_v2.5 88 | 89 | **Release date** 90 | 91 | September 25, 2022 92 | 93 | **New features** 94 | 95 | * Updated kubernetes to v1.24.6 96 | 97 | ## spray-v2.19.0b_k8s-v1.24.5_v2.4 98 | 99 | **Release date** 100 | 101 | September 18, 2022 102 | 103 | **New features** 104 | 105 | * Updated kubernetes to v1.24.5 106 | * Support for UnionTech OS Server 20 107 | 108 | ## spray-v2.19.0a_k8s-v1.24.4_v2.3 109 | 110 | **Release date** 111
| 112 | August 21, 2022 113 | 114 | **New features** 115 | 116 | * Updated kubernetes to v1.24.4 117 | 118 | ## spray-v2.19.0a_k8s-v1.24.3_v2.2 119 | 120 | **Release date** 121 | 122 | July 17, 2022 123 | 124 | **New features** 125 | 126 | * Updated kubernetes to v1.24.3 127 | 128 | **Bug fixes** 129 | 130 | * Fixed: adding worker nodes failed after a newly added etcd node became the first etcd node 131 | 132 | ## spray-v2.19.0a_k8s-v1.24.2_v2.1 133 | 134 | **Release date** 135 | 136 | 2022-06-25 137 | 138 | **New features** 139 | 140 | * Updated kubernetes to v1.24.2 141 | * Support for Ubuntu 22.04 142 | 143 | **Bug fixes** 144 | 145 | * k8s 1.24.x no longer creates Secrets for ServiceAccounts automatically 146 | 147 | ## spray-v2.18.0a-8_k8s-v1.23.6_v1.13 148 | 149 | **Release date** 150 | 151 | 2022-05-01 152 | 153 | **New features** 154 | 155 | * Kuboard can now be installed 156 | 157 | ## spray-v2.18.0a-8_k8s-v1.23.6_v1.12 158 | 159 | **Release date** 160 | 161 | 2022-04-22 162 | 163 | **New features** 164 | 165 | * Updated kubernetes to v1.23.6 166 | * Support for openEuler 22.03 167 | 168 | ## spray-v2.18.0a-8_k8s-v1.23.5_v1.11 169 | 170 | **Release date** 171 | 172 | 2022-03-27 173 | 174 | **Improvements** 175 | 176 | * Support for Red Hat operating systems 177 | 178 | ## spray-v2.18.0a-8_k8s-v1.23.5_v1.10 179 | 180 | **Release date** 181 | 182 | 2022-03-20 183 | 184 | **New features** 185 | 186 | * Updated kubernetes to v1.23.5 187 | * ETCD backup 188 | * ETCD restore 189 | 190 | **Improvements** 191 | 192 | * Check whether the CPU architecture matches 193 | 194 | **Bug fixes** 195 | 196 | * Fixed: the Kylin (NeoKylin) operating system could not be used 197 | 198 | ## spray-v2.18.0a-8_k8s-v1.23.4_v1.9 199 | 200 | **Release date** 201 | 202 | 2022-03-01 203 | 204 | 205 | **Bug fixes** 206 | 207 | * Fixed: installation packages could not be distributed when using a bastion host 208 | 209 | ## spray-v2.18.0a-7_k8s-v1.23.4_v1.8 210 | 211 | **Release date** 212 | 213 | 2022-02-27 214 | 215 | **New features** 216 | * When upgrading the cluster, for an individual node (requires kuboard-spray v1.0.0-beta.3 or later): 217 | * The node can be drained manually before the upgrade 218 | * The node can be uncordoned after the upgrade 219 | * The apiserver certificate can be renewed manually 220 | 221 | **Improvements** 222 | * Check whether a package repository has been selected for the operating system 223 | * Check whether the kuboardspray version meets the resource package's requirements 224 | * Dedicated etcd nodes can be added 225 | 226 | ## spray-v2.18.0a-6_k8s-v1.23.4_v1.7 227 | 228 | 229 | **Release date** 230 | 231 | 2022-02-20 232 | 233 | **Changes** 234 | 235 | * Version upgrades 236 | * kube-version: v1.23.4 237 | * containerd: 1.6.0 238
| * pause: 3.6 239 | * nerdctl: 0.17.0 240 | * calico: v3.21.4 241 | * runc: v1.1.0 242 | * Check before installation that the operating system is in the supported list 243 | * kube-bench CIS scanning 244 | * Comply with the kube-bench CIS scan rules 245 | * Deploy kube-bench 246 | * Comply with the kube-bench scan rules 247 | * amd64: supported operating systems 248 | * OracleLinux 8.5 249 | * Anolis 8.5 250 | * Rocky 8.5 251 | * Kylin (NeoKylin) V10 252 | * openSUSE Leap 15.3 253 | * arm64 254 | * netchecker is not supported on arm64 255 | * containerd is supported as the container engine on arm64 256 | * CentOS 7.9 257 | * OracleLinux 8.5 258 | * Anolis 8.5 259 | * Rocky 8.5 260 | * Kylin (NeoKylin) V10 261 | * openSUSE Leap 15.3 262 | 263 | ## spray-v2.18.0a-2_k8s-v1.23.3_v1.6 264 | 265 | **Release date** 266 | 267 | 2022-02-06 268 | 269 | **Changes** 270 | 271 | * Prompt users to ignore error messages that are expected under normal circumstances 272 | * Adapt to the new features of kuboard-spray:v1.0.0-beta.1 273 | * Optional components can be uninstalled 274 | * Cluster nodes can be accessed through a bastion host 275 | * Cluster component versions can be queried 276 | * Cluster upgrade is supported 277 | * coredns should be distributed to k8s_cluster 278 | * amd64: supported operating systems 279 | * OpenEuler 20.03 280 | * arm64: supported operating systems 281 | * Ubuntu 20.04 282 | 283 | ## spray-v2.18.0a-0_k8s-v1.23.3_v1.5 284 | 285 | **Release date** 286 | 287 | 2022-01-26 288 | 289 | **Changes** 290 | 291 | * Updated kubernetes to v1.23.3 292 | * Updated containerd_version to 1.5.9 293 | * Updated crun_version to 1.4 294 | * Updated crictl_version to v1.23.0 295 | * Updated coredns_version to v1.8.6 296 | * Updated calico_version to v3.21.2 297 | * Updated metrics_server_version to v0.5.2 298 | 299 | **Bug fixes** 300 | 301 | * Fixed: netcheck_etcd could not be downloaded offline 302 | 303 | ## spray-v2.18.0-5_k8s-v1.23.1_v1.4 304 | 305 | **Prerequisites** 306 | 307 | * kuboard-spray v1.0.0-alpha.5 or later 308 | 309 | **Changes** 310 | 311 | * Works with kuboard-spray v1.0.0-alpha.5 and above 312 | * Optional components can be installed individually 313 | * Improvements 314 | * After updating /etc/nginx/nginx.conf, check nginx's restart status 315 | * After updating the etcd-servers parameter in /etc/kubernetes/manifests/kube-apiserver.yaml, check the restart status of all apiservers 316 | * Bug fixes 317 | * Fixed: newly added worker nodes did not work because the kube-proxy image was missing 318 | * Fixed: nginx parameters could not be set properly 319 | * Fixed: the coredns minimum replica count setting --------------------------------------------------------------------------------
/roles/backup-etcd/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Set to false to only do certificate management 3 | etcd_cluster_setup: true 4 | etcd_events_cluster_setup: false 5 | 6 | # Set to true to separate k8s events to a different etcd cluster 7 | etcd_events_cluster_enabled: false 8 | 9 | etcd_backup_prefix: "/var/backups" 10 | etcd_data_dir: "/var/lib/etcd" 11 | 12 | # Number of etcd backups to retain. Set to a value < 0 to retain all backups 13 | etcd_backup_retention_count: -1 14 | 15 | force_etcd_cert_refresh: true 16 | etcd_config_dir: /etc/ssl/etcd 17 | etcd_cert_dir: "{{ etcd_config_dir }}/ssl" 18 | etcd_cert_dir_mode: "0700" 19 | etcd_cert_group: root 20 | # Note: This does not set up DNS entries. It simply adds the following DNS 21 | # entries to the certificate 22 | etcd_cert_alt_names: 23 | - "etcd.kube-system.svc.{{ dns_domain }}" 24 | - "etcd.kube-system.svc" 25 | - "etcd.kube-system" 26 | - "etcd" 27 | etcd_cert_alt_ips: [] 28 | 29 | etcd_script_dir: "{{ bin_dir }}/etcd-scripts" 30 | 31 | etcd_heartbeat_interval: "250" 32 | etcd_election_timeout: "5000" 33 | 34 | # etcd_snapshot_count: "10000" 35 | 36 | etcd_metrics: "basic" 37 | 38 | # Define in inventory to set a separate port for etcd to expose metrics on 39 | # etcd_metrics_port: 2381 40 | 41 | ## A dictionary of extra environment variables to add to etcd.env, formatted like: 42 | ## etcd_extra_vars: 43 | ## ETCD_VAR1: "value1" 44 | ## ETCD_VAR2: "value2" 45 | etcd_extra_vars: {} 46 | 47 | 48 | etcd_blkio_weight: 1000 49 | 50 | etcd_node_cert_hosts: "{{ groups['k8s_cluster'] | union(groups.get('calico_rr', [])) }}" 51 | 52 | etcd_compaction_retention: "8" 53 | 54 | # Force clients like etcdctl to use TLS certs (different than peer security) 55 | etcd_secure_client: true 56 | 57 | # Enable peer client cert authentication 58 | etcd_peer_client_auth: true 59 | 60 | # Maximum number of snapshot files to retain (0 is unlimited) 
61 | # etcd_max_snapshots: 5 62 | 63 | # Maximum number of wal files to retain (0 is unlimited) 64 | # etcd_max_wals: 5 65 | 66 | # Number of loop retries 67 | etcd_retries: 4 68 | 69 | is_etcd_master: "{{ inventory_hostname in groups['etcd'] }}" 70 | etcd_access_addresses: |- 71 | {% for item in etcd_hosts -%} 72 | https://{{ hostvars[item]['etcd_access_address'] | default(hostvars[item]['ip'] | default(fallback_ips[item])) }}:2379{% if not loop.last %},{% endif %} 73 | {%- endfor %} 74 | etcd_hosts: "{{ groups['etcd'] | default(groups['kube_control_plane']) }}" 75 | 76 | retry_stagger: 5 -------------------------------------------------------------------------------- /roles/backup-etcd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Check etcd cluster health 3 | shell: "set -o pipefail && {{ bin_dir }}/etcdctl endpoint --cluster status && {{ bin_dir }}/etcdctl endpoint --cluster health 2>&1 | grep -v 'Error: unhealthy cluster' >/dev/null" 4 | args: 5 | executable: /bin/bash 6 | register: etcd_cluster_is_healthy 7 | failed_when: false 8 | changed_when: false 9 | check_mode: no 10 | run_once: yes 11 | when: is_etcd_master and etcd_cluster_setup 12 | tags: 13 | - facts 14 | environment: 15 | ETCDCTL_API: 3 16 | ETCDCTL_CERT: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}.pem" 17 | ETCDCTL_KEY: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}-key.pem" 18 | ETCDCTL_CACERT: "{{ etcd_cert_dir }}/ca.pem" 19 | ETCDCTL_ENDPOINTS: "{{ etcd_access_addresses }}" 20 | 21 | - fail: 22 | msg: 23 | - "Please check the health of the etcd cluster" 24 | - "{{ bin_dir }}/etcdctl endpoint --cluster status && {{ bin_dir }}/etcdctl endpoint --cluster health" 25 | when: 26 | - etcd_cluster_is_healthy.rc != 0 27 | 28 | - name: Refresh Time Fact 29 | setup: filter=ansible_date_time 30 | 31 | - name: Set Backup Directory 32 | set_fact: 33 | etcd_backup_directory: "{{ ansible_date_time.date }}_{{ ansible_date_time.time | regex_replace(':') }}" 34 |
- name: Create Backup Directory 36 | file: 37 | path: "{{ etcd_backup_prefix }}/etcd/{{ etcd_backup_directory }}" 38 | state: directory 39 | owner: root 40 | group: root 41 | mode: 0600 42 | 43 | - name: Stat etcd v2 data directory 44 | stat: 45 | path: "{{ etcd_data_dir }}/member" 46 | get_attributes: no 47 | get_checksum: no 48 | get_mime: no 49 | register: etcd_data_dir_member 50 | 51 | - name: Backup etcd v2 data 52 | when: etcd_data_dir_member.stat.exists 53 | command: >- 54 | {{ bin_dir }}/etcdctl backup 55 | --data-dir {{ etcd_data_dir }} 56 | --backup-dir {{ etcd_backup_prefix }}/etcd/{{ etcd_backup_directory }} 57 | environment: 58 | ETCDCTL_API: 2 59 | retries: 3 60 | register: backup_v2_command 61 | until: backup_v2_command.rc == 0 62 | delay: "{{ retry_stagger | random + 3 }}" 63 | 64 | - name: Backup etcd v3 data 65 | command: >- 66 | {{ bin_dir }}/etcdctl 67 | snapshot save {{ etcd_backup_prefix }}/etcd/{{ etcd_backup_directory }}/snapshot.db 68 | environment: 69 | ETCDCTL_API: 3 70 | ETCDCTL_ENDPOINTS: "{{ etcd_access_addresses.split(',') | first }}" 71 | ETCDCTL_CERT: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}.pem" 72 | ETCDCTL_KEY: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}-key.pem" 73 | ETCDCTL_CACERT: "{{ etcd_cert_dir }}/ca.pem" 74 | retries: 3 75 | register: etcd_backup_v3_command 76 | until: etcd_backup_v3_command.rc == 0 77 | delay: "{{ retry_stagger | random + 3 }}" 78 | 79 | - name: Create Backup Temporary Directory 80 | file: 81 | path: "/tmp/etcd-backup/" 82 | state: directory 83 | owner: root 84 | group: root 85 | mode: 0777 86 | 87 | - name: "Create archive /tmp/etcd-backup/{{ etcd_backup_directory }}.tgz" 88 | community.general.archive: 89 | path: "{{ etcd_backup_prefix }}/etcd/{{ etcd_backup_directory }}" 90 | dest: "/tmp/etcd-backup/{{ etcd_backup_directory }}.tgz" 91 | 92 | - name: Fetch backup data to the local host 93 | ansible.builtin.fetch: 94 | src: "/tmp/etcd-backup/{{ etcd_backup_directory }}.tgz" 95 | dest: "{{ kuboardspray_cluster_dir }}/backup/{{
inventory_hostname }}/{{ etcd_member_name }}/" 96 | flat: yes 97 | 98 | - name: Remove remote temporary files 1 99 | file: 100 | path: "{{ etcd_backup_prefix }}/etcd" 101 | state: absent 102 | 103 | - name: Remove remote temporary files 2 104 | file: 105 | path: "/tmp/etcd-backup" 106 | state: absent -------------------------------------------------------------------------------- /roles/bootstrap-os/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ## CentOS/RHEL/AlmaLinux specific variables 3 | # Use the fastestmirror yum plugin 4 | centos_fastestmirror_enabled: false 5 | 6 | ## Flatcar Container Linux specific variables 7 | # Disable locksmithd or leave it in its current state 8 | coreos_locksmithd_disable: false 9 | 10 | ## Oracle Linux specific variables 11 | # Install public repo on Oracle Linux 12 | use_oracle_public_repo: true 13 | 14 | fedora_coreos_packages: 15 | - python 16 | - python3-libselinux 17 | - ethtool # required in kubeadm preflight phase for verifying the environment 18 | - ipset # required in kubeadm preflight phase for verifying the environment 19 | - conntrack-tools # required by kube-proxy 20 | 21 | ## General 22 | # Set the hostname to inventory_hostname 23 | override_system_hostname: true 24 | 25 | is_fedora_coreos: false 26 | 27 | skip_http_proxy_on_os_packages: false 28 | -------------------------------------------------------------------------------- /roles/bootstrap-os/files/bootstrap.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | BINDIR="/opt/bin" 5 | PYPY_VERSION=7.3.2 6 | PYPI_URL="https://downloads.python.org/pypy/pypy3.6-v${PYPY_VERSION}-linux64.tar.bz2" 7 | PYPI_HASH=d7a91f179076aaa28115ffc0a81e46c6a787785b2bc995c926fe3b02f0e9ad83 8 | 9 | mkdir -p $BINDIR 10 | 11 | cd $BINDIR 12 | 13 | if [[ -e $BINDIR/.bootstrapped ]]; then 14 | exit 0 15 | fi 16 | 17 | TAR_FILE=pyp.tar.bz2 18 | wget -O "${TAR_FILE}" "${PYPI_URL}" 19 | echo
"${PYPI_HASH} ${TAR_FILE}" | sha256sum -c - 20 | tar -xjf "${TAR_FILE}" && rm "${TAR_FILE}" 21 | mv -n "pypy3.6-v${PYPY_VERSION}-linux64" pypy3 22 | 23 | ln -s ./pypy3/bin/pypy3 python 24 | $BINDIR/python --version 25 | 26 | touch $BINDIR/.bootstrapped 27 | -------------------------------------------------------------------------------- /roles/bootstrap-os/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: RHEL auto-attach subscription 3 | command: /sbin/subscription-manager attach --auto 4 | become: true 5 | -------------------------------------------------------------------------------- /roles/bootstrap-os/molecule/default/converge.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Converge 3 | hosts: all 4 | gather_facts: no 5 | roles: 6 | - role: bootstrap-os 7 | -------------------------------------------------------------------------------- /roles/bootstrap-os/molecule/default/molecule.yml: -------------------------------------------------------------------------------- 1 | --- 2 | dependency: 3 | name: galaxy 4 | lint: | 5 | set -e 6 | yamllint -c ../../.yamllint . 
7 | driver: 8 | name: vagrant 9 | provider: 10 | name: libvirt 11 | platforms: 12 | - name: ubuntu16 13 | box: generic/ubuntu1604 14 | cpus: 1 15 | memory: 512 16 | - name: ubuntu18 17 | box: generic/ubuntu1804 18 | cpus: 1 19 | memory: 512 20 | - name: ubuntu20 21 | box: generic/ubuntu2004 22 | cpus: 1 23 | memory: 512 24 | - name: centos7 25 | box: centos/7 26 | cpus: 1 27 | memory: 512 28 | - name: almalinux8 29 | box: almalinux/8 30 | cpus: 1 31 | memory: 512 32 | - name: debian9 33 | box: generic/debian9 34 | cpus: 1 35 | memory: 512 36 | - name: debian10 37 | box: generic/debian10 38 | cpus: 1 39 | memory: 512 40 | provisioner: 41 | name: ansible 42 | config_options: 43 | defaults: 44 | callback_whitelist: profile_tasks 45 | timeout: 120 46 | lint: 47 | name: ansible-lint 48 | inventory: 49 | group_vars: 50 | all: 51 | user: 52 | name: foo 53 | comment: My test comment 54 | verifier: 55 | name: testinfra 56 | lint: 57 | name: flake8 58 | -------------------------------------------------------------------------------- /roles/bootstrap-os/molecule/default/tests/test_default.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | import testinfra.utils.ansible_runner 4 | 5 | testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner( 6 | os.environ['MOLECULE_INVENTORY_FILE'] 7 | ).get_hosts('all') 8 | 9 | 10 | def test_python(host): 11 | assert host.exists('python3') or host.exists('python') 12 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-almalinux.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - assert: 4 | msg: 5 | - "When using the AlmaLinux operating system, kuboardspray only supports containerd as the container engine" 6 | that: 7 | - container_manager == 'containerd' 8 | 9 | - include_tasks: centos-alike.yml --------------------------------------------------------------------------------
/roles/bootstrap-os/tasks/bootstrap-anolis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-centos.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-kylin linux advanced server.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - assert: 4 | msg: 5 | - "When using the Kylin (NeoKylin) V10 operating system, kuboardspray only supports containerd as the container engine" 6 | that: 7 | - container_manager == 'containerd' 8 | 9 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-openeuler.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - assert: 4 | msg: 5 | - "When using the openEuler operating system, kuboardspray only supports containerd as the container engine" 6 | that: 7 | - container_manager == 'containerd' 8 | 9 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-opensuse leap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # OpenSUSE ships with Python installed 3 | 4 | - name: Check that /etc/sysconfig/proxy file exists 5 | stat: 6 | path: /etc/sysconfig/proxy 7 | get_attributes: no 8 | get_checksum: no 9 | get_mime: no 10 | register: stat_result 11 | 12 | - name: Create the /etc/sysconfig/proxy empty file 13 | file: # noqa risky-file-permissions 14 | path: /etc/sysconfig/proxy 15 | state: touch 16 | when: 17 | - http_proxy is defined or https_proxy is defined 18 | - not stat_result.stat.exists 19 | 20 | - name: Set the
http_proxy in /etc/sysconfig/proxy 21 | lineinfile: 22 | path: /etc/sysconfig/proxy 23 | regexp: '^HTTP_PROXY=' 24 | line: 'HTTP_PROXY="{{ http_proxy }}"' 25 | become: true 26 | when: 27 | - http_proxy is defined 28 | 29 | - name: Set the https_proxy in /etc/sysconfig/proxy 30 | lineinfile: 31 | path: /etc/sysconfig/proxy 32 | regexp: '^HTTPS_PROXY=' 33 | line: 'HTTPS_PROXY="{{ https_proxy }}"' 34 | become: true 35 | when: 36 | - https_proxy is defined 37 | 38 | - name: Enable proxies 39 | lineinfile: 40 | path: /etc/sysconfig/proxy 41 | regexp: '^PROXY_ENABLED=' 42 | line: 'PROXY_ENABLED="yes"' 43 | become: true 44 | when: 45 | - http_proxy is defined or https_proxy is defined 46 | 47 | # Required for zypper module 48 | - name: Install python-xml 49 | shell: zypper refresh && zypper --non-interactive install python-xml 50 | changed_when: false 51 | become: true 52 | tags: 53 | - facts 54 | 55 | # Without this package, the get_url module fails when trying to handle https 56 | - name: Install python-cryptography 57 | zypper: 58 | name: python-cryptography 59 | state: present 60 | update_cache: true 61 | become: true 62 | 63 | # Nerdctl needs some basic packages to get an environment up 64 | - name: Install basic dependencies 65 | zypper: 66 | name: 67 | - iptables 68 | - apparmor-parser 69 | state: present 70 | become: true 71 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-oraclelinux.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version 3 | setup: 4 | gather_subset: '!all' 5 | filter: ansible_distribution_*version 6 | 7 | - name: Check ansible_distribution_major_version 8 | assert: 9 | msg: "Only {{ ansible_distribution }} versions 8.x / 9.x are currently supported" 10 | that: 11 | - ansible_distribution_major_version == "8" or ansible_distribution_major_version == "9" 12 | 13 | 14 |
- name: Add proxy to yum.conf or dnf.conf if http_proxy is defined 15 | ini_file: 16 | path: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('/etc/yum.conf','/etc/dnf/dnf.conf') }}" 17 | section: main 18 | option: proxy 19 | value: "{{ http_proxy | default(omit) }}" 20 | state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}" 21 | no_extra_spaces: true 22 | mode: 0644 23 | become: true 24 | when: not skip_http_proxy_on_os_packages 25 | 26 | - name: Install EPEL for Oracle Linux repo package 27 | package: 28 | name: "oracle-epel-release-el{{ ansible_distribution_major_version }}" 29 | state: present 30 | when: 31 | - use_oracle_public_repo|default(true) 32 | - '''ID="ol"'' in os_release.stdout_lines' 33 | - (ansible_distribution_version | float) >= 7.6 34 | 35 | - name: Check presence of fastestmirror.conf 36 | stat: 37 | path: /etc/yum/pluginconf.d/fastestmirror.conf 38 | get_attributes: no 39 | get_checksum: no 40 | get_mime: no 41 | register: fastestmirror 42 | 43 | # the fastestmirror plugin can actually slow down Ansible deployments 44 | - name: Disable fastestmirror plugin if requested 45 | lineinfile: 46 | dest: /etc/yum/pluginconf.d/fastestmirror.conf 47 | regexp: "^enabled=.*" 48 | line: "enabled=0" 49 | state: present 50 | become: true 51 | when: 52 | - fastestmirror.stat.exists 53 | - not centos_fastestmirror_enabled 54 | 55 | # libselinux-python is required on SELinux enabled hosts 56 | # See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements 57 | - name: Install libselinux python package 58 | package: 59 | name: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('libselinux-python','python3-libselinux') }}" 60 | state: present 61 | become: true 62 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-redhat.yml: 
-------------------------------------------------------------------------------- 1 | --- 2 | - name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version 3 | setup: 4 | gather_subset: '!all' 5 | filter: ansible_distribution_*version 6 | 7 | - name: Add proxy to yum.conf or dnf.conf if http_proxy is defined 8 | ini_file: 9 | path: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('/etc/yum.conf','/etc/dnf/dnf.conf') }}" 10 | section: main 11 | option: proxy 12 | value: "{{ http_proxy | default(omit) }}" 13 | state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}" 14 | no_extra_spaces: true 15 | mode: 0644 16 | become: true 17 | when: not skip_http_proxy_on_os_packages 18 | 19 | - name: Add proxy to RHEL subscription-manager if http_proxy is defined 20 | command: /sbin/subscription-manager config --server.proxy_hostname={{ http_proxy | regex_replace(':\\d+$') }} --server.proxy_port={{ http_proxy | regex_replace('^.*:') }} 21 | become: true 22 | when: 23 | - not skip_http_proxy_on_os_packages 24 | - http_proxy is defined 25 | 26 | - name: Check RHEL subscription-manager status 27 | command: /sbin/subscription-manager status 28 | register: rh_subscription_status 29 | changed_when: "rh_subscription_status.rc != 0" 30 | ignore_errors: true # noqa ignore-errors 31 | become: true 32 | 33 | - name: RHEL subscription Organization ID/Activation Key registration 34 | redhat_subscription: 35 | state: present 36 | org_id: "{{ rh_subscription_org_id }}" 37 | activationkey: "{{ rh_subscription_activation_key }}" 38 | auto_attach: true 39 | force_register: true 40 | syspurpose: 41 | usage: "{{ rh_subscription_usage }}" 42 | role: "{{ rh_subscription_role }}" 43 | service_level_agreement: "{{ rh_subscription_sla }}" 44 | sync: true 45 | notify: RHEL auto-attach subscription 46 | ignore_errors: true # noqa ignore-errors 47 | become: true 48 | when: 49 | - rh_subscription_org_id is defined 50 | -
rh_subscription_status.changed 51 | 52 | # this task has no_log set to prevent logging security sensitive information such as subscription passwords 53 | - name: RHEL subscription Username/Password registration 54 | redhat_subscription: 55 | state: present 56 | username: "{{ rh_subscription_username }}" 57 | password: "{{ rh_subscription_password }}" 58 | auto_attach: true 59 | force_register: true 60 | syspurpose: 61 | usage: "{{ rh_subscription_usage }}" 62 | role: "{{ rh_subscription_role }}" 63 | service_level_agreement: "{{ rh_subscription_sla }}" 64 | sync: true 65 | notify: RHEL auto-attach subscription 66 | ignore_errors: true # noqa ignore-errors 67 | become: true 68 | no_log: true 69 | when: 70 | - rh_subscription_username is defined 71 | - rh_subscription_status.changed 72 | 73 | # container-selinux is in extras repo 74 | - name: Enable RHEL 7 repos 75 | rhsm_repository: 76 | name: 77 | - "rhel-7-server-rpms" 78 | - "rhel-7-server-extras-rpms" 79 | state: enabled 80 | when: 81 | - rhel_enable_repos | default(True) 82 | - ansible_distribution_major_version == "7" 83 | 84 | # container-selinux is in appstream repo 85 | - name: Enable RHEL 8 repos 86 | rhsm_repository: 87 | name: 88 | - "rhel-8-for-*-baseos-rpms" 89 | - "rhel-8-for-*-appstream-rpms" 90 | state: enabled 91 | when: 92 | - rhel_enable_repos | default(True) 93 | - ansible_distribution_major_version == "8" 94 | 95 | - name: Check presence of fastestmirror.conf 96 | stat: 97 | path: /etc/yum/pluginconf.d/fastestmirror.conf 98 | get_attributes: no 99 | get_checksum: no 100 | get_mime: no 101 | register: fastestmirror 102 | 103 | # the fastestmirror plugin can actually slow down Ansible deployments 104 | - name: Disable fastestmirror plugin if requested 105 | lineinfile: 106 | dest: /etc/yum/pluginconf.d/fastestmirror.conf 107 | regexp: "^enabled=.*" 108 | line: "enabled=0" 109 | state: present 110 | become: true 111 | when: 112 | - fastestmirror.stat.exists 113 | - not centos_fastestmirror_enabled 
114 | 115 | # libselinux-python is required on SELinux enabled hosts 116 | # See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements 117 | - name: Install libselinux python package 118 | package: 119 | name: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('libselinux-python','python3-libselinux') }}" 120 | state: present 121 | become: true 122 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-rocky.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-ubuntu.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - include_tasks: debian.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/bootstrap-uniontech os server 20.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - assert: 4 | msg: 5 | - "When using the uniontech os server 20 operating system, KuboardSpray only supports containerd as the container engine" 6 | that: 7 | - container_manager == 'containerd' 8 | 9 | - name: Install conntrack 10 | package: 11 | name: conntrack 12 | state: present 13 | become: true 14 | 15 | - include_tasks: centos-alike.yml -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/centos-alike.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Gather host facts to get ansible_distribution_version ansible_distribution_major_version 3 | setup: 4 | gather_subset: '!all' 5 | filter: ansible_distribution_*version 6 | 7 | - name: Add proxy to yum.conf or dnf.conf if http_proxy is defined 8 | ini_file: 9 | path: "{{ ( 
(ansible_distribution_major_version | int) < 8) | ternary('/etc/yum.conf','/etc/dnf/dnf.conf') }}" 10 | section: main 11 | option: proxy 12 | value: "{{ http_proxy | default(omit) }}" 13 | state: "{{ http_proxy | default(False) | ternary('present', 'absent') }}" 14 | no_extra_spaces: true 15 | mode: 0644 16 | become: true 17 | when: not skip_http_proxy_on_os_packages 18 | 19 | - name: Check presence of fastestmirror.conf 20 | stat: 21 | path: /etc/yum/pluginconf.d/fastestmirror.conf 22 | get_attributes: no 23 | get_checksum: no 24 | get_mime: no 25 | register: fastestmirror 26 | 27 | # the fastestmirror plugin can actually slow down Ansible deployments 28 | - name: Disable fastestmirror plugin if requested 29 | lineinfile: 30 | dest: /etc/yum/pluginconf.d/fastestmirror.conf 31 | regexp: "^enabled=.*" 32 | line: "enabled=0" 33 | state: present 34 | become: true 35 | when: 36 | - fastestmirror.stat.exists 37 | - not centos_fastestmirror_enabled 38 | 39 | # libselinux-python is required on SELinux enabled hosts 40 | # See https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#managed-node-requirements 41 | - name: Install libselinux python package 42 | package: 43 | name: "{{ ( (ansible_distribution_major_version | int) < 8) | ternary('libselinux-python','python3-libselinux') }}" 44 | state: present 45 | become: true 46 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/debian.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Some Debian based distros ship without Python installed 3 | 4 | - name: Check if bootstrap is needed 5 | raw: which python3 6 | register: need_bootstrap 7 | failed_when: false 8 | changed_when: false 9 | # This command should always run, even in check mode 10 | check_mode: false 11 | tags: 12 | - facts 13 | 14 | - name: Check http::proxy in apt configuration files 15 | raw: apt-config dump | grep -qsi 
'Acquire::http::proxy' 16 | register: need_http_proxy 17 | failed_when: false 18 | changed_when: false 19 | # This command should always run, even in check mode 20 | check_mode: false 21 | 22 | - name: Add http_proxy to /etc/apt/apt.conf if http_proxy is defined 23 | raw: echo 'Acquire::http::proxy "{{ http_proxy }}";' >> /etc/apt/apt.conf 24 | become: true 25 | when: 26 | - http_proxy is defined 27 | - need_http_proxy.rc != 0 28 | - not skip_http_proxy_on_os_packages 29 | 30 | - name: Check https::proxy in apt configuration files 31 | raw: apt-config dump | grep -qsi 'Acquire::https::proxy' 32 | register: need_https_proxy 33 | failed_when: false 34 | changed_when: false 35 | # This command should always run, even in check mode 36 | check_mode: false 37 | 38 | - name: Add https_proxy to /etc/apt/apt.conf if https_proxy is defined 39 | raw: echo 'Acquire::https::proxy "{{ https_proxy }}";' >> /etc/apt/apt.conf 40 | become: true 41 | when: 42 | - https_proxy is defined 43 | - need_https_proxy.rc != 0 44 | - not skip_http_proxy_on_os_packages 45 | 46 | - name: Install python3 47 | raw: 48 | apt-get update && \ 49 | DEBIAN_FRONTEND=noninteractive apt-get install -y python3-minimal 50 | become: true 51 | when: 52 | - need_bootstrap.rc != 0 53 | 54 | - name: Update Apt cache 55 | raw: apt-get update --allow-releaseinfo-change 56 | become: true 57 | when: 58 | - '''ID=debian'' in os_release.stdout_lines' 59 | - '''VERSION_ID="10"'' in os_release.stdout_lines or ''VERSION_ID="11"'' in os_release.stdout_lines' 60 | register: bootstrap_update_apt_result 61 | changed_when: 62 | - '"changed its" in bootstrap_update_apt_result.stdout' 63 | - '"value from" in bootstrap_update_apt_result.stdout' 64 | ignore_errors: true 65 | 66 | - name: Set the ansible_python_interpreter fact 67 | set_fact: 68 | ansible_python_interpreter: "/usr/bin/python3" 69 | 70 | # Workaround for https://github.com/ansible/ansible/issues/25543 71 | - name: Install dbus for the hostname module 72 | package: 
73 | name: dbus 74 | state: present 75 | use: apt 76 | become: true 77 | -------------------------------------------------------------------------------- /roles/bootstrap-os/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Fetch /etc/os-release 4 | raw: cat /etc/os-release 5 | register: os_release 6 | changed_when: false 7 | # This command should always run, even in check mode 8 | check_mode: false 9 | 10 | - name: "bootstrap-{{ ansible_distribution|lower }}" 11 | include_tasks: "{{ bootstrap_os_task_file_name }}" 12 | vars: 13 | bootstrap_os_task_file_name: "{{ item | regex_search('bootstrap-[a-zA-Z0-9-._ ]*\\.yml') }}" 14 | with_first_found: 15 | - files: 16 | - "bootstrap-{{ ansible_distribution|lower }}.yml" 17 | skip: false 18 | 19 | - name: Create remote_tmp because it is used by another module 20 | file: 21 | path: "{{ ansible_remote_tmp | default('~/.ansible/tmp') }}" 22 | state: directory 23 | mode: 0700 24 | 25 | # Workaround for https://github.com/ansible/ansible/issues/42726 26 | # (1/3) 27 | - name: Gather host facts to get ansible_os_family 28 | setup: 29 | gather_subset: '!all' 30 | filter: ansible_* 31 | 32 | - name: Assign inventory name to unconfigured hostnames (hostname) 33 | hostname: 34 | name: "{{ inventory_hostname }}" 35 | when: 36 | - override_system_hostname 37 | - ansible_distribution not in ["Rocky", "openEuler", "Anolis", "OracleLinux"] 38 | 39 | # (2/3) 40 | - name: Assign inventory name to unconfigured hostnames (hostnamectl) 41 | command: "hostnamectl set-hostname {{ inventory_hostname }}" 42 | register: hostname_changed 43 | become: true 44 | changed_when: false 45 | when: 46 | - override_system_hostname 47 | - ansible_distribution in ["Rocky", "openEuler", "Anolis", "OracleLinux"] 48 | 49 | # (3/3) 50 | - name: Update hostname fact after hostname changed 51 | setup: 52 | gather_subset: '!all' 53 | filter: ansible_hostname 54 | when: 55 | - override_system_hostname 
56 | - ansible_distribution in ["Rocky", "openEuler", "Anolis", "OracleLinux"] 57 | 58 | - name: "Install ceph-common package" 59 | package: 60 | name: 61 | - ceph-common 62 | state: present 63 | when: rbd_provisioner_enabled|default(false) 64 | 65 | - name: Ensure bash_completion.d folder exists 66 | file: 67 | name: /etc/bash_completion.d/ 68 | state: directory 69 | owner: root 70 | group: root 71 | mode: 0755 72 | -------------------------------------------------------------------------------- /roles/config-apt-sources/defaults/main.yml: -------------------------------------------------------------------------------- 1 | kuboardspray_repo_ubuntu: AS_IS -------------------------------------------------------------------------------- /roles/config-apt-sources/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: configuring ubuntu /etc/apt/sources.list 3 | template: 4 | src: etc/apt/sources.list.j2 5 | dest: /etc/apt/sources.list 6 | owner: root 7 | group: root 8 | mode: 0644 9 | backup: yes 10 | become: true 11 | register: kuboardspray_apt_source_configured 12 | when: ansible_distribution == "Ubuntu" and kuboardspray_repo_ubuntu != "AS_IS" 13 | 14 | - name: updating apt-cache 15 | apt: 16 | update_cache: yes 17 | become: true 18 | when: kuboardspray_apt_source_configured['changed'] 19 | -------------------------------------------------------------------------------- /roles/config-apt-sources/templates/etc/apt/sources.list.j2: -------------------------------------------------------------------------------- 1 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }} main restricted 2 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-updates main restricted 3 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }} universe 4 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-updates universe 5 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }} multiverse 6 | deb {{ 
ubuntu_repo }} {{ ansible_distribution_release }}-updates multiverse 7 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-backports main restricted universe multiverse 8 | 9 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-security main restricted 10 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-security universe 11 | deb {{ ubuntu_repo }} {{ ansible_distribution_release }}-security multiverse 12 | -------------------------------------------------------------------------------- /roles/config-yum-repo/defaults/main.yml: -------------------------------------------------------------------------------- 1 | kuboardspray_repo_centos: AS_IS -------------------------------------------------------------------------------- /roles/config-yum-repo/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: set kuboardspray_repo_field_name 4 | set_fact: 5 | kuboardspray_repo_field_name: "kuboardspray_repo_{{ ansible_distribution|lower }}" 6 | 7 | - name: check ansible_distribution 8 | assert: 9 | msg: "Please configure correct yum / dnf repositories in the {{ ansible_distribution }} operating system in advance, and choose 'use the repositories pre-configured in the operating system'" 10 | that: 11 | - vars[kuboardspray_repo_field_name] is not defined or vars[kuboardspray_repo_field_name] == 'AS_IS' 12 | when: 13 | - ansible_distribution in ['OracleLinux', 'Kylin Linux Advanced Server'] 14 | 15 | - include_tasks: "yum_repo.yml" 16 | when: 17 | - vars[kuboardspray_repo_field_name] is defined 18 | - vars[kuboardspray_repo_field_name] != 'AS_IS' 19 | - ansible_distribution in ["CentOS", "Anolis", 'Rocky'] 20 | 21 | - include_tasks: "openeuler.yml" 22 | when: 23 | - vars[kuboardspray_repo_field_name] is defined 24 | - vars[kuboardspray_repo_field_name] != 'AS_IS' 25 | - ansible_distribution in ['openEuler'] -------------------------------------------------------------------------------- /roles/config-yum-repo/tasks/openeuler.yml: 
-------------------------------------------------------------------------------- 1 | - name: Get openEuler repo file 2 | get_url: 3 | url: "{{ openeuler_repo }}" 4 | dest: /etc/yum.repos.d/openEuler.repo 5 | mode: '0644' 6 | register: kuboardspray_yum_repo_configured 7 | 8 | - name: "dnf-clean-metadata" 9 | command: "dnf clean metadata" 10 | args: 11 | warn: no 12 | when: 13 | - kuboardspray_yum_repo_configured['changed'] 14 | 15 | - name: "dnf-makecache" 16 | command: "dnf makecache" 17 | args: 18 | warn: no 19 | when: 20 | - kuboardspray_yum_repo_configured['changed'] -------------------------------------------------------------------------------- /roles/config-yum-repo/tasks/yum_repo.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: "configuring {{ansible_distribution|lower}} repos" 4 | template: 5 | src: "{{ansible_distribution|lower}}_{{ansible_distribution_major_version}}/{{ template_name }}" 6 | dest: "/etc/yum.repos.d/{{ template_name }}" 7 | owner: root 8 | group: root 9 | mode: 0644 10 | backup: yes 11 | become: true 12 | vars: 13 | template_name: "{{item | regex_search('[a-zA-Z0-9][a-zA-Z0-9-._]*\\.repo')}}" 14 | register: kuboardspray_yum_repo_configured 15 | with_fileglob: 16 | - "roles/config-yum-repo/templates/{{ansible_distribution|lower}}_{{ansible_distribution_major_version}}/*.repo" 17 | 18 | 19 | - name: "{{ansible_pkg_mgr}}-clean-metadata" 20 | command: "{{ansible_pkg_mgr}} clean metadata" 21 | args: 22 | warn: no 23 | when: 24 | - kuboardspray_yum_repo_configured['changed'] 25 | 26 | - name: "{{ansible_pkg_mgr}}-makecache" 27 | command: "{{ansible_pkg_mgr}} makecache" 28 | args: 29 | warn: no 30 | when: 31 | - kuboardspray_yum_repo_configured['changed'] -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-AppStream.repo: -------------------------------------------------------------------------------- 1 | 
[AppStream] 2 | name=AnolisOS-$releasever - AppStream 3 | baseurl={{ anolis_repo }}/$releasever/AppStream/$basearch/os 4 | enabled=1 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-BaseOS.repo: -------------------------------------------------------------------------------- 1 | [BaseOS] 2 | name=AnolisOS-$releasever - BaseOS 3 | baseurl={{ anolis_repo }}/$releasever/BaseOS/$basearch/os 4 | enabled=1 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-DDE.repo: -------------------------------------------------------------------------------- 1 | [DDE] 2 | name=AnolisOS-$releasever - DDE 3 | baseurl={{ anolis_repo }}/$releasever/DDE/$basearch/os 4 | enabled=0 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-Extras.repo: -------------------------------------------------------------------------------- 1 | [Extras] 2 | name=AnolisOS-$releasever - Extras 3 | baseurl={{ anolis_repo }}/$releasever/Extras/$basearch/os 4 | enabled=1 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-HighAvailability.repo: -------------------------------------------------------------------------------- 1 | [HighAvailability] 2 | name=AnolisOS-$releasever - HighAvailability 3 | baseurl={{ anolis_repo }}/$releasever/HighAvailability/$basearch/os 4 | enabled=0 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- 
/roles/config-yum-repo/templates/anolis_8/AnolisOS-Plus.repo: -------------------------------------------------------------------------------- 1 | [Plus] 2 | name=AnolisOS-$releasever - Plus 3 | baseurl={{ anolis_repo }}/$releasever/Plus/$basearch/os 4 | enabled=0 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/anolis_8/AnolisOS-PowerTools.repo: -------------------------------------------------------------------------------- 1 | [PowerTools] 2 | name=AnolisOS-$releasever - PowerTools 3 | baseurl={{ anolis_repo }}/$releasever/PowerTools/$basearch/os 4 | enabled=1 5 | gpgkey={{ anolis_repo }}/RPM-GPG-KEY-ANOLIS 6 | gpgcheck=1 7 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_7/CentOS-Base.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Base.repo 2 | # 3 | # The mirror system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick mirrors that are updated to and 5 | # geographically close to the client. You should use this for CentOS updates 6 | # unless you are manually picking other mirrors. 7 | # 8 | # If the #mirrorlist= does not work for you, as a fall back you can try the 9 | # remarked out baseurl= line instead. 
10 | # 11 | # 12 | 13 | [base] 14 | name=CentOS-$releasever - Base 15 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra 16 | baseurl={{ centos_repo }}/$releasever/os/$basearch/ 17 | gpgcheck=1 18 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 19 | 20 | #released updates 21 | [updates] 22 | name=CentOS-$releasever - Updates 23 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra 24 | baseurl={{ centos_repo }}/$releasever/updates/$basearch/ 25 | gpgcheck=1 26 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 27 | 28 | #additional packages that may be useful 29 | [extras] 30 | name=CentOS-$releasever - Extras 31 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra 32 | baseurl={{ centos_repo }}/$releasever/extras/$basearch/ 33 | gpgcheck=1 34 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 35 | 36 | #additional packages that extend functionality of existing packages 37 | [centosplus] 38 | name=CentOS-$releasever - Plus 39 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus&infra=$infra 40 | baseurl={{ centos_repo }}/$releasever/centosplus/$basearch/ 41 | gpgcheck=1 42 | enabled=0 43 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 44 | 45 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_7/CentOS-CR.repo: -------------------------------------------------------------------------------- 1 | # CentOS-CR.repo 2 | # 3 | # The Continuous Release ( CR ) repository contains rpms that are due in the next 4 | # release for a specific CentOS Version ( eg. next release in CentOS-7 ); these rpms 5 | # are far less tested, with no integration checking or update path testing having 6 | # taken place. They are still built from the upstream sources, but might not map 7 | # to an exact upstream distro release. 
8 | # 9 | # These packages are made available soon after they are built, for people willing 10 | # to test their environments, provide feedback on content for the next release, and 11 | # for people looking for early-access to next release content. 12 | # 13 | # The CR repo is shipped in a disabled state by default; it's important that users 14 | # understand the implications of turning this on. 15 | # 16 | # NOTE: We do not use a mirrorlist for the CR repos, to ensure content is available 17 | # to everyone as soon as possible, and not need to wait for the external 18 | # mirror network to seed first. However, many local mirrors will carry CR repos 19 | # and if desired you can use one of these local mirrors by editing the baseurl 20 | # line in the repo config below. 21 | # 22 | 23 | [cr] 24 | name=CentOS-$releasever - cr 25 | baseurl={{ centos_repo }}/$releasever/cr/$basearch/ 26 | gpgcheck=1 27 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 28 | enabled=0 29 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_7/CentOS-fasttrack.repo: -------------------------------------------------------------------------------- 1 | #CentOS-fasttrack.repo 2 | 3 | [fasttrack] 4 | name=CentOS-7 - fasttrack 5 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=fasttrack&infra=$infra 6 | baseurl={{ centos_repo }}/$releasever/fasttrack/$basearch/ 7 | gpgcheck=1 8 | enabled=0 9 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 10 | 11 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_7/CentOS-x86_64-kernel.repo: -------------------------------------------------------------------------------- 1 | [centos-kernel] 2 | name=CentOS LTS Kernels for $basearch 3 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=kernel&infra=$infra 4 | 
baseurl=http://mirror.centos.org/altarch/7/kernel/$basearch/ 5 | enabled=0 6 | gpgcheck=1 7 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 8 | 9 | [centos-kernel-experimental] 10 | name=CentOS Experimental Kernels for $basearch 11 | #mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=experimental&infra=$infra 12 | baseurl=http://mirror.centos.org/altarch/7/experimental/$basearch/ 13 | enabled=0 14 | gpgcheck=1 15 | gpgkey={{ centos_repo }}/RPM-GPG-KEY-CentOS-7 16 | 17 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-AppStream.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-AppStream.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [appstream] 12 | name=CentOS Stream $releasever - AppStream 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=AppStream&infra=$infra 14 | baseurl={{centos_repo}}/$stream/AppStream/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-BaseOS.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-BaseOS.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [baseos] 12 | name=CentOS Stream $releasever - BaseOS 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra 14 | baseurl={{centos_repo}}/$stream/BaseOS/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-Extras.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-Extras.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [extras] 12 | name=CentOS Stream $releasever - Extras 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=extras&infra=$infra 14 | baseurl={{centos_repo}}/$stream/extras/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-HighAvailability.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-HighAvailability.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [ha] 12 | name=CentOS Stream $releasever - HighAvailability 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=HighAvailability&infra=$infra 14 | baseurl={{centos_repo}}/$stream/HighAvailability/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-NFV.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-NFV.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [nfv] 12 | name=CentOS Stream $releasever - NFV 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=NFV&infra=$infra 14 | baseurl={{centos_repo}}/$stream/NFV/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-PowerTools.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-PowerTools.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [powertools] 12 | name=CentOS Stream $releasever - PowerTools 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=PowerTools&infra=$infra 14 | baseurl={{centos_repo}}/$stream/PowerTools/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-RealTime.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-RealTime.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [rt] 12 | name=CentOS Stream $releasever - RealTime 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=RT&infra=$infra 14 | baseurl={{centos_repo}}/$stream/RT/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/centos_8/CentOS-Stream-ResilientStorage.repo: -------------------------------------------------------------------------------- 1 | # CentOS-Stream-ResilientStorage.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for CentOS updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [resilientstorage] 12 | name=CentOS Stream $releasever - ResilientStorage 13 | #mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=ResilientStorage&infra=$infra 14 | baseurl={{centos_repo}}/$stream/ResilientStorage/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{centos_repo}}/RPM-GPG-KEY-CentOS-Official 18 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-AppStream.repo: -------------------------------------------------------------------------------- 1 | # Rocky-AppStream.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [appstream] 12 | name=Rocky Linux $releasever - AppStream 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=AppStream-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/AppStream/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-BaseOS.repo: -------------------------------------------------------------------------------- 1 | # Rocky-BaseOS.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [baseos] 12 | name=Rocky Linux $releasever - BaseOS 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/BaseOS/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Debuginfo.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Debuginfo.repo 2 | # 3 | 4 | [baseos-debug] 5 | name=Rocky Linux $releasever - BaseOS - Source 6 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=BaseOS-$releasever-debug 7 | baseurl={{ rocky_repo }}/$releasever/BaseOS/$basearch/debug/tree/ 8 | gpgcheck=1 9 | enabled=0 10 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 11 | 12 | [appstream-debug] 13 | name=Rocky Linux $releasever - AppStream - Source 14 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=AppStream-$releasever-debug 15 | baseurl={{ rocky_repo }}/$releasever/AppStream/$basearch/debug/tree/ 16 | gpgcheck=1 17 | enabled=0 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | 20 | [ha-debug] 21 | name=Rocky Linux $releasever - High Availability - Source 22 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=HighAvailability-$releasever-debug 23 | baseurl={{ rocky_repo }}/$releasever/HighAvailability/$basearch/debug/tree/ 24 | gpgcheck=1 25 | enabled=0 26 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 27 | 28 | [powertools-debug] 29 | name=Rocky Linux $releasever - PowerTools - Source 30 | 
#mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=PowerTools-$releasever-debug 31 | baseurl={{ rocky_repo }}/$releasever/PowerTools/$basearch/debug/tree/ 32 | gpgcheck=1 33 | enabled=0 34 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 35 | 36 | [resilient-storage-debug] 37 | name=Rocky Linux $releasever - Resilient Storage - Source 38 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=ResilientStorage-$releasever-debug 39 | baseurl={{ rocky_repo }}/$releasever/ResilientStorage/$basearch/debug/tree/ 40 | gpgcheck=1 41 | enabled=0 42 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 43 | 44 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Devel.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Devel.repo 2 | # 3 | 4 | [devel] 5 | name=Rocky Linux $releasever - Devel WARNING! FOR BUILDROOT AND KOJI USE 6 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=Devel-$releasever 7 | baseurl={{ rocky_repo }}/$releasever/Devel/$basearch/os/ 8 | gpgcheck=1 9 | enabled=0 10 | countme=1 11 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 12 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Extras.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Extras.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 
10 | 11 | [extras] 12 | name=Rocky Linux $releasever - Extras 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=extras-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/extras/$basearch/os/ 15 | gpgcheck=1 16 | enabled=1 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-HighAvailability.repo: -------------------------------------------------------------------------------- 1 | # Rocky-HighAvailability.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [ha] 12 | name=Rocky Linux $releasever - HighAvailability 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=HighAvailability-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/HighAvailability/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Media.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Media.repo 2 | # 3 | # You can use this repo to install items directly off the installation media. 4 | # Verify your mount point matches one of the below file:// paths. 
5 | 6 | [media-baseos] 7 | name=Rocky Linux $releasever - Media - BaseOS 8 | baseurl=file:///media/Rocky/BaseOS 9 | file:///media/cdrom/BaseOS 10 | file:///media/cdrecorder/BaseOS 11 | gpgcheck=1 12 | enabled=0 13 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 14 | 15 | [media-appstream] 16 | name=Rocky Linux $releasever - Media - AppStream 17 | baseurl=file:///media/Rocky/AppStream 18 | file:///media/cdrom/AppStream 19 | file:///media/cdrecorder/AppStream 20 | gpgcheck=1 21 | enabled=0 22 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 23 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-NFV.repo: -------------------------------------------------------------------------------- 1 | # Rocky-NFV.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [nfv] 12 | name=Rocky Linux $releasever - NFV 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=NFV-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/nfv/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Plus.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Plus.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [plus] 12 | name=Rocky Linux $releasever - Plus 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=rockyplus-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/plus/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-PowerTools.repo: -------------------------------------------------------------------------------- 1 | # Rocky-PowerTools.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [powertools] 12 | name=Rocky Linux $releasever - PowerTools 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=PowerTools-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/PowerTools/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-RT.repo: -------------------------------------------------------------------------------- 1 | # Rocky-RT.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. 
You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 10 | 11 | [rt] 12 | name=Rocky Linux $releasever - Realtime 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=RT-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/RT/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-ResilientStorage.repo: -------------------------------------------------------------------------------- 1 | # Rocky-ResilientStorage.repo 2 | # 3 | # The mirrorlist system uses the connecting IP address of the client and the 4 | # update status of each mirror to pick current mirrors that are geographically 5 | # close to the client. You should use this for Rocky updates unless you are 6 | # manually picking other mirrors. 7 | # 8 | # If the mirrorlist does not work for you, you can try the commented out 9 | # baseurl line instead. 
10 | 11 | [resilient-storage] 12 | name=Rocky Linux $releasever - ResilientStorage 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=$basearch&repo=ResilientStorage-$releasever 14 | baseurl={{ rocky_repo }}/$releasever/ResilientStorage/$basearch/os/ 15 | gpgcheck=1 16 | enabled=0 17 | countme=1 18 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 19 | -------------------------------------------------------------------------------- /roles/config-yum-repo/templates/rocky_8/Rocky-Sources.repo: -------------------------------------------------------------------------------- 1 | # Rocky-Sources.repo 2 | 3 | [baseos-source] 4 | name=Rocky Linux $releasever - BaseOS - Source 5 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=BaseOS-$releasever-source 6 | baseurl={{ rocky_repo }}/$releasever/BaseOS/source/tree/ 7 | gpgcheck=1 8 | enabled=0 9 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 10 | 11 | [appstream-source] 12 | name=Rocky Linux $releasever - AppStream - Source 13 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=AppStream-$releasever-source 14 | baseurl={{ rocky_repo }}/$releasever/AppStream/source/tree/ 15 | gpgcheck=1 16 | enabled=0 17 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 18 | 19 | #[extras-source] 20 | #name=Rocky Linux $releasever - Extras - Source 21 | ##mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=extras-$releasever-source 22 | #baseurl={{ rocky_repo }}/$releasever/extras/source/tree/ 23 | #gpgcheck=1 24 | #enabled=0 25 | #gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 26 | 27 | #[plus-source] 28 | #name=Rocky Linux $releasever - Plus - Source 29 | ##mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=plus-$releasever-source 30 | #baseurl={{ rocky_repo }}/$releasever/Plus/source/tree/ 31 | #gpgcheck=1 32 | #enabled=0 33 | #gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 34 | 35 | [ha-source] 36 | name=Rocky Linux $releasever - 
High Availability - Source 37 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=HighAvailability-$releasever-source 38 | baseurl={{ rocky_repo }}/$releasever/HighAvailability/source/tree/ 39 | gpgcheck=1 40 | enabled=0 41 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 42 | 43 | [powertools-source] 44 | name=Rocky Linux $releasever - PowerTools - Source 45 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=PowerTools-$releasever-source 46 | baseurl={{ rocky_repo }}/$releasever/PowerTools/source/tree/ 47 | gpgcheck=1 48 | enabled=0 49 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 50 | 51 | [resilient-storage-source] 52 | name=Rocky Linux $releasever - Resilient Storage - Source 53 | #mirrorlist=https://mirrors.rockylinux.org/mirrorlist?arch=source&repo=ResilientStorage-$releasever-source 54 | baseurl={{ rocky_repo }}/$releasever/ResilientStorage/source/tree/ 55 | gpgcheck=1 56 | enabled=0 57 | gpgkey={{ rocky_repo }}/RPM-GPG-KEY-rockyofficial 58 | 59 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | docker_version: '20.10' 3 | docker_cli_version: "{{ docker_version }}" 4 | 5 | containerd_package: 'containerd.io' 6 | 7 | docker_package_info: 8 | pkgs: 9 | 10 | docker_repo_key_info: 11 | repo_keys: 12 | 13 | docker_repo_info: 14 | repos: 15 | 16 | docker_cgroup_driver: systemd 17 | 18 | docker_bin_dir: "/usr/bin" 19 | 20 | # flag to enable/disable docker cleanup 21 | docker_orphan_clean_up: false 22 | 23 | # old docker package names to be removed 24 | docker_remove_packages_yum: 25 | - docker 26 | - docker-common 27 | - docker-engine 28 | - docker-selinux.noarch 29 | - docker-client 30 | - docker-client-latest 31 | - docker-latest 32 | - docker-latest-logrotate 33 | - docker-logrotate 34 | - docker-engine-selinux.noarch 35 | 36 | # remove 
podman to avoid conflicts with containerd.io 37 | podman_remove_packages_yum: 38 | - podman 39 | 40 | docker_remove_packages_apt: 41 | - docker 42 | - docker-engine 43 | - docker.io 44 | 45 | # Docker-specific repos belong to the docker role, not containerd-common 46 | # Optional values for containerd apt repo 47 | containerd_package_info: 48 | pkgs: 49 | 50 | # CentOS/RedHat docker-ce repo 51 | docker_redhat_repo_base_url: 'https://download.docker.com/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable' 52 | docker_redhat_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 53 | 54 | docker_centos_repo_base_url: 'https://download.docker.com/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable' 55 | docker_centos_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 56 | 57 | docker_anolis_repo_base_url: 'https://download.docker.com/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable' 58 | docker_anolis_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 59 | 60 | docker_openeuler_repo_base_url: 'https://download.docker.com/linux/centos/8/$basearch/stable' 61 | docker_openeuler_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 62 | 63 | docker_rocky_repo_base_url: 'https://download.docker.com/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable' 64 | docker_rocky_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 65 | 66 | docker_oraclelinux_repo_base_url: 'https://download.docker.com/linux/centos/{{ ansible_distribution_major_version }}/$basearch/stable' 67 | docker_oraclelinux_repo_gpgkey: 'https://download.docker.com/linux/centos/gpg' 68 | 69 | # Ubuntu docker-ce repo 70 | docker_ubuntu_repo_base_url: "https://download.docker.com/linux/ubuntu" 71 | docker_ubuntu_repo_gpgkey: 'https://download.docker.com/linux/ubuntu/gpg' 72 | docker_ubuntu_repo_repokey: '9DC858229FC7DD38854AE2D88D81803C0EBFCD88' 
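The per-distribution `docker_*_repo_base_url` defaults above are picked at runtime by interpolating the distribution name into the variable name (`docker_{{ ansible_distribution | lower }}_repo_base_url`, resolved via `lookup('vars', ...)` in the role's tasks). A minimal Python sketch of that indirection, assuming a simplified dict in place of Ansible's variable namespace — `pick_repo_base_url` is a hypothetical helper, and only the fixed openEuler URL is copied verbatim from the defaults above:

```python
# Sketch of the distro-keyed variable indirection used to select a docker-ce
# repo base URL. DEFAULTS stands in for the Ansible variable namespace; the
# real defaults above interpolate ansible_distribution_major_version, which
# is simplified here to a literal "8".
DEFAULTS = {
    "docker_centos_repo_base_url":
        "https://download.docker.com/linux/centos/8/$basearch/stable",
    "docker_rocky_repo_base_url":
        "https://download.docker.com/linux/centos/8/$basearch/stable",
    "docker_openeuler_repo_base_url":
        "https://download.docker.com/linux/centos/8/$basearch/stable",
}

def pick_repo_base_url(distribution: str) -> str:
    """Equivalent of lookup('vars', 'docker_' + distribution|lower + '_repo_base_url')."""
    key = f"docker_{distribution.lower()}_repo_base_url"
    return DEFAULTS[key]

print(pick_repo_base_url("openEuler"))
# -> https://download.docker.com/linux/centos/8/$basearch/stable
```

The upside of this pattern is that adding support for a new distribution only requires defining one more `docker_<distro>_repo_base_url` variable; no task logic changes.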
-------------------------------------------------------------------------------- /roles/configure-docker-repo/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: gather os specific variables 4 | include_vars: "{{ item }}" 5 | with_first_found: 6 | - files: 7 | - "{{ ansible_distribution|lower }}-{{ ansible_distribution_major_version|lower|replace('/', '_') }}.yml" 8 | - "{{ ansible_distribution|lower }}.yml" 9 | paths: 10 | - ../vars 11 | skip: false 12 | tags: 13 | - facts 14 | 15 | - name: ensure podman is removed 16 | package: 17 | name: "podman,containers-common" 18 | state: absent 19 | when: 20 | - ansible_distribution in ["Rocky", "openEuler", "Anolis", "OracleLinux", "CentOS"] 21 | 22 | - name: Ensure old versions of Docker are not installed. | Debian 23 | apt: 24 | name: '{{ docker_remove_packages_apt }}' 25 | state: absent 26 | when: 27 | - ansible_os_family == 'Debian' 28 | - (docker_versioned_pkg[docker_version | string] is search('docker-ce')) 29 | 30 | 31 | - name: set apt repo 32 | block: 33 | - name: ensure docker-ce repository public key is installed 34 | apt_key: 35 | id: "{{ item }}" 36 | url: "{{ docker_repo_key_info.url }}" 37 | state: present 38 | register: keyserver_task_result 39 | until: keyserver_task_result is succeeded 40 | retries: 4 41 | delay: "{{ retry_stagger | d(3) }}" 42 | with_items: "{{ docker_repo_key_info.repo_keys }}" 43 | 44 | - name: ensure docker-ce repository is enabled 45 | apt_repository: 46 | repo: "{{ item }}" 47 | state: present 48 | with_items: "{{ docker_repo_info.repos }}" 49 | when: ansible_pkg_mgr == 'apt' 50 | 51 | - name: set yum/dnf repo 52 | block: 53 | - name: Set docker_yum_repo_field_name 54 | set_fact: 55 | docker_yum_repo_field_name: "docker_{{ ansible_distribution | lower }}_repo_base_url" 56 | docker_yum_repo_gpgkey_field_name: "docker_{{ ansible_distribution | lower }}_repo_gpgkey" 57 | 58 | - name: Set docker_yum_repo_base_url 59 | 
set_fact: 60 | docker_yum_repo_base_url: "{{ lookup('vars', docker_yum_repo_field_name) }}" 61 | docker_yum_repo_gpgkey: "{{ lookup('vars', docker_yum_repo_gpgkey_field_name) }}" 62 | 63 | - name: "Configure docker {{ ansible_pkg_mgr }} repository" 64 | template: 65 | src: "docker.repo.j2" 66 | dest: "/etc/yum.repos.d/docker-ce.repo" 67 | mode: 0644 68 | when: 69 | - ansible_pkg_mgr in ['dnf', 'yum'] 70 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/templates/docker.repo.j2: -------------------------------------------------------------------------------- 1 | [docker-ce] 2 | name=Docker-CE Repository 3 | baseurl={{ docker_yum_repo_base_url }} 4 | enabled=0 5 | gpgcheck=0 6 | sslverify=0 7 | keepcache={{ docker_rpm_keepcache | default('1') }} 8 | gpgkey={{ docker_yum_repo_gpgkey }} 9 | {% if http_proxy is defined %} 10 | proxy={{ http_proxy }} 11 | {% endif %} 12 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/almalinux.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version }}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ 
ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/anolis.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version 
}}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 
40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/centos.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version }}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ 
ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/kylin linux advanced server.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el8" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el8" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el8" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el8" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el8" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el8" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el8" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el8" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el8" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el8 22 | '20.10': docker-ce-20.10.11-3.el8 23 | 'stable': docker-ce-20.10.11-3.el8 24 | 'edge': docker-ce-20.10.11-3.el8 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 
29 | '19.03': docker-ce-cli-19.03.15-3.el8 30 | '20.10': docker-ce-cli-20.10.11-3.el8 31 | 'stable': docker-ce-cli-20.10.11-3.el8 32 | 'edge': docker-ce-cli-20.10.11-3.el8 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/openeuler.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el8" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el8" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el8" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el8" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el8" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el8" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el8" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el8" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el8" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el8 22 | '20.10': docker-ce-20.10.11-3.el8 23 | 'stable': docker-ce-20.10.11-3.el8 24 | 'edge': docker-ce-20.10.11-3.el8 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el8 30 | '20.10': docker-ce-cli-20.10.11-3.el8 31 | 'stable': docker-ce-cli-20.10.11-3.el8 
32 | 'edge': docker-ce-cli-20.10.11-3.el8 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/opensuse leap.yml: -------------------------------------------------------------------------------- 1 | --- 2 | docker_package_info: 3 | state: latest 4 | pkgs: 5 | - docker 6 | - containerd 7 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/oraclelinux.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version }}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # 
or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/redhat.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version }}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ 
containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/rocky.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | 
containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el{{ ansible_distribution_major_version }}" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el{{ ansible_distribution_major_version }}" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el{{ ansible_distribution_major_version }}" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el{{ ansible_distribution_major_version }}" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el{{ ansible_distribution_major_version }}" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el{{ ansible_distribution_major_version }}" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el{{ ansible_distribution_major_version }}" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el{{ ansible_distribution_major_version }} 22 | '20.10': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 23 | 'stable': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 24 | 'edge': docker-ce-20.10.11-3.el{{ ansible_distribution_major_version }} 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el{{ ansible_distribution_major_version }} 30 | '20.10': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 31 | 'stable': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 32 | 'edge': docker-ce-cli-20.10.11-3.el{{ ansible_distribution_major_version }} 33 | 34 | 
docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/ubuntu.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}=1.3.7-1" 6 | '1.3.9': "{{ containerd_package }}=1.3.9-1" 7 | '1.4.3': "{{ containerd_package }}=1.4.3-2" 8 | '1.4.4': "{{ containerd_package }}=1.4.4-1" 9 | '1.4.6': "{{ containerd_package }}=1.4.6-1" 10 | '1.4.9': "{{ containerd_package }}=1.4.9-1" 11 | '1.4.12': "{{ containerd_package }}=1.4.12-1" 12 | 'stable': "{{ containerd_package }}=1.4.12-1" 13 | 'edge': "{{ containerd_package }}=1.4.12-1" 14 | 15 | # https://download.docker.com/linux/ubuntu/ 16 | docker_versioned_pkg: 17 | 'latest': docker-ce 18 | '18.09': docker-ce=5:18.09.9~3-0~ubuntu-{{ ansible_distribution_release|lower }} 19 | '19.03': docker-ce=5:19.03.15~3-0~ubuntu-{{ ansible_distribution_release|lower }} 20 | '20.10': docker-ce=5:20.10.11~3-0~ubuntu-{{ ansible_distribution_release|lower }} 21 | 'stable': docker-ce=5:20.10.11~3-0~ubuntu-{{ ansible_distribution_release|lower }} 22 | 'edge': docker-ce=5:20.10.11~3-0~ubuntu-{{ ansible_distribution_release|lower }} 23 | 24 | docker_cli_versioned_pkg: 25 | 'latest': docker-ce-cli 26 | '18.09': docker-ce-cli=5:18.09.9~3-0~ubuntu-{{ ansible_distribution_release|lower }} 27 | '19.03': docker-ce-cli=5:19.03.15~3-0~ubuntu-{{ ansible_distribution_release|lower }} 28 | '20.10': docker-ce-cli=5:20.10.11~3-0~ubuntu-{{ ansible_distribution_release|lower }} 29 | 'stable': docker-ce-cli=5:20.10.11~3-0~ubuntu-{{ 
ansible_distribution_release|lower }} 30 | 'edge': docker-ce-cli=5:20.10.11~3-0~ubuntu-{{ ansible_distribution_release|lower }} 31 | 32 | docker_package_info: 33 | pkgs: 34 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 35 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 36 | - "{{ docker_versioned_pkg[docker_version | string] }}" 37 | 38 | docker_repo_key_info: 39 | url: '{{ docker_ubuntu_repo_gpgkey }}' 40 | repo_keys: 41 | - '{{ docker_ubuntu_repo_repokey }}' 42 | 43 | docker_repo_info: 44 | repos: 45 | - > 46 | deb [arch={{ host_architecture }}] {{ docker_ubuntu_repo_base_url }} 47 | {{ ansible_distribution_release|lower }} 48 | stable 49 | -------------------------------------------------------------------------------- /roles/configure-docker-repo/vars/uniontech os server 20.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # containerd versions are only relevant for docker 3 | containerd_versioned_pkg: 4 | 'latest': "{{ containerd_package }}" 5 | '1.3.7': "{{ containerd_package }}-1.3.7-3.1.el8" 6 | '1.3.9': "{{ containerd_package }}-1.3.9-3.1.el8" 7 | '1.4.3': "{{ containerd_package }}-1.4.3-3.2.el8" 8 | '1.4.4': "{{ containerd_package }}-1.4.4-3.1.el8" 9 | '1.4.6': "{{ containerd_package }}-1.4.6-3.1.el8" 10 | '1.4.9': "{{ containerd_package }}-1.4.9-3.1.el8" 11 | '1.4.12': "{{ containerd_package }}-1.4.12-3.1.el8" 12 | 'stable': "{{ containerd_package }}-1.4.12-3.1.el8" 13 | 'edge': "{{ containerd_package }}-1.4.12-3.1.el8" 14 | 15 | # https://docs.docker.com/engine/installation/linux/centos/#install-from-a-package 16 | # https://download.docker.com/linux/centos/>/x86_64/stable/Packages/ 17 | # or do 'yum --showduplicates list docker-engine' 18 | docker_versioned_pkg: 19 | 'latest': docker-ce 20 | '18.09': docker-ce-18.09.9-3.el7 21 | '19.03': docker-ce-19.03.15-3.el8 22 | '20.10': docker-ce-20.10.11-3.el8 23 | 'stable': docker-ce-20.10.11-3.el8 24 | 'edge': 
docker-ce-20.10.11-3.el8 25 | 26 | docker_cli_versioned_pkg: 27 | 'latest': docker-ce-cli 28 | '18.09': docker-ce-cli-18.09.9-3.el7 29 | '19.03': docker-ce-cli-19.03.15-3.el8 30 | '20.10': docker-ce-cli-20.10.11-3.el8 31 | 'stable': docker-ce-cli-20.10.11-3.el8 32 | 'edge': docker-ce-cli-20.10.11-3.el8 33 | 34 | docker_package_info: 35 | enablerepo: "docker-ce" 36 | pkgs: 37 | - "{{ containerd_versioned_pkg[docker_containerd_version | string] }}" 38 | - "{{ docker_cli_versioned_pkg[docker_cli_version | string] }}" 39 | - "{{ docker_versioned_pkg[docker_version | string] }}" 40 | -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/cfg/cis-1.20/config.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Version-specific settings that override the values in cfg/config.yaml 3 | -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/cfg/cis-1.20/controlplane.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | controls: 3 | version: "cis-1.20" 4 | id: 3 5 | text: "Control Plane Configuration" 6 | type: "controlplane" 7 | groups: 8 | - id: 3.1 9 | text: "Authentication and Authorization" 10 | checks: 11 | - id: 3.1.1 12 | text: "Client certificate authentication should not be used for users (Manual)" 13 | type: "manual" 14 | remediation: | 15 | Alternative mechanisms provided by Kubernetes such as the use of OIDC should be 16 | implemented in place of client certificates. 17 | scored: false 18 | 19 | - id: 3.2 20 | text: "Logging" 21 | checks: 22 | - id: 3.2.1 23 | text: "Ensure that a minimal audit policy is created (Manual)" 24 | audit: "/bin/ps -ef | grep $apiserverbin | grep -v grep" 25 | tests: 26 | test_items: 27 | - flag: "--audit-policy-file" 28 | set: true 29 | remediation: | 30 | Create an audit policy file for your cluster. 
31 | scored: false 32 | 33 | - id: 3.2.2 34 | text: "Ensure that the audit policy covers key security concerns (Manual)" 35 | type: "manual" 36 | remediation: | 37 | Consider modification of the audit policy in use on the cluster to include these items, at a 38 | minimum. 39 | scored: false 40 | -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/cfg/cis-1.20/etcd.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | controls: 3 | version: "cis-1.20" 4 | id: 2 5 | text: "Etcd Node Configuration" 6 | type: "etcd" 7 | groups: 8 | - id: 2 9 | text: "Etcd Node Configuration Files" 10 | checks: 11 | - id: 2.1 12 | text: "Ensure that the --cert-file and --key-file arguments are set as appropriate (Automated)" 13 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 14 | tests: 15 | bin_op: and 16 | test_items: 17 | - flag: "--cert-file" 18 | env: "ETCD_CERT_FILE" 19 | - flag: "--key-file" 20 | env: "ETCD_KEY_FILE" 21 | remediation: | 22 | Follow the etcd service documentation and configure TLS encryption. 23 | Then, edit the etcd pod specification file /etc/kubernetes/manifests/etcd.yaml 24 | on the master node and set the below parameters. 25 | --cert-file= 26 | --key-file= 27 | scored: true 28 | 29 | - id: 2.2 30 | text: "Ensure that the --client-cert-auth argument is set to true (Automated)" 31 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 32 | tests: 33 | test_items: 34 | - flag: "--client-cert-auth" 35 | env: "ETCD_CLIENT_CERT_AUTH" 36 | compare: 37 | op: eq 38 | value: true 39 | remediation: | 40 | Edit the etcd pod specification file $etcdconf on the master 41 | node and set the below parameter. 
42 | --client-cert-auth="true" 43 | scored: true 44 | 45 | - id: 2.3 46 | text: "Ensure that the --auto-tls argument is not set to true (Automated)" 47 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 48 | tests: 49 | bin_op: or 50 | test_items: 51 | - flag: "--auto-tls" 52 | env: "ETCD_AUTO_TLS" 53 | set: false 54 | - flag: "--auto-tls" 55 | env: "ETCD_AUTO_TLS" 56 | compare: 57 | op: eq 58 | value: false 59 | remediation: | 60 | Edit the etcd pod specification file $etcdconf on the master 61 | node and either remove the --auto-tls parameter or set it to false. 62 | --auto-tls=false 63 | scored: true 64 | 65 | - id: 2.4 66 | text: "Ensure that the --peer-cert-file and --peer-key-file arguments are 67 | set as appropriate (Automated)" 68 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 69 | tests: 70 | bin_op: and 71 | test_items: 72 | - flag: "--peer-cert-file" 73 | env: "ETCD_PEER_CERT_FILE" 74 | - flag: "--peer-key-file" 75 | env: "ETCD_PEER_KEY_FILE" 76 | remediation: | 77 | Follow the etcd service documentation and configure peer TLS encryption as appropriate 78 | for your etcd cluster. 79 | Then, edit the etcd pod specification file $etcdconf on the 80 | master node and set the below parameters. 81 | --peer-cert-file= 82 | --peer-key-file= 83 | scored: true 84 | 85 | - id: 2.5 86 | text: "Ensure that the --peer-client-cert-auth argument is set to true (Automated)" 87 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 88 | tests: 89 | test_items: 90 | - flag: "--peer-client-cert-auth" 91 | env: "ETCD_PEER_CLIENT_CERT_AUTH" 92 | compare: 93 | op: eq 94 | value: true 95 | remediation: | 96 | Edit the etcd pod specification file $etcdconf on the master 97 | node and set the below parameter. 
98 | --peer-client-cert-auth=true 99 | scored: true 100 | 101 | - id: 2.6 102 | text: "Ensure that the --peer-auto-tls argument is not set to true (Automated)" 103 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 104 | tests: 105 | bin_op: or 106 | test_items: 107 | - flag: "--peer-auto-tls" 108 | env: "ETCD_PEER_AUTO_TLS" 109 | set: false 110 | - flag: "--peer-auto-tls" 111 | env: "ETCD_PEER_AUTO_TLS" 112 | compare: 113 | op: eq 114 | value: false 115 | remediation: | 116 | Edit the etcd pod specification file $etcdconf on the master 117 | node and either remove the --peer-auto-tls parameter or set it to false. 118 | --peer-auto-tls=false 119 | scored: true 120 | 121 | - id: 2.7 122 | text: "Ensure that a unique Certificate Authority is used for etcd (Manual)" 123 | audit: "/bin/ps -ef | /bin/grep $etcdbin | /bin/grep -v grep | grep -v client | grep -v endpoint" 124 | tests: 125 | test_items: 126 | - flag: "--trusted-ca-file" 127 | env: "ETCD_TRUSTED_CA_FILE" 128 | remediation: | 129 | [Manual test] 130 | Follow the etcd documentation and create a dedicated certificate authority setup for the 131 | etcd service. 132 | Then, edit the etcd pod specification file $etcdconf on the 133 | master node and set the below parameter. 134 | --trusted-ca-file= 135 | scored: false 136 | -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/cfg/cis-1.20/policies.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | controls: 3 | version: "cis-1.20" 4 | id: 5 5 | text: "Kubernetes Policies" 6 | type: "policies" 7 | groups: 8 | - id: 5.1 9 | text: "RBAC and Service Accounts" 10 | checks: 11 | - id: 5.1.1 12 | text: "Ensure that the cluster-admin role is only used where required (Manual)" 13 | type: "manual" 14 | remediation: | 15 | Identify all clusterrolebindings to the cluster-admin role. 
Check if they are used and 16 | if they need this role or if they could use a role with fewer privileges. 17 | Where possible, first bind users to a lower privileged role and then remove the 18 | clusterrolebinding to the cluster-admin role : 19 | kubectl delete clusterrolebinding [name] 20 | scored: false 21 | 22 | - id: 5.1.2 23 | text: "Minimize access to secrets (Manual)" 24 | type: "manual" 25 | remediation: | 26 | Where possible, remove get, list and watch access to secret objects in the cluster. 27 | scored: false 28 | 29 | - id: 5.1.3 30 | text: "Minimize wildcard use in Roles and ClusterRoles (Manual)" 31 | type: "manual" 32 | remediation: | 33 | Where possible replace any use of wildcards in clusterroles and roles with specific 34 | objects or actions. 35 | scored: false 36 | 37 | - id: 5.1.4 38 | text: "Minimize access to create pods (Manual)" 39 | type: "manual" 40 | remediation: | 41 | Where possible, remove create access to pod objects in the cluster. 42 | scored: false 43 | 44 | - id: 5.1.5 45 | text: "Ensure that default service accounts are not actively used. (Manual)" 46 | type: "manual" 47 | remediation: | 48 | Create explicit service accounts wherever a Kubernetes workload requires specific access 49 | to the Kubernetes API server. 50 | Modify the configuration of each default service account to include this value 51 | automountServiceAccountToken: false 52 | scored: false 53 | 54 | - id: 5.1.6 55 | text: "Ensure that Service Account Tokens are only mounted where necessary (Manual)" 56 | type: "manual" 57 | remediation: | 58 | Modify the definition of pods and service accounts which do not need to mount service 59 | account tokens to disable it. 60 | scored: false 61 | 62 | - id: 5.1.7 63 | text: "Avoid use of system:masters group (Manual)" 64 | type: "manual" 65 | remediation: | 66 | Remove the system:masters group from all users in the cluster. 
67 | scored: false 68 | 69 | - id: 5.1.8 70 | text: "Limit use of the Bind, Impersonate and Escalate permissions in the Kubernetes cluster (Manual)" 71 | type: "manual" 72 | remediation: | 73 | Where possible, remove the impersonate, bind and escalate rights from subjects. 74 | scored: false 75 | 76 | - id: 5.2 77 | text: "Pod Security Policies" 78 | checks: 79 | - id: 5.2.1 80 | text: "Minimize the admission of privileged containers (Automated)" 81 | type: "manual" 82 | remediation: | 83 | Create a PSP as described in the Kubernetes documentation, ensuring that 84 | the .spec.privileged field is omitted or set to false. 85 | scored: false 86 | 87 | - id: 5.2.2 88 | text: "Minimize the admission of containers wishing to share the host process ID namespace (Automated)" 89 | type: "manual" 90 | remediation: | 91 | Create a PSP as described in the Kubernetes documentation, ensuring that the 92 | .spec.hostPID field is omitted or set to false. 93 | scored: false 94 | 95 | - id: 5.2.3 96 | text: "Minimize the admission of containers wishing to share the host IPC namespace (Automated)" 97 | type: "manual" 98 | remediation: | 99 | Create a PSP as described in the Kubernetes documentation, ensuring that the 100 | .spec.hostIPC field is omitted or set to false. 101 | scored: false 102 | 103 | - id: 5.2.4 104 | text: "Minimize the admission of containers wishing to share the host network namespace (Automated)" 105 | type: "manual" 106 | remediation: | 107 | Create a PSP as described in the Kubernetes documentation, ensuring that the 108 | .spec.hostNetwork field is omitted or set to false. 109 | scored: false 110 | 111 | - id: 5.2.5 112 | text: "Minimize the admission of containers with allowPrivilegeEscalation (Automated)" 113 | type: "manual" 114 | remediation: | 115 | Create a PSP as described in the Kubernetes documentation, ensuring that the 116 | .spec.allowPrivilegeEscalation field is omitted or set to false. 
117 | scored: false 118 | 119 | - id: 5.2.6 120 | text: "Minimize the admission of root containers (Automated)" 121 | type: "manual" 122 | remediation: | 123 | Create a PSP as described in the Kubernetes documentation, ensuring that the 124 | .spec.runAsUser.rule is set to either MustRunAsNonRoot or MustRunAs with the range of 125 | UIDs not including 0. 126 | scored: false 127 | 128 | - id: 5.2.7 129 | text: "Minimize the admission of containers with the NET_RAW capability (Automated)" 130 | type: "manual" 131 | remediation: | 132 | Create a PSP as described in the Kubernetes documentation, ensuring that the 133 | .spec.requiredDropCapabilities is set to include either NET_RAW or ALL. 134 | scored: false 135 | 136 | - id: 5.2.8 137 | text: "Minimize the admission of containers with added capabilities (Automated)" 138 | type: "manual" 139 | remediation: | 140 | Ensure that allowedCapabilities is not present in PSPs for the cluster unless 141 | it is set to an empty array. 142 | scored: false 143 | 144 | - id: 5.2.9 145 | text: "Minimize the admission of containers with capabilities assigned (Manual)" 146 | type: "manual" 147 | remediation: | 148 | Review the use of capabilities in applications running on your cluster. Where a namespace 149 | contains applications which do not require any Linux capabilities to operate, consider adding 150 | a PSP which forbids the admission of containers which do not drop all capabilities. 151 | scored: false 152 | 153 | - id: 5.3 154 | text: "Network Policies and CNI" 155 | checks: 156 | - id: 5.3.1 157 | text: "Ensure that the CNI in use supports Network Policies (Manual)" 158 | type: "manual" 159 | remediation: | 160 | If the CNI plugin in use does not support network policies, consideration should be given to 161 | making use of a different plugin, or finding an alternate mechanism for restricting traffic 162 | in the Kubernetes cluster. 
163 | scored: false 164 | 165 | - id: 5.3.2 166 | text: "Ensure that all Namespaces have Network Policies defined (Manual)" 167 | type: "manual" 168 | remediation: | 169 | Follow the documentation and create NetworkPolicy objects as you need them. 170 | scored: false 171 | 172 | - id: 5.4 173 | text: "Secrets Management" 174 | checks: 175 | - id: 5.4.1 176 | text: "Prefer using secrets as files over secrets as environment variables (Manual)" 177 | type: "manual" 178 | remediation: | 179 | If possible, rewrite application code to read secrets from mounted secret files, rather than 180 | from environment variables. 181 | scored: false 182 | 183 | - id: 5.4.2 184 | text: "Consider external secret storage (Manual)" 185 | type: "manual" 186 | remediation: | 187 | Refer to the secrets management options offered by your cloud provider or a third-party 188 | secrets management solution. 189 | scored: false 190 | 191 | - id: 5.5 192 | text: "Extensible Admission Control" 193 | checks: 194 | - id: 5.5.1 195 | text: "Configure Image Provenance using ImagePolicyWebhook admission controller (Manual)" 196 | type: "manual" 197 | remediation: | 198 | Follow the Kubernetes documentation and set up image provenance. 199 | scored: false 200 | 201 | - id: 5.7 202 | text: "General Policies" 203 | checks: 204 | - id: 5.7.1 205 | text: "Create administrative boundaries between resources using namespaces (Manual)" 206 | type: "manual" 207 | remediation: | 208 | Follow the documentation and create namespaces for objects in your deployment as you need 209 | them. 210 | scored: false 211 | 212 | - id: 5.7.2 213 | text: "Ensure that the seccomp profile is set to docker/default in your pod definitions (Manual)" 214 | type: "manual" 215 | remediation: | 216 | Use security context to enable the docker/default seccomp profile in your pod definitions. 
217 | An example is as below: 218 | securityContext: 219 | seccompProfile: 220 | type: RuntimeDefault 221 | scored: false 222 | 223 | - id: 5.7.3 224 | text: "Apply Security Context to Your Pods and Containers (Manual)" 225 | type: "manual" 226 | remediation: | 227 | Follow the Kubernetes documentation and apply security contexts to your pods. For a 228 | suggested list of security contexts, you may refer to the CIS Security Benchmark for Docker 229 | Containers. 230 | scored: false 231 | 232 | - id: 5.7.4 233 | text: "The default namespace should not be used (Manual)" 234 | type: "manual" 235 | remediation: | 236 | Ensure that namespaces are created to allow for appropriate segregation of Kubernetes 237 | resources and that all new resources are created in a specific namespace. 238 | scored: false 239 | -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/cfg/config.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Controls Files. 3 | # These are YAML files that hold all the details for running checks. 4 | # 5 | ## Uncomment to use different control file paths. 
6 | # masterControls: ./cfg/master.yaml 7 | # nodeControls: ./cfg/node.yaml 8 | 9 | master: 10 | components: 11 | - apiserver 12 | - scheduler 13 | - controllermanager 14 | - etcd 15 | - flanneld 16 | # kubernetes is a component to cover the config file /etc/kubernetes/config that is referred to in the benchmark 17 | - kubernetes 18 | - kubelet 19 | 20 | kubernetes: 21 | defaultconf: /etc/kubernetes/config 22 | 23 | apiserver: 24 | bins: 25 | - "kube-apiserver" 26 | - "hyperkube apiserver" 27 | - "hyperkube kube-apiserver" 28 | - "apiserver" 29 | - "openshift start master api" 30 | - "hypershift openshift-kube-apiserver" 31 | confs: 32 | - /etc/kubernetes/manifests/kube-apiserver.yaml 33 | - /etc/kubernetes/manifests/kube-apiserver.yml 34 | - /etc/kubernetes/manifests/kube-apiserver.manifest 35 | - /var/snap/kube-apiserver/current/args 36 | - /var/snap/microk8s/current/args/kube-apiserver 37 | - /etc/origin/master/master-config.yaml 38 | - /etc/kubernetes/manifests/talos-kube-apiserver.yaml 39 | defaultconf: /etc/kubernetes/manifests/kube-apiserver.yaml 40 | 41 | scheduler: 42 | bins: 43 | - "kube-scheduler" 44 | - "hyperkube scheduler" 45 | - "hyperkube kube-scheduler" 46 | - "scheduler" 47 | - "openshift start master controllers" 48 | confs: 49 | - /etc/kubernetes/manifests/kube-scheduler.yaml 50 | - /etc/kubernetes/manifests/kube-scheduler.yml 51 | - /etc/kubernetes/manifests/kube-scheduler.manifest 52 | - /var/snap/kube-scheduler/current/args 53 | - /var/snap/microk8s/current/args/kube-scheduler 54 | - /etc/origin/master/scheduler.json 55 | - /etc/kubernetes/manifests/talos-kube-scheduler.yaml 56 | defaultconf: /etc/kubernetes/manifests/kube-scheduler.yaml 57 | kubeconfig: 58 | - /etc/kubernetes/scheduler.conf 59 | - /var/lib/kube-scheduler/kubeconfig 60 | - /var/lib/kube-scheduler/config.yaml 61 | - /system/secrets/kubernetes/kube-scheduler/kubeconfig 62 | defaultkubeconfig: /etc/kubernetes/scheduler.conf 63 | 64 | controllermanager: 65 | bins: 66 | - 
"kube-controller-manager" 67 | - "kube-controller" 68 | - "hyperkube controller-manager" 69 | - "hyperkube kube-controller-manager" 70 | - "controller-manager" 71 | - "openshift start master controllers" 72 | - "hypershift openshift-controller-manager" 73 | confs: 74 | - /etc/kubernetes/manifests/kube-controller-manager.yaml 75 | - /etc/kubernetes/manifests/kube-controller-manager.yml 76 | - /etc/kubernetes/manifests/kube-controller-manager.manifest 77 | - /var/snap/kube-controller-manager/current/args 78 | - /var/snap/microk8s/current/args/kube-controller-manager 79 | - /etc/kubernetes/manifests/talos-kube-controller-manager.yaml 80 | defaultconf: /etc/kubernetes/manifests/kube-controller-manager.yaml 81 | kubeconfig: 82 | - /etc/kubernetes/controller-manager.conf 83 | - /var/lib/kube-controller-manager/kubeconfig 84 | - /system/secrets/kubernetes/kube-controller-manager/kubeconfig 85 | defaultkubeconfig: /etc/kubernetes/controller-manager.conf 86 | 87 | etcd: 88 | optional: true 89 | bins: 90 | - "etcd" 91 | - "openshift start etcd" 92 | confs: 93 | - /etc/kubernetes/manifests/etcd.yaml 94 | - /etc/kubernetes/manifests/etcd.yml 95 | - /etc/kubernetes/manifests/etcd.manifest 96 | - /etc/etcd/etcd.conf 97 | - /var/snap/etcd/common/etcd.conf.yml 98 | - /var/snap/etcd/common/etcd.conf.yaml 99 | - /var/snap/microk8s/current/args/etcd 100 | - /usr/lib/systemd/system/etcd.service 101 | defaultconf: /etc/kubernetes/manifests/etcd.yaml 102 | 103 | flanneld: 104 | optional: true 105 | bins: 106 | - flanneld 107 | defaultconf: /etc/sysconfig/flanneld 108 | 109 | kubelet: 110 | optional: true 111 | bins: 112 | - "hyperkube kubelet" 113 | - "kubelet" 114 | 115 | node: 116 | components: 117 | - kubelet 118 | - proxy 119 | # kubernetes is a component to cover the config file /etc/kubernetes/config that is referred to in the benchmark 120 | - kubernetes 121 | 122 | kubernetes: 123 | defaultconf: "/etc/kubernetes/config" 124 | 125 | kubelet: 126 | cafile: 127 | - 
"/etc/kubernetes/pki/ca.crt" 128 | - "/etc/kubernetes/certs/ca.crt" 129 | - "/etc/kubernetes/cert/ca.pem" 130 | - "/var/snap/microk8s/current/certs/ca.crt" 131 | svc: 132 | # These paths must also be included 133 | # in the 'confs' property below 134 | - "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" 135 | - "/etc/systemd/system/kubelet.service" 136 | - "/lib/systemd/system/kubelet.service" 137 | - "/etc/systemd/system/snap.kubelet.daemon.service" 138 | - "/etc/systemd/system/snap.microk8s.daemon-kubelet.service" 139 | - "/etc/systemd/system/atomic-openshift-node.service" 140 | - "/etc/systemd/system/origin-node.service" 141 | bins: 142 | - "hyperkube kubelet" 143 | - "kubelet" 144 | kubeconfig: 145 | - "/etc/kubernetes/kubelet.conf" 146 | - "/etc/kubernetes/kubelet-kubeconfig.conf" 147 | - "/var/lib/kubelet/kubeconfig" 148 | - "/etc/kubernetes/kubelet-kubeconfig" 149 | - "/etc/kubernetes/kubelet/kubeconfig" 150 | - "/var/snap/microk8s/current/credentials/kubelet.config" 151 | - "/etc/kubernetes/kubeconfig-kubelet" 152 | confs: 153 | - "/etc/kubernetes/kubelet-config.yaml" 154 | - "/var/lib/kubelet/config.yaml" 155 | - "/var/lib/kubelet/config.yml" 156 | - "/etc/kubernetes/kubelet/kubelet-config.json" 157 | - "/etc/kubernetes/kubelet/config" 158 | - "/home/kubernetes/kubelet-config.yaml" 159 | - "/home/kubernetes/kubelet-config.yml" 160 | - "/etc/default/kubeletconfig.json" 161 | - "/etc/default/kubelet" 162 | - "/var/lib/kubelet/kubeconfig" 163 | - "/var/snap/kubelet/current/args" 164 | - "/var/snap/microk8s/current/args/kubelet" 165 | ## Due to the fact that the kubelet might be configured 166 | ## without a kubelet-config file, we use a work-around 167 | ## of pointing to the systemd service file (which can also 168 | ## hold kubelet configuration). 
169 | ## Note: The following paths must match the one under 'svc' 170 | - "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" 171 | - "/etc/systemd/system/kubelet.service" 172 | - "/lib/systemd/system/kubelet.service" 173 | - "/etc/systemd/system/snap.kubelet.daemon.service" 174 | - "/etc/systemd/system/snap.microk8s.daemon-kubelet.service" 175 | - "/etc/kubernetes/kubelet.yaml" 176 | defaultconf: "/var/lib/kubelet/config.yaml" 177 | defaultsvc: "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf" 178 | defaultkubeconfig: "/etc/kubernetes/kubelet.conf" 179 | defaultcafile: "/etc/kubernetes/pki/ca.crt" 180 | 181 | proxy: 182 | optional: true 183 | bins: 184 | - "kube-proxy" 185 | - "hyperkube proxy" 186 | - "hyperkube kube-proxy" 187 | - "proxy" 188 | - "openshift start network" 189 | confs: 190 | - /etc/kubernetes/proxy 191 | - /etc/kubernetes/addons/kube-proxy-daemonset.yaml 192 | - /etc/kubernetes/addons/kube-proxy-daemonset.yml 193 | - /var/snap/kube-proxy/current/args 194 | - /var/snap/microk8s/current/args/kube-proxy 195 | kubeconfig: 196 | - "/etc/kubernetes/kubelet-kubeconfig" 197 | - "/etc/kubernetes/kubelet-kubeconfig.conf" 198 | - "/etc/kubernetes/kubelet/config" 199 | - "/var/lib/kubelet/kubeconfig" 200 | - "/var/snap/microk8s/current/credentials/proxy.config" 201 | svc: 202 | - "/lib/systemd/system/kube-proxy.service" 203 | - "/etc/systemd/system/snap.microk8s.daemon-proxy.service" 204 | defaultconf: /etc/kubernetes/addons/kube-proxy-daemonset.yaml 205 | defaultkubeconfig: "/etc/kubernetes/proxy.conf" 206 | 207 | etcd: 208 | components: 209 | - etcd 210 | 211 | etcd: 212 | bins: 213 | - "etcd" 214 | confs: 215 | - /etc/kubernetes/manifests/etcd.yaml 216 | - /etc/kubernetes/manifests/etcd.yml 217 | - /etc/kubernetes/manifests/etcd.manifest 218 | - /etc/etcd/etcd.conf 219 | - /var/snap/etcd/common/etcd.conf.yml 220 | - /var/snap/etcd/common/etcd.conf.yaml 221 | - /var/snap/microk8s/current/args/etcd 222 | - /usr/lib/systemd/system/etcd.service 223 
| defaultconf: /etc/kubernetes/manifests/etcd.yaml 224 | 225 | controlplane: 226 | components: 227 | - apiserver 228 | 229 | apiserver: 230 | bins: 231 | - "kube-apiserver" 232 | - "hyperkube apiserver" 233 | - "hyperkube kube-apiserver" 234 | - "apiserver" 235 | 236 | policies: 237 | components: [] 238 | 239 | managedservices: 240 | components: [] 241 | 242 | version_mapping: 243 | "1.15": "cis-1.5" 244 | "1.16": "cis-1.6" 245 | "1.17": "cis-1.6" 246 | "1.18": "cis-1.6" 247 | "1.19": "cis-1.20" 248 | "1.20": "cis-1.20" 249 | "eks-1.0.1": "eks-1.0.1" 250 | "gke-1.0": "gke-1.0" 251 | "gke-1.2.0": "gke-1.2.0" 252 | "ocp-3.10": "rh-0.7" 253 | "ocp-3.11": "rh-0.7" 254 | "ocp-4.0": "rh-1.0" 255 | "aks-1.0": "aks-1.0" 256 | "ack-1.0": "ack-1.0" 257 | 258 | target_mapping: 259 | "cis-1.5": 260 | - "master" 261 | - "node" 262 | - "controlplane" 263 | - "etcd" 264 | - "policies" 265 | "cis-1.6": 266 | - "master" 267 | - "node" 268 | - "controlplane" 269 | - "etcd" 270 | - "policies" 271 | "cis-1.20": 272 | - "master" 273 | - "node" 274 | - "controlplane" 275 | - "etcd" 276 | - "policies" 277 | "gke-1.0": 278 | - "master" 279 | - "node" 280 | - "controlplane" 281 | - "etcd" 282 | - "policies" 283 | - "managedservices" 284 | "gke-1.2.0": 285 | - "master" 286 | - "node" 287 | - "controlplane" 288 | - "policies" 289 | - "managedservices" 290 | "eks-1.0.1": 291 | - "master" 292 | - "node" 293 | - "controlplane" 294 | - "policies" 295 | - "managedservices" 296 | "rh-0.7": 297 | - "master" 298 | - "node" 299 | "aks-1.0": 300 | - "master" 301 | - "node" 302 | - "controlplane" 303 | - "policies" 304 | - "managedservices" 305 | "ack-1.0": 306 | - "master" 307 | - "node" 308 | - "controlplane" 309 | - "etcd" 310 | - "policies" 311 | - "managedservices" 312 | "rh-1.0": 313 | - "master" 314 | - "node" 315 | - "controlplane" 316 | - "policies" 317 | - "etcd" 318 | -------------------------------------------------------------------------------- 
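The `version_mapping` and `target_mapping` tables above are how kube-bench picks its checks: the detected Kubernetes version is first mapped to a benchmark id, then the benchmark id to the target groups to run. A minimal shell sketch of the first lookup — the `resolve_benchmark` helper is hypothetical and reproduces only the CIS rows of the table:

```shell
# Hypothetical helper mirroring the version_mapping table above:
# Kubernetes version -> benchmark id (CIS rows only).
resolve_benchmark() {
  case "$1" in
    1.15) echo "cis-1.5" ;;
    1.16 | 1.17 | 1.18) echo "cis-1.6" ;;
    1.19 | 1.20) echo "cis-1.20" ;;
    *) echo "unknown" ;;
  esac
}

# target_mapping then lists the groups for the chosen benchmark; with
# the binary this role copies to /root/kube-bench one might run:
#   ./kube-bench run --config-dir ./cfg \
#     --benchmark "$(resolve_benchmark 1.20)" \
#     --targets master,node,controlplane,etcd,policies
resolve_benchmark 1.19
```

`resolve_benchmark 1.19` prints `cis-1.20`, matching the `"1.19": "cis-1.20"` row of `version_mapping`.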
/roles/deploy-kube-bench/kube-bench/kube-bench-amd64: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/eip-work/kuboard-spray-resource/7fb49d67b58e76e48f58d2e11a8955a9efb37ffb/roles/deploy-kube-bench/kube-bench/kube-bench-amd64 -------------------------------------------------------------------------------- /roles/deploy-kube-bench/kube-bench/kube-bench-arm64: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/eip-work/kuboard-spray-resource/7fb49d67b58e76e48f58d2e11a8955a9efb37ffb/roles/deploy-kube-bench/kube-bench/kube-bench-arm64 -------------------------------------------------------------------------------- /roles/deploy-kube-bench/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: Copy kube-bench config to the target server 4 | ansible.builtin.copy: 5 | src: "{{ role_path }}/kube-bench/cfg" 6 | dest: /root/kube-bench/ 7 | force: true 8 | owner: root 9 | group: root 10 | mode: '0644' 11 | 12 | - name: Copy the kube-bench binary to the target server 13 | ansible.builtin.copy: 14 | src: "{{ role_path }}/kube-bench/kube-bench-{{ host_architecture }}" 15 | dest: /root/kube-bench/kube-bench 16 | force: true 17 | owner: root 18 | group: root 19 | mode: '0755' 20 | 21 | -------------------------------------------------------------------------------- /roles/deploy-kuboard/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | deploy_kuboard: true 4 | kuboard_data_dir: /var/lib/kuboard 5 | kuboard_version: "v3.4.1.0" -------------------------------------------------------------------------------- /roles/deploy-kuboard/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Remove kuboard 3 | block: 4 | - name: Remove kuboard | Delete kuboard.yaml 5 | file: 6 | path:
"/etc/kubernetes/manifests/kuboard.yaml" 7 | state: absent 8 | 9 | - name: Remove kuboard | Delete addon dir 10 | file: 11 | path: "{{ kuboard_data_dir }}" 12 | state: absent 13 | 14 | when: 15 | - kuboardspray_operation == 'remove_addon' 16 | - inventory_hostname == groups['kube_control_plane'][0] 17 | tags: 18 | - upgrade 19 | - kuboard 20 | 21 | 22 | - name: Deploy kuboard 23 | block: 24 | - name: Deploy kuboard | Create addon dir 25 | file: 26 | path: "{{ kuboard_data_dir }}" 27 | state: directory 28 | owner: root 29 | group: root 30 | mode: 0755 31 | 32 | - name: Deploy kuboard | Create ServiceAccount manifests 33 | template: 34 | src: "kuboard-service-accounts.yaml.j2" 35 | dest: "{{ kuboard_data_dir }}/kuboard-service-accounts.yaml" 36 | mode: 0744 37 | 38 | - name: Deploy kuboard | Apply ServiceAccount manifests 39 | kube: 40 | name: "kuboard_service_accounts" 41 | kubectl: "{{ bin_dir }}/kubectl" 42 | filename: "{{ kuboard_data_dir }}/kuboard-service-accounts.yaml" 43 | state: "latest" 44 | 45 | - name: Deploy kuboard | read admin kubeconfig 46 | shell: cat /etc/kubernetes/admin.conf 47 | register: kuboard_kubeconfig_content 48 | 49 | - name: Deploy kuboard | read certificate-authority-data 50 | shell: "cat /etc/kubernetes/admin.conf | grep certificate-authority-data | awk '{print $2}'" 51 | register: kuboard_kubeconfig_certificate_authority_data 52 | 53 | - name: Deploy kuboard | read client-certificate-data 54 | shell: "cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk '{print $2}'" 55 | register: kuboard_kubeconfig_client_certificate_data 56 | 57 | - name: Deploy kuboard | read client-key-data 58 | shell: "cat /etc/kubernetes/admin.conf | grep client-key-data | awk '{print $2}'" 59 | register: kuboard_kubeconfig_client_key_data 60 | 61 | - name: Deploy kuboard | read kuboard-admin-token 62 | shell: "kubectl get secrets -n kuboard -o yaml $(kubectl get secrets -n kuboard | grep kuboard-admin-token | awk '{print $1}') | grep ' token: ' | awk '{print $2}'" 63 | retries: 10 64 |
delay: 3 65 | until: kuboard_admin_token.stdout != "" 66 | register: kuboard_admin_token 67 | 68 | - name: Deploy kuboard | read kuboard-viewer-token 69 | shell: "kubectl get secrets -n kuboard -o yaml $(kubectl get secrets -n kuboard | grep kuboard-viewer-token | awk '{print $1}') | grep ' token: ' | awk '{print $2}'" 70 | register: kuboard_viewer_token 71 | 72 | - name: Deploy kuboard | Generate init yaml 73 | template: 74 | src: "import-cluster-once.yaml.j2" 75 | dest: "{{ kuboard_data_dir }}/import-cluster-once.yaml" 76 | mode: 0744 77 | 78 | - name: Deploy kuboard | Create kuboard.yaml 79 | template: 80 | src: "kuboard-pod.yaml.j2" 81 | dest: "/etc/kubernetes/manifests/kuboard.yaml" 82 | mode: 0600 83 | 84 | when: 85 | - inventory_hostname == groups['kube_control_plane'][0] 86 | - kuboard_enabled 87 | - kuboardspray_operation != 'remove_addon' 88 | tags: 89 | - kuboard 90 | - upgrade -------------------------------------------------------------------------------- /roles/deploy-kuboard/templates/import-cluster-once.yaml.j2: -------------------------------------------------------------------------------- 1 | --- 2 | kind: KubernetesCluster 3 | metadata: 4 | name: {{ kuboard_cluster_name }} 5 | cluster: {{ kuboard_cluster_name }} 6 | annotations: 7 | description: {{ kuboard_cluster_name }} 8 | spec: 9 | # port: 35000 10 | type: EXISTING 11 | connectionType: kubeconfig 12 | kubeDisableTls: true 13 | kubeconfig: |- 14 | {% for l in kuboard_kubeconfig_content.stdout_lines %} 15 | {{ l }} 16 | {% endfor %} 17 | # 18 | kubeconfigObj: 19 | clustername: "{{ cluster_name }}" 20 | username: kubernetes-admin 21 | cluster: 22 | certificate-authority-data: "{{ kuboard_kubeconfig_certificate_authority_data.stdout }}" 23 | server: "https://{{ ip }}:6443" 24 | user: 25 | client-certificate-data: "{{ kuboard_kubeconfig_client_certificate_data.stdout }}" 26 | client-key-data: "{{ kuboard_kubeconfig_client_key_data.stdout }}" 27 | kubeserver: "https://{{ ip }}:6443" 28 | status: 
29 | phase: IMPORTED 30 | agentVersion: 31 | version: '' 32 | buildDate: '' 33 | kubernetesVersion: 34 | major: "1" 35 | minor: "" 36 | gitVersion: "{{ kube_version }}" 37 | gitCommit: "" 38 | gitTreeState: "clean" 39 | buildDate: "" 40 | goVersion: "" 41 | compiler: "gc" 42 | platform: "linux/{{ image_arch }}" 43 | 44 | --- 45 | kind: KubernetesClusterToken 46 | metadata: 47 | name: kuboard-admin 48 | cluster: {{ kuboard_cluster_name }} 49 | spec: 50 | token: "{{ kuboard_admin_token.stdout }}" 51 | anonymousToken: "" 52 | agentVersion: 53 | version: "" 54 | buildDate: "" 55 | kubernetesVersion: 56 | major: "1" 57 | minor: "" 58 | gitVersion: "{{ kube_version }}" 59 | gitCommit: "" 60 | gitTreeState: "clean" 61 | buildDate: "" 62 | goVersion: "" 63 | compiler: "gc" 64 | platform: "linux/{{ image_arch }}" 65 | 66 | --- 67 | kind: KubernetesClusterToken 68 | metadata: 69 | name: kuboard-viewer 70 | cluster: {{ kuboard_cluster_name }} 71 | spec: 72 | token: "{{ kuboard_viewer_token.stdout }}" 73 | anonymousToken: "" 74 | agentVersion: 75 | version: "" 76 | buildDate: "" 77 | kubernetesVersion: 78 | major: "1" 79 | minor: "" 80 | gitVersion: "{{ kube_version }}" 81 | gitCommit: "" 82 | gitTreeState: "clean" 83 | buildDate: "" 84 | goVersion: "" 85 | compiler: "gc" 86 | platform: "linux/{{ image_arch }}" -------------------------------------------------------------------------------- /roles/deploy-kuboard/templates/kuboard-pod.yaml.j2: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | annotations: {} 5 | labels: 6 | k8s.kuboard.cn/name: kuboard-v3 7 | name: kuboard-v3 8 | namespace: kuboard 9 | spec: 10 | containers: 11 | - env: 12 | - name: KUBOARD_ENDPOINT 13 | value: "http://{{ ip }}:{{ kuboard_port }}" 14 | - name: KUBOARD_AGENT_SERVER_TCP_PORT 15 | value: "10081" 16 | image: 'eipwork/kuboard:{{ kuboard_version }}' 17 | imagePullPolicy: IfNotPresent 18 | livenessProbe: 19 | failureThreshold: 3 20 |
httpGet: 21 | path: /kuboard-resources/version.json 22 | port: 80 23 | scheme: HTTP 24 | initialDelaySeconds: 30 25 | periodSeconds: 10 26 | successThreshold: 1 27 | timeoutSeconds: 1 28 | name: kuboard 29 | ports: 30 | - containerPort: 80 31 | hostPort: {{ kuboard_port }} 32 | name: web 33 | protocol: TCP 34 | - containerPort: 10081 35 | name: peer 36 | protocol: TCP 37 | hostPort: 10081 38 | - containerPort: 10081 39 | name: peer-u 40 | protocol: UDP 41 | hostPort: 10081 42 | readinessProbe: 43 | failureThreshold: 3 44 | httpGet: 45 | path: /kuboard-resources/version.json 46 | port: 80 47 | scheme: HTTP 48 | initialDelaySeconds: 30 49 | periodSeconds: 10 50 | successThreshold: 1 51 | timeoutSeconds: 1 52 | volumeMounts: 53 | - mountPath: /data 54 | name: data 55 | - mountPath: /init-etcd-scripts/import-cluster-once.yaml 56 | name: import-cluster-yaml 57 | volumes: 58 | - hostPath: 59 | path: "{{ kuboard_data_dir }}" 60 | name: data 61 | - hostPath: 62 | path: "{{ kuboard_data_dir }}/import-cluster-once.yaml" 63 | name: import-cluster-yaml 64 | dnsPolicy: ClusterFirst 65 | restartPolicy: Always 66 | tolerations: 67 | - key: node-role.kubernetes.io/master 68 | operator: Exists 69 | -------------------------------------------------------------------------------- /roles/deploy-kuboard/templates/kuboard-service-accounts.yaml.j2: -------------------------------------------------------------------------------- 1 | --- 2 | kind: Namespace 3 | apiVersion: v1 4 | metadata: 5 | name: kuboard 6 | 7 | --- 8 | kind: ServiceAccount 9 | apiVersion: v1 10 | metadata: 11 | name: kuboard-admin 12 | namespace: kuboard 13 | 14 | --- 15 | kind: ClusterRoleBinding 16 | apiVersion: rbac.authorization.k8s.io/v1 17 | metadata: 18 | name: kuboard-admin-crb 19 | roleRef: 20 | apiGroup: rbac.authorization.k8s.io 21 | kind: ClusterRole 22 | name: cluster-admin 23 | subjects: 24 | - kind: ServiceAccount 25 | name: kuboard-admin 26 | namespace: kuboard 27 | 28 | --- 29 | kind: ServiceAccount 30 
| apiVersion: v1 31 | metadata: 32 | name: kuboard-viewer 33 | namespace: kuboard 34 | 35 | --- 36 | kind: ClusterRoleBinding 37 | apiVersion: rbac.authorization.k8s.io/v1 38 | metadata: 39 | name: kuboard-viewer-crb 40 | roleRef: 41 | apiGroup: rbac.authorization.k8s.io 42 | kind: ClusterRole 43 | name: view 44 | subjects: 45 | - kind: ServiceAccount 46 | name: kuboard-viewer 47 | namespace: kuboard 48 | 49 | --- 50 | apiVersion: v1 51 | kind: Secret 52 | type: kubernetes.io/service-account-token 53 | metadata: 54 | name: kuboard-admin-token 55 | namespace: kuboard 56 | annotations: 57 | kubernetes.io/service-account.name: "kuboard-admin" 58 | 59 | --- 60 | apiVersion: v1 61 | kind: Secret 62 | type: kubernetes.io/service-account-token 63 | metadata: 64 | name: kuboard-viewer-token 65 | namespace: kuboard 66 | annotations: 67 | kubernetes.io/service-account.name: "kuboard-viewer" -------------------------------------------------------------------------------- /roles/kuboard-spray-facts/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | _host_architecture_groups: 4 | x86_64: amd64 5 | aarch64: arm64 6 | armv7l: arm 7 | host_architecture: >- 8 | {%- if ansible_architecture in _host_architecture_groups -%} 9 | {{ _host_architecture_groups[ansible_architecture] }} 10 | {%- else -%} 11 | {{ ansible_architecture }} 12 | {%- endif -%} -------------------------------------------------------------------------------- /roles/kuboard-spray-facts/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Gather minimal facts 3 | setup: 4 | gather_subset: '!all' 5 | 6 | - name: Check kuboard-spray version 7 | run_once: true 8 | block: 9 | - name: Check that kuboardspray_version is defined 10 | run_once: true 11 | fail: 12 | msg: 13 | - "This resource package does not support the current kuboard-spray version" 14 | - "Please upgrade kuboard-spray to the latest version https://kuboard-spray.cn/support/change-log/v1.html" 15 | when: 16 | -
kuboardspray_version is not defined 17 | 18 | - name: wrap kuboardspray_version 19 | set_fact: 20 | kuboardspray_version_wrapper: "{{ kuboardspray_version }}-w" 21 | 22 | - name: Read the minimum kuboard-spray version required by the resource package 23 | shell: "cat package.yaml | shyaml get-value metadata.kuboard_spray_version.min" 24 | delegate_to: localhost 25 | args: 26 | chdir: "{{ playbook_dir }}" 27 | register: kuboardspray_min_version 28 | 29 | - name: "Check that the kuboard-spray version meets the resource package's minimum requirement {{ kuboardspray_version }} >= {{ kuboardspray_min_version.stdout }}" 30 | assert: 31 | msg: 32 | - "The resource package requires kuboard-spray version {{ kuboardspray_min_version.stdout }} or later; the current kuboard-spray version is {{ kuboardspray_version }}" 33 | - "Please upgrade kuboard-spray to the latest version https://kuboard-spray.cn/support/change-log/v1.html" 34 | that: 35 | - kuboardspray_version_wrapper is version(kuboardspray_min_version.stdout, operator='ge') 36 | 37 | - name: Read the CPU architectures supported by the resource package 38 | shell: "cat package.yaml | shyaml get-value data.kubernetes.image_arch" 39 | delegate_to: localhost 40 | args: 41 | chdir: "{{ playbook_dir }}" 42 | register: kuboardspray_image_arch 43 | 44 | - name: "Check that this node's CPU architecture {{ host_architecture }} is supported" 45 | assert: 46 | msg: 47 | - "The resource package supports {{ kuboardspray_image_arch.stdout }}; this node's CPU architecture {{ host_architecture }} is not supported" 48 | - "Make sure the machine running kuboard-spray has the same CPU architecture as the target nodes" 49 | that: 50 | - host_architecture == kuboardspray_image_arch.stdout 51 | 52 | - name: Read the list of operating systems supported by the resource package 53 | shell: "cat package.yaml | shyaml get-value metadata.supported_os" 54 | delegate_to: localhost 55 | args: 56 | chdir: "{{ playbook_dir }}" 57 | register: kuboardspray_supported_os 58 | 59 | - set_fact: 60 | kuboardspray_supported_os_versions: "{{ kuboardspray_supported_os.stdout | from_yaml | selectattr('distribution', '==', ansible_distribution) }}" 61 | 62 | - name: "Check that this operating system {{ ansible_distribution }} {{ ansible_distribution_version }} is supported" 63 | assert: 64 | msg: 65 | - "The resource package does not support this operating system: {{ ansible_distribution }} {{ ansible_distribution_version }}" 66 | - "Operating systems supported by the resource package: {{ kuboardspray_supported_os.stdout | from_yaml }}" 67 | that: 68 | - kuboardspray_supported_os_versions|length > 0 69 | - ansible_distribution_version in kuboardspray_supported_os_versions[0].versions 70 | 71 | - name: "Check that a software repository has been selected for operating system {{ ansible_distribution }}" 72 | assert: 73 | msg: 74 | - "Please check {{ ansible_distribution }} under 'Global Settings' > 'Software Repositories'" 75 | that: 76 | - "hostvars[inventory_hostname]['kuboardspray_repo_' + ansible_distribution | lower | regex_replace(' ', '_')] is defined" 77 | 78 | # filter match the following variables: 79 | # ansible_default_ipv4 80 | # ansible_default_ipv6 81 | # ansible_all_ipv4_addresses 82 | # ansible_all_ipv6_addresses 83 | - name: Gather necessary facts (network) 84 | setup: 85 | gather_subset: '!all,!min,network' 86 | filter: "ansible_*_ipv[46]*" 87 | 88 | # filter match the following variables: 89 | # ansible_memtotal_mb 90 | # ansible_swaptotal_mb 91 | - name: Gather necessary facts (hardware) 92 | setup: 93 | gather_subset: '!all,!min,hardware' 94 | filter: "ansible_*total_mb" 95 | -------------------------------------------------------------------------------- /roles/renew-cert/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - debug: 3 | msg: "Start renewing the certificates on {{ inventory_hostname }}" 4 | 5 | - name: Generate new certificates 6 | shell: "/usr/local/bin/k8s-certs-renew.sh" 7 | 8 | - name: Wait for kube-apiserver to restart, this takes about 1 minute 9 | wait_for: 10 | timeout: 10 11 | 12 | - name: Wait for kube-apiserver to come back up 13 | shell: "set -o pipefail && curl -ik https://localhost:6443/healthz | grep -v '\"apiVersion\":' > /dev/null" 14 | args: 15 | executable: /bin/bash 16 | register: apiserver_is_restarted 17 | until: apiserver_is_restarted.rc == 0 18 | retries: "120" 19 | delay: "5" 20 | changed_when: false 21 | check_mode: no 22 | failed_when: apiserver_is_restarted.rc != 0 23 | -------------------------------------------------------------------------------- /roles/restore-etcd/defaults/main.yml:
-------------------------------------------------------------------------------- 1 | --- 2 | # Set to false to only do certificate management 3 | etcd_cluster_setup: true 4 | etcd_events_cluster_setup: false 5 | 6 | # Set to true to separate k8s events to a different etcd cluster 7 | etcd_events_cluster_enabled: false 8 | 9 | etcd_backup_prefix: "/var/backups" 10 | etcd_data_dir: "/var/lib/etcd" 11 | 12 | # Number of etcd backups to retain. Set to a value < 0 to retain all backups 13 | etcd_backup_retention_count: -1 14 | 15 | force_etcd_cert_refresh: true 16 | etcd_config_dir: /etc/ssl/etcd 17 | etcd_cert_dir: "{{ etcd_config_dir }}/ssl" 18 | etcd_cert_dir_mode: "0700" 19 | etcd_cert_group: root 20 | # Note: This does not set up DNS entries. It simply adds the following DNS 21 | # entries to the certificate 22 | etcd_cert_alt_names: 23 | - "etcd.kube-system.svc.{{ dns_domain }}" 24 | - "etcd.kube-system.svc" 25 | - "etcd.kube-system" 26 | - "etcd" 27 | etcd_cert_alt_ips: [] 28 | 29 | etcd_script_dir: "{{ bin_dir }}/etcd-scripts" 30 | 31 | etcd_heartbeat_interval: "250" 32 | etcd_election_timeout: "5000" 33 | 34 | # etcd_snapshot_count: "10000" 35 | 36 | etcd_metrics: "basic" 37 | 38 | # Define in inventory to set a separate port for etcd to expose metrics on 39 | # etcd_metrics_port: 2381 40 | 41 | ## A dictionary of extra environment variables to add to etcd.env, formatted like: 42 | ## etcd_extra_vars: 43 | ## ETCD_VAR1: "value1" 44 | ## ETCD_VAR2: "value2" 45 | etcd_extra_vars: {} 46 | 47 | 48 | etcd_blkio_weight: 1000 49 | 50 | etcd_node_cert_hosts: "{{ groups['k8s_cluster'] | union(groups.get('calico_rr', [])) }}" 51 | 52 | etcd_compaction_retention: "8" 53 | 54 | # Force clients like etcdctl to use TLS certs (different than peer security) 55 | etcd_secure_client: true 56 | 57 | # Enable peer client cert authentication 58 | etcd_peer_client_auth: true 59 | 60 | # Maximum number of snapshot files to retain (0 is unlimited) 61 | # etcd_max_snapshots: 5 62 | 63 
| # Maximum number of wal files to retain (0 is unlimited) 64 | # etcd_max_wals: 5 65 | 66 | # Number of loop retries 67 | etcd_retries: 4 68 | 69 | is_etcd_master: "{{ inventory_hostname in groups['etcd'] }}" 70 | etcd_access_addresses: |- 71 | {% for item in etcd_hosts -%} 72 | https://{{ hostvars[item]['etcd_access_address'] | default(hostvars[item]['ip'] | default(fallback_ips[item])) }}:2379{% if not loop.last %},{% endif %} 73 | {%- endfor %} 74 | etcd_hosts: "{{ groups['etcd'] | default(groups['kube_control_plane']) }}" 75 | 76 | retry_stagger: 5 -------------------------------------------------------------------------------- /roles/restore-etcd/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create Backup Temporary Directory 3 | file: 4 | path: "/tmp/etcd-restore" 5 | state: directory 6 | owner: root 7 | group: root 8 | mode: 0777 9 | 10 | - name: "Unarchive the backup file onto the target server" 11 | ansible.builtin.unarchive: 12 | src: "{{ kuboardspray_cluster_dir }}/backup/{{ backup_file_path }}/{{ backup_file_name }}" 13 | dest: "/tmp/etcd-restore" 14 | 15 | - name: Stop etcd 16 | systemd: 17 | name: etcd 18 | state: stopped 19 | 20 | - name: Remove etcd data-dir 21 | file: 22 | path: "{{ etcd_data_dir }}" 23 | state: absent 24 | 25 | - name: Restore etcd snapshot # noqa 301 305 26 | shell: "{{ bin_dir }}/etcdctl snapshot restore /tmp/etcd-restore/{{ backup_file_name | regex_replace('.tgz') }}/snapshot.db --name {{ etcd_member_name }} --initial-cluster {{ etcd_peer_addresses }} --initial-cluster-token k8s_etcd --initial-advertise-peer-urls {{ etcd_peer_url }} --data-dir {{ etcd_data_dir }}" 27 | environment: 28 | - ETCDCTL_CERT: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}.pem" 29 | - ETCDCTL_KEY: "{{ etcd_cert_dir }}/admin-{{ inventory_hostname }}-key.pem" 30 | - ETCDCTL_CACERT: "{{ etcd_cert_dir }}/ca.pem" 31 | - ETCDCTL_ENDPOINTS: "{{ etcd_access_addresses }}" 32 | - ETCDCTL_API: 3 33 | vars: 34 |
etcd_peer_addresses: >- 35 | {% for host in groups['etcd'] -%} 36 | {%- if hostvars[host] is defined -%} 37 | {{ "etcd"+loop.index|string }}=https://{{ hostvars[host].etcd_access_address | default(hostvars[host].ip | default(fallback_ips[host])) }}:2380 38 | {%- endif -%} 39 | {%- if loop.last -%} 40 | {%- else -%} 41 | , 42 | {%- endif -%} 43 | {%- endfor -%} 44 | 45 | - name: Remove etcd snapshot 46 | file: 47 | path: "/tmp/etcd-restore" 48 | state: absent 49 | 50 | - name: Change etcd data-dir owner 51 | file: 52 | path: "{{ etcd_data_dir }}" 53 | owner: etcd 54 | group: etcd 55 | recurse: true 56 | 57 | - name: Reconfigure etcd 58 | replace: 59 | path: /etc/etcd.env 60 | regexp: "^(ETCD_INITIAL_CLUSTER=).*" 61 | replace: '\1{{ etcd_member_name }}={{ etcd_peer_url }}' 62 | 63 | - name: Start etcd 64 | systemd: 65 | name: etcd 66 | state: started 67 | -------------------------------------------------------------------------------- /test-cases.md: -------------------------------------------------------------------------------- 1 | Reorder the control-plane nodes: 2 | 1. Adjust the order of the control-plane nodes in the inventory 3 | 2. Run upgrade-cluster.yml or cluster.yml 4 | 5 | Add worker nodes: 6 | 1. add node to inventory 7 | 2. all: facts.yml 8 | target_node: scale.yml 9 | 10 | Remove worker nodes: 11 | 1. target_node: remove-node.yml 12 | 2. delete node from inventory 13 | 14 | Add or remove control-plane nodes: 15 | 1. add node to inventory 16 | 2. all: cluster.yml 17 | 3. all: restart nginx-proxy pod. 18 | Still to verify: how nginx-proxy is started in clusters deployed with containerd 19 | ```sh 20 | docker ps | grep k8s_nginx-proxy_nginx-proxy | awk '{print $1}' | xargs docker restart 21 | ``` 22 | 4. delete node from inventory 23 | 24 | 25 | Replace the first control-plane node 26 | 1. Adjust the order of the control-plane nodes in the inventory 27 | 2. remove-node.yml to remove the original first control-plane node 28 | 3. kubectl edit cm -n kube-public cluster-info 29 | 4. Add the new control-plane node to the inventory 30 | 5. cluster.yml --limit=kube_control_plane 31 | 32 | 33 | # Basic test cases for a resource package: 34 | 35 | ## Install a cluster 36 | 37 | * Install a single-node cluster 38 | * Install a cluster with three control-plane nodes 39 | 40 | ## Add and remove nodes: 41 | 42 | 1. Add 2 worker nodes to the cluster 43 | 2. Add 2 control-plane nodes (with etcd) and 1 worker node to the cluster 44 | 2.1. Verify that the new etcd addresses have been added to /etc/kubernetes/manifests/apiserver.yaml on every control-plane node 45 | 2.2. Verify that the new apiserver addresses have been added to the local-loadbalance-proxy in /etc/nginx/nginx.conf on every worker node 46 | 3. Remove 2 worker nodes from the cluster (1 online, 1 offline) 47 | 4. Remove 1 control-plane node (with etcd) from the cluster, online node 48 | 4.1. Verify that the etcd address has been removed from /etc/kubernetes/manifests/apiserver.yaml on every control-plane node 49 | 4.2. Verify that the apiserver address has been removed from /etc/nginx/nginx.conf on every worker node 50 | 5. Remove 1 control-plane node (with etcd) from the cluster, offline node 51 | 5.1. Verify that the etcd address has been removed from /etc/kubernetes/manifests/apiserver.yaml on every control-plane node 52 | 5.2. Verify that the apiserver address has been removed from /etc/nginx/nginx.conf on every worker node 53 | 6. Remove 1 control-plane node (with etcd) and 1 worker node from the cluster, online nodes 54 | 6.1. Verify that the etcd address has been removed from /etc/kubernetes/manifests/apiserver.yaml on every control-plane node 55 | 6.2. Verify that the apiserver address has been removed from /etc/nginx/nginx.conf on every worker node 56 | 7. Remove 1 control-plane node (with etcd) and 1 worker node from the cluster, offline nodes 57 | 7.1. Verify that the etcd address has been removed from /etc/kubernetes/manifests/apiserver.yaml on every control-plane node 58 | 7.2. Verify that the apiserver address has been removed from /etc/nginx/nginx.conf on every worker node 59 | 60 | Test matrix 61 | 62 | --------------------------------------------------------------------------------
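The address-verification steps above (2.1/2.2 through 7.1/7.2) amount to grepping the relevant files on each node. A minimal sketch of the nginx.conf check, demonstrated against a sample file since `/etc/kubernetes/manifests/apiserver.yaml` and `/etc/nginx/nginx.conf` only exist on cluster nodes:

```shell
# Sketch of the nginx.conf check: count apiserver entries in the
# local-loadbalance-proxy upstream; expect one per control-plane node.
# The sample content below is illustrative; on a worker node, point
# the greps at /etc/nginx/nginx.conf instead.
conf=$(mktemp)
cat > "$conf" <<'EOF'
upstream local-loadbalance-proxy {
  server 10.0.0.1:6443;
  server 10.0.0.2:6443;
}
EOF

count=$(grep -c '^  server ' "$conf")   # number of apiserver upstreams
grep -q 'server 10.0.0.2:6443;' "$conf" && present=yes || present=no

echo "$count $present"
rm -f "$conf"
```

With two control-plane entries in the sample this prints `2 yes`; after a removal test (steps 4.2 to 7.2) the deleted node's address must no longer match.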