├── .github └── ISSUE_TEMPLATE │ └── ------.md ├── .gitignore ├── README.md ├── app ├── A.kubectl-more │ ├── bash-completion.sh │ └── k8s_rc.sh ├── B.kube-dashboard │ └── dashboard.yaml ├── C.kubespray │ ├── README.md │ ├── SuperPutty-Sessions.XML │ ├── Vagrantfile │ ├── auto_pass.sh │ ├── config.sh │ ├── install_pkg.sh │ ├── output │ │ ├── kubespray-1.18.7.output │ │ ├── kubespray-1.19.2.output │ │ ├── kubespray-latest-failure.output │ │ ├── kubespray-v2.17.0-k8s-v1.21.6.output │ │ ├── vagrantup-0.6.4.output │ │ ├── vagrantup-0.7.0.output │ │ └── vagrantup-0.7.4.output │ └── pre-kubespray.sh ├── D.DeepDiveContainer │ ├── config.json │ ├── ns-create.sh │ └── ns-remover.sh └── README.md ├── ch2 ├── 2.1.3 │ ├── Vagrantfile │ ├── vagrant_up(base_default).output │ └── vagrant_up.output ├── 2.2.1 │ ├── Vagrantfile │ └── vagrantup.output ├── 2.2.2 │ ├── Vagrantfile │ └── install_pkg.sh ├── 2.2.3 │ ├── Vagrantfile │ ├── config.sh │ ├── install_pkg.sh │ ├── ping_2_nds.sh │ └── vagrantup.output ├── 2.3.3 │ └── k8s.XML └── README.md ├── ch3 ├── 3.1.3 │ ├── Vagrantfile │ ├── config.sh │ ├── install_pkg.sh │ ├── master_node.sh │ └── work_nodes.sh ├── 3.1.6 │ └── nginx-pod.yaml ├── 3.2.10 │ └── rollout-nginx.yaml ├── 3.2.4 │ ├── echo-hname.yaml │ └── nginx-pod.yaml ├── 3.2.8 │ └── echo-hname.yaml ├── 3.3.1 │ ├── nodeport.yaml │ └── req_page.ps1 ├── 3.3.2 │ ├── ingress-config.yaml │ ├── ingress-nginx.yaml │ └── ingress.yaml ├── 3.3.4 │ ├── metallb-l2config.yaml │ ├── metallb.yaml │ └── req_page.ps1 ├── 3.3.5 │ └── metrics-server.yaml ├── 3.4.1 │ ├── Vagrantfile │ ├── config.sh │ ├── install_pkg.sh │ ├── master_node.sh │ └── work_nodes.sh ├── 3.4.2 │ └── metallb-l2config.yaml ├── 3.4.3 │ ├── limits-pvc.yaml │ ├── nfs-ip.yaml │ ├── nfs-pv.yaml │ ├── nfs-pvc-deploy.yaml │ ├── nfs-pvc.yaml │ └── quota-pvc.yaml ├── 3.4.4 │ ├── dynamic-pvc-deploy.yaml │ ├── dynamic-pvc.yaml │ ├── nfs-pvc-sts-svc.yaml │ ├── nfs-pvc-sts.yaml │ ├── standard.yaml │ └── sts-svc-domain.yaml └── README.md ├── ch4 ├── 4.2.3 │ ├── index-BindMount.html │ └── index-Volume.html ├── 4.3.1 │ ├── .explain-mvnw.txt │ ├── .mvn │ │ └── wrapper │ │ │ ├── MavenWrapperDownloader.java │ │ │ ├── maven-wrapper.jar │ │ │ └── maven-wrapper.properties │ ├── Dockerfile │ ├── mvnw │ ├── pom.xml │ └── src │ │ └── main │ │ ├── java │ │ └── com │ │ │ └── stark │ │ │ └── Industries │ │ │ ├── UltronPRJApplication.java │ │ │ └── UltronPRJController.java │ │ └── resources │ │ └── application.properties ├── 4.3.2 │ ├── .mvn │ │ └── wrapper │ │ │ ├── MavenWrapperDownloader.java │ │ │ ├── maven-wrapper.jar │ │ │ └── maven-wrapper.properties │ ├── Dockerfile │ ├── build-in-host.sh │ ├── mvnw │ ├── pom.xml │ └── src │ │ └── main │ │ ├── java │ │ └── com │ │ │ └── stark │ │ │ └── Industries │ │ │ ├── UltronPRJApplication.java │ │ │ └── UltronPRJController.java │ │ └── resources │ │ └── application.properties ├── 4.3.3 │ └── Dockerfile ├── 4.3.4 │ ├── Dockerfile │ └── k8s-SingleMaster-18.9_9_w_auto-compl │ │ ├── Vagrantfile │ │ ├── config.sh │ │ ├── install_pkg.sh │ │ ├── master_node.sh │ │ └── work_nodes.sh ├── 4.4.2 │ ├── create-registry.sh │ ├── remover.sh │ └── tls.csr ├── 4.4.3 │ ├── audit-trail │ │ ├── Dockerfile │ │ └── nginx.conf │ ├── echo-hname │ │ ├── Dockerfile │ │ ├── cert.crt │ │ ├── cert.key │ │ └── nginx.conf │ └── echo-ip │ │ ├── Dockerfile │ │ ├── cert.crt │ │ ├── cert.key │ │ └── nginx.conf └── README.md ├── ch5 ├── 5.2.2 │ ├── kustomize-install.sh │ ├── metallb-l2config.yaml │ ├── metallb.yaml │ └── namespace.yaml ├── 5.2.3 │ └── helm-install.sh 
├── 5.3.1 │ ├── jenkins-config.yaml │ ├── jenkins-install.sh │ ├── jenkins-volume.yaml │ └── nfs-exporter.sh ├── 5.4.1 │ └── echo-ip-101.freestyle ├── 5.5.1 │ ├── Jenkinsfile │ ├── README.md │ └── deployment.yaml ├── 5.5.2 │ └── Jenkinsfile ├── 5.5.3 │ └── Jenkinsfile └── README.md ├── ch6 ├── 6.2.1 │ ├── nfs-exporter.sh │ ├── prometheus-install.sh │ ├── prometheus-server-preconfig.sh │ └── prometheus-server-volume.yaml ├── 6.2.3 │ ├── nginx-status-annot.yaml │ └── nginx-status-metrics.yaml ├── 6.4.1 │ ├── grafana-install.sh │ ├── grafana-preconfig.sh │ ├── grafana-volume.yaml │ └── nfs-exporter.sh ├── 6.5.1 │ ├── alert-notifier.yaml │ ├── nfs-exporter.sh │ ├── prometheus-alertmanager-install.sh │ ├── prometheus-alertmanager-preconfig.sh │ ├── prometheus-alertmanager-volume.yaml │ └── values.yaml └── README.md └── docs ├── 6.7.테인트(Taints)와 톨러레이션(Tolerations)의 파드 할당 조건_v2.pdf ├── k8s-stnd-arch ├── 2022 │ ├── 2022-k8s-stnd-arch.pdf │ ├── README.md │ └── img │ │ └── 2022Jan13-landscape.cncf.io.png ├── 2023 │ ├── 2023-k8s-stnd-arch.pdf │ ├── README.md │ └── img │ │ ├── 2022Nov21-landscape.cncf.io.png │ │ └── 2023-k8s-stnd-arch-thumbnail.png ├── 2024 │ ├── 2024-k8s-stnd-arch.pdf │ ├── README.md │ └── img │ │ ├── 2023Dec11-landscape.cncf.io.png │ │ ├── 2023Oct11-graduated.cncf.io.png │ │ └── 2024-k8s-stnd-arch-thumbnail.png └── 2025 │ ├── 2025-k8s-stnd-arch.pdf │ ├── README.md │ └── img │ ├── 2024Dec15-landscape.cncf.io.png │ ├── 2024Nov09-graduated.cncf.io.png │ └── 2025-k8s-stnd-arch-thumbnail.png ├── troubleshooting-kubernetes.ko_kr.v2.pdf ├── 실습 이슈#1 - VritualBox host-only Network(MAC,Linux).pdf ├── 확장본#1 - 젠킨스의 FreeStyle로 만드는 개발-상용 환경 배포.pdf ├── 확장본#2 - 자바 개발자를 위한 컨테이너 이미지 빌드.pdf └── 확장본#3 - 깃옵스(GitOps)를 여행하려는 입문자를 위한 안내서.pdf /.github/ISSUE_TEMPLATE/------.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: 이슈 템플릿 3 | about: '책에 있는 내용이 잘못 된 경우 제기하는 이슈 ' 4 | title: "[ 챕터 위치 / 페이지 ] 이슈 제목" 5 | labels: '' 6 | assignees: '' 7 | 8 | --- 9 | **중요** 이슈를 받기 어려운 사항은 다음과 같습니다. 10 | 11 | 1. 책에 사용되는 환경 및 기술과 관련된 `질문`
12 | 또는 책에서 구성한 `환경외` 조건에서 발생한 이슈 13 | > `goto` 14 | > 1. 쿠버네티스 오픈 채팅: https://open.kakao.com/o/gxSooElb (암호: kubectl) 15 | > 2. 쿠버네티스 유저 그룹: https://www.facebook.com/groups/k8skr 16 | 17 | 2. 개개인의 환경에 영향을 받는 `vagrant` 에러(error)와 관련된 이슈
18 | 하지만 다음의 사항에 모두 해당 한다면 이슈를 부탁드립니다. 19 | > - [ ] 초기화된(새로 설치) 노트북(또는 PC) 20 | > - [ ] 책에서 제시한 프로그램과 동일한 버전 설치 21 | > - [ ] 할리스(IP가 겹침)를 제외한 카페 또는 집에서 실행 22 | > - [ ] 2대 이상 동일하게 문제가 발생함 23 | 24 | 3. 책의 `오타` 및 `형식의 오류` 25 | 이와 같은 경우에는 다음의 절차를 따라서 진행 26 | > - [ ] [길벗](https://www.gilbut.co.kr) 홈페이지에 접속 27 | > - [ ] 고객센터 클릭 28 | > - [ ] 1:1문의 접속 후에 오류 및 문의 사항을 제보 29 | 30 | 위의 사항 외에 `이슈`라면 가능한 빨리(평균적으로 1일 이내) 회신드리도록 하겠습니다. 31 | 32 | ## [ 내용 ] 33 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant/ 2 | .DS_Store 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # 컨테이너 인프라 환경 구축을 위한 쿠버네티스/도커 2 | 3 | 4 | 5 | 6 | > 🔔 **_알림1:_** VirtualBox 6.1.28 이후 버전에서는 Vagrant host-only network와 관련된 이슈과 맥과 리눅스에서 7 | > 발생합니다. 자세한 내용은 [다음의 문서](https://github.com/sysnet4admin/_Book_k8sInfra/blob/main/docs/%EC%8B%A4%EC%8A%B5%20%EC%9D%B4%EC%8A%88%231%20-%20VritualBox%20host-only%20Network(MAC%2CLinux).pdf)를 확인하시기 바랍니다. 8 | 9 | > 🔔 **_알림2:_** MetalLB의 [Docker 허브 저장소](https://hub.docker.com/u/metallb)가 더이상 사용되지 않게 됨으로서, quay.io로 변경하였습니다. 10 | > 이에 MetalLB 관련한 문제가 생기시는 경우 현재 수정된 소스를 다시 내려받으시기 바랍니다. 11 | 12 | > 🔔 **_알림3:_** MetalLB의 [쿠버네티스 인증서](https://kubernetes.io/docs/setup/best-practices/certificates/)의 기본값이 1년인 관계로 OVA의 경우 사용을 못하는 경우가 발생합니다. 13 | > 이에 OVA를 10년으로 변경하였습니다. 그리고 만약 vagrant up으로 배포한 랩의 사용기간이 1년이 다 되어가는 경우 [인증서를 갱신](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-certs/)하시기 바랍니다. 14 | 15 | > 🔔 **_알림4:_** 3장 도입부의 Docker 버전과 `signature key`간의 이슈가 해결되었습니다. 자세한 사항은 [다음](https://github.com/sysnet4admin/_Book_k8sInfra/issues/33#issuecomment-1890823571) 내용을 참고하세요 16 | 17 | > 🔔 **_알림5:_** 구글이 호스트하고 있던 쿠버네티스 저장소(Repository)가 종료됨에 따라 이를 제공하는 주소가 변경되었습니다. 자세한 사항은 [다음](https://www.inflearn.com/news/1198141) 내용을 참고하세요 18 | 19 | > 🔔 **_알림6:_** 도커허브에서 이미지를 내려받는 정책이 변경되었습니다. (기존 100/6H, 변경 10/1H). 그래서 CNI인 Calico를 quay.io에서 내려받도록 변경하였습니다. 자세한 사항은 [다음](https://inf.run/FD91H) 내용을 참고하세요 20 | 21 | 이 저장소는 [컨테이너 인프라 환경 구축을 위한 쿠버네티스/도커](http://www.yes24.com/Product/Goods/102099414) 책에 실습을 위한 코드를 제공합니다. 22 | 23 | 각 챕터별로 챕터에서 사용하는 스크립트 및 코드를 제공하고 있으며, 별도로 챕터에서 깊게 다루지 않는 부분은 [다른 저장소](https://github.com/iac-source)에서 다룹니다. 그리고 학습에 도움이 되실만한 문서를 디렉터리 [docs](https://github.com/sysnet4admin/_Book_k8sInfra/tree/main/docs)에 추가하였습니다. (2021.10.24) 24 | 25 | 이 저장소에서 다루는 챕터에 따라 제공되는 스크립트는 아래와 같습니다. 26 | 27 | 28 | *** 29 | 30 | ## 제공되는 스크립트 31 | 32 | 33 | 34 | 35 | 36 | 37 | 38 | 39 | 40 | 41 | 42 | 43 | 44 | 45 | 46 | 47 | 48 | 49 | 50 | 51 | 52 | 53 | 54 | 55 | 56 | 57 | 58 | 59 | 60 | 61 | 62 | 63 | 64 | 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | 76 | 77 | 78 | 79 | 80 | 81 | 82 | 83 |
| 경로 | 챕터 이름 | 사용 목적 및 제공 스크립트 |
|------|-----------|---------------------------|
| ch2 | 테스트 환경 구성하기 | 베이그런트를 이용해서 가상 테스트 환경을 자동으로 배포하기 위한 Vagrantfile과 프로비저닝 스크립트를 제공합니다. |
| ch3 | 쿠버네티스로 알아보는 현대적인 인프라 환경 | 쿠버네티스의 다양한 오브젝트를 구성하기 위한 야믈 파일과 쿠버네티스를 실습하기 위한 가상환경 배포 파일, 동작 테스트를 위한 스크립트를 제공합니다. |
| ch4 | 쿠버네티스를 이루는 컨테이너 도우미, 도커 | 도커의 일반적인 사용 방법, 도커 고급 기능을 사용하기 위한 가상환경 배포 파일 및 사설 도커 레지스트리를 구성하기 위한 스크립트를 제공합니다. |
| ch5 | 지속적인 통합과 배포 자동화, 젠킨스 | 헬름으로 쿠버네티스 환경에 젠킨스를 배포하고, CI/CD를 구현할 수 있는 스크립트를 제공합니다. |
| ch6 | 안정적인 운영을 완성하는 모니터링, 프로메테우스와 그라파나 | 헬름으로 쿠버네티스 환경에 프로메테우스와 그라파나를 배포하고 모니터링할 수 있는 스크립트를 제공합니다. |
| app | A. Kubectl을 더 쉽게 사용하기 | kubectl을 쉽게 사용할 수 있도록 구성된 스크립트를 제공합니다. |
| app | B. Kubespray로 쿠버네티스 자동 구성하기 | kubespray를 통해 쿠버네티스 클러스터를 자동으로 구축하기 위한 스크립트를 제공합니다. |
| app | C. 쿠버 대시보드 구성하기 | 쿠버네티스 대시보드를 배포하기 위한 스크립트를 제공합니다. |
| app | D. 컨테이너 깊게 들여다보기 | 컨테이너를 깊게 들여다보기 위한 스크립트를 제공합니다. |
84 | 85 | *** 86 | 87 | ## 저자 88 | - ✔️ [조 훈](https://github.com/sysnet4admin) 89 | - ✔️ [심근우](https://github.com/gnu-gnu) 90 | - ✔️ [문성주](https://github.com/seongjumoon) 91 | 92 | ## 도서 구입 안내 93 | 본 도서는 각 온오프라인 서점에서 만나보실 수 있습니다. 94 | - 📍 [YES24](https://bit.ly/3iq4L5W) 95 | - 📍 [알라딘](https://bit.ly/3cpo37M) 96 | - 📍 [교보문고](https://bit.ly/3g1dsC7) 97 | 98 | ## 책에서 사용하는 프로그램 번들팩 99 | `VirtualBox 6.1.12`, `vagrant 2.2.9` 100 | - 🗄️ [윈도우 사용자](https://1drv.ms/f/s!AgCLAIU_47PVhVKg9aAmX87p_Zho) 101 | - 🗄️ [맥OS 사용자](https://1drv.ms/f/s!AgCLAIU_47PVhVHaQWjq29B8VVt4) 102 | 103 | ## 🔔 베이그런트 설치로 너무 고생하시는 분들을 위한 이미지(OVA) 파일 104 | 현재 책의 쿠버네티스 실습 랩을 Vagrant가 아닌 이미지로 바로 구성할 수 있도록 OVA 이미지를 제공합니다.
105 | 다음의 두 가지 이미지 번들 팩을 제공합니다. 106 | - [3.1.3](https://1drv.ms/f/s!AgCLAIU_47PVhU--Y8kbfIuABW9i)에 해당하는 이미지 107 | - [4.3.4](https://1drv.ms/f/s!AgCLAIU_47PVhVAVyaX58v-QV44U)에 해당하는 이미지
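참고로 내려받은 OVA는 VirtualBox GUI의 가져오기(Import Appliance) 기능뿐 아니라 아래처럼 CLI로도 가져올 수 있습니다. 파일 이름(`m-k8s.ova`)은 예시이므로 실제 내려받은 파일 이름으로 바꿔서 사용하세요.

```bash
# OVA 가져오기 예시 - 파일 이름은 실제 내려받은 OVA에 맞게 변경하세요.
VBoxManage import m-k8s.ova
# 가져온 VM이 VirtualBox에 등록되었는지 확인
VBoxManage list vms
```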
108 | > 자세한 설정법에 관련한 영상은 아래에 [유용한 정보](#유용한-정보) 부분을 참고하시기 바랍니다. 109 | 110 | ## 유용한 정보 111 | - 📑 [Mac 및 Windows 사용자를 위한 터미널 프로그램인 타비(Tabby) 추천 및 설정법](https://youtu.be/4MhZxSS3Xm8) 112 | - 🎬 [`vagrant up` 실행 시에 발생하는 에러와 해결책 사례](https://www.inflearn.com/course/%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4-%EC%89%BD%EA%B2%8C%EC%8B%9C%EC%9E%91/lecture/72911?inst=cf657a9d) 113 | - 🎬 [테인트(Taints)와 톨러레이션(Tolerations) 설명 영상](https://www.inflearn.com/course/%EA%B7%B8%EB%A6%BC%EC%9C%BC%EB%A1%9C-%EB%B0%B0%EC%9A%B0%EB%8A%94-%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4/lecture/85683?inst=f3d96ed5) 114 | - 🎬 [멀티 컨텍스트 랩 환경 구성 on Ubuntu 설명 영상(10:15~)](https://www.inflearn.com/course/%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4-%EC%89%BD%EA%B2%8C%EC%8B%9C%EC%9E%91/lecture/73341?inst=cf657a9d) 115 | - 🎬 [쿠버네티스 v1.24에서 발생할 컨테이너 런타임의 변경에 관해서 (dockershim vs containerd)](https://www.inflearn.com/course/%EA%B7%B8%EB%A6%BC%EC%9C%BC%EB%A1%9C-%EB%B0%B0%EC%9A%B0%EB%8A%94-%EC%BF%A0%EB%B2%84%EB%84%A4%ED%8B%B0%EC%8A%A4/lecture/106937?inst=f3d96ed5) 116 | - 🎬 [쿠버네티스 실습 랩을 vagrant가 아닌 이미지로 바로 구성 설치하는 법](https://youtu.be/KxhSWf0ObEU) 117 | - 🎬 [슈퍼푸티 터미널을 생산성 있게 꾸미기](https://youtu.be/kv87ynbJlmk) 118 | 119 | ## 관련 문서 120 | - 📜 [왜 쿠버네티스는 systemd로 cgroup을 관리하려고 할까요?](https://www.slideshare.net/JoHoon1/systemd-cgroup) 121 | -------------------------------------------------------------------------------- /app/A.kubectl-more/bash-completion.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | #Usage: 3 | #1. bash <(curl -s https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/bash-completion.sh) 4 | 5 | # install bash-completion for kubectl 6 | yum install bash-completion -y 7 | 8 | # kubectl completion on bash-completion dir 9 | kubectl completion bash >/etc/bash_completion.d/kubectl 10 | 11 | # alias kubectl to k 12 | echo 'alias k=kubectl' >> ~/.bashrc 13 | echo 'complete -F __start_kubectl k' >> ~/.bashrc 14 | 15 | #Reload rc 16 | su - -------------------------------------------------------------------------------- /app/A.kubectl-more/k8s_rc.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # usage: 3 | # 1. Create 4 | # - bash <(curl -s https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/k8s_rc.sh) 5 | # 2. Remove 6 | # - sed -i '/source/d' .bashrc 7 | 8 | if grep -q sysnet4admin ~/.bashrc; then 9 | echo "k8s_rc already installed" 10 | exit 0 11 | fi 12 | 13 | echo -e "\n#custome rc provide @sysnet4admin " >> ~/.bashrc 14 | echo "source ~/.k8s_rc " >> ~/.bashrc 15 | 16 | cat > ~/.k8s_rc <<'EOF' 17 | #! /usr/bin/env bash 18 | # HoonJo ver0.6.0 19 | # https://github.com/sysnet4admin/IaC 20 | alias k='kubectl' 21 | alias kg='kubectl get' 22 | alias kgp='kubectl get pods' 23 | alias ka='kubectl apply -f' 24 | alias kc='kubectl create' 25 | alias ks='kubectl scale' 26 | alias ke='kubectl export' 27 | alias kgw='kubectl get $1 -o wide' 28 | kee(){ 29 | if [ $# -eq 1 ]; then 30 | kubectl exec -it $(kubectl get pods | tail --lines=+2 | awk '{print $1}' | awk NR==$1) -- /bin/bash; 31 | else 32 | echo "usage: kee " 33 | fi 34 | } 35 | keq(){ 36 | NAMESPACE=$1 37 | exi_chk=($(kubectl get namespaces | tail --lines=+2 | awk '{print $1}')) 38 | #check to exist namespace but it is not perfect due to /^word$/ is not work 39 | if [[ ! "${exi_chk[@]}" =~ "$NAMESPACE" ]]; then 40 | echo -e "$NAMESPACE isn't a namespace. 
Try other as below again:\n" 41 | kubectl get namespaces 42 | echo -e "\nusage: keq or keq [-c]\n" 43 | exit 1 44 | elif [ $# -eq 1 ]; then 45 | kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print NR " " $1}' 46 | echo -en "\nPlease select pod in $NAMESPACE: " 47 | read select 48 | kubectl exec -it -n $NAMESPACE $(kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print $1}' | awk NR==$select) -- /bin/sh; 49 | elif [ $# -eq 2 ]; then 50 | if [ ! $2 == "-c" ]; then 51 | echo -e "only -c option is available" 52 | exit 1 53 | fi 54 | echo -e "" 55 | kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print NR " " $1}' 56 | echo -en "\nPlease select pod in $NAMESPACE: " 57 | read select 58 | POD_SELECT=$select 59 | POD=$(kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print $1}' | awk NR==$select) 60 | echo -e "" 61 | kubectl describe pod -n $NAMESPACE $POD | grep -B 1 "Container ID" | egrep -v "Container|--" | awk -F":" '{print NR $1}' 62 | echo -en "\nPlease select container in: " 63 | read select 64 | CONTAINER=$(kubectl describe pod -n $NAMESPACE $POD | grep -B 1 "Container ID" | egrep -v "Container|--" | awk -F":" '{print $1}' | awk NR==$select) 65 | kubectl exec -it -n $NAMESPACE $(kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print $1}' | awk NR==$POD_SELECT) -c $CONTAINER -- /bin/sh; 66 | #default pod run 67 | elif [ -z $1 ]; then 68 | echo "" 69 | kubectl get pods | tail --lines=+2 | awk '{print NR " " $1}' 70 | echo -en "\nPlease select pod in default: " 71 | read select 72 | kubectl exec -it $(kubectl get pods | tail --lines=+2 | awk '{print $1}' | awk NR==$select) -- /bin/sh; 73 | else 74 | echo "" 75 | kubectl get namespace 76 | echo -e "\nusage: keq or keq [-c]\n" 77 | fi 78 | } 79 | kgpww(){ 80 | OPTION=$1 81 | NAMESPACE=$2 82 | exi_chk=($(kubectl get namespaces | tail --lines=+2 | awk '{print $1}')) 83 | if [[ ! "${exi_chk[@]}" =~ "$NAMESPACE" ]]; then 84 | echo -e "$NAMESPACE isn't a namespace. Try other as below again:\n" 85 | kubectl get namespaces 86 | echo -e "\nusage: kgpww -n \n" 87 | exit 1 88 | elif [ -z $OPTION ]; then 89 | kubectl get pods -w -o wide 90 | else 91 | case $OPTION in 92 | -A ) kubectl get pods --all-namespaces -w -o wide;; 93 | -n ) kubectl get pods -n $NAMESPACE -w -o wide;; 94 | * ) echo -e "$OPTION is not avaialble. Only -A and -n support\n";; 95 | esac 96 | fi 97 | } 98 | kgpws(){ 99 | OPTION=$1 100 | NAMESPACE=$2 101 | exi_chk=($(kubectl get namespaces | tail --lines=+2 | awk '{print $1}')) 102 | CstCol_lst="NAME:.metadata.name,STATUS:.status.phase,IP:.status.podIP,NODE:.spec.nodeName" 103 | if [[ ! "${exi_chk[@]}" =~ "$NAMESPACE" ]]; then 104 | echo -e "$NAMESPACE isn't a namespace. Try other as below again:\n" 105 | kubectl get namespaces 106 | echo -e "\nusage: kgpws -n \n" 107 | exit 1 108 | elif [ -z $OPTION ]; then 109 | kubectl get pods --all-namespaces -o wide | head -n +1 | sort -k 8 110 | kubectl get pods --all-namespaces -o wide | tail -n +2 | sort -k 8 111 | else 112 | case $OPTION in 113 | -n ) 114 | kubectl get pods -n $NAMESPACE -o custom-columns=$CstCol_lst | head -n +1 115 | kubectl get pods -n $NAMESPACE -o custom-columns=$CstCol_lst | tail -n +2 | sort -k 4;; 116 | * ) echo -e "$OPTION is not avaialble. Only -n support\n";; 117 | esac 118 | fi 119 | } 120 | kl(){ 121 | NAMESPACE=$1 122 | exi_chk=($(kubectl get namespaces | tail --lines=+2 | awk '{print $1}')) 123 | #check to exist namespace but it is not perfect due to /^word$/ is not work 124 | if [[ ! 
"${exi_chk[@]}" =~ "$NAMESPACE" ]]; then 125 | echo -e "$NAMESPACE isn't a namespace. Try other as below again:\n" 126 | kubectl get namespaces 127 | echo -e "\nusage: kl or kl \n" 128 | exit 1 129 | elif [ $# -eq 1 ]; then 130 | kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print NR " " $1}' 131 | echo -en "\nPlease select pod in $NAMESPACE: " 132 | read select 133 | kubectl logs -n $NAMESPACE $(kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print $1}' | awk NR==$select) 134 | #default pod run 135 | elif [ -z $1 ]; then 136 | echo "" 137 | kubectl get pods | tail --lines=+2 | awk '{print NR " " $1}' 138 | echo -en "\nPlease select pod in default: " 139 | read select 140 | kubectl logs $(kubectl get pods | tail --lines=+2 | awk '{print $1}' | awk NR==$select) 141 | else 142 | echo "" 143 | kubectl get namespace 144 | echo -e "\nusage: kl or kl \n" 145 | fi 146 | } 147 | kdp(){ 148 | NAMESPACE=$1 149 | exi_chk=($(kubectl get namespaces | tail --lines=+2 | awk '{print $1}')) 150 | #check to exist namespace but it is not perfect due to /^word$/ is not work 151 | if [[ ! "${exi_chk[@]}" =~ "$NAMESPACE" ]]; then 152 | echo -e "$NAMESPACE isn't a namespace. Try other as below again:\n" 153 | kubectl get namespaces 154 | echo -e "\nusage: kdp or kdp \n" 155 | exit 1 156 | elif [ $# -eq 1 ]; then 157 | kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print NR " " $1}' 158 | echo -en "\nPlease select pod in $NAMESPACE: " 159 | read select 160 | kubectl describe pods -n $NAMESPACE $(kubectl get pods -n $NAMESPACE | tail --lines=+2 | awk '{print $1}' | awk NR==$select) 161 | #default pod run 162 | elif [ -z $1 ]; then 163 | echo "" 164 | kubectl get pods | tail --lines=+2 | awk '{print NR " " $1}' 165 | echo -en "\nPlease select pod in default: " 166 | read select 167 | kubectl describe pods $(kubectl get pods | tail --lines=+2 | awk '{print $1}' | awk NR==$select) 168 | else 169 | echo "" 170 | kubectl get namespace 171 | echo -e "\nusage: kdp or kdp \n" 172 | fi 173 | } 174 | EOF 175 | 176 | #Reload rc 177 | su - -------------------------------------------------------------------------------- /app/B.kube-dashboard/dashboard.yaml: -------------------------------------------------------------------------------- 1 | # Copyright 2017 The Kubernetes Authors. 2 | # 3 | # Licensed under the Apache License, Version 2.0 (the "License"); 4 | # you may not use this file except in compliance with the License. 5 | # You may obtain a copy of the License at 6 | # 7 | # http://www.apache.org/licenses/LICENSE-2.0 8 | # 9 | # Unless required by applicable law or agreed to in writing, software 10 | # distributed under the License is distributed on an "AS IS" BASIS, 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | # See the License for the specific language governing permissions and 13 | # limitations under the License. 
14 | 15 | apiVersion: v1 16 | kind: Namespace 17 | metadata: 18 | name: kubernetes-dashboard 19 | 20 | --- 21 | 22 | apiVersion: v1 23 | kind: ServiceAccount 24 | metadata: 25 | labels: 26 | k8s-app: kubernetes-dashboard 27 | name: kubernetes-dashboard 28 | namespace: kubernetes-dashboard 29 | 30 | --- 31 | 32 | kind: Service 33 | apiVersion: v1 34 | metadata: 35 | labels: 36 | k8s-app: kubernetes-dashboard 37 | name: kubernetes-dashboard 38 | namespace: kubernetes-dashboard 39 | spec: 40 | ports: 41 | - port: 80 42 | targetPort: 9090 43 | nodePort: 31000 44 | selector: 45 | k8s-app: kubernetes-dashboard 46 | type: NodePort 47 | 48 | --- 49 | 50 | apiVersion: v1 51 | kind: Secret 52 | metadata: 53 | labels: 54 | k8s-app: kubernetes-dashboard 55 | name: kubernetes-dashboard-certs 56 | namespace: kubernetes-dashboard 57 | type: Opaque 58 | 59 | --- 60 | 61 | apiVersion: v1 62 | kind: Secret 63 | metadata: 64 | labels: 65 | k8s-app: kubernetes-dashboard 66 | name: kubernetes-dashboard-csrf 67 | namespace: kubernetes-dashboard 68 | type: Opaque 69 | data: 70 | csrf: "" 71 | 72 | --- 73 | 74 | apiVersion: v1 75 | kind: Secret 76 | metadata: 77 | labels: 78 | k8s-app: kubernetes-dashboard 79 | name: kubernetes-dashboard-key-holder 80 | namespace: kubernetes-dashboard 81 | type: Opaque 82 | 83 | --- 84 | 85 | kind: ConfigMap 86 | apiVersion: v1 87 | metadata: 88 | labels: 89 | k8s-app: kubernetes-dashboard 90 | name: kubernetes-dashboard-settings 91 | namespace: kubernetes-dashboard 92 | 93 | --- 94 | 95 | apiVersion: rbac.authorization.k8s.io/v1 96 | kind: ClusterRoleBinding 97 | metadata: 98 | name: kubernetes-dashboard 99 | roleRef: 100 | apiGroup: rbac.authorization.k8s.io 101 | kind: ClusterRole 102 | name: cluster-admin 103 | subjects: 104 | - kind: ServiceAccount 105 | name: kubernetes-dashboard 106 | namespace: kubernetes-dashboard 107 | 108 | --- 109 | 110 | kind: Deployment 111 | apiVersion: apps/v1 112 | metadata: 113 | labels: 114 | k8s-app: kubernetes-dashboard 115 | name: kubernetes-dashboard 116 | namespace: kubernetes-dashboard 117 | spec: 118 | replicas: 1 119 | revisionHistoryLimit: 10 120 | selector: 121 | matchLabels: 122 | k8s-app: kubernetes-dashboard 123 | template: 124 | metadata: 125 | labels: 126 | k8s-app: kubernetes-dashboard 127 | spec: 128 | containers: 129 | - name: kubernetes-dashboard 130 | image: kubernetesui/dashboard:v2.0.3 131 | imagePullPolicy: Always 132 | ports: 133 | - containerPort: 9090 134 | protocol: TCP 135 | args: 136 | - --enable-skip-login 137 | - --disable-settings-authorizer=true 138 | - --enable-insecure-login 139 | - --insecure-bind-address=0.0.0.0 140 | - --namespace=kubernetes-dashboard 141 | # Uncomment the following line to manually specify Kubernetes API server Host 142 | # If not specified, Dashboard will attempt to auto discover the API server and connect 143 | # to it. Uncomment only if the default does not work. 
144 | # - --apiserver-host=http://my-address:port 145 | volumeMounts: 146 | - name: kubernetes-dashboard-certs 147 | mountPath: /certs 148 | # Create on-disk volume to store exec logs 149 | - mountPath: /tmp 150 | name: tmp-volume 151 | livenessProbe: 152 | httpGet: 153 | scheme: HTTP 154 | path: / 155 | port: 9090 156 | initialDelaySeconds: 30 157 | timeoutSeconds: 30 158 | securityContext: 159 | allowPrivilegeEscalation: false 160 | readOnlyRootFilesystem: true 161 | runAsUser: 1001 162 | runAsGroup: 2001 163 | volumes: 164 | - name: kubernetes-dashboard-certs 165 | secret: 166 | secretName: kubernetes-dashboard-certs 167 | - name: tmp-volume 168 | emptyDir: {} 169 | serviceAccountName: kubernetes-dashboard 170 | nodeSelector: 171 | "kubernetes.io/hostname": m-k8s 172 | # Comment the following tolerations if Dashboard must not be deployed on master 173 | tolerations: 174 | - key: node-role.kubernetes.io/master 175 | effect: NoSchedule 176 | 177 | --- 178 | 179 | kind: Service 180 | apiVersion: v1 181 | metadata: 182 | labels: 183 | k8s-app: dashboard-metrics-scraper 184 | name: dashboard-metrics-scraper 185 | namespace: kubernetes-dashboard 186 | spec: 187 | ports: 188 | - port: 8000 189 | targetPort: 8000 190 | selector: 191 | k8s-app: dashboard-metrics-scraper 192 | 193 | --- 194 | 195 | kind: Deployment 196 | apiVersion: apps/v1 197 | metadata: 198 | labels: 199 | k8s-app: dashboard-metrics-scraper 200 | name: dashboard-metrics-scraper 201 | namespace: kubernetes-dashboard 202 | spec: 203 | replicas: 1 204 | revisionHistoryLimit: 10 205 | selector: 206 | matchLabels: 207 | k8s-app: dashboard-metrics-scraper 208 | template: 209 | metadata: 210 | labels: 211 | k8s-app: dashboard-metrics-scraper 212 | annotations: 213 | seccomp.security.alpha.kubernetes.io/pod: 'runtime/default' 214 | spec: 215 | containers: 216 | - name: dashboard-metrics-scraper 217 | image: kubernetesui/metrics-scraper:v1.0.4 218 | ports: 219 | - containerPort: 8000 220 | protocol: TCP 221 | livenessProbe: 222 | httpGet: 223 | scheme: HTTP 224 | path: / 225 | port: 8000 226 | initialDelaySeconds: 30 227 | timeoutSeconds: 30 228 | volumeMounts: 229 | - mountPath: /tmp 230 | name: tmp-volume 231 | securityContext: 232 | allowPrivilegeEscalation: false 233 | readOnlyRootFilesystem: true 234 | runAsUser: 1001 235 | runAsGroup: 2001 236 | serviceAccountName: kubernetes-dashboard 237 | nodeSelector: 238 | "kubernetes.io/hostname": m-k8s 239 | # Comment the following tolerations if Dashboard must not be deployed on master 240 | tolerations: 241 | - key: node-role.kubernetes.io/master 242 | effect: NoSchedule 243 | volumes: 244 | - name: tmp-volume 245 | emptyDir: {} 246 | -------------------------------------------------------------------------------- /app/C.kubespray/README.md: -------------------------------------------------------------------------------- 1 | # Running kubespray 2 | 1. login m11-k8s 3 | 2. **sh auto_pass.sh** 4 | 3. **ansible-playbook kubespray/cluster.yml -i ansible_hosts.ini**
5 | (if you need to add or remove for hosts, please modify ansible_hosts.ini manually.) -------------------------------------------------------------------------------- /app/C.kubespray/SuperPutty-Sessions.XML: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | -------------------------------------------------------------------------------- /app/C.kubespray/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | 6 | #=============# 7 | # Master Node # 8 | #=============# 9 | M = 3 # max number of master nodes 10 | N = 6 # max number of worker nodes 11 | 12 | (1..M).each do |m| 13 | config.vm.define "m1#{m}-k8s" do |cfg| 14 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 15 | cfg.vm.provider "virtualbox" do |vb| 16 | vb.name = "m1#{m}-k8s(github_SysNet4Admin)" 17 | vb.cpus = 2 18 | vb.memory = 2048 #minimum is 1500MB but ansible_memtotal_mb is less than set vaule 19 | vb.customize ["modifyvm", :id, "--groups", "/k8s-MtpMST-kubespray(github_SysNet4Admin)"] 20 | end 21 | cfg.vm.host_name = "m1#{m}-k8s" 22 | cfg.vm.network "private_network", ip: "192.168.1.1#{m}" 23 | cfg.vm.network "forwarded_port", guest: 22, host: "6001#{m}", auto_correct: true, id: "ssh" 24 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 25 | cfg.vm.provision "file", source: "auto_pass.sh", destination: "auto_pass.sh" 26 | cfg.vm.provision "shell", path: "install_pkg.sh" 27 | cfg.vm.provision "shell", path: "config.sh", args: [M, N] 28 | if m == 1 29 | cfg.vm.provision "shell", path: "pre-kubespray.sh" 30 | end 31 | end 32 | end 33 | 34 | #==============# 35 | # Worker Nodes # 36 | #==============# 37 | 38 | (1..N).each do |n| 39 | config.vm.define "w10#{n}-k8s" do |cfg| 40 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 41 | cfg.vm.provider "virtualbox" do |vb| 42 | vb.name = "w10#{n}-k8s(github_SysNet4Admin)" 43 | vb.cpus = 1 44 | vb.memory = 1536 #minimum is 1024MB but ansible_memtotal_mb is less than set vaule 45 | vb.customize ["modifyvm", :id, "--groups", "/k8s-MtpMST-kubespray(github_SysNet4Admin)"] 46 | end 47 | cfg.vm.host_name = "w10#{n}-k8s" 48 | cfg.vm.network "private_network", ip: "192.168.1.10#{n}" 49 | cfg.vm.network "forwarded_port", guest: 22, host: "6010#{n}", auto_correct: true, id: "ssh" 50 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 51 | cfg.vm.provision "file", source: "auto_pass.sh", destination: "auto_pass.sh" 52 | cfg.vm.provision "shell", path: "install_pkg.sh" 53 | cfg.vm.provision "shell", path: "config.sh", args: [M, N] 54 | end 55 | end 56 | 57 | end -------------------------------------------------------------------------------- /app/C.kubespray/auto_pass.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | #Auto_Pass 3 | #if you want to filter only ip then [grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}'] 4 | 5 | #make a directory 6 | mkdir ~/.ssh 7 | 8 | #Read hosts from file 9 | readarray hosts < /etc/hosts 10 | 11 | ##1.known_hosts## 12 | if [ ! -f ~/.ssh/known_hosts ]; then 13 | for host in ${hosts[@]}; do 14 | ssh-keyscan -t ecdsa ${host} >> ~/.ssh/known_hosts 15 | done 16 | fi 17 | 18 | ##2.authorized_keys 19 | if [ ! 
-f ~/.ssh/id_rsa.pub ]; then 20 | ssh-keygen -b 2048 -t rsa -f ~/.ssh/id_rsa -q -N '' 21 | for host in ${hosts[@]}; do 22 | sshpass -p vagrant ssh-copy-id -f ${host} 23 | done 24 | fi -------------------------------------------------------------------------------- /app/C.kubespray/config.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # vim configuration 4 | echo 'alias vi=vim' >> /etc/profile 5 | 6 | # swapoff -a to disable swapping 7 | swapoff -a 8 | # sed to comment the swap partition in /etc/fstab 9 | sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab 10 | 11 | # Set SELinux in permissive mode (effectively disabling it) 12 | setenforce 0 13 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 14 | 15 | # local small dns & vagrant cannot parse and delivery shell code. 16 | for (( m=1; m<=$1; m++ )); do echo "192.168.1.1$m m1$m-k8s" >> /etc/hosts; done 17 | for (( n=1; n<=$2; n++ )); do echo "192.168.1.10$n w10$n-k8s" >> /etc/hosts; done 18 | 19 | # config DNS 20 | cat < /etc/resolv.conf 21 | nameserver 1.1.1.1 #cloudflare DNS 22 | nameserver 8.8.8.8 #Google DNS 23 | EOF 24 | 25 | # authority between all masters and workers 26 | sudo mv auto_pass.sh /root 27 | sudo chmod 744 /root/auto_pass.sh 28 | 29 | # when git clone from windows '$'\r': command not found' issue happened 30 | sudo sed -i -e 's/\r$//' /root/auto_pass.sh 31 | -------------------------------------------------------------------------------- /app/C.kubespray/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # CentOS repo change from mirror to vault 4 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-* 5 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.conf 6 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.repos.d/CentOS-* 7 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.conf 8 | 9 | # kubernetes repo 10 | gg_pkg="http://mirrors.aliyun.com/kubernetes/yum" # Due to shorten addr for key 11 | cat < /etc/yum.repos.d/kubernetes.repo 12 | [kubernetes] 13 | name=Kubernetes 14 | baseurl=${gg_pkg}/repos/kubernetes-el7-x86_64 15 | enabled=1 16 | gpgcheck=0 17 | repo_gpgcheck=0 18 | gpgkey=${gg_pkg}/doc/yum-key.gpg ${gg_pkg}/doc/rpm-package-key.gpg 19 | EOF 20 | 21 | # install packages 22 | yum install epel-release -y 23 | yum install vim-enhanced -y 24 | yum install sshpass -y 25 | 26 | -------------------------------------------------------------------------------- /app/C.kubespray/pre-kubespray.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | yum install python36 python36-pip git -y 4 | 5 | # git clone https://github.com/kubernetes-sigs/kubespray.git 6 | # to avoid kubectl missing error 7 | git clone -b release-2.17 https://github.com/kubernetes-sigs/kubespray.git 8 | sudo mv kubespray /root 9 | 10 | # docker? it is not pre-requirement but if it is not exist, it will fail. 11 | # TASK [download : download_container | Download image if required] 12 | # 13 | #fatal: [w1-k8s -> 192.168.1.11]: FAILED! 
=> {"attempts": 4, "changed": false, "cmd": "/usr/bin/docker pull docker.io/calico/node:v3.7.3", "msg": "[Errno 2] 그런 파일이나 디렉터리가 없습니다", "rc": 2} 14 | #fatal: [w2-k8s -> 192.168.1.11]: FAILED! => {"attempts": 4, "changed": false, "cmd": "/usr/bin/docker pull docker.io/calico/node:v3.7.3", "msg": "[Errno 2] 그런 파일이나 디렉터리가 없습니다", "rc": 2} 15 | yum install docker -y 16 | systemctl enable --now docker 17 | 18 | # other ansible, jinja2 netaddr will be installed by requirement.txt (2019.09.24) 19 | #ansible==2.7.12 20 | #jinja2==2.10.1 21 | #netaddr==0.7.19 22 | #pbr==5.2.0 23 | #hvac==0.8.2 24 | #jmespath==0.1.4 25 | #ruamel.yaml==0.15.96 26 | pip3.6 install -r /root/kubespray/requirements.txt 27 | 28 | 29 | cat < /root/ansible_hosts.ini 30 | [all] 31 | m11-k8s ansible_host=192.168.1.11 ip=192.168.1.11 32 | m12-k8s ansible_host=192.168.1.12 ip=192.168.1.12 33 | m13-k8s ansible_host=192.168.1.13 ip=192.168.1.13 34 | w101-k8s ansible_host=192.168.1.101 ip=192.168.1.101 35 | w102-k8s ansible_host=192.168.1.102 ip=192.168.1.102 36 | w103-k8s ansible_host=192.168.1.103 ip=192.168.1.103 37 | w104-k8s ansible_host=192.168.1.104 ip=192.168.1.104 38 | w105-k8s ansible_host=192.168.1.105 ip=192.168.1.105 39 | w106-k8s ansible_host=192.168.1.106 ip=192.168.1.106 40 | 41 | 42 | [etcd] 43 | m11-k8s 44 | m12-k8s 45 | m13-k8s 46 | 47 | [kube-master] 48 | m11-k8s 49 | m12-k8s 50 | m13-k8s 51 | 52 | [kube-node] 53 | w101-k8s 54 | w102-k8s 55 | w103-k8s 56 | w104-k8s 57 | w105-k8s 58 | w106-k8s 59 | 60 | [calico-rr] 61 | 62 | [k8s-cluster:children] 63 | kube-master 64 | etcd 65 | kube-node 66 | calico-rr 67 | EOF 68 | -------------------------------------------------------------------------------- /app/D.DeepDiveContainer/config.json: -------------------------------------------------------------------------------- 1 | { 2 | "ociVersion": "1.0.1-dev", 3 | "process": { 4 | "terminal": true, 5 | "user": { 6 | "uid": 0, 7 | "gid": 0 8 | }, 9 | "args": [ 10 | "sh" 11 | ], 12 | "env": [ 13 | "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", 14 | "TERM=xterm" 15 | ], 16 | "cwd": "/", 17 | "capabilities": { 18 | "bounding": ["CAP_AUDIT_WRITE", "CAP_SETGID", "CAP_SETUID", "CAP_CHOWN", "CAP_KILL", "CAP_NET_BIND_SERVICE"], 19 | "permitted": ["CAP_AUDIT_WRITE", "CAP_SETGID", "CAP_SETUID", "CAP_CHOWN", "CAP_KILL", "CAP_NET_BIND_SERVICE"] 20 | }, 21 | "rlimits": [ 22 | { 23 | "type": "RLIMIT_NOFILE", 24 | "hard": 1024, 25 | "soft": 1024 26 | } 27 | ], 28 | "noNewPrivileges": true 29 | }, 30 | "root": { 31 | "path": "nginx-container", 32 | "readonly": false 33 | }, 34 | "hostname": "nginx", 35 | "mounts": [ 36 | { 37 | "destination": "/proc", 38 | "type": "proc", 39 | "source": "proc" 40 | }, 41 | { 42 | "destination": "/dev", 43 | "type": "tmpfs", 44 | "source": "tmpfs", 45 | "options": [ 46 | "nosuid", 47 | "strictatime", 48 | "mode=755", 49 | "size=65536k" 50 | ] 51 | }, 52 | { 53 | "destination": "/dev/pts", 54 | "type": "devpts", 55 | "source": "devpts", 56 | "options": [ 57 | "nosuid", 58 | "noexec", 59 | "newinstance", 60 | "ptmxmode=0666", 61 | "mode=0620", 62 | "gid=5" 63 | ] 64 | }, 65 | { 66 | "destination": "/dev/shm", 67 | "type": "tmpfs", 68 | "source": "shm", 69 | "options": [ 70 | "nosuid", 71 | "noexec", 72 | "nodev", 73 | "mode=1777", 74 | "size=65536k" 75 | ] 76 | }, 77 | { 78 | "destination": "/dev/mqueue", 79 | "type": "mqueue", 80 | "source": "mqueue", 81 | "options": [ 82 | "nosuid", 83 | "noexec", 84 | "nodev" 85 | ] 86 | }, 87 | { 88 | "destination": "/sys", 89 | "type": "sysfs", 90 | 
"source": "sysfs", 91 | "options": [ 92 | "nosuid", 93 | "noexec", 94 | "nodev", 95 | "ro" 96 | ] 97 | }, 98 | { 99 | "destination": "/sys/fs/cgroup", 100 | "type": "cgroup", 101 | "source": "cgroup", 102 | "options": [ 103 | "nosuid", 104 | "noexec", 105 | "nodev", 106 | "relatime", 107 | "ro" 108 | ] 109 | } 110 | ], 111 | "linux": { 112 | "resources": { 113 | "devices": [ 114 | { 115 | "allow": false, 116 | "access": "rwm" 117 | } 118 | ] 119 | }, 120 | "namespaces": [ 121 | { 122 | "type": "pid" 123 | }, 124 | { 125 | "type": "network", 126 | "path": "/var/run/netns/ns-nginx" 127 | 128 | }, 129 | { 130 | "type": "ipc" 131 | }, 132 | { 133 | "type": "uts" 134 | }, 135 | { 136 | "type": "mount" 137 | } 138 | ], 139 | "maskedPaths": [ 140 | "/proc/kcore", 141 | "/proc/latency_stats", 142 | "/proc/timer_list", 143 | "/proc/timer_stats", 144 | "/proc/sched_debug", 145 | "/sys/firmware", 146 | "/proc/scsi" 147 | ], 148 | "readonlyPaths": [ 149 | "/proc/asound", 150 | "/proc/bus", 151 | "/proc/fs", 152 | "/proc/irq", 153 | "/proc/sys", 154 | "/proc/sysrq-trigger" 155 | ] 156 | } 157 | } -------------------------------------------------------------------------------- /app/D.DeepDiveContainer/ns-create.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | yum install -y bridge-utils 3 | brctl addbr nginx 4 | ip link set nginx up 5 | ip addr add 192.168.200.1/24 dev nginx 6 | ip link add name vhost type veth peer name container 7 | ip netns add ns-nginx 8 | ip link set container netns ns-nginx 9 | ip netns exec ns-nginx ip link set container name eth1 10 | ip netns exec ns-nginx ip addr add 192.168.200.2/24 dev eth1 11 | ip netns exec ns-nginx ip link set eth1 up 12 | ip netns exec ns-nginx ip route add default via 192.168.200.1 13 | ip link set vhost up 14 | brctl addif nginx vhost -------------------------------------------------------------------------------- /app/D.DeepDiveContainer/ns-remover.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | ip netns delete ns-nginx 3 | ip link delete nginx 4 | ip link delete vhost 5 | yum erase bridge-utils -y -------------------------------------------------------------------------------- /app/README.md: -------------------------------------------------------------------------------- 1 | # 부록, A,B,C,D 2 | --- 3 | ## 부록 A kubectl을 더 쉽게 사용하기 4 | ## 부록 B 쿠버 대시보드 구성하기 5 | ## 부록 C kubectl을 더 쉽게 사용하기 6 | ## 부록 D 컨테이너 깊게 들여다보기 7 | -------------------------------------------------------------------------------- /ch2/2.1.3/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | # All Vagrant configuration is done below. The "2" in Vagrant.configure 5 | # configures the configuration version (we support older styles for 6 | # backwards compatibility). Please don't change it unless you know what 7 | # you're doing. 8 | Vagrant.configure("2") do |config| 9 | # The most common configuration options are documented and commented below. 10 | # For a complete reference, please see the online documentation at 11 | # https://docs.vagrantup.com. 12 | 13 | # Every Vagrant development environment requires a box. You can search for 14 | # boxes at https://vagrantcloud.com/search. 15 | config.vm.box = "sysnet4admin/CentOS-k8s" 16 | 17 | # Disable automatic box update checking. 
If you disable this, then 18 | # boxes will only be checked for updates when the user runs 19 | # `vagrant box outdated`. This is not recommended. 20 | # config.vm.box_check_update = false 21 | 22 | # Create a forwarded port mapping which allows access to a specific port 23 | # within the machine from a port on the host machine. In the example below, 24 | # accessing "localhost:8080" will access port 80 on the guest machine. 25 | # NOTE: This will enable public access to the opened port 26 | # config.vm.network "forwarded_port", guest: 80, host: 8080 27 | 28 | # Create a forwarded port mapping which allows access to a specific port 29 | # within the machine from a port on the host machine and only allow access 30 | # via 127.0.0.1 to disable public access 31 | # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1" 32 | 33 | # Create a private network, which allows host-only access to the machine 34 | # using a specific IP. 35 | # config.vm.network "private_network", ip: "192.168.33.10" 36 | 37 | # Create a public network, which generally matched to bridged network. 38 | # Bridged networks make the machine appear as another physical device on 39 | # your network. 40 | # config.vm.network "public_network" 41 | 42 | # Share an additional folder to the guest VM. The first argument is 43 | # the path on the host to the actual folder. The second argument is 44 | # the path on the guest to mount the folder. And the optional third 45 | # argument is a set of non-required options. 46 | # config.vm.synced_folder "../data", "/vagrant_data" 47 | 48 | # Provider-specific configuration so you can fine-tune various 49 | # backing providers for Vagrant. These expose provider-specific options. 50 | # Example for VirtualBox: 51 | # 52 | # config.vm.provider "virtualbox" do |vb| 53 | # # Display the VirtualBox GUI when booting the machine 54 | # vb.gui = true 55 | # 56 | # # Customize the amount of memory on the VM: 57 | # vb.memory = "1024" 58 | # end 59 | # 60 | # View the documentation for the provider you are using for more 61 | # information on available options. 62 | 63 | # Enable provisioning with a shell script. Additional provisioners such as 64 | # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the 65 | # documentation for more information about their specific syntax and use. 66 | # config.vm.provision "shell", inline: <<-SHELL 67 | # apt-get update 68 | # apt-get install -y apache2 69 | # SHELL 70 | end 71 | -------------------------------------------------------------------------------- /ch2/2.1.3/vagrant_up(base_default).output: -------------------------------------------------------------------------------- 1 | c:\HashiCorp>vagrant init 2 | A `Vagrantfile` has been placed in this directory. You are now 3 | ready to `vagrant up` your first virtual environment! Please read 4 | the comments in the Vagrantfile as well as documentation on 5 | `vagrantup.com` for more information on using Vagrant. 6 | 7 | c:\HashiCorp>vagrant up 8 | Bringing machine 'default' up with 'virtualbox' provider... 9 | ==> default: Box 'base' could not be found. Attempting to find and install... 10 | default: Box Provider: virtualbox 11 | default: Box Version: >= 0 12 | ==> default: Box file was not detected as metadata. Adding it directly... 13 | ==> default: Adding box 'base' (v0) for provider: virtualbox 14 | default: Downloading: base 15 | default: 16 | An error occurred while downloading the remote file. The error 17 | message, if any, is reproduced below. 
Please fix this error and try 18 | again. 19 | 20 | Couldn't open file c:/HashiCorp/base -------------------------------------------------------------------------------- /ch2/2.1.3/vagrant_up.output: -------------------------------------------------------------------------------- 1 | c:\HashiCorp>vagrant up 2 | Bringing machine 'default' up with 'virtualbox' provider... 3 | ==> default: Importing base box 'sysnet4admin/CentOS-k8s'... 4 | ==> default: Matching MAC address for NAT networking... 5 | ==> default: Checking if box 'sysnet4admin/CentOS-k8s' version '0.4.1' is up to date... 6 | ==> default: Setting the name of the VM: HashiCorp_default_1569930591907_4293 7 | ==> default: Clearing any previously set network interfaces... 8 | ==> default: Preparing network interfaces based on configuration... 9 | default: Adapter 1: nat 10 | ==> default: Forwarding ports... 11 | default: 22 (guest) => 2222 (host) (adapter 1) 12 | ==> default: Booting VM... 13 | ==> default: Waiting for machine to boot. This may take a few minutes... 14 | default: SSH address: 127.0.0.1:2222 15 | default: SSH username: vagrant 16 | default: SSH auth method: private key 17 | default: 18 | default: Vagrant insecure key detected. Vagrant will automatically replace 19 | default: this with a newly generated keypair for better security. 20 | default: 21 | default: Inserting generated public key within guest... 22 | default: Removing insecure key from the guest if it's present... 23 | default: Key inserted! Disconnecting and reconnecting using new SSH key... 24 | ==> default: Machine booted and ready! 25 | ==> default: Checking for guest additions in VM... 26 | default: The guest additions on this VM do not match the installed version of 27 | default: VirtualBox! In most cases this is fine, but in rare cases it can 28 | default: prevent things such as shared folders from working properly. If you see 29 | default: shared folder errors, please make sure the guest additions within the 30 | default: virtual machine match the version of VirtualBox you have installed on 31 | default: your host and reload your VM. 32 | default: 33 | default: Guest Additions Version: 5.2.12 34 | default: VirtualBox Version: 6.0 35 | ==> default: Mounting shared folders... 36 | default: /vagrant => C:/HashiCorp 37 | Vagrant was unable to mount VirtualBox shared folders. This is usually 38 | because the filesystem "vboxsf" is not available. This filesystem is 39 | made available via the VirtualBox Guest Additions and kernel module. 40 | Please verify that these guest additions are properly installed in the 41 | guest. This is not a bug in Vagrant and is usually caused by a faulty 42 | Vagrant box. 
For context, the command attempted was: 43 | 44 | mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant 45 | 46 | The error output from the command was: 47 | 48 | mount: unknown filesystem type 'vboxsf' -------------------------------------------------------------------------------- /ch2/2.2.1/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | Vagrant.configure("2") do |config| 4 | config.vm.define "m-k8s" do |cfg| 5 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 6 | cfg.vm.provider "virtualbox" do |vb| 7 | vb.name = "m-k8s(github_SysNet4Admin)" 8 | vb.cpus = 2 9 | vb.memory = 2048 10 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SM(github_SysNet4Admin)"] 11 | end 12 | cfg.vm.host_name = "m-k8s" 13 | cfg.vm.network "private_network", ip: "192.168.1.10" 14 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 15 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 16 | end 17 | end -------------------------------------------------------------------------------- /ch2/2.2.1/vagrantup.output: -------------------------------------------------------------------------------- 1 | c:\2.책 집필\3.IaaC(Infrastructure as a Container)\git_book\2. 랩을 자동으로 구성하는 도구들\2.2.1. 가상 머신에 필요한 설정 자동으로 구성하기>vagrant up 2 | Bringing machine 'm-k8s' up with 'virtualbox' provider... 3 | ==> m-k8s: Importing base box 'sysnet4admin/CentOS-k8s'... 4 | ==> m-k8s: Matching MAC address for NAT networking... 5 | ==> m-k8s: Checking if box 'sysnet4admin/CentOS-k8s' version '0.6.4' is up to date... 6 | ==> m-k8s: Setting the name of the VM: m-k8s(github_SysNet4Admin) 7 | ==> m-k8s: Clearing any previously set network interfaces... 8 | ==> m-k8s: Preparing network interfaces based on configuration... 9 | m-k8s: Adapter 1: nat 10 | m-k8s: Adapter 2: hostonly 11 | ==> m-k8s: Forwarding ports... 12 | m-k8s: 22 (guest) => 60010 (host) (adapter 1) 13 | ==> m-k8s: Running 'pre-boot' VM customizations... 14 | ==> m-k8s: Booting VM... 15 | ==> m-k8s: Waiting for machine to boot. This may take a few minutes... 16 | m-k8s: SSH address: 127.0.0.1:60010 17 | m-k8s: SSH username: vagrant 18 | m-k8s: SSH auth method: private key 19 | m-k8s: Warning: Connection aborted. Retrying... 20 | m-k8s: Warning: Remote connection disconnect. Retrying... 21 | m-k8s: Warning: Connection reset. Retrying... 22 | m-k8s: Warning: Connection aborted. Retrying... 23 | m-k8s: Warning: Remote connection disconnect. Retrying... 24 | m-k8s: Warning: Connection reset. Retrying... 25 | m-k8s: Warning: Connection aborted. Retrying... 26 | m-k8s: Warning: Remote connection disconnect. Retrying... 27 | m-k8s: 28 | m-k8s: Vagrant insecure key detected. Vagrant will automatically replace 29 | m-k8s: this with a newly generated keypair for better security. 30 | m-k8s: 31 | m-k8s: Inserting generated public key within guest... 32 | ==> m-k8s: Machine booted and ready! 33 | ==> m-k8s: Checking for guest additions in VM... 34 | m-k8s: The guest additions on this VM do not match the installed version of 35 | m-k8s: VirtualBox! In most cases this is fine, but in rare cases it can 36 | m-k8s: prevent things such as shared folders from working properly. If you see 37 | m-k8s: shared folder errors, please make sure the guest additions within the 38 | m-k8s: virtual machine match the version of VirtualBox you have installed on 39 | m-k8s: your host and reload your VM. 
40 | m-k8s: 41 | m-k8s: Guest Additions Version: 5.2.12 42 | m-k8s: VirtualBox Version: 6.0 43 | ==> m-k8s: Setting hostname... 44 | ==> m-k8s: Configuring and enabling network interfaces... -------------------------------------------------------------------------------- /ch2/2.2.2/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | Vagrant.configure("2") do |config| 4 | config.vm.define "m-k8s" do |cfg| 5 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 6 | cfg.vm.provider "virtualbox" do |vb| 7 | vb.name = "m-k8s(github_SysNet4Admin)" 8 | vb.cpus = 2 9 | vb.memory = 2048 10 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SM(github_SysNet4Admin)"] 11 | end 12 | cfg.vm.host_name = "m-k8s" 13 | cfg.vm.network "private_network", ip: "192.168.1.10" 14 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 15 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 16 | cfg.vm.provision "shell", path: "install_pkg.sh" #add provisioning script 17 | end 18 | end -------------------------------------------------------------------------------- /ch2/2.2.2/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # CentOS repo change from mirror to vault 4 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-* 5 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.conf 6 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.repos.d/CentOS-* 7 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.conf 8 | 9 | # install packages 10 | yum install epel-release -y 11 | yum install vim-enhanced -y 12 | -------------------------------------------------------------------------------- /ch2/2.2.3/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | config.vm.define "m-k8s" do |cfg| 6 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 7 | cfg.vm.provider "virtualbox" do |vb| 8 | vb.name = "m-k8s(github_SysNet4Admin)" 9 | vb.cpus = 2 10 | vb.memory = 2048 11 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SM(github_SysNet4Admin)"] 12 | end 13 | cfg.vm.host_name = "m-k8s" 14 | cfg.vm.network "private_network", ip: "192.168.1.10" 15 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 16 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 17 | cfg.vm.provision "shell", path: "install_pkg.sh" 18 | cfg.vm.provision "file", source: "ping_2_nds.sh", destination: "ping_2_nds.sh" 19 | cfg.vm.provision "shell", path: "config.sh" 20 | end 21 | 22 | #=============# 23 | # Added Nodes # 24 | #=============# 25 | 26 | (1..3).each do |i| 27 | config.vm.define "w#{i}-k8s" do |cfg| 28 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 29 | cfg.vm.provider "virtualbox" do |vb| 30 | vb.name = "w#{i}-k8s(github_SysNet4Admin)" 31 | vb.cpus = 1 32 | vb.memory = 1024 33 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SM(github_SysNet4Admin)"] 34 | end 35 | cfg.vm.host_name = "w#{i}-k8s" 36 | cfg.vm.network "private_network", ip: "192.168.1.10#{i}" 37 | cfg.vm.network "forwarded_port", guest: 22, host: 
"6010#{i}",auto_correct: true, id: "ssh" 38 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 39 | cfg.vm.provision "shell", path: "install_pkg.sh" 40 | end 41 | end 42 | end -------------------------------------------------------------------------------- /ch2/2.2.3/config.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # modify permission 3 | chmod 744 ./ping_2_nds.sh 4 | -------------------------------------------------------------------------------- /ch2/2.2.3/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # CentOS repo change from mirror to vault 4 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-* 5 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.conf 6 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.repos.d/CentOS-* 7 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.conf 8 | 9 | # install packages 10 | yum install epel-release -y 11 | yum install vim-enhanced -y 12 | -------------------------------------------------------------------------------- /ch2/2.2.3/ping_2_nds.sh: -------------------------------------------------------------------------------- 1 | # ping 3 times per nodes 2 | ping 192.168.1.101 -c 3 3 | ping 192.168.1.102 -c 3 4 | ping 192.168.1.103 -c 3 -------------------------------------------------------------------------------- /ch2/2.3.3/k8s.XML: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | -------------------------------------------------------------------------------- /ch2/README.md: -------------------------------------------------------------------------------- 1 | # 2장, 테스트 환경 구성하기 2 | --- 3 | ## 2.1 테스트 환경을 자동으로 구성하는 도구 4 | - 2.1.3 베이그런트 구성하고 테스트하기 5 | ## 2.2 베이그런트로 테스트 환경 구축하기 6 | - 2.2.1 가상 머신에 필요한 설정 자동으로 구성하기 7 | - 2.2.2 가상 머신에 추가 패키지 설치하기 8 | - 2.2.3 가상 머신 추가로 구성하기 9 | ## 2.3 터미널 프로그램으로 가상 머신 접속하기 10 | - 2.3.3 슈퍼푸티로 다수의 가상 머신 접속하기 -------------------------------------------------------------------------------- /ch3/3.1.3/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | N = 3 # max number of worker nodes 6 | Ver = '1.18.4' # Kubernetes Version to install 7 | 8 | #=============# 9 | # Master Node # 10 | #=============# 11 | 12 | config.vm.define "m-k8s" do |cfg| 13 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 14 | cfg.vm.provider "virtualbox" do |vb| 15 | vb.name = "m-k8s(github_SysNet4Admin)" 16 | vb.cpus = 2 17 | vb.memory = 3072 18 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-18.6.0(github_SysNet4Admin)"] 19 | end 20 | cfg.vm.host_name = "m-k8s" 21 | cfg.vm.network "private_network", ip: "192.168.1.10" 22 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 23 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 24 | cfg.vm.provision "shell", path: "config.sh", args: N 25 | cfg.vm.provision "shell", path: "install_pkg.sh", args: [ Ver, "Main" ] 26 | cfg.vm.provision "shell", path: "master_node.sh" 27 | end 28 | 29 | #==============# 30 | # Worker Nodes # 31 | 
#==============# 32 | 33 | (1..N).each do |i| 34 | config.vm.define "w#{i}-k8s" do |cfg| 35 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 36 | cfg.vm.provider "virtualbox" do |vb| 37 | vb.name = "w#{i}-k8s(github_SysNet4Admin)" 38 | vb.cpus = 1 39 | vb.memory = 2560 40 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-18.6.0(github_SysNet4Admin)"] 41 | end 42 | cfg.vm.host_name = "w#{i}-k8s" 43 | cfg.vm.network "private_network", ip: "192.168.1.10#{i}" 44 | cfg.vm.network "forwarded_port", guest: 22, host: "6010#{i}", auto_correct: true, id: "ssh" 45 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 46 | cfg.vm.provision "shell", path: "config.sh", args: N 47 | cfg.vm.provision "shell", path: "install_pkg.sh", args: Ver 48 | cfg.vm.provision "shell", path: "work_nodes.sh" 49 | end 50 | end 51 | 52 | end -------------------------------------------------------------------------------- /ch3/3.1.3/config.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # vim configuration 4 | echo 'alias vi=vim' >> /etc/profile 5 | 6 | # swapoff -a to disable swapping 7 | swapoff -a 8 | # sed to comment the swap partition in /etc/fstab 9 | sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab 10 | 11 | # CentOS repo change from mirror to vault 12 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-* 13 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.conf 14 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.repos.d/CentOS-* 15 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.conf 16 | 17 | # kubernetes repo 18 | gg_pkg="http://mirrors.aliyun.com/kubernetes/yum" # Due to shorten addr for key 19 | cat < /etc/yum.repos.d/kubernetes.repo 20 | [kubernetes] 21 | name=Kubernetes 22 | baseurl=${gg_pkg}/repos/kubernetes-el7-x86_64 23 | enabled=1 24 | gpgcheck=0 25 | repo_gpgcheck=0 26 | gpgkey=${gg_pkg}/doc/yum-key.gpg ${gg_pkg}/doc/rpm-package-key.gpg 27 | EOF 28 | 29 | # Set SELinux in permissive mode (effectively disabling it) 30 | setenforce 0 31 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 32 | 33 | # RHEL/CentOS 7 have reported traffic issues being routed incorrectly due to iptables bypassed 34 | cat < /etc/sysctl.d/k8s.conf 35 | net.bridge.bridge-nf-call-ip6tables = 1 36 | net.bridge.bridge-nf-call-iptables = 1 37 | EOF 38 | modprobe br_netfilter 39 | 40 | # local small dns & vagrant cannot parse and delivery shell code. 
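# $1 is the worker-node count N passed in from the Vagrantfile
# ("config.sh", args: N), so the loop below writes a static hosts entry
# for every w<i>-k8s worker alongside the master entry (192.168.1.10 m-k8s).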
41 | echo "192.168.1.10 m-k8s" >> /etc/hosts 42 | for (( i=1; i<=$1; i++ )); do echo "192.168.1.10$i w$i-k8s" >> /etc/hosts; done 43 | 44 | # config DNS 45 | cat < /etc/resolv.conf 46 | nameserver 1.1.1.1 #cloudflare DNS 47 | nameserver 8.8.8.8 #Google DNS 48 | EOF 49 | 50 | # docker repo 51 | yum install yum-utils -y 52 | yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo 53 | -------------------------------------------------------------------------------- /ch3/3.1.3/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # install packages 4 | yum install epel-release -y 5 | yum install vim-enhanced -y 6 | yum install git -y 7 | 8 | # install docker 9 | yum install docker-ce-18.06.0.ce-3.el7 docker-ce-cli-18.06.0.ce-3.el7 \ 10 | containerd.io-1.2.6-3.3.el7 -y 11 | systemctl enable --now docker 12 | 13 | # install kubernetes cluster 14 | yum install kubectl-$1 kubelet-$1 kubeadm-$1 -y 15 | systemctl enable --now kubelet 16 | 17 | # git clone _Book_k8sInfra.git 18 | if [ $2 = 'Main' ]; then 19 | git clone https://github.com/sysnet4admin/_Book_k8sInfra.git 20 | mv /home/vagrant/_Book_k8sInfra $HOME 21 | find $HOME/_Book_k8sInfra/ -regex ".*\.\(sh\)" -exec chmod 700 {} \; 22 | fi 23 | -------------------------------------------------------------------------------- /ch3/3.1.3/master_node.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # init kubernetes 4 | kubeadm init --token 123456.1234567890123456 --token-ttl 0 \ 5 | --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.1.10 6 | 7 | # config for master node only 8 | mkdir -p $HOME/.kube 9 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 10 | chown $(id -u):$(id -g) $HOME/.kube/config 11 | 12 | # config for kubernetes's network 13 | kubectl apply -f \ 14 | https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/172.16_net_calico.yaml -------------------------------------------------------------------------------- /ch3/3.1.3/work_nodes.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # config for work_nodes only 4 | kubeadm join --token 123456.1234567890123456 \ 5 | --discovery-token-unsafe-skip-ca-verification 192.168.1.10:6443 -------------------------------------------------------------------------------- /ch3/3.1.6/nginx-pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx-pod 5 | spec: 6 | containers: 7 | - name: container-name 8 | image: nginx 9 | -------------------------------------------------------------------------------- /ch3/3.2.10/rollout-nginx.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: rollout-nginx 5 | spec: 6 | replicas: 3 7 | selector: 8 | matchLabels: 9 | app: nginx 10 | template: 11 | metadata: 12 | labels: 13 | app: nginx 14 | spec: 15 | containers: 16 | - name: nginx 17 | image: nginx:1.15.12 18 | -------------------------------------------------------------------------------- /ch3/3.2.4/echo-hname.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: echo-hname 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | 
matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: echo-hname 19 | image: sysnet4admin/echo-hname -------------------------------------------------------------------------------- /ch3/3.2.4/nginx-pod.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: nginx-pod 5 | spec: 6 | containers: 7 | - name: container-name 8 | image: nginx 9 | -------------------------------------------------------------------------------- /ch3/3.2.8/echo-hname.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: echo-hname 5 | labels: 6 | app: nginx 7 | spec: 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: nginx 16 | spec: 17 | containers: 18 | - name: echo-hname 19 | image: sysnet4admin/echo-hname -------------------------------------------------------------------------------- /ch3/3.3.1/nodeport.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: np-svc 5 | spec: 6 | selector: 7 | app: np-pods 8 | ports: 9 | - name: http 10 | protocol: TCP 11 | port: 80 12 | targetPort: 80 13 | nodePort: 30000 14 | type: NodePort -------------------------------------------------------------------------------- /ch3/3.3.1/req_page.ps1: -------------------------------------------------------------------------------- 1 | #!/bin/powershell 2 | Param ( 3 | [Parameter(Mandatory=$true)] 4 | $IPwPort 5 | ) 6 | 7 | $i=0; while($true) 8 | { 9 | % { $i++; write-host -NoNewline "$i $_" } 10 | (Invoke-RestMethod "http://$IPwPort")-replace '\n', " " 11 | } -------------------------------------------------------------------------------- /ch3/3.3.2/ingress-config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: networking.k8s.io/v1beta1 2 | kind: Ingress 3 | metadata: 4 | name: ingress-nginx 5 | annotations: 6 | nginx.ingress.kubernetes.io/rewrite-target: / 7 | spec: 8 | rules: 9 | - http: 10 | paths: 11 | - path: 12 | backend: 13 | serviceName: hname-svc-default 14 | servicePort: 80 15 | - path: /ip 16 | backend: 17 | serviceName: ip-svc 18 | servicePort: 80 19 | - path: /your-directory 20 | backend: 21 | serviceName: your-svc 22 | servicePort: 80 23 | -------------------------------------------------------------------------------- /ch3/3.3.2/ingress-nginx.yaml: -------------------------------------------------------------------------------- 1 | # All of sources From https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml 2 | # clone from above to sysnet4admin 3 | 4 | apiVersion: v1 5 | kind: Namespace 6 | metadata: 7 | name: ingress-nginx 8 | labels: 9 | app.kubernetes.io/name: ingress-nginx 10 | app.kubernetes.io/part-of: ingress-nginx 11 | 12 | --- 13 | 14 | kind: ConfigMap 15 | apiVersion: v1 16 | metadata: 17 | name: nginx-configuration 18 | namespace: ingress-nginx 19 | labels: 20 | app.kubernetes.io/name: ingress-nginx 21 | app.kubernetes.io/part-of: ingress-nginx 22 | 23 | --- 24 | kind: ConfigMap 25 | apiVersion: v1 26 | metadata: 27 | name: tcp-services 28 | namespace: ingress-nginx 29 | labels: 30 | app.kubernetes.io/name: ingress-nginx 31 | app.kubernetes.io/part-of: ingress-nginx 32 | 33 | --- 34 | kind: 
ConfigMap 35 | apiVersion: v1 36 | metadata: 37 | name: udp-services 38 | namespace: ingress-nginx 39 | labels: 40 | app.kubernetes.io/name: ingress-nginx 41 | app.kubernetes.io/part-of: ingress-nginx 42 | 43 | --- 44 | apiVersion: v1 45 | kind: ServiceAccount 46 | metadata: 47 | name: nginx-ingress-serviceaccount 48 | namespace: ingress-nginx 49 | labels: 50 | app.kubernetes.io/name: ingress-nginx 51 | app.kubernetes.io/part-of: ingress-nginx 52 | 53 | --- 54 | apiVersion: rbac.authorization.k8s.io/v1beta1 55 | kind: ClusterRole 56 | metadata: 57 | name: nginx-ingress-clusterrole 58 | labels: 59 | app.kubernetes.io/name: ingress-nginx 60 | app.kubernetes.io/part-of: ingress-nginx 61 | rules: 62 | - apiGroups: 63 | - "" 64 | resources: 65 | - configmaps 66 | - endpoints 67 | - nodes 68 | - pods 69 | - secrets 70 | verbs: 71 | - list 72 | - watch 73 | - apiGroups: 74 | - "" 75 | resources: 76 | - nodes 77 | verbs: 78 | - get 79 | - apiGroups: 80 | - "" 81 | resources: 82 | - services 83 | verbs: 84 | - get 85 | - list 86 | - watch 87 | - apiGroups: 88 | - "" 89 | resources: 90 | - events 91 | verbs: 92 | - create 93 | - patch 94 | - apiGroups: 95 | - "extensions" 96 | - "networking.k8s.io" 97 | resources: 98 | - ingresses 99 | verbs: 100 | - get 101 | - list 102 | - watch 103 | - apiGroups: 104 | - "extensions" 105 | - "networking.k8s.io" 106 | resources: 107 | - ingresses/status 108 | verbs: 109 | - update 110 | 111 | --- 112 | apiVersion: rbac.authorization.k8s.io/v1beta1 113 | kind: Role 114 | metadata: 115 | name: nginx-ingress-role 116 | namespace: ingress-nginx 117 | labels: 118 | app.kubernetes.io/name: ingress-nginx 119 | app.kubernetes.io/part-of: ingress-nginx 120 | rules: 121 | - apiGroups: 122 | - "" 123 | resources: 124 | - configmaps 125 | - pods 126 | - secrets 127 | - namespaces 128 | verbs: 129 | - get 130 | - apiGroups: 131 | - "" 132 | resources: 133 | - configmaps 134 | resourceNames: 135 | # Defaults to "-" 136 | # Here: "-" 137 | # This has to be adapted if you change either parameter 138 | # when launching the nginx-ingress-controller. 
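# Note: the value below follows the "<election-id>-<ingress-class>" pattern;
# with the default election id "ingress-controller-leader" and the ingress
# class "nginx", it resolves to "ingress-controller-leader-nginx".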
139 | - "ingress-controller-leader-nginx" 140 | verbs: 141 | - get 142 | - update 143 | - apiGroups: 144 | - "" 145 | resources: 146 | - configmaps 147 | verbs: 148 | - create 149 | - apiGroups: 150 | - "" 151 | resources: 152 | - endpoints 153 | verbs: 154 | - get 155 | 156 | --- 157 | apiVersion: rbac.authorization.k8s.io/v1beta1 158 | kind: RoleBinding 159 | metadata: 160 | name: nginx-ingress-role-nisa-binding 161 | namespace: ingress-nginx 162 | labels: 163 | app.kubernetes.io/name: ingress-nginx 164 | app.kubernetes.io/part-of: ingress-nginx 165 | roleRef: 166 | apiGroup: rbac.authorization.k8s.io 167 | kind: Role 168 | name: nginx-ingress-role 169 | subjects: 170 | - kind: ServiceAccount 171 | name: nginx-ingress-serviceaccount 172 | namespace: ingress-nginx 173 | 174 | --- 175 | apiVersion: rbac.authorization.k8s.io/v1beta1 176 | kind: ClusterRoleBinding 177 | metadata: 178 | name: nginx-ingress-clusterrole-nisa-binding 179 | labels: 180 | app.kubernetes.io/name: ingress-nginx 181 | app.kubernetes.io/part-of: ingress-nginx 182 | roleRef: 183 | apiGroup: rbac.authorization.k8s.io 184 | kind: ClusterRole 185 | name: nginx-ingress-clusterrole 186 | subjects: 187 | - kind: ServiceAccount 188 | name: nginx-ingress-serviceaccount 189 | namespace: ingress-nginx 190 | 191 | --- 192 | 193 | apiVersion: apps/v1 194 | kind: Deployment 195 | metadata: 196 | name: nginx-ingress-controller 197 | namespace: ingress-nginx 198 | labels: 199 | app.kubernetes.io/name: ingress-nginx 200 | app.kubernetes.io/part-of: ingress-nginx 201 | spec: 202 | replicas: 1 203 | selector: 204 | matchLabels: 205 | app.kubernetes.io/name: ingress-nginx 206 | app.kubernetes.io/part-of: ingress-nginx 207 | template: 208 | metadata: 209 | labels: 210 | app.kubernetes.io/name: ingress-nginx 211 | app.kubernetes.io/part-of: ingress-nginx 212 | annotations: 213 | prometheus.io/port: "10254" 214 | prometheus.io/scrape: "true" 215 | spec: 216 | # wait up to five minutes for the drain of connections 217 | terminationGracePeriodSeconds: 300 218 | serviceAccountName: nginx-ingress-serviceaccount 219 | nodeSelector: 220 | kubernetes.io/os: linux 221 | containers: 222 | - name: nginx-ingress-controller 223 | image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 224 | args: 225 | - /nginx-ingress-controller 226 | - --configmap=$(POD_NAMESPACE)/nginx-configuration 227 | - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services 228 | - --udp-services-configmap=$(POD_NAMESPACE)/udp-services 229 | - --publish-service=$(POD_NAMESPACE)/ingress-nginx 230 | - --annotations-prefix=nginx.ingress.kubernetes.io 231 | securityContext: 232 | allowPrivilegeEscalation: true 233 | capabilities: 234 | drop: 235 | - ALL 236 | add: 237 | - NET_BIND_SERVICE 238 | # www-data -> 101 239 | runAsUser: 101 240 | env: 241 | - name: POD_NAME 242 | valueFrom: 243 | fieldRef: 244 | fieldPath: metadata.name 245 | - name: POD_NAMESPACE 246 | valueFrom: 247 | fieldRef: 248 | fieldPath: metadata.namespace 249 | ports: 250 | - name: http 251 | containerPort: 80 252 | protocol: TCP 253 | - name: https 254 | containerPort: 443 255 | protocol: TCP 256 | livenessProbe: 257 | failureThreshold: 3 258 | httpGet: 259 | path: /healthz 260 | port: 10254 261 | scheme: HTTP 262 | initialDelaySeconds: 10 263 | periodSeconds: 10 264 | successThreshold: 1 265 | timeoutSeconds: 10 266 | readinessProbe: 267 | failureThreshold: 3 268 | httpGet: 269 | path: /healthz 270 | port: 10254 271 | scheme: HTTP 272 | periodSeconds: 10 273 | successThreshold: 1 274 | 
timeoutSeconds: 10 275 | lifecycle: 276 | preStop: 277 | exec: 278 | command: 279 | - /wait-shutdown 280 | 281 | --- 282 | 283 | apiVersion: v1 284 | kind: LimitRange 285 | metadata: 286 | name: ingress-nginx 287 | namespace: ingress-nginx 288 | labels: 289 | app.kubernetes.io/name: ingress-nginx 290 | app.kubernetes.io/part-of: ingress-nginx 291 | spec: 292 | limits: 293 | - min: 294 | memory: 90Mi 295 | cpu: 100m 296 | type: Container -------------------------------------------------------------------------------- /ch3/3.3.2/ingress.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nginx-ingress-controller 5 | namespace: ingress-nginx 6 | spec: 7 | ports: 8 | - name: http 9 | protocol: TCP 10 | port: 80 11 | targetPort: 80 12 | nodePort: 30100 13 | - name: https 14 | protocol: TCP 15 | port: 443 16 | targetPort: 443 17 | nodePort: 30101 18 | selector: 19 | app.kubernetes.io/name: ingress-nginx 20 | type: NodePort -------------------------------------------------------------------------------- /ch3/3.3.4/metallb-l2config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: nginx-ip-range 10 | protocol: layer2 11 | addresses: 12 | - 192.168.1.11-192.168.1.13 -------------------------------------------------------------------------------- /ch3/3.3.4/metallb.yaml: -------------------------------------------------------------------------------- 1 | # All of sources From 2 | # - https://raw.githubusercontent.com/metallb/metallb/v0.8.3/manifests/metallb.yaml 3 | # clone from above to sysnet4admin 4 | 5 | apiVersion: v1 6 | kind: Namespace 7 | metadata: 8 | labels: 9 | app: metallb 10 | name: metallb-system 11 | --- 12 | apiVersion: policy/v1beta1 13 | kind: PodSecurityPolicy 14 | metadata: 15 | labels: 16 | app: metallb 17 | name: speaker 18 | namespace: metallb-system 19 | spec: 20 | allowPrivilegeEscalation: false 21 | allowedCapabilities: 22 | - NET_ADMIN 23 | - NET_RAW 24 | - SYS_ADMIN 25 | fsGroup: 26 | rule: RunAsAny 27 | hostNetwork: true 28 | hostPorts: 29 | - max: 7472 30 | min: 7472 31 | privileged: true 32 | runAsUser: 33 | rule: RunAsAny 34 | seLinux: 35 | rule: RunAsAny 36 | supplementalGroups: 37 | rule: RunAsAny 38 | volumes: 39 | - '*' 40 | --- 41 | apiVersion: v1 42 | kind: ServiceAccount 43 | metadata: 44 | labels: 45 | app: metallb 46 | name: controller 47 | namespace: metallb-system 48 | --- 49 | apiVersion: v1 50 | kind: ServiceAccount 51 | metadata: 52 | labels: 53 | app: metallb 54 | name: speaker 55 | namespace: metallb-system 56 | --- 57 | apiVersion: rbac.authorization.k8s.io/v1 58 | kind: ClusterRole 59 | metadata: 60 | labels: 61 | app: metallb 62 | name: metallb-system:controller 63 | rules: 64 | - apiGroups: 65 | - '' 66 | resources: 67 | - services 68 | verbs: 69 | - get 70 | - list 71 | - watch 72 | - update 73 | - apiGroups: 74 | - '' 75 | resources: 76 | - services/status 77 | verbs: 78 | - update 79 | - apiGroups: 80 | - '' 81 | resources: 82 | - events 83 | verbs: 84 | - create 85 | - patch 86 | --- 87 | apiVersion: rbac.authorization.k8s.io/v1 88 | kind: ClusterRole 89 | metadata: 90 | labels: 91 | app: metallb 92 | name: metallb-system:speaker 93 | rules: 94 | - apiGroups: 95 | - '' 96 | resources: 97 | - services 98 | - endpoints 99 | - nodes 100 | verbs: 101 | - get 102 | - 
list 103 | - watch 104 | - apiGroups: 105 | - '' 106 | resources: 107 | - events 108 | verbs: 109 | - create 110 | - patch 111 | - apiGroups: 112 | - extensions 113 | resourceNames: 114 | - speaker 115 | resources: 116 | - podsecuritypolicies 117 | verbs: 118 | - use 119 | --- 120 | apiVersion: rbac.authorization.k8s.io/v1 121 | kind: Role 122 | metadata: 123 | labels: 124 | app: metallb 125 | name: config-watcher 126 | namespace: metallb-system 127 | rules: 128 | - apiGroups: 129 | - '' 130 | resources: 131 | - configmaps 132 | verbs: 133 | - get 134 | - list 135 | - watch 136 | --- 137 | apiVersion: rbac.authorization.k8s.io/v1 138 | kind: ClusterRoleBinding 139 | metadata: 140 | labels: 141 | app: metallb 142 | name: metallb-system:controller 143 | roleRef: 144 | apiGroup: rbac.authorization.k8s.io 145 | kind: ClusterRole 146 | name: metallb-system:controller 147 | subjects: 148 | - kind: ServiceAccount 149 | name: controller 150 | namespace: metallb-system 151 | --- 152 | apiVersion: rbac.authorization.k8s.io/v1 153 | kind: ClusterRoleBinding 154 | metadata: 155 | labels: 156 | app: metallb 157 | name: metallb-system:speaker 158 | roleRef: 159 | apiGroup: rbac.authorization.k8s.io 160 | kind: ClusterRole 161 | name: metallb-system:speaker 162 | subjects: 163 | - kind: ServiceAccount 164 | name: speaker 165 | namespace: metallb-system 166 | --- 167 | apiVersion: rbac.authorization.k8s.io/v1 168 | kind: RoleBinding 169 | metadata: 170 | labels: 171 | app: metallb 172 | name: config-watcher 173 | namespace: metallb-system 174 | roleRef: 175 | apiGroup: rbac.authorization.k8s.io 176 | kind: Role 177 | name: config-watcher 178 | subjects: 179 | - kind: ServiceAccount 180 | name: controller 181 | - kind: ServiceAccount 182 | name: speaker 183 | --- 184 | apiVersion: apps/v1 185 | kind: DaemonSet 186 | metadata: 187 | labels: 188 | app: metallb 189 | component: speaker 190 | name: speaker 191 | namespace: metallb-system 192 | spec: 193 | selector: 194 | matchLabels: 195 | app: metallb 196 | component: speaker 197 | template: 198 | metadata: 199 | annotations: 200 | prometheus.io/port: '7472' 201 | prometheus.io/scrape: 'true' 202 | labels: 203 | app: metallb 204 | component: speaker 205 | spec: 206 | containers: 207 | - args: 208 | - --port=7472 209 | - --config=config 210 | env: 211 | - name: METALLB_NODE_NAME 212 | valueFrom: 213 | fieldRef: 214 | fieldPath: spec.nodeName 215 | - name: METALLB_HOST 216 | valueFrom: 217 | fieldRef: 218 | fieldPath: status.hostIP 219 | image: quay.io/metallb/speaker:v0.8.2 220 | imagePullPolicy: IfNotPresent 221 | name: speaker 222 | ports: 223 | - containerPort: 7472 224 | name: monitoring 225 | resources: 226 | limits: 227 | cpu: 100m 228 | memory: 100Mi 229 | securityContext: 230 | allowPrivilegeEscalation: false 231 | capabilities: 232 | add: 233 | - NET_ADMIN 234 | - NET_RAW 235 | - SYS_ADMIN 236 | drop: 237 | - ALL 238 | readOnlyRootFilesystem: true 239 | hostNetwork: true 240 | nodeSelector: 241 | beta.kubernetes.io/os: linux 242 | serviceAccountName: speaker 243 | terminationGracePeriodSeconds: 0 244 | tolerations: 245 | - effect: NoSchedule 246 | key: node-role.kubernetes.io/master 247 | --- 248 | apiVersion: apps/v1 249 | kind: Deployment 250 | metadata: 251 | labels: 252 | app: metallb 253 | component: controller 254 | name: controller 255 | namespace: metallb-system 256 | spec: 257 | revisionHistoryLimit: 3 258 | selector: 259 | matchLabels: 260 | app: metallb 261 | component: controller 262 | template: 263 | metadata: 264 | annotations: 265 | 
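# Annotation-based discovery: a Prometheus server that honours these
# annotations will scrape the controller's metrics endpoint on port 7472.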
prometheus.io/port: '7472' 266 | prometheus.io/scrape: 'true' 267 | labels: 268 | app: metallb 269 | component: controller 270 | spec: 271 | containers: 272 | - args: 273 | - --port=7472 274 | - --config=config 275 | image: quay.io/metallb/controller:v0.8.2 276 | imagePullPolicy: IfNotPresent 277 | name: controller 278 | ports: 279 | - containerPort: 7472 280 | name: monitoring 281 | resources: 282 | limits: 283 | cpu: 100m 284 | memory: 100Mi 285 | securityContext: 286 | allowPrivilegeEscalation: false 287 | capabilities: 288 | drop: 289 | - all 290 | readOnlyRootFilesystem: true 291 | nodeSelector: 292 | beta.kubernetes.io/os: linux 293 | securityContext: 294 | runAsNonRoot: true 295 | runAsUser: 65534 296 | serviceAccountName: controller 297 | terminationGracePeriodSeconds: 0 298 | -------------------------------------------------------------------------------- /ch3/3.3.4/req_page.ps1: -------------------------------------------------------------------------------- 1 | #!/bin/powershell 2 | Param ( 3 | [Parameter(Mandatory=$true)] 4 | $IPwPort 5 | ) 6 | 7 | $i=0; while($true) 8 | { 9 | % { $i++; write-host -NoNewline "$i $_" } 10 | (Invoke-RestMethod "http://$IPwPort")-replace '\n', " " 11 | } -------------------------------------------------------------------------------- /ch3/3.3.5/metrics-server.yaml: -------------------------------------------------------------------------------- 1 | #Main_Source_From: 2 | # - https://github.com/kubernetes-sigs/metrics-server 3 | 4 | #aggregated-metrics-reader.yaml 5 | --- 6 | apiVersion: rbac.authorization.k8s.io/v1 7 | kind: ClusterRole 8 | metadata: 9 | name: system:aggregated-metrics-reader 10 | labels: 11 | rbac.authorization.k8s.io/aggregate-to-view: "true" 12 | rbac.authorization.k8s.io/aggregate-to-edit: "true" 13 | rbac.authorization.k8s.io/aggregate-to-admin: "true" 14 | rules: 15 | - apiGroups: ["metrics.k8s.io"] 16 | resources: ["pods", "nodes"] 17 | verbs: ["get", "list", "watch"] 18 | 19 | #auth-delegator.yaml 20 | --- 21 | apiVersion: rbac.authorization.k8s.io/v1 22 | kind: ClusterRoleBinding 23 | metadata: 24 | name: metrics-server:system:auth-delegator 25 | roleRef: 26 | apiGroup: rbac.authorization.k8s.io 27 | kind: ClusterRole 28 | name: system:auth-delegator 29 | subjects: 30 | - kind: ServiceAccount 31 | name: metrics-server 32 | namespace: kube-system 33 | 34 | #auth-reader.yaml 35 | --- 36 | apiVersion: rbac.authorization.k8s.io/v1 37 | kind: RoleBinding 38 | metadata: 39 | name: metrics-server-auth-reader 40 | namespace: kube-system 41 | roleRef: 42 | apiGroup: rbac.authorization.k8s.io 43 | kind: Role 44 | name: extension-apiserver-authentication-reader 45 | subjects: 46 | - kind: ServiceAccount 47 | name: metrics-server 48 | namespace: kube-system 49 | 50 | #metrics-apiservice.yaml 51 | --- 52 | apiVersion: apiregistration.k8s.io/v1beta1 53 | kind: APIService 54 | metadata: 55 | name: v1beta1.metrics.k8s.io 56 | spec: 57 | service: 58 | name: metrics-server 59 | namespace: kube-system 60 | group: metrics.k8s.io 61 | version: v1beta1 62 | insecureSkipTLSVerify: true 63 | groupPriorityMinimum: 100 64 | versionPriority: 100 65 | 66 | #metrics-server-deployment.yaml 67 | --- 68 | apiVersion: v1 69 | kind: ServiceAccount 70 | metadata: 71 | name: metrics-server 72 | namespace: kube-system 73 | --- 74 | apiVersion: apps/v1 75 | kind: Deployment 76 | metadata: 77 | name: metrics-server 78 | namespace: kube-system 79 | labels: 80 | k8s-app: metrics-server 81 | spec: 82 | selector: 83 | matchLabels: 84 | k8s-app: metrics-server 85 | 
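# The pod template below must carry the same k8s-app: metrics-server label,
# otherwise this selector will not match the pods the Deployment creates.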
template: 86 | metadata: 87 | name: metrics-server 88 | labels: 89 | k8s-app: metrics-server 90 | spec: 91 | serviceAccountName: metrics-server 92 | volumes: 93 | # mount in tmp so we can safely use from-scratch images and/or read-only containers 94 | - name: tmp-dir 95 | emptyDir: {} 96 | containers: 97 | - name: metrics-server 98 | image: k8s.gcr.io/metrics-server-amd64:v0.3.6 99 | args: 100 | # Manually Add for lab env(Sysnet4admin/k8s) 101 | # skip tls internal usage purpose 102 | - --kubelet-insecure-tls 103 | # kubelet could use internalIP communication 104 | - --kubelet-preferred-address-types=InternalIP 105 | - --cert-dir=/tmp 106 | - --secure-port=4443 107 | ports: 108 | - name: main-port 109 | containerPort: 4443 110 | protocol: TCP 111 | securityContext: 112 | readOnlyRootFilesystem: true 113 | runAsNonRoot: true 114 | runAsUser: 1000 115 | imagePullPolicy: Always 116 | volumeMounts: 117 | - name: tmp-dir 118 | mountPath: /tmp 119 | nodeSelector: 120 | beta.kubernetes.io/os: linux 121 | kubernetes.io/arch: "amd64" 122 | 123 | #metrics-server-service.yaml 124 | --- 125 | apiVersion: v1 126 | kind: Service 127 | metadata: 128 | name: metrics-server 129 | namespace: kube-system 130 | labels: 131 | kubernetes.io/name: "Metrics-server" 132 | kubernetes.io/cluster-service: "true" 133 | spec: 134 | selector: 135 | k8s-app: metrics-server 136 | ports: 137 | - port: 443 138 | protocol: TCP 139 | targetPort: main-port 140 | 141 | #resource-reader.yaml 142 | --- 143 | apiVersion: rbac.authorization.k8s.io/v1 144 | kind: ClusterRole 145 | metadata: 146 | name: system:metrics-server 147 | rules: 148 | - apiGroups: 149 | - "" 150 | resources: 151 | - pods 152 | - nodes 153 | - nodes/stats 154 | - namespaces 155 | - configmaps 156 | verbs: 157 | - get 158 | - list 159 | - watch 160 | --- 161 | apiVersion: rbac.authorization.k8s.io/v1 162 | kind: ClusterRoleBinding 163 | metadata: 164 | name: system:metrics-server 165 | roleRef: 166 | apiGroup: rbac.authorization.k8s.io 167 | kind: ClusterRole 168 | name: system:metrics-server 169 | subjects: 170 | - kind: ServiceAccount 171 | name: metrics-server 172 | namespace: kube-system 173 | -------------------------------------------------------------------------------- /ch3/3.4.1/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | N = 4 # max number of worker nodes 6 | Ver = '1.18.4' # Kubernetes Version to install 7 | 8 | #=============# 9 | # Master Node # 10 | #=============# 11 | 12 | config.vm.define "m-k8s" do |cfg| 13 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 14 | cfg.vm.provider "virtualbox" do |vb| 15 | vb.name = "m-k8s(github_SysNet4Admin)" 16 | vb.cpus = 2 17 | vb.memory = 3072 18 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-1.13.1(github_SysNet4Admin)"] 19 | end 20 | cfg.vm.host_name = "m-k8s" 21 | cfg.vm.network "private_network", ip: "192.168.1.10" 22 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 23 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 24 | cfg.vm.provision "shell", path: "config.sh", args: N 25 | cfg.vm.provision "shell", path: "install_pkg.sh", args: [ Ver, "install_kubectl" ] 26 | cfg.vm.provision "shell", path: "master_node.sh" 27 | end 28 | 29 | #==============# 30 | # Worker Nodes # 31 | #==============# 32 | 33 | (1..N).each do |i| 34 | config.vm.define "w#{i}-k8s" do |cfg| 35 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 36 
| cfg.vm.provider "virtualbox" do |vb| 37 | vb.name = "w#{i}-k8s(github_SysNet4Admin)" 38 | vb.cpus = 1 39 | vb.memory = 2560 40 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-1.13.1(github_SysNet4Admin)"] 41 | end 42 | cfg.vm.host_name = "w#{i}-k8s" 43 | cfg.vm.network "private_network", ip: "192.168.1.10#{i}" 44 | cfg.vm.network "forwarded_port", guest: 22, host: "6010#{i}", auto_correct: true, id: "ssh" 45 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 46 | cfg.vm.provision "shell", path: "config.sh", args: N 47 | cfg.vm.provision "shell", path: "install_pkg.sh", args: Ver 48 | cfg.vm.provision "shell", path: "work_nodes.sh" 49 | end 50 | end 51 | 52 | end -------------------------------------------------------------------------------- /ch3/3.4.1/config.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # vim configuration 4 | echo 'alias vi=vim' >> /etc/profile 5 | 6 | # swapoff -a to disable swapping 7 | swapoff -a 8 | # sed to comment the swap partition in /etc/fstab 9 | sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab 10 | 11 | # kubernetes repo 12 | gg_pkg="http://mirrors.aliyun.com/kubernetes/yum" # Due to shorten addr for key 13 | cat < /etc/yum.repos.d/kubernetes.repo 14 | [kubernetes] 15 | name=Kubernetes 16 | baseurl=${gg_pkg}/repos/kubernetes-el7-x86_64 17 | enabled=1 18 | gpgcheck=0 19 | repo_gpgcheck=0 20 | gpgkey=${gg_pkg}/doc/yum-key.gpg ${gg_pkg}/doc/rpm-package-key.gpg 21 | EOF 22 | 23 | # Set SELinux in permissive mode (effectively disabling it) 24 | setenforce 0 25 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 26 | 27 | # RHEL/CentOS 7 have reported traffic issues being routed incorrectly due to iptables bypassed 28 | cat < /etc/sysctl.d/k8s.conf 29 | net.bridge.bridge-nf-call-ip6tables = 1 30 | net.bridge.bridge-nf-call-iptables = 1 31 | EOF 32 | modprobe br_netfilter 33 | 34 | # local small dns & vagrant cannot parse and delivery shell code. 
35 | echo "192.168.1.10 m-k8s" >> /etc/hosts 36 | for (( i=1; i<=$1; i++ )); do echo "192.168.1.10$i w$i-k8s" >> /etc/hosts; done 37 | 38 | # config DNS 39 | cat < /etc/resolv.conf 40 | nameserver 1.1.1.1 #cloudflare DNS 41 | nameserver 8.8.8.8 #Google DNS 42 | EOF 43 | 44 | -------------------------------------------------------------------------------- /ch3/3.4.1/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # install packages 4 | yum install epel-release -y 5 | yum install vim-enhanced -y 6 | yum install git -y 7 | 8 | # install docker 9 | yum install docker -y && systemctl enable --now docker 10 | 11 | # install kubernetes and kubectl will install only master node 12 | if [ $2 = 'install_kubectl' ]; then 13 | yum install kubectl-$1 -y 14 | fi 15 | yum install kubelet-$1 kubeadm-$1 -y 16 | systemctl enable --now kubelet -------------------------------------------------------------------------------- /ch3/3.4.1/master_node.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # init kubernetes 4 | kubeadm init --token 123456.1234567890123456 --token-ttl 0 \ 5 | --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.1.10 6 | 7 | # config for master node only 8 | mkdir -p $HOME/.kube 9 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 10 | chown $(id -u):$(id -g) $HOME/.kube/config 11 | 12 | # config for kubernetes's network 13 | kubectl apply -f \ 14 | https://raw.githubusercontent.com/sysnet4admin/IaC/master/manifests/172.16_net_calico.yaml -------------------------------------------------------------------------------- /ch3/3.4.1/work_nodes.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # config for work_nodes only 4 | kubeadm join --token 123456.1234567890123456 \ 5 | --discovery-token-unsafe-skip-ca-verification 192.168.1.10:6443 -------------------------------------------------------------------------------- /ch3/3.4.2/metallb-l2config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: nginx-ip-range 10 | protocol: layer2 11 | addresses: 12 | - 192.168.1.11-192.168.1.13 -------------------------------------------------------------------------------- /ch3/3.4.3/limits-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: LimitRange 3 | metadata: 4 | name: storagelimits 5 | spec: 6 | limits: 7 | - type: PersistentVolumeClaim 8 | max: 9 | storage: 5Mi 10 | min: 11 | storage: 1Mi -------------------------------------------------------------------------------- /ch3/3.4.3/nfs-ip.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nfs-ip 5 | spec: 6 | replicas: 4 7 | selector: 8 | matchLabels: 9 | app: nfs-ip 10 | template: 11 | metadata: 12 | labels: 13 | app: nfs-ip 14 | spec: 15 | containers: 16 | - name: audit-trail 17 | image: sysnet4admin/audit-trail 18 | volumeMounts: 19 | - name: nfs-vol 20 | mountPath: /audit 21 | volumes: 22 | - name: nfs-vol 23 | nfs: 24 | server: 192.168.1.10 25 | path: /nfs_shared -------------------------------------------------------------------------------- 
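Both nfs-ip.yaml above and nfs-pv.yaml below mount the NFS export 192.168.1.10:/nfs_shared, which these manifests do not create. A minimal sketch of preparing that export on the m-k8s node, assuming the CentOS 7 Vagrant environment of 3.4.1 and the 192.168.1.0/24 host-only network (adjust the path and network to your setup):

# run as root on m-k8s (192.168.1.10)
yum install nfs-utils -y
mkdir -p /nfs_shared
echo '/nfs_shared 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
systemctl enable --now nfs-server
exportfs -v   # verify the share is exported

The worker nodes also need nfs-utils installed so the kubelet can mount the share.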
/ch3/3.4.3/nfs-pv.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: nfs-pv 5 | spec: 6 | capacity: 7 | storage: 100Mi 8 | accessModes: 9 | - ReadWriteMany 10 | persistentVolumeReclaimPolicy: Retain 11 | nfs: 12 | server: 192.168.1.10 13 | path: /nfs_shared -------------------------------------------------------------------------------- /ch3/3.4.3/nfs-pvc-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nfs-pvc-deploy 5 | spec: 6 | replicas: 4 7 | selector: 8 | matchLabels: 9 | app: nfs-pvc-deploy 10 | template: 11 | metadata: 12 | labels: 13 | app: nfs-pvc-deploy 14 | spec: 15 | containers: 16 | - name: audit-trail 17 | image: sysnet4admin/audit-trail 18 | volumeMounts: 19 | - name: nfs-vol 20 | mountPath: /audit 21 | volumes: 22 | - name: nfs-vol 23 | persistentVolumeClaim: 24 | claimName: nfs-pvc -------------------------------------------------------------------------------- /ch3/3.4.3/nfs-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: nfs-pvc 5 | spec: 6 | accessModes: 7 | - ReadWriteMany 8 | resources: 9 | requests: 10 | storage: 10Mi -------------------------------------------------------------------------------- /ch3/3.4.3/quota-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ResourceQuota 3 | metadata: 4 | name: storagequota 5 | spec: 6 | hard: 7 | persistentvolumeclaims: "5" 8 | requests.storage: "25Mi" -------------------------------------------------------------------------------- /ch3/3.4.4/dynamic-pvc-deploy.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: dynamic-pvc-deploy 5 | spec: 6 | replicas: 3 7 | selector: 8 | matchLabels: 9 | app: dynamic-pvc-deploy 10 | template: 11 | metadata: 12 | labels: 13 | app: dynamic-pvc-deploy 14 | spec: 15 | containers: 16 | - name: audit-trail 17 | image: sysnet4admin/audit-trail 18 | volumeMounts: 19 | - name: dynamic-vol # same name of volumes's name 20 | mountPath: /audit 21 | volumes: 22 | - name: dynamic-vol 23 | persistentVolumeClaim: 24 | claimName: dynamic-pvc -------------------------------------------------------------------------------- /ch3/3.4.4/dynamic-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: dynamic-pvc 5 | spec: 6 | accessModes: 7 | - ReadWriteOnce 8 | resources: 9 | requests: 10 | storage: 100Gi 11 | # storageClassName: -------------------------------------------------------------------------------- /ch3/3.4.4/nfs-pvc-sts-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nfs-pvc-sts-svc 5 | spec: 6 | selector: 7 | app: nfs-pvc-sts 8 | ports: 9 | - port: 80 10 | type: LoadBalancer -------------------------------------------------------------------------------- /ch3/3.4.4/nfs-pvc-sts.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: nfs-pvc-sts 5 | spec: 6 | replicas: 4 7 | serviceName: 
sts-svc-domain #statefulset need it 8 | selector: 9 | matchLabels: 10 | app: nfs-pvc-sts 11 | template: 12 | metadata: 13 | labels: 14 | app: nfs-pvc-sts 15 | spec: 16 | containers: 17 | - name: audit-trail 18 | image: sysnet4admin/audit-trail 19 | volumeMounts: 20 | - name: nfs-vol # same name of volumes's name 21 | mountPath: /audit 22 | volumes: 23 | - name: nfs-vol 24 | persistentVolumeClaim: 25 | claimName: nfs-pvc -------------------------------------------------------------------------------- /ch3/3.4.4/standard.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: storage.k8s.io/v1 2 | kind: StorageClass 3 | metadata: 4 | name: standard 5 | provisioner: kubernetes.io/gce-pd 6 | parameters: 7 | type: pd-standard 8 | replication-type: none -------------------------------------------------------------------------------- /ch3/3.4.4/sts-svc-domain.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: sts-svc-domain 5 | spec: 6 | selector: 7 | app: nfs-pvc-sts 8 | ports: 9 | - port: 80 10 | clusterIP: None 11 | -------------------------------------------------------------------------------- /ch3/README.md: -------------------------------------------------------------------------------- 1 | # 3장, 컨테이너를 다루는 표준 아키텍처, 쿠버네티스 2 | --- 3 | ## 3.1 쿠버네티스 이해하기 4 | - 3.1.3 쿠버네티스 구성하기 5 | - 3.1.6 쿠버네티스 구성 요소의 기능 검증하기 6 | ## 3.2 쿠버네티스 기본 사용법 배우기 7 | - 3.2.4 스펙 지정해 오브젝트 생성하기 8 | - 3.2.8 노드 자원 보호하기 9 | - 3.2.10 파드 업데이트하고 복구하기 10 | ## 3.3 쿠버네티스 연결을 담당하는 서비스 11 | - 3.3.1 가장 간단하게 연결하는 노드포트 12 | - 3.3.2 사용 목적별로 연결하는 인그레스 13 | - 3.3.4 온프레미스에서 로드밸런서를 제공하는 MetalLB 14 | - 3.3.5 부하에 따라 자동으로 파드 수를 조절하는 HPA 15 | ## 3.4 알아두면 쓸모 있는 쿠버네티스 오브젝트 16 | - 3.4.1 데몬셋 17 | - 3.4.2 컨피그맵 18 | - 3.4.3 PV와PVC 19 | - 3.4.4 스테이트풀셋 -------------------------------------------------------------------------------- /ch4/4.2.3/index-BindMount.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Nginx Web Server 6 | 7 | 8 |
Running Bind Mount
9 | 10 | -------------------------------------------------------------------------------- /ch4/4.2.3/index-Volume.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | Nginx Web Server 6 | 7 | 8 |
Running Volume
9 | 10 | -------------------------------------------------------------------------------- /ch4/4.3.1/.explain-mvnw.txt: -------------------------------------------------------------------------------- 1 | [INFO] Scanning for projects... 2 | Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-starter-parent/2.2.2.RELEASE/spring-boot-starter-parent-2.2.2.RELEASE.pom 3 | ######### 소스 코드 작성시 스프링부트를 사용했기 때문에 스프링부트 라이브러리와 관련 파일을 다운 받습니다. 이외에도 사용한 라이브러리가 있다면 이 단계에서 다운로드 받습니다. 4 | [중략] 5 | [INFO] -------------------< Stark.Industries:echo-ip-java >-------------------- 6 | ######### pom.xml의 : 형식으로 표시됩니다. 7 | [INFO] Building Ultron-PRJ 0.0.1-SNAPSHOT 8 | ######### pom.xml 의 . 형식으로 표시됩니다. 9 | [INFO] --------------------------------[ jar ]--------------------------------- 10 | Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-maven-plugin/2.2.2.RELEASE/spring-boot-maven-plugin-2.2.2.RELEASE.pom 11 | ######### 스프링 부트 기반 소스 코드를 빌드하는 과정을 도와주는 플러그인을 사용하기 위한 관련 파일들을 내려 받습니다. 12 | [중략] 13 | [INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ echo-ip-java --- 14 | Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-api/3.0/maven-plugin-api-3.0.pom 15 | ######### 메이븐으로 소스 코드를 빌드하는 과정을 도와주는 플러그인을 다운로드 받습니다. 16 | ################## 왜 메이븐과 스프링부트의 플러그인을 별개로 다운 받냐 라고 하시면 mvnw clean package 와 같은 메이븐의 기본 명령은 메이븐 플러그인으로 처리 되는데 17 | ################## 각 단계별로 스프링부트의 별도 처리를 해줘야 하는 부분은 메이븐 플러그인이 처리하는 단계에서 스프링부트 플러그인이 묶여서 동작합니다. (.original 을 묶어서 단독 실행이 가능한 jar로 만들어주고 이런 것은 package 과정 중에 spring-boot-maven-plugin이 묶여서 해주는 과정입니다) 18 | [중략] 19 | [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ echo-ip-java --- 20 | ######### 소스 코드 빌드 과정에서 리소스 (일반 이미지 파일이나 정적인 html 파일, 빌드된 애플리케이션 내부에서 자체적으로 사용하는 파일)를 재구성하는 플러그인을 사용하기 위한 관련 파일을 내려 받습니다. 21 | Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-core/3.0/maven-core-3.0.pom 22 | [중략] 23 | [INFO] Using 'UTF-8' encoding to copy filtered resources. 24 | [INFO] Copying 1 resource 25 | ######### 여기서 1개 파일이 복사되는 것은 스프링부트 애플리케이션이 구동되는 시점에 초기 설정으로 사용하기 위해 개발자가 작성해둔 설정 파일 1개입니다.. src/main/resources/ 디렉토리 내부에 존재합니다. 26 | [INFO] Copying 0 resource 27 | [INFO] 28 | [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ echo-ip-java --- 29 | ######### src 디렉터리 아래에 있는 .java 파일들을 컴파일 과정을 거쳐서 target 디렉터리의 classes 디렉터리에 생성하기 위한 플러그인과 관련 파일을 내려 받습니다. classes 디렉터리의 파일들이 나중에 패키지에 포함됩니다. 30 | Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/shared/maven-shared-incremental/1.1/maven-shared-incremental-1.1.pom 31 | [중략] 32 | [INFO] Changes detected - recompiling the module! 33 | [INFO] Compiling 2 source files to /root/IaC/Docker/build/Basic/target/classes 34 | ######### 컴파일 과정을 거쳐서 classes 디렉터리에 실제로 파일을 생성합니다. src/main/java/com/stark/Industries/ 내부에 존재하는 2개의 .java 소스 파일입니다. 35 | [INFO] 36 | [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ echo-ip-java --- 37 | [INFO] Using 'UTF-8' encoding to copy filtered resources. 38 | [INFO] skip non existing resourceDirectory /root/IaC/Docker/build/Basic/src/test/resources 39 | ######### 소스 코드를 빌드하기 전 테스트 코드를 작성해서 동작을 확인해야 하는데, 이 부분은 프로젝트 기본 생성시에는 존재하지만 테스트 코드 작성은 개발자의 영역이라 src/test 디렉터리를 삭제해 두어서 스킵됩니다. 
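######### For reference: this log is what running mvnw clean package in this directory prints. Had the src/test directory been kept, the testCompile and surefire phases below would compile and run those tests (mvnw test runs them on their own).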
40 | [INFO] 41 | [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ echo-ip-java --- 42 | [INFO] No sources to compile 43 | ######### 테스트 코드를 빌드하는 과정인데 테스트 코드 부분을 삭제해서 이 부분도 컴파일할 소스코드가 없다고 넘어 갑니다. 44 | [INFO] 45 | [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ echo-ip-java --- 46 | Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/surefire/maven-surefire-common/2.22.2/maven-surefire-common-2.22.2.pom 47 | [중략] 48 | [INFO] No tests to run. 49 | ######### surefire는 실제 테스트를 수행하기 위해 사용하는 플러그인데 이 부분도 테스트와 관련된 부분은 모두 삭제해두어서 그냥 넘어가게 됩니다. 50 | [INFO] 51 | [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ echo-ip-java --- 52 | Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/shared/file-management/3.0.0/file-management-3.0.0.pom 53 | ######### 소스코드를 .jar 로 생성하기 위해 필요한 maven-jar-plugin 플러그인과 관련된 파일을 내려 받습니다. 54 | [중략] 55 | [INFO] Building jar: /root/IaC/Docker/build/Basic/target/app-in-host.jar 56 | ######### 패키지 파일 생성 1단계로 현재 디렉터리에 작성한 코드와 관련된 jjar 패키지 파일을 /root/IaC/Docker/build/Basic/target/app-in-host.jar에 생성합니다. 이 jar만 가지고 실제로 구동을 하기 위해서는 57 | ######### pom.xml에서 외부에서 불러와서 사용하도록 설정한 라이브러리를 실행시점에 연결해주어야 합니다. 그래서 이런 과정이 필요 없게 하기 위해 아래 단계가 수행됩니다. 58 | [INFO] 59 | [INFO] --- spring-boot-maven-plugin:2.2.2.RELEASE:repackage (repackage) @ echo-ip-java --- 60 | Downloading from central: https://repo.maven.apache.org/maven2/org/springframework/boot/spring-boot-loader-tools/2.2.2.RELEASE/spring-boot-loader-tools-2.2.2.RELEASE.pom 61 | ######### 라이브러리를 실행 시점에 연결하고 그러면 번거롭기 때문에 2단계로 라이브러리까지 모두 박아넣은 jar 패키지 파일을 만들기 위해 플러그인과 관련 파일을 다운로드 받습니다. 62 | [중략] 63 | Downloaded from central: https://repo.maven.apache.org/maven2/com/google/guava/guava/19.0/guava-19.0.jar (2.3 MB at 485 kB/s) 64 | [INFO] Replacing main artifact with repackaged archive 65 | ######### 기존에 존재하던 .jar 패키지 파일은 .jar.original 로 바뀌고, 모든 라이브러리가 포함되어 단독으로 실행이 가능한 jar 파일이 생성됩니다 (그런 패키지를 fat-jar 라고 합니다) 66 | [INFO] ------------------------------------------------------------------------ 67 | [INFO] BUILD SUCCESS 68 | [INFO] ------------------------------------------------------------------------ 69 | [INFO] Total time: 03:29 min 70 | [INFO] Finished at: 2020-09-27T09:48:26+09:00 71 | [INFO] ------------------------------------------------------------------------ -------------------------------------------------------------------------------- /ch4/4.3.1/.mvn/wrapper/MavenWrapperDownloader.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright 2007-present the original author or authors. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * https://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 
15 | */ 16 | 17 | import java.net.*; 18 | import java.io.*; 19 | import java.nio.channels.*; 20 | import java.util.Properties; 21 | 22 | public class MavenWrapperDownloader { 23 | 24 | private static final String WRAPPER_VERSION = "0.5.6"; 25 | /** 26 | * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided. 27 | */ 28 | private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/" 29 | + WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar"; 30 | 31 | /** 32 | * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to 33 | * use instead of the default one. 34 | */ 35 | private static final String MAVEN_WRAPPER_PROPERTIES_PATH = 36 | ".mvn/wrapper/maven-wrapper.properties"; 37 | 38 | /** 39 | * Path where the maven-wrapper.jar will be saved to. 40 | */ 41 | private static final String MAVEN_WRAPPER_JAR_PATH = 42 | ".mvn/wrapper/maven-wrapper.jar"; 43 | 44 | /** 45 | * Name of the property which should be used to override the default download url for the wrapper. 46 | */ 47 | private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl"; 48 | 49 | public static void main(String args[]) { 50 | System.out.println("- Downloader started"); 51 | File baseDirectory = new File(args[0]); 52 | System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath()); 53 | 54 | // If the maven-wrapper.properties exists, read it and check if it contains a custom 55 | // wrapperUrl parameter. 56 | File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH); 57 | String url = DEFAULT_DOWNLOAD_URL; 58 | if (mavenWrapperPropertyFile.exists()) { 59 | FileInputStream mavenWrapperPropertyFileInputStream = null; 60 | try { 61 | mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile); 62 | Properties mavenWrapperProperties = new Properties(); 63 | mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream); 64 | url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url); 65 | } catch (IOException e) { 66 | System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'"); 67 | } finally { 68 | try { 69 | if (mavenWrapperPropertyFileInputStream != null) { 70 | mavenWrapperPropertyFileInputStream.close(); 71 | } 72 | } catch (IOException e) { 73 | // Ignore ... 
74 | } 75 | } 76 | } 77 | System.out.println("- Downloading from: " + url); 78 | 79 | File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH); 80 | if (!outputFile.getParentFile().exists()) { 81 | if (!outputFile.getParentFile().mkdirs()) { 82 | System.out.println( 83 | "- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'"); 84 | } 85 | } 86 | System.out.println("- Downloading to: " + outputFile.getAbsolutePath()); 87 | try { 88 | downloadFileFromURL(url, outputFile); 89 | System.out.println("Done"); 90 | System.exit(0); 91 | } catch (Throwable e) { 92 | System.out.println("- Error downloading"); 93 | e.printStackTrace(); 94 | System.exit(1); 95 | } 96 | } 97 | 98 | private static void downloadFileFromURL(String urlString, File destination) throws Exception { 99 | if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) { 100 | String username = System.getenv("MVNW_USERNAME"); 101 | char[] password = System.getenv("MVNW_PASSWORD").toCharArray(); 102 | Authenticator.setDefault(new Authenticator() { 103 | @Override 104 | protected PasswordAuthentication getPasswordAuthentication() { 105 | return new PasswordAuthentication(username, password); 106 | } 107 | }); 108 | } 109 | URL website = new URL(urlString); 110 | ReadableByteChannel rbc; 111 | rbc = Channels.newChannel(website.openStream()); 112 | FileOutputStream fos = new FileOutputStream(destination); 113 | fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE); 114 | fos.close(); 115 | rbc.close(); 116 | } 117 | 118 | } 119 | -------------------------------------------------------------------------------- /ch4/4.3.1/.mvn/wrapper/maven-wrapper.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/ch4/4.3.1/.mvn/wrapper/maven-wrapper.jar -------------------------------------------------------------------------------- /ch4/4.3.1/.mvn/wrapper/maven-wrapper.properties: -------------------------------------------------------------------------------- 1 | distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip 2 | wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar 3 | -------------------------------------------------------------------------------- /ch4/4.3.1/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM openjdk:8 2 | LABEL description="Echo IP Java Application" 3 | EXPOSE 60431 4 | COPY ./target/app-in-host.jar /opt/app-in-image.jar 5 | WORKDIR /opt 6 | ENTRYPOINT [ "java", "-jar", "app-in-image.jar" ] -------------------------------------------------------------------------------- /ch4/4.3.1/mvnw: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # ---------------------------------------------------------------------------- 3 | # Licensed to the Apache Software Foundation (ASF) under one 4 | # or more contributor license agreements. See the NOTICE file 5 | # distributed with this work for additional information 6 | # regarding copyright ownership. The ASF licenses this file 7 | # to you under the Apache License, Version 2.0 (the 8 | # "License"); you may not use this file except in compliance 9 | # with the License. 
You may obtain a copy of the License at 10 | # 11 | # https://www.apache.org/licenses/LICENSE-2.0 12 | # 13 | # Unless required by applicable law or agreed to in writing, 14 | # software distributed under the License is distributed on an 15 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 16 | # KIND, either express or implied. See the License for the 17 | # specific language governing permissions and limitations 18 | # under the License. 19 | # ---------------------------------------------------------------------------- 20 | 21 | # ---------------------------------------------------------------------------- 22 | # Maven Start Up Batch script 23 | # 24 | # Required ENV vars: 25 | # ------------------ 26 | # JAVA_HOME - location of a JDK home dir 27 | # 28 | # Optional ENV vars 29 | # ----------------- 30 | # M2_HOME - location of maven2's installed home dir 31 | # MAVEN_OPTS - parameters passed to the Java VM when running Maven 32 | # e.g. to debug Maven itself, use 33 | # set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 34 | # MAVEN_SKIP_RC - flag to disable loading of mavenrc files 35 | # ---------------------------------------------------------------------------- 36 | 37 | if [ -z "$MAVEN_SKIP_RC" ] ; then 38 | 39 | if [ -f /etc/mavenrc ] ; then 40 | . /etc/mavenrc 41 | fi 42 | 43 | if [ -f "$HOME/.mavenrc" ] ; then 44 | . "$HOME/.mavenrc" 45 | fi 46 | 47 | fi 48 | 49 | # OS specific support. $var _must_ be set to either true or false. 50 | cygwin=false; 51 | darwin=false; 52 | mingw=false 53 | case "`uname`" in 54 | CYGWIN*) cygwin=true ;; 55 | MINGW*) mingw=true;; 56 | Darwin*) darwin=true 57 | # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home 58 | # See https://developer.apple.com/library/mac/qa/qa1170/_index.html 59 | if [ -z "$JAVA_HOME" ]; then 60 | if [ -x "/usr/libexec/java_home" ]; then 61 | export JAVA_HOME="`/usr/libexec/java_home`" 62 | else 63 | export JAVA_HOME="/Library/Java/Home" 64 | fi 65 | fi 66 | ;; 67 | esac 68 | 69 | if [ -z "$JAVA_HOME" ] ; then 70 | if [ -r /etc/gentoo-release ] ; then 71 | JAVA_HOME=`java-config --jre-home` 72 | fi 73 | fi 74 | 75 | if [ -z "$M2_HOME" ] ; then 76 | ## resolve links - $0 may be a link to maven's home 77 | PRG="$0" 78 | 79 | # need this for relative symlinks 80 | while [ -h "$PRG" ] ; do 81 | ls=`ls -ld "$PRG"` 82 | link=`expr "$ls" : '.*-> \(.*\)$'` 83 | if expr "$link" : '/.*' > /dev/null; then 84 | PRG="$link" 85 | else 86 | PRG="`dirname "$PRG"`/$link" 87 | fi 88 | done 89 | 90 | saveddir=`pwd` 91 | 92 | M2_HOME=`dirname "$PRG"`/.. 93 | 94 | # make it fully qualified 95 | M2_HOME=`cd "$M2_HOME" && pwd` 96 | 97 | cd "$saveddir" 98 | # echo Using m2 at $M2_HOME 99 | fi 100 | 101 | # For Cygwin, ensure paths are in UNIX format before anything is touched 102 | if $cygwin ; then 103 | [ -n "$M2_HOME" ] && 104 | M2_HOME=`cygpath --unix "$M2_HOME"` 105 | [ -n "$JAVA_HOME" ] && 106 | JAVA_HOME=`cygpath --unix "$JAVA_HOME"` 107 | [ -n "$CLASSPATH" ] && 108 | CLASSPATH=`cygpath --path --unix "$CLASSPATH"` 109 | fi 110 | 111 | # For Mingw, ensure paths are in UNIX format before anything is touched 112 | if $mingw ; then 113 | [ -n "$M2_HOME" ] && 114 | M2_HOME="`(cd "$M2_HOME"; pwd)`" 115 | [ -n "$JAVA_HOME" ] && 116 | JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`" 117 | fi 118 | 119 | if [ -z "$JAVA_HOME" ]; then 120 | javaExecutable="`which javac`" 121 | if [ -n "$javaExecutable" ] && ! 
[ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then 122 | # readlink(1) is not available as standard on Solaris 10. 123 | readLink=`which readlink` 124 | if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then 125 | if $darwin ; then 126 | javaHome="`dirname \"$javaExecutable\"`" 127 | javaExecutable="`cd \"$javaHome\" && pwd -P`/javac" 128 | else 129 | javaExecutable="`readlink -f \"$javaExecutable\"`" 130 | fi 131 | javaHome="`dirname \"$javaExecutable\"`" 132 | javaHome=`expr "$javaHome" : '\(.*\)/bin'` 133 | JAVA_HOME="$javaHome" 134 | export JAVA_HOME 135 | fi 136 | fi 137 | fi 138 | 139 | if [ -z "$JAVACMD" ] ; then 140 | if [ -n "$JAVA_HOME" ] ; then 141 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then 142 | # IBM's JDK on AIX uses strange locations for the executables 143 | JAVACMD="$JAVA_HOME/jre/sh/java" 144 | else 145 | JAVACMD="$JAVA_HOME/bin/java" 146 | fi 147 | else 148 | JAVACMD="`which java`" 149 | fi 150 | fi 151 | 152 | if [ ! -x "$JAVACMD" ] ; then 153 | echo "Error: JAVA_HOME is not defined correctly." >&2 154 | echo " We cannot execute $JAVACMD" >&2 155 | exit 1 156 | fi 157 | 158 | if [ -z "$JAVA_HOME" ] ; then 159 | echo "Warning: JAVA_HOME environment variable is not set." 160 | fi 161 | 162 | CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher 163 | 164 | # traverses directory structure from process work directory to filesystem root 165 | # first directory with .mvn subdirectory is considered project base directory 166 | find_maven_basedir() { 167 | 168 | if [ -z "$1" ] 169 | then 170 | echo "Path not specified to find_maven_basedir" 171 | return 1 172 | fi 173 | 174 | basedir="$1" 175 | wdir="$1" 176 | while [ "$wdir" != '/' ] ; do 177 | if [ -d "$wdir"/.mvn ] ; then 178 | basedir=$wdir 179 | break 180 | fi 181 | # workaround for JBEAP-8937 (on Solaris 10/Sparc) 182 | if [ -d "${wdir}" ]; then 183 | wdir=`cd "$wdir/.."; pwd` 184 | fi 185 | # end of workaround 186 | done 187 | echo "${basedir}" 188 | } 189 | 190 | # concatenates all lines of a file 191 | concat_lines() { 192 | if [ -f "$1" ]; then 193 | echo "$(tr -s '\n' ' ' < "$1")" 194 | fi 195 | } 196 | 197 | BASE_DIR=`find_maven_basedir "$(pwd)"` 198 | if [ -z "$BASE_DIR" ]; then 199 | exit 1; 200 | fi 201 | 202 | ########################################################################################## 203 | # Extension to allow automatically downloading the maven-wrapper.jar from Maven-central 204 | # This allows using the maven wrapper in projects that prohibit checking in binary data. 205 | ########################################################################################## 206 | if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then 207 | if [ "$MVNW_VERBOSE" = true ]; then 208 | echo "Found .mvn/wrapper/maven-wrapper.jar" 209 | fi 210 | else 211 | if [ "$MVNW_VERBOSE" = true ]; then 212 | echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..." 
213 | fi 214 | if [ -n "$MVNW_REPOURL" ]; then 215 | jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 216 | else 217 | jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 218 | fi 219 | while IFS="=" read key value; do 220 | case "$key" in (wrapperUrl) jarUrl="$value"; break ;; 221 | esac 222 | done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties" 223 | if [ "$MVNW_VERBOSE" = true ]; then 224 | echo "Downloading from: $jarUrl" 225 | fi 226 | wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" 227 | if $cygwin; then 228 | wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"` 229 | fi 230 | 231 | if command -v wget > /dev/null; then 232 | if [ "$MVNW_VERBOSE" = true ]; then 233 | echo "Found wget ... using wget" 234 | fi 235 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 236 | wget "$jarUrl" -O "$wrapperJarPath" 237 | else 238 | wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath" 239 | fi 240 | elif command -v curl > /dev/null; then 241 | if [ "$MVNW_VERBOSE" = true ]; then 242 | echo "Found curl ... using curl" 243 | fi 244 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 245 | curl -o "$wrapperJarPath" "$jarUrl" -f 246 | else 247 | curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f 248 | fi 249 | 250 | else 251 | if [ "$MVNW_VERBOSE" = true ]; then 252 | echo "Falling back to using Java to download" 253 | fi 254 | javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java" 255 | # For Cygwin, switch paths to Windows format before running javac 256 | if $cygwin; then 257 | javaClass=`cygpath --path --windows "$javaClass"` 258 | fi 259 | if [ -e "$javaClass" ]; then 260 | if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 261 | if [ "$MVNW_VERBOSE" = true ]; then 262 | echo " - Compiling MavenWrapperDownloader.java ..." 263 | fi 264 | # Compiling the Java class 265 | ("$JAVA_HOME/bin/javac" "$javaClass") 266 | fi 267 | if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 268 | # Running the downloader 269 | if [ "$MVNW_VERBOSE" = true ]; then 270 | echo " - Running MavenWrapperDownloader.java ..." 271 | fi 272 | ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR") 273 | fi 274 | fi 275 | fi 276 | fi 277 | ########################################################################################## 278 | # End of extension 279 | ########################################################################################## 280 | 281 | export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"} 282 | if [ "$MVNW_VERBOSE" = true ]; then 283 | echo $MAVEN_PROJECTBASEDIR 284 | fi 285 | MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS" 286 | 287 | # For Cygwin, switch paths to Windows format before running java 288 | if $cygwin; then 289 | [ -n "$M2_HOME" ] && 290 | M2_HOME=`cygpath --path --windows "$M2_HOME"` 291 | [ -n "$JAVA_HOME" ] && 292 | JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"` 293 | [ -n "$CLASSPATH" ] && 294 | CLASSPATH=`cygpath --path --windows "$CLASSPATH"` 295 | [ -n "$MAVEN_PROJECTBASEDIR" ] && 296 | MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"` 297 | fi 298 | 299 | # Provide a "standardized" way to retrieve the CLI args that will 300 | # work with both Windows and non-Windows executions. 
301 | MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@" 302 | export MAVEN_CMD_LINE_ARGS 303 | 304 | WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain 305 | 306 | exec "$JAVACMD" \ 307 | $MAVEN_OPTS \ 308 | -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \ 309 | "-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \ 310 | ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@" 311 | -------------------------------------------------------------------------------- /ch4/4.3.1/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 4 | 4.0.0 5 | 6 | org.springframework.boot 7 | spring-boot-starter-parent 8 | 2.2.2.RELEASE 9 | 10 | 11 | Stark.Industries 12 | echo-ip-java 13 | 0.0.1-SNAPSHOT 14 | Ultron-PRJ 15 | Echo IP by Java application for Container Book 16 | 17 | 18 | 1.8 19 | 20 | 21 | 22 | 23 | org.springframework.boot 24 | spring-boot-starter-web 25 | 26 | 27 | 28 | org.springframework.boot 29 | spring-boot-starter-test 30 | test 31 | 32 | 33 | org.junit.vintage 34 | junit-vintage-engine 35 | 36 | 37 | 38 | 39 | 40 | 41 | app-in-host 42 | 43 | 44 | org.springframework.boot 45 | spring-boot-maven-plugin 46 | 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /ch4/4.3.1/src/main/java/com/stark/Industries/UltronPRJApplication.java: -------------------------------------------------------------------------------- 1 | package com.stark.Industries; 2 | 3 | import org.springframework.boot.SpringApplication; 4 | import org.springframework.boot.autoconfigure.SpringBootApplication; 5 | 6 | @SpringBootApplication 7 | public class UltronPRJApplication { 8 | 9 | public static void main(String[] args) { 10 | SpringApplication.run(UltronPRJApplication.class, args); 11 | } 12 | 13 | } 14 | -------------------------------------------------------------------------------- /ch4/4.3.1/src/main/java/com/stark/Industries/UltronPRJController.java: -------------------------------------------------------------------------------- 1 | package com.stark.Industries; 2 | 3 | import org.springframework.web.bind.annotation.RequestMapping; 4 | import org.springframework.web.bind.annotation.RestController; 5 | 6 | import javax.servlet.http.HttpServletRequest; 7 | import java.util.HashMap; 8 | import java.util.Map; 9 | 10 | @RestController 11 | public class UltronPRJController { 12 | 13 | @RequestMapping("/") 14 | public String hello(HttpServletRequest request){ 15 | String result = "src: "+request.getRemoteAddr()+" / dest: "+request.getServerName()+"\n"; 16 | return result; 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /ch4/4.3.1/src/main/resources/application.properties: -------------------------------------------------------------------------------- 1 | server.port=80 -------------------------------------------------------------------------------- /ch4/4.3.2/.mvn/wrapper/MavenWrapperDownloader.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright 2007-present the original author or authors. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 
6 | * You may obtain a copy of the License at 7 | * 8 | * https://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | import java.net.*; 18 | import java.io.*; 19 | import java.nio.channels.*; 20 | import java.util.Properties; 21 | 22 | public class MavenWrapperDownloader { 23 | 24 | private static final String WRAPPER_VERSION = "0.5.6"; 25 | /** 26 | * Default URL to download the maven-wrapper.jar from, if no 'downloadUrl' is provided. 27 | */ 28 | private static final String DEFAULT_DOWNLOAD_URL = "https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/" 29 | + WRAPPER_VERSION + "/maven-wrapper-" + WRAPPER_VERSION + ".jar"; 30 | 31 | /** 32 | * Path to the maven-wrapper.properties file, which might contain a downloadUrl property to 33 | * use instead of the default one. 34 | */ 35 | private static final String MAVEN_WRAPPER_PROPERTIES_PATH = 36 | ".mvn/wrapper/maven-wrapper.properties"; 37 | 38 | /** 39 | * Path where the maven-wrapper.jar will be saved to. 40 | */ 41 | private static final String MAVEN_WRAPPER_JAR_PATH = 42 | ".mvn/wrapper/maven-wrapper.jar"; 43 | 44 | /** 45 | * Name of the property which should be used to override the default download url for the wrapper. 46 | */ 47 | private static final String PROPERTY_NAME_WRAPPER_URL = "wrapperUrl"; 48 | 49 | public static void main(String args[]) { 50 | System.out.println("- Downloader started"); 51 | File baseDirectory = new File(args[0]); 52 | System.out.println("- Using base directory: " + baseDirectory.getAbsolutePath()); 53 | 54 | // If the maven-wrapper.properties exists, read it and check if it contains a custom 55 | // wrapperUrl parameter. 56 | File mavenWrapperPropertyFile = new File(baseDirectory, MAVEN_WRAPPER_PROPERTIES_PATH); 57 | String url = DEFAULT_DOWNLOAD_URL; 58 | if (mavenWrapperPropertyFile.exists()) { 59 | FileInputStream mavenWrapperPropertyFileInputStream = null; 60 | try { 61 | mavenWrapperPropertyFileInputStream = new FileInputStream(mavenWrapperPropertyFile); 62 | Properties mavenWrapperProperties = new Properties(); 63 | mavenWrapperProperties.load(mavenWrapperPropertyFileInputStream); 64 | url = mavenWrapperProperties.getProperty(PROPERTY_NAME_WRAPPER_URL, url); 65 | } catch (IOException e) { 66 | System.out.println("- ERROR loading '" + MAVEN_WRAPPER_PROPERTIES_PATH + "'"); 67 | } finally { 68 | try { 69 | if (mavenWrapperPropertyFileInputStream != null) { 70 | mavenWrapperPropertyFileInputStream.close(); 71 | } 72 | } catch (IOException e) { 73 | // Ignore ... 
74 | } 75 | } 76 | } 77 | System.out.println("- Downloading from: " + url); 78 | 79 | File outputFile = new File(baseDirectory.getAbsolutePath(), MAVEN_WRAPPER_JAR_PATH); 80 | if (!outputFile.getParentFile().exists()) { 81 | if (!outputFile.getParentFile().mkdirs()) { 82 | System.out.println( 83 | "- ERROR creating output directory '" + outputFile.getParentFile().getAbsolutePath() + "'"); 84 | } 85 | } 86 | System.out.println("- Downloading to: " + outputFile.getAbsolutePath()); 87 | try { 88 | downloadFileFromURL(url, outputFile); 89 | System.out.println("Done"); 90 | System.exit(0); 91 | } catch (Throwable e) { 92 | System.out.println("- Error downloading"); 93 | e.printStackTrace(); 94 | System.exit(1); 95 | } 96 | } 97 | 98 | private static void downloadFileFromURL(String urlString, File destination) throws Exception { 99 | if (System.getenv("MVNW_USERNAME") != null && System.getenv("MVNW_PASSWORD") != null) { 100 | String username = System.getenv("MVNW_USERNAME"); 101 | char[] password = System.getenv("MVNW_PASSWORD").toCharArray(); 102 | Authenticator.setDefault(new Authenticator() { 103 | @Override 104 | protected PasswordAuthentication getPasswordAuthentication() { 105 | return new PasswordAuthentication(username, password); 106 | } 107 | }); 108 | } 109 | URL website = new URL(urlString); 110 | ReadableByteChannel rbc; 111 | rbc = Channels.newChannel(website.openStream()); 112 | FileOutputStream fos = new FileOutputStream(destination); 113 | fos.getChannel().transferFrom(rbc, 0, Long.MAX_VALUE); 114 | fos.close(); 115 | rbc.close(); 116 | } 117 | 118 | } 119 | -------------------------------------------------------------------------------- /ch4/4.3.2/.mvn/wrapper/maven-wrapper.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/ch4/4.3.2/.mvn/wrapper/maven-wrapper.jar -------------------------------------------------------------------------------- /ch4/4.3.2/.mvn/wrapper/maven-wrapper.properties: -------------------------------------------------------------------------------- 1 | distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.6.3/apache-maven-3.6.3-bin.zip 2 | wrapperUrl=https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar 3 | -------------------------------------------------------------------------------- /ch4/4.3.2/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM gcr.io/distroless/java:8 2 | LABEL description="Echo IP Java Application" 3 | EXPOSE 60432 4 | COPY ./target/app-in-host.jar /opt/app-in-image.jar 5 | WORKDIR /opt 6 | ENTRYPOINT [ "java", "-jar", "app-in-image.jar" ] -------------------------------------------------------------------------------- /ch4/4.3.2/build-in-host.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | yum -y install java-1.8.0-openjdk-devel 3 | ./mvnw clean package 4 | docker build -t optimal-img . -------------------------------------------------------------------------------- /ch4/4.3.2/mvnw: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | # ---------------------------------------------------------------------------- 3 | # Licensed to the Apache Software Foundation (ASF) under one 4 | # or more contributor license agreements. 
See the NOTICE file 5 | # distributed with this work for additional information 6 | # regarding copyright ownership. The ASF licenses this file 7 | # to you under the Apache License, Version 2.0 (the 8 | # "License"); you may not use this file except in compliance 9 | # with the License. You may obtain a copy of the License at 10 | # 11 | # https://www.apache.org/licenses/LICENSE-2.0 12 | # 13 | # Unless required by applicable law or agreed to in writing, 14 | # software distributed under the License is distributed on an 15 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 16 | # KIND, either express or implied. See the License for the 17 | # specific language governing permissions and limitations 18 | # under the License. 19 | # ---------------------------------------------------------------------------- 20 | 21 | # ---------------------------------------------------------------------------- 22 | # Maven Start Up Batch script 23 | # 24 | # Required ENV vars: 25 | # ------------------ 26 | # JAVA_HOME - location of a JDK home dir 27 | # 28 | # Optional ENV vars 29 | # ----------------- 30 | # M2_HOME - location of maven2's installed home dir 31 | # MAVEN_OPTS - parameters passed to the Java VM when running Maven 32 | # e.g. to debug Maven itself, use 33 | # set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000 34 | # MAVEN_SKIP_RC - flag to disable loading of mavenrc files 35 | # ---------------------------------------------------------------------------- 36 | 37 | if [ -z "$MAVEN_SKIP_RC" ] ; then 38 | 39 | if [ -f /etc/mavenrc ] ; then 40 | . /etc/mavenrc 41 | fi 42 | 43 | if [ -f "$HOME/.mavenrc" ] ; then 44 | . "$HOME/.mavenrc" 45 | fi 46 | 47 | fi 48 | 49 | # OS specific support. $var _must_ be set to either true or false. 50 | cygwin=false; 51 | darwin=false; 52 | mingw=false 53 | case "`uname`" in 54 | CYGWIN*) cygwin=true ;; 55 | MINGW*) mingw=true;; 56 | Darwin*) darwin=true 57 | # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home 58 | # See https://developer.apple.com/library/mac/qa/qa1170/_index.html 59 | if [ -z "$JAVA_HOME" ]; then 60 | if [ -x "/usr/libexec/java_home" ]; then 61 | export JAVA_HOME="`/usr/libexec/java_home`" 62 | else 63 | export JAVA_HOME="/Library/Java/Home" 64 | fi 65 | fi 66 | ;; 67 | esac 68 | 69 | if [ -z "$JAVA_HOME" ] ; then 70 | if [ -r /etc/gentoo-release ] ; then 71 | JAVA_HOME=`java-config --jre-home` 72 | fi 73 | fi 74 | 75 | if [ -z "$M2_HOME" ] ; then 76 | ## resolve links - $0 may be a link to maven's home 77 | PRG="$0" 78 | 79 | # need this for relative symlinks 80 | while [ -h "$PRG" ] ; do 81 | ls=`ls -ld "$PRG"` 82 | link=`expr "$ls" : '.*-> \(.*\)$'` 83 | if expr "$link" : '/.*' > /dev/null; then 84 | PRG="$link" 85 | else 86 | PRG="`dirname "$PRG"`/$link" 87 | fi 88 | done 89 | 90 | saveddir=`pwd` 91 | 92 | M2_HOME=`dirname "$PRG"`/.. 
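# PRG now points at the real mvnw script (symlinks were resolved above); M2_HOME has just been
# derived from its parent directory and is normalized to an absolute path below.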
93 | 94 | # make it fully qualified 95 | M2_HOME=`cd "$M2_HOME" && pwd` 96 | 97 | cd "$saveddir" 98 | # echo Using m2 at $M2_HOME 99 | fi 100 | 101 | # For Cygwin, ensure paths are in UNIX format before anything is touched 102 | if $cygwin ; then 103 | [ -n "$M2_HOME" ] && 104 | M2_HOME=`cygpath --unix "$M2_HOME"` 105 | [ -n "$JAVA_HOME" ] && 106 | JAVA_HOME=`cygpath --unix "$JAVA_HOME"` 107 | [ -n "$CLASSPATH" ] && 108 | CLASSPATH=`cygpath --path --unix "$CLASSPATH"` 109 | fi 110 | 111 | # For Mingw, ensure paths are in UNIX format before anything is touched 112 | if $mingw ; then 113 | [ -n "$M2_HOME" ] && 114 | M2_HOME="`(cd "$M2_HOME"; pwd)`" 115 | [ -n "$JAVA_HOME" ] && 116 | JAVA_HOME="`(cd "$JAVA_HOME"; pwd)`" 117 | fi 118 | 119 | if [ -z "$JAVA_HOME" ]; then 120 | javaExecutable="`which javac`" 121 | if [ -n "$javaExecutable" ] && ! [ "`expr \"$javaExecutable\" : '\([^ ]*\)'`" = "no" ]; then 122 | # readlink(1) is not available as standard on Solaris 10. 123 | readLink=`which readlink` 124 | if [ ! `expr "$readLink" : '\([^ ]*\)'` = "no" ]; then 125 | if $darwin ; then 126 | javaHome="`dirname \"$javaExecutable\"`" 127 | javaExecutable="`cd \"$javaHome\" && pwd -P`/javac" 128 | else 129 | javaExecutable="`readlink -f \"$javaExecutable\"`" 130 | fi 131 | javaHome="`dirname \"$javaExecutable\"`" 132 | javaHome=`expr "$javaHome" : '\(.*\)/bin'` 133 | JAVA_HOME="$javaHome" 134 | export JAVA_HOME 135 | fi 136 | fi 137 | fi 138 | 139 | if [ -z "$JAVACMD" ] ; then 140 | if [ -n "$JAVA_HOME" ] ; then 141 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then 142 | # IBM's JDK on AIX uses strange locations for the executables 143 | JAVACMD="$JAVA_HOME/jre/sh/java" 144 | else 145 | JAVACMD="$JAVA_HOME/bin/java" 146 | fi 147 | else 148 | JAVACMD="`which java`" 149 | fi 150 | fi 151 | 152 | if [ ! -x "$JAVACMD" ] ; then 153 | echo "Error: JAVA_HOME is not defined correctly." >&2 154 | echo " We cannot execute $JAVACMD" >&2 155 | exit 1 156 | fi 157 | 158 | if [ -z "$JAVA_HOME" ] ; then 159 | echo "Warning: JAVA_HOME environment variable is not set." 160 | fi 161 | 162 | CLASSWORLDS_LAUNCHER=org.codehaus.plexus.classworlds.launcher.Launcher 163 | 164 | # traverses directory structure from process work directory to filesystem root 165 | # first directory with .mvn subdirectory is considered project base directory 166 | find_maven_basedir() { 167 | 168 | if [ -z "$1" ] 169 | then 170 | echo "Path not specified to find_maven_basedir" 171 | return 1 172 | fi 173 | 174 | basedir="$1" 175 | wdir="$1" 176 | while [ "$wdir" != '/' ] ; do 177 | if [ -d "$wdir"/.mvn ] ; then 178 | basedir=$wdir 179 | break 180 | fi 181 | # workaround for JBEAP-8937 (on Solaris 10/Sparc) 182 | if [ -d "${wdir}" ]; then 183 | wdir=`cd "$wdir/.."; pwd` 184 | fi 185 | # end of workaround 186 | done 187 | echo "${basedir}" 188 | } 189 | 190 | # concatenates all lines of a file 191 | concat_lines() { 192 | if [ -f "$1" ]; then 193 | echo "$(tr -s '\n' ' ' < "$1")" 194 | fi 195 | } 196 | 197 | BASE_DIR=`find_maven_basedir "$(pwd)"` 198 | if [ -z "$BASE_DIR" ]; then 199 | exit 1; 200 | fi 201 | 202 | ########################################################################################## 203 | # Extension to allow automatically downloading the maven-wrapper.jar from Maven-central 204 | # This allows using the maven wrapper in projects that prohibit checking in binary data. 
205 | ########################################################################################## 206 | if [ -r "$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" ]; then 207 | if [ "$MVNW_VERBOSE" = true ]; then 208 | echo "Found .mvn/wrapper/maven-wrapper.jar" 209 | fi 210 | else 211 | if [ "$MVNW_VERBOSE" = true ]; then 212 | echo "Couldn't find .mvn/wrapper/maven-wrapper.jar, downloading it ..." 213 | fi 214 | if [ -n "$MVNW_REPOURL" ]; then 215 | jarUrl="$MVNW_REPOURL/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 216 | else 217 | jarUrl="https://repo.maven.apache.org/maven2/io/takari/maven-wrapper/0.5.6/maven-wrapper-0.5.6.jar" 218 | fi 219 | while IFS="=" read key value; do 220 | case "$key" in (wrapperUrl) jarUrl="$value"; break ;; 221 | esac 222 | done < "$BASE_DIR/.mvn/wrapper/maven-wrapper.properties" 223 | if [ "$MVNW_VERBOSE" = true ]; then 224 | echo "Downloading from: $jarUrl" 225 | fi 226 | wrapperJarPath="$BASE_DIR/.mvn/wrapper/maven-wrapper.jar" 227 | if $cygwin; then 228 | wrapperJarPath=`cygpath --path --windows "$wrapperJarPath"` 229 | fi 230 | 231 | if command -v wget > /dev/null; then 232 | if [ "$MVNW_VERBOSE" = true ]; then 233 | echo "Found wget ... using wget" 234 | fi 235 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 236 | wget "$jarUrl" -O "$wrapperJarPath" 237 | else 238 | wget --http-user=$MVNW_USERNAME --http-password=$MVNW_PASSWORD "$jarUrl" -O "$wrapperJarPath" 239 | fi 240 | elif command -v curl > /dev/null; then 241 | if [ "$MVNW_VERBOSE" = true ]; then 242 | echo "Found curl ... using curl" 243 | fi 244 | if [ -z "$MVNW_USERNAME" ] || [ -z "$MVNW_PASSWORD" ]; then 245 | curl -o "$wrapperJarPath" "$jarUrl" -f 246 | else 247 | curl --user $MVNW_USERNAME:$MVNW_PASSWORD -o "$wrapperJarPath" "$jarUrl" -f 248 | fi 249 | 250 | else 251 | if [ "$MVNW_VERBOSE" = true ]; then 252 | echo "Falling back to using Java to download" 253 | fi 254 | javaClass="$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.java" 255 | # For Cygwin, switch paths to Windows format before running javac 256 | if $cygwin; then 257 | javaClass=`cygpath --path --windows "$javaClass"` 258 | fi 259 | if [ -e "$javaClass" ]; then 260 | if [ ! -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 261 | if [ "$MVNW_VERBOSE" = true ]; then 262 | echo " - Compiling MavenWrapperDownloader.java ..." 263 | fi 264 | # Compiling the Java class 265 | ("$JAVA_HOME/bin/javac" "$javaClass") 266 | fi 267 | if [ -e "$BASE_DIR/.mvn/wrapper/MavenWrapperDownloader.class" ]; then 268 | # Running the downloader 269 | if [ "$MVNW_VERBOSE" = true ]; then 270 | echo " - Running MavenWrapperDownloader.java ..." 
271 | fi 272 | ("$JAVA_HOME/bin/java" -cp .mvn/wrapper MavenWrapperDownloader "$MAVEN_PROJECTBASEDIR") 273 | fi 274 | fi 275 | fi 276 | fi 277 | ########################################################################################## 278 | # End of extension 279 | ########################################################################################## 280 | 281 | export MAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-"$BASE_DIR"} 282 | if [ "$MVNW_VERBOSE" = true ]; then 283 | echo $MAVEN_PROJECTBASEDIR 284 | fi 285 | MAVEN_OPTS="$(concat_lines "$MAVEN_PROJECTBASEDIR/.mvn/jvm.config") $MAVEN_OPTS" 286 | 287 | # For Cygwin, switch paths to Windows format before running java 288 | if $cygwin; then 289 | [ -n "$M2_HOME" ] && 290 | M2_HOME=`cygpath --path --windows "$M2_HOME"` 291 | [ -n "$JAVA_HOME" ] && 292 | JAVA_HOME=`cygpath --path --windows "$JAVA_HOME"` 293 | [ -n "$CLASSPATH" ] && 294 | CLASSPATH=`cygpath --path --windows "$CLASSPATH"` 295 | [ -n "$MAVEN_PROJECTBASEDIR" ] && 296 | MAVEN_PROJECTBASEDIR=`cygpath --path --windows "$MAVEN_PROJECTBASEDIR"` 297 | fi 298 | 299 | # Provide a "standardized" way to retrieve the CLI args that will 300 | # work with both Windows and non-Windows executions. 301 | MAVEN_CMD_LINE_ARGS="$MAVEN_CONFIG $@" 302 | export MAVEN_CMD_LINE_ARGS 303 | 304 | WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain 305 | 306 | exec "$JAVACMD" \ 307 | $MAVEN_OPTS \ 308 | -classpath "$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar" \ 309 | "-Dmaven.home=${M2_HOME}" "-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}" \ 310 | ${WRAPPER_LAUNCHER} $MAVEN_CONFIG "$@" 311 | -------------------------------------------------------------------------------- /ch4/4.3.2/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 4 | 4.0.0 5 | 6 | org.springframework.boot 7 | spring-boot-starter-parent 8 | 2.2.2.RELEASE 9 | 10 | 11 | Stark.Industries 12 | echo-ip-java 13 | 0.0.1-SNAPSHOT 14 | Ultron-PRJ 15 | Echo IP by Java application for Container Book 16 | 17 | 18 | 1.8 19 | 20 | 21 | 22 | 23 | org.springframework.boot 24 | spring-boot-starter-web 25 | 26 | 27 | 28 | org.springframework.boot 29 | spring-boot-starter-test 30 | test 31 | 32 | 33 | org.junit.vintage 34 | junit-vintage-engine 35 | 36 | 37 | 38 | 39 | 40 | 41 | app-in-host 42 | 43 | 44 | org.springframework.boot 45 | spring-boot-maven-plugin 46 | 47 | 48 | 49 | 50 | -------------------------------------------------------------------------------- /ch4/4.3.2/src/main/java/com/stark/Industries/UltronPRJApplication.java: -------------------------------------------------------------------------------- 1 | package com.stark.Industries; 2 | 3 | import org.springframework.boot.SpringApplication; 4 | import org.springframework.boot.autoconfigure.SpringBootApplication; 5 | 6 | @SpringBootApplication 7 | public class UltronPRJApplication { 8 | 9 | public static void main(String[] args) { 10 | SpringApplication.run(UltronPRJApplication.class, args); 11 | } 12 | 13 | } 14 | -------------------------------------------------------------------------------- /ch4/4.3.2/src/main/java/com/stark/Industries/UltronPRJController.java: -------------------------------------------------------------------------------- 1 | package com.stark.Industries; 2 | 3 | import org.springframework.web.bind.annotation.RequestMapping; 4 | import org.springframework.web.bind.annotation.RestController; 5 | 6 | import javax.servlet.http.HttpServletRequest; 7 | import java.util.HashMap; 8 | import 
java.util.Map; 9 | 10 | @RestController 11 | public class UltronPRJController { 12 | 13 | @RequestMapping("/") 14 | public String hello(HttpServletRequest request){ 15 | String result = "src: "+request.getRemoteAddr()+" / dest: "+request.getServerName()+"\n"; 16 | return result; 17 | } 18 | } 19 | -------------------------------------------------------------------------------- /ch4/4.3.2/src/main/resources/application.properties: -------------------------------------------------------------------------------- 1 | server.port=80 -------------------------------------------------------------------------------- /ch4/4.3.3/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM openjdk:8 2 | LABEL description="Echo IP Java Application" 3 | EXPOSE 60433 4 | RUN git clone https://github.com/iac-source/inbuilder.git 5 | WORKDIR inbuilder 6 | RUN chmod 700 mvnw 7 | RUN ./mvnw clean package 8 | RUN mv target/app-in-host.jar /opt/app-in-image.jar 9 | WORKDIR /opt 10 | ENTRYPOINT [ "java", "-jar", "app-in-image.jar" ] -------------------------------------------------------------------------------- /ch4/4.3.4/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM openjdk:8 AS int-build 2 | LABEL description="Java Application builder" 3 | RUN git clone https://github.com/iac-source/inbuilder.git 4 | WORKDIR inbuilder 5 | RUN chmod 700 mvnw 6 | RUN ./mvnw clean package 7 | 8 | FROM gcr.io/distroless/java:8 9 | LABEL description="Echo IP Java Application" 10 | EXPOSE 60434 11 | COPY --from=int-build inbuilder/target/app-in-host.jar /opt/app-in-image.jar 12 | WORKDIR /opt 13 | ENTRYPOINT [ "java", "-jar", "app-in-image.jar" ] -------------------------------------------------------------------------------- /ch4/4.3.4/k8s-SingleMaster-18.9_9_w_auto-compl/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | N = 3 # max number of worker nodes 6 | Ver = '1.18.4' # Kubernetes Version to install 7 | 8 | #=============# 9 | # Master Node # 10 | #=============# 11 | 12 | config.vm.define "m-k8s" do |cfg| 13 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 14 | cfg.vm.provider "virtualbox" do |vb| 15 | vb.name = "m-k8s(github_SysNet4Admin)" 16 | vb.cpus = 2 17 | vb.memory = 3072 18 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-18.9.9(github_SysNet4Admin)"] 19 | end 20 | cfg.vm.host_name = "m-k8s" 21 | cfg.vm.network "private_network", ip: "192.168.1.10" 22 | cfg.vm.network "forwarded_port", guest: 22, host: 60010, auto_correct: true, id: "ssh" 23 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 24 | cfg.vm.provision "shell", path: "config.sh", args: N 25 | cfg.vm.provision "shell", path: "install_pkg.sh", args: [ Ver, "Main" ] 26 | cfg.vm.provision "shell", path: "master_node.sh" 27 | end 28 | 29 | #==============# 30 | # Worker Nodes # 31 | #==============# 32 | 33 | (1..N).each do |i| 34 | config.vm.define "w#{i}-k8s" do |cfg| 35 | cfg.vm.box = "sysnet4admin/CentOS-k8s" 36 | cfg.vm.provider "virtualbox" do |vb| 37 | vb.name = "w#{i}-k8s(github_SysNet4Admin)" 38 | vb.cpus = 1 39 | vb.memory = 2560 40 | vb.customize ["modifyvm", :id, "--groups", "/k8s-SgMST-18.9.9(github_SysNet4Admin)"] 41 | end 42 | cfg.vm.host_name = "w#{i}-k8s" 43 | cfg.vm.network "private_network", ip: "192.168.1.10#{i}" 44 | cfg.vm.network "forwarded_port", guest: 22, host: "6010#{i}", auto_correct: 
true, id: "ssh" 45 | cfg.vm.synced_folder "../data", "/vagrant", disabled: true 46 | cfg.vm.provision "shell", path: "config.sh", args: N 47 | cfg.vm.provision "shell", path: "install_pkg.sh", args: Ver 48 | cfg.vm.provision "shell", path: "work_nodes.sh" 49 | end 50 | end 51 | 52 | end -------------------------------------------------------------------------------- /ch4/4.3.4/k8s-SingleMaster-18.9_9_w_auto-compl/config.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # vim configuration 4 | echo 'alias vi=vim' >> /etc/profile 5 | 6 | # swapoff -a to disable swapping 7 | swapoff -a 8 | # sed to comment the swap partition in /etc/fstab 9 | sed -i.bak -r 's/(.+ swap .+)/#\1/' /etc/fstab 10 | 11 | # CentOS repo change from mirror to vault 12 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-* 13 | sed -i -e 's/mirrorlist=/#mirrorlist=/g' /etc/yum.conf 14 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.repos.d/CentOS-* 15 | sed -E -i -e 's/#baseurl=http:\/\/mirror.centos.org\/centos\/\$releasever\/([[:alnum:]_-]*)\/\$basearch\//baseurl=https:\/\/vault.centos.org\/7.9.2009\/\1\/\$basearch\//g' /etc/yum.conf 16 | 17 | # kubernetes repo 18 | gg_pkg="http://mirrors.aliyun.com/kubernetes/yum" # Due to shorten addr for key 19 | cat < /etc/yum.repos.d/kubernetes.repo 20 | [kubernetes] 21 | name=Kubernetes 22 | baseurl=${gg_pkg}/repos/kubernetes-el7-x86_64 23 | enabled=1 24 | gpgcheck=0 25 | repo_gpgcheck=0 26 | gpgkey=${gg_pkg}/doc/yum-key.gpg ${gg_pkg}/doc/rpm-package-key.gpg 27 | EOF 28 | 29 | # docker repo 30 | yum install yum-utils -y 31 | yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo 32 | 33 | # Set SELinux in permissive mode (effectively disabling it) 34 | setenforce 0 35 | sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config 36 | 37 | # RHEL/CentOS 7 have reported traffic issues being routed incorrectly due to iptables bypassed 38 | cat < /etc/sysctl.d/k8s.conf 39 | net.bridge.bridge-nf-call-ip6tables = 1 40 | net.bridge.bridge-nf-call-iptables = 1 41 | EOF 42 | modprobe br_netfilter 43 | 44 | # local small dns & vagrant cannot parse and delivery shell code. 
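# The master (192.168.1.10) and every worker node (192.168.1.101 .. 192.168.1.10N) are written
# into /etc/hosts below; $1 is the worker-node count passed in from the Vagrantfile, so the
# nodes can resolve each other by name without an external DNS record per node.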
45 | echo "192.168.1.10 m-k8s" >> /etc/hosts 46 | for (( i=1; i<=$1; i++ )); do echo "192.168.1.10$i w$i-k8s" >> /etc/hosts; done 47 | 48 | # config DNS 49 | cat < /etc/resolv.conf 50 | nameserver 1.1.1.1 #cloudflare DNS 51 | nameserver 8.8.8.8 #Google DNS 52 | EOF 53 | -------------------------------------------------------------------------------- /ch4/4.3.4/k8s-SingleMaster-18.9_9_w_auto-compl/install_pkg.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # install packages 4 | yum install epel-release -y 5 | yum install vim-enhanced -y 6 | yum install git -y 7 | 8 | # install docker 9 | yum install docker-ce-18.09.9-3.el7 docker-ce-cli-18.09.9-3.el7 \ 10 | containerd.io-1.2.6-3.3.el7 -y 11 | systemctl enable --now docker 12 | 13 | # install kubernetes cluster 14 | yum install kubectl-$1 kubelet-$1 kubeadm-$1 -y 15 | systemctl enable --now kubelet 16 | 17 | # git clone _Book_k8sInfra.git 18 | if [ $2 = 'Main' ]; then 19 | git clone https://github.com/sysnet4admin/_Book_k8sInfra.git 20 | mv /home/vagrant/_Book_k8sInfra $HOME 21 | find $HOME/_Book_k8sInfra/ -regex ".*\.\(sh\)" -exec chmod 700 {} \; 22 | fi 23 | -------------------------------------------------------------------------------- /ch4/4.3.4/k8s-SingleMaster-18.9_9_w_auto-compl/master_node.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # init kubernetes 4 | kubeadm init --token 123456.1234567890123456 --token-ttl 0 \ 5 | --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.1.10 6 | 7 | # config for master node only 8 | mkdir -p $HOME/.kube 9 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 10 | chown $(id -u):$(id -g) $HOME/.kube/config 11 | 12 | # raw_address for gitcontent 13 | raw_git="raw.githubusercontent.com/sysnet4admin/IaC/master/manifests" 14 | 15 | # config for kubernetes's network 16 | kubectl apply -f https://$raw_git/172.16_net_calico.yaml 17 | 18 | # install bash-completion for kubectl 19 | yum install bash-completion -y 20 | 21 | # kubectl completion on bash-completion dir 22 | kubectl completion bash >/etc/bash_completion.d/kubectl 23 | 24 | # alias kubectl to k 25 | echo 'alias k=kubectl' >> ~/.bashrc 26 | echo 'complete -F __start_kubectl k' >> ~/.bashrc -------------------------------------------------------------------------------- /ch4/4.3.4/k8s-SingleMaster-18.9_9_w_auto-compl/work_nodes.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # config for work_nodes only 4 | kubeadm join --token 123456.1234567890123456 \ 5 | --discovery-token-unsafe-skip-ca-verification 192.168.1.10:6443 -------------------------------------------------------------------------------- /ch4/4.4.2/create-registry.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | certs=/etc/docker/certs.d/192.168.1.10:8443 3 | mkdir /registry-image 4 | mkdir /etc/docker/certs 5 | mkdir -p $certs 6 | openssl req -x509 -config $(dirname "$0")/tls.csr -nodes -newkey rsa:4096 \ 7 | -keyout tls.key -out tls.crt -days 365 -extensions v3_req 8 | 9 | yum install sshpass -y 10 | for i in {1..3} 11 | do 12 | sshpass -p vagrant ssh -o StrictHostKeyChecking=no root@192.168.1.10$i mkdir -p $certs 13 | sshpass -p vagrant scp tls.crt 192.168.1.10$i:$certs 14 | done 15 | 16 | cp tls.crt $certs 17 | mv tls.* /etc/docker/certs 18 | 19 | docker run -d \ 20 | --restart=always \ 21 | 
--name registry \ 22 | -v /etc/docker/certs:/docker-in-certs:ro \ 23 | -v /registry-image:/var/lib/registry \ 24 | -e REGISTRY_HTTP_ADDR=0.0.0.0:443 \ 25 | -e REGISTRY_HTTP_TLS_CERTIFICATE=/docker-in-certs/tls.crt \ 26 | -e REGISTRY_HTTP_TLS_KEY=/docker-in-certs/tls.key \ 27 | -p 8443:443 \ 28 | registry:2 29 | -------------------------------------------------------------------------------- /ch4/4.4.2/remover.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | certs=/etc/docker/certs.d/192.168.1.10:8443 3 | rm -rf /registry-image 4 | rm -rf /etc/docker/certs 5 | rm -rf $certs 6 | 7 | yum -y install sshpass 8 | for i in {1..3} 9 | do 10 | sshpass -p vagrant ssh -o StrictHostKeyChecking=no root@192.168.1.10$i rm -rf $certs 11 | done 12 | 13 | yum remove sshpass -y 14 | docker rm -f registry 15 | docker rmi registry:2 16 | -------------------------------------------------------------------------------- /ch4/4.4.2/tls.csr: -------------------------------------------------------------------------------- 1 | [req] 2 | distinguished_name = private_registry_cert_req 3 | x509_extensions = v3_req 4 | prompt = no 5 | 6 | [private_registry_cert_req] 7 | C = KR 8 | ST = SEOUL 9 | L = SEOUL 10 | O = gilbut 11 | OU = Book_k8sInfra 12 | CN = 192.168.1.10 13 | 14 | [v3_req] 15 | keyUsage = keyEncipherment, dataEncipherment 16 | extendedKeyUsage = serverAuth 17 | subjectAltName = @alt_names 18 | 19 | [alt_names] 20 | DNS.0 = m-k8s 21 | IP.0 = 192.168.1.10 22 | -------------------------------------------------------------------------------- /ch4/4.4.3/audit-trail/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:stable 2 | 3 | LABEL Name=audit-trail Version=0.0.1 4 | 5 | COPY nginx.conf /etc/nginx/nginx.conf 6 | 7 | RUN ln -sf /dev/stdout /var/log/nginx/access.log \ 8 | && ln -sf /dev/stderr /var/log/nginx/error.log 9 | RUN cp -f /usr/share/zoneinfo/Asia/Seoul /etc/localtime 10 | RUN mkdir -p /audit 11 | 12 | EXPOSE 80 13 | 14 | STOPSIGNAL SIGTERM 15 | 16 | CMD ["nginx", "-g", "daemon off;"] 17 | -------------------------------------------------------------------------------- /ch4/4.4.3/audit-trail/nginx.conf: -------------------------------------------------------------------------------- 1 | #user nobody; 2 | worker_processes 1; 3 | #error_log logs/error.log; 4 | #error_log logs/error.log notice; 5 | #error_log logs/error.log info; 6 | #pid logs/nginx.pid; 7 | user root; 8 | events { 9 | worker_connections 1024; 10 | } 11 | http { 12 | log_format audit '$time_local $server_addr $request_method'; 13 | server { 14 | listen 80; 15 | server_name localhost; 16 | access_log /audit/audit_$hostname.log audit; 17 | location / { 18 | root /tmp; 19 | default_type text/html; 20 | return 200 'pod_n: $hostname | ip_dest: $server_addr\n'; 21 | } 22 | } 23 | } 24 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-hname/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:stable 2 | 3 | LABEL Name=echo-hname Version=0.0.5 4 | COPY nginx.conf /etc/nginx/nginx.conf 5 | COPY cert.crt /etc/nginx/conf.d/cert.crt 6 | COPY cert.key /etc/nginx/conf.d/cert.key 7 | 8 | CMD ["nginx", "-g", "daemon off;"] 9 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-hname/cert.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN 
CERTIFICATE----- 2 | MIIDUTCCAjmgAwIBAgIEXGTIcjANBgkqhkiG9w0BAQsFADBZMQswCQYDVQQGEwJL 3 | UjEMMAoGA1UECBMDZ251MQwwCgYDVQQHEwNnbnUxDDAKBgNVBAoTA2dudTEMMAoG 4 | A1UECxMDZ251MRIwEAYDVQQDEwlsb2NhbGhvc3QwHhcNMjAwMTEzMjIzMzIzWhcN 5 | MjEwMTEyMjIzMzIzWjBZMQswCQYDVQQGEwJLUjEMMAoGA1UECBMDZ251MQwwCgYD 6 | VQQHEwNnbnUxDDAKBgNVBAoTA2dudTEMMAoGA1UECxMDZ251MRIwEAYDVQQDEwls 7 | b2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC30HNtamZw 8 | vphRWKcvQkPsZk/5K9tIawenr5/cyIA8nhyb03ZsRpRDD8n7tSFOMsnvVnvW35GG 9 | zkh8QTaf3SvGbcKPEiX1+xV2tyIMHbmEJ7fhw5qBsE6H8qfLBiirxAEeaweoyikY 10 | zPdbB3pcquXeMhUD31QEWlna+sDmXeM2P7kqZ3VGSCv7by3cip5NLv3smO0HpExw 11 | vd3J/K6VmZsBpQaBnXbUxCoRfdonL4WOA239CyCLOL32eM1UkwLwLTJVEVNrtvJ7 12 | I99IV1dQxLrSjrrgQbZvGe4Vfyzg5ZtjdQVK8T0AZbCHJjWle3IcSgMg6nQizDta 13 | JySET6Iy0VrHAgMBAAGjITAfMB0GA1UdDgQWBBTIb2hOBfAlNRnvgf9yZ6OuZiiH 14 | xTANBgkqhkiG9w0BAQsFAAOCAQEAtTuBNy8UQkYUs7Ud1ymQvHd6kS3hlB8MJA7c 15 | yCw63BLpLl68B/n0gk0Ww8+9Xi7T5AoMIQ9nh6dVK5t+fFn5Qa7UJPsHOR1l1y4V 16 | 5ODZOxKx1Q4Tl7zR3uJdroCVQLDeJ6pBKbzPGla3g0ScKvGSDyXucxztMZbGDimp 17 | YXX6uXrty0jYEBab6fFK69pzM2G9E8c2Scno8iS3cKMk/j76mw5R8d8lCOLaQsO7 18 | 7yUdcsyir4ujTEgGj5+XNSLmrnRooENb67qINArE3TA8W4MGVD9+eADUkhu6bG8m 19 | AfpkysjU1mqPaspBhDFv52Gcz7kav9rmZexnvc0hDWaACkf1mA== 20 | -----END CERTIFICATE----- 21 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-hname/cert.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAt9BzbWpmcL6YUVinL0JD7GZP+SvbSGsHp6+f3MiAPJ4cm9N2 3 | bEaUQw/J+7UhTjLJ71Z71t+Rhs5IfEE2n90rxm3CjxIl9fsVdrciDB25hCe34cOa 4 | gbBOh/KnywYoq8QBHmsHqMopGMz3Wwd6XKrl3jIVA99UBFpZ2vrA5l3jNj+5Kmd1 5 | Rkgr+28t3IqeTS797JjtB6RMcL3dyfyulZmbAaUGgZ121MQqEX3aJy+FjgNt/Qsg 6 | izi99njNVJMC8C0yVRFTa7byeyPfSFdXUMS60o664EG2bxnuFX8s4OWbY3UFSvE9 7 | AGWwhyY1pXtyHEoDIOp0Isw7WickhE+iMtFaxwIDAQABAoIBAHlx/BF6jxxGkRSN 8 | 4kfTHFWAc65JT6RVMsWTv6d7wV5LiNNbr45yQ1rbf7QSRGMKI2lCVqftJpVOjY2q 9 | +JA+7ME5m6Yzc2lF7zR0YsZmjT/HjjJXrimpdvlTVZFKDG0QHz0dsf3PM7/zDCrU 10 | kf/P2fgoVsIsN7J4j42ixvhtZ8VazgZDguwQiKBc+nnsc4QbeR05W8qg61Mf87H4 11 | VwssVaPW9nNZfw+HZvaY6oF4aUvQ3vkOnumxoB9TtxD3P1Qp5Te5WR60r53S63ad 12 | kY5gE4tcgYn2g7l/a9AJQgAZPl6/ovinzZsB5vc+ZsIdaISRFtSlGane7aR/sKc3 13 | qpjWoWECgYEA4Jhm3D/OaExSlV61b0FNqkOJ6ppYoMsafLkNV97l6iw/z6ZlFqrd 14 | +pwrvl7f4tPLGEVrO4UBChFzJmakzB9yaUDpIRd3kfDkz+Bx9ndzBwV7sLItkwJU 15 | O4Lfai0xVCeEgRXiQ2vbtyderDmwWLZXv6heuX31NWMKYXCouqcm3A0CgYEA0YRA 16 | 0kIfHSSK/vIAtocrDpnurOODNdnuKgYOYkHbJ/PN6xhUVo5eeRDQOM8s+Tvl7OTI 17 | 4OJEmrPthBcpHzeYalTYbHU6KR2clY6smJ0NTFcW46QoWSe++TckwBGkcUgzkyDG 18 | YgVlC+Jujr0xLRwa0zkAYDFVvMBheqV+Q8wUGSMCgYEAvaWSvYoXVZSU61IUrEQd 19 | O5daHsKD8gpubECqFre9tnX0z/d2RqSzWgmDGnXsYRFr3ivH93NAxGqlrBhiMYag 20 | SmYoNOwm6BHcc/fW40JL2/LyVequdwMxcyr4UiSlEaVoysNa0omB9u8EjzMLSG14 21 | PPsEOWc1pgXiXxMNNscsFgUCgYEAtsfgHQ4eQrhcomnRgWuOfqB//khFcbd79SFv 22 | bvzxCnvByzVgblqpxIiMfuMO4ygEQJSfQsFjBGuv7Cqgb2F7EFiQrp3ebXwt3LOp 23 | k0KAFXdsuo+9u3nXO2eGIiHCCinpBJP1PhJiwul5dgFLY4U/ScJSt5iSqaZT5EF4 24 | VAE4D20CgYAxs3bC/9p3wd15nw6W1OVTUtMTMc7nKNcxZw7ED7zhPMoaJYXC4GDM 25 | zv5D6Z8KIBJbvNvRJZIRVhcT4Cr+prFr+YuCdIcyYE5TEElwNIddVhh5KZGYsGHF 26 | o2NR47kFdYV4Lns4RR41o6ABs0UdxoOay6IG+fMPy3Gxf6wpSYnRQg== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-hname/nginx.conf: -------------------------------------------------------------------------------- 1 | #user nobody; 2 | worker_processes 1; 3 | #error_log 
logs/error.log; 4 | #error_log logs/error.log notice; 5 | #error_log logs/error.log info; 6 | #pid logs/nginx.pid; 7 | events { 8 | worker_connections 1024; 9 | } 10 | http { 11 | include mime.types; 12 | default_type application/octet-stream; 13 | sendfile on; 14 | server { 15 | listen 80; 16 | server_name localhost; 17 | location / { 18 | default_type text/html; 19 | return 200 '$hostname\n'; 20 | } 21 | error_page 500 502 503 504 /50x.html; 22 | location = /50x.html { 23 | root html; 24 | } 25 | } 26 | server { 27 | listen 443 ssl; 28 | ssl_certificate /etc/nginx/conf.d/cert.crt; 29 | ssl_certificate_key /etc/nginx/conf.d/cert.key; 30 | 31 | ssl_protocols TLSv1.1 TLSv1.2; 32 | ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; 33 | ssl_prefer_server_ciphers on; 34 | ssl_session_cache shared:SSL:10m; 35 | 36 | # disable any limits to avoid HTTP 413 for large image uploads 37 | client_max_body_size 0; 38 | 39 | location / { 40 | default_type text/html; 41 | return 200 '$hostname\n'; 42 | } 43 | 44 | } 45 | } 46 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-ip/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:stable 2 | 3 | LABEL Name=echo-ip Version=0.0.5 4 | COPY nginx.conf /etc/nginx/nginx.conf 5 | COPY cert.crt /etc/nginx/conf.d/cert.crt 6 | COPY cert.key /etc/nginx/conf.d/cert.key 7 | 8 | CMD ["nginx", "-g", "daemon off;"] 9 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-ip/cert.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDUTCCAjmgAwIBAgIEXGTIcjANBgkqhkiG9w0BAQsFADBZMQswCQYDVQQGEwJL 3 | UjEMMAoGA1UECBMDZ251MQwwCgYDVQQHEwNnbnUxDDAKBgNVBAoTA2dudTEMMAoG 4 | A1UECxMDZ251MRIwEAYDVQQDEwlsb2NhbGhvc3QwHhcNMjAwMTEzMjIzMzIzWhcN 5 | MjEwMTEyMjIzMzIzWjBZMQswCQYDVQQGEwJLUjEMMAoGA1UECBMDZ251MQwwCgYD 6 | VQQHEwNnbnUxDDAKBgNVBAoTA2dudTEMMAoGA1UECxMDZ251MRIwEAYDVQQDEwls 7 | b2NhbGhvc3QwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC30HNtamZw 8 | vphRWKcvQkPsZk/5K9tIawenr5/cyIA8nhyb03ZsRpRDD8n7tSFOMsnvVnvW35GG 9 | zkh8QTaf3SvGbcKPEiX1+xV2tyIMHbmEJ7fhw5qBsE6H8qfLBiirxAEeaweoyikY 10 | zPdbB3pcquXeMhUD31QEWlna+sDmXeM2P7kqZ3VGSCv7by3cip5NLv3smO0HpExw 11 | vd3J/K6VmZsBpQaBnXbUxCoRfdonL4WOA239CyCLOL32eM1UkwLwLTJVEVNrtvJ7 12 | I99IV1dQxLrSjrrgQbZvGe4Vfyzg5ZtjdQVK8T0AZbCHJjWle3IcSgMg6nQizDta 13 | JySET6Iy0VrHAgMBAAGjITAfMB0GA1UdDgQWBBTIb2hOBfAlNRnvgf9yZ6OuZiiH 14 | xTANBgkqhkiG9w0BAQsFAAOCAQEAtTuBNy8UQkYUs7Ud1ymQvHd6kS3hlB8MJA7c 15 | yCw63BLpLl68B/n0gk0Ww8+9Xi7T5AoMIQ9nh6dVK5t+fFn5Qa7UJPsHOR1l1y4V 16 | 5ODZOxKx1Q4Tl7zR3uJdroCVQLDeJ6pBKbzPGla3g0ScKvGSDyXucxztMZbGDimp 17 | YXX6uXrty0jYEBab6fFK69pzM2G9E8c2Scno8iS3cKMk/j76mw5R8d8lCOLaQsO7 18 | 7yUdcsyir4ujTEgGj5+XNSLmrnRooENb67qINArE3TA8W4MGVD9+eADUkhu6bG8m 19 | AfpkysjU1mqPaspBhDFv52Gcz7kav9rmZexnvc0hDWaACkf1mA== 20 | -----END CERTIFICATE----- 21 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-ip/cert.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAt9BzbWpmcL6YUVinL0JD7GZP+SvbSGsHp6+f3MiAPJ4cm9N2 3 | bEaUQw/J+7UhTjLJ71Z71t+Rhs5IfEE2n90rxm3CjxIl9fsVdrciDB25hCe34cOa 4 | gbBOh/KnywYoq8QBHmsHqMopGMz3Wwd6XKrl3jIVA99UBFpZ2vrA5l3jNj+5Kmd1 5 | Rkgr+28t3IqeTS797JjtB6RMcL3dyfyulZmbAaUGgZ121MQqEX3aJy+FjgNt/Qsg 6 | izi99njNVJMC8C0yVRFTa7byeyPfSFdXUMS60o664EG2bxnuFX8s4OWbY3UFSvE9 7 | 
AGWwhyY1pXtyHEoDIOp0Isw7WickhE+iMtFaxwIDAQABAoIBAHlx/BF6jxxGkRSN 8 | 4kfTHFWAc65JT6RVMsWTv6d7wV5LiNNbr45yQ1rbf7QSRGMKI2lCVqftJpVOjY2q 9 | +JA+7ME5m6Yzc2lF7zR0YsZmjT/HjjJXrimpdvlTVZFKDG0QHz0dsf3PM7/zDCrU 10 | kf/P2fgoVsIsN7J4j42ixvhtZ8VazgZDguwQiKBc+nnsc4QbeR05W8qg61Mf87H4 11 | VwssVaPW9nNZfw+HZvaY6oF4aUvQ3vkOnumxoB9TtxD3P1Qp5Te5WR60r53S63ad 12 | kY5gE4tcgYn2g7l/a9AJQgAZPl6/ovinzZsB5vc+ZsIdaISRFtSlGane7aR/sKc3 13 | qpjWoWECgYEA4Jhm3D/OaExSlV61b0FNqkOJ6ppYoMsafLkNV97l6iw/z6ZlFqrd 14 | +pwrvl7f4tPLGEVrO4UBChFzJmakzB9yaUDpIRd3kfDkz+Bx9ndzBwV7sLItkwJU 15 | O4Lfai0xVCeEgRXiQ2vbtyderDmwWLZXv6heuX31NWMKYXCouqcm3A0CgYEA0YRA 16 | 0kIfHSSK/vIAtocrDpnurOODNdnuKgYOYkHbJ/PN6xhUVo5eeRDQOM8s+Tvl7OTI 17 | 4OJEmrPthBcpHzeYalTYbHU6KR2clY6smJ0NTFcW46QoWSe++TckwBGkcUgzkyDG 18 | YgVlC+Jujr0xLRwa0zkAYDFVvMBheqV+Q8wUGSMCgYEAvaWSvYoXVZSU61IUrEQd 19 | O5daHsKD8gpubECqFre9tnX0z/d2RqSzWgmDGnXsYRFr3ivH93NAxGqlrBhiMYag 20 | SmYoNOwm6BHcc/fW40JL2/LyVequdwMxcyr4UiSlEaVoysNa0omB9u8EjzMLSG14 21 | PPsEOWc1pgXiXxMNNscsFgUCgYEAtsfgHQ4eQrhcomnRgWuOfqB//khFcbd79SFv 22 | bvzxCnvByzVgblqpxIiMfuMO4ygEQJSfQsFjBGuv7Cqgb2F7EFiQrp3ebXwt3LOp 23 | k0KAFXdsuo+9u3nXO2eGIiHCCinpBJP1PhJiwul5dgFLY4U/ScJSt5iSqaZT5EF4 24 | VAE4D20CgYAxs3bC/9p3wd15nw6W1OVTUtMTMc7nKNcxZw7ED7zhPMoaJYXC4GDM 25 | zv5D6Z8KIBJbvNvRJZIRVhcT4Cr+prFr+YuCdIcyYE5TEElwNIddVhh5KZGYsGHF 26 | o2NR47kFdYV4Lns4RR41o6ABs0UdxoOay6IG+fMPy3Gxf6wpSYnRQg== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /ch4/4.4.3/echo-ip/nginx.conf: -------------------------------------------------------------------------------- 1 | #user nobody; 2 | worker_processes 1; 3 | #error_log logs/error.log; 4 | #error_log logs/error.log notice; 5 | #error_log logs/error.log info; 6 | #pid logs/nginx.pid; 7 | events { 8 | worker_connections 1024; 9 | } 10 | http { 11 | include mime.types; 12 | default_type application/octet-stream; 13 | sendfile on; 14 | server { 15 | listen 80; 16 | server_name localhost; 17 | location / { 18 | default_type text/html; 19 | return 200 'request_method : $request_method | ip_dest: $server_addr\n'; 20 | } 21 | error_page 500 502 503 504 /50x.html; 22 | location = /50x.html { 23 | root html; 24 | } 25 | } 26 | server { 27 | listen 443 ssl; 28 | ssl_certificate /etc/nginx/conf.d/cert.crt; 29 | ssl_certificate_key /etc/nginx/conf.d/cert.key; 30 | 31 | ssl_protocols TLSv1.1 TLSv1.2; 32 | ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; 33 | ssl_prefer_server_ciphers on; 34 | ssl_session_cache shared:SSL:10m; 35 | 36 | # disable any limits to avoid HTTP 413 for large image uploads 37 | client_max_body_size 0; 38 | 39 | location / { 40 | default_type text/html; 41 | return 200 'request_method : $request_method | ip_dest: $server_addr\n'; 42 | } 43 | 44 | } 45 | } 46 | -------------------------------------------------------------------------------- /ch4/README.md: -------------------------------------------------------------------------------- 1 | # 4장, 쿠버네티스를 이루는 컨테이너 도우미, 도커 2 | --- 3 | ## 4.2 도커로 컨테이너 다루기 4 | - 4.2.3 컨테이너 용량 줄이기 5 | ## 4.3 4가지 방법으로 컨테이너 이미지 만들기 6 | - 4.3.1 기본 방법으로 빌드하기 7 | - 4.3.2 컨테이너 용량 줄이기 8 | - 4.3.3 컨테이너 내부에서 컨테이너 빌드하기 9 | - 4.3.4 최적화해 컨테이너 빌드하기 10 | ## 4.4 쿠버네티스에서 직접 만든 컨테이너 사용하기 11 | - 4.4.2 레지스트리 구성하기 12 | - 4.4.3 직접 만든 이미지로 컨테이너 구동하기 13 | -------------------------------------------------------------------------------- /ch5/5.2.2/kustomize-install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | curl -L \ 3 
| https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.6.1/kustomize_v3.6.1_linux_amd64.tar.gz -o /tmp/kustomize.tar.gz 4 | tar -xzf /tmp/kustomize.tar.gz -C /usr/local/bin 5 | echo "kustomize install successfully" 6 | -------------------------------------------------------------------------------- /ch5/5.2.2/metallb-l2config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | namespace: metallb-system 5 | name: config 6 | data: 7 | config: | 8 | address-pools: 9 | - name: metallb-ip-range 10 | protocol: layer2 11 | addresses: 12 | - 192.168.1.11-192.168.1.19 -------------------------------------------------------------------------------- /ch5/5.2.2/metallb.yaml: -------------------------------------------------------------------------------- 1 | 2 | apiVersion: policy/v1beta1 3 | kind: PodSecurityPolicy 4 | metadata: 5 | labels: 6 | app: metallb 7 | name: speaker 8 | namespace: metallb-system 9 | spec: 10 | allowPrivilegeEscalation: false 11 | allowedCapabilities: 12 | - NET_ADMIN 13 | - NET_RAW 14 | - SYS_ADMIN 15 | fsGroup: 16 | rule: RunAsAny 17 | hostNetwork: true 18 | hostPorts: 19 | - max: 7472 20 | min: 7472 21 | privileged: true 22 | runAsUser: 23 | rule: RunAsAny 24 | seLinux: 25 | rule: RunAsAny 26 | supplementalGroups: 27 | rule: RunAsAny 28 | volumes: 29 | - '*' 30 | --- 31 | apiVersion: v1 32 | kind: ServiceAccount 33 | metadata: 34 | labels: 35 | app: metallb 36 | name: controller 37 | namespace: metallb-system 38 | --- 39 | apiVersion: v1 40 | kind: ServiceAccount 41 | metadata: 42 | labels: 43 | app: metallb 44 | name: speaker 45 | namespace: metallb-system 46 | --- 47 | apiVersion: rbac.authorization.k8s.io/v1 48 | kind: ClusterRole 49 | metadata: 50 | labels: 51 | app: metallb 52 | name: metallb-system:controller 53 | rules: 54 | - apiGroups: 55 | - '' 56 | resources: 57 | - services 58 | verbs: 59 | - get 60 | - list 61 | - watch 62 | - update 63 | - apiGroups: 64 | - '' 65 | resources: 66 | - services/status 67 | verbs: 68 | - update 69 | - apiGroups: 70 | - '' 71 | resources: 72 | - events 73 | verbs: 74 | - create 75 | - patch 76 | --- 77 | apiVersion: rbac.authorization.k8s.io/v1 78 | kind: ClusterRole 79 | metadata: 80 | labels: 81 | app: metallb 82 | name: metallb-system:speaker 83 | rules: 84 | - apiGroups: 85 | - '' 86 | resources: 87 | - services 88 | - endpoints 89 | - nodes 90 | verbs: 91 | - get 92 | - list 93 | - watch 94 | - apiGroups: 95 | - '' 96 | resources: 97 | - events 98 | verbs: 99 | - create 100 | - patch 101 | - apiGroups: 102 | - extensions 103 | resourceNames: 104 | - speaker 105 | resources: 106 | - podsecuritypolicies 107 | verbs: 108 | - use 109 | --- 110 | apiVersion: rbac.authorization.k8s.io/v1 111 | kind: Role 112 | metadata: 113 | labels: 114 | app: metallb 115 | name: config-watcher 116 | namespace: metallb-system 117 | rules: 118 | - apiGroups: 119 | - '' 120 | resources: 121 | - configmaps 122 | verbs: 123 | - get 124 | - list 125 | - watch 126 | --- 127 | apiVersion: rbac.authorization.k8s.io/v1 128 | kind: ClusterRoleBinding 129 | metadata: 130 | labels: 131 | app: metallb 132 | name: metallb-system:controller 133 | roleRef: 134 | apiGroup: rbac.authorization.k8s.io 135 | kind: ClusterRole 136 | name: metallb-system:controller 137 | subjects: 138 | - kind: ServiceAccount 139 | name: controller 140 | namespace: metallb-system 141 | --- 142 | apiVersion: rbac.authorization.k8s.io/v1 143 | kind: ClusterRoleBinding 144 | 
metadata: 145 | labels: 146 | app: metallb 147 | name: metallb-system:speaker 148 | roleRef: 149 | apiGroup: rbac.authorization.k8s.io 150 | kind: ClusterRole 151 | name: metallb-system:speaker 152 | subjects: 153 | - kind: ServiceAccount 154 | name: speaker 155 | namespace: metallb-system 156 | --- 157 | apiVersion: rbac.authorization.k8s.io/v1 158 | kind: RoleBinding 159 | metadata: 160 | labels: 161 | app: metallb 162 | name: config-watcher 163 | namespace: metallb-system 164 | roleRef: 165 | apiGroup: rbac.authorization.k8s.io 166 | kind: Role 167 | name: config-watcher 168 | subjects: 169 | - kind: ServiceAccount 170 | name: controller 171 | - kind: ServiceAccount 172 | name: speaker 173 | --- 174 | apiVersion: apps/v1 175 | kind: DaemonSet 176 | metadata: 177 | labels: 178 | app: metallb 179 | component: speaker 180 | name: speaker 181 | namespace: metallb-system 182 | spec: 183 | selector: 184 | matchLabels: 185 | app: metallb 186 | component: speaker 187 | template: 188 | metadata: 189 | annotations: 190 | prometheus.io/port: '7472' 191 | prometheus.io/scrape: 'true' 192 | labels: 193 | app: metallb 194 | component: speaker 195 | spec: 196 | containers: 197 | - args: 198 | - --port=7472 199 | - --config=config 200 | env: 201 | - name: METALLB_NODE_NAME 202 | valueFrom: 203 | fieldRef: 204 | fieldPath: spec.nodeName 205 | - name: METALLB_HOST 206 | valueFrom: 207 | fieldRef: 208 | fieldPath: status.hostIP 209 | image: quay.io/metallb/speaker:v0.8.2 210 | imagePullPolicy: IfNotPresent 211 | name: speaker 212 | ports: 213 | - containerPort: 7472 214 | name: monitoring 215 | resources: 216 | limits: 217 | cpu: 100m 218 | memory: 100Mi 219 | securityContext: 220 | allowPrivilegeEscalation: false 221 | capabilities: 222 | add: 223 | - NET_ADMIN 224 | - NET_RAW 225 | - SYS_ADMIN 226 | drop: 227 | - ALL 228 | readOnlyRootFilesystem: true 229 | hostNetwork: true 230 | nodeSelector: 231 | beta.kubernetes.io/os: linux 232 | serviceAccountName: speaker 233 | terminationGracePeriodSeconds: 0 234 | tolerations: 235 | - effect: NoSchedule 236 | key: node-role.kubernetes.io/master 237 | --- 238 | apiVersion: apps/v1 239 | kind: Deployment 240 | metadata: 241 | labels: 242 | app: metallb 243 | component: controller 244 | name: controller 245 | namespace: metallb-system 246 | spec: 247 | revisionHistoryLimit: 3 248 | selector: 249 | matchLabels: 250 | app: metallb 251 | component: controller 252 | template: 253 | metadata: 254 | annotations: 255 | prometheus.io/port: '7472' 256 | prometheus.io/scrape: 'true' 257 | labels: 258 | app: metallb 259 | component: controller 260 | spec: 261 | containers: 262 | - args: 263 | - --port=7472 264 | - --config=config 265 | image: quay.io/metallb/controller:v0.8.2 266 | imagePullPolicy: IfNotPresent 267 | name: controller 268 | ports: 269 | - containerPort: 7472 270 | name: monitoring 271 | resources: 272 | limits: 273 | cpu: 100m 274 | memory: 100Mi 275 | securityContext: 276 | allowPrivilegeEscalation: false 277 | capabilities: 278 | drop: 279 | - all 280 | readOnlyRootFilesystem: true 281 | nodeSelector: 282 | beta.kubernetes.io/os: linux 283 | securityContext: 284 | runAsNonRoot: true 285 | runAsUser: 65534 286 | serviceAccountName: controller 287 | terminationGracePeriodSeconds: 0 288 | -------------------------------------------------------------------------------- /ch5/5.2.2/namespace.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Namespace 3 | metadata: 4 | name: metallb-system 5 | 
labels: 6 | app: metallb -------------------------------------------------------------------------------- /ch5/5.3.1/jenkins-config.yaml: -------------------------------------------------------------------------------- 1 | jenkins: 2 | agentProtocols: 3 | - "JNLP4-connect" 4 | - "Ping" 5 | authorizationStrategy: 6 | loggedInUsersCanDoAnything: 7 | allowAnonymousRead: false 8 | clouds: 9 | - kubernetes: 10 | containerCap: 10 11 | containerCapStr: "10" 12 | jenkinsTunnel: "jenkins-agent:50000" 13 | jenkinsUrl: "http://jenkins:80" 14 | maxRequestsPerHost: 32 15 | maxRequestsPerHostStr: "32" 16 | name: "kubernetes" 17 | namespace: "default" 18 | podLabels: 19 | - key: "jenkins/jenkins-jenkins-slave" 20 | value: "true" 21 | serverUrl: "https://kubernetes.default" 22 | templates: 23 | - containers: 24 | - args: "^${computer.jnlpmac} ^${computer.name}" 25 | command: "" 26 | envVars: 27 | - envVar: 28 | key: "JENKINS_URL" 29 | value: "http://192.168.1.11" 30 | image: "jenkins/inbound-agent:4.3-4" 31 | livenessProbe: 32 | failureThreshold: 0 33 | initialDelaySeconds: 0 34 | periodSeconds: 0 35 | successThreshold: 0 36 | timeoutSeconds: 0 37 | name: "jnlp" 38 | resourceLimitCpu: "512m" 39 | resourceLimitMemory: "512Mi" 40 | resourceRequestCpu: "512m" 41 | resourceRequestMemory: "512Mi" 42 | workingDir: "/home/jenkins" 43 | hostNetwork: false 44 | label: "jenkins-jenkins-slave " 45 | name: "default" 46 | nodeUsageMode: NORMAL 47 | podRetention: "never" 48 | runAsGroup: "993" 49 | runAsUser: "1000" 50 | serviceAccount: "jenkins" 51 | volumes: 52 | - hostPathVolume: 53 | hostPath: "/usr/bin/kubectl" 54 | mountPath: "/usr/bin/kubectl" 55 | - hostPathVolume: 56 | hostPath: "/bin/docker" 57 | mountPath: "/bin/docker" 58 | - hostPathVolume: 59 | hostPath: "/var/run/docker.sock" 60 | mountPath: "/var/run/docker.sock" 61 | yamlMergeStrategy: "override" 62 | crumbIssuer: 63 | standard: 64 | excludeClientIPFromCrumb: true 65 | disableRememberMe: false 66 | disabledAdministrativeMonitors: 67 | - "hudson.model.UpdateCenter$CoreUpdateMonitor" 68 | - "jenkins.diagnostics.RootUrlNotSetMonitor" 69 | - "jenkins.security.UpdateSiteWarningsMonitor" 70 | labelAtoms: 71 | - name: "master" 72 | markupFormatter: "plainText" 73 | mode: NORMAL 74 | myViewsTabBar: "standard" 75 | numExecutors: 0 76 | primaryView: 77 | all: 78 | name: "all" 79 | projectNamingStrategy: "standard" 80 | quietPeriod: 5 81 | remotingSecurity: 82 | enabled: true 83 | scmCheckoutRetryCount: 0 84 | securityRealm: "legacy" 85 | slaveAgentPort: 50000 86 | updateCenter: 87 | sites: 88 | - id: "default" 89 | url: "https://raw.githubusercontent.com/IaC-Source/Jenkins-updateCenter/main/update-center.json" 90 | views: 91 | - all: 92 | name: "all" 93 | viewsTabBar: "standard" 94 | security: 95 | apiToken: 96 | creationOfLegacyTokenEnabled: false 97 | tokenGenerationOnCreationEnabled: false 98 | usageStatisticsEnabled: true 99 | sSHD: 100 | port: -1 101 | unclassified: 102 | buildDiscarders: 103 | configuredBuildDiscarders: 104 | - "jobBuildDiscarder" 105 | fingerprints: 106 | fingerprintCleanupDisabled: false 107 | storage: "file" 108 | gitSCM: 109 | createAccountBasedOnEmail: false 110 | showEntireCommitSummaryInChanges: false 111 | useExistingAccountWithSameEmail: false 112 | junitTestResultStorage: 113 | storage: "file" 114 | location: 115 | adminAddress: "address not configured yet " 116 | mailer: 117 | charset: "UTF-8" 118 | useSsl: false 119 | useTls: false 120 | pollSCM: 121 | pollingThreadCount: 10 122 | tool: 123 | git: 124 | installations: 125 | - 
home: "git" 126 | name: "Default" -------------------------------------------------------------------------------- /ch5/5.3.1/jenkins-install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | jkopt1="--sessionTimeout=1440" 3 | jkopt2="--sessionEviction=86400" 4 | jvopt1="-Duser.timezone=Asia/Seoul" 5 | jvopt2="-Dcasc.jenkins.config=https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/main/ch5/5.3.1/jenkins-config.yaml" 6 | jvopt3="-Dhudson.model.DownloadService.noSignatureCheck=true" 7 | 8 | helm install jenkins edu/jenkins \ 9 | --set persistence.existingClaim=jenkins \ 10 | --set master.adminPassword=admin \ 11 | --set master.nodeSelector."kubernetes\.io/hostname"=m-k8s \ 12 | --set master.tolerations[0].key=node-role.kubernetes.io/master \ 13 | --set master.tolerations[0].effect=NoSchedule \ 14 | --set master.tolerations[0].operator=Exists \ 15 | --set master.runAsUser=1000 \ 16 | --set master.runAsGroup=1000 \ 17 | --set master.tag=2.249.3-lts-centos7 \ 18 | --set master.serviceType=LoadBalancer \ 19 | --set master.servicePort=80 \ 20 | --set master.jenkinsOpts="$jkopt1 $jkopt2" \ 21 | --set master.javaOpts="$jvopt1 $jvopt2 $jvopt3" 22 | -------------------------------------------------------------------------------- /ch5/5.3.1/jenkins-volume.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: PersistentVolume 4 | metadata: 5 | name: jenkins 6 | spec: 7 | capacity: 8 | storage: 10Gi 9 | accessModes: 10 | - ReadWriteMany 11 | persistentVolumeReclaimPolicy: Retain 12 | nfs: 13 | server: 192.168.1.10 14 | path: /nfs_shared/jenkins 15 | --- 16 | apiVersion: v1 17 | kind: PersistentVolumeClaim 18 | metadata: 19 | name: jenkins 20 | spec: 21 | accessModes: 22 | - ReadWriteMany 23 | resources: 24 | requests: 25 | storage: 10Gi 26 | -------------------------------------------------------------------------------- /ch5/5.3.1/nfs-exporter.sh: -------------------------------------------------------------------------------- 1 | nfsdir=/nfs_shared/$1 2 | if [ $# -eq 0 ]; then 3 | echo "usage: nfs-exporter.sh "; exit 0 4 | fi 5 | 6 | if [[ ! -d $nfsdir ]]; then 7 | mkdir -p $nfsdir 8 | echo "$nfsdir 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports 9 | if [[ $(systemctl is-enabled nfs) -eq "disabled" ]]; then 10 | systemctl enable nfs 11 | fi 12 | systemctl restart nfs 13 | fi 14 | -------------------------------------------------------------------------------- /ch5/5.4.1/echo-ip-101.freestyle: -------------------------------------------------------------------------------- 1 | docker build -t 192.168.1.10:8443/echo-ip . 
2 | docker push 192.168.1.10:8443/echo-ip 3 | kubectl create deployment fs-echo-ip --image=192.168.1.10:8443/echo-ip 4 | kubectl expose deployment fs-echo-ip --type=LoadBalancer --name=fs-echo-ip-svc --port=8080 --target-port=80 5 | -------------------------------------------------------------------------------- /ch5/5.5.1/Jenkinsfile: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | stages { 4 | stage('git pull') { 5 | steps { 6 | // Git-URL will replace by sed command before RUN 7 | git url: 'Git-URL', branch: 'main' 8 | } 9 | } 10 | stage('k8s deploy'){ 11 | steps { 12 | kubernetesDeploy(kubeconfigId: 'kubeconfig', 13 | configs: '*.yaml') 14 | } 15 | } 16 | } 17 | } -------------------------------------------------------------------------------- /ch5/5.5.1/README.md: -------------------------------------------------------------------------------- 1 | # GitOps 2 | -------------------------------------------------------------------------------- /ch5/5.5.1/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | labels: 5 | app: gitops-nginx 6 | name: gitops-nginx 7 | spec: 8 | replicas: 2 9 | selector: 10 | matchLabels: 11 | app: gitops-nginx 12 | template: 13 | metadata: 14 | labels: 15 | app: gitops-nginx 16 | spec: 17 | containers: 18 | - image: nginx 19 | name: nginx 20 | -------------------------------------------------------------------------------- /ch5/5.5.2/Jenkinsfile: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | stages { 4 | stage('deploy start') { 5 | steps { 6 | slackSend(message: "Deploy ${env.BUILD_NUMBER} Started" 7 | , color: 'good', tokenCredentialId: 'slack-key') 8 | } 9 | } 10 | stage('git pull') { 11 | steps { 12 | // Git-URL will replace by sed command before RUN 13 | git url: 'Git-URL', branch: 'main' 14 | } 15 | } 16 | stage('k8s deploy'){ 17 | steps { 18 | kubernetesDeploy(kubeconfigId: 'kubeconfig', 19 | configs: '*.yaml') 20 | } 21 | } 22 | stage('deploy end') { 23 | steps { 24 | slackSend(message: """${env.JOB_NAME} #${env.BUILD_NUMBER} End 25 | """, color: 'good', tokenCredentialId: 'slack-key') 26 | } 27 | } 28 | } 29 | } -------------------------------------------------------------------------------- /ch5/5.5.3/Jenkinsfile: -------------------------------------------------------------------------------- 1 | pipeline { 2 | agent any 3 | stages { 4 | stage('Deploy start') { 5 | steps { 6 | slackSend(message: "Deploy ${env.BUILD_NUMBER} Started" 7 | , color: 'good', tokenCredentialId: 'slack-key') 8 | } 9 | } 10 | stage('git pull') { 11 | steps { 12 | // Git-URL will replace by sed command before RUN 13 | git url: 'Git-URL', branch: 'main' 14 | } 15 | } 16 | stage('k8s deploy'){ 17 | steps { 18 | kubernetesDeploy(kubeconfigId: 'kubeconfig', 19 | configs: '*.yaml') 20 | } 21 | } 22 | stage('send diff') { 23 | steps { 24 | script { 25 | def publisher = LastChanges.getLastChangesPublisher "PREVIOUS_REVISION", "SIDE", "LINE", true, true, "", "", "", "", "" 26 | publisher.publishLastChanges() 27 | def htmlDiff = publisher.getHtmlDiff() 28 | writeFile file: "deploy-diff-${env.BUILD_NUMBER}.html", text: htmlDiff 29 | } 30 | slackSend(message: """${env.JOB_NAME} #${env.BUILD_NUMBER} End 31 | (<${env.BUILD_URL}/last-changes|Check Last changed>)""" 32 | , color: 'good', tokenCredentialId: 'slack-key') 33 | } 34 | } 35 | } 36 | } 
-------------------------------------------------------------------------------- /ch5/README.md: -------------------------------------------------------------------------------- 1 | # 5장, 지속적 통합과 배포 자동화, 젠킨스 2 | --- 3 | ## 5.2 젠킨스 설치를 위한 간편화 도구 살펴보기 4 | - 5.2.2 커스터마이즈로 배포 간편화하기 5 | - 5.2.3 헬름으로 배포 간편화하기 6 | ## 5.3 젠킨스 설치 및 설정하기 7 | - 5.3.1 헬름으로 젠킨스 설치하기 8 | ## 5.4 젠킨스로 CI/CD 구현하기 9 | - 5.4.1 Freestyle로 간단히 echo-ip 배포하기 10 | ## 5.5 젠킨스 플러그인을 통해 구현되는 GitOps 11 | - 5.5.1 쿠버네티스 환경에 적합한 선언적인 배포 환경 12 | - 5.5.2 슬랙을 통해 변경 사항 알리기 13 | - 5.5.3 배포 변경 사항을 자동 비교하기 14 | -------------------------------------------------------------------------------- /ch6/6.2.1/nfs-exporter.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nfsdir=/nfs_shared/$1 3 | if [ $# -eq 0 ]; then 4 | echo "usage: nfs-exporter.sh "; exit 0 5 | fi 6 | 7 | if [[ ! -d $nfsdir ]]; then 8 | mkdir -p $nfsdir 9 | echo "$nfsdir 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports 10 | if [[ $(systemctl is-enabled nfs) -eq "disabled" ]]; then 11 | systemctl enable nfs 12 | fi 13 | systemctl restart nfs 14 | fi -------------------------------------------------------------------------------- /ch6/6.2.1/prometheus-install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | helm install prometheus edu/prometheus \ 3 | --set pushgateway.enabled=false \ 4 | --set alertmanager.enabled=false \ 5 | --set nodeExporter.tolerations[0].key=node-role.kubernetes.io/master \ 6 | --set nodeExporter.tolerations[0].effect=NoSchedule \ 7 | --set nodeExporter.tolerations[0].operator=Exists \ 8 | --set server.persistentVolume.existingClaim="prometheus-server" \ 9 | --set server.securityContext.runAsGroup=1000 \ 10 | --set server.securityContext.runAsUser=1000 \ 11 | --set server.service.type="LoadBalancer" \ 12 | --set server.extraFlags[0]="storage.tsdb.no-lockfile" 13 | -------------------------------------------------------------------------------- /ch6/6.2.1/prometheus-server-preconfig.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # check helm command 4 | echo "[Step 1/4] Task [Check helm status]" 5 | if [ ! -e "/usr/local/bin/helm" ]; then 6 | echo "[Step 1/4] helm not found" 7 | exit 1 8 | fi 9 | echo "[Step 1/4] ok" 10 | 11 | # check metallb 12 | echo "[Step 2/4] Task [Check MetalLB status]" 13 | namespace=$(kubectl get namespace metallb-system -o jsonpath={.metadata.name} 2> /dev/null) 14 | if [ "$namespace" == "" ]; then 15 | echo "[Step 2/4] metallb not found" 16 | exit 1 17 | fi 18 | echo "[Step 2/4] ok" 19 | 20 | # create nfs directory & change owner 21 | nfsdir=/nfs_shared/prometheus/server 22 | echo "[Step 3/4] Task [Create NFS directory for prometheus-server]" 23 | if [ ! 
-e "$nfsdir" ]; then 24 | ~/_Book_k8sInfra/ch6/6.2.1/nfs-exporter.sh prometheus/server 25 | chown 1000:1000 $nfsdir 26 | echo "$nfsdir created" 27 | echo "[Step 3/4] Successfully completed" 28 | else 29 | echo "[Step 3/4] failed: $nfsdir already exists" 30 | exit 1 31 | fi 32 | 33 | # create pv,pvc 34 | echo "[Step 4/4] Task [Create PV,PVC for prometheus-server]" 35 | pvc=$(kubectl get pvc prometheus-server -o jsonpath={.metadata.name} 2> /dev/null) 36 | if [ "$pvc" == "" ]; then 37 | kubectl apply -f ~/_Book_k8sInfra/ch6/6.2.1/prometheus-server-volume.yaml 38 | echo "[Step 4/4] Successfully completed" 39 | else 40 | echo "[Step 4/4] failed: prometheus-server pv,pvc already exist" 41 | fi -------------------------------------------------------------------------------- /ch6/6.2.1/prometheus-server-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: prometheus-server 5 | spec: 6 | capacity: 7 | storage: 10Gi 8 | accessModes: 9 | - ReadWriteMany 10 | persistentVolumeReclaimPolicy: Retain 11 | nfs: 12 | server: 192.168.1.10 13 | path: /nfs_shared/prometheus/server 14 | --- 15 | apiVersion: v1 16 | kind: PersistentVolumeClaim 17 | metadata: 18 | name: prometheus-server 19 | spec: 20 | accessModes: 21 | - ReadWriteMany 22 | resources: 23 | requests: 24 | storage: 10Gi -------------------------------------------------------------------------------- /ch6/6.2.3/nginx-status-annot.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: nginx 9 | template: 10 | metadata: 11 | labels: 12 | app: nginx 13 | annotations: 14 | prometheus.io/port: "80" 15 | prometheus.io/scrape: "true" 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: sysnet4admin/nginx-status 20 | ports: 21 | - containerPort: 80 22 | -------------------------------------------------------------------------------- /ch6/6.2.3/nginx-status-metrics.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nginx 5 | spec: 6 | selector: 7 | matchLabels: 8 | app: nginx 9 | template: 10 | metadata: 11 | labels: 12 | app: nginx 13 | annotations: 14 | prometheus.io/port: "9113" 15 | prometheus.io/scrape: "true" 16 | spec: 17 | containers: 18 | - name: nginx 19 | image: sysnet4admin/nginx-status 20 | ports: 21 | - containerPort: 80 22 | - name: nginx-exporter 23 | image: nginx/nginx-prometheus-exporter:0.8.0 24 | env: 25 | - name: SCRAPE_URI 26 | value: http://localhost:80/stub_status 27 | -------------------------------------------------------------------------------- /ch6/6.4.1/grafana-install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | helm install grafana edu/grafana \ 3 | --set persistence.enabled=true \ 4 | --set persistence.existingClaim=grafana \ 5 | --set service.type=LoadBalancer \ 6 | --set securityContext.runAsUser=1000 \ 7 | --set securityContext.runAsGroup=1000 \ 8 | --set adminPassword="admin" -------------------------------------------------------------------------------- /ch6/6.4.1/grafana-preconfig.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # check helm command 4 | echo "[Step 1/4] Task [Check helm status]" 5 | if [ ! 
-e "/usr/local/bin/helm" ]; then 6 | echo "[Step 1/4] helm not found" 7 | exit 1 8 | fi 9 | echo "[Step 1/4] ok" 10 | 11 | # check metallb 12 | echo "[Step 2/4] Task [Check MetalLB status]" 13 | namespace=$(kubectl get namespace metallb-system -o jsonpath={.metadata.name} 2> /dev/null) 14 | if [ "$namespace" == "" ]; then 15 | echo "[Step 2/4] metallb not found" 16 | exit 1 17 | fi 18 | echo "[Step 2/4] ok" 19 | 20 | # create nfs directory & change owner 21 | nfsdir=/nfs_shared/grafana 22 | echo "[Step 3/4] Task [Create NFS directory for grafana]" 23 | if [ ! -e "$nfsdir" ]; then 24 | ~/_Book_k8sInfra/ch6/6.4.1/nfs-exporter.sh grafana 25 | chown 1000:1000 $nfsdir 26 | echo "$nfsdir created" 27 | echo "[Step 3/4] Successfully completed" 28 | else 29 | echo "[Step 3/4] failed: $nfsdir already exists" 30 | exit 1 31 | fi 32 | 33 | # create pv,pvc 34 | echo "[Step 4/4] Task [Create PV,PVC for grafana]" 35 | pvc=$(kubectl get pvc grafana -o jsonpath={.metadata.name} 2> /dev/null) 36 | if [ "$pvc" == "" ]; then 37 | kubectl apply -f ~/_Book_k8sInfra/ch6/6.4.1/grafana-volume.yaml 38 | echo "[Step 4/4] Successfully completed" 39 | else 40 | echo "[Step 4/4] failed: grafana pv,pvc already exist" 41 | fi -------------------------------------------------------------------------------- /ch6/6.4.1/grafana-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: grafana 5 | spec: 6 | capacity: 7 | storage: 10Gi 8 | accessModes: 9 | - ReadWriteMany 10 | persistentVolumeReclaimPolicy: Retain 11 | nfs: 12 | server: 192.168.1.10 13 | path: /nfs_shared/grafana 14 | --- 15 | apiVersion: v1 16 | kind: PersistentVolumeClaim 17 | metadata: 18 | name: grafana 19 | spec: 20 | accessModes: 21 | - ReadWriteMany 22 | resources: 23 | requests: 24 | storage: 10Gi -------------------------------------------------------------------------------- /ch6/6.4.1/nfs-exporter.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nfsdir=/nfs_shared/$1 3 | if [ $# -eq 0 ]; then 4 | echo "usage: nfs-exporter.sh "; exit 0 5 | fi 6 | 7 | if [[ ! 
-d $nfsdir ]]; then 8 | mkdir -p $nfsdir 9 | echo "$nfsdir 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports 10 | if [[ $(systemctl is-enabled nfs) -eq "disabled" ]]; then 11 | systemctl enable nfs 12 | fi 13 | systemctl restart nfs 14 | fi -------------------------------------------------------------------------------- /ch6/6.5.1/alert-notifier.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | annotations: 5 | meta.helm.sh/release-name: prometheus 6 | meta.helm.sh/release-namespace: default 7 | labels: 8 | app: prometheus 9 | app.kubernetes.io/managed-by: Helm 10 | chart: prometheus-11.6.0 11 | component: alertmanager 12 | heritage: Helm 13 | release: prometheus 14 | name: prometheus-notifier-config 15 | namespace: default 16 | data: 17 | alertmanager.yml: | 18 | global: 19 | slack_api_url: Slack-URL 20 | receivers: 21 | - name: slack-notifier 22 | slack_configs: 23 | - channel: #monitoring 24 | send_resolved: true 25 | title: '[{{.Status | toUpper}}] {{ .CommonLabels.alertname }}' 26 | text: >- 27 | *Description:* {{ .CommonAnnotations.description }} 28 | route: 29 | group_wait: 10s 30 | group_interval: 1m 31 | repeat_interval: 5m 32 | receiver: slack-notifier -------------------------------------------------------------------------------- /ch6/6.5.1/nfs-exporter.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nfsdir=/nfs_shared/$1 3 | if [ $# -eq 0 ]; then 4 | echo "usage: nfs-exporter.sh "; exit 0 5 | fi 6 | 7 | if [[ ! -d $nfsdir ]]; then 8 | mkdir -p $nfsdir 9 | echo "$nfsdir 192.168.1.0/24(rw,sync,no_root_squash)" >> /etc/exports 10 | if [[ $(systemctl is-enabled nfs) -eq "disabled" ]]; then 11 | systemctl enable nfs 12 | fi 13 | systemctl restart nfs 14 | fi -------------------------------------------------------------------------------- /ch6/6.5.1/prometheus-alertmanager-install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | helm upgrade prometheus edu/prometheus \ 3 | --set pushgateway.enabled=false \ 4 | --set nodeExporter.tolerations[0].key=node-role.kubernetes.io/master \ 5 | --set nodeExporter.tolerations[0].effect=NoSchedule \ 6 | --set nodeExporter.tolerations[0].operator=Exists \ 7 | --set alertmanager.persistentVolume.existingClaim="prometheus-alertmanager" \ 8 | --set server.persistentVolume.existingClaim="prometheus-server" \ 9 | --set server.securityContext.runAsGroup=1000 \ 10 | --set server.securityContext.runAsUser=1000 \ 11 | --set server.service.type="LoadBalancer" \ 12 | --set server.baseURL="http://192.168.1.12" \ 13 | --set server.service.loadBalancerIP="192.168.1.12" \ 14 | --set server.extraFlags[0]="storage.tsdb.no-lockfile" \ 15 | --set alertmanager.configMapOverrideName=notifier-config \ 16 | --set alertmanager.securityContext.runAsGroup=1000 \ 17 | --set alertmanager.securityContext.runAsUser=1000 \ 18 | --set alertmanager.service.type="LoadBalancer" \ 19 | --set alertmanager.service.loadBalancerIP="192.168.1.14" \ 20 | --set alertmanager.baseURL="http://192.168.1.14" \ 21 | -f ~/_Book_k8sInfra/ch6/6.5.1/values.yaml -------------------------------------------------------------------------------- /ch6/6.5.1/prometheus-alertmanager-preconfig.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | nfsdir=/nfs_shared/prometheus/alertmanager 3 | 4 | # check helm command 5 | 
echo "[Step 1/4] Task [Check helm status]" 6 | if [ ! -e "/usr/local/bin/helm" ]; then 7 | echo "[Step 1/4] helm not found" 8 | exit 1 9 | fi 10 | echo "[Step 1/4] ok" 11 | 12 | # check metallb 13 | echo "[Step 2/4] Task [Check MetalLB status]" 14 | namespace=$(kubectl get namespace metallb-system -o jsonpath={.metadata.name} 2> /dev/null) 15 | if [ "$namespace" == "" ]; then 16 | echo "[Step 2/4] metallb not found" 17 | exit 1 18 | fi 19 | echo "[Step 2/4] ok" 20 | 21 | # create nfs directory & change owner 22 | echo "[Step 3/4] Task [Create NFS directory for alertmanager]" 23 | if [ ! -e "$nfsdir" ]; then 24 | ~/_Book_k8sInfra/ch6/6.5.1/nfs-exporter.sh prometheus/alertmanager 25 | chown 1000:1000 $nfsdir 26 | echo "[Step 3/4] Successfully completed" 27 | else 28 | echo "[Step 3/4] failed: $nfsdir already exists" 29 | exit 1 30 | fi 31 | 32 | # create pv,pvc 33 | echo "[Step 4/4] Task [Create PV,PVC for alertmanager]" 34 | pvc=$(kubectl get pvc prometheus-alertmanager -o jsonpath={.metadata.name} 2> /dev/null) 35 | if [ "$pvc" == "" ]; then 36 | kubectl apply -f ~/_Book_k8sInfra/ch6/6.5.1/prometheus-alertmanager-volume.yaml 37 | echo "[Step 4/4] Successfully completed" 38 | else 39 | echo "[Step 4/4] failed: prometheus-alertmanager pv,pvc already exist" 40 | fi 41 | -------------------------------------------------------------------------------- /ch6/6.5.1/prometheus-alertmanager-volume.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: prometheus-alertmanager 5 | spec: 6 | capacity: 7 | storage: 10Gi 8 | accessModes: 9 | - ReadWriteMany 10 | persistentVolumeReclaimPolicy: Retain 11 | nfs: 12 | server: 192.168.1.10 13 | path: /nfs_shared/prometheus/alertmanager 14 | --- 15 | apiVersion: v1 16 | kind: PersistentVolumeClaim 17 | metadata: 18 | name: prometheus-alertmanager 19 | spec: 20 | accessModes: 21 | - ReadWriteMany 22 | resources: 23 | requests: 24 | storage: 10Gi -------------------------------------------------------------------------------- /ch6/6.5.1/values.yaml: -------------------------------------------------------------------------------- 1 | serverFiles: 2 | alerting_rules.yml: 3 | groups: 4 | - name: Node 5 | rules: 6 | - alert: NodeDown 7 | expr: up{job="kubernetes-nodes"} == 0 8 | for: 1m 9 | annotations: 10 | description: kubernetes node {{ .Labels.instance }} down -------------------------------------------------------------------------------- /ch6/README.md: -------------------------------------------------------------------------------- 1 | # 6장, 안정적인 운영을 완성하는 모니터링, 프로메테우스와 그라파나 2 | --- 3 | ## 6.2 프로메테우스로 모니터링 데이터 수집과 통합하기 4 | - 6.2.1 헬름으로 프로메테우스 설치하기 5 | - 6.2.3 서비스 디스커버리로 수집 대상 가져오기 6 | ## 6.4 그라파나로 모니터링 데이터 시각화하기 7 | - 6.4.1 헬름으로 그라파나 설치하기 8 | ## 6.5 좀 더 견고한 모니터링 환경 만들기 9 | - 6.5.1 얼럿매니저로 이상 신호 감지하고 알려주기 -------------------------------------------------------------------------------- /docs/6.7.테인트(Taints)와 톨러레이션(Tolerations)의 파드 할당 조건_v2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/6.7.테인트(Taints)와 톨러레이션(Tolerations)의 파드 할당 조건_v2.pdf -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2022/2022-k8s-stnd-arch.pdf: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2022/2022-k8s-stnd-arch.pdf -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2022/README.md: -------------------------------------------------------------------------------- 1 | # 2022 Kubernetes Standard Architecture 2 | 3 | ## Background for selecting the standard architecture 4 | Many products are available for building out Kubernetes more completely in 2022, but paradoxically, because there are already so many of them, choosing among them is hard. 5 | ![2022 cncf landscape](img/2022Jan13-landscape.cncf.io.png) 6 | [Figure 1] **Products listed on the CNCF landscape as of January 13, 2022** 7 | 8 | To stay with what is most stable in the market and to reduce trial and error, we judged that a standard architecture was needed and selected the components below. Some are already de facto standards, while others were chosen to suit the Korean market. 9 | For example, Helm, Argo CD, Harbor, Kubeflow, Docker, and Prometheus hardly require deliberation, but the remaining choices may vary somewhat depending on the nature of your business. 10 | Still, if you are planning a best-practice style setup in 2022, we believe the current [Kubernetes standard architecture](2022-k8s-stnd-arch.pdf) will be a great help, and we hope it makes this year's move to Kubernetes that much smoother. 11 | 12 | From [Hoon Jo](https://github.com/sysnet4admin), [Geunwoo Sim](https://github.com/gnu-gnu), and [Seongju Moon](https://github.com/seongjumoon) 13 | 14 | --- 15 | 16 | ## Description of each component 17 | 18 | ### Multi-cluster management 19 | **[Cluster API](https://cluster-api.sigs.k8s.io/)** 20 | A tool for provisioning and managing multiple Kubernetes clusters from a single CLI, on public clouds such as AWS, Azure, and GCP as well as on private cloud platforms such as OpenStack and vSphere. Clusters are managed through clusterctl, and the kubeconfig of a deployed cluster can also be downloaded with a single command. 21 | 22 |
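As a rough sketch of that workflow (assuming the AWS provider and its credentials are already set up; the cluster name and version below are placeholders):

```bash
# Install the Cluster API components for a chosen infrastructure provider
clusterctl init --infrastructure aws

# Render a cluster manifest and apply it to the management cluster
clusterctl generate cluster demo-cluster --kubernetes-version v1.23.0 > demo-cluster.yaml
kubectl apply -f demo-cluster.yaml

# Download the kubeconfig of the newly provisioned workload cluster
clusterctl get kubeconfig demo-cluster > demo-cluster.kubeconfig
```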
23 | ### Ease-of-use tools 24 | **[Helm](https://helm.sh/ko/) : deployment simplification tool** 25 | The Kubernetes package manager; it helps you deploy containerized applications to a Kubernetes cluster easily through charts. A further advantage is that applications can be customized with the options you want via the many parameters defined in values. 26 |
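A minimal sketch of the chart-plus-values pattern, the same `helm install ... --set` style used by ch5/5.3.1/jenkins-install.sh in this repo (the chart and release names below are placeholders):

```bash
# Register a chart repository, then install a chart while overriding a default value
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-nginx bitnami/nginx --set service.type=LoadBalancer
```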
27 | **[Lens](https://k8slens.dev/) : Kubernetes integrated development environment (IDE)** 28 | Lens is a GUI tool widely used by people working with Kubernetes. It covers everything from simply switching the cluster context to easily inspecting Kubernetes objects such as Deployments, StatefulSets, Services, ConfigMaps, and Secrets, and when something needs to change you can edit it right in the same UI through its edit button. It can also attach to a running Pod to enter shell commands and stream the Pod's logs in real time. 29 | 30 | 31 | ### API server load balancer 32 | **[HAProxy](http://www.haproxy.org/)** 33 | An open-source L7 load balancer that held a large market share even before Kubernetes; it is used to balance traffic across the multiple kube-apiservers deployed on the master nodes inside the control plane. Its health-check support helps remove the API server as a single point of failure. 34 | 35 | 36 | ### Network implementations 37 | **[Calico](https://www.tigera.io/project-calico/) : container network interface (CNI)** 38 | The CNI that is the easiest and quickest to adopt, with a very large user base. In particular, it has strong support for BGP, currently the most popular protocol in data centers, and its performance also ranks near the top. 39 | 40 | **[MetalLB](https://metallb.universe.tf/) : Kubernetes load balancer** 41 | The project that first made LoadBalancer-type Services usable in Kubernetes without having to build a separate load-balancer implementation, which makes it one of the oldest and most mature options. It supports both L2 and L3 modes. 42 |
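A minimal sketch of an L2-mode configuration for the config-file-driven MetalLB releases used in this repo; the ConfigMap name matches the `--config=config` argument in the MetalLB manifest above, and the address pool is a placeholder for your own range:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config            # referenced by the speaker/controller --config=config flag
data:
  config: |
    address-pools:
    - name: metallb-ip-range
      protocol: layer2
      addresses:
      - 192.168.1.11-192.168.1.13
```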
43 | **[Nginx Ingress](https://kubernetes.github.io/ingress-nginx/) : Kubernetes ingress** 44 | The ingress controller that is the fastest and simplest to install when setting up Kubernetes; its strength is handling requests coming from outside the cluster based on URL. Annotations can apply a variety of options, covering tasks such as redirects and attaching HTTPS certificates. 45 | 46 |
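A minimal sketch of URL-based routing with an NGINX Ingress annotation (path, service name, and port are placeholders; older clusters may need the v1beta1 Ingress API instead):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo-svc
            port:
              number: 80
```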
47 | ### Service mesh 48 | **[Istio](https://istio.io/)** 49 | Istio is a service mesh tool providing features such as traffic management and observability. It can be installed conveniently with istioctl, and installation through CRDs is also possible. Ingress traffic is handled with a VirtualService and Gateway, and traffic can be controlled with filters along the way. The Kiali dashboard deployed alongside it shows service traffic flows in a web UI. 50 | 51 |
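A minimal sketch of that Gateway/VirtualService pairing (hosts and the destination service are placeholders):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-vs
spec:
  hosts:
  - "*"
  gateways:
  - web-gateway
  http:
  - route:
    - destination:
        host: web-svc
        port:
          number: 80
```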
52 | ### Continuous integration/deployment (CI/CD) tools 53 | **[Github Actions](https://github.com/features/actions) : CI** 54 | Github Actions is the CI tool provided by Github, the world's best-known source code repository. Workflows can build source stored in a Github repository directly, which makes it very practical, and many pre-configured workflows are provided. It is free for public Github repositories, and private repositories receive 2,000 free build minutes per month. 55 |
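A minimal sketch of a workflow file kept under `.github/workflows/` that builds an image on every push (the image name is a placeholder):

```yaml
name: build
on:
  push:
    branches: [ main ]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build container image
        run: docker build -t echo-ip:${{ github.sha }} .
```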
56 | **[Jenkins](https://www.jenkins.io/) : CI/CD** 57 | Jenkins is the most widely used open-source tool for the continuous integration and deployment stages. The vast range of plugins from its large community lets it integrate with almost every language and tool. A chart for installing Jenkins and a Kubernetes plugin usable from Jenkins are also provided, so it is easy to run on Kubernetes. 58 | 59 | **[Gitlab](https://about.gitlab.com/) : CI/CD** 60 | Gitlab CI/CD is the CI/CD capability of Gitlab, an open-source and SaaS source code repository. Unlike Github Actions, whose free usage is limited, the CI/CD features are available even in the freely usable self-hosted open-source edition. Whereas Github Actions concentrates on CI, Gitlab's pipeline feature covers CI and CD together, and its Auto DevOps feature can perform build, test, deploy, and security checks in one pass. Auto Monitoring can additionally monitor the deployed application, so it serves not only as CI/CD but as a full DevOps tool. 61 | 62 | **[ArgoCD](https://argo-cd.readthedocs.io/en/stable/) : CD** 63 | ArgoCD is a GitOps CD tool that uses git as the source of truth for deployments. It reconciles the state of Kubernetes resources with the manifests written in Git, so resources are managed declaratively. Unlike deploying only an application from source code, it deploys declarative resources, which allows the state of everything deployed to be managed consistently. 64 | 65 |
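A minimal sketch of that declarative model: an Argo CD Application that keeps the cluster in sync with manifests stored in Git (the repository URL and namespaces are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gitops-nginx
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests.git
    targetRevision: main
    path: .
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:          # reconcile automatically whenever Git changes
      prune: true
      selfHeal: true
```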
66 | ### Container registry 67 | **[Harbor](https://goharbor.io/)** 68 | The container registry with the largest market share; it can easily synchronize from private Docker registries or from cloud providers' registries. Harbor also exposes an API for managing itself, so automation is easy to build. In addition, its bundled components, the Trivy image scanner and the ChartMuseum chart repository, can be set up together, consolidating all of the repositories a container infrastructure needs. 69 | 70 | ### Container-native storage 71 | **[Rook](https://rook.io/) + [ceph](https://ceph.io/en/) : container-native storage orchestration and object storage** 72 | Rook is a container-native storage orchestration solution that makes storage usable on Kubernetes. Using Rook with Ceph, a Ceph cluster can be set up on Kubernetes simply through CRDs without going through Ceph's complicated installation process, and Kubernetes volumes are easy to consume through the CSI driver that Rook + Ceph supports. 73 | 74 | **[Velero](https://velero.io/) : cluster data management tool** 75 | A tool for managing the data and volumes tied to a Kubernetes cluster's resources. Velero can take snapshots of cluster state data and volumes and, together with object storage, back them up and restore them. This is useful for recovering from cluster failures and for migrating a cluster environment. 76 | 77 | 78 | ### MLOps tools 79 | **[Kubeflow](https://www.kubeflow.org/)** 80 | Data scientists and engineers can collaborate by sharing work in Jupyter notebooks deployed on the Kubernetes cluster and tune the hyperparameters needed to train machine-learning models. Models built this way can be composed into pipelines, and the Kubernetes Jobs that train the models can be handled from the web UI. 81 | 82 | 83 | ### Serverless tools 84 | **[Knative](https://knative.dev/docs/)** 85 | Serverless means running and stopping code whenever a request arrives instead of relying on a permanently deployed service. Kubernetes maximizes how far this flexible approach can be taken. Knative is backed by Google and adopted by Red Hat, IBM, and others, so it enjoys high recognition and wide use. 86 | 87 |
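A minimal sketch of a Knative Service, which scales per request and back to zero when idle (the image is Knative's public hello-world sample):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: echo
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Kubernetes"
```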
88 | ### Key management service 89 | **[Vault](https://www.vaultproject.io/)** 90 | An open-source key management service from Hashicorp. Instead of a person typing sensitive values such as database passwords straight into Secrets, it is mainly used so that secret data is integrated into the system directly through Vault. Beyond database passwords, it is used to synchronize sensitive data such as inter-service API tokens and software license keys from a shared secret backend rather than applying them as YAML with kubectl. Depending on what is convenient, data can also be viewed and stored easily through the CLI, the HTTP REST API, or the web UI. 91 |
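A minimal sketch of the CLI usage, assuming a KV secrets engine mounted at `secret/` (the path and value are placeholders):

```bash
# Store a database password in Vault instead of a plain Kubernetes Secret
vault kv put secret/database password='S3cr3t!'

# Read a single field back out
vault kv get -field=password secret/database
```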
92 | ### Container management tools 93 | **[Docker](https://www.docker.com/)** 94 | A container tool that has been in use for a long time and still has the largest user base. Although Kubernetes has dropped its dockershim integration, Docker remains usable as a container management tool. It has been a proven container management tool since 2014 and is very stable. Docker ships with containerd, the container runtime that Kubernetes can drive through the container runtime interface (CRI). 95 | 96 | 97 | ### Log pipeline 98 | **[Fluentbit](https://fluentbit.io/) : log forwarder** 99 | Fluentbit is an open-source log forwarder. Its main selling point is delivering high performance reliably while consuming few resources. To ship Kubernetes logs to a log collection service, a Fluentbit INPUT plugin reads the log content, which can then be forwarded on to other services. 100 |
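A minimal sketch of that pipeline in Fluent Bit's classic configuration format: tail container logs and forward them to Elasticsearch (host and index are placeholders):

```
[INPUT]
    Name   tail
    Path   /var/log/containers/*.log
    Tag    kube.*

[OUTPUT]
    Name   es
    Match  kube.*
    Host   elasticsearch
    Port   9200
    Index  fluentbit
```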
101 | **[Elasticsearch](https://www.elastic.co/kr/elasticsearch/) : log storage and search engine** 102 | Elasticsearch is a log storage and search engine in the market spotlight, offering full-text search based on Apache Lucene. Stored events (logs) can also be searched through its REST API. 103 | 104 | **[Kibana](https://www.elastic.co/kr/kibana/) : log and data visualization dashboard** 105 | A dashboard that can be used to visualize Elasticsearch data. Beyond basic visualization features, it has powerful built-in exploration tools for finding data in Elasticsearch. 106 | 107 | 108 | ### Metric pipeline 109 | **[Prometheus](https://prometheus.io/) : metric collection and querying** 110 | It holds a commanding share of Kubernetes monitoring and was the second project to graduate from the CNCF, and many CNCF and other open-source projects export their metrics in the Prometheus format. Prometheus scrapes the data each application exposes, and the collected data can be checked through Grafana or Prometheus's built-in browser. 111 |
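For example, the PromQL expression below, the same one that backs the NodeDown alert rule in ch6/6.5.1/values.yaml later in this repo, returns any node scrape target that is currently down:

```
# Nodes whose "kubernetes-nodes" scrape target is not reachable
up{job="kubernetes-nodes"} == 0
```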
112 | **[Grafana](https://grafana.com/) : data visualization tool** 113 | One of the data visualization tools with the largest market share; it pulls data from databases such as Prometheus, Elasticsearch, Loki, and even PostgreSQL and helps users examine it easily. It can hook into LDAP, OIDC, and SAML so that organizations can register users easily. 114 | 115 | 116 | ### Cluster provisioners 117 | **[kubeadm](https://kubernetes.io/ko/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)** 118 | The most widely used CLI tool for provisioning Kubernetes clusters: a simple command provisions the control plane, and multiple worker nodes can be configured to join the control plane's API server. Besides stringing together long kubeadm command lines, a cluster can also be provisioned from a YAML-format configuration file. 119 |
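A minimal sketch of that flow (the CIDR, advertise address, token, and hash are placeholders):

```bash
# On the first master node: bootstrap the control plane
kubeadm init --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.1.10

# On each worker node: join the cluster advertised above
kubeadm join 192.168.1.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```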
120 | **[kubespray](https://kubernetes.io/ko/docs/setup/production-environment/tools/kubespray/)** 121 | Provisions Kubernetes clusters using Ansible playbooks. Compared with kubeadm, its advantage is that when several master nodes have to be set up, a single round of configuration builds the cluster's control plane and worker nodes together. 122 | 123 | --- 124 | 125 | ## Books written on this standard architecture 126 | 127 | 128 | 129 | 130 | 131 | ### Where to buy the book 132 | The book is available at the online and offline bookstores below. 133 | - 📍 [YES24](https://bit.ly/3iq4L5W) 134 | - 📍 [Aladin](https://bit.ly/3cpo37M) 135 | - 📍 [Kyobo Book Centre](https://bit.ly/3g1dsC7) 136 | 137 | -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2022/img/2022Jan13-landscape.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2022/img/2022Jan13-landscape.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2023/2023-k8s-stnd-arch.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2023/2023-k8s-stnd-arch.pdf -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2023/img/2022Nov21-landscape.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2023/img/2022Nov21-landscape.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2023/img/2023-k8s-stnd-arch-thumbnail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2023/img/2023-k8s-stnd-arch-thumbnail.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2024/2024-k8s-stnd-arch.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2024/2024-k8s-stnd-arch.pdf -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2024/img/2023Dec11-landscape.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2024/img/2023Dec11-landscape.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2024/img/2023Oct11-graduated.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2024/img/2023Oct11-graduated.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2024/img/2024-k8s-stnd-arch-thumbnail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2024/img/2024-k8s-stnd-arch-thumbnail.png --------------------------------------------------------------------------------
/docs/k8s-stnd-arch/2025/2025-k8s-stnd-arch.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2025/2025-k8s-stnd-arch.pdf -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2025/img/2024Dec15-landscape.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2025/img/2024Dec15-landscape.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2025/img/2024Nov09-graduated.cncf.io.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2025/img/2024Nov09-graduated.cncf.io.png -------------------------------------------------------------------------------- /docs/k8s-stnd-arch/2025/img/2025-k8s-stnd-arch-thumbnail.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/k8s-stnd-arch/2025/img/2025-k8s-stnd-arch-thumbnail.png -------------------------------------------------------------------------------- /docs/troubleshooting-kubernetes.ko_kr.v2.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/troubleshooting-kubernetes.ko_kr.v2.pdf -------------------------------------------------------------------------------- /docs/실습 이슈#1 - VritualBox host-only Network(MAC,Linux).pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/실습 이슈#1 - VritualBox host-only Network(MAC,Linux).pdf -------------------------------------------------------------------------------- /docs/확장본#1 - 젠킨스의 FreeStyle로 만드는 개발-상용 환경 배포.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/확장본#1 - 젠킨스의 FreeStyle로 만드는 개발-상용 환경 배포.pdf -------------------------------------------------------------------------------- /docs/확장본#2 - 자바 개발자를 위한 컨테이너 이미지 빌드.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/확장본#2 - 자바 개발자를 위한 컨테이너 이미지 빌드.pdf -------------------------------------------------------------------------------- /docs/확장본#3 - 깃옵스(GitOps)를 여행하려는 입문자를 위한 안내서.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/sysnet4admin/_Book_k8sInfra/eec5a1ef624041450bfa65c3229b0b10ccdaf888/docs/확장본#3 - 깃옵스(GitOps)를 여행하려는 입문자를 위한 안내서.pdf --------------------------------------------------------------------------------