├── .gitignore ├── Docker Cheat Sheet.pdf ├── Kubectl Cheat Sheet.pdf ├── README.md ├── Webinars ├── Basics of working with Helm.pdf ├── Docker and alternatives.pdf └── Kubernetes quick start.pdf ├── homework ├── 2.docker │ ├── README.md │ ├── golang │ │ ├── go.mod │ │ ├── go.sum │ │ └── main.go │ └── python │ │ ├── app.py │ │ └── requirements.txt ├── 3.kubernetes-intro │ └── README.md ├── 4.resources-and-persistence │ └── README.md ├── 5.kubernetes-network │ └── README.md ├── 7.advanced-abstractions │ └── README.md └── 8.ci-cd │ └── README.md └── practice ├── 2.docker ├── docker-compose │ ├── .dockerignore │ ├── .gitignore │ ├── Dockerfile │ ├── docker-compose.yml │ ├── docker-entrypoint.sh │ ├── manage.py │ ├── myapp │ │ ├── __init__.py │ │ ├── settings.py │ │ ├── urls.py │ │ └── wsgi.py │ ├── polls │ │ ├── __init__.py │ │ ├── admin.py │ │ ├── apps.py │ │ ├── migrations │ │ │ └── __init__.py │ │ ├── models.py │ │ ├── tests.py │ │ ├── urls.py │ │ └── views.py │ └── requirements.txt ├── golang │ ├── Dockerfile │ ├── go.mod │ ├── go.sum │ └── main.go └── python-django │ ├── Dockerfile │ ├── manage.py │ └── myapp │ ├── __init__.py │ ├── settings.py │ ├── urls.py │ └── wsgi.py ├── 3.kubernetes-intro ├── deployment.yaml ├── pod.yaml └── replicaset.yaml ├── 4.resources-and-persistence ├── README.md ├── configmap.yaml ├── deployment-with-configmap.yaml ├── deployment-with-env.yaml ├── deployment-with-resources.yaml ├── deployment-with-secret.yaml ├── nfs │ ├── README.md │ ├── pv.yaml │ └── pvc.yaml └── persistence │ ├── configmap.yaml │ ├── deployment.yaml │ └── pvc.yaml ├── 5.kubernetes-network ├── README.md ├── deployment-with-probes.yaml ├── ingress.yaml ├── net-tool.yaml └── service.yaml ├── 7.advanced-abstractions ├── README.md ├── cronjob.yaml ├── daemonset-with-tollerations.yam ├── daemonset.yaml ├── hpa.README.md ├── job.yaml └── rabbitmq-statefulset │ ├── configmap.yaml │ ├── ingress.yaml │ ├── role.yaml │ ├── rolebinding.yaml │ ├── service.yaml │ ├── serviceaccount.yaml │ └── statefulset.yaml └── 8.ci-cd ├── README.md ├── app ├── .dockerignore ├── .gitlab-ci.yml ├── Dockerfile ├── app │ └── app.go ├── config │ └── config.go ├── go.mod ├── go.sum ├── handler │ ├── common.go │ └── users.go ├── kube │ ├── deployment.yaml │ ├── ingress.yaml │ ├── postgres │ │ ├── secret.yaml │ │ ├── service.yaml │ │ └── statefulset.yaml │ └── service.yaml ├── main.go └── model │ └── model.go └── gitlab-runner └── gitlab-runner.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | lecture/ 3 | -------------------------------------------------------------------------------- /Docker Cheat Sheet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/Docker Cheat Sheet.pdf -------------------------------------------------------------------------------- /Kubectl Cheat Sheet.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/Kubectl Cheat Sheet.pdf -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Курс "Микросервисы и контейнеризация" 2 | 3 | >Уважаемые студенты! 
4 | > 5 | >Для полноценного участия в курсе "Микросервисная архитектура и контейнеризация" просим вас зарегистрироваться на платформе https://mcs.mail.ru. Регистрация должна быть сделана с той же почты, на которую зарегистрирован ваш аккаунт на GeekBrains. 6 | > 7 | >После регистрации на платформе для начисления квот необходимо нажать кнопку «включить сервисы», а затем попросить вашего куратора направить заявку на начисление средств на ваш счет в облаке VK Cloud. 8 | 9 | Практика и домашние задания находятся в соответсвующих директориях. 10 | 11 | ## Полезные ссылки 12 | 13 | - [Лекция 1. Микросервисы и контейнеры](#лекция-1-микросервисы-и-контейнеры) 14 | - [Лекция 2. Docker](#лекция-2-docker) 15 | - [Лекция 3. Введение в Kubernetes](#лекция-3-введение-в-kubernetes) 16 | - [Лекция 4. Хранение данных и ресурсы](#лекция-4-хранение-данных-и-ресурсы) 17 | - [Лекция 5. Сетевые абстракции Kubernetes](#лекция-5-сетевые-абстракции-kubernetes) 18 | - [Лекция 6. Устройство кластера](#лекция-6-устройство-кластера) 19 | - [Лекция 7. Продвинутые абстракции](#лекция-7-продвинутые-абстракции) 20 | - [Лекция 8. Деплой тестового приложения в кластер, CI/CD](#лекция-8-деплой-тестового-приложения-в-кластер) 21 | 22 | ## Лекция 1. Микросервисы и контейнеры 23 | 24 | **Перед второй лекцией нужно установить Docker** 25 | 26 | Вы можете [установить Docker](https://docs.docker.com/get-docker/) на свой компьютер или виртуальную машину с Linux. 27 | 28 | А так же использовать онлайн сервисы, чтобы немедленно приступить к обучению: 29 | 30 | 🔹 [Play with Docker](https://labs.play-with-docker.com/) 31 | 32 | **Паттерны проектирования** 33 | 34 | 🔹 [The Twelwe-Factor App](https://12factor.net/ru/) 35 | 36 | 🔹 [GRASP](https://ru.wikipedia.org/wiki/GRASP) 37 | 38 | 📚 [Чистая архитектура. Искусство разработки программного обеспечения](https://habr.com/ru/company/piter/blog/353170/) 39 | 40 | 📚 [System Design - Подготовка к сложному интервью](https://habr.com/ru/company/piter/blog/598791/) 41 | 42 | **Механизмы контейнеризации** 43 | 44 | 🔹 [Linux-контейнеры: изоляция как технологический прорыв](https://habr.com/ru/company/redhatrussia/blog/352052/) 45 | 46 | 🔹 [Namespaces](https://habr.com/ru/company/selectel/blog/279281/) 47 | 48 | 🔹 [Cgroups](https://habr.com/ru/company/selectel/blog/303190/) 49 | 50 | 🔹 [Capabilities](https://k3a.me/linux-capabilities-in-a-nutshell/) 51 | 52 | 🎥 [Могут ли контейнеры быть безопасными?](https://habr.com/ru/company/oleg-bunin/blog/480630/) 53 | 54 | **Различные Container Runtime** 55 | 56 | 🔹 [Различия между Docker, containerd, CRI-O и runc](https://habr.com/ru/company/domclick/blog/566224/) 57 | 58 | ## Лекция 2. 
Docker 59 | 60 | **Docker** 61 | 62 | 🔹 [Сеть контейнеров — это не сложно](https://habr.com/ru/company/timeweb/blog/558612/) 63 | 64 | 🔹 [Overview of Docker CLI](https://docs.docker.com/engine/reference/run/) 65 | 66 | 🔹 [10 команд для Docker, без которых вам не обойтись](https://tproger.ru/translations/top-10-docker-commands/) 67 | 68 | 🔹 [Как начать использовать Docker в своих проектах](https://tproger.ru/translations/how-to-start-using-docker/) 69 | 70 | 🔹 [50 вопросов по Docker, которые задают на собеседованиях, и ответы на них](https://habr.com/ru/company/southbridge/blog/528206/) 71 | 72 | **Dockerfile** 73 | 74 | 🔹 [20 лучших практик по работе с Dockerfile](https://habr.com/ru/company/domclick/blog/546922/) 75 | 76 | 🔹 [ENTRYPOINT vs CMD: назад к основам](https://habr.com/ru/company/southbridge/blog/329138/) 77 | 78 | 🔹 [ADD vs COPY](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy) 79 | 80 | 🔹 [Dockerfile reference](https://docs.docker.com/engine/reference/builder/) 81 | 82 | 🔹 [Use multi-stage builds](https://docs.docker.com/develop/develop-images/multistage-build/) 83 | 84 | 🔹 [Best practices for writing Dockerfiles](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#add-or-copy%23add-or-copy) 85 | 86 | **Docker Compose** 87 | 88 | 🔹 [Overview of docker-compose CLI](https://docs.docker.com/compose/reference/) 89 | 90 | 🔹 [Quickstart: Compose and Django](https://docs.docker.com/samples/django/) 91 | 92 | 🔹 [Compose file version 3 reference](https://docs.docker.com/compose/compose-file/compose-file-v3/) 93 | 94 | 🔹 [Compose file version 2 reference](https://docs.docker.com/compose/compose-file/compose-file-v2/) 95 | 96 | ## Лекция 3. Введение в Kubernetes 97 | 98 | >Уважаемые студенты, просьба по возможности до начала занятия поставить себе утилиту для работы с Kubernetes – kubectl. 99 | >Это можно сделать по инструкциям из официальной документации для вашей ОС. 100 | >https://kubernetes.io/docs/tasks/tools/install-kubectl/ 101 | 102 | Делаем работу с kubectl удобнее: 103 | 104 | 🔹 [kubectl auto-complition](https://kubernetes.io/ru/docs/tasks/tools/install-kubectl/#%D0%B2%D0%BA%D0%BB%D1%8E%D1%87%D0%B5%D0%BD%D0%B8%D0%B5-%D0%B0%D0%B2%D1%82%D0%BE%D0%B4%D0%BE%D0%BF%D0%BE%D0%BB%D0%BD%D0%B5%D0%BD%D0%B8%D1%8F-%D0%B2%D0%B2%D0%BE%D0%B4%D0%B0-shell) 105 | 106 | 🔹 [kubectl aliases](https://github.com/adterskov/kubectl-aliases) 107 | 108 | 🔹 [kubecolor - раскрашивает вывод kubectl](https://github.com/dty1er/kubecolor/) 109 | 110 | 🔹 [kubens - быстрый способ переключения между namespaces в kubectl](https://github.com/ahmetb/kubectx/) 111 | 112 | Как получить в своё распоряжение полноценный кластер Kubernetes? 113 | 114 | **Онлайн сервисы, чтобы немедленно приступить к обучению** 115 | 116 | 🔹 [Play with Kubernetes](https://labs.play-with-k8s.com/) 117 | 118 | **Запустить локальный кластер Kubernetes** 119 | 120 | 🔹 [Minikube](https://kubernetes.io/ru/docs/tasks/tools/install-minikube/) 121 | 122 | 🔹 [Minishift (OpenShift)](https://www.okd.io/minishift/) 123 | 124 | 🔹 [KiND](https://kind.sigs.k8s.io/docs/user/quick-start/) 125 | 126 | 🔹 [Docker Desktop](https://docs.docker.com/desktop/kubernetes/) 127 | 128 | **Установить кластер самостоятельно** 129 | 130 | 🔹 [Установка в помощью kubeadm](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/) 131 | 132 | 🔹 [Установка с помощью kubesparay](https://kubernetes.io/docs/setup/production-environment/tools/kubespray/) 133 | 134 | ## Лекция 4. 
Хранение данных и ресурсы 135 | 136 | 🔹 [Динамическое выделение дисков с PVC](https://mcs.mail.ru/help/ru_RU/k8s-pvc/k8s-pvc) 137 | 138 | 🔹 [Рациональное использование ресурсов в Kubernetes](https://habr.com/ru/company/timeweb/blog/560670/) 139 | 140 | 🔹 [Как оптимизировать ограничения ресурсов Kubernetes](https://habr.com/ru/company/timeweb/blog/562500/) 141 | 142 | ## Лекция 5. Сетевые абстракции Kubernetes 143 | 144 | 🔹 [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) 145 | 146 | 🔹 [iptables: How Kubernetes Services Direct Traffic to Pods](https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/) 147 | 148 | 🔹 [NetworkPolicy Editor](https://cilium.io/blog/2021/02/10/network-policy-editor?utm_source=telegram.me&utm_medium=social&utm_campaign=cilium-predstavil-vizualnyy-redaktor-se) 149 | 150 | 🔹 [NGINX Ingress Controller Annotations](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/) 151 | 152 | 🔹 [NGINX Ingress Controller Regular expressions in paths](https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/) 153 | 154 | ## Лекция 6. Устройство кластера 155 | 156 | 🔹 [Kubernetes is so Simple You Can Explore it with Curl](https://blog.tilt.dev/2021/03/18/kubernetes-is-so-simple.html) 157 | 158 | 🔹 [Как увеличить скорость реакции Kubernetes на отказ узлов кластера?](https://habr.com/ru/company/timeweb/blog/561084/) 159 | 160 | ## Лекция 7. Продвинутые абстракции 161 | 162 | 🎥 [Митап "Stateful-приложения в 2020 году"](https://www.youtube.com/watch?v=ykIh4-616Ic&list=PL8D2P0ruohODzihD0D0FZXkVHXtXbb6w3&index=4&ab_channel=HighLoadChannel) 163 | 164 | 🎥 [Базы данных и Kubernetes (Дмитрий Столяров, Флант, HighLoad++ 2018)](https://www.youtube.com/watch?v=BnegHj53pW4&ab_channel=%D0%A4%D0%BB%D0%B0%D0%BD%D1%82) 165 | 166 | 🎥 [Заделываем дыры в кластере Kubernetes](https://www.youtube.com/watch?v=Ik7VqbgpRiQ&ab_channel=DevOpsChannel) 167 | 168 | 🔹 [Jobs & Cronjobs in Kubernetes Cluster](https://medium.com/avmconsulting-blog/jobs-cronjobs-in-kubernetes-cluster-d0e872e3c8c8) 169 | 170 | 🔹 [Tоп-10 PromQL запросов для мониторинга Kubernetes](https://habr.com/ru/company/timeweb/blog/562374/) 171 | 172 | ## Лекция 8. Деплой тестового приложения в кластер 173 | 174 | 🔹 [Запуск проекта в Kubernetes за 60 минут](https://mcs.mail.ru/blog/launching-a-project-in-kubernetes) 175 | 176 | 🔹 [Антипаттерны деплоя в Kubernetes. Часть 1](https://habr.com/ru/company/timeweb/blog/557320/) 177 | 178 | 🔹 [Антипаттерны деплоя в Kubernetes. Часть 2](https://habr.com/ru/company/timeweb/blog/560772/) 179 | 180 | 🔹 [Антипаттерны деплоя в Kubernetes. Часть 3](https://habr.com/ru/company/timeweb/blog/561570/) 181 | 182 | 📚 [ПРОЕКТ «ФЕНИКС». КАК DEVOPS УСТРАНЯЕТ ХАОС И УСКОРЯЕТ РАЗВИТИЕ КОМПАНИИ](https://bombora.ru/book/64983/#.) 
183 | -------------------------------------------------------------------------------- /Webinars/Basics of working with Helm.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/Webinars/Basics of working with Helm.pdf -------------------------------------------------------------------------------- /Webinars/Docker and alternatives.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/Webinars/Docker and alternatives.pdf -------------------------------------------------------------------------------- /Webinars/Kubernetes quick start.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/Webinars/Kubernetes quick start.pdf -------------------------------------------------------------------------------- /homework/2.docker/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание к уроку 2 - Docker 2 | 3 | Напишите Dockerfile к любому приложению из директорий golang или python на ваш выбор (можно к обоим). 4 | 5 | Образ должен собираться из официального базового образа для выбранного языка. На этапе сборки должны устанавливаться все необходимые зависимости, а так же присутствовать команда для запуска приложения. 6 | 7 | Старайтесь следовать рекомендациям (Best Practices) из лекции при написании Dockerfile. 8 | 9 | При запуске контейнера из образа с указанием проксирования порта (флаг -p или -P если указан EXPOSE) при обращении 10 | на localhost:port должно быть доступно приложение в контейнере (оно отвечает Hello, World!). 11 | 12 | Сохраните получившийся Dockerfile в любом публичном Git репозитории, например GitHub, и пришлите ссылку на репозиторий. 
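Для ориентира — минимальный набросок того, как мог бы выглядеть такой Dockerfile для приложения из директории golang (многостадийная сборка; версия базового образа и имя бинарника взяты здесь для примера, похожий вариант есть в practice/2.docker/golang/Dockerfile):

```dockerfile
# стадия сборки: официальный образ Go, скачиваем зависимости и компилируем бинарник
FROM golang:1.14 AS builder
WORKDIR /app
# сначала копируем только файлы модулей — слой с зависимостями кешируется между сборками
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o server .

# финальный образ содержит только статически собранный бинарник
FROM scratch
COPY --from=builder /app/server /server
EXPOSE 8080
CMD ["/server"]
```

Проверка: `docker build -t hello-go .` и `docker run -p 8080:8080 hello-go` — после этого на localhost:8080 приложение должно отвечать "Hello, World!".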
13 | -------------------------------------------------------------------------------- /homework/2.docker/golang/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/pauljamm/geekbrains-conteinerization/homework/2.docker/golang 2 | 3 | go 1.14 4 | 5 | require ( 6 | github.com/gorilla/handlers v1.4.2 // indirect 7 | github.com/gorilla/mux v1.7.4 // indirect 8 | ) 9 | -------------------------------------------------------------------------------- /homework/2.docker/golang/go.sum: -------------------------------------------------------------------------------- 1 | github.com/gorilla/handlers v1.4.2 h1:0QniY0USkHQ1RGCLfKxeNHK9bkDHGRYGNDFBCS+YARg= 2 | github.com/gorilla/handlers v1.4.2/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ= 3 | github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc= 4 | github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So= 5 | -------------------------------------------------------------------------------- /homework/2.docker/golang/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "net/http" 7 | "os" 8 | 9 | "github.com/gorilla/handlers" 10 | "github.com/gorilla/mux" 11 | ) 12 | 13 | func helloHandler(w http.ResponseWriter, r *http.Request) { 14 | fmt.Fprintf(w, "Hello, World!") 15 | } 16 | 17 | func main() { 18 | r := mux.NewRouter().StrictSlash(true) 19 | r.HandleFunc("/", helloHandler) 20 | 21 | port, ok := os.LookupEnv("PORT") 22 | if !ok { 23 | port = "8080" 24 | } 25 | 26 | loggedRouter := handlers.LoggingHandler(os.Stdout, r) 27 | log.Printf("Starting listening on :%v", port) 28 | log.Fatal(http.ListenAndServe(":" + port, loggedRouter)) 29 | } 30 | -------------------------------------------------------------------------------- /homework/2.docker/python/app.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import os 4 | 5 | from flask import Flask 6 | 7 | config = { 8 | "port": os.environ.get('PORT', 8080), 9 | "debug": os.environ.get('DEBUG', False) 10 | } 11 | 12 | app = Flask(__name__) 13 | 14 | @app.route("/") 15 | def hello(): 16 | return "Hello, World!" 17 | 18 | if __name__ == "__main__": 19 | app.run(host="0.0.0.0", port=config["port"], debug=config["debug"]) 20 | -------------------------------------------------------------------------------- /homework/2.docker/python/requirements.txt: -------------------------------------------------------------------------------- 1 | click==7.1.2 2 | Flask==1.1.2 3 | itsdangerous==1.1.0 4 | Jinja2==2.11.2 5 | MarkupSafe==1.1.1 6 | Werkzeug==1.0.1 7 | -------------------------------------------------------------------------------- /homework/3.kubernetes-intro/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание для к уроку 3 - Введение в Kubernetes 2 | 3 | Cоздайте namespace kubedoom 4 | ``` 5 | kubectl create ns kubedoom 6 | ``` 7 | 8 | Напишите deployment для запуска игры **Kube DOOM**. 9 | 10 | Приложение должно запускаться из образа 11 | ``` 12 | storaxdev/kubedoom:0.5.0 13 | ``` 14 | Должен быть описан порт: 15 | ``` 16 | 5900 TCP 17 | ``` 18 | Для указания протокола используется поле protocol в описании порта. 19 | 20 | В деплойменте должна быть одна реплика, при этом при обновлении образа не должно одновременно работать две реплики (см. 
**maxSurge** и **maxUnavailable** из лекции). 21 | 22 | Добавьте в шаблон контейнера параметры 23 | ``` 24 | hostNetwork: true 25 | serviceAccountName: kubedoom 26 | ``` 27 | 28 | Запустите получившийся деплоймент в кластере Kubernetes в **namespace kubedoom**. 29 | Pod не должен самопроизвольно рестартовать. 30 | 31 | В случае возникновения проблем смотрите в описание Pod, ReplicaSet, Deployment. 32 | Например: 33 | ``` 34 | kubectl describe pod 35 | kubectl logs pod 36 | ``` 37 | 38 | Разверните в кластере манифест: 39 | ``` 40 | apiVersion: v1 41 | kind: ServiceAccount 42 | metadata: 43 | name: kubedoom 44 | namespace: kubedoom 45 | --- 46 | apiVersion: rbac.authorization.k8s.io/v1 47 | kind: ClusterRoleBinding 48 | metadata: 49 | name: kubedoom 50 | roleRef: 51 | apiGroup: rbac.authorization.k8s.io 52 | kind: ClusterRole 53 | name: cluster-admin 54 | subjects: 55 | - kind: ServiceAccount 56 | name: kubedoom 57 | namespace: kubedoom 58 | ``` 59 | 60 | Этот манифест создаст в кластере сервисную учетную запись и даст ей права Cluster-admin 61 | 62 | Для подключения к игре вам нужно выполнить **kubectl portforward** и используйте VNC клиент. Пароль для подключения - **idbehold** 63 | 64 | Сохраните манифесты в любом публичном Git репозитории, например GitHub, и пришлите ссылку на репозиторий. 65 | 66 | Желаю удачи! 67 | -------------------------------------------------------------------------------- /homework/4.resources-and-persistence/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание для к уроку 4 - Хранение данных и ресурсы 2 | 3 | Напишите deployment для запуска сервера базы данных Postgresql. 4 | 5 | Приложение должно запускаться из образа postgres:10.13 6 | 7 | Должен быть описан порт: 8 | 9 | - 5432 TCP 10 | 11 | В деплойменте должна быть одна реплика, при этом при обновлении образа 12 | НЕ ДОЛЖНО одновременно работать несколько реплик. 13 | (то есть сначала должна удаляться старая реплика и только после этого подниматься новая). 14 | 15 | > Это можно сделать или с помощью maxSurge/maxUnavailable или указав стратегию деплоя Recreate. 16 | 17 | В базе данных при запуске должен автоматически создаваться пользователь testuser 18 | с паролем testpassword. А также база testdatabase. 19 | 20 | > Для этого нужно указать переменные окружения POSTGRES_PASSWORD, POSTGRES_USER, POSTGRES_DB в деплойменте. 21 | > При этом значение переменной POSTGRES_PASSWORD должно браться из секрета. 22 | 23 | Так же нужно указать переменную PGDATA со значением /var/lib/postgresql/data/pgdata 24 | См. документацию к образу https://hub.docker.com/_/postgres раздел PGDATA 25 | 26 | База данных должна хранить данные в PVC c размером диска в 10Gi, замонтированном в pod по пути /var/lib/postgresql/data 27 | 28 | 29 | ## Проверка 30 | 31 | Для проверки работоспособности базы данных: 32 | 33 | 1. Узнайте IP пода postgresql 34 | 35 | kubectl get pod -o wide 36 | 37 | 2. Запустите рядом тестовый под 38 | 39 | kubectl run -t -i --rm --image postgres:10.13 test bash 40 | 41 | 3. Внутри тестового пода выполните команду для подключения к БД 42 | 43 | psql -h -U testuser testdatabase 44 | 45 | Введите пароль - testpassword 46 | 47 | 4. Все в том же тестовом поде, после подключения к инстансу БД выполните команду для создания таблицы 48 | 49 | CREATE TABLE testtable (testcolumn VARCHAR (50) ); 50 | 51 | 5. Проверьте что таблица создалась. Для этого все в том же тестовом поде выполните команду 52 | 53 | \dt 54 | 55 | 6. Выйдите из тестового пода. 
Попробуйте удалить под с postgresql. 56 | 57 | 7. После его пересоздания повторите все с п.1, кроме п.4 58 | Проверьте что созданная ранее таблица никуда не делась. 59 | -------------------------------------------------------------------------------- /homework/5.kubernetes-network/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание для к уроку 5 - Сетевые абстракции Kubernetes 2 | 3 | * Разверните в кластере сервер базы данных Postgresql. Из предыдущего задания. 4 | 5 | * Добавьте к нему service c портом 5432 и именем database. 6 | 7 | * В этом же неймспэйсе создайте deployment с образом redmine:4.1.1 8 | 9 | Для запуска нужно передать переменные окружения: 10 | 11 | REDMINE_DB_POSTGRES = database 12 | REDMINE_DB_USERNAME = 13 | REDMINE_DB_PASSWORD = (значение должно браться из секрета) 14 | REDMINE_DB_DATABASE = 15 | REDMINE_SECRET_KEY_BASE = supersecretkey (значение должно браться из секрета) 16 | 17 | > Обратите внимание что имя пользователя, пароль и база данных должны соответствовать 18 | > значениям которые указаны в переменных окружения деплоймента postgresql 19 | 20 | В деплойменте приложения должен быть описан порт 3000 21 | 22 | * Создайте serivce для приложения с портом 3000 23 | 24 | * Создайте ingress для приложения, так чтобы запросы с любым доменом на белый IP 25 | вашего сервиса nginx-ingress-controller (тот что в нэймспэйсе ingress-nginx с типом LoadBalancer) 26 | шли на приложение 27 | 28 | * Проверьте что при обращении из браузера на белый IP вы видите открывшееся 29 | приложение Redmine (https://www.redmine.org/) 30 | -------------------------------------------------------------------------------- /homework/7.advanced-abstractions/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание для к уроку 7 - Продвинутые абстракции Kubernetes 2 | 3 | > ! Задание нужно выполнять в нэймспэйсе default 4 | 5 | Разверните в кластере сервер системy мониторинга Prometheus. 6 | 7 | * Создайте в кластере ConfigMap со следующим содержимым: 8 | 9 | ```yaml 10 | prometheus.yml: | 11 | global: 12 | scrape_interval: 30s 13 | 14 | scrape_configs: 15 | - job_name: 'prometheus' 16 | static_configs: 17 | - targets: ['localhost:9090'] 18 | 19 | - job_name: 'kubernetes-nodes' 20 | kubernetes_sd_configs: 21 | - role: node 22 | relabel_configs: 23 | - source_labels: [__address__] 24 | regex: (.+):(.+) 25 | target_label: __address__ 26 | replacement: ${1}:9101 27 | ``` 28 | 29 | Создайте объекты для авторизации Prometheus сервера в Kubernetes-API. 
30 | 31 | ```yaml 32 | --- 33 | apiVersion: v1 34 | kind: ServiceAccount 35 | metadata: 36 | name: prometheus 37 | namespace: default 38 | --- 39 | apiVersion: rbac.authorization.k8s.io/v1 40 | kind: ClusterRole 41 | metadata: 42 | name: prometheus 43 | rules: 44 | - apiGroups: [""] 45 | resources: 46 | - nodes 47 | verbs: ["get", "list", "watch"] 48 | --- 49 | apiVersion: rbac.authorization.k8s.io/v1 50 | kind: ClusterRoleBinding 51 | metadata: 52 | name: prometheus 53 | roleRef: 54 | apiGroup: rbac.authorization.k8s.io 55 | kind: ClusterRole 56 | name: prometheus 57 | subjects: 58 | - kind: ServiceAccount 59 | name: prometheus 60 | namespace: default 61 | ``` 62 | 63 | * Создайте StatefulSet для Prometheus сервера из образа prom/prometheus:v2.19.2 с одной репликой 64 | 65 | В нем должнен быть описан порт 9090 TCP 66 | volumeClaimTemplate - ReadWriteOnce, 5Gi, подключенный по пути /prometheus 67 | Подключение конфигмапа с настройками выше по пути /etc/prometheus 68 | 69 | Так же в этом стейтфулсете нужно объявить initContainer для изменения права на 777 для каталога /prometheus. 70 | См пример из лекции 4: practice/4.resources-and-persistence/persistence/deployment.yaml 71 | 72 | > Не забудьте указать обязательное поле serviceName 73 | 74 | Так же укажите поле serviceAccount: prometheus на одном уровне с containers, initContainers, volumes 75 | См пример с rabbitmq из материалов лекции. 76 | 77 | * Создайте service и ingress для этого стейтфулсета, так чтобы запросы с любым доменом на белый IP 78 | вашего сервиса nginx-ingress-controller (тот что в нэймспэйсе ingress-nginx с типом LoadBalancer) 79 | шли на приложение 80 | 81 | * Проверьте что при обращении из браузера на белый IP вы видите открывшееся 82 | приложение Prometheus 83 | 84 | * В этом же неймспэйсе создайте DaemonSet node-exporter как в примере к лекции: 85 | practice/7.advanced-abstractions/daemonset.yaml 86 | 87 | * Откройте в браузере интерфейс Prometheus. 88 | Попробуйте открыть Status -> Targets 89 | Тут вы должны увидеть все ноды своего кластера, которые Prometheus смог определить и собирает с ним метрики. 90 | 91 | Так же можете попробовать на вкладке Graph выполнить запрос node_load1 - это минутный Load Average для каждой из нод в кластере. 92 | -------------------------------------------------------------------------------- /homework/8.ci-cd/README.md: -------------------------------------------------------------------------------- 1 | # Домашнее задание для к уроку 8 - CI/CD 2 | 3 | > В рамках данного задания нужно продолжить работу с CI приложения из лекции. 4 | > То есть для начала выполнения этого домашнего задания необходимо проделать то, 5 | > что показывалось в лекции. 6 | > Все задание должно выполняться применительно к файлам в директории practice/8.ci-cd/app 7 | 8 | Переделайте шаг деплоя в CI/CD, который демонстрировался на лекции 9 | таким образом, чтобы при каждом прогоне шага deploy в кластер применялись 10 | манифесты приложения. При этом версия докер образа в деплойменте при апплае 11 | должна подменяться на ту, что была собрана в шаге build. 12 | 13 | Для этого самым очевидным способом было бы воспользоваться утилитой sed. 14 | 15 | * Измените образ в деплойменте приложения (файл kube/deployment.yaml) на плейсхолдер. 
16 | 17 | Вот это 18 | 19 | ```yaml 20 | image: nginx:1.12 # это просто плэйсхолдер 21 | ``` 22 | 23 | На это 24 | 25 | ```yaml 26 | image: __IMAGE__ 27 | ``` 28 | 29 | * Измените шаг деплоя в .gitlab-ci.yml, 30 | чтобы изменять __IMAGE__ на реальное имя образа и тег 31 | 32 | Это 33 | 34 | ```yaml 35 | - kubectl set image deployment/$CI_PROJECT_NAME *=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG.$CI_PIPELINE_ID --namespace $CI_ENVIRONMENT_NAME 36 | ``` 37 | 38 | На это 39 | 40 | ```yaml 41 | - sed -i "s,__IMAGE__,$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG.$CI_PIPELINE_ID,g" kube/deployment.yaml 42 | - kubectl apply -f kube/ --namespace $CI_ENVIRONMENT_NAME 43 | ``` 44 | 45 | > Вторую строчку шага деплоя (которая отслеживает статус деплоя) оставьте без изменений. 46 | 47 | * Попробуйте закоммитить свои изменения, запушить их в репозиторий 48 | (тот же, который вы создавали во время лекции на Gitlab.com) 49 | и посмотреть на выполнение CI в интерфейсе Gitlab. 50 | 51 | > Так как окружений у нас два (stage и prod), то помимо образа при апплае из CI 52 | > нам также было бы хорошо подменять host в ingress.yaml. 53 | > Попробуйте реализовать это по аналогии, подставляя в ингресс вместо 54 | > плэйсхолдера значение переменной $CI_ENVIRONMENT_NAME 55 | 56 | * Так же попробуйте протестировать откат на предыдущую версию, 57 | при возникновении ошибки при деплое 58 | 59 | Для этого можно изменить значение переменной DB_HOST в deployment.yaml на какое нибудь несуществующее. 60 | Тогда при старте приложения оно не сможет найти БД и будет постоянно рестрартовать. CI должен в течении progressDeadlineSeconds: 300 и по.сле этого запустить процедуру отката. 61 | При этом не должно возникать недоступности приложения, так как старая реплика должна продолжать работать, пока новая пытается стартануть. 62 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/.dockerignore: -------------------------------------------------------------------------------- 1 | *.sqlite3 2 | venv/ 3 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/.gitignore: -------------------------------------------------------------------------------- 1 | __pycache__/ 2 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.6.8-jessie 2 | 3 | ENV PYTHONUNBUFFERED 1 4 | 5 | WORKDIR /app 6 | 7 | ADD requirements.txt . 8 | RUN pip install -r requirements.txt 9 | 10 | ADD . . 11 | 12 | CMD /bin/sh docker-entrypoint.sh 13 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/docker-compose.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | 4 | services: 5 | db: 6 | image: postgres:10.6 7 | command: -c fsync=off -c full_page_writes=off 8 | environment: 9 | POSTGRES_PASSWORD: postgres 10 | POSTGRES_DB: test 11 | web: 12 | build: . 
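    # образ для сервиса web собирается из Dockerfile в текущей директории;
    # том .:/app (см. volumes ниже) подменяет код в контейнере локальными исходниками, поэтому правки подхватываются без пересборки образа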
13 | environment: 14 | DB_NAME: test 15 | DB_USER: postgres 16 | DB_PASSWORD: postgres 17 | DB_HOST: db 18 | ports: 19 | - "8000:8000" 20 | volumes: 21 | - .:/app 22 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/docker-entrypoint.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | python manage.py migrate 4 | python manage.py runserver 0.0.0.0:8000 5 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/manage.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import sys 4 | 5 | if __name__ == '__main__': 6 | os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings') 7 | try: 8 | from django.core.management import execute_from_command_line 9 | except ImportError as exc: 10 | raise ImportError( 11 | "Couldn't import Django. Are you sure it's installed and " 12 | "available on your PYTHONPATH environment variable? Did you " 13 | "forget to activate a virtual environment?" 14 | ) from exc 15 | execute_from_command_line(sys.argv) 16 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/myapp/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/practice/2.docker/docker-compose/myapp/__init__.py -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/myapp/settings.py: -------------------------------------------------------------------------------- 1 | """ 2 | Django settings for myapp project. 3 | 4 | Generated by 'django-admin startproject' using Django 2.1.5. 5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.1/topics/settings/ 8 | 9 | For the full list of settings and their values, see 10 | https://docs.djangoproject.com/en/2.1/ref/settings/ 11 | """ 12 | 13 | import os 14 | 15 | # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 16 | BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 17 | 18 | 19 | # Quick-start development settings - unsuitable for production 20 | # See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/ 21 | 22 | # SECURITY WARNING: keep the secret key used in production secret! 23 | SECRET_KEY = 'h!i)j*@*1@b*k!mewnj$*oe5(-841e(7h_)34_$3jn5nr-!23q' 24 | 25 | # SECURITY WARNING: don't run with debug turned on in production! 
26 | DEBUG = True 27 | 28 | ALLOWED_HOSTS = [] 29 | 30 | 31 | # Application definition 32 | 33 | INSTALLED_APPS = [ 34 | 'django.contrib.admin', 35 | 'django.contrib.auth', 36 | 'django.contrib.contenttypes', 37 | 'django.contrib.sessions', 38 | 'django.contrib.messages', 39 | 'django.contrib.staticfiles', 40 | ] 41 | 42 | MIDDLEWARE = [ 43 | 'django.middleware.security.SecurityMiddleware', 44 | 'django.contrib.sessions.middleware.SessionMiddleware', 45 | 'django.middleware.common.CommonMiddleware', 46 | 'django.middleware.csrf.CsrfViewMiddleware', 47 | 'django.contrib.auth.middleware.AuthenticationMiddleware', 48 | 'django.contrib.messages.middleware.MessageMiddleware', 49 | 'django.middleware.clickjacking.XFrameOptionsMiddleware', 50 | ] 51 | 52 | ROOT_URLCONF = 'myapp.urls' 53 | 54 | TEMPLATES = [ 55 | { 56 | 'BACKEND': 'django.template.backends.django.DjangoTemplates', 57 | 'DIRS': [], 58 | 'APP_DIRS': True, 59 | 'OPTIONS': { 60 | 'context_processors': [ 61 | 'django.template.context_processors.debug', 62 | 'django.template.context_processors.request', 63 | 'django.contrib.auth.context_processors.auth', 64 | 'django.contrib.messages.context_processors.messages', 65 | ], 66 | }, 67 | }, 68 | ] 69 | 70 | WSGI_APPLICATION = 'myapp.wsgi.application' 71 | 72 | 73 | # Database 74 | # https://docs.djangoproject.com/en/2.1/ref/settings/#databases 75 | 76 | DATABASES = { 77 | 'default': { 78 | 'ENGINE': 'django.db.backends.postgresql', 79 | 'NAME': os.getenv('DB_NAME'), 80 | 'USER': os.getenv('DB_USER'), 81 | 'PASSWORD' : os.getenv('DB_PASSWORD'), 82 | 'HOST': os.getenv('DB_HOST'), 83 | 'PORT': os.getenv('DB_PORT', 5432), 84 | } 85 | } 86 | 87 | 88 | # Password validation 89 | # https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators 90 | 91 | AUTH_PASSWORD_VALIDATORS = [ 92 | { 93 | 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', 94 | }, 95 | { 96 | 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 97 | }, 98 | { 99 | 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', 100 | }, 101 | { 102 | 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', 103 | }, 104 | ] 105 | 106 | 107 | # Internationalization 108 | # https://docs.djangoproject.com/en/2.1/topics/i18n/ 109 | 110 | LANGUAGE_CODE = 'en-us' 111 | 112 | TIME_ZONE = 'UTC' 113 | 114 | USE_I18N = True 115 | 116 | USE_L10N = True 117 | 118 | USE_TZ = True 119 | 120 | 121 | # Static files (CSS, JavaScript, Images) 122 | # https://docs.djangoproject.com/en/2.1/howto/static-files/ 123 | 124 | STATIC_URL = '/static/' 125 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/myapp/urls.py: -------------------------------------------------------------------------------- 1 | """myapp URL Configuration 2 | 3 | The `urlpatterns` list routes URLs to views. For more information please see: 4 | https://docs.djangoproject.com/en/2.1/topics/http/urls/ 5 | Examples: 6 | Function views 7 | 1. Add an import: from my_app import views 8 | 2. Add a URL to urlpatterns: path('', views.home, name='home') 9 | Class-based views 10 | 1. Add an import: from other_app.views import Home 11 | 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') 12 | Including another URLconf 13 | 1. Import the include() function: from django.urls import include, path 14 | 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls')) 15 | """ 16 | from django.contrib import admin 17 | from django.urls import include, path 18 | 19 | urlpatterns = [ 20 | path('polls/', include('polls.urls')), 21 | path('admin/', admin.site.urls), 22 | ] 23 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/myapp/wsgi.py: -------------------------------------------------------------------------------- 1 | """ 2 | WSGI config for myapp project. 3 | 4 | It exposes the WSGI callable as a module-level variable named ``application``. 5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/ 8 | """ 9 | 10 | import os 11 | 12 | from django.core.wsgi import get_wsgi_application 13 | 14 | os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings') 15 | 16 | application = get_wsgi_application() 17 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/practice/2.docker/docker-compose/polls/__init__.py -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/admin.py: -------------------------------------------------------------------------------- 1 | from django.contrib import admin 2 | 3 | # Register your models here. 4 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/apps.py: -------------------------------------------------------------------------------- 1 | from django.apps import AppConfig 2 | 3 | 4 | class PollsConfig(AppConfig): 5 | name = 'polls' 6 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/migrations/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/practice/2.docker/docker-compose/polls/migrations/__init__.py -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/models.py: -------------------------------------------------------------------------------- 1 | from django.db import models 2 | 3 | # Create your models here. 4 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/tests.py: -------------------------------------------------------------------------------- 1 | from django.test import TestCase 2 | 3 | # Create your tests here. 4 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/urls.py: -------------------------------------------------------------------------------- 1 | from django.urls import path 2 | 3 | from . 
import views 4 | 5 | urlpatterns = [ 6 | path('', views.index, name='index'), 7 | ] 8 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/polls/views.py: -------------------------------------------------------------------------------- 1 | from django.http import HttpResponse 2 | 3 | 4 | def index(request): 5 | return HttpResponse("Hello, world. You're at the polls index.\n") 6 | -------------------------------------------------------------------------------- /practice/2.docker/docker-compose/requirements.txt: -------------------------------------------------------------------------------- 1 | Django==2.2.13 2 | psycopg2==2.7.6.1 3 | pytz==2018.9 4 | -------------------------------------------------------------------------------- /practice/2.docker/golang/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang as builder 2 | 3 | RUN mkdir /app 4 | 5 | COPY . /app 6 | 7 | WORKDIR /app 8 | RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server . 9 | 10 | 11 | FROM scratch 12 | 13 | COPY --from=builder /app/server / 14 | 15 | EXPOSE 8080 16 | CMD ["/server"] 17 | -------------------------------------------------------------------------------- /practice/2.docker/golang/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/pauljamm/geekbrains-conteinerization/practice/2.docker/golang 2 | 3 | go 1.14 4 | 5 | require github.com/gorilla/mux v1.7.4 // indirect 6 | -------------------------------------------------------------------------------- /practice/2.docker/golang/go.sum: -------------------------------------------------------------------------------- 1 | github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc= 2 | github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So= 3 | -------------------------------------------------------------------------------- /practice/2.docker/golang/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "net/http" 7 | "strconv" 8 | 9 | "github.com/gorilla/mux" 10 | ) 11 | 12 | func sumHandler(w http.ResponseWriter, r *http.Request) { 13 | queryValues := r.URL.Query() 14 | a, _ := strconv.Atoi(queryValues.Get("a")) 15 | b, _ := strconv.Atoi(queryValues.Get("b")) 16 | fmt.Fprintf(w, "%v", a+b) 17 | } 18 | 19 | func main() { 20 | router := mux.NewRouter().StrictSlash(true) 21 | router.HandleFunc("/api/v1/sum", sumHandler) 22 | log.Fatal(http.ListenAndServe(":8080", router)) 23 | } 24 | -------------------------------------------------------------------------------- /practice/2.docker/python-django/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.9 2 | 3 | RUN pip install Django==3.2 4 | 5 | ADD . 
/app 6 | 7 | EXPOSE 8000 8 | 9 | ENTRYPOINT python /app/manage.py runserver 0.0.0.0:8000 10 | -------------------------------------------------------------------------------- /practice/2.docker/python-django/manage.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import sys 4 | 5 | if __name__ == '__main__': 6 | os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings') 7 | try: 8 | from django.core.management import execute_from_command_line 9 | except ImportError as exc: 10 | raise ImportError( 11 | "Couldn't import Django. Are you sure it's installed and " 12 | "available on your PYTHONPATH environment variable? Did you " 13 | "forget to activate a virtual environment?" 14 | ) from exc 15 | execute_from_command_line(sys.argv) 16 | -------------------------------------------------------------------------------- /practice/2.docker/python-django/myapp/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/adterskov/geekbrains-conteinerization/da6b6cf191aeda4eb29d91cb8126fdd6f8607a52/practice/2.docker/python-django/myapp/__init__.py -------------------------------------------------------------------------------- /practice/2.docker/python-django/myapp/settings.py: -------------------------------------------------------------------------------- 1 | """ 2 | Django settings for myapp project. 3 | 4 | Generated by 'django-admin startproject' using Django 2.1.5. 5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.1/topics/settings/ 8 | 9 | For the full list of settings and their values, see 10 | https://docs.djangoproject.com/en/2.1/ref/settings/ 11 | """ 12 | 13 | import os 14 | 15 | # Build paths inside the project like this: os.path.join(BASE_DIR, ...) 16 | BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) 17 | 18 | 19 | # Quick-start development settings - unsuitable for production 20 | # See https://docs.djangoproject.com/en/2.1/howto/deployment/checklist/ 21 | 22 | # SECURITY WARNING: keep the secret key used in production secret! 23 | SECRET_KEY = 'h!i)j*@*1@b*k!mewnj$*oe5(-841e(7h_)34_$3jn5nr-!23q' 24 | 25 | # SECURITY WARNING: don't run with debug turned on in production! 
26 | DEBUG = True 27 | 28 | ALLOWED_HOSTS = [] 29 | 30 | 31 | # Application definition 32 | 33 | INSTALLED_APPS = [ 34 | 'django.contrib.admin', 35 | 'django.contrib.auth', 36 | 'django.contrib.contenttypes', 37 | 'django.contrib.sessions', 38 | 'django.contrib.messages', 39 | 'django.contrib.staticfiles', 40 | ] 41 | 42 | MIDDLEWARE = [ 43 | 'django.middleware.security.SecurityMiddleware', 44 | 'django.contrib.sessions.middleware.SessionMiddleware', 45 | 'django.middleware.common.CommonMiddleware', 46 | 'django.middleware.csrf.CsrfViewMiddleware', 47 | 'django.contrib.auth.middleware.AuthenticationMiddleware', 48 | 'django.contrib.messages.middleware.MessageMiddleware', 49 | 'django.middleware.clickjacking.XFrameOptionsMiddleware', 50 | ] 51 | 52 | ROOT_URLCONF = 'myapp.urls' 53 | 54 | TEMPLATES = [ 55 | { 56 | 'BACKEND': 'django.template.backends.django.DjangoTemplates', 57 | 'DIRS': [], 58 | 'APP_DIRS': True, 59 | 'OPTIONS': { 60 | 'context_processors': [ 61 | 'django.template.context_processors.debug', 62 | 'django.template.context_processors.request', 63 | 'django.contrib.auth.context_processors.auth', 64 | 'django.contrib.messages.context_processors.messages', 65 | ], 66 | }, 67 | }, 68 | ] 69 | 70 | WSGI_APPLICATION = 'myapp.wsgi.application' 71 | 72 | 73 | # Database 74 | # https://docs.djangoproject.com/en/2.1/ref/settings/#databases 75 | 76 | DATABASES = { 77 | 'default': { 78 | 'ENGINE': 'django.db.backends.sqlite3', 79 | 'NAME': os.path.join(BASE_DIR, 'db.sqlite3'), 80 | } 81 | } 82 | 83 | 84 | # Password validation 85 | # https://docs.djangoproject.com/en/2.1/ref/settings/#auth-password-validators 86 | 87 | AUTH_PASSWORD_VALIDATORS = [ 88 | { 89 | 'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator', 90 | }, 91 | { 92 | 'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator', 93 | }, 94 | { 95 | 'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator', 96 | }, 97 | { 98 | 'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator', 99 | }, 100 | ] 101 | 102 | 103 | # Internationalization 104 | # https://docs.djangoproject.com/en/2.1/topics/i18n/ 105 | 106 | LANGUAGE_CODE = 'en-us' 107 | 108 | TIME_ZONE = 'UTC' 109 | 110 | USE_I18N = True 111 | 112 | USE_L10N = True 113 | 114 | USE_TZ = True 115 | 116 | 117 | # Static files (CSS, JavaScript, Images) 118 | # https://docs.djangoproject.com/en/2.1/howto/static-files/ 119 | 120 | STATIC_URL = '/static/' 121 | -------------------------------------------------------------------------------- /practice/2.docker/python-django/myapp/urls.py: -------------------------------------------------------------------------------- 1 | """myapp URL Configuration 2 | 3 | The `urlpatterns` list routes URLs to views. For more information please see: 4 | https://docs.djangoproject.com/en/2.1/topics/http/urls/ 5 | Examples: 6 | Function views 7 | 1. Add an import: from my_app import views 8 | 2. Add a URL to urlpatterns: path('', views.home, name='home') 9 | Class-based views 10 | 1. Add an import: from other_app.views import Home 11 | 2. Add a URL to urlpatterns: path('', Home.as_view(), name='home') 12 | Including another URLconf 13 | 1. Import the include() function: from django.urls import include, path 14 | 2. 
Add a URL to urlpatterns: path('blog/', include('blog.urls')) 15 | """ 16 | from django.contrib import admin 17 | from django.urls import path 18 | 19 | urlpatterns = [ 20 | path('admin/', admin.site.urls), 21 | ] 22 | -------------------------------------------------------------------------------- /practice/2.docker/python-django/myapp/wsgi.py: -------------------------------------------------------------------------------- 1 | """ 2 | WSGI config for myapp project. 3 | 4 | It exposes the WSGI callable as a module-level variable named ``application``. 5 | 6 | For more information on this file, see 7 | https://docs.djangoproject.com/en/2.1/howto/deployment/wsgi/ 8 | """ 9 | 10 | import os 11 | 12 | from django.core.wsgi import get_wsgi_application 13 | 14 | os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings') 15 | 16 | application = get_wsgi_application() 17 | -------------------------------------------------------------------------------- /practice/3.kubernetes-intro/deployment.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-deployment 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | -------------------------------------------------------------------------------- /practice/3.kubernetes-intro/pod.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Pod 4 | metadata: 5 | name: my-pod 6 | spec: 7 | containers: 8 | - image: nginx:1.12 9 | name: nginx 10 | ports: 11 | - containerPort: 80 12 | -------------------------------------------------------------------------------- /practice/3.kubernetes-intro/replicaset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: ReplicaSet 4 | metadata: 5 | name: my-replicaset 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/README.md: -------------------------------------------------------------------------------- 1 | **Persistent Storage** 2 | 3 | Руководства по работе с хранилищами в VK Cloud 4 | https://mcs.mail.ru/docs/base/k8s/k8s-pvc 5 | 6 | Посмотреть доступные Storage Class 7 | ``` 8 | kubectl get storageclasses.storage.k8s.io 9 | ``` 10 | 11 |
Вы можете менять политику persistentVolumeReclaimPolicy, тем самым указывая, нужно ли сохранять данные тома после удаления PersistentVolumeClaim: 12 |
https://kubernetes.io/docs/concepts/storage/persistent-volumes/#reclaim-policy 13 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/configmap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: my-configmap 6 | data: 7 | default.conf: | 8 | server { 9 | listen 80 default_server; 10 | server_name _; 11 | 12 | default_type text/plain; 13 | 14 | location / { 15 | return 200 '$hostname\n'; 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/deployment-with-configmap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-deployment 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | resources: 22 | requests: 23 | cpu: 100m 24 | memory: 100Mi 25 | limits: 26 | cpu: 100m 27 | memory: 100Mi 28 | volumeMounts: 29 | - name: config 30 | mountPath: /etc/nginx/conf.d/ 31 | volumes: 32 | - name: config 33 | configMap: 34 | name: my-configmap 35 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/deployment-with-env.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-deployment 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | env: 20 | - name: FOO 21 | value: bar 22 | - name: TEST 23 | value: "1" 24 | ports: 25 | - containerPort: 80 26 | resources: 27 | requests: 28 | cpu: 100m 29 | memory: 100Mi 30 | limits: 31 | cpu: 100m 32 | memory: 100Mi 33 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/deployment-with-resources.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-deployment 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | resources: 22 | requests: 23 | cpu: 100m 24 | memory: 100Mi 25 | limits: 26 | cpu: 100m 27 | memory: 100Mi 28 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/deployment-with-secret.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-deployment 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.12 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | env: 22 | - name: PASS 23 | valueFrom: 24 | secretKeyRef: 25 | name: my-secret 26 | key: PASS 27 | 
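Секрет my-secret, на который ссылается deployment-with-secret.yaml, отдельным манифестом в этой директории не приведён — предполагается, что он уже создан в кластере. Для примера его можно описать примерно так (значение пароля здесь условное):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  PASS: examplepassword  # условное значение; при применении будет закодировано в base64
```

Тот же секрет можно создать одной командой: `kubectl create secret generic my-secret --from-literal=PASS=examplepassword`.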
-------------------------------------------------------------------------------- /practice/4.resources-and-persistence/nfs/README.md: -------------------------------------------------------------------------------- 1 | **NFS** 2 | 3 | Руководство по настройке NFS в облаке VK Cloud Solutions
4 | https://mcs.mail.ru/docs/ru/base/k8s/k8s-pvc/nfs-connection 5 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/nfs/pv.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: pv-nfs 5 | spec: 6 | accessModes: 7 | - ReadWriteMany 8 | mountOptions: 9 | - hard 10 | - nfsvers=4.0 11 | - timeo=60 12 | - retrans=10 13 | capacity: 14 | storage: 10Gi 15 | nfs: 16 | server: 10.0.0.** 17 | path: "/shares/share-***" 18 | persistentVolumeReclaimPolicy: "Retain" 19 | 20 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/nfs/pvc.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: PersistentVolumeClaim 3 | apiVersion: v1 4 | metadata: 5 | name: pvc-nfs 6 | spec: 7 | accessModes: 8 | - ReadWriteMany 9 | resources: 10 | requests: 11 | storage: 10Gi 12 | volumeName: "pv-nfs" 13 | storageClassName: "" 14 | 15 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/persistence/configmap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: webdav 6 | data: 7 | default.conf: | 8 | server { 9 | listen 80 default_server; 10 | server_name _; 11 | 12 | default_type text/plain; 13 | 14 | location / { 15 | alias /data; 16 | autoindex on; 17 | client_body_temp_path /tmp; 18 | dav_methods PUT DELETE MKCOL COPY MOVE; 19 | create_full_put_path on; 20 | dav_access user:rw group:rw all:r; 21 | } 22 | } 23 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/persistence/deployment.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: webdav 6 | spec: 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: webdav 11 | strategy: 12 | rollingUpdate: 13 | maxSurge: 1 14 | maxUnavailable: 1 15 | type: RollingUpdate 16 | template: 17 | metadata: 18 | labels: 19 | app: webdav 20 | spec: 21 | initContainers: 22 | - image: busybox 23 | name: mount-permissions-fix 24 | command: ["sh", "-c", "chmod 777 /data"] 25 | volumeMounts: 26 | - name: data 27 | mountPath: /data 28 | containers: 29 | - image: nginx:1.14 30 | name: nginx 31 | ports: 32 | - containerPort: 80 33 | resources: 34 | requests: 35 | cpu: 100m 36 | memory: 100Mi 37 | limits: 38 | cpu: 100m 39 | memory: 100Mi 40 | volumeMounts: 41 | - name: config 42 | mountPath: /etc/nginx/conf.d 43 | - name: data 44 | mountPath: /data 45 | volumes: 46 | - name: config 47 | configMap: 48 | name: webdav 49 | - name: data 50 | persistentVolumeClaim: 51 | claimName: webdav 52 | -------------------------------------------------------------------------------- /practice/4.resources-and-persistence/persistence/pvc.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: PersistentVolumeClaim 3 | apiVersion: v1 4 | metadata: 5 | name: webdav 6 | spec: 7 | accessModes: 8 | - ReadWriteOnce 9 | resources: 10 | requests: 11 | storage: 2Gi 12 | storageClassName: csi-ceph-hdd-dp1 13 | -------------------------------------------------------------------------------- /practice/5.kubernetes-network/README.md: 
-------------------------------------------------------------------------------- 1 | curl Examples 2 | ``` 3 | curl -k 1.1.1.1 -L -H "Host: my.company.com" 4 | ``` 5 | -------------------------------------------------------------------------------- /practice/5.kubernetes-network/deployment-with-probes.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: my-app 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: my-app 11 | template: 12 | metadata: 13 | labels: 14 | app: my-app 15 | spec: 16 | containers: 17 | - image: nginx:1.20 18 | name: nginx 19 | ports: 20 | - containerPort: 80 21 | readinessProbe: 22 | failureThreshold: 3 23 | httpGet: 24 | path: / 25 | port: 80 26 | periodSeconds: 10 27 | successThreshold: 1 28 | timeoutSeconds: 60 29 | livenessProbe: 30 | failureThreshold: 3 31 | httpGet: 32 | path: / 33 | port: 80 34 | periodSeconds: 10 35 | successThreshold: 1 36 | timeoutSeconds: 60 37 | initialDelaySeconds: 10 38 | -------------------------------------------------------------------------------- /practice/5.kubernetes-network/ingress.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.k8s.io/v1 3 | kind: Ingress 4 | metadata: 5 | name: my-ingress-simple 6 | annotations: 7 | kubernetes.io/ingress.class: nginx 8 | spec: 9 | rules: 10 | - http: 11 | paths: 12 | - path: "/" 13 | pathType: Prefix 14 | backend: 15 | service: 16 | name: my-service 17 | port: 18 | number: 8080 19 | --- 20 | apiVersion: networking.k8s.io/v1 21 | kind: Ingress 22 | metadata: 23 | name: my-ingress-api 24 | annotations: 25 | kubernetes.io/ingress.class: nginx 26 | nginx.ingress.kubernetes.io/rewrite-target: / 27 | spec: 28 | rules: 29 | - host: my-app.local 30 | http: 31 | paths: 32 | - path: "/api" 33 | pathType: Prefix 34 | backend: 35 | service: 36 | name: my-service 37 | port: 38 | number: 8080 39 | 40 | -------------------------------------------------------------------------------- /practice/5.kubernetes-network/net-tool.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: Deployment 4 | metadata: 5 | name: net-tool 6 | spec: 7 | replicas: 1 8 | selector: 9 | matchLabels: 10 | app: net-tool 11 | template: 12 | metadata: 13 | labels: 14 | app: net-tool 15 | spec: 16 | containers: 17 | - image: nginx:1.20 18 | name: nginx 19 | -------------------------------------------------------------------------------- /practice/5.kubernetes-network/service.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: my-service 6 | spec: 7 | ports: 8 | - port: 8080 9 | targetPort: 80 10 | selector: 11 | app: my-app 12 | type: ClusterIP 13 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/README.md: -------------------------------------------------------------------------------- 1 | **Проблемы при запуске Prometheus Node Exporter** 2 | 3 | Если Pods Node Exporter не создаются выполните команду 4 | ``` 5 | kubectl describe ds node-exporter 6 | ``` 7 | 8 | Вы увидите следующие ошибки 9 | ``` 10 | Error creating: admission webhook "validation.gatekeeper.sh" denied the request: [psp-host-filesystem] HostPath volume {"hostPath": {"path": "/", "type": ""}, "name": "root"} is not allowed, pod: 
node-exporter-6rwmd. Allowed path: [{"pathPrefix": "/psp", "readOnly": true}], please follow the article https://vk.cc/cfc8TH to get more information 11 | [psp-host-filesystem] HostPath volume {"hostPath": {"path": "/proc", "type": ""}, "name": "proc"} is not allowed, pod: node-exporter-fcwxm. Allowed path: [{"pathPrefix": "/psp", "readOnly": true}], please follow the article https://vk.cc/cfc8TH to get more information 12 | [psp-host-filesystem] HostPath volume {"hostPath": {"path": "/sys", "type": ""}, "name": "sys"} is not allowed, pod: node-exporter-fcwxm. Allowed path: [{"pathPrefix": "/psp", "readOnly": true}], please follow the article https://vk.cc/cfc8TH to get more information 13 | [psp-host-namespace] Sharing the host namespace is not allowed: node-exporter-fcwxm, please follow the article https://vk.cc/cfc8TH to get more information 14 | ``` 15 | 16 | Проблемы связана с работой GateKeeper в последних версиях кластеров K8s в облаке VK 17 | 18 | Просмотреть правила GateKeeper можно командой 19 | ``` 20 | kubectl get constrainttemplate 21 | ``` 22 | 23 | Удалите существующие правила 24 | ``` 25 | kubectl delete constrainttemplate k8spsphostfilesystem 26 | kubectl delete constrainttemplate k8spsphostnamespace 27 | ``` 28 | 29 | Через несколько минут конфигурация GateKeeper обновится и Pods Node Exporter будут созданы. 30 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/cronjob.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: batch/v1beta1 3 | kind: CronJob 4 | metadata: 5 | name: hello 6 | spec: 7 | schedule: "*/1 * * * *" 8 | concurrencyPolicy: Allow 9 | jobTemplate: 10 | spec: 11 | backoffLimit: 2 12 | activeDeadlineSeconds: 100 13 | template: 14 | spec: 15 | containers: 16 | - name: hello 17 | image: busybox 18 | args: 19 | - /bin/sh 20 | - -c 21 | - date; echo Hello from the Kubernetes cluster 22 | restartPolicy: Never 23 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/daemonset-with-tollerations.yam: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: DaemonSet 4 | metadata: 5 | labels: 6 | app: node-exporter 7 | name: node-exporter 8 | spec: 9 | updateStrategy: 10 | rollingUpdate: 11 | maxUnavailable: 1 12 | type: RollingUpdate 13 | selector: 14 | matchLabels: 15 | app: node-exporter 16 | template: 17 | metadata: 18 | labels: 19 | app: node-exporter 20 | spec: 21 | containers: 22 | - args: 23 | - --web.listen-address=0.0.0.0:9101 24 | - --path.procfs=/host/proc 25 | - --path.sysfs=/host/sys 26 | - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) 27 | - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ 28 | image: quay.io/prometheus/node-exporter:v0.16.0 29 | imagePullPolicy: IfNotPresent 30 | name: node-exporter 31 | volumeMounts: 32 | - mountPath: /host/proc 33 | name: proc 34 | - mountPath: /host/sys 35 | name: sys 36 | - mountPath: /host/root 37 | name: root 38 | readOnly: true 39 | hostNetwork: true 40 | hostPID: true 41 | tolerations: 42 | - effect: NoSchedule 43 | operator: Exists 44 | nodeSelector: 45 | kubernetes.io/os: linux 46 | volumes: 47 | - hostPath: 48 | path: /proc 49 | type: "" 50 | name: proc 51 | - hostPath: 52 | path: 
/sys 53 | type: "" 54 | name: sys 55 | - hostPath: 56 | path: / 57 | type: "" 58 | name: root 59 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/daemonset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: DaemonSet 4 | metadata: 5 | labels: 6 | app: node-exporter 7 | name: node-exporter 8 | spec: 9 | updateStrategy: 10 | rollingUpdate: 11 | maxUnavailable: 1 12 | type: RollingUpdate 13 | selector: 14 | matchLabels: 15 | app: node-exporter 16 | template: 17 | metadata: 18 | labels: 19 | app: node-exporter 20 | spec: 21 | containers: 22 | - args: 23 | - --web.listen-address=0.0.0.0:9101 24 | - --path.procfs=/host/proc 25 | - --path.sysfs=/host/sys 26 | - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/) 27 | - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$ 28 | image: quay.io/prometheus/node-exporter:v0.16.0 29 | imagePullPolicy: IfNotPresent 30 | name: node-exporter 31 | volumeMounts: 32 | - mountPath: /host/proc 33 | name: proc 34 | - mountPath: /host/sys 35 | name: sys 36 | - mountPath: /host/root 37 | name: root 38 | readOnly: true 39 | hostNetwork: true 40 | hostPID: true 41 | volumes: 42 | - hostPath: 43 | path: /proc 44 | type: "" 45 | name: proc 46 | - hostPath: 47 | path: /sys 48 | type: "" 49 | name: sys 50 | - hostPath: 51 | path: / 52 | type: "" 53 | name: root 54 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/hpa.README.md: -------------------------------------------------------------------------------- 1 | # HPA 2 | 3 | Создаем деплоймент с тестовым приложением 4 | 5 | ```bash 6 | kubectl run php-apache --image=k8s.gcr.io/hpa-example --requests=cpu=100m --expose --port=80 7 | ``` 8 | 9 | Создаем автоскейлер 10 | 11 | ```bash 12 | kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=5 13 | ``` 14 | 15 | Смотрим на текущее количество подов 16 | 17 | ```bash 18 | kubectl get pod 19 | ``` 20 | 21 | Видим один под 22 | ```bash 23 | NAME READY STATUS RESTARTS AGE 24 | php-apache-566d7644df-z9dtt 1/1 Running 0 15s 25 | ``` 26 | 27 | Смотрим на HPA 28 | 29 | ```bash 30 | kubectl get hpa 31 | ``` 32 | 33 | Видим созданную HPA 34 | ```bash 35 | NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE 36 | php-apache Deployment/php-apache 1%/50% 1 5 1 32s 37 | ``` 38 | 39 | Она будет скейлить поды, как только их использование цпу начнет составлять 50% от реквестов 40 | 41 | Создаем нагрузку 42 | 43 | ```bash 44 | kubectl run load-generator --image=busybox -- /bin/sh -c "while true; do wget -q -O- http://php-apache; done" 45 | ``` 46 | 47 | Проверяем текущее потребление подом цпу 48 | 49 | ``` 50 | kubectl top pod 51 | ``` 52 | 53 | Видим что она подросла 54 | ```bash 55 | NAME CPU(cores) MEMORY(bytes) 56 | php-apache-566d7644df-z9dtt 936m 11Mi 57 | ``` 58 | 59 | Ждем когда начнет работать автоскейл 60 | 61 | ```bash 62 | kubectl get pod -w 63 | ``` 64 | 65 | Через какое то время видим, что подов стало 5 66 | 67 | ```bash 68 | NAME READY STATUS RESTARTS AGE 69 | load-generator-6b9cf94758-5qmbx 1/1 Running 0 2m16s 70 | php-apache-566d7644df-4zvv7 1/1 Running 0 108s 71 | php-apache-566d7644df-kv662 1/1 Running 0 93s 72 | php-apache-566d7644df-tg8qw 1/1 Running 0 108s 73 | 
php-apache-566d7644df-z9dtt     1/1     Running   0          13m 74 | php-apache-566d7644df-zlwd7     1/1     Running   0          108s 75 | ``` 76 | Отлично, автоскейл сработал 77 | 78 | Убираем нагрузку 79 | 80 | ```bash 81 | kubectl delete deployment load-generator 82 | ``` 83 | 84 | Проверяем нагрузку на поды 85 | 86 | ```bash 87 | kubectl top pod 88 | ``` 89 | 90 | Через какое-то время замечаем, что она упала 91 | 92 | ```bash 93 | NAME                          CPU(cores)   MEMORY(bytes) 94 | php-apache-566d7644df-4zvv7   1m           11Mi 95 | php-apache-566d7644df-kv662   1m           11Mi 96 | php-apache-566d7644df-tg8qw   1m           11Mi 97 | php-apache-566d7644df-z9dtt   1m           11Mi 98 | php-apache-566d7644df-zlwd7   1m           11Mi 99 | ``` 100 | 101 | Ну и смотрим, как автоскейлер отработает в обратную сторону 102 | 103 | ```bash 104 | kubectl get pod -w 105 | ``` 106 | 107 | Видим, что ненужные поды умирают (в течение 5 минут) 108 | 109 | ```bash 110 | NAME                              READY   STATUS        RESTARTS   AGE 111 | php-apache-566d7644df-4zvv7       0/1     Terminating   0          8m59s 112 | php-apache-566d7644df-kv662       0/1     Terminating   0          8m44s 113 | php-apache-566d7644df-tg8qw       0/1     Terminating   0          8m59s 114 | php-apache-566d7644df-z9dtt       1/1     Running       0          20m 115 | php-apache-566d7644df-zlwd7       0/1     Terminating   0          8m59s 116 | ``` 117 | 118 | Автоскейлер вернул все к первоначальному варианту с одним подом 119 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/job.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: batch/v1 3 | kind: Job 4 | metadata: 5 |   name: hello 6 | spec: 7 |   backoffLimit: 2 8 |   activeDeadlineSeconds: 60 9 |   template: 10 |     spec: 11 |       containers: 12 |       - name: hello 13 |         image: busybox 14 |         args: 15 |         - /bin/sh 16 |         - -c 17 |         - while true; do sleep 1; date; echo Hello from the Kubernetes cluster; done 18 |       restartPolicy: Never 19 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/configmap.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 |   name: rabbitmq-config 6 | data: 7 |   enabled_plugins: | 8 |     [rabbitmq_peer_discovery_k8s,rabbitmq_federation_management,rabbitmq_management]. 9 |   rabbitmq.conf: | 10 |     ## Cluster formation. See http://www.rabbitmq.com/cluster-formation.html to learn more. 11 |     cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s 12 |     cluster_formation.k8s.host = kubernetes.default 13 |     ## Should RabbitMQ node name be computed from the pod's hostname or IP address? 14 |     ## IP addresses are not stable, so using [stable] hostnames is recommended when possible. 15 |     ## Set to "hostname" to use pod hostnames. 16 |     ## When this value is changed, so should the variable used to set the RABBITMQ_NODENAME 17 |     ## environment variable. 18 |     cluster_formation.k8s.address_type = ip 19 |     ## How often should node cleanup checks run? 20 |     cluster_formation.node_cleanup.interval = 30 21 |     ## Set to false if automatic removal of unknown/absent nodes 22 |     ## is desired.
This can be dangerous, see 23 | ## * http://www.rabbitmq.com/cluster-formation.html#node-health-checks-and-cleanup 24 | ## * https://groups.google.com/forum/#!msg/rabbitmq-users/wuOfzEywHXo/k8z_HWIkBgAJ 25 | cluster_formation.node_cleanup.only_log_warning = true 26 | cluster_partition_handling = autoheal 27 | ## See http://www.rabbitmq.com/ha.html#master-migration-data-locality 28 | queue_master_locator=min-masters 29 | ## See http://www.rabbitmq.com/access-control.html#loopback-users 30 | loopback_users.guest = false 31 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/ingress.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.k8s.io/v1 3 | kind: Ingress 4 | metadata: 5 | name: rabbitmq 6 | annotations: 7 | kubernetes.io/ingress.class: nginx 8 | spec: 9 | rules: 10 | - http: 11 | paths: 12 | - path: "/" 13 | pathType: Prefix 14 | backend: 15 | service: 16 | name: rabbitmq 17 | port: 18 | number: 15672 19 | 20 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/role.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: Role 3 | apiVersion: rbac.authorization.k8s.io/v1 4 | metadata: 5 | name: endpoint-reader 6 | rules: 7 | - apiGroups: [""] 8 | resources: ["endpoints"] 9 | verbs: ["get"] 10 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/rolebinding.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: RoleBinding 3 | apiVersion: rbac.authorization.k8s.io/v1 4 | metadata: 5 | name: endpoint-reader 6 | subjects: 7 | - kind: ServiceAccount 8 | name: rabbitmq 9 | namespace: default 10 | roleRef: 11 | apiGroup: rbac.authorization.k8s.io 12 | kind: Role 13 | name: endpoint-reader 14 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/service.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | kind: Service 3 | apiVersion: v1 4 | metadata: 5 | name: rabbitmq 6 | labels: 7 | app: rabbitmq 8 | spec: 9 | clusterIP: None 10 | ports: 11 | - name: amqp 12 | protocol: TCP 13 | port: 5672 14 | targetPort: 5672 15 | - name: admin 16 | protocol: TCP 17 | port: 15672 18 | targetPort: 15672 19 | selector: 20 | app: rabbitmq 21 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: rabbitmq 6 | -------------------------------------------------------------------------------- /practice/7.advanced-abstractions/rabbitmq-statefulset/statefulset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: StatefulSet 4 | metadata: 5 | name: rabbitmq 6 | spec: 7 | serviceName: rabbitmq 8 | replicas: 3 9 | selector: 10 | matchLabels: 11 | app: rabbitmq 12 | template: 13 | metadata: 14 | labels: 15 | app: rabbitmq 16 | spec: 17 | serviceAccount: rabbitmq 18 | terminationGracePeriodSeconds: 10 19 | containers: 20 | - name: rabbitmq-k8s 
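        # rabbitmq:3.7-management bundles the management UI on port 15672, published via the Service and Ingress above;
        # clustering is driven by the peer-discovery plugin enabled in the rabbitmq-config ConfigMap.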
21 | image: rabbitmq:3.7-management 22 | env: 23 | - name: MY_POD_IP 24 | valueFrom: 25 | fieldRef: 26 | fieldPath: status.podIP 27 | - name: RABBITMQ_USE_LONGNAME 28 | value: "true" 29 | - name: RABBITMQ_NODENAME 30 | value: "rabbit@$(MY_POD_IP)" 31 | - name: K8S_SERVICE_NAME 32 | value: "rabbitmq" 33 | - name: RABBITMQ_ERLANG_COOKIE 34 | value: "mycookie" 35 | ports: 36 | - name: amqp 37 | protocol: TCP 38 | containerPort: 5672 39 | - name: admin 40 | protocol: TCP 41 | containerPort: 15672 42 | livenessProbe: 43 | exec: 44 | command: ["rabbitmqctl", "status"] 45 | initialDelaySeconds: 60 46 | periodSeconds: 60 47 | timeoutSeconds: 15 48 | readinessProbe: 49 | exec: 50 | command: ["rabbitmqctl", "status"] 51 | initialDelaySeconds: 20 52 | periodSeconds: 60 53 | timeoutSeconds: 10 54 | imagePullPolicy: Always 55 | volumeMounts: 56 | - name: config-volume 57 | mountPath: /etc/rabbitmq 58 | - name: data 59 | mountPath: /var/lib/rabbitmq 60 | volumes: 61 | - name: config-volume 62 | configMap: 63 | name: rabbitmq-config 64 | items: 65 | - key: rabbitmq.conf 66 | path: rabbitmq.conf 67 | - key: enabled_plugins 68 | path: enabled_plugins 69 | affinity: 70 | podAntiAffinity: 71 | preferredDuringSchedulingIgnoredDuringExecution: 72 | - weight: 100 73 | podAffinityTerm: 74 | labelSelector: 75 | matchExpressions: 76 | - key: app 77 | operator: In 78 | values: 79 | - rabbitmq 80 | topologyKey: kubernetes.io/hostname 81 | volumeClaimTemplates: 82 | - metadata: 83 | name: data 84 | spec: 85 | accessModes: ["ReadWriteOnce"] 86 | resources: 87 | requests: 88 | storage: 1Gi 89 | storageClassName: csi-ceph-hdd-ms1 90 | -------------------------------------------------------------------------------- /practice/8.ci-cd/README.md: -------------------------------------------------------------------------------- 1 | # Инструкция для лекции 8 - CI/CD 2 | 3 | - [Подготовка](#Подготовка) 4 | - [Настраиваем интеграцию GitLab и Kubernetes](#Настраиваем-интеграцию-GitLab-и-Kubernetes) 5 | - [Запуск приложения](#Запуск-приложения) 6 | - [Проверяем работу приложения](#Проверяем-работу-приложения) 7 | 8 | ## Подготовка 9 | 10 | * Зарегистрируйте аккаунт [GitLab](https://gitlab.com/users/sign_up) 11 | * добавьте в настройках своего аккаунта на Gitlab.com свой публичный SSH ключ 12 | 13 | В правом верхнем углу выбираем **Preferences -> SSH keys** и добавляем SSH ключ 14 | 15 | Чтобы вывести содержимае ключа выполните 16 | 17 | ```bash 18 | cat ~/.ssh/id_rsa.pub 19 | ``` 20 | 21 | Сгенерировать SSH ключ можно командой 22 | 23 | ```bash 24 | ssh-keygen 25 | ``` 26 | 27 | * Создайте новый проект с именем geekbrains. Если выберете другое имя проекта, в дальнейшем нужно будет также изменить имя Deployment. 28 | * Скопируйте файлы практики в ваш репозиторий 29 | 30 | ```bash 31 | cd app 32 | git init --initial-branch=main 33 | git remote add origin https://gitlab.com//geekbrains.git 34 | git add . 35 | git commit -m "Initial commit" 36 | git push -u origin main 37 | ``` 38 | 39 | ## Настраиваем интеграцию GitLab и Kubernetes 40 | 41 | * Переходим в настройки проекта **Settings -> CI/CD -> Runners**. Отключаем Shared Runners. Мы будем настраивать Specific runners. 
42 | * Создаем нэймспэйс для раннера 43 | 44 | ```bash 45 | kubectl create ns gitlab 46 | ``` 47 | 48 | * Меняем регистрационный токен 49 | Для этого открываем gitlab-runner/gitlab-runner.yaml 50 | Там ищем и вставляем вместо него токен, 51 | который мы взяли в настройках проекта на Gitlab (**Set up a specific runner manually -> Registration token**) 52 | 53 | * Применяем манифесты для раннера 54 | 55 | ```bash 56 | kubectl apply --namespace gitlab -f gitlab-runner/gitlab-runner.yaml 57 | ``` 58 | 59 | * Обновляем страницу на GitLab, runner должен появиться в списке Available specific runners 60 | 61 | * Создаем нэймспэйсы для приложения 62 | 63 | ```bash 64 | kubectl create ns stage 65 | kubectl create ns prod 66 | ``` 67 | 68 | * Создаем авторизационные объекты, чтобы раннер мог деплоить в наши нэймспэйсы 69 | 70 | ```bash 71 | kubectl create sa deploy --namespace stage 72 | kubectl create rolebinding deploy --serviceaccount stage:deploy --clusterrole edit --namespace stage 73 | kubectl create sa deploy --namespace prod 74 | kubectl create rolebinding deploy --serviceaccount prod:deploy --clusterrole edit --namespace prod 75 | ``` 76 | 77 | * Получаем токены для деплоя в нэймспэйсы 78 | 79 | ```bash 80 | export NAMESPACE=stage; kubectl get secret $(kubectl get sa deploy --namespace $NAMESPACE -o jsonpath='{.secrets[0].name}') --namespace $NAMESPACE -o jsonpath='{.data.token}' 81 | export NAMESPACE=prod; kubectl get secret $(kubectl get sa deploy --namespace $NAMESPACE -o jsonpath='{.secrets[0].name}') --namespace $NAMESPACE -o jsonpath='{.data.token}' 82 | ``` 83 | 84 | Из этих токенов нужно создать переменные в проекте в Gitlab (**Settings -> CI/CD -> Variables**) с именами 85 | K8S_STAGE_CI_TOKEN и K8S_PROD_CI_TOKEN соответственно. 86 | 87 | * Создаем секреты для авторизации Kubernetes в Gitlab registry. При создании используем Token, созданный в **Settings -> Repository -> Deploy Tokens**. 88 | (read_registry, write_registry permissions) 89 | ```bash 90 | kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username= --docker-password= --docker-email=admin@admin.admin --namespace stage 91 | kubectl create secret docker-registry gitlab-registry --docker-server=registry.gitlab.com --docker-username= --docker-password= --docker-email=admin@admin.admin --namespace prod 92 | ``` 93 | 94 | * Патчим дефолтный сервис аккаунт для автоматического использование pull secret 95 | 96 | ```bash 97 | kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gitlab-registry"}]}' -n stage 98 | kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "gitlab-registry"}]}' -n prod 99 | ``` 100 | 101 | ## Запуск приложения 102 | 103 | * Создаем манифесты для БД в stage и prod 104 | 105 | ```bash 106 | kubectl apply --namespace stage -f app/kube/postgres/ 107 | kubectl apply --namespace prod -f app/kube/postgres/ 108 | ``` 109 | 110 | * Меняем хост в ингрессе приложения и применяем манифесты 111 | Для этого открываем app/kube/ingress.yaml 112 | Там ищем плейсхолдер "CHANGE ME" и вставляем вместо него stage 113 | 114 | Далее применяем на stage 115 | 116 | ```bash 117 | kubectl apply --namespace stage -f app/kube 118 | ``` 119 | 120 | Повторяем для прода. Открываем тот же файл ingress и 121 | вставляем вместо stage prod 122 | 123 | Далее применяем на prod 124 | 125 | ```bash 126 | kubectl apply --namespace prod -f app/kube 127 | ``` 128 | 129 | ## Проверяем работу приложения 130 | 131 | Поздравляю! 
Мы развернули приложение, теперь убедимся, что оно работает. Наше приложение - это REST-API. Можно выполнять к нему запросы через curl. В примерах указан недействительный ip адрес - 1.1.1.1, вам нужно заменить его на EXTERNAL-IP вашего сервиса ingres-controller (Load Balancer). 132 | 133 | 134 | Записать информацию о клиенте в БД 135 | ```bash 136 | curl 1.1.1.1/users -H "Host: stage" -X POST -d '{"name": "Vasiya", "age": 34, "city": "Vladivostok"}' 137 | ``` 138 | 139 | Получить список клиентов из БД 140 | ```bash 141 | curl 1.1.1.1/users -H "Host: stage" 142 | ``` 143 | 144 | 145 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/.dockerignore: -------------------------------------------------------------------------------- 1 | .gitlab-ci.yml 2 | .git/ 3 | kube/ 4 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/.gitlab-ci.yml: -------------------------------------------------------------------------------- 1 | variables: 2 | K8S_API_URL: https://kubernetes.default 3 | 4 | stages: 5 | - test 6 | - build 7 | - deploy 8 | 9 | test: 10 | stage: test 11 | image: golang:1.14 12 | script: 13 | - echo OK 14 | 15 | build: 16 | stage: build 17 | image: docker:19.03.12 18 | services: 19 | - docker:19.03.12-dind 20 | variables: 21 | DOCKER_DRIVER: overlay 22 | DOCKER_HOST: tcp://docker:2375 23 | DOCKER_TLS_CERTDIR: "" 24 | before_script: 25 | - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY 26 | script: 27 | - docker build . -t $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG.$CI_PIPELINE_ID 28 | - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG.$CI_PIPELINE_ID 29 | 30 | .deploy: &deploy 31 | stage: deploy 32 | image: bitnami/kubectl:1.16 33 | before_script: 34 | - export KUBECONFIG=/tmp/.kubeconfig 35 | - kubectl config set-cluster k8s --insecure-skip-tls-verify=true --server=$K8S_API_URL 36 | - kubectl config set-credentials ci --token=$(echo $K8S_CI_TOKEN | base64 --decode) 37 | - kubectl config set-context ci --cluster=k8s --user=ci 38 | - kubectl config use-context ci 39 | script: 40 | - kubectl set image deployment/$CI_PROJECT_NAME *=$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG.$CI_PIPELINE_ID --namespace $CI_ENVIRONMENT_NAME 41 | - kubectl rollout status deployment/$CI_PROJECT_NAME --namespace $CI_ENVIRONMENT_NAME || (kubectl rollout undo deployment/$CI_PROJECT_NAME --namespace $CI_ENVIRONMENT_NAME && exit 1) 42 | 43 | deploy:stage: 44 | <<: *deploy 45 | environment: 46 | name: stage 47 | variables: 48 | K8S_CI_TOKEN: $K8S_STAGE_CI_TOKEN 49 | only: 50 | - master 51 | 52 | deploy:prod: 53 | <<: *deploy 54 | environment: 55 | name: prod 56 | variables: 57 | K8S_CI_TOKEN: $K8S_PROD_CI_TOKEN 58 | only: 59 | - master 60 | when: manual 61 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM golang:1.14 as builder 2 | 3 | RUN mkdir /app 4 | 5 | COPY . /app 6 | 7 | WORKDIR /app 8 | RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server . 
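# CGO_ENABLED=0 produces a statically linked binary, which is what lets the final stage below
# start FROM scratch with no libc or other runtime dependencies.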
9 | 10 | 11 | FROM scratch 12 | 13 | COPY --from=builder /app/server / 14 | 15 | EXPOSE 8000 16 | CMD ["/server"] 17 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/app/app.go: -------------------------------------------------------------------------------- 1 | package app 2 | 3 | import ( 4 | "fmt" 5 | "log" 6 | "net/http" 7 | 8 | "github.com/gorilla/mux" 9 | "github.com/jinzhu/gorm" 10 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/config" 11 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/handler" 12 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/model" 13 | ) 14 | 15 | // App has router and db instances 16 | type App struct { 17 | Router *mux.Router 18 | DB *gorm.DB 19 | } 20 | 21 | // App initialize with predefined configuration 22 | func (a *App) Initialize(config *config.Config) { 23 | dbURI := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", 24 | config.DB.Username, 25 | config.DB.Password, 26 | config.DB.Host, 27 | config.DB.Port, 28 | config.DB.Name) 29 | 30 | db, err := gorm.Open("postgres", dbURI) 31 | if err != nil { 32 | log.Fatal("Could not connect to database: ", err) 33 | } 34 | 35 | a.DB = model.DBMigrate(db) 36 | a.Router = mux.NewRouter() 37 | a.setRouters() 38 | } 39 | 40 | // Set all required routers 41 | func (a *App) setRouters() { 42 | a.Get("/users", a.GetAllUsers) 43 | a.Post("/users", a.CreateUser) 44 | a.Get("/users/{title}", a.GetUser) 45 | a.Put("/users/{title}", a.UpdateUser) 46 | a.Delete("/users/{title}", a.DeleteUser) 47 | a.Put("/users/{title}/disable", a.DisableUser) 48 | a.Put("/users/{title}/enable", a.EnableUser) 49 | } 50 | 51 | // Wrap the router for GET method 52 | func (a *App) Get(path string, f func(w http.ResponseWriter, r *http.Request)) { 53 | a.Router.HandleFunc(path, f).Methods("GET") 54 | } 55 | 56 | // Wrap the router for POST method 57 | func (a *App) Post(path string, f func(w http.ResponseWriter, r *http.Request)) { 58 | a.Router.HandleFunc(path, f).Methods("POST") 59 | } 60 | 61 | // Wrap the router for PUT method 62 | func (a *App) Put(path string, f func(w http.ResponseWriter, r *http.Request)) { 63 | a.Router.HandleFunc(path, f).Methods("PUT") 64 | } 65 | 66 | // Wrap the router for DELETE method 67 | func (a *App) Delete(path string, f func(w http.ResponseWriter, r *http.Request)) { 68 | a.Router.HandleFunc(path, f).Methods("DELETE") 69 | } 70 | 71 | // Handlers to manage User Data 72 | func (a *App) GetAllUsers(w http.ResponseWriter, r *http.Request) { 73 | handler.GetAllUsers(a.DB, w, r) 74 | } 75 | 76 | func (a *App) CreateUser(w http.ResponseWriter, r *http.Request) { 77 | handler.CreateUser(a.DB, w, r) 78 | } 79 | 80 | func (a *App) GetUser(w http.ResponseWriter, r *http.Request) { 81 | handler.GetUser(a.DB, w, r) 82 | } 83 | 84 | func (a *App) UpdateUser(w http.ResponseWriter, r *http.Request) { 85 | handler.UpdateUser(a.DB, w, r) 86 | } 87 | 88 | func (a *App) DeleteUser(w http.ResponseWriter, r *http.Request) { 89 | handler.DeleteUser(a.DB, w, r) 90 | } 91 | 92 | func (a *App) DisableUser(w http.ResponseWriter, r *http.Request) { 93 | handler.DisableUser(a.DB, w, r) 94 | } 95 | 96 | func (a *App) EnableUser(w http.ResponseWriter, r *http.Request) { 97 | handler.EnableUser(a.DB, w, r) 98 | } 99 | 100 | // Run the app on it's router 101 | func (a *App) Run(host string) { 102 | log.Fatal(http.ListenAndServe(host, a.Router)) 103 | } 104 | 
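Небольшой набросок локальной проверки сервиса, описанного выше (предполагается, что локально установлены Docker и Go, команды выполняются из каталога app, а имя контейнера users-db выбрано условно; значения переменных взяты из config.go и манифестов kube/postgres):

```bash
# Поднимаем временный Postgres с теми же пользователем, паролем и базой, что и в kube/postgres
docker run -d --name users-db -p 5432:5432 \
  -e POSTGRES_USER=app -e POSTGRES_PASSWORD=supersecretpassword -e POSTGRES_DB=users postgres:10.13

# Передаём параметры подключения через переменные окружения, которые читает config.go, и запускаем сервис
export DB_HOST=127.0.0.1 DB_PORT=5432 DB_USER=app DB_PASSWORD=supersecretpassword DB_NAME=users
go run .

# В соседнем терминале проверяем маршруты, зарегистрированные в setRouters
curl -s -X POST localhost:8000/users -d '{"name": "Vasiya", "age": 34, "city": "Vladivostok"}'
curl -s localhost:8000/users
```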
-------------------------------------------------------------------------------- /practice/8.ci-cd/app/config/config.go: -------------------------------------------------------------------------------- 1 | package config 2 | 3 | import "os" 4 | 5 | type Config struct { 6 | DB *DBConfig 7 | } 8 | 9 | type DBConfig struct { 10 | Host string 11 | Port string 12 | Username string 13 | Password string 14 | Name string 15 | } 16 | 17 | func GetConfig() *Config { 18 | return &Config{ 19 | DB: &DBConfig{ 20 | Host: os.Getenv("DB_HOST"), 21 | Port: os.Getenv("DB_PORT"), 22 | Username: os.Getenv("DB_USER"), 23 | Password: os.Getenv("DB_PASSWORD"), 24 | Name: os.Getenv("DB_NAME"), 25 | }, 26 | } 27 | } 28 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/go.mod: -------------------------------------------------------------------------------- 1 | module github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app 2 | 3 | go 1.14 4 | 5 | require ( 6 | github.com/gorilla/mux v1.7.4 7 | github.com/jinzhu/gorm v1.9.15 8 | ) 9 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/go.sum: -------------------------------------------------------------------------------- 1 | github.com/PuerkitoBio/goquery v1.5.1/go.mod h1:GsLWisAFVj4WgDibEWF4pvYnkVQBpKBKeU+7zCJoLcc= 2 | github.com/andybalholm/cascadia v1.1.0/go.mod h1:GsXiBklL0woXo1j/WYWtSYYC4ouU9PqHO0sqidkEA4Y= 3 | github.com/denisenkom/go-mssqldb v0.0.0-20191124224453-732737034ffd/go.mod h1:xbL0rPBG9cCiLr28tMa8zpbdarY27NDyej4t/EjAShU= 4 | github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5/go.mod h1:a2zkGnVExMxdzMo3M0Hi/3sEU+cWnZpSni0O6/Yb/P0= 5 | github.com/go-sql-driver/mysql v1.5.0 h1:ozyZYNQW3x3HtqT1jira07DN2PArx2v7/mN66gGcHOs= 6 | github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= 7 | github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0= 8 | github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc= 9 | github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So= 10 | github.com/jinzhu/gorm v1.9.15 h1:OdR1qFvtXktlxk73XFYMiYn9ywzTwytqe4QkuMRqc38= 11 | github.com/jinzhu/gorm v1.9.15/go.mod h1:G3LB3wezTOWM2ITLzPxEXgSkOXAntiLHS7UdBefADcs= 12 | github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E= 13 | github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc= 14 | github.com/jinzhu/now v1.0.1/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8= 15 | github.com/lib/pq v1.1.1 h1:sJZmqHoEaY7f+NPP8pgLB/WxulyR3fewgCM2qaSlBb4= 16 | github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= 17 | github.com/mattn/go-sqlite3 v1.14.0/go.mod h1:JIl7NbARA7phWnGvh0LKTyg7S9BA+6gx71ShQilpsus= 18 | github.com/pauljamm/geekbrains-conteinerization v0.0.0-20200718135033-222e8d0ee23a h1:JlT+jb1bJMZAU0MRcc4XMYUyVz8eo6qwhCET5gdl+r0= 19 | golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 20 | golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 21 | golang.org/x/crypto v0.0.0-20191205180655-e7c4368fe9dd/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= 22 | golang.org/x/net v0.0.0-20180218175443-cbe0f9307d01/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= 23 | golang.org/x/net 
v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= 24 | golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= 25 | golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= 26 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 27 | golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 28 | golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 29 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 30 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/handler/common.go: -------------------------------------------------------------------------------- 1 | package handler 2 | 3 | import ( 4 | "encoding/json" 5 | "net/http" 6 | ) 7 | 8 | // respondJSON makes the response with payload as json format 9 | func respondJSON(w http.ResponseWriter, status int, payload interface{}) { 10 | response, err := json.Marshal(payload) 11 | if err != nil { 12 | w.WriteHeader(http.StatusInternalServerError) 13 | w.Write([]byte(err.Error())) 14 | return 15 | } 16 | w.Header().Set("Content-Type", "application/json") 17 | w.WriteHeader(status) 18 | w.Write([]byte(response)) 19 | } 20 | 21 | // respondError makes the error response with payload as json format 22 | func respondError(w http.ResponseWriter, code int, message string) { 23 | respondJSON(w, code, map[string]string{"error": message}) 24 | } 25 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/handler/users.go: -------------------------------------------------------------------------------- 1 | package handler 2 | 3 | import ( 4 | "encoding/json" 5 | "net/http" 6 | 7 | "github.com/gorilla/mux" 8 | "github.com/jinzhu/gorm" 9 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/model" 10 | ) 11 | 12 | func GetAllUsers(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 13 | users := []model.User{} 14 | db.Find(&users) 15 | respondJSON(w, http.StatusOK, users) 16 | } 17 | 18 | func CreateUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 19 | user := model.User{} 20 | 21 | decoder := json.NewDecoder(r.Body) 22 | if err := decoder.Decode(&user); err != nil { 23 | respondError(w, http.StatusBadRequest, err.Error()) 24 | return 25 | } 26 | defer r.Body.Close() 27 | 28 | if err := db.Save(&user).Error; err != nil { 29 | respondError(w, http.StatusInternalServerError, err.Error()) 30 | return 31 | } 32 | respondJSON(w, http.StatusCreated, user) 33 | } 34 | 35 | func GetUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 36 | vars := mux.Vars(r) 37 | 38 | name := vars["name"] 39 | user := getUserOr404(db, name, w, r) 40 | if user == nil { 41 | return 42 | } 43 | respondJSON(w, http.StatusOK, user) 44 | } 45 | 46 | func UpdateUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 47 | vars := mux.Vars(r) 48 | 49 | name := vars["name"] 50 | user := getUserOr404(db, name, w, r) 51 | if user == nil { 52 | return 53 | } 54 | 55 | decoder := json.NewDecoder(r.Body) 56 | if err := decoder.Decode(&user); err != nil { 57 | respondError(w, http.StatusBadRequest, err.Error()) 58 | return 59 | } 60 | defer r.Body.Close() 61 | 62 | if err := db.Save(&user).Error; err != nil { 63 | respondError(w, 
http.StatusInternalServerError, err.Error()) 64 | return 65 | } 66 | respondJSON(w, http.StatusOK, user) 67 | } 68 | 69 | func DeleteUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 70 | vars := mux.Vars(r) 71 | 72 | name := vars["name"] 73 | user := getUserOr404(db, name, w, r) 74 | if user == nil { 75 | return 76 | } 77 | if err := db.Delete(&user).Error; err != nil { 78 | respondError(w, http.StatusInternalServerError, err.Error()) 79 | return 80 | } 81 | respondJSON(w, http.StatusNoContent, nil) 82 | } 83 | 84 | func DisableUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 85 | vars := mux.Vars(r) 86 | 87 | name := vars["name"] 88 | user := getUserOr404(db, name, w, r) 89 | if user == nil { 90 | return 91 | } 92 | user.Disable() 93 | if err := db.Save(&user).Error; err != nil { 94 | respondError(w, http.StatusInternalServerError, err.Error()) 95 | return 96 | } 97 | respondJSON(w, http.StatusOK, user) 98 | } 99 | 100 | func EnableUser(db *gorm.DB, w http.ResponseWriter, r *http.Request) { 101 | vars := mux.Vars(r) 102 | 103 | name := vars["name"] 104 | user := getUserOr404(db, name, w, r) 105 | if user == nil { 106 | return 107 | } 108 | user.Enable() 109 | if err := db.Save(&user).Error; err != nil { 110 | respondError(w, http.StatusInternalServerError, err.Error()) 111 | return 112 | } 113 | respondJSON(w, http.StatusOK, user) 114 | } 115 | 116 | // getUserOr404 gets a user instance if exists, or respond the 404 error otherwise 117 | func getUserOr404(db *gorm.DB, name string, w http.ResponseWriter, r *http.Request) *model.User { 118 | user := model.User{} 119 | if err := db.First(&user, model.User{Name: name}).Error; err != nil { 120 | respondError(w, http.StatusNotFound, err.Error()) 121 | return nil 122 | } 123 | return &user 124 | } 125 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: geekbrains 5 | spec: 6 | progressDeadlineSeconds: 300 7 | replicas: 2 8 | selector: 9 | matchLabels: 10 | app: app 11 | template: 12 | metadata: 13 | labels: 14 | app: app 15 | spec: 16 | containers: 17 | - name: app 18 | image: nginx:1.12 # это просто плэйсхолдер 19 | env: 20 | - name: DB_HOST 21 | value: database 22 | - name: DB_PORT 23 | value: "5432" 24 | - name: DB_USER 25 | value: app 26 | - name: DB_PASSWORD 27 | valueFrom: 28 | secretKeyRef: 29 | key: db-password 30 | name: app 31 | - name: DB_NAME 32 | value: users 33 | resources: 34 | limits: 35 | memory: "128Mi" 36 | cpu: "100m" 37 | ports: 38 | - containerPort: 8000 39 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/ingress.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: networking.k8s.io/v1 3 | kind: Ingress 4 | metadata: 5 | name: geekbrains 6 | annotations: 7 | kubernetes.io/ingress.class: nginx 8 | nginx.ingress.kubernetes.io/rewrite-target: / 9 | spec: 10 | rules: 11 | - host: 12 | http: 13 | paths: 14 | - path: "/users" 15 | pathType: Prefix 16 | backend: 17 | service: 18 | name: geekbrains 19 | port: 20 | number: 8000 21 | 22 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/postgres/secret.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | 
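# stringData принимает значение в открытом виде: при создании объекта API-сервер сохранит его в .data в base64.
# Ключ db-password читают Deployment приложения и StatefulSet postgres через secretKeyRef.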
stringData: 3 | db-password: supersecretpassword 4 | kind: Secret 5 | metadata: 6 | name: app 7 | type: Opaque 8 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/postgres/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: database 5 | spec: 6 | ports: 7 | - port: 5432 8 | targetPort: 5432 9 | selector: 10 | app: database 11 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/postgres/statefulset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: apps/v1 3 | kind: StatefulSet 4 | metadata: 5 | name: database 6 | spec: 7 | replicas: 1 8 | serviceName: database 9 | selector: 10 | matchLabels: 11 | app: database 12 | template: 13 | metadata: 14 | labels: 15 | app: database 16 | spec: 17 | containers: 18 | - image: postgres:10.13 19 | name: postgres 20 | env: 21 | - name: POSTGRES_USER 22 | value: app 23 | - name: POSTGRES_DB 24 | value: users 25 | - name: PGDATA 26 | value: /var/lib/postgresql/data/pgdata 27 | - name: POSTGRES_PASSWORD 28 | valueFrom: 29 | secretKeyRef: 30 | name: app 31 | key: db-password 32 | ports: 33 | - containerPort: 5432 34 | protocol: TCP 35 | volumeMounts: 36 | - name: data 37 | mountPath: /var/lib/postgresql/data 38 | volumeClaimTemplates: 39 | - metadata: 40 | name: data 41 | spec: 42 | accessModes: ["ReadWriteOnce"] 43 | resources: 44 | requests: 45 | storage: 2Gi 46 | storageClassName: csi-ceph-hdd-ms1 47 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/kube/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: geekbrains 5 | spec: 6 | selector: 7 | app: app 8 | ports: 9 | - port: 8000 10 | targetPort: 8000 11 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/main.go: -------------------------------------------------------------------------------- 1 | package main 2 | 3 | import ( 4 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/app" 5 | "github.com/pauljamm/geekbrains-conteinerization/practice/8.ci-cd/app/config" 6 | ) 7 | 8 | func main() { 9 | config := config.GetConfig() 10 | 11 | app := &app.App{} 12 | app.Initialize(config) 13 | app.Run(":8000") 14 | } 15 | -------------------------------------------------------------------------------- /practice/8.ci-cd/app/model/model.go: -------------------------------------------------------------------------------- 1 | package model 2 | 3 | import ( 4 | "github.com/jinzhu/gorm" 5 | _ "github.com/jinzhu/gorm/dialects/postgres" 6 | ) 7 | 8 | type User struct { 9 | gorm.Model 10 | Name string `gorm:"unique" json:"name"` 11 | City string `json:"city"` 12 | Age int `json:"age"` 13 | Status bool `json:"status"` 14 | } 15 | 16 | func (e *User) Disable() { 17 | e.Status = false 18 | } 19 | 20 | func (p *User) Enable() { 21 | p.Status = true 22 | } 23 | 24 | // DBMigrate will create and migrate the tables, and then make the some relationships if necessary 25 | func DBMigrate(db *gorm.DB) *gorm.DB { 26 | db.AutoMigrate(&User{}) 27 | return db 28 | } 29 | -------------------------------------------------------------------------------- /practice/8.ci-cd/gitlab-runner/gitlab-runner.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: gitlab-runner 6 | namespace: gitlab 7 | --- 8 | apiVersion: v1 9 | kind: Secret 10 | metadata: 11 | name: gitlab-runner 12 | namespace: gitlab 13 | type: Opaque 14 | stringData: 15 | runner-registration-token: "Enter your registration token here" 16 | runner-token: "" 17 | --- 18 | apiVersion: v1 19 | kind: ConfigMap 20 | metadata: 21 | name: gitlab-runner 22 | namespace: gitlab 23 | data: 24 | entrypoint: | 25 | #!/bin/bash 26 | set -e 27 | 28 | mkdir -p /home/gitlab-runner/.gitlab-runner/ 29 | 30 | cp /configmaps/config.toml /home/gitlab-runner/.gitlab-runner/ 31 | 32 | # Set up environment variables for cache 33 | if [[ -f /secrets/accesskey && -f /secrets/secretkey ]]; then 34 | export CACHE_S3_ACCESS_KEY=$(cat /secrets/accesskey) 35 | export CACHE_S3_SECRET_KEY=$(cat /secrets/secretkey) 36 | fi 37 | 38 | if [[ -f /secrets/gcs-applicaton-credentials-file ]]; then 39 | export GOOGLE_APPLICATION_CREDENTIALS="/secrets/gcs-applicaton-credentials-file" 40 | elif [[ -f /secrets/gcs-application-credentials-file ]]; then 41 | export GOOGLE_APPLICATION_CREDENTIALS="/secrets/gcs-application-credentials-file" 42 | else 43 | if [[ -f /secrets/gcs-access-id && -f /secrets/gcs-private-key ]]; then 44 | export CACHE_GCS_ACCESS_ID=$(cat /secrets/gcs-access-id) 45 | # echo -e used to make private key multiline (in google json auth key private key is oneline with \n) 46 | export CACHE_GCS_PRIVATE_KEY=$(echo -e $(cat /secrets/gcs-private-key)) 47 | fi 48 | fi 49 | 50 | if [[ -f /secrets/azure-account-name && -f /secrets/azure-account-key ]]; then 51 | export CACHE_AZURE_ACCOUNT_NAME=$(cat /secrets/azure-account-name) 52 | export CACHE_AZURE_ACCOUNT_KEY=$(cat /secrets/azure-account-key) 53 | fi 54 | 55 | if [[ -f /secrets/runner-registration-token ]]; then 56 | export REGISTRATION_TOKEN=$(cat /secrets/runner-registration-token) 57 | fi 58 | 59 | if [[ -f /secrets/runner-token ]]; then 60 | export CI_SERVER_TOKEN=$(cat /secrets/runner-token) 61 | fi 62 | 63 | # Validate this also at runtime in case the user has set a custom secret 64 | if [[ ! -z "$CI_SERVER_TOKEN" && "1" -ne "1" ]]; then 65 | echo "Using a runner token with more than 1 replica is not supported." 66 | exit 1 67 | fi 68 | 69 | # Register the runner 70 | if ! sh /configmaps/register-the-runner; then 71 | exit 1 72 | fi 73 | 74 | # Run pre-entrypoint-script 75 | if ! bash /configmaps/pre-entrypoint-script; then 76 | exit 1 77 | fi 78 | 79 | # Start the runner 80 | exec /entrypoint run --user=gitlab-runner \ 81 | --working-directory=/home/gitlab-runner 82 | 83 | config.toml: | 84 | concurrent = 10 85 | check_interval = 30 86 | log_level = "info" 87 | 88 | 89 | config.template.toml: | 90 | [[runners]] 91 | [runners.kubernetes] 92 | namespace = "gitlab" 93 | image = "ubuntu:16.04" 94 | 95 | 96 | register-the-runner: | 97 | #!/bin/bash 98 | MAX_REGISTER_ATTEMPTS=30 99 | 100 | for i in $(seq 1 "${MAX_REGISTER_ATTEMPTS}"); do 101 | echo "Registration attempt ${i} of ${MAX_REGISTER_ATTEMPTS}" 102 | /entrypoint register \ 103 | --template-config /configmaps/config.template.toml \ 104 | --non-interactive 105 | 106 | retval=$? 
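    # Break out of the loop on a successful registration; otherwise retry every 5 seconds
    # until MAX_REGISTER_ATTEMPTS is exhausted.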
107 | 108 | if [ ${retval} = 0 ]; then 109 | break 110 | elif [ ${i} = ${MAX_REGISTER_ATTEMPTS} ]; then 111 | exit 1 112 | fi 113 | 114 | sleep 5 115 | done 116 | 117 | exit 0 118 | 119 | check-live: | 120 | #!/bin/bash 121 | if /usr/bin/pgrep -f .*register-the-runner; then 122 | exit 0 123 | elif /usr/bin/pgrep gitlab.*runner; then 124 | exit 0 125 | else 126 | exit 1 127 | fi 128 | 129 | pre-entrypoint-script: | 130 | --- 131 | apiVersion: rbac.authorization.k8s.io/v1 132 | kind: Role 133 | metadata: 134 | name: gitlab-runner 135 | namespace: gitlab 136 | rules: 137 | - apiGroups: [""] 138 | resources: ["*"] 139 | verbs: ["*"] 140 | --- 141 | apiVersion: rbac.authorization.k8s.io/v1 142 | kind: RoleBinding 143 | metadata: 144 | name: gitlab-runner 145 | namespace: gitlab 146 | roleRef: 147 | apiGroup: rbac.authorization.k8s.io 148 | kind: Role 149 | name: gitlab-runner 150 | subjects: 151 | - kind: ServiceAccount 152 | name: gitlab-runner 153 | namespace: gitlab 154 | --- 155 | apiVersion: apps/v1 156 | kind: Deployment 157 | metadata: 158 | name: gitlab-runner 159 | namespace: gitlab 160 | spec: 161 | replicas: 1 162 | revisionHistoryLimit: 10 163 | selector: 164 | matchLabels: 165 | app: gitlab-runner 166 | template: 167 | metadata: 168 | labels: 169 | app: gitlab-runner 170 | spec: 171 | securityContext: 172 | fsGroup: 65533 173 | runAsUser: 100 174 | terminationGracePeriodSeconds: 3600 175 | serviceAccountName: gitlab-runner 176 | containers: 177 | - name: gitlab-runner 178 | image: registry.gitlab.com/gitlab-org/gitlab-runner:alpine-v15.3.0 179 | imagePullPolicy: "IfNotPresent" 180 | securityContext: 181 | allowPrivilegeEscalation: false 182 | capabilities: 183 | drop: 184 | - ALL 185 | privileged: false 186 | readOnlyRootFilesystem: false 187 | runAsNonRoot: true 188 | lifecycle: 189 | preStop: 190 | exec: 191 | command: ["/entrypoint", "unregister", "--config=/home/gitlab-runner/.gitlab-runner/config.toml"] 192 | lifecycle: 193 | preStop: 194 | exec: 195 | command: ["/entrypoint", "unregister", "--all-runners"] 196 | command: ["/usr/bin/dumb-init", "--", "/bin/bash", "/configmaps/entrypoint"] 197 | env: 198 | - name: CI_SERVER_URL 199 | value: "https://gitlab.com/" 200 | - name: CLONE_URL 201 | value: "" 202 | - name: RUNNER_EXECUTOR 203 | value: "kubernetes" 204 | - name: REGISTER_LOCKED 205 | value: "true" 206 | - name: RUNNER_TAG_LIST 207 | value: "" 208 | livenessProbe: 209 | exec: 210 | command: ["/bin/bash", "/configmaps/check-live"] 211 | initialDelaySeconds: 60 212 | timeoutSeconds: 1 213 | periodSeconds: 10 214 | successThreshold: 1 215 | failureThreshold: 3 216 | readinessProbe: 217 | exec: 218 | command: ["/usr/bin/pgrep","gitlab.*runner"] 219 | initialDelaySeconds: 10 220 | timeoutSeconds: 1 221 | periodSeconds: 10 222 | successThreshold: 1 223 | failureThreshold: 3 224 | ports: 225 | - name: "metrics" 226 | containerPort: 9252 227 | volumeMounts: 228 | - name: projected-secrets 229 | mountPath: /secrets 230 | - name: etc-gitlab-runner 231 | mountPath: /home/gitlab-runner/.gitlab-runner 232 | - name: configmaps 233 | mountPath: /configmaps 234 | resources: 235 | {} 236 | volumes: 237 | - name: runner-secrets 238 | emptyDir: 239 | medium: "Memory" 240 | - name: etc-gitlab-runner 241 | emptyDir: 242 | medium: "Memory" 243 | - name: projected-secrets 244 | projected: 245 | sources: 246 | - secret: 247 | name: "gitlab-runner" 248 | items: 249 | - key: runner-registration-token 250 | path: runner-registration-token 251 | - key: runner-token 252 | path: runner-token 253 | - 
name: configmaps 254 | configMap: 255 | name: gitlab-runner 256 | --------------------------------------------------------------------------------
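Для проверки того, что раннер из манифеста выше запустился и зарегистрировался, можно выполнить примерно следующее (набросок; предполагается, что манифест применён в нэймспэйс gitlab, как описано в README к лекции 8):

```bash
# Под gitlab-runner должен перейти в статус Running
kubectl get pods --namespace gitlab

# В логах деплоймента должна появиться запись о регистрации раннера
kubectl logs --namespace gitlab deployment/gitlab-runner
```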