├── .gitignore ├── Kubernetes ├── 0-Archived-Kubernetes-Explained-through-an-example-usecase │ ├── 00-Our-goal.md │ ├── 01-Containerizing-our-app.md │ ├── 02-Installing-minikube.md │ ├── 03-calling-kubernetes-api-using-curl │ ├── 04-running-service-as-pod.md │ └── 05-accessing-the-service.md ├── 01-Course Overview │ └── README.md ├── 02-Containerizing a golang app │ └── README.md └── README.md ├── README.md ├── demo-apps └── nodejsapp │ ├── README.md │ ├── deploy.sh │ ├── index.js │ ├── package-lock.json │ └── package.json ├── episodes ├── 0-roadmap.md ├── 1-how-does-the-internet-work.md ├── 1.1-short-term-roadmap.md ├── 19-vagrant-intro.md ├── 2-unix-linux-installing-debian-10-on-virtualbox.md ├── 20-Introduction-to-git.md ├── 21-setting-up-git-server-vagrant-ansible.md ├── 22-nodejs-app-deployment-ansible.md ├── 23-how-does-ssl-work.md ├── 24-securing-nginx-free-ssl-letsencrypt.md ├── 25-devops-ci-cd.md ├── 26-jenkins-install-first-pipeline.md ├── 27-create-real-life-end-to-end-jenkins-pipeline.md ├── 28-setting-up-wordpress-nginx-php-fpm.md ├── 29-recap.md ├── 30-monitoring-1-infrastructure-monitoring-intro.md ├── 31-monitoring-2-installing-sensu.md ├── 32-monitoring-3-resource-usage-monitoring.md ├── 33-monitoring-4-webserver-monitoring.md ├── 34-monitoring-5-getting-email-alerts.md ├── 35-monitoring-6-using-sensu-api.md ├── 36-monitoring-7-sensu-go-production-considerations.md ├── 5-file-descriptors-standard-out-err-pipe-file-system.md ├── containers │ ├── 01-Introduction-to-containers.md │ ├── 02-a-practical-example.md │ ├── 03-fundamentals-of-containers.md │ ├── README.md │ └── diagrams │ │ ├── containers-vs-virtualmachines.excalidraw.md │ │ ├── containers-vs-virtualmachines.png │ │ └── cpu-usage-container.png ├── google-cloud │ ├── 01-what-is-cloud.md │ ├── 02-launching-first-vm.md │ ├── 03-instance-templates-static-ip.md │ ├── 04-vpc-networks.md │ ├── 05-disk-snapshots-and-images.md │ ├── 06-creating-and-attaching-disks.md │ ├── 07-setting-up-gcloud-cli.md 
│ └── img │ │ ├── cloud.jpg │ │ ├── gcp-disk-image.png │ │ ├── gcp-regions.png │ │ └── google-datacenter.jpg ├── img │ ├── 24.png │ ├── how-ssl-works.png │ ├── le-cloudflare-dns-txt-record.png │ ├── ssl-info.png │ ├── vpc-nw-example.jpg │ ├── vscode-github-pr.png │ ├── waterfall-model.png │ └── works-on-my-machine.jpg └── setting-ssl-locally-with-le.md ├── infrastructure ├── README.md ├── ansible │ ├── README.md │ ├── hosts │ ├── playbook.yml │ └── roles │ │ ├── common │ │ ├── README.md │ │ └── tasks │ │ │ └── main.yml │ │ ├── git-server │ │ ├── files │ │ │ └── ssh_keys │ │ │ │ └── mansoor │ │ └── tasks │ │ │ └── main.yml │ │ ├── nginx-common │ │ └── tasks │ │ │ └── main.yml │ │ ├── nginx-nodejsapp │ │ ├── handlers │ │ │ └── main.yml │ │ ├── tasks │ │ │ └── main.yml │ │ ├── templates │ │ │ └── vhost.conf.j2 │ │ └── vars │ │ │ └── main.yml │ │ └── nodejs-common │ │ ├── tasks │ │ └── main.yml │ │ └── vars │ │ └── main.yml └── vagrant │ └── apps │ ├── git-server │ └── Vagrantfile │ ├── jenkins │ └── Vagrantfile │ ├── mysql-server │ └── Vagrantfile │ ├── nodejsapp │ ├── README.md │ └── Vagrantfile │ ├── sensu-server │ └── Vagrantfile │ └── wordpress │ └── Vagrantfile └── scripts └── sensu_api_client.py /.gitignore: -------------------------------------------------------------------------------- 1 | .vagrant/ 2 | .vscode 3 | node_modules/ 4 | *.swp 5 | .DS_Store 6 | -------------------------------------------------------------------------------- /Kubernetes/0-Archived-Kubernetes-Explained-through-an-example-usecase/00-Our-goal.md: -------------------------------------------------------------------------------- 1 | # Our Goal 2 | 3 | To learn what Kubernetes is while trying to solve a real world problem 4 | 5 | ## The problem 6 | 7 | So, we have a simple application written in Golang, we want to run multiple copies of it 8 | on a server. 9 | 10 | For this demo, we will use [THIS go-hello-world](https://github.com/MansoorMajeed/go-hello-world) demo app. 
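To make "multiple copies" concrete before bringing in Kubernetes, here is a rough sketch of doing it with plain Docker Compose. This file is not part of the repo, and the image tag `go-hello-world:0.1` is an assumption (it matches the build command used later in this series):

```yaml
# Hypothetical compose.yaml -- a sketch only, not used anywhere in this repo.
services:
  hello:
    image: go-hello-world:0.1   # assumed tag; built from the demo app's Dockerfile
    deploy:
      replicas: 3               # run three copies of the app
    ports:
      - "8080"                  # publish container port 8080 on a random host port per copy
```

Running `docker compose up -d` against a file like this would start three containers, each reachable on a different random host port. Wiring those ports to a load balancer, replacing copies that die, and changing the copy count by hand is exactly the work Kubernetes takes over.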
11 | So, we want to run this app with multiple copies and scale it easily. This way, if we have more traffic, we 12 | can easily increase the number of copies of our app (manually first and automatically afterwards). 13 | Additionally, this provides fault tolerance, so that even if one copy of our app goes down, it does not affect the users. -------------------------------------------------------------------------------- /Kubernetes/0-Archived-Kubernetes-Explained-through-an-example-usecase/01-Containerizing-our-app.md: -------------------------------------------------------------------------------- 1 | # Containerizing the app 2 | 3 | We will create a simple Dockerfile for our app which compiles the binary and copies it into a scratch image 4 | 5 | https://github.com/MansoorMajeed/go-hello-world/blob/main/Dockerfile 6 | 7 | 8 | -------------------------------------------------------------------------------- /Kubernetes/0-Archived-Kubernetes-Explained-through-an-example-usecase/02-Installing-minikube.md: -------------------------------------------------------------------------------- 1 | # Installing Minikube 2 | 3 | Minikube runs a local Kubernetes cluster 4 | 5 | 1. Install and start Docker Desktop for your operating system 6 | 2. 
Install minikube https://minikube.sigs.k8s.io/docs/start 7 | 8 | ## Install Kubectl 9 | 10 | Follow instructions https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/ 11 | 12 | ## Install JQ 13 | 14 | ``` 15 | sudo apt update 16 | sudo apt install jq 17 | ``` 18 | 19 | ## Taking a look around 20 | 21 | List all pods across all namespaces 22 | ``` 23 | mansoor@debian:~$ kubectl get pods -A 24 | NAMESPACE NAME READY STATUS RESTARTS AGE 25 | kube-system coredns-668d6bf9bc-85c62 1/1 Running 1 (17m ago) 21m 26 | kube-system etcd-minikube 1/1 Running 1 (17m ago) 21m 27 | kube-system kube-apiserver-minikube 1/1 Running 1 (17m ago) 21m 28 | kube-system kube-controller-manager-minikube 1/1 Running 1 (17m ago) 21m 29 | kube-system kube-proxy-8cp4h 1/1 Running 1 (17m ago) 21m 30 | kube-system kube-scheduler-minikube 1/1 Running 1 (17m ago) 21m 31 | kube-system storage-provisioner 1/1 Running 2 (17m ago) 21m 32 | mansoor@debian:~$ 33 | ``` 34 | 35 | ## Take a look at docker containers 36 | 37 | ``` 38 | mansoor@debian:~$ docker ps 39 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS 40 | NAMES 41 | 002dd4272afb gcr.io/k8s-minikube/kicbase:v0.0.46 "/usr/local/bin/entr…" 3 days ago Up 26 hours 127.0.0.1:32768->22/tcp, 127.0.0.1:32769->2376/tcp, 127.0.0.1:32770->5000/tcp, 127.0.0.1:32771->8443/tcp, 127.0.0.1:32772->32443/tcp minikube 42 | mansoor@debian:~$ 43 | 44 | 45 | mansoor@debian:~$ docker inspect minikube 46 | [ 47 | { 48 | "Id": "002dd4272afb525456889e2538780878c7d7aa892dadd4d8bfb1141d4a0e9ff2", 49 | "Created": "2025-01-21T13:04:40.914867389Z", 50 | "Path": "/usr/local/bin/entrypoint", 51 | "Args": [ 52 | "/sbin/init" 53 | ], 54 | "State": { 55 | "Status": "running", 56 | "Running": true, 57 | "Paused": false, 58 | "Restarting": false, 59 | "OOMKilled": false, 60 | "Dead": false, 61 | "Pid": 1125, 62 | "ExitCode": 0, 63 | "Error": "", 64 | "StartedAt": "2025-01-24T01:04:22.738561841Z", 65 | "FinishedAt": "2025-01-24T00:56:58.841429116Z" 66 | }, 67 | "Image": 
"sha256:e72c4cbe9b296d8a58fbcae1a7b969fa1cee662cd7b86f2d4efc5e146519cf0a", 68 | ``` 69 | 70 | 71 | We can see that the container's process id on the Linux system is 1125 72 | 73 | and we do see a process called `/sbin/init` running at that PID 74 | ``` 75 | mansoor@debian:~$ ps aux|grep 1125 76 | root 1125 0.0 0.3 19892 12568 ? Ss Jan23 0:02 /sbin/init 77 | ``` 78 | 79 | 80 | Enter the namespace for that process and look at the processes inside that namespace 81 | ``` 82 | mansoor@debian:~$ sudo nsenter --target 1125 --all 83 | root@minikube:/# 84 | root@minikube:/# 85 | root@minikube:/# ps aux 86 | USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND 87 | root 1 0.0 0.3 19892 12568 ? Ss Jan24 0:02 /sbin/init 88 | root 98 0.0 0.2 23536 9792 ? S /dev/null 29 | sudo apt-get update 30 | 31 | sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin 32 | 33 | sudo usermod -aG docker $USER 34 | 35 | newgrp docker 36 | ``` 37 | 38 | 39 | ## Write the Dockerfile 40 | 41 | Simplest dockerfile for a Golang app 42 | ``` 43 | FROM golang:latest AS builder 44 | 45 | WORKDIR /app 46 | 47 | COPY . . 48 | 49 | RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main . 50 | 51 | # Second stage: Create a smaller image 52 | FROM scratch 53 | 54 | # Copy the binary from the builder stage 55 | COPY --from=builder /app/main . 56 | 57 | EXPOSE 8080 58 | 59 | CMD ["./main"] 60 | ``` 61 | 62 | ## Build and run 63 | 64 | Commands to build the image 65 | ``` 66 | docker build -t go-hello-world:0.1 . 
67 | ``` 68 | 69 | and to run 70 | ``` 71 | docker run -p 8080:8080 go-hello-world:0.1 72 | ``` 73 | 74 | -------------------------------------------------------------------------------- /Kubernetes/README.md: -------------------------------------------------------------------------------- 1 | # WIP : Kubernetes From Scratch 2 | 3 | To be updated -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # DevOps From Scratch 2 | 3 | A video series for beginners 4 | Playlist is [HERE](https://www.youtube.com/playlist?list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14) 5 | 6 | 7 | ## WIP : Kubernetes From Scratch 8 | 9 | Check [Kubernetes/README.md](Kubernetes/README.md) 10 | 11 | ## Have a question? 12 | 13 | You can open an issue if you have something to discuss 14 | 15 | ## General 'how to' tutorials 16 | 17 | Nothing here (yet) 18 | 19 | 20 | ## DevOps From Scratch Episode Specific Notes and Tutorials 21 | 22 | [0. The Roadmap](episodes/0-roadmap.md) 23 | 24 | [1. How does the internet work](episodes/1-how-does-the-internet-work.md) 25 | 26 | [1.1 Short term Roadmap : Path to setting up a simple website](episodes/1.1-short-term-roadmap.md) 27 | 28 | [2. History of Unix, Linux, Installing Debian 10 on Virtualbox](episodes/2-unix-linux-installing-debian-10-on-virtualbox.md) 29 | 30 | [3. File Descriptors, Standard output/error, Pipe, Grep, File system hierarchy](episodes/5-file-descriptors-standard-out-err-pipe-file-system.md) 31 | 32 | [19. Managing VirtualMachines using Vagrant](episodes/19-vagrant-intro.md) 33 | 34 | [20. Introduction to Git](episodes/20-Introduction-to-git.md) 35 | 36 | [21. Setting up a git server using Ansible and Vagrant](episodes/21-setting-up-git-server-vagrant-ansible.md) 37 | 38 | [22. Deploying NodeJS app with Nginx Load balancer using Ansible](episodes/22-nodejs-app-deployment-ansible.md) 39 | 40 | [23. How does TLS/SSL work](episodes/23-how-does-ssl-work.md) 41 | 42 | [24. 
Securing an Nginx website with free let's encrypt SSL](episodes/24-securing-nginx-free-ssl-letsencrypt.md) 43 | 44 | [25. DevOps and Continuous Integration/Deployment/Delivery](episodes/25-devops-ci-cd.md) 45 | 46 | [26. Installing and setting up Jenkins - Simple Pipeline Intro](episodes/26-jenkins-install-first-pipeline.md) 47 | 48 | [27. Creating an end to end Jenkins pipeline for a NodeJS application](episodes/27-create-real-life-end-to-end-jenkins-pipeline.md) 49 | 50 | [28. Setting up WordPress using Nginx and PHP-FPM on Debian](episodes/28-setting-up-wordpress-nginx-php-fpm.md) 51 | 52 | [29. Recap](episodes/29-recap.md) 53 | 54 | ### VM Monitoring 55 | 56 | [30. VM Monitoring #1 - Introduction to monitoring](episodes/30-monitoring-1-infrastructure-monitoring-intro.md) 57 | 58 | [31. VM Monitoring #2 - Getting started with Sensu Go for monitoring](episodes/31-monitoring-2-installing-sensu.md) 59 | 60 | [32. VM Monitoring #3 - System Resource Monitoring](episodes/32-monitoring-3-resource-usage-monitoring.md) 61 | 62 | [33. VM Monitoring #4 - Monitoring a webserver](episodes/33-monitoring-4-webserver-monitoring.md) 63 | 64 | [34. VM Monitoring #5 - Receiving email alerts](episodes/34-monitoring-5-getting-email-alerts.md) 65 | 66 | [35. VM Monitoring #6 - Using Sensu API](episodes/35-monitoring-6-using-sensu-api.md) 67 | 68 | [36. VM Monitoring #7 - Sensu Go in Production](episodes/36-monitoring-7-sensu-go-production-considerations.md) 69 | 70 | ### Google Cloud 71 | 72 | [1. What is Cloud](episodes/google-cloud/01-what-is-cloud.md) 73 | 74 | [2. Simple website in Google Cloud](episodes/google-cloud/02-launching-first-vm.md) 75 | 76 | [3. Instance template and static IP](episodes/google-cloud/03-instance-templates-static-ip.md) 77 | 78 | [4. VPC Networks and firewall rules](episodes/google-cloud/04-vpc-networks.md) 79 | 80 | ## Videos 81 | 82 | ### Introduction 83 | 84 | [0. 
DevOps Roadmap](https://www.youtube.com/watch?v=adccZNseZm8&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=1) 85 | 86 | [1. How does the internet work](https://www.youtube.com/watch?v=SyPzQrUxmZc&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=2) 87 | 88 | ### Linux Fundamentals 89 | 90 | [2. History of Unix, Linux, Installing Debian 10 on VirtualBox](https://www.youtube.com/watch?v=vqLyxlcpTP4&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=3) 91 | 92 | [3. Secure Shell, Key exchange, key based authentication](https://www.youtube.com/watch?v=geotLvTpkUM&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=4) 93 | 94 | [4. Important Linux commands, Using man page, command redirection](https://www.youtube.com/watch?v=SzTpQOVSd6s&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=5) 95 | 96 | [5. File Descriptors, Standard output/error, Pipe, Grep, File system hierarchy](https://www.youtube.com/watch?v=dkyIHNWulqA&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=6) 97 | 98 | [6. File system continued, Environment variables, PATH variable](https://www.youtube.com/watch?v=j4EU5sGDW1g&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=7) 99 | 100 | [7. User Management and Permissions in Linux/Unix](https://www.youtube.com/watch?v=hzNF4R20iZY&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=8) 101 | 102 | ### Managing Packages, Editing Text, Managing Processes and Services 103 | 104 | [8. Package Management, Primer on Webservers, Static/Dynamic websites](https://www.youtube.com/watch?v=tLTTpcxSya8&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=9) 105 | 106 | [9. Managing services in Linux, Primer on Vim the text editor](https://www.youtube.com/watch?v=9MLoUtMbWMA&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=10) 107 | 108 | [10. Process Management, Web Server Debugging Primer 1 (ps, netstat)](https://www.youtube.com/watch?v=pUBJliknmk0&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=11) 109 | 110 | [11. 
Checking access to webserver using Netcat and Curl](https://www.youtube.com/watch?v=POz6u_0nK6E&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=12) 111 | 112 | ### Nginx Proxy, DNS, Static and Dynamic Websites 113 | 114 | [12. Configuring Nginx, VirtualHosting, /etc/hosts, Curl](https://www.youtube.com/watch?v=i6NHxKyGI7s&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=13) 115 | 116 | [13. How does DNS and domains work (demo), Hands on with managing domains](https://www.youtube.com/watch?v=pOoOVfh2lI4&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=14) 117 | 118 | [14. Setting up simple static site in Digitalocean, Some bonus nginx debugging](https://www.youtube.com/watch?v=kDcn9npjoPs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=15) 119 | 120 | [15. Simple Dynamic Website with NodeJS + Nginx, Proxy and Reverse Proxy](https://www.youtube.com/watch?v=6NC5V9gYANs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=16) 121 | 122 | ### Database servers 123 | 124 | [16. MySQL Intro for DevOps Engineers](https://www.youtube.com/watch?v=EfJEG0dHQpE&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=17) 125 | 126 | ### Our small infra 127 | 128 | [17. Let's build a full infrastructure locally](https://www.youtube.com/watch?v=tl3_o0-Myko&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=18) 129 | 130 | ### Version Control Using Git 131 | 132 | [20. Introduction to Git and Github](https://www.youtube.com/watch?v=uxE2Le64vHk&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=21) 133 | 134 | [21. Setting up a Git server locally using Ansible](https://www.youtube.com/watch?v=HCbc-m2CVVw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=22) 135 | 136 | ### Thinking DevOps 137 | 138 | [25. DevOps, Agile, Scrum, Continuous Integration/Deployment Explained](https://www.youtube.com/watch?v=8M1tER06fzs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=26) 139 | 140 | [18. 
Infrastructure as code, Configuration management, Intro to Ansible](https://www.youtube.com/watch?v=xT0K0k36pxU&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=19) 141 | 142 | [19. Managing Virtual Machines using Vagrant](https://www.youtube.com/watch?v=Vfoj_nu8cmg&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=20) 143 | 144 | [22. Deploying a NodeJS app with Nginx Load Balancer using Ansible](https://www.youtube.com/watch?v=rrlr3GYlZYw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=23) 145 | 146 | ### Jenkins 147 | 148 | [26. Jenkins #1 Installation in Linux and a simple pipeline](https://www.youtube.com/watch?v=ovyIh0Z2NZ0&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=27) 149 | 150 | [27. Jenkins #2 Fully Automated Production Ready NodeJS deployment](https://www.youtube.com/watch?v=KpAKgrBA8mY&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=28) 151 | 152 | ### SSL/TLS 153 | 154 | [23. How does TLS/SSL work](https://www.youtube.com/watch?v=pc5Xf9uuvwE&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=24) 155 | 156 | [24. Securing a website with a free SSL certificate from Let's Encrypt](https://www.youtube.com/watch?v=NRJIhc3aQn0&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=25) 157 | 158 | ### Monitoring in Virtual Machines 159 | 160 | [30. Monitoring #1 - Introduction to monitoring]() 161 | 162 | [31. Monitoring #2 - Setting up Sensu]() 163 | 164 | 165 | 166 | ## Videos, Categorized Again 167 | 168 | ### Setting up different applications (NodeJS, WordPress etc) 169 | 170 | Over the course of these videos, we have discussed setting up different types of web applications. 171 | Linking them here. Many of them are listed above as well. 172 | 173 | In the order of complexity 174 | 175 | [14. Setting up simple static site in Digitalocean, Some bonus nginx debugging](https://www.youtube.com/watch?v=kDcn9npjoPs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=15) 176 | 177 | [15. 
Simple Dynamic Website with NodeJS + Nginx, Proxy and Reverse Proxy](https://www.youtube.com/watch?v=6NC5V9gYANs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=16) 178 | 179 | [22. Deploying a NodeJS app with Nginx Load Balancer using Ansible](https://www.youtube.com/watch?v=rrlr3GYlZYw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=23) 180 | 181 | [27. Jenkins #2 Fully Automated Production Ready NodeJS deployment](https://www.youtube.com/watch?v=KpAKgrBA8mY&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=28) 182 | 183 | [28. Setting up WordPress using Nginx and PHP FPM](https://www.youtube.com/watch?v=BN8lMesmvPw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=29) 184 | 185 | ### Setting up DevOps Tools (Git, Jenkins etc) 186 | 187 | [19. Managing Virtual Machines using Vagrant](https://www.youtube.com/watch?v=Vfoj_nu8cmg&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=20) 188 | 189 | [21. Setting up a Git server locally using Ansible](https://www.youtube.com/watch?v=HCbc-m2CVVw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=22) 190 | 191 | 192 | 193 | -------------------------------------------------------------------------------- /demo-apps/nodejsapp/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | Using Express to give an idea of using NodeJS with modules and what the deployment process 4 | looks like 5 | 6 | ``` 7 | npm init 8 | 9 | npm install express --save 10 | ``` 11 | -------------------------------------------------------------------------------- /demo-apps/nodejsapp/deploy.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | 4 | npm install 5 | 6 | # For the love of all that is good, don't use this in production 7 | # This is only for a demonstration of how things work behind the scenes 8 | 9 | ssh vagrant@192.168.33.11 'sudo mkdir -p /app; sudo chown -R vagrant. 
/app' 10 | rsync -avz ./ vagrant@192.168.33.11:/app/ 11 | ssh vagrant@192.168.33.11 "sudo pkill node; cd /app; node index.js > output.log 2>&1 &" 12 | 13 | 14 | ssh vagrant@192.168.33.12 'sudo mkdir -p /app; sudo chown -R vagrant. /app' 15 | rsync -avz ./ vagrant@192.168.33.12:/app/ 16 | ssh vagrant@192.168.33.12 "sudo pkill node; cd /app; node index.js > output.log 2>&1 &" 17 | -------------------------------------------------------------------------------- /demo-apps/nodejsapp/index.js: -------------------------------------------------------------------------------- 1 | const express = require('express') 2 | const app = express() 3 | const port = 3000 4 | 5 | var os = require('os') 6 | var hostname = os.hostname(); 7 | 8 | var pid = process.pid; 9 | 10 | const appVersion = "1.0"; 11 | 12 | app.get('/', (req, res) => { 13 | 14 | var msg = `
<h1>Hello World!</h1>
15 | <br>
16 | Process ID: ${pid} <br>
17 | Running on: ${hostname} <br>
18 | App Version: ${appVersion}
19 | <br>
` 20 | 21 | res.send(msg) 22 | }) 23 | 24 | app.listen(port, () => { 25 | console.log(`Example app listening at http://localhost:${port}`) 26 | }) 27 | -------------------------------------------------------------------------------- /demo-apps/nodejsapp/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "nodejsapp", 3 | "version": "1.0.0", 4 | "description": "Using express to give an idea of using nodejs with modules and how the deployment process looks like", 5 | "main": "index.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "", 10 | "license": "ISC", 11 | "dependencies": { 12 | "express": "^4.19.2" 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /episodes/0-roadmap.md: -------------------------------------------------------------------------------- 1 | 2 | # DevOps From Scratch - RoadMap 3 | 4 | This is a roadmap I have planned, and this could change any moment as I continue to make videos 5 | 6 | 7 | 1. How does a simple website work 8 | 1. What is a website. How it works from local machine 9 | 2. What is a server - How does the internet work (IP addresses, Ports etc) 10 | 3. What is DNS (quick intro) 11 | 2. Basics of Linux - Setting up simple web server 12 | 1. History of Unix, Linux and the differences 13 | 2. Virtualization - Installing Debian 10 on VirtualBox 14 | 6. SSH - Remote logging, Key Exchange, Key based authentication etc 15 | 7. User management in Linux - quick guide - sudo / root 16 | 8. Installing package using apt-get - nginx 17 | 9. Learn VIM 18 | 10. Starting or stopping services using systemctl 19 | 11. Checking for processes using `ps` 20 | 13. Checking for listening ports using netstat 21 | 14. Checking open ports using netcat from the outside 22 | 15. checking the logs for errors 23 | 16. Before proceeding with nginx config - File system hierarchy in Linux 24 | 17. 
Nginx configuration - Checking for errors 25 | 18. What is HTTP - what are headers 26 | 19. Host header and virtual hosting in Nginx 27 | 3. Getting things onto the internet 28 | 1. Getting a domain, a free domain 29 | 2. Managing DNS for a domain, what it means 30 | 3. Getting our website onto a DigitalOcean server 31 | 4. A more complicated web application - WordPress 32 | 1. What is WordPress - what does it need 33 | 2. What is an application server vs web server : [HERE](https://www.nginx.com/resources/glossary/application-server-vs-web-server/) 34 | 3. What is PHP, PHP-FPM and why do we need it 35 | 4. What is a database and why do we need it - Intro to MySQL 36 | 5. Installing and setting up WordPress 37 | 6. Scaling WordPress - Why we need Load Balancers 38 | 5. Keeping our files versioned - Git 39 | 1. What is Git and why we need it 40 | 2. Basic commands in Git 41 | 6. Doing less work with Configuration management - Ansible 42 | 1. What is Configuration management and why do we need it 43 | 2. Basics of Ansible 44 | 3. Creating our Ansible playbook for our Nginx server 45 | 7. Back to a simple NodeJS application 46 | 1. This is only for a demo - create a very simple app 47 | 2. Keeping it in version control 48 | 3. What is Jenkins, what can it do for us 49 | 4. Automating our deployments - new version gets deployed on git push (Jenkins) 50 | 8. Making sure our systems stay up - Monitoring 51 | 1. The need for monitoring, what can it do for us 52 | 2. Installing Sensu for simple monitoring (HTTP and process) 53 | 9. Getting more serious - Caching 54 | 1. What is caching 55 | 2. More about the HTTP protocol and headers 56 | 3. Caching servers - Nginx 57 | 4. Separate advanced caching with Varnish? 58 | 5. Enable caching for our node application 59 | 6. Automated cache busting as part of deployment 60 | 10. Securing our servers with a Firewall 61 | 1. What is a firewall and what can it do 62 | 2. Introduction to iptables 63 | 11. Keep your stuff backed up 64 | 1. 
Why we need backups 65 | 2. How to back up individual things (MySQL) 66 | 12. Evolving to Docker 67 | 1. What are containers - what are we trying to solve 68 | 2. Installing Docker 69 | 3. Getting started with Docker containers 70 | 4. Creating our own Docker images 71 | 5. Containerizing our node application 72 | 6. Using Jenkins with Docker 73 | 13. Evolving to Kubernetes 74 | 1. What is Kubernetes - what are we trying to solve 75 | 2. Installing and getting started with minikube 76 | 3. Basics of Kubernetes 77 | 4. Moving our app to Kubernetes 78 | 14. Better monitoring with Prometheus 79 | 1. Prometheus, Alertmanager getting started 80 | 2. Plotting our graphs with Grafana 81 | 15. Moving to Cloud 82 | 1. What, why of Cloud 83 | 16. Google Cloud 84 | 17. AWS 85 | 18. Creating our infrastructure using Terraform 86 | 87 | ### Fundamentals of Linux 88 | 89 | I have based this on a goal, which is to set up a webserver, learning Linux in the process of doing so. 90 | And the idea goes something like this 91 | 92 | To get our website up, we need: 93 | 1. A Linux server 94 | - History of Unix, Birth of Linux 95 | - Virtualization 96 | - Install Debian 10 on VirtualBox 97 | 2. Accessing the server remotely 98 | - Shell 99 | - SSH 100 | - SSH Key exchange - Diffie-Hellman Key Exchange concept 101 | - How key based SSH login is set up 102 | 3. Navigating our shiny new server 103 | - cd, ls etc, basic commands 104 | - File system structure 105 | 4. Creating files and folders 106 | - echo, mkdir, cat etc 107 | - vim, nano 108 | 5. Create a user for our webserver 109 | - User management 110 | - Permissions 111 | 6. Installing our webserver 112 | - Webservers 113 | - Package management using `apt` 114 | 7. Starting our webserver 115 | - Managing services using `systemctl` 116 | 8. Checking if our server works 117 | - Process management using `ps`, `kill` etc 118 | - `curl`, `netstat`, `netcat` etc 119 | - `systemctl status` 120 | 9. 
Configuring Nginx 121 | - HTTP Headers 122 | - VirtualHost - Creating two virtualhosts 123 | - Using the `/etc/hosts` file 124 | - Nginx config 125 | - Checking log files - `tail`, `less` 126 | - `grep` 127 | 10. More useful commands 128 | - `find`, `lsof`, `df`, `du`, `dig` 129 | 11. Basic bash scripting 130 | - Fundamentals of bash scripting - loops and conditions 131 | - pipes, sed, awk etc 132 | - Script to back up our website 133 | - Script to alert if our website is down 134 | -------------------------------------------------------------------------------- /episodes/1-how-does-the-internet-work.md: -------------------------------------------------------------------------------- 1 | # How does the internet work 2 | 3 | Video Link : [E1](https://www.youtube.com/watch?v=SyPzQrUxmZc&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=3&t=0s) 4 | 5 | 6 | ## What is a website 7 | 8 | A webpage is nothing but a document on the internet 9 | that you can access. 10 | 11 | This is a webpage 12 | ```html 13 | 
<h1>Hello there</h1>
14 | ``` 15 | A website is a collection of web pages. It could have images, videos etc too. You get the idea 16 | 17 | ## What is a Network 18 | 19 | Computers connected together. 20 | 21 | ### Why do we need a Network? 22 | 23 | So that we can share documents, media, and resources like printers. This means we can share our 24 | simple website in the network too. 25 | 26 | ### Types of network 27 | 28 | 1. Local Area Network (LAN) : Consists of a small number of devices. Example: WiFi network at home 29 | 2. Wide Area Network (WAN) : A giant network, connecting other networks, devices etc. 30 | 31 | ## What is the Internet 32 | 33 | The internet is nothing but a network of networks. Your home WiFi network is connected to your ISP, 34 | which connects you to the other parts of the internet. 35 | And the ISPs connect to each other through high speed networks such as the backbone network 36 | 37 | ## What is a server 38 | 39 | In theory, a server is no different from the computer you have. They differ only in what software 40 | they might have installed and in hardware, like how many CPU cores or how many GBs 41 | of memory they have 42 | 43 | ## IP Addresses 44 | 45 | A unique identifier for a device in a network. 46 | There is more to learn here. 47 | 48 | ## Port Numbers 49 | 50 | If an IP address is the address of a house, you can think of port numbers as windows or doors to your house 51 | 52 | In a real example, an IP address identifies a unique computer in a network, 53 | and port numbers are different logical windows in the same computer. This lets us 54 | have multiple services run and listen on the same computer 55 | 56 | For example, if you have a web server, it listens on port number 80 57 | If you have an SSH server, it listens on 22 58 | 59 | ### What does "listen" mean 60 | 61 | It means the service is listening at a window (port) for any request to come to that port. 
62 | Think of it like a check-in counter at the airport. The staff waits until a traveller arrives at that 63 | window and then proceeds to handle the traveller 64 | 65 | ## What are Protocols 66 | 67 | Protocols are rules on how to talk to each other. 68 | For example, if you want to talk to someone in English, you both should know the rules of the English 69 | language. Protocols are like that. All the computers talking to each other using a language should follow 70 | certain rules. 71 | 72 | For example, if you are building a webserver, it should follow certain rules so that the browser can 73 | actually read the page you are serving. The browser already follows those rules. 74 | 75 | ## What is DNS 76 | 77 | DNS stands for Domain Name System. It is a system to translate domain names (for example, esc.sh or google.com) to 78 | IP addresses. 79 | 80 | ### Why do we need to do that? 81 | 82 | Because computers talk to each other using IP addresses. Domain names such as google.com exist for our 83 | convenience. For example, when your browser talks to www.google.com, the first thing it needs to do 84 | is find out the IP address of www.google.com 85 | 86 | [More on DNS later] 87 | 88 | ## For more research 89 | 90 | These are topics that you could research on your own 91 | 92 | 1. OSI Model, TCP/IP Model. Why are they important 93 | 2. Most used ports 94 | 3. IP Subnetting 95 | 4. How does a packet get routed through the internet - Routing protocols 96 | 5. How does a router work 97 | 6. Network Address Translation 98 | 7. IPV4 vs IPV6 99 | 8. TCP vs UDP 100 | 9. AS Numbers, BGP 101 | -------------------------------------------------------------------------------- /episodes/1.1-short-term-roadmap.md: -------------------------------------------------------------------------------- 1 | # Path to setting up a simple website 2 | 3 | ## Our current goal 4 | 5 | For these upcoming videos, let's set a goal. 
**Our goal is to set up a simple website** 6 | 7 | To be able to achieve that, we need to be familiar with the following: 8 | 9 | 1. We need a Linux Virtual Server to host our website 10 | - Virtualization 11 | - What and why of Linux 12 | - Installing a Linux server on a virtual machine 13 | 2. Access the server so that we can do our thing 14 | - Shell 15 | - SSH 16 | 3. We need to be able to navigate the server 17 | - `cd`, `ls`, `pushd`, `popd` etc 18 | - Filesystem hierarchy of Linux 19 | 4. We need to be able to read, create and modify files 20 | - `mkdir`, `cat`, `echo` etc 21 | - Text editors - `vim` 22 | 5. Create a user to run our webserver as 23 | - User management in Linux 24 | - Permissions 25 | 6. Installing our webserver 26 | - Webservers 27 | - Package management 28 | 7. Starting our webserver 29 | - Managing services using `systemctl` 30 | 8. Checking if our website works 31 | - Process management (`ps`, `kill` etc) 32 | - Working with `curl`, `netstat`, `netcat` 33 | 9. Configuring our webserver to our liking 34 | - VirtualHost, Host header 35 | - Configuring Nginx 36 | - Checking log files using `tail`, `less` 37 | - Finding patterns using `grep` 38 | 39 | -------------------------------------------------------------------------------- /episodes/19-vagrant-intro.md: -------------------------------------------------------------------------------- 1 | # Managing Virtual Machines using Vagrant 2 | 3 | 4 | Video Link [HERE](https://www.youtube.com/watch?v=Vfoj_nu8cmg&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=20&t=156s) 5 | 6 | ## The Problem we are trying to solve 7 | 8 | We have a lot of virtual machines to manage, and we don't want to do that manually. 9 | We need some sort of a tool to help us with that. 
10 | 11 | ## Installing Vagrant 12 | 13 | Follow the [docs](https://www.vagrantup.com/docs/installation) 14 | 15 | 16 | ## Most Basic Vagrantfile 17 | 18 | ```ruby 19 | Vagrant.configure("2") do |config| 20 | 21 | # Every Vagrant development environment requires a box. You can search for 22 | # boxes at https://vagrantcloud.com/search. 23 | config.vm.box = "base" 24 | 25 | end 26 | ``` 27 | 28 | ## Multiple VMs with their own IP address, shell provisioning 29 | 30 | ```ruby 31 | Vagrant.configure("2") do |config| 32 | 33 | config.vm.box = "debian/buster64" 34 | 35 | 36 | config.vm.define "nginx" do |nginx| 37 | nginx.vm.provider "virtualbox" do |vb| 38 | vb.memory = "512" 39 | end 40 | 41 | nginx.vm.network "private_network", ip: "192.168.33.10" 42 | nginx.vm.hostname = 'nginx' 43 | 44 | nginx.vm.provision "shell", inline: <<-SHELL 45 | apt-get update 46 | apt-get install -y nginx 47 | SHELL 48 | end 49 | 50 | config.vm.define "apache" do |apache| 51 | apache.vm.provider "virtualbox" do |vb| 52 | vb.memory = "512" 53 | end 54 | 55 | apache.vm.network "private_network", ip: "192.168.33.11" 56 | apache.vm.hostname = 'apache' 57 | 58 | apache.vm.provision "shell", inline: <<-SHELL 59 | apt-get update 60 | apt-get install -y apache2 61 | SHELL 62 | end 63 | end 64 | ``` 65 | 66 | 67 | ## Vagrant with ansible provisioning 68 | 69 | > Note: If you want to make this work on Windows, you need a few extra tweaks. 70 | > Search the web if you would like to do that. 71 | 72 | ### Vagrantfile 73 | 74 | ```ruby 75 | Vagrant.configure("2") do |config| 76 | 77 | config.vm.box = "debian/buster64" 78 | 79 | config.vm.network "private_network", ip: "192.168.33.10" 80 | 81 | 82 | config.vm.provider "virtualbox" do |vb| 83 | 84 | # Customize the amount of memory on the VM: 85 | vb.memory = "1024" 86 | end 87 | 88 | config.vm.provision "ansible" do |ansible| 89 | ansible.playbook = "playbook.yml" 90 | end 91 | end 92 | ``` 93 | 94 | ### And the playbook.yml 95 | 96 | ```yaml 97 | --- 98 | - 
hosts: all 99 | 100 | tasks: 101 | - name: Install nginx 102 | apt: 103 | name: nginx 104 | state: present 105 | ``` 106 | 107 | -------------------------------------------------------------------------------- /episodes/2-unix-linux-installing-debian-10-on-virtualbox.md: -------------------------------------------------------------------------------- 1 | # History of Unix and Linux, Installing Debian 10 on VirtualBox 2 | 3 | Video Link [HERE](https://www.youtube.com/watch?v=vqLyxlcpTP4&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=3) 4 | 5 | ## Virtualization 6 | 7 | 8 | `Host machine` - The computer where you are running the virtualization software 9 | `Guest machine` - The VM (Virtual machine) you create inside the host machine 10 | 11 | 12 | - We can easily create, modify, and destroy a full operating system without 13 | causing damage to our host machine 14 | 15 | 16 | ## History of Unix 17 | 18 | 19 | ## Unix Philosophy 20 | 21 | ## Birth of Linux 22 | 23 | ## Linux vs Unix 24 | 25 | ## Linux distros 26 | 27 | 28 | ## For further research 29 | 30 | -------------------------------------------------------------------------------- /episodes/20-Introduction-to-git.md: -------------------------------------------------------------------------------- 1 | # Introduction to Git 2 | 3 | Video Link [HERE](https://www.youtube.com/watch?v=uxE2Le64vHk&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=21) 4 | 5 | 6 | ## What is the Problem we are solving? 7 | If you are working on a project, 8 | - You need some way to keep track of all the changes that are happening in your project. 9 | - You need to be able to see what changed, when, and by whom 10 | - You need to be able to see the difference in the files you just worked on and compare them with how they were at a point in time 11 | - You need to be able to revert a mistake you made 12 | - Multiple people should be able to work on the same code base without messing everything up 13 | 14 | Solution? 
**Version Control Software** 15 | 16 | ## What is Git 17 | 18 | Git is open source version control software, used to, well, version control. 19 | Version control simply means keeping track of changes to your project as it grows. 20 | 21 | ## Installing Git 22 | 23 | - On Windows, you can install git inside WSL or install Git Bash 24 | - On MacOS, you can do `brew install git` 25 | - On any Linux distro `sudo apt-get install git` or `sudo dnf install git` or `sudo yum install git` should work 26 | 27 | ## Git vs GitHub 28 | 29 | - Git is the open source tool 30 | - GitHub is an online, web-based git hosting service 31 | 32 | ## Personal Advice 33 | 34 | - Don't use GUI tools to learn git, just don't. Use the command line 35 | - Don't try to cram everything at once. Learn the basics, use them regularly, 36 | and use Google when you get stuck 37 | - If you mess up your repository, instead of trying to clone a fresh copy of the 38 | repository, try to fix it. That is how you learn 39 | 40 | ## Git Repositories 41 | 42 | A repository simply means a place to store your project and the changes to it 43 | 44 | There are two types of repositories: **Local** and **Remote** 45 | 46 | ### Local Repository 47 | 48 | Means it sits on your computer's hard drive (or SSD, you know). 49 | 50 | ### Remote Repository 51 | 52 | Sits on another computer. Git can work over HTTP or SSH. More on that later. 53 | If you have multiple people working on the project, you probably need a remote 54 | repository. 55 | Also, if you want to have a copy of your project somewhere other than your 56 | computer, you need a remote repository. 57 | 58 | # Let's Git Started 59 | 60 | This will be quick 61 | 62 | ## Configure your name and email 63 | 64 | Before you start, let's first configure your name and email so that git knows 65 | who is making the changes. 
More on this later. 66 | 67 | ```shell 68 | git config --global user.name "Mansoor" 69 | git config --global user.email "m@esc.sh" 70 | ``` 71 | 72 | ## Very Basics 73 | 74 | - `git init` - Initializes a local repository. 75 | This means the current directory is under git now. 76 | 77 | Look at `ls -la .git` and you can see where the git magic happens 78 | 79 | - `git status` - Shows the current status of the repo 80 | 81 | ### Staging changes - `git add` 82 | 83 | **Create a new file** and use `git add <filename>` to add it to the **staging area** 84 | 85 | 86 | - `git add .` - Add the current directory 87 | - `git add directory/another` - Add a specific directory to staging 88 | - `git add *.py` - Add all .py files 89 | - `git add f*` - Add everything starting with `f`. 90 | 91 | You get the idea 92 | 93 | ### Committing changes - `git commit` 94 | 95 | **Committing** means to commit to the change you just made. Meaning git will store 96 | this change in its magic storage (the `.git` directory) 97 | 98 | - `git commit` - Opens the editor (set by the $EDITOR environment variable - You can change it 99 | using `export EDITOR=editor_name`) where you can write a commit message 100 | 101 | The **commit message** is what will be recorded as a message identifier for your change 102 | 103 | - `git commit -v` - Does the same thing as above, but also shows the changes in the editor 104 | 105 | - `git commit -m "A beautiful short description of the changes"` - Commits instantly with that message 106 | 107 | 108 | > Why do we need a staging area? 109 | > Because you may not want to store all the files in git. Like temporary 110 | > files, log files etc 111 | 112 | ### Ignoring certain files 113 | 114 | You can use the `.gitignore` file to do that. 
Just create a file with that name 115 | in the git repository and add the patterns you want ignored 116 | 117 | Example: 118 | 119 | ``` 120 | foobar 121 | cache/ 122 | ``` 123 | 124 | This will keep `foobar` and the `cache/` directory from being stored in git 125 | 126 | ### Viewing the history of changes - `git log` 127 | 128 | - `git log` - View all the commits in the current repository 129 | - `git log -p` - View all the commits and the changes 130 | - `git log -p <path>` - Show the changes, but only for that file/folder 131 | 132 | 133 | ### Viewing what change you just made - `git diff` 134 | 135 | After you make a change to a file, before committing, you can see what changed: 136 | 137 | `git diff` - Show the diff since the last commit 138 | 139 | 140 | ## Branches 141 | 142 | Git allows multiple people to work on the same repository without messing 143 | everything up. This is achieved using branches 144 | 145 | `master` is the default branch. Respect it and try not to mess it up 146 | 147 | If you want to work on two independent changes to your code base, branches are your 148 | friend 149 | 150 | Example: a `fix-bug` branch to fix an emergency bug, and `feature1` to work on a 151 | feature in parallel 152 | 153 | > Branches do not know of the changes in each other until you merge them 154 | 155 | - `git branch` - Shows the current branch 156 | - `git checkout <branchname>` - Switches to that branch 157 | - `git checkout -b <branchname>` - Creates a new branch and switches to it 158 | 159 | > Note: When working on a new feature, we usually do `git checkout -b featurename` from 160 | > the `master` branch 161 | 162 | ### Merging the changes 163 | 164 | Once you are done with all the changes in your `feature` branch, you can merge it into master 165 | using `git merge feature` while on the master branch 166 | 167 | 168 | While on the `feature` branch, `git merge master` will merge the master branch into your `feature` branch. 169 | This is usually needed to keep your `feature` branch up to 
date with the master branch 170 | 171 | 172 | 173 | -------------------------------------------------------------------------------- /episodes/21-setting-up-git-server-vagrant-ansible.md: -------------------------------------------------------------------------------- 1 | # Setting up a Git server (Using Vagrant and Ansible) 2 | 3 | Video Link [HERE](https://www.youtube.com/watch?v=uxE2Le64vHk&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=21) 4 | 5 | We are going to setup a local Git server. 6 | The idea is similar even if you are using a cloud provider. 7 | 8 | **Vagrant** will be used to provision the VM. Vagrantfile is [HERE](../infrastructure/vagrant/apps/git-server/Vagrantfile) 9 | 10 | **Ansible** will be used to fully automate the management of the git server. 11 | Ansible files are [HERE](../infrastructure/ansible) 12 | 13 | The git-server specific role which has all the stuff needed for the git server is [HERE](../infrastructure/ansible/roles/git-server) 14 | 15 | ## Steps for manually doing it 16 | 17 | It's pretty simple. 18 | 19 | 1. Launch a VM (or a cloud server depending on where you are doing it) 20 | 2. Create a `git` user. This user will be used for all the git operations 21 | 3. Install git 22 | 4. Give access to your ssh keys 23 | 24 | ## Automating it 25 | 26 | 1. Create Vagrantfile to launch the VM 27 | 2. Write the Ansible role to do everything that is needed 28 | - Install the package `git` 29 | - Create the `git` user 30 | - Manage SSH keys to give access to the repo 31 | - Manage repositories 32 | 3. 
Make a prettier DNS address for the git server 33 | -------------------------------------------------------------------------------- /episodes/22-nodejs-app-deployment-ansible.md: -------------------------------------------------------------------------------- 1 | # Deploying a simple NodeJS based app with Nginx Load balancer 2 | 3 | 4 | Video Link [HERE](https://www.youtube.com/watch?v=HCbc-m2CVVw&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=22) 5 | 6 | #### Previous Videos Related to this one 7 | 8 | Vagrant Intro [HERE](https://www.youtube.com/watch?v=Vfoj_nu8cmg&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=20) 9 | 10 | Ansible Intro [HERE](https://www.youtube.com/watch?v=xT0K0k36pxU&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=19) 11 | 12 | Nodejs+Nginx Intro [HERE](https://www.youtube.com/watch?v=6NC5V9gYANs&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=16) 13 | 14 | 15 | ## Key points 16 | 17 | - Load balancing using **nginx** 18 | - Launching VMs using **vagrant** 19 | - Configuring the VMs using **ansible** 20 | - Ansible Roles 21 | - Templates 22 | - Variables 23 | - Handlers 24 | - Simple shell script for manual deployment 25 | 26 | 27 | ## This is what we are going to build 28 | 29 | ``` 30 | +----------+ 31 | | Nginx | 32 | | proxy | 33 | +----+-----+ 34 | | 35 | | Load Balancing 36 | +------+-------+ 37 | | | 38 | +----v----+ +----v----+ 39 | | NodeJS | | NodeJS | 40 | | VM 1 | | VM 2 | 41 | +---------+ +---------+ 42 | ``` 43 | 44 | 45 | ### Why Nginx? 46 | 47 | - Load balancer 48 | - Caching (Future) 49 | - SSL Termination (Future) 50 | - Advanced rules (Future) 51 | 52 | #### Why Load balancer 53 | - Lets us run multiple instances of our app 54 | - Which gives redundancy and fault tolerance. 55 | - Let's us deploy without causing downtime (We can update the backends one at a time) 56 | 57 | ### Why two VMs 58 | 59 | That is only for the purpose of this demonstration. It does not make sense to 60 | have two VMs running just one process each. 
But I wanted to show the concept 61 | of load balancing 62 | 63 | ## Steps 64 | 65 | ### 1. Launch the VMs using Vagrant 66 | 67 | We have three VMs to launch, and obviously we will use Vagrant for that. 68 | 69 | Vagrant files [HERE](../infrastructure/vagrant/apps/nodejsapp) 70 | 71 | 72 | 73 | ### 2. Write the Ansible manifests to configure the VMs 74 | 75 | Ansible files [HERE](../infrastructure/ansible) 76 | 77 | We will have these roles 78 | 79 | - [common](../infrastructure/ansible/roles/common) : This contains common stuff for all the servers (git, vim etc) 80 | - [nginx-common](../infrastructure/ansible/roles/nginx-common): Whatever is common to all nginx servers (like installing nginx itself) 81 | - [nginx-nodejsapp](../infrastructure/ansible/roles/nginx-nodejsapp): nginx stuff specific to our nodejsapp 82 | - [nodejs-common](../infrastructure/ansible/roles/nodejs-common): Common to all nodejs apps 83 | 84 | #### On the Nginx VM 85 | 86 | We need 87 | - Nginx installed 88 | - An Nginx virtualhost configuration 89 | 90 | #### On the NodeJS VMs 91 | 92 | We need 93 | - NodeJS installed 94 | 95 | 96 | 97 | 98 | ### 3. Sample application 99 | 100 | Of course we need a demo app. This time, we will use an Express-based simple 101 | hello world application. Why Express? Because I want to introduce `npm install` 102 | as part of our deployment. 103 | 104 | The sample app is [HERE](../demo-apps/nodejsapp) 105 | 106 | The `package.json` was created using the following. I am leaving it here 107 | for reference; you don't have to do this as the `package.json` is already 108 | present 109 | ``` 110 | # Just press Enter for all the prompts 111 | npm init 112 | ``` 113 | 114 | And then 115 | ``` 116 | npm install express --save 117 | ``` 118 | Which will save the dependency (express) into `package.json` 119 | 120 | ### 4. Deploying it 121 | 122 | We shall have a simple, dumb script that will do the deployment for us 123 | 124 | 1. Clone the codebase 125 | 2. 
Run `npm install` 126 | 3. Copy the resulting files to the nodejs machines 127 | 4. Restart the node processes 128 | 129 | The dumb script is present [HERE](../demo-apps/nodejsapp/deploy.sh) 130 | Please don't use this deploy script for anything other than learning 131 | -------------------------------------------------------------------------------- /episodes/23-how-does-ssl-work.md: -------------------------------------------------------------------------------- 1 | # How does SSL (TLS) work 2 | 3 | Video Link [HERE](https://youtu.be/pc5Xf9uuvwE) 4 | 5 | ## What is SSL/TLS 6 | 7 | - `SSL` stands for Secure Sockets Layer. 8 | - `TLS` stands for Transport Layer Security 9 | 10 | These are protocols designed to provide security for communications between devices 11 | in a network. Long story short, these protocols help make sure that the sensitive 12 | information we send over a network is not captured by a third party (like a hacker) 13 | 14 | For example, when you access your bank's website and log in with your username and password, 15 | this information is transferred over the network from your computer all the way to the 16 | servers of the bank. SSL/TLS makes sure that only your bank's server can actually see what 17 | you are sending them 18 | 19 | ## SSL vs TLS vs HTTPS 20 | 21 | ### HTTPS 22 | 23 | This is the odd one among the three. HTTPS stands for Hypertext Transfer Protocol Secure. This 24 | is the secure version of the HTTP protocol, which is used to transfer data between computers. 25 | For example, when you open a website like google.com, your browser and google.com's servers use 26 | `http` and the server sends the webpage to your browser. 27 | 28 | HTTPS = HTTP + SSL 29 | 30 | ### TLS vs SSL 31 | 32 | TLS is the newer version of SSL. 
It goes like this: 33 | 34 | SSLv2.0 -> SSLv3.0 -> TLSv1.0 -> TLSv1.1 -> TLSv1.2 -> TLSv1.3 35 | 36 | > SSLv1 was never released publicly 37 | 38 | So, long story short, after 3.0, SSL was renamed to TLS. So, going forward, it is better to use 39 | the term TLS. 40 | 41 | 42 | ## What is wrong with HTTP? 43 | 44 | If you are in a coffee shop and you are connecting to a website over `http` and not `https`, someone 45 | running some software looking at the packets in the network can see everything you are doing, including 46 | your username and password. Now, this is obviously terrible. This happens because the 47 | protocol `http` sends everything in plain text. 48 | 49 | But with TLS, in https, everything you send is encrypted. So, even though someone snooping on the network 50 | will be able to see that you are sending something, they will not be able to read it, as it would look like garbage 51 | to them 52 | 53 | For example, if I make an http request with a username and password, like this 54 | 55 | ``` 56 | curl 'http://login.demo.esc.sh/index.php' --data-raw 'user_id=admin&user_pass=secret' 57 | ``` 58 | Someone looking at the packets would be able to see this: 59 | 60 | ``` 61 | 0..../..POST /index.php HTTP/1.1 62 | Host: login.demo.esc.sh 63 | User-Agent: curl/7.64.1 64 | Accept: */* 65 | Content-Length: 30 66 | Content-Type: application/x-www-form-urlencoded 67 | 68 | user_id=admin&user_pass=secret 69 | ``` 70 | 71 | That is, our username and password. 72 | 73 | However, if I send the same request over https 74 | ``` 75 | curl 'https://login.demo.esc.sh/index.php' --data-raw 'user_id=admin&user_pass=secret' 76 | ``` 77 | 78 | This is all they can see: 79 | 80 | ``` 81 | .......b.#....E..8.N@.6....;* ........JF...X.o....d1..... 82 | .1@.0..........6.*..4....m\.J.O.....j......|...T..^3.>.....G.......K.......m#..].....7..r,..E. 
m.2./o...\..0h.K.=..e.v._..R.l9.p?`Q.B...,..".)p<....`......ZHP}..v./..:.....U <...J.Y@,A.F..>=mH..W.e.J.{.|Y...I..c] 83 | .=.f(Y..x...!.H.M|....]O.T.. 84 | \.a..{......*..u..I.. 5.K. 85 | ``` 86 | 87 | I know which version I prefer for the shady hacker to see 88 | 89 | ## What does TLS do for us 90 | 91 | 1. Makes sure that the data we are sending/receiving is seen only by us and the server - Encryption 92 | 2. Ensures that the server we are talking to is the right one - Authentication 93 | 3. Makes sure that no one tampers with the data. That is, if you sent "Hello" to your friend in Facebook 94 | Messenger, they should receive exactly that. No one should be able to tamper with it on the network - Integrity 95 | 96 | ## TLS Terminology 97 | 98 | Before we talk about how TLS works, it is important to take a look at the terms we use when we talk about TLS 99 | 100 | 101 | ### Encryption 102 | 103 | You know this one. This is the process of converting human-readable plain text into a non-human-readable format, 104 | just like the above example 105 | 106 | Encryption makes sure that only we and the server can read the data; no one else on the network can. 107 | 108 | ### Types of Encryption 109 | 110 | There are mainly two types of encryption 111 | 112 | #### Symmetric Encryption 113 | 114 | In this, both parties use the same key to encrypt and decrypt the data. Example: AES 115 | 116 | The problem is that both parties need to know the shared key to be able to encrypt or 117 | decrypt. So, sharing that key safely is a challenge when dealing with communications across a network. 118 | 119 | #### Asymmetric Encryption (Public key encryption) 120 | 121 | There is a key pair, called the public and the private key. For example, we generated an SSH 122 | key pair earlier using `ssh-keygen`. The two keys are always related to each other. 123 | 124 | If you encrypt something with one key, you can decrypt it only using the other key. 
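You can try this key-pair behaviour yourself with `openssl` (a minimal sketch; the file names are arbitrary and a 2048-bit RSA key is just one example of a key pair):

```shell
# Generate a throwaway RSA key pair: a private key, then its public half
openssl genrsa -out demo_private.pem 2048
openssl rsa -in demo_private.pem -pubout -out demo_public.pem

# Encrypt a message using only the PUBLIC key
printf 'hello' > message.txt
openssl pkeyutl -encrypt -pubin -inkey demo_public.pem \
  -in message.txt -out message.enc

# message.enc is unreadable garbage now; only the PRIVATE key can recover it
openssl pkeyutl -decrypt -inkey demo_private.pem -in message.enc
# prints: hello
```

Note that the public key could not decrypt `message.enc` — that asymmetry is the whole point.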
125 | 126 | So, if I encrypt something using the public key, only the private key associated with it can decrypt it. 127 | 128 | ### Ciphers 129 | 130 | The algorithms used to encrypt/decrypt data 131 | 132 | ### TLS Certificate 133 | 134 | A text file with a bunch of information, like the owner of a domain, the expiry of the certificate, who issued the certificate, 135 | the public key etc. This is sent by the server to the client. 136 | 137 | This is how it looks when used in a webserver configuration 138 | ``` 139 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; 140 | ``` 141 | 142 | The certificate itself looks like this 143 | 144 | ``` 145 | head /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem 146 | -----BEGIN CERTIFICATE----- 147 | MIIFazCCBFOgAwIBAgISBKNEQbcYxhnjsYe613NE1rsvMA0GCSqGSIb3DQEBCwUA 148 | MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD 149 | ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0yMDA5MjcwNTI0MDVaFw0y 150 | MDEyMjYwNTI0MDVaMBoxGDAWBgNVBAMTD3NzbC5kZW1vLmVzYy5zaDCCASIwDQYJ 151 | KoZIhvcNAQEBBQADggEPADCCAQoCggEBALvT6DX2U9C8i0iVmDGEicq2H4Gk56Ee 152 | ROPbOWuz+7kSxHXAOHFGffrKpsKvPfpx+pq8V56PwZQZz/mbEEFjd2lRDrIZi0nR 153 | SLoYL3pQVptkyfCGQAzAfBvZQqonZ0AgPcJZ4CjIQCn9w/S4SOuKzSOlhyX+2Jzo 154 | G3BIA8Jtvfs4UkL17Z+nT8SbUeDmykbNp1CvlqS9EVYaoMUevt9frV5oV0MzsNSA 155 | 8/4ZO9nzoztptpgf1H2NH30ZTBMoZAdVbDKAh4Un+Urwlc9XZagv9HaCw5tpyEpa 156 | ``` 157 | 158 | You can fetch a domain's certificate using the command 159 | 160 | ``` 161 | openssl s_client -showcerts -connect ssl.demo.esc.sh:443 -servername ssl.demo.esc.sh < /dev/null 162 | ``` 163 | 164 | You can also get the certificate information by looking at your browser. 165 | Click on the padlock -> Certificate. 
It looks like this 166 | 167 | ![SSL Cert info](img/ssl-info.png) 168 | 169 | > TLS Certificates are public, not secret (unlike the private key) 170 | 171 | ### Who issues these certificates - Certificate Authority (CA) 172 | 173 | For the browser to trust a website's TLS certificate, it needs to be issued by 174 | one of the Certificate Authorities 175 | 176 | So if you own a domain, say `esc.sh`, and you want to make your website secure using TLS, you can 177 | approach a certificate authority and ask for a certificate. They will ask you to prove that you own 178 | the domain. Once you have proved it, they will give you the certificate (hugely oversimplified) 179 | 180 | In the old days you had to pay for these certificates, but now you can get one for free from 181 | Let's Encrypt. 182 | 183 | #### What is special about CAs 184 | 185 | The certificate authorities are trusted by all the browsers. So, if the browser sees that the 186 | server it is connecting to has a certificate issued by a known CA in its list, then it will trust 187 | it (provided the certificate has not expired and the other information is correct) 188 | 189 | ### Private key 190 | 191 | The other half of the key pair. It can decrypt whatever is encrypted using the public key. 192 | Also, it can encrypt stuff that can be decrypted using the public key 193 | 194 | 195 | It looks like this in an Nginx configuration 196 | ``` 197 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; 198 | ``` 199 | 200 | And the file itself looks like this 201 | > Do not post your private key anywhere. 
I edited it out and replaced it with the 202 | > certificate itself ;) So, good luck with trying to guess the private key ;) 203 | 204 | ``` 205 | head /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem 206 | -----BEGIN PRIVATE KEY----- 207 | MIIFazCCBFOgAwIBAgISBKNEQbcYxhnjsYe613NE1rsvMA0GCSqGSIb3DQEBCwUA 208 | MEoxCzAJBgNVBAYTAlVTMRYwFAYDVQQKEw1MZXQncyBFbmNyeXB0MSMwIQYDVQQD 209 | ExpMZXQncyBFbmNyeXB0IEF1dGhvcml0eSBYMzAeFw0yMDA5MjcwNTI0MDVaFw0y 210 | MDEyMjYwNTI0MDVaMBoxGDAWBgNVBAMTD3NzbC5kZW1vLmVzYy5zaDCCASIwDQYJ 211 | KoZIhvcNAQEBBQADggEPADCCAQoCggEBALvT6DX2U9C8i0iVmDGEicq2H4Gk56Ee 212 | ROPbOWuz+7kSxHXAOHFGffrKpsKvPfpx+pq8V56PwZQZz/mbEEFjd2lRDrIZi0nR 213 | SLoYL3pQVptkyfCGQAzAfBvZQqonZ0AgPcJZ4CjIQCn9w/S4SOuKzSOlhyX+2Jzo 214 | G3BIA8Jtvfs4UkL17Z+nT8SbUeDmykbNp1CvlqS9EVYaoMUevt9frV5oV0MzsNSA 215 | 8/4ZO9nzoztptpgf1H2NH30ZTBMoZAdVbDKAh4Un+Urwlc9XZagv9HaCw5tpyEpa 216 | ``` 217 | ### Can I make my own certificates? 218 | 219 | Yes, they are called self-signed certificates. Since they are not issued by a trusted CA, your browser won't 220 | trust such a certificate. But you can create your own CA and trust it in your browser. This way, all certificates 221 | issued by your own CA will work in your browsers (remember, this means they will only work in browsers where 222 | you have installed your own CA certificate) 223 | 224 | ## How does TLS encryption work - TLS Handshakes 225 | 226 | The process that sets up the encrypted communication between a client and a server is called the **TLS (SSL) handshake** 227 | 228 | When a user visits a website with TLS enabled, this is what happens 229 | 230 | 1. **Client Hello** (Plain) : The handshake is initiated by the client. This hello message includes: 231 | - The versions of TLS/SSL the client supports 232 | - The cipher suites it supports 233 | - "client random" - a random string 234 | 2. 
**Server Hello** (Plain) : The server replies with the following 235 | - Chosen cipher suite 236 | - Server's TLS certificate 237 | - "server random" - another random string 238 | 3. **Authentication** (Plain) : The client 239 | - Looks at the certificate the server sent: the CA, the expiry, etc 240 | - If it matches the domain name, is not expired, and is from a trusted CA, then it has validated the authenticity of the website 241 | - Remember that the client also got the public key of the server from the certificate 242 | 4. **Premaster secret** (Encrypted with the public key): The client 243 | - Generates another random string 244 | - Encrypts it using the server's public key (now, only the server can decrypt it) 245 | - Sends it to the server 246 | 5. **Premaster secret decrypted** : The server uses its private key to decrypt the premaster secret 247 | 6. **Session keys created**: Both the client and the server generate the session key using 248 | - Client random + Server random + premaster secret = Session key 249 | - Both the server and the client will have the same key (symmetric) 250 | 7. **Client Ready** (Symmetric encrypted) : The client sends a ready message encrypted using the session key 251 | 8. **Server ready** (Symmetric encrypted) : The server sends a ready message encrypted using the same session key 252 | 9. **Encrypted channel created** : From here on, until a new session is created, everything is encrypted. 
This includes your 253 | credit card details, usernames, passwords, the URL etc 254 | 255 | 256 | ![How SSL Works](img/how-ssl-works.png) 257 | -------------------------------------------------------------------------------- /episodes/24-securing-nginx-free-ssl-letsencrypt.md: -------------------------------------------------------------------------------- 1 | # How to configure SSL with Let's Encrypt and Nginx (Production Ready) 2 | 3 | 4 | 5 | This document explains how to setup and configure SSL for a domain name 6 | with Let'sEncrypt and Nginx 7 | 8 | ## Video Link 9 | [![Watch the video](img/24.png)](https://youtu.be/NRJIhc3aQn0) 10 | 11 | 12 | 13 | For this tutorial, I will be using a Debian 10 server. This should work for any debian 14 | based distro 15 | 16 | ## Goals 17 | 18 | 1. Fetch SSL certificates for a domain 19 | 2. Configure Nginx to use SSL 20 | 3. HTTP to HTTPS redirection : [HERE](#step-6-optional---redirect-http-to-https) 21 | 4. Auto renew certificates : [HERE](#step-7---enable-auto-renew) 22 | 5. Redirect www to non-www or viceversa : [HERE](#step-8-optional---redirect-www-to-non-www-or-vice-versa) 23 | - Redirect www to non-www without http to https redirection : [HERE](#without-http---https-redirection) 24 | - Redirect www to non-www with http to https redirection : [HERE](#with-http---https-redirection) 25 | - Redirect non-www to www without http to https redirection : [HERE](#without-http---https-redirection-1) 26 | - Redirect non-www to www with http to https redirection : [HERE](#with-http---https-redirection-1) 27 | 28 | 29 | ## Prerequisites 30 | 31 | - A publicly accessible server (if you plan to use HTTP challenge to fetch the certificate) 32 | - Domain name pointed to the server's address. I will use `ssl.demo.esc.sh`. If you have a domain 33 | like `example.com` and you want to use `example.com` and `www.example.com`, then make sure you 34 | point both to the server's IP address. 
35 | 36 | I will use `ssl.demo.esc.sh` and `www.ssl.demo.esc.sh` to avoid any confusion. 37 | 38 | > If you are confused by the multiple levels of subdomains, don't be; they work the same 39 | > way as `example.com` or `www.example.com` 40 | - Nginx running - `sudo apt install nginx` 41 | 42 | ## Step 1 - Install certbot 43 | 44 | certbot - Lets us fetch SSL certificates from Let's Encrypt 45 | python3-certbot-nginx - Helps us configure the Nginx SSL config 46 | 47 | ``` 48 | sudo apt install certbot python3-certbot-nginx 49 | ``` 50 | 51 | ## Step 2 (Optional) - Verify that the domains are pointing to our server IP 52 | 53 | ``` 54 | ➜ ~ dig ssl.demo.esc.sh +short 55 | 139.59.42.9 56 | 57 | 58 | ➜ ~ dig www.ssl.demo.esc.sh +short 59 | 139.59.42.9 60 | ``` 61 | 62 | ## Step 3 - Create letsencrypt.conf 63 | 64 | Remember, the Let's Encrypt certificates are **valid only for 90 days**. That means we need 65 | to renew them regularly. 66 | 67 | This conf is needed so that when Let's Encrypt tries to renew the certificate, it can access 68 | the domain over http without being redirected. 
That is, without this conf, if we are 69 | redirecting all http to https, then even the Let's Encrypt renewal requests will get 70 | redirected, causing the renewal to fail. 71 | 72 | 73 | Create `/etc/nginx/snippets/letsencrypt.conf` with the following content: 74 | 75 | ```nginx 76 | location ^~ /.well-known/acme-challenge/ { 77 | default_type "text/plain"; 78 | root /var/www/letsencrypt; 79 | } 80 | ``` 81 | 82 | Create the directory 83 | ``` 84 | sudo mkdir -p /var/www/letsencrypt 85 | 86 | ``` 87 | ## Step 3.1 - Configure Nginx for HTTP 88 | 89 | 90 | If you already have an HTTP configuration, all you have to add is the `include` line 91 | 92 | 93 | Create `/etc/nginx/sites-enabled/ssl.demo.esc.sh` (Change the domain name obviously) 94 | 95 | ```nginx 96 | server { 97 | listen 80; 98 | 99 | include /etc/nginx/snippets/letsencrypt.conf; 100 | 101 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 102 | 103 | root /var/www/ssl.demo.esc.sh; 104 | index index.html; 105 | } 106 | ``` 107 | 108 | Verify the config with `sudo nginx -t` 109 | 110 | Reload Nginx 111 | 112 | ``` 113 | sudo systemctl reload nginx 114 | ``` 115 | 116 | 117 | ## Step 4 (Optional) - Configure the Firewall 118 | 119 | If you have been following the `DevOps From Scratch` series, you probably don't have a firewall 120 | yet, so skip this. But if you do, allow port 80 to be accessed from anywhere. 121 | 122 | ## Step 5 - Fetch the Certificate 123 | 124 | ``` 125 | sudo certbot --nginx -d ssl.demo.esc.sh -d www.ssl.demo.esc.sh 126 | ``` 127 | 128 | Make sure to give the proper domain names.
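If the HTTP-01 challenge fails, a quick way to debug is to confirm that Nginx really serves files from the webroot used in `letsencrypt.conf`. The following is an optional sanity-check sketch; the file name `test-file` is arbitrary and the domain is just the example used in this document:

```shell
# Create a dummy challenge file under the webroot that letsencrypt.conf points at
sudo mkdir -p /var/www/letsencrypt/.well-known/acme-challenge
echo "challenge-ok" | sudo tee /var/www/letsencrypt/.well-known/acme-challenge/test-file

# Fetch it over plain HTTP, the same way the Let's Encrypt validation server would.
# If this does not print "challenge-ok", certbot's HTTP challenge will fail too.
curl http://ssl.demo.esc.sh/.well-known/acme-challenge/test-file

# Clean up the test file afterwards
sudo rm /var/www/letsencrypt/.well-known/acme-challenge/test-file
```

Because the `^~ /.well-known/acme-challenge/` location takes priority over other locations, this check keeps working even after you add the HTTP to HTTPS redirect later.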
129 | 130 | If everything went well, you should see something like: 131 | (Please note that I chose not to set up a redirect from HTTP to HTTPS) 132 | 133 | ``` 134 | > certbot --nginx -d ssl.demo.esc.sh -d www.ssl.demo.esc.sh 135 | Saving debug log to /var/log/letsencrypt/letsencrypt.log 136 | Plugins selected: Authenticator nginx, Installer nginx 137 | Obtaining a new certificate 138 | Performing the following challenges: 139 | http-01 challenge for ssl.demo.esc.sh 140 | http-01 challenge for www.ssl.demo.esc.sh 141 | Waiting for verification... 142 | Cleaning up challenges 143 | Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/ssl.demo.esc.sh 144 | Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/ssl.demo.esc.sh 145 | 146 | Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access. 147 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 148 | 1: No redirect - Make no further changes to the webserver configuration. 149 | 2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for 150 | new sites, or if you're confident your site works on HTTPS. You can undo this 151 | change by editing your web server's configuration. 152 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 153 | Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1 154 | 155 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - 156 | Congratulations! You have successfully enabled https://ssl.demo.esc.sh and 157 | https://www.ssl.demo.esc.sh 158 | 159 | You should test your configuration at: 160 | https://www.ssllabs.com/ssltest/analyze.html?d=ssl.demo.esc.sh 161 | https://www.ssllabs.com/ssltest/analyze.html?d=www.ssl.demo.esc.sh 162 | ``` 163 | 164 | 165 | At this point, you should have the certificates in place.
Your nginx conf will look 166 | similar to this: 167 | 168 | ```nginx 169 | server { 170 | listen 80; 171 | 172 | include /etc/nginx/snippets/letsencrypt.conf; 173 | 174 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 175 | 176 | root /var/www/ssl.demo.esc.sh; 177 | index index.html; 178 | 179 | listen 443 ssl; # managed by Certbot 180 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 181 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 182 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 183 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 184 | 185 | 186 | } 187 | ``` 188 | 189 | ## Step 6 (Optional) - Redirect HTTP to HTTPS 190 | 191 | It is recommended that you enable HTTP to HTTPS redirection. 192 | Edit the nginx conf and make it look like this: 193 | 194 | ```nginx 195 | server { 196 | listen 80; 197 | 198 | include /etc/nginx/snippets/letsencrypt.conf; 199 | 200 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 201 | 202 | # We are redirecting all requests on port 80 to the https server block 203 | location / { 204 | return 301 https://$host$request_uri; 205 | } 206 | } 207 | server { 208 | 209 | listen 443 ssl; # managed by Certbot 210 | 211 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 212 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 213 | 214 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 215 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 216 | 217 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 218 | 219 | root /var/www/ssl.demo.esc.sh; 220 | index index.html; 221 | } 222 | ``` 223 | 224 | We can see that the redirection is working as expected: 225 | 226 | ``` 227 | ➜ ~ curl -I http://ssl.demo.esc.sh/foobar/something.html 228 | HTTP/1.1 301 Moved Permanently 229 | ---snip--- 230 |
Location: https://ssl.demo.esc.sh/foobar/something.html 231 | ``` 232 | 233 | ## Step 7 - Enable auto renew 234 | 235 | Let's Encrypt certificates expire in 90 days. So we need to make sure that we renew them well 236 | before expiry. Renewal is done using the command `certbot renew`. 237 | 238 | Add it as a cron job so that it runs regularly; `certbot renew` only renews certificates that are close to expiry, so it is safe to run it often. 239 | 240 | Run `crontab -e` to edit the crontab. If you are a non-root user, do `sudo crontab -e`. 241 | Add the following lines 242 | ``` 243 | 30 2 * * 1 /usr/bin/certbot renew >> /var/log/certbot_renew.log 2>&1 244 | 35 2 * * 1 /etc/init.d/nginx reload 245 | ``` 246 | The first one renews the certificate and the second one reloads nginx. 247 | These run once a week (every Monday) at 2:30 AM and 2:35 AM respectively. 248 | 249 | ## Step 8 (Optional) - Redirect "www" to "non www" or vice versa 250 | 251 | You can either use the `www` version or the `non-www` version of your website. 252 | Whatever you choose, stick to it and redirect the other version to the preferred 253 | version of the website. 254 | 255 | There are 4 possible combinations of requests: 256 | 1. http request to non-www 257 | 2. http request to www 258 | 3. https request to non-www 259 | 4. https request to www 260 | 261 | So, we need to have rules to handle all of them. 262 | 263 | Pay attention to the `server_name` part and the `return 301`. The idea is, we 264 | will create a dedicated server block for the name we want redirected and then set 265 | the target using the `return`. 266 | 267 | So, a www -> non-www server block means `server_name` will be the `www` name and `return` will 268 | point to the `non-www` name. 269 | 270 | ### Option 1: www -> non-www (that is, non-www preferred) 271 | 272 | So, we choose `ssl.demo.esc.sh` (without www) as our preferred name.
273 | Now we want `www.ssl.demo.esc.sh` redirected to `ssl.demo.esc.sh` 274 | 275 | **Now, this depends on whether you are using http -> https redirection** 276 | 277 | #### With http -> https redirection 278 | 279 | Edit the redirection part in the port 80 block to make it look like this: 280 | 281 | ```nginx 282 | # For redirecting www -> non-www and http -> https (HTTP request) 283 | server { 284 | listen 80; 285 | 286 | include /etc/nginx/snippets/letsencrypt.conf; 287 | 288 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 289 | 290 | # Redirect http -> https 291 | # Also Redirect www -> non www 292 | location / { 293 | return 301 https://ssl.demo.esc.sh$request_uri; 294 | } 295 | } 296 | 297 | # For redirecting www -> non-www (HTTPS request) 298 | server { 299 | 300 | listen 443 ssl; # managed by Certbot 301 | 302 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 303 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 304 | 305 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 306 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 307 | 308 | server_name www.ssl.demo.esc.sh; 309 | 310 | # Redirect www -> non www 311 | location / { 312 | return 301 https://ssl.demo.esc.sh$request_uri; 313 | } 314 | } 315 | # For serving the non www site (HTTPS) 316 | server { 317 | 318 | listen 443 ssl; # managed by Certbot 319 | 320 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 321 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 322 | 323 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 324 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 325 | 326 | server_name ssl.demo.esc.sh; 327 | 328 | root /var/www/ssl.demo.esc.sh; 329 | index index.html; 330 | } 331 | ``` 332 | 333 | #### Without http -> https redirection 334 | 
335 | **Please don't do this unless you really have a huge reason to** 336 | 337 | Remember, this is a single nginx config. We need all these blocks to deal with 338 | all of these combinations. 339 | 340 | ```nginx 341 | # For redirecting www -> non www (HTTP request) 342 | server { 343 | listen 80; 344 | 345 | include /etc/nginx/snippets/letsencrypt.conf; 346 | 347 | server_name www.ssl.demo.esc.sh; 348 | 349 | # Redirect www -> non www 350 | location / { 351 | return 301 http://ssl.demo.esc.sh$request_uri; 352 | } 353 | } 354 | 355 | # For serving the non www site (HTTP) 356 | server { 357 | listen 80; 358 | 359 | include /etc/nginx/snippets/letsencrypt.conf; 360 | 361 | server_name ssl.demo.esc.sh; 362 | 363 | root /var/www/ssl.demo.esc.sh; 364 | index index.html; 365 | } 366 | 367 | # For redirecting www -> non www (https) 368 | server { 369 | 370 | listen 443 ssl; # managed by Certbot 371 | 372 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 373 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 374 | 375 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 376 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 377 | 378 | server_name www.ssl.demo.esc.sh; 379 | 380 | # Redirect www -> non www 381 | location / { 382 | return 301 https://ssl.demo.esc.sh$request_uri; 383 | } 384 | } 385 | 386 | # For serving the non www site (HTTPS) 387 | server { 388 | 389 | listen 443 ssl; # managed by Certbot 390 | 391 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 392 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 393 | 394 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 395 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 396 | 397 | server_name ssl.demo.esc.sh; 398 | 399 | root /var/www/ssl.demo.esc.sh; 400 |
index index.html; 401 | } 402 | 403 | ``` 404 | 405 | ### Option 2 - Redirect non-www to www 406 | 407 | We choose `www.ssl.demo.esc.sh` as the main domain. So we want to redirect 408 | all requests to `ssl.demo.esc.sh` -> `www.ssl.demo.esc.sh` 409 | 410 | #### With http -> https redirection 411 | 412 | 413 | ```nginx 414 | # For redirecting non-www -> www and http -> https (HTTP request) 415 | server { 416 | listen 80; 417 | 418 | include /etc/nginx/snippets/letsencrypt.conf; 419 | 420 | server_name ssl.demo.esc.sh www.ssl.demo.esc.sh; 421 | 422 | # Redirect http -> https 423 | # Also redirect non-www -> www 424 | location / { 425 | return 301 https://www.ssl.demo.esc.sh$request_uri; 426 | } 427 | } 428 | 429 | # For redirecting non-www -> www (HTTPS request) 430 | server { 431 | 432 | listen 443 ssl; # managed by Certbot 433 | 434 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 435 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 436 | 437 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 438 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 439 | 440 | server_name ssl.demo.esc.sh; 441 | 442 | # Redirect non www -> www 443 | location / { 444 | return 301 https://www.ssl.demo.esc.sh$request_uri; 445 | } 446 | } 447 | # For serving the www site (HTTPS) 448 | server { 449 | 450 | listen 443 ssl; # managed by Certbot 451 | 452 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 453 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 454 | 455 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 456 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 457 | 458 | server_name www.ssl.demo.esc.sh; 459 | 460 | root /var/www/ssl.demo.esc.sh; 461 | index index.html; 462 | } 463 | ``` 464 | 465 | 466 | #### Without http -> https
redirection 467 | 468 | **Please don't do this unless you really have a huge reason to** 469 | 470 | Remember, this is a single nginx config. We need all these blocks to deal with 471 | all of these combinations. 472 | 473 | ```nginx 474 | # For redirecting non www -> www (HTTP request) 475 | server { 476 | listen 80; 477 | 478 | include /etc/nginx/snippets/letsencrypt.conf; 479 | 480 | server_name ssl.demo.esc.sh; 481 | 482 | # Redirect non www -> www 483 | location / { 484 | return 301 http://www.ssl.demo.esc.sh$request_uri; 485 | } 486 | } 487 | 488 | # For serving the www site (HTTP) 489 | server { 490 | listen 80; 491 | 492 | include /etc/nginx/snippets/letsencrypt.conf; 493 | 494 | server_name www.ssl.demo.esc.sh; 495 | 496 | root /var/www/ssl.demo.esc.sh; 497 | index index.html; 498 | } 499 | 500 | # For redirecting non www -> www (https) 501 | server { 502 | 503 | listen 443 ssl; # managed by Certbot 504 | 505 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 506 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 507 | 508 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 509 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 510 | 511 | server_name ssl.demo.esc.sh; 512 | 513 | # Redirect non www -> www 514 | location / { 515 | return 301 https://www.ssl.demo.esc.sh$request_uri; 516 | } 517 | } 518 | 519 | # For serving the www site (HTTPS) 520 | server { 521 | 522 | listen 443 ssl; # managed by Certbot 523 | 524 | ssl_certificate /etc/letsencrypt/live/ssl.demo.esc.sh/fullchain.pem; # managed by Certbot 525 | ssl_certificate_key /etc/letsencrypt/live/ssl.demo.esc.sh/privkey.pem; # managed by Certbot 526 | 527 | include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot 528 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot 529 | 530 | server_name www.ssl.demo.esc.sh; 531 | 532 | root
/var/www/ssl.demo.esc.sh; 533 | index index.html; 534 | } 535 | 536 | ``` 537 | -------------------------------------------------------------------------------- /episodes/25-devops-ci-cd.md: -------------------------------------------------------------------------------- 1 | # DevOps and Continuous Integration/Deployment/Delivery 2 | 3 | DevOps is a methodology in software development and release that aims to increase the speed 4 | and efficiency at which software is released. This is achieved by having the 5 | developers and the operations team work together, automating away whatever is possible, and 6 | sharing responsibilities. 7 | 8 | Before we even begin to talk more about DevOps, we need to know what came before DevOps, 9 | and what the problems with those methods were. 10 | 11 | ### Developers 12 | 13 | They write code. This includes, but is not limited to: 14 | - Bug fixes 15 | - New features 16 | - Security updates 17 | 18 | ### Operations Team 19 | 20 | They manage the servers where the app is running. 21 | - They make sure that the servers are up and running 22 | - They patch the servers, update the OS, manage the firewall, etc. 23 | - They release new code given by the developers onto the servers 24 | 25 | ## Some potentially boring history 26 | 27 | ### Waterfall model - The old days 28 | 29 | ![Waterfall Model](img/waterfall-model.png) 30 | 31 | - There are different phases in this model, each dependent on the output of the previous 32 | - Once you are past a phase, you cannot go back, like a **waterfall** 33 | - For example, you **cannot change** the design in the implementation stage 34 | - This meant that there had to be precise planning and the requirements had to be perfect 35 | - A good analogy is constructing a house. You cannot change the design once you're halfway through building the roof, right?
36 | 37 | 38 | #### The problems with the waterfall model 39 | 40 | It is absolutely inflexible. 41 | 42 | ### The other models following waterfall 43 | 44 | Let's just say that there were a bunch of different models that tried to fix the issues with 45 | the waterfall model. 46 | For example: 47 | - Rapid Application Development 48 | - Dynamic Systems Development Method 49 | - Extreme Programming 50 | But let's not concern ourselves with these. 51 | 52 | ### Agile 53 | 54 | Read the agile manifesto [here](http://agilemanifesto.org/principles.html) 55 | 56 | The TL;DR was that the development process should be flexible such that "good" changes are 57 | always welcome. If the team came up with a really great idea, even towards the end of the "development", it was to be welcomed. 58 | 59 | 60 | - Individuals and interactions over processes and tools 61 | - Working software over comprehensive documentation 62 | - Customer collaboration over contract negotiation 63 | - Responding to change over following a plan 64 | 65 | That is, according to Scott Ambler: 66 | 67 | - Tools and processes are important, but it is more important to have competent people working together effectively. 68 | - Good documentation is useful in helping people to understand how the software is built and how to use it, but the main point of development is to create software, not documentation. 69 | - A contract is important but is no substitute for working closely with customers to discover what they need. 70 | - A project plan is important, but it must not be too rigid to accommodate changes in technology or the environment, stakeholders' priorities, and people's understanding of the problem and its solution. 71 | 72 | 73 | ## How software was released 74 | 75 | To keep things simple and in context, let's just take two scenarios:
`Pre-DevOps` and `DevOps`. 76 | 77 | > Don't quote me on it, I am just trying to explain what "DevOps" is fixing 78 | 79 | 80 | > Imagine that there is a `git` repository with the code and the `master` 81 | > branch is the mainline, and the developers work on their own branches 82 | > when making a change 83 | 84 | ### Pre-DevOps 85 | 86 | 1. The developer writes the app 87 | 2. They do some tests locally 88 | 3. The changes are **NOT** merged to the master until a certain point in time. They are usually large changes 89 | 4. The integration to the main branch is done at a designated time (let's say we do it once a week) 90 | 5. The developer has to make sure that their changes do not have any conflicts, and fix any that appear 91 | 6. After a lot of mental gymnastics, the code is merged 92 | 7. Now we wait until the "deploy day" 93 | 8. The operations team has to make sure that the new code will work on production. They may need to update dependencies, etc. 94 | 9. Finally the deploy day comes and the operations team deploys it **manually**, by running some hacked-together deploy script 95 | 10. If there are issues, the operations team has to roll back the change 96 | 11. The operations team has to make sure that the service is healthy, add more servers if needed, etc. 97 | 98 | 99 | #### The problems with this 100 | 101 | 1. Integrating large changes once in a while is more error prone. Imagine a scenario where 102 | developer `D1` starts working on a feature that, when fully done, adds 10,000 new lines of code. 103 | Meanwhile, developer `D2` is working on another bug fix that changes a lot of lines. 104 | When both of these are integrated into the master, it is going to be such a pain in the neck 105 | to make sure that the changes do not break things 106 | 2. The developer's environment is different from production, which causes: 107 | 108 | ![Works on my machine](img/works-on-my-machine.jpg) 109 | 110 | 3. Lots of errors make their way into production 111 | 4.
The deployments are error prone 112 | 5. Everything is error prone because humans are unreliable 113 | 114 | 115 | ### With DevOps 116 | 117 | Now, I am gonna talk about an ideal DevOps scenario. One can dream, right? 118 | 119 | 1. The developer writes code and there are tests written for all the major functionalities. 120 | Ideally, there are tests for all the functionalities 121 | 2. The developer makes **smaller, incremental changes** and pushes to their branch 122 | 3. A "tool" runs some automated tests to make sure that the developer did not break anything 123 | 4. If the "tool" says all good, the code is merged to master. 124 | 5. This means multiple changes are merged to the master regularly (Continuous Integration) 125 | 6. The "tool" builds the software and is ready to deploy any time (Continuous Delivery) 126 | 7. In some cases, the "tool" does not wait for any particular time, instead it deploys to production regularly (Continuous Deployment) 127 | 8. While in production, the servers can scale themselves based on the traffic pattern, which means less headache for the "Ops" 128 | 129 | #### What did we gain from this? 130 | 131 | 1. Considerably easier integration (due to frequent integrations) 132 | 2. We catch many more bugs before they get to production (due to automated tests) 133 | 3. If something breaks, it is easy to identify and fix because the amount of change is smaller 134 | 4. No more "deploy day headaches" for the ops team. 135 | 5. New features get released to the customers much faster and the team can get feedback on the 136 | changes and fix/update as needed with minimal delay 137 | 138 | ## Continuous Integration 139 | 140 | - Developers merge their changes back to master as soon as possible. This often means several 141 | merges to master per day. 142 | - Makes use of automated tests.
That is, the change is merged to the main branch only if all 142 | the tests succeed. 143 | 144 | 145 | ## Continuous Delivery 146 | 147 | We build our software as soon as it is merged, so we are ready to deliver the software 148 | at any time. 149 | 150 | ## Continuous Deployment 151 | 152 | We actually go ahead and deploy it as soon as the software is built by the tool. 153 | 154 | 155 | ## So, with DevOps 156 | 157 | 158 | ### Version control 159 | 160 | - Everything should be in `git` 161 | 162 | ### Automation 163 | 164 | - Automated code testing (Selenium, Jenkins) 165 | - Automated integration (Jenkins) 166 | - Automated deployments (Jenkins) 167 | - Automate launching of infrastructure (Terraform) 168 | - Automate the servers themselves (Ansible, Puppet, Chef) 169 | - Whatever is possible to automate, should be automated (ideally) 170 | 171 | ### Make changes in smaller increments 172 | 173 | - All new changes are made in small increments that are tested and integrated before continuing 174 | 175 | ### Dev environment similar to production environment 176 | 177 | No more `works on my machine` excuse 178 | 179 | ### Measure performance of the applications 180 | 181 | - Metrics like response time help us know if a change made our app slower, so we can look 182 | into it and fix it (Prometheus, Grafana) 183 | - Monitor logs (ELK) 184 | - Third-party application performance monitoring tools (NewRelic, Datadog) 185 | -------------------------------------------------------------------------------- /episodes/26-jenkins-install-first-pipeline.md: -------------------------------------------------------------------------------- 1 | # Installing and setting up Jenkins - Simple Pipeline Intro 2 | 3 | For this, we are not gonna use Docker, we are gonna go the old way and install it on a 4 | VM 5 | 6 | 7 | # Installing Jenkins 8 | 9 | ## Step 1 - Install JDK 10 | 11 | ```sh 12 | sudo apt update 13 | sudo apt install default-jdk 14 | ``` 15 | 16 | ## Step 2 - Add the
GPG keys 17 | 18 | ``` 19 | wget -q -O - https://pkg.jenkins.io/debian/jenkins.io.key | sudo apt-key add - 20 | sudo sh -c 'echo deb http://pkg.jenkins.io/debian-stable binary/ > /etc/apt/sources.list.d/jenkins.list' 21 | ``` 22 | 23 | ## Step 3 - Install the package 24 | 25 | ``` 26 | sudo apt update 27 | sudo apt install jenkins 28 | ``` 29 | 30 | ## Step 4 - Start and enable 31 | 32 | ``` 33 | sudo systemctl enable jenkins 34 | sudo systemctl start jenkins 35 | ``` 36 | 37 | ## Step 5 - Setting up 38 | 39 | Visit `server-ip:8080` 40 | 41 | Jenkins generates a random password by default. Get the password: 42 | ``` 43 | sudo cat /var/lib/jenkins/secrets/initialAdminPassword 44 | ``` 45 | Paste this password into the field 46 | 47 | ## Step 6 - Installing plugins 48 | 49 | At this point, you would want to install the plugins you need. To get started, 50 | I suggest just installing the suggested plugins. 51 | 52 | ## Step 7 - Create the admin user 53 | 54 | You know how to fill a form. Create a user. From there it is pretty straightforward. 55 | 56 | ## Step 8 - Configuring Nginx reverse proxy 57 | 58 | ### Install Nginx 59 | ``` 60 | sudo apt update 61 | sudo apt install nginx 62 | ``` 63 | 64 | ### Create `/etc/nginx/sites-enabled/jenkins.devops.esc.sh` 65 | 66 | Change the domain name obviously 67 | 68 | ``` 69 | server { 70 | listen 80; 71 | server_name jenkins.devops.esc.sh; 72 | 73 | location / { 74 | include /etc/nginx/proxy_params; 75 | proxy_pass http://localhost:8080; 76 | proxy_read_timeout 60s; 77 | # Fix the "It appears that your reverse proxy set up is broken" error.
78 | # Make sure the domain name is correct 79 | proxy_redirect http://localhost:8080 https://jenkins.devops.esc.sh; 80 | } 81 | } 82 | ``` 83 | ### Verify the config and restart nginx 84 | 85 | ``` 86 | nginx -t 87 | sudo systemctl restart nginx 88 | ``` 89 | 90 | Fix any syntax errors. 91 | 92 | ## Step 9 - Change Jenkins bind address 93 | 94 | By default Jenkins listens on all network interfaces. But we should restrict it to localhost, because 95 | we are using Nginx as a reverse proxy and there is no reason for Jenkins to be exposed 96 | on other network interfaces. 97 | 98 | We can change this by editing 99 | `/etc/default/jenkins` 100 | 101 | Locate the line starting with `JENKINS_ARGS` (It's usually the last line) and append 102 | 103 | ``` 104 | --httpListenAddress=127.0.0.1 105 | ``` 106 | So that the line resembles 107 | ``` 108 | JENKINS_ARGS="--webroot=/var/cache/$NAME/war --httpPort=$HTTP_PORT --httpListenAddress=127.0.0.1" 109 | ``` 110 | 111 | Restart Jenkins 112 | ``` 113 | sudo systemctl restart jenkins 114 | ``` 115 | Make sure it is running fine 116 | ``` 117 | sudo systemctl status jenkins 118 | ``` 119 | 120 | Jenkins should load now, but on http only. 121 | 122 | ## Step 10 - Configuring SSL 123 | 124 | There is a dedicated document for fetching and configuring SSL with Nginx with all the necessary 125 | steps. Go [HERE](24-securing-nginx-free-ssl-letsencrypt.md) 126 | 127 | Come back here after that.
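Before wiring the certificate into the Nginx config, it can help to confirm not just that the files exist, but that the certificate covers the right name and has not expired. A sketch of an optional check (it assumes the `openssl` CLI is installed; the paths are the ones certbot prints on successful issuance):

```shell
# Show which name the certificate covers and its validity window.
# cert.pem is the leaf certificate; fullchain.pem is what Nginx will serve.
sudo openssl x509 \
    -in /etc/letsencrypt/live/jenkins.devops.esc.sh/cert.pem \
    -noout -subject -dates
# The output should show subject=CN = jenkins.devops.esc.sh, plus
# notBefore/notAfter lines; notAfter is ~90 days out for a fresh certificate.
```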
128 | 129 | Make sure you have the certificate and key in place: 130 | ``` 131 | root@jenkins-server:~# ls -l /etc/letsencrypt/live/jenkins.devops.esc.sh/ 132 | total 4 133 | lrwxrwxrwx 1 root root 45 Sep 27 07:52 cert.pem -> ../../archive/jenkins.devops.esc.sh/cert1.pem 134 | lrwxrwxrwx 1 root root 46 Sep 27 07:52 chain.pem -> ../../archive/jenkins.devops.esc.sh/chain1.pem 135 | lrwxrwxrwx 1 root root 50 Sep 27 07:52 fullchain.pem -> ../../archive/jenkins.devops.esc.sh/fullchain1.pem 136 | lrwxrwxrwx 1 root root 48 Sep 27 07:52 privkey.pem -> ../../archive/jenkins.devops.esc.sh/privkey1.pem 137 | -rw-r--r-- 1 root root 692 Sep 27 07:52 README 138 | root@jenkins-server:~# 139 | ``` 140 | 141 | 142 | Update the nginx config to look like this: 143 | ``` 144 | 145 | server { 146 | listen 80; 147 | server_name jenkins.devops.esc.sh; 148 | 149 | location / { 150 | return 301 https://$host$request_uri; 151 | } 152 | } 153 | 154 | server { 155 | listen 443 ssl; 156 | 157 | server_name jenkins.devops.esc.sh; 158 | 159 | ssl_certificate /etc/letsencrypt/live/jenkins.devops.esc.sh/fullchain.pem; 160 | ssl_certificate_key /etc/letsencrypt/live/jenkins.devops.esc.sh/privkey.pem; 161 | include /etc/letsencrypt/options-ssl-nginx.conf; 162 | ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; 163 | 164 | 165 | 166 | location / { 167 | include /etc/nginx/proxy_params; 168 | proxy_pass http://localhost:8080; 169 | proxy_read_timeout 60s; 170 | # Fix the "It appears that your reverse proxy set up is broken" error.
171 | # Make sure the domain name is correct 172 | proxy_redirect http://localhost:8080 https://jenkins.devops.esc.sh; 173 | } 174 | } 175 | 176 | ``` 177 | 178 | Make sure the nginx config is alright: `nginx -t` 179 | 180 | Reload Nginx 181 | 182 | ``` 183 | sudo systemctl reload nginx 184 | ``` 185 | 186 | And that is pretty much it: Jenkins is up and ready with a freshly configured sweet 187 | sweet green-padlocked SSL certificate. 188 | 189 | 190 | ## A Simple multi-stage pipeline 191 | 192 | You can create a pipeline via New Item -> Pipeline. Then, in the pipeline definition: 193 | 194 | 195 | ``` 196 | pipeline { 197 | agent any 198 | 199 | stages { 200 | stage('build') { 201 | steps { 202 | echo 'building the software' 203 | } 204 | } 205 | stage('test') { 206 | steps { 207 | echo 'testing the software' 208 | } 209 | } 210 | stage('deploy') { 211 | steps { 212 | echo 'deploying the software' 213 | } 214 | } 215 | } 216 | } 217 | ``` 218 | -------------------------------------------------------------------------------- /episodes/27-create-real-life-end-to-end-jenkins-pipeline.md: -------------------------------------------------------------------------------- 1 | # Creating an end to end Jenkins pipeline for a NodeJS application 2 | 3 | Video [HERE](https://www.youtube.com/watch?v=KpAKgrBA8mY&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=28) 4 | 5 | So far we have a Jenkins installation up and running. We don't have much else going on. 6 | 7 | ## Our Goal 8 | 9 | We want to automatically test and deploy our [NodeJS application](https://github.com/MansoorMajeed/devops-nodejs-demo-app). The application lives in a git 10 | repository. 11 | 12 | With this video, we will be creating a full pipeline that will: 13 | 14 | 1. Trigger on a commit to the `master` branch 15 | 2. Run some tests 16 | 3. Deploy it 17 | 18 | 19 | ## Our NodeJS app setup 20 | 21 | - The app is in a git repository (We will use GitHub) 22 | - The code runs in two VMs as we did in the past.
They have Nodejs installed and configured 23 | - Jenkins runs in another VM 24 | 25 | Before we automate our deploys, first let's make sure that it works fine when done manually. 26 | 27 | ### 1. Review the current setup 28 | 29 | 30 | The NodeJS demo app is here https://github.com/MansoorMajeed/devops-nodejs-demo-app 31 | 32 | This is our original deploy script: 33 | 34 | ```bash 35 | #!/bin/bash 36 | 37 | 38 | npm install 39 | 40 | # For the love of all that is good, don't use this in production 41 | # This is only for a demonstration of how things work behind the scenes 42 | 43 | ssh vagrant@192.168.33.11 'sudo mkdir -p /app; sudo chown -R vagrant. /app' 44 | rsync -avz ./ vagrant@192.168.33.11:/app/ 45 | ssh vagrant@192.168.33.11 "sudo pkill node; cd /app; node index.js > output.log 2>&1 &" 46 | 47 | 48 | ssh vagrant@192.168.33.12 'sudo mkdir -p /app; sudo chown -R vagrant. /app' 49 | rsync -avz ./ vagrant@192.168.33.12:/app/ 50 | ssh vagrant@192.168.33.12 "sudo pkill node; cd /app; node index.js > output.log 2>&1 &" 51 | ``` 52 | 53 | ### 2. Make the deploy process better 54 | 55 | Here, we were using a very basic and stupid way to manage our node processes. We need to change that. 56 | Instead of killing and starting the node process manually, we can ask systemd to do that for us.
57 | Then we can start/stop/restart our process using `systemctl restart ourappname` 58 | 59 | So, let's create systemd service file on both the NodeJS Vms 60 | 61 | Create `/lib/systemd/system/nodeapp.service` 62 | 63 | ``` 64 | [Unit] 65 | Description=DevOps From Scratch Demo NodeJS App 66 | Documentation=https://esc.sh 67 | After=network.target 68 | 69 | [Service] 70 | Type=simple 71 | User=vagrant 72 | WorkingDirectory=/app 73 | ExecStart=/usr/bin/node /app/index.js 74 | Restart=on-failure 75 | 76 | [Install] 77 | WantedBy=multi-user.target 78 | ``` 79 | 80 | And once that is done, let's make these changes into effect 81 | 82 | ``` 83 | systemctl daemon-reload 84 | ``` 85 | 86 | Now the deploy script becomes 87 | 88 | ```bash 89 | #!/bin/bash 90 | 91 | 92 | npm install 93 | 94 | # For the love of all that is good, don't use this in production 95 | # This is only for a demonstration of how things work behind the scene 96 | 97 | ssh vagrant@192.168.33.11 'sudo mkdir -p /app; sudo chown -R vagrant. /app' 98 | rsync -avz ./ vagrant@192.168.33.11:/app/ 99 | ssh vagrant@192.168.33.11 "sudo systemctl restart nodeapp" 100 | 101 | 102 | ssh vagrant@192.168.33.12 'sudo mkdir -p /app; sudo chown -R vagrant. /app' 103 | rsync -avz ./ vagrant@192.168.33.12:/app/ 104 | ssh vagrant@192.168.33.12 "sudo systemctl restart nodeapp" 105 | ``` 106 | 107 | Much better 108 | 109 | 110 | ## Make our app get automatically deployed on any change 111 | 112 | 113 | ### 1. 
Create ssh key for our Jenkins server 114 | 115 | Because Jenkins needs to access git and the NodeJS VMs through ssh, we will need an ssh key for Jenkins 116 | 117 | Run this anywhere we can copy the key from: 118 | 119 | I am gonna run this from my Mac laptop 120 | 121 | ``` 122 | ssh-keygen -t rsa -b 4096 -C "jenkins@local" -f ./jenkins_id_rsa 123 | ``` 124 | 125 | This will create the keypair in the same directory 126 | ``` 127 | ls -l jenkins_id_rsa* 128 | -rw------- 1 mansoor 3381 Oct 24 09:44 jenkins_id_rsa 129 | -rw-r--r-- 1 mansoor 739 Oct 24 09:44 jenkins_id_rsa.pub 130 | ``` 131 | 132 | ### 1. Give Jenkins access to the Git repository 133 | 134 | We need this because we want to be able to poll for changes and pull code from there. This is fine if the repository 135 | is a public one, but we are going to go ahead and add the credentials because in a real scenario, most 136 | of the time it will be a private repo 137 | 138 | 139 | Also, we need to make sure that `git` is installed on the Jenkins server 140 | ``` 141 | sudo apt install git 142 | ``` 143 | 144 | > Note: new repositories in Github now use "main" instead of "master", but for now let's stick to "master" 145 | 146 | Create new repository -> Settings -> Deploy Key -> Add our new public key there 147 | 148 | ### 2. Install NodeJS on the jenkins server 149 | 150 | Because we are using the same Jenkins master server to build and deploy our NodeJS app, and the build has 151 | to run `npm install`, we need to install NodeJS on the Jenkins server 152 | 153 | I will make an Ansible playbook to install it automatically and share it; for now, let's install it 154 | manually 155 | 156 | On Jenkins server 157 | ``` 158 | curl -sL https://deb.nodesource.com/setup_12.x -o nodesource_setup.sh 159 | sudo bash nodesource_setup.sh 160 | sudo apt-get install -y nodejs gcc g++ make 161 | ``` 162 | 163 | ### 3.
Create the Jenkinsfile in the app repository 164 | 165 | ``` 166 | pipeline { 167 | agent any 168 | 169 | stages { 170 | stage('build') { 171 | steps { 172 | echo 'building the software' 173 | sh 'npm install' 174 | } 175 | } 176 | stage('test') { 177 | steps { 178 | echo 'testing the software' 179 | sh 'npm test' 180 | } 181 | } 182 | 183 | stage('deploy') { 184 | steps { 185 | withCredentials([sshUserPrivateKey(credentialsId: "jenkins-ssh", keyFileVariable: 'sshkey')]){ 186 | echo 'deploying the software' 187 | sh '''#!/bin/bash 188 | echo "Creating .ssh" 189 | mkdir -p /var/lib/jenkins/.ssh 190 | ssh-keyscan 192.168.33.11 >> /var/lib/jenkins/.ssh/known_hosts 191 | ssh-keyscan 192.168.33.12 >> /var/lib/jenkins/.ssh/known_hosts 192 | 193 | rsync -avz --exclude '.git' --delete -e "ssh -i $sshkey" ./ vagrant@192.168.33.11:/app/ 194 | rsync -avz --exclude '.git' --delete -e "ssh -i $sshkey" ./ vagrant@192.168.33.12:/app/ 195 | 196 | ssh -i $sshkey vagrant@192.168.33.11 "sudo systemctl restart nodeapp" 197 | ssh -i $sshkey vagrant@192.168.33.12 "sudo systemctl restart nodeapp" 198 | 199 | ''' 200 | } 201 | } 202 | } 203 | } 204 | } 205 | 206 | ``` 207 | 208 | ### 4. Give Jenkins ssh access to the NodeJS VMs 209 | 210 | Because we will be using SSH to do the deploys, the Jenkins VM should be able to access the NodeJS VMs 211 | 212 | ### 5. Create the Pipeline off of the Jenkinsfile 213 | 214 | -------------------------------------------------------------------------------- /episodes/28-setting-up-wordpress-nginx-php-fpm.md: -------------------------------------------------------------------------------- 1 | # Setting up a WordPress site using Nginx and PHP FPM 2 | 3 | ## Database 4 | 5 | We will be using Mariadb instead of Mysql, as far as the end user is concerned, they behave similarly 6 | 7 | For our use case, I will be using a different virtual machine for MySQL. I will be launching 8 | the VM using Vagrant. 
This is the Vagrantfile 9 | 10 | > Note : It does not really matter if you are using a local virtual machine or a cloud server 11 | > like DigitalOcean or AWS. The instructions are the same 12 | 13 | More about MySQL : 14 | 15 | 1. Install Mariadb 16 | 17 | 18 | ```bash 19 | sudo apt update 20 | sudo apt install mariadb-server 21 | ``` 22 | 23 | 2. Secure the installation 24 | 25 | ```bash 26 | sudo mysql_secure_installation 27 | ``` 28 | 29 | This will ask you a few questions. Long story short, press `Y` for all of them and follow 30 | the instructions 31 | 32 | 3. Create the database and user for wordpress 33 | 34 | ```bash 35 | sudo mysql 36 | ``` 37 | 38 | And in the Mariadb prompt 39 | ``` 40 | MariaDB [(none)]> CREATE DATABASE wp_site; 41 | Query OK, 1 row affected (0.000 sec) 42 | 43 | MariaDB [(none)]> CREATE USER 'wp_user'@'%' IDENTIFIED BY 'j7wJZmLWyebzCLZFp9qx'; 44 | Query OK, 0 rows affected (0.000 sec) 45 | 46 | MariaDB [(none)]> GRANT ALL ON wp_site.* TO 'wp_user'@'%'; 47 | Query OK, 0 rows affected (0.000 sec) 48 | 49 | MariaDB [(none)]> FLUSH PRIVILEGES; 50 | Query OK, 0 rows affected (0.000 sec) 51 | ``` 52 | 53 | 4. Make MariaDB listen on all network interfaces (If you are using a dedicated MariaDB server) 54 | 55 | By default MariaDB listens only on the loopback interface. But, in order for us to reach MariaDB 56 | from other machines, we need to make it listen on all network interfaces. You can see this using `ss` 57 | 58 | ``` 59 | vagrant@mysql-server:~$ ss -tl 60 | State Recv-Q Send-Q Local Address:Port Peer Address:Port 61 | LISTEN 0 80 127.0.0.1:mysql 0.0.0.0:* 62 | LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:* 63 | LISTEN 0 128 [::]:ssh [::]:* 64 | vagrant@mysql-server:~$ 65 | ``` 66 | 67 | > Note: Be careful when opening up the database server to the outside. This is usually fine 68 | > if you are using local virtual machines, but on a cloud server, make sure you firewall your 69 | > database server properly.
Meaning, allow only those who needs to connect to the database 70 | 71 | ``` 72 | sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf 73 | ``` 74 | 75 | Find the line that says `bind-address` and change it from 127.0.0.1 to 0.0.0.0 76 | 77 | ``` 78 | bind-address = 0.0.0.0 79 | ``` 80 | And restart MariaDB 81 | ``` 82 | sudo systemctl restart mysql 83 | ``` 84 | And now we can see that mysql is listening on all interfaces 85 | ``` 86 | vagrant@mysql-server:~$ ss -tl 87 | State Recv-Q Send-Q Local Address:Port Peer Address:Port 88 | LISTEN 0 80 0.0.0.0:mysql 0.0.0.0:* 89 | LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:* 90 | LISTEN 0 128 [::]:ssh [::]:* 91 | ``` 92 | 93 | ## PHP FPM 94 | 95 | ``` 96 | sudo apt install gnupg2 97 | 98 | wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add - 99 | echo "deb https://packages.sury.org/php/ buster main" |sudo tee /etc/apt/sources.list.d/php.list 100 | 101 | sudo apt update 102 | 103 | sudo apt install php7.4-fpm php7.4-common php7.4-mysql \ 104 | php7.4-xml php7.4-xmlrpc php7.4-curl php7.4-gd \ 105 | php7.4-imagick php7.4-cli php7.4-dev \ 106 | php7.4-mbstring php7.4-opcache \ 107 | php7.4-soap php7.4-zip -y 108 | 109 | ``` 110 | 111 | > Note: You may not need all of these php packages, but these are the most commonly used 112 | > Feel free to skip the ones you know you don't need 113 | 114 | 115 | Make sure php7.4-fpm is running 116 | 117 | ``` 118 | sudo systemctl status php7.4-fpm 119 | ``` 120 | 121 | 122 | ## Nginx 123 | 124 | If you are new to Nginx, go ahead and watch this : [Configuring Nginx, VirtualHosting, /etc/hosts, Curl](https://www.youtube.com/watch?v=i6NHxKyGI7s&list=PLxYCgfC5WpnsAg5LddfjlidAHJNqRUN14&index=13) 125 | 126 | Install Nginx 127 | ``` 128 | sudo apt install nginx -y 129 | ``` 130 | 131 | Create `/etc/nginx/sites-enabled/wordpress.devops.esc.sh` 132 | 133 | 134 | ``` 135 | server { 136 | listen 80; 137 | root /var/www/wordpress.devops.esc.sh; 138 | index index.php index.html index.htm 
index.nginx-debian.html; 139 | server_name wordpress.devops.esc.sh; 140 | 141 | location / { 142 | try_files $uri $uri/ /index.php$is_args$args; 143 | } 144 | 145 | location ~ \.php$ { 146 | include snippets/fastcgi-php.conf; 147 | fastcgi_pass unix:/var/run/php/php7.4-fpm.sock; 148 | } 149 | 150 | location ~ /\.ht { 151 | deny all; 152 | } 153 | } 154 | ``` 155 | 156 | 157 | Make sure nginx works by using 158 | ``` 159 | sudo nginx -t 160 | ``` 161 | 162 | If there is a syntax error, fix it before moving on 163 | 164 | Reload Nginx 165 | ``` 166 | sudo systemctl reload nginx 167 | ``` 168 | ## Updating hosts file 169 | 170 | I will be adding an entry in my hosts file to point `wordpress.devops.esc.sh` to the IP address of the VM. 171 | Make sure you do that. 172 | 173 | Add a new line with (Change where needed) 174 | ``` 175 | 192.168.33.21 wordpress.devops.esc.sh 176 | ``` 177 | 178 | ### Linux/Mac 179 | 180 | In Linux/Mac, it's as simple as editing `/etc/hosts` as root 181 | 182 | ### Windows 183 | 184 | 1. Open Notepad as administrator 185 | 2. Open > c:\Windows\System32\Drivers\etc\hosts 186 | 3. Add the entry as above 187 | 188 | 189 | ## Testing if Nginx/PHP-FPM works 190 | 191 | Before we install WordPress, let's make sure that our nginx/php installation works as expected 192 | 193 | ``` 194 | mkdir /var/www/wordpress.devops.esc.sh 195 | cd /var/www/wordpress.devops.esc.sh 196 | echo '<?php phpinfo(); ?>' > info.php 197 | ``` 198 | 199 | Now open `wordpress.devops.esc.sh/info.php` and it should show the php info page. That means we 200 | are good. 201 | 202 | Make sure to delete the info.php file 203 | ``` 204 | rm info.php 205 | ``` 206 | 207 | ## Setting up WordPress 208 | 209 | 210 | ### Download and extract 211 | 212 | ``` 213 | cd /tmp 214 | wget https://wordpress.org/latest.tar.gz 215 | tar xf latest.tar.gz 216 | ``` 217 | 218 | This will extract the wordpress files into a directory `wordpress`. 219 | 220 | Let's move it to our document root.
221 | 222 | ``` 223 | mv wordpress/* /var/www/wordpress.devops.esc.sh/ 224 | ``` 225 | 226 | This means our WordPress installation is at the root of our website. So, `wordpress.devops.esc.sh` will 227 | be loading our wordpress site. If you want it in a subdirectory, move it there instead 228 | 229 | ### Configure 230 | 231 | ``` 232 | cd /var/www/wordpress.devops.esc.sh 233 | cp wp-config-sample.php wp-config.php 234 | ``` 235 | 236 | Grab a fresh set of salts from WordPress 237 | 238 | ``` 239 | curl -s https://api.wordpress.org/secret-key/1.1/salt/ 240 | ``` 241 | This will show something like the below, with a long random value in each define (the output is unique on every request) 242 | 243 | ``` 244 | # curl -s https://api.wordpress.org/secret-key/1.1/salt/ 245 | define('AUTH_KEY', '...'); 246 | define('SECURE_AUTH_KEY', '...'); 247 | define('LOGGED_IN_KEY', '...'); 248 | define('NONCE_KEY', '...'); 249 | define('AUTH_SALT', '...'); 250 | define('SECURE_AUTH_SALT', '...'); 251 | define('LOGGED_IN_SALT', '...'); 252 | define('NONCE_SALT', '...'); 253 | ``` 254 | 255 | Open `wp-config.php` using your favourite editor. Use `nano` if you don't have one 256 | 257 | ``` 258 | cd /var/www/wordpress.devops.esc.sh 259 | nano wp-config.php 260 | ``` 261 | 262 | Find the `Authentication Unique Keys and Salts.` section where there are dummy values for the above.
263 | Replace the dummy ones with the output of the `curl -s https://api.wordpress.org/secret-key/1.1/salt/` 264 | 265 | 266 | Now let's **update the database, user, password and the host** 267 | 268 | In the same config file, find these and update the values accordingly 269 | 270 | ``` 271 | define( 'DB_NAME', 'wp_site' ); 272 | 273 | /** MySQL database username */ 274 | define( 'DB_USER', 'wp_user' ); 275 | 276 | /** MySQL database password */ 277 | define( 'DB_PASSWORD', 'j7wJZmLWyebzCLZFp9qx' ); 278 | 279 | /** MySQL hostname */ 280 | define( 'DB_HOST', '192.168.33.20' ); 281 | ``` 282 | 283 | > Note: If you are using the same machine for Nginx, PHP and MySQL, you don't have to change `DB_HOST` 284 | 285 | 286 | Save and exit 287 | 288 | (In Nano, press `Ctrl+X` and then `Y` and enter to save the file) 289 | 290 | ### Finishing up 291 | 292 | Now open the site in a browser 293 | 294 | Give it a title, username and password and press `Install` 295 | This should finish the installation. You can log in to the admin dashboard by visiting 296 | `wordpress.devops.esc.sh/wp-admin` 297 | 298 | 299 | And that is it 300 | -------------------------------------------------------------------------------- /episodes/29-recap.md: -------------------------------------------------------------------------------- 1 | # Recap 2 | 3 | 4 | ## Basics of Linux 5 | 6 | The idea was that, before we even get started with anything "DevOps", we need to know our 7 | systems well, which is Linux. So it is important to be familiar with the following 8 | 9 | 1. We started with how the internet works 10 | 2. We learned about setting up a Linux VM, accessing it using SSH, basic commands 11 | 3. We dived a bit deeper and learned about file descriptors, stdout/stderr etc 12 | 4. We continued with the file system, env variables, managing users and permissions 13 | 5. Managing packages, processes, services etc 14 | 6. We learned to edit text files using vim 15 | 7.
We learned about debugging using ps, netstat, netcat, curl etc 16 | 17 | 18 | ## More about hosting and managing websites 19 | 20 | When we work as a DevOps engineer, we mostly work with web applications/services etc. 21 | So it is important for us to understand the concepts, and be familiar with web servers 22 | and the like 23 | 24 | 1. We learned about Nginx, virtualhosts 25 | 2. We learned how DNS and domains work, and how to manage domains 26 | 3. We set up a simple static website 27 | 4. We proceeded to set up a dynamic website using NodeJS and Nginx reverse proxy 28 | 5. We learned about MySQL quite extensively 29 | 30 | 31 | ## Intro to DevOps 32 | 33 | 34 | Finally it was time to introduce DevOps tools to make our life easier and make things 35 | scale well. So we added some tools to the mix where the need arose. 36 | We will continue to add more tools as we need them. 37 | 38 | 1. We talked about infrastructure as code, configuration management and started with Ansible 39 | 2. We learned to manage virtual machines using Vagrant 40 | 3. We learned about version control, git and github 41 | 4. We applied our DevOps lessons and deployed a NodeJS+Nginx app using Ansible 42 | 5. We talked about how TLS works and how to set up TLS certificates 43 | 6. We spent some time understanding what DevOps is: Agile, Scrum, CI, CD etc 44 | 7. We spent some time with Jenkins 45 | 8. We set up a somewhat more complicated web application using WordPress 46 | 47 | 48 | What is important to understand is, I will not be talking more about Ansible, Jenkins etc 49 | unless we are using them for anything specific. The reasoning is that these tools run 50 | deep and we could spend hours talking about them, but if you get an overall idea about what is 51 | going on, you can easily read their documentation and figure out things on your own.
52 | 53 | ## So, the story so far 54 | 55 | If you have watched all the videos so far, I hope you are knowledgeable enough to 56 | run your own little web applications, manage them comfortably, and understand the basics of 57 | Linux, the web, some devops concepts etc. 58 | 59 | You don't need to know all the tools out there; you just need to know where one fits, 60 | and then you should be able to read their respective documentation to figure out things on 61 | your own. 62 | 63 | I am here to only give you a framework and fill in the gaps so that you can help yourself much better 64 | 65 | 66 | ## So what is coming? 67 | 68 | In the immediate future, we are gonna start with monitoring. Now we have a bunch of systems 69 | to work with, and the next thing is how we use "DevOps" wherever possible 70 | -------------------------------------------------------------------------------- /episodes/30-monitoring-1-infrastructure-monitoring-intro.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #1 - Introduction to Infrastructure monitoring 2 | 3 | We have multiple "servers" in our infrastructure. And we need to know if all of them 4 | are doing well. We cannot check each of them manually to verify their health. This 5 | is why we use monitoring tools. 6 | 7 | ## What to monitor 8 | 9 | ### 1. System resources 10 | 11 | These are the basic resources only. As you advance and learn more, you will come across more and 12 | more of these resources. 13 | 14 | #### CPU 15 | 16 | Each machine has a fixed number of CPU cores and can be overloaded if we give it 17 | more work than it can handle. 18 | 19 | `Load Average` is one of the key metrics we should know about 20 | 21 | **Commands** 22 | - `uptime` : shows uptime, load average 23 | - `top` : Top processes 24 | - `htop` : More user friendly tool to see running processes 25 | 26 | 27 | #### Memory (RAM) 28 | 29 | You know what this does.
We don't want our machines to run out of memory and get our 30 | important processes [OOM killed](https://www.kernel.org/doc/gorman/html/understand/understand016.html) 31 | 32 | **Commands** 33 | - `free` : Memory usage 34 | - `free -h` : Memory usage in human readable form 35 | 36 | #### Disk Usage 37 | 38 | You know this one too. We need disk space to store files. We need to be alerted before 39 | our system runs out of disk 40 | 41 | **Commands** 42 | - `df` : Shows disk usage 43 | - `df -h` : Disk usage in human readable format 44 | 45 | #### Disk IO Operations 46 | 47 | That is, how busy our disk is. For example if there are a lot of disk intensive operations 48 | going on (copying files, writing to the disk, reading from the disk etc) then our disk IO could 49 | skyrocket and drag the entire system down. 50 | 51 | **Commands** 52 | - `iostat` : Shows IO information (Available in package sysstat) 53 | - `iotop` : Top programs using IO 54 | 55 | #### Network 56 | 57 | How much data is being sent or received. A very high data rate could saturate the network links 58 | and slow down the application's performance 59 | 60 | ### 2. Application health 61 | 62 | Here, we want to make sure that our specific service running on the server is alive and well 63 | 64 | #### Process running or not 65 | 66 | This is one of the basic things we need to make sure. We want to be alerted if our service is down 67 | (For example if our website is using Nginx, we want to know if it goes down) 68 | 69 | **Commands** 70 | - `ps` : Check for running processes 71 | - `ps aux|grep nginx` : See if nginx is running 72 | 73 | #### Service responding correctly 74 | 75 | Sometimes even if the process is running, it may be in a dead state where it does not respond to requests. 
In case of a webserver for example, the correct way to know if it is working well is to actually make 77 | an http request and see if it responds 78 | 79 | **Commands** 80 | - `curl` 81 | - `curl -s localhost:8080` : Make an http request to localhost on port 8080 and see the response 82 | 83 | 84 | ### 3. Application Performance 85 | 86 | Once we make sure our application is doing its basic functions, the next important thing to monitor 87 | is performance. We need to know if and when the application is performing poorly. 88 | 89 | There are a ton of metrics to be monitored here depending on the application itself. For the sake of 90 | simplicity we are only talking about two of them here 91 | 92 | #### App response time 93 | 94 | How fast does the application respond to requests? Each application will have a "usual" response time. 95 | If it goes above this usual response time, it means something is wrong. It could be bad code, the 96 | servers behaving badly, a network issue etc 97 | 98 | #### Application error rate 99 | 100 | Sometimes even when the application response time is good, it could be throwing a lot of errors (4xx or 5xx) 101 | to the clients. 102 | 103 | 4xx means client side errors (Example : 404, 403, 401) 104 | 5xx means something wrong on the server side (Example : 500, 502, 503) 105 | 106 | Read about http status codes : [HERE](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) 107 | -------------------------------------------------------------------------------- /episodes/31-monitoring-2-installing-sensu.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #2 - Getting started with VM monitoring using Sensu 2 | 3 | In the previous video we discussed what monitoring is and what needs to be 4 | monitored. In this one we will be setting up monitoring using Sensu. 5 | 6 | > Bear in mind that all these are talking about monitoring virtual machines.
We have not reached 7 | containers yet and hence we are leaving them out of the picture 8 | 9 | ## What is sensu? 10 | 11 | Sensu is a tool that helps us monitor our servers. Let's keep it simple. 12 | 13 | We will obviously see what it can do real soon 14 | 15 | 16 | This doc is mostly sourced from Sensu official documentation [HERE](https://docs.sensu.io/sensu-go/latest/operations/deploy-sensu/install-sensu/) 17 | 18 | 19 | ## Sensu Architecture 20 | 21 | ### Sensu Agent 22 | 23 | This is a lightweight piece of software that runs on each server that we want to monitor. 24 | Each agent sends information about the status of the server it is running on back to the 25 | sensu backend server 26 | 27 | ### Sensu backend 28 | 29 | This is where the magic happens. The sensu backend can send checks to each client, 30 | look at the status etc. 31 | 32 | 33 | ## Setting up Sensu Backend 34 | 35 | ### Install sensu backend 36 | 37 | We will be using our Debian 10 virtual machine as usual 38 | 39 | ``` 40 | # Add the Sensu repository 41 | curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash 42 | 43 | # Install the sensu-go-backend package 44 | sudo apt-get install sensu-go-backend 45 | ``` 46 | 47 | ### Configure and start sensu backend 48 | 49 | ``` 50 | sudo curl -L https://docs.sensu.io/sensu-go/latest/files/backend.yml -o /etc/sensu/backend.yml 51 | 52 | sudo systemctl start sensu-backend 53 | sudo systemctl status sensu-backend 54 | ``` 55 | 56 | 57 | To begin with, this is the only config option that is enabled 58 | 59 | ``` 60 | state-dir: "/var/lib/sensu/sensu-backend" 61 | ``` 62 | 63 | ### Initialize sensu backend.
64 | 65 | We need to set up our admin username and password 66 | 67 | ``` 68 | export SENSU_BACKEND_CLUSTER_ADMIN_USERNAME= 69 | export SENSU_BACKEND_CLUSTER_ADMIN_PASSWORD= 70 | /usr/sbin/sensu-backend init 71 | ``` 72 | 73 | 74 | ### Login to the admin panel 75 | 76 | Log in to 192.168.33.30:3000 (or whatever the IP address of your VM is) with the username 77 | and password from above 78 | 79 | Let's also verify that the API is working fine 80 | 81 | ``` 82 | curl http://ip:8080/health 83 | ``` 84 | 85 | 86 | ## On the workstation 87 | 88 | Now that we have sensu backend running, let's install `sensuctl` on the local workstation. 89 | Sensuctl is a cli tool used to manage Sensu 90 | 91 | To install it on a Linux machine: 92 | 93 | ``` 94 | # Add the Sensu repository 95 | curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash 96 | 97 | # Install the sensu-go-cli package 98 | sudo apt-get install sensu-go-cli 99 | ``` 100 | Follow instructions [HERE](https://docs.sensu.io/sensu-go/latest/operations/deploy-sensu/install-sensu/#install-sensuctl) to install 101 | sensuctl for other operating systems (your local machine) 102 | 103 | 104 | To start using sensuctl, we need to configure it with the username, password and the 105 | address to reach our sensu backend api 106 | 107 | 108 | From your local machine (not the sensu-backend server) 109 | ``` 110 | sensuctl configure -n \ 111 | --username 'admin' \ 112 | --password 'password' \ 113 | --namespace default \ 114 | --url 'http://192.168.33.30:8080' 115 | ``` 116 | 117 | Run `sensuctl config view` and it should show something like 118 | 119 | ``` 120 | ❯ sensuctl config view 121 | === Active Configuration 122 | API URL: http://192.168.33.30:8080 123 | Namespace: default 124 | Format: tabular 125 | Timeout: 15s 126 | Username: admin 127 | JWT Expiration Timestamp: 1620564854 128 | ``` 129 | 130 | That means sensuctl is all good to go.
With these steps, sensuctl has written the authentication details into 131 | ``` 132 | ~/.config/sensu/sensuctl/profile 133 | 134 | and 135 | 136 | ~/.config/sensu/sensuctl/cluster 137 | ``` 138 | 139 | 140 | ## Setting up sensu agents 141 | 142 | Now that sensu-backend is up and running, we are ready to monitor our servers. 143 | We need to install `sensu-agent` on the servers we want to monitor 144 | 145 | 146 | ### Installing sensu agent 147 | For our use case, I am gonna install sensu-agent on our wordpress server. 148 | 149 | ``` 150 | # Add the Sensu repository 151 | curl -s https://packagecloud.io/install/repositories/sensu/stable/script.deb.sh | sudo bash 152 | 153 | # Install the sensu-go-agent package 154 | sudo apt-get install sensu-go-agent 155 | ``` 156 | 157 | ### Configuring sensu agent 158 | 159 | For each agent, we need a config file `/etc/sensu/agent.yml`. The bare minimum requirement is the `--backend-url`, because obviously the agent needs to know where to connect to. 160 | 161 | > On the VM we want to monitor: 162 | 163 | ``` 164 | # Copy the config template from the docs 165 | sudo curl -L https://docs.sensu.io/sensu-go/latest/files/agent.yml -o /etc/sensu/agent.yml 166 | ``` 167 | 168 | In the config file, uncomment the below 169 | ``` 170 | #backend-url: 171 | # - "ws://127.0.0.1:8081" 172 | ``` 173 | and change it to the sensu-backend IP address. Then start the agent 174 | ``` 175 | # Start sensu-agent using a service manager 176 | service sensu-agent start 177 | ``` 178 | 179 | By now, the agent should have automatically configured itself and should be talking to the sensu-backend. 180 | 181 | Sensu keepalives are the heartbeat mechanism used to ensure that all registered agents are operating and can reach the Sensu backend.
To confirm that the agent is registered with Sensu and is sending keepalive events, open the entity page in the Sensu web UI or run `sensuctl entity list` 182 | 183 | ## Verify an example event 184 | 185 | ``` 186 | curl -X POST \ 187 | -H 'Content-Type: application/json' \ 188 | -d '{ 189 | "check": { 190 | "metadata": { 191 | "name": "check-mysql-status" 192 | }, 193 | "status": 1, 194 | "output": "could not connect to mysql" 195 | } 196 | }' \ 197 | http://localhost:3031/events 198 | ``` -------------------------------------------------------------------------------- /episodes/32-monitoring-3-resource-usage-monitoring.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #3 - System Resource Monitoring Using Sensu 2 | 3 | In the [monitoring introduction video](30-monitoring-1-infrastructure-monitoring-intro.md) we discussed 4 | the different types of resources we should be monitoring to keep an eye on the state of our systems. 5 | These include CPU, Memory, Disk usage etc. 6 | 7 | In this one we will go ahead and actually implement that monitoring using Sensu 8 | 9 | ## Some sensu glossary 10 | 11 | Before we get started with monitoring, we need to understand a few terms sensu uses. Full list [HERE](https://docs.sensu.io/sensu-go/latest/learn/glossary/) 12 | 13 | To keep things simple, I will only mention the terms that we will be needing now 14 | 15 | ### Agent 16 | 17 | We know this one. It's software that runs on the servers that we want to monitor. It sends keepalives, runs checks etc 18 | 19 | ### Check 20 | 21 | A "check" is a command the agent runs to determine the state of a system. For example, a "CPU" check to see the cpu usage.
22 | 23 | Example: 24 | ``` 25 | type: CheckConfig 26 | api_version: core/v2 27 | metadata: 28 | name: check_cpu 29 | namespace: default 30 | spec: 31 | command: check_cpu_usage.sh 32 | handlers: 33 | - email 34 | interval: 10 35 | publish: true 36 | subscriptions: 37 | - system 38 | ``` 39 | 40 | ### Event 41 | 42 | Represents the state of a server/service at any point in time. For example, if we had a check that looks at the CPU usage 43 | and alerts if it is above 90%, then when it goes above 90%, it is called an "event" 44 | 45 | ### Handler 46 | 47 | A handler acts on events. For example, in the previous sample check, we had 48 | ``` 49 | handlers: 50 | - email 51 | ``` 52 | So, whenever an event occurs, we can handle it using handlers. Like sending an email, a slack message etc 53 | 54 | ### Assets 55 | 56 | These are scripts/programs that help us run checks. For example, we need a script that can run and look at the 57 | CPU usage/ Disk usage etc. These executables are called assets. 58 | 59 | 60 | ## How does Sensu monitor server resources 61 | 62 | 1. There are sensu "checks" which define what sort of check it is. Refer to the example above 63 | 2. These checks make use of "assets" or scripts to look at the resources and output a "status code" based on what we need 64 | 3. For example: We can create a script that looks at the load average, and: 65 | - if it is less than 80%, the script exits with status code 0 (all good) 66 | - if it is above 80%, the script exits with status code 1 (warning) 67 | - if it is above 90%, the script exits with status code 2 (error) 68 | 4. The sensu "handler", upon seeing this, acts based on how we have configured it 69 | 70 | ## Let's go ahead and create resource monitoring 71 | 72 | ### 1. Adding the required dynamic assets 73 | 74 | As mentioned in the previous section, we need some scripts to run and tell us what the state of the system is. We can either write 75 | our own scripts for this.
Something like 77 | ``` 78 | #!/bin/bash 79 | 80 | LOAD=$(uptime | awk -F'load average: ' '{print $2}' | cut -d, -f1)  # 1-minute load average 81 | 82 | # Compare numerically with awk; [[ "$LOAD" > "2" ]] would compare as strings 83 | if awk -v l="$LOAD" 'BEGIN {exit !(l > 2)}'; then 84 | echo "ERROR: Load average above 2" 85 | exit 2 86 | elif awk -v l="$LOAD" 'BEGIN {exit !(l > 1)}'; then 87 | echo "WARN: Load average above 1" 88 | exit 1 89 | else 90 | echo "OK" 91 | exit 0 92 | fi 93 | ``` 94 | 95 | But, we don't have to do that as sensu has a repository with all kinds of scripts for most of our use cases. 96 | And those scripts will be infinitely better than what we have 97 | 98 | For example, the [CPU checks plugin](https://bonsai.sensu.io/assets/sensu-plugins/sensu-plugins-cpu-checks) offers a ton of 99 | features. So we will use this one instead 100 | 101 | Let's register this dynamic runtime asset so we can use these scripts inside our agent 102 | ``` 103 | sensuctl asset add sensu-plugins/sensu-plugins-cpu-checks:4.1.0 -r cpu-checks-plugins 104 | ``` 105 | This example uses the -r (rename) flag to specify a shorter name for the dynamic runtime asset: cpu-checks-plugins 106 | 107 | We also need the ruby runtime, because sensu-cpu-check is a ruby script that needs ruby to run 108 | ``` 109 | sensuctl asset add sensu/sensu-ruby-runtime:0.0.10 -r sensu-ruby-runtime 110 | ``` 111 | 112 | We can verify that these have been downloaded using 113 | ``` 114 | sensuctl asset list 115 | ``` 116 | 117 | ### 2. Configure entity subscription 118 | 119 | Every Sensu agent has a defined set of subscriptions that determine which checks the agent will execute. For an agent to execute a specific check, you must specify the same subscription in the agent configuration and the check definition 120 | 121 | Let's call our subscription that has the cpu check "system", which makes sense since it is a system resource 122 | 123 | We need to update our entity (our wordpress server) and include the "system" subscription 124 | 125 | ``` 126 | sensuctl entity list 127 | sensuctl entity update 128 | ``` 129 | 130 | For Entity Class, press enter.
131 | For Subscriptions, type system and press enter. 132 | 133 | ### 3. Creating the check 134 | 135 | ``` 136 | sensuctl check create check_cpu \ 137 | --command 'check-cpu.rb -w 75 -c 90' \ 138 | --interval 30 \ 139 | --subscriptions system \ 140 | --runtime-assets cpu-checks-plugins,sensu-ruby-runtime 141 | ``` 142 | This creates a check which will: 143 | - Use the `check-cpu.rb` script from the asset `cpu-checks-plugins` with warning 75% and critical at 90% CPU usage 144 | - The subscription is system 145 | - Runs every 30 seconds 146 | 147 | 148 | Since we have created the check with subscription as `system`, if in any future if we create a new server and add it to 149 | sensu with the subscription `system`, then that server will also automatically execute these checks. 150 | 151 | 152 | 153 | ### 4. Verifying 154 | 155 | We can verify our brand new check 156 | ``` 157 | sensuctl check info check_cpu --format yaml 158 | ``` 159 | 160 | The Sensu agent uses websockets to communicate with the Sensu backend, sending event data as JSON messages. As your checks run, the Sensu agent captures check standard output (STDOUT) or standard error (STDERR). This data will be included in the JSON payload the agent sends to your Sensu backend as the event data. 161 | 162 | It might take a few moments after you create the check for the check to be scheduled on the entity and the event to return to Sensu backend. Use sensuctl to view the event data and confirm that Sensu is monitoring CPU usage: 163 | 164 | ``` 165 | sensuctl event list 166 | ``` 167 | -------------------------------------------------------------------------------- /episodes/33-monitoring-4-webserver-monitoring.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #4 - Monitoring a webserver using Sensu 2 | 3 | Monitoring only system resources is not enough, it is possible for our services to fail without 4 | any problem with system resources. 
For example, Nginx could crash or MySQL could stop working even when 5 | the system is doing fine otherwise. 6 | 7 | To be alerted of these failures, we need to monitor 8 | 1. Whether a process is running (nginx, mysql, etc.) 9 | 2. Whether the service is responding on its respective ports (80/443 for Nginx, 3306 for MySQL) 10 | 11 | So we will create checks for these 12 | 13 | ## Monitoring running processes 14 | 15 | In this case, we have two processes that we want to monitor: Nginx and MySQL 16 | 17 | Both are already installed on our `wordpress` server. If you do not have them, go ahead and install them. 18 | 19 | ### Fetch assets 20 | 21 | We will use the [nagiosfoundation](https://bonsai.sensu.io/assets/ncr-devops-platform/nagiosfoundation) plugin to monitor 22 | running processes 23 | 24 | ``` 25 | sensuctl asset add ncr-devops-platform/nagiosfoundation -r nagiosfoundation 26 | ``` 27 | 28 | ### Update subscription 29 | 30 | ``` 31 | sensuctl entity list 32 | sensuctl entity update wordpress 33 | ``` 34 | 35 | For Entity Class, press enter. 36 | For Subscriptions, type system,webserver and press enter. 37 | 38 | 39 | ### Create check 40 | 41 | ``` 42 | sensuctl check create nginx_service \ 43 | --command 'check_service --name nginx' \ 44 | --interval 15 \ 45 | --subscriptions webserver \ 46 | --runtime-assets nagiosfoundation 47 | ``` 48 | 49 | We should see the event in a few seconds 50 | ``` 51 | sensuctl event list 52 | ``` 53 | 54 | Go ahead and stop Nginx and see what happens in Sensu 55 | 56 | ## Doing an HTTP check 57 | 58 | Previously we were checking if the process was running. But this is not enough. It is possible 59 | for Nginx to be running and still unresponsive.
So, we need a check that does an actual http 60 | request 61 | 62 | ### Fetch assets 63 | 64 | We will use https://bonsai.sensu.io/assets/sensu-plugins/sensu-plugins-http for that 65 | ``` 66 | sensuctl asset add sensu-plugins/sensu-plugins-http:6.0.0 -r sensu-plugins-http 67 | ``` 68 | 69 | We have already added the ruby runtime in the past, if you have not done that, make sure that asset is also added 70 | ``` 71 | sensuctl asset add sensu/sensu-ruby-runtime:0.1.0 -r sensu-ruby-runtime 72 | ``` 73 | 74 | ### Create check 75 | 76 | 77 | ``` 78 | sensuctl check create nginx_http \ 79 | --command 'check-http.rb -u http://localhost' \ 80 | --interval 15 \ 81 | --subscriptions webserver \ 82 | --runtime-assets sensu-plugins-http,sensu-ruby-runtime 83 | ``` 84 | 85 | 86 | ### Verify 87 | 88 | ``` 89 | sensuctl event list 90 | ``` 91 | 92 | 93 | ## Exercises 94 | 95 | 1. Create a check to monitor if mysql is running 96 | 1. Create a check to connect to mysql database 97 | -------------------------------------------------------------------------------- /episodes/34-monitoring-5-getting-email-alerts.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #5 - Receiving email alerts 2 | 3 | So far we have been either looking at the sensu dashboard to see the state of the system 4 | or we have been using `sensuctl event list` for that. But this is not very practical as 5 | we cannot always be looking at these. Instead, we should be alerted when something goes wrong. 6 | 7 | There are several ways we could be alerted, most prominent ones are: 8 | 9 | 1. Email 10 | 2. Slack messages 11 | 3. Phone call (Example: pagerduty) 12 | 13 | In this video, we will be setting up email alerts using sensu. So when things go wrong with any of 14 | our services we should get an email alert 15 | 16 | ## Getting an SMTP provider to send emails 17 | 18 | To be able to send emails, we need an SMTP provider. 
For our use case, we can use Gmail itself, which is 19 | free and fairly easy to set up. But, obviously, in a production environment, we would be using some sort of 20 | paid service like Mailgun, SendGrid, etc. 21 | 22 | 23 | You can create a new Gmail account for this. I created one just to send monitoring emails (for my own servers) 24 | 25 | > Note: If you have 2FA enabled for your Gmail account, you need to create an app password 26 | 27 | 1. Log in to Gmail with your account 28 | 2. Navigate to https://security.google.com/settings/security/apppasswords 29 | 3. In 'select app' choose 'custom', give it an arbitrary name and press generate 30 | 4. It will give you a 16-character token; you will use it as the password 31 | 32 | ## Configuring the Sensu email handler 33 | 34 | Full Sensu docs [HERE](https://docs.sensu.io/sensu-go/latest/observability-pipeline/observe-process/send-email-alerts/) 35 | 36 | We discussed what a handler is: it is something that "handles" an event. 37 | 38 | ### Get the assets 39 | 40 | The `email-handler` asset lets us send emails when things go wrong 41 | ``` 42 | sensuctl asset add sensu/sensu-email-handler -r email-handler 43 | ``` 44 | 45 | ### Add an event filter 46 | 47 | Consider the scenario where your webserver stops working. We have our checks running every 15 seconds. And, by default, the handler 48 | will send an email every 15 seconds until we fix our webserver. We don't want this. This is where event filters help us 49 | 50 | We will use the `state_change_only` event filter, which will alert only when there is a state change for our event 51 | 52 | 53 | - If your event status changes from 0 to 1, you will receive one email notification for the change to warning status. 54 | - If your event status stays at 1 for the next hour, you will not receive repeated email notifications during that hour. 55 | - If your event status changes to 2 after 1 hour at 1, you will receive one email notification for the change from warning to critical status.
56 | - If your event status fluctuates between 0, 1, and 2 for the next hour, you will receive one email notification each time the status changes. 57 | 58 | Create a file `state-change-only-event-filter.yml` 59 | ``` 60 | type: EventFilter 61 | api_version: core/v2 62 | metadata: 63 | annotations: null 64 | labels: null 65 | name: state_change_only 66 | namespace: default 67 | spec: 68 | action: allow 69 | expressions: 70 | - event.check.occurrences == 1 71 | runtime_assets: [] 72 | ``` 73 | 74 | Let's create the event filter 75 | ``` 76 | sensuctl create -f state-change-only-event-filter.yml 77 | ``` 78 | 79 | ### Create the event handler 80 | 81 | Create `email-handler.yml` 82 | ``` 83 | api_version: core/v2 84 | type: Handler 85 | metadata: 86 | namespace: default 87 | name: email 88 | spec: 89 | type: pipe 90 | command: sensu-email-handler -f -t -s -u username -p password 91 | timeout: 10 92 | filters: 93 | - is_incident 94 | - not_silenced 95 | - state_change_only 96 | runtime_assets: 97 | - email-handler 98 | ``` 99 | 100 | We need to update the following 101 | 102 | - `-f` : Sender email -> your new Gmail account 103 | - `-t` : Recipient email -> where you want the alerts to go 104 | - `-s` : SMTP server -> smtp.gmail.com 105 | - `-u` : Gmail username 106 | - `-p` : Gmail password / app password if you use 2FA for your new account 107 | 108 | And create it 109 | ``` 110 | sensuctl create -f email-handler.yml 111 | ``` 112 | 113 | 114 | ## Exercises 115 | 116 | 1. Add email handler to other checks 117 | 2.
Create a slack message alert handler -------------------------------------------------------------------------------- /episodes/35-monitoring-6-using-sensu-api.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #6 - Creating a Python script to use Sensu API 2 | 3 | ## Using Sensu API to view our events 4 | 5 | Instead of using the sensu UI, we will use the sensu api to look at the events 6 | 7 | ### Authenticate to Sensu API 8 | 9 | You can find more information about authenticating to sensu api [here](https://docs.sensu.io/sensu-go/latest/api/) 10 | 11 | For now, I am gonna use the username and password to fetch an access token and then use that with our 12 | API calls 13 | 14 | ``` 15 | curl -u 'YOUR_USERNAME:YOUR_PASSWORD' http://sensu-backend-ip:8080/auth 16 | 17 | curl -u 'admin:password' http://192.168.33.30:8080/auth 18 | ``` 19 | 20 | This should give you an access token, copy it 21 | These tokens are only valid for 15 minutes, you need to refresh them often 22 | 23 | 24 | Now we will use the events api to get a list of all the events. 
More about events api [here](https://docs.sensu.io/sensu-go/latest/api/events/) 25 | ``` 26 | curl -H "Authorization: Bearer " \ 27 | http://192.168.33.30:8080/api/core/v2/namespaces/default/events 28 | ``` 29 | 30 | 31 | ## The script 32 | 33 | ``` 34 | #!/usr/bin/env python3 35 | 36 | import requests 37 | from requests.auth import HTTPBasicAuth 38 | 39 | USERNAME="admin" 40 | PASSWORD="password" 41 | BASE_URL="http://192.168.33.30:8080" 42 | 43 | AUTH_URL = BASE_URL + "/auth" 44 | EVENTS_URL = BASE_URL + "/api/core/v2/namespaces/default/events" 45 | 46 | 47 | r = requests.get(AUTH_URL, auth=HTTPBasicAuth(USERNAME, PASSWORD)) 48 | 49 | access_token = r.json()['access_token'] 50 | 51 | headers = { 52 | 'Authorization': 'Bearer ' + access_token} 53 | 54 | r = requests.get(EVENTS_URL, headers=headers) 55 | 56 | data = r.json() 57 | 58 | for entry in data: 59 | #print (entry['check']['metadata']['name'] + "\t" + str(entry['check']['status']) + "\t\t" + entry['check']['output']) 60 | print("{: <20} {: <20} {: <20}".format(entry['check']['metadata']['name'], str(entry['check']['status']), entry['check']['output'])) 61 | 62 | ``` -------------------------------------------------------------------------------- /episodes/36-monitoring-7-sensu-go-production-considerations.md: -------------------------------------------------------------------------------- 1 | # VM Monitoring #7 - Sensu Go in Production considerations and closing thoughts 2 | 3 | ## Deployment Architecture 4 | 5 | Find more details [HERE](https://docs.sensu.io/sensu-go/latest/operations/deploy-sensu/deployment-architecture/) 6 | 7 | In the past few videos when we talked about sensu, we were using a single instance sensu setup. That was good enough 8 | for us to learn the basics of sensu and get a conceptual idea of how everything works together. 
But in a production environment, 9 | we need failover, better security, etc. 10 | 11 | In this video/notes, I will be discussing what to consider when you move your Sensu deployment to production 12 | 13 | ## Clustered deployments 14 | 15 | Having a single backend host is not a good idea in a production environment: it is a single point of failure, so the backend should be deployed as a cluster. 16 | 17 | ## Using with configuration management 18 | 19 | So far we have been creating everything by hand, editing the agents' configs manually, etc. 20 | While this is the way for us to learn, it won't be a good idea to do the same in a production environment. 21 | 22 | We should automate this. I am not going to show you that, but will give pointers on how to do it so you 23 | can try it yourself 24 | 25 | 1. Each server we want to manage should have its agent.yml managed by a configuration management tool such as Ansible 26 | 2. When a new server comes online, this should get applied, and the server should be registered with sensu-backend along with a 27 | predefined set of checks ready to go, without human intervention. 28 | 29 | ## TLS for etcd 30 | 31 | Reconfiguring a Sensu cluster for TLS post-deployment will require resetting all etcd cluster members, resulting in the loss of all data. So, when you are creating a production Sensu setup, keep this in mind. You may want to enable TLS from the very beginning 32 | 33 | ## TLS for Sensu UI and API 34 | 35 | We did not use any kind of TLS in our Sensu setup. But when it comes to a production setup, we definitely need TLS. 36 | You should use something like an Nginx proxy in front of the Sensu UI and configure TLS in the Nginx server so that 37 | the Sensu UI, API, etc. are behind TLS.
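A minimal sketch of such an Nginx TLS proxy is shown below. The hostname, certificate paths, and backend address are hypothetical placeholders; adjust them to your environment:

```nginx
# Hypothetical TLS termination in front of the Sensu backend API (port 8080).
# The web UI (port 3000 by default) can be proxied the same way.
server {
    listen 443 ssl;
    server_name sensu.example.com;                   # assumed hostname

    ssl_certificate     /etc/ssl/certs/sensu.crt;    # assumed certificate path
    ssl_certificate_key /etc/ssl/private/sensu.key;  # assumed key path

    location / {
        proxy_pass http://127.0.0.1:8080;            # Sensu backend on the same host
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```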
You should not be sending credentials over plain text 38 | 39 | -------------------------------------------------------------------------------- /episodes/5-file-descriptors-standard-out-err-pipe-file-system.md: -------------------------------------------------------------------------------- 1 | # File Descriptors, Standard output/error, Pipe, Grep, File system hierarchy 2 | 3 | Video Link : [HERE](https://youtu.be/dkyIHNWulqA) 4 | 5 | ## Exercises 6 | 7 | 1. How does Linux boot - Understand each step 8 | 2. What is grub 9 | 3. How is the Kernel loaded onto memory on boot 10 | 4. Special devices in Linux - `/dev/random` and `/dev/zero` 11 | 5. What are shared libraries 12 | 6. Common environment variables in Linux 13 | -------------------------------------------------------------------------------- /episodes/containers/01-Introduction-to-containers.md: -------------------------------------------------------------------------------- 1 | # Introduction to Containers and Containerization 2 | 3 | ## A note to beginners 4 | 5 | 6 | > If you have never worked with Virtual Machines and you are really new to DevOps, I strongly advise you NOT to start here; instead, maybe look through my video series first. 7 | > The idea is that a lot of the fundamentals that are so crucial to being a "DevOps" engineer come from dealing with Virtual Machines 8 | 9 | 10 | ## The problem we are trying to solve 11 | 12 | Before we even talk about what containers are and how to deal with them, we need to understand why we are doing this. If you have been following my videos, you know that we have been using Virtual Machines to set up different applications and services. It worked great for what we were trying to do as well. 13 | 14 | Then why are we talking about containers, and why are they so important? 15 | 16 | ### Limitations of Virtual Machines 17 | 18 | 1. Resource overhead : Each virtual machine needs a full operating system, which takes a lot of resources by itself. 19 | 2.
Boot-up time : Virtual machines take longer to boot up because an entire operating system has to start 20 | 3. Problems with inconsistencies : Unless we use images with virtual machines, it is going to be difficult to keep them consistent 21 | 4. Size and portability : VMs need to package the whole OS; again, this adds a ton of packages, libraries, etc., and causes bloat. If you take a look at a disk image of any virtual machine, it will be several gigabytes in size 22 | 23 | ### What are our solutions 24 | 25 | 1. We can solve a bunch of issues using Virtual Machine images : There are tools such as [Packer](https://www.packer.io/) that will allow us to package a virtual machine image with everything we need. This does indeed solve a lot of problems in terms of dependency management, consistency etc. 26 | 2. Containers 27 | 28 | ## Containers vs Virtual Machines 29 | 30 | ![Containers vs VMs](diagrams/containers-vs-virtualmachines.png) 31 | 32 | 33 | #### Virtual Machines (VMs) 34 | 35 | - **Full OS**: Each VM runs a full OS instance, including kernel and user space. 36 | - **Resource Intensive**: Requires more resources due to multiple OS instances. 37 | - **Size**: Disk images are often several gigabytes. 38 | - **Boot-up Time**: Slower, as it involves booting up an entire OS. 39 | - **Isolation**: Complete isolation, as each VM is separated at the hypervisor level. 40 | - **Management**: Managed by hypervisors (e.g., VMware, Hyper-V). 41 | 42 | #### Containers 43 | 44 | - **Shared OS**: All containers share the **host OS kernel** but have isolated user spaces. 45 | - **Lightweight**: Requires fewer resources, as only the application and its dependencies are packaged. 46 | - **Size**: Containers can be just tens of MBs in size (or even smaller). 47 | - **Boot-up Time**: Near-instantaneous start-up. 48 | - **Isolation**: Process-level isolation using namespaces and cgroups. 49 | - **Management**: Managed by container runtimes (e.g., Docker, containerd).
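One way to see the "shared kernel" point above in practice: a container reports the same kernel release as its host, because there is no guest kernel at all. A quick sketch (assumes Docker is installed; the container step is skipped otherwise):

```shell
# Kernel release on the host
uname -r

# Kernel release inside a container: prints the SAME value as above,
# because containers share the host kernel rather than booting their own
if command -v docker >/dev/null 2>&1; then
    docker run --rm busybox uname -r
fi
```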
50 | 51 | 52 | -------------------------------------------------------------------------------- /episodes/containers/02-a-practical-example.md: -------------------------------------------------------------------------------- 1 | # A Practical Example of What containers can do 2 | 3 | Before we learn in deep about containers, I want to take a minute to show you what we can do with it. I believe this will give an overall idea about the reason why we are learning about containers 4 | 5 | 6 | ## Our Requirement 7 | 8 | For this example, let's say that we have the same [NodeJS application](https://github.com/MansoorMajeed/devops-nodejs-demo-app) we have used in the past. We want to deploy this application (we will simply deploy it on our local machine) 9 | 10 | 11 | ## Setting it up the old way - without containers 12 | 13 | 14 | ### 1. Clone the repo 15 | 16 | ``` 17 | git clone https://github.com/MansoorMajeed/devops-nodejs-demo-app.git 18 | ``` 19 | 20 | ### 2. Installing the dependencies 21 | 22 | In this case, the application is pretty simple. It does not have too many dependencies. But we still need to install a few things 23 | 24 | 25 | 1. NodeJS and NPM 26 | 3. All the libraries and dependencies our application will use 27 | 28 | In a more complex situation, we might need a database, a caching server, another api app, etc etc. So, we have to install all of those dependencies (most of the time, a specific version of each) and binaries on our laptop. 29 | 30 | For this example, let's use [NVM](https://github.com/nvm-sh/nvm) to install NodeJS and NPM. After that, execute `npm install` within the directory. 31 | 32 | Finally, we should be able to run our application using `node index.js` from the same directory 33 | 34 | ## A better way to deal with it - with containers 35 | 36 | First of all, the application still need all of the dependencies, libraries etc. 
But, if we can find a way to package it all together and send this single "package" around, we don't need to worry about any of it. Right? That is the core idea behind containers 37 | 38 | For that, we would need some sort of configuration that says what are all the packages and dependencies we will need with our application. 39 | 40 | Let's say our configuration looks like this 41 | 42 | > **Note**: This is a hypothetical configuration for illustrative purposes. 43 | 44 | ``` 45 | Steps: 46 | 1. Install NodeJS version we need : package install nodejs-18 47 | 2. Install all the modules : npm install 48 | 3. Copy all of our application code and dependencies: copy ./* destination-package 49 | ``` 50 | 51 | 52 | Now that we have this `configuration` file, we can use this to create our `package` that contains our application and everything it needs. 53 | 54 | Let's say that it creates a package called `nodejs-demo-app`, we should be able to run it easily using some sort of command. Maybe something like 55 | 56 | ``` 57 | mycontainer-tool run nodejs-demo-app 58 | ``` 59 | 60 | And that is exactly what we are able to do. 61 | 62 | ### Creating the "package" 63 | 64 | First we have to write something called a "Dockerfile". Now, don't even worry about what "Docker" or any of it is, we will definitely get to it. for now you can just ignore it and just focus on the idea of it. 65 | 66 | So we create a file called `Dockerfile` in the same location where we have our application code 67 | 68 | ```➜ devops-nodejs-demo-app git:(master) ✗ pwd 69 | /home/mansoor/git/github.com/mansoormajeed/devops-nodejs-demo-app 70 | 71 | ➜ devops-nodejs-demo-app git:(master) ✗ ls -l 72 | total 80 73 | -rw-r--r--. 1 mansoor mansoor 447 Aug 13 16:53 deploy.sh 74 | -rw-r--r--. 1 mansoor mansoor 109 Aug 13 17:10 Dockerfile 75 | -rw-r--r--. 1 mansoor mansoor 473 Aug 13 16:53 index.js 76 | -rw-r--r--. 1 mansoor mansoor 1309 Aug 13 16:53 Jenkinsfile 77 | drwxr-xr-x. 
1 mansoor mansoor 2478 Aug 13 16:58 node_modules 78 | -rw-r--r--. 1 mansoor mansoor 367 Aug 13 16:53 package.json 79 | -rw-r--r--. 1 mansoor mansoor 55700 Aug 13 16:58 package-lock.json 80 | -rw-r--r--. 1 mansoor mansoor 207 Aug 13 16:53 README.md 81 | drwxr-xr-x. 1 mansoor mansoor 14 Aug 13 16:53 test 82 | ➜ devops-nodejs-demo-app git:(master) ✗ 83 | 84 | ``` 85 | 86 | And the content of the Dockerfile 87 | ``` 88 | FROM node 89 | 90 | WORKDIR /app 91 | 92 | COPY package*.json ./ 93 | 94 | RUN npm install 95 | 96 | COPY index.js ./ 97 | 98 | CMD ["node", "index.js"] 99 | ``` 100 | 101 | Don't worry about the syntax, but as you can see, it is very simple and makes total sense, right? 102 | 103 | Now, we simply run the magic command that creates the package 104 | 105 | We run `docker build -t nodejs-demo-app .` 106 | 107 | (Don't forget the `.` at the end) 108 | 109 | Once that is done, we should see 110 | ``` 111 | => exporting to image 112 | => => exporting layers 113 | => => writing image sha256:4e95496b68f87af00ee40230da330da222f5960830b0b276aa0925147ee685c8 114 | => => naming to docker.io/library/nodejs-demo-app 115 | ``` 116 | 117 | And now we can run 118 | ``` 119 | docker run -p 3000:3000 nodejs-demo-app 120 | ``` 121 | Here, the only thing that might look odd is the `-p 3000:3000`, that is simply us telling Docker to expose port number 3000 from the container to our host machine's port 3000. This way we can reach our application from our laptop. 122 | 123 | And finally, we can access our app, from a terminal on our laptop 124 | ``` 125 | curl localhost:3000 126 | 127 |

Hello World!

Process ID: 1
Running on: 2ab5e19ef284
App Version: 4.0
133 | ``` 134 | 135 | 136 | -------------------------------------------------------------------------------- /episodes/containers/03-fundamentals-of-containers.md: -------------------------------------------------------------------------------- 1 | # Fundamentals of Linux Containers 2 | 3 | ## Linux Container Architecture 4 | 5 | Most of the components required by the containers in Linux are provided by the Linux 6 | Kernel itself. 7 | 8 | ### What do we need from containers? 9 | 10 | Before we talk about the architecture of Linux containers, it is important that we 11 | have an understanding of what we NEED from a Linux container. This will help us 12 | understand each component better 13 | 14 | #### 1. We need application isolation 15 | 16 | That is, when we "containerize" an application (a process), it should be truly contained. 17 | That way, another process should not be able to interfere with it. This would mean 18 | isolation on process level, network level, file system level etc. 19 | 20 | 21 | #### 2. We should be able to allocate and limit resource to containers 22 | 23 | We should be able to tell that this container gets this much CPU, Memory, network bandwidth etc. 24 | This is to ensure that a single bad container cannot starve other containers of resources. 25 | 26 | 27 | #### 3. Be able to share common files efficiently 28 | 29 | Allow containers to be lightweight and share common files efficiently. 30 | 31 | #### 4. Enhance security by reducing privileges 32 | 33 | Each process should have the minimum privileges it requires to run its functions. 34 | 35 | ### How do we achieve those needs? 36 | 37 | So we talked about what are the needs as listed above, now we will discuss how we 38 | achieve each of those needs 39 | 40 | We will also use some practical examples to show how it looks in real world. 
41 | You don't really need to know each of these commands, this is just to reinforce 42 | the concepts 43 | 44 | We will create two containers to show the examples. `container1` and `container2` 45 | 46 | ```bash 47 | # Start two containers in detached mode 48 | docker run -d --name container1 busybox sleep 3600 49 | docker run -d --name container2 busybox sleep 3600 50 | ``` 51 | 52 | Now let's run them in the background, we will get to them below. 53 | 54 | #### 1. Namespaces 55 | 56 | Read more about namespaces [HERE](https://man7.org/linux/man-pages/man7/namespaces.7.html) 57 | 58 | Namespaces are a way to achieve this isolation we talked about. There are different 59 | types of namespaces that achieves these different goals such as process, network isolation 60 | 61 | A namespace wraps a global system resource in an abstraction that 62 | makes it appear to the processes within the namespace that they 63 | have their own isolated instance of the global resource. Changes 64 | to the global resource are visible to other processes that are 65 | members of the namespace, but are invisible to other processes. 66 | One use of namespaces is to implement containers. 67 | 68 | ##### a. MNT (Mount) Namespaces 69 | 70 | Isolates the set of filesystem mount points seen by a group of processes. 71 | Processes in different MNT namespaces can have different views of the filesystem hierarchy. 72 | 73 | 74 | Example: 75 | So we create a file in container1 76 | ``` 77 | docker exec container1 touch /container1_file.txt 78 | ``` 79 | Which we can see in `container1` 80 | 81 | ``` 82 | ➜ ~ docker exec container1 ls -lh /container1_file.txt 83 | -rw-r--r-- 1 root root 0 Nov 1 17:49 /container1_file.txt 84 | ➜ ~ 85 | ``` 86 | 87 | But that does not exist in `container2` 88 | ``` 89 | ➜ ~ docker exec container2 ls -lh /container1_file.txt 90 | ls: /container1_file.txt: No such file or directory 91 | ``` 92 | 93 | ##### b. 
UTS (UNIX Time-Sharing) namespaces 94 | 95 | Isolates two system identifiers: the hostname and the NIS domain name. 96 | This allows each container to have its own hostname. 97 | 98 | ``` 99 | ➜ ~ docker exec container2 hostname 100 | 8e2992d0bcb0 101 | 102 | 103 | ➜ ~ docker exec container1 hostname 104 | 7c49e6148e5f 105 | ``` 106 | 107 | ##### c. IPC (InterProcess Communication) Namespaces 108 | 109 | IPC namespaces in Linux isolate inter-process communication mechanisms, ensuring 110 | processes in different namespaces cannot directly communicate using shared memory, 111 | semaphores, or message queues. 112 | This is especially valuable in containerized environments to prevent interference 113 | between instances. 114 | 115 | Essentially, it's like giving each container its own private communication channel. 116 | 117 | Example: 118 | Create two containers with distinct IPC namespaces: 119 | 120 | ``` 121 | docker run -d --name ipc_container1 --ipc private debian sleep 3600 122 | docker run -d --name ipc_container2 --ipc private debian sleep 3600 123 | ``` 124 | 125 | Create a shared memory segment in ipc_container1: 126 | ``` 127 | docker exec ipc_container1 sh -c "ipcmk -M 1M" 128 | Shared memory id: 0 129 | ``` 130 | This command creates a shared memory segment of size 1MB. You'll get an ID (let's say 0) as an output. 131 | 132 | Try to access the shared memory segment from ipc_container2: 133 | ``` 134 | docker exec ipc_container2 ipcs -m 135 | 136 | ------ Shared Memory Segments -------- 137 | key shmid owner perms bytes nattch status 138 | ``` 139 | This command lists shared memory segments. You'll observe that the memory segment created in ipc_container1 is not visible in ipc_container2. 
140 | 141 | But it looks like this from the same container 142 | ``` 143 | ➜ ~ docker exec ipc_container1 ipcs -m 144 | 145 | ------ Shared Memory Segments -------- 146 | key shmid owner perms bytes nattch status 147 | 0xced66e11 0 root 644 1048576 0 148 | ``` 149 | 150 | 151 | 152 | ##### d. PID (Process ID) Namespaces 153 | 154 | Isolates the process ID number space. This means that processes in different PID 155 | namespaces can have the same PID. For instance, multiple containers can each have 156 | its own "PID 1". 157 | 158 | ``` 159 | ➜ ~ docker exec container1 ps 160 | 161 | PID USER TIME COMMAND 162 | 1 root 0:00 sleep 3600 163 | 37 root 0:00 ps 164 | 165 | 166 | ➜ ~ docker exec container2 ps 167 | 168 | PID USER TIME COMMAND 169 | 1 root 0:00 sleep 3600 170 | 19 root 0:00 ps 171 | ➜ ~ 172 | ``` 173 | 174 | Here we can see that both the containers have a process with ID 1 175 | 176 | ##### e. NET (Network) Namespaces 177 | 178 | Isolates network devices, IP addresses, IP routing tables, port numbers, etc. 179 | A process in one NET namespace can't see or directly communicate with network 180 | devices or local network resources of another namespace. 181 | 182 | Let us see this in action 183 | ``` 184 | ➜ ~ docker inspect container1 -f '{{.NetworkSettings.IPAddress}}' 185 | 172.17.0.2 186 | 187 | ➜ ~ docker inspect container2 -f '{{.NetworkSettings.IPAddress}}' 188 | 172.17.0.3 189 | ``` 190 | Here we can see that both the containers have their own unique IP addresses. 191 | 192 | #### 2. Control Groups (Cgroups) 193 | 194 | Cgroups control how much CPU, memory, network bandwidth, and disk I/O can be used by the processes within a container. This ensures that a single container cannot starve others of resources. 195 | 196 | To show a real world example, let us run a container with a 50% of a core cpu limit. 
197 | ``` 198 | docker run -d --name cpu_limited_container --cpus="0.5" busybox sleep 3600 199 | ``` 200 | 201 | And now let us mimic a high CPU usage on that container. 202 | This is an infinite loop that will usually cause 100% CPU usage. 203 | ``` 204 | docker exec -it cpu_limited_container sh -c "while true; do :; done" 205 | ``` 206 | 207 | But let's see how much CPU it is actually using 208 | 209 | ![CPU Usage](./diagrams/cpu-usage-container.png) 210 | 211 | As we can see, the CPU usage is only 50%. This is especially important when we are running multiple 212 | containers on the same host. Otherwise, a faulty process can starve the entire host of CPU or memory and 213 | will cause issues on all the other containers. You would be surprised at how often this happens in the real world 214 | because developers forget to put a limit on containers 215 | 216 | #### 3. Union File Systems 217 | 218 | UnionFS allows layers in container images. Multiple containers can share the same base image layer, while each has its own unique layer for customized files and changes. This saves space and aids in fast container spin-up times. 219 | 220 | Let us go through an example to see this in action: 221 | 222 | a. Start a container from a base image, and from inside the container, we create a file. 223 | ``` 224 | docker run -it --name ufs_container debian bash 225 | 226 | 227 | root@ba4163deed36:/# echo "Hello from container!" > /hello.txt 228 | root@ba4163deed36:/# 229 | root@ba4163deed36:/# exit 230 | exit 231 | ``` 232 | b. Commit the changes to a new image 233 | ``` 234 | ➜ ~ docker commit ufs_container new_image_with_hello 235 | 236 | sha256:6085f1c95cd9410cc648dfc26f0a50c0cbb26ededaff6bdf380f71d583963a06 237 | ➜ ~ docker images | grep new_image 238 | new_image_with_hello latest 6085f1c95cd9 11 seconds ago 139MB 239 | ➜ ~ 240 | ``` 241 | 242 | c.
Start a new container with the original image
243 | ```
244 | docker run -it debian bash
245 | 
246 | root@a036aa58b40e:/# ls -l /hello.txt
247 | ls: cannot access '/hello.txt': No such file or directory
248 | root@a036aa58b40e:/#
249 | ```
250 | As you can see, the file does not exist there (in our original image).
251 | 
252 | But if we start a new container with our new image,
253 | ```
254 | ➜ ~ docker run -it new_image_with_hello bash
255 | 
256 | root@38b2cd3724b1:/# ls -l /hello.txt
257 | -rw-r--r-- 1 root root 22 Nov 1 18:27 /hello.txt
258 | root@38b2cd3724b1:/#
259 | root@38b2cd3724b1:/#
260 | ```
261 | you can see that our file exists there.
262 | 
263 | Now if we look at our new image using the following command
264 | ```
265 | ➜ ~ docker inspect new_image_with_hello | jq '.[0].GraphDriver.Data'
266 | 
267 | {
268 | "LowerDir": "/var/lib/docker/overlay2/b021905b310da2de7b9f3d8a206615e252cea0f17cdd66ead5b8003e9eb2d04a/diff",
269 | "MergedDir": "/var/lib/docker/overlay2/59305b5da031f2f4ae09a30ab6d0ab7d897b53329cf3aca34bc34899bdb0a887/merged",
270 | "UpperDir": "/var/lib/docker/overlay2/59305b5da031f2f4ae09a30ab6d0ab7d897b53329cf3aca34bc34899bdb0a887/diff",
271 | "WorkDir": "/var/lib/docker/overlay2/59305b5da031f2f4ae09a30ab6d0ab7d897b53329cf3aca34bc34899bdb0a887/work"
272 | }
273 | ```
274 | 
275 | Each change in a container creates a new layer, and layers can be stacked in a union to present a single coherent file system. Shared layers between containers make them efficient in terms of storage, as they can reuse these layers, and any modifications result in new, separate layers without affecting the shared ones.
276 | 
277 | #### 4. Capabilities
278 | 
279 | Capabilities in Linux split the privileges traditionally associated with root into distinct units, allowing finer-grained control over permissions. 
Instead of granting a process all-or-nothing "root" access, specific capabilities like network administration or file ownership changes can be assigned individually. This approach enhances security by limiting the potential impact of privilege escalation. In essence, capabilities allow a more nuanced delegation of power and responsibilities to processes on a Linux system.
280 | 
281 | 
282 | Example:
283 | 
284 | Let us run a container that has no `CHOWN` capability, which means the container won't
285 | be able to run the `chown` command.
286 | ```
287 | docker run -d --name container_no_chown --cap-drop CHOWN busybox sleep 3600
288 | ```
289 | Let's see what happens when we try to run `chown` within the container
290 | 
291 | ```
292 | ➜ ~ docker exec container_no_chown chown 1000:1000 /tmp
293 | chown: /tmp: Operation not permitted
294 | ```
295 | 
296 | If we were to run the same command on a container that does not have this capability removed,
297 | like our `container1`, it will not produce an error and will instead change the ownership of
298 | the directory.
299 | 
300 | ```
301 | ➜ ~ docker exec container1 chown 1000:1000 /tmp
302 | ➜ ~
303 | ```
304 | 
--------------------------------------------------------------------------------
/episodes/containers/README.md:
--------------------------------------------------------------------------------
1 | # Containerization
2 | 
3 | 
4 | ## Outline
5 | 
6 | 
7 | ### [1. Introduction to Containerization](01-Introduction-to-containers.md)
8 | 
9 | 
10 | - What is containerization and the problem it solves
11 | - Difference between virtualization and containerization
12 | - Brief history and rise of container technology
13 | 
14 | ### 2. **Core Concepts of Containers**
15 | 
16 | - Container images and runtime
17 | - Immutability in containers
18 | - Benefits: Isolation, Scalability, and Portability
19 | 
20 | ### 3. 
**Docker: The De Facto Standard**
21 | 
22 | - Introduction to Docker
23 | - Installing Docker on Linux
24 | - Docker Architecture: Daemon, Client, Images, and Containers
25 | - Basic Docker commands: `run`, `ps`, `pull`, `build`, and `rm`
26 | 
27 | ### 4. **Building Container Images**
28 | 
29 | - Introduction to `Dockerfile`
30 | - Anatomy of a `Dockerfile`
31 | - Best practices for building efficient images
32 | - Building and pushing images with Docker Hub
33 | 
34 | ### 5. **Container Networking & Storage**
35 | 
36 | - Docker networking modes: bridge, host, and overlay
37 | - Persistent storage in containers: Volumes and bind mounts
38 | - Docker Compose: Multi-container applications and networking
39 | 
40 | 
41 | ### 6. **Security in Containers**
42 | 
43 | - Importance of container security
44 | - Common vulnerabilities and how to mitigate them
45 | - Best practices: minimal base images, scanning for vulnerabilities, etc.
46 | - Tools: `clair`, `anchore`, and others
47 | 
48 | ### 7. **Docker Compose**
49 | 
50 | 
51 | 
--------------------------------------------------------------------------------
/episodes/containers/diagrams/containers-vs-virtualmachines.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/containers/diagrams/containers-vs-virtualmachines.png
--------------------------------------------------------------------------------
/episodes/containers/diagrams/cpu-usage-container.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/containers/diagrams/cpu-usage-container.png
--------------------------------------------------------------------------------
/episodes/google-cloud/01-what-is-cloud.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #1 - What is Cloud
2 | 
3 | Video : [HERE](https://www.youtube.com/watch?v=YDA6DBWxOMc)
4 | 
5 | What is cloud? The term `cloud` is mostly used as a marketing term. It simply means servers and services rented
6 | from a provider.
7 | 
8 | Providers like Amazon (AWS) and Google (GCP) have huge datacenters across the globe with tons of servers in each.
9 | 
10 | ![Google Datacenter](img/google-datacenter.jpg)
11 | 
12 | These servers are powerful computers with really fast internet connections. Now, people like you and me can pay these
13 | companies a fair amount to use these computers and do our own stuff with them.
14 | 
15 | If you want to run your website, video game server, application, etc., you can use them.
16 | 
17 | 
18 | ## So how do these companies do it?
19 | 
20 | It's not magic
21 | 
22 | ![Cloud](img/cloud.jpg)
23 | 
24 | They have big datacenters with powerful physical servers. Now they can use virtualization and create tons of virtual machines
25 | on these physical servers. Think back to how we used virtual machines to do our stuff in the previous videos.
26 | 
27 | Now these VMs can be purchased by us and are billed by the hour (usually)
28 | 
29 | If you want to run your small website that only needs 1 CPU and 1GB of memory, you can buy a small VM from Google Cloud. And you only pay for that.
30 | When, in the future, your website grows and needs more resources, you can make your VM bigger with the click of a button. That is why cloud is powerful. We only need to pay for what we use and we can use only what we need
31 | 
32 | ## When to use cloud?
33 | 
34 | If you want to host something on the internet, you have lots of choices on how you can do that.
35 | If the content is static, it's pretty straightforward, and there are a lot of free options out there.
36 | You can use GitHub Pages, for example, to host your static site for free.
37 | 
38 | But if it is a dynamic website, it is difficult to find a good free option. If your needs are pretty minimal, you can
39 | go for a webhost where they put your website on a shared server with other people, which can be slow and insecure.
40 | 
41 | Another option you have is to buy a VPS (a virtual machine from a provider - nowadays there is no difference between a VPS
42 | and a cloud provider's VM)
43 | 
44 | Another option you have is to build your own datacenter. Depending on who you are and what you are trying to do, it could be one
45 | reasonable option. But for most people, a cloud provider is the better option
46 | 
47 | ## Which cloud provider should I use?
48 | 
49 | If you are a developer, a really small business, etc., it is usually cheaper and better to go with a provider like DigitalOcean
50 | that offers cheaper services and is fairly easy to get started with.
51 | 
52 | But if you are a bigger business and need room for growth and complex infra, then definitely go with a bigger player like
53 | AWS, Google Cloud, Azure, etc.
54 | 
55 | I personally prefer Google Cloud because of how well put together the service is in comparison to AWS.
56 | 
57 | ## Why use cloud
58 | 
59 | - Pay as you go (You pay for the CPU, memory, bandwidth instead of number of servers)
60 | - Can scale up/down to meet our needs (Adding more/less servers)
61 | - Can create complex infrastructure
62 | - Easier to manage than a datacenter
63 | 
--------------------------------------------------------------------------------
/episodes/google-cloud/02-launching-first-vm.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #2 - Hosting a simple website on Google Compute Engine - Our first VM
2 | 
3 | ## Foreword
4 | 
5 | There are a lot of Google Cloud products and it would be confusing to just get started with them
6 | directly. Instead, what I would like to do is take one use case and find a product that will fit our requirement.
7 | 
8 | In the past videos we have been doing this, like, we want to host a website, so how can we do that?
9 | 
10 | So, we will do the same here
11 | 
12 | ## Our Goal
13 | 
14 | Host a simple website `simple.demo.devopsfromscratch.com` in Google Cloud Platform (GCP)
15 | 
16 | ## GCP Product we will use - Google Compute Engine (GCE)
17 | 
18 | GCE is the offering of GCP where we can launch virtual machines in Google's infrastructure.
19 | It is equivalent to EC2 in AWS
20 | 
21 | ## Steps
22 | 
23 | 1. Get a GCP free trial account
24 | 2. Launch a VM
25 | 3. Access the VM over ssh, install nginx
26 | 4. Point DNS, demo
27 | 
28 | 
29 | ## Key Concepts relating to GCE
30 | 
31 | ### Region
32 | 
33 | The region on earth where the datacenter is located. Example: `us-east4`
34 | 
35 | ### Zones
36 | 
37 | Each region will have multiple datacenters in different zones. Example: `us-east4-a, us-east4-b, us-east4-c`
38 | Equivalent to availability zones in AWS
39 | 
40 | ### Machine Family
41 | 
42 | Read more about them [HERE](https://cloud.google.com/compute/docs/machine-types)
43 | 
44 | These basically describe the kind of physical machines that are running these VMs.
45 | - General purpose : For people like us with no need for high performance/memory
46 | - Compute-optimized : If you want to run really compute heavy applications
47 | - Memory-optimized : If you want VMs with terabytes of RAM
48 | - GPU : If you want a GPU attached to your VM so you can do GPU intensive stuff
49 | 
50 | ### Series
51 | 
52 | What kind of CPU these machines have. We usually want to go with the newest and cheapest option.
53 | For our use cases, E2 is plenty
54 | 
55 | ### Machine type
56 | 
57 | What size our VMs should be. This entirely depends on what we want to do with our VM.
58 | For small websites, blogs, development purposes, etc., `e2-micro -> e2-medium` should be good enough
59 | 
60 | #### Shared core
61 | 
62 | In this case, our VMs are not really getting a full CPU assigned. Instead, the CPU is shared with other VMs.
63 | The advantage is that it is considerably cheaper. And most of the time, the performance is good enough for our use
64 | 
65 | - e2-micro sustains 2 vCPUs, each at 12.5% of CPU time, totaling 25% of vCPU time.
66 | - e2-small sustains 2 vCPUs, each at 25% of CPU time, totaling 50% of vCPU time.
67 | - e2-medium sustains 2 vCPUs, each at 50% of CPU time, totaling 100% of vCPU time.
68 | 
69 | ### Firewall
70 | 
71 | The network firewall configuration lets us allow/deny network access to our VMs/services
72 | More about this in the next video
73 | 
74 | ### Boot disk
75 | 
76 | Here we can choose which operating system we would like to have and the size of our main/boot disk.
77 | 
--------------------------------------------------------------------------------
/episodes/google-cloud/03-instance-templates-static-ip.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #3 - Instance Templates, Static IP Addresses
2 | 
3 | ## Instance Templates
4 | 
5 | When we create a new VM from the dashboard, we have to select a lot of options. If we create many VMs,
6 | then this can get tedious and repetitive. The solution is to use instance templates
7 | 
8 | Instance templates allow us to create a template that already specifies characteristics of our VMs
9 | like the instance type, disk, firewall rules, etc. This way we can easily create a new VM from a template
10 | without having to worry about setting the specs
11 | 
12 | ## External IP Addresses in GCP
13 | 
14 | In GCP there are two different types of IP addresses
15 | 1. Ephemeral
16 | 2. Static
17 | 
18 | ### Ephemeral addresses
19 | 
20 | Ephemeral means short-lived
21 | 
22 | By default, each VM we create has an ephemeral address. This IP address will not change while the VM is running.
23 | But it **could** change if we stop the VM and later start it back up
24 | 
25 | Why ephemeral? 
Because it is cheaper
26 | 
27 | ### Static addresses
28 | 
29 | We can get a static IP from a pool of IP addresses GCP owns and then we can assign it to any instance.
30 | This IP will not change unless we release it from the pool
31 | 
32 | ### When to use static addresses?
33 | 
34 | If we are going to use a VM for anything semi-permanent and if we would like the IP to not change, then we should
35 | create a static IP address and use it. For example, if we have a website hosted on the VM, then we may need to use a static
36 | address.
37 | 
38 | Ephemeral addresses are fine for instances that are behind a load balancer (more on this later)
--------------------------------------------------------------------------------
/episodes/google-cloud/04-vpc-networks.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #4 - Virtual Private Cloud (VPC) Networks
2 | 
3 | We know what a "network" is. It is when we connect computers together, wired or wirelessly.
4 | For example, our home wifi network, or a bunch of servers connected using a switch and ethernet cables.
5 | 
6 | A VPC network is similar, but instead of being a physical network, it is logical. So, a VPC is a virtual network created
7 | inside of Google's infrastructure using Andromeda, Google's software-defined network. We are not going to discuss it in detail
8 | now.
9 | 
10 | Google VPCs are global, meaning we can have two machines in different regions in the same VPC network
11 | 
12 | ## Why use a VPC
13 | 
14 | By default, when we create a GCP project, a `default` network is created. So we are already using a VPC network.
15 | By default, all the new VMs we create are part of this `default` network.
16 | 
17 | Check out this diagram by GCP : [HERE](https://cloud.google.com/vpc/images/vpc-overview-example.svg)
18 | 
19 | Even then, why do we need a network? 
20 | 
21 | Let's say we have a VPC network called `production` and it contains all the VMs that are powering our production infrastructure.
22 | 
23 | 
24 | 1. All the VMs can talk to each other through this private network, without going through the internet
25 | 2. We can restrict VMs' internet access; maybe a few servers don't need internet access at all.
26 | 3. Since this is a private network, we save a lot on bandwidth costs
27 | 4. Performance is also a lot better than going through the internet
28 | 5. We have great control over which machine can connect to which, using firewalls
29 | 6. We can use internal load balancers.
30 | 
31 | ## Firewall Rules
32 | 
33 | By default, in a new network, nothing can connect to anything else. We need firewall rules to make that happen.
34 | 
35 | 
36 | ### Network tag
37 | 
38 | In GCP we can add a network tag to each instance. Using this we can create firewall rules.
39 | 
40 | ![VPC Network](img/vpc-nw-example.jpg)
41 | 
42 | Consider this setup: we add the tag `nginx-proxy` to Nginx, `webapp` to web apps 1 and 2, and `database` to the database
43 | server.
44 | 
45 | Now we can create firewall rules saying
46 | 
47 | 1. allow `nginx-proxy` to talk to `webapp` on port `80` over protocol `tcp`
48 | 2. allow `webapp` to talk to `database` on port `3306` over protocol `tcp`
49 | 
50 | > We can use source and target tags for firewall rules, but this is only applicable to internal traffic
51 | > It does not work for external traffic
52 | 
53 | 
--------------------------------------------------------------------------------
/episodes/google-cloud/05-disk-snapshots-and-images.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #5 - Disk snapshots and images
2 | 
3 | ## What is a snapshot
4 | 
5 | Our VMs have disks. By default we only have the boot disk. We can take a point-in-time snapshot of a disk and save it.
6 | 
7 | ### What can we do with a snapshot? 
8 | 
9 | - Use it as a backup of our data. We can restore it any time
10 | - We can create a new instance using a snapshot
11 | 
12 | ### Creating a Snapshot
13 | 
14 | 1. Go [HERE](https://console.cloud.google.com/compute/snapshotsAdd)
15 | 2. Give it a name
16 | 3. Select the source disk
17 | 4. Select the storage location
18 | 5. Create
19 | 
20 | ## Images
21 | 
22 | The default disk on a VM where the operating system is installed is called the "boot disk". We can use images to create
23 | boot disks.
24 | 
25 | ### Types of disk images
26 | 
27 | 1. Public images: These are provided by Google, the open-source community, etc. An example would be the Debian disk image we
28 | used
29 | 
30 | ![Debian disk image](img/gcp-disk-image.png)
31 | 
32 | 2. Custom images : These are our own images, created by us for our specific use cases.
33 | 
34 | ### What is the use of custom images?
35 | 
36 | If we have created an image with all the packages and configurations we need for our application to run, then we can
37 | re-use this image to quickly launch new servers. This is mostly needed when we have an autoscaling setup (coming up soon).
38 | This behaves similarly to Docker images.
39 | 
40 | For example, if we have a VM with nginx, mysql, etc. installed and we create an image from its disk, then when we
41 | create a new VM with this custom image, it will have all these packages installed
42 | 
43 | ### Creating a custom disk image
44 | 
45 | The GCP documentation is a lot nicer for this. Check it out [HERE](https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images)
--------------------------------------------------------------------------------
/episodes/google-cloud/06-creating-and-attaching-disks.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #6 - Creating and attaching disks to VMs
2 | 
3 | By default, when we create a new VM, we only have a boot disk where the operating system is installed. 
4 | But, more often than not, we need to store more data and this is usually done by adding an extra disk
5 | onto our VM.
6 | 
7 | We can add up to 127 extra disks to a VM
8 | 
9 | ## Types of new disk sources
10 | 
11 | We can create a new disk from two different types of sources
12 | 
13 | 1. A blank disk that has nothing in it. We mostly use this
14 | 2. A new disk from a snapshot. We talked about snapshots in the previous lesson; we can create a new disk from one.
15 | 
16 | 
17 | ## Disk types in GCP
18 | 
19 | Source : [HERE](https://cloud.google.com/compute/docs/disks#disk-types)
20 | 
21 | When you configure a zonal or regional persistent disk, you can select one of the following disk types.
22 | 
23 | - Standard persistent disks (pd-standard) are backed by standard hard disk drives (HDD).
24 | - Balanced persistent disks (pd-balanced) are backed by solid-state drives (SSD). They are an alternative to SSD persistent disks that balance performance and cost.
25 | - SSD persistent disks (pd-ssd) are backed by solid-state drives (SSD).
26 | - Extreme persistent disks (pd-extreme) are backed by solid-state drives (SSD). With consistently high performance for both random access workloads and bulk throughput, extreme persistent disks are designed for high-end database workloads.
27 | 
28 | 
29 | ## Creating and attaching a disk
30 | 
31 | The GCP doc on this is well made. Refer to [THIS](https://cloud.google.com/compute/docs/disks/add-persistent-disk).
32 | 
33 | Once the disk is added, we need to format and mount it on the VM. The steps are explained in detail above.
34 | But, here is the short form with only the commands
35 | 
36 | ```
37 | # Find the newly added disk
38 | sudo lsblk
39 | 
40 | # Example: Newly added disk is `sdb`
41 | # Format it using mkfs. Make sure to replace sdb with the correct disk
42 | 
43 | sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
44 | 
45 | # Mount it. 
Example: we want it at /data
46 | sudo mkdir /data
47 | sudo mount -o discard,defaults /dev/sdb /data
48 | ```
49 | At this point the disk is mounted; we just need to make sure that it gets mounted automatically whenever
50 | the machine reboots.
51 | 
52 | This is done by adding an entry in **fstab**
53 | 
54 | ### Adding an fstab entry
55 | 
56 | fstab : filesystem table
57 | 
58 | It's a simple configuration file located at `/etc/fstab` and is used to help us in mounting and unmounting
59 | disks/devices on our Linux system
60 | 
61 | ```
62 | # Make a backup in case we mess up
63 | sudo cp /etc/fstab /etc/fstab.backup
64 | 
65 | # Find the new disk's UUID
66 | sudo blkid /dev/sdb
67 | 
68 | # Edit /etc/fstab using vim/nano etc. and add a line of the form
69 | UUID=UUID_VALUE MOUNT_POINT ext4 discard,defaults,MOUNT_OPTION 0 2
70 | 
71 | # That is, in our case something like
72 | UUID=UUID_VALUE /data ext4 discard,defaults,MOUNT_OPTION 0 2
73 | ```
74 | 
75 | 
76 | 
--------------------------------------------------------------------------------
/episodes/google-cloud/07-setting-up-gcloud-cli.md:
--------------------------------------------------------------------------------
1 | # Google Cloud #7 - Setting up Gcloud CLI
2 | 
3 | So far we have been doing everything in GCP through the UI. While it is great and user-friendly, there are much more
4 | efficient ways of doing things in GCP. One such way is the `gcloud` command-line utility
5 | 
6 | ## What is gcloud
7 | 
8 | [Gcloud](https://cloud.google.com/sdk/gcloud) is a command-line tool by Google that lets us manage most of the GCP
9 | products from the terminal.
10 | 
11 | ### What can it do?
12 | 
13 | Pretty much anything we use the UI for.
14 | 
15 | - Create/manage/edit instances, firewall rules, load balancers, permissions, etc.
16 | 
17 | ### But why use it?
18 | 
19 | Because sometimes it is much faster to use the CLI than to go through the UI and click buttons. 
Also, we can use the
20 | utility to automate managing resources
21 | 
22 | Example:
23 | 
24 | Let's say you want to create 10 instances with some slightly different parameters like disk space/CPU cores, etc.
25 | Using the UI to do that 10 times would be very inefficient, as it can take a lot of time and it is error-prone too.
26 | 
27 | But, using the CLI, we can do that easily.
28 | 
29 | We can create custom scripts to get our job done really quickly.
30 | 
31 | The more you use it, the more you will come to love it.
32 | 
33 | ## Installing gcloud
34 | 
35 | `gcloud` is part of the Google Cloud SDK. So, we need to install that first. This is a very straightforward thing to do.
36 | 
37 | Just go [HERE](https://cloud.google.com/sdk/docs/install) and follow the instructions for your operating system.
38 | 
39 | 
--------------------------------------------------------------------------------
/episodes/google-cloud/img/cloud.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/google-cloud/img/cloud.jpg
--------------------------------------------------------------------------------
/episodes/google-cloud/img/gcp-disk-image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/google-cloud/img/gcp-disk-image.png
--------------------------------------------------------------------------------
/episodes/google-cloud/img/gcp-regions.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/google-cloud/img/gcp-regions.png
--------------------------------------------------------------------------------
/episodes/google-cloud/img/google-datacenter.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/google-cloud/img/google-datacenter.jpg -------------------------------------------------------------------------------- /episodes/img/24.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/24.png -------------------------------------------------------------------------------- /episodes/img/how-ssl-works.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/how-ssl-works.png -------------------------------------------------------------------------------- /episodes/img/le-cloudflare-dns-txt-record.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/le-cloudflare-dns-txt-record.png -------------------------------------------------------------------------------- /episodes/img/ssl-info.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/ssl-info.png -------------------------------------------------------------------------------- /episodes/img/vpc-nw-example.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/vpc-nw-example.jpg 
--------------------------------------------------------------------------------
/episodes/img/vscode-github-pr.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/vscode-github-pr.png
--------------------------------------------------------------------------------
/episodes/img/waterfall-model.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/waterfall-model.png
--------------------------------------------------------------------------------
/episodes/img/works-on-my-machine.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/MansoorMajeed/devops-from-scratch/59d983955b2b69332792b7d61cebb016985a9f65/episodes/img/works-on-my-machine.jpg
--------------------------------------------------------------------------------
/episodes/setting-ssl-locally-with-le.md:
--------------------------------------------------------------------------------
1 | # How to set up SSL certificates locally using Let's Encrypt
2 | 
3 | If you want to set up actual trusted SSL certificates locally, you can do that
4 | using Let's Encrypt
5 | 
6 | ## But why?
7 | 
8 | If you have a local development environment, then it makes sense to do it like this.
9 | Of course, you can use self-signed certificates, but that would involve trusting your own
10 | CA in your browsers.
11 | 
12 | 
13 | ## Requirement
14 | 
15 | You need a publicly registered domain name that you can add TXT records to
16 | 
17 | 
18 | I have a Debian 10 virtual machine running at 192.168.33.14. I have a domain
19 | pointed to it. 
The domain in this case is `jenkins.devops.esc.sh`
20 | 
21 | ## The setup
22 | 
23 | ### Step 1 - Install Certbot
24 | 
25 | Assuming you are using a Debian virtual machine
26 | 
27 | ```
28 | sudo apt install certbot python3-certbot-nginx
29 | ```
30 | 
31 | ### Step 2 - Fetch the certificate using a DNS challenge
32 | 
33 | ```
34 | certbot -d your-domain.com --manual --preferred-challenges dns-01 certonly
35 | ```
36 | 
37 | This will put you in a prompt like the one below.
38 | Press Y for the question about logging the IP address.
39 | 
40 | ```
41 | root@jenkins-server:~# certbot -d jenkins.devops.esc.sh --manual --preferred-challenges dns-01 certonly
42 | Saving debug log to /var/log/letsencrypt/letsencrypt.log
43 | Plugins selected: Authenticator manual, Installer None
44 | Obtaining a new certificate
45 | Performing the following challenges:
46 | dns-01 challenge for jenkins.devops.esc.sh
47 | 
48 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
49 | NOTE: The IP of this machine will be publicly logged as having requested this
50 | certificate. If you're running certbot in manual mode on a machine that is not
51 | your server, please ensure you're okay with that.
52 | 
53 | Are you OK with your IP being logged?
54 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
55 | (Y)es/(N)o: Y
56 | 
57 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
58 | Please deploy a DNS TXT record under the name
59 | _acme-challenge.jenkins.devops.esc.sh with the following value:
60 | 
61 | 2xdgemNwApJ6OGVkFlAJFk0PB2h45m_J9C_I55IywLA
62 | 
63 | Before continuing, verify the record is deployed.
64 | - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
65 | ```
66 | 
67 | Copy the TXT record and add it to your domain's DNS. 
I am using Cloudflare for DNS,
68 | so I have added it like this
69 | 
70 | ![DNS TXT record](img/le-cloudflare-dns-txt-record.png)
71 | 
72 | And in `dig` it should show up like this
73 | 
74 | ```
75 | ➜ ~ dig _acme-challenge.jenkins.devops.esc.sh TXT +short
76 | "2xdgemNwApJ6OGVkFlAJFk0PB2h45m_J9C_I55IywLA"
77 | ➜ ~
78 | ```
79 | 
80 | **After verifying that the TXT record has propagated**, press `Enter` and certbot should
81 | fetch a fresh certificate and place it under `/etc/letsencrypt/live/<your-domain>/`.
82 | You can use it anywhere
83 | 
84 | For example, you can configure Nginx to use it like this.
85 | To create letsencrypt.conf, refer to [THIS](setting-up-letsencrypt-ssl-with-nginx.md#step-3---create-letsencryptconf)
86 | 
87 | ```nginx
88 | server {
89 |     listen 80;
90 | 
91 |     include /etc/nginx/snippets/letsencrypt.conf;
92 | 
93 |     server_name your-domain-name.tld;
94 | 
95 |     root /var/www/your-domain-name.tld;
96 |     index index.html;
97 | 
98 |     listen 443 ssl;
99 |     ssl_certificate /etc/letsencrypt/live/your-domain-name.tld/fullchain.pem;
100 |     ssl_certificate_key /etc/letsencrypt/live/your-domain-name.tld/privkey.pem;
101 |     include /etc/letsencrypt/options-ssl-nginx.conf;
102 |     ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
103 | }
104 | ```
105 | 
106 | ## More configuration options (like http -> https redirection)
107 | 
108 | If you would like to know how to do more configuration, such as redirecting
109 | http to https or redirecting www to non-www, etc., refer to this doc
110 | 
111 | [Setting up Let's Encrypt](setting-up-letsencrypt-ssl-with-nginx.md)
112 | 
--------------------------------------------------------------------------------
/infrastructure/README.md:
--------------------------------------------------------------------------------
1 | # Local Infrastructure
2 | 
3 | This contains config files (Vagrant, Ansible, etc.) to launch our local VirtualBox
4 | powered environment where we will be installing and learning stuff like Jenkins, Git, etc.
5 | 
6 | 
-------------------------------------------------------------------------------- /infrastructure/ansible/README.md: -------------------------------------------------------------------------------- 1 | # Ansible Playbooks For the Local infrastructure 2 | 3 | This contains all playbooks and roles for the local infra -------------------------------------------------------------------------------- /infrastructure/ansible/hosts: -------------------------------------------------------------------------------- 1 | [git_server] 2 | 192.168.33.13 3 | 4 | 5 | [nodejsapp_frontend] 6 | 192.168.33.10 7 | 8 | 9 | [nodejsapp_backend] 10 | 192.168.33.11 11 | 192.168.33.12 -------------------------------------------------------------------------------- /infrastructure/ansible/playbook.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ########### Git Server ######### 3 | - hosts: git_server 4 | remote_user: vagrant 5 | become: yes 6 | 7 | roles: 8 | - common 9 | - git-server 10 | 11 | ################################ 12 | 13 | ##### For the NodeJS App ####### 14 | - hosts: nodejsapp_frontend 15 | remote_user: vagrant 16 | become: yes 17 | 18 | roles: 19 | - common 20 | - nginx-common 21 | - nginx-nodejsapp 22 | 23 | - hosts: nodejsapp_backend 24 | remote_user: vagrant 25 | become: yes 26 | 27 | roles: 28 | - common 29 | - nodejs-common 30 | ################################# -------------------------------------------------------------------------------- /infrastructure/ansible/roles/common/README.md: -------------------------------------------------------------------------------- 1 | # The common role 2 | 3 | This role is common for all hosts. 
It installs a few basic tools 4 | that we need on all the hosts, such as `git`, `curl` and `vim` -------------------------------------------------------------------------------- /infrastructure/ansible/roles/common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install Common Packages 3 | apt: 4 | pkg: 5 | - vim 6 | - git 7 | - curl 8 | state: present 9 | -------------------------------------------------------------------------------- /infrastructure/ansible/roles/git-server/files/ssh_keys/mansoor: -------------------------------------------------------------------------------- 1 | ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT 2 | -------------------------------------------------------------------------------- /infrastructure/ansible/roles/git-server/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install Packages 3 | apt: 4 | state: present 5 | name: 6 | - git 7 | - acl # Needed for Ansible to work well with an unprivileged user 8 | 9 | 10 | - name: Git user 11 | user: 12 | name: git 13 | shell: /bin/bash 14 | 15 | 16 | - name: Keys allowed to use git 17 | authorized_key: 18 | user: git 19 | state: present 20 | key: '{{ item }}' 21 | with_file: 22 | - ssh_keys/mansoor 23 | 24 | - name: Create Repositories 25 | command: "git init --bare /home/git/repos/{{ item }}.git" 26 | become_user: git 27 | args: 28 | creates: "/home/git/repos/{{ item }}.git/HEAD" 29 | loop: 30 | - repo1 31 | - repo2 32 | --------------------------------------------------------------------------------
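The `Create Repositories` task above creates bare repositories and uses `creates:` on the `HEAD` file so re-runs are idempotent. What the task does can be reproduced by hand; a quick local sketch (the `/tmp` paths are just for illustration — in the real setup you would clone over SSH as the `git` user):

```
# Create a bare repository, like the role's "git init --bare" command does
mkdir -p /tmp/demo-repos
git init --bare /tmp/demo-repos/repo1.git

# Bare repos have no working tree; clients clone them instead
# (over SSH in the real setup: git clone git@192.168.33.13:repos/repo1.git)
git clone /tmp/demo-repos/repo1.git /tmp/demo-clone
```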
/infrastructure/ansible/roles/nginx-common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Install Nginx 3 | apt: 4 | name: nginx 5 | state: present 6 | 7 | - name: Make sure Nginx is running 8 | service: 9 | name: nginx 10 | state: started 11 | enabled: yes 12 | -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nginx-nodejsapp/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Restart Nginx 3 | service: 4 | name: nginx 5 | state: restarted -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nginx-nodejsapp/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Create document root 3 | file: 4 | path: "{{ docroot }}" 5 | state: directory 6 | 7 | - name: Create nginx virtual host 8 | template: 9 | src: vhost.conf.j2 10 | dest: "/etc/nginx/sites-enabled/{{ server_name }}" 11 | notify: 12 | - Restart Nginx -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nginx-nodejsapp/templates/vhost.conf.j2: -------------------------------------------------------------------------------- 1 | upstream nodeapp{ 2 | server 192.168.33.11:3000; 3 | server 192.168.33.12:3000; 4 | } 5 | 6 | 7 | server { 8 | listen 80; 9 | 10 | server_name {{ server_name }}; 11 | 12 | location / { 13 | proxy_pass http://nodeapp; 14 | proxy_set_header X-Forwarded-For $remote_addr; 15 | proxy_set_header Host $http_host; 16 | } 17 | } -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nginx-nodejsapp/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | docroot: /var/www/nodejs.devops.esc.sh 3 | server_name: 
nodejs.devops.esc.sh 4 | -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nodejs-common/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Add Nodesource apt key. 3 | apt_key: 4 | url: https://keyserver.ubuntu.com/pks/lookup?op=get&fingerprint=on&search=0x1655A0AB68576280 5 | id: "68576280" 6 | state: present 7 | 8 | - name: Add NodeSource repositories for Node.js. 9 | apt_repository: 10 | repo: "{{ item }}" 11 | state: present 12 | with_items: 13 | - "deb https://deb.nodesource.com/node_{{ nodejs_version }}.x {{ ansible_distribution_release }} main" 14 | - "deb-src https://deb.nodesource.com/node_{{ nodejs_version }}.x {{ ansible_distribution_release }} main" 15 | register: node_repo 16 | 17 | - name: Update apt cache if repo was added. 18 | apt: update_cache=yes 19 | when: node_repo.changed 20 | tags: ['skip_ansible_lint'] 21 | 22 | - name: Ensure Node.js and npm are installed. 
23 | apt: 24 | name: "nodejs={{ nodejs_version }}*" 25 | state: present -------------------------------------------------------------------------------- /infrastructure/ansible/roles/nodejs-common/vars/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | nodejs_version: "12" 3 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/git-server/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | 3 | config.vm.box = "debian/buster64" 4 | 5 | 6 | config.vm.define "git_server" do |git_server| 7 | git_server.vm.provider "virtualbox" do |vb| 8 | vb.memory = "512" 9 | end 10 | 11 | git_server.vm.network "private_network", ip: "192.168.33.13" 12 | git_server.vm.hostname = 'git-server' 13 | 14 | git_server.vm.provision "shell", inline: <<-SHELL 15 | echo "Adding SSH Key" 16 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 17 | SHELL 18 | end 19 | 20 | end 21 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/jenkins/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | 3 | config.vm.box = "debian/buster64" 4 | 5 | 6 | config.vm.define "jenkins_server" do |jenkins_server| 7 | jenkins_server.vm.provider "virtualbox" do |vb| 8 | vb.memory = "1024" 9 | end 10 | 11 | jenkins_server.vm.network "private_network", ip: "192.168.33.14" 12 | jenkins_server.vm.hostname = 
'jenkins-server' 13 | 14 | jenkins_server.vm.provision "shell", inline: <<-SHELL 15 | echo "Adding SSH Key" 16 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 17 | SHELL 18 | end 19 | 20 | end 21 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/mysql-server/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | config.vm.box = "debian/buster64" 6 | 7 | config.vm.box_check_update = false 8 | 9 | config.vm.network "private_network", ip: "192.168.33.20" 10 | config.vm.hostname = 'mysql-server' 11 | 12 | config.vm.provider "virtualbox" do |vb| 13 | vb.memory = "1024" 14 | end 15 | config.vm.provision "shell", inline: <<-SHELL 16 | apt-get update 17 | echo "Adding ssh key" 18 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 19 | SHELL 20 | end 21 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/nodejsapp/README.md: -------------------------------------------------------------------------------- 1 | # A simple NodeJS Application 2 | 3 | This contains a simple nodejs 
application. The idea is to get familiar 4 | with deploying nodejs applications in a production-like environment. 5 | 6 | This contains an Nginx server and two nodejs processes, all running on the 7 | same server. There are two nodejs apps to imitate a real-world load balancing 8 | scenario. I am keeping them all on the same server because running too many 9 | virtual machines locally isn't practical 10 | 11 | 12 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/nodejsapp/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | 3 | config.vm.box = "debian/buster64" 4 | 5 | 6 | config.vm.define "nodejsapp_nginx" do |nodejsapp_nginx| 7 | nodejsapp_nginx.vm.provider "virtualbox" do |vb| 8 | vb.memory = "512" 9 | end 10 | 11 | nodejsapp_nginx.vm.network "private_network", ip: "192.168.33.10" 12 | nodejsapp_nginx.vm.hostname = 'nodejsapp-nginx' 13 | 14 | nodejsapp_nginx.vm.provision "shell", inline: <<-SHELL 15 | echo "Adding SSH Key" 16 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 17 | SHELL 18 | end 19 | 20 | config.vm.define "nodejsapp_backend_1" do |nodejsapp_backend_1| 21 | nodejsapp_backend_1.vm.provider "virtualbox" do |vb| 22 | vb.memory = "512" 23 | end 24 | 25 | nodejsapp_backend_1.vm.network "private_network", ip: "192.168.33.11" 26 | nodejsapp_backend_1.vm.hostname = 'nodejsapp-backend-1' 27 | 28 | nodejsapp_backend_1.vm.provision "shell", inline: <<-SHELL 29 | echo "Adding SSH Key" 30 | echo 'ssh-rsa
AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 31 | SHELL 32 | end 33 | 34 | 35 | 36 | config.vm.define "nodejsapp_backend_2" do |nodejsapp_backend_2| 37 | nodejsapp_backend_2.vm.provider "virtualbox" do |vb| 38 | vb.memory = "512" 39 | end 40 | 41 | nodejsapp_backend_2.vm.network "private_network", ip: "192.168.33.12" 42 | nodejsapp_backend_2.vm.hostname = 'nodejsapp-backend-2' 43 | 44 | nodejsapp_backend_2.vm.provision "shell", inline: <<-SHELL 45 | echo "Adding SSH Key" 46 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 47 | SHELL 48 | end 49 | 50 | 51 | end 52 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/sensu-server/Vagrantfile: -------------------------------------------------------------------------------- 1 | Vagrant.configure("2") do |config| 2 | 3 | config.vm.box = "debian/buster64" 4 | 5 | 6 | config.vm.define "sensu_server" do |sensu_server| 7 | sensu_server.vm.provider "virtualbox" do |vb| 8 | vb.memory = "512" 9 | end 10 | 11 | sensu_server.vm.network "private_network", ip: "192.168.33.30" 12 | sensu_server.vm.hostname = 'sensu-server' 13 | 14 | sensu_server.vm.provision "shell", inline: <<-SHELL 15 | echo "Adding SSH Key" 16 | echo 'ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 17 | SHELL 18 | end 19 | 20 | end 21 | -------------------------------------------------------------------------------- /infrastructure/vagrant/apps/wordpress/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure("2") do |config| 5 | config.vm.box = "debian/buster64" 6 | 7 | config.vm.box_check_update = false 8 | 9 | config.vm.network "private_network", ip: "192.168.33.21" 10 | config.vm.hostname = 'wordpress' 11 | 12 | config.vm.provider "virtualbox" do |vb| 13 | vb.memory = "512" 14 | end 15 | config.vm.provision "shell", inline: <<-SHELL 16 | apt-get update 17 | echo "Adding ssh key" 18 | echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDA5Xv55ZTwhBiXvTpsLUyrIghNs1wudPGLDeJ5pUTingJwwPizczSNwui+1FYKRs0UGjAGruUHYboiUOhJ37DZIWWJR8wKg10b+V/Uxv3b0xbV87dd7suZiUgPnK7y24IyCfelGD//dRmePQVbaszcOhctJyeJNgKigqc0MUEYaJWBQPfwN6TuBBSMsX4eDvdjJ0WQYCCzXU0SScZnwxtyozstizA+AzzsWmcZsXzfkNsxgNtKwLNTJ3z8/IQZlVIMt9RyRIF7U8EuuPfRYsmm63KIU2keBSvNb/02m4MDitvu3WXI7UdidXBYdwU0LZiuHGXRAsSILvPIAJv1zQYT' >> /home/vagrant/.ssh/authorized_keys 19 | SHELL 20 | end 21 | -------------------------------------------------------------------------------- /scripts/sensu_api_client.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | import requests 4 | 5 | from requests.auth import HTTPBasicAuth 6 | 7 | USERNAME = 'admin' 8 | PASSWORD = 'password' 9 | 10 | BASE_URL = 'http://192.168.33.30:8080' 11 | 12 | AUTH_URL = BASE_URL 
+ '/auth' 13 | EVENTS_URL = BASE_URL + '/api/core/v2/namespaces/default/events' 14 | 15 | r = requests.get(AUTH_URL, auth=HTTPBasicAuth(USERNAME, PASSWORD)) 16 | 17 | access_token = r.json()['access_token'] 18 | 19 | 20 | headers = { 21 | 'Authorization': 'Bearer ' + access_token 22 | } 23 | 24 | 25 | r = requests.get(EVENTS_URL, headers=headers) 26 | 27 | resp = r.json() 28 | 29 | for entry in resp: 30 | print("{: <20} {: <20} {: <20}".format(entry['check']['metadata']['name'], str(entry['check']['status']), entry['check']['output'])) 31 | 32 | 33 | 34 | 35 | 36 | 37 | --------------------------------------------------------------------------------
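The final `print` in `sensu_api_client.py` relies on `str.format` with `{: <20}` specifiers, which left-justify each column to 20 characters. A standalone illustration of that layout, using made-up check data:

```python
# Three fields, each left-aligned and padded to 20 characters,
# produce fixed-width columns separated by single spaces
row = "{: <20} {: <20} {: <20}".format("check-cpu", "0", "CheckCPU OK: 12%")
print(row)  # columns line up regardless of the individual field lengths
```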