├── docs ├── 00-introduction.md ├── 01-prerequisites.md ├── 50-what-is-iac.md ├── 03-scripts.md ├── 02-manual-operations.md ├── 09-docker-compose.md ├── 04-packer.md ├── 08-docker.md ├── 05-terraform.md ├── 07-vagrant.md ├── 06-ansible.md └── 10-kubernetes.md ├── README.md └── LICENSE /docs/00-introduction.md: -------------------------------------------------------------------------------- 1 | # Introduction 2 | 3 | Let's dream for a little bit... 4 | 5 | Imagine that you're a young developer who developed a web application. You run and test your application locally and everything works great, which makes you very happy. You believe that this is going to blow the minds of Internet users and bring you a lot of money. 6 | 7 | Then you realize that there is a small problem. You ask yourself a question: "How do I make my application available to the Internet users?" 8 | 9 | You're thinking that you can't run the application locally all the time, because your old laptop will become slow for other tasks and will probably crash if a lot of users will be using your app at the same time. Besides, your ISP changes randomly the public IP for your router, so you don't know on which IP address your application will be accessible to the public at any given moment. 10 | 11 | You start realizing that the problem you're facing is not as small as you thought. In fact, there is a whole new craft for you to learn in IT world about running software applications and making sure they are always available to the users. 12 | 13 | The craft is called **IT operations**. And in almost every IT department, there is an operations (Ops) team who manages the platform where the applications are running. 14 | 15 | The tutorial you are about to begin will give you, a young developer, a bit of a glance into what operations work look like and how you can do this work more efficiently by using **Infrastructure as Code** approach. 16 | 17 | Next: [Prerequisites](01-prerequisites.md) 18 | -------------------------------------------------------------------------------- /docs/01-prerequisites.md: -------------------------------------------------------------------------------- 1 | # Prerequisites 2 | 3 | ## Google Cloud Platform 4 | 5 | In this tutorial, we use the [Google Cloud Platform](https://cloud.google.com/) to provision the compute infrastructure. You can [sign up](https://cloud.google.com/free/) for $300 in free credits, which will be more than sufficient to complete all of the labs in this tutorial. 6 | 7 | ## Google Cloud Platform SDK 8 | 9 | ### Install the Google Cloud SDK 10 | 11 | Follow the Google Cloud SDK [documentation](https://cloud.google.com/sdk/) to install and configure the `gcloud` command line utility for your platform. 12 | 13 | Verify the Google Cloud SDK version is 183.0.0 or higher: 14 | 15 | ```bash 16 | $ gcloud version 17 | ``` 18 | 19 | ### Set Application Default Credentials 20 | 21 | This tutorial assumes Application Default Credentials (ADC) were set to authenticate to Google Cloud Platform API. 22 | 23 | Use the following gcloud command to acquire new user credentials to use for ADC. 24 | 25 | ```bash 26 | $ gcloud auth application-default login 27 | ``` 28 | 29 | ### Set a Default Project, Compute Region and Zone 30 | 31 | This tutorial assumes a default compute region and zone have been configured. 
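If you're not sure which region and zone to pick, you can list the available options first; any location close to you will work, the values below are simply the ones used throughout this tutorial:

```bash
$ gcloud compute regions list
$ gcloud compute zones list
```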
32 | 33 | Set a default compute region: 34 | 35 | ```bash 36 | $ gcloud config set compute/region europe-west1 37 | ``` 38 | 39 | Set a default compute zone: 40 | 41 | ```bash 42 | $ gcloud config set compute/zone europe-west1-b 43 | ``` 44 | 45 | Verify the configuration settings: 46 | 47 | ```bash 48 | $ gcloud config list 49 | ``` 50 | 51 | Next: [Manual operations](02-manual-operations.md) -------------------------------------------------------------------------------- /docs/50-what-is-iac.md: -------------------------------------------------------------------------------- 1 | # What is Infrastructure as Code? 2 | 3 | You've come a long way going through all the labs and learning about different Infrastructure as Code tools. Some sort of presentation of what Infrastructure as Code is should already be shaped in your head. 4 | 5 | To conclude this tutorial, I summarize some of the key points about what Infrastructure as Code means. 6 | 7 | 1. `We use code to describe infrastructure`. We don't use UI to launch a VM, we decribe its desired characteristics in code and tell the tool to do that. 8 | 2. `Everyone is using the same tested code for infrastructure management operations and not creating its own implementation each time`. We talked about it when discussing downsides of scripts. Common infrastructure management operations should rely on tested code functions which are used in the team. It makes everyday operations more time efficient and less error-prone. 9 | 3. `Automated operations`. We don't run commands ourselves to launch and configure a system, but instead use a configuration syntax provided by IaC tool to tell it what should be done. 10 | 4. `We apply software development practices to infrastructure`. In software development, practices like keeping code in source control or peer reviews are very common. They make development reliable and working in a team possible. Since our infrastructure is described in code, we can apply the same practices to our infrastructure work. 11 | 12 | These are the points that I would make for now. If you feel like there is something else to add or change, please feel free to send a pull request :) 13 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Infrastructure As Code Tutorial 2 | 3 | [![license](https://img.shields.io/github/license/Artemmkin/infrastructure-as-code-tutorial.svg)](https://github.com/Artemmkin/infrastructure-as-code-tutorial/blob/master/LICENSE) 4 | [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Learn%20about%20Infrastructure%20as%20Code%20https%3A%2F%2Fgithub.com%2FArtemmkin%2Finfrastructure-as-code-tutorial%20%20Tutorial%20created%20by%20@artemmkins%20covers%20%23Packer,%20%23Terraform,%20%23Ansible,%20%23Vagrant,%20%23Docker,%20and%20%23Kubernetes.%20%23DevOps) 5 | 6 | This tutorial is intended to show what the **Infrastructure as Code** (**IaC**) is, why we need it, and how it can help you manage your infrastructure more efficiently. 7 | 8 | It is practice-based, meaning I don't give much theory on what Infrastructure as Code is in the beginning of the tutorial, but instead let you feel it through the practice first. At the end of the tutorial, I summarize some of the key points about Infrastructure as Code based on what you learn through the labs. 
9 | 10 | This tutorial is not meant to give a complete guide on how to use a specific tool like Ansible or Terraform, instead it focuses on how these tools work in general and what problems they solve. 11 | 12 | > The tutorial was inspired by [Kubernetes the Hard Way](https://github.com/kelseyhightower/kubernetes-the-hard-way) tutorial. I used it as an example to structure this one. 13 | 14 | _See [my presentation at DevOpsDays Silicon Valley](https://www.youtube.com/watch?v=XbcW2B7roLo&t=) in which I talk more in depth about the tutorial._ 15 | 16 | ## Need your help! 17 | 18 | This tutorial was created in 2018 and wasn't being kept up to date. Therefore, some of the instructions may not work due to updated APIs, obsolete repositories, etc. I apologize for that, but I currently don't have time to update this tutorial. So if you find that something is broken and you find a fix, please submit a PR. 19 | 20 | When submitting a PR, make sure you include a description for it, i.e. what's broken and a test plan for the fix. 21 | 22 | Also, note that some of things need to be updated in several different repositories at the same time. Main repositories used in this tutorial: 23 | 24 | * https://github.com/Artemmkin/infrastructure-as-code-tutorial 25 | * https://github.com/Artemmkin/infrastructure-as-code-example 26 | * https://github.com/Artemmkin/raddit 27 | 28 | ## Target Audience 29 | 30 | The target audience for this tutorial is anyone who loves or/and works in IT. 31 | 32 | ## Tools Covered 33 | 34 | * Packer 35 | * Terraform 36 | * Ansible 37 | * Vagrant 38 | * Docker 39 | * Docker Compose 40 | * Kubernetes 41 | 42 | ## Results of completing the tutorial 43 | 44 | By the end of this tutorial, you'll make your own repository looking like [this one](https://github.com/Artemmkin/infrastructure-as-code-example). 45 | 46 | NOTE: you can use this [example repository](https://github.com/Artemmkin/infrastructure-as-code-example) in case you get stuck in some of the labs. 47 | 48 | ## Labs 49 | 50 | This tutorial assumes you have access to the Google Cloud Platform. While GCP is used for basic infrastructure requirements the lessons learned in this tutorial can be applied to other platforms. 51 | 52 | * [Introduction](docs/00-introduction.md) 53 | * [Prerequisites](docs/01-prerequisites.md) 54 | * [Manual Operations](docs/02-manual-operations.md) 55 | * [Scripts](docs/03-scripts.md) 56 | * [Packer](docs/04-packer.md) 57 | * [Terraform](docs/05-terraform.md) 58 | * [Ansible](docs/06-ansible.md) 59 | * [Vagrant](docs/07-vagrant.md) 60 | * [Docker](docs/08-docker.md) 61 | * [Docker Compose](docs/09-docker-compose.md) 62 | * [Kubernetes](docs/10-kubernetes.md) 63 | * [What is Infrastructure as Code?](docs/50-what-is-iac.md) 64 | -------------------------------------------------------------------------------- /docs/03-scripts.md: -------------------------------------------------------------------------------- 1 | # Scripts 2 | 3 | In the previous lab, you deployed the [raddit](https://github.com/Artemmkin/raddit) application by connecting to a VM via SSH and running commands in the terminal one by one. In this lab, we'll try to automate this process a little by using `scripts`. 4 | 5 | ## Intro 6 | 7 | Now think about what happens if your application becomes so popular that one virtual machine can't handle all the load of incoming requests. Or what happens when your application somehow crashes? 
Debugging a problem can take a long time and it would most likely be much faster to launch and configure a new VM than trying to fix what's broken. 8 | 9 | In all of these cases we face the task of provisioning new virtual machines, installing the required software and repeating all of the configurations we've made in the previous lab over and over again. 10 | 11 | Doing it manually is `boring`, `error-prone` and `time-consuming`. 12 | 13 | The most obvious way for improvement is using Bash scripts which allow us to run sets of commands put in a single file. So let's try this. 14 | 15 | ## Provision Compute Resources 16 | 17 | Start a new VM for this lab. The command should look familiar: 18 | 19 | ```bash 20 | $ gcloud compute instances create raddit-instance-3 \ 21 | --image-family ubuntu-1604-lts \ 22 | --image-project ubuntu-os-cloud \ 23 | --boot-disk-size 10GB \ 24 | --machine-type n1-standard-1 25 | ``` 26 | 27 | ## Infrastructure as Code project 28 | 29 | Starting from this lab, we're going to use a git repo for saving all the work done in this tutorial. 30 | 31 | Download a repo for the tutorial: 32 | 33 | ```bash 34 | $ git clone https://github.com/Artemmkin/iac-tutorial.git 35 | ``` 36 | 37 | Delete git information about a remote repository: 38 | ```bash 39 | $ cd ./iac-tutorial 40 | $ git remote remove origin 41 | ``` 42 | 43 | Create a directory for this lab: 44 | 45 | ```bash 46 | $ mkdir scripts 47 | ``` 48 | 49 | ## Configuration script 50 | 51 | Before we can run our application, we need to create a running environment for it by installing dependent packages and configuring the OS. 52 | 53 | We are going to use the same commands we used before to do that, but this time, instead of running commands one by one, we'll create a `bash script` to save us some struggle. 54 | 55 | Create a bash script to install Ruby, Bundler and MongoDB, and copy a systemd unit file for the application. 56 | 57 | Save it to the `configuration.sh` file inside created `scripts` directory: 58 | 59 | ```bash 60 | #!/bin/bash 61 | set -e 62 | 63 | echo " ----- install ruby and bundler ----- " 64 | apt-get update 65 | apt-get install -y ruby-full build-essential 66 | gem install --no-rdoc --no-ri bundler 67 | 68 | echo " ----- install mongodb ----- " 69 | apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 70 | echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" > /etc/apt/sources.list.d/mongodb-org-3.2.list 71 | apt-get update 72 | apt-get install -y mongodb-org 73 | 74 | echo " ----- start mongodb ----- " 75 | systemctl start mongod 76 | systemctl enable mongod 77 | 78 | echo " ----- copy unit file for application ----- " 79 | wget https://gist.githubusercontent.com/Artemmkin/ce82397cfc69d912df9cd648a8d69bec/raw/7193a36c9661c6b90e7e482d256865f085a853f2/raddit.service 80 | mv raddit.service /etc/systemd/system/raddit.service 81 | ``` 82 | 83 | ## Deployment script 84 | 85 | Create a script for copying the application code from GitHub repository, installing dependent gems and starting it. 
86 | 87 | Save it into `deploy.sh` file inside `scripts` directory: 88 | 89 | ```bash 90 | #!/bin/bash 91 | set -e 92 | 93 | echo " ----- clone application repository ----- " 94 | git clone https://github.com/Artemmkin/raddit.git 95 | 96 | echo " ----- install dependent gems ----- " 97 | cd ./raddit 98 | sudo bundle install 99 | 100 | echo " ----- start the application ----- " 101 | sudo systemctl start raddit 102 | sudo systemctl enable raddit 103 | ``` 104 | 105 | ## Run the scripts 106 | 107 | Copy the `scripts` directory to the created VM: 108 | 109 | ```bash 110 | $ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-3) 111 | $ scp -r ./scripts raddit-user@${INSTANCE_IP}:/home/raddit-user 112 | ``` 113 | 114 | Connect to the VM via SSH: 115 | ```bash 116 | $ ssh raddit-user@${INSTANCE_IP} 117 | ``` 118 | 119 | Run the scripts: 120 | ```bash 121 | $ chmod +x ./scripts/*.sh 122 | $ sudo ./scripts/configuration.sh 123 | $ ./scripts/deploy.sh 124 | ``` 125 | 126 | ## Access the Application 127 | 128 | Access the application in your browser by its public IP (don't forget to specify the port 9292). 129 | 130 | Open another terminal and run the following command to get a public IP of the VM: 131 | 132 | ```bash 133 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-3 134 | ``` 135 | 136 | ## Save and commit the work 137 | 138 | Save and commit the scripts created in this lab into your `iac-tutorial` repo. 139 | 140 | ## Conclusion 141 | 142 | Scripts helped us to save some time and effort of manually running every command one by one to configure the system and start the application. 143 | 144 | The process of system configuration becomes more or less standardized and less error-prone, as you put commands in the order they should be run and test it to ensure it works as expected. 145 | 146 | It's also a first step we've made in the direction of automating operations work. 147 | 148 | But scripts are not suitable for every operations task and have many downsides. We'll discuss more on that in the next labs. 149 | 150 | Destroy the current VM before moving onto the next step: 151 | 152 | ```bash 153 | $ gcloud compute instances delete raddit-instance-3 154 | ``` 155 | 156 | Next: [Packer](04-packer.md) 157 | -------------------------------------------------------------------------------- /docs/02-manual-operations.md: -------------------------------------------------------------------------------- 1 | # Manual Operations 2 | 3 | To better understand the `Infrastructure as Code` (`IaC`) concept, we will first define the problem we are facing and deal with it with manually to get our hands dirty and see how things work overall. 4 | 5 | ## Intro 6 | 7 | Imagine you have developed a new cool application called [raddit](https://github.com/Artemmkin/raddit). 8 | 9 | You want to run your application on a dedicated server and make it available to the Internet users. 10 | 11 | You heard about the `public cloud` thing, which allows you to provision compute resources and pay only for what you use. You believe it's a great way to test your idea of an application and see if people like it. 12 | 13 | You've signed up for a free tier of [Google Cloud Platform](https://cloud.google.com/) (GCP) and are about to start deploying your application. 
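Before provisioning anything, it doesn't hurt to confirm that `gcloud` is authenticated and still pointed at the project, region and zone you set up in the prerequisites lab:

```bash
$ gcloud auth list
$ gcloud config list
```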
14 | 15 | ## Provision Compute Resources 16 | 17 | First thing we will do is to provision a virtual machine (VM) inside GCP for running the application. 18 | 19 | Use the following gcloud command in your terminal to launch a VM with Ubuntu 16.04 distro: 20 | 21 | ```bash 22 | $ gcloud compute instances create raddit-instance-2 \ 23 | --image-family ubuntu-1604-lts \ 24 | --image-project ubuntu-os-cloud \ 25 | --boot-disk-size 10GB \ 26 | --machine-type n1-standard-1 27 | ``` 28 | 29 | ## Create an SSH key pair 30 | 31 | Generate an SSH key pair for future connections to the VM instances (run the command exactly as it is): 32 | 33 | ```bash 34 | $ ssh-keygen -t rsa -f ~/.ssh/raddit-user -C raddit-user -P "" 35 | ``` 36 | 37 | Create an SSH public key for your project: 38 | 39 | ```bash 40 | $ gcloud compute project-info add-metadata \ 41 | --metadata ssh-keys="raddit-user:$(cat ~/.ssh/raddit-user.pub)" 42 | ``` 43 | 44 | Add the SSH private key to the ssh-agent: 45 | 46 | ``` 47 | $ ssh-add ~/.ssh/raddit-user 48 | ``` 49 | 50 | Verify that the key was added to the ssh-agent: 51 | 52 | ```bash 53 | $ ssh-add -l 54 | ``` 55 | 56 | ## Install Application Dependencies 57 | 58 | To start the application, you need to first configure the environment for running it. 59 | 60 | Connect to the started VM via SSH: 61 | 62 | ```bash 63 | $ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-2) 64 | $ ssh raddit-user@${INSTANCE_IP} 65 | ``` 66 | 67 | Install Ruby: 68 | 69 | ```bash 70 | $ sudo apt-get update 71 | $ sudo apt-get install -y ruby-full build-essential 72 | ``` 73 | 74 | Check the installed version of Ruby: 75 | 76 | ```bash 77 | $ ruby -v 78 | ``` 79 | 80 | Install Bundler: 81 | 82 | ```bash 83 | $ sudo gem install --no-rdoc --no-ri bundler 84 | $ bundle version 85 | ``` 86 | 87 | Clone the [application repo](https://github.com/Artemmkin/raddit), but first make sure `git` is installed: 88 | ```bash 89 | $ git version 90 | ``` 91 | 92 | At the time of writing the latest image of Ubuntu 16.04 which GCP provides has `git` preinstalled, so we can skip this step. 
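In case you ever use a different image where `git` is missing, installing it on the VM is a single command:

```bash
$ sudo apt-get update
$ sudo apt-get install -y git
```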
93 | 94 | Clone the application repo into the home directory of `raddit-user` user: 95 | 96 | ```bash 97 | $ git clone https://github.com/Artemmkin/raddit.git 98 | ``` 99 | 100 | Install application dependencies using Bundler: 101 | 102 | ```bash 103 | $ cd ./raddit 104 | $ sudo bundle install 105 | ``` 106 | 107 | ## Prepare Database 108 | 109 | Install MongoDB which your application uses: 110 | 111 | ```bash 112 | $ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv EA312927 113 | $ echo "deb http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.2.list 114 | $ sudo apt-get update 115 | $ sudo apt-get install -y mongodb-org 116 | ``` 117 | 118 | Start MongoDB and enable autostart: 119 | 120 | ```bash 121 | $ sudo systemctl start mongod 122 | $ sudo systemctl enable mongod 123 | ``` 124 | 125 | Verify that MongoDB is running: 126 | 127 | ```bash 128 | $ sudo systemctl status mongod 129 | ``` 130 | 131 | ## Start the Application 132 | 133 | Download a systemd unit file for starting the application from a gist: 134 | 135 | ```bash 136 | $ wget https://gist.githubusercontent.com/Artemmkin/ce82397cfc69d912df9cd648a8d69bec/raw/7193a36c9661c6b90e7e482d256865f085a853f2/raddit.service 137 | ``` 138 | 139 | Move it to the systemd directory 140 | 141 | ```bash 142 | $ sudo mv raddit.service /etc/systemd/system/raddit.service 143 | ``` 144 | 145 | Now start the application and enable autostart: 146 | 147 | ```bash 148 | $ sudo systemctl start raddit 149 | $ sudo systemctl enable raddit 150 | ``` 151 | 152 | Verify that it's running: 153 | 154 | ```bash 155 | $ sudo systemctl status raddit 156 | ``` 157 | 158 | ## Access the Application 159 | 160 | Open a firewall port the application is listening on (note that the following command should be run on your local machine): 161 | 162 | ```bash 163 | $ gcloud compute firewall-rules create allow-raddit-tcp-9292 \ 164 | --network default \ 165 | --action allow \ 166 | --direction ingress \ 167 | --rules tcp:9292 \ 168 | --source-ranges 0.0.0.0/0 169 | ``` 170 | 171 | Get the public IP of the VM: 172 | 173 | ```bash 174 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-2 175 | ``` 176 | 177 | Now open your browser and try to reach the application at the public IP and port 9292. 178 | 179 | For example, I put in my browser the following URL http://104.155.1.152:9292, but note that you'll have your own IP address. 180 | 181 | ## Conclusion 182 | 183 | Congrats! You've just deployed your application. It is running on a dedicated set of compute resources in the cloud and is accessible by a public IP. Now Internet users can enjoy using your application. 184 | 185 | Now that you've got the idea of what sort of steps you have to take to deploy your code from your local machine to a virtual server running in the cloud, let's see how we can do it more efficiently. 186 | 187 | Destroy the current VM and move to the next step: 188 | 189 | ```bash 190 | $ gcloud compute instances delete raddit-instance-2 191 | ``` 192 | 193 | Next: [Scripts](03-scripts.md) 194 | -------------------------------------------------------------------------------- /docs/09-docker-compose.md: -------------------------------------------------------------------------------- 1 | ## Docker Compose 2 | 3 | In the last lab, we learned how to create Docker container images using Dockerfile and implementing Infrastructure as Code approach. 
4 | 5 | This time we'll learn how to describe in code and manage our local container infrastructure with [Docker Compose](https://docs.docker.com/compose/overview/). 6 | 7 | ## Intro 8 | 9 | Remember how in the previous lab we had to use a lot of `docker` CLI commands in order to run our application locally? Specifically, we had to create a network for containers to communicate, a volume for container with MongoDB, launch MongoDB container, launch our application container. 10 | 11 | This is a lot of manual work and we only have 2 containers in our setup. Imagine how much work it would be to run a microservices application which includes a dozen of services. 12 | 13 | To make the management of our local container infrastructure easier and more reliable, we need a tool that would allow us to describe the desired state of a local environment and then it would create it from our description. 14 | 15 | **Docker Compose** is exactly the tool we need. Let's see how we can use it. 16 | 17 | ## Install Docker Compose 18 | 19 | Follow the official documentation on [how to install Docker Compose](https://docs.docker.com/compose/install/) on your system. 20 | 21 | Verify that installed version of Docker Compose is => 1.18.0: 22 | 23 | ```bash 24 | $ docker-compose -v 25 | ``` 26 | 27 | ## Describe Local Container Infrastructure 28 | 29 | Docker Compose could be compared to Terraform, but it manages only Docker container infrastructure. It allows us to start containers, create networks and volumes, pass environment variables to containers, publish ports, etc. 30 | 31 | Let's use Docker Compose [declarative syntax](https://docs.docker.com/compose/compose-file/) to describe what our local container infrastructure should look like. 32 | 33 | Create a file called `docker-compose.yml` inside your `iac-tutorial` repo with the following content: 34 | 35 | ```yml 36 | version: '3.3' 37 | 38 | # define services (containers) that should be running 39 | services: 40 | mongo-database: 41 | image: mongo:3.2 42 | # what volumes to attach to this container 43 | volumes: 44 | - mongo-data:/data/db 45 | # what networks to attach this container 46 | networks: 47 | - raddit-network 48 | 49 | raddit-app: 50 | # path to Dockerfile to build an image and start a container 51 | build: . 52 | environment: 53 | - DATABASE_HOST=mongo-database 54 | ports: 55 | - 9292:9292 56 | networks: 57 | - raddit-network 58 | # start raddit-app only after mongod-database service was started 59 | depends_on: 60 | - mongo-database 61 | 62 | # define volumes to be created 63 | volumes: 64 | mongo-data: 65 | # define networks to be created 66 | networks: 67 | raddit-network: 68 | ``` 69 | 70 | In this compose file, we define 3 sections for configuring different components of our container infrastructure. 71 | 72 | Under the **services** section we define what containers we want to run. We give each service a `name` and pass the options such as what `image` to use to launch container for this service, what `volumes` and `networks` should be attached to this container. 
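As a quick sanity check, you can ask Docker Compose to parse and print the resolved configuration; syntax errors will be reported immediately (run it from the directory containing `docker-compose.yml`):

```bash
$ docker-compose config
```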
73 | 74 | If you look at `mongo-database` service definition, you should find it to be very similar to the docker command that we used to start MongoDB container in the previous lab: 75 | 76 | ```bash 77 | $ docker run --name mongo-database \ 78 | --volume mongo-data:/data/db \ 79 | --network raddit-network \ 80 | --detach mongo:3.2 81 | ``` 82 | 83 | So the syntax of Docker Compose can be easily understood by a person not even familiar with it [the documentation](https://docs.docker.com/compose/compose-file/#service-configuration-reference). 84 | 85 | `raddit-app` services configuration is a bit different from MongoDB service in a way that we specify a `build` option instead of `image` to build the container image from a Dockerfile before starting a container: 86 | 87 | ```yml 88 | raddit-app: 89 | # path to Dockerfile to build an image and start a container 90 | build: . 91 | environment: 92 | - DATABASE_HOST=mongo-database 93 | ports: 94 | - 9292:9292 95 | networks: 96 | - raddit-network 97 | # start raddit-app only after mongod-database service was started 98 | depends_on: 99 | - mongo-database 100 | ``` 101 | 102 | Also, note the `depends_on` option which allows us to tell Docker Compose that this `raddit-app` service depends on `mongo-database` service and should be started after `mongo-database` container was launched. 103 | 104 | The other two top-level sections in this file are **volumes** and **networks**. They are used to define volumes and networks that should be created: 105 | 106 | ```yml 107 | # define volumes to be created 108 | volumes: 109 | mongo-data: 110 | # define networks to be created 111 | networks: 112 | raddit-network: 113 | ``` 114 | 115 | These basically correspond to the commands that we used in the previous lab to create a named volume and a network: 116 | 117 | ```bash 118 | $ docker volume create mongo-data 119 | $ docker network create raddit-network 120 | ``` 121 | 122 | ## Create Local Infrastructure 123 | 124 | Once you described the desired state of you infrastructure in `docker-compose.yml` file, tell Docker Compose to create it using the following command: 125 | 126 | ```bash 127 | $ docker-compose up 128 | ``` 129 | 130 | or use this command to run containers in the background: 131 | 132 | ```bash 133 | $ docker-compose up -d 134 | ``` 135 | 136 | ## Access Application 137 | 138 | The application should be accessible to your as before at http://localhost:9292 139 | 140 | ## Save and commit the work 141 | 142 | Save and commit the `docker-compose.yml` file created in this lab into your `iac-tutorial` repo. 143 | 144 | ## Conclusion 145 | 146 | In this lab, we learned how to use Docker Compose tool to implement Infrastructure as Code approach to managing a local container infrastructure. This helped us automate and document the process of creating all the necessary components for running our containerized application. 147 | 148 | If we keep created `docker-compose.yml` file inside the application repository, any of our colleagues can create the same container environment on any system with just one command. This makes Docker Compose a perfect tool for creating local dev environments and simple application deployments. 
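A couple of everyday commands are also handy once the environment is up; for example, to see what Compose is running and to follow the application logs (the `raddit-app` service name comes from the compose file above):

```bash
$ docker-compose ps
$ docker-compose logs -f raddit-app
```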
149 | 150 | To destroy the local playground, run the following command: 151 | 152 | ```bash 153 | $ docker-compose down --volumes 154 | ``` 155 | 156 | Next: [Kubernetes](10-kubernetes.md) 157 | -------------------------------------------------------------------------------- /docs/04-packer.md: -------------------------------------------------------------------------------- 1 | # Packer 2 | 3 | Scripts helped us speed up the process of system configuration, and made it more reliable compared to doing everything manually, but there are still ways for improvement. 4 | 5 | In this lab, we're going to take a look at the first IaC tool in this tutorial called [Packer](https://www.packer.io/) and see how it can help us improve our operations. 6 | 7 | ## Intro 8 | 9 | Remember how in the second lab we had to make sure that the `git` was installed on the VM so that we could clone the application repo? Did it surprise you in a good way that the `git` was already installed on the system and we could skip the installation? 10 | 11 | Imagine how nice it would be to have other required packages like Ruby and Bundler preinstalled on the VM we provision, or have necessary configuration files come with the image, too. This would require even less time and effort from us to configure the system and run our application. 12 | 13 | Luckily, we can create custom machine images with required configuration and software installed using Packer. Let's check it out. 14 | 15 | ## Install Packer 16 | 17 | [Download](https://www.packer.io/downloads.html) and install Packer onto your system. 18 | 19 | Check the version to verify that it was installed: 20 | 21 | ```bash 22 | $ packer -v 23 | ``` 24 | 25 | ## Infrastructure as Code project 26 | 27 | Create a new directory called `packer` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. 28 | 29 | ## Define image builder 30 | 31 | The way Packer works is simple. It starts a VM with specified characteristics, configures the operating system and installs the software you specify, and then it creates a machine image from that VM. 32 | 33 | The part of packer responsible for starting a VM and creating an image from it is called [builder](https://www.packer.io/docs/builders/index.html). 34 | 35 | So before using packer to create images, we need to define a builder configuration in a JSON file (which is called **template** in Packer terminology). 36 | 37 | Create a `raddit-base-image.json` file inside the `packer` directory with the following content (make sure to change the project ID and zone in case it's different): 38 | 39 | ```json 40 | { 41 | "builders": [ 42 | { 43 | "type": "googlecompute", 44 | "project_id": "infrastructure-as-code", 45 | "zone": "europe-west1-b", 46 | "machine_type": "g1-small", 47 | "source_image_family": "ubuntu-1604-lts", 48 | "image_name": "raddit-base-{{isotime `20060102-150405`}}", 49 | "image_family": "raddit-base", 50 | "image_description": "Ubuntu 16.04 with Ruby, Bundler and MongoDB preinstalled", 51 | "ssh_username": "raddit-user" 52 | } 53 | ] 54 | } 55 | ``` 56 | 57 | This template describes where and what type of a VM to launch for image creation (`type`, `project_id`, `zone`, `machine_type`, `source_image_family`). It also defines image saving configuration such as under which name (`image_name`) and image family (`image_family`) the resulting image should be saved and what description to give it (`image_description`). SSH user configuration is used by provisioners which will talk about later. 
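If you want to double-check that the source image family referenced in the template exists before building, you can ask `gcloud` which image it currently resolves to:

```bash
$ gcloud compute images describe-from-family ubuntu-1604-lts --project ubuntu-os-cloud
```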
58 | 59 | Validate the template: 60 | 61 | ```bash 62 | $ packer validate ./packer/raddit-base-image.json 63 | ``` 64 | 65 | ## Define image provisioner 66 | 67 | As we already mentioned, builders are only responsible for starting a VM and creating an image from that VM. The real work of system configuration and installing software on the running VM is done by another Packer component called **provisioner**. 68 | 69 | Add a [shell provisioner](https://www.packer.io/docs/provisioners/shell.html) to your template to run the `configuration.sh` script you created in the previous lab. 70 | 71 | Your template should look similar to this one: 72 | 73 | ```json 74 | { 75 | "builders": [ 76 | { 77 | "type": "googlecompute", 78 | "project_id": "infrastructure-as-code", 79 | "zone": "europe-west1-b", 80 | "machine_type": "g1-small", 81 | "source_image_family": "ubuntu-1604-lts", 82 | "image_name": "raddit-base-{{isotime `20060102-150405`}}", 83 | "image_family": "raddit-base", 84 | "image_description": "Ubuntu 16.04 with Ruby, Bundler and MongoDB preinstalled", 85 | "ssh_username": "raddit-user" 86 | } 87 | ], 88 | "provisioners": [ 89 | { 90 | "type": "shell", 91 | "script": "{{template_dir}}/../scripts/configuration.sh", 92 | "execute_command": "sudo {{.Path}}" 93 | } 94 | ] 95 | } 96 | ``` 97 | 98 | Make sure the template is valid: 99 | 100 | ```bash 101 | $ packer validate ./packer/raddit-base-image.json 102 | ``` 103 | 104 | ## Create custom machine image 105 | 106 | Build the image for your application: 107 | 108 | ```bash 109 | $ packer build ./packer/raddit-base-image.json 110 | ``` 111 | 112 | ## Launch a VM with your custom built machine image 113 | 114 | Once the image is built, use it as a boot disk to start a VM: 115 | 116 | ```bash 117 | $ gcloud compute instances create raddit-instance-4 \ 118 | --image-family raddit-base \ 119 | --boot-disk-size 10GB \ 120 | --machine-type n1-standard-1 121 | ``` 122 | 123 | ## Deploy Application 124 | 125 | Copy `deploy.sh` script to the created VM: 126 | 127 | ```bash 128 | $ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-4) 129 | $ scp ./scripts/deploy.sh raddit-user@${INSTANCE_IP}:/home/raddit-user 130 | ``` 131 | 132 | Connect to the VM via SSH: 133 | 134 | ```bash 135 | $ ssh raddit-user@${INSTANCE_IP} 136 | ``` 137 | 138 | Verify Ruby, Bundler and MongoDB are installed: 139 | 140 | ```bash 141 | $ ruby -v 142 | $ bundle version 143 | $ sudo systemctl status mongod 144 | ``` 145 | 146 | Run deployment script: 147 | 148 | ```bash 149 | $ chmod +x ./deploy.sh 150 | $ ./deploy.sh 151 | ``` 152 | 153 | ## Access Application 154 | 155 | Access the application in your browser by its public IP (don't forget to specify the port 9292). 156 | 157 | Open another terminal and run the following command to get a public IP of the VM: 158 | 159 | ```bash 160 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance-4 161 | ``` 162 | 163 | ## Save and commit the work 164 | 165 | Save and commit the packer template created in this lab into your `iac-tutorial` repo. 166 | 167 | ## Learning more about Packer 168 | 169 | Packer configuration files are called templates for a reason. They often get parameterized with [user variables](https://www.packer.io/docs/templates/user-variables.html). 
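For example, if the template declared a variable (say `machine_type`, a hypothetical name that is not present in the template above), you could override it at build time like this:

```bash
$ packer build -var 'machine_type=g1-small' ./packer/raddit-base-image.json
```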
This could be very helpful since you can create multiple machine images with different configuration and for different purposes using one template file. 170 | 171 | Adding user variables to a template is easy, follow the [documentation](https://www.packer.io/docs/templates/user-variables.html) on how to do that. 172 | 173 | ## Immutable infrastructure 174 | 175 | You may wonder why not to put everything inside the image including the application? Well, this approach is called an [immutable infrastructure](https://martinfowler.com/bliki/ImmutableServer.html). It is based on the idea `we build it once, and we never change it`. 176 | 177 | It has advantages of spending less time (zero in this case) on system configuration after VM's start, and prevents **configuration drift**, but it's also not easy to implement. 178 | 179 | ## Conclusion 180 | 181 | In this lab you've used Packer to create a custom machine image for running your application. 182 | 183 | The advantages of its usage are quite obvious: 184 | 185 | * `It requires less time and effort to configure a new VM for running the application` 186 | * `System configuration becomes more reliable.` When we start a new VM to deploy the application, we know for sure that it has the right packages installed and configured properly, since we built and tested the image. 187 | 188 | Destroy the current VM and move onto the next lab: 189 | 190 | ```bash 191 | $ gcloud compute instances delete raddit-instance-4 192 | ``` 193 | 194 | Next: [Terraform](05-terraform.md) 195 | -------------------------------------------------------------------------------- /docs/08-docker.md: -------------------------------------------------------------------------------- 1 | ## Docker 2 | 3 | In this lab, we will talk about managing containers for the first time in this tutorial. Particularly, we will talk about [Docker](https://www.docker.com/what-docker) which is the most widely used platform for running containers. 4 | 5 | ## Intro 6 | 7 | Remember when we talked about packer, we mentioned a few words about `Immutable Infrastructure` model? The idea was to package all application dependencies and application itself inside a machine image, so that we don't have to configure the system after start. Containers implement the same model, but they do it in a more efficient way. 8 | 9 | Containers allow you to create self-contained isolated environments for running your applications. 10 | 11 | They have some significant advantages over VMs in terms of implementing Immutable Infrastructure model: 12 | 13 | * `Containers are much faster to start than VMs.` Container starts in seconds, while a VM takes minutes. It's important when you're doing an update/rollback or scaling your service. 14 | * `Containers enable better utilization of compute resources.` Very often computer resources of a VM running an application are underutilized. Launching multiple instances of the same application on one VM has a lot of difficulties: different application versions may need different versions of dependent libraries, init scripts require special configuration. With containers, running multiple instances of the same application on the same machine is easy and doesn't require any system configuration. 15 | * `Containers are more lightweight than VMs.` Container images are much smaller than machine images, because they don't need a full operating system in order to run. In fact, a container image can include just a single binary and take just a few MBs of your disk space. 
This means that we need less space for storing the images and the process of distributing images goes faster. 16 | 17 | Let's try to implement `Immutable Infrastructure` model with Docker containers, while paying special attention to the `Dockerfile` part as a way to practice `Infrastructure as Code` approach. 18 | 19 | ## Install Docker Engine 20 | 21 | The [Docker Engine](https://docs.docker.com/engine/docker-overview/#docker-engine) is the daemon that gets installed on the system and allows you to manage containers with simple CLI. 22 | 23 | [Install](https://www.docker.com/community-edition) free Community Edition of Docker Engine on your system. 24 | 25 | Verify that the version of Docker Engine is => 17.09.0: 26 | 27 | ```bash 28 | $ docker -v 29 | ``` 30 | 31 | ## Create Dockerfile 32 | 33 | You describe a container image that you want to create in a special file called **Dockerfile**. 34 | 35 | Dockerfile contains `instructions` on how the image should be built. Here are some of the most common instructions that you can meet in a Dockerfile: 36 | 37 | * `FROM` is used to specify a `base image` for this build. It's similar to the builder configuration which we defined in a Packer template, but in this case instead of describing characteristics of a VM, we simply specify a name of a container image used for build. This should be the first instruction in the Dockerfile. 38 | * `ADD` and `COPY` are used to copy a file/directory to the container. See the [difference](https://stackoverflow.com/questions/24958140/what-is-the-difference-between-the-copy-and-add-commands-in-a-dockerfile) between the two. 39 | * `RUN` is used to run a command inside the image. Mostly used for installing packages. 40 | * `ENV` sets an environment variable available within the container. 41 | * `WORKDIR` changes the working directory of the container to a specified path. It basically works like a `cd` command on Linux. 42 | * `CMD` sets a default command, which will be executed when a container starts. This should be a command to start your application. 43 | 44 | Let's use these instructions to create a Docker container image for our raddit application. 45 | 46 | Create a file called `Dockerfile` inside your `iac-tutorial` repo with the following content: 47 | 48 | ``` 49 | # Use base image with Ruby installed 50 | FROM ruby:2.3 51 | 52 | # install required system packages 53 | RUN apt-get update -qq && \ 54 | apt-get install -y build-essential 55 | 56 | # create application directory and install dependencies 57 | ENV APP_HOME /app 58 | RUN mkdir $APP_HOME 59 | WORKDIR $APP_HOME 60 | COPY raddit-app/Gemfile* $APP_HOME/ 61 | RUN bundle install 62 | 63 | # Copy the application code to the container 64 | ADD raddit-app/ $APP_HOME 65 | # Run "puma" command on container's start 66 | CMD ["puma"] 67 | ``` 68 | 69 | This Dockerfile repeats the steps that we did multiple times by now to configure a running environment for our application and run it. 70 | 71 | We first choose an image that already contains Ruby of required version: 72 | ``` 73 | # Use base image with Ruby installed 74 | FROM ruby:2.3 75 | ``` 76 | 77 | The base image is downloaded from Docker official registry (storage of images) called [Docker Hub](https://hub.docker.com/). 
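If you're curious what the base image contains, you can pull it ahead of time and inspect it locally, for example:

```bash
$ docker pull ruby:2.3
$ docker images ruby
```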
78 | 79 | We then install required system packages and application dependencies: 80 | 81 | ``` 82 | # install required system packages 83 | RUN apt-get update -qq && \ 84 | apt-get install -y build-essential 85 | 86 | # create application home directory and install dependencies 87 | ENV APP_HOME /app 88 | RUN mkdir $APP_HOME 89 | WORKDIR $APP_HOME 90 | COPY raddit-app/Gemfile* $APP_HOME/ 91 | RUN bundle install 92 | ``` 93 | 94 | Then we copy the directory with application code and specify a default command that should be run when a container from this image starts: 95 | 96 | ``` 97 | # Copy the application code to the container 98 | ADD raddit-app/ $APP_HOME 99 | # Run "puma" command on container's start 100 | CMD ["puma"] 101 | ``` 102 | 103 | ## Build Container Image 104 | 105 | Once you defined how your image should be built, run the following command inside `iac-tutorial` directory to create a container image for raddit application: 106 | 107 | ```bash 108 | $ docker build --tag raddit . 109 | ``` 110 | 111 | The resulting image will be named `raddit`. Find it in the list of your local images: 112 | 113 | ```bash 114 | $ docker images | grep raddit 115 | ``` 116 | 117 | ## Bridge Network 118 | 119 | We are going to run multiple containers in this setup. To allow containers communicate with each other by container names, we'll create a [user-defined bridge network](https://docs.docker.com/engine/userguide/networking/#user-defined-networks): 120 | 121 | ```bash 122 | $ docker network create raddit-network 123 | ``` 124 | 125 | Verify that the network was created: 126 | 127 | ```bash 128 | $ docker network ls 129 | ``` 130 | 131 | ## MongoDB Container 132 | 133 | We shouldn't forget that we also need a MongoDB for our application to work. 134 | 135 | The philosophy behind containers is that we create one container per process. So we'll run MongoDB in another container. 136 | 137 | We will use a public image from Docker Hub to run a MongoDB container alongside raddit application container. However, I recommend you for the sake of practice write a Dockerfile for MongoDB and create your own image. 138 | 139 | Because MongoDB is a stateful service, we'll first create a named volume for it to persist the data beyond the container removal. 140 | 141 | ```bash 142 | $ docker volume create mongo-data 143 | ``` 144 | 145 | Check that volume was created: 146 | 147 | ```bash 148 | $ docker volume ls | grep mongo-data 149 | ``` 150 | 151 | Now run the following command to download a MongodDB image and start a container from it: 152 | 153 | ```bash 154 | $ docker run --name mongo-database \ 155 | --volume mongo-data:/data/db \ 156 | --network raddit-network \ 157 | --detach mongo:3.2 158 | ``` 159 | 160 | Verify that the container is running: 161 | 162 | ```bash 163 | $ docker container ls 164 | ``` 165 | 166 | ## Start Application Container 167 | 168 | Start the application container from the image you've built: 169 | 170 | ```bash 171 | $ docker run --name raddit-app \ 172 | --env DATABASE_HOST=mongo-database \ 173 | --network raddit-network \ 174 | --publish 9292:9292 \ 175 | --detach raddit 176 | ``` 177 | 178 | Note, how we also passed an environment variable with the command to the application container. Since MongoDB is not reachable at `localhost` as it was in the previous labs, [we need to pass the environment variable with MongoDB address](https://github.com/Artemmkin/iac-tutorial/blob/master/raddit-app/app.rb#L11) to tell our application where to connect. 
Automatic DNS resolution of container names within a user-defined network makes it possible to simply pass the name of a MongoDB container instead of an IP address. 179 | 180 | Port mapping option (`--publish`) that we passed to the command is used to make the container reachable to the outsite world. 181 | 182 | ## Access Application 183 | 184 | The application should be accessible to your at http://localhost:9292 185 | 186 | ## Save and commit the work 187 | 188 | Save and commit the `Dockerfile` created in this lab into your `iac-tutorial` repo. 189 | 190 | ## Conclusion 191 | 192 | In this lab, you adopted containers for running your application. This is a different type of technology from what we used to deal with in the previous labs. Nevertheless, we use Infrastructure as Code approach here, too. 193 | 194 | We describe the configuration of our container image in a Dockerfile using Dockerfile's syntax. We then save that Dockefile in our application repository. This way we can build the application image consistently across any environments. 195 | 196 | Destroy the current playground before moving on to the next lab. 197 | 198 | ```bash 199 | $ docker rm -f mongo-database 200 | $ docker rm -f raddit-app 201 | $ docker volume rm mongo-data 202 | $ docker network rm raddit-network 203 | ``` 204 | 205 | Next: [Docker Compose](09-docker-compose.md) 206 | -------------------------------------------------------------------------------- /docs/05-terraform.md: -------------------------------------------------------------------------------- 1 | # Terraform 2 | 3 | In the previous lab, you used Packer to make your system configuration faster and more reliable. But we still have a lot to improve. 4 | 5 | In this lab, we're going to learn about another IaC tool by HashiCorp called [Terraform](https://www.terraform.io/). 6 | 7 | ## Intro 8 | 9 | Think about your current operations... 10 | 11 | Do you see any problems you may have, or any ways for improvement? 12 | 13 | Remember, that each time we want to deploy an application, we have to `provision` compute resources first, that is to start a new VM. 14 | 15 | We do it via a `gcloud` command like this: 16 | 17 | ```bash 18 | $ gcloud compute instances create raddit-instance-4 \ 19 | --image-family raddit-base \ 20 | --boot-disk-size 10GB \ 21 | --machine-type n1-standard-1 22 | ``` 23 | 24 | At this stage, it doesn't seem like there are any problems with this. But, in fact, there is. 25 | 26 | Infrastructure for running your services and applications could be huge. You might have tens, hundreds or even thousands of virtual machines, hundreds of firewall rules, multiples VPC networks, and load balancers. In addition to that, the infrastructure could be split between multiple teams and managed separately. Such infrastructure looks very complex and yet should be run and managed in a consistent and predictable way. 27 | 28 | If we create and change infrastructure components using gcloud CLI tool or Web UI Console, over time we won't be able to describe exactly in which `state` our infrastructure is in right now, meaning `we lose control over it`. 29 | 30 | This happens because you tend to forget what changes you've made a few months ago and why you did it. If multiple people are managing infrastructure, this makes things even worse, because you can't know what changes other people are making even though your communication inside the team could be great. 
31 | 32 | So we see here 2 clear problems: 33 | 34 | * we don't know the current state of our infrastructure 35 | * we can't control the changes 36 | 37 | The second problem is dealt by source control tools like `git`, while the first one is solved by using tools like Terraform. Let's find out how. 38 | 39 | ## Install Terraform 40 | 41 | [Download](https://www.terraform.io/downloads.html) and install Terraform on your system. 42 | 43 | Make sure Terraform version is => 0.11.0: 44 | 45 | ```bash 46 | $ terraform -v 47 | ``` 48 | 49 | ## Infrastructure as Code project 50 | 51 | Create a new directory called `terraform` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. 52 | 53 | ## Describe VM instance 54 | 55 | _Terraform allows you to describe the desired state of your infrastructure and makes sure your desired state meets the actual state._ 56 | 57 | Terraform uses **resources** to describe different infrastructure components. If you want to use Terraform to manage some infrastructure component, you should first make sure there is a resource for that component for that particular platform. 58 | 59 | Let's use Terraform syntax to describe a VM instance that we want to be running. 60 | 61 | Create a Terraform configuration file called `main.tf` inside the `terraform` directory with the following content: 62 | 63 | ``` 64 | resource "google_compute_instance" "raddit" { 65 | name = "raddit-instance" 66 | machine_type = "n1-standard-1" 67 | zone = "europe-west1-b" 68 | 69 | # boot disk specifications 70 | boot_disk { 71 | initialize_params { 72 | image = "raddit-base" // use image built with Packer 73 | } 74 | } 75 | 76 | # networks to attach to the VM 77 | network_interface { 78 | network = "default" 79 | access_config {} // use ephemeral public IP 80 | } 81 | } 82 | ``` 83 | 84 | Here we use [google_compute_instance](https://www.terraform.io/docs/providers/google/r/compute_instance.html) resource to manage a VM instance running in Google Cloud Platform. 85 | 86 | ## Define Resource Provider 87 | 88 | One of the advantages of Terraform over other alternatives like [CloudFormation](https://aws.amazon.com/cloudformation/?nc1=h_ls) is that it's `cloud-agnostic`, meaning it can work with many different cloud providers like AWS, GCP, Azure, or OpenStack. It can also work with resources of different services like databases (e.g., PostgreSQL, MySQL), orchestrators (Kubernetes, Nomad) and [others](https://www.terraform.io/docs/providers/). 89 | 90 | This means that Terraform has a pluggable architecture and the pluggable component that allows it to work with a specific platform or service is called **provider**. 91 | 92 | So before we can actually create a VM using Terraform, we need to define a configuration of a [google cloud provider](https://www.terraform.io/docs/providers/google/index.html) and download it on our system. 93 | 94 | Create another file inside `terraform` folder and call it `providers.tf`. Put provider configuration in it: 95 | 96 | ``` 97 | provider "google" { 98 | version = "~> 1.4.0" 99 | project = "infrastructure-as-code" 100 | region = "europe-west1" 101 | } 102 | ``` 103 | 104 | Note the `region` value, this is where terraform will provision resources (you may wish to change it). 105 | 106 | Make sure to change the `project` value in provider's configuration above to your project's ID. 
You can get your default project's ID by running the command: 107 | 108 | ```bash 109 | $ gcloud config list project 110 | ``` 111 | 112 | Now run the `init` command inside `terraform` directory to download the provider: 113 | 114 | ```bash 115 | $ cd ./terraform 116 | $ terraform init 117 | ``` 118 | 119 | ## Bring Infrastructure to a Desired State 120 | 121 | Once we described a desired state of the infrastructure (in our case it's a running VM), let's use Terraform to bring the infrastructure to this state: 122 | 123 | ```bash 124 | $ terraform apply 125 | ``` 126 | 127 | After Terraform ran successfully, use a gcloud command to verify that the machine was indeed launched: 128 | 129 | ```bash 130 | $ gcloud compute instances describe raddit-instance 131 | ``` 132 | 133 | ## Deploy Application 134 | 135 | We did provisioning via Terraform, but we still need to run a script to deploy our application. 136 | 137 | Copy `deploy.sh` script to the created VM: 138 | 139 | ```bash 140 | $ INSTANCE_IP=$(gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance) 141 | $ scp ../scripts/deploy.sh raddit-user@${INSTANCE_IP}:/home/raddit-user 142 | ``` 143 | 144 | Connect to the VM via SSH: 145 | 146 | ```bash 147 | $ ssh raddit-user@${INSTANCE_IP} 148 | ``` 149 | 150 | Run deployment script: 151 | 152 | ```bash 153 | $ chmod +x ./deploy.sh 154 | $ ./deploy.sh 155 | ``` 156 | 157 | ## Access the Application 158 | 159 | Access the application in your browser by its public IP (don't forget to specify the port 9292). 160 | 161 | Open another terminal and run the following command to get a public IP of the VM: 162 | 163 | ```bash 164 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance 165 | ``` 166 | 167 | ## Add other GCP resources into Terraform 168 | 169 | Do you remember how in previous labs we created some GCP resources like SSH project keys and a firewall rule for our application via `gcloud` tool? 170 | 171 | Let's add those into our Terraform configuration so that we know for sure those resources are present. 172 | 173 | First, delete the SSH project key and firewall rule: 174 | 175 | ```bash 176 | $ gcloud compute project-info remove-metadata --keys=ssh-keys 177 | $ gcloud compute firewall-rules delete allow-raddit-tcp-9292 178 | ``` 179 | 180 | Make sure that your application became inaccessible via port 9292 and SSH connection with a private key of `raddit-user` fails. 181 | 182 | Then add appropriate resources into `main.tf` file. 
Your final version of `main.tf` file should look similar to this (change the ssh key file path, if necessary): 183 | 184 | ``` 185 | resource "google_compute_instance" "raddit" { 186 | name = "raddit-instance" 187 | machine_type = "n1-standard-1" 188 | zone = "europe-west1-b" 189 | 190 | # boot disk specifications 191 | boot_disk { 192 | initialize_params { 193 | image = "raddit-base" // use image built with Packer 194 | } 195 | } 196 | 197 | # networks to attach to the VM 198 | network_interface { 199 | network = "default" 200 | access_config {} // use ephemaral public IP 201 | } 202 | } 203 | 204 | resource "google_compute_project_metadata" "raddit" { 205 | metadata { 206 | ssh-keys = "raddit-user:${file("~/.ssh/raddit-user.pub")}" // path to ssh key file 207 | } 208 | } 209 | 210 | resource "google_compute_firewall" "raddit" { 211 | name = "allow-raddit-tcp-9292" 212 | network = "default" 213 | allow { 214 | protocol = "tcp" 215 | ports = ["9292"] 216 | } 217 | source_ranges = ["0.0.0.0/0"] 218 | } 219 | ``` 220 | 221 | Tell Terraform to apply the changes to bring the actual infrastructure state to the desired state we described: 222 | 223 | ```bash 224 | $ terraform apply 225 | ``` 226 | 227 | Verify that the application became accessible again on port 9292 and SSH connection with a private key works. 228 | 229 | ## Create an output variable 230 | 231 | Remember how we often had to use a gcloud command like this to retrive a public IP address of a VM? 232 | 233 | ```bash 234 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance 235 | ``` 236 | 237 | We can tell Terraform to provide us this information using [output variables](https://www.terraform.io/intro/getting-started/outputs.html). 238 | 239 | Create another configuration file inside `terraform` directory and call it `outputs.tf`. Put the following content in it: 240 | 241 | ``` 242 | output "raddit_public_ip" { 243 | value = "${google_compute_instance.raddit.network_interface.0.access_config.0.assigned_nat_ip}" 244 | } 245 | ``` 246 | 247 | Run terraform apply again: 248 | 249 | ```bash 250 | $ terraform apply 251 | ``` 252 | 253 | You should see the public IP of the VM we created. 254 | 255 | Also note, that during this Terraform run, no resources have been created or changed, which means that the actual state of our infrastructure already meets the requirements of a desired state. 256 | 257 | ## Save and commit the work 258 | 259 | Save and commit the `terraform` folder created in this lab into your `iac-tutorial` repo. 260 | 261 | ## Conclusion 262 | 263 | In this lab, you saw in its most obvious way the application of Infrastructure as Code practice. 264 | 265 | We used `code` (Terraform configuration syntax) to describe the `desired state` of the infrastructure. Then we told Terraform to bring the actual state of the infrastructure to the desired state we described. 266 | 267 | With this approach, Terraform configuration becomes `a single source of truth` about the current state of your infrastructure. Moreover, the infrastructure is described as code, so we can apply to it the same practices we commonly use in development such as keeping the code in source control, use peer reviews for making changes, etc. 268 | 269 | All of this helps us get control over even the most complex infrastructure. 270 | 271 | Destroy the resources created by Terraform and move on to the next lab. 
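If you'd like to see exactly what is going to be removed before confirming, Terraform can show a destruction plan first (this is only a preview and changes nothing):

```bash
$ terraform plan -destroy
```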
272 |
273 | ```bash
274 | $ terraform destroy
275 | ```
276 |
277 | Next: [Ansible](06-ansible.md)
278 |
--------------------------------------------------------------------------------
/docs/07-vagrant.md:
--------------------------------------------------------------------------------
1 | ## Vagrant
2 |
3 | In this lab, we're going to learn about [Vagrant](https://www.vagrantup.com/), another tool that implements the IaC approach and is often used for creating development environments.
4 |
5 | ## Intro
6 |
7 | Before this lab, our main focus was on how to create and manage an environment where our application runs and is accessible to the public. Let's call that environment `production` for simplicity of reference later on.
8 |
9 | But what about our local environment where we develop the code? Are there any problems with it?
10 |
11 | Running our application locally would require us to install all of its dependencies and configure the local system pretty much the same way as we did in the previous labs.
12 |
13 | There are a few reasons why you don't want to do that:
14 |
15 | * `This can break your system`. When you change your system configuration, there are a lot of things that can go wrong. For example, when installing/removing different packages you can easily mess up the work of your system's package manager.
16 | * `When something breaks in your system configuration, it can take a long time to fix`. If you've messed up your local system configuration, you either need to debug it or reinstall your OS. Both can take a lot of your time and should be avoided.
17 | * `You have no idea what your development environment actually looks like`. Your local OS will certainly have its own specific configuration and packages installed, because you use it for everyday tasks other than just running your application. For this reason, even if your application works on your local machine, you cannot describe exactly what is required for it to run. This is commonly known as the `works on my machine` problem and is often one of the reasons for conflict between Dev and Ops.
18 |
19 | Based on these problems, let's draw up some requirements for our local dev environment:
20 |
21 | * `We should know exactly what is inside.` This is important, so that we can properly configure other environments for running the application.
22 | * `Isolation from our local system.` This leaves us with the choice of a local/remote VM or containers.
23 | * `Ability to quickly and easily recreate it when it breaks.`
24 |
25 | Vagrant is a tool that allows us to meet all of these requirements. Let's find out how.
26 |
27 | ## Install Vagrant and VirtualBox
28 |
29 | NOTE: this lab assumes Vagrant `v2.0.1` is installed. It may not work as expected on other versions.
30 |
31 | [Download](https://www.vagrantup.com/downloads.html) and install Vagrant on your system.
32 |
33 | Verify that Vagrant was successfully installed by checking the version:
34 |
35 | ```bash
36 | $ vagrant -v
37 | ```
38 |
39 | [Download](https://www.virtualbox.org/wiki/Downloads) and install VirtualBox for running virtual machines locally.
40 |
41 | Also, make sure the virtualization feature is enabled for your CPU. You may need to check your BIOS settings for this.
42 |
43 | ## Create a Vagrantfile
44 |
45 | If we compare Vagrant to the tools we've already learned, it most closely resembles Terraform.
Like Terraform, Vagrant allows you to declaratively describe VMs you want to provision, but it focuses on managing VMs (and containers) exclusively, so it's no good for things like firewall rules or VPC networks in the cloud. 46 | 47 | To start a local VM using Vagrant, we need to define its characteristics in a special file called `Vagrantfile`. 48 | 49 | Create a file named `Vagrantfile` inside `iac-tutorial` directory with the following content: 50 | 51 | ```ruby 52 | Vagrant.configure("2") do |config| 53 | # define provider configuration 54 | config.vm.provider :virtualbox do |v| 55 | v.memory = 1024 56 | end 57 | # define a VM machine configuration 58 | config.vm.define "raddit-app" do |app| 59 | app.vm.box = "ubuntu/xenial64" 60 | app.vm.hostname = "raddit-app" 61 | end 62 | end 63 | ``` 64 | 65 | Vagrant, like Terraform, doesn't start VMs itself. It uses a `provider` component to communicate the instructions to the actual provider of infrastructure resources. 66 | 67 | In this case, we redefine Vagrant's default provider (VirtualBox) configuration to allocate 1024 MB of memory to each VM defined in this Vagrantfile: 68 | 69 | ```ruby 70 | # define provider configuration 71 | config.vm.provider :virtualbox do |v| 72 | v.memory = 1024 73 | end 74 | ``` 75 | 76 | We also specify characteristics of a VM we want to launch: what machine image (`box`) to use (Vagrant downloads a box from [Vagrant Cloud](https://www.vagrantup.com/docs/vagrant-cloud/boxes/catalog.html)), and what hostname to assign to a started VM: 77 | 78 | ```ruby 79 | # define a VM machine configuration 80 | config.vm.define "raddit-app" do |app| 81 | app.vm.box = "ubuntu/xenial64" 82 | app.vm.hostname = "raddit-app" 83 | end 84 | ``` 85 | 86 | ## Start a Local VM 87 | 88 | With the Vagrantfile created, you can start a VM on your local machine using Ubuntu 16.04 image from Vagrant Cloud. 89 | 90 | Run the following command inside the folder with your Vagrantfile: 91 | 92 | ```bash 93 | $ vagrant up 94 | ``` 95 | 96 | Check the current status of the VM: 97 | 98 | ```bash 99 | $ vagrant status 100 | ``` 101 | 102 | You can connect to a started VM via SSH using the following command: 103 | 104 | ```bash 105 | $ vagrant ssh 106 | ``` 107 | 108 | ## Configure Dev Environment 109 | 110 | Now that you have a VM running on your local machine, you need to configure it to run your application: install ruby, mongodb, etc. 111 | 112 | There are many ways you can do that, which are known to you by now. You can configure the environment manually, using scripts or some CM tool like Ansible. 113 | 114 | _It's best to use the same configuration and the same CM tools across all of your environments._ 115 | 116 | As we've already discussed, your application may work in your local environment, but it may not work on a remote VM running in production environment, because of the differences in configuration. But when your configuration is the same across all of your environments, the application will not fail for reasons like a missing package and the system configuration can generally be excluded as a potential cause of a failure when it occurs. 117 | 118 | Because we chose to use Ansible for configuring our production environment in the previous lab, let's use it for configuration management of our dev environment, too. 
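One thing to verify before we change the Vagrantfile: Vagrant's `ansible` provisioner runs Ansible from your host machine (not inside the VM), so the Ansible installation from the previous lab needs to be available locally. A quick sanity check:

```bash
# the "ansible" provisioner runs Ansible on the host, so it must be installed there
$ ansible --version
```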
119 | 120 | Change your Vagrantfile to look like this: 121 | 122 | ```ruby 123 | Vagrant.configure("2") do |config| 124 | # define provider configuration 125 | config.vm.provider :virtualbox do |v| 126 | v.memory = 1024 127 | end 128 | # define a VM configuration 129 | config.vm.define "raddit-app" do |app| 130 | app.vm.box = "ubuntu/xenial64" 131 | app.vm.hostname = "raddit-app" 132 | # sync a local folder with application code to the VM folder 133 | app.vm.synced_folder "raddit-app/", "/srv/raddit-app" 134 | # use port forwarding make application accessible on localhost 135 | app.vm.network "forwarded_port", guest: 9292, host: 9292 136 | # system configuration is done by Ansible 137 | app.vm.provision "ansible" do |ansible| 138 | ansible.playbook = "ansible/configuration.yml" 139 | end 140 | end 141 | end 142 | ``` 143 | 144 | We added Ansible provisioning to the Vagrantfile which allows us to run a playbook for system configuration. 145 | 146 | ```ruby 147 | # system configuration is done by Ansible 148 | app.vm.provision "ansible" do |ansible| 149 | ansible.playbook = "ansible/configuration.yml" 150 | end 151 | ``` 152 | 153 | In the previous lab, it was given to you as a task to create a `configuration.yml` playbook that provides the same functionality as `configuration.sh` script we had used before. If you did not do that, you can copy the playbook from [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml) (place it inside `ansible` directory). If you did create your own playbook, make sure you have a `pre_tasks` section as in [this example](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml). 154 | 155 | Note, that we also added a port forwarding rule for accessing our application and instructed Vagrant to sync a local folder with application code to a specified VM folder (`/srv/raddit-app`): 156 | 157 | ```ruby 158 | # sync a local folder with application code to the VM folder 159 | app.vm.synced_folder "raddit-app/", "/srv/raddit-app" 160 | # use port forwarding make application accessible on localhost 161 | app.vm.network "forwarded_port", guest: 9292, host: 9292 162 | ``` 163 | 164 | Now run the following command to configure the local dev environment: 165 | 166 | ```bash 167 | $ vagrant provision 168 | ``` 169 | 170 | Verify the configuration: 171 | 172 | ```bash 173 | $ vagrant ssh 174 | $ ruby -v 175 | $ bundle version 176 | $ sudo systemctl status mongod 177 | ``` 178 | 179 | ## Run Application Locally 180 | 181 | As we mentioned, we gave Vagrant the instruction to sync our folder with application to a VM's folder under the specified path. This way we can develop the application on our host machine using our favorite code editor and then run that code inside the VM. 182 | 183 | We need to first reload a VM for chages in our Vagrantfile to take effect: 184 | 185 | ```bash 186 | $ vagrant reload 187 | ``` 188 | 189 | Then connect to the VM to start application: 190 | 191 | ```bash 192 | $ vagrant ssh 193 | $ cd /srv/raddit-app 194 | $ sudo bundle install 195 | $ puma 196 | ``` 197 | 198 | The application should be accessible to you now at the following URL: http://localhost:9292 199 | 200 | Stop the application using `ctrl + C` keys. 201 | 202 | ## Mess Up Dev Environment 203 | 204 | One of our requirements to local dev environment was that you can freely mess it up and recreate in no time. 205 | 206 | Let's try that. 
207 | 208 | Delete Ruby on the VM: 209 | ```bash 210 | $ vagrant ssh 211 | $ sudo apt-get -y purge ruby 212 | $ ruby -v 213 | ``` 214 | 215 | Try to run your application again (it should fail): 216 | 217 | ```bash 218 | $ cd /srv/raddit-app 219 | $ puma 220 | ``` 221 | 222 | ## Recreate Dev Environment 223 | 224 | Let's try to recreate our dev environment from scratch to see how big of a problem it will be. 225 | 226 | Run the following commands to destroy the current dev environment and create a new one: 227 | 228 | ```bash 229 | $ vagrant destroy -f 230 | $ vagrant up 231 | ``` 232 | 233 | Once a new VM is up and running, try to launch your app in it: 234 | 235 | ```bash 236 | $ vagrant ssh 237 | $ ruby -v 238 | $ cd /srv/raddit-app 239 | $ sudo bundle install 240 | $ puma 241 | ``` 242 | 243 | The Ruby package should be present and the application should run without problems. 244 | 245 | Recreating a new dev environment was easy, took very little time and it didn't affect our host OS. That's exactly what we needed. 246 | 247 | ## Save and commit the work 248 | 249 | Save and commit the Vagrantfile created in this lab into your `iac-tutorial` repo. 250 | 251 | ## Conclusion 252 | 253 | Vagrant was able to meet our requirements for dev environments. It makes creating/recreating and configuring a dev environment easy and safe for our host operating system. 254 | 255 | Because we describe our local infrastructure in code in a Vagrantfile, we keep it in source control and make sure all our other colleagues have the same environment for the application as we do. 256 | 257 | Destroy the VM: 258 | 259 | ```bash 260 | $ vagrant destroy -f 261 | ``` 262 | 263 | Next: [Docker](08-docker.md) 264 | -------------------------------------------------------------------------------- /docs/06-ansible.md: -------------------------------------------------------------------------------- 1 | # Ansible 2 | 3 | In the previous lab, you used Terraform to implement Infrastructure as Code approach to managing the cloud infrastructure resources. Yet, we have another type of tooling to discover and that is **Configuration Management** (CM) tools. 4 | 5 | When talking about CM tools, we can often meet the acronym `CAPS` which stands for Chef, Ansible, Puppet and Saltstack - the most known and commonly used CM tools. In this lab, we're going to look at Ansible and see how CM tools can help us improve our operations. 6 | 7 | ## Intro 8 | 9 | If you think about our current operations and what else there is to improve, you will probably see the potential problem in the deployment process. 10 | 11 | The way we do deployment right now is by connecting via SSH to a VM and running a deployment script. And the problem here is not the connecting via SSH part, but running a script. 12 | 13 | _Scripts are bad at long term management of system configuration, because they make common system configuration operations complex and error-prone._ 14 | 15 | When you write a script, you use a scripting language syntax (Bash, Python) to write commands which you think should change the system's configuration. And the problem is that there are too many ways people can write the code that is meant to do the same things, which is the reason why scripts are often difficult to read and understand. Besides, there are various choices as to what language to use for a script: should you write it in Ruby which your colleagues know very well or Bash which you know better? 
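To make that point a bit more concrete, here is a purely illustrative example (not taken from our labs): two hypothetical snippets that both try to ensure the same package is installed, written in two of the many possible ways. Both "work", but they look nothing alike, and each reader has to reverse-engineer the author's intent:

```bash
# variant 1: just install it every time the script runs
sudo apt-get install -y git

# variant 2: check first, install only if it's missing
if ! dpkg -s git > /dev/null 2>&1; then
  sudo apt-get install -y git
fi
```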
16 | 17 | Common configuration management operations are well-known: copy a file to a remote machine, create a folder, start/stop/enable a process, install packages, etc. So _we need a tool that would implement these common operations in a well-known tested way and provide us with a clean and understandable syntax for using them_. This way we wouldn't have to write complex scripts ourselves each time for the same tasks, possibly making mistakes along the way, but instead just tell the tool what should be done: what packages should be present, what processes should be started, etc. 18 | 19 | This is exactly what CM tools do. So let's check it out using Ansible as an example. 20 | 21 | ## Install Ansible 22 | 23 | NOTE: this lab assumes Ansible v2.4 is installed. It may not work as expected with other versions as things change quickly. 24 | 25 | You can follow the instructions on how to install Ansible on your system from [official documentation](http://docs.ansible.com/ansible/latest/intro_installation.html). 26 | 27 | I personally prefer installing it via [pip](http://docs.ansible.com/ansible/latest/intro_installation.html#latest-releases-via-pip) on my Linux machine. 28 | 29 | Verify that Ansible was installed by checking the version: 30 | 31 | ```bash 32 | $ ansible --version 33 | ``` 34 | 35 | ## Infrastructure as Code project 36 | 37 | Create a new directory called `ansible` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. 38 | 39 | ## Provision compute resources 40 | 41 | Start a VM and create other GCP resources for running your application applying Terraform configuration you wrote in the previous lab: 42 | 43 | ```bash 44 | $ cd ./terraform 45 | $ terraform apply 46 | ``` 47 | 48 | ## Deploy playbook 49 | 50 | We'll rewrite our Bash script used for deployment using Ansible syntax. 51 | 52 | Ansible uses **tasks** to define commands used for system configuration. Each Ansible task basically corresponds to one command in our Bash script. 53 | 54 | Each task uses some **module** to perform a certain operation on the configured system. Modules are well tested functions which are meant to perform common system configuration operations. 55 | 56 | Let's look at our `deploy.sh` script first to see what modules we might need to use: 57 | 58 | ```bash 59 | #!/bin/bash 60 | set -e 61 | 62 | echo " ----- clone application repository ----- " 63 | git clone https://github.com/Artemmkin/raddit.git 64 | 65 | echo " ----- install dependent gems ----- " 66 | cd ./raddit 67 | sudo bundle install 68 | 69 | echo " ----- start the application ----- " 70 | sudo systemctl start raddit 71 | sudo systemctl enable raddit 72 | ``` 73 | 74 | We clearly see here 3 different types of operations: cloning a git repo, installing gems via Bundler, and managing a service via systemd. 75 | 76 | So we'll search for Ansible modules that allow to perform these operations. Luckily, there are modules for all of these operations. 77 | 78 | Ansible uses YAML syntax to define tasks, which makes the configuration looks clean. 
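Before writing the playbook, it's worth knowing that every module's options are documented and can be read straight from the terminal with `ansible-doc`. For example, for the three kinds of operations we just identified:

```bash
# show documentation and usage examples for the modules used below
$ ansible-doc git
$ ansible-doc bundler
$ ansible-doc systemd
```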
79 | 80 | Let's create a file called `deploy.yml` inside the `ansible` directory: 81 | 82 | ```yaml 83 | --- 84 | - name: Deploy Raddit App 85 | hosts: raddit-app 86 | tasks: 87 | - name: Fetch the latest version of application code 88 | git: 89 | repo: 'https://github.com/Artemmkin/raddit.git' 90 | dest: /home/raddit-user/raddit 91 | register: clone 92 | 93 | - name: Install application dependencies 94 | become: true 95 | bundler: 96 | state: present 97 | chdir: /home/raddit-user/raddit 98 | when: clone.changed 99 | notify: restart raddit 100 | 101 | handlers: 102 | - name: restart raddit 103 | become: true 104 | systemd: name=raddit state=restarted 105 | ``` 106 | 107 | In this configuration file, which is called a **playbook** in Ansible terminology, we define 3 tasks: 108 | 109 | The `first task` uses git module to pull the code from GitHub. 110 | 111 | ```yaml 112 | - name: Fetch the latest version of application code 113 | git: 114 | repo: 'https://github.com/Artemmkin/raddit.git' 115 | dest: /home/raddit-user/raddit 116 | register: clone 117 | ``` 118 | 119 | The `name` that precedes each task is used as a comment that will show up in the terminal when the task starts to run. 120 | 121 | `register` option allows to capture the result output from running a task. We will use it later in a conditional statement for running a `bundle install` task. 122 | 123 | The second task runs bundler in the specified directory: 124 | 125 | ```yaml 126 | - name: Install application dependencies 127 | become: true 128 | bundler: 129 | state: present 130 | chdir: /home/raddit-user/raddit 131 | when: clone.changed 132 | notify: restart raddit 133 | ``` 134 | 135 | Note, how for each module we use a different set of module options (in this case `state` and `chdir`). You can find full information about the options in a module's documentation. 136 | 137 | In the second task, we use a conditional statement [when](http://docs.ansible.com/ansible/latest/playbooks_conditionals.html#the-when-statement) to make sure the `bundle install` task is only run when the local repo was updated, i.e. the output from running git clone command was changed. This allows us to save some time spent on system configuration by not running unnecessary commands. 138 | 139 | On the same level as tasks, we also define a **handlers** block. Handlers are special tasks which are run only in response to notification events from other tasks. In our case, `raddit` service gets restarted only when the `bundle install` task is run. 140 | 141 | ## Inventory file 142 | 143 | The way that Ansible works is simple: it connects to a remote VM (usually via SSH) and runs the commands that stand behind each module you used in your playbook. 144 | 145 | To be able to connect to a remote VM, Ansible needs information like IP address and credentials. This information is defined in a special file called [inventory](http://docs.ansible.com/ansible/latest/intro_inventory.html). 146 | 147 | Create a file called `hosts.yml` inside `ansible` directory with the following content (make sure to change the `ansible_host` parameter to public IP of your VM): 148 | 149 | ```yaml 150 | raddit-app: 151 | hosts: 152 | raddit-instance: 153 | ansible_host: 35.35.35.35 154 | ansible_user: raddit-user 155 | ``` 156 | 157 | Here we define a group of hosts (`raddit-app`) under which we list the hosts that belong to this group. In this case, we list only one host under the hosts group and give it a name (`raddit-instance`) and information on how to connect to the host. 
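If you don't remember the public IP of your VM, the same `gcloud` command we used in the previous labs will print it, and you can paste the value into `ansible_host`:

```bash
# look up the VM's public IP to use as ansible_host in the inventory
$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances describe raddit-instance
```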
158 |
159 | Now note that inside our `deploy.yml` playbook we specified the `raddit-app` host group in the `hosts` option before the tasks:
160 |
161 | ```yaml
162 | ---
163 | - name: Deploy Raddit App
164 |   hosts: raddit-app
165 |   tasks:
166 |   ...
167 | ```
168 |
169 | This tells Ansible to run the tasks that follow on the hosts defined in the `raddit-app` host group.
170 |
171 | ## Ansible configuration
172 |
173 | Before we can run a deployment, we need to make some configuration changes to how Ansible views and manages our `ansible` directory.
174 |
175 | Let's define a custom Ansible configuration for our directory. Create a file called `ansible.cfg` inside the `ansible` directory with the following content:
176 |
177 | ```ini
178 | [defaults]
179 | inventory = ./hosts.yml
180 | private_key_file = ~/.ssh/raddit-user
181 | host_key_checking = False
182 | ```
183 |
184 | This custom configuration tells Ansible what inventory file to use, what private key file to use for the SSH connection, and to skip the host key checking procedure.
185 |
186 | ## Run playbook
187 |
188 | Now it's time to run your playbook and see how it works.
189 |
190 | Use the following commands to start a deployment:
191 |
192 | ```bash
193 | $ cd ./ansible
194 | $ ansible-playbook deploy.yml
195 | ```
196 |
197 | ## Access Application
198 |
199 | Access the application in your browser by its public IP (don't forget to specify the port 9292) and make sure the application has been deployed and is functional.
200 |
201 | ## Further Learning
202 |
203 | There's a whole lot more to learn about Ansible. Try playing around with it and create a playbook which provides the same system configuration as your `configuration.sh` script. Save it under the name `configuration.yml` inside the `ansible` folder, then use it with the [ansible provisioner](https://www.packer.io/docs/provisioners/ansible.html) instead of the shell provisioner in your Packer template.
204 |
205 | You can find an example of a `configuration.yml` playbook [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/ansible/configuration.yml).
206 |
207 | And [here](https://github.com/Artemmkin/infrastructure-as-code-example/blob/master/packer/raddit-base-image-ansible.json) is an example of a Packer template which uses the ansible provisioner.
208 |
209 | ## Save and commit the work
210 |
211 | Save and commit the `ansible` folder created in this lab into your `iac-tutorial` repo.
212 |
213 | ## Idempotence
214 |
215 | One more advantage of CM tools over scripts is that the operations they implement are designed to be **idempotent** by default.
216 |
217 | Idempotence in this case means that even if you apply the same configuration changes multiple times, the result will stay the same.
218 |
219 | This is important because some commands that you use in scripts may not produce the same results when run more than once. So we always want to achieve idempotence for our configuration management system, sometimes applying conditional statements as we did in this lab.
220 |
221 | ## Conclusion
222 |
223 | Ansible provided us with a clean YAML syntax for performing common system configuration tasks. This allowed us to get rid of our own implementation of configuration commands.
224 |
225 | It might not seem like a big improvement at this scale, because our deploy script is small, but it definitely brings order to system configuration management, and the benefit is more noticeable at medium and large scale.
226 |
227 | Destroy the resources created by Terraform.
228 | 229 | ```bash 230 | $ terraform destroy 231 | ``` 232 | 233 | Next: [Vagrant](07-vagrant.md) 234 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /docs/10-kubernetes.md: -------------------------------------------------------------------------------- 1 | ## Kubernetes 2 | 3 | In the previous labs, we learned how to run Docker containers locally. Running containers at scale is quite different and a special class of tools, known as **orchestrators**, are used for that task. 4 | 5 | In this lab, we'll take a look at the most popular Open Source orchestration platform called [Kubernetes](https://kubernetes.io/) and see how it implements Infrastructure as Code model. 6 | 7 | ## Intro 8 | 9 | We used Docker Compose to consistently create container infrastructure on one machine (our local machine). However, our production environment may include tens or hundreds of VMs to have enough capacity to provide service to a large number of users. What do you do in that case? 10 | 11 | Running Docker Compose on each VM from the cluster seems like a lot of work. Besides, if you want your containers running on different hosts to communicate with each other it requires creation of a special type of network called `overlay`, which you can't create using only Docker Compose. 12 | 13 | Moreover, questions arise as to: 14 | * how to load balance containerized applications? 15 | * how to perform container health checks and ensure the required number of containers is running? 16 | 17 | The world of containers is very different from the world of virtual machines and needs a special platform for management. 18 | 19 | Kubernetes is the most widely used orchestration platform for running and managing containers at scale. It solves the common problems (some of which we've mentioned above) related to running containers on multiple hosts. And we'll see in this lab that it uses the Infrastructure as Code approach to managing container infrastructure. 20 | 21 | Let's try to run our `raddit` application on a Kubernetes cluster. 22 | 23 | ## Install Kubectl 24 | 25 | [Kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) is command line tool that we will use to run commands against the Kubernetes cluster. 
26 | 27 | You can install `kubectl` onto your system as part of Google Cloud SDK by running the following command: 28 | 29 | ```bash 30 | $ gcloud components install kubectl 31 | ``` 32 | 33 | Check the version of kubectl to make sure it is installed: 34 | 35 | ```bash 36 | $ kubectl version 37 | ``` 38 | 39 | ## Infrastructure as Code project 40 | 41 | Create a new directory called `kubernetes` inside your `iac-tutorial` repo, which we'll use to save the work done in this lab. 42 | 43 | ## Describe Kubernetes cluster in Terraform 44 | 45 | We'll use [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) (GKE) service to deploy a Kubernetes cluster of 2 nodes. 46 | 47 | We'll describe a Kubernetes cluster using Terraform so that we can manage it through code. 48 | 49 | Create a directory named `terraform` inside `kubernetes` directory. Download a bundle of Terraform configuration files into the created `terraform` directory. 50 | 51 | ```bash 52 | $ wget https://github.com/Artemmkin/gke-terraform/raw/master/gke-terraform.zip 53 | $ unzip gke-terraform.zip -d kubernetes/terraform 54 | $ rm gke-terraform.zip 55 | ``` 56 | 57 | We'll use this Terraform code to create a Kubernetes cluster. 58 | 59 | ## Create Kubernetes Cluster 60 | 61 | `main.tf` which you downloaded holds all the information about the cluster that should be created. It's parameterized using Terraform [input variables](https://spacelift.io/blog/how-to-use-terraform-variables) which allow you to easily change configuration parameters. 62 | 63 | Look into `terraform.tfvars` file which contains definitions of the input variables and change them if necessary. You'll most probably want to change `project_id` value. 64 | 65 | ``` 66 | // define provider configuration variables 67 | project_id = "infrastructure-as-code" # project in which to create a cluster 68 | region = "europe-west1" # region in which to create a cluster 69 | 70 | // define Kubernetes cluster variables 71 | cluster_name = "iac-tutorial-cluster" # cluster name 72 | zone = "europe-west1-b" # zone in which to create a cluster nodes 73 | ``` 74 | After you've defined the variables, run Terraform inside `kubernetes/terraform` to create a Kubernetes cluster consisting of 2 nodes (VMs for running our application containers). 75 | 76 | ```bash 77 | $ gcloud services enable container.googleapis.com # enable Kubernetes Engine API 78 | $ terraform init 79 | $ terraform apply 80 | ``` 81 | 82 | Wait until Terraform finishes creation of the cluster. It can take about 3-5 minutes. 83 | 84 | Check that the cluster is running and `kubectl` is properly configured to communicate with it by fetching cluster information: 85 | 86 | ```bash 87 | $ kubectl cluster-info 88 | 89 | Kubernetes master is running at https://35.200.56.100 90 | GLBCDefaultBackend is running at https://35.200.56.100/api/v1/namespaces/kube-system/services/default-http-backend/proxy 91 | ... 92 | ``` 93 | 94 | ## Deployment manifest 95 | 96 | Kubernetes implements Infrastructure as Code approach to managing container infrastructure. It uses special entities called **objects** to represent the `desired state` of your cluster. 
With objects you can describe 97 | 98 | * What containerized applications are running (and on which nodes) 99 | * The compute resources available to those applications 100 | * The policies around how those applications behave, such as restart policies, upgrades, and fault-tolerance 101 | 102 | By creating an object, you’re effectively telling the Kubernetes system what you want your cluster’s workload to look like; this is your cluster’s `desired state`. Kubernetes then makes sure that the cluster's actual state meets the desired state described in the object. 103 | 104 | Most of the times, you describe the object in a `.yaml` file called `manifest` and then give it to `kubectl` which in turn is responsible for relaying that information to Kubernetes via its API. 105 | 106 | **Deployment object** represents an application running on your cluster. We'll use it to run containers of our applications. 107 | 108 | Create a directory called `manifests` inside `kubernetes` directory. Create a `deployments.yaml` file inside it with the following content: 109 | 110 | ```yaml 111 | apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 112 | # use apps/v1beta2 for kubernetes 1.8 113 | kind: Deployment 114 | metadata: 115 | name: raddit-deployment 116 | spec: 117 | replicas: 2 118 | selector: 119 | matchLabels: 120 | app: raddit 121 | template: 122 | metadata: 123 | labels: 124 | app: raddit 125 | spec: 126 | containers: 127 | - name: raddit 128 | image: artemkin/raddit 129 | env: 130 | - name: DATABASE_HOST 131 | value: mongo-service 132 | --- 133 | apiVersion: apps/v1beta1 # implies the use of kubernetes 1.7 134 | # use apps/v1beta2 for kubernetes 1.8 135 | kind: Deployment 136 | metadata: 137 | name: mongo-deployment 138 | spec: 139 | replicas: 1 140 | selector: 141 | matchLabels: 142 | app: mongo 143 | template: 144 | metadata: 145 | labels: 146 | app: mongo 147 | spec: 148 | containers: 149 | - name: mongo 150 | image: mongo:3.2 151 | ``` 152 | 153 | In this file we describe two `Deployment objects` which define what application containers and in what quantity should be run. The Deployment objects have the same structure so I'll briefly go over only one of them. 154 | 155 | Each Kubernetes object has 4 required fields: 156 | * `apiVersion` - Which version of the Kubernetes API you’re using to create this object. You'll need to change that if you're using Kubernetes API version different than 1.7 as in this example. 157 | * `kind` - What kind of object you want to create. In this case we create a Deployment object. 158 | * `metadata` - Data that helps uniquely identify the object. In this example, we give the deployment object a name according to the name of an application it's used to run. 159 | * `spec` - describes the `desired state` for the object. `Spec` configuration will differ from object to object, because different objects are used for different purposes. 160 | 161 | In the Deployment object's spec we specify, how many `replicas` (instances of the same application) we want to run and what those applications are (`selector`) 162 | 163 | ```yml 164 | spec: 165 | replicas: 2 166 | selector: 167 | matchLabels: 168 | app: raddit 169 | ``` 170 | 171 | In our case, we specify that we want to be running 2 instances of applications that have a label `app=raddit`. **Labels** are used to give identifying attributes to Kubernetes objects and can be then used by **label selectors** for objects selection. 172 | 173 | We also specify a `Pod template` in the spec configuration. 
**Pods** are lower-level objects than Deployments and are used to run only `a single instance of an application`. In most cases, a Pod is equal to a container, although you can run multiple containers in a single Pod.
174 |
175 | The `Pod template` is a Pod object's definition nested inside the Deployment object. It has the required object fields such as `metadata` and `spec`, but it doesn't have the `apiVersion` and `kind` fields, as those would be redundant in this case. When we create a Deployment object, the Pod object(s) will be created as well. The number of Pods will be equal to the number of `replicas` specified. The Deployment object ensures that the right number of Pods (`replicas`) is always running.
176 |
177 | In the Pod object definition (`Pod template`) we specify container information such as the container image name and the container name, which Kubernetes uses to run the application. We also add labels to identify what application this Pod object is used to run; this label value is then used by the `selector` field in the Deployment object to select the right Pod objects.
178 |
179 | ```yaml
180 |   template:
181 |     metadata:
182 |       labels:
183 |         app: raddit
184 |     spec:
185 |       containers:
186 |       - name: raddit
187 |         image: artemkin/raddit
188 |         env:
189 |         - name: DATABASE_HOST
190 |           value: mongo-service
191 | ```
192 |
193 | Notice how we also pass an environment variable to the container. The `DATABASE_HOST` variable tells our application how to contact the database. We set its value to `mongo-service`, the name of the Kubernetes Service to contact (more about Services in the next section).
194 |
195 | Container images will be downloaded from Docker Hub in this case.
196 |
197 | ## Create Deployment Objects
198 |
199 | Run a kubectl command to create Deployment objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file):
200 |
201 | ```bash
202 | $ kubectl apply -f manifests/deployments.yaml
203 | ```
204 |
205 | Check the deployments and pods that have been created:
206 |
207 | ```bash
208 | $ kubectl get deploy
209 | $ kubectl get pods
210 | ```
211 |
212 | ## Service manifests
213 |
214 | Running applications at scale means running _multiple containers spread across multiple VMs_.
215 |
216 | This raises questions such as: How do we load balance between all of these application containers? How do we provide a single entry point for the application, so that we can connect to it via that entry point instead of connecting to a particular container?
217 |
218 | These questions are addressed by the **Service** object in Kubernetes. A Service is an abstraction which you can use to logically group containers (Pods) running in your cluster that all provide the same functionality.
219 |
220 | When a Service object is created, it is assigned a unique IP address called `clusterIP` (a single entry point for our application). Other Pods can then be configured to talk to the Service, and the Service will load balance the requests to containers (Pods) that are members of that Service.
221 |
222 | We'll create a Service for each of our applications, i.e. `raddit` and `MongoDB`.
Create a file called `services.yaml` inside `kubernetes/manifests` directory with the following content: 223 | 224 | ```yaml 225 | apiVersion: v1 226 | kind: Service 227 | metadata: 228 | name: raddit-service 229 | spec: 230 | type: NodePort 231 | selector: 232 | app: raddit 233 | ports: 234 | - protocol: TCP 235 | port: 9292 236 | targetPort: 9292 237 | nodePort: 30100 238 | --- 239 | apiVersion: v1 240 | kind: Service 241 | metadata: 242 | name: mongo-service 243 | spec: 244 | type: ClusterIP 245 | selector: 246 | app: mongo 247 | ports: 248 | - protocol: TCP 249 | port: 27017 250 | targetPort: 27017 251 | ``` 252 | 253 | In this manifest, we describe 2 Service objects of different types. You should be already familiar with the general object structure, so I'll just go over the `spec` field which defines the desired state of the object. 254 | 255 | The `raddit` Service has a NodePort type: 256 | 257 | ```yaml 258 | spec: 259 | type: NodePort 260 | ``` 261 | 262 | This type of Service makes the Service accessible on each Node’s IP at a static port (NodePort). We use this type to be able to contact the `raddit` application later from outside the cluster. 263 | 264 | `selector` field is used to identify a set of Pods to which to route packets that the Service receives. In this case, Pods that have a label `app=raddit` will become part of this Service. 265 | 266 | ```yaml 267 | selector: 268 | app: raddit 269 | ``` 270 | 271 | The `ports` section specifies the port mapping between a Service and Pods that are part of this Service and also contains definition of a node port number (`nodePort`) which we will use to reach the Service from outside the cluster. 272 | 273 | ```yaml 274 | ports: 275 | - protocol: TCP 276 | port: 9292 277 | targetPort: 9292 278 | nodePort: 30100 279 | ``` 280 | 281 | The requests that come to any of your cluster nodes' public IP addresses on the specified `nodePort` will be routed to the `raddit` Service cluster-internal IP address. The Service, which is listening on port 9292 (`port`) and is accessible within the cluster on this port, will then route the packets to the `targetPort` on one of the Pods which is part of this Service. 282 | 283 | `mongo` Service is only different in its type. `ClusterIP` type of Service will make the Service accessible on the cluster-internal IP, so you won't be able to reach it from outside the cluster. 284 | 285 | ## Create Service Objects 286 | 287 | Run a kubectl command to create Service objects inside your Kubernetes cluster (make sure to provide the correct path to the manifest file): 288 | 289 | ```bash 290 | $ kubectl apply -f manifests/services.yaml 291 | ``` 292 | 293 | Check that the services have been created: 294 | 295 | ```bash 296 | $ kubectl get svc 297 | ``` 298 | 299 | ## Access Application 300 | 301 | Because we used `NodePort` type of service for the `raddit` service, our application should accessible to us on the IP address of any of our cluster nodes. 302 | 303 | Get a list of IP addresses of your cluster nodes: 304 | 305 | ```bash 306 | $ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list --filter="tags.items=iac-kubernetes" 307 | ``` 308 | 309 | Use any of your nodes public IP addresses and the node port `30100` which we specified in the service object definition to reach the `raddit` application in your browser. 310 | 311 | ## Save and commit the work 312 | 313 | Save and commit the `kubernetes` folder created in this lab into your `iac-tutorial` repo. 
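Before wrapping up, you can optionally see the desired-state model in action one more time: change the `replicas` value for the raddit Deployment in `deployments.yaml` (for example, from 2 to 3), apply the manifest again, and watch Kubernetes converge on the new state — nothing else in your workflow changes:

```bash
# after editing replicas in the manifest, re-apply it and watch the Pods
$ kubectl apply -f manifests/deployments.yaml
$ kubectl get pods
```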
314 |
315 | ## Conclusion
316 |
317 | In this lab, we learned about Kubernetes - a popular orchestration platform which simplifies the process of running containers at scale. We saw how it implements the Infrastructure as Code approach in the form of `objects` and `manifests`, which allow you to describe in code the desired state of container infrastructure that spans a cluster of VMs.
318 |
319 | To destroy the Kubernetes cluster, run the following command inside the `kubernetes/terraform` directory:
320 |
321 | ```bash
322 | $ terraform destroy
323 | ```
324 |
325 | Next: [What is Infrastructure as Code](50-what-is-iac.md)
326 |
--------------------------------------------------------------------------------