├── .envrc
├── .gitignore
├── PLAYBOOK.md
├── PREREQUISITES.md
├── README.md
├── ansible
│   ├── dns.yaml
│   ├── group_vars
│   │   └── all
│   │       └── variables.example
│   ├── predestination-docker.yaml
│   ├── predestination-undo.yaml
│   ├── predestination.yaml
│   └── prerequisites.yaml
├── nginx
│   └── predestination.conf
├── setup
├── supervisor
│   └── predestination.conf
├── terraform.tfstate
└── terraform
    ├── main.tf
    ├── single
    │   └── main.tf
    └── terraform.tfvars.example

/.envrc:
--------------------------------------------------------------------------------
export ANSIBLE_HOST_KEY_CHECKING=False
export ANSIBLE_INVENTORY=ansible/inventory-1
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
ansible/group_vars/all/variables
ansible/inventory*
ansible/*.retry
terraform/.terraform
terraform/terraform.tfvars
terraform/*.tfstate
terraform/*.tfstate.*
--------------------------------------------------------------------------------
/PLAYBOOK.md:
--------------------------------------------------------------------------------
# Playbook

I'm using `$` to denote commands to be run on your local machine, and `%` to denote commands to be run on the server.

Instructions for the presenter that require leaving the terminal are in *[italicised brackets]*.

## 00:00 — Pair everyone up and create their instances

Hopefully this is just a matter of changing the `count` variable and re-running `terraform apply`.

## 00:05 — Introduction

A short introduction to deploying and running a website.

## 00:10 — Distribute connection details

Give everyone two IP addresses/EC2 hostnames (one blue, one green) and the same SSH key. (I know, but we're all friends here.) Get them to put the SSH key in *~/.ssh/webops* and add the following to their *~/.ssh/config*:

```
Host webops-blue
  HostName <the blue IP address>
  User ubuntu
  IdentityFile ~/.ssh/webops

Host webops-green
  HostName <the green IP address>
  User ubuntu
  IdentityFile ~/.ssh/webops
```

Hopefully not many will have trouble SSHing in.

In case anyone has had issues, take 10 minutes to sort them all out.

## 00:20 — Start a web app on the server

Pick a web application that takes `PORT` as an environment variable. This tutorial will assume you're using an app I wrote called [Predestination][]. If you pick a different application, change `./web` to however you start it.

Log in to the server named "green". You'll find out why it's green later.

I'm using `mosh` here, but you can use `ssh` if you prefer it or you don't have a choice (i.e. you're on Windows).

```sh
$ mosh webops-green
```

Clone the repository and install its dependencies:

```sh
% git clone https://github.com/SamirTalwar/predestination.git
% cd predestination
% make site-packages
```

Then run it:

```sh
% PORT=8080 ./web # or however you start the application
```

*[Browse to the preconfigured URL, proxied through Cloudflare, and show it off. If possible, leave the browser window open. It *may* automatically reconnect if you terminate the server and restart it, but I wouldn't bank on it.]*

Note that we're using port 8080. HTTP usually runs over port 80, but binding to any port below 1024 requires *root* privileges, and we don't want to run our application as root: an attacker compromising the web server could then get access to everything else.
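
You can demonstrate this; the exact error message depends on the language platform, but it'll be something like:

```sh
% PORT=80 ./web
# fails with something along the lines of:
# PermissionError: [Errno 13] Permission denied
```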
In fact, we probably want to make sure the application has as few rights as possible. So let's create a user just for that.

```sh
% sudo useradd web
% sudo --user=web PORT=8080 ./web
```

*[Leave it running for a few seconds, then kill it again.]*

[Predestination]: https://github.com/SamirTalwar/predestination

## 00:30 — Keep it running

Now, we can run the web server, but it's running in our terminal. We can't do anything else.

So run it in the background.

```sh
% sudo --user=web PORT=8080 ./web &
```

… Sort of works. It's still tied to this TTY (terminal), and its output is interfering with our work. We can redirect it to a file:

```sh
% sudo --user=web PORT=8080 ./web >> /var/log/predestination.log 2>&1 &
```

And if we lose the SSH connection, the site might go down.

*[Show it off, then run `fg`, then Ctrl+C.]*

You can use `nohup` to disconnect the process from the terminal.

```sh
% nohup sudo --user=web PORT=8080 ./web >> /var/log/predestination.log 2>&1 &
```

This isn't great, though. What if we want to stop the application? We have to write down the PID and remember to kill it. And we can't just start a new version over the top—it won't even start, because the port is taken.

On Linux, services are often managed through scripts living in */etc/init.d* or */etc/rc.d*. *[Show one of them.]* This works, but it's a massive pain: a lot of complicated scripts, and it's really easy to get them wrong.

Instead, we're going to use [Supervisor][], a process control system that's way easier to manage. Supervisor will take care of running our process, even if we restart the computer.

So let's configure it to run our application.

*[Copy the following file to /etc/supervisor/conf.d/predestination.conf:]*

```
[program:predestination]
command=/home/ubuntu/predestination/web
environment=PORT=8080
user=web
```

Now we just tell `supervisorctl`, the control program, to reload its configuration.

```sh
% sudo supervisorctl
> reread
> update
> status
... wait 10 seconds
> status
> exit
```

And it's running in the background. Lovely.

This is a big step forward: we've gone from running commands to defining a configuration. The former is *imperative*: we know our current state and our desired state, and we invoke a sequence of commands to get there. The latter is *declarative*: we don't know our current state, just our desired state, and the computer figures out the sequence of operations. This is much easier to reason about, and therefore less error-prone, allowing your sysadmin to use their memory for far more useful things.

[Supervisor]: http://supervisord.org/

## 00:40 — We're still on port 8080

[nginx][] to the rescue. We don't want to run our site as the root user, so we'll use nginx, an HTTP server, to route traffic from port 80 to port 8080.

Delete */etc/nginx/sites-enabled/default* to disable the default site.
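
On Ubuntu, the enabled site is just a symbolic link, so removing it is safe:

```sh
% sudo rm /etc/nginx/sites-enabled/default
```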
Next, create a file called */etc/nginx/sites-available/predestination.conf*:

```
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

You'll need to enable it by creating a symbolic link in the *sites-enabled* directory:

```sh
% sudo ln -s /etc/nginx/sites-available/predestination.conf /etc/nginx/sites-enabled/
```

Next, reload nginx:

```sh
% sudo nginx -s reload
```

We should now be able to talk to our site without specifying a port.

*[Delete the port from the URL and make sure it works.]*

You might find that while the game loads, it doesn't run. If that's the case, it's probably because WebSockets aren't proxying correctly (sorry about that). You can force the application to use HTTP polling rather than WebSockets by adding the `TRANSPORTS=polling` environment variable to the supervisor file and reloading the application with `supervisorctl reread`, then `supervisorctl update`.

[nginx]: https://nginx.org/

Great job. Your site is up. Now disconnect from the server with *Ctrl+D* or `exit`.

## 00:45 — Can you imagine doing all this a second time?

Now imagine this server breaks because, I don't know, we misconfigure the server and disable SSH. It's in The Cloud™ so we have no access to the actual terminal. What we can do, though, is delete it and try again.

Can you imagine doing all that a second time? Ugh. Our website will be down for ages.

Instead, we're going to use an infrastructure automation tool. My favourite is [Ansible][], which is what we're going to use today, but there are plenty of others. The most popular are [Puppet][], [Chef][] and [SaltStack][].

Ansible works over SSH, so there's nothing to install on the server. You just need it on the client, along with an *inventory* file. Let's create one now called *ansible/inventory*.

*[If you're being generous, give them a Cloudflare token for blue-green deployment later. If not, skip it and just include the first two variables.]*

```
[all:vars]
ansible_user=ubuntu
ansible_ssh_private_key_file=~/.ssh/webops
cloudflare_email=alice@example.com
cloudflare_token=1234567890abcdefghijklmnopqrstuvwxyz
domain=example.com
subdomain=www

[blue]
<the blue IP address>

[green]
<the green IP address>
```

If you're on Windows, you can't run Ansible, but don't worry. We'll simulate it. (In reality, you'd probably use a third box purely for provisioning.) So instead of the above, SSH into each server, install Ansible (`sudo apt install ansible`), clone this repository and create an *ansible/inventory* file as follows:

```
[local]
localhost ansible_connection=local
```

Now, let's try it.

```sh
$ export ANSIBLE_INVENTORY=$PWD/ansible/inventory
$ ansible all -m ping
```

That pings all the servers to ensure they're responding over SSH.
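
You should see a reply along these lines for each host (the address here is a stand-in, and the exact formatting varies between Ansible versions):

```
203.0.113.10 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```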
Now we'll set up the application:

```sh
$ ansible-playbook -l green -e version=master ansible/predestination.yaml
```

Voilà. Not much happened (except the application going down for a few seconds). Take a look at the *ansible/predestination.yaml* file, and note the things that changed:

1. The application was re-cloned, because this time we're cloning into a new directory.
2. The dependencies were re-installed. Actually, nothing happened, but Ansible doesn't know that, because it's just running a shell script. We try to avoid running scripts when using configuration management systems such as Ansible, because they can be non-deterministic, and so always have to be run.
3. We reconfigured the supervisor to point to the new location.
4. We told the supervisor to restart the application.
5. We asked nginx to reload its configuration.

Using Ansible (or whatever else), we can easily throw away this server and set up a new one in just a few clicks. Once again, we've gone from configuring the server *imperatively* to *declaratively*, allowing us to define the whole state up-front before we start applying the configuration.

[Ansible]: https://www.ansible.com/
[Chef]: https://www.chef.io/chef/
[Puppet]: https://puppet.com/
[SaltStack]: https://saltstack.com/

## 01:00 — Now it's time to release a new version.

Let's make it blue.

*[Change it to blue. Can't be that hard. Try `#147086`.]*

All we need to do is commit to the repository (which I've done for you) on a branch; here, it's called *blue*.

Then we redeploy:

```sh
$ ansible-playbook -l blue -e version=blue ansible/predestination.yaml
```

*[Ship it, wait 30 seconds and reload.]*

Nice and easy. Ansible took care of figuring out what's changed and what's stayed the same. Because we're pointing to the "blue" host this time, it will clone the repository fresh and set up all the different parts.

The eagle-eyed among you might have noticed that the site didn't go down, even for a second. This is because we're using a technique called *blue-green deployment*. We have two servers (codenamed "blue" and "green"), and only one of them is active at any time; we started with the green one. When we release, we release to the *inactive* server (blue, in this case), ensure that everything is healthy, then activate it by switching the DNS record over (that's what the *ansible/dns.yaml* playbook, imported at the end of *ansible/predestination.yaml*, does). If it doesn't work, we figure out why; meanwhile, the active server is still happily serving requests.

It's quite common to automate this kind of deployment, either periodically or every time a commit gets pushed to the *master* branch. The latter is called *continuous deployment*, and is closely related to *continuous integration*. The idea is that each time you push, a server runs your Ansible playbook (or other deployment mechanism) for you. You could manage this server yourself, but you could also use [Travis CI][], [CircleCI][], [Shippable][] or another online service; they're often free to start.

[CircleCI]: https://circleci.com/
[Shippable]: https://www.shippable.com/
[Travis CI]: https://travis-ci.org/

## 01:10 — I compile my code, and it's private!

Right now, we're shipping Python and JavaScript, which we can just run from the source code. However, some language platforms require the source code to be *compiled* first. If this is the case, it's not enough to just clone the repository—you have to create a *release* and store it somewhere. If you use GitHub or Bitbucket, you'll find that there's a mechanism there for uploading releases, which you can then instruct Ansible to download.
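
As a sketch, the download step might look something like this. The URL, archive name and paths are hypothetical, not part of this repository:

```yaml
- name: Download the release
  get_url:
    url: "https://github.com/you/your-app/releases/download/{{ version }}/your-app.tar.gz"
    dest: /tmp/your-app.tar.gz

- name: Unpack it into the application directory
  unarchive:
    src: /tmp/your-app.tar.gz
    dest: /var/www/your-app
    remote_src: yes
```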
You might also want to keep your code private. This is doable, but requires that you configure Ansible to generate an SSH key, tell your host what it is, and then use that key to clone the repository or download the releases.

You could get Ansible to just copy the release from your local machine, but this means that you'll never be able to go back to an older release, as it'll get overwritten each time. For this reason, I wouldn't recommend it.

All of this is beyond the scope of this tutorial, but ask me more about it if you're curious.

## 01:15 — What if it goes down?

That'd be awful, right?

Fortunately, the Internet will let me know. I've configured [Pingdom][] to tell me if the site goes down. It'll send me an email within five minutes if it doesn't come back up sharpish.

*[Show the emails that have inevitably been sent in the last hour.]*

There are lots of tools just like Pingdom. Find the one you like. I recommend starting with a free trial to make sure it's right for you.

[Pingdom]: https://www.pingdom.com/

## 01:20 — What if it breaks?

It'd be nice to know what's going on on the server, especially if things are screwy. This is what logging is for.

Let's say, for example, that we introduce a bug into our application: one that stops the game.

*[Introduce a bug on a branch called "broken".]*

```sh
$ ansible-playbook -l green -e version=broken ansible/predestination.yaml
```

This is bad, right? How do I trace it?

Well, your application logs are your friends. It's better if you actively put "log" statements in your application to tell you what's going on, but even if you don't, catastrophic errors will probably still be logged.

Using `supervisorctl`, we can ask the supervisor daemon for the logs like this:

```sh
% sudo supervisorctl tail -f predestination stderr
```

(There are two output streams: STDOUT and STDERR. Logs usually go to STDERR, but you might want to check both, or configure the supervisor to merge them.)

In this output stream, we can see what's called a "stack trace". This allows us to trace the error to the very line that's causing the problem.

*[Show the line.]*

Once we've diagnosed the problem, we can fix the bug and redeploy, or roll back to a previous version.

```sh
$ ansible-playbook -l blue -e version=master ansible/predestination.yaml
```

## 01:30 — How do I store data?

Short answer: don't. At least not on your machine.

Remember how we've been using third-party services such as Pingdom, CircleCI and Amazon Web Services to manage parts of our stack? Let's introduce one more. Whatever your database, someone else is better at managing it than you. There are lots of free or cheap options, such as [ElephantSQL][], which provides PostgreSQL, a powerful relational database; [Compose][], which provides hosted versions of MongoDB, Redis and other non-relational databases; [Amazon RDS][], which provides a few different relational databases; and many more.
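
If your application reads its database location from the environment, pointing it at a hosted database is just one more line in the supervisor configuration. A sketch, assuming a hypothetical `DATABASE_URL` variable (Predestination itself doesn't take one):

```
[program:predestination]
command=/var/www/predestination/web
; DATABASE_URL is a hypothetical example, not something Predestination reads.
environment=PORT="8080",DATABASE_URL="postgres://user:password@db.example.com:5432/app"
user=web
```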
You might think it's easier or cheaper to run your own. And it may well be, until you accidentally delete some data or your hard drive breaks. At that point, you'll wish you'd paid for someone else to manage backups and redundancy.

And whatever you do, don't store data in text files on the server. It's the easiest way to accidentally lose it.

[ElephantSQL]: https://www.elephantsql.com/
[Compose]: https://compose.com/
[Amazon RDS]: https://aws.amazon.com/rds/

## 01:35 — So what's all this Docker business?

Right. Here come the fireworks.

[Docker][] is a useful way of packaging up an application to handle all this stuff for you. All you need is the Docker daemon on the server and you can run an application really easily. It can be instructed to re-run the application if it crashes, just like supervisord, and can be set up with Ansible or another deployment tool. It also ships with one of its own, called [Docker Compose][].

Docker also packages everything. This means that you don't need to install anything on the server except Docker itself, as the *Docker image* that you build contains all the application dependencies. This includes Python (or whatever you want to use to make your web app).

```sh
$ ansible-playbook -l green ansible/predestination-undo.yaml
$ ansible-playbook -l green ansible/predestination-docker.yaml
```

The first Ansible playbook removes everything we set up earlier, including the supervisor configuration, the nginx configuration and the application itself. The second deploys Predestination from the publicly-available [samirtalwar/predestination][] Docker image.

*[Talk through the new playbook.]*

Docker has been around for only a few years, so many don't consider it quite as stable as running on bare Linux, but personally, I think the convenience of packaging an entire application up locally is so good that I'm willing to make that trade-off. We no longer need to configure files on the server; we just instruct Docker to start a "container" from our image and away we go. It also means we can test our images locally and they'll behave almost exactly the same, whether we're on Windows, macOS or Linux.
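
Stripped of the Ansible wrapping, what the new playbook does on the server boils down to roughly a single command (the flags below mirror the `docker_container` task in *ansible/predestination-docker.yaml*):

```sh
% sudo docker run \
    --detach \
    --name predestination \
    --publish 80:8080 \
    --restart on-failure \
    samirtalwar/predestination
```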
Building Docker images is beyond the scope of this tutorial, but I encourage you to have a go at it.

[Docker]: https://docs.docker.com/
[Docker Compose]: https://docs.docker.com/compose/
[samirtalwar/predestination]: https://hub.docker.com/r/samirtalwar/predestination/

## 01:45 — Any questions?

Let's talk.
--------------------------------------------------------------------------------
/PREREQUISITES.md:
--------------------------------------------------------------------------------
# Prerequisites

You're going to need a server.

You will need an account with Amazon Web Services, and another with Cloudflare.

## 1. Client-side preparation

Install:

1. An SSH client.
   1. If you're on macOS or Linux, you have one built in.
   2. On Windows 10, you can install [Bash on Windows][Bash on Windows Installation Guide].
   3. On any other version of Windows, download and install [PuTTY][].
2. An SSH key specifically for the job.
   1. If you're running on macOS, Linux, or Bash on Windows, run `ssh-keygen` and store the key at *~/.ssh/webops*. (You may need to replace "~" with the absolute path to your home directory.)
   2. If you're using PuTTY, run *puttygen.exe* and name the key "webops".
3. [Terraform][].
4. [Ansible][].
5. [mosh][] (if available for your platform), which is like SSH (and uses it to bootstrap itself) but handles flaky connections much more gracefully.

Then clone this repository. All local commands are expected to be run from the root of this repository unless specified otherwise.

[Ansible]: https://www.ansible.com/
[Bash on Windows Installation Guide]: https://msdn.microsoft.com/en-us/commandline/wsl/install_guide
[PuTTY]: http://www.chiark.greenend.org.uk/~sgtatham/putty/
[Terraform]: https://www.terraform.io/
[mosh]: https://mosh.org/

## 2. Create a server

This uses Amazon Web Services. If you'd rather use another cloud provider, you'll need to configure it yourself.

1. If you haven't already, create an account on [Amazon Web Services][].
2. Pick your favourite AWS region, grab your VPC ID and subnet ID, and create a file called *terraform/terraform.tfvars* as follows (you can copy *terraform/terraform.tfvars.example*):
   ```
   region = "<your AWS region>"
   vpc_id = "<your VPC ID>"
   subnet_id = "<your subnet ID>"
   ```
3. If you're creating instances for lots of people, add a line to *terraform/terraform.tfvars* with the number:
   ```
   count = <the number of instance pairs>
   ```
4. Copy *ansible/group_vars/all/variables.example* to *ansible/group_vars/all/variables*, and replace the values with your own. (You can leave the top two.)
5. `cd` into the *terraform* directory.
6. Run `terraform init` to set it up.
7. Run `terraform plan`, then check the plan.
8. If you're happy, run `terraform apply`. This will create two servers.
9. `cd ..` back into the root directory.

[Amazon Web Services]: https://aws.amazon.com/
[Predestination]: https://github.com/SamirTalwar/predestination

## 3. Pick a domain

If you want to do blue-green deployment, you need a hostname. You could probably simulate it by changing your local *hosts* file, but that's no fun.

1. For security reasons, you'll probably want to create a fake [Cloudflare][] account with access to a cheap domain, and use the token for that.
2. Grab your Cloudflare token.
3. Copy *ansible/group_vars/all/variables.example* to *ansible/group_vars/all/variables*, and replace the values with your own. (You can leave the top two.)
4. Pick a subdomain (or a sub-subdomain) for each person/group. You'll need to distribute them as part of the Ansible configuration. Pick one for yourself too.
5. Configure [Pingdom][] to tell you whether it's up.
6. Just before the workshop, turn on Cloudflare's development mode (it's on the Caching page). Otherwise it'll cache client CSS and break the demo.

[Cloudflare]: https://www.cloudflare.com/
[Pingdom]: https://www.pingdom.com/

## 4. Set up the dependencies

If you're using our example application, [Predestination][], you'll need a bunch of dependencies. (And if you're not, they can't hurt.)

1. Verify that *ansible/inventory* has been created with the IP addresses of your servers.
2. Prime Ansible, either by installing [`direnv`][direnv], or by manually sourcing the *.envrc* file:
   ```sh
   source .envrc
   ```
3. Tell Ansible to install everything:
   ```sh
   ansible-playbook ansible/prerequisites.yaml
   ```

## 5. Pick a product

I'd recommend Predestination, but either way, make sure you own the repository. You'll make changes to it later.

[direnv]: https://direnv.net/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# Web Ops

At lightning speed, this workshop will cover the bits that aren’t code that make up a working web app. These include servers, monitoring, deployment mechanisms, logging, alerting, secret management, recovery mechanisms… you get the idea.

Topics include:

* how to set up a web server on Linux,
* deploying changes to a web server with zero downtime,
* keeping an eye on your server to make sure things are working,
* tracking down production bugs,
* managing persistent data (such as your database),
* secure communication over HTTPS,
* and, if we have time, how to do all this in the buzzword of the decade, containers.

The workshop is designed to run on a Unix-like machine such as Linux or macOS. If you're running on Windows, we can make it work, but it won't be quite so true to real life.

## Following along at home

If you want to go through it on your own, follow the [prerequisites][Prerequisites], then the [playbook][Playbook].

## Running this workshop yourself

You're welcome to run this workshop yourself by following the [playbook][Playbook] and making changes as necessary.

You'll need to follow the [prerequisites][Prerequisites] ahead of the workshop, and ask students to run through the client-side preparation in that document. I recommend you do a dry run yourself.

All I ask is that:

1. you tell me you're running it (you can reach me [over Twitter][@SamirTalwar] or [via email][samir@noodlesandwich.com]),
2. you send feedback about how you found it, and
3. if you find problems, you tell me about them (or even send pull requests).

Good luck!

## Licence

This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License][Licence].
[Prerequisites]: https://github.com/SamirTalwar/webops-workshop/blob/master/PREREQUISITES.md
[Playbook]: https://github.com/SamirTalwar/webops-workshop/blob/master/PLAYBOOK.md
[@SamirTalwar]: https://twitter.com/SamirTalwar
[samir@noodlesandwich.com]: mailto:samir@noodlesandwich.com
[Licence]: http://creativecommons.org/licenses/by-sa/4.0/
--------------------------------------------------------------------------------
/ansible/dns.yaml:
--------------------------------------------------------------------------------
---
- hosts: all
  tasks:
    - name: Update the DNS to the given cluster
      cloudflare_dns:
        zone: "{{ domain }}"
        record: "{{ subdomain }}"
        type: A
        value: "{{ inventory_hostname }}"
        account_email: "{{ cloudflare_email }}"
        account_api_token: "{{ cloudflare_token }}"
        proxied: true
        solo: true
--------------------------------------------------------------------------------
/ansible/group_vars/all/variables.example:
--------------------------------------------------------------------------------
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/webops
cloudflare_email: alice@example.com
cloudflare_token: 1234567890abcdefghijklmnopqrstuvwxyz
domain: example.com
subdomain: www
--------------------------------------------------------------------------------
/ansible/predestination-docker.yaml:
--------------------------------------------------------------------------------
---
- hosts: all
  remote_user: root
  become: yes
  tasks:
    - name: Run the application in Docker
      docker_container:
        name: predestination
        image: samirtalwar/predestination
        pull: yes
        published_ports:
          - 80:8080
        restart_policy: on-failure
    - name: Wait for the application to start
      wait_for:
        host: localhost
        port: 80

- import_playbook: dns.yaml
--------------------------------------------------------------------------------
/ansible/predestination-undo.yaml:
--------------------------------------------------------------------------------
---
- hosts: all
  remote_user: root
  become: yes
  tasks:
    - name: Remove the application from the supervisor
      file:
        path: /etc/supervisor/conf.d/predestination.conf
        state: absent
    - name: Re-read the supervisor configuration
      command: supervisorctl reread
    - name: Update the supervisor
      command: supervisorctl update
    - name: Stop forwarding port 80 to port 8080
      file:
        path: /etc/nginx/sites-enabled/predestination.conf
        state: absent
    - name: Delete the nginx configuration
      file:
        path: /etc/nginx/sites-available/predestination.conf
        state: absent
    - name: Reload the nginx configuration
      service:
        name: nginx
        state: reloaded
    - name: Wait for the application to stop
      wait_for:
        host: localhost
        port: 8080
        state: drained
    - name: Remove the application directory
      file:
        path: /var/www/predestination
        state: absent
    - name: Remove the other application directory
      file:
        path: /home/ubuntu/predestination
        state: absent
    - name: Remove the application user account
      user:
        name: web
        shell: /bin/false
        state: absent
--------------------------------------------------------------------------------
/ansible/predestination.yaml:
--------------------------------------------------------------------------------
---
- hosts: all
  remote_user: root
  become: yes
  tasks:
    - name: Create a user account for the application
      user:
        name: web
        shell: /bin/false
    - name: Create the application directory
      file:
        path: /var/www/predestination
        state: directory
        owner: web
        group: web
    - name: Clone the repository
      become: yes
      become_user: web
      git:
        repo: https://github.com/SamirTalwar/predestination.git
        dest: /var/www/predestination
        version: "{{ version }}"
        update: yes
    - name: Install the application dependencies
      command: make site-packages
      args:
        chdir: /var/www/predestination
    - name: Set up the supervisor for the application
      copy:
        src: ../supervisor/predestination.conf
        dest: /etc/supervisor/conf.d/predestination.conf
    - name: Re-read the supervisor configuration
      command: supervisorctl reread
    - name: Restart the application
      supervisorctl:
        name: predestination
        state: restarted
    - name: Disable the default nginx configuration
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent
    - name: Configure nginx to proxy Predestination
      copy:
        src: ../nginx/predestination.conf
        dest: /etc/nginx/sites-available/predestination.conf
    - name: Enable the nginx proxy
      file:
        src: /etc/nginx/sites-available/predestination.conf
        dest: /etc/nginx/sites-enabled/predestination.conf
        state: link
    - name: Reload the nginx configuration
      service:
        name: nginx
        state: reloaded
    - name: Wait for the application to start
      wait_for:
        host: localhost
        port: 80

- import_playbook: dns.yaml
--------------------------------------------------------------------------------
/ansible/prerequisites.yaml:
--------------------------------------------------------------------------------
---
- hosts: all
  remote_user: root
  become: yes
  tasks:
    - name: Point the server at certbot
      apt_repository:
        repo: ppa:certbot/certbot
        update_cache: no
    - name: Point the server at Python 3.6
      apt_repository:
        repo: ppa:jonathonf/python-3.6
        update_cache: no

    - name: Update the APT repositories
      apt:
        update_cache: yes
    - name: Install aptitude
      apt:
        name: aptitude
    - name: Upgrade everything
      apt:
        upgrade: full
    - name: Install apt-transport-https
      apt:
        name: apt-transport-https
    - name: Install ca-certificates
      apt:
        name: ca-certificates
    - name: Install curl
      apt:
        name: curl

    - name: Grab the Docker GPG key
      shell: curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    - name: Point the server at Docker CE
      apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable

    - name: Install dependencies
      apt:
        name: '{{ item }}'
      with_items:
        - certbot
        - docker-ce
        - make
        - mosh
        - nginx
        - python3.6
        - python-pip
        - supervisor
        - virtualenv
        - zsh
    - name: Install docker-py
      pip:
        name: docker-py

    - name: Disable the default nginx configuration
      file:
        path: /etc/nginx/sites-enabled/default
        state: absent

    - name: Set the ubuntu user's shell to zsh
      user:
        name: ubuntu
        shell: /bin/zsh
--------------------------------------------------------------------------------
/nginx/predestination.conf:
--------------------------------------------------------------------------------
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
--------------------------------------------------------------------------------
/setup:
--------------------------------------------------------------------------------
#!/usr/bin/env bash

set -eu

(
    cd terraform
    terraform init
    terraform get
    terraform apply
)

ansible all -m ping

ansible-playbook ansible/prerequisites.yaml
ansible-playbook ansible/predestination-undo.yaml
ansible-playbook -l green ansible/dns.yaml
ansible all -m ping
--------------------------------------------------------------------------------
/supervisor/predestination.conf:
--------------------------------------------------------------------------------
[program:predestination]
command=/var/www/predestination/web
environment=PORT="8080"
user=web
--------------------------------------------------------------------------------
/terraform.tfstate:
--------------------------------------------------------------------------------
{
    "version": 3,
    "terraform_version": "0.10.7",
    "serial": 1,
    "lineage": "b75516b8-eb49-4179-bf3b-6fcddf7c3f1a",
    "modules": [
        {
            "path": [
                "root"
            ],
            "outputs": {},
            "resources": {},
            "depends_on": []
        }
    ]
}
--------------------------------------------------------------------------------
/terraform/main.tf:
--------------------------------------------------------------------------------
variable "region" {}
variable "vpc_id" {}
variable "subnet_id" {}

variable "count" {
  default = 1
}

provider "aws" {
  region = "${var.region}"
}

resource "aws_security_group" "webops" {
  name        = "webops"
  description = "WebOps workshop"
  vpc_id      = "${var.vpc_id}"

  # SSH
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Mosh
  ingress {
    from_port   = 60000
    to_port     = 61000
    protocol    = "udp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # HTTP and HTTPS
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 8080
    to_port     = 8080
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # outbound internet access
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_key_pair" "webops" {
  key_name   = "webops-key"
  public_key = "${file("~/.ssh/webops.pub")}"
}

module "blue" {
  source            = "./single"
  cluster_name      = "blue"
  region            = "${var.region}"
  vpc_id            = "${var.vpc_id}"
  subnet_id         = "${var.subnet_id}"
  security_group_id = "${aws_security_group.webops.id}"
  count             = "${var.count}"
}

module "green" {
  source            = "./single"
  cluster_name      = "green"
  region            = "${var.region}"
  vpc_id            = "${var.vpc_id}"
  subnet_id         = "${var.subnet_id}"
"${aws_security_group.webops.id}" 87 | count = "${var.count}" 88 | } 89 | -------------------------------------------------------------------------------- /terraform/single/main.tf: -------------------------------------------------------------------------------- 1 | variable "cluster_name" {} 2 | 3 | variable "region" {} 4 | variable "vpc_id" {} 5 | variable "subnet_id" {} 6 | variable "security_group_id" {} 7 | 8 | variable "count" { 9 | default = 1 10 | } 11 | 12 | provider "aws" { 13 | region = "${var.region}" 14 | } 15 | 16 | resource "aws_instance" "webops" { 17 | ami = "ami-a8d2d7ce" 18 | instance_type = "t2.micro" 19 | key_name = "webops-key" 20 | subnet_id = "${var.subnet_id}" 21 | vpc_security_group_ids = ["${var.security_group_id}"] 22 | count = "${var.count}" 23 | 24 | provisioner "local-exec" { 25 | command = "(echo '[${var.cluster_name}]'; echo '${self.public_ip}') >> ../ansible/inventory-${var.count}" 26 | } 27 | 28 | provisioner "remote-exec" { 29 | connection { 30 | type = "ssh" 31 | user = "ubuntu" 32 | private_key = "${file("~/.ssh/webops")}" 33 | } 34 | 35 | inline = [ 36 | "sudo apt-get update -qq", 37 | "sudo apt-get install -qy python", 38 | "echo 'PS1=\"${var.cluster_name}%% \"' > ~/.zshrc", 39 | ] 40 | } 41 | } 42 | 43 | output "ip" { 44 | value = "${aws_instance.webops.public_ip}" 45 | } 46 | -------------------------------------------------------------------------------- /terraform/terraform.tfvars.example: -------------------------------------------------------------------------------- 1 | region = "eu-west-1" 2 | 3 | vpc_id = "vpc-123" 4 | 5 | subnet_id = "subnet-456" 6 | 7 | count = 1 8 | --------------------------------------------------------------------------------