├── .gitignore
├── README.md
├── Vagrantfile
├── ansible.cfg
├── build-server.yml
├── hosts-digitalocean
├── hosts-vagrant
├── hosts
│   ├── digital_ocean.ini
│   ├── digital_ocean.py
│   └── local
├── local
├── provision-digitalocean.yml
├── requirements.txt
├── roles
│   ├── create-deploy-user
│   │   ├── files
│   │   │   ├── known_hosts
│   │   │   └── ssh_config
│   │   └── tasks
│   │       └── main.yml
│   ├── digitalocean
│   │   └── tasks
│   │       └── main.yml
│   ├── linux-prereqs
│   │   ├── files
│   │   │   ├── 50unattended-upgrades
│   │   │   └── apt_periodic
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   ├── nginx-passenger
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── nginx.conf.j2
│   ├── papertrail
│   │   └── tasks
│   │       └── main.yml
│   ├── postgresql
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       └── pg_hba.conf.j2
│   ├── prepare_site_sampleapp
│   │   ├── handlers
│   │   │   └── main.yml
│   │   ├── tasks
│   │   │   └── main.yml
│   │   └── templates
│   │       ├── database.yml.j2
│   │       └── sampleapp_production.nginx.j2
│   ├── rbenv
│   │   ├── files
│   │   │   └── 50_rbenv.bash
│   │   └── tasks
│   │       └── main.yml
│   ├── ruby
│   │   ├── handlers
│   │   │   └── main.yml
│   │   └── tasks
│   │       └── main.yml
│   └── ufw
│       ├── README.md
│       ├── defaults
│       │   └── main.yml
│       ├── handlers
│       │   └── main.yml
│       ├── meta
│       │   └── main.yml
│       ├── tasks
│       │   └── main.yml
│       └── templates
│           ├── user.rules.j2
│           └── user6.rules.j2
├── support
│   └── generate-crypted-password.py
└── vars
    ├── defaults.yml
    ├── digitalocean.yml
    └── sampleapp.yml

/.gitignore:
--------------------------------------------------------------------------------
1 | *rsa*
2 | .vagrant
3 | .DS_Store
4 | credentials/*
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # jbinto/ansible-ubuntu-rails-server
2 | 
3 | ## tl;dr
4 | 
5 | [Ansible](http://docs.ansible.com/index.html) playbook to provision a Rails deployment server with:
6 | 
7 | * Ubuntu 14.04.1
8 | * Postgresql 9.3
9 | * rbenv
10 | * ruby 2.1.5
11 | * [Phusion
Passenger](https://www.phusionpassenger.com/) + nginx from Phusion's apt repo
12 | * [Papertrail logging](https://papertrailapp.com/)
13 | * Prepares an nginx vhost for a Rails app, ready for deployment with `cap`
14 | 
15 | It can be deployed locally using [Vagrant](http://www.vagrantup.com/) (i.e. for a DEV environment), or in the cloud using [Digital Ocean](https://www.digitalocean.com/) (i.e. STAGING/PROD environments).
16 | 
17 | ## Overview
18 | 
19 | This synthesizes various sources into an idempotent, repeatable provisioning script for Rails applications (among other things) using Ansible.
20 | 
21 | My current goal is to be able to deploy a Rails / Postgres / PostGIS app to DigitalOcean in "one click". Perhaps "many clicks", but with as little human intervention as possible.
22 | 
23 | Repeatably. Possibly even including provisioning of the cloud instances.
24 | 
25 | At the very least, to create a consistent set of DEV, STAGING and PROD environments.
26 | 
27 | Benefits of using something like Ansible to manage servers:
28 | 
29 | * Reduces the "[jenga](https://www.youtube.com/watch?v=I7H6wGy5zf4)" feeling when running servers (the same win as unit testing and source control)
30 | * Consistency between different environments
31 | * Executable documentation
32 | * Combined with cloud providers and hourly billing, can create on-demand staging environments
33 | 
34 | ## Usage (Vagrant)
35 | 
36 | [Vagrant](http://www.vagrantup.com/downloads.html) must be installed from the website.
37 | 
38 | I'm currently retrieving Ansible from git, as well as the `dopy` module for DigitalOcean.
39 | 
40 | ```
41 | git clone https://github.com/jbinto/ansible-ubuntu-rails-server.git
42 | cd ansible-ubuntu-rails-server
43 | brew install python
44 | sudo easy_install pip
45 | sudo pip install -r requirements.txt
46 | ```
47 | 
48 | Generate a crypted password, and put it in `vars/defaults.yml`.
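The "crypted password" referred to here is a crypt(3)-style SHA-512 hash (the `$6$...` format that Ansible's `user` module accepts for its `password` argument). A minimal hypothetical sketch of such a generator follows; it is not the repo's actual `support/generate-crypted-password.py`, and it relies on the Unix-only stdlib `crypt` module (Python 3.3+ for `mksalt`, deprecated since 3.11):

```python
# Hypothetical sketch only -- the repo ships its own generator as
# support/generate-crypted-password.py, which may differ.
import crypt

def generate_crypted_password(plaintext):
    # mksalt() produces a random "$6$..." salt; crypt() returns the full
    # "$6$<salt>$<hash>" string suitable for the `password` field of
    # Ansible's `user` module.
    return crypt.crypt(plaintext, crypt.mksalt(crypt.METHOD_SHA512))

hashed = generate_crypted_password("correct horse battery staple")
print(hashed)  # a "$6$<salt>$<hash>" string
```

Verifying a password against such a hash works by re-crypting with the full hash as the salt: `crypt.crypt(plaintext, hashed) == hashed`.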
49 | 
50 | ```
51 | python support/generate-crypted-password.py
52 | ```
53 | 
54 | The following command will:
55 | 
56 | * Use Vagrant to create a new Ubuntu virtual machine.
57 | * Boot that machine with Virtualbox.
58 | * Ask you for the `deploy` sudo password (the one you just crypted).
59 | * Use our Ansible playbook to provision everything needed for the Rails server.
60 | 
61 | ```
62 | vagrant up
63 | ```
64 | 
65 | **BUG:** Sometimes, `vagrant up` times out before Ansible gets a chance to connect. I haven't figured this out yet. If this happens, run `vagrant provision` to continue the Ansible playbook.
66 | 
67 | To run individual roles (e.g. only install nginx), try the following. You can replace `nginx` with any role name, since they're all tagged in `build-server.yml`.
68 | 
69 | ```
70 | ansible-playbook build-server.yml -i hosts --tags nginx
71 | ```
72 | 
73 | Once the provision is complete, continue to [jbinto/rails4_sample_app_capistrano](https://github.com/jbinto/rails4_sample_app_capistrano) to deploy a Rails application using Capistrano.
74 | 
75 | ## Usage (DigitalOcean)
76 | 
77 | Set up environment variables `DO_CLIENT_ID` and `DO_API_KEY`, or hardcode them in `./hosts/digital_ocean.ini`.
78 | 
79 | Install `tugboat`:
80 | 
81 | ```
82 | gem install tugboat
83 | tugboat authorize
84 | ```
85 | 
86 | Edit `./vars/digitalocean.yml`.
87 | 
88 | * Note that `hostname` must be a real FQDN you own, and its DNS must point to DigitalOcean.
89 | * You can use `tugboat` to acquire the magic numbers needed for the region/image/size IDs.
90 | 
91 | Now you can provision the DigitalOcean droplet:
92 | 
93 | ```
94 | ansible-playbook -i local provision-digitalocean.yml
95 | ```
96 | 
97 | This will spin up a new DigitalOcean VPS **which costs real money**. Since you set up the SSH keys with DigitalOcean, you already have passwordless `root` access.
98 | 
99 | At the end, it should tell you the new IP address of your server (doesn't seem to be doing this anymore c.
10-2014).
100 | 
101 | * **Manually** add the IP address to `./hosts-digitalocean`.
102 | * **Manually** set your DNS as necessary with this new IP address. (**TODO: use the digital_ocean_domain module**)
103 | * Generate a crypted password, if not already done, and put it in `vars/defaults.yml`. This will be the password for your `deploy` user.
104 | 
105 | ```
106 | python support/generate-crypted-password.py
107 | ```
108 | 
109 | Now, run the playbook as usual. Good luck!
110 | 
111 | ```
112 | ansible-playbook build-server.yml -i hosts-digitalocean -u root -K -vvvv
113 | ```
114 | 
115 | Note that after the first run, `root` will no longer be able to log in. To run the playbook again, replace `root` with the `deploy` user as set in `vars/defaults.yml`.
116 | 
117 | ## Notes
118 | 
119 | * Vagrant ships with insecure defaults. You can log in to the VM with `vagrant/vagrant`, and there is an insecure SSH key. Need to destroy this stuff.
120 | 
121 | * `vagrant` is a NOPASSWD sudoer, but `deploy` requires a password. Should I just run all the scripts with `-K`?
122 | 
123 | * It seems so. From googling, it seems Ansible [is not designed to be run with a sudoers file that only allows pre-approved commands.](https://serverfault.com/questions/560106/how-can-i-implement-ansible-with-per-host-passwords-securely)
124 | 
125 | * It seems just having a `deploy` user is a bad idea. It's messy. Perhaps there should be a `provision` user as well. That user could install packages, etc., and `deploy` should only have rights to the app in `~/deploy`.
126 | 
127 | * For now, keep it simple: just `deploy`. But we won't go down the `NOPASSWD` route. It's too risky. Consider: if there's any bug in Rails that allows remote code execution, attackers are one `sudo` away from full control.
128 | 
129 | * The pros of disabling `NOPASSWD` outweigh the cons.
Should make it standard practice when creating an Ansible playbook: create a `deploy` user with a unique password, use that password for sudo, and always pass `-K` when necessary. Design playbooks to be clear about whether they require sudo or not. Right now it's very muddy, with privileged and unprivileged tasks combined in the same play.
130 | 
131 | * In order to "destroy Vagrant", basically we just remove the `vagrant` user. Also, lock down SSH, etc. (see `phred/5minbootstrap`).
132 | 
133 | * This means only one playbook should be executed as `vagrant`: the one that sets up the `deploy` user. Every script afterwards *must* be executable as `deploy`.
134 | 
135 | * ~~Despite that, I still can't get `ansible-playbook` to work with the `vagrant` user, because my SSH key isn't moved there.~~ **UPDATE**: This was because I was overwriting `/etc/sudoers`. This gave `deploy` sudo access, but made everyone else (including `vagrant`) lose it.
136 | 
137 | * ~~Can't include RSA keys in the git repo. Run `ssh-keygen` and create a deploy keypair, and move it to `devops/templates/deploy_rsa[.pub]`.~~ (Not using deploy keys, but ssh-agent forwarding instead.)
138 | 
139 | * Not sure if the deploy keypair should have a passphrase on it or not? Well, I am sure, *it should*, but should automation rely on it?
140 | 
141 | * If you get the following error installing ansible on OS X Mavericks:
142 | 
143 | ```
144 | # clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
145 | ```
146 | 
147 | Run:
148 | 
149 | ```
150 | echo "ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future" >> ~/.zshrc
151 | . ~/.zshrc
152 | brew install ansible
153 | ```
154 | 
155 | See [this Stack Overflow question](https://stackoverflow.com/questions/22390655/ansible-installation-clang-error-unknown-argument-mno-fused-madd) for details.
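The pattern argued for in the notes above (a `deploy` user with a unique crypted password, sudo via `-K` rather than `NOPASSWD`) boils down to a task roughly like the following. This is a hedged sketch, not the repo's actual `create-deploy-user` role; the `deploy` and `crypted_password` variable names are assumptions mirroring `vars/defaults.yml`:

```yaml
# Hypothetical sketch only -- not the repo's create-deploy-user role.
- name: create deploy user with a crypted password (sudo, but no NOPASSWD)
  user:
    name: "{{ deploy }}"                # assumed variable, cf. vars/defaults.yml
    password: "{{ crypted_password }}"  # "$6$..." hash, e.g. from support/generate-crypted-password.py
    shell: /bin/bash
    groups: sudo
    append: yes
```

Because the user is in `sudo` (not `NOPASSWD`), every privileged play has to be run with `-K` so Ansible can prompt for that password.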
156 | 
157 | 
158 | ## Issues with Passenger - solved
159 | 
160 | There are two ways to install Passenger:
161 | 
162 | * Using their official Ubuntu packages, which install Passenger files in various locations throughout the system (n.b. as do all apt packages).
163 | * Your `passenger_root` must be a `locations.ini` file which points to all these locations. This ships with the passenger package, and can also be generated with one of the `passenger-config about` commands.
164 | 
165 | Or...
166 | 
167 | * `gem install passenger` (or adding `passenger` to `Gemfile`), which keeps all Passenger files in your rubygems directory.
168 | * `install-passenger-nginx-module`, which compiles a brand new nginx and installs it into `/opt/nginx`. [Nginx modules must be statically loaded](https://github.com/phusion/passenger/wiki/Why-can't-Phusion-Passenger-extend-my-existing-Nginx%3F).
169 | 
170 | The former makes more sense: since it's apt, it seems cleaner and more maintainable. Especially for nginx, I don't want to have to recompile whenever I need a new module or a security update.
171 | 
172 | I spent a long time trying to figure out why nginx returned `403 Forbidden` for anything not in `./public`.
173 | 
174 | I was defining `passenger_ruby` in my server-specific config, but forgetting `passenger_root`. I ended up putting these two lines in `/etc/nginx/nginx.conf`, and now all is well. In my server-specific config, I only have `passenger_enabled`.
175 | 
176 | ## Things that still need fixing
177 | 
178 | * `ag BUG; ag NOTE`
179 | * Security: 5minbootstrap
180 | * Asset precompilation using the official capistrano method - does this happen on the remote server?
181 | * Create a Vagrant Cloud box with some or all of the Ansible steps already completed.
182 | * The stages are a little muddy. The current example targets only `RAILS_ENV=production`, regardless of whether it's Vagrant (dev) or DigitalOcean (could be staging, could be prod).
We can easily edit the role as needed, but we need to devise a better overall strategy.
183 | 
184 | ## Fixed issues / answered questions
185 | 
186 | * ~~Security: Get rid of vagrant user~~ **UPDATE:** This isn't necessary. I'm only ever running `vagrant` on my local machine, and that user is truly necessary for plumbing. We'll never package the Vagrant image and deploy it to the cloud. We can assume a clean base (e.g. DigitalOcean).
187 | * ~~Security: Analyze exactly why we need NOPASSWD. Capistrano symlink stuff maybe can be done in Ansible, and if cap needs sudo, it could be restricted to particular commands.~~ **UPDATE:** Fixed this by moving the symlinking and nginx stuff to Ansible, which is better suited for that anyway. Now, cap should be fine without sudo.
188 | * ~~Restart nginx automagically after deploys? Still have to ssh in and do it manually. Not sure why.~~ **UPDATE:** Fixed.
189 | * ~~The deploy user might still be hardcoded in some places - try renaming it~~ **done**
190 | * ~~Provision to DigitalOcean!~~ **done**
191 | * ~~Shouldn't have to SSH in to set the postgres password in database.yml.~~ **done**
192 | * ~~Only update the apt cache once every N. (60 minutes? 24 hours? What if we need to force it? -- easy, use an update_apt_cache handler on all things that need it, e.g.
added repos)~~ **done**
193 | 
194 | ## Sources / references
195 | 
196 | Synthesized from the following sources:
197 | 
198 | * [From Zero to Deployment: Vagrant, Ansible, Capistrano 3 to deploy your Rails Apps to DigitalOcean automatically (part 1)](http://ihassin.wordpress.com/2013/12/15/from-zero-to-deployment-vagrant-ansible-rvm-and-capistrano-to-deploy-your-rails-apps-to-digitalocean-automatically/)
199 | * [leucos/ansible-tuto](https://github.com/leucos/ansible-tuto)
200 | * [leucos/ansible-rbenv-playbook](https://github.com/leucos/ansible-rbenv-playbook)
201 | * [dodecaphonic/ansible-rails-app](https://github.com/dodecaphonic/ansible-rails-app/)
202 | * [Ansible: List of All Modules](http://docs.ansible.com/list_of_all_modules.html) (easiest way to find module docs, CMD+F / CTRL+F)
203 | * [Phusion Passenger users guide, Nginx edition](http://www.modrails.com/documentation/Users%20guide%20Nginx.html)
204 | 
--------------------------------------------------------------------------------
/Vagrantfile:
--------------------------------------------------------------------------------
1 | # -*- mode: ruby -*-
2 | # vi: set ft=ruby :
3 | 
4 | VAGRANTFILE_API_VERSION = "2"
5 | 
6 | Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
7 | 
8 |   config.vm.define :web do |web|
9 |     web.vm.box = "trusty64"
10 |     web.vm.box_url = "http://files.vagrantup.com/trusty64.box"
11 |     web.vm.network :private_network, ip: "10.33.33.33"
12 |     web.vm.network :forwarded_port, guest: 80, host: 8080
13 | 
14 |     web.vm.hostname = "dev.416.bike"
15 | 
16 |     web.vm.provider :virtualbox do |vb|
17 |       vb.customize ["modifyvm", :id, "--memory", "1024"]
18 |     end
19 | 
20 |     web.vm.provision :ansible do |ansible|
21 |       ansible.playbook = "build-server.yml"
22 |       ansible.inventory_path = "hosts-vagrant"
23 |       ansible.verbose = "v"
24 |       ansible.ask_sudo_pass = true
25 | 
26 |       # https://github.com/mitchellh/vagrant/issues/3096
27 |       ansible.limit = 'all'
28 |     end
29 | 
30 |   end
31 | 
32 | 
33 | end
34 | 
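The apt-cache item in the README's fixed-issues list (refresh the cache via a handler notified by anything that adds a repository) can be sketched roughly as below. This is illustrative only, not the repo's actual `linux-prereqs` role; the repo/task names are assumptions:

```yaml
# Hypothetical sketch of the handler pattern described in the README.
# handlers/main.yml
- name: update_apt_cache
  apt:
    update_cache: yes
    cache_valid_time: 3600  # skip the refresh if the cache is under an hour old

# tasks/main.yml -- any task that adds a repo notifies the handler
- name: add Phusion Passenger apt repository
  apt_repository:
    repo: "deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main"
    state: present
  notify: update_apt_cache
```

Because handlers run at most once per play, several repo-adding tasks can notify the same handler and the cache is still only refreshed a single time.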
-------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [ssh_connection] 2 | pipelining=True 3 | -------------------------------------------------------------------------------- /build-server.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # Confused about Ansible roles? 4 | # I was too. They're basically just a bit of naming convention magic. 5 | # http://docs.ansible.com/playbooks_roles.html#roles 6 | 7 | 8 | ### Bootstrap the {{ deploy }} user. 9 | ### You must call this playbook with an admin user. 10 | ### (The included Vagrantfile will use -u vagrant. For DigitalOcean, use -u root.) 11 | 12 | - hosts: all 13 | vars_files: 14 | - vars/defaults.yml 15 | sudo: True 16 | # user: intentionally blank 17 | roles: 18 | - { role: create-deploy-user, tags: ['create-deploy-user'] } 19 | 20 | 21 | - hosts: all 22 | vars_files: 23 | - vars/defaults.yml 24 | - vars/sampleapp.yml 25 | remote_user: "{{ deploy }}" 26 | roles: 27 | ### Server provisioning roles: 28 | - { role: linux-prereqs, tags: ['linux-prereqs'] } 29 | - { role: ufw, tags: ['ufw'] } 30 | - { role: postgresql, tags: ['postgresql'] } 31 | - { role: rbenv, tags: ['rbenv'] } 32 | - { role: ruby, tags: ['ruby'] } 33 | - { role: papertrail, tags: ['papertrail'] } 34 | - { role: nginx-passenger, tags: ['nginx-passenger'] } 35 | 36 | ### Site specific roles: 37 | - { role: prepare_site_sampleapp, tags: ['prepare_site_sampleapp'] } -------------------------------------------------------------------------------- /hosts-digitalocean: -------------------------------------------------------------------------------- 1 | [webservers] 2 | greenfield.416.bike ansible_ssh_host=107.170.76.94 3 | -------------------------------------------------------------------------------- /hosts-vagrant: 
-------------------------------------------------------------------------------- 1 | [webservers] 2 | dev.416.bike ansible_ssh_host=10.33.33.33 3 | -------------------------------------------------------------------------------- /hosts/digital_ocean.ini: -------------------------------------------------------------------------------- 1 | # Ansible DigitalOcean external inventory script settings 2 | # 3 | 4 | [digital_ocean] 5 | 6 | # The module needs your DigitalOcean Client ID and API Key. 7 | # These may also be specified on the command line via --client-id and --api-key 8 | # or via the environment variables DO_CLIENT_ID and DO_API_KEY 9 | # 10 | #client_id = abcdefg123456 11 | #api_key = 123456abcdefg 12 | 13 | 14 | # API calls to DigitalOcean may be slow. For this reason, we cache the results 15 | # of an API call. Set this to the path you want cache files to be written to. 16 | # One file will be written to this directory: 17 | # - ansible-digital_ocean.cache 18 | # 19 | cache_path = /tmp 20 | 21 | 22 | # The number of seconds a cache file is considered valid. After this many 23 | # seconds, a new API call will be made, and the cache file will be updated. 24 | # 25 | cache_max_age = 300 26 | -------------------------------------------------------------------------------- /hosts/digital_ocean.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | ''' 4 | DigitalOcean external inventory script 5 | ====================================== 6 | 7 | Generates Ansible inventory of DigitalOcean Droplets. 8 | 9 | In addition to the --list and --host options used by Ansible, there are options 10 | for generating JSON of other DigitalOcean data. This is useful when creating 11 | droplets. For example, --regions will return all the DigitalOcean Regions. 12 | This information can also be easily found in the cache file, whose default 13 | location is /tmp/ansible-digital_ocean.cache). 
14 | 
15 | The --pretty (-p) option pretty-prints the output for better human readability.
16 | 
17 | ----
18 | Although the cache stores all the information received from DigitalOcean,
19 | the cache is not used for current droplet information (in --list, --host,
20 | --all, and --droplets). This is so that accurate droplet information is always
21 | found. You can force this script to use the cache with --force-cache.
22 | 
23 | ----
24 | Configuration is read from `digital_ocean.ini`, then from environment variables,
25 | then from command-line arguments.
26 | 
27 | Most notably, the DigitalOcean Client ID and API Key must be specified. They
28 | can be specified in the INI file or with the following environment variables:
29 |     export DO_CLIENT_ID='DO123' DO_API_KEY='abc123'
30 | 
31 | Alternatively, they can be passed on the command-line with --client-id and
32 | --api-key.
33 | 
34 | If you specify DigitalOcean credentials in the INI file, a handy way to
35 | get them into your environment (e.g., to use the digital_ocean module)
36 | is to use the output of the --env option with export:
37 |     export $(digital_ocean.py --env)
38 | 
39 | ----
40 | The following groups are generated from --list:
41 |  - ID    (droplet ID)
42 |  - NAME  (droplet NAME)
43 |  - image_ID
44 |  - image_NAME
45 |  - distro_NAME  (distribution NAME from image)
46 |  - region_ID
47 |  - region_NAME
48 |  - size_ID
49 |  - size_NAME
50 |  - status_STATUS
51 | 
52 | When run against a specific host, this script returns the following variables:
53 |  - do_created_at
54 |  - do_distro
55 |  - do_id
56 |  - do_image
57 |  - do_image_id
58 |  - do_ip_address
59 |  - do_name
60 |  - do_region
61 |  - do_region_id
62 |  - do_size
63 |  - do_size_id
64 |  - do_status
65 | 
66 | -----
67 | ```
68 | usage: digital_ocean.py [-h] [--list] [--host HOST] [--all]
69 |                         [--droplets] [--regions] [--images] [--sizes]
70 |                         [--ssh-keys] [--domains] [--pretty]
71 |                         [--cache-path CACHE_PATH]
72 |                         [--cache-max_age CACHE_MAX_AGE]
73 |                         
[--refresh-cache] [--client-id CLIENT_ID] 74 | [--api-key API_KEY] 75 | 76 | Produce an Ansible Inventory file based on DigitalOcean credentials 77 | 78 | optional arguments: 79 | -h, --help show this help message and exit 80 | --list List all active Droplets as Ansible inventory 81 | (default: True) 82 | --host HOST Get all Ansible inventory variables about a specific 83 | Droplet 84 | --all List all DigitalOcean information as JSON 85 | --droplets List Droplets as JSON 86 | --regions List Regions as JSON 87 | --images List Images as JSON 88 | --sizes List Sizes as JSON 89 | --ssh-keys List SSH keys as JSON 90 | --domains List Domains as JSON 91 | --pretty, -p Pretty-print results 92 | --cache-path CACHE_PATH 93 | Path to the cache files (default: .) 94 | --cache-max_age CACHE_MAX_AGE 95 | Maximum age of the cached items (default: 0) 96 | --refresh-cache Force refresh of cache by making API requests to 97 | DigitalOcean (default: False - use cache files) 98 | --client-id CLIENT_ID, -c CLIENT_ID 99 | DigitalOcean Client ID 100 | --api-key API_KEY, -a API_KEY 101 | DigitalOcean API Key 102 | ``` 103 | 104 | ''' 105 | 106 | # (c) 2013, Evan Wies 107 | # 108 | # Inspired by the EC2 inventory plugin: 109 | # https://github.com/ansible/ansible/blob/devel/plugins/inventory/ec2.py 110 | # 111 | # This file is part of Ansible, 112 | # 113 | # Ansible is free software: you can redistribute it and/or modify 114 | # it under the terms of the GNU General Public License as published by 115 | # the Free Software Foundation, either version 3 of the License, or 116 | # (at your option) any later version. 117 | # 118 | # Ansible is distributed in the hope that it will be useful, 119 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 120 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 121 | # GNU General Public License for more details. 122 | # 123 | # You should have received a copy of the GNU General Public License 124 | # along with Ansible. 
If not, see <http://www.gnu.org/licenses/>.
125 | 
126 | ######################################################################
127 | 
128 | import os
129 | import sys
130 | import re
131 | import argparse
132 | from time import time
133 | import ConfigParser
134 | 
135 | try:
136 |     import json
137 | except ImportError:
138 |     import simplejson as json
139 | 
140 | try:
141 |     from dopy.manager import DoError, DoManager
142 | except ImportError, e:
143 |     print "failed=True msg='`dopy` library required for this script'"
144 |     sys.exit(1)
145 | 
146 | 
147 | 
148 | class DigitalOceanInventory(object):
149 | 
150 |     ###########################################################################
151 |     # Main execution path
152 |     ###########################################################################
153 | 
154 |     def __init__(self):
155 |         ''' Main execution path '''
156 | 
157 |         # DigitalOceanInventory data
158 |         self.data = {}       # All DigitalOcean data
159 |         self.inventory = {}  # Ansible Inventory
160 |         self.index = {}      # Various indices of Droplet metadata
161 | 
162 |         # Define defaults
163 |         self.cache_path = '.'
164 |         self.cache_max_age = 0
165 | 
166 |         # Read settings, environment variables, and CLI arguments
167 |         self.read_settings()
168 |         self.read_environment()
169 |         self.read_cli_args()
170 | 
171 |         # Verify credentials were set
172 |         if not hasattr(self, 'client_id') or not hasattr(self, 'api_key'):
173 |             print '''Could not find values for DigitalOcean client_id and api_key.
174 | They must be specified via either ini file, command line argument (--client-id and --api-key), 175 | or environment variables (DO_CLIENT_ID and DO_API_KEY)''' 176 | sys.exit(-1) 177 | 178 | # env command, show DigitalOcean credentials 179 | if self.args.env: 180 | print "DO_CLIENT_ID=%s DO_API_KEY=%s" % (self.client_id, self.api_key) 181 | sys.exit(0) 182 | 183 | # Manage cache 184 | self.cache_filename = self.cache_path + "/ansible-digital_ocean.cache" 185 | self.cache_refreshed = False 186 | 187 | if not self.args.force_cache and self.args.refresh_cache or not self.is_cache_valid(): 188 | self.load_all_data_from_digital_ocean() 189 | else: 190 | self.load_from_cache() 191 | if len(self.data) == 0: 192 | if self.args.force_cache: 193 | print '''Cache is empty and --force-cache was specified''' 194 | sys.exit(-1) 195 | self.load_all_data_from_digital_ocean() 196 | else: 197 | # We always get fresh droplets for --list, --host, --all, and --droplets 198 | # unless --force-cache is specified 199 | if not self.args.force_cache and ( 200 | self.args.list or self.args.host or self.args.all or self.args.droplets): 201 | self.load_droplets_from_digital_ocean() 202 | 203 | # Pick the json_data to print based on the CLI command 204 | if self.args.droplets: json_data = { 'droplets': self.data['droplets'] } 205 | elif self.args.regions: json_data = { 'regions': self.data['regions'] } 206 | elif self.args.images: json_data = { 'images': self.data['images'] } 207 | elif self.args.sizes: json_data = { 'sizes': self.data['sizes'] } 208 | elif self.args.ssh_keys: json_data = { 'ssh_keys': self.data['ssh_keys'] } 209 | elif self.args.domains: json_data = { 'domains': self.data['domains'] } 210 | elif self.args.all: json_data = self.data 211 | 212 | elif self.args.host: json_data = self.load_droplet_variables_for_host() 213 | else: # '--list' this is last to make it default 214 | json_data = self.inventory 215 | 216 | if self.args.pretty: 217 | print json.dumps(json_data, 
sort_keys=True, indent=2) 218 | else: 219 | print json.dumps(json_data) 220 | # That's all she wrote... 221 | 222 | 223 | ########################################################################### 224 | # Script configuration 225 | ########################################################################### 226 | 227 | def read_settings(self): 228 | ''' Reads the settings from the digital_ocean.ini file ''' 229 | config = ConfigParser.SafeConfigParser() 230 | config.read(os.path.dirname(os.path.realpath(__file__)) + '/digital_ocean.ini') 231 | 232 | # Credentials 233 | if config.has_option('digital_ocean', 'client_id'): 234 | self.client_id = config.get('digital_ocean', 'client_id') 235 | if config.has_option('digital_ocean', 'api_key'): 236 | self.api_key = config.get('digital_ocean', 'api_key') 237 | 238 | # Cache related 239 | if config.has_option('digital_ocean', 'cache_path'): 240 | self.cache_path = config.get('digital_ocean', 'cache_path') 241 | if config.has_option('digital_ocean', 'cache_max_age'): 242 | self.cache_max_age = config.getint('digital_ocean', 'cache_max_age') 243 | 244 | 245 | def read_environment(self): 246 | ''' Reads the settings from environment variables ''' 247 | # Setup credentials 248 | if os.getenv("DO_CLIENT_ID"): self.client_id = os.getenv("DO_CLIENT_ID") 249 | if os.getenv("DO_API_KEY"): self.api_key = os.getenv("DO_API_KEY") 250 | 251 | 252 | def read_cli_args(self): 253 | ''' Command line argument processing ''' 254 | parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on DigitalOcean credentials') 255 | 256 | parser.add_argument('--list', action='store_true', help='List all active Droplets as Ansible inventory (default: True)') 257 | parser.add_argument('--host', action='store', help='Get all Ansible inventory variables about a specific Droplet') 258 | 259 | parser.add_argument('--all', action='store_true', help='List all DigitalOcean information as JSON') 260 | 
parser.add_argument('--droplets','-d', action='store_true', help='List Droplets as JSON') 261 | parser.add_argument('--regions', action='store_true', help='List Regions as JSON') 262 | parser.add_argument('--images', action='store_true', help='List Images as JSON') 263 | parser.add_argument('--sizes', action='store_true', help='List Sizes as JSON') 264 | parser.add_argument('--ssh-keys', action='store_true', help='List SSH keys as JSON') 265 | parser.add_argument('--domains', action='store_true',help='List Domains as JSON') 266 | 267 | parser.add_argument('--pretty','-p', action='store_true', help='Pretty-print results') 268 | 269 | parser.add_argument('--cache-path', action='store', help='Path to the cache files (default: .)') 270 | parser.add_argument('--cache-max_age', action='store', help='Maximum age of the cached items (default: 0)') 271 | parser.add_argument('--force-cache', action='store_true', default=False, help='Only use data from the cache') 272 | parser.add_argument('--refresh-cache','-r', action='store_true', default=False, help='Force refresh of cache by making API requests to DigitalOcean (default: False - use cache files)') 273 | 274 | parser.add_argument('--env','-e', action='store_true', help='Display DO_CLIENT_ID and DO_API_KEY') 275 | parser.add_argument('--client-id','-c', action='store', help='DigitalOcean Client ID') 276 | parser.add_argument('--api-key','-a', action='store', help='DigitalOcean API Key') 277 | 278 | self.args = parser.parse_args() 279 | 280 | if self.args.client_id: self.client_id = self.args.client_id 281 | if self.args.api_key: self.api_key = self.args.api_key 282 | if self.args.cache_path: self.cache_path = self.args.cache_path 283 | if self.args.cache_max_age: self.cache_max_age = self.args.cache_max_age 284 | 285 | # Make --list default if none of the other commands are specified 286 | if (not self.args.droplets and not self.args.regions and not self.args.images and 287 | not self.args.sizes and not self.args.ssh_keys 
and not self.args.domains and 288 | not self.args.all and not self.args.host): 289 | self.args.list = True 290 | 291 | 292 | ########################################################################### 293 | # Data Management 294 | ########################################################################### 295 | 296 | def load_all_data_from_digital_ocean(self): 297 | ''' Use dopy to get all the information from DigitalOcean and save data in cache files ''' 298 | manager = DoManager(self.client_id, self.api_key) 299 | 300 | self.data = {} 301 | self.data['droplets'] = self.sanitize_list(manager.all_active_droplets()) 302 | self.data['regions'] = self.sanitize_list(manager.all_regions()) 303 | self.data['images'] = self.sanitize_list(manager.all_images(filter=None)) 304 | self.data['sizes'] = self.sanitize_list(manager.sizes()) 305 | self.data['ssh_keys'] = self.sanitize_list(manager.all_ssh_keys()) 306 | self.data['domains'] = self.sanitize_list(manager.all_domains()) 307 | 308 | self.index = {} 309 | self.index['region_to_name'] = self.build_index(self.data['regions'], 'id', 'name') 310 | self.index['size_to_name'] = self.build_index(self.data['sizes'], 'id', 'name') 311 | self.index['image_to_name'] = self.build_index(self.data['images'], 'id', 'name') 312 | self.index['image_to_distro'] = self.build_index(self.data['images'], 'id', 'distribution') 313 | self.index['host_to_droplet'] = self.build_index(self.data['droplets'], 'ip_address', 'id', False) 314 | 315 | self.build_inventory() 316 | 317 | self.write_to_cache() 318 | 319 | 320 | def load_droplets_from_digital_ocean(self): 321 | ''' Use dopy to get droplet information from DigitalOcean and save data in cache files ''' 322 | manager = DoManager(self.client_id, self.api_key) 323 | self.data['droplets'] = self.sanitize_list(manager.all_active_droplets()) 324 | self.index['host_to_droplet'] = self.build_index(self.data['droplets'], 'ip_address', 'id', False) 325 | self.build_inventory() 326 | 
self.write_to_cache() 327 | 328 | 329 | def build_index(self, source_seq, key_from, key_to, use_slug=True): 330 | dest_dict = {} 331 | for item in source_seq: 332 | name = (use_slug and item.has_key('slug')) and item['slug'] or item[key_to] 333 | key = item[key_from] 334 | dest_dict[key] = name 335 | return dest_dict 336 | 337 | 338 | def build_inventory(self): 339 | '''Build Ansible inventory of droplets''' 340 | self.inventory = {} 341 | 342 | # add all droplets by id and name 343 | for droplet in self.data['droplets']: 344 | dest = droplet['ip_address'] 345 | 346 | self.inventory[droplet['id']] = [dest] 347 | self.push(self.inventory, droplet['name'], dest) 348 | self.push(self.inventory, 'region_'+droplet['region_id'], dest) 349 | self.push(self.inventory, 'image_' +droplet['image_id'], dest) 350 | self.push(self.inventory, 'size_' +droplet['size_id'], dest) 351 | self.push(self.inventory, 'status_'+droplet['status'], dest) 352 | 353 | region_name = self.index['region_to_name'].get(droplet['region_id']) 354 | if region_name: 355 | self.push(self.inventory, 'region_'+region_name, dest) 356 | 357 | size_name = self.index['size_to_name'].get(droplet['size_id']) 358 | if size_name: 359 | self.push(self.inventory, 'size_'+size_name, dest) 360 | 361 | image_name = self.index['image_to_name'].get(droplet['image_id']) 362 | if image_name: 363 | self.push(self.inventory, 'image_'+image_name, dest) 364 | 365 | distro_name = self.index['image_to_distro'].get(droplet['image_id']) 366 | if distro_name: 367 | self.push(self.inventory, 'distro_'+distro_name, dest) 368 | 369 | 370 | def load_droplet_variables_for_host(self): 371 | '''Generate a JSON response to a --host call''' 372 | host = self.to_safe(str(self.args.host)) 373 | 374 | if not host in self.index['host_to_droplet']: 375 | # try updating cache 376 | if not self.args.force_cache: 377 | self.load_all_data_from_digital_ocean() 378 | if not host in self.index['host_to_droplet']: 379 | # host might not exist anymore
380 | return {} 381 | 382 | droplet = None 383 | if self.cache_refreshed: 384 | for drop in self.data['droplets']: 385 | if drop['ip_address'] == host: 386 | droplet = self.sanitize_dict(drop) 387 | break 388 | else: 389 | # Cache wasn't refreshed this run, so hit DigitalOcean API 390 | manager = DoManager(self.client_id, self.api_key) 391 | droplet_id = self.index['host_to_droplet'][host] 392 | droplet = self.sanitize_dict(manager.show_droplet(droplet_id)) 393 | 394 | if not droplet: 395 | return {} 396 | 397 | # Put all the information in a 'do_' namespace 398 | info = {} 399 | for k, v in droplet.items(): 400 | info['do_'+k] = v 401 | 402 | # Generate user-friendly variables (i.e. not the IDs) 403 | if droplet.has_key('region_id'): 404 | info['do_region'] = self.index['region_to_name'].get(droplet['region_id']) 405 | if droplet.has_key('size_id'): 406 | info['do_size'] = self.index['size_to_name'].get(droplet['size_id']) 407 | if droplet.has_key('image_id'): 408 | info['do_image'] = self.index['image_to_name'].get(droplet['image_id']) 409 | info['do_distro'] = self.index['image_to_distro'].get(droplet['image_id']) 410 | 411 | return info 412 | 413 | 414 | 415 | ########################################################################### 416 | # Cache Management 417 | ########################################################################### 418 | 419 | def is_cache_valid(self): 420 | ''' Determines whether the cache file has expired or is still valid ''' 421 | if os.path.isfile(self.cache_filename): 422 | mod_time = os.path.getmtime(self.cache_filename) 423 | current_time = time() 424 | if (mod_time + self.cache_max_age) > current_time: 425 | return True 426 | return False 427 | 428 | 429 | def load_from_cache(self): 430 | ''' Reads the data from the cache file and assigns it to member variables as Python objects''' 431 | cache = open(self.cache_filename, 'r') 432 | json_data = cache.read() 433 | cache.close() 434 | data = json.loads(json_data) 435 | 436 |
self.data = data['data'] 437 | self.inventory = data['inventory'] 438 | self.index = data['index'] 439 | 440 | 441 | def write_to_cache(self): 442 | ''' Writes data in JSON format to a file ''' 443 | data = { 'data': self.data, 'index': self.index, 'inventory': self.inventory } 444 | json_data = json.dumps(data, sort_keys=True, indent=2) 445 | 446 | cache = open(self.cache_filename, 'w') 447 | cache.write(json_data) 448 | cache.close() 449 | 450 | 451 | 452 | ########################################################################### 453 | # Utilities 454 | ########################################################################### 455 | 456 | def push(self, my_dict, key, element): 457 | ''' Pushes an element onto a list in the dict, creating the list if the key is not yet present ''' 458 | if key in my_dict: 459 | my_dict[key].append(element) 460 | else: 461 | my_dict[key] = [element] 462 | 463 | 464 | def to_safe(self, word): 465 | ''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups ''' 466 | return re.sub("[^A-Za-z0-9\-\.]", "_", word) 467 | 468 | 469 | def sanitize_dict(self, d): 470 | new_dict = {} 471 | for k, v in d.items(): 472 | if v != None: 473 | new_dict[self.to_safe(str(k))] = self.to_safe(str(v)) 474 | return new_dict 475 | 476 | 477 | def sanitize_list(self, seq): 478 | new_seq = [] 479 | for d in seq: 480 | new_seq.append(self.sanitize_dict(d)) 481 | return new_seq 482 | 483 | 484 | 485 | ########################################################################### 486 | # Run the script 487 | DigitalOceanInventory() 488 | -------------------------------------------------------------------------------- /hosts/local: -------------------------------------------------------------------------------- 1 | [local] 2 | localhost ansible_connection=local 3 | -------------------------------------------------------------------------------- /local: -------------------------------------------------------------------------------- 1 |
[local] 2 | localhost ansible_connection=local 3 | -------------------------------------------------------------------------------- /provision-digitalocean.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: localhost 3 | gather_facts: False 4 | 5 | vars_files: 6 | - vars/digitalocean.yml 7 | 8 | roles: 9 | - { role: digitalocean, tags: ['digitalocean'] } -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | git+git://github.com/ansible/ansible.git 2 | ansible-shell 3 | dopy 4 | -------------------------------------------------------------------------------- /roles/create-deploy-user/files/known_hosts: -------------------------------------------------------------------------------- 1 | bitbucket.org,131.103.20.168 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAubiN81eDcafrgMeLzaFPsw2kNvEcqTKl/VqLat/MaB33pZy0y3rJZtnqwR2qOOvbwKZYKiEO1O6VqNEBxKvJJelCq0dTXWT5pbO2gDXC6h6QDXCaHo6pOHGPUy+YBaGQRGuSusMEASYiWunYN0vCAI8QaXnWMXNMdFP3jHAJH0eDsoiGnLPBlBp4TNm6rYI74nMzgz3B9IikW4WVK+dc8KZJZWYjAuORU3jc1c/NPskD2ASinf8v3xnfXeukU0sJ5N6m5E8VLjObPEO+mN2t/FZTMZLiFqPWc/ALSqnMnnhwrNi2rbfg/rd/IpL8Le3pSBne8+seeFVBoGqzHM9yXw== 2 | -------------------------------------------------------------------------------- /roles/create-deploy-user/files/ssh_config: -------------------------------------------------------------------------------- 1 | Host bitbucket.com 2 | StrictHostKeyChecking no 3 | 4 | Host github.com 5 | StrictHostKeyChecking no 6 | 7 | -------------------------------------------------------------------------------- /roles/create-deploy-user/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: create {{ deploy }} user 2 | user: name={{ deploy }} comment="deploy user" generate_ssh_key=yes state=present password={{ password }} shell=/bin/bash 3 | 4 | - name: 
authorize my SSH key to access {{ deploy }} 5 | authorized_key: user={{ deploy }} key="{{ lookup('file', '~/.ssh/id_rsa.pub') }}" 6 | 7 | - name: copy ssh_config so that github doesn't do host verification 8 | action: copy src=ssh_config dest=~{{ deploy }}/.ssh/config owner={{ deploy }} group={{ deploy }} 9 | 10 | - name: add {{ deploy }} to sudoers 11 | lineinfile: "dest=/etc/sudoers state=present line='{{ deploy }} ALL=(ALL:ALL) ALL'" 12 | 13 | - name: require password upon each invocation of sudo 14 | lineinfile: "dest=/etc/sudoers state=present line='Defaults timestamp_timeout=0'" 15 | 16 | - name: create www-data group 17 | group: name=www-data state=present 18 | 19 | - name: create apps directory owned by {{ deploy }}:www-data with setgid bit 20 | # note: 2755 => 2=setgid, 7=owner rwx, 5=group rx, 5=other rx 21 | file: name=~{{ deploy }}/apps state=directory owner={{ deploy }} group=www-data mode=2755 22 | 23 | - name: remove authorized_keys for root (specifically for DigitalOcean) 24 | file: name=/root/.ssh/authorized_keys state=absent -------------------------------------------------------------------------------- /roles/digitalocean/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # Adapted from https://github.com/garethr/ansible-provisioner 2 | # Original license: 3 | 4 | # The MIT License (MIT) 5 | # Copyright (c) 2013 Gareth Rushgrove 6 | 7 | # Permission is hereby granted, free of charge, to any person obtaining a copy 8 | # of this software and associated documentation files (the "Software"), to deal 9 | # in the Software without restriction, including without limitation the rights 10 | # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 | # copies of the Software, and to permit persons to whom the Software is 12 | # furnished to do so, subject to the following conditions: 13 | 14 | # The above copyright notice and this permission notice shall be included in all 15 | # 
copies or substantial portions of the Software. 16 | 17 | # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, 18 | # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 19 | # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 20 | # IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, 21 | # DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 22 | # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE 23 | # OR OTHER DEALINGS IN THE SOFTWARE. 24 | 25 | --- 26 | - name: Create droplet if not exists 27 | digital_ocean: > 28 | state=present 29 | ssh_key_ids={{ default_key }} 30 | command=droplet 31 | name={{ droplet_hostname }} 32 | size_id={{ droplet_size }} 33 | region_id={{ droplet_region }} 34 | image_id={{ droplet_image }} 35 | wait_timeout=500 36 | unique_name=true 37 | register: result 38 | 39 | # NOTE: re unique_name=true above: 40 | # Docs say this is useful for 'idempotency'. 41 | # Wouldn't that mean turning this *off* is useful? 42 | # We don't want it recreating the droplet with the same name. 43 | 44 | - name: "Add DNS A record" 45 | digital_ocean_domain: > 46 | state=present 47 | name={{ droplet_hostname }} 48 | ip={{ result.droplet.ip_address }} -------------------------------------------------------------------------------- /roles/linux-prereqs/files/50unattended-upgrades: -------------------------------------------------------------------------------- 1 | // Automatically upgrade packages from these (origin:archive) pairs 2 | 3 | // jbinto note: 4 | // This upgrades basically everything automatically. 5 | // It also reboots the server automatically. 6 | // Best used for low value servers that may not get attention. 7 | // In a real production environment, test changes first! 
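// A more conservative variant (an example, not what this playbook installs)
// would allow only security updates and skip the automatic reboot:
//
//   Unattended-Upgrade::Allowed-Origins {
//       "${distro_id}:${distro_codename}-security";
//   };
//   Unattended-Upgrade::Automatic-Reboot "false";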
8 | 9 | Unattended-Upgrade::Allowed-Origins { 10 | "${distro_id}:${distro_codename}-security"; 11 | "${distro_id}:${distro_codename}-updates"; 12 | "${distro_id}:${distro_codename}-proposed"; 13 | "${distro_id}:${distro_codename}-backports"; 14 | 15 | "apt.postgresql.org":"trusty-pgdg"; 16 | "Phusion":"trusty"; 17 | }; 18 | 19 | Unattended-Upgrade::Automatic-Reboot "true"; -------------------------------------------------------------------------------- /roles/linux-prereqs/files/apt_periodic: -------------------------------------------------------------------------------- 1 | APT::Periodic::Update-Package-Lists "1"; 2 | APT::Periodic::Download-Upgradeable-Packages "1"; 3 | APT::Periodic::AutocleanInterval "7"; 4 | APT::Periodic::Unattended-Upgrade "1"; -------------------------------------------------------------------------------- /roles/linux-prereqs/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - name: restart_sshd 2 | service: name=ssh state=restarted 3 | sudo: yes -------------------------------------------------------------------------------- /roles/linux-prereqs/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Remove /opt/vagrant_ruby 2 | file: name=/opt/vagrant_ruby state=absent 3 | sudo: yes 4 | 5 | - name: Update apt cache 6 | apt: update_cache=yes cache_valid_time=3600 7 | sudo: yes 8 | 9 | - name: Upgrade existing linux packages 10 | apt: upgrade=dist 11 | sudo: yes 12 | 13 | - name: Install the kitchen sink 14 | action: apt pkg={{item}} state=installed update_cache=true 15 | sudo: yes 16 | with_items: 17 | - autoconf 18 | - automake 19 | - bison 20 | - build-essential 21 | - curl 22 | - gawk 23 | - git-core 24 | - git 25 | - libxml2-dev 26 | - libffi-dev 27 | - libyaml-dev 28 | - libssl-dev 29 | - libsqlite3-dev 30 | - libgdbm-dev 31 | - libncurses5-dev 32 | - libreadline6-dev 33 | - libtool 34 | - libyaml-dev 35 | - nfs-common # make 
virtualbox faster 36 | - pkg-config 37 | - sqlite3 38 | - vim 39 | - zlib1g-dev 40 | - htop 41 | - iftop 42 | - python-pycurl # for postgresql playbook, uses apt_pkg 43 | - python-software-properties # for `add-apt-repository` command 44 | - fail2ban 45 | - ufw 46 | - unattended-upgrades 47 | 48 | - name: Disallow root SSH access 49 | action: lineinfile dest=/etc/ssh/sshd_config regexp="^PermitRootLogin" line="PermitRootLogin no" state=present 50 | sudo: yes 51 | notify: restart_sshd 52 | 53 | - name: Disallow password authentication 54 | action: lineinfile dest=/etc/ssh/sshd_config regexp="^PasswordAuthentication" line="PasswordAuthentication no" state=present 55 | sudo: yes 56 | notify: restart_sshd 57 | 58 | - name: Allow unattended upgrades 59 | copy: src=apt_periodic dest=/etc/apt/apt.conf.d/10periodic 60 | sudo: yes 61 | 62 | - name: Configure unattended upgrades 63 | copy: src=50unattended-upgrades dest=/etc/apt/apt.conf.d/50unattended-upgrades 64 | sudo: yes 65 | 66 | 67 | # Rails depends on a JS runtime. 68 | # Should move this to its own role, eventually. 69 | 70 | - name: Node.js | Add the node.js PPA 71 | action: command add-apt-repository -y ppa:chris-lea/node.js creates=/etc/apt/sources.list.d/chris-lea-node_js-trusty.list 72 | sudo: yes 73 | 74 | - name: Node.js | Install nodejs and npm 75 | action: apt pkg={{ item }} state=installed update_cache=yes 76 | sudo: yes 77 | with_items: 78 | - nodejs -------------------------------------------------------------------------------- /roles/nginx-passenger/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # Adapted from official Passenger/nginx documentation: 2 | # http://www.modrails.com/documentation/Users%20guide%20Nginx.html 3 | 4 | 5 | # NOTE: Currently using the Phusion Passenger repo for nginx. 6 | # Was previously using official nginx/stable PPA.
7 | 8 | # Can't seem to use keyserver= apt_key feature here 9 | # See https://groups.google.com/forum/#!topic/ansible-project/8SEZWEQGTsg 10 | - name: Add Phusion Passenger repo key 11 | apt_key: url=http://keyserver.ubuntu.com/pks/lookup?op=get&fingerprint=on&search=0x561F9B9CAC40B2F7 state=present 12 | sudo: yes 13 | 14 | - name: Install packages to enable apt over HTTPS 15 | action: apt pkg={{item}} state=installed update_cache=true 16 | sudo: yes 17 | with_items: 18 | - apt-transport-https 19 | - ca-certificates 20 | 21 | - name: Add Phusion Passenger repo 22 | apt_repository: repo='deb https://oss-binaries.phusionpassenger.com/apt/passenger trusty main' 23 | sudo: yes 24 | 25 | - name: Install nginx-extras and passenger 26 | action: apt pkg={{item}} state=installed update_cache=true 27 | sudo: yes 28 | with_items: 29 | # note: Passenger repo has its own nginx-extras package. 30 | - nginx-extras 31 | - passenger 32 | 33 | - name: Remove the default nginx app's config 34 | file: path=/etc/nginx/sites-available/default state=absent 35 | sudo: yes 36 | 37 | - name: Remove the default nginx app's symlink if it exists 38 | file: path=/etc/nginx/sites-enabled/default state=absent 39 | sudo: yes 40 | 41 | # Note: Will do this in Capistrano, for now.
42 | 43 | #- name: Configure nginx for the app 44 | # template: src={{ app_name }}.j2 dest=/etc/nginx/sites-available/{{ app_name }} group={{ deploy }} owner={{ deploy }} force=yes 45 | 46 | #- name: Enable the app 47 | # file: src=/etc/nginx/sites-available/{{ app_name }} dest=/etc/nginx/sites-enabled/{{ app_name }} state=link owner={{ deploy }} group={{ deploy }} 48 | 49 | - name: Copy nginx.conf 50 | template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf 51 | sudo: yes 52 | 53 | - name: Start nginx 54 | service: name=nginx state=started 55 | sudo: yes -------------------------------------------------------------------------------- /roles/nginx-passenger/templates/nginx.conf.j2: -------------------------------------------------------------------------------- 1 | user www-data; 2 | worker_processes 4; 3 | pid /var/run/nginx.pid; 4 | 5 | events { 6 | worker_connections 768; 7 | # multi_accept on; 8 | } 9 | 10 | http { 11 | 12 | ## 13 | # Basic Settings 14 | ## 15 | 16 | sendfile on; 17 | tcp_nopush on; 18 | tcp_nodelay on; 19 | keepalive_timeout 65; 20 | types_hash_max_size 2048; 21 | # server_tokens off; 22 | 23 | # server_names_hash_bucket_size 64; 24 | # server_name_in_redirect off; 25 | 26 | include /etc/nginx/mime.types; 27 | default_type application/octet-stream; 28 | 29 | ## 30 | # Logging Settings 31 | ## 32 | 33 | access_log /var/log/nginx/access.log; 34 | error_log /var/log/nginx/error.log; 35 | 36 | ## 37 | # Gzip Settings 38 | ## 39 | 40 | gzip on; 41 | gzip_disable "msie6"; 42 | 43 | # gzip_vary on; 44 | # gzip_proxied any; 45 | # gzip_comp_level 6; 46 | # gzip_buffers 16 8k; 47 | # gzip_http_version 1.1; 48 | # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; 49 | 50 | ## 51 | # nginx-naxsi config 52 | ## 53 | # Uncomment it if you installed nginx-naxsi 54 | ## 55 | 56 | # include /etc/nginx/naxsi_core.rules; 57 | 58 | ## 59 | # Phusion Passenger config 60 | ## 61 | 
# Uncomment it if you installed passenger or passenger-enterprise 62 | ## 63 | 64 | 65 | passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini; 66 | passenger_ruby /home/{{ deploy }}/.rbenv/versions/{{ ruby_version }}/bin/ruby; 67 | 68 | ## 69 | # Virtual Host Configs 70 | ## 71 | 72 | include /etc/nginx/conf.d/*.conf; 73 | include /etc/nginx/sites-enabled/*; 74 | } 75 | 76 | 77 | # mail { 78 | # # See sample authentication script at: 79 | # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript 80 | # 81 | # # auth_http localhost/auth.php; 82 | # # pop3_capabilities "TOP" "USER"; 83 | # # imap_capabilities "IMAP4rev1" "UIDPLUS"; 84 | # 85 | # server { 86 | # listen localhost:110; 87 | # protocol pop3; 88 | # proxy on; 89 | # } 90 | # 91 | # server { 92 | # listen localhost:143; 93 | # protocol imap; 94 | # proxy on; 95 | # } 96 | # } 97 | -------------------------------------------------------------------------------- /roles/papertrail/tasks/main.yml: -------------------------------------------------------------------------------- 1 | - name: Add papertrail destination to rsyslog.conf 2 | lineinfile: "dest=/etc/rsyslog.conf state=present line='*.* {{papertrail_log_dest}}'" 3 | sudo: yes 4 | 5 | # TODO: Add logging of individual files using remote_syslog gem 6 | # See https://papertrailapp.com/systems/setup -------------------------------------------------------------------------------- /roles/postgresql/tasks/main.yml: -------------------------------------------------------------------------------- 1 | 2 | 3 | # python-pycurl is required for apt_key module 4 | # see https://groups.google.com/forum/#!topic/ansible-project/ooAm8g3EEsU 5 | 6 | 7 | 8 | # BUG: When running 'vagrant up' for the first time, the python-pycurl task 9 | # always fails, with: 10 | # 11 | # msg: Failed to lock apt for exclusive operation 12 | # 13 | # The next time you run 'vagrant provision' it will be fine. 
14 | # 15 | # Suspect race condition with the 'async' apt job run before this one. 16 | # 17 | # Commenting this job out to see if it makes a difference. 18 | # The package is already installed in linux-prereqs. 19 | 20 | # - name: Install python-pycurl 21 | # apt: pkg=python-pycurl state=latest update_cache=true 22 | 23 | # BUG: Is it secure to retrieve a PGP key over HTTP? 24 | - name: Add PostgreSQL repo key 25 | apt_key: url=http://apt.postgresql.org/pub/repos/apt/ACCC4CF8.asc 26 | sudo: yes 27 | 28 | - name: Add PostgreSQL repo 29 | apt_repository: repo='deb http://apt.postgresql.org/pub/repos/apt/ trusty-pgdg main' 30 | sudo: yes 31 | 32 | - name: Install postgresql, libpq-dev, python-psycopg2 33 | action: apt pkg={{item}} state=latest update_cache=true 34 | sudo: yes 35 | with_items: 36 | - postgresql 37 | - libpq-dev 38 | - python-psycopg2 39 | 40 | - name: Copy valid pg_hba.conf 41 | template: src=pg_hba.conf.j2 dest=/etc/postgresql/9.4/main/pg_hba.conf 42 | sudo: yes 43 | 44 | - name: Ensure PostgreSQL is started 45 | # NOTE: If conf files were updated here, need to FORCE a RESTART!
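  # (Sketch, not wired up in this playbook:) the idiomatic way to force that
  # restart would be to notify a handler from the 'Copy valid pg_hba.conf'
  # task above (notify: restart_postgresql), backed by a handler such as:
  #   - name: restart_postgresql
  #     service: name=postgresql state=restarted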
46 | service: name=postgresql state=started 47 | sudo: yes 48 | -------------------------------------------------------------------------------- /roles/postgresql/templates/pg_hba.conf.j2: -------------------------------------------------------------------------------- 1 | local all all peer 2 | host all all 127.0.0.1/32 md5 3 | host all all ::1/128 md5 4 | host all all 10.11.12.0/24 md5 5 | -------------------------------------------------------------------------------- /roles/prepare_site_sampleapp/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - name: restart_nginx 2 | sudo: yes 3 | service: name=nginx state=restarted 4 | -------------------------------------------------------------------------------- /roles/prepare_site_sampleapp/tasks/main.yml: -------------------------------------------------------------------------------- 1 | ## 2 | ## Postgres 3 | ## 4 | 5 | - name: Create {{database_name}} database 6 | sudo: yes 7 | sudo_user: postgres 8 | postgresql_db: name={{database_name}} 9 | 10 | - name: Create user {{database_user}} on {{database_name}} 11 | sudo: yes 12 | sudo_user: postgres 13 | postgresql_user: db={{database_name}} 14 | name={{database_user}} 15 | password={{database_password}} 16 | role_attr_flags=CREATEDB,NOSUPERUSER 17 | 18 | ## 19 | ## Rails 20 | ## 21 | 22 | - name: mkdir -p ~{{deploy}}/apps/{{full_app_name}}/shared/config 23 | file: path=~{{deploy}}/apps/{{full_app_name}}/shared/config state=directory 24 | 25 | - name: Copy database.yml to config 26 | sudo: yes # or chgrp will fail. For some reason, the setgid doesn't work here. 
27 | template: src=database.yml.j2 dest=~{{deploy}}/apps/{{full_app_name}}/shared/config/database.yml owner={{deploy}} group=www-data mode=0640 28 | 29 | ### 30 | ### Nginx 31 | ### 32 | 33 | - name: Deploy {{full_app_name}} nginx config to sites-enabled 34 | sudo: yes 35 | template: src={{full_app_name}}.nginx.j2 dest=/etc/nginx/sites-enabled/{{full_app_name}} 36 | notify: restart_nginx -------------------------------------------------------------------------------- /roles/prepare_site_sampleapp/templates/database.yml.j2: -------------------------------------------------------------------------------- 1 | production: 2 | adapter: postgresql 3 | timeout: 5000 4 | encoding: utf8 5 | reconnect: false 6 | database: {{ database_name }} 7 | pool: 5 8 | username: {{ database_user }} 9 | password: {{ database_password }} 10 | host: {{ database_host }} 11 | port: {{ database_port }} -------------------------------------------------------------------------------- /roles/prepare_site_sampleapp/templates/sampleapp_production.nginx.j2: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name {{ server_name }}; 4 | root /home/{{deploy}}/apps/{{full_app_name}}/current/public; 5 | passenger_enabled on; 6 | 7 | # warning: cargo cult server optimization ahead 8 | gzip on; 9 | gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json; 10 | 11 | location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ { 12 | expires 1y; 13 | log_not_found off; 14 | } 15 | 16 | location ~ ^/(assets|images|javascripts|stylesheets|swfs|system)/ { 17 | gzip_static on; 18 | expires max; 19 | add_header Cache-Control public; 20 | add_header Last-Modified ""; 21 | add_header ETag ""; 22 | 23 | break; 24 | } 25 | } 26 | -------------------------------------------------------------------------------- /roles/rbenv/files/50_rbenv.bash:
-------------------------------------------------------------------------------- 1 | export PATH="$HOME/.rbenv/bin:$PATH" 2 | eval "$(rbenv init -)" 3 | -------------------------------------------------------------------------------- /roles/rbenv/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # Most of this borrowed from https://github.com/leucos/ansible-rbenv-playbook 2 | 3 | - name: Install rbenv 4 | git: repo=git://github.com/sstephenson/rbenv.git accept_hostkey=yes dest=~{{deploy}}/.rbenv 5 | 6 | - name: Create (if not exists) ~/.bash.d/ 7 | file: dest=~{{deploy}}/.bash.d/ state=directory 8 | 9 | - name: Copy 50_rbenv.bash to ~/.bash.d/ 10 | copy: src=50_rbenv.bash dest=~{{deploy}}/.bash.d/50_rbenv.bash mode=700 owner={{deploy}} 11 | 12 | - name: Add rbenv to ~/.bash_profile 13 | # Clever: the task is idempotent because of the regexp=rbenv 14 | lineinfile: dest=~{{deploy}}/.bash_profile regexp="rbenv" insertafter=EOF line="source ~/.bash.d/50_rbenv.bash" create=yes owner={{deploy}} 15 | 16 | - name: Create (if not exists) ~/.rbenv/plugins/ 17 | file: path=~{{deploy}}/.rbenv/plugins/ owner={{deploy}} group={{deploy}} mode=0755 state=directory 18 | 19 | - name: Install ruby-build 20 | git: repo=git://github.com/sstephenson/ruby-build.git dest=~{{deploy}}/.rbenv/plugins/ruby-build -------------------------------------------------------------------------------- /roles/ruby/handlers/main.yml: -------------------------------------------------------------------------------- 1 | - name: set_global_ruby_version 2 | shell: cd; bash -lc "rbenv global {{ ruby_version }}" 3 | -------------------------------------------------------------------------------- /roles/ruby/tasks/main.yml: -------------------------------------------------------------------------------- 1 | # Most of this borrowed from https://github.com/leucos/ansible-rbenv-playbook 2 | 3 | - name: Check if target ruby is installed 4 | shell: cd; bash -lc
"rbenv versions | grep {{ ruby_version }} | tr '*' ' ' | sed -e 's/\s\+//' | cut -f1 -d' '" 5 | # This is just a 'get' operation, so can't really change. 6 | # The rc from this shell does indicate to ansible that it's changed, so let's suppress that. 7 | # http://docs.ansible.com/playbooks_error_handling.html#overriding-the-changed-result 8 | changed_when: False 9 | register: ruby_is_installed 10 | 11 | - name: rbenv install {{ ruby_version }} 12 | shell: cd; bash -lc "MAKEOPTS={{ rbenv_makeopts }} CONFIGURE_OPTS=--disable-install-rdoc rbenv install {{ ruby_version }}" 13 | when: ruby_is_installed.stdout != ruby_version 14 | 15 | - name: set_global_ruby_version 16 | shell: cd; bash -lc "rbenv global {{ ruby_version }}" 17 | 18 | - name: install_bundler 19 | shell: cd; bash -lc "gem install bundle bundler" -------------------------------------------------------------------------------- /roles/ufw/README.md: -------------------------------------------------------------------------------- 1 | Uncomplicated Firewall 2 | ======== 3 | 4 | Ansible role that installs and configures ufw, AKA [The Uncomplicated Firewall](https://launchpad.net/ufw). 5 | 6 | Role Variables 7 | -------------- 8 | 9 | **ufw_connection_rate_limits**: A list of port and protocol pairs that should be rate limited. The default is empty. According to the ufw man page, "ufw will deny connections if an IP address has attempted to initiate 6 or more connections in the last 30 seconds." 
*ufw currently only supports rate limits for incoming IPv4 connections.* The following example would limit TCP connections to the SSH port, TCP and UDP connections to the DNS port, and TCP connections to the MySQL port: 10 | 11 | ufw_connection_rate_limits: 12 | - { port: 22, protocol: tcp } 13 | - { port: 53, protocol: tcp } 14 | - { port: 53, protocol: udp } 15 | - { port: 3306, protocol: tcp } 16 | 17 | **ufw_whitelisted_ipv4_addresses**: A list of IPv4 address, port, and protocol tuples that the firewall should allow access to. The default is empty. This is a good way to ensure that certain services can only be reached by approved IP addresses. The following example would grant SSH access to 192.168.0.1 over TCP, and OpenVPN access to 10.0.0.1 over UDP: 18 | 19 | ufw_whitelisted_ipv4_addresses: 20 | - { address: 192.168.0.1, port: 22, protocol: tcp } 21 | - { address: 10.0.0.1, port: 1194, protocol: udp } 22 | 23 | **ufw_whitelisted_ipv6_addresses**: This variable behaves exactly the same as ufw_whitelisted_ipv4_addresses, except it applies to IPv6 addresses. The default is empty. The following example would allow Google's IPv6 address to access the DNS port over UDP, and Facebook's IPv6 address to access the Sphinx port over TCP. Note that it's important to enclose the IPv6 addresses in quotes, otherwise their colons will confuse the parser: 24 | 25 | ufw_whitelisted_ipv6_addresses: 26 | - { address: "2607:f8b0:4004:802::1001", port: 53, protocol: udp } 27 | - { address: "2a03:2880:2110:df07:face:b00c:0:1", port: 9312, protocol: tcp } 28 | 29 | **ufw_whitelisted_ports**: A list of port and protocol pairs that the firewall should allow access to. The default is to open port 22 over TCP. This variable applies to incoming connections from both IPv4 and IPv6 clients. 
If you wanted to allow access to SSH, DNS, and Nginx, you might do something like this: 30 | 31 | ufw_whitelisted_ports: 32 | - { port: 22, protocol: tcp } 33 | - { port: 53, protocol: udp } 34 | - { port: 80, protocol: tcp } 35 | - { port: 443, protocol: tcp } 36 | 37 | License 38 | ------- 39 | 40 | The MIT License (MIT) 41 | 42 | Copyright (c) 2014 Joshua Lund 43 | 44 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: 45 | 46 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 47 | 48 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 49 | 50 | Author Information 51 | ------------------ 52 | 53 | You can find me on [Twitter](https://twitter.com/joshualund), and on [GitHub](https://github.com/jlund/). I also occasionally blog at [MissingM](http://missingm.co). 
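Example Playbook
----------------

A minimal playbook applying this role might look like the following (the `webservers` group name and port list are illustrative, not part of this repository):

    - hosts: webservers
      sudo: yes
      roles:
        - role: ufw
          ufw_whitelisted_ports:
            - { port: 22, protocol: tcp }
            - { port: 80, protocol: tcp }
            - { port: 443, protocol: tcp }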
54 | -------------------------------------------------------------------------------- /roles/ufw/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | ufw_connection_rate_limits: [ ] 3 | ufw_whitelisted_ipv4_addresses: [ ] 4 | ufw_whitelisted_ipv6_addresses: [ ] 5 | ufw_whitelisted_ports: 6 | - { port: 22, protocol: tcp } 7 | - { port: 80, protocol: tcp } 8 | -------------------------------------------------------------------------------- /roles/ufw/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Restart ufw 3 | service: name=ufw state=restarted 4 | sudo: yes 5 | -------------------------------------------------------------------------------- /roles/ufw/meta/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | galaxy_info: 3 | author: "Joshua Lund" 4 | description: "Ansible role that installs and configures ufw, AKA The Uncomplicated Firewall (https://launchpad.net/ufw)" 5 | license: MIT 6 | min_ansible_version: 1.3 7 | platforms: 8 | - name: Ubuntu 9 | versions: 10 | - all 11 | - name: Debian 12 | versions: 13 | - all 14 | categories: 15 | - networking 16 | dependencies: [] 17 | -------------------------------------------------------------------------------- /roles/ufw/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: Generate ufw rules 3 | template: src={{ item }}.j2 4 | dest=/lib/ufw/{{ item }} 5 | owner=root 6 | group=root 7 | mode=0640 8 | with_items: 9 | - user.rules 10 | - user6.rules 11 | notify: Restart ufw 12 | sudo: yes 13 | 14 | - name: Enable ufw 15 | lineinfile: dest=/etc/ufw/ufw.conf 16 | regexp="^ENABLED" 17 | line="ENABLED=yes" 18 | notify: Restart ufw 19 | sudo: yes -------------------------------------------------------------------------------- /roles/ufw/templates/user.rules.j2: 
-------------------------------------------------------------------------------- 1 | *filter 2 | :ufw-user-input - [0:0] 3 | :ufw-user-output - [0:0] 4 | :ufw-user-forward - [0:0] 5 | :ufw-before-logging-input - [0:0] 6 | :ufw-before-logging-output - [0:0] 7 | :ufw-before-logging-forward - [0:0] 8 | :ufw-user-logging-input - [0:0] 9 | :ufw-user-logging-output - [0:0] 10 | :ufw-user-logging-forward - [0:0] 11 | :ufw-after-logging-input - [0:0] 12 | :ufw-after-logging-output - [0:0] 13 | :ufw-after-logging-forward - [0:0] 14 | :ufw-logging-deny - [0:0] 15 | :ufw-logging-allow - [0:0] 16 | :ufw-user-limit - [0:0] 17 | :ufw-user-limit-accept - [0:0] 18 | ### RULES ### 19 | 20 | {% for limit in ufw_connection_rate_limits %} 21 | ### tuple ### limit {{ limit.protocol }} {{ limit.port }} 0.0.0.0/0 any 0.0.0.0/0 in 22 | -A ufw-user-input -p {{ limit.protocol }} --dport {{ limit.port }} -m state --state NEW -m recent --set 23 | -A ufw-user-input -p {{ limit.protocol }} --dport {{ limit.port }} -m state --state NEW -m recent --update --seconds 30 --hitcount 6 -j ufw-user-limit 24 | -A ufw-user-input -p {{ limit.protocol }} --dport {{ limit.port }} -j ufw-user-limit-accept 25 | 26 | {% endfor %} 27 | {% for item in ufw_whitelisted_ports %} 28 | ### tuple ### allow {{ item.protocol }} {{ item.port }} 0.0.0.0/0 any 0.0.0.0/0 in 29 | -A ufw-user-input -p {{ item.protocol }} --dport {{ item.port }} -j ACCEPT 30 | 31 | {% endfor %} 32 | {% for tuple in ufw_whitelisted_ipv4_addresses %} 33 | ### tuple ### allow {{ tuple.protocol }} {{ tuple.port }} 0.0.0.0/0 any {{ tuple.address }} in 34 | -A ufw-user-input -p {{ tuple.protocol }} --dport {{ tuple.port }} -s {{ tuple.address }} -j ACCEPT 35 | 36 | {% endfor %} 37 | ### END RULES ### 38 | 39 | ### LOGGING ### 40 | -A ufw-after-logging-input -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 41 | -A ufw-after-logging-forward -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 42 | -I 
ufw-logging-deny -m state --state INVALID -j RETURN -m limit --limit 3/min --limit-burst 10 43 | -A ufw-logging-deny -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 44 | -A ufw-logging-allow -j LOG --log-prefix "[UFW ALLOW] " -m limit --limit 3/min --limit-burst 10 45 | ### END LOGGING ### 46 | 47 | ### RATE LIMITING ### 48 | -A ufw-user-limit -m limit --limit 3/minute -j LOG --log-prefix "[UFW LIMIT BLOCK] " 49 | -A ufw-user-limit -j REJECT 50 | -A ufw-user-limit-accept -j ACCEPT 51 | ### END RATE LIMITING ### 52 | COMMIT 53 | -------------------------------------------------------------------------------- /roles/ufw/templates/user6.rules.j2: -------------------------------------------------------------------------------- 1 | *filter 2 | :ufw6-user-input - [0:0] 3 | :ufw6-user-output - [0:0] 4 | :ufw6-user-forward - [0:0] 5 | :ufw6-before-logging-input - [0:0] 6 | :ufw6-before-logging-output - [0:0] 7 | :ufw6-before-logging-forward - [0:0] 8 | :ufw6-user-logging-input - [0:0] 9 | :ufw6-user-logging-output - [0:0] 10 | :ufw6-user-logging-forward - [0:0] 11 | :ufw6-after-logging-input - [0:0] 12 | :ufw6-after-logging-output - [0:0] 13 | :ufw6-after-logging-forward - [0:0] 14 | :ufw6-logging-deny - [0:0] 15 | :ufw6-logging-allow - [0:0] 16 | ### RULES ### 17 | 18 | {% for item in ufw_whitelisted_ports %} 19 | ### tuple ### allow {{ item.protocol}} {{ item.port }} ::/0 any ::/0 in 20 | -A ufw6-user-input -p {{ item.protocol }} --dport {{ item.port }} -j ACCEPT 21 | 22 | {% endfor %} 23 | {% for tuple in ufw_whitelisted_ipv6_addresses %} 24 | ### tuple ### allow {{ tuple.protocol }} {{ tuple.port }} ::/0 any {{ tuple.address }} in 25 | -A ufw6-user-input -p {{ tuple.protocol }} --dport {{ tuple.port }} -s {{ tuple.address }} -j ACCEPT 26 | 27 | {% endfor %} 28 | ### END RULES ### 29 | 30 | ### LOGGING ### 31 | -A ufw6-after-logging-input -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 32 | -A 
ufw6-after-logging-forward -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 33 | -I ufw6-logging-deny -m state --state INVALID -j RETURN -m limit --limit 3/min --limit-burst 10 34 | -A ufw6-logging-deny -j LOG --log-prefix "[UFW BLOCK] " -m limit --limit 3/min --limit-burst 10 35 | -A ufw6-logging-allow -j LOG --log-prefix "[UFW ALLOW] " -m limit --limit 3/min --limit-burst 10 36 | ### END LOGGING ### 37 | COMMIT 38 | -------------------------------------------------------------------------------- /support/generate-crypted-password.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python -tt 2 | # skvidal@fedoraproject.org - (c) Red Hat, inc 2013 3 | # gpl v2+ 4 | # utterly trivial so I'm not worried one way or the other 5 | 6 | # This source code adapted from http://skvidal.fedorapeople.org/misc/shapass 7 | # Enhancements (c) Jesse Buchanan 2014 - GPL v2+ 8 | 9 | # Purpose: Generates a secure, crypted password for use in /etc/shadow. 10 | # Usage: Execute without arguments. 11 | 12 | import string 13 | import getpass 14 | import sys 15 | 16 | try: 17 | from passlib.hosts import linux_context 18 | except ImportError, e: 19 | print "Couldn't import passlib. Try: pip install passlib" 20 | sys.exit(1) 21 | 22 | match = False 23 | while not match: 24 | input = getpass.getpass() 25 | input2 = getpass.getpass(prompt="Re-enter Password: ") 26 | if input == input2: 27 | match = True 28 | else: 29 | print 'Passwords do not match, try again!' 30 | 31 | print linux_context.encrypt(input) 32 | -------------------------------------------------------------------------------- /vars/defaults.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Unix user name to deploy as 3 | deploy: geoffrey 4 | 5 | # Crypted password for 'deploy' Unix user. 
6 | # To generate, use generate-crypted-password.py 7 | password: $6$rounds=100000$IjJ.NeuYgJ/c.D7c$ZV8ANRn81G85GNlAs5/1iTgneQKYuGPqG0UlTx4VR9E/mPRjdBYG74JijrDuC3Qi8H5AyGJ9bdG4e/fffFyLR0 8 | 9 | # Ruby version to install 10 | ruby_version: 2.1.5 11 | rbenv_makeopts: "" 12 | 13 | # Papertrail logging 14 | # e.g. @logs.papertrailapp.com:1234 15 | papertrail_log_dest: "@logs.papertrailapp.com:46606" 16 | -------------------------------------------------------------------------------- /vars/digitalocean.yml: -------------------------------------------------------------------------------- 1 | # default_key is the numeric ID of an SSH key attached to your DigitalOcean account. 2 | # (It is a DigitalOcean-internal identifier, NOT the SSH key itself.) 3 | 4 | # To manage these DigitalOcean magic numbers you must install the `tugboat` gem. 5 | # This will make the API calls using your key. 6 | 7 | # gem install tugboat 8 | # tugboat authorize 9 | 10 | # Enter your client ID and API key. 11 | # You can accept all other defaults. 
12 | # Then, do: 13 | 14 | # tugboat keys 15 | 16 | default_key: 93562 17 | 18 | droplet_hostname: greenfield.416.bike # used as DigitalOcean machine name, PTR record, and DNS entry 19 | droplet_region: 4 # tugboat regions -> "New York 2 (id: 4)" 20 | droplet_size: 63 # tugboat sizes -> "1GB (id: 63)" 21 | droplet_image: 6510539 # tugboat images --global -> "Ubuntu 14.04 x64" 22 | 23 | -------------------------------------------------------------------------------- /vars/sampleapp.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # Used by nginx for the vhost of the server 3 | server_name: dev.416.bike 4 | 5 | # Used as nginx site name, rails directory, among other things 6 | full_app_name: sampleapp_production 7 | 8 | # Postgresql database settings 9 | 10 | database_host: localhost 11 | database_port: 5432 12 | database_name: sampleapp_production 13 | 14 | database_user: sampleapp 15 | database_password: "{{ lookup('password', inventory_dir + '/credentials/' + database_user + '.postgresql.txt length=20') }}" 16 | 17 | # Password is generated on the fly, and later injected into database.yml. 18 | # See ./credentials/ for previously generated passwords. 19 | # This directory is ignored by Git. 20 | --------------------------------------------------------------------------------
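The `password` value in `vars/defaults.yml` above is a SHA-512 crypt hash (the `$6$` scheme from crypt(3), the same format written to `/etc/shadow`, which is what `support/generate-crypted-password.py` produces via passlib). A minimal sketch of how such a hash breaks down into its fields — the `parse_crypt_hash` helper is illustrative only and not part of this repo:

```python
def parse_crypt_hash(crypted):
    """Split a crypt(3)-style hash, as stored in /etc/shadow, into its fields.

    Format: $<scheme>$[rounds=<n>$]<salt>$<digest>
    Scheme "6" means SHA-512 crypt.
    """
    fields = crypted.lstrip("$").split("$")
    parsed = {"scheme": fields[0]}
    rest = fields[1:]
    # The rounds field is optional; when absent, the scheme's default applies.
    if rest and rest[0].startswith("rounds="):
        parsed["rounds"] = int(rest[0].split("=", 1)[1])
        rest = rest[1:]
    parsed["salt"], parsed["digest"] = rest[0], rest[1]
    return parsed

# The example hash from vars/defaults.yml:
example = ("$6$rounds=100000$IjJ.NeuYgJ/c.D7c$"
           "ZV8ANRn81G85GNlAs5/1iTgneQKYuGPqG0UlTx4VR9E/"
           "mPRjdBYG74JijrDuC3Qi8H5AyGJ9bdG4e/fffFyLR0")
print(parse_crypt_hash(example)["rounds"])  # 100000
```

Running this against the example hash yields scheme `6`, 100000 rounds, and salt `IjJ.NeuYgJ/c.D7c`; only the digest field is secret-derived, which is why the full hash is safe to commit while the plaintext is not.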