├── .gitignore ├── Book.txt ├── README.md ├── Sample.txt ├── Subset.txt ├── afterword.txt ├── appendix-a.txt ├── appendix-b.txt ├── backmatter.txt ├── changelog.txt ├── chapter1.txt ├── chapter10.txt ├── chapter11.txt ├── chapter12.txt ├── chapter13.txt ├── chapter14.txt ├── chapter15.txt ├── chapter16.txt ├── chapter2.txt ├── chapter3.txt ├── chapter4.txt ├── chapter5.txt ├── chapter6.txt ├── chapter7.txt ├── chapter8.txt ├── chapter9.txt ├── foreword.txt ├── frontmatter.txt ├── images ├── 1-basic-vagrant-application.png ├── 10-deploy-haproxy.png ├── 10-multi-server-deployment-cloud.png ├── 10-multi-server-deployment-lb.png ├── 10-rails-app-fresh.png ├── 10-rails-app-new-version.png ├── 10-rails-app-with-articles.png ├── 12-awx-dashboard.png ├── 12-awx-job-complete.png ├── 12-jenkins-job-console-output.png ├── 13-github-actions-ci-badge.png ├── 13-github-actions-ci-workflow.png ├── 13-molecule-logo.png ├── 13-testing-spectrum.png ├── 14-https-nginx-proxy-502-bad-gateway.png ├── 14-https-nginx-proxy-test.png ├── 14-https-test-chrome.png ├── 14-letsencrypt-valid-certificate.png ├── 15-docker-success.png ├── 15-flask-docker-stack.png ├── 16-kubernetes-helm-phpmyadmin.png ├── 16-kubernetes-logo.png ├── 16-kubernetes-nginx-welcome.png ├── 16-kubernetes-simple-cluster-architecture.png ├── 4-nodejs-home.png ├── 4-playbook-drupal-home.png ├── 4-playbook-drupal.png ├── 4-playbook-nodejs.png ├── 4-playbook-solr-admin.png ├── 4-playbook-solr.png ├── 7-ansible-repo-backlog-growth.png ├── 8-server-checkin-infrastructure.png ├── 9-elk-kibana-default.png ├── 9-elk-kibana-example.png ├── 9-elk-kibana-logstash-dashboard.png ├── 9-glusterfs-architecture.png ├── 9-ha-infrastructure-aws.png ├── 9-ha-infrastructure-digitalocean.png ├── 9-ha-infrastructure-success.png ├── 9-highly-available-infrastructure.png ├── 9-logstash-forwarding-ab-load.png ├── 9-logstash-forwarding-nginx.png ├── by-sa.png └── title_page.jpg ├── introduction.txt ├── mainmatter.txt ├── notes.txt ├── other_files ├── Ansible Logo │ ├── Ansible Logo - Black.png │ └── Ansible Logo - White.png └── Illustrations │ ├── 4 - Application Stack - Drupal.ai │ ├── 4 - Application Stack - Nodejs.ai │ ├── 4 - Application Stack - Solr.ai │ ├── 8 - Flask app - Docker.ai │ ├── apache.eps │ ├── centos.png │ ├── druplicon.eps │ ├── nodejs.eps │ ├── npm.png │ └── php.png ├── preface.txt └── wordcount-history.bash /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | -------------------------------------------------------------------------------- /Book.txt: -------------------------------------------------------------------------------- 1 | frontmatter.txt 2 | foreword.txt 3 | preface.txt 4 | introduction.txt 5 | mainmatter.txt 6 | chapter1.txt 7 | chapter2.txt 8 | chapter3.txt 9 | chapter4.txt 10 | chapter5.txt 11 | chapter6.txt 12 | chapter7.txt 13 | chapter8.txt 14 | chapter9.txt 15 | chapter10.txt 16 | chapter11.txt 17 | chapter12.txt 18 | chapter13.txt 19 | chapter14.txt 20 | chapter15.txt 21 | afterword.txt 22 | backmatter.txt 23 | appendix-a.txt 24 | appendix-b.txt 25 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ansible for DevOps - Manuscript 2 | 3 | [![Ansible for DevOps Cover](https://s3.amazonaws.com/titlepages.leanpub.com/ansible-for-devops/medium)](https://www.ansiblefordevops.com/) 4 | 5 | Buy [Ansible for DevOps](https://www.ansiblefordevops.com/) for 
your e-reader or in paperback format. 6 | 7 | ## About this Repository 8 | 9 | This repository contains the entire manuscript of Ansible for DevOps, a best-selling book about the popular infrastructure automation tool Ansible. 10 | 11 | The book was originally written in 2014, and is available on multiple platforms either as an ebook or in paperback. 12 | 13 | ## Contributing 14 | 15 | For any issues with the book's text, code, or examples, please see the [issue tracker](https://github.com/geerlingguy/ansible-for-devops/issues) in the book's official code repository: [Ansible for DevOps Examples](https://github.com/geerlingguy/ansible-for-devops). 16 | 17 | If you wish to submit any corrections or bugfixes, please feel free to submit a pull request. 18 | 19 | If you wish to make any larger structural changes, please open a discussion first to talk about the change before you put a lot of work and effort into a pull request. 20 | 21 | ## License 22 | 23 | CC BY-SA
24 | Creative Commons CC BY-SA 4.0 25 | 26 | You can also grab a free copy of the published work on LeanPub using this coupon link: [https://leanpub.com/ansible-for-devops/c/CTVMPCbEeXd3](https://leanpub.com/ansible-for-devops/c/CTVMPCbEeXd3). 27 | 28 | ## Author 29 | 30 | [Jeff Geerling](https://www.jeffgeerling.com). 31 | -------------------------------------------------------------------------------- /Sample.txt: -------------------------------------------------------------------------------- 1 | foreword.txt 2 | preface.txt 3 | introduction.txt 4 | chapter1.txt 5 | chapter2.txt 6 | -------------------------------------------------------------------------------- /Subset.txt: -------------------------------------------------------------------------------- 1 | chapter3.txt -------------------------------------------------------------------------------- /afterword.txt: -------------------------------------------------------------------------------- 1 | # Afterword 2 | 3 | You should be well on your way towards streamlined infrastructure management. Many developers and sysadmins have been helped by this book, and many have even gone further and contributed _back_ to the book, in the form of corrections, suggestions, and fruitful discussion! 4 | 5 | Thanks to you for purchasing and reading this book, and a special thanks to all those who have given direct feedback in the form of corrections, PRs, or suggestions for improvement: 6 | 7 | @LeeVanSteerthem, Jonathan Nakatsui, Joel Shprentz, Hugo Posca, Jon Forrest, Rohit Bhute, George Boobyer (@ibluebag), Jason Baker (@Alchemister5), Jonathan Le (@jonathanhle), Barry McClendon, Nestor Feliciano, @dan_bohea, @lekum, @queue_tip_, @wimvandijck, André, @39digits, @aazon, Ned Schumann, @andypost, @michel_slm, @erimar77, @geoand, Larry B, Tim Gerla, @b_borysenko, Stephen H, @chesterbr, @mrjester888, @gkedge, @opratr, @briants5, @atweb, @devtux_at, @sillygwailo, Anthony R, @arbabnazar, Leroy H, David, Joel S, Stephen W, Paul M, Adrian, @daniloradenovic, @e1nh4nd3r, @daniel, @guntbert, @rdonkin, @charleshepner, /u/levelupirl, @tychay, @williamt, @wurzeldub, @santisaez, @jonleibowitz, @mattjmcnaughton, @cwardgar, @rschmidtz, @scarroy, Ben K, @codeyy, @Gogoswitch, bngsudheer, @vtraida, @everett-toews, Germain G, vinceskahan, @vaygr, bryankennedy, i-zu, jdavid5815, krystan, nkabir, dglinder, ck05, and scottdavis99! 8 | -------------------------------------------------------------------------------- /appendix-a.txt: -------------------------------------------------------------------------------- 1 | # Appendix A - Using Ansible on Windows workstations {#appendix-a} 2 | 3 | Ansible works primarily over the SSH protocol, which is supported natively by most every server, workstation, and operating system on the planet, with one exception---Microsoft's venerable Windows OS (though this may change in the coming years). 4 | 5 | To use SSH on Windows, you need additional software. But Ansible also requires other utilities and subsystems only present on Linux or other UNIX-like operating systems. This poses a problem for many system administrators who are either forced to use or have chosen to use Windows as their primary OS. 6 | 7 | This appendix will guide Windows users through the author's preferred method of using Ansible on a Windows workstation. 8 | 9 | I> Ansible can manage Windows hosts (see Ansible's [Windows Guides](https://docs.ansible.com/ansible/latest/user_guide/windows.html) documentation), but doesn't run within Windows natively. 
You still need to follow the instructions here to run Ansible itself on a Windows host. 10 | 11 | ## Method 1 - Use the Windows Subsystem for Linux 12 | 13 | If you are running Windows 10 and have upgraded to the latest version, you can install the Windows Subsystem for Linux (WSL), which is the most seamless Linux integration you can currently get in Windows. 14 | 15 | The WSL downloads a Linux distribution and places it in a special privileged VM layer that's as transparent as it can be while sandboxed from the general Windows environment. Using WSL, you can open up a Linux prompt and have access to almost all the same software and functionality you would have if you were running Linux natively! 16 | 17 | Microsoft has the most up-to-date [installation guide](https://docs.microsoft.com/en-us/windows/wsl/install-win10) on their Developer Network site, but the installation process is straightforward: 18 | 19 | 1. Open a PowerShell prompt as an administrator and run the command: 20 | 21 | {lang="text",linenos=off} 22 | ~~~ 23 | dism.exe /online /enable-feature \ 24 | /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 25 | ~~~ 26 | 27 | 2. Restart your computer when prompted. 28 | 29 | At this point, the WSL is installed, but you haven't installed a Linux environment. To install Linux, follow Microsoft's guide to [download and install a Linux distribution](https://docs.microsoft.com/en-us/windows/wsl/install-win10#install-your-linux-distribution-of-choice). For our purposes, I recommend the latest Ubuntu LTS release. 30 | 31 | Once installation completes, there will be a shortcut either on your Desktop or in the Start menu, and you can either use this shortcut to open a Terminal session, or you can type `bash` in a Command prompt. 32 | 33 | Now that you have Linux running inside Windows, you can install Ansible inside the WSL environment just like you would if you were running Linux natively! 34 | 35 | ### Installing Ansible inside WSL 36 | 37 | Before installing Ansible, make sure your package list is up to date by updating apt-get: 38 | 39 | {lang="text",linenos=off} 40 | ``` 41 | $ sudo apt-get update 42 | ``` 43 | 44 | The easiest way to install Ansible is to use `pip3`, a package manager for Python. Python should already be installed on the system, but `pip3` may not be, so let's install it, along with Python's development header files (which are in the `python3-dev` package). 45 | 46 | {lang="text",linenos=off} 47 | ``` 48 | $ sudo apt-get install -y python3-pip python3-dev 49 | ``` 50 | 51 | After the installation is complete, install Ansible: 52 | 53 | {lang="text",linenos=off} 54 | ``` 55 | $ pip3 install ansible 56 | ``` 57 | 58 | After Ansible and all its dependencies are downloaded and installed, make sure Ansible is running and working: 59 | 60 | {lang="text",linenos=off} 61 | ``` 62 | $ ansible --version 63 | ansible [core 2.14.6] 64 | ... 65 | python version = 3.10.11 66 | jinja version = 3.1.2 67 | libyaml = True 68 | ``` 69 | 70 | T> Upgrading Ansible is also easy with pip: Run `pip3 install --upgrade ansible` to get the latest version. 71 | 72 | You can now use Ansible within the Ubuntu Bash environment. You can access files on the Windows filesystem inside the `/mnt` folder (`/mnt/c` corresponds to `C:\`), but be careful when moving things between Windows and the WSL, as strange things can happen because of line-ending, permission, and filesystem differences!
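One way to avoid most of those issues is to keep your Ansible projects in the Linux filesystem (e.g. somewhere under your home directory) rather than working on them directly under `/mnt`. As a minimal sketch---the project paths here are hypothetical, and `dos2unix` is just one of several tools that can do the conversion---you could copy a project into WSL and normalize any stray Windows-style (CRLF) line endings:

{lang="text",linenos=off}
```
# Copy a project from the Windows filesystem into the WSL filesystem.
$ cp -r /mnt/c/Users/[username]/Projects/my-playbook ~/my-playbook
$ cd ~/my-playbook

# Convert any CRLF line endings to Unix-style LF endings.
$ sudo apt-get install -y dos2unix
$ find . -type f -name '*.yml' -exec dos2unix {} \;
```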
73 | 74 | W> Many of the examples in this book use Vagrant, Docker, or Kubernetes command line utilities that sometimes behave differently when run under the Windows Subsystem for Linux. Please follow [issue 291 in this book's repository](https://github.com/geerlingguy/ansible-for-devops/issues/291) for the latest updates, as I am trying to make sure all examples can be run on Windows just as easily as on macOS or Linux. 75 | 76 | ## Method 2 - When WSL is not an option 77 | 78 | If you're running Windows 7 or 8, or for some reason can't install or use the Windows Subsystem for Linux in Windows 10 or later, then the best alternative is to build a local Virtual Machine (VM) and install and use Ansible inside. 79 | 80 | ### Prerequisites 81 | 82 | The easiest way to build a VM is to download and install Vagrant and VirtualBox (both 100% free!), and then use Vagrant to install Linux, and PuTTY to connect and use Ansible. Here are the links to download these applications: 83 | 84 | 1. [Vagrant](http://www.vagrantup.com/downloads.html) 85 | 2. [VirtualBox](https://www.virtualbox.org/) 86 | 3. [PuTTY](http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html) 87 | 88 | Once you've installed all three applications, you can use either the command prompt (`cmd`), Windows PowerShell, or a Linux terminal emulator like Cygwin to boot up a basic Linux VM with Vagrant (if you use Cygwin, which is not covered here, you could install its SSH component and use it for SSH, and avoid using PuTTY). 89 | 90 | ### Set up an Ubuntu Linux Virtual Machine 91 | 92 | Open PowerShell (open the Start Menu or go to the Windows home and type in 'PowerShell'), and change directory to a place where you will store some metadata about the virtual machine you're about to boot. I like having a 'VMs' folder in my home directory to contain all my virtual machines: 93 | 94 | {lang="text",linenos=off} 95 | ``` 96 | # Change directory to your user directory. 97 | PS > cd C:/Users/[username] 98 | # Make a 'VMs' directory and cd to it. 99 | PS > md -Name VMs 100 | PS > cd VMs 101 | # Make a 'Ubuntu64' directory and cd to it. 102 | PS > md -Name ubuntu-bionic-64 103 | PS > cd ubuntu-bionic-64 104 | ``` 105 | 106 | Now, use `vagrant` to create the scaffolding for our new virtual machine: 107 | 108 | {lang="text",linenos=off} 109 | ``` 110 | PS > vagrant init ubuntu/bionic64 111 | ``` 112 | 113 | Vagrant creates a 'Vagrantfile' describing a basic Ubuntu 64-bit virtual machine in the current directory, and is now ready for you to run `vagrant up` to download and build the machine. Run `vagrant up`, and wait for the box to be downloaded and installed: 114 | 115 | {lang="text",linenos=off} 116 | ``` 117 | PS > vagrant up 118 | ``` 119 | 120 | After a few minutes, the box will be downloaded and a new virtual machine set up inside VirtualBox. Vagrant will boot and configure the machine according to the defaults defined in the Vagrantfile. Once the VM is booted and you're back at the command prompt, it's time to log into the VM. 121 | 122 | ### Log into the Virtual Machine 123 | 124 | Use `vagrant ssh-config` to grab the SSH connection details, which you will then enter into PuTTY to connect to the VM. 
125 | 126 | {lang="text",linenos=off} 127 | ``` 128 | PS > vagrant ssh-config 129 | ``` 130 | 131 | It should show something like: 132 | 133 | {lang="text",linenos=off} 134 | ``` 135 | Host default 136 | Hostname 127.0.0.1 137 | User vagrant 138 | Port 2222 139 | UserKnownHostsFile /dev/null 140 | StrictHostKeyChecking no 141 | PasswordAuthentication no 142 | IdentityFile C:/Users/[username]/.vagrant.d/insecure_private_key 143 | IdentitiesOnly yes 144 | LogLevel FATAL 145 | ``` 146 | 147 | The lines we're interested in are the Hostname, User, Port, and IdentityFile. 148 | 149 | Launch PuTTY, and enter the connection details: 150 | 151 | - **Host Name (or IP address)**: 127.0.0.1 152 | - **Port**: 2222 153 | 154 | Click Open to connect, and if you receive a Security Alert concerning the server's host key, click 'Yes' to tell PuTTY to trust the host. You can save the connection details by entering a name in the 'Saved Sessions' field and clicking 'Save'. 155 | 156 | PuTTY will ask for login credentials; we'll use the default login for a Vagrant box (`vagrant` for both the username and password): 157 | 158 | {lang="text",linenos=off} 159 | ``` 160 | login as: vagrant 161 | vagrant@127.0.0.1's password: vagrant 162 | ``` 163 | 164 | You should now be connected to the virtual machine, and see the message of the day: 165 | 166 | {lang="text",linenos=off} 167 | ``` 168 | Welcome to Ubuntu 18.04.1 LTS (GNU/Linux 4.15.0-43-generic x86_64) 169 | ... 170 | vagrant@ubuntu-bionic:~$ 171 | ``` 172 | 173 | If you see this prompt, you're logged in, and you can start administering the VM. The next (and final) step is to install Ansible. 174 | 175 | T> This example uses PuTTY to log into the VM, but other applications like [Cygwin](http://cygwin.com/install.html) or [Git for Windows](http://git-scm.com/download/win) work just as well, and may be easier to use. Since these alternatives have built-in SSH support, you don't need to do any extra connection configuration, or even launch the apps manually; just `cd` to the same location as the Vagrantfile, and enter `vagrant ssh`! 176 | 177 | ### Install Ansible 178 | 179 | Before installing Ansible, make sure your package list is up to date by updating apt-get: 180 | 181 | {lang="text",linenos=off} 182 | ``` 183 | $ sudo apt-get update 184 | ``` 185 | 186 | The easiest way to install Ansible is to use `pip3`, a package manager for Python. Python should already be installed on the system, but `pip3` may not be, so let's install it, along with Python's development header files (which are in the `python3-dev` package). 187 | 188 | {lang="text",linenos=off} 189 | ``` 190 | $ sudo apt-get install -y python3-pip python3-dev 191 | ``` 192 | 193 | After the installation is complete, install Ansible: 194 | 195 | {lang="text",linenos=off} 196 | ``` 197 | $ pip3 install ansible 198 | ``` 199 | 200 | After Ansible and all its dependencies are downloaded and installed, make sure Ansible is running and working: 201 | 202 | {lang="text",linenos=off} 203 | ``` 204 | $ ansible --version 205 | ansible [core 2.14.6] 206 | ... 207 | python version = 3.10.11 208 | jinja version = 3.1.2 209 | libyaml = True 210 | ``` 211 | 212 | T> Upgrading Ansible is also easy with pip: Run `pip3 install --upgrade ansible` to get the latest version. 213 | 214 | You should now have Ansible installed within a virtual machine running on your Windows workstation.
You can control the virtual machine with Vagrant (`cd` to the location of the Vagrantfile), using `up` to boot or wake the VM, `halt` to shut down the VM, or `suspend` to sleep the VM. Log into the VM manually using PuTTY or via `vagrant ssh` with Cygwin or Git's Windows shell. 215 | 216 | Use Ansible from within the virtual machine just as you would on a Linux or Mac workstation directly. If you need to share files between your Windows environment and the VM, Vagrant conveniently maps `/vagrant` on the VM to the same folder where your Vagrantfile is located. You can also connect between the two via other methods (SSH, SMB, SFTP, etc.) if you so desire. 217 | 218 | ## Summary 219 | 220 | There are ways to 'hack' Ansible into running natively within Windows (without a Linux VM), but I recommend either using the WSL or running everything within a Linux VM, as performance will be better and the number of environment-related problems you encounter will be greatly reduced! 221 | -------------------------------------------------------------------------------- /appendix-b.txt: -------------------------------------------------------------------------------- 1 | # Appendix B - Ansible Best Practices and Conventions {#appendix-b} 2 | 3 | Ansible's flexibility allows for a variety of organization methods and configuration syntaxes. You may have many tasks in one main file, or a few tasks in many files. You might prefer defining variables in group variable files, host variable files, inventories, or elsewhere, or you might try to find ways of avoiding variables in inventories altogether. 4 | 5 | There are few *universal* best practices in Ansible, but this appendix contains helpful suggestions for organizing playbooks, writing tasks, using roles, and otherwise building infrastructure with Ansible. 6 | 7 | In addition to this appendix (which contains mostly observations from the author's own daily use of Ansible), please read through the official [Ansible Best Practices](http://docs.ansible.com/playbooks_best_practices.html) guide, which contains a wealth of hard-earned knowledge. 8 | 9 | ## Playbook Organization 10 | 11 | Playbooks are Ansible's bread and butter, so it's important to organize them in a logical manner for easier debugging and maintenance. 12 | 13 | ### Write comments and use `name` liberally 14 | 15 | Many tasks you write will be fairly obvious when you write them, but less so six months later when you are making changes. Just like application code, Ansible playbooks should be documented so you spend less time familiarizing yourself with what a particular task is supposed to do, and more time fixing problems or extending your playbooks. 16 | 17 | In YAML, write comments by starting a line with a hash (`#`). If the comment spans multiple lines, start each line with `#`. 18 | 19 | It's also a good idea to use a `name` for every task you write, except for the most trivial. If you're using the `git` module to check out a specific tag, use a `name` to indicate what repository you're using, why a tag instead of a commit hash, etc. This way, whenever your playbook is run, you'll see the comment you wrote and be assured of what's going on. 20 | 21 | {lang="yaml",linenos=off} 22 | ``` 23 | - hosts: all 24 | 25 | tasks: 26 | 27 | # This task takes up to five minutes and is required so we will 28 | # have access to the images used in our application. 29 | - name: Copy the entire file repository to the application. 30 | copy: 31 | src: [...]
32 | ``` 33 | 34 | This advice assumes your comments actually indicate what's happening in your playbooks! I use full sentences with a period for all comments and `name`s, but it's okay to use a slightly different style. Just be consistent, and remember, *bad comments are worse than no comments at all*. 35 | 36 | ### Include related variables and tasks 37 | 38 | If you find yourself writing a playbook over 50-100 lines and configuring three or four different applications or services, it may help to separate each group of tasks into a separate file, and use `import_tasks` or `include_tasks` to place them in a playbook (see Chapter 6 for details about when to use which syntax). 39 | 40 | Additionally, variables are usually better left in their own file and included using `vars_files` rather than defined inline with a playbook. 41 | 42 | {lang="yaml",linenos=off} 43 | ``` 44 | - hosts: all 45 | 46 | vars_files: 47 | - vars/main.yml 48 | 49 | handlers: 50 | - import_tasks: handlers/handlers.yml 51 | 52 | tasks: 53 | - import_tasks: tasks/init.yml 54 | - import_tasks: tasks/database.yml 55 | - import_tasks: tasks/app.yml 56 | ``` 57 | 58 | Using a more hierarchical model like this allows you to see what your playbook is doing at a higher level, and also lets you manage each portion of a configuration or deployment separately. I generally split tasks into separate files once I reach 15-20 tasks in a given file. 59 | 60 | ### Use Roles to bundle logical groupings of configuration 61 | 62 | Along the same lines as using included files to better organize your playbooks and separate bits of configuration logically, Ansible roles supercharge your ability to manage infrastructure well. 63 | 64 | Using loosely-coupled roles to configure individual components of your servers (like databases, application deployments, the networking stack, monitoring packages, etc.) allows you to write configuration once, and use it on all your servers, regardless of their role. 65 | 66 | You'll probably configure something like NTP (Network Time Protocol) on every single server you manage, or at a minimum, set a timezone for the server. Instead of adding two or three tasks to every playbook you manage, set up a role (maybe call it `time` or `ntp`) to do this configuration, and use a few variables to allow different groups of servers to have customized settings. 67 | 68 | Additionally, if you learn to build robust and generic roles, you could share them on Ansible Galaxy so others use them and help you make them even better! 69 | 70 | ### Use role defaults and vars correctly 71 | 72 | Set all role default variables likely to be overridden inside `defaults/main.yml`, and set variables likely never to be overridden in `vars/main.yml`. 73 | 74 | If you have a variable that needs to be overridden, but you need to include it in a platform-specific vars file (e.g. one vars file for Debian, one for RHEL), then create the variable in `vars/[file].yml` as `__varname`, and use `set_fact` to set the variable at runtime if the variable `varname` is not defined. This way playbooks using your role can still override one of these variables. 
75 | 76 | For example, if you need to have a variable like `package_config_path` that defaults to one value on Debian, and another on RHEL, but may need to be overridden from time to time, you can create two files, `vars/Debian.yml` and `vars/RedHat.yml`, with the contents: 77 | 78 | {lang="yaml",linenos=off} 79 | ``` 80 | --- 81 | # Inside vars/Debian.yml 82 | __package_config_path: /etc/package/package.conf 83 | ``` 84 | 85 | {lang="yaml",linenos=off} 86 | ``` 87 | --- 88 | # Inside vars/RedHat.yml 89 | __package_config_path: /etc/package/configfile 90 | ``` 91 | 92 | Then, in the playbook using the variable, include the platform-specific vars file and define the final `package_config_path` variable at runtime: 93 | 94 | {lang="yaml",linenos=off} 95 | ``` 96 | --- 97 | # Include variables and define needed variables. 98 | - name: Include OS-specific variables. 99 | include_vars: "{{ ansible_os_family }}.yml" 100 | 101 | - name: Define package_config_path. 102 | set_fact: 103 | package_config_path: "{{ __package_config_path }}" 104 | when: package_config_path is not defined 105 | ``` 106 | 107 | This way, any playbook using the role can override the platform-specific defaults by defining `package_config_path` in its own variables. 108 | 109 | ## YAML Conventions and Best Practices {#yaml-best-practices} 110 | 111 | YAML is a human-readable, machine-parseable syntax that allows for almost any list, map, or array structure to be described using a few basic conventions, so it's a great fit for configuration management. Consider the following method of defining a list (or 'collection') of widgets: 112 | 113 | {lang="yaml",linenos=off} 114 | ``` 115 | widget: 116 | - foo 117 | - bar 118 | - fizz 119 | ``` 120 | 121 | This would translate into Python (using the `PyYAML` library employed by Ansible) as the following: 122 | 123 | {lang="python",linenos=off} 124 | ``` 125 | translated_yaml = {'widget': ['foo', 'bar', 'fizz']} 126 | ``` 127 | 128 | And what about a structured list/map in YAML? 129 | 130 | {lang="yaml",linenos=off} 131 | ``` 132 | widget: 133 | foo: 12 134 | bar: 13 135 | ``` 136 | 137 | The resulting Python: 138 | 139 | {lang="python",linenos=off} 140 | ``` 141 | translated_yaml = {'widget': {'foo': 12, 'bar': 13}} 142 | ``` 143 | 144 | A few things to note with both of the above examples: 145 | 146 | - YAML will try to determine the type of an item automatically. So `foo` in the first example would be translated as a string, `true` or `false` would be a boolean, and `123` would be an integer. Read the official documentation for further insight, but for our purposes, declaring strings with quotes (`''` or `""`) will minimize surprises. 147 | - Whitespace matters! YAML uses spaces (literal space characters---*not* tabs) to define structure (mappings, array lists, etc.), so set your editor to use spaces for tabs. You can use either a tab or a space to delimit parameters (like `apt: name=foo state=present`---either a tab or a space between parameters), but it's preferred to use spaces everywhere, to minimize errors and display irregularities across editors and platforms. 148 | - YAML syntax is robust and well-documented. Read through the official [YAML Specification](http://www.yaml.org/spec/1.2/spec.html) and/or the [PyYAML Documentation](http://pyyaml.org/wiki/PyYAMLDocumentation) to dig deeper. 149 | 150 | ### YAML for Ansible tasks 151 | 152 | Consider the following task: 153 | 154 | {lang="yaml",linenos=off} 155 | ``` 156 | - name: Install foo.
157 | apt: name=foo state=present 158 | ``` 159 | 160 | All well and good, right? Well, as you get deeper into Ansible and start defining more complex configuration, you might start seeing tasks like the following: 161 | 162 | {lang="yaml",linenos=off} 163 | ``` 164 | - name: Copy Phergie shell script into place. 165 | template: src=templates/phergie.sh.j2 dest=/opt/phergie.sh \ 166 | owner={{ phergie_user }} group={{ phergie_user }} mode=755 167 | ``` 168 | 169 | The one-line syntax (which uses Ansible-specific `key=value` shorthand for defining parameters) has some positive attributes: 170 | 171 | - Simpler tasks (like installations and copies) are compact and readable. `apt: name=apache2 state=present` and `apt-get install -y apache2` are similarly concise; in this way, an Ansible playbook feels very much like a shell script. 172 | - Playbooks are more compact, and more configuration can be displayed on one screen. 173 | 174 | However, as highlighted in the above example, there are a few issues with this `key=value` syntax: 175 | 176 | - Smaller monitors, terminal windows, and source control applications will either wrap or hide part of the task line. 177 | - Diff viewers and source control systems generally don't highlight intra-line differences as well as full line changes. 178 | - Variables and parameters are converted to strings, which may or may not be desired. 179 | 180 | Ansible's shorthand syntax is troublesome for complicated playbooks and roles, but luckily there are other ways to write tasks which are better for narrower displays, version control software, and diffing. 181 | 182 | ### Three ways to format Ansible tasks 183 | 184 | The following methods are most often used to define Ansible tasks in playbooks: 185 | 186 | #### Shorthand/one-line (`key=value`) 187 | 188 | Ansible's shorthand syntax uses `key=value` parameters after the name of a module as a key: 189 | 190 | {lang="yaml",linenos=off} 191 | ``` 192 | - name: Install Nginx. 193 | dnf: name=nginx state=present 194 | ``` 195 | 196 | For any situation where an equivalent shell command would roughly match what I'm writing in the YAML, I prefer this method, since it's immediately obvious what's happening, and it's highly unlikely any of the parameters (like `state=present`) will change frequently during development. 197 | 198 | Ansible's official documentation sometimes uses this syntax for shorter examples, and it also translates easily to ad-hoc commands. 199 | 200 | #### Structured map/multi-line (`key:value`) 201 | 202 | Define a structured map of parameters (using `key: value`, with each parameter on its own line) for a task: 203 | 204 | {lang="yaml",linenos=off} 205 | ``` 206 | - name: Copy Phergie shell script into place. 207 | template: 208 | src: "templates/phergie.sh.j2" 209 | dest: "/home/{{ phergie_user }}/phergie.sh" 210 | owner: "{{ phergie_user }}" 211 | group: "{{ phergie_user }}" 212 | mode: 0755 213 | ``` 214 | 215 | A few notes on this syntax: 216 | 217 | - The structure is all valid YAML, and functions similarly to Ansible's shorthand syntax. 218 | - Strings, booleans, integers, octals, etc. are all preserved (instead of being converted to strings; see the example following this list). 219 | - Each parameter *must* be on its own line; multiple variables can't be chained together (e.g. `mode: 0755, owner: root, user: root`) to save space. 220 | - YAML syntax highlighting works better for this format than `key=value`, since each key will be highlighted, and values will be displayed as constants, strings, etc.
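As a short aside on that type preservation, file modes are a common pitfall: in YAML, an unquoted `0755` is parsed as an octal integer, while `"0755"` stays a string. This sketch is just a hypothetical task (not one of the book's examples), but quoting the mode is the safer habit, since it behaves identically across YAML parsers:

{lang="yaml",linenos=off}
```
- name: Touch a file with an explicit mode.
  file:
    path: /tmp/example
    state: touch
    mode: "0755"  # Quoted, so it is always the literal string '0755'.
```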
221 | 222 | #### Folded scalars/multi-line (`>`) 223 | 224 | Use the `>` character to break up Ansible's shorthand `key=value` syntax over multiple lines. 225 | 226 | {lang="yaml",linenos=off} 227 | ``` 228 | - name: Copy Phergie shell script into place. 229 | template: > 230 | src=templates/phergie.sh.j2 231 | dest=/home/{{ phergie_user }}/phergie.sh 232 | owner={{ phergie_user }} group={{ phergie_user }} mode=755 233 | ``` 234 | 235 | In YAML, the `>` character denotes a *folded scalar*, where every line that follows (as long as it's indented further than the line with the `>`) will be joined with the line above by a space. So the above YAML and the earlier `template` example will function exactly the same. 236 | 237 | This syntax allows arbitrary splitting of lines on parameters, but it does not preserve value types (`0775` would be converted to a string, for example). 238 | 239 | While this syntax is often seen in the wild, I don't recommend it except for certain situations, like tasks using the `command` and `shell` modules with extra options: 240 | 241 | {lang="yaml",linenos=off} 242 | ``` 243 | - name: Install Drupal. 244 | command: > 245 | drush si -y --site-name="{{ drupal_site_name }}" 246 | --account-name=admin 247 | --account-pass=admin 248 | --db-url=mysql://{{ domain }}:1234@localhost/{{ domain }} 249 | --root={{ drupal_core_path }} 250 | creates={{ drupal_core_path }}/sites/default/settings.php 251 | notify: restart apache 252 | become_user: www-data 253 | ``` 254 | 255 | Sometimes the above is as good as you can do to keep unwieldy `command` or `shell` tasks formatted in a legible manner. 256 | 257 | ### Using `|` to format multiline variables 258 | 259 | In addition to using `>` to join multiple lines using spaces, YAML allows the use of `|` (pipe) to define literal scalars, to define strings with newlines preserved. 260 | 261 | For example: 262 | 263 | {lang="yaml"} 264 | ``` 265 | extra_lines: | 266 | first line 267 | second line 268 | third line 269 | ``` 270 | 271 | Would be translated to a block of text with newlines intact: 272 | 273 | {lang="text"} 274 | ``` 275 | first line 276 | second line 277 | third line 278 | ``` 279 | 280 | Using a folded scalar (`>`) would concatenate the lines, which might not be desirable. For example: 281 | 282 | {lang="yaml"} 283 | ``` 284 | extra_lines: > 285 | first line 286 | second line 287 | third line 288 | ``` 289 | 290 | Would be translated to a single string with no newlines: 291 | 292 | {lang="text"} 293 | ``` 294 | first line second line third line 295 | ``` 296 | 297 | ## Using `ansible-playbook` 298 | 299 | Generally, running playbooks from your own computer or a central playbook runner is preferable to running Ansible playbooks locally (using `--connection=local`), since Ansible and all its dependencies don't need to be installed on the system you're provisioning. Because of Ansible's optimized use of SSH for remote communication, there is usually minimal difference in performance running Ansible locally or from a remote workstation (barring network flakiness or a high-latency connection). 300 | 301 | ## Use Ansible Tower 302 | 303 | If you are able to use Ansible Tower to run your playbooks, this is even better, as you'll have a central server running Ansible playbooks, logging output, compiling statistics, and even allowing a team to work together to build servers and deploy applications in one place. 
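To make the local-versus-remote distinction from the `ansible-playbook` section above concrete, here is a minimal sketch---the inventory and playbook names are hypothetical---showing the same playbook run from a workstation over SSH, and then directly on a managed server:

{lang="text",linenos=off}
```
# From your workstation or a central playbook runner (preferred):
$ ansible-playbook -i inventory/hosts.ini main.yml

# Directly on the server being provisioned (requires Ansible to be
# installed on that server):
$ ansible-playbook -i "localhost," --connection=local main.yml
```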
304 | 305 | ## Install Galaxy dependencies local to your playbook 306 | 307 | Over the years, especially as Collections have become a popular way to distribute Ansible plugins and modules, it has become more important to consider _where_ you install Ansible project dependencies, whether installed from Ansible Galaxy or Automation Hub. 308 | 309 | This means adding a `requirements.yml` file for every playbook, (usually, but not always) defining version constraints for each role and collection, and adding an `ansible.cfg` file in the playbook project's root directory so Ansible knows to load roles and collections only from that playbook's local roles and collections directories. 310 | 311 | For any new project I start, I add an `ansible.cfg` with the following (at minimum): 312 | 313 | {lang=text} 314 | ``` 315 | [defaults] 316 | nocows = True 317 | collections_paths = ./collections 318 | roles_path = ./roles 319 | ``` 320 | 321 | Then I add a `requirements.yml` file specifying all the role and collection requirements for my playbook: 322 | 323 | {lang=yaml} 324 | ``` 325 | --- 326 | roles: 327 | - name: geerlingguy.java 328 | version: 1.10.0 329 | 330 | collections: 331 | - name: community.kubernetes 332 | version: 0.11.1 333 | ``` 334 | 335 | Install the requirements with `ansible-galaxy install -r requirements.yml`, and then you should be able to access them from your playbook without the risk of breaking other playbooks, which may use the same roles and collections but require different versions. 336 | 337 | ### Discriminate wisely when choosing community dependencies 338 | 339 | In 2019, I gave a presentation at AnsibleFest Atlanta on the topic of [evaluating community Ansible roles for your playbooks](https://www.jeffgeerling.com/blog/2019/how-evaluate-community-ansible-roles-your-playbooks). 340 | 341 | In the presentation, I mentioned a few important considerations when you decide you want to incorporate content maintained by the community into your automation playbooks: 342 | 343 | - Make sure you trust the maintainers of the content---will they fix major bugs, do they have a proven track record, do they take security seriously? 344 | - Consider whether the content is maintained too aggressively (many breaking changes in a short time frame) or too passively (no releases in many years) for your project's needs. 345 | - Is it easy to understand the content in the role or collection, so you can fix any bugs you might find, or maintain a fork of the content if needed? 346 | 347 | Sometimes people blindly adopt a dependency without much consideration for the technical debt or maintenance overhead they are incurring. Many dependencies are helpful in getting your automation goals accomplished faster than writing all the code yourself. But some can make the maintenance of your Ansible projects harder. 348 | 349 | Always be careful when incorporating a dependency from Ansible Galaxy or Automation Hub into your project. Consider how well the dependency will help you achieve both short-term automation goals and long-term project maintenance. 350 | 351 | ## Specify `--forks` for playbooks running on > 5 servers 352 | 353 | If you are running a playbook on a large number of servers, consider increasing the number of `forks` Ansible uses to run tasks simultaneously. The default, `5`, means Ansible will only run a given task on 5 servers at a time.
Consider increasing this to 10, 15, or however many connections your local workstation and ISP can handle---this will dramatically reduce the amount of time it takes a playbook to run. 354 | 355 | Set `forks=[number]` in Ansible's configuration file to set the default `forks` value for all playbook runs. 356 | 357 | ## Use Ansible's Configuration file 358 | 359 | Ansible's main configuration file, in `/etc/ansible/ansible.cfg`, allows a wealth of optimizations and customizations for running playbooks and ad-hoc tasks. 360 | 361 | Read through the official documentation's [Ansible Configuration File](https://docs.ansible.com/ansible/latest/installation_guide/intro_configuration.html) page for customizable options in `ansible.cfg`. 362 | 363 | I generally place a customized `ansible.cfg` file in every Ansible project I maintain, so I can have complete control over how Ansible behaves when I run my playbooks. 364 | 365 | Remember that if you use an `ansible.cfg` file in your project, it will not inherit any values from the global configuration file; it overrides all settings, and anything you don't explicitly define will be set to Ansible's default. 366 | 367 | ## Summary 368 | 369 | One of Ansible's strengths is its flexibility; there are often multiple 'right' ways of accomplishing your goals. I have chosen to use the methods I outlined above as they have proven to help me write and maintain a variety of playbooks and roles with minimal headaches. 370 | 371 | It's perfectly acceptable to try a different approach; as with most programming and technical things, being *consistent* is more important than following a particular set of rules, especially if the ruleset isn't universally agreed upon. Consistency is especially important when you're not working solo---if every team member used Ansible in a different way, it would very quickly become difficult to share work! 372 | -------------------------------------------------------------------------------- /backmatter.txt: -------------------------------------------------------------------------------- 1 | {backmatter} -------------------------------------------------------------------------------- /chapter1.txt: -------------------------------------------------------------------------------- 1 | # Chapter 1 - Getting Started with Ansible 2 | 3 | ## Ansible and Infrastructure Management 4 | 5 | ### On snowflakes and shell scripts 6 | 7 | Many developers and system administrators manage servers by logging into them via SSH, making changes, and logging off. Some of these changes would be documented, some would not. If an admin needed to make the same change to many servers (for example, changing one value in a config file), the admin would manually log into *each* server and repeatedly make this change. 8 | 9 | If there were only one or two changes in the course of a server's lifetime, and if the server were extremely simple (running only one process, with one configuration, and a very simple firewall), *and* if every change were thoroughly documented, this process wouldn't be a problem. 10 | 11 | But for almost every company in existence, servers are more complex---most run tens, sometimes hundreds of different applications or application containers. Most servers have complicated firewalls and dozens of tweaked configuration files. And even with change documentation, the manual process usually results in some servers or some steps being forgotten.
12 | 13 | If the admins at these companies wanted to set up a new server *exactly* like one that is currently running, they would need to spend a good deal of time going through all of the installed packages, documenting configurations, versions, and settings; and they would spend a lot of unnecessary time manually reinstalling, updating, and tweaking everything to get the new server to run close to how the old server did. 14 | 15 | Some admins may use shell scripts to try to reach some level of sanity, but I've yet to see a complex shell script that handles all edge cases correctly while synchronizing multiple servers' configuration and deploying new code. 16 | 17 | ### Configuration management 18 | 19 | Lucky for you, there are tools to help you avoid having these *snowflake servers*---servers that are uniquely configured and impossible to recreate from scratch because they were hand-configured without documentation. Tools like [CFEngine](http://cfengine.com/), [Puppet](http://puppetlabs.com/), and [Chef](http://www.getchef.com/chef/) became very popular in the mid-to-late 2000s. 20 | 21 | But there's a reason why many developers and sysadmins stick to shell scripting and command-line configuration: it's simple and easy to use, and they've had years of experience using bash and command-line tools. Why throw all that out the window and learn a new configuration language and methodology? 22 | 23 | Enter Ansible. Ansible was built (and continues to be improved) by developers and sysadmins who know the command line---and want to make a tool that helps them manage their servers exactly as they have in the past, but in a repeatable and centrally managed way. Ansible also has other tricks up its sleeve, making it a true Swiss Army knife for people involved in DevOps (not just the operations side). 24 | 25 | One of Ansible's greatest strengths is its ability to run regular shell commands verbatim, so you can take existing scripts and commands and work on converting them into idempotent playbooks as time allows. For someone (like me) who was comfortable with the command line, but never became proficient in more complicated tools like Puppet or Chef (which both required at least a *slight* understanding of Ruby and/or a custom language just to get started), Ansible was a breath of fresh air. 26 | 27 | Ansible works by pushing changes out to all your servers (by default), and requires no extra software to be installed on your servers (thus no extra memory footprint, and no extra daemon to manage), unlike most other configuration management tools. 28 | 29 | I> **Idempotence** is the ability to run an operation which produces the same result whether run once or multiple times ([source](http://en.wikipedia.org/wiki/Idempotence#Computer_science_meaning)). 30 | I> 31 | I> An important feature of a configuration management tool is its ability to ensure the same configuration is maintained whether you run it once or a thousand times. Many shell scripts have unintended consequences if run more than once, but Ansible deploys the same configuration to a server over and over again without making any changes after the first deployment. 32 | I> 33 | I> In fact, almost every aspect of Ansible modules and commands is idempotent, and for those that aren't, Ansible allows you to define when the given command should be run, and what constitutes a changed or failed command, so you can easily maintain an idempotent configuration on all your servers.
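To make that last point concrete, here is a minimal sketch (the application paths and commands are hypothetical) of the two main options Ansible provides for keeping otherwise non-idempotent `command` tasks idempotent:

{lang="yaml",linenos=off}
```
tasks:
  # `creates` makes Ansible skip the command entirely if the given
  # file already exists, so repeat runs report no changes.
  - name: Initialize the application's data directory.
    command: /opt/app/bin/initialize --data-dir=/opt/app/data
    args:
      creates: /opt/app/data/.initialized

  # `changed_when` defines what constitutes a change; `false` means
  # this read-only check never reports a change.
  - name: Check the application's status.
    command: /opt/app/bin/status
    register: app_status
    changed_when: false
```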
34 | 35 | ## Installing Ansible 36 | 37 | Ansible's only real dependency is Python. Once Python is installed, the simplest way to get Ansible running is to use `pip`, a simple package manager for Python. 38 | 39 | **If you're on a Mac**, installing Ansible is a piece of cake: 40 | 41 | 1. Check if `pip` is installed (`which pip`). If not, install it: `sudo easy_install pip` 42 | 2. Install Ansible: `pip install ansible` 43 | 44 | You could also install Ansible via [Homebrew](http://brew.sh/) with `brew install ansible`. Either way (`pip` or `brew`) is fine, but make sure you update Ansible using the same system with which it was installed! 45 | 46 | **If you're running Windows**, it will take a little extra work to set everything up. Typically, people run Ansible inside the Windows Subsystem for Linux. For detailed instructions setting up Ansible under the WSL, see [Appendix A - Using Ansible on Windows workstations](#appendix-a). 47 | 48 | **If you're running Linux**, chances are you already have Ansible's dependencies installed, but we'll cover the most common installation methods. 49 | 50 | If you have `python-pip` and `python-devel` (`python-dev` on Debian/Ubuntu) installed, use `pip` to install Ansible (this assumes you also have the 'Development Tools' package installed, so you have `gcc`, `make`, etc. available): 51 | 52 | {lang="text",linenos=off} 53 | ``` 54 | $ pip install ansible 55 | ``` 56 | 57 | Using pip allows you to upgrade Ansible with `pip install --upgrade ansible`. 58 | 59 | ### Fedora/Red Hat Enterprise Linux 60 | 61 | The easiest way to install Ansible on an RPM-based OS is to use the official dnf package. If you're running Red Hat Enterprise Linux (RHEL) or CentOS/Rocky/Alma Linux, you need to install EPEL's RPM before you install Ansible (see the info section below for instructions): 62 | 63 | {lang="text",linenos=off} 64 | ``` 65 | $ dnf -y install ansible 66 | ``` 67 | 68 | I> On RPM-based systems, `python-pip` and `ansible` are available via the [EPEL repository](https://fedoraproject.org/wiki/EPEL). If you run the command `dnf repolist | grep epel` (to see if the EPEL repo is already available) and there are no results, you need to install it with the following commands: 69 | I> 70 | I> {lang="text",linenos=off} 71 | I> ~~~ 72 | I> # If you're on RHEL/CentOS 6: 73 | I> $ rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/\ 74 | I> epel-release-6-8.noarch.rpm 75 | I> # If you're on RHEL/CentOS 7: 76 | I> $ yum install epel-release 77 | I> # If you're on RHEL 8+/Fedora: 78 | I> $ dnf install epel-release 79 | I> ~~~ 80 | 81 | ### Debian/Ubuntu 82 | 83 | The easiest way to install Ansible on a Debian or Ubuntu system is to use the official apt package. 84 | 85 | {lang="text",linenos=off} 86 | ``` 87 | $ sudo apt-add-repository -y ppa:ansible/ansible 88 | $ sudo apt-get update 89 | $ sudo apt-get install -y ansible 90 | ``` 91 | 92 | I> If you get an error like "sudo: add-apt-repository: command not found", you're probably missing the `software-properties-common` package. Install it with the command: 93 | I> 94 | I> {lang="text",linenos=off} 95 | I> ~~~ 96 | I> $ sudo apt-get install software-properties-common 97 | I> ~~~ 98 | 99 | **Once Ansible is installed**, make sure it's working properly by entering `ansible --version` on the command line. You should see the currently-installed version information: 100 | 101 | {lang="text",linenos=off} 102 | ``` 103 | $ ansible --version 104 | ansible [core 2.14.6] 105 | ... 
106 | python version = 3.10.11 107 | jinja version = 3.1.2 108 | libyaml = True 109 | ``` 110 | 111 | I> **What about Python 3?** If you have both Python 2 and Python 3 installed, and `pip` is aliased to an older Python 2 version of `pip`, you should consider installing Python 3 and `pip3`, and using that version instead. Ansible is fully compatible with Python 3, and unless you're running on a very old system that doesn't have Python 3 available for it, you should use Python 3. 112 | 113 | ## Creating a basic inventory file {#basic-inventory} 114 | 115 | Ansible uses an inventory file (basically, a list of servers) to communicate with your servers. Like a hosts file (at `/etc/hosts`) that matches IP addresses to domain names, an Ansible inventory file matches servers (IP addresses or domain names) to groups. Inventory files can do a lot more, but for now, we'll just create a simple file with one server. Create a file named `hosts.ini` in a test project folder: 116 | 117 | {lang="text",linenos=off} 118 | ``` 119 | $ mkdir test-project 120 | $ cd test-project 121 | $ touch hosts.ini 122 | ``` 123 | 124 | T> Inventory file names do not have to follow any particular naming convention. I often use the file name `hosts.ini` for Ansible's default 'ini-style' syntax, but I also sometimes call the file `inventory` (with no file extension). 125 | 126 | Edit this hosts file with nano, vim, or whatever editor you'd like. Put the following into the file: 127 | 128 | {lang="text"} 129 | ``` 130 | [example] 131 | www.example.com 132 | ``` 133 | 134 | ...where `example` is the group of servers you're managing and `www.example.com` is the domain name (or IP address) of a server in that group. If you're not using port 22 for SSH on this server, you will need to add it to the address, like `www.example.com:2222`, since Ansible defaults to port 22 and won't get this value from your ssh config file. 135 | 136 | I> This first example assumes you have a server set up that you can test with; if you don't already have a spare server somewhere that you can connect to, you might want to create a small VM using DigitalOcean, Amazon Web Services, Linode, or some other service that bills by the hour. That way you have a full server environment to work with when learning Ansible---and when you're finished testing, delete the server and you'll only be billed a few pennies! 137 | I> 138 | I> Replace the `www.example.com` in the above example with the name or IP address of your server. 139 | 140 | T> You can also place your inventory in Ansible's global inventory file, `/etc/ansible/hosts`, and any playbook will default to that if no other inventory is specified. However, that file requires `sudo` permissions and it is usually better to maintain inventory alongside your Ansible projects. 141 | 142 | ## Running your first Ad-Hoc Ansible command 143 | 144 | Now that you've installed Ansible and created an inventory file, it's time to run a command to see if everything works! Enter the following in the terminal (we'll do something safe so it doesn't make any changes on the server): 145 | 146 | {lang="text",linenos=off} 147 | ``` 148 | $ ansible -i hosts.ini example -m ping -u [username] 149 | ``` 150 | 151 | ...where `[username]` is the user you use to log into the server. If everything worked, you should see a message that shows `www.example.com | SUCCESS >>`, then the result of your ping. If it didn't work, run the command again with `-vvvv` on the end to see verbose output. 
Chances are you don't have SSH keys configured properly---if you login with `ssh username@www.example.com` and that works, the above Ansible command should work, too. 152 | 153 | W> Ansible assumes you're using passwordless (key-based) login for SSH (e.g. you login by entering `ssh username@example.com` and don't have to type a password). If you're still logging into your remote servers with a username and password, or if you need a primer on Linux remote authentication and security best practices, please read [Chapter 11 - Server Security and Ansible](#chapter-11). If you insist on using passwords, add the `--ask-pass` (`-k`) flag to Ansible commands (you may also need to install the `sshpass` package for this to work). This entire book is written assuming passwordless authentication, so you'll need to keep this in mind every time you run a command or playbook. 154 | 155 | T> Need a primer on SSH key-based authentication? Please read through Ubuntu's community documentation on [SSH/OpenSSH/Keys](https://help.ubuntu.com/community/SSH/OpenSSH/Keys). 156 | 157 | Let's run a more useful command: 158 | 159 | {lang="text",linenos=off} 160 | ``` 161 | $ ansible -i hosts.ini example -a "free -h" -u [username] 162 | ``` 163 | 164 | In this example, we quickly see memory usage (in a human-readable format) on all the servers (for now, just one) in the `example` group. Commands like this are helpful for quickly finding a server that has a value out of a normal range. I often use commands like `free -h` (to see memory statistics), `df -h` (to see disk usage statistics), and the like to make sure none of my servers is behaving erratically. While it's good to track these details in an external tool like [Nagios](http://www.nagios.org/), [Munin](http://munin-monitoring.org/), or [Cacti](http://www.cacti.net/), it's also nice to check these stats on all your servers with one simple command and one terminal window! 165 | 166 | ## Summary 167 | 168 | That's it! You've just learned about configuration management and Ansible, installed it, told it about your server, and ran a couple commands on that server through Ansible. If you're not impressed yet, that's okay---you've only seen the *tip* of the iceberg. 169 | 170 | {lang="text",linenos=off} 171 | ``` 172 | _______________________________________ 173 | / A doctor can bury his mistakes but an \ 174 | | architect can only advise his clients | 175 | \ to plant vines. (Frank Lloyd Wright) / 176 | --------------------------------------- 177 | \ ^__^ 178 | \ (oo)\_______ 179 | (__)\ )\/\ 180 | ||----w | 181 | || || 182 | ``` 183 | -------------------------------------------------------------------------------- /chapter12.txt: -------------------------------------------------------------------------------- 1 | # Chapter 12 - Automating Your Automation with Ansible Tower and CI/CD 2 | 3 | At this point, you should be able to convert almost any bit of your infrastructure's configuration into Ansible playbooks, roles, and inventories. 4 | 5 | All the examples in this book use Ansible's CLI to run playbooks and report back the results. For smaller teams, especially when everyone on the team is well-versed in how to use Ansible, YAML syntax, and security best practices, using the CLI is a sustainable approach. 6 | 7 | But for many organizations, basic CLI use is inadequate: 8 | 9 | - The business needs detailed reporting of infrastructure deployments and failures, especially for audit purposes. 
10 | - Team-based infrastructure management requires varying levels of involvement in playbook management, inventory management, and key and password access. 11 | - A thorough visual overview of the current and historical playbook runs and server health helps identify potential issues before they affect the bottom line. 12 | - Playbook scheduling ensures infrastructure remains in a known state. 13 | 14 | [Ansible Tower](https://www.ansible.com/products/tower) fulfills these requirements---and many more---and provides a great mechanism for team-based Ansible usage. 15 | 16 | Ansible Tower is part of Red Hat's Ansible Automation Platform, but it is built from an open source upstream project, [AWX](https://github.com/ansible/awx). AWX was open-sourced in 2017, shortly after Red Hat acquired Ansible. 17 | 18 | W> While AWX and Tower are quite similar, AWX is updated much more frequently, with fewer guarantees about upgrade compatibility. If you are able and willing to keep up with AWX's releases and deploy it on your own, it is perfectly adequate for many use cases. Ansible Tower is part of Red Hat's broader subscription-based Ansible Automation Platform, and is fully supported. 19 | 20 | While this book includes a brief overview of Tower, it is highly recommended you read through the extensive [Tower User Guide](https://docs.ansible.com/ansible-tower/latest/html/userguide/index.html), which includes details this book won't be covering such as LDAP integration and multiple-team playbook management workflows. 21 | 22 | ### Installing Ansible AWX 23 | 24 | Because Ansible Tower requires a Red Hat Subscription, and you might want to get a feel for how it works before you fully commit to a subscription, it's best to install a test instance of AWX to get a feel for how Tower works and operates. 25 | 26 | The quickest way to get AWX running is to use the [AWX Docker Compose installation method](https://github.com/ansible/awx/blob/devel/INSTALL.md#docker-compose). 27 | 28 | Make sure you have Ansible, Git, and Docker installed on your computer. 29 | 30 | And to make sure the AWX installer can manage Docker containers, make sure you have both the Docker and Docker Compose python libraries installed: 31 | 32 | {lang="text",linenos=off} 33 | ``` 34 | pip install docker docker-compose 35 | ``` 36 | 37 | Then install AWX by cloning the repository and running the install playbook: 38 | 39 | {lang="text",linenos=off} 40 | ``` 41 | $ git clone https://github.com/ansible/awx.git 42 | $ cd awx/installer 43 | $ ansible-playbook -i inventory install.yml 44 | ``` 45 | 46 | AWX takes a few minutes to initialize, but should be accessible at `localhost`. Until it's initialized, you'll see an 'AWX is upgrading' message. 47 | 48 | Once AWX is initialized, you should be able to log in using the default credentials (username `admin` and password `password`) and get to the default dashboard page: 49 | 50 | {width=80%} 51 | ![AWX's Dashboard](images/12-awx-dashboard.png) 52 | 53 | I> *What's up with the irate potato?* Apparently it's an inside joke. Sometimes when using AWX you'll see the 'AWX with Wings' logo, which as far as I can tell has its roots as one of the original logos of the 'AnsibleWorks' organization that eventually became Ansible, Inc. and is part of Red Hat today. But other times you'll see the Angry Spud. 'Angry Spud,' 'Rage Tater,' and 'The Starchy Miracle' are some internal nicknames for our potato friend. 
I>
I> Like Ansible's tendency to wrap output in `cowsay` if you have it installed, some of these strange but fun quirks make using Ansible tools more fun by giving them personality---at least in the author's opinion.

### Using AWX

Tower and AWX are centered around the idea of organizing *Projects* (which run your playbooks via *Jobs*) and *Inventories* (which describe the servers on which your playbooks should be run) inside of *Organizations*. *Organizations* are then set up with different levels of access based on *Users* and *Credentials* grouped in different *Teams*. It's a little overwhelming at first, but once the initial structure is configured, you'll see the power and flexibility the workflow affords.

Let's get started with our first project!

The first step is to make sure you have a test playbook you can run using AWX. Generally, your playbooks should be stored in a source code repository, with AWX configured to check out the latest version of the playbook from the repository and run it. For this example, however, we will create a playbook in AWX's default `projects` directory located in `/var/lib/awx/projects`:

1. Log into the AWX web container: `docker exec -it awx_web /bin/bash`
2. Create the `projects` directory: `mkdir /var/lib/awx/projects`
3. Go into that directory: `cd /var/lib/awx/projects`
4. Create a new project directory: `mkdir ansible-for-devops && cd ansible-for-devops`
5. Create a new playbook file with `vi` (`vi main.yml`) in the new directory, and put in the following:

{lang="yaml"}
```
---
- hosts: all
  gather_facts: no
  connection: local

  tasks:
    - name: Check the date on the server.
      command: date
```

Now, to reinforce my earlier statement about why it's good to use a source repository instead of manually managing playbooks, you have to do _all five of those steps again_ in the `awx_task` container. So go do that, starting by logging into the AWX task container: `docker exec -it awx_task /bin/bash`.

T> If you insist on manually managing playbooks in the default `/var/lib/awx/projects` path, then you can modify the volume configuration in the `docker-compose.yml` file generated by the AWX installer to mount a local directory into both containers. But this is not a common way to use Ansible, and I wouldn't recommend it!

Switch back to your web browser and get everything set up to run the test playbook inside Ansible Tower's web UI:

1. Create a new *Organization*, called 'Ansible for DevOps'.
2. Add a new User to the Organization, named John Doe, with the email johndoe@example.com, username `johndoe`, and password `johndoe1234`.
3. Create a new *Team*, called 'DevOps Engineers', in the 'Ansible for DevOps' Organization.
4. Add the `johndoe` user to the DevOps Engineers Team.
5. Under the Projects section, add a new *Project*. Set the 'Name' to `Ansible for DevOps Project`, 'Organization' to `Ansible for DevOps`, 'SCM Type' to `Manual`, and 'Playbook Directory' to `ansible-for-devops` (AWX automatically detects all folders placed inside `/var/lib/awx/projects`, but you could also use an alternate Project Base Path if you want to store projects elsewhere).
6. Under the Inventories section, add an *Inventory*. Set the 'Name' to `AWX Local`, and 'Organization' to `Ansible for DevOps`. Once the inventory is saved:
    1. Add a 'Group' with the Name `localhost`. Click on the group once it's saved.
    2. Add a 'Host' with the Host Name `127.0.0.1`.

T> New *Credentials* have a somewhat dizzying array of options, and offer login and API key support for a variety of services, like SSH, AWS, Rackspace, VMWare vCenter, and SCM systems. If you can log in to a system, AWX likely supports the login mechanism!

Now that we have all the structure for running playbooks configured, we need only create a *Template* to run the playbook on the localhost and see whether we've succeeded. Click on 'Templates', and create a new Job Template with the following configuration:

- Name: `Ansible for DevOps Job`
- Job Type: `Run`
- Inventory: `AWX Local`
- Project: `Ansible for DevOps Project`
- Playbook: `main.yml`

Save the Job Template, then go back to the main AWX 'Templates' section.

Click the small rocket ship button for the 'Ansible for DevOps Job' to start a job using the template. You'll be redirected to a Job status page, which provides live updates of the job status, and then a summary of the playbook run when complete:

{width=80%}
![AWX job completed successfully!](images/12-awx-job-complete.png)

The playbook's output is logged to the web page in real-time. You can also stop a running job, delete a job's record, or relaunch a job with the same parameters using the respective buttons on the job's page.

The job's dashboard page is very useful for giving an overview of how many hosts were successful, how many tasks resulted in changes, and the timing of the different parts of the playbook run.

### Uninstalling AWX

After you're finished trying out AWX, you can uninstall it using the following process:

1. Go into the directory where the AWX installer created a Docker Compose configuration: `cd ~/.awx/awxcompose`
2. Shut down the Docker Compose environment: `docker-compose down -v`
3. Delete the entire AWX directory: `rm -rf ~/.awx`

### Other Tower Features of Note

In our walkthrough above, we used AWX to run a playbook on the local server; setting up AWX or Tower to run playbooks on real-world infrastructure or other local VMs is just as easy, and the tools Ansible Tower provides are very handy, especially when working in larger team environments.

This book won't walk through the entirety of Ansible Tower's documentation, but a few other features you should try out include:

- Setting up scheduled Job runs (especially with the 'Check' option instead of 'Run') for CI/CD.
- Configuring webhooks for Job Templates so you can trigger Job runs from your SCM (e.g. 'GitOps').
- Integrating user accounts and Teams with LDAP users and groups for automatic team-based project management.
- Setting different levels of permissions for Users and Teams so certain users can only edit, run, or view certain jobs within an Organization.
- Configuring Ansible Vault credentials to easily and automatically use Vault-protected variables in your playbooks.
- Surveys, which allow users to add extra information based on a 'Survey' of questions per job run.
- Smart Inventories and dynamic inventory integrations.
- [Monitoring Tower with Prometheus and Grafana](https://www.ansible.com/blog/red-hat-ansible-tower-monitoring-using-prometheus-node-exporter-grafana).
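
One more feature worth knowing about: everything you just did through the web UI can also be done through AWX's REST API. As a hedged sketch only (the job template ID `7` and the `localhost` URL are assumptions for this local test install---find the real ID in the template's URL in the UI), you could launch the job template from any playbook or ad-hoc task using Ansible's `uri` module:

{lang="yaml",linenos=off}
```
- name: Launch an AWX job template via the REST API.
  uri:
    # Template ID 7 is hypothetical; check your template's URL in the UI.
    url: "http://localhost/api/v2/job_templates/7/launch/"
    method: POST
    user: admin
    password: password
    force_basic_auth: true
    # AWX responds with 201 Created when it accepts a launch request.
    status_code: 201
```

A `201 Created` response means AWX accepted the launch request; the job then shows up on the Jobs page just like a run started from the UI.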

Ansible Tower continues to improve rapidly, and is one of the best ways to run Ansible playbooks from a central instance with team-based access and extremely detailed live and historical status reporting.

### Tower Alternatives

Ansible Tower is purpose-built for use with Ansible playbooks, but there are many other ways to run playbooks on your servers with a solid workflow. If price is a major concern, and you don't need all the bells and whistles Tower provides, you can use other popular tools like [Jenkins](https://www.jenkins.io), [Rundeck](https://www.rundeck.com/open-source), or [GoCD](https://www.gocd.org).

All these tools provide flexibility and security for running Ansible playbooks, and each one requires a different amount of setup and configuration before it will work well for common usage scenarios. One of the most popular and long-standing CI tools is Jenkins, so we'll explore how to configure a similar playbook run in Jenkins next.

## Jenkins CI

Jenkins is a Java-based open source continuous integration tool. It was forked from the Hudson project in 2011, but has a long history as a robust build tool for almost any software project.

Jenkins is easy to install and configure, with the Java SDK as its only requirement. Jenkins runs on any modern OS, but for the purposes of this demonstration, we'll build a local VM using Vagrant, install Jenkins inside the VM using Ansible, then use Jenkins to run an Ansible playbook.

### Build a local Jenkins server with Ansible

Create a new directory for the Jenkins VM named `jenkins`. Inside the directory, create a `Vagrantfile` to describe the machine and the Ansible provisioning to Vagrant, with the following contents:

{lang="ruby"}
```
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "geerlingguy/ubuntu2004"
  config.vm.hostname = "jenkins.test"
  config.vm.network "private_network", ip: "192.168.56.76"
  config.ssh.insert_key = false

  config.vm.provider :virtualbox do |v|
    v.memory = 512
  end

  # Ansible provisioning.
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision.yml"
    ansible.become = true
  end
end
```

This Vagrantfile will create a new VM running Ubuntu, with the IP address `192.168.56.76` and the hostname `jenkins.test`. Go ahead and add an entry for `192.168.56.76  jenkins.test` to your hosts file, and then create a new `provision.yml` playbook so Vagrant can run it with Ansible (as described in the `config.vm.provision` block in the Vagrantfile). Put the following in the `provision.yml` file:

{lang="yaml"}
```
---
- hosts: all

  vars:
    ansible_install_method: pip
    firewall_allowed_tcp_ports:
      - "22"
      - "8080"
    jenkins_plugins:
      - ansicolor

  pre_tasks:
    - name: Update apt cache if needed.
      apt:
        update_cache: true
        cache_valid_time: 3600

  roles:
    - geerlingguy.firewall
    - geerlingguy.pip
    - geerlingguy.ansible
    - geerlingguy.java
    - geerlingguy.jenkins
```

This playbook uses a set of roles from Ansible Galaxy to install all the required components for our Jenkins CI server.
To make sure you have all the required roles installed on your host machine, add a `requirements.yml` file in the `jenkins` folder, containing all the roles being used in the playbook:

{lang="yaml"}
```
---
roles:
  - name: geerlingguy.firewall
  - name: geerlingguy.pip
  - name: geerlingguy.ansible
  - name: geerlingguy.java
  - name: geerlingguy.jenkins
```

The `geerlingguy.ansible` role installs Ansible on the VM, so Jenkins can run Ansible playbooks and ad-hoc commands. The `geerlingguy.java` role is a dependency of `geerlingguy.jenkins`, and the `geerlingguy.firewall` role configures a firewall to limit access on ports besides 22 (for SSH) and 8080 (Jenkins' default port).

Finally, we give the `geerlingguy.jenkins` role a set of plugins to install through the `jenkins_plugins` variable; in this case, we just want the `ansicolor` plugin, which gives us full color display in Jenkins' console logs (so our Ansible playbook output is easier to read).

T> There is an official [Ansible plugin for Jenkins](https://wiki.jenkins-ci.org/display/JENKINS/Ansible+Plugin) which can be used to run Ansible ad-hoc tasks and playbooks, and it may help you integrate Ansible and Jenkins more easily.

To build the VM and run the playbook, do the following (inside the `jenkins` folder):

1. Run `ansible-galaxy install -r requirements.yml` to install the required roles.
2. Run `vagrant up` to build the VM and install and configure Jenkins.

After a few minutes, the provisioning should complete, and you should be able to access Jenkins at `http://jenkins.test:8080/` (if you configured the hostname in your hosts file).

### Create an Ansible playbook on the Jenkins server

It's best to keep your playbooks and server configuration in a code repository (e.g. Git or SVN), but for simplicity's sake, this example uses a playbook stored locally on the Jenkins server, similar to the earlier Ansible Tower example.

1. Log into the Jenkins VM: `vagrant ssh`
2. Go to the `/opt` directory: `cd /opt`
3. Create a new project directory: `sudo mkdir ansible-for-devops && cd ansible-for-devops`
4. Create a new playbook file, `main.yml`, within the new directory, with the following contents (use sudo to create the file, e.g. `sudo vi main.yml`):

{lang="yaml"}
```
---
- hosts: 127.0.0.1
  gather_facts: no
  connection: local

  tasks:
    - name: Check the date on the server.
      command: date
```

If you want, test the playbook while you're logged in: `ansible-playbook main.yml`.

### Create a Jenkins job to run an Ansible Playbook

With Jenkins running, configure a Jenkins job to run a playbook on the local server with Ansible. Visit `http://jenkins.test:8080/` and log in with username `admin` and password `admin` (these are the defaults from the `geerlingguy.jenkins` role---you should override these for anything besides local test environments!).

Once the page loads, click the 'New Item' link to create a new 'Freestyle project' with the title 'ansible-local-test'. Click 'OK', and when configuring the job, set the following configuration:

- Under 'Build Environment', check the 'Color ANSI Console Output' option.
  This allows Ansible's helpful colored output to pass through the Jenkins console, so it is easier to read during and after the run.
- Under 'Build', click 'Add Build Step', then choose 'Execute shell'. In the 'Command' field, add the following code, which will run the local Ansible playbook:

{lang="text"}
~~~
# Force Ansible to output jobs with color.
export ANSIBLE_FORCE_COLOR=true

# Run the local test playbook.
ansible-playbook /opt/ansible-for-devops/main.yml
~~~

Click 'Save' to save the 'ansible-local-test' job, and on the project's page, click the 'Build Now' link to start a build. After a few seconds, you should see a new item in the 'Build History' block. Click on the (hopefully) blue circle to the left of '#1', and it will take you to the console output of the job. It should look something like this:

{width=80%}
![Jenkins job completed successfully!](images/12-jenkins-job-console-output.png)

This is a basic example, but hopefully it's enough to show you how easy it is to get at least some of your baseline CI/CD automation done using a free and open source tool. Most of the more difficult aspects of managing infrastructure through Jenkins involve managing SSH keys, certificates, and other credentials through Jenkins, but there is already plenty of documentation covering these topics online and in the official Jenkins documentation, so this is left as an exercise for the reader.

## Summary

Tools like Ansible Tower provide a robust, repeatable, and accessible environment in which to run your Ansible playbooks.

In a team-based environment, it's especially important to have reliable ways to run your Ansible playbooks that aren't dependent on individual developers' laptops!

{lang="text",linenos=off}
```
 ________________________________________
/ The first rule of any technology used  \
| in a business is that automation       |
| applied to an efficient operation will |
| magnify the efficiency. The second is  |
| that automation applied to an          |
| inefficient operation will magnify the |
\ inefficiency. (Bill Gates)             /
 ----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```

--------------------------------------------------------------------------------
/chapter14.txt:
--------------------------------------------------------------------------------

# Chapter 14 - Automating HTTPS and TLS Certificates

Today's application environment almost always requires the use of HTTP (and HTTPS) for certain traffic---end users interacting with a website, microservices communicating with each other internally or via the public Internet, or external APIs interacting with your apps.

HTTPS was originally used only for sensitive transactions, like banking transactions or secure web forms. It also used to require extra server CPU to encrypt data. But today, when Google boosts search results for HTTPS-only sites, and when processors barely show a difference between encrypted and unencrypted traffic, it's almost universally understood that all HTTP services should be served via `https://`.

Traditionally, one blocker to using HTTPS _everywhere_ was that certificates were difficult to acquire, manage, and renew. And they were also expensive!

Now, between Let's Encrypt's free certificates, more affordable wildcard certs, and universal Server Name Indication (SNI) support, there is almost never an excuse _not_ to use HTTPS...

Except, that is, for the fact that certificate management has been tricky to automate. This chapter will show how Ansible solves this last problem by managing certificates and securing all your HTTP traffic!

## Generating Self-Signed Certificates with Ansible

Whenever I'm building and testing a new server configuration that requires TLS connections (typically HTTPS traffic over port 443), I need to use one or more valid certificates which can be accepted by a browser user, or by something like `curl`, so I can verify my TLS configuration is correct.

Ansible makes generating self-signed certificates easy. There are four `openssl_*` crypto-related modules useful in generating certificates:

- `openssl_certificate` - Generate and/or check OpenSSL certificates
- `openssl_csr` - Generate OpenSSL Certificate Signing Requests (CSR)
- `openssl_privatekey` - Generate OpenSSL private keys
- `openssl_publickey` - Generate an OpenSSL public key from its private key

In order to use these modules, you need OpenSSL installed, and also one extra Python dependency used by Ansible to interact with OpenSSL, the `pyOpenSSL` library.

Here's a quick example of the tasks required to generate a self-signed cert:

{lang="yaml",linenos=off}
```
- name: Ensure directory exists for local self-signed TLS certs.
  file:
    path: /etc/ssl/certs/example
    state: directory

- name: Generate an OpenSSL private key.
  openssl_privatekey:
    path: /etc/ssl/certs/example/privkey.pem

- name: Generate an OpenSSL CSR.
  openssl_csr:
    path: /etc/ssl/certs/example/example.csr
    privatekey_path: /etc/ssl/certs/example/privkey.pem
    common_name: "example.com"

- name: Generate a Self Signed OpenSSL certificate.
  openssl_certificate:
    path: /etc/ssl/certs/example/fullchain.pem
    privatekey_path: /etc/ssl/certs/example/privkey.pem
    csr_path: /etc/ssl/certs/example/example.csr
    provider: selfsigned
```

These tasks ensure there's a directory inside which the certificate will live, create a private key and Certificate Signing Request (CSR) in that directory, and use them to generate the final cert.

You can then use this certificate to serve HTTPS requests using a web server; for example, in an Nginx `server` configuration:

{lang="text",linenos=off}
```
server {
    listen 443 ssl default_server;
    server_name example.com;

    ssl_certificate {{ certificate_dir }}/{{ server_hostname }}/fullchain.pem;
    ssl_certificate_key {{ certificate_dir }}/{{ server_hostname }}/privkey.pem;
    ...
}
```

Let's put together a full playbook using the `openssl_*` modules and Nginx, to build a server complete with a self-signed certificate and a secure Nginx TLS configuration.

### Idempotent Nginx HTTPS playbook with a self-signed cert

For the sake of convenience, this example will target a Debian 9 server (though it would be mostly unchanged for any other distribution), and there's a fully tested example included in this book's GitHub repository: [HTTPS Self-Signed Certificate Demo VM](https://github.com/geerlingguy/ansible-for-devops/tree/master/https-self-signed).
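
Before diving in, one practical note: the `openssl_*` tasks above assume the `pyOpenSSL` library mentioned earlier is already present on the target. If you want to try those snippets standalone, here's a minimal hedged sketch of handling the dependency in the same play (assuming `pip` itself is already installed on the target machine):

{lang="yaml",linenos=off}
```
- name: Ensure pyOpenSSL is installed so the openssl_* modules work.
  pip:
    name: pyopenssl
    state: present
```

In the full playbook below, the `geerlingguy.pip` role takes care of this dependency instead.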

Create a new folder for the Self-Signed Certificate web server playbook, and add a `main.yml` playbook:

{lang="yaml"}
```
---
- hosts: all

  vars_files:
    - vars/main.yml

  pre_tasks:
    - name: Ensure apt cache is updated.
      apt: update_cache=yes cache_valid_time=600

    - name: Install dependency for pyopenssl.
      apt: name=libssl-dev state=present
```

To keep the main playbook tidy, we will store any variables in an included variables file (go ahead and create an empty `main.yml` vars file in a `vars` directory). Next, on most Debian (and Debian-derived) distros, I add in a `pre_task` to make sure the Apt cache is up to date (this prevents errors when installing packages later). Finally, `libssl-dev` is a dependency we'll need to have on the system to make sure `pyopenssl` can be installed by `pip` later, so we'll do that too.

Next, to save some time, we can rely on some Ansible Galaxy roles to install and configure some required software on the server:

{lang="yaml",starting-line-number=14}
```
  roles:
    - geerlingguy.firewall
    - geerlingguy.pip
    - geerlingguy.nginx
```

We use the `firewall` role to configure `iptables` to only allow traffic to the server on certain ports, `pip` to install Python's Pip package manager and the required `pyOpenSSL` library, and `nginx` to install and configure Nginx.

To get these roles installed, add a `requirements.yml` file to your playbook directory, with the contents:

{lang="yaml"}
```
---
roles:
  - name: geerlingguy.firewall
  - name: geerlingguy.pip
  - name: geerlingguy.nginx
```

Then run `ansible-galaxy install -r requirements.yml` to install the roles.

T> In most cases, you should create an `ansible.cfg` in the playbook directory, with at least the following contents:
T>
T> {lang="text",linenos=off}
T> ~~~
T> [defaults]
T> roles_path = ./roles
T> ~~~
T>
T> This way, role dependencies are installed inside the playbook directory itself instead of in your system-wide roles directory (as long as you run the `ansible-galaxy` command inside the playbook directory).

Now let's define a few variables to make the `firewall`, `pip`, and `nginx` roles configure things how we want:

{lang="yaml"}
```
# Firewall settings.
firewall_allowed_tcp_ports:
  - "22"
  - "80"
  - "443"

# Python settings.
pip_install_packages: ['pyopenssl']

# Nginx settings.
nginx_vhosts: []
nginx_remove_default_vhost: True
nginx_ppa_version: stable
nginx_docroot: /var/www/html
```

For the firewall, you need port 22 open for remote SSH access, port 80 for HTTP requests (which we'll redirect to HTTPS), and 443 for HTTPS.

For Pip, we need to make sure the right version of pip is installed (so `python3-pip` for Debian 9, which has Python 3 installed by default), and we tell it to install the latest version of the `pyopenssl` package.

For Nginx, we want the default virtual host (server) which comes with the distro package install to be removed, we want to set the role's vhosts to an empty array (since we'll manage Nginx `server` configuration ourselves), and finally we'll use the docroot `/var/www/html`.
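
If your distribution's default doesn't match what you need, the `geerlingguy.pip` role also exposes a variable to pin the exact Apt package providing pip. A one-line hedged sketch (based on the role's documented options---verify the name against the role's README):

{lang="yaml",linenos=off}
```
# Pin the package that provides pip (role option; verify in the role README).
pip_package: python3-pip
```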

Now that we have all the base packages installed and configured, the next step is to generate the self-signed certificate. To keep our playbook clean, the required tasks can go into an imported task file, imported in the `main.yml` like so:

{lang="yaml",starting-line-number=19}
```
  tasks:
    - import_tasks: tasks/self-signed-cert.yml
```

Create a `tasks` folder, and create a `self-signed-cert.yml` task file inside. We'll place the tasks that create the key, generate the CSR, and generate the cert into this file:

{lang="yaml"}
```
---
- name: Ensure directory exists for local self-signed TLS certs.
  file:
    path: "{{ certificate_dir }}/{{ server_hostname }}"
    state: directory

- name: Generate an OpenSSL private key.
  openssl_privatekey:
    path: "{{ certificate_dir }}/{{ server_hostname }}/privkey.pem"

- name: Generate an OpenSSL CSR.
  openssl_csr:
    path: "{{ certificate_dir }}/{{ server_hostname }}.csr"
    privatekey_path: "{{ certificate_dir }}/{{ server_hostname }}/privkey.pem"
    common_name: "{{ server_hostname }}"

- name: Generate a Self Signed OpenSSL certificate.
  openssl_certificate:
    path: "{{ certificate_dir }}/{{ server_hostname }}/fullchain.pem"
    privatekey_path: "{{ certificate_dir }}/{{ server_hostname }}/privkey.pem"
    csr_path: "{{ certificate_dir }}/{{ server_hostname }}.csr"
    provider: selfsigned
```

We added two variables which we'll now define in the `vars/main.yml` file (using variables makes it easier to change the site and/or refactor to allow multiple values in the future). Add these variables to the vars file:

{lang="yaml",starting-line-number=19}
```
# Self-signed certificate settings.
certificate_dir: /etc/ssl/private
server_hostname: https.test
```

Now that the playbook can generate a certificate (or on future runs, idempotently verify the certificate's existence), we need to configure Nginx to use the cert to deliver traffic using TLS for a particular URL.

The `geerlingguy.nginx` role took care of the majority of Nginx configuration, but we disabled that role's management of virtual hosts, in favor of managing a single virtual host (or `server` directive) ourselves. The following tasks copy an example landing page into a defined docroot, then copy our custom HTTPS `server` configuration into place, which uses the generated cert to serve traffic for the docroot:

{lang="yaml",starting-line-number=22}
```
    - name: Ensure docroot exists.
      file:
        path: "{{ nginx_docroot }}"
        state: directory

    - name: Copy example index.html file in place.
      copy:
        src: files/index.html
        dest: "{{ nginx_docroot }}/index.html"
        mode: 0755

    - name: Copy Nginx server configuration in place.
      template:
        src: templates/https.test.conf.j2
        dest: /etc/nginx/sites-enabled/https.test.conf
        mode: 0644
      notify: restart nginx
```

Fairly straightforward, but we need to fill in a couple blanks. First, here's a quick and easy `index.html` just to allow you to test things out:

{lang="html"}
```
<!DOCTYPE html>
<html>
  <head>
    <title>HTTPS Self-Signed Certificate Test</title>
  </head>
  <body>
    <h1>HTTPS Self-Signed Certificate Test</h1>
    <p>If you can see this message, it worked!</p>
  </body>
</html>
```

Put that HTML into your playbook directory at `files/index.html`, then create another file, `templates/https.test.conf.j2`, with the following contents:

{lang="text"}
```
# HTTPS Test server configuration.

# Redirect HTTP traffic to HTTPS.
server {
    listen 80 default_server;
    server_name _;
    index index.html;
    return 301 https://$host$request_uri;
}

# Serve HTTPS traffic using the self-signed certificate created by Ansible.
server {
    listen 443 ssl default_server;
    server_name {{ server_hostname }};
    root {{ nginx_docroot }};

    ssl_certificate {{ certificate_dir }}/{{ server_hostname }}/fullchain.pem;
    ssl_certificate_key {{ certificate_dir }}/{{ server_hostname }}/privkey.pem;
}
```

The most important parts of this server configuration instruct Nginx to use the SSL certificate we generated (at the path `{{ certificate_dir }}/{{ server_hostname }}/fullchain.pem`) for requests over port 443 for the domain `{{ server_hostname }}` (in this case, requests to `https://https.test/`).

I> Production-ready TLS configuration will usually have more options defined than the above `server` directive. It's best practice to always configure TLS as securely as possible (later examples meant for production use will do so), but this example does the bare minimum to get SSL working with Nginx defaults.

Notice the `notify: restart nginx` in the `Copy Nginx server configuration in place.` task; this will force Nginx to restart after any configuration changes are made (or during the first provision, when the template is copied).

Once you run this playbook, if there were no errors, you should be able to securely access `https://https.test/` (assuming you have a record for that domain in your hosts file pointing to your server's IP address!). You might receive a security warning since it's self-signed, but all modern browsers and HTTPS-enabled tools should now be able to load the site over an encrypted connection!

{width=80%}
![HTTPS Test site loads with a security warning](images/14-https-test-chrome.png)

W> If you rebuild the server for `https.test` more than once (thus creating a new self-signed certificate), be sure to delete the certificate you previously added to your list of trusted certificates (e.g. via Keychain Access on macOS for Chrome and Safari, or in Firefox under Preferences > Advanced > Certificates).

## Automating Let's Encrypt with Ansible for free Certs

Self-signed certs are helpful in making sure certain environments can be accessed via HTTPS, but they have a number of downsides, the major one being that every visitor has to confirm a security exception the first time they visit; similarly, command line tools like `curl` and HTTP libraries usually fail when they encounter a self-signed cert, unless you specifically ignore cert trust settings (which is a security risk).

It's usually best to use a valid certificate from one of the trusted Certificate Authorities (CAs).

Traditionally, you had to give some money to a Certificate Authority (CA) and work through a mostly-manual process to acquire a certificate.
You can still do this, and there are use cases where this is still the best option, but [Let's Encrypt](https://letsencrypt.org/) took the world of HTTPS certificates by storm by offering _free_, easy-to-automate certificates to everyone, with the goal of creating "a more secure and privacy-respecting Web."

In this example, we'll acquire a certificate from Let's Encrypt and set up auto-renewal (since Let's Encrypt certs are only valid for 90 days) on an Ubuntu server. There's a fully tested version of this example included in this book's GitHub repository: [HTTPS Let's Encrypt Demo](https://github.com/geerlingguy/ansible-for-devops/tree/master/https-letsencrypt).

### Use Galaxy roles to get things done faster

Instead of writing all the automation ourselves, we can rely on some roles from Ansible Galaxy to do the heavy lifting. Create a `requirements.yml` file in a new project directory, containing:

{lang="yaml"}
```
---
roles:
  - name: geerlingguy.firewall
  - name: geerlingguy.certbot
  - name: geerlingguy.nginx
```

We'll use the `geerlingguy.firewall` role to secure unused ports on the server, `geerlingguy.certbot` to acquire and set up autorenewal of Let's Encrypt certs, and `nginx` to configure a web server to serve content over HTTPS.

The `geerlingguy.certbot` role is the heart of the operation; here's how it works:

1. First, it either installs Certbot from the system packages or from source, depending on the value of `certbot_install_from_source`. Source installs are often preferable, since Certbot sometimes adds helpful features that will never be backported into system packages.
2. Then it creates a certificate if configured via `certbot_create_if_missing` and if the certificate(s) specified in `certbot_certs` do not yet exist. It creates the certificates using the `certbot_create_command`, and can also stop and start certain services while the certificates are being created.
3. Finally, if `certbot_auto_renew` is `true`, it sets up a cron job for certificate renewal, using the `certbot renew` command along with the options passed in via `certbot_auto_renew_options`. Auto renewal is one of the main benefits of Let's Encrypt, because as long as your renewal process is working, you'll never wake up to an outage due to an expired certificate again!

Once we have the requirements file set up, create an `ansible.cfg` file in the project directory to tell Ansible where to store and use the downloaded roles:

{lang="text"}
```
[defaults]
roles_path = ./roles
```

Install the required roles with: `ansible-galaxy install -r requirements.yml`.

### Create the playbook

Add a `main.yml` playbook to the project directory. This playbook will target servers running Ubuntu's minimal distribution, so we'll use `pre_tasks` to make sure the server is ready for the roles (e.g. with an up-to-date Apt cache). Then we'll run the three roles we downloaded from Ansible Galaxy, and configure Nginx to serve a simple web page using a Let's Encrypt certificate.

First things first, start the play on all the `letsencrypt` hosts:

{lang="yaml"}
```
---
- hosts: letsencrypt
  become: true

  vars_files:
    - vars/main.yml
```

We need to perform most tasks using sudo (since we have to modify the system, configure Nginx, etc.), so `become: true` is necessary. And to keep the configuration for certificate generation, firewall configuration, and Nginx in one place, we'll put all the variables in a `vars/main.yml` file.

In `pre_tasks` in `main.yml`, update Apt's caches since we want the freshest package data available when installing software:

{lang="yaml",starting-line-number=8}
```
  pre_tasks:
    - name: Ensure apt cache is updated.
      apt: update_cache=true cache_valid_time=600
```

Now, it's time for the meat of this playbook, the `roles`. Call each one:

{lang="yaml",starting-line-number=12}
```
  roles:
    - geerlingguy.firewall
    - geerlingguy.nginx
    - geerlingguy.certbot
```

Since the roles will be doing the heavy lifting (yay for easy-to-read playbooks!), we tell them what to do via variables in `vars/main.yml`:

{lang="yaml"}
```
---
# Firewall settings.
firewall_allowed_tcp_ports:
  - "22"
  - "80"
  - "443"

# Nginx settings.
nginx_vhosts: []
nginx_remove_default_vhost: true
nginx_ppa_version: stable
nginx_docroot: /var/www/html

# Let's Encrypt certificate settings.
certbot_create_if_missing: true
certbot_admin_email: "{{ letsencrypt_email }}"
certbot_certs:
  - domains:
      - "{{ inventory_hostname }}"
```

By section:

- For a typical webserver, we need port `22` for SSH access, port `80` for unencrypted HTTP access (Let's Encrypt needs this to operate using its default verification mechanism), and port `443` for encrypted HTTPS access.
- For Nginx, we will configure our own custom virtual host in a bit, so we make sure the default vhost is removed, and we'll also install the latest version of Nginx from the Nginx Ubuntu PPA. We added an extra variable `nginx_docroot` to tell our own automation code where to put a test web page and serve it via Nginx.
- The Certbot role only requires a few variables to ensure a certificate is added:
  - `certbot_create_if_missing`: The role will check if the certificate exists, and if it doesn't (e.g. on the first playbook run) it will create it. If it does exist, the role will be idempotent and make no changes.
  - `certbot_admin_email`: Let's Encrypt lets you [associate an email address](https://letsencrypt.org/docs/expiration-emails/) with every certificate it generates, and uses this email address to notify the owner of any problems with the certificate, like impending expiration due to a server issue.
  - `certbot_certs`: You can add one or more certificates using this list; each certificate can cover one or more domains using Subject Alternative Name (SAN) certificates.

Two of the Jinja variables used in the vars file (`letsencrypt_email` and `inventory_hostname`) are not defined in vars---rather, they will come from the inventory. We'll set that up soon.
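
As a quick aside on that last point: a single entry in `certbot_certs` can list several domains, and Certbot will request one SAN certificate covering all of them. A minimal hedged sketch (the domain names below are hypothetical placeholders, and DNS for each name must already point at the server):

{lang="yaml",linenos=off}
```
certbot_certs:
  - domains:
      # One certificate covering both the apex domain and the www subdomain.
      - example.com
      - www.example.com
```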

Now that we have the playbook configuring an HTTP/S-ready firewall, a Let's Encrypt certificate generated by Certbot, and a barebones Nginx web server, we need to configure Nginx to serve some content, and to serve it over HTTPS using the Let's Encrypt certificate.

So for the last part of the playbook, we need to:

1. Ensure the `nginx_docroot` directory exists.
2. Create and copy over a sample `index.html` file to serve from that document root.
3. Create and copy over an Nginx server configuration which directs all traffic to HTTPS and serves the traffic using the generated Let's Encrypt certificate.

To make sure the `nginx_docroot` exists, add a task to the `tasks` section of the playbook:

{lang="yaml",starting-line-number=17}
```
  tasks:
    - name: Ensure docroot exists.
      file:
        path: "{{ nginx_docroot }}"
        state: directory
```

I> Since `/var/www` should already exist on an Ubuntu server, this is all we need. If the parent directory hierarchy didn't exist (e.g. if we had `nginx_docroot` set to `/var/www/example/html`), this task may also need `recurse: true` to ensure the parent directories exist.

Now we need an HTML file inside the docroot so Nginx can serve it, otherwise Nginx will return a 404 Not Found. Create a simple HTML file named `files/index.html` in your project directory, with the following contents:

{lang="html"}
```
<!DOCTYPE html>
<html>
  <head>
    <title>HTTPS Let's Encrypt Test</title>
  </head>
  <body>
    <h1>HTTPS Let's Encrypt Test</h1>
    <p>If you can see this message, it worked!</p>
  </body>
</html>
```

Then add a `copy` task in `main.yml` to copy the file into place after the docroot task:

{lang="yaml",starting-line-number=23}
```
    - name: Copy example index.html file in place.
      copy:
        src: files/index.html
        dest: "{{ nginx_docroot }}/index.html"
        mode: 0755
```

Finally, we need to configure Nginx with two `server` blocks: one to redirect HTTP requests to HTTPS, and the other to serve HTTPS traffic using the Let's Encrypt certificates. Create an Nginx configuration template in `templates/https-letsencrypt.conf.j2` with the following:

{lang="text"}
```
# HTTPS server configuration.

# Redirect HTTP traffic to HTTPS.
server {
    listen 80 default_server;
    server_name _;
    index index.html;
    return 301 https://$host$request_uri;
}

# Serve HTTPS traffic using the Let's Encrypt certificate.
server {
    listen 443 ssl default_server;
    server_name {{ inventory_hostname }};
    root {{ nginx_docroot }};

    ssl_certificate /etc/letsencrypt/live/{{ inventory_hostname }}/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/{{ inventory_hostname }}/privkey.pem;
}
```

The first `server` block configures a default port 80 server which redirects _all_ traffic on port 80, for any incoming request, to the same URL, but with `https://`. This is a handy way to force all traffic to SSL by default if you're using Nginx.

The second `server` block configures a default port 443 server which handles all HTTPS traffic.

It uses the `inventory_hostname` to tell Nginx what domain should be used to serve traffic, and it sets the document root to the `nginx_docroot`.

Finally, it tells Nginx to use the certificate and key inside the default Let's Encrypt generated certificate path, which is always `/etc/letsencrypt/live/[domain]/*.pem`.

Add a task templating this Jinja template to an Nginx config file in `main.yml`, making sure to restart Nginx when the template is created or modified:

{lang="yaml",starting-line-number=29}
```
    - name: Copy Nginx server configuration in place.
      template:
        src: templates/https-letsencrypt.conf.j2
        dest: /etc/nginx/sites-enabled/https-letsencrypt.conf
        mode: 0644
      notify: restart nginx
```

At this point, we have a complete playbook. It should set up a firewall, create a certificate, and configure Nginx to serve a web page using the certificate. But we don't have a server to run the playbook against!

### Create a server and configure DNS

Let's Encrypt generates certificates for domains only after verifying domain ownership. The Internet would be very insecure if Let's Encrypt allowed any random person to generate valid certificates for a domain like `apple.com` or `google.com`!

The easiest way to verify domain ownership to Let's Encrypt is to ensure your server is accessible over the public Internet. For internal servers, Let's Encrypt might not be the best option (though in some cases it can be made to work).

In Chapter 9, an example was provided for how to provision servers automatically via Ansible. For your own project, you may want to automate the process of initial server provisioning using Ansible, Terraform, or some other automation tool.
But for this example, you just need to make sure a server is running which is reachable via the public Internet. You also need to point a domain at it (e.g. `subdomain.example.com`).

Once you've done that, and can confirm you can SSH into the server at the actual domain name (e.g. `ssh myuser@subdomain.example.com`), then you're ready to point the playbook at the server and configure everything via Ansible.

### Point the playbook inventory at the server

Assuming your server is reachable at `subdomain.example.com`, and your SSH username is `myuser`, create an `inventory` file in the project directory with the following contents:

{lang="text"}
```
[letsencrypt]
subdomain.example.com

[letsencrypt:vars]
ansible_user=myuser
letsencrypt_email=webmaster@example.com
```

Now that the playbook knows how to connect to your public server, it's time to try it out. First, make sure all the required Galaxy roles are installed:

{lang="text",linenos="off"}
```
ansible-galaxy install -r requirements.yml
```

Then run the playbook:

{lang="text",linenos="off"}
```
ansible-playbook -i inventory main.yml
```

After a couple minutes, assuming Let's Encrypt could reach your server at `subdomain.example.com` on port 80, you should be able to access the `index.html` webpage created earlier over HTTPS.

### Access your server over HTTPS!

Visit `https://subdomain.example.com/` and you should see something like:

{width=80%}
![HTTPS Test site loads with a valid Let's Encrypt certificate](images/14-letsencrypt-valid-certificate.png)

Automating free certificates with Let's Encrypt can be fun, but make sure you're aware of [Let's Encrypt rate limits](https://letsencrypt.org/docs/rate-limits/) before you go ahead and automate certs for 3,000 of your subdomains at once!

## Configuring Nginx to proxy HTTP traffic and serve it over HTTPS

One common problem you may encounter is an old web application or service which is no longer updated but must continue running, and which now needs HTTPS encryption (whether for SEO or security compliance).

You could use a third party service like Cloudflare to proxy all traffic through HTTPS, but you'd still have an unencrypted connection over the public Internet from Cloudflare's network to your backend server. Even if you're using a CDN, it's best to encrypt the traffic as close to your application server as possible.

And that's where Nginx comes in! There are other tools which can do the same thing, but Nginx is easy to configure as an HTTPS proxy server for HTTP backends.

### Modify the Nginx configuration to proxy traffic

We're going to use the exact same playbook and configuration from the self-signed certificate example earlier in this chapter, with two small modifications. The adjusted playbook is available in this book's GitHub repository: [HTTPS Nginx Proxy Demo VM](https://github.com/geerlingguy/ansible-for-devops/tree/master/https-nginx-proxy). There are only two small changes needed to set up and test Nginx proxying HTTPS traffic to an HTTP backend application:

1. Instead of serving traffic directly from Nginx, let Nginx proxy requests to a backend server running on port 8080.
2. Run a backend HTTP server on port 8080 using Python.

First, configure the port 443 `server` block to proxy traffic to another service running locally on port `8080`:

{lang="text",starting-line-number=11}
```
# Proxy HTTPS traffic using a self-signed certificate.
server {
    listen 443 ssl default_server;
    server_name {{ server_hostname }};

    location / {
        include /etc/nginx/proxy_params;
        proxy_pass http://localhost:8080;
        proxy_read_timeout 90s;
        proxy_redirect http://localhost:8080 {{ server_hostname }};
    }

    ssl_certificate {{ certificate_dir }}/{{ server_hostname }}/fullchain.pem;
    ssl_certificate_key {{ certificate_dir }}/{{ server_hostname }}/privkey.pem;
}
```

All that's been done is the removal of the `root` directive, which was replaced with the `location` directive. This particular `location` directive tells Nginx to proxy _all_ requests to any path (`/` includes everything) to the address `http://localhost:8080`, with a 90 second backend timeout.

This assumes there's a backend HTTP service running on port 8080, though! So, the second step is to run something on port 8080. Luckily, since we already have Python and a web root, we can use Python to run an HTTP server with a very simple CLI command:

{lang="text",linenos="off"}
```
python3 -m http.server 8080 --directory /var/www/html
```

You can run that command interactively, or if you're automating it in the Ansible playbook from earlier, you can add a task after the "Copy example index.html file in place." task:

{lang="yaml",starting-line-number=33}
```
    - name: Start simple python webserver on port 8080.
      shell: >
        python3 -m http.server 8080 --directory {{ nginx_docroot }} &
      changed_when: false
      async: 45
      poll: 0
```

Note the use of `&`, `async`, and `poll` to 'fire and forget' the command, so it can run in the background forever. It is _not_ a good idea to run applications like this in production, but for demonstration purposes it's adequate to verify Nginx is proxying HTTPS requests correctly.

W> The `--directory` option in this command requires Python 3.7 or later. Make sure your operating system has this version of Python available (e.g. via system packages, like with Debian 10 or later, or via a virtualenv).

Now that a server is running on port 8080, you should see Nginx proxying requests successfully:

{width=80%}
![Nginx proxies HTTPS requests to backend HTTP applications](images/14-https-nginx-proxy-test.png)

If you log into the server and kill the Python process serving HTTP traffic on port 8080, then Nginx will still attempt to proxy the traffic, but will return a 502 Bad Gateway because the backend service is unavailable:

{width=80%}
![Nginx returns 502 Bad Gateway if the backend is unavailable](images/14-https-nginx-proxy-502-bad-gateway.png)

Once you learn to automate HTTPS certificates with Ansible and proxy backend services with Nginx (or another suitable HTTPS-aware proxy), it becomes possible to adopt HTTPS everywhere, no matter what kind of web applications you run.

## Summary

HTTPS is now an essential feature of any public-facing website and application, and it's fairly standard to use it on internal services too.
Ansible automates the process of encrypting all your HTTP traffic with TLS certificates, no matter the certificate type or use case.

{lang="text",linenos=off}
```
 ____________________________________________
/ Fool me once, shame on you. Fool me twice, \
\ prepare to die. (Klingon Proverb)          /
 --------------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
```

--------------------------------------------------------------------------------
/chapter16.txt:
--------------------------------------------------------------------------------

# Chapter 16 - Kubernetes and Ansible

Most real-world applications require a lot more than a couple Docker containers running on a host. You may need five, ten, or dozens of containers running. And when you need to scale, you need them distributed across multiple hosts. And then when you have multiple containers on multiple hosts, you need to aggregate logs, monitor resource usage, etc.

Because of this, many different container scheduling platforms have been developed which aid in deploying containers and their supporting services: Kubernetes, Mesos, Docker Swarm, Rancher, OpenShift, etc. Because of its increasing popularity and support across all major cloud providers, this book will focus on usage of Kubernetes as a container scheduler.

### A bit of Kubernetes history

{width=30%}
![Kubernetes logo](images/16-kubernetes-logo.png)

In 2013, some Google engineers began working to create an open source representation of Borg, the internal tool Google used to run millions of containers in the Google data centers. The first version of Kubernetes was known as Seven of Nine (another Star Trek reference), but was finally renamed Kubernetes (a mangled translation of the Greek word for 'helmsman') to avoid potential legal issues.

To keep a little of the original geek culture Trek reference, it was decided the logo would have seven sides, as a nod to the working name 'Seven of Nine'.

In a few short years, Kubernetes went from being one of many up-and-coming container scheduler engines to becoming almost a _de facto_ standard for large scale container deployment. In 2015, at the same time as Kubernetes' 1.0 release, the Cloud Native Computing Foundation (CNCF) was founded, to promote containers and cloud-based infrastructure.

Kubernetes is one of many projects endorsed by the CNCF for 'cloud-native' applications, and has been endorsed by VMware, Google, Twitter, IBM, Microsoft, Amazon, and many other major tech companies.

By 2018, Kubernetes was available as a service offering from all the major cloud providers, and most other competing software has either begun to rebuild on top of Kubernetes, or become more of a niche player in the container scheduling space.

Kubernetes is often abbreviated 'K8s' (K, then eight letters, then s), and the two terms are interchangeable.

### Evaluating the need for Kubernetes

If Kubernetes seems to be taking the world of cloud computing by storm, should you start moving all your applications into Kubernetes clusters? Not necessarily.

Kubernetes is a complex application, and even if you're using a managed Kubernetes offering, you need to learn new terminology and many new paradigms to get applications---especially non-'cloud native' applications---running smoothly.

If you already have automation around existing infrastructure projects, and it's running smoothly, I would not start moving things into Kubernetes unless the following criteria are met:

1. Your application doesn't require much locally-available stateful data (unlike, for example, most databases and many filesystem-heavy applications).
2. Your application has many parts which can be broken out and run on an ad-hoc basis, like cron jobs or other periodic tasks.

Kubernetes, like Ansible, is best introduced incrementally into an existing organization. You might start by putting temporary workloads (like report-generating jobs) into a Kubernetes cluster. Then you can work on moving larger and persistent applications into a cluster.

If you're working on a greenfield project, with enough resources to devote some time up front to learning the ins and outs of Kubernetes, it makes sense to at least give Kubernetes a try for running everything.

### Building a Kubernetes cluster with Ansible

There are a few different ways you can build a Kubernetes cluster:

- Using [`kubeadm`](https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/), a tool included with Kubernetes to set up a minimal but fully functional Kubernetes cluster in any environment.
- Using tools like [`kops`](https://github.com/kubernetes/kops) or [`kubespray`](https://github.com/kubernetes-incubator/kubespray) to build a production-ready Kubernetes cluster in almost any environment.
- Using tools like Terraform or CloudFormation---or even Ansible modules---to create a managed Kubernetes cluster using a cloud provider like AWS, Google Cloud, or Azure.

There are many excellent guides online for the latter options, so we'll stick to using `kubeadm` in this book's examples. And, lucky for us, there's an Ansible role (`geerlingguy.kubernetes`) which already wraps `kubeadm` in an easy-to-use manner so we can integrate it with our playbooks.

{width=80%}
![Kubernetes architecture for a simple cluster](images/16-kubernetes-simple-cluster-architecture.png)

As with other multi-server examples in this book, we can describe a three-server setup to Vagrant so we can build a full 'bare metal' Kubernetes cluster. Create a project directory and add the following in a `Vagrantfile`:

{lang="ruby"}
```
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "geerlingguy/debian9"
  config.ssh.insert_key = false
  config.vm.provider "virtualbox"

  config.vm.provider :virtualbox do |v|
    v.memory = 1024
    v.cpus = 1
    v.linked_clone = true
  end

  # Define three VMs with static private IP addresses.
  boxes = [
    { :name => "master", :ip => "192.168.56.2" },
    { :name => "node1", :ip => "192.168.56.3" },
    { :name => "node2", :ip => "192.168.56.4" },
  ]

  # Provision each of the VMs.
  boxes.each do |opts|
    config.vm.define opts[:name] do |config|
      config.vm.hostname = opts[:name] + ".k8s.test"
      config.vm.network "private_network", ip: opts[:ip]

      # Provision all the VMs using Ansible after the last VM is up.
86 | if opts[:name] == "node2" 87 | config.vm.provision "ansible" do |ansible| 88 | ansible.playbook = "main.yml" 89 | ansible.inventory_path = "inventory" 90 | ansible.limit = "all" 91 | end 92 | end 93 | end 94 | end 95 | 96 | end 97 | ``` 98 | 99 | The Vagrantfile creates three VMs: 100 | 101 | - `master`, which will be configured as the Kubernetes master server, running the scheduling engine. 102 | - `node1`, a Kubernetes node to be joined to the master. 103 | - `node2`, another Kubernetes node to be joined to the master. 104 | 105 | You could technically add as many more `nodeX` VMs as you want, but since most people don't have a terabyte of RAM, it's better to be conservative in a local setup! 106 | 107 | Once the `Vagrantfile` is ready, you should add an `inventory` file to tell Ansible about the VMs; note our `ansible` configuration in the Vagrantfile points to a playbook in the same directory, `main.yml` and an inventory file, `inventory`. In the inventory file, put the following contents: 108 | 109 | {lang="text"} 110 | ``` 111 | [k8s-master] 112 | master ansible_host=192.168.56.2 kubernetes_role=control_plane 113 | 114 | [k8s-nodes] 115 | node1 ansible_host=192.168.56.3 kubernetes_role=node 116 | node2 ansible_host=192.168.56.4 kubernetes_role=node 117 | 118 | [k8s:children] 119 | k8s-master 120 | k8s-nodes 121 | 122 | [k8s:vars] 123 | ansible_user=vagrant 124 | ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key 125 | ``` 126 | 127 | The inventory is broken up into three groups: `k8s-master` (the Kubernetes master), `k8s-nodes` (all the nodes that will join the master), and `k8s` (a group with all the servers, helpful for initializing the cluster or operating on all the servers at once). 128 | 129 | We'll refer to the servers using the `k8s` inventory group in our Kubernetes setup playbook. Let's set up the playbook now: 130 | 131 | {lang="text"} 132 | ``` 133 | --- 134 | - hosts: k8s 135 | become: yes 136 | 137 | vars_files: 138 | - vars/main.yml 139 | ``` 140 | 141 | We'll operate on all the `k8s` servers defined in the `inventory`, and we'll need to operate as the root user to set up Kubernetes and its dependencies, so we add `become: yes`. Also, to keep things organized, all the playbook variables will be placed in the included vars file `vars/main.yml` (you can create that file now). 142 | 143 | Next, because Vagrant's virtual network interfaces can confuse Kubernetes and Flannel (the Kubernetes networking plugin we're going to use for inter-node communication), we need to copy a custom Flannel manifest file into the VM. Instead of printing the whole file in this book (it's a _lot_ of YAML!), you can grab a copy of the file from the URL: https://github.com/geerlingguy/ansible-for-devops/blob/master/kubernetes/files/manifests/kube-system/kube-flannel-vagrant.yml 144 | 145 | Save the file in your project folder in the path: 146 | 147 | {lang="text",linenos=off} 148 | ``` 149 | files/manifests/kube-system/kube-flannel-vagrant.yml 150 | ``` 151 | 152 | Now add a task to copy the manifest file into place using `pre_tasks` (we need to do this before any Ansible roles are run): 153 | 154 | {lang="text",starting-line-number=8} 155 | ``` 156 | pre_tasks: 157 | - name: Copy Flannel manifest tailored for Vagrant. 
copy:
159 |         src: files/manifests/kube-system/kube-flannel-vagrant.yml
160 |         dest: "~/kube-flannel-vagrant.yml"
161 | ```
162 | 
163 | Next we need to prepare the server to be able to run `kubelet` (all Kubernetes nodes run this service, which schedules Kubernetes Pods on individual nodes). `kubelet` has a couple special requirements:
164 | 
165 | - Swap should be disabled on the server (there are a few valid reasons why you might keep swap enabled, but it's not recommended and requires more work to get `kubelet` running well).
166 | - Docker (or an equivalent container runtime) should be installed on the server.
167 | 
168 | Lucky for us, there are Ansible Galaxy roles which configure swap and install Docker, so let's add them in the playbook's `roles` section:
169 | 
170 | {lang="text",starting-line-number=14}
171 | ```
172 |   roles:
173 |     - role: geerlingguy.swap
174 |       tags: ['swap', 'kubernetes']
175 | 
176 |     - role: geerlingguy.docker
177 |       tags: ['docker']
178 | ```
179 | 
180 | We also need to add some configuration to ensure we have swap disabled and Docker installed correctly. Add the following variables in `vars/main.yml`:
181 | 
182 | {lang="text"}
183 | ```
184 | ---
185 | swap_file_state: absent
186 | swap_file_path: /dev/mapper/packer--debian--9--amd64--vg-swap_1
187 | 
188 | docker_packages:
189 |   - docker-ce=5:18.09.0~3-0~debian-stretch
190 | docker_install_compose: False
191 | ```
192 | 
193 | The `swap_file_path` is specific to the 64-bit Debian 9 Vagrant box used in the `Vagrantfile`, so if you use a different OS or install on a cloud server, the default system swap file may be at a different location.
194 | 
195 | It's a best practice to specify a Docker version that's been well-tested with a particular version of Kubernetes, and in this case, the latest version of Kubernetes at the time of this writing works well with this Docker version, so we lock in that package version using the `docker_packages` variable.
196 | 
197 | Back in the `main.yml` playbook, we'll put the last role necessary to get Kubernetes up and running on the cluster:
198 | 
199 | {lang="text",starting-line-number=21}
200 | ```
201 |     - role: geerlingguy.kubernetes
202 |       tags: ['kubernetes']
203 | ```
204 | 
205 | At this point, our playbook uses three Ansible Galaxy roles. To make installation and maintenance easier, add a `requirements.yml` file with the roles listed inside:
206 | 
207 | {lang="text"}
208 | ```
209 | ---
210 | roles:
211 |   - name: geerlingguy.swap
212 |   - name: geerlingguy.docker
213 |   - name: geerlingguy.kubernetes
214 | ```
215 | 
216 | Then run `ansible-galaxy role install -r requirements.yml -p ./roles` to install the roles in the project directory.
217 | 
218 | As a final step, before building the cluster with `vagrant up`, we need to set a few configuration options to ensure Kubernetes starts correctly and the inter-node network functions properly. Add the following variables to tell the Kubernetes role a little more about the cluster:
219 | 
220 | {lang="text",starting-line-number=8}
221 | ```
222 | kubernetes_version: '1.23'
223 | kubernetes_allow_pods_on_master: False
224 | kubernetes_pod_network_cidr: '10.244.0.0/16'
225 | kubernetes_packages:
226 |   - name: kubelet=1.23.5-00
227 |     state: present
228 |   - name: kubectl=1.23.5-00
229 |     state: present
230 |   - name: kubeadm=1.23.5-00
231 |     state: present
232 |   - name: kubernetes-cni
233 |     state: present
234 | 
235 | kubernetes_apiserver_advertise_address: "192.168.56.2"
236 | kubernetes_flannel_manifest_file: "~/kube-flannel-vagrant.yml"
237 | kubernetes_kubelet_extra_args: '--node-ip={{ ansible_host }}'
238 | ```
239 | 
240 | Let's go through the variables one-by-one:
241 | 
242 | - `kubernetes_version`: Kubernetes is a fast-moving target, and it's best practice to specify the version you're targeting---but to update as soon as possible to the latest version!
243 | - `kubernetes_allow_pods_on_master`: It's best to dedicate the Kubernetes master server to managing Kubernetes alone. You can run pods other than the Kubernetes system pods on the master if you want, but it's rarely a good idea.
244 | - `kubernetes_pod_network_cidr`: Because the default network suggested in the Kubernetes documentation conflicts with many home and private network IP ranges, this custom CIDR is a bit of a safer option.
245 | - `kubernetes_packages`: Along with specifying the `kubernetes_version`, if you want to make sure there are no surprises when installing Kubernetes, it's important to also lock in the versions of the packages that make up the Kubernetes cluster.
246 | - `kubernetes_apiserver_advertise_address`: To ensure Kubernetes knows the correct interface to use for inter-node API communication, we explicitly set the IP of the master node (this could also be the DNS name for the master, if desired).
247 | - `kubernetes_flannel_manifest_file`: Because Vagrant's virtual network interfaces confuse the default Flannel configuration, we specify the custom Flannel manifest we copied earlier in the playbook's `pre_tasks`.
248 | - `kubernetes_kubelet_extra_args`: Because Vagrant's virtual network interfaces can also confuse Kubernetes, it's best to explicitly define the `node-ip` to be advertised by `kubelet` (here, each host's `ansible_host` IP address from the inventory).
249 | 
250 | Whew! We finally have the full project ready to go. It's time to build the cluster! Assuming all the files are in order, you can run `vagrant up`, and after a few minutes, you should have a three-node Kubernetes cluster running locally.
251 | 
252 | To verify the cluster is operating normally, log into the `master` server and check the node status with `kubectl`:
253 | 
254 | {lang="text",linenos="off"}
255 | ```
256 | # Log into the master VM.
257 | $ vagrant ssh master
258 | 
259 | # Switch to the root user.
260 | vagrant@master:~$ sudo su
261 | 
262 | # Check node status.
263 | root@master# kubectl get nodes
264 | NAME     STATUS   ROLES                  AGE   VERSION
265 | master   Ready    control-plane,master   13m   v1.23.5
266 | node1    Ready    <none>                 12m   v1.23.5
267 | node2    Ready    <none>                 12m   v1.23.5
268 | ```
269 | 
270 | If any of the nodes aren't reporting `Ready`, then something may be misconfigured. You can check the system logs to see if `kubelet` is having trouble, or read through the Kubernetes documentation to [Troubleshoot Clusters](https://kubernetes.io/docs/tasks/debug-application-cluster/debug-cluster/).
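
For example, if a node is stuck in a `NotReady` state, checking `kubelet`'s status and logs on that node will often point at the problem. This is just a quick sketch; the commands assume a systemd-based distribution like the Debian box used here:

{lang="text",linenos=off}
```
# On the affected node (e.g. after `vagrant ssh node1`):
$ sudo systemctl status kubelet
$ sudo journalctl -u kubelet --since "10 minutes ago"
```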
271 | 
272 | You can also check to ensure all the system pods (which run services like DNS, etcd, Flannel, and the Kubernetes API) are running correctly with the command:
273 | 
274 | {lang="text",linenos=off}
275 | ```
276 | root@master# kubectl get pods -n kube-system
277 | ```
278 | 
279 | This should print a list of all the core Kubernetes service pods (some of which are displayed multiple times---one for each node in the cluster), and the status should be `Running` after all the pods start correctly.
280 | 
281 | I> The Kubernetes cluster example above can be found in the [Ansible for DevOps GitHub repository](https://github.com/geerlingguy/ansible-for-devops/tree/master/kubernetes).
282 | 
283 | ### Managing Kubernetes with Ansible
284 | 
285 | Once you have a Kubernetes cluster---whether bare metal or managed by a cloud provider---you need to deploy applications inside it. Ansible has a few modules which make this easy to automate.
286 | 
287 | #### Ansible's `k8s` module
288 | 
289 | The `k8s` module (also aliased as `k8s_raw` and `kubernetes`) requires the OpenShift Python client to communicate with the Kubernetes API. So before using the `k8s` module, you need to install the client. Since it's installed with `pip`, we need to install Pip as well.
290 | 
291 | Create a new `k8s-module.yml` playbook in an `examples` directory in the same project we used to set up the Kubernetes cluster, and put the following inside:
292 | 
293 | {lang="text"}
294 | ```
295 | ---
296 | - hosts: k8s-master
297 |   become: yes
298 | 
299 |   pre_tasks:
300 |     - name: Ensure Pip is installed.
301 |       package:
302 |         name: python-pip
303 |         state: present
304 | 
305 |     - name: Ensure OpenShift client is installed.
306 |       pip:
307 |         name: openshift
308 |         state: present
309 | ```
310 | 
311 | We'll soon add a task to create a Kubernetes deployment that runs three Nginx replicas based on the official Nginx Docker image. Before adding the task, we need to create a Kubernetes manifest, or definition file. Create a file in the path `examples/files/nginx.yml`, and put in the following contents:
312 | 
313 | {lang="text"}
314 | ```
315 | ---
316 | apiVersion: apps/v1
317 | kind: Deployment
318 | metadata:
319 |   name: a4d-nginx
320 |   namespace: default
321 |   labels:
322 |     app: nginx
323 | spec:
324 |   replicas: 3
325 |   selector:
326 |     matchLabels:
327 |       app: nginx
328 |   template:
329 |     metadata:
330 |       labels:
331 |         app: nginx
332 |     spec:
333 |       containers:
334 |       - name: nginx
335 |         image: nginx:1.7.9
336 |         ports:
337 |         - containerPort: 80
338 | ```
339 | 
340 | We won't get into the details of how Kubernetes manifests work, or why this one is structured the way it is. If you want more details about this example, please read through the Kubernetes documentation, specifically [Creating a Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment).
341 | 
342 | Going back to the `k8s-module.yml` playbook, add a `tasks` section which uses the `k8s` module to apply the `nginx.yml` manifest to the Kubernetes cluster:
343 | 
344 | {lang="text",starting-line-number=16}
345 | ```
346 |   tasks:
347 |     - name: Apply definition file from Ansible controller file system.
348 |       k8s:
349 |         state: present
350 |         definition: "{{ lookup('file', 'files/nginx.yml') | from_yaml }}"
351 | ```
352 | 
353 | We now have a complete playbook! Run it with the command:
354 | 
355 | {lang="text",linenos="off"}
356 | ```
357 | ansible-playbook -i ../inventory k8s-module.yml
358 | ```
359 | 
360 | If you log back into the master VM (`vagrant ssh master`), change to the root user (`sudo su`), and list all the deployments (`kubectl get deployments`), you should see the new deployment that was just applied:
361 | 
362 | {lang="text",linenos="off"}
363 | ```
364 | root@master:/home/vagrant# kubectl get deployments
365 | NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
366 | a4d-nginx   3         3         3            3           3m
367 | ```
368 | 
369 | People can't access the deployment from the outside, though. For that, we need to expose Nginx to the world. And to do that, we could add more to the `nginx.yml` manifest file, _or_ we can apply a definition directly with the `k8s` module. Add another task:
370 | 
371 | {lang="text",starting-line-number=22}
372 | ```
373 |     - name: Expose the Nginx service with an inline Service definition.
374 |       k8s:
375 |         state: present
376 |         definition:
377 |           apiVersion: v1
378 |           kind: Service
379 |           metadata:
380 |             labels:
381 |               app: nginx
382 |             name: a4d-nginx
383 |             namespace: default
384 |           spec:
385 |             type: NodePort
386 |             ports:
387 |             - port: 80
388 |               protocol: TCP
389 |               targetPort: 80
390 |             selector:
391 |               app: nginx
392 | ```
393 | 
394 | This Service definition is written inline with the Ansible playbook. I generally prefer to keep the Kubernetes manifest definitions in separate files, just to keep my playbooks more concise, but either way works great!
395 | 
396 | If you run the playbook again, then log back into the master to use `kubectl` like earlier, you should be able to see the new `Service` using `kubectl get services`:
397 | 
398 | {lang="text",linenos="off"}
399 | ```
400 | root@master:/home/vagrant# kubectl get services
401 | NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
402 | a4d-nginx    NodePort    10.101.211.71   <none>        80:30681/TCP   3m
403 | kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5d
404 | ```
405 | 
406 | The Service exposes a `NodePort` on each of the Kubernetes nodes---in this case, port `30681`, so you can send a request to any node IP or DNS name and the request will be routed by Kubernetes to an Nginx service Pod, no matter what node it's running on.
407 | 
408 | So in the example above, I visited `http://192.168.56.3:30681/`, and got the default Nginx welcome message:
409 | 
410 | {width=80%}
411 | ![Welcome to nginx message in browser](images/16-kubernetes-nginx-welcome.png)
412 | 
413 | For a final example, it might be convenient for the playbook to output a debug message with the NodePort the Service is using. In addition to applying or deleting Kubernetes manifests, Ansible can also retrieve cluster and resource information (via the `k8s_info` module) that can be used elsewhere in your playbooks.
414 | 
415 | Add two final tasks to retrieve the NodePort for the `a4d-nginx` service using `k8s_info`, then display it using `debug`:
416 | 
417 | {lang="text",starting-line-number=42}
418 | ```
419 |     - name: Get the details of the a4d-nginx Service.
420 |       k8s_info:
421 |         api_version: v1
422 |         kind: Service
423 |         name: a4d-nginx
424 |         namespace: default
425 |       register: a4d_nginx_service
426 | 
427 |     - name: Print the NodePort of the a4d-nginx Service.
428 |       debug:
429 |         var: a4d_nginx_service.resources[0].spec.ports[0].nodePort
430 | ```
431 | 
432 | When you run the playbook, you should now see the NodePort in the debug output:
433 | 
434 | {lang="text",linenos="off"}
435 | ```
436 | TASK [Print the NodePort of the a4d-nginx Service.] ***************
437 | ok: [master] => {
438 |     "a4d_nginx_service.resources[0].spec.ports[0].nodePort": 30681
439 | }
440 | ```
441 | 
442 | For bonus points, you can build a separate cleanup playbook to delete the Service and Deployment objects using `state: absent`:
443 | 
444 | {lang="text"}
445 | ```
446 | ---
447 | - hosts: k8s-master
448 |   become: yes
449 | 
450 |   tasks:
451 |     - name: Remove resources in Nginx Deployment definition.
452 |       k8s:
453 |         state: absent
454 |         definition: "{{ lookup('file', 'files/nginx.yml') | from_yaml }}"
455 | 
456 |     - name: Remove the Nginx Service.
457 |       k8s:
458 |         state: absent
459 |         api_version: v1
460 |         kind: Service
461 |         namespace: default
462 |         name: a4d-nginx
463 | ```
464 | 
465 | You could build an entire ecosystem of applications using nothing but Ansible's `k8s` module and custom manifests. But there are many times when you might not have the time to tweak a bunch of Deployments, Services, etc. to get a complex application running, especially if it's an application with many components that you're not familiar with.
466 | 
467 | Luckily, the Kubernetes community has put together a number of 'charts' describing common Kubernetes applications, and you can install them using [Helm](https://www.helm.sh).
468 | 
469 | #### Managing Kubernetes Applications with Helm
470 | 
471 | Helm requires the `helm` binary to be installed on a control machine to manage deployments of apps in a Kubernetes cluster.
472 | 
473 | To automate Helm setup, we'll create a playbook that installs the `helm` binary.
474 | 
475 | Create a `helm.yml` playbook in the `examples` directory, and put in the following:
476 | 
477 | {lang="text"}
478 | ```
479 | ---
480 | - hosts: k8s-master
481 |   become: yes
482 | 
483 |   tasks:
484 |     - name: Retrieve helm binary archive.
485 |       unarchive:
486 |         src: https://get.helm.sh/helm-v3.2.1-linux-amd64.tar.gz
487 |         dest: /tmp
488 |         creates: /usr/local/bin/helm
489 |         remote_src: yes
490 | 
491 |     - name: Move helm binary into place.
492 |       command: cp /tmp/linux-amd64/helm /usr/local/bin/helm
493 |       args:
494 |         creates: /usr/local/bin/helm
495 | ```
496 | 
497 | This playbook downloads `helm` and places it in `/usr/local/bin`, so Ansible's `helm` module can use it when managing deployments with Helm.
498 | 
499 | Let's take it a little further, though, and automate the deployment of a chart from a public chart repository.
500 | 
501 | The easiest way to manage Helm deployments with Ansible is using the `helm` module that's part of the `community.kubernetes` collection on Ansible Galaxy. To install that collection, go back to the main `kubernetes` directory, and open the `requirements.yml` file that included the roles used in the main setup playbook.
502 | 
503 | Add the following at the top level after the `roles` section:
504 | 
505 | {lang="yaml",linenos=off}
506 | ```
507 | collections:
508 |   - name: community.kubernetes
509 | ```
510 | 
511 | Then make sure the collection is installed locally by running:
512 | 
513 | {lang="text",linenos=off}
514 | ```
515 | ansible-galaxy collection install -r requirements.yml
516 | ```
517 | 
518 | Now that we have the Kubernetes collection available, we can use the included Helm modules to do the following:
519 | 
520 | 1. Add the 'chart repository' for a Helm chart we want to manage using the `helm_repository` module.
521 | 2. Install the chart using the `helm` module.
522 | 523 | So, add the following tasks to the `helm.yml` playbook: 524 | 525 | {lang="text",starting-line-number=18} 526 | ``` 527 | - name: Add Bitnami's chart repository. 528 | community.kubernetes.helm_repository: 529 | name: bitnami 530 | repo_url: "https://charts.bitnami.com/bitnami" 531 | 532 | - name: Install phpMyAdmin with Helm. 533 | community.kubernetes.helm: 534 | name: phpmyadmin 535 | chart_ref: bitnami/phpmyadmin 536 | release_namespace: default 537 | values: 538 | service: 539 | type: NodePort 540 | ``` 541 | 542 | The first task adds the Bitnami chart repository, and the second task installs the `bitnami/phpmyadmin` chart from that repository. 543 | 544 | The second task also overrides the `service.type` option in the chart, because the default for most Helm charts is to use a service type of `ClusterIP` or `LoadBalancer`, and it's a little difficult to access services from the outside in a bare metal Kubernetes cluster this way. By forcing the use of `NodePort`, we can easily access phpMyAdmin from outside the cluster. 545 | 546 | W> Many charts (e.g. `stable/wordpress`, `stable/drupal`, `stable/jenkins`) will install but won't fully run on this Kubernetes cluster, because they require Persistent Volumes (PVs), which require some kind of shared filesystem (e.g. NFS, Ceph, Gluster, or something similar) among all the nodes. If you want to use charts which require PVs, check out the NFS configuration used in the [Raspberry Pi Dramble](https://github.com/geerlingguy/raspberry-pi-dramble) project, which allows applications to use Kubernetes PVs and PVCs. 547 | 548 | At this point, you could log into the master, change to the root user (`sudo su`), and run `kubectl get services` to see the `phpmyadmin` service's `NodePort`, but it's better to automate that step at the end of the `helm.yml` playbook: 549 | 550 | {lang="text",starting-line-number=72} 551 | ``` 552 | - name: Ensure K8s module dependencies are installed. 553 | pip: 554 | name: openshift 555 | state: present 556 | 557 | - name: Get the details of the phpmyadmin Service. 558 | community.kubernetes.k8s: 559 | api_version: v1 560 | kind: Service 561 | name: phpmyadmin 562 | namespace: default 563 | register: phpmyadmin_service 564 | 565 | - name: Print the NodePort of the phpmyadmin Service. 566 | debug: 567 | var: phpmyadmin_service.result.spec.ports[0].nodePort 568 | ``` 569 | 570 | Run the playbook, grab the debug value, and append the port to the IP address of any of the cluster members. Once the `phpmyadmin` deployment is running and healthy (this takes about 30 seconds), you can access phpMyAdmin at http://192.168.56.3:31872/ (substituting the `NodePort` from your own cluster): 571 | 572 | {width=80%} 573 | ![phpMyAdmin running in the browser on a NodePort](images/16-kubernetes-helm-phpmyadmin.png) 574 | 575 | #### Interacting with Pods using the `kubectl` connection plugin 576 | 577 | Ansible ships with a number of Connection Plugins. Last chapter, we used the `docker` connection plugin to interact with Docker containers natively, to avoid having to use SSH with a container or installing Ansible inside the container. 578 | 579 | This chapter, we'll use the `kubectl` connection plugin, which allows Ansible to natively interact with running Kubernetes pods. 
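
Under the hood, this is comparable to running a one-off command with `kubectl exec` yourself; for example (the pod name below is hypothetical, and yours will differ):

{lang="text",linenos=off}
```
# Run a single command inside a running pod.
$ kubectl exec -n default phpmyadmin-5b7cdf6bbf-abcde -- date
```

The connection plugin simply lets Ansible tasks use that same transport, instead of SSH.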
580 | 
581 | I> One of the main tenets of 'immutable infrastructure' (which is truly realized when you start using Kubernetes correctly) is _not logging into individual containers and running commands_, so this example may seem contrary to the core purpose of Kubernetes. However, it is sometimes necessary to do so. In cases where your applications are not built in a way that works completely via external APIs and Pod-to-Pod communication, you might need to run a command directly inside a running Pod.
582 | 
583 | Before using the `kubectl` connection plugin, you should already have the `kubectl` binary installed and available in your `$PATH`. You should also have a running Kubernetes cluster; for this example, I'll assume you're still using the same cluster from the previous examples, with the `phpmyadmin` service running.
584 | 
585 | Create a new playbook in the `examples` directory, named `kubectl-connection.yml`. The first thing we'll do in the playbook is retrieve the `kubectl` config file from the master server so we can run commands delegated directly to a Pod of our choosing:
586 | 
587 | {lang="text"}
588 | ```
589 | ---
590 | # This playbook assumes you already have the kubectl binary installed
591 | # and available in the $PATH.
592 | - hosts: k8s-master
593 |   become: yes
594 | 
595 |   tasks:
596 |     - name: Retrieve kubectl config file from the master server.
597 |       fetch:
598 |         src: /root/.kube/config
599 |         dest: files/kubectl-config
600 |         flat: yes
601 | ```
602 | 
603 | After using `fetch` to grab the config file, we need to find the name of the `phpmyadmin` Pod. This is necessary so we can add the Pod directly to our inventory:
604 | 
605 | {lang="text",starting-line-number=14}
606 | ```
607 |     - name: Get the phpmyadmin Pod name.
608 |       command: >
609 |         kubectl --no-headers=true get pod -l app=phpmyadmin
610 |         -o custom-columns=:metadata.name
611 |       register: phpmyadmin_pod
612 | ```
613 | 
614 | I've used the `kubectl` command directly here, because there's no simple way using the `k8s` module and Kubernetes' API to directly get the name of a Pod for a given set of conditions---in this case, with the label `app=phpmyadmin`.
615 | 
616 | We can now add the pod by name (using `phpmyadmin_pod.stdout`) to the current play's inventory:
617 | 
618 | {lang="text",starting-line-number=20}
619 | ```
620 |     - name: Add the phpmyadmin Pod to the inventory.
621 |       add_host:
622 |         name: '{{ phpmyadmin_pod.stdout }}'
623 |         ansible_kubectl_namespace: default
624 |         ansible_kubectl_config: files/kubectl-config
625 |         ansible_connection: kubectl
626 | ```
627 | 
628 | The `ansible_connection: kubectl` setting is key here; it tells Ansible to use the `kubectl` connection plugin when connecting to this host.
629 | 
630 | There are a number of options you can pass to the `kubectl` connection plugin to tell it how to connect to your Kubernetes cluster and pod. In this case, the location of the downloaded `kubectl` config file is passed to `ansible_kubectl_config` so Ansible knows where the cluster configuration exists. It's also a good practice to always pass the `namespace` of an object, so we've set that as well.
631 | 
632 | Now that we have a new host (in this case, the phpmyadmin service's Pod) added to the inventory, let's run a task directly against it:
633 | 
634 | {lang="text",starting-line-number=28}
635 | ```
636 |     # Note: Python is required to use other modules.
637 |     - name: Run a command inside the container.
raw: date
639 |       register: date_output
640 |       delegate_to: '{{ phpmyadmin_pod.stdout }}'
641 | 
642 |     - debug: var=date_output.stdout
643 | ```
644 | 
645 | The `raw` task passes through the given command directly using `kubectl exec`, and returns the output. The `debug` task should then print the output of the `date` command, run inside the container.
646 | 
647 | You can do a lot more with the `kubectl` connection plugin, and you could even have a dynamic inventory which populates a whole set of Pods for you to work with. It's generally not ideal to directly interact with pods, but when it's necessary, it's nice to be able to automate it with Ansible!
648 | 
649 | W> The `raw` module was used to run the `date` command in this example because all other Ansible modules require Python to be present on the container running in the Pod. For many use cases, running a `raw` command should be adequate. But if you want to be able to use any other modules, you'll need to make sure Python is present in the container _before_ you try using the `kubectl` connection plugin with it.
650 | 
651 | ## Summary
652 | 
653 | There are many ways you can build a Kubernetes cluster, whether on a managed cloud platform or bare metal. There are also many ways to deploy and manage applications within a Kubernetes cluster.
654 | 
655 | Ansible's robust variable management, Jinja templating, and YAML support make it a strong contender for managing Kubernetes resources. At the time of this writing, Ansible has a stable `k8s` module, an experimental `helm` module, and a `kubectl` connection plugin, and the interaction between Ansible and Kubernetes is still being refined with every release.
656 | 
657 | {lang="text",linenos=off}
658 | ```
659 |  ______________________________________
660 | / Never try to teach a pig to sing. It \
661 | | wastes your time and annoys the pig. |
662 | \ (Proverb)                            /
663 |  --------------------------------------
664 |         \   ^__^
665 |          \  (oo)\_______
666 |             (__)\       )\/\
667 |                 ||----w |
668 |                 ||     ||
669 | ```
670 | 
--------------------------------------------------------------------------------
/chapter2.txt:
--------------------------------------------------------------------------------
1 | # Chapter 2 - Local Infrastructure Development: Ansible and Vagrant
2 | 
3 | ## Prototyping and testing with local virtual machines
4 | 
5 | Ansible works well with any server to which you can connect---remote *or* local. For speedier testing and development of Ansible playbooks, and for testing in general, it's a very good idea to work locally. Local development and testing of infrastructure is both safer and faster than doing it on remote/live machines---especially in production environments!
6 | 
7 | I> In the past decade, test-driven development (TDD), in one form or another, has become the norm for much of the software industry. Infrastructure development hasn't been as organized until recently, and best practices dictate that infrastructure (which is becoming more and more important to the software that runs on it) should be thoroughly tested as well.
8 | I>
9 | I> Changes to software are tested either manually or in some automated fashion; there are now systems that integrate both with Ansible and with other deployment and configuration management tools, to allow some amount of infrastructure testing as well. Even if it's just testing a configuration change locally before applying it to production, that approach is a thousand times better than what, in the software development world, would be called 'cowboy coding'---working directly in a production environment, not documenting or encapsulating changes in code, and not having a way to roll back to a previous version.
10 | 
11 | The past decade has seen the growth of many virtualization tools that allow for flexible and very powerful infrastructure emulation, all from your local workstation! It's empowering to be able to play around with a config file, or to tweak the order of a server update to perfection, over and over again, with no fear of breaking an important server. If you use a local virtual machine, there's no downtime for a server rebuild; just re-run the provisioning on a new VM, and you're back up and running in minutes---with no one the wiser.
12 | 
13 | [Vagrant](https://www.vagrantup.com), a server provisioning tool, and [VirtualBox](https://www.virtualbox.org/), a local virtualization environment, make a potent combination for testing infrastructure and individual server configurations locally. Both applications are free and open source, and work well on Mac, Linux, or Windows hosts.
14 | 
15 | We're going to set up Vagrant and VirtualBox for easy testing with Ansible to provision a new server.
16 | 
17 | ## Your first local server: Setting up Vagrant
18 | 
19 | To get started with your first local virtual server, you need to download and install Vagrant and VirtualBox, and set up a simple Vagrantfile, which will describe the virtual server.
20 | 
21 | 1. Download and install Vagrant and VirtualBox (whichever version is appropriate for your OS):
22 |    - [Download Vagrant](https://www.vagrantup.com/downloads.html)
23 |    - [Download VirtualBox](https://www.virtualbox.org/wiki/Downloads) (when installing, make sure the command line tools are installed, so Vagrant works with it)
24 | 2. Create a new folder somewhere on your hard drive where you will keep your Vagrantfile and provisioning instructions.
25 | 3. Open a Terminal or PowerShell window, then navigate to the folder you just created.
26 | 4. Add a Rocky Linux 8.x 64-bit 'box' using the [`vagrant box add`](https://www.vagrantup.com/docs/boxes.html) command:
27 |    `vagrant box add geerlingguy/rockylinux8`
28 |    (note: HashiCorp's [Vagrant Cloud](https://app.vagrantup.com/boxes/search) has a comprehensive list of different pre-made Linux boxes. Also, check out the 'official' Vagrant Ubuntu boxes in Vagrant's [Boxes documentation](https://www.vagrantup.com/docs/boxes.html).)
29 | 5. Create a default virtual server configuration using the box you just downloaded:
30 |    `vagrant init geerlingguy/rockylinux8`
31 | 6. Boot your Rocky Linux server:
32 |    `vagrant up`
33 | 
34 | Vagrant downloaded a pre-built 64-bit Rocky Linux 8 virtual machine image (you can [build your own](https://www.vagrantup.com/docs/providers/virtualbox/boxes.html) virtual machine 'boxes', if you so desire), loaded the image into VirtualBox with the configuration defined in the default Vagrantfile (which is now in the folder you created earlier), and booted the virtual machine.
35 | 
36 | Managing this virtual server is extremely easy: `vagrant halt` will shut down the VM, `vagrant up` will bring it back up, and `vagrant destroy` will completely delete the machine from VirtualBox. A simple `vagrant up` again will re-create it from the base box you originally downloaded.
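
To recap, the entire local VM lifecycle boils down to a few commands, run from the folder containing the Vagrantfile:

{lang="text",linenos=off}
```
$ vagrant up       # Create (or re-create) and boot the VM.
$ vagrant halt     # Shut down the VM, preserving its disk.
$ vagrant destroy  # Delete the VM from VirtualBox entirely.
$ vagrant up       # Rebuild the VM from the original base box.
```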
37 | 38 | Now that you have a running server, you can use it just like you would any other server, and you can connect via SSH. To connect, enter `vagrant ssh` from the folder where the Vagrantfile is located. If you want to connect manually, or connect from another application, enter `vagrant ssh-config` to get the required SSH details. 39 | 40 | ## Using Ansible with Vagrant 41 | 42 | Vagrant's ability to bring up preconfigured boxes is convenient on its own, but you could do similar things with the same efficiency using VirtualBox's (or VMWare's, or Parallels') GUI. Vagrant has some other tricks up its sleeve: 43 | 44 | - [**Network interface management**](https://www.vagrantup.com/docs/networking): You can forward ports to a VM, share the public network connection, or use private networking for inter-VM and host-only communication. 45 | - [**Shared folder management**](https://www.vagrantup.com/docs/synced-folders): Vagrant sets up shares between your host machine and VMs using NFS or (much slower) native folder sharing in VirtualBox. 46 | - [**Multi-machine management**](https://www.vagrantup.com/docs/multi-machine): Vagrant is able to configure and control multiple VMs within one Vagrantfile. This is important because, as stated in the documentation, "Historically, running complex environments was done by flattening them onto a single machine. The problem with that is that it is an inaccurate model of the production setup, which behaves far differently." 47 | - [**Provisioning**](https://www.vagrantup.com/docs/provisioning): When running `vagrant up` the first time, Vagrant automatically *provisions* the newly-minted VM using whatever provisioner you have configured in the Vagrantfile. You can also run `vagrant provision` after the VM has been created to explicitly run the provisioner again. 48 | 49 | It's this last feature that is most important for us. Ansible is one of many provisioners integrated with Vagrant (others include basic shell scripts, Chef, Docker, Puppet, and Salt). When you call `vagrant provision` (or `vagrant up` the first time), Vagrant passes off the VM to Ansible, and tells Ansible to run a defined Ansible playbook. We'll get into the details of Ansible playbooks later, but for now, we're going to edit our Vagrantfile to use Ansible to provision our virtual machine. 50 | 51 | Open the Vagrantfile that was created when we used the `vagrant init` command earlier. Add the following lines just before the final 'end' (Vagrantfiles use Ruby syntax, in case you're wondering): 52 | 53 | {lang="ruby"} 54 | ``` 55 | # Provisioning configuration for Ansible. 56 | config.vm.provision "ansible" do |ansible| 57 | ansible.playbook = "playbook.yml" 58 | end 59 | ``` 60 | 61 | This is a very basic configuration to get you started using Ansible with Vagrant. There are [many other Ansible options](https://www.vagrantup.com/docs/provisioning/ansible_intro) you can use once we get deeper into using Ansible. For now, we just want to set up a very basic playbook---a simple file you create to tell Ansible how to configure your VM. 62 | 63 | ## Your first Ansible playbook 64 | 65 | Let's create the Ansible `playbook.yml` file now. Create an empty text file in the same folder as your Vagrantfile, and put in the following contents: 66 | 67 | {lang="yaml"} 68 | ``` 69 | --- 70 | - hosts: all 71 | become: yes 72 | 73 | tasks: 74 | - name: Ensure chrony (for time synchronization) is installed. 75 | dnf: 76 | name: chrony 77 | state: present 78 | 79 | - name: Ensure chrony is running. 
80 | service: 81 | name: chronyd 82 | state: started 83 | enabled: yes 84 | ``` 85 | 86 | I'll get into what this playbook is doing in a minute. For now, let's run the playbook on our VM. Make sure you're in the same directory as the Vagrantfile and new playbook.yml file, and enter `vagrant provision`. You should see status messages for each of the 'tasks' you defined, and then a recap showing what Ansible did on your VM---something like the following: 87 | 88 | {lang="text",linenos=off} 89 | ``` 90 | PLAY RECAP ********************************************************** 91 | default : ok=3 changed=0 unreachable=0 failed=0 92 | ``` 93 | 94 | Ansible just took the simple playbook you defined, parsed the YAML syntax, and ran a bunch of commands via SSH to configure the server as you specified. Let's go through the playbook, step by step: 95 | 96 | {lang="yaml"} 97 | ``` 98 | --- 99 | ``` 100 | 101 | This first line is a marker showing that the rest of the document will be formatted in YAML (read [an introduction to YAML](https://yaml.org/spec/1.2.2/)). 102 | 103 | {lang="yaml",starting-line-number=2} 104 | ``` 105 | - hosts: all 106 | ``` 107 | 108 | This line tells Ansible to which hosts this playbook applies. `all` works here, since Vagrant is invisibly using its own Ansible inventory file (instead of using a manually-created `hosts.ini` file), which just defines the Vagrant VM. 109 | 110 | {lang="yaml",starting-line-number=3} 111 | ``` 112 | become: yes 113 | ``` 114 | 115 | Since we need privileged access to install chrony and modify system configuration, this line tells Ansible to use `sudo` for all the tasks in the playbook (you're telling Ansible to 'become' the root user with `sudo`, or an equivalent). 116 | 117 | {lang="yaml",starting-line-number=5} 118 | ``` 119 | tasks: 120 | ``` 121 | 122 | All the tasks after this line will be run on all hosts (or, in our case, our one VM). 123 | 124 | {lang="yaml",starting-line-number=6} 125 | ``` 126 | - name: Ensure chrony (for time synchronization) is installed. 127 | dnf: 128 | name: chrony 129 | state: present 130 | ``` 131 | 132 | This command is the equivalent of running `dnf install chrony`, but is much more intelligent; it will check if chrony is installed, and, if not, install it. This is the equivalent of the following shell script: 133 | 134 | {lang="text",linenos=off} 135 | ``` 136 | if ! rpm -qa | grep -qw chrony; then 137 | dnf install -y chrony 138 | fi 139 | ``` 140 | 141 | However, the above script is still not quite as robust as Ansible's `dnf` command. What if some other package with `chrony` in its name is installed, but not `chrony`? This script would require extra tweaking and complexity to match the simple Ansible dnf command, especially after we explore the dnf module more intimately (or the `apt` module for Debian-flavored Linux, or `package` for OS-agnostic package installation). 142 | 143 | {lang="yaml",starting-line-number=11} 144 | ``` 145 | - name: Ensure chrony is running. 146 | service: 147 | name: chronyd 148 | state: started 149 | enabled: yes 150 | ``` 151 | 152 | This final task both checks and ensures that the `chronyd` service is started and running, and sets it to start at system boot. A shell script with the same effect would be: 153 | 154 | {lang="text",linenos=off} 155 | ``` 156 | # Start chronyd if it's not already running. 157 | if ps aux | grep -q "[c]hronyd" 158 | then 159 | echo "chronyd is running." 
> /dev/null
160 | else
161 |   systemctl start chronyd.service > /dev/null
162 |   echo "Started chronyd."
163 | fi
164 | # Make sure chronyd is enabled on system startup.
165 | systemctl enable chronyd.service
166 | ```
167 | 
168 | You can see how things start getting complex in the land of shell scripts! And this shell script is still not as robust as what you get with Ansible. To maintain idempotency and handle error conditions, you'll have to do even more work with basic shell scripts than you do with Ansible.
169 | 
170 | We could be more terse (and demonstrate Ansible's powerful simplicity) by ignoring Ansible's self-documenting `name` parameter and using shorthand `key=value` syntax, resulting in the following playbook:
171 | 
172 | {lang="yaml"}
173 | ```
174 | ---
175 | - hosts: all
176 |   become: yes
177 |   tasks:
178 |     - dnf: name=chrony state=present
179 |     - service: name=chronyd state=started enabled=yes
180 | ```
181 | 
182 | I> Just as with code and configuration files, documentation in Ansible (e.g. using the `name` parameter and/or adding comments to the YAML for complicated tasks) is not absolutely necessary. However, I'm a firm believer in thorough (but concise) documentation, so I always document what my tasks will do by providing a `name` for each one. This also helps when you're running the playbooks, so you can see what's going on in a human-readable format.
183 | 
184 | ## Cleaning Up
185 | 
186 | Once you're finished experimenting with the Rocky Linux Vagrant VM, you can remove it from your system by running `vagrant destroy`. If you want to rebuild the VM again, run `vagrant up`. If you're like me, you'll soon be building and rebuilding hundreds of VMs and containers per week using Vagrant and Ansible!
187 | 
188 | ## Summary
189 | 
190 | Your workstation is on the path to becoming an "infrastructure-in-a-box," and you can now ensure your infrastructure is as well-tested as the code that runs on top of it. With one small example, you've got a glimpse at the simple-yet-powerful Ansible playbook. We'll dive deeper into Ansible playbooks later, and we'll also explore Vagrant a little more as we go.
191 | 
192 | {lang="text",linenos=off}
193 | ```
194 |  ______________________________________
195 | / I have not failed, I've just found   \
196 | | 10,000 ways that won't work. (Thomas |
197 | \ Edison)                              /
198 |  --------------------------------------
199 |         \   ^__^
200 |          \  (oo)\_______
201 |             (__)\       )\/\
202 |                 ||----w |
203 |                 ||     ||
204 | ```
205 | 
--------------------------------------------------------------------------------
/chapter7.txt:
--------------------------------------------------------------------------------
1 | # Chapter 7 - Ansible Plugins and Content Collections
2 | 
3 | Ansible roles are helpful when you want to organize tasks and related variables and handlers in a maintainable way. And you can technically distribute Ansible _plugins_---Python code to extend Ansible's functionality with new modules, filters, inventory plugins, and more---but adding this kind of content to a role is not ideal, and in a sense, overloads the role by putting both Python code and Ansible YAML into the same entity.
4 | 
5 | This is why, in Ansible 2.8, _Collections_, or more formally, _Content Collections_, were introduced.
6 | 
7 | Collections allow the gathering of Ansible plugins, roles, and even playbooks[^playbooks] into one entity, in a more structured way that Ansible, Ansible Galaxy, and Automation Hub can scan and consume.
8 | 9 | [^playbooks]: Note that as of Ansible 2.10, there is no formal specification for how to define playbooks in Collections. 10 | 11 | ## Creating our first Ansible Plugin --- A Jinja Filter 12 | 13 | In many Ansible tasks, you may find yourself building some relatively complex logic to check for a set of conditions. If your Jinja conditionals start making your YAML files look more like a hybrid of Python and YAML, it's a good time to consider extracting the Python logic out into an Ansible plugin. 14 | 15 | We're going to use an extremely basic example. Let's say I have a playbook, `main.yml`, and I have a task in it that needs to assert that a certain variable is a proper representation of the color 'blue' for some generated CSS: 16 | 17 | {lang=yaml} 18 | ``` 19 | --- 20 | - hosts: all 21 | 22 | vars: 23 | my_color_choice: blue 24 | 25 | tasks: 26 | - name: "Verify {{ my_color_choice }} is a form of blue." 27 | assert: 28 | that: my_color_choice == 'blue' 29 | ``` 30 | 31 | This works great... until you have another valid representation of blue. Let's say a user set `my_color_choice: '#0000ff'`. You could still use the same task, but you'd need to add to the logic: 32 | 33 | {lang=yaml} 34 | ``` 35 | --- 36 | - hosts: all 37 | 38 | vars: 39 | my_color_choice: blue 40 | 41 | tasks: 42 | - name: "Verify {{ my_color_choice }} is a form of blue." 43 | assert: 44 | that: > 45 | my_color_choice == 'blue' 46 | or my_color_choice == '#0000ff' 47 | ``` 48 | 49 | Now, someone else might come along with the equally-valid option `#00f`. Time to add more logic to the task---or not. 50 | 51 | Instead, we can write a _filter plugin_. Filter plugins allow you to verify data, and are some of the simpler types of plugins you'll find in Ansible. 52 | 53 | In our case, we want a filter that allows us to write in our playbook: 54 | 55 | {lang=yaml} 56 | ``` 57 | --- 58 | - hosts: all 59 | 60 | vars: 61 | my_color_choice: blue 62 | 63 | tasks: 64 | - name: "Verify {{ my_color_choice }} is a form of blue." 65 | assert: 66 | that: my_color_choice is blue 67 | ``` 68 | 69 | So how can we write a test filter so the `is blue` part in the assertion works? 70 | 71 | The simplest way is to create a `test_plugins` folder alongside the `main.yml` playbook, create a `blue.py` file, and add the following Python code inside: 72 | 73 | {lang=python} 74 | ``` 75 | # Ansible custom 'blue' test plugin definition. 76 | 77 | def is_blue(string): 78 | ''' Return True if a valid CSS value of 'blue'. ''' 79 | blue_values = [ 80 | 'blue', 81 | '#0000ff', 82 | '#00f', 83 | 'rgb(0,0,255)', 84 | 'rgb(0%,0%,100%)', 85 | ] 86 | if string in blue_values: 87 | return True 88 | else: 89 | return False 90 | 91 | class TestModule(object): 92 | ''' Return dict of custom jinja tests. ''' 93 | 94 | def tests(self): 95 | return { 96 | 'blue': is_blue 97 | } 98 | 99 | ``` 100 | 101 | This book isn't a primer on Python programming, but as a simple explanation, the first line is a comment saying what this file contains. It's not a requirement, but I like to always have something at the top of my code files introducing the file's purpose. 102 | 103 | On line 3, the `is_blue` function is defined. It contains some logic which takes one parameter (a string), and returns `True` if the string is a valid form of blue, or `False` if not. 104 | 105 | In this case, it's a simple function, but in many test plugins, the logic is more complex. 
The important thing to note is that this logic (which benefits from Python's language features) is more maintainable as a plugin than as complex inline Jinja syntax in an Ansible playbook.
106 | 
107 | Ansible plugins are also unit testable, unlike conditionals in YAML files, which means you can test them without having to run a whole Ansible playbook to verify they are working (see the short pytest sketch at the end of this section).
108 | 
109 | Line 17 defines `TestModule`. Ansible calls the `tests` method of this class in any Python file inside the `test_plugins` directory, and loads any of the returned keys as available Jinja tests---in our case, `blue` is the name of the Jinja test, and when a user tests with `blue`, Ansible maps that test back to the `is_blue` function.
110 | 
111 | T> You can store plugins in different paths to get Ansible to pick them up. In our example, a test plugin was placed inside a `test_plugins` directory, which Ansible scans for test plugins by default when running a playbook. See [Adding modules and plugins locally](https://docs.ansible.com/ansible/latest/dev_guide/developing_locally.html) for more options for local plugin discovery.
112 | T>
113 | T> For _test_ plugins, you can have more than one defined in the same Python file. And the Python file's name doesn't need to correspond to the plugin name. But for other plugins and Ansible modules, the rules are different. Consult the [Developing plugins](https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html) documentation for more information.
114 | 
115 | If you run the `main.yml` playbook (even against localhost), it should now be able to verify that 'blue' is indeed _blue_:
116 | 
117 | {lang=text,linenos=off}
118 | ```
119 | $ ansible-playbook -i localhost, -c local main.yml
120 | 
121 | PLAY [all] ********************************************************
122 | 
123 | TASK [Gathering Facts] ********************************************
124 | ok: [localhost]
125 | 
126 | TASK [Verify blue is a form of blue.] *****************************
127 | ok: [localhost] => {
128 |     "changed": false,
129 |     "msg": "All assertions passed"
130 | }
131 | 
132 | PLAY RECAP ********************************************************
133 | localhost : ok=2 changed=0 unreachable=0 failed=0 ignored=0
134 | ```
135 | 
136 | Over time, you may find that you want to share this plugin with other playbooks, especially if it could be helpful in many of the projects you maintain.
137 | 
138 | The easiest way is to copy and paste the plugin code into each playbook's directory, but that leads to code duplication and will likely result in the Python code being impossible to keep in sync as the plugin is modified in different playbooks over time.
139 | 
140 | Traditionally, people would share Ansible modules and plugins as part of _roles_, as you could place modules inside a special `library` directory in a role, and plugins in directories like `test_plugins` in the role (just like in a playbook). This advanced usage is mentioned in the Ansible documentation: [Embedding Modules and Plugins In Roles](https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html#embedding-modules-and-plugins-in-roles).
141 | 
142 | But roles are primarily designed for sharing Ansible tasks, handlers, and associated variables---their architecture is not as great for sharing plugins and modules.
143 | 
144 | So where does that leave us?
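
(Before we answer that: as promised above, here is a minimal sketch of how the `is_blue` logic could be unit tested with pytest, with no Ansible run required. The test file name and the `sys.path` tweak are assumptions for this playbook layout, not part of the example project.)

{lang=python}
```
# test_blue.py -- hypothetical unit test for the is_blue logic.
import sys

# Make blue.py importable from the test_plugins directory.
sys.path.insert(0, 'test_plugins')

from blue import is_blue


def test_is_blue_accepts_valid_blue_values():
    for value in ['blue', '#0000ff', '#00f', 'rgb(0,0,255)']:
        assert is_blue(value)


def test_is_blue_rejects_other_values():
    for value in ['red', '#ff0000', '']:
        assert not is_blue(value)
```

Running `pytest test_blue.py` from the playbook directory exercises the plugin logic directly.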
145 | 
146 | ## The history of Ansible Content Collections
147 | 
148 | Well, in 2014, when Ansible Galaxy was created to allow roles to be shared, Ansible had fewer than 300 modules in Ansible's core repository, and the decision was made to [split the modules off from Ansible core](https://groups.google.com/forum/#!searchin/ansible-project/core$20extras$20split%7Csort:relevance/ansible-project/TUL_Bfmhr-E/rshKe30KdD8J) to make maintenance easier, since issues and PRs were overwhelming the small core development team.
149 | 
150 | After a couple years, the modules were [merged back in](https://groups.google.com/forum/#!searchin/ansible-project/repository$20merge%7Csort:relevance/ansible-project/9WpXraBSLz8/q6HYIszBBwAJ), because maintaining three separate git repositories using submodules and trying to track three separate issue and PR queues was a worse maintenance nightmare than what they had to begin with!
151 | 
152 | In 2017, Galaxy started to burst at the seams a little, as more users were contributing roles, and also trying to share module and plugin code by stashing them inside a role's structure.
153 | 
154 | Also in 2017, as Red Hat expanded Ansible's scope to more broadly embrace networking, security, and Windows automation, the amount of maintenance burden pretty much overwhelmed the core team's ability to cope with the now _thousands_ of modules being maintained in Ansible's core repository:
155 | 
156 | {width=90%}
157 | ![Ansible core backlog growth](images/7-ansible-repo-backlog-growth.png)
158 | 
159 | The graph above comes from Greg Sutcliffe's blog, in a post titled [Collections, The Backlog View](https://emeraldreverie.org/2020/03/02/collections-the-backlog-view/). In the post, he explores the data behind a major decision to shift Ansible's plugin and module development burden off the small Ansible core team and into a distributed set of _collections_.
160 | 
161 | [Mazer](https://github.com/ansible/mazer) was introduced in 2018 to experiment with new ways of managing groupings of Ansible content---roles, modules, and plugins. And in the Ansible 2.9 release in 2019, most of Mazer's functionality was merged into the already-existing `ansible-galaxy` command line utility that ships with Ansible.
162 | 
163 | > Mazer was a character in the book _Ender's Game_, from which the name 'Ansible' was derived. A mazer is also a hardwood drinking vessel.
164 | 
165 | And between the release of Ansible 2.8 and 2.10, the Ansible code was restructured, as explained in the blog post [Thoughts on Restructuring the Ansible Project](https://www.ansible.com/blog/thoughts-on-restructuring-the-ansible-project). The Ansible core repository will still hold a few foundational plugins and modules, vendor-supported and Red Hat-supported modules will be split out into their own collections, and community modules will be in _their_ own collections.
166 | 
167 | This decision does run against the grain of the 'batteries included' philosophy, but the problem is that Ansible has grown to be one of the largest open source projects in existence, and it's no longer a good idea to have modules for Cisco networking switches that require special expertise in the same repository as modules for developer build tools like PHP's Composer or Node.js' NPM.
168 | 
169 | But users are still able to get a 'batteries included' version of Ansible---it's what you've used for most of this book! The difference is you can also strap on extra batteries, of any type (not just roles), more easily with collections.
170 | 171 | ## The Anatomy of a Collection 172 | 173 | So what's _in_ a collection? 174 | 175 | At the most basic level, you need to put a collection in the right directory, otherwise Ansible's namespace-based collection loader (based on Python's [PEP 420](https://www.python.org/dev/peps/pep-0420/) standard) will not be able to see it. 176 | 177 | Our goal is to move the `blue` test plugin from earlier in this chapter into a new collection, and use the plugin _in that collection_ in our playbook. 178 | 179 | We need to create a collection so we can put the `blue` plugin inside. In this example, since the collection is intended to be used local to this playbook, and since it's meant to hold color-related functionality, we can call the collection `local.colors`, which means the collection _namespace_ will be `local` (denoting a collection that's local to this playbook), and the collection _name_ will be `colors`. 180 | 181 | As with Ansible roles, new collections can be scaffolded using the `ansible-galaxy` command, in this case: 182 | 183 | {lang=text,linenos=off} 184 | ``` 185 | $ ansible-galaxy collection init local.colors --init-path ./collections/ansible_collections 186 | ``` 187 | 188 | > You might be wondering why we created an extra directory `ansible_collections` to hold our new namespace and collection---and why the collection has to be in a namespace, since it's just a local collection. It's required so Ansible can use Python's built-in namespace-based loader to load content from the collection. 189 | 190 | After running this command, you should see the following contents in your playbook directory: 191 | 192 | {lang=text,linenos=off} 193 | ``` 194 | ansible.cfg 195 | collections/ 196 | ansible_collections/ 197 | local/ 198 | colors/ 199 | README.md 200 | docs/ 201 | galaxy.yml 202 | plugins/ 203 | roles/ 204 | main.yml 205 | test_plugins/ 206 | blue.py 207 | ``` 208 | 209 | The new collection includes all the necessary structure of a collection, but if you don't need one of the docs, plugins, or roles directories, you could delete them. 210 | 211 | The most important thing is the `galaxy.yml` file, which is required so Ansible can read certain metadata about the Collection when it is loaded. For _local_ collections like this one, the defaults are fine, but later, if you want to contribute a collection to Ansible Galaxy and share it with others, you would need to adjust the configuration in this file. 212 | 213 | ### Putting our Plugin into a Collection 214 | 215 | To move our `blue.py` plugin into the collection, we'll need to create a `test` directory inside the collection's `plugins` directory (since `blue` is a test plugin), and then move the `blue.py` plugin into that folder: 216 | 217 | {lang=text,linenos=off} 218 | ``` 219 | $ mkdir collections/ansible_collections/local/colors/plugins/test 220 | $ mv test_plugins/blue.py collections/ansible_collections/local/colors/plugins/test/blue.py 221 | ``` 222 | 223 | At this point if you were to run the `main.yml` playbook, it would fail, with the message: 224 | 225 | {lang=text,linenos=off} 226 | ``` 227 | TASK [Verify blue is a form of blue.] **************************** 228 | fatal: [localhost]: FAILED! => {"msg": "The conditional check 229 | 'my_color_choice is blue' failed. The error was: template error 230 | while templating string: no test named 'blue'. 
231 | my_color_choice is blue %} True {% else %} False {% endif %}"}
232 | ```
233 | 
234 | The problem is you also need to modify your playbook, to make sure Ansible knows you want the `blue` test plugin from the `local.colors` collection.
235 | 
236 | There are two ways you can do this. For collection modules and roles, you could leave the playbook mostly unmodified, and just add a `collections` section in the play, like:
237 | 
238 | {lang=yaml}
239 | ```
240 | ---
241 | - hosts: all
242 | 
243 |   collections:
244 |     - local.colors
245 | 
246 |   vars:
247 |     my_color_choice: blue
248 | ```
249 | 
250 | But in this case, we're using a test plugin, not a regular module or role, so we need to refer to it in a special way, using its 'Fully Qualified Collection Name' (FQCN), which in this test plugin's case would be `local.colors.blue`.
251 | 
252 | So the task should be changed to look like this:
253 | 
254 | {lang=yaml,starting-line-number=7}
255 | ```
256 |   tasks:
257 |     - name: "Verify {{ my_color_choice }} is a form of blue."
258 |       assert:
259 |         that: my_color_choice is local.colors.blue
260 | ```
261 | 
262 | Now, if you run the playbook, it will run the same as before, but using the test plugin from the `local.colors` collection.
263 | 
264 | Any content you add to the collection---plugins, modules, or roles---can be called the same way. Things _built into_ Ansible or local to your playbook can be called with a bare `modulename` or `rolename`, but things from _collections_ should be called by their FQCN.
265 | 
266 | Unless you plan on sharing your collection code with other projects or with the entire Ansible community, it may be easier to maintain custom playbook-specific content like plugins, modules, and roles individually, inside local playbook directories, as we did with `test_plugins` here and with `roles` in previous chapters.
267 | 
268 | ### Going deeper developing collections
269 | 
270 | This example is rather simple, and doesn't even include useful components like _documentation_ for the `blue` test plugin. There are many more things you can do with collections, including adding roles, modules, and someday maybe even _playbooks_.
271 | 
272 | There are different requirements and limitations for roles when they are part of a collection (versus roles built separately in a playbook's `roles/` directory or installed from Galaxy), and those are listed in Ansible's documentation: [Developing collections](https://docs.ansible.com/ansible/latest/dev_guide/developing_collections.html).
273 | 
274 | ## Collections on Automation Hub and Ansible Galaxy
275 | 
276 | Just like roles, collections can be shared with the entire community on Ansible Galaxy, or in Red Hat's Automation Hub, which is part of Red Hat's Ansible Automation Platform.
277 | 
278 | If you browse Galaxy or Automation Hub and find a collection you'd like to use, you can use the `ansible-galaxy` CLI to install the collection, similarly to how you'd install a role:
279 | 
280 | {lang="text",linenos="off"}
281 | ```
282 | $ ansible-galaxy collection install geerlingguy.k8s
283 | ```
284 | 
285 | This command would install the `geerlingguy.k8s` collection into Ansible's default collection path. We'll talk a little more about collection paths in a bit, but first, you can also specify collections---just like roles---in a `requirements.yml` file.
286 | 
287 | For example, if you wanted to install the same collection, but using a `requirements.yml` file, you could specify it like so:
288 | 
289 | {lang="yaml",linenos="off"}
290 | ```
291 | ---
292 | collections:
293 |   - name: geerlingguy.k8s
294 | ```
295 | 
296 | And then, before running your playbook that _uses_ the collection, install all the required collections with `ansible-galaxy`:
297 | 
298 | {lang="text",linenos="off"}
299 | ```
300 | $ ansible-galaxy install -r requirements.yml
301 | ```
302 | 
303 | W> Ansible 2.9 and earlier required installing role requirements separately from collection requirements, and would not install any collections if you called `ansible-galaxy install` by itself. If you're running Ansible 2.9 or earlier, you need to run the command `ansible-galaxy collection install -r requirements.yml`.
304 | 
305 | Once the collection is installed, you can call content from it in any playbook using the FQCN, like so:
306 | 
307 | {lang=yaml}
308 | ```
309 | ---
310 | - hosts: all
311 | 
312 |   roles:
313 |     - geerlingguy.k8s.helm
314 | ```
315 | 
316 | ### Collection version constraints
317 | 
318 | For many playbooks, installing a specific version of a collection provides better stability. And since contributed collections---unlike roles---require the use of semantic versioning, you can even specify version constraints when installing a collection from Galaxy or Automation Hub, either on the command line or in a `requirements.yml` file:
319 | 
320 | {lang="yaml",linenos="off"}
321 | ```
322 | ---
323 | collections:
324 |   - name: geerlingguy.k8s
325 |     version: '>=0.10.0,<0.11.0'
326 | ```
327 | 
328 | This version constraint tells Ansible to install any version in the `0.10.x` series, but not any version in `0.11.x` or newer.
329 | 
330 | For maximum stability, it is important to set a [version constraint](https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#installing-an-older-version-of-a-collection) for any content you rely on. As long as the content maintainers follow the rules of semantic versioning, it should be extremely rare that a playbook breaks due to updated collection content.
331 | 
332 | When a newer major version of a collection you use is released, you can bump the version constraint and test it when you're ready, instead of always having the latest version installed.
333 | 
334 | ### Where are collections installed?
335 | 
336 | When you install a collection from Ansible Galaxy or Automation Hub, Ansible uses the configuration directive `collections_path` to determine where collections should be installed.
337 | 
338 | By default, they'll be installed in one of the following locations:
339 | 
340 | - `~/.ansible/collections`
341 | - `/usr/share/ansible/collections`
342 | 
343 | But you can override the setting in your own projects by setting the `ANSIBLE_COLLECTIONS_PATH` environment variable, or setting `collections_path` in an `ansible.cfg` file alongside your playbook.
344 | 
345 | In some cases, I like to install collections into a path local to my playbook (e.g. by setting `collections_path = ./collections`), because if you install collections to one of the more global locations and use the same collection with more than one project, you may run into issues when a newer version of a collection changes a behavior another playbook relies on.
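
For example, here's a minimal sketch of an `ansible.cfg` you could place alongside a playbook to keep its collections self-contained (the `./collections` directory name is just a convention, not a requirement):

{lang="text",linenos="off"}
```
[defaults]
# Install and load collections from a directory local to this
# playbook, instead of the global ~/.ansible/collections path.
collections_path = ./collections
```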
346 | 
347 | One important note about the path, though: All collections in Ansible must be stored in a path that includes folders named after the collection namespace and name, inside an `ansible_collections` subdirectory.
348 | 
349 | That's why earlier in this chapter, when we created the `local.colors` collection, we ultimately created it inside the directory:
350 | 
351 | {lang=text,linenos=off}
352 | ```
353 | ./collections/ansible_collections/local/colors
354 | ```
355 | 
356 | Similarly, if you install collections from Galaxy or Automation Hub with `collections_path` set to `./collections`, they will end up inside the `./collections/ansible_collections` directory as well, inside a namespaced directory.
357 | 
358 | W> Ansible 2.9 and earlier used the configuration setting `collections_paths` (note the plural `s`). Ansible 2.10 and later uses the singular `collections_path` for consistency with other path-related settings.
359 | 
360 | T> Ansible automatically loads playbook-local collections from the path `collections/`, just like it loads local roles from `roles/`, test plugins from `test_plugins/`, etc. But I like to explicitly configure `collections_path` so any collections I install from Ansible Galaxy or Automation Hub are also installed in the playbook's directory.
361 | 
362 | ## Summary
363 | 
364 | Ansible Collections allow for easier distribution of Ansible content---plugins, modules, and roles---and have also helped to make Ansible's own maintenance more evenly distributed.
365 | 
366 | You may find yourself using bare roles sometimes, and collections (with or without roles) other times. In either case, Ansible makes consolidating and sharing custom Ansible functionality easy!
367 | 
368 | {lang="text",linenos="off"}
369 | ```
370 |  ____________________________________
371 | / Clarity is better than cleverness. \
372 | \ (Eric S. Raymond)                  /
373 |  ------------------------------------
374 |         \   ^__^
375 |          \  (oo)\_______
376 |             (__)\       )\/\
377 |                 ||----w |
378 |                 ||     ||
379 | ```
380 | 
--------------------------------------------------------------------------------
/foreword.txt:
--------------------------------------------------------------------------------
1 | # Foreword
2 | 
3 | Over the last few years, Ansible has rapidly become one of the most popular IT automation tools in the world. We've seen the open source community expand from the beginning of the project in early 2012 to over 1200 individual contributors today. Ansible's modular architecture and broad applicability to a variety of automation and orchestration problems created a perfect storm for hundreds of thousands of users worldwide.
4 | 
5 | Ansible is a general purpose IT automation platform, and it can be used for a variety of purposes: from configuration management (enforcing declared state across your infrastructure), to procedural application deployment, to broad multi-component and multi-system orchestration of complicated, interconnected systems. It is agentless, so it can coexist with legacy tools, and it's easy to install, configure, and maintain.
6 | 
7 | Ansible had its beginnings in early 2012, when Michael DeHaan, the project's founder, took inspiration from several tools he had written previously, along with hands-on experience with the state of configuration management at the time, and launched the project in February of that year. Some of Ansible's unique attributes, like its module-based architecture and agentless approach, quickly attracted attention in the open source world.
8 | 
9 | In 2013, Saïd Ziouani, Michael DeHaan, and I launched Ansible, Inc. We wanted to harness the growing adoption of Ansible in the open source world, and create products to fill the gaps in the IT automation space as we saw them. The existing tools were complicated, error-prone, and hard to learn. Ansible gave users across an IT organization a low barrier to entry into automation, and it could be deployed incrementally, solving as few or as many problems as the team needed without a big shift in methodology.
10 | 
11 | This book is about using Ansible in a DevOps environment. I'm not going to try to define what DevOps is or isn't, or who's doing it or not. My personal interpretation of the idea is that DevOps is meant to shorten the distance between the developers writing the code and the operators running the application. Now, I don't believe adding a new "DevOps" team in between existing development and operations teams achieves that objective! (Oops, now I'm trying for a definition, aren't I?)
12 | 
13 | Well, definitions aside, one of the first steps towards a DevOps environment is choosing tools that can be consumed by both developers and operations engineers. Ansible is one of those tools: you don't have to be a software developer to use it, and the playbooks that you write can easily be self-documenting. There have been a lot of attempts at "write once, run anywhere" models of application development and deployment, but I think Ansible comes the closest to providing a common language that's useful across teams and across clouds and different datacenters.
14 | 
15 | The author of this book, Jeff, has been a long-time supporter, contributor, and advocate of Ansible, and he maintains a massive collection of impressive Ansible roles in Galaxy, the public role-sharing service run by Ansible, Inc. Jeff has used Ansible extensively in his professional career, and is eminently qualified to write an end-to-end book on Ansible in a DevOps environment.
16 | 
17 | As you read this book, I hope you enjoy your journey into IT automation as much as we have. Be well, do good work, and automate everything.
18 | 
19 | Tim Gerla
20 | Ansible, Inc.
Co-Founder & CTO 21 | -------------------------------------------------------------------------------- /frontmatter.txt: -------------------------------------------------------------------------------- 1 | {frontmatter} 2 | -------------------------------------------------------------------------------- /images/1-basic-vagrant-application.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/1-basic-vagrant-application.png -------------------------------------------------------------------------------- /images/10-deploy-haproxy.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-deploy-haproxy.png -------------------------------------------------------------------------------- /images/10-multi-server-deployment-cloud.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-multi-server-deployment-cloud.png -------------------------------------------------------------------------------- /images/10-multi-server-deployment-lb.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-multi-server-deployment-lb.png -------------------------------------------------------------------------------- /images/10-rails-app-fresh.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-rails-app-fresh.png -------------------------------------------------------------------------------- /images/10-rails-app-new-version.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-rails-app-new-version.png -------------------------------------------------------------------------------- /images/10-rails-app-with-articles.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/10-rails-app-with-articles.png -------------------------------------------------------------------------------- /images/12-awx-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/12-awx-dashboard.png -------------------------------------------------------------------------------- /images/12-awx-job-complete.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/12-awx-job-complete.png -------------------------------------------------------------------------------- /images/12-jenkins-job-console-output.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/12-jenkins-job-console-output.png -------------------------------------------------------------------------------- /images/13-github-actions-ci-badge.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/13-github-actions-ci-badge.png -------------------------------------------------------------------------------- /images/13-github-actions-ci-workflow.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/13-github-actions-ci-workflow.png -------------------------------------------------------------------------------- /images/13-molecule-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/13-molecule-logo.png -------------------------------------------------------------------------------- /images/13-testing-spectrum.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/13-testing-spectrum.png -------------------------------------------------------------------------------- /images/14-https-nginx-proxy-502-bad-gateway.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/14-https-nginx-proxy-502-bad-gateway.png -------------------------------------------------------------------------------- /images/14-https-nginx-proxy-test.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/14-https-nginx-proxy-test.png -------------------------------------------------------------------------------- /images/14-https-test-chrome.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/14-https-test-chrome.png -------------------------------------------------------------------------------- /images/14-letsencrypt-valid-certificate.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/14-letsencrypt-valid-certificate.png -------------------------------------------------------------------------------- /images/15-docker-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/15-docker-success.png -------------------------------------------------------------------------------- /images/15-flask-docker-stack.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/15-flask-docker-stack.png -------------------------------------------------------------------------------- /images/16-kubernetes-helm-phpmyadmin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/16-kubernetes-helm-phpmyadmin.png -------------------------------------------------------------------------------- /images/16-kubernetes-logo.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/16-kubernetes-logo.png -------------------------------------------------------------------------------- /images/16-kubernetes-nginx-welcome.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/16-kubernetes-nginx-welcome.png -------------------------------------------------------------------------------- /images/16-kubernetes-simple-cluster-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/16-kubernetes-simple-cluster-architecture.png -------------------------------------------------------------------------------- /images/4-nodejs-home.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-nodejs-home.png -------------------------------------------------------------------------------- /images/4-playbook-drupal-home.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-playbook-drupal-home.png -------------------------------------------------------------------------------- /images/4-playbook-drupal.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-playbook-drupal.png -------------------------------------------------------------------------------- /images/4-playbook-nodejs.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-playbook-nodejs.png -------------------------------------------------------------------------------- /images/4-playbook-solr-admin.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-playbook-solr-admin.png -------------------------------------------------------------------------------- /images/4-playbook-solr.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/4-playbook-solr.png -------------------------------------------------------------------------------- /images/7-ansible-repo-backlog-growth.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/7-ansible-repo-backlog-growth.png -------------------------------------------------------------------------------- /images/8-server-checkin-infrastructure.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/8-server-checkin-infrastructure.png -------------------------------------------------------------------------------- /images/9-elk-kibana-default.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-elk-kibana-default.png -------------------------------------------------------------------------------- /images/9-elk-kibana-example.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-elk-kibana-example.png -------------------------------------------------------------------------------- /images/9-elk-kibana-logstash-dashboard.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-elk-kibana-logstash-dashboard.png -------------------------------------------------------------------------------- /images/9-glusterfs-architecture.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-glusterfs-architecture.png -------------------------------------------------------------------------------- /images/9-ha-infrastructure-aws.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-ha-infrastructure-aws.png -------------------------------------------------------------------------------- /images/9-ha-infrastructure-digitalocean.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-ha-infrastructure-digitalocean.png -------------------------------------------------------------------------------- /images/9-ha-infrastructure-success.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-ha-infrastructure-success.png -------------------------------------------------------------------------------- 
/images/9-highly-available-infrastructure.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-highly-available-infrastructure.png
--------------------------------------------------------------------------------
/images/9-logstash-forwarding-ab-load.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-logstash-forwarding-ab-load.png
--------------------------------------------------------------------------------
/images/9-logstash-forwarding-nginx.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/9-logstash-forwarding-nginx.png
--------------------------------------------------------------------------------
/images/by-sa.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/by-sa.png
--------------------------------------------------------------------------------
/images/title_page.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/images/title_page.jpg
--------------------------------------------------------------------------------
/introduction.txt:
--------------------------------------------------------------------------------
1 | # Introduction
2 | 
3 | ## In the beginning, there were sysadmins
4 | 
5 | Since the beginning of networked computing, deploying and managing servers reliably and efficiently has been a challenge. Historically, system administrators were walled off from the developers and users who interact with the systems they administer, and they managed servers by hand, installing software, changing configurations, and administering services on individual servers.
6 | 
7 | As data centers grew, and hosted applications became more complex, administrators realized they couldn't scale their manual systems management as fast as the applications they were enabling. That's why server provisioning and configuration management tools began to flourish.
8 | 
9 | Server virtualization brought large-scale infrastructure management to the fore, and the number of servers managed by one admin (or by a small team of admins) has grown by an order of magnitude. Instead of deploying, patching, and destroying every server by hand, admins are now expected to bring up new servers, either automatically or with minimal intervention. Large-scale IT deployments now may involve hundreds or thousands of servers; in many of the largest environments, server provisioning, configuration, and decommissioning are fully automated.
10 | 
11 | ## Modern infrastructure management
12 | 
13 | As the systems that run applications become an ever more complex and integral part of the software they run, application developers themselves have begun to integrate their work more fully with operations personnel. In many companies, development and operations work is integrated. Indeed, this integration is a requirement for modern test-driven application design.
14 | 
15 | As a software developer by trade, and a sysadmin by necessity, I have seen the power in uniting development and operations---more commonly referred to now as DevOps or Site Reliability Engineering. When developers begin to think of infrastructure as *part of their application,* stability and performance become normative. When sysadmins (most of whom have intermediate to advanced knowledge of the applications and languages being used on servers they manage) work closely with developers, development velocity is improved, and more time is spent on 'fun' activities like performance tuning, experimentation, and getting things done---and less time is spent putting out fires.
16 | 
17 | W> *DevOps* is a loaded word; some people argue that using it to identify both the *movement* (development and operations working more closely to automate infrastructure-related processes) and the *personnel* (those who skew slightly more towards the system administration side of the equation) dilutes the word's meaning. I think the word has come to be a rallying cry for the employees who are dragging their startups, small businesses, and enterprises into a new era of infrastructure growth and stability. I'm not too concerned that the term has become more of a catch-all for modern infrastructure management. My advice: spend less time arguing over the definition of the word, and more time making it mean something *to you*.
18 | 
19 | ## Ansible and Red Hat
20 | 
21 | Ansible was released in 2012 by Michael DeHaan ([@laserllama](https://twitter.com/laserllama) on Twitter), a developer who had been working with configuration management and infrastructure orchestration in one form or another for many years. Through his work with Puppet Labs and Red Hat (where he worked on [Cobbler](https://cobbler.github.io/), a configuration management tool; Func, a tool for communicating commands to remote servers; and [some other projects](https://web.archive.org/web/20240223204426/https://www.ansible.com/blog/2013/12/08/the-origins-of-ansible#expand)), he experienced the trials and tribulations of many different organizations and individual sysadmins on their quest to simplify and automate their infrastructure management operations.
22 | 
23 | Additionally, Michael found [many shops were using separate tools](https://highscalability.com/ansible-a-simple-model-driven-configuration-management-and-c/) for configuration management (Puppet, Chef, CFEngine), server deployment (Capistrano, Fabric), and ad-hoc task execution (Func, plain SSH), and wanted to see if there was a better way. Ansible wraps up all three of these features into one tool, and does it in a way that's actually *simpler* and more consistent than any of the other task-specific tools!
24 | 
25 | Ansible aims to be:
26 | 
27 | 1. **Clear** - Ansible uses a simple syntax (YAML) and is easy for anyone (developers, sysadmins, managers) to understand. APIs are simple and sensible.
28 | 2. **Fast** - Fast to learn, fast to set up---especially considering you don't need to install extra agents or daemons on all your servers!
29 | 3. **Complete** - Ansible does three things in one, and does them very well. Ansible's 'batteries included' approach means you have everything you need in one complete package.
30 | 4. **Efficient** - No extra software on your servers means more resources for your applications. Also, since Ansible modules work via JSON, Ansible is extensible with modules written in a programming language you already know.
31 | 5. **Secure** - Ansible uses SSH, and requires no extra open ports or potentially-vulnerable daemons on your servers.
32 | 
33 | Ansible also has a lighter side that gives the project a little personality. As an example, Ansible's major releases used to be named after Led Zeppelin songs (e.g. 2.0 was named after 1973's "Over the Hills and Far Away"; 1.x releases were named after Van Halen songs). Additionally, Ansible uses `cowsay`, if installed, to wrap output in an ASCII cow's speech bubble (this behavior can be disabled in Ansible's configuration).
34 | 
35 | [Ansible, Inc.](https://www.ansible.com/) was founded by Saïd Ziouani, Michael DeHaan, and Tim Gerla, and acquired by Red Hat in 2015. The Ansible team oversees core Ansible development, as well as development of the [Red Hat Ansible Automation Platform](https://www.redhat.com/en/technologies/management/ansible) for organizations using Ansible. Hundreds of individual developers have contributed patches to Ansible, and Ansible is the most-starred infrastructure management tool on GitHub (with over 64,000 stars as of this writing).
36 | 
37 | In October 2015, Red Hat acquired Ansible, Inc., and has proven itself to be a good steward and promoter of Ansible. I see no indication of this changing in the future.
38 | 
39 | ## Ansible Examples
40 | 
41 | There are many Ansible examples (playbooks, roles, infrastructure, configuration, etc.) throughout this book. Most of the examples are in the [Ansible for DevOps GitHub repository](https://github.com/geerlingguy/ansible-for-devops), so you can browse the code in its final state while you're reading the book. Some of the line numbering may not match the book *exactly* (especially if you're reading an older version of the book!), but I will try my best to keep everything synchronized over time.
42 | 
43 | ## Other resources
44 | 
45 | We'll explore all aspects of using Ansible to provision and manage your infrastructure in this book, but there's no substitute for the wealth of documentation and community interaction that make Ansible great. Check out the links below to find out more about Ansible and discover the community:
46 | 
47 | - [Ansible Documentation](https://docs.ansible.com/ansible/) - Covers all Ansible options in depth. There are few open source projects with documentation as clear and thorough.
48 | - [Ansible Glossary](https://docs.ansible.com/ansible/latest/reference_appendices/glossary.html) - If there's ever a term in this book you don't fully understand, check the glossary.
49 | - [The Bullhorn](https://us19.campaign-archive.com/home/?u=56d874e027110e35dea0e03c1&id=d6635f5420) - Ansible's official newsletter.
50 | - [Ansible Mailing List](https://groups.google.com/forum/#!forum/ansible-project) - Discuss Ansible and submit questions to Ansible's community via this Google group.
51 | - [Ansible on GitHub](https://github.com/ansible/ansible) - The official Ansible code repository, where the magic happens.
52 | - [Ansible Example Playbooks on GitHub](https://github.com/ansible/ansible-examples) - Many examples for common server configurations.
53 | - [Getting Started with Ansible](https://www.ansible.com/resources/get-started) - A simple guide to Ansible's community and resources.
54 | - [Ansible Blog](https://www.ansible.com/blog)
55 | 
56 | I'd like to especially highlight Ansible's documentation (the first resource listed above); one of Ansible's greatest strengths is its well-written and extremely relevant documentation, containing a large number of useful examples and continuously-updated guides. Very few projects---open source or not---have documentation as thorough, yet easy to read. This book is meant as a supplement to, not a replacement for, Ansible's documentation!
57 | 
--------------------------------------------------------------------------------
/mainmatter.txt:
--------------------------------------------------------------------------------
1 | {mainmatter}
--------------------------------------------------------------------------------
/notes.txt:
--------------------------------------------------------------------------------
1 | # Notes
2 | 
3 | ## Publishing process
4 | 
5 | See the README file inside the `ansible-for-devops publicity/Published Editions` directory.
6 | 
7 | ## Editing notes:
8 | 
9 | - Spellcheck.
10 | - Search for "that" in text.
11 | - Search for `...` in code examples and make it consistent.
12 | - Ensure cowsay is in every chapter summary.
13 | - Look through all code samples and fix line-wrapped lines.
14 | - Search for "—" (em-dash) in entire book and replace with `---`.
15 | - Search for "its" and "it's" and ensure proper grammatical usage.
16 | - Remove parentheses that are meaningless.
17 | - Search for 'ansible' (lower) and make sure non-CLI usage is capitalized.
18 | - Search for 'simple' and 'simply', since I overuse these words.
19 | - Search for ' can' to find uses where its removal makes sentences stronger.
20 | - Search for proper names (companies, software, etc.) and make sure they're capitalized.
21 | - Search for ' a the' and fix those instances.
22 | - Search for "PLAY RECAP" and make sure code blocks are 74 characters wide (70 max without indent).
23 | 
24 | ## Thoughts on writing
25 | 
26 | - [I self-published a learn-to-code book and made nearly $5k in pre-orders](https://news.ycombinator.com/item?id=9847965)
27 | - [My Book Marketing Process](http://www.mooreds.com/wordpress/archives/1594)
28 | - [Zero to 95,688: How I wrote Game Programming Patterns](http://journal.stuffwithstuff.com/2014/04/22/zero-to-95688-how-i-wrote-game-programming-patterns/)
29 | - [The Last Starving Author Has Died](http://www.luckyisgood.com/starving-authors-begone/)
30 | 
31 | ## Improvements/new chapter ideas
32 | 
33 | - VPN/Bastion/Jump host usage with Ansible (SSH)
34 | - High Performance / Scalable Ansible:
35 | - Profiling roles / tasks with callback plugins (see Sam Doran's blog post)
36 | - https://www.jeffgeerling.com/blog/2017/slow-ansible-playbook-check-ansiblecfg
37 | - `synchronize` vs `copy`
38 | - Networking (routers? other stuff?)
39 | - Windows and Ansible (maybe?)
40 | - Security and Secret management 41 | - SSH private key management and security 42 | - Sudo auth via SSH keys (http://blather.michaelwlucas.com/archives/1106) 43 | - Playbooks, Roles, and Variables - organization for large teams and large projects 44 | -------------------------------------------------------------------------------- /other_files/Ansible Logo/Ansible Logo - Black.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Ansible Logo/Ansible Logo - Black.png -------------------------------------------------------------------------------- /other_files/Ansible Logo/Ansible Logo - White.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Ansible Logo/Ansible Logo - White.png -------------------------------------------------------------------------------- /other_files/Illustrations/4 - Application Stack - Drupal.ai: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/4 - Application Stack - Drupal.ai -------------------------------------------------------------------------------- /other_files/Illustrations/4 - Application Stack - Nodejs.ai: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/4 - Application Stack - Nodejs.ai -------------------------------------------------------------------------------- /other_files/Illustrations/4 - Application Stack - Solr.ai: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/4 - Application Stack - Solr.ai -------------------------------------------------------------------------------- /other_files/Illustrations/8 - Flask app - Docker.ai: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/8 - Flask app - Docker.ai -------------------------------------------------------------------------------- /other_files/Illustrations/apache.eps: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/apache.eps -------------------------------------------------------------------------------- /other_files/Illustrations/centos.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/centos.png -------------------------------------------------------------------------------- /other_files/Illustrations/nodejs.eps: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/nodejs.eps
--------------------------------------------------------------------------------
/other_files/Illustrations/npm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/npm.png
--------------------------------------------------------------------------------
/other_files/Illustrations/php.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/geerlingguy/ansible-for-devops-manuscript/f43b32e5dae86b7984990a23d38f8bc6be631575/other_files/Illustrations/php.png
--------------------------------------------------------------------------------
/preface.txt:
--------------------------------------------------------------------------------
1 | # Preface
2 | 
3 | Growing up, I had access to a world that not many kids ever get to enter. At the local radio stations where my dad was chief engineer, I was fortunate to see networks and IT infrastructure up close: Novell servers and old Mac and Windows workstations in the '90s; Microsoft and Linux-based servers; and everything in between. Best of all, he brought home decommissioned servers and copies of Linux burned to CD.
4 | 
5 | I began working with Linux and small-scale infrastructures before I started high school, and my passion for infrastructure grew as I built a Cat5 wired network and a small rack of networking equipment for a local grade school. When I started developing full-time, what was once a hobby became a necessary part of my job, so I invested more time in managing infrastructure efficiently. Over the past ten years, I've gone from manually booting and configuring physical and virtual servers; to using relatively complex shell scripts to provision and configure servers; to using configuration management tools to manage thousands of cloud servers.
6 | 
7 | When I began converting my infrastructure to code, some of the best tools for testing, provisioning, and managing my servers were still in their infancy, but they have since matured into fully-featured, robust tools that I use every day. Vagrant is an excellent tool for managing virtual machines to mimic real-world infrastructure locally (or in the cloud), and Ansible---the subject of this book---is an excellent tool for provisioning servers, managing their configuration, and deploying applications, even on my local workstation!
8 | 
9 | These tools are still improving, and I'm excited for what the future holds. The time I invest in learning new infrastructure tools well will be helpful for years to come.
10 | 
11 | In these pages, I'll share with you all I've learned about Ansible: my favorite tool for server provisioning, configuration management, and application deployment. I hope you enjoy reading this book as much as I did writing it!
12 | 
13 | --- Jeff Geerling, 2015
14 | 
15 | ## Second Edition
16 | 
17 | I've published 23 major revisions to the book since the original 1.0 release in 2015. After major rewrites (and three new chapters) in 2019 and 2020 to reflect Ansible's changing architecture, I decided to publish the new content as a '2nd edition'.
18 | 
19 | I will continue to publish revisions in the future, to keep this book relevant for as long as possible! Please visit the book's website, at www.ansiblefordevops.com, for the latest updates, or to subscribe to be notified of Ansible and book news!
20 | 
21 | --- Jeff Geerling, 2020
22 | 
23 | ## Who is this book for?
24 | 
25 | Many of the developers and sysadmins I work with are at least moderately comfortable administering a Linux server via SSH, and manage between 1 and 100 servers, whether bare metal, virtualized, or using containers.
26 | 
27 | Some of these people have a little experience with configuration management tools (usually with Puppet or Chef), and maybe a little experience with deployments and continuous integration using tools like Jenkins, Capistrano, or Fabric. I am writing this book for these friends who, I think, are representative of most people who have heard of and/or are beginning to use Ansible.
28 | 
29 | If you are interested in both development and operations, and have at least a passing familiarity with managing a server via the command line, this book should provide you with an intermediate- to expert-level understanding of Ansible and how you can use it to manage your infrastructure.
30 | 
31 | ## Typographic conventions
32 | 
33 | Ansible uses a simple syntax (YAML) and simple command-line tools (using common POSIX conventions) for all its powerful abilities. Code samples and commands will be highlighted throughout the book either inline (for example: `ansible [command]`), or in a code block (with or without line numbers) like:
34 | 
35 | {lang="text"}
36 |     ---
37 |     # This is the beginning of a YAML file.
38 | 
39 | Some lines of YAML and other code examples require more than 70 characters per line, resulting in the code wrapping to a new line. Wrapping code is indicated by a `\` at the end of the line of code. For example:
40 | 
41 | {lang="text"}
42 |     # The line of code wraps due to the extremely long URL.
43 |     wget http://www.example.com/really/really/really/long/path/in/the/url/causes/the/line/to/wrap
44 | 
45 | When using the code, don't copy the `\` character, and make sure you don't use a newline between the first line with the trailing `\` and the next line.
46 | 
47 | Links to pertinent resources and websites are added inline, like the following link to [Ansible](https://www.ansible.com/), and can be viewed directly by clicking on them in eBook formats, or by following the URL in the footnotes.
48 | 
49 | Sometimes, asides are added to highlight further information about a specific topic:
50 | 
51 | I> Informational asides will provide extra information.
52 | 
53 | W> Warning asides will warn about common pitfalls and how to avoid them.
54 | 
55 | T> Tip asides will give tips for deepening your understanding or optimizing your use of Ansible.
56 | 
57 | When displaying commands run in a terminal session, if the commands are run under your normal/non-root user account, the commands will be prefixed by the dollar sign (`$`). If the commands are run as the root user, they will be prefixed with the pound sign (`#`).
58 | 
59 | ## Please help improve this book!
60 | 
61 | New revisions of this book are published on a regular basis (see current book publication stats below). If you think a particular section needs improvement or find something missing, please post an issue in the [Ansible for DevOps issue queue](https://github.com/geerlingguy/ansible-for-devops/issues) (on GitHub) or contact me via Twitter ([@geerlingguy](https://twitter.com/geerlingguy)).
62 | 
63 | All known issues with Ansible for DevOps will be aggregated on the book's online [Errata](https://www.ansiblefordevops.com/errata) page.
64 | 
65 | ### Current Published Book Version Information
66 | 
67 | - **Current book version**: 2.3
68 | - **Current Ansible version as of last publication**: 11.6.0 (core 2.18.6)
69 | - **Current date as of last publication**: May 25, 2025
70 | 
71 | ## About the Author
72 | 
73 | Jeff Geerling is a developer who has worked in programming and reliability engineering for companies with anywhere from one to thousands of servers. He also manages many virtual servers for services offered by Midwestern Mac, LLC, and has been using Ansible to manage infrastructure since early 2013.
74 | 
--------------------------------------------------------------------------------
/wordcount-history.bash:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Count all words in .txt files in a repository, for each commit.
3 | for commit in $(git rev-list --all); do
4 |   commit_date=$(git log -n 1 --pretty=%ad --date=iso-strict "$commit")
5 |   # On GNU tar, add `--wildcards --no-anchored` options. (xargs trims wc's whitespace.)
6 |   wordcount=$(git archive "$commit" | tar -x -O '*.txt' | wc -w | xargs)
7 |   echo "$commit_date,$wordcount"
8 | done
9 | 
--------------------------------------------------------------------------------