├── .gitignore ├── INSTALL.md ├── INSTALL_EC2.md ├── README.md ├── ansible.cfg ├── haproxy.yml ├── inventory ├── group_vars │ ├── all.yml │ ├── ec2.yml │ ├── server.yml │ └── static.yml ├── host_vars │ └── .keep └── hosts ├── inventory_providers ├── ec2.ini └── ec2.py ├── rancher.yml ├── roles ├── haproxy │ ├── defaults │ │ └── main.yml │ ├── files │ │ └── .keep │ ├── handlers │ │ └── main.yml │ ├── meta │ │ └── .keep │ ├── tasks │ │ ├── haproxy.yml │ │ └── main.yml │ ├── templates │ │ └── haproxy.cfg.j2 │ └── vars │ │ └── .keep └── rancher │ ├── defaults │ └── main.yml │ ├── files │ └── .keep │ ├── handlers │ └── .keep │ ├── meta │ └── .keep │ ├── tasks │ ├── docker.yml │ ├── main.yml │ ├── node.yml │ └── server.yml │ ├── templates │ └── server_launch.j2 │ └── vars │ └── .keep ├── site.yml └── vars ├── default.yml └── private.yml /.gitignore: -------------------------------------------------------------------------------- 1 | # Project Files 2 | ########### 3 | *.retry 4 | test.yml 5 | inventory/ec2.py 6 | 7 | # Compiled source # 8 | ################### 9 | *.com 10 | *.class 11 | *.dll 12 | *.exe 13 | *.o 14 | *.so 15 | *.pyc 16 | 17 | # Packages # 18 | ############ 19 | # it's better to unpack these files and commit the raw source 20 | # git has its own built in compression methods 21 | *.7z 22 | *.dmg 23 | *.gz 24 | *.iso 25 | *.jar 26 | *.rar 27 | *.tar 28 | *.zip 29 | 30 | # Logs and databases # 31 | ###################### 32 | *.log 33 | *.sql 34 | *.sqlite 35 | 36 | # Backups 37 | ######### 38 | *.bak 39 | *.old 40 | 41 | # OS generated files # 42 | ###################### 43 | .DS_Store 44 | .DS_Store? 45 | ._* 46 | .Spotlight-V100 47 | .Trashes 48 | ehthumbs.db 49 | Thumbs.db 50 | *.swp 51 | 52 | # Sublime Text 53 | ############ 54 | *.sublime-* 55 | 56 | # Virtualenv 57 | ########## 58 | venv/ 59 | -------------------------------------------------------------------------------- /INSTALL.md: -------------------------------------------------------------------------------- 1 | # Installation Instructions 2 | 3 | This will walk through the installation of a Rancher environment with the following configuration: 4 | * two Rancher Servers 5 | * one HAProxy load balancer 6 | * one or more nodes 7 | 8 | It assumes that you have already done the following things: 9 | 10 | 1. Install Prerequisites 11 | * Ansible 12 | * Provider-specific requirements (EC2, GCE, etc) 13 | * External database (RDS, Galera, etc) 14 | 2. Get a certificate 15 | * You can use a self-signed certificate with Rancher Server without any issues. If you plan to use Kubernetes, you will have to make additional changes for `kubectl` to run. If you can get a real certificate, life will be easier. 16 | 3. Set up your hosts 17 | * The hosts can be static or dynamic. See the corresponding docs for 18 | using dynamic inventory with cloud providers. 19 | 20 | If you have not done these things, please refer back to the README for links to documentation on how to set up these pieces. 21 | 22 | ## Edit `ansible.cfg` 23 | 24 | Set the `ansible_user` variable for the user account you will be using on the remote hosts. For default EC2 hosts, this will be one of the following: 25 | 26 | - Debian: admin 27 | - Ubuntu: ubuntu 28 | 29 | ## Edit `group_vars` 30 | 31 | 1. Edit `all.yml` to set the Docker version. This is used in [this install command](http://docs.rancher.com/rancher/v1.6/en/hosts/#supported-docker-versions), so versions should only be in the form of `major.minor`. 32 | 2. 
If you know your registration URL (the final location of your Rancher 33 | server), set it in `server_url` in `all.yml`. 34 | 3. Edit `server.yml` and set variables as appropriate. 35 | 36 | ## Edit the Vault 37 | 38 | Edit the Vault file with the following: 39 | ``` 40 | $ ansible-vault edit vars/private.yml 41 | ``` 42 | 43 | Hopefully you changed the password (see [README.md](README.md)), but if not, 44 | the default password is `ansible`. 45 | 46 | Once inside, edit the following: 47 | 1. Set the database password. 48 | 2. Set the database admin password. 49 | 3. Paste your certificate and key. 50 | * If your certificate has a chain, append the chain to the certificate and place it all in the `cert` section. 51 | 52 | ## Install The Server 53 | 54 | If this is your first run, you have to bring up the server and any load balancers first. Once the server is up, you'll be able to create an API key for the environment and then use it to bring up nodes. 55 | 56 | ``` 57 | $ ansible-playbook --limit server,loadbalancer rancher.yml 58 | ``` 59 | 60 | After the playbook run completes, you can connect to your Rancher Server on 61 | the public address for your load balancer node. 62 | 63 | If you brought up a single Rancher server, HAProxy was installed on the server itself. 64 | 65 | # Automatically Adding Nodes 66 | 67 | Once the Rancher Server is up, you can automatically register nodes from 68 | these playbooks. It is not possible to do this prior to the server launching 69 | because you need to create an API key. Once you have an API key for an 70 | environment, you can spin up and auto-join hosts at will. 71 | 72 | 1. Create an API key pair for the environment. You can do this by expanding 73 | "Advanced Options" from the API/Keys menu item. 74 | 75 | 2. Enter the key and secret key into the array in the Vault. The array key 76 | should be the unique identifier for the environment. The Default environment 77 | is always `1a5` when Rancher comes up. If you are registering hosts in a 78 | different environment, you can find the environment identifier in the URL 79 | when you have selected that environment in the UI. 80 | 81 | ``` 82 | api_keys: 83 | 1a5: 84 | access_key: 897BAE0A2E18EC8E1DE3 85 | secret_key: TFy3kQcE4VM7pqkXgC83E3nN2pthWMTXzPUfjnS4 86 | ``` 87 | 88 | 3. Tag your nodes with a key of `rancher_env` and the unique identifier for 89 | the environment. 90 | 91 | 4. Run the `rancher.yml` playbook with `--tags node` to execute just the node 92 | functions. (*NOTE:* If the playbook doesn't run after you change the `rancher_env` tag, it's because the dynamic inventory script caches its output; see `cache_max_age` in `ec2.ini`, which is 300 seconds in the provided configuration. Clear the cache by running `inventory/ec2.py --refresh-cache` and then try again.) 93 | 94 | ``` 95 | $ ansible-playbook --limit node --tags node rancher.yml 96 | ``` 97 | 98 | These functions will attempt to retrieve the registration URL. If no URL 99 | exists, or if all URLs are inactive, it will register a new URL and use 100 | that URL to register the hosts.
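These functions authenticate against the Rancher API with the environment's key pair. For reference, the lookup is roughly equivalent to the `curl` calls below. This is an illustrative sketch only: the server host is a placeholder, and the endpoint paths assume the standard Rancher v1 API for environment `1a5`, so check your server's API browser if they differ.
```
# List registration tokens for the environment (active ones carry a registration URL/command)
$ curl -s -u <access_key>:<secret_key> \
    "http://<rancher-server>:8080/v1/projects/1a5/registrationtokens?state=active"

# If nothing active comes back, a new token can be created, roughly like this:
$ curl -s -X POST -u <access_key>:<secret_key> \
    "http://<rancher-server>:8080/v1/registrationtokens?projectId=1a5"
```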
101 | 102 | The node functions will not run if any of the following are true: 103 | 104 | * No API keys have been defined in the vault 105 | * There is no tag for `rancher_env` defined for the instance 106 | * The value of the `rancher_env` tag is not present in the API keys in the vault 107 | 108 | 109 | -------------------------------------------------------------------------------- /INSTALL_EC2.md: -------------------------------------------------------------------------------- 1 | # Installation Instructions 2 | 3 | This will walk through the prerequisites to use EC2 dynamic inventory. 4 | 5 | ## Create Security Group 6 | 7 | Create a security group called `rancher` with the following ports open. 8 | 9 | * HTTPS from anywhere 10 | * SSH from anywhere 11 | * UDP/500 from anywhere 12 | * UDP/4500 from anywhere 13 | 14 | Save the group so that it receives an identifier, and then edit it and add the following ports, setting them up to only be accessible from the security group itself: 15 | 16 | * TCP/8080 17 | * TCP/2376 18 | * TCP/9345 19 | * TCP/3306 20 | * ALL IPv4 ICMP 21 | 22 | This setup will allow any host in the security group to communicate with other hosts and with the Rancher Server. 23 | 24 | Once you launch containers on the nodes, you may need to open additional ports for those services or add the nodes to a security group that contains those open ports. 25 | 26 | ## Launch Instances 27 | 28 | *NOTE:* All instances must run Ubuntu 16.04 or these playbooks may not work correctly. 29 | 30 | Launch the following instances in a VPC of your choice. While they don't 31 | _have_ to be in the same AZ, placing them in the same AZ will reduce 32 | AWS charges for inter-AZ traffic. The configuration expects that nodes will 33 | all be able to reach one another on the private IPs. 34 | 35 | Add the following code to the "User data" section under "Advanced Details" in Step 3: 36 | ``` 37 | #!/bin/bash 38 | apt-get -qq update && apt-get -qq install python python-pip 39 | ``` 40 | 41 | Alternatively, add the following: 42 | ``` 43 | #cloud-config 44 | 45 | package_upgrade: true 46 | package_update: true 47 | packages: 48 | - curl 49 | - python 50 | - python-pip 51 | ``` 52 | 53 | * Rancher Server 54 | * 1 or 2 `t2.small` instances with 16 GB of disk 55 | * tag these hosts with 56 | * `project=rancher` 57 | * `rancher_role=server` 58 | * Load Balancer 59 | * 1 `t2.nano` instance with 8 GB of disk 60 | * tag these hosts with 61 | * `project=rancher` 62 | * `rancher_role=loadbalancer` 63 | * Nodes 64 | * 1 or more `t2.large` instances with 32 GB of disk 65 | * tag these hosts with 66 | * `project=rancher` 67 | * `rancher_role=node` 68 | * `rancher_env=1a5` (to put them in the Default environment) 69 | * Database 70 | * 1 `db.t2.small` RDS instance (MySQL or compatible) 71 | 72 | Place all instances in the `rancher` security group. 73 | 74 | *NOTE:* You may skip starting the RDS instance if you set `use_external_db` 75 | to `false` and only spin up a single instance for the Rancher Server. Ansible 76 | will then launch the Server container without pointing to an external 77 | database.
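The tags above are what the dynamic inventory keys on. If you prefer to tag from the command line rather than the EC2 console, the standard AWS CLI can apply the same tags after launch; the instance ID below is a placeholder, and the CLI itself is not something these playbooks require.
```
# Tag an already-launched node so the dynamic inventory will classify it (example instance ID)
$ aws ec2 create-tags --resources i-0123456789abcdef0 \
    --tags Key=project,Value=rancher Key=rancher_role,Value=node Key=rancher_env,Value=1a5
```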
78 | 79 | ## Enable the EC2 inventory provider 80 | 81 | In order to enable the provider, symlink it from `inventory_providers` to 82 | `inventory`: 83 | 84 | ``` 85 | $ cd inventory 86 | $ ln -s ../inventory_providers/ec2.py ec2.py 87 | ``` 88 | 89 | ## Edit `ec2.ini` 90 | 91 | The filter set in `inventory_providers/ec2.ini` will collect all hosts with 92 | `project=rancher` and then act on them according to their `rancher_role` tag. If you 93 | used something other than `project=rancher` (e.g. `owner=adrian`), edit `ec2.ini` and change the filter set on/around line 159. 94 | 95 | Additionally, if you're working in a region other than `us-east-2` (the 96 | provided default), set your regions on line 14. 97 | 98 | ## Test the connection 99 | 100 | If everything is set up correctly, you can populate the cache and verify that 101 | the output is as expected: 102 | 103 | ``` 104 | $ cd inventory 105 | $ ./ec2.py --refresh-cache | less 106 | ``` 107 | 108 | The cache lifetime is controlled by `cache_max_age` in `ec2.ini` (300 seconds in the provided configuration). If you make changes to EC2 tags or other 109 | parameters, you'll need to refresh the cache before Ansible picks up the 110 | changes. 111 | 112 | ## Continue the installation 113 | 114 | Continue the installation with [INSTALL.md](INSTALL.md). 115 | 116 | 117 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ansible Playbooks for Rancher Hosts 2 | 3 | This is a series of playbooks designed to quickly bring up a Rancher 4 | environment. 5 | 6 | This is an overview of how the project behaves. For specific instructions 7 | on installing and using these playbooks with static and dynamic 8 | inventories, please see [INSTALL.md](INSTALL.md). 9 | 10 | **NOTE:** This is a work in progress. As of this writing it will provision 11 | an Ubuntu 16.04 environment with static hosts or dynamic hosts in EC2. 12 | 13 | In the future it will adapt to RHEL/CentOS/Ubuntu/Debian according to 14 | the system where the playbooks run. It will also grow to support other 15 | providers with dynamic inventory support. 16 | 17 | ## Prerequisites 18 | 19 | ### Install Python 20 | 21 | Ubuntu 16.04 doesn't come with Python installed by default. You can either 22 | install it manually after booting the instances, or you can add the following 23 | as instance user data: 24 | ``` 25 | #!/bin/bash 26 | 27 | apt-get -qq update 28 | apt-get -qq -y install python python-pip 29 | ``` 30 | 31 | Alternatively, add the following: 32 | ``` 33 | #cloud-config 34 | 35 | package_upgrade: true 36 | package_update: true 37 | packages: 38 | - curl 39 | - python 40 | - python-pip 41 | ``` 42 | 43 | ### Ansible Vault 44 | 45 | This project uses the [Ansible Vault](http://docs.ansible.com/ansible/playbooks_vault.html) 46 | for storing private information. There is a sample vault provided with this 47 | repository. The password is `ansible` and can be changed by following the 48 | instructions on rekeying located [here](http://docs.ansible.com/ansible/playbooks_vault.html#rekeying-encrypted-files). 49 | 50 | If you wish to skip using the Vault and instead store passwords in plaintext 51 | in the various configuration files, you can do so by removing all references to 52 | `private.yml` from the `vars_files` key in any YAML file in the root of the 53 | project (e.g. `rancher.yml`, `haproxy.yml`, etc.) 54 | 55 | Prior to removing this file, copy its variables out to another variable file, 56 | such as `group_vars/all.yml`.
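Whether you keep the Vault or not, the standard `ansible-vault` subcommands cover the common operations. For example, rekeying the sample vault and checking its contents looks like this (you will be prompted for the current password, which is `ansible` for the sample vault):
```
$ ansible-vault rekey vars/private.yml
$ ansible-vault view vars/private.yml
```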
57 | 58 | ## Inventory 59 | 60 | The project uses a mixture of static and dynamic inventory. Static entries 61 | go into `static_server` and `static_node` in `inventory/hosts`. Dynamic 62 | hosts will be brought in and added to their corresponding groups. All hosts 63 | and groups will then be collected into `server:children`, 64 | `loadbalancer:children`, and `node:children` for processing by the playbooks 65 | themselves. 66 | 67 | ### Supported Inventory Systems 68 | 69 | * Static 70 | * EC2 71 | 72 | ### Enabling Dynamic Inventory Systems 73 | 74 | Dynamic inventory provider scripts and their configuration files are stored in `inventory_providers`. To activate one or more of them, symlink them to the `inventory` directory: 75 | ``` 76 | $ cd inventory 77 | $ ln -s ../inventory_providers/ec2.py ec2.py 78 | ``` 79 | 80 | ### EC2 81 | 82 | See [INSTALL_EC2.md](INSTALL_EC2.md) 83 | 84 | ## Playbooks 85 | 86 | All playbooks are included in `site.yml`. To execute a full run: 87 | ``` 88 | $ ansible-playbook site.yml 89 | ``` 90 | 91 | **Note: you won't be able to run a server and node install in the first run. 92 | You will need to install the server and then configure API keys in the Vault.** 93 | 94 | Optionally, you can filter by one or more roles: 95 | ``` 96 | $ ansible-playbook --limit node site.yml 97 | ``` 98 | 99 | Individual playbooks can be run as outlined below. 100 | 101 | ### Rancher 102 | 103 | This playbook installs the version of Docker indicated in `group_vars/all.yml` 104 | on hosts with a `role` tag of `server` or `node`. It goes on to install 105 | Rancher Server on all hosts with `role` set to `server`. If the `role` is set to `node`, and if there are API keys for the environment located in the 106 | Vault, it will register nodes with the Rancher server. 107 | 108 | The Rancher configuration in `group_vars/server.yml` designates the 109 | architecture: 110 | 111 | * single node, internal database 112 | * single node, external database 113 | * single node, bind-mount database 114 | * single node, force HA 115 | * sets external database 116 | * use this if you want HA and will add additional servers later 117 | 118 | Ansible will automatically configure Rancher to use an external database if 119 | any of the following are true: 120 | 121 | * `use_external_db` is `true` 122 | * `force_ha` is `true` 123 | * more than one instance has a tag of `role=server` 124 | 125 | If Rancher Server will use an external database, set the database parameters 126 | in `group_vars/server.yml` and set the `db_pass` in the Vault. 127 | 128 | Ansible will perform sanity checks and fail if the database parameters are 129 | missing, but it will not test that the parameters are actually correct. 130 | 131 | Ansible will create the database and its user if needed. 132 | 133 | See [INSTALL.md](INSTALL.md) for more information about automatic 134 | host registration. 135 | 136 | To run the Rancher playbook on its own, execute: 137 | ``` 138 | $ ansible-playbook rancher.yml 139 | ``` 140 | 141 | ### HAProxy 142 | 143 | This playbook installs HAProxy on hosts with `role` set to `loadbalancer`, or 144 | if no hosts exist with this tag, it will install HAProxy on hosts with `role` 145 | set to `server`. The latter is only appropriate for single-server 146 | environments. If you are running Rancher in an HA configuration, create 147 | additional instances tagged with `role=loadbalancer` and change `haproxy.yml` 148 | to run on nodes with this tag. 
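Before running it, you can check which hosts will receive HAProxy by listing the relevant inventory groups from the control station. This is only a sanity check using ad-hoc `ansible` options, not something the playbooks themselves use:
```
$ ansible loadbalancer --list-hosts
$ ansible server --list-hosts
```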
**NOTE: If you wish to disable HAProxy entirely, set `haproxy_enabled` to `false` in 151 | `vars/default.yml`.** 152 | 153 | After installing HAProxy, this playbook configures it for SSL 154 | termination using the certificate stored in the Vault. The 155 | certificate provided in the vault is a self-signed certificate for a fake 156 | domain - please replace it with your own certificate. 157 | 158 | HAProxy performs pass-through TCP proxying to Rancher Server using the 159 | PROXY protocol. This absolves us of the need to have HAProxy perform 160 | additional analysis of the content to enable WebSocket or gRPC 161 | communication between the server and the nodes. 162 | 163 | Ansible will automatically populate `haproxy.cfg` with the internal IPs of 164 | all Rancher servers (members of the `server` group). Should these IPs change 165 | (e.g. if servers are added or removed), or if you need to rebuild the 166 | configuration (such as if you change the certificate), simply re-run this 167 | playbook: 168 | 169 | ``` 170 | $ ansible-playbook --tags config haproxy.yml 171 | ``` 172 | 173 | ### Alternative Post-Install Node Setup (optional) 174 | 175 | _This section applies if you do not use these playbooks to register your 176 | nodes with Rancher automatically._ 177 | 178 | Since you already have an Ansible environment that knows your hosts by 179 | their EC2 tag, you can use this to install the Rancher Agent onto 180 | your nodes. 181 | 182 | After logging into the server and configuring access control, select 183 | your environment and add a node. Copy the command that Rancher gives you 184 | and use it from your Ansible control station: 185 | ``` 186 | $ ansible node -a "<paste the registration command from the Rancher UI here>" 187 | ``` 188 | 189 | This will reach out to all of your nodes in parallel and instruct them 190 | to install the agent. Within a few moments you'll see them appear in the 191 | UI. 192 | 193 | -------------------------------------------------------------------------------- /ansible.cfg: -------------------------------------------------------------------------------- 1 | [defaults] 2 | inventory = ./inventory 3 | gathering = smart 4 | fact_caching = jsonfile 5 | fact_caching_connection = ~/.ansible_facts 6 | fact_caching_timeout = 86400 7 | ask_vault_pass = True 8 | host_key_checking = False 9 | remote_user = ubuntu 10 | display_skipped_hosts = False 11 | 12 | [ssh_connection] 13 | pipelining = True 14 | -------------------------------------------------------------------------------- /haproxy.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # haproxy 3 | # 4 | # this only loads if there is a `loadbalancer` group.
if there is not, 5 | # then the haproxy role is loaded by the `rancher.yml` playbook for 6 | # the `server` group 7 | 8 | - hosts: 9 | - loadbalancer 10 | become: True 11 | gather_facts: True 12 | vars_files: 13 | - vars/default.yml 14 | - vars/private.yml 15 | roles: 16 | - { role: haproxy, tags: haproxy } 17 | 18 | -------------------------------------------------------------------------------- /inventory/group_vars/all.yml: -------------------------------------------------------------------------------- 1 | # variables for all systems 2 | 3 | docker_version: 1.12 4 | 5 | server_url: http://172.31.1.13:8080 6 | 7 | -------------------------------------------------------------------------------- /inventory/group_vars/ec2.yml: -------------------------------------------------------------------------------- 1 | # variables for EC2 hosts 2 | 3 | hostname: '{{ ec2_tag_Name if ec2_tag_Name else "ip-" ~ ec2_private_ip_address|replace(".", "-") }}' 4 | 5 | rancher_env: "{{ ec2_tag_rancher_env|default('unset') }}" 6 | 7 | public_ip: '{{ ec2_ip_address }}' 8 | private_ip: '{{ ec2_private_ip_address }}' -------------------------------------------------------------------------------- /inventory/group_vars/server.yml: -------------------------------------------------------------------------------- 1 | # variables for the server role 2 | 3 | # name of the server container - you don't really want to set this because 4 | # it prevents you from upgrading without first removing the existing container. 5 | # if your upgrade goes bad, it's more difficult to rollback. 6 | # server_name: rancher-server 7 | 8 | # the repo you want to pull from (either `server` or `enterprise`) 9 | server_repo: server 10 | 11 | # the tag that we want to pull (if using `enterprise`, there is no 12 | # `stable` tag) 13 | server_version: stable 14 | 15 | # if we're connected to an external db 16 | # this is turned on automatically if you have `force_ha` 17 | # or multiple hosts with the `role` of `server` 18 | use_external_db: False 19 | 20 | # alternatively, set this to a bind-mount location on 21 | # the host to put MySQL content there. *NOTE:* 22 | # `use_external_db` from above will take priority over 23 | # this variable. 
24 | bind_mount_location: /opt/docker/rancher/mysql-var 25 | 26 | # set this to true for servers to advertise their public IPs 27 | # instead of private IPs 28 | advertise_public_ip: False 29 | 30 | # set this to true to force HA if you only have one server 31 | force_ha: False 32 | 33 | # the following variables are only needed if `use_external_db` 34 | # is `True` 35 | db_host: some.rds.host.com 36 | db_user: cattle 37 | db_name: cattle 38 | db_port: 3306 39 | # db_password: store_in_vault 40 | 41 | # set this to a user who is able to create databases 42 | # for RDS this is the master user 43 | db_admin_user: dbadmin 44 | # db_admin_password: store_in_vault 45 | 46 | 47 | -------------------------------------------------------------------------------- /inventory/group_vars/static.yml: -------------------------------------------------------------------------------- 1 | # variables for static hosts 2 | 3 | private_ip: '{{ ansible_default_ipv4.address }}' 4 | 5 | -------------------------------------------------------------------------------- /inventory/host_vars/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/inventory/host_vars/.keep -------------------------------------------------------------------------------- /inventory/hosts: -------------------------------------------------------------------------------- 1 | # static groups 2 | [static_server] 3 | 4 | [static_node] 5 | 6 | [static_loadbalancer] 7 | 8 | # EC2 dynamic groups 9 | [tag_rancher_role_server] 10 | [tag_rancher_role_node] 11 | [tag_rancher_role_loadbalancer] 12 | 13 | 14 | ################################### 15 | ### DO NOT EDIT BELOW THIS LINE ### 16 | ################################### 17 | 18 | # this group is used for variable assignment of static hosts 19 | [static:children] 20 | static_server 21 | static_node 22 | static_loadbalancer 23 | 24 | # collected groups - these are what Ansible actually uses in its 25 | # playbooks 26 | 27 | [server:children] 28 | static_server 29 | tag_rancher_role_server 30 | 31 | [node:children] 32 | static_node 33 | tag_rancher_role_node 34 | 35 | [loadbalancer:children] 36 | static_loadbalancer 37 | tag_rancher_role_loadbalancer 38 | 39 | 40 | -------------------------------------------------------------------------------- /inventory_providers/ec2.ini: -------------------------------------------------------------------------------- 1 | # Ansible EC2 external inventory script settings 2 | # 3 | 4 | [ec2] 5 | 6 | # to talk to a private eucalyptus instance uncomment these lines 7 | # and edit edit eucalyptus_host to be the host name of your cloud controller 8 | #eucalyptus = True 9 | #eucalyptus_host = clc.cloud.domain.org 10 | 11 | # AWS regions to make calls to. Set this to 'all' to make request to all regions 12 | # in AWS and merge the results together. Alternatively, set this to a comma 13 | # separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2' 14 | regions = us-east-2 15 | regions_exclude = us-gov-west-1,cn-north-1 16 | 17 | # When generating inventory, Ansible needs to know how to address a server. 18 | # Each EC2 instance has a lot of variables associated with it. 
Here is the list: 19 | # http://docs.pythonboto.org/en/latest/ref/ec2.html#module-boto.ec2.instance 20 | # Below are 2 variables that are used as the address of a server: 21 | # - destination_variable 22 | # - vpc_destination_variable 23 | 24 | # This is the normal destination variable to use. If you are running Ansible 25 | # from outside EC2, then 'public_dns_name' makes the most sense. If you are 26 | # running Ansible from within EC2, then perhaps you want to use the internal 27 | # address, and should set this to 'private_dns_name'. The key of an EC2 tag 28 | # may optionally be used; however the boto instance variables hold precedence 29 | # in the event of a collision. 30 | destination_variable = public_dns_name 31 | 32 | # This allows you to override the inventory_name with an ec2 variable, instead 33 | # of using the destination_variable above. Addressing (aka ansible_ssh_host) 34 | # will still use destination_variable. Tags should be written as 'tag_TAGNAME'. 35 | #hostname_variable = tag_Name 36 | 37 | # For server inside a VPC, using DNS names may not make sense. When an instance 38 | # has 'subnet_id' set, this variable is used. If the subnet is public, setting 39 | # this to 'ip_address' will return the public IP address. For instances in a 40 | # private subnet, this should be set to 'private_ip_address', and Ansible must 41 | # be run from within EC2. The key of an EC2 tag may optionally be used; however 42 | # the boto instance variables hold precedence in the event of a collision. 43 | # WARNING: - instances that are in the private vpc, _without_ public ip address 44 | # will not be listed in the inventory until You set: 45 | # vpc_destination_variable = private_ip_address 46 | vpc_destination_variable = ip_address 47 | 48 | # The following two settings allow flexible ansible host naming based on a 49 | # python format string and a comma-separated list of ec2 tags. Note that: 50 | # 51 | # 1) If the tags referenced are not present for some instances, empty strings 52 | # will be substituted in the format string. 53 | # 2) This overrides both destination_variable and vpc_destination_variable. 54 | # 55 | #destination_format = {0}.{1}.example.com 56 | #destination_format_tags = Name,environment 57 | 58 | # To tag instances on EC2 with the resource records that point to them from 59 | # Route53, uncomment and set 'route53' to True. 60 | route53 = False 61 | 62 | # To exclude RDS instances from the inventory, uncomment and set to False. 63 | rds = False 64 | 65 | # To exclude ElastiCache instances from the inventory, uncomment and set to False. 66 | elasticache = False 67 | 68 | # Additionally, you can specify the list of zones to exclude looking up in 69 | # 'route53_excluded_zones' as a comma-separated list. 70 | # route53_excluded_zones = samplezone1.com, samplezone2.com 71 | 72 | # By default, only EC2 instances in the 'running' state are returned. Set 73 | # 'all_instances' to True to return all instances regardless of state. 74 | all_instances = False 75 | 76 | # By default, only EC2 instances in the 'running' state are returned. Specify 77 | # EC2 instance states to return as a comma-separated list. This 78 | # option is overridden when 'all_instances' is True. 79 | # instance_states = pending, running, shutting-down, terminated, stopping, stopped 80 | 81 | # By default, only RDS instances in the 'available' state are returned. Set 82 | # 'all_rds_instances' to True return all RDS instances regardless of state. 
83 | all_rds_instances = False 84 | 85 | # Include RDS cluster information (Aurora etc.) 86 | include_rds_clusters = False 87 | 88 | # By default, only ElastiCache clusters and nodes in the 'available' state 89 | # are returned. Set 'all_elasticache_clusters' and/or 'all_elastic_nodes' 90 | # to True return all ElastiCache clusters and nodes, regardless of state. 91 | # 92 | # Note that all_elasticache_nodes only applies to listed clusters. That means 93 | # if you set all_elastic_clusters to false, no node will be return from 94 | # unavailable clusters, regardless of the state and to what you set for 95 | # all_elasticache_nodes. 96 | all_elasticache_replication_groups = False 97 | all_elasticache_clusters = False 98 | all_elasticache_nodes = False 99 | 100 | # API calls to EC2 are slow. For this reason, we cache the results of an API 101 | # call. Set this to the path you want cache files to be written to. Two files 102 | # will be written to this directory: 103 | # - ansible-ec2.cache 104 | # - ansible-ec2.index 105 | cache_path = ~/.ansible/tmp 106 | 107 | # The number of seconds a cache file is considered valid. After this many 108 | # seconds, a new API call will be made, and the cache file will be updated. 109 | # To disable the cache, set this value to 0 110 | cache_max_age = 300 111 | 112 | # Organize groups into a nested/hierarchy instead of a flat namespace. 113 | nested_groups = False 114 | 115 | # Replace - tags when creating groups to avoid issues with ansible 116 | replace_dash_in_groups = True 117 | 118 | # If set to true, any tag of the form "a,b,c" is expanded into a list 119 | # and the results are used to create additional tag_* inventory groups. 120 | expand_csv_tags = False 121 | 122 | # The EC2 inventory output can become very large. To manage its size, 123 | # configure which groups should be created. 124 | group_by_instance_id = True 125 | group_by_region = True 126 | group_by_availability_zone = True 127 | group_by_aws_account = False 128 | group_by_ami_id = False 129 | group_by_instance_type = False 130 | group_by_key_pair = False 131 | group_by_vpc_id = True 132 | group_by_security_group = True 133 | group_by_tag_keys = True 134 | group_by_tag_none = False 135 | group_by_route53_names = False 136 | group_by_rds_engine = False 137 | group_by_rds_parameter_group = False 138 | group_by_elasticache_engine = False 139 | group_by_elasticache_cluster = False 140 | group_by_elasticache_parameter_group = False 141 | group_by_elasticache_replication_group = False 142 | 143 | # If you only want to include hosts that match a certain regular expression 144 | # pattern_include = staging-* 145 | 146 | # If you want to exclude any hosts that match a certain regular expression 147 | # pattern_exclude = staging-* 148 | 149 | # Instance filters can be used to control which instances are retrieved for 150 | # inventory. For the full list of possible filters, please read the EC2 API 151 | # docs: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstances.html#query-DescribeInstances-filters 152 | # Filters are key/value pairs separated by '=', to list multiple filters use 153 | # a list separated by commas. See examples below. 
154 | 155 | # Retrieve only instances with (key=value) env=staging tag 156 | # instance_filters = tag:env=staging 157 | 158 | # Retrieve only instances with role=webservers OR role=dbservers tag 159 | instance_filters = tag:project=rancher 160 | 161 | # Retrieve only t1.micro instances OR instances with tag env=staging 162 | # instance_filters = instance-type=t1.micro,tag:env=staging 163 | 164 | # You can use wildcards in filter values also. Below will list instances which 165 | # tag Name value matches webservers1* 166 | # (ex. webservers15, webservers1a, webservers123 etc) 167 | # instance_filters = tag:Name=webservers1* 168 | 169 | # A boto configuration profile may be used to separate out credentials 170 | # see http://boto.readthedocs.org/en/latest/boto_config_tut.html 171 | # boto_profile = some-boto-profile-name 172 | 173 | 174 | [credentials] 175 | 176 | # The AWS credentials can optionally be specified here. Credentials specified 177 | # here are ignored if the environment variable AWS_ACCESS_KEY_ID or 178 | # AWS_PROFILE is set, or if the boto_profile property above is set. 179 | # 180 | # Supplying AWS credentials here is not recommended, as it introduces 181 | # non-trivial security concerns. When going down this route, please make sure 182 | # to set access permissions for this file correctly, e.g. handle it the same 183 | # way as you would a private SSH key. 184 | # 185 | # Unlike the boto and AWS configure files, this section does not support 186 | # profiles. 187 | # 188 | # aws_access_key_id = AXXXXXXXXXXXXXX 189 | # aws_secret_access_key = XXXXXXXXXXXXXXXXXXX 190 | # aws_security_token = XXXXXXXXXXXXXXXXXXXXXXXXXXXX 191 | -------------------------------------------------------------------------------- /inventory_providers/ec2.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | ''' 4 | EC2 external inventory script 5 | ================================= 6 | 7 | Generates inventory that Ansible can understand by making API request to 8 | AWS EC2 using the Boto library. 9 | 10 | NOTE: This script assumes Ansible is being executed where the environment 11 | variables needed for Boto have already been set: 12 | export AWS_ACCESS_KEY_ID='AK123' 13 | export AWS_SECRET_ACCESS_KEY='abc123' 14 | 15 | This script also assumes there is an ec2.ini file alongside it. To specify a 16 | different path to ec2.ini, define the EC2_INI_PATH environment variable: 17 | 18 | export EC2_INI_PATH=/path/to/my_ec2.ini 19 | 20 | If you're using eucalyptus you need to set the above variables and 21 | you need to define: 22 | 23 | export EC2_URL=http://hostname_of_your_cc:port/services/Eucalyptus 24 | 25 | If you're using boto profiles (requires boto>=2.24.0) you can choose a profile 26 | using the --boto-profile command line argument (e.g. 
ec2.py --boto-profile prod) or using 27 | the AWS_PROFILE variable: 28 | 29 | AWS_PROFILE=prod ansible-playbook -i ec2.py myplaybook.yml 30 | 31 | For more details, see: http://docs.pythonboto.org/en/latest/boto_config_tut.html 32 | 33 | When run against a specific host, this script returns the following variables: 34 | - ec2_ami_launch_index 35 | - ec2_architecture 36 | - ec2_association 37 | - ec2_attachTime 38 | - ec2_attachment 39 | - ec2_attachmentId 40 | - ec2_block_devices 41 | - ec2_client_token 42 | - ec2_deleteOnTermination 43 | - ec2_description 44 | - ec2_deviceIndex 45 | - ec2_dns_name 46 | - ec2_eventsSet 47 | - ec2_group_name 48 | - ec2_hypervisor 49 | - ec2_id 50 | - ec2_image_id 51 | - ec2_instanceState 52 | - ec2_instance_type 53 | - ec2_ipOwnerId 54 | - ec2_ip_address 55 | - ec2_item 56 | - ec2_kernel 57 | - ec2_key_name 58 | - ec2_launch_time 59 | - ec2_monitored 60 | - ec2_monitoring 61 | - ec2_networkInterfaceId 62 | - ec2_ownerId 63 | - ec2_persistent 64 | - ec2_placement 65 | - ec2_platform 66 | - ec2_previous_state 67 | - ec2_private_dns_name 68 | - ec2_private_ip_address 69 | - ec2_publicIp 70 | - ec2_public_dns_name 71 | - ec2_ramdisk 72 | - ec2_reason 73 | - ec2_region 74 | - ec2_requester_id 75 | - ec2_root_device_name 76 | - ec2_root_device_type 77 | - ec2_security_group_ids 78 | - ec2_security_group_names 79 | - ec2_shutdown_state 80 | - ec2_sourceDestCheck 81 | - ec2_spot_instance_request_id 82 | - ec2_state 83 | - ec2_state_code 84 | - ec2_state_reason 85 | - ec2_status 86 | - ec2_subnet_id 87 | - ec2_tenancy 88 | - ec2_virtualization_type 89 | - ec2_vpc_id 90 | 91 | These variables are pulled out of a boto.ec2.instance object. There is a lack of 92 | consistency with variable spellings (camelCase and underscores) since this 93 | just loops through all variables the object exposes. It is preferred to use the 94 | ones with underscores when multiple exist. 95 | 96 | In addition, if an instance has AWS Tags associated with it, each tag is a new 97 | variable named: 98 | - ec2_tag_[Key] = [Value] 99 | 100 | Security groups are comma-separated in 'ec2_security_group_ids' and 101 | 'ec2_security_group_names'. 102 | ''' 103 | 104 | # (c) 2012, Peter Sankauskas 105 | # 106 | # This file is part of Ansible, 107 | # 108 | # Ansible is free software: you can redistribute it and/or modify 109 | # it under the terms of the GNU General Public License as published by 110 | # the Free Software Foundation, either version 3 of the License, or 111 | # (at your option) any later version. 112 | # 113 | # Ansible is distributed in the hope that it will be useful, 114 | # but WITHOUT ANY WARRANTY; without even the implied warranty of 115 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 116 | # GNU General Public License for more details. 117 | # 118 | # You should have received a copy of the GNU General Public License 119 | # along with Ansible. If not, see . 
120 | 121 | ###################################################################### 122 | 123 | import sys 124 | import os 125 | import argparse 126 | import re 127 | from time import time 128 | import boto 129 | from boto import ec2 130 | from boto import rds 131 | from boto import elasticache 132 | from boto import route53 133 | import six 134 | 135 | from ansible.module_utils import ec2 as ec2_utils 136 | 137 | HAS_BOTO3 = False 138 | try: 139 | import boto3 140 | HAS_BOTO3 = True 141 | except ImportError: 142 | pass 143 | 144 | from six.moves import configparser 145 | from collections import defaultdict 146 | 147 | try: 148 | import json 149 | except ImportError: 150 | import simplejson as json 151 | 152 | 153 | class Ec2Inventory(object): 154 | 155 | def _empty_inventory(self): 156 | return {"_meta" : {"hostvars" : {}}} 157 | 158 | def __init__(self): 159 | ''' Main execution path ''' 160 | 161 | # Inventory grouped by instance IDs, tags, security groups, regions, 162 | # and availability zones 163 | self.inventory = self._empty_inventory() 164 | 165 | self.aws_account_id = None 166 | 167 | # Index of hostname (address) to instance ID 168 | self.index = {} 169 | 170 | # Boto profile to use (if any) 171 | self.boto_profile = None 172 | 173 | # AWS credentials. 174 | self.credentials = {} 175 | 176 | # Read settings and parse CLI arguments 177 | self.parse_cli_args() 178 | self.read_settings() 179 | 180 | # Make sure that profile_name is not passed at all if not set 181 | # as pre 2.24 boto will fall over otherwise 182 | if self.boto_profile: 183 | if not hasattr(boto.ec2.EC2Connection, 'profile_name'): 184 | self.fail_with_error("boto version must be >= 2.24 to use profile") 185 | 186 | # Cache 187 | if self.args.refresh_cache: 188 | self.do_api_calls_update_cache() 189 | elif not self.is_cache_valid(): 190 | self.do_api_calls_update_cache() 191 | 192 | # Data to print 193 | if self.args.host: 194 | data_to_print = self.get_host_info() 195 | 196 | elif self.args.list: 197 | # Display list of instances for inventory 198 | if self.inventory == self._empty_inventory(): 199 | data_to_print = self.get_inventory_from_cache() 200 | else: 201 | data_to_print = self.json_format_dict(self.inventory, True) 202 | 203 | print(data_to_print) 204 | 205 | 206 | def is_cache_valid(self): 207 | ''' Determines if the cache files have expired, or if it is still valid ''' 208 | 209 | if os.path.isfile(self.cache_path_cache): 210 | mod_time = os.path.getmtime(self.cache_path_cache) 211 | current_time = time() 212 | if (mod_time + self.cache_max_age) > current_time: 213 | if os.path.isfile(self.cache_path_index): 214 | return True 215 | 216 | return False 217 | 218 | 219 | def read_settings(self): 220 | ''' Reads the settings from the ec2.ini file ''' 221 | if six.PY3: 222 | config = configparser.ConfigParser() 223 | else: 224 | config = configparser.SafeConfigParser() 225 | ec2_default_ini_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ec2.ini') 226 | ec2_ini_path = os.path.expanduser(os.path.expandvars(os.environ.get('EC2_INI_PATH', ec2_default_ini_path))) 227 | config.read(ec2_ini_path) 228 | 229 | # is eucalyptus? 
230 | self.eucalyptus_host = None 231 | self.eucalyptus = False 232 | if config.has_option('ec2', 'eucalyptus'): 233 | self.eucalyptus = config.getboolean('ec2', 'eucalyptus') 234 | if self.eucalyptus and config.has_option('ec2', 'eucalyptus_host'): 235 | self.eucalyptus_host = config.get('ec2', 'eucalyptus_host') 236 | 237 | # Regions 238 | self.regions = [] 239 | configRegions = config.get('ec2', 'regions') 240 | configRegions_exclude = config.get('ec2', 'regions_exclude') 241 | if (configRegions == 'all'): 242 | if self.eucalyptus_host: 243 | self.regions.append(boto.connect_euca(host=self.eucalyptus_host).region.name, **self.credentials) 244 | else: 245 | for regionInfo in ec2.regions(): 246 | if regionInfo.name not in configRegions_exclude: 247 | self.regions.append(regionInfo.name) 248 | else: 249 | self.regions = configRegions.split(",") 250 | 251 | # Destination addresses 252 | self.destination_variable = config.get('ec2', 'destination_variable') 253 | self.vpc_destination_variable = config.get('ec2', 'vpc_destination_variable') 254 | 255 | if config.has_option('ec2', 'hostname_variable'): 256 | self.hostname_variable = config.get('ec2', 'hostname_variable') 257 | else: 258 | self.hostname_variable = None 259 | 260 | if config.has_option('ec2', 'destination_format') and \ 261 | config.has_option('ec2', 'destination_format_tags'): 262 | self.destination_format = config.get('ec2', 'destination_format') 263 | self.destination_format_tags = config.get('ec2', 'destination_format_tags').split(',') 264 | else: 265 | self.destination_format = None 266 | self.destination_format_tags = None 267 | 268 | # Route53 269 | self.route53_enabled = config.getboolean('ec2', 'route53') 270 | self.route53_excluded_zones = [] 271 | if config.has_option('ec2', 'route53_excluded_zones'): 272 | self.route53_excluded_zones.extend( 273 | config.get('ec2', 'route53_excluded_zones', '').split(',')) 274 | 275 | # Include RDS instances? 276 | self.rds_enabled = True 277 | if config.has_option('ec2', 'rds'): 278 | self.rds_enabled = config.getboolean('ec2', 'rds') 279 | 280 | # Include RDS cluster instances? 281 | if config.has_option('ec2', 'include_rds_clusters'): 282 | self.include_rds_clusters = config.getboolean('ec2', 'include_rds_clusters') 283 | else: 284 | self.include_rds_clusters = False 285 | 286 | # Include ElastiCache instances? 287 | self.elasticache_enabled = True 288 | if config.has_option('ec2', 'elasticache'): 289 | self.elasticache_enabled = config.getboolean('ec2', 'elasticache') 290 | 291 | # Return all EC2 instances? 292 | if config.has_option('ec2', 'all_instances'): 293 | self.all_instances = config.getboolean('ec2', 'all_instances') 294 | else: 295 | self.all_instances = False 296 | 297 | # Instance states to be gathered in inventory. Default is 'running'. 298 | # Setting 'all_instances' to 'yes' overrides this option. 299 | ec2_valid_instance_states = [ 300 | 'pending', 301 | 'running', 302 | 'shutting-down', 303 | 'terminated', 304 | 'stopping', 305 | 'stopped' 306 | ] 307 | self.ec2_instance_states = [] 308 | if self.all_instances: 309 | self.ec2_instance_states = ec2_valid_instance_states 310 | elif config.has_option('ec2', 'instance_states'): 311 | for instance_state in config.get('ec2', 'instance_states').split(','): 312 | instance_state = instance_state.strip() 313 | if instance_state not in ec2_valid_instance_states: 314 | continue 315 | self.ec2_instance_states.append(instance_state) 316 | else: 317 | self.ec2_instance_states = ['running'] 318 | 319 | # Return all RDS instances? 
(if RDS is enabled) 320 | if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled: 321 | self.all_rds_instances = config.getboolean('ec2', 'all_rds_instances') 322 | else: 323 | self.all_rds_instances = False 324 | 325 | # Return all ElastiCache replication groups? (if ElastiCache is enabled) 326 | if config.has_option('ec2', 'all_elasticache_replication_groups') and self.elasticache_enabled: 327 | self.all_elasticache_replication_groups = config.getboolean('ec2', 'all_elasticache_replication_groups') 328 | else: 329 | self.all_elasticache_replication_groups = False 330 | 331 | # Return all ElastiCache clusters? (if ElastiCache is enabled) 332 | if config.has_option('ec2', 'all_elasticache_clusters') and self.elasticache_enabled: 333 | self.all_elasticache_clusters = config.getboolean('ec2', 'all_elasticache_clusters') 334 | else: 335 | self.all_elasticache_clusters = False 336 | 337 | # Return all ElastiCache nodes? (if ElastiCache is enabled) 338 | if config.has_option('ec2', 'all_elasticache_nodes') and self.elasticache_enabled: 339 | self.all_elasticache_nodes = config.getboolean('ec2', 'all_elasticache_nodes') 340 | else: 341 | self.all_elasticache_nodes = False 342 | 343 | # boto configuration profile (prefer CLI argument) 344 | self.boto_profile = self.args.boto_profile 345 | if config.has_option('ec2', 'boto_profile') and not self.boto_profile: 346 | self.boto_profile = config.get('ec2', 'boto_profile') 347 | 348 | # AWS credentials (prefer environment variables) 349 | if not (self.boto_profile or os.environ.get('AWS_ACCESS_KEY_ID') or 350 | os.environ.get('AWS_PROFILE')): 351 | if config.has_option('credentials', 'aws_access_key_id'): 352 | aws_access_key_id = config.get('credentials', 'aws_access_key_id') 353 | else: 354 | aws_access_key_id = None 355 | if config.has_option('credentials', 'aws_secret_access_key'): 356 | aws_secret_access_key = config.get('credentials', 'aws_secret_access_key') 357 | else: 358 | aws_secret_access_key = None 359 | if config.has_option('credentials', 'aws_security_token'): 360 | aws_security_token = config.get('credentials', 'aws_security_token') 361 | else: 362 | aws_security_token = None 363 | if aws_access_key_id: 364 | self.credentials = { 365 | 'aws_access_key_id': aws_access_key_id, 366 | 'aws_secret_access_key': aws_secret_access_key 367 | } 368 | if aws_security_token: 369 | self.credentials['security_token'] = aws_security_token 370 | 371 | # Cache related 372 | cache_dir = os.path.expanduser(config.get('ec2', 'cache_path')) 373 | if self.boto_profile: 374 | cache_dir = os.path.join(cache_dir, 'profile_' + self.boto_profile) 375 | if not os.path.exists(cache_dir): 376 | os.makedirs(cache_dir) 377 | 378 | cache_name = 'ansible-ec2' 379 | aws_profile = lambda: (self.boto_profile or 380 | os.environ.get('AWS_PROFILE') or 381 | os.environ.get('AWS_ACCESS_KEY_ID') or 382 | self.credentials.get('aws_access_key_id', None)) 383 | if aws_profile(): 384 | cache_name = '%s-%s' % (cache_name, aws_profile()) 385 | self.cache_path_cache = cache_dir + "/%s.cache" % cache_name 386 | self.cache_path_index = cache_dir + "/%s.index" % cache_name 387 | self.cache_max_age = config.getint('ec2', 'cache_max_age') 388 | 389 | if config.has_option('ec2', 'expand_csv_tags'): 390 | self.expand_csv_tags = config.getboolean('ec2', 'expand_csv_tags') 391 | else: 392 | self.expand_csv_tags = False 393 | 394 | # Configure nested groups instead of flat namespace. 
395 | if config.has_option('ec2', 'nested_groups'): 396 | self.nested_groups = config.getboolean('ec2', 'nested_groups') 397 | else: 398 | self.nested_groups = False 399 | 400 | # Replace dash or not in group names 401 | if config.has_option('ec2', 'replace_dash_in_groups'): 402 | self.replace_dash_in_groups = config.getboolean('ec2', 'replace_dash_in_groups') 403 | else: 404 | self.replace_dash_in_groups = True 405 | 406 | # Configure which groups should be created. 407 | group_by_options = [ 408 | 'group_by_instance_id', 409 | 'group_by_region', 410 | 'group_by_availability_zone', 411 | 'group_by_ami_id', 412 | 'group_by_instance_type', 413 | 'group_by_key_pair', 414 | 'group_by_vpc_id', 415 | 'group_by_security_group', 416 | 'group_by_tag_keys', 417 | 'group_by_tag_none', 418 | 'group_by_route53_names', 419 | 'group_by_rds_engine', 420 | 'group_by_rds_parameter_group', 421 | 'group_by_elasticache_engine', 422 | 'group_by_elasticache_cluster', 423 | 'group_by_elasticache_parameter_group', 424 | 'group_by_elasticache_replication_group', 425 | 'group_by_aws_account', 426 | ] 427 | for option in group_by_options: 428 | if config.has_option('ec2', option): 429 | setattr(self, option, config.getboolean('ec2', option)) 430 | else: 431 | setattr(self, option, True) 432 | 433 | # Do we need to just include hosts that match a pattern? 434 | try: 435 | pattern_include = config.get('ec2', 'pattern_include') 436 | if pattern_include and len(pattern_include) > 0: 437 | self.pattern_include = re.compile(pattern_include) 438 | else: 439 | self.pattern_include = None 440 | except configparser.NoOptionError: 441 | self.pattern_include = None 442 | 443 | # Do we need to exclude hosts that match a pattern? 444 | try: 445 | pattern_exclude = config.get('ec2', 'pattern_exclude'); 446 | if pattern_exclude and len(pattern_exclude) > 0: 447 | self.pattern_exclude = re.compile(pattern_exclude) 448 | else: 449 | self.pattern_exclude = None 450 | except configparser.NoOptionError: 451 | self.pattern_exclude = None 452 | 453 | # Instance filters (see boto and EC2 API docs). Ignore invalid filters. 
454 | self.ec2_instance_filters = defaultdict(list) 455 | if config.has_option('ec2', 'instance_filters'): 456 | 457 | filters = [f for f in config.get('ec2', 'instance_filters').split(',') if f] 458 | 459 | for instance_filter in filters: 460 | instance_filter = instance_filter.strip() 461 | if not instance_filter or '=' not in instance_filter: 462 | continue 463 | filter_key, filter_value = [x.strip() for x in instance_filter.split('=', 1)] 464 | if not filter_key: 465 | continue 466 | self.ec2_instance_filters[filter_key].append(filter_value) 467 | 468 | def parse_cli_args(self): 469 | ''' Command line argument processing ''' 470 | 471 | parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on EC2') 472 | parser.add_argument('--list', action='store_true', default=True, 473 | help='List instances (default: True)') 474 | parser.add_argument('--host', action='store', 475 | help='Get all the variables about a specific instance') 476 | parser.add_argument('--refresh-cache', action='store_true', default=False, 477 | help='Force refresh of cache by making API requests to EC2 (default: False - use cache files)') 478 | parser.add_argument('--profile', '--boto-profile', action='store', dest='boto_profile', 479 | help='Use boto profile for connections to EC2') 480 | self.args = parser.parse_args() 481 | 482 | 483 | def do_api_calls_update_cache(self): 484 | ''' Do API calls to each region, and save data in cache files ''' 485 | 486 | if self.route53_enabled: 487 | self.get_route53_records() 488 | 489 | for region in self.regions: 490 | self.get_instances_by_region(region) 491 | if self.rds_enabled: 492 | self.get_rds_instances_by_region(region) 493 | if self.elasticache_enabled: 494 | self.get_elasticache_clusters_by_region(region) 495 | self.get_elasticache_replication_groups_by_region(region) 496 | if self.include_rds_clusters: 497 | self.include_rds_clusters_by_region(region) 498 | 499 | self.write_to_cache(self.inventory, self.cache_path_cache) 500 | self.write_to_cache(self.index, self.cache_path_index) 501 | 502 | def connect(self, region): 503 | ''' create connection to api server''' 504 | if self.eucalyptus: 505 | conn = boto.connect_euca(host=self.eucalyptus_host, **self.credentials) 506 | conn.APIVersion = '2010-08-31' 507 | else: 508 | conn = self.connect_to_aws(ec2, region) 509 | return conn 510 | 511 | def boto_fix_security_token_in_profile(self, connect_args): 512 | ''' monkey patch for boto issue boto/boto#2100 ''' 513 | profile = 'profile ' + self.boto_profile 514 | if boto.config.has_option(profile, 'aws_security_token'): 515 | connect_args['security_token'] = boto.config.get(profile, 'aws_security_token') 516 | return connect_args 517 | 518 | def connect_to_aws(self, module, region): 519 | connect_args = self.credentials 520 | 521 | # only pass the profile name if it's set (as it is not supported by older boto versions) 522 | if self.boto_profile: 523 | connect_args['profile_name'] = self.boto_profile 524 | self.boto_fix_security_token_in_profile(connect_args) 525 | 526 | conn = module.connect_to_region(region, **connect_args) 527 | # connect_to_region will fail "silently" by returning None if the region name is wrong or not supported 528 | if conn is None: 529 | self.fail_with_error("region name: %s likely not supported, or AWS is down. connection to region failed." 
% region) 530 | return conn 531 | 532 | def get_instances_by_region(self, region): 533 | ''' Makes an AWS EC2 API call to the list of instances in a particular 534 | region ''' 535 | 536 | try: 537 | conn = self.connect(region) 538 | reservations = [] 539 | if self.ec2_instance_filters: 540 | for filter_key, filter_values in self.ec2_instance_filters.items(): 541 | reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values })) 542 | else: 543 | reservations = conn.get_all_instances() 544 | 545 | # Pull the tags back in a second step 546 | # AWS are on record as saying that the tags fetched in the first `get_all_instances` request are not 547 | # reliable and may be missing, and the only way to guarantee they are there is by calling `get_all_tags` 548 | instance_ids = [] 549 | for reservation in reservations: 550 | instance_ids.extend([instance.id for instance in reservation.instances]) 551 | 552 | max_filter_value = 199 553 | tags = [] 554 | for i in range(0, len(instance_ids), max_filter_value): 555 | tags.extend(conn.get_all_tags(filters={'resource-type': 'instance', 'resource-id': instance_ids[i:i+max_filter_value]})) 556 | 557 | tags_by_instance_id = defaultdict(dict) 558 | for tag in tags: 559 | tags_by_instance_id[tag.res_id][tag.name] = tag.value 560 | 561 | if (not self.aws_account_id) and reservations: 562 | self.aws_account_id = reservations[0].owner_id 563 | 564 | for reservation in reservations: 565 | for instance in reservation.instances: 566 | instance.tags = tags_by_instance_id[instance.id] 567 | self.add_instance(instance, region) 568 | 569 | except boto.exception.BotoServerError as e: 570 | if e.error_code == 'AuthFailure': 571 | error = self.get_auth_error_message() 572 | else: 573 | backend = 'Eucalyptus' if self.eucalyptus else 'AWS' 574 | error = "Error connecting to %s backend.\n%s" % (backend, e.message) 575 | self.fail_with_error(error, 'getting EC2 instances') 576 | 577 | def get_rds_instances_by_region(self, region): 578 | ''' Makes an AWS API call to the list of RDS instances in a particular 579 | region ''' 580 | 581 | try: 582 | conn = self.connect_to_aws(rds, region) 583 | if conn: 584 | marker = None 585 | while True: 586 | instances = conn.get_all_dbinstances(marker=marker) 587 | marker = instances.marker 588 | for instance in instances: 589 | self.add_rds_instance(instance, region) 590 | if not marker: 591 | break 592 | except boto.exception.BotoServerError as e: 593 | error = e.reason 594 | 595 | if e.error_code == 'AuthFailure': 596 | error = self.get_auth_error_message() 597 | if not e.reason == "Forbidden": 598 | error = "Looks like AWS RDS is down:\n%s" % e.message 599 | self.fail_with_error(error, 'getting RDS instances') 600 | 601 | def include_rds_clusters_by_region(self, region): 602 | if not HAS_BOTO3: 603 | self.fail_with_error("Working with RDS clusters requires boto3 - please install boto3 and try again", 604 | "getting RDS clusters") 605 | 606 | client = ec2_utils.boto3_inventory_conn('client', 'rds', region, **self.credentials) 607 | 608 | marker, clusters = '', [] 609 | while marker is not None: 610 | resp = client.describe_db_clusters(Marker=marker) 611 | clusters.extend(resp["DBClusters"]) 612 | marker = resp.get('Marker', None) 613 | 614 | account_id = boto.connect_iam().get_user().arn.split(':')[4] 615 | c_dict = {} 616 | for c in clusters: 617 | # remove these datetime objects as there is no serialisation to json 618 | # currently in place and we don't need the data yet 619 | if 'EarliestRestorableTime' in c: 620 
| del c['EarliestRestorableTime'] 621 | if 'LatestRestorableTime' in c: 622 | del c['LatestRestorableTime'] 623 | 624 | if self.ec2_instance_filters == {}: 625 | matches_filter = True 626 | else: 627 | matches_filter = False 628 | 629 | try: 630 | # arn:aws:rds:::: 631 | tags = client.list_tags_for_resource( 632 | ResourceName='arn:aws:rds:' + region + ':' + account_id + ':cluster:' + c['DBClusterIdentifier']) 633 | c['Tags'] = tags['TagList'] 634 | 635 | if self.ec2_instance_filters: 636 | for filter_key, filter_values in self.ec2_instance_filters.items(): 637 | # get AWS tag key e.g. tag:env will be 'env' 638 | tag_name = filter_key.split(":", 1)[1] 639 | # Filter values is a list (if you put multiple values for the same tag name) 640 | matches_filter = any(d['Key'] == tag_name and d['Value'] in filter_values for d in c['Tags']) 641 | 642 | if matches_filter: 643 | # it matches a filter, so stop looking for further matches 644 | break 645 | 646 | except Exception as e: 647 | if e.message.find('DBInstanceNotFound') >= 0: 648 | # AWS RDS bug (2016-01-06) means deletion does not fully complete and leave an 'empty' cluster. 649 | # Ignore errors when trying to find tags for these 650 | pass 651 | 652 | # ignore empty clusters caused by AWS bug 653 | if len(c['DBClusterMembers']) == 0: 654 | continue 655 | elif matches_filter: 656 | c_dict[c['DBClusterIdentifier']] = c 657 | 658 | self.inventory['db_clusters'] = c_dict 659 | 660 | def get_elasticache_clusters_by_region(self, region): 661 | ''' Makes an AWS API call to the list of ElastiCache clusters (with 662 | nodes' info) in a particular region.''' 663 | 664 | # ElastiCache boto module doesn't provide a get_all_intances method, 665 | # that's why we need to call describe directly (it would be called by 666 | # the shorthand method anyway...) 667 | try: 668 | conn = self.connect_to_aws(elasticache, region) 669 | if conn: 670 | # show_cache_node_info = True 671 | # because we also want nodes' information 672 | response = conn.describe_cache_clusters(None, None, None, True) 673 | 674 | except boto.exception.BotoServerError as e: 675 | error = e.reason 676 | 677 | if e.error_code == 'AuthFailure': 678 | error = self.get_auth_error_message() 679 | if not e.reason == "Forbidden": 680 | error = "Looks like AWS ElastiCache is down:\n%s" % e.message 681 | self.fail_with_error(error, 'getting ElastiCache clusters') 682 | 683 | try: 684 | # Boto also doesn't provide wrapper classes to CacheClusters or 685 | # CacheNodes. Because of that we can't make use of the get_list 686 | # method in the AWSQueryConnection. Let's do the work manually 687 | clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters'] 688 | 689 | except KeyError as e: 690 | error = "ElastiCache query to AWS failed (unexpected format)." 691 | self.fail_with_error(error, 'getting ElastiCache clusters') 692 | 693 | for cluster in clusters: 694 | self.add_elasticache_cluster(cluster, region) 695 | 696 | def get_elasticache_replication_groups_by_region(self, region): 697 | ''' Makes an AWS API call to the list of ElastiCache replication groups 698 | in a particular region.''' 699 | 700 | # ElastiCache boto module doesn't provide a get_all_intances method, 701 | # that's why we need to call describe directly (it would be called by 702 | # the shorthand method anyway...) 
703 | try: 704 | conn = self.connect_to_aws(elasticache, region) 705 | if conn: 706 | response = conn.describe_replication_groups() 707 | 708 | except boto.exception.BotoServerError as e: 709 | error = e.reason 710 | 711 | if e.error_code == 'AuthFailure': 712 | error = self.get_auth_error_message() 713 | if not e.reason == "Forbidden": 714 | error = "Looks like AWS ElastiCache [Replication Groups] is down:\n%s" % e.message 715 | self.fail_with_error(error, 'getting ElastiCache clusters') 716 | 717 | try: 718 | # Boto also doesn't provide wrapper classes to ReplicationGroups 719 | # Because of that we can't make use of the get_list method in the 720 | # AWSQueryConnection. Let's do the work manually 721 | replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups'] 722 | 723 | except KeyError as e: 724 | error = "ElastiCache [Replication Groups] query to AWS failed (unexpected format)." 725 | self.fail_with_error(error, 'getting ElastiCache clusters') 726 | 727 | for replication_group in replication_groups: 728 | self.add_elasticache_replication_group(replication_group, region) 729 | 730 | def get_auth_error_message(self): 731 | ''' create an informative error message if there is an issue authenticating''' 732 | errors = ["Authentication error retrieving ec2 inventory."] 733 | if None in [os.environ.get('AWS_ACCESS_KEY_ID'), os.environ.get('AWS_SECRET_ACCESS_KEY')]: 734 | errors.append(' - No AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY environment vars found') 735 | else: 736 | errors.append(' - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment vars found but may not be correct') 737 | 738 | boto_paths = ['/etc/boto.cfg', '~/.boto', '~/.aws/credentials'] 739 | boto_config_found = list(p for p in boto_paths if os.path.isfile(os.path.expanduser(p))) 740 | if len(boto_config_found) > 0: 741 | errors.append(" - Boto configs found at '%s', but the credentials contained may not be correct" % ', '.join(boto_config_found)) 742 | else: 743 | errors.append(" - No Boto config found at any expected location '%s'" % ', '.join(boto_paths)) 744 | 745 | return '\n'.join(errors) 746 | 747 | def fail_with_error(self, err_msg, err_operation=None): 748 | '''log an error to std err for ansible-playbook to consume and exit''' 749 | if err_operation: 750 | err_msg = 'ERROR: "{err_msg}", while: {err_operation}'.format( 751 | err_msg=err_msg, err_operation=err_operation) 752 | sys.stderr.write(err_msg) 753 | sys.exit(1) 754 | 755 | def get_instance(self, region, instance_id): 756 | conn = self.connect(region) 757 | 758 | reservations = conn.get_all_instances([instance_id]) 759 | for reservation in reservations: 760 | for instance in reservation.instances: 761 | return instance 762 | 763 | def add_instance(self, instance, region): 764 | ''' Adds an instance to the inventory and index, as long as it is 765 | addressable ''' 766 | 767 | # Only return instances with desired instance states 768 | if instance.state not in self.ec2_instance_states: 769 | return 770 | 771 | # Select the best destination address 772 | if self.destination_format and self.destination_format_tags: 773 | dest = self.destination_format.format(*[ getattr(instance, 'tags').get(tag, '') for tag in self.destination_format_tags ]) 774 | elif instance.subnet_id: 775 | dest = getattr(instance, self.vpc_destination_variable, None) 776 | if dest is None: 777 | dest = getattr(instance, 'tags').get(self.vpc_destination_variable, None) 778 | else: 779 | dest = getattr(instance, 
self.destination_variable, None) 780 | if dest is None: 781 | dest = getattr(instance, 'tags').get(self.destination_variable, None) 782 | 783 | if not dest: 784 | # Skip instances we cannot address (e.g. private VPC subnet) 785 | return 786 | 787 | # Set the inventory name 788 | hostname = None 789 | if self.hostname_variable: 790 | if self.hostname_variable.startswith('tag_'): 791 | hostname = instance.tags.get(self.hostname_variable[4:], None) 792 | else: 793 | hostname = getattr(instance, self.hostname_variable) 794 | 795 | # If we can't get a nice hostname, use the destination address 796 | if not hostname: 797 | hostname = dest 798 | else: 799 | hostname = self.to_safe(hostname).lower() 800 | 801 | # if we only want to include hosts that match a pattern, skip those that don't 802 | if self.pattern_include and not self.pattern_include.match(hostname): 803 | return 804 | 805 | # if we need to exclude hosts that match a pattern, skip those 806 | if self.pattern_exclude and self.pattern_exclude.match(hostname): 807 | return 808 | 809 | # Add to index 810 | self.index[hostname] = [region, instance.id] 811 | 812 | # Inventory: Group by instance ID (always a group of 1) 813 | if self.group_by_instance_id: 814 | self.inventory[instance.id] = [hostname] 815 | if self.nested_groups: 816 | self.push_group(self.inventory, 'instances', instance.id) 817 | 818 | # Inventory: Group by region 819 | if self.group_by_region: 820 | self.push(self.inventory, region, hostname) 821 | if self.nested_groups: 822 | self.push_group(self.inventory, 'regions', region) 823 | 824 | # Inventory: Group by availability zone 825 | if self.group_by_availability_zone: 826 | self.push(self.inventory, instance.placement, hostname) 827 | if self.nested_groups: 828 | if self.group_by_region: 829 | self.push_group(self.inventory, region, instance.placement) 830 | self.push_group(self.inventory, 'zones', instance.placement) 831 | 832 | # Inventory: Group by Amazon Machine Image (AMI) ID 833 | if self.group_by_ami_id: 834 | ami_id = self.to_safe(instance.image_id) 835 | self.push(self.inventory, ami_id, hostname) 836 | if self.nested_groups: 837 | self.push_group(self.inventory, 'images', ami_id) 838 | 839 | # Inventory: Group by instance type 840 | if self.group_by_instance_type: 841 | type_name = self.to_safe('type_' + instance.instance_type) 842 | self.push(self.inventory, type_name, hostname) 843 | if self.nested_groups: 844 | self.push_group(self.inventory, 'types', type_name) 845 | 846 | # Inventory: Group by key pair 847 | if self.group_by_key_pair and instance.key_name: 848 | key_name = self.to_safe('key_' + instance.key_name) 849 | self.push(self.inventory, key_name, hostname) 850 | if self.nested_groups: 851 | self.push_group(self.inventory, 'keys', key_name) 852 | 853 | # Inventory: Group by VPC 854 | if self.group_by_vpc_id and instance.vpc_id: 855 | vpc_id_name = self.to_safe('vpc_id_' + instance.vpc_id) 856 | self.push(self.inventory, vpc_id_name, hostname) 857 | if self.nested_groups: 858 | self.push_group(self.inventory, 'vpcs', vpc_id_name) 859 | 860 | # Inventory: Group by security group 861 | if self.group_by_security_group: 862 | try: 863 | for group in instance.groups: 864 | key = self.to_safe("security_group_" + group.name) 865 | self.push(self.inventory, key, hostname) 866 | if self.nested_groups: 867 | self.push_group(self.inventory, 'security_groups', key) 868 | except AttributeError: 869 | self.fail_with_error('\n'.join(['Package boto seems a bit older.', 870 | 'Please upgrade boto >= 2.3.0.'])) 871 | 
872 | # Inventory: Group by AWS account ID 873 | if self.group_by_aws_account: 874 | self.push(self.inventory, self.aws_account_id, dest) 875 | if self.nested_groups: 876 | self.push_group(self.inventory, 'accounts', self.aws_account_id) 877 | 878 | # Inventory: Group by tag keys 879 | if self.group_by_tag_keys: 880 | for k, v in instance.tags.items(): 881 | if self.expand_csv_tags and v and ',' in v: 882 | values = map(lambda x: x.strip(), v.split(',')) 883 | else: 884 | values = [v] 885 | 886 | for v in values: 887 | if v: 888 | key = self.to_safe("tag_" + k + "=" + v) 889 | else: 890 | key = self.to_safe("tag_" + k) 891 | self.push(self.inventory, key, hostname) 892 | if self.nested_groups: 893 | self.push_group(self.inventory, 'tags', self.to_safe("tag_" + k)) 894 | if v: 895 | self.push_group(self.inventory, self.to_safe("tag_" + k), key) 896 | 897 | # Inventory: Group by Route53 domain names if enabled 898 | if self.route53_enabled and self.group_by_route53_names: 899 | route53_names = self.get_instance_route53_names(instance) 900 | for name in route53_names: 901 | self.push(self.inventory, name, hostname) 902 | if self.nested_groups: 903 | self.push_group(self.inventory, 'route53', name) 904 | 905 | # Global Tag: instances without tags 906 | if self.group_by_tag_none and len(instance.tags) == 0: 907 | self.push(self.inventory, 'tag_none', hostname) 908 | if self.nested_groups: 909 | self.push_group(self.inventory, 'tags', 'tag_none') 910 | 911 | # Global Tag: tag all EC2 instances 912 | self.push(self.inventory, 'ec2', hostname) 913 | 914 | self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance) 915 | self.inventory["_meta"]["hostvars"][hostname]['ansible_ssh_host'] = dest 916 | 917 | 918 | def add_rds_instance(self, instance, region): 919 | ''' Adds an RDS instance to the inventory and index, as long as it is 920 | addressable ''' 921 | 922 | # Only want available instances unless all_rds_instances is True 923 | if not self.all_rds_instances and instance.status != 'available': 924 | return 925 | 926 | # Select the best destination address 927 | dest = instance.endpoint[0] 928 | 929 | if not dest: 930 | # Skip instances we cannot address (e.g. 
private VPC subnet) 931 | return 932 | 933 | # Set the inventory name 934 | hostname = None 935 | if self.hostname_variable: 936 | if self.hostname_variable.startswith('tag_'): 937 | hostname = instance.tags.get(self.hostname_variable[4:], None) 938 | else: 939 | hostname = getattr(instance, self.hostname_variable) 940 | 941 | # If we can't get a nice hostname, use the destination address 942 | if not hostname: 943 | hostname = dest 944 | 945 | hostname = self.to_safe(hostname).lower() 946 | 947 | # Add to index 948 | self.index[hostname] = [region, instance.id] 949 | 950 | # Inventory: Group by instance ID (always a group of 1) 951 | if self.group_by_instance_id: 952 | self.inventory[instance.id] = [hostname] 953 | if self.nested_groups: 954 | self.push_group(self.inventory, 'instances', instance.id) 955 | 956 | # Inventory: Group by region 957 | if self.group_by_region: 958 | self.push(self.inventory, region, hostname) 959 | if self.nested_groups: 960 | self.push_group(self.inventory, 'regions', region) 961 | 962 | # Inventory: Group by availability zone 963 | if self.group_by_availability_zone: 964 | self.push(self.inventory, instance.availability_zone, hostname) 965 | if self.nested_groups: 966 | if self.group_by_region: 967 | self.push_group(self.inventory, region, instance.availability_zone) 968 | self.push_group(self.inventory, 'zones', instance.availability_zone) 969 | 970 | # Inventory: Group by instance type 971 | if self.group_by_instance_type: 972 | type_name = self.to_safe('type_' + instance.instance_class) 973 | self.push(self.inventory, type_name, hostname) 974 | if self.nested_groups: 975 | self.push_group(self.inventory, 'types', type_name) 976 | 977 | # Inventory: Group by VPC 978 | if self.group_by_vpc_id and instance.subnet_group and instance.subnet_group.vpc_id: 979 | vpc_id_name = self.to_safe('vpc_id_' + instance.subnet_group.vpc_id) 980 | self.push(self.inventory, vpc_id_name, hostname) 981 | if self.nested_groups: 982 | self.push_group(self.inventory, 'vpcs', vpc_id_name) 983 | 984 | # Inventory: Group by security group 985 | if self.group_by_security_group: 986 | try: 987 | if instance.security_group: 988 | key = self.to_safe("security_group_" + instance.security_group.name) 989 | self.push(self.inventory, key, hostname) 990 | if self.nested_groups: 991 | self.push_group(self.inventory, 'security_groups', key) 992 | 993 | except AttributeError: 994 | self.fail_with_error('\n'.join(['Package boto seems a bit older.', 995 | 'Please upgrade boto >= 2.3.0.'])) 996 | 997 | 998 | # Inventory: Group by engine 999 | if self.group_by_rds_engine: 1000 | self.push(self.inventory, self.to_safe("rds_" + instance.engine), hostname) 1001 | if self.nested_groups: 1002 | self.push_group(self.inventory, 'rds_engines', self.to_safe("rds_" + instance.engine)) 1003 | 1004 | # Inventory: Group by parameter group 1005 | if self.group_by_rds_parameter_group: 1006 | self.push(self.inventory, self.to_safe("rds_parameter_group_" + instance.parameter_group.name), hostname) 1007 | if self.nested_groups: 1008 | self.push_group(self.inventory, 'rds_parameter_groups', self.to_safe("rds_parameter_group_" + instance.parameter_group.name)) 1009 | 1010 | # Global Tag: all RDS instances 1011 | self.push(self.inventory, 'rds', hostname) 1012 | 1013 | self.inventory["_meta"]["hostvars"][hostname] = self.get_host_info_dict_from_instance(instance) 1014 | self.inventory["_meta"]["hostvars"][hostname]['ansible_ssh_host'] = dest 1015 | 1016 | def add_elasticache_cluster(self, cluster, region): 1017 | ''' 
Adds an ElastiCache cluster to the inventory and index, as long as 1018 | its nodes are addressable ''' 1019 | 1020 | # Only want available clusters unless all_elasticache_clusters is True 1021 | if not self.all_elasticache_clusters and cluster['CacheClusterStatus'] != 'available': 1022 | return 1023 | 1024 | # Select the best destination address 1025 | if 'ConfigurationEndpoint' in cluster and cluster['ConfigurationEndpoint']: 1026 | # Memcached cluster 1027 | dest = cluster['ConfigurationEndpoint']['Address'] 1028 | is_redis = False 1029 | else: 1030 | # Redis single-node cluster 1031 | # Because all Redis clusters are single nodes, we'll merge the 1032 | # info from the cluster with info about the node 1033 | dest = cluster['CacheNodes'][0]['Endpoint']['Address'] 1034 | is_redis = True 1035 | 1036 | if not dest: 1037 | # Skip clusters we cannot address (e.g. private VPC subnet) 1038 | return 1039 | 1040 | # Add to index 1041 | self.index[dest] = [region, cluster['CacheClusterId']] 1042 | 1043 | # Inventory: Group by instance ID (always a group of 1) 1044 | if self.group_by_instance_id: 1045 | self.inventory[cluster['CacheClusterId']] = [dest] 1046 | if self.nested_groups: 1047 | self.push_group(self.inventory, 'instances', cluster['CacheClusterId']) 1048 | 1049 | # Inventory: Group by region 1050 | if self.group_by_region and not is_redis: 1051 | self.push(self.inventory, region, dest) 1052 | if self.nested_groups: 1053 | self.push_group(self.inventory, 'regions', region) 1054 | 1055 | # Inventory: Group by availability zone 1056 | if self.group_by_availability_zone and not is_redis: 1057 | self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest) 1058 | if self.nested_groups: 1059 | if self.group_by_region: 1060 | self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone']) 1061 | self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone']) 1062 | 1063 | # Inventory: Group by node type 1064 | if self.group_by_instance_type and not is_redis: 1065 | type_name = self.to_safe('type_' + cluster['CacheNodeType']) 1066 | self.push(self.inventory, type_name, dest) 1067 | if self.nested_groups: 1068 | self.push_group(self.inventory, 'types', type_name) 1069 | 1070 | # Inventory: Group by VPC (information not available in the current 1071 | # AWS API version for ElastiCache) 1072 | 1073 | # Inventory: Group by security group 1074 | if self.group_by_security_group and not is_redis: 1075 | 1076 | # Check for the existence of the 'SecurityGroups' key and also if 1077 | # this key has some value. When the cluster is not placed in a SG 1078 | # the query can return None here and cause an error.
1079 | if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None: 1080 | for security_group in cluster['SecurityGroups']: 1081 | key = self.to_safe("security_group_" + security_group['SecurityGroupId']) 1082 | self.push(self.inventory, key, dest) 1083 | if self.nested_groups: 1084 | self.push_group(self.inventory, 'security_groups', key) 1085 | 1086 | # Inventory: Group by engine 1087 | if self.group_by_elasticache_engine and not is_redis: 1088 | self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest) 1089 | if self.nested_groups: 1090 | self.push_group(self.inventory, 'elasticache_engines', self.to_safe(cluster['Engine'])) 1091 | 1092 | # Inventory: Group by parameter group 1093 | if self.group_by_elasticache_parameter_group: 1094 | self.push(self.inventory, self.to_safe("elasticache_parameter_group_" + cluster['CacheParameterGroup']['CacheParameterGroupName']), dest) 1095 | if self.nested_groups: 1096 | self.push_group(self.inventory, 'elasticache_parameter_groups', self.to_safe(cluster['CacheParameterGroup']['CacheParameterGroupName'])) 1097 | 1098 | # Inventory: Group by replication group 1099 | if self.group_by_elasticache_replication_group and 'ReplicationGroupId' in cluster and cluster['ReplicationGroupId']: 1100 | self.push(self.inventory, self.to_safe("elasticache_replication_group_" + cluster['ReplicationGroupId']), dest) 1101 | if self.nested_groups: 1102 | self.push_group(self.inventory, 'elasticache_replication_groups', self.to_safe(cluster['ReplicationGroupId'])) 1103 | 1104 | # Global Tag: all ElastiCache clusters 1105 | self.push(self.inventory, 'elasticache_clusters', cluster['CacheClusterId']) 1106 | 1107 | host_info = self.get_host_info_dict_from_describe_dict(cluster) 1108 | 1109 | self.inventory["_meta"]["hostvars"][dest] = host_info 1110 | 1111 | # Add the nodes 1112 | for node in cluster['CacheNodes']: 1113 | self.add_elasticache_node(node, cluster, region) 1114 | 1115 | def add_elasticache_node(self, node, cluster, region): 1116 | ''' Adds an ElastiCache node to the inventory and index, as long as 1117 | it is addressable ''' 1118 | 1119 | # Only want available nodes unless all_elasticache_nodes is True 1120 | if not self.all_elasticache_nodes and node['CacheNodeStatus'] != 'available': 1121 | return 1122 | 1123 | # Select the best destination address 1124 | dest = node['Endpoint']['Address'] 1125 | 1126 | if not dest: 1127 | # Skip nodes we cannot address (e.g. 
private VPC subnet) 1128 | return 1129 | 1130 | node_id = self.to_safe(cluster['CacheClusterId'] + '_' + node['CacheNodeId']) 1131 | 1132 | # Add to index 1133 | self.index[dest] = [region, node_id] 1134 | 1135 | # Inventory: Group by node ID (always a group of 1) 1136 | if self.group_by_instance_id: 1137 | self.inventory[node_id] = [dest] 1138 | if self.nested_groups: 1139 | self.push_group(self.inventory, 'instances', node_id) 1140 | 1141 | # Inventory: Group by region 1142 | if self.group_by_region: 1143 | self.push(self.inventory, region, dest) 1144 | if self.nested_groups: 1145 | self.push_group(self.inventory, 'regions', region) 1146 | 1147 | # Inventory: Group by availability zone 1148 | if self.group_by_availability_zone: 1149 | self.push(self.inventory, cluster['PreferredAvailabilityZone'], dest) 1150 | if self.nested_groups: 1151 | if self.group_by_region: 1152 | self.push_group(self.inventory, region, cluster['PreferredAvailabilityZone']) 1153 | self.push_group(self.inventory, 'zones', cluster['PreferredAvailabilityZone']) 1154 | 1155 | # Inventory: Group by node type 1156 | if self.group_by_instance_type: 1157 | type_name = self.to_safe('type_' + cluster['CacheNodeType']) 1158 | self.push(self.inventory, type_name, dest) 1159 | if self.nested_groups: 1160 | self.push_group(self.inventory, 'types', type_name) 1161 | 1162 | # Inventory: Group by VPC (information not available in the current 1163 | # AWS API version for ElastiCache) 1164 | 1165 | # Inventory: Group by security group 1166 | if self.group_by_security_group: 1167 | 1168 | # Check for the existence of the 'SecurityGroups' key and also if 1169 | # this key has some value. When the cluster is not placed in a SG 1170 | # the query can return None here and cause an error. 1171 | if 'SecurityGroups' in cluster and cluster['SecurityGroups'] is not None: 1172 | for security_group in cluster['SecurityGroups']: 1173 | key = self.to_safe("security_group_" + security_group['SecurityGroupId']) 1174 | self.push(self.inventory, key, dest) 1175 | if self.nested_groups: 1176 | self.push_group(self.inventory, 'security_groups', key) 1177 | 1178 | # Inventory: Group by engine 1179 | if self.group_by_elasticache_engine: 1180 | self.push(self.inventory, self.to_safe("elasticache_" + cluster['Engine']), dest) 1181 | if self.nested_groups: 1182 | self.push_group(self.inventory, 'elasticache_engines', self.to_safe("elasticache_" + cluster['Engine'])) 1183 | 1184 | # Inventory: Group by parameter group (done at cluster level) 1185 | 1186 | # Inventory: Group by replication group (done at cluster level) 1187 | 1188 | # Inventory: Group by ElastiCache Cluster 1189 | if self.group_by_elasticache_cluster: 1190 | self.push(self.inventory, self.to_safe("elasticache_cluster_" + cluster['CacheClusterId']), dest) 1191 | 1192 | # Global Tag: all ElastiCache nodes 1193 | self.push(self.inventory, 'elasticache_nodes', dest) 1194 | 1195 | host_info = self.get_host_info_dict_from_describe_dict(node) 1196 | 1197 | if dest in self.inventory["_meta"]["hostvars"]: 1198 | self.inventory["_meta"]["hostvars"][dest].update(host_info) 1199 | else: 1200 | self.inventory["_meta"]["hostvars"][dest] = host_info 1201 | 1202 | def add_elasticache_replication_group(self, replication_group, region): 1203 | ''' Adds an ElastiCache replication group to the inventory and index ''' 1204 | 1205 | # Only want available clusters unless all_elasticache_replication_groups is True 1206 | if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available': 
1207 | return 1208 | 1209 | # Select the best destination address (PrimaryEndpoint) 1210 | dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address'] 1211 | 1212 | if not dest: 1213 | # Skip clusters we cannot address (e.g. private VPC subnet) 1214 | return 1215 | 1216 | # Add to index 1217 | self.index[dest] = [region, replication_group['ReplicationGroupId']] 1218 | 1219 | # Inventory: Group by ID (always a group of 1) 1220 | if self.group_by_instance_id: 1221 | self.inventory[replication_group['ReplicationGroupId']] = [dest] 1222 | if self.nested_groups: 1223 | self.push_group(self.inventory, 'instances', replication_group['ReplicationGroupId']) 1224 | 1225 | # Inventory: Group by region 1226 | if self.group_by_region: 1227 | self.push(self.inventory, region, dest) 1228 | if self.nested_groups: 1229 | self.push_group(self.inventory, 'regions', region) 1230 | 1231 | # Inventory: Group by availability zone (doesn't apply to replication groups) 1232 | 1233 | # Inventory: Group by node type (doesn't apply to replication groups) 1234 | 1235 | # Inventory: Group by VPC (information not available in the current 1236 | # AWS API version for replication groups 1237 | 1238 | # Inventory: Group by security group (doesn't apply to replication groups) 1239 | # Check this value in cluster level 1240 | 1241 | # Inventory: Group by engine (replication groups are always Redis) 1242 | if self.group_by_elasticache_engine: 1243 | self.push(self.inventory, 'elasticache_redis', dest) 1244 | if self.nested_groups: 1245 | self.push_group(self.inventory, 'elasticache_engines', 'redis') 1246 | 1247 | # Global Tag: all ElastiCache clusters 1248 | self.push(self.inventory, 'elasticache_replication_groups', replication_group['ReplicationGroupId']) 1249 | 1250 | host_info = self.get_host_info_dict_from_describe_dict(replication_group) 1251 | 1252 | self.inventory["_meta"]["hostvars"][dest] = host_info 1253 | 1254 | def get_route53_records(self): 1255 | ''' Get and store the map of resource records to domain names that 1256 | point to them. ''' 1257 | 1258 | r53_conn = route53.Route53Connection() 1259 | all_zones = r53_conn.get_zones() 1260 | 1261 | route53_zones = [ zone for zone in all_zones if zone.name[:-1] 1262 | not in self.route53_excluded_zones ] 1263 | 1264 | self.route53_records = {} 1265 | 1266 | for zone in route53_zones: 1267 | rrsets = r53_conn.get_all_rrsets(zone.id) 1268 | 1269 | for record_set in rrsets: 1270 | record_name = record_set.name 1271 | 1272 | if record_name.endswith('.'): 1273 | record_name = record_name[:-1] 1274 | 1275 | for resource in record_set.resource_records: 1276 | self.route53_records.setdefault(resource, set()) 1277 | self.route53_records[resource].add(record_name) 1278 | 1279 | 1280 | def get_instance_route53_names(self, instance): 1281 | ''' Check if an instance is referenced in the records we have from 1282 | Route53. If it is, return the list of domain names pointing to said 1283 | instance. If nothing points to it, return an empty list. 
''' 1284 | 1285 | instance_attributes = [ 'public_dns_name', 'private_dns_name', 1286 | 'ip_address', 'private_ip_address' ] 1287 | 1288 | name_list = set() 1289 | 1290 | for attrib in instance_attributes: 1291 | try: 1292 | value = getattr(instance, attrib) 1293 | except AttributeError: 1294 | continue 1295 | 1296 | if value in self.route53_records: 1297 | name_list.update(self.route53_records[value]) 1298 | 1299 | return list(name_list) 1300 | 1301 | def get_host_info_dict_from_instance(self, instance): 1302 | instance_vars = {} 1303 | for key in vars(instance): 1304 | value = getattr(instance, key) 1305 | key = self.to_safe('ec2_' + key) 1306 | 1307 | # Handle complex types 1308 | # state/previous_state changed to properties in boto in https://github.com/boto/boto/commit/a23c379837f698212252720d2af8dec0325c9518 1309 | if key == 'ec2__state': 1310 | instance_vars['ec2_state'] = instance.state or '' 1311 | instance_vars['ec2_state_code'] = instance.state_code 1312 | elif key == 'ec2__previous_state': 1313 | instance_vars['ec2_previous_state'] = instance.previous_state or '' 1314 | instance_vars['ec2_previous_state_code'] = instance.previous_state_code 1315 | elif type(value) in [int, bool]: 1316 | instance_vars[key] = value 1317 | elif isinstance(value, six.string_types): 1318 | instance_vars[key] = value.strip() 1319 | elif type(value) == type(None): 1320 | instance_vars[key] = '' 1321 | elif key == 'ec2_region': 1322 | instance_vars[key] = value.name 1323 | elif key == 'ec2__placement': 1324 | instance_vars['ec2_placement'] = value.zone 1325 | elif key == 'ec2_tags': 1326 | for k, v in value.items(): 1327 | if self.expand_csv_tags and ',' in v: 1328 | v = list(map(lambda x: x.strip(), v.split(','))) 1329 | key = self.to_safe('ec2_tag_' + k) 1330 | instance_vars[key] = v 1331 | elif key == 'ec2_groups': 1332 | group_ids = [] 1333 | group_names = [] 1334 | for group in value: 1335 | group_ids.append(group.id) 1336 | group_names.append(group.name) 1337 | instance_vars["ec2_security_group_ids"] = ','.join([str(i) for i in group_ids]) 1338 | instance_vars["ec2_security_group_names"] = ','.join([str(i) for i in group_names]) 1339 | elif key == 'ec2_block_device_mapping': 1340 | instance_vars["ec2_block_devices"] = {} 1341 | for k, v in value.items(): 1342 | instance_vars["ec2_block_devices"][ os.path.basename(k) ] = v.volume_id 1343 | else: 1344 | pass 1345 | # TODO Product codes if someone finds them useful 1346 | #print key 1347 | #print type(value) 1348 | #print value 1349 | 1350 | instance_vars[self.to_safe('ec2_account_id')] = self.aws_account_id 1351 | 1352 | return instance_vars 1353 | 1354 | def get_host_info_dict_from_describe_dict(self, describe_dict): 1355 | ''' Parses the dictionary returned by the API call into a flat list 1356 | of parameters. This method should be used only when 'describe' is 1357 | used directly because Boto doesn't provide specific classes. ''' 1358 | 1359 | # I really don't agree with prefixing everything with 'ec2' 1360 | # because EC2, RDS and ElastiCache are different services. 1361 | # I'm just following the pattern used until now to not break any 1362 | # compatibility. 
1363 | 1364 | host_info = {} 1365 | for key in describe_dict: 1366 | value = describe_dict[key] 1367 | key = self.to_safe('ec2_' + self.uncammelize(key)) 1368 | 1369 | # Handle complex types 1370 | 1371 | # Target: Memcached Cache Clusters 1372 | if key == 'ec2_configuration_endpoint' and value: 1373 | host_info['ec2_configuration_endpoint_address'] = value['Address'] 1374 | host_info['ec2_configuration_endpoint_port'] = value['Port'] 1375 | 1376 | # Target: Cache Nodes and Redis Cache Clusters (single node) 1377 | if key == 'ec2_endpoint' and value: 1378 | host_info['ec2_endpoint_address'] = value['Address'] 1379 | host_info['ec2_endpoint_port'] = value['Port'] 1380 | 1381 | # Target: Redis Replication Groups 1382 | if key == 'ec2_node_groups' and value: 1383 | host_info['ec2_endpoint_address'] = value[0]['PrimaryEndpoint']['Address'] 1384 | host_info['ec2_endpoint_port'] = value[0]['PrimaryEndpoint']['Port'] 1385 | replica_count = 0 1386 | for node in value[0]['NodeGroupMembers']: 1387 | if node['CurrentRole'] == 'primary': 1388 | host_info['ec2_primary_cluster_address'] = node['ReadEndpoint']['Address'] 1389 | host_info['ec2_primary_cluster_port'] = node['ReadEndpoint']['Port'] 1390 | host_info['ec2_primary_cluster_id'] = node['CacheClusterId'] 1391 | elif node['CurrentRole'] == 'replica': 1392 | host_info['ec2_replica_cluster_address_'+ str(replica_count)] = node['ReadEndpoint']['Address'] 1393 | host_info['ec2_replica_cluster_port_'+ str(replica_count)] = node['ReadEndpoint']['Port'] 1394 | host_info['ec2_replica_cluster_id_'+ str(replica_count)] = node['CacheClusterId'] 1395 | replica_count += 1 1396 | 1397 | # Target: Redis Replication Groups 1398 | if key == 'ec2_member_clusters' and value: 1399 | host_info['ec2_member_clusters'] = ','.join([str(i) for i in value]) 1400 | 1401 | # Target: All Cache Clusters 1402 | elif key == 'ec2_cache_parameter_group': 1403 | host_info["ec2_cache_node_ids_to_reboot"] = ','.join([str(i) for i in value['CacheNodeIdsToReboot']]) 1404 | host_info['ec2_cache_parameter_group_name'] = value['CacheParameterGroupName'] 1405 | host_info['ec2_cache_parameter_apply_status'] = value['ParameterApplyStatus'] 1406 | 1407 | # Target: Almost everything 1408 | elif key == 'ec2_security_groups': 1409 | 1410 | # Skip if SecurityGroups is None 1411 | # (it is possible to have the key defined but no value in it). 
1412 | if value is not None: 1413 | sg_ids = [] 1414 | for sg in value: 1415 | sg_ids.append(sg['SecurityGroupId']) 1416 | host_info["ec2_security_group_ids"] = ','.join([str(i) for i in sg_ids]) 1417 | 1418 | # Target: Everything 1419 | # Preserve booleans and integers 1420 | elif type(value) in [int, bool]: 1421 | host_info[key] = value 1422 | 1423 | # Target: Everything 1424 | # Sanitize string values 1425 | elif isinstance(value, six.string_types): 1426 | host_info[key] = value.strip() 1427 | 1428 | # Target: Everything 1429 | # Replace None by an empty string 1430 | elif type(value) == type(None): 1431 | host_info[key] = '' 1432 | 1433 | else: 1434 | # Remove non-processed complex types 1435 | pass 1436 | 1437 | return host_info 1438 | 1439 | def get_host_info(self): 1440 | ''' Get variables about a specific host ''' 1441 | 1442 | if len(self.index) == 0: 1443 | # Need to load index from cache 1444 | self.load_index_from_cache() 1445 | 1446 | if not self.args.host in self.index: 1447 | # try updating the cache 1448 | self.do_api_calls_update_cache() 1449 | if not self.args.host in self.index: 1450 | # host might not exist anymore 1451 | return self.json_format_dict({}, True) 1452 | 1453 | (region, instance_id) = self.index[self.args.host] 1454 | 1455 | instance = self.get_instance(region, instance_id) 1456 | return self.json_format_dict(self.get_host_info_dict_from_instance(instance), True) 1457 | 1458 | def push(self, my_dict, key, element): 1459 | ''' Push an element onto an array that may not have been defined in 1460 | the dict ''' 1461 | group_info = my_dict.setdefault(key, []) 1462 | if isinstance(group_info, dict): 1463 | host_list = group_info.setdefault('hosts', []) 1464 | host_list.append(element) 1465 | else: 1466 | group_info.append(element) 1467 | 1468 | def push_group(self, my_dict, key, element): 1469 | ''' Push a group as a child of another group. 
''' 1470 | parent_group = my_dict.setdefault(key, {}) 1471 | if not isinstance(parent_group, dict): 1472 | parent_group = my_dict[key] = {'hosts': parent_group} 1473 | child_groups = parent_group.setdefault('children', []) 1474 | if element not in child_groups: 1475 | child_groups.append(element) 1476 | 1477 | def get_inventory_from_cache(self): 1478 | ''' Reads the inventory from the cache file and returns it as a JSON 1479 | object ''' 1480 | 1481 | cache = open(self.cache_path_cache, 'r') 1482 | json_inventory = cache.read() 1483 | return json_inventory 1484 | 1485 | 1486 | def load_index_from_cache(self): 1487 | ''' Reads the index from the cache file sets self.index ''' 1488 | 1489 | cache = open(self.cache_path_index, 'r') 1490 | json_index = cache.read() 1491 | self.index = json.loads(json_index) 1492 | 1493 | 1494 | def write_to_cache(self, data, filename): 1495 | ''' Writes data in JSON format to a file ''' 1496 | 1497 | json_data = self.json_format_dict(data, True) 1498 | cache = open(filename, 'w') 1499 | cache.write(json_data) 1500 | cache.close() 1501 | 1502 | def uncammelize(self, key): 1503 | temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key) 1504 | return re.sub('([a-z0-9])([A-Z])', r'\1_\2', temp).lower() 1505 | 1506 | def to_safe(self, word): 1507 | ''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups ''' 1508 | regex = "[^A-Za-z0-9\_" 1509 | if not self.replace_dash_in_groups: 1510 | regex += "\-" 1511 | return re.sub(regex + "]", "_", word) 1512 | 1513 | def json_format_dict(self, data, pretty=False): 1514 | ''' Converts a dict to a JSON object and dumps it as a formatted 1515 | string ''' 1516 | 1517 | if pretty: 1518 | return json.dumps(data, sort_keys=True, indent=2) 1519 | else: 1520 | return json.dumps(data) 1521 | 1522 | 1523 | # Run the script 1524 | Ec2Inventory() -------------------------------------------------------------------------------- /rancher.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - hosts: 3 | - server 4 | become: True 5 | gather_facts: True 6 | vars_files: 7 | - vars/default.yml 8 | - vars/private.yml 9 | roles: 10 | - { role: rancher, tags: rancher } 11 | - { role: haproxy, tags: haproxy, when: "groups['loadbalancer']|count == 0"} 12 | tags: 13 | - rancher 14 | 15 | - hosts: 16 | - node 17 | become: True 18 | gather_facts: True 19 | vars_files: 20 | - vars/default.yml 21 | - vars/private.yml 22 | roles: 23 | - { role: rancher, tags: rancher } 24 | tags: 25 | - rancher 26 | -------------------------------------------------------------------------------- /roles/haproxy/defaults/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # # Default variables for HAProxy 3 | 4 | # Set this to `false` to disable installation of haproxy completely 5 | haproxy_enabled: true 6 | 7 | # Where is haproxy's configuration directory? 8 | confdir: /etc/haproxy 9 | 10 | # Where do we install our certs? 11 | certdir: certs 12 | 13 | # What is the name of our default certificate file? 
14 | certfile: server.pem 15 | 16 | global: 17 | maxconn: 1024 18 | 19 | -------------------------------------------------------------------------------- /roles/haproxy/files/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/haproxy/files/.keep -------------------------------------------------------------------------------- /roles/haproxy/handlers/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # haproxy handlers 3 | - name: restart haproxy 4 | service: name=haproxy state=restarted 5 | tags: haproxy 6 | 7 | - name: reload haproxy 8 | service: name=haproxy state=reloaded 9 | tags: haproxy 10 | -------------------------------------------------------------------------------- /roles/haproxy/meta/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/haproxy/meta/.keep -------------------------------------------------------------------------------- /roles/haproxy/tasks/haproxy.yml: -------------------------------------------------------------------------------- 1 | --- 2 | # # HAProxy 3 | 4 | # Install the haproxy repo and then install haproxy itself. 5 | - name: add haproxy repo 6 | apt_repository: repo='ppa:vbernat/haproxy-1.7' update_cache=yes 7 | when: ansible_distribution == "Ubuntu" 8 | tags: apt 9 | 10 | - name: install haproxy and friends 11 | apt: name={{ item }} state=latest 12 | when: ansible_os_family == "Debian" 13 | tags: apt 14 | with_items: 15 | - haproxy 16 | - hatop 17 | - socat 18 | 19 | # Put our management user into the `haproxy` group so that we can run hatop without `sudo` 20 | - name: put ssh user into haproxy group 21 | user: name={{ ansible_user }} groups=haproxy append=yes 22 | 23 | - name: make cert dir 24 | file: 25 | dest: "{{ confdir }}/{{ certdir }}" 26 | state: directory 27 | owner: root 28 | group: root 29 | mode: 0750 30 | tags: always 31 | 32 | - name: install main cert 33 | copy: 34 | content: "{{ ssl_certificates.cert }}{{ ssl_certificates.key }}" 35 | dest: "{{ confdir }}/{{ certfile }}" 36 | owner: root 37 | group: root 38 | mode: 0640 39 | tags: config 40 | notify: reload haproxy 41 | 42 | - name: install haproxy config file 43 | template: 44 | src: "haproxy.cfg.j2" 45 | dest: "{{ confdir }}/haproxy.cfg" 46 | owner: root 47 | group: root 48 | mode: 0640 49 | validate: haproxy -f %s -c 50 | tags: config 51 | notify: reload haproxy 52 | 53 | # Enable haproxy on Debian-based systems. 
54 | - name: enable haproxy 55 | lineinfile: 56 | dest: /etc/default/haproxy 57 | regexp: '^ENABLED' 58 | line: 'ENABLED=1' 59 | when: ansible_os_family == "Debian" 60 | tags: always 61 | 62 | - name: make sure haproxy is running 63 | service: name=haproxy state=started enabled=true 64 | tags: always 65 | -------------------------------------------------------------------------------- /roles/haproxy/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - include: haproxy.yml 4 | tags: haproxy 5 | when: haproxy_enabled 6 | 7 | -------------------------------------------------------------------------------- /roles/haproxy/templates/haproxy.cfg.j2: -------------------------------------------------------------------------------- 1 | #jinja2: lstrip_blocks: True 2 | global 3 | maxconn {{ global.maxconn }} 4 | tune.ssl.default-dh-param 2048 5 | ssl-default-bind-options no-sslv3 no-tls-tickets 6 | ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA 7 | 8 | ssl-default-server-options no-sslv3 no-tls-tickets 9 | ssl-default-server-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA 10 | 11 | stats socket /var/run/haproxy/haproxy.sock mode 0660 level admin group haproxy 12 | 13 | defaults 14 | mode tcp 15 | timeout connect 5s 16 | timeout queue 5s 17 | timeout client 30s 18 | timeout server 30s 19 | 20 | balance roundrobin 21 | 22 | frontend ft_web_80 23 | bind :80 24 | mode http 25 | 26 | # bounce to https 27 | redirect scheme https code 301 if !{ ssl_fc } 28 | 29 | frontend ft_web_443 30 | timeout client 24h 31 | timeout server 24h 32 | 33 | bind :443 ssl crt /etc/haproxy/server.pem crt /etc/haproxy/certs 34 | 35 | # normal backend routing 36 | use_backend bk_rancher 37 | 38 | backend bk_rancher 39 | option httpchk /ping 40 | 41 | {% if groups['server']|length == 1 and groups['server'][0] == public_ip|default(ipify_public_ip) %} 42 | server s1 localhost:8080 check fall 3 rise 2 maxconn 256 send-proxy 43 | {% else %} 44 | {% for ip in hostvars[inventory_hostname]['groups']['server'] %} 45 | server s{{ loop.index }} {{ hostvars[ip]['private_ip'] }}:8080 check fall 3 rise 2 maxconn 256 send-proxy 46 | {% 
endfor %} 47 | {% endif %} 48 | 49 | -------------------------------------------------------------------------------- /roles/haproxy/vars/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/haproxy/vars/.keep -------------------------------------------------------------------------------- /roles/rancher/defaults/main.yml: -------------------------------------------------------------------------------- 1 | # rancher defaults 2 | 3 | # we set this to False here and pull it from the ipify facts in the 4 | # playbook 5 | public_ip: False -------------------------------------------------------------------------------- /roles/rancher/files/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/rancher/files/.keep -------------------------------------------------------------------------------- /roles/rancher/handlers/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/rancher/handlers/.keep -------------------------------------------------------------------------------- /roles/rancher/meta/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/rancher/meta/.keep -------------------------------------------------------------------------------- /roles/rancher/tasks/docker.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: determine if docker is installed 3 | command: which docker 4 | register: docker_installed 5 | ignore_errors: true 6 | changed_when: false 7 | 8 | - name: get docker version 9 | shell: docker --version | awk '{ print $3 }' 10 | register: docker_installed_version 11 | when: docker_installed|success 12 | changed_when: false 13 | 14 | - name: install or upgrade docker 15 | shell: curl https://releases.rancher.com/install-docker/{{ docker_version }}.sh | sh 16 | when: not docker_installed|success or 17 | not docker_installed_version.stdout|search(docker_version|string) 18 | 19 | - name: uninstall pip 20 | apt: name=python-pip state=absent 21 | when: ansible_os_family == "Debian" 22 | 23 | # Debian ships with pip v1.5 in apt, which is too broken (#5) 24 | - name: install pip (Debian) 25 | easy_install: name=pip state=latest 26 | when: ansible_distribution == "Debian" 27 | 28 | - name: upgrade pip (Ubuntu) 29 | pip: name=pip state=latest 30 | when: ansible_os_family == "Debian" and not ansible_distribution == "Debian" 31 | 32 | - name: install additional dependencies (pip) 33 | pip: name=docker-py state=present 34 | tags: pip 35 | 36 | - name: add our login user to docker group 37 | user: name={{ansible_user}} groups=docker append=yes 38 | 39 | - name: make sure docker is running 40 | service: name=docker state=started enabled=true 41 | -------------------------------------------------------------------------------- /roles/rancher/tasks/main.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | - name: get public IP 4 | ipify_facts: 5 | when: not public_ip|default(False) 6 | 7 | - include: docker.yml 8 | tags: docker 9 | 10 | - 
include: server.yml 11 | tags: server 12 | when: "'server' in group_names" 13 | 14 | - include: node.yml 15 | tags: node 16 | when: "'node' in group_names and 17 | rancher_env in api_keys|default([])" 18 | -------------------------------------------------------------------------------- /roles/rancher/tasks/node.yml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | # tasks specific to nodes 4 | 5 | - name: determine if we're already running an agent 6 | shell: "docker ps | fgrep -q rancher/agent" 7 | register: agent_installed 8 | ignore_errors: true 9 | changed_when: false 10 | 11 | # this gets all of the available registration tokens 12 | # for this environment. 13 | - name: get registration tokens 14 | uri: 15 | method: GET 16 | status_code: 200 17 | url: "{{ server_url }}/v2-beta/projects/{{ rancher_env }}/registrationTokens/" 18 | user: "{{ api_keys[rancher_env].access_key }}" 19 | password: "{{ api_keys[rancher_env].secret_key }}" 20 | register: registration_tokens 21 | when: agent_installed|failed 22 | 23 | # some or all of the tokens might be inactive, so we need 24 | # to find the first active one and use it to register 25 | # our host 26 | - name: register host (attempt one) 27 | command: "{{(registration_tokens.json.data|selectattr('state', 'equalto', 'active')|first).command|replace('sudo ', '')}}" 28 | when: agent_installed|failed and 29 | registration_tokens.json.data|selectattr('state', 'equalto', 'active')|list|length > 0 30 | register: registration_attempt_one 31 | 32 | # if we have no active registration tokens, we need 33 | # to create one 34 | - name: create registration url 35 | uri: 36 | method: POST 37 | status_code: 201 38 | url: "{{ server_url }}/v2-beta/projects/{{ rancher_env }}/registrationTokens/" 39 | user: "{{ api_keys[rancher_env].access_key }}" 40 | password: "{{ api_keys[rancher_env].secret_key }}" 41 | when: agent_installed|failed and 42 | registration_attempt_one|skipped 43 | run_once: true 44 | 45 | # having created one, we can now fetch it 46 | - name: get registration tokens (again) 47 | uri: 48 | method: GET 49 | status_code: 200 50 | url: "{{ server_url }}/v2-beta/projects/{{ rancher_env }}/registrationTokens/" 51 | user: "{{ api_keys[rancher_env].access_key }}" 52 | password: "{{ api_keys[rancher_env].secret_key }}" 53 | register: registration_tokens_redux 54 | when: agent_installed|failed and 55 | registration_attempt_one|skipped 56 | 57 | # we now have an active registration token that we can 58 | # use to register the host 59 | - name: register host (attempt two) 60 | command: "{{(registration_tokens_redux.json.data|selectattr('state', 'equalto', 'active')|first).command}}" 61 | when: agent_installed|failed and 62 | registration_attempt_one|skipped 63 | -------------------------------------------------------------------------------- /roles/rancher/tasks/server.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - name: install additional dependencies (apt) 3 | apt: name={{ item }} state=present 4 | with_items: 5 | - python-mysqldb 6 | - mysql-client 7 | tags: apt 8 | 9 | - name: sanity check on options 10 | fail: msg="Please set external db options when running with > 1 server or with use_external_db=true" 11 | when: (use_external_db|default(False) or hostvars[inventory_hostname]['groups']['server']|length > 1) and 12 | (not db_host or not db_name or not db_user or not db_pass or not db_port ) 13 | 14 | - name: create the database (if necessary) 15 |
mysql_db: 16 | name: "{{ db_name }}" 17 | collation: utf8_general_ci 18 | encoding: utf8 19 | login_host: "{{ db_host }}" 20 | login_user: "{{ db_admin_user }}" 21 | login_password: "{{ db_admin_pass }}" 22 | state: present 23 | login_port: "{{ db_port }}" 24 | run_once: true 25 | when: use_external_db|default(False) or hostvars[inventory_hostname]['groups']['server']|length > 1 26 | tags: 27 | - mysql 28 | 29 | - name: create the database user (if necessary) 30 | mysql_user: 31 | name: "{{ db_user }}" 32 | host: '%' 33 | password: "{{ db_pass }}" 34 | login_user: "{{ db_admin_user }}" 35 | login_password: "{{ db_admin_pass }}" 36 | login_host: "{{ db_host }}" 37 | login_port: "{{ db_port }}" 38 | state: present 39 | priv: "{{ db_name }}.*:ALL" 40 | run_once: true 41 | when: use_external_db|default(False) or hostvars[inventory_hostname]['groups']['server']|length > 1 42 | tags: 43 | - mysql 44 | 45 | - name: check if rancher container is running 46 | shell: docker ps | grep rancher/server 47 | register: server_running 48 | ignore_errors: true 49 | changed_when: false 50 | 51 | - name: remove any previous rancher server (to avoid conflict) 52 | docker_container: name={{ server_name }} state=absent 53 | when: server_name is defined and 54 | not server_running|success 55 | 56 | - name: start rancher server 57 | command: '{{lookup("template", "server_launch.j2")}}' 58 | when: not server_running|success 59 | -------------------------------------------------------------------------------- /roles/rancher/templates/server_launch.j2: -------------------------------------------------------------------------------- 1 | {% if use_external_db|default(False) or hostvars[inventory_hostname]['groups']['server']|length > 1 %} 2 | {% set db_string = " --db-host " + db_host + " --db-user " + db_user + " --db-pass " + db_pass + " --db-port " + db_port|string + " --db-name " + db_name %} 3 | {% elif bind_mount_location|default(False) %} 4 | {% set volume_string = "-v " + bind_mount_location + ":/var/lib/mysql" %} 5 | {% endif %} 6 | {% if hostvars[inventory_hostname]['groups']['server']|length > 1 or force_ha|default(False) %} 7 | {# multiple servers - HA setup #} 8 | {% set ports = "-p 8080:8080 -p 9345:9345" %} 9 | {% set volume_string = '' %} 10 | {% set advertise_string = "--advertise-address " + ( public_ip|default(ipify_public_ip) if advertise_public_ip|default(False) else private_ip ) %} 11 | {% endif %} 12 | {% if server_name|default(False) %} 13 | {% set name_string = "--name " + server_name %} 14 | {% endif %} 15 | docker run -d --restart=unless-stopped {{ ports|default('-p 8080:8080') }} {{ volume_string|default('') }} {{ name_string|default('') }} rancher/{{ server_repo|default('server') }}:{{ server_version|default('stable') }} {{ db_string|default('') }} {{ advertise_string|default('') }} 16 | -------------------------------------------------------------------------------- /roles/rancher/vars/.keep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rancher/ansible-playbooks/1b4fc547ff4f3eb2585d7fcb7b0ce2f6885f881a/roles/rancher/vars/.keep -------------------------------------------------------------------------------- /site.yml: -------------------------------------------------------------------------------- 1 | --- 2 | - include: rancher.yml 3 | - include: haproxy.yml 4 | -------------------------------------------------------------------------------- /vars/default.yml:
-------------------------------------------------------------------------------- 1 | # this file can be empty 2 | 3 | -------------------------------------------------------------------------------- /vars/private.yml: -------------------------------------------------------------------------------- 1 | $ANSIBLE_VAULT;1.1;AES256 2 | 35616533376539653630653933623734393763393739383764623139653434306635346138316431 3 | 6536383761373638323134656163316634393863393463640a323036333632396436313436363866 4 | 31623961356532663062373832343030376232613838313233626532383931303662626531636539 5 | 3065623464373961360a656162386266616237396361633337363033353339386634313031623931 6 | 35396130333565313965646134396630636134636531663237386463636333316635363537353965 7 | 30396661323761396131343537353465656536393737363536633362626239626261636661663030 8 | 30633138393866326230393037366231616233303838643930663237393932303765373462643763 9 | 32346161323837653030663166636335653933643638353862663238353163306364333231326566 10 | 65323236393564633233363661356662323938636332303035393935646266383239306637336262 11 | 39323261376639336333656438323433343037386537313366306665613233343538626262356439 12 | 30356262396631356531643935323833643466626639633434646639656532393663366561643635 13 | 36646135653134313730313531636636373533323736653138346333646364343939326361656561 14 | 36366232636437316131373863636336346536633166363964386638333938363366356333656237 15 | 36333665313338363230363462356235373366396230353164643539306330633762633365633336 16 | 37656339646630373933663466326264663565656165383836323739623634313366343761336166 17 | 30363337303734363936306530353333653965363061666264646536303833396437336234363364 18 | 31353936643233393832336133396366636632643734336661643230633533653932613966646165 19 | 30376632306165316239623837346534393138343237333730323636356138646161343030626434 20 | 64666465633462396534643038363835336432306437383030386636656462616134303664623165 21 | 65386431316537356531333439353165363236373536333933383236613934663037353966313235 22 | 36646132656630643462666630336262326233386430303866646237323862323832636630316436 23 | 37666463613334363633316337666236376138366334303666633939346465656365656166663039 24 | 37643063633664633339376131643537333331343037643033386464323932303734616534393937 25 | 34643665633533666531666462653563336230383836313436303763336437623230303139396536 26 | 62326566313635366462383630346136363935386639653630643337366430363435303765336165 27 | 33366663333762613661613433346132333439666166303764313633303266613632323535656433 28 | 33386232356230643035376230353936623335383839623561383666336462323663313665383733 29 | 63363663393963373765393732663661643736303062633133373861626265666235363036316431 30 | 30626162333065333337376461663232346336633636326234623134303234636230356339383131 31 | 33656437373265633739363731323539616430323230316534386537336635656362373536363139 32 | 65323966663037616636386531333666336230306462393762316239343261373263323966346136 33 | 35353762323766663066656130656262366235336465376666393666373964623266646639653535 34 | 32393635376264366265323766626664656637373761313363613761326661616463613237646662 35 | 61366664356539383537396266633030653065393262393235613634343237666264323032623030 36 | 66666332346135666335333131316131313163343061656239363439666235653962666434306166 37 | 66656239643331653966316236303234633237303561393633653866663066396563303963613837 38 | 33353532626533613032336661396364343063333864333437323265613834386238363063646239 39 | 
64303634666663306365316434303663353133376563313565366436383034306363633236653938 40 | 33306237383139323765643164653266623438393735383966636332366432306431643463366431 41 | 33316265343030373039313836373666653633663363363765386335336436306465626663646337 42 | 30323531633863333866346238613931303166363934323031663434396335313161366435333064 43 | 37343535313463323335393965626162633462376330623765386161363666363566626131626362 44 | 66326663663866666137326366303530323465616338613765363465333331653839643962393434 45 | 66386334393834323934376630333965333939303730623331363431306634306332626666633063 46 | 65373537326633313534303137313862333462396337343361363464393737623662363531343736 47 | 34343235333530613830353365623535383462303663663663393263353938613135313966346336 48 | 62663062343038656466646262653064373362613034613663666539396466303630353637323266 49 | 34373466363437613634333832633934383435646362623839653234396535643539653739653234 50 | 33396665303432393639333833303263373735393237636230313137646331623730653462393534 51 | 66333938383065333663623230333330653465393765356361383133336366623737396334386165 52 | 61633930306562633430666534643539613032626336626337366262313063336136326534376538 53 | 62363439326234613330386435646564626463636230326130393764616337376661643730396330 54 | 66363466343462353734313339303939656565353566393733393737343738613563626432353638 55 | 31326163643434333232393963323136363639376563656639306635373234316433313439353766 56 | 62646362613962373237386635363939656166643434626566653266306631333666363234613036 57 | 62393565306265663638343565326438643032643461643161646663663565663930613766353134 58 | 31666666653635653361366566366164623138356230313163633636633331653439373237373661 59 | 34326535653364323266656230323635613737356261323337323839636366333936366535343035 60 | 36313733326131383038383530323963633936373835626466383133323863356365623232623038 61 | 62363937656436383934316337633365333634376630303966343766653038363131333739313961 62 | 62643935303262393336623766316136326265383132623534396532646264643531393835616663 63 | 30393833643266633666303130313532396530623466666366666365336537316334363831663034 64 | 31366366653065343936366664646530663735323263383233346631623731356233616463653761 65 | 31323763393136653865386631333862616134636236373139656534633162316135356538333932 66 | 39393061616634356232633333653435386562343166383530353538366461346337666536363462 67 | 33323838356633373431373732393033616163326331636136656638363165373165356635366164 68 | 62306631366639613863633561623837383338333862343236373932366134373531326635643432 69 | 35363566346139393430333431643133376332393938383536383065353730663962363430656338 70 | 30653731623636616639643536343739383633643031376362393834363864343662316561373265 71 | 66333232633166363161366532393832616465646236316639343465366461333330343630396463 72 | 30633935353934336634393534656463633839613438643665656337316636636364663737643666 73 | 63373438326364366139383737333433323730616633663464383339313638653835376366393161 74 | 34346239653339363866653063663538376566316661316563313434343237323331656435616334 75 | 63613730346636386635636634643236663533613361333235333165396632393738643836363135 76 | 61623435313835643135373565386138303034316562336130623564356637633638646439346435 77 | 36376237643865366631356539373365653265393538623433306231343937313837353162646639 78 | 34326230303635303734343732323766306338343930646535373464623538643136303363363362 79 | 63333834623337333530636162393338343931313636393731346430333939643739363462373034 80 | 
62613765613330343737353439643164303033666165346439613839643530643461336261333464 81 | 64646261333762643863393863386662346138303661633137343932656131376430363935663137 82 | 35366363656535356139656639316266616631633534366437306165383031323638623632343835 83 | 61363064626564336334656464333235373064636361313934376362666463363133396663386664 84 | 61636262653163336233623035643966656234623335643532393535636466306439343136613031 85 | 38313165333334363164383065643732396262326431386431623135613165323636663964633035 86 | 62343931376537376430663365363934353363653836373137313637653035336134383065656262 87 | 31396365396134666561376630663037373231383063633937656238616262633136326337643566 88 | 64326439343633333961643236386635623339343435663665373364636637363039626535343533 89 | 36623235363363313836323739306662363238316136613232393161336432313137326337303564 90 | 37643561303339356264643132643539306464333938323264363632393735623539356563373833 91 | 31646130663137386233353235306534363561663665363961636636376535666236386165396633 92 | 32326434653861363134303739653835353861626335666130353762636262653734383437363830 93 | 35326535653765653761333239313964616239663437323862653631613535333462663462326631 94 | 31366666386462326537363463653264343532396239306431613162336436666564346563643162 95 | 36376138653535326663663832363365656364373766383235313563346631323564376436626238 96 | 32333835383036623661643164653465356164393637633838653334663230653135326434346633 97 | 33356231346334333936626633626332343531396630386565303934333536343739623162653265 98 | 33373763383864616264633363373733323334316432613833343835343638383732633935306137 99 | 34653430323565396161656231326564643632323931313066363034366662333138386237343236 100 | 62376263363531366538656433346464323762373934306335633863353037643234343939613963 101 | 34376162353465373832616230303063663231353931363531663033616331366361626138653730 102 | 36363031373861343666313662336333316439393833666637396131633764613565646433373737 103 | 35616235343335373762343566333266353063376366373939366666333737366531636561386663 104 | 62303066303466646630343331303564623731363066333533373936656262623737333464363462 105 | 30623566643763363663333833313866353831343337356136326562633131663061373230306566 106 | 33393036613532373538643536633731336364353239356664323562633035306635323235356262 107 | 39323361323365653535326462623833373339356231623535396235363032663833306265336461 108 | 65623562346261646435363164356438633766333238363333306166346531663261653165633338 109 | 33356231653631643964646131653735336364346364343234663630663166653062373663613732 110 | 62356332656537663638313235323162346639356463306461313531623435656164373030393739 111 | 30616331623635356438343131386163616435663439656534343638616330353462623038613633 112 | 38316165633736636536383930336630346634323535306434333134343636633339663862666534 113 | 65363561616362313835393865366530323336633634376261313762333931353433353634633261 114 | 32386162303834653635373366633436393237653331343339626234363838346538316661343761 115 | 30346432393038346633393232633765303639306333623837346163656533383230386264383632 116 | 61363761313934613439646432303632643034623130356231333031333834366339343366383663 117 | 35386631353066386236386630343839373136303530386535623762653439373133313439663063 118 | 39646635636638646462353764633139643331366264323333633934636461346334343034316438 119 | 65626330323637623439646361313965306334333937623136613762333631336539613731613839 120 | 38653733396234613363303433636135633633626366663165393733313430623266323931646330 121 | 
30643735633134366231656230366237306631366663646364633138653065323532663863373163 122 | 34333337623536393466636231313235386534303734636136323837656633336631666430353135 123 | 64383366623732616332333139366536656139346365363164373030353634303661363637643763 124 | 30656639333139636630323132643662616336323634356130353637333761383830346361396332 125 | 32346565386365346364646330326162353864643333386534353538653438373336306464363661 126 | 64303165643362663132366537633336623238613233356462653135333938613232373935336439 127 | 37306163333836363134383239626532303262336666643566666264333434336564656231323939 128 | 34343262616635353065626531313736343261313534353436383338373435313135333863353461 129 | 38303430643862363831643365373637393336333362323830383362656537613663633062313365 130 | 39333731623234613139363462323533386134636234306463356361313061336538366538303365 131 | 37666664353363306464313165656635316239353035373165653764613262666161616162373666 132 | 66626330396432613962366163393237396635343733656161313437303462383539646635303263 133 | 62613335633638336161633766666433396162396563393362366137656332656262383065343666 134 | 36663436626438376339363638376263656364313438316164353633333231383865303532333362 135 | 35363336613466623138333264636161343033653463633664356436306662306565383937306338 136 | 36643763356636386564326561306533356238663263646563346662363836653838316236363961 137 | 37323932356465343861363730336531366637633530323263326365366161613237333062303536 138 | 32393832386166623631336438333766363334396335313037393933376430656432346533646164 139 | 38633237636632333833656437616237633439363131643932326666303337386638323462396365 140 | 37663862363364316339313639653466626362323663656234356466343461306431343135386162 141 | 37383661326566633865666430613736643836383037373235323334656534393663383332346163 142 | 61663766616538663834346231396638316133383766316333303261383336316463383933396264 143 | 64393334623632336663646466663733346165373765386132346138663439636166306539363561 144 | 64653635326130616664626238343861666163363133616136356132396365343662376139336166 145 | 39356534626235636134663336303866313937346463386139303035373464303462383133353066 146 | 34363332343636306630646262353962333237643238346339343262393334613837633639363861 147 | 38663637343934646532613466306537626336633961316333356665356665633639376234376534 148 | 61373138356533663730373730636137373662616237313731656462396363303639306663653963 149 | 38373762386634646664643762636462626263386665313237636335323761383965346562323630 150 | 34363966373132306436366531336566333230643634316636656335636538306465343566393164 151 | 66336138373861373864323465393561613139626264646134313961323436316634316337616631 152 | 38363161313665303566366533343736663265363534373065393932306438643466333264326639 153 | 64353834633962396534313632323062343939653731613431306265303039613537376535373431 154 | 30633935643937313662376635613163386164663866393063656236363965303763333538636538 155 | 32393465366337353337333533383938373436636635613336333163303937616632623365376366 156 | 37356666316133643934313166303335646637323835303165333531663932646265643162653030 157 | 31326262343031346564326435313334393338643763393534633432656665396233316137373166 158 | 34613830393963613563646164386565313236313731303232396131646165663239333065623830 159 | 61303130626532363439326231636633653866363966656430306462636634346638633538346462 160 | 31386332653333346332393064613437613833383139663463363466616161303466333361323263 161 | 
39313365613364323635643363363561383938303530613264626539613838666131326463623831 162 | 30303838633662393630303563396335313830306336363135363466613165616337326135303535 163 | 31613333613866373332303239396230613838623266396338393932336661666336633263356433 164 | 63303463326265363631623831626564376537653163666431396161626266346231343163303734 165 | 65323537346433663933386264383635353566623835323264643365386666303234306532653130 166 | 66323063333238313165663339386137316663623566313136396264393530303035313265363262 167 | 30623564663662663737326231356431623064393362393561663832646138613130613439323135 168 | 31383639393036306534313335396666316531393530336263306336613939326233393332636361 169 | 31323032366666633430343362313038356637316439646434313232396637353734326361653139 170 | 64663134643130613466393439623937666630333332333932633266353732633036336131323531 171 | 62396531343535626138643932616539396238386662636632653935623661383763356166353234 172 | 39373165653433613233353230373130616130303661613133333563393736326238613630616532 173 | 65663465333563373737316231306338303961303835613037316663386639316263646236393964 174 | 61353331366531653330376137346564363534626238336262376363383461386464613366643965 175 | 3634366333326433316539383438383830323039306533626262 176 | --------------------------------------------------------------------------------